On Wed, May 26, 2021 at 02:08:19PM +0100, Daniel Stone wrote:
> Hey,
> On Wed, 26 May 2021 at 13:35, Daniel Vetter <daniel@ffwll.ch> wrote:
> > On Wed, May 26, 2021 at 1:09 PM Daniel Stone <daniel@fooishbar.org> wrote:
> > > Yeah, I don't think there's any difference between shared and exclusive wrt safety. The difference lies in, well, exclusive putting a hard serialisation barrier between everything which comes before and everything that comes after, and shared being more relaxed to allow for reads to retire in parallel.
> > > As said below, I think there's a good argument for the latter once you get out of the very straightforward uses. One of the arguments for these ioctls is to eliminate oversync, but then the import ioctl mandates oversync in the case where the consumer only does non-destructive reads - which is the case for the vast majority of users!
> > Just wanted to comment on this: Right now we always attach a shared end-of-batch fence to every dma_resv. So reads are automatically and always synced. So in that sense having an explicit ioctl to set the read fence is not really useful, since at most you just make everything worse.
> Are you saying that if a compositor imports a client-provided dmabuf as an EGLImage to use as a source texture for its rendering, and then provides it to VA-API or V4L2 to use as a media encode source (both purely read-only ops), that these will both serialise against each other? Like, my media decode job won't begin execution until the composition read has fully retired?
> If so, a) good lord that hurts, and b) what are shared fences actually ... for?
Shared is shared, I just meant to say that we always add the shared fence. So an explicit ioctl to add more shared fences is kinda pointless.

So yeah on a good driver this will run in parallel. On a not-so-good driver (which currently includes amdgpu and panfrost) this will serialize, because those drivers don't have the concept of a non-exclusive fence for such shared buffers (amdgpu does not sync internally, but will sync as soon as the buffer crosses a drm_file).
-Daniel