On Fri, Jan 12, 2018 at 10:40:16AM +0000, Chris Wilson wrote:
> Quoting Thierry Reding (2018-01-11 22:22:46)
> > From: Thierry Reding <treding@nvidia.com>
> >
> > This set of patches adds support for fences to Tegra DRM and complements the fence FD support for Nouveau. Technically this isn't necessary for a fence-based synchronization loop with Nouveau because the KMS core takes care of all that, but engines behind host1x can use the IOCTL extensions provided here to emit fence FDs that in turn can be used to synchronize their jobs with either the scanout engine or the GPU.
>
> Whilst hooking up fences, I advise you to also hook up drm_syncobj. Internally they each resolve to another fence, so the mechanics are identical, you just need another array in the uABI for in/out syncobj. The advantage of drm_syncobj is that userspace can track internal fences using inexhaustible handles, reserving the precious fd for IPC or KMS.
I'm not sure that I properly understand how to use these. It looks as if they are better fence FDs: for internal work you would go with a drm_syncobj, and when you need access to the fence from a different process or driver you would use an FD.
Doesn't this mean we can cover this by just adding a flag that marks the fence as being either a handle or an FD? Do we have situations where we want an FD *and* a handle returned as a result of the job submission?
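To make sure I understand the handle vs. FD split correctly, here's a minimal userspace sketch using the libdrm syncobj helpers (the device path is only an example and error handling is trimmed):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <xf86drm.h>

int main(void)
{
        int fd = open("/dev/dri/card0", O_RDWR); /* example device node */
        uint32_t handle;
        int sync_fd;

        if (fd < 0)
                return 1;

        /* cheap, process-local handle for tracking internal fences */
        if (drmSyncobjCreate(fd, 0, &handle))
                return 1;

        /* export to an FD only when it has to cross a process or
         * driver boundary (IPC, KMS, ...) */
        if (drmSyncobjHandleToFD(fd, handle, &sync_fd))
                return 1;

        printf("syncobj handle %u exported as fd %d\n", handle, sync_fd);

        drmSyncobjDestroy(fd, handle);
        return 0;
}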
For the above it would suffice to add two additional flags:
#define DRM_TEGRA_SUBMIT_WAIT_SYNCOBJ (1 << 2)
#define DRM_TEGRA_SUBMIT_EMIT_SYNCOBJ (1 << 3)
which would even allow both to be combined:
DRM_TEGRA_SUBMIT_WAIT_SYNCOBJ | DRM_TEGRA_SUBMIT_EMIT_FENCE_FD
would allow the job to wait for an internal syncobj (passed as a handle in the fence member) and return a fence (as an FD in the fence member) that can be passed on to another process or driver as a prefence.
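A rough sketch of what such a submission could look like, assuming the flag bits from this thread and assuming struct drm_tegra_submit grows a flags field as part of this series (e.g. by claiming one of the reserved words), so this won't build against the current UAPI header and field names may differ from the actual patches:

#include <string.h>
#include <xf86drm.h>
#include <tegra_drm.h>

/* sketch only: DRM_TEGRA_SUBMIT_WAIT_SYNCOBJ / _EMIT_FENCE_FD and the
 * flags field are the proposed extensions, not part of the current UAPI */
static int submit_job(int fd, __u64 context, __u32 syncobj_handle)
{
        struct drm_tegra_submit args;

        memset(&args, 0, sizeof(args));
        args.context = context;
        /* cmdbufs, relocs, syncpts, timeout, ... omitted for brevity */

        /* fence is interpreted as a syncobj handle on input ... */
        args.flags = DRM_TEGRA_SUBMIT_WAIT_SYNCOBJ |
                     DRM_TEGRA_SUBMIT_EMIT_FENCE_FD;
        args.fence = syncobj_handle;

        if (drmIoctl(fd, DRM_IOCTL_TEGRA_SUBMIT, &args))
                return -1;

        /* ... and carries a fence FD on output that can be passed to
         * another process or driver as a prefence */
        return args.fence;
}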
Thierry