Modern userspace APIs like Vulkan are built on an explicit synchronization model. This doesn't always play nicely with the implicit synchronization used in the kernel and assumed by X11 and Wayland. The client -> compositor half of the synchronization isn't too bad, at least on Intel, because we can control whether or not i915 synchronizes on the buffer and whether or not it's considered written.

The harder part is the compositor -> client synchronization when we get the buffer back from the compositor. We're required to be able to provide the client with a VkSemaphore and VkFence representing the point in time where the window system (compositor and/or display) finished using the buffer. With current APIs, it's very hard to do this in such a way that we don't get confused by the Vulkan driver's access of the buffer. In particular, once we tell the kernel that we're rendering to the buffer again, any CPU waits on the buffer or GPU dependencies will wait on some of the client rendering and not just the compositor.

This new IOCTL solves this problem by allowing us to get a snapshot of the implicit synchronization state of a given dma-buf in the form of a sync file. It's effectively the same as a poll() or I915_GEM_WAIT only, instead of CPU waiting directly, it encapsulates the wait operation, at the current moment in time, in a sync_file so we can check/wait on it later. As long as the Vulkan driver does the sync_file export from the dma-buf before we re-introduce it for rendering, it will only contain fences from the compositor or display. This allows us to accurately turn it into a VkFence or VkSemaphore without any over-synchronization.
This patch series actually contains two new ioctls. There is the export one mentioned above as well as an RFC for an import ioctl which provides the other half. The intention is to land the export ioctl since it seems like there's no real disagreement on that one. The import ioctl, however, has a lot of debate around it so it's intended to be RFC-only for now.
Mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037
IGT tests: https://patchwork.freedesktop.org/series/90490/
v10 (Jason Ekstrand, Daniel Vetter):
- Add reviews/acks
- Add a patch to rename _rcu to _unlocked
- Split things better so import is clearly RFC status

v11 (Daniel Vetter):
- Add more CCs to try and get maintainers
- Add a patch to document DMA_BUF_IOCTL_SYNC
- Generally better docs
- Use separate structs for import/export (easier to document)
- Fix an issue in the import patch

v12 (Daniel Vetter):
- Better docs for DMA_BUF_IOCTL_SYNC

v12 (Christian König):
- Drop the rename patch in favor of Christian's series
- Add a comment to the commit message for the dma-buf sync_file export ioctl saying why we made it an ioctl on dma-buf

v13 (Jason Ekstrand):
- Rebase on Christian König's fence rework
Cc: Christian König <christian.koenig@amd.com>
Cc: Michel Dänzer <michel@daenzer.net>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Cc: Daniel Stone <daniels@collabora.com>
Cc: mesa-dev@lists.freedesktop.org
Cc: wayland-devel@lists.freedesktop.org
Jason Ekstrand (2):
  dma-buf: Add an API for exporting sync files (v13)
  dma-buf: Add an API for importing sync files (v8)
 drivers/dma-buf/dma-buf.c    | 100 +++++++++++++++++++++++++++++++++++
 include/uapi/linux/dma-buf.h |  57 ++++++++++++++++++++
 2 files changed, 157 insertions(+)
Modern userspace APIs like Vulkan are built on an explicit synchronization model. This doesn't always play nicely with the implicit synchronization used in the kernel and assumed by X11 and Wayland. The client -> compositor half of the synchronization isn't too bad, at least on Intel, because we can control whether or not i915 synchronizes on the buffer and whether or not it's considered written.

The harder part is the compositor -> client synchronization when we get the buffer back from the compositor. We're required to be able to provide the client with a VkSemaphore and VkFence representing the point in time where the window system (compositor and/or display) finished using the buffer. With current APIs, it's very hard to do this in such a way that we don't get confused by the Vulkan driver's access of the buffer. In particular, once we tell the kernel that we're rendering to the buffer again, any CPU waits on the buffer or GPU dependencies will wait on some of the client rendering and not just the compositor.

This new IOCTL solves this problem by allowing us to get a snapshot of the implicit synchronization state of a given dma-buf in the form of a sync file. It's effectively the same as a poll() or I915_GEM_WAIT only, instead of CPU waiting directly, it encapsulates the wait operation, at the current moment in time, in a sync_file so we can check/wait on it later. As long as the Vulkan driver does the sync_file export from the dma-buf before we re-introduce it for rendering, it will only contain fences from the compositor or display. This allows us to accurately turn it into a VkFence or VkSemaphore without any over-synchronization.
By making this an ioctl on the dma-buf itself, it allows this new functionality to be used in an entirely driver-agnostic way without having access to a DRM fd. This makes it ideal for use in driver-generic code in Mesa or in a client such as a compositor where the DRM fd may be hard to reach.
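For reference, a minimal userspace-side sketch of calling the new ioctl could look like the following. The struct and defines mirror the uapi additions in this patch and are redeclared locally so the sketch builds even against headers that predate it; `export_sync_file()` is an illustrative helper name, not part of any API.

```c
#include <assert.h>
#include <errno.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/types.h>

/*
 * Mirrors the uapi additions from this patch (include/uapi/linux/dma-buf.h);
 * redeclared here so this sketch is self-contained.
 */
struct dma_buf_export_sync_file {
        __u32 flags;
        __s32 fd;
};

#define DMA_BUF_SYNC_READ  (1 << 0)
#define DMA_BUF_SYNC_WRITE (2 << 0)
#define DMA_BUF_IOCTL_EXPORT_SYNC_FILE \
        _IOWR('b', 2, struct dma_buf_export_sync_file)

/*
 * Snapshot the current fences on dmabuf_fd as a sync_file.  Returns the
 * sync_file fd on success or a negative errno on failure.  The returned
 * fd can be poll()ed later, or handed to whatever driver API consumes
 * sync files, without picking up fences added to the dma-buf afterwards.
 */
static int export_sync_file(int dmabuf_fd, __u32 flags)
{
        struct dma_buf_export_sync_file arg = { .flags = flags, .fd = -1 };

        if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &arg) < 0)
                return -errno;
        return arg.fd;
}
```

Note that this needs nothing but the dma-buf fd itself, which is the point: no DRM fd, no driver-specific entry point.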
v2 (Jason Ekstrand):
- Use a wrapper dma_fence_array of all fences including the new one when importing an exclusive fence.

v3 (Jason Ekstrand):
- Lock around setting shared fences as well as exclusive
- Mark SIGNAL_SYNC_FILE as a read-write ioctl.
- Initialize ret to 0 in dma_buf_wait_sync_file

v4 (Jason Ekstrand):
- Use the new dma_resv_get_singleton helper

v5 (Jason Ekstrand):
- Rename the IOCTLs to import/export rather than wait/signal
- Drop the WRITE flag and always get/set the exclusive fence

v6 (Jason Ekstrand):
- Drop the sync_file import as it was all-around sketchy and not nearly as useful as export.
- Re-introduce READ/WRITE flag support for export
- Rework the commit message

v7 (Jason Ekstrand):
- Require at least one sync flag
- Fix a refcounting bug: dma_resv_get_excl() doesn't take a reference
- Use _rcu helpers since we're accessing the dma_resv read-only

v8 (Jason Ekstrand):
- Return -ENOMEM if sync_file_create() fails
- Predicate support on IS_ENABLED(CONFIG_SYNC_FILE)

v9 (Jason Ekstrand):
- Add documentation for the new ioctl

v10 (Jason Ekstrand):
- Go back to dma_buf_sync_file as the ioctl struct name

v11 (Daniel Vetter):
- Go back to dma_buf_export_sync_file as the ioctl struct name
- Better kerneldoc describing what the read/write flags do

v12 (Christian König):
- Document why we chose to make it an ioctl on dma-buf

v12 (Jason Ekstrand):
- Rebase on Christian König's fence rework
Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Acked-by: Simon Ser <contact@emersion.fr>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/dma-buf/dma-buf.c    | 64 ++++++++++++++++++++++++++++++++++++
 include/uapi/linux/dma-buf.h | 35 ++++++++++++++++++++
 2 files changed, 99 insertions(+)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 79795857be3e..529e0611e53b 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -20,6 +20,7 @@
 #include <linux/debugfs.h>
 #include <linux/module.h>
 #include <linux/seq_file.h>
+#include <linux/sync_file.h>
 #include <linux/poll.h>
 #include <linux/dma-resv.h>
 #include <linux/mm.h>
@@ -192,6 +193,9 @@ static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
  * Note that this only signals the completion of the respective fences, i.e. the
  * DMA transfers are complete. Cache flushing and any other necessary
  * preparations before CPU access can begin still need to happen.
+ *
+ * As an alternative to poll(), the set of fences on DMA buffer can be
+ * exported as a &sync_file using &dma_buf_sync_file_export.
  */
 static void dma_buf_poll_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
@@ -326,6 +330,61 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf)
         return 0;
 }
 
+#if IS_ENABLED(CONFIG_SYNC_FILE)
+static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
+                                     void __user *user_data)
+{
+        struct dma_buf_export_sync_file arg;
+        enum dma_resv_usage usage;
+        struct dma_fence *fence = NULL;
+        struct sync_file *sync_file;
+        int fd, ret;
+
+        if (copy_from_user(&arg, user_data, sizeof(arg)))
+                return -EFAULT;
+
+        if (arg.flags & ~DMA_BUF_SYNC_RW)
+                return -EINVAL;
+
+        if ((arg.flags & DMA_BUF_SYNC_RW) == 0)
+                return -EINVAL;
+
+        fd = get_unused_fd_flags(O_CLOEXEC);
+        if (fd < 0)
+                return fd;
+
+        usage = (arg.flags & DMA_BUF_SYNC_WRITE) ? DMA_RESV_USAGE_WRITE :
+                                                   DMA_RESV_USAGE_READ;
+        ret = dma_resv_get_singleton(dmabuf->resv, usage, &fence);
+        if (ret)
+                goto err_put_fd;
+
+        if (!fence)
+                fence = dma_fence_get_stub();
+
+        sync_file = sync_file_create(fence);
+
+        dma_fence_put(fence);
+
+        if (!sync_file) {
+                ret = -ENOMEM;
+                goto err_put_fd;
+        }
+
+        fd_install(fd, sync_file->file);
+
+        arg.fd = fd;
+        if (copy_to_user(user_data, &arg, sizeof(arg)))
+                return -EFAULT;
+
+        return 0;
+
+err_put_fd:
+        put_unused_fd(fd);
+        return ret;
+}
+#endif
+
 static long dma_buf_ioctl(struct file *file,
                           unsigned int cmd, unsigned long arg)
 {
@@ -369,6 +428,11 @@ static long dma_buf_ioctl(struct file *file,
         case DMA_BUF_SET_NAME_B:
                 return dma_buf_set_name(dmabuf, (const char __user *)arg);
 
+#if IS_ENABLED(CONFIG_SYNC_FILE)
+        case DMA_BUF_IOCTL_EXPORT_SYNC_FILE:
+                return dma_buf_export_sync_file(dmabuf, (void __user *)arg);
+#endif
+
         default:
                 return -ENOTTY;
         }
diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
index 8e4a2ca0bcbf..46f1e3e98b02 100644
--- a/include/uapi/linux/dma-buf.h
+++ b/include/uapi/linux/dma-buf.h
@@ -85,6 +85,40 @@ struct dma_buf_sync {
 
 #define DMA_BUF_NAME_LEN 32
 
+/**
+ * struct dma_buf_export_sync_file - Get a sync_file from a dma-buf
+ *
+ * Userspace can perform a DMA_BUF_IOCTL_EXPORT_SYNC_FILE to retrieve the
+ * current set of fences on a dma-buf file descriptor as a sync_file.  CPU
+ * waits via poll() or other driver-specific mechanisms typically wait on
+ * whatever fences are on the dma-buf at the time the wait begins.  This
+ * is similar except that it takes a snapshot of the current fences on the
+ * dma-buf for waiting later instead of waiting immediately.  This is
+ * useful for modern graphics APIs such as Vulkan which assume an explicit
+ * synchronization model but still need to inter-operate with dma-buf.
+ */
+struct dma_buf_export_sync_file {
+        /**
+         * @flags: Read/write flags
+         *
+         * Must be DMA_BUF_SYNC_READ, DMA_BUF_SYNC_WRITE, or both.
+         *
+         * If DMA_BUF_SYNC_READ is set and DMA_BUF_SYNC_WRITE is not set,
+         * the returned sync file waits on any writers of the dma-buf to
+         * complete.  Waiting on the returned sync file is equivalent to
+         * poll() with POLLIN.
+         *
+         * If DMA_BUF_SYNC_WRITE is set, the returned sync file waits on
+         * any users of the dma-buf (read or write) to complete.  Waiting
+         * on the returned sync file is equivalent to poll() with POLLOUT.
+         * If both DMA_BUF_SYNC_WRITE and DMA_BUF_SYNC_READ are set, this
+         * is equivalent to just DMA_BUF_SYNC_WRITE.
+         */
+        __u32 flags;
+        /** @fd: Returned sync file descriptor */
+        __s32 fd;
+};
+
 #define DMA_BUF_BASE            'b'
 #define DMA_BUF_IOCTL_SYNC      _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
 
@@ -94,5 +128,6 @@ struct dma_buf_sync {
 #define DMA_BUF_SET_NAME        _IOW(DMA_BUF_BASE, 1, const char *)
 #define DMA_BUF_SET_NAME_A      _IOW(DMA_BUF_BASE, 1, u32)
 #define DMA_BUF_SET_NAME_B      _IOW(DMA_BUF_BASE, 1, u64)
+#define DMA_BUF_IOCTL_EXPORT_SYNC_FILE  _IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file)
 
 #endif
On Wed, May 04, 2022 at 03:34:03PM -0500, Jason Ekstrand wrote:
> [commit message and changelog snipped]
Not sure which version it was that I reviewed, but with dma_resv_usage this all looks neat and tidy. One nit below.
> +#if IS_ENABLED(CONFIG_SYNC_FILE)
> +static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
> +                                     void __user *user_data)
> +{
> +        struct dma_buf_export_sync_file arg;
> +        enum dma_resv_usage usage;
> +        struct dma_fence *fence = NULL;
> +        struct sync_file *sync_file;
> +        int fd, ret;
> +
> +        if (copy_from_user(&arg, user_data, sizeof(arg)))
> +                return -EFAULT;
> +
> +        if (arg.flags & ~DMA_BUF_SYNC_RW)
> +                return -EINVAL;
> +
> +        if ((arg.flags & DMA_BUF_SYNC_RW) == 0)
> +                return -EINVAL;
We allow userspace to set both SYNC_READ and SYNC_WRITE here, I think

    if ((arg.flags & DMA_BUF_SYNC_RW) == DMA_BUF_SYNC_RW)
            return -EINVAL;

is missing?

Also maybe a case to add to your igt.
> [rest of the patch snipped]
With the one nit fixed for this version:
Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch
On Wed, May 4, 2022 at 5:49 PM Daniel Vetter daniel@ffwll.ch wrote:
On Wed, May 04, 2022 at 03:34:03PM -0500, Jason Ekstrand wrote:
> > [commit message and patch snipped]
> > +        if (arg.flags & ~DMA_BUF_SYNC_RW)
> > +                return -EINVAL;
> > +
> > +        if ((arg.flags & DMA_BUF_SYNC_RW) == 0)
> > +                return -EINVAL;
>
> We allow userspace to set both SYNC_READ and SYNC_WRITE here, I think
>
>     if ((arg.flags & DMA_BUF_SYNC_RW) == DMA_BUF_SYNC_RW)
>             return -EINVAL;
>
> is missing?
We could, but I don't really get why we should disallow that. SYNC_READ | SYNC_WRITE is the same as SYNC_WRITE and that seems like perfectly sane behavior to me.
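To restate that in code: the kernel collapses the flags to a single usage level with WRITE taking precedence, so READ | WRITE selects exactly the same fences as WRITE alone. A minimal sketch of the mapping (the enum and names here are illustrative stand-ins, not the kernel's actual symbols):

```c
#include <assert.h>

/* Illustrative stand-ins for the kernel's enum dma_resv_usage levels. */
enum resv_usage { RESV_USAGE_READ, RESV_USAGE_WRITE };

#define SYNC_READ  (1u << 0)
#define SYNC_WRITE (2u << 0)

/*
 * Mirrors the ternary in dma_buf_export_sync_file(): any WRITE bit
 * selects the wider wait (all users), otherwise we wait on writers only.
 */
static enum resv_usage flags_to_usage(unsigned int flags)
{
        return (flags & SYNC_WRITE) ? RESV_USAGE_WRITE : RESV_USAGE_READ;
}
```

So rejecting READ | WRITE would forbid a combination that already has well-defined, sane semantics.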
--Jason
On Thu, May 05, 2022 at 03:05:44AM -0500, Jason Ekstrand wrote:
On Wed, May 4, 2022 at 5:49 PM Daniel Vetter daniel@ffwll.ch wrote:
On Wed, May 04, 2022 at 03:34:03PM -0500, Jason Ekstrand wrote:
Modern userspace APIs like Vulkan are built on an explicit synchronization model. This doesn't always play nicely with the implicit synchronization used in the kernel and assumed by X11 and Wayland. The client -> compositor half of the synchronization isn't too bad, at least on intel, because we can control whether or not i915 synchronizes on the buffer and whether or not it's considered written.
The harder part is the compositor -> client synchronization when we get the buffer back from the compositor. We're required to be able to provide the client with a VkSemaphore and VkFence representing the point in time where the window system (compositor and/or display) finished using the buffer. With current APIs, it's very hard to do this in such a way that we don't get confused by the Vulkan driver's access of the buffer. In particular, once we tell the kernel that we're rendering to the buffer again, any CPU waits on the buffer or GPU dependencies will wait on some of the client rendering and not just the compositor.
This new IOCTL solves this problem by allowing us to get a snapshot of the implicit synchronization state of a given dma-buf in the form of a sync file. It's effectively the same as a poll() or I915_GEM_WAIT only, instead of CPU waiting directly, it encapsulates the wait operation, at the current moment in time, in a sync_file so we can check/wait on it later. As long as the Vulkan driver does the sync_file export from the dma-buf before we re-introduce it for rendering, it will only contain fences from the compositor or display. This allows to accurately turn it into a VkFence or VkSemaphore without any over-synchronization.
By making this an ioctl on the dma-buf itself, it allows this new functionality to be used in an entirely driver-agnostic way without having access to a DRM fd. This makes it ideal for use in driver-generic code in Mesa or in a client such as a compositor where the DRM fd may be hard to reach.
v2 (Jason Ekstrand):
- Use a wrapper dma_fence_array of all fences including the new one when importing an exclusive fence.
v3 (Jason Ekstrand):
- Lock around setting shared fences as well as exclusive
- Mark SIGNAL_SYNC_FILE as a read-write ioctl.
- Initialize ret to 0 in dma_buf_wait_sync_file
v4 (Jason Ekstrand):
- Use the new dma_resv_get_singleton helper
v5 (Jason Ekstrand):
- Rename the IOCTLs to import/export rather than wait/signal
- Drop the WRITE flag and always get/set the exclusive fence
v6 (Jason Ekstrand):
- Drop the sync_file import as it was all-around sketchy and not nearly as useful as export.
- Re-introduce READ/WRITE flag support for export
- Rework the commit message
v7 (Jason Ekstrand):
- Require at least one sync flag
- Fix a refcounting bug: dma_resv_get_excl() doesn't take a reference
- Use _rcu helpers since we're accessing the dma_resv read-only
v8 (Jason Ekstrand):
- Return -ENOMEM if the sync_file_create fails
- Predicate support on IS_ENABLED(CONFIG_SYNC_FILE)
v9 (Jason Ekstrand):
- Add documentation for the new ioctl
v10 (Jason Ekstrand):
- Go back to dma_buf_sync_file as the ioctl struct name
v11 (Daniel Vetter):
- Go back to dma_buf_export_sync_file as the ioctl struct name
- Better kerneldoc describing what the read/write flags do
v12 (Christian König):
- Document why we chose to make it an ioctl on dma-buf
v12 (Jason Ekstrand):
- Rebase on Christian König's fence rework
Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Acked-by: Simon Ser <contact@emersion.fr>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Not sure which version it was that I reviewed, but with dma_resv_usage this all looks neat and tidy. One nit below.
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
 drivers/dma-buf/dma-buf.c    | 64 ++++++++++++++++++++++++++++++++++++
 include/uapi/linux/dma-buf.h | 35 ++++++++++++++++++++
 2 files changed, 99 insertions(+)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 79795857be3e..529e0611e53b 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -20,6 +20,7 @@
 #include <linux/debugfs.h>
 #include <linux/module.h>
 #include <linux/seq_file.h>
+#include <linux/sync_file.h>
 #include <linux/poll.h>
 #include <linux/dma-resv.h>
 #include <linux/mm.h>
@@ -192,6 +193,9 @@ static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
  * Note that this only signals the completion of the respective fences, i.e. the
  * DMA transfers are complete. Cache flushing and any other necessary
  * preparations before CPU access can begin still need to happen.
+ *
+ * As an alternative to poll(), the set of fences on DMA buffer can be
+ * exported as a &sync_file using &dma_buf_sync_file_export.
  */

 static void dma_buf_poll_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
@@ -326,6 +330,61 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf)
 	return 0;
 }
+#if IS_ENABLED(CONFIG_SYNC_FILE)
+static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
+				     void __user *user_data)
+{
+	struct dma_buf_export_sync_file arg;
+	enum dma_resv_usage usage;
+	struct dma_fence *fence = NULL;
+	struct sync_file *sync_file;
+	int fd, ret;
+
+	if (copy_from_user(&arg, user_data, sizeof(arg)))
+		return -EFAULT;
+
+	if (arg.flags & ~DMA_BUF_SYNC_RW)
+		return -EINVAL;
+
+	if ((arg.flags & DMA_BUF_SYNC_RW) == 0)
+		return -EINVAL;
We allow userspace to set both SYNC_READ and SYNC_WRITE here. I think

	if ((arg.flags & DMA_BUF_SYNC_RW) == DMA_BUF_SYNC_RW)
		return -EINVAL;

is missing? Also maybe a case to add to your igt.

> We could, but I don't really get why we should disallow that. SYNC_READ |
> SYNC_WRITE is the same as SYNC_WRITE, and that seems like perfectly sane
> behavior to me.
>
> --Jason

Yeah, but it's resulting in some really confusing semantics:

- SYNC_WRITE gives you the write fences
- SYNC_READ gives you the read fences _and_ the write fences
- SYNC_WRITE | SYNC_READ gives you only the write fences

Someone will get this wrong. Also, pondering some more: we reuse the sync flags from the CPU flush helpers, and there you need to set them for the access you're about to do. That's also how all the drivers use them, which means the more natural meaning of these flags would be:

- SYNC_WRITE | SYNC_READ (or just SYNC_WRITE) gives you both read and write fences, since those are the fences you need to wait on before you start writing
- SYNC_READ only gives you the read fences

This is also what Christian implemented in the dma_resv_usage_rw() helper for implicit sync.

-Daniel
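To pin down the semantics being discussed, here is a small userspace model of the reservation-object usage levels and of the dma_resv_usage_rw() mapping. This is a sketch reconstructed from the discussion above; include/linux/dma-resv.h is the authoritative source.

```c
#include <stdbool.h>

/* Userspace model of the kernel's enum dma_resv_usage ordering.  Querying
 * the reservation object at a given usage level returns all fences at that
 * level and below, so:
 *  - a reader must wait on prior writers           -> query at USAGE_WRITE
 *  - a writer must wait on prior readers + writers -> query at USAGE_READ */
enum dma_resv_usage {
	DMA_RESV_USAGE_KERNEL,   /* kernel-internal work, e.g. migration */
	DMA_RESV_USAGE_WRITE,    /* implicit-sync writers */
	DMA_RESV_USAGE_READ,     /* implicit-sync readers */
	DMA_RESV_USAGE_BOOKKEEP, /* not part of implicit sync at all */
};

/* Model of dma_resv_usage_rw(): map "the access I'm about to do" to
 * "the usage level I must wait for". */
static enum dma_resv_usage dma_resv_usage_rw(bool write)
{
	/* About to write: wait for everything up to and including reads. */
	return write ? DMA_RESV_USAGE_READ : DMA_RESV_USAGE_WRITE;
}
```

Under this model, the patch's `(flags & SYNC_WRITE) ? USAGE_WRITE : USAGE_READ` mapping is inverted relative to the flag semantics the kerneldoc promises, which is exactly the bug the helper avoids.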
+	fd = get_unused_fd_flags(O_CLOEXEC);
+	if (fd < 0)
+		return fd;
+
+	usage = (arg.flags & DMA_BUF_SYNC_WRITE) ? DMA_RESV_USAGE_WRITE :
+						   DMA_RESV_USAGE_READ;
+	ret = dma_resv_get_singleton(dmabuf->resv, usage, &fence);
+	if (ret)
+		goto err_put_fd;
+
+	if (!fence)
+		fence = dma_fence_get_stub();
+
+	sync_file = sync_file_create(fence);
+	dma_fence_put(fence);
+	if (!sync_file) {
+		ret = -ENOMEM;
+		goto err_put_fd;
+	}
+
+	fd_install(fd, sync_file->file);
+
+	arg.fd = fd;
+	if (copy_to_user(user_data, &arg, sizeof(arg)))
+		return -EFAULT;
+
+	return 0;
+
+err_put_fd:
+	put_unused_fd(fd);
+	return ret;
+}
+#endif
+
 static long dma_buf_ioctl(struct file *file,
 			  unsigned int cmd, unsigned long arg)
 {
@@ -369,6 +428,11 @@ static long dma_buf_ioctl(struct file *file,
 	case DMA_BUF_SET_NAME_B:
 		return dma_buf_set_name(dmabuf, (const char __user *)arg);

+#if IS_ENABLED(CONFIG_SYNC_FILE)
+	case DMA_BUF_IOCTL_EXPORT_SYNC_FILE:
+		return dma_buf_export_sync_file(dmabuf, (void __user *)arg);
+#endif
+
 	default:
 		return -ENOTTY;
 	}
diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
index 8e4a2ca0bcbf..46f1e3e98b02 100644
--- a/include/uapi/linux/dma-buf.h
+++ b/include/uapi/linux/dma-buf.h
@@ -85,6 +85,40 @@ struct dma_buf_sync {

 #define DMA_BUF_NAME_LEN	32

+/**
+ * struct dma_buf_export_sync_file - Get a sync_file from a dma-buf
+ *
+ * Userspace can perform a DMA_BUF_IOCTL_EXPORT_SYNC_FILE to retrieve the
+ * current set of fences on a dma-buf file descriptor as a sync_file.  CPU
+ * waits via poll() or other driver-specific mechanisms typically wait on
+ * whatever fences are on the dma-buf at the time the wait begins.  This
+ * is similar except that it takes a snapshot of the current fences on the
+ * dma-buf for waiting later instead of waiting immediately.  This is
+ * useful for modern graphics APIs such as Vulkan which assume an explicit
+ * synchronization model but still need to inter-operate with dma-buf.
+ */
+struct dma_buf_export_sync_file {
+	/**
+	 * @flags: Read/write flags
+	 *
+	 * Must be DMA_BUF_SYNC_READ, DMA_BUF_SYNC_WRITE, or both.
+	 *
+	 * If DMA_BUF_SYNC_READ is set and DMA_BUF_SYNC_WRITE is not set,
+	 * the returned sync file waits on any writers of the dma-buf to
+	 * complete.  Waiting on the returned sync file is equivalent to
+	 * poll() with POLLIN.
+	 *
+	 * If DMA_BUF_SYNC_WRITE is set, the returned sync file waits on
+	 * any users of the dma-buf (read or write) to complete.  Waiting
+	 * on the returned sync file is equivalent to poll() with POLLOUT.
+	 * If both DMA_BUF_SYNC_WRITE and DMA_BUF_SYNC_READ are set, this
+	 * is equivalent to just DMA_BUF_SYNC_WRITE.
+	 */
+	__u32 flags;
+	/** @fd: Returned sync file descriptor */
+	__s32 fd;
+};
+
 #define DMA_BUF_BASE		'b'
 #define DMA_BUF_IOCTL_SYNC	_IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)

@@ -94,5 +128,6 @@ struct dma_buf_sync {
 #define DMA_BUF_SET_NAME	_IOW(DMA_BUF_BASE, 1, const char *)
 #define DMA_BUF_SET_NAME_A	_IOW(DMA_BUF_BASE, 1, u32)
 #define DMA_BUF_SET_NAME_B	_IOW(DMA_BUF_BASE, 1, u64)
+#define DMA_BUF_IOCTL_EXPORT_SYNC_FILE	_IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file)

 #endif
--
2.36.0

With the one nit fixed for this version:

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
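The kerneldoc above equates waiting on the exported sync file with poll(). For completeness, here is a hedged userspace sketch of that wait; this is illustration only, since a Vulkan driver would more likely import the fd as a VkFence or VkSemaphore via VK_KHR_external_fence_fd / VK_KHR_external_semaphore_fd.

```c
#include <poll.h>

/* Wait on a sync_file fd such as the one returned by
 * DMA_BUF_IOCTL_EXPORT_SYNC_FILE.  A sync file becomes readable once all
 * of its fences have signaled, so a plain poll() for POLLIN suffices.
 * Returns 1 if signaled, 0 on timeout, -1 on error. */
static int wait_sync_file(int sync_fd, int timeout_ms)
{
	struct pollfd pfd = { .fd = sync_fd, .events = POLLIN };
	int ret = poll(&pfd, 1, timeout_ms);

	if (ret < 0)
		return -1;
	return ret ? 1 : 0;
}
```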
On Thu, May 5, 2022 at 3:23 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> On Thu, May 05, 2022 at 03:05:44AM -0500, Jason Ekstrand wrote:
> > On Wed, May 4, 2022 at 5:49 PM Daniel Vetter <daniel@ffwll.ch> wrote:
> > > [snip: full patch and review discussion quoted above]
>
> This is also what Christian implemented in the dma_resv_usage_rw() helper
> for implicit sync.
Yup. I've reworked it to use dma_resv_usage_rw() to fix the bug.
--Jason
On 04.05.22 at 22:34, Jason Ekstrand wrote:
> [snip: full patch quoted above]
>
> +	fd_install(fd, sync_file->file);
> +
> +	arg.fd = fd;
> +	if (copy_to_user(user_data, &arg, sizeof(arg)))
> +		return -EFAULT;
I know we had that discussion before, but I'm not 100% sure any more what the outcome was.

The problem here is that when the copy_to_user fails we have a file descriptor which is valid, but userspace doesn't know anything about it.

I only see a few possibilities here:
1. Keep it like this and just assume that a process which you can't copy the fd to is also dying (a bit too much of an assumption for my taste).
2. Close the file descriptor when this happens (not ideal either).
3. Instead of returning the fd in the parameter structure, return it as the IOCTL result.

Number 3 is what drm_prime_handle_to_fd_ioctl() is doing as well, and IIRC we said that this is probably the best option.

Apart from that the patch set looks really clean to me now.

Regards,
Christian.
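Why the ordering matters here can be shown with a tiny userspace model of the kernel flow, with stubs standing in for copy_to_user()/fd_install()/put_unused_fd() (all names and behavior modeled from this discussion, not taken from the patch's final resolution): fd_install() publishes the fd to the process and cannot be undone, while put_unused_fd() is only safe before installation, so one common pattern is "copy first, install last".

```c
#include <stdbool.h>

/* Flags recording what the stubs did, so the flow can be inspected. */
static bool installed, released;
static bool copy_ok; /* simulate copy_to_user() success or failure */

static int copy_to_user_stub(void)  { return copy_ok ? 0 : -1; }
static void fd_install_stub(void)   { installed = true; }
static void put_unused_fd_stub(void){ released = true; }

/* Model of the export path: copy the result to userspace BEFORE publishing
 * the fd.  A failed copy can then still back out with put_unused_fd();
 * after fd_install() there is no safe way to take the fd back. */
static int export_flow(bool simulate_copy_ok)
{
	installed = released = false;
	copy_ok = simulate_copy_ok;

	if (copy_to_user_stub()) {
		put_unused_fd_stub();	/* safe: fd not visible yet */
		return -14;		/* -EFAULT */
	}
	fd_install_stub();		/* point of no return */
	return 0;
}
```

This sketches option-style reasoning only; option 3 above (returning the fd as the ioctl result, as drm_prime_handle_to_fd_ioctl() does) sidesteps the late copy_to_user entirely.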
On Thu, May 5, 2022 at 1:25 AM Christian König <christian.koenig@amd.com> wrote:
Am 04.05.22 um 22:34 schrieb Jason Ekstrand:
Modern userspace APIs like Vulkan are built on an explicit synchronization model. This doesn't always play nicely with the implicit synchronization used in the kernel and assumed by X11 and Wayland. The client -> compositor half of the synchronization isn't too bad, at least on intel, because we can control whether or not i915 synchronizes on the buffer and whether or not it's considered written.
The harder part is the compositor -> client synchronization when we get the buffer back from the compositor. We're required to be able to provide the client with a VkSemaphore and VkFence representing the point in time where the window system (compositor and/or display) finished using the buffer. With current APIs, it's very hard to do this in such a way that we don't get confused by the Vulkan driver's access of the buffer. In particular, once we tell the kernel that we're rendering to the buffer again, any CPU waits on the buffer or GPU dependencies will wait on some of the client rendering and not just the compositor.
This new IOCTL solves this problem by allowing us to get a snapshot of the implicit synchronization state of a given dma-buf in the form of a sync file. It's effectively the same as a poll() or I915_GEM_WAIT only, instead of CPU waiting directly, it encapsulates the wait operation, at the current moment in time, in a sync_file so we can check/wait on it later. As long as the Vulkan driver does the sync_file export from the dma-buf before we re-introduce it for rendering, it will only contain fences from the compositor or display. This allows to accurately turn it into a VkFence or VkSemaphore without any over-synchronization.
By making this an ioctl on the dma-buf itself, it allows this new functionality to be used in an entirely driver-agnostic way without having access to a DRM fd. This makes it ideal for use in driver-generic code in Mesa or in a client such as a compositor where the DRM fd may be hard to reach.
v2 (Jason Ekstrand):
- Use a wrapper dma_fence_array of all fences including the new one when importing an exclusive fence.
v3 (Jason Ekstrand):
- Lock around setting shared fences as well as exclusive
- Mark SIGNAL_SYNC_FILE as a read-write ioctl.
- Initialize ret to 0 in dma_buf_wait_sync_file
v4 (Jason Ekstrand):
- Use the new dma_resv_get_singleton helper
v5 (Jason Ekstrand):
- Rename the IOCTLs to import/export rather than wait/signal
- Drop the WRITE flag and always get/set the exclusive fence
v6 (Jason Ekstrand):
- Drop the sync_file import as it was all-around sketchy and not nearly as useful as import.
- Re-introduce READ/WRITE flag support for export
- Rework the commit message
v7 (Jason Ekstrand):
- Require at least one sync flag
- Fix a refcounting bug: dma_resv_get_excl() doesn't take a reference
- Use _rcu helpers since we're accessing the dma_resv read-only
v8 (Jason Ekstrand):
- Return -ENOMEM if the sync_file_create fails
- Predicate support on IS_ENABLED(CONFIG_SYNC_FILE)
v9 (Jason Ekstrand):
- Add documentation for the new ioctl
v10 (Jason Ekstrand):
- Go back to dma_buf_sync_file as the ioctl struct name
v11 (Daniel Vetter):
- Go back to dma_buf_export_sync_file as the ioctl struct name
- Better kerneldoc describing what the read/write flags do
v12 (Christian König):
- Document why we chose to make it an ioctl on dma-buf
v12 (Jason Ekstrand):
- Rebase on Christian König's fence rework
Signed-off-by: Jason Ekstrand jason@jlekstrand.net Acked-by: Simon Ser contact@emersion.fr Acked-by: Christian König christian.koenig@amd.com Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch Cc: Sumit Semwal sumit.semwal@linaro.org Cc: Maarten Lankhorst maarten.lankhorst@linux.intel.com
drivers/dma-buf/dma-buf.c | 64 ++++++++++++++++++++++++++++++++++++ include/uapi/linux/dma-buf.h | 35 ++++++++++++++++++++ 2 files changed, 99 insertions(+)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 79795857be3e..529e0611e53b 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -20,6 +20,7 @@ #include <linux/debugfs.h> #include <linux/module.h> #include <linux/seq_file.h> +#include <linux/sync_file.h> #include <linux/poll.h> #include <linux/dma-resv.h> #include <linux/mm.h> @@ -192,6 +193,9 @@ static loff_t dma_buf_llseek(struct file *file,
loff_t offset, int whence)
- Note that this only signals the completion of the respective
fences, i.e. the
- DMA transfers are complete. Cache flushing and any other necessary
- preparations before CPU access can begin still need to happen.
- As an alternative to poll(), the set of fences on DMA buffer can be
- exported as a &sync_file using &dma_buf_sync_file_export.
*/
static void dma_buf_poll_cb(struct dma_fence *fence, struct
dma_fence_cb *cb)
@@ -326,6 +330,61 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf)
 	return 0;
 }
 
+#if IS_ENABLED(CONFIG_SYNC_FILE)
+static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
+				     void __user *user_data)
+{
+	struct dma_buf_export_sync_file arg;
+	enum dma_resv_usage usage;
+	struct dma_fence *fence = NULL;
+	struct sync_file *sync_file;
+	int fd, ret;
+
+	if (copy_from_user(&arg, user_data, sizeof(arg)))
+		return -EFAULT;
+
+	if (arg.flags & ~DMA_BUF_SYNC_RW)
+		return -EINVAL;
+
+	if ((arg.flags & DMA_BUF_SYNC_RW) == 0)
+		return -EINVAL;
+
+	fd = get_unused_fd_flags(O_CLOEXEC);
+	if (fd < 0)
+		return fd;
+
+	usage = (arg.flags & DMA_BUF_SYNC_WRITE) ? DMA_RESV_USAGE_WRITE :
+						   DMA_RESV_USAGE_READ;
+	ret = dma_resv_get_singleton(dmabuf->resv, usage, &fence);
+	if (ret)
+		goto err_put_fd;
+
+	if (!fence)
+		fence = dma_fence_get_stub();
+
+	sync_file = sync_file_create(fence);
+	dma_fence_put(fence);
+	if (!sync_file) {
+		ret = -ENOMEM;
+		goto err_put_fd;
+	}
+
+	fd_install(fd, sync_file->file);
+
+	arg.fd = fd;
+	if (copy_to_user(user_data, &arg, sizeof(arg)))
+		return -EFAULT;
I know we had that discussion before, but I'm not 100% sure any more what the outcome was.

The problem here is that when the copy_to_user fails we have a file descriptor which is valid, but userspace doesn't know anything about it.

I only see a few possibilities here:
1. Keep it like this and just assume that a process which you can't copy the fd to is also dying (a bit too much assumption for my taste).
2. Close the file descriptor when this happens (not ideal either).
3. Instead of returning the fd in the parameter structure, return it as the IOCTL result.

Number 3 is what drm_prime_handle_to_fd_ioctl() does as well, and IIRC we said that this is probably the best option.
I don't have a strong preference here, so I'll go with whatever in the end, but let me at least explain my reasoning. First, this was based on the FD import/export in syncobj which stuffs the FD in the args struct. If `copy_to_user` is a problem here, it's a problem there as well. Second, the only way `copy_to_user` can fail is if the client gives us a read-only page or somehow manages to race removing the page from their address space (via munmap(), for instance) with this ioctl. Both of those seem like pretty serious client errors to me. That, or the client is in the process of dying, in which case we really don't care.
--Jason
Apart from that the patch set looks really clean to me now.
Regards, Christian.
+
+	return 0;
+
+err_put_fd:
+	put_unused_fd(fd);
+	return ret;
+}
+#endif
@@ -369,6 +428,11 @@ static long dma_buf_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 	case DMA_BUF_SET_NAME_B:
 		return dma_buf_set_name(dmabuf, (const char __user *)arg);
 
+#if IS_ENABLED(CONFIG_SYNC_FILE)
+	case DMA_BUF_IOCTL_EXPORT_SYNC_FILE:
+		return dma_buf_export_sync_file(dmabuf, (void __user *)arg);
+#endif
+
 	default:
 		return -ENOTTY;
 	}
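The flag handling in the export path above (reject unknown bits, then require at least one of READ/WRITE) can be modelled in plain userspace C. This is a sketch for illustration only — `validate_export_flags()` is a hypothetical helper, not kernel API; the DMA_BUF_SYNC_* values mirror the uapi header:

```c
#include <assert.h>
#include <errno.h>

/* Values below mirror include/uapi/linux/dma-buf.h. */
#define DMA_BUF_SYNC_READ  (1 << 0)
#define DMA_BUF_SYNC_WRITE (2 << 0)
#define DMA_BUF_SYNC_RW    (DMA_BUF_SYNC_READ | DMA_BUF_SYNC_WRITE)

/* Userspace model of the kernel-side checks: reject any unknown flag
 * bits, then require at least one of READ/WRITE to be set. */
static int validate_export_flags(unsigned int flags)
{
	if (flags & ~DMA_BUF_SYNC_RW)
		return -EINVAL;	/* unknown bits set */
	if ((flags & DMA_BUF_SYNC_RW) == 0)
		return -EINVAL;	/* neither READ nor WRITE */
	return 0;
}
```

Note that DMA_BUF_SYNC_RW itself is accepted; per the kerneldoc below, READ|WRITE behaves the same as WRITE alone.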
diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
index 8e4a2ca0bcbf..46f1e3e98b02 100644
--- a/include/uapi/linux/dma-buf.h
+++ b/include/uapi/linux/dma-buf.h
@@ -85,6 +85,40 @@ struct dma_buf_sync {
 
 #define DMA_BUF_NAME_LEN 32
 
+/**
+ * struct dma_buf_export_sync_file - Get a sync_file from a dma-buf
+ *
+ * Userspace can perform a DMA_BUF_IOCTL_EXPORT_SYNC_FILE to retrieve the
+ * current set of fences on a dma-buf file descriptor as a sync_file.  CPU
+ * waits via poll() or other driver-specific mechanisms typically wait on
+ * whatever fences are on the dma-buf at the time the wait begins.  This
+ * is similar except that it takes a snapshot of the current fences on the
+ * dma-buf for waiting later instead of waiting immediately.  This is
+ * useful for modern graphics APIs such as Vulkan which assume an explicit
+ * synchronization model but still need to inter-operate with dma-buf.
+ */
+struct dma_buf_export_sync_file {
+	/**
+	 * @flags: Read/write flags
+	 *
+	 * Must be DMA_BUF_SYNC_READ, DMA_BUF_SYNC_WRITE, or both.
+	 *
+	 * If DMA_BUF_SYNC_READ is set and DMA_BUF_SYNC_WRITE is not set,
+	 * the returned sync file waits on any writers of the dma-buf to
+	 * complete.  Waiting on the returned sync file is equivalent to
+	 * poll() with POLLIN.
+	 *
+	 * If DMA_BUF_SYNC_WRITE is set, the returned sync file waits on
+	 * any users of the dma-buf (read or write) to complete.  Waiting
+	 * on the returned sync file is equivalent to poll() with POLLOUT.
+	 * If both DMA_BUF_SYNC_WRITE and DMA_BUF_SYNC_READ are set, this
+	 * is equivalent to just DMA_BUF_SYNC_WRITE.
+	 */
+	__u32 flags;
+
+	/** @fd: Returned sync file descriptor */
+	__s32 fd;
+};
+
 #define DMA_BUF_BASE		'b'
 #define DMA_BUF_IOCTL_SYNC	_IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
@@ -94,5 +128,6 @@ struct dma_buf_sync {
 #define DMA_BUF_SET_NAME	_IOW(DMA_BUF_BASE, 1, const char *)
 #define DMA_BUF_SET_NAME_A	_IOW(DMA_BUF_BASE, 1, u32)
 #define DMA_BUF_SET_NAME_B	_IOW(DMA_BUF_BASE, 1, u64)
+#define DMA_BUF_IOCTL_EXPORT_SYNC_FILE	_IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file)
 
 #endif
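For reference, a userspace caller might wrap the new export ioctl roughly like this. This is a sketch under assumptions: the struct and ioctl number are duplicated from the uapi hunk above so it builds without updated kernel headers, and `export_sync_file()` is a hypothetical helper name, not part of any library:

```c
#include <assert.h>
#include <errno.h>
#include <sys/ioctl.h>
#include <linux/types.h>

/* Mirrors struct dma_buf_export_sync_file from the uapi hunk above;
 * defined locally so the sketch builds on older kernel headers. */
struct dma_buf_export_sync_file {
	__u32 flags;
	__s32 fd;
};

#define DMA_BUF_BASE 'b'
#define DMA_BUF_IOCTL_EXPORT_SYNC_FILE \
	_IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file)
#define DMA_BUF_SYNC_READ  (1 << 0)
#define DMA_BUF_SYNC_WRITE (2 << 0)

/* Snapshot the current fences on @dmabuf_fd as a sync_file.
 * Returns the sync_file fd on success, or -1 with errno set. */
static int export_sync_file(int dmabuf_fd, __u32 flags)
{
	struct dma_buf_export_sync_file arg = { .flags = flags, .fd = -1 };

	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &arg) < 0)
		return -1;
	return arg.fd;
}
```

A Vulkan driver would call this on the dma-buf it got back from the compositor, before re-introducing the buffer for rendering, and then import the resulting sync_file fd as a VkFence or VkSemaphore payload.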
Am 05.05.22 um 10:10 schrieb Jason Ekstrand:
On Thu, May 5, 2022 at 1:25 AM Christian König christian.koenig@amd.com wrote:
[SNIP — quoted patch hunk and the copy_to_user discussion, see above]
Yeah, I know about that copy_to_user() issue in the syncobj and also some driver specific handling.
That's why we discussed this before and IIRC somebody indeed ran into an issue with -EFAULT and that was the reason all this bubbled up.
I don't have a strong preference either, but I think we should try to learn from previous mistakes and design new interfaces based on such experience.
Christian.
On Thu, May 05, 2022 at 10:27:39AM +0200, Christian König wrote:
> Am 05.05.22 um 10:10 schrieb Jason Ekstrand:
> > On Thu, May 5, 2022 at 1:25 AM Christian König <christian.koenig@amd.com> wrote:
> > [SNIP — copy_to_user discussion, see above]
>
> Yeah, I know about that copy_to_user() issue in the syncobj and also some driver specific handling.
>
> That's why we discussed this before and IIRC somebody indeed ran into an issue with -EFAULT and that was the reason all this bubbled up.
>
> I don't have a strong preference either, but I think we should try to learn from previous mistakes and design new interfaces based on such experience.
We have this in a bunch of places (like execbuf tail handling after drm_sched_job_push()) and I think what we commonly do is just try to clean up the mess a bit and fail.

I think what you could do here is do the copy_to_user before you do the fd_install, and if the copy_to_user fails you just clean up everything and fail. That just means there's a small window where userspace has an fd reserved that didn't end up being used, but in real apps this just never matters.

Leaking the fd is maybe not the best option, but meh.
-Daniel
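The ordering Daniel suggests can be illustrated with a toy userspace model. All `fake_*` names here are stand-ins for illustration, not kernel API: reserve the fd slot, attempt the copy-out, and only publish the file on success, so a failed copy leaves no fd visible to userspace.

```c
#include <assert.h>
#include <stdbool.h>

#define FD_TABLE_SIZE 8

static bool fd_reserved[FD_TABLE_SIZE];  /* slot handed out by get_unused_fd */
static bool fd_installed[FD_TABLE_SIZE]; /* slot actually backed by a file */

static int fake_get_unused_fd(void)
{
	for (int i = 0; i < FD_TABLE_SIZE; i++) {
		if (!fd_reserved[i]) {
			fd_reserved[i] = true;
			return i;
		}
	}
	return -1;
}

static void fake_put_unused_fd(int fd) { fd_reserved[fd] = false; }
static void fake_fd_install(int fd)    { fd_installed[fd] = true; }

/* Daniel's ordering: copy the fd number out to userspace first (modelled
 * by the boolean), and only install the file on success. */
static int export_with_safe_ordering(bool copy_to_user_succeeds)
{
	int fd = fake_get_unused_fd();
	if (fd < 0)
		return -1;

	if (!copy_to_user_succeeds) {
		/* Copy failed: drop the reservation; nothing was published. */
		fake_put_unused_fd(fd);
		return -1;
	}

	fake_fd_install(fd); /* point of no return, but userspace already knows the fd */
	return fd;
}
```

The failure path releases the reserved slot before anything is installed, so nothing leaks; the only cost is the brief window where the fd number is reserved but unused.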
This patch is analogous to the previous sync file export patch in that it allows you to import a sync_file into a dma-buf. Unlike the previous patch, however, this does add genuinely new functionality to dma-buf. Without this, the only way to attach a sync_file to a dma-buf is to submit a batch to your driver of choice which waits on the sync_file and claims to write to the dma-buf. Even if said batch is a no-op, a submit is typically way more overhead than just attaching a fence. A submit may also imply extra synchronization with other work because it happens on a hardware queue.
In the Vulkan world, this is useful for dealing with the out-fence from vkQueuePresent. Current Linux window-systems (X11, Wayland, etc.) all rely on dma-buf implicit sync. Since Vulkan is an explicit sync API, we get a set of fences (VkSemaphores) in vkQueuePresent and have to stash those as an exclusive (write) fence on the dma-buf. We handle it in Mesa today with the above mentioned dummy submit trick. This ioctl would allow us to set it directly without the dummy submit.
This may also open up possibilities for GPU drivers to move away from implicit sync for their kernel driver uAPI and instead provide sync files and rely on dma-buf import/export for communicating with other implicit sync clients.
We make the explicit choice here to only allow setting RW fences which translates to an exclusive fence on the dma_resv. There's no use for read-only fences for communicating with other implicit sync userspace and any such attempts are likely to be racy at best. When we go to insert the RW fence, the actual fence we set as the new exclusive fence is a combination of the sync_file provided by the user and all the other fences on the dma_resv. This ensures that the newly added exclusive fence will never signal before the old one would have and ensures that we don't break any dma_resv contracts. We require userspace to specify RW in the flags for symmetry with the export ioctl and in case we ever want to support read fences in the future.
There is one downside here that's worth documenting: If two clients writing to the same dma-buf using this API race with each other, their actions on the dma-buf may happen in parallel or in an undefined order. Both with and without this API, the pattern is the same: Collect all the fences on dma-buf, submit work which depends on said fences, and then set a new exclusive (write) fence on the dma-buf which depends on said work. The difference is that, when it's all handled by the GPU driver's submit ioctl, the three operations happen atomically under the dma_resv lock. If two userspace submits race, one will happen before the other. You aren't guaranteed which but you are guaranteed that they're strictly ordered. If userspace manages the fences itself, then these three operations happen separately and the two render operations may happen genuinely in parallel or get interleaved. However, this is a case of userspace racing with itself. As long as we ensure userspace can't back the kernel into a corner, it should be fine.
v2 (Jason Ekstrand):
- Use a wrapper dma_fence_array of all fences including the new one when importing an exclusive fence.
v3 (Jason Ekstrand):
- Lock around setting shared fences as well as exclusive
- Mark SIGNAL_SYNC_FILE as a read-write ioctl.
- Initialize ret to 0 in dma_buf_wait_sync_file
v4 (Jason Ekstrand):
- Use the new dma_resv_get_singleton helper
v5 (Jason Ekstrand):
- Rename the IOCTLs to import/export rather than wait/signal
- Drop the WRITE flag and always get/set the exclusive fence
v6 (Jason Ekstrand):
- Split import and export into separate patches
- New commit message
v7 (Daniel Vetter):
- Fix the uapi header to use the right struct in the ioctl
- Use a separate dma_buf_import_sync_file struct
- Add kerneldoc for dma_buf_import_sync_file
v8 (Jason Ekstrand):
- Rebase on Christian König's fence rework

Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Cc: Christian König <christian.koenig@amd.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/dma-buf/dma-buf.c    | 36 ++++++++++++++++++++++++++++++++++++
 include/uapi/linux/dma-buf.h | 22 ++++++++++++++++++++++
 2 files changed, 58 insertions(+)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 529e0611e53b..68aac6f694f9 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -383,6 +383,40 @@ static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
 	put_unused_fd(fd);
 	return ret;
 }
+
+static long dma_buf_import_sync_file(struct dma_buf *dmabuf,
+				     const void __user *user_data)
+{
+	struct dma_buf_import_sync_file arg;
+	struct dma_fence *fence;
+	enum dma_resv_usage usage;
+	int ret = 0;
+
+	if (copy_from_user(&arg, user_data, sizeof(arg)))
+		return -EFAULT;
+
+	if (arg.flags != DMA_BUF_SYNC_RW)
+		return -EINVAL;
+
+	fence = sync_file_get_fence(arg.fd);
+	if (!fence)
+		return -EINVAL;
+
+	usage = (arg.flags & DMA_BUF_SYNC_WRITE) ? DMA_RESV_USAGE_WRITE :
+						   DMA_RESV_USAGE_READ;
+
+	dma_resv_lock(dmabuf->resv, NULL);
+
+	ret = dma_resv_reserve_fences(dmabuf->resv, 1);
+	if (!ret)
+		dma_resv_add_fence(dmabuf->resv, fence, usage);
+
+	dma_resv_unlock(dmabuf->resv);
+
+	dma_fence_put(fence);
+
+	return ret;
+}
 #endif
 
 static long dma_buf_ioctl(struct file *file,
@@ -431,6 +465,8 @@ static long dma_buf_ioctl(struct file *file,
 #if IS_ENABLED(CONFIG_SYNC_FILE)
 	case DMA_BUF_IOCTL_EXPORT_SYNC_FILE:
 		return dma_buf_export_sync_file(dmabuf, (void __user *)arg);
+	case DMA_BUF_IOCTL_IMPORT_SYNC_FILE:
+		return dma_buf_import_sync_file(dmabuf, (const void __user *)arg);
 #endif
 
 	default:
diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
index 46f1e3e98b02..913119bf2201 100644
--- a/include/uapi/linux/dma-buf.h
+++ b/include/uapi/linux/dma-buf.h
@@ -119,6 +119,27 @@ struct dma_buf_export_sync_file {
 	__s32 fd;
 };
 
+/**
+ * struct dma_buf_import_sync_file - Insert a sync_file into a dma-buf
+ *
+ * Userspace can perform a DMA_BUF_IOCTL_IMPORT_SYNC_FILE to insert a
+ * sync_file into a dma-buf for the purposes of implicit synchronization
+ * with other dma-buf consumers.  This allows clients using explicitly
+ * synchronized APIs such as Vulkan to inter-op with dma-buf consumers
+ * which expect implicit synchronization such as OpenGL or most media
+ * drivers/video.
+ */
+struct dma_buf_import_sync_file {
+	/**
+	 * @flags: Read/write flags
+	 *
+	 * Must be DMA_BUF_SYNC_RW.
+	 */
+	__u32 flags;
+
+	/** @fd: Sync file descriptor */
+	__s32 fd;
+};
+
 #define DMA_BUF_BASE		'b'
 #define DMA_BUF_IOCTL_SYNC	_IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
@@ -129,5 +150,6 @@ struct dma_buf_export_sync_file {
 #define DMA_BUF_SET_NAME_A	_IOW(DMA_BUF_BASE, 1, u32)
 #define DMA_BUF_SET_NAME_B	_IOW(DMA_BUF_BASE, 1, u64)
 #define DMA_BUF_IOCTL_EXPORT_SYNC_FILE	_IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file)
+#define DMA_BUF_IOCTL_IMPORT_SYNC_FILE	_IOW(DMA_BUF_BASE, 3, struct dma_buf_import_sync_file)
 
 #endif
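The import side would be driven from userspace in much the same way as the export side — for example, to attach the out-fence from vkQueuePresent to the swapchain image's dma-buf. A sketch under assumptions: the struct and ioctl number are duplicated from the uapi hunk above, and `import_sync_file()` is a hypothetical helper name:

```c
#include <assert.h>
#include <errno.h>
#include <sys/ioctl.h>
#include <linux/types.h>

/* Mirrors struct dma_buf_import_sync_file from the uapi hunk above;
 * defined locally so the sketch builds without updated kernel headers. */
struct dma_buf_import_sync_file {
	__u32 flags;
	__s32 fd;
};

#define DMA_BUF_BASE 'b'
#define DMA_BUF_IOCTL_IMPORT_SYNC_FILE \
	_IOW(DMA_BUF_BASE, 3, struct dma_buf_import_sync_file)
#define DMA_BUF_SYNC_READ  (1 << 0)
#define DMA_BUF_SYNC_WRITE (2 << 0)
#define DMA_BUF_SYNC_RW    (DMA_BUF_SYNC_READ | DMA_BUF_SYNC_WRITE)

/* Attach the fence(s) in @sync_file_fd to @dmabuf_fd for implicit sync.
 * Returns 0 on success, -1 with errno set on failure. */
static int import_sync_file(int dmabuf_fd, int sync_file_fd)
{
	struct dma_buf_import_sync_file arg = {
		.flags = DMA_BUF_SYNC_RW, /* the RFC only accepts RW */
		.fd = sync_file_fd,
	};

	return ioctl(dmabuf_fd, DMA_BUF_IOCTL_IMPORT_SYNC_FILE, &arg);
}
```

This replaces the dummy-submit trick described in the commit message: instead of submitting a no-op batch that waits on the sync_file and claims to write the buffer, the fence is attached directly.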
On Wed, May 04, 2022 at 03:34:04PM -0500, Jason Ekstrand wrote:

> [SNIP — quoted commit message, changelog, and diffstat; see the patch above]
>
> +	if (copy_from_user(&arg, user_data, sizeof(arg)))
> +		return -EFAULT;
> +
> +	if (arg.flags != DMA_BUF_SYNC_RW)

I think the flag validation here looks wrong? I think it needs the exact same 3 checks as the export ioctl.

> [SNIP — remainder of dma_buf_import_sync_file() and the ioctl dispatch hunk]
>
> +	/**
> +	 * @flags: Read/write flags
> +	 *
> +	 * Must be DMA_BUF_SYNC_RW.
> +	 */

The checks are wrong, but the intent of your implementation looks a lot more like you allow both SYNC_WRITE and SYNC_READ, and I think that makes a lot of sense. Especially since we now have true sync-less access for vk with DMA_RESV_USAGE_BOOKKEEPING, allowing userspace to explicitly set read will be needed.

Or does vk only allow you to set write fences anyway? That would suck for the vk app + gl compositor case a bit, so I hope not.

> [SNIP — remainder of the uapi header hunk]

With the flag nits sorted out:

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

> --
> 2.36.0
On Wed, May 4, 2022 at 5:53 PM Daniel Vetter <daniel@ffwll.ch> wrote:

> On Wed, May 04, 2022 at 03:34:04PM -0500, Jason Ekstrand wrote:
>
> > [SNIP — quoted commit message, changelog, and patch; see above]
> >
> > +	if (arg.flags != DMA_BUF_SYNC_RW)
>
> I think the flag validation here looks wrong? I think it needs the exact same 3 checks as the export ioctl.

Yup. Fixed. By which I mean I stuck in the 2-check version. Let's chat on patch 1 about whether or not RW should be allowed.

> > [SNIP]
> >
> > +	 * Must be DMA_BUF_SYNC_RW.
>
> The checks are wrong, but the intent of your implementation looks a lot more like you allow both SYNC_WRITE and SYNC_READ, and I think that makes a lot of sense. Especially since we now have true sync-less access for vk with DMA_RESV_USAGE_BOOKKEEPING, allowing userspace to explicitly set read will be needed.
>
> Or does vk only allow you to set write fences anyway? That would suck for the vk app + gl compositor case a bit, so I hope not.

I just forgot to update the docs. The reason for only allowing RW before was that we were all scared of inserting shared fences and not exclusive fences. Now that we have the fence rework, I think being able to stick in a read fence is safe. Not sure if it's useful, but it's at least safe.

--Jason

> With the flag nits sorted out:
>
> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
On Thu, May 05, 2022 at 03:13:55AM -0500, Jason Ekstrand wrote:
On Wed, May 4, 2022 at 5:53 PM Daniel Vetter daniel@ffwll.ch wrote:
On Wed, May 04, 2022 at 03:34:04PM -0500, Jason Ekstrand wrote:
This patch is analogous to the previous sync file export patch in that it allows you to import a sync_file into a dma-buf. Unlike the previous patch, however, this does add genuinely new functionality to dma-buf. Without this, the only way to attach a sync_file to a dma-buf is to submit a batch to your driver of choice which waits on the sync_file and claims to write to the dma-buf. Even if said batch is a no-op, a submit is typically way more overhead than just attaching a fence. A submit may also imply extra synchronization with other work because it happens on a hardware queue.
In the Vulkan world, this is useful for dealing with the out-fence from vkQueuePresent. Current Linux window-systems (X11, Wayland, etc.) all rely on dma-buf implicit sync. Since Vulkan is an explicit sync API, we get a set of fences (VkSemaphores) in vkQueuePresent and have to stash those as an exclusive (write) fence on the dma-buf. We handle it in Mesa today with the above mentioned dummy submit trick. This ioctl would allow us to set it directly without the dummy submit.
This may also open up possibilities for GPU drivers to move away from implicit sync for their kernel driver uAPI and instead provide sync files and rely on dma-buf import/export for communicating with other implicit sync clients.
We make the explicit choice here to only allow setting RW fences which translates to an exclusive fence on the dma_resv. There's no use for read-only fences for communicating with other implicit sync userspace and any such attempts are likely to be racy at best. When we go to insert the RW fence, the actual fence we set as the new exclusive fence is a combination of the sync_file provided by the user and all the other fences on the dma_resv. This ensures that the newly added exclusive fence will never signal before the old one would have and ensures that we don't break any dma_resv contracts. We require userspace to specify RW in the flags for symmetry with the export ioctl and in case we ever want to support read fences in the future.
There is one downside here that's worth documenting: If two clients writing to the same dma-buf using this API race with each other, their actions on the dma-buf may happen in parallel or in an undefined order. Both with and without this API, the pattern is the same: Collect all the fences on dma-buf, submit work which depends on said fences, and then set a new exclusive (write) fence on the dma-buf which depends on said work. The difference is that, when it's all handled by the GPU driver's submit ioctl, the three operations happen atomically under the dma_resv lock. If two userspace submits race, one will happen before the other. You aren't guaranteed which but you are guaranteed that they're strictly ordered. If userspace manages the fences itself, then these three operations happen separately and the two render operations may happen genuinely in parallel or get interleaved. However, this is a case of userspace racing with itself. As long as we ensure userspace can't back the kernel into a corner, it should be fine.
v2 (Jason Ekstrand):
- Use a wrapper dma_fence_array of all fences including the new one when importing an exclusive fence.
v3 (Jason Ekstrand):
- Lock around setting shared fences as well as exclusive
- Mark SIGNAL_SYNC_FILE as a read-write ioctl.
- Initialize ret to 0 in dma_buf_wait_sync_file
v4 (Jason Ekstrand):
- Use the new dma_resv_get_singleton helper
v5 (Jason Ekstrand):
- Rename the IOCTLs to import/export rather than wait/signal
- Drop the WRITE flag and always get/set the exclusive fence
v6 (Jason Ekstrand):
- Split import and export into separate patches
- New commit message
v7 (Daniel Vetter):
- Fix the uapi header to use the right struct in the ioctl
- Use a separate dma_buf_import_sync_file struct
- Add kerneldoc for dma_buf_import_sync_file
v8 (Jason Ekstrand):
- Rebase on Christian König's fence rework
Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Cc: Christian König <christian.koenig@amd.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
 drivers/dma-buf/dma-buf.c    | 36 ++++++++++++++++++++++++++++++++++++
 include/uapi/linux/dma-buf.h | 22 ++++++++++++++++++++++
 2 files changed, 58 insertions(+)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 529e0611e53b..68aac6f694f9 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -383,6 +383,40 @@ static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
 	put_unused_fd(fd);
 	return ret;
 }
+static long dma_buf_import_sync_file(struct dma_buf *dmabuf,
+				     const void __user *user_data)
+{
+	struct dma_buf_import_sync_file arg;
+	struct dma_fence *fence;
+	enum dma_resv_usage usage;
+	int ret = 0;
+
+	if (copy_from_user(&arg, user_data, sizeof(arg)))
+		return -EFAULT;
+
+	if (arg.flags != DMA_BUF_SYNC_RW)
I think the flag validation here looks wrong? I think it needs the exact same 3 checks as the export ioctl.
Yup. Fixed. By which I mean I stuck in the 2-check version. Let's chat on patch 1 about whether or not RW should be allowed.
+		return -EINVAL;
+
+	fence = sync_file_get_fence(arg.fd);
+	if (!fence)
+		return -EINVAL;
+
+	usage = (arg.flags & DMA_BUF_SYNC_WRITE) ? DMA_RESV_USAGE_WRITE :
+						   DMA_RESV_USAGE_READ;
+
+	dma_resv_lock(dmabuf->resv, NULL);
+
+	ret = dma_resv_reserve_fences(dmabuf->resv, 1);
+	if (!ret)
+		dma_resv_add_fence(dmabuf->resv, fence, usage);
+
+	dma_resv_unlock(dmabuf->resv);
+
+	dma_fence_put(fence);
+
+	return ret;
+}
 #endif
@@ -431,6 +465,8 @@ static long dma_buf_ioctl(struct file *file,
 #if IS_ENABLED(CONFIG_SYNC_FILE)
 	case DMA_BUF_IOCTL_EXPORT_SYNC_FILE:
 		return dma_buf_export_sync_file(dmabuf, (void __user *)arg);
+	case DMA_BUF_IOCTL_IMPORT_SYNC_FILE:
+		return dma_buf_import_sync_file(dmabuf, (const void __user *)arg);
 #endif
 	default:
diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
index 46f1e3e98b02..913119bf2201 100644
--- a/include/uapi/linux/dma-buf.h
+++ b/include/uapi/linux/dma-buf.h
@@ -119,6 +119,27 @@ struct dma_buf_export_sync_file {
 	__s32 fd;
 };
+/**
+ * struct dma_buf_import_sync_file - Insert a sync_file into a dma-buf
+ *
+ * Userspace can perform a DMA_BUF_IOCTL_IMPORT_SYNC_FILE to insert a
+ * sync_file into a dma-buf for the purposes of implicit synchronization
+ * with other dma-buf consumers.  This allows clients using explicitly
+ * synchronized APIs such as Vulkan to inter-op with dma-buf consumers
+ * which expect implicit synchronization such as OpenGL or most media
+ * drivers/video.
+ */
+struct dma_buf_import_sync_file {
+	/**
+	 * @flags: Read/write flags
+	 *
+	 * Must be DMA_BUF_SYNC_RW.
The checks are wrong, but the intent of your implementation looks a lot more like you allow both SYNC_WRITE and SYNC_READ, and I think that makes a lot of sense. Especially since we now have true sync-less access for vk with DMA_RESV_USAGE_BOOKKEEPING, allowing userspace to explicitly set read will be needed.
Or does vk only allow you to set write fences anyway? That would suck for the vk app + gl compositor case a bit, so I hope not.
I just forgot to update the docs. The reason for only allowing RW before was because we were all scared of inserting shared fences and not exclusive fences. Now that we have the fence rework, I think being able to stick in a read fence is safe. Not sure if it's useful, but it's at least safe.
For compositors we do need to be able to insert read fences, or I think things go wrong with concurrency between apps and readback. If the compositor marks its access as writing, then you stall the client when it wants to copy stuff over to the next buffer or do some temporal post processing or whatever. Or do we just entirely rely on winsys events for handing the buffers back? I'm honestly not sure on this part ... and there's probably some buffer sharing where these read fences do matter. -Daniel
--Jason
+	 */
+	__u32 flags;
+
+	/** @fd: Sync file descriptor */
+	__s32 fd;
+};
 #define DMA_BUF_BASE		'b'
 #define DMA_BUF_IOCTL_SYNC	_IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
@@ -129,5 +150,6 @@ struct dma_buf_export_sync_file {
 #define DMA_BUF_SET_NAME_A	_IOW(DMA_BUF_BASE, 1, u32)
 #define DMA_BUF_SET_NAME_B	_IOW(DMA_BUF_BASE, 1, u64)
 #define DMA_BUF_IOCTL_EXPORT_SYNC_FILE	_IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file)
+#define DMA_BUF_IOCTL_IMPORT_SYNC_FILE	_IOW(DMA_BUF_BASE, 3, struct dma_buf_import_sync_file)
With the flag nits sorted out:
Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch
#endif
2.36.0
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
dri-devel@lists.freedesktop.org