Explicit synchronization is the future. At least, that seems to be what most userspace APIs are agreeing on at this point. However, most of our Linux APIs (both userspace and kernel UAPI) are currently built around implicit synchronization with dma-buf. While work is ongoing to change many of the userspace APIs and protocols to an explicit synchronization model, switching over piecemeal is difficult due to the number of potential components involved. On the kernel side, many drivers use dma-buf including GPU (3D/compute), display, v4l, and others. In userspace, we have X11, several Wayland compositors, 3D drivers, compute drivers (OpenCL etc.), media encode/decode, and the list goes on.
This patch provides a path forward by allowing userspace to manually manage the fences attached to a dma-buf. Alternatively, one can think of this as making dma-buf's implicit synchronization simply a carrier for an explicit fence. This is accomplished by adding two IOCTLs to dma-buf for importing and exporting a sync file to/from the dma-buf. This way a userspace component which uses explicit synchronization, such as a Vulkan driver, can manually set the write fence on a buffer before handing it off to an implicitly synchronized component such as a Wayland compositor or video encoder. In this way, each of the different components can be upgraded to an explicit synchronization model one at a time as long as the userspace pieces connecting them are aware of it and import/export fences at the right times.
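To make the intended usage concrete, here is a rough sketch of the flow from a userspace driver's point of view (illustrative only, not part of the patch; it assumes the uapi additions below, and the helper and variable names are made up):

#include <sys/ioctl.h>
#include <linux/dma-buf.h>

/* Export: snapshot the dma-buf's current fences as a sync_file before
 * touching the buffer from an explicitly synchronized API.  With
 * SYNC_WRITE set we get a fence covering both readers and writers.
 * Returns a sync_file fd or -1 on error. */
static int dmabuf_export_wait_fence(int dmabuf_fd)
{
        struct dma_buf_sync_file arg = {
                .flags = DMA_BUF_SYNC_FILE_SYNC_WRITE,
                .fd = -1,
        };

        if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_SIGNAL_SYNC_FILE, &arg))
                return -1;

        return arg.fd;
}

/* Import: once the rendering work has been submitted and we have a
 * sync_file for it, attach it as the buffer's new write fence before
 * handing the buffer to an implicitly synchronized consumer. */
static int dmabuf_import_write_fence(int dmabuf_fd, int sync_file_fd)
{
        struct dma_buf_sync_file arg = {
                .flags = DMA_BUF_SYNC_FILE_SYNC_WRITE,
                .fd = sync_file_fd,
        };

        return ioctl(dmabuf_fd, DMA_BUF_IOCTL_WAIT_SYNC_FILE, &arg);
}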
There is a potential race condition with this API if userspace is not careful. A typical use case for implicit synchronization is to wait for the dma-buf to be ready, use it, and then signal it for some other component. Because a sync_file cannot be created until it is guaranteed to complete in finite time, userspace can only signal the dma-buf after it has already submitted the work which uses it to the kernel and has received a sync_file back. There is no way to atomically submit a wait-use-signal operation. This is not, however, really a problem with this API so much as it is a problem with explicit synchronization itself. The way this is typically handled is to have very explicit ownership transfer points in the API or protocol which ensure that only one component is using it at any given time. Both X11 (via the PRESENT extension) and Wayland provide such ownership transfer points via explicit present and idle messages.
The decision was intentionally made in this patch to make the import and export operations IOCTLs on the dma-buf itself rather than as DRM IOCTLs. This makes the import/export operation universal across all components which use dma-buf including GPU, display, v4l, and others. It also means that a userspace component can do the import/export without access to the DRM fd which may be tricky to get in cases where the client communicates with DRM via a userspace API such as OpenGL or Vulkan. At a future date we may choose to add direct import/export APIs to components such as drm_syncobj to avoid allocating a file descriptor and going through two ioctls. However, that seems to be something of a micro-optimization as import/export operations are likely to happen at a rate of a few per frame of rendered or decoded video.
Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
---
This is marked as an RFC because I intend it to start a discussion about how to solve a problem. The current patch compiles but that's it for now. I'll be writing IGT tests and Vulkan driver patches which exercise it over the next couple of days. In the meantime, feel free to tell me why you think this is a great and/or terrible idea. :-)
--Jason
drivers/dma-buf/dma-buf.c    | 115 +++++++++++++++++++++++++++++++++++
include/uapi/linux/dma-buf.h |  13 +++-
2 files changed, 126 insertions(+), 2 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index d4097856c86b..3845b87e209e 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -20,6 +20,7 @@
 #include <linux/debugfs.h>
 #include <linux/module.h>
 #include <linux/seq_file.h>
+#include <linux/sync_file.h>
 #include <linux/poll.h>
 #include <linux/dma-resv.h>
 #include <linux/mm.h>
@@ -348,6 +349,114 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf)
 	return ret;
 }
 
+static long dma_buf_wait_sync_file(struct dma_buf *dmabuf,
+				   const void __user *user_data)
+{
+	struct dma_buf_sync_file arg;
+	struct dma_fence *fence;
+
+	if (copy_from_user(&arg, user_data, sizeof(arg)))
+		return -EFAULT;
+
+	if (arg.flags != 0 && arg.flags != DMA_BUF_SYNC_FILE_SYNC_WRITE)
+		return -EINVAL;
+
+	fence = sync_file_get_fence(arg.fd);
+	if (!fence)
+		return -EINVAL;
+
+	if (arg.flags & DMA_BUF_SYNC_FILE_SYNC_WRITE) {
+		dma_resv_add_excl_fence(dmabuf->resv, fence);
+	} else {
+		dma_resv_add_shared_fence(dmabuf->resv, fence);
+	}
+
+	return 0;
+}
+
+static long dma_buf_signal_sync_file(struct dma_buf *dmabuf,
+				     void __user *user_data)
+{
+	struct dma_buf_sync_file arg;
+	struct dma_fence *fence = NULL;
+	struct sync_file *sync_file;
+	int fd, ret;
+
+	if (copy_from_user(&arg, user_data, sizeof(arg)))
+		return -EFAULT;
+
+	if (arg.flags != 0 && arg.flags != DMA_BUF_SYNC_FILE_SYNC_WRITE)
+		return -EINVAL;
+
+	fd = get_unused_fd_flags(O_CLOEXEC);
+	if (fd < 0)
+		return fd;
+
+	if (arg.flags & DMA_BUF_SYNC_FILE_SYNC_WRITE) {
+		/* We need to include both the exclusive fence and all of
+		 * the shared fences in our fence.
+		 */
+		struct dma_fence **fences = NULL;
+		unsigned i, num_fences = 0;
+
+		ret = dma_resv_get_fences_rcu(dmabuf->resv, NULL,
+					      &num_fences, &fences);
+		if (ret)
+			goto err_put_fd;
+
+		if (num_fences == 0) {
+			fence = dma_fence_get_stub();
+		} else if (num_fences == 1) {
+			fence = fences[0];
+			kfree(fences);
+		} else {
+			struct dma_fence_array *fence_arr;
+
+			fence_arr = dma_fence_array_create(num_fences, fences,
+							   dma_fence_context_alloc(1),
+							   1, false);
+			if (!fence_arr) {
+				for (i = 0; i < num_fences; i++)
+					dma_fence_put(fences[i]);
+				kfree(fences);
+				ret = -ENOMEM;
+				goto err_put_fd;
+			}
+
+			/* The fence array now owns fences_arr and our
+			 * references to each of the individual fences.  We
+			 * only own a reference to the one array fence.
+			 */
+			fence = &fence_arr->base;
+		}
+	} else {
+		fence = dma_resv_get_excl_rcu(dmabuf->resv);
+		if (!fence)
+			fence = dma_fence_get_stub();
+	}
+
+	sync_file = sync_file_create(fence);
+
+	dma_fence_put(fence);
+
+	if (!sync_file) {
+		ret = -EINVAL;
+		goto err_put_fd;
+	}
+
+	fd_install(fd, sync_file->file);
+
+	arg.fd = fd;
+	if (copy_to_user(user_data, &arg, sizeof(arg)))
+		return -EFAULT;
+
+	return 0;
+
+err_put_fd:
+	put_unused_fd(fd);
+	return ret;
+}
+
 static long dma_buf_ioctl(struct file *file,
 			  unsigned int cmd, unsigned long arg)
 {
@@ -390,6 +499,12 @@ static long dma_buf_ioctl(struct file *file,
 	case DMA_BUF_SET_NAME:
 		return dma_buf_set_name(dmabuf, (const char __user *)arg);
 
+	case DMA_BUF_IOCTL_WAIT_SYNC_FILE:
+		return dma_buf_wait_sync_file(dmabuf, (const void __user *)arg);
+
+	case DMA_BUF_IOCTL_SIGNAL_SYNC_FILE:
+		return dma_buf_signal_sync_file(dmabuf, (void __user *)arg);
+
 	default:
 		return -ENOTTY;
 	}
diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
index dbc7092e04b5..825b9a913c89 100644
--- a/include/uapi/linux/dma-buf.h
+++ b/include/uapi/linux/dma-buf.h
@@ -37,8 +37,17 @@ struct dma_buf_sync {
 
 #define DMA_BUF_NAME_LEN 32
 
+struct dma_buf_sync_file {
+	__u32 flags;
+	__s32 fd;
+};
+
+#define DMA_BUF_SYNC_FILE_SYNC_WRITE (1 << 0)
+
 #define DMA_BUF_BASE 'b'
-#define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
-#define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *)
+#define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
+#define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *)
+#define DMA_BUF_IOCTL_WAIT_SYNC_FILE _IOW(DMA_BUF_BASE, 2, struct dma_buf_sync)
+#define DMA_BUF_IOCTL_SIGNAL_SYNC_FILE _IOW(DMA_BUF_BASE, 3, struct dma_buf_sync)
 
 #endif
Hi Jason,
On 26.02.20 at 00:58, Jason Ekstrand wrote:
[SNIP]
For the exporting part I think it is an absolutely great idea because it simplifies compatibility with explicit sync quite a bit.
But for the importing part it is a clear NAK at the moment. See we can't allow userspace to mess with DMA-buf fences in that way because it rips open a security hole you can push an elephant through.
Just imagine that you access some DMA-buf with a shader and that operation is presented as a fence on the DMA-bufs reservation object. And now you can go ahead and replace that fence and free up the memory.
Tricking the Linux kernel into allocating page tables in that freed memory is trivial, and that's basically it: you can overwrite page tables with your shader and gain access to all of system memory :)
What we could do is to always make sure that the added fences will complete later than the already existing ones, but that is also rather tricky to get right. I wouldn't do that if we don't have a rather big use case for this.
Regards, Christian.
On Wed, Feb 26, 2020 at 10:16:05AM +0100, Christian König wrote:
Hi Jason,
On 26.02.20 at 00:58, Jason Ekstrand wrote:
[SNIP]
For the exporting part I think it is an absolutely great idea because it simplifies compatibility with explicit sync quite a bit.
But for the importing part it is a clear NAK at the moment. See we can't allow userspace to mess with DMA-buf fences in that way because it rips open a security hole you can push an elephant through.
Just imagine that you access some DMA-buf with a shader and that operation is presented as a fence on the DMA-bufs reservation object. And now you can go ahead and replace that fence and free up the memory.
Tricking the Linux kernel into allocating page tables in that freed memory is trivial and that's basically it you can overwrite page tables with your shader and gain access to all of system memory :)
What we could do is to always make sure that the added fences will complete later than the already existing ones, but that is also rather tricky to get right. I wouldn't do that if we don't have a rather big use case for this.
I think the main use-case for adding a fence is adding a write fence for vk winsys buffers, which run without any sync at all. So essentially what we'd do is promote one of the read fences which are already attached to be the write fence.
But yeah making sure we don't break any of the dma_resv guarantees about how these fences works is going to be somewhat tricky. Probably can reuse a big chunk of the fence container work we've done for syncobj timelines, since they have some of the same issues of having to chain fences to not break the world. -Daniel
On Wed, Feb 26, 2020 at 4:05 AM Daniel Vetter daniel@ffwll.ch wrote:
On Wed, Feb 26, 2020 at 10:16:05AM +0100, Christian König wrote:
Hi Jason,
On 26.02.20 at 00:58, Jason Ekstrand wrote:
[SNIP]
For the exporting part I think it is an absolutely great idea because it simplifies compatibility with explicit sync quite a bit.
Unfortunately, it only helps half of explicit sync and not the half that's hard to deal with from Vulkan. :-/
But for the importing part it is a clear NAK at the moment. See we can't allow userspace to mess with DMA-buf fences in that way because it rips open a security hole you can push an elephant through.
Oh, sure, I'm 100% sure I did that part wrong. Why else would I send the patch but to have someone who actually knows what they're doing tell me how to do it correctly? :-P
Just imagine that you access some DMA-buf with a shader and that operation is presented as a fence on the DMA-bufs reservation object. And now you can go ahead and replace that fence and free up the memory.
Tricking the Linux kernel into allocating page tables in that freed memory is trivial and that's basically it you can overwrite page tables with your shader and gain access to all of system memory :)
What we could do is to always make sure that the added fences will complete later than the already existing ones, but that is also rather tricky to get right. I wouldn't do that if we don't have a rather big use case for this.
Right. I thought about that but I'm still learning how dma_resv works. It'd be easy enough to make a fence array that contains both the old fence and the new fence and replace the old fence with that. What I don't know is the proper way to replace the exclusive fence safely. Some sort of atomic_cmpxchg loop, perhaps? I presume there's some way of doing it properly because DRM drivers are doing it all the time.
I think the main use-case for adding a fence is adding a write fence for vk winsys buffers, which run without any sync at all. So essentially what we'd do is promote one of the read fences which are already attached to be the write fence.
Correct. We're effectively doing an import in ANV today but we're doing it with a dummy execbuf which claims to write the BO and has a batch that's just MI_BATCH_BUFFER_END.
But yeah making sure we don't break any of the dma_resv guarantees about how these fences works is going to be somewhat tricky. Probably can reuse a big chunk of the fence container work we've done for syncobj timelines, since they have some of the same issues of having to chain fences to not break the world.
Happy to not break the world. I just don't know how yet. :-)
--Jason
On Wed, Feb 26, 2020 at 4:29 PM Jason Ekstrand jason@jlekstrand.net wrote:
On Wed, Feb 26, 2020 at 4:05 AM Daniel Vetter daniel@ffwll.ch wrote:
On Wed, Feb 26, 2020 at 10:16:05AM +0100, Christian König wrote:
Hi Jason,
On 26.02.20 at 00:58, Jason Ekstrand wrote:
[SNIP]
For the exporting part I think it is an absolutely great idea because it simplifies compatibility with explicit sync quite a bit.
Unfortunately, it only helps half of explicit sync and not the half that's hard to deal with from Vulkan. :-/
But for the importing part it is a clear NAK at the moment. See we can't allow userspace to mess with DMA-buf fences in that way because it rips open a security hole you can push an elephant through.
Oh, sure, I'm 100% sure I did that part wrong. Why else would I send the patch but to have someone who actually knows what they're doing tell me how to do it correctly? :-P
Just imagine that you access some DMA-buf with a shader and that operation is presented as a fence on the DMA-bufs reservation object. And now you can go ahead and replace that fence and free up the memory.
Tricking the Linux kernel into allocating page tables in that freed memory is trivial and that's basically it you can overwrite page tables with your shader and gain access to all of system memory :)
What we could do is to always make sure that the added fences will complete later than the already existing ones, but that is also rather tricky to get right. I wouldn't do that if we don't have a rather big use case for this.
Right. I thought about that but I'm still learning how dma_resv works. It'd be easy enough to make a fence array that contains both the old fence and the new fence and replace the old fence with that. What I don't know is the proper way to replace the exclusive fence safely. Some sort of atomic_cpxchg loop, perhaps? I presume there's some way of doing it properly because DRM drivers are doing it all the time.
I think for an exclusive fence you may need to create a fence array that includes the existing exclusive and shared fences in the dma_resv combined with the added fence.
However, I'm not sure what the best way is to do garbage collection on that so that we don't get an impossibly long list of fence arrays. (Note the dma_resv has a lock that needs to be taken before adding an exclusive fence, might be useful). Some code that does a thing like this is __dma_resv_make_exclusive in drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
The other piece of the puzzle is that on the submit path this would need something to ignore implicit fences. And there semantically the question comes up whether it is safe for a driver to ignore exclusive fences from another driver. (and then we have amdgpu which has its own rules on exclusiveness of its shared fences based on the context. e.g. the current option to ignore implicit fences for a buffer still syncs on exclusive fences on the buffer).
On 26.02.20 at 17:46, Bas Nieuwenhuizen wrote:
On Wed, Feb 26, 2020 at 4:29 PM Jason Ekstrand jason@jlekstrand.net wrote:
On Wed, Feb 26, 2020 at 4:05 AM Daniel Vetter daniel@ffwll.ch wrote:
On Wed, Feb 26, 2020 at 10:16:05AM +0100, Christian König wrote: [SNIP]
Just imagine that you access some DMA-buf with a shader and that operation is presented as a fence on the DMA-bufs reservation object. And now you can go ahead and replace that fence and free up the memory.
Tricking the Linux kernel into allocating page tables in that freed memory is trivial and that's basically it you can overwrite page tables with your shader and gain access to all of system memory :)
What we could do is to always make sure that the added fences will complete later than the already existing ones, but that is also rather tricky to get right. I wouldn't do that if we don't have a rather big use case for this.
Right. I thought about that but I'm still learning how dma_resv works. It'd be easy enough to make a fence array that contains both the old fence and the new fence and replace the old fence with that. What I don't know is the proper way to replace the exclusive fence safely. Some sort of atomic_cpxchg loop, perhaps? I presume there's some way of doing it properly because DRM drivers are doing it all the time.
First of all you need to grab the lock of the dma_resv object or you can't replace either the exclusive or the shared fences.
This way you don't need to do an atomic_cmpxchg or anything else and still guarantee correct ordering.
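I.e. the pattern is simply something like this (an illustrative sketch only; new_fence stands for whatever combined replacement fence you built):

        dma_resv_lock(dmabuf->resv, NULL);
        /* inspect the current fences and build the replacement under the lock */
        dma_resv_add_excl_fence(dmabuf->resv, new_fence);
        dma_resv_unlock(dmabuf->resv);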
I think for an exclusive fence you may need to create a fence array that includes the existing exclusive and shared fences in the dma_resv combined with the added fence.
Yes, that at least gives us the correct synchronization.
However, I'm not sure what the best way is to do garbage collection on that so that we don't get an impossibly list of fence arrays.
Exactly yes. That's also the reason why the dma_fence_chain container I came up with for the sync timeline stuff has such a rather sophisticated garbage collection.
When some of the included fences signal you need to free up the array/chain and make sure that the memory for the container can be reused.
(Note the dma_resv has a lock that needs to be taken before adding an exclusive fence, might be useful). Some code that does a thing like this is __dma_resv_make_exclusive in drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
Wanted to move that into dma_resv.c for quite a while since there are quite a few other cases where we need this.
Regards, Christian.
On Thu, Feb 27, 2020 at 2:28 AM Christian König christian.koenig@amd.com wrote:
On 26.02.20 at 17:46, Bas Nieuwenhuizen wrote:
On Wed, Feb 26, 2020 at 4:29 PM Jason Ekstrand jason@jlekstrand.net wrote:
On Wed, Feb 26, 2020 at 4:05 AM Daniel Vetter daniel@ffwll.ch wrote:
On Wed, Feb 26, 2020 at 10:16:05AM +0100, Christian König wrote: [SNIP]
Just imagine that you access some DMA-buf with a shader and that operation is presented as a fence on the DMA-bufs reservation object. And now you can go ahead and replace that fence and free up the memory.
Tricking the Linux kernel into allocating page tables in that freed memory is trivial and that's basically it you can overwrite page tables with your shader and gain access to all of system memory :)
What we could do is to always make sure that the added fences will complete later than the already existing ones, but that is also rather tricky to get right. I wouldn't do that if we don't have a rather big use case for this.
Right. I thought about that but I'm still learning how dma_resv works. It'd be easy enough to make a fence array that contains both the old fence and the new fence and replace the old fence with that. What I don't know is the proper way to replace the exclusive fence safely. Some sort of atomic_cpxchg loop, perhaps? I presume there's some way of doing it properly because DRM drivers are doing it all the time.
First of all you need to grab the lock of the dma_resv object or you can't replace the exclusive nor the shared ones.
This way you don't need to do a atomic_cmpxchg or anything else and still guarantee correct ordering.
Fixed in v3.
I think for an exclusive fence you may need to create a fence array that includes the existing exclusive and shared fences in the dma_resv combined with the added fence.
Yes, that at least gives us the correct synchronization.
Fixed in v2
However, I'm not sure what the best way is to do garbage collection on that so that we don't get an impossibly list of fence arrays.
Exactly yes. That's also the reason why the dma_fence_chain container I came up with for the sync timeline stuff has such a rather sophisticated garbage collection.
When some of the included fences signal you need to free up the array/chain and make sure that the memory for the container can be reused.
Currently (as of v2), I'm using dma_fence_array and being careful to not bother constructing one if there's only one fence in play. Is this insufficient? If so, maybe we should consider improving dma_fence_array.
(Note the dma_resv has a lock that needs to be taken before adding an exclusive fence, might be useful). Some code that does a thing like this is __dma_resv_make_exclusive in drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
Wanted to move that into dma_resv.c for quite a while since there are quite a few other cases where we need this.
I've roughly done that. The primary difference is that my version takes an optional additional fence to add to the array. This makes it a bit more complicated but I think I got it mostly right.
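To make that concrete, the shape of the helper I have in mind is roughly the following (an untested sketch for discussion only, not the actual patch; it needs linux/dma-resv.h, linux/dma-fence-array.h and linux/slab.h, and the names are made up):

/* Sketch only: collapse all of resv's fences plus one optional extra
 * fence into a single fence.  The caller would then install the result
 * as the new exclusive fence while holding dma_resv_lock(). */
static struct dma_fence *collapse_fences(struct dma_resv *resv,
                                         struct dma_fence *extra)
{
        struct dma_fence **shared = NULL, **all;
        struct dma_fence *excl = NULL, *single;
        struct dma_fence_array *array;
        unsigned i, count = 0, n = 0;

        if (dma_resv_get_fences_rcu(resv, &excl, &count, &shared))
                return NULL;

        /* Room for the shared fences, the exclusive one and the extra one. */
        all = kmalloc_array(count + 2, sizeof(*all), GFP_KERNEL);
        if (!all)
                goto err_put;

        for (i = 0; i < count; i++)
                all[n++] = shared[i];
        kfree(shared);

        if (excl)
                all[n++] = excl;
        if (extra)
                all[n++] = dma_fence_get(extra);

        if (n == 0) {
                kfree(all);
                return dma_fence_get_stub();
        } else if (n == 1) {
                single = all[0];
                kfree(all);
                return single;
        }

        /* The array takes ownership of `all` and of our fence references. */
        array = dma_fence_array_create(n, all, dma_fence_context_alloc(1),
                                       1, false);
        if (!array) {
                for (i = 0; i < n; i++)
                        dma_fence_put(all[i]);
                kfree(all);
                return NULL;
        }

        return &array->base;

err_put:
        for (i = 0; i < count; i++)
                dma_fence_put(shared[i]);
        kfree(shared);
        dma_fence_put(excl);
        return NULL;
}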
I've also written userspace code which exercises this and it seems to work. Hopefully, that will give a better idea of what I'm trying to accomplish.
https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037
--Jason
On 03.03.20 at 20:10, Jason Ekstrand wrote:
On Thu, Feb 27, 2020 at 2:28 AM Christian König christian.koenig@amd.com wrote:
[SNIP]
However, I'm not sure what the best way is to do garbage collection on that so that we don't get an impossibly list of fence arrays.
Exactly yes. That's also the reason why the dma_fence_chain container I came up with for the sync timeline stuff has such a rather sophisticated garbage collection.
When some of the included fences signal you need to free up the array/chain and make sure that the memory for the container can be reused.
Currently (as of v2), I'm using dma_fence_array and being careful to not bother constructing one if there's only one fence in play. Is this insufficient? If so, maybe we should consider improving dma_fence_array.
That still won't work correctly in all cases. See the problem is not only optimization, but also avoiding situations where userspace can abuse the interface to do nasty things.
For example if userspace just calls that function in a loop you can create a long chain of dma_fence_array objects.
If that chain is then suddenly released the recursive dropping of references can overwrite the kernel stack.
For reference see what dance is necessary in the dma_fence_chain_release function to avoid that:
        /* Manually unlink the chain as much as possible to avoid recursion
         * and potential stack overflow.
         */
        while ((prev = rcu_dereference_protected(chain->prev, true))) {
....
It took me quite a while to figure out how to do this without causing issues. But I don't see how this would be possible for dma_fence_array.
As far as I can see the only real option to implement this would be to change the dma_resv object container so that you can add fences without overriding existing ones.
For shared fences that can be done relatively easily, but I absolutely don't see how to do this for exclusive ones without a larger rework.
(Note the dma_resv has a lock that needs to be taken before adding an exclusive fence, might be useful). Some code that does a thing like this is __dma_resv_make_exclusive in drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
Wanted to move that into dma_resv.c for quite a while since there are quite a few other cases where we need this.
I've roughly done that. The primary difference is that my version takes an optional additional fence to add to the array. This makes it a bit more complicated but I think I got it mostly right.
I've also written userspace code which exercises this and it seems to work. Hopefully, that will give a better idea of what I'm trying to accomplish.
Yes, that is indeed a really nice to have feature.
Regards, Christian.
On Wed, Mar 4, 2020 at 2:34 AM Christian König christian.koenig@amd.com wrote:
On 03.03.20 at 20:10, Jason Ekstrand wrote:
On Thu, Feb 27, 2020 at 2:28 AM Christian König christian.koenig@amd.com wrote:
[SNIP]
However, I'm not sure what the best way is to do garbage collection on that so that we don't get an impossibly list of fence arrays.
Exactly yes. That's also the reason why the dma_fence_chain container I came up with for the sync timeline stuff has such a rather sophisticated garbage collection.
When some of the included fences signal you need to free up the array/chain and make sure that the memory for the container can be reused.
Currently (as of v2), I'm using dma_fence_array and being careful to not bother constructing one if there's only one fence in play. Is this insufficient? If so, maybe we should consider improving dma_fence_array.
That still won't work correctly in all cases. See the problem is not only optimization, but also avoiding situations where userspace can abuse the interface to do nasty things.
For example if userspace just calls that function in a loop you can create a long chain of dma_fence_array objects.
If that chain is then suddenly released the recursive dropping of references can overwrite the kernel stack.
For reference see what dance is necessary in the dma_fence_chain_release function to avoid that:
        /* Manually unlink the chain as much as possible to avoid recursion
         * and potential stack overflow.
         */
        while ((prev = rcu_dereference_protected(chain->prev, true))) {
....
It took me quite a while to figure out how to do this without causing issues. But I don't see how this would be possible for dma_fence_array.
Ah, I see the issue now! It hadn't even occurred to me that userspace could use this to build up an infinite recursion chain. That's nasty! I'll give this some more thought and see if I can come up with something clever.
Here's one thought: We could make dma_fence_array automatically collapse any arrays it references and instead directly reference their fences. This way, no matter how much the client chains things, they will never get more than one dma_fence_array. Of course, the difficulty here (answering my own question) comes if they ping-pong back-and-forth between something which constructs a dma_fence_array and something which constructs a dma_fence_chain to get array-of-chain-of-array-of-chain-of-... More thought needed.
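Something like this is what I have in mind for the array side of it (a sketch only, not real code; the caller would have to size `out` for the worst case):

static unsigned flatten_fences(struct dma_fence **in, unsigned num_in,
                               struct dma_fence **out)
{
        unsigned i, j, n = 0;

        for (i = 0; i < num_in; i++) {
                struct dma_fence_array *nested = to_dma_fence_array(in[i]);

                if (nested) {
                        /* Splice in the nested array's fences instead of
                         * referencing the array itself so arrays never nest. */
                        for (j = 0; j < nested->num_fences; j++)
                                out[n++] = dma_fence_get(nested->fences[j]);
                } else {
                        out[n++] = dma_fence_get(in[i]);
                }
        }

        return n;
}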
As far as I can see the only real option to implement this would be to change the dma_resv object container so that you can add fences without overriding existing ones.
For shared fences that can be done relative easily, but I absolutely don't see how to do this for exclusive ones without a larger rework.
Fair enough. Thanks for taking the time to explain the issue. I'll give this some more thought.
--Jason
On Wed, Mar 4, 2020 at 10:27 AM Jason Ekstrand jason@jlekstrand.net wrote:
[SNIP]
Ah, I see the issue now! It hadn't even occurred to me that userspace could use this to build up an infinite recursion chain. That's nasty! I'll give this some more thought and see if can come up with something clever.
Here's one thought: We could make dma_fence_array automatically collapse any arrays it references and instead directly reference their fences. This way, no matter how much the client chains things, they will never get more than one dma_fence_array. Of course, the difficulty here (answering my own question) comes if they ping-pong back-and-forth between something which constructs a dma_fence_array and something which constructs a dma_fence_chain to get array-of-chain-of-array-of-chain-of-... More thought needed.
Answering my own questions again... I think the array-of-chain-of-array case is also solvable.
For array-of-chain, we can simply add all unsignaled dma_fences in the chain to the array. The array won't signal until all of them have which is exactly the same behavior as if we'd added the chain itself.
For chain-of-array, we can add all unsignaled dma_fences in the array to the same point in the chain. There may be some fiddling with the chain numbering required here but I think we can get it so the chain won't signal until everything in the array has signaled and we get the same behavior as if we'd added the dma_fence_array to the chain.
In both cases, we end up with either a single array or a single chain, and destruction doesn't require recursion. Thoughts?
--Jason
On 04.03.20 at 17:41, Jason Ekstrand wrote:
On Wed, Mar 4, 2020 at 10:27 AM Jason Ekstrand jason@jlekstrand.net wrote:
On Wed, Mar 4, 2020 at 2:34 AM Christian König christian.koenig@amd.com wrote:
On 03.03.20 at 20:10, Jason Ekstrand wrote:
On Thu, Feb 27, 2020 at 2:28 AM Christian König christian.koenig@amd.com wrote: [SNIP]
For reference see what dance is necessary in the dma_fence_chain_release function to avoid that:
        /* Manually unlink the chain as much as possible to avoid recursion
         * and potential stack overflow.
         */
        while ((prev = rcu_dereference_protected(chain->prev, true))) {
....
It took me quite a while to figure out how to do this without causing issues. But I don't see how this would be possible for dma_fence_array.
Ah, I see the issue now! It hadn't even occurred to me that userspace could use this to build up an infinite recursion chain. That's nasty!
Yeah, when I first stumbled over it, it was like: why the heck is my code crashing in an interrupt handler?
Realizing that this is stack corruption because of the long chain we constructed was quite an enlightenment.
And then it took me even longer to fix it :)
I'll give this some more thought and see if can come up with something clever.
Here's one thought: We could make dma_fence_array automatically collapse any arrays it references and instead directly reference their fences. This way, no matter how much the client chains things, they will never get more than one dma_fence_array. Of course, the difficulty here (answering my own question) comes if they ping-pong back-and-forth between something which constructs a dma_fence_array and something which constructs a dma_fence_chain to get array-of-chain-of-array-of-chain-of-... More thought needed.
Condensing the fences into a larger array can certainly work, yes.
Answering my own questions again... I think the array-of-chain-of-array case is also solvable.
For array-of-chain, we can simply add all unsignaled dma_fences in the chain to the array. The array won't signal until all of them have which is exactly the same behavior as if we'd added the chain itself.
Yeah, that should work. Probably best to implement something like a cursor to walk all fences in the data structure.
For chain-of-array, we can add all unsignaled dma_fences in the array to the same point in the chain. There may be some fiddling with the chain numbering required here but I think we can get it so the chain won't signal until everything in the array has signaled and we get the same behavior as if we'd added the dma_fence_array to the chain.
Well as far as I can see this won't work because it would break the semantics of the timeline sync.
But I think I know a different way which should work: A dma_fence_chain can still contain a dma_fence_array, only the other way around is forbidden. Then we create the cursor functionality in such a way that it allows us to deep dive into the data structure and return all containing fences one by one.
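Roughly like this, just to show the shape of the deep dive (completely untested; the leaf fences are only borrowed here, a real cursor would need to handle references and signaling properly):

static void for_each_leaf_fence(struct dma_fence *fence,
                                void (*cb)(struct dma_fence *leaf, void *data),
                                void *data)
{
        struct dma_fence *iter;

        /* Walk the chain links; this also works if fence is not a chain. */
        dma_fence_chain_for_each(iter, fence) {
                struct dma_fence_chain *chain = to_dma_fence_chain(iter);
                struct dma_fence *f = chain ? chain->fence : iter;
                struct dma_fence_array *array = to_dma_fence_array(f);
                unsigned i;

                if (array) {
                        /* A chain link may contain an array, but not the
                         * other way around, so one level is enough. */
                        for (i = 0; i < array->num_fences; i++)
                                cb(array->fences[i], data);
                } else {
                        cb(f, data);
                }
        }
}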
I can prototype that if you want, shouldn't be more than a few hours of hacking anyway.
Regards, Christian.
On Thu, Mar 5, 2020 at 7:06 AM Christian König christian.koenig@amd.com wrote:
On 04.03.20 at 17:41, Jason Ekstrand wrote:
On Wed, Mar 4, 2020 at 10:27 AM Jason Ekstrand jason@jlekstrand.net wrote:
On Wed, Mar 4, 2020 at 2:34 AM Christian König christian.koenig@amd.com wrote:
On 03.03.20 at 20:10, Jason Ekstrand wrote:
On Thu, Feb 27, 2020 at 2:28 AM Christian König christian.koenig@amd.com wrote: [SNIP]
For reference see what dance is necessary in the dma_fence_chain_release function to avoid that:
        /* Manually unlink the chain as much as possible to avoid recursion
         * and potential stack overflow.
         */
        while ((prev = rcu_dereference_protected(chain->prev, true))) {
....
It took me quite a while to figure out how to do this without causing issues. But I don't see how this would be possible for dma_fence_array.
Ah, I see the issue now! It hadn't even occurred to me that userspace could use this to build up an infinite recursion chain. That's nasty!
Yeah, when I first stumbled over it it was like why the heck is my code crashing in an interrupt handler?
Realizing that this is stack corruption because of the long chain we constructed was quite an enlightenment.
And then it took me even longer to fix it :)
Fun....
I'll give this some more thought and see if can come up with something clever.
Here's one thought: We could make dma_fence_array automatically collapse any arrays it references and instead directly reference their fences. This way, no matter how much the client chains things, they will never get more than one dma_fence_array. Of course, the difficulty here (answering my own question) comes if they ping-pong back-and-forth between something which constructs a dma_fence_array and something which constructs a dma_fence_chain to get array-of-chain-of-array-of-chain-of-... More thought needed.
Condensing the fences into a larger array can certainly work, yes.
Answering my own questions again... I think the array-of-chain-of-array case is also solvable.
For array-of-chain, we can simply add all unsignaled dma_fences in the chain to the array. The array won't signal until all of them have which is exactly the same behavior as if we'd added the chain itself.
Yeah, that should work. Probably best to implement something like a cursor to walk all fences in the data structure.
For chain-of-array, we can add all unsignaled dma_fences in the array to the same point in the chain. There may be some fiddling with the chain numbering required here but I think we can get it so the chain won't signal until everything in the array has signaled and we get the same behavior as if we'd added the dma_fence_array to the chain.
Well as far as I can see this won't work because it would break the semantics of the timeline sync.
I'm not 100% convinced it has to. We already have support for the seqno regressing and we ensure that we still wait for all the fences. I thought maybe we could use that but I haven't spent enough time looking at the details to be sure. I may be missing something.
But I think I know a different way which should work: A dma_fence_chain can still contain a dma_fence_array, only the other way around is forbidden. Then we create the cursor functionality in such a way that it allows us to deep dive into the data structure and return all containing fences one by one.
Agreed. As long as one container is able to consume the other, it's fine.
I can prototype that if you want, shouldn't be more than a few hours of hacking anyway.
If you'd like to, go for it. I'd be happy to give it a go as well but if you already know what you want, it may be easier for you to just write the patch for the cursor.
Two more questions:
1. Do you want this collapsing to happen every time we create a dma_fence_array or should it be a special entrypoint? Collapsing all the time likely means doing extra array calculations instead of the dma_fence_array taking ownership of the array that's passed in. My gut says that cost is ok; but my gut doesn't spend much time in kernel space.
2. When we do the collapsing, should we call dma_fence_is_signaled() to avoid adding signaled fences to the array? It seems like avoiding adding references to fences that are already signaled would let the kernel clean them up faster and reduce the likelihood that a fence will hang around forever because it keeps getting added to arrays with other unsignaled fences.
--Jason
On 05.03.20 at 16:54, Jason Ekstrand wrote:
On Thu, Mar 5, 2020 at 7:06 AM Christian König christian.koenig@amd.com wrote:
[SNIP] Well as far as I can see this won't work because it would break the semantics of the timeline sync.
I'm not 100% convinced it has to. We already have support for the seqno regressing and we ensure that we still wait for all the fences. I thought maybe we could use that but I haven't spent enough time looking at the details to be sure. I may be missing something.
That won't work. The seqno regression works by punishing userspace for doing something stupid and undefined.
But we can't do that under normal circumstances.
I can prototype that if you want, shouldn't be more than a few hours of hacking anyway.
If you'd like to, go for it. I'd be happy to give it a go as well but if you already know what you want, it may be easier for you to just write the patch for the cursor.
Sent you two patches for that a few minutes ago. But keep in mind that those are completely untested.
Two more questions:
1. Do you want this collapsing to happen every time we create a dma_fence_array or should it be a special entrypoint? Collapsing all the time likely means doing extra array calculations instead of the dma_fence_array taking ownership of the array that's passed in. My gut says that cost is ok; but my gut doesn't spend much time in kernel space.
In my prototype implementation that is a dma_resv function you call and get either a single fence or a dma_fence_array with the collapsed fences in return.
But I wouldn't add that to the general dma_fence_array_init function since this is still a rather special case. Well, see the patches; they should be pretty self-explanatory.
2. When we do the collapsing, should we call dma_fence_is_signaled() to avoid adding signaled fences to the array? It seems like avoiding adding references to fences that are already signaled would let the kernel clean them up faster and reduce the likelihood that a fence will hang around forever because it keeps getting added to arrays with other unsignaled fences.
I think so. Can't think of a good reason why we would want to add already signaled fences to the array.
Christian.
--Jason
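For illustration only, not part of the series: a minimal sketch of the filtering discussed above, i.e. dropping already-signaled fences while collapsing a list into a dma_fence_array. The helper name collect_unsignaled() is made up, and the caller is assumed to own references to the input fences.

#include <linux/dma-fence.h>
#include <linux/dma-fence-array.h>
#include <linux/slab.h>

/* Hypothetical helper: collapse 'count' fences into a single fence,
 * skipping any that have already signaled.  Returns a new reference. */
static struct dma_fence *collect_unsignaled(struct dma_fence **fences,
                                            unsigned int count)
{
        struct dma_fence_array *array;
        struct dma_fence **kept;
        unsigned int i, num_kept = 0;

        kept = kmalloc_array(count, sizeof(*kept), GFP_KERNEL);
        if (!kept)
                return NULL;

        for (i = 0; i < count; i++) {
                if (dma_fence_is_signaled(fences[i]))
                        continue;               /* already done, drop it */
                kept[num_kept++] = dma_fence_get(fences[i]);
        }

        if (num_kept == 0) {
                kfree(kept);
                return dma_fence_get_stub();    /* nothing left to wait on */
        }
        if (num_kept == 1) {
                struct dma_fence *fence = kept[0];
                kfree(kept);
                return fence;
        }

        /* dma_fence_array_create() takes ownership of 'kept' and our refs. */
        array = dma_fence_array_create(num_kept, kept,
                                       dma_fence_context_alloc(1), 1, false);
        if (!array) {
                while (num_kept--)
                        dma_fence_put(kept[num_kept]);
                kfree(kept);
                return NULL;
        }
        return &array->base;
}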
On Mon, Mar 9, 2020 at 11:21 AM Christian König christian.koenig@amd.com wrote:
On 05.03.20 at 16:54, Jason Ekstrand wrote:
On Thu, Mar 5, 2020 at 7:06 AM Christian König christian.koenig@amd.com wrote:
[SNIP] Well as far as I can see this won't work because it would break the semantics of the timeline sync.
I'm not 100% convinced it has to. We already have support for the seqno regressing and we ensure that we still wait for all the fences. I thought maybe we could use that but I haven't spent enough time looking at the details to be sure. I may be missing something.
That won't work. The seqno regression works by punishing userspace for doing something stupid and undefined.
But we can't do that under normal circumstances.
I can prototype that if you want, shouldn't be more than a few hours of hacking anyway.
If you'd like to, go for it. I'd be happy to give it a go as well but if you already know what you want, it may be easier for you to just write the patch for the cursor.
Sent you two patches for that a few minutes ago. But keep in mind that those are completely untested.
No worries. They were full of bugs but I think I've got them sorted out now. The v2's I'm about to send seem to work. I'm going to leave a Vulkan demo running all night long just to make sure I'm not leaking memory like mad.
--Jason
Two more questions:
- Do you want this collapsing to happen every time we create a dma_fence_array or should it be a special entrypoint? Collapsing all the time likely means doing extra array calculations instead of the dma_fence_array taking ownership of the array that's passed in. My gut says that cost is ok; but my gut doesn't spend much time in kernel space.
In my prototype implementation that is a dma_resv function you call and get either a single fence or a dma_fence_array with the collapsed fences in return.
But I wouldn't add that to the general dma_fence_array_init function since this is still a rather special case. Well, see the patches; they should be pretty self-explanatory.
- When we do the collapsing, should we call dma_fence_is_signaled() to avoid adding signaled fences to the array? It seems like avoiding adding references to fences that are already signaled would let the kernel clean them up faster and reduce the likelihood that a fence will hang around forever because it keeps getting added to arrays with other unsignaled fences.
I think so. Can't think of a good reason why we would want to add already signaled fences to the array.
Christian.
--Jason
Explicit synchronization is the future. At least, that seems to be what most userspace APIs are agreeing on at this point. However, most of our Linux APIs (both userspace and kernel UAPI) are currently built around implicit synchronization with dma-buf. While work is ongoing to change many of the userspace APIs and protocols to an explicit synchronization model, switching over piecemeal is difficult due to the number of potential components involved. On the kernel side, many drivers use dma-buf including GPU (3D/compute), display, v4l, and others. In userspace, we have X11, several Wayland compositors, 3D drivers, compute drivers (OpenCL etc.), media encode/decode, and the list goes on.
This patch provides a path forward by allowing userspace to manually manage the fences attached to a dma-buf. Alternatively, one can think of this as making dma-buf's implicit synchronization simply a carrier for an explicit fence. This is accomplished by adding two IOCTLs to dma-buf for importing and exporting a sync file to/from the dma-buf. This way a userspace component which is uses explicit synchronization, such as a Vulkan driver, can manually set the write fence on a buffer before handing it off to an implicitly synchronized component such as a Wayland compositor or video encoder. In this way, each of the different components can be upgraded to an explicit synchronization model one at a time as long as the userspace pieces connecting them are aware of it and import/export fences at the right times.
There is a potential race condition with this API if userspace is not careful. A typical use case for implicit synchronization is to wait for the dma-buf to be ready, use it, and then signal it for some other component. Because a sync_file cannot be created until it is guaranteed to complete in finite time, userspace can only signal the dma-buf after it has already submitted the work which uses it to the kernel and has received a sync_file back. There is no way to atomically submit a wait-use-signal operation. This is not, however, really a problem with this API so much as it is a problem with explicit synchronization itself. The way this is typically handled is to have very explicit ownership transfer points in the API or protocol which ensure that only one component is using it at any given time. Both X11 (via the PRESENT extension) and Wayland provide such ownership transfer points via explicit present and idle messages.
The decision was intentionally made in this patch to make the import and export operations IOCTLs on the dma-buf itself rather than as a DRM IOCTL. This makes it the import/export operation universal across all components which use dma-buf including GPU, display, v4l, and others. It also means that a userspace component can do the import/export without access to the DRM fd which may be tricky to get in cases where the client communicates with DRM via a userspace API such as OpenGL or Vulkan. At a future date we may choose to add direct import/export APIs to components such as drm_syncobj to avoid allocating a file descriptor and going through two ioctls. However, that seems to be something of a micro-optimization as import/export operations are likely to happen at a rate of a few per frame of rendered or decoded video.
v2 (Jason Ekstrand): - Use a wrapper dma_fence_array of all fences including the new one when importing an exclusive fence.
Signed-off-by: Jason Ekstrand jason@jlekstrand.net --- drivers/dma-buf/dma-buf.c | 162 +++++++++++++++++++++++++++++++++++ include/uapi/linux/dma-buf.h | 13 ++- 2 files changed, 173 insertions(+), 2 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index d4097856c86b..162f90e8896b 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -20,6 +20,7 @@ #include <linux/debugfs.h> #include <linux/module.h> #include <linux/seq_file.h> +#include <linux/sync_file.h> #include <linux/poll.h> #include <linux/dma-resv.h> #include <linux/mm.h> @@ -348,6 +349,161 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf) return ret; }
+/* This function takes a ref to add_fence on success. The caller still + * owns its ref and has to dma_fence_put it. + */ +static struct dma_fence *dma_buf_get_unified_fence(struct dma_buf *dmabuf, + struct dma_fence *add_fence) +{ + struct dma_fence **fences = NULL; + struct dma_fence_array *array; + unsigned i, num_fences = 0; + int ret; + + ret = dma_resv_get_fences_rcu(dmabuf->resv, NULL, + &num_fences, &fences); + if (ret) + return NULL; /* ret can only be 0 or -ENOMEM */ + + if (num_fences == 0) { + if (add_fence) { + return add_fence; + } else { + return dma_fence_get_stub(); + } + } else if (num_fences == 1 && (!add_fence || add_fence == fences[0])) { + struct dma_fence *fence = fences[0]; + kfree(fences); + return fence; + } + + if (add_fence) { + struct dma_fence **nfences; + size_t sz; + + /* Get a ref to add_fence so that we have a ref to every + * fence we are going to put in the array. + */ + dma_fence_get(add_fence); + + sz = (num_fences + 1) * sizeof(*fences); + nfences = krealloc(fences, sz, GFP_NOWAIT | __GFP_NOWARN); + if (!nfences) + goto err_put_fences; + + nfences[num_fences++] = add_fence; + } + + array = dma_fence_array_create(num_fences, fences, + dma_fence_context_alloc(1), + 1, false); + if (!array) + goto err_put_fences; + + /* The fence array now owns fences_arr and our references to each + * of the individual fences. We only own a reference to the one + * array fence. + */ + + return &array->base; + +err_put_fences: + for (i = 0; i < num_fences; i++) + dma_fence_put(fences[0]); + dma_fence_put(add_fence); + kfree(fences); + return NULL; +} + +static long dma_buf_wait_sync_file(struct dma_buf *dmabuf, + const void __user *user_data) +{ + struct dma_buf_sync_file arg; + struct dma_fence *fence, *unified_fence; + int ret; + + if (copy_from_user(&arg, user_data, sizeof(arg))) + return -EFAULT; + + if (arg.flags != 0 && arg.flags != DMA_BUF_SYNC_FILE_SYNC_WRITE) + return -EINVAL; + + fence = sync_file_get_fence(arg.fd); + if (!fence) + return -EINVAL; + + if (arg.flags & DMA_BUF_SYNC_FILE_SYNC_WRITE) { + dma_resv_lock(dmabuf->resv, NULL); + unified_fence = dma_buf_get_unified_fence(dmabuf, fence); + if (unified_fence) + dma_resv_add_excl_fence(dmabuf->resv, fence); + else + ret = -ENOMEM; + dma_resv_unlock(dmabuf->resv); + } else { + dma_resv_add_shared_fence(dmabuf->resv, fence); + } + + dma_fence_put(fence); + + return ret; +} + +static long dma_buf_signal_sync_file(struct dma_buf *dmabuf, + void __user *user_data) +{ + struct dma_buf_sync_file arg; + struct dma_fence *fence = NULL; + struct sync_file *sync_file; + int fd, ret; + + if (copy_from_user(&arg, user_data, sizeof(arg))) + return -EFAULT; + + if (arg.flags != 0 && arg.flags != DMA_BUF_SYNC_FILE_SYNC_WRITE) + return -EINVAL; + + fd = get_unused_fd_flags(O_CLOEXEC); + if (fd < 0) + return fd; + + if (arg.flags & DMA_BUF_SYNC_FILE_SYNC_WRITE) { + /* We need to include both the exclusive fence and all of + * the shared fences in our fence. 
+ */ + fence = dma_buf_get_unified_fence(dmabuf, NULL); + if (!fence) { + ret = -ENOMEM; + goto err_put_fd; + } + } else { + fence = dma_resv_get_excl_rcu(dmabuf->resv); + if (!fence) + fence = dma_fence_get_stub(); + } + + sync_file = sync_file_create(fence); + + dma_fence_put(fence); + + if (!sync_file) { + ret = -EINVAL; + goto err_put_fd; + } + + fd_install(fd, sync_file->file); + + arg.fd = fd; + if (copy_to_user(user_data, &arg, sizeof(arg))) + return -EFAULT; + + return 0; + +err_put_fd: + put_unused_fd(fd); + return ret; +} + static long dma_buf_ioctl(struct file *file, unsigned int cmd, unsigned long arg) { @@ -390,6 +546,12 @@ static long dma_buf_ioctl(struct file *file, case DMA_BUF_SET_NAME: return dma_buf_set_name(dmabuf, (const char __user *)arg);
+ case DMA_BUF_IOCTL_WAIT_SYNC_FILE: + return dma_buf_wait_sync_file(dmabuf, (const void __user *)arg); + + case DMA_BUF_IOCTL_SIGNAL_SYNC_FILE: + return dma_buf_signal_sync_file(dmabuf, (void __user *)arg); + default: return -ENOTTY; } diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h index dbc7092e04b5..825b9a913c89 100644 --- a/include/uapi/linux/dma-buf.h +++ b/include/uapi/linux/dma-buf.h @@ -37,8 +37,17 @@ struct dma_buf_sync {
#define DMA_BUF_NAME_LEN 32
+struct dma_buf_sync_file { + __u32 flags; + __s32 fd; +}; + +#define DMA_BUF_SYNC_FILE_SYNC_WRITE (1 << 0) + #define DMA_BUF_BASE 'b' -#define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync) -#define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *) +#define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync) +#define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *) +#define DMA_BUF_IOCTL_WAIT_SYNC_FILE _IOW(DMA_BUF_BASE, 2, struct dma_buf_sync) +#define DMA_BUF_IOCTL_SIGNAL_SYNC_FILE _IOW(DMA_BUF_BASE, 3, struct dma_buf_sync)
#endif
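To make the new UAPI concrete, here is a rough (untested) userspace sketch of the export side, assuming a kernel with this patch applied; dmabuf_fd stands in for a dma-buf file descriptor obtained elsewhere:

#include <sys/ioctl.h>
#include <linux/dma-buf.h>

/* Export the buffer's fences as a sync_file.  With the SYNC_WRITE flag the
 * kernel collapses the exclusive fence plus all shared fences into one. */
static int export_dmabuf_fence(int dmabuf_fd)
{
        struct dma_buf_sync_file arg = {
                .flags = DMA_BUF_SYNC_FILE_SYNC_WRITE,
                .fd = -1,
        };

        if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_SIGNAL_SYNC_FILE, &arg) < 0)
                return -1;

        return arg.fd; /* sync_file fd; close() when no longer needed */
}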
Explicit synchronization is the future. At least, that seems to be what most userspace APIs are agreeing on at this point. However, most of our Linux APIs (both userspace and kernel UAPI) are currently built around implicit synchronization with dma-buf. While work is ongoing to change many of the userspace APIs and protocols to an explicit synchronization model, switching over piecemeal is difficult due to the number of potential components involved. On the kernel side, many drivers use dma-buf including GPU (3D/compute), display, v4l, and others. In userspace, we have X11, several Wayland compositors, 3D drivers, compute drivers (OpenCL etc.), media encode/decode, and the list goes on.
This patch provides a path forward by allowing userspace to manually manage the fences attached to a dma-buf. Alternatively, one can think of this as making dma-buf's implicit synchronization simply a carrier for an explicit fence. This is accomplished by adding two IOCTLs to dma-buf for importing and exporting a sync file to/from the dma-buf. This way a userspace component which is uses explicit synchronization, such as a Vulkan driver, can manually set the write fence on a buffer before handing it off to an implicitly synchronized component such as a Wayland compositor or video encoder. In this way, each of the different components can be upgraded to an explicit synchronization model one at a time as long as the userspace pieces connecting them are aware of it and import/export fences at the right times.
There is a potential race condition with this API if userspace is not careful. A typical use case for implicit synchronization is to wait for the dma-buf to be ready, use it, and then signal it for some other component. Because a sync_file cannot be created until it is guaranteed to complete in finite time, userspace can only signal the dma-buf after it has already submitted the work which uses it to the kernel and has received a sync_file back. There is no way to atomically submit a wait-use-signal operation. This is not, however, really a problem with this API so much as it is a problem with explicit synchronization itself. The way this is typically handled is to have very explicit ownership transfer points in the API or protocol which ensure that only one component is using it at any given time. Both X11 (via the PRESENT extension) and Wayland provide such ownership transfer points via explicit present and idle messages.
The decision was intentionally made in this patch to make the import and export operations IOCTLs on the dma-buf itself rather than as a DRM IOCTL. This makes it the import/export operation universal across all components which use dma-buf including GPU, display, v4l, and others. It also means that a userspace component can do the import/export without access to the DRM fd which may be tricky to get in cases where the client communicates with DRM via a userspace API such as OpenGL or Vulkan. At a future date we may choose to add direct import/export APIs to components such as drm_syncobj to avoid allocating a file descriptor and going through two ioctls. However, that seems to be something of a micro-optimization as import/export operations are likely to happen at a rate of a few per frame of rendered or decoded video.
v2 (Jason Ekstrand): - Use a wrapper dma_fence_array of all fences including the new one when importing an exclusive fence.
v3 (Jason Ekstrand): - Lock around setting shared fences as well as exclusive - Mark SIGNAL_SYNC_FILE as a read-write ioctl. - Initialize ret to 0 in dma_buf_wait_sync_file
Signed-off-by: Jason Ekstrand jason@jlekstrand.net --- drivers/dma-buf/dma-buf.c | 164 +++++++++++++++++++++++++++++++++++ include/uapi/linux/dma-buf.h | 13 ++- 2 files changed, 175 insertions(+), 2 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index d4097856c86b..2c4608bae3c2 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -20,6 +20,7 @@ #include <linux/debugfs.h> #include <linux/module.h> #include <linux/seq_file.h> +#include <linux/sync_file.h> #include <linux/poll.h> #include <linux/dma-resv.h> #include <linux/mm.h> @@ -348,6 +349,163 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf) return ret; }
+/* This function takes a ref to add_fence on success. The caller still + * owns its ref and has to dma_fence_put it. + */ +static struct dma_fence *dma_buf_get_unified_fence(struct dma_buf *dmabuf, + struct dma_fence *add_fence) +{ + struct dma_fence **fences = NULL; + struct dma_fence_array *array; + unsigned i, num_fences = 0; + int ret; + + ret = dma_resv_get_fences_rcu(dmabuf->resv, NULL, + &num_fences, &fences); + if (ret) + return NULL; /* ret can only be 0 or -ENOMEM */ + + if (num_fences == 0) { + if (add_fence) { + return add_fence; + } else { + return dma_fence_get_stub(); + } + } else if (num_fences == 1 && (!add_fence || add_fence == fences[0])) { + struct dma_fence *fence = fences[0]; + kfree(fences); + return fence; + } + + if (add_fence) { + struct dma_fence **nfences; + size_t sz; + + /* Get a ref to add_fence so that we have a ref to every + * fence we are going to put in the array. + */ + dma_fence_get(add_fence); + + sz = (num_fences + 1) * sizeof(*fences); + nfences = krealloc(fences, sz, GFP_NOWAIT | __GFP_NOWARN); + if (!nfences) + goto err_put_fences; + + nfences[num_fences++] = add_fence; + } + + array = dma_fence_array_create(num_fences, fences, + dma_fence_context_alloc(1), + 1, false); + if (!array) + goto err_put_fences; + + /* The fence array now owns fences_arr and our references to each + * of the individual fences. We only own a reference to the one + * array fence. + */ + + return &array->base; + +err_put_fences: + for (i = 0; i < num_fences; i++) + dma_fence_put(fences[0]); + dma_fence_put(add_fence); + kfree(fences); + return NULL; +} + +static long dma_buf_wait_sync_file(struct dma_buf *dmabuf, + const void __user *user_data) +{ + struct dma_buf_sync_file arg; + struct dma_fence *fence, *unified_fence; + int ret = 0; + + if (copy_from_user(&arg, user_data, sizeof(arg))) + return -EFAULT; + + if (arg.flags != 0 && arg.flags != DMA_BUF_SYNC_FILE_SYNC_WRITE) + return -EINVAL; + + fence = sync_file_get_fence(arg.fd); + if (!fence) + return -EINVAL; + + dma_resv_lock(dmabuf->resv, NULL); + + if (arg.flags & DMA_BUF_SYNC_FILE_SYNC_WRITE) { + unified_fence = dma_buf_get_unified_fence(dmabuf, fence); + if (unified_fence) + dma_resv_add_excl_fence(dmabuf->resv, fence); + else + ret = -ENOMEM; + } else { + dma_resv_add_shared_fence(dmabuf->resv, fence); + } + + dma_resv_unlock(dmabuf->resv); + + dma_fence_put(fence); + + return ret; +} + +static long dma_buf_signal_sync_file(struct dma_buf *dmabuf, + void __user *user_data) +{ + struct dma_buf_sync_file arg; + struct dma_fence *fence = NULL; + struct sync_file *sync_file; + int fd, ret; + + if (copy_from_user(&arg, user_data, sizeof(arg))) + return -EFAULT; + + if (arg.flags != 0 && arg.flags != DMA_BUF_SYNC_FILE_SYNC_WRITE) + return -EINVAL; + + fd = get_unused_fd_flags(O_CLOEXEC); + if (fd < 0) + return fd; + + if (arg.flags & DMA_BUF_SYNC_FILE_SYNC_WRITE) { + /* We need to include both the exclusive fence and all of + * the shared fences in our fence. 
+ */ + fence = dma_buf_get_unified_fence(dmabuf, NULL); + if (!fence) { + ret = -ENOMEM; + goto err_put_fd; + } + } else { + fence = dma_resv_get_excl_rcu(dmabuf->resv); + if (!fence) + fence = dma_fence_get_stub(); + } + + sync_file = sync_file_create(fence); + + dma_fence_put(fence); + + if (!sync_file) { + ret = -EINVAL; + goto err_put_fd; + } + + fd_install(fd, sync_file->file); + + arg.fd = fd; + if (copy_to_user(user_data, &arg, sizeof(arg))) + return -EFAULT; + + return 0; + +err_put_fd: + put_unused_fd(fd); + return ret; +} + static long dma_buf_ioctl(struct file *file, unsigned int cmd, unsigned long arg) { @@ -390,6 +548,12 @@ static long dma_buf_ioctl(struct file *file, case DMA_BUF_SET_NAME: return dma_buf_set_name(dmabuf, (const char __user *)arg);
+ case DMA_BUF_IOCTL_WAIT_SYNC_FILE: + return dma_buf_wait_sync_file(dmabuf, (const void __user *)arg); + + case DMA_BUF_IOCTL_SIGNAL_SYNC_FILE: + return dma_buf_signal_sync_file(dmabuf, (void __user *)arg); + default: return -ENOTTY; } diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h index dbc7092e04b5..86e07acca90c 100644 --- a/include/uapi/linux/dma-buf.h +++ b/include/uapi/linux/dma-buf.h @@ -37,8 +37,17 @@ struct dma_buf_sync {
#define DMA_BUF_NAME_LEN 32
+struct dma_buf_sync_file { + __u32 flags; + __s32 fd; +}; + +#define DMA_BUF_SYNC_FILE_SYNC_WRITE (1 << 0) + #define DMA_BUF_BASE 'b' -#define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync) -#define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *) +#define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync) +#define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *) +#define DMA_BUF_IOCTL_WAIT_SYNC_FILE _IOW(DMA_BUF_BASE, 2, struct dma_buf_sync) +#define DMA_BUF_IOCTL_SIGNAL_SYNC_FILE _IOWR(DMA_BUF_BASE, 3, struct dma_buf_sync)
#endif
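And the matching import side, equally rough: once the explicitly synchronized client has submitted its work and received a sync_file back, it attaches that fence to the dma-buf as the write fence before handing the buffer to an implicitly synchronized consumer. submit_fd is a placeholder for the fd returned by whatever submission API produced the work.

#include <sys/ioctl.h>
#include <linux/dma-buf.h>

/* Attach a sync_file to the dma-buf as its write (exclusive) fence so that
 * implicitly synchronized consumers wait for our rendering to finish. */
static int import_dmabuf_fence(int dmabuf_fd, int submit_fd)
{
        struct dma_buf_sync_file arg = {
                .flags = DMA_BUF_SYNC_FILE_SYNC_WRITE,
                .fd = submit_fd,
        };

        return ioctl(dmabuf_fd, DMA_BUF_IOCTL_WAIT_SYNC_FILE, &arg);
}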
Vulkan WSI user of the new API:
https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037
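For context, a rough sketch of how a Vulkan WSI implementation might consume the exported sync_file as a wait semaphore, assuming VK_KHR_external_semaphore_fd is available; device, wait_sem and sync_file_fd are placeholders:

#include <vulkan/vulkan.h>

/* Import the sync_file exported from the dma-buf as a temporary payload of
 * an existing semaphore, so the next queue submission waits on it.  On
 * success, ownership of the fd transfers to the Vulkan implementation. */
static VkResult import_dmabuf_sync_fd(VkDevice device, VkSemaphore wait_sem,
                                       int sync_file_fd)
{
        PFN_vkImportSemaphoreFdKHR import_fd =
                (PFN_vkImportSemaphoreFdKHR)
                vkGetDeviceProcAddr(device, "vkImportSemaphoreFdKHR");
        const VkImportSemaphoreFdInfoKHR info = {
                .sType = VK_STRUCTURE_TYPE_IMPORT_SEMAPHORE_FD_INFO_KHR,
                .semaphore = wait_sem,
                .flags = VK_SEMAPHORE_IMPORT_TEMPORARY_BIT,
                .handleType = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_SYNC_FD_BIT,
                .fd = sync_file_fd,
        };

        return import_fd(device, &info);
}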
On Tue, Mar 3, 2020 at 1:03 PM Jason Ekstrand jason@jlekstrand.net wrote:
[SNIP]
From: Christian König ckoenig.leichtzumerken@gmail.com
Add a helper to iterate over all fences in a dma_fence_array object.
v2 (Jason Ekstrand) - Return NULL from dma_fence_array_first if head == NULL. This matches the iterator behavior of dma_fence_chain_for_each in that it iterates zero times if head == NULL. - Return NULL from dma_fence_array_next if index > array->num_fences.
Signed-off-by: Jason Ekstrand jason@jlekstrand.net --- drivers/dma-buf/dma-fence-array.c | 27 +++++++++++++++++++++++++++ include/linux/dma-fence-array.h | 17 +++++++++++++++++ 2 files changed, 44 insertions(+)
diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c index d3fbd950be94..2ac1afc697d0 100644 --- a/drivers/dma-buf/dma-fence-array.c +++ b/drivers/dma-buf/dma-fence-array.c @@ -201,3 +201,30 @@ bool dma_fence_match_context(struct dma_fence *fence, u64 context) return true; } EXPORT_SYMBOL(dma_fence_match_context); + +struct dma_fence *dma_fence_array_first(struct dma_fence *head) +{ + struct dma_fence_array *array; + + if (!head) + return NULL; + + array = to_dma_fence_array(head); + if (!array) + return head; + + return array->fences[0]; +} +EXPORT_SYMBOL(dma_fence_array_first); + +struct dma_fence *dma_fence_array_next(struct dma_fence *head, + unsigned int index) +{ + struct dma_fence_array *array = to_dma_fence_array(head); + + if (!array || index >= array->num_fences) + return NULL; + + return array->fences[index]; +} +EXPORT_SYMBOL(dma_fence_array_next); diff --git a/include/linux/dma-fence-array.h b/include/linux/dma-fence-array.h index 303dd712220f..588ac8089dd6 100644 --- a/include/linux/dma-fence-array.h +++ b/include/linux/dma-fence-array.h @@ -74,6 +74,19 @@ to_dma_fence_array(struct dma_fence *fence) return container_of(fence, struct dma_fence_array, base); }
+/** + * dma_fence_array_for_each - iterate over all fences in array + * @fence: current fence + * @index: index into the array + * @head: potential dma_fence_array object + * + * Test if @array is a dma_fence_array object and if yes iterate over all fences + * in the array. If not just iterate over the fence in @array itself. + */ +#define dma_fence_array_for_each(fence, index, head) \ + for (index = 0, fence = dma_fence_array_first(head); fence; \ + ++(index), fence = dma_fence_array_next(head, index)) + struct dma_fence_array *dma_fence_array_create(int num_fences, struct dma_fence **fences, u64 context, unsigned seqno, @@ -81,4 +94,8 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
bool dma_fence_match_context(struct dma_fence *fence, u64 context);
+struct dma_fence *dma_fence_array_first(struct dma_fence *head); +struct dma_fence *dma_fence_array_next(struct dma_fence *head, + unsigned int index); + #endif /* __LINUX_DMA_FENCE_ARRAY_H */
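As a quick illustration (not part of the patch), a caller that wants to treat a plain fence and a dma_fence_array uniformly could use the new iterator like this:

#include <linux/dma-fence.h>
#include <linux/dma-fence-array.h>

/* Returns true if every fence contained in 'head' has signaled, whether
 * 'head' is a single fence or a dma_fence_array. */
static bool all_fences_signaled(struct dma_fence *head)
{
        struct dma_fence *fence;
        unsigned int index;

        dma_fence_array_for_each(fence, index, head) {
                if (!dma_fence_is_signaled(fence))
                        return false;
        }

        return true;
}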
From: Christian König ckoenig.leichtzumerken@gmail.com
Add a helper function to get a single fence representing all fences in a dma_resv object.
This fence is either the only one in the object or all unsignaled fences of the object flattened into a dma_fence_array.
v2 (Jason Ekstrand): - Take reference of fences both for creating the dma_fence_array and in the case where we return one fence. - Handle the case where dma_resv_get_list() returns NULL
Signed-off-by: Jason Ekstrand jason@jlekstrand.net --- drivers/dma-buf/dma-resv.c | 118 +++++++++++++++++++++++++++++++++++++ include/linux/dma-resv.h | 3 + 2 files changed, 121 insertions(+)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index 4264e64788c4..66591d8ab7ef 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -33,6 +33,8 @@ */
#include <linux/dma-resv.h> +#include <linux/dma-fence-chain.h> +#include <linux/dma-fence-array.h> #include <linux/export.h> #include <linux/sched/mm.h>
@@ -47,6 +49,19 @@ * write-side updates. */
+/** + * dma_fence_deep_dive_for_each - deep dive into the fence containers + * @fence: resulting fence + * @chain: variable for a dma_fence_chain + * @index: index into a dma_fence_array + * @head: starting point + * + * Helper to deep dive into the fence containers for flattening them. + */ +#define dma_fence_deep_dive_for_each(fence, chain, index, head) \ + dma_fence_chain_for_each(chain, head) \ + dma_fence_array_for_each(fence, index, chain) + DEFINE_WD_CLASS(reservation_ww_class); EXPORT_SYMBOL(reservation_ww_class);
@@ -516,6 +531,109 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj, } EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
+/** + * dma_resv_get_singleton - get a single fence for the dma_resv object + * @obj: the reservation object + * @extra: extra fence to add to the resulting array + * @result: resulting dma_fence + * + * Get a single fence representing all unsignaled fences in the dma_resv object + * plus the given extra fence. If we got only one fence return a new + * reference to that, otherwise return a dma_fence_array object. + * + * RETURNS + * Returns -NOMEM if allocations fail, zero otherwise. + */ +int dma_resv_get_singleton(struct dma_resv *obj, struct dma_fence *extra, + struct dma_fence **result) +{ + struct dma_resv_list *fobj = dma_resv_get_list(obj); + struct dma_fence *excl = dma_resv_get_excl(obj); + struct dma_fence *fence, *chain, **fences; + struct dma_fence_array *array; + unsigned int num_fences, shared_count; + unsigned int i, j; + + num_fences = 0; + *result = NULL; + + dma_fence_deep_dive_for_each(fence, chain, i, extra) { + if (dma_fence_is_signaled(fence)) + continue; + + *result = fence; + ++num_fences; + } + + dma_fence_deep_dive_for_each(fence, chain, i, excl) { + if (dma_fence_is_signaled(fence)) + continue; + + *result = fence; + ++num_fences; + } + + shared_count = fobj ? fobj->shared_count : 0; + for (i = 0; i < shared_count; ++i) { + struct dma_fence *f; + + f = rcu_dereference_protected(fobj->shared[i], + dma_resv_held(obj)); + dma_fence_deep_dive_for_each(fence, chain, j, f) { + if (dma_fence_is_signaled(fence)) + continue; + + *result = fence; + ++num_fences; + } + } + + if (num_fences <= 1) { + *result = dma_fence_get(*result); + return 0; + } + + fences = kmalloc_array(num_fences, sizeof(struct dma_fence*), + GFP_KERNEL); + if (!fences) + return -ENOMEM; + + num_fences = 0; + + dma_fence_deep_dive_for_each(fence, chain, i, extra) + if (!dma_fence_is_signaled(fence)) + fences[num_fences++] = dma_fence_get(fence); + + dma_fence_deep_dive_for_each(fence, chain, i, excl) + if (!dma_fence_is_signaled(fence)) + fences[num_fences++] = dma_fence_get(fence); + + for (i = 0; i < shared_count; ++i) { + struct dma_fence *f; + + f = rcu_dereference_protected(fobj->shared[i], + dma_resv_held(obj)); + dma_fence_deep_dive_for_each(fence, chain, j, f) + if (!dma_fence_is_signaled(fence)) + fences[num_fences++] = dma_fence_get(fence); + } + + array = dma_fence_array_create(num_fences, fences, + dma_fence_context_alloc(1), + 1, false); + if (!array) + goto error_free; + + *result = &array->base; + return 0; + +error_free: + while (num_fences--) + dma_fence_put(fences[num_fences]); + kfree(fences); + return -ENOMEM; +} + /** * dma_resv_wait_timeout_rcu - Wait on reservation's objects * shared and/or exclusive fences. diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h index ee50d10f052b..d50e753e4550 100644 --- a/include/linux/dma-resv.h +++ b/include/linux/dma-resv.h @@ -287,6 +287,9 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
+int dma_resv_get_singleton(struct dma_resv *obj, struct dma_fence *extra, + struct dma_fence **result); + long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr, unsigned long timeout);
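A short, untested sketch of how a caller might use the new helper; the surrounding function is made up for illustration and assumes the dma_resv lock is held, as the helper requires:

#include <linux/dma-resv.h>
#include <linux/dma-fence.h>

/* Collapse everything currently attached to 'resv' into a single fence.
 * Caller must hold the dma_resv lock and put the returned fence. */
static int get_idle_fence(struct dma_resv *resv, struct dma_fence **out)
{
        int ret;

        ret = dma_resv_get_singleton(resv, NULL, out);
        if (ret)
                return ret;

        /* The helper returns NULL when there is nothing left to wait on. */
        if (!*out)
                *out = dma_fence_get_stub();

        return 0;
}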
Explicit synchronization is the future. At least, that seems to be what most userspace APIs are agreeing on at this point. However, most of our Linux APIs (both userspace and kernel UAPI) are currently built around implicit synchronization with dma-buf. While work is ongoing to change many of the userspace APIs and protocols to an explicit synchronization model, switching over piecemeal is difficult due to the number of potential components involved. On the kernel side, many drivers use dma-buf including GPU (3D/compute), display, v4l, and others. In userspace, we have X11, several Wayland compositors, 3D drivers, compute drivers (OpenCL etc.), media encode/decode, and the list goes on.
This patch provides a path forward by allowing userspace to manually manage the fences attached to a dma-buf. Alternatively, one can think of this as making dma-buf's implicit synchronization simply a carrier for an explicit fence. This is accomplished by adding two IOCTLs to dma-buf for importing and exporting a sync file to/from the dma-buf. This way a userspace component which is uses explicit synchronization, such as a Vulkan driver, can manually set the write fence on a buffer before handing it off to an implicitly synchronized component such as a Wayland compositor or video encoder. In this way, each of the different components can be upgraded to an explicit synchronization model one at a time as long as the userspace pieces connecting them are aware of it and import/export fences at the right times.
There is a potential race condition with this API if userspace is not careful. A typical use case for implicit synchronization is to wait for the dma-buf to be ready, use it, and then signal it for some other component. Because a sync_file cannot be created until it is guaranteed to complete in finite time, userspace can only signal the dma-buf after it has already submitted the work which uses it to the kernel and has received a sync_file back. There is no way to atomically submit a wait-use-signal operation. This is not, however, really a problem with this API so much as it is a problem with explicit synchronization itself. The way this is typically handled is to have very explicit ownership transfer points in the API or protocol which ensure that only one component is using it at any given time. Both X11 (via the PRESENT extension) and Wayland provide such ownership transfer points via explicit present and idle messages.
The decision was intentionally made in this patch to make the import and export operations IOCTLs on the dma-buf itself rather than as a DRM IOCTL. This makes it the import/export operation universal across all components which use dma-buf including GPU, display, v4l, and others. It also means that a userspace component can do the import/export without access to the DRM fd which may be tricky to get in cases where the client communicates with DRM via a userspace API such as OpenGL or Vulkan. At a future date we may choose to add direct import/export APIs to components such as drm_syncobj to avoid allocating a file descriptor and going through two ioctls. However, that seems to be something of a micro-optimization as import/export operations are likely to happen at a rate of a few per frame of rendered or decoded video.
v2 (Jason Ekstrand): - Use a wrapper dma_fence_array of all fences including the new one when importing an exclusive fence.
v3 (Jason Ekstrand): - Lock around setting shared fences as well as exclusive - Mark SIGNAL_SYNC_FILE as a read-write ioctl. - Initialize ret to 0 in dma_buf_wait_sync_file
v4 (Jason Ekstrand): - Use the new dma_resv_get_singleton helper
Signed-off-by: Jason Ekstrand jason@jlekstrand.net --- drivers/dma-buf/dma-buf.c | 96 ++++++++++++++++++++++++++++++++++++ include/uapi/linux/dma-buf.h | 13 ++++- 2 files changed, 107 insertions(+), 2 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index d4097856c86b..09973c689866 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -20,6 +20,7 @@ #include <linux/debugfs.h> #include <linux/module.h> #include <linux/seq_file.h> +#include <linux/sync_file.h> #include <linux/poll.h> #include <linux/dma-resv.h> #include <linux/mm.h> @@ -348,6 +349,95 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf) return ret; }
+static long dma_buf_wait_sync_file(struct dma_buf *dmabuf, + const void __user *user_data) +{ + struct dma_buf_sync_file arg; + struct dma_fence *fence; + int ret = 0; + + if (copy_from_user(&arg, user_data, sizeof(arg))) + return -EFAULT; + + if (arg.flags != 0 && arg.flags != DMA_BUF_SYNC_FILE_SYNC_WRITE) + return -EINVAL; + + fence = sync_file_get_fence(arg.fd); + if (!fence) + return -EINVAL; + + dma_resv_lock(dmabuf->resv, NULL); + + if (arg.flags & DMA_BUF_SYNC_FILE_SYNC_WRITE) { + struct dma_fence *singleton = NULL; + ret = dma_resv_get_singleton(dmabuf->resv, fence, &singleton); + if (!ret && singleton) + dma_resv_add_excl_fence(dmabuf->resv, singleton); + } else { + dma_resv_add_shared_fence(dmabuf->resv, fence); + } + + dma_resv_unlock(dmabuf->resv); + + dma_fence_put(fence); + + return ret; +} + +static long dma_buf_signal_sync_file(struct dma_buf *dmabuf, + void __user *user_data) +{ + struct dma_buf_sync_file arg; + struct dma_fence *fence = NULL; + struct sync_file *sync_file; + int fd, ret; + + if (copy_from_user(&arg, user_data, sizeof(arg))) + return -EFAULT; + + if (arg.flags != 0 && arg.flags != DMA_BUF_SYNC_FILE_SYNC_WRITE) + return -EINVAL; + + fd = get_unused_fd_flags(O_CLOEXEC); + if (fd < 0) + return fd; + + if (arg.flags & DMA_BUF_SYNC_FILE_SYNC_WRITE) { + /* We need to include both the exclusive fence and all of + * the shared fences in our fence. + */ + ret = dma_resv_get_singleton(dmabuf->resv, NULL, &fence); + if (ret) + goto err_put_fd; + } else { + fence = dma_resv_get_excl_rcu(dmabuf->resv); + } + + if (!fence) + fence = dma_fence_get_stub(); + + sync_file = sync_file_create(fence); + + dma_fence_put(fence); + + if (!sync_file) { + ret = -EINVAL; + goto err_put_fd; + } + + fd_install(fd, sync_file->file); + + arg.fd = fd; + if (copy_to_user(user_data, &arg, sizeof(arg))) + return -EFAULT; + + return 0; + +err_put_fd: + put_unused_fd(fd); + return ret; +} + static long dma_buf_ioctl(struct file *file, unsigned int cmd, unsigned long arg) { @@ -390,6 +480,12 @@ static long dma_buf_ioctl(struct file *file, case DMA_BUF_SET_NAME: return dma_buf_set_name(dmabuf, (const char __user *)arg);
+ case DMA_BUF_IOCTL_WAIT_SYNC_FILE: + return dma_buf_wait_sync_file(dmabuf, (const void __user *)arg); + + case DMA_BUF_IOCTL_SIGNAL_SYNC_FILE: + return dma_buf_signal_sync_file(dmabuf, (void __user *)arg); + default: return -ENOTTY; } diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h index dbc7092e04b5..86e07acca90c 100644 --- a/include/uapi/linux/dma-buf.h +++ b/include/uapi/linux/dma-buf.h @@ -37,8 +37,17 @@ struct dma_buf_sync {
#define DMA_BUF_NAME_LEN 32
+struct dma_buf_sync_file { + __u32 flags; + __s32 fd; +}; + +#define DMA_BUF_SYNC_FILE_SYNC_WRITE (1 << 0) + #define DMA_BUF_BASE 'b' -#define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync) -#define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *) +#define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync) +#define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *) +#define DMA_BUF_IOCTL_WAIT_SYNC_FILE _IOW(DMA_BUF_BASE, 2, struct dma_buf_sync) +#define DMA_BUF_IOCTL_SIGNAL_SYNC_FILE _IOWR(DMA_BUF_BASE, 3, struct dma_buf_sync)
#endif
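One usage note on the export path: the returned fd is an ordinary sync_file, so a CPU-side consumer can simply poll() it before touching the buffer. A rough, untested sketch; dmabuf_fd is assumed to come from elsewhere:

#include <errno.h>
#include <poll.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

/* Block until all work currently attached to the dma-buf has finished. */
static int wait_dmabuf_idle(int dmabuf_fd)
{
        struct dma_buf_sync_file arg = {
                .flags = DMA_BUF_SYNC_FILE_SYNC_WRITE, /* writers and readers */
                .fd = -1,
        };
        struct pollfd p;

        if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_SIGNAL_SYNC_FILE, &arg) < 0)
                return -1;

        p.fd = arg.fd;
        p.events = POLLIN;
        while (poll(&p, 1, -1) < 0 && errno == EINTR)
                ; /* the sync_file fd reports POLLIN once the fence signals */

        close(arg.fd);
        return 0;
}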
On 11.03.20 at 04:43, Jason Ekstrand wrote:
[SNIP]
+static long dma_buf_wait_sync_file(struct dma_buf *dmabuf,
const void __user *user_data)
+{
struct dma_buf_sync_file arg;
struct dma_fence *fence;
int ret = 0;
if (copy_from_user(&arg, user_data, sizeof(arg)))
return -EFAULT;
if (arg.flags != 0 && arg.flags != DMA_BUF_SYNC_FILE_SYNC_WRITE)
return -EINVAL;
fence = sync_file_get_fence(arg.fd);
if (!fence)
return -EINVAL;
dma_resv_lock(dmabuf->resv, NULL);
if (arg.flags & DMA_BUF_SYNC_FILE_SYNC_WRITE) {
struct dma_fence *singleton = NULL;
ret = dma_resv_get_singleton(dmabuf->resv, fence, &singleton);
if (!ret && singleton)
dma_resv_add_excl_fence(dmabuf->resv, singleton);
} else {
dma_resv_add_shared_fence(dmabuf->resv, fence);
}
You also need to create a singleton when adding shared fences.
The problem is that shared fences must always signal after exclusive ones and you can't guarantee that for the fence you add here.
Regards, Christian.
[SNIP]
On Wed, Mar 11, 2020 at 8:18 AM Christian König christian.koenig@amd.com wrote:
On 11.03.20 at 04:43, Jason Ekstrand wrote:
[SNIP]
+static long dma_buf_wait_sync_file(struct dma_buf *dmabuf,
const void __user *user_data)
+{
struct dma_buf_sync_file arg;
struct dma_fence *fence;
int ret = 0;
if (copy_from_user(&arg, user_data, sizeof(arg)))
return -EFAULT;
if (arg.flags != 0 && arg.flags != DMA_BUF_SYNC_FILE_SYNC_WRITE)
return -EINVAL;
fence = sync_file_get_fence(arg.fd);
if (!fence)
return -EINVAL;
dma_resv_lock(dmabuf->resv, NULL);
if (arg.flags & DMA_BUF_SYNC_FILE_SYNC_WRITE) {
struct dma_fence *singleton = NULL;
ret = dma_resv_get_singleton(dmabuf->resv, fence, &singleton);
if (!ret && singleton)
dma_resv_add_excl_fence(dmabuf->resv, singleton);
} else {
dma_resv_add_shared_fence(dmabuf->resv, fence);
}
You also need to create a singleton when adding a shared fence.
The problem is that shared fences must always signal after exclusive ones and you can't guarantee that for the fence you add here.
I'm beginning to think that I should just drop the flags and always wait on all fences and always take what's currently the "write" path. Otherwise, something's going to get it wrong somewhere. Thoughts?
Also, Michel (added to CC) commented on IRC today that amdgpu does something with implicit sync fences where it sorts out the fences which affect one queue vs. others. He thought that stuffing fences in the dma-buf in this way might cause that to not work. Thoughts?
--Jason
Regards, Christian.
dma_resv_unlock(dmabuf->resv);
dma_fence_put(fence);
return ret;
+}
+static long dma_buf_signal_sync_file(struct dma_buf *dmabuf,
void __user *user_data)
+{
struct dma_buf_sync_file arg;
struct dma_fence *fence = NULL;
struct sync_file *sync_file;
int fd, ret;
if (copy_from_user(&arg, user_data, sizeof(arg)))
return -EFAULT;
if (arg.flags != 0 && arg.flags != DMA_BUF_SYNC_FILE_SYNC_WRITE)
return -EINVAL;
fd = get_unused_fd_flags(O_CLOEXEC);
if (fd < 0)
return fd;
if (arg.flags & DMA_BUF_SYNC_FILE_SYNC_WRITE) {
/* We need to include both the exclusive fence and all of
* the shared fences in our fence.
*/
ret = dma_resv_get_singleton(dmabuf->resv, NULL, &fence);
if (ret)
goto err_put_fd;
} else {
fence = dma_resv_get_excl_rcu(dmabuf->resv);
}
if (!fence)
fence = dma_fence_get_stub();
sync_file = sync_file_create(fence);
dma_fence_put(fence);
if (!sync_file) {
ret = -EINVAL;
goto err_put_fd;
}
fd_install(fd, sync_file->file);
arg.fd = fd;
if (copy_to_user(user_data, &arg, sizeof(arg)))
return -EFAULT;
return 0;
+err_put_fd:
put_unused_fd(fd);
return ret;
+}
- static long dma_buf_ioctl(struct file *file, unsigned int cmd, unsigned long arg) {
@@ -390,6 +480,12 @@ static long dma_buf_ioctl(struct file *file, case DMA_BUF_SET_NAME: return dma_buf_set_name(dmabuf, (const char __user *)arg);
case DMA_BUF_IOCTL_WAIT_SYNC_FILE:
return dma_buf_wait_sync_file(dmabuf, (const void __user *)arg);
case DMA_BUF_IOCTL_SIGNAL_SYNC_FILE:
return dma_buf_signal_sync_file(dmabuf, (void __user *)arg);
default: return -ENOTTY; }
diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h index dbc7092e04b5..86e07acca90c 100644 --- a/include/uapi/linux/dma-buf.h +++ b/include/uapi/linux/dma-buf.h @@ -37,8 +37,17 @@ struct dma_buf_sync {
#define DMA_BUF_NAME_LEN 32
+struct dma_buf_sync_file {
__u32 flags;
__s32 fd;
+};
+#define DMA_BUF_SYNC_FILE_SYNC_WRITE (1 << 0)
- #define DMA_BUF_BASE 'b'
-#define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
-#define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *)
+#define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
+#define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *)
+#define DMA_BUF_IOCTL_WAIT_SYNC_FILE _IOW(DMA_BUF_BASE, 2, struct dma_buf_sync)
+#define DMA_BUF_IOCTL_SIGNAL_SYNC_FILE _IOWR(DMA_BUF_BASE, 3, struct dma_buf_sync)
#endif
On 12.03.20 at 16:57, Jason Ekstrand wrote:
On Wed, Mar 11, 2020 at 8:18 AM Christian König christian.koenig@amd.com wrote:
On 11.03.20 at 04:43, Jason Ekstrand wrote:
+static long dma_buf_wait_sync_file(struct dma_buf *dmabuf,
const void __user *user_data)
+{
struct dma_buf_sync_file arg;
struct dma_fence *fence;
int ret = 0;
if (copy_from_user(&arg, user_data, sizeof(arg)))
return -EFAULT;
if (arg.flags != 0 && arg.flags != DMA_BUF_SYNC_FILE_SYNC_WRITE)
return -EINVAL;
fence = sync_file_get_fence(arg.fd);
if (!fence)
return -EINVAL;
dma_resv_lock(dmabuf->resv, NULL);
if (arg.flags & DMA_BUF_SYNC_FILE_SYNC_WRITE) {
struct dma_fence *singleton = NULL;
ret = dma_resv_get_singleton(dmabuf->resv, fence, &singleton);
if (!ret && singleton)
dma_resv_add_excl_fence(dmabuf->resv, singleton);
} else {
dma_resv_add_shared_fence(dmabuf->resv, fence);
}
You also need to create a singleton when adding a shared fence.
The problem is that shared fences must always signal after exclusive ones and you can't guarantee that for the fence you add here.
I'm beginning to think that I should just drop the flags and always wait on all fences and always take what's currently the "write" path. Otherwise, something's going to get it wrong somewhere. Thoughts?
If that is sufficient for your use case then that is certainly the more defensive (i.e. less dangerous) approach.
Also, Michel (added to CC) commented on IRC today that amdgpu does something with implicit sync fences where it sorts out the fences which affect one queue vs. others. He thought that stuffing fences in the dma-buf in this way might cause that to not work. Thoughts?
Yes that is correct. What amdgpu does is it ignores all fences from the same process.
E.g. when A submits IBs 1, 2, and 3 and then B submits IB 4, then 4 waits for 1, 2, and 3, but 1, 2, and 3 can run in parallel with each other.
And yes adding anything as explicit sync would break that, but I don't think that this is much of a problem.
Regards, Christian.
--Jason
Regards, Christian.
Explicit synchronization is the future. At least, that seems to be what most userspace APIs are agreeing on at this point. However, most of our Linux APIs (both userspace and kernel UAPI) are currently built around implicit synchronization with dma-buf. While work is ongoing to change many of the userspace APIs and protocols to an explicit synchronization model, switching over piecemeal is difficult due to the number of potential components involved. On the kernel side, many drivers use dma-buf including GPU (3D/compute), display, v4l, and others. In userspace, we have X11, several Wayland compositors, 3D drivers, compute drivers (OpenCL etc.), media encode/decode, and the list goes on.
This patch provides a path forward by allowing userspace to manually manage the fences attached to a dma-buf. Alternatively, one can think of this as making dma-buf's implicit synchronization simply a carrier for an explicit fence. This is accomplished by adding two IOCTLs to dma-buf for importing and exporting a sync file to/from the dma-buf. This way a userspace component which is uses explicit synchronization, such as a Vulkan driver, can manually set the write fence on a buffer before handing it off to an implicitly synchronized component such as a Wayland compositor or video encoder. In this way, each of the different components can be upgraded to an explicit synchronization model one at a time as long as the userspace pieces connecting them are aware of it and import/export fences at the right times.
There is a potential race condition with this API if userspace is not careful. A typical use case for implicit synchronization is to wait for the dma-buf to be ready, use it, and then signal it for some other component. Because a sync_file cannot be created until it is guaranteed to complete in finite time, userspace can only signal the dma-buf after it has already submitted the work which uses it to the kernel and has received a sync_file back. There is no way to atomically submit a wait-use-signal operation. This is not, however, really a problem with this API so much as it is a problem with explicit synchronization itself. The way this is typically handled is to have very explicit ownership transfer points in the API or protocol which ensure that only one component is using it at any given time. Both X11 (via the PRESENT extension) and Wayland provide such ownership transfer points via explicit present and idle messages.
The decision was intentionally made in this patch to make the import and export operations IOCTLs on the dma-buf itself rather than as a DRM IOCTL. This makes it the import/export operation universal across all components which use dma-buf including GPU, display, v4l, and others. It also means that a userspace component can do the import/export without access to the DRM fd which may be tricky to get in cases where the client communicates with DRM via a userspace API such as OpenGL or Vulkan. At a future date we may choose to add direct import/export APIs to components such as drm_syncobj to avoid allocating a file descriptor and going through two ioctls. However, that seems to be something of a micro-optimization as import/export operations are likely to happen at a rate of a few per frame of rendered or decoded video.
v2 (Jason Ekstrand):
- Use a wrapper dma_fence_array of all fences including the new one when importing an exclusive fence.
v3 (Jason Ekstrand):
- Lock around setting shared fences as well as exclusive
- Mark SIGNAL_SYNC_FILE as a read-write ioctl.
- Initialize ret to 0 in dma_buf_wait_sync_file
v4 (Jason Ekstrand):
- Use the new dma_resv_get_singleton helper
v5 (Jason Ekstrand):
- Rename the IOCTLs to import/export rather than wait/signal
- Drop the WRITE flag and always get/set the exclusive fence
Signed-off-by: Jason Ekstrand jason@jlekstrand.net --- drivers/dma-buf/dma-buf.c | 84 ++++++++++++++++++++++++++++++++++++ include/uapi/linux/dma-buf.h | 11 ++++- 2 files changed, 93 insertions(+), 2 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index d4097856c86b..d34d27aa3077 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -20,6 +20,7 @@ #include <linux/debugfs.h> #include <linux/module.h> #include <linux/seq_file.h> +#include <linux/sync_file.h> #include <linux/poll.h> #include <linux/dma-resv.h> #include <linux/mm.h> @@ -348,6 +349,83 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf) return ret; }
+static long dma_buf_import_sync_file(struct dma_buf *dmabuf, + const void __user *user_data) +{ + struct dma_buf_sync_file arg; + struct dma_fence *fence, *singleton = NULL; + int ret = 0; + + if (copy_from_user(&arg, user_data, sizeof(arg))) + return -EFAULT; + + if (arg.flags != 0) + return -EINVAL; + + fence = sync_file_get_fence(arg.fd); + if (!fence) + return -EINVAL; + + dma_resv_lock(dmabuf->resv, NULL); + + ret = dma_resv_get_singleton(dmabuf->resv, fence, &singleton); + if (!ret && singleton) + dma_resv_add_excl_fence(dmabuf->resv, singleton); + + dma_resv_unlock(dmabuf->resv); + + dma_fence_put(fence); + + return ret; +} + +static long dma_buf_export_sync_file(struct dma_buf *dmabuf, + void __user *user_data) +{ + struct dma_buf_sync_file arg; + struct dma_fence *fence = NULL; + struct sync_file *sync_file; + int fd, ret; + + if (copy_from_user(&arg, user_data, sizeof(arg))) + return -EFAULT; + + if (arg.flags != 0) + return -EINVAL; + + fd = get_unused_fd_flags(O_CLOEXEC); + if (fd < 0) + return fd; + + ret = dma_resv_get_singleton(dmabuf->resv, NULL, &fence); + if (ret) + goto err_put_fd; + + if (!fence) + fence = dma_fence_get_stub(); + + sync_file = sync_file_create(fence); + + dma_fence_put(fence); + + if (!sync_file) { + ret = -EINVAL; + goto err_put_fd; + } + + fd_install(fd, sync_file->file); + + arg.fd = fd; + if (copy_to_user(user_data, &arg, sizeof(arg))) + return -EFAULT; + + return 0; + +err_put_fd: + put_unused_fd(fd); + return ret; +} + static long dma_buf_ioctl(struct file *file, unsigned int cmd, unsigned long arg) { @@ -390,6 +468,12 @@ static long dma_buf_ioctl(struct file *file, case DMA_BUF_SET_NAME: return dma_buf_set_name(dmabuf, (const char __user *)arg);
+ case DMA_BUF_IOCTL_IMPORT_SYNC_FILE: + return dma_buf_import_sync_file(dmabuf, (const void __user *)arg); + + case DMA_BUF_IOCTL_EXPORT_SYNC_FILE: + return dma_buf_export_sync_file(dmabuf, (void __user *)arg); + default: return -ENOTTY; } diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h index dbc7092e04b5..b746c6459e24 100644 --- a/include/uapi/linux/dma-buf.h +++ b/include/uapi/linux/dma-buf.h @@ -37,8 +37,15 @@ struct dma_buf_sync {
#define DMA_BUF_NAME_LEN 32
+struct dma_buf_sync_file {
+ __u32 flags;
+ __s32 fd;
+};
+
 #define DMA_BUF_BASE 'b'
-#define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
-#define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *)
+#define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
+#define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *)
+#define DMA_BUF_IOCTL_IMPORT_SYNC_FILE _IOW(DMA_BUF_BASE, 2, struct dma_buf_sync)
+#define DMA_BUF_IOCTL_EXPORT_SYNC_FILE _IOWR(DMA_BUF_BASE, 3, struct dma_buf_sync)
#endif
On 2020-03-17 10:21 p.m., Jason Ekstrand wrote:
What's the status of this? DMA_BUF_IOCTL_EXPORT_SYNC_FILE would be useful for Wayland compositors to wait for client buffers to become ready without being prone to getting delayed by later HW access to them, so it would be nice to merge that at least (if DMA_BUF_IOCTL_IMPORT_SYNC_FILE is still controversial).
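A rough sketch of what that could look like on the compositor side, assuming the EXPORT ioctl and struct dma_buf_sync_file from the v5 patch above (which requires flags == 0); the poll() is only one option, the fd could just as well be handed to the compositor's event loop.

#include <poll.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>   /* patched header from this series */

/* Snapshot the client buffer's current fences and wait on that snapshot
 * instead of the dma-buf itself, so GPU work submitted against the
 * buffer later cannot delay the compositor. */
static int wait_for_client_buffer(int dmabuf_fd, int timeout_ms)
{
    struct dma_buf_sync_file arg = { .flags = 0 };
    struct pollfd p;
    int ret;

    if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &arg) < 0)
        return -1;

    p.fd = arg.fd;
    p.events = POLLIN;   /* a sync_file fd polls readable once signaled */
    ret = poll(&p, 1, timeout_ms);

    close(arg.fd);
    return ret > 0 ? 0 : -1;
}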
On Wed, Sep 30, 2020 at 11:39:06AM +0200, Michel Dänzer wrote:
On 2020-03-17 10:21 p.m., Jason Ekstrand wrote:
What's the status of this? DMA_BUF_IOCTL_EXPORT_SYNC_FILE would be useful for Wayland compositors to wait for client buffers to become ready without being prone to getting delayed by later HW access to them, so it would be nice to merge that at least (if DMA_BUF_IOCTL_IMPORT_SYNC_FILE is still controversial).
I think the missing bits are just the usual stuff
- igt testcases
- userspace using the new ioctls
- review of the entire pile
I don't think there's any fundamental objections aside from "no one ever pushed this over the finish line".
Cheers, Daniel
On Wed, Sep 30, 2020 at 4:55 AM Daniel Vetter daniel@ffwll.ch wrote:
On Wed, Sep 30, 2020 at 11:39:06AM +0200, Michel Dänzer wrote:
On 2020-03-17 10:21 p.m., Jason Ekstrand wrote:
What's the status of this? DMA_BUF_IOCTL_EXPORT_SYNC_FILE would be useful for Wayland compositors to wait for client buffers to become ready without being prone to getting delayed by later HW access to them, so it would be nice to merge that at least (if DMA_BUF_IOCTL_IMPORT_SYNC_FILE is still controversial).
I think the missing bits are just the usual stuff
- igt testcases
- userspace using the new ioctls
- review of the entire pile
I don't think there's any fundamental objections aside from "no one ever pushed this over the finish line".
I just re-upped the patch series and you should have been on the CC for the cover letter. The Mesa MR is here: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037 I'm going to try and knock out an IGT test quick but I don't know how much is really practical to test in that environment besides a basic fuzz for "does the IOCTL return a sync file".
I've dropped all the sync_file import stuff in the latest version. It would have been useful for testing but it's also where all the scary stuff lives and I'm no longer convinced it solves any real problems for userspace.
--Jason
On Mon, Mar 15, 2021 at 10:11 PM Jason Ekstrand jason@jlekstrand.net wrote:
On Wed, Sep 30, 2020 at 4:55 AM Daniel Vetter daniel@ffwll.ch wrote:
On Wed, Sep 30, 2020 at 11:39:06AM +0200, Michel Dänzer wrote:
On 2020-03-17 10:21 p.m., Jason Ekstrand wrote:
I just re-upped the patch series and you should have been on the CC for the cover letter. The Mesa MR is here: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037 I'm going to try and knock out an IGT test quick but I don't know how much is really practical to test in that environment besides a basic fuzz for "does the IOCTL return a sync file".
With vgem you should be able to control the interface fully, since that allows you to control dma_fences we attach to stuff. Or at least it has some of the ingredients, and a bunch of igts test it. E.g. we have one that creates a fence with vgem and then checks (using crc) that your display driver works correctly with the page flip.
I've dropped all the sync_file import stuff in the latest version. It would have been useful for testing but it's also where all the scary stuff lives and I'm no longer convinced it solves any real problems for userspace.
Yeah vgem should allow you to get at least some of that sorted. -Daniel
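For reference, a rough outline of what such a vgem-based test could look like. This is only a sketch: it assumes /dev/dri/card0 is the vgem node, uses vgem's fence ioctls (DRM_IOCTL_VGEM_FENCE_ATTACH/SIGNAL) as I understand them, follows the flag semantics of the v6 patch below, and drops all error handling.

#include <fcntl.h>
#include <poll.h>
#include <sys/ioctl.h>
#include <drm/drm.h>
#include <drm/vgem_drm.h>
#include <linux/dma-buf.h>   /* patched header from this series */

int main(void)
{
    /* Assumption: card0 is vgem; a real test would match the driver name. */
    int vgem = open("/dev/dri/card0", O_RDWR);

    /* Create a dumb BO and attach an unsignaled vgem write fence to it. */
    struct drm_mode_create_dumb create = { .width = 64, .height = 64, .bpp = 32 };
    ioctl(vgem, DRM_IOCTL_MODE_CREATE_DUMB, &create);

    struct drm_vgem_fence_attach attach = {
        .handle = create.handle,
        .flags = VGEM_FENCE_WRITE,
    };
    ioctl(vgem, DRM_IOCTL_VGEM_FENCE_ATTACH, &attach);

    /* Export the BO as a dma-buf and snapshot its fences as a sync_file. */
    struct drm_prime_handle prime = { .handle = create.handle, .flags = DRM_CLOEXEC };
    ioctl(vgem, DRM_IOCTL_PRIME_HANDLE_TO_FD, &prime);

    struct dma_buf_sync_file arg = { .flags = DMA_BUF_SYNC_WRITE };
    ioctl(prime.fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &arg);

    /* The exported sync_file must not be signaled yet... */
    struct pollfd p = { .fd = arg.fd, .events = POLLIN };
    int busy = poll(&p, 1, 0);            /* expect 0 */

    /* ...and must signal once the vgem fence does. */
    struct drm_vgem_fence_signal sig = { .fence = attach.out_fence };
    ioctl(vgem, DRM_IOCTL_VGEM_FENCE_SIGNAL, &sig);
    int idle = poll(&p, 1, 1000);         /* expect 1 with POLLIN set */

    return (busy == 0 && idle == 1) ? 0 : 1;
}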
Modern userspace APIs like Vulkan are built on an explicit synchronization model. This doesn't always play nicely with the implicit synchronization used in the kernel and assumed by X11 and Wayland. The client -> compositor half of the synchronization isn't too bad, at least on intel, because we can control whether or not i915 synchronizes on the buffer and whether or not it's considered written.
The harder part is the compositor -> client synchronization when we get the buffer back from the compositor. We're required to be able to provide the client with a VkSemaphore and VkFence representing the point in time where the window system (compositor and/or display) finished using the buffer. With current APIs, it's very hard to do this in such a way that we don't get confused by the Vulkan driver's access of the buffer. In particular, once we tell the kernel that we're rendering to the buffer again, any CPU waits on the buffer or GPU dependencies will wait on some of the client rendering and not just the compositor.
This new IOCTL solves this problem by allowing us to get a snapshot of the implicit synchronization state of a given dma-buf in the form of a sync file. It's effectively the same as a poll() or I915_GEM_WAIT, only instead of CPU-waiting directly, it encapsulates the wait operation, at the current moment in time, in a sync_file so we can check/wait on it later. As long as the Vulkan driver does the sync_file export from the dma-buf before we re-introduce it for rendering, it will only contain fences from the compositor or display. This allows us to accurately turn it into a VkFence or VkSemaphore without any over-synchronization.
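For instance (a sketch, not the actual Mesa code), the WSI code could feed the exported fd into VK_KHR_external_semaphore_fd so that the application's next submission waits on exactly the compositor/display work; 'device', 'acquire_semaphore' and 'sync_file_fd' are assumed to exist already.

#include <vulkan/vulkan.h>

/* Turn the sync_file exported from the dma-buf into a binary VkSemaphore.
 * SYNC_FD handles have copy transference, so a temporary import is used;
 * on success the fd is owned by the Vulkan implementation. */
static VkResult import_buffer_ready_fd(VkDevice device,
                                       VkSemaphore acquire_semaphore,
                                       int sync_file_fd)
{
    PFN_vkImportSemaphoreFdKHR import_fd =
        (PFN_vkImportSemaphoreFdKHR)
        vkGetDeviceProcAddr(device, "vkImportSemaphoreFdKHR");

    VkImportSemaphoreFdInfoKHR info = {
        .sType = VK_STRUCTURE_TYPE_IMPORT_SEMAPHORE_FD_INFO_KHR,
        .semaphore = acquire_semaphore,
        .flags = VK_SEMAPHORE_IMPORT_TEMPORARY_BIT,
        .handleType = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_SYNC_FD_BIT,
        .fd = sync_file_fd,
    };

    /* The semaphore can then be used as a wait semaphore in vkQueueSubmit(). */
    return import_fd(device, &info);
}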
Mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037
Cc: Christian König christian.koenig@amd.com Cc: Michel Dänzer michel@daenzer.net Cc: Dave Airlie airlied@redhat.com Cc: Bas Nieuwenhuizen bas@basnieuwenhuizen.nl Cc: Daniel Stone daniels@collabora.com
Christian König (2): dma-buf: add dma_fence_array_for_each (v2) dma-buf: add dma_resv_get_singleton (v2)
Jason Ekstrand (1): dma-buf: Add an API for exporting sync files (v6)
drivers/dma-buf/dma-buf.c | 55 ++++++++++++++ drivers/dma-buf/dma-fence-array.c | 27 +++++++ drivers/dma-buf/dma-resv.c | 118 ++++++++++++++++++++++++++++++ include/linux/dma-fence-array.h | 17 +++++ include/linux/dma-resv.h | 3 + include/uapi/linux/dma-buf.h | 6 ++ 6 files changed, 226 insertions(+)
From: Christian König ckoenig.leichtzumerken@gmail.com
Add a helper to iterate over all fences in a dma_fence_array object.
v2 (Jason Ekstrand) - Return NULL from dma_fence_array_first if head == NULL. This matches the iterator behavior of dma_fence_chain_for_each in that it iterates zero times if head == NULL. - Return NULL from dma_fence_array_next if index > array->num_fences.
Signed-off-by: Jason Ekstrand jason@jlekstrand.net --- drivers/dma-buf/dma-fence-array.c | 27 +++++++++++++++++++++++++++ include/linux/dma-fence-array.h | 17 +++++++++++++++++ 2 files changed, 44 insertions(+)
diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c index d3fbd950be944..2ac1afc697d0f 100644 --- a/drivers/dma-buf/dma-fence-array.c +++ b/drivers/dma-buf/dma-fence-array.c @@ -201,3 +201,30 @@ bool dma_fence_match_context(struct dma_fence *fence, u64 context) return true; } EXPORT_SYMBOL(dma_fence_match_context); + +struct dma_fence *dma_fence_array_first(struct dma_fence *head) +{ + struct dma_fence_array *array; + + if (!head) + return NULL; + + array = to_dma_fence_array(head); + if (!array) + return head; + + return array->fences[0]; +} +EXPORT_SYMBOL(dma_fence_array_first); + +struct dma_fence *dma_fence_array_next(struct dma_fence *head, + unsigned int index) +{ + struct dma_fence_array *array = to_dma_fence_array(head); + + if (!array || index >= array->num_fences) + return NULL; + + return array->fences[index]; +} +EXPORT_SYMBOL(dma_fence_array_next); diff --git a/include/linux/dma-fence-array.h b/include/linux/dma-fence-array.h index 303dd712220fd..588ac8089dd61 100644 --- a/include/linux/dma-fence-array.h +++ b/include/linux/dma-fence-array.h @@ -74,6 +74,19 @@ to_dma_fence_array(struct dma_fence *fence) return container_of(fence, struct dma_fence_array, base); }
+/** + * dma_fence_array_for_each - iterate over all fences in array + * @fence: current fence + * @index: index into the array + * @head: potential dma_fence_array object + * + * Test if @array is a dma_fence_array object and if yes iterate over all fences + * in the array. If not just iterate over the fence in @array itself. + */ +#define dma_fence_array_for_each(fence, index, head) \ + for (index = 0, fence = dma_fence_array_first(head); fence; \ + ++(index), fence = dma_fence_array_next(head, index)) + struct dma_fence_array *dma_fence_array_create(int num_fences, struct dma_fence **fences, u64 context, unsigned seqno, @@ -81,4 +94,8 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
bool dma_fence_match_context(struct dma_fence *fence, u64 context);
+struct dma_fence *dma_fence_array_first(struct dma_fence *head); +struct dma_fence *dma_fence_array_next(struct dma_fence *head, + unsigned int index); + #endif /* __LINUX_DMA_FENCE_ARRAY_H */
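A short kernel-side usage sketch of the new iterator (not part of the patch, just an illustration of the intended semantics):

#include <linux/dma-fence.h>
#include <linux/dma-fence-array.h>

/* Count how many fences behind 'fence' are still pending. If 'fence' is
 * not a dma_fence_array the loop visits just the fence itself; if it is
 * NULL the loop body never runs. */
static unsigned int count_pending_fences(struct dma_fence *fence)
{
    struct dma_fence *f;
    unsigned int i, pending = 0;

    dma_fence_array_for_each(f, i, fence)
        if (!dma_fence_is_signaled(f))
            pending++;

    return pending;
}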
From: Christian König ckoenig.leichtzumerken@gmail.com
Add a helper function to get a single fence representing all fences in a dma_resv object.
This fence is either the only one in the object or all unsignaled fences of the object flattened out into a dma_fence_array.
v2 (Jason Ekstrand): - Take reference of fences both for creating the dma_fence_array and in the case where we return one fence. - Handle the case where dma_resv_get_list() returns NULL
Signed-off-by: Jason Ekstrand jason@jlekstrand.net --- drivers/dma-buf/dma-resv.c | 118 +++++++++++++++++++++++++++++++++++++ include/linux/dma-resv.h | 3 + 2 files changed, 121 insertions(+)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index 6ddbeb5dfbf65..1733f1ec86a44 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -33,6 +33,8 @@ */
#include <linux/dma-resv.h> +#include <linux/dma-fence-chain.h> +#include <linux/dma-fence-array.h> #include <linux/export.h> #include <linux/mm.h> #include <linux/sched/mm.h> @@ -49,6 +51,19 @@ * write-side updates. */
+/** + * dma_fence_deep_dive_for_each - deep dive into the fence containers + * @fence: resulting fence + * @chain: variable for a dma_fence_chain + * @index: index into a dma_fence_array + * @head: starting point + * + * Helper to deep dive into the fence containers for flattening them. + */ +#define dma_fence_deep_dive_for_each(fence, chain, index, head) \ + dma_fence_chain_for_each(chain, head) \ + dma_fence_array_for_each(fence, index, chain) + DEFINE_WD_CLASS(reservation_ww_class); EXPORT_SYMBOL(reservation_ww_class);
@@ -517,6 +532,109 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj, } EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
+/** + * dma_resv_get_singleton - get a single fence for the dma_resv object + * @obj: the reservation object + * @extra: extra fence to add to the resulting array + * @result: resulting dma_fence + * + * Get a single fence representing all unsignaled fences in the dma_resv object + * plus the given extra fence. If we got only one fence return a new + * reference to that, otherwise return a dma_fence_array object. + * + * RETURNS + * Returns -NOMEM if allocations fail, zero otherwise. + */ +int dma_resv_get_singleton(struct dma_resv *obj, struct dma_fence *extra, + struct dma_fence **result) +{ + struct dma_resv_list *fobj = dma_resv_get_list(obj); + struct dma_fence *excl = dma_resv_get_excl(obj); + struct dma_fence *fence, *chain, **fences; + struct dma_fence_array *array; + unsigned int num_fences, shared_count; + unsigned int i, j; + + num_fences = 0; + *result = NULL; + + dma_fence_deep_dive_for_each(fence, chain, i, extra) { + if (dma_fence_is_signaled(fence)) + continue; + + *result = fence; + ++num_fences; + } + + dma_fence_deep_dive_for_each(fence, chain, i, excl) { + if (dma_fence_is_signaled(fence)) + continue; + + *result = fence; + ++num_fences; + } + + shared_count = fobj ? fobj->shared_count : 0; + for (i = 0; i < shared_count; ++i) { + struct dma_fence *f; + + f = rcu_dereference_protected(fobj->shared[i], + dma_resv_held(obj)); + dma_fence_deep_dive_for_each(fence, chain, j, f) { + if (dma_fence_is_signaled(fence)) + continue; + + *result = fence; + ++num_fences; + } + } + + if (num_fences <= 1) { + *result = dma_fence_get(*result); + return 0; + } + + fences = kmalloc_array(num_fences, sizeof(struct dma_fence*), + GFP_KERNEL); + if (!fences) + return -ENOMEM; + + num_fences = 0; + + dma_fence_deep_dive_for_each(fence, chain, i, extra) + if (!dma_fence_is_signaled(fence)) + fences[num_fences++] = dma_fence_get(fence); + + dma_fence_deep_dive_for_each(fence, chain, i, excl) + if (!dma_fence_is_signaled(fence)) + fences[num_fences++] = dma_fence_get(fence); + + for (i = 0; i < shared_count; ++i) { + struct dma_fence *f; + + f = rcu_dereference_protected(fobj->shared[i], + dma_resv_held(obj)); + dma_fence_deep_dive_for_each(fence, chain, j, f) + if (!dma_fence_is_signaled(fence)) + fences[num_fences++] = dma_fence_get(fence); + } + + array = dma_fence_array_create(num_fences, fences, + dma_fence_context_alloc(1), + 1, false); + if (!array) + goto error_free; + + *result = &array->base; + return 0; + +error_free: + while (num_fences--) + dma_fence_put(fences[num_fences]); + kfree(fences); + return -ENOMEM; +} + /** * dma_resv_wait_timeout_rcu - Wait on reservation's objects * shared and/or exclusive fences. diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h index d44a77e8a7e34..78881eb6ccc82 100644 --- a/include/linux/dma-resv.h +++ b/include/linux/dma-resv.h @@ -285,6 +285,9 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
+int dma_resv_get_singleton(struct dma_resv *obj, struct dma_fence *extra, + struct dma_fence **result); + long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr, unsigned long timeout);
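To make the intended usage concrete, a minimal kernel-side sketch (not from the series; it assumes the caller may sleep and takes the reservation lock the helper requires):

#include <linux/dma-resv.h>
#include <linux/dma-fence.h>

/* Collapse everything currently attached to a reservation object into a
 * single fence and wait for it to signal. */
static int wait_for_resv_idle(struct dma_resv *resv)
{
    struct dma_fence *fence = NULL;
    int ret;

    dma_resv_lock(resv, NULL);
    ret = dma_resv_get_singleton(resv, NULL, &fence);
    dma_resv_unlock(resv);
    if (ret)
        return ret;

    if (fence) {
        dma_fence_wait(fence, false);
        dma_fence_put(fence);
    }
    return 0;
}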
Modern userspace APIs like Vulkan are built on an explicit synchronization model. This doesn't always play nicely with the implicit synchronization used in the kernel and assumed by X11 and Wayland. The client -> compositor half of the synchronization isn't too bad, at least on intel, because we can control whether or not i915 synchronizes on the buffer and whether or not it's considered written.
The harder part is the compositor -> client synchronization when we get the buffer back from the compositor. We're required to be able to provide the client with a VkSemaphore and VkFence representing the point in time where the window system (compositor and/or display) finished using the buffer. With current APIs, it's very hard to do this in such a way that we don't get confused by the Vulkan driver's access of the buffer. In particular, once we tell the kernel that we're rendering to the buffer again, any CPU waits on the buffer or GPU dependencies will wait on some of the client rendering and not just the compositor.
This new IOCTL solves this problem by allowing us to get a snapshot of the implicit synchronization state of a given dma-buf in the form of a sync file. It's effectively the same as a poll() or I915_GEM_WAIT, only instead of CPU-waiting directly, it encapsulates the wait operation, at the current moment in time, in a sync_file so we can check/wait on it later. As long as the Vulkan driver does the sync_file export from the dma-buf before we re-introduce it for rendering, it will only contain fences from the compositor or display. This allows us to accurately turn it into a VkFence or VkSemaphore without any over-synchronization.
v2 (Jason Ekstrand):
- Use a wrapper dma_fence_array of all fences including the new one when importing an exclusive fence.
v3 (Jason Ekstrand):
- Lock around setting shared fences as well as exclusive
- Mark SIGNAL_SYNC_FILE as a read-write ioctl.
- Initialize ret to 0 in dma_buf_wait_sync_file
v4 (Jason Ekstrand):
- Use the new dma_resv_get_singleton helper
v5 (Jason Ekstrand):
- Rename the IOCTLs to import/export rather than wait/signal
- Drop the WRITE flag and always get/set the exclusive fence
v6 (Jason Ekstrand):
- Drop the sync_file import as it was all-around sketchy and not nearly as useful as export.
- Re-introduce READ/WRITE flag support for export
- Rework the commit message
Signed-off-by: Jason Ekstrand jason@jlekstrand.net --- drivers/dma-buf/dma-buf.c | 55 ++++++++++++++++++++++++++++++++++++ include/uapi/linux/dma-buf.h | 6 ++++ 2 files changed, 61 insertions(+)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index f264b70c383eb..e7f9dd62c19a9 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -20,6 +20,7 @@ #include <linux/debugfs.h> #include <linux/module.h> #include <linux/seq_file.h> +#include <linux/sync_file.h> #include <linux/poll.h> #include <linux/dma-resv.h> #include <linux/mm.h> @@ -362,6 +363,57 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf) return ret; }
+static long dma_buf_export_sync_file(struct dma_buf *dmabuf, + void __user *user_data) +{ + struct dma_buf_sync_file arg; + struct dma_fence *fence = NULL; + struct sync_file *sync_file; + int fd, ret; + + if (copy_from_user(&arg, user_data, sizeof(arg))) + return -EFAULT; + + if (arg.flags & ~DMA_BUF_SYNC_RW) + return -EINVAL; + + fd = get_unused_fd_flags(O_CLOEXEC); + if (fd < 0) + return fd; + + if (arg.flags & DMA_BUF_SYNC_WRITE) { + ret = dma_resv_get_singleton(dmabuf->resv, NULL, &fence); + if (ret) + goto err_put_fd; + } else if (arg.flags & DMA_BUF_SYNC_READ) { + fence = dma_resv_get_excl(dmabuf->resv); + } + + if (!fence) + fence = dma_fence_get_stub(); + + sync_file = sync_file_create(fence); + + dma_fence_put(fence); + + if (!sync_file) { + ret = -EINVAL; + goto err_put_fd; + } + + fd_install(fd, sync_file->file); + + arg.fd = fd; + if (copy_to_user(user_data, &arg, sizeof(arg))) + return -EFAULT; + + return 0; + +err_put_fd: + put_unused_fd(fd); + return ret; +} + static long dma_buf_ioctl(struct file *file, unsigned int cmd, unsigned long arg) { @@ -405,6 +457,9 @@ static long dma_buf_ioctl(struct file *file, case DMA_BUF_SET_NAME_B: return dma_buf_set_name(dmabuf, (const char __user *)arg);
+ case DMA_BUF_IOCTL_EXPORT_SYNC_FILE: + return dma_buf_export_sync_file(dmabuf, (void __user *)arg); + default: return -ENOTTY; } diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h index 7f30393b92c3b..9bce1e8bd31d3 100644 --- a/include/uapi/linux/dma-buf.h +++ b/include/uapi/linux/dma-buf.h @@ -37,6 +37,11 @@ struct dma_buf_sync {
#define DMA_BUF_NAME_LEN 32
+struct dma_buf_sync_file {
+ __u32 flags;
+ __s32 fd;
+};
+
 #define DMA_BUF_BASE 'b'
 #define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
@@ -46,5 +51,6 @@ struct dma_buf_sync {
 #define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *)
 #define DMA_BUF_SET_NAME_A _IOW(DMA_BUF_BASE, 1, u32)
 #define DMA_BUF_SET_NAME_B _IOW(DMA_BUF_BASE, 1, u64)
+#define DMA_BUF_IOCTL_EXPORT_SYNC_FILE _IOWR(DMA_BUF_BASE, 2, struct dma_buf_sync_file)
#endif
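As a quick illustration of the flag semantics in this version (a sketch against the uapi above): DMA_BUF_SYNC_WRITE asks for a fence covering all attached fences, which is what a subsequent write needs, while DMA_BUF_SYNC_READ covers only the exclusive (write) fence, which is enough before a read.

#include <sys/ioctl.h>
#include <linux/dma-buf.h>   /* patched header from this series */

/* Export a sync_file to wait on before reading the buffer's contents. */
static int export_read_fence(int dmabuf_fd)
{
    struct dma_buf_sync_file arg = { .flags = DMA_BUF_SYNC_READ };

    if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &arg) < 0)
        return -1;
    return arg.fd;   /* wait on or import this before reading */
}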
On Mon, Mar 15, 2021 at 4:05 PM Jason Ekstrand jason@jlekstrand.net wrote:
+static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
void __user *user_data)
+{
struct dma_buf_sync_file arg;
struct dma_fence *fence = NULL;
struct sync_file *sync_file;
int fd, ret;
if (copy_from_user(&arg, user_data, sizeof(arg)))
return -EFAULT;
if (arg.flags & ~DMA_BUF_SYNC_RW)
return -EINVAL;
fd = get_unused_fd_flags(O_CLOEXEC);
if (fd < 0)
return fd;
if (arg.flags & DMA_BUF_SYNC_WRITE) {
ret = dma_resv_get_singleton(dmabuf->resv, NULL, &fence);
if (ret)
goto err_put_fd;
} else if (arg.flags & DMA_BUF_SYNC_READ) {
fence = dma_resv_get_excl(dmabuf->resv);
}
if (!fence)
fence = dma_fence_get_stub();
sync_file = sync_file_create(fence);
dma_fence_put(fence);
if (!sync_file) {
ret = -EINVAL;
Should this be -EINVAL or -ENOMEM?
goto err_put_fd;
}
fd_install(fd, sync_file->file);
arg.fd = fd;
if (copy_to_user(user_data, &arg, sizeof(arg)))
return -EFAULT;
return 0;
+err_put_fd:
put_unused_fd(fd);
return ret;
+}
static long dma_buf_ioctl(struct file *file, unsigned int cmd, unsigned long arg) { @@ -405,6 +457,9 @@ static long dma_buf_ioctl(struct file *file, case DMA_BUF_SET_NAME_B: return dma_buf_set_name(dmabuf, (const char __user *)arg);
case DMA_BUF_IOCTL_EXPORT_SYNC_FILE:
return dma_buf_export_sync_file(dmabuf, (void __user *)arg);
default: return -ENOTTY; }
diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h index 7f30393b92c3b..9bce1e8bd31d3 100644 --- a/include/uapi/linux/dma-buf.h +++ b/include/uapi/linux/dma-buf.h @@ -37,6 +37,11 @@ struct dma_buf_sync {
#define DMA_BUF_NAME_LEN 32
+struct dma_buf_sync_file {
__u32 flags;
__s32 fd;
+};
#define DMA_BUF_BASE 'b' #define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
@@ -46,5 +51,6 @@ struct dma_buf_sync { #define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *) #define DMA_BUF_SET_NAME_A _IOW(DMA_BUF_BASE, 1, u32) #define DMA_BUF_SET_NAME_B _IOW(DMA_BUF_BASE, 1, u64) +#define DMA_BUF_IOCTL_EXPORT_SYNC_FILE _IOWR(DMA_BUF_BASE, 2, struct dma_buf_sync_file)
#endif
2.29.2
On 2021-03-16 12:10 a.m., Jason Ekstrand wrote:
On Mon, Mar 15, 2021 at 4:05 PM Jason Ekstrand jason@jlekstrand.net wrote:
Modern userspace APIs like Vulkan are built on an explicit synchronization model. This doesn't always play nicely with the implicit synchronization used in the kernel and assumed by X11 and Wayland. The client -> compositor half of the synchronization isn't too bad, at least on intel, because we can control whether or not i915 synchronizes on the buffer and whether or not it's considered written.
The harder part is the compositor -> client synchronization when we get the buffer back from the compositor. We're required to be able to provide the client with a VkSemaphore and VkFence representing the point in time where the window system (compositor and/or display) finished using the buffer. With current APIs, it's very hard to do this in such a way that we don't get confused by the Vulkan driver's access of the buffer. In particular, once we tell the kernel that we're rendering to the buffer again, any CPU waits on the buffer or GPU dependencies will wait on some of the client rendering and not just the compositor.
This new IOCTL solves this problem by allowing us to get a snapshot of the implicit synchronization state of a given dma-buf in the form of a sync file. It's effectively the same as a poll() or I915_GEM_WAIT only, instead of CPU waiting directly, it encapsulates the wait operation, at the current moment in time, in a sync_file so we can check/wait on it later. As long as the Vulkan driver does the sync_file export from the dma-buf before we re-introduce it for rendering, it will only contain fences from the compositor or display. This allows to accurately turn it into a VkFence or VkSemaphore without any over- synchronization.
v2 (Jason Ekstrand):
- Use a wrapper dma_fence_array of all fences including the new one when importing an exclusive fence.
v3 (Jason Ekstrand):
- Lock around setting shared fences as well as exclusive
- Mark SIGNAL_SYNC_FILE as a read-write ioctl.
- Initialize ret to 0 in dma_buf_wait_sync_file
v4 (Jason Ekstrand):
- Use the new dma_resv_get_singleton helper
v5 (Jason Ekstrand):
- Rename the IOCTLs to import/export rather than wait/signal
- Drop the WRITE flag and always get/set the exclusive fence
v6 (Jason Ekstrand):
- Drop the sync_file import as it was all-around sketchy and not nearly as useful as export.
- Re-introduce READ/WRITE flag support for export
- Rework the commit message
Signed-off-by: Jason Ekstrand jason@jlekstrand.net
drivers/dma-buf/dma-buf.c | 55 ++++++++++++++++++++++++++++++++++++ include/uapi/linux/dma-buf.h | 6 ++++ 2 files changed, 61 insertions(+)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index f264b70c383eb..e7f9dd62c19a9 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c
[...]
@@ -362,6 +363,57 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf) return ret; }
+static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
void __user *user_data)
+{
struct dma_buf_sync_file arg;
struct dma_fence *fence = NULL;
struct sync_file *sync_file;
int fd, ret;
if (copy_from_user(&arg, user_data, sizeof(arg)))
return -EFAULT;
if (arg.flags & ~DMA_BUF_SYNC_RW)
return -EINVAL;
fd = get_unused_fd_flags(O_CLOEXEC);
if (fd < 0)
return fd;
if (arg.flags & DMA_BUF_SYNC_WRITE) {
ret = dma_resv_get_singleton(dmabuf->resv, NULL, &fence);
if (ret)
goto err_put_fd;
} else if (arg.flags & DMA_BUF_SYNC_READ) {
fence = dma_resv_get_excl(dmabuf->resv);
}
if (!fence)
fence = dma_fence_get_stub();
sync_file = sync_file_create(fence);
dma_fence_put(fence);
if (!sync_file) {
ret = -EINVAL;
Should this be -EINVAL or -ENOMEM?
The latter makes more sense to me, since sync_file_create returning NULL is not related to invalid ioctl parameters.
On Tue, Mar 16, 2021 at 3:51 AM Michel Dänzer michel@daenzer.net wrote:
On 2021-03-16 12:10 a.m., Jason Ekstrand wrote:
On Mon, Mar 15, 2021 at 4:05 PM Jason Ekstrand jason@jlekstrand.net wrote:
Modern userspace APIs like Vulkan are built on an explicit synchronization model. This doesn't always play nicely with the implicit synchronization used in the kernel and assumed by X11 and Wayland. The client -> compositor half of the synchronization isn't too bad, at least on intel, because we can control whether or not i915 synchronizes on the buffer and whether or not it's considered written.
The harder part is the compositor -> client synchronization when we get the buffer back from the compositor. We're required to be able to provide the client with a VkSemaphore and VkFence representing the point in time where the window system (compositor and/or display) finished using the buffer. With current APIs, it's very hard to do this in such a way that we don't get confused by the Vulkan driver's access of the buffer. In particular, once we tell the kernel that we're rendering to the buffer again, any CPU waits on the buffer or GPU dependencies will wait on some of the client rendering and not just the compositor.
This new IOCTL solves this problem by allowing us to get a snapshot of the implicit synchronization state of a given dma-buf in the form of a sync file. It's effectively the same as a poll() or I915_GEM_WAIT only, instead of CPU waiting directly, it encapsulates the wait operation, at the current moment in time, in a sync_file so we can check/wait on it later. As long as the Vulkan driver does the sync_file export from the dma-buf before we re-introduce it for rendering, it will only contain fences from the compositor or display. This allows to accurately turn it into a VkFence or VkSemaphore without any over- synchronization.
v2 (Jason Ekstrand):
- Use a wrapper dma_fence_array of all fences including the new one when importing an exclusive fence.
v3 (Jason Ekstrand):
- Lock around setting shared fences as well as exclusive
- Mark SIGNAL_SYNC_FILE as a read-write ioctl.
- Initialize ret to 0 in dma_buf_wait_sync_file
v4 (Jason Ekstrand):
- Use the new dma_resv_get_singleton helper
v5 (Jason Ekstrand):
- Rename the IOCTLs to import/export rather than wait/signal
- Drop the WRITE flag and always get/set the exclusive fence
v6 (Jason Ekstrand):
- Drop the sync_file import as it was all-around sketchy and not nearly as useful as export.
- Re-introduce READ/WRITE flag support for export
- Rework the commit message
Signed-off-by: Jason Ekstrand jason@jlekstrand.net
drivers/dma-buf/dma-buf.c | 55 ++++++++++++++++++++++++++++++++++++ include/uapi/linux/dma-buf.h | 6 ++++ 2 files changed, 61 insertions(+)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index f264b70c383eb..e7f9dd62c19a9 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c
[...]
@@ -362,6 +363,57 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf) return ret; }
+static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
void __user *user_data)
+{
struct dma_buf_sync_file arg;
struct dma_fence *fence = NULL;
struct sync_file *sync_file;
int fd, ret;
if (copy_from_user(&arg, user_data, sizeof(arg)))
return -EFAULT;
if (arg.flags & ~DMA_BUF_SYNC_RW)
return -EINVAL;
fd = get_unused_fd_flags(O_CLOEXEC);
if (fd < 0)
return fd;
if (arg.flags & DMA_BUF_SYNC_WRITE) {
ret = dma_resv_get_singleton(dmabuf->resv, NULL, &fence);
if (ret)
goto err_put_fd;
} else if (arg.flags & DMA_BUF_SYNC_READ) {
fence = dma_resv_get_excl(dmabuf->resv);
}
if (!fence)
fence = dma_fence_get_stub();
sync_file = sync_file_create(fence);
dma_fence_put(fence);
if (!sync_file) {
ret = -EINVAL;
Should this be -EINVAL or -ENOMEM?
The latter makes more sense to me, since sync_file_create returning NULL is not related to invalid ioctl parameters.
I've switched to -ENOMEM. It'll be part of v8 whenever I send it out. I'd like to get some "real" review first.
--Jason
Hi Jason,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on tegra-drm/drm/tegra/for-next] [also build test ERROR on linus/master v5.12-rc3 next-20210315] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Jason-Ekstrand/dma-buf-add-dma_fenc... base: git://anongit.freedesktop.org/tegra/linux.git drm/tegra/for-next config: xtensa-randconfig-r022-20210315 (attached as .config) compiler: xtensa-linux-gcc (GCC) 9.3.0 reproduce (this is a W=1 build): wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # https://github.com/0day-ci/linux/commit/570269f5d3ec3a13936fa2224682d39c8a03... git remote add linux-review https://github.com/0day-ci/linux git fetch --no-tags linux-review Jason-Ekstrand/dma-buf-add-dma_fence_array_for_each-v2/20210316-060509 git checkout 570269f5d3ec3a13936fa2224682d39c8a037126 # save the attached .config to linux build tree COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=xtensa
If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot lkp@intel.com
All errors (new ones prefixed by >>):
xtensa-linux-ld: drivers/dma-buf/dma-buf.o: in function `dma_buf_dynamic_attach':
dma-buf.c:(.text+0xf4c): undefined reference to `sync_file_create'
xtensa-linux-ld: drivers/dma-buf/dma-buf.o: in function `dma_buf_unmap_attachment': dma-buf.c:(.text+0x10b0): undefined reference to `sync_file_create'
--- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
Hi Jason,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on tegra-drm/drm/tegra/for-next] [also build test ERROR on linus/master v5.12-rc3 next-20210315] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Jason-Ekstrand/dma-buf-add-dma_fenc... base: git://anongit.freedesktop.org/tegra/linux.git drm/tegra/for-next config: parisc-randconfig-s031-20210315 (attached as .config) compiler: hppa-linux-gcc (GCC) 9.3.0 reproduce: wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # apt-get install sparse # sparse version: v0.6.3-277-gc089cd2d-dirty # https://github.com/0day-ci/linux/commit/570269f5d3ec3a13936fa2224682d39c8a03... git remote add linux-review https://github.com/0day-ci/linux git fetch --no-tags linux-review Jason-Ekstrand/dma-buf-add-dma_fence_array_for_each-v2/20210316-060509 git checkout 570269f5d3ec3a13936fa2224682d39c8a037126 # save the attached .config to linux build tree COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=parisc
If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot lkp@intel.com
All errors (new ones prefixed by >>):
hppa-linux-ld: drivers/dma-buf/dma-buf.o: in function `dma_buf_ioctl':
(.text+0x1944): undefined reference to `sync_file_create'
--- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
I-G-T tests: https://lists.freedesktop.org/archives/igt-dev/2021-March/029825.html
On Mon, Mar 15, 2021 at 4:04 PM Jason Ekstrand jason@jlekstrand.net wrote:
Modern userspace APIs like Vulkan are built on an explicit synchronization model. This doesn't always play nicely with the implicit synchronization used in the kernel and assumed by X11 and Wayland. The client -> compositor half of the synchronization isn't too bad, at least on intel, because we can control whether or not i915 synchronizes on the buffer and whether or not it's considered written.
The harder part is the compositor -> client synchronization when we get the buffer back from the compositor. We're required to be able to provide the client with a VkSemaphore and VkFence representing the point in time where the window system (compositor and/or display) finished using the buffer. With current APIs, it's very hard to do this in such a way that we don't get confused by the Vulkan driver's access of the buffer. In particular, once we tell the kernel that we're rendering to the buffer again, any CPU waits on the buffer or GPU dependencies will wait on some of the client rendering and not just the compositor.
This new IOCTL solves this problem by allowing us to get a snapshot of the implicit synchronization state of a given dma-buf in the form of a sync file. It's effectively the same as a poll() or I915_GEM_WAIT only, instead of CPU waiting directly, it encapsulates the wait operation, at the current moment in time, in a sync_file so we can check/wait on it later. As long as the Vulkan driver does the sync_file export from the dma-buf before we re-introduce it for rendering, it will only contain fences from the compositor or display. This allows to accurately turn it into a VkFence or VkSemaphore without any over- synchronization.
Mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037
Cc: Christian König christian.koenig@amd.com Cc: Michel Dänzer michel@daenzer.net Cc: Dave Airlie airlied@redhat.com Cc: Bas Nieuwenhuizen bas@basnieuwenhuizen.nl Cc: Daniel Stone daniels@collabora.com
Christian König (2): dma-buf: add dma_fence_array_for_each (v2) dma-buf: add dma_resv_get_singleton (v2)
Jason Ekstrand (1): dma-buf: Add an API for exporting sync files (v6)
drivers/dma-buf/dma-buf.c | 55 ++++++++++++++ drivers/dma-buf/dma-fence-array.c | 27 +++++++ drivers/dma-buf/dma-resv.c | 118 ++++++++++++++++++++++++++++++ include/linux/dma-fence-array.h | 17 +++++ include/linux/dma-resv.h | 3 + include/uapi/linux/dma-buf.h | 6 ++ 6 files changed, 226 insertions(+)
-- 2.29.2
Modern userspace APIs like Vulkan are built on an explicit synchronization model. This doesn't always play nicely with the implicit synchronization used in the kernel and assumed by X11 and Wayland. The client -> compositor half of the synchronization isn't too bad, at least on intel, because we can control whether or not i915 synchronizes on the buffer and whether or not it's considered written.
The harder part is the compositor -> client synchronization when we get the buffer back from the compositor. We're required to be able to provide the client with a VkSemaphore and VkFence representing the point in time where the window system (compositor and/or display) finished using the buffer. With current APIs, it's very hard to do this in such a way that we don't get confused by the Vulkan driver's access of the buffer. In particular, once we tell the kernel that we're rendering to the buffer again, any CPU waits on the buffer or GPU dependencies will wait on some of the client rendering and not just the compositor.
This new IOCTL solves this problem by allowing us to get a snapshot of the implicit synchronization state of a given dma-buf in the form of a sync file. It's effectively the same as a poll() or I915_GEM_WAIT, only instead of CPU waiting directly, it encapsulates the wait operation, at the current moment in time, in a sync_file so we can check/wait on it later. As long as the Vulkan driver does the sync_file export from the dma-buf before we re-introduce it for rendering, it will only contain fences from the compositor or display. This allows us to accurately turn it into a VkFence or VkSemaphore without any over-synchronization.
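For illustration only (not part of this series), a userspace caller of the new ioctl might look roughly like the sketch below. It assumes a uapi header that already carries the new struct dma_buf_sync_file and DMA_BUF_IOCTL_EXPORT_SYNC_FILE definitions; the helper name and error handling are invented for the example.

/* Sketch: snapshot the fences we would have to wait on before writing
 * to the dma-buf again, e.g. after the compositor hands it back. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

static int export_sync_file(int dmabuf_fd)
{
	struct dma_buf_sync_file arg = {
		/* DMA_BUF_SYNC_WRITE: we intend to write next, so ask for a
		 * sync_file covering all fences attached to the dma-buf. */
		.flags = DMA_BUF_SYNC_WRITE,
		.fd = -1,
	};

	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &arg) < 0) {
		fprintf(stderr, "export failed: %s\n", strerror(errno));
		return -1;
	}

	/* arg.fd now holds a sync_file fd; poll() it, or import it into a
	 * VkFence/VkSemaphore. */
	return arg.fd;
}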
The only remaining TODO item as far as I know is that some kernel CI system is sending me nastygrams about build failures. With this change, dma-buf now has a dependency on sync_file and I really don't know how best to solve it. Should we make sync_file no longer optional? Should I hide the new ioctl behind a #define? If so, what #define? I'm a bit lost when it comes to Kconfig, I'm afraid.
Mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037 IGT tests: https://lists.freedesktop.org/archives/igt-dev/2021-March/029825.html
Cc: Christian König christian.koenig@amd.com Cc: Michel Dänzer michel@daenzer.net Cc: Dave Airlie airlied@redhat.com Cc: Bas Nieuwenhuizen bas@basnieuwenhuizen.nl Cc: Daniel Stone daniels@collabora.com
Christian König (2): dma-buf: add dma_fence_array_for_each (v2) dma-buf: add dma_resv_get_singleton (v2)
Jason Ekstrand (1): dma-buf: Add an API for exporting sync files (v6)
drivers/dma-buf/dma-buf.c | 55 ++++++++++++++ drivers/dma-buf/dma-fence-array.c | 27 +++++++ drivers/dma-buf/dma-resv.c | 118 ++++++++++++++++++++++++++++++ include/linux/dma-fence-array.h | 17 +++++ include/linux/dma-resv.h | 3 + include/uapi/linux/dma-buf.h | 6 ++ 6 files changed, 226 insertions(+)
From: Christian König ckoenig.leichtzumerken@gmail.com
Add a helper to iterate over all fences in a dma_fence_array object.
v2 (Jason Ekstrand) - Return NULL from dma_fence_array_first if head == NULL. This matches the iterator behavior of dma_fence_chain_for_each in that it iterates zero times if head == NULL. - Return NULL from dma_fence_array_next if index >= array->num_fences.
Signed-off-by: Jason Ekstrand jason@jlekstrand.net --- drivers/dma-buf/dma-fence-array.c | 27 +++++++++++++++++++++++++++ include/linux/dma-fence-array.h | 17 +++++++++++++++++ 2 files changed, 44 insertions(+)
diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c index d3fbd950be944..2ac1afc697d0f 100644 --- a/drivers/dma-buf/dma-fence-array.c +++ b/drivers/dma-buf/dma-fence-array.c @@ -201,3 +201,30 @@ bool dma_fence_match_context(struct dma_fence *fence, u64 context) return true; } EXPORT_SYMBOL(dma_fence_match_context); + +struct dma_fence *dma_fence_array_first(struct dma_fence *head) +{ + struct dma_fence_array *array; + + if (!head) + return NULL; + + array = to_dma_fence_array(head); + if (!array) + return head; + + return array->fences[0]; +} +EXPORT_SYMBOL(dma_fence_array_first); + +struct dma_fence *dma_fence_array_next(struct dma_fence *head, + unsigned int index) +{ + struct dma_fence_array *array = to_dma_fence_array(head); + + if (!array || index >= array->num_fences) + return NULL; + + return array->fences[index]; +} +EXPORT_SYMBOL(dma_fence_array_next); diff --git a/include/linux/dma-fence-array.h b/include/linux/dma-fence-array.h index 303dd712220fd..588ac8089dd61 100644 --- a/include/linux/dma-fence-array.h +++ b/include/linux/dma-fence-array.h @@ -74,6 +74,19 @@ to_dma_fence_array(struct dma_fence *fence) return container_of(fence, struct dma_fence_array, base); }
+/** + * dma_fence_array_for_each - iterate over all fences in array + * @fence: current fence + * @index: index into the array + * @head: potential dma_fence_array object + * + * Test if @array is a dma_fence_array object and if yes iterate over all fences + * in the array. If not just iterate over the fence in @array itself. + */ +#define dma_fence_array_for_each(fence, index, head) \ + for (index = 0, fence = dma_fence_array_first(head); fence; \ + ++(index), fence = dma_fence_array_next(head, index)) + struct dma_fence_array *dma_fence_array_create(int num_fences, struct dma_fence **fences, u64 context, unsigned seqno, @@ -81,4 +94,8 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
bool dma_fence_match_context(struct dma_fence *fence, u64 context);
+struct dma_fence *dma_fence_array_first(struct dma_fence *head); +struct dma_fence *dma_fence_array_next(struct dma_fence *head, + unsigned int index); + #endif /* __LINUX_DMA_FENCE_ARRAY_H */
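As a usage illustration (not taken from the series), a hypothetical in-kernel caller of the new iterator could flatten out a possibly-composite fence like this; the helper name is invented for the example.

#include <linux/dma-fence.h>
#include <linux/dma-fence-array.h>

/* Count the unsignaled fences behind @head, which may or may not be a
 * dma_fence_array; the macro iterates exactly once for a plain fence. */
static unsigned int count_unsignaled(struct dma_fence *head)
{
	struct dma_fence *fence;
	unsigned int index, count = 0;

	dma_fence_array_for_each(fence, index, head)
		if (!dma_fence_is_signaled(fence))
			count++;

	return count;
}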
Add a helper function to get a single fence representing all fences in a dma_resv object.
This fence is either the only one in the object or all of the not-yet-signaled fences of the object flattened out into a dma_fence_array.
v2 (Jason Ekstrand): - Take reference of fences both for creating the dma_fence_array and in the case where we return one fence. - Handle the case where dma_resv_get_list() returns NULL
v3 (Jason Ekstrand): - Add an _rcu suffix because it is read-only - Rewrite to use dma_resv_get_fences_rcu so it's RCU-safe - Add an EXPORT_SYMBOL_GPL declaration - Re-author the patch to Jason since very little is left of Christian König's original patch
Signed-off-by: Jason Ekstrand jason@jlekstrand.net --- drivers/dma-buf/dma-resv.c | 104 +++++++++++++++++++++++++++++++++++++ include/linux/dma-resv.h | 2 + 2 files changed, 106 insertions(+)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index 6ddbeb5dfbf65..5dd4c38bd9cb4 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -33,6 +33,8 @@ */
#include <linux/dma-resv.h> +#include <linux/dma-fence-chain.h> +#include <linux/dma-fence-array.h> #include <linux/export.h> #include <linux/mm.h> #include <linux/sched/mm.h> @@ -49,6 +51,19 @@ * write-side updates. */
+/** + * dma_fence_deep_dive_for_each - deep dive into the fence containers + * @fence: resulting fence + * @chain: variable for a dma_fence_chain + * @index: index into a dma_fence_array + * @head: starting point + * + * Helper to deep dive into the fence containers for flattening them. + */ +#define dma_fence_deep_dive_for_each(fence, chain, index, head) \ + dma_fence_chain_for_each(chain, head) \ + dma_fence_array_for_each(fence, index, chain) + DEFINE_WD_CLASS(reservation_ww_class); EXPORT_SYMBOL(reservation_ww_class);
@@ -517,6 +532,95 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj, } EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
+/** + * dma_resv_get_singleton - get a single fence for the dma_resv object + * @obj: the reservation object + * @extra: extra fence to add to the resulting array + * @result: resulting dma_fence + * + * Get a single fence representing all unsignaled fences in the dma_resv object + * plus the given extra fence. If we got only one fence return a new + * reference to that, otherwise return a dma_fence_array object. + * + * RETURNS + * Returns -NOMEM if allocations fail, zero otherwise. + */ +int dma_resv_get_singleton_rcu(struct dma_resv *obj, struct dma_fence **result) +{ + struct dma_fence **resv_fences, *fence, *chain, **fences; + struct dma_fence_array *array; + unsigned int num_resv_fences, num_fences; + unsigned int ret, i, j; + + ret = dma_resv_get_fences_rcu(obj, NULL, &num_resv_fences, &resv_fences); + if (ret) + return ret; + + num_fences = 0; + *result = NULL; + + if (num_resv_fences == 0) + return 0; + + for (i = 0; i < num_resv_fences; ++i) { + dma_fence_deep_dive_for_each(fence, chain, j, resv_fences[i]) { + if (dma_fence_is_signaled(fence)) + continue; + + *result = fence; + ++num_fences; + } + } + + if (num_fences <= 1) { + *result = dma_fence_get(*result); + goto put_resv_fences; + } + + fences = kmalloc_array(num_fences, sizeof(struct dma_fence*), + GFP_KERNEL); + if (!fences) { + *result = NULL; + ret = -ENOMEM; + goto put_resv_fences; + } + + num_fences = 0; + for (i = 0; i < num_resv_fences; ++i) { + dma_fence_deep_dive_for_each(fence, chain, j, resv_fences[i]) { + if (!dma_fence_is_signaled(fence)) + fences[num_fences++] = dma_fence_get(fence); + } + } + + if (num_fences <= 1) { + *result = num_fences ? fences[0] : NULL; + kfree(fences); + goto put_resv_fences; + } + + array = dma_fence_array_create(num_fences, fences, + dma_fence_context_alloc(1), + 1, false); + if (array) { + *result = &array->base; + } else { + *result = NULL; + while (num_fences--) + dma_fence_put(fences[num_fences]); + kfree(fences); + ret = -ENOMEM; + } + +put_resv_fences: + while (num_resv_fences--) + dma_fence_put(resv_fences[num_resv_fences]); + kfree(resv_fences); + + return ret; +} +EXPORT_SYMBOL_GPL(dma_resv_get_singleton_rcu); + /** * dma_resv_wait_timeout_rcu - Wait on reservation's objects * shared and/or exclusive fences. diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h index d44a77e8a7e34..5f82894fed0b9 100644 --- a/include/linux/dma-resv.h +++ b/include/linux/dma-resv.h @@ -285,6 +285,8 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
+int dma_resv_get_singleton_rcu(struct dma_resv *obj, struct dma_fence **result); + long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr, unsigned long timeout);
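For context (again not part of the patch), a hypothetical kernel caller of the new helper might collapse a dma_resv into one waitable fence along these lines; the wrapper name is made up for the example.

#include <linux/dma-resv.h>
#include <linux/dma-fence.h>

/* Wait for everything currently attached to @resv. */
static long wait_for_resv(struct dma_resv *resv)
{
	struct dma_fence *fence = NULL;
	long err;
	int ret;

	ret = dma_resv_get_singleton_rcu(resv, &fence);
	if (ret)
		return ret;	/* -ENOMEM on allocation failure */

	if (!fence)
		return 0;	/* nothing unsignaled was attached */

	err = dma_fence_wait(fence, true);	/* interruptible wait */
	dma_fence_put(fence);
	return err;
}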
Modern userspace APIs like Vulkan are built on an explicit synchronization model. This doesn't always play nicely with the implicit synchronization used in the kernel and assumed by X11 and Wayland. The client -> compositor half of the synchronization isn't too bad, at least on intel, because we can control whether or not i915 synchronizes on the buffer and whether or not it's considered written.
The harder part is the compositor -> client synchronization when we get the buffer back from the compositor. We're required to be able to provide the client with a VkSemaphore and VkFence representing the point in time where the window system (compositor and/or display) finished using the buffer. With current APIs, it's very hard to do this in such a way that we don't get confused by the Vulkan driver's access of the buffer. In particular, once we tell the kernel that we're rendering to the buffer again, any CPU waits on the buffer or GPU dependencies will wait on some of the client rendering and not just the compositor.
This new IOCTL solves this problem by allowing us to get a snapshot of the implicit synchronization state of a given dma-buf in the form of a sync file. It's effectively the same as a poll() or I915_GEM_WAIT, only instead of CPU waiting directly, it encapsulates the wait operation, at the current moment in time, in a sync_file so we can check/wait on it later. As long as the Vulkan driver does the sync_file export from the dma-buf before we re-introduce it for rendering, it will only contain fences from the compositor or display. This allows us to accurately turn it into a VkFence or VkSemaphore without any over-synchronization.
v2 (Jason Ekstrand): - Use a wrapper dma_fence_array of all fences including the new one when importing an exclusive fence.
v3 (Jason Ekstrand): - Lock around setting shared fences as well as exclusive - Mark SIGNAL_SYNC_FILE as a read-write ioctl. - Initialize ret to 0 in dma_buf_wait_sync_file
v4 (Jason Ekstrand): - Use the new dma_resv_get_singleton helper
v5 (Jason Ekstrand): - Rename the IOCTLs to import/export rather than wait/signal - Drop the WRITE flag and always get/set the exclusive fence
v6 (Jason Ekstrand): - Drop the sync_file import as it was all-around sketchy and not nearly as useful as export. - Re-introduce READ/WRITE flag support for export - Rework the commit message
v7 (Jason Ekstrand): - Require at least one sync flag - Fix a refcounting bug: dma_resv_get_excl() doesn't take a reference - Use _rcu helpers since we're accessing the dma_resv read-only
Signed-off-by: Jason Ekstrand jason@jlekstrand.net --- drivers/dma-buf/dma-buf.c | 58 ++++++++++++++++++++++++++++++++++++ include/uapi/linux/dma-buf.h | 6 ++++ 2 files changed, 64 insertions(+)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index f264b70c383eb..69200d019ac90 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -20,6 +20,7 @@ #include <linux/debugfs.h> #include <linux/module.h> #include <linux/seq_file.h> +#include <linux/sync_file.h> #include <linux/poll.h> #include <linux/dma-resv.h> #include <linux/mm.h> @@ -362,6 +363,60 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf) return ret; }
+static long dma_buf_export_sync_file(struct dma_buf *dmabuf, + void __user *user_data) +{ + struct dma_buf_sync_file arg; + struct dma_fence *fence = NULL; + struct sync_file *sync_file; + int fd, ret; + + if (copy_from_user(&arg, user_data, sizeof(arg))) + return -EFAULT; + + if (arg.flags & ~DMA_BUF_SYNC_RW) + return -EINVAL; + + if ((arg.flags & DMA_BUF_SYNC_RW) == 0) + return -EINVAL; + + fd = get_unused_fd_flags(O_CLOEXEC); + if (fd < 0) + return fd; + + if (arg.flags & DMA_BUF_SYNC_WRITE) { + ret = dma_resv_get_singleton_rcu(dmabuf->resv, &fence); + if (ret) + goto err_put_fd; + } else if (arg.flags & DMA_BUF_SYNC_READ) { + fence = dma_resv_get_excl_rcu(dmabuf->resv); + } + + if (!fence) + fence = dma_fence_get_stub(); + + sync_file = sync_file_create(fence); + + dma_fence_put(fence); + + if (!sync_file) { + ret = -EINVAL; + goto err_put_fd; + } + + fd_install(fd, sync_file->file); + + arg.fd = fd; + if (copy_to_user(user_data, &arg, sizeof(arg))) + return -EFAULT; + + return 0; + +err_put_fd: + put_unused_fd(fd); + return ret; +} + static long dma_buf_ioctl(struct file *file, unsigned int cmd, unsigned long arg) { @@ -405,6 +460,9 @@ static long dma_buf_ioctl(struct file *file, case DMA_BUF_SET_NAME_B: return dma_buf_set_name(dmabuf, (const char __user *)arg);
+ case DMA_BUF_IOCTL_EXPORT_SYNC_FILE: + return dma_buf_export_sync_file(dmabuf, (void __user *)arg); + default: return -ENOTTY; } diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h index 7f30393b92c3b..9bce1e8bd31d3 100644 --- a/include/uapi/linux/dma-buf.h +++ b/include/uapi/linux/dma-buf.h @@ -37,6 +37,11 @@ struct dma_buf_sync {
#define DMA_BUF_NAME_LEN 32
+struct dma_buf_sync_file { + __u32 flags; + __s32 fd; +}; + #define DMA_BUF_BASE 'b' #define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
@@ -46,5 +51,6 @@ struct dma_buf_sync { #define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *) #define DMA_BUF_SET_NAME_A _IOW(DMA_BUF_BASE, 1, u32) #define DMA_BUF_SET_NAME_B _IOW(DMA_BUF_BASE, 1, u64) +#define DMA_BUF_IOCTL_EXPORT_SYNC_FILE _IOWR(DMA_BUF_BASE, 2, struct dma_buf_sync_file)
#endif
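To connect this back to the Vulkan side of the commit message: one way the exported sync_file could become a VkFence is the standard VK_KHR_external_fence_fd import path. This is a hedged sketch, not something defined by this series; only DMA_BUF_IOCTL_EXPORT_SYNC_FILE comes from the patch, the rest is core Vulkan API.

#include <vulkan/vulkan.h>

/* Give @fence a temporary payload backed by the sync_file fd we just
 * exported from the dma-buf. On success, ownership of @sync_fd moves to
 * the Vulkan implementation. */
static VkResult fence_from_sync_file(VkDevice dev, VkFence fence, int sync_fd,
				     PFN_vkImportFenceFdKHR import_fence_fd)
{
	const VkImportFenceFdInfoKHR info = {
		.sType = VK_STRUCTURE_TYPE_IMPORT_FENCE_FD_INFO_KHR,
		.fence = fence,
		.flags = VK_FENCE_IMPORT_TEMPORARY_BIT,	/* required for sync FDs */
		.handleType = VK_EXTERNAL_FENCE_HANDLE_TYPE_SYNC_FD_BIT,
		.fd = sync_fd,
	};

	return import_fence_fd(dev, &info);
}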
Hi Jason,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on tegra-drm/drm/tegra/for-next] [also build test ERROR on linus/master v5.12-rc3 next-20210316] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Jason-Ekstrand/dma-buf-add-dma_fenc... base: git://anongit.freedesktop.org/tegra/linux.git drm/tegra/for-next config: h8300-randconfig-r006-20210316 (attached as .config) compiler: h8300-linux-gcc (GCC) 9.3.0 reproduce (this is a W=1 build): wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # https://github.com/0day-ci/linux/commit/c5e7feadea6d3792fc5e76c66944982eb5f4... git remote add linux-review https://github.com/0day-ci/linux git fetch --no-tags linux-review Jason-Ekstrand/dma-buf-add-dma_fence_array_for_each-v2/20210316-135402 git checkout c5e7feadea6d3792fc5e76c66944982eb5f4780b # save the attached .config to linux build tree COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=h8300
If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot lkp@intel.com
All errors (new ones prefixed by >>):
h8300-linux-ld: drivers/dma-buf/dma-buf.o: in function `.L385':
dma-buf.c:(.text+0x18e1): undefined reference to `sync_file_create'
--- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
Modern userspace APIs like Vulkan are built on an explicit synchronization model. This doesn't always play nicely with the implicit synchronization used in the kernel and assumed by X11 and Wayland. The client -> compositor half of the synchronization isn't too bad, at least on intel, because we can control whether or not i915 synchronizes on the buffer and whether or not it's considered written.
The harder part is the compositor -> client synchronization when we get the buffer back from the compositor. We're required to be able to provide the client with a VkSemaphore and VkFence representing the point in time where the window system (compositor and/or display) finished using the buffer. With current APIs, it's very hard to do this in such a way that we don't get confused by the Vulkan driver's access of the buffer. In particular, once we tell the kernel that we're rendering to the buffer again, any CPU waits on the buffer or GPU dependencies will wait on some of the client rendering and not just the compositor.
This new IOCTL solves this problem by allowing us to get a snapshot of the implicit synchronization state of a given dma-buf in the form of a sync file. It's effectively the same as a poll() or I915_GEM_WAIT, only instead of CPU waiting directly, it encapsulates the wait operation, at the current moment in time, in a sync_file so we can check/wait on it later. As long as the Vulkan driver does the sync_file export from the dma-buf before we re-introduce it for rendering, it will only contain fences from the compositor or display. This allows us to accurately turn it into a VkFence or VkSemaphore without any over-synchronization.
Mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037 IGT tests: https://lists.freedesktop.org/archives/igt-dev/2021-March/029825.html
Cc: Christian König christian.koenig@amd.com Cc: Michel Dänzer michel@daenzer.net Cc: Dave Airlie airlied@redhat.com Cc: Bas Nieuwenhuizen bas@basnieuwenhuizen.nl Cc: Daniel Stone daniels@collabora.com
Christian König (2): dma-buf: add dma_fence_array_for_each (v2) dma-buf: add dma_resv_get_singleton (v2)
Jason Ekstrand (1): dma-buf: Add an API for exporting sync files (v8)
drivers/dma-buf/dma-buf.c | 55 ++++++++++++++ drivers/dma-buf/dma-fence-array.c | 27 +++++++ drivers/dma-buf/dma-resv.c | 118 ++++++++++++++++++++++++++++++ include/linux/dma-fence-array.h | 17 +++++ include/linux/dma-resv.h | 3 + include/uapi/linux/dma-buf.h | 6 ++ 6 files changed, 226 insertions(+)
From: Christian König ckoenig.leichtzumerken@gmail.com
Add a helper to iterate over all fences in a dma_fence_array object.
v2 (Jason Ekstrand) - Return NULL from dma_fence_array_first if head == NULL. This matches the iterator behavior of dma_fence_chain_for_each in that it iterates zero times if head == NULL. - Return NULL from dma_fence_array_next if index >= array->num_fences.
Signed-off-by: Jason Ekstrand jason@jlekstrand.net --- drivers/dma-buf/dma-fence-array.c | 27 +++++++++++++++++++++++++++ include/linux/dma-fence-array.h | 17 +++++++++++++++++ 2 files changed, 44 insertions(+)
diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c index d3fbd950be944..2ac1afc697d0f 100644 --- a/drivers/dma-buf/dma-fence-array.c +++ b/drivers/dma-buf/dma-fence-array.c @@ -201,3 +201,30 @@ bool dma_fence_match_context(struct dma_fence *fence, u64 context) return true; } EXPORT_SYMBOL(dma_fence_match_context); + +struct dma_fence *dma_fence_array_first(struct dma_fence *head) +{ + struct dma_fence_array *array; + + if (!head) + return NULL; + + array = to_dma_fence_array(head); + if (!array) + return head; + + return array->fences[0]; +} +EXPORT_SYMBOL(dma_fence_array_first); + +struct dma_fence *dma_fence_array_next(struct dma_fence *head, + unsigned int index) +{ + struct dma_fence_array *array = to_dma_fence_array(head); + + if (!array || index >= array->num_fences) + return NULL; + + return array->fences[index]; +} +EXPORT_SYMBOL(dma_fence_array_next); diff --git a/include/linux/dma-fence-array.h b/include/linux/dma-fence-array.h index 303dd712220fd..588ac8089dd61 100644 --- a/include/linux/dma-fence-array.h +++ b/include/linux/dma-fence-array.h @@ -74,6 +74,19 @@ to_dma_fence_array(struct dma_fence *fence) return container_of(fence, struct dma_fence_array, base); }
+/** + * dma_fence_array_for_each - iterate over all fences in array + * @fence: current fence + * @index: index into the array + * @head: potential dma_fence_array object + * + * Test if @array is a dma_fence_array object and if yes iterate over all fences + * in the array. If not just iterate over the fence in @array itself. + */ +#define dma_fence_array_for_each(fence, index, head) \ + for (index = 0, fence = dma_fence_array_first(head); fence; \ + ++(index), fence = dma_fence_array_next(head, index)) + struct dma_fence_array *dma_fence_array_create(int num_fences, struct dma_fence **fences, u64 context, unsigned seqno, @@ -81,4 +94,8 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
bool dma_fence_match_context(struct dma_fence *fence, u64 context);
+struct dma_fence *dma_fence_array_first(struct dma_fence *head); +struct dma_fence *dma_fence_array_next(struct dma_fence *head, + unsigned int index); + #endif /* __LINUX_DMA_FENCE_ARRAY_H */
Am 17.03.21 um 23:19 schrieb Jason Ekstrand:
From: Christian König ckoenig.leichtzumerken@gmail.com
Add a helper to iterate over all fences in a dma_fence_array object.
v2 (Jason Ekstrand)
- Return NULL from dma_fence_array_first if head == NULL. This matches the iterator behavior of dma_fence_chain_for_each in that it iterates zero times if head == NULL.
- Return NULL from dma_fence_array_next if index > array->num_fences.
Is there any reason you send this patch alone out once more?
I don't see any changes compared to the last version.
Christian.
Signed-off-by: Jason Ekstrand jason@jlekstrand.net
drivers/dma-buf/dma-fence-array.c | 27 +++++++++++++++++++++++++++ include/linux/dma-fence-array.h | 17 +++++++++++++++++ 2 files changed, 44 insertions(+)
diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c index d3fbd950be944..2ac1afc697d0f 100644 --- a/drivers/dma-buf/dma-fence-array.c +++ b/drivers/dma-buf/dma-fence-array.c @@ -201,3 +201,30 @@ bool dma_fence_match_context(struct dma_fence *fence, u64 context) return true; } EXPORT_SYMBOL(dma_fence_match_context);
+struct dma_fence *dma_fence_array_first(struct dma_fence *head) +{
- struct dma_fence_array *array;
- if (!head)
return NULL;
- array = to_dma_fence_array(head);
- if (!array)
return head;
- return array->fences[0];
+} +EXPORT_SYMBOL(dma_fence_array_first);
+struct dma_fence *dma_fence_array_next(struct dma_fence *head,
unsigned int index)
+{
- struct dma_fence_array *array = to_dma_fence_array(head);
- if (!array || index >= array->num_fences)
return NULL;
- return array->fences[index];
+} +EXPORT_SYMBOL(dma_fence_array_next); diff --git a/include/linux/dma-fence-array.h b/include/linux/dma-fence-array.h index 303dd712220fd..588ac8089dd61 100644 --- a/include/linux/dma-fence-array.h +++ b/include/linux/dma-fence-array.h @@ -74,6 +74,19 @@ to_dma_fence_array(struct dma_fence *fence) return container_of(fence, struct dma_fence_array, base); }
+/**
- dma_fence_array_for_each - iterate over all fences in array
- @fence: current fence
- @index: index into the array
- @head: potential dma_fence_array object
- Test if @array is a dma_fence_array object and if yes iterate over all fences
- in the array. If not just iterate over the fence in @array itself.
- */
+#define dma_fence_array_for_each(fence, index, head) \
- for (index = 0, fence = dma_fence_array_first(head); fence; \
++(index), fence = dma_fence_array_next(head, index))
- struct dma_fence_array *dma_fence_array_create(int num_fences, struct dma_fence **fences, u64 context, unsigned seqno,
@@ -81,4 +94,8 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
bool dma_fence_match_context(struct dma_fence *fence, u64 context);
+struct dma_fence *dma_fence_array_first(struct dma_fence *head); +struct dma_fence *dma_fence_array_next(struct dma_fence *head,
unsigned int index);
- #endif /* __LINUX_DMA_FENCE_ARRAY_H */
On Thu, Mar 18, 2021 at 10:38:11AM +0100, Christian König wrote:
Am 17.03.21 um 23:19 schrieb Jason Ekstrand:
From: Christian König ckoenig.leichtzumerken@gmail.com
Add a helper to iterate over all fences in a dma_fence_array object.
v2 (Jason Ekstrand)
- Return NULL from dma_fence_array_first if head == NULL. This matches the iterator behavior of dma_fence_chain_for_each in that it iterates zero times if head == NULL.
- Return NULL from dma_fence_array_next if index > array->num_fences.
Is there any reason you send this patch alone out once more?
I don't see any changes compared to the last version.
Last patch has changed. Also I think mail delivery is a bit wobbly right now. -Daniel
Christian.
Signed-off-by: Jason Ekstrand jason@jlekstrand.net
drivers/dma-buf/dma-fence-array.c | 27 +++++++++++++++++++++++++++ include/linux/dma-fence-array.h | 17 +++++++++++++++++ 2 files changed, 44 insertions(+)
diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c index d3fbd950be944..2ac1afc697d0f 100644 --- a/drivers/dma-buf/dma-fence-array.c +++ b/drivers/dma-buf/dma-fence-array.c @@ -201,3 +201,30 @@ bool dma_fence_match_context(struct dma_fence *fence, u64 context) return true; } EXPORT_SYMBOL(dma_fence_match_context);
+struct dma_fence *dma_fence_array_first(struct dma_fence *head) +{
- struct dma_fence_array *array;
- if (!head)
return NULL;
- array = to_dma_fence_array(head);
- if (!array)
return head;
- return array->fences[0];
+} +EXPORT_SYMBOL(dma_fence_array_first);
+struct dma_fence *dma_fence_array_next(struct dma_fence *head,
unsigned int index)
+{
- struct dma_fence_array *array = to_dma_fence_array(head);
- if (!array || index >= array->num_fences)
return NULL;
- return array->fences[index];
+} +EXPORT_SYMBOL(dma_fence_array_next); diff --git a/include/linux/dma-fence-array.h b/include/linux/dma-fence-array.h index 303dd712220fd..588ac8089dd61 100644 --- a/include/linux/dma-fence-array.h +++ b/include/linux/dma-fence-array.h @@ -74,6 +74,19 @@ to_dma_fence_array(struct dma_fence *fence) return container_of(fence, struct dma_fence_array, base); } +/**
- dma_fence_array_for_each - iterate over all fences in array
- @fence: current fence
- @index: index into the array
- @head: potential dma_fence_array object
- Test if @array is a dma_fence_array object and if yes iterate over all fences
- in the array. If not just iterate over the fence in @array itself.
- */
+#define dma_fence_array_for_each(fence, index, head) \
- for (index = 0, fence = dma_fence_array_first(head); fence; \
++(index), fence = dma_fence_array_next(head, index))
- struct dma_fence_array *dma_fence_array_create(int num_fences, struct dma_fence **fences, u64 context, unsigned seqno,
@@ -81,4 +94,8 @@ struct dma_fence_array *dma_fence_array_create(int num_fences, bool dma_fence_match_context(struct dma_fence *fence, u64 context); +struct dma_fence *dma_fence_array_first(struct dma_fence *head); +struct dma_fence *dma_fence_array_next(struct dma_fence *head,
unsigned int index);
- #endif /* __LINUX_DMA_FENCE_ARRAY_H */
dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
On March 18, 2021 08:13:41 Daniel Vetter daniel@ffwll.ch wrote:
On Thu, Mar 18, 2021 at 10:38:11AM +0100, Christian König wrote:
Am 17.03.21 um 23:19 schrieb Jason Ekstrand:
From: Christian König ckoenig.leichtzumerken@gmail.com
Add a helper to iterate over all fences in a dma_fence_array object.
v2 (Jason Ekstrand)
- Return NULL from dma_fence_array_first if head == NULL. This matches
the iterator behavior of dma_fence_chain_for_each in that it iterates zero times if head == NULL.
- Return NULL from dma_fence_array_next if index > array->num_fences.
Is there any reason you send this patch alone out once more?
I don't see any changes compared to the last version.
Last patch has changed. Also I think mail delivery is a bit wobbly right now.
Sorry. I'm still getting used to mailing lists again. Too spoiled by GitLab. My intention was to re-send the series because I updated the last one. I think it's in pretty good shape now.
--Jason
-Daniel
Christian.
Signed-off-by: Jason Ekstrand jason@jlekstrand.net
drivers/dma-buf/dma-fence-array.c | 27 +++++++++++++++++++++++++++ include/linux/dma-fence-array.h | 17 +++++++++++++++++ 2 files changed, 44 insertions(+)
diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c index d3fbd950be944..2ac1afc697d0f 100644 --- a/drivers/dma-buf/dma-fence-array.c +++ b/drivers/dma-buf/dma-fence-array.c @@ -201,3 +201,30 @@ bool dma_fence_match_context(struct dma_fence *fence, u64 context) return true; } EXPORT_SYMBOL(dma_fence_match_context);
+struct dma_fence *dma_fence_array_first(struct dma_fence *head) +{
- struct dma_fence_array *array;
- if (!head)
- return NULL;
- array = to_dma_fence_array(head);
- if (!array)
- return head;
- return array->fences[0];
+} +EXPORT_SYMBOL(dma_fence_array_first);
+struct dma_fence *dma_fence_array_next(struct dma_fence *head,
unsigned int index)
+{
- struct dma_fence_array *array = to_dma_fence_array(head);
- if (!array || index >= array->num_fences)
- return NULL;
- return array->fences[index];
+} +EXPORT_SYMBOL(dma_fence_array_next); diff --git a/include/linux/dma-fence-array.h b/include/linux/dma-fence-array.h index 303dd712220fd..588ac8089dd61 100644 --- a/include/linux/dma-fence-array.h +++ b/include/linux/dma-fence-array.h @@ -74,6 +74,19 @@ to_dma_fence_array(struct dma_fence *fence) return container_of(fence, struct dma_fence_array, base); } +/**
- dma_fence_array_for_each - iterate over all fences in array
- @fence: current fence
- @index: index into the array
- @head: potential dma_fence_array object
- Test if @array is a dma_fence_array object and if yes iterate over all
fences
- in the array. If not just iterate over the fence in @array itself.
- */
+#define dma_fence_array_for_each(fence, index, head) \
- for (index = 0, fence = dma_fence_array_first(head); fence; \
++(index), fence = dma_fence_array_next(head, index))
struct dma_fence_array *dma_fence_array_create(int num_fences, struct dma_fence **fences, u64 context, unsigned seqno, @@ -81,4 +94,8 @@ struct dma_fence_array *dma_fence_array_create(int num_fences, bool dma_fence_match_context(struct dma_fence *fence, u64 context); +struct dma_fence *dma_fence_array_first(struct dma_fence *head); +struct dma_fence *dma_fence_array_next(struct dma_fence *head,
unsigned int index);
#endif /* __LINUX_DMA_FENCE_ARRAY_H */
dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
-- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch
Add a helper function to get a single fence representing all fences in a dma_resv object.
This fence is either the only one in the object or all of the not-yet-signaled fences of the object flattened out into a dma_fence_array.
v2 (Jason Ekstrand): - Take reference of fences both for creating the dma_fence_array and in the case where we return one fence. - Handle the case where dma_resv_get_list() returns NULL
v3 (Jason Ekstrand): - Add an _rcu suffix because it is read-only - Rewrite to use dma_resv_get_fences_rcu so it's RCU-safe - Add an EXPORT_SYMBOL_GPL declaration - Re-author the patch to Jason since very little is left of Christian König's original patch
Signed-off-by: Jason Ekstrand jason@jlekstrand.net --- drivers/dma-buf/dma-resv.c | 104 +++++++++++++++++++++++++++++++++++++ include/linux/dma-resv.h | 2 + 2 files changed, 106 insertions(+)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index 6ddbeb5dfbf65..5dd4c38bd9cb4 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -33,6 +33,8 @@ */
#include <linux/dma-resv.h> +#include <linux/dma-fence-chain.h> +#include <linux/dma-fence-array.h> #include <linux/export.h> #include <linux/mm.h> #include <linux/sched/mm.h> @@ -49,6 +51,19 @@ * write-side updates. */
+/** + * dma_fence_deep_dive_for_each - deep dive into the fence containers + * @fence: resulting fence + * @chain: variable for a dma_fence_chain + * @index: index into a dma_fence_array + * @head: starting point + * + * Helper to deep dive into the fence containers for flattening them. + */ +#define dma_fence_deep_dive_for_each(fence, chain, index, head) \ + dma_fence_chain_for_each(chain, head) \ + dma_fence_array_for_each(fence, index, chain) + DEFINE_WD_CLASS(reservation_ww_class); EXPORT_SYMBOL(reservation_ww_class);
@@ -517,6 +532,95 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj, } EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
+/** + * dma_resv_get_singleton - get a single fence for the dma_resv object + * @obj: the reservation object + * @extra: extra fence to add to the resulting array + * @result: resulting dma_fence + * + * Get a single fence representing all unsignaled fences in the dma_resv object + * plus the given extra fence. If we got only one fence return a new + * reference to that, otherwise return a dma_fence_array object. + * + * RETURNS + * Returns -NOMEM if allocations fail, zero otherwise. + */ +int dma_resv_get_singleton_rcu(struct dma_resv *obj, struct dma_fence **result) +{ + struct dma_fence **resv_fences, *fence, *chain, **fences; + struct dma_fence_array *array; + unsigned int num_resv_fences, num_fences; + unsigned int ret, i, j; + + ret = dma_resv_get_fences_rcu(obj, NULL, &num_resv_fences, &resv_fences); + if (ret) + return ret; + + num_fences = 0; + *result = NULL; + + if (num_resv_fences == 0) + return 0; + + for (i = 0; i < num_resv_fences; ++i) { + dma_fence_deep_dive_for_each(fence, chain, j, resv_fences[i]) { + if (dma_fence_is_signaled(fence)) + continue; + + *result = fence; + ++num_fences; + } + } + + if (num_fences <= 1) { + *result = dma_fence_get(*result); + goto put_resv_fences; + } + + fences = kmalloc_array(num_fences, sizeof(struct dma_fence*), + GFP_KERNEL); + if (!fences) { + *result = NULL; + ret = -ENOMEM; + goto put_resv_fences; + } + + num_fences = 0; + for (i = 0; i < num_resv_fences; ++i) { + dma_fence_deep_dive_for_each(fence, chain, j, resv_fences[i]) { + if (!dma_fence_is_signaled(fence)) + fences[num_fences++] = dma_fence_get(fence); + } + } + + if (num_fences <= 1) { + *result = num_fences ? fences[0] : NULL; + kfree(fences); + goto put_resv_fences; + } + + array = dma_fence_array_create(num_fences, fences, + dma_fence_context_alloc(1), + 1, false); + if (array) { + *result = &array->base; + } else { + *result = NULL; + while (num_fences--) + dma_fence_put(fences[num_fences]); + kfree(fences); + ret = -ENOMEM; + } + +put_resv_fences: + while (num_resv_fences--) + dma_fence_put(resv_fences[num_resv_fences]); + kfree(resv_fences); + + return ret; +} +EXPORT_SYMBOL_GPL(dma_resv_get_singleton_rcu); + /** * dma_resv_wait_timeout_rcu - Wait on reservation's objects * shared and/or exclusive fences. diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h index d44a77e8a7e34..5f82894fed0b9 100644 --- a/include/linux/dma-resv.h +++ b/include/linux/dma-resv.h @@ -285,6 +285,8 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
+int dma_resv_get_singleton_rcu(struct dma_resv *obj, struct dma_fence **result); + long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr, unsigned long timeout);
Modern userspace APIs like Vulkan are built on an explicit synchronization model. This doesn't always play nicely with the implicit synchronization used in the kernel and assumed by X11 and Wayland. The client -> compositor half of the synchronization isn't too bad, at least on intel, because we can control whether or not i915 synchronizes on the buffer and whether or not it's considered written.
The harder part is the compositor -> client synchronization when we get the buffer back from the compositor. We're required to be able to provide the client with a VkSemaphore and VkFence representing the point in time where the window system (compositor and/or display) finished using the buffer. With current APIs, it's very hard to do this in such a way that we don't get confused by the Vulkan driver's access of the buffer. In particular, once we tell the kernel that we're rendering to the buffer again, any CPU waits on the buffer or GPU dependencies will wait on some of the client rendering and not just the compositor.
This new IOCTL solves this problem by allowing us to get a snapshot of the implicit synchronization state of a given dma-buf in the form of a sync file. It's effectively the same as a poll() or I915_GEM_WAIT, only instead of CPU waiting directly, it encapsulates the wait operation, at the current moment in time, in a sync_file so we can check/wait on it later. As long as the Vulkan driver does the sync_file export from the dma-buf before we re-introduce it for rendering, it will only contain fences from the compositor or display. This allows us to accurately turn it into a VkFence or VkSemaphore without any over-synchronization.
v2 (Jason Ekstrand): - Use a wrapper dma_fence_array of all fences including the new one when importing an exclusive fence.
v3 (Jason Ekstrand): - Lock around setting shared fences as well as exclusive - Mark SIGNAL_SYNC_FILE as a read-write ioctl. - Initialize ret to 0 in dma_buf_wait_sync_file
v4 (Jason Ekstrand): - Use the new dma_resv_get_singleton helper
v5 (Jason Ekstrand): - Rename the IOCTLs to import/export rather than wait/signal - Drop the WRITE flag and always get/set the exclusive fence
v6 (Jason Ekstrand): - Drop the sync_file import as it was all-around sketchy and not nearly as useful as export. - Re-introduce READ/WRITE flag support for export - Rework the commit message
v7 (Jason Ekstrand): - Require at least one sync flag - Fix a refcounting bug: dma_resv_get_excl() doesn't take a reference - Use _rcu helpers since we're accessing the dma_resv read-only
v8 (Jason Ekstrand): - Return -ENOMEM if the sync_file_create fails - Predicate support on IS_ENABLED(CONFIG_SYNC_FILE)
Signed-off-by: Jason Ekstrand jason@jlekstrand.net --- drivers/dma-buf/dma-buf.c | 62 ++++++++++++++++++++++++++++++++++++ include/uapi/linux/dma-buf.h | 6 ++++ 2 files changed, 68 insertions(+)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index f264b70c383eb..a5e4b0b6a049c 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -20,6 +20,7 @@
 #include <linux/debugfs.h>
 #include <linux/module.h>
 #include <linux/seq_file.h>
+#include <linux/sync_file.h>
 #include <linux/poll.h>
 #include <linux/dma-resv.h>
 #include <linux/mm.h>
@@ -362,6 +363,62 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf)
 	return ret;
 }

+#if IS_ENABLED(CONFIG_SYNC_FILE)
+static long dma_buf_export_sync_file(struct dma_buf *dmabuf,
+				     void __user *user_data)
+{
+	struct dma_buf_sync_file arg;
+	struct dma_fence *fence = NULL;
+	struct sync_file *sync_file;
+	int fd, ret;
+
+	if (copy_from_user(&arg, user_data, sizeof(arg)))
+		return -EFAULT;
+
+	if (arg.flags & ~DMA_BUF_SYNC_RW)
+		return -EINVAL;
+
+	if ((arg.flags & DMA_BUF_SYNC_RW) == 0)
+		return -EINVAL;
+
+	fd = get_unused_fd_flags(O_CLOEXEC);
+	if (fd < 0)
+		return fd;
+
+	if (arg.flags & DMA_BUF_SYNC_WRITE) {
+		ret = dma_resv_get_singleton_rcu(dmabuf->resv, &fence);
+		if (ret)
+			goto err_put_fd;
+	} else if (arg.flags & DMA_BUF_SYNC_READ) {
+		fence = dma_resv_get_excl_rcu(dmabuf->resv);
+	}
+
+	if (!fence)
+		fence = dma_fence_get_stub();
+
+	sync_file = sync_file_create(fence);
+
+	dma_fence_put(fence);
+
+	if (!sync_file) {
+		ret = -ENOMEM;
+		goto err_put_fd;
+	}
+
+	fd_install(fd, sync_file->file);
+
+	arg.fd = fd;
+	if (copy_to_user(user_data, &arg, sizeof(arg)))
+		return -EFAULT;
+
+	return 0;
+
+err_put_fd:
+	put_unused_fd(fd);
+	return ret;
+}
+#endif
+
 static long dma_buf_ioctl(struct file *file,
 			  unsigned int cmd, unsigned long arg)
 {
@@ -405,6 +462,11 @@ static long dma_buf_ioctl(struct file *file,
 	case DMA_BUF_SET_NAME_B:
 		return dma_buf_set_name(dmabuf, (const char __user *)arg);

+#if IS_ENABLED(CONFIG_SYNC_FILE)
+	case DMA_BUF_IOCTL_EXPORT_SYNC_FILE:
+		return dma_buf_export_sync_file(dmabuf, (void __user *)arg);
+#endif
+
 	default:
 		return -ENOTTY;
 	}
diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
index 7f30393b92c3b..9bce1e8bd31d3 100644
--- a/include/uapi/linux/dma-buf.h
+++ b/include/uapi/linux/dma-buf.h
@@ -37,6 +37,11 @@ struct dma_buf_sync {

 #define DMA_BUF_NAME_LEN 32

+struct dma_buf_sync_file {
+	__u32 flags;
+	__s32 fd;
+};
+
 #define DMA_BUF_BASE 'b'
 #define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)

@@ -46,5 +51,6 @@ struct dma_buf_sync {
 #define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *)
 #define DMA_BUF_SET_NAME_A _IOW(DMA_BUF_BASE, 1, u32)
 #define DMA_BUF_SET_NAME_B _IOW(DMA_BUF_BASE, 1, u64)
+#define DMA_BUF_IOCTL_EXPORT_SYNC_FILE _IOWR(DMA_BUF_BASE, 2, struct dma_buf_sync_file)

#endif
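For what it's worth, a consumer that just wants to CPU-wait on the exported snapshot can poll() the returned fd; a sync_file fd becomes readable once all of its fences have signaled. A minimal sketch of such a wait (not part of the patch):

#include <errno.h>
#include <poll.h>
#include <unistd.h>

/*
 * CPU-wait on an exported sync_file: the fd reports POLLIN once every
 * fence it contains has signaled. Blocks with no timeout, retries on
 * EINTR, and closes the fd when done. Returns 0 on success, -1 on
 * poll() failure.
 */
static int sync_fd_wait_and_close(int sync_fd)
{
	struct pollfd pfd = { .fd = sync_fd, .events = POLLIN };
	int ret;

	do {
		ret = poll(&pfd, 1, -1);
	} while (ret < 0 && errno == EINTR);

	close(sync_fd);
	return ret < 0 ? -1 : 0;
}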
Adding mesa-dev and wayland-devel for broader circulation.
From a user-space point-of-view, this looks super useful! The uAPI sounds good to me.
Acked-by: Simon Ser contact@emersion.fr
Would be nice to have some short docs as well. Here's an example of a patch adding some docs for an ioctl [1], if you aren't familiar with that. I think just some basic stuff (description, which parameters are in/out, what the flags are) would be great.
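For instance, a kernel-doc stub along these lines (purely illustrative wording, not taken from any posted version; the struct, flags, and semantics are those of the patch above) would probably cover the basics:

/**
 * struct dma_buf_sync_file - Argument for DMA_BUF_IOCTL_EXPORT_SYNC_FILE.
 * @flags: Input; which fences to snapshot. DMA_BUF_SYNC_READ exports only
 *         the exclusive (write) fence, DMA_BUF_SYNC_WRITE exports all
 *         fences attached to the dma-buf. At least one must be set.
 * @fd:    Output; a new sync_file file descriptor owned by the caller.
 */
struct dma_buf_sync_file {
	__u32 flags;
	__s32 fd;
};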
On Tue, Mar 23, 2021 at 2:06 PM Simon Ser contact@emersion.fr wrote:
Good call. v9 will have docs.
--Jason