This patchset implements the current proposal for virtio cross-device resource sharing [1], with minor changes based on recent comments. It is expected that this will be used to import virtio resources into the virtio-video driver currently under discussion [2].
This patchset adds a new hook to dma-buf, for querying the dma-buf's underlying virtio UUID. This hook is then plumbed through DRM PRIME buffers, and finally implemented in virtgpu.
[1] https://markmail.org/thread/jsaoqy7phrqdcpqu [2] https://markmail.org/thread/p5d3k566srtdtute
v2 -> v3 changes:
 - Remove ifdefs.
 - Simplify virtgpu_gem_prime_export as it can only be called once.
 - Use virtio_gpu_vbuffer's objs field instead of abusing data_buf.
David Stevens (4):
  dma-buf: add support for virtio exported objects
  drm/prime: add support for virtio exported objects
  virtio-gpu: add VIRTIO_GPU_F_RESOURCE_UUID feature
  drm/virtio: Support virtgpu exported resources
 drivers/dma-buf/dma-buf.c              | 12 ++++++
 drivers/gpu/drm/drm_prime.c            | 23 +++++++++++
 drivers/gpu/drm/virtio/virtgpu_drv.c   |  3 ++
 drivers/gpu/drm/virtio/virtgpu_drv.h   | 18 +++++++++
 drivers/gpu/drm/virtio/virtgpu_kms.c   |  4 ++
 drivers/gpu/drm/virtio/virtgpu_prime.c | 41 +++++++++++++++++--
 drivers/gpu/drm/virtio/virtgpu_vq.c    | 55 ++++++++++++++++++++++++++
 include/drm/drm_drv.h                  | 10 +++++
 include/linux/dma-buf.h                | 18 +++++++++
 include/uapi/linux/virtio_gpu.h        | 19 +++++++++
 10 files changed, 200 insertions(+), 3 deletions(-)
This change adds a new dma-buf operation that allows dma-bufs to be used by virtio drivers to share exported objects. The new operation allows the importing driver to query the exporting driver for the UUID which identifies the underlying exported object.
Signed-off-by: David Stevens <stevensd@chromium.org>
---
 drivers/dma-buf/dma-buf.c | 12 ++++++++++++
 include/linux/dma-buf.h   | 18 ++++++++++++++++++
 2 files changed, 30 insertions(+)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index d4097856c86b..fa5210ba6aaa 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1158,6 +1158,18 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
 }
 EXPORT_SYMBOL_GPL(dma_buf_vunmap);
 
+int dma_buf_get_uuid(struct dma_buf *dmabuf, uuid_t *uuid)
+{
+	if (WARN_ON(!dmabuf) || !uuid)
+		return -EINVAL;
+
+	if (!dmabuf->ops->get_uuid)
+		return -ENODEV;
+
+	return dmabuf->ops->get_uuid(dmabuf, uuid);
+}
+EXPORT_SYMBOL_GPL(dma_buf_get_uuid);
+
 #ifdef CONFIG_DEBUG_FS
 static int dma_buf_debug_show(struct seq_file *s, void *unused)
 {
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index abf5459a5b9d..00758523597d 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -251,6 +251,21 @@ struct dma_buf_ops {
 	void *(*vmap)(struct dma_buf *);
 	void (*vunmap)(struct dma_buf *, void *vaddr);
+
+	/**
+	 * @get_uuid
+	 *
+	 * This is called by dma_buf_get_uuid to get the UUID which identifies
+	 * the buffer to virtio devices.
+	 *
+	 * This callback is optional.
+	 *
+	 * Returns:
+	 *
+	 * 0 on success or a negative error code on failure. On success uuid
+	 * will be populated with the buffer's UUID.
+	 */
+	int (*get_uuid)(struct dma_buf *dmabuf, uuid_t *uuid);
 };
 
 /**
@@ -444,4 +459,7 @@ int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *, unsigned long);
 void *dma_buf_vmap(struct dma_buf *);
 void dma_buf_vunmap(struct dma_buf *, void *vaddr);
+
+int dma_buf_get_uuid(struct dma_buf *dmabuf, uuid_t *uuid);
+
 #endif /* __DMA_BUF_H__ */
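To illustrate the calling convention this patch establishes, here is a small userspace model of the dispatch logic (an assumption-laden sketch, not kernel code: kernel types such as uuid_t and the ops table are reduced to the fields involved, and WARN_ON is dropped). It shows the two error paths, -EINVAL for bad arguments and -ENODEV when the exporter does not implement the optional hook:

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Userspace model of the patch's types: a 16-byte UUID and a dma_buf
 * whose ops table may or may not provide the optional get_uuid hook. */
typedef struct { unsigned char b[16]; } uuid_t;

struct dma_buf;
struct dma_buf_ops {
	int (*get_uuid)(struct dma_buf *dmabuf, uuid_t *uuid);
};
struct dma_buf {
	const struct dma_buf_ops *ops;
	uuid_t uuid; /* stands in for exporter-private state */
};

/* Mirrors dma_buf_get_uuid() from the patch: -EINVAL on bad arguments,
 * -ENODEV when the exporter does not implement the optional hook. */
int dma_buf_get_uuid(struct dma_buf *dmabuf, uuid_t *uuid)
{
	if (!dmabuf || !uuid)
		return -EINVAL;

	if (!dmabuf->ops->get_uuid)
		return -ENODEV;

	return dmabuf->ops->get_uuid(dmabuf, uuid);
}

/* A hypothetical exporter implementation that just copies stored state. */
static int fake_get_uuid(struct dma_buf *dmabuf, uuid_t *uuid)
{
	memcpy(uuid->b, dmabuf->uuid.b, sizeof(uuid->b));
	return 0;
}
```

An importer never probes the ops table directly; it calls dma_buf_get_uuid() and treats -ENODEV as "this buffer has no virtio identity".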
On Wed, Mar 11, 2020 at 12:20 PM David Stevens stevensd@chromium.org wrote:
This change adds a new dma-buf operation that allows dma-bufs to be used by virtio drivers to share exported objects. The new operation allows the importing driver to query the exporting driver for the UUID which identifies the underlying exported object.
Signed-off-by: David Stevens stevensd@chromium.org
Adding Tomasz Figa, I've discussed this with him at ELCE last year I think. Just to make sure.
Bunch of things:

- obviously we need the users of this in a few drivers, can't really
  review anything stand-alone

- adding very specific ops to the generic interface is rather awkward,
  eventually everyone wants that and we end up in a mess. I think the
  best solution here would be if we create a struct virtio_dma_buf which
  subclasses dma-buf, add a (hopefully safe) runtime upcasting function,
  and then a virtio_dma_buf_get_uuid() function. Just storing the uuid
  should be doable (assuming this doesn't change during the lifetime of
  the buffer), so no need for a callback.

- for the runtime upcasting the usual approach is to check the ->ops
  pointer. Which means that would need to be the same for all virtio
  dma_bufs, which might get a bit awkward. But I'd really prefer we not
  add allocator specific stuff like this to dma-buf.

-Daniel
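The subclass-plus-upcast design Daniel describes can be sketched in userspace C (all names here are hypothetical illustrations of his suggestion, not code from any patch): every virtio exporter shares one ops table, so comparing the ->ops pointer is enough to decide whether container_of is safe.

```c
#include <stddef.h>

/* Userspace sketch of the suggested design: virtio_dma_buf embeds a
 * dma_buf, all virtio dma-bufs share one ops table, and the upcast
 * checks the ->ops pointer before using container_of. */
typedef struct { unsigned char b[16]; } uuid_t;

struct dma_buf_ops { int placeholder; };
struct dma_buf { const struct dma_buf_ops *ops; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* The single ops table shared by every virtio exporter; its address is
 * what makes the runtime type check possible. */
static const struct dma_buf_ops virtio_dma_buf_ops = { 0 };

struct virtio_dma_buf {
	struct dma_buf base;
	uuid_t uuid; /* stored at export time, so no callback is needed */
};

/* Hopefully-safe runtime upcast: only buffers created with the shared
 * virtio ops table can really be a virtio_dma_buf. */
static struct virtio_dma_buf *to_virtio_dma_buf(struct dma_buf *buf)
{
	if (buf->ops != &virtio_dma_buf_ops)
		return NULL;
	return container_of(buf, struct virtio_dma_buf, base);
}
```

The awkward part Daniel notes is visible here: the check only works if every virtio exporter really does use the one shared ops table.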
On Thu, May 14, 2020 at 12:45 AM Daniel Vetter daniel@ffwll.ch wrote:
On Wed, Mar 11, 2020 at 12:20 PM David Stevens stevensd@chromium.org wrote:
This change adds a new dma-buf operation that allows dma-bufs to be used by virtio drivers to share exported objects. The new operation allows the importing driver to query the exporting driver for the UUID which identifies the underlying exported object.
Signed-off-by: David Stevens stevensd@chromium.org
Adding Tomasz Figa, I've discussed this with him at elce last year I think. Just to make sure.
Bunch of things:
- obviously we need the users of this in a few drivers, can't really
review anything stand-alone
Here is a link to the usage of this feature by the currently under development virtio-video driver: https://markmail.org/thread/j4xlqaaim266qpks
- adding very specific ops to the generic interface is rather awkward,
eventually everyone wants that and we end up in a mess. I think the best solution here would be if we create a struct virtio_dma_buf which subclasses dma-buf, add a (hopefully safe) runtime upcasting functions, and then a virtio_dma_buf_get_uuid() function. Just storing the uuid should be doable (assuming this doesn't change during the lifetime of the buffer), so no need for a callback.
So you would prefer a solution similar to the original version of this patchset? https://markmail.org/message/z7if4u56q5fmaok4
On Thu, May 14, 2020 at 11:08:52AM +0900, David Stevens wrote:
So you would prefer a solution similar to the original version of this patchset? https://markmail.org/message/z7if4u56q5fmaok4
yup. -Daniel
Hi,
- for the runtime upcasting the usual approach is to check the ->ops
pointer. Which means that would need to be the same for all virtio dma_bufs, which might get a bit awkward. But I'd really prefer we not add allocator specific stuff like this to dma-buf.
This is exactly the problem, it gets messy quickly, also when it comes to using the drm_prime.c helpers ...
take care, Gerd
On Thu, May 14, 2020 at 09:59:52AM +0200, Gerd Hoffmann wrote:
Hi,
- for the runtime upcasting the usual approach is to check the ->ops
pointer. Which means that would need to be the same for all virtio dma_bufs, which might get a bit awkward. But I'd really prefer we not add allocator specific stuff like this to dma-buf.
This is exactly the problem, it gets messy quickly, also when it comes to using the drm_prime.c helpers ...
drm_prime.c helpers (not the core bits) exist because nvidia needed something that wasn't EXPORT_SYMBOL_GPL.

I wouldn't shed a big tear if they don't fit anymore; they're kinda not great to begin with. Much midlayer, not much value added, but at least the _GPL is gone. -Daniel
Sorry for the duplicate reply, didn't notice this until now.
Just storing the uuid should be doable (assuming this doesn't change during the lifetime of the buffer), so no need for a callback.
Directly storing the uuid doesn't work that well because of synchronization issues. The uuid needs to be shared between multiple virtio devices with independent command streams, so to prevent races between importing and exporting, the exporting driver can't share the uuid with other drivers until it knows that the device has finished registering the uuid. That requires a round trip to and then back from the device. Using a callback allows the latency from that round trip registration to be hidden.
-David
On Thu, May 14, 2020 at 05:19:40PM +0900, David Stevens wrote:
Sorry for the duplicate reply, didn't notice this until now.
Just storing the uuid should be doable (assuming this doesn't change during the lifetime of the buffer), so no need for a callback.
Directly storing the uuid doesn't work that well because of synchronization issues. The uuid needs to be shared between multiple virtio devices with independent command streams, so to prevent races between importing and exporting, the exporting driver can't share the uuid with other drivers until it knows that the device has finished registering the uuid. That requires a round trip to and then back from the device. Using a callback allows the latency from that round trip registration to be hidden.
Uh, that means you actually do something and there's locking involved. Makes stuff more complicated, invariant attributes are a lot easier generally. Registering that uuid just always doesn't work, and blocking when you're exporting? -Daniel
On Thu, May 14, 2020 at 9:30 PM Daniel Vetter daniel@ffwll.ch wrote:
Uh, that means you actually do something and there's locking involved. Makes stuff more complicated, invariant attributes are a lot easier generally. Registering that uuid just always doesn't work, and blocking when you're exporting?
Registering the id at creation and blocking in gem export is feasible, but it doesn't work well for systems with a centralized buffer allocator that doesn't support batch allocations (e.g. gralloc). In such a system, the round trip latency would almost certainly be included in the buffer allocation time. At least on the system I'm working on, I suspect that would add 10s of milliseconds of startup latency to video pipelines (although I haven't benchmarked the difference). Doing the blocking as late as possible means most or all of the latency can be hidden behind other pipeline setup work.
In terms of complexity, I think the synchronization would be basically the same in either approach, just in different locations. All it would do is alleviate the need for a callback to fetch the UUID.
-David
On Fri, May 15, 2020 at 02:07:06PM +0900, David Stevens wrote:
In terms of complexity, I think the synchronization would be basically the same in either approach, just in different locations. All it would do is alleviate the need for a callback to fetch the UUID.
Hm ok. I guess if we go with the older patch, where this all is a lot more just code in virtio, doing an extra function to allocate the uuid sounds fine. Then synchronization is entirely up to the virtio subsystem and not a dma-buf problem (and hence not mine). You can use dma_resv_lock or so, but no need to. But with callbacks potentially going both ways things always get a bit interesting wrt locking - this is what makes peer2peer dma-buf so painful right now. Hence I'd like to avoid that if needed, at least at the dma-buf level. virtio code I don't mind what you do there :-)
Cheers, Daniel
Hello David,
On Fri, 15 May 2020 at 19:33, Daniel Vetter daniel@ffwll.ch wrote:
I think I agree with Daniel there - this seems best suited for code within virtio.
Best, Sumit.
This change exposes dma-buf's get_uuid callback to PRIME drivers.
Signed-off-by: David Stevens <stevensd@chromium.org>
---
 drivers/gpu/drm/drm_prime.c | 23 +++++++++++++++++++++++
 include/drm/drm_drv.h       | 10 ++++++++++
 2 files changed, 33 insertions(+)
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 86d9b0e45c8c..50fed8497d3c 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -779,6 +779,28 @@ int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma)
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_mmap);
 
+/**
+ * drm_gem_dmabuf_get_uuid - dma_buf get_uuid implementation for GEM
+ * @dma_buf: buffer to query
+ * @uuid: uuid outparam
+ *
+ * Queries the buffer's virtio UUID. This can be used as the
+ * &dma_buf_ops.get_uuid callback. Calls into &drm_driver.gem_prime_get_uuid.
+ *
+ * Returns 0 on success or a negative error code on failure.
+ */
+int drm_gem_dmabuf_get_uuid(struct dma_buf *dma_buf, uuid_t *uuid)
+{
+	struct drm_gem_object *obj = dma_buf->priv;
+	struct drm_device *dev = obj->dev;
+
+	if (!dev->driver->gem_prime_get_uuid)
+		return -ENODEV;
+
+	return dev->driver->gem_prime_get_uuid(obj, uuid);
+}
+EXPORT_SYMBOL(drm_gem_dmabuf_get_uuid);
+
 static const struct dma_buf_ops drm_gem_prime_dmabuf_ops = {
 	.cache_sgt_mapping = true,
 	.attach = drm_gem_map_attach,
@@ -789,6 +811,7 @@ static const struct dma_buf_ops drm_gem_prime_dmabuf_ops = {
 	.mmap = drm_gem_dmabuf_mmap,
 	.vmap = drm_gem_dmabuf_vmap,
 	.vunmap = drm_gem_dmabuf_vunmap,
+	.get_uuid = drm_gem_dmabuf_get_uuid,
 };
 
 /**
diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
index 77685ed7aa65..61e3ff341844 100644
--- a/include/drm/drm_drv.h
+++ b/include/drm/drm_drv.h
@@ -29,6 +29,7 @@
 
 #include <linux/list.h>
 #include <linux/irqreturn.h>
+#include <linux/uuid.h>
 
 #include <drm/drm_device.h>
 
@@ -639,6 +640,15 @@ struct drm_driver {
 	int (*gem_prime_mmap)(struct drm_gem_object *obj,
 			      struct vm_area_struct *vma);
 
+	/**
+	 * @gem_prime_get_uuid
+	 *
+	 * get_uuid hook for GEM drivers. Retrieves the virtio uuid of the
+	 * given GEM buffer.
+	 */
+	int (*gem_prime_get_uuid)(struct drm_gem_object *obj,
+				  uuid_t *uuid);
+
 	/**
 	 * @dumb_create:
 	 *
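The plumbing this patch adds is a two-level dispatch: dma-buf's optional get_uuid op is implemented by the generic PRIME helper, which in turn forwards to an optional per-driver gem_prime_get_uuid hook. A userspace model of that chain (types reduced to the fields involved; the stub driver hook is hypothetical) shows that -ENODEV can come from either missing layer:

```c
#include <errno.h>

/* Userspace model of the dispatch chain added by this patch:
 * dma_buf -> drm_gem_dmabuf_get_uuid -> drm_driver.gem_prime_get_uuid. */
typedef struct { unsigned char b[16]; } uuid_t;

struct drm_gem_object;
struct drm_driver {
	int (*gem_prime_get_uuid)(struct drm_gem_object *obj, uuid_t *uuid);
};
struct drm_device { const struct drm_driver *driver; };
struct drm_gem_object { struct drm_device *dev; };
struct dma_buf { void *priv; /* holds the drm_gem_object */ };

/* Mirrors drm_gem_dmabuf_get_uuid(): forward to the driver hook, or
 * report -ENODEV if the driver does not implement it. */
static int drm_gem_dmabuf_get_uuid(struct dma_buf *dma_buf, uuid_t *uuid)
{
	struct drm_gem_object *obj = dma_buf->priv;
	struct drm_device *dev = obj->dev;

	if (!dev->driver->gem_prime_get_uuid)
		return -ENODEV;

	return dev->driver->gem_prime_get_uuid(obj, uuid);
}

/* Hypothetical driver hook returning a fixed UUID byte for the demo. */
static int stub_gem_prime_get_uuid(struct drm_gem_object *obj, uuid_t *uuid)
{
	(void)obj;
	uuid->b[0] = 0x42;
	return 0;
}
```

Because the helper is wired into drm_gem_prime_dmabuf_ops unconditionally, non-virtio GEM drivers need no changes; callers of dma_buf_get_uuid() on their buffers simply get -ENODEV.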
This feature allows the guest to request a UUID from the host for a particular virtio_gpu resource. The UUID can then be shared with other virtio devices, to allow the other host devices to access the virtio_gpu's corresponding host resource.
Signed-off-by: David Stevens <stevensd@chromium.org>
---
 include/uapi/linux/virtio_gpu.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)
diff --git a/include/uapi/linux/virtio_gpu.h b/include/uapi/linux/virtio_gpu.h
index 0c85914d9369..9721d58b4d58 100644
--- a/include/uapi/linux/virtio_gpu.h
+++ b/include/uapi/linux/virtio_gpu.h
@@ -50,6 +50,10 @@
  * VIRTIO_GPU_CMD_GET_EDID
  */
 #define VIRTIO_GPU_F_EDID 1
+/*
+ * VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID
+ */
+#define VIRTIO_GPU_F_RESOURCE_UUID 2
 
 enum virtio_gpu_ctrl_type {
 	VIRTIO_GPU_UNDEFINED = 0,
@@ -66,6 +70,7 @@ enum virtio_gpu_ctrl_type {
 	VIRTIO_GPU_CMD_GET_CAPSET_INFO,
 	VIRTIO_GPU_CMD_GET_CAPSET,
 	VIRTIO_GPU_CMD_GET_EDID,
+	VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID,
 
 	/* 3d commands */
 	VIRTIO_GPU_CMD_CTX_CREATE = 0x0200,
@@ -87,6 +92,7 @@ enum virtio_gpu_ctrl_type {
 	VIRTIO_GPU_RESP_OK_CAPSET_INFO,
 	VIRTIO_GPU_RESP_OK_CAPSET,
 	VIRTIO_GPU_RESP_OK_EDID,
+	VIRTIO_GPU_RESP_OK_RESOURCE_UUID,
 
 	/* error responses */
 	VIRTIO_GPU_RESP_ERR_UNSPEC = 0x1200,
@@ -340,4 +346,17 @@ enum virtio_gpu_formats {
 	VIRTIO_GPU_FORMAT_R8G8B8X8_UNORM = 134,
 };
 
+/* VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID */
+struct virtio_gpu_resource_assign_uuid {
+	struct virtio_gpu_ctrl_hdr hdr;
+	__le32 resource_id;
+	__le32 padding;
+};
+
+/* VIRTIO_GPU_RESP_OK_RESOURCE_UUID */
+struct virtio_gpu_resp_resource_uuid {
+	struct virtio_gpu_ctrl_hdr hdr;
+	__u8 uuid[16];
+};
+
 #endif
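The new wire structs can be checked in userspace with plain fixed-width types, assuming the existing virtio_gpu_ctrl_hdr layout from virtio_gpu.h (type, flags, fence_id, ctx_id, padding). Every field is naturally aligned little-endian, so no packing pragmas are needed and the sizes fall out of the field list:

```c
#include <stdint.h>

/* Userspace model of the control header already defined in
 * include/uapi/linux/virtio_gpu.h: 4 + 4 + 8 + 4 + 4 = 24 bytes. */
struct virtio_gpu_ctrl_hdr {
	uint32_t type;
	uint32_t flags;
	uint64_t fence_id;
	uint32_t ctx_id;
	uint32_t padding;
};

/* VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID: header plus the resource id,
 * with explicit padding to keep the command a multiple of 8 bytes. */
struct virtio_gpu_resource_assign_uuid {
	struct virtio_gpu_ctrl_hdr hdr;
	uint32_t resource_id;
	uint32_t padding;
};

/* VIRTIO_GPU_RESP_OK_RESOURCE_UUID: header plus the raw 16-byte UUID. */
struct virtio_gpu_resp_resource_uuid {
	struct virtio_gpu_ctrl_hdr hdr;
	uint8_t uuid[16];
};
```

The explicit padding field in the command is what makes the layout identical regardless of compiler, which matters for a guest/host wire format.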
Add support for UUID-based resource sharing mechanism to virtgpu. This implements the new virtgpu commands and hooks them up to dma-buf's get_uuid callback.
Signed-off-by: David Stevens <stevensd@chromium.org>
---
 drivers/gpu/drm/virtio/virtgpu_drv.c   |  3 ++
 drivers/gpu/drm/virtio/virtgpu_drv.h   | 18 +++++++++
 drivers/gpu/drm/virtio/virtgpu_kms.c   |  4 ++
 drivers/gpu/drm/virtio/virtgpu_prime.c | 41 +++++++++++++++++--
 drivers/gpu/drm/virtio/virtgpu_vq.c    | 55 ++++++++++++++++++++++++++
 5 files changed, 118 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
index ab4bed78e656..776e6667042e 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
@@ -165,6 +165,7 @@ static unsigned int features[] = {
 	VIRTIO_GPU_F_VIRGL,
 #endif
 	VIRTIO_GPU_F_EDID,
+	VIRTIO_GPU_F_RESOURCE_UUID,
 };
 static struct virtio_driver virtio_gpu_driver = {
 	.feature_table = features,
@@ -202,7 +203,9 @@ static struct drm_driver driver = {
 	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
 	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
 	.gem_prime_mmap = drm_gem_prime_mmap,
+	.gem_prime_export = virtgpu_gem_prime_export,
 	.gem_prime_import_sg_table = virtgpu_gem_prime_import_sg_table,
+	.gem_prime_get_uuid = virtgpu_gem_prime_get_uuid,
 
 	.gem_create_object = virtio_gpu_create_object,
 	.fops = &virtio_gpu_driver_fops,
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index af9403e1cf78..fab65f0f5a4d 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -49,6 +49,10 @@
 #define DRIVER_MINOR 1
 #define DRIVER_PATCHLEVEL 0
 
+#define UUID_INITIALIZING 0
+#define UUID_INITIALIZED 1
+#define UUID_INITIALIZATION_FAILED 2
+
 struct virtio_gpu_object_params {
 	uint32_t format;
 	uint32_t width;
@@ -75,6 +79,9 @@ struct virtio_gpu_object {
 
 	bool dumb;
 	bool created;
+
+	int uuid_state;
+	uuid_t uuid;
 };
 #define gem_to_virtio_gpu_obj(gobj) \
 	container_of((gobj), struct virtio_gpu_object, base.base)
@@ -196,6 +203,7 @@ struct virtio_gpu_device {
 	bool has_virgl_3d;
 	bool has_edid;
 	bool has_indirect;
+	bool has_resource_assign_uuid;
 
 	struct work_struct config_changed_work;
 
@@ -206,6 +214,8 @@ struct virtio_gpu_device {
 	struct virtio_gpu_drv_capset *capsets;
 	uint32_t num_capsets;
 	struct list_head cap_cache;
+
+	spinlock_t resource_export_lock;
 };
 
 struct virtio_gpu_fpriv {
@@ -338,6 +348,10 @@ void virtio_gpu_dequeue_fence_func(struct work_struct *work);
 void virtio_gpu_disable_notify(struct virtio_gpu_device *vgdev);
 void virtio_gpu_enable_notify(struct virtio_gpu_device *vgdev);
 
+int
+virtio_gpu_cmd_resource_assign_uuid(struct virtio_gpu_device *vgdev,
+				    struct virtio_gpu_object_array *objs);
+
 /* virtio_gpu_display.c */
 void virtio_gpu_modeset_init(struct virtio_gpu_device *vgdev);
 void virtio_gpu_modeset_fini(struct virtio_gpu_device *vgdev);
@@ -366,6 +380,10 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 			     struct virtio_gpu_object **bo_ptr,
 			     struct virtio_gpu_fence *fence);
 /* virtgpu_prime.c */
+struct dma_buf *virtgpu_gem_prime_export(struct drm_gem_object *obj,
+					 int flags);
+int virtgpu_gem_prime_get_uuid(struct drm_gem_object *obj,
+			       uuid_t *uuid);
 struct drm_gem_object *virtgpu_gem_prime_import_sg_table(
 	struct drm_device *dev, struct dma_buf_attachment *attach,
 	struct sg_table *sgt);
diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
index 4009c2f97d08..5a2aeb6d2f35 100644
--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
+++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
@@ -134,6 +134,7 @@ int virtio_gpu_init(struct drm_device *dev)
 	vgdev->dev = dev->dev;
 
 	spin_lock_init(&vgdev->display_info_lock);
+	spin_lock_init(&vgdev->resource_export_lock);
 	ida_init(&vgdev->ctx_id_ida);
 	ida_init(&vgdev->resource_ida);
 	init_waitqueue_head(&vgdev->resp_wq);
@@ -162,6 +163,9 @@ int virtio_gpu_init(struct drm_device *dev)
 	if (virtio_has_feature(vgdev->vdev, VIRTIO_RING_F_INDIRECT_DESC)) {
 		vgdev->has_indirect = true;
 	}
+	if (virtio_has_feature(vgdev->vdev, VIRTIO_GPU_F_RESOURCE_UUID)) {
+		vgdev->has_resource_assign_uuid = true;
+	}
 
 	DRM_INFO("features: %cvirgl %cedid\n",
 		 vgdev->has_virgl_3d ? '+' : '-',
diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c
index 050d24c39a8f..7c6357f59877 100644
--- a/drivers/gpu/drm/virtio/virtgpu_prime.c
+++ b/drivers/gpu/drm/virtio/virtgpu_prime.c
@@ -26,9 +26,44 @@
 
 #include "virtgpu_drv.h"
 
-/* Empty Implementations as there should not be any other driver for a
- * virtual device that might share buffers with virtgpu
- */
+int virtgpu_gem_prime_get_uuid(struct drm_gem_object *obj,
+			       uuid_t *uuid)
+{
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
+	struct virtio_gpu_device *vgdev = obj->dev->dev_private;
+
+	wait_event(vgdev->resp_wq, bo->uuid_state != UUID_INITIALIZING);
+	if (bo->uuid_state != UUID_INITIALIZED)
+		return -ENODEV;
+
+	uuid_copy(uuid, &bo->uuid);
+
+	return 0;
+}
+
+struct dma_buf *virtgpu_gem_prime_export(struct drm_gem_object *obj,
+					 int flags)
+{
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
+	struct virtio_gpu_device *vgdev = obj->dev->dev_private;
+	struct virtio_gpu_object_array *objs;
+	int ret = 0;
+
+	if (vgdev->has_resource_assign_uuid) {
+		objs = virtio_gpu_array_alloc(1);
+		if (!objs)
+			return ERR_PTR(-ENOMEM);
+		virtio_gpu_array_add_obj(objs, &bo->base.base);
+
+		ret = virtio_gpu_cmd_resource_assign_uuid(vgdev, objs);
+		if (ret)
+			return ERR_PTR(ret);
+	} else {
+		bo->uuid_state = UUID_INITIALIZATION_FAILED;
+	}
+
+	return drm_gem_prime_export(obj, flags);
+}
 
 struct drm_gem_object *virtgpu_gem_prime_import_sg_table(
 	struct drm_device *dev, struct dma_buf_attachment *attach,
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index cfe9c54f87a3..b968eaa46bb0 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -1111,3 +1111,58 @@ void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev,
 	memcpy(cur_p, &output->cursor, sizeof(output->cursor));
 	virtio_gpu_queue_cursor(vgdev, vbuf);
 }
+
+static void virtio_gpu_cmd_resource_uuid_cb(struct virtio_gpu_device *vgdev,
+					    struct virtio_gpu_vbuffer *vbuf)
+{
+	struct virtio_gpu_object *obj =
+		gem_to_virtio_gpu_obj(vbuf->objs->objs[0]);
+	struct virtio_gpu_resp_resource_uuid *resp =
+		(struct virtio_gpu_resp_resource_uuid *)vbuf->resp_buf;
+	uint32_t resp_type = le32_to_cpu(resp->hdr.type);
+
+	spin_lock(&vgdev->resource_export_lock);
+	WARN_ON(obj->uuid_state != UUID_INITIALIZING);
+
+	if (resp_type == VIRTIO_GPU_RESP_OK_RESOURCE_UUID &&
+	    obj->uuid_state == UUID_INITIALIZING) {
+		memcpy(&obj->uuid.b, resp->uuid, sizeof(obj->uuid.b));
+		obj->uuid_state = UUID_INITIALIZED;
+	} else {
+		obj->uuid_state = UUID_INITIALIZATION_FAILED;
+	}
+	spin_unlock(&vgdev->resource_export_lock);
+
+	wake_up_all(&vgdev->resp_wq);
+}
+
+int
+virtio_gpu_cmd_resource_assign_uuid(struct virtio_gpu_device *vgdev,
+				    struct virtio_gpu_object_array *objs)
+{
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(objs->objs[0]);
+	struct virtio_gpu_resource_assign_uuid *cmd_p;
+	struct virtio_gpu_vbuffer *vbuf;
+	struct virtio_gpu_resp_resource_uuid *resp_buf;
+
+	resp_buf = kzalloc(sizeof(*resp_buf), GFP_KERNEL);
+	if (!resp_buf) {
+		spin_lock(&vgdev->resource_export_lock);
+		bo->uuid_state = UUID_INITIALIZATION_FAILED;
+		spin_unlock(&vgdev->resource_export_lock);
+		virtio_gpu_array_put_free(objs);
+		return -ENOMEM;
+	}
+
+	cmd_p = virtio_gpu_alloc_cmd_resp(vgdev,
+		virtio_gpu_cmd_resource_uuid_cb, &vbuf, sizeof(*cmd_p),
+		sizeof(struct virtio_gpu_resp_resource_uuid), resp_buf);
+	memset(cmd_p, 0, sizeof(*cmd_p));
+
+	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID);
+	cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
+
+	vbuf->objs = objs;
+	virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
+	return 0;
+}
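The uuid_state handshake in this patch can be modeled single-threaded in userspace (a sketch only: the kernel code uses resource_export_lock plus the resp_wq waitqueue, while this model replaces the wait and wakeup with plain sequential calls). Export leaves the object in UUID_INITIALIZING, the response callback moves it to UUID_INITIALIZED or UUID_INITIALIZATION_FAILED, and the getter only succeeds after a successful transition:

```c
#include <string.h>

/* Single-threaded model of the patch's uuid_state machine. */
enum { UUID_INITIALIZING, UUID_INITIALIZED, UUID_INITIALIZATION_FAILED };

struct model_bo {
	int uuid_state;
	unsigned char uuid[16];
};

/* Models virtio_gpu_cmd_resource_uuid_cb(): accept the response only if
 * we are still waiting, mirroring the resp_type/uuid_state double check. */
static void resource_uuid_cb(struct model_bo *bo, int resp_ok,
			     const unsigned char resp_uuid[16])
{
	if (resp_ok && bo->uuid_state == UUID_INITIALIZING) {
		memcpy(bo->uuid, resp_uuid, 16);
		bo->uuid_state = UUID_INITIALIZED;
	} else {
		bo->uuid_state = UUID_INITIALIZATION_FAILED;
	}
}

/* Models virtgpu_gem_prime_get_uuid() after wait_event() has returned:
 * any terminal state other than INITIALIZED means no UUID is available. */
static int get_uuid(const struct model_bo *bo, unsigned char out[16])
{
	if (bo->uuid_state != UUID_INITIALIZED)
		return -1;
	memcpy(out, bo->uuid, 16);
	return 0;
}
```

This is the latency-hiding point raised earlier in the thread: export only queues the command and returns, and the wait collapses into the first get_uuid call, by which time the response has usually already arrived.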
On Wed, Mar 11, 2020 at 08:20:04PM +0900, David Stevens wrote:
Add support for UUID-based resource sharing mechanism to virtgpu. This implements the new virtgpu commands and hooks them up to dma-buf's get_uuid callback.
Signed-off-by: David Stevens stevensd@chromium.org
drivers/gpu/drm/virtio/virtgpu_drv.c | 3 ++ drivers/gpu/drm/virtio/virtgpu_drv.h | 18 +++++++++ drivers/gpu/drm/virtio/virtgpu_kms.c | 4 ++ drivers/gpu/drm/virtio/virtgpu_prime.c | 41 +++++++++++++++++-- drivers/gpu/drm/virtio/virtgpu_vq.c | 55 ++++++++++++++++++++++++++ 5 files changed, 118 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
index ab4bed78e656..776e6667042e 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
@@ -165,6 +165,7 @@ static unsigned int features[] = {
 	VIRTIO_GPU_F_VIRGL,
 #endif
 	VIRTIO_GPU_F_EDID,
+	VIRTIO_GPU_F_RESOURCE_UUID,
 };

 static struct virtio_driver virtio_gpu_driver = {
 	.feature_table = features,
@@ -202,7 +203,9 @@ static struct drm_driver driver = {
 	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
 	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
 	.gem_prime_mmap = drm_gem_prime_mmap,
+	.gem_prime_export = virtgpu_gem_prime_export,
 	.gem_prime_import_sg_table = virtgpu_gem_prime_import_sg_table,
+	.gem_prime_get_uuid = virtgpu_gem_prime_get_uuid,

 	.gem_create_object = virtio_gpu_create_object,
 	.fops = &virtio_gpu_driver_fops,
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index af9403e1cf78..fab65f0f5a4d 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -49,6 +49,10 @@
 #define DRIVER_MINOR 1
 #define DRIVER_PATCHLEVEL 0

+#define UUID_INITIALIZING 0
+#define UUID_INITIALIZED 1
+#define UUID_INITIALIZATION_FAILED 2
+
 struct virtio_gpu_object_params {
 	uint32_t format;
 	uint32_t width;
@@ -75,6 +79,9 @@ struct virtio_gpu_object {

 	bool dumb;
 	bool created;
+
+	int uuid_state;
+	uuid_t uuid;
 };
 #define gem_to_virtio_gpu_obj(gobj) \
 	container_of((gobj), struct virtio_gpu_object, base.base)
@@ -196,6 +203,7 @@ struct virtio_gpu_device {
 	bool has_virgl_3d;
 	bool has_edid;
 	bool has_indirect;
+	bool has_resource_assign_uuid;

 	struct work_struct config_changed_work;

@@ -206,6 +214,8 @@ struct virtio_gpu_device {
 	struct virtio_gpu_drv_capset *capsets;
 	uint32_t num_capsets;
 	struct list_head cap_cache;
+
+	spinlock_t resource_export_lock;
 };

 struct virtio_gpu_fpriv {
@@ -338,6 +348,10 @@ void virtio_gpu_dequeue_fence_func(struct work_struct *work);
 void virtio_gpu_disable_notify(struct virtio_gpu_device *vgdev);
 void virtio_gpu_enable_notify(struct virtio_gpu_device *vgdev);

+int
+virtio_gpu_cmd_resource_assign_uuid(struct virtio_gpu_device *vgdev,
+				    struct virtio_gpu_object_array *objs);
+
 /* virtio_gpu_display.c */
 void virtio_gpu_modeset_init(struct virtio_gpu_device *vgdev);
 void virtio_gpu_modeset_fini(struct virtio_gpu_device *vgdev);
@@ -366,6 +380,10 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 				struct virtio_gpu_object **bo_ptr,
 				struct virtio_gpu_fence *fence);
 /* virtgpu_prime.c */
+struct dma_buf *virtgpu_gem_prime_export(struct drm_gem_object *obj,
+					 int flags);
+int virtgpu_gem_prime_get_uuid(struct drm_gem_object *obj,
+			       uuid_t *uuid);
 struct drm_gem_object *virtgpu_gem_prime_import_sg_table(
 	struct drm_device *dev, struct dma_buf_attachment *attach,
 	struct sg_table *sgt);
diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
index 4009c2f97d08..5a2aeb6d2f35 100644
--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
+++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
@@ -134,6 +134,7 @@ int virtio_gpu_init(struct drm_device *dev)
 	vgdev->dev = dev->dev;
 	spin_lock_init(&vgdev->display_info_lock);
+	spin_lock_init(&vgdev->resource_export_lock);
 	ida_init(&vgdev->ctx_id_ida);
 	ida_init(&vgdev->resource_ida);
 	init_waitqueue_head(&vgdev->resp_wq);
@@ -162,6 +163,9 @@ int virtio_gpu_init(struct drm_device *dev)
 	if (virtio_has_feature(vgdev->vdev, VIRTIO_RING_F_INDIRECT_DESC)) {
 		vgdev->has_indirect = true;
 	}
+	if (virtio_has_feature(vgdev->vdev, VIRTIO_GPU_F_RESOURCE_UUID)) {
+		vgdev->has_resource_assign_uuid = true;
+	}
Just a question: this relies on DMA bufs so I assume it is not really assumed to work when DMA API is bypassed, right? Rather than worry what does it mean, how about just disabling this feature without PLATFORM_DMA for now?
 	DRM_INFO("features: %cvirgl %cedid\n",
 		 vgdev->has_virgl_3d ? '+' : '-',
diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c
index 050d24c39a8f..7c6357f59877 100644
--- a/drivers/gpu/drm/virtio/virtgpu_prime.c
+++ b/drivers/gpu/drm/virtio/virtgpu_prime.c
@@ -26,9 +26,44 @@

 #include "virtgpu_drv.h"

-/* Empty Implementations as there should not be any other driver for a virtual
- * device that might share buffers with virtgpu
- */
+int virtgpu_gem_prime_get_uuid(struct drm_gem_object *obj,
+			       uuid_t *uuid)
+{
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
+	struct virtio_gpu_device *vgdev = obj->dev->dev_private;
+
+	wait_event(vgdev->resp_wq, bo->uuid_state != UUID_INITIALIZING);
+	if (bo->uuid_state != UUID_INITIALIZED)
+		return -ENODEV;
+
+	uuid_copy(uuid, &bo->uuid);
+
+	return 0;
+}
+
+struct dma_buf *virtgpu_gem_prime_export(struct drm_gem_object *obj,
+					 int flags)
+{
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
+	struct virtio_gpu_device *vgdev = obj->dev->dev_private;
+	struct virtio_gpu_object_array *objs;
+	int ret = 0;
+
+	if (vgdev->has_resource_assign_uuid) {
+		objs = virtio_gpu_array_alloc(1);
+		if (!objs)
+			return ERR_PTR(-ENOMEM);
+		virtio_gpu_array_add_obj(objs, &bo->base.base);
+
+		ret = virtio_gpu_cmd_resource_assign_uuid(vgdev, objs);
+		if (ret)
+			return ERR_PTR(ret);
+	} else {
+		bo->uuid_state = UUID_INITIALIZATION_FAILED;
+	}
+
+	return drm_gem_prime_export(obj, flags);
+}
+
 struct drm_gem_object *virtgpu_gem_prime_import_sg_table(
 	struct drm_device *dev, struct dma_buf_attachment *attach,
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index cfe9c54f87a3..b968eaa46bb0 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -1111,3 +1111,58 @@ void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev,
 	memcpy(cur_p, &output->cursor, sizeof(output->cursor));
 	virtio_gpu_queue_cursor(vgdev, vbuf);
 }
+
+static void virtio_gpu_cmd_resource_uuid_cb(struct virtio_gpu_device *vgdev,
+					    struct virtio_gpu_vbuffer *vbuf)
+{
+	struct virtio_gpu_object *obj =
+		gem_to_virtio_gpu_obj(vbuf->objs->objs[0]);
+	struct virtio_gpu_resp_resource_uuid *resp =
+		(struct virtio_gpu_resp_resource_uuid *)vbuf->resp_buf;
+	uint32_t resp_type = le32_to_cpu(resp->hdr.type);
+
+	spin_lock(&vgdev->resource_export_lock);
+	WARN_ON(obj->uuid_state != UUID_INITIALIZING);
+
+	if (resp_type == VIRTIO_GPU_RESP_OK_RESOURCE_UUID &&
+	    obj->uuid_state == UUID_INITIALIZING) {
+		memcpy(&obj->uuid.b, resp->uuid, sizeof(obj->uuid.b));
+		obj->uuid_state = UUID_INITIALIZED;
+	} else {
+		obj->uuid_state = UUID_INITIALIZATION_FAILED;
+	}
+	spin_unlock(&vgdev->resource_export_lock);
+
+	wake_up_all(&vgdev->resp_wq);
+}
+
+int
+virtio_gpu_cmd_resource_assign_uuid(struct virtio_gpu_device *vgdev,
+				    struct virtio_gpu_object_array *objs)
+{
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(objs->objs[0]);
+	struct virtio_gpu_resource_assign_uuid *cmd_p;
+	struct virtio_gpu_vbuffer *vbuf;
+	struct virtio_gpu_resp_resource_uuid *resp_buf;
+
+	resp_buf = kzalloc(sizeof(*resp_buf), GFP_KERNEL);
+	if (!resp_buf) {
+		spin_lock(&vgdev->resource_export_lock);
+		bo->uuid_state = UUID_INITIALIZATION_FAILED;
+		spin_unlock(&vgdev->resource_export_lock);
+		virtio_gpu_array_put_free(objs);
+		return -ENOMEM;
+	}
+
+	cmd_p = virtio_gpu_alloc_cmd_resp(vgdev,
+		virtio_gpu_cmd_resource_uuid_cb, &vbuf, sizeof(*cmd_p),
+		sizeof(struct virtio_gpu_resp_resource_uuid), resp_buf);
+	memset(cmd_p, 0, sizeof(*cmd_p));
+
+	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID);
+	cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
+
+	vbuf->objs = objs;
+	virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
+
+	return 0;
+}
2.25.1.481.gfbce0eb801-goog
if (virtio_has_feature(vgdev->vdev, VIRTIO_GPU_F_RESOURCE_UUID)) {
vgdev->has_resource_assign_uuid = true;
}
Just a question: this relies on DMA bufs so I assume it is not really assumed to work when DMA API is bypassed, right? Rather than worry what does it mean, how about just disabling this feature without PLATFORM_DMA for now?
By PLATFORM_DMA, do you mean CONFIG_DMA_SHARED_BUFFER? Virtio-gpu depends on DRM, which selects that feature. So I think DMA bufs should always be available when virtio-gpu is present.
-David
On Fri, May 15, 2020 at 04:26:15PM +0900, David Stevens wrote:
if (virtio_has_feature(vgdev->vdev, VIRTIO_GPU_F_RESOURCE_UUID)) {
vgdev->has_resource_assign_uuid = true;
}
Just a question: this relies on DMA bufs so I assume it is not really assumed to work when DMA API is bypassed, right? Rather than worry what does it mean, how about just disabling this feature without PLATFORM_DMA for now?
By PLATFORM_DMA, do you mean CONFIG_DMA_SHARED_BUFFER?
Sorry, no. I mean VIRTIO_F_IOMMU_PLATFORM which in the future will be renamed to VIRTIO_F_PLATFORM_ACCESS.
Virtio-gpu depends on DRM, which selects that feature. So I think DMA bufs should always be available when virtio-gpu is present.
-David
On Mon, Jun 8, 2020 at 6:43 PM Michael S. Tsirkin mst@redhat.com wrote:
On Fri, May 15, 2020 at 04:26:15PM +0900, David Stevens wrote:
if (virtio_has_feature(vgdev->vdev, VIRTIO_GPU_F_RESOURCE_UUID)) {
vgdev->has_resource_assign_uuid = true;
}
Just a question: this relies on DMA bufs so I assume it is not really assumed to work when DMA API is bypassed, right? Rather than worry what does it mean, how about just disabling this feature without PLATFORM_DMA for now?
By PLATFORM_DMA, do you mean CONFIG_DMA_SHARED_BUFFER?
Sorry, no. I mean VIRTIO_F_IOMMU_PLATFORM which in the future will be renamed to VIRTIO_F_PLATFORM_ACCESS.
Shouldn't things work independent of whether or not that feature is set? If a virtio driver properly uses the dma_buf APIs (which virtgpu seems to), then that should take care of any mapping/synchronization related to VIRTIO_F_IOMMU_PLATFORM. If anything, the case where VIRTIO_F_IOMMU_PLATFORM isn't set is easier, since then we know that the "the device has same access [sic] to memory addresses supplied to it as the driver has", according to the specification.
-David
On Mon, Jun 08, 2020 at 07:36:55PM +0900, David Stevens wrote:
On Mon, Jun 8, 2020 at 6:43 PM Michael S. Tsirkin mst@redhat.com wrote:
On Fri, May 15, 2020 at 04:26:15PM +0900, David Stevens wrote:
if (virtio_has_feature(vgdev->vdev, VIRTIO_GPU_F_RESOURCE_UUID)) {
vgdev->has_resource_assign_uuid = true;
}
Just a question: this relies on DMA bufs so I assume it is not really assumed to work when DMA API is bypassed, right? Rather than worry what does it mean, how about just disabling this feature without PLATFORM_DMA for now?
By PLATFORM_DMA, do you mean CONFIG_DMA_SHARED_BUFFER?
Sorry, no. I mean VIRTIO_F_IOMMU_PLATFORM which in the future will be renamed to VIRTIO_F_PLATFORM_ACCESS.
Shouldn't things work independent of whether or not that feature is set? If a virtio driver properly uses the dma_buf APIs (which virtgpu seems to), then that should take care of any mapping/synchronization related to VIRTIO_F_IOMMU_PLATFORM. If anything, the case where VIRTIO_F_IOMMU_PLATFORM isn't set is easier, since then we know that the "the device has same access [sic] to memory addresses supplied to it as the driver has", according to the specification.
-David
I don't know much about drm so I can't tell, I was hoping Gerd can explain.
On Wed, Mar 11, 2020 at 08:20:00PM +0900, David Stevens wrote:
This patchset implements the current proposal for virtio cross-device resource sharing [1], with minor changes based on recent comments. It is expected that this will be used to import virtio resources into the virtio-video driver currently under discussion [2].
This patchset adds a new hook to dma-buf, for querying the dma-buf's underlying virtio UUID. This hook is then plumbed through DRM PRIME buffers, and finally implemented in virtgpu.
Looks all fine to me. We should wait for the virtio protocol update (patch 3/4) being accepted into the virtio specification. When this is done I'll go commit this to drm-misc-next.
cheers, Gerd