The main motivation behind this is to eventually have something like this:
struct virtio_gpu_shmem {
	struct drm_gem_shmem_object base;
	uint32_t hw_res_handle;
	struct sg_table *pages;
	(...)
};

struct virtio_gpu_vram {
	struct drm_gem_object base; // or *drm_gem_vram_object
	uint32_t hw_res_handle;
	{offset, range};
	(...)
};
Sending this out to solicit feedback on this approach. Whichever approach we decide on, landing incremental changes to internal structures reduces rebasing costs and avoids mega-changes.
Gurchetan Singh (8):
  drm/virtio: make mmap callback consistent with callbacks
  drm/virtio: add virtio_gpu_is_shmem helper
  drm/virtio: add virtio_gpu_get_handle function
  drm/virtio: make RESOURCE_UNREF use virtio_gpu_get_handle(..)
  drm/virtio: make {ATTACH_RESOURCE, DETACH_RESOURCE} use virtio_gpu_get_handle(..)
  drm/virtio: rename virtio_gpu_object_array to virtio_gpu_gem_array
  drm/virtio: rename virtio_gpu_object_params to virtio_gpu_create_params
  drm/virtio: rename virtio_gpu_object to virtio_gpu_shmem
 drivers/gpu/drm/virtio/virtgpu_drv.h    |  72 ++++++------
 drivers/gpu/drm/virtio/virtgpu_gem.c    | 132 ++++++++++----------
 drivers/gpu/drm/virtio/virtgpu_ioctl.c  |  50 ++++----
 drivers/gpu/drm/virtio/virtgpu_object.c | 148 +++++++++++++-----------
 drivers/gpu/drm/virtio/virtgpu_plane.c  |  52 ++++-----
 drivers/gpu/drm/virtio/virtgpu_vq.c     | 113 +++++++++---------
 6 files changed, 298 insertions(+), 269 deletions(-)
This is a very, very minor cleanup.
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
---
 drivers/gpu/drm/virtio/virtgpu_object.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 3d2a6d489bfc..07de3260118a 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -106,7 +106,7 @@ static const struct drm_gem_object_funcs virtio_gpu_gem_funcs = {
 	.get_sg_table = drm_gem_shmem_get_sg_table,
 	.vmap = drm_gem_shmem_vmap,
 	.vunmap = drm_gem_shmem_vunmap,
-	.mmap = &drm_gem_shmem_mmap,
+	.mmap = drm_gem_shmem_mmap,
 };
struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
The plan is to have both shmem and virtual "vram" objects running side-by-side in virtio-gpu. It looks like we'll eventually use struct drm_gem_object as a base class, and we'll need to convert to shmem and vram objects on the fly. As a first step, add a virtio_gpu_is_shmem helper. Thanks to kraxel for suggesting this approach on Gitlab.
Suggested-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h    | 3 +++
 drivers/gpu/drm/virtio/virtgpu_object.c | 9 +++++++--
 2 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 95a7443baaba..ce73895cf74b 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -365,6 +365,9 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 			     struct virtio_gpu_object_params *params,
 			     struct virtio_gpu_object **bo_ptr,
 			     struct virtio_gpu_fence *fence);
+
+bool virtio_gpu_is_shmem(struct drm_gem_object *obj);
+
 /* virtgpu_prime.c */
 struct drm_gem_object *virtgpu_gem_prime_import_sg_table(
 	struct drm_device *dev, struct dma_buf_attachment *attach,
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 07de3260118a..c5cad949eb8d 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -95,7 +95,7 @@ static void virtio_gpu_free_object(struct drm_gem_object *obj)
 	virtio_gpu_cleanup_object(bo);
 }
-static const struct drm_gem_object_funcs virtio_gpu_gem_funcs = {
+static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = {
 	.free = virtio_gpu_free_object,
 	.open = virtio_gpu_gem_object_open,
 	.close = virtio_gpu_gem_object_close,
@@ -109,6 +109,11 @@ static const struct drm_gem_object_funcs virtio_gpu_gem_funcs = {
 	.mmap = drm_gem_shmem_mmap,
 };
+bool virtio_gpu_is_shmem(struct drm_gem_object *obj)
+{
+	return obj->funcs == &virtio_gpu_shmem_funcs;
+}
+
 struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
 						size_t size)
 {
@@ -118,7 +123,7 @@ struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
 	if (!bo)
 		return NULL;
-	bo->base.base.funcs = &virtio_gpu_gem_funcs;
+	bo->base.base.funcs = &virtio_gpu_shmem_funcs;
 	return &bo->base.base;
 }
Whether the resource is a shmem based resource or a (planned) vram based resource, it will have a resource handle associated with it. Since the hypercall interface works on resource handles, add a function that returns the handle given a GEM object.
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h    |  1 +
 drivers/gpu/drm/virtio/virtgpu_object.c | 13 +++++++++++++
 2 files changed, 14 insertions(+)
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index ce73895cf74b..48ca1316ef7b 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -367,6 +367,7 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 			     struct virtio_gpu_fence *fence);
 bool virtio_gpu_is_shmem(struct drm_gem_object *obj);
+uint32_t virtio_gpu_get_handle(struct drm_gem_object *obj);
 /* virtgpu_prime.c */
 struct drm_gem_object *virtgpu_gem_prime_import_sg_table(
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index c5cad949eb8d..283b6dadd7c8 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -62,6 +62,19 @@ static void virtio_gpu_resource_id_put(struct virtio_gpu_device *vgdev, uint32_t
 	}
 }
+uint32_t virtio_gpu_get_handle(struct drm_gem_object *obj)
+{
+	if (virtio_gpu_is_shmem(obj)) {
+		struct virtio_gpu_object *bo;
+
+		bo = gem_to_virtio_gpu_obj(obj);
+		return bo->hw_res_handle;
+	}
+
+	DRM_ERROR("resource handle not found\n");
+	return 0;
+}
+
 void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo)
 {
 	struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
This hypercall is reusable for both shmem and (planned) vram based virtgpu objects.
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h    |  2 +-
 drivers/gpu/drm/virtio/virtgpu_object.c |  2 +-
 drivers/gpu/drm/virtio/virtgpu_vq.c     | 17 +++++++++++------
 3 files changed, 13 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 48ca1316ef7b..0e99487f2105 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -267,7 +267,7 @@ void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
 				    struct virtio_gpu_object_array *objs,
 				    struct virtio_gpu_fence *fence);
 void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
-				   struct virtio_gpu_object *bo);
+				   struct drm_gem_object *obj);
 void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
 					uint64_t offset,
 					uint32_t width, uint32_t height,
 					uint32_t x, uint32_t y,
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 283b6dadd7c8..84df573e13de 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -100,7 +100,7 @@ static void virtio_gpu_free_object(struct drm_gem_object *obj)
 	struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
 	if (bo->created) {
-		virtio_gpu_cmd_unref_resource(vgdev, bo);
+		virtio_gpu_cmd_unref_resource(vgdev, obj);
 		virtio_gpu_notify(vgdev);
 		/* completion handler calls virtio_gpu_cleanup_object() */
 		return;
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 5e2375e0f7bb..feceda66da75 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -521,28 +521,33 @@ void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
 static void virtio_gpu_cmd_unref_cb(struct virtio_gpu_device *vgdev,
 				    struct virtio_gpu_vbuffer *vbuf)
 {
-	struct virtio_gpu_object *bo;
+	struct drm_gem_object *obj;
-	bo = vbuf->resp_cb_data;
+	obj = vbuf->resp_cb_data;
 	vbuf->resp_cb_data = NULL;
-	virtio_gpu_cleanup_object(bo);
+	if (obj && virtio_gpu_is_shmem(obj)) {
+		struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
+
+		virtio_gpu_cleanup_object(bo);
+	}
 }
 void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
-				   struct virtio_gpu_object *bo)
+				   struct drm_gem_object *obj)
 {
 	struct virtio_gpu_resource_unref *cmd_p;
 	struct virtio_gpu_vbuffer *vbuf;
+	uint32_t handle = virtio_gpu_get_handle(obj);
 	cmd_p = virtio_gpu_alloc_cmd_cb(vgdev, &vbuf, sizeof(*cmd_p),
 					virtio_gpu_cmd_unref_cb);
 	memset(cmd_p, 0, sizeof(*cmd_p));
 	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_UNREF);
-	cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
+	cmd_p->resource_id = cpu_to_le32(handle);
-	vbuf->resp_cb_data = bo;
+	vbuf->resp_cb_data = obj;
 	virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
 }
These hypercalls are reusable by both shmem and (planned) vram based virtio_gpu objects.
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
---
 drivers/gpu/drm/virtio/virtgpu_vq.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index feceda66da75..14e64c20eda4 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -944,7 +944,7 @@ void virtio_gpu_cmd_context_attach_resource(struct virtio_gpu_device *vgdev,
 					    uint32_t ctx_id,
 					    struct virtio_gpu_object_array *objs)
 {
-	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(objs->objs[0]);
+	uint32_t handle = virtio_gpu_get_handle(objs->objs[0]);
 	struct virtio_gpu_ctx_resource *cmd_p;
 	struct virtio_gpu_vbuffer *vbuf;
@@ -954,7 +954,7 @@ void virtio_gpu_cmd_context_attach_resource(struct virtio_gpu_device *vgdev,
 	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE);
 	cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id);
-	cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
+	cmd_p->resource_id = cpu_to_le32(handle);
 	virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
 }
@@ -962,7 +962,7 @@ void virtio_gpu_cmd_context_detach_resource(struct virtio_gpu_device *vgdev,
 					    uint32_t ctx_id,
 					    struct virtio_gpu_object_array *objs)
 {
-	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(objs->objs[0]);
+	uint32_t handle = virtio_gpu_get_handle(objs->objs[0]);
 	struct virtio_gpu_ctx_resource *cmd_p;
 	struct virtio_gpu_vbuffer *vbuf;
@@ -972,7 +972,7 @@ void virtio_gpu_cmd_context_detach_resource(struct virtio_gpu_device *vgdev,
 	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE);
 	cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id);
-	cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
+	cmd_p->resource_id = cpu_to_le32(handle);
 	virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
 }
The plan is to use this array with VRAM objects too.
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h    |  36 ++++----
 drivers/gpu/drm/virtio/virtgpu_gem.c    | 116 ++++++++++++------------
 drivers/gpu/drm/virtio/virtgpu_ioctl.c  |  32 +++----
 drivers/gpu/drm/virtio/virtgpu_object.c |  20 ++--
 drivers/gpu/drm/virtio/virtgpu_plane.c  |  22 ++---
 drivers/gpu/drm/virtio/virtgpu_vq.c     |  62 +++++++------
 6 files changed, 145 insertions(+), 143 deletions(-)
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h index 0e99487f2105..a1888a20d311 100644 --- a/drivers/gpu/drm/virtio/virtgpu_drv.h +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h @@ -79,7 +79,7 @@ struct virtio_gpu_object { #define gem_to_virtio_gpu_obj(gobj) \ container_of((gobj), struct virtio_gpu_object, base.base)
-struct virtio_gpu_object_array {
+struct virtio_gpu_gem_array {
 	struct ww_acquire_ctx ticket;
 	struct list_head next;
 	u32 nents, total;
@@ -118,7 +118,7 @@ struct virtio_gpu_vbuffer {
 	virtio_gpu_resp_cb resp_cb;
 	void *resp_cb_data;
-	struct virtio_gpu_object_array *objs;
+	struct virtio_gpu_gem_array *array;
 	struct list_head list;
 };
@@ -244,18 +244,18 @@ int virtio_gpu_mode_dumb_mmap(struct drm_file *file_priv, struct drm_device *dev, uint32_t handle, uint64_t *offset_p);
-struct virtio_gpu_object_array *virtio_gpu_array_alloc(u32 nents); -struct virtio_gpu_object_array* +struct virtio_gpu_gem_array *virtio_gpu_array_alloc(u32 nents); +struct virtio_gpu_gem_array* virtio_gpu_array_from_handles(struct drm_file *drm_file, u32 *handles, u32 nents); -void virtio_gpu_array_add_obj(struct virtio_gpu_object_array *objs, +void virtio_gpu_array_add_obj(struct virtio_gpu_gem_array *array, struct drm_gem_object *obj); -int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs); -void virtio_gpu_array_unlock_resv(struct virtio_gpu_object_array *objs); -void virtio_gpu_array_add_fence(struct virtio_gpu_object_array *objs, +int virtio_gpu_array_lock_resv(struct virtio_gpu_gem_array *array); +void virtio_gpu_array_unlock_resv(struct virtio_gpu_gem_array *array); +void virtio_gpu_array_add_fence(struct virtio_gpu_gem_array *array, struct dma_fence *fence); -void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs); +void virtio_gpu_array_put_free(struct virtio_gpu_gem_array *array); void virtio_gpu_array_put_free_delayed(struct virtio_gpu_device *vgdev, - struct virtio_gpu_object_array *objs); + struct virtio_gpu_gem_array *array); void virtio_gpu_array_put_free_work(struct work_struct *work);
/* virtio vg */ @@ -264,7 +264,7 @@ void virtio_gpu_free_vbufs(struct virtio_gpu_device *vgdev); void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev, struct virtio_gpu_object *bo, struct virtio_gpu_object_params *params, - struct virtio_gpu_object_array *objs, + struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence); void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev, struct drm_gem_object *obj); @@ -272,7 +272,7 @@ void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev, uint64_t offset, uint32_t width, uint32_t height, uint32_t x, uint32_t y, - struct virtio_gpu_object_array *objs, + struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence); void virtio_gpu_cmd_resource_flush(struct virtio_gpu_device *vgdev, uint32_t resource_id, @@ -302,32 +302,32 @@ void virtio_gpu_cmd_context_destroy(struct virtio_gpu_device *vgdev, uint32_t id); void virtio_gpu_cmd_context_attach_resource(struct virtio_gpu_device *vgdev, uint32_t ctx_id, - struct virtio_gpu_object_array *objs); + struct virtio_gpu_gem_array *array); void virtio_gpu_cmd_context_detach_resource(struct virtio_gpu_device *vgdev, uint32_t ctx_id, - struct virtio_gpu_object_array *objs); + struct virtio_gpu_gem_array *array); void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev, void *data, uint32_t data_size, uint32_t ctx_id, - struct virtio_gpu_object_array *objs, + struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence); void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev, uint32_t ctx_id, uint64_t offset, uint32_t level, struct drm_virtgpu_3d_box *box, - struct virtio_gpu_object_array *objs, + struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence); void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev, uint32_t ctx_id, uint64_t offset, uint32_t level, struct drm_virtgpu_3d_box *box, - struct virtio_gpu_object_array *objs, + struct virtio_gpu_gem_array *array, struct 
virtio_gpu_fence *fence); void virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev, struct virtio_gpu_object *bo, struct virtio_gpu_object_params *params, - struct virtio_gpu_object_array *objs, + struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence); void virtio_gpu_ctrl_ack(struct virtqueue *vq); void virtio_gpu_cursor_ack(struct virtqueue *vq); diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c index 0d6152c99a27..53181fe2afe0 100644 --- a/drivers/gpu/drm/virtio/virtgpu_gem.c +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c @@ -111,18 +111,18 @@ int virtio_gpu_gem_object_open(struct drm_gem_object *obj, { struct virtio_gpu_device *vgdev = obj->dev->dev_private; struct virtio_gpu_fpriv *vfpriv = file->driver_priv; - struct virtio_gpu_object_array *objs; + struct virtio_gpu_gem_array *array;
if (!vgdev->has_virgl_3d) return 0;
- objs = virtio_gpu_array_alloc(1); - if (!objs) + array = virtio_gpu_array_alloc(1); + if (!array) return -ENOMEM; - virtio_gpu_array_add_obj(objs, obj); + virtio_gpu_array_add_obj(array, obj);
virtio_gpu_cmd_context_attach_resource(vgdev, vfpriv->ctx_id, - objs); + array); virtio_gpu_notify(vgdev); return 0; } @@ -132,119 +132,119 @@ void virtio_gpu_gem_object_close(struct drm_gem_object *obj, { struct virtio_gpu_device *vgdev = obj->dev->dev_private; struct virtio_gpu_fpriv *vfpriv = file->driver_priv; - struct virtio_gpu_object_array *objs; + struct virtio_gpu_gem_array *array;
if (!vgdev->has_virgl_3d) return;
- objs = virtio_gpu_array_alloc(1); - if (!objs) + array = virtio_gpu_array_alloc(1); + if (!array) return; - virtio_gpu_array_add_obj(objs, obj); + virtio_gpu_array_add_obj(array, obj);
virtio_gpu_cmd_context_detach_resource(vgdev, vfpriv->ctx_id, - objs); + array); virtio_gpu_notify(vgdev); }
-struct virtio_gpu_object_array *virtio_gpu_array_alloc(u32 nents) +struct virtio_gpu_gem_array *virtio_gpu_array_alloc(u32 nents) { - struct virtio_gpu_object_array *objs; - size_t size = sizeof(*objs) + sizeof(objs->objs[0]) * nents; + struct virtio_gpu_gem_array *array; + size_t size = sizeof(*array) + sizeof(array->objs[0]) * nents;
- objs = kmalloc(size, GFP_KERNEL); - if (!objs) + array = kmalloc(size, GFP_KERNEL); + if (!array) return NULL;
- objs->nents = 0; - objs->total = nents; - return objs; + array->nents = 0; + array->total = nents; + return array; }
-static void virtio_gpu_array_free(struct virtio_gpu_object_array *objs) +static void virtio_gpu_array_free(struct virtio_gpu_gem_array *array) { - kfree(objs); + kfree(array); }
-struct virtio_gpu_object_array* +struct virtio_gpu_gem_array* virtio_gpu_array_from_handles(struct drm_file *drm_file, u32 *handles, u32 nents) { - struct virtio_gpu_object_array *objs; + struct virtio_gpu_gem_array *array; u32 i;
- objs = virtio_gpu_array_alloc(nents); - if (!objs) + array = virtio_gpu_array_alloc(nents); + if (!array) return NULL;
for (i = 0; i < nents; i++) { - objs->objs[i] = drm_gem_object_lookup(drm_file, handles[i]); - if (!objs->objs[i]) { - objs->nents = i; - virtio_gpu_array_put_free(objs); + array->objs[i] = drm_gem_object_lookup(drm_file, handles[i]); + if (!array->objs[i]) { + array->nents = i; + virtio_gpu_array_put_free(array); return NULL; } } - objs->nents = i; - return objs; + array->nents = i; + return array; }
-void virtio_gpu_array_add_obj(struct virtio_gpu_object_array *objs, +void virtio_gpu_array_add_obj(struct virtio_gpu_gem_array *array, struct drm_gem_object *obj) { - if (WARN_ON_ONCE(objs->nents == objs->total)) + if (WARN_ON_ONCE(array->nents == array->total)) return;
drm_gem_object_get(obj); - objs->objs[objs->nents] = obj; - objs->nents++; + array->objs[array->nents] = obj; + array->nents++; }
-int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs) +int virtio_gpu_array_lock_resv(struct virtio_gpu_gem_array *array) { int ret;
- if (objs->nents == 1) { - ret = dma_resv_lock_interruptible(objs->objs[0]->resv, NULL); + if (array->nents == 1) { + ret = dma_resv_lock_interruptible(array->objs[0]->resv, NULL); } else { - ret = drm_gem_lock_reservations(objs->objs, objs->nents, - &objs->ticket); + ret = drm_gem_lock_reservations(array->objs, array->nents, + &array->ticket); } return ret; }
-void virtio_gpu_array_unlock_resv(struct virtio_gpu_object_array *objs) +void virtio_gpu_array_unlock_resv(struct virtio_gpu_gem_array *array) { - if (objs->nents == 1) { - dma_resv_unlock(objs->objs[0]->resv); + if (array->nents == 1) { + dma_resv_unlock(array->objs[0]->resv); } else { - drm_gem_unlock_reservations(objs->objs, objs->nents, - &objs->ticket); + drm_gem_unlock_reservations(array->objs, array->nents, + &array->ticket); } }
-void virtio_gpu_array_add_fence(struct virtio_gpu_object_array *objs, +void virtio_gpu_array_add_fence(struct virtio_gpu_gem_array *array, struct dma_fence *fence) { int i;
- for (i = 0; i < objs->nents; i++) - dma_resv_add_excl_fence(objs->objs[i]->resv, fence); + for (i = 0; i < array->nents; i++) + dma_resv_add_excl_fence(array->objs[i]->resv, fence); }
-void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs) +void virtio_gpu_array_put_free(struct virtio_gpu_gem_array *array) { u32 i;
- for (i = 0; i < objs->nents; i++) - drm_gem_object_put_unlocked(objs->objs[i]); - virtio_gpu_array_free(objs); + for (i = 0; i < array->nents; i++) + drm_gem_object_put_unlocked(array->objs[i]); + virtio_gpu_array_free(array); }
void virtio_gpu_array_put_free_delayed(struct virtio_gpu_device *vgdev, - struct virtio_gpu_object_array *objs) + struct virtio_gpu_gem_array *array) { spin_lock(&vgdev->obj_free_lock); - list_add_tail(&objs->next, &vgdev->obj_free_list); + list_add_tail(&array->next, &vgdev->obj_free_list); spin_unlock(&vgdev->obj_free_lock); schedule_work(&vgdev->obj_free_work); } @@ -253,15 +253,15 @@ void virtio_gpu_array_put_free_work(struct work_struct *work) { struct virtio_gpu_device *vgdev = container_of(work, struct virtio_gpu_device, obj_free_work); - struct virtio_gpu_object_array *objs; + struct virtio_gpu_gem_array *array;
spin_lock(&vgdev->obj_free_lock); while (!list_empty(&vgdev->obj_free_list)) { - objs = list_first_entry(&vgdev->obj_free_list, - struct virtio_gpu_object_array, next); - list_del(&objs->next); + array = list_first_entry(&vgdev->obj_free_list, + struct virtio_gpu_gem_array, next); + list_del(&array->next); spin_unlock(&vgdev->obj_free_lock); - virtio_gpu_array_put_free(objs); + virtio_gpu_array_put_free(array); spin_lock(&vgdev->obj_free_lock); } spin_unlock(&vgdev->obj_free_lock); diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c index 336cc9143205..9a5bb000ccf2 100644 --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c @@ -81,7 +81,7 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data, int ret; uint32_t *bo_handles = NULL; void __user *user_bo_handles = NULL; - struct virtio_gpu_object_array *buflist = NULL; + struct virtio_gpu_gem_array *buflist = NULL; struct sync_file *sync_file; int in_fence_fd = exbuf->fence_fd; int out_fence_fd = -1; @@ -312,7 +312,7 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev, struct virtio_gpu_device *vgdev = dev->dev_private; struct virtio_gpu_fpriv *vfpriv = file->driver_priv; struct drm_virtgpu_3d_transfer_from_host *args = data; - struct virtio_gpu_object_array *objs; + struct virtio_gpu_gem_array *array; struct virtio_gpu_fence *fence; int ret; u32 offset = args->offset; @@ -321,11 +321,11 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev, return -ENOSYS;
virtio_gpu_create_context(dev, file); - objs = virtio_gpu_array_from_handles(file, &args->bo_handle, 1); - if (objs == NULL) + array = virtio_gpu_array_from_handles(file, &args->bo_handle, 1); + if (array == NULL) return -ENOENT;
- ret = virtio_gpu_array_lock_resv(objs); + ret = virtio_gpu_array_lock_resv(array); if (ret != 0) goto err_put_free;
@@ -336,15 +336,15 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev, } virtio_gpu_cmd_transfer_from_host_3d (vgdev, vfpriv->ctx_id, offset, args->level, - &args->box, objs, fence); + &args->box, array, fence); dma_fence_put(&fence->f); virtio_gpu_notify(vgdev); return 0;
err_unlock: - virtio_gpu_array_unlock_resv(objs); + virtio_gpu_array_unlock_resv(array); err_put_free: - virtio_gpu_array_put_free(objs); + virtio_gpu_array_put_free(array); return ret; }
@@ -354,23 +354,23 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data, struct virtio_gpu_device *vgdev = dev->dev_private; struct virtio_gpu_fpriv *vfpriv = file->driver_priv; struct drm_virtgpu_3d_transfer_to_host *args = data; - struct virtio_gpu_object_array *objs; + struct virtio_gpu_gem_array *array; struct virtio_gpu_fence *fence; int ret; u32 offset = args->offset;
- objs = virtio_gpu_array_from_handles(file, &args->bo_handle, 1); - if (objs == NULL) + array = virtio_gpu_array_from_handles(file, &args->bo_handle, 1); + if (array == NULL) return -ENOENT;
if (!vgdev->has_virgl_3d) { virtio_gpu_cmd_transfer_to_host_2d (vgdev, offset, args->box.w, args->box.h, args->box.x, args->box.y, - objs, NULL); + array, NULL); } else { virtio_gpu_create_context(dev, file); - ret = virtio_gpu_array_lock_resv(objs); + ret = virtio_gpu_array_lock_resv(array); if (ret != 0) goto err_put_free;
@@ -382,16 +382,16 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data, virtio_gpu_cmd_transfer_to_host_3d (vgdev, vfpriv ? vfpriv->ctx_id : 0, offset, - args->level, &args->box, objs, fence); + args->level, &args->box, array, fence); dma_fence_put(&fence->f); } virtio_gpu_notify(vgdev); return 0;
err_unlock: - virtio_gpu_array_unlock_resv(objs); + virtio_gpu_array_unlock_resv(array); err_put_free: - virtio_gpu_array_put_free(objs); + virtio_gpu_array_put_free(array); return ret; }
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c index 84df573e13de..bc8b5a59f364 100644 --- a/drivers/gpu/drm/virtio/virtgpu_object.c +++ b/drivers/gpu/drm/virtio/virtgpu_object.c @@ -190,7 +190,7 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev, struct virtio_gpu_object **bo_ptr, struct virtio_gpu_fence *fence) { - struct virtio_gpu_object_array *objs = NULL; + struct virtio_gpu_gem_array *array = NULL; struct drm_gem_shmem_object *shmem_obj; struct virtio_gpu_object *bo; struct virtio_gpu_mem_entry *ents; @@ -213,22 +213,22 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
if (fence) { ret = -ENOMEM; - objs = virtio_gpu_array_alloc(1); - if (!objs) + array = virtio_gpu_array_alloc(1); + if (!array) goto err_put_id; - virtio_gpu_array_add_obj(objs, &bo->base.base); + virtio_gpu_array_add_obj(array, &bo->base.base);
- ret = virtio_gpu_array_lock_resv(objs); + ret = virtio_gpu_array_lock_resv(array); if (ret != 0) - goto err_put_objs; + goto err_put_array; }
if (params->virgl) { virtio_gpu_cmd_resource_create_3d(vgdev, bo, params, - objs, fence); + array, fence); } else { virtio_gpu_cmd_create_resource(vgdev, bo, params, - objs, fence); + array, fence); }
ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents); @@ -247,8 +247,8 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev, *bo_ptr = bo; return 0;
-err_put_objs: - virtio_gpu_array_put_free(objs); +err_put_array: + virtio_gpu_array_put_free(array); err_put_id: virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle); err_free_gem: diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c index 52d24179bcec..fcff5d7a4cee 100644 --- a/drivers/gpu/drm/virtio/virtgpu_plane.c +++ b/drivers/gpu/drm/virtio/virtgpu_plane.c @@ -109,7 +109,7 @@ static void virtio_gpu_update_dumb_bo(struct virtio_gpu_device *vgdev, { struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(state->fb->obj[0]); - struct virtio_gpu_object_array *objs; + struct virtio_gpu_gem_array *array; uint32_t w = rect->x2 - rect->x1; uint32_t h = rect->y2 - rect->y1; uint32_t x = rect->x1; @@ -117,13 +117,13 @@ static void virtio_gpu_update_dumb_bo(struct virtio_gpu_device *vgdev, uint32_t off = x * state->fb->format->cpp[0] + y * state->fb->pitches[0];
- objs = virtio_gpu_array_alloc(1); - if (!objs) + array = virtio_gpu_array_alloc(1); + if (!array) return; - virtio_gpu_array_add_obj(objs, &bo->base.base); + virtio_gpu_array_add_obj(array, &bo->base.base);
virtio_gpu_cmd_transfer_to_host_2d(vgdev, off, w, h, x, y, - objs, NULL); + array, NULL); }
static void virtio_gpu_primary_plane_update(struct drm_plane *plane, @@ -252,18 +252,18 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
if (bo && bo->dumb && (plane->state->fb != old_state->fb)) { /* new cursor -- update & wait */ - struct virtio_gpu_object_array *objs; + struct virtio_gpu_gem_array *array;
- objs = virtio_gpu_array_alloc(1); - if (!objs) + array = virtio_gpu_array_alloc(1); + if (!array) return; - virtio_gpu_array_add_obj(objs, vgfb->base.obj[0]); - virtio_gpu_array_lock_resv(objs); + virtio_gpu_array_add_obj(array, vgfb->base.obj[0]); + virtio_gpu_array_lock_resv(array); virtio_gpu_cmd_transfer_to_host_2d (vgdev, 0, plane->state->crtc_w, plane->state->crtc_h, - 0, 0, objs, vgfb->fence); + 0, 0, array, vgfb->fence); virtio_gpu_notify(vgdev); dma_fence_wait(&vgfb->fence->f, true); dma_fence_put(&vgfb->fence->f); diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c index 14e64c20eda4..961371566724 100644 --- a/drivers/gpu/drm/virtio/virtgpu_vq.c +++ b/drivers/gpu/drm/virtio/virtgpu_vq.c @@ -250,8 +250,8 @@ void virtio_gpu_dequeue_ctrl_func(struct work_struct *work) virtio_gpu_fence_event_process(vgdev, fence_id);
list_for_each_entry_safe(entry, tmp, &reclaim_list, list) { - if (entry->objs) - virtio_gpu_array_put_free_delayed(vgdev, entry->objs); + if (entry->array) + virtio_gpu_array_put_free_delayed(vgdev, entry->array); list_del(&entry->list); free_vbuf(vgdev, entry); } @@ -332,8 +332,8 @@ static void virtio_gpu_queue_ctrl_sgs(struct virtio_gpu_device *vgdev, int ret, idx;
if (!drm_dev_enter(vgdev->ddev, &idx)) { - if (fence && vbuf->objs) - virtio_gpu_array_unlock_resv(vbuf->objs); + if (fence && vbuf->array) + virtio_gpu_array_unlock_resv(vbuf->array); free_vbuf(vgdev, vbuf); return; } @@ -357,9 +357,9 @@ static void virtio_gpu_queue_ctrl_sgs(struct virtio_gpu_device *vgdev, if (fence) { virtio_gpu_fence_emit(vgdev, virtio_gpu_vbuf_ctrl_hdr(vbuf), fence); - if (vbuf->objs) { - virtio_gpu_array_add_fence(vbuf->objs, &fence->f); - virtio_gpu_array_unlock_resv(vbuf->objs); + if (vbuf->array) { + virtio_gpu_array_add_fence(vbuf->array, &fence->f); + virtio_gpu_array_unlock_resv(vbuf->array); } }
@@ -381,6 +381,7 @@ static void virtio_gpu_queue_fenced_ctrl_buffer(struct virtio_gpu_device *vgdev, { struct scatterlist *sgs[3], vcmd, vout, vresp; struct sg_table *sgt = NULL; + struct virtio_gpu_gem_array *array = NULL; int elemcnt = 0, outcnt = 0, incnt = 0;
/* set up vcmd */ @@ -396,8 +397,9 @@ static void virtio_gpu_queue_fenced_ctrl_buffer(struct virtio_gpu_device *vgdev, sgt = vmalloc_to_sgt(vbuf->data_buf, vbuf->data_size, &sg_ents); if (!sgt) { - if (fence && vbuf->objs) - virtio_gpu_array_unlock_resv(vbuf->objs); + array = vbuf->array; + if (fence && array) + virtio_gpu_array_unlock_resv(array); return; }
@@ -498,7 +500,7 @@ static void virtio_gpu_queue_cursor(struct virtio_gpu_device *vgdev, void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev, struct virtio_gpu_object *bo, struct virtio_gpu_object_params *params, - struct virtio_gpu_object_array *objs, + struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence) { struct virtio_gpu_resource_create_2d *cmd_p; @@ -506,7 +508,7 @@ void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); memset(cmd_p, 0, sizeof(*cmd_p)); - vbuf->objs = objs; + vbuf->array = array;
cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_CREATE_2D); cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle); @@ -598,10 +600,10 @@ void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev, uint64_t offset, uint32_t width, uint32_t height, uint32_t x, uint32_t y, - struct virtio_gpu_object_array *objs, + struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence) { - struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(objs->objs[0]); + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(array->objs[0]); struct virtio_gpu_transfer_to_host_2d *cmd_p; struct virtio_gpu_vbuffer *vbuf; bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev); @@ -613,7 +615,7 @@ void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); memset(cmd_p, 0, sizeof(*cmd_p)); - vbuf->objs = objs; + vbuf->array = array;
cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D); cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle); @@ -942,15 +944,15 @@ void virtio_gpu_cmd_context_destroy(struct virtio_gpu_device *vgdev,
void virtio_gpu_cmd_context_attach_resource(struct virtio_gpu_device *vgdev, uint32_t ctx_id, - struct virtio_gpu_object_array *objs) + struct virtio_gpu_gem_array *array) { - uint32_t handle = virtio_gpu_get_handle(objs->objs[0]); + uint32_t handle = virtio_gpu_get_handle(array->objs[0]); struct virtio_gpu_ctx_resource *cmd_p; struct virtio_gpu_vbuffer *vbuf;
cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); memset(cmd_p, 0, sizeof(*cmd_p)); - vbuf->objs = objs; + vbuf->array = array;
cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE); cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id); @@ -960,15 +962,15 @@ void virtio_gpu_cmd_context_attach_resource(struct virtio_gpu_device *vgdev,
void virtio_gpu_cmd_context_detach_resource(struct virtio_gpu_device *vgdev, uint32_t ctx_id, - struct virtio_gpu_object_array *objs) + struct virtio_gpu_gem_array *array) { - uint32_t handle = virtio_gpu_get_handle(objs->objs[0]); + uint32_t handle = virtio_gpu_get_handle(array->objs[0]); struct virtio_gpu_ctx_resource *cmd_p; struct virtio_gpu_vbuffer *vbuf;
cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); memset(cmd_p, 0, sizeof(*cmd_p)); - vbuf->objs = objs; + vbuf->array = array;
cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE); cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id); @@ -980,7 +982,7 @@ void virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev, struct virtio_gpu_object *bo, struct virtio_gpu_object_params *params, - struct virtio_gpu_object_array *objs, + struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence) { struct virtio_gpu_resource_create_3d *cmd_p; @@ -988,7 +990,7 @@ virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev,
cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); memset(cmd_p, 0, sizeof(*cmd_p)); - vbuf->objs = objs; + vbuf->array = array;
cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_CREATE_3D); cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle); @@ -1013,10 +1015,10 @@ void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev, uint32_t ctx_id, uint64_t offset, uint32_t level, struct drm_virtgpu_3d_box *box, - struct virtio_gpu_object_array *objs, + struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence) { - struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(objs->objs[0]); + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(array->objs[0]); struct virtio_gpu_transfer_host_3d *cmd_p; struct virtio_gpu_vbuffer *vbuf; bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev); @@ -1029,7 +1031,7 @@ void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev, cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); memset(cmd_p, 0, sizeof(*cmd_p));
- vbuf->objs = objs; + vbuf->array = array;
cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D); cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id); @@ -1045,17 +1047,17 @@ void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev, uint32_t ctx_id, uint64_t offset, uint32_t level, struct drm_virtgpu_3d_box *box, - struct virtio_gpu_object_array *objs, + struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence) { - struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(objs->objs[0]); + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(array->objs[0]); struct virtio_gpu_transfer_host_3d *cmd_p; struct virtio_gpu_vbuffer *vbuf;
cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); memset(cmd_p, 0, sizeof(*cmd_p));
- vbuf->objs = objs; + vbuf->array = array;
cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D); cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id); @@ -1070,7 +1072,7 @@ void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev, void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev, void *data, uint32_t data_size, uint32_t ctx_id, - struct virtio_gpu_object_array *objs, + struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence) { struct virtio_gpu_cmd_submit *cmd_p; @@ -1081,7 +1083,7 @@ void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev,
vbuf->data_buf = data; vbuf->data_size = data_size; - vbuf->objs = objs; + vbuf->array = array;
cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_SUBMIT_3D); cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id);
Currently, struct virtio_gpu_object refers to the shmem-based object, which is fair. Since we plan to broaden the set of object types, make the creation params name type-neutral as well.
Signed-off-by: Gurchetan Singh gurchetansingh@chromium.org --- drivers/gpu/drm/virtio/virtgpu_drv.h | 10 +++++----- drivers/gpu/drm/virtio/virtgpu_gem.c | 4 ++-- drivers/gpu/drm/virtio/virtgpu_ioctl.c | 2 +- drivers/gpu/drm/virtio/virtgpu_object.c | 2 +- drivers/gpu/drm/virtio/virtgpu_vq.c | 4 ++-- 5 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h index a1888a20d311..4399a782b05e 100644 --- a/drivers/gpu/drm/virtio/virtgpu_drv.h +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h @@ -49,7 +49,7 @@ #define DRIVER_MINOR 1 #define DRIVER_PATCHLEVEL 0
-struct virtio_gpu_object_params { +struct virtio_gpu_create_params { uint32_t format; uint32_t width; uint32_t height; @@ -230,7 +230,7 @@ int virtio_gpu_gem_init(struct virtio_gpu_device *vgdev); void virtio_gpu_gem_fini(struct virtio_gpu_device *vgdev); int virtio_gpu_gem_create(struct drm_file *file, struct drm_device *dev, - struct virtio_gpu_object_params *params, + struct virtio_gpu_create_params *params, struct drm_gem_object **obj_p, uint32_t *handle_p); int virtio_gpu_gem_object_open(struct drm_gem_object *obj, @@ -263,7 +263,7 @@ int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev); void virtio_gpu_free_vbufs(struct virtio_gpu_device *vgdev); void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev, struct virtio_gpu_object *bo, - struct virtio_gpu_object_params *params, + struct virtio_gpu_create_params *params, struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence); void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev, @@ -326,7 +326,7 @@ void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev, void virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev, struct virtio_gpu_object *bo, - struct virtio_gpu_object_params *params, + struct virtio_gpu_create_params *params, struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence); void virtio_gpu_ctrl_ack(struct virtqueue *vq); @@ -362,7 +362,7 @@ void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo); struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev, size_t size); int virtio_gpu_object_create(struct virtio_gpu_device *vgdev, - struct virtio_gpu_object_params *params, + struct virtio_gpu_create_params *params, struct virtio_gpu_object **bo_ptr, struct virtio_gpu_fence *fence);
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c index 53181fe2afe0..569416dd00e6 100644 --- a/drivers/gpu/drm/virtio/virtgpu_gem.c +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c @@ -30,7 +30,7 @@
int virtio_gpu_gem_create(struct drm_file *file, struct drm_device *dev, - struct virtio_gpu_object_params *params, + struct virtio_gpu_create_params *params, struct drm_gem_object **obj_p, uint32_t *handle_p) { @@ -63,7 +63,7 @@ int virtio_gpu_mode_dumb_create(struct drm_file *file_priv, struct drm_mode_create_dumb *args) { struct drm_gem_object *gobj; - struct virtio_gpu_object_params params = { 0 }; + struct virtio_gpu_create_params params = { 0 }; int ret; uint32_t pitch;
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c index 9a5bb000ccf2..ec97e18d104d 100644 --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c @@ -232,7 +232,7 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data, struct virtio_gpu_object *qobj; struct drm_gem_object *obj; uint32_t handle = 0; - struct virtio_gpu_object_params params = { 0 }; + struct virtio_gpu_create_params params = { 0 };
if (vgdev->has_virgl_3d) { virtio_gpu_create_context(dev, file); diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c index bc8b5a59f364..312c5bf4950a 100644 --- a/drivers/gpu/drm/virtio/virtgpu_object.c +++ b/drivers/gpu/drm/virtio/virtgpu_object.c @@ -186,7 +186,7 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev, }
int virtio_gpu_object_create(struct virtio_gpu_device *vgdev, - struct virtio_gpu_object_params *params, + struct virtio_gpu_create_params *params, struct virtio_gpu_object **bo_ptr, struct virtio_gpu_fence *fence) { diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c index 961371566724..878d07b75b7f 100644 --- a/drivers/gpu/drm/virtio/virtgpu_vq.c +++ b/drivers/gpu/drm/virtio/virtgpu_vq.c @@ -499,7 +499,7 @@ static void virtio_gpu_queue_cursor(struct virtio_gpu_device *vgdev, /* create a basic resource */ void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev, struct virtio_gpu_object *bo, - struct virtio_gpu_object_params *params, + struct virtio_gpu_create_params *params, struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence) { @@ -981,7 +981,7 @@ void virtio_gpu_cmd_context_detach_resource(struct virtio_gpu_device *vgdev, void virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev, struct virtio_gpu_object *bo, - struct virtio_gpu_object_params *params, + struct virtio_gpu_create_params *params, struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence) {
This renames struct virtio_gpu_object to struct virtio_gpu_shmem, in line with the planned struct virtio_gpu_vram.
Signed-off-by: Gurchetan Singh gurchetansingh@chromium.org --- drivers/gpu/drm/virtio/virtgpu_drv.h | 22 ++--- drivers/gpu/drm/virtio/virtgpu_gem.c | 12 +-- drivers/gpu/drm/virtio/virtgpu_ioctl.c | 16 ++-- drivers/gpu/drm/virtio/virtgpu_object.c | 110 ++++++++++++------------ drivers/gpu/drm/virtio/virtgpu_plane.c | 32 +++---- drivers/gpu/drm/virtio/virtgpu_vq.c | 36 ++++---- 6 files changed, 114 insertions(+), 114 deletions(-)
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h index 4399a782b05e..f62e036f7c40 100644 --- a/drivers/gpu/drm/virtio/virtgpu_drv.h +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h @@ -66,7 +66,7 @@ struct virtio_gpu_create_params { uint32_t flags; };
-struct virtio_gpu_object { +struct virtio_gpu_shmem { struct drm_gem_shmem_object base; uint32_t hw_res_handle;
@@ -76,8 +76,8 @@ struct virtio_gpu_object { bool dumb; bool created; }; -#define gem_to_virtio_gpu_obj(gobj) \ - container_of((gobj), struct virtio_gpu_object, base.base) +#define to_virtio_gpu_shmem(gobj) \ + container_of((gobj), struct virtio_gpu_shmem, base.base)
struct virtio_gpu_gem_array { struct ww_acquire_ctx ticket; @@ -262,7 +262,7 @@ void virtio_gpu_array_put_free_work(struct work_struct *work); int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev); void virtio_gpu_free_vbufs(struct virtio_gpu_device *vgdev); void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev, - struct virtio_gpu_object *bo, + struct virtio_gpu_shmem *bo, struct virtio_gpu_create_params *params, struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence); @@ -283,7 +283,7 @@ void virtio_gpu_cmd_set_scanout(struct virtio_gpu_device *vgdev, uint32_t width, uint32_t height, uint32_t x, uint32_t y); int virtio_gpu_object_attach(struct virtio_gpu_device *vgdev, - struct virtio_gpu_object *obj, + struct virtio_gpu_shmem *shmem, struct virtio_gpu_mem_entry *ents, unsigned int nents); int virtio_gpu_attach_status_page(struct virtio_gpu_device *vgdev); @@ -325,7 +325,7 @@ void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev, struct virtio_gpu_fence *fence); void virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev, - struct virtio_gpu_object *bo, + struct virtio_gpu_shmem *shmem, struct virtio_gpu_create_params *params, struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence); @@ -358,13 +358,13 @@ void virtio_gpu_fence_event_process(struct virtio_gpu_device *vdev, u64 last_seq);
/* virtio_gpu_object */ -void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo); +void virtio_gpu_cleanup_shmem(struct virtio_gpu_shmem *shmem); struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev, size_t size); -int virtio_gpu_object_create(struct virtio_gpu_device *vgdev, - struct virtio_gpu_create_params *params, - struct virtio_gpu_object **bo_ptr, - struct virtio_gpu_fence *fence); +int virtio_gpu_shmem_create(struct virtio_gpu_device *vgdev, + struct virtio_gpu_create_params *params, + struct virtio_gpu_shmem **shmem_ptr, + struct virtio_gpu_fence *fence);
bool virtio_gpu_is_shmem(struct drm_gem_object *obj); uint32_t virtio_gpu_get_handle(struct drm_gem_object *obj); diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c index 569416dd00e6..d8429798613a 100644 --- a/drivers/gpu/drm/virtio/virtgpu_gem.c +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c @@ -35,24 +35,24 @@ int virtio_gpu_gem_create(struct drm_file *file, uint32_t *handle_p) { struct virtio_gpu_device *vgdev = dev->dev_private; - struct virtio_gpu_object *obj; + struct virtio_gpu_shmem *shmem; int ret; u32 handle;
- ret = virtio_gpu_object_create(vgdev, params, &obj, NULL); + ret = virtio_gpu_shmem_create(vgdev, params, &shmem, NULL); if (ret < 0) return ret;
- ret = drm_gem_handle_create(file, &obj->base.base, &handle); + ret = drm_gem_handle_create(file, &shmem->base.base, &handle); if (ret) { - drm_gem_object_release(&obj->base.base); + drm_gem_object_release(&shmem->base.base); return ret; }
- *obj_p = &obj->base.base; + *obj_p = &shmem->base.base;
/* drop reference from allocate - handle holds it now */ - drm_gem_object_put_unlocked(&obj->base.base); + drm_gem_object_put_unlocked(&shmem->base.base);
*handle_p = handle; return 0; diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c index ec97e18d104d..cf1639219bb0 100644 --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c @@ -229,7 +229,7 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data, struct drm_virtgpu_resource_create *rc = data; struct virtio_gpu_fence *fence; int ret; - struct virtio_gpu_object *qobj; + struct virtio_gpu_shmem *shmem; struct drm_gem_object *obj; uint32_t handle = 0; struct virtio_gpu_create_params params = { 0 }; @@ -268,11 +268,11 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data, fence = virtio_gpu_fence_alloc(vgdev); if (!fence) return -ENOMEM; - ret = virtio_gpu_object_create(vgdev, ¶ms, &qobj, fence); + ret = virtio_gpu_shmem_create(vgdev, ¶ms, &shmem, fence); dma_fence_put(&fence->f); if (ret < 0) return ret; - obj = &qobj->base.base; + obj = &shmem->base.base;
ret = drm_gem_handle_create(file, obj, &handle); if (ret) { @@ -281,7 +281,7 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data, } drm_gem_object_put_unlocked(obj);
- rc->res_handle = qobj->hw_res_handle; /* similiar to a VM address */ + rc->res_handle = shmem->hw_res_handle; /* similar to a VM address */ rc->bo_handle = handle; return 0; } @@ -291,16 +291,16 @@ static int virtio_gpu_resource_info_ioctl(struct drm_device *dev, void *data, { struct drm_virtgpu_resource_info *ri = data; struct drm_gem_object *gobj = NULL; - struct virtio_gpu_object *qobj = NULL; + struct virtio_gpu_shmem *shmem = NULL;
gobj = drm_gem_object_lookup(file, ri->bo_handle); if (gobj == NULL) return -ENOENT;
- qobj = gem_to_virtio_gpu_obj(gobj); + shmem = to_virtio_gpu_shmem(gobj);
- ri->size = qobj->base.base.size; - ri->res_handle = qobj->hw_res_handle; + ri->size = shmem->base.base.size; + ri->res_handle = shmem->hw_res_handle; drm_gem_object_put_unlocked(gobj); return 0; } diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c index 312c5bf4950a..d95c6e93e90b 100644 --- a/drivers/gpu/drm/virtio/virtgpu_object.c +++ b/drivers/gpu/drm/virtio/virtgpu_object.c @@ -65,51 +65,51 @@ static void virtio_gpu_resource_id_put(struct virtio_gpu_device *vgdev, uint32_t uint32_t virtio_gpu_get_handle(struct drm_gem_object *obj) { if (virtio_gpu_is_shmem(obj)) { - struct virtio_gpu_object *bo; + struct virtio_gpu_shmem *shmem;
- bo = gem_to_virtio_gpu_obj(obj); - return bo->hw_res_handle; + shmem = to_virtio_gpu_shmem(obj); + return shmem->hw_res_handle; }
DRM_ERROR("resource handle not found\n"); return 0; }
-void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo) +void virtio_gpu_cleanup_shmem(struct virtio_gpu_shmem *shmem) { - struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private; + struct virtio_gpu_device *vgdev = shmem->base.base.dev->dev_private;
- if (bo->pages) { - if (bo->mapped) { + if (shmem->pages) { + if (shmem->mapped) { dma_unmap_sg(vgdev->vdev->dev.parent, - bo->pages->sgl, bo->mapped, + shmem->pages->sgl, shmem->mapped, DMA_TO_DEVICE); - bo->mapped = 0; + shmem->mapped = 0; } - sg_free_table(bo->pages); - bo->pages = NULL; - drm_gem_shmem_unpin(&bo->base.base); + sg_free_table(shmem->pages); + shmem->pages = NULL; + drm_gem_shmem_unpin(&shmem->base.base); } - virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle); - drm_gem_shmem_free_object(&bo->base.base); + virtio_gpu_resource_id_put(vgdev, shmem->hw_res_handle); + drm_gem_shmem_free_object(&shmem->base.base); }
-static void virtio_gpu_free_object(struct drm_gem_object *obj) +static void virtio_gpu_shmem_free(struct drm_gem_object *obj) { - struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); - struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private; + struct virtio_gpu_shmem *shmem = to_virtio_gpu_shmem(obj); + struct virtio_gpu_device *vgdev = shmem->base.base.dev->dev_private;
- if (bo->created) { + if (shmem->created) { virtio_gpu_cmd_unref_resource(vgdev, obj); virtio_gpu_notify(vgdev); - /* completion handler calls virtio_gpu_cleanup_object() */ + /* completion handler calls virtio_gpu_cleanup_shmem() */ return; } - virtio_gpu_cleanup_object(bo); + virtio_gpu_cleanup_shmem(shmem); }
static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = { - .free = virtio_gpu_free_object, + .free = virtio_gpu_shmem_free, .open = virtio_gpu_gem_object_open, .close = virtio_gpu_gem_object_close,
@@ -130,42 +130,42 @@ bool virtio_gpu_is_shmem(struct drm_gem_object *obj) struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev, size_t size) { - struct virtio_gpu_object *bo; + struct virtio_gpu_shmem *shmem;
- bo = kzalloc(sizeof(*bo), GFP_KERNEL); - if (!bo) + shmem = kzalloc(sizeof(*shmem), GFP_KERNEL); + if (!shmem) return NULL;
- bo->base.base.funcs = &virtio_gpu_shmem_funcs; - return &bo->base.base; + shmem->base.base.funcs = &virtio_gpu_shmem_funcs; + return &shmem->base.base; }
-static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev, - struct virtio_gpu_object *bo, - struct virtio_gpu_mem_entry **ents, - unsigned int *nents) +static int virtio_gpu_shmem_init(struct virtio_gpu_device *vgdev, + struct virtio_gpu_shmem *shmem, + struct virtio_gpu_mem_entry **ents, + unsigned int *nents) { bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev); struct scatterlist *sg; int si, ret;
- ret = drm_gem_shmem_pin(&bo->base.base); + ret = drm_gem_shmem_pin(&shmem->base.base); if (ret < 0) return -EINVAL;
- bo->pages = drm_gem_shmem_get_sg_table(&bo->base.base); - if (!bo->pages) { - drm_gem_shmem_unpin(&bo->base.base); + shmem->pages = drm_gem_shmem_get_sg_table(&shmem->base.base); + if (!shmem->pages) { + drm_gem_shmem_unpin(&shmem->base.base); return -EINVAL; }
if (use_dma_api) { - bo->mapped = dma_map_sg(vgdev->vdev->dev.parent, - bo->pages->sgl, bo->pages->nents, + shmem->mapped = dma_map_sg(vgdev->vdev->dev.parent, + shmem->pages->sgl, shmem->pages->nents, DMA_TO_DEVICE); - *nents = bo->mapped; + *nents = shmem->mapped; } else { - *nents = bo->pages->nents; + *nents = shmem->pages->nents; }
*ents = kmalloc_array(*nents, sizeof(struct virtio_gpu_mem_entry), @@ -175,7 +175,7 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev, return -ENOMEM; }
- for_each_sg(bo->pages->sgl, sg, *nents, si) { + for_each_sg(shmem->pages->sgl, sg, *nents, si) { (*ents)[si].addr = cpu_to_le64(use_dma_api ? sg_dma_address(sg) : sg_phys(sg)); @@ -185,38 +185,38 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev, return 0; }
-int virtio_gpu_object_create(struct virtio_gpu_device *vgdev, +int virtio_gpu_shmem_create(struct virtio_gpu_device *vgdev, struct virtio_gpu_create_params *params, - struct virtio_gpu_object **bo_ptr, + struct virtio_gpu_shmem **shmem_ptr, struct virtio_gpu_fence *fence) { struct virtio_gpu_gem_array *array = NULL; struct drm_gem_shmem_object *shmem_obj; - struct virtio_gpu_object *bo; + struct virtio_gpu_shmem *shmem; struct virtio_gpu_mem_entry *ents; unsigned int nents; int ret;
- *bo_ptr = NULL; + *shmem_ptr = NULL;
params->size = roundup(params->size, PAGE_SIZE); shmem_obj = drm_gem_shmem_create(vgdev->ddev, params->size); if (IS_ERR(shmem_obj)) return PTR_ERR(shmem_obj); - bo = gem_to_virtio_gpu_obj(&shmem_obj->base); + shmem = to_virtio_gpu_shmem(&shmem_obj->base);
- ret = virtio_gpu_resource_id_get(vgdev, &bo->hw_res_handle); + ret = virtio_gpu_resource_id_get(vgdev, &shmem->hw_res_handle); if (ret < 0) goto err_free_gem;
- bo->dumb = params->dumb; + shmem->dumb = params->dumb;
if (fence) { ret = -ENOMEM; array = virtio_gpu_array_alloc(1); if (!array) goto err_put_id; - virtio_gpu_array_add_obj(array, &bo->base.base); + virtio_gpu_array_add_obj(array, &shmem->base.base);
ret = virtio_gpu_array_lock_resv(array); if (ret != 0) @@ -224,33 +224,33 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev, }
if (params->virgl) { - virtio_gpu_cmd_resource_create_3d(vgdev, bo, params, + virtio_gpu_cmd_resource_create_3d(vgdev, shmem, params, array, fence); } else { - virtio_gpu_cmd_create_resource(vgdev, bo, params, + virtio_gpu_cmd_create_resource(vgdev, shmem, params, array, fence); }
- ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents); + ret = virtio_gpu_shmem_init(vgdev, shmem, &ents, &nents); if (ret != 0) { - virtio_gpu_free_object(&shmem_obj->base); + virtio_gpu_shmem_free(&shmem_obj->base); return ret; }
- ret = virtio_gpu_object_attach(vgdev, bo, ents, nents); + ret = virtio_gpu_object_attach(vgdev, shmem, ents, nents); if (ret != 0) { - virtio_gpu_free_object(&shmem_obj->base); + virtio_gpu_shmem_free(&shmem_obj->base); return ret; }
virtio_gpu_notify(vgdev); - *bo_ptr = bo; + *shmem_ptr = shmem; return 0;
err_put_array: virtio_gpu_array_put_free(array); err_put_id: - virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle); + virtio_gpu_resource_id_put(vgdev, shmem->hw_res_handle); err_free_gem: drm_gem_shmem_free_object(&shmem_obj->base); return ret; diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c index fcff5d7a4cee..3f6a2ba8909f 100644 --- a/drivers/gpu/drm/virtio/virtgpu_plane.c +++ b/drivers/gpu/drm/virtio/virtgpu_plane.c @@ -107,8 +107,8 @@ static void virtio_gpu_update_dumb_bo(struct virtio_gpu_device *vgdev, struct drm_plane_state *state, struct drm_rect *rect) { - struct virtio_gpu_object *bo = - gem_to_virtio_gpu_obj(state->fb->obj[0]); + struct virtio_gpu_shmem *shmem = + to_virtio_gpu_shmem(state->fb->obj[0]); struct virtio_gpu_gem_array *array; uint32_t w = rect->x2 - rect->x1; uint32_t h = rect->y2 - rect->y1; @@ -120,7 +120,7 @@ static void virtio_gpu_update_dumb_bo(struct virtio_gpu_device *vgdev, array = virtio_gpu_array_alloc(1); if (!array) return; - virtio_gpu_array_add_obj(array, &bo->base.base); + virtio_gpu_array_add_obj(array, &shmem->base.base);
virtio_gpu_cmd_transfer_to_host_2d(vgdev, off, w, h, x, y, array, NULL); @@ -132,7 +132,7 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane, struct drm_device *dev = plane->dev; struct virtio_gpu_device *vgdev = dev->dev_private; struct virtio_gpu_output *output = NULL; - struct virtio_gpu_object *bo; + struct virtio_gpu_shmem *shmem; struct drm_rect rect;
if (plane->state->crtc) @@ -155,8 +155,8 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane, if (!drm_atomic_helper_damage_merged(old_state, plane->state, &rect)) return;
- bo = gem_to_virtio_gpu_obj(plane->state->fb->obj[0]); - if (bo->dumb) + shmem = to_virtio_gpu_shmem(plane->state->fb->obj[0]); + if (shmem->dumb) virtio_gpu_update_dumb_bo(vgdev, plane->state, &rect);
if (plane->state->fb != old_state->fb || @@ -165,7 +165,7 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane, plane->state->src_x != old_state->src_x || plane->state->src_y != old_state->src_y) { DRM_DEBUG("handle 0x%x, crtc %dx%d+%d+%d, src %dx%d+%d+%d\n", - bo->hw_res_handle, + shmem->hw_res_handle, plane->state->crtc_w, plane->state->crtc_h, plane->state->crtc_x, plane->state->crtc_y, plane->state->src_w >> 16, @@ -173,14 +173,14 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane, plane->state->src_x >> 16, plane->state->src_y >> 16); virtio_gpu_cmd_set_scanout(vgdev, output->index, - bo->hw_res_handle, + shmem->hw_res_handle, plane->state->src_w >> 16, plane->state->src_h >> 16, plane->state->src_x >> 16, plane->state->src_y >> 16); }
- virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, + virtio_gpu_cmd_resource_flush(vgdev, shmem->hw_res_handle, rect.x1, rect.y1, rect.x2 - rect.x1, @@ -194,14 +194,14 @@ static int virtio_gpu_cursor_prepare_fb(struct drm_plane *plane, struct drm_device *dev = plane->dev; struct virtio_gpu_device *vgdev = dev->dev_private; struct virtio_gpu_framebuffer *vgfb; - struct virtio_gpu_object *bo; + struct virtio_gpu_shmem *shmem;
if (!new_state->fb) return 0;
vgfb = to_virtio_gpu_framebuffer(new_state->fb); - bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]); - if (bo && bo->dumb && (plane->state->fb != new_state->fb)) { + shmem = to_virtio_gpu_shmem(vgfb->base.obj[0]); + if (shmem && shmem->dumb && (plane->state->fb != new_state->fb)) { vgfb->fence = virtio_gpu_fence_alloc(vgdev); if (!vgfb->fence) return -ENOMEM; @@ -232,7 +232,7 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane, struct virtio_gpu_device *vgdev = dev->dev_private; struct virtio_gpu_output *output = NULL; struct virtio_gpu_framebuffer *vgfb; - struct virtio_gpu_object *bo = NULL; + struct virtio_gpu_shmem *shmem = NULL; uint32_t handle;
if (plane->state->crtc) @@ -244,13 +244,13 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
if (plane->state->fb) { vgfb = to_virtio_gpu_framebuffer(plane->state->fb); - bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]); - handle = bo->hw_res_handle; + shmem = to_virtio_gpu_shmem(vgfb->base.obj[0]); + handle = shmem->hw_res_handle; } else { handle = 0; }
- if (bo && bo->dumb && (plane->state->fb != old_state->fb)) { + if (shmem && shmem->dumb && (plane->state->fb != old_state->fb)) { /* new cursor -- update & wait */ struct virtio_gpu_gem_array *array;
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c index 878d07b75b7f..9f92943af97e 100644 --- a/drivers/gpu/drm/virtio/virtgpu_vq.c +++ b/drivers/gpu/drm/virtio/virtgpu_vq.c @@ -498,7 +498,7 @@ static void virtio_gpu_queue_cursor(struct virtio_gpu_device *vgdev,
/* create a basic resource */ void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev, - struct virtio_gpu_object *bo, + struct virtio_gpu_shmem *shmem, struct virtio_gpu_create_params *params, struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence) @@ -511,13 +511,13 @@ void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev, vbuf->array = array;
cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_CREATE_2D); - cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle); + cmd_p->resource_id = cpu_to_le32(shmem->hw_res_handle); cmd_p->format = cpu_to_le32(params->format); cmd_p->width = cpu_to_le32(params->width); cmd_p->height = cpu_to_le32(params->height);
virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence); - bo->created = true; + shmem->created = true; }
static void virtio_gpu_cmd_unref_cb(struct virtio_gpu_device *vgdev, @@ -529,9 +529,9 @@ static void virtio_gpu_cmd_unref_cb(struct virtio_gpu_device *vgdev, vbuf->resp_cb_data = NULL;
if (obj && virtio_gpu_is_shmem(obj)) { - struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); + struct virtio_gpu_shmem *shmem = to_virtio_gpu_shmem(obj);
- virtio_gpu_cleanup_object(bo); + virtio_gpu_cleanup_shmem(shmem); } }
@@ -603,14 +603,14 @@ void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev, struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence) { - struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(array->objs[0]); + struct virtio_gpu_shmem *shmem = to_virtio_gpu_shmem(array->objs[0]); struct virtio_gpu_transfer_to_host_2d *cmd_p; struct virtio_gpu_vbuffer *vbuf; bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev);
if (use_dma_api) dma_sync_sg_for_device(vgdev->vdev->dev.parent, - bo->pages->sgl, bo->pages->nents, + shmem->pages->sgl, shmem->pages->nents, DMA_TO_DEVICE);
cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); @@ -618,7 +618,7 @@ void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev, vbuf->array = array;
cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D); - cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle); + cmd_p->resource_id = cpu_to_le32(shmem->hw_res_handle); cmd_p->offset = cpu_to_le64(offset); cmd_p->r.width = cpu_to_le32(width); cmd_p->r.height = cpu_to_le32(height); @@ -980,7 +980,7 @@ void virtio_gpu_cmd_context_detach_resource(struct virtio_gpu_device *vgdev,
void virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev, - struct virtio_gpu_object *bo, + struct virtio_gpu_shmem *shmem, struct virtio_gpu_create_params *params, struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence) @@ -993,7 +993,7 @@ virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev, vbuf->array = array;
cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_CREATE_3D); - cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle); + cmd_p->resource_id = cpu_to_le32(shmem->hw_res_handle); cmd_p->format = cpu_to_le32(params->format); cmd_p->width = cpu_to_le32(params->width); cmd_p->height = cpu_to_le32(params->height); @@ -1008,7 +1008,7 @@ virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev,
virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence);
- bo->created = true; + shmem->created = true; }
void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev, @@ -1018,14 +1018,14 @@ void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev, struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence) { - struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(array->objs[0]); + struct virtio_gpu_shmem *shmem = to_virtio_gpu_shmem(array->objs[0]); struct virtio_gpu_transfer_host_3d *cmd_p; struct virtio_gpu_vbuffer *vbuf; bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev);
if (use_dma_api) dma_sync_sg_for_device(vgdev->vdev->dev.parent, - bo->pages->sgl, bo->pages->nents, + shmem->pages->sgl, shmem->pages->nents, DMA_TO_DEVICE);
cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); @@ -1035,7 +1035,7 @@ void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D); cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id); - cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle); + cmd_p->resource_id = cpu_to_le32(shmem->hw_res_handle); convert_to_hw_box(&cmd_p->box, box); cmd_p->offset = cpu_to_le64(offset); cmd_p->level = cpu_to_le32(level); @@ -1050,7 +1050,7 @@ void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev, struct virtio_gpu_gem_array *array, struct virtio_gpu_fence *fence) { - struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(array->objs[0]); + struct virtio_gpu_shmem *shmem = to_virtio_gpu_shmem(array->objs[0]); struct virtio_gpu_transfer_host_3d *cmd_p; struct virtio_gpu_vbuffer *vbuf;
@@ -1061,7 +1061,7 @@ void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev,
cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D); cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id); - cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle); + cmd_p->resource_id = cpu_to_le32(shmem->hw_res_handle); convert_to_hw_box(&cmd_p->box, box); cmd_p->offset = cpu_to_le64(offset); cmd_p->level = cpu_to_le32(level); @@ -1093,11 +1093,11 @@ void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev, }
int virtio_gpu_object_attach(struct virtio_gpu_device *vgdev, - struct virtio_gpu_object *obj, + struct virtio_gpu_shmem *shmem, struct virtio_gpu_mem_entry *ents, unsigned int nents) { - virtio_gpu_cmd_resource_attach_backing(vgdev, obj->hw_res_handle, + virtio_gpu_cmd_resource_attach_backing(vgdev, shmem->hw_res_handle, ents, nents, NULL); return 0; }
On Wed, Feb 26, 2020 at 04:25:53PM -0800, Gurchetan Singh wrote:
The main motivation behind this is to have eventually have something like this:
struct virtio_gpu_shmem { struct drm_gem_shmem_object base; uint32_t hw_res_handle; struct sg_table *pages; (...) };
struct virtio_gpu_vram { struct drm_gem_object base; // or *drm_gem_vram_object uint32_t hw_res_handle; {offset, range}; (...) };
Given that we probably will not use drm_gem_vram_object and drm_gem_shmem_object->base is drm_gem_object I think we can go this route:
struct virtgpu_object { struct drm_gem_shmem_object base; enum object_type; uint32_t hw_res_handle; [ ... ] };
struct virtgpu_object_shmem { struct virtgpu_object base; struct sg_table *pages; [ ... ] };
struct virtgpu_object_hostmem { struct virtgpu_object base; {offset, range}; (...) };
Then have helpers to get virtgpu_object_hostmem / virtgpu_object_shmem from virtgpu_object via container_of which also sanity-check object_type (maybe we can check drm_gem_object->ops for that instead of adding a new field).
Sending this out to solicit feedback on this approach. Whichever approach we decide, landing incremental changes to internal structures is reduces rebasing costs and avoids mega-changes.
That certainly makes sense.
cheers, Gerd
On Wed, Feb 26, 2020 at 11:23 PM Gerd Hoffmann kraxel@redhat.com wrote:
On Wed, Feb 26, 2020 at 04:25:53PM -0800, Gurchetan Singh wrote:
The main motivation behind this is to have eventually have something like this:
struct virtio_gpu_shmem { struct drm_gem_shmem_object base; uint32_t hw_res_handle; struct sg_table *pages; (...) };
struct virtio_gpu_vram { struct drm_gem_object base; // or *drm_gem_vram_object uint32_t hw_res_handle; {offset, range}; (...) };
Given that we probably will not use drm_gem_vram_object
Makes sense not to use drm_gem_vram_object for now.
and drm_gem_shmem_object->base is drm_gem_object I think we can go this route:
struct virtgpu_object {
Yeah, using "virtgpu_" rather than "virtio_gpu" makes sense. A bit less wordy, though the current code is based on "virtio_gpu".
struct drm_gem_shmem_object base; enum object_type; uint32_t hw_res_handle; [ ... ]
};
struct virtgpu_object_shmem { struct virtgpu_object base; struct sg_table *pages; [ ... ] };
struct virtgpu_object_hostmem { struct virtgpu_object base; {offset, range}; (...)
I'm a kernel newbie, so it's not obvious to me why struct drm_gem_shmem_object would be a base class for struct virtgpu_object_hostmem?
The core utility of drm_gem_shmem_helper seems to get pages, pinning pages, and releasing pages. But with host-mem, we won't have an array of pages, but just an (offset, length) -- which drm_gem_shmem_helper function is useful here?
Side question: is drm_gem_object_funcs.vmap(..) / drm_gem_object_funcs.vunmap(..) even possible for hostmem?
P.S:
The proof of concept hostmem implementation is on Gitlab [1][2]. Some notes:
- io_remap_pfn_range to get a userspace mapping - calls drm_gem_private_object_init(..) rather than drm_gem_object_init [which sets up shmemfs backing store].
[1] https://gitlab.freedesktop.org/virgl/drm-misc-next/-/blob/virtio-gpu-next/dr... [2] https://gitlab.freedesktop.org/virgl/drm-misc-next/-/blob/virtio-gpu-next/dr...
};
Then have helpers to get virtgpu_object_hostmem / virtgpu_object_shmem from virtgpu_object via container_of which also sanity-check object_type (maybe we can check drm_gem_object->ops for that instead of adding a new field).
Sending this out to solicit feedback on this approach. Whichever approach we decide, landing incremental changes to internal structures is reduces rebasing costs and avoids mega-changes.
That certainly makes sense.
cheers, Gerd
Hi,
struct virtgpu_object {
Yeah, using "virtgpu_" rather than "virtio_gpu" makes sense.
It wasn't my intention to suggest a rename. It's just that the kernel is a bit inconsistent here and I picked the wrong name here. Most places use virtio_gpu but some use virtgpu (file names, ioctl api).
struct virtgpu_object_hostmem { struct virtgpu_object base; {offset, range}; (...)
I'm a kernel newbie, so it's not obvious to me why struct drm_gem_shmem_object would be a base class for struct virtgpu_object_hostmem?
I think it is easier to just continue using virtio_gpu_object in most places and cast to virtio_gpu_object_{shmem,hostmem} only if needed. Makes it easier to deal with common fields like hw_res_handle.
In the hostmem case we would simply not use the drm_gem_shmem_object fields except for drm_gem_shmem_object.base (which is drm_gem_object).
Side question: is drm_gem_object_funcs.vmap(..) / drm_gem_object_funcs.vunmap(..) even possible for hostmem?
Sure. Using ioremap should work, after asking the host to map the object at some location in the pci bar.
cheers, Gerd
dri-devel@lists.freedesktop.org