A batch of qxl fixes, mostly around locking.
v2:
 - use 'vmap' instead of 'kmap'.
 - rework cursor update workflow.
Gerd Hoffmann (11):
  drm/qxl: properly handle device init failures
  drm/qxl: more fence wait rework
  drm/qxl: use ttm bo priorities
  drm/qxl: fix lockdep issue in qxl_alloc_release_reserved
  drm/qxl: rename qxl_bo_kmap -> qxl_bo_vmap_locked
  drm/qxl: add qxl_bo_vmap/qxl_bo_vunmap
  drm/qxl: fix prime vmap
  drm/qxl: fix monitors object vmap
  drm/qxl: move shadow handling to new qxl_prepare_shadow()
  drm/qxl: rework cursor plane
  drm/qxl: add lock asserts to qxl_bo_vmap_locked + qxl_bo_vunmap_locked
 drivers/gpu/drm/qxl/qxl_object.h  |   7 +-
 drivers/gpu/drm/qxl/qxl_cmd.c     |   2 +-
 drivers/gpu/drm/qxl/qxl_display.c | 319 ++++++++++++++++--------------
 drivers/gpu/drm/qxl/qxl_draw.c    |   8 +-
 drivers/gpu/drm/qxl/qxl_gem.c     |   2 +-
 drivers/gpu/drm/qxl/qxl_image.c   |   2 +-
 drivers/gpu/drm/qxl/qxl_kms.c     |   4 +
 drivers/gpu/drm/qxl/qxl_object.c  |  53 ++++-
 drivers/gpu/drm/qxl/qxl_prime.c   |   4 +-
 drivers/gpu/drm/qxl/qxl_release.c |  39 ++--
 10 files changed, 261 insertions(+), 179 deletions(-)
Specifically, do not try to release resources which were not allocated in the first place.
Cc: Tong Zhang ztong0001@gmail.com
Tested-by: Tong Zhang ztong0001@gmail.com
Signed-off-by: Gerd Hoffmann kraxel@redhat.com
Acked-by: Thomas Zimmermann tzimmermann@suse.de
---
 drivers/gpu/drm/qxl/qxl_display.c | 3 +++
 drivers/gpu/drm/qxl/qxl_kms.c     | 4 ++++
 2 files changed, 7 insertions(+)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index c326412136c5..ec50d2cfd4e1 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -1183,6 +1183,9 @@ int qxl_destroy_monitors_object(struct qxl_device *qdev)
 {
        int ret;
 
+       if (!qdev->monitors_config_bo)
+               return 0;
+
        qdev->monitors_config = NULL;
        qdev->ram_header->monitors_config = 0;
 
diff --git a/drivers/gpu/drm/qxl/qxl_kms.c b/drivers/gpu/drm/qxl/qxl_kms.c
index 66d74aaaee06..4dc5ad13f12c 100644
--- a/drivers/gpu/drm/qxl/qxl_kms.c
+++ b/drivers/gpu/drm/qxl/qxl_kms.c
@@ -288,6 +288,10 @@ void qxl_device_fini(struct qxl_device *qdev)
 {
        int cur_idx;
 
+       /* check if qxl_device_init() was successful (gc_work is initialized last) */
+       if (!qdev->gc_work.func)
+               return;
+
        for (cur_idx = 0; cur_idx < 3; cur_idx++) {
                if (!qdev->current_release_bo[cur_idx])
                        continue;
Move the qxl_io_notify_oom() call into the wait condition. That way the driver will call it again if one call wasn't enough.

This also allows removing the extra dma_fence_is_signaled() check and the goto.
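A note on the idiom, condensed from the diff below: the comma expression (qxl_io_notify_oom(qdev), 0) always evaluates to 0, so it never completes the wait on its own; it only makes sure the host is notified again each time the condition is re-checked and the fence is still unsignaled.

        /* The condition is re-evaluated on every wakeup: if the fence is still
         * not signaled, notify the host about the OOM situation once more.
         * The comma expression yields 0, so it never satisfies the wait by
         * itself. */
        if (!wait_event_timeout(qdev->release_event,
                                (dma_fence_is_signaled(fence) ||
                                 (qxl_io_notify_oom(qdev), 0)),
                                timeout))
                return 0;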
Fixes: 5a838e5d5825 ("drm/qxl: simplify qxl_fence_wait")
Signed-off-by: Gerd Hoffmann kraxel@redhat.com
---
 drivers/gpu/drm/qxl/qxl_release.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/qxl/qxl_release.c b/drivers/gpu/drm/qxl/qxl_release.c
index 6ed673d75f9f..579c6de10c8e 100644
--- a/drivers/gpu/drm/qxl/qxl_release.c
+++ b/drivers/gpu/drm/qxl/qxl_release.c
@@ -62,16 +62,12 @@ static long qxl_fence_wait(struct dma_fence *fence, bool intr,
 
        qdev = container_of(fence->lock, struct qxl_device, release_lock);
 
-       if (dma_fence_is_signaled(fence))
-               goto signaled;
-
-       qxl_io_notify_oom(qdev);
        if (!wait_event_timeout(qdev->release_event,
-                               dma_fence_is_signaled(fence),
+                               (dma_fence_is_signaled(fence) ||
+                                (qxl_io_notify_oom(qdev), 0)),
                                timeout))
                return 0;
 
-signaled:
        cur = jiffies;
        if (time_after(cur, end))
                return 0;
On 17.02.21 13:32, Gerd Hoffmann wrote:
Move the qxl_io_notify_oom() call into the wait condition. That way the driver will call it again if one call wasn't enough.

This also allows removing the extra dma_fence_is_signaled() check and the goto.
Fixes: 5a838e5d5825 ("drm/qxl: simplify qxl_fence_wait") Signed-off-by: Gerd Hoffmann kraxel@redhat.com
Acked-by: Thomas Zimmermann tzimmermann@suse.de
Allow setting priorities for buffer objects. Use priority 1 for surface and cursor command releases, and priority 0 for drawing command releases. That way the short-lived drawing commands are first in line when it comes to eviction, making it *much* less likely that ttm_bo_mem_force_space() picks something which can't be evicted and throws an error after waiting a while without success.
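Condensed, the mapping added to qxl_alloc_release_reserved() is (sketch only; the patch itself uses an if/else chain):

        /* TTM considers low-priority BOs for eviction first, so the
         * short-lived drawable releases get priority 0 while the
         * surface/cursor command releases get priority 1. */
        switch (type) {
        case QXL_RELEASE_DRAWABLE:
                cur_idx = 0;
                priority = 0;   /* evicted first */
                break;
        case QXL_RELEASE_SURFACE_CMD:
                cur_idx = 1;
                priority = 1;
                break;
        case QXL_RELEASE_CURSOR_CMD:
                cur_idx = 2;
                priority = 1;
                break;
        default:
                DRM_ERROR("got illegal type: %d\n", type);
                return -EINVAL;
        }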
Signed-off-by: Gerd Hoffmann kraxel@redhat.com
---
 drivers/gpu/drm/qxl/qxl_object.h  |  1 +
 drivers/gpu/drm/qxl/qxl_cmd.c     |  2 +-
 drivers/gpu/drm/qxl/qxl_display.c |  4 ++--
 drivers/gpu/drm/qxl/qxl_gem.c     |  2 +-
 drivers/gpu/drm/qxl/qxl_object.c  |  5 +++--
 drivers/gpu/drm/qxl/qxl_release.c | 18 ++++++++++++------
 6 files changed, 20 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h index e60a8f88e226..dc1659e717f1 100644 --- a/drivers/gpu/drm/qxl/qxl_object.h +++ b/drivers/gpu/drm/qxl/qxl_object.h @@ -61,6 +61,7 @@ static inline u64 qxl_bo_mmap_offset(struct qxl_bo *bo) extern int qxl_bo_create(struct qxl_device *qdev, unsigned long size, bool kernel, bool pinned, u32 domain, + u32 priority, struct qxl_surface *surf, struct qxl_bo **bo_ptr); extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map); diff --git a/drivers/gpu/drm/qxl/qxl_cmd.c b/drivers/gpu/drm/qxl/qxl_cmd.c index 7e22a81bfb36..7b00c955cd82 100644 --- a/drivers/gpu/drm/qxl/qxl_cmd.c +++ b/drivers/gpu/drm/qxl/qxl_cmd.c @@ -269,7 +269,7 @@ int qxl_alloc_bo_reserved(struct qxl_device *qdev, int ret;
ret = qxl_bo_create(qdev, size, false /* not kernel - device */, - false, QXL_GEM_DOMAIN_VRAM, NULL, &bo); + false, QXL_GEM_DOMAIN_VRAM, 0, NULL, &bo); if (ret) { DRM_ERROR("failed to allocate VRAM BO\n"); return ret; diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c index ec50d2cfd4e1..a1b5cc5918bc 100644 --- a/drivers/gpu/drm/qxl/qxl_display.c +++ b/drivers/gpu/drm/qxl/qxl_display.c @@ -799,8 +799,8 @@ static int qxl_plane_prepare_fb(struct drm_plane *plane, qdev->dumb_shadow_bo = NULL; } qxl_bo_create(qdev, surf.height * surf.stride, - true, true, QXL_GEM_DOMAIN_SURFACE, &surf, - &qdev->dumb_shadow_bo); + true, true, QXL_GEM_DOMAIN_SURFACE, 0, + &surf, &qdev->dumb_shadow_bo); } if (user_bo->shadow != qdev->dumb_shadow_bo) { if (user_bo->shadow) { diff --git a/drivers/gpu/drm/qxl/qxl_gem.c b/drivers/gpu/drm/qxl/qxl_gem.c index 48e096285b4c..a08da0bd9098 100644 --- a/drivers/gpu/drm/qxl/qxl_gem.c +++ b/drivers/gpu/drm/qxl/qxl_gem.c @@ -55,7 +55,7 @@ int qxl_gem_object_create(struct qxl_device *qdev, int size, /* At least align on page size */ if (alignment < PAGE_SIZE) alignment = PAGE_SIZE; - r = qxl_bo_create(qdev, size, kernel, false, initial_domain, surf, &qbo); + r = qxl_bo_create(qdev, size, kernel, false, initial_domain, 0, surf, &qbo); if (r) { if (r != -ERESTARTSYS) DRM_ERROR( diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c index 705b51535492..7eada4ad217b 100644 --- a/drivers/gpu/drm/qxl/qxl_object.c +++ b/drivers/gpu/drm/qxl/qxl_object.c @@ -103,8 +103,8 @@ static const struct drm_gem_object_funcs qxl_object_funcs = { .print_info = drm_gem_ttm_print_info, };
-int qxl_bo_create(struct qxl_device *qdev, - unsigned long size, bool kernel, bool pinned, u32 domain, +int qxl_bo_create(struct qxl_device *qdev, unsigned long size, + bool kernel, bool pinned, u32 domain, u32 priority, struct qxl_surface *surf, struct qxl_bo **bo_ptr) { @@ -137,6 +137,7 @@ int qxl_bo_create(struct qxl_device *qdev,
qxl_ttm_placement_from_domain(bo, domain);
+ bo->tbo.priority = priority; r = ttm_bo_init_reserved(&qdev->mman.bdev, &bo->tbo, size, type, &bo->placement, 0, &ctx, NULL, NULL, &qxl_ttm_bo_destroy); diff --git a/drivers/gpu/drm/qxl/qxl_release.c b/drivers/gpu/drm/qxl/qxl_release.c index 579c6de10c8e..716d706ca7f0 100644 --- a/drivers/gpu/drm/qxl/qxl_release.c +++ b/drivers/gpu/drm/qxl/qxl_release.c @@ -160,11 +160,12 @@ qxl_release_free(struct qxl_device *qdev, }
static int qxl_release_bo_alloc(struct qxl_device *qdev, - struct qxl_bo **bo) + struct qxl_bo **bo, + u32 priority) { /* pin releases bo's they are too messy to evict */ return qxl_bo_create(qdev, PAGE_SIZE, false, true, - QXL_GEM_DOMAIN_VRAM, NULL, bo); + QXL_GEM_DOMAIN_VRAM, priority, NULL, bo); }
int qxl_release_list_add(struct qxl_release *release, struct qxl_bo *bo) @@ -287,13 +288,18 @@ int qxl_alloc_release_reserved(struct qxl_device *qdev, unsigned long size, int ret = 0; union qxl_release_info *info; int cur_idx; + u32 priority;
- if (type == QXL_RELEASE_DRAWABLE) + if (type == QXL_RELEASE_DRAWABLE) { cur_idx = 0; - else if (type == QXL_RELEASE_SURFACE_CMD) + priority = 0; + } else if (type == QXL_RELEASE_SURFACE_CMD) { cur_idx = 1; - else if (type == QXL_RELEASE_CURSOR_CMD) + priority = 1; + } else if (type == QXL_RELEASE_CURSOR_CMD) { cur_idx = 2; + priority = 1; + } else { DRM_ERROR("got illegal type: %d\n", type); return -EINVAL; @@ -315,7 +321,7 @@ int qxl_alloc_release_reserved(struct qxl_device *qdev, unsigned long size, qdev->current_release_bo[cur_idx] = NULL; } if (!qdev->current_release_bo[cur_idx]) { - ret = qxl_release_bo_alloc(qdev, &qdev->current_release_bo[cur_idx]); + ret = qxl_release_bo_alloc(qdev, &qdev->current_release_bo[cur_idx], priority); if (ret) { mutex_unlock(&qdev->release_mutex); qxl_release_free(qdev, *release);
On 17.02.21 13:32, Gerd Hoffmann wrote:
Allow setting priorities for buffer objects. Use priority 1 for surface and cursor command releases, and priority 0 for drawing command releases. That way the short-lived drawing commands are first in line when it comes to eviction, making it *much* less likely that ttm_bo_mem_force_space() picks something which can't be evicted and throws an error after waiting a while without success.
Signed-off-by: Gerd Hoffmann kraxel@redhat.com
Acked-by: Thomas Zimmermann tzimmermann@suse.de
Call qxl_bo_unpin (which does a reservation) without holding the release_mutex lock. This fixes a lockdep warning (a correct one) about a possible deadlock.
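The shape of the fix, condensed from the diff below (release_bo_is_full stands in for the real check against releases_per_bo[cur_idx]):

        struct qxl_bo *free_bo = NULL;

        mutex_lock(&qdev->release_mutex);
        if (release_bo_is_full) {
                /* only detach the BO while holding the mutex ... */
                free_bo = qdev->current_release_bo[cur_idx];
                qdev->current_release_bo_offset[cur_idx] = 0;
                qdev->current_release_bo[cur_idx] = NULL;
        }
        /* ... allocate a replacement, hand out the release slot ... */
        mutex_unlock(&qdev->release_mutex);

        /* ... and unpin/unref (which take the reservation) outside the mutex */
        if (free_bo) {
                qxl_bo_unpin(free_bo);
                qxl_bo_unref(&free_bo);
        }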
Fixes: 65ffea3c6e73 ("drm/qxl: unpin release objects")
Signed-off-by: Gerd Hoffmann kraxel@redhat.com
---
 drivers/gpu/drm/qxl/qxl_release.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/qxl/qxl_release.c b/drivers/gpu/drm/qxl/qxl_release.c index 716d706ca7f0..f5845c96d414 100644 --- a/drivers/gpu/drm/qxl/qxl_release.c +++ b/drivers/gpu/drm/qxl/qxl_release.c @@ -283,7 +283,7 @@ int qxl_alloc_release_reserved(struct qxl_device *qdev, unsigned long size, int type, struct qxl_release **release, struct qxl_bo **rbo) { - struct qxl_bo *bo; + struct qxl_bo *bo, *free_bo = NULL; int idr_ret; int ret = 0; union qxl_release_info *info; @@ -315,8 +315,7 @@ int qxl_alloc_release_reserved(struct qxl_device *qdev, unsigned long size,
mutex_lock(&qdev->release_mutex); if (qdev->current_release_bo_offset[cur_idx] + 1 >= releases_per_bo[cur_idx]) { - qxl_bo_unpin(qdev->current_release_bo[cur_idx]); - qxl_bo_unref(&qdev->current_release_bo[cur_idx]); + free_bo = qdev->current_release_bo[cur_idx]; qdev->current_release_bo_offset[cur_idx] = 0; qdev->current_release_bo[cur_idx] = NULL; } @@ -324,6 +323,10 @@ int qxl_alloc_release_reserved(struct qxl_device *qdev, unsigned long size, ret = qxl_release_bo_alloc(qdev, &qdev->current_release_bo[cur_idx], priority); if (ret) { mutex_unlock(&qdev->release_mutex); + if (free_bo) { + qxl_bo_unpin(free_bo); + qxl_bo_unref(&free_bo); + } qxl_release_free(qdev, *release); return ret; } @@ -339,6 +342,10 @@ int qxl_alloc_release_reserved(struct qxl_device *qdev, unsigned long size, *rbo = bo;
mutex_unlock(&qdev->release_mutex); + if (free_bo) { + qxl_bo_unpin(free_bo); + qxl_bo_unref(&free_bo); + }
ret = qxl_release_list_add(*release, bo); qxl_bo_unref(&bo);
On 17.02.21 13:32, Gerd Hoffmann wrote:
Call qxl_bo_unpin (which does a reservation) without holding the release_mutex lock. This fixes a lockdep warning (a correct one) about a possible deadlock.
Fixes: 65ffea3c6e73 ("drm/qxl: unpin release objects") Signed-off-by: Gerd Hoffmann kraxel@redhat.com
Acked-by: Thomas Zimmermann tzimmermann@suse.de
Append _locked to make clear that these functions should be called with reserved BOs only. While at it, also rename kmap -> vmap.
No functional change.
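The naming encodes the expected calling convention, roughly (sketch, using the driver's existing qxl_bo_reserve()/qxl_bo_unreserve() helpers):

        struct dma_buf_map map;
        int ret;

        /* _locked variants expect the caller to hold the reservation */
        ret = qxl_bo_reserve(bo);
        if (ret)
                return ret;

        ret = qxl_bo_vmap_locked(bo, &map);
        if (!ret) {
                /* ... access map.vaddr or map.vaddr_iomem ... */
                qxl_bo_vunmap_locked(bo);
        }

        qxl_bo_unreserve(bo);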
Signed-off-by: Gerd Hoffmann kraxel@redhat.com
---
 drivers/gpu/drm/qxl/qxl_object.h  |  4 ++--
 drivers/gpu/drm/qxl/qxl_display.c | 14 +++++++-------
 drivers/gpu/drm/qxl/qxl_draw.c    |  8 ++++----
 drivers/gpu/drm/qxl/qxl_image.c   |  2 +-
 drivers/gpu/drm/qxl/qxl_object.c  |  8 ++++----
 drivers/gpu/drm/qxl/qxl_prime.c   |  4 ++--
 6 files changed, 20 insertions(+), 20 deletions(-)
diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h index dc1659e717f1..2495e5cdf353 100644 --- a/drivers/gpu/drm/qxl/qxl_object.h +++ b/drivers/gpu/drm/qxl/qxl_object.h @@ -64,8 +64,8 @@ extern int qxl_bo_create(struct qxl_device *qdev, u32 priority, struct qxl_surface *surf, struct qxl_bo **bo_ptr); -extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map); -extern void qxl_bo_kunmap(struct qxl_bo *bo); +int qxl_bo_vmap_locked(struct qxl_bo *bo, struct dma_buf_map *map); +void qxl_bo_vunmap_locked(struct qxl_bo *bo); void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset); void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map); extern struct qxl_bo *qxl_bo_ref(struct qxl_bo *bo); diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c index a1b5cc5918bc..bfcc93089a94 100644 --- a/drivers/gpu/drm/qxl/qxl_display.c +++ b/drivers/gpu/drm/qxl/qxl_display.c @@ -600,7 +600,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane, user_bo = gem_to_qxl_bo(obj);
/* pinning is done in the prepare/cleanup framevbuffer */ - ret = qxl_bo_kmap(user_bo, &user_map); + ret = qxl_bo_vmap_locked(user_bo, &user_map); if (ret) goto out_free_release; user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */ @@ -619,7 +619,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane, if (ret) goto out_unpin;
- ret = qxl_bo_kmap(cursor_bo, &cursor_map); + ret = qxl_bo_vmap_locked(cursor_bo, &cursor_map); if (ret) goto out_backoff; if (cursor_map.is_iomem) /* TODO: Use mapping abstraction properly */ @@ -638,8 +638,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane, cursor->chunk.prev_chunk = 0; cursor->chunk.data_size = size; memcpy(cursor->chunk.data, user_ptr, size); - qxl_bo_kunmap(cursor_bo); - qxl_bo_kunmap(user_bo); + qxl_bo_vunmap_locked(cursor_bo); + qxl_bo_vunmap_locked(user_bo);
cmd = (struct qxl_cursor_cmd *) qxl_release_map(qdev, release); cmd->u.set.visible = 1; @@ -681,7 +681,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane, out_free_bo: qxl_bo_unref(&cursor_bo); out_kunmap: - qxl_bo_kunmap(user_bo); + qxl_bo_vunmap_locked(user_bo); out_free_release: qxl_release_free(qdev, release); return; @@ -1163,7 +1163,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev) if (ret) return ret;
- qxl_bo_kmap(qdev->monitors_config_bo, &map); + qxl_bo_vmap_locked(qdev->monitors_config_bo, &map);
qdev->monitors_config = qdev->monitors_config_bo->kptr; qdev->ram_header->monitors_config = @@ -1189,7 +1189,7 @@ int qxl_destroy_monitors_object(struct qxl_device *qdev) qdev->monitors_config = NULL; qdev->ram_header->monitors_config = 0;
- qxl_bo_kunmap(qdev->monitors_config_bo); + qxl_bo_vunmap_locked(qdev->monitors_config_bo); ret = qxl_bo_unpin(qdev->monitors_config_bo); if (ret) return ret; diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c index 7b7acb910780..7d27891e87fa 100644 --- a/drivers/gpu/drm/qxl/qxl_draw.c +++ b/drivers/gpu/drm/qxl/qxl_draw.c @@ -48,7 +48,7 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev, struct qxl_clip_rects *dev_clips; int ret;
- ret = qxl_bo_kmap(clips_bo, &map); + ret = qxl_bo_vmap_locked(clips_bo, &map); if (ret) return NULL; dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */ @@ -202,7 +202,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev, if (ret) goto out_release_backoff;
- ret = qxl_bo_kmap(bo, &surface_map); + ret = qxl_bo_vmap_locked(bo, &surface_map); if (ret) goto out_release_backoff; surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */ @@ -210,7 +210,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev, ret = qxl_image_init(qdev, release, dimage, surface_base, left - dumb_shadow_offset, top, width, height, depth, stride); - qxl_bo_kunmap(bo); + qxl_bo_vunmap_locked(bo); if (ret) goto out_release_backoff;
@@ -247,7 +247,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev, rects[i].top = clips_ptr->y1; rects[i].bottom = clips_ptr->y2; } - qxl_bo_kunmap(clips_bo); + qxl_bo_vunmap_locked(clips_bo);
qxl_release_fence_buffer_objects(release); qxl_push_command_ring_release(qdev, release, QXL_CMD_DRAW, false); diff --git a/drivers/gpu/drm/qxl/qxl_image.c b/drivers/gpu/drm/qxl/qxl_image.c index 60ab7151b84d..ffff54e5fb31 100644 --- a/drivers/gpu/drm/qxl/qxl_image.c +++ b/drivers/gpu/drm/qxl/qxl_image.c @@ -186,7 +186,7 @@ qxl_image_init_helper(struct qxl_device *qdev, } } } - qxl_bo_kunmap(chunk_bo); + qxl_bo_vunmap_locked(chunk_bo);
image_bo = dimage->bo; ptr = qxl_bo_kmap_atomic_page(qdev, image_bo, 0); diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c index 7eada4ad217b..f4a015381a7f 100644 --- a/drivers/gpu/drm/qxl/qxl_object.c +++ b/drivers/gpu/drm/qxl/qxl_object.c @@ -155,7 +155,7 @@ int qxl_bo_create(struct qxl_device *qdev, unsigned long size, return 0; }
-int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map) +int qxl_bo_vmap_locked(struct qxl_bo *bo, struct dma_buf_map *map) { int r;
@@ -203,7 +203,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, return rptr; }
- ret = qxl_bo_kmap(bo, &bo_map); + ret = qxl_bo_vmap_locked(bo, &bo_map); if (ret) return NULL; rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */ @@ -212,7 +212,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, return rptr; }
-void qxl_bo_kunmap(struct qxl_bo *bo) +void qxl_bo_vunmap_locked(struct qxl_bo *bo) { if (bo->kptr == NULL) return; @@ -233,7 +233,7 @@ void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, io_mapping_unmap_atomic(pmap); return; fallback: - qxl_bo_kunmap(bo); + qxl_bo_vunmap_locked(bo); }
void qxl_bo_unref(struct qxl_bo **bo) diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c index 4aa949799446..2bebe662516f 100644 --- a/drivers/gpu/drm/qxl/qxl_prime.c +++ b/drivers/gpu/drm/qxl/qxl_prime.c @@ -59,7 +59,7 @@ int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) struct qxl_bo *bo = gem_to_qxl_bo(obj); int ret;
- ret = qxl_bo_kmap(bo, map); + ret = qxl_bo_vmap_locked(bo, map); if (ret < 0) return ret;
@@ -71,7 +71,7 @@ void qxl_gem_prime_vunmap(struct drm_gem_object *obj, { struct qxl_bo *bo = gem_to_qxl_bo(obj);
- qxl_bo_kunmap(bo); + qxl_bo_vunmap_locked(bo); }
int qxl_gem_prime_mmap(struct drm_gem_object *obj,
On 17.02.21 13:32, Gerd Hoffmann wrote:
Append _locked to make clear that these functions should be called with reserved BOs only. While at it, also rename kmap -> vmap.
No functional change.
Signed-off-by: Gerd Hoffmann kraxel@redhat.com
Acked-by: Thomas Zimmermann tzimmermann@suse.de
Add vmap/vunmap variants which reserve (and pin) the bo. They can be used in case the caller doesn't hold a reservation for the bo.
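Typical use from a caller that does not hold a reservation then looks like this (sketch):

        struct dma_buf_map map;
        int ret;

        ret = qxl_bo_vmap(bo, &map);    /* reserves and pins internally */
        if (ret)
                return ret;

        /* ... use map.vaddr / map.vaddr_iomem ... */

        qxl_bo_vunmap(bo);              /* unmaps and drops the pin again */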
Signed-off-by: Gerd Hoffmann kraxel@redhat.com
---
 drivers/gpu/drm/qxl/qxl_object.h |  2 ++
 drivers/gpu/drm/qxl/qxl_object.c | 36 ++++++++++++++++++++++++++++++++
 2 files changed, 38 insertions(+)
diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h index 2495e5cdf353..ee9c29de4d3d 100644 --- a/drivers/gpu/drm/qxl/qxl_object.h +++ b/drivers/gpu/drm/qxl/qxl_object.h @@ -64,7 +64,9 @@ extern int qxl_bo_create(struct qxl_device *qdev, u32 priority, struct qxl_surface *surf, struct qxl_bo **bo_ptr); +int qxl_bo_vmap(struct qxl_bo *bo, struct dma_buf_map *map); int qxl_bo_vmap_locked(struct qxl_bo *bo, struct dma_buf_map *map); +int qxl_bo_vunmap(struct qxl_bo *bo); void qxl_bo_vunmap_locked(struct qxl_bo *bo); void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset); void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map); diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c index f4a015381a7f..82c3bf195ad6 100644 --- a/drivers/gpu/drm/qxl/qxl_object.c +++ b/drivers/gpu/drm/qxl/qxl_object.c @@ -29,6 +29,9 @@ #include "qxl_drv.h" #include "qxl_object.h"
+static int __qxl_bo_pin(struct qxl_bo *bo); +static void __qxl_bo_unpin(struct qxl_bo *bo); + static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo) { struct qxl_bo *bo; @@ -179,6 +182,25 @@ int qxl_bo_vmap_locked(struct qxl_bo *bo, struct dma_buf_map *map) return 0; }
+int qxl_bo_vmap(struct qxl_bo *bo, struct dma_buf_map *map) +{ + int r; + + r = qxl_bo_reserve(bo); + if (r) + return r; + + r = __qxl_bo_pin(bo); + if (r) { + qxl_bo_unreserve(bo); + return r; + } + + r = qxl_bo_vmap_locked(bo, map); + qxl_bo_unreserve(bo); + return r; +} + void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset) { @@ -223,6 +245,20 @@ void qxl_bo_vunmap_locked(struct qxl_bo *bo) ttm_bo_vunmap(&bo->tbo, &bo->map); }
+int qxl_bo_vunmap(struct qxl_bo *bo) +{ + int r; + + r = qxl_bo_reserve(bo); + if (r) + return r; + + qxl_bo_vunmap_locked(bo); + __qxl_bo_unpin(bo); + qxl_bo_unreserve(bo); + return 0; +} + void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *pmap) {
On 17.02.21 13:32, Gerd Hoffmann wrote:
Add vmap/vunmap variants which reserve (and pin) the bo. They can be used in case the caller doesn't hold a reservation for the bo.
Signed-off-by: Gerd Hoffmann kraxel@redhat.com
Acked-by: Thomas Zimmermann tzimmermann@suse.de
Use the correct vmap variant. We don't have a reservation here, so we can't use the _locked version.
Signed-off-by: Gerd Hoffmann kraxel@redhat.com
Acked-by: Thomas Zimmermann tzimmermann@suse.de
---
 drivers/gpu/drm/qxl/qxl_prime.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index 2bebe662516f..0628d1cc91fe 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -59,7 +59,7 @@ int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
        struct qxl_bo *bo = gem_to_qxl_bo(obj);
        int ret;
 
-       ret = qxl_bo_vmap_locked(bo, map);
+       ret = qxl_bo_vmap(bo, map);
        if (ret < 0)
                return ret;
 
@@ -71,7 +71,7 @@ void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
 {
        struct qxl_bo *bo = gem_to_qxl_bo(obj);
 
-       qxl_bo_vunmap_locked(bo);
+       qxl_bo_vunmap(bo);
 }
 
 int qxl_gem_prime_mmap(struct drm_gem_object *obj,
Use the correct vmap variant. We don't hold a reservation here, so we can't use the _locked variant. We can drop the pin because qxl_bo_vmap will do that for us.
Signed-off-by: Gerd Hoffmann kraxel@redhat.com
---
 drivers/gpu/drm/qxl/qxl_display.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index bfcc93089a94..f106da917863 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -1159,12 +1159,10 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
        }
        qdev->monitors_config_bo = gem_to_qxl_bo(gobj);
 
-       ret = qxl_bo_pin(qdev->monitors_config_bo);
+       ret = qxl_bo_vmap(qdev->monitors_config_bo, &map);
        if (ret)
                return ret;
 
-       qxl_bo_vmap_locked(qdev->monitors_config_bo, &map);
-
        qdev->monitors_config = qdev->monitors_config_bo->kptr;
        qdev->ram_header->monitors_config =
                qxl_bo_physical_address(qdev, qdev->monitors_config_bo, 0);
@@ -1189,8 +1187,7 @@ int qxl_destroy_monitors_object(struct qxl_device *qdev)
        qdev->monitors_config = NULL;
        qdev->ram_header->monitors_config = 0;
 
-       qxl_bo_vunmap_locked(qdev->monitors_config_bo);
-       ret = qxl_bo_unpin(qdev->monitors_config_bo);
+       ret = qxl_bo_vunmap(qdev->monitors_config_bo);
        if (ret)
                return ret;
 
On 17.02.21 13:32, Gerd Hoffmann wrote:
Use the correct vmap variant. We don't hold a reservation here, so we can't use the _locked variant. We can drop the pin because qxl_bo_vmap will do that for us.
Signed-off-by: Gerd Hoffmann kraxel@redhat.com
Acked-by: Thomas Zimmermann tzimmermann@suse.de
I simply forgot to ack this patch.
Pure code motion, no functional change.
Signed-off-by: Gerd Hoffmann kraxel@redhat.com
---
 drivers/gpu/drm/qxl/qxl_display.c | 61 +++++++++++++++++--------------
 1 file changed, 34 insertions(+), 27 deletions(-)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c index f106da917863..b315d7484e21 100644 --- a/drivers/gpu/drm/qxl/qxl_display.c +++ b/drivers/gpu/drm/qxl/qxl_display.c @@ -771,13 +771,45 @@ static void qxl_calc_dumb_shadow(struct qxl_device *qdev, DRM_DEBUG("%dx%d\n", surf->width, surf->height); }
+static void qxl_prepare_shadow(struct qxl_device *qdev, struct qxl_bo *user_bo, + int crtc_index) +{ + struct qxl_surface surf; + + qxl_update_dumb_head(qdev, crtc_index, + user_bo); + qxl_calc_dumb_shadow(qdev, &surf); + if (!qdev->dumb_shadow_bo || + qdev->dumb_shadow_bo->surf.width != surf.width || + qdev->dumb_shadow_bo->surf.height != surf.height) { + if (qdev->dumb_shadow_bo) { + drm_gem_object_put + (&qdev->dumb_shadow_bo->tbo.base); + qdev->dumb_shadow_bo = NULL; + } + qxl_bo_create(qdev, surf.height * surf.stride, + true, true, QXL_GEM_DOMAIN_SURFACE, 0, + &surf, &qdev->dumb_shadow_bo); + } + if (user_bo->shadow != qdev->dumb_shadow_bo) { + if (user_bo->shadow) { + qxl_bo_unpin(user_bo->shadow); + drm_gem_object_put + (&user_bo->shadow->tbo.base); + user_bo->shadow = NULL; + } + drm_gem_object_get(&qdev->dumb_shadow_bo->tbo.base); + user_bo->shadow = qdev->dumb_shadow_bo; + qxl_bo_pin(user_bo->shadow); + } +} + static int qxl_plane_prepare_fb(struct drm_plane *plane, struct drm_plane_state *new_state) { struct qxl_device *qdev = to_qxl(plane->dev); struct drm_gem_object *obj; struct qxl_bo *user_bo; - struct qxl_surface surf;
if (!new_state->fb) return 0; @@ -787,32 +819,7 @@ static int qxl_plane_prepare_fb(struct drm_plane *plane,
if (plane->type == DRM_PLANE_TYPE_PRIMARY && user_bo->is_dumb) { - qxl_update_dumb_head(qdev, new_state->crtc->index, - user_bo); - qxl_calc_dumb_shadow(qdev, &surf); - if (!qdev->dumb_shadow_bo || - qdev->dumb_shadow_bo->surf.width != surf.width || - qdev->dumb_shadow_bo->surf.height != surf.height) { - if (qdev->dumb_shadow_bo) { - drm_gem_object_put - (&qdev->dumb_shadow_bo->tbo.base); - qdev->dumb_shadow_bo = NULL; - } - qxl_bo_create(qdev, surf.height * surf.stride, - true, true, QXL_GEM_DOMAIN_SURFACE, 0, - &surf, &qdev->dumb_shadow_bo); - } - if (user_bo->shadow != qdev->dumb_shadow_bo) { - if (user_bo->shadow) { - qxl_bo_unpin(user_bo->shadow); - drm_gem_object_put - (&user_bo->shadow->tbo.base); - user_bo->shadow = NULL; - } - drm_gem_object_get(&qdev->dumb_shadow_bo->tbo.base); - user_bo->shadow = qdev->dumb_shadow_bo; - qxl_bo_pin(user_bo->shadow); - } + qxl_prepare_shadow(qdev, user_bo, new_state->crtc->index); }
return qxl_bo_pin(user_bo);
Add helper functions to create and move the cursor. Create the cursor_bo in the prepare_fb callback; in the atomic_commit callback we only send the update command to the host.
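The resulting split, heavily condensed from the diff below (error handling omitted):

        /* qxl_plane_prepare_fb(): build the cursor BO outside the commit tail */
        if (plane->type == DRM_PLANE_TYPE_CURSOR &&
            plane->state->fb != new_state->fb) {
                struct qxl_bo *old_cursor_bo = qcrtc->cursor_bo;

                qcrtc->cursor_bo = qxl_create_cursor(qdev, user_bo,
                                                     new_state->fb->hot_x,
                                                     new_state->fb->hot_y);
                qxl_free_cursor(old_cursor_bo);
        }

        /* qxl_cursor_atomic_update(): only queue a command to the host */
        if (fb != old_state->fb)
                qxl_primary_apply_cursor(qdev, plane->state);   /* QXL_CURSOR_SET */
        else
                qxl_primary_move_cursor(qdev, plane->state);    /* QXL_CURSOR_MOVE */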
Signed-off-by: Gerd Hoffmann kraxel@redhat.com
---
 drivers/gpu/drm/qxl/qxl_display.c | 248 ++++++++++++++++--------------
 1 file changed, 133 insertions(+), 115 deletions(-)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c index b315d7484e21..4a3d272e8d6c 100644 --- a/drivers/gpu/drm/qxl/qxl_display.c +++ b/drivers/gpu/drm/qxl/qxl_display.c @@ -476,12 +476,11 @@ static int qxl_primary_atomic_check(struct drm_plane *plane, return qxl_check_framebuffer(qdev, bo); }
-static int qxl_primary_apply_cursor(struct drm_plane *plane) +static int qxl_primary_apply_cursor(struct qxl_device *qdev, + struct drm_plane_state *plane_state) { - struct drm_device *dev = plane->dev; - struct qxl_device *qdev = to_qxl(dev); - struct drm_framebuffer *fb = plane->state->fb; - struct qxl_crtc *qcrtc = to_qxl_crtc(plane->state->crtc); + struct drm_framebuffer *fb = plane_state->fb; + struct qxl_crtc *qcrtc = to_qxl_crtc(plane_state->crtc); struct qxl_cursor_cmd *cmd; struct qxl_release *release; int ret = 0; @@ -505,8 +504,8 @@ static int qxl_primary_apply_cursor(struct drm_plane *plane)
cmd = (struct qxl_cursor_cmd *)qxl_release_map(qdev, release); cmd->type = QXL_CURSOR_SET; - cmd->u.set.position.x = plane->state->crtc_x + fb->hot_x; - cmd->u.set.position.y = plane->state->crtc_y + fb->hot_y; + cmd->u.set.position.x = plane_state->crtc_x + fb->hot_x; + cmd->u.set.position.y = plane_state->crtc_y + fb->hot_y;
cmd->u.set.shape = qxl_bo_physical_address(qdev, qcrtc->cursor_bo, 0);
@@ -523,6 +522,113 @@ static int qxl_primary_apply_cursor(struct drm_plane *plane) return ret; }
+static int qxl_primary_move_cursor(struct qxl_device *qdev, + struct drm_plane_state *plane_state) +{ + struct drm_framebuffer *fb = plane_state->fb; + struct qxl_crtc *qcrtc = to_qxl_crtc(plane_state->crtc); + struct qxl_cursor_cmd *cmd; + struct qxl_release *release; + int ret = 0; + + if (!qcrtc->cursor_bo) + return 0; + + ret = qxl_alloc_release_reserved(qdev, sizeof(*cmd), + QXL_RELEASE_CURSOR_CMD, + &release, NULL); + if (ret) + return ret; + + ret = qxl_release_reserve_list(release, true); + if (ret) { + qxl_release_free(qdev, release); + return ret; + } + + cmd = (struct qxl_cursor_cmd *)qxl_release_map(qdev, release); + cmd->type = QXL_CURSOR_MOVE; + cmd->u.position.x = plane_state->crtc_x + fb->hot_x; + cmd->u.position.y = plane_state->crtc_y + fb->hot_y; + qxl_release_unmap(qdev, release, &cmd->release_info); + + qxl_release_fence_buffer_objects(release); + qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false); + return ret; +} + +static struct qxl_bo *qxl_create_cursor(struct qxl_device *qdev, + struct qxl_bo *user_bo, + int hot_x, int hot_y) +{ + static const u32 size = 64 * 64 * 4; + struct qxl_bo *cursor_bo; + struct dma_buf_map cursor_map; + struct dma_buf_map user_map; + struct qxl_cursor cursor; + int ret; + + if (!user_bo) + return NULL; + + ret = qxl_bo_create(qdev, sizeof(struct qxl_cursor) + size, + false, true, QXL_GEM_DOMAIN_VRAM, 1, + NULL, &cursor_bo); + if (ret) + goto err; + + ret = qxl_bo_vmap(cursor_bo, &cursor_map); + if (ret) + goto err_unref; + + ret = qxl_bo_vmap(user_bo, &user_map); + if (ret) + goto err_unmap; + + cursor.header.unique = 0; + cursor.header.type = SPICE_CURSOR_TYPE_ALPHA; + cursor.header.width = 64; + cursor.header.height = 64; + cursor.header.hot_spot_x = hot_x; + cursor.header.hot_spot_y = hot_y; + cursor.data_size = size; + cursor.chunk.next_chunk = 0; + cursor.chunk.prev_chunk = 0; + cursor.chunk.data_size = size; + if (cursor_map.is_iomem) { + memcpy_toio(cursor_map.vaddr_iomem, + &cursor, sizeof(cursor)); + memcpy_toio(cursor_map.vaddr_iomem + sizeof(cursor), + user_map.vaddr, size); + } else { + memcpy(cursor_map.vaddr, + &cursor, sizeof(cursor)); + memcpy(cursor_map.vaddr + sizeof(cursor), + user_map.vaddr, size); + } + + qxl_bo_vunmap(user_bo); + qxl_bo_vunmap(cursor_bo); + return cursor_bo; + +err_unmap: + qxl_bo_vunmap(cursor_bo); +err_unref: + qxl_bo_unpin(cursor_bo); + qxl_bo_unref(&cursor_bo); +err: + return NULL; +} + +static void qxl_free_cursor(struct qxl_bo *cursor_bo) +{ + if (!cursor_bo) + return; + + qxl_bo_unpin(cursor_bo); + qxl_bo_unref(&cursor_bo); +} + static void qxl_primary_atomic_update(struct drm_plane *plane, struct drm_plane_state *old_state) { @@ -543,7 +649,7 @@ static void qxl_primary_atomic_update(struct drm_plane *plane, if (qdev->primary_bo) qxl_io_destroy_primary(qdev); qxl_io_create_primary(qdev, primary); - qxl_primary_apply_cursor(plane); + qxl_primary_apply_cursor(qdev, plane->state); }
if (bo->is_dumb) @@ -574,124 +680,21 @@ static void qxl_primary_atomic_disable(struct drm_plane *plane, static void qxl_cursor_atomic_update(struct drm_plane *plane, struct drm_plane_state *old_state) { - struct drm_device *dev = plane->dev; - struct qxl_device *qdev = to_qxl(dev); + struct qxl_device *qdev = to_qxl(plane->dev); struct drm_framebuffer *fb = plane->state->fb; - struct qxl_crtc *qcrtc = to_qxl_crtc(plane->state->crtc); - struct qxl_release *release; - struct qxl_cursor_cmd *cmd; - struct qxl_cursor *cursor; - struct drm_gem_object *obj; - struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL; - int ret; - struct dma_buf_map user_map; - struct dma_buf_map cursor_map; - void *user_ptr; - int size = 64*64*4; - - ret = qxl_alloc_release_reserved(qdev, sizeof(*cmd), - QXL_RELEASE_CURSOR_CMD, - &release, NULL); - if (ret) - return;
if (fb != old_state->fb) { - obj = fb->obj[0]; - user_bo = gem_to_qxl_bo(obj); - - /* pinning is done in the prepare/cleanup framevbuffer */ - ret = qxl_bo_vmap_locked(user_bo, &user_map); - if (ret) - goto out_free_release; - user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */ - - ret = qxl_alloc_bo_reserved(qdev, release, - sizeof(struct qxl_cursor) + size, - &cursor_bo); - if (ret) - goto out_kunmap; - - ret = qxl_bo_pin(cursor_bo); - if (ret) - goto out_free_bo; - - ret = qxl_release_reserve_list(release, true); - if (ret) - goto out_unpin; - - ret = qxl_bo_vmap_locked(cursor_bo, &cursor_map); - if (ret) - goto out_backoff; - if (cursor_map.is_iomem) /* TODO: Use mapping abstraction properly */ - cursor = (struct qxl_cursor __force *)cursor_map.vaddr_iomem; - else - cursor = (struct qxl_cursor *)cursor_map.vaddr; - - cursor->header.unique = 0; - cursor->header.type = SPICE_CURSOR_TYPE_ALPHA; - cursor->header.width = 64; - cursor->header.height = 64; - cursor->header.hot_spot_x = fb->hot_x; - cursor->header.hot_spot_y = fb->hot_y; - cursor->data_size = size; - cursor->chunk.next_chunk = 0; - cursor->chunk.prev_chunk = 0; - cursor->chunk.data_size = size; - memcpy(cursor->chunk.data, user_ptr, size); - qxl_bo_vunmap_locked(cursor_bo); - qxl_bo_vunmap_locked(user_bo); - - cmd = (struct qxl_cursor_cmd *) qxl_release_map(qdev, release); - cmd->u.set.visible = 1; - cmd->u.set.shape = qxl_bo_physical_address(qdev, - cursor_bo, 0); - cmd->type = QXL_CURSOR_SET; - - old_cursor_bo = qcrtc->cursor_bo; - qcrtc->cursor_bo = cursor_bo; - cursor_bo = NULL; + qxl_primary_apply_cursor(qdev, plane->state); } else { - - ret = qxl_release_reserve_list(release, true); - if (ret) - goto out_free_release; - - cmd = (struct qxl_cursor_cmd *) qxl_release_map(qdev, release); - cmd->type = QXL_CURSOR_MOVE; + qxl_primary_move_cursor(qdev, plane->state); } - - cmd->u.position.x = plane->state->crtc_x + fb->hot_x; - cmd->u.position.y = plane->state->crtc_y + fb->hot_y; - - qxl_release_unmap(qdev, release, &cmd->release_info); - qxl_release_fence_buffer_objects(release); - qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false); - - if (old_cursor_bo != NULL) - qxl_bo_unpin(old_cursor_bo); - qxl_bo_unref(&old_cursor_bo); - qxl_bo_unref(&cursor_bo); - - return; - -out_backoff: - qxl_release_backoff_reserve_list(release); -out_unpin: - qxl_bo_unpin(cursor_bo); -out_free_bo: - qxl_bo_unref(&cursor_bo); -out_kunmap: - qxl_bo_vunmap_locked(user_bo); -out_free_release: - qxl_release_free(qdev, release); - return; - }
static void qxl_cursor_atomic_disable(struct drm_plane *plane, struct drm_plane_state *old_state) { struct qxl_device *qdev = to_qxl(plane->dev); + struct qxl_crtc *qcrtc; struct qxl_release *release; struct qxl_cursor_cmd *cmd; int ret; @@ -714,6 +717,10 @@ static void qxl_cursor_atomic_disable(struct drm_plane *plane,
qxl_release_fence_buffer_objects(release); qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false); + + qcrtc = to_qxl_crtc(old_state->crtc); + qxl_free_cursor(qcrtc->cursor_bo); + qcrtc->cursor_bo = NULL; }
static void qxl_update_dumb_head(struct qxl_device *qdev, @@ -822,6 +829,17 @@ static int qxl_plane_prepare_fb(struct drm_plane *plane, qxl_prepare_shadow(qdev, user_bo, new_state->crtc->index); }
+ if (plane->type == DRM_PLANE_TYPE_CURSOR && + plane->state->fb != new_state->fb) { + struct qxl_crtc *qcrtc = to_qxl_crtc(new_state->crtc); + struct qxl_bo *old_cursor_bo = qcrtc->cursor_bo; + + qcrtc->cursor_bo = qxl_create_cursor(qdev, user_bo, + new_state->fb->hot_x, + new_state->fb->hot_y); + qxl_free_cursor(old_cursor_bo); + } + return qxl_bo_pin(user_bo); }
Hi
On 17.02.21 13:32, Gerd Hoffmann wrote:
Add helper functions to create and move the cursor. Create the cursor_bo in the prepare_fb callback; in the atomic_commit callback we only send the update command to the host.
I'm still trying to wrap my head around the qxl cursor code.
Getting vmap out of the commit tail is good, but I feel like this isn't going in the right direction overall.
In ast, these helper functions were only good when converting the driver to atomic modesetting. So I removed them in the latest patchset and did all the updates in the plane helpers directly.
For cursor_bo itself, it seems to be transitional state that is only used during the plane update and crtc update. It should probably be stored in a plane-state structure.
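For illustration, that could look roughly like this (untested sketch; qxl_plane_state is a made-up name, and the state duplicate/destroy hooks would have to manage the cursor_bo reference):

        struct qxl_plane_state {
                struct drm_plane_state base;
                struct qxl_bo *cursor_bo;       /* travels with the state */
        };

        static inline struct qxl_plane_state *
        to_qxl_plane_state(struct drm_plane_state *state)
        {
                return container_of(state, struct qxl_plane_state, base);
        }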
Some of the primary plane's functions seem to deal with cursor handling. What's the role of the primary plane in cursor handling?
For now, I suggest merging patches 1 to 8 and 11, and moving the cursor patches into a new patchset. I'd like to hear Daniel's opinion on this. Do you have further plans here?
If you absolutely want patches 9 and 10, I'd rubber-stamp an A-b on them.
Best regards
Thomas
Signed-off-by: Gerd Hoffmann kraxel@redhat.com
drivers/gpu/drm/qxl/qxl_display.c | 248 ++++++++++++++++-------------- 1 file changed, 133 insertions(+), 115 deletions(-)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c index b315d7484e21..4a3d272e8d6c 100644 --- a/drivers/gpu/drm/qxl/qxl_display.c +++ b/drivers/gpu/drm/qxl/qxl_display.c @@ -476,12 +476,11 @@ static int qxl_primary_atomic_check(struct drm_plane *plane, return qxl_check_framebuffer(qdev, bo); }
-static int qxl_primary_apply_cursor(struct drm_plane *plane) +static int qxl_primary_apply_cursor(struct qxl_device *qdev,
{struct drm_plane_state *plane_state)
- struct drm_device *dev = plane->dev;
- struct qxl_device *qdev = to_qxl(dev);
- struct drm_framebuffer *fb = plane->state->fb;
- struct qxl_crtc *qcrtc = to_qxl_crtc(plane->state->crtc);
- struct drm_framebuffer *fb = plane_state->fb;
- struct qxl_crtc *qcrtc = to_qxl_crtc(plane_state->crtc); struct qxl_cursor_cmd *cmd; struct qxl_release *release; int ret = 0;
@@ -505,8 +504,8 @@ static int qxl_primary_apply_cursor(struct drm_plane *plane)
cmd = (struct qxl_cursor_cmd *)qxl_release_map(qdev, release); cmd->type = QXL_CURSOR_SET;
- cmd->u.set.position.x = plane->state->crtc_x + fb->hot_x;
- cmd->u.set.position.y = plane->state->crtc_y + fb->hot_y;
cmd->u.set.position.x = plane_state->crtc_x + fb->hot_x;
cmd->u.set.position.y = plane_state->crtc_y + fb->hot_y;
cmd->u.set.shape = qxl_bo_physical_address(qdev, qcrtc->cursor_bo, 0);
@@ -523,6 +522,113 @@ static int qxl_primary_apply_cursor(struct drm_plane *plane) return ret; }
+static int qxl_primary_move_cursor(struct qxl_device *qdev,
+                                   struct drm_plane_state *plane_state)
+{
+    struct drm_framebuffer *fb = plane_state->fb;
+    struct qxl_crtc *qcrtc = to_qxl_crtc(plane_state->crtc);
+    struct qxl_cursor_cmd *cmd;
+    struct qxl_release *release;
+    int ret = 0;
+
+    if (!qcrtc->cursor_bo)
+        return 0;
+
+    ret = qxl_alloc_release_reserved(qdev, sizeof(*cmd),
+                                     QXL_RELEASE_CURSOR_CMD,
+                                     &release, NULL);
+    if (ret)
+        return ret;
+
+    ret = qxl_release_reserve_list(release, true);
+    if (ret) {
+        qxl_release_free(qdev, release);
+        return ret;
+    }
+
+    cmd = (struct qxl_cursor_cmd *)qxl_release_map(qdev, release);
+    cmd->type = QXL_CURSOR_MOVE;
+    cmd->u.position.x = plane_state->crtc_x + fb->hot_x;
+    cmd->u.position.y = plane_state->crtc_y + fb->hot_y;
+    qxl_release_unmap(qdev, release, &cmd->release_info);
+
+    qxl_release_fence_buffer_objects(release);
+    qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
+    return ret;
+}
+
+static struct qxl_bo *qxl_create_cursor(struct qxl_device *qdev,
+                                        struct qxl_bo *user_bo,
+                                        int hot_x, int hot_y)
+{
+    static const u32 size = 64 * 64 * 4;
+    struct qxl_bo *cursor_bo;
+    struct dma_buf_map cursor_map;
+    struct dma_buf_map user_map;
+    struct qxl_cursor cursor;
+    int ret;
+
+    if (!user_bo)
+        return NULL;
+
+    ret = qxl_bo_create(qdev, sizeof(struct qxl_cursor) + size,
+                        false, true, QXL_GEM_DOMAIN_VRAM, 1,
+                        NULL, &cursor_bo);
+    if (ret)
+        goto err;
+
+    ret = qxl_bo_vmap(cursor_bo, &cursor_map);
+    if (ret)
+        goto err_unref;
+
+    ret = qxl_bo_vmap(user_bo, &user_map);
+    if (ret)
+        goto err_unmap;
+
+    cursor.header.unique = 0;
+    cursor.header.type = SPICE_CURSOR_TYPE_ALPHA;
+    cursor.header.width = 64;
+    cursor.header.height = 64;
+    cursor.header.hot_spot_x = hot_x;
+    cursor.header.hot_spot_y = hot_y;
+    cursor.data_size = size;
+    cursor.chunk.next_chunk = 0;
+    cursor.chunk.prev_chunk = 0;
+    cursor.chunk.data_size = size;
+    if (cursor_map.is_iomem) {
+        memcpy_toio(cursor_map.vaddr_iomem,
+                    &cursor, sizeof(cursor));
+        memcpy_toio(cursor_map.vaddr_iomem + sizeof(cursor),
+                    user_map.vaddr, size);
+    } else {
+        memcpy(cursor_map.vaddr,
+               &cursor, sizeof(cursor));
+        memcpy(cursor_map.vaddr + sizeof(cursor),
+               user_map.vaddr, size);
+    }
+
+    qxl_bo_vunmap(user_bo);
+    qxl_bo_vunmap(cursor_bo);
+    return cursor_bo;
+
+err_unmap:
+    qxl_bo_vunmap(cursor_bo);
+err_unref:
+    qxl_bo_unpin(cursor_bo);
+    qxl_bo_unref(&cursor_bo);
+err:
+    return NULL;
+}
+
+static void qxl_free_cursor(struct qxl_bo *cursor_bo)
+{
+    if (!cursor_bo)
+        return;
+
+    qxl_bo_unpin(cursor_bo);
+    qxl_bo_unref(&cursor_bo);
+}
+
 static void qxl_primary_atomic_update(struct drm_plane *plane,
                                       struct drm_plane_state *old_state)
 {
@@ -543,7 +649,7 @@ static void qxl_primary_atomic_update(struct drm_plane *plane, if (qdev->primary_bo) qxl_io_destroy_primary(qdev); qxl_io_create_primary(qdev, primary);
-        qxl_primary_apply_cursor(plane);
+        qxl_primary_apply_cursor(qdev, plane->state);
}
if (bo->is_dumb)
@@ -574,124 +680,21 @@ static void qxl_primary_atomic_disable(struct drm_plane *plane, static void qxl_cursor_atomic_update(struct drm_plane *plane, struct drm_plane_state *old_state) {
- struct drm_device *dev = plane->dev;
- struct qxl_device *qdev = to_qxl(dev);
+    struct qxl_device *qdev = to_qxl(plane->dev);
     struct drm_framebuffer *fb = plane->state->fb;
-    struct qxl_crtc *qcrtc = to_qxl_crtc(plane->state->crtc);
-    struct qxl_release *release;
-    struct qxl_cursor_cmd *cmd;
-    struct qxl_cursor *cursor;
-    struct drm_gem_object *obj;
-    struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
-    int ret;
-    struct dma_buf_map user_map;
-    struct dma_buf_map cursor_map;
-    void *user_ptr;
-    int size = 64*64*4;
-
-    ret = qxl_alloc_release_reserved(qdev, sizeof(*cmd),
-                                     QXL_RELEASE_CURSOR_CMD,
-                                     &release, NULL);
-    if (ret)
-        return;
 
     if (fb != old_state->fb) {
-        obj = fb->obj[0];
-        user_bo = gem_to_qxl_bo(obj);
-
-        /* pinning is done in the prepare/cleanup framevbuffer */
-        ret = qxl_bo_vmap_locked(user_bo, &user_map);
-        if (ret)
-            goto out_free_release;
-        user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */
-
-        ret = qxl_alloc_bo_reserved(qdev, release,
-                                    sizeof(struct qxl_cursor) + size,
-                                    &cursor_bo);
-        if (ret)
-            goto out_kunmap;
-
-        ret = qxl_bo_pin(cursor_bo);
-        if (ret)
-            goto out_free_bo;
-
-        ret = qxl_release_reserve_list(release, true);
-        if (ret)
-            goto out_unpin;
-
-        ret = qxl_bo_vmap_locked(cursor_bo, &cursor_map);
-        if (ret)
-            goto out_backoff;
-        if (cursor_map.is_iomem) /* TODO: Use mapping abstraction properly */
-            cursor = (struct qxl_cursor __force *)cursor_map.vaddr_iomem;
-        else
-            cursor = (struct qxl_cursor *)cursor_map.vaddr;
-
-        cursor->header.unique = 0;
-        cursor->header.type = SPICE_CURSOR_TYPE_ALPHA;
-        cursor->header.width = 64;
-        cursor->header.height = 64;
-        cursor->header.hot_spot_x = fb->hot_x;
-        cursor->header.hot_spot_y = fb->hot_y;
-        cursor->data_size = size;
-        cursor->chunk.next_chunk = 0;
-        cursor->chunk.prev_chunk = 0;
-        cursor->chunk.data_size = size;
-        memcpy(cursor->chunk.data, user_ptr, size);
-        qxl_bo_vunmap_locked(cursor_bo);
-        qxl_bo_vunmap_locked(user_bo);
-
-        cmd = (struct qxl_cursor_cmd *) qxl_release_map(qdev, release);
-        cmd->u.set.visible = 1;
-        cmd->u.set.shape = qxl_bo_physical_address(qdev,
-                                                   cursor_bo, 0);
-        cmd->type = QXL_CURSOR_SET;
-
-        old_cursor_bo = qcrtc->cursor_bo;
-        qcrtc->cursor_bo = cursor_bo;
-        cursor_bo = NULL;
+        qxl_primary_apply_cursor(qdev, plane->state);
     } else {
-        ret = qxl_release_reserve_list(release, true);
-        if (ret)
-            goto out_free_release;
-
-        cmd = (struct qxl_cursor_cmd *) qxl_release_map(qdev, release);
-        cmd->type = QXL_CURSOR_MOVE;
+        qxl_primary_move_cursor(qdev, plane->state);
     }
-
-    cmd->u.position.x = plane->state->crtc_x + fb->hot_x;
-    cmd->u.position.y = plane->state->crtc_y + fb->hot_y;
-
-    qxl_release_unmap(qdev, release, &cmd->release_info);
-    qxl_release_fence_buffer_objects(release);
-    qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
-
-    if (old_cursor_bo != NULL)
-        qxl_bo_unpin(old_cursor_bo);
-    qxl_bo_unref(&old_cursor_bo);
-    qxl_bo_unref(&cursor_bo);
-
-    return;
-
-out_backoff:
-    qxl_release_backoff_reserve_list(release);
-out_unpin:
-    qxl_bo_unpin(cursor_bo);
-out_free_bo:
-    qxl_bo_unref(&cursor_bo);
-out_kunmap:
-    qxl_bo_vunmap_locked(user_bo);
-out_free_release:
-    qxl_release_free(qdev, release);
-    return;
}
static void qxl_cursor_atomic_disable(struct drm_plane *plane, struct drm_plane_state *old_state) { struct qxl_device *qdev = to_qxl(plane->dev);
+    struct qxl_crtc *qcrtc;
     struct qxl_release *release;
     struct qxl_cursor_cmd *cmd;
     int ret;
@@ -714,6 +717,10 @@ static void qxl_cursor_atomic_disable(struct drm_plane *plane,
qxl_release_fence_buffer_objects(release); qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
+
+    qcrtc = to_qxl_crtc(old_state->crtc);
+    qxl_free_cursor(qcrtc->cursor_bo);
+    qcrtc->cursor_bo = NULL;
 }
static void qxl_update_dumb_head(struct qxl_device *qdev,
@@ -822,6 +829,17 @@ static int qxl_plane_prepare_fb(struct drm_plane *plane, qxl_prepare_shadow(qdev, user_bo, new_state->crtc->index); }
+    if (plane->type == DRM_PLANE_TYPE_CURSOR &&
+        plane->state->fb != new_state->fb) {
+        struct qxl_crtc *qcrtc = to_qxl_crtc(new_state->crtc);
+        struct qxl_bo *old_cursor_bo = qcrtc->cursor_bo;
+
+        qcrtc->cursor_bo = qxl_create_cursor(qdev, user_bo,
+                                             new_state->fb->hot_x,
+                                             new_state->fb->hot_y);
+        qxl_free_cursor(old_cursor_bo);
+    }
+
     return qxl_bo_pin(user_bo);
 }
Hi,
I'm still trying to wrap my head around the qxl cursor code.
Getting vmap out of the commit tail is good, but I feel like this isn't going in the right direction overall.
In ast, these helper functions were only good when converting the driver to atomic modesetting. So I removed them in the latest patchset and did all the updates in the plane helpers directly.
I see the helper functions more as a way to get some structure into the code flow. The callbacks are easier to read if they just call helper functions for stuff which needs more than a handful of lines of code (patch 9/11 exists for the same reason).
The helpers also make it easier to move work from one callback to another, but that is just a useful side effect.

I had considered making that two separate patches, one factoring out the code into functions and one moving the calls. That turned out to not be that easy though, because the old qxl_cursor_atomic_update() code was a rather hairy mix of qxl_create_cursor() + qxl_primary_apply_cursor() + qxl_primary_move_cursor().
For cursor_bo itself, it seems to be transitional state that is only used during the plane update and crtc update. It should probably be stored in a plane-state structure.
Some of the primary plane's functions seem to deal with cursor handling. What's the role of the primary plane in cursor handling?
It's a quirk. The qxl device will forget the cursor state on qxl_io_create_primary(), so I have to remember the cursor state and re-establish it by calling qxl_primary_apply_cursor() again.
So I'm not sure sticking this into plane state would work. Because of the quirk this is more than just a handover from prepare to commit.
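[Editor's sketch, not literal driver code: qxl_recreate_primary_with_cursor() is a made-up name, but the calls and the ordering follow the diffs and the quirk described above — the cursor has to be re-sent right after the primary surface is recreated.]

/* Hypothetical helper illustrating the quirk: recreating the primary
 * surface makes the device drop its cursor state, so the remembered
 * cursor bo is re-applied immediately afterwards. */
static void qxl_recreate_primary_with_cursor(struct qxl_device *qdev,
                                             struct qxl_bo *primary,
                                             struct drm_plane_state *state)
{
    if (qdev->primary_bo)
        qxl_io_destroy_primary(qdev);
    qxl_io_create_primary(qdev, primary);   /* cursor state lost here... */
    qxl_primary_apply_cursor(qdev, state);  /* ...so re-establish it     */
}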
For now, I suggest merging patches 1 to 8 and 11, and moving the cursor patches into a new patchset.
I can merge 1-8, but 11 has to wait until the cursor is sorted. There is a reason why 11 is last in the series ;)
I'd like to hear Daniel's opinion on this. Do you have further plans here?
Well. I suspect I could easily spend a month cleaning up and partly redesigning the qxl driver (specifically qxl_draw.c + qxl_image.c).

I'm not sure I'll find the time to actually do that anytime soon. I have plenty of other stuff on my TODO list, and given that the world is transitioning to virtio-gpu, the priority for qxl isn't that high.
So, no, I have no short-term plans for qxl beyond fixing pins + reservations + lockdep.
take care, Gerd
Hi
On 18.02.21 at 12:50, Gerd Hoffmann wrote:
Hi,
Well. I suspect I could easily spend a month cleaning up and partly redesigning the qxl driver (specifically qxl_draw.c + qxl_image.c).

I'm not sure I'll find the time to actually do that anytime soon. I have plenty of other stuff on my TODO list, and given that the world is transitioning to virtio-gpu, the priority for qxl isn't that high.
Well, in that case:
Acked-by: Thomas Zimmermann tzimmermann@suse.de
for patches 9 and 10. Having the vmap calls fixed is at least worth it.
Best regards Thomas
Hi,
Well. I suspect I could easily spend a month cleaning up and partly redesigning the qxl driver (specifically qxl_draw.c + qxl_image.c).

I'm not sure I'll find the time to actually do that anytime soon. I have plenty of other stuff on my TODO list, and given that the world is transitioning to virtio-gpu, the priority for qxl isn't that high.
Well, in that case:
Acked-by: Thomas Zimmermann tzimmermann@suse.de
for patches 9 and 10. Having the vmap calls fixed is at least worth it.
Thanks. Any chance you can ack 8/11 too? It's the only patch still missing an ack.
take care, Gerd
Try to avoid re-introducing locking bugs.
Signed-off-by: Gerd Hoffmann kraxel@redhat.com Acked-by: Thomas Zimmermann tzimmermann@suse.de --- drivers/gpu/drm/qxl/qxl_object.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c index 82c3bf195ad6..6e26d70f2f07 100644 --- a/drivers/gpu/drm/qxl/qxl_object.c +++ b/drivers/gpu/drm/qxl/qxl_object.c @@ -162,6 +162,8 @@ int qxl_bo_vmap_locked(struct qxl_bo *bo, struct dma_buf_map *map) { int r;
+ dma_resv_assert_held(bo->tbo.base.resv); + if (bo->kptr) { bo->map_count++; goto out; @@ -236,6 +238,8 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
void qxl_bo_vunmap_locked(struct qxl_bo *bo) { + dma_resv_assert_held(bo->tbo.base.resv); + if (bo->kptr == NULL) return; bo->map_count--;
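[A hedged usage sketch, not from the patch: it assumes the qxl_bo_reserve()/qxl_bo_unreserve() helpers from qxl_object.h and the unlocked qxl_bo_vmap()/qxl_bo_vunmap() wrappers added earlier in this series; example_fill_bo() is a made-up name. The point is only that callers of the _locked variants must hold the bo reservation, which the new asserts verify.]

/* Sketch: the reservation (dma_resv lock) must be held around the
 * _locked vmap/vunmap calls; the unlocked wrappers take it internally. */
static int example_fill_bo(struct qxl_bo *bo)
{
    struct dma_buf_map map;
    int ret;

    ret = qxl_bo_reserve(bo);            /* takes bo->tbo.base.resv */
    if (ret)
        return ret;

    ret = qxl_bo_vmap_locked(bo, &map);  /* dma_resv_assert_held() passes */
    if (ret)
        goto out_unreserve;

    /* ... access the mapping via the dma_buf_map helpers ... */

    qxl_bo_vunmap_locked(bo);
out_unreserve:
    qxl_bo_unreserve(bo);
    return ret;
}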