There's no direct harm, because for the shmem helpers get/put_pages are no-ops on imported buffers. The trouble is in the locks these helpers take: I want to change dma_buf_vmap locking, and so need to make sure that we only ever take certain locks on one side of the dma-buf interface - either for exporters, or for importers.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Sean Paul <sean@poorly.run>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/udl/udl_gem.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)
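To make the locking rule concrete, here is a minimal sketch of the invariant (illustrative only, not part of the diff; map_own_pages is a hypothetical stand-in for the native shmem path below):

	/*
	 * Each object maps through exactly one side of the dma-buf
	 * interface, so exporter-side locks (shmem page handling) and
	 * importer-side calls (dma_buf_vmap) never mix on one object.
	 */
	if (obj->import_attach)
		/* imported: route the mapping through the exporter */
		vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
	else
		/* native: map our own shmem pages directly */
		vaddr = map_own_pages(obj); /* hypothetical stand-in */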
diff --git a/drivers/gpu/drm/udl/udl_gem.c b/drivers/gpu/drm/udl/udl_gem.c
index b6e26f98aa0a..c68d3e265329 100644
--- a/drivers/gpu/drm/udl/udl_gem.c
+++ b/drivers/gpu/drm/udl/udl_gem.c
@@ -46,29 +46,31 @@ static void *udl_gem_object_vmap(struct drm_gem_object *obj)
 	if (shmem->vmap_use_count++ > 0)
 		goto out;
 
-	ret = drm_gem_shmem_get_pages(shmem);
-	if (ret)
-		goto err_zero_use;
-
-	if (obj->import_attach)
+	if (obj->import_attach) {
 		shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
-	else
+	} else {
+		ret = drm_gem_shmem_get_pages(shmem);
+		if (ret)
+			goto err;
+
 		shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
 				    VM_MAP, PAGE_KERNEL);
 
+		if (!shmem->vaddr)
+			drm_gem_shmem_put_pages(shmem);
+	}
+
 	if (!shmem->vaddr) {
 		DRM_DEBUG_KMS("Failed to vmap pages\n");
 		ret = -ENOMEM;
-		goto err_put_pages;
+		goto err;
 	}
 
 out:
 	mutex_unlock(&shmem->vmap_lock);
 	return shmem->vaddr;
 
-err_put_pages:
-	drm_gem_shmem_put_pages(shmem);
-err_zero_use:
+err:
 	shmem->vmap_use_count = 0;
 	mutex_unlock(&shmem->vmap_lock);
 	return ERR_PTR(ret);
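For readability, here is how udl_gem_object_vmap() should read with this patch applied, reconstructed from the hunk above; the prologue (the shmem/ret declarations and taking vmap_lock) is assumed from the unchanged context outside the hunk:

static void *udl_gem_object_vmap(struct drm_gem_object *obj)
{
	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
	int ret;

	ret = mutex_lock_interruptible(&shmem->vmap_lock);
	if (ret)
		return ERR_PTR(ret);

	if (shmem->vmap_use_count++ > 0)
		goto out;

	if (obj->import_attach) {
		/* imported buffer: only call across the dma-buf interface */
		shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
	} else {
		/* native buffer: pin and map our own shmem pages */
		ret = drm_gem_shmem_get_pages(shmem);
		if (ret)
			goto err;

		shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
				    VM_MAP, PAGE_KERNEL);

		if (!shmem->vaddr)
			drm_gem_shmem_put_pages(shmem);
	}

	if (!shmem->vaddr) {
		DRM_DEBUG_KMS("Failed to vmap pages\n");
		ret = -ENOMEM;
		goto err;
	}

out:
	mutex_unlock(&shmem->vmap_lock);
	return shmem->vaddr;

err:
	shmem->vmap_use_count = 0;
	mutex_unlock(&shmem->vmap_lock);
	return ERR_PTR(ret);
}

Note the single err: label now covers both failure modes, because the native path drops its page reference itself when vmap() fails before falling through to the common !shmem->vaddr check, so get/put_pages never runs on an imported buffer.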