Hi
On 11.05.20 at 13:37, Daniel Vetter wrote:
On Mon, May 11, 2020 at 01:23:38PM +0200, Thomas Zimmermann wrote:
Hi
On 11.05.20 at 11:35, Daniel Vetter wrote:
There's no direct harm, because for the shmem helpers these are no-ops on imported buffers. The trouble is in the locks these take: I want to change dma_buf_vmap locking, and so need to make sure that we only ever take certain locks on one side of the dma-buf interface, either for exporters or for importers.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Sean Paul <sean@poorly.run>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Sam Ravnborg <sam@ravnborg.org>
 drivers/gpu/drm/udl/udl_gem.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/udl/udl_gem.c b/drivers/gpu/drm/udl/udl_gem.c
index b6e26f98aa0a..c68d3e265329 100644
--- a/drivers/gpu/drm/udl/udl_gem.c
+++ b/drivers/gpu/drm/udl/udl_gem.c
@@ -46,29 +46,31 @@ static void *udl_gem_object_vmap(struct drm_gem_object *obj)
It's still not clear to me why this function exists in the first place. It's the same code as the default implementation, except that it doesn't support cached mappings.
I don't see why udl is special. I'd suggest trying to use the original shmem function and removing this one. Same for the mmap code.
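For illustration, switching over to the generic helpers might look roughly like this (untested sketch; I'm assuming the rest of udl's GEM object function table stays as it is in the other shmem-based drivers):

static const struct drm_gem_object_funcs udl_gem_object_funcs = {
	.free = drm_gem_shmem_free_object,
	.print_info = drm_gem_shmem_print_info,
	.pin = drm_gem_shmem_pin,
	.unpin = drm_gem_shmem_unpin,
	.get_sg_table = drm_gem_shmem_get_sg_table,
	/* generic shmem helpers instead of udl's private copies */
	.vmap = drm_gem_shmem_vmap,
	.vunmap = drm_gem_shmem_vunmap,
	.mmap = drm_gem_shmem_mmap,
};

with udl_gem_object_vmap() and udl_gem_object_mmap() deleted entirely.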
tbh no idea; it could be that the usb code would then be a bit too inefficient at uploading if it has to flush caches first.
IIRC I was told that some other component (userspace, dma-buf provider) might not work well with cached mappings. But that problem should affect all other shmem-based drivers as well. I'm not aware of any problems here.
The upload code is in udl_render_hline. It's an elaborate format-conversion helper that packs the framebuffer into USB URBs and sends them out. Again, I don't see much of a conceptual difference to other drivers that do similar things on device memory.
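For reference, the conversion step itself is nothing exotic. Purely as an illustration (this is not udl's actual code; udl_render_hline also compresses the data and manages the URB buffers), packing one scanline of XRGB8888 down to a 16-bit wire format looks roughly like:

/* Illustrative sketch only: pack one scanline of XRGB8888 pixels
 * into RGB565. The real udl_render_hline additionally compresses
 * the result and copies it into URB buffers before sending.
 */
static void xrgb8888_to_rgb565_line(u16 *dst, const u32 *src,
				    unsigned int pixels)
{
	unsigned int i;

	for (i = 0; i < pixels; i++) {
		u32 px = src[i];

		dst[i] = ((px >> 8) & 0xf800) |	/* R: bits 23:19 -> 15:11 */
			 ((px >> 5) & 0x07e0) |	/* G: bits 15:10 -> 10:5 */
			 ((px >> 3) & 0x001f);	/* B: bits 7:3 -> 4:0 */
	}
}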
But then on x86 at least everything (except real gpus) is coherent, so cached mappings should be faster.
No idea, but I also don't have the hw, so I'm not going to touch udl that much.
I can help with testing.
Best regards
Thomas
-Daniel
Best regards
Thomas
 	if (shmem->vmap_use_count++ > 0)
 		goto out;
 
-	ret = drm_gem_shmem_get_pages(shmem);
-	if (ret)
-		goto err_zero_use;
-
-	if (obj->import_attach)
+	if (obj->import_attach) {
 		shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
-	else
+	} else {
+		ret = drm_gem_shmem_get_pages(shmem);
+		if (ret)
+			goto err;
+
 		shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
 				    VM_MAP, PAGE_KERNEL);
+		if (!shmem->vaddr)
+			drm_gem_shmem_put_pages(shmem);
+	}
 
 	if (!shmem->vaddr) {
 		DRM_DEBUG_KMS("Failed to vmap pages\n");
 		ret = -ENOMEM;
-		goto err_put_pages;
+		goto err;
 	}
 
 out:
 	mutex_unlock(&shmem->vmap_lock);
 	return shmem->vaddr;
 
-err_put_pages:
-	drm_gem_shmem_put_pages(shmem);
-err_zero_use:
+err:
 	shmem->vmap_use_count = 0;
 	mutex_unlock(&shmem->vmap_lock);
 
 	return ERR_PTR(ret);
-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer