On Wed, Jul 10, 2013 at 2:03 PM, Maarten Lankhorst maarten.lankhorst@canonical.com wrote:
On 10-07-13 13:54, Daniel Vetter wrote:
I've checked both implementations (radeon/nouveau): they both grab the page array from ttm simply by dereferencing it, wrap it up with drm_prime_pages_to_sg in the callback, and map it with dma_map_sg (in the helper).
The only thing we need to be concerned about is grabbing that underlying page array, and either those pages are pinned independently or we're screwed no matter what.
And indeed, nouveau/radeon pin the backing storage in their attach/detach functions.
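To illustrate the pattern (a rough sketch only: driver-side names like my_gem_prime_get_sg_table, my_bo and the my_bo_* helpers are made up here; the drm_prime_pages_to_sg call and the attach-time pinning are the parts that mirror what radeon/nouveau do):

struct sg_table *my_gem_prime_get_sg_table(struct drm_gem_object *obj)
{
	struct my_bo *bo = to_my_bo(obj);	/* driver-private wrapper */
	int npages = bo->tbo.num_pages;

	/* grab the page array straight from ttm and wrap it up;
	 * the prime helper then maps the result with dma_map_sg */
	return drm_prime_pages_to_sg(bo->tbo.ttm->pages, npages);
}

int my_gem_prime_pin(struct drm_gem_object *obj)
{
	struct my_bo *bo = to_my_bo(obj);
	int ret;

	ret = my_bo_reserve(bo, false);
	if (ret)
		return ret;

	/* pin into GTT so the backing pages can't move while the
	 * dma-buf attachment is alive */
	ret = my_bo_pin(bo, MY_GEM_DOMAIN_GTT, NULL);
	my_bo_unreserve(bo);
	return ret;
}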
The only thing we might claim it does is prevent concurrent mapping of dma_buf attachments. But a) that's not allowed and b) the current code is racy already since it checks whether the sg mapping exists _before_ grabbing the lock.
So the dev->struct_mutex locking here does absolutely nothing useful, but only distracts. Remove it.
This should also help Maarten's work to eventually pin the backing storage more dynamically by preventing locking inversions around dev->struct_mutex.
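For reference, the helper in question looks roughly like this (a sketch from memory of drm_gem_map_dma_buf in drm_prime.c, not a verbatim quote); both the early cache check and the lock this patch removes live here:

static struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach,
					    enum dma_data_direction dir)
{
	struct drm_prime_attachment *prime_attach = attach->priv;
	struct drm_gem_object *obj = attach->dmabuf->priv;
	struct sg_table *sgt;

	/* cached-mapping check happens _before_ any locking, so the lock
	 * below never protected this path against concurrent mappers */
	if (prime_attach->dir == dir)
		return prime_attach->sgt;

	mutex_lock(&obj->dev->struct_mutex);	/* <- removed by this patch */

	sgt = obj->dev->driver->gem_prime_get_sg_table(obj);
	if (!IS_ERR(sgt)) {
		if (!dma_map_sg(attach->dev, sgt->sgl, sgt->nents, dir)) {
			sg_free_table(sgt);
			kfree(sgt);
			sgt = ERR_PTR(-ENOMEM);
		} else {
			prime_attach->sgt = sgt;
			prime_attach->dir = dir;
		}
	}

	mutex_unlock(&obj->dev->struct_mutex);	/* <- removed by this patch */
	return sgt;
}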
This pleases me, but I think it's not thorough enough.
if (prime_attach->dir == dir)
        return prime_attach->sgt;
^ That check must go too. I don't think recursive map_dma_buf is valid.
And unmap_dma_buf should set prime_attach->dir = DMA_NONE again.
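A minimal sketch of what that unmap side could look like (assuming the drm_prime_attachment bookkeeping struct the prime helpers already use; this is just the suggestion spelled out, not what the code does today):

static void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
				  struct sg_table *sgt,
				  enum dma_data_direction dir)
{
	struct drm_prime_attachment *prime_attach = attach->priv;

	/* tear the mapping down instead of caching it */
	dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents, dir);
	sg_free_table(sgt);
	kfree(sgt);
	prime_attach->sgt = NULL;

	/* mark the attachment as unmapped again, per the suggestion above */
	prime_attach->dir = DMA_NONE;
}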
So after a bit of irc chatting with Maarten this seems to be more involved. The above check is there to cache the dma mapping, but the implementation is bogus in tons of ways:
- If the direction changes we don't bother with unmapping and freeing the old mapping, but simply leak it.
- This will break if the dma mapping needs explicit syncing, since the helpers don't call sync_to_cpu/sync_to_device anywhere.
So I think I'll decline to poke around more in this hornet's nest and leave it at the locking removal.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch