On 28-11-12 12:54, Thomas Hellstrom wrote:
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
 drivers/gpu/drm/ttm/ttm_bo.c | 112 +++++++++++++++++++++++--------------------
 1 file changed, 60 insertions(+), 52 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 202fc20..02b275b 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -486,14 +486,6 @@ static void ttm_bo_cleanup_memtype_use(struct ttm_buffer_object *bo)
 		bo->ttm = NULL;
 	}
 	ttm_bo_mem_put(bo, &bo->mem);
-
-	atomic_set(&bo->reserved, 0);
-
-	/*
-	 * Make processes trying to reserve really pick it up.
-	 */
-	smp_mb__after_atomic_dec();
-	wake_up_all(&bo->event_queue);
 }

 static void ttm_bo_cleanup_refs_or_queue(struct ttm_buffer_object *bo)
@@ -515,6 +507,9 @@ static void ttm_bo_cleanup_refs_or_queue(struct ttm_buffer_object *bo)
 	put_count = ttm_bo_del_from_lru(bo);
 	spin_unlock(&glob->lru_lock);

+	atomic_set(&bo->reserved, 0);
+	wake_up_all(&bo->event_queue);
+
I think (although I'm not 100% sure) that if we use atomic_set() to unreserve, and it's not followed by a spin_unlock(), we need to insert a memory barrier, as is done in the removed code above; otherwise memory operations protected by the reservation may be reordered past the unreserve.
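Something like this untested sketch (the helper name is made up, it's only to show where the barrier would sit) is what I mean:

static void ttm_bo_unreserve_no_lock(struct ttm_buffer_object *bo)
{
	/*
	 * Order all stores made while the buffer was reserved before
	 * the store that publishes it as unreserved. spin_unlock()
	 * would imply this; a plain atomic_set() does not.
	 */
	smp_mb();
	atomic_set(&bo->reserved, 0);
	wake_up_all(&bo->event_queue);
}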
Hm yeah, it looks like ttm_bo_cleanup_memtype_use probably needs an smp_mb() at the end now. The original smp_mb__after_atomic_dec() was a no-op, since the wake_up_all() call takes a spinlock too.
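I.e. roughly this (untested, just to show where the barrier would go at the end of ttm_bo_cleanup_memtype_use):

	ttm_bo_mem_put(bo, &bo->mem);

	/*
	 * Make the teardown above visible to other CPUs before the
	 * caller publishes the buffer as unreserved with
	 * atomic_set(&bo->reserved, 0).
	 */
	smp_mb();
}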
Thanks for catching it. I'll await the reply to my other email, then maybe reword, fix this and resend.
~Maarten