On 20.08.19 17:41, Daniel Vetter wrote:
On Tue, Aug 20, 2019 at 5:34 PM Koenig, Christian <Christian.Koenig@amd.com> wrote:
On 20.08.19 17:21, Daniel Vetter wrote:
On Tue, Aug 20, 2019 at 5:16 PM Koenig, Christian <Christian.Koenig@amd.com> wrote:
On 20.08.19 16:53, Daniel Vetter wrote:
With nouveau fixed, all ttm-using drivers have the correct nesting of mmap_sem vs dma_resv, and we can just lock the buffer.
Assuming I didn't screw up anything with my audit of course.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Christian Koenig <christian.koenig@amd.com>
Cc: Huang Rui <ray.huang@amd.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: "VMware Graphics" <linux-graphics-maintainer@vmware.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
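For context, a minimal sketch of the locking pattern the commit message relies on. The simplified_fault() below is purely illustrative, not the actual TTM code; the real change is the ttm_bo_vm_fault hunk in the diff further down.

static vm_fault_t simplified_fault(struct vm_fault *vmf)
{
        struct ttm_buffer_object *bo = vmf->vma->vm_private_data;

        /*
         * The fault handler already holds mmap_sem for read. With every
         * ttm driver now following the mmap_sem -> dma_resv nesting, the
         * handler can block on the reservation directly instead of
         * trylocking and bouncing through ttm_bo_wait_unreserved().
         */
        dma_resv_lock(bo->base.resv, NULL);

        /* ... fault in the pages while the reservation is held ... */

        dma_resv_unlock(bo->base.resv);
        return VM_FAULT_NOPAGE;
}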
Yes, please. But one more point: you can now remove bo->wu_mutex as well!
Ah right, totally forgot about that in my enthusiasm after all the auditing and fixing nouveau.
Apart from that, Reviewed-by: Christian König <christian.koenig@amd.com>
Thanks, I already respun the patches, so this will be in the next version. Can you pls also give this a spin on the amdgpu CI, just to make sure it's all fine? With full lockdep ofc. And then reply with a t-b.
I can ask for this on our call tomorrow, but I fear our CI infrastructure is not ready yet.
I thought you had some internal branch you all commit amdgpu stuff to, and then Alex goes and collects the pieces that are ready?
No, that part is correct. The problem is that this branch is not QA tested regularly as far as I know.
Or does that all blow up if you push a patch which touches code outside of the dkms?
No, but the problem is related to that.
See, the release branches for dkms are separate and indeed QA tested regularly.
But changes from amd-staging-drm-next are only cherry-picked into those at certain intervals.
Well, going to discuss that tomorrow, Christian.
-Daniel
Christian.
Thanks, Daniel
 drivers/gpu/drm/ttm/ttm_bo.c    | 34 ---------------------------------
 drivers/gpu/drm/ttm/ttm_bo_vm.c | 26 +-------------------------
 include/drm/ttm/ttm_bo_api.h    |  1 -
 3 files changed, 1 insertion(+), 60 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 20ff56f27aa4..a952dd624b06 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -1954,37 +1954,3 @@ void ttm_bo_swapout_all(struct ttm_bo_device *bdev)
                 ;
 }
 EXPORT_SYMBOL(ttm_bo_swapout_all);
-
-/**
- * ttm_bo_wait_unreserved - interruptible wait for a buffer object to become
- * unreserved
- *
- * @bo: Pointer to buffer
- */
-int ttm_bo_wait_unreserved(struct ttm_buffer_object *bo)
-{
-        int ret;
-
-        /*
-         * In the absense of a wait_unlocked API,
-         * Use the bo::wu_mutex to avoid triggering livelocks due to
-         * concurrent use of this function. Note that this use of
-         * bo::wu_mutex can go away if we change locking order to
-         * mmap_sem -> bo::reserve.
-         */
-        ret = mutex_lock_interruptible(&bo->wu_mutex);
-        if (unlikely(ret != 0))
-                return -ERESTARTSYS;
-        if (!dma_resv_is_locked(bo->base.resv))
-                goto out_unlock;
-        ret = dma_resv_lock_interruptible(bo->base.resv, NULL);
-        if (ret == -EINTR)
-                ret = -ERESTARTSYS;
-        if (unlikely(ret != 0))
-                goto out_unlock;
-        dma_resv_unlock(bo->base.resv);
-
-out_unlock:
-        mutex_unlock(&bo->wu_mutex);
-        return ret;
-}
diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index 76eedb963693..505e1787aeea 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -125,31 +125,7 @@ static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
                 &bdev->man[bo->mem.mem_type];
         struct vm_area_struct cvma;
 
-        /*
-         * Work around locking order reversal in fault / nopfn
-         * between mmap_sem and bo_reserve: Perform a trylock operation
-         * for reserve, and if it fails, retry the fault after waiting
-         * for the buffer to become unreserved.
-         */
-        if (unlikely(!dma_resv_trylock(bo->base.resv))) {
-                if (vmf->flags & FAULT_FLAG_ALLOW_RETRY) {
-                        if (!(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) {
-                                ttm_bo_get(bo);
-                                up_read(&vmf->vma->vm_mm->mmap_sem);
-                                (void) ttm_bo_wait_unreserved(bo);
-                                ttm_bo_put(bo);
-                        }
-
-                        return VM_FAULT_RETRY;
-                }
-
-                /*
-                 * If we'd want to change locking order to
-                 * mmap_sem -> bo::reserve, we'd use a blocking reserve here
-                 * instead of retrying the fault...
-                 */
-                return VM_FAULT_NOPAGE;
-        }
+        dma_resv_lock(bo->base.resv, NULL);
 
         /*
          * Refuse to fault imported pages. This should be handled
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index 43c4929a2171..6b50e624e3e2 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -765,7 +765,6 @@ ssize_t ttm_bo_io(struct ttm_bo_device *bdev, struct file *filp,
 int ttm_bo_swapout(struct ttm_bo_global *glob,
                         struct ttm_operation_ctx *ctx);
 void ttm_bo_swapout_all(struct ttm_bo_device *bdev);
-int ttm_bo_wait_unreserved(struct ttm_buffer_object *bo);
 
 /**
  * ttm_bo_uses_embedded_gem_object - check if the given bo uses the
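As a follow-up to Christian's bo->wu_mutex point above: with ttm_bo_wait_unreserved() gone, the cleanup would roughly amount to dropping the member and its initialization. A hedged sketch only — locations recalled from the tree around this time, not verified here:

/* struct ttm_buffer_object, include/drm/ttm/ttm_bo_api.h: drop the member */
-        struct mutex wu_mutex;

/* ttm_bo_init_reserved(), drivers/gpu/drm/ttm/ttm_bo.c: drop the init */
-        mutex_init(&bo->wu_mutex);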