Hi everyone,
attached are six TTM patches and one amdgpu patch I came up with while working on a new TTM feature.
The first patch is a bug fix. TTM always waited for a BO to be idle when initially creating it. Previously that didn't matter much because a newly created BO should always be idle, but with the possibility of importing DMA-buf reservation objects into a TTM BO that sometimes isn't the case any more.
Patches 2, 3 and 4 are just cleanups, e.g. removal of unused parameters and variables.
Patches 5 and 6 now finally implement the new LRU features in TTM. Basically we want to give the driver more fine-grained control over the order in which BOs are added to the LRU.
For this, patch 5 implements an optional LRU removal callback, giving the driver the chance to update its internal LRU handling as well.
Patch 6 adds two callback functions which return the list head after which a BO should be inserted into the LRU. Default implementations are provided and wired up in all drivers with this patch.
The amdgpu patch uses the new callbacks to implement an LRU order in which VM page table BOs always come behind everything else. I don't actually intend to commit this patch, but it was useful for testing.
In the end we want to use this LRU order to place the BOs of HSA applications behind the graphics allocations, because preempting HSA processes under resource shortage causes a lot of overhead.
Another use case Alex suggested is sorting the BOs in the LRU by size, so that we don't swap out a 200MB BO to make room for a 1MB allocation; a rough sketch of such a callback is shown below.
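As a purely illustrative sketch (not part of this series), a size-aware lru_tail callback built on the interface from patch 6 could look roughly like the following. The function name is invented, the loop ignores recency entirely, and it assumes TTM calls the hook with the global LRU lock already held (as it does when adding a BO to the LRU):

#include <linux/list.h>
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>

/*
 * Editorial sketch only: keep each memory type's LRU roughly ordered by
 * size, smallest BOs towards the head. Eviction starts at the head, so a
 * small allocation would evict other small BOs before touching a big one.
 */
static struct list_head *mydrv_size_sorted_lru_tail(struct ttm_buffer_object *bo)
{
        struct ttm_mem_type_manager *man = &bo->bdev->man[bo->mem.mem_type];
        struct ttm_buffer_object *cur;

        /* Walk from the tail towards the head and insert the new BO after
         * the last entry that is not larger than it. */
        list_for_each_entry_reverse(cur, &man->lru, lru) {
                if (cur->num_pages <= bo->num_pages)
                        return &cur->lru;
        }

        /* Everything on the list is larger, insert right behind the head. */
        return &man->lru;
}

Since TTM then simply does list_add(&bo->lru, driver->lru_tail(bo)), a real driver would have to balance size against recency, but it shows how much policy the two new hooks move into the driver.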
Please review and comment, Christian.
From: Christian König christian.koenig@amd.com
When we use an external reservation object, we would otherwise wait for every fence registered with it.
Signed-off-by: Christian König christian.koenig@amd.com Reviewed-by: Alex Deucher alexander.deucher@amd.com --- drivers/gpu/drm/ttm/ttm_bo.c | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 4cbf265..367b87b 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -998,13 +998,19 @@ static int ttm_bo_move_buffer(struct ttm_buffer_object *bo, lockdep_assert_held(&bo->resv->lock.base);
 	/*
-	 * FIXME: It's possible to pipeline buffer moves.
-	 * Have the driver move function wait for idle when necessary,
-	 * instead of doing it here.
+	 * Don't wait for the BO on initial allocation. This is important when
+	 * the BO has an imported reservation object.
 	 */
-	ret = ttm_bo_wait(bo, false, interruptible, no_wait_gpu);
-	if (ret)
-		return ret;
+	if (bo->mem.mem_type != TTM_PL_SYSTEM || bo->ttm != NULL) {
+		/*
+		 * FIXME: It's possible to pipeline buffer moves.
+		 * Have the driver move function wait for idle when necessary,
+		 * instead of doing it here.
+		 */
+		ret = ttm_bo_wait(bo, false, interruptible, no_wait_gpu);
+		if (ret)
+			return ret;
+	}
 	mem.num_pages = bo->num_pages;
 	mem.size = mem.num_pages << PAGE_SHIFT;
 	mem.page_alignment = bo->mem.page_alignment;
I don't know much about AMD gpu. Patches 1-6 look good to me.
On Wed, Apr 06, 2016 at 11:12:02AM +0200, Christian König wrote:
From: Christian König christian.koenig@amd.com
When we use an external reservation object, we would otherwise wait for every fence registered with it.
Signed-off-by: Christian König christian.koenig@amd.com Reviewed-by: Alex Deucher alexander.deucher@amd.com
drivers/gpu/drm/ttm/ttm_bo.c | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 4cbf265..367b87b 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -998,13 +998,19 @@ static int ttm_bo_move_buffer(struct ttm_buffer_object *bo, lockdep_assert_held(&bo->resv->lock.base);
 	/*
-	 * FIXME: It's possible to pipeline buffer moves.
-	 * Have the driver move function wait for idle when necessary,
-	 * instead of doing it here.
+	 * Don't wait for the BO on initial allocation. This is important when
+	 * the BO has an imported reservation object.
 	 */
-	ret = ttm_bo_wait(bo, false, interruptible, no_wait_gpu);
-	if (ret)
-		return ret;
+	if (bo->mem.mem_type != TTM_PL_SYSTEM || bo->ttm != NULL) {
+		/*
+		 * FIXME: It's possible to pipeline buffer moves.
+		 * Have the driver move function wait for idle when necessary,
+		 * instead of doing it here.
+		 */
+		ret = ttm_bo_wait(bo, false, interruptible, no_wait_gpu);
+		if (ret)
+			return ret;
+	}
 	mem.num_pages = bo->num_pages;
 	mem.size = mem.num_pages << PAGE_SHIFT;
 	mem.page_alignment = bo->mem.page_alignment;
-- 2.5.0
dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Am 06.04.2016 um 18:06 schrieb Sinclair Yeh:
I don't know much about AMD gpu. Patches 1-6 look good to me.
Does that count as a Reviewed-by or at least Acked-by?
Thanks for taking a look, Christian.
On Wed, Apr 06, 2016 at 11:12:02AM +0200, Christian König wrote:
From: Christian König christian.koenig@amd.com
When we use an external reservation object, we would otherwise wait for every fence registered with it.
Signed-off-by: Christian König christian.koenig@amd.com Reviewed-by: Alex Deucher alexander.deucher@amd.com
drivers/gpu/drm/ttm/ttm_bo.c | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 4cbf265..367b87b 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -998,13 +998,19 @@ static int ttm_bo_move_buffer(struct ttm_buffer_object *bo, lockdep_assert_held(&bo->resv->lock.base);
 	/*
-	 * FIXME: It's possible to pipeline buffer moves.
-	 * Have the driver move function wait for idle when necessary,
-	 * instead of doing it here.
+	 * Don't wait for the BO on initial allocation. This is important when
+	 * the BO has an imported reservation object.
 	 */
-	ret = ttm_bo_wait(bo, false, interruptible, no_wait_gpu);
-	if (ret)
-		return ret;
+	if (bo->mem.mem_type != TTM_PL_SYSTEM || bo->ttm != NULL) {
+		/*
+		 * FIXME: It's possible to pipeline buffer moves.
+		 * Have the driver move function wait for idle when necessary,
+		 * instead of doing it here.
+		 */
+		ret = ttm_bo_wait(bo, false, interruptible, no_wait_gpu);
+		if (ret)
+			return ret;
+	}
 	mem.num_pages = bo->num_pages;
 	mem.size = mem.num_pages << PAGE_SHIFT;
 	mem.page_alignment = bo->mem.page_alignment;
-- 2.5.0
dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
rb for 1-6.
I can't really comment on 7.
On Wed, Apr 13, 2016 at 07:38:22PM +0200, Christian König wrote:
Am 06.04.2016 um 18:06 schrieb Sinclair Yeh:
I don't know much about AMD gpu. Patches 1-6 look good to me.
Does that count as a Reviewed-by or at least Acked-by?
Thanks for taking a look, Christian.
On Wed, Apr 06, 2016 at 11:12:02AM +0200, Christian König wrote:
From: Christian König christian.koenig@amd.com
When we use an external reservation object, we would otherwise wait for every fence registered with it.
Signed-off-by: Christian König christian.koenig@amd.com Reviewed-by: Alex Deucher alexander.deucher@amd.com
drivers/gpu/drm/ttm/ttm_bo.c | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 4cbf265..367b87b 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -998,13 +998,19 @@ static int ttm_bo_move_buffer(struct ttm_buffer_object *bo, lockdep_assert_held(&bo->resv->lock.base);
 	/*
-	 * FIXME: It's possible to pipeline buffer moves.
-	 * Have the driver move function wait for idle when necessary,
-	 * instead of doing it here.
+	 * Don't wait for the BO on initial allocation. This is important when
+	 * the BO has an imported reservation object.
 	 */
-	ret = ttm_bo_wait(bo, false, interruptible, no_wait_gpu);
-	if (ret)
-		return ret;
+	if (bo->mem.mem_type != TTM_PL_SYSTEM || bo->ttm != NULL) {
+		/*
+		 * FIXME: It's possible to pipeline buffer moves.
+		 * Have the driver move function wait for idle when necessary,
+		 * instead of doing it here.
+		 */
+		ret = ttm_bo_wait(bo, false, interruptible, no_wait_gpu);
+		if (ret)
+			return ret;
+	}
 	mem.num_pages = bo->num_pages;
 	mem.size = mem.num_pages << PAGE_SHIFT;
 	mem.page_alignment = bo->mem.page_alignment;
-- 2.5.0
dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
From: Christian König christian.koenig@amd.com
Not used any more.
Signed-off-by: Christian König christian.koenig@amd.com Reviewed-by: Alex Deucher alexander.deucher@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 2 +- drivers/gpu/drm/ast/ast_drv.h | 2 +- drivers/gpu/drm/bochs/bochs_fbdev.c | 2 +- drivers/gpu/drm/bochs/bochs_kms.c | 4 ++-- drivers/gpu/drm/cirrus/cirrus_drv.h | 2 +- drivers/gpu/drm/mgag200/mgag200_drv.h | 2 +- drivers/gpu/drm/nouveau/nouveau_bo.c | 6 +++--- drivers/gpu/drm/nouveau/nouveau_display.c | 4 ++-- drivers/gpu/drm/nouveau/nouveau_gem.c | 6 +++--- drivers/gpu/drm/qxl/qxl_object.h | 4 ++-- drivers/gpu/drm/radeon/radeon_object.c | 2 +- drivers/gpu/drm/radeon/radeon_object.h | 2 +- drivers/gpu/drm/ttm/ttm_bo.c | 17 ++++++++--------- drivers/gpu/drm/ttm/ttm_bo_vm.c | 2 +- drivers/gpu/drm/ttm/ttm_execbuf_util.c | 3 +-- drivers/gpu/drm/virtio/virtgpu_drv.h | 2 +- drivers/gpu/drm/virtio/virtgpu_object.c | 2 +- drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c | 2 +- drivers/gpu/drm/vmwgfx/vmwgfx_dmabuf.c | 8 ++++---- drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 2 +- drivers/gpu/drm/vmwgfx/vmwgfx_kms.c | 6 +++--- drivers/gpu/drm/vmwgfx/vmwgfx_mob.c | 12 ++++++------ drivers/gpu/drm/vmwgfx/vmwgfx_resource.c | 7 +++---- drivers/gpu/drm/vmwgfx/vmwgfx_shader.c | 2 +- include/drm/ttm/ttm_bo_driver.h | 14 +++++--------- 25 files changed, 55 insertions(+), 62 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h index acc0801..bdb01d9 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h @@ -71,7 +71,7 @@ static inline int amdgpu_bo_reserve(struct amdgpu_bo *bo, bool no_intr) { int r;
- r = ttm_bo_reserve(&bo->tbo, !no_intr, false, false, 0); + r = ttm_bo_reserve(&bo->tbo, !no_intr, false, NULL); if (unlikely(r != 0)) { if (r != -ERESTARTSYS) dev_err(bo->adev->dev, "%p reserve failed\n", bo); diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h index eb57159..908011d 100644 --- a/drivers/gpu/drm/ast/ast_drv.h +++ b/drivers/gpu/drm/ast/ast_drv.h @@ -367,7 +367,7 @@ static inline int ast_bo_reserve(struct ast_bo *bo, bool no_wait) { int ret;
- ret = ttm_bo_reserve(&bo->bo, true, no_wait, false, NULL); + ret = ttm_bo_reserve(&bo->bo, true, no_wait, NULL); if (ret) { if (ret != -ERESTARTSYS && ret != -EBUSY) DRM_ERROR("reserve failed %p\n", bo); diff --git a/drivers/gpu/drm/bochs/bochs_fbdev.c b/drivers/gpu/drm/bochs/bochs_fbdev.c index 7520bf8..815c185 100644 --- a/drivers/gpu/drm/bochs/bochs_fbdev.c +++ b/drivers/gpu/drm/bochs/bochs_fbdev.c @@ -82,7 +82,7 @@ static int bochsfb_create(struct drm_fb_helper *helper,
bo = gem_to_bochs_bo(gobj);
- ret = ttm_bo_reserve(&bo->bo, true, false, false, NULL); + ret = ttm_bo_reserve(&bo->bo, true, false, NULL); if (ret) return ret;
diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c index 96926f0..22d7055 100644 --- a/drivers/gpu/drm/bochs/bochs_kms.c +++ b/drivers/gpu/drm/bochs/bochs_kms.c @@ -43,7 +43,7 @@ static int bochs_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y, if (old_fb) { bochs_fb = to_bochs_framebuffer(old_fb); bo = gem_to_bochs_bo(bochs_fb->obj); - ret = ttm_bo_reserve(&bo->bo, true, false, false, NULL); + ret = ttm_bo_reserve(&bo->bo, true, false, NULL); if (ret) { DRM_ERROR("failed to reserve old_fb bo\n"); } else { @@ -57,7 +57,7 @@ static int bochs_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y,
bochs_fb = to_bochs_framebuffer(crtc->primary->fb); bo = gem_to_bochs_bo(bochs_fb->obj); - ret = ttm_bo_reserve(&bo->bo, true, false, false, NULL); + ret = ttm_bo_reserve(&bo->bo, true, false, NULL); if (ret) return ret;
diff --git a/drivers/gpu/drm/cirrus/cirrus_drv.h b/drivers/gpu/drm/cirrus/cirrus_drv.h index b774d63..2188d6b 100644 --- a/drivers/gpu/drm/cirrus/cirrus_drv.h +++ b/drivers/gpu/drm/cirrus/cirrus_drv.h @@ -245,7 +245,7 @@ static inline int cirrus_bo_reserve(struct cirrus_bo *bo, bool no_wait) { int ret;
- ret = ttm_bo_reserve(&bo->bo, true, no_wait, false, NULL); + ret = ttm_bo_reserve(&bo->bo, true, no_wait, NULL); if (ret) { if (ret != -ERESTARTSYS && ret != -EBUSY) DRM_ERROR("reserve failed %p\n", bo); diff --git a/drivers/gpu/drm/mgag200/mgag200_drv.h b/drivers/gpu/drm/mgag200/mgag200_drv.h index 205b280..3e02ac2 100644 --- a/drivers/gpu/drm/mgag200/mgag200_drv.h +++ b/drivers/gpu/drm/mgag200/mgag200_drv.h @@ -281,7 +281,7 @@ static inline int mgag200_bo_reserve(struct mgag200_bo *bo, bool no_wait) { int ret;
- ret = ttm_bo_reserve(&bo->bo, true, no_wait, false, NULL); + ret = ttm_bo_reserve(&bo->bo, true, no_wait, NULL); if (ret) { if (ret != -ERESTARTSYS && ret != -EBUSY) DRM_ERROR("reserve failed %p\n", bo); diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c index 2cdaea5..ea89286 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.c +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c @@ -312,7 +312,7 @@ nouveau_bo_pin(struct nouveau_bo *nvbo, uint32_t memtype, bool contig) bool force = false, evict = false; int ret;
- ret = ttm_bo_reserve(bo, false, false, false, NULL); + ret = ttm_bo_reserve(bo, false, false, NULL); if (ret) return ret;
@@ -385,7 +385,7 @@ nouveau_bo_unpin(struct nouveau_bo *nvbo) struct ttm_buffer_object *bo = &nvbo->bo; int ret, ref;
- ret = ttm_bo_reserve(bo, false, false, false, NULL); + ret = ttm_bo_reserve(bo, false, false, NULL); if (ret) return ret;
@@ -420,7 +420,7 @@ nouveau_bo_map(struct nouveau_bo *nvbo) { int ret;
- ret = ttm_bo_reserve(&nvbo->bo, false, false, false, NULL); + ret = ttm_bo_reserve(&nvbo->bo, false, false, NULL); if (ret) return ret;
diff --git a/drivers/gpu/drm/nouveau/nouveau_display.c b/drivers/gpu/drm/nouveau/nouveau_display.c index 7ce7fa5..844ab1f 100644 --- a/drivers/gpu/drm/nouveau/nouveau_display.c +++ b/drivers/gpu/drm/nouveau/nouveau_display.c @@ -739,7 +739,7 @@ nouveau_crtc_page_flip(struct drm_crtc *crtc, struct drm_framebuffer *fb, }
mutex_lock(&cli->mutex); - ret = ttm_bo_reserve(&new_bo->bo, true, false, false, NULL); + ret = ttm_bo_reserve(&new_bo->bo, true, false, NULL); if (ret) goto fail_unpin;
@@ -753,7 +753,7 @@ nouveau_crtc_page_flip(struct drm_crtc *crtc, struct drm_framebuffer *fb, if (new_bo != old_bo) { ttm_bo_unreserve(&new_bo->bo);
- ret = ttm_bo_reserve(&old_bo->bo, true, false, false, NULL); + ret = ttm_bo_reserve(&old_bo->bo, true, false, NULL); if (ret) goto fail_unpin; } diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c index a0865c4..8d64b65 100644 --- a/drivers/gpu/drm/nouveau/nouveau_gem.c +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c @@ -71,7 +71,7 @@ nouveau_gem_object_open(struct drm_gem_object *gem, struct drm_file *file_priv) if (!cli->vm) return 0;
- ret = ttm_bo_reserve(&nvbo->bo, false, false, false, NULL); + ret = ttm_bo_reserve(&nvbo->bo, false, false, NULL); if (ret) return ret;
@@ -156,7 +156,7 @@ nouveau_gem_object_close(struct drm_gem_object *gem, struct drm_file *file_priv) if (!cli->vm) return;
- ret = ttm_bo_reserve(&nvbo->bo, false, false, false, NULL); + ret = ttm_bo_reserve(&nvbo->bo, false, false, NULL); if (ret) return;
@@ -409,7 +409,7 @@ retry: break; }
- ret = ttm_bo_reserve(&nvbo->bo, true, false, true, &op->ticket); + ret = ttm_bo_reserve(&nvbo->bo, true, false, &op->ticket); if (ret) { list_splice_tail_init(&vram_list, &op->list); list_splice_tail_init(&gart_list, &op->list); diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h index 37af1bc..483f131 100644 --- a/drivers/gpu/drm/qxl/qxl_object.h +++ b/drivers/gpu/drm/qxl/qxl_object.h @@ -31,7 +31,7 @@ static inline int qxl_bo_reserve(struct qxl_bo *bo, bool no_wait) { int r;
- r = ttm_bo_reserve(&bo->tbo, true, no_wait, false, NULL); + r = ttm_bo_reserve(&bo->tbo, true, no_wait, NULL); if (unlikely(r != 0)) { if (r != -ERESTARTSYS) { struct qxl_device *qdev = (struct qxl_device *)bo->gem_base.dev->dev_private; @@ -67,7 +67,7 @@ static inline int qxl_bo_wait(struct qxl_bo *bo, u32 *mem_type, { int r;
- r = ttm_bo_reserve(&bo->tbo, true, no_wait, false, NULL); + r = ttm_bo_reserve(&bo->tbo, true, no_wait, NULL); if (unlikely(r != 0)) { if (r != -ERESTARTSYS) { struct qxl_device *qdev = (struct qxl_device *)bo->gem_base.dev->dev_private; diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c index 2d901bf..7e0c16c 100644 --- a/drivers/gpu/drm/radeon/radeon_object.c +++ b/drivers/gpu/drm/radeon/radeon_object.c @@ -832,7 +832,7 @@ int radeon_bo_wait(struct radeon_bo *bo, u32 *mem_type, bool no_wait) { int r;
- r = ttm_bo_reserve(&bo->tbo, true, no_wait, false, NULL); + r = ttm_bo_reserve(&bo->tbo, true, no_wait, NULL); if (unlikely(r != 0)) return r; if (mem_type) diff --git a/drivers/gpu/drm/radeon/radeon_object.h b/drivers/gpu/drm/radeon/radeon_object.h index d8d295e..a10bb3d 100644 --- a/drivers/gpu/drm/radeon/radeon_object.h +++ b/drivers/gpu/drm/radeon/radeon_object.h @@ -65,7 +65,7 @@ static inline int radeon_bo_reserve(struct radeon_bo *bo, bool no_intr) { int r;
- r = ttm_bo_reserve(&bo->tbo, !no_intr, false, false, NULL); + r = ttm_bo_reserve(&bo->tbo, !no_intr, false, NULL); if (unlikely(r != 0)) { if (r != -ERESTARTSYS) dev_err(bo->rdev->dev, "%p reserve failed\n", bo); diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 367b87b..c935825 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -452,7 +452,7 @@ static void ttm_bo_cleanup_refs_or_queue(struct ttm_buffer_object *bo) int ret;
spin_lock(&glob->lru_lock); - ret = __ttm_bo_reserve(bo, false, true, false, NULL); + ret = __ttm_bo_reserve(bo, false, true, NULL);
if (!ret) { if (!ttm_bo_wait(bo, false, false, true)) { @@ -526,7 +526,7 @@ static int ttm_bo_cleanup_refs_and_unlock(struct ttm_buffer_object *bo, return -EBUSY;
spin_lock(&glob->lru_lock); - ret = __ttm_bo_reserve(bo, false, true, false, NULL); + ret = __ttm_bo_reserve(bo, false, true, NULL);
/* * We raced, and lost, someone else holds the reservation now, @@ -595,11 +595,10 @@ static int ttm_bo_delayed_delete(struct ttm_bo_device *bdev, bool remove_all) kref_get(&nentry->list_kref); }
- ret = __ttm_bo_reserve(entry, false, true, false, NULL); + ret = __ttm_bo_reserve(entry, false, true, NULL); if (remove_all && ret) { spin_unlock(&glob->lru_lock); - ret = __ttm_bo_reserve(entry, false, false, - false, NULL); + ret = __ttm_bo_reserve(entry, false, false, NULL); spin_lock(&glob->lru_lock); }
@@ -741,7 +740,7 @@ static int ttm_mem_evict_first(struct ttm_bo_device *bdev,
spin_lock(&glob->lru_lock); list_for_each_entry(bo, &man->lru, lru) { - ret = __ttm_bo_reserve(bo, false, true, false, NULL); + ret = __ttm_bo_reserve(bo, false, true, NULL); if (!ret) { if (place && (place->fpfn || place->lpfn)) { /* Don't evict this BO if it's outside of the @@ -1624,7 +1623,7 @@ int ttm_bo_synccpu_write_grab(struct ttm_buffer_object *bo, bool no_wait) * Using ttm_bo_reserve makes sure the lru lists are updated. */
- ret = ttm_bo_reserve(bo, true, no_wait, false, NULL); + ret = ttm_bo_reserve(bo, true, no_wait, NULL); if (unlikely(ret != 0)) return ret; ret = ttm_bo_wait(bo, false, true, no_wait); @@ -1657,7 +1656,7 @@ static int ttm_bo_swapout(struct ttm_mem_shrink *shrink)
spin_lock(&glob->lru_lock); list_for_each_entry(bo, &glob->swap_lru, swap) { - ret = __ttm_bo_reserve(bo, false, true, false, NULL); + ret = __ttm_bo_reserve(bo, false, true, NULL); if (!ret) break; } @@ -1756,7 +1755,7 @@ int ttm_bo_wait_unreserved(struct ttm_buffer_object *bo) return -ERESTARTSYS; if (!ww_mutex_is_locked(&bo->resv->lock)) goto out_unlock; - ret = __ttm_bo_reserve(bo, true, false, false, NULL); + ret = __ttm_bo_reserve(bo, true, false, NULL); if (unlikely(ret != 0)) goto out_unlock; __ttm_bo_unreserve(bo); diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c index 06d26dc..dbd8d58 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c @@ -108,7 +108,7 @@ static int ttm_bo_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf) * for reserve, and if it fails, retry the fault after waiting * for the buffer to become unreserved. */ - ret = ttm_bo_reserve(bo, true, true, false, NULL); + ret = ttm_bo_reserve(bo, true, true, NULL); if (unlikely(ret != 0)) { if (ret != -EBUSY) return VM_FAULT_NOPAGE; diff --git a/drivers/gpu/drm/ttm/ttm_execbuf_util.c b/drivers/gpu/drm/ttm/ttm_execbuf_util.c index 3820ae9..a80717b 100644 --- a/drivers/gpu/drm/ttm/ttm_execbuf_util.c +++ b/drivers/gpu/drm/ttm/ttm_execbuf_util.c @@ -112,8 +112,7 @@ int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket, list_for_each_entry(entry, list, head) { struct ttm_buffer_object *bo = entry->bo;
- ret = __ttm_bo_reserve(bo, intr, (ticket == NULL), true, - ticket); + ret = __ttm_bo_reserve(bo, intr, (ticket == NULL), ticket); if (!ret && unlikely(atomic_read(&bo->cpu_writers) > 0)) { __ttm_bo_unreserve(bo);
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h index 8f486f4..0a54f43 100644 --- a/drivers/gpu/drm/virtio/virtgpu_drv.h +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h @@ -400,7 +400,7 @@ static inline int virtio_gpu_object_reserve(struct virtio_gpu_object *bo, { int r;
- r = ttm_bo_reserve(&bo->tbo, true, no_wait, false, NULL); + r = ttm_bo_reserve(&bo->tbo, true, no_wait, NULL); if (unlikely(r != 0)) { if (r != -ERESTARTSYS) { struct virtio_gpu_device *qdev = diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c index f300eba..208a1fd 100644 --- a/drivers/gpu/drm/virtio/virtgpu_object.c +++ b/drivers/gpu/drm/virtio/virtgpu_object.c @@ -155,7 +155,7 @@ int virtio_gpu_object_wait(struct virtio_gpu_object *bo, bool no_wait) { int r;
- r = ttm_bo_reserve(&bo->tbo, true, no_wait, false, NULL); + r = ttm_bo_reserve(&bo->tbo, true, no_wait, NULL); if (unlikely(r != 0)) return r; r = ttm_bo_wait(&bo->tbo, true, true, no_wait); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c index 092ea81..ddb3dd9 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c @@ -421,7 +421,7 @@ static int vmw_cotable_resize(struct vmw_resource *res, size_t new_size) }
bo = &buf->base; - WARN_ON_ONCE(ttm_bo_reserve(bo, false, true, false, NULL)); + WARN_ON_ONCE(ttm_bo_reserve(bo, false, true, NULL));
ret = ttm_bo_wait(old_bo, false, false, false); if (unlikely(ret != 0)) { diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_dmabuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_dmabuf.c index 299925a..9b078a4 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_dmabuf.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_dmabuf.c @@ -56,7 +56,7 @@ int vmw_dmabuf_pin_in_placement(struct vmw_private *dev_priv,
vmw_execbuf_release_pinned_bo(dev_priv);
- ret = ttm_bo_reserve(bo, interruptible, false, false, NULL); + ret = ttm_bo_reserve(bo, interruptible, false, NULL); if (unlikely(ret != 0)) goto err;
@@ -98,7 +98,7 @@ int vmw_dmabuf_pin_in_vram_or_gmr(struct vmw_private *dev_priv,
vmw_execbuf_release_pinned_bo(dev_priv);
- ret = ttm_bo_reserve(bo, interruptible, false, false, NULL); + ret = ttm_bo_reserve(bo, interruptible, false, NULL); if (unlikely(ret != 0)) goto err;
@@ -174,7 +174,7 @@ int vmw_dmabuf_pin_in_start_of_vram(struct vmw_private *dev_priv, return ret;
vmw_execbuf_release_pinned_bo(dev_priv); - ret = ttm_bo_reserve(bo, interruptible, false, false, NULL); + ret = ttm_bo_reserve(bo, interruptible, false, NULL); if (unlikely(ret != 0)) goto err_unlock;
@@ -225,7 +225,7 @@ int vmw_dmabuf_unpin(struct vmw_private *dev_priv, if (unlikely(ret != 0)) return ret;
- ret = ttm_bo_reserve(bo, interruptible, false, false, NULL); + ret = ttm_bo_reserve(bo, interruptible, false, NULL); if (unlikely(ret != 0)) goto err;
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c index 6cbb7d4..f8cd462 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c @@ -326,7 +326,7 @@ static int vmw_dummy_query_bo_create(struct vmw_private *dev_priv) if (unlikely(ret != 0)) return ret;
- ret = ttm_bo_reserve(&vbo->base, false, true, false, NULL); + ret = ttm_bo_reserve(&vbo->base, false, true, NULL); BUG_ON(ret != 0); vmw_bo_pin_reserved(vbo, true);
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c index 4742ec4..fc20d45 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c @@ -98,7 +98,7 @@ int vmw_cursor_update_dmabuf(struct vmw_private *dev_priv, kmap_offset = 0; kmap_num = (width*height*4 + PAGE_SIZE - 1) >> PAGE_SHIFT;
- ret = ttm_bo_reserve(&dmabuf->base, true, false, false, NULL); + ret = ttm_bo_reserve(&dmabuf->base, true, false, NULL); if (unlikely(ret != 0)) { DRM_ERROR("reserve failed\n"); return -EINVAL; @@ -318,7 +318,7 @@ void vmw_kms_cursor_snoop(struct vmw_surface *srf, kmap_offset = cmd->dma.guest.ptr.offset >> PAGE_SHIFT; kmap_num = (64*64*4) >> PAGE_SHIFT;
- ret = ttm_bo_reserve(bo, true, false, false, NULL); + ret = ttm_bo_reserve(bo, true, false, NULL); if (unlikely(ret != 0)) { DRM_ERROR("reserve failed\n"); return; @@ -1859,7 +1859,7 @@ int vmw_kms_helper_buffer_prepare(struct vmw_private *dev_priv, struct ttm_buffer_object *bo = &buf->base; int ret;
- ttm_bo_reserve(bo, false, false, interruptible, NULL); + ttm_bo_reserve(bo, false, false, NULL); ret = vmw_validate_single_buffer(dev_priv, bo, interruptible, validate_as_mob); if (ret) diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c b/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c index 23db160..b6126a5 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c @@ -222,7 +222,7 @@ static void vmw_takedown_otable_base(struct vmw_private *dev_priv, if (bo) { int ret;
- ret = ttm_bo_reserve(bo, false, true, false, NULL); + ret = ttm_bo_reserve(bo, false, true, NULL); BUG_ON(ret != 0);
vmw_fence_single_bo(bo, NULL); @@ -262,7 +262,7 @@ static int vmw_otable_batch_setup(struct vmw_private *dev_priv, if (unlikely(ret != 0)) goto out_no_bo;
- ret = ttm_bo_reserve(batch->otable_bo, false, true, false, NULL); + ret = ttm_bo_reserve(batch->otable_bo, false, true, NULL); BUG_ON(ret != 0); ret = vmw_bo_driver.ttm_tt_populate(batch->otable_bo->ttm); if (unlikely(ret != 0)) @@ -357,7 +357,7 @@ static void vmw_otable_batch_takedown(struct vmw_private *dev_priv, vmw_takedown_otable_base(dev_priv, i, &batch->otables[i]);
- ret = ttm_bo_reserve(bo, false, true, false, NULL); + ret = ttm_bo_reserve(bo, false, true, NULL); BUG_ON(ret != 0);
vmw_fence_single_bo(bo, NULL); @@ -440,7 +440,7 @@ static int vmw_mob_pt_populate(struct vmw_private *dev_priv, if (unlikely(ret != 0)) return ret;
- ret = ttm_bo_reserve(mob->pt_bo, false, true, false, NULL); + ret = ttm_bo_reserve(mob->pt_bo, false, true, NULL);
BUG_ON(ret != 0); ret = vmw_bo_driver.ttm_tt_populate(mob->pt_bo->ttm); @@ -545,7 +545,7 @@ static void vmw_mob_pt_setup(struct vmw_mob *mob, const struct vmw_sg_table *vsgt; int ret;
- ret = ttm_bo_reserve(bo, false, true, false, NULL); + ret = ttm_bo_reserve(bo, false, true, NULL); BUG_ON(ret != 0);
vsgt = vmw_bo_sg_table(bo); @@ -595,7 +595,7 @@ void vmw_mob_unbind(struct vmw_private *dev_priv, struct ttm_buffer_object *bo = mob->pt_bo;
if (bo) { - ret = ttm_bo_reserve(bo, false, true, false, NULL); + ret = ttm_bo_reserve(bo, false, true, NULL); /* * Noone else should be using this buffer. */ diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c index e57667c..9608d33a 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c @@ -129,7 +129,7 @@ static void vmw_resource_release(struct kref *kref) if (res->backup) { struct ttm_buffer_object *bo = &res->backup->base;
- ttm_bo_reserve(bo, false, false, false, NULL); + ttm_bo_reserve(bo, false, false, NULL); if (!list_empty(&res->mob_head) && res->func->unbind != NULL) { struct ttm_validate_buffer val_buf; @@ -1717,8 +1717,7 @@ int vmw_resource_pin(struct vmw_resource *res, bool interruptible) if (res->backup) { vbo = res->backup;
- ttm_bo_reserve(&vbo->base, interruptible, false, false, - NULL); + ttm_bo_reserve(&vbo->base, interruptible, false, NULL); if (!vbo->pin_count) { ret = ttm_bo_validate (&vbo->base, @@ -1773,7 +1772,7 @@ void vmw_resource_unpin(struct vmw_resource *res) if (--res->pin_count == 0 && res->backup) { struct vmw_dma_buffer *vbo = res->backup;
- ttm_bo_reserve(&vbo->base, false, false, false, NULL); + ttm_bo_reserve(&vbo->base, false, false, NULL); vmw_bo_pin_reserved(vbo, false); ttm_bo_unreserve(&vbo->base); } diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c index fd47547..92f8b1d 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c @@ -988,7 +988,7 @@ int vmw_compat_shader_add(struct vmw_private *dev_priv, if (unlikely(ret != 0)) goto out;
- ret = ttm_bo_reserve(&buf->base, false, true, false, NULL); + ret = ttm_bo_reserve(&buf->base, false, true, NULL); if (unlikely(ret != 0)) goto no_reserve;
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h index 3d4bf08..5d118ed 100644 --- a/include/drm/ttm/ttm_bo_driver.h +++ b/include/drm/ttm/ttm_bo_driver.h @@ -759,8 +759,7 @@ extern void ttm_bo_add_to_lru(struct ttm_buffer_object *bo); * @bo: A pointer to a struct ttm_buffer_object. * @interruptible: Sleep interruptible if waiting. * @no_wait: Don't sleep while trying to reserve, rather return -EBUSY. - * @use_ticket: If @bo is already reserved, Only sleep waiting for - * it to become unreserved if @ticket->stamp is older. + * @ticket: ticket used to acquire the ww_mutex. * * Will not remove reserved buffers from the lru lists. * Otherwise identical to ttm_bo_reserve. @@ -776,8 +775,7 @@ extern void ttm_bo_add_to_lru(struct ttm_buffer_object *bo); * be returned if @use_ticket is set to true. */ static inline int __ttm_bo_reserve(struct ttm_buffer_object *bo, - bool interruptible, - bool no_wait, bool use_ticket, + bool interruptible, bool no_wait, struct ww_acquire_ctx *ticket) { int ret = 0; @@ -806,8 +804,7 @@ static inline int __ttm_bo_reserve(struct ttm_buffer_object *bo, * @bo: A pointer to a struct ttm_buffer_object. * @interruptible: Sleep interruptible if waiting. * @no_wait: Don't sleep while trying to reserve, rather return -EBUSY. - * @use_ticket: If @bo is already reserved, Only sleep waiting for - * it to become unreserved if @ticket->stamp is older. + * @ticket: ticket used to acquire the ww_mutex. * * Locks a buffer object for validation. (Or prevents other processes from * locking it for validation) and removes it from lru lists, while taking @@ -846,15 +843,14 @@ static inline int __ttm_bo_reserve(struct ttm_buffer_object *bo, * be returned if @use_ticket is set to true. */ static inline int ttm_bo_reserve(struct ttm_buffer_object *bo, - bool interruptible, - bool no_wait, bool use_ticket, + bool interruptible, bool no_wait, struct ww_acquire_ctx *ticket) { int ret;
WARN_ON(!atomic_read(&bo->kref.refcount));
- ret = __ttm_bo_reserve(bo, interruptible, no_wait, use_ticket, ticket); + ret = __ttm_bo_reserve(bo, interruptible, no_wait, ticket); if (likely(ret == 0)) ttm_bo_del_sub_from_lru(bo);
From: Christian König christian.koenig@amd.com
Not used any more.
Signed-off-by: Christian König christian.koenig@amd.com Reviewed-by: Alex Deucher alexander.deucher@amd.com --- drivers/gpu/drm/nouveau/nouveau_bo.c | 2 +- drivers/gpu/drm/nouveau/nouveau_gem.c | 4 ++-- drivers/gpu/drm/qxl/qxl_cmd.c | 2 +- drivers/gpu/drm/qxl/qxl_object.h | 2 +- drivers/gpu/drm/radeon/radeon_object.c | 2 +- drivers/gpu/drm/ttm/ttm_bo.c | 16 ++++++++-------- drivers/gpu/drm/ttm/ttm_bo_util.c | 2 +- drivers/gpu/drm/ttm/ttm_bo_vm.c | 6 +++--- drivers/gpu/drm/virtio/virtgpu_object.c | 2 +- drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c | 2 +- drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c | 2 +- drivers/gpu/drm/vmwgfx/vmwgfx_resource.c | 4 ++-- include/drm/ttm/ttm_bo_api.h | 2 +- 13 files changed, 24 insertions(+), 24 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c index ea89286..5fe5000 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.c +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c @@ -1322,7 +1322,7 @@ nouveau_bo_move(struct ttm_buffer_object *bo, bool evict, bool intr, }
/* Fallback to software copy. */ - ret = ttm_bo_wait(bo, true, intr, no_wait_gpu); + ret = ttm_bo_wait(bo, intr, no_wait_gpu); if (ret == 0) ret = ttm_bo_move_memcpy(bo, evict, no_wait_gpu, new_mem);
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c index 8d64b65..185aaaa 100644 --- a/drivers/gpu/drm/nouveau/nouveau_gem.c +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c @@ -126,7 +126,7 @@ nouveau_gem_object_unmap(struct nouveau_bo *nvbo, struct nvkm_vma *vma) list_del(&vma->head);
if (fobj && fobj->shared_count > 1) - ttm_bo_wait(&nvbo->bo, true, false, false); + ttm_bo_wait(&nvbo->bo, false, false); else if (fobj && fobj->shared_count == 1) fence = rcu_dereference_protected(fobj->shared[0], reservation_object_held(resv)); @@ -651,7 +651,7 @@ nouveau_gem_pushbuf_reloc_apply(struct nouveau_cli *cli, data |= r->vor; }
- ret = ttm_bo_wait(&nvbo->bo, true, false, false); + ret = ttm_bo_wait(&nvbo->bo, false, false); if (ret) { NV_PRINTK(err, cli, "reloc wait_idle failed: %d\n", ret); break; diff --git a/drivers/gpu/drm/qxl/qxl_cmd.c b/drivers/gpu/drm/qxl/qxl_cmd.c index fdc1833..b5d4b41 100644 --- a/drivers/gpu/drm/qxl/qxl_cmd.c +++ b/drivers/gpu/drm/qxl/qxl_cmd.c @@ -624,7 +624,7 @@ static int qxl_reap_surf(struct qxl_device *qdev, struct qxl_bo *surf, bool stal if (stall) mutex_unlock(&qdev->surf_evict_mutex);
- ret = ttm_bo_wait(&surf->tbo, true, true, !stall); + ret = ttm_bo_wait(&surf->tbo, true, !stall);
if (stall) mutex_lock(&qdev->surf_evict_mutex); diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h index 483f131..4d83113 100644 --- a/drivers/gpu/drm/qxl/qxl_object.h +++ b/drivers/gpu/drm/qxl/qxl_object.h @@ -79,7 +79,7 @@ static inline int qxl_bo_wait(struct qxl_bo *bo, u32 *mem_type, if (mem_type) *mem_type = bo->tbo.mem.mem_type;
- r = ttm_bo_wait(&bo->tbo, true, true, no_wait); + r = ttm_bo_wait(&bo->tbo, true, no_wait); ttm_bo_unreserve(&bo->tbo); return r; } diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c index 7e0c16c..be30861 100644 --- a/drivers/gpu/drm/radeon/radeon_object.c +++ b/drivers/gpu/drm/radeon/radeon_object.c @@ -838,7 +838,7 @@ int radeon_bo_wait(struct radeon_bo *bo, u32 *mem_type, bool no_wait) if (mem_type) *mem_type = bo->tbo.mem.mem_type;
- r = ttm_bo_wait(&bo->tbo, true, true, no_wait); + r = ttm_bo_wait(&bo->tbo, true, no_wait); ttm_bo_unreserve(&bo->tbo); return r; } diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index c935825..1a5a0a2 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -455,7 +455,7 @@ static void ttm_bo_cleanup_refs_or_queue(struct ttm_buffer_object *bo) ret = __ttm_bo_reserve(bo, false, true, NULL);
if (!ret) { - if (!ttm_bo_wait(bo, false, false, true)) { + if (!ttm_bo_wait(bo, false, true)) { put_count = ttm_bo_del_from_lru(bo);
spin_unlock(&glob->lru_lock); @@ -508,7 +508,7 @@ static int ttm_bo_cleanup_refs_and_unlock(struct ttm_buffer_object *bo, int put_count; int ret;
- ret = ttm_bo_wait(bo, false, false, true); + ret = ttm_bo_wait(bo, false, true);
if (ret && !no_wait_gpu) { long lret; @@ -545,7 +545,7 @@ static int ttm_bo_cleanup_refs_and_unlock(struct ttm_buffer_object *bo, * remove sync_obj with ttm_bo_wait, the wait should be * finished, and no new wait object should have been added. */ - ret = ttm_bo_wait(bo, false, false, true); + ret = ttm_bo_wait(bo, false, true); WARN_ON(ret); }
@@ -684,7 +684,7 @@ static int ttm_bo_evict(struct ttm_buffer_object *bo, bool interruptible, struct ttm_placement placement; int ret = 0;
- ret = ttm_bo_wait(bo, false, interruptible, no_wait_gpu); + ret = ttm_bo_wait(bo, interruptible, no_wait_gpu);
if (unlikely(ret != 0)) { if (ret != -ERESTARTSYS) { @@ -1006,7 +1006,7 @@ static int ttm_bo_move_buffer(struct ttm_buffer_object *bo, * Have the driver move function wait for idle when necessary, * instead of doing it here. */ - ret = ttm_bo_wait(bo, false, interruptible, no_wait_gpu); + ret = ttm_bo_wait(bo, interruptible, no_wait_gpu); if (ret) return ret; } @@ -1568,7 +1568,7 @@ void ttm_bo_unmap_virtual(struct ttm_buffer_object *bo) EXPORT_SYMBOL(ttm_bo_unmap_virtual);
int ttm_bo_wait(struct ttm_buffer_object *bo, - bool lazy, bool interruptible, bool no_wait) + bool interruptible, bool no_wait) { struct reservation_object_list *fobj; struct reservation_object *resv; @@ -1626,7 +1626,7 @@ int ttm_bo_synccpu_write_grab(struct ttm_buffer_object *bo, bool no_wait) ret = ttm_bo_reserve(bo, true, no_wait, NULL); if (unlikely(ret != 0)) return ret; - ret = ttm_bo_wait(bo, false, true, no_wait); + ret = ttm_bo_wait(bo, true, no_wait); if (likely(ret == 0)) atomic_inc(&bo->cpu_writers); ttm_bo_unreserve(bo); @@ -1683,7 +1683,7 @@ static int ttm_bo_swapout(struct ttm_mem_shrink *shrink) * Wait for GPU, then move to system cached. */
- ret = ttm_bo_wait(bo, false, false, false); + ret = ttm_bo_wait(bo, false, false);
if (unlikely(ret != 0)) goto out; diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c index ac6fe40..d983155 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -645,7 +645,7 @@ int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo,
reservation_object_add_excl_fence(bo->resv, fence); if (evict) { - ret = ttm_bo_wait(bo, false, false, false); + ret = ttm_bo_wait(bo, false, false); if (ret) return ret;
diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c index dbd8d58..3216878 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c @@ -54,7 +54,7 @@ static int ttm_bo_vm_fault_idle(struct ttm_buffer_object *bo, /* * Quick non-stalling check for idle. */ - ret = ttm_bo_wait(bo, false, false, true); + ret = ttm_bo_wait(bo, false, true); if (likely(ret == 0)) goto out_unlock;
@@ -68,14 +68,14 @@ static int ttm_bo_vm_fault_idle(struct ttm_buffer_object *bo, goto out_unlock;
up_read(&vma->vm_mm->mmap_sem); - (void) ttm_bo_wait(bo, false, true, false); + (void) ttm_bo_wait(bo, true, false); goto out_unlock; }
/* * Ordinary wait. */ - ret = ttm_bo_wait(bo, false, true, false); + ret = ttm_bo_wait(bo, true, false); if (unlikely(ret != 0)) ret = (ret != -ERESTARTSYS) ? VM_FAULT_SIGBUS : VM_FAULT_NOPAGE; diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c index 208a1fd..1483dae 100644 --- a/drivers/gpu/drm/virtio/virtgpu_object.c +++ b/drivers/gpu/drm/virtio/virtgpu_object.c @@ -158,7 +158,7 @@ int virtio_gpu_object_wait(struct virtio_gpu_object *bo, bool no_wait) r = ttm_bo_reserve(&bo->tbo, true, no_wait, NULL); if (unlikely(r != 0)) return r; - r = ttm_bo_wait(&bo->tbo, true, true, no_wait); + r = ttm_bo_wait(&bo->tbo, true, no_wait); ttm_bo_unreserve(&bo->tbo); return r; } diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c index 3329f62..e09423d 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c @@ -839,7 +839,7 @@ static void vmw_move_notify(struct ttm_buffer_object *bo, */ static void vmw_swap_notify(struct ttm_buffer_object *bo) { - ttm_bo_wait(bo, false, false, false); + ttm_bo_wait(bo, false, false); }
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c index ddb3dd9..265c81e 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c @@ -423,7 +423,7 @@ static int vmw_cotable_resize(struct vmw_resource *res, size_t new_size) bo = &buf->base; WARN_ON_ONCE(ttm_bo_reserve(bo, false, true, NULL));
- ret = ttm_bo_wait(old_bo, false, false, false); + ret = ttm_bo_wait(old_bo, false, false); if (unlikely(ret != 0)) { DRM_ERROR("Failed waiting for cotable unbind.\n"); goto out_wait; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c index 9608d33a..6a328d5 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c @@ -1512,7 +1512,7 @@ void vmw_resource_move_notify(struct ttm_buffer_object *bo, list_del_init(&res->mob_head); }
- (void) ttm_bo_wait(bo, false, false, false); + (void) ttm_bo_wait(bo, false, false); } }
@@ -1605,7 +1605,7 @@ void vmw_query_move_notify(struct ttm_buffer_object *bo, if (fence != NULL) vmw_fence_obj_unreference(&fence);
- (void) ttm_bo_wait(bo, false, false, false); + (void) ttm_bo_wait(bo, false, false); } else mutex_unlock(&dev_priv->binding_mutex);
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h index afae231..cd49155 100644 --- a/include/drm/ttm/ttm_bo_api.h +++ b/include/drm/ttm/ttm_bo_api.h @@ -314,7 +314,7 @@ ttm_bo_reference(struct ttm_buffer_object *bo) * Returns -EBUSY if no_wait is true and the buffer is busy. * Returns -ERESTARTSYS if interrupted by a signal. */ -extern int ttm_bo_wait(struct ttm_buffer_object *bo, bool lazy, +extern int ttm_bo_wait(struct ttm_buffer_object *bo, bool interruptible, bool no_wait); /** * ttm_bo_validate
From: Christian König christian.koenig@amd.com
Not used any more.
Signed-off-by: Christian König christian.koenig@amd.com Reviewed-by: Alex Deucher alexander.deucher@amd.com --- drivers/gpu/drm/ttm/ttm_bo.c | 1 - include/drm/ttm/ttm_bo_driver.h | 2 -- 2 files changed, 3 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 1a5a0a2..dbb0bb7 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -1514,7 +1514,6 @@ int ttm_bo_device_init(struct ttm_bo_device *bdev, bdev->dev_mapping = mapping; bdev->glob = glob; bdev->need_dma32 = need_dma32; - bdev->val_seq = 0; mutex_lock(&glob->device_list_mutex); list_add_tail(&bdev->device_list, &glob->device_list); mutex_unlock(&glob->device_list_mutex); diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h index 5d118ed..391ef04 100644 --- a/include/drm/ttm/ttm_bo_driver.h +++ b/include/drm/ttm/ttm_bo_driver.h @@ -502,7 +502,6 @@ struct ttm_bo_global { * @vma_manager: Address space manager * lru_lock: Spinlock that protects the buffer+device lru lists and * ddestroy lists. - * @val_seq: Current validation sequence. * @dev_mapping: A pointer to the struct address_space representing the * device address space. * @wq: Work queue structure for the delayed delete workqueue. @@ -528,7 +527,6 @@ struct ttm_bo_device { * Protected by the global:lru lock. */ struct list_head ddestroy; - uint32_t val_seq;
/* * Protected by load / firstopen / lastclose /unload sync.
From: Christian König christian.koenig@amd.com
Useful for driver-specific LRU handling.
v2: fix typo in comment
Signed-off-by: Christian König christian.koenig@amd.com Reviewed-by: Alex Deucher alexander.deucher@amd.com --- drivers/gpu/drm/ttm/ttm_bo.c | 12 +++++++----- include/drm/ttm/ttm_bo_driver.h | 6 ++++++ 2 files changed, 13 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index dbb0bb7..84ad992 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -186,8 +186,12 @@ EXPORT_SYMBOL(ttm_bo_add_to_lru);
int ttm_bo_del_from_lru(struct ttm_buffer_object *bo) { + struct ttm_bo_device *bdev = bo->bdev; int put_count = 0;
+ if (bdev->driver->lru_removal) + bdev->driver->lru_removal(bo); + if (!list_empty(&bo->swap)) { list_del_init(&bo->swap); ++put_count; @@ -197,11 +201,6 @@ int ttm_bo_del_from_lru(struct ttm_buffer_object *bo) ++put_count; }
- /* - * TODO: Add a driver hook to delete from - * driver-specific LRU's here. - */ - return put_count; }
@@ -235,6 +234,9 @@ void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo)
lockdep_assert_held(&bo->resv->lock.base);
+ if (bdev->driver->lru_removal) + bdev->driver->lru_removal(bo); + if (bo->mem.placement & TTM_PL_FLAG_NO_EVICT) { list_del_init(&bo->swap); list_del_init(&bo->lru); diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h index 391ef04..23a30b3 100644 --- a/include/drm/ttm/ttm_bo_driver.h +++ b/include/drm/ttm/ttm_bo_driver.h @@ -434,6 +434,12 @@ struct ttm_bo_driver { */ int (*io_mem_reserve)(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem); void (*io_mem_free)(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem); + + /** + * Optional driver callback for when BO is removed from the LRU. + * Called with LRU lock held immediately before the removal. + */ + void (*lru_removal)(struct ttm_buffer_object *bo); };
/**
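For readers wondering what a driver would do with the new hook: a minimal sketch, loosely modeled on the amdgpu patch at the end of this series, is below. All mydrv_* names are invented; the point is just that a driver keeping a private insertion cursor into the LRU has to step that cursor back when the BO it points at leaves the list, which is exactly what this callback allows (TTM calls it with the LRU lock held, immediately before the removal):

#include <linux/kernel.h>
#include <linux/list.h>
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>

/* Invented driver state holding a private LRU insertion cursor. */
struct mydrv_device {
        struct ttm_bo_device bdev;
        struct list_head *lru_cursor;
};

static struct mydrv_device *mydrv_from_bdev(struct ttm_bo_device *bdev)
{
        return container_of(bdev, struct mydrv_device, bdev);
}

static void mydrv_lru_removal(struct ttm_buffer_object *bo)
{
        struct mydrv_device *mdev = mydrv_from_bdev(bo->bdev);

        /* If the cursor points at the BO that is leaving the LRU, move it
         * back one element so later insertions still land on a live node. */
        if (mdev->lru_cursor == &bo->lru)
                mdev->lru_cursor = bo->lru.prev;
}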
From: Christian König christian.koenig@amd.com
This gives the driver fine-grained control over where a BO is added into the LRU.
v2: fix typo in comment
Signed-off-by: Christian König christian.koenig@amd.com Reviewed-by: Alex Deucher alexander.deucher@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 2 ++ drivers/gpu/drm/ast/ast_ttm.c | 2 ++ drivers/gpu/drm/bochs/bochs_mm.c | 2 ++ drivers/gpu/drm/cirrus/cirrus_ttm.c | 2 ++ drivers/gpu/drm/mgag200/mgag200_ttm.c | 2 ++ drivers/gpu/drm/nouveau/nouveau_bo.c | 2 ++ drivers/gpu/drm/qxl/qxl_ttm.c | 2 ++ drivers/gpu/drm/radeon/radeon_ttm.c | 2 ++ drivers/gpu/drm/ttm/ttm_bo.c | 29 ++++++++++++++++++++--------- drivers/gpu/drm/virtio/virtgpu_ttm.c | 2 ++ drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c | 2 ++ include/drm/ttm/ttm_bo_driver.h | 9 +++++++++ 12 files changed, 49 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c index 9d3341d..fefaa9b 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c @@ -923,6 +923,8 @@ static struct ttm_bo_driver amdgpu_bo_driver = { .fault_reserve_notify = &amdgpu_bo_fault_reserve_notify, .io_mem_reserve = &amdgpu_ttm_io_mem_reserve, .io_mem_free = &amdgpu_ttm_io_mem_free, + .lru_tail = &ttm_bo_default_lru_tail, + .swap_lru_tail = &ttm_bo_default_swap_lru_tail, };
int amdgpu_ttm_init(struct amdgpu_device *adev) diff --git a/drivers/gpu/drm/ast/ast_ttm.c b/drivers/gpu/drm/ast/ast_ttm.c index 08f82ea..59f2f93 100644 --- a/drivers/gpu/drm/ast/ast_ttm.c +++ b/drivers/gpu/drm/ast/ast_ttm.c @@ -245,6 +245,8 @@ struct ttm_bo_driver ast_bo_driver = { .verify_access = ast_bo_verify_access, .io_mem_reserve = &ast_ttm_io_mem_reserve, .io_mem_free = &ast_ttm_io_mem_free, + .lru_tail = &ttm_bo_default_lru_tail, + .swap_lru_tail = &ttm_bo_default_swap_lru_tail, };
int ast_mm_init(struct ast_private *ast) diff --git a/drivers/gpu/drm/bochs/bochs_mm.c b/drivers/gpu/drm/bochs/bochs_mm.c index d812ad0..24a30f6 100644 --- a/drivers/gpu/drm/bochs/bochs_mm.c +++ b/drivers/gpu/drm/bochs/bochs_mm.c @@ -212,6 +212,8 @@ struct ttm_bo_driver bochs_bo_driver = { .verify_access = bochs_bo_verify_access, .io_mem_reserve = &bochs_ttm_io_mem_reserve, .io_mem_free = &bochs_ttm_io_mem_free, + .lru_tail = &ttm_bo_default_lru_tail, + .swap_lru_tail = &ttm_bo_default_swap_lru_tail, };
int bochs_mm_init(struct bochs_device *bochs) diff --git a/drivers/gpu/drm/cirrus/cirrus_ttm.c b/drivers/gpu/drm/cirrus/cirrus_ttm.c index dfffd52..6768b7b 100644 --- a/drivers/gpu/drm/cirrus/cirrus_ttm.c +++ b/drivers/gpu/drm/cirrus/cirrus_ttm.c @@ -245,6 +245,8 @@ struct ttm_bo_driver cirrus_bo_driver = { .verify_access = cirrus_bo_verify_access, .io_mem_reserve = &cirrus_ttm_io_mem_reserve, .io_mem_free = &cirrus_ttm_io_mem_free, + .lru_tail = &ttm_bo_default_lru_tail, + .swap_lru_tail = &ttm_bo_default_swap_lru_tail, };
int cirrus_mm_init(struct cirrus_device *cirrus) diff --git a/drivers/gpu/drm/mgag200/mgag200_ttm.c b/drivers/gpu/drm/mgag200/mgag200_ttm.c index 05108b5..9d5083d 100644 --- a/drivers/gpu/drm/mgag200/mgag200_ttm.c +++ b/drivers/gpu/drm/mgag200/mgag200_ttm.c @@ -245,6 +245,8 @@ struct ttm_bo_driver mgag200_bo_driver = { .verify_access = mgag200_bo_verify_access, .io_mem_reserve = &mgag200_ttm_io_mem_reserve, .io_mem_free = &mgag200_ttm_io_mem_free, + .lru_tail = &ttm_bo_default_lru_tail, + .swap_lru_tail = &ttm_bo_default_swap_lru_tail, };
int mgag200_mm_init(struct mga_device *mdev) diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c index 5fe5000..74a8a2c 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.c +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c @@ -1611,6 +1611,8 @@ struct ttm_bo_driver nouveau_bo_driver = { .fault_reserve_notify = &nouveau_ttm_fault_reserve_notify, .io_mem_reserve = &nouveau_ttm_io_mem_reserve, .io_mem_free = &nouveau_ttm_io_mem_free, + .lru_tail = &ttm_bo_default_lru_tail, + .swap_lru_tail = &ttm_bo_default_swap_lru_tail, };
struct nvkm_vma * diff --git a/drivers/gpu/drm/qxl/qxl_ttm.c b/drivers/gpu/drm/qxl/qxl_ttm.c index 9534127..0738d74 100644 --- a/drivers/gpu/drm/qxl/qxl_ttm.c +++ b/drivers/gpu/drm/qxl/qxl_ttm.c @@ -384,6 +384,8 @@ static struct ttm_bo_driver qxl_bo_driver = { .io_mem_reserve = &qxl_ttm_io_mem_reserve, .io_mem_free = &qxl_ttm_io_mem_free, .move_notify = &qxl_bo_move_notify, + .lru_tail = &ttm_bo_default_lru_tail, + .swap_lru_tail = &ttm_bo_default_swap_lru_tail, };
int qxl_ttm_init(struct qxl_device *qdev) diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c index 636c1cf..e31f233 100644 --- a/drivers/gpu/drm/radeon/radeon_ttm.c +++ b/drivers/gpu/drm/radeon/radeon_ttm.c @@ -864,6 +864,8 @@ static struct ttm_bo_driver radeon_bo_driver = { .fault_reserve_notify = &radeon_bo_fault_reserve_notify, .io_mem_reserve = &radeon_ttm_io_mem_reserve, .io_mem_free = &radeon_ttm_io_mem_free, + .lru_tail = &ttm_bo_default_lru_tail, + .swap_lru_tail = &ttm_bo_default_swap_lru_tail, };
int radeon_ttm_init(struct radeon_device *rdev) diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 84ad992..832062a 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -164,7 +164,6 @@ static void ttm_bo_release_list(struct kref *list_kref) void ttm_bo_add_to_lru(struct ttm_buffer_object *bo) { struct ttm_bo_device *bdev = bo->bdev; - struct ttm_mem_type_manager *man;
lockdep_assert_held(&bo->resv->lock.base);
@@ -172,12 +171,11 @@ void ttm_bo_add_to_lru(struct ttm_buffer_object *bo)
BUG_ON(!list_empty(&bo->lru));
- man = &bdev->man[bo->mem.mem_type]; - list_add_tail(&bo->lru, &man->lru); + list_add(&bo->lru, bdev->driver->lru_tail(bo)); kref_get(&bo->list_kref);
if (bo->ttm && !(bo->ttm->page_flags & TTM_PAGE_FLAG_SG)) { - list_add_tail(&bo->swap, &bo->glob->swap_lru); + list_add(&bo->swap, bdev->driver->swap_lru_tail(bo)); kref_get(&bo->list_kref); } } @@ -230,7 +228,6 @@ EXPORT_SYMBOL(ttm_bo_del_sub_from_lru); void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo) { struct ttm_bo_device *bdev = bo->bdev; - struct ttm_mem_type_manager *man;
lockdep_assert_held(&bo->resv->lock.base);
@@ -242,15 +239,29 @@ void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo) list_del_init(&bo->lru);
} else { - if (bo->ttm && !(bo->ttm->page_flags & TTM_PAGE_FLAG_SG)) - list_move_tail(&bo->swap, &bo->glob->swap_lru); + if (bo->ttm && !(bo->ttm->page_flags & TTM_PAGE_FLAG_SG)) { + list_del(&bo->swap); + list_add(&bo->swap, bdev->driver->swap_lru_tail(bo)); + }
- man = &bdev->man[bo->mem.mem_type]; - list_move_tail(&bo->lru, &man->lru); + list_del(&bo->lru); + list_add(&bo->lru, bdev->driver->lru_tail(bo)); } } EXPORT_SYMBOL(ttm_bo_move_to_lru_tail);
+struct list_head *ttm_bo_default_lru_tail(struct ttm_buffer_object *bo) +{ + return bo->bdev->man[bo->mem.mem_type].lru.prev; +} +EXPORT_SYMBOL(ttm_bo_default_lru_tail); + +struct list_head *ttm_bo_default_swap_lru_tail(struct ttm_buffer_object *bo) +{ + return bo->glob->swap_lru.prev; +} +EXPORT_SYMBOL(ttm_bo_default_swap_lru_tail); + /* * Call bo->mutex locked. */ diff --git a/drivers/gpu/drm/virtio/virtgpu_ttm.c b/drivers/gpu/drm/virtio/virtgpu_ttm.c index 9fd924c..a058081 100644 --- a/drivers/gpu/drm/virtio/virtgpu_ttm.c +++ b/drivers/gpu/drm/virtio/virtgpu_ttm.c @@ -426,6 +426,8 @@ static struct ttm_bo_driver virtio_gpu_bo_driver = { .io_mem_free = &virtio_gpu_ttm_io_mem_free, .move_notify = &virtio_gpu_bo_move_notify, .swap_notify = &virtio_gpu_bo_swap_notify, + .lru_tail = &ttm_bo_default_lru_tail, + .swap_lru_tail = &ttm_bo_default_swap_lru_tail, };
int virtio_gpu_ttm_init(struct virtio_gpu_device *vgdev) diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c index e09423d..78b75ee 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c @@ -857,4 +857,6 @@ struct ttm_bo_driver vmw_bo_driver = { .fault_reserve_notify = &vmw_ttm_fault_reserve_notify, .io_mem_reserve = &vmw_ttm_io_mem_reserve, .io_mem_free = &vmw_ttm_io_mem_free, + .lru_tail = &ttm_bo_default_lru_tail, + .swap_lru_tail = &ttm_bo_default_swap_lru_tail, }; diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h index 23a30b3..5d83010 100644 --- a/include/drm/ttm/ttm_bo_driver.h +++ b/include/drm/ttm/ttm_bo_driver.h @@ -440,6 +440,12 @@ struct ttm_bo_driver { * Called with LRU lock held immediately before the removal. */ void (*lru_removal)(struct ttm_buffer_object *bo); + + /** + * Return the list_head after which a BO should be inserted in the LRU. + */ + struct list_head *(*lru_tail)(struct ttm_buffer_object *bo); + struct list_head *(*swap_lru_tail)(struct ttm_buffer_object *bo); };
/** @@ -757,6 +763,9 @@ extern void ttm_mem_io_unlock(struct ttm_mem_type_manager *man); extern void ttm_bo_del_sub_from_lru(struct ttm_buffer_object *bo); extern void ttm_bo_add_to_lru(struct ttm_buffer_object *bo);
+struct list_head *ttm_bo_default_lru_tail(struct ttm_buffer_object *bo); +struct list_head *ttm_bo_default_swap_lru_tail(struct ttm_buffer_object *bo); + /** * __ttm_bo_reserve: *
From: Christian König christian.koenig@amd.com
Keep VM page tables behind normal BOs on the LRU.
Signed-off-by: Christian König christian.koenig@amd.com Acked-by: Alex Deucher alexander.deucher@amd.com --- drivers/gpu/drm/amd/amdgpu/amdgpu.h | 4 +++ drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 53 +++++++++++++++++++++++++++++++-- 2 files changed, 55 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h index c4a21c6..1b5e7db 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h @@ -408,6 +408,10 @@ struct amdgpu_mman { struct amdgpu_ring *buffer_funcs_ring; /* Scheduler entity for buffer moves */ struct amd_sched_entity entity; + + /* custom LRU management */ + struct list_head *lru[TTM_NUM_MEM_TYPES]; + struct list_head *swap_lru; };
int amdgpu_copy_buffer(struct amdgpu_ring *ring, diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c index fefaa9b..f3d118b 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c @@ -910,6 +910,49 @@ uint32_t amdgpu_ttm_tt_pte_flags(struct amdgpu_device *adev, struct ttm_tt *ttm, return flags; }
+static void amdgpu_ttm_lru_removal(struct ttm_buffer_object *tbo) +{ + struct amdgpu_device *adev = amdgpu_get_adev(tbo->bdev); + + if (&tbo->lru == adev->mman.lru[tbo->mem.mem_type]) + adev->mman.lru[tbo->mem.mem_type] = tbo->lru.prev; + + if (&tbo->swap == adev->mman.swap_lru) + adev->mman.swap_lru = tbo->swap.prev; +} + +static struct list_head *amdgpu_ttm_lru_tail(struct ttm_buffer_object *tbo) +{ + struct amdgpu_bo *bo = container_of(tbo, struct amdgpu_bo, tbo); + struct amdgpu_device *adev = amdgpu_get_adev(tbo->bdev); + struct list_head *res; + + /* only VM BOs have a parent set */ + if (bo->parent) + return ttm_bo_default_lru_tail(tbo); + + res = adev->mman.lru[tbo->mem.mem_type]; + adev->mman.lru[tbo->mem.mem_type] = &tbo->lru; + + return res; +} + +static struct list_head *amdgpu_ttm_swap_lru_tail(struct ttm_buffer_object *tbo) +{ + struct amdgpu_bo *bo = container_of(tbo, struct amdgpu_bo, tbo); + struct amdgpu_device *adev = amdgpu_get_adev(tbo->bdev); + struct list_head *res; + + /* only VM BOs have a parent set */ + if (bo->parent) + return ttm_bo_default_swap_lru_tail(tbo); + + res = adev->mman.swap_lru; + adev->mman.swap_lru = &tbo->swap; + + return res; +} + static struct ttm_bo_driver amdgpu_bo_driver = { .ttm_tt_create = &amdgpu_ttm_tt_create, .ttm_tt_populate = &amdgpu_ttm_tt_populate, @@ -923,12 +966,14 @@ static struct ttm_bo_driver amdgpu_bo_driver = { .fault_reserve_notify = &amdgpu_bo_fault_reserve_notify, .io_mem_reserve = &amdgpu_ttm_io_mem_reserve, .io_mem_free = &amdgpu_ttm_io_mem_free, - .lru_tail = &ttm_bo_default_lru_tail, - .swap_lru_tail = &ttm_bo_default_swap_lru_tail, + .lru_removal = &amdgpu_ttm_lru_removal, + .lru_tail = &amdgpu_ttm_lru_tail, + .swap_lru_tail = &amdgpu_ttm_swap_lru_tail, };
int amdgpu_ttm_init(struct amdgpu_device *adev) { + unsigned i; int r;
r = amdgpu_ttm_global_init(adev); @@ -946,6 +991,10 @@ int amdgpu_ttm_init(struct amdgpu_device *adev) DRM_ERROR("failed initializing buffer object driver(%d).\n", r); return r; } + for (i = 0; i < TTM_NUM_MEM_TYPES; ++i) + adev->mman.lru[i] = &adev->mman.bdev.man[i].lru; + adev->mman.swap_lru = &adev->mman.bdev.glob->swap_lru; + adev->mman.initialized = true; r = ttm_bo_init_mm(&adev->mman.bdev, TTM_PL_VRAM, adev->mc.real_vram_size >> PAGE_SHIFT);