TTM implements a rather extensive accounting of allocated memory.
There are two reasons for this:
1. It tries to block userspace from allocating a huge number of very small BOs without accounting for the kmalloced memory.
2. It makes sure we don't over-allocate and run into an OOM situation during swapout while trying to handle a memory shortage.
This is only partially a good idea. First of all, it is perfectly valid for an application to use all of system memory; limiting it to 50% is not really acceptable.
What we need to take care of is that the application is held accountable for the memory it allocated. This is what control mechanisms like memcg and the normal Linux page accounting already do.
Making sure that we don't run into an OOM situation while trying to cope with a memory shortage is still a good idea, but the current implementation is not very robust either, since it opens up another opportunity for recursion from the driver back into TTM.
So start to rework all of this by accounting the memory before allocating the pages and by making sure that we never exceed a certain limit of pages locked by TTM in the ttm_tt objects.
Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/ttm/ttm_memory.c | 17 ++++++++-
 drivers/gpu/drm/ttm/ttm_tt.c     | 64 +++++++++++++++++++++++++++++---
 include/drm/ttm/ttm_tt.h         |  2 +
 3 files changed, 77 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_memory.c b/drivers/gpu/drm/ttm/ttm_memory.c
index 5ed1fc8f2ace..c53136048630 100644
--- a/drivers/gpu/drm/ttm/ttm_memory.c
+++ b/drivers/gpu/drm/ttm/ttm_memory.c
@@ -38,6 +38,7 @@
 #include <linux/slab.h>
 #include <linux/swap.h>
 #include <drm/ttm/ttm_pool.h>
+#include <drm/ttm/ttm_tt.h>
#define TTM_MEMORY_ALLOC_RETRIES 4
@@ -414,10 +415,11 @@ static int ttm_mem_init_dma32_zone(struct ttm_mem_global *glob,
 int ttm_mem_global_init(struct ttm_mem_global *glob)
 {
+	uint64_t num_pages, num_dma32_pages;
+	struct ttm_mem_zone *zone;
 	struct sysinfo si;
 	int ret;
 	int i;
-	struct ttm_mem_zone *zone;
 	spin_lock_init(&glob->lock);
 	glob->swap_queue = create_singlethread_workqueue("ttm_swap");
@@ -452,6 +454,19 @@ int ttm_mem_global_init(struct ttm_mem_global *glob)
 			zone->name, (unsigned long long)zone->max_mem >> 10);
 	}
 	ttm_pool_mgr_init(glob->zone_kernel->max_mem/(2*PAGE_SIZE));
+
+	/* We generally allow TTM to allocate a maximum of 50% of
+	 * the available system memory.
+	 */
+	num_pages = (u64)si.totalram * si.mem_unit;
+	num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT;
+
+	/* But for DMA32 we limit ourself to only use 2GiB maximum. */
+	num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit;
+	num_dma32_pages = min(num_dma32_pages, 2ULL << 30) >> PAGE_SHIFT;
+
+	ttm_tt_mgr_init(num_pages, num_dma32_pages);
+
 	return 0;
 out_no_zone:
 	ttm_mem_global_release(glob);
diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index cfd633c7e764..c7b71c9861a6 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -38,6 +38,19 @@
 #include <drm/drm_cache.h>
 #include <drm/ttm/ttm_bo_driver.h>
+static unsigned long ttm_pages_limit;
+
+MODULE_PARM_DESC(pages_limit, "Limit for the allocated pages");
+module_param_named(pages_limit, ttm_pages_limit, ulong, 0644);
+
+static unsigned long ttm_dma32_pages_limit;
+
+MODULE_PARM_DESC(dma32_pages_limit, "Limit for the allocated DMA32 pages");
+module_param_named(dma32_pages_limit, ttm_dma32_pages_limit, ulong, 0644);
+
+static atomic_long_t ttm_pages_allocated;
+static atomic_long_t ttm_dma32_pages_allocated;
+
 /**
  * Allocates a ttm structure for the given BO.
  */
@@ -295,8 +308,8 @@ static void ttm_tt_add_mapping(struct ttm_bo_device *bdev, struct ttm_tt *ttm)
 		ttm->pages[i]->mapping = bdev->dev_mapping;
 }
-int ttm_tt_populate(struct ttm_bo_device *bdev,
-		    struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
+int ttm_tt_populate(struct ttm_bo_device *bdev, struct ttm_tt *ttm,
+		    struct ttm_operation_ctx *ctx)
 {
 	int ret;
@@ -306,12 +319,25 @@ int ttm_tt_populate(struct ttm_bo_device *bdev,
 	if (ttm_tt_is_populated(ttm))
 		return 0;
+	atomic_long_add(ttm->num_pages, &ttm_pages_allocated);
+	if (bdev->pool.use_dma32)
+		atomic_long_add(ttm->num_pages, &ttm_dma32_pages_allocated);
+
+	while (atomic_long_read(&ttm_pages_allocated) > ttm_pages_limit ||
+	       atomic_long_read(&ttm_dma32_pages_allocated) >
+	       ttm_dma32_pages_limit) {
+
+		ret = ttm_bo_swapout(ctx);
+		if (ret)
+			goto error;
+	}
+
 	if (bdev->driver->ttm_tt_populate)
 		ret = bdev->driver->ttm_tt_populate(bdev, ttm, ctx);
 	else
 		ret = ttm_pool_alloc(&bdev->pool, ttm, ctx);
 	if (ret)
-		return ret;
+		goto error;
 	ttm_tt_add_mapping(bdev, ttm);
 	ttm->page_flags |= TTM_PAGE_FLAG_PRIV_POPULATED;
@@ -324,6 +350,12 @@ int ttm_tt_populate(struct ttm_bo_device *bdev,
 	}
 	return 0;
+
+error:
+	atomic_long_sub(ttm->num_pages, &ttm_pages_allocated);
+	if (bdev->pool.use_dma32)
+		atomic_long_sub(ttm->num_pages, &ttm_dma32_pages_allocated);
+	return ret;
 }
 EXPORT_SYMBOL(ttm_tt_populate);
@@ -341,8 +373,7 @@ static void ttm_tt_clear_mapping(struct ttm_tt *ttm)
 	}
 }
-void ttm_tt_unpopulate(struct ttm_bo_device *bdev,
-		       struct ttm_tt *ttm)
+void ttm_tt_unpopulate(struct ttm_bo_device *bdev, struct ttm_tt *ttm)
 {
 	if (!ttm_tt_is_populated(ttm))
 		return;
@@ -352,5 +383,28 @@ void ttm_tt_unpopulate(struct ttm_bo_device *bdev,
 		bdev->driver->ttm_tt_unpopulate(bdev, ttm);
 	else
 		ttm_pool_free(&bdev->pool, ttm);
+
+	atomic_long_sub(ttm->num_pages, &ttm_pages_allocated);
+	if (bdev->pool.use_dma32)
+		atomic_long_sub(ttm->num_pages, &ttm_dma32_pages_allocated);
+
 	ttm->page_flags &= ~TTM_PAGE_FLAG_PRIV_POPULATED;
 }
+
+/**
+ * ttm_tt_mgr_init - set defaults for the number of pages
+ *
+ * @num_pages: global limit of system pages to use
+ * @num_dma32_pages: global limit of DMA32 pages to use
+ *
+ * Set reasonable defaults for the number of allocated pages.
+ */
+void ttm_tt_mgr_init(unsigned long num_pages, unsigned long num_dma32_pages)
+{
+	if (!ttm_pages_limit)
+		ttm_pages_limit = num_pages;
+
+	if (!ttm_dma32_pages_limit)
+		ttm_dma32_pages_limit = num_dma32_pages;
+
+}
diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
index da27e9d8fa64..05ca43f13884 100644
--- a/include/drm/ttm/ttm_tt.h
+++ b/include/drm/ttm/ttm_tt.h
@@ -157,6 +157,8 @@ int ttm_tt_populate(struct ttm_bo_device *bdev, struct ttm_tt *ttm, struct ttm_o
  */
 void ttm_tt_unpopulate(struct ttm_bo_device *bdev, struct ttm_tt *ttm);
+void ttm_tt_mgr_init(unsigned long num_pages, unsigned long num_dma32_pages);
+
 #if IS_ENABLED(CONFIG_AGP)
 #include <linux/agp_backend.h>
This is just another feature which is only used by VMWGFX, so move it into the driver instead.
I've tried to add the accounting sysfs file to the kobject of the drm minor, but I'm not 100% sure if this works as expected.
Signed-off-by: Christian König <christian.koenig@amd.com>
---
 .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 16 +++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c   |  8 +--
 drivers/gpu/drm/drm_gem_vram_helper.c        |  6 +-
 drivers/gpu/drm/nouveau/nouveau_bo.c         |  7 +-
 drivers/gpu/drm/nouveau/nouveau_drv.h        |  1 -
 drivers/gpu/drm/qxl/qxl_object.c             |  4 +-
 drivers/gpu/drm/radeon/radeon_object.c       |  8 +--
 drivers/gpu/drm/ttm/Makefile                 |  6 +-
 drivers/gpu/drm/ttm/ttm_bo.c                 | 65 ++++++++-----------
 drivers/gpu/drm/ttm/ttm_bo_util.c            |  1 -
 drivers/gpu/drm/ttm/ttm_pool.c               | 13 +---
 drivers/gpu/drm/vmwgfx/Makefile              |  2 +-
 drivers/gpu/drm/{ttm => vmwgfx}/ttm_memory.c | 31 +++------
 .../gpu/drm/vmwgfx}/ttm_memory.h             |  5 +-
 drivers/gpu/drm/vmwgfx/ttm_object.h          |  3 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_bo.c           | 22 +++++--
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c          |  5 ++
 drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c   | 28 +++++++-
 include/drm/ttm/ttm_bo_api.h                 | 11 +---
 include/drm/ttm/ttm_bo_driver.h              |  1 -
 include/drm/ttm/ttm_tt.h                     |  1 +
 21 files changed, 118 insertions(+), 126 deletions(-)
 rename drivers/gpu/drm/{ttm => vmwgfx}/ttm_memory.c (94%)
 rename {include/drm/ttm => drivers/gpu/drm/vmwgfx}/ttm_memory.h (97%)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c index 1755386470e6..4eec73da0a7b 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c @@ -118,6 +118,16 @@ void amdgpu_amdkfd_gpuvm_init_mem_limits(void) */ #define ESTIMATE_PT_SIZE(mem_size) ((mem_size) >> 14)
+static size_t amdgpu_amdkfd_acc_size(uint64_t size)
+{
+	size >>= PAGE_SHIFT;
+	size *= sizeof(dma_addr_t) + sizeof(void *);
+
+	return __roundup_pow_of_two(sizeof(struct amdgpu_bo)) +
+		__roundup_pow_of_two(sizeof(struct ttm_tt)) +
+		PAGE_ALIGN(size);
+}
+
 static int amdgpu_amdkfd_reserve_mem_limit(struct amdgpu_device *adev,
 		uint64_t size, u32 domain, bool sg)
 {
@@ -126,8 +136,7 @@ static int amdgpu_amdkfd_reserve_mem_limit(struct amdgpu_device *adev,
 	size_t acc_size, system_mem_needed, ttm_mem_needed, vram_needed;
 	int ret = 0;
- acc_size = ttm_bo_dma_acc_size(&adev->mman.bdev, size, - sizeof(struct amdgpu_bo)); + acc_size = amdgpu_amdkfd_acc_size(size);
vram_needed = 0; if (domain == AMDGPU_GEM_DOMAIN_GTT) { @@ -174,8 +183,7 @@ static void unreserve_mem_limit(struct amdgpu_device *adev, { size_t acc_size;
- acc_size = ttm_bo_dma_acc_size(&adev->mman.bdev, size, - sizeof(struct amdgpu_bo)); + acc_size = amdgpu_amdkfd_acc_size(size);
spin_lock(&kfd_mem_limit.mem_limit_lock); if (domain == AMDGPU_GEM_DOMAIN_GTT) { diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c index c6c9723d3d8a..b4ed0662ec5b 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c @@ -523,7 +523,6 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev, }; struct amdgpu_bo *bo; unsigned long page_align, size = bp->size; - size_t acc_size; int r;
/* Note that GDS/GWS/OA allocates 1 page per byte/resource. */ @@ -546,9 +545,6 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
*bo_ptr = NULL;
- acc_size = ttm_bo_dma_acc_size(&adev->mman.bdev, size, - sizeof(struct amdgpu_bo)); - bo = kzalloc(sizeof(struct amdgpu_bo), GFP_KERNEL); if (bo == NULL) return -ENOMEM; @@ -577,8 +573,8 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev, bo->tbo.priority = 1;
r = ttm_bo_init_reserved(&adev->mman.bdev, &bo->tbo, size, bp->type, - &bo->placement, page_align, &ctx, acc_size, - NULL, bp->resv, &amdgpu_bo_destroy); + &bo->placement, page_align, &ctx, NULL, + bp->resv, &amdgpu_bo_destroy); if (unlikely(r != 0)) return r;
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c index 02ca22e90290..5cf7797048e1 100644 --- a/drivers/gpu/drm/drm_gem_vram_helper.c +++ b/drivers/gpu/drm/drm_gem_vram_helper.c @@ -189,7 +189,6 @@ struct drm_gem_vram_object *drm_gem_vram_create(struct drm_device *dev, struct drm_vram_mm *vmm = dev->vram_mm; struct ttm_bo_device *bdev; int ret; - size_t acc_size;
if (WARN_ONCE(!vmm, "VRAM MM not initialized")) return ERR_PTR(-EINVAL); @@ -216,7 +215,6 @@ struct drm_gem_vram_object *drm_gem_vram_create(struct drm_device *dev, }
bdev = &vmm->bdev; - acc_size = ttm_bo_dma_acc_size(bdev, size, sizeof(*gbo));
gbo->bo.bdev = bdev; drm_gem_vram_placement(gbo, DRM_GEM_VRAM_PL_FLAG_SYSTEM); @@ -226,8 +224,8 @@ struct drm_gem_vram_object *drm_gem_vram_create(struct drm_device *dev, * to release gbo->bo.base and kfree gbo. */ ret = ttm_bo_init(bdev, &gbo->bo, size, ttm_bo_type_device, - &gbo->placement, pg_align, false, acc_size, - NULL, NULL, ttm_buffer_object_destroy); + &gbo->placement, pg_align, false, NULL, NULL, + ttm_buffer_object_destroy); if (ret) return ERR_PTR(ret);
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c index 7aa4286784ae..3ed5bac70852 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.c +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c @@ -300,18 +300,15 @@ nouveau_bo_init(struct nouveau_bo *nvbo, u64 size, int align, u32 domain, struct sg_table *sg, struct dma_resv *robj) { int type = sg ? ttm_bo_type_sg : ttm_bo_type_device; - size_t acc_size; int ret;
- acc_size = ttm_bo_dma_acc_size(nvbo->bo.bdev, size, sizeof(*nvbo)); - nvbo->bo.mem.num_pages = size >> PAGE_SHIFT; nouveau_bo_placement_set(nvbo, domain, 0); INIT_LIST_HEAD(&nvbo->io_reserve_lru);
ret = ttm_bo_init(nvbo->bo.bdev, &nvbo->bo, size, type, - &nvbo->placement, align >> PAGE_SHIFT, false, - acc_size, sg, robj, nouveau_bo_del_ttm); + &nvbo->placement, align >> PAGE_SHIFT, false, sg, + robj, nouveau_bo_del_ttm); if (ret) { /* ttm will call nouveau_bo_del_ttm if it fails.. */ return ret; diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h b/drivers/gpu/drm/nouveau/nouveau_drv.h index 9d04d1b36434..8e90b3e47bbe 100644 --- a/drivers/gpu/drm/nouveau/nouveau_drv.h +++ b/drivers/gpu/drm/nouveau/nouveau_drv.h @@ -54,7 +54,6 @@ #include <drm/ttm/ttm_bo_api.h> #include <drm/ttm/ttm_bo_driver.h> #include <drm/ttm/ttm_placement.h> -#include <drm/ttm/ttm_memory.h> #include <drm/ttm/ttm_module.h>
#include <drm/drm_audio_component.h> diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c index ceebc5881f68..705b51535492 100644 --- a/drivers/gpu/drm/qxl/qxl_object.c +++ b/drivers/gpu/drm/qxl/qxl_object.c @@ -138,8 +138,8 @@ int qxl_bo_create(struct qxl_device *qdev, qxl_ttm_placement_from_domain(bo, domain);
r = ttm_bo_init_reserved(&qdev->mman.bdev, &bo->tbo, size, type, - &bo->placement, 0, &ctx, size, - NULL, NULL, &qxl_ttm_bo_destroy); + &bo->placement, 0, &ctx, NULL, NULL, + &qxl_ttm_bo_destroy); if (unlikely(r != 0)) { if (r != -ERESTARTSYS) dev_err(qdev->ddev.dev, diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c index ab81e35cb060..934109634ee6 100644 --- a/drivers/gpu/drm/radeon/radeon_object.c +++ b/drivers/gpu/drm/radeon/radeon_object.c @@ -161,7 +161,6 @@ int radeon_bo_create(struct radeon_device *rdev, struct radeon_bo *bo; enum ttm_bo_type type; unsigned long page_align = roundup(byte_align, PAGE_SIZE) >> PAGE_SHIFT; - size_t acc_size; int r;
size = ALIGN(size, PAGE_SIZE); @@ -175,9 +174,6 @@ int radeon_bo_create(struct radeon_device *rdev, } *bo_ptr = NULL;
- acc_size = ttm_bo_dma_acc_size(&rdev->mman.bdev, size, - sizeof(struct radeon_bo)); - bo = kzalloc(sizeof(struct radeon_bo), GFP_KERNEL); if (bo == NULL) return -ENOMEM; @@ -232,8 +228,8 @@ int radeon_bo_create(struct radeon_device *rdev, /* Kernel allocation are uninterruptible */ down_read(&rdev->pm.mclk_lock); r = ttm_bo_init(&rdev->mman.bdev, &bo->tbo, size, type, - &bo->placement, page_align, !kernel, acc_size, - sg, resv, &radeon_ttm_bo_destroy); + &bo->placement, page_align, !kernel, sg, resv, + &radeon_ttm_bo_destroy); up_read(&rdev->pm.mclk_lock); if (unlikely(r != 0)) { return r; diff --git a/drivers/gpu/drm/ttm/Makefile b/drivers/gpu/drm/ttm/Makefile index b6f5f87b270f..7e8dc85caac3 100644 --- a/drivers/gpu/drm/ttm/Makefile +++ b/drivers/gpu/drm/ttm/Makefile @@ -2,10 +2,8 @@ # # Makefile for the drm device driver. This driver provides support for the
-ttm-y := ttm_memory.o ttm_tt.o ttm_bo.o \ - ttm_bo_util.o ttm_bo_vm.o ttm_module.o \ - ttm_execbuf_util.o ttm_range_manager.o \ - ttm_resource.o ttm_pool.o +ttm-y := ttm_tt.o ttm_bo.o ttm_bo_util.o ttm_bo_vm.o ttm_module.o \ + ttm_execbuf_util.o ttm_range_manager.o ttm_resource.o ttm_pool.o ttm-$(CONFIG_AGP) += ttm_agp_backend.o
obj-$(CONFIG_DRM_TTM) += ttm.o diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index e6bcbfe530ec..a958135cb3fe 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -480,7 +480,6 @@ static void ttm_bo_release(struct kref *kref) struct ttm_buffer_object *bo = container_of(kref, struct ttm_buffer_object, kref); struct ttm_bo_device *bdev = bo->bdev; - size_t acc_size = bo->acc_size; int ret;
if (!bo->deleted) { @@ -541,7 +540,6 @@ static void ttm_bo_release(struct kref *kref) if (!ttm_bo_uses_embedded_gem_object(bo)) dma_resv_fini(&bo->base._resv); bo->destroy(bo); - ttm_mem_global_free(&ttm_mem_glob, acc_size); }
void ttm_bo_put(struct ttm_buffer_object *bo) @@ -1105,25 +1103,13 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev, struct ttm_placement *placement, uint32_t page_alignment, struct ttm_operation_ctx *ctx, - size_t acc_size, struct sg_table *sg, struct dma_resv *resv, void (*destroy) (struct ttm_buffer_object *)) { - struct ttm_mem_global *mem_glob = &ttm_mem_glob; - int ret = 0; unsigned long num_pages; bool locked; - - ret = ttm_mem_global_alloc(mem_glob, acc_size, ctx); - if (ret) { - pr_err("Out of kernel memory\n"); - if (destroy) - (*destroy)(bo); - else - kfree(bo); - return -ENOMEM; - } + int ret = 0;
num_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT; if (num_pages == 0) { @@ -1132,7 +1118,6 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev, (*destroy)(bo); else kfree(bo); - ttm_mem_global_free(mem_glob, acc_size); return -EINVAL; } bo->destroy = destroy ? destroy : ttm_bo_default_destroy; @@ -1153,7 +1138,6 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev, bo->mem.bus.addr = NULL; bo->moving = NULL; bo->mem.placement = 0; - bo->acc_size = acc_size; bo->pin_count = 0; bo->sg = sg; if (resv) { @@ -1213,7 +1197,6 @@ int ttm_bo_init(struct ttm_bo_device *bdev, struct ttm_placement *placement, uint32_t page_alignment, bool interruptible, - size_t acc_size, struct sg_table *sg, struct dma_resv *resv, void (*destroy) (struct ttm_buffer_object *)) @@ -1222,8 +1205,7 @@ int ttm_bo_init(struct ttm_bo_device *bdev, int ret;
ret = ttm_bo_init_reserved(bdev, bo, size, type, placement, - page_alignment, &ctx, acc_size, - sg, resv, destroy); + page_alignment, &ctx, sg, resv, destroy); if (ret) return ret;
@@ -1234,20 +1216,6 @@ int ttm_bo_init(struct ttm_bo_device *bdev, } EXPORT_SYMBOL(ttm_bo_init);
-size_t ttm_bo_dma_acc_size(struct ttm_bo_device *bdev, - unsigned long bo_size, - unsigned struct_size) -{ - unsigned npages = (PAGE_ALIGN(bo_size)) >> PAGE_SHIFT; - size_t size = 0; - - size += ttm_round_pot(struct_size); - size += ttm_round_pot(npages * (2*sizeof(void *) + sizeof(dma_addr_t))); - size += ttm_round_pot(sizeof(struct ttm_tt)); - return size; -} -EXPORT_SYMBOL(ttm_bo_dma_acc_size); - static void ttm_bo_global_kobj_release(struct kobject *kobj) { struct ttm_bo_global *glob = @@ -1264,9 +1232,10 @@ static void ttm_bo_global_release(void) if (--ttm_bo_glob_use_count > 0) goto out;
+ ttm_pool_mgr_fini(); + kobject_del(&glob->kobj); kobject_put(&glob->kobj); - ttm_mem_global_release(&ttm_mem_glob); memset(glob, 0, sizeof(*glob)); out: mutex_unlock(&ttm_global_mutex); @@ -1275,6 +1244,8 @@ static void ttm_bo_global_release(void) static int ttm_bo_global_init(void) { struct ttm_bo_global *glob = &ttm_bo_glob; + uint64_t num_pages, num_dma32_pages; + struct sysinfo si; int ret = 0; unsigned i;
@@ -1282,9 +1253,27 @@ static int ttm_bo_global_init(void) if (++ttm_bo_glob_use_count > 1) goto out;
-	ret = ttm_mem_global_init(&ttm_mem_glob);
-	if (ret)
-		goto out;
+	si_meminfo(&si);
+
+	/* Limit the number of pages in the pool to about 50% of the total
+	 * system memory.
+	 */
+	num_pages = (u64)si.totalram * si.mem_unit;
+	num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT;
+
+	ttm_pool_mgr_init(num_pages);
+
+	/* We generally allow TTM to allocate a maximum of 50% of
+	 * the available system memory.
+	 */
+	num_pages = (u64)si.totalram * si.mem_unit;
+	num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT;
+
+	/* But for DMA32 we limit ourself to only use 2GiB maximum. */
+	num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit;
+	num_dma32_pages = min(num_dma32_pages, 2ULL << 30) >> PAGE_SHIFT;
+
+	ttm_tt_mgr_init(num_pages, num_dma32_pages);
spin_lock_init(&glob->lru_lock); glob->dummy_read_page = alloc_page(__GFP_ZERO | GFP_DMA32); diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c index 7ccb2295cac1..1b6195771011 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -309,7 +309,6 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
kref_init(&fbo->base.kref); fbo->base.destroy = &ttm_transfered_destroy; - fbo->base.acc_size = 0; fbo->base.pin_count = 1; if (bo->type != ttm_bo_type_sg) fbo->base.base.resv = &fbo->base.base._resv; diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c index 1b96780b4989..6a4e093e7a22 100644 --- a/drivers/gpu/drm/ttm/ttm_pool.c +++ b/drivers/gpu/drm/ttm/ttm_pool.c @@ -407,16 +407,10 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt, caching = pages + (1 << order); }
- r = ttm_mem_global_alloc_page(&ttm_mem_glob, p, - (1 << order) * PAGE_SIZE, - ctx); - if (r) - goto error_free_page; - if (dma_addr) { r = ttm_pool_map(pool, order, p, &dma_addr); if (r) - goto error_global_free; + goto error_free_page; }
num_pages -= 1 << order; @@ -430,9 +424,6 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
return 0;
-error_global_free: - ttm_mem_global_free_page(&ttm_mem_glob, p, (1 << order) * PAGE_SIZE); - error_free_page: ttm_pool_free_page(pool, tt->caching, order, p);
@@ -467,8 +458,6 @@ void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt)
order = ttm_pool_page_order(pool, p); num_pages = 1ULL << order; - ttm_mem_global_free_page(&ttm_mem_glob, p, - num_pages * PAGE_SIZE); if (tt->dma_address) ttm_pool_unmap(pool, tt->dma_address[i], num_pages);
diff --git a/drivers/gpu/drm/vmwgfx/Makefile b/drivers/gpu/drm/vmwgfx/Makefile index 31f85f09f1fc..86af5d7b0242 100644 --- a/drivers/gpu/drm/vmwgfx/Makefile +++ b/drivers/gpu/drm/vmwgfx/Makefile @@ -9,7 +9,7 @@ vmwgfx-y := vmwgfx_execbuf.o vmwgfx_gmr.o vmwgfx_kms.o vmwgfx_drv.o \ vmwgfx_cotable.o vmwgfx_so.o vmwgfx_binding.o vmwgfx_msg.o \ vmwgfx_simple_resource.o vmwgfx_va.o vmwgfx_blit.o \ vmwgfx_validation.o vmwgfx_page_dirty.o vmwgfx_streamoutput.o \ - ttm_object.o ttm_lock.o + ttm_object.o ttm_lock.o ttm_memory.o
vmwgfx-$(CONFIG_TRANSPARENT_HUGEPAGE) += vmwgfx_thp.o obj-$(CONFIG_DRM_VMWGFX) := vmwgfx.o diff --git a/drivers/gpu/drm/ttm/ttm_memory.c b/drivers/gpu/drm/vmwgfx/ttm_memory.c similarity index 94% rename from drivers/gpu/drm/ttm/ttm_memory.c rename to drivers/gpu/drm/vmwgfx/ttm_memory.c index c53136048630..fa75110d7fab 100644 --- a/drivers/gpu/drm/ttm/ttm_memory.c +++ b/drivers/gpu/drm/vmwgfx/ttm_memory.c @@ -28,8 +28,6 @@
#define pr_fmt(fmt) "[TTM] " fmt
-#include <drm/ttm/ttm_memory.h> -#include <drm/ttm/ttm_module.h> #include <linux/spinlock.h> #include <linux/sched.h> #include <linux/wait.h> @@ -37,8 +35,11 @@ #include <linux/module.h> #include <linux/slab.h> #include <linux/swap.h> -#include <drm/ttm/ttm_pool.h> -#include <drm/ttm/ttm_tt.h> + +#include <drm/drm_device.h> +#include <drm/drm_file.h> + +#include "ttm_memory.h"
#define TTM_MEMORY_ALLOC_RETRIES 4
@@ -413,9 +414,8 @@ static int ttm_mem_init_dma32_zone(struct ttm_mem_global *glob, } #endif
-int ttm_mem_global_init(struct ttm_mem_global *glob) +int ttm_mem_global_init(struct ttm_mem_global *glob, struct drm_device *dev) { - uint64_t num_pages, num_dma32_pages; struct ttm_mem_zone *zone; struct sysinfo si; int ret; @@ -425,7 +425,8 @@ int ttm_mem_global_init(struct ttm_mem_global *glob) glob->swap_queue = create_singlethread_workqueue("ttm_swap"); INIT_WORK(&glob->work, ttm_shrink_work); ret = kobject_init_and_add( - &glob->kobj, &ttm_mem_glob_kobj_type, ttm_get_kobj(), "memory_accounting"); + &glob->kobj, &ttm_mem_glob_kobj_type, &dev->primary->kdev->kobj, + "memory_accounting"); if (unlikely(ret != 0)) { kobject_put(&glob->kobj); return ret; @@ -453,19 +454,6 @@ int ttm_mem_global_init(struct ttm_mem_global *glob) pr_info("Zone %7s: Available graphics memory: %llu KiB\n", zone->name, (unsigned long long)zone->max_mem >> 10); } - ttm_pool_mgr_init(glob->zone_kernel->max_mem/(2*PAGE_SIZE)); - - /* We generally allow TTM to allocate a maximum of 50% of - * the available system memory. - */ - num_pages = (u64)si.totalram * si.mem_unit; - num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT; - - /* But for DMA32 we limit ourself to only use 2GiB maximum. */ - num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit; - num_dma32_pages = min(num_dma32_pages, 2ULL << 30) >> PAGE_SHIFT; - - ttm_tt_mgr_init(num_pages, num_dma32_pages);
return 0; out_no_zone: @@ -478,9 +466,6 @@ void ttm_mem_global_release(struct ttm_mem_global *glob) struct ttm_mem_zone *zone; unsigned int i;
- /* let the page allocator first stop the shrink work. */ - ttm_pool_mgr_fini(); - flush_workqueue(glob->swap_queue); destroy_workqueue(glob->swap_queue); glob->swap_queue = NULL; diff --git a/include/drm/ttm/ttm_memory.h b/drivers/gpu/drm/vmwgfx/ttm_memory.h similarity index 97% rename from include/drm/ttm/ttm_memory.h rename to drivers/gpu/drm/vmwgfx/ttm_memory.h index c1f167881e33..850ee6c867da 100644 --- a/include/drm/ttm/ttm_memory.h +++ b/drivers/gpu/drm/vmwgfx/ttm_memory.h @@ -35,7 +35,8 @@ #include <linux/errno.h> #include <linux/kobject.h> #include <linux/mm.h> -#include "ttm_bo_api.h" + +#include <drm/ttm/ttm_bo_api.h>
/** * struct ttm_mem_global - Global memory accounting structure. @@ -79,7 +80,7 @@ extern struct ttm_mem_global { #endif } ttm_mem_glob;
-int ttm_mem_global_init(struct ttm_mem_global *glob); +int ttm_mem_global_init(struct ttm_mem_global *glob, struct drm_device *dev); void ttm_mem_global_release(struct ttm_mem_global *glob); int ttm_mem_global_alloc(struct ttm_mem_global *glob, uint64_t memory, struct ttm_operation_ctx *ctx); diff --git a/drivers/gpu/drm/vmwgfx/ttm_object.h b/drivers/gpu/drm/vmwgfx/ttm_object.h index ede26df87c93..49b064f0cb19 100644 --- a/drivers/gpu/drm/vmwgfx/ttm_object.h +++ b/drivers/gpu/drm/vmwgfx/ttm_object.h @@ -43,7 +43,8 @@ #include <linux/rcupdate.h>
#include <drm/drm_hashtab.h> -#include <drm/ttm/ttm_memory.h> + +#include "ttm_memory.h"
/** * enum ttm_ref_type diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c index 263d76ae43f0..5878f0e9199f 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c @@ -508,11 +508,16 @@ int vmw_bo_create_kernel(struct vmw_private *dev_priv, unsigned long size, acc_size = ttm_round_pot(sizeof(*bo)); acc_size += ttm_round_pot(npages * sizeof(void *)); acc_size += ttm_round_pot(sizeof(struct ttm_tt)); + + ret = ttm_mem_global_alloc(&ttm_mem_glob, acc_size, &ctx); + if (unlikely(ret)) + goto error_free; + ret = ttm_bo_init_reserved(&dev_priv->bdev, bo, size, ttm_bo_type_device, placement, 0, - &ctx, acc_size, NULL, NULL, NULL); + &ctx, NULL, NULL, NULL); if (unlikely(ret)) - goto error_free; + goto error_account;
ttm_bo_pin(bo); ttm_bo_unreserve(bo); @@ -520,6 +525,9 @@ int vmw_bo_create_kernel(struct vmw_private *dev_priv, unsigned long size,
return 0;
+error_account: + ttm_mem_global_free(&ttm_mem_glob, acc_size); + error_free: kfree(bo); return ret; @@ -559,11 +567,17 @@ int vmw_bo_init(struct vmw_private *dev_priv, vmw_bo->base.priority = 3; vmw_bo->res_tree = RB_ROOT;
+ ret = ttm_mem_global_alloc(&ttm_mem_glob, acc_size, &ctx); + if (unlikely(ret)) + return ret; + ret = ttm_bo_init_reserved(bdev, &vmw_bo->base, size, ttm_bo_type_device, placement, - 0, &ctx, acc_size, NULL, NULL, bo_free); - if (unlikely(ret)) + 0, &ctx, NULL, NULL, bo_free); + if (unlikely(ret)) { + ttm_mem_global_free(&ttm_mem_glob, acc_size); return ret; + }
if (pin) ttm_bo_pin(&vmw_bo->base); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c index 216daf93022c..1310857879e7 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c @@ -1273,6 +1273,7 @@ static void vmw_remove(struct pci_dev *pdev) { struct drm_device *dev = pci_get_drvdata(pdev);
+ ttm_mem_global_release(&ttm_mem_glob); drm_dev_unregister(dev); vmw_driver_unload(dev); drm_dev_put(dev); @@ -1520,6 +1521,10 @@ static int vmw_probe(struct pci_dev *pdev, const struct pci_device_id *ent) goto err_pci_disable_device; }
+ ret = ttm_mem_global_init(&ttm_mem_glob, dev); + if (ret) + return ret; + dev->pdev = pdev; pci_set_drvdata(pdev, dev);
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c index 6a04261ce760..128e977939c3 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c @@ -575,11 +575,31 @@ static void vmw_ttm_destroy(struct ttm_bo_device *bdev, struct ttm_tt *ttm) static int vmw_ttm_populate(struct ttm_bo_device *bdev, struct ttm_tt *ttm, struct ttm_operation_ctx *ctx) { + unsigned int i; + int ret; + /* TODO: maybe completely drop this ? */ if (ttm_tt_is_populated(ttm)) return 0;
-	return ttm_pool_alloc(&bdev->pool, ttm, ctx);
+	ret = ttm_pool_alloc(&bdev->pool, ttm, ctx);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < ttm->num_pages; ++i) {
+		ret = ttm_mem_global_alloc_page(&ttm_mem_glob, ttm->pages[i],
+						PAGE_SIZE, ctx);
+		if (ret)
+			goto error;
+	}
+	return 0;
+
+error:
+	while (i--)
+		ttm_mem_global_free_page(&ttm_mem_glob, ttm->pages[i],
+					 PAGE_SIZE);
+	ttm_pool_free(&bdev->pool, ttm);
+	return ret;
 }
static void vmw_ttm_unpopulate(struct ttm_bo_device *bdev, @@ -587,6 +607,7 @@ static void vmw_ttm_unpopulate(struct ttm_bo_device *bdev, { struct vmw_ttm_tt *vmw_tt = container_of(ttm, struct vmw_ttm_tt, dma_ttm); + unsigned int i;
if (vmw_tt->mob) { vmw_mob_destroy(vmw_tt->mob); @@ -594,6 +615,11 @@ static void vmw_ttm_unpopulate(struct ttm_bo_device *bdev, }
vmw_ttm_unmap_dma(vmw_tt); + + for (i = 0; i < ttm->num_pages; ++i) + ttm_mem_global_free_page(&ttm_mem_glob, ttm->pages[i], + PAGE_SIZE); + ttm_pool_free(&bdev->pool, ttm); }
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h index 2564e66e67d7..e5027f7cdd68 100644 --- a/include/drm/ttm/ttm_bo_api.h +++ b/include/drm/ttm/ttm_bo_api.h @@ -88,7 +88,6 @@ struct ttm_tt; * @type: The bo type. * @destroy: Destruction function. If NULL, kfree is used. * @num_pages: Actual number of pages. - * @acc_size: Accounted size for this object. * @kref: Reference count of this buffer object. When this refcount reaches * zero, the object is destroyed or put on the delayed delete list. * @mem: structure describing current placement. @@ -126,7 +125,6 @@ struct ttm_buffer_object { enum ttm_bo_type type; void (*destroy) (struct ttm_buffer_object *); unsigned long num_pages; - size_t acc_size;
/** * Members not needing protection. @@ -356,10 +354,6 @@ void ttm_bo_unlock_delayed_workqueue(struct ttm_bo_device *bdev, int resched); bool ttm_bo_eviction_valuable(struct ttm_buffer_object *bo, const struct ttm_place *place);
-size_t ttm_bo_dma_acc_size(struct ttm_bo_device *bdev, - unsigned long bo_size, - unsigned struct_size); - /** * ttm_bo_init_reserved * @@ -370,7 +364,6 @@ size_t ttm_bo_dma_acc_size(struct ttm_bo_device *bdev, * @flags: Initial placement flags. * @page_alignment: Data alignment in pages. * @ctx: TTM operation context for memory allocation. - * @acc_size: Accounted size for this object. * @resv: Pointer to a dma_resv, or NULL to let ttm allocate one. * @destroy: Destroy function. Use NULL for kfree(). * @@ -402,7 +395,6 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev, struct ttm_placement *placement, uint32_t page_alignment, struct ttm_operation_ctx *ctx, - size_t acc_size, struct sg_table *sg, struct dma_resv *resv, void (*destroy) (struct ttm_buffer_object *)); @@ -422,7 +414,6 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev, * holds a pointer to a persistent shmem object. Typically, this would * point to the shmem object backing a GEM object if TTM is used to back a * GEM user interface. - * @acc_size: Accounted size for this object. * @resv: Pointer to a dma_resv, or NULL to let ttm allocate one. * @destroy: Destroy function. Use NULL for kfree(). * @@ -447,7 +438,7 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev, int ttm_bo_init(struct ttm_bo_device *bdev, struct ttm_buffer_object *bo, unsigned long size, enum ttm_bo_type type, struct ttm_placement *placement, - uint32_t page_alignment, bool interrubtible, size_t acc_size, + uint32_t page_alignment, bool interrubtible, struct sg_table *sg, struct dma_resv *resv, void (*destroy) (struct ttm_buffer_object *));
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h index f02f7cf9ae90..54788a5160a0 100644 --- a/include/drm/ttm/ttm_bo_driver.h +++ b/include/drm/ttm/ttm_bo_driver.h @@ -38,7 +38,6 @@ #include <linux/dma-resv.h>
#include "ttm_bo_api.h" -#include "ttm_memory.h" #include "ttm_module.h" #include "ttm_placement.h" #include "ttm_tt.h" diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h index 05ca43f13884..4acb4d5b5eef 100644 --- a/include/drm/ttm/ttm_tt.h +++ b/include/drm/ttm/ttm_tt.h @@ -30,6 +30,7 @@ #include <linux/types.h> #include <drm/ttm/ttm_caching.h>
+struct ttm_bo_device; struct ttm_tt; struct ttm_resource; struct ttm_buffer_object;
Hi Christian,
I love your patch! Yet something to improve:
[auto build test ERROR on next-20201117] [cannot apply to linus/master v5.10-rc4 v5.10-rc3 v5.10-rc2 v5.10-rc4] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Christian-K-nig/drm-ttm-rework-ttm_... base: 7c8ca8129ee9724cb1527895fe6dec942ef07f19 config: arm-allyesconfig (attached as .config) compiler: arm-linux-gnueabi-gcc (GCC) 9.3.0 reproduce (this is a W=1 build): wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # https://github.com/0day-ci/linux/commit/78c57dc48dc2612e7a6a9d265622f3f459a9... git remote add linux-review https://github.com/0day-ci/linux git fetch --no-tags linux-review Christian-K-nig/drm-ttm-rework-ttm_tt-page-limit/20201117-220814 git checkout 78c57dc48dc2612e7a6a9d265622f3f459a92c56 # save the attached .config to linux build tree COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=arm
If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot lkp@intel.com
All errors (new ones prefixed by >>):
arm-linux-gnueabi-ld: drivers/gpu/drm/ttm/ttm_bo.o: in function `ttm_bo_device_init': (.text+0x1bb0): undefined reference to `__aeabi_uldivmod'
arm-linux-gnueabi-ld: (.text+0x1c1c): undefined reference to `__aeabi_uldivmod'
--- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
Hi Christian,
I love your patch! Yet something to improve:
[auto build test ERROR on next-20201117] [cannot apply to linus/master v5.10-rc4 v5.10-rc3 v5.10-rc2 v5.10-rc4] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Christian-K-nig/drm-ttm-rework-ttm_... base: 7c8ca8129ee9724cb1527895fe6dec942ef07f19 config: mips-allyesconfig (attached as .config) compiler: mips-linux-gcc (GCC) 9.3.0 reproduce (this is a W=1 build): wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # https://github.com/0day-ci/linux/commit/78c57dc48dc2612e7a6a9d265622f3f459a9... git remote add linux-review https://github.com/0day-ci/linux git fetch --no-tags linux-review Christian-K-nig/drm-ttm-rework-ttm_tt-page-limit/20201117-220814 git checkout 78c57dc48dc2612e7a6a9d265622f3f459a92c56 # save the attached .config to linux build tree COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=mips
If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot lkp@intel.com
All errors (new ones prefixed by >>):
arch/mips/kernel/head.o: in function `dtb_found': (.ref.text+0xe0): relocation truncated to fit: R_MIPS_26 against `start_kernel' init/main.o: in function `set_reset_devices': main.c:(.init.text+0x20): relocation truncated to fit: R_MIPS_26 against `_mcount' main.c:(.init.text+0x30): relocation truncated to fit: R_MIPS_26 against `__sanitizer_cov_trace_pc' init/main.o: in function `debug_kernel': main.c:(.init.text+0x9c): relocation truncated to fit: R_MIPS_26 against `_mcount' main.c:(.init.text+0xac): relocation truncated to fit: R_MIPS_26 against `__sanitizer_cov_trace_pc' init/main.o: in function `quiet_kernel': main.c:(.init.text+0x118): relocation truncated to fit: R_MIPS_26 against `_mcount' main.c:(.init.text+0x128): relocation truncated to fit: R_MIPS_26 against `__sanitizer_cov_trace_pc' init/main.o: in function `init_setup': main.c:(.init.text+0x1a4): relocation truncated to fit: R_MIPS_26 against `_mcount' main.c:(.init.text+0x1c8): relocation truncated to fit: R_MIPS_26 against `__sanitizer_cov_trace_pc' main.c:(.init.text+0x1e8): relocation truncated to fit: R_MIPS_26 against `__sanitizer_cov_trace_pc' main.c:(.init.text+0x1fc): additional relocation overflows omitted from the output mips-linux-ld: drivers/gpu/drm/ttm/ttm_bo.o: in function `ttm_bo_device_init':
(.text.ttm_bo_device_init+0x1f0): undefined reference to `__udivdi3' mips-linux-ld: (.text.ttm_bo_device_init+0x238): undefined reference to `__udivdi3'
--- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
Increase the amount of system memory drivers can use to about 90% of the total available.
Signed-off-by: Christian König christian.koenig@amd.com --- drivers/gpu/drm/ttm/ttm_bo.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index a958135cb3fe..0a93df93dba4 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -1267,7 +1267,7 @@ static int ttm_bo_global_init(void) * the available system memory. */ num_pages = (u64)si.totalram * si.mem_unit; - num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT; + num_pages = (num_pages * 90 / 100) >> PAGE_SHIFT;
/* But for DMA32 we limit ourself to only use 2GiB maximum. */ num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit;
On 2020-11-17 3:06 p.m., Christian König wrote:
Increase the amount of system memory drivers can use to about 90% of the total available.
Signed-off-by: Christian König christian.koenig@amd.com
drivers/gpu/drm/ttm/ttm_bo.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index a958135cb3fe..0a93df93dba4 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -1267,7 +1267,7 @@ static int ttm_bo_global_init(void) * the available system memory. */ num_pages = (u64)si.totalram * si.mem_unit;
- num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT;
+ num_pages = (num_pages * 90 / 100) >> PAGE_SHIFT;
/* But for DMA32 we limit ourself to only use 2GiB maximum. */ num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit;
This should update the comment added in patch 1.
Am 17.11.20 um 17:38 schrieb Michel Dänzer:
On 2020-11-17 3:06 p.m., Christian König wrote:
Increase the amount of system memory drivers can use to about 90% of the total available.
Signed-off-by: Christian König christian.koenig@amd.com
drivers/gpu/drm/ttm/ttm_bo.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index a958135cb3fe..0a93df93dba4 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -1267,7 +1267,7 @@ static int ttm_bo_global_init(void) * the available system memory. */ num_pages = (u64)si.totalram * si.mem_unit; - num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT; + num_pages = (num_pages * 90 / 100) >> PAGE_SHIFT; /* But for DMA32 we limit ourself to only use 2GiB maximum. */ num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit;
This should update the comment added in patch 1.
Yeah indeed, I've tested this with 70, 75, 80, 85 and finally 90, and it looks like I forgot to update the comment along the way.
Thanks, Christian.
On Tue, Nov 17, 2020 at 03:06:15PM +0100, Christian König wrote:
Increase the amount of system memory drivers can use to about 90% of the total available.
Signed-off-by: Christian König christian.koenig@amd.com
drivers/gpu/drm/ttm/ttm_bo.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index a958135cb3fe..0a93df93dba4 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -1267,7 +1267,7 @@ static int ttm_bo_global_init(void) * the available system memory. */ num_pages = (u64)si.totalram * si.mem_unit;
- num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT;
+ num_pages = (num_pages * 90 / 100) >> PAGE_SHIFT;
I don't think this is the design we want. As long as it was set at "half of system memory" it was clear that a) it's a hack b) precision didn't matter.
But if you go to the limit and still want to keep the "we make sure there's no OOM", then precision starts to matter:
- memory hotplug and hotunplug is a thing
- userspace can mlock, and it's configurable
- drivers can pin_user_pages for IO and random other stuff. Some of it is accounted as some subsystem specific mlock (like rdma does iirc), some is just yolo or short term enough (like)
- none of what we do here takes into consideration any interactions with core mm tracking (like cgroups or numa or anything like that)
If we want to drop the "half of system ram" limit (and yes that makes sense) I think the right design is:
- we give up on the "no OOM" guarantee.
- This means if you want real isolation of tasks, we need cgroups, and we need to integrate ttm cgroups with system memory cgroups somehow. Unlike randomly picked hardcoded limits this should work a lot more reliably and be a lot more useful in practical use in the field.
- This also means that drivers start to fail in interesting ways. I think locking headaches are covered with all the lockdep annotations I've pushed, plus some of the things I still have in-flight (I have a might_alloc() annotation somewhere). That leaves validation of error paths for when allocations fail. IME a very effective way we used in i915 is (ab)using EINTR restarting, which per the drmIoctl uapi spec is required. We could put a debug mode into ttm_tt which randomly fails with -EINTR to make sure it's all working correctly (plus anything else that allocates memory), and unlike real out-of-memory injection piglit and any other cts will complete without failure. Which gives us an excellent metric for "does it work". Actual OOM, even an injected one, tends to make stuff blow up in a way that's very hard to track and make sure you've got good coverage, since all your usual tests die pretty quickly.
- ttm_tt needs to play fair with every other system memory user. We need to register a shrinker for each ttm_tt (so usually one per device I guess), which walks the lru (in shrink_count) and uses dma_resv_trylock for actual shrinking. We probably want to move it to SYSTEM first for that shrinker to pick up, so that there's some global fairness across all ttm_tt.
- for GFP_DMA32 that means zone aware shrinkers. We've never used those, because thus far i915 didn't have any big need for low memory, so we haven't used this in practice. But it's supposed to be a thing.
It's a bit more code than the oneliner above, but I also think it's a lot more solid. Plus it would resolve the last big issue where i915 gem is fairly fundamentally different compared to ttm. For that question I think once Maarten's locking rework for i915 has landed and all the other ttm rework from you and Dave is in, we've resolved them all.
/* But for DMA32 we limit ourself to only use 2GiB maximum. */ num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit; -- 2.25.1
Am 17.11.20 um 18:19 schrieb Daniel Vetter:
On Tue, Nov 17, 2020 at 03:06:15PM +0100, Christian König wrote:
Increase the amount of system memory drivers can use to about 90% of the total available.
Signed-off-by: Christian König christian.koenig@amd.com
drivers/gpu/drm/ttm/ttm_bo.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index a958135cb3fe..0a93df93dba4 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -1267,7 +1267,7 @@ static int ttm_bo_global_init(void) * the available system memory. */ num_pages = (u64)si.totalram * si.mem_unit;
- num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT;
+ num_pages = (num_pages * 90 / 100) >> PAGE_SHIFT;
I don't think this is the design we want. As long as it was set at "half of system memory" it was clear that a) it's a hack b) precision didn't matter.
Yeah, I'm also wondering what to do here exactly. In general I would completely nuke that limit if possible.
But if you go to the limit and still want to keep the "we make sure there's no OOM", then precision starts to matter:
- memory hotplug and hotunplug is a thing
- userspace can mlock, and it's configurable
- drivers can pin_user_pages for IO and random other stuff. Some of it is accounted as some subsystem specific mlock (like rdma does iirc), some is just yolo or short term enough (like)
- none of what we do here takes into consideration any interactions with core mm tracking (like cgroups or numa or anything like that)
OOM is perfectly fine with me, we should just not run into an OOM killer situation because we call shmem_read_mapping_page_gfp() in the shrinker callback.
Any idea how we could guarantee that?
If we want to drop the "half of system ram" limit (and yes that makes sense) I think the right design is:
we give up on the "no OOM" guarantee.
This means if you want real isolation of tasks, we need cgroups, and we need to integrate ttm cgroups with system memory cgroups somehow. Unlike randomly picked hardcoded limits this should work a lot more reliably and be a lot more useful in practical use in the field.
This also means that drivers start to fail in interesting ways. I think locking headaches are covered with all the lockdep annotations I've pushed, plus some of the things I still have in-flight (I have a might_alloc() annotation somewhere). That leaves validation of error paths for when allocations fail. IME a very effective way we used in i915 is (ab)using EINTR restarting, which per the drmIoctl uapi spec is required. We could put a debug mode into ttm_tt which randomly fails with -EINTR to make sure it's all working correctly (plus anything else that allocates memory), and unlike real out-of-memory injection piglit and any other cts will complete without failure. Which gives us an excellent metric for "does it work". Actual OOM, even an injected one, tends to make stuff blow up in a way that's very hard to track and make sure you've got good coverage, since all your usual tests die pretty quickly.
ttm_tt needs to play fair with every other system memory user. We need to register a shrinker for each ttm_tt (so usually one per device I guess), which walks the lru (in shrink_count) and uses dma_resv_trylock for actual shrinking. We probably want to move it to SYSTEM first for that shrinker to pick up, so that there's some global fairness across all ttm_tt.
I already have patches for this.
What's still missing is teaching the OOM killer which task is using the memory, since memory referenced through file descriptors is currently not accounted towards any process resources.
- for GFP_DMA32 that means zone aware shrinkers. We've never used those, because thus far i915 didn't have any big need for low memory, so we haven't used this in practice. But it's supposed to be a thing.
I think we can mostly forget about GFP_DMA32, this should only be used for AGP and other ancient hardware.
Christian.
It's a bit more code than the oneliner above, but I also think it's a lot more solid. Plus it would resolve the last big issue where i915 gem is fairly fundamentally different compared to ttm. For that question I think once Maarten's locking rework for i915 has landed and all the other ttm rework from you and Dave is in, we've resolved them all.
/* But for DMA32 we limit ourself to only use 2GiB maximum. */ num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit; -- 2.25.1
On Wed, Nov 18, 2020 at 02:35:24PM +0100, Christian König wrote:
Am 17.11.20 um 18:19 schrieb Daniel Vetter:
On Tue, Nov 17, 2020 at 03:06:15PM +0100, Christian König wrote:
Increase the amount of system memory drivers can use to about 90% of the total available.
Signed-off-by: Christian König christian.koenig@amd.com
drivers/gpu/drm/ttm/ttm_bo.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index a958135cb3fe..0a93df93dba4 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -1267,7 +1267,7 @@ static int ttm_bo_global_init(void) * the available system memory. */ num_pages = (u64)si.totalram * si.mem_unit;
- num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT;
+ num_pages = (num_pages * 90 / 100) >> PAGE_SHIFT;
I don't think this is the design we want. As long as it was set at "half of system memory" it was clear that a) it's a hack b) precision didn't matter.
Yeah, I'm also wondering what to do here exactly. In general I would completely nuke that limit if possible.
But if you go to the limit and still want to keep the "we make sure there's no OOM", then precision starts to matter:
- memory hotplug and hotunplug is a thing
- userspace can mlock, and it's configurable
- drivers can pin_user_pages for IO and random other stuff. Some of it is accounted as some subsystem specific mlock (like rdma does iirc), some is just yolo or short term enough (like)
- none of what we do here takes into consideration any interactions with core mm tracking (like cgroups or numa or anything like that)
OOM is perfectly fine with me, we should just not run into an OOM killer situation because we call shmem_read_mapping_page_gfp() in the shrinker callback.
Any idea how we could guarantee that?
Uh ...
I just realized I missed something between how ttm works and how i915 works. With i915 directly using shmem, all we do in the shrinker is unpin the shmem pages. With ttm we have the page pool in the middle. And it's needed for dma coherent and other things. This is rather fundamental.
btw I don't think you'll get OOM, you get a lockdep splat because you're recursing on the fs_reclaim fake lock. We should probably put a might_alloc() into shmem_read_mapping_page_gfp().
If we want to drop the "half of system ram" limit (and yes that makes sense) I think the right design is:
we give up on the "no OOM" guarantee.
This means if you want real isolation of tasks, we need cgroups, and we need to integrate ttm cgroups with system memory cgroups somehow. Unlike randomly picked hardcoded limits this should work a lot more reliably and be a lot more useful in practical use in the field.
This also means that drivers start to fail in interesting ways. I think locking headaches are covered with all the lockdep annotations I've pushed, plus some of the things I still have in-flight (I have a might_alloc() annotation somewhere). That leaves validation of error paths for when allocations fail. IME a very effective way we used in i915 is (ab)using EINTR restarting, which per the drmIoctl uapi spec is required. We could put a debug mode into ttm_tt which randomly fails with -EINTR to make sure it's all working correctly (plus anything else that allocates memory), and unlike real out-of-memory injection piglit and any other cts will complete without failure. Which gives us an excellent metric for "does it work". Actual OOM, even an injected one, tends to make stuff blow up in a way that's very hard to track and make sure you've got good coverage, since all your usual tests die pretty quickly.
ttm_tt needs to play fair with every other system memory user. We need to register a shrinker for each ttm_tt (so usually one per device I guess), which walks the lru (in shrink_count) and uses dma_resv_trylock for actual shrinking. We probably want to move it to SYSTEM first for that shrinker to pick up, so that there's some global fairness across all ttm_tt.
I already have patches for this.
What's still missing is teaching the OOM killer which task is using the memory, since memory referenced through file descriptors is currently not accounted towards any process resources.
Yeah I don't think we've fixed that problem for i915 either. It loves to kill random other stuff. In igt we solve this by marking any igt testcase as "pls kill this first". Well "solve".
- for GFP_DMA32 that means zone aware shrinkers. We've never used those, because thus far i915 didn't have any big need for low memory, so we haven't used this in practice. But it's supposed to be a thing.
I think we can mostly forget about GFP_DMA32, this should only be used for AGP and other ancient hardware.
Ok, that's good. So having an arbitrary "only half of GFP_DMA32" for these should also be acceptable, since back then gpus didn't really have gigabytes of vram yet. Biggest r500 seems to have topped out at 512MB.
How much do we require the dma page pool? Afaiui it's only when there's bounce buffers or something nasty like that present. Are there more use cases?
For fixing the TT -> SYSTEM problem of calling shmem_read_mapping_page_gfp from shrinker I think there's 3 solutions:
1) Call shmem_read_mapping_page_gfp with GFP_IO. This way we should not be calling back into our shrinkers, which are at the GFP_FS level. This isn't great, but since page reclaim is a gigantic loop which repeats as long as we're making meaningful forward progress, we should be able to trickle TT pages to SYSTEM pages even under severe memory pressure.
2) For the normal page pool (i.e. neither GFP_DMA32 nor dma_alloc_coherent or write-combine) we stop copying the pages for TT <-> SYSTEM, but just directly pin the pages we get from shmem. Like all the shmem based soc drivers do. No copying, no need for allocations, no problem in shrinkers.
3) For the remaining page pools we'd need ttmfs so that we can control the address_space_operations and directly swap out from our pages to swapout, bypassing shmem and the copying. Iirc Chris Wilson had a prototype once somewhere under gemfs for this. Only needed if 1) isn't fast enough for these because there's too many such cases where we care.
At least on intel hw the direction is very much towards fully coherent, so 1&2 should be good enough. -Daniel
Christian.
It's a bit more code than the oneliner above, but I also think it's a lot more solid. Plus it would resolve the last big issue where i915 gem is fairly fundamentally different compared to ttm. For that question I think once Maarten's locking rework for i915 has landed and all the other ttm rework from you and Dave is in, we've resolved them all.
/* But for DMA32 we limit ourself to only use 2GiB maximum. */ num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit; -- 2.25.1
Am 18.11.20 um 22:15 schrieb Daniel Vetter:
On Wed, Nov 18, 2020 at 02:35:24PM +0100, Christian König wrote:
Am 17.11.20 um 18:19 schrieb Daniel Vetter:
On Tue, Nov 17, 2020 at 03:06:15PM +0100, Christian König wrote:
Increase the amount of system memory drivers can use to about 90% of the total available.
Signed-off-by: Christian König christian.koenig@amd.com
drivers/gpu/drm/ttm/ttm_bo.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index a958135cb3fe..0a93df93dba4 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -1267,7 +1267,7 @@ static int ttm_bo_global_init(void) * the available system memory. */ num_pages = (u64)si.totalram * si.mem_unit;
- num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT;
+ num_pages = (num_pages * 90 / 100) >> PAGE_SHIFT;
I don't think this is the design we want. As long as it was set at "half of system memory" it was clear that a) it's a hack b) precision didn't matter.
Yeah, I'm also wondering what to do here exactly. In general I would completely nuke that limit if possible.
But if you go to the limit and still want to keep the "we make sure there's no OOM", then precision starts to matter:
- memory hotplug and hotunplug is a thing
- userspace can mlock, and it's configurable
- drivers can pin_user_pages for IO and random other stuff. Some of it is accounted as some subsystem specific mlock (like rdma does iirc), some is just yolo or short term enough (like)
- none of what we do here takes into consideration any interactions with core mm tracking (like cgroups or numa or anything like that)
OOM is perfectly fine with me, we should just not run into an OOM killer situation because we call shmem_read_mapping_page_gfp() in the shrinker callback.
Any idea how we could guarantee that?
Uh ...
I just realized I missed something between how ttm works and how i915 works. With i915 directly using shmem, all we do in the shrinker is unpin the shmem pages. With ttm we have the page pool in the middle. And it's needed for dma coherent and other things. This is rather fundamental.
Yeah, the WC/UC handling is otherwise way too slow. Only for normal WB allocations could we do this.
btw I don't think you'll get OOM, you get a lockdep splat because you're recursing on the fs_reclaim fake lock. We should probably put a might_alloc() into shmem_read_mapping_page_gfp().
Yeah, one reason why I haven't enabled that yet.
If we want to drop the "half of system ram" limit (and yes that makes sense) I think the right design is:
we give up on the "no OOM" guarantee.
This means if you want real isolation of tasks, we need cgroups, and we need to integrate ttm cgroups with system memory cgroups somehow. Unlike randomly picked hardcoded limits this should work a lot more reliably and be a lot more useful in practical use in the field.
This also means that drivers start to fail in interesting ways. I think locking headaches are covered with all the lockdep annotations I've pushed, plus some of the things I still have in-flight (I have a might_alloc() annotation somewhere). That leaves validation of error paths for when allocations fail. IME a very effective way we used in i915 is (ab)using EINTR restarting, which per the drmIoctl uapi spec is required. We could put a debug mode into ttm_tt which randomly fails with -EINTR to make sure it's all working correctly (plus anything else that allocates memory), and unlike real out-of-memory injection piglit and any other cts will complete without failure. Which gives us an excellent metric for "does it work". Actual OOM, even an injected one, tends to make stuff blow up in a way that's very hard to track and make sure you've got good coverage, since all your usual tests die pretty quickly.
ttm_tt needs to play fair with every other system memory user. We need to register a shrinker for each ttm_tt (so usually one per device I guess), which walks the lru (in shrink_count) and uses dma_resv_trylock for actual shrinking. We probably want to move it to SYSTEM first for that shrinker to pick up, so that there's some global fairness across all ttm_tt.
I already have patches for this.
What's still missing is teaching the OOM killer which task is using the memory, since memory referenced through file descriptors is currently not accounted towards any process resources.
Yeah I don't think we've fixed that problem for i915 either. It loves to kill random other stuff. In igt we solve this by marking any igt testcase as "pls kill this first". Well "solve".
Well that is a "creative" solution :)
I will rather be looking into whether we can somehow track which files_struct a file belongs to, and whether we can somehow use this in the OOM killer.
My last try of giving each file something wasn't received well.
The real bummer is that you can really nicely create a denial of service using memfd_create() because of this :)
- for GFP_DMA32 that means zone aware shrinkers. We've never used those, because thus far i915 didn't have any big need for low memory, so we haven't used this in practice. But it's supposed to be a thing.
I think we can mostly forget about GFP_DMA32, this should only be used for AGP and other ancient hardware.
Ok, that's good. So having an arbitrary "only half of GFP_DMA32" for these should also be acceptable, since back then gpus didn't really have gigabytes of vram yet. Biggest r500 seems to have topped out at 512MB.
How much do we require the dma page pool? Afaiui it's only when there's bounce buffers or something nasty like that present. Are there more use cases?
Not that I know off.
For fixing the TT -> SYSTEM problem of calling shmem_read_mapping_page_gfp from shrinker I think there's 3 solutions:
1) Call shmem_read_mapping_page_gfp with GFP_IO. This way we should not be calling back into our shrinkers, which are at the GFP_FS level. This isn't great, but since page reclaim is a gigantic loop which repeats as long as we're making meaningful forward progress, we should be able to trickle TT pages to SYSTEM pages even under severe memory pressure.
This is most likely the way to go for us.
2) For the normal page pool (i.e. neither GFP_DMA32 nor dma_alloc_coherent or write-combine) we stop copying the pages for TT <-> SYSTEM, but just directly pin the pages we get from shmem. Like all the shmem based soc drivers do. No copying, no need for allocations, no problem in shrinkers.
Not really doable, WC is just too widely used and shmem doesn't support larger orders.
Regards, Christian.
3) For the remaining page pools we'd need ttmfs so that we can control the address_space_operations and directly swap out from our pages to swapout, bypassing shmem and the copying. Iirc Chris Wilson had a prototype once somewhere under gemfs for this. Only needed if 1) isn't fast enough for these because there's too many such cases where we care.
At least on intel hw the direction is very much towards fully coherent, so 1&2 should be good enough. -Daniel
Christian.
It's a bit more code than the oneliner above, but I also think it's a lot more solid. Plus it would resolve the last big issue where i915 gem is fairly fundamentally different compared to ttm. For that question I think once Maarten's locking rework for i915 has landed and all the other ttm rework from you and Dave is in, we've resolved them all.
/* But for DMA32 we limit ourself to only use 2GiB maximum. */ num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit;
-- 2.25.1
On Thu, Nov 19, 2020 at 04:27:00PM +0100, Christian König wrote:
Am 18.11.20 um 22:15 schrieb Daniel Vetter:
On Wed, Nov 18, 2020 at 02:35:24PM +0100, Christian König wrote:
Am 17.11.20 um 18:19 schrieb Daniel Vetter:
On Tue, Nov 17, 2020 at 03:06:15PM +0100, Christian König wrote:
Increase the amount of system memory drivers can use to about 90% of the total available.
Signed-off-by: Christian König christian.koenig@amd.com
drivers/gpu/drm/ttm/ttm_bo.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index a958135cb3fe..0a93df93dba4 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -1267,7 +1267,7 @@ static int ttm_bo_global_init(void) * the available system memory. */ num_pages = (u64)si.totalram * si.mem_unit;
- num_pages = (num_pages * 50 / 100) >> PAGE_SHIFT;
+ num_pages = (num_pages * 90 / 100) >> PAGE_SHIFT;
I don't think this is the design we want. As long as it was set at "half of system memory" it was clear that a) it's a hack b) precision didn't matter.
Yeah, I'm also wondering what to do here exactly. In general I would completely nuke that limit if possible.
But if you go to the limit and still want to keep the "we make sure there's no OOM", then precision starts to matter:
- memory hotplug and hotunplug is a thing
- userspace can mlock, and it's configurable
- drivers can pin_user_pages for IO and random other stuff. Some of it is accounted as some subsystem specific mlock (like rdma does iirc), some is just yolo or short term enough (like)
- none of what we do here takes into considerations any interactions with core mm tracking (like cgroups or numa or anything like that)
OOM is perfectly fine with me, we should just not run into an OOM killer situation because we call shmem_read_mapping_page_gfp() in the shrinker callback.
Any idea how we could guarantee that?
Uh ...
I just realized I missed something between how ttm works and how i915 works. With i915 directly using shmem, all we do in the shrinker is unpin the shmem pages. With ttm we have the page pool in the middle. And it's needed for dma coherent and other things. This is rather fundamental.
Yeah, the WC/UC handling is otherwise way too slow. Only for normal WB allocations could we do this.
btw I don't think you'll get OOM, you get a lockdep splat because you're recursing on the fs_reclaim fake lock. We should probably put a might_alloc() into shmem_read_mapping_page_gfp().
Yeah, one reason why I haven't enabled that yet.
If we want to drop the "half of system ram" limit (and yes that makes sense) I think the right design is:
- we give up on the "no OOM" guarantee.
This means if you want real isolation of tasks, we need cgroups, and we need to integrate ttm cgroups with system memory cgroups somehow. Unlike randomly picked hardcoded limits this should work a lot more reliably and be a lot more useful in practical use in the field.
This also means that drivers start to fail in interesting ways. I think locking headaches are covered with all the lockdep annotations I've pushed, plus some of the things I still have in-flight (I have a might_alloc() annotation somewhere). That leaves validation of error paths for when allocations fail. IME a very effective way we used in i915 is (ab)using EINTR restarting, which per drmIoctl uapi spec is required. We could put a debug mode into ttm_tt which randomly fails with -EINTR to make sure it's all working correctly (plus anything else that allocates memory), and unlike real out-of-memory injection piglit and any other cts will complete without failure. Which gives us an excellent metric for "does it work". Actual OOM, even injected, tends to make stuff blow up in a way that's very hard to track and make sure you've got good coverage, since all your usual tests die pretty quickly.
- ttm_tt needs to play fair with every other system memory user. We need to register a shrinker for each ttm_tt (so usually one per device I guess), which walks the lru (in shrink_count) and uses dma_resv_trylock for actual shrinking. We probably want to move it to SYSTEM first for that shrinker to pick up, so that there's some global fairness across all ttm_tt.
I already have patches for this.
What's still missing is teaching the OOM killer which task is using the memory since memory referenced through the file descriptors are currently not accounted towards any process resources.
Yeah I don't think we've fixed that problem for i915 either. It loves to kill random other stuff. In igt we solve this by marking any igt testcase as "pls kill this first". Well, "solve".
Well that is a "creative" solution :)
I will rather be looking into whether we can somehow track which files_struct a file belongs to and whether we can somehow use this in the OOM killer.
My last try of giving each file something wasn't received well.
The real bummer is that you can really nicely create a denial of service using memfd_create() because of this :)
Yeah proper accounting is the right fix here I agree.
- for GFP_DMA32 that means zone aware shrinkers. We've never used those, because thus far i915 didn't have any big need for low memory, so we haven't used this in practice. But it's supposed to be a thing.
I think we can mostly forget about GFP_DMA32, this should only be used for AGP and other ancient hardware.
Ok, that's good. So having an arbitrary "only half of GFP_DMA32" for these should also be acceptable, since back then gpus didn't really have gigabytes of vram yet. Biggest r500 seems to have topped out at 512MB.
How much do we require the dma page pool? Afaiui it's only when there's bounce buffers or something nasty like that present. Are there more use cases?
Not that I know off.
For fixing the TT -> SYSTEM problem of calling shmem_read_mapping_page_gfp from the shrinker I think there are 3 solutions:
1) Call shmem_read_mapping_page_gfp with GFP_IO. This way we should not be calling back into our shrinkers, which run at GFP_FS level. This isn't great, but since page reclaim is a gigantic loop which repeats as long as we're making meaningful forward progress, we should be able to trickle TT pages to SYSTEM pages even under severe memory pressure.
This is most likely the way to go for us.
GFP_NOFS is the flag, I mixed them up.
2) For the normal page pool (i.e. neither GFP_DMA32 nor dma_alloc_coherent or write-combine) we stop copying the pages for TT <-> SYSTEM, but just directly pin the pages we get from shmem. Like all the shmem-based soc drivers do. No copying, no need for allocations, no problem in shrinkers.
Not really doable, WC is just too widely used and shmem doesn't support larger orders.
Hm I thought you get large pages automatically, if you reassemble them. It's just the interface that sucks.
btw did you look at 3 below? It might break dma-api assumptions a bit too badly since we're doing nice page flushing behind everyone's back. -Daniel
Regards, Christian.
3) For the remaining page pools we'd need ttmfs so that we can control the address_space_operations and swap our pages out directly, bypassing shmem and the copying. Iirc Chris Wilson had a prototype once somewhere under gemfs for this. Only needed if 1) isn't fast enough because there are too many such cases where we care.
At least on intel hw the direction is very much towards fully coherent, so 1&2 should be good enough. -Daniel
Christian.
It's a bit more code than the oneliner above, but I also think it's a lot more solid. Plus it would resolve the last big issue where i915 gem is fairly fundamentally different compared to ttm. For that question I think once Maarten's locking rework for i915 has landed and all the other ttm rework from you and Dave is in, we've resolved them all.
/* But for DMA32 we limit ourself to only use 2GiB maximum. */ num_dma32_pages = (u64)(si.totalram - si.totalhigh) * si.mem_unit;
-- 2.25.1
Hi Christian,
I love your patch! Yet something to improve:
[auto build test ERROR on next-20201117]
[cannot apply to linus/master v5.10-rc4 v5.10-rc3 v5.10-rc2 v5.10-rc4]
[If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch]

url: https://github.com/0day-ci/linux/commits/Christian-K-nig/drm-ttm-rework-ttm_...
base: 7c8ca8129ee9724cb1527895fe6dec942ef07f19
config: arm-allyesconfig (attached as .config)
compiler: arm-linux-gnueabi-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/ed6ebc71bdbee87b189c3127e4e6a840776a...
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Christian-K-nig/drm-ttm-rework-ttm_tt-page-limit/20201117-220814
        git checkout ed6ebc71bdbee87b189c3127e4e6a840776af4e2
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=arm
If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot lkp@intel.com
All errors (new ones prefixed by >>):
arm-linux-gnueabi-ld: drivers/gpu/drm/ttm/ttm_memory.o: in function `ttm_mem_global_init':
(.text+0x14f4): undefined reference to `__aeabi_uldivmod'
--- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
Hi Christian,
I love your patch! Yet something to improve:
[auto build test ERROR on next-20201117]
[cannot apply to linus/master v5.10-rc4 v5.10-rc3 v5.10-rc2 v5.10-rc4]
[If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch]

url: https://github.com/0day-ci/linux/commits/Christian-K-nig/drm-ttm-rework-ttm_...
base: 7c8ca8129ee9724cb1527895fe6dec942ef07f19
config: mips-allmodconfig (attached as .config)
compiler: mips-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/ed6ebc71bdbee87b189c3127e4e6a840776a...
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Christian-K-nig/drm-ttm-rework-ttm_tt-page-limit/20201117-220814
        git checkout ed6ebc71bdbee87b189c3127e4e6a840776af4e2
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=mips
If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot lkp@intel.com
All errors (new ones prefixed by >>, old ones prefixed by <<):
ERROR: modpost: "__udivdi3" [drivers/gpu/drm/ttm/ttm.ko] undefined!