On Sun, Jun 21, 2020 at 02:03:02AM -0400, Andrey Grodzovsky wrote:
On device removal reroute all CPU mappings to dummy page per drm_file instance or imported GEM object.
Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
 drivers/gpu/drm/ttm/ttm_bo_vm.c | 65 ++++++++++++++++++++++++++++++++++++-----
 1 file changed, 57 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index 389128b..2f8bf5e 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -35,6 +35,8 @@
 #include <drm/ttm/ttm_bo_driver.h>
 #include <drm/ttm/ttm_placement.h>
 #include <drm/drm_vma_manager.h>
+#include <drm/drm_drv.h>
+#include <drm/drm_file.h>
 #include <linux/mm.h>
 #include <linux/pfn_t.h>
 #include <linux/rbtree.h>
@@ -328,19 +330,66 @@ vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
Hm, I think the diff and code flow look a bit bad now. What about renaming the current function to __ttm_bo_vm_fault and then having something like the below:
ttm_bo_vm_fault(args)
{
	if (drm_dev_enter()) {
		__ttm_bo_vm_fault(args);
		drm_dev_exit();
	} else {
		drm_gem_insert_dummy_pfn();
	}
}
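Fleshed out, that could look roughly like this (just a sketch; __ttm_bo_vm_fault and drm_gem_insert_dummy_pfn are hypothetical names from this suggestion, not existing functions):

```c
/*
 * Sketch only: __ttm_bo_vm_fault() would be the current fault path
 * renamed, and drm_gem_insert_dummy_pfn() a hypothetical generic
 * helper that backs the mapping with the dummy page.
 */
vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
{
	struct ttm_buffer_object *bo = vmf->vma->vm_private_data;
	struct drm_device *ddev = bo->base.dev;
	vm_fault_t ret;
	int idx;

	if (drm_dev_enter(ddev, &idx)) {
		/* Device still present, take the normal fault path */
		ret = __ttm_bo_vm_fault(vmf);
		drm_dev_exit(idx);
	} else {
		/* Device unplugged, reroute to the dummy page */
		ret = drm_gem_insert_dummy_pfn(vmf);
	}

	return ret;
}
```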
I think drm_gem_insert_dummy_pfn() should be portable across drivers, so another nice point to try to unify drivers as much as possible. -Daniel
 	pgprot_t prot;
 	struct ttm_buffer_object *bo = vma->vm_private_data;
 	vm_fault_t ret;
+	int idx;
+	struct drm_device *ddev = bo->base.dev;

-	ret = ttm_bo_vm_reserve(bo, vmf);
-	if (ret)
-		return ret;
+	if (drm_dev_enter(ddev, &idx)) {
+		ret = ttm_bo_vm_reserve(bo, vmf);
+		if (ret)
+			goto exit;

-	prot = vma->vm_page_prot;
-	ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT);
-	if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
-		return ret;
+		prot = vma->vm_page_prot;
+		ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT);
+		if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
+			goto exit;

-	dma_resv_unlock(bo->base.resv);
+		dma_resv_unlock(bo->base.resv);
+exit:
+		drm_dev_exit(idx);
+		return ret;
+	} else {
+		struct drm_file *file = NULL;
+		struct page *dummy_page = NULL;
+		int handle;

-	return ret;
+		/* We are faulting on imported BO from dma_buf */
+		if (bo->base.dma_buf && bo->base.import_attach) {
+			dummy_page = bo->base.dummy_page;
+		/* We are faulting on non imported BO, find drm_file owning the BO*/
Uh, we can't fish that out of the vma->vm_file pointer somehow? Or is that one all wrong? Doing this kind of list walk looks pretty horrible.
If the vma doesn't have the right pointer I guess next option is that we store the drm_file page in gem_bo->dummy_page, and replace it on first export. But that's going to be tricky to track ...
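If vm_file is usable here, the whole walk could collapse to something like this (untested sketch; assumes the mapping was established through the drm chardev, so vm_file->private_data is the owning struct drm_file — which may well be wrong for forwarded dma-buf mmaps):

```c
/*
 * Sketch, not verified for the dma-buf mmap forwarding case:
 * for mappings set up via the drm chardev, the struct file's
 * private_data is the struct drm_file that owns the mapping.
 */
struct drm_file *file = vmf->vma->vm_file->private_data;

dummy_page = file->dummy_page;
```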
+		} else {
+			struct drm_gem_object *gobj;
+
+			mutex_lock(&ddev->filelist_mutex);
+			list_for_each_entry(file, &ddev->filelist, lhead) {
+				spin_lock(&file->table_lock);
+				idr_for_each_entry(&file->object_idr, gobj, handle) {
+					if (gobj == &bo->base) {
+						dummy_page = file->dummy_page;
+						break;
+					}
+				}
+				spin_unlock(&file->table_lock);
+			}
+			mutex_unlock(&ddev->filelist_mutex);
+		}
+
+		if (dummy_page) {
+			/*
+			 * Let do_fault complete the PTE install etc. using vmf->page
+			 *
+			 * TODO - should i call free_page somewhere ?
Nah, instead don't call get_page. The page will be around as long as there's a reference for the drm_file or gem_bo, which is longer than any mmap. Otherwise yes this would leak really badly.
+			 */
+			get_page(dummy_page);
+			vmf->page = dummy_page;
+			return 0;
+		} else {
+			return VM_FAULT_SIGSEGV;
Hm that would be a kernel bug, wouldn't it? WARN_ON() required here imo. -Daniel
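I.e. something like this (sketch of the suggested shape):

```c
	if (dummy_page) {
		get_page(dummy_page);
		vmf->page = dummy_page;
		return 0;
	}

	/* No dummy page at this point would be a kernel bug */
	WARN_ON(1);
	return VM_FAULT_SIGSEGV;
```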
+		}
+	}
 }
 EXPORT_SYMBOL(ttm_bo_vm_fault);
-- 
2.7.4