This series adds new BO allocation flags PANFROST_BO_HEAP and PANFROST_BO_NOEXEC. The heap allocations are paged in on GPU page faults.
Hopefully, this is the last version, but I made a few changes after implementing per FD address spaces on top of this. Primarily, I moved the GPU VA mapping into GEM open/close functions. This is a bit cleaner and will work when we need access to per FD objects.
I also found a problem with how the GPU reset is handled with the MMU. With the move to a threaded irq handler, it's not okay to unmask the MMU at any time as was possible with panfrost_mmu_enable() calls. The result was some consolidation of the reset calls.
Tomeu, I don't think there should be any changes in test results, but I didn't add your Tested-by as there are a few too many changes for me to feel comfortable carrying it over. I did test this with your series.
This is based on drm-misc-next. An updated branch is here[1].
v4:
 - Move GPU VA mapping/unmapping to GEM open/close
 - Add rework of reset handling
 - Rebased on top of madvise support

v3:
 - Retain shared irq support, splitting IRQ changes to separate patch (6/8)
 - Stop leaking SG tables
 - Prevent mmap and pinning pages for heap BOs
Rob
[1] git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux.git panfrost/heap-noexec
Rob Herring (9):
  drm/gem: Allow sparsely populated page arrays in drm_gem_put_pages
  drm/shmem: Put pages independent of a SG table being set
  drm/panfrost: Restructure the GEM object creation
  drm/panfrost: Split panfrost_mmu_map SG list mapping to its own function
  drm/panfrost: Add a no execute flag for BO allocations
  drm/panfrost: Consolidate reset handling
  drm/panfrost: Convert MMU IRQ handler to threaded handler
  drm/panfrost: Add support for GPU heap allocations
  drm/panfrost: Bump driver version to 1.1
 drivers/gpu/drm/drm_gem.c                  |   3 +
 drivers/gpu/drm/drm_gem_shmem_helper.c     |   4 +-
 drivers/gpu/drm/panfrost/TODO              |   2 -
 drivers/gpu/drm/panfrost/panfrost_device.c |  16 +-
 drivers/gpu/drm/panfrost/panfrost_device.h |   1 +
 drivers/gpu/drm/panfrost/panfrost_drv.c    |  65 +++++--
 drivers/gpu/drm/panfrost/panfrost_gem.c    | 138 ++++++++++---
 drivers/gpu/drm/panfrost/panfrost_gem.h    |  17 +-
 drivers/gpu/drm/panfrost/panfrost_job.c    |   7 +-
 drivers/gpu/drm/panfrost/panfrost_mmu.c    | 216 +++++++++++++++++----
 drivers/gpu/drm/panfrost/panfrost_mmu.h    |   3 +-
 include/uapi/drm/panfrost_drm.h            |   3 +
 12 files changed, 374 insertions(+), 101 deletions(-)
-- 
2.20.1
Panfrost needs pages allocated on demand via GPU page faults. When releasing the pages, the only thing preventing the use of drm_gem_put_pages() is the need to skip over unpopulated pages, so allow NULL struct page pointers to be skipped.
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <maxime.ripard@bootlin.com>
Cc: Sean Paul <sean@poorly.run>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
---
 drivers/gpu/drm/drm_gem.c | 3 +++
 1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index afc38cece3f5..a2dd198177f2 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -633,6 +633,9 @@ void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
 
 	pagevec_init(&pvec);
 	for (i = 0; i < npages; i++) {
+		if (!pages[i])
+			continue;
+
 		if (dirty)
 			set_page_dirty(pages[i]);
If a driver does its own management of pages, the shmem helper object's pages array could be allocated when a SG table is not. There's not really any good reason to tie putting pages to having a SG table when freeing the object, so just put the pages whenever the pages array is populated.
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <maxime.ripard@bootlin.com>
Cc: Sean Paul <sean@poorly.run>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Eric Anholt <eric@anholt.net>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 4b442576de1c..df8f2c8adb2b 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -119,11 +119,11 @@ void drm_gem_shmem_free_object(struct drm_gem_object *obj)
 		if (shmem->sgt) {
 			dma_unmap_sg(obj->dev->dev, shmem->sgt->sgl,
 				     shmem->sgt->nents, DMA_BIDIRECTIONAL);
-
-			drm_gem_shmem_put_pages(shmem);
 			sg_free_table(shmem->sgt);
 			kfree(shmem->sgt);
 		}
+		if (shmem->pages)
+			drm_gem_shmem_put_pages(shmem);
 	}
 
 	WARN_ON(shmem->pages_use_count);
Rob Herring robh@kernel.org writes:
If a driver does its own management of pages, the shmem helper object's pages array could be allocated when a SG table is not. There's not really any good reason to tie putting pages to having a SG table when freeing the object, so just put the pages whenever the pages array is populated.
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <maxime.ripard@bootlin.com>
Cc: Sean Paul <sean@poorly.run>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Eric Anholt <eric@anholt.net>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Patch 1, 2:
Reviewed-by: Eric Anholt eric@anholt.net
Setting the GPU VA when creating the GEM object doesn't allow for any conditional adjustments to the mapping. In preparation to support adjusting the mapping and per FD address spaces, restructure the GEM object creation to map and unmap the GEM object in the GEM object .open() and .close() hooks.
While panfrost_gem_free_object() and panfrost_gem_prime_import_sg_table() are not really needed after this commit, keep them as we'll need them in subsequent commits.
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
---
Steven, Alyssa, I kept your tags, but please take another look as things moved around a bit here.
 drivers/gpu/drm/panfrost/panfrost_drv.c |  9 ----
 drivers/gpu/drm/panfrost/panfrost_gem.c | 67 ++++++++++++++-----------
 2 files changed, 37 insertions(+), 39 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index 926d021ee202..2894cfbbce2b 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -78,7 +78,6 @@ static int panfrost_ioctl_get_param(struct drm_device *ddev, void *data, struct
 static int panfrost_ioctl_create_bo(struct drm_device *dev, void *data,
 		struct drm_file *file)
 {
-	int ret;
 	struct drm_gem_shmem_object *shmem;
 	struct drm_panfrost_create_bo *args = data;
 
@@ -90,17 +89,9 @@ static int panfrost_ioctl_create_bo(struct drm_device *dev, void *data,
 	if (IS_ERR(shmem))
 		return PTR_ERR(shmem);
 
-	ret = panfrost_mmu_map(to_panfrost_bo(&shmem->base));
-	if (ret)
-		goto err_free;
-
 	args->offset = to_panfrost_bo(&shmem->base)->node.start << PAGE_SHIFT;
 
 	return 0;
-
-err_free:
-	drm_gem_handle_delete(file, args->handle);
-	return ret;
 }
 
 /**
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 67d374184340..3933f83ba6b0 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -15,6 +15,39 @@
  * BO.
  */
 static void panfrost_gem_free_object(struct drm_gem_object *obj)
+{
+	mutex_lock(&pfdev->shrinker_lock);
+	if (!list_empty(&bo->base.madv_list))
+		list_del(&bo->base.madv_list);
+	mutex_unlock(&pfdev->shrinker_lock);
+
+	drm_gem_shmem_free_object(obj);
+}
+
+static int panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_priv)
+{
+	int ret;
+	size_t size = obj->size;
+	u64 align = size >= SZ_2M ? SZ_2M >> PAGE_SHIFT : 0;
+	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
+	struct panfrost_device *pfdev = obj->dev->dev_private;
+
+	spin_lock(&pfdev->mm_lock);
+	ret = drm_mm_insert_node_generic(&pfdev->mm, &bo->node,
+					 size >> PAGE_SHIFT, align, 0, 0);
+	if (ret)
+		goto out;
+
+	ret = panfrost_mmu_map(bo);
+	if (ret)
+		drm_mm_remove_node(&bo->node);
+
+out:
+	spin_unlock(&pfdev->mm_lock);
+	return ret;
+}
+
+static void panfrost_gem_close(struct drm_gem_object *obj, struct drm_file *file_priv)
 {
 	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
 	struct panfrost_device *pfdev = obj->dev->dev_private;
@@ -23,19 +56,15 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
 	panfrost_mmu_unmap(bo);
 
 	spin_lock(&pfdev->mm_lock);
-	drm_mm_remove_node(&bo->node);
+	if (drm_mm_node_allocated(&bo->node))
+		drm_mm_remove_node(&bo->node);
 	spin_unlock(&pfdev->mm_lock);
-
-	mutex_lock(&pfdev->shrinker_lock);
-	if (!list_empty(&bo->base.madv_list))
-		list_del(&bo->base.madv_list);
-	mutex_unlock(&pfdev->shrinker_lock);
-
-	drm_gem_shmem_free_object(obj);
 }
 
 static const struct drm_gem_object_funcs panfrost_gem_funcs = {
 	.free = panfrost_gem_free_object,
+	.open = panfrost_gem_open,
+	.close = panfrost_gem_close,
 	.print_info = drm_gem_shmem_print_info,
 	.pin = drm_gem_shmem_pin,
 	.unpin = drm_gem_shmem_unpin,
@@ -55,10 +84,7 @@ static const struct drm_gem_object_funcs panfrost_gem_funcs = {
  */
 struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t size)
 {
-	int ret;
-	struct panfrost_device *pfdev = dev->dev_private;
 	struct panfrost_gem_object *obj;
-	u64 align;
 
 	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
 	if (!obj)
@@ -66,21 +92,7 @@ struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t
 
 	obj->base.base.funcs = &panfrost_gem_funcs;
 
-	size = roundup(size, PAGE_SIZE);
-	align = size >= SZ_2M ? SZ_2M >> PAGE_SHIFT : 0;
-
-	spin_lock(&pfdev->mm_lock);
-	ret = drm_mm_insert_node_generic(&pfdev->mm, &obj->node,
-					 size >> PAGE_SHIFT, align, 0, 0);
-	spin_unlock(&pfdev->mm_lock);
-	if (ret)
-		goto free_obj;
-
 	return &obj->base.base;
-
-free_obj:
-	kfree(obj);
-	return ERR_PTR(ret);
 }
 
 struct drm_gem_object *
@@ -89,15 +101,10 @@ panfrost_gem_prime_import_sg_table(struct drm_device *dev,
 				   struct sg_table *sgt)
 {
 	struct drm_gem_object *obj;
-	struct panfrost_gem_object *pobj;
 
 	obj = drm_gem_shmem_prime_import_sg_table(dev, attach, sgt);
 	if (IS_ERR(obj))
 		return ERR_CAST(obj);
 
-	pobj = to_panfrost_bo(obj);
-
-	panfrost_mmu_map(pobj);
-
 	return obj;
 }
-- 
2.20.1
On 08/08/2019 23:21, Rob Herring wrote:
Setting the GPU VA when creating the GEM object doesn't allow for any conditional adjustments to the mapping. In preparation to support adjusting the mapping and per FD address spaces, restructure the GEM object creation to map and unmap the GEM object in the GEM object .open() and .close() hooks.
While panfrost_gem_free_object() and panfrost_gem_prime_import_sg_table() are not really needed after this commit, keep them as we'll need them in subsequent commits.
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Steven, Alyssa, I kept your tags, but please take another look as things moved around a bit here.
Sadly this doesn't compile (bisection is broken), see below:
[...]
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 67d374184340..3933f83ba6b0 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -15,6 +15,39 @@
  * BO.
  */
 static void panfrost_gem_free_object(struct drm_gem_object *obj)
+{
+	mutex_lock(&pfdev->shrinker_lock);
+	if (!list_empty(&bo->base.madv_list))
+		list_del(&bo->base.madv_list);
+	mutex_unlock(&pfdev->shrinker_lock);
+
+	drm_gem_shmem_free_object(obj);
+}
'pfdev' and 'bo' haven't been defined yet.
Steve
Still A-b :)

On Thu, Aug 08, 2019 at 04:21:54PM -0600, Rob Herring wrote:
Setting the GPU VA when creating the GEM object doesn't allow for any conditional adjustments to the mapping. In preparation to support adjusting the mapping and per FD address spaces, restructure the GEM object creation to map and unmap the GEM object in the GEM object .open() and .close() hooks.
While panfrost_gem_free_object() and panfrost_gem_prime_import_sg_table() are not really needed after this commit, keep them as we'll need them in subsequent commits.
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Steven, Alyssa, I kept your tags, but please take another look as things moved around a bit here.
[...]
In preparation to create partial GPU mappings of BOs on page faults, split out the SG list handling of panfrost_mmu_map().
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
---
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 52 +++++++++++++++----------
 1 file changed, 31 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index 92ac995dd9c6..b4ac149b2399 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -145,27 +145,13 @@ static size_t get_pgsize(u64 addr, size_t size)
 	return SZ_2M;
 }
 
-int panfrost_mmu_map(struct panfrost_gem_object *bo)
+static int mmu_map_sg(struct panfrost_device *pfdev, u64 iova,
+		      int prot, struct sg_table *sgt)
 {
-	struct drm_gem_object *obj = &bo->base.base;
-	struct panfrost_device *pfdev = to_panfrost_device(obj->dev);
-	struct io_pgtable_ops *ops = pfdev->mmu->pgtbl_ops;
-	u64 iova = bo->node.start << PAGE_SHIFT;
 	unsigned int count;
 	struct scatterlist *sgl;
-	struct sg_table *sgt;
-	int ret;
-
-	if (WARN_ON(bo->is_mapped))
-		return 0;
-
-	sgt = drm_gem_shmem_get_pages_sgt(obj);
-	if (WARN_ON(IS_ERR(sgt)))
-		return PTR_ERR(sgt);
-
-	ret = pm_runtime_get_sync(pfdev->dev);
-	if (ret < 0)
-		return ret;
+	struct io_pgtable_ops *ops = pfdev->mmu->pgtbl_ops;
+	u64 start_iova = iova;
 
 	mutex_lock(&pfdev->mmu->lock);
 
@@ -178,18 +164,42 @@ int panfrost_mmu_map(struct panfrost_gem_object *bo)
 		while (len) {
 			size_t pgsize = get_pgsize(iova | paddr, len);
 
-			ops->map(ops, iova, paddr, pgsize, IOMMU_WRITE | IOMMU_READ);
+			ops->map(ops, iova, paddr, pgsize, prot);
 			iova += pgsize;
 			paddr += pgsize;
 			len -= pgsize;
 		}
 	}
 
-	mmu_hw_do_operation(pfdev, 0, bo->node.start << PAGE_SHIFT,
-			    bo->node.size << PAGE_SHIFT, AS_COMMAND_FLUSH_PT);
+	mmu_hw_do_operation(pfdev, 0, start_iova, iova - start_iova,
+			    AS_COMMAND_FLUSH_PT);
 
 	mutex_unlock(&pfdev->mmu->lock);
 
+	return 0;
+}
+
+int panfrost_mmu_map(struct panfrost_gem_object *bo)
+{
+	struct drm_gem_object *obj = &bo->base.base;
+	struct panfrost_device *pfdev = to_panfrost_device(obj->dev);
+	struct sg_table *sgt;
+	int ret;
+	int prot = IOMMU_READ | IOMMU_WRITE;
+
+	if (WARN_ON(bo->is_mapped))
+		return 0;
+
+	sgt = drm_gem_shmem_get_pages_sgt(obj);
+	if (WARN_ON(IS_ERR(sgt)))
+		return PTR_ERR(sgt);
+
+	ret = pm_runtime_get_sync(pfdev->dev);
+	if (ret < 0)
+		return ret;
+
+	mmu_map_sg(pfdev, bo->node.start << PAGE_SHIFT, prot, sgt);
+
 	pm_runtime_mark_last_busy(pfdev->dev);
 	pm_runtime_put_autosuspend(pfdev->dev);
 	bo->is_mapped = true;
Executable buffers have an alignment restriction in that they can't cross a 16MB boundary, as the GPU program counter is only 24 bits. This restriction is currently not handled and we just get lucky. As current userspace assumes all BOs are executable, that has to remain the default. So add a new PANFROST_BO_NOEXEC flag to allow userspace to indicate which BOs are not executable.
There is also a restriction that executable buffers cannot start or end on a 4GB boundary. This is mostly avoided as there is only 4GB of space currently and the beginning is already blocked out for NULL ptr detection. Add support to handle this restriction fully regardless of the current constraints.
For existing userspace, all created BOs remain executable, but the GPU VA alignment will be increased to the size of the BO. This shouldn't matter as there is plenty of GPU VA space.
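To make that concrete, here is a minimal standalone sketch (illustrative only, not part of the patch) of the per-BO alignment rule the series ends up with:

#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define SZ_2M      0x200000UL

/*
 * Sketch of the alignment described above: executable BOs are aligned
 * to their own size in pages, so for the expected power-of-two sizes
 * below 16MB an aligned start can never straddle a 16MB boundary.
 * Non-executable BOs keep a 2MB alignment when large enough to benefit
 * from huge pages.
 */
static uint64_t gpu_va_align_pages(size_t size, int noexec)
{
	if (!noexec)
		return size >> PAGE_SHIFT;	/* executable: align to BO size */
	return size >= SZ_2M ? SZ_2M >> PAGE_SHIFT : 0;
}

For example, a 64KB executable BO lands on a 64KB-aligned GPU VA, and since 64KB divides 16MB evenly the 24-bit program counter window is never split.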
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
---
 drivers/gpu/drm/panfrost/panfrost_drv.c | 42 +++++++++++++++++----
 drivers/gpu/drm/panfrost/panfrost_gem.c | 50 ++++++++++++++++++++++++-
 drivers/gpu/drm/panfrost/panfrost_gem.h |  9 ++++-
 drivers/gpu/drm/panfrost/panfrost_mmu.c |  3 ++
 include/uapi/drm/panfrost_drm.h         |  2 +
 5 files changed, 96 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index 2894cfbbce2b..f070f2dd9f84 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -78,18 +78,19 @@ static int panfrost_ioctl_get_param(struct drm_device *ddev, void *data, struct
 static int panfrost_ioctl_create_bo(struct drm_device *dev, void *data,
 		struct drm_file *file)
 {
-	struct drm_gem_shmem_object *shmem;
+	struct panfrost_gem_object *bo;
 	struct drm_panfrost_create_bo *args = data;
 
-	if (!args->size || args->flags || args->pad)
+	if (!args->size || args->pad ||
+	    (args->flags & ~PANFROST_BO_NOEXEC))
 		return -EINVAL;
 
-	shmem = drm_gem_shmem_create_with_handle(file, dev, args->size,
-						 &args->handle);
-	if (IS_ERR(shmem))
-		return PTR_ERR(shmem);
+	bo = panfrost_gem_create_with_handle(file, dev, args->size, args->flags,
+					     &args->handle);
+	if (IS_ERR(bo))
+		return PTR_ERR(bo);
 
-	args->offset = to_panfrost_bo(&shmem->base)->node.start << PAGE_SHIFT;
+	args->offset = bo->node.start << PAGE_SHIFT;
 
 	return 0;
 }
@@ -364,6 +365,32 @@ int panfrost_unstable_ioctl_check(void)
 	return 0;
 }
 
+#define PFN_4G		(SZ_4G >> PAGE_SHIFT)
+#define PFN_4G_MASK	(PFN_4G - 1)
+#define PFN_16M		(SZ_16M >> PAGE_SHIFT)
+
+static void panfrost_drm_mm_color_adjust(const struct drm_mm_node *node,
+					 unsigned long color,
+					 u64 *start, u64 *end)
+{
+	/* Executable buffers can't start or end on a 4GB boundary */
+	if (!(color & PANFROST_BO_NOEXEC)) {
+		u64 next_seg;
+
+		if ((*start & PFN_4G_MASK) == 0)
+			(*start)++;
+
+		if ((*end & PFN_4G_MASK) == 0)
+			(*end)--;
+
+		next_seg = ALIGN(*start, PFN_4G);
+		if (next_seg - *start <= PFN_16M)
+			*start = next_seg + 1;
+
+		*end = min(*end, ALIGN(*start, PFN_4G) - 1);
+	}
+}
+
 static int
 panfrost_open(struct drm_device *dev, struct drm_file *file)
 {
@@ -461,6 +488,7 @@ static int panfrost_probe(struct platform_device *pdev)
 
 	/* 4G enough for now. can be 48-bit */
 	drm_mm_init(&pfdev->mm, SZ_32M >> PAGE_SHIFT, (SZ_4G - SZ_32M) >> PAGE_SHIFT);
+	pfdev->mm.color_adjust = panfrost_drm_mm_color_adjust;
 
 	pm_runtime_use_autosuspend(pfdev->dev);
 	pm_runtime_set_autosuspend_delay(pfdev->dev, 50); /* ~3 frames */
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 3933f83ba6b0..f398109b8e0c 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -28,13 +28,25 @@ static int panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_priv)
 {
 	int ret;
 	size_t size = obj->size;
-	u64 align = size >= SZ_2M ? SZ_2M >> PAGE_SHIFT : 0;
+	u64 align;
 	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
 	struct panfrost_device *pfdev = obj->dev->dev_private;
+	unsigned long color = bo->noexec ? PANFROST_BO_NOEXEC : 0;
+
+	/*
+	 * Executable buffers cannot cross a 16MB boundary as the program
+	 * counter is 24-bits. We assume executable buffers will be less than
+	 * 16MB and aligning executable buffers to their size will avoid
+	 * crossing a 16MB boundary.
+	 */
+	if (!bo->noexec)
+		align = size >> PAGE_SHIFT;
+	else
+		align = size >= SZ_2M ? SZ_2M >> PAGE_SHIFT : 0;
 
 	spin_lock(&pfdev->mm_lock);
 	ret = drm_mm_insert_node_generic(&pfdev->mm, &bo->node,
-					 size >> PAGE_SHIFT, align, 0, 0);
+					 size >> PAGE_SHIFT, align, color, 0);
 	if (ret)
 		goto out;
 
@@ -95,16 +107,50 @@ struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t
 	return &obj->base.base;
 }
 
+struct panfrost_gem_object *
+panfrost_gem_create_with_handle(struct drm_file *file_priv,
+				struct drm_device *dev, size_t size,
+				u32 flags,
+				uint32_t *handle)
+{
+	int ret;
+	struct drm_gem_shmem_object *shmem;
+	struct panfrost_gem_object *bo;
+
+	shmem = drm_gem_shmem_create(dev, size);
+	if (IS_ERR(shmem))
+		return ERR_CAST(shmem);
+
+	bo = to_panfrost_bo(&shmem->base);
+	bo->noexec = !!(flags & PANFROST_BO_NOEXEC);
+
+	/*
+	 * Allocate an id of idr table where the obj is registered
+	 * and handle has the id what user can see.
+	 */
+	ret = drm_gem_handle_create(file_priv, &shmem->base, handle);
+	/* drop reference from allocate - handle holds it now. */
+	drm_gem_object_put_unlocked(&shmem->base);
+	if (ret)
+		return ERR_PTR(ret);
+
+	return bo;
+}
+
 struct drm_gem_object *
 panfrost_gem_prime_import_sg_table(struct drm_device *dev,
 				   struct dma_buf_attachment *attach,
 				   struct sg_table *sgt)
 {
 	struct drm_gem_object *obj;
+	struct panfrost_gem_object *bo;
 
 	obj = drm_gem_shmem_prime_import_sg_table(dev, attach, sgt);
 	if (IS_ERR(obj))
 		return ERR_CAST(obj);
 
+	bo = to_panfrost_bo(obj);
+	bo->noexec = true;
+
 	return obj;
 }
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h
index 5f51f881ea3f..d4c7aa1790a7 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.h
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.h
@@ -11,7 +11,8 @@ struct panfrost_gem_object {
 	struct drm_gem_shmem_object base;
 
 	struct drm_mm_node node;
-	bool is_mapped;
+	bool is_mapped :1;
+	bool noexec :1;
 };
 
 static inline
@@ -27,6 +28,12 @@ panfrost_gem_prime_import_sg_table(struct drm_device *dev,
 				   struct dma_buf_attachment *attach,
 				   struct sg_table *sgt);
 
+struct panfrost_gem_object *
+panfrost_gem_create_with_handle(struct drm_file *file_priv,
+				struct drm_device *dev, size_t size,
+				u32 flags,
+				uint32_t *handle);
+
 void panfrost_gem_shrinker_init(struct drm_device *dev);
 void panfrost_gem_shrinker_cleanup(struct drm_device *dev);
 
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index b4ac149b2399..eba6ce785ef0 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -190,6 +190,9 @@ int panfrost_mmu_map(struct panfrost_gem_object *bo)
 	if (WARN_ON(bo->is_mapped))
 		return 0;
 
+	if (bo->noexec)
+		prot |= IOMMU_NOEXEC;
+
 	sgt = drm_gem_shmem_get_pages_sgt(obj);
 	if (WARN_ON(IS_ERR(sgt)))
 		return PTR_ERR(sgt);
diff --git a/include/uapi/drm/panfrost_drm.h b/include/uapi/drm/panfrost_drm.h
index 1e547f9692e9..b80c20d17dec 100644
--- a/include/uapi/drm/panfrost_drm.h
+++ b/include/uapi/drm/panfrost_drm.h
@@ -84,6 +84,8 @@ struct drm_panfrost_wait_bo {
 	__s64 timeout_ns;	/* absolute */
 };
 
+#define PANFROST_BO_NOEXEC	1
+
 /**
  * struct drm_panfrost_create_bo - ioctl argument for creating Panfrost BOs.
  *
Runtime PM resume and job timeouts both call the same sequence of functions, so consolidate them to a common function. This will make changing the reset related code easier. The MMU also needs some re-initialization on reset, so rework its call. In the process, we hide the address space details within the MMU code in preparation to support multiple address spaces.
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
---
New patch in this version.
 drivers/gpu/drm/panfrost/panfrost_device.c | 16 ++++++++++------
 drivers/gpu/drm/panfrost/panfrost_device.h |  1 +
 drivers/gpu/drm/panfrost/panfrost_job.c    |  7 +------
 drivers/gpu/drm/panfrost/panfrost_mmu.c    | 16 +++++++++-------
 drivers/gpu/drm/panfrost/panfrost_mmu.h    |  3 +--
 5 files changed, 22 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_device.c b/drivers/gpu/drm/panfrost/panfrost_device.c
index 8a111d7c0200..9814f4ccbd26 100644
--- a/drivers/gpu/drm/panfrost/panfrost_device.c
+++ b/drivers/gpu/drm/panfrost/panfrost_device.c
@@ -254,18 +254,22 @@ const char *panfrost_exception_name(struct panfrost_device *pfdev, u32 exception
 	return "UNKNOWN";
 }
 
+void panfrost_device_reset(struct panfrost_device *pfdev)
+{
+	panfrost_gpu_soft_reset(pfdev);
+
+	panfrost_gpu_power_on(pfdev);
+	panfrost_mmu_reset(pfdev);
+	panfrost_job_enable_interrupts(pfdev);
+}
+
 #ifdef CONFIG_PM
 int panfrost_device_resume(struct device *dev)
 {
 	struct platform_device *pdev = to_platform_device(dev);
 	struct panfrost_device *pfdev = platform_get_drvdata(pdev);
 
-	panfrost_gpu_soft_reset(pfdev);
-
-	/* TODO: Re-enable all other address spaces */
-	panfrost_gpu_power_on(pfdev);
-	panfrost_mmu_enable(pfdev, 0);
-	panfrost_job_enable_interrupts(pfdev);
+	panfrost_device_reset(pfdev);
 
 	panfrost_devfreq_resume(pfdev);
 
 	return 0;
diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
index 038b32c62484..4e5641db9c7e 100644
--- a/drivers/gpu/drm/panfrost/panfrost_device.h
+++ b/drivers/gpu/drm/panfrost/panfrost_device.h
@@ -132,6 +132,7 @@ int panfrost_unstable_ioctl_check(void);
 
 int panfrost_device_init(struct panfrost_device *pfdev);
 void panfrost_device_fini(struct panfrost_device *pfdev);
+void panfrost_device_reset(struct panfrost_device *pfdev);
 
 int panfrost_device_resume(struct device *dev);
 int panfrost_device_suspend(struct device *dev);
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index 9bb9260d9181..d567ce98494c 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -395,12 +395,7 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
 	/* panfrost_core_dump(pfdev); */
 
 	panfrost_devfreq_record_transition(pfdev, js);
-	panfrost_gpu_soft_reset(pfdev);
-
-	/* TODO: Re-enable all other address spaces */
-	panfrost_mmu_enable(pfdev, 0);
-	panfrost_gpu_power_on(pfdev);
-	panfrost_job_enable_interrupts(pfdev);
+	panfrost_device_reset(pfdev);
 
 	for (i = 0; i < NUM_JOB_SLOTS; i++)
 		drm_sched_resubmit_jobs(&pfdev->js->queue[i].sched);
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index eba6ce785ef0..13757427b886 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -105,15 +105,12 @@ static int mmu_hw_do_operation(struct panfrost_device *pfdev, u32 as_nr,
 	return ret;
 }
 
-void panfrost_mmu_enable(struct panfrost_device *pfdev, u32 as_nr)
+static void panfrost_mmu_enable(struct panfrost_device *pfdev, u32 as_nr)
 {
 	struct io_pgtable_cfg *cfg = &pfdev->mmu->pgtbl_cfg;
 	u64 transtab = cfg->arm_mali_lpae_cfg.transtab;
 	u64 memattr = cfg->arm_mali_lpae_cfg.memattr;
 
-	mmu_write(pfdev, MMU_INT_CLEAR, ~0);
-	mmu_write(pfdev, MMU_INT_MASK, ~0);
-
 	mmu_write(pfdev, AS_TRANSTAB_LO(as_nr), transtab & 0xffffffffUL);
 	mmu_write(pfdev, AS_TRANSTAB_HI(as_nr), transtab >> 32);
 
@@ -137,6 +134,14 @@ static void mmu_disable(struct panfrost_device *pfdev, u32 as_nr)
 	write_cmd(pfdev, as_nr, AS_COMMAND_UPDATE);
 }
 
+void panfrost_mmu_reset(struct panfrost_device *pfdev)
+{
+	panfrost_mmu_enable(pfdev, 0);
+
+	mmu_write(pfdev, MMU_INT_CLEAR, ~0);
+	mmu_write(pfdev, MMU_INT_MASK, ~0);
+}
+
 static size_t get_pgsize(u64 addr, size_t size)
 {
 	if (addr & (SZ_2M - 1) || size < SZ_2M)
@@ -375,9 +380,6 @@ int panfrost_mmu_init(struct panfrost_device *pfdev)
 		dev_err(pfdev->dev, "failed to request mmu irq");
 		return err;
 	}
-	mmu_write(pfdev, MMU_INT_CLEAR, ~0);
-	mmu_write(pfdev, MMU_INT_MASK, ~0);
-
 	pfdev->mmu->pgtbl_cfg = (struct io_pgtable_cfg) {
 		.pgsize_bitmap	= SZ_4K | SZ_2M,
 		.ias		= FIELD_GET(0xff, pfdev->features.mmu_features),
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.h b/drivers/gpu/drm/panfrost/panfrost_mmu.h
index f5878d86a5ce..d5f9b24537db 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.h
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.h
@@ -11,7 +11,6 @@ void panfrost_mmu_unmap(struct panfrost_gem_object *bo);
 
 int panfrost_mmu_init(struct panfrost_device *pfdev);
 void panfrost_mmu_fini(struct panfrost_device *pfdev);
-
-void panfrost_mmu_enable(struct panfrost_device *pfdev, u32 as_nr);
+void panfrost_mmu_reset(struct panfrost_device *pfdev);
 
 #endif
-- 
2.20.1
On 08/08/2019 23:21, Rob Herring wrote:
Runtime PM resume and job timeouts both call the same sequence of functions, so consolidate them to a common function. This will make changing the reset related code easier. The MMU also needs some re-initialization on reset, so rework its call. In the process, we hide the address space details within the MMU code in preparation to support multiple address spaces.
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Reviewed-by: Steven Price steven.price@arm.com
[...]
R-b, good stuff.
On Thu, Aug 08, 2019 at 04:21:57PM -0600, Rob Herring wrote:
Runtime PM resume and job timeouts both call the same sequence of functions, so consolidate them to a common function. This will make changing the reset related code easier. The MMU also needs some re-initialization on reset, so rework its call. In the process, we hide the address space details within the MMU code in preparation to support multiple address spaces.
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
[...]
In preparation to handle mapping of page faults, we need the MMU handler to be threaded as code paths take a mutex.
As the IRQ may be shared, we can't use the default handler and must disable the MMU interrupts locally.
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
---
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index 13757427b886..b609ee55a872 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -305,12 +305,20 @@ static const char *access_type_name(struct panfrost_device *pfdev,
 static irqreturn_t panfrost_mmu_irq_handler(int irq, void *data)
 {
 	struct panfrost_device *pfdev = data;
-	u32 status = mmu_read(pfdev, MMU_INT_STAT);
-	int i;
 
-	if (!status)
+	if (!mmu_read(pfdev, MMU_INT_STAT))
 		return IRQ_NONE;
 
+	mmu_write(pfdev, MMU_INT_MASK, 0);
+	return IRQ_WAKE_THREAD;
+}
+
+static irqreturn_t panfrost_mmu_irq_handler_thread(int irq, void *data)
+{
+	struct panfrost_device *pfdev = data;
+	u32 status = mmu_read(pfdev, MMU_INT_RAWSTAT);
+	int i;
+
 	dev_err(pfdev->dev, "mmu irq status=%x\n", status);
 
 	for (i = 0; status; i++) {
@@ -355,6 +363,7 @@ static irqreturn_t panfrost_mmu_irq_handler(int irq, void *data)
 		status &= ~mask;
 	}
 
+	mmu_write(pfdev, MMU_INT_MASK, ~0);
 	return IRQ_HANDLED;
 };
 
@@ -373,8 +382,9 @@ int panfrost_mmu_init(struct panfrost_device *pfdev)
 	if (irq <= 0)
 		return -ENODEV;
 
-	err = devm_request_irq(pfdev->dev, irq, panfrost_mmu_irq_handler,
-			       IRQF_SHARED, "mmu", pfdev);
+	err = devm_request_threaded_irq(pfdev->dev, irq, panfrost_mmu_irq_handler,
+					panfrost_mmu_irq_handler_thread,
+					IRQF_SHARED, "mmu", pfdev);
 	if (err) {
 		dev_err(pfdev->dev, "failed to request mmu irq");
The midgard/bifrost GPUs need to allocate GPU heap memory which is allocated on GPU page faults and not pinned in memory. The vendor driver calls this functionality GROW_ON_GPF.
This implementation assumes that BOs allocated with the PANFROST_BO_HEAP flag are never mmapped or exported. Both of those may actually work, but I'm unsure if there's some interaction there. It would cause the whole object to be pinned in memory, which would defeat the point of this.
On faults, we map in 2MB at a time in order to utilize huge pages (if enabled). Currently, once we've mapped pages in, they are only unmapped if the BO is freed. Once we add shrinker support, we can unmap pages with the shrinker.
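For reference, here is an untested userspace-side sketch of allocating such a heap BO, assuming only the uapi added in this series (note the ioctl requires pairing PANFROST_BO_HEAP with PANFROST_BO_NOEXEC):

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/panfrost_drm.h>

/*
 * Hedged sketch: create a 16MB heap BO whose backing pages are only
 * faulted in as the GPU touches them. Heap BOs must also be NOEXEC,
 * and they cannot be mmapped or pinned. Real code would use libdrm's
 * drmIoctl() to retry on EINTR.
 */
static int create_heap_bo(int drm_fd, uint32_t *handle, uint64_t *gpu_va)
{
	struct drm_panfrost_create_bo arg = {
		.size = 16 * 1024 * 1024,
		.flags = PANFROST_BO_HEAP | PANFROST_BO_NOEXEC,
	};

	if (ioctl(drm_fd, DRM_IOCTL_PANFROST_CREATE_BO, &arg))
		return -1;

	*handle = arg.handle;	/* GEM handle */
	*gpu_va = arg.offset;	/* start of the BO in the GPU VA space */
	return 0;
}

The returned offset is the start of the BO in the GPU VA space; there is deliberately no CPU mapping to go with it.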
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
---
 drivers/gpu/drm/panfrost/TODO           |   2 -
 drivers/gpu/drm/panfrost/panfrost_drv.c |  11 +-
 drivers/gpu/drm/panfrost/panfrost_gem.c |  43 +++++++-
 drivers/gpu/drm/panfrost/panfrost_gem.h |   8 ++
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 129 ++++++++++++++++++++++--
 include/uapi/drm/panfrost_drm.h         |   1 +
 6 files changed, 177 insertions(+), 17 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/TODO b/drivers/gpu/drm/panfrost/TODO index d4c7fb7e7d13..e7727b292355 100644 --- a/drivers/gpu/drm/panfrost/TODO +++ b/drivers/gpu/drm/panfrost/TODO @@ -10,8 +10,6 @@ The hard part is handling when more address spaces are needed than what the h/w provides.
-- Support pinning pages on demand (GPU page faults). - - Support userspace controlled GPU virtual addresses. Needed for Vulkan. (Tomeu)
- Compute job support. So called 'compute only' jobs need to be plumbed up to diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c index f070f2dd9f84..994dbc37e4d0 100644 --- a/drivers/gpu/drm/panfrost/panfrost_drv.c +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c @@ -82,7 +82,12 @@ static int panfrost_ioctl_create_bo(struct drm_device *dev, void *data, struct drm_panfrost_create_bo *args = data;
if (!args->size || args->pad || - (args->flags & ~PANFROST_BO_NOEXEC)) + (args->flags & ~(PANFROST_BO_NOEXEC | PANFROST_BO_HEAP))) + return -EINVAL; + + /* Heaps should never be executable */ + if ((args->flags & PANFROST_BO_HEAP) && + !(args->flags & PANFROST_BO_NOEXEC)) return -EINVAL;
bo = panfrost_gem_create_with_handle(file, dev, args->size, args->flags, @@ -297,6 +302,10 @@ static int panfrost_ioctl_mmap_bo(struct drm_device *dev, void *data, return -ENOENT; }
+ /* Don't allow mmapping of heap objects as pages are not pinned. */ + if (to_panfrost_bo(gem_obj)->is_heap) + return -EINVAL; + ret = drm_gem_create_mmap_offset(gem_obj); if (ret == 0) args->offset = drm_vma_node_offset_addr(&gem_obj->vma_node); diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c index f398109b8e0c..37a3a6ed4617 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem.c +++ b/drivers/gpu/drm/panfrost/panfrost_gem.c @@ -16,6 +16,23 @@ */ static void panfrost_gem_free_object(struct drm_gem_object *obj) { + struct panfrost_gem_object *bo = to_panfrost_bo(obj); + struct panfrost_device *pfdev = obj->dev->dev_private; + + if (bo->sgts) { + int i; + int n_sgt = bo->base.base.size / SZ_2M; + + for (i = 0; i < n_sgt; i++) { + if (bo->sgts[i].sgl) { + dma_unmap_sg(pfdev->dev, bo->sgts[i].sgl, + bo->sgts[i].nents, DMA_BIDIRECTIONAL); + sg_free_table(&bo->sgts[i]); + } + } + kfree(bo->sgts); + } + mutex_lock(&pfdev->shrinker_lock); if (!list_empty(&bo->base.madv_list)) list_del(&bo->base.madv_list); @@ -50,10 +67,11 @@ static int panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_p if (ret) goto out;
- ret = panfrost_mmu_map(bo); - if (ret) - drm_mm_remove_node(&bo->node); - + if (!bo->is_heap) { + ret = panfrost_mmu_map(bo); + if (ret) + drm_mm_remove_node(&bo->node); + } out: spin_unlock(&pfdev->mm_lock); return ret; @@ -73,12 +91,20 @@ static void panfrost_gem_close(struct drm_gem_object *obj, struct drm_file *file spin_unlock(&pfdev->mm_lock); }
+static int panfrost_gem_pin(struct drm_gem_object *obj) +{ + if (to_panfrost_bo(obj)->is_heap) + return -EINVAL; + + return drm_gem_shmem_pin(obj); +} + static const struct drm_gem_object_funcs panfrost_gem_funcs = { .free = panfrost_gem_free_object, .open = panfrost_gem_open, .close = panfrost_gem_close, .print_info = drm_gem_shmem_print_info, - .pin = drm_gem_shmem_pin, + .pin = panfrost_gem_pin, .unpin = drm_gem_shmem_unpin, .get_sg_table = drm_gem_shmem_get_sg_table, .vmap = drm_gem_shmem_vmap, @@ -117,12 +143,19 @@ panfrost_gem_create_with_handle(struct drm_file *file_priv, struct drm_gem_shmem_object *shmem; struct panfrost_gem_object *bo;
+ /* Round up heap allocations to 2MB to keep fault handling simple */ + if (flags & PANFROST_BO_HEAP) + size = roundup(size, SZ_2M); + else + size = roundup(size, PAGE_SIZE); + shmem = drm_gem_shmem_create(dev, size); if (IS_ERR(shmem)) return ERR_CAST(shmem);
bo = to_panfrost_bo(&shmem->base); bo->noexec = !!(flags & PANFROST_BO_NOEXEC); + bo->is_heap = !!(flags & PANFROST_BO_HEAP);
/* * Allocate an id of idr table where the obj is registered diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h index d4c7aa1790a7..e10f58316915 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem.h +++ b/drivers/gpu/drm/panfrost/panfrost_gem.h @@ -9,10 +9,12 @@
struct panfrost_gem_object { struct drm_gem_shmem_object base; + struct sg_table *sgts;
struct drm_mm_node node; bool is_mapped :1; bool noexec :1; + bool is_heap :1; };
static inline @@ -21,6 +23,12 @@ struct panfrost_gem_object *to_panfrost_bo(struct drm_gem_object *obj) return container_of(to_drm_gem_shmem_obj(obj), struct panfrost_gem_object, base); }
+static inline +struct panfrost_gem_object *drm_mm_node_to_panfrost_bo(struct drm_mm_node *node) +{ + return container_of(node, struct panfrost_gem_object, node); +} + struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t size);
struct drm_gem_object * diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c index b609ee55a872..2ed411f09d80 100644 --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c @@ -2,6 +2,7 @@ /* Copyright 2019 Linaro, Ltd, Rob Herring robh@kernel.org */ #include <linux/bitfield.h> #include <linux/delay.h> +#include <linux/dma-mapping.h> #include <linux/interrupt.h> #include <linux/io.h> #include <linux/iopoll.h> @@ -9,6 +10,7 @@ #include <linux/iommu.h> #include <linux/platform_device.h> #include <linux/pm_runtime.h> +#include <linux/shmem_fs.h> #include <linux/sizes.h>
#include "panfrost_device.h" @@ -240,12 +242,12 @@ void panfrost_mmu_unmap(struct panfrost_gem_object *bo) size_t unmapped_page; size_t pgsize = get_pgsize(iova, len - unmapped_len);
- unmapped_page = ops->unmap(ops, iova, pgsize); - if (!unmapped_page) - break; - - iova += unmapped_page; - unmapped_len += unmapped_page; + if (ops->iova_to_phys(ops, iova)) { + unmapped_page = ops->unmap(ops, iova, pgsize); + WARN_ON(unmapped_page != pgsize); + } + iova += pgsize; + unmapped_len += pgsize; }
mmu_hw_do_operation(pfdev, 0, bo->node.start << PAGE_SHIFT, @@ -281,6 +283,105 @@ static const struct iommu_gather_ops mmu_tlb_ops = { .tlb_sync = mmu_tlb_sync_context, };
+static struct drm_mm_node *addr_to_drm_mm_node(struct panfrost_device *pfdev, int as, u64 addr) +{ + struct drm_mm_node *node; + u64 offset = addr >> PAGE_SHIFT; + + drm_mm_for_each_node(node, &pfdev->mm) { + if (offset >= node->start && offset < (node->start + node->size)) + return node; + } + return NULL; +} + +#define NUM_FAULT_PAGES (SZ_2M / PAGE_SIZE) + +int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, u64 addr) +{ + int ret, i; + struct drm_mm_node *node; + struct panfrost_gem_object *bo; + struct address_space *mapping; + pgoff_t page_offset; + struct sg_table *sgt; + struct page **pages; + + node = addr_to_drm_mm_node(pfdev, as, addr); + if (!node) + return -ENOENT; + + bo = drm_mm_node_to_panfrost_bo(node); + if (!bo->is_heap) { + dev_WARN(pfdev->dev, "matching BO is not heap type (GPU VA = %llx)", + node->start << PAGE_SHIFT); + return -EINVAL; + } + /* Assume 2MB alignment and size multiple */ + addr &= ~((u64)SZ_2M - 1); + page_offset = addr >> PAGE_SHIFT; + page_offset -= node->start; + + mutex_lock(&bo->base.pages_lock); + + if (!bo->base.pages) { + bo->sgts = kvmalloc_array(bo->base.base.size / SZ_2M, + sizeof(struct sg_table), GFP_KERNEL | __GFP_ZERO); + if (!bo->sgts) + return -ENOMEM; + + pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT, + sizeof(struct page *), GFP_KERNEL | __GFP_ZERO); + if (!pages) { + kfree(bo->sgts); + bo->sgts = NULL; + return -ENOMEM; + } + bo->base.pages = pages; + bo->base.pages_use_count = 1; + } else + pages = bo->base.pages; + + mapping = bo->base.base.filp->f_mapping; + mapping_set_unevictable(mapping); + + for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) { + pages[i] = shmem_read_mapping_page(mapping, i); + if (IS_ERR(pages[i])) { + mutex_unlock(&bo->base.pages_lock); + ret = PTR_ERR(pages[i]); + goto err_pages; + } + } + + mutex_unlock(&bo->base.pages_lock); + + sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)]; + ret = sg_alloc_table_from_pages(sgt, pages + page_offset, + NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL); + if (ret) + goto err_pages; + + if (!dma_map_sg(pfdev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL)) { + ret = -EINVAL; + goto err_map; + } + + mmu_map_sg(pfdev, addr, IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, sgt); + + bo->is_mapped = true; + + dev_dbg(pfdev->dev, "mapped page fault @ %llx", addr); + + return 0; + +err_map: + sg_free_table(sgt); +err_pages: + drm_gem_shmem_put_pages(&bo->base); + return ret; +} + static const char *access_type_name(struct panfrost_device *pfdev, u32 fault_status) { @@ -317,9 +418,7 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int irq, void *data) { struct panfrost_device *pfdev = data; u32 status = mmu_read(pfdev, MMU_INT_RAWSTAT); - int i; - - dev_err(pfdev->dev, "mmu irq status=%x\n", status); + int i, ret;
 
     for (i = 0; status; i++) {
         u32 mask = BIT(i) | BIT(i + 16);
@@ -341,6 +440,18 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int irq, void *data)
         access_type = (fault_status >> 8) & 0x3;
         source_id = (fault_status >> 16);
 
+        /* Page fault only */
+        if ((status & mask) == BIT(i)) {
+            WARN_ON(exception_type < 0xC1 || exception_type > 0xC4);
+
+            ret = panfrost_mmu_map_fault_addr(pfdev, i, addr);
+            if (!ret) {
+                mmu_write(pfdev, MMU_INT_CLEAR, BIT(i));
+                status &= ~mask;
+                continue;
+            }
+        }
+
         /* terminal fault, print info about the fault */
         dev_err(pfdev->dev,
             "Unhandled Page fault in AS%d at VA 0x%016llX\n"

diff --git a/include/uapi/drm/panfrost_drm.h b/include/uapi/drm/panfrost_drm.h
index b80c20d17dec..ec19db1eead8 100644
--- a/include/uapi/drm/panfrost_drm.h
+++ b/include/uapi/drm/panfrost_drm.h
@@ -85,6 +85,7 @@ struct drm_panfrost_wait_bo {
 };
 
 #define PANFROST_BO_NOEXEC	1
+#define PANFROST_BO_HEAP	2
 
 /**
  * struct drm_panfrost_create_bo - ioctl argument for creating Panfrost BOs.
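As a rough illustration of the new uapi, here is a minimal, hypothetical userspace sketch of allocating a heap BO with the new flags. It is not part of the series; the helper name is made up, the header path may differ per setup, and error handling is simplified. Per the create_bo ioctl, the returned offset is the kernel-assigned GPU VA.

    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>

    #include "panfrost_drm.h"    /* the uapi header patched above */

    /*
     * Allocate a growable heap BO. The kernel rounds the size up to 2MB
     * and assigns a GPU VA at creation; backing pages are only faulted
     * in as the GPU touches them.
     */
    static int create_heap_bo(int fd, uint32_t size, uint32_t *handle,
                              uint64_t *gpu_va)
    {
        struct drm_panfrost_create_bo arg;

        memset(&arg, 0, sizeof(arg));
        arg.size = size;
        /* HEAP without NOEXEC is rejected by panfrost_ioctl_create_bo() */
        arg.flags = PANFROST_BO_HEAP | PANFROST_BO_NOEXEC;

        if (ioctl(fd, DRM_IOCTL_PANFROST_CREATE_BO, &arg))
            return -1;

        *handle = arg.handle;
        *gpu_va = arg.offset;
        return 0;
    }

Note that mmapping or pinning such a handle is rejected with -EINVAL by the mmap_bo and pin paths in this patch, since heap pages are never pinned.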
On 08/08/2019 23:21, Rob Herring wrote:
> The midgard/bifrost GPUs need to allocate GPU heap memory which is
> allocated on GPU page faults and not pinned in memory. The vendor driver
> calls this functionality GROW_ON_GPF.
>
> This implementation assumes that BOs allocated with the PANFROST_BO_HEAP
> flag are never mmapped or exported. Both of those may actually work, but
> I'm unsure if there's some interaction there. It would cause the whole
> object to be pinned in memory which would defeat the point of this.
>
> On faults, we map in 2MB at a time in order to utilize huge pages (if
> enabled). Currently, once we've mapped pages in, they are only unmapped
> if the BO is freed. Once we add shrinker support, we can unmap pages
> with the shrinker.
>
> Cc: Tomeu Vizoso tomeu.vizoso@collabora.com
> Cc: Boris Brezillon boris.brezillon@collabora.com
> Cc: Robin Murphy robin.murphy@arm.com
> Cc: Steven Price steven.price@arm.com
> Acked-by: Alyssa Rosenzweig alyssa.rosenzweig@collabora.com
> Signed-off-by: Rob Herring robh@kernel.org
Reviewed-by: Steven Price steven.price@arm.com
Although a couple of comments below...
> ---
>  drivers/gpu/drm/panfrost/TODO           |   2 -
>  drivers/gpu/drm/panfrost/panfrost_drv.c |  11 +-
>  drivers/gpu/drm/panfrost/panfrost_gem.c |  43 +++++++-
>  drivers/gpu/drm/panfrost/panfrost_gem.h |   8 ++
>  drivers/gpu/drm/panfrost/panfrost_mmu.c | 129 ++++++++++++++++++++++--
>  include/uapi/drm/panfrost_drm.h         |   1 +
>  6 files changed, 177 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/gpu/drm/panfrost/TODO b/drivers/gpu/drm/panfrost/TODO
> index d4c7fb7e7d13..e7727b292355 100644
> --- a/drivers/gpu/drm/panfrost/TODO
> +++ b/drivers/gpu/drm/panfrost/TODO
> @@ -10,8 +10,6 @@
>  The hard part is handling when more address spaces are needed than what
>  the h/w provides.
>
> -- Support pinning pages on demand (GPU page faults).
> -
>  - Support userspace controlled GPU virtual addresses. Needed for Vulkan. (Tomeu)
>
>  - Compute job support. So called 'compute only' jobs need to be plumbed up to
>
> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> index f070f2dd9f84..994dbc37e4d0 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> @@ -82,7 +82,12 @@ static int panfrost_ioctl_create_bo(struct drm_device *dev, void *data,
>      struct drm_panfrost_create_bo *args = data;
>
>      if (!args->size || args->pad ||
> -        (args->flags & ~PANFROST_BO_NOEXEC))
> +        (args->flags & ~(PANFROST_BO_NOEXEC | PANFROST_BO_HEAP)))
>          return -EINVAL;
>
> +    /* Heaps should never be executable */
> +    if ((args->flags & PANFROST_BO_HEAP) &&
> +        !(args->flags & PANFROST_BO_NOEXEC))
> +        return -EINVAL;
> +
>      bo = panfrost_gem_create_with_handle(file, dev, args->size, args->flags,
>
> @@ -297,6 +302,10 @@ static int panfrost_ioctl_mmap_bo(struct drm_device *dev, void *data,
>          return -ENOENT;
>      }
>
> +    /* Don't allow mmapping of heap objects as pages are not pinned. */
> +    if (to_panfrost_bo(gem_obj)->is_heap)
> +        return -EINVAL;
> +
>      ret = drm_gem_create_mmap_offset(gem_obj);
>      if (ret == 0)
>          args->offset = drm_vma_node_offset_addr(&gem_obj->vma_node);
>
> diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
> index f398109b8e0c..37a3a6ed4617 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_gem.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
> @@ -16,6 +16,23 @@
>   */
>  static void panfrost_gem_free_object(struct drm_gem_object *obj)
>  {
>      struct panfrost_gem_object *bo = to_panfrost_bo(obj);
>      struct panfrost_device *pfdev = obj->dev->dev_private;
>
This is where things start building again - these need moving to patch 3.
> +    if (bo->sgts) {
> +        int i;
> +        int n_sgt = bo->base.base.size / SZ_2M;
> +
> +        for (i = 0; i < n_sgt; i++) {
> +            if (bo->sgts[i].sgl) {
> +                dma_unmap_sg(pfdev->dev, bo->sgts[i].sgl,
> +                         bo->sgts[i].nents, DMA_BIDIRECTIONAL);
> +                sg_free_table(&bo->sgts[i]);
> +            }
> +        }
> +        kfree(bo->sgts);
> +    }
> +
>      mutex_lock(&pfdev->shrinker_lock);
>      if (!list_empty(&bo->base.madv_list))
>          list_del(&bo->base.madv_list);
>
> @@ -50,10 +67,11 @@ static int panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_priv)
>      if (ret)
>          goto out;
>
> -    ret = panfrost_mmu_map(bo);
> -    if (ret)
> -        drm_mm_remove_node(&bo->node);
> +    if (!bo->is_heap) {
> +        ret = panfrost_mmu_map(bo);
> +        if (ret)
> +            drm_mm_remove_node(&bo->node);
> +    }
>  out:
>      spin_unlock(&pfdev->mm_lock);
>      return ret;
> @@ -73,12 +91,20 @@ static void panfrost_gem_close(struct drm_gem_object *obj, struct drm_file *file_priv)
>      spin_unlock(&pfdev->mm_lock);
>  }
>
> +static int panfrost_gem_pin(struct drm_gem_object *obj)
> +{
> +    if (to_panfrost_bo(obj)->is_heap)
> +        return -EINVAL;
> +
> +    return drm_gem_shmem_pin(obj);
> +}
> +
>  static const struct drm_gem_object_funcs panfrost_gem_funcs = {
>      .free = panfrost_gem_free_object,
>      .open = panfrost_gem_open,
>      .close = panfrost_gem_close,
>      .print_info = drm_gem_shmem_print_info,
> -    .pin = drm_gem_shmem_pin,
> +    .pin = panfrost_gem_pin,
>      .unpin = drm_gem_shmem_unpin,
>      .get_sg_table = drm_gem_shmem_get_sg_table,
>      .vmap = drm_gem_shmem_vmap,
>
> @@ -117,12 +143,19 @@ panfrost_gem_create_with_handle(struct drm_file *file_priv,
>      struct drm_gem_shmem_object *shmem;
>      struct panfrost_gem_object *bo;
>
> +    /* Round up heap allocations to 2MB to keep fault handling simple */
> +    if (flags & PANFROST_BO_HEAP)
> +        size = roundup(size, SZ_2M);
> +    else
> +        size = roundup(size, PAGE_SIZE);
If I understand correctly this roundup to PAGE_SIZE is now redundant (although not harmful).
Steve
[...]
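To make the rounding Steve comments on above concrete, here is a small standalone sketch (an editorial illustration, not part of the thread; it assumes 4K pages, with roundup() reimplemented to match the kernel macro):

    #include <stdio.h>

    #define PAGE_SIZE 4096UL
    #define SZ_2M     (2UL << 20)

    /* Same result as the kernel's roundup(x, y) macro for y != 0 */
    static unsigned long roundup(unsigned long x, unsigned long y)
    {
        return ((x + y - 1) / y) * y;
    }

    int main(void)
    {
        /* Heap BO: a 3MB request becomes 4MB (2MB granules) */
        printf("%lu\n", roundup(3UL << 20, SZ_2M));   /* 4194304 */
        /* Non-heap BO: 5000 bytes become two 4K pages */
        printf("%lu\n", roundup(5000, PAGE_SIZE));    /* 8192 */
        return 0;
    }

Since roundup(size, SZ_2M) already yields a page-aligned size and drm_gem_shmem_create() applies PAGE_ALIGN() itself, the PAGE_SIZE branch duplicates existing behavior, which is the redundancy noted above.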
Increment the driver version to expose the new BO allocation flags.
Cc: Tomeu Vizoso tomeu.vizoso@collabora.com
Cc: Boris Brezillon boris.brezillon@collabora.com
Cc: Robin Murphy robin.murphy@arm.com
Cc: Steven Price steven.price@arm.com
Acked-by: Alyssa Rosenzweig alyssa.rosenzweig@collabora.com
Signed-off-by: Rob Herring robh@kernel.org
---
 drivers/gpu/drm/panfrost/panfrost_drv.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index 994dbc37e4d0..a1352750984c 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -448,6 +448,11 @@ static const struct drm_ioctl_desc panfrost_drm_driver_ioctls[] = {
 
 DEFINE_DRM_GEM_SHMEM_FOPS(panfrost_drm_driver_fops);
 
+/*
+ * Panfrost driver version:
+ * - 1.0 - initial interface
+ * - 1.1 - adds HEAP and NOEXEC flags for CREATE_BO
+ */
 static struct drm_driver panfrost_drm_driver = {
     .driver_features = DRIVER_RENDER | DRIVER_GEM | DRIVER_SYNCOBJ,
     .open = panfrost_open,
@@ -459,7 +464,7 @@ static struct drm_driver panfrost_drm_driver = {
     .desc = "panfrost DRM",
     .date = "20180908",
     .major = 1,
-    .minor = 0,
+    .minor = 1,
 
     .gem_create_object = panfrost_gem_create_object,
     .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
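Userspace that wants the new flags can gate on the bumped minor version first; a rough sketch using libdrm's drmGetVersion() (the helper name is hypothetical, error handling minimal):

    #include <stdbool.h>
    #include <xf86drm.h>

    /* The HEAP and NOEXEC CREATE_BO flags are only valid on panfrost >= 1.1 */
    static bool panfrost_has_heap_alloc(int fd)
    {
        drmVersionPtr v = drmGetVersion(fd);
        bool has = false;

        if (v) {
            has = v->version_major > 1 ||
                  (v->version_major == 1 && v->version_minor >= 1);
            drmFreeVersion(v);
        }
        return has;
    }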