This series adds new BO allocation flags PANFROST_BO_HEAP and PANFROST_BO_NOEXEC. The heap allocations are paged in on GPU page faults.
Tomeu reports he has tested this in the panfrost CI.
This is based on drm-misc-next. An updated branch is here[1].
v3:
 - Retain shared irq support, splitting IRQ changes to separate patch (6/8)
 - Stop leaking SG tables
 - Prevent mmap and pinning pages for heap BOs
Rob
[1] git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux.git panfrost/heap-noexec
Rob Herring (8):
  drm/gem: Allow sparsely populated page arrays in drm_gem_put_pages
  drm/shmem: Put pages independent of a SG table being set
  drm/panfrost: Restructure the GEM object creation
  drm/panfrost: Split panfrost_mmu_map SG list mapping to its own function
  drm/panfrost: Add a no execute flag for BO allocations
  drm/panfrost: Convert MMU IRQ handler to threaded handler
  drm/panfrost: Add support for GPU heap allocations
  drm/panfrost: Bump driver version to 1.1
 drivers/gpu/drm/drm_gem.c               |   3 +
 drivers/gpu/drm/drm_gem_shmem_helper.c  |   4 +-
 drivers/gpu/drm/panfrost/TODO           |   2 -
 drivers/gpu/drm/panfrost/panfrost_drv.c |  65 ++++++--
 drivers/gpu/drm/panfrost/panfrost_gem.c | 106 +++++++++++--
 drivers/gpu/drm/panfrost/panfrost_gem.h |  16 +-
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 200 ++++++++++++++++++++----
 include/uapi/drm/panfrost_drm.h         |   3 +
 8 files changed, 333 insertions(+), 66 deletions(-)
--
2.20.1
Panfrost needs to allocate pages on demand from GPU page faults. When releasing such pages, the only thing preventing the use of drm_gem_put_pages() is the need to skip over unpopulated entries, so allow the helper to skip over NULL struct page pointers.
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <maxime.ripard@bootlin.com>
Cc: Sean Paul <sean@poorly.run>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
---
 drivers/gpu/drm/drm_gem.c | 3 +++
 1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 243f43d70f42..db373c945f16 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -633,6 +633,9 @@ void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
 	pagevec_init(&pvec);
 	for (i = 0; i < npages; i++) {
+		if (!pages[i])
+			continue;
+
 		if (dirty)
 			set_page_dirty(pages[i]);
If a driver does its own page management, the shmem helper object's pages array could be allocated even when an SG table is not. There's no good reason to tie putting pages to having an SG table when freeing the object, so just put the pages whenever the pages array is populated.
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <maxime.ripard@bootlin.com>
Cc: Sean Paul <sean@poorly.run>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Eric Anholt <eric@anholt.net>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 2f64667ac805..477e4cc50f7a 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -118,11 +118,11 @@ void drm_gem_shmem_free_object(struct drm_gem_object *obj)
 		if (shmem->sgt) {
 			dma_unmap_sg(obj->dev->dev, shmem->sgt->sgl,
 				     shmem->sgt->nents, DMA_BIDIRECTIONAL);
-
-			drm_gem_shmem_put_pages(shmem);
 			sg_free_table(shmem->sgt);
 			kfree(shmem->sgt);
 		}
+		if (shmem->pages)
+			drm_gem_shmem_put_pages(shmem);
 	}

 	WARN_ON(shmem->pages_use_count);
Setting the GPU VA when creating the GEM object doesn't allow for any conditional adjustments. In preparation to support adjusting the mapping, restructure the GEM object creation to map the GEM object after we've created the base shmem object.
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
---
 drivers/gpu/drm/panfrost/panfrost_drv.c | 21 +++------
 drivers/gpu/drm/panfrost/panfrost_gem.c | 58 ++++++++++++++++++++-----
 drivers/gpu/drm/panfrost/panfrost_gem.h |  5 +++
 3 files changed, 59 insertions(+), 25 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index cb43ff4ebf4a..d354b92964d5 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -46,29 +46,20 @@ static int panfrost_ioctl_get_param(struct drm_device *ddev, void *data, struct
 static int panfrost_ioctl_create_bo(struct drm_device *dev, void *data,
 		struct drm_file *file)
 {
-	int ret;
-	struct drm_gem_shmem_object *shmem;
+	struct panfrost_gem_object *bo;
 	struct drm_panfrost_create_bo *args = data;

 	if (!args->size || args->flags || args->pad)
 		return -EINVAL;

-	shmem = drm_gem_shmem_create_with_handle(file, dev, args->size,
-						 &args->handle);
-	if (IS_ERR(shmem))
-		return PTR_ERR(shmem);
-
-	ret = panfrost_mmu_map(to_panfrost_bo(&shmem->base));
-	if (ret)
-		goto err_free;
+	bo = panfrost_gem_create_with_handle(file, dev, args->size, args->flags,
+					     &args->handle);
+	if (IS_ERR(bo))
+		return PTR_ERR(bo);

-	args->offset = to_panfrost_bo(&shmem->base)->node.start << PAGE_SHIFT;
+	args->offset = bo->node.start << PAGE_SHIFT;

 	return 0;
-
-err_free:
-	drm_gem_handle_delete(file, args->handle);
-	return ret;
 }

 /**
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 543ab1b81bd5..df70dcf3cb2f 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -23,7 +23,8 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
 	panfrost_mmu_unmap(bo);

 	spin_lock(&pfdev->mm_lock);
-	drm_mm_remove_node(&bo->node);
+	if (drm_mm_node_allocated(&bo->node))
+		drm_mm_remove_node(&bo->node);
 	spin_unlock(&pfdev->mm_lock);

 	drm_gem_shmem_free_object(obj);
@@ -50,10 +51,7 @@ static const struct drm_gem_object_funcs panfrost_gem_funcs = {
  */
 struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t size)
 {
-	int ret;
-	struct panfrost_device *pfdev = dev->dev_private;
 	struct panfrost_gem_object *obj;
-	u64 align;

 	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
 	if (!obj)
@@ -61,20 +59,52 @@ struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t

 	obj->base.base.funcs = &panfrost_gem_funcs;

-	size = roundup(size, PAGE_SIZE);
-	align = size >= SZ_2M ? SZ_2M >> PAGE_SHIFT : 0;
+	return &obj->base.base;
+}
+
+static int panfrost_gem_map(struct panfrost_device *pfdev, struct panfrost_gem_object *bo)
+{
+	int ret;
+	size_t size = bo->base.base.size;
+	u64 align = size >= SZ_2M ? SZ_2M >> PAGE_SHIFT : 0;

 	spin_lock(&pfdev->mm_lock);
-	ret = drm_mm_insert_node_generic(&pfdev->mm, &obj->node,
+	ret = drm_mm_insert_node_generic(&pfdev->mm, &bo->node,
 					 size >> PAGE_SHIFT, align, 0, 0);
 	spin_unlock(&pfdev->mm_lock);
+	if (ret)
+		return ret;
+
+	return panfrost_mmu_map(bo);
+}
+
+struct panfrost_gem_object *
+panfrost_gem_create_with_handle(struct drm_file *file_priv,
+				struct drm_device *dev, size_t size,
+				u32 flags,
+				uint32_t *handle)
+{
+	int ret;
+	struct panfrost_device *pfdev = dev->dev_private;
+	struct drm_gem_shmem_object *shmem;
+	struct panfrost_gem_object *bo;
+
+	size = roundup(size, PAGE_SIZE);
+
+	shmem = drm_gem_shmem_create_with_handle(file_priv, dev, size, handle);
+	if (IS_ERR(shmem))
+		return ERR_CAST(shmem);
+
+	bo = to_panfrost_bo(&shmem->base);
+
+	ret = panfrost_gem_map(pfdev, bo);
 	if (ret)
 		goto free_obj;

-	return &obj->base.base;
+	return bo;

 free_obj:
-	kfree(obj);
+	drm_gem_handle_delete(file_priv, *handle);
 	return ERR_PTR(ret);
 }

@@ -83,8 +113,10 @@ panfrost_gem_prime_import_sg_table(struct drm_device *dev,
 			struct dma_buf_attachment *attach,
 			struct sg_table *sgt)
 {
+	int ret;
 	struct drm_gem_object *obj;
 	struct panfrost_gem_object *pobj;
+	struct panfrost_device *pfdev = dev->dev_private;

 	obj = drm_gem_shmem_prime_import_sg_table(dev, attach, sgt);
 	if (IS_ERR(obj))
@@ -92,7 +124,13 @@ panfrost_gem_prime_import_sg_table(struct drm_device *dev,

 	pobj = to_panfrost_bo(obj);

-	panfrost_mmu_map(pobj);
+	ret = panfrost_gem_map(pfdev, pobj);
+	if (ret)
+		goto free_obj;

 	return obj;
+
+free_obj:
+	drm_gem_object_put_unlocked(obj);
+	return ERR_PTR(ret);
 }
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h
index 6dbcaba020fc..ce065270720b 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.h
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.h
@@ -22,6 +22,11 @@ struct panfrost_gem_object *to_panfrost_bo(struct drm_gem_object *obj)

 struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t size);

+struct panfrost_gem_object *
+panfrost_gem_create_with_handle(struct drm_file *file_priv,
+				struct drm_device *dev, size_t size, u32 flags,
+				uint32_t *handle);
+
 struct drm_gem_object *
 panfrost_gem_prime_import_sg_table(struct drm_device *dev,
 				   struct dma_buf_attachment *attach,
In preparation to create partial GPU mappings of BOs on page faults, split out the SG list handling of panfrost_mmu_map().
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
---
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 52 +++++++++++++++----------
 1 file changed, 31 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index 92ac995dd9c6..b4ac149b2399 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -145,27 +145,13 @@ static size_t get_pgsize(u64 addr, size_t size)
 	return SZ_2M;
 }

-int panfrost_mmu_map(struct panfrost_gem_object *bo)
+static int mmu_map_sg(struct panfrost_device *pfdev, u64 iova,
+		      int prot, struct sg_table *sgt)
 {
-	struct drm_gem_object *obj = &bo->base.base;
-	struct panfrost_device *pfdev = to_panfrost_device(obj->dev);
-	struct io_pgtable_ops *ops = pfdev->mmu->pgtbl_ops;
-	u64 iova = bo->node.start << PAGE_SHIFT;
 	unsigned int count;
 	struct scatterlist *sgl;
-	struct sg_table *sgt;
-	int ret;
-
-	if (WARN_ON(bo->is_mapped))
-		return 0;
-
-	sgt = drm_gem_shmem_get_pages_sgt(obj);
-	if (WARN_ON(IS_ERR(sgt)))
-		return PTR_ERR(sgt);
-
-	ret = pm_runtime_get_sync(pfdev->dev);
-	if (ret < 0)
-		return ret;
+	struct io_pgtable_ops *ops = pfdev->mmu->pgtbl_ops;
+	u64 start_iova = iova;

 	mutex_lock(&pfdev->mmu->lock);

@@ -178,18 +164,42 @@ int panfrost_mmu_map(struct panfrost_gem_object *bo)
 		while (len) {
 			size_t pgsize = get_pgsize(iova | paddr, len);

-			ops->map(ops, iova, paddr, pgsize, IOMMU_WRITE | IOMMU_READ);
+			ops->map(ops, iova, paddr, pgsize, prot);
 			iova += pgsize;
 			paddr += pgsize;
 			len -= pgsize;
 		}
 	}

-	mmu_hw_do_operation(pfdev, 0, bo->node.start << PAGE_SHIFT,
-			    bo->node.size << PAGE_SHIFT, AS_COMMAND_FLUSH_PT);
+	mmu_hw_do_operation(pfdev, 0, start_iova, iova - start_iova,
+			    AS_COMMAND_FLUSH_PT);

 	mutex_unlock(&pfdev->mmu->lock);

+	return 0;
+}
+
+int panfrost_mmu_map(struct panfrost_gem_object *bo)
+{
+	struct drm_gem_object *obj = &bo->base.base;
+	struct panfrost_device *pfdev = to_panfrost_device(obj->dev);
+	struct sg_table *sgt;
+	int ret;
+	int prot = IOMMU_READ | IOMMU_WRITE;
+
+	if (WARN_ON(bo->is_mapped))
+		return 0;
+
+	sgt = drm_gem_shmem_get_pages_sgt(obj);
+	if (WARN_ON(IS_ERR(sgt)))
+		return PTR_ERR(sgt);
+
+	ret = pm_runtime_get_sync(pfdev->dev);
+	if (ret < 0)
+		return ret;
+
+	mmu_map_sg(pfdev, bo->node.start << PAGE_SHIFT, prot, sgt);
+
 	pm_runtime_mark_last_busy(pfdev->dev);
 	pm_runtime_put_autosuspend(pfdev->dev);
 	bo->is_mapped = true;
On 02/08/2019 20:51, Rob Herring wrote:
> In preparation to create partial GPU mappings of BOs on page faults, split
> out the SG list handling of panfrost_mmu_map().
>
> Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
> Cc: Boris Brezillon <boris.brezillon@collabora.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Steven Price <steven.price@arm.com>
> Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
> Signed-off-by: Rob Herring <robh@kernel.org>
Reviewed-by: Steven Price <steven.price@arm.com>
(Although I don't see a change from the previous posting which already had my R-b).
Executable buffers have an alignment restriction: they can't cross a 16MB boundary, as the GPU program counter is only 24 bits. This restriction is currently not handled and we just get lucky. As current userspace assumes all BOs are executable, that has to remain the default, so add a new PANFROST_BO_NOEXEC flag to let userspace indicate which BOs are not executable.
There is also a restriction that executable buffers cannot start or end on a 4GB boundary. This is mostly avoided as there is only 4GB of space currently and the beginning is already blocked out for NULL ptr detection. Add support to handle this restriction fully regardless of the current constraints.
For existing userspace, all created BOs remain executable, but the GPU VA alignment will be increased to the size of the BO. This shouldn't matter as there is plenty of GPU VA space.
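For reference, a minimal userspace sketch of allocating a non-executable BO with the new flag. It assumes the drm_panfrost_create_bo fields shown in the uapi diff below, the DRM_IOCTL_PANFROST_CREATE_BO wrapper from the same header, and drmIoctl() from libdrm; the helper name is illustrative, not part of any existing driver:

#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include "panfrost_drm.h"	/* the uapi header touched by this patch */

/* Illustrative helper: create a plain data BO that the GPU never executes. */
static int create_noexec_bo(int fd, uint32_t size, uint32_t *handle,
			    uint64_t *gpu_va)
{
	struct drm_panfrost_create_bo arg;

	memset(&arg, 0, sizeof(arg));	/* pad must be zero */
	arg.size = size;
	arg.flags = PANFROST_BO_NOEXEC;

	if (drmIoctl(fd, DRM_IOCTL_PANFROST_CREATE_BO, &arg))
		return -1;		/* errno holds the kernel's error */

	*handle = arg.handle;		/* GEM handle for later ioctls */
	*gpu_va = arg.offset;		/* GPU VA chosen by the kernel */
	return 0;
}

Omitting PANFROST_BO_NOEXEC keeps the old executable-by-default behaviour, at the cost of the larger GPU VA alignment described above.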
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
---
 drivers/gpu/drm/panfrost/panfrost_drv.c | 30 ++++++++++++++++++++++++-
 drivers/gpu/drm/panfrost/panfrost_gem.c | 18 +++++++++++++--
 drivers/gpu/drm/panfrost/panfrost_gem.h |  3 ++-
 drivers/gpu/drm/panfrost/panfrost_mmu.c |  3 +++
 include/uapi/drm/panfrost_drm.h         |  2 ++
 5 files changed, 52 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index d354b92964d5..7ebd82d8d570 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -49,7 +49,8 @@ static int panfrost_ioctl_create_bo(struct drm_device *dev, void *data,
 	struct panfrost_gem_object *bo;
 	struct drm_panfrost_create_bo *args = data;

-	if (!args->size || args->flags || args->pad)
+	if (!args->size || args->pad ||
+	    (args->flags & ~PANFROST_BO_NOEXEC))
 		return -EINVAL;

 	bo = panfrost_gem_create_with_handle(file, dev, args->size, args->flags,
@@ -367,6 +368,32 @@ static struct drm_driver panfrost_drm_driver = {
 	.gem_prime_mmap = drm_gem_prime_mmap,
 };

+#define PFN_4G		(SZ_4G >> PAGE_SHIFT)
+#define PFN_4G_MASK	(PFN_4G - 1)
+#define PFN_16M		(SZ_16M >> PAGE_SHIFT)
+
+static void panfrost_drm_mm_color_adjust(const struct drm_mm_node *node,
+					 unsigned long color,
+					 u64 *start, u64 *end)
+{
+	/* Executable buffers can't start or end on a 4GB boundary */
+	if (!(color & PANFROST_BO_NOEXEC)) {
+		u64 next_seg;
+
+		if ((*start & PFN_4G_MASK) == 0)
+			(*start)++;
+
+		if ((*end & PFN_4G_MASK) == 0)
+			(*end)--;
+
+		next_seg = ALIGN(*start, PFN_4G);
+		if (next_seg - *start <= PFN_16M)
+			*start = next_seg + 1;
+
+		*end = min(*end, ALIGN(*start, PFN_4G) - 1);
+	}
+}
+
 static int panfrost_probe(struct platform_device *pdev)
 {
 	struct panfrost_device *pfdev;
@@ -394,6 +421,7 @@ static int panfrost_probe(struct platform_device *pdev)

 	/* 4G enough for now. can be 48-bit */
 	drm_mm_init(&pfdev->mm, SZ_32M >> PAGE_SHIFT, (SZ_4G - SZ_32M) >> PAGE_SHIFT);
+	pfdev->mm.color_adjust = panfrost_drm_mm_color_adjust;

 	pm_runtime_use_autosuspend(pfdev->dev);
 	pm_runtime_set_autosuspend_delay(pfdev->dev, 50); /* ~3 frames */
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index df70dcf3cb2f..63731f6c5223 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -66,11 +66,23 @@ static int panfrost_gem_map(struct panfrost_device *pfdev, struct panfrost_gem_o
 {
 	int ret;
 	size_t size = bo->base.base.size;
-	u64 align = size >= SZ_2M ? SZ_2M >> PAGE_SHIFT : 0;
+	u64 align;
+	unsigned long color = bo->noexec ? PANFROST_BO_NOEXEC : 0;
+
+	/*
+	 * Executable buffers cannot cross a 16MB boundary as the program
+	 * counter is 24-bits. We assume executable buffers will be less than
+	 * 16MB and aligning executable buffers to their size will avoid
+	 * crossing a 16MB boundary.
+	 */
+	if (!bo->noexec)
+		align = size >> PAGE_SHIFT;
+	else
+		align = size >= SZ_2M ? SZ_2M >> PAGE_SHIFT : 0;

 	spin_lock(&pfdev->mm_lock);
 	ret = drm_mm_insert_node_generic(&pfdev->mm, &bo->node,
-					 size >> PAGE_SHIFT, align, 0, 0);
+					 size >> PAGE_SHIFT, align, color, 0);
 	spin_unlock(&pfdev->mm_lock);
 	if (ret)
 		return ret;
@@ -96,6 +108,7 @@ panfrost_gem_create_with_handle(struct drm_file *file_priv,
 		return ERR_CAST(shmem);

 	bo = to_panfrost_bo(&shmem->base);
+	bo->noexec = !!(flags & PANFROST_BO_NOEXEC);

 	ret = panfrost_gem_map(pfdev, bo);
 	if (ret)
@@ -123,6 +136,7 @@ panfrost_gem_prime_import_sg_table(struct drm_device *dev,
 		return ERR_CAST(obj);

 	pobj = to_panfrost_bo(obj);
+	pobj->noexec = true;

 	ret = panfrost_gem_map(pfdev, pobj);
 	if (ret)
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h
index ce065270720b..132f02399b7b 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.h
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.h
@@ -11,7 +11,8 @@ struct panfrost_gem_object {
 	struct drm_gem_shmem_object base;

 	struct drm_mm_node node;
-	bool is_mapped;
+	bool is_mapped :1;
+	bool noexec :1;
 };

 static inline
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index b4ac149b2399..eba6ce785ef0 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -190,6 +190,9 @@ int panfrost_mmu_map(struct panfrost_gem_object *bo)
 	if (WARN_ON(bo->is_mapped))
 		return 0;

+	if (bo->noexec)
+		prot |= IOMMU_NOEXEC;
+
 	sgt = drm_gem_shmem_get_pages_sgt(obj);
 	if (WARN_ON(IS_ERR(sgt)))
 		return PTR_ERR(sgt);
diff --git a/include/uapi/drm/panfrost_drm.h b/include/uapi/drm/panfrost_drm.h
index b5d370638846..17fb5d200f7a 100644
--- a/include/uapi/drm/panfrost_drm.h
+++ b/include/uapi/drm/panfrost_drm.h
@@ -82,6 +82,8 @@ struct drm_panfrost_wait_bo {
 	__s64 timeout_ns;	/* absolute */
 };

+#define PANFROST_BO_NOEXEC 1
+
 /**
  * struct drm_panfrost_create_bo - ioctl argument for creating Panfrost BOs.
  *
In preparation to handle mapping of page faults, we need the MMU handler to be threaded as code paths take a mutex.
As the IRQ may be shared, we can't use the default handler and must disable the MMU interrupts locally.
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
---
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index eba6ce785ef0..7d44328b280f 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -300,12 +300,20 @@ static const char *access_type_name(struct panfrost_device *pfdev,
 static irqreturn_t panfrost_mmu_irq_handler(int irq, void *data)
 {
 	struct panfrost_device *pfdev = data;
-	u32 status = mmu_read(pfdev, MMU_INT_STAT);
-	int i;

-	if (!status)
+	if (!mmu_read(pfdev, MMU_INT_STAT))
 		return IRQ_NONE;

+	mmu_write(pfdev, MMU_INT_MASK, 0);
+	return IRQ_WAKE_THREAD;
+}
+
+static irqreturn_t panfrost_mmu_irq_handler_thread(int irq, void *data)
+{
+	struct panfrost_device *pfdev = data;
+	u32 status = mmu_read(pfdev, MMU_INT_RAWSTAT);
+	int i;
+
 	dev_err(pfdev->dev, "mmu irq status=%x\n", status);

 	for (i = 0; status; i++) {
@@ -350,6 +358,7 @@ static irqreturn_t panfrost_mmu_irq_handler(int irq, void *data)
 		status &= ~mask;
 	}

+	mmu_write(pfdev, MMU_INT_MASK, ~0);
 	return IRQ_HANDLED;
 };

@@ -368,8 +377,9 @@ int panfrost_mmu_init(struct panfrost_device *pfdev)
 	if (irq <= 0)
 		return -ENODEV;

-	err = devm_request_irq(pfdev->dev, irq, panfrost_mmu_irq_handler,
-			       IRQF_SHARED, "mmu", pfdev);
+	err = devm_request_threaded_irq(pfdev->dev, irq, panfrost_mmu_irq_handler,
+					panfrost_mmu_irq_handler_thread,
+					IRQF_SHARED, "mmu", pfdev);
 	if (err) {
 		dev_err(pfdev->dev, "failed to request mmu irq");
A-b, I think.
On Fri, Aug 02, 2019 at 01:51:48PM -0600, Rob Herring wrote:
> In preparation to handle mapping of page faults, we need the MMU handler
> to be threaded as code paths take a mutex.
>
> As the IRQ may be shared, we can't use the default handler and must
> disable the MMU interrupts locally.
>
> Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
> Cc: Boris Brezillon <boris.brezillon@collabora.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Steven Price <steven.price@arm.com>
> Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
> Signed-off-by: Rob Herring <robh@kernel.org>
On 02/08/2019 20:51, Rob Herring wrote:
> In preparation to handle mapping of page faults, we need the MMU handler
> to be threaded as code paths take a mutex.
>
> As the IRQ may be shared, we can't use the default handler and must
> disable the MMU interrupts locally.
>
> Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
> Cc: Boris Brezillon <boris.brezillon@collabora.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Steven Price <steven.price@arm.com>
> Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
> Signed-off-by: Rob Herring <robh@kernel.org>
Reviewed-by: Steven Price <steven.price@arm.com>
The midgard/bifrost GPUs need GPU heap memory that is allocated on GPU page faults and not pinned in memory. The vendor driver calls this functionality GROW_ON_GPF.

This implementation assumes that BOs allocated with the PANFROST_BO_HEAP flag are never mmapped or exported. Both of those may actually work, but I'm unsure if there's some interaction there. It would cause the whole object to be pinned in memory, which would defeat the point of this.
On faults, we map in 2MB at a time in order to utilize huge pages (if enabled). Currently, once we've mapped pages in, they are only unmapped if the BO is freed. Once we add shrinker support, we can unmap pages with the shrinker.
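For context, a hedged sketch of what allocating such a heap from userspace might look like, reusing the same uapi header and drmIoctl() from libdrm as in the earlier sketch; the helper name and the 16MB size are purely illustrative:

#include <errno.h>
#include <stdint.h>
#include <xf86drm.h>
#include "panfrost_drm.h"	/* uapi header from this series */

/* Illustrative helper: request a heap that is faulted in on demand. */
static int create_heap_bo(int fd, uint64_t *gpu_va)
{
	struct drm_panfrost_create_bo arg = {
		.size  = 16u << 20,	/* kernel rounds heap sizes up to 2MB */
		/* HEAP without NOEXEC is rejected with -EINVAL */
		.flags = PANFROST_BO_HEAP | PANFROST_BO_NOEXEC,
	};

	if (drmIoctl(fd, DRM_IOCTL_PANFROST_CREATE_BO, &arg))
		return -errno;

	/*
	 * arg.offset is the heap's GPU VA and arg.handle its GEM handle.
	 * There is no CPU mapping to go with it: mmap of heap BOs is
	 * rejected, and pages are pinned only as the GPU faults them in.
	 */
	*gpu_va = arg.offset;
	return 0;
}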
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
---
 drivers/gpu/drm/panfrost/TODO           |   2 -
 drivers/gpu/drm/panfrost/panfrost_drv.c |  11 +-
 drivers/gpu/drm/panfrost/panfrost_gem.c |  36 ++++++-
 drivers/gpu/drm/panfrost/panfrost_gem.h |   8 ++
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 129 ++++++++++++++++++++++--
 include/uapi/drm/panfrost_drm.h         |   1 +
 6 files changed, 172 insertions(+), 15 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/TODO b/drivers/gpu/drm/panfrost/TODO
index c2e44add37d8..64129bf73933 100644
--- a/drivers/gpu/drm/panfrost/TODO
+++ b/drivers/gpu/drm/panfrost/TODO
@@ -14,8 +14,6 @@ The hard part is handling when more address spaces are needed than what the h/w provides.

-- Support pinning pages on demand (GPU page faults).
-
 - Support userspace controlled GPU virtual addresses. Needed for Vulkan. (Tomeu)

 - Support for madvise and a shrinker.
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index 7ebd82d8d570..a7126b5f8e5d 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -50,7 +50,12 @@ static int panfrost_ioctl_create_bo(struct drm_device *dev, void *data,
 	struct drm_panfrost_create_bo *args = data;

 	if (!args->size || args->pad ||
-	    (args->flags & ~PANFROST_BO_NOEXEC))
+	    (args->flags & ~(PANFROST_BO_NOEXEC | PANFROST_BO_HEAP)))
+		return -EINVAL;
+
+	/* Heaps should never be executable */
+	if ((args->flags & PANFROST_BO_HEAP) &&
+	    !(args->flags & PANFROST_BO_NOEXEC))
 		return -EINVAL;

 	bo = panfrost_gem_create_with_handle(file, dev, args->size, args->flags,
@@ -265,6 +270,10 @@ static int panfrost_ioctl_mmap_bo(struct drm_device *dev, void *data,
 		return -ENOENT;
 	}

+	/* Don't allow mmapping of heap objects as pages are not pinned. */
+	if (to_panfrost_bo(gem_obj)->is_heap)
+		return -EINVAL;
+
 	ret = drm_gem_create_mmap_offset(gem_obj);
 	if (ret == 0)
 		args->offset = drm_vma_node_offset_addr(&gem_obj->vma_node);
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 63731f6c5223..f3d5f61714ae 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -27,13 +27,35 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
 		drm_mm_remove_node(&bo->node);
 	spin_unlock(&pfdev->mm_lock);

+	if (bo->sgts) {
+		int i;
+		int n_sgt = bo->base.base.size / SZ_2M;
+
+		for (i = 0; i < n_sgt; i++) {
+			if (bo->sgts[i].sgl) {
+				dma_unmap_sg(pfdev->dev, bo->sgts[i].sgl,
+					     bo->sgts[i].nents, DMA_BIDIRECTIONAL);
+				sg_free_table(&bo->sgts[i]);
+			}
+		}
+		kfree(bo->sgts);
+	}
+
 	drm_gem_shmem_free_object(obj);
 }

+static int panfrost_gem_pin(struct drm_gem_object *obj)
+{
+	if (to_panfrost_bo(obj)->is_heap)
+		return -EINVAL;
+
+	return drm_gem_shmem_pin(obj);
+}
+
 static const struct drm_gem_object_funcs panfrost_gem_funcs = {
 	.free = panfrost_gem_free_object,
 	.print_info = drm_gem_shmem_print_info,
-	.pin = drm_gem_shmem_pin,
+	.pin = panfrost_gem_pin,
 	.unpin = drm_gem_shmem_unpin,
 	.get_sg_table = drm_gem_shmem_get_sg_table,
 	.vmap = drm_gem_shmem_vmap,
@@ -87,7 +109,10 @@ static int panfrost_gem_map(struct panfrost_device *pfdev, struct panfrost_gem_o
 	if (ret)
 		return ret;

-	return panfrost_mmu_map(bo);
+	if (!bo->is_heap)
+		ret = panfrost_mmu_map(bo);
+
+	return ret;
 }

 struct panfrost_gem_object *
@@ -101,7 +126,11 @@ panfrost_gem_create_with_handle(struct drm_file *file_priv,
 	struct drm_gem_shmem_object *shmem;
 	struct panfrost_gem_object *bo;

-	size = roundup(size, PAGE_SIZE);
+	/* Round up heap allocations to 2MB to keep fault handling simple */
+	if (flags & PANFROST_BO_HEAP)
+		size = roundup(size, SZ_2M);
+	else
+		size = roundup(size, PAGE_SIZE);

 	shmem = drm_gem_shmem_create_with_handle(file_priv, dev, size, handle);
 	if (IS_ERR(shmem))
@@ -109,6 +138,7 @@ panfrost_gem_create_with_handle(struct drm_file *file_priv,

 	bo = to_panfrost_bo(&shmem->base);
 	bo->noexec = !!(flags & PANFROST_BO_NOEXEC);
+	bo->is_heap = !!(flags & PANFROST_BO_HEAP);

 	ret = panfrost_gem_map(pfdev, bo);
 	if (ret)
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h
index 132f02399b7b..b628c9b67784 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.h
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.h
@@ -9,10 +9,12 @@

 struct panfrost_gem_object {
 	struct drm_gem_shmem_object base;
+	struct sg_table *sgts;

 	struct drm_mm_node node;
 	bool is_mapped :1;
 	bool noexec :1;
+	bool is_heap :1;
 };

 static inline
@@ -21,6 +23,12 @@ struct panfrost_gem_object *to_panfrost_bo(struct drm_gem_object *obj)
 	return container_of(to_drm_gem_shmem_obj(obj), struct panfrost_gem_object, base);
 }

+static inline
+struct panfrost_gem_object *drm_mm_node_to_panfrost_bo(struct drm_mm_node *node)
+{
+	return container_of(node, struct panfrost_gem_object, node);
+}
+
 struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t size);

 struct panfrost_gem_object *
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index 7d44328b280f..aa83bf46360b 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -2,6 +2,7 @@
 /* Copyright 2019 Linaro, Ltd, Rob Herring <robh@kernel.org> */
 #include <linux/bitfield.h>
 #include <linux/delay.h>
+#include <linux/dma-mapping.h>
 #include <linux/interrupt.h>
 #include <linux/io.h>
 #include <linux/iopoll.h>
@@ -9,6 +10,7 @@
 #include <linux/iommu.h>
 #include <linux/platform_device.h>
 #include <linux/pm_runtime.h>
+#include <linux/shmem_fs.h>
 #include <linux/sizes.h>

 #include "panfrost_device.h"
@@ -235,12 +237,12 @@ void panfrost_mmu_unmap(struct panfrost_gem_object *bo)
 		size_t unmapped_page;
 		size_t pgsize = get_pgsize(iova, len - unmapped_len);

-		unmapped_page = ops->unmap(ops, iova, pgsize);
-		if (!unmapped_page)
-			break;
-
-		iova += unmapped_page;
-		unmapped_len += unmapped_page;
+		if (ops->iova_to_phys(ops, iova)) {
+			unmapped_page = ops->unmap(ops, iova, pgsize);
+			WARN_ON(unmapped_page != pgsize);
+		}
+		iova += pgsize;
+		unmapped_len += pgsize;
 	}

 	mmu_hw_do_operation(pfdev, 0, bo->node.start << PAGE_SHIFT,
@@ -276,6 +278,105 @@ static const struct iommu_gather_ops mmu_tlb_ops = {
 	.tlb_sync = mmu_tlb_sync_context,
 };

+static struct drm_mm_node *addr_to_drm_mm_node(struct panfrost_device *pfdev, int as, u64 addr)
+{
+	struct drm_mm_node *node;
+	u64 offset = addr >> PAGE_SHIFT;
+
+	drm_mm_for_each_node(node, &pfdev->mm) {
+		if (offset >= node->start && offset < (node->start + node->size))
+			return node;
+	}
+	return NULL;
+}
+
+#define NUM_FAULT_PAGES (SZ_2M / PAGE_SIZE)
+
+int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, u64 addr)
+{
+	int ret, i;
+	struct drm_mm_node *node;
+	struct panfrost_gem_object *bo;
+	struct address_space *mapping;
+	pgoff_t page_offset;
+	struct sg_table *sgt;
+	struct page **pages;
+
+	node = addr_to_drm_mm_node(pfdev, as, addr);
+	if (!node)
+		return -ENOENT;
+
+	bo = drm_mm_node_to_panfrost_bo(node);
+	if (!bo->is_heap) {
+		dev_WARN(pfdev->dev, "matching BO is not heap type (GPU VA = %llx)",
+			 node->start << PAGE_SHIFT);
+		return -EINVAL;
+	}
+	/* Assume 2MB alignment and size multiple */
+	addr &= ~((u64)SZ_2M - 1);
+	page_offset = addr >> PAGE_SHIFT;
+	page_offset -= node->start;
+
+	mutex_lock(&bo->base.pages_lock);
+
+	if (!bo->base.pages) {
+		bo->sgts = kvmalloc_array(bo->base.base.size / SZ_2M,
+				     sizeof(struct sg_table), GFP_KERNEL | __GFP_ZERO);
+		if (!bo->sgts)
+			return -ENOMEM;
+
+		pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
+				       sizeof(struct page *), GFP_KERNEL | __GFP_ZERO);
+		if (!pages) {
+			kfree(bo->sgts);
+			bo->sgts = NULL;
+			return -ENOMEM;
+		}
+		bo->base.pages = pages;
+		bo->base.pages_use_count = 1;
+	} else
+		pages = bo->base.pages;
+
+	mapping = bo->base.base.filp->f_mapping;
+	mapping_set_unevictable(mapping);
+
+	for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
+		pages[i] = shmem_read_mapping_page(mapping, i);
+		if (IS_ERR(pages[i])) {
+			mutex_unlock(&bo->base.pages_lock);
+			ret = PTR_ERR(pages[i]);
+			goto err_pages;
+		}
+	}
+
+	mutex_unlock(&bo->base.pages_lock);
+
+	sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
+	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
+					NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
+	if (ret)
+		goto err_pages;
+
+	if (!dma_map_sg(pfdev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL)) {
+		ret = -EINVAL;
+		goto err_map;
+	}
+
+	mmu_map_sg(pfdev, addr, IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, sgt);
+
+	bo->is_mapped = true;
+
+	dev_dbg(pfdev->dev, "mapped page fault @ %llx", addr);
+
+	return 0;
+
+err_map:
+	sg_free_table(sgt);
+err_pages:
+	drm_gem_shmem_put_pages(&bo->base);
+	return ret;
+}
+
 static const char *access_type_name(struct panfrost_device *pfdev,
 				    u32 fault_status)
 {
@@ -312,9 +413,7 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int irq, void *data)
 {
 	struct panfrost_device *pfdev = data;
 	u32 status = mmu_read(pfdev, MMU_INT_RAWSTAT);
-	int i;
-
-	dev_err(pfdev->dev, "mmu irq status=%x\n", status);
+	int i, ret;

 	for (i = 0; status; i++) {
 		u32 mask = BIT(i) | BIT(i + 16);
@@ -336,6 +435,18 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int irq, void *data)
 		access_type = (fault_status >> 8) & 0x3;
 		source_id = (fault_status >> 16);

+		/* Page fault only */
+		if ((status & mask) == BIT(i)) {
+			WARN_ON(exception_type < 0xC1 || exception_type > 0xC4);
+
+			ret = panfrost_mmu_map_fault_addr(pfdev, i, addr);
+			if (!ret) {
+				mmu_write(pfdev, MMU_INT_CLEAR, BIT(i));
+				status &= ~mask;
+				continue;
+			}
+		}
+
 		/* terminal fault, print info about the fault */
 		dev_err(pfdev->dev,
 			"Unhandled Page fault in AS%d at VA 0x%016llX\n"
diff --git a/include/uapi/drm/panfrost_drm.h b/include/uapi/drm/panfrost_drm.h
index 17fb5d200f7a..9150dd75aad8 100644
--- a/include/uapi/drm/panfrost_drm.h
+++ b/include/uapi/drm/panfrost_drm.h
@@ -83,6 +83,7 @@ struct drm_panfrost_wait_bo {
 };

 #define PANFROST_BO_NOEXEC 1
+#define PANFROST_BO_HEAP 2

 /**
  * struct drm_panfrost_create_bo - ioctl argument for creating Panfrost BOs.
  *
On 02/08/2019 20:51, Rob Herring wrote:
> The midgard/bifrost GPUs need GPU heap memory that is allocated on GPU
> page faults and not pinned in memory. The vendor driver calls this
> functionality GROW_ON_GPF.
>
> This implementation assumes that BOs allocated with the PANFROST_BO_HEAP
> flag are never mmapped or exported. Both of those may actually work, but
> I'm unsure if there's some interaction there. It would cause the whole
> object to be pinned in memory, which would defeat the point of this.
>
> On faults, we map in 2MB at a time in order to utilize huge pages (if
> enabled). Currently, once we've mapped pages in, they are only unmapped
> if the BO is freed. Once we add shrinker support, we can unmap pages with
> the shrinker.
>
> Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
> Cc: Boris Brezillon <boris.brezillon@collabora.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Steven Price <steven.price@arm.com>
> Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
> Signed-off-by: Rob Herring <robh@kernel.org>
I can no longer trigger a memory leak or crash... :)
Reviewed-by: Steven Price <steven.price@arm.com>
Steve
Increment the driver version to expose the new BO allocation flags.
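A hedged sketch of how userspace could gate use of the new flags on this bump, using drmGetVersion()/drmFreeVersion() from libdrm (xf86drm.h); the helper name is illustrative:

#include <stdbool.h>
#include <xf86drm.h>

/* Illustrative helper: true if the kernel accepts NOEXEC/HEAP in CREATE_BO. */
static bool panfrost_has_noexec_heap(int fd)
{
	drmVersionPtr ver = drmGetVersion(fd);
	bool ok;

	if (!ver)
		return false;

	/* 1.1 is the first interface version exposing the new BO flags. */
	ok = ver->version_major > 1 ||
	     (ver->version_major == 1 && ver->version_minor >= 1);

	drmFreeVersion(ver);
	return ok;
}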
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
---
 drivers/gpu/drm/panfrost/panfrost_drv.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index a7126b5f8e5d..ecdbe2c9fd67 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -357,6 +357,11 @@ static const struct drm_ioctl_desc panfrost_drm_driver_ioctls[] = {

 DEFINE_DRM_GEM_SHMEM_FOPS(panfrost_drm_driver_fops);

+/*
+ * Panfrost driver version:
+ * - 1.0 - initial interface
+ * - 1.1 - adds HEAP and NOEXEC flags for CREATE_BO
+ */
 static struct drm_driver panfrost_drm_driver = {
 	.driver_features = DRIVER_RENDER | DRIVER_GEM | DRIVER_SYNCOBJ,
 	.open = panfrost_open,
@@ -368,7 +373,7 @@ static struct drm_driver panfrost_drm_driver = {
 	.desc = "panfrost DRM",
 	.date = "20180908",
 	.major = 1,
-	.minor = 0,
+	.minor = 1,

 	.gem_create_object = panfrost_gem_create_object,
 	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
On 8/2/19 9:51 PM, Rob Herring wrote:
> This series adds new BO allocation flags PANFROST_BO_HEAP and
> PANFROST_BO_NOEXEC. The heap allocations are paged in on GPU page faults.
>
> Tomeu reports he has tested this in the panfrost CI.
All seems to be working fine:
https://gitlab.freedesktop.org/tomeu/mesa/pipelines/53591
Cheers,
Tomeu
On Mon, Aug 5, 2019 at 10:10 AM Tomeu Vizoso <tomeu.vizoso@collabora.com> wrote:
> On 8/2/19 9:51 PM, Rob Herring wrote:
>> This series adds new BO allocation flags PANFROST_BO_HEAP and
>> PANFROST_BO_NOEXEC. The heap allocations are paged in on GPU page faults.
>>
>> Tomeu reports he has tested this in the panfrost CI.
>
> All seems to be working fine:
Thanks for testing. Is that a Tested-by?
Rob
On 8/5/19 11:10 PM, Rob Herring wrote:
> On Mon, Aug 5, 2019 at 10:10 AM Tomeu Vizoso <tomeu.vizoso@collabora.com> wrote:
>> On 8/2/19 9:51 PM, Rob Herring wrote:
>>> This series adds new BO allocation flags PANFROST_BO_HEAP and
>>> PANFROST_BO_NOEXEC. The heap allocations are paged in on GPU page faults.
>>>
>>> Tomeu reports he has tested this in the panfrost CI.
>>
>> All seems to be working fine:
>
> Thanks for testing. Is that a Tested-by?
Yep.
Thanks,
Tomeu