Kernel DRM driver for ARM Mali 400/450 GPUs.
This implementation mainly takes the amdgpu DRM driver as a reference.
- Mali 4xx GPUs have two kinds of processors, GP and PP. GP handles OpenGL vertex shader processing and PP handles fragment shader processing. Each processor has its own MMU, so the processors work in virtual address spaces.
- There's only one GP but multiple PPs (max 4 for Mali 400 and 8 for Mali 450) in the same Mali 4xx GPU. All PPs are grouped together to handle a single fragment shader task, divided by FB output tiled pixels. The Mali 400 user space driver is responsible for assigning target tiled pixels to each PP, but Mali 450 has a HW module called DLBU to dynamically balance each PP's load.
- The user space driver allocates buffer objects and maps them into the GPU virtual address space, uploads the command stream and draw data through a CPU mmap of the buffer object, then submits the task to GP/PP with a register frame indicating where the command stream is, plus misc settings.
- There's no command stream validation/relocation because each user process has its own GPU virtual address space. The GP/PP's MMU switches virtual address spaces before running two tasks from different user processes. Erroneous or malicious user space code just gets an MMU fault or GP/PP error IRQ, after which the HW/SW is recovered.
- TTM is used as the MM. TTM_PL_TT type memory is used as the content of a lima buffer object, which is allocated from the TTM page pool. All lima buffer objects get pinned with TTM_PL_FLAG_NO_EVICT at allocation, so there's no buffer eviction or swap for now. We need reverse engineering to see if and how GP/PP support MMU fault recovery (continuing execution). Otherwise we have to pin/unpin each involved buffer at task creation/deletion.
- drm_sched is used for GPU task scheduling. Each OpenGL context should have a lima context object in the kernel to distinguish tasks from different users. drm_sched gets tasks from each lima context in a fair way.
Not implemented:
- Dump buffer support
- Power management
- Performance counter
This patch series just packs a pair of .c/.h files into each patch. For the whole history of this driver's development, see: https://github.com/yuq/linux-lima/commits/lima-4.17-rc4
The Mesa driver is still in development and not ready for daily use, but it can run some simple tests like kmscube and glmark2, see: https://github.com/yuq/mesa-lima
Andrei Paulau (1):
  arm64/dts: add switch-delay for meson mali

Lima Project Developers (10):
  drm/lima: add mali 4xx GPU hardware regs
  drm/lima: add lima core driver
  drm/lima: add GPU device functions
  drm/lima: add PMU related functions
  drm/lima: add PP related functions
  drm/lima: add MMU related functions
  drm/lima: add GPU virtual memory space handling
  drm/lima: add GEM related functions
  drm/lima: add GEM Prime related functions
  drm/lima: add makefile and kconfig

Qiang Yu (12):
  dt-bindings: add switch-delay property for mali-utgard
  arm64/dts: add switch-delay for meson mali
  Revert "drm: Nerf the preclose callback for modern drivers"
  drm/lima: add lima uapi header
  drm/lima: add L2 cache functions
  drm/lima: add GP related functions
  drm/lima: add BCAST related function
  drm/lima: add DLBU related functions
  drm/lima: add TTM subsystem functions
  drm/lima: add buffer object functions
  drm/lima: add GPU schedule using DRM_SCHED
  drm/lima: add context related functions

Simon Shields (1):
  ARM: dts: add gpu node to exynos4
 .../bindings/gpu/arm,mali-utgard.txt          |   4 +
 arch/arm/boot/dts/exynos4.dtsi                |  33 ++
 arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi   |   1 +
 .../boot/dts/amlogic/meson-gxl-mali.dtsi      |   1 +
 drivers/gpu/drm/Kconfig                       |   2 +
 drivers/gpu/drm/Makefile                      |   1 +
 drivers/gpu/drm/drm_file.c                    |   8 +-
 drivers/gpu/drm/lima/Kconfig                  |   9 +
 drivers/gpu/drm/lima/Makefile                 |  19 +
 drivers/gpu/drm/lima/lima_bcast.c             |  65 +++
 drivers/gpu/drm/lima/lima_bcast.h             |  34 ++
 drivers/gpu/drm/lima/lima_ctx.c               | 143 +++++
 drivers/gpu/drm/lima/lima_ctx.h               |  51 ++
 drivers/gpu/drm/lima/lima_device.c            | 407 ++++++++++++++
 drivers/gpu/drm/lima/lima_device.h            | 136 +++++
 drivers/gpu/drm/lima/lima_dlbu.c              |  75 +++
 drivers/gpu/drm/lima/lima_dlbu.h              |  37 ++
 drivers/gpu/drm/lima/lima_drv.c               | 466 ++++++++++++++++
 drivers/gpu/drm/lima/lima_drv.h               |  77 +++
 drivers/gpu/drm/lima/lima_gem.c               | 459 ++++++++++++++++
 drivers/gpu/drm/lima/lima_gem.h               |  41 ++
 drivers/gpu/drm/lima/lima_gem_prime.c         |  66 +++
 drivers/gpu/drm/lima/lima_gem_prime.h         |  31 ++
 drivers/gpu/drm/lima/lima_gp.c                | 293 +++++++++++
 drivers/gpu/drm/lima/lima_gp.h                |  34 ++
 drivers/gpu/drm/lima/lima_l2_cache.c          |  98 ++++
 drivers/gpu/drm/lima/lima_l2_cache.h          |  32 ++
 drivers/gpu/drm/lima/lima_mmu.c               | 154 ++++++
 drivers/gpu/drm/lima/lima_mmu.h               |  34 ++
 drivers/gpu/drm/lima/lima_object.c            | 120 +++++
 drivers/gpu/drm/lima/lima_object.h            |  87 +++
 drivers/gpu/drm/lima/lima_pmu.c               |  85 +++
 drivers/gpu/drm/lima/lima_pmu.h               |  30 ++
 drivers/gpu/drm/lima/lima_pp.c                | 418 +++++++++++++++
 drivers/gpu/drm/lima/lima_pp.h                |  37 ++
 drivers/gpu/drm/lima/lima_regs.h              | 304 +++++++++++
 drivers/gpu/drm/lima/lima_sched.c             | 497 ++++++++++++++++++
 drivers/gpu/drm/lima/lima_sched.h             | 126 +++++
 drivers/gpu/drm/lima/lima_ttm.c               | 409 ++++++++++++++
 drivers/gpu/drm/lima/lima_ttm.h               |  44 ++
 drivers/gpu/drm/lima/lima_vm.c                | 312 +++++++++++
 drivers/gpu/drm/lima/lima_vm.h                |  73 +++
 include/drm/drm_drv.h                         |  23 +-
 include/uapi/drm/lima_drm.h                   | 195 +++++++
 44 files changed, 5565 insertions(+), 6 deletions(-)
 create mode 100644 drivers/gpu/drm/lima/Kconfig
 create mode 100644 drivers/gpu/drm/lima/Makefile
 create mode 100644 drivers/gpu/drm/lima/lima_bcast.c
 create mode 100644 drivers/gpu/drm/lima/lima_bcast.h
 create mode 100644 drivers/gpu/drm/lima/lima_ctx.c
 create mode 100644 drivers/gpu/drm/lima/lima_ctx.h
 create mode 100644 drivers/gpu/drm/lima/lima_device.c
 create mode 100644 drivers/gpu/drm/lima/lima_device.h
 create mode 100644 drivers/gpu/drm/lima/lima_dlbu.c
 create mode 100644 drivers/gpu/drm/lima/lima_dlbu.h
 create mode 100644 drivers/gpu/drm/lima/lima_drv.c
 create mode 100644 drivers/gpu/drm/lima/lima_drv.h
 create mode 100644 drivers/gpu/drm/lima/lima_gem.c
 create mode 100644 drivers/gpu/drm/lima/lima_gem.h
 create mode 100644 drivers/gpu/drm/lima/lima_gem_prime.c
 create mode 100644 drivers/gpu/drm/lima/lima_gem_prime.h
 create mode 100644 drivers/gpu/drm/lima/lima_gp.c
 create mode 100644 drivers/gpu/drm/lima/lima_gp.h
 create mode 100644 drivers/gpu/drm/lima/lima_l2_cache.c
 create mode 100644 drivers/gpu/drm/lima/lima_l2_cache.h
 create mode 100644 drivers/gpu/drm/lima/lima_mmu.c
 create mode 100644 drivers/gpu/drm/lima/lima_mmu.h
 create mode 100644 drivers/gpu/drm/lima/lima_object.c
 create mode 100644 drivers/gpu/drm/lima/lima_object.h
 create mode 100644 drivers/gpu/drm/lima/lima_pmu.c
 create mode 100644 drivers/gpu/drm/lima/lima_pmu.h
 create mode 100644 drivers/gpu/drm/lima/lima_pp.c
 create mode 100644 drivers/gpu/drm/lima/lima_pp.h
 create mode 100644 drivers/gpu/drm/lima/lima_regs.h
 create mode 100644 drivers/gpu/drm/lima/lima_sched.c
 create mode 100644 drivers/gpu/drm/lima/lima_sched.h
 create mode 100644 drivers/gpu/drm/lima/lima_ttm.c
 create mode 100644 drivers/gpu/drm/lima/lima_ttm.h
 create mode 100644 drivers/gpu/drm/lima/lima_vm.c
 create mode 100644 drivers/gpu/drm/lima/lima_vm.h
 create mode 100644 include/uapi/drm/lima_drm.h
From: Simon Shields simon@lineageos.org
v2 (Qiang Yu): add vendor string to exynos4 mali gpu
Based off a similar commit for the Samsung Mali driver by Tobias Jakobi tjakobi@math.uni-bielefeld.de
Signed-off-by: Simon Shields simon@lineageos.org
Signed-off-by: Qiang Yu yuq825@gmail.com
---
 arch/arm/boot/dts/exynos4.dtsi | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)
diff --git a/arch/arm/boot/dts/exynos4.dtsi b/arch/arm/boot/dts/exynos4.dtsi
index 909a9f2bf5be..7509671c505e 100644
--- a/arch/arm/boot/dts/exynos4.dtsi
+++ b/arch/arm/boot/dts/exynos4.dtsi
@@ -731,6 +731,39 @@
 		status = "disabled";
 	};
 
+	gpu: gpu@13000000 {
+		compatible = "samsung,exynos4-mali", "arm,mali-400";
+		reg = <0x13000000 0x30000>;
+		power-domains = <&pd_g3d>;
+
+		/*
+		 * Propagate VPLL output clock to SCLK_G3D and
+		 * ensure that the DIV_G3D divider is 1.
+		 */
+		assigned-clocks = <&clock CLK_MOUT_G3D1>, <&clock CLK_MOUT_G3D>,
+				  <&clock CLK_FOUT_VPLL>, <&clock CLK_SCLK_G3D>;
+		assigned-clock-parents = <&clock CLK_SCLK_VPLL>,
+					 <&clock CLK_MOUT_G3D1>;
+		assigned-clock-rates = <0>, <0>, <160000000>, <160000000>;
+
+		clocks = <&clock CLK_SCLK_G3D>, <&clock CLK_G3D>;
+		clock-names = "bus", "core";
+
+		interrupts = <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>,
+			     <GIC_SPI 119 IRQ_TYPE_LEVEL_HIGH>,
+			     <GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>,
+			     <GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>,
+			     <GIC_SPI 122 IRQ_TYPE_LEVEL_HIGH>,
+			     <GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>,
+			     <GIC_SPI 124 IRQ_TYPE_LEVEL_HIGH>,
+			     <GIC_SPI 125 IRQ_TYPE_LEVEL_HIGH>,
+			     <GIC_SPI 126 IRQ_TYPE_LEVEL_HIGH>,
+			     <GIC_SPI 127 IRQ_TYPE_LEVEL_HIGH>;
+		interrupt-names = "ppmmu0", "ppmmu1", "ppmmu2", "ppmmu3",
+				  "gpmmu", "pp0", "pp1", "pp2", "pp3", "gp";
+		status = "disabled";
+	};
+
 	tmu: tmu@100c0000 {
 		interrupt-parent = <&combiner>;
 		reg = <0x100C0000 0x100>;
On Fri, May 18, 2018 at 05:27:52PM +0800, Qiang Yu wrote:
From: Simon Shields simon@lineageos.org
v2 (Qiang Yu): add vendor string to exynos4 mali gpu
This also needs to be added to the binding doc.
Based off a similar commit for the Samsung Mali driver by Tobias Jakobi tjakobi@math.uni-bielefeld.de
Signed-off-by: Simon Shields simon@lineageos.org Signed-off-by: Qiang Yu yuq825@gmail.com
arch/arm/boot/dts/exynos4.dtsi | 33 +++++++++++++++++++++++++++++++++ 1 file changed, 33 insertions(+)
Signed-off-by: Qiang Yu yuq825@gmail.com
---
 Documentation/devicetree/bindings/gpu/arm,mali-utgard.txt | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/Documentation/devicetree/bindings/gpu/arm,mali-utgard.txt b/Documentation/devicetree/bindings/gpu/arm,mali-utgard.txt
index c1f65d1dac1d..062d4bee216a 100644
--- a/Documentation/devicetree/bindings/gpu/arm,mali-utgard.txt
+++ b/Documentation/devicetree/bindings/gpu/arm,mali-utgard.txt
@@ -58,6 +58,10 @@ Optional properties:
     A power domain consumer specifier as defined in
     Documentation/devicetree/bindings/power/power_domain.txt
 
+  - switch-delay:
+    This value is the number of Mali clock cycles it takes to
+    enable the power gates and turn on the power mesh.
+
 Vendor-specific bindings
 ------------------------
On Fri, May 18, 2018 at 05:27:53PM +0800, Qiang Yu wrote:
Signed-off-by: Qiang Yu yuq825@gmail.com
Documentation/devicetree/bindings/gpu/arm,mali-utgard.txt | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/Documentation/devicetree/bindings/gpu/arm,mali-utgard.txt b/Documentation/devicetree/bindings/gpu/arm,mali-utgard.txt
index c1f65d1dac1d..062d4bee216a 100644
--- a/Documentation/devicetree/bindings/gpu/arm,mali-utgard.txt
+++ b/Documentation/devicetree/bindings/gpu/arm,mali-utgard.txt
@@ -58,6 +58,10 @@ Optional properties:
     A power domain consumer specifier as defined in
     Documentation/devicetree/bindings/power/power_domain.txt
- switch-delay:
- This value is the number of Mali clock cycles it takes to
- enable the power gates and turn on the power mesh.
This should be implied by the SoC specific compatible string.
Alternatively, can't the driver just pick a value long enough for everyone? Does it really vary that much, and is it timing critical?
Rob
P.S. Keep up the great work on this!
On Thu, May 24, 2018 at 1:04 AM, Rob Herring robh@kernel.org wrote:
On Fri, May 18, 2018 at 05:27:53PM +0800, Qiang Yu wrote:
Signed-off-by: Qiang Yu yuq825@gmail.com
Documentation/devicetree/bindings/gpu/arm,mali-utgard.txt | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/Documentation/devicetree/bindings/gpu/arm,mali-utgard.txt b/Documentation/devicetree/bindings/gpu/arm,mali-utgard.txt
index c1f65d1dac1d..062d4bee216a 100644
--- a/Documentation/devicetree/bindings/gpu/arm,mali-utgard.txt
+++ b/Documentation/devicetree/bindings/gpu/arm,mali-utgard.txt
@@ -58,6 +58,10 @@ Optional properties:
    A power domain consumer specifier as defined in
    Documentation/devicetree/bindings/power/power_domain.txt
- switch-delay:
- This value is the number of Mali clock cycles it takes to
- enable the power gates and turn on the power mesh.
This should be implied by the SoC specific compatible string.
If so, we would have to maintain a per-SoC switch-delay table inside the driver. But this should be the DTS's job, as it's an SoC parameter.
Alternatively, can't the driver just pick a value long enough for everyone? Does it really vary that much, and is it timing critical?
In fact, I haven't tried setting an SoC that uses 0xff to 0xffff. I just use the value from the official driver shipped with the board. But if a board needs 0xffff, setting it to 0xff will make it unstable. So before setting the 0xff boards to 0xffff, we'd need experiments to see whether it affects performance or stability, and the experiments would need to cover more SoCs.
Regards, Qiang
Rob
P.S. Keep up the great work on this!
The Meson Mali GPU operates at a high clock frequency and needs this value to be high to work in a stable state.
Signed-off-by: Qiang Yu yuq825@gmail.com
---
 arch/arm64/boot/dts/amlogic/meson-gxl-mali.dtsi | 1 +
 1 file changed, 1 insertion(+)
diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-mali.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxl-mali.dtsi
index eb327664a4d8..8bed15267c9c 100644
--- a/arch/arm64/boot/dts/amlogic/meson-gxl-mali.dtsi
+++ b/arch/arm64/boot/dts/amlogic/meson-gxl-mali.dtsi
@@ -23,6 +23,7 @@
 			     "pp2", "ppmmu2";
 		clocks = <&clkc CLKID_CLK81>, <&clkc CLKID_MALI>;
 		clock-names = "bus", "core";
+		switch-delay = <0xffff>;
 
 		/*
 		 * Mali clocking is provided by two identical clock paths
Hi Yuq,
On 18/05/2018 11:27, Qiang Yu wrote:
The Meson Mali GPU operates at a high clock frequency and needs this value to be high to work in a stable state.
Signed-off-by: Qiang Yu yuq825@gmail.com
arch/arm64/boot/dts/amlogic/meson-gxl-mali.dtsi | 1 + 1 file changed, 1 insertion(+)
diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-mali.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxl-mali.dtsi index eb327664a4d8..8bed15267c9c 100644 --- a/arch/arm64/boot/dts/amlogic/meson-gxl-mali.dtsi +++ b/arch/arm64/boot/dts/amlogic/meson-gxl-mali.dtsi @@ -23,6 +23,7 @@ "pp2", "ppmmu2"; clocks = <&clkc CLKID_CLK81>, <&clkc CLKID_MALI>; clock-names = "bus", "core";
switch-delay = <0xffff>;
/*
- Mali clocking is provided by two identical clock paths
Please CC linux-amlogic@lists.infradead.org so these Amlogic DT patches can be reviewed and applied.

Same for the cover letter, so reviewers have context.
Thanks, Neil
Hi Neil,
OK, I'll resend these patches.
Regards, Qiang
From: Andrei Paulau 7134956@gmail.com
The Meson Mali GPU operates at a high clock frequency and needs this value to be high to work in a stable state.
Signed-off-by: Andrei Paulau 7134956@gmail.com
---
 arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi | 1 +
 1 file changed, 1 insertion(+)
diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi
index 562c26a0ba33..00cd2fc1aa95 100644
--- a/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi
+++ b/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi
@@ -241,6 +241,7 @@
 			     "pp2", "ppmmu2";
 		clocks = <&clkc CLKID_CLK81>, <&clkc CLKID_MALI>;
 		clock-names = "bus", "core";
+		switch-delay = <0xffff>;
 
 		/*
 		 * Mali clocking is provided by two identical clock paths
Hi Yuq,
On 18/05/2018 11:27, Qiang Yu wrote:
From: Andrei Paulau 7134956@gmail.com
The Meson Mali GPU operates at a high clock frequency and needs this value to be high to work in a stable state.
Signed-off-by: Andrei Paulau 7134956@gmail.com
arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi | 1 + 1 file changed, 1 insertion(+)
diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi index 562c26a0ba33..00cd2fc1aa95 100644 --- a/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi +++ b/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi @@ -241,6 +241,7 @@ "pp2", "ppmmu2"; clocks = <&clkc CLKID_CLK81>, <&clkc CLKID_MALI>; clock-names = "bus", "core";
switch-delay = <0xffff>;
/*
- Mali clocking is provided by two identical clock paths
Same for this one, CC it to linux-amlogic@lists.infradead.org
Thanks, Neil
This reverts commit 45c3d213a400c952ab7119f394c5293bb6877e6b.
The lima driver needs preclose to wait for all tasks in the contexts created within the closing file to finish before freeing all the buffer objects. Otherwise pending tasks may fail and produce noisy MMU fault messages.
Moving this wait into each buffer object's free function could achieve the same result, but some buffer objects are shared with other file contexts while we only want to wait for the closing file context's tasks. So that implementation would not be as straightforward as the preclose one.
Signed-off-by: Qiang Yu yuq825@gmail.com
---
 drivers/gpu/drm/drm_file.c |  8 ++++----
 include/drm/drm_drv.h      | 23 +++++++++++++++++++++--
 2 files changed, 25 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
index e394799979a6..0a43107396b9 100644
--- a/drivers/gpu/drm/drm_file.c
+++ b/drivers/gpu/drm/drm_file.c
@@ -361,8 +361,9 @@ void drm_lastclose(struct drm_device * dev)
  *
  * This function must be used by drivers as their &file_operations.release
  * method. It frees any resources associated with the open file, and calls the
- * &drm_driver.postclose driver callback. If this is the last open file for the
- * DRM device also proceeds to call the &drm_driver.lastclose driver callback.
+ * &drm_driver.preclose and &drm_driver.lastclose driver callbacks. If this is
+ * the last open file for the DRM device also proceeds to call the
+ * &drm_driver.lastclose driver callback.
  *
  * RETURNS:
  *
@@ -382,8 +383,7 @@ int drm_release(struct inode *inode, struct file *filp)
 	list_del(&file_priv->lhead);
 	mutex_unlock(&dev->filelist_mutex);
 
-	if (drm_core_check_feature(dev, DRIVER_LEGACY) &&
-	    dev->driver->preclose)
+	if (dev->driver->preclose)
 		dev->driver->preclose(dev, file_priv);
 
 	/* ========================================================
diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
index d23dcdd1bd95..8d6080f97ed4 100644
--- a/include/drm/drm_drv.h
+++ b/include/drm/drm_drv.h
@@ -107,6 +107,23 @@ struct drm_driver {
 	 */
 	int (*open) (struct drm_device *, struct drm_file *);
 
+	/**
+	 * @preclose:
+	 *
+	 * One of the driver callbacks when a new &struct drm_file is closed.
+	 * Useful for tearing down driver-private data structures allocated in
+	 * @open like buffer allocators, execution contexts or similar things.
+	 *
+	 * Since the display/modeset side of DRM can only be owned by exactly
+	 * one &struct drm_file (see &drm_file.is_master and &drm_device.master)
+	 * there should never be a need to tear down any modeset related
+	 * resources in this callback. Doing so would be a driver design bug.
+	 *
+	 * FIXME: It is not really clear why there's both @preclose and
+	 * @postclose. Without a really good reason, use @postclose only.
+	 */
+	void (*preclose) (struct drm_device *, struct drm_file *file_priv);
+
 	/**
 	 * @postclose:
 	 *
@@ -118,6 +135,9 @@ struct drm_driver {
 	 * one &struct drm_file (see &drm_file.is_master and &drm_device.master)
 	 * there should never be a need to tear down any modeset related
 	 * resources in this callback. Doing so would be a driver design bug.
+	 *
+	 * FIXME: It is not really clear why there's both @preclose and
+	 * @postclose. Without a really good reason, use @postclose only.
 	 */
 	void (*postclose) (struct drm_device *, struct drm_file *);
 
@@ -134,7 +154,7 @@ struct drm_driver {
 	 * state changes, e.g. in conjunction with the :ref:`vga_switcheroo`
 	 * infrastructure.
 	 *
-	 * This is called after @postclose hook has been called.
+	 * This is called after @preclose and @postclose have been called.
 	 *
 	 * NOTE:
 	 *
@@ -601,7 +621,6 @@ struct drm_driver {
 	/* List of devices hanging off this driver with stealth attach. */
 	struct list_head legacy_dev_list;
 	int (*firstopen) (struct drm_device *);
-	void (*preclose) (struct drm_device *, struct drm_file *file_priv);
 	int (*dma_ioctl) (struct drm_device *dev, void *data, struct drm_file *file_priv);
 	int (*dma_quiescent) (struct drm_device *);
 	int (*context_dtor) (struct drm_device *dev, int context);
Well NAK, that brings back a callback we worked quite hard on getting rid of.
It looks like the problem isn't that you need the preclose callback; you rather seem to have misunderstood how TTM works.

All you need to do is to clean up your command submission path so that the caller of lima_sched_context_queue_task() adds the resulting scheduler fence to TTM's buffer objects.
Regards, Christian.
On Wed, May 23, 2018 at 5:35 PM, Christian König ckoenig.leichtzumerken@gmail.com wrote:
Well NAK, that brings back a callback we worked quite hard on getting rid of.
It looks like the problem isn't that you need the preclose callback; you rather seem to have misunderstood how TTM works.

All you need to do is to clean up your command submission path so that the caller of lima_sched_context_queue_task() adds the resulting scheduler fence to TTM's buffer objects.
You mean adding the finished dma fence to the buffer's reservation object, then waiting on it before unmapping the buffer from the GPU VM in drm_release()'s buffer close callback?

Adding the fence is done already, and I did wait on it before unmapping. But then I saw that when the buffer is shared between processes, the "perfect wait" is to wait only on the fences from this process's tasks, so it's better to also distinguish fences. If so, why don't we just wait for this process's tasks in preclose, before unmapping/freeing the buffers in drm_release()?
Regards, Qiang
Am 23.05.2018 um 15:13 schrieb Qiang Yu:
On Wed, May 23, 2018 at 5:35 PM, Christian König ckoenig.leichtzumerken@gmail.com wrote:
Well NAK, that brings back a callback we worked quite hard on getting rid of.
It looks like the problem isn't that you need the preclose callback; you rather seem to have misunderstood how TTM works.

All you need to do is to clean up your command submission path so that the caller of lima_sched_context_queue_task() adds the resulting scheduler fence to TTM's buffer objects.
You mean adding the finished dma fence to the buffer's reservation object, then waiting on it before unmapping the buffer from the GPU VM in drm_release()'s buffer close callback?
That is one possibility, but also not necessary.
TTM has a destroy callback which is called from a workqueue when all fences on that BOs have signaled.
Depending on your VM management you can use it to delay unmapping the buffer until it is actually not used any more.
Adding the fence is done already, and I did wait on it before unmapping. But then I saw that when the buffer is shared between processes, the "perfect wait" is to wait only on the fences from this process's tasks, so it's better to also distinguish fences. If so, why don't we just wait for this process's tasks in preclose, before unmapping/freeing the buffers in drm_release()?
Well, it depends on your VM management. When userspace expects that the VM space the BO used is reusable immediately, then the TTM callback won't work.
On the other hand you can just grab the list of fences on a BO and filter out the ones from your current process and wait for those. See amdgpu_sync_resv() as an example how to do that.
Christian.
Regards, Qiang
Am 18.05.2018 um 11:27 schrieb Qiang Yu:
This reverts commit 45c3d213a400c952ab7119f394c5293bb6877e6b.
The lima driver needs preclose to wait for all tasks in the contexts created within the closing file to finish before freeing all the buffer objects. Otherwise pending tasks may fail and produce noisy MMU fault messages.
Moving this wait into each buffer object's free function can achieve the same result, but some buffer objects are shared with other file contexts, while we only want to wait for the closing file context's tasks. So that implementation is not as straightforward as the preclose one.
Signed-off-by: Qiang Yu yuq825@gmail.com
 drivers/gpu/drm/drm_file.c |  8 ++++----
 include/drm/drm_drv.h      | 23 +++++++++++++++++++++--
 2 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
index e394799979a6..0a43107396b9 100644
--- a/drivers/gpu/drm/drm_file.c
+++ b/drivers/gpu/drm/drm_file.c
@@ -361,8 +361,9 @@ void drm_lastclose(struct drm_device * dev)
  *
  * This function must be used by drivers as their &file_operations.release
  * method. It frees any resources associated with the open file, and calls the
- * &drm_driver.postclose driver callback. If this is the last open file for the
- * DRM device also proceeds to call the &drm_driver.lastclose driver callback.
+ * &drm_driver.preclose and &drm_driver.lastclose driver callbacks. If this is
+ * the last open file for the DRM device also proceeds to call the
+ * &drm_driver.lastclose driver callback.
  *
  * RETURNS:
@@ -382,8 +383,7 @@ int drm_release(struct inode *inode, struct file *filp)
 	list_del(&file_priv->lhead);
 	mutex_unlock(&dev->filelist_mutex);
 
-	if (drm_core_check_feature(dev, DRIVER_LEGACY) &&
-	    dev->driver->preclose)
+	if (dev->driver->preclose)
 		dev->driver->preclose(dev, file_priv);
 
 	/* ========================================================
diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
index d23dcdd1bd95..8d6080f97ed4 100644
--- a/include/drm/drm_drv.h
+++ b/include/drm/drm_drv.h
@@ -107,6 +107,23 @@ struct drm_driver {
 	 */
 	int (*open) (struct drm_device *, struct drm_file *);
 
+	/**
+	 * @preclose:
+	 *
+	 * One of the driver callbacks when a new &struct drm_file is closed.
+	 * Useful for tearing down driver-private data structures allocated in
+	 * @open like buffer allocators, execution contexts or similar things.
+	 *
+	 * Since the display/modeset side of DRM can only be owned by exactly
+	 * one &struct drm_file (see &drm_file.is_master and &drm_device.master)
+	 * there should never be a need to tear down any modeset related
+	 * resources in this callback. Doing so would be a driver design bug.
+	 *
+	 * FIXME: It is not really clear why there's both @preclose and
+	 * @postclose. Without a really good reason, use @postclose only.
+	 */
+	void (*preclose) (struct drm_device *, struct drm_file *file_priv);
+
 	/**
 	 * @postclose:
 	 *
@@ -118,6 +135,9 @@ struct drm_driver {
 	 * one &struct drm_file (see &drm_file.is_master and &drm_device.master)
 	 * there should never be a need to tear down any modeset related
 	 * resources in this callback. Doing so would be a driver design bug.
+	 *
+	 * FIXME: It is not really clear why there's both @preclose and
+	 * @postclose. Without a really good reason, use @postclose only.
 	 */
 	void (*postclose) (struct drm_device *, struct drm_file *);
 
@@ -134,7 +154,7 @@ struct drm_driver {
 	 * state changes, e.g. in conjunction with the :ref:`vga_switcheroo`
 	 * infrastructure.
 	 *
-	 * This is called after @postclose hook has been called.
+	 * This is called after @preclose and @postclose have been called.
 	 *
 	 * NOTE:
 	 *
@@ -601,7 +621,6 @@ struct drm_driver {
 	/* List of devices hanging off this driver with stealth attach. */
 	struct list_head legacy_dev_list;
 	int (*firstopen) (struct drm_device *);
-	void (*preclose) (struct drm_device *, struct drm_file *file_priv);
 	int (*dma_ioctl) (struct drm_device *dev, void *data, struct drm_file *file_priv);
 	int (*dma_quiescent) (struct drm_device *);
 	int (*context_dtor) (struct drm_device *dev, int context);
On Wed, May 23, 2018 at 9:41 PM, Christian König christian.koenig@amd.com wrote:
Am 23.05.2018 um 15:13 schrieb Qiang Yu:
On Wed, May 23, 2018 at 5:35 PM, Christian König ckoenig.leichtzumerken@gmail.com wrote:
[SNIP]
Well it depends on your VM management. When userspace expects that the VM space the BO used is reusable immediately, then the TTM callback won't work.
On the other hand you can just grab the list of fences on a BO and filter out the ones from your current process and wait for those. See amdgpu_sync_resv() as an example how to do that.
In the current lima implementation, the user space driver is responsible for not unmapping/freeing a buffer before the tasks using it are complete, and VM map/unmap is not deferred.
This works simply and fine except for the case where the user presses Ctrl+C to terminate the application, which force-closes the drm fd.
I'd prefer waiting on the buffer fences before the VM unmap, filtering like amdgpu_sync_resv(), over implementing refcounting on kernel tasks. But neither of these two ways is as simple as preclose.
So I still don't understand why you don't want to bring preclose back, even at the cost of introducing a more complicated mechanism to cover freeing/unmapping a buffer before this process's tasks are done?
Regards, Qiang
Christian.
Regards, Qiang
[SNIP]
Am 24.05.2018 um 03:38 schrieb Qiang Yu:
[SNIP]
In the current lima implementation, the user space driver is responsible for not unmapping/freeing a buffer before the tasks using it are complete, and VM map/unmap is not deferred.
Well it's up to you how to design userspace, but in the past doing it like that turned out to be a rather bad design decision.
Keep in mind that the kernel driver must guarantee that shaders can never access freed up memory.
Otherwise taking over the system from an unprivileged process becomes just a typing exercise once you manage to access freed memory which is now used for a page table.
Because of this we have separate tracking in amdgpu so that we not only know who is using which BO, but also who is using which VM.
This works simply and fine except for the case where the user presses Ctrl+C to terminate the application, which force-closes the drm fd.
I'm not sure if that actually works as fine as you think.
For an example of what we had to add to prevent security breaches, take a look at amdgpu_gem_object_close().
I'd prefer waiting on the buffer fences before the VM unmap, filtering like amdgpu_sync_resv(), over implementing refcounting on kernel tasks. But neither of these two ways is as simple as preclose.
Well, I would rather say you should either delay VM unmap operations until all users of the VM are done with their work using the ttm_bo_destroy callback.
Or you block in the gem_close_object callback until all tasks using the BO are done with it.
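That second option — blocking on close until outstanding tasks finish — can be sketched standalone. This is a hedged toy model, not the real GEM/TTM API: toy_bo, toy_task_done() and toy_gem_close() are invented names, and waiting on fences is simulated by draining a counter; the point is only the ordering (no unmap while tasks are still pending, so a forced close cannot race a running job into an MMU fault):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for a GEM buffer object with outstanding GPU work. */
struct toy_bo {
	int pending_tasks; /* tasks still referencing this BO on the GPU */
	bool gpu_mapped;
};

/* Stand-in for the scheduler retiring one task that used the BO. */
static void toy_task_done(struct toy_bo *bo)
{
	if (bo->pending_tasks > 0)
		bo->pending_tasks--;
}

/* Modeled gem_close_object: only unmap once no task can still touch the
 * VA range. Returns true if the unmap actually happened. */
static bool toy_gem_close(struct toy_bo *bo)
{
	while (bo->pending_tasks > 0)
		toy_task_done(bo); /* stand-in for waiting on each fence */
	bo->gpu_mapped = false;
	return !bo->gpu_mapped;
}
```

Closing a BO with two pending tasks drains both before the mapping goes away.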
So I still don't understand why you don't want to bring preclose back, even at the cost of introducing a more complicated mechanism to cover freeing/unmapping a buffer before this process's tasks are done?
We intentionally removed the preclose callback to prevent certain use cases, bringing it back to allow your use case looks rather fishy to me.
BTW: What exactly is the issue with using the postclose callback?
Regards, Christian.
Regards, Qiang
On Thu, May 24, 2018 at 2:46 PM, Christian König christian.koenig@amd.com wrote:
Am 24.05.2018 um 03:38 schrieb Qiang Yu:
[SNIP]
In the current lima implementation, the user space driver is responsible for not unmapping/freeing a buffer before the tasks using it are complete, and VM map/unmap is not deferred.
Well it's up to you how to design userspace, but in the past doing it like that turned out to be a rather bad design decision.
Keep in mind that the kernel driver must guarantee that shaders can never access freed up memory.
Otherwise taking over the system from an unprivileged process becomes just a typing exercise once you manage to access freed memory which is now used for a page table.
Right, I know this has to be avoided.
Because of this we have separate tracking in amdgpu so that we not only know who is using which BO, but also who is using which VM.
amdgpu's VM implementation seems too complicated for this simple mali GPU, but I may investigate it more to see if I can make it better.
[SNIP]
We intentionally removed the preclose callback to prevent certain use cases, bringing it back to allow your use case looks rather fishy to me.
Other drivers seem to do either the defer or the wait approach to cope with the removal of preclose. I can do the same as you suggested, but I just don't understand why we make our life harder. Can I know what case you want to prevent?
BTW: What exactly is the issue with using the postclose callback?
The issue is that when Ctrl+C terminates an application, with no wait or deferred unmap, the buffer just gets unmapped before the task is done, so the kernel driver gets an MMU fault and does an HW reset to recover the GPU.
Regards, Qiang
Regards, Christian.
Regards, Qiang
Am 24.05.2018 um 11:24 schrieb Qiang Yu:
On Thu, May 24, 2018 at 2:46 PM, Christian König christian.koenig@amd.com wrote: [SNIP]
Because of this we have separate tracking in amdgpu so that we not only know who is using which BO, but also who is using which VM.
amdgpu's VM implementation seems too complicated for this simple mali GPU, but I may investigate it more to see if I can make it better.
Yeah, completely agree.
The VM handling in amdgpu is really complicated because we had to tune it for multiple use cases, e.g. partially resident textures, delayed updates, etc.
But you should at least be able to take the lessons learned we had with that VM code and not make the same mistakes again.
We intentionally removed the preclose callback to prevent certain use cases, bringing it back to allow your use case looks rather fishy to me.
Other drivers seem to do either the defer or the wait approach to cope with the removal of preclose. I can do the same as you suggested, but I just don't understand why we make our life harder. Can I know what case you want to prevent?
I think what matters most for your case is that drivers should handle closing a BO because userspace said so in the same way they handle closing a BO because of process termination, but see below.
BTW: What exactly is the issue with using the postclose callback?
The issue is that when Ctrl+C terminates an application, with no wait or deferred unmap, the buffer just gets unmapped before the task is done, so the kernel driver gets an MMU fault and does an HW reset to recover the GPU.
Yeah, that sounds like exactly one of the reasons we had the callback in the first place and worked on removing it.
See the intention is to have reliable handling, e.g. use the same code path for closing a BO because of an IOCTL and closing a BO because of process termination.
In other words what happens when userspace closes a BO while the GPU is still using it? Would you then run into a GPU reset as well?
I mean it's your driver stack, so I'm not against it as long as you can live with it. But it's exactly the thing we wanted to avoid here.
Regards, Christian.
On Thu, May 24, 2018 at 5:41 PM, Christian König christian.koenig@amd.com wrote:
Am 24.05.2018 um 11:24 schrieb Qiang Yu:
On Thu, May 24, 2018 at 2:46 PM, Christian König christian.koenig@amd.com wrote: [SNIP]
[SNIP]
See the intention is to have reliable handling, e.g. use the same code path for closing a BO because of an IOCTL and closing a BO because of process termination.
In other words what happens when userspace closes a BO while the GPU is still using it? Would you then run into a GPU reset as well?
Yes, also an MMU fault and GPU reset when the user space driver misuses a buffer like this. I think I don't need to avoid this case, because such misuse deserves a GPU reset, while process termination does not. But you remind me that they indeed share the same code path if preclose is removed now.
Regards, Qiang
I mean it's your driver stack, so I'm not against it as long as you can live with it. But it's exactly the thing we wanted to avoid here.
Regards, Christian.
Signed-off-by: Qiang Yu yuq825@gmail.com
---
 include/uapi/drm/lima_drm.h | 195 ++++++++++++++++++++++++++++++++++++
 1 file changed, 195 insertions(+)
 create mode 100644 include/uapi/drm/lima_drm.h

diff --git a/include/uapi/drm/lima_drm.h b/include/uapi/drm/lima_drm.h
new file mode 100644
index 000000000000..9df95e46fb2c
--- /dev/null
+++ b/include/uapi/drm/lima_drm.h
@@ -0,0 +1,195 @@
+/*
+ * Copyright (C) 2017-2018 Lima Project
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#ifndef __LIMA_DRM_H__
+#define __LIMA_DRM_H__
+
+#include "drm.h"
+
+#if defined(__cplusplus)
+extern "C" {
+#endif
+
+#define LIMA_INFO_GPU_MALI400 0x00
+#define LIMA_INFO_GPU_MALI450 0x01
+
+struct drm_lima_info {
+	__u32 gpu_id;   /* out */
+	__u32 num_pp;   /* out */
+	__u64 va_start; /* out */
+	__u64 va_end;   /* out */
+};
+
+struct drm_lima_gem_create {
+	__u32 size;   /* in */
+	__u32 flags;  /* in */
+	__u32 handle; /* out */
+	__u32 pad;
+};
+
+struct drm_lima_gem_info {
+	__u32 handle; /* in */
+	__u32 pad;
+	__u64 offset; /* out */
+};
+
+#define LIMA_VA_OP_MAP 1
+#define LIMA_VA_OP_UNMAP 2
+
+struct drm_lima_gem_va {
+	__u32 handle; /* in */
+	__u32 op;     /* in */
+	__u32 flags;  /* in */
+	__u32 va;     /* in */
+};
+
+#define LIMA_SUBMIT_BO_READ 0x01
+#define LIMA_SUBMIT_BO_WRITE 0x02
+
+struct drm_lima_gem_submit_bo {
+	__u32 handle; /* in */
+	__u32 flags;  /* in */
+};
+
+#define LIMA_SUBMIT_DEP_FENCE 0x00
+#define LIMA_SUBMIT_DEP_SYNC_FD 0x01
+
+struct drm_lima_gem_submit_dep_fence {
+	__u32 type;
+	__u32 ctx;
+	__u32 pipe;
+	__u32 seq;
+};
+
+struct drm_lima_gem_submit_dep_sync_fd {
+	__u32 type;
+	__u32 fd;
+};
+
+union drm_lima_gem_submit_dep {
+	__u32 type;
+	struct drm_lima_gem_submit_dep_fence fence;
+	struct drm_lima_gem_submit_dep_sync_fd sync_fd;
+};
+
+#define LIMA_GP_FRAME_REG_NUM 6
+
+struct drm_lima_gp_frame {
+	__u32 frame[LIMA_GP_FRAME_REG_NUM];
+};
+
+#define LIMA_PP_FRAME_REG_NUM 23
+#define LIMA_PP_WB_REG_NUM 12
+
+struct drm_lima_m400_pp_frame {
+	__u32 frame[LIMA_PP_FRAME_REG_NUM];
+	__u32 num_pp;
+	__u32 wb[3 * LIMA_PP_WB_REG_NUM];
+	__u32 plbu_array_address[4];
+	__u32 fragment_stack_address[4];
+};
+
+struct drm_lima_m450_pp_frame {
+	__u32 frame[LIMA_PP_FRAME_REG_NUM];
+	__u32 _pad;
+	__u32 wb[3 * LIMA_PP_WB_REG_NUM];
+	__u32 dlbu_regs[4];
+	__u32 fragment_stack_address[8];
+};
+
+#define LIMA_PIPE_GP 0x00
+#define LIMA_PIPE_PP 0x01
+
+#define LIMA_SUBMIT_FLAG_EXPLICIT_FENCE (1 << 0)
+#define LIMA_SUBMIT_FLAG_SYNC_FD_OUT (1 << 1)
+
+struct drm_lima_gem_submit_in {
+	__u32 ctx;
+	__u32 pipe;
+	__u32 nr_bos;
+	__u32 frame_size;
+	__u64 bos;
+	__u64 frame;
+	__u64 deps;
+	__u32 nr_deps;
+	__u32 flags;
+};
+
+struct drm_lima_gem_submit_out {
+	__u32 fence;
+	__u32 done;
+	__u32 sync_fd;
+	__u32 _pad;
+};
+
+union drm_lima_gem_submit {
+	struct drm_lima_gem_submit_in in;
+	struct drm_lima_gem_submit_out out;
+};
+
+struct drm_lima_wait_fence {
+	__u32 ctx;        /* in */
+	__u32 pipe;       /* in */
+	__u64 timeout_ns; /* in */
+	__u32 seq;        /* in */
+	__u32 _pad;
+};
+
+#define LIMA_GEM_WAIT_READ 0x01
+#define LIMA_GEM_WAIT_WRITE 0x02
+
+struct drm_lima_gem_wait {
+	__u32 handle;     /* in */
+	__u32 op;         /* in */
+	__u64 timeout_ns; /* in */
+};
+
+#define LIMA_CTX_OP_CREATE 1
+#define LIMA_CTX_OP_FREE 2
+
+struct drm_lima_ctx {
+	__u32 op; /* in */
+	__u32 id; /* in/out */
+};
+
+#define DRM_LIMA_INFO 0x00
+#define DRM_LIMA_GEM_CREATE 0x01
+#define DRM_LIMA_GEM_INFO 0x02
+#define DRM_LIMA_GEM_VA 0x03
+#define DRM_LIMA_GEM_SUBMIT 0x04
+#define DRM_LIMA_WAIT_FENCE 0x05
+#define DRM_LIMA_GEM_WAIT 0x06
+#define DRM_LIMA_CTX 0x07
+
+#define DRM_IOCTL_LIMA_INFO DRM_IOR(DRM_COMMAND_BASE + DRM_LIMA_INFO, struct drm_lima_info)
+#define DRM_IOCTL_LIMA_GEM_CREATE DRM_IOWR(DRM_COMMAND_BASE + DRM_LIMA_GEM_CREATE, struct drm_lima_gem_create)
+#define DRM_IOCTL_LIMA_GEM_INFO DRM_IOWR(DRM_COMMAND_BASE + DRM_LIMA_GEM_INFO, struct drm_lima_gem_info)
+#define DRM_IOCTL_LIMA_GEM_VA DRM_IOW(DRM_COMMAND_BASE + DRM_LIMA_GEM_VA, struct drm_lima_gem_va)
+#define DRM_IOCTL_LIMA_GEM_SUBMIT DRM_IOWR(DRM_COMMAND_BASE + DRM_LIMA_GEM_SUBMIT, union drm_lima_gem_submit)
+#define DRM_IOCTL_LIMA_WAIT_FENCE DRM_IOW(DRM_COMMAND_BASE + DRM_LIMA_WAIT_FENCE, struct drm_lima_wait_fence)
+#define DRM_IOCTL_LIMA_GEM_WAIT DRM_IOW(DRM_COMMAND_BASE + DRM_LIMA_GEM_WAIT, struct drm_lima_gem_wait)
+#define DRM_IOCTL_LIMA_CTX DRM_IOWR(DRM_COMMAND_BASE + DRM_LIMA_CTX, struct drm_lima_ctx)
+
+#if defined(__cplusplus)
+}
+#endif
+
+#endif /* __LIMA_DRM_H__ */
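As a sanity check of the uapi above, the ioctl request numbers can be recomputed in user space with the standard Linux _IOWR() encoding. The DRM ioctl type 'd' and the DRM_COMMAND_BASE value 0x40 are assumed from the upstream drm.h; the struct below is a local mirror of drm_lima_gem_create for illustration, not an include of the real header:

```c
#include <assert.h>
#include <stdint.h>
#include <sys/ioctl.h> /* provides _IOWR() on Linux */

/* Local 16-byte mirror of the uapi struct from the patch. */
struct drm_lima_gem_create {
	uint32_t size;
	uint32_t flags;
	uint32_t handle;
	uint32_t pad;
};

#define DRM_IOCTL_BASE      'd'  /* from drm.h */
#define DRM_COMMAND_BASE    0x40 /* driver-private ioctl range starts here */
#define DRM_LIMA_GEM_CREATE 0x01

/* Same expansion DRM_IOWR() performs in drm.h: direction, payload size,
 * type 'd' and number DRM_COMMAND_BASE + nr are packed into one word. */
#define DRM_IOCTL_LIMA_GEM_CREATE \
	_IOWR(DRM_IOCTL_BASE, DRM_COMMAND_BASE + DRM_LIMA_GEM_CREATE, \
	      struct drm_lima_gem_create)
```

The payload size is encoded into the request number itself, which is one reason uapi structs like these must keep a stable size and explicit padding.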
On 05/18/2018 11:27 AM, Qiang Yu wrote:
Commit message is missing
Signed-off-by: Qiang Yu yuq825@gmail.com
include/uapi/drm/lima_drm.h | 195 ++++++++++++++++++++++++++++++++++++ 1 file changed, 195 insertions(+) create mode 100644 include/uapi/drm/lima_drm.h
diff --git a/include/uapi/drm/lima_drm.h b/include/uapi/drm/lima_drm.h new file mode 100644 index 000000000000..9df95e46fb2c --- /dev/null +++ b/include/uapi/drm/lima_drm.h
Please convert this to the SPDX license identifiers, that is
// SPDX...
@@ -0,0 +1,195 @@
+/*
+ * Copyright (C) 2017-2018 Lima Project
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
[...]
+#if defined(__cplusplus)
+extern "C" {
+#endif
Is this C++ stuff needed ?
[...]
+#define LIMA_SUBMIT_FLAG_EXPLICIT_FENCE (1 << 0)
+#define LIMA_SUBMIT_FLAG_SYNC_FD_OUT (1 << 1)
BIT(0) and BIT(1) if applicable [...]
On Fri, May 18, 2018 at 5:33 PM, Marek Vasut marex@denx.de wrote:
On 05/18/2018 11:27 AM, Qiang Yu wrote:
Commit message is missing
Signed-off-by: Qiang Yu yuq825@gmail.com
include/uapi/drm/lima_drm.h | 195 ++++++++++++++++++++++++++++++++++++ 1 file changed, 195 insertions(+) create mode 100644 include/uapi/drm/lima_drm.h
diff --git a/include/uapi/drm/lima_drm.h b/include/uapi/drm/lima_drm.h new file mode 100644 index 000000000000..9df95e46fb2c --- /dev/null +++ b/include/uapi/drm/lima_drm.h
Please convert this to the SPDX license identifiers, that is
// SPDX...
OK.
@@ -0,0 +1,195 @@
+/*
+ * Copyright (C) 2017-2018 Lima Project
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
[...]
+#if defined(__cplusplus)
+extern "C" {
+#endif
Is this C++ stuff needed ?
This file is used by both kernel and user space programs, so I added this following the other xxx_drm.h files here.
[...]
+#define LIMA_SUBMIT_FLAG_EXPLICIT_FENCE (1 << 0)
+#define LIMA_SUBMIT_FLAG_SYNC_FD_OUT (1 << 1)
BIT(0) and BIT(1) if applicable
I can use BIT() in kernel-only files but not in this user/kernel shared one, because BIT() is defined in the kernel only; user space would need to define it if this file used it.
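For illustration, the two spellings produce identical values. BIT() is re-defined locally below to mirror the kernel's include/linux/bits.h, precisely because user space headers have no such macro — which is the reason the uapi file spells the flags out as shifts:

```c
#include <assert.h>

/* Local re-definition mirroring the kernel's BIT() macro; uapi headers
 * avoid relying on it since user space does not provide one. */
#define BIT(n) (1UL << (n))

/* The uapi-safe spellings from the patch. */
#define LIMA_SUBMIT_FLAG_EXPLICIT_FENCE (1 << 0)
#define LIMA_SUBMIT_FLAG_SYNC_FD_OUT    (1 << 1)
```

Both forms are compile-time constants; only the availability of the macro differs between kernel and user builds.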
[...]
-- Best regards, Marek Vasut
Regards, Qiang
On 05/20/2018 09:22 AM, Qiang Yu wrote:
On Fri, May 18, 2018 at 5:33 PM, Marek Vasut <marex@denx.de mailto:marex@denx.de> wrote:
On 05/18/2018 11:27 AM, Qiang Yu wrote:
Commit message is missing
> Signed-off-by: Qiang Yu yuq825@gmail.com
> ---
>  include/uapi/drm/lima_drm.h | 195 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 195 insertions(+)
>  create mode 100644 include/uapi/drm/lima_drm.h
>
> diff --git a/include/uapi/drm/lima_drm.h b/include/uapi/drm/lima_drm.h
> new file mode 100644
> index 000000000000..9df95e46fb2c
> --- /dev/null
> +++ b/include/uapi/drm/lima_drm.h
Please convert this to the SPDX license identifiers, that is
// SPDX...
OK.
Thanks
> @@ -0,0 +1,195 @@
> +/*
> + * Copyright (C) 2017-2018 Lima Project
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
[...]
> +#if defined(__cplusplus)
> +extern "C" {
> +#endif
Is this C++ stuff needed ?
This file is used by both kernel and user space programs, so I added this following other xxx_drm.h files here.
Got it
[...]
> +#define LIMA_SUBMIT_FLAG_EXPLICIT_FENCE (1 << 0)
> +#define LIMA_SUBMIT_FLAG_SYNC_FD_OUT (1 << 1)
BIT(0) and BIT(1) if applicable
I can use BIT() in kernel-only files but not in this user/kernel shared one, because BIT() is defined in the kernel only; user space would need to define it if this file used it.
OK
On Fri, May 18, 2018 at 5:33 PM, Marek Vasut marex@denx.de wrote:
On 05/18/2018 11:27 AM, Qiang Yu wrote:
Commit message is missing
Signed-off-by: Qiang Yu yuq825@gmail.com
include/uapi/drm/lima_drm.h | 195 ++++++++++++++++++++++++++++++++++++ 1 file changed, 195 insertions(+) create mode 100644 include/uapi/drm/lima_drm.h
diff --git a/include/uapi/drm/lima_drm.h b/include/uapi/drm/lima_drm.h new file mode 100644 index 000000000000..9df95e46fb2c --- /dev/null +++ b/include/uapi/drm/lima_drm.h
Please convert this to the SPDX license identifiers, that is
// SPDX...
OK.
@@ -0,0 +1,195 @@ +/*
- Copyright (C) 2017-2018 Lima Project
- Permission is hereby granted, free of charge, to any person obtaining a
- copy of this software and associated documentation files (the "Software"),
- to deal in the Software without restriction, including without limitation
- the rights to use, copy, modify, merge, publish, distribute, sublicense,
- and/or sell copies of the Software, and to permit persons to whom the
- Software is furnished to do so, subject to the following conditions:
[...]
+#if defined(__cplusplus) +extern "C" { +#endif
Is this C++ stuff needed ?
This file is used by both kernel and user space programs, so I added this following other xxx_drm.h files here.
[...]
+#define LIMA_SUBMIT_FLAG_EXPLICIT_FENCE (1 << 0)
+#define LIMA_SUBMIT_FLAG_SYNC_FD_OUT (1 << 1)
BIT(0) and BIT(1) if applicable
I can use BIT() in kernel-only files, but not in this user/kernel shared one: BIT() is defined only in the kernel, so user space would have to define it itself if this file used it.
[...]
--
Best regards,
Marek Vasut
From: Lima Project Developers dri-devel@lists.freedesktop.org
Signed-off-by: Qiang Yu yuq825@gmail.com
Signed-off-by: Heiko Stuebner heiko@sntech.de
---
drivers/gpu/drm/lima/lima_regs.h | 304 +++++++++++++++++++++++++++++++
1 file changed, 304 insertions(+)
create mode 100644 drivers/gpu/drm/lima/lima_regs.h
diff --git a/drivers/gpu/drm/lima/lima_regs.h b/drivers/gpu/drm/lima/lima_regs.h new file mode 100644 index 000000000000..ea4a37d69b98 --- /dev/null +++ b/drivers/gpu/drm/lima/lima_regs.h @@ -0,0 +1,304 @@ +/* + * Copyright (C) 2010-2017 ARM Limited. All rights reserved. + * Copyright (C) 2017-2018 Lima Project + * + * This program is free software and is provided to you under + * the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation, and any use by + * you of this program is subject to the terms of such GNU + * licence. + * + * A copy of the licence is included with the program, and + * can also be obtained from Free Software Foundation, Inc., + * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. + */ + +#ifndef __LIMA_REGS_H__ +#define __LIMA_REGS_H__ + +/* PMU regs */ +#define LIMA_PMU_POWER_UP 0x00 +#define LIMA_PMU_POWER_DOWN 0x04 +#define LIMA_PMU_POWER_GP0_MASK (1 << 0) +#define LIMA_PMU_POWER_L2_MASK (1 << 1) +#define LIMA_PMU_POWER_PP_MASK(i) (1 << (2 + i)) + +/* + * On Mali450 each block automatically starts up its corresponding L2 + * and the PPs are not fully independent controllable. + * Instead PP0, PP1-3 and PP4-7 can be turned on or off. 
+ */ +#define LIMA450_PMU_POWER_PP0_MASK BIT(1) +#define LIMA450_PMU_POWER_PP13_MASK BIT(2) +#define LIMA450_PMU_POWER_PP47_MASK BIT(3) + +#define LIMA_PMU_STATUS 0x08 +#define LIMA_PMU_INT_MASK 0x0C +#define LIMA_PMU_INT_RAWSTAT 0x10 +#define LIMA_PMU_INT_CLEAR 0x18 +#define LIMA_PMU_INT_CMD_MASK (1 << 0) +#define LIMA_PMU_SW_DELAY 0x1C + +/* L2 cache regs */ +#define LIMA_L2_CACHE_SIZE 0x0004 +#define LIMA_L2_CACHE_STATUS 0x0008 +#define LIMA_L2_CACHE_STATUS_COMMAND_BUSY (1 << 0) +#define LIMA_L2_CACHE_STATUS_DATA_BUSY (1 << 1) +#define LIMA_L2_CACHE_COMMAND 0x0010 +#define LIMA_L2_CACHE_COMMAND_CLEAR_ALL (1 << 0) +#define LIMA_L2_CACHE_CLEAR_PAGE 0x0014 +#define LIMA_L2_CACHE_MAX_READS 0x0018 +#define LIMA_L2_CACHE_ENABLE 0x001C +#define LIMA_L2_CACHE_ENABLE_ACCESS (1 << 0) +#define LIMA_L2_CACHE_ENABLE_READ_ALLOCATE (1 << 1) +#define LIMA_L2_CACHE_PERFCNT_SRC0 0x0020 +#define LIMA_L2_CACHE_PERFCNT_VAL0 0x0024 +#define LIMA_L2_CACHE_PERFCNT_SRC1 0x0028 +#define LIMA_L2_CACHE_ERFCNT_VAL1 0x002C + +/* GP regs */ +#define LIMA_GP_VSCL_START_ADDR 0x00 +#define LIMA_GP_VSCL_END_ADDR 0x04 +#define LIMA_GP_PLBUCL_START_ADDR 0x08 +#define LIMA_GP_PLBUCL_END_ADDR 0x0c +#define LIMA_GP_PLBU_ALLOC_START_ADDR 0x10 +#define LIMA_GP_PLBU_ALLOC_END_ADDR 0x14 +#define LIMA_GP_CMD 0x20 +#define LIMA_GP_CMD_START_VS (1 << 0) +#define LIMA_GP_CMD_START_PLBU (1 << 1) +#define LIMA_GP_CMD_UPDATE_PLBU_ALLOC (1 << 4) +#define LIMA_GP_CMD_RESET (1 << 5) +#define LIMA_GP_CMD_FORCE_HANG (1 << 6) +#define LIMA_GP_CMD_STOP_BUS (1 << 9) +#define LIMA_GP_CMD_SOFT_RESET (1 << 10) +#define LIMA_GP_INT_RAWSTAT 0x24 +#define LIMA_GP_INT_CLEAR 0x28 +#define LIMA_GP_INT_MASK 0x2C +#define LIMA_GP_INT_STAT 0x30 +#define LIMA_GP_IRQ_VS_END_CMD_LST (1 << 0) +#define LIMA_GP_IRQ_PLBU_END_CMD_LST (1 << 1) +#define LIMA_GP_IRQ_PLBU_OUT_OF_MEM (1 << 2) +#define LIMA_GP_IRQ_VS_SEM_IRQ (1 << 3) +#define LIMA_GP_IRQ_PLBU_SEM_IRQ (1 << 4) +#define LIMA_GP_IRQ_HANG (1 << 5) +#define LIMA_GP_IRQ_FORCE_HANG (1 
<< 6) +#define LIMA_GP_IRQ_PERF_CNT_0_LIMIT (1 << 7) +#define LIMA_GP_IRQ_PERF_CNT_1_LIMIT (1 << 8) +#define LIMA_GP_IRQ_WRITE_BOUND_ERR (1 << 9) +#define LIMA_GP_IRQ_SYNC_ERROR (1 << 10) +#define LIMA_GP_IRQ_AXI_BUS_ERROR (1 << 11) +#define LIMA_GP_IRQ_AXI_BUS_STOPPED (1 << 12) +#define LIMA_GP_IRQ_VS_INVALID_CMD (1 << 13) +#define LIMA_GP_IRQ_PLB_INVALID_CMD (1 << 14) +#define LIMA_GP_IRQ_RESET_COMPLETED (1 << 19) +#define LIMA_GP_IRQ_SEMAPHORE_UNDERFLOW (1 << 20) +#define LIMA_GP_IRQ_SEMAPHORE_OVERFLOW (1 << 21) +#define LIMA_GP_IRQ_PTR_ARRAY_OUT_OF_BOUNDS (1 << 22) +#define LIMA_GP_WRITE_BOUND_LOW 0x34 +#define LIMA_GP_PERF_CNT_0_ENABLE 0x3C +#define LIMA_GP_PERF_CNT_1_ENABLE 0x40 +#define LIMA_GP_PERF_CNT_0_SRC 0x44 +#define LIMA_GP_PERF_CNT_1_SRC 0x48 +#define LIMA_GP_PERF_CNT_0_VALUE 0x4C +#define LIMA_GP_PERF_CNT_1_VALUE 0x50 +#define LIMA_GP_PERF_CNT_0_LIMIT 0x54 +#define LIMA_GP_STATUS 0x68 +#define LIMA_GP_STATUS_VS_ACTIVE (1 << 1) +#define LIMA_GP_STATUS_BUS_STOPPED (1 << 2) +#define LIMA_GP_STATUS_PLBU_ACTIVE (1 << 3) +#define LIMA_GP_STATUS_BUS_ERROR (1 << 6) +#define LIMA_GP_STATUS_WRITE_BOUND_ERR (1 << 8) +#define LIMA_GP_VERSION 0x6C +#define LIMA_GP_VSCL_START_ADDR_READ 0x80 +#define LIMA_GP_PLBCL_START_ADDR_READ 0x84 +#define LIMA_GP_CONTR_AXI_BUS_ERROR_STAT 0x94 + +#define LIMA_GP_IRQ_MASK_ALL \ + ( \ + LIMA_GP_IRQ_VS_END_CMD_LST | \ + LIMA_GP_IRQ_PLBU_END_CMD_LST | \ + LIMA_GP_IRQ_PLBU_OUT_OF_MEM | \ + LIMA_GP_IRQ_VS_SEM_IRQ | \ + LIMA_GP_IRQ_PLBU_SEM_IRQ | \ + LIMA_GP_IRQ_HANG | \ + LIMA_GP_IRQ_FORCE_HANG | \ + LIMA_GP_IRQ_PERF_CNT_0_LIMIT | \ + LIMA_GP_IRQ_PERF_CNT_1_LIMIT | \ + LIMA_GP_IRQ_WRITE_BOUND_ERR | \ + LIMA_GP_IRQ_SYNC_ERROR | \ + LIMA_GP_IRQ_AXI_BUS_ERROR | \ + LIMA_GP_IRQ_AXI_BUS_STOPPED | \ + LIMA_GP_IRQ_VS_INVALID_CMD | \ + LIMA_GP_IRQ_PLB_INVALID_CMD | \ + LIMA_GP_IRQ_RESET_COMPLETED | \ + LIMA_GP_IRQ_SEMAPHORE_UNDERFLOW | \ + LIMA_GP_IRQ_SEMAPHORE_OVERFLOW | \ + LIMA_GP_IRQ_PTR_ARRAY_OUT_OF_BOUNDS) + +#define 
LIMA_GP_IRQ_MASK_ERROR \ + ( \ + LIMA_GP_IRQ_PLBU_OUT_OF_MEM | \ + LIMA_GP_IRQ_FORCE_HANG | \ + LIMA_GP_IRQ_WRITE_BOUND_ERR | \ + LIMA_GP_IRQ_SYNC_ERROR | \ + LIMA_GP_IRQ_AXI_BUS_ERROR | \ + LIMA_GP_IRQ_VS_INVALID_CMD | \ + LIMA_GP_IRQ_PLB_INVALID_CMD | \ + LIMA_GP_IRQ_SEMAPHORE_UNDERFLOW | \ + LIMA_GP_IRQ_SEMAPHORE_OVERFLOW | \ + LIMA_GP_IRQ_PTR_ARRAY_OUT_OF_BOUNDS) + +#define LIMA_GP_IRQ_MASK_USED \ + ( \ + LIMA_GP_IRQ_VS_END_CMD_LST | \ + LIMA_GP_IRQ_PLBU_END_CMD_LST | \ + LIMA_GP_IRQ_MASK_ERROR) + +/* PP regs */ +#define LIMA_PP_FRAME 0x0000 +#define LIMA_PP_RSW 0x0004 +#define LIMA_PP_STACK 0x0030 +#define LIMA_PP_STACK_SIZE 0x0034 +#define LIMA_PP_ORIGIN_OFFSET_X 0x0040 +#define LIMA_PP_WB(i) (0x0100 * (i + 1)) +#define LIMA_PP_WB_SOURCE_SELECT 0x0000 +#define LIMA_PP_WB_SOURCE_ADDR 0x0004 + +#define LIMA_PP_VERSION 0x1000 +#define LIMA_PP_CURRENT_REND_LIST_ADDR 0x1004 +#define LIMA_PP_STATUS 0x1008 +#define LIMA_PP_STATUS_RENDERING_ACTIVE (1 << 0) +#define LIMA_PP_STATUS_BUS_STOPPED (1 << 4) +#define LIMA_PP_CTRL 0x100c +#define LIMA_PP_CTRL_STOP_BUS (1 << 0) +#define LIMA_PP_CTRL_FLUSH_CACHES (1 << 3) +#define LIMA_PP_CTRL_FORCE_RESET (1 << 5) +#define LIMA_PP_CTRL_START_RENDERING (1 << 6) +#define LIMA_PP_CTRL_SOFT_RESET (1 << 7) +#define LIMA_PP_INT_RAWSTAT 0x1020 +#define LIMA_PP_INT_CLEAR 0x1024 +#define LIMA_PP_INT_MASK 0x1028 +#define LIMA_PP_INT_STATUS 0x102c +#define LIMA_PP_IRQ_END_OF_FRAME (1 << 0) +#define LIMA_PP_IRQ_END_OF_TILE (1 << 1) +#define LIMA_PP_IRQ_HANG (1 << 2) +#define LIMA_PP_IRQ_FORCE_HANG (1 << 3) +#define LIMA_PP_IRQ_BUS_ERROR (1 << 4) +#define LIMA_PP_IRQ_BUS_STOP (1 << 5) +#define LIMA_PP_IRQ_CNT_0_LIMIT (1 << 6) +#define LIMA_PP_IRQ_CNT_1_LIMIT (1 << 7) +#define LIMA_PP_IRQ_WRITE_BOUNDARY_ERROR (1 << 8) +#define LIMA_PP_IRQ_INVALID_PLIST_COMMAND (1 << 9) +#define LIMA_PP_IRQ_CALL_STACK_UNDERFLOW (1 << 10) +#define LIMA_PP_IRQ_CALL_STACK_OVERFLOW (1 << 11) +#define LIMA_PP_IRQ_RESET_COMPLETED (1 << 12) +#define 
LIMA_PP_WRITE_BOUNDARY_LOW 0x1044 +#define LIMA_PP_BUS_ERROR_STATUS 0x1050 +#define LIMA_PP_PERF_CNT_0_ENABLE 0x1080 +#define LIMA_PP_PERF_CNT_0_SRC 0x1084 +#define LIMA_PP_PERF_CNT_0_LIMIT 0x1088 +#define LIMA_PP_PERF_CNT_0_VALUE 0x108c +#define LIMA_PP_PERF_CNT_1_ENABLE 0x10a0 +#define LIMA_PP_PERF_CNT_1_SRC 0x10a4 +#define LIMA_PP_PERF_CNT_1_LIMIT 0x10a8 +#define LIMA_PP_PERF_CNT_1_VALUE 0x10ac +#define LIMA_PP_PERFMON_CONTR 0x10b0 +#define LIMA_PP_PERFMON_BASE 0x10b4 + +#define LIMA_PP_IRQ_MASK_ALL \ + ( \ + LIMA_PP_IRQ_END_OF_FRAME | \ + LIMA_PP_IRQ_END_OF_TILE | \ + LIMA_PP_IRQ_HANG | \ + LIMA_PP_IRQ_FORCE_HANG | \ + LIMA_PP_IRQ_BUS_ERROR | \ + LIMA_PP_IRQ_BUS_STOP | \ + LIMA_PP_IRQ_CNT_0_LIMIT | \ + LIMA_PP_IRQ_CNT_1_LIMIT | \ + LIMA_PP_IRQ_WRITE_BOUNDARY_ERROR | \ + LIMA_PP_IRQ_INVALID_PLIST_COMMAND | \ + LIMA_PP_IRQ_CALL_STACK_UNDERFLOW | \ + LIMA_PP_IRQ_CALL_STACK_OVERFLOW | \ + LIMA_PP_IRQ_RESET_COMPLETED) + +#define LIMA_PP_IRQ_MASK_ERROR \ + ( \ + LIMA_PP_IRQ_FORCE_HANG | \ + LIMA_PP_IRQ_BUS_ERROR | \ + LIMA_PP_IRQ_WRITE_BOUNDARY_ERROR | \ + LIMA_PP_IRQ_INVALID_PLIST_COMMAND | \ + LIMA_PP_IRQ_CALL_STACK_UNDERFLOW | \ + LIMA_PP_IRQ_CALL_STACK_OVERFLOW) + +#define LIMA_PP_IRQ_MASK_USED \ + ( \ + LIMA_PP_IRQ_END_OF_FRAME | \ + LIMA_PP_IRQ_MASK_ERROR) + +/* MMU regs */ +#define LIMA_MMU_DTE_ADDR 0x0000 +#define LIMA_MMU_STATUS 0x0004 +#define LIMA_MMU_STATUS_PAGING_ENABLED (1 << 0) +#define LIMA_MMU_STATUS_PAGE_FAULT_ACTIVE (1 << 1) +#define LIMA_MMU_STATUS_STALL_ACTIVE (1 << 2) +#define LIMA_MMU_STATUS_IDLE (1 << 3) +#define LIMA_MMU_STATUS_REPLAY_BUFFER_EMPTY (1 << 4) +#define LIMA_MMU_STATUS_PAGE_FAULT_IS_WRITE (1 << 5) +#define LIMA_MMU_STATUS_BUS_ID(x) ((x >> 6) & 0x1F) +#define LIMA_MMU_COMMAND 0x0008 +#define LIMA_MMU_COMMAND_ENABLE_PAGING 0x00 +#define LIMA_MMU_COMMAND_DISABLE_PAGING 0x01 +#define LIMA_MMU_COMMAND_ENABLE_STALL 0x02 +#define LIMA_MMU_COMMAND_DISABLE_STALL 0x03 +#define LIMA_MMU_COMMAND_ZAP_CACHE 0x04 +#define 
LIMA_MMU_COMMAND_PAGE_FAULT_DONE 0x05 +#define LIMA_MMU_COMMAND_HARD_RESET 0x06 +#define LIMA_MMU_PAGE_FAULT_ADDR 0x000C +#define LIMA_MMU_ZAP_ONE_LINE 0x0010 +#define LIMA_MMU_INT_RAWSTAT 0x0014 +#define LIMA_MMU_INT_CLEAR 0x0018 +#define LIMA_MMU_INT_MASK 0x001C +#define LIMA_MMU_INT_PAGE_FAULT 0x01 +#define LIMA_MMU_INT_READ_BUS_ERROR 0x02 +#define LIMA_MMU_INT_STATUS 0x0020 + +#define LIMA_VM_FLAG_PRESENT (1 << 0) +#define LIMA_VM_FLAG_READ_PERMISSION (1 << 1) +#define LIMA_VM_FLAG_WRITE_PERMISSION (1 << 2) +#define LIMA_VM_FLAG_OVERRIDE_CACHE (1 << 3) +#define LIMA_VM_FLAG_WRITE_CACHEABLE (1 << 4) +#define LIMA_VM_FLAG_WRITE_ALLOCATE (1 << 5) +#define LIMA_VM_FLAG_WRITE_BUFFERABLE (1 << 6) +#define LIMA_VM_FLAG_READ_CACHEABLE (1 << 7) +#define LIMA_VM_FLAG_READ_ALLOCATE (1 << 8) +#define LIMA_VM_FLAG_MASK 0x1FF + +#define LIMA_VM_FLAGS_CACHE ( \ + LIMA_VM_FLAG_PRESENT | \ + LIMA_VM_FLAG_READ_PERMISSION | \ + LIMA_VM_FLAG_WRITE_PERMISSION | \ + LIMA_VM_FLAG_OVERRIDE_CACHE | \ + LIMA_VM_FLAG_WRITE_CACHEABLE | \ + LIMA_VM_FLAG_WRITE_BUFFERABLE | \ + LIMA_VM_FLAG_READ_CACHEABLE | \ + LIMA_VM_FLAG_READ_ALLOCATE ) + +#define LIMA_VM_FLAGS_UNCACHE ( \ + LIMA_VM_FLAG_PRESENT | \ + LIMA_VM_FLAG_READ_PERMISSION | \ + LIMA_VM_FLAG_WRITE_PERMISSION ) + +/* DLBU regs */ +#define LIMA_DLBU_MASTER_TLLIST_PHYS_ADDR 0x0000 +#define LIMA_DLBU_MASTER_TLLIST_VADDR 0x0004 +#define LIMA_DLBU_TLLIST_VBASEADDR 0x0008 +#define LIMA_DLBU_FB_DIM 0x000C +#define LIMA_DLBU_TLLIST_CONF 0x0010 +#define LIMA_DLBU_START_TILE_POS 0x0014 +#define LIMA_DLBU_PP_ENABLE_MASK 0x0018 + +/* BCAST regs */ +#define LIMA_BCAST_BROADCAST_MASK 0x0 +#define LIMA_BCAST_INTERRUPT_MASK 0x4 + +#endif
On Fri, May 18, 2018 at 05:27:58PM +0800, Qiang Yu wrote:
From: Lima Project Developers dri-devel@lists.freedesktop.org
Signed-off-by: Qiang Yu yuq825@gmail.com
Signed-off-by: Heiko Stuebner heiko@sntech.de
drivers/gpu/drm/lima/lima_regs.h | 304 +++++++++++++++++++++++++++++++
1 file changed, 304 insertions(+)
create mode 100644 drivers/gpu/drm/lima/lima_regs.h
diff --git a/drivers/gpu/drm/lima/lima_regs.h b/drivers/gpu/drm/lima/lima_regs.h
new file mode 100644
index 000000000000..ea4a37d69b98
--- /dev/null
+++ b/drivers/gpu/drm/lima/lima_regs.h
@@ -0,0 +1,304 @@
+/*
- Copyright (C) 2010-2017 ARM Limited. All rights reserved.
I assume this came from ARM's out of tree kernel driver source. You should document what it was based on.
- Copyright (C) 2017-2018 Lima Project
IANAL, but is Lima Project a legal entity that can copyright things?
- This program is free software and is provided to you under
- the terms of the GNU General Public License version 2 as
- published by the Free Software Foundation, and any use by
- you of this program is subject to the terms of such GNU
- licence.
- A copy of the licence is included with the program, and
- can also be obtained from Free Software Foundation, Inc.,
- 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
You can use SPDX tags instead.
Rob
On Wed, May 23, 2018 at 10:24 AM, Rob Herring robh@kernel.org wrote:
On Fri, May 18, 2018 at 05:27:58PM +0800, Qiang Yu wrote:
From: Lima Project Developers dri-devel@lists.freedesktop.org
Signed-off-by: Qiang Yu yuq825@gmail.com
Signed-off-by: Heiko Stuebner heiko@sntech.de
drivers/gpu/drm/lima/lima_regs.h | 304 +++++++++++++++++++++++++++++++
1 file changed, 304 insertions(+)
create mode 100644 drivers/gpu/drm/lima/lima_regs.h
diff --git a/drivers/gpu/drm/lima/lima_regs.h b/drivers/gpu/drm/lima/lima_regs.h
new file mode 100644
index 000000000000..ea4a37d69b98
--- /dev/null
+++ b/drivers/gpu/drm/lima/lima_regs.h
@@ -0,0 +1,304 @@
+/*
- Copyright (C) 2010-2017 ARM Limited. All rights reserved.
I assume this came from ARM's out of tree kernel driver source. You should document what it was based on.
- Copyright (C) 2017-2018 Lima Project
IANAL, but is Lima Project a legal entity that can copyright things?
AFAIR it's not a legal entity, and I believe Qiang can simply replace "Lima Project" with his name in the kernel driver. I don't think anyone else has made a contribution to the kernel driver significant enough to hold copyright on it.
Luc Verhaegen may hold copyright for reverse-engineered data structures if there are any in the kernel driver, but I believe only the userspace parts took something from limare.
- This program is free software and is provided to you under
- the terms of the GNU General Public License version 2 as
- published by the Free Software Foundation, and any use by
- you of this program is subject to the terms of such GNU
- licence.
- A copy of the licence is included with the program, and
- can also be obtained from Free Software Foundation, Inc.,
- 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
You can use SPDX tags instead.
Rob
On Thu, May 24, 2018 at 1:24 AM, Rob Herring robh@kernel.org wrote:
On Fri, May 18, 2018 at 05:27:58PM +0800, Qiang Yu wrote:
From: Lima Project Developers dri-devel@lists.freedesktop.org
Signed-off-by: Qiang Yu yuq825@gmail.com
Signed-off-by: Heiko Stuebner heiko@sntech.de
drivers/gpu/drm/lima/lima_regs.h | 304 +++++++++++++++++++++++++++++++
1 file changed, 304 insertions(+)
create mode 100644 drivers/gpu/drm/lima/lima_regs.h
diff --git a/drivers/gpu/drm/lima/lima_regs.h b/drivers/gpu/drm/lima/lima_regs.h
new file mode 100644
index 000000000000..ea4a37d69b98
--- /dev/null
+++ b/drivers/gpu/drm/lima/lima_regs.h
@@ -0,0 +1,304 @@
+/*
- Copyright (C) 2010-2017 ARM Limited. All rights reserved.
I assume this came from ARM's out of tree kernel driver source. You should document what it was based on.
Yes, I'll add a comment about it.
- Copyright (C) 2017-2018 Lima Project
IANAL, but is Lima Project a legal entity that can copyright things?
This is the second time I've heard this, so it seems it's not a good idea to write it this way. I'll change the copyright next time.
- This program is free software and is provided to you under
- the terms of the GNU General Public License version 2 as
- published by the Free Software Foundation, and any use by
- you of this program is subject to the terms of such GNU
- licence.
- A copy of the licence is included with the program, and
- can also be obtained from Free Software Foundation, Inc.,
- 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
You can use SPDX tags instead.
If I use SPDX, can I drop this license text? And what about the copyright text, since the SPDX header doesn't include it?
Regards, Qiang
Rob
On Wed, May 23, 2018 at 7:58 PM, Qiang Yu yuq825@gmail.com wrote:
On Thu, May 24, 2018 at 1:24 AM, Rob Herring robh@kernel.org wrote:
On Fri, May 18, 2018 at 05:27:58PM +0800, Qiang Yu wrote:
From: Lima Project Developers dri-devel@lists.freedesktop.org
Signed-off-by: Qiang Yu yuq825@gmail.com
Signed-off-by: Heiko Stuebner heiko@sntech.de
drivers/gpu/drm/lima/lima_regs.h | 304 +++++++++++++++++++++++++++++++
1 file changed, 304 insertions(+)
create mode 100644 drivers/gpu/drm/lima/lima_regs.h
diff --git a/drivers/gpu/drm/lima/lima_regs.h b/drivers/gpu/drm/lima/lima_regs.h
new file mode 100644
index 000000000000..ea4a37d69b98
--- /dev/null
+++ b/drivers/gpu/drm/lima/lima_regs.h
@@ -0,0 +1,304 @@
+/*
- Copyright (C) 2010-2017 ARM Limited. All rights reserved.
I assume this came from ARM's out of tree kernel driver source. You should document what it was based on.
Yes, I'll add a comment about it.
- Copyright (C) 2017-2018 Lima Project
IANAL, but is Lima Project a legal entity that can copyright things?
This is the second time I've heard this, so it seems it's not a good idea to write it this way. I'll change the copyright next time.
- This program is free software and is provided to you under
- the terms of the GNU General Public License version 2 as
- published by the Free Software Foundation, and any use by
- you of this program is subject to the terms of such GNU
- licence.
- A copy of the licence is included with the program, and
- can also be obtained from Free Software Foundation, Inc.,
- 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
You can use SPDX tags instead.
If I use SPDX, can I drop this license text? And what about the copyright text, since the SPDX header doesn't include it?
License and copyright are two independent things. Yes, you can drop the license text, but keep the copyrights.
Rob
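A sketch of how the top of lima_regs.h could look after the conversion Rob describes: the license paragraph collapses to a single SPDX tag while the copyright lines (and a note on the ARM origin) remain. The license identifier and copyright holder shown are illustrative assumptions, not a statement of the actual relicensing:

```c
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Copyright (C) 2010-2017 ARM Limited. All rights reserved.
 * Copyright (C) 2017-2018 Qiang Yu <yuq825@gmail.com>
 *
 * Register definitions based on ARM's out-of-tree Mali kernel driver.
 */

#ifndef __LIMA_REGS_H__
#define __LIMA_REGS_H__

/* ... register definitions unchanged ... */

#endif
```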
From: Lima Project Developers dri-devel@lists.freedesktop.org
Signed-off-by: Qiang Yu yuq825@gmail.com
Signed-off-by: Heiko Stuebner heiko@sntech.de
Signed-off-by: Erico Nunes nunes.erico@gmail.com
---
drivers/gpu/drm/lima/lima_drv.c | 466 ++++++++++++++++++++++++++++++++
drivers/gpu/drm/lima/lima_drv.h | 77 ++++++
2 files changed, 543 insertions(+)
create mode 100644 drivers/gpu/drm/lima/lima_drv.c
create mode 100644 drivers/gpu/drm/lima/lima_drv.h
diff --git a/drivers/gpu/drm/lima/lima_drv.c b/drivers/gpu/drm/lima/lima_drv.c new file mode 100644 index 000000000000..4df24a6cfff3 --- /dev/null +++ b/drivers/gpu/drm/lima/lima_drv.c @@ -0,0 +1,466 @@ +/* + * Copyright (C) 2017-2018 Lima Project + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ + +#include <linux/module.h> +#include <linux/of_platform.h> +#include <linux/log2.h> +#include <drm/drm_prime.h> +#include <drm/lima_drm.h> + +#include "lima_drv.h" +#include "lima_gem.h" +#include "lima_gem_prime.h" +#include "lima_vm.h" + +int lima_sched_timeout_ms = 0; +int lima_sched_max_tasks = 32; +int lima_max_mem = -1; + +MODULE_PARM_DESC(sched_timeout_ms, "task run timeout in ms (0 = no timeout (default))"); +module_param_named(sched_timeout_ms, lima_sched_timeout_ms, int, 0444); + +MODULE_PARM_DESC(sched_max_tasks, "max queued task num in a context (default 32)"); +module_param_named(sched_max_tasks, lima_sched_max_tasks, int, 0444); + +MODULE_PARM_DESC(max_mem, "Max memory size in MB can be used (<0 = auto)"); +module_param_named(max_mem, lima_max_mem, int, 0444); + +static int lima_ioctl_info(struct drm_device *dev, void *data, struct drm_file *file) +{ + struct drm_lima_info *info = data; + struct lima_device *ldev = to_lima_dev(dev); + + switch (ldev->id) { + case lima_gpu_mali400: + info->gpu_id = LIMA_INFO_GPU_MALI400; + break; + case lima_gpu_mali450: + info->gpu_id = LIMA_INFO_GPU_MALI450; + break; + default: + return -ENODEV; + } + info->num_pp = ldev->pipe[lima_pipe_pp].num_processor; + info->va_start = ldev->va_start; + info->va_end = ldev->va_end; + return 0; +} + +static int lima_ioctl_gem_create(struct drm_device *dev, void *data, struct drm_file *file) +{ + struct drm_lima_gem_create *args = data; + + if (args->flags) + return -EINVAL; + + if (args->size == 0) + return -EINVAL; + + return lima_gem_create_handle(dev, file, args->size, args->flags, &args->handle); +} + +static int lima_ioctl_gem_info(struct drm_device *dev, void *data, struct drm_file *file) +{ + struct drm_lima_gem_info *args = data; + + return lima_gem_mmap_offset(file, args->handle, &args->offset); +} + +static int lima_ioctl_gem_va(struct drm_device *dev, void *data, struct drm_file *file) +{ + struct drm_lima_gem_va *args = data; + + switch (args->op) { + case 
LIMA_VA_OP_MAP: + return lima_gem_va_map(file, args->handle, args->flags, args->va); + case LIMA_VA_OP_UNMAP: + return lima_gem_va_unmap(file, args->handle, args->va); + default: + return -EINVAL; + } +} + +static int lima_ioctl_gem_submit(struct drm_device *dev, void *data, struct drm_file *file) +{ + struct drm_lima_gem_submit_in *args = data; + struct lima_device *ldev = to_lima_dev(dev); + struct lima_drm_priv *priv = file->driver_priv; + struct drm_lima_gem_submit_bo *bos; + struct ttm_validate_buffer *vbs; + union drm_lima_gem_submit_dep *deps = NULL; + struct lima_sched_pipe *pipe; + struct lima_sched_task *task; + struct lima_ctx *ctx; + struct lima_submit submit = {0}; + int err = 0, size; + + if (args->pipe >= lima_pipe_num || args->nr_bos == 0) + return -EINVAL; + + if (args->flags & ~(LIMA_SUBMIT_FLAG_EXPLICIT_FENCE | + LIMA_SUBMIT_FLAG_SYNC_FD_OUT)) + return -EINVAL; + + pipe = ldev->pipe + args->pipe; + if (args->frame_size != pipe->frame_size) + return -EINVAL; + + size = args->nr_bos * (sizeof(*submit.bos) + sizeof(*submit.vbs)) + + args->nr_deps * sizeof(*submit.deps); + bos = kzalloc(size, GFP_KERNEL); + if (!bos) + return -ENOMEM; + + size = args->nr_bos * sizeof(*submit.bos); + if (copy_from_user(bos, u64_to_user_ptr(args->bos), size)) { + err = -EFAULT; + goto out0; + } + + vbs = (void *)bos + size; + + if (args->nr_deps) { + deps = (void *)vbs + args->nr_bos * sizeof(*submit.vbs); + size = args->nr_deps * sizeof(*submit.deps); + if (copy_from_user(deps, u64_to_user_ptr(args->deps), size)) { + err = -EFAULT; + goto out0; + } + } + + task = kmem_cache_zalloc(pipe->task_slab, GFP_KERNEL); + if (!task) { + err = -ENOMEM; + goto out0; + } + + task->frame = task + 1; + if (copy_from_user(task->frame, u64_to_user_ptr(args->frame), args->frame_size)) { + err = -EFAULT; + goto out1; + } + + err = pipe->task_validate(pipe, task); + if (err) + goto out1; + + ctx = lima_ctx_get(&priv->ctx_mgr, args->ctx); + if (!ctx) { + err = -ENOENT; + goto out1; + } + 
+ submit.pipe = args->pipe; + submit.bos = bos; + submit.vbs = vbs; + submit.nr_bos = args->nr_bos; + submit.task = task; + submit.ctx = ctx; + submit.deps = deps; + submit.nr_deps = args->nr_deps; + submit.flags = args->flags; + + err = lima_gem_submit(file, &submit); + if (!err) { + struct drm_lima_gem_submit_out *out = data; + out->fence = submit.fence; + out->done = submit.done; + out->sync_fd = submit.sync_fd; + } + + lima_ctx_put(ctx); +out1: + if (err) + kmem_cache_free(pipe->task_slab, task); +out0: + kfree(bos); + return err; +} + +static int lima_wait_fence(struct dma_fence *fence, u64 timeout_ns) +{ + signed long ret; + + if (!timeout_ns) + ret = dma_fence_is_signaled(fence) ? 0 : -EBUSY; + else { + unsigned long timeout = lima_timeout_to_jiffies(timeout_ns); + + /* must use long for result check because in 64bit arch int + * will overflow if timeout is too large and get <0 result + */ + ret = dma_fence_wait_timeout(fence, true, timeout); + if (ret == 0) + ret = timeout ? -ETIMEDOUT : -EBUSY; + else if (ret > 0) + ret = 0; + } + + return ret; +} + +static int lima_ioctl_wait_fence(struct drm_device *dev, void *data, struct drm_file *file) +{ + struct drm_lima_wait_fence *args = data; + struct lima_drm_priv *priv = file->driver_priv; + struct dma_fence *fence; + int err = 0; + + fence = lima_ctx_get_native_fence(&priv->ctx_mgr, args->ctx, + args->pipe, args->seq); + if (IS_ERR(fence)) + return PTR_ERR(fence); + + if (fence) { + err = lima_wait_fence(fence, args->timeout_ns); + dma_fence_put(fence); + } + + return err; +} + +static int lima_ioctl_gem_wait(struct drm_device *dev, void *data, struct drm_file *file) +{ + struct drm_lima_gem_wait *args = data; + + if (!(args->op & (LIMA_GEM_WAIT_READ|LIMA_GEM_WAIT_WRITE))) + return -EINVAL; + + return lima_gem_wait(file, args->handle, args->op, args->timeout_ns); +} + +static int lima_ioctl_ctx(struct drm_device *dev, void *data, struct drm_file *file) +{ + struct drm_lima_ctx *args = data; + struct 
lima_drm_priv *priv = file->driver_priv; + struct lima_device *ldev = to_lima_dev(dev); + + if (args->op == LIMA_CTX_OP_CREATE) + return lima_ctx_create(ldev, &priv->ctx_mgr, &args->id); + else if (args->op == LIMA_CTX_OP_FREE) + return lima_ctx_free(&priv->ctx_mgr, args->id); + + return -EINVAL; +} + +static int lima_drm_driver_open(struct drm_device *dev, struct drm_file *file) +{ + int err; + struct lima_drm_priv *priv; + struct lima_device *ldev = to_lima_dev(dev); + + priv = kzalloc(sizeof(*priv), GFP_KERNEL); + if (!priv) + return -ENOMEM; + + priv->vm = lima_vm_create(ldev); + if (!priv->vm) { + err = -ENOMEM; + goto err_out0; + } + + lima_ctx_mgr_init(&priv->ctx_mgr); + + file->driver_priv = priv; + return 0; + +err_out0: + kfree(priv); + return err; +} + +static void lima_drm_driver_preclose(struct drm_device *dev, struct drm_file *file) +{ + struct lima_drm_priv *priv = file->driver_priv; + + lima_ctx_mgr_fini(&priv->ctx_mgr); +} + +static void lima_drm_driver_postclose(struct drm_device *dev, struct drm_file *file) +{ + struct lima_drm_priv *priv = file->driver_priv; + + lima_vm_put(priv->vm); + kfree(priv); +} + +static const struct drm_ioctl_desc lima_drm_driver_ioctls[] = { + DRM_IOCTL_DEF_DRV(LIMA_INFO, lima_ioctl_info, DRM_AUTH|DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(LIMA_GEM_CREATE, lima_ioctl_gem_create, DRM_AUTH|DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(LIMA_GEM_INFO, lima_ioctl_gem_info, DRM_AUTH|DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(LIMA_GEM_VA, lima_ioctl_gem_va, DRM_AUTH|DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(LIMA_GEM_SUBMIT, lima_ioctl_gem_submit, DRM_AUTH|DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(LIMA_WAIT_FENCE, lima_ioctl_wait_fence, DRM_AUTH|DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(LIMA_GEM_WAIT, lima_ioctl_gem_wait, DRM_AUTH|DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(LIMA_CTX, lima_ioctl_ctx, DRM_AUTH|DRM_RENDER_ALLOW), +}; + +static const struct file_operations lima_drm_driver_fops = { + .owner = THIS_MODULE, + .open = drm_open, + .release = 
drm_release, + .unlocked_ioctl = drm_ioctl, +#ifdef CONFIG_COMPAT + .compat_ioctl = drm_compat_ioctl, +#endif + .mmap = lima_gem_mmap, +}; + +static struct drm_driver lima_drm_driver = { + .driver_features = DRIVER_RENDER | DRIVER_GEM | DRIVER_PRIME, + .open = lima_drm_driver_open, + .preclose = lima_drm_driver_preclose, + .postclose = lima_drm_driver_postclose, + .ioctls = lima_drm_driver_ioctls, + .num_ioctls = ARRAY_SIZE(lima_drm_driver_ioctls), + .fops = &lima_drm_driver_fops, + .gem_free_object_unlocked = lima_gem_free_object, + .gem_open_object = lima_gem_object_open, + .gem_close_object = lima_gem_object_close, + .name = "lima", + .desc = "lima DRM", + .date = "20170325", + .major = 1, + .minor = 0, + .patchlevel = 0, + + .prime_fd_to_handle = drm_gem_prime_fd_to_handle, + .gem_prime_import = drm_gem_prime_import, + .gem_prime_import_sg_table = lima_gem_prime_import_sg_table, + .prime_handle_to_fd = drm_gem_prime_handle_to_fd, + .gem_prime_export = drm_gem_prime_export, + .gem_prime_res_obj = lima_gem_prime_res_obj, + .gem_prime_get_sg_table = lima_gem_prime_get_sg_table, +}; + +static int lima_pdev_probe(struct platform_device *pdev) +{ + struct lima_device *ldev; + struct drm_device *ddev; + int err; + + ldev = devm_kzalloc(&pdev->dev, sizeof(*ldev), GFP_KERNEL); + if (!ldev) + return -ENOMEM; + + ldev->pdev = pdev; + ldev->dev = &pdev->dev; + ldev->id = (enum lima_gpu_id)of_device_get_match_data(&pdev->dev); + + platform_set_drvdata(pdev, ldev); + + /* Allocate and initialize the DRM device. */ + ddev = drm_dev_alloc(&lima_drm_driver, &pdev->dev); + if (IS_ERR(ddev)) + return PTR_ERR(ddev); + + ddev->dev_private = ldev; + ldev->ddev = ddev; + + err = lima_device_init(ldev); + if (err) { + dev_err(&pdev->dev, "Fatal error during GPU init\n"); + goto err_out0; + } + + /* + * Register the DRM device with the core and the connectors with + * sysfs. 
+ */ + err = drm_dev_register(ddev, 0); + if (err < 0) + goto err_out1; + + return 0; + +err_out1: + lima_device_fini(ldev); +err_out0: + drm_dev_unref(ddev); + return err; +} + +static int lima_pdev_remove(struct platform_device *pdev) +{ + struct lima_device *ldev = platform_get_drvdata(pdev); + struct drm_device *ddev = ldev->ddev; + + drm_dev_unregister(ddev); + lima_device_fini(ldev); + drm_dev_unref(ddev); + return 0; +} + +static const struct of_device_id dt_match[] = { + { .compatible = "arm,mali-400", .data = (void *)lima_gpu_mali400 }, + { .compatible = "arm,mali-450", .data = (void *)lima_gpu_mali450 }, + {} +}; +MODULE_DEVICE_TABLE(of, dt_match); + +static struct platform_driver lima_platform_driver = { + .probe = lima_pdev_probe, + .remove = lima_pdev_remove, + .driver = { + .name = "lima", + .of_match_table = dt_match, + }, +}; + +static void lima_check_module_param(void) +{ + if (lima_sched_max_tasks < 4) + lima_sched_max_tasks = 4; + else + lima_sched_max_tasks = roundup_pow_of_two(lima_sched_max_tasks); + + if (lima_max_mem < 32) + lima_max_mem = -1; +} + +static int __init lima_init(void) +{ + int ret; + + lima_check_module_param(); + ret = lima_sched_slab_init(); + if (ret) + return ret; + + ret = platform_driver_register(&lima_platform_driver); + if (ret) + lima_sched_slab_fini(); + + return ret; +} +module_init(lima_init); + +static void __exit lima_exit(void) +{ + platform_driver_unregister(&lima_platform_driver); + lima_sched_slab_fini(); +} +module_exit(lima_exit); + +MODULE_AUTHOR("Lima Project Developers"); +MODULE_DESCRIPTION("Lima DRM Driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/gpu/drm/lima/lima_drv.h b/drivers/gpu/drm/lima/lima_drv.h new file mode 100644 index 000000000000..2f5f51da21db --- /dev/null +++ b/drivers/gpu/drm/lima/lima_drv.h @@ -0,0 +1,77 @@ +/* + * Copyright (C) 2017-2018 Lima Project + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated 
documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + */ +#ifndef __LIMA_DRV_H__ +#define __LIMA_DRV_H__ + +#include <drm/drmP.h> +#include <drm/ttm/ttm_execbuf_util.h> + +#include "lima_ctx.h" + +extern int lima_sched_timeout_ms; +extern int lima_sched_max_tasks; +extern int lima_max_mem; + +struct lima_vm; +struct lima_bo; +struct lima_sched_task; + +struct drm_lima_gem_submit_bo; + +#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT) + +struct lima_drm_priv { + struct lima_vm *vm; + struct lima_ctx_mgr ctx_mgr; +}; + +struct lima_submit { + struct lima_ctx *ctx; + int pipe; + u32 flags; + + struct drm_lima_gem_submit_bo *bos; + struct ttm_validate_buffer *vbs; + u32 nr_bos; + + struct ttm_validate_buffer vm_pd_vb; + struct ww_acquire_ctx ticket; + struct list_head duplicates; + struct list_head validated; + + union drm_lima_gem_submit_dep *deps; + u32 nr_deps; + + struct lima_sched_task *task; + + uint32_t fence; + uint32_t done; + int sync_fd; +}; + +static inline struct lima_drm_priv * +to_lima_drm_priv(struct drm_file *file) +{ + return 
file->driver_priv; +} + +#endif
From: Lima Project Developers <dri-devel@lists.freedesktop.org>

Signed-off-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Simon Shields <simon@lineageos.org>
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
---
 drivers/gpu/drm/lima/lima_device.c | 407 +++++++++++++++++++++++++++++
 drivers/gpu/drm/lima/lima_device.h | 136 ++++++++++
 2 files changed, 543 insertions(+)
 create mode 100644 drivers/gpu/drm/lima/lima_device.c
 create mode 100644 drivers/gpu/drm/lima/lima_device.h
diff --git a/drivers/gpu/drm/lima/lima_device.c b/drivers/gpu/drm/lima/lima_device.c new file mode 100644 index 000000000000..a6c3905a0c85 --- /dev/null +++ b/drivers/gpu/drm/lima/lima_device.c @@ -0,0 +1,407 @@ +/* + * Copyright (C) 2017-2018 Lima Project + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ +#include <linux/regulator/consumer.h> +#include <linux/reset.h> +#include <linux/clk.h> +#include <linux/dma-mapping.h> +#include <linux/platform_device.h> + +#include "lima_device.h" +#include "lima_gp.h" +#include "lima_pp.h" +#include "lima_mmu.h" +#include "lima_pmu.h" +#include "lima_l2_cache.h" +#include "lima_dlbu.h" +#include "lima_bcast.h" +#include "lima_vm.h" + +struct lima_ip_desc { + char *name; + char *irq_name; + bool must_have[lima_gpu_num]; + int offset[lima_gpu_num]; + + int (*init)(struct lima_ip *); + void (*fini)(struct lima_ip *); +}; + +#define LIMA_IP_DESC(ipname, mst0, mst1, off0, off1, func, irq) \ + [lima_ip_##ipname] = { \ + .name = #ipname, \ + .irq_name = irq, \ + .must_have = { \ + [lima_gpu_mali400] = mst0, \ + [lima_gpu_mali450] = mst1, \ + }, \ + .offset = { \ + [lima_gpu_mali400] = off0, \ + [lima_gpu_mali450] = off1, \ + }, \ + .init = lima_##func##_init, \ + .fini = lima_##func##_fini, \ + } + +static struct lima_ip_desc lima_ip_desc[lima_ip_num] = { + LIMA_IP_DESC(pmu, false, false, 0x02000, 0x02000, pmu, "pmu"), + LIMA_IP_DESC(l2_cache0, true, true, 0x01000, 0x10000, l2_cache, NULL), + LIMA_IP_DESC(l2_cache1, false, true, -1, 0x01000, l2_cache, NULL), + LIMA_IP_DESC(l2_cache2, false, false, -1, 0x11000, l2_cache, NULL), + LIMA_IP_DESC(gp, true, true, 0x00000, 0x00000, gp, "gp"), + LIMA_IP_DESC(pp0, true, true, 0x08000, 0x08000, pp, "pp0"), + LIMA_IP_DESC(pp1, false, false, 0x0A000, 0x0A000, pp, "pp1"), + LIMA_IP_DESC(pp2, false, false, 0x0C000, 0x0C000, pp, "pp2"), + LIMA_IP_DESC(pp3, false, false, 0x0E000, 0x0E000, pp, "pp3"), + LIMA_IP_DESC(pp4, false, false, -1, 0x28000, pp, "pp4"), + LIMA_IP_DESC(pp5, false, false, -1, 0x2A000, pp, "pp5"), + LIMA_IP_DESC(pp6, false, false, -1, 0x2C000, pp, "pp6"), + LIMA_IP_DESC(pp7, false, false, -1, 0x2E000, pp, "pp7"), + LIMA_IP_DESC(gpmmu, true, true, 0x03000, 0x03000, mmu, "gpmmu"), + LIMA_IP_DESC(ppmmu0, true, true, 0x04000, 0x04000, mmu, "ppmmu0"), + LIMA_IP_DESC(ppmmu1, 
false, false, 0x05000, 0x05000, mmu, "ppmmu1"), + LIMA_IP_DESC(ppmmu2, false, false, 0x06000, 0x06000, mmu, "ppmmu2"), + LIMA_IP_DESC(ppmmu3, false, false, 0x07000, 0x07000, mmu, "ppmmu3"), + LIMA_IP_DESC(ppmmu4, false, false, -1, 0x1C000, mmu, "ppmmu4"), + LIMA_IP_DESC(ppmmu5, false, false, -1, 0x1D000, mmu, "ppmmu5"), + LIMA_IP_DESC(ppmmu6, false, false, -1, 0x1E000, mmu, "ppmmu6"), + LIMA_IP_DESC(ppmmu7, false, false, -1, 0x1F000, mmu, "ppmmu7"), + LIMA_IP_DESC(dlbu, false, true, -1, 0x14000, dlbu, NULL), + LIMA_IP_DESC(bcast, false, true, -1, 0x13000, bcast, NULL), + LIMA_IP_DESC(pp_bcast, false, true, -1, 0x16000, pp_bcast, "pp"), + LIMA_IP_DESC(ppmmu_bcast, false, true, -1, 0x15000, mmu, NULL), +}; + +const char *lima_ip_name(struct lima_ip *ip) +{ + return lima_ip_desc[ip->id].name; +} + +static int lima_clk_init(struct lima_device *dev) +{ + int err; + unsigned long bus_rate, gpu_rate; + + dev->clk_bus = devm_clk_get(dev->dev, "bus"); + if (IS_ERR(dev->clk_bus)) { + dev_err(dev->dev, "get bus clk failed %ld\n", PTR_ERR(dev->clk_bus)); + return PTR_ERR(dev->clk_bus); + } + + dev->clk_gpu = devm_clk_get(dev->dev, "core"); + if (IS_ERR(dev->clk_gpu)) { + dev_err(dev->dev, "get core clk failed %ld\n", PTR_ERR(dev->clk_gpu)); + return PTR_ERR(dev->clk_gpu); + } + + bus_rate = clk_get_rate(dev->clk_bus); + dev_info(dev->dev, "bus rate = %lu\n", bus_rate); + + gpu_rate = clk_get_rate(dev->clk_gpu); + dev_info(dev->dev, "mod rate = %lu", gpu_rate); + + if ((err = clk_prepare_enable(dev->clk_bus))) + return err; + if ((err = clk_prepare_enable(dev->clk_gpu))) + goto error_out0; + + dev->reset = devm_reset_control_get_optional(dev->dev, NULL); + if (IS_ERR(dev->reset)) { + err = PTR_ERR(dev->reset); + goto error_out1; + } else if (dev->reset != NULL) { + if ((err = reset_control_deassert(dev->reset))) + goto error_out1; + } + + return 0; + +error_out1: + clk_disable_unprepare(dev->clk_gpu); +error_out0: + clk_disable_unprepare(dev->clk_bus); + return err; +} + 
+static void lima_clk_fini(struct lima_device *dev)
+{
+	if (dev->reset != NULL)
+		reset_control_assert(dev->reset);
+	clk_disable_unprepare(dev->clk_gpu);
+	clk_disable_unprepare(dev->clk_bus);
+}
+
+static int lima_regulator_init(struct lima_device *dev)
+{
+	int ret;
+
+	dev->regulator = devm_regulator_get_optional(dev->dev, "mali");
+	if (IS_ERR(dev->regulator)) {
+		ret = PTR_ERR(dev->regulator);
+		dev->regulator = NULL;
+		if (ret == -ENODEV)
+			return 0;
+		dev_err(dev->dev, "failed to get regulator: %d\n", ret);
+		return ret;
+	}
+
+	ret = regulator_enable(dev->regulator);
+	if (ret < 0) {
+		dev_err(dev->dev, "failed to enable regulator: %d\n", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void lima_regulator_fini(struct lima_device *dev)
+{
+	if (dev->regulator)
+		regulator_disable(dev->regulator);
+}
+
+static int lima_init_ip(struct lima_device *dev, int index)
+{
+	struct lima_ip_desc *desc = lima_ip_desc + index;
+	struct lima_ip *ip = dev->ip + index;
+	int offset = desc->offset[dev->id];
+	bool must = desc->must_have[dev->id];
+	int err;
+
+	if (offset < 0)
+		return 0;
+
+	ip->dev = dev;
+	ip->id = index;
+	ip->iomem = dev->iomem + offset;
+	if (desc->irq_name) {
+		err = platform_get_irq_byname(dev->pdev, desc->irq_name);
+		if (err < 0)
+			goto out;
+		ip->irq = err;
+	}
+
+	err = desc->init(ip);
+	if (!err) {
+		ip->present = true;
+		return 0;
+	}
+
+out:
+	return must ?
err : 0; +} + +static void lima_fini_ip(struct lima_device *ldev, int index) +{ + struct lima_ip_desc *desc = lima_ip_desc + index; + struct lima_ip *ip = ldev->ip + index; + + if (ip->present) + desc->fini(ip); +} + +static int lima_init_gp_pipe(struct lima_device *dev) +{ + struct lima_sched_pipe *pipe = dev->pipe + lima_pipe_gp; + int err; + + if ((err = lima_sched_pipe_init(pipe, "gp"))) + return err; + + pipe->l2_cache[pipe->num_l2_cache++] = dev->ip + lima_ip_l2_cache0; + pipe->mmu[pipe->num_mmu++] = dev->ip + lima_ip_gpmmu; + pipe->processor[pipe->num_processor++] = dev->ip + lima_ip_gp; + + if ((err = lima_gp_pipe_init(dev))) { + lima_sched_pipe_fini(pipe); + return err; + } + + return 0; +} + +static void lima_fini_gp_pipe(struct lima_device *dev) +{ + struct lima_sched_pipe *pipe = dev->pipe + lima_pipe_gp; + + lima_gp_pipe_fini(dev); + lima_sched_pipe_fini(pipe); +} + +static int lima_init_pp_pipe(struct lima_device *dev) +{ + struct lima_sched_pipe *pipe = dev->pipe + lima_pipe_pp; + int err, i; + + if ((err = lima_sched_pipe_init(pipe, "pp"))) + return err; + + for (i = 0; i < LIMA_SCHED_PIPE_MAX_PROCESSOR; i++) { + struct lima_ip *pp = dev->ip + lima_ip_pp0 + i; + struct lima_ip *ppmmu = dev->ip + lima_ip_ppmmu0 + i; + struct lima_ip *l2_cache; + + if (dev->id == lima_gpu_mali400) + l2_cache = dev->ip + lima_ip_l2_cache0; + else + l2_cache = dev->ip + lima_ip_l2_cache1 + (i >> 2); + + if (pp->present && ppmmu->present && l2_cache->present) { + pipe->mmu[pipe->num_mmu++] = ppmmu; + pipe->processor[pipe->num_processor++] = pp; + if (!pipe->l2_cache[i >> 2]) + pipe->l2_cache[pipe->num_l2_cache++] = l2_cache; + } + } + + if (dev->ip[lima_ip_bcast].present) { + pipe->bcast_processor = dev->ip + lima_ip_pp_bcast; + pipe->bcast_mmu = dev->ip + lima_ip_ppmmu_bcast; + } + + if ((err = lima_pp_pipe_init(dev))) { + lima_sched_pipe_fini(pipe); + return err; + } + + return 0; +} + +static void lima_fini_pp_pipe(struct lima_device *dev) +{ + struct lima_sched_pipe 
*pipe = dev->pipe + lima_pipe_pp; + + lima_pp_pipe_fini(dev); + lima_sched_pipe_fini(pipe); +} + +int lima_device_init(struct lima_device *ldev) +{ + int err, i; + struct resource *res; + + dma_set_coherent_mask(ldev->dev, DMA_BIT_MASK(32)); + + err = lima_clk_init(ldev); + if (err) { + dev_err(ldev->dev, "clk init fail %d\n", err); + return err; + } + + if ((err = lima_regulator_init(ldev))) { + dev_err(ldev->dev, "regulator init fail %d\n", err); + goto err_out0; + } + + err = lima_ttm_init(ldev); + if (err) + goto err_out1; + + ldev->empty_vm = lima_vm_create(ldev); + if (!ldev->empty_vm) { + err = -ENOMEM; + goto err_out2; + } + + ldev->va_start = 0; + if (ldev->id == lima_gpu_mali450) { + ldev->va_end = LIMA_VA_RESERVE_START; + ldev->dlbu_cpu = dma_alloc_wc( + ldev->dev, LIMA_PAGE_SIZE, + &ldev->dlbu_dma, GFP_KERNEL); + if (!ldev->dlbu_cpu) { + err = -ENOMEM; + goto err_out3; + } + } + else + ldev->va_end = LIMA_VA_RESERVE_END; + + res = platform_get_resource(ldev->pdev, IORESOURCE_MEM, 0); + ldev->iomem = devm_ioremap_resource(ldev->dev, res); + if (IS_ERR(ldev->iomem)) { + dev_err(ldev->dev, "fail to ioremap iomem\n"); + err = PTR_ERR(ldev->iomem); + goto err_out4; + } + + for (i = 0; i < lima_ip_num; i++) { + err = lima_init_ip(ldev, i); + if (err) + goto err_out5; + } + + err = lima_init_gp_pipe(ldev); + if (err) + goto err_out5; + + err = lima_init_pp_pipe(ldev); + if (err) + goto err_out6; + + if (ldev->id == lima_gpu_mali450) { + lima_dlbu_enable(ldev); + lima_bcast_enable(ldev); + } + + return 0; + +err_out6: + lima_fini_gp_pipe(ldev); +err_out5: + while (--i >= 0) + lima_fini_ip(ldev, i); +err_out4: + if (ldev->dlbu_cpu) + dma_free_wc(ldev->dev, LIMA_PAGE_SIZE, + ldev->dlbu_cpu, ldev->dlbu_dma); +err_out3: + lima_vm_put(ldev->empty_vm); +err_out2: + lima_ttm_fini(ldev); +err_out1: + lima_regulator_fini(ldev); +err_out0: + lima_clk_fini(ldev); + return err; +} + +void lima_device_fini(struct lima_device *ldev) +{ + int i; + + lima_fini_pp_pipe(ldev); + 
lima_fini_gp_pipe(ldev); + + for (i = lima_ip_num - 1; i >= 0; i--) + lima_fini_ip(ldev, i); + + if (ldev->dlbu_cpu) + dma_free_wc(ldev->dev, LIMA_PAGE_SIZE, + ldev->dlbu_cpu, ldev->dlbu_dma); + + lima_vm_put(ldev->empty_vm); + + lima_ttm_fini(ldev); + + lima_regulator_fini(ldev); + + lima_clk_fini(ldev); +} diff --git a/drivers/gpu/drm/lima/lima_device.h b/drivers/gpu/drm/lima/lima_device.h new file mode 100644 index 000000000000..6c9c26b9e122 --- /dev/null +++ b/drivers/gpu/drm/lima/lima_device.h @@ -0,0 +1,136 @@ +/* + * Copyright (C) 2017-2018 Lima Project + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ +#ifndef __LIMA_DEVICE_H__ +#define __LIMA_DEVICE_H__ + +#include <drm/drm_device.h> + +#include "lima_sched.h" +#include "lima_ttm.h" + +enum lima_gpu_id { + lima_gpu_mali400 = 0, + lima_gpu_mali450, + lima_gpu_num, +}; + +enum lima_ip_id { + lima_ip_pmu, + lima_ip_gpmmu, + lima_ip_ppmmu0, + lima_ip_ppmmu1, + lima_ip_ppmmu2, + lima_ip_ppmmu3, + lima_ip_ppmmu4, + lima_ip_ppmmu5, + lima_ip_ppmmu6, + lima_ip_ppmmu7, + lima_ip_gp, + lima_ip_pp0, + lima_ip_pp1, + lima_ip_pp2, + lima_ip_pp3, + lima_ip_pp4, + lima_ip_pp5, + lima_ip_pp6, + lima_ip_pp7, + lima_ip_l2_cache0, + lima_ip_l2_cache1, + lima_ip_l2_cache2, + lima_ip_dlbu, + lima_ip_bcast, + lima_ip_pp_bcast, + lima_ip_ppmmu_bcast, + lima_ip_num, +}; + +struct lima_device; + +struct lima_ip { + struct lima_device *dev; + enum lima_ip_id id; + bool present; + + void __iomem *iomem; + int irq; + + union { + /* pmu */ + unsigned switch_delay; + /* gp/pp */ + bool async_reset; + /* l2 cache */ + spinlock_t lock; + } data; +}; + +enum lima_pipe_id { + lima_pipe_gp, + lima_pipe_pp, + lima_pipe_num, +}; + +struct lima_device { + struct device *dev; + struct drm_device *ddev; + struct platform_device *pdev; + + enum lima_gpu_id id; + int num_pp; + + void __iomem *iomem; + struct clk *clk_bus; + struct clk *clk_gpu; + struct reset_control *reset; + struct regulator *regulator; + + struct lima_ip ip[lima_ip_num]; + struct lima_sched_pipe pipe[lima_pipe_num]; + + struct lima_mman mman; + + struct lima_vm *empty_vm; + uint64_t va_start; + uint64_t va_end; + + u32 *dlbu_cpu; + dma_addr_t dlbu_dma; +}; + +static inline struct lima_device * +to_lima_dev(struct drm_device *dev) +{ + return dev->dev_private; +} + +static inline struct lima_device * +ttm_to_lima_dev(struct ttm_bo_device *dev) +{ + return container_of(dev, struct lima_device, mman.bdev); +} + +int lima_device_init(struct lima_device *ldev); +void lima_device_fini(struct lima_device *ldev); + +const char *lima_ip_name(struct lima_ip *ip); + +#endif
From: Lima Project Developers <dri-devel@lists.freedesktop.org>
Signed-off-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
---
 drivers/gpu/drm/lima/lima_pmu.c | 85 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/lima/lima_pmu.h | 30 ++++++++++++
 2 files changed, 115 insertions(+)
 create mode 100644 drivers/gpu/drm/lima/lima_pmu.c
 create mode 100644 drivers/gpu/drm/lima/lima_pmu.h
diff --git a/drivers/gpu/drm/lima/lima_pmu.c b/drivers/gpu/drm/lima/lima_pmu.c new file mode 100644 index 000000000000..255a64e9f265 --- /dev/null +++ b/drivers/gpu/drm/lima/lima_pmu.c @@ -0,0 +1,85 @@ +/* + * Copyright (C) 2017-2018 Lima Project + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */
+
+#include <linux/of.h>
+#include <linux/io.h>
+#include <linux/device.h>
+
+#include "lima_device.h"
+#include "lima_pmu.h"
+#include "lima_regs.h"
+
+#define pmu_write(reg, data) writel(data, ip->iomem + LIMA_PMU_##reg)
+#define pmu_read(reg) readl(ip->iomem + LIMA_PMU_##reg)
+
+static int lima_pmu_wait_cmd(struct lima_ip *ip)
+{
+	struct lima_device *dev = ip->dev;
+	u32 stat, timeout;
+
+	for (timeout = 1000000; timeout > 0; timeout--) {
+		stat = pmu_read(INT_RAWSTAT);
+		if (stat & LIMA_PMU_INT_CMD_MASK)
+			break;
+	}
+
+	if (!timeout) {
+		dev_err(dev->dev, "timeout waiting for pmu cmd\n");
+		return -ETIMEDOUT;
+	}
+
+	pmu_write(INT_CLEAR, LIMA_PMU_INT_CMD_MASK);
+	return 0;
+}
+
+int lima_pmu_init(struct lima_ip *ip)
+{
+	int err;
+	u32 stat;
+	struct lima_device *dev = ip->dev;
+	struct device_node *np = dev->dev->of_node;
+
+	/* If this value is too low, the GPU becomes unstable
+	 * at high GPU clock frequencies. */
+	if (of_property_read_u32(np, "switch-delay", &ip->data.switch_delay))
+		ip->data.switch_delay = 0xff;
+
+	pmu_write(INT_MASK, 0);
+	pmu_write(SW_DELAY, ip->data.switch_delay);
+
+	/* status reg 1=off 0=on */
+	stat = pmu_read(STATUS);
+
+	/* power up all ip */
+	if (stat) {
+		pmu_write(POWER_UP, stat);
+		err = lima_pmu_wait_cmd(ip);
+		if (err)
+			return err;
+	}
+	return 0;
+}
+
+void lima_pmu_fini(struct lima_ip *ip)
+{
+
+}
diff --git a/drivers/gpu/drm/lima/lima_pmu.h b/drivers/gpu/drm/lima/lima_pmu.h
new file mode 100644
index 000000000000..fb68a7059a37
--- /dev/null
+++ b/drivers/gpu/drm/lima/lima_pmu.h
@@ -0,0 +1,30 @@
+/*
+ * Copyright (C) 2017-2018 Lima Project
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom
the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + */ +#ifndef __LIMA_PMU_H__ +#define __LIMA_PMU_H__ + +struct lima_ip; + +int lima_pmu_init(struct lima_ip *ip); +void lima_pmu_fini(struct lima_ip *ip); + +#endif
Signed-off-by: Qiang Yu <yuq825@gmail.com>
---
 drivers/gpu/drm/lima/lima_l2_cache.c | 98 ++++++++++++++++++++++++++++
 drivers/gpu/drm/lima/lima_l2_cache.h | 32 +++++++++
 2 files changed, 130 insertions(+)
 create mode 100644 drivers/gpu/drm/lima/lima_l2_cache.c
 create mode 100644 drivers/gpu/drm/lima/lima_l2_cache.h
diff --git a/drivers/gpu/drm/lima/lima_l2_cache.c b/drivers/gpu/drm/lima/lima_l2_cache.c new file mode 100644 index 000000000000..a9b85de5c51a --- /dev/null +++ b/drivers/gpu/drm/lima/lima_l2_cache.c @@ -0,0 +1,98 @@ +/* + * Copyright (C) 2017-2018 Lima Project + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ + +#include <linux/io.h> +#include <linux/device.h> + +#include "lima_device.h" +#include "lima_l2_cache.h" +#include "lima_regs.h" + +#define l2_cache_write(reg, data) writel(data, ip->iomem + LIMA_L2_CACHE_##reg) +#define l2_cache_read(reg) readl(ip->iomem + LIMA_L2_CACHE_##reg) + +static int lima_l2_cache_wait_idle(struct lima_ip *ip) +{ + int timeout; + struct lima_device *dev = ip->dev; + + for (timeout = 100000; timeout > 0; timeout--) { + if (!(l2_cache_read(STATUS) & LIMA_L2_CACHE_STATUS_COMMAND_BUSY)) + break; + } + if (!timeout) { + dev_err(dev->dev, "l2 cache wait command timeout\n"); + return -ETIMEDOUT; + } + return 0; +} + +int lima_l2_cache_flush(struct lima_ip *ip) +{ + int ret; + + spin_lock(&ip->data.lock); + l2_cache_write(COMMAND, LIMA_L2_CACHE_COMMAND_CLEAR_ALL); + ret = lima_l2_cache_wait_idle(ip); + spin_unlock(&ip->data.lock); + return ret; +} + +int lima_l2_cache_init(struct lima_ip *ip) +{ + int i, err; + u32 size; + struct lima_device *dev = ip->dev; + + /* l2_cache2 only exists when one of PP4-7 present */ + if (ip->id == lima_ip_l2_cache2) { + for (i = lima_ip_pp4; i <= lima_ip_pp7; i++) { + if (dev->ip[i].present) + break; + } + if (i > lima_ip_pp7) + return -ENODEV; + } + + spin_lock_init(&ip->data.lock); + + size = l2_cache_read(SIZE); + dev_info(dev->dev, "l2 cache %uK, %u-way, %ubyte cache line, %ubit external bus\n", + 1 << (((size >> 16) & 0xff) - 10), + 1 << ((size >> 8) & 0xff), + 1 << (size & 0xff), + 1 << ((size >> 24) & 0xff)); + + err = lima_l2_cache_flush(ip); + if (err) + return err; + + l2_cache_write(ENABLE, LIMA_L2_CACHE_ENABLE_ACCESS | LIMA_L2_CACHE_ENABLE_READ_ALLOCATE); + l2_cache_write(MAX_READS, 0x1c); + + return 0; +} + +void lima_l2_cache_fini(struct lima_ip *ip) +{ + +} diff --git a/drivers/gpu/drm/lima/lima_l2_cache.h b/drivers/gpu/drm/lima/lima_l2_cache.h new file mode 100644 index 000000000000..4a35725bf38d --- /dev/null +++ b/drivers/gpu/drm/lima/lima_l2_cache.h @@ -0,0 +1,32 @@ +/* + * Copyright (C) 
2017-2018 Lima Project + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + */ +#ifndef __LIMA_L2_CACHE_H__ +#define __LIMA_L2_CACHE_H__ + +struct lima_ip; + +int lima_l2_cache_init(struct lima_ip *ip); +void lima_l2_cache_fini(struct lima_ip *ip); + +int lima_l2_cache_flush(struct lima_ip *ip); + +#endif
GP is the processor used for OpenGL vertex shader processing.
Signed-off-by: Qiang Yu <yuq825@gmail.com>
---
 drivers/gpu/drm/lima/lima_gp.c | 293 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/lima/lima_gp.h | 34 ++++
 2 files changed, 327 insertions(+)
 create mode 100644 drivers/gpu/drm/lima/lima_gp.c
 create mode 100644 drivers/gpu/drm/lima/lima_gp.h
diff --git a/drivers/gpu/drm/lima/lima_gp.c b/drivers/gpu/drm/lima/lima_gp.c new file mode 100644 index 000000000000..8fb49986418a --- /dev/null +++ b/drivers/gpu/drm/lima/lima_gp.c @@ -0,0 +1,293 @@ +/* + * Copyright (C) 2017-2018 Lima Project + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ + +#include <linux/interrupt.h> +#include <linux/io.h> +#include <linux/device.h> +#include <linux/slab.h> + +#include <drm/lima_drm.h> + +#include "lima_device.h" +#include "lima_gp.h" +#include "lima_regs.h" + +#define gp_write(reg, data) writel(data, ip->iomem + LIMA_GP_##reg) +#define gp_read(reg) readl(ip->iomem + LIMA_GP_##reg) + +static irqreturn_t lima_gp_irq_handler(int irq, void *data) +{ + struct lima_ip *ip = data; + struct lima_device *dev = ip->dev; + struct lima_sched_pipe *pipe = dev->pipe + lima_pipe_gp; + u32 state = gp_read(INT_STAT); + u32 status = gp_read(STATUS); + bool done = false; + + /* for shared irq case */ + if (!state) + return IRQ_NONE; + + if (state & LIMA_GP_IRQ_MASK_ERROR) { + dev_err(dev->dev, "gp error irq state=%x status=%x\n", + state, status); + + /* mask all interrupts before hard reset */ + gp_write(INT_MASK, 0); + + pipe->error = true; + done = true; + } + else { + bool valid = state & (LIMA_GP_IRQ_VS_END_CMD_LST | + LIMA_GP_IRQ_PLBU_END_CMD_LST); + bool active = status & (LIMA_GP_STATUS_VS_ACTIVE | + LIMA_GP_STATUS_PLBU_ACTIVE); + done = valid && !active; + } + + gp_write(INT_CLEAR, state); + + if (done) + lima_sched_pipe_task_done(pipe); + + return IRQ_HANDLED; +} + +static void lima_gp_soft_reset_async(struct lima_ip *ip) +{ + if (ip->data.async_reset) + return; + + gp_write(INT_MASK, 0); + gp_write(INT_CLEAR, LIMA_GP_IRQ_RESET_COMPLETED); + gp_write(CMD, LIMA_GP_CMD_SOFT_RESET); + ip->data.async_reset = true; +} + +static int lima_gp_soft_reset_async_wait(struct lima_ip *ip) +{ + struct lima_device *dev = ip->dev; + int timeout; + + if (!ip->data.async_reset) + return 0; + + for (timeout = 1000; timeout > 0; timeout--) { + if (gp_read(INT_RAWSTAT) & LIMA_GP_IRQ_RESET_COMPLETED) + break; + } + if (!timeout) { + dev_err(dev->dev, "gp soft reset time out\n"); + return -ETIMEDOUT; + } + + gp_write(INT_CLEAR, LIMA_GP_IRQ_MASK_ALL); + gp_write(INT_MASK, LIMA_GP_IRQ_MASK_USED); + + ip->data.async_reset = false; + return 
0; +} + +static int lima_gp_task_validate(struct lima_sched_pipe *pipe, + struct lima_sched_task *task) +{ + struct drm_lima_gp_frame *frame = task->frame; + u32 *f = frame->frame; + (void)pipe; + + if (f[LIMA_GP_VSCL_START_ADDR >> 2] > + f[LIMA_GP_VSCL_END_ADDR >> 2] || + f[LIMA_GP_PLBUCL_START_ADDR >> 2] > + f[LIMA_GP_PLBUCL_END_ADDR >> 2] || + f[LIMA_GP_PLBU_ALLOC_START_ADDR >> 2] > + f[LIMA_GP_PLBU_ALLOC_END_ADDR >> 2]) + return -EINVAL; + + if (f[LIMA_GP_VSCL_START_ADDR >> 2] == + f[LIMA_GP_VSCL_END_ADDR >> 2] && + f[LIMA_GP_PLBUCL_START_ADDR >> 2] == + f[LIMA_GP_PLBUCL_END_ADDR >> 2]) + return -EINVAL; + + return 0; +} + +static void lima_gp_task_run(struct lima_sched_pipe *pipe, + struct lima_sched_task *task) +{ + struct lima_ip *ip = pipe->processor[0]; + struct drm_lima_gp_frame *frame = task->frame; + u32 *f = frame->frame; + u32 cmd = 0; + int i; + + if (f[LIMA_GP_VSCL_START_ADDR >> 2] != + f[LIMA_GP_VSCL_END_ADDR >> 2]) + cmd |= LIMA_GP_CMD_START_VS; + if (f[LIMA_GP_PLBUCL_START_ADDR >> 2] != + f[LIMA_GP_PLBUCL_END_ADDR >> 2]) + cmd |= LIMA_GP_CMD_START_PLBU; + + /* before any hw ops, wait last success task async soft reset */ + lima_gp_soft_reset_async_wait(ip); + + for (i = 0; i < LIMA_GP_FRAME_REG_NUM; i++) + writel(f[i], ip->iomem + LIMA_GP_VSCL_START_ADDR + i * 4); + + gp_write(CMD, LIMA_GP_CMD_UPDATE_PLBU_ALLOC); + gp_write(CMD, cmd); +} + +static int lima_gp_hard_reset(struct lima_ip *ip) +{ + struct lima_device *dev = ip->dev; + int timeout; + + gp_write(PERF_CNT_0_LIMIT, 0xC0FFE000); + gp_write(INT_MASK, 0); + gp_write(CMD, LIMA_GP_CMD_RESET); + for (timeout = 1000; timeout > 0; timeout--) { + gp_write(PERF_CNT_0_LIMIT, 0xC01A0000); + if (gp_read(PERF_CNT_0_LIMIT) == 0xC01A0000) + break; + } + if (!timeout) { + dev_err(dev->dev, "gp hard reset timeout\n"); + return -ETIMEDOUT; + } + + gp_write(PERF_CNT_0_LIMIT, 0); + gp_write(INT_CLEAR, LIMA_GP_IRQ_MASK_ALL); + gp_write(INT_MASK, LIMA_GP_IRQ_MASK_USED); + return 0; +} + +static void 
lima_gp_task_fini(struct lima_sched_pipe *pipe)
+{
+	lima_gp_soft_reset_async(pipe->processor[0]);
+}
+
+static void lima_gp_task_error(struct lima_sched_pipe *pipe)
+{
+	lima_gp_hard_reset(pipe->processor[0]);
+}
+
+static void lima_gp_task_mmu_error(struct lima_sched_pipe *pipe)
+{
+	lima_sched_pipe_task_done(pipe);
+}
+
+static void lima_gp_print_version(struct lima_ip *ip)
+{
+	u32 version, major, minor;
+	char *name;
+
+	version = gp_read(VERSION);
+	major = (version >> 8) & 0xFF;
+	minor = version & 0xFF;
+	switch (version >> 16) {
+	case 0xA07:
+		name = "mali200";
+		break;
+	case 0xC07:
+		name = "mali300";
+		break;
+	case 0xB07:
+		name = "mali400";
+		break;
+	case 0xD07:
+		name = "mali450";
+		break;
+	default:
+		name = "unknown";
+		break;
+	}
+	dev_info(ip->dev->dev, "%s - %s version major %d minor %d\n",
+		 lima_ip_name(ip), name, major, minor);
+}
+
+static struct kmem_cache *lima_gp_task_slab = NULL;
+static int lima_gp_task_slab_refcnt = 0;
+
+int lima_gp_init(struct lima_ip *ip)
+{
+	struct lima_device *dev = ip->dev;
+	int err;
+
+	lima_gp_print_version(ip);
+
+	ip->data.async_reset = false;
+	lima_gp_soft_reset_async(ip);
+	err = lima_gp_soft_reset_async_wait(ip);
+	if (err)
+		return err;
+
+	err = devm_request_irq(dev->dev, ip->irq, lima_gp_irq_handler, 0,
+			       lima_ip_name(ip), ip);
+	if (err) {
+		dev_err(dev->dev, "gp %s failed to request irq\n",
+			lima_ip_name(ip));
+		return err;
+	}
+
+	return 0;
+}
+
+void lima_gp_fini(struct lima_ip *ip)
+{
+
+}
+
+int lima_gp_pipe_init(struct lima_device *dev)
+{
+	int frame_size = sizeof(struct drm_lima_gp_frame);
+	struct lima_sched_pipe *pipe = dev->pipe + lima_pipe_gp;
+
+	if (!lima_gp_task_slab) {
+		lima_gp_task_slab = kmem_cache_create(
+			"lima_gp_task", sizeof(struct lima_sched_task) + frame_size,
+			0, SLAB_HWCACHE_ALIGN, NULL);
+		if (!lima_gp_task_slab)
+			return -ENOMEM;
+	}
+	lima_gp_task_slab_refcnt++;
+
+	pipe->frame_size = frame_size;
+	pipe->task_slab = lima_gp_task_slab;
+
+	pipe->task_validate =
lima_gp_task_validate; + pipe->task_run = lima_gp_task_run; + pipe->task_fini = lima_gp_task_fini; + pipe->task_error = lima_gp_task_error; + pipe->task_mmu_error = lima_gp_task_mmu_error; + + return 0; +} + +void lima_gp_pipe_fini(struct lima_device *dev) +{ + if (!--lima_gp_task_slab_refcnt) { + kmem_cache_destroy(lima_gp_task_slab); + lima_gp_task_slab = NULL; + } +} diff --git a/drivers/gpu/drm/lima/lima_gp.h b/drivers/gpu/drm/lima/lima_gp.h new file mode 100644 index 000000000000..8354911e50ce --- /dev/null +++ b/drivers/gpu/drm/lima/lima_gp.h @@ -0,0 +1,34 @@ +/* + * Copyright (C) 2017-2018 Lima Project + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + */ +#ifndef __LIMA_GP_H__ +#define __LIMA_GP_H__ + +struct lima_ip; +struct lima_device; + +int lima_gp_init(struct lima_ip *ip); +void lima_gp_fini(struct lima_ip *ip); + +int lima_gp_pipe_init(struct lima_device *dev); +void lima_gp_pipe_fini(struct lima_device *dev); + +#endif
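The sanity check in lima_gp_task_validate() above can be restated as a stand-alone userspace sketch. The helper name and parameters below are hypothetical (this is not driver code); it only mirrors the two rules of the frame check: every command-stream region must have start <= end, and the VS and PLBU command streams must not both be empty, since such a task would do nothing.

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical stand-in for the GP frame sanity check: reject malformed
 * regions (start past end) and tasks with neither a vertex-shader nor a
 * PLBU command stream to run. */
static int gp_frame_ranges_ok(unsigned vs_start, unsigned vs_end,
			      unsigned plbu_start, unsigned plbu_end)
{
	if (vs_start > vs_end || plbu_start > plbu_end)
		return -EINVAL;	/* malformed region */
	if (vs_start == vs_end && plbu_start == plbu_end)
		return -EINVAL;	/* nothing to execute */
	return 0;
}
```

Note the real check also validates the PLBU heap (LIMA_GP_PLBU_ALLOC_*) range; the sketch omits it for brevity.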
On 05/18/2018 11:28 AM, Qiang Yu wrote:
> GP is a processor for OpenGL vertex shader processing.
>
> Signed-off-by: Qiang Yu <yuq825@gmail.com>
[...]
> +int lima_gp_init(struct lima_ip *ip)
> +{
> +	struct lima_device *dev = ip->dev;
> +	int err;
> +
> +	lima_gp_print_version(ip);
> +
> +	ip->data.async_reset = false;
> +	lima_gp_soft_reset_async(ip);
> +	err = lima_gp_soft_reset_async_wait(ip);
> +	if (err)
> +		return err;
> +
> +	err = devm_request_irq(dev->dev, ip->irq, lima_gp_irq_handler, 0,
> +			       lima_ip_name(ip), ip);

IRQF_SHARED, since there are designs (like zynqmp) where there is only
one IRQ line for the entire GPU.

> +	if (err) {
> +		dev_err(dev->dev, "gp %s fail to request irq\n",
> +			lima_ip_name(ip));
> +		return err;
> +	}
> +
> +	return 0;
> +}
[...]
On Thu, May 24, 2018 at 1:12 AM, Marek Vasut <marex@denx.de> wrote:
> On 05/18/2018 11:28 AM, Qiang Yu wrote:
>> GP is a processor for OpenGL vertex shader processing.
>>
>> Signed-off-by: Qiang Yu <yuq825@gmail.com>
> [...]
>> +int lima_gp_init(struct lima_ip *ip)
>> +{
>> +	struct lima_device *dev = ip->dev;
>> +	int err;
>> +
>> +	lima_gp_print_version(ip);
>> +
>> +	ip->data.async_reset = false;
>> +	lima_gp_soft_reset_async(ip);
>> +	err = lima_gp_soft_reset_async_wait(ip);
>> +	if (err)
>> +		return err;
>> +
>> +	err = devm_request_irq(dev->dev, ip->irq, lima_gp_irq_handler, 0,
>> +			       lima_ip_name(ip), ip);
>
> IRQF_SHARED, since there are designs (like zynqmp) where there is only
> one IRQ line for the entire GPU.

Right, will add this flag.

Regards,
Qiang

>> +	if (err) {
>> +		dev_err(dev->dev, "gp %s fail to request irq\n",
>> +			lima_ip_name(ip));
>> +		return err;
>> +	}
>> +
>> +	return 0;
>> +}
> [...]
>
> --
> Best regards,
> Marek Vasut
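The IRQF_SHARED point raised above can be illustrated with a toy model: on a shared line the kernel invokes every handler registered for that IRQ, so each handler must read its own device's status register and report "none" when the interrupt was not for it (the `if (!state) return IRQ_NONE;` pattern in these patches). Everything below is a hypothetical userspace sketch, not the kernel API.

```c
#include <assert.h>

/* Toy model of shared-IRQ dispatch: the "kernel" calls every handler on
 * the line; a handler claims the interrupt only when its own status
 * register is non-zero. */
enum { MY_IRQ_NONE = 0, MY_IRQ_HANDLED = 1 };

struct fake_ip {
	unsigned int_status;	/* pretend INT_STATUS register */
	int handled_count;
};

static int fake_irq_handler(struct fake_ip *ip)
{
	if (!ip->int_status)	/* not our interrupt: let other handlers run */
		return MY_IRQ_NONE;
	ip->int_status = 0;	/* acknowledge */
	ip->handled_count++;
	return MY_IRQ_HANDLED;
}

static int dispatch_shared_line(struct fake_ip **ips, int n)
{
	int i, ret = MY_IRQ_NONE;

	for (i = 0; i < n; i++)
		if (fake_irq_handler(ips[i]) == MY_IRQ_HANDLED)
			ret = MY_IRQ_HANDLED;
	return ret;
}
```

This is why a handler registered without IRQF_SHARED but sharing a line would misbehave: it would never get the chance (or never decline) to pass the interrupt on to the other devices' handlers.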
From: Lima Project Developers <dri-devel@lists.freedesktop.org>
PP is a processor used for OpenGL fragment shader processing.
Signed-off-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
---
 drivers/gpu/drm/lima/lima_pp.c | 418 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/lima/lima_pp.h |  37 +++
 2 files changed, 455 insertions(+)
 create mode 100644 drivers/gpu/drm/lima/lima_pp.c
 create mode 100644 drivers/gpu/drm/lima/lima_pp.h
diff --git a/drivers/gpu/drm/lima/lima_pp.c b/drivers/gpu/drm/lima/lima_pp.c
new file mode 100644
index 000000000000..371d6b70c271
--- /dev/null
+++ b/drivers/gpu/drm/lima/lima_pp.c
@@ -0,0 +1,418 @@
+/*
+ * Copyright (C) 2017-2018 Lima Project
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+
+#include <drm/lima_drm.h>
+
+#include "lima_device.h"
+#include "lima_pp.h"
+#include "lima_dlbu.h"
+#include "lima_bcast.h"
+#include "lima_vm.h"
+#include "lima_regs.h"
+
+#define pp_write(reg, data) writel(data, ip->iomem + LIMA_PP_##reg)
+#define pp_read(reg) readl(ip->iomem + LIMA_PP_##reg)
+
+static void lima_pp_handle_irq(struct lima_ip *ip, u32 state)
+{
+	struct lima_device *dev = ip->dev;
+	struct lima_sched_pipe *pipe = dev->pipe + lima_pipe_pp;
+
+	if (state & LIMA_PP_IRQ_MASK_ERROR) {
+		u32 status = pp_read(STATUS);
+
+		dev_err(dev->dev, "pp error irq state=%x status=%x\n",
+			state, status);
+
+		pipe->error = true;
+
+		/* mask all interrupts before hard reset */
+		pp_write(INT_MASK, 0);
+	}
+
+	pp_write(INT_CLEAR, state);
+}
+
+static irqreturn_t lima_pp_irq_handler(int irq, void *data)
+{
+	struct lima_ip *ip = data;
+	struct lima_device *dev = ip->dev;
+	struct lima_sched_pipe *pipe = dev->pipe + lima_pipe_pp;
+	u32 state = pp_read(INT_STATUS);
+
+	/* for shared irq case */
+	if (!state)
+		return IRQ_NONE;
+
+	lima_pp_handle_irq(ip, state);
+
+	if (atomic_dec_and_test(&pipe->task))
+		lima_sched_pipe_task_done(pipe);
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t lima_pp_bcast_irq_handler(int irq, void *data)
+{
+	int i;
+	irqreturn_t ret = IRQ_NONE;
+	struct lima_ip *pp_bcast = data;
+	struct lima_device *dev = pp_bcast->dev;
+	struct lima_sched_pipe *pipe = dev->pipe + lima_pipe_pp;
+
+	for (i = 0; i < pipe->num_processor; i++) {
+		struct lima_ip *ip = pipe->processor[i];
+		u32 status, state;
+
+		if (pipe->done & (1 << i))
+			continue;
+
+		/* read status first, in case the interrupt state changes
+		 * in the middle and the interrupt handling is missed */
+		status = pp_read(STATUS);
+		state = pp_read(INT_STATUS);
+
+		if (state) {
+			lima_pp_handle_irq(ip, state);
+			ret = IRQ_HANDLED;
+		} else {
+			if (status & LIMA_PP_STATUS_RENDERING_ACTIVE)
+				continue;
+		}
+
+		pipe->done |= (1 << i);
+		if (atomic_dec_and_test(&pipe->task))
+			lima_sched_pipe_task_done(pipe);
+	}
+
+	return ret;
+}
+
+static void lima_pp_soft_reset_async(struct lima_ip *ip)
+{
+	if (ip->data.async_reset)
+		return;
+
+	pp_write(INT_MASK, 0);
+	pp_write(INT_RAWSTAT, LIMA_PP_IRQ_MASK_ALL);
+	pp_write(CTRL, LIMA_PP_CTRL_SOFT_RESET);
+	ip->data.async_reset = true;
+}
+
+static int lima_pp_soft_reset_async_wait_one(struct lima_ip *ip)
+{
+	struct lima_device *dev = ip->dev;
+	int timeout;
+
+	for (timeout = 1000; timeout > 0; timeout--) {
+		if (!(pp_read(STATUS) & LIMA_PP_STATUS_RENDERING_ACTIVE) &&
+		    pp_read(INT_RAWSTAT) == LIMA_PP_IRQ_RESET_COMPLETED)
+			break;
+	}
+	if (!timeout) {
+		dev_err(dev->dev, "pp %s reset time out\n", lima_ip_name(ip));
+		return -ETIMEDOUT;
+	}
+
+	pp_write(INT_CLEAR, LIMA_PP_IRQ_MASK_ALL);
+	pp_write(INT_MASK, LIMA_PP_IRQ_MASK_USED);
+	return 0;
+}
+
+static int lima_pp_soft_reset_async_wait(struct lima_ip *ip)
+{
+	int i, err = 0;
+
+	if (!ip->data.async_reset)
+		return 0;
+
+	if (ip->id == lima_ip_pp_bcast) {
+		struct lima_device *dev = ip->dev;
+		struct lima_sched_pipe *pipe = dev->pipe + lima_pipe_pp;
+
+		for (i = 0; i < pipe->num_processor; i++)
+			err |= lima_pp_soft_reset_async_wait_one(pipe->processor[i]);
+	} else
+		err = lima_pp_soft_reset_async_wait_one(ip);
+
+	ip->data.async_reset = false;
+	return err;
+}
+
+static void lima_pp_start_task(struct lima_ip *ip, u32 *frame, u32 *wb,
+			       bool skip_stack_addr)
+{
+	int i, j, n = 0;
+
+	for (i = 0; i < LIMA_PP_FRAME_REG_NUM; i++) {
+		if (skip_stack_addr && i * 4 == LIMA_PP_STACK)
+			continue;
+
+		writel(frame[i], ip->iomem + LIMA_PP_FRAME + i * 4);
+	}
+
+	for (i = 0; i < 3; i++) {
+		for (j = 0; j < LIMA_PP_WB_REG_NUM; j++)
+			writel(wb[n++], ip->iomem + LIMA_PP_WB(i) + j * 4);
+	}
+
+	pp_write(CTRL, LIMA_PP_CTRL_START_RENDERING);
+}
+
+static int lima_pp_hard_reset(struct lima_ip *ip)
+{
+	struct lima_device *dev = ip->dev;
+	int timeout;
+
+	pp_write(PERF_CNT_0_LIMIT, 0xC0FFE000);
+	pp_write(INT_MASK, 0);
+	pp_write(CTRL, LIMA_PP_CTRL_FORCE_RESET);
+	for (timeout = 1000; timeout > 0; timeout--) {
+		pp_write(PERF_CNT_0_LIMIT, 0xC01A0000);
+		if (pp_read(PERF_CNT_0_LIMIT) == 0xC01A0000)
+			break;
+	}
+	if (!timeout) {
+		dev_err(dev->dev, "pp hard reset timeout\n");
+		return -ETIMEDOUT;
+	}
+
+	pp_write(PERF_CNT_0_LIMIT, 0);
+	pp_write(INT_CLEAR, LIMA_PP_IRQ_MASK_ALL);
+	pp_write(INT_MASK, LIMA_PP_IRQ_MASK_USED);
+	return 0;
+}
+
+static void lima_pp_print_version(struct lima_ip *ip)
+{
+	u32 version, major, minor;
+	char *name;
+
+	version = pp_read(VERSION);
+	major = (version >> 8) & 0xFF;
+	minor = version & 0xFF;
+	switch (version >> 16) {
+	case 0xC807:
+		name = "mali200";
+		break;
+	case 0xCE07:
+		name = "mali300";
+		break;
+	case 0xCD07:
+		name = "mali400";
+		break;
+	case 0xCF07:
+		name = "mali450";
+		break;
+	default:
+		name = "unknown";
+		break;
+	}
+	dev_info(ip->dev->dev, "%s - %s version major %d minor %d\n",
+		 lima_ip_name(ip), name, major, minor);
+}
+
+int lima_pp_init(struct lima_ip *ip)
+{
+	struct lima_device *dev = ip->dev;
+	int err;
+
+	lima_pp_print_version(ip);
+
+	ip->data.async_reset = false;
+	lima_pp_soft_reset_async(ip);
+	err = lima_pp_soft_reset_async_wait(ip);
+	if (err)
+		return err;
+
+	err = devm_request_irq(dev->dev, ip->irq, lima_pp_irq_handler,
+			       IRQF_SHARED, lima_ip_name(ip), ip);
+	if (err) {
+		dev_err(dev->dev, "pp %s fail to request irq\n",
+			lima_ip_name(ip));
+		return err;
+	}
+
+	return 0;
+}
+
+void lima_pp_fini(struct lima_ip *ip)
+{
+
+}
+
+int lima_pp_bcast_init(struct lima_ip *ip)
+{
+	struct lima_device *dev = ip->dev;
+	int err;
+
+	err = devm_request_irq(dev->dev, ip->irq, lima_pp_bcast_irq_handler,
+			       IRQF_SHARED, lima_ip_name(ip), ip);
+	if (err) {
+		dev_err(dev->dev, "pp %s fail to request irq\n",
+			lima_ip_name(ip));
+		return err;
+	}
+
+	return 0;
+}
+
+void lima_pp_bcast_fini(struct lima_ip *ip)
+{
+
+}
+
+static int lima_pp_task_validate(struct lima_sched_pipe *pipe,
+				 struct lima_sched_task *task)
+{
+	if (!pipe->bcast_processor) {
+		struct drm_lima_m400_pp_frame *f = task->frame;
+
+		if (f->num_pp > pipe->num_processor)
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void lima_pp_task_run(struct lima_sched_pipe *pipe,
+			     struct lima_sched_task *task)
+{
+	if (pipe->bcast_processor) {
+		struct drm_lima_m450_pp_frame *frame = task->frame;
+		struct lima_device *dev = pipe->bcast_processor->dev;
+		int i;
+
+		pipe->done = 0;
+		atomic_set(&pipe->task, pipe->num_processor);
+
+		frame->frame[LIMA_PP_FRAME >> 2] = LIMA_VA_RESERVE_DLBU;
+		lima_dlbu_set_reg(dev->ip + lima_ip_dlbu, frame->dlbu_regs);
+
+		lima_pp_soft_reset_async_wait(pipe->bcast_processor);
+
+		for (i = 0; i < pipe->num_processor; i++) {
+			struct lima_ip *ip = pipe->processor[i];
+
+			pp_write(STACK, frame->fragment_stack_address[i]);
+		}
+
+		lima_pp_start_task(pipe->bcast_processor, frame->frame,
+				   frame->wb, true);
+	} else {
+		struct drm_lima_m400_pp_frame *frame = task->frame;
+		int i;
+
+		atomic_set(&pipe->task, frame->num_pp);
+
+		for (i = 0; i < frame->num_pp; i++) {
+			frame->frame[LIMA_PP_FRAME >> 2] =
+				frame->plbu_array_address[i];
+			frame->frame[LIMA_PP_STACK >> 2] =
+				frame->fragment_stack_address[i];
+
+			lima_pp_soft_reset_async_wait(pipe->processor[i]);
+
+			lima_pp_start_task(pipe->processor[i], frame->frame,
+					   frame->wb, false);
+		}
+	}
+}
+
+static void lima_pp_task_fini(struct lima_sched_pipe *pipe)
+{
+	if (pipe->bcast_processor)
+		lima_pp_soft_reset_async(pipe->bcast_processor);
+	else {
+		int i;
+
+		for (i = 0; i < pipe->num_processor; i++)
+			lima_pp_soft_reset_async(pipe->processor[i]);
+	}
+}
+
+static void lima_pp_task_error(struct lima_sched_pipe *pipe)
+{
+	int i;
+
+	if (pipe->bcast_processor)
+		lima_bcast_disable(pipe->bcast_processor->dev);
+
+	for (i = 0; i < pipe->num_processor; i++)
+		lima_pp_hard_reset(pipe->processor[i]);
+
+	if (pipe->bcast_processor)
+		lima_bcast_enable(pipe->bcast_processor->dev);
+}
+
+static void lima_pp_task_mmu_error(struct lima_sched_pipe *pipe)
+{
+	if (atomic_dec_and_test(&pipe->task))
+		lima_sched_pipe_task_done(pipe);
+}
+
+static struct kmem_cache *lima_pp_task_slab = NULL;
+static int lima_pp_task_slab_refcnt = 0;
+
+int lima_pp_pipe_init(struct lima_device *dev)
+{
+	int frame_size;
+	struct lima_sched_pipe *pipe = dev->pipe + lima_pipe_pp;
+
+	if (dev->id == lima_gpu_mali400)
+		frame_size = sizeof(struct drm_lima_m400_pp_frame);
+	else
+		frame_size = sizeof(struct drm_lima_m450_pp_frame);
+
+	if (!lima_pp_task_slab) {
+		lima_pp_task_slab = kmem_cache_create(
+			"lima_pp_task", sizeof(struct lima_sched_task) + frame_size,
+			0, SLAB_HWCACHE_ALIGN, NULL);
+		if (!lima_pp_task_slab)
+			return -ENOMEM;
+	}
+	lima_pp_task_slab_refcnt++;
+
+	pipe->frame_size = frame_size;
+	pipe->task_slab = lima_pp_task_slab;
+
+	pipe->task_validate = lima_pp_task_validate;
+	pipe->task_run = lima_pp_task_run;
+	pipe->task_fini = lima_pp_task_fini;
+	pipe->task_error = lima_pp_task_error;
+	pipe->task_mmu_error = lima_pp_task_mmu_error;
+
+	return 0;
+}
+
+void lima_pp_pipe_fini(struct lima_device *dev)
+{
+	if (!--lima_pp_task_slab_refcnt) {
+		kmem_cache_destroy(lima_pp_task_slab);
+		lima_pp_task_slab = NULL;
+	}
+}
diff --git a/drivers/gpu/drm/lima/lima_pp.h b/drivers/gpu/drm/lima/lima_pp.h
new file mode 100644
index 000000000000..4bd1d9fcbcdf
--- /dev/null
+++ b/drivers/gpu/drm/lima/lima_pp.h
@@ -0,0 +1,37 @@
+/*
+ * Copyright (C) 2017-2018 Lima Project
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#ifndef __LIMA_PP_H__
+#define __LIMA_PP_H__
+
+struct lima_ip;
+struct lima_device;
+
+int lima_pp_init(struct lima_ip *ip);
+void lima_pp_fini(struct lima_ip *ip);
+
+int lima_pp_bcast_init(struct lima_ip *ip);
+void lima_pp_bcast_fini(struct lima_ip *ip);
+
+int lima_pp_pipe_init(struct lima_device *dev);
+void lima_pp_pipe_fini(struct lima_device *dev);
+
+#endif
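The per-PP completion accounting in this patch (atomic_set in lima_pp_task_run(), atomic_dec_and_test in the IRQ handlers) can be sketched with a plain counter standing in for atomic_t: the counter is preset to the number of PPs working on the frame, each PP interrupt decrements it, and only the final decrement reports the whole task as done. The names below are hypothetical; this is a single-threaded illustration, not the driver's atomic code.

```c
#include <assert.h>

/* Toy model of the pipe task countdown: one decrement per finished PP,
 * the last one signals the scheduler. */
struct fake_pipe {
	int task;		/* stands in for atomic_t pipe->task */
	int done_signalled;
};

static int dec_and_test(int *v)
{
	return --(*v) == 0;	/* models atomic_dec_and_test() semantics */
}

static void fake_pp_irq(struct fake_pipe *pipe)
{
	if (dec_and_test(&pipe->task))
		pipe->done_signalled = 1;	/* lima_sched_pipe_task_done() */
}
```

In the real driver the decrement must be atomic because several PP IRQ handlers can race on different CPUs; the sketch only shows the counting logic.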
From: Lima Project Developers <dri-devel@lists.freedesktop.org>
Signed-off-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Marek Vasut <marex@denx.de>
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
---
 drivers/gpu/drm/lima/lima_mmu.c | 154 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/lima/lima_mmu.h |  34 +++++++
 2 files changed, 188 insertions(+)
 create mode 100644 drivers/gpu/drm/lima/lima_mmu.c
 create mode 100644 drivers/gpu/drm/lima/lima_mmu.h
diff --git a/drivers/gpu/drm/lima/lima_mmu.c b/drivers/gpu/drm/lima/lima_mmu.c
new file mode 100644
index 000000000000..22ac4db07849
--- /dev/null
+++ b/drivers/gpu/drm/lima/lima_mmu.c
@@ -0,0 +1,154 @@
+/*
+ * Copyright (C) 2017-2018 Lima Project
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/device.h>
+
+#include "lima_device.h"
+#include "lima_mmu.h"
+#include "lima_vm.h"
+#include "lima_object.h"
+#include "lima_regs.h"
+
+#define mmu_write(reg, data) writel(data, ip->iomem + LIMA_MMU_##reg)
+#define mmu_read(reg) readl(ip->iomem + LIMA_MMU_##reg)
+
+#define lima_mmu_send_command(command, condition) \
+({ \
+	int __timeout, __ret = 0; \
+ \
+	mmu_write(COMMAND, command); \
+	for (__timeout = 1000; __timeout > 0; __timeout--) { \
+		if (condition) \
+			break; \
+	} \
+	if (!__timeout) { \
+		dev_err(dev->dev, "mmu command %x timeout\n", command); \
+		__ret = -ETIMEDOUT; \
+	} \
+	__ret; \
+})
+
+static irqreturn_t lima_mmu_irq_handler(int irq, void *data)
+{
+	struct lima_ip *ip = data;
+	struct lima_device *dev = ip->dev;
+	u32 status = mmu_read(INT_STATUS);
+	struct lima_sched_pipe *pipe;
+
+	/* for shared irq case */
+	if (!status)
+		return IRQ_NONE;
+
+	if (status & LIMA_MMU_INT_PAGE_FAULT) {
+		u32 fault = mmu_read(PAGE_FAULT_ADDR);
+
+		dev_err(dev->dev, "mmu page fault at 0x%x from bus id %d of type %s on %s\n",
+			fault, LIMA_MMU_STATUS_BUS_ID(status),
+			status & LIMA_MMU_STATUS_PAGE_FAULT_IS_WRITE ? "write" : "read",
+			lima_ip_name(ip));
+	}
+
+	if (status & LIMA_MMU_INT_READ_BUS_ERROR)
+		dev_err(dev->dev, "mmu %s irq bus error\n", lima_ip_name(ip));
+
+	/* mask all interrupts before resume */
+	mmu_write(INT_MASK, 0);
+	mmu_write(INT_CLEAR, status);
+
+	pipe = dev->pipe + (ip->id == lima_ip_gpmmu ? lima_pipe_gp : lima_pipe_pp);
+	lima_sched_pipe_mmu_error(pipe);
+
+	return IRQ_HANDLED;
+}
+
+int lima_mmu_init(struct lima_ip *ip)
+{
+	struct lima_device *dev = ip->dev;
+	int err;
+
+	if (ip->id == lima_ip_ppmmu_bcast)
+		return 0;
+
+	mmu_write(DTE_ADDR, 0xCAFEBABE);
+	if (mmu_read(DTE_ADDR) != 0xCAFEB000) {
+		dev_err(dev->dev, "mmu %s dte write test fail\n", lima_ip_name(ip));
+		return -EIO;
+	}
+
+	err = lima_mmu_send_command(LIMA_MMU_COMMAND_HARD_RESET, mmu_read(DTE_ADDR) == 0);
+	if (err)
+		return err;
+
+	err = devm_request_irq(dev->dev, ip->irq, lima_mmu_irq_handler,
+			       IRQF_SHARED, lima_ip_name(ip), ip);
+	if (err) {
+		dev_err(dev->dev, "mmu %s fail to request irq\n", lima_ip_name(ip));
+		return err;
+	}
+
+	mmu_write(INT_MASK, LIMA_MMU_INT_PAGE_FAULT | LIMA_MMU_INT_READ_BUS_ERROR);
+	mmu_write(DTE_ADDR, *lima_bo_get_pages(dev->empty_vm->pd));
+	return lima_mmu_send_command(LIMA_MMU_COMMAND_ENABLE_PAGING,
+				     mmu_read(STATUS) & LIMA_MMU_STATUS_PAGING_ENABLED);
+}
+
+void lima_mmu_fini(struct lima_ip *ip)
+{
+
+}
+
+void lima_mmu_switch_vm(struct lima_ip *ip, struct lima_vm *vm)
+{
+	struct lima_device *dev = ip->dev;
+
+	lima_mmu_send_command(LIMA_MMU_COMMAND_ENABLE_STALL,
+			      mmu_read(STATUS) & LIMA_MMU_STATUS_STALL_ACTIVE);
+
+	if (vm)
+		mmu_write(DTE_ADDR, *lima_bo_get_pages(vm->pd));
+
+	/* flush the TLB */
+	mmu_write(COMMAND, LIMA_MMU_COMMAND_ZAP_CACHE);
+
+	lima_mmu_send_command(LIMA_MMU_COMMAND_DISABLE_STALL,
+			      !(mmu_read(STATUS) & LIMA_MMU_STATUS_STALL_ACTIVE));
+}
+
+void lima_mmu_page_fault_resume(struct lima_ip *ip)
+{
+	struct lima_device *dev = ip->dev;
+	u32 status = mmu_read(STATUS);
+
+	if (status & LIMA_MMU_STATUS_PAGE_FAULT_ACTIVE) {
+		dev_info(dev->dev, "mmu resume\n");
+
+		mmu_write(INT_MASK, 0);
+		mmu_write(DTE_ADDR, 0xCAFEBABE);
+		lima_mmu_send_command(LIMA_MMU_COMMAND_HARD_RESET, mmu_read(DTE_ADDR) == 0);
+		mmu_write(INT_MASK, LIMA_MMU_INT_PAGE_FAULT | LIMA_MMU_INT_READ_BUS_ERROR);
+		mmu_write(DTE_ADDR, *lima_bo_get_pages(dev->empty_vm->pd));
+		lima_mmu_send_command(LIMA_MMU_COMMAND_ENABLE_PAGING,
+				      mmu_read(STATUS) & LIMA_MMU_STATUS_PAGING_ENABLED);
+	}
+}
diff --git a/drivers/gpu/drm/lima/lima_mmu.h b/drivers/gpu/drm/lima/lima_mmu.h
new file mode 100644
index 000000000000..9930521ddfa1
--- /dev/null
+++ b/drivers/gpu/drm/lima/lima_mmu.h
@@ -0,0 +1,34 @@
+/*
+ * Copyright (C) 2017-2018 Lima Project
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#ifndef __LIMA_MMU_H__
+#define __LIMA_MMU_H__
+
+struct lima_ip;
+struct lima_vm;
+
+int lima_mmu_init(struct lima_ip *ip);
+void lima_mmu_fini(struct lima_ip *ip);
+
+void lima_mmu_switch_vm(struct lima_ip *ip, struct lima_vm *vm);
+void lima_mmu_page_fault_resume(struct lima_ip *ip);
+
+#endif
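The lima_mmu_send_command() macro in this patch is a write-then-poll pattern with a bounded retry count. A minimal userspace model, with a plain variable standing in for the command register and hypothetical names throughout, looks like this:

```c
#include <assert.h>
#include <errno.h>

/* Model of lima_mmu_send_command(): write the command, then busy-poll a
 * completion condition up to 1000 times, returning -ETIMEDOUT if the
 * condition never becomes true. Here the "condition" is simply that the
 * fake register holds an expected value. */
static int send_command_poll(unsigned *reg, unsigned cmd, unsigned done_value)
{
	int timeout;

	*reg = cmd;			/* mmu_write(COMMAND, cmd) */
	for (timeout = 1000; timeout > 0; timeout--) {
		if (*reg == done_value)	/* the macro's (condition) check */
			return 0;
	}
	return -ETIMEDOUT;		/* dev_err() + -ETIMEDOUT in the driver */
}

static unsigned fake_command_reg;

static int demo_poll(unsigned cmd, unsigned done_value)
{
	return send_command_poll(&fake_command_reg, cmd, done_value);
}
```

The real macro's retry loop contains no delay; on hardware the condition re-reads a device register, so a bounded spin is a crude but simple timeout.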
BCAST is a hardware module that broadcasts register reads/writes to all PPs. It can also merge IRQs from different PPs into one IRQ.
Signed-off-by: Qiang Yu <yuq825@gmail.com>
---
 drivers/gpu/drm/lima/lima_bcast.c | 65 +++++++++++++++++++++++++++++++
 drivers/gpu/drm/lima/lima_bcast.h | 34 ++++++++++++++++
 2 files changed, 99 insertions(+)
 create mode 100644 drivers/gpu/drm/lima/lima_bcast.c
 create mode 100644 drivers/gpu/drm/lima/lima_bcast.h
diff --git a/drivers/gpu/drm/lima/lima_bcast.c b/drivers/gpu/drm/lima/lima_bcast.c
new file mode 100644
index 000000000000..32012a61ea6a
--- /dev/null
+++ b/drivers/gpu/drm/lima/lima_bcast.c
@@ -0,0 +1,65 @@
+/*
+ * Copyright (C) 2017-2018 Lima Project
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include <linux/io.h>
+#include <linux/device.h>
+
+#include "lima_device.h"
+#include "lima_bcast.h"
+#include "lima_regs.h"
+
+#define bcast_write(reg, data) writel(data, ip->iomem + LIMA_BCAST_##reg)
+#define bcast_read(reg) readl(ip->iomem + LIMA_BCAST_##reg)
+
+void lima_bcast_enable(struct lima_device *dev)
+{
+	struct lima_sched_pipe *pipe = dev->pipe + lima_pipe_pp;
+	struct lima_ip *ip = dev->ip + lima_ip_bcast;
+	int i, mask = 0;
+
+	for (i = 0; i < pipe->num_processor; i++) {
+		struct lima_ip *pp = pipe->processor[i];
+
+		mask |= 1 << (pp->id - lima_ip_pp0);
+	}
+
+	bcast_write(BROADCAST_MASK, (mask << 16) | mask);
+	bcast_write(INTERRUPT_MASK, mask);
+}
+
+void lima_bcast_disable(struct lima_device *dev)
+{
+	struct lima_ip *ip = dev->ip + lima_ip_bcast;
+
+	bcast_write(BROADCAST_MASK, 0);
+	bcast_write(INTERRUPT_MASK, 0);
+}
+
+int lima_bcast_init(struct lima_ip *ip)
+{
+	return 0;
+}
+
+void lima_bcast_fini(struct lima_ip *ip)
+{
+
+}
diff --git a/drivers/gpu/drm/lima/lima_bcast.h b/drivers/gpu/drm/lima/lima_bcast.h
new file mode 100644
index 000000000000..abafd4f613c7
--- /dev/null
+++ b/drivers/gpu/drm/lima/lima_bcast.h
@@ -0,0 +1,34 @@
+/*
+ * Copyright (C) 2017-2018 Lima Project
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef __LIMA_BCAST_H__
+#define __LIMA_BCAST_H__
+
+struct lima_ip;
+struct lima_device;
+
+int lima_bcast_init(struct lima_ip *ip);
+void lima_bcast_fini(struct lima_ip *ip);
+
+void lima_bcast_enable(struct lima_device *dev);
+void lima_bcast_disable(struct lima_device *dev);
+
+#endif
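The mask arithmetic in lima_bcast_enable() can be checked in isolation: one bit per enabled PP, positioned relative to the first PP's IP id, then mirrored into the upper 16 bits for the BROADCAST_MASK register. This is a hedged stand-alone restatement; the helper name and the plain-int id array are illustrative, not driver types.

```c
#include <assert.h>

/* Compute the value written to the broadcast-mask register: low half is
 * the per-PP enable mask, high half duplicates it (register layout per
 * lima_bcast_enable()). pp0_id stands in for lima_ip_pp0. */
static unsigned bcast_broadcast_mask(const int *pp_ids, int n, int pp0_id)
{
	unsigned mask = 0;
	int i;

	for (i = 0; i < n; i++)
		mask |= 1u << (pp_ids[i] - pp0_id);

	return (mask << 16) | mask;
}
```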
DLBU is used to balance load among PPs.
Signed-off-by: Qiang Yu <yuq825@gmail.com>
---
 drivers/gpu/drm/lima/lima_dlbu.c | 75 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/lima/lima_dlbu.h | 37 ++++++++++++++++
 2 files changed, 112 insertions(+)
 create mode 100644 drivers/gpu/drm/lima/lima_dlbu.c
 create mode 100644 drivers/gpu/drm/lima/lima_dlbu.h
diff --git a/drivers/gpu/drm/lima/lima_dlbu.c b/drivers/gpu/drm/lima/lima_dlbu.c
new file mode 100644
index 000000000000..5281dd3c0417
--- /dev/null
+++ b/drivers/gpu/drm/lima/lima_dlbu.c
@@ -0,0 +1,75 @@
+/*
+ * Copyright (C) 2017-2018 Lima Project
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include <linux/io.h>
+#include <linux/device.h>
+
+#include "lima_device.h"
+#include "lima_dlbu.h"
+#include "lima_vm.h"
+#include "lima_regs.h"
+
+#define dlbu_write(reg, data) writel(data, ip->iomem + LIMA_DLBU_##reg)
+#define dlbu_read(reg) readl(ip->iomem + LIMA_DLBU_##reg)
+
+void lima_dlbu_enable(struct lima_device *dev)
+{
+	struct lima_sched_pipe *pipe = dev->pipe + lima_pipe_pp;
+	struct lima_ip *ip = dev->ip + lima_ip_dlbu;
+	int i, mask = 0;
+
+	for (i = 0; i < pipe->num_processor; i++) {
+		struct lima_ip *pp = pipe->processor[i];
+
+		mask |= 1 << (pp->id - lima_ip_pp0);
+	}
+
+	dlbu_write(PP_ENABLE_MASK, mask);
+}
+
+void lima_dlbu_disable(struct lima_device *dev)
+{
+	struct lima_ip *ip = dev->ip + lima_ip_dlbu;
+
+	dlbu_write(PP_ENABLE_MASK, 0);
+}
+
+void lima_dlbu_set_reg(struct lima_ip *ip, u32 *reg)
+{
+	dlbu_write(TLLIST_VBASEADDR, reg[0]);
+	dlbu_write(FB_DIM, reg[1]);
+	dlbu_write(TLLIST_CONF, reg[2]);
+	dlbu_write(START_TILE_POS, reg[3]);
+}
+
+int lima_dlbu_init(struct lima_ip *ip)
+{
+	struct lima_device *dev = ip->dev;
+
+	dlbu_write(MASTER_TLLIST_PHYS_ADDR, dev->dlbu_dma | 1);
+	dlbu_write(MASTER_TLLIST_VADDR, LIMA_VA_RESERVE_DLBU);
+
+	return 0;
+}
+
+void lima_dlbu_fini(struct lima_ip *ip)
+{
+
+}
diff --git a/drivers/gpu/drm/lima/lima_dlbu.h b/drivers/gpu/drm/lima/lima_dlbu.h
new file mode 100644
index 000000000000..4521a5dda9e0
--- /dev/null
+++ b/drivers/gpu/drm/lima/lima_dlbu.h
@@ -0,0 +1,37 @@
+/*
+ * Copyright (C) 2017-2018 Lima Project
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef __LIMA_DLBU_H__
+#define __LIMA_DLBU_H__
+
+struct lima_ip;
+struct lima_device;
+
+void lima_dlbu_enable(struct lima_device *dev);
+void lima_dlbu_disable(struct lima_device *dev);
+
+void lima_dlbu_set_reg(struct lima_ip *ip, u32 *reg);
+
+int lima_dlbu_init(struct lima_ip *ip);
+void lima_dlbu_fini(struct lima_ip *ip);
+
+#endif
From: Lima Project Developers <dri-devel@lists.freedesktop.org>
Signed-off-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
---
 drivers/gpu/drm/lima/lima_vm.c | 312 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/lima/lima_vm.h |  73 ++++++++
 2 files changed, 385 insertions(+)
 create mode 100644 drivers/gpu/drm/lima/lima_vm.c
 create mode 100644 drivers/gpu/drm/lima/lima_vm.h
diff --git a/drivers/gpu/drm/lima/lima_vm.c b/drivers/gpu/drm/lima/lima_vm.c new file mode 100644 index 000000000000..00a3f6b59a33 --- /dev/null +++ b/drivers/gpu/drm/lima/lima_vm.c @@ -0,0 +1,312 @@ +/* + * Copyright (C) 2017-2018 Lima Project + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ + +#include <linux/slab.h> +#include <linux/dma-mapping.h> +#include <linux/interval_tree_generic.h> + +#include "lima_device.h" +#include "lima_vm.h" +#include "lima_object.h" +#include "lima_regs.h" + +struct lima_bo_va_mapping { + struct list_head list; + struct rb_node rb; + uint32_t start; + uint32_t last; + uint32_t __subtree_last; +}; + +struct lima_bo_va { + struct list_head list; + unsigned ref_count; + + struct list_head mapping; + + struct lima_vm *vm; +}; + +#define LIMA_PDE(va) (va >> 22) +#define LIMA_PTE(va) ((va & 0x3FFFFF) >> 12) + +#define START(node) ((node)->start) +#define LAST(node) ((node)->last) + +INTERVAL_TREE_DEFINE(struct lima_bo_va_mapping, rb, uint32_t, __subtree_last, + START, LAST, static, lima_vm_it) + +#undef START +#undef LAST + +static void lima_vm_unmap_page_table(struct lima_vm *vm, u32 start, u32 end) +{ + u32 addr; + + for (addr = start; addr <= end; addr += LIMA_PAGE_SIZE) { + u32 pde = LIMA_PDE(addr); + u32 pte = LIMA_PTE(addr); + u32 *pt; + + pt = lima_bo_kmap(vm->pt[pde]); + pt[pte] = 0; + } +} + +static int lima_vm_map_page_table(struct lima_vm *vm, dma_addr_t *dma, + u32 start, u32 end) +{ + u64 addr; + int err, i = 0; + + for (addr = start; addr <= end; addr += LIMA_PAGE_SIZE) { + u32 pde = LIMA_PDE(addr); + u32 pte = LIMA_PTE(addr); + u32 *pd, *pt; + + if (vm->pt[pde]) + pt = lima_bo_kmap(vm->pt[pde]); + else { + vm->pt[pde] = lima_bo_create( + vm->dev, LIMA_PAGE_SIZE, 0, ttm_bo_type_kernel, + NULL, vm->pd->tbo.resv); + if (IS_ERR(vm->pt[pde])) { + err = PTR_ERR(vm->pt[pde]); + goto err_out; + } + + pt = lima_bo_kmap(vm->pt[pde]); + if (IS_ERR(pt)) { + err = PTR_ERR(pt); + goto err_out; + } + + pd = lima_bo_kmap(vm->pd); + pd[pde] = *lima_bo_get_pages(vm->pt[pde]) | LIMA_VM_FLAG_PRESENT; + } + + pt[pte] = dma[i++] | LIMA_VM_FLAGS_CACHE; + } + + return 0; + +err_out: + if (addr != start) + lima_vm_unmap_page_table(vm, start, addr - 1); + return err; +} + +static struct lima_bo_va * +lima_vm_bo_find(struct lima_vm 
*vm, struct lima_bo *bo) +{ + struct lima_bo_va *bo_va, *ret = NULL; + + list_for_each_entry(bo_va, &bo->va, list) { + if (bo_va->vm == vm) { + ret = bo_va; + break; + } + } + + return ret; +} + +int lima_vm_bo_map(struct lima_vm *vm, struct lima_bo *bo, u32 start) +{ + int err; + struct lima_bo_va_mapping *it, *mapping; + u32 end = start + bo->gem.size - 1; + dma_addr_t *pages_dma = lima_bo_get_pages(bo); + struct lima_bo_va *bo_va; + + it = lima_vm_it_iter_first(&vm->va, start, end); + if (it) { + dev_dbg(bo->gem.dev->dev, "lima vm map va overlap %x-%x %x-%x\n", + start, end, it->start, it->last); + return -EINVAL; + } + + mapping = kmalloc(sizeof(*mapping), GFP_KERNEL); + if (!mapping) + return -ENOMEM; + mapping->start = start; + mapping->last = end; + + err = lima_vm_map_page_table(vm, pages_dma, start, end); + if (err) { + kfree(mapping); + return err; + } + + lima_vm_it_insert(mapping, &vm->va); + + bo_va = lima_vm_bo_find(vm, bo); + list_add_tail(&mapping->list, &bo_va->mapping); + + return 0; +} + +static void lima_vm_unmap(struct lima_vm *vm, + struct lima_bo_va_mapping *mapping) +{ + lima_vm_it_remove(mapping, &vm->va); + + lima_vm_unmap_page_table(vm, mapping->start, mapping->last); + + list_del(&mapping->list); + kfree(mapping); +} + +int lima_vm_bo_unmap(struct lima_vm *vm, struct lima_bo *bo, u32 start) +{ + struct lima_bo_va *bo_va; + struct lima_bo_va_mapping *mapping; + + bo_va = lima_vm_bo_find(vm, bo); + list_for_each_entry(mapping, &bo_va->mapping, list) { + if (mapping->start == start) { + lima_vm_unmap(vm, mapping); + break; + } + } + + return 0; +} + +int lima_vm_bo_add(struct lima_vm *vm, struct lima_bo *bo) +{ + struct lima_bo_va *bo_va; + + bo_va = lima_vm_bo_find(vm, bo); + if (bo_va) { + bo_va->ref_count++; + return 0; + } + + bo_va = kmalloc(sizeof(*bo_va), GFP_KERNEL); + if (!bo_va) + return -ENOMEM; + + bo_va->vm = vm; + bo_va->ref_count = 1; + INIT_LIST_HEAD(&bo_va->mapping); + list_add_tail(&bo_va->list, &bo->va); + return 0; +} + 
+int lima_vm_bo_del(struct lima_vm *vm, struct lima_bo *bo) +{ + struct lima_bo_va *bo_va; + struct lima_bo_va_mapping *mapping, *tmp; + + bo_va = lima_vm_bo_find(vm, bo); + if (--bo_va->ref_count > 0) + return 0; + + list_for_each_entry_safe(mapping, tmp, &bo_va->mapping, list) { + lima_vm_unmap(vm, mapping); + } + list_del(&bo_va->list); + kfree(bo_va); + return 0; +} + +struct lima_vm *lima_vm_create(struct lima_device *dev) +{ + struct lima_vm *vm; + void *pd; + + vm = kzalloc(sizeof(*vm), GFP_KERNEL); + if (!vm) + return NULL; + + vm->dev = dev; + vm->va = RB_ROOT_CACHED; + kref_init(&vm->refcount); + + vm->pd = lima_bo_create(dev, LIMA_PAGE_SIZE, 0, + ttm_bo_type_kernel, NULL, NULL); + if (IS_ERR(vm->pd)) + goto err_out0; + + pd = lima_bo_kmap(vm->pd); + if (IS_ERR(pd)) + goto err_out1; + + if (dev->dlbu_cpu) { + int err = lima_vm_map_page_table( + vm, &dev->dlbu_dma, LIMA_VA_RESERVE_DLBU, + LIMA_VA_RESERVE_DLBU + LIMA_PAGE_SIZE - 1); + if (err) + goto err_out1; + } + + return vm; + +err_out1: + lima_bo_unref(vm->pd); +err_out0: + kfree(vm); + return NULL; +} + +void lima_vm_release(struct kref *kref) +{ + struct lima_vm *vm = container_of(kref, struct lima_vm, refcount); + struct lima_device *dev = vm->dev; + int i; + + if (!RB_EMPTY_ROOT(&vm->va.rb_root)) { + dev_err(dev->dev, "still active bo inside vm\n"); + } + + for (i = 0; i < LIMA_PAGE_ENT_NUM; i++) { + if (vm->pt[i]) + lima_bo_unref(vm->pt[i]); + } + + if (vm->pd) + lima_bo_unref(vm->pd); + + kfree(vm); +} + +void lima_vm_print(struct lima_vm *vm) +{ + int i, j; + u32 *pd = lima_bo_kmap(vm->pd); + + /* to avoid the defined by not used warning */ + (void)&lima_vm_it_iter_next; + + for (i = 0; i < LIMA_PAGE_ENT_NUM; i++) { + if (pd[i]) { + u32 *pt = lima_bo_kmap(vm->pt[i]); + + printk(KERN_INFO "lima vm pd %03x:%08x\n", i, pd[i]); + for (j = 0; j < LIMA_PAGE_ENT_NUM; j++) { + if (pt[j]) + printk(KERN_INFO " pt %03x:%08x\n", j, pt[j]); + } + } + } +} diff --git a/drivers/gpu/drm/lima/lima_vm.h 
b/drivers/gpu/drm/lima/lima_vm.h new file mode 100644 index 000000000000..20506459def0 --- /dev/null +++ b/drivers/gpu/drm/lima/lima_vm.h @@ -0,0 +1,73 @@ +/* + * Copyright (C) 2017-2018 Lima Project + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ +#ifndef __LIMA_VM_H__ +#define __LIMA_VM_H__ + +#include <linux/rbtree.h> +#include <linux/kref.h> + +#define LIMA_PAGE_SIZE 4096 +#define LIMA_PAGE_MASK (LIMA_PAGE_SIZE - 1) +#define LIMA_PAGE_ENT_NUM (LIMA_PAGE_SIZE / sizeof(u32)) + +#define LIMA_VA_RESERVE_START 0xFFF00000 +#define LIMA_VA_RESERVE_DLBU LIMA_VA_RESERVE_START +#define LIMA_VA_RESERVE_END 0x100000000 + +struct lima_bo; +struct lima_device; + +struct lima_vm { + struct kref refcount; + + /* tree of virtual addresses mapped */ + struct rb_root_cached va; + + struct lima_device *dev; + + struct lima_bo *pd; + struct lima_bo *pt[LIMA_PAGE_ENT_NUM]; +}; + +int lima_vm_bo_map(struct lima_vm *vm, struct lima_bo *bo, u32 start); +int lima_vm_bo_unmap(struct lima_vm *vm, struct lima_bo *bo, u32 start); + +int lima_vm_bo_add(struct lima_vm *vm, struct lima_bo *bo); +int lima_vm_bo_del(struct lima_vm *vm, struct lima_bo *bo); + +struct lima_vm *lima_vm_create(struct lima_device *dev); +void lima_vm_release(struct kref *kref); + +static inline struct lima_vm *lima_vm_get(struct lima_vm *vm) +{ + kref_get(&vm->refcount); + return vm; +} + +static inline void lima_vm_put(struct lima_vm *vm) +{ + kref_put(&vm->refcount, lima_vm_release); +} + +void lima_vm_print(struct lima_vm *vm); + +#endif
Signed-off-by: Qiang Yu <yuq825@gmail.com>
---
 drivers/gpu/drm/lima/lima_ttm.c | 409 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/lima/lima_ttm.h |  44 ++++
 2 files changed, 453 insertions(+)
 create mode 100644 drivers/gpu/drm/lima/lima_ttm.c
 create mode 100644 drivers/gpu/drm/lima/lima_ttm.h
diff --git a/drivers/gpu/drm/lima/lima_ttm.c b/drivers/gpu/drm/lima/lima_ttm.c new file mode 100644 index 000000000000..5325f3f48ae7 --- /dev/null +++ b/drivers/gpu/drm/lima/lima_ttm.c @@ -0,0 +1,409 @@ +/* + * Copyright (C) 2017-2018 Lima Project + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ + +#include <linux/mm.h> +#include <drm/ttm/ttm_page_alloc.h> + +#include "lima_drv.h" +#include "lima_device.h" +#include "lima_object.h" + + +static int lima_ttm_mem_global_init(struct drm_global_reference *ref) +{ + return ttm_mem_global_init(ref->object); +} + +static void lima_ttm_mem_global_release(struct drm_global_reference *ref) +{ + ttm_mem_global_release(ref->object); +} + +static int lima_ttm_global_init(struct lima_device *dev) +{ + struct drm_global_reference *global_ref; + int err; + + dev->mman.mem_global_referenced = false; + global_ref = &dev->mman.mem_global_ref; + global_ref->global_type = DRM_GLOBAL_TTM_MEM; + global_ref->size = sizeof(struct ttm_mem_global); + global_ref->init = &lima_ttm_mem_global_init; + global_ref->release = &lima_ttm_mem_global_release; + + err = drm_global_item_ref(global_ref); + if (err != 0) { + dev_err(dev->dev, "Failed setting up TTM memory accounting " + "subsystem.\n"); + return err; + } + + dev->mman.bo_global_ref.mem_glob = + dev->mman.mem_global_ref.object; + global_ref = &dev->mman.bo_global_ref.ref; + global_ref->global_type = DRM_GLOBAL_TTM_BO; + global_ref->size = sizeof(struct ttm_bo_global); + global_ref->init = &ttm_bo_global_init; + global_ref->release = &ttm_bo_global_release; + err = drm_global_item_ref(global_ref); + if (err != 0) { + dev_err(dev->dev, "Failed setting up TTM BO subsystem.\n"); + drm_global_item_unref(&dev->mman.mem_global_ref); + return err; + } + + dev->mman.mem_global_referenced = true; + return 0; +} + +static void lima_ttm_global_fini(struct lima_device *dev) +{ + if (dev->mman.mem_global_referenced) { + drm_global_item_unref(&dev->mman.bo_global_ref.ref); + drm_global_item_unref(&dev->mman.mem_global_ref); + dev->mman.mem_global_referenced = false; + } +} + +struct lima_tt_mgr { + spinlock_t lock; + unsigned long available; +}; + +static int lima_ttm_bo_man_init(struct ttm_mem_type_manager *man, + unsigned long p_size) +{ + struct lima_tt_mgr *mgr; + + mgr = 
kmalloc(sizeof(*mgr), GFP_KERNEL); + if (!mgr) + return -ENOMEM; + + spin_lock_init(&mgr->lock); + mgr->available = p_size; + man->priv = mgr; + return 0; +} + +static int lima_ttm_bo_man_takedown(struct ttm_mem_type_manager *man) +{ + struct lima_tt_mgr *mgr = man->priv; + + kfree(mgr); + man->priv = NULL; + return 0; +} + +static int lima_ttm_bo_man_get_node(struct ttm_mem_type_manager *man, + struct ttm_buffer_object *bo, + const struct ttm_place *place, + struct ttm_mem_reg *mem) +{ + struct lima_tt_mgr *mgr = man->priv; + + /* don't exceed the mem limit */ + spin_lock(&mgr->lock); + if (mgr->available < mem->num_pages) { + spin_unlock(&mgr->lock); + return 0; + } + mgr->available -= mem->num_pages; + spin_unlock(&mgr->lock); + + /* just fake a non-null pointer to tell caller success */ + mem->mm_node = (void *)1; + return 0; +} + +static void lima_ttm_bo_man_put_node(struct ttm_mem_type_manager *man, + struct ttm_mem_reg *mem) +{ + struct lima_tt_mgr *mgr = man->priv; + + spin_lock(&mgr->lock); + mgr->available += mem->num_pages; + spin_unlock(&mgr->lock); + + mem->mm_node = NULL; +} + +static void lima_ttm_bo_man_debug(struct ttm_mem_type_manager *man, + struct drm_printer *printer) +{ +} + +static const struct ttm_mem_type_manager_func lima_bo_manager_func = { + .init = lima_ttm_bo_man_init, + .takedown = lima_ttm_bo_man_takedown, + .get_node = lima_ttm_bo_man_get_node, + .put_node = lima_ttm_bo_man_put_node, + .debug = lima_ttm_bo_man_debug +}; + +static int lima_init_mem_type(struct ttm_bo_device *bdev, uint32_t type, + struct ttm_mem_type_manager *man) +{ + struct lima_device *dev = ttm_to_lima_dev(bdev); + + switch (type) { + case TTM_PL_SYSTEM: + /* System memory */ + man->flags = TTM_MEMTYPE_FLAG_MAPPABLE; + man->available_caching = TTM_PL_MASK_CACHING; + man->default_caching = TTM_PL_FLAG_CACHED; + break; + case TTM_PL_TT: + man->func = &lima_bo_manager_func; + man->flags = TTM_MEMTYPE_FLAG_MAPPABLE; + man->available_caching = TTM_PL_MASK_CACHING; + 
man->default_caching = TTM_PL_FLAG_CACHED; + break; + default: + dev_err(dev->dev, "Unsupported memory type %u\n", + (unsigned int)type); + return -EINVAL; + } + return 0; +} + +static int lima_ttm_backend_bind(struct ttm_tt *ttm, + struct ttm_mem_reg *bo_mem) +{ + return 0; +} + +static int lima_ttm_backend_unbind(struct ttm_tt *ttm) +{ + return 0; +} + +static void lima_ttm_backend_destroy(struct ttm_tt *ttm) +{ + struct lima_ttm_tt *tt = (void *)ttm; + + ttm_dma_tt_fini(&tt->ttm); + kfree(tt); +} + +static struct ttm_backend_func lima_ttm_backend_func = { + .bind = &lima_ttm_backend_bind, + .unbind = &lima_ttm_backend_unbind, + .destroy = &lima_ttm_backend_destroy, +}; + +static struct ttm_tt *lima_ttm_tt_create(struct ttm_buffer_object *bo, + uint32_t page_flags) +{ + struct lima_ttm_tt *tt; + + tt = kzalloc(sizeof(struct lima_ttm_tt), GFP_KERNEL); + if (tt == NULL) + return NULL; + + tt->ttm.ttm.func = &lima_ttm_backend_func; + + if (ttm_sg_tt_init(&tt->ttm, bo, page_flags)) { + kfree(tt); + return NULL; + } + + return &tt->ttm.ttm; +} + +static int lima_ttm_tt_populate(struct ttm_tt *ttm, + struct ttm_operation_ctx *ctx) +{ + struct lima_device *dev = ttm_to_lima_dev(ttm->bdev); + struct lima_ttm_tt *tt = (void *)ttm; + bool slave = !!(ttm->page_flags & TTM_PAGE_FLAG_SG); + + if (slave) { + drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages, + tt->ttm.dma_address, + ttm->num_pages); + ttm->state = tt_unbound; + return 0; + } + + return ttm_populate_and_map_pages(dev->dev, &tt->ttm, ctx); +} + +static void lima_ttm_tt_unpopulate(struct ttm_tt *ttm) +{ + struct lima_device *dev = ttm_to_lima_dev(ttm->bdev); + struct lima_ttm_tt *tt = (void *)ttm; + bool slave = !!(ttm->page_flags & TTM_PAGE_FLAG_SG); + + if (slave) + return; + + ttm_unmap_and_unpopulate_pages(dev->dev, &tt->ttm); +} + +static int lima_invalidate_caches(struct ttm_bo_device *bdev, + uint32_t flags) +{ + struct lima_device *dev = ttm_to_lima_dev(bdev); + + dev_err(dev->dev, "%s not 
implemented\n", __FUNCTION__); + return 0; +} + +static void lima_evict_flags(struct ttm_buffer_object *tbo, + struct ttm_placement *placement) +{ + struct lima_bo *bo = ttm_to_lima_bo(tbo); + struct lima_device *dev = to_lima_dev(bo->gem.dev); + + dev_err(dev->dev, "%s not implemented\n", __FUNCTION__); +} + +static int lima_verify_access(struct ttm_buffer_object *tbo, + struct file *filp) +{ + struct lima_bo *bo = ttm_to_lima_bo(tbo); + + return drm_vma_node_verify_access(&bo->gem.vma_node, + filp->private_data); +} + +static int lima_ttm_io_mem_reserve(struct ttm_bo_device *bdev, + struct ttm_mem_reg *mem) +{ + struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type]; + + mem->bus.addr = NULL; + mem->bus.offset = 0; + mem->bus.size = mem->num_pages << PAGE_SHIFT; + mem->bus.base = 0; + mem->bus.is_iomem = false; + + if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE)) + return -EINVAL; + + switch (mem->mem_type) { + case TTM_PL_SYSTEM: + case TTM_PL_TT: + return 0; + default: + return -EINVAL; + } + return 0; +} + +static void lima_ttm_io_mem_free(struct ttm_bo_device *bdev, + struct ttm_mem_reg *mem) +{ + +} + +static void lima_bo_move_notify(struct ttm_buffer_object *tbo, bool evict, + struct ttm_mem_reg *new_mem) +{ + struct lima_bo *bo = ttm_to_lima_bo(tbo); + struct lima_device *dev = to_lima_dev(bo->gem.dev); + + if (evict) + dev_err(dev->dev, "%s not implemented\n", __FUNCTION__); +} + +static void lima_bo_swap_notify(struct ttm_buffer_object *tbo) +{ + struct lima_bo *bo = ttm_to_lima_bo(tbo); + struct lima_device *dev = to_lima_dev(bo->gem.dev); + + dev_err(dev->dev, "%s not implemented\n", __FUNCTION__); +} + +static struct ttm_bo_driver lima_bo_driver = { + .ttm_tt_create = lima_ttm_tt_create, + .ttm_tt_populate = lima_ttm_tt_populate, + .ttm_tt_unpopulate = lima_ttm_tt_unpopulate, + .invalidate_caches = lima_invalidate_caches, + .init_mem_type = lima_init_mem_type, + .eviction_valuable = ttm_bo_eviction_valuable, + .evict_flags = lima_evict_flags, + 
.verify_access = lima_verify_access, + .io_mem_reserve = lima_ttm_io_mem_reserve, + .io_mem_free = lima_ttm_io_mem_free, + .move_notify = lima_bo_move_notify, + .swap_notify = lima_bo_swap_notify, +}; + +int lima_ttm_init(struct lima_device *dev) +{ + int err; + bool need_dma32; + u64 gtt_size; + + err = lima_ttm_global_init(dev); + if (err) + return err; + +#if defined(CONFIG_ARM) && !defined(CONFIG_ARM_LPAE) + need_dma32 = false; +#else + need_dma32 = true; +#endif + + err = ttm_bo_device_init(&dev->mman.bdev, + dev->mman.bo_global_ref.ref.object, + &lima_bo_driver, + dev->ddev->anon_inode->i_mapping, + DRM_FILE_PAGE_OFFSET, + need_dma32); + if (err) { + dev_err(dev->dev, "failed initializing buffer object " + "driver(%d).\n", err); + goto err_out0; + } + + if (lima_max_mem < 0) { + struct sysinfo si; + si_meminfo(&si); + /* TODO: better to have lower 32 mem size */ + gtt_size = min(((u64)si.totalram * si.mem_unit * 3/4), + 0x100000000ULL); + } + else + gtt_size = (u64)lima_max_mem << 20; + + err = ttm_bo_init_mm(&dev->mman.bdev, TTM_PL_TT, gtt_size >> PAGE_SHIFT); + if (err) { + dev_err(dev->dev, "Failed initializing GTT heap.\n"); + goto err_out1; + } + return 0; + +err_out1: + ttm_bo_device_release(&dev->mman.bdev); +err_out0: + lima_ttm_global_fini(dev); + return err; +} + +void lima_ttm_fini(struct lima_device *dev) +{ + ttm_bo_device_release(&dev->mman.bdev); + lima_ttm_global_fini(dev); + dev_info(dev->dev, "ttm finalized\n"); +} diff --git a/drivers/gpu/drm/lima/lima_ttm.h b/drivers/gpu/drm/lima/lima_ttm.h new file mode 100644 index 000000000000..1d36d06a47a3 --- /dev/null +++ b/drivers/gpu/drm/lima/lima_ttm.h @@ -0,0 +1,44 @@ +/* + * Copyright (C) 2017-2018 Lima Project + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, 
distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + */ +#ifndef __LIMA_TTM_H__ +#define __LIMA_TTM_H__ + +#include <drm/ttm/ttm_bo_driver.h> + +struct lima_mman { + struct ttm_bo_global_ref bo_global_ref; + struct drm_global_reference mem_global_ref; + struct ttm_bo_device bdev; + bool mem_global_referenced; +}; + +struct lima_ttm_tt { + struct ttm_dma_tt ttm; +}; + +struct lima_device; +struct lima_bo; + +int lima_ttm_init(struct lima_device *dev); +void lima_ttm_fini(struct lima_device *dev); + +#endif
Signed-off-by: Qiang Yu <yuq825@gmail.com>
---
 drivers/gpu/drm/lima/lima_object.c | 120 +++++++++++++++++++++++++++++
 drivers/gpu/drm/lima/lima_object.h |  87 +++++++++++++++++++++
 2 files changed, 207 insertions(+)
 create mode 100644 drivers/gpu/drm/lima/lima_object.c
 create mode 100644 drivers/gpu/drm/lima/lima_object.h
diff --git a/drivers/gpu/drm/lima/lima_object.c b/drivers/gpu/drm/lima/lima_object.c new file mode 100644 index 000000000000..5a22b235626b --- /dev/null +++ b/drivers/gpu/drm/lima/lima_object.c @@ -0,0 +1,120 @@ +/* + * Copyright (C) 2017-2018 Lima Project + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ + +#include <drm/drm_prime.h> + +#include "lima_object.h" + +static void lima_bo_init_placement(struct lima_bo *bo) +{ + struct ttm_placement *placement = &bo->placement; + struct ttm_place *place = &bo->place; + + place->fpfn = 0; + place->lpfn = 0; + place->flags = TTM_PL_FLAG_TT | TTM_PL_FLAG_WC; + + /* pin all bo for now */ + place->flags |= TTM_PL_FLAG_NO_EVICT; + + placement->num_placement = 1; + placement->placement = place; + + placement->num_busy_placement = 1; + placement->busy_placement = place; +} + +static void lima_bo_destroy(struct ttm_buffer_object *tbo) +{ + struct lima_bo *bo = ttm_to_lima_bo(tbo); + + if (bo->gem.import_attach) + drm_prime_gem_destroy(&bo->gem, bo->tbo.sg); + drm_gem_object_release(&bo->gem); + kfree(bo); +} + +struct lima_bo *lima_bo_create(struct lima_device *dev, u64 size, + u32 flags, enum ttm_bo_type type, + struct sg_table *sg, + struct reservation_object *resv) +{ + struct lima_bo *bo; + struct ttm_mem_type_manager *man; + size_t acc_size; + int err; + + size = PAGE_ALIGN(size); + man = dev->mman.bdev.man + TTM_PL_TT; + if (size >= (man->size << PAGE_SHIFT)) + return ERR_PTR(-ENOMEM); + + acc_size = ttm_bo_dma_acc_size(&dev->mman.bdev, size, + sizeof(struct lima_bo)); + + bo = kzalloc(sizeof(*bo), GFP_KERNEL); + if (!bo) + return ERR_PTR(-ENOMEM); + + drm_gem_private_object_init(dev->ddev, &bo->gem, size); + + INIT_LIST_HEAD(&bo->va); + + bo->tbo.bdev = &dev->mman.bdev; + + lima_bo_init_placement(bo); + + err = ttm_bo_init(&dev->mman.bdev, &bo->tbo, size, type, + &bo->placement, 0, type != ttm_bo_type_kernel, + acc_size, sg, resv, &lima_bo_destroy); + if (err) + goto err_out; + + return bo; + +err_out: + kfree(bo); + return ERR_PTR(err); +} + +dma_addr_t *lima_bo_get_pages(struct lima_bo *bo) +{ + struct lima_ttm_tt *ttm = (void *)bo->tbo.ttm; + return ttm->ttm.dma_address; +} + +void *lima_bo_kmap(struct lima_bo *bo) +{ + bool is_iomem; + void *ret; + int err; + + ret = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem); + 
if (ret) + return ret; + + err = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap); + if (err) + return ERR_PTR(err); + + return ttm_kmap_obj_virtual(&bo->kmap, &is_iomem); +} diff --git a/drivers/gpu/drm/lima/lima_object.h b/drivers/gpu/drm/lima/lima_object.h new file mode 100644 index 000000000000..2b8b8fcb9063 --- /dev/null +++ b/drivers/gpu/drm/lima/lima_object.h @@ -0,0 +1,87 @@ +/* + * Copyright (C) 2017-2018 Lima Project + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ +#ifndef __LIMA_OBJECT_H__ +#define __LIMA_OBJECT_H__ + +#include <drm/drm_gem.h> +#include <drm/ttm/ttm_placement.h> +#include <drm/ttm/ttm_bo_api.h> + +#include "lima_device.h" + +struct lima_bo { + struct drm_gem_object gem; + + struct ttm_place place; + struct ttm_placement placement; + struct ttm_buffer_object tbo; + struct ttm_bo_kmap_obj kmap; + + struct list_head va; +}; + +static inline struct lima_bo * +to_lima_bo(struct drm_gem_object *obj) +{ + return container_of(obj, struct lima_bo, gem); +} + +static inline struct lima_bo * +ttm_to_lima_bo(struct ttm_buffer_object *tbo) +{ + return container_of(tbo, struct lima_bo, tbo); +} + +static inline int lima_bo_reserve(struct lima_bo *bo, bool intr) +{ + struct lima_device *dev = ttm_to_lima_dev(bo->tbo.bdev); + int r; + + r = ttm_bo_reserve(&bo->tbo, intr, false, NULL); + if (unlikely(r != 0)) { + if (r != -ERESTARTSYS) + dev_err(dev->dev, "%p reserve failed\n", bo); + return r; + } + return 0; +} + +static inline void lima_bo_unreserve(struct lima_bo *bo) +{ + ttm_bo_unreserve(&bo->tbo); +} + +struct lima_bo *lima_bo_create(struct lima_device *dev, u64 size, + u32 flags, enum ttm_bo_type type, + struct sg_table *sg, + struct reservation_object *resv); + +static inline void lima_bo_unref(struct lima_bo *bo) +{ + struct ttm_buffer_object *tbo = &bo->tbo; + ttm_bo_unref(&tbo); +} + +dma_addr_t *lima_bo_get_pages(struct lima_bo *bo); +void *lima_bo_kmap(struct lima_bo *bo); + +#endif
From: Lima Project Developers <dri-devel@lists.freedesktop.org>
Signed-off-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
---
 drivers/gpu/drm/lima/lima_gem.c | 459 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/lima/lima_gem.h |  41 +++
 2 files changed, 500 insertions(+)
 create mode 100644 drivers/gpu/drm/lima/lima_gem.c
 create mode 100644 drivers/gpu/drm/lima/lima_gem.h
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c new file mode 100644 index 000000000000..1ad3f38ddfde --- /dev/null +++ b/drivers/gpu/drm/lima/lima_gem.c @@ -0,0 +1,459 @@ +/* + * Copyright (C) 2017-2018 Lima Project + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ + +#include <drm/drmP.h> +#include <linux/dma-mapping.h> +#include <linux/pagemap.h> +#include <linux/sync_file.h> + +#include <drm/lima_drm.h> + +#include "lima_drv.h" +#include "lima_gem.h" +#include "lima_vm.h" +#include "lima_object.h" + +int lima_gem_create_handle(struct drm_device *dev, struct drm_file *file, + u32 size, u32 flags, u32 *handle) +{ + int err; + struct lima_bo *bo; + struct lima_device *ldev = to_lima_dev(dev); + + bo = lima_bo_create(ldev, size, flags, ttm_bo_type_device, NULL, NULL); + if (IS_ERR(bo)) + return PTR_ERR(bo); + + err = drm_gem_handle_create(file, &bo->gem, handle); + + /* drop reference from allocate - handle holds it now */ + drm_gem_object_put_unlocked(&bo->gem); + + return err; +} + +void lima_gem_free_object(struct drm_gem_object *obj) +{ + struct lima_bo *bo = to_lima_bo(obj); + + if (!list_empty(&bo->va)) + dev_err(obj->dev->dev, "lima gem free bo still has va\n"); + + lima_bo_unref(bo); +} + +int lima_gem_object_open(struct drm_gem_object *obj, struct drm_file *file) +{ + struct lima_bo *bo = to_lima_bo(obj); + struct lima_drm_priv *priv = to_lima_drm_priv(file); + struct lima_vm *vm = priv->vm; + int err; + + err = lima_bo_reserve(bo, true); + if (err) + return err; + + err = lima_vm_bo_add(vm, bo); + + lima_bo_unreserve(bo); + return err; +} + +void lima_gem_object_close(struct drm_gem_object *obj, struct drm_file *file) +{ + struct lima_bo *bo = to_lima_bo(obj); + struct lima_device *dev = to_lima_dev(obj->dev); + struct lima_drm_priv *priv = to_lima_drm_priv(file); + struct lima_vm *vm = priv->vm; + + LIST_HEAD(list); + struct ttm_validate_buffer tv_bo, tv_pd; + struct ww_acquire_ctx ticket; + int r; + + tv_bo.bo = &bo->tbo; + tv_bo.shared = true; + list_add(&tv_bo.head, &list); + + tv_pd.bo = &vm->pd->tbo; + tv_pd.shared = true; + list_add(&tv_pd.head, &list); + + r = ttm_eu_reserve_buffers(&ticket, &list, false, NULL); + if (r) { + dev_err(dev->dev, "leeking bo va because we " + "fail to reserve bo (%d)\n", r); 
+ return; + } + + lima_vm_bo_del(vm, bo); + + ttm_eu_backoff_reservation(&ticket, &list); +} + +int lima_gem_mmap_offset(struct drm_file *file, u32 handle, u64 *offset) +{ + struct drm_gem_object *obj; + struct lima_bo *bo; + + obj = drm_gem_object_lookup(file, handle); + if (!obj) + return -ENOENT; + + bo = to_lima_bo(obj); + *offset = drm_vma_node_offset_addr(&bo->tbo.vma_node); + + drm_gem_object_put_unlocked(obj); + return 0; +} + +int lima_gem_mmap(struct file *filp, struct vm_area_struct *vma) +{ + struct drm_file *file_priv; + struct lima_device *dev; + + if (unlikely(vma->vm_pgoff < DRM_FILE_PAGE_OFFSET)) + return -EINVAL; + + file_priv = filp->private_data; + dev = file_priv->minor->dev->dev_private; + if (dev == NULL) + return -EINVAL; + + return ttm_bo_mmap(filp, vma, &dev->mman.bdev); +} + +int lima_gem_va_map(struct drm_file *file, u32 handle, u32 flags, u32 va) +{ + struct lima_drm_priv *priv = to_lima_drm_priv(file); + struct lima_vm *vm = priv->vm; + struct drm_gem_object *obj; + struct lima_bo *bo; + struct lima_device *dev; + int err; + + LIST_HEAD(list); + struct ttm_validate_buffer tv_bo, tv_pd; + struct ww_acquire_ctx ticket; + + if (!PAGE_ALIGNED(va)) + return -EINVAL; + + obj = drm_gem_object_lookup(file, handle); + if (!obj) + return -ENOENT; + + bo = to_lima_bo(obj); + dev = to_lima_dev(obj->dev); + + /* carefully handle overflow when calculating the range */ + if (va < dev->va_start || dev->va_end - obj->size < va) { + err = -EINVAL; + goto out; + } + + tv_bo.bo = &bo->tbo; + tv_bo.shared = true; + list_add(&tv_bo.head, &list); + + tv_pd.bo = &vm->pd->tbo; + tv_pd.shared = true; + list_add(&tv_pd.head, &list); + + err = ttm_eu_reserve_buffers(&ticket, &list, false, NULL); + if (err) + goto out; + + err = lima_vm_bo_map(vm, bo, va); + + ttm_eu_backoff_reservation(&ticket, &list); +out: + drm_gem_object_put_unlocked(obj); + return err; +} + +int lima_gem_va_unmap(struct drm_file *file, u32 handle, u32 va) +{ + struct lima_drm_priv *priv =
to_lima_drm_priv(file); + struct lima_vm *vm = priv->vm; + struct drm_gem_object *obj; + struct lima_bo *bo; + int err; + + LIST_HEAD(list); + struct ttm_validate_buffer tv_bo, tv_pd; + struct ww_acquire_ctx ticket; + + if (!PAGE_ALIGNED(va)) + return -EINVAL; + + obj = drm_gem_object_lookup(file, handle); + if (!obj) + return -ENOENT; + + bo = to_lima_bo(obj); + + tv_bo.bo = &bo->tbo; + tv_bo.shared = true; + list_add(&tv_bo.head, &list); + + tv_pd.bo = &vm->pd->tbo; + tv_pd.shared = true; + list_add(&tv_pd.head, &list); + + err = ttm_eu_reserve_buffers(&ticket, &list, false, NULL); + if (err) + goto out; + + err = lima_vm_bo_unmap(vm, bo, va); + + ttm_eu_backoff_reservation(&ticket, &list); +out: + drm_gem_object_put_unlocked(obj); + return err; +} + +static int lima_gem_sync_bo(struct lima_sched_task *task, struct lima_bo *bo, + bool write, bool explicit) +{ + int i, err; + struct dma_fence *f; + u64 context = task->base.s_fence->finished.context; + + if (!write) { + err = reservation_object_reserve_shared(bo->tbo.resv); + if (err) + return err; + } + + /* explicit sync use user passed dep fence */ + if (explicit) + return 0; + + /* implicit sync use bo fence in resv obj */ + if (write) { + struct reservation_object_list *fobj = + reservation_object_get_list(bo->tbo.resv); + + if (fobj && fobj->shared_count > 0) { + for (i = 0; i < fobj->shared_count; i++) { + f = rcu_dereference_protected( + fobj->shared[i], + reservation_object_held(bo->tbo.resv)); + if (f->context != context) { + err = lima_sched_task_add_dep(task, f); + if (err) + return err; + } + } + } + } + + f = reservation_object_get_excl(bo->tbo.resv); + if (f) { + err = lima_sched_task_add_dep(task, f); + if (err) + return err; + } + + return 0; +} + +static int lima_gem_add_deps(struct lima_ctx_mgr *mgr, struct lima_submit *submit) +{ + int i, err = 0; + + for (i = 0; i < submit->nr_deps; i++) { + union drm_lima_gem_submit_dep *dep = submit->deps + i; + struct dma_fence *fence; + + if (dep->type == 
LIMA_SUBMIT_DEP_FENCE) { + fence = lima_ctx_get_native_fence( + mgr, dep->fence.ctx, dep->fence.pipe, + dep->fence.seq); + if (IS_ERR(fence)) { + err = PTR_ERR(fence); + break; + } + } + else if (dep->type == LIMA_SUBMIT_DEP_SYNC_FD) { + fence = sync_file_get_fence(dep->sync_fd.fd); + if (!fence) { + err = -EINVAL; + break; + } + } + else { + err = -EINVAL; + break; + } + + if (fence) { + err = lima_sched_task_add_dep(submit->task, fence); + dma_fence_put(fence); + if (err) + break; + } + } + + return err; +} + +static int lima_gem_get_sync_fd(struct dma_fence *fence) +{ + struct sync_file *sync_file; + int fd; + + fd = get_unused_fd_flags(O_CLOEXEC); + if (fd < 0) + return fd; + + sync_file = sync_file_create(fence); + if (!sync_file) { + put_unused_fd(fd); + return -ENOMEM; + } + + fd_install(fd, sync_file->file); + return fd; +} + +int lima_gem_submit(struct drm_file *file, struct lima_submit *submit) +{ + int i, err = 0; + struct lima_drm_priv *priv = to_lima_drm_priv(file); + struct lima_vm *vm = priv->vm; + + INIT_LIST_HEAD(&submit->validated); + INIT_LIST_HEAD(&submit->duplicates); + + for (i = 0; i < submit->nr_bos; i++) { + struct drm_gem_object *obj; + struct drm_lima_gem_submit_bo *bo = submit->bos + i; + struct ttm_validate_buffer *vb = submit->vbs + i; + + obj = drm_gem_object_lookup(file, bo->handle); + if (!obj) { + err = -ENOENT; + goto out0; + } + + vb->bo = &to_lima_bo(obj)->tbo; + vb->shared = !(bo->flags & LIMA_SUBMIT_BO_WRITE); + list_add_tail(&vb->head, &submit->validated); + } + + submit->vm_pd_vb.bo = &vm->pd->tbo; + submit->vm_pd_vb.shared = true; + list_add(&submit->vm_pd_vb.head, &submit->validated); + + err = ttm_eu_reserve_buffers(&submit->ticket, &submit->validated, + true, &submit->duplicates); + if (err) + goto out0; + + err = lima_sched_task_init( + submit->task, submit->ctx->context + submit->pipe, vm); + if (err) + goto out1; + + err = lima_gem_add_deps(&priv->ctx_mgr, submit); + if (err) + goto out2; + + for (i = 0; i < 
submit->nr_bos; i++) { + struct ttm_validate_buffer *vb = submit->vbs + i; + struct lima_bo *bo = ttm_to_lima_bo(vb->bo); + err = lima_gem_sync_bo( + submit->task, bo, !vb->shared, + submit->flags & LIMA_SUBMIT_FLAG_EXPLICIT_FENCE); + if (err) + goto out2; + } + + if (submit->flags & LIMA_SUBMIT_FLAG_SYNC_FD_OUT) { + int fd = lima_gem_get_sync_fd( + &submit->task->base.s_fence->finished); + if (fd < 0) { + err = fd; + goto out2; + } + submit->sync_fd = fd; + } + + submit->fence = lima_sched_context_queue_task( + submit->ctx->context + submit->pipe, submit->task, + &submit->done); + + ttm_eu_fence_buffer_objects(&submit->ticket, &submit->validated, + &submit->task->base.s_fence->finished); + +out2: + if (err) + lima_sched_task_fini(submit->task); +out1: + if (err) + ttm_eu_backoff_reservation(&submit->ticket, &submit->validated); +out0: + for (i = 0; i < submit->nr_bos; i++) { + struct ttm_validate_buffer *vb = submit->vbs + i; + if (!vb->bo) + break; + drm_gem_object_put_unlocked(&ttm_to_lima_bo(vb->bo)->gem); + } + return err; +} + +int lima_gem_wait(struct drm_file *file, u32 handle, u32 op, u64 timeout_ns) +{ + bool write = op & LIMA_GEM_WAIT_WRITE; + struct drm_gem_object *obj; + struct lima_bo *bo; + signed long ret; + unsigned long timeout; + + obj = drm_gem_object_lookup(file, handle); + if (!obj) + return -ENOENT; + + bo = to_lima_bo(obj); + + timeout = timeout_ns ? lima_timeout_to_jiffies(timeout_ns) : 0; + + ret = lima_bo_reserve(bo, true); + if (ret) + goto out; + + /* must use long for result check because in 64bit arch int + * will overflow if timeout is too large and get <0 result + */ + ret = reservation_object_wait_timeout_rcu(bo->tbo.resv, write, true, timeout); + if (ret == 0) + ret = timeout ? 
-ETIMEDOUT : -EBUSY; + else if (ret > 0) + ret = 0; + + lima_bo_unreserve(bo); +out: + drm_gem_object_put_unlocked(obj); + return ret; +} diff --git a/drivers/gpu/drm/lima/lima_gem.h b/drivers/gpu/drm/lima/lima_gem.h new file mode 100644 index 000000000000..8e3c4110825d --- /dev/null +++ b/drivers/gpu/drm/lima/lima_gem.h @@ -0,0 +1,41 @@ +/* + * Copyright (C) 2017-2018 Lima Project + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ +#ifndef __LIMA_GEM_H__ +#define __LIMA_GEM_H__ + +struct lima_bo; +struct lima_submit; + +struct lima_bo *lima_gem_create_bo(struct drm_device *dev, u32 size, u32 flags); +int lima_gem_create_handle(struct drm_device *dev, struct drm_file *file, + u32 size, u32 flags, u32 *handle); +void lima_gem_free_object(struct drm_gem_object *obj); +int lima_gem_object_open(struct drm_gem_object *obj, struct drm_file *file); +void lima_gem_object_close(struct drm_gem_object *obj, struct drm_file *file); +int lima_gem_mmap_offset(struct drm_file *file, u32 handle, u64 *offset); +int lima_gem_mmap(struct file *filp, struct vm_area_struct *vma); +int lima_gem_va_map(struct drm_file *file, u32 handle, u32 flags, u32 va); +int lima_gem_va_unmap(struct drm_file *file, u32 handle, u32 va); +int lima_gem_submit(struct drm_file *file, struct lima_submit *submit); +int lima_gem_wait(struct drm_file *file, u32 handle, u32 op, u64 timeout_ns); + +#endif
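A note on the range check in lima_gem_va_map() above: the GPU VA, the window bounds and the object size are all 32-bit, so a naive `va + obj->size > dev->va_end` could wrap around and accept a bad mapping; the patch writes it as `dev->va_end - obj->size < va` instead. A standalone sketch of the same idiom (function and parameter names here are hypothetical, not part of the patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Overflow-safe VA range check, modeled on lima_gem_va_map().
 * "va + size > va_end" can wrap on u32; rewriting the comparison as
 * "va_end - size < va" keeps every intermediate value in range
 * (assuming size <= va_end, which the explicit guard below enforces).
 */
static bool va_range_ok(uint32_t va, uint32_t size,
			uint32_t va_start, uint32_t va_end)
{
	if (size > va_end)	/* mapping larger than the whole space */
		return false;
	return va >= va_start && va_end - size >= va;
}
```

For example, with va_end = 0xFFFFFFFF, va = 0xFFFFF000 and size = 0x2000, the naive sum wraps to 0x1000 and would pass, while the subtraction form rejects it.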
From: Lima Project Developers <dri-devel@lists.freedesktop.org>
Signed-off-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
---
 drivers/gpu/drm/lima/lima_gem_prime.c | 66 +++++++++++++++++++++++++++
 drivers/gpu/drm/lima/lima_gem_prime.h | 31 +++++++++++++
 2 files changed, 97 insertions(+)
 create mode 100644 drivers/gpu/drm/lima/lima_gem_prime.c
 create mode 100644 drivers/gpu/drm/lima/lima_gem_prime.h
diff --git a/drivers/gpu/drm/lima/lima_gem_prime.c b/drivers/gpu/drm/lima/lima_gem_prime.c new file mode 100644 index 000000000000..74da43a4378f --- /dev/null +++ b/drivers/gpu/drm/lima/lima_gem_prime.c @@ -0,0 +1,66 @@ +/* + * Copyright (C) 2017-2018 Lima Project + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ + +#include <linux/dma-buf.h> +#include <drm/drm_prime.h> + +#include "lima_device.h" +#include "lima_object.h" +#include "lima_gem_prime.h" + +struct drm_gem_object *lima_gem_prime_import_sg_table( + struct drm_device *dev, struct dma_buf_attachment *attach, + struct sg_table *sgt) +{ + struct reservation_object *resv = attach->dmabuf->resv; + struct lima_device *ldev = to_lima_dev(dev); + struct lima_bo *bo; + + ww_mutex_lock(&resv->lock, NULL); + + bo = lima_bo_create(ldev, attach->dmabuf->size, 0, + ttm_bo_type_sg, sgt, resv); + if (IS_ERR(bo)) + goto err_out; + + ww_mutex_unlock(&resv->lock); + return &bo->gem; + +err_out: + ww_mutex_unlock(&resv->lock); + return (void *)bo; +} + +struct reservation_object *lima_gem_prime_res_obj(struct drm_gem_object *obj) +{ + struct lima_bo *bo = to_lima_bo(obj); + + return bo->tbo.resv; +} + +struct sg_table *lima_gem_prime_get_sg_table(struct drm_gem_object *obj) +{ + struct lima_bo *bo = to_lima_bo(obj); + int npages = bo->tbo.num_pages; + + return drm_prime_pages_to_sg(bo->tbo.ttm->pages, npages); +} diff --git a/drivers/gpu/drm/lima/lima_gem_prime.h b/drivers/gpu/drm/lima/lima_gem_prime.h new file mode 100644 index 000000000000..023bf5ba2d7b --- /dev/null +++ b/drivers/gpu/drm/lima/lima_gem_prime.h @@ -0,0 +1,31 @@ +/* + * Copyright (C) 2017-2018 Lima Project + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. 
+ * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + */ +#ifndef __LIMA_GEM_PRIME_H__ +#define __LIMA_GEM_PRIME_H__ + +struct drm_gem_object *lima_gem_prime_import_sg_table( + struct drm_device *dev, struct dma_buf_attachment *attach, + struct sg_table *sgt); +struct sg_table *lima_gem_prime_get_sg_table(struct drm_gem_object *obj); +struct reservation_object *lima_gem_prime_res_obj(struct drm_gem_object *obj); + +#endif
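One detail of lima_gem_prime_import_sg_table() above: its error path ends with `return (void *)bo;`. At that point `bo` already holds an ERR_PTR-encoded errno from lima_bo_create(), and the cast simply forwards it through the `struct drm_gem_object *` return type. A minimal userspace model of that encoding (simplified stand-ins for the kernel's err.h helpers, shown only to illustrate the convention):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Model of the kernel ERR_PTR convention: errnos are encoded as
 * pointers in the top page of the address space, so a single return
 * value can carry either a valid object or a negative error code.
 */
#define MAX_ERRNO	4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

This is why the caller of the import path can test the returned object pointer with IS_ERR() instead of needing a separate error out-parameter.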
Signed-off-by: Qiang Yu <yuq825@gmail.com>
---
 drivers/gpu/drm/lima/lima_sched.c | 497 ++++++++++++++++++++++++++++++
 drivers/gpu/drm/lima/lima_sched.h | 126 ++++++++
 2 files changed, 623 insertions(+)
 create mode 100644 drivers/gpu/drm/lima/lima_sched.c
 create mode 100644 drivers/gpu/drm/lima/lima_sched.h
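One detail worth calling out in the scheduler patch: lima_sched_task_add_dep() keeps at most one fence per dma-fence context, replacing an already-stored fence when a later one from the same context arrives. A small standalone model of that bookkeeping (types and names here are hypothetical; the kernel version stores struct dma_fence pointers and grows the array with krealloc):

```c
#include <assert.h>
#include <stdint.h>

struct fake_fence {
	uint64_t context;	/* which timeline the fence belongs to */
	uint64_t seqno;		/* position on that timeline */
};

#define MAX_DEPS 8

struct dep_list {
	struct fake_fence dep[MAX_DEPS];
	int num;
};

/* Fences on one context are totally ordered, so a later fence from the
 * same context supersedes the stored one instead of being appended. */
static void add_dep(struct dep_list *l, struct fake_fence f)
{
	for (int i = 0; i < l->num; i++) {
		if (l->dep[i].context == f.context && f.seqno > l->dep[i].seqno) {
			l->dep[i] = f;
			return;
		}
	}
	l->dep[l->num++] = f;
}

/* Demo: two fences from context 1 collapse into one entry.
 * Returns 0 on the expected final state, negative otherwise. */
static int dep_demo(void)
{
	struct dep_list l = { .num = 0 };

	add_dep(&l, (struct fake_fence){ .context = 1, .seqno = 1 });
	add_dep(&l, (struct fake_fence){ .context = 1, .seqno = 5 });
	add_dep(&l, (struct fake_fence){ .context = 2, .seqno = 3 });

	if (l.num != 2)
		return -1;
	if (l.dep[0].seqno != 5)
		return -2;
	return 0;
}
```

Waiting on the latest fence of a context implies all earlier ones have signaled, which is what makes the dedup safe.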
diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c new file mode 100644 index 000000000000..190932955e9b --- /dev/null +++ b/drivers/gpu/drm/lima/lima_sched.c @@ -0,0 +1,497 @@ +/* + * Copyright (C) 2017-2018 Lima Project + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ + +#include <linux/kthread.h> +#include <linux/slab.h> + +#include "lima_drv.h" +#include "lima_sched.h" +#include "lima_vm.h" +#include "lima_mmu.h" +#include "lima_l2_cache.h" + +struct lima_fence { + struct dma_fence base; + struct lima_sched_pipe *pipe; +}; + +static struct kmem_cache *lima_fence_slab = NULL; + +int lima_sched_slab_init(void) +{ + lima_fence_slab = kmem_cache_create( + "lima_fence", sizeof(struct lima_fence), 0, + SLAB_HWCACHE_ALIGN, NULL); + if (!lima_fence_slab) + return -ENOMEM; + + return 0; +} + +void lima_sched_slab_fini(void) +{ + if (lima_fence_slab) + kmem_cache_destroy(lima_fence_slab); +} + +static inline struct lima_fence *to_lima_fence(struct dma_fence *fence) +{ + return container_of(fence, struct lima_fence, base); +} + +static const char *lima_fence_get_driver_name(struct dma_fence *fence) +{ + return "lima"; +} + +static const char *lima_fence_get_timeline_name(struct dma_fence *fence) +{ + struct lima_fence *f = to_lima_fence(fence); + + return f->pipe->base.name; +} + +static bool lima_fence_enable_signaling(struct dma_fence *fence) +{ + return true; +} + +static void lima_fence_release_rcu(struct rcu_head *rcu) +{ + struct dma_fence *f = container_of(rcu, struct dma_fence, rcu); + struct lima_fence *fence = to_lima_fence(f); + + kmem_cache_free(lima_fence_slab, fence); +} + +static void lima_fence_release(struct dma_fence *fence) +{ + struct lima_fence *f = to_lima_fence(fence); + + call_rcu(&f->base.rcu, lima_fence_release_rcu); +} + +static const struct dma_fence_ops lima_fence_ops = { + .get_driver_name = lima_fence_get_driver_name, + .get_timeline_name = lima_fence_get_timeline_name, + .enable_signaling = lima_fence_enable_signaling, + .wait = dma_fence_default_wait, + .release = lima_fence_release, +}; + +static struct lima_fence *lima_fence_create(struct lima_sched_pipe *pipe) +{ + struct lima_fence *fence; + + fence = kmem_cache_zalloc(lima_fence_slab, GFP_KERNEL); + if (!fence) + return NULL; + + fence->pipe = 
pipe; + dma_fence_init(&fence->base, &lima_fence_ops, &pipe->fence_lock, + pipe->fence_context, ++pipe->fence_seqno); + + return fence; +} + +static inline struct lima_sched_task *to_lima_task(struct drm_sched_job *job) +{ + return container_of(job, struct lima_sched_task, base); +} + +static inline struct lima_sched_pipe *to_lima_pipe(struct drm_gpu_scheduler *sched) +{ + return container_of(sched, struct lima_sched_pipe, base); +} + +int lima_sched_task_init(struct lima_sched_task *task, + struct lima_sched_context *context, + struct lima_vm *vm) +{ + int err; + + err = drm_sched_job_init(&task->base, context->base.sched, + &context->base, context); + if (err) + return err; + + task->vm = lima_vm_get(vm); + return 0; +} + +void lima_sched_task_fini(struct lima_sched_task *task) +{ + dma_fence_put(&task->base.s_fence->finished); + lima_vm_put(task->vm); +} + +int lima_sched_task_add_dep(struct lima_sched_task *task, struct dma_fence *fence) +{ + int i, new_dep = 4; + + if (task->dep && task->num_dep == task->max_dep) + new_dep = task->max_dep * 2; + + if (task->max_dep < new_dep) { + void *dep = krealloc(task->dep, sizeof(*task->dep) * new_dep, GFP_KERNEL); + if (!dep) + return -ENOMEM; + task->max_dep = new_dep; + task->dep = dep; + } + + dma_fence_get(fence); + for (i = 0; i < task->num_dep; i++) { + if (task->dep[i]->context == fence->context && + dma_fence_is_later(fence, task->dep[i])) { + dma_fence_put(task->dep[i]); + task->dep[i] = fence; + return 0; + } + } + + task->dep[task->num_dep++] = fence; + return 0; +} + +int lima_sched_context_init(struct lima_sched_pipe *pipe, + struct lima_sched_context *context, + atomic_t *guilty) +{ + struct drm_sched_rq *rq = pipe->base.sched_rq + DRM_SCHED_PRIORITY_NORMAL; + int err; + + context->fences = + kzalloc(sizeof(*context->fences) * lima_sched_max_tasks, GFP_KERNEL); + if (!context->fences) + return -ENOMEM; + + mutex_init(&context->lock); + err = drm_sched_entity_init(&pipe->base, &context->base, rq, + 
lima_sched_max_tasks, guilty); + if (err) { + kfree(context->fences); + context->fences = NULL; + return err; + } + + return 0; +} + +void lima_sched_context_fini(struct lima_sched_pipe *pipe, + struct lima_sched_context *context) +{ + drm_sched_entity_fini(&pipe->base, &context->base); + + mutex_destroy(&context->lock); + + if (context->fences) + kfree(context->fences); +} + +static uint32_t lima_sched_context_add_fence(struct lima_sched_context *context, + struct dma_fence *fence, + uint32_t *done) +{ + uint32_t seq, idx, i; + struct dma_fence *other; + + mutex_lock(&context->lock); + + seq = context->sequence; + idx = seq & (lima_sched_max_tasks - 1); + other = context->fences[idx]; + + if (other) { + int err = dma_fence_wait(other, false); + if (err) + DRM_ERROR("Error %d waiting context fence\n", err); + } + + context->fences[idx] = dma_fence_get(fence); + context->sequence++; + + /* get finished fence offset from seq */ + for (i = 1; i < lima_sched_max_tasks; i++) { + idx = (seq - i) & (lima_sched_max_tasks - 1); + if (!context->fences[idx] || + dma_fence_is_signaled(context->fences[idx])) + break; + } + + mutex_unlock(&context->lock); + + dma_fence_put(other); + + *done = i; + return seq; +} + +struct dma_fence *lima_sched_context_get_fence( + struct lima_sched_context *context, uint32_t seq) +{ + struct dma_fence *fence; + int idx; + uint32_t max, min; + + mutex_lock(&context->lock); + + max = context->sequence - 1; + min = context->sequence - lima_sched_max_tasks; + + /* handle overflow case */ + if ((min < max && (seq < min || seq > max)) || + (min > max && (seq < min && seq > max))) { + fence = NULL; + goto out; + } + + idx = seq & (lima_sched_max_tasks - 1); + fence = dma_fence_get(context->fences[idx]); + +out: + mutex_unlock(&context->lock); + + return fence; +} + +uint32_t lima_sched_context_queue_task(struct lima_sched_context *context, + struct lima_sched_task *task, + uint32_t *done) +{ + uint32_t seq = lima_sched_context_add_fence( + context, 
&task->base.s_fence->finished, done); + drm_sched_entity_push_job(&task->base, &context->base); + return seq; +} + +static struct dma_fence *lima_sched_dependency(struct drm_sched_job *job, + struct drm_sched_entity *entity) +{ + struct lima_sched_task *task = to_lima_task(job); + int i; + + for (i = 0; i < task->num_dep; i++) { + struct dma_fence *fence = task->dep[i]; + + if (!task->dep[i]) + continue; + + task->dep[i] = NULL; + + if (!dma_fence_is_signaled(fence)) + return fence; + + dma_fence_put(fence); + } + + return NULL; +} + +static struct dma_fence *lima_sched_run_job(struct drm_sched_job *job) +{ + struct lima_sched_task *task = to_lima_task(job); + struct lima_sched_pipe *pipe = to_lima_pipe(job->sched); + struct lima_fence *fence; + struct dma_fence *ret; + struct lima_vm *vm = NULL, *last_vm = NULL; + int i; + + /* after GPU reset */ + if (job->s_fence->finished.error < 0) + return NULL; + + fence = lima_fence_create(pipe); + if (!fence) + return NULL; + task->fence = &fence->base; + + /* for caller usage of the fence, otherwise irq handler + * may consume the fence before caller use it */ + ret = dma_fence_get(task->fence); + + pipe->current_task = task; + + /* this is needed for MMU to work correctly, otherwise GP/PP + * will hang or page fault for unknown reason after running for + * a while. + * + * Need to investigate: + * 1. is it related to TLB + * 2. how much performance will be affected by L2 cache flush + * 3. can we reduce the calling of this function because all + * GP/PP use the same L2 cache on mali400 + * + * TODO: + * 1. move this to task fini to save some wait time? + * 2. when GP/PP use different l2 cache, need PP wait GP l2 + * cache flush? 
+ */ + for (i = 0; i < pipe->num_l2_cache; i++) + lima_l2_cache_flush(pipe->l2_cache[i]); + + if (task->vm != pipe->current_vm) { + vm = lima_vm_get(task->vm); + last_vm = pipe->current_vm; + pipe->current_vm = task->vm; + } + + if (pipe->bcast_mmu) + lima_mmu_switch_vm(pipe->bcast_mmu, vm); + else { + for (i = 0; i < pipe->num_mmu; i++) + lima_mmu_switch_vm(pipe->mmu[i], vm); + } + + if (last_vm) + lima_vm_put(last_vm); + + pipe->error = false; + pipe->task_run(pipe, task); + + return task->fence; +} + +static void lima_sched_handle_error_task(struct lima_sched_pipe *pipe, + struct lima_sched_task *task) +{ + kthread_park(pipe->base.thread); + drm_sched_hw_job_reset(&pipe->base, &task->base); + + pipe->task_error(pipe); + + if (pipe->bcast_mmu) + lima_mmu_page_fault_resume(pipe->bcast_mmu); + else { + int i; + for (i = 0; i < pipe->num_mmu; i++) + lima_mmu_page_fault_resume(pipe->mmu[i]); + } + + if (pipe->current_vm) + lima_vm_put(pipe->current_vm); + + pipe->current_vm = NULL; + pipe->current_task = NULL; + + drm_sched_job_recovery(&pipe->base); + kthread_unpark(pipe->base.thread); +} + +static void lima_sched_timedout_job(struct drm_sched_job *job) +{ + struct lima_sched_pipe *pipe = to_lima_pipe(job->sched); + struct lima_sched_task *task = to_lima_task(job); + + lima_sched_handle_error_task(pipe, task); +} + +static void lima_sched_free_job(struct drm_sched_job *job) +{ + struct lima_sched_task *task = to_lima_task(job); + struct lima_sched_pipe *pipe = to_lima_pipe(job->sched); + int i; + + dma_fence_put(task->fence); + + for (i = 0; i < task->num_dep; i++) { + if (task->dep[i]) + dma_fence_put(task->dep[i]); + } + + if (task->dep) + kfree(task->dep); + + lima_vm_put(task->vm); + kmem_cache_free(pipe->task_slab, task); +} + +const struct drm_sched_backend_ops lima_sched_ops = { + .dependency = lima_sched_dependency, + .run_job = lima_sched_run_job, + .timedout_job = lima_sched_timedout_job, + .free_job = lima_sched_free_job, +}; + +static void 
lima_sched_error_work(struct work_struct *work) +{ + struct lima_sched_pipe *pipe = + container_of(work, struct lima_sched_pipe, error_work); + struct lima_sched_task *task = pipe->current_task; + + lima_sched_handle_error_task(pipe, task); +} + +int lima_sched_pipe_init(struct lima_sched_pipe *pipe, const char *name) +{ + long timeout; + + if (lima_sched_timeout_ms <= 0) + timeout = MAX_SCHEDULE_TIMEOUT; + else + timeout = msecs_to_jiffies(lima_sched_timeout_ms); + + pipe->fence_context = dma_fence_context_alloc(1); + spin_lock_init(&pipe->fence_lock); + + INIT_WORK(&pipe->error_work, lima_sched_error_work); + + return drm_sched_init(&pipe->base, &lima_sched_ops, 1, 0, timeout, name); +} + +void lima_sched_pipe_fini(struct lima_sched_pipe *pipe) +{ + drm_sched_fini(&pipe->base); +} + +unsigned long lima_timeout_to_jiffies(u64 timeout_ns) +{ + unsigned long timeout_jiffies; + ktime_t timeout; + + /* clamp timeout if it's too large */ + if (((s64)timeout_ns) < 0) + return MAX_SCHEDULE_TIMEOUT; + + timeout = ktime_sub(ns_to_ktime(timeout_ns), ktime_get()); + if (ktime_to_ns(timeout) < 0) + return 0; + + timeout_jiffies = nsecs_to_jiffies(ktime_to_ns(timeout)); + /* clamp timeout to avoid unsigned -> signed overflow */ + if (timeout_jiffies > MAX_SCHEDULE_TIMEOUT) + return MAX_SCHEDULE_TIMEOUT; + + return timeout_jiffies; +} + +void lima_sched_pipe_task_done(struct lima_sched_pipe *pipe) +{ + if (pipe->error) + schedule_work(&pipe->error_work); + else { + struct lima_sched_task *task = pipe->current_task; + + pipe->task_fini(pipe); + dma_fence_signal(task->fence); + } +} diff --git a/drivers/gpu/drm/lima/lima_sched.h b/drivers/gpu/drm/lima/lima_sched.h new file mode 100644 index 000000000000..b93b7b4eded4 --- /dev/null +++ b/drivers/gpu/drm/lima/lima_sched.h @@ -0,0 +1,126 @@ +/* + * Copyright (C) 2017-2018 Lima Project + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the
"Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + */ +#ifndef __LIMA_SCHED_H__ +#define __LIMA_SCHED_H__ + +#include <drm/gpu_scheduler.h> + +struct lima_vm; + +struct lima_sched_task { + struct drm_sched_job base; + + struct lima_vm *vm; + void *frame; + + struct dma_fence **dep; + int num_dep; + int max_dep; + + /* pipe fence */ + struct dma_fence *fence; +}; + +struct lima_sched_context { + struct drm_sched_entity base; + struct mutex lock; + struct dma_fence **fences; + uint32_t sequence; +}; + +#define LIMA_SCHED_PIPE_MAX_MMU 8 +#define LIMA_SCHED_PIPE_MAX_L2_CACHE 2 +#define LIMA_SCHED_PIPE_MAX_PROCESSOR 8 + +struct lima_ip; + +struct lima_sched_pipe { + struct drm_gpu_scheduler base; + + u64 fence_context; + u32 fence_seqno; + spinlock_t fence_lock; + + struct lima_sched_task *current_task; + struct lima_vm *current_vm; + + struct lima_ip *mmu[LIMA_SCHED_PIPE_MAX_MMU]; + int num_mmu; + + struct lima_ip *l2_cache[LIMA_SCHED_PIPE_MAX_L2_CACHE]; + int num_l2_cache; + + struct lima_ip *processor[LIMA_SCHED_PIPE_MAX_PROCESSOR]; + int num_processor; + + struct lima_ip 
*bcast_processor; + struct lima_ip *bcast_mmu; + + u32 done; + bool error; + atomic_t task; + + int frame_size; + struct kmem_cache *task_slab; + + int (*task_validate)(struct lima_sched_pipe *pipe, struct lima_sched_task *task); + void (*task_run)(struct lima_sched_pipe *pipe, struct lima_sched_task *task); + void (*task_fini)(struct lima_sched_pipe *pipe); + void (*task_error)(struct lima_sched_pipe *pipe); + void (*task_mmu_error)(struct lima_sched_pipe *pipe); + + struct work_struct error_work; +}; + +int lima_sched_task_init(struct lima_sched_task *task, + struct lima_sched_context *context, + struct lima_vm *vm); +void lima_sched_task_fini(struct lima_sched_task *task); +int lima_sched_task_add_dep(struct lima_sched_task *task, struct dma_fence *fence); + +int lima_sched_context_init(struct lima_sched_pipe *pipe, + struct lima_sched_context *context, + atomic_t *guilty); +void lima_sched_context_fini(struct lima_sched_pipe *pipe, + struct lima_sched_context *context); +uint32_t lima_sched_context_queue_task(struct lima_sched_context *context, + struct lima_sched_task *task, + uint32_t *done); +struct dma_fence *lima_sched_context_get_fence( + struct lima_sched_context *context, uint32_t seq); + +int lima_sched_pipe_init(struct lima_sched_pipe *pipe, const char *name); +void lima_sched_pipe_fini(struct lima_sched_pipe *pipe); +void lima_sched_pipe_task_done(struct lima_sched_pipe *pipe); + +static inline void lima_sched_pipe_mmu_error(struct lima_sched_pipe *pipe) +{ + pipe->error = true; + pipe->task_mmu_error(pipe); +} + +int lima_sched_slab_init(void); +void lima_sched_slab_fini(void); + +unsigned long lima_timeout_to_jiffies(u64 timeout_ns); + +#endif
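The per-context fence ring used by this scheduler (see lima_sched_context_get_fence() above) has to decide whether a user-supplied seqno still falls within the last lima_sched_max_tasks sequence numbers, with u32 wrap-around in mind. A standalone sketch of that window test (hypothetical names, plain C in place of the locked kernel version):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Wrap-aware window test modeled on lima_sched_context_get_fence():
 * a seqno is valid only if it lies in the last `ring_size` sequence
 * numbers before `sequence`, where all arithmetic is modulo 2^32.
 */
static bool seq_in_window(uint32_t sequence, uint32_t ring_size, uint32_t seq)
{
	uint32_t max = sequence - 1;
	uint32_t min = sequence - ring_size;

	if (min < max)			/* window does not cross the wrap point */
		return seq >= min && seq <= max;
	/* window straddles the u32 wrap point */
	return seq >= min || seq <= max;
}
```

With sequence = 2 and ring_size = 4, the valid window is 0xFFFFFFFE..0xFFFFFFFF and 0..1, which the naive `min <= seq <= max` comparison would get wrong.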
Signed-off-by: Qiang Yu <yuq825@gmail.com>
---
 drivers/gpu/drm/lima/lima_ctx.c | 143 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/lima/lima_ctx.h | 51 ++++++++++++
 2 files changed, 194 insertions(+)
 create mode 100644 drivers/gpu/drm/lima/lima_ctx.c
 create mode 100644 drivers/gpu/drm/lima/lima_ctx.h
diff --git a/drivers/gpu/drm/lima/lima_ctx.c b/drivers/gpu/drm/lima/lima_ctx.c
new file mode 100644
index 000000000000..7243861760b4
--- /dev/null
+++ b/drivers/gpu/drm/lima/lima_ctx.c
@@ -0,0 +1,143 @@
+/*
+ * Copyright (C) 2017-2018 Lima Project
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include <linux/slab.h>
+
+#include "lima_device.h"
+#include "lima_ctx.h"
+
+int lima_ctx_create(struct lima_device *dev, struct lima_ctx_mgr *mgr, u32 *id)
+{
+	struct lima_ctx *ctx;
+	int i, err;
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx)
+		return -ENOMEM;
+	ctx->dev = dev;
+	kref_init(&ctx->refcnt);
+
+	for (i = 0; i < lima_pipe_num; i++) {
+		err = lima_sched_context_init(dev->pipe + i, ctx->context + i, &ctx->guilty);
+		if (err)
+			goto err_out0;
+	}
+
+	idr_preload(GFP_KERNEL);
+	spin_lock(&mgr->lock);
+	err = idr_alloc(&mgr->handles, ctx, 1, 0, GFP_ATOMIC);
+	spin_unlock(&mgr->lock);
+	idr_preload_end();
+	if (err < 0)
+		goto err_out0;
+
+	*id = err;
+	return 0;
+
+err_out0:
+	for (i--; i >= 0; i--)
+		lima_sched_context_fini(dev->pipe + i, ctx->context + i);
+	kfree(ctx);
+	return err;
+}
+
+static void lima_ctx_do_release(struct kref *ref)
+{
+	struct lima_ctx *ctx = container_of(ref, struct lima_ctx, refcnt);
+	int i;
+
+	for (i = 0; i < lima_pipe_num; i++)
+		lima_sched_context_fini(ctx->dev->pipe + i, ctx->context + i);
+	kfree(ctx);
+}
+
+int lima_ctx_free(struct lima_ctx_mgr *mgr, u32 id)
+{
+	struct lima_ctx *ctx;
+
+	spin_lock(&mgr->lock);
+	ctx = idr_remove(&mgr->handles, id);
+	spin_unlock(&mgr->lock);
+
+	if (ctx) {
+		kref_put(&ctx->refcnt, lima_ctx_do_release);
+		return 0;
+	}
+	return -EINVAL;
+}
+
+struct lima_ctx *lima_ctx_get(struct lima_ctx_mgr *mgr, u32 id)
+{
+	struct lima_ctx *ctx;
+
+	spin_lock(&mgr->lock);
+	ctx = idr_find(&mgr->handles, id);
+	if (ctx)
+		kref_get(&ctx->refcnt);
+	spin_unlock(&mgr->lock);
+	return ctx;
+}
+
+void lima_ctx_put(struct lima_ctx *ctx)
+{
+	kref_put(&ctx->refcnt, lima_ctx_do_release);
+}
+
+void lima_ctx_mgr_init(struct lima_ctx_mgr *mgr)
+{
+	spin_lock_init(&mgr->lock);
+	idr_init(&mgr->handles);
+}
+
+void lima_ctx_mgr_fini(struct lima_ctx_mgr *mgr)
+{
+	struct lima_ctx *ctx;
+	struct idr *idp;
+	uint32_t id;
+
+	idp = &mgr->handles;
+
+	idr_for_each_entry(idp, ctx, id) {
+		kref_put(&ctx->refcnt, lima_ctx_do_release);
+	}
+
+	idr_destroy(&mgr->handles);
+}
+
+struct dma_fence *lima_ctx_get_native_fence(struct lima_ctx_mgr *mgr,
+					    u32 ctx, u32 pipe, u32 seq)
+{
+	struct lima_ctx *c;
+	struct dma_fence *ret;
+
+	if (pipe >= lima_pipe_num)
+		return ERR_PTR(-EINVAL);
+
+	c = lima_ctx_get(mgr, ctx);
+	if (!c)
+		return ERR_PTR(-ENOENT);
+
+	ret = lima_sched_context_get_fence(c->context + pipe, seq);
+
+	lima_ctx_put(c);
+	return ret;
+}
diff --git a/drivers/gpu/drm/lima/lima_ctx.h b/drivers/gpu/drm/lima/lima_ctx.h
new file mode 100644
index 000000000000..591f64532772
--- /dev/null
+++ b/drivers/gpu/drm/lima/lima_ctx.h
@@ -0,0 +1,51 @@
+/*
+ * Copyright (C) 2017-2018 Lima Project
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#ifndef __LIMA_CTX_H__
+#define __LIMA_CTX_H__
+
+#include <linux/idr.h>
+
+#include "lima_device.h"
+
+struct lima_ctx {
+	struct kref refcnt;
+	struct lima_device *dev;
+	struct lima_sched_context context[lima_pipe_num];
+	atomic_t guilty;
+};
+
+struct lima_ctx_mgr {
+	spinlock_t lock;
+	struct idr handles;
+};
+
+int lima_ctx_create(struct lima_device *dev, struct lima_ctx_mgr *mgr, u32 *id);
+int lima_ctx_free(struct lima_ctx_mgr *mgr, u32 id);
+struct lima_ctx *lima_ctx_get(struct lima_ctx_mgr *mgr, u32 id);
+void lima_ctx_put(struct lima_ctx *ctx);
+void lima_ctx_mgr_init(struct lima_ctx_mgr *mgr);
+void lima_ctx_mgr_fini(struct lima_ctx_mgr *mgr);
+
+struct dma_fence *lima_ctx_get_native_fence(struct lima_ctx_mgr *mgr,
+					    u32 ctx, u32 pipe, u32 seq);
+
+#endif
From: Lima Project Developers dri-devel@lists.freedesktop.org
Signed-off-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
Signed-off-by: Simon Shields <simon@lineageos.org>
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
---
 drivers/gpu/drm/Kconfig       |  2 ++
 drivers/gpu/drm/Makefile      |  1 +
 drivers/gpu/drm/lima/Kconfig  |  9 +++++++++
 drivers/gpu/drm/lima/Makefile | 19 +++++++++++++++++++
 4 files changed, 31 insertions(+)
 create mode 100644 drivers/gpu/drm/lima/Kconfig
 create mode 100644 drivers/gpu/drm/lima/Makefile
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index deeefa7a1773..f00d529ee034 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -289,6 +289,8 @@ source "drivers/gpu/drm/pl111/Kconfig"
 
 source "drivers/gpu/drm/tve200/Kconfig"
 
+source "drivers/gpu/drm/lima/Kconfig"
+
 # Keep legacy drivers last
 
 menuconfig DRM_LEGACY
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 50093ff4479b..aba686e41d6b 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -103,3 +103,4 @@ obj-$(CONFIG_DRM_MXSFB) += mxsfb/
 obj-$(CONFIG_DRM_TINYDRM) += tinydrm/
 obj-$(CONFIG_DRM_PL111) += pl111/
 obj-$(CONFIG_DRM_TVE200) += tve200/
+obj-$(CONFIG_DRM_LIMA) += lima/
diff --git a/drivers/gpu/drm/lima/Kconfig b/drivers/gpu/drm/lima/Kconfig
new file mode 100644
index 000000000000..4ce9ac2e8204
--- /dev/null
+++ b/drivers/gpu/drm/lima/Kconfig
@@ -0,0 +1,9 @@
+
+config DRM_LIMA
+	tristate "LIMA (DRM support for ARM Mali 400/450 GPU)"
+	depends on DRM
+	depends on ARCH_SUNXI || ARCH_ROCKCHIP || ARCH_EXYNOS || ARCH_MESON
+	select DRM_SCHED
+	select DRM_TTM
+	help
+	  DRM driver for ARM Mali 400/450 GPUs.
diff --git a/drivers/gpu/drm/lima/Makefile b/drivers/gpu/drm/lima/Makefile
new file mode 100644
index 000000000000..0a1d6605f164
--- /dev/null
+++ b/drivers/gpu/drm/lima/Makefile
@@ -0,0 +1,19 @@
+lima-y := \
+	lima_drv.o \
+	lima_device.o \
+	lima_pmu.o \
+	lima_l2_cache.o \
+	lima_mmu.o \
+	lima_gp.o \
+	lima_pp.o \
+	lima_gem.o \
+	lima_vm.o \
+	lima_sched.o \
+	lima_ctx.o \
+	lima_gem_prime.o \
+	lima_dlbu.o \
+	lima_bcast.o \
+	lima_ttm.o \
+	lima_object.o
+
+obj-$(CONFIG_DRM_LIMA) += lima.o
On 05/18/2018 11:28 AM, Qiang Yu wrote:
depends on ARCH_SUNXI || ARCH_ROCKCHIP || ARCH_EXYNOS || ARCH_MESON
You can add ARCH_ZYNQMP here too; it has a Mali 400 MP2.
On Wed, May 23, 2018 at 07:16:41PM +0200, Marek Vasut wrote:
On 05/18/2018 11:28 AM, Qiang Yu wrote:
depends on ARCH_SUNXI || ARCH_ROCKCHIP || ARCH_EXYNOS || ARCH_MESON
You can add ARCH_ZYNQMP here too; it has a Mali 400 MP2.
Better yet, just drop them all rather than continually adding to the list.
But if you keep it, add '|| COMPILE_TEST'.
Rob
Yeah, the list is getting longer and longer; maybe I should just use ARM || ARM64 || COMPILE_TEST.
Regards, Qiang
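A sketch of what that Kconfig entry might look like with Qiang's suggested dependency (hypothetical; the patch as submitted still lists individual platforms):

```
config DRM_LIMA
	tristate "LIMA (DRM support for ARM Mali 400/450 GPU)"
	depends on DRM
	depends on ARM || ARM64 || COMPILE_TEST
	select DRM_SCHED
	select DRM_TTM
	help
	  DRM driver for ARM Mali 400/450 GPUs.
```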
On Thu, May 24, 2018 at 1:26 AM, Rob Herring robh@kernel.org wrote:
On 05/23/2018 17:16, Marek Vasut wrote:
You can add ARCH_ZYNQMP here too , it has Mali 400 MP2.
Well, as Qiang Yu already figured, it seems much smarter not to enumerate every possible platform here. More than that, the Kconfig dependencies should be strictly technical. There is nothing in this driver which is ARM specific; in fact I managed to compile it for x86-64 as well (with some small fix in a random header file). In fact there are x86-64 based SoCs pairing Intel Atom cores with Mali GPUs: https://en.wikipedia.org/wiki/Rockchip#Tablet_processors_with_integrated_mod...
So you can get rid of this whole line at all, meaning you don't even need the "depends on ARM || ARM64 || COMPILE_TEST" you have in your gitlab repo.
Cheers, Andre.
-- Best regards, Marek Vasut _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Hi Andre,
Thanks for your info. What a surprise that such a SoC exists. That means I have to determine whether it's a 64-bit CPU in some other way than just checking the ARM64 config.
Regards, Qiang On Sat, Jun 16, 2018 at 1:23 AM Andre Przywara andre.przywara@arm.com wrote:
On 07/14/2018 02:14 AM, Qiang Yu wrote:
Yeah, you should do that anyway. Actually you should try to avoid those explicit checks in the first place. Drivers shouldn't need to care about the "bit size" of the CPU, and Linux provides many ways to automatically cope with that, with types like phys_addr_t for instance. Quickly grepping I find "need_dma32" in lima_ttm.c:lima_ttm_init(); is that the only place you need to check?
Cheers, Andre.
Regards, Qiang On Sat, Jun 16, 2018 at 1:23 AM Andre Przywara andre.przywara@arm.com wrote:
On 05/23/2018 17:16, Marek Vasut wrote:
On 05/18/2018 11:28 AM, Qiang Yu wrote:
From: Lima Project Developers dri-devel@lists.freedesktop.org
Signed-off-by: Qiang Yu yuq825@gmail.com Signed-off-by: Neil Armstrong narmstrong@baylibre.com Signed-off-by: Simon Shields simon@lineageos.org Signed-off-by: Heiko Stuebner heiko@sntech.de
drivers/gpu/drm/Kconfig | 2 ++ drivers/gpu/drm/Makefile | 1 + drivers/gpu/drm/lima/Kconfig | 9 +++++++++ drivers/gpu/drm/lima/Makefile | 19 +++++++++++++++++++ 4 files changed, 31 insertions(+) create mode 100644 drivers/gpu/drm/lima/Kconfig create mode 100644 drivers/gpu/drm/lima/Makefile
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig index deeefa7a1773..f00d529ee034 100644 --- a/drivers/gpu/drm/Kconfig +++ b/drivers/gpu/drm/Kconfig @@ -289,6 +289,8 @@ source "drivers/gpu/drm/pl111/Kconfig"
source "drivers/gpu/drm/tve200/Kconfig"
+source "drivers/gpu/drm/lima/Kconfig"
# Keep legacy drivers last
menuconfig DRM_LEGACY diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile index 50093ff4479b..aba686e41d6b 100644 --- a/drivers/gpu/drm/Makefile +++ b/drivers/gpu/drm/Makefile @@ -103,3 +103,4 @@ obj-$(CONFIG_DRM_MXSFB) += mxsfb/ obj-$(CONFIG_DRM_TINYDRM) += tinydrm/ obj-$(CONFIG_DRM_PL111) += pl111/ obj-$(CONFIG_DRM_TVE200) += tve200/ +obj-$(CONFIG_DRM_LIMA) += lima/ diff --git a/drivers/gpu/drm/lima/Kconfig b/drivers/gpu/drm/lima/Kconfig new file mode 100644 index 000000000000..4ce9ac2e8204 --- /dev/null +++ b/drivers/gpu/drm/lima/Kconfig @@ -0,0 +1,9 @@
+config DRM_LIMA
tristate "LIMA (DRM support for ARM Mali 400/450 GPU)"
depends on DRM
depends on ARCH_SUNXI || ARCH_ROCKCHIP || ARCH_EXYNOS || ARCH_MESON
You can add ARCH_ZYNQMP here too , it has Mali 400 MP2.
Well, as Qiang Yu already figured, it seems much smarter to not enumerate every possible platform here. More than that, the Kconfig depends should be strictly technical. There is nothing in this driver which is ARM specific, in fact I managed to compile it for x86-64 as well (with some small fix in a random header file). In fact there are x86-64 based SoCs pairing Intel Atom cores with a Mali GPUs: https://en.wikipedia.org/wiki/Rockchip#Tablet_processors_with_integrated_mod...
So you can get rid of this whole line at all, meaning you don't even need the "depends on ARM || ARM64 || COMPILE_TEST" you have in your gitlab repo.
Cheers, Andre.
-- Best regards, Marek Vasut _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
On Sat, Jul 14, 2018 at 8:07 PM André Przywara andre.przywara@arm.com wrote:
Yeah, you should do that anyway. Actually you should try to avoid those explicit checks in the first place. Drivers shouldn't need to care about the "bit size" of the CPU, and Linux provides many ways to automatically cope with that, with types like phys_addr_t for instance. Quickly grepping I find "need_dma32" in lima_ttm.c:lima_ttm_init(); is that the only place you need to check?
Yes, the Mali GPU can only use physical memory within a 32-bit address space. So I check the ARM64 and ARM_LPAE configs, which indicate a possibly larger-than-32-bit address space, to decide what to pass in "need_dma32".
Regards, Qiang
On 07/14/2018 03:18 PM, Qiang Yu wrote:
Yes, the Mali GPU can only use physical memory within a 32-bit address space.
Ah, thanks, I was wondering about that.
So I check the ARM64 and ARM_LPAE configs, which indicate a possibly larger-than-32-bit address space, to decide what to pass in "need_dma32".
Mmh, but this is not how I understand this parameter. To me it looks like it is a property of the device (GPU and bus), not the CPU. So if the Mali 4xx can only do 32-bit DMA, then this parameter should *always* be true, regardless of whether the CPU can address more than 4GB. If the system is restricted to 32-bit anyway, it should not hurt to have it set to true, even though it is not needed in this case.
So you can drop this check altogether and just always pass "true".
Cheers, Andre.
On Sun, Jul 15, 2018 at 3:16 AM André Przywara andre.przywara@arm.com wrote:
So I check the ARM64 and ARM_LPAE configs, which indicate a possibly larger-than-32-bit address space, to decide what to pass in "need_dma32".
Mmh, but this is not how I understand this parameter. To me it looks like it is a property of the device (GPU and bus), not the CPU. So if the Mali 4xx can only do 32-bit DMA, then this parameter should *always* be true, regardless of whether the CPU can address more than 4GB. If the system is restricted to 32-bit anyway, it should not hurt to have it set to true, even though it is not needed in this case.
So you can drop this check altogether and just always pass "true".
This parameter makes TTM allocate pages with the GFP_DMA32 flag; I'd be happy to always set it if GFP_DMA32 won't cause any trouble on 32-bit CPUs.
Regards, Qiang
Cheers, Andre.
On Sat, Jun 16, 2018 at 1:23 AM Andre Przywara andre.przywara@arm.com wrote:
On 05/23/2018 17:16, Marek Vasut wrote:
On 05/18/2018 11:28 AM, Qiang Yu wrote: > From: Lima Project Developers dri-devel@lists.freedesktop.org > > Signed-off-by: Qiang Yu yuq825@gmail.com > Signed-off-by: Neil Armstrong narmstrong@baylibre.com > Signed-off-by: Simon Shields simon@lineageos.org > Signed-off-by: Heiko Stuebner heiko@sntech.de > --- > drivers/gpu/drm/Kconfig | 2 ++ > drivers/gpu/drm/Makefile | 1 + > drivers/gpu/drm/lima/Kconfig | 9 +++++++++ > drivers/gpu/drm/lima/Makefile | 19 +++++++++++++++++++ > 4 files changed, 31 insertions(+) > create mode 100644 drivers/gpu/drm/lima/Kconfig > create mode 100644 drivers/gpu/drm/lima/Makefile > > diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig > index deeefa7a1773..f00d529ee034 100644 > --- a/drivers/gpu/drm/Kconfig > +++ b/drivers/gpu/drm/Kconfig > @@ -289,6 +289,8 @@ source "drivers/gpu/drm/pl111/Kconfig" > > source "drivers/gpu/drm/tve200/Kconfig" > > +source "drivers/gpu/drm/lima/Kconfig" > + > # Keep legacy drivers last > > menuconfig DRM_LEGACY > diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile > index 50093ff4479b..aba686e41d6b 100644 > --- a/drivers/gpu/drm/Makefile > +++ b/drivers/gpu/drm/Makefile > @@ -103,3 +103,4 @@ obj-$(CONFIG_DRM_MXSFB) += mxsfb/ > obj-$(CONFIG_DRM_TINYDRM) += tinydrm/ > obj-$(CONFIG_DRM_PL111) += pl111/ > obj-$(CONFIG_DRM_TVE200) += tve200/ > +obj-$(CONFIG_DRM_LIMA) += lima/ > diff --git a/drivers/gpu/drm/lima/Kconfig b/drivers/gpu/drm/lima/Kconfig > new file mode 100644 > index 000000000000..4ce9ac2e8204 > --- /dev/null > +++ b/drivers/gpu/drm/lima/Kconfig > @@ -0,0 +1,9 @@ > + > +config DRM_LIMA > + tristate "LIMA (DRM support for ARM Mali 400/450 GPU)" > + depends on DRM > + depends on ARCH_SUNXI || ARCH_ROCKCHIP || ARCH_EXYNOS || ARCH_MESON
You can add ARCH_ZYNQMP here too, it has a Mali 400 MP2.
Well, as Qiang Yu already figured, it seems much smarter not to enumerate every possible platform here. More than that, the Kconfig depends should be strictly technical, and there is nothing in this driver which is ARM specific; in fact I managed to compile it for x86-64 as well (with some small fix in a random header file). There are even x86-64 based SoCs pairing Intel Atom cores with a Mali GPU: https://en.wikipedia.org/wiki/Rockchip#Tablet_processors_with_integrated_mod...
So you can get rid of this whole line altogether, meaning you don't even need the "depends on ARM || ARM64 || COMPILE_TEST" you have in your gitlab repo.
Cheers, Andre.
-- Best regards, Marek Vasut _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
On Fri, May 18, 2018 at 05:27:51PM +0800, Qiang Yu wrote:
Kernel DRM driver for ARM Mali 400/450 GPUs.
This implementation mainly takes the amdgpu DRM driver as a reference.
- Mali 4xx GPUs have two kinds of processors, GP and PP. GP is for OpenGL vertex shader processing and PP is for fragment shader processing. Each processor has its own MMU, so the processors work in a virtual address space.
- There's only one GP but multiple PPs (max 4 for mali 400 and 8 for mali 450) in the same mali 4xx GPU. All PPs are grouped together to handle a single fragment shader task, divided by FB output tiled pixels. The mali 400 user space driver is responsible for assigning target tiled pixels to each PP, but mali 450 has a HW module called DLBU to dynamically balance each PP's load.
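The static tile assignment that the mali 400 user space driver has to do can be sketched roughly as follows. This is an illustrative model only, not driver code; all names and the layout are invented for the example, and on mali 450 the DLBU performs this balancing in hardware instead:

```c
#include <assert.h>

/* Sketch: statically assign each PP a contiguous range of output tiles,
 * as evenly as possible. Everything here is invented for illustration;
 * the real driver would encode the assignment in the PP frame registers. */
struct tile_range {
	unsigned int first;	/* first tile index assigned to this PP */
	unsigned int count;	/* number of tiles assigned to this PP */
};

static void split_tiles(unsigned int total_tiles, unsigned int num_pp,
			struct tile_range *out)
{
	unsigned int base = total_tiles / num_pp;
	unsigned int rem = total_tiles % num_pp;	/* leftover tiles */
	unsigned int next = 0;
	unsigned int i;

	for (i = 0; i < num_pp; i++) {
		/* the first 'rem' PPs take one extra tile each */
		out[i].first = next;
		out[i].count = base + (i < rem ? 1 : 0);
		next += out[i].count;
	}
}
```

For a 10-tile frame on a 4-PP mali 400 this would give the first two PPs 3 tiles each and the last two 2 tiles each.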
- The user space driver allocates buffer objects and maps them into the GPU virtual address space, uploads the command stream and draw data with a CPU mmap of the buffer object, then submits a task to GP/PP with a register frame indicating where the command stream is, plus misc settings.
- There's no command stream validation/relocation because each user process has its own GPU virtual address space. The GP/PP MMUs switch virtual address spaces between running two tasks from different user processes. Erroneous or evil user space code just gets an MMU fault or a GP/PP error IRQ, after which the HW/SW is recovered.
- Use TTM as the MM. TTM_PL_TT type memory is used as the content of lima buffer objects, which is allocated from the TTM page pool. All lima buffer objects get pinned with TTM_PL_FLAG_NO_EVICT at allocation, so there's no buffer eviction or swap for now. We need reverse engineering to see if and how GP/PP support MMU fault recovery (continuing execution). Otherwise we have to pin/unpin each involved buffer at task creation/deletion.
Curious question, but why? The one thing that ttm does help you with is keeping track of buffer moves from/to discrete memory. You get that benefit at the cost of a nice midlayer which tends to get in the way. If all you do is map buffers into pagetables, then rolling your own (like e.g. etnaviv does) is I think much better: all the other ttm functionality (reservations, drm_mm allocation management, fences) has been extracted and is available to any driver without ttm. -Daniel
- Use drm_sched for GPU task scheduling. Each OpenGL context should have a lima context object in the kernel to distinguish tasks from different users. drm_sched gets tasks from each lima context in a fair way.
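The fair per-context selection described above can be modeled as a round-robin scan over per-context queues. This is a behavioral sketch only, not drm_sched's actual implementation; all names are invented:

```c
#include <assert.h>

/* Sketch: pick the next task round-robin across per-context queues, so
 * one busy context cannot starve the others. Models the fairness idea,
 * nothing more. */
struct ctx_queue {
	unsigned int pending;	/* tasks waiting in this context */
};

/* Returns the index of the context to service next, or -1 if all are
 * idle. *cursor remembers where the previous scan stopped. */
static int pick_next_ctx(struct ctx_queue *q, unsigned int n,
			 unsigned int *cursor)
{
	unsigned int i;

	for (i = 0; i < n; i++) {
		unsigned int idx = (*cursor + i) % n;

		if (q[idx].pending) {
			q[idx].pending--;
			*cursor = (idx + 1) % n; /* resume after this one */
			return (int)idx;
		}
	}
	return -1;
}
```

Because the cursor advances past the chosen context, a context with a deep queue only gets one pick per round.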
Not implemented:
- Dump buffer support
- Power management
- Performance counter
This patch series just packs a pair of .c/.h files into each patch. For the whole history of this driver's development, see: https://github.com/yuq/linux-lima/commits/lima-4.17-rc4
The Mesa driver is still in development and not ready for daily usage, but it can run some simple tests like kmscube and glmark2, see: https://github.com/yuq/mesa-lima
Andrei Paulau (1):
  arm64/dts: add switch-delay for meson mali

Lima Project Developers (10):
  drm/lima: add mali 4xx GPU hardware regs
  drm/lima: add lima core driver
  drm/lima: add GPU device functions
  drm/lima: add PMU related functions
  drm/lima: add PP related functions
  drm/lima: add MMU related functions
  drm/lima: add GPU virtual memory space handling
  drm/lima: add GEM related functions
  drm/lima: add GEM Prime related functions
  drm/lima: add makefile and kconfig

Qiang Yu (12):
  dt-bindings: add switch-delay property for mali-utgard
  arm64/dts: add switch-delay for meson mali
  Revert "drm: Nerf the preclose callback for modern drivers"
  drm/lima: add lima uapi header
  drm/lima: add L2 cache functions
  drm/lima: add GP related functions
  drm/lima: add BCAST related function
  drm/lima: add DLBU related functions
  drm/lima: add TTM subsystem functions
  drm/lima: add buffer object functions
  drm/lima: add GPU schedule using DRM_SCHED
  drm/lima: add context related functions

Simon Shields (1):
  ARM: dts: add gpu node to exynos4
 .../bindings/gpu/arm,mali-utgard.txt        |   4 +
 arch/arm/boot/dts/exynos4.dtsi              |  33 ++
 arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi |   1 +
 .../boot/dts/amlogic/meson-gxl-mali.dtsi    |   1 +
 drivers/gpu/drm/Kconfig                     |   2 +
 drivers/gpu/drm/Makefile                    |   1 +
 drivers/gpu/drm/drm_file.c                  |   8 +-
 drivers/gpu/drm/lima/Kconfig                |   9 +
 drivers/gpu/drm/lima/Makefile               |  19 +
 drivers/gpu/drm/lima/lima_bcast.c           |  65 +++
 drivers/gpu/drm/lima/lima_bcast.h           |  34 ++
 drivers/gpu/drm/lima/lima_ctx.c             | 143 +++++
 drivers/gpu/drm/lima/lima_ctx.h             |  51 ++
 drivers/gpu/drm/lima/lima_device.c          | 407 ++++++++++++++
 drivers/gpu/drm/lima/lima_device.h          | 136 +++++
 drivers/gpu/drm/lima/lima_dlbu.c            |  75 +++
 drivers/gpu/drm/lima/lima_dlbu.h            |  37 ++
 drivers/gpu/drm/lima/lima_drv.c             | 466 ++++++++++++++++
 drivers/gpu/drm/lima/lima_drv.h             |  77 +++
 drivers/gpu/drm/lima/lima_gem.c             | 459 ++++++++++++++++
 drivers/gpu/drm/lima/lima_gem.h             |  41 ++
 drivers/gpu/drm/lima/lima_gem_prime.c       |  66 +++
 drivers/gpu/drm/lima/lima_gem_prime.h       |  31 ++
 drivers/gpu/drm/lima/lima_gp.c              | 293 +++++++++++
 drivers/gpu/drm/lima/lima_gp.h              |  34 ++
 drivers/gpu/drm/lima/lima_l2_cache.c        |  98 ++++
 drivers/gpu/drm/lima/lima_l2_cache.h        |  32 ++
 drivers/gpu/drm/lima/lima_mmu.c             | 154 ++++++
 drivers/gpu/drm/lima/lima_mmu.h             |  34 ++
 drivers/gpu/drm/lima/lima_object.c          | 120 +++++
 drivers/gpu/drm/lima/lima_object.h          |  87 +++
 drivers/gpu/drm/lima/lima_pmu.c             |  85 +++
 drivers/gpu/drm/lima/lima_pmu.h             |  30 ++
 drivers/gpu/drm/lima/lima_pp.c              | 418 +++++++++++++++
 drivers/gpu/drm/lima/lima_pp.h              |  37 ++
 drivers/gpu/drm/lima/lima_regs.h            | 304 +++++++++++
 drivers/gpu/drm/lima/lima_sched.c           | 497 ++++++++++++++++++
 drivers/gpu/drm/lima/lima_sched.h           | 126 +++++
 drivers/gpu/drm/lima/lima_ttm.c             | 409 ++++++++++++++
 drivers/gpu/drm/lima/lima_ttm.h             |  44 ++
 drivers/gpu/drm/lima/lima_vm.c              | 312 +++++++++++
 drivers/gpu/drm/lima/lima_vm.h              |  73 +++
 include/drm/drm_drv.h                       |  23 +-
 include/uapi/drm/lima_drm.h                 | 195 +++++++
 44 files changed, 5565 insertions(+), 6 deletions(-)
 create mode 100644 drivers/gpu/drm/lima/Kconfig
 create mode 100644 drivers/gpu/drm/lima/Makefile
 create mode 100644 drivers/gpu/drm/lima/lima_bcast.c
 create mode 100644 drivers/gpu/drm/lima/lima_bcast.h
 create mode 100644 drivers/gpu/drm/lima/lima_ctx.c
 create mode 100644 drivers/gpu/drm/lima/lima_ctx.h
 create mode 100644 drivers/gpu/drm/lima/lima_device.c
 create mode 100644 drivers/gpu/drm/lima/lima_device.h
 create mode 100644 drivers/gpu/drm/lima/lima_dlbu.c
 create mode 100644 drivers/gpu/drm/lima/lima_dlbu.h
 create mode 100644 drivers/gpu/drm/lima/lima_drv.c
 create mode 100644 drivers/gpu/drm/lima/lima_drv.h
 create mode 100644 drivers/gpu/drm/lima/lima_gem.c
 create mode 100644 drivers/gpu/drm/lima/lima_gem.h
 create mode 100644 drivers/gpu/drm/lima/lima_gem_prime.c
 create mode 100644 drivers/gpu/drm/lima/lima_gem_prime.h
 create mode 100644 drivers/gpu/drm/lima/lima_gp.c
 create mode 100644 drivers/gpu/drm/lima/lima_gp.h
 create mode 100644 drivers/gpu/drm/lima/lima_l2_cache.c
 create mode 100644 drivers/gpu/drm/lima/lima_l2_cache.h
 create mode 100644 drivers/gpu/drm/lima/lima_mmu.c
 create mode 100644 drivers/gpu/drm/lima/lima_mmu.h
 create mode 100644 drivers/gpu/drm/lima/lima_object.c
 create mode 100644 drivers/gpu/drm/lima/lima_object.h
 create mode 100644 drivers/gpu/drm/lima/lima_pmu.c
 create mode 100644 drivers/gpu/drm/lima/lima_pmu.h
 create mode 100644 drivers/gpu/drm/lima/lima_pp.c
 create mode 100644 drivers/gpu/drm/lima/lima_pp.h
 create mode 100644 drivers/gpu/drm/lima/lima_regs.h
 create mode 100644 drivers/gpu/drm/lima/lima_sched.c
 create mode 100644 drivers/gpu/drm/lima/lima_sched.h
 create mode 100644 drivers/gpu/drm/lima/lima_ttm.c
 create mode 100644 drivers/gpu/drm/lima/lima_ttm.h
 create mode 100644 drivers/gpu/drm/lima/lima_vm.c
 create mode 100644 drivers/gpu/drm/lima/lima_vm.h
 create mode 100644 include/uapi/drm/lima_drm.h
-- 2.17.0
dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
On Wed, May 23, 2018 at 5:02 PM, Daniel Vetter daniel@ffwll.ch wrote:
On Fri, May 18, 2018 at 05:27:51PM +0800, Qiang Yu wrote:
- Use TTM as the MM. TTM_PL_TT type memory is used as the content of lima buffer objects, which is allocated from the TTM page pool. All lima buffer objects get pinned with TTM_PL_FLAG_NO_EVICT at allocation, so there's no buffer eviction or swap for now. We need reverse engineering to see if and how GP/PP support MMU fault recovery (continuing execution). Otherwise we have to pin/unpin each involved buffer at task creation/deletion.
Curious question, but why? The one thing that ttm does help you with is keeping track of buffer moves from/to discrete memory. You get that benefit at the cost of a nice midlayer which tends to get in the way. If all you do is map buffers into pagetables, then rolling your own (like e.g. etnaviv does) is I think much better: all the other ttm functionality (reservations, drm_mm allocation management, fences) has been extracted and is available to any driver without ttm.
Yeah, I could spend more time writing one without that much of TTM's redundant functionality, but as there's one that can be used directly, I gave up. I see virtio GPU works this way too.
If I'm going to write a new one, I also want the ttm_page_alloc.c page pool. But my interface won't be as generic as TTM's.
Regards, Qiang
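The page-pool idea referred to here (as in ttm_page_alloc.c) boils down to caching freed pages on a free list so later allocations can reuse them instead of going back to the system allocator. A minimal user space sketch of that idea, with all names invented and no relation to the actual TTM code:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative page pool: freed "pages" go on a free list and are
 * handed back out on the next allocation (fast path); only an empty
 * pool falls through to the system allocator (slow path). */
struct pool_page {
	struct pool_page *next;
	/* page payload would live here */
};

struct page_pool {
	struct pool_page *free_list;
	unsigned int free_count;
};

static struct pool_page *pool_alloc(struct page_pool *pool)
{
	if (pool->free_list) {
		struct pool_page *p = pool->free_list;

		pool->free_list = p->next;
		pool->free_count--;
		return p;	/* fast path: reuse a pooled page */
	}
	return malloc(sizeof(struct pool_page));	/* slow path */
}

static void pool_free(struct page_pool *pool, struct pool_page *p)
{
	p->next = pool->free_list;	/* return page to the pool */
	pool->free_list = p;
	pool->free_count++;
}
```

The real TTM pool additionally tracks caching attributes (cached/write-combined/uncached) and shrinks under memory pressure; this sketch only shows the reuse mechanism.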
-Daniel
On 18.05.2018 11:27, Qiang Yu wrote:
- Use TTM as the MM. TTM_PL_TT type memory is used as the content of lima buffer objects, which is allocated from the TTM page pool. All lima buffer objects get pinned with TTM_PL_FLAG_NO_EVICT at allocation, so there's no buffer eviction or swap for now. We need reverse engineering to see if and how GP/PP support MMU fault recovery (continuing execution). Otherwise we have to pin/unpin each involved buffer at task creation/deletion.
Well, pinning all memory is usually a no-go for upstreaming. But since you are already using drm_sched for GPU task scheduling, why do you actually need this?
The scheduler should take care of signaling all fences when the hardware is done with its magic, and that is enough for TTM to note that a buffer object is movable again (i.e. to unpin it).
Christian.
On Wed, May 23, 2018 at 5:29 PM, Christian König ckoenig.leichtzumerken@gmail.com wrote:
On 18.05.2018 11:27, Qiang Yu wrote:
- Use TTM as the MM. TTM_PL_TT type memory is used as the content of lima buffer objects, which is allocated from the TTM page pool. All lima buffer objects get pinned with TTM_PL_FLAG_NO_EVICT at allocation, so there's no buffer eviction or swap for now. We need reverse engineering to see if and how GP/PP support MMU fault recovery (continuing execution). Otherwise we have to pin/unpin each involved buffer at task creation/deletion.
Well, pinning all memory is usually a no-go for upstreaming. But since you are already using drm_sched for GPU task scheduling, why do you actually need this?
The scheduler should take care of signaling all fences when the hardware is done with its magic, and that is enough for TTM to note that a buffer object is movable again (i.e. to unpin it).
Please correct me if I'm wrong.
One way to implement eviction/swap is like this: call validation on each buffer involved in a task. But this won't prevent eviction/swap while the task is executing, so a GPU MMU fault may happen, and in the fault handler we need to recover the evicted/swapped buffer.
Another way is to pin/unpin the buffers involved when a task is created/freed.
The first way is better when memory load is low and the second way is better when memory load is high. The first way also needs less memory.
So I'd prefer the first way, but since the GPU MMU fault HW op needs reverse engineering, I have to pin all buffers for now. After the HW op is clear, I can choose one way to implement.
Regards, Qiang
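The second approach discussed above (pin at task creation, unpin at task free) reduces to per-buffer pin counting: a buffer is evictable only while no in-flight task holds a pin on it. A minimal sketch of that bookkeeping, with all names invented for the example:

```c
#include <assert.h>

/* Illustrative pin counting for the pin/unpin-per-task scheme. Not
 * lima driver code; real code would do this under the reservation
 * lock of each buffer object. */
struct bo {
	unsigned int pin_count;	/* number of in-flight tasks using this BO */
};

/* Pin every buffer referenced by a task at task creation. */
static void task_pin(struct bo **bos, unsigned int n)
{
	unsigned int i;

	for (i = 0; i < n; i++)
		bos[i]->pin_count++;
}

/* Drop the pins when the task's fence signals / the task is freed. */
static void task_unpin(struct bo **bos, unsigned int n)
{
	unsigned int i;

	for (i = 0; i < n; i++)
		bos[i]->pin_count--;
}

/* The eviction path may only pick buffers with no outstanding pins. */
static int bo_evictable(const struct bo *bo)
{
	return bo->pin_count == 0;
}
```

With fences from drm_sched, task_unpin would naturally run from the fence-signaled path, which is the mechanism Christian is pointing at.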
On 23.05.2018 at 15:52, Qiang Yu wrote:
On Wed, May 23, 2018 at 5:29 PM, Christian König ckoenig.leichtzumerken@gmail.com wrote:
On 18.05.2018 at 11:27, Qiang Yu wrote:
- Use TTM as the MM. TTM_PL_TT type memory is used as the content of a lima buffer object, which is allocated from the TTM page pool. All lima buffer objects get pinned with TTM_PL_FLAG_NO_EVICT at allocation, so there's no buffer eviction and swap for now. We need reverse engineering to see if and how GP/PP support MMU fault recovery (continuing execution). Otherwise we have to pin/unpin each involved buffer at task creation/deletion.
Well pinning all memory is usually a no-go for upstreaming. But since you are already using the drm_sched for GPU task scheduling why are you actually needing this?
The scheduler should take care of signaling all fences when the hardware is done with its magic, and that is enough for TTM to note that a buffer object is movable again (e.g. unpin them).
Please correct me if I'm wrong.
Well, you are wrong :)
One way to implement eviction/swap is this: call validation on each buffer involved in a task. But that won't prevent eviction/swap while the task executes, so a GPU MMU fault may happen, and in the handler we would need to restore the evicted/swapped buffer.
The other way is to pin/unpin the involved buffers at task create/free.
The first way is better when memory load is low and the second when memory load is high. The first way also needs less memory.
So I'd prefer the first way, but because the GPU MMU fault HW op still needs reverse engineering, I have to pin all buffers for now. Once the HW op is clear, I can choose one way to implement.
The general approach is: 1.) Lock all BOs 2.) Validate all BOs 3.) Add the fence 4.) Unlock the BOs
BOs can't be evicted while they are locked and since you already add the fence that should be perfectly sufficient to prevent it from being evicted until your operation is completed.
Using the MMU is certainly better in general, but usually only optional and a pain in the ass to get working. We have had that in amdgpu for quite a while now as well and still don't use it because of that.
Regards, Christian.
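[Editor's note: for illustration only, the lock → validate → add-fence → unlock flow described above can be modeled as a small self-contained userspace sketch. All names below are made up for the toy model; in the kernel, helpers such as ttm_eu_reserve_buffers() and ttm_eu_fence_buffer_objects() perform the locking and fencing steps against the BO's reservation object.]

```c
#include <pthread.h>
#include <stdbool.h>

/* Toy model of the submission flow: 1.) lock all BOs, 2.) validate,
 * 3.) add the fence, 4.) unlock. A plain counter stands in for a
 * dma_fence. Hypothetical names, not the real TTM API. */

static unsigned int signaled_seqno;   /* highest seqno the "GPU" completed */

struct toy_bo {
	pthread_mutex_t lock;         /* stands in for the BO reservation lock */
	unsigned int fence_seqno;     /* 0 = idle, else a pending fence */
};

void toy_bo_init(struct toy_bo *bo)
{
	pthread_mutex_init(&bo->lock, NULL);
	bo->fence_seqno = 0;
}

/* Task prepare: lock, validate (a no-op here), add fence, unlock. */
void toy_submit(struct toy_bo *bo, unsigned int seqno)
{
	pthread_mutex_lock(&bo->lock);
	/* validation would make sure the BO is in a GPU-visible placement */
	bo->fence_seqno = seqno;
	pthread_mutex_unlock(&bo->lock);
}

/* Eviction refuses to touch a locked BO or one whose fence is pending. */
bool toy_try_evict(struct toy_bo *bo)
{
	bool evicted = false;

	if (pthread_mutex_trylock(&bo->lock) != 0)
		return false;                 /* locked: task prepare in progress */
	if (bo->fence_seqno <= signaled_seqno) {
		bo->fence_seqno = 0;          /* fence signaled: safe to move */
		evicted = true;
	}
	pthread_mutex_unlock(&bo->lock);
	return evicted;
}

void toy_gpu_complete(unsigned int seqno)
{
	signaled_seqno = seqno;               /* fence signals, e.g. from an IRQ */
}
```

The point of the model: once the fence is attached, the BO can be unlocked immediately — eviction is blocked by the unsignaled fence, not by holding the lock for the task's lifetime.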
On Wed, May 23, 2018 at 9:59 PM, Christian König christian.koenig@amd.com wrote:
The general approach is: 1.) Lock all BOs 2.) Validate all BOs 3.) Add the fence 4.) Unlock the BOs
This is the task prepare process, right?
BOs can't be evicted while they are locked
During the task prepare stage they're locked, but after the task is queued they get unlocked and become evictable?
and since you already add the fence that should be perfectly sufficient to prevent it from being evicted until your operation is completed.
You mean I have to explicitly pin it with TTM_PL_FLAG_NO_EVICT at task creation, or TTM will check the buffer's reservation object and won't evict it if it sees a fence?
Regards, Qiang
On 23.05.2018 at 16:13, Qiang Yu wrote:
The general approach is: 1.) Lock all BOs 2.) Validate all BOs 3.) Add the fence 4.) Unlock the BOs
This is the task prepare process, right?
Yes.
BOs can't be evicted while they are locked
During the task prepare stage they're locked, but after the task is queued they get unlocked and become evictable?
Yes, the fence you added to the BO prevents TTM from evicting the BO until the fence has signaled.
and since you already add the fence that should be perfectly sufficient to prevent it from being evicted until your operation is completed.
You mean I have to explicitly pin it with TTM_PL_FLAG_NO_EVICT at task creation, or TTM will check the buffer's reservation object and won't evict it if it sees a fence?
The second. You *don't* have to explicitly pin it with TTM_PL_FLAG_NO_EVICT as long as you always add the correct fence with your command submissions.
When evicting something, TTM will take a look at the fences assigned to the BO and will either not evict it at all or wait for all fences to complete before doing so.
When you need to update some internal state or flush caches or the like when a BO is evicted, TTM also has callbacks for this.
Regards, Christian.
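[Editor's note: the eviction side described above can be sketched with the same kind of toy model — again an editorial illustration with invented names, not TTM's actual code. The evictor inspects the fence attached at submission and either skips the busy BO or waits for the fence before moving it.]

```c
#include <stdbool.h>

/* Toy model of TTM's eviction decision. A plain counter stands in for a
 * dma_fence: a fence "signals" once the GPU's completed seqno reaches it.
 * All names here are hypothetical. */

unsigned int gpu_signaled_seqno;   /* advanced when the GPU finishes work */

struct model_bo {
	unsigned int last_fence;   /* seqno added at submission, 0 = idle */
};

static bool fence_signaled(unsigned int seqno)
{
	return seqno <= gpu_signaled_seqno;
}

/* Returns true if the BO may be moved now. With no_wait, the evictor just
 * skips busy BOs and tries another candidate instead. */
bool model_evict(struct model_bo *bo, bool no_wait)
{
	if (bo->last_fence && !fence_signaled(bo->last_fence)) {
		if (no_wait)
			return false;
		while (!fence_signaled(bo->last_fence))
			;   /* the real code sleeps on the fence instead */
	}
	bo->last_fence = 0;
	/* a driver-provided callback could flush caches / fix up state here */
	return true;
}
```

This is where the "don't evict at all, or wait for all fences" choice shows up: the no_wait path refuses busy BOs, the other path blocks until the fence signals.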
On Wed, May 23, 2018 at 10:19 PM, Christian König christian.koenig@amd.com wrote:
Am 23.05.2018 um 16:13 schrieb Qiang Yu:
On Wed, May 23, 2018 at 9:59 PM, Christian König christian.koenig@amd.com wrote:
Am 23.05.2018 um 15:52 schrieb Qiang Yu:
On Wed, May 23, 2018 at 5:29 PM, Christian König ckoenig.leichtzumerken@gmail.com wrote:
Am 18.05.2018 um 11:27 schrieb Qiang Yu:
Kernel DRM driver for ARM Mali 400/450 GPUs.
This implementation mainly take amdgpu DRM driver as reference.
- Mali 4xx GPUs have two kinds of processors GP and PP. GP is for OpenGL vertex shader processing and PP is for fragment shader processing. Each processor has its own MMU so prcessors work in virtual address space.
- There's only one GP but multiple PP (max 4 for mali 400 and 8 for mali 450) in the same mali 4xx GPU. All PPs are grouped togather to handle a single fragment shader task divided by FB output tiled pixels. Mali 400 user space driver is responsible for assign target tiled pixels to each PP, but mali 450 has a HW module called DLBU to dynamically balance each PP's load.
- User space driver allocate buffer object and map into GPU virtual address space, upload command stream and draw data with CPU mmap of the buffer object, then submit task to GP/PP with a register frame indicating where is the command stream and misc settings.
- There's no command stream validation/relocation due to each user process has its own GPU virtual address space. GP/PP's MMU switch virtual address space before running two tasks from different user process. Error or evil user space code just get MMU fault or GP/PP error IRQ, then the HW/SW will be recovered.
- Use TTM as MM. TTM_PL_TT type memory is used as the content of lima buffer object which is allocated from TTM page pool. all lima buffer object gets pinned with TTM_PL_FLAG_NO_EVICT when allocation, so there's no buffer eviction and swap for now. We need reverse engineering to see if and how GP/PP support MMU fault recovery (continue execution). Otherwise we have to pin/unpin each envolved buffer when task creation/deletion.
Well pinning all memory is usually a no-go for upstreaming. But since you are already using the drm_sched for GPU task scheduling why are you actually needing this?
The scheduler should take care of signaling all fences when the hardware is done with it's magic and that is enough for TTM to note that a buffer object is movable again (e.g. unpin them).
Please correct me if I'm wrong.
Well, you are wrong :)
One way to implement eviction/swap is like this: call validation on each buffers involved in a task, but this won't prevent it from eviction/swap when executing, so a GPU MMU fault may happen and in the handler we need to recover the buffer evicted/swapped.
Another way is pin/unpin buffers evolved when task create/free.
First way is better when memory load is low and second way is better when memory load is high. First way also need less memory.
So I'd prefer first way but due to the GPU MMU fault HW op need reverse engineering, I have to pin all buffers now. After the HW op is clear, I can choose one way to implement.
The general approach is: 1.) Lock all BOs 2.) Validate all BOs 3.) Add the fence 4.) Unlock the BOs
This is the task prepare process, right?
Yes.
BOs can't be evicted while they are locked
During the task prepare stage, they're locked, but after task queued, they get unlocked and be evictable?
Yes, the fence you added to the BO prevents TTM from evicting the BO until the fence signaled.
and since you already add the fence that should be perfectly sufficient to prevent it from being evicted until your operation is completed.
You mean I have to explicitly pin it with TTM_PL_FLAG_NO_EVICT when task creation or TTM will check buffer's reservation object and won't evict it if see a fence?
The second. You *don't* have to explicitly pin it with TTM_PL_FLAG_NO_EVICT as long as you always add the correct fence with your command submissions.
When evicting something TTM will take a look at the fences assigned to the BO and either not evict it at all or wait for all fences to complete before doing so.
When you need to update some internal state or flush caches or stuff like that when a BO is evicted TTM also has callbacks for this.
Oh, thanks for clearing this up for me, it makes my life easier.
Regards, Qiang
Using the MMU is certainly better in general, but usually only optional and a pain in the ass to get working. We have had that in amdgpu for quite a while now and still don't use it because of that.
Regards, Christian.
- Use drm_sched for GPU task scheduling. Each OpenGL context should have a lima context object in the kernel to distinguish tasks from different users. drm_sched gets tasks from each lima context in a fair way.
Not implemented:
- Dump buffer support
- Power management
- Performance counter
This patch series just packs a pair of .c/.h files into each patch. For the whole history of this driver's development, see: https://github.com/yuq/linux-lima/commits/lima-4.17-rc4
The Mesa driver is still in development and not ready for daily use, but it can run some simple tests like kmscube and glmark2, see: https://github.com/yuq/mesa-lima
Andrei Paulau (1):
  arm64/dts: add switch-delay for meson mali

Lima Project Developers (10):
  drm/lima: add mali 4xx GPU hardware regs
  drm/lima: add lima core driver
  drm/lima: add GPU device functions
  drm/lima: add PMU related functions
  drm/lima: add PP related functions
  drm/lima: add MMU related functions
  drm/lima: add GPU virtual memory space handing
  drm/lima: add GEM related functions
  drm/lima: add GEM Prime related functions
  drm/lima: add makefile and kconfig

Qiang Yu (12):
  dt-bindings: add switch-delay property for mali-utgard
  arm64/dts: add switch-delay for meson mali
  Revert "drm: Nerf the preclose callback for modern drivers"
  drm/lima: add lima uapi header
  drm/lima: add L2 cache functions
  drm/lima: add GP related functions
  drm/lima: add BCAST related function
  drm/lima: add DLBU related functions
  drm/lima: add TTM subsystem functions
  drm/lima: add buffer object functions
  drm/lima: add GPU schedule using DRM_SCHED
  drm/lima: add context related functions

Simon Shields (1):
  ARM: dts: add gpu node to exynos4
 .../bindings/gpu/arm,mali-utgard.txt        |   4 +
 arch/arm/boot/dts/exynos4.dtsi              |  33 ++
 arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi |   1 +
 .../boot/dts/amlogic/meson-gxl-mali.dtsi    |   1 +
 drivers/gpu/drm/Kconfig                     |   2 +
 drivers/gpu/drm/Makefile                    |   1 +
 drivers/gpu/drm/drm_file.c                  |   8 +-
 drivers/gpu/drm/lima/Kconfig                |   9 +
 drivers/gpu/drm/lima/Makefile               |  19 +
 drivers/gpu/drm/lima/lima_bcast.c           |  65 +++
 drivers/gpu/drm/lima/lima_bcast.h           |  34 ++
 drivers/gpu/drm/lima/lima_ctx.c             | 143 +++++
 drivers/gpu/drm/lima/lima_ctx.h             |  51 ++
 drivers/gpu/drm/lima/lima_device.c          | 407 ++++++++++++++
 drivers/gpu/drm/lima/lima_device.h          | 136 +++++
 drivers/gpu/drm/lima/lima_dlbu.c            |  75 +++
 drivers/gpu/drm/lima/lima_dlbu.h            |  37 ++
 drivers/gpu/drm/lima/lima_drv.c             | 466 ++++++++++++++++
 drivers/gpu/drm/lima/lima_drv.h             |  77 +++
 drivers/gpu/drm/lima/lima_gem.c             | 459 ++++++++++++++++
 drivers/gpu/drm/lima/lima_gem.h             |  41 ++
 drivers/gpu/drm/lima/lima_gem_prime.c       |  66 +++
 drivers/gpu/drm/lima/lima_gem_prime.h       |  31 ++
 drivers/gpu/drm/lima/lima_gp.c              | 293 +++++++++++
 drivers/gpu/drm/lima/lima_gp.h              |  34 ++
 drivers/gpu/drm/lima/lima_l2_cache.c        |  98 ++++
 drivers/gpu/drm/lima/lima_l2_cache.h        |  32 ++
 drivers/gpu/drm/lima/lima_mmu.c             | 154 ++++++
 drivers/gpu/drm/lima/lima_mmu.h             |  34 ++
 drivers/gpu/drm/lima/lima_object.c          | 120 +++++
 drivers/gpu/drm/lima/lima_object.h          |  87 +++
 drivers/gpu/drm/lima/lima_pmu.c             |  85 +++
 drivers/gpu/drm/lima/lima_pmu.h             |  30 ++
 drivers/gpu/drm/lima/lima_pp.c              | 418 +++++++++++++++
 drivers/gpu/drm/lima/lima_pp.h              |  37 ++
 drivers/gpu/drm/lima/lima_regs.h            | 304 +++++++++++
 drivers/gpu/drm/lima/lima_sched.c           | 497 ++++++++++++++++++
 drivers/gpu/drm/lima/lima_sched.h           | 126 +++++
 drivers/gpu/drm/lima/lima_ttm.c             | 409 ++++++++++++++
 drivers/gpu/drm/lima/lima_ttm.h             |  44 ++
 drivers/gpu/drm/lima/lima_vm.c              | 312 +++++++++++
 drivers/gpu/drm/lima/lima_vm.h              |  73 +++
 include/drm/drm_drv.h                       |  23 +-
 include/uapi/drm/lima_drm.h                 | 195 +++++++
 44 files changed, 5565 insertions(+), 6 deletions(-)
 create mode 100644 drivers/gpu/drm/lima/Kconfig
 create mode 100644 drivers/gpu/drm/lima/Makefile
 create mode 100644 drivers/gpu/drm/lima/lima_bcast.c
 create mode 100644 drivers/gpu/drm/lima/lima_bcast.h
 create mode 100644 drivers/gpu/drm/lima/lima_ctx.c
 create mode 100644 drivers/gpu/drm/lima/lima_ctx.h
 create mode 100644 drivers/gpu/drm/lima/lima_device.c
 create mode 100644 drivers/gpu/drm/lima/lima_device.h
 create mode 100644 drivers/gpu/drm/lima/lima_dlbu.c
 create mode 100644 drivers/gpu/drm/lima/lima_dlbu.h
 create mode 100644 drivers/gpu/drm/lima/lima_drv.c
 create mode 100644 drivers/gpu/drm/lima/lima_drv.h
 create mode 100644 drivers/gpu/drm/lima/lima_gem.c
 create mode 100644 drivers/gpu/drm/lima/lima_gem.h
 create mode 100644 drivers/gpu/drm/lima/lima_gem_prime.c
 create mode 100644 drivers/gpu/drm/lima/lima_gem_prime.h
 create mode 100644 drivers/gpu/drm/lima/lima_gp.c
 create mode 100644 drivers/gpu/drm/lima/lima_gp.h
 create mode 100644 drivers/gpu/drm/lima/lima_l2_cache.c
 create mode 100644 drivers/gpu/drm/lima/lima_l2_cache.h
 create mode 100644 drivers/gpu/drm/lima/lima_mmu.c
 create mode 100644 drivers/gpu/drm/lima/lima_mmu.h
 create mode 100644 drivers/gpu/drm/lima/lima_object.c
 create mode 100644 drivers/gpu/drm/lima/lima_object.h
 create mode 100644 drivers/gpu/drm/lima/lima_pmu.c
 create mode 100644 drivers/gpu/drm/lima/lima_pmu.h
 create mode 100644 drivers/gpu/drm/lima/lima_pp.c
 create mode 100644 drivers/gpu/drm/lima/lima_pp.h
 create mode 100644 drivers/gpu/drm/lima/lima_regs.h
 create mode 100644 drivers/gpu/drm/lima/lima_sched.c
 create mode 100644 drivers/gpu/drm/lima/lima_sched.h
 create mode 100644 drivers/gpu/drm/lima/lima_ttm.c
 create mode 100644 drivers/gpu/drm/lima/lima_ttm.h
 create mode 100644 drivers/gpu/drm/lima/lima_vm.c
 create mode 100644 drivers/gpu/drm/lima/lima_vm.h
 create mode 100644 include/uapi/drm/lima_drm.h
On Wed, May 23, 2018 at 3:52 PM, Qiang Yu yuq825@gmail.com wrote:
On Wed, May 23, 2018 at 5:29 PM, Christian König ckoenig.leichtzumerken@gmail.com wrote:
Am 18.05.2018 um 11:27 schrieb Qiang Yu:
[SNIP]
All the drivers using ttm have something that looks like vram, or a requirement to move buffers around. Afaiui that includes virtio drm driver. From your description you don't have such a requirement, and then doing what etnaviv has done would be a lot simpler. Everything that's not related to buffer movement handling is also available outside of ttm already. -Daniel
dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
On Wed, May 23, 2018 at 11:44 PM, Daniel Vetter daniel@ffwll.ch wrote:
On Wed, May 23, 2018 at 3:52 PM, Qiang Yu yuq825@gmail.com wrote:
On Wed, May 23, 2018 at 5:29 PM, Christian König ckoenig.leichtzumerken@gmail.com wrote:
Am 18.05.2018 um 11:27 schrieb Qiang Yu:
[SNIP]
All the drivers using ttm have something that looks like vram, or a requirement to move buffers around. Afaiui that includes virtio drm driver.
Does the virtio drm driver need to move buffers around? amdgpu also has no vram on APUs.
From your description you don't have such a requirement, and then doing what etnaviv has done would be a lot simpler. Everything that's not related to buffer movement handling is also available outside of ttm already.
Yeah, I could do it like etnaviv, but that's not simpler than using ttm directly, especially if I want some optimizations (like the ttm page pool, ttm_eu_reserve_buffers, ttm_bo_mmap). If I have to/want to implement them anyway, why not just use TTM directly with all those helper functions?
Regards, Qiang
Am 24.05.2018 um 02:31 schrieb Qiang Yu:
On Wed, May 23, 2018 at 11:44 PM, Daniel Vetter daniel@ffwll.ch wrote:
On Wed, May 23, 2018 at 3:52 PM, Qiang Yu yuq825@gmail.com wrote:
On Wed, May 23, 2018 at 5:29 PM, Christian König ckoenig.leichtzumerken@gmail.com wrote:
Am 18.05.2018 um 11:27 schrieb Qiang Yu:
[SNIP]
From your description you don't have such a requirement, and then doing what etnaviv has done would be a lot simpler. Everything that's not related to buffer movement handling is also available outside of ttm already.
Yeah, I could do like etnaviv, but it's not simpler than using ttm directly especially want some optimization (like ttm page pool, ttm_eu_reserve_buffers, ttm_bo_mmap). If I have/want to implement them, why not just use TTM directly with all those helper functions.
Well, TTM has some design flaws (e.g. the heavily layered design, etc.), but it also offers some rather nice functionality.
Regards, Christian.
On Thu, May 24, 2018 at 8:27 AM, Christian König christian.koenig@amd.com wrote:
Am 24.05.2018 um 02:31 schrieb Qiang Yu:
On Wed, May 23, 2018 at 11:44 PM, Daniel Vetter daniel@ffwll.ch wrote:
On Wed, May 23, 2018 at 3:52 PM, Qiang Yu yuq825@gmail.com wrote:
On Wed, May 23, 2018 at 5:29 PM, Christian König ckoenig.leichtzumerken@gmail.com wrote:
Am 18.05.2018 um 11:27 schrieb Qiang Yu:
[SNIP]
Does virtio drm driver need to move buffers around? amdgpu also has no vram when APU.
Afaiui APUs have a range of stolen memory which looks, acts, and is managed like discrete vram. Including moving buffers around.
From your description you don't have such a requirement, and then doing what etnaviv has done would be a lot simpler. Everything that's not related to buffer movement handling is also available outside of ttm already.
Yeah, I could do it like etnaviv, but it's not simpler than using ttm directly, especially if I want some optimizations (like the ttm page pool, ttm_eu_reserve_buffers, ttm_bo_mmap). If I have to/want to implement them anyway, why not just use TTM directly with all those helper functions?
Well, TTM has some design flaws (e.g. its heavily layered design), but it also offers some rather nice functionality.
Yeah, but I still think that for non-discrete drivers, just moving a bunch more of the neat ttm functionality into helpers where everyone can use them (instead of the binary ttm y/n decision) would be much better. E.g. the allocator pool definitely sounds like something gem helpers should be able to do, same for reserving a pile of buffers or default mmap implementations. A lot of that also exists already, thanks to lots of efforts from Noralf Tronnes and others.
I think ideally the long-term goal would be to modularize ttm concepts as much as possible, so that drivers can flexibly pick&choose the bits they need. We're slowly getting there (but definitely not yet there if you need to manage discrete vram I think). -Daniel
On 24.05.2018 at 09:25, Daniel Vetter wrote:
[SNIP]
> Does the virtio drm driver need to move buffers around? amdgpu also has no vram on APUs.
> Afaiui APUs have a range of stolen memory which looks and acts and is managed like discrete vram. Including moving buffers around.
BTW: We are actually working on getting rid of that. E.g. the only thing modern APUs need this stolen VRAM for is page tables, and it's just a matter of my time to fix this.
> From your description you don't have such a requirement, and then doing what etnaviv has done would be a lot simpler. Everything that's not related to buffer movement handling is also available outside of ttm already.
> Yeah, I could do it like etnaviv, but it's not simpler than using ttm directly, especially if I want some optimizations (like the ttm page pool, ttm_eu_reserve_buffers, ttm_bo_mmap). If I have to/want to implement them anyway, why not just use TTM directly with all those helper functions?
> Well, TTM has some design flaws (e.g. its heavily layered design), but it also offers some rather nice functionality.
> Yeah, but I still think that for non-discrete drivers, just moving a bunch more of the neat ttm functionality into helpers where everyone can use them (instead of the binary ttm y/n decision) would be much better. E.g. the allocator pool definitely sounds like something gem helpers should be able to do, same for reserving a pile of buffers or default mmap implementations. A lot of that also exists already, thanks to lots of efforts from Noralf Tronnes and others.
> I think ideally the long-term goal would be to modularize ttm concepts as much as possible, so that drivers can flexibly pick&choose the bits they need. We're slowly getting there (but definitely not yet there if you need to manage discrete vram I think).
Yes, completely agree. It's just that nobody has had time for that.
Especially the different memory pools should be cleaned up and moved into common DRM functionality, or even into the DMA or directly the MM subsystem.
E.g. an interface like: I'm device X and need memory which is cached/uncached/wc, please allocate something for me.
Regards, Christian.