Hi!
Sorry for the long delay since https://patchwork.kernel.org/patch/11132381/, finally got around to giving this a real try.
The main purpose of this series is to upstream the dts change and the binding document, but I wanted to see how far I could probe the GPU, to check that the binding is indeed correct. The rest of the patches are RFC/work-in-progress, but I think some of them could already be picked up.
So this is tested on MT8183 with a chromeos-4.19 kernel, and a ton of backports to get the latest panfrost driver (I should probably try on linux-next at some point but this was the path of least resistance).
I tested it as a module as it's more challenging (originally probing would work built-in, on boot, but not as a module, as I didn't have the power domain changes, and all power domains are on by default during boot).
Probing logs look like this, currently:

[  221.867726] panfrost 13040000.gpu: clock rate = 511999970
[  221.867929] panfrost 13040000.gpu: Linked as a consumer to regulator.14
[  221.868600] panfrost 13040000.gpu: Linked as a consumer to regulator.31
[  221.870586] panfrost 13040000.gpu: Linked as a consumer to genpd:0:13040000.gpu
[  221.871492] panfrost 13040000.gpu: Linked as a consumer to genpd:1:13040000.gpu
[  221.871866] panfrost 13040000.gpu: Linked as a consumer to genpd:2:13040000.gpu
[  221.872427] panfrost 13040000.gpu: mali-g72 id 0x6221 major 0x0 minor 0x3 status 0x0
[  221.872439] panfrost 13040000.gpu: features: 00000000,13de77ff, issues: 00000000,00000400
[  221.872445] panfrost 13040000.gpu: Features: L2:0x07120206 Shader:0x00000000 Tiler:0x00000809 Mem:0x1 MMU:0x00002830 AS:0xff JS:0x7
[  221.872449] panfrost 13040000.gpu: shader_present=0x7 l2_present=0x1
[  221.873526] panfrost 13040000.gpu: error powering up gpu stack
[  221.878088] [drm] Initialized panfrost 1.1.0 20180908 for 13040000.gpu on minor 2
[  221.940817] panfrost 13040000.gpu: error powering up gpu stack
[  222.018233] panfrost 13040000.gpu: error powering up gpu stack
(repeated)
So the GPU is probed, but there's an issue when powering up the STACK; not quite sure why, I'll try to have a deeper look at some point.
Thanks!
Nicolas
v2:
- Use sram instead of mali_sram as SRAM supply name.
- Rename mali@ to gpu@.
- Add dt-bindings changes.
- Stacking patches after the device tree change that allow basic probing (still incomplete and broken).
Nicolas Boichat (7):
  dt-bindings: gpu: mali-bifrost: Add Mediatek MT8183
  arm64: dts: mt8183: Add node for the Mali GPU
  drm/panfrost: Improve error reporting in panfrost_gpu_power_on
  drm/panfrost: Add support for a second regulator for the GPU
  drm/panfrost: Add support for multiple power domains
  RFC: drm/panfrost: Add bifrost compatible string
  RFC: drm/panfrost: devfreq: Add support for 2 regulators
 .../bindings/gpu/arm,mali-bifrost.yaml      |  20 ++++
 arch/arm64/boot/dts/mediatek/mt8183-evb.dts |   7 ++
 arch/arm64/boot/dts/mediatek/mt8183.dtsi    | 104 +++++++++++++++++
 drivers/gpu/drm/panfrost/panfrost_devfreq.c |  18 +++
 drivers/gpu/drm/panfrost/panfrost_device.c  | 108 ++++++++++++++++--
 drivers/gpu/drm/panfrost/panfrost_device.h  |   7 ++
 drivers/gpu/drm/panfrost/panfrost_drv.c     |   1 +
 drivers/gpu/drm/panfrost/panfrost_gpu.c     |  15 ++-
 8 files changed, 267 insertions(+), 13 deletions(-)
Define a compatible string for the Mali Bifrost GPU found in Mediatek's MT8183 SoCs.
Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
---
 .../bindings/gpu/arm,mali-bifrost.yaml | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)
diff --git a/Documentation/devicetree/bindings/gpu/arm,mali-bifrost.yaml b/Documentation/devicetree/bindings/gpu/arm,mali-bifrost.yaml
index 4ea6a8789699709..9e095608d2d98f0 100644
--- a/Documentation/devicetree/bindings/gpu/arm,mali-bifrost.yaml
+++ b/Documentation/devicetree/bindings/gpu/arm,mali-bifrost.yaml
@@ -17,6 +17,7 @@ properties:
     items:
       - enum:
           - amlogic,meson-g12a-mali
+          - mediatek,mt8183-mali
           - realtek,rtd1619-mali
           - rockchip,px30-mali
       - const: arm,mali-bifrost # Mali Bifrost GPU model/revision is fully discoverable
@@ -62,6 +63,23 @@ allOf:
             minItems: 2
         required:
           - resets
+  - if:
+      properties:
+        compatible:
+          contains:
+            const: mediatek,mt8183-mali
+    then:
+      properties:
+        sram-supply: true
+        power-domains:
+          description:
+            List of phandle and PM domain specifier as documented in
+            Documentation/devicetree/bindings/power/power_domain.txt
+          minItems: 3
+          maxItems: 3
+      required:
+        - sram-supply
+        - power-domains
 
 examples:
   - |
Add a basic GPU node for mt8183.
Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
---
Upstreaming what matches existing bindings from our Chromium OS tree:
https://chromium.googlesource.com/chromiumos/third_party/kernel/+/chromeos-4...
The evb part of this change depends on this patch to add PMIC dtsi: https://patchwork.kernel.org/patch/10928161/
The binding we use with out-of-tree Mali drivers includes more clocks; these are used for devfreq: the out-of-tree driver switches clk_mux to clk_sub_parent (26 MHz), adjusts clk_main_parent, then switches clk_mux back to clk_main_parent (see https://chromium.googlesource.com/chromiumos/third_party/kernel/+/chromeos-4...):

	clocks =
		<&topckgen CLK_TOP_MFGPLL_CK>,
		<&topckgen CLK_TOP_MUX_MFG>,
		<&clk26m>,
		<&mfgcfg CLK_MFG_BG3D>;
	clock-names =
		"clk_main_parent",
		"clk_mux",
		"clk_sub_parent",
		"subsys_mfg_cg";
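For reference, in driver code the reparenting dance described above would look roughly like this (a sketch only; the clk_mux, clk_main_parent and clk_sub_parent handles stand for the clocks named in the out-of-tree binding, not clocks the upstream panfrost driver currently acquires):

#include <linux/clk.h>

/*
 * Hypothetical sketch of the out-of-tree devfreq sequence: park the MFG
 * mux on the stable 26 MHz parent while reprogramming the PLL, then
 * switch back. None of these clk handles exist in upstream panfrost.
 */
static int mt8183_gpu_set_rate(struct clk *clk_mux,
			       struct clk *clk_main_parent,
			       struct clk *clk_sub_parent,
			       unsigned long rate)
{
	int err;

	/* Move the GPU off the PLL so it keeps running during the change. */
	err = clk_set_parent(clk_mux, clk_sub_parent);
	if (err)
		return err;

	/* Reprogram the PLL to the requested frequency. */
	err = clk_set_rate(clk_main_parent, rate);
	if (err)
		return err;

	/* Switch back to the (now retuned) PLL. */
	return clk_set_parent(clk_mux, clk_main_parent);
}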
v2:
- Use sram instead of mali_sram as SRAM supply name.
- Rename mali@ to gpu@.
 arch/arm64/boot/dts/mediatek/mt8183-evb.dts |   7 ++
 arch/arm64/boot/dts/mediatek/mt8183.dtsi    | 104 ++++++++++++++++++++
 2 files changed, 111 insertions(+)
diff --git a/arch/arm64/boot/dts/mediatek/mt8183-evb.dts b/arch/arm64/boot/dts/mediatek/mt8183-evb.dts
index 1fb195c683c3d01..7d609e0cd9b4975 100644
--- a/arch/arm64/boot/dts/mediatek/mt8183-evb.dts
+++ b/arch/arm64/boot/dts/mediatek/mt8183-evb.dts
@@ -7,6 +7,7 @@
 
 /dts-v1/;
 #include "mt8183.dtsi"
+#include "mt6358.dtsi"
 
 / {
 	model = "MediaTek MT8183 evaluation board";
@@ -30,6 +31,12 @@ &auxadc {
 	status = "okay";
 };
+&gpu {
+	supply-names = "mali", "sram";
+	mali-supply = <&mt6358_vgpu_reg>;
+	sram-supply = <&mt6358_vsram_gpu_reg>;
+};
+
 &i2c0 {
 	pinctrl-names = "default";
 	pinctrl-0 = <&i2c_pins_0>;
diff --git a/arch/arm64/boot/dts/mediatek/mt8183.dtsi b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
index 1ade9153e5c265b..4da3f1ed1c15bf3 100644
--- a/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
@@ -597,6 +597,110 @@ mfgcfg: syscon@13000000 {
 		#clock-cells = <1>;
 	};
+	gpu: gpu@13040000 {
+		compatible = "mediatek,mt8183-mali", "arm,mali-bifrost";
+		reg = <0 0x13040000 0 0x4000>;
+		interrupts =
+			<GIC_SPI 280 IRQ_TYPE_LEVEL_LOW>,
+			<GIC_SPI 279 IRQ_TYPE_LEVEL_LOW>,
+			<GIC_SPI 278 IRQ_TYPE_LEVEL_LOW>;
+		interrupt-names = "job", "mmu", "gpu";
+
+		clocks = <&topckgen CLK_TOP_MFGPLL_CK>;
+
+		power-domains =
+			<&scpsys MT8183_POWER_DOMAIN_MFG_CORE0>,
+			<&scpsys MT8183_POWER_DOMAIN_MFG_CORE1>,
+			<&scpsys MT8183_POWER_DOMAIN_MFG_2D>;
+
+		operating-points-v2 = <&gpu_opp_table>;
+	};
+
+	gpu_opp_table: opp_table0 {
+		compatible = "operating-points-v2";
+		opp-shared;
+
+		opp-300000000 {
+			opp-hz = /bits/ 64 <300000000>;
+			opp-microvolt = <625000>, <850000>;
+		};
+
+		opp-320000000 {
+			opp-hz = /bits/ 64 <320000000>;
+			opp-microvolt = <631250>, <850000>;
+		};
+
+		opp-340000000 {
+			opp-hz = /bits/ 64 <340000000>;
+			opp-microvolt = <637500>, <850000>;
+		};
+
+		opp-360000000 {
+			opp-hz = /bits/ 64 <360000000>;
+			opp-microvolt = <643750>, <850000>;
+		};
+
+		opp-380000000 {
+			opp-hz = /bits/ 64 <380000000>;
+			opp-microvolt = <650000>, <850000>;
+		};
+
+		opp-400000000 {
+			opp-hz = /bits/ 64 <400000000>;
+			opp-microvolt = <656250>, <850000>;
+		};
+
+		opp-420000000 {
+			opp-hz = /bits/ 64 <420000000>;
+			opp-microvolt = <662500>, <850000>;
+		};
+
+		opp-460000000 {
+			opp-hz = /bits/ 64 <460000000>;
+			opp-microvolt = <675000>, <850000>;
+		};
+
+		opp-500000000 {
+			opp-hz = /bits/ 64 <500000000>;
+			opp-microvolt = <687500>, <850000>;
+		};
+
+		opp-540000000 {
+			opp-hz = /bits/ 64 <540000000>;
+			opp-microvolt = <700000>, <850000>;
+		};
+
+		opp-580000000 {
+			opp-hz = /bits/ 64 <580000000>;
+			opp-microvolt = <712500>, <850000>;
+		};
+
+		opp-620000000 {
+			opp-hz = /bits/ 64 <620000000>;
+			opp-microvolt = <725000>, <850000>;
+		};
+
+		opp-653000000 {
+			opp-hz = /bits/ 64 <653000000>;
+			opp-microvolt = <743750>, <850000>;
+		};
+
+		opp-698000000 {
+			opp-hz = /bits/ 64 <698000000>;
+			opp-microvolt = <768750>, <868750>;
+		};
+
+		opp-743000000 {
+			opp-hz = /bits/ 64 <743000000>;
+			opp-microvolt = <793750>, <893750>;
+		};
+
+		opp-800000000 {
+			opp-hz = /bits/ 64 <800000000>;
+			opp-microvolt = <825000>, <925000>;
+		};
+	};
+
 	mmsys: syscon@14000000 {
 		compatible = "mediatek,mt8183-mmsys", "syscon";
 		reg = <0 0x14000000 0 0x1000>;
It is useful to know which component cannot be powered on.
Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
---
Was useful when trying to probe the bifrost GPU, to understand what issue we are facing.
---
 drivers/gpu/drm/panfrost/panfrost_gpu.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_gpu.c b/drivers/gpu/drm/panfrost/panfrost_gpu.c
index 8822ec13a0d619f..ba02bbfcf28c011 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gpu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gpu.c
@@ -308,21 +308,26 @@ void panfrost_gpu_power_on(struct panfrost_device *pfdev)
 	gpu_write(pfdev, L2_PWRON_LO, pfdev->features.l2_present);
 	ret = readl_relaxed_poll_timeout(pfdev->iomem + L2_READY_LO,
 		val, val == pfdev->features.l2_present, 100, 1000);
+	if (ret)
+		dev_err(pfdev->dev, "error powering up gpu L2");
 
 	gpu_write(pfdev, STACK_PWRON_LO, pfdev->features.stack_present);
-	ret |= readl_relaxed_poll_timeout(pfdev->iomem + STACK_READY_LO,
+	ret = readl_relaxed_poll_timeout(pfdev->iomem + STACK_READY_LO,
 		val, val == pfdev->features.stack_present, 100, 1000);
+	if (ret)
+		dev_err(pfdev->dev, "error powering up gpu stack");
 
 	gpu_write(pfdev, SHADER_PWRON_LO, pfdev->features.shader_present);
-	ret |= readl_relaxed_poll_timeout(pfdev->iomem + SHADER_READY_LO,
+	ret = readl_relaxed_poll_timeout(pfdev->iomem + SHADER_READY_LO,
 		val, val == pfdev->features.shader_present, 100, 1000);
+	if (ret)
+		dev_err(pfdev->dev, "error powering up gpu shader");
 
 	gpu_write(pfdev, TILER_PWRON_LO, pfdev->features.tiler_present);
-	ret |= readl_relaxed_poll_timeout(pfdev->iomem + TILER_READY_LO,
+	ret = readl_relaxed_poll_timeout(pfdev->iomem + TILER_READY_LO,
 		val, val == pfdev->features.tiler_present, 100, 1000);
 	if (ret)
-		dev_err(pfdev->dev, "error powering up gpu");
+		dev_err(pfdev->dev, "error powering up gpu tiler");
 }
 
 void panfrost_gpu_power_off(struct panfrost_device *pfdev)
On 08/01/2020 05:23, Nicolas Boichat wrote:
It is useful to know which component cannot be powered on.
Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
Looks like helpful error reporting.
Reviewed-by: Steven Price <steven.price@arm.com>
Was useful when trying to probe the bifrost GPU, to understand what issue we are facing.
 drivers/gpu/drm/panfrost/panfrost_gpu.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_gpu.c b/drivers/gpu/drm/panfrost/panfrost_gpu.c
index 8822ec13a0d619f..ba02bbfcf28c011 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gpu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gpu.c
@@ -308,21 +308,26 @@ void panfrost_gpu_power_on(struct panfrost_device *pfdev)
 	gpu_write(pfdev, L2_PWRON_LO, pfdev->features.l2_present);
 	ret = readl_relaxed_poll_timeout(pfdev->iomem + L2_READY_LO,
 		val, val == pfdev->features.l2_present, 100, 1000);
+	if (ret)
+		dev_err(pfdev->dev, "error powering up gpu L2");
 
 	gpu_write(pfdev, STACK_PWRON_LO, pfdev->features.stack_present);
-	ret |= readl_relaxed_poll_timeout(pfdev->iomem + STACK_READY_LO,
+	ret = readl_relaxed_poll_timeout(pfdev->iomem + STACK_READY_LO,
 		val, val == pfdev->features.stack_present, 100, 1000);
+	if (ret)
+		dev_err(pfdev->dev, "error powering up gpu stack");
As mentioned in my previous email - we could just drop this entire section dealing with the core stacks and let the GPU's own dependency management code handle it. Of course there might be a GPU out there for which that is broken... in which case some sort of quirk handling will be needed :(
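For illustration, dropping the stack handling would leave something like this (a sketch only, assuming the GPU really does bring up core stacks itself when their shader cores are powered):

/*
 * Sketch of panfrost_gpu_power_on() without explicit core-stack
 * handling, relying on the GPU's internal dependency management to
 * power stacks when their shader cores come up. Illustration only.
 */
void panfrost_gpu_power_on(struct panfrost_device *pfdev)
{
	int ret;
	u32 val;

	/* L2 first: the shader cores depend on it. */
	gpu_write(pfdev, L2_PWRON_LO, pfdev->features.l2_present);
	ret = readl_relaxed_poll_timeout(pfdev->iomem + L2_READY_LO,
		val, val == pfdev->features.l2_present, 100, 1000);
	if (ret)
		dev_err(pfdev->dev, "error powering up gpu L2");

	/* No STACK_PWRON: powering the shader cores pulls the stacks up. */
	gpu_write(pfdev, SHADER_PWRON_LO, pfdev->features.shader_present);
	ret = readl_relaxed_poll_timeout(pfdev->iomem + SHADER_READY_LO,
		val, val == pfdev->features.shader_present, 100, 1000);
	if (ret)
		dev_err(pfdev->dev, "error powering up gpu shader");

	gpu_write(pfdev, TILER_PWRON_LO, pfdev->features.tiler_present);
	ret = readl_relaxed_poll_timeout(pfdev->iomem + TILER_READY_LO,
		val, val == pfdev->features.tiler_present, 100, 1000);
	if (ret)
		dev_err(pfdev->dev, "error powering up gpu tiler");
}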
Steve
Some GPUs, namely, the bifrost/g72 part on MT8183, have a second regulator for their SRAM, let's add support for that.
Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
---
 drivers/gpu/drm/panfrost/panfrost_device.c | 21 +++++++++++++++++++++
 drivers/gpu/drm/panfrost/panfrost_device.h |  1 +
 2 files changed, 22 insertions(+)
diff --git a/drivers/gpu/drm/panfrost/panfrost_device.c b/drivers/gpu/drm/panfrost/panfrost_device.c
index 238fb6d54df4732..a0b0a6fef8b4e63 100644
--- a/drivers/gpu/drm/panfrost/panfrost_device.c
+++ b/drivers/gpu/drm/panfrost/panfrost_device.c
@@ -102,12 +102,33 @@ static int panfrost_regulator_init(struct panfrost_device *pfdev)
 		return ret;
 	}
 
+	pfdev->regulator_sram = devm_regulator_get_optional(pfdev->dev, "sram");
+	if (IS_ERR(pfdev->regulator_sram)) {
+		ret = PTR_ERR(pfdev->regulator_sram);
+		dev_err(pfdev->dev, "failed to get SRAM regulator: %d\n", ret);
+		goto err;
+	}
+
+	if (pfdev->regulator_sram) {
+		ret = regulator_enable(pfdev->regulator_sram);
+		if (ret < 0) {
+			dev_err(pfdev->dev,
+				"failed to enable SRAM regulator: %d\n", ret);
+			goto err;
+		}
+	}
+
 	return 0;
+
+err:
+	regulator_disable(pfdev->regulator);
+	return ret;
 }
 
 static void panfrost_regulator_fini(struct panfrost_device *pfdev)
 {
 	regulator_disable(pfdev->regulator);
+	regulator_disable(pfdev->regulator_sram);
 }
 
 int panfrost_device_init(struct panfrost_device *pfdev)
diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
index 06713811b92cdf7..a124334d69e7e93 100644
--- a/drivers/gpu/drm/panfrost/panfrost_device.h
+++ b/drivers/gpu/drm/panfrost/panfrost_device.h
@@ -60,6 +60,7 @@ struct panfrost_device {
 	struct clk *clock;
 	struct clk *bus_clock;
 	struct regulator *regulator;
+	struct regulator *regulator_sram;
 	struct reset_control *rstc;
 
 	struct panfrost_features features;
On Wed, Jan 08, 2020 at 01:23:34PM +0800, Nicolas Boichat wrote:
Some GPUs, namely, the bifrost/g72 part on MT8183, have a second regulator for their SRAM, let's add support for that.
+	pfdev->regulator_sram = devm_regulator_get_optional(pfdev->dev, "sram");
+	if (IS_ERR(pfdev->regulator_sram)) {
This supply is required for the devices that need it so I'd therefore expect the driver to request the supply non-optionally based on the compatible string rather than just hoping that a missing regulator isn't important. Though I do have to wonder given the lack of any active management of the supply if this is *really* part of the GPU or if it's more of a SoC thing, it's not clear what exactly adding this code is achieving.
On Wed, Jan 8, 2020 at 9:23 PM Mark Brown broonie@kernel.org wrote:
On Wed, Jan 08, 2020 at 01:23:34PM +0800, Nicolas Boichat wrote:
Some GPUs, namely, the bifrost/g72 part on MT8183, have a second regulator for their SRAM, let's add support for that.
pfdev->regulator_sram = devm_regulator_get_optional(pfdev->dev, "sram");
if (IS_ERR(pfdev->regulator_sram)) {
This supply is required for the devices that need it so I'd therefore expect the driver to request the supply non-optionally based on the compatible string rather than just hoping that a missing regulator isn't important.
That'd be a bit awkward to match, though... Currently all bifrost share the same compatible "arm,mali-bifrost", and it'd seem weird/wrong to match "mediatek,mt8183-mali" in this driver? I have no idea if any other Mali implementation will require a second regulator, but with the MT8183 we do need it, see below.
Though I do have to wonder given the lack of any active management of the supply if this is *really* part of the GPU or if it's more of a SoC thing, it's not clear what exactly adding this code is achieving.
Well if devfreq was working (see patch 7 https://patchwork.kernel.org/patch/11322851/ for a partial implementation), it would adjust both mali and sram regulators, see the OPP table in patch 2 (https://patchwork.kernel.org/patch/11322825/): SRAM voltage needs to be increased for frequencies >= 698 MHz.
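(For what it's worth, one possible shape for that — my sketch, not what this series currently does — is to hand both supplies to the OPP core, which then sequences them from the two-entry opp-microvolt values:)

#include <linux/kernel.h>
#include <linux/pm_opp.h>

/*
 * Hypothetical sketch: let the OPP core manage both supplies listed in
 * the DT's two-entry opp-microvolt values. The function name is made up;
 * this is not what the posted devfreq patch does.
 */
static int panfrost_devfreq_init_regulators(struct device *dev)
{
	static const char * const names[] = { "mali", "sram" };
	struct opp_table *table;

	table = dev_pm_opp_set_regulators(dev, names, ARRAY_SIZE(names));
	if (IS_ERR(table))
		return PTR_ERR(table);

	/*
	 * From here on, dev_pm_opp_set_rate() adjusts both supplies in
	 * the right order; dev_pm_opp_put_regulators(table) undoes this.
	 */
	return 0;
}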
Now if you have some better idea how to implement this, I'm all ears!
Thanks.
On 08/01/2020 22:52, Nicolas Boichat wrote:
On Wed, Jan 8, 2020 at 9:23 PM Mark Brown broonie@kernel.org wrote:
On Wed, Jan 08, 2020 at 01:23:34PM +0800, Nicolas Boichat wrote:
Some GPUs, namely, the bifrost/g72 part on MT8183, have a second regulator for their SRAM, let's add support for that.
pfdev->regulator_sram = devm_regulator_get_optional(pfdev->dev, "sram");
if (IS_ERR(pfdev->regulator_sram)) {
This supply is required for the devices that need it so I'd therefore expect the driver to request the supply non-optionally based on the compatible string rather than just hoping that a missing regulator isn't important.
That'd be a bit awkward to match, though... Currently all bifrost share the same compatible "arm,mali-bifrost", and it'd seem weird/wrong to match "mediatek,mt8183-mali" in this driver? I have no idea if any other Mali implementation will require a second regulator, but with the MT8183 we do need it, see below.
Though I do have to wonder given the lack of any active management of the supply if this is *really* part of the GPU or if it's more of a SoC thing, it's not clear what exactly adding this code is achieving.
Well if devfreq was working (see patch 7 https://patchwork.kernel.org/patch/11322851/ for a partial implementation), it would adjust both mali and sram regulators, see the OPP table in patch 2 (https://patchwork.kernel.org/patch/11322825/): SRAM voltage needs to be increased for frequencies >= 698 MHz.
Now if you have some better idea how to implement this, I'm all ears!
I'm not sure if it's better, but could we just encode the list of regulators into device tree. I'm a bit worried about special casing an "sram" regulator given that other platforms might have a similar situation but call the second regulator a different name.
Steve
Thanks.
On Thu, Jan 09, 2020 at 02:14:42PM +0000, Steven Price wrote:
On 08/01/2020 22:52, Nicolas Boichat wrote:
That'd be a bit awkward to match, though... Currently all bifrost share the same compatible "arm,mali-bifrost", and it'd seem weird/wrong to match "mediatek,mt8183-mali" in this driver? I have no idea if any other Mali implementation will require a second regulator, but with the MT8183 we do need it, see below.
This doesn't sound particularly hard, just new. Plenty of other devices have quirks done based on the SoC they're in or the IP revision, this would just be another of those quirks.
Well if devfreq was working (see patch 7 https://patchwork.kernel.org/patch/11322851/ for a partial implementation), it would adjust both mali and sram regulators, see the OPP table in patch 2 (https://patchwork.kernel.org/patch/11322825/): SRAM voltage needs to be increased for frequencies >= 698 MHz.
Now if you have some better idea how to implement this, I'm all ears!
Set a flag based on the compatible, then base runtime decisions off that.
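For illustration, the usual shape of such a quirk (a sketch; panfrost has no such table today, and the struct and flag names are made up):

#include <linux/bits.h>
#include <linux/of_device.h>

/*
 * Hypothetical sketch of a compatible-keyed quirk via the standard
 * of_device_id .data mechanism. struct panfrost_compat and
 * PANFROST_QUIRK_SRAM_SUPPLY do not exist in the upstream driver.
 */
#define PANFROST_QUIRK_SRAM_SUPPLY	BIT(0)

struct panfrost_compat {
	unsigned long quirks;
};

static const struct panfrost_compat mt8183_data = {
	.quirks = PANFROST_QUIRK_SRAM_SUPPLY,
};

static const struct of_device_id dt_match[] = {
	{ .compatible = "mediatek,mt8183-mali", .data = &mt8183_data },
	{ .compatible = "arm,mali-bifrost" },
	{ /* sentinel */ }
};

At probe time, of_device_get_match_data() returns the per-compatible data, and the driver would request the "sram" supply with a plain (non-optional) devm_regulator_get() only when the flag is set.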
I'm not sure if it's better, but could we just encode the list of regulators into device tree. I'm a bit worried about special casing an "sram" regulator given that other platforms might have a similar situation but call the second regulator a different name.
Obviously the list of regulators bound on a given platform is encoded in the device tree but you shouldn't really be relying on that to figure out what to request in the driver - the driver should know what it's expecting. Bear in mind that getting regulator stuff wrong can result in physical damage to the system so it pays to be careful and to consider that platform integrators have a tendency to rely on things that just happen to work but aren't a good idea or accurate representations of the system. It's certainly *possible* to do something like that, the information is there, but I would not in any way recommend doing things that way as it's likely to not be robust.
The possibility that the regulator setup may vary on other platforms (which I'd expect TBH) does suggest that just requesting a bunch of supply names optionally and hoping that we got all the ones that are important on a given platform is going to lead to trouble down the line.
Steve, please fix your mail client to word wrap within paragraphs at something substantially less than 80 columns. Doing this makes your messages much easier to read and reply to.
On 09/01/2020 16:28, Mark Brown wrote:
On Thu, Jan 09, 2020 at 02:14:42PM +0000, Steven Price wrote:
On 08/01/2020 22:52, Nicolas Boichat wrote:
That'd be a bit awkward to match, though... Currently all bifrost share the same compatible "arm,mali-bifrost", and it'd seem weird/wrong to match "mediatek,mt8183-mali" in this driver? I have no idea if any other Mali implementation will require a second regulator, but with the MT8183 we do need it, see below.
This doesn't sound particularly hard, just new. Plenty of other devices have quirks done based on the SoC they're in or the IP revision, this would just be another of those quirks.
Well if devfreq was working (see patch 7 https://patchwork.kernel.org/patch/11322851/ for a partial implementation), it would adjust both mali and sram regulators, see the OPP table in patch 2 (https://patchwork.kernel.org/patch/11322825/): SRAM voltage needs to be increased for frequencies >= 698 MHz.
Now if you have some better idea how to implement this, I'm all ears!
Set a flag based on the compatible, then base runtime decisions off that.
I'm not sure if it's better, but could we just encode the list of regulators into device tree. I'm a bit worried about special casing an "sram" regulator given that other platforms might have a similar situation but call the second regulator a different name.
Obviously the list of regulators bound on a given platform is encoded in the device tree but you shouldn't really be relying on that to figure out what to request in the driver - the driver should know what it's expecting.
From a driver perspective we don't expect to have to worry about power domains/multiple regulators - the hardware provides a bunch of power registers to turn on/off various parts of the hardware and this should be linked (in hardware) to a PDC which sorts it out. The GPU/PDC handles the required sequencing. So it *should* be a case of turn power/clocks on and go.
However certain integrations may have quirks such that there are physically multiple supplies. These are expected to all be turned on before using the GPU. Quite how this is best represented is something I'm not sure about.
Bear in mind that getting regulator stuff wrong can result in physical damage to the system so it pays to be careful and to consider that platform integrators have a tendency to rely on things that just happen to work but aren't a good idea or accurate representations of the system. It's certainly *possible* to do something like that, the information is there, but I would not in any way recommend doing things that way as it's likely to not be robust.
The possibility that the regulator setup may vary on other platforms (which I'd expect TBH) does suggest that just requesting a bunch of supply names optionally and hoping that we got all the ones that are important on a given platform is going to lead to trouble down the line.
Certainly if we miss a regulator the GPU isn't going to work properly (some cores won't be able to power up successfully). However at the moment the driver will happily do this if someone provides it with a DT which includes regulators that it doesn't know about. So I'm not sure how adding special case code for a SoC would help here.
Steve, please fix your mail client to word wrap within paragraphs at something substantially less than 80 columns. Doing this makes your messages much easier to read and reply to.
Sorry about that - I switched to my laptop to escape the noisy work going on outside the office, and apparently that was misconfigured. Hopefully fixed now, thanks for letting me know!
Steve
On Thu, Jan 09, 2020 at 04:53:02PM +0000, Steven Price wrote:
On 09/01/2020 16:28, Mark Brown wrote:
On Thu, Jan 09, 2020 at 02:14:42PM +0000, Steven Price wrote:
I'm not sure if it's better, but could we just encode the list of regulators into device tree. I'm a bit worried about special casing an "sram" regulator given that other platforms might have a similar situation but call the second regulator a different name.
Obviously the list of regulators bound on a given platform is encoded in the device tree but you shouldn't really be relying on that to figure out what to request in the driver - the driver should know what it's expecting.
From a driver perspective we don't expect to have to worry about power domains/multiple regulators - the hardware provides a bunch of power registers to turn on/off various parts of the hardware and this should be linked (in hardware) to a PDC which sorts it out. The GPU/PDC handles the required sequencing. So it *should* be a case of turn power/clocks on and go.
Ah, the well abstracted and consistent hardware with which we are all so fortunate to work :) . More seriously perhaps the thing to do here is create a driver that provides a soft PDC and then push all the special case handling into that? It can then get instantiated based on the compatible or perhaps represented directly in the device tree if that makes sense.
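For illustration, the core of such a soft PDC might look like this (a sketch under the assumption that the SoC quirk is simply "enable both supplies together"; all names are invented):

#include <linux/pm_domain.h>
#include <linux/regulator/consumer.h>

/*
 * Hypothetical soft-PDC sketch: a virtual genpd whose on/off hooks hide
 * the SoC-specific supply sequencing. Once registered with
 * pm_genpd_init() and of_genpd_add_provider_simple(), it could be
 * referenced from the GPU node's power-domains property.
 */
struct soft_pdc {
	struct generic_pm_domain genpd;
	struct regulator *mali;
	struct regulator *sram;
};

static int soft_pdc_power_on(struct generic_pm_domain *domain)
{
	struct soft_pdc *pdc = container_of(domain, struct soft_pdc, genpd);
	int err;

	/* Bring up SRAM before logic; the ordering is the platform quirk. */
	err = regulator_enable(pdc->sram);
	if (err)
		return err;

	err = regulator_enable(pdc->mali);
	if (err)
		regulator_disable(pdc->sram);
	return err;
}

static int soft_pdc_power_off(struct generic_pm_domain *domain)
{
	struct soft_pdc *pdc = container_of(domain, struct soft_pdc, genpd);

	regulator_disable(pdc->mali);
	return regulator_disable(pdc->sram);
}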
However certain integrations may have quirks such that there are physically multiple supplies. These are expected to all be turned on before using the GPU. Quite how this is best represented is something I'm not sure about.
If they're always on and don't ever change then that's really easy to represent in the DT without involving drivers, it's when you need to actively manage them that it's more effort.
Bear in mind that getting regulator stuff wrong can result in physical damage to the system so it pays to be careful and to consider that platform integrators have a tendency to rely on things that just happen to work but aren't a good idea or accurate representations of the system. It's certainly *possible* to do something like that, the information is there, but I would not in any way recommend doing things that way as it's likely to not be robust.
The possibility that the regulator setup may vary on other platforms (which I'd expect TBH) does suggest that just requesting a bunch of supply names optionally and hoping that we got all the ones that are important on a given platform is going to lead to trouble down the line.
Certainly if we miss a regulator the GPU isn't going to work properly (some cores won't be able to power up successfully). However at the moment the driver will happily do this if someone provides it with a DT which includes regulators that it doesn't know about. So I'm not sure how adding special case code for a SoC would help here.
I thought this SoC needed to vary the voltage on both rails as part of the power management? Things like that can lead to hardware damage if we go out of spec far enough for long enough - there can be requirements like keeping one rail a certain voltage above another or whatever.
On 09/01/2020 19:49, Mark Brown wrote:
On Thu, Jan 09, 2020 at 04:53:02PM +0000, Steven Price wrote:
On 09/01/2020 16:28, Mark Brown wrote:
On Thu, Jan 09, 2020 at 02:14:42PM +0000, Steven Price wrote:
I'm not sure if it's better, but could we just encode the list of regulators into device tree. I'm a bit worried about special casing an "sram" regulator given that other platforms might have a similar situation but call the second regulator a different name.
Obviously the list of regulators bound on a given platform is encoded in the device tree but you shouldn't really be relying on that to figure out what to request in the driver - the driver should know what it's expecting.
From a driver perspective we don't expect to have to worry about power domains/multiple regulators - the hardware provides a bunch of power registers to turn on/off various parts of the hardware and this should be linked (in hardware) to a PDC which sorts it out. The GPU/PDC handles the required sequencing. So it *should* be a case of turn power/clocks on and go.
Ah, the well abstracted and consistent hardware with which we are all so fortunate to work :) . More seriously perhaps the thing to do here is create a driver that provides a soft PDC and then push all the special case handling into that? It can then get instantiated based on the compatible or perhaps represented directly in the device tree if that makes sense.
That makes sense to me.
However certain integrations may have quirks such that there are physically multiple supplies. These are expected to all be turned on before using the GPU. Quite how this is best represented is something I'm not sure about.
If they're always on and don't ever change then that's really easy to represent in the DT without involving drivers, it's when you need to actively manage them that it's more effort.
Sorry, I should have been more clear. They are managed as a group - so either the entire GPU is powered off, or powered on. There's no support in Panfrost or mali_kbase for attempting to power part of the GPU.
Bear in mind that getting regulator stuff wrong can result in physical damage to the system so it pays to be careful and to consider that platform integrators have a tendency to rely on things that just happen to work but aren't a good idea or accurate representations of the system. It's certainly *possible* to do something like that, the information is there, but I would not in any way recommend doing things that way as it's likely to not be robust.
The possibility that the regulator setup may vary on other platforms (which I'd expect TBH) does suggest that just requesting a bunch of supply names optionally and hoping that we got all the ones that are important on a given platform is going to lead to trouble down the line.
Certainly if we miss a regulator the GPU isn't going to work properly (some cores won't be able to power up successfully). However at the moment the driver will happily do this if someone provides it with a DT which includes regulators that it doesn't know about. So I'm not sure how adding special case code for a SoC would help here.
I thought this SoC needed to vary the voltage on both rails as part of the power management? Things like that can lead to hardware damage if we go out of spec far enough for long enough - there can be requirements like keeping one rail a certain voltage above another or whatever.
Yes, you are correct. My concern is that a DT which specifies a new regulator (e.g. "sram2") would be accepted by an old kernel (because it wouldn't know to look for the new regulator) but wouldn't know to control the regulator. It could then create a situation which puts the board out of spec - potentially in a damaging way. Hence I'd like to express the regulator structure in such a way that old kernels wouldn't "half-work". Your "soft-PDC" approach would seem to fit that requirement.
Steve
On Fri, Jan 10, 2020 at 7:30 PM Steven Price steven.price@arm.com wrote:
On 09/01/2020 19:49, Mark Brown wrote:
On Thu, Jan 09, 2020 at 04:53:02PM +0000, Steven Price wrote:
On 09/01/2020 16:28, Mark Brown wrote:
On Thu, Jan 09, 2020 at 02:14:42PM +0000, Steven Price wrote:
I'm not sure if it's better, but could we just encode the list of regulators into device tree. I'm a bit worried about special casing an "sram" regulator given that other platforms might have a similar situation but call the second regulator a different name.
Obviously the list of regulators bound on a given platform is encoded in the device tree but you shouldn't really be relying on that to figure out what to request in the driver - the driver should know what it's expecting.
From a driver perspective we don't expect to have to worry about power domains/multiple regulators - the hardware provides a bunch of power registers to turn on/off various parts of the hardware and this should be linked (in hardware) to a PDC which sorts it out. The GPU/PDC handles the required sequencing. So it *should* be a case of turn power/clocks on and go.
Ah, the well abstracted and consistent hardware with which we are all so fortunate to work :) . More seriously perhaps the thing to do here is create a driver that provides a soft PDC and then push all the special case handling into that? It can then get instantiated based on the compatible or perhaps represented directly in the device tree if that makes sense.
That makes sense to me.
However certain integrations may have quirks such that there are physically multiple supplies. These are expected to all be turned on before using the GPU. Quite how this is best represented is something I'm not sure about.
If they're always on and don't ever change then that's really easy to represent in the DT without involving drivers, it's when you need to actively manage them that it's more effort.
Sorry, I should have been more clear. They are managed as a group - so either the entire GPU is powered off, or powered on. There's no support in Panfrost or mali_kbase for attempting to power part of the GPU.
Bear in mind that getting regulator stuff wrong can result in physical damage to the system so it pays to be careful and to consider that platform integrators have a tendency to rely on things that just happen to work but aren't a good idea or accurate representations of the system. It's certainly *possible* to do something like that, the information is there, but I would not in any way recommend doing things that way as it's likely to not be robust.
The possibility that the regulator setup may vary on other platforms (which I'd expect TBH) does suggest that just requesting a bunch of supply names optionally and hoping that we got all the ones that are important on a given platform is going to lead to trouble down the line.
Certainly if we miss a regulator the GPU isn't going to work properly (some cores won't be able to power up successfully). However at the moment the driver will happily do this if someone provides it with a DT which includes regulators that it doesn't know about. So I'm not sure how adding special case code for a SoC would help here.
I thought this SoC needed to vary the voltage on both rails as part of the power management? Things like that can lead to hardware damage if we go out of spec far enough for long enough - there can be requirements like keeping one rail a certain voltage above another or whatever.
Yes, you are correct. My concern is that a DT which specifies a new regulator (e.g. "sram2") would be accepted by an old kernel (because it wouldn't know to look for the new regulator) but wouldn't know to control the regulator. It could then create a situation which puts the board out of spec - potentially in a damaging way. Hence I'd like to express the regulator structure in such a way that old kernels wouldn't "half-work". Your "soft-PDC" approach would seem to fit that requirement.
FYI, I sent a v3 here: https://patchwork.kernel.org/patch/11331373/ that addresses _some_ of these concerns.
I'm not quite sure how to describe the regulators in a way that we can check that the device tree does not specify extra ones (apart from doing some string matching on all properties?), and I don't think I'm best placed to implement the soft-PDC idea. See my comment on that patch.
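(For the string-matching idea, a rough sketch — the function name is invented — would be to walk the node's properties and warn on any unrecognized *-supply:)

#include <linux/device.h>
#include <linux/of.h>
#include <linux/string.h>

/*
 * Hypothetical sketch of the "string matching on all properties" idea:
 * scan the OF node and warn about any *-supply property the driver does
 * not know how to manage.
 */
static void warn_unknown_supplies(struct device *dev)
{
	static const char * const known[] = { "mali-supply", "sram-supply" };
	struct property *prop;

	for_each_property_of_node(dev->of_node, prop) {
		size_t len = strlen(prop->name);
		bool match = false;
		int i;

		if (len < 7 || strcmp(prop->name + len - 7, "-supply"))
			continue;

		for (i = 0; i < ARRAY_SIZE(known); i++)
			match |= !strcmp(prop->name, known[i]);

		if (!match)
			dev_warn(dev, "unhandled supply property: %s\n",
				 prop->name);
	}
}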
Thanks!
On Wed, Jan 8, 2020 at 4:52 PM Nicolas Boichat drinkcat@chromium.org wrote:
On Wed, Jan 8, 2020 at 9:23 PM Mark Brown broonie@kernel.org wrote:
On Wed, Jan 08, 2020 at 01:23:34PM +0800, Nicolas Boichat wrote:
Some GPUs, namely, the bifrost/g72 part on MT8183, have a second regulator for their SRAM, let's add support for that.
pfdev->regulator_sram = devm_regulator_get_optional(pfdev->dev, "sram");
if (IS_ERR(pfdev->regulator_sram)) {
This supply is required for the devices that need it so I'd therefore expect the driver to request the supply non-optionally based on the compatible string rather than just hoping that a missing regulator isn't important.
That'd be a bit awkward to match, though... Currently all bifrost share the same compatible "arm,mali-bifrost", and it'd seem weird/wrong to match "mediatek,mt8183-mali" in this driver? I have no idea if any other Mali implementation will require a second regulator, but with the MT8183 we do need it, see below.
The current number of supported bifrost platforms is 0. It's only a matter of time until SoC specific compatibles need to be used in the driver. This is why we require them.
It could very well be that all bifrost implementations need 2 supplies. On-chip RAMs are very frequently a separate thing, synthesized differently from logic. At least within a specific IP model, I somewhat doubt there's a variable number of supplies. It could be possible to connect both to the same supply, but the correct way to handle that is for both -supply properties to point to the same regulator.
Rob
On 09/01/2020 16:56, Rob Herring wrote:
On Wed, Jan 8, 2020 at 4:52 PM Nicolas Boichat drinkcat@chromium.org wrote:
On Wed, Jan 8, 2020 at 9:23 PM Mark Brown broonie@kernel.org wrote:
On Wed, Jan 08, 2020 at 01:23:34PM +0800, Nicolas Boichat wrote:
Some GPUs, namely, the bifrost/g72 part on MT8183, have a second regulator for their SRAM, let's add support for that.
pfdev->regulator_sram = devm_regulator_get_optional(pfdev->dev, "sram");
if (IS_ERR(pfdev->regulator_sram)) {
This supply is required for the devices that need it so I'd therefore expect the driver to request the supply non-optionally based on the compatible string rather than just hoping that a missing regulator isn't important.
That'd be a bit awkward to match, though... Currently all bifrost share the same compatible "arm,mali-bifrost", and it'd seem weird/wrong to match "mediatek,mt8183-mali" in this driver? I have no idea if any other Mali implementation will require a second regulator, but with the MT8183 we do need it, see below.
The current number of supported bifrost platforms is 0. It's only a matter of time until SoC specific compatibles need to be used in the driver. This is why we require them.
It could very well be that all bifrost implementations need 2 supplies. On-chip RAMs are very frequently a separate thing, synthesized differently from logic. At least within a specific IP model, I somewhat doubt there's a variable number of supplies. It could be possible to connect both to the same supply, but the correct way to handle that is for both -supply properties to point to the same regulator.
To be honest I've no idea what different SoC designs have done, but one of the intentions of core stacks was that sets of GPU cores would be on different power supplies. (I think this is to avoid issues with inrush current etc, but I'm not a hardware engineer). So I would expect designs with a large number of cores to have more physical supplies than designs with fewer cores.
However, from a driver perspective this is all meant to be hidden by the hardware PDC which the GPU talks to. So the actual power up/down of the supplies may be completely automatic and therefore not described in the DT. So the actual number of software-controllable supplies could be 1 or could be more if the individual physical supplies are visible to software.
The Hikey960 for instance hides everything behind a mailbox interface, and it's simply a case of requesting a frequency.
Steve
When there is a single power domain per device, the core will ensure the power domain is switched on.
However, when there are multiple ones, as in MT8183 Bifrost GPU, we need to handle them in driver code.
Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
---
The downstream driver we use on chromeos-4.19 currently uses 2 additional devices in the device tree to accommodate this [1], but I believe this solution is cleaner.
[1] https://chromium.googlesource.com/chromiumos/third_party/kernel/+/refs/heads...
 drivers/gpu/drm/panfrost/panfrost_device.c | 87 ++++++++++++++++++++--
 drivers/gpu/drm/panfrost/panfrost_device.h |  4 +
 2 files changed, 83 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_device.c b/drivers/gpu/drm/panfrost/panfrost_device.c
index a0b0a6fef8b4e63..c6e9e059de94a4d 100644
--- a/drivers/gpu/drm/panfrost/panfrost_device.c
+++ b/drivers/gpu/drm/panfrost/panfrost_device.c
@@ -5,6 +5,7 @@
 #include <linux/clk.h>
 #include <linux/reset.h>
 #include <linux/platform_device.h>
+#include <linux/pm_domain.h>
 #include <linux/regulator/consumer.h>
 
 #include "panfrost_device.h"
@@ -131,6 +132,67 @@ static void panfrost_regulator_fini(struct panfrost_device *pfdev)
 	regulator_disable(pfdev->regulator_sram);
 }
 
+static void panfrost_pm_domain_fini(struct panfrost_device *pfdev)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(pfdev->pm_domain_devs); i++) {
+		if (!pfdev->pm_domain_devs[i])
+			break;
+
+		if (pfdev->pm_domain_links[i])
+			device_link_del(pfdev->pm_domain_links[i]);
+
+		dev_pm_domain_detach(pfdev->pm_domain_devs[i], true);
+	}
+}
+
+static int panfrost_pm_domain_init(struct panfrost_device *pfdev)
+{
+	int err;
+	int i, num_domains;
+
+	num_domains = of_count_phandle_with_args(pfdev->dev->of_node,
+						 "power-domains",
+						 "#power-domain-cells");
+	/* Single domains are handled by the core. */
+	if (num_domains < 2)
+		return 0;
+
+	if (num_domains > ARRAY_SIZE(pfdev->pm_domain_devs)) {
+		dev_err(pfdev->dev, "Too many pm-domains: %d\n", num_domains);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_domains; i++) {
+		pfdev->pm_domain_devs[i] =
+			dev_pm_domain_attach_by_id(pfdev->dev, i);
+		if (IS_ERR(pfdev->pm_domain_devs[i])) {
+			err = PTR_ERR(pfdev->pm_domain_devs[i]);
+			pfdev->pm_domain_devs[i] = NULL;
+			dev_err(pfdev->dev,
+				"failed to get pm-domain %d: %d\n", i, err);
+			goto err;
+		}
+
+		pfdev->pm_domain_links[i] = device_link_add(pfdev->dev,
+				pfdev->pm_domain_devs[i], DL_FLAG_PM_RUNTIME |
+				DL_FLAG_STATELESS | DL_FLAG_RPM_ACTIVE);
+		if (!pfdev->pm_domain_links[i]) {
+			dev_err(pfdev->pm_domain_devs[i],
+				"adding device link failed!\n");
+			err = -ENODEV;
+			goto err;
+		}
+	}
+
+	return 0;
+
+err:
+	panfrost_pm_domain_fini(pfdev);
+	return err;
+}
+
 int panfrost_device_init(struct panfrost_device *pfdev)
 {
 	int err;
@@ -161,37 +223,45 @@ int panfrost_device_init(struct panfrost_device *pfdev)
 		goto err_out1;
 	}
 
+	err = panfrost_pm_domain_init(pfdev);
+	if (err) {
+		dev_err(pfdev->dev, "pm_domain init failed %d\n", err);
+		goto err_out2;
+	}
+
 	res = platform_get_resource(pfdev->pdev, IORESOURCE_MEM, 0);
 	pfdev->iomem = devm_ioremap_resource(pfdev->dev, res);
 	if (IS_ERR(pfdev->iomem)) {
 		dev_err(pfdev->dev, "failed to ioremap iomem\n");
 		err = PTR_ERR(pfdev->iomem);
-		goto err_out2;
+		goto err_out3;
 	}
 
 	err = panfrost_gpu_init(pfdev);
 	if (err)
-		goto err_out2;
+		goto err_out3;
 
 	err = panfrost_mmu_init(pfdev);
 	if (err)
-		goto err_out3;
+		goto err_out4;
 
 	err = panfrost_job_init(pfdev);
 	if (err)
-		goto err_out4;
+		goto err_out5;
 
 	err = panfrost_perfcnt_init(pfdev);
 	if (err)
-		goto err_out5;
+		goto err_out6;
 
 	return 0;
-err_out5:
+err_out6:
 	panfrost_job_fini(pfdev);
-err_out4:
+err_out5:
 	panfrost_mmu_fini(pfdev);
-err_out3:
+err_out4:
 	panfrost_gpu_fini(pfdev);
+err_out3:
+	panfrost_pm_domain_fini(pfdev);
 err_out2:
 	panfrost_reset_fini(pfdev);
 err_out1:
@@ -208,6 +278,7 @@ void panfrost_device_fini(struct panfrost_device *pfdev)
 	panfrost_mmu_fini(pfdev);
 	panfrost_gpu_fini(pfdev);
 	panfrost_reset_fini(pfdev);
+	panfrost_pm_domain_fini(pfdev);
 	panfrost_regulator_fini(pfdev);
 	panfrost_clk_fini(pfdev);
 }
diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
index a124334d69e7e93..92d471676fc7823 100644
--- a/drivers/gpu/drm/panfrost/panfrost_device.h
+++ b/drivers/gpu/drm/panfrost/panfrost_device.h
@@ -19,6 +19,7 @@ struct panfrost_job;
 struct panfrost_perfcnt;
 
 #define NUM_JOB_SLOTS 3
+#define MAX_PM_DOMAINS 3
 
 struct panfrost_features {
 	u16 id;
@@ -62,6 +63,9 @@ struct panfrost_device {
 	struct regulator *regulator;
 	struct regulator *regulator_sram;
 	struct reset_control *rstc;
+	/* pm_domains for devices with more than one. */
+	struct device *pm_domain_devs[MAX_PM_DOMAINS];
+	struct device_link *pm_domain_links[MAX_PM_DOMAINS];
 
 	struct panfrost_features features;
On 08/01/2020 05:23, Nicolas Boichat wrote:
When there is a single power domain per device, the core will ensure the power domain is switched on.
However, when there are multiple ones, as in MT8183 Bifrost GPU, we need to handle them in driver code.
Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
The downstream driver we use on chromeos-4.19 currently uses 2 additional devices in the device tree to accommodate this [1], but I believe this solution is cleaner.
I'm not sure what is best, but it seems odd to encode this into the Panfrost driver itself - it doesn't have any knowledge of what to do with these power domains. The naming of the domains looks suspiciously like someone thought that e.g. only half of the cores could be powered, but it doesn't look like that was implemented in the chromeos driver linked and anyway that is *meant* to be automatic in the hardware! (I.e. if you only power up one cores in one core stack then the PDC should only enable the power domain for that set of cores).
Steve
[1] https://chromium.googlesource.com/chromiumos/third_party/kernel/+/refs/heads...
+Ulf to keep me honest on the power domains
On Thu, Jan 9, 2020 at 10:08 PM Steven Price steven.price@arm.com wrote:
On 08/01/2020 05:23, Nicolas Boichat wrote:
When there is a single power domain per device, the core will ensure the power domain is switched on.
However, when there are multiple ones, as in MT8183 Bifrost GPU, we need to handle them in driver code.
Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
The downstream driver we use on chromeos-4.19 currently uses 2 additional devices in the device tree to accommodate this [1], but I believe this solution is cleaner.
I'm not sure what is best, but it seems odd to encode this into the Panfrost driver itself - it doesn't have any knowledge of what to do with these power domains. The naming of the domains looks suspiciously like someone thought that e.g. only half of the cores could be powered, but it doesn't look like that was implemented in the chromeos driver linked and anyway that is *meant* to be automatic in the hardware! (I.e. if you only power up one cores in one core stack then the PDC should only enable the power domain for that set of cores).
This is actually implemented in the Chrome OS driver [1]. IMHO power domains are a bit confusing [2]:
i. If there's only 1 power domain in the device, then the core takes care of powering on the domain (based on pm_runtime).
ii. If there's more than 1 power domain, then the device needs to link the domains manually.
So the Chrome OS [1] driver takes approach (i), by creating 3 devices, each with 1 power domain that is switched on/off automatically using pm_runtime.
This patch takes approach (ii) with device links to handle the extra domains.
I believe the latter is more upstream-friendly, but, as always, suggestions welcome.
[2] https://elixir.bootlin.com/linux/latest/source/drivers/base/power/domain.c#L...
Steve
[1] https://chromium.googlesource.com/chromiumos/third_party/kernel/+/refs/heads...
On Fri, 10 Jan 2020 at 02:53, Nicolas Boichat drinkcat@chromium.org wrote:
+Ulf to keep me honest on the power domains
On Thu, Jan 9, 2020 at 10:08 PM Steven Price steven.price@arm.com wrote:
On 08/01/2020 05:23, Nicolas Boichat wrote:
When there is a single power domain per device, the core will ensure the power domain is switched on.
However, when there are multiple ones, as in MT8183 Bifrost GPU, we need to handle them in driver code.
Signed-off-by: Nicolas Boichat drinkcat@chromium.org
The downstream driver we use on chromeos-4.19 currently uses 2 additional devices in the device tree to accommodate this [1], but I believe this solution is cleaner.
I'm not sure what is best, but it seems odd to encode this into the Panfrost driver itself - it doesn't have any knowledge of what to do with these power domains. The naming of the domains looks suspiciously like someone thought that e.g. only half of the cores could be powered, but it doesn't look like that was implemented in the chromeos driver linked, and anyway that is *meant* to be automatic in the hardware! (I.e. if you only power up one core in one core stack then the PDC should only enable the power domain for that set of cores.)
This is actually implemented in the Chrome OS driver [1]. IMHO power domains are a bit confusing [2]:
i. If there's only 1 power domain in the device, then the core takes care of powering on the domain (based on pm_runtime).
ii. If there's more than 1 power domain, then the device needs to link the domains manually.
So the Chrome OS [1] driver takes approach (i), by creating 3 devices, each with 1 power domain that is switched on/off automatically using pm_runtime.
This patch takes approach (ii) with device links to handle the extra domains.
I believe the latter is more upstream-friendly, but, as always, suggestions welcome.
Apologies for the late reply. A few comments below.
If the device is partitioned across multiple PM domains (it may need several power rails), then that should be described with the "multi PM domain" approach in the DTS, as in (ii).
Using "device links" is however optional, as it may depend on the use case. If all the PM domains need to be powered on/off together, then it's certainly recommended to use device links.
However, if the PM domains can be powered on/off independently (one can be on while another is off), then it's probably easier to operate directly with runtime PM on the struct device * returned from dev_pm_domain_attach_by_id().
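(For illustration, that independent-control alternative could look roughly like this; an untested sketch reusing the pm_domain_devs array from the patch above, with the helper name made up:)

/* Untested sketch: power each attached PM domain on explicitly via
 * runtime PM (linux/pm_runtime.h) instead of using device links. */
static int panfrost_pm_domains_get(struct panfrost_device *pfdev,
				   int num_domains)
{
	int i, err;

	for (i = 0; i < num_domains; i++) {
		/* pm_domain_devs[i] came from dev_pm_domain_attach_by_id() */
		err = pm_runtime_get_sync(pfdev->pm_domain_devs[i]);
		if (err < 0)
			return err;
	}

	return 0;
}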
Also note, there is dev_pm_domain_attach_by_name(), which allows us to specify a name for the PM domain in the DTS, rather than using an index. This may be more future-proof to use.
[...]
Hope this helps.
Kind regards Uffe
Hi Ulf,
On Mon, Jan 27, 2020 at 3:55 PM Ulf Hansson ulf.hansson@linaro.org wrote:
[...]
Apologies for the late reply. A few comments below.
No worries, than for the helpful reply!
If the device is partitioned across multiple PM domains (it may need several power rails), then that should be described with the "multi PM domain" approach in the DTS, as in (ii).
Using "device links" is however optional, as it may depend on the use case. If all the PM domains need to be powered on/off together, then it's certainly recommended to use device links.
That's the case here; there's no support for turning the domains on/off individually.
However, if the PM domains can be powered on/off independently (one can be on while another is off), then it's probably easier to operate directly with runtime PM on the struct device * returned from dev_pm_domain_attach_by_id().
Also note, there is dev_pm_domain_attach_by_name(), which allows us to specify a name for the PM domain in the DTS, rather than using an index. This may be more future-proof to use.
Agree, probably better to have actual names than just "counting" the number of domains like I do, especially as we have a compatible struct anyway. I'll update the patch.
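(Something like the following, perhaps; an untested sketch, where the "core0"/"core1"/"core2" names are placeholders rather than part of the binding, and a matching power-domain-names property would be needed in the DT node:)

static const char * const mt8183_pm_domain_names[] = {
	"core0", "core1", "core2"
};

static int panfrost_pm_domain_init_by_name(struct panfrost_device *pfdev)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(mt8183_pm_domain_names); i++) {
		struct device *pd_dev;

		/* Attach by name instead of by index. */
		pd_dev = dev_pm_domain_attach_by_name(pfdev->dev,
				mt8183_pm_domain_names[i]);
		if (IS_ERR_OR_NULL(pd_dev))
			return pd_dev ? PTR_ERR(pd_dev) : -ENODATA;

		pfdev->pm_domain_devs[i] = pd_dev;
	}

	return 0;
}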
[...]
Hope this helps.
Kind regards Uffe
On Fri, Feb 7, 2020 at 10:04 AM Nicolas Boichat drinkcat@chromium.org wrote:
[...]
No worries, than for the helpful reply!
(s/than/thanks/... ,-P)
[...]
For testing only, the driver doesn't really work yet, AFAICT.
Signed-off-by: Nicolas Boichat drinkcat@chromium.org --- drivers/gpu/drm/panfrost/panfrost_drv.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index 48e3c4165247cea..f3a4d77266ba961 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -591,6 +591,7 @@ static const struct of_device_id dt_match[] = {
 	{ .compatible = "arm,mali-t830" },
 	{ .compatible = "arm,mali-t860" },
 	{ .compatible = "arm,mali-t880" },
+	{ .compatible = "arm,mali-bifrost" },
 	{}
 };
 MODULE_DEVICE_TABLE(of, dt_match);
On 08/01/2020 05:23, Nicolas Boichat wrote:
For testing only, the driver doesn't really work yet, AFAICT.
Signed-off-by: Nicolas Boichat drinkcat@chromium.org
It does work (at least on the Hikey960); we just don't have any public user space driver for it... ;)
Reviewed-by: Steven Price steven.price@arm.com
The Bifrost GPU on MT8183 uses 2 regulators (core and SRAM) for devfreq, and provides an OPP table with 2 sets of voltages.
TODO: This is incomplete, as we'll need to add support for setting a pair of voltages as well (see the sketch after the diff below).
Signed-off-by: Nicolas Boichat drinkcat@chromium.org --- drivers/gpu/drm/panfrost/panfrost_devfreq.c | 18 ++++++++++++++++++ drivers/gpu/drm/panfrost/panfrost_device.h | 2 ++ 2 files changed, 20 insertions(+)
diff --git a/drivers/gpu/drm/panfrost/panfrost_devfreq.c b/drivers/gpu/drm/panfrost/panfrost_devfreq.c
index 413987038fbfccb..5eb0effded7eb09 100644
--- a/drivers/gpu/drm/panfrost/panfrost_devfreq.c
+++ b/drivers/gpu/drm/panfrost/panfrost_devfreq.c
@@ -79,6 +79,22 @@ int panfrost_devfreq_init(struct panfrost_device *pfdev)
 	struct devfreq *devfreq;
 	struct thermal_cooling_device *cooling;
 
+	/* If we have 2 regulators, we need an OPP table with 2 voltages. */
+	if (pfdev->regulator_sram) {
+		const char * const reg_names[] = { "mali", "sram" };
+
+		pfdev->devfreq.dev_opp_table =
+			dev_pm_opp_set_regulators(dev,
+				reg_names, ARRAY_SIZE(reg_names));
+		if (IS_ERR(pfdev->devfreq.dev_opp_table)) {
+			ret = PTR_ERR(pfdev->devfreq.dev_opp_table);
+			pfdev->devfreq.dev_opp_table = NULL;
+			dev_err(dev,
+				"Failed to init devfreq opp table: %d\n", ret);
+			return ret;
+		}
+	}
+
 	ret = dev_pm_opp_of_add_table(dev);
 	if (ret == -ENODEV) /* Optional, continue without devfreq */
 		return 0;
@@ -119,6 +135,8 @@ void panfrost_devfreq_fini(struct panfrost_device *pfdev)
 	if (pfdev->devfreq.cooling)
 		devfreq_cooling_unregister(pfdev->devfreq.cooling);
 	dev_pm_opp_of_remove_table(&pfdev->pdev->dev);
+	if (pfdev->devfreq.dev_opp_table)
+		dev_pm_opp_put_regulators(pfdev->devfreq.dev_opp_table);
 }
 
 void panfrost_devfreq_resume(struct panfrost_device *pfdev)
diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
index 92d471676fc7823..581da3fe5df8b17 100644
--- a/drivers/gpu/drm/panfrost/panfrost_device.h
+++ b/drivers/gpu/drm/panfrost/panfrost_device.h
@@ -91,10 +91,12 @@ struct panfrost_device {
 	struct {
 		struct devfreq *devfreq;
 		struct thermal_cooling_device *cooling;
+		struct opp_table *dev_opp_table;
 		ktime_t busy_time;
 		ktime_t idle_time;
 		ktime_t time_last_update;
 		atomic_t busy_count;
+		struct panfrost_devfreq_slot slot[NUM_JOB_SLOTS];
 	} devfreq;
 };
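(As a rough illustration of that TODO: with more than one regulator, the OPP core needs a custom set_opp helper, registered with dev_pm_opp_register_set_opp_helper(). The sketch below is untested; the helper name, the fixed supply ordering, and the scale-up-only sequencing are my assumptions, and a real version would have to reverse the order when scaling down:)

/* Untested sketch: program both supplies, then the clock. Supplies
 * follow the order passed to dev_pm_opp_set_regulators(). */
static int panfrost_devfreq_set_opp(struct dev_pm_set_opp_data *data)
{
	struct dev_pm_opp_supply *mali = &data->new_opp.supplies[0];
	struct dev_pm_opp_supply *sram = &data->new_opp.supplies[1];
	int err;

	err = regulator_set_voltage(data->regulators[1],
				    sram->u_volt, sram->u_volt_max);
	if (err)
		return err;

	err = regulator_set_voltage(data->regulators[0],
				    mali->u_volt, mali->u_volt_max);
	if (err)
		return err;

	return clk_set_rate(data->clk, data->new_opp.rate);
}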
On Tue, Jan 7, 2020 at 11:24 PM Nicolas Boichat drinkcat@chromium.org wrote:
[snip]

 		atomic_t busy_count;
+		struct panfrost_devfreq_slot slot[NUM_JOB_SLOTS];
?? Left over from some rebase?
Rob
On Thu, Jan 9, 2020 at 4:09 AM Rob Herring robh+dt@kernel.org wrote:
[snip]
+		struct panfrost_devfreq_slot slot[NUM_JOB_SLOTS];
?? Left over from some rebase?
Oh, yes, sorry.
Patches 1,2,3,6 are:
Reviewed-by: Alyssa Rosenzweig alyssa.rosenzweig@collabora.com
The remaining patches in the series are Acked-by.
Reportedly the kernel should work on certain Bifrost boards more or less as-is, but I'm not positive about the details. It's possible some of these are G72-specific or MT-specific issues; Robin and Steven will know more.
Very nice work so far!
Alyssa
On Wed, Jan 08, 2020 at 01:23:30PM +0800, Nicolas Boichat wrote:
[...]
On Wed, Jan 8, 2020 at 1:23 PM Nicolas Boichat drinkcat@chromium.org wrote:
[...]
So the GPU is probed, but there's an issue when powering up the STACK, not quite sure why, I'll try to have a deeper look, at some point.
Just as a follow-up to that one: stack_present=0x00000007 on my GPU.
However, the ARM-provided driver I use on this platform doesn't have CONFIG_MALI_CORESTACK enabled, so the "stack" is never turned on: https://chromium.googlesource.com/chromiumos/third_party/kernel/+/chromeos-4... So possibly this does not need to be done on Bifrost GPUs (and the error should be harmless).
Thanks!
Nicolas
[...]
On 08/01/2020 05:23, Nicolas Boichat wrote:
[...]
It's interesting that it's only the stack that is failing. In hardware there's a dependency: L2->stack->shader - so in theory the shader cores shouldn't be able to power up either. There are some known hardware bugs here though[1]:
MODULE_PARM_DESC(corestack_driver_control,
	"Let the driver power on/off the GPU core stack independently "
	"without involving the Power Domain Controller. This should "
	"only be enabled on platforms for which integration of the PDC "
	"to the Mali GPU is known to be problematic.");
[1] https://github.com/ianmacd/d2s/blob/master/drivers/gpu/arm/b_r16p0/backend/g...
It might be worth just dropping the code for powering up/down stacks and letting the GPU's own dependency management handle it.
Steve
[...]
On 09/01/2020 12:01 pm, Steven Price wrote:
[...]
It might be worth just dropping the code for powering up/down stacks and letting the GPU's own dependency management handle it.
FWIW I remember digging into that same message a while back (although I've forgotten which particular GPU I was playing with at the time), and concluded that the STACK_PWRON/STACK_READY registers might just not be implemented on some GPUs, and/or this easy-to-overlook register bit could be some kind of enable for the functionality:
https://github.com/ianmacd/d2s/blob/master/drivers/gpu/arm/b_r16p0/backend/g...
Since even in kbase this is all behind an 'expert' config option, I'm inclined to agree that just dropping it from panfrost unless and until it proves necessary is probably preferable to adding more logic and inscrutable register-magic.
Robin.
On Thu, Jan 09, 2020 at 01:10:33PM +0000, Robin Murphy wrote:
[...]
FWIW I remember digging into that same message a while back (although I've forgotten which particular GPU I was playing with at the time), and concluded that the STACK_PWRON/STACK_READY registers might just not be implemented on some GPUs,
They are indeed not implemented on some GPUs, specifically none of the Midgard GPUs; I believe G71 also doesn't have it. However, the register addresses were picked so that on these older GPUs they should read-as-zero and write-ignore, so this shouldn't actually cause any problems.
and/or this easy-to-overlook register bit could be some kind of enable for the functionality:
https://github.com/ianmacd/d2s/blob/master/drivers/gpu/arm/b_r16p0/backend/g...
Since even in kbase this is all behind an 'expert' config option, I'm inclined to agree that just dropping it from panfrost unless and until it proves necessary is probably preferable to adding more logic and inscrutable register-magic.
Indeed - I'll post a patch removing it.
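(For reference, the removal would presumably look something like this; an untested sketch against the power-on sequence this series touches, so the exact lines may differ:)

-	gpu_write(pfdev, STACK_PWRON_LO, pfdev->features.stack_present);
-	ret = readl_relaxed_poll_timeout(pfdev->iomem + STACK_READY_LO,
-		val, val == pfdev->features.stack_present, 100, 1000);
-	if (ret)
-		dev_err(pfdev->dev, "error powering up gpu stack");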
Thanks,
Steve