When CONFIG_INIT_ON_ALLOC_DEFAULT_ON is set, the GMU memory allocator runs afoul of cache coherency issues because the buffers are mapped as write-combine without clearing the cache after they were zeroed.
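For context, here is a minimal sketch of the failing pattern; this is illustrative only, not the driver's literal code, and the helper name is made up:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

/*
 * Illustrative sketch only: with CONFIG_INIT_ON_ALLOC_DEFAULT_ON,
 * alloc_page() zeroes the page through the cacheable linear map,
 * leaving dirty zero lines in the CPU cache.
 */
static void *broken_wc_alloc(struct page **page)
{
	*page = alloc_page(GFP_KERNEL);
	if (!*page)
		return NULL;

	/*
	 * The write-combine alias bypasses the cache, but nothing here
	 * cleans the dirty lines first, so a later writeback can clobber
	 * whatever was written through the returned mapping.
	 */
	return vmap(page, 1, VM_IOREMAP, pgprot_writecombine(PAGE_KERNEL));
}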
Rather than duplicating the hacky workaround we use in the GEM allocator for the same reason, it turns out that we don't need a bespoke memory allocator for the GMU at all. It uses a flat, global address space and makes only two relatively minor allocations. In short, this is essentially what the DMA API was created for, so replace a bunch of memory management code with two calls to allocate and free DMA memory and we're fine.
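Concretely, the bespoke allocator collapses into something like the sketch below. The helper names here are hypothetical; the real versions land in a6xx_gmu_memory_alloc()/a6xx_gmu_memory_free() in patch 2:

#include <linux/dma-mapping.h>

/* One coherent write-combine allocation per GMU buffer */
static void *gmu_buf_alloc(struct device *dev, size_t size, dma_addr_t *iova)
{
	/* Returned memory is zeroed and already mapped for the device */
	return dma_alloc_wc(dev, PAGE_ALIGN(size), iova, GFP_KERNEL);
}

static void gmu_buf_free(struct device *dev, size_t size, void *virt,
		dma_addr_t iova)
{
	dma_free_wc(dev, PAGE_ALIGN(size), virt, iova);
}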
In a previous version of this series I added the dma-ranges property to the device tree file for the GMU and updated the bindings to YAML. Rob correctly pointed out that we don't need dma-ranges any more, but I'm still pushing the YAML conversion because it is good and we'll eventually need it anyway, so why not.
v4: Use dma_alloc_wc() wrappers per Michael Ruhl.
v3: Fix the YAML description per RobH; remove dma-ranges and replace it with the correct DMA mask in the GMU device. Convert the iova type to dma_addr_t to make it 32-bit friendly.
v2: Fix the example bindings for dma-ranges (the third item is the size). Pass false to of_dma_configure so that probe fails if the DMA region is not set up; a sketch of the resulting probe sequence follows below.
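Taken together, the v2 and v3 items amount to the probe-time setup sketched here. The helper name gmu_dma_setup() is hypothetical; the real code sits in a6xx_gmu_init() in patch 2:

#include <linux/dma-mapping.h>
#include <linux/of_device.h>

static int gmu_dma_setup(struct device *dev, struct device_node *node)
{
	/* force_dma = false: fail probe if the DT did not set up a DMA region */
	int ret = of_dma_configure(dev, node, false);

	if (ret)
		return ret;

	/*
	 * Set the mask after of_dma_configure(); 31 bits keeps DMA
	 * addresses inside the GMU's hardcoded uncached window below
	 * 0x80000000.
	 */
	return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(31));
}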
Jordan Crouse (2):
  dt-bindings: display: msm: Convert GMU bindings to YAML
  drm/msm/a6xx: Use the DMA API for GMU memory objects

 .../devicetree/bindings/display/msm/gmu.txt  | 116 -------------------
 .../devicetree/bindings/display/msm/gmu.yaml | 123 +++++++++++++++++++++
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c        | 113 ++-----------------
 drivers/gpu/drm/msm/adreno/a6xx_gmu.h        |   6 +-
 4 files changed, 135 insertions(+), 223 deletions(-)
 delete mode 100644 Documentation/devicetree/bindings/display/msm/gmu.txt
 create mode 100644 Documentation/devicetree/bindings/display/msm/gmu.yaml
Convert display/msm/gmu.txt to display/msm/gmu.yaml and remove the old text bindings.
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
---
 .../devicetree/bindings/display/msm/gmu.txt  | 116 -------------------
 .../devicetree/bindings/display/msm/gmu.yaml | 123 +++++++++++++++++++++
 2 files changed, 123 insertions(+), 116 deletions(-)
 delete mode 100644 Documentation/devicetree/bindings/display/msm/gmu.txt
 create mode 100644 Documentation/devicetree/bindings/display/msm/gmu.yaml
diff --git a/Documentation/devicetree/bindings/display/msm/gmu.txt b/Documentation/devicetree/bindings/display/msm/gmu.txt
deleted file mode 100644
index bf9c7a2..0000000
--- a/Documentation/devicetree/bindings/display/msm/gmu.txt
+++ /dev/null
@@ -1,116 +0,0 @@
-Qualcomm adreno/snapdragon GMU (Graphics management unit)
-
-The GMU is a programmable power controller for the GPU. the CPU controls the
-GMU which in turn handles power controls for the GPU.
-
-Required properties:
-- compatible: "qcom,adreno-gmu-XYZ.W", "qcom,adreno-gmu"
-  for example: "qcom,adreno-gmu-630.2", "qcom,adreno-gmu"
-  Note that you need to list the less specific "qcom,adreno-gmu"
-  for generic matches and the more specific identifier to identify
-  the specific device.
-- reg: Physical base address and length of the GMU registers.
-- reg-names: Matching names for the register regions
-  * "gmu"
-  * "gmu_pdc"
-  * "gmu_pdc_seg"
-- interrupts: The interrupt signals from the GMU.
-- interrupt-names: Matching names for the interrupts
-  * "hfi"
-  * "gmu"
-- clocks: phandles to the device clocks
-- clock-names: Matching names for the clocks
-  * "gmu"
-  * "cxo"
-  * "axi"
-  * "mnoc"
-- power-domains: should be:
-	<&clock_gpucc GPU_CX_GDSC>
-	<&clock_gpucc GPU_GX_GDSC>
-- power-domain-names: Matching names for the power domains
-- iommus: phandle to the adreno iommu
-- operating-points-v2: phandle to the OPP operating points
-
-Optional properties:
-- sram: phandle to the On Chip Memory (OCMEM) that's present on some Snapdragon
-        SoCs. See Documentation/devicetree/bindings/sram/qcom,ocmem.yaml.
-
-Example:
-
-/ {
-	...
-
-	gmu: gmu@506a000 {
-		compatible="qcom,adreno-gmu-630.2", "qcom,adreno-gmu";
-
-		reg = <0x506a000 0x30000>,
-			<0xb280000 0x10000>,
-			<0xb480000 0x10000>;
-		reg-names = "gmu", "gmu_pdc", "gmu_pdc_seq";
-
-		interrupts = <GIC_SPI 304 IRQ_TYPE_LEVEL_HIGH>,
-			     <GIC_SPI 305 IRQ_TYPE_LEVEL_HIGH>;
-		interrupt-names = "hfi", "gmu";
-
-		clocks = <&gpucc GPU_CC_CX_GMU_CLK>,
-			<&gpucc GPU_CC_CXO_CLK>,
-			<&gcc GCC_DDRSS_GPU_AXI_CLK>,
-			<&gcc GCC_GPU_MEMNOC_GFX_CLK>;
-		clock-names = "gmu", "cxo", "axi", "memnoc";
-
-		power-domains = <&gpucc GPU_CX_GDSC>,
-				<&gpucc GPU_GX_GDSC>;
-		power-domain-names = "cx", "gx";
-
-		iommus = <&adreno_smmu 5>;
-
-		operating-points-v2 = <&gmu_opp_table>;
-	};
-};
-
-a3xx example with OCMEM support:
-
-/ {
-	...
-
-	gpu: adreno@fdb00000 {
-		compatible = "qcom,adreno-330.2",
-			     "qcom,adreno";
-		reg = <0xfdb00000 0x10000>;
-		reg-names = "kgsl_3d0_reg_memory";
-		interrupts = <GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>;
-		interrupt-names = "kgsl_3d0_irq";
-		clock-names = "core",
-			      "iface",
-			      "mem_iface";
-		clocks = <&mmcc OXILI_GFX3D_CLK>,
-			 <&mmcc OXILICX_AHB_CLK>,
-			 <&mmcc OXILICX_AXI_CLK>;
-		sram = <&gmu_sram>;
-		power-domains = <&mmcc OXILICX_GDSC>;
-		operating-points-v2 = <&gpu_opp_table>;
-		iommus = <&gpu_iommu 0>;
-	};
-
-	ocmem@fdd00000 {
-		compatible = "qcom,msm8974-ocmem";
-
-		reg = <0xfdd00000 0x2000>,
-		      <0xfec00000 0x180000>;
-		reg-names = "ctrl",
-			    "mem";
-
-		clocks = <&rpmcc RPM_SMD_OCMEMGX_CLK>,
-			 <&mmcc OCMEMCX_OCMEMNOC_CLK>;
-		clock-names = "core",
-			      "iface";
-
-		#address-cells = <1>;
-		#size-cells = <1>;
-
-		gmu_sram: gmu-sram@0 {
-			reg = <0x0 0x100000>;
-			ranges = <0 0 0xfec00000 0x100000>;
-		};
-	};
-};
diff --git a/Documentation/devicetree/bindings/display/msm/gmu.yaml b/Documentation/devicetree/bindings/display/msm/gmu.yaml
new file mode 100644
index 0000000..0b8736a
--- /dev/null
+++ b/Documentation/devicetree/bindings/display/msm/gmu.yaml
@@ -0,0 +1,123 @@
+# SPDX-License-Identifier: GPL-2.0-only
+# Copyright 2019-2020, The Linux Foundation, All Rights Reserved
+%YAML 1.2
+---
+
+$id: "http://devicetree.org/schemas/display/msm/gmu.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Devicetree bindings for the GMU attached to certain Adreno GPUs
+
+maintainers:
+  - Rob Clark <robdclark@gmail.com>
+
+description: |
+  These bindings describe the Graphics Management Unit (GMU) that is attached
+  to members of the Adreno A6xx GPU family. The GMU provides on-device power
+  management and support to improve power efficiency and reduce the load on
+  the CPU.
+
+properties:
+  compatible:
+    items:
+      - enum:
+          - qcom,adreno-gmu-630.2
+      - const: qcom,adreno-gmu
+
+  reg:
+    items:
+      - description: Core GMU registers
+      - description: GMU PDC registers
+      - description: GMU PDC sequence registers
+
+  reg-names:
+    items:
+      - const: gmu
+      - const: gmu_pdc
+      - const: gmu_pdc_seq
+
+  clocks:
+    items:
+      - description: GMU clock
+      - description: GPU CX clock
+      - description: GPU AXI clock
+      - description: GPU MEMNOC clock
+
+  clock-names:
+    items:
+      - const: gmu
+      - const: cxo
+      - const: axi
+      - const: memnoc
+
+  interrupts:
+    items:
+      - description: GMU HFI interrupt
+      - description: GMU interrupt
+
+  interrupt-names:
+    items:
+      - const: hfi
+      - const: gmu
+
+  power-domains:
+    items:
+      - description: CX power domain
+      - description: GX power domain
+
+  power-domain-names:
+    items:
+      - const: cx
+      - const: gx
+
+  iommus:
+    maxItems: 1
+
+  operating-points-v2: true
+
+required:
+  - compatible
+  - reg
+  - reg-names
+  - clocks
+  - clock-names
+  - interrupts
+  - interrupt-names
+  - power-domains
+  - power-domain-names
+  - iommus
+  - operating-points-v2
+
+examples:
+  - |
+    #include <dt-bindings/clock/qcom,gpucc-sdm845.h>
+    #include <dt-bindings/clock/qcom,gcc-sdm845.h>
+    #include <dt-bindings/interrupt-controller/irq.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    gmu: gmu@506a000 {
+        compatible="qcom,adreno-gmu-630.2", "qcom,adreno-gmu";
+
+        reg = <0x506a000 0x30000>,
+              <0xb280000 0x10000>,
+              <0xb480000 0x10000>;
+        reg-names = "gmu", "gmu_pdc", "gmu_pdc_seq";
+
+        clocks = <&gpucc GPU_CC_CX_GMU_CLK>,
+                 <&gpucc GPU_CC_CXO_CLK>,
+                 <&gcc GCC_DDRSS_GPU_AXI_CLK>,
+                 <&gcc GCC_GPU_MEMNOC_GFX_CLK>;
+        clock-names = "gmu", "cxo", "axi", "memnoc";
+
+        interrupts = <GIC_SPI 304 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 305 IRQ_TYPE_LEVEL_HIGH>;
+        interrupt-names = "hfi", "gmu";
+
+        power-domains = <&gpucc GPU_CX_GDSC>,
+                        <&gpucc GPU_GX_GDSC>;
+        power-domain-names = "cx", "gx";

+        iommus = <&adreno_smmu 5>;
+        operating-points-v2 = <&gmu_opp_table>;
+    };
On Mon, 2 Mar 2020 13:47:46 -0700, Jordan Crouse wrote:
Convert display/msm/gmu.txt to display/msm/gmu.yaml and remove the old text bindings.
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
 .../devicetree/bindings/display/msm/gmu.txt  | 116 -------------------
 .../devicetree/bindings/display/msm/gmu.yaml | 123 +++++++++++++++++++++
 2 files changed, 123 insertions(+), 116 deletions(-)
 delete mode 100644 Documentation/devicetree/bindings/display/msm/gmu.txt
 create mode 100644 Documentation/devicetree/bindings/display/msm/gmu.yaml
Reviewed-by: Rob Herring <robh@kernel.org>
The GMU has very few memory allocations and uses a flat memory space, so there is no good reason to go out of our way to bypass the DMA APIs, which were basically designed for this exact scenario.
v4: Use dma_alloc_wc()
v3: Set the dma mask correctly and use dma_addr_t for the iova type
v2: Pass force_dma false to of_dma_configure to require that the DMA region
    be set up, and return the error from of_dma_configure to fail probe.
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
---
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 113 ++++------------------------
 drivers/gpu/drm/msm/adreno/a6xx_gmu.h |   6 +-
 2 files changed, 12 insertions(+), 107 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 748cd37..dd51dd0 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -2,6 +2,7 @@
 /* Copyright (c) 2017-2019 The Linux Foundation. All rights reserved. */

 #include <linux/clk.h>
+#include <linux/dma-mapping.h>
 #include <linux/interconnect.h>
 #include <linux/pm_domain.h>
 #include <linux/pm_opp.h>
@@ -920,21 +921,10 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)

 static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo)
 {
-	int count, i;
-	u64 iova;
-
 	if (IS_ERR_OR_NULL(bo))
 		return;

-	count = bo->size >> PAGE_SHIFT;
-	iova = bo->iova;
-
-	for (i = 0; i < count; i++, iova += PAGE_SIZE) {
-		iommu_unmap(gmu->domain, iova, PAGE_SIZE);
-		__free_pages(bo->pages[i], 0);
-	}
-
-	kfree(bo->pages);
+	dma_free_wc(gmu->dev, bo->size, bo->virt, bo->iova);
 	kfree(bo);
 }

@@ -942,7 +932,6 @@ static struct a6xx_gmu_bo *a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu,
 		size_t size)
 {
 	struct a6xx_gmu_bo *bo;
-	int ret, count, i;

 	bo = kzalloc(sizeof(*bo), GFP_KERNEL);
 	if (!bo)
@@ -950,86 +939,14 @@ static struct a6xx_gmu_bo *a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu,

 	bo->size = PAGE_ALIGN(size);

-	count = bo->size >> PAGE_SHIFT;
+	bo->virt = dma_alloc_wc(gmu->dev, bo->size, &bo->iova, GFP_KERNEL);

-	bo->pages = kcalloc(count, sizeof(struct page *), GFP_KERNEL);
-	if (!bo->pages) {
+	if (!bo->virt) {
 		kfree(bo);
 		return ERR_PTR(-ENOMEM);
 	}

-	for (i = 0; i < count; i++) {
-		bo->pages[i] = alloc_page(GFP_KERNEL);
-		if (!bo->pages[i])
-			goto err;
-	}
-
-	bo->iova = gmu->uncached_iova_base;
-
-	for (i = 0; i < count; i++) {
-		ret = iommu_map(gmu->domain,
-			bo->iova + (PAGE_SIZE * i),
-			page_to_phys(bo->pages[i]), PAGE_SIZE,
-			IOMMU_READ | IOMMU_WRITE);
-
-		if (ret) {
-			DRM_DEV_ERROR(gmu->dev, "Unable to map GMU buffer object\n");
-
-			for (i = i - 1 ; i >= 0; i--)
-				iommu_unmap(gmu->domain,
-					bo->iova + (PAGE_SIZE * i),
-					PAGE_SIZE);
-
-			goto err;
-		}
-	}
-
-	bo->virt = vmap(bo->pages, count, VM_IOREMAP,
-		pgprot_writecombine(PAGE_KERNEL));
-	if (!bo->virt)
-		goto err;
-
-	/* Align future IOVA addresses on 1MB boundaries */
-	gmu->uncached_iova_base += ALIGN(size, SZ_1M);
-
 	return bo;
-
-err:
-	for (i = 0; i < count; i++) {
-		if (bo->pages[i])
-			__free_pages(bo->pages[i], 0);
-	}
-
-	kfree(bo->pages);
-	kfree(bo);
-
-	return ERR_PTR(-ENOMEM);
-}
-
-static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu)
-{
-	int ret;
-
-	/*
-	 * The GMU address space is hardcoded to treat the range
-	 * 0x60000000 - 0x80000000 as un-cached memory. All buffers shared
-	 * between the GMU and the CPU will live in this space
-	 */
-	gmu->uncached_iova_base = 0x60000000;
-
-
-	gmu->domain = iommu_domain_alloc(&platform_bus_type);
-	if (!gmu->domain)
-		return -ENODEV;
-
-	ret = iommu_attach_device(gmu->domain, gmu->dev);
-
-	if (ret) {
-		iommu_domain_free(gmu->domain);
-		gmu->domain = NULL;
-	}
-
-	return ret;
 }

 /* Return the 'arc-level' for the given frequency */
@@ -1289,10 +1206,6 @@ void a6xx_gmu_remove(struct a6xx_gpu *a6xx_gpu)

 	a6xx_gmu_memory_free(gmu, gmu->hfi);

-	iommu_detach_device(gmu->domain, gmu->dev);
-
-	iommu_domain_free(gmu->domain);
-
 	free_irq(gmu->gmu_irq, gmu);
 	free_irq(gmu->hfi_irq, gmu);

@@ -1313,7 +1226,13 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)

 	gmu->dev = &pdev->dev;

-	of_dma_configure(gmu->dev, node, true);
+	/* Pass force_dma false to require the DT to set the dma region */
+	ret = of_dma_configure(gmu->dev, node, false);
+	if (ret)
+		return ret;
+
+	/* Set the mask after the of_dma_configure() */
+	dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(31));

 	/* Fow now, don't do anything fancy until we get our feet under us */
 	gmu->idle_level = GMU_IDLE_STATE_ACTIVE;
@@ -1325,11 +1244,6 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
 	if (ret)
 		goto err_put_device;

-	/* Set up the IOMMU context bank */
-	ret = a6xx_gmu_memory_probe(gmu);
-	if (ret)
-		goto err_put_device;
-
 	/* Allocate memory for for the HFI queues */
 	gmu->hfi = a6xx_gmu_memory_alloc(gmu, SZ_16K);
 	if (IS_ERR(gmu->hfi))
@@ -1375,11 +1289,6 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
 err_memory:
 	a6xx_gmu_memory_free(gmu, gmu->hfi);

-	if (gmu->domain) {
-		iommu_detach_device(gmu->domain, gmu->dev);
-
-		iommu_domain_free(gmu->domain);
-	}
-
 	ret = -ENODEV;

 err_put_device:
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
index 2af91ed..4af65a3 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
@@ -12,8 +12,7 @@
 struct a6xx_gmu_bo {
 	void *virt;
 	size_t size;
-	u64 iova;
-	struct page **pages;
+	dma_addr_t iova;
 };

 /*
@@ -49,9 +48,6 @@ struct a6xx_gmu {
 	int hfi_irq;
 	int gmu_irq;

-	struct iommu_domain *domain;
-	u64 uncached_iova_base;
-
 	struct device *gxpd;

 	int idle_level;
-----Original Message-----
From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of Jordan Crouse
Sent: Monday, March 2, 2020 3:48 PM
To: linux-arm-msm@vger.kernel.org
Cc: Douglas Anderson <dianders@chromium.org>; David Airlie <airlied@linux.ie>; freedreno@lists.freedesktop.org; smasetty@codeaurora.org; linux-kernel@vger.kernel.org; dri-devel@lists.freedesktop.org; Stephen Boyd <swboyd@chromium.org>; Ruhl, Michael J <michael.j.ruhl@intel.com>; Sean Paul <sean@poorly.run>
Subject: [PATCH v4 2/2] drm/msm/a6xx: Use the DMA API for GMU memory objects
The GMU has very few memory allocations and uses a flat memory space, so there is no good reason to go out of our way to bypass the DMA APIs, which were basically designed for this exact scenario.
v4: Use dma_alloc_wc()
The patch and the update look good to me.
Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Mike