Hey all,
this is the Etnaviv DRM driver for Vivante embedded GPUs. It is heavily influenced by the MSM driver, as can clearly be seen from the first commits.
The userspace interface looks a lot like the MSM one, with some small differences:
- Each GPU core is a pipe, as with MSM, but Vivante doesn't have a strict separation of tasks between the pipes. On some SoCs like the i.MX6 each pipe feeds one rendering backend (2D, 3D, VG), but there are also SoCs out there where one core (pipe) houses more than one backend. So pipes in Etnaviv represent one core, which may be switched between multiple execution states through the command stream. To allow for proper separation between processes, each process may specify the expected execution state on submit.
- OR-ing and shifting of BO reloc addresses has been removed, as there is no need for this on Vivante GPUs. The register interface is designed in a way that one always fills in complete 32-bit addresses without any additional information.
- Presumption of BO addresses is not used right now, as the GPU MMU v1 cannot guarantee full protection. There is a 2GB window into physical memory without any MMU translation in between, so we always have to process all relocs to guard against malicious userspace. I've left it in the interface though, as MMU v2 seems to be able to give full protection, and it might become useful at that point.
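Because MMUv1 leaves that untranslated 2GB window, the kernel walks every reloc itself, and with the OR/shift machinery gone, patching a reloc reduces to writing one full 32-bit address into the command stream. A minimal sketch of that idea (struct and field names here are made up for illustration, not the real etnaviv UAPI):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical reloc record -- field names are illustrative only,
 * not the real etnaviv UAPI. */
struct sketch_reloc {
	uint32_t submit_offset;	/* byte offset of the stream word to patch */
	uint32_t gpu_base;	/* GPU address the BO was mapped at */
	uint32_t reloc_offset;	/* offset of the target within the BO */
};

/* Patch one relocation into the command stream: the complete 32-bit
 * address is written verbatim, with no OR-ing of flags and no
 * shifting, because Vivante registers take full addresses. */
static void sketch_apply_reloc(uint32_t *stream,
			       const struct sketch_reloc *r)
{
	stream[r->submit_offset / 4] = r->gpu_base + r->reloc_offset;
}
```

In the driver the equivalent write happens once per reloc entry on submit, after the offset has been validated against the command buffer bounds.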
Unfinished stuff:
- GPU PM and context switching. This already works for the 2D GPU, where there isn't much state to retain on the GPU itself. For full context switching and power-down support on the 3D GPU, userspace needs to aid the kernel with a context restore buffer. This part isn't done yet.
It's a rather long series. I already tried to squash some commits together, but wanted to retain the authorship of the individual people who worked on this driver for now. Maybe, if everyone involved is okay with it, we could squash some of the fixup commits a bit more.
I've kept things in staging for now, as that's the place where Christian started this driver, but would really like to move it to DRM proper _before_ merging. So please review stuff with that in mind.
Russell King has some experimental support in the xf86-video-armada driver to get some X acceleration running on top of this. I have a libdrm/MESA stack that basically works for some simple applications, but needs a good deal more cleanup work.
If you would like to look at this stuff as a git tree feel free to fetch:
git://git.pengutronix.de/git/lst/linux.git etnaviv-for-upstream
Regards,
Lucas
Christian Gmeiner (2):
      staging: etnaviv: add drm driver
      staging: etnaviv: quieten down kernel log output
Lucas Stach (28):
      staging: etnaviv: add devicetree bindings
      staging: etnaviv: import new headers
      staging: etnaviv: remove IOMMUv2 stubs
      staging: etnaviv: allow to draw up to 256 rectangles in one draw call
      staging: etnaviv: align command stream size to 64 bit
      staging: etnaviv: correct instruction count for GC2000 and GC880
      staging: etnaviv: reconfigure bus mapping on GC2000
      staging: etnaviv: fix cache cleaning for uncached SHM buffers
      staging: etnaviv: properly flush all TLBs on MMUv1
      staging: etnaviv: convert to_etnaviv_bo() to real function
      staging: etnaviv: take gpu instead of pipe as input to fence wait function
      staging: etnaviv: plug in fence waiting in cpu_prepare
      staging: etnaviv: allow to map buffer object into multiple address spaces
      staging: etnaviv: don't pretend to have a single MMU
      staging: etnaviv: use GPU device to construct MMU
      staging: etnaviv: flush MMU when switching context
      staging: etnaviv: add flag to force buffer through MMU
      staging: etnaviv: use more natural devicetree abstraction
      staging: etnaviv: don't override platform provided IRQ flags
      staging: etnaviv: separate GPU pipes from execution state
      staging: etnaviv: make sure to unlock DRM mutex when component bind fails
      staging: etnaviv: clean up public API
      staging: etnaviv: prune dumb buffer support
      staging: etnaviv: properly prefix all prime functions to etnaviv
      staging: etnaviv: rename last remaining bits from msm to etnaviv
      staging: etnaviv: add proper license header to all files
      staging: etnaviv: some final trivial changes to the module
      ARM: imx6: add Vivante GPU nodes
Philipp Zabel (1):
      of: Add vendor prefix for Vivante Corporation
Russell King (80):
      staging: etnaviv: fix oops on unbind
      staging: etnaviv: fix oops in timer subsystem caused by hangcheck timer
      staging: etnaviv: fix etnaviv_add_components()
      staging: etnaviv: fix etnaviv_hw_reset()
      staging: etnaviv: fix etnaviv gpu debugfs output
      staging: etnaviv: fix fence implementation
      staging: etnaviv: fix buffer dumping code
      staging: etnaviv: fix ring buffer overflow check
      staging: etnaviv: fix cleanup of imported dmabufs
      staging: etnaviv: fix printk formats
      staging: etnaviv: validation: ensure space for the LINK command
      staging: etnaviv: validation: improve command buffer size checks
      staging: etnaviv: validation: improve relocation validation
      staging: etnaviv: fix sparse warnings
      staging: etnaviv: use devm_ioremap_resource()
      staging: etnaviv: respect the submission command offset
      staging: etnaviv: add an offset for buffer dumping
      staging: etnaviv: quieten down submission debugging
      staging: etnaviv: fix multiple command buffer submission in etnaviv_buffer_queue()
      staging: etnaviv: package up events into etnaviv_event struct
      staging: etnaviv: track the last known GPU position
      staging: etnaviv: ensure that ring buffer wraps
      staging: etnaviv: fix checkpatch errors
      staging: etnaviv: fix checkpatch warnings
      staging: etnaviv: fix get_pages() failure path
      staging: etnaviv: add gem operations structure to etnaviv objects
      staging: etnaviv: convert prime import to use etnaviv_gem_ops
      staging: etnaviv: convert shmem release to use etnaviv_gem_ops
      staging: etnaviv: convert cmdbuf release to use etnaviv_gem_ops
      staging: etnaviv: move drm_gem_object_release()
      staging: etnaviv: ensure cleanup of reservation object
      staging: etnaviv: clean up etnaviv_gem_free_object()
      staging: etnaviv: provide etnaviv_gem_new_private()
      staging: etnaviv: move msm_gem_import() etc to etnaviv_gem_prime.c
      staging: etnaviv: clean up prime import
      staging: etnaviv: convert get_pages()/put_pages() to take etnaviv_obj
      staging: etnaviv: clean up etnaviv_gem_{get,put}_pages()
      staging: etnaviv: add gem get_pages() method
      staging: etnaviv: fix DMA API usage
      staging: etnaviv: add support to insert a MMU flush into GPU stream
      staging: etnaviv: move GPU memory management into MMU
      staging: etnaviv: publish and use mmu geometry
      staging: etnaviv: mmuv1: ensure we unmap all entries
      staging: etnaviv: move MMU setup and teardown code to etnaviv_mmu.c
      staging: etnaviv: hack: bypass iommu with contiguous buffers
      staging: etnaviv: implement round-robin GPU MMU allocation
      staging: etnaviv: fix etnaviv_iommu_map_gem() return paths
      staging: etnaviv: implement MMU reaping
      staging: etnaviv: move scatterlist creation to etnaviv_gem_get_pages()
      staging: etnaviv: add userptr mapping support
      staging: etnaviv: call the DRM device 'drm'
      staging: etnaviv: clean up printk()s etc
      staging: etnaviv: safely take down hangcheck
      staging: etnaviv: move hangcheck disable to separate function
      staging: etnaviv: stop the hangcheck timer mis-firing
      staging: etnaviv: ensure that we retire all pending events
      staging: etnaviv: ensure GPU reset times out
      staging: etnaviv: add workarounds for GC320 on iMX6
      staging: etnaviv: increase iommu page table size to 512KiB
      staging: etnaviv: allow non-DT use
      staging: etnaviv: dump mmu allocations
      staging: etnaviv: use definitions for constants
      staging: etnaviv: fix fence wrapping for gem objects
      staging: etnaviv: move scatterlist map/unmap
      staging: etnaviv: remove presumption of BO addresses
      staging: etnaviv: clean up etnaviv mmu scatterlist code
      staging: etnaviv: "better" DMA API usage
      staging: etnaviv: iommu: add a poisoned bad page
      staging: etnaviv: validate user supplied command stream
      staging: etnaviv: allow get_param without auth
      staging: etnaviv: fix busy reporting
      staging: etnaviv: fix event allocation failure path
      staging: etnaviv: remove powerrail support
      staging: etnaviv: NULL out stale pointers at unbind time
      staging: etnaviv: move mutex around component_{un,}bind_all()
      staging: etnaviv: move PM calls into bind/unbind callbacks
      staging: etnaviv: separate out etnaviv gpu hardware initialisation
      staging: etnaviv: add support to shutdown and restore the front end
      staging: etnaviv: runtime PM: add initial support
      staging: etnaviv: add support for offset physical memory
 .../bindings/drm/etnaviv/etnaviv-drm.txt     |   44 +
 .../devicetree/bindings/vendor-prefixes.txt  |    1 +
 arch/arm/boot/dts/imx6dl.dtsi                |    5 +
 arch/arm/boot/dts/imx6q.dtsi                 |   14 +
 arch/arm/boot/dts/imx6qdl.dtsi               |   19 +
 drivers/staging/Kconfig                      |    2 +
 drivers/staging/Makefile                     |    1 +
 drivers/staging/etnaviv/Kconfig              |   20 +
 drivers/staging/etnaviv/Makefile             |   18 +
 drivers/staging/etnaviv/cmdstream.xml.h      |  218 ++++
 drivers/staging/etnaviv/common.xml.h         |  249 ++++
 drivers/staging/etnaviv/etnaviv_buffer.c     |  306 +++++
 drivers/staging/etnaviv/etnaviv_cmd_parser.c |  121 ++
 drivers/staging/etnaviv/etnaviv_drv.c        |  675 +++++++++++
 drivers/staging/etnaviv/etnaviv_drv.h        |  143 +++
 drivers/staging/etnaviv/etnaviv_gem.c        |  904 ++++++++++++++
 drivers/staging/etnaviv/etnaviv_gem.h        |  137 +++
 drivers/staging/etnaviv/etnaviv_gem_prime.c  |  116 ++
 drivers/staging/etnaviv/etnaviv_gem_submit.c |  427 +++++++
 drivers/staging/etnaviv/etnaviv_gpu.c        | 1255 ++++++++++++++++++++
 drivers/staging/etnaviv/etnaviv_gpu.h        |  159 +++
 drivers/staging/etnaviv/etnaviv_iommu.c      |  216 ++++
 drivers/staging/etnaviv/etnaviv_iommu.h      |   26 +
 drivers/staging/etnaviv/etnaviv_mmu.c        |  269 +++++
 drivers/staging/etnaviv/etnaviv_mmu.h        |   53 +
 drivers/staging/etnaviv/state.xml.h          |  351 ++++++
 drivers/staging/etnaviv/state_hi.xml.h       |  407 +++++++
 include/uapi/drm/etnaviv_drm.h               |  225 ++++
 28 files changed, 6381 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/drm/etnaviv/etnaviv-drm.txt
 create mode 100644 drivers/staging/etnaviv/Kconfig
 create mode 100644 drivers/staging/etnaviv/Makefile
 create mode 100644 drivers/staging/etnaviv/cmdstream.xml.h
 create mode 100644 drivers/staging/etnaviv/common.xml.h
 create mode 100644 drivers/staging/etnaviv/etnaviv_buffer.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_cmd_parser.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_drv.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_drv.h
 create mode 100644 drivers/staging/etnaviv/etnaviv_gem.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_gem.h
 create mode 100644 drivers/staging/etnaviv/etnaviv_gem_prime.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_gem_submit.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_gpu.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_gpu.h
 create mode 100644 drivers/staging/etnaviv/etnaviv_iommu.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_iommu.h
 create mode 100644 drivers/staging/etnaviv/etnaviv_mmu.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_mmu.h
 create mode 100644 drivers/staging/etnaviv/state.xml.h
 create mode 100644 drivers/staging/etnaviv/state_hi.xml.h
 create mode 100644 include/uapi/drm/etnaviv_drm.h
From: Philipp Zabel <philipp.zabel@gmail.com>
Trivial patch to add Vivante Corporation to the list of devicetree vendor prefixes.
Signed-off-by: Philipp Zabel <philipp.zabel@gmail.com>
---
 Documentation/devicetree/bindings/vendor-prefixes.txt | 1 +
 1 file changed, 1 insertion(+)
diff --git a/Documentation/devicetree/bindings/vendor-prefixes.txt b/Documentation/devicetree/bindings/vendor-prefixes.txt
index fae26d014aaf..ce6372dc8678 100644
--- a/Documentation/devicetree/bindings/vendor-prefixes.txt
+++ b/Documentation/devicetree/bindings/vendor-prefixes.txt
@@ -190,6 +190,7 @@ v3	V3 Semiconductor
 variscite	Variscite Ltd.
 via	VIA Technologies, Inc.
 virtio	Virtual I/O Device Specification, developed by the OASIS consortium
+vivante	Vivante Corporation
 voipac	Voipac Technologies s.r.o.
 winbond	Winbond Electronics corp.
 wlf	Wolfson Microelectronics
Etnaviv follows the same principle as imx-drm in having a virtual master device node that binds all the individual GPU cores together into one DRM device.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
---
 .../bindings/drm/etnaviv/etnaviv-drm.txt | 44 ++++++++++++++++++++++
 1 file changed, 44 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/drm/etnaviv/etnaviv-drm.txt
diff --git a/Documentation/devicetree/bindings/drm/etnaviv/etnaviv-drm.txt b/Documentation/devicetree/bindings/drm/etnaviv/etnaviv-drm.txt
new file mode 100644
index 000000000000..e27082bdba0d
--- /dev/null
+++ b/Documentation/devicetree/bindings/drm/etnaviv/etnaviv-drm.txt
@@ -0,0 +1,44 @@
+Etnaviv DRM master device
+================================
+
+The Etnaviv DRM master device is a virtual device needed to list all
+Vivante GPU cores that comprise the GPU subsystem.
+
+Required properties:
+- compatible: Should be "fsl,imx-gpu-subsystem"
+- cores: Should contain a list of phandles pointing to Vivante GPU devices
+
+example:
+
+gpu-subsystem {
+	compatible = "fsl,imx-gpu-subsystem";
+	cores = <&gpu_2d>, <&gpu_3d>;
+};
+
+
+Vivante GPU core devices
+====================
+
+Required properties:
+- compatible: Should be "vivante,gc"
+- reg: should be register base and length as documented in the
+  datasheet
+- interrupts: Should contain the cores interrupt line
+- clocks: should contain one clock for entry in clock-names
+  see Documentation/devicetree/bindings/clock/clock-bindings.txt
+- clock-names:
+   - "bus": AXI/register clock
+   - "core": GPU core clock
+   - "shader": Shader clock (only required if GPU has feature PIPE_3D)
+
+example:
+
+gpu_3d: gpu@00130000 {
+	compatible = "vivante,gc";
+	reg = <0x00130000 0x4000>;
+	interrupts = <0 9 IRQ_TYPE_LEVEL_HIGH>;
+	clocks = <&clks IMX6QDL_CLK_GPU3D_AXI>,
+	         <&clks IMX6QDL_CLK_GPU3D_CORE>,
+	         <&clks IMX6QDL_CLK_GPU3D_SHADER>;
+	clock-names = "bus", "core", "shader";
+};
From: Christian Gmeiner <christian.gmeiner@gmail.com>
This is a consolidation by Russell King of Christian's drm work.
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
 drivers/staging/Kconfig                      |    2 +
 drivers/staging/Makefile                     |    1 +
 drivers/staging/etnaviv/Kconfig              |   20 +
 drivers/staging/etnaviv/Makefile             |   17 +
 drivers/staging/etnaviv/cmdstream.xml.h      |  218 ++++
 drivers/staging/etnaviv/common.xml.h         |  253 +++++++
 drivers/staging/etnaviv/etnaviv_buffer.c     |  201 ++++++
 drivers/staging/etnaviv/etnaviv_drv.c        |  621 +++++++++++++++++
 drivers/staging/etnaviv/etnaviv_drv.h        |  143 ++++
 drivers/staging/etnaviv/etnaviv_gem.c        |  706 +++++++++++++++++++
 drivers/staging/etnaviv/etnaviv_gem.h        |  100 +++
 drivers/staging/etnaviv/etnaviv_gem_prime.c  |   56 ++
 drivers/staging/etnaviv/etnaviv_gem_submit.c |  407 +++++++++++
 drivers/staging/etnaviv/etnaviv_gpu.c        |  984 +++++++++++++++++++++++++++
 drivers/staging/etnaviv/etnaviv_gpu.h        |  152 +++++
 drivers/staging/etnaviv/etnaviv_iommu.c      |  185 +++++
 drivers/staging/etnaviv/etnaviv_iommu.h      |   25 +
 drivers/staging/etnaviv/etnaviv_iommu_v2.c   |   32 +
 drivers/staging/etnaviv/etnaviv_iommu_v2.h   |   25 +
 drivers/staging/etnaviv/etnaviv_mmu.c        |  111 +++
 drivers/staging/etnaviv/etnaviv_mmu.h        |   37 +
 drivers/staging/etnaviv/state.xml.h          |  348 ++++++++++
 drivers/staging/etnaviv/state_hi.xml.h       |  405 +++++++++++
 include/uapi/drm/etnaviv_drm.h               |  225 ++++
 24 files changed, 5274 insertions(+)
 create mode 100644 drivers/staging/etnaviv/Kconfig
 create mode 100644 drivers/staging/etnaviv/Makefile
 create mode 100644 drivers/staging/etnaviv/cmdstream.xml.h
 create mode 100644 drivers/staging/etnaviv/common.xml.h
 create mode 100644 drivers/staging/etnaviv/etnaviv_buffer.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_drv.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_drv.h
 create mode 100644 drivers/staging/etnaviv/etnaviv_gem.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_gem.h
 create mode 100644 drivers/staging/etnaviv/etnaviv_gem_prime.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_gem_submit.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_gpu.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_gpu.h
 create mode 100644 drivers/staging/etnaviv/etnaviv_iommu.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_iommu.h
 create mode 100644 drivers/staging/etnaviv/etnaviv_iommu_v2.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_iommu_v2.h
 create mode 100644 drivers/staging/etnaviv/etnaviv_mmu.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_mmu.h
 create mode 100644 drivers/staging/etnaviv/state.xml.h
 create mode 100644 drivers/staging/etnaviv/state_hi.xml.h
 create mode 100644 include/uapi/drm/etnaviv_drm.h
diff --git a/drivers/staging/Kconfig b/drivers/staging/Kconfig
index 45baa83be7ce..441b1afbfe4c 100644
--- a/drivers/staging/Kconfig
+++ b/drivers/staging/Kconfig
@@ -108,4 +108,6 @@ source "drivers/staging/fbtft/Kconfig"
 
 source "drivers/staging/i2o/Kconfig"
 
+source "drivers/staging/etnaviv/Kconfig"
+
 endif # STAGING
diff --git a/drivers/staging/Makefile b/drivers/staging/Makefile
index 29160790841f..f53cf8412c0c 100644
--- a/drivers/staging/Makefile
+++ b/drivers/staging/Makefile
@@ -46,3 +46,4 @@ obj-$(CONFIG_UNISYSSPAR) += unisys/
 obj-$(CONFIG_COMMON_CLK_XLNX_CLKWZRD) += clocking-wizard/
 obj-$(CONFIG_FB_TFT) += fbtft/
 obj-$(CONFIG_I2O) += i2o/
+obj-$(CONFIG_DRM_ETNAVIV) += etnaviv/
diff --git a/drivers/staging/etnaviv/Kconfig b/drivers/staging/etnaviv/Kconfig
new file mode 100644
index 000000000000..6f034eda914c
--- /dev/null
+++ b/drivers/staging/etnaviv/Kconfig
@@ -0,0 +1,20 @@
+
+config DRM_ETNAVIV
+	tristate "etnaviv DRM"
+	depends on DRM
+	select SHMEM
+	select TMPFS
+	select IOMMU_API
+	select IOMMU_SUPPORT
+	default y
+	help
+	  DRM driver for Vivante GPUs.
+
+config DRM_ETNAVIV_REGISTER_LOGGING
+	bool "etnaviv DRM register logging"
+	depends on DRM_ETNAVIV
+	default n
+	help
+	  Compile in support for logging register reads/writes in a format
+	  that can be parsed by envytools demsm tool.  If enabled, register
+	  logging can be switched on via etnaviv.reglog=y module param.
diff --git a/drivers/staging/etnaviv/Makefile b/drivers/staging/etnaviv/Makefile new file mode 100644 index 000000000000..ef0cffabdcce --- /dev/null +++ b/drivers/staging/etnaviv/Makefile @@ -0,0 +1,17 @@ +ccflags-y := -Iinclude/drm -Idrivers/staging/vivante +ifeq (, $(findstring -W,$(EXTRA_CFLAGS))) + ccflags-y += -Werror +endif + +etnaviv-y := \ + etnaviv_drv.o \ + etnaviv_gem.o \ + etnaviv_gem_prime.o \ + etnaviv_gem_submit.o \ + etnaviv_gpu.o \ + etnaviv_iommu.o \ + etnaviv_iommu_v2.o \ + etnaviv_mmu.o \ + etnaviv_buffer.o + +obj-$(CONFIG_DRM_ETNAVIV) += etnaviv.o diff --git a/drivers/staging/etnaviv/cmdstream.xml.h b/drivers/staging/etnaviv/cmdstream.xml.h new file mode 100644 index 000000000000..844f82977e3e --- /dev/null +++ b/drivers/staging/etnaviv/cmdstream.xml.h @@ -0,0 +1,218 @@ +#ifndef CMDSTREAM_XML +#define CMDSTREAM_XML + +/* Autogenerated file, DO NOT EDIT manually! + +This file was generated by the rules-ng-ng headergen tool in this git repository: +http://0x04.net/cgit/index.cgi/rules-ng-ng +git clone git://0x04.net/rules-ng-ng + +The rules-ng-ng source files this header was generated from are: +- /home/orion/projects/etna_viv/rnndb/cmdstream.xml ( 12589 bytes, from 2013-09-01 10:53:22) +- /home/orion/projects/etna_viv/rnndb/common.xml ( 18379 bytes, from 2014-01-27 15:58:05) + +Copyright (C) 2013 +*/ + + +#define FE_OPCODE_LOAD_STATE 0x00000001 +#define FE_OPCODE_END 0x00000002 +#define FE_OPCODE_NOP 0x00000003 +#define FE_OPCODE_DRAW_2D 0x00000004 +#define FE_OPCODE_DRAW_PRIMITIVES 0x00000005 +#define FE_OPCODE_DRAW_INDEXED_PRIMITIVES 0x00000006 +#define FE_OPCODE_WAIT 0x00000007 +#define FE_OPCODE_LINK 0x00000008 +#define FE_OPCODE_STALL 0x00000009 +#define FE_OPCODE_CALL 0x0000000a +#define FE_OPCODE_RETURN 0x0000000b +#define FE_OPCODE_CHIP_SELECT 0x0000000d +#define PRIMITIVE_TYPE_POINTS 0x00000001 +#define PRIMITIVE_TYPE_LINES 0x00000002 +#define PRIMITIVE_TYPE_LINE_STRIP 0x00000003 +#define PRIMITIVE_TYPE_TRIANGLES 0x00000004 +#define 
PRIMITIVE_TYPE_TRIANGLE_STRIP 0x00000005 +#define PRIMITIVE_TYPE_TRIANGLE_FAN 0x00000006 +#define PRIMITIVE_TYPE_LINE_LOOP 0x00000007 +#define PRIMITIVE_TYPE_QUADS 0x00000008 +#define VIV_FE_LOAD_STATE 0x00000000 + +#define VIV_FE_LOAD_STATE_HEADER 0x00000000 +#define VIV_FE_LOAD_STATE_HEADER_OP__MASK 0xf8000000 +#define VIV_FE_LOAD_STATE_HEADER_OP__SHIFT 27 +#define VIV_FE_LOAD_STATE_HEADER_OP_LOAD_STATE 0x08000000 +#define VIV_FE_LOAD_STATE_HEADER_FIXP 0x04000000 +#define VIV_FE_LOAD_STATE_HEADER_COUNT__MASK 0x03ff0000 +#define VIV_FE_LOAD_STATE_HEADER_COUNT__SHIFT 16 +#define VIV_FE_LOAD_STATE_HEADER_COUNT(x) (((x) << VIV_FE_LOAD_STATE_HEADER_COUNT__SHIFT) & VIV_FE_LOAD_STATE_HEADER_COUNT__MASK) +#define VIV_FE_LOAD_STATE_HEADER_OFFSET__MASK 0x0000ffff +#define VIV_FE_LOAD_STATE_HEADER_OFFSET__SHIFT 0 +#define VIV_FE_LOAD_STATE_HEADER_OFFSET(x) (((x) << VIV_FE_LOAD_STATE_HEADER_OFFSET__SHIFT) & VIV_FE_LOAD_STATE_HEADER_OFFSET__MASK) +#define VIV_FE_LOAD_STATE_HEADER_OFFSET__SHR 2 + +#define VIV_FE_END 0x00000000 + +#define VIV_FE_END_HEADER 0x00000000 +#define VIV_FE_END_HEADER_EVENT_ID__MASK 0x0000001f +#define VIV_FE_END_HEADER_EVENT_ID__SHIFT 0 +#define VIV_FE_END_HEADER_EVENT_ID(x) (((x) << VIV_FE_END_HEADER_EVENT_ID__SHIFT) & VIV_FE_END_HEADER_EVENT_ID__MASK) +#define VIV_FE_END_HEADER_EVENT_ENABLE 0x00000100 +#define VIV_FE_END_HEADER_OP__MASK 0xf8000000 +#define VIV_FE_END_HEADER_OP__SHIFT 27 +#define VIV_FE_END_HEADER_OP_END 0x10000000 + +#define VIV_FE_NOP 0x00000000 + +#define VIV_FE_NOP_HEADER 0x00000000 +#define VIV_FE_NOP_HEADER_OP__MASK 0xf8000000 +#define VIV_FE_NOP_HEADER_OP__SHIFT 27 +#define VIV_FE_NOP_HEADER_OP_NOP 0x18000000 + +#define VIV_FE_DRAW_2D 0x00000000 + +#define VIV_FE_DRAW_2D_HEADER 0x00000000 +#define VIV_FE_DRAW_2D_HEADER_COUNT__MASK 0x0000ff00 +#define VIV_FE_DRAW_2D_HEADER_COUNT__SHIFT 8 +#define VIV_FE_DRAW_2D_HEADER_COUNT(x) (((x) << VIV_FE_DRAW_2D_HEADER_COUNT__SHIFT) & VIV_FE_DRAW_2D_HEADER_COUNT__MASK) +#define 
VIV_FE_DRAW_2D_HEADER_DATA_COUNT__MASK 0x07ff0000 +#define VIV_FE_DRAW_2D_HEADER_DATA_COUNT__SHIFT 16 +#define VIV_FE_DRAW_2D_HEADER_DATA_COUNT(x) (((x) << VIV_FE_DRAW_2D_HEADER_DATA_COUNT__SHIFT) & VIV_FE_DRAW_2D_HEADER_DATA_COUNT__MASK) +#define VIV_FE_DRAW_2D_HEADER_OP__MASK 0xf8000000 +#define VIV_FE_DRAW_2D_HEADER_OP__SHIFT 27 +#define VIV_FE_DRAW_2D_HEADER_OP_DRAW_2D 0x20000000 + +#define VIV_FE_DRAW_2D_TOP_LEFT 0x00000008 +#define VIV_FE_DRAW_2D_TOP_LEFT_X__MASK 0x0000ffff +#define VIV_FE_DRAW_2D_TOP_LEFT_X__SHIFT 0 +#define VIV_FE_DRAW_2D_TOP_LEFT_X(x) (((x) << VIV_FE_DRAW_2D_TOP_LEFT_X__SHIFT) & VIV_FE_DRAW_2D_TOP_LEFT_X__MASK) +#define VIV_FE_DRAW_2D_TOP_LEFT_Y__MASK 0xffff0000 +#define VIV_FE_DRAW_2D_TOP_LEFT_Y__SHIFT 16 +#define VIV_FE_DRAW_2D_TOP_LEFT_Y(x) (((x) << VIV_FE_DRAW_2D_TOP_LEFT_Y__SHIFT) & VIV_FE_DRAW_2D_TOP_LEFT_Y__MASK) + +#define VIV_FE_DRAW_2D_BOTTOM_RIGHT 0x0000000c +#define VIV_FE_DRAW_2D_BOTTOM_RIGHT_X__MASK 0x0000ffff +#define VIV_FE_DRAW_2D_BOTTOM_RIGHT_X__SHIFT 0 +#define VIV_FE_DRAW_2D_BOTTOM_RIGHT_X(x) (((x) << VIV_FE_DRAW_2D_BOTTOM_RIGHT_X__SHIFT) & VIV_FE_DRAW_2D_BOTTOM_RIGHT_X__MASK) +#define VIV_FE_DRAW_2D_BOTTOM_RIGHT_Y__MASK 0xffff0000 +#define VIV_FE_DRAW_2D_BOTTOM_RIGHT_Y__SHIFT 16 +#define VIV_FE_DRAW_2D_BOTTOM_RIGHT_Y(x) (((x) << VIV_FE_DRAW_2D_BOTTOM_RIGHT_Y__SHIFT) & VIV_FE_DRAW_2D_BOTTOM_RIGHT_Y__MASK) + +#define VIV_FE_DRAW_PRIMITIVES 0x00000000 + +#define VIV_FE_DRAW_PRIMITIVES_HEADER 0x00000000 +#define VIV_FE_DRAW_PRIMITIVES_HEADER_OP__MASK 0xf8000000 +#define VIV_FE_DRAW_PRIMITIVES_HEADER_OP__SHIFT 27 +#define VIV_FE_DRAW_PRIMITIVES_HEADER_OP_DRAW_PRIMITIVES 0x28000000 + +#define VIV_FE_DRAW_PRIMITIVES_COMMAND 0x00000004 +#define VIV_FE_DRAW_PRIMITIVES_COMMAND_TYPE__MASK 0x000000ff +#define VIV_FE_DRAW_PRIMITIVES_COMMAND_TYPE__SHIFT 0 +#define VIV_FE_DRAW_PRIMITIVES_COMMAND_TYPE(x) (((x) << VIV_FE_DRAW_PRIMITIVES_COMMAND_TYPE__SHIFT) & VIV_FE_DRAW_PRIMITIVES_COMMAND_TYPE__MASK) + +#define 
VIV_FE_DRAW_PRIMITIVES_START 0x00000008 + +#define VIV_FE_DRAW_PRIMITIVES_COUNT 0x0000000c + +#define VIV_FE_DRAW_INDEXED_PRIMITIVES 0x00000000 + +#define VIV_FE_DRAW_INDEXED_PRIMITIVES_HEADER 0x00000000 +#define VIV_FE_DRAW_INDEXED_PRIMITIVES_HEADER_OP__MASK 0xf8000000 +#define VIV_FE_DRAW_INDEXED_PRIMITIVES_HEADER_OP__SHIFT 27 +#define VIV_FE_DRAW_INDEXED_PRIMITIVES_HEADER_OP_DRAW_INDEXED_PRIMITIVES 0x30000000 + +#define VIV_FE_DRAW_INDEXED_PRIMITIVES_COMMAND 0x00000004 +#define VIV_FE_DRAW_INDEXED_PRIMITIVES_COMMAND_TYPE__MASK 0x000000ff +#define VIV_FE_DRAW_INDEXED_PRIMITIVES_COMMAND_TYPE__SHIFT 0 +#define VIV_FE_DRAW_INDEXED_PRIMITIVES_COMMAND_TYPE(x) (((x) << VIV_FE_DRAW_INDEXED_PRIMITIVES_COMMAND_TYPE__SHIFT) & VIV_FE_DRAW_INDEXED_PRIMITIVES_COMMAND_TYPE__MASK) + +#define VIV_FE_DRAW_INDEXED_PRIMITIVES_START 0x00000008 + +#define VIV_FE_DRAW_INDEXED_PRIMITIVES_COUNT 0x0000000c + +#define VIV_FE_DRAW_INDEXED_PRIMITIVES_OFFSET 0x00000010 + +#define VIV_FE_WAIT 0x00000000 + +#define VIV_FE_WAIT_HEADER 0x00000000 +#define VIV_FE_WAIT_HEADER_DELAY__MASK 0x0000ffff +#define VIV_FE_WAIT_HEADER_DELAY__SHIFT 0 +#define VIV_FE_WAIT_HEADER_DELAY(x) (((x) << VIV_FE_WAIT_HEADER_DELAY__SHIFT) & VIV_FE_WAIT_HEADER_DELAY__MASK) +#define VIV_FE_WAIT_HEADER_OP__MASK 0xf8000000 +#define VIV_FE_WAIT_HEADER_OP__SHIFT 27 +#define VIV_FE_WAIT_HEADER_OP_WAIT 0x38000000 + +#define VIV_FE_LINK 0x00000000 + +#define VIV_FE_LINK_HEADER 0x00000000 +#define VIV_FE_LINK_HEADER_PREFETCH__MASK 0x0000ffff +#define VIV_FE_LINK_HEADER_PREFETCH__SHIFT 0 +#define VIV_FE_LINK_HEADER_PREFETCH(x) (((x) << VIV_FE_LINK_HEADER_PREFETCH__SHIFT) & VIV_FE_LINK_HEADER_PREFETCH__MASK) +#define VIV_FE_LINK_HEADER_OP__MASK 0xf8000000 +#define VIV_FE_LINK_HEADER_OP__SHIFT 27 +#define VIV_FE_LINK_HEADER_OP_LINK 0x40000000 + +#define VIV_FE_LINK_ADDRESS 0x00000004 + +#define VIV_FE_STALL 0x00000000 + +#define VIV_FE_STALL_HEADER 0x00000000 +#define VIV_FE_STALL_HEADER_OP__MASK 0xf8000000 +#define 
VIV_FE_STALL_HEADER_OP__SHIFT 27 +#define VIV_FE_STALL_HEADER_OP_STALL 0x48000000 + +#define VIV_FE_STALL_TOKEN 0x00000004 +#define VIV_FE_STALL_TOKEN_FROM__MASK 0x0000001f +#define VIV_FE_STALL_TOKEN_FROM__SHIFT 0 +#define VIV_FE_STALL_TOKEN_FROM(x) (((x) << VIV_FE_STALL_TOKEN_FROM__SHIFT) & VIV_FE_STALL_TOKEN_FROM__MASK) +#define VIV_FE_STALL_TOKEN_TO__MASK 0x00001f00 +#define VIV_FE_STALL_TOKEN_TO__SHIFT 8 +#define VIV_FE_STALL_TOKEN_TO(x) (((x) << VIV_FE_STALL_TOKEN_TO__SHIFT) & VIV_FE_STALL_TOKEN_TO__MASK) + +#define VIV_FE_CALL 0x00000000 + +#define VIV_FE_CALL_HEADER 0x00000000 +#define VIV_FE_CALL_HEADER_PREFETCH__MASK 0x0000ffff +#define VIV_FE_CALL_HEADER_PREFETCH__SHIFT 0 +#define VIV_FE_CALL_HEADER_PREFETCH(x) (((x) << VIV_FE_CALL_HEADER_PREFETCH__SHIFT) & VIV_FE_CALL_HEADER_PREFETCH__MASK) +#define VIV_FE_CALL_HEADER_OP__MASK 0xf8000000 +#define VIV_FE_CALL_HEADER_OP__SHIFT 27 +#define VIV_FE_CALL_HEADER_OP_CALL 0x50000000 + +#define VIV_FE_CALL_ADDRESS 0x00000004 + +#define VIV_FE_CALL_RETURN_PREFETCH 0x00000008 + +#define VIV_FE_CALL_RETURN_ADDRESS 0x0000000c + +#define VIV_FE_RETURN 0x00000000 + +#define VIV_FE_RETURN_HEADER 0x00000000 +#define VIV_FE_RETURN_HEADER_OP__MASK 0xf8000000 +#define VIV_FE_RETURN_HEADER_OP__SHIFT 27 +#define VIV_FE_RETURN_HEADER_OP_RETURN 0x58000000 + +#define VIV_FE_CHIP_SELECT 0x00000000 + +#define VIV_FE_CHIP_SELECT_HEADER 0x00000000 +#define VIV_FE_CHIP_SELECT_HEADER_OP__MASK 0xf8000000 +#define VIV_FE_CHIP_SELECT_HEADER_OP__SHIFT 27 +#define VIV_FE_CHIP_SELECT_HEADER_OP_CHIP_SELECT 0x68000000 +#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP15 0x00008000 +#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP14 0x00004000 +#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP13 0x00002000 +#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP12 0x00001000 +#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP11 0x00000800 +#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP10 0x00000400 +#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP9 0x00000200 +#define 
VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP8 0x00000100 +#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP7 0x00000080 +#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP6 0x00000040 +#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP5 0x00000020 +#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP4 0x00000010 +#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP3 0x00000008 +#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP2 0x00000004 +#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP1 0x00000002 +#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP0 0x00000001 + + +#endif /* CMDSTREAM_XML */ diff --git a/drivers/staging/etnaviv/common.xml.h b/drivers/staging/etnaviv/common.xml.h new file mode 100644 index 000000000000..36fa0e4cf56b --- /dev/null +++ b/drivers/staging/etnaviv/common.xml.h @@ -0,0 +1,253 @@ +#ifndef COMMON_XML +#define COMMON_XML + +/* Autogenerated file, DO NOT EDIT manually! + +This file was generated by the rules-ng-ng headergen tool in this git repository: +http://0x04.net/cgit/index.cgi/rules-ng-ng +git clone git://0x04.net/rules-ng-ng + +The rules-ng-ng source files this header was generated from are: +- /home/orion/projects/etna_viv/rnndb/state.xml ( 18526 bytes, from 2013-09-11 16:52:32) +- /home/orion/projects/etna_viv/rnndb/common.xml ( 18379 bytes, from 2014-01-27 15:58:05) +- /home/orion/projects/etna_viv/rnndb/state_hi.xml ( 22236 bytes, from 2014-01-27 15:56:46) +- /home/orion/projects/etna_viv/rnndb/state_2d.xml ( 51191 bytes, from 2013-10-04 06:36:55) +- /home/orion/projects/etna_viv/rnndb/state_3d.xml ( 54570 bytes, from 2013-10-12 15:25:03) +- /home/orion/projects/etna_viv/rnndb/state_vg.xml ( 5942 bytes, from 2013-09-01 10:53:22) + +Copyright (C) 2014 +*/ + + +#define PIPE_ID_PIPE_3D 0x00000000 +#define PIPE_ID_PIPE_2D 0x00000001 +#define SYNC_RECIPIENT_FE 0x00000001 +#define SYNC_RECIPIENT_RA 0x00000005 +#define SYNC_RECIPIENT_PE 0x00000007 +#define SYNC_RECIPIENT_DE 0x0000000b +#define SYNC_RECIPIENT_VG 0x0000000f +#define SYNC_RECIPIENT_TESSELATOR 0x00000010 +#define 
SYNC_RECIPIENT_VG2 0x00000011 +#define SYNC_RECIPIENT_TESSELATOR2 0x00000012 +#define SYNC_RECIPIENT_VG3 0x00000013 +#define SYNC_RECIPIENT_TESSELATOR3 0x00000014 +#define ENDIAN_MODE_NO_SWAP 0x00000000 +#define ENDIAN_MODE_SWAP_16 0x00000001 +#define ENDIAN_MODE_SWAP_32 0x00000002 +#define chipModel_GC300 0x00000300 +#define chipModel_GC320 0x00000320 +#define chipModel_GC350 0x00000350 +#define chipModel_GC355 0x00000355 +#define chipModel_GC400 0x00000400 +#define chipModel_GC410 0x00000410 +#define chipModel_GC420 0x00000420 +#define chipModel_GC450 0x00000450 +#define chipModel_GC500 0x00000500 +#define chipModel_GC530 0x00000530 +#define chipModel_GC600 0x00000600 +#define chipModel_GC700 0x00000700 +#define chipModel_GC800 0x00000800 +#define chipModel_GC860 0x00000860 +#define chipModel_GC880 0x00000880 +#define chipModel_GC1000 0x00001000 +#define chipModel_GC2000 0x00002000 +#define chipModel_GC2100 0x00002100 +#define chipModel_GC4000 0x00004000 +#define RGBA_BITS_R 0x00000001 +#define RGBA_BITS_G 0x00000002 +#define RGBA_BITS_B 0x00000004 +#define RGBA_BITS_A 0x00000008 +#define chipFeatures_FAST_CLEAR 0x00000001 +#define chipFeatures_SPECIAL_ANTI_ALIASING 0x00000002 +#define chipFeatures_PIPE_3D 0x00000004 +#define chipFeatures_DXT_TEXTURE_COMPRESSION 0x00000008 +#define chipFeatures_DEBUG_MODE 0x00000010 +#define chipFeatures_Z_COMPRESSION 0x00000020 +#define chipFeatures_YUV420_SCALER 0x00000040 +#define chipFeatures_MSAA 0x00000080 +#define chipFeatures_DC 0x00000100 +#define chipFeatures_PIPE_2D 0x00000200 +#define chipFeatures_ETC1_TEXTURE_COMPRESSION 0x00000400 +#define chipFeatures_FAST_SCALER 0x00000800 +#define chipFeatures_HIGH_DYNAMIC_RANGE 0x00001000 +#define chipFeatures_YUV420_TILER 0x00002000 +#define chipFeatures_MODULE_CG 0x00004000 +#define chipFeatures_MIN_AREA 0x00008000 +#define chipFeatures_NO_EARLY_Z 0x00010000 +#define chipFeatures_NO_422_TEXTURE 0x00020000 +#define chipFeatures_BUFFER_INTERLEAVING 0x00040000 +#define 
chipFeatures_BYTE_WRITE_2D 0x00080000 +#define chipFeatures_NO_SCALER 0x00100000 +#define chipFeatures_YUY2_AVERAGING 0x00200000 +#define chipFeatures_HALF_PE_CACHE 0x00400000 +#define chipFeatures_HALF_TX_CACHE 0x00800000 +#define chipFeatures_YUY2_RENDER_TARGET 0x01000000 +#define chipFeatures_MEM32 0x02000000 +#define chipFeatures_PIPE_VG 0x04000000 +#define chipFeatures_VGTS 0x08000000 +#define chipFeatures_FE20 0x10000000 +#define chipFeatures_BYTE_WRITE_3D 0x20000000 +#define chipFeatures_RS_YUV_TARGET 0x40000000 +#define chipFeatures_32_BIT_INDICES 0x80000000 +#define chipMinorFeatures0_FLIP_Y 0x00000001 +#define chipMinorFeatures0_DUAL_RETURN_BUS 0x00000002 +#define chipMinorFeatures0_ENDIANNESS_CONFIG 0x00000004 +#define chipMinorFeatures0_TEXTURE_8K 0x00000008 +#define chipMinorFeatures0_CORRECT_TEXTURE_CONVERTER 0x00000010 +#define chipMinorFeatures0_SPECIAL_MSAA_LOD 0x00000020 +#define chipMinorFeatures0_FAST_CLEAR_FLUSH 0x00000040 +#define chipMinorFeatures0_2DPE20 0x00000080 +#define chipMinorFeatures0_CORRECT_AUTO_DISABLE 0x00000100 +#define chipMinorFeatures0_RENDERTARGET_8K 0x00000200 +#define chipMinorFeatures0_2BITPERTILE 0x00000400 +#define chipMinorFeatures0_SEPARATE_TILE_STATUS_WHEN_INTERLEAVED 0x00000800 +#define chipMinorFeatures0_SUPER_TILED 0x00001000 +#define chipMinorFeatures0_VG_20 0x00002000 +#define chipMinorFeatures0_TS_EXTENDED_COMMANDS 0x00004000 +#define chipMinorFeatures0_COMPRESSION_FIFO_FIXED 0x00008000 +#define chipMinorFeatures0_HAS_SIGN_FLOOR_CEIL 0x00010000 +#define chipMinorFeatures0_VG_FILTER 0x00020000 +#define chipMinorFeatures0_VG_21 0x00040000 +#define chipMinorFeatures0_SHADER_HAS_W 0x00080000 +#define chipMinorFeatures0_HAS_SQRT_TRIG 0x00100000 +#define chipMinorFeatures0_MORE_MINOR_FEATURES 0x00200000 +#define chipMinorFeatures0_MC20 0x00400000 +#define chipMinorFeatures0_MSAA_SIDEBAND 0x00800000 +#define chipMinorFeatures0_BUG_FIXES0 0x01000000 +#define chipMinorFeatures0_VAA 0x02000000 +#define 
chipMinorFeatures0_BYPASS_IN_MSAA 0x04000000 +#define chipMinorFeatures0_HZ 0x08000000 +#define chipMinorFeatures0_NEW_TEXTURE 0x10000000 +#define chipMinorFeatures0_2D_A8_TARGET 0x20000000 +#define chipMinorFeatures0_CORRECT_STENCIL 0x40000000 +#define chipMinorFeatures0_ENHANCE_VR 0x80000000 +#define chipMinorFeatures1_RSUV_SWIZZLE 0x00000001 +#define chipMinorFeatures1_V2_COMPRESSION 0x00000002 +#define chipMinorFeatures1_VG_DOUBLE_BUFFER 0x00000004 +#define chipMinorFeatures1_EXTRA_EVENT_STATES 0x00000008 +#define chipMinorFeatures1_NO_STRIPING_NEEDED 0x00000010 +#define chipMinorFeatures1_TEXTURE_STRIDE 0x00000020 +#define chipMinorFeatures1_BUG_FIXES3 0x00000040 +#define chipMinorFeatures1_AUTO_DISABLE 0x00000080 +#define chipMinorFeatures1_AUTO_RESTART_TS 0x00000100 +#define chipMinorFeatures1_DISABLE_PE_GATING 0x00000200 +#define chipMinorFeatures1_L2_WINDOWING 0x00000400 +#define chipMinorFeatures1_HALF_FLOAT 0x00000800 +#define chipMinorFeatures1_PIXEL_DITHER 0x00001000 +#define chipMinorFeatures1_TWO_STENCIL_REFERENCE 0x00002000 +#define chipMinorFeatures1_EXTENDED_PIXEL_FORMAT 0x00004000 +#define chipMinorFeatures1_CORRECT_MIN_MAX_DEPTH 0x00008000 +#define chipMinorFeatures1_2D_DITHER 0x00010000 +#define chipMinorFeatures1_BUG_FIXES5 0x00020000 +#define chipMinorFeatures1_NEW_2D 0x00040000 +#define chipMinorFeatures1_NEW_FP 0x00080000 +#define chipMinorFeatures1_TEXTURE_HALIGN 0x00100000 +#define chipMinorFeatures1_NON_POWER_OF_TWO 0x00200000 +#define chipMinorFeatures1_LINEAR_TEXTURE_SUPPORT 0x00400000 +#define chipMinorFeatures1_HALTI0 0x00800000 +#define chipMinorFeatures1_CORRECT_OVERFLOW_VG 0x01000000 +#define chipMinorFeatures1_NEGATIVE_LOG_FIX 0x02000000 +#define chipMinorFeatures1_RESOLVE_OFFSET 0x04000000 +#define chipMinorFeatures1_OK_TO_GATE_AXI_CLOCK 0x08000000 +#define chipMinorFeatures1_MMU_VERSION 0x10000000 +#define chipMinorFeatures1_WIDE_LINE 0x20000000 +#define chipMinorFeatures1_BUG_FIXES6 0x40000000 +#define 
chipMinorFeatures1_FC_FLUSH_STALL 0x80000000 +#define chipMinorFeatures2_LINE_LOOP 0x00000001 +#define chipMinorFeatures2_LOGIC_OP 0x00000002 +#define chipMinorFeatures2_UNK2 0x00000004 +#define chipMinorFeatures2_SUPERTILED_TEXTURE 0x00000008 +#define chipMinorFeatures2_UNK4 0x00000010 +#define chipMinorFeatures2_RECT_PRIMITIVE 0x00000020 +#define chipMinorFeatures2_COMPOSITION 0x00000040 +#define chipMinorFeatures2_CORRECT_AUTO_DISABLE_COUNT 0x00000080 +#define chipMinorFeatures2_UNK8 0x00000100 +#define chipMinorFeatures2_UNK9 0x00000200 +#define chipMinorFeatures2_UNK10 0x00000400 +#define chipMinorFeatures2_SAMPLERBASE_16 0x00000800 +#define chipMinorFeatures2_UNK12 0x00001000 +#define chipMinorFeatures2_UNK13 0x00002000 +#define chipMinorFeatures2_UNK14 0x00004000 +#define chipMinorFeatures2_EXTRA_TEXTURE_STATE 0x00008000 +#define chipMinorFeatures2_FULL_DIRECTFB 0x00010000 +#define chipMinorFeatures2_2D_TILING 0x00020000 +#define chipMinorFeatures2_THREAD_WALKER_IN_PS 0x00040000 +#define chipMinorFeatures2_TILE_FILLER 0x00080000 +#define chipMinorFeatures2_UNK20 0x00100000 +#define chipMinorFeatures2_2D_MULTI_SOURCE_BLIT 0x00200000 +#define chipMinorFeatures2_UNK22 0x00400000 +#define chipMinorFeatures2_UNK23 0x00800000 +#define chipMinorFeatures2_UNK24 0x01000000 +#define chipMinorFeatures2_MIXED_STREAMS 0x02000000 +#define chipMinorFeatures2_2D_420_L2CACHE 0x04000000 +#define chipMinorFeatures2_UNK27 0x08000000 +#define chipMinorFeatures2_2D_NO_INDEX8_BRUSH 0x10000000 +#define chipMinorFeatures2_TEXTURE_TILED_READ 0x20000000 +#define chipMinorFeatures2_UNK30 0x40000000 +#define chipMinorFeatures2_UNK31 0x80000000 +#define chipMinorFeatures3_ROTATION_STALL_FIX 0x00000001 +#define chipMinorFeatures3_UNK1 0x00000002 +#define chipMinorFeatures3_2D_MULTI_SOURCE_BLT_EX 0x00000004 +#define chipMinorFeatures3_UNK3 0x00000008 +#define chipMinorFeatures3_UNK4 0x00000010 +#define chipMinorFeatures3_UNK5 0x00000020 +#define chipMinorFeatures3_UNK6 0x00000040 +#define 
chipMinorFeatures3_UNK7 0x00000080 +#define chipMinorFeatures3_UNK8 0x00000100 +#define chipMinorFeatures3_UNK9 0x00000200 +#define chipMinorFeatures3_BUG_FIXES10 0x00000400 +#define chipMinorFeatures3_UNK11 0x00000800 +#define chipMinorFeatures3_BUG_FIXES11 0x00001000 +#define chipMinorFeatures3_UNK13 0x00002000 +#define chipMinorFeatures3_UNK14 0x00004000 +#define chipMinorFeatures3_UNK15 0x00008000 +#define chipMinorFeatures3_UNK16 0x00010000 +#define chipMinorFeatures3_UNK17 0x00020000 +#define chipMinorFeatures3_UNK18 0x00040000 +#define chipMinorFeatures3_UNK19 0x00080000 +#define chipMinorFeatures3_UNK20 0x00100000 +#define chipMinorFeatures3_UNK21 0x00200000 +#define chipMinorFeatures3_UNK22 0x00400000 +#define chipMinorFeatures3_UNK23 0x00800000 +#define chipMinorFeatures3_UNK24 0x01000000 +#define chipMinorFeatures3_UNK25 0x02000000 +#define chipMinorFeatures3_UNK26 0x04000000 +#define chipMinorFeatures3_UNK27 0x08000000 +#define chipMinorFeatures3_UNK28 0x10000000 +#define chipMinorFeatures3_UNK29 0x20000000 +#define chipMinorFeatures3_UNK30 0x40000000 +#define chipMinorFeatures3_UNK31 0x80000000 +#define chipMinorFeatures4_UNK0 0x00000001 +#define chipMinorFeatures4_UNK1 0x00000002 +#define chipMinorFeatures4_UNK2 0x00000004 +#define chipMinorFeatures4_UNK3 0x00000008 +#define chipMinorFeatures4_UNK4 0x00000010 +#define chipMinorFeatures4_UNK5 0x00000020 +#define chipMinorFeatures4_UNK6 0x00000040 +#define chipMinorFeatures4_UNK7 0x00000080 +#define chipMinorFeatures4_UNK8 0x00000100 +#define chipMinorFeatures4_UNK9 0x00000200 +#define chipMinorFeatures4_UNK10 0x00000400 +#define chipMinorFeatures4_UNK11 0x00000800 +#define chipMinorFeatures4_UNK12 0x00001000 +#define chipMinorFeatures4_UNK13 0x00002000 +#define chipMinorFeatures4_UNK14 0x00004000 +#define chipMinorFeatures4_UNK15 0x00008000 +#define chipMinorFeatures4_UNK16 0x00010000 +#define chipMinorFeatures4_UNK17 0x00020000 +#define chipMinorFeatures4_UNK18 0x00040000 +#define 
chipMinorFeatures4_UNK19 0x00080000 +#define chipMinorFeatures4_UNK20 0x00100000 +#define chipMinorFeatures4_UNK21 0x00200000 +#define chipMinorFeatures4_UNK22 0x00400000 +#define chipMinorFeatures4_UNK23 0x00800000 +#define chipMinorFeatures4_UNK24 0x01000000 +#define chipMinorFeatures4_UNK25 0x02000000 +#define chipMinorFeatures4_UNK26 0x04000000 +#define chipMinorFeatures4_UNK27 0x08000000 +#define chipMinorFeatures4_UNK28 0x10000000 +#define chipMinorFeatures4_UNK29 0x20000000 +#define chipMinorFeatures4_UNK30 0x40000000 +#define chipMinorFeatures4_UNK31 0x80000000 + +#endif /* COMMON_XML */ diff --git a/drivers/staging/etnaviv/etnaviv_buffer.c b/drivers/staging/etnaviv/etnaviv_buffer.c new file mode 100644 index 000000000000..32764e15c5f7 --- /dev/null +++ b/drivers/staging/etnaviv/etnaviv_buffer.c @@ -0,0 +1,201 @@ +/* + * Copyright (C) 2014 Etnaviv Project + * Author: Christian Gmeiner <christian.gmeiner@gmail.com> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see http://www.gnu.org/licenses/.
+ */ + +#include "etnaviv_gpu.h" +#include "etnaviv_gem.h" + +#include "common.xml.h" +#include "state.xml.h" +#include "cmdstream.xml.h" + +/* + * Command Buffer helper: + */ + + +static inline void OUT(struct etnaviv_gem_object *buffer, uint32_t data) +{ + u32 *vaddr = (u32 *)buffer->vaddr; + BUG_ON(buffer->offset >= buffer->base.size); + + vaddr[buffer->offset++] = data; +} + +static inline void CMD_LOAD_STATE(struct etnaviv_gem_object *buffer, u32 reg, u32 value) +{ + buffer->offset = ALIGN(buffer->offset, 2); + + /* write a register via cmd stream */ + OUT(buffer, VIV_FE_LOAD_STATE_HEADER_OP_LOAD_STATE | VIV_FE_LOAD_STATE_HEADER_COUNT(1) | + VIV_FE_LOAD_STATE_HEADER_OFFSET(reg >> VIV_FE_LOAD_STATE_HEADER_OFFSET__SHR)); + OUT(buffer, value); +} + +static inline void CMD_LOAD_STATES(struct etnaviv_gem_object *buffer, u32 reg, u16 count, u32 *values) +{ + u16 i; + buffer->offset = ALIGN(buffer->offset, 2); + + OUT(buffer, VIV_FE_LOAD_STATE_HEADER_OP_LOAD_STATE | VIV_FE_LOAD_STATE_HEADER_COUNT(count) | + VIV_FE_LOAD_STATE_HEADER_OFFSET(reg >> VIV_FE_LOAD_STATE_HEADER_OFFSET__SHR)); + + for (i = 0; i < count; i++) + OUT(buffer, values[i]); +} + +static inline void CMD_END(struct etnaviv_gem_object *buffer) +{ + buffer->offset = ALIGN(buffer->offset, 2); + + OUT(buffer, VIV_FE_END_HEADER_OP_END); +} + +static inline void CMD_NOP(struct etnaviv_gem_object *buffer) +{ + buffer->offset = ALIGN(buffer->offset, 2); + + OUT(buffer, VIV_FE_NOP_HEADER_OP_NOP); +} + +static inline void CMD_WAIT(struct etnaviv_gem_object *buffer) +{ + buffer->offset = ALIGN(buffer->offset, 2); + + OUT(buffer, VIV_FE_WAIT_HEADER_OP_WAIT | 200); +} + +static inline void CMD_LINK(struct etnaviv_gem_object *buffer, u16 prefetch, u32 address) +{ + buffer->offset = ALIGN(buffer->offset, 2); + + OUT(buffer, VIV_FE_LINK_HEADER_OP_LINK | VIV_FE_LINK_HEADER_PREFETCH(prefetch)); + OUT(buffer, address); +} + +static inline void CMD_STALL(struct etnaviv_gem_object *buffer, u32 from, u32 to) +{ + 
buffer->offset = ALIGN(buffer->offset, 2); + + OUT(buffer, VIV_FE_STALL_HEADER_OP_STALL); + OUT(buffer, VIV_FE_STALL_TOKEN_FROM(from) | VIV_FE_STALL_TOKEN_TO(to)); +} + +static void etnaviv_cmd_select_pipe(struct etnaviv_gem_object *buffer, u8 pipe) +{ + u32 flush; + u32 stall; + + if (pipe == ETNA_PIPE_2D) + flush = VIVS_GL_FLUSH_CACHE_DEPTH | VIVS_GL_FLUSH_CACHE_COLOR; + else + flush = VIVS_GL_FLUSH_CACHE_TEXTURE; + + stall = VIVS_GL_SEMAPHORE_TOKEN_FROM(SYNC_RECIPIENT_FE) | + VIVS_GL_SEMAPHORE_TOKEN_TO(SYNC_RECIPIENT_PE); + + CMD_LOAD_STATE(buffer, VIVS_GL_FLUSH_CACHE, flush); + CMD_LOAD_STATE(buffer, VIVS_GL_SEMAPHORE_TOKEN, stall); + + CMD_STALL(buffer, SYNC_RECIPIENT_FE, SYNC_RECIPIENT_PE); + + CMD_LOAD_STATE(buffer, VIVS_GL_PIPE_SELECT, VIVS_GL_PIPE_SELECT_PIPE(pipe)); +} + +static void etnaviv_buffer_dump(struct etnaviv_gem_object *obj, u32 len) +{ + u32 size = obj->base.size; + u32 *ptr = obj->vaddr; + + dev_dbg(obj->gpu->dev->dev, "virt %p phys 0x%08x free 0x%08x\n", + obj->vaddr, obj->paddr, size - len * 4); + + print_hex_dump(KERN_INFO, "cmd ", DUMP_PREFIX_OFFSET, 16, 4, + ptr, len * 4, 0); +} + +u32 etnaviv_buffer_init(struct etnaviv_gpu *gpu) +{ + struct etnaviv_gem_object *buffer = to_etnaviv_bo(gpu->buffer); + + /* initialize buffer */ + buffer->offset = 0; + + etnaviv_cmd_select_pipe(buffer, gpu->pipe); + + CMD_WAIT(buffer); + CMD_LINK(buffer, 2, buffer->paddr + ((buffer->offset - 1) * 4)); + + return buffer->offset; +} + +void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct etnaviv_gem_submit *submit) +{ + struct etnaviv_gem_object *buffer = to_etnaviv_bo(gpu->buffer); + struct etnaviv_gem_object *cmd; + u32 *lw = buffer->vaddr + ((buffer->offset - 4) * 4); + u32 back; + u32 i; + + etnaviv_buffer_dump(buffer, 0x50); + + /* save offset back into main buffer */ + back = buffer->offset; + + /* trigger event */ + CMD_LOAD_STATE(buffer, VIVS_GL_EVENT, VIVS_GL_EVENT_EVENT_ID(event) | VIVS_GL_EVENT_FROM_PE); + + /* append 
WAIT/LINK to main buffer */ + CMD_WAIT(buffer); + CMD_LINK(buffer, 2, buffer->paddr + ((buffer->offset - 1) * 4)); + + /* update offset for every cmd stream */ + for (i = 0; i < submit->nr_cmds; i++) + submit->cmd[i].obj->offset = submit->cmd[i].size; + + /* TODO: inter-connect all cmd buffers */ + + /* jump back from last cmd to main buffer */ + cmd = submit->cmd[submit->nr_cmds - 1].obj; + CMD_LINK(cmd, 4, buffer->paddr + (back * 4)); + + printk(KERN_ERR "stream link @ 0x%08x\n", cmd->paddr + ((cmd->offset - 1) * 4)); + printk(KERN_ERR "stream link @ %p\n", cmd->vaddr + ((cmd->offset - 1) * 4)); + + for (i = 0; i < submit->nr_cmds; i++) { + struct etnaviv_gem_object *obj = submit->cmd[i].obj; + + /* TODO: remove later */ + if (unlikely(drm_debug & DRM_UT_CORE)) + etnaviv_buffer_dump(obj, obj->offset); + } + + /* change ll to NOP */ + printk(KERN_ERR "link op: %p\n", lw); + printk(KERN_ERR "link addr: %p\n", lw + 1); + printk(KERN_ERR "addr: 0x%08x\n", submit->cmd[0].obj->paddr); + printk(KERN_ERR "back: 0x%08x\n", buffer->paddr + (back * 4)); + printk(KERN_ERR "event: %d\n", event); + + /* Change WAIT into a LINK command; write the address first. */ + i = VIV_FE_LINK_HEADER_OP_LINK | VIV_FE_LINK_HEADER_PREFETCH(submit->cmd[0].size * 2); + *(lw + 1) = submit->cmd[0].obj->paddr; + mb(); + *(lw)= i; + mb(); + + etnaviv_buffer_dump(buffer, 0x50); +} diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c new file mode 100644 index 000000000000..39586b45200d --- /dev/null +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -0,0 +1,621 @@ +/* + * Copyright (C) 2013 Red Hat + * Author: Rob Clark robdclark@gmail.com + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. 
+ * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see http://www.gnu.org/licenses/. + */ + +#include <linux/component.h> +#include <linux/of_platform.h> + +#include "etnaviv_drv.h" +#include "etnaviv_gpu.h" + +void etnaviv_register_mmu(struct drm_device *dev, struct etnaviv_iommu *mmu) +{ + struct etnaviv_drm_private *priv = dev->dev_private; + priv->mmu = mmu; +} + +#ifdef CONFIG_DRM_ETNAVIV_REGISTER_LOGGING +static bool reglog = false; +MODULE_PARM_DESC(reglog, "Enable register read/write logging"); +module_param(reglog, bool, 0600); +#else +#define reglog 0 +#endif + +void __iomem *etnaviv_ioremap(struct platform_device *pdev, const char *name, + const char *dbgname) +{ + struct resource *res; + unsigned long size; + void __iomem *ptr; + + if (name) + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, name); + else + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + + if (!res) { + dev_err(&pdev->dev, "failed to get memory resource: %s\n", name); + return ERR_PTR(-EINVAL); + } + + size = resource_size(res); + + ptr = devm_ioremap_nocache(&pdev->dev, res->start, size); + if (!ptr) { + dev_err(&pdev->dev, "failed to ioremap: %s\n", name); + return ERR_PTR(-ENOMEM); + } + + if (reglog) + printk(KERN_DEBUG "IO:region %s %08x %08lx\n", dbgname, (u32)ptr, size); + + return ptr; +} + +void etnaviv_writel(u32 data, void __iomem *addr) +{ + if (reglog) + printk(KERN_DEBUG "IO:W %08x %08x\n", (u32)addr, data); + writel(data, addr); +} + +u32 etnaviv_readl(const void __iomem *addr) +{ + u32 val = readl(addr); + if (reglog) + printk(KERN_ERR "IO:R %08x %08x\n", (u32)addr, val); + return val; +} + +/* + * DRM operations: + */ + +static int 
etnaviv_unload(struct drm_device *dev) +{ + struct etnaviv_drm_private *priv = dev->dev_private; + unsigned int i; + + flush_workqueue(priv->wq); + destroy_workqueue(priv->wq); + + mutex_lock(&dev->struct_mutex); + for (i = 0; i < ETNA_MAX_PIPES; i++) { + struct etnaviv_gpu *g = priv->gpu[i]; + if (g) + etnaviv_gpu_pm_suspend(g); + } + mutex_unlock(&dev->struct_mutex); + + component_unbind_all(dev->dev, dev); + + dev->dev_private = NULL; + + kfree(priv); + + return 0; +} + + +static void load_gpu(struct drm_device *dev) +{ + struct etnaviv_drm_private *priv = dev->dev_private; + unsigned int i; + + mutex_lock(&dev->struct_mutex); + + for (i = 0; i < ETNA_MAX_PIPES; i++) { + struct etnaviv_gpu *g = priv->gpu[i]; + if (g) { + int ret; + etnaviv_gpu_pm_resume(g); + ret = etnaviv_gpu_init(g); + if (ret) { + dev_err(dev->dev, "%s hw init failed: %d\n", g->name, ret); + priv->gpu[i] = NULL; + } + } + } + + mutex_unlock(&dev->struct_mutex); +} + +static int etnaviv_load(struct drm_device *dev, unsigned long flags) +{ + struct platform_device *pdev = dev->platformdev; + struct etnaviv_drm_private *priv; + int err; + + priv = kzalloc(sizeof(*priv), GFP_KERNEL); + if (!priv) { + dev_err(dev->dev, "failed to allocate private data\n"); + return -ENOMEM; + } + + dev->dev_private = priv; + + priv->wq = alloc_ordered_workqueue("etnaviv", 0); + init_waitqueue_head(&priv->fence_event); + + INIT_LIST_HEAD(&priv->inactive_list); + + platform_set_drvdata(pdev, dev); + + err = component_bind_all(dev->dev, dev); + if (err < 0) + return err; + + load_gpu(dev); + + return 0; +} + +static int etnaviv_open(struct drm_device *dev, struct drm_file *file) +{ + struct etnaviv_file_private *ctx; + + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); + if (!ctx) + return -ENOMEM; + + file->driver_priv = ctx; + + return 0; +} + +static void etnaviv_preclose(struct drm_device *dev, struct drm_file *file) +{ + struct etnaviv_drm_private *priv = dev->dev_private; + struct etnaviv_file_private *ctx = 
file->driver_priv; + + mutex_lock(&dev->struct_mutex); + if (ctx == priv->lastctx) + priv->lastctx = NULL; + mutex_unlock(&dev->struct_mutex); + + kfree(ctx); +} + +/* + * DRM debugfs: + */ + +#ifdef CONFIG_DEBUG_FS +static int etnaviv_gpu_show(struct drm_device *dev, struct seq_file *m) +{ + struct etnaviv_drm_private *priv = dev->dev_private; + struct etnaviv_gpu *gpu; + unsigned int i; + + for (i = 0; i < ETNA_MAX_PIPES; i++) { + gpu = priv->gpu[i]; + if (gpu) { + seq_printf(m, "%s Status:\n", gpu->name); + etnaviv_gpu_debugfs(gpu, m); + } + } + + return 0; +} + +static int etnaviv_gem_show(struct drm_device *dev, struct seq_file *m) +{ + struct etnaviv_drm_private *priv = dev->dev_private; + struct etnaviv_gpu *gpu; + unsigned int i; + + for (i = 0; i < ETNA_MAX_PIPES; i++) { + gpu = priv->gpu[i]; + if (gpu) { + seq_printf(m, "Active Objects (%s):\n", gpu->name); + msm_gem_describe_objects(&gpu->active_list, m); + } + } + + seq_puts(m, "Inactive Objects:\n"); + msm_gem_describe_objects(&priv->inactive_list, m); + + return 0; +} + +static int etnaviv_mm_show(struct drm_device *dev, struct seq_file *m) +{ + return drm_mm_dump_table(m, &dev->vma_offset_manager->vm_addr_space_mm); +} + +static int show_locked(struct seq_file *m, void *arg) +{ + struct drm_info_node *node = (struct drm_info_node *) m->private; + struct drm_device *dev = node->minor->dev; + int (*show)(struct drm_device *dev, struct seq_file *m) = + node->info_ent->data; + int ret; + + ret = mutex_lock_interruptible(&dev->struct_mutex); + if (ret) + return ret; + + ret = show(dev, m); + + mutex_unlock(&dev->struct_mutex); + + return ret; +} + +static struct drm_info_list ETNAVIV_debugfs_list[] = { + {"gpu", show_locked, 0, etnaviv_gpu_show}, + {"gem", show_locked, 0, etnaviv_gem_show}, + { "mm", show_locked, 0, etnaviv_mm_show }, +}; + +static int etnaviv_debugfs_init(struct drm_minor *minor) +{ + struct drm_device *dev = minor->dev; + int ret; + + ret = drm_debugfs_create_files(ETNAVIV_debugfs_list, 
+ ARRAY_SIZE(ETNAVIV_debugfs_list), + minor->debugfs_root, minor); + + if (ret) { + dev_err(dev->dev, "could not install ETNAVIV_debugfs_list\n"); + return ret; + } + + return ret; +} + +static void etnaviv_debugfs_cleanup(struct drm_minor *minor) +{ + drm_debugfs_remove_files(ETNAVIV_debugfs_list, + ARRAY_SIZE(ETNAVIV_debugfs_list), minor); +} +#endif + +/* + * Fences: + */ +int etnaviv_wait_fence_interruptable(struct drm_device *dev, uint32_t pipe, + uint32_t fence, struct timespec *timeout) +{ + struct etnaviv_drm_private *priv = dev->dev_private; + struct etnaviv_gpu *gpu; + int ret; + + if (pipe >= ETNA_MAX_PIPES) + return -EINVAL; + + gpu = priv->gpu[pipe]; + if (!gpu) + return -ENXIO; + + if (fence > gpu->submitted_fence) { + DRM_ERROR("waiting on invalid fence: %u (of %u)\n", + fence, gpu->submitted_fence); + return -EINVAL; + } + + if (!timeout) { + /* no-wait: */ + ret = fence_completed(dev, fence) ? 0 : -EBUSY; + } else { + unsigned long timeout_jiffies = timespec_to_jiffies(timeout); + unsigned long start_jiffies = jiffies; + unsigned long remaining_jiffies; + + if (time_after(start_jiffies, timeout_jiffies)) + remaining_jiffies = 0; + else + remaining_jiffies = timeout_jiffies - start_jiffies; + + ret = wait_event_interruptible_timeout(priv->fence_event, + fence_completed(dev, fence), + remaining_jiffies); + + if (ret == 0) { + DBG("timeout waiting for fence: %u (completed: %u)", + fence, priv->completed_fence); + ret = -ETIMEDOUT; + } else if (ret != -ERESTARTSYS) { + ret = 0; + } + } + + return ret; +} + +/* called from workqueue */ +void etnaviv_update_fence(struct drm_device *dev, uint32_t fence) +{ + struct etnaviv_drm_private *priv = dev->dev_private; + + mutex_lock(&dev->struct_mutex); + priv->completed_fence = max(fence, priv->completed_fence); + mutex_unlock(&dev->struct_mutex); + + wake_up_all(&priv->fence_event); +} + +/* + * DRM ioctls: + */ + +static int etnaviv_ioctl_get_param(struct drm_device *dev, void *data, + struct drm_file *file) 
+{ + struct etnaviv_drm_private *priv = dev->dev_private; + struct drm_etnaviv_param *args = data; + struct etnaviv_gpu *gpu; + + if (args->pipe >= ETNA_MAX_PIPES) + return -EINVAL; + + gpu = priv->gpu[args->pipe]; + if (!gpu) + return -ENXIO; + + return etnaviv_gpu_get_param(gpu, args->param, &args->value); +} + +static int etnaviv_ioctl_gem_new(struct drm_device *dev, void *data, + struct drm_file *file) +{ + struct drm_etnaviv_gem_new *args = data; + return etnaviv_gem_new_handle(dev, file, args->size, + args->flags, &args->handle); +} + +#define TS(t) ((struct timespec){ .tv_sec = (t).tv_sec, .tv_nsec = (t).tv_nsec }) + +static int etnaviv_ioctl_gem_cpu_prep(struct drm_device *dev, void *data, + struct drm_file *file) +{ + struct drm_etnaviv_gem_cpu_prep *args = data; + struct drm_gem_object *obj; + int ret; + + obj = drm_gem_object_lookup(dev, file, args->handle); + if (!obj) + return -ENOENT; + + ret = etnaviv_gem_cpu_prep(obj, args->op, &TS(args->timeout)); + + drm_gem_object_unreference_unlocked(obj); + + return ret; +} + +static int etnaviv_ioctl_gem_cpu_fini(struct drm_device *dev, void *data, + struct drm_file *file) +{ + struct drm_etnaviv_gem_cpu_fini *args = data; + struct drm_gem_object *obj; + int ret; + + obj = drm_gem_object_lookup(dev, file, args->handle); + if (!obj) + return -ENOENT; + + ret = etnaviv_gem_cpu_fini(obj); + + drm_gem_object_unreference_unlocked(obj); + + return ret; +} + +static int etnaviv_ioctl_gem_info(struct drm_device *dev, void *data, + struct drm_file *file) +{ + struct drm_etnaviv_gem_info *args = data; + struct drm_gem_object *obj; + int ret = 0; + + if (args->pad) + return -EINVAL; + + obj = drm_gem_object_lookup(dev, file, args->handle); + if (!obj) + return -ENOENT; + + args->offset = msm_gem_mmap_offset(obj); + + drm_gem_object_unreference_unlocked(obj); + + return ret; +} + +static int etnaviv_ioctl_wait_fence(struct drm_device *dev, void *data, + struct drm_file *file) +{ + struct drm_etnaviv_wait_fence *args = 
data; + return etnaviv_wait_fence_interruptable(dev, args->pipe, args->fence, &TS(args->timeout)); +} + +static const struct drm_ioctl_desc etnaviv_ioctls[] = { + DRM_IOCTL_DEF_DRV(ETNAVIV_GET_PARAM, etnaviv_ioctl_get_param, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(ETNAVIV_GEM_NEW, etnaviv_ioctl_gem_new, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(ETNAVIV_GEM_INFO, etnaviv_ioctl_gem_info, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(ETNAVIV_GEM_CPU_PREP, etnaviv_ioctl_gem_cpu_prep, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(ETNAVIV_GEM_CPU_FINI, etnaviv_ioctl_gem_cpu_fini, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(ETNAVIV_GEM_SUBMIT, etnaviv_ioctl_gem_submit, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(ETNAVIV_WAIT_FENCE, etnaviv_ioctl_wait_fence, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), +}; + +static const struct vm_operations_struct vm_ops = { + .fault = etnaviv_gem_fault, + .open = drm_gem_vm_open, + .close = drm_gem_vm_close, +}; + +static const struct file_operations fops = { + .owner = THIS_MODULE, + .open = drm_open, + .release = drm_release, + .unlocked_ioctl = drm_ioctl, +#ifdef CONFIG_COMPAT + .compat_ioctl = drm_compat_ioctl, +#endif + .poll = drm_poll, + .read = drm_read, + .llseek = no_llseek, + .mmap = etnaviv_gem_mmap, +}; + +static struct drm_driver etnaviv_drm_driver = { + .driver_features = DRIVER_HAVE_IRQ | + DRIVER_GEM | + DRIVER_PRIME | + DRIVER_RENDER, + .load = etnaviv_load, + .unload = etnaviv_unload, + .open = etnaviv_open, + .preclose = etnaviv_preclose, + .set_busid = drm_platform_set_busid, + .gem_free_object = etnaviv_gem_free_object, + .gem_vm_ops = &vm_ops, + .dumb_create = msm_gem_dumb_create, + .dumb_map_offset = msm_gem_dumb_map_offset, + .dumb_destroy = drm_gem_dumb_destroy, + .prime_handle_to_fd = drm_gem_prime_handle_to_fd, + .prime_fd_to_handle = drm_gem_prime_fd_to_handle, + .gem_prime_export = drm_gem_prime_export, + 
.gem_prime_import = drm_gem_prime_import, + .gem_prime_pin = msm_gem_prime_pin, + .gem_prime_unpin = msm_gem_prime_unpin, + .gem_prime_get_sg_table = msm_gem_prime_get_sg_table, + .gem_prime_import_sg_table = msm_gem_prime_import_sg_table, + .gem_prime_vmap = msm_gem_prime_vmap, + .gem_prime_vunmap = msm_gem_prime_vunmap, +#ifdef CONFIG_DEBUG_FS + .debugfs_init = etnaviv_debugfs_init, + .debugfs_cleanup = etnaviv_debugfs_cleanup, +#endif + .ioctls = etnaviv_ioctls, + .num_ioctls = DRM_ETNAVIV_NUM_IOCTLS, + .fops = &fops, + .name = "etnaviv", + .desc = "etnaviv DRM", + .date = "20130625", + .major = 1, + .minor = 0, +}; + +/* + * Platform driver: + */ + +static int etnaviv_compare(struct device *dev, void *data) +{ + struct device_node *np = data; + + return dev->of_node == np; +} + +static int etnaviv_add_components(struct device *master, struct master *m) +{ + struct device_node *np = master->of_node; + struct device_node *child_np; + + child_np = of_get_next_available_child(np, NULL); + + while (child_np) { + DRM_INFO("add child %s\n", child_np->name); + component_master_add_child(m, etnaviv_compare, child_np); + of_node_put(child_np); + child_np = of_get_next_available_child(np, child_np); + } + + return 0; +} + +static int etnaviv_bind(struct device *dev) +{ + return drm_platform_init(&etnaviv_drm_driver, to_platform_device(dev)); +} + +static void etnaviv_unbind(struct device *dev) +{ + drm_put_dev(dev_get_drvdata(dev)); +} + +static const struct component_master_ops etnaviv_master_ops = { + .add_components = etnaviv_add_components, + .bind = etnaviv_bind, + .unbind = etnaviv_unbind, +}; + +static int etnaviv_pdev_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct device_node *node = dev->of_node; + + of_platform_populate(node, NULL, NULL, dev); + + dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)); + + return component_master_add(&pdev->dev, &etnaviv_master_ops); +} + +static int etnaviv_pdev_remove(struct platform_device 
*pdev) +{ + component_master_del(&pdev->dev, &etnaviv_master_ops); + + return 0; +} + +static const struct of_device_id dt_match[] = { + { .compatible = "vivante,gccore" }, + {} +}; +MODULE_DEVICE_TABLE(of, dt_match); + +static struct platform_driver etnaviv_platform_driver = { + .probe = etnaviv_pdev_probe, + .remove = etnaviv_pdev_remove, + .driver = { + .owner = THIS_MODULE, + .name = "vivante", + .of_match_table = dt_match, + }, +}; + +static int __init etnaviv_init(void) +{ + int ret; + + ret = platform_driver_register(&etnaviv_gpu_driver); + if (ret != 0) + return ret; + + ret = platform_driver_register(&etnaviv_platform_driver); + if (ret != 0) + platform_driver_unregister(&etnaviv_gpu_driver); + + return ret; +} +module_init(etnaviv_init); + +static void __exit etnaviv_exit(void) +{ + platform_driver_unregister(&etnaviv_platform_driver); + platform_driver_unregister(&etnaviv_gpu_driver); +} +module_exit(etnaviv_exit); + +MODULE_AUTHOR("Rob Clark <robdclark@gmail.com>"); +MODULE_DESCRIPTION("etnaviv DRM Driver"); +MODULE_LICENSE("GPL"); diff --git a/drivers/staging/etnaviv/etnaviv_drv.h b/drivers/staging/etnaviv/etnaviv_drv.h new file mode 100644 index 000000000000..63994f22d8c9 --- /dev/null +++ b/drivers/staging/etnaviv/etnaviv_drv.h @@ -0,0 +1,143 @@ +/* + * Copyright (C) 2013 Red Hat + * Author: Rob Clark <robdclark@gmail.com> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see http://www.gnu.org/licenses/.
+ */ + +#ifndef __ETNAVIV_DRV_H__ +#define __ETNAVIV_DRV_H__ + +#include <linux/kernel.h> +#include <linux/clk.h> +#include <linux/cpufreq.h> +#include <linux/module.h> +#include <linux/platform_device.h> +#include <linux/pm.h> +#include <linux/pm_runtime.h> +#include <linux/slab.h> +#include <linux/list.h> +#include <linux/iommu.h> +#include <linux/types.h> +#include <linux/sizes.h> + +#include <drm/drmP.h> +#include <drm/drm_crtc_helper.h> +#include <drm/drm_fb_helper.h> +#include <drm/drm_gem.h> +#include <drm/etnaviv_drm.h> + +struct etnaviv_gpu; +struct etnaviv_mmu; +struct etnaviv_gem_submit; + +struct etnaviv_file_private { + /* currently we don't do anything useful with this.. but when + * per-context address spaces are supported we'd keep track of + * the context's page-tables here. + */ + int dummy; +}; + +struct etnaviv_drm_private { + struct etnaviv_gpu *gpu[ETNA_MAX_PIPES]; + struct etnaviv_file_private *lastctx; + + uint32_t next_fence, completed_fence; + wait_queue_head_t fence_event; + + /* list of GEM objects: */ + struct list_head inactive_list; + + struct workqueue_struct *wq; + + /* registered MMUs: */ + struct etnaviv_iommu *mmu; +}; + +void etnaviv_register_mmu(struct drm_device *dev, struct etnaviv_iommu *mmu); + +int etnaviv_wait_fence_interruptable(struct drm_device *dev, uint32_t pipe, + uint32_t fence, struct timespec *timeout); +void etnaviv_update_fence(struct drm_device *dev, uint32_t fence); + +int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, + struct drm_file *file); + +int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma); +int etnaviv_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf); +uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj); +int etnaviv_gem_get_iova_locked(struct etnaviv_gpu *gpu, struct drm_gem_object *obj, + uint32_t *iova); +int etnaviv_gem_get_iova(struct etnaviv_gpu *gpu, struct drm_gem_object *obj, int id, uint32_t *iova); +struct page **etnaviv_gem_get_pages(struct 
drm_gem_object *obj); +void msm_gem_put_pages(struct drm_gem_object *obj); +void etnaviv_gem_put_iova(struct drm_gem_object *obj); +int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, + struct drm_mode_create_dumb *args); +int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, + uint32_t handle, uint64_t *offset); +struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj); +void *msm_gem_prime_vmap(struct drm_gem_object *obj); +void msm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); +struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, + size_t size, struct sg_table *sg); +int msm_gem_prime_pin(struct drm_gem_object *obj); +void msm_gem_prime_unpin(struct drm_gem_object *obj); +void *etnaviv_gem_vaddr_locked(struct drm_gem_object *obj); +void *msm_gem_vaddr(struct drm_gem_object *obj); +dma_addr_t etnaviv_gem_paddr_locked(struct drm_gem_object *obj); +void etnaviv_gem_move_to_active(struct drm_gem_object *obj, + struct etnaviv_gpu *gpu, bool write, uint32_t fence); +void etnaviv_gem_move_to_inactive(struct drm_gem_object *obj); +int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, + struct timespec *timeout); +int etnaviv_gem_cpu_fini(struct drm_gem_object *obj); +void etnaviv_gem_free_object(struct drm_gem_object *obj); +int etnaviv_gem_new_handle(struct drm_device *dev, struct drm_file *file, + uint32_t size, uint32_t flags, uint32_t *handle); +struct drm_gem_object *etnaviv_gem_new(struct drm_device *dev, + uint32_t size, uint32_t flags); +struct drm_gem_object *msm_gem_import(struct drm_device *dev, + uint32_t size, struct sg_table *sgt); +u32 etnaviv_buffer_init(struct etnaviv_gpu *gpu); +void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct etnaviv_gem_submit *submit); + +#ifdef CONFIG_DEBUG_FS +void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m); +void msm_gem_describe_objects(struct list_head *list, struct seq_file 
*m); +void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m); +#endif + +void __iomem *etnaviv_ioremap(struct platform_device *pdev, const char *name, + const char *dbgname); +void etnaviv_writel(u32 data, void __iomem *addr); +u32 etnaviv_readl(const void __iomem *addr); + +#define DBG(fmt, ...) DRM_DEBUG(fmt"\n", ##__VA_ARGS__) +#define VERB(fmt, ...) do { if (0) DRM_DEBUG(fmt"\n", ##__VA_ARGS__); } while (0) + +static inline bool fence_completed(struct drm_device *dev, uint32_t fence) +{ + struct etnaviv_drm_private *priv = dev->dev_private; + return priv->completed_fence >= fence; +} + +static inline int align_pitch(int width, int bpp) +{ + int bytespp = (bpp + 7) / 8; + /* align pitch to 32 pixels: */ + return bytespp * ALIGN(width, 32); +} + +#endif /* __ETNAVIV_DRV_H__ */ diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c new file mode 100644 index 000000000000..42149a2b7404 --- /dev/null +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -0,0 +1,706 @@ +/* + * Copyright (C) 2013 Red Hat + * Author: Rob Clark robdclark@gmail.com + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see http://www.gnu.org/licenses/. 
+ */ + +#include <linux/spinlock.h> +#include <linux/shmem_fs.h> +#include <linux/dma-buf.h> + +#include "etnaviv_drv.h" +#include "etnaviv_gem.h" +#include "etnaviv_gpu.h" +#include "etnaviv_mmu.h" + +/* called with dev->struct_mutex held */ +static struct page **get_pages(struct drm_gem_object *obj) +{ + struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); + + if (!etnaviv_obj->pages) { + struct drm_device *dev = obj->dev; + struct page **p; + int npages = obj->size >> PAGE_SHIFT; + + p = drm_gem_get_pages(obj); + + if (IS_ERR(p)) { + dev_err(dev->dev, "could not get pages: %ld\n", + PTR_ERR(p)); + return p; + } + + etnaviv_obj->sgt = drm_prime_pages_to_sg(p, npages); + if (IS_ERR(etnaviv_obj->sgt)) { + dev_err(dev->dev, "failed to allocate sgt\n"); + return ERR_CAST(etnaviv_obj->sgt); + } + + etnaviv_obj->pages = p; + + /* For non-cached buffers, ensure the new pages are clean + * because display controller, GPU, etc. are not coherent: + */ + if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_UNCACHED)) + dma_map_sg(dev->dev, etnaviv_obj->sgt->sgl, + etnaviv_obj->sgt->nents, DMA_BIDIRECTIONAL); + } + + return etnaviv_obj->pages; +} + +static void put_pages(struct drm_gem_object *obj) +{ + struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); + + if (etnaviv_obj->pages) { + /* For non-cached buffers, ensure the new pages are clean + * because display controller, GPU, etc. 
are not coherent: + */ + if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_UNCACHED)) + dma_unmap_sg(obj->dev->dev, etnaviv_obj->sgt->sgl, + etnaviv_obj->sgt->nents, DMA_BIDIRECTIONAL); + sg_free_table(etnaviv_obj->sgt); + kfree(etnaviv_obj->sgt); + + drm_gem_put_pages(obj, etnaviv_obj->pages, true, false); + + etnaviv_obj->pages = NULL; + } +} + +struct page **etnaviv_gem_get_pages(struct drm_gem_object *obj) +{ + struct drm_device *dev = obj->dev; + struct page **p; + mutex_lock(&dev->struct_mutex); + p = get_pages(obj); + mutex_unlock(&dev->struct_mutex); + return p; +} + +void msm_gem_put_pages(struct drm_gem_object *obj) +{ + /* when we start tracking the pin count, then do something here */ +} + +static int etnaviv_gem_mmap_cmd(struct drm_gem_object *obj, + struct vm_area_struct *vma) +{ + struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); + int ret; + + /* + * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the + * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map + * the whole buffer. 
+ */ + vma->vm_flags &= ~VM_PFNMAP; + vma->vm_pgoff = 0; + + ret = dma_mmap_coherent(obj->dev->dev, vma, + etnaviv_obj->vaddr, etnaviv_obj->paddr, + vma->vm_end - vma->vm_start); + + return ret; +} + +static int etnaviv_gem_mmap_obj(struct drm_gem_object *obj, + struct vm_area_struct *vma) +{ + struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); + + vma->vm_flags &= ~VM_PFNMAP; + vma->vm_flags |= VM_MIXEDMAP; + + if (etnaviv_obj->flags & ETNA_BO_WC) { + vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags)); + } else if (etnaviv_obj->flags & ETNA_BO_UNCACHED) { + vma->vm_page_prot = pgprot_noncached(vm_get_page_prot(vma->vm_flags)); + } else { + /* + * Shunt off cached objs to shmem file so they have their own + * address_space (so unmap_mapping_range does what we want, + * in particular in the case of mmap'd dmabufs) + */ + fput(vma->vm_file); + get_file(obj->filp); + vma->vm_pgoff = 0; + vma->vm_file = obj->filp; + + vma->vm_page_prot = vm_get_page_prot(vma->vm_flags); + } + + return 0; +} + +int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma) +{ + struct etnaviv_gem_object *obj; + int ret; + + ret = drm_gem_mmap(filp, vma); + if (ret) { + DBG("mmap failed: %d", ret); + return ret; + } + + obj = to_etnaviv_bo(vma->vm_private_data); + if (obj->flags & ETNA_BO_CMDSTREAM) + ret = etnaviv_gem_mmap_cmd(vma->vm_private_data, vma); + else + ret = etnaviv_gem_mmap_obj(vma->vm_private_data, vma); + + return ret; +} + +int etnaviv_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf) +{ + struct drm_gem_object *obj = vma->vm_private_data; + struct drm_device *dev = obj->dev; + struct page **pages; + unsigned long pfn; + pgoff_t pgoff; + int ret; + + /* Make sure we don't parallel update on a fault, nor move or remove + * something from beneath our feet + */ + ret = mutex_lock_interruptible(&dev->struct_mutex); + if (ret) + goto out; + + /* make sure we have pages attached now */ + pages = get_pages(obj); + if (IS_ERR(pages)) 
{ + ret = PTR_ERR(pages); + goto out_unlock; + } + + /* We don't use vmf->pgoff since that has the fake offset: */ + pgoff = ((unsigned long)vmf->virtual_address - + vma->vm_start) >> PAGE_SHIFT; + + pfn = page_to_pfn(pages[pgoff]); + + VERB("Inserting %p pfn %lx, pa %lx", vmf->virtual_address, + pfn, pfn << PAGE_SHIFT); + + ret = vm_insert_mixed(vma, (unsigned long)vmf->virtual_address, pfn); + +out_unlock: + mutex_unlock(&dev->struct_mutex); +out: + switch (ret) { + case -EAGAIN: + case 0: + case -ERESTARTSYS: + case -EINTR: + case -EBUSY: + /* + * EBUSY is ok: this just means that another thread + * already did the job. + */ + return VM_FAULT_NOPAGE; + case -ENOMEM: + return VM_FAULT_OOM; + default: + return VM_FAULT_SIGBUS; + } +} + +/** get mmap offset */ +static uint64_t mmap_offset(struct drm_gem_object *obj) +{ + struct drm_device *dev = obj->dev; + int ret; + + WARN_ON(!mutex_is_locked(&dev->struct_mutex)); + + /* Make it mmapable */ + ret = drm_gem_create_mmap_offset(obj); + + if (ret) { + dev_err(dev->dev, "could not allocate mmap offset\n"); + return 0; + } + + return drm_vma_node_offset_addr(&obj->vma_node); +} + +uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) +{ + uint64_t offset; + mutex_lock(&obj->dev->struct_mutex); + offset = mmap_offset(obj); + mutex_unlock(&obj->dev->struct_mutex); + return offset; +} + +/* should be called under struct_mutex.. although it can be called + * from atomic context without struct_mutex to acquire an extra + * iova ref if you know one is already held. + * + * That means when I do eventually need to add support for unpinning + * the refcnt counter needs to be atomic_t. 
+ */ +int etnaviv_gem_get_iova_locked(struct etnaviv_gpu * gpu, struct drm_gem_object *obj, + uint32_t *iova) +{ + struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); + int ret = 0; + + if (!etnaviv_obj->iova && !(etnaviv_obj->flags & ETNA_BO_CMDSTREAM)) { + struct etnaviv_drm_private *priv = obj->dev->dev_private; + struct etnaviv_iommu *mmu = priv->mmu; + struct page **pages = get_pages(obj); + uint32_t offset; + struct drm_mm_node *node = NULL; + + if (IS_ERR(pages)) + return PTR_ERR(pages); + + node = kzalloc(sizeof(*node), GFP_KERNEL); + if (!node) + return -ENOMEM; + + ret = drm_mm_insert_node(&gpu->mm, node, obj->size, 0, + DRM_MM_SEARCH_DEFAULT); + + if (!ret) { + offset = node->start; + etnaviv_obj->iova = offset; + etnaviv_obj->gpu_vram_node = node; + + ret = etnaviv_iommu_map(mmu, offset, etnaviv_obj->sgt, + obj->size, IOMMU_READ | IOMMU_WRITE); + } else + kfree(node); + } + + if (!ret) + *iova = etnaviv_obj->iova; + + return ret; +} + +int etnaviv_gem_get_iova(struct etnaviv_gpu *gpu, struct drm_gem_object *obj, int id, uint32_t *iova) +{ + struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); + int ret; + + /* this is safe right now because we don't unmap until the + * bo is deleted: + */ + if (etnaviv_obj->iova) { + *iova = etnaviv_obj->iova; + return 0; + } + + mutex_lock(&obj->dev->struct_mutex); + ret = etnaviv_gem_get_iova_locked(gpu, obj, iova); + mutex_unlock(&obj->dev->struct_mutex); + return ret; +} + +void etnaviv_gem_put_iova(struct drm_gem_object *obj) +{ + // XXX TODO .. + // NOTE: probably don't need a _locked() version.. we wouldn't + // normally unmap here, but instead just mark that it could be + // unmapped (if the iova refcnt drops to zero), but then later + // if another _get_iova_locked() fails we can start unmapping + // things that are no longer needed.. 
+} + +int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, + struct drm_mode_create_dumb *args) +{ + args->pitch = align_pitch(args->width, args->bpp); + args->size = PAGE_ALIGN(args->pitch * args->height); + /* TODO: re-check flags */ + return etnaviv_gem_new_handle(dev, file, args->size, + ETNA_BO_WC, &args->handle); +} + +int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, + uint32_t handle, uint64_t *offset) +{ + struct drm_gem_object *obj; + int ret = 0; + + /* GEM does all our handle to object mapping */ + obj = drm_gem_object_lookup(dev, file, handle); + if (obj == NULL) { + ret = -ENOENT; + goto fail; + } + + *offset = msm_gem_mmap_offset(obj); + + drm_gem_object_unreference_unlocked(obj); + +fail: + return ret; +} + +void *etnaviv_gem_vaddr_locked(struct drm_gem_object *obj) +{ + struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); + WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex)); + if (!etnaviv_obj->vaddr) { + struct page **pages = get_pages(obj); + if (IS_ERR(pages)) + return ERR_CAST(pages); + etnaviv_obj->vaddr = vmap(pages, obj->size >> PAGE_SHIFT, + VM_MAP, pgprot_writecombine(PAGE_KERNEL)); + } + return etnaviv_obj->vaddr; +} + +void *msm_gem_vaddr(struct drm_gem_object *obj) +{ + void *ret; + mutex_lock(&obj->dev->struct_mutex); + ret = etnaviv_gem_vaddr_locked(obj); + mutex_unlock(&obj->dev->struct_mutex); + return ret; +} + +dma_addr_t etnaviv_gem_paddr_locked(struct drm_gem_object *obj) +{ + struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); + WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex)); + + return etnaviv_obj->paddr; +} + +void etnaviv_gem_move_to_active(struct drm_gem_object *obj, + struct etnaviv_gpu *gpu, bool write, uint32_t fence) +{ + struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); + etnaviv_obj->gpu = gpu; + if (write) + etnaviv_obj->write_fence = fence; + else + etnaviv_obj->read_fence = fence; + list_del_init(&etnaviv_obj->mm_list); + 
list_add_tail(&etnaviv_obj->mm_list, &gpu->active_list); +} + +void etnaviv_gem_move_to_inactive(struct drm_gem_object *obj) +{ + struct drm_device *dev = obj->dev; + struct etnaviv_drm_private *priv = dev->dev_private; + struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); + + WARN_ON(!mutex_is_locked(&dev->struct_mutex)); + + etnaviv_obj->gpu = NULL; + etnaviv_obj->read_fence = 0; + etnaviv_obj->write_fence = 0; + list_del_init(&etnaviv_obj->mm_list); + list_add_tail(&etnaviv_obj->mm_list, &priv->inactive_list); +} + +int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, + struct timespec *timeout) +{ +/* + struct drm_device *dev = obj->dev; + struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); +*/ + int ret = 0; + /* TODO */ +#if 0 + if (is_active(etnaviv_obj)) { + uint32_t fence = 0; + + if (op & MSM_PREP_READ) + fence = etnaviv_obj->write_fence; + if (op & MSM_PREP_WRITE) + fence = max(fence, etnaviv_obj->read_fence); + if (op & MSM_PREP_NOSYNC) + timeout = NULL; + + ret = etnaviv_wait_fence_interruptable(dev, fence, timeout); + } + + /* TODO cache maintenance */ +#endif + return ret; +} + +int etnaviv_gem_cpu_fini(struct drm_gem_object *obj) +{ + /* TODO cache maintenance */ + return 0; +} + +#ifdef CONFIG_DEBUG_FS +void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m) +{ + struct drm_device *dev = obj->dev; + struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); + uint64_t off = drm_vma_node_start(&obj->vma_node); + + WARN_ON(!mutex_is_locked(&dev->struct_mutex)); + seq_printf(m, "%08x: %c(r=%u,w=%u) %2d (%2d) %08llx %p %d\n", + etnaviv_obj->flags, is_active(etnaviv_obj) ? 
'A' : 'I', + etnaviv_obj->read_fence, etnaviv_obj->write_fence, + obj->name, obj->refcount.refcount.counter, + off, etnaviv_obj->vaddr, obj->size); +} + +void msm_gem_describe_objects(struct list_head *list, struct seq_file *m) +{ + struct etnaviv_gem_object *etnaviv_obj; + int count = 0; + size_t size = 0; + + list_for_each_entry(etnaviv_obj, list, mm_list) { + struct drm_gem_object *obj = &etnaviv_obj->base; + seq_puts(m, " "); + msm_gem_describe(obj, m); + count++; + size += obj->size; + } + + seq_printf(m, "Total %d objects, %zu bytes\n", count, size); +} +#endif + +static void etnaviv_free_cmd(struct drm_gem_object *obj) +{ + struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); + + drm_gem_free_mmap_offset(obj); + + dma_free_coherent(obj->dev->dev, obj->size, + etnaviv_obj->vaddr, etnaviv_obj->paddr); + + drm_gem_object_release(obj); +} + +static void etnaviv_free_obj(struct drm_gem_object *obj) +{ + struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); + struct etnaviv_drm_private *priv = obj->dev->dev_private; + struct etnaviv_iommu *mmu = priv->mmu; + + if (mmu && etnaviv_obj->iova) { + uint32_t offset = etnaviv_obj->gpu_vram_node->start; + etnaviv_iommu_unmap(mmu, offset, etnaviv_obj->sgt, obj->size); + drm_mm_remove_node(etnaviv_obj->gpu_vram_node); + kfree(etnaviv_obj->gpu_vram_node); + } + + drm_gem_free_mmap_offset(obj); + + if (obj->import_attach) { + if (etnaviv_obj->vaddr) + dma_buf_vunmap(obj->import_attach->dmabuf, etnaviv_obj->vaddr); + + /* Don't drop the pages for imported dmabuf, as they are not + * ours, just free the array we allocated: + */ + if (etnaviv_obj->pages) + drm_free_large(etnaviv_obj->pages); + + } else { + if (etnaviv_obj->vaddr) + vunmap(etnaviv_obj->vaddr); + put_pages(obj); + } + + if (etnaviv_obj->resv == &etnaviv_obj->_resv) + reservation_object_fini(etnaviv_obj->resv); + + drm_gem_object_release(obj); +} + +void etnaviv_gem_free_object(struct drm_gem_object *obj) +{ + struct drm_device *dev = obj->dev; + 
struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); + + WARN_ON(!mutex_is_locked(&dev->struct_mutex)); + + /* object should not be on active list: */ + WARN_ON(is_active(etnaviv_obj)); + + list_del(&etnaviv_obj->mm_list); + + if (etnaviv_obj->flags & ETNA_BO_CMDSTREAM) + etnaviv_free_cmd(obj); + else + etnaviv_free_obj(obj); + + kfree(etnaviv_obj); +} + +/* convenience method to construct a GEM buffer object, and userspace handle */ +int etnaviv_gem_new_handle(struct drm_device *dev, struct drm_file *file, + uint32_t size, uint32_t flags, uint32_t *handle) +{ + struct drm_gem_object *obj; + int ret; + + ret = mutex_lock_interruptible(&dev->struct_mutex); + if (ret) + return ret; + + obj = etnaviv_gem_new(dev, size, flags); + + mutex_unlock(&dev->struct_mutex); + + if (IS_ERR(obj)) + return PTR_ERR(obj); + + ret = drm_gem_handle_create(file, obj, handle); + + /* drop reference from allocate - handle holds it now */ + drm_gem_object_unreference_unlocked(obj); + + return ret; +} + +static int etnaviv_gem_new_impl(struct drm_device *dev, + uint32_t size, uint32_t flags, + struct drm_gem_object **obj) +{ + struct etnaviv_drm_private *priv = dev->dev_private; + struct etnaviv_gem_object *etnaviv_obj; + unsigned sz = sizeof(*etnaviv_obj); + bool valid = true; + + /* validate flags */ + if (flags & ETNA_BO_CMDSTREAM) { + if ((flags & ETNA_BO_CACHE_MASK) != 0) + valid = false; + } else { + switch (flags & ETNA_BO_CACHE_MASK) { + case ETNA_BO_UNCACHED: + case ETNA_BO_CACHED: + case ETNA_BO_WC: + break; + default: + valid = false; + } + } + + if (!valid) { + dev_err(dev->dev, "invalid cache flag: %x (cmd: %d)\n", + (flags & ETNA_BO_CACHE_MASK), + (flags & ETNA_BO_CMDSTREAM)); + return -EINVAL; + } + + etnaviv_obj = kzalloc(sz, GFP_KERNEL); + if (!etnaviv_obj) + return -ENOMEM; + + if (flags & ETNA_BO_CMDSTREAM) { + etnaviv_obj->vaddr = dma_alloc_coherent(dev->dev, size, + &etnaviv_obj->paddr, GFP_KERNEL); + + if (!etnaviv_obj->vaddr) { + kfree(etnaviv_obj); + return 
-ENOMEM; + } + } + + etnaviv_obj->flags = flags; + + etnaviv_obj->resv = &etnaviv_obj->_resv; + reservation_object_init(etnaviv_obj->resv); + + INIT_LIST_HEAD(&etnaviv_obj->submit_entry); + list_add_tail(&etnaviv_obj->mm_list, &priv->inactive_list); + + *obj = &etnaviv_obj->base; + + return 0; +} + +struct drm_gem_object *etnaviv_gem_new(struct drm_device *dev, + uint32_t size, uint32_t flags) +{ + struct drm_gem_object *obj = NULL; + int ret; + + WARN_ON(!mutex_is_locked(&dev->struct_mutex)); + + size = PAGE_ALIGN(size); + + ret = etnaviv_gem_new_impl(dev, size, flags, &obj); + if (ret) + goto fail; + + ret = 0; + if (flags & ETNA_BO_CMDSTREAM) + drm_gem_private_object_init(dev, obj, size); + else + ret = drm_gem_object_init(dev, obj, size); + + if (ret) + goto fail; + + return obj; + +fail: + if (obj) + drm_gem_object_unreference(obj); + + return ERR_PTR(ret); +} + +struct drm_gem_object *msm_gem_import(struct drm_device *dev, + uint32_t size, struct sg_table *sgt) +{ + struct etnaviv_gem_object *etnaviv_obj; + struct drm_gem_object *obj; + int ret, npages; + + size = PAGE_ALIGN(size); + + ret = etnaviv_gem_new_impl(dev, size, ETNA_BO_WC, &obj); + if (ret) + goto fail; + + drm_gem_private_object_init(dev, obj, size); + + npages = size / PAGE_SIZE; + + etnaviv_obj = to_etnaviv_bo(obj); + etnaviv_obj->sgt = sgt; + etnaviv_obj->pages = drm_malloc_ab(npages, sizeof(struct page *)); + if (!etnaviv_obj->pages) { + ret = -ENOMEM; + goto fail; + } + + ret = drm_prime_sg_to_page_addr_arrays(sgt, etnaviv_obj->pages, NULL, npages); + if (ret) + goto fail; + + return obj; + +fail: + if (obj) + drm_gem_object_unreference_unlocked(obj); + + return ERR_PTR(ret); +} diff --git a/drivers/staging/etnaviv/etnaviv_gem.h b/drivers/staging/etnaviv/etnaviv_gem.h new file mode 100644 index 000000000000..597ff8233fb1 --- /dev/null +++ b/drivers/staging/etnaviv/etnaviv_gem.h @@ -0,0 +1,100 @@ +/* + * Copyright (C) 2013 Red Hat + * Author: Rob Clark robdclark@gmail.com + * + * This program 
is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see http://www.gnu.org/licenses/. + */ + +#ifndef __ETNAVIV_GEM_H__ +#define __ETNAVIV_GEM_H__ + +#include <linux/reservation.h> +#include "etnaviv_drv.h" + +struct etnaviv_gem_object { + struct drm_gem_object base; + + uint32_t flags; + + /* An object is either: + * inactive - on priv->inactive_list + * active - on one of the gpu's active_list.. well, at + * least for now we don't have (I don't think) hw sync between + * 2d and 3d on devices which have both, meaning we need to + * block on submit if a bo is already on the other ring + * + */ + struct list_head mm_list; + struct etnaviv_gpu *gpu; /* non-null if active */ + uint32_t read_fence, write_fence; + + /* Transiently in the process of submit ioctl, objects associated + * with the submit are on submit->bo_list.. this only lasts for + * the duration of the ioctl, so one bo can never be on multiple + * submit lists. 
+ */ + struct list_head submit_entry; + + struct page **pages; + struct sg_table *sgt; + void *vaddr; + uint32_t iova; + + /* for ETNA_BO_CMDSTREAM */ + dma_addr_t paddr; + + /* normally (resv == &_resv) except for imported bo's */ + struct reservation_object *resv; + struct reservation_object _resv; + + struct drm_mm_node *gpu_vram_node; + + /* for buffer manipulation during submit */ + u32 offset; +}; +#define to_etnaviv_bo(x) container_of(x, struct etnaviv_gem_object, base) + +static inline bool is_active(struct etnaviv_gem_object *etnaviv_obj) +{ + return etnaviv_obj->gpu != NULL; +} + +#define MAX_CMDS 4 + +/* Created per submit-ioctl, to track bo's and cmdstream bufs, etc, + * associated with the cmdstream submission for synchronization (and + * make it easier to unwind when things go wrong, etc). This only + * lasts for the duration of the submit-ioctl. + */ +struct etnaviv_gem_submit { + struct drm_device *dev; + struct etnaviv_gpu *gpu; + struct list_head bo_list; + struct ww_acquire_ctx ticket; + uint32_t fence; + bool valid; + unsigned int nr_cmds; + unsigned int nr_bos; + struct { + uint32_t type; + uint32_t size; /* in dwords */ + struct etnaviv_gem_object *obj; + } cmd[MAX_CMDS]; + struct { + uint32_t flags; + struct etnaviv_gem_object *obj; + uint32_t iova; + } bos[0]; +}; + +#endif /* __ETNAVIV_GEM_H__ */ diff --git a/drivers/staging/etnaviv/etnaviv_gem_prime.c b/drivers/staging/etnaviv/etnaviv_gem_prime.c new file mode 100644 index 000000000000..78dd843a8e97 --- /dev/null +++ b/drivers/staging/etnaviv/etnaviv_gem_prime.c @@ -0,0 +1,56 @@ +/* + * Copyright (C) 2013 Red Hat + * Author: Rob Clark robdclark@gmail.com + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. 
+ * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see http://www.gnu.org/licenses/. + */ + +#include "etnaviv_drv.h" +#include "etnaviv_gem.h" + + +struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj) +{ + struct etnaviv_gem_object *etnaviv_obj= to_etnaviv_bo(obj); + BUG_ON(!etnaviv_obj->sgt); /* should have already pinned! */ + return etnaviv_obj->sgt; +} + +void *msm_gem_prime_vmap(struct drm_gem_object *obj) +{ + return msm_gem_vaddr(obj); +} + +void msm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) +{ + /* TODO msm_gem_vunmap() */ +} + +struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, + size_t size, struct sg_table *sg) +{ + return msm_gem_import(dev, size, sg); +} + +int msm_gem_prime_pin(struct drm_gem_object *obj) +{ + if (!obj->import_attach) + etnaviv_gem_get_pages(obj); + return 0; +} + +void msm_gem_prime_unpin(struct drm_gem_object *obj) +{ + if (!obj->import_attach) + msm_gem_put_pages(obj); +} diff --git a/drivers/staging/etnaviv/etnaviv_gem_submit.c b/drivers/staging/etnaviv/etnaviv_gem_submit.c new file mode 100644 index 000000000000..dd87fdfe7ab5 --- /dev/null +++ b/drivers/staging/etnaviv/etnaviv_gem_submit.c @@ -0,0 +1,407 @@ +/* + * Copyright (C) 2013 Red Hat + * Author: Rob Clark robdclark@gmail.com + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. 
+ * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see http://www.gnu.org/licenses/. + */ + +#include "etnaviv_drv.h" +#include "etnaviv_gpu.h" +#include "etnaviv_gem.h" + +/* + * Cmdstream submission: + */ + +#define BO_INVALID_FLAGS ~(ETNA_SUBMIT_BO_READ | ETNA_SUBMIT_BO_WRITE) +/* make sure these don't conflict w/ MSM_SUBMIT_BO_x */ +#define BO_VALID 0x8000 +#define BO_LOCKED 0x4000 +#define BO_PINNED 0x2000 + +static inline void __user *to_user_ptr(u64 address) +{ + return (void __user *)(uintptr_t)address; +} + +static struct etnaviv_gem_submit *submit_create(struct drm_device *dev, + struct etnaviv_gpu *gpu, int nr) +{ + struct etnaviv_gem_submit *submit; + int sz = sizeof(*submit) + (nr * sizeof(submit->bos[0])); + + submit = kmalloc(sz, GFP_TEMPORARY | __GFP_NOWARN | __GFP_NORETRY); + if (submit) { + submit->dev = dev; + submit->gpu = gpu; + + /* initially, until copy_from_user() and bo lookup succeeds: */ + submit->nr_bos = 0; + submit->nr_cmds = 0; + + INIT_LIST_HEAD(&submit->bo_list); + ww_acquire_init(&submit->ticket, &reservation_ww_class); + } + + return submit; +} + +static int submit_lookup_objects(struct etnaviv_gem_submit *submit, + struct drm_etnaviv_gem_submit *args, struct drm_file *file) +{ + unsigned i; + int ret = 0; + + spin_lock(&file->table_lock); + + for (i = 0; i < args->nr_bos; i++) { + struct drm_etnaviv_gem_submit_bo submit_bo; + struct drm_gem_object *obj; + struct etnaviv_gem_object *etnaviv_obj; + void __user *userptr = + to_user_ptr(args->bos + (i * sizeof(submit_bo))); + + ret = copy_from_user(&submit_bo, userptr, sizeof(submit_bo)); + if (ret) { + ret = -EFAULT; + goto out_unlock; + } + + if (submit_bo.flags & 
BO_INVALID_FLAGS) { + DRM_ERROR("invalid flags: %x\n", submit_bo.flags); + ret = -EINVAL; + goto out_unlock; + } + + submit->bos[i].flags = submit_bo.flags; + /* in validate_objects() we figure out if this is true: */ + submit->bos[i].iova = submit_bo.presumed; + + /* normally use drm_gem_object_lookup(), but for bulk lookup + * all under single table_lock just hit object_idr directly: + */ + obj = idr_find(&file->object_idr, submit_bo.handle); + if (!obj) { + DRM_ERROR("invalid handle %u at index %u\n", submit_bo.handle, i); + ret = -EINVAL; + goto out_unlock; + } + + etnaviv_obj = to_etnaviv_bo(obj); + + if (!list_empty(&etnaviv_obj->submit_entry)) { + DRM_ERROR("handle %u at index %u already on submit list\n", + submit_bo.handle, i); + ret = -EINVAL; + goto out_unlock; + } + + drm_gem_object_reference(obj); + + submit->bos[i].obj = etnaviv_obj; + + list_add_tail(&etnaviv_obj->submit_entry, &submit->bo_list); + } + +out_unlock: + submit->nr_bos = i; + spin_unlock(&file->table_lock); + + return ret; +} + +static void submit_unlock_unpin_bo(struct etnaviv_gem_submit *submit, int i) +{ + struct etnaviv_gem_object *etnaviv_obj = submit->bos[i].obj; + + if (submit->bos[i].flags & BO_PINNED) + etnaviv_gem_put_iova(&etnaviv_obj->base); + + if (submit->bos[i].flags & BO_LOCKED) + ww_mutex_unlock(&etnaviv_obj->resv->lock); + + if (!(submit->bos[i].flags & BO_VALID)) + submit->bos[i].iova = 0; + + submit->bos[i].flags &= ~(BO_LOCKED | BO_PINNED); +} + +/* This is where we make sure all the bo's are reserved and pin'd: */ +static int submit_validate_objects(struct etnaviv_gem_submit *submit) +{ + int contended, slow_locked = -1, i, ret = 0; + +retry: + submit->valid = true; + + for (i = 0; i < submit->nr_bos; i++) { + struct etnaviv_gem_object *etnaviv_obj = submit->bos[i].obj; + uint32_t iova; + + if (slow_locked == i) + slow_locked = -1; + + contended = i; + + if (!(submit->bos[i].flags & BO_LOCKED)) { + ret = ww_mutex_lock_interruptible(&etnaviv_obj->resv->lock, + 
&submit->ticket); + if (ret) + goto fail; + submit->bos[i].flags |= BO_LOCKED; + } + + + /* if locking succeeded, pin bo: */ + ret = etnaviv_gem_get_iova_locked(submit->gpu, &etnaviv_obj->base, &iova); + + /* this would break the logic in the fail path.. there is no + * reason for this to happen, but just to be on the safe side + * let's notice if this starts happening in the future: + */ + WARN_ON(ret == -EDEADLK); + + if (ret) + goto fail; + + submit->bos[i].flags |= BO_PINNED; + + if (iova == submit->bos[i].iova) { + submit->bos[i].flags |= BO_VALID; + } else { + submit->bos[i].iova = iova; + submit->bos[i].flags &= ~BO_VALID; + submit->valid = false; + } + } + + ww_acquire_done(&submit->ticket); + + return 0; + +fail: + for (; i >= 0; i--) + submit_unlock_unpin_bo(submit, i); + + if (slow_locked > 0) + submit_unlock_unpin_bo(submit, slow_locked); + + if (ret == -EDEADLK) { + struct etnaviv_gem_object *etnaviv_obj = submit->bos[contended].obj; + /* we lost out in a seqno race, lock and retry.. 
*/ + ret = ww_mutex_lock_slow_interruptible(&etnaviv_obj->resv->lock, + &submit->ticket); + if (!ret) { + submit->bos[contended].flags |= BO_LOCKED; + slow_locked = contended; + goto retry; + } + } + + return ret; +} + +static int submit_bo(struct etnaviv_gem_submit *submit, uint32_t idx, + struct etnaviv_gem_object **obj, uint32_t *iova, bool *valid) +{ + if (idx >= submit->nr_bos) { + DRM_ERROR("invalid buffer index: %u (out of %u)\n", + idx, submit->nr_bos); + return -EINVAL; + } + + if (obj) + *obj = submit->bos[idx].obj; + if (iova) + *iova = submit->bos[idx].iova; + if (valid) + *valid = !!(submit->bos[idx].flags & BO_VALID); + + return 0; +} + +/* process the reloc's and patch up the cmdstream as needed: */ +static int submit_reloc(struct etnaviv_gem_submit *submit, struct etnaviv_gem_object *obj, + uint32_t offset, uint32_t nr_relocs, uint64_t relocs) +{ + uint32_t i, last_offset = 0; + uint32_t *ptr = obj->vaddr; + int ret; + + if (offset % 4) { + DRM_ERROR("non-aligned cmdstream buffer: %u\n", offset); + return -EINVAL; + } + + for (i = 0; i < nr_relocs; i++) { + struct drm_etnaviv_gem_submit_reloc submit_reloc; + void __user *userptr = + to_user_ptr(relocs + (i * sizeof(submit_reloc))); + uint32_t iova, off; + bool valid; + + ret = copy_from_user(&submit_reloc, userptr, sizeof(submit_reloc)); + if (ret) + return -EFAULT; + + if (submit_reloc.submit_offset % 4) { + DRM_ERROR("non-aligned reloc offset: %u\n", + submit_reloc.submit_offset); + return -EINVAL; + } + + /* offset in dwords: */ + off = submit_reloc.submit_offset / 4; + + if ((off >= (obj->base.size / 4)) || + (off < last_offset)) { + DRM_ERROR("invalid offset %u at reloc %u\n", off, i); + return -EINVAL; + } + + ret = submit_bo(submit, submit_reloc.reloc_idx, NULL, &iova, &valid); + if (ret) + return ret; + + if (valid) + continue; + + iova += submit_reloc.reloc_offset; + + if (submit_reloc.shift < 0) + iova >>= -submit_reloc.shift; + else + iova <<= submit_reloc.shift; + + ptr[off] = iova | 
submit_reloc.or; + + last_offset = off; + } + + return 0; +} + +static void submit_cleanup(struct etnaviv_gem_submit *submit, bool fail) +{ + unsigned i; + + for (i = 0; i < submit->nr_bos; i++) { + struct etnaviv_gem_object *etnaviv_obj = submit->bos[i].obj; + submit_unlock_unpin_bo(submit, i); + list_del_init(&etnaviv_obj->submit_entry); + drm_gem_object_unreference(&etnaviv_obj->base); + } + + ww_acquire_fini(&submit->ticket); + kfree(submit); +} + +int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, + struct drm_file *file) +{ + struct etnaviv_drm_private *priv = dev->dev_private; + struct drm_etnaviv_gem_submit *args = data; + struct etnaviv_file_private *ctx = file->driver_priv; + struct etnaviv_gem_submit *submit; + struct etnaviv_gpu *gpu; + unsigned i; + int ret; + + if (args->pipe >= ETNA_MAX_PIPES) + return -EINVAL; + + gpu = priv->gpu[args->pipe]; + if (!gpu) + return -ENXIO; + + if (args->nr_cmds > MAX_CMDS) + return -EINVAL; + + mutex_lock(&dev->struct_mutex); + + submit = submit_create(dev, gpu, args->nr_bos); + if (!submit) { + ret = -ENOMEM; + goto out; + } + + ret = submit_lookup_objects(submit, args, file); + if (ret) + goto out; + + ret = submit_validate_objects(submit); + if (ret) + goto out; + + for (i = 0; i < args->nr_cmds; i++) { + struct drm_etnaviv_gem_submit_cmd submit_cmd; + void __user *userptr = + to_user_ptr(args->cmds + (i * sizeof(submit_cmd))); + struct etnaviv_gem_object *etnaviv_obj; + + ret = copy_from_user(&submit_cmd, userptr, sizeof(submit_cmd)); + if (ret) { + ret = -EFAULT; + goto out; + } + + ret = submit_bo(submit, submit_cmd.submit_idx, + &etnaviv_obj, NULL, NULL); + if (ret) + goto out; + + if (!(etnaviv_obj->flags & ETNA_BO_CMDSTREAM)) { + DRM_ERROR("cmdstream bo does not have ETNA_BO_CMDSTREAM flag set\n"); + ret = -EINVAL; + goto out; + } + + if (submit_cmd.size % 4) { + DRM_ERROR("non-aligned cmdstream buffer size: %u\n", + submit_cmd.size); + ret = -EINVAL; + goto out; + } + + if ((submit_cmd.size + 
submit_cmd.submit_offset) >= + etnaviv_obj->base.size) { + DRM_ERROR("invalid cmdstream size: %u\n", submit_cmd.size); + ret = -EINVAL; + goto out; + } + + submit->cmd[i].type = submit_cmd.type; + submit->cmd[i].size = submit_cmd.size / 4; + submit->cmd[i].obj = etnaviv_obj; + + if (submit->valid) + continue; + + ret = submit_reloc(submit, etnaviv_obj, submit_cmd.submit_offset, + submit_cmd.nr_relocs, submit_cmd.relocs); + if (ret) + goto out; + } + + submit->nr_cmds = i; + + ret = etnaviv_gpu_submit(gpu, submit, ctx); + + args->fence = submit->fence; + +out: + if (submit) + submit_cleanup(submit, !!ret); + mutex_unlock(&dev->struct_mutex); + return ret; +} diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c new file mode 100644 index 000000000000..d2d0556a9bad --- /dev/null +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -0,0 +1,984 @@ +/* + * Copyright (C) 2013 Red Hat + * Author: Rob Clark robdclark@gmail.com + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see http://www.gnu.org/licenses/. 
+ */ + +#include <linux/component.h> +#include <linux/of_device.h> +#include "etnaviv_gpu.h" +#include "etnaviv_gem.h" +#include "etnaviv_mmu.h" +#include "etnaviv_iommu.h" +#include "etnaviv_iommu_v2.h" +#include "common.xml.h" +#include "state.xml.h" +#include "state_hi.xml.h" +#include "cmdstream.xml.h" + + +/* + * Driver functions: + */ + +int etnaviv_gpu_get_param(struct etnaviv_gpu *gpu, uint32_t param, uint64_t *value) +{ + switch (param) { + case ETNAVIV_PARAM_GPU_MODEL: + *value = gpu->identity.model; + break; + + case ETNAVIV_PARAM_GPU_REVISION: + *value = gpu->identity.revision; + break; + + case ETNAVIV_PARAM_GPU_FEATURES_0: + *value = gpu->identity.features; + break; + + case ETNAVIV_PARAM_GPU_FEATURES_1: + *value = gpu->identity.minor_features0; + break; + + case ETNAVIV_PARAM_GPU_FEATURES_2: + *value = gpu->identity.minor_features1; + break; + + case ETNAVIV_PARAM_GPU_FEATURES_3: + *value = gpu->identity.minor_features2; + break; + + case ETNAVIV_PARAM_GPU_FEATURES_4: + *value = gpu->identity.minor_features3; + break; + + case ETNAVIV_PARAM_GPU_STREAM_COUNT: + *value = gpu->identity.stream_count; + break; + + case ETNAVIV_PARAM_GPU_REGISTER_MAX: + *value = gpu->identity.register_max; + break; + + case ETNAVIV_PARAM_GPU_THREAD_COUNT: + *value = gpu->identity.thread_count; + break; + + case ETNAVIV_PARAM_GPU_VERTEX_CACHE_SIZE: + *value = gpu->identity.vertex_cache_size; + break; + + case ETNAVIV_PARAM_GPU_SHADER_CORE_COUNT: + *value = gpu->identity.shader_core_count; + break; + + case ETNAVIV_PARAM_GPU_PIXEL_PIPES: + *value = gpu->identity.pixel_pipes; + break; + + case ETNAVIV_PARAM_GPU_VERTEX_OUTPUT_BUFFER_SIZE: + *value = gpu->identity.vertex_output_buffer_size; + break; + + case ETNAVIV_PARAM_GPU_BUFFER_SIZE: + *value = gpu->identity.buffer_size; + break; + + case ETNAVIV_PARAM_GPU_INSTRUCTION_COUNT: + *value = gpu->identity.instruction_count; + break; + + case ETNAVIV_PARAM_GPU_NUM_CONSTANTS: + *value = gpu->identity.num_constants; + break; + + 
default: + DBG("%s: invalid param: %u", gpu->name, param); + return -EINVAL; + } + + return 0; +} + +static void etnaviv_hw_specs(struct etnaviv_gpu *gpu) +{ + if (gpu->identity.minor_features0 & chipMinorFeatures0_MORE_MINOR_FEATURES) { + u32 specs[2]; + + specs[0] = gpu_read(gpu, VIVS_HI_CHIP_SPECS); + specs[1] = gpu_read(gpu, VIVS_HI_CHIP_SPECS_2); + + gpu->identity.stream_count = (specs[0] & VIVS_HI_CHIP_SPECS_STREAM_COUNT__MASK) + >> VIVS_HI_CHIP_SPECS_STREAM_COUNT__SHIFT; + gpu->identity.register_max = (specs[0] & VIVS_HI_CHIP_SPECS_REGISTER_MAX__MASK) + >> VIVS_HI_CHIP_SPECS_REGISTER_MAX__SHIFT; + gpu->identity.thread_count = (specs[0] & VIVS_HI_CHIP_SPECS_THREAD_COUNT__MASK) + >> VIVS_HI_CHIP_SPECS_THREAD_COUNT__SHIFT; + gpu->identity.vertex_cache_size = (specs[0] & VIVS_HI_CHIP_SPECS_VERTEX_CACHE_SIZE__MASK) + >> VIVS_HI_CHIP_SPECS_VERTEX_CACHE_SIZE__SHIFT; + gpu->identity.shader_core_count = (specs[0] & VIVS_HI_CHIP_SPECS_SHADER_CORE_COUNT__MASK) + >> VIVS_HI_CHIP_SPECS_SHADER_CORE_COUNT__SHIFT; + gpu->identity.pixel_pipes = (specs[0] & VIVS_HI_CHIP_SPECS_PIXEL_PIPES__MASK) + >> VIVS_HI_CHIP_SPECS_PIXEL_PIPES__SHIFT; + gpu->identity.vertex_output_buffer_size = (specs[0] & VIVS_HI_CHIP_SPECS_VERTEX_OUTPUT_BUFFER_SIZE__MASK) + >> VIVS_HI_CHIP_SPECS_VERTEX_OUTPUT_BUFFER_SIZE__SHIFT; + + gpu->identity.buffer_size = (specs[1] & VIVS_HI_CHIP_SPECS_2_BUFFER_SIZE__MASK) + >> VIVS_HI_CHIP_SPECS_2_BUFFER_SIZE__SHIFT; + gpu->identity.instruction_count = (specs[1] & VIVS_HI_CHIP_SPECS_2_INSTRUCTION_COUNT__MASK) + >> VIVS_HI_CHIP_SPECS_2_INSTRUCTION_COUNT__SHIFT; + gpu->identity.num_constants = (specs[1] & VIVS_HI_CHIP_SPECS_2_NUM_CONSTANTS__MASK) + >> VIVS_HI_CHIP_SPECS_2_NUM_CONSTANTS__SHIFT; + + gpu->identity.register_max = 1 << gpu->identity.register_max; + gpu->identity.thread_count = 1 << gpu->identity.thread_count; + gpu->identity.vertex_output_buffer_size = 1 << gpu->identity.vertex_output_buffer_size; + } else { + dev_err(gpu->dev->dev, "TODO: determine GPU 
specs based on model\n"); + } + + switch (gpu->identity.instruction_count) { + case 0: + gpu->identity.instruction_count = 256; + break; + + case 1: + gpu->identity.instruction_count = 1024; + break; + + case 2: + gpu->identity.instruction_count = 2048; + break; + + default: + gpu->identity.instruction_count = 256; + break; + } + + dev_info(gpu->dev->dev, "stream_count: %x\n", gpu->identity.stream_count); + dev_info(gpu->dev->dev, "register_max: %x\n", gpu->identity.register_max); + dev_info(gpu->dev->dev, "thread_count: %x\n", gpu->identity.thread_count); + dev_info(gpu->dev->dev, "vertex_cache_size: %x\n", gpu->identity.vertex_cache_size); + dev_info(gpu->dev->dev, "shader_core_count: %x\n", gpu->identity.shader_core_count); + dev_info(gpu->dev->dev, "pixel_pipes: %x\n", gpu->identity.pixel_pipes); + dev_info(gpu->dev->dev, "vertex_output_buffer_size: %x\n", gpu->identity.vertex_output_buffer_size); + dev_info(gpu->dev->dev, "buffer_size: %x\n", gpu->identity.buffer_size); + dev_info(gpu->dev->dev, "instruction_count: %x\n", gpu->identity.instruction_count); + dev_info(gpu->dev->dev, "num_constants: %x\n", gpu->identity.num_constants); +} + +static void etnaviv_hw_identify(struct etnaviv_gpu *gpu) +{ + u32 chipIdentity; + + chipIdentity = gpu_read(gpu, VIVS_HI_CHIP_IDENTITY); + + /* Special case for older graphic cores. */ + if (VIVS_HI_CHIP_IDENTITY_FAMILY(chipIdentity) == 0x01) { + gpu->identity.model = 0x500; /* gc500 */ + gpu->identity.revision = VIVS_HI_CHIP_IDENTITY_REVISION(chipIdentity); + } else { + + gpu->identity.model = gpu_read(gpu, VIVS_HI_CHIP_MODEL); + gpu->identity.revision = gpu_read(gpu, VIVS_HI_CHIP_REV); + + /* !!!! HACK ALERT !!!! */ + /* Because people change device IDs without letting software know + ** about it - here is the hack to make it all look the same. Only + ** for GC400 family. Next time - TELL ME!!! 
*/ + if (((gpu->identity.model & 0xFF00) == 0x0400) + && (gpu->identity.model != 0x0420)) { + gpu->identity.model = gpu->identity.model & 0x0400; + } + + /* Another special case */ + if ((gpu->identity.model == 0x300) + && (gpu->identity.revision == 0x2201)) { + u32 chipDate = gpu_read(gpu, VIVS_HI_CHIP_DATE); + u32 chipTime = gpu_read(gpu, VIVS_HI_CHIP_TIME); + + if ((chipDate == 0x20080814) && (chipTime == 0x12051100)) { + /* This IP has an ECO; put the correct revision in it. */ + gpu->identity.revision = 0x1051; + } + } + } + + dev_info(gpu->dev->dev, "model: %x\n", gpu->identity.model); + dev_info(gpu->dev->dev, "revision: %x\n", gpu->identity.revision); + + gpu->identity.features = gpu_read(gpu, VIVS_HI_CHIP_FEATURE); + + /* Disable fast clear on GC700. */ + if (gpu->identity.model == 0x700) + gpu->identity.features &= ~BIT(0); + + if (((gpu->identity.model == 0x500) && (gpu->identity.revision < 2)) + || ((gpu->identity.model == 0x300) && (gpu->identity.revision < 0x2000))) { + + /* GC500 rev 1.x and GC300 rev < 2.0 don't have these registers. 
*/ + gpu->identity.minor_features0 = 0; + gpu->identity.minor_features1 = 0; + gpu->identity.minor_features2 = 0; + gpu->identity.minor_features3 = 0; + } else + gpu->identity.minor_features0 = gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_0); + + if (gpu->identity.minor_features0 & BIT(21)) { + gpu->identity.minor_features1 = gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_1); + gpu->identity.minor_features2 = gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_2); + gpu->identity.minor_features3 = gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_3); + } + + dev_info(gpu->dev->dev, "minor_features: %x\n", gpu->identity.minor_features0); + dev_info(gpu->dev->dev, "minor_features1: %x\n", gpu->identity.minor_features1); + dev_info(gpu->dev->dev, "minor_features2: %x\n", gpu->identity.minor_features2); + dev_info(gpu->dev->dev, "minor_features3: %x\n", gpu->identity.minor_features3); + + etnaviv_hw_specs(gpu); +} + +static void etnaviv_hw_reset(struct etnaviv_gpu *gpu) +{ + u32 control, idle; + + /* TODO + * + * - clock gating + * - pulse eater + * - what about VG? + */ + + while (true) { + control = gpu_read(gpu, VIVS_HI_CLOCK_CONTROL); + + /* isolate the GPU. */ + control |= VIVS_HI_CLOCK_CONTROL_ISOLATE_GPU; + gpu_write(gpu, VIVS_HI_CLOCK_CONTROL, control); + + /* set soft reset. */ + control |= VIVS_HI_CLOCK_CONTROL_SOFT_RESET; + gpu_write(gpu, VIVS_HI_CLOCK_CONTROL, control); + + /* wait for reset. */ + msleep(1); + + /* reset soft reset bit. */ + control &= ~VIVS_HI_CLOCK_CONTROL_SOFT_RESET; + gpu_write(gpu, VIVS_HI_CLOCK_CONTROL, control); + + /* reset GPU isolation. */ + control &= ~VIVS_HI_CLOCK_CONTROL_ISOLATE_GPU; + gpu_write(gpu, VIVS_HI_CLOCK_CONTROL, control); + + /* read idle register. */ + idle = gpu_read(gpu, VIVS_HI_IDLE_STATE); + + /* try resetting again if FE is not idle */ + if ((idle & VIVS_HI_IDLE_STATE_FE) == 0) { + dev_dbg(gpu->dev->dev, "%s: FE is not idle\n", gpu->name); + continue; + } + + /* read reset register. 
*/ + control = gpu_read(gpu, VIVS_HI_CLOCK_CONTROL); + + /* is the GPU idle? */ + if (((control & VIVS_HI_CLOCK_CONTROL_IDLE_3D) == 0) + || ((control & VIVS_HI_CLOCK_CONTROL_IDLE_2D) == 0)) { + dev_dbg(gpu->dev->dev, "%s: GPU is not idle\n", gpu->name); + continue; + } + + break; + } +} + +int etnaviv_gpu_init(struct etnaviv_gpu *gpu) +{ + int ret, i; + u32 words; /* 32 bit words */ + struct iommu_domain *iommu; + bool mmuv2; + + etnaviv_hw_identify(gpu); + etnaviv_hw_reset(gpu); + + /* set base addresses */ + gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_RA, 0x0); + gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_FE, 0x0); + gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_TX, 0x0); + gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_PEZ, 0x0); + gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_PE, 0x0); + + /* Setup IOMMU.. eventually we will (I think) do this once per context + * and have separate page tables per context. For now, to keep things + * simple and to get something working, just use a single address space: + */ + mmuv2 = gpu->identity.minor_features1 & chipMinorFeatures1_MMU_VERSION; + dev_dbg(gpu->dev->dev, "mmuv2: %d\n", mmuv2); + + if (!mmuv2) + iommu = etnaviv_iommu_domain_alloc(gpu); + else + iommu = etnaviv_iommu_v2_domain_alloc(gpu); + + if (!iommu) { + ret = -ENOMEM; + goto fail; + } + + /* TODO: we will leak memory here - fix it! 
*/ + + gpu->mmu = etnaviv_iommu_new(gpu->dev, iommu); + if (!gpu->mmu) { + ret = -ENOMEM; + goto fail; + } + etnaviv_register_mmu(gpu->dev, gpu->mmu); + + /* Create buffer: */ + gpu->buffer = etnaviv_gem_new(gpu->dev, PAGE_SIZE, ETNA_BO_CMDSTREAM); + if (IS_ERR(gpu->buffer)) { + ret = PTR_ERR(gpu->buffer); + gpu->buffer = NULL; + dev_err(gpu->dev->dev, "could not create buffer: %d\n", ret); + goto fail; + } + + /* Setup event management */ + spin_lock_init(&gpu->event_spinlock); + init_completion(&gpu->event_free); + for (i = 0; i < ARRAY_SIZE(gpu->event_used); i++) { + gpu->event_used[i] = false; + complete(&gpu->event_free); + } + + /* Start command processor */ + words = etnaviv_buffer_init(gpu); + + /* convert number of 32 bit words to number of 64 bit words */ + words = ALIGN(words, 2) / 2; + + gpu_write(gpu, VIVS_HI_INTR_ENBL, ~0U); + gpu_write(gpu, VIVS_FE_COMMAND_ADDRESS, etnaviv_gem_paddr_locked(gpu->buffer)); + gpu_write(gpu, VIVS_FE_COMMAND_CONTROL, VIVS_FE_COMMAND_CONTROL_ENABLE | VIVS_FE_COMMAND_CONTROL_PREFETCH(words)); + + return 0; + +fail: + return ret; +} + +#ifdef CONFIG_DEBUG_FS +struct dma_debug { + u32 address[2]; + u32 state[2]; +}; + +static void verify_dma(struct etnaviv_gpu *gpu, struct dma_debug *debug) +{ + u32 i; + + debug->address[0] = gpu_read(gpu, VIVS_FE_DMA_ADDRESS); + debug->state[0] = gpu_read(gpu, VIVS_FE_DMA_DEBUG_STATE); + + for (i = 0; i < 500; i++) { + debug->address[1] = gpu_read(gpu, VIVS_FE_DMA_ADDRESS); + debug->state[1] = gpu_read(gpu, VIVS_FE_DMA_DEBUG_STATE); + + if (debug->address[0] != debug->address[1]) + break; + + if (debug->state[0] != debug->state[1]) + break; + } +} + +void etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct seq_file *m) +{ + struct dma_debug debug; + u32 dma_lo = gpu_read(gpu, VIVS_FE_DMA_LOW); + u32 dma_hi = gpu_read(gpu, VIVS_FE_DMA_HIGH); + u32 axi = gpu_read(gpu, VIVS_HI_AXI_STATUS); + u32 idle = gpu_read(gpu, VIVS_HI_IDLE_STATE); + + verify_dma(gpu, &debug); + + seq_printf(m, "\taxi: 
0x%08x\n", axi); + seq_printf(m, "\tidle: 0x%08x\n", idle); + if ((idle & VIVS_HI_IDLE_STATE_FE) == 0) + seq_puts(m, "\t FE is not idle\n"); + if ((idle & VIVS_HI_IDLE_STATE_DE) == 0) + seq_puts(m, "\t DE is not idle\n"); + if ((idle & VIVS_HI_IDLE_STATE_PE) == 0) + seq_puts(m, "\t PE is not idle\n"); + if ((idle & VIVS_HI_IDLE_STATE_SH) == 0) + seq_puts(m, "\t SH is not idle\n"); + if ((idle & VIVS_HI_IDLE_STATE_PA) == 0) + seq_puts(m, "\t PA is not idle\n"); + if ((idle & VIVS_HI_IDLE_STATE_SE) == 0) + seq_puts(m, "\t SE is not idle\n"); + if ((idle & VIVS_HI_IDLE_STATE_RA) == 0) + seq_puts(m, "\t RA is not idle\n"); + if ((idle & VIVS_HI_IDLE_STATE_TX) == 0) + seq_puts(m, "\t TX is not idle\n"); + if ((idle & VIVS_HI_IDLE_STATE_VG) == 0) + seq_puts(m, "\t VG is not idle\n"); + if ((idle & VIVS_HI_IDLE_STATE_IM) == 0) + seq_puts(m, "\t IM is not idle\n"); + if ((idle & VIVS_HI_IDLE_STATE_FP) == 0) + seq_puts(m, "\t FP is not idle\n"); + if ((idle & VIVS_HI_IDLE_STATE_TS) == 0) + seq_puts(m, "\t TS is not idle\n"); + if (idle & VIVS_HI_IDLE_STATE_AXI_LP) + seq_puts(m, "\t AXI low power mode\n"); + + if (gpu->identity.features & chipFeatures_DEBUG_MODE) { + u32 read0 = gpu_read(gpu, VIVS_MC_DEBUG_READ0); + u32 read1 = gpu_read(gpu, VIVS_MC_DEBUG_READ1); + u32 write = gpu_read(gpu, VIVS_MC_DEBUG_WRITE); + + seq_puts(m, "\tMC\n"); + seq_printf(m, "\t read0: 0x%08x\n", read0); + seq_printf(m, "\t read1: 0x%08x\n", read1); + seq_printf(m, "\t write: 0x%08x\n", write); + } + + seq_puts(m, "\tDMA "); + + if ((debug.address[0] == debug.address[1]) && (debug.state[0] == debug.state[1])) { + seq_puts(m, "seems to be stuck\n"); + } else { + if (debug.address[0] == debug.address[1]) + seq_puts(m, "address is constant\n"); + else + seq_puts(m, "is running\n"); + } + + seq_printf(m, "\t address 0: 0x%08x\n", debug.address[0]); + seq_printf(m, "\t address 1: 0x%08x\n", debug.address[1]); + seq_printf(m, "\t state 0: 0x%08x\n", debug.state[0]); + seq_printf(m, "\t state 1: 
0x%08x\n", debug.state[1]); + seq_printf(m, "\t last fetch 64 bit word: 0x%08x-0x%08x\n", dma_hi, dma_lo); +} +#endif + +/* + * Power Management: + */ + +static int enable_pwrrail(struct etnaviv_gpu *gpu) +{ +#if 0 + struct drm_device *dev = gpu->dev; + int ret = 0; + + if (gpu->gpu_reg) { + ret = regulator_enable(gpu->gpu_reg); + if (ret) { + dev_err(dev->dev, "failed to enable 'gpu_reg': %d\n", ret); + return ret; + } + } + + if (gpu->gpu_cx) { + ret = regulator_enable(gpu->gpu_cx); + if (ret) { + dev_err(dev->dev, "failed to enable 'gpu_cx': %d\n", ret); + return ret; + } + } +#endif + return 0; +} + +static int disable_pwrrail(struct etnaviv_gpu *gpu) +{ +#if 0 + if (gpu->gpu_cx) + regulator_disable(gpu->gpu_cx); + if (gpu->gpu_reg) + regulator_disable(gpu->gpu_reg); +#endif + return 0; +} + +static int enable_clk(struct etnaviv_gpu *gpu) +{ + if (gpu->clk_core) + clk_prepare_enable(gpu->clk_core); + if (gpu->clk_shader) + clk_prepare_enable(gpu->clk_shader); + + return 0; +} + +static int disable_clk(struct etnaviv_gpu *gpu) +{ + if (gpu->clk_core) + clk_disable_unprepare(gpu->clk_core); + if (gpu->clk_shader) + clk_disable_unprepare(gpu->clk_shader); + + return 0; +} + +static int enable_axi(struct etnaviv_gpu *gpu) +{ + if (gpu->clk_bus) + clk_prepare_enable(gpu->clk_bus); + + return 0; +} + +static int disable_axi(struct etnaviv_gpu *gpu) +{ + if (gpu->clk_bus) + clk_disable_unprepare(gpu->clk_bus); + + return 0; +} + +int etnaviv_gpu_pm_resume(struct etnaviv_gpu *gpu) +{ + int ret; + + DBG("%s", gpu->name); + + ret = enable_pwrrail(gpu); + if (ret) + return ret; + + ret = enable_clk(gpu); + if (ret) + return ret; + + ret = enable_axi(gpu); + if (ret) + return ret; + + return 0; +} + +int etnaviv_gpu_pm_suspend(struct etnaviv_gpu *gpu) +{ + int ret; + + DBG("%s", gpu->name); + + ret = disable_axi(gpu); + if (ret) + return ret; + + ret = disable_clk(gpu); + if (ret) + return ret; + + ret = disable_pwrrail(gpu); + if (ret) + return ret; + + return 0; +} + +/* 
+ * Hangcheck detection for locked gpu: + */ +static void recover_worker(struct work_struct *work) +{ + struct etnaviv_gpu *gpu = container_of(work, struct etnaviv_gpu, recover_work); + struct drm_device *dev = gpu->dev; + + dev_err(dev->dev, "%s: hangcheck recover!\n", gpu->name); + + mutex_lock(&dev->struct_mutex); + /* TODO gpu->funcs->recover(gpu); */ + mutex_unlock(&dev->struct_mutex); + + etnaviv_gpu_retire(gpu); +} + +static void hangcheck_timer_reset(struct etnaviv_gpu *gpu) +{ + DBG("%s", gpu->name); + mod_timer(&gpu->hangcheck_timer, + round_jiffies_up(jiffies + DRM_MSM_HANGCHECK_JIFFIES)); +} + +static void hangcheck_handler(unsigned long data) +{ + struct etnaviv_gpu *gpu = (struct etnaviv_gpu *)data; + struct drm_device *dev = gpu->dev; + struct etnaviv_drm_private *priv = dev->dev_private; + uint32_t fence = gpu->retired_fence; + + if (fence != gpu->hangcheck_fence) { + /* some progress has been made.. ya! */ + gpu->hangcheck_fence = fence; + } else if (fence < gpu->submitted_fence) { + /* no progress and not done.. hung! 
*/ + gpu->hangcheck_fence = fence; + dev_err(dev->dev, "%s: hangcheck detected gpu lockup!\n", + gpu->name); + dev_err(dev->dev, "%s: completed fence: %u\n", + gpu->name, fence); + dev_err(dev->dev, "%s: submitted fence: %u\n", + gpu->name, gpu->submitted_fence); + queue_work(priv->wq, &gpu->recover_work); + } + + /* if still more pending work, reset the hangcheck timer: */ + if (gpu->submitted_fence > gpu->hangcheck_fence) + hangcheck_timer_reset(gpu); +} + +/* + * event management: + */ + +static unsigned int event_alloc(struct etnaviv_gpu *gpu) +{ + unsigned long ret, flags; + unsigned int i, event = ~0U; + + ret = wait_for_completion_timeout(&gpu->event_free, msecs_to_jiffies(10 * 10000)); + if (!ret) + dev_err(gpu->dev->dev, "wait_for_completion_timeout failed"); + + spin_lock_irqsave(&gpu->event_spinlock, flags); + + /* find first free event */ + for (i = 0; i < ARRAY_SIZE(gpu->event_used); i++) { + if (gpu->event_used[i] == false) { + gpu->event_used[i] = true; + event = i; + break; + } + } + + spin_unlock_irqrestore(&gpu->event_spinlock, flags); + + return event; +} + +static void event_free(struct etnaviv_gpu *gpu, unsigned int event) +{ + unsigned long flags; + + spin_lock_irqsave(&gpu->event_spinlock, flags); + + if (gpu->event_used[event] == false) { + dev_warn(gpu->dev->dev, "event %u is already marked as free", event); + spin_unlock_irqrestore(&gpu->event_spinlock, flags); + } else { + gpu->event_used[event] = false; + spin_unlock_irqrestore(&gpu->event_spinlock, flags); + + complete(&gpu->event_free); + } +} + +/* + * Cmdstream submission/retirement: + */ + +static void retire_worker(struct work_struct *work) +{ + struct etnaviv_gpu *gpu = container_of(work, struct etnaviv_gpu, retire_work); + struct drm_device *dev = gpu->dev; + uint32_t fence = gpu->retired_fence; + + etnaviv_update_fence(gpu->dev, fence); + + mutex_lock(&dev->struct_mutex); + + while (!list_empty(&gpu->active_list)) { + struct etnaviv_gem_object *obj; + + obj = 
list_first_entry(&gpu->active_list, + struct etnaviv_gem_object, mm_list); + + if ((obj->read_fence <= fence) && + (obj->write_fence <= fence)) { + /* move to inactive: */ + etnaviv_gem_move_to_inactive(&obj->base); + etnaviv_gem_put_iova(&obj->base); + drm_gem_object_unreference(&obj->base); + } else { + break; + } + } + + mutex_unlock(&dev->struct_mutex); +} + +/* call from irq handler to schedule work to retire bo's */ +void etnaviv_gpu_retire(struct etnaviv_gpu *gpu) +{ + struct etnaviv_drm_private *priv = gpu->dev->dev_private; + queue_work(priv->wq, &gpu->retire_work); +} + +/* add bo's to gpu's ring, and kick gpu: */ +int etnaviv_gpu_submit(struct etnaviv_gpu *gpu, struct etnaviv_gem_submit *submit, + struct etnaviv_file_private *ctx) +{ + struct drm_device *dev = gpu->dev; + struct etnaviv_drm_private *priv = dev->dev_private; + int ret = 0; + unsigned int event, i; + + submit->fence = ++priv->next_fence; + + gpu->submitted_fence = submit->fence; + + /* + * TODO + * + * - flush + * - data endian + * - prefetch + * + */ + + event = event_alloc(gpu); + if (unlikely(event == ~0U)) { + DRM_ERROR("no free event\n"); + ret = -EBUSY; + goto fail; + } + + gpu->event_to_fence[event] = submit->fence; + + etnaviv_buffer_queue(gpu, event, submit); + + priv->lastctx = ctx; + + for (i = 0; i < submit->nr_bos; i++) { + struct etnaviv_gem_object *etnaviv_obj = submit->bos[i].obj; + + /* can't happen yet.. 
but when we add 2d support we'll have + * to deal w/ cross-ring synchronization: + */ + WARN_ON(is_active(etnaviv_obj) && (etnaviv_obj->gpu != gpu)); + + if (!is_active(etnaviv_obj)) { + uint32_t iova; + + /* ring takes a reference to the bo and iova: */ + drm_gem_object_reference(&etnaviv_obj->base); + etnaviv_gem_get_iova_locked(gpu, &etnaviv_obj->base, &iova); + } + + if (submit->bos[i].flags & ETNA_SUBMIT_BO_READ) + etnaviv_gem_move_to_active(&etnaviv_obj->base, gpu, false, submit->fence); + + if (submit->bos[i].flags & ETNA_SUBMIT_BO_WRITE) + etnaviv_gem_move_to_active(&etnaviv_obj->base, gpu, true, submit->fence); + } + hangcheck_timer_reset(gpu); + +fail: + return ret; +} + +/* + * Init/Cleanup: + */ +static irqreturn_t irq_handler(int irq, void *data) +{ + struct etnaviv_gpu *gpu = data; + irqreturn_t ret = IRQ_NONE; + + u32 intr = gpu_read(gpu, VIVS_HI_INTR_ACKNOWLEDGE); + + if (intr != 0) { + dev_dbg(gpu->dev->dev, "intr 0x%08x\n", intr); + + if (intr & VIVS_HI_INTR_ACKNOWLEDGE_AXI_BUS_ERROR) + dev_err(gpu->dev->dev, "AXI bus error\n"); + else { + uint8_t event = __fls(intr); + dev_dbg(gpu->dev->dev, "event %u\n", event); + gpu->retired_fence = gpu->event_to_fence[event]; + event_free(gpu, event); + etnaviv_gpu_retire(gpu); + } + + ret = IRQ_HANDLED; + } + + return ret; +} + +static int etnaviv_gpu_bind(struct device *dev, struct device *master, + void *data) +{ + struct drm_device *drm = data; + struct etnaviv_drm_private *priv = drm->dev_private; + struct etnaviv_gpu *gpu = dev_get_drvdata(dev); + int idx = gpu->pipe; + + dev_info(dev, "pre gpu[idx]: 0x%08x\n", (u32)priv->gpu[idx]); + + if (priv->gpu[idx] == 0) { + dev_info(dev, "adding core @idx %d\n", idx); + priv->gpu[idx] = gpu; + } else { + dev_err(dev, "failed to add core @idx %d\n", idx); + goto fail; + } + + dev_info(dev, "post gpu[idx]: 0x%08x\n", (u32)priv->gpu[idx]); + + gpu->dev = drm; + + INIT_LIST_HEAD(&gpu->active_list); + INIT_WORK(&gpu->retire_work, retire_worker); + 
INIT_WORK(&gpu->recover_work, recover_worker); + + setup_timer(&gpu->hangcheck_timer, hangcheck_handler, + (unsigned long)gpu); + return 0; +fail: + return -1; +} + +static void etnaviv_gpu_unbind(struct device *dev, struct device *master, + void *data) +{ + struct etnaviv_gpu *gpu = dev_get_drvdata(dev); + + DBG("%s", gpu->name); + + WARN_ON(!list_empty(&gpu->active_list)); + + if (gpu->buffer) + drm_gem_object_unreference(gpu->buffer); + + if (gpu->mmu) + etnaviv_iommu_destroy(gpu->mmu); + + drm_mm_takedown(&gpu->mm); +} + +static const struct component_ops gpu_ops = { + .bind = etnaviv_gpu_bind, + .unbind = etnaviv_gpu_unbind, +}; + +static const struct of_device_id etnaviv_gpu_match[] = { + { + .compatible = "vivante,vivante-gpu-2d", + .data = (void *)ETNA_PIPE_2D + }, + { + .compatible = "vivante,vivante-gpu-3d", + .data = (void *)ETNA_PIPE_3D + }, + { + .compatible = "vivante,vivante-gpu-vg", + .data = (void *)ETNA_PIPE_VG + }, + { } +}; + +static int etnaviv_gpu_platform_probe(struct platform_device *pdev) +{ + const struct of_device_id *match; + struct device *dev = &pdev->dev; + struct etnaviv_gpu *gpu; + int err = 0; + + gpu = devm_kzalloc(dev, sizeof(*gpu), GFP_KERNEL); + if (!gpu) + return -ENOMEM; + + match = of_match_device(etnaviv_gpu_match, &pdev->dev); + if (!match) + return -EINVAL; + + gpu->name = pdev->name; + + /* Map registers: */ + gpu->mmio = etnaviv_ioremap(pdev, NULL, gpu->name); + if (IS_ERR(gpu->mmio)) + return PTR_ERR(gpu->mmio); + + /* Get Interrupt: */ + gpu->irq = platform_get_irq(pdev, 0); + if (gpu->irq < 0) { + err = gpu->irq; + dev_err(dev, "failed to get irq: %d\n", err); + goto fail; + } + + err = devm_request_irq(&pdev->dev, gpu->irq, irq_handler, + IRQF_TRIGGER_HIGH, gpu->name, gpu); + if (err) { + dev_err(dev, "failed to request IRQ%u: %d\n", gpu->irq, err); + goto fail; + } + + /* Get Clocks: */ + gpu->clk_bus = devm_clk_get(&pdev->dev, "bus"); + DBG("clk_bus: %p", gpu->clk_bus); + if (IS_ERR(gpu->clk_bus)) + gpu->clk_bus = 
NULL; + + gpu->clk_core = devm_clk_get(&pdev->dev, "core"); + DBG("clk_core: %p", gpu->clk_core); + if (IS_ERR(gpu->clk_core)) + gpu->clk_core = NULL; + + gpu->clk_shader = devm_clk_get(&pdev->dev, "shader"); + DBG("clk_shader: %p", gpu->clk_shader); + if (IS_ERR(gpu->clk_shader)) + gpu->clk_shader = NULL; + + gpu->pipe = (int)match->data; + + /* TODO: figure out max mapped size */ + drm_mm_init(&gpu->mm, 0x80000000, SZ_1G); + + dev_set_drvdata(dev, gpu); + + err = component_add(&pdev->dev, &gpu_ops); + if (err < 0) { + dev_err(&pdev->dev, "failed to register component: %d\n", err); + goto fail; + } + + return 0; + +fail: + return err; +} + +static int etnaviv_gpu_platform_remove(struct platform_device *pdev) +{ + component_del(&pdev->dev, &gpu_ops); + return 0; +} + +struct platform_driver etnaviv_gpu_driver = { + .driver = { + .name = "etnaviv-gpu", + .owner = THIS_MODULE, + .of_match_table = etnaviv_gpu_match, + }, + .probe = etnaviv_gpu_platform_probe, + .remove = etnaviv_gpu_platform_remove, +}; diff --git a/drivers/staging/etnaviv/etnaviv_gpu.h b/drivers/staging/etnaviv/etnaviv_gpu.h new file mode 100644 index 000000000000..707096b5fe98 --- /dev/null +++ b/drivers/staging/etnaviv/etnaviv_gpu.h @@ -0,0 +1,152 @@ +/* + * Copyright (C) 2013 Red Hat + * Author: Rob Clark robdclark@gmail.com + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see http://www.gnu.org/licenses/. 
+ */ + +#ifndef __ETNAVIV_GPU_H__ +#define __ETNAVIV_GPU_H__ + +#include <linux/clk.h> +#include <linux/regulator/consumer.h> + +#include "etnaviv_drv.h" + +struct etnaviv_gem_submit; + +struct etnaviv_chip_identity { + /* Chip model. */ + uint32_t model; + + /* Revision value.*/ + uint32_t revision; + + /* Supported feature fields. */ + uint32_t features; + + /* Supported minor feature fields. */ + uint32_t minor_features0; + + /* Supported minor feature 1 fields. */ + uint32_t minor_features1; + + /* Supported minor feature 2 fields. */ + uint32_t minor_features2; + + /* Supported minor feature 3 fields. */ + uint32_t minor_features3; + + /* Number of streams supported. */ + uint32_t stream_count; + + /* Total number of temporary registers per thread. */ + uint32_t register_max; + + /* Maximum number of threads. */ + uint32_t thread_count; + + /* Number of shader cores. */ + uint32_t shader_core_count; + + /* Size of the vertex cache. */ + uint32_t vertex_cache_size; + + /* Number of entries in the vertex output buffer. */ + uint32_t vertex_output_buffer_size; + + /* Number of pixel pipes. */ + uint32_t pixel_pipes; + + /* Number of instructions. */ + uint32_t instruction_count; + + /* Number of constants. 
*/ + uint32_t num_constants; + + /* Buffer size */ + uint32_t buffer_size; +}; + +struct etnaviv_gpu { + const char *name; + struct drm_device *dev; + struct etnaviv_chip_identity identity; + int pipe; + + /* 'ring'-buffer: */ + struct drm_gem_object *buffer; + + /* event management: */ + bool event_used[30]; + uint32_t event_to_fence[30]; + struct completion event_free; + struct spinlock event_spinlock; + + /* list of GEM active objects: */ + struct list_head active_list; + + uint32_t submitted_fence; + uint32_t retired_fence; + + /* worker for handling active-list retiring: */ + struct work_struct retire_work; + + void __iomem *mmio; + int irq; + + struct etnaviv_iommu *mmu; + + /* memory manager for GPU address area */ + struct drm_mm mm; + + /* Power Control: */ + struct clk *clk_bus; + struct clk *clk_core; + struct clk *clk_shader; + + /* Hang Detection: */ +#define DRM_MSM_HANGCHECK_PERIOD 500 /* in ms */ +#define DRM_MSM_HANGCHECK_JIFFIES msecs_to_jiffies(DRM_MSM_HANGCHECK_PERIOD) + struct timer_list hangcheck_timer; + uint32_t hangcheck_fence; + struct work_struct recover_work; +}; + +static inline void gpu_write(struct etnaviv_gpu *gpu, u32 reg, u32 data) +{ + etnaviv_writel(data, gpu->mmio + reg); +} + +static inline u32 gpu_read(struct etnaviv_gpu *gpu, u32 reg) +{ + return etnaviv_readl(gpu->mmio + reg); +} + +int etnaviv_gpu_get_param(struct etnaviv_gpu *gpu, uint32_t param, uint64_t *value); + +int etnaviv_gpu_init(struct etnaviv_gpu *gpu); +int etnaviv_gpu_pm_suspend(struct etnaviv_gpu *gpu); +int etnaviv_gpu_pm_resume(struct etnaviv_gpu *gpu); + +#ifdef CONFIG_DEBUG_FS +void etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct seq_file *m); +#endif + +void etnaviv_gpu_retire(struct etnaviv_gpu *gpu); +int etnaviv_gpu_submit(struct etnaviv_gpu *gpu, struct etnaviv_gem_submit *submit, + struct etnaviv_file_private *ctx); + +extern struct platform_driver etnaviv_gpu_driver; + +#endif /* __ETNAVIV_GPU_H__ */ diff --git 
a/drivers/staging/etnaviv/etnaviv_iommu.c b/drivers/staging/etnaviv/etnaviv_iommu.c new file mode 100644 index 000000000000..d0811fb13363 --- /dev/null +++ b/drivers/staging/etnaviv/etnaviv_iommu.c @@ -0,0 +1,185 @@ +/* + * Copyright (C) 2014 Christian Gmeiner christian.gmeiner@gmail.com + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see http://www.gnu.org/licenses/. + */ + +#include <linux/iommu.h> +#include <linux/platform_device.h> +#include <linux/sizes.h> +#include <linux/slab.h> +#include <linux/dma-mapping.h> +#include <linux/bitops.h> + +#include "etnaviv_gpu.h" +#include "state_hi.xml.h" + +#define PT_SIZE SZ_256K +#define PT_ENTRIES (PT_SIZE / sizeof(uint32_t)) + +#define GPU_MEM_START 0x80000000 + +struct etnaviv_iommu_domain_pgtable { + uint32_t *pgtable; + dma_addr_t paddr; +}; + +struct etnaviv_iommu_domain { + struct etnaviv_iommu_domain_pgtable pgtable; + spinlock_t map_lock; +}; + +static int pgtable_alloc(struct etnaviv_iommu_domain_pgtable *pgtable, + size_t size) +{ + pgtable->pgtable = dma_alloc_coherent(NULL, size, &pgtable->paddr, GFP_KERNEL); + if (!pgtable->pgtable) + return -ENOMEM; + + return 0; +} + +static void pgtable_free(struct etnaviv_iommu_domain_pgtable *pgtable, + size_t size) +{ + dma_free_coherent(NULL, size, pgtable->pgtable, pgtable->paddr); +} + +static uint32_t pgtable_read(struct etnaviv_iommu_domain_pgtable *pgtable, + unsigned long iova) +{ + /* calculate index into page table */ + unsigned int index = (iova -
GPU_MEM_START) / SZ_4K; + phys_addr_t paddr; + + paddr = pgtable->pgtable[index]; + + return paddr; +} + +static void pgtable_write(struct etnaviv_iommu_domain_pgtable *pgtable, + unsigned long iova, phys_addr_t paddr) +{ + /* calculate index into page table */ + unsigned int index = (iova - GPU_MEM_START) / SZ_4K; + + pgtable->pgtable[index] = paddr; +} + +static int etnaviv_iommu_domain_init(struct iommu_domain *domain) +{ + struct etnaviv_iommu_domain *etnaviv_domain; + int ret; + + etnaviv_domain = kmalloc(sizeof(*etnaviv_domain), GFP_KERNEL); + if (!etnaviv_domain) + return -ENOMEM; + + ret = pgtable_alloc(&etnaviv_domain->pgtable, PT_SIZE); + if (ret < 0) { + kfree(etnaviv_domain); + return ret; + } + + spin_lock_init(&etnaviv_domain->map_lock); + domain->priv = etnaviv_domain; + return 0; +} + +static void etnaviv_iommu_domain_destroy(struct iommu_domain *domain) +{ + struct etnaviv_iommu_domain *etnaviv_domain = domain->priv; + + pgtable_free(&etnaviv_domain->pgtable, PT_SIZE); + + kfree(etnaviv_domain); + domain->priv = NULL; +} + +static int etnaviv_iommu_map(struct iommu_domain *domain, unsigned long iova, + phys_addr_t paddr, size_t size, int prot) +{ + struct etnaviv_iommu_domain *etnaviv_domain = domain->priv; + + if (size != SZ_4K) + return -EINVAL; + + spin_lock(&etnaviv_domain->map_lock); + pgtable_write(&etnaviv_domain->pgtable, iova, paddr); + spin_unlock(&etnaviv_domain->map_lock); + + return 0; +} + +static size_t etnaviv_iommu_unmap(struct iommu_domain *domain, unsigned long iova, + size_t size) +{ + struct etnaviv_iommu_domain *etnaviv_domain = domain->priv; + + if (size != SZ_4K) + return -EINVAL; + + spin_lock(&etnaviv_domain->map_lock); + pgtable_write(&etnaviv_domain->pgtable, iova, ~0); + spin_unlock(&etnaviv_domain->map_lock); + + return 0; +} + +phys_addr_t etnaviv_iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova) +{ + struct etnaviv_iommu_domain *etnaviv_domain = domain->priv; + + return
pgtable_read(&etnaviv_domain->pgtable, iova); +} + +static struct iommu_ops etnaviv_iommu_ops = { + .domain_init = etnaviv_iommu_domain_init, + .domain_destroy = etnaviv_iommu_domain_destroy, + .map = etnaviv_iommu_map, + .unmap = etnaviv_iommu_unmap, + .iova_to_phys = etnaviv_iommu_iova_to_phys, + .pgsize_bitmap = SZ_4K, +}; + +struct iommu_domain *etnaviv_iommu_domain_alloc(struct etnaviv_gpu *gpu) +{ + struct iommu_domain *domain; + struct etnaviv_iommu_domain *etnaviv_domain; + int ret; + + domain = kzalloc(sizeof(*domain), GFP_KERNEL); + if (!domain) + return NULL; + + domain->ops = &etnaviv_iommu_ops; + + ret = domain->ops->domain_init(domain); + if (ret) + goto out_free; + + /* set page table address in MC */ + etnaviv_domain = domain->priv; + + gpu_write(gpu, VIVS_MC_MMU_FE_PAGE_TABLE, (uint32_t)etnaviv_domain->pgtable.paddr); + gpu_write(gpu, VIVS_MC_MMU_TX_PAGE_TABLE, (uint32_t)etnaviv_domain->pgtable.paddr); + gpu_write(gpu, VIVS_MC_MMU_PE_PAGE_TABLE, (uint32_t)etnaviv_domain->pgtable.paddr); + gpu_write(gpu, VIVS_MC_MMU_PEZ_PAGE_TABLE, (uint32_t)etnaviv_domain->pgtable.paddr); + gpu_write(gpu, VIVS_MC_MMU_RA_PAGE_TABLE, (uint32_t)etnaviv_domain->pgtable.paddr); + + return domain; + +out_free: + kfree(domain); + return NULL; +} diff --git a/drivers/staging/etnaviv/etnaviv_iommu.h b/drivers/staging/etnaviv/etnaviv_iommu.h new file mode 100644 index 000000000000..3103ff3efcbe --- /dev/null +++ b/drivers/staging/etnaviv/etnaviv_iommu.h @@ -0,0 +1,25 @@ +/* + * Copyright (C) 2014 Christian Gmeiner christian.gmeiner@gmail.com + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see http://www.gnu.org/licenses/. + */ + +#ifndef __ETNAVIV_IOMMU_H__ +#define __ETNAVIV_IOMMU_H__ + +#include <linux/iommu.h> +struct etnaviv_gpu; + +struct iommu_domain *etnaviv_iommu_domain_alloc(struct etnaviv_gpu *gpu); + +#endif /* __ETNAVIV_IOMMU_H__ */ diff --git a/drivers/staging/etnaviv/etnaviv_iommu_v2.c b/drivers/staging/etnaviv/etnaviv_iommu_v2.c new file mode 100644 index 000000000000..3039ee9cbc6d --- /dev/null +++ b/drivers/staging/etnaviv/etnaviv_iommu_v2.c @@ -0,0 +1,32 @@ +/* + * Copyright (C) 2014 Christian Gmeiner christian.gmeiner@gmail.com + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see http://www.gnu.org/licenses/. 
+ */ + +#include <linux/iommu.h> +#include <linux/platform_device.h> +#include <linux/sizes.h> +#include <linux/slab.h> +#include <linux/dma-mapping.h> +#include <linux/bitops.h> + +#include "etnaviv_gpu.h" +#include "state_hi.xml.h" + + +struct iommu_domain *etnaviv_iommu_v2_domain_alloc(struct etnaviv_gpu *gpu) +{ + /* TODO */ + return NULL; +} diff --git a/drivers/staging/etnaviv/etnaviv_iommu_v2.h b/drivers/staging/etnaviv/etnaviv_iommu_v2.h new file mode 100644 index 000000000000..603ea41c5389 --- /dev/null +++ b/drivers/staging/etnaviv/etnaviv_iommu_v2.h @@ -0,0 +1,25 @@ +/* + * Copyright (C) 2014 Christian Gmeiner christian.gmeiner@gmail.com + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see http://www.gnu.org/licenses/. + */ + +#ifndef __ETNAVIV_IOMMU_V2_H__ +#define __ETNAVIV_IOMMU_V2_H__ + +#include <linux/iommu.h> +struct etnaviv_gpu; + +struct iommu_domain *etnaviv_iommu_v2_domain_alloc(struct etnaviv_gpu *gpu); + +#endif /* __ETNAVIV_IOMMU_V2_H__ */ diff --git a/drivers/staging/etnaviv/etnaviv_mmu.c b/drivers/staging/etnaviv/etnaviv_mmu.c new file mode 100644 index 000000000000..cee97e11117d --- /dev/null +++ b/drivers/staging/etnaviv/etnaviv_mmu.c @@ -0,0 +1,111 @@ +/* + * Copyright (C) 2013 Red Hat + * Author: Rob Clark robdclark@gmail.com + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. 
+ * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see http://www.gnu.org/licenses/. + */ + +#include "etnaviv_drv.h" +#include "etnaviv_mmu.h" + +static int etnaviv_fault_handler(struct iommu_domain *iommu, struct device *dev, + unsigned long iova, int flags, void *arg) +{ + DBG("*** fault: iova=%08lx, flags=%d", iova, flags); + return 0; +} + +int etnaviv_iommu_map(struct etnaviv_iommu *iommu, uint32_t iova, + struct sg_table *sgt, unsigned len, int prot) +{ + struct iommu_domain *domain = iommu->domain; + struct scatterlist *sg; + unsigned int da = iova; + unsigned int i, j; + int ret; + + if (!domain || !sgt) + return -EINVAL; + + for_each_sg(sgt->sgl, sg, sgt->nents, i) { + u32 pa = sg_phys(sg) - sg->offset; + size_t bytes = sg->length + sg->offset; + + VERB("map[%d]: %08x %08x(%x)", i, iova, pa, bytes); + + ret = iommu_map(domain, da, pa, bytes, prot); + if (ret) + goto fail; + + da += bytes; + } + + return 0; + +fail: + da = iova; + + for_each_sg(sgt->sgl, sg, i, j) { + size_t bytes = sg->length + sg->offset; + iommu_unmap(domain, da, bytes); + da += bytes; + } + return ret; +} + +int etnaviv_iommu_unmap(struct etnaviv_iommu *iommu, uint32_t iova, + struct sg_table *sgt, unsigned len) +{ + struct iommu_domain *domain = iommu->domain; + struct scatterlist *sg; + unsigned int da = iova; + int i; + + for_each_sg(sgt->sgl, sg, sgt->nents, i) { + size_t bytes = sg->length + sg->offset; + size_t unmapped; + + unmapped = iommu_unmap(domain, da, bytes); + if (unmapped < bytes) + return unmapped; + + VERB("unmap[%d]: %08x(%x)", i, iova, bytes); + + BUG_ON(!PAGE_ALIGNED(bytes)); + + da += bytes; + } + + return 0; +} + +void etnaviv_iommu_destroy(struct 
etnaviv_iommu *mmu) +{ + iommu_domain_free(mmu->domain); + kfree(mmu); +} + +struct etnaviv_iommu *etnaviv_iommu_new(struct drm_device *dev, struct iommu_domain *domain) +{ + struct etnaviv_iommu *mmu; + + mmu = kzalloc(sizeof(*mmu), GFP_KERNEL); + if (!mmu) + return ERR_PTR(-ENOMEM); + + mmu->domain = domain; + mmu->dev = dev; + iommu_set_fault_handler(domain, etnaviv_fault_handler, dev); + + return mmu; +} diff --git a/drivers/staging/etnaviv/etnaviv_mmu.h b/drivers/staging/etnaviv/etnaviv_mmu.h new file mode 100644 index 000000000000..02e7adcc96d7 --- /dev/null +++ b/drivers/staging/etnaviv/etnaviv_mmu.h @@ -0,0 +1,37 @@ +/* + * Copyright (C) 2013 Red Hat + * Author: Rob Clark robdclark@gmail.com + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see http://www.gnu.org/licenses/. 
+ */ + +#ifndef __ETNAVIV_MMU_H__ +#define __ETNAVIV_MMU_H__ + +#include <linux/iommu.h> + +struct etnaviv_iommu { + struct drm_device *dev; + struct iommu_domain *domain; +}; + +int etnaviv_iommu_attach(struct etnaviv_iommu *iommu, const char **names, int cnt); +int etnaviv_iommu_map(struct etnaviv_iommu *iommu, uint32_t iova, struct sg_table *sgt, + unsigned len, int prot); +int etnaviv_iommu_unmap(struct etnaviv_iommu *iommu, uint32_t iova, struct sg_table *sgt, + unsigned len); +void etnaviv_iommu_destroy(struct etnaviv_iommu *iommu); + +struct etnaviv_iommu *etnaviv_iommu_new(struct drm_device *dev, struct iommu_domain *domain); + +#endif /* __ETNAVIV_MMU_H__ */ diff --git a/drivers/staging/etnaviv/state.xml.h b/drivers/staging/etnaviv/state.xml.h new file mode 100644 index 000000000000..e7b36df1e4e3 --- /dev/null +++ b/drivers/staging/etnaviv/state.xml.h @@ -0,0 +1,348 @@ +#ifndef STATE_XML +#define STATE_XML + +/* Autogenerated file, DO NOT EDIT manually! + +This file was generated by the rules-ng-ng headergen tool in this git repository: +http://0x04.net/cgit/index.cgi/rules-ng-ng +git clone git://0x04.net/rules-ng-ng + +The rules-ng-ng source files this header was generated from are: +- /home/orion/projects/etna_viv/rnndb/state.xml ( 18526 bytes, from 2013-09-11 16:52:32) +- /home/orion/projects/etna_viv/rnndb/common.xml ( 18379 bytes, from 2014-01-27 15:58:05) +- /home/orion/projects/etna_viv/rnndb/state_hi.xml ( 22236 bytes, from 2014-01-27 15:56:46) +- /home/orion/projects/etna_viv/rnndb/state_2d.xml ( 51191 bytes, from 2013-10-04 06:36:55) +- /home/orion/projects/etna_viv/rnndb/state_3d.xml ( 54570 bytes, from 2013-10-12 15:25:03) +- /home/orion/projects/etna_viv/rnndb/state_vg.xml ( 5942 bytes, from 2013-09-01 10:53:22) + +Copyright (C) 2013 +*/ + + +#define VARYING_COMPONENT_USE_UNUSED 0x00000000 +#define VARYING_COMPONENT_USE_USED 0x00000001 +#define VARYING_COMPONENT_USE_POINTCOORD_X 0x00000002 +#define VARYING_COMPONENT_USE_POINTCOORD_Y 0x00000003 
+#define FE_VERTEX_STREAM_CONTROL_VERTEX_STRIDE__MASK 0x000000ff +#define FE_VERTEX_STREAM_CONTROL_VERTEX_STRIDE__SHIFT 0 +#define FE_VERTEX_STREAM_CONTROL_VERTEX_STRIDE(x) (((x) << FE_VERTEX_STREAM_CONTROL_VERTEX_STRIDE__SHIFT) & FE_VERTEX_STREAM_CONTROL_VERTEX_STRIDE__MASK) +#define VIVS_FE 0x00000000 + +#define VIVS_FE_VERTEX_ELEMENT_CONFIG(i0) (0x00000600 + 0x4*(i0)) +#define VIVS_FE_VERTEX_ELEMENT_CONFIG__ESIZE 0x00000004 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG__LEN 0x00000010 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE__MASK 0x0000000f +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE__SHIFT 0 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_BYTE 0x00000000 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_UNSIGNED_BYTE 0x00000001 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_SHORT 0x00000002 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_UNSIGNED_SHORT 0x00000003 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_INT 0x00000004 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_UNSIGNED_INT 0x00000005 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_FLOAT 0x00000008 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_HALF_FLOAT 0x00000009 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_FIXED 0x0000000b +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_INT_10_10_10_2 0x0000000c +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_UNSIGNED_INT_10_10_10_2 0x0000000d +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_ENDIAN__MASK 0x00000030 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_ENDIAN__SHIFT 4 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_ENDIAN(x) (((x) << VIVS_FE_VERTEX_ELEMENT_CONFIG_ENDIAN__SHIFT) & VIVS_FE_VERTEX_ELEMENT_CONFIG_ENDIAN__MASK) +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_NONCONSECUTIVE 0x00000080 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_STREAM__MASK 0x00000700 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_STREAM__SHIFT 8 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_STREAM(x) (((x) << VIVS_FE_VERTEX_ELEMENT_CONFIG_STREAM__SHIFT) & VIVS_FE_VERTEX_ELEMENT_CONFIG_STREAM__MASK) +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_NUM__MASK 0x00003000 +#define 
VIVS_FE_VERTEX_ELEMENT_CONFIG_NUM__SHIFT 12 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_NUM(x) (((x) << VIVS_FE_VERTEX_ELEMENT_CONFIG_NUM__SHIFT) & VIVS_FE_VERTEX_ELEMENT_CONFIG_NUM__MASK) +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_NORMALIZE__MASK 0x0000c000 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_NORMALIZE__SHIFT 14 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_NORMALIZE_OFF 0x00000000 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_NORMALIZE_ON 0x00008000 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_START__MASK 0x00ff0000 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_START__SHIFT 16 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_START(x) (((x) << VIVS_FE_VERTEX_ELEMENT_CONFIG_START__SHIFT) & VIVS_FE_VERTEX_ELEMENT_CONFIG_START__MASK) +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_END__MASK 0xff000000 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_END__SHIFT 24 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_END(x) (((x) << VIVS_FE_VERTEX_ELEMENT_CONFIG_END__SHIFT) & VIVS_FE_VERTEX_ELEMENT_CONFIG_END__MASK) + +#define VIVS_FE_CMD_STREAM_BASE_ADDR 0x00000640 + +#define VIVS_FE_INDEX_STREAM_BASE_ADDR 0x00000644 + +#define VIVS_FE_INDEX_STREAM_CONTROL 0x00000648 +#define VIVS_FE_INDEX_STREAM_CONTROL_TYPE__MASK 0x00000003 +#define VIVS_FE_INDEX_STREAM_CONTROL_TYPE__SHIFT 0 +#define VIVS_FE_INDEX_STREAM_CONTROL_TYPE_UNSIGNED_CHAR 0x00000000 +#define VIVS_FE_INDEX_STREAM_CONTROL_TYPE_UNSIGNED_SHORT 0x00000001 +#define VIVS_FE_INDEX_STREAM_CONTROL_TYPE_UNSIGNED_INT 0x00000002 + +#define VIVS_FE_VERTEX_STREAM_BASE_ADDR 0x0000064c + +#define VIVS_FE_VERTEX_STREAM_CONTROL 0x00000650 + +#define VIVS_FE_COMMAND_ADDRESS 0x00000654 + +#define VIVS_FE_COMMAND_CONTROL 0x00000658 +#define VIVS_FE_COMMAND_CONTROL_PREFETCH__MASK 0x0000ffff +#define VIVS_FE_COMMAND_CONTROL_PREFETCH__SHIFT 0 +#define VIVS_FE_COMMAND_CONTROL_PREFETCH(x) (((x) << VIVS_FE_COMMAND_CONTROL_PREFETCH__SHIFT) & VIVS_FE_COMMAND_CONTROL_PREFETCH__MASK) +#define VIVS_FE_COMMAND_CONTROL_ENABLE 0x00010000 + +#define VIVS_FE_DMA_STATUS 0x0000065c + +#define VIVS_FE_DMA_DEBUG_STATE 
0x00000660 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE__MASK 0x0000001f +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE__SHIFT 0 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_IDLE 0x00000000 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_DEC 0x00000001 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_ADR0 0x00000002 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_LOAD0 0x00000003 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_ADR1 0x00000004 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_LOAD1 0x00000005 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_3DADR 0x00000006 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_3DCMD 0x00000007 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_3DCNTL 0x00000008 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_3DIDXCNTL 0x00000009 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_INITREQDMA 0x0000000a +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_DRAWIDX 0x0000000b +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_DRAW 0x0000000c +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_2DRECT0 0x0000000d +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_2DRECT1 0x0000000e +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_2DDATA0 0x0000000f +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_2DDATA1 0x00000010 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_WAITFIFO 0x00000011 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_WAIT 0x00000012 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_LINK 0x00000013 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_END 0x00000014 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_STALL 0x00000015 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_DMA_STATE__MASK 0x00000300 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_DMA_STATE__SHIFT 8 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_DMA_STATE_IDLE 0x00000000 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_DMA_STATE_START 0x00000100 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_DMA_STATE_REQ 0x00000200 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_DMA_STATE_END 0x00000300 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_FETCH_STATE__MASK 0x00000c00 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_FETCH_STATE__SHIFT 10 +#define 
VIVS_FE_DMA_DEBUG_STATE_CMD_FETCH_STATE_IDLE 0x00000000 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_FETCH_STATE_RAMVALID 0x00000400 +#define VIVS_FE_DMA_DEBUG_STATE_CMD_FETCH_STATE_VALID 0x00000800 +#define VIVS_FE_DMA_DEBUG_STATE_REQ_DMA_STATE__MASK 0x00003000 +#define VIVS_FE_DMA_DEBUG_STATE_REQ_DMA_STATE__SHIFT 12 +#define VIVS_FE_DMA_DEBUG_STATE_REQ_DMA_STATE_IDLE 0x00000000 +#define VIVS_FE_DMA_DEBUG_STATE_REQ_DMA_STATE_WAITIDX 0x00001000 +#define VIVS_FE_DMA_DEBUG_STATE_REQ_DMA_STATE_CAL 0x00002000 +#define VIVS_FE_DMA_DEBUG_STATE_CAL_STATE__MASK 0x0000c000 +#define VIVS_FE_DMA_DEBUG_STATE_CAL_STATE__SHIFT 14 +#define VIVS_FE_DMA_DEBUG_STATE_CAL_STATE_IDLE 0x00000000 +#define VIVS_FE_DMA_DEBUG_STATE_CAL_STATE_LDADR 0x00004000 +#define VIVS_FE_DMA_DEBUG_STATE_CAL_STATE_IDXCALC 0x00008000 +#define VIVS_FE_DMA_DEBUG_STATE_VE_REQ_STATE__MASK 0x00030000 +#define VIVS_FE_DMA_DEBUG_STATE_VE_REQ_STATE__SHIFT 16 +#define VIVS_FE_DMA_DEBUG_STATE_VE_REQ_STATE_IDLE 0x00000000 +#define VIVS_FE_DMA_DEBUG_STATE_VE_REQ_STATE_CKCACHE 0x00010000 +#define VIVS_FE_DMA_DEBUG_STATE_VE_REQ_STATE_MISS 0x00020000 + +#define VIVS_FE_DMA_ADDRESS 0x00000664 + +#define VIVS_FE_DMA_LOW 0x00000668 + +#define VIVS_FE_DMA_HIGH 0x0000066c + +#define VIVS_FE_AUTO_FLUSH 0x00000670 + +#define VIVS_FE_UNK00678 0x00000678 + +#define VIVS_FE_UNK0067C 0x0000067c + +#define VIVS_FE_VERTEX_STREAMS(i0) (0x00000000 + 0x4*(i0)) +#define VIVS_FE_VERTEX_STREAMS__ESIZE 0x00000004 +#define VIVS_FE_VERTEX_STREAMS__LEN 0x00000008 + +#define VIVS_FE_VERTEX_STREAMS_BASE_ADDR(i0) (0x00000680 + 0x4*(i0)) + +#define VIVS_FE_VERTEX_STREAMS_CONTROL(i0) (0x000006a0 + 0x4*(i0)) + +#define VIVS_FE_UNK00700(i0) (0x00000700 + 0x4*(i0)) +#define VIVS_FE_UNK00700__ESIZE 0x00000004 +#define VIVS_FE_UNK00700__LEN 0x00000010 + +#define VIVS_FE_UNK00740(i0) (0x00000740 + 0x4*(i0)) +#define VIVS_FE_UNK00740__ESIZE 0x00000004 +#define VIVS_FE_UNK00740__LEN 0x00000010 + +#define VIVS_FE_UNK00780(i0) (0x00000780 + 0x4*(i0)) +#define 
VIVS_FE_UNK00780__ESIZE 0x00000004 +#define VIVS_FE_UNK00780__LEN 0x00000010 + +#define VIVS_GL 0x00000000 + +#define VIVS_GL_PIPE_SELECT 0x00003800 +#define VIVS_GL_PIPE_SELECT_PIPE__MASK 0x00000001 +#define VIVS_GL_PIPE_SELECT_PIPE__SHIFT 0 +#define VIVS_GL_PIPE_SELECT_PIPE(x) (((x) << VIVS_GL_PIPE_SELECT_PIPE__SHIFT) & VIVS_GL_PIPE_SELECT_PIPE__MASK) + +#define VIVS_GL_EVENT 0x00003804 +#define VIVS_GL_EVENT_EVENT_ID__MASK 0x0000001f +#define VIVS_GL_EVENT_EVENT_ID__SHIFT 0 +#define VIVS_GL_EVENT_EVENT_ID(x) (((x) << VIVS_GL_EVENT_EVENT_ID__SHIFT) & VIVS_GL_EVENT_EVENT_ID__MASK) +#define VIVS_GL_EVENT_FROM_FE 0x00000020 +#define VIVS_GL_EVENT_FROM_PE 0x00000040 +#define VIVS_GL_EVENT_SOURCE__MASK 0x00001f00 +#define VIVS_GL_EVENT_SOURCE__SHIFT 8 +#define VIVS_GL_EVENT_SOURCE(x) (((x) << VIVS_GL_EVENT_SOURCE__SHIFT) & VIVS_GL_EVENT_SOURCE__MASK) + +#define VIVS_GL_SEMAPHORE_TOKEN 0x00003808 +#define VIVS_GL_SEMAPHORE_TOKEN_FROM__MASK 0x0000001f +#define VIVS_GL_SEMAPHORE_TOKEN_FROM__SHIFT 0 +#define VIVS_GL_SEMAPHORE_TOKEN_FROM(x) (((x) << VIVS_GL_SEMAPHORE_TOKEN_FROM__SHIFT) & VIVS_GL_SEMAPHORE_TOKEN_FROM__MASK) +#define VIVS_GL_SEMAPHORE_TOKEN_TO__MASK 0x00001f00 +#define VIVS_GL_SEMAPHORE_TOKEN_TO__SHIFT 8 +#define VIVS_GL_SEMAPHORE_TOKEN_TO(x) (((x) << VIVS_GL_SEMAPHORE_TOKEN_TO__SHIFT) & VIVS_GL_SEMAPHORE_TOKEN_TO__MASK) + +#define VIVS_GL_FLUSH_CACHE 0x0000380c +#define VIVS_GL_FLUSH_CACHE_DEPTH 0x00000001 +#define VIVS_GL_FLUSH_CACHE_COLOR 0x00000002 +#define VIVS_GL_FLUSH_CACHE_TEXTURE 0x00000004 +#define VIVS_GL_FLUSH_CACHE_PE2D 0x00000008 +#define VIVS_GL_FLUSH_CACHE_TEXTUREVS 0x00000010 +#define VIVS_GL_FLUSH_CACHE_SHADER_L1 0x00000020 +#define VIVS_GL_FLUSH_CACHE_SHADER_L2 0x00000040 + +#define VIVS_GL_FLUSH_MMU 0x00003810 +#define VIVS_GL_FLUSH_MMU_FLUSH_FEMMU 0x00000001 +#define VIVS_GL_FLUSH_MMU_FLUSH_PEMMU 0x00000002 + +#define VIVS_GL_VERTEX_ELEMENT_CONFIG 0x00003814 + +#define VIVS_GL_MULTI_SAMPLE_CONFIG 0x00003818 +#define 
VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_SAMPLES__MASK 0x00000003 +#define VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_SAMPLES__SHIFT 0 +#define VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_SAMPLES_NONE 0x00000000 +#define VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_SAMPLES_2X 0x00000001 +#define VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_SAMPLES_4X 0x00000002 +#define VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_SAMPLES_MASK 0x00000008 +#define VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_ENABLES__MASK 0x000000f0 +#define VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_ENABLES__SHIFT 4 +#define VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_ENABLES(x) (((x) << VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_ENABLES__SHIFT) & VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_ENABLES__MASK) +#define VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_ENABLES_MASK 0x00000100 +#define VIVS_GL_MULTI_SAMPLE_CONFIG_UNK12__MASK 0x00007000 +#define VIVS_GL_MULTI_SAMPLE_CONFIG_UNK12__SHIFT 12 +#define VIVS_GL_MULTI_SAMPLE_CONFIG_UNK12(x) (((x) << VIVS_GL_MULTI_SAMPLE_CONFIG_UNK12__SHIFT) & VIVS_GL_MULTI_SAMPLE_CONFIG_UNK12__MASK) +#define VIVS_GL_MULTI_SAMPLE_CONFIG_UNK12_MASK 0x00008000 +#define VIVS_GL_MULTI_SAMPLE_CONFIG_UNK16__MASK 0x00030000 +#define VIVS_GL_MULTI_SAMPLE_CONFIG_UNK16__SHIFT 16 +#define VIVS_GL_MULTI_SAMPLE_CONFIG_UNK16(x) (((x) << VIVS_GL_MULTI_SAMPLE_CONFIG_UNK16__SHIFT) & VIVS_GL_MULTI_SAMPLE_CONFIG_UNK16__MASK) +#define VIVS_GL_MULTI_SAMPLE_CONFIG_UNK16_MASK 0x00080000 + +#define VIVS_GL_VARYING_TOTAL_COMPONENTS 0x0000381c +#define VIVS_GL_VARYING_TOTAL_COMPONENTS_NUM__MASK 0x000000ff +#define VIVS_GL_VARYING_TOTAL_COMPONENTS_NUM__SHIFT 0 +#define VIVS_GL_VARYING_TOTAL_COMPONENTS_NUM(x) (((x) << VIVS_GL_VARYING_TOTAL_COMPONENTS_NUM__SHIFT) & VIVS_GL_VARYING_TOTAL_COMPONENTS_NUM__MASK) + +#define VIVS_GL_VARYING_NUM_COMPONENTS 0x00003820 +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR0__MASK 0x00000007 +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR0__SHIFT 0 +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR0(x) (((x) << VIVS_GL_VARYING_NUM_COMPONENTS_VAR0__SHIFT) & VIVS_GL_VARYING_NUM_COMPONENTS_VAR0__MASK) +#define 
VIVS_GL_VARYING_NUM_COMPONENTS_VAR1__MASK 0x00000070 +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR1__SHIFT 4 +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR1(x) (((x) << VIVS_GL_VARYING_NUM_COMPONENTS_VAR1__SHIFT) & VIVS_GL_VARYING_NUM_COMPONENTS_VAR1__MASK) +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR2__MASK 0x00000700 +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR2__SHIFT 8 +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR2(x) (((x) << VIVS_GL_VARYING_NUM_COMPONENTS_VAR2__SHIFT) & VIVS_GL_VARYING_NUM_COMPONENTS_VAR2__MASK) +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR3__MASK 0x00007000 +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR3__SHIFT 12 +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR3(x) (((x) << VIVS_GL_VARYING_NUM_COMPONENTS_VAR3__SHIFT) & VIVS_GL_VARYING_NUM_COMPONENTS_VAR3__MASK) +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR4__MASK 0x00070000 +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR4__SHIFT 16 +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR4(x) (((x) << VIVS_GL_VARYING_NUM_COMPONENTS_VAR4__SHIFT) & VIVS_GL_VARYING_NUM_COMPONENTS_VAR4__MASK) +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR5__MASK 0x00700000 +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR5__SHIFT 20 +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR5(x) (((x) << VIVS_GL_VARYING_NUM_COMPONENTS_VAR5__SHIFT) & VIVS_GL_VARYING_NUM_COMPONENTS_VAR5__MASK) +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR6__MASK 0x07000000 +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR6__SHIFT 24 +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR6(x) (((x) << VIVS_GL_VARYING_NUM_COMPONENTS_VAR6__SHIFT) & VIVS_GL_VARYING_NUM_COMPONENTS_VAR6__MASK) +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR7__MASK 0x70000000 +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR7__SHIFT 28 +#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR7(x) (((x) << VIVS_GL_VARYING_NUM_COMPONENTS_VAR7__SHIFT) & VIVS_GL_VARYING_NUM_COMPONENTS_VAR7__MASK) + +#define VIVS_GL_VARYING_COMPONENT_USE(i0) (0x00003828 + 0x4*(i0)) +#define VIVS_GL_VARYING_COMPONENT_USE__ESIZE 0x00000004 +#define VIVS_GL_VARYING_COMPONENT_USE__LEN 
0x00000002 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP0__MASK 0x00000003 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP0__SHIFT 0 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP0(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP0__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP0__MASK) +#define VIVS_GL_VARYING_COMPONENT_USE_COMP1__MASK 0x0000000c +#define VIVS_GL_VARYING_COMPONENT_USE_COMP1__SHIFT 2 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP1(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP1__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP1__MASK) +#define VIVS_GL_VARYING_COMPONENT_USE_COMP2__MASK 0x00000030 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP2__SHIFT 4 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP2(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP2__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP2__MASK) +#define VIVS_GL_VARYING_COMPONENT_USE_COMP3__MASK 0x000000c0 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP3__SHIFT 6 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP3(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP3__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP3__MASK) +#define VIVS_GL_VARYING_COMPONENT_USE_COMP4__MASK 0x00000300 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP4__SHIFT 8 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP4(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP4__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP4__MASK) +#define VIVS_GL_VARYING_COMPONENT_USE_COMP5__MASK 0x00000c00 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP5__SHIFT 10 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP5(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP5__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP5__MASK) +#define VIVS_GL_VARYING_COMPONENT_USE_COMP6__MASK 0x00003000 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP6__SHIFT 12 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP6(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP6__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP6__MASK) +#define VIVS_GL_VARYING_COMPONENT_USE_COMP7__MASK 0x0000c000 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP7__SHIFT 14 +#define 
VIVS_GL_VARYING_COMPONENT_USE_COMP7(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP7__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP7__MASK) +#define VIVS_GL_VARYING_COMPONENT_USE_COMP8__MASK 0x00030000 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP8__SHIFT 16 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP8(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP8__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP8__MASK) +#define VIVS_GL_VARYING_COMPONENT_USE_COMP9__MASK 0x000c0000 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP9__SHIFT 18 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP9(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP9__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP9__MASK) +#define VIVS_GL_VARYING_COMPONENT_USE_COMP10__MASK 0x00300000 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP10__SHIFT 20 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP10(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP10__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP10__MASK) +#define VIVS_GL_VARYING_COMPONENT_USE_COMP11__MASK 0x00c00000 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP11__SHIFT 22 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP11(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP11__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP11__MASK) +#define VIVS_GL_VARYING_COMPONENT_USE_COMP12__MASK 0x03000000 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP12__SHIFT 24 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP12(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP12__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP12__MASK) +#define VIVS_GL_VARYING_COMPONENT_USE_COMP13__MASK 0x0c000000 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP13__SHIFT 26 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP13(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP13__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP13__MASK) +#define VIVS_GL_VARYING_COMPONENT_USE_COMP14__MASK 0x30000000 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP14__SHIFT 28 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP14(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP14__SHIFT) & 
VIVS_GL_VARYING_COMPONENT_USE_COMP14__MASK) +#define VIVS_GL_VARYING_COMPONENT_USE_COMP15__MASK 0xc0000000 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP15__SHIFT 30 +#define VIVS_GL_VARYING_COMPONENT_USE_COMP15(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP15__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP15__MASK) + +#define VIVS_GL_UNK03834 0x00003834 + +#define VIVS_GL_UNK03838 0x00003838 + +#define VIVS_GL_API_MODE 0x0000384c +#define VIVS_GL_API_MODE_OPENGL 0x00000000 +#define VIVS_GL_API_MODE_OPENVG 0x00000001 +#define VIVS_GL_API_MODE_OPENCL 0x00000002 + +#define VIVS_GL_CONTEXT_POINTER 0x00003850 + +#define VIVS_GL_UNK03A00 0x00003a00 + +#define VIVS_GL_STALL_TOKEN 0x00003c00 +#define VIVS_GL_STALL_TOKEN_FROM__MASK 0x0000001f +#define VIVS_GL_STALL_TOKEN_FROM__SHIFT 0 +#define VIVS_GL_STALL_TOKEN_FROM(x) (((x) << VIVS_GL_STALL_TOKEN_FROM__SHIFT) & VIVS_GL_STALL_TOKEN_FROM__MASK) +#define VIVS_GL_STALL_TOKEN_TO__MASK 0x00001f00 +#define VIVS_GL_STALL_TOKEN_TO__SHIFT 8 +#define VIVS_GL_STALL_TOKEN_TO(x) (((x) << VIVS_GL_STALL_TOKEN_TO__SHIFT) & VIVS_GL_STALL_TOKEN_TO__MASK) +#define VIVS_GL_STALL_TOKEN_FLIP0 0x40000000 +#define VIVS_GL_STALL_TOKEN_FLIP1 0x80000000 + +#define VIVS_DUMMY 0x00000000 + +#define VIVS_DUMMY_DUMMY 0x0003fffc + + +#endif /* STATE_XML */ diff --git a/drivers/staging/etnaviv/state_hi.xml.h b/drivers/staging/etnaviv/state_hi.xml.h new file mode 100644 index 000000000000..9799d7473e5e --- /dev/null +++ b/drivers/staging/etnaviv/state_hi.xml.h @@ -0,0 +1,405 @@ +#ifndef STATE_HI_XML +#define STATE_HI_XML + +/* Autogenerated file, DO NOT EDIT manually! 
+ +This file was generated by the rules-ng-ng headergen tool in this git repository: +http://0x04.net/cgit/index.cgi/rules-ng-ng +git clone git://0x04.net/rules-ng-ng + +The rules-ng-ng source files this header was generated from are: +- /home/christian/projects/etna_viv/rnndb/state.xml ( 18526 bytes, from 2014-09-06 05:57:57) +- /home/christian/projects/etna_viv/rnndb/common.xml ( 18379 bytes, from 2014-09-06 05:57:57) +- /home/christian/projects/etna_viv/rnndb/state_hi.xml ( 23176 bytes, from 2014-09-06 06:07:47) +- /home/christian/projects/etna_viv/rnndb/state_2d.xml ( 51191 bytes, from 2014-09-06 05:57:57) +- /home/christian/projects/etna_viv/rnndb/state_3d.xml ( 54570 bytes, from 2014-09-06 05:57:57) +- /home/christian/projects/etna_viv/rnndb/state_vg.xml ( 5942 bytes, from 2014-09-06 05:57:57) + +Copyright (C) 2014 +*/ + + +#define MMU_EXCEPTION_SLAVE_NOT_PRESENT 0x00000001 +#define MMU_EXCEPTION_PAGE_NOT_PRESENT 0x00000002 +#define MMU_EXCEPTION_WRITE_VIOLATION 0x00000003 +#define VIVS_HI 0x00000000 + +#define VIVS_HI_CLOCK_CONTROL 0x00000000 +#define VIVS_HI_CLOCK_CONTROL_CLK3D_DIS 0x00000001 +#define VIVS_HI_CLOCK_CONTROL_CLK2D_DIS 0x00000002 +#define VIVS_HI_CLOCK_CONTROL_FSCALE_VAL__MASK 0x000001fc +#define VIVS_HI_CLOCK_CONTROL_FSCALE_VAL__SHIFT 2 +#define VIVS_HI_CLOCK_CONTROL_FSCALE_VAL(x) (((x) << VIVS_HI_CLOCK_CONTROL_FSCALE_VAL__SHIFT) & VIVS_HI_CLOCK_CONTROL_FSCALE_VAL__MASK) +#define VIVS_HI_CLOCK_CONTROL_FSCALE_CMD_LOAD 0x00000200 +#define VIVS_HI_CLOCK_CONTROL_DISABLE_RAM_CLK_GATING 0x00000400 +#define VIVS_HI_CLOCK_CONTROL_DISABLE_DEBUG_REGISTERS 0x00000800 +#define VIVS_HI_CLOCK_CONTROL_SOFT_RESET 0x00001000 +#define VIVS_HI_CLOCK_CONTROL_IDLE_3D 0x00010000 +#define VIVS_HI_CLOCK_CONTROL_IDLE_2D 0x00020000 +#define VIVS_HI_CLOCK_CONTROL_IDLE_VG 0x00040000 +#define VIVS_HI_CLOCK_CONTROL_ISOLATE_GPU 0x00080000 +#define VIVS_HI_CLOCK_CONTROL_DEBUG_PIXEL_PIPE__MASK 0x00f00000 +#define VIVS_HI_CLOCK_CONTROL_DEBUG_PIXEL_PIPE__SHIFT 20 +#define 
VIVS_HI_CLOCK_CONTROL_DEBUG_PIXEL_PIPE(x) (((x) << VIVS_HI_CLOCK_CONTROL_DEBUG_PIXEL_PIPE__SHIFT) & VIVS_HI_CLOCK_CONTROL_DEBUG_PIXEL_PIPE__MASK) + +#define VIVS_HI_IDLE_STATE 0x00000004 +#define VIVS_HI_IDLE_STATE_FE 0x00000001 +#define VIVS_HI_IDLE_STATE_DE 0x00000002 +#define VIVS_HI_IDLE_STATE_PE 0x00000004 +#define VIVS_HI_IDLE_STATE_SH 0x00000008 +#define VIVS_HI_IDLE_STATE_PA 0x00000010 +#define VIVS_HI_IDLE_STATE_SE 0x00000020 +#define VIVS_HI_IDLE_STATE_RA 0x00000040 +#define VIVS_HI_IDLE_STATE_TX 0x00000080 +#define VIVS_HI_IDLE_STATE_VG 0x00000100 +#define VIVS_HI_IDLE_STATE_IM 0x00000200 +#define VIVS_HI_IDLE_STATE_FP 0x00000400 +#define VIVS_HI_IDLE_STATE_TS 0x00000800 +#define VIVS_HI_IDLE_STATE_AXI_LP 0x80000000 + +#define VIVS_HI_AXI_CONFIG 0x00000008 +#define VIVS_HI_AXI_CONFIG_AWID__MASK 0x0000000f +#define VIVS_HI_AXI_CONFIG_AWID__SHIFT 0 +#define VIVS_HI_AXI_CONFIG_AWID(x) (((x) << VIVS_HI_AXI_CONFIG_AWID__SHIFT) & VIVS_HI_AXI_CONFIG_AWID__MASK) +#define VIVS_HI_AXI_CONFIG_ARID__MASK 0x000000f0 +#define VIVS_HI_AXI_CONFIG_ARID__SHIFT 4 +#define VIVS_HI_AXI_CONFIG_ARID(x) (((x) << VIVS_HI_AXI_CONFIG_ARID__SHIFT) & VIVS_HI_AXI_CONFIG_ARID__MASK) +#define VIVS_HI_AXI_CONFIG_AWCACHE__MASK 0x00000f00 +#define VIVS_HI_AXI_CONFIG_AWCACHE__SHIFT 8 +#define VIVS_HI_AXI_CONFIG_AWCACHE(x) (((x) << VIVS_HI_AXI_CONFIG_AWCACHE__SHIFT) & VIVS_HI_AXI_CONFIG_AWCACHE__MASK) +#define VIVS_HI_AXI_CONFIG_ARCACHE__MASK 0x0000f000 +#define VIVS_HI_AXI_CONFIG_ARCACHE__SHIFT 12 +#define VIVS_HI_AXI_CONFIG_ARCACHE(x) (((x) << VIVS_HI_AXI_CONFIG_ARCACHE__SHIFT) & VIVS_HI_AXI_CONFIG_ARCACHE__MASK) + +#define VIVS_HI_AXI_STATUS 0x0000000c +#define VIVS_HI_AXI_STATUS_WR_ERR_ID__MASK 0x0000000f +#define VIVS_HI_AXI_STATUS_WR_ERR_ID__SHIFT 0 +#define VIVS_HI_AXI_STATUS_WR_ERR_ID(x) (((x) << VIVS_HI_AXI_STATUS_WR_ERR_ID__SHIFT) & VIVS_HI_AXI_STATUS_WR_ERR_ID__MASK) +#define VIVS_HI_AXI_STATUS_RD_ERR_ID__MASK 0x000000f0 +#define VIVS_HI_AXI_STATUS_RD_ERR_ID__SHIFT 4 +#define 
VIVS_HI_AXI_STATUS_RD_ERR_ID(x) (((x) << VIVS_HI_AXI_STATUS_RD_ERR_ID__SHIFT) & VIVS_HI_AXI_STATUS_RD_ERR_ID__MASK) +#define VIVS_HI_AXI_STATUS_DET_WR_ERR 0x00000100 +#define VIVS_HI_AXI_STATUS_DET_RD_ERR 0x00000200 + +#define VIVS_HI_INTR_ACKNOWLEDGE 0x00000010 +#define VIVS_HI_INTR_ACKNOWLEDGE_INTR_VEC__MASK 0x7fffffff +#define VIVS_HI_INTR_ACKNOWLEDGE_INTR_VEC__SHIFT 0 +#define VIVS_HI_INTR_ACKNOWLEDGE_INTR_VEC(x) (((x) << VIVS_HI_INTR_ACKNOWLEDGE_INTR_VEC__SHIFT) & VIVS_HI_INTR_ACKNOWLEDGE_INTR_VEC__MASK) +#define VIVS_HI_INTR_ACKNOWLEDGE_AXI_BUS_ERROR 0x80000000 + +#define VIVS_HI_INTR_ENBL 0x00000014 +#define VIVS_HI_INTR_ENBL_INTR_ENBL_VEC__MASK 0xffffffff +#define VIVS_HI_INTR_ENBL_INTR_ENBL_VEC__SHIFT 0 +#define VIVS_HI_INTR_ENBL_INTR_ENBL_VEC(x) (((x) << VIVS_HI_INTR_ENBL_INTR_ENBL_VEC__SHIFT) & VIVS_HI_INTR_ENBL_INTR_ENBL_VEC__MASK) + +#define VIVS_HI_CHIP_IDENTITY 0x00000018 +#define VIVS_HI_CHIP_IDENTITY_FAMILY__MASK 0xff000000 +#define VIVS_HI_CHIP_IDENTITY_FAMILY__SHIFT 24 +#define VIVS_HI_CHIP_IDENTITY_FAMILY(x) (((x) << VIVS_HI_CHIP_IDENTITY_FAMILY__SHIFT) & VIVS_HI_CHIP_IDENTITY_FAMILY__MASK) +#define VIVS_HI_CHIP_IDENTITY_PRODUCT__MASK 0x00ff0000 +#define VIVS_HI_CHIP_IDENTITY_PRODUCT__SHIFT 16 +#define VIVS_HI_CHIP_IDENTITY_PRODUCT(x) (((x) << VIVS_HI_CHIP_IDENTITY_PRODUCT__SHIFT) & VIVS_HI_CHIP_IDENTITY_PRODUCT__MASK) +#define VIVS_HI_CHIP_IDENTITY_REVISION__MASK 0x0000f000 +#define VIVS_HI_CHIP_IDENTITY_REVISION__SHIFT 12 +#define VIVS_HI_CHIP_IDENTITY_REVISION(x) (((x) << VIVS_HI_CHIP_IDENTITY_REVISION__SHIFT) & VIVS_HI_CHIP_IDENTITY_REVISION__MASK) + +#define VIVS_HI_CHIP_FEATURE 0x0000001c + +#define VIVS_HI_CHIP_MODEL 0x00000020 + +#define VIVS_HI_CHIP_REV 0x00000024 + +#define VIVS_HI_CHIP_DATE 0x00000028 + +#define VIVS_HI_CHIP_TIME 0x0000002c + +#define VIVS_HI_CHIP_MINOR_FEATURE_0 0x00000034 + +#define VIVS_HI_CACHE_CONTROL 0x00000038 + +#define VIVS_HI_MEMORY_COUNTER_RESET 0x0000003c + +#define VIVS_HI_PROFILE_READ_BYTES8 0x00000040 + 
+#define VIVS_HI_PROFILE_WRITE_BYTES8 0x00000044 + +#define VIVS_HI_CHIP_SPECS 0x00000048 +#define VIVS_HI_CHIP_SPECS_STREAM_COUNT__MASK 0x0000000f +#define VIVS_HI_CHIP_SPECS_STREAM_COUNT__SHIFT 0 +#define VIVS_HI_CHIP_SPECS_STREAM_COUNT(x) (((x) << VIVS_HI_CHIP_SPECS_STREAM_COUNT__SHIFT) & VIVS_HI_CHIP_SPECS_STREAM_COUNT__MASK) +#define VIVS_HI_CHIP_SPECS_REGISTER_MAX__MASK 0x000000f0 +#define VIVS_HI_CHIP_SPECS_REGISTER_MAX__SHIFT 4 +#define VIVS_HI_CHIP_SPECS_REGISTER_MAX(x) (((x) << VIVS_HI_CHIP_SPECS_REGISTER_MAX__SHIFT) & VIVS_HI_CHIP_SPECS_REGISTER_MAX__MASK) +#define VIVS_HI_CHIP_SPECS_THREAD_COUNT__MASK 0x00000f00 +#define VIVS_HI_CHIP_SPECS_THREAD_COUNT__SHIFT 8 +#define VIVS_HI_CHIP_SPECS_THREAD_COUNT(x) (((x) << VIVS_HI_CHIP_SPECS_THREAD_COUNT__SHIFT) & VIVS_HI_CHIP_SPECS_THREAD_COUNT__MASK) +#define VIVS_HI_CHIP_SPECS_VERTEX_CACHE_SIZE__MASK 0x0001f000 +#define VIVS_HI_CHIP_SPECS_VERTEX_CACHE_SIZE__SHIFT 12 +#define VIVS_HI_CHIP_SPECS_VERTEX_CACHE_SIZE(x) (((x) << VIVS_HI_CHIP_SPECS_VERTEX_CACHE_SIZE__SHIFT) & VIVS_HI_CHIP_SPECS_VERTEX_CACHE_SIZE__MASK) +#define VIVS_HI_CHIP_SPECS_SHADER_CORE_COUNT__MASK 0x01f00000 +#define VIVS_HI_CHIP_SPECS_SHADER_CORE_COUNT__SHIFT 20 +#define VIVS_HI_CHIP_SPECS_SHADER_CORE_COUNT(x) (((x) << VIVS_HI_CHIP_SPECS_SHADER_CORE_COUNT__SHIFT) & VIVS_HI_CHIP_SPECS_SHADER_CORE_COUNT__MASK) +#define VIVS_HI_CHIP_SPECS_PIXEL_PIPES__MASK 0x0e000000 +#define VIVS_HI_CHIP_SPECS_PIXEL_PIPES__SHIFT 25 +#define VIVS_HI_CHIP_SPECS_PIXEL_PIPES(x) (((x) << VIVS_HI_CHIP_SPECS_PIXEL_PIPES__SHIFT) & VIVS_HI_CHIP_SPECS_PIXEL_PIPES__MASK) +#define VIVS_HI_CHIP_SPECS_VERTEX_OUTPUT_BUFFER_SIZE__MASK 0xf0000000 +#define VIVS_HI_CHIP_SPECS_VERTEX_OUTPUT_BUFFER_SIZE__SHIFT 28 +#define VIVS_HI_CHIP_SPECS_VERTEX_OUTPUT_BUFFER_SIZE(x) (((x) << VIVS_HI_CHIP_SPECS_VERTEX_OUTPUT_BUFFER_SIZE__SHIFT) & VIVS_HI_CHIP_SPECS_VERTEX_OUTPUT_BUFFER_SIZE__MASK) + +#define VIVS_HI_PROFILE_WRITE_BURSTS 0x0000004c + +#define VIVS_HI_PROFILE_WRITE_REQUESTS 
0x00000050 + +#define VIVS_HI_PROFILE_READ_BURSTS 0x00000058 + +#define VIVS_HI_PROFILE_READ_REQUESTS 0x0000005c + +#define VIVS_HI_PROFILE_READ_LASTS 0x00000060 + +#define VIVS_HI_GP_OUT0 0x00000064 + +#define VIVS_HI_GP_OUT1 0x00000068 + +#define VIVS_HI_GP_OUT2 0x0000006c + +#define VIVS_HI_AXI_CONTROL 0x00000070 +#define VIVS_HI_AXI_CONTROL_WR_FULL_BURST_MODE 0x00000001 + +#define VIVS_HI_CHIP_MINOR_FEATURE_1 0x00000074 + +#define VIVS_HI_PROFILE_TOTAL_CYCLES 0x00000078 + +#define VIVS_HI_PROFILE_IDLE_CYCLES 0x0000007c + +#define VIVS_HI_CHIP_SPECS_2 0x00000080 +#define VIVS_HI_CHIP_SPECS_2_BUFFER_SIZE__MASK 0x000000ff +#define VIVS_HI_CHIP_SPECS_2_BUFFER_SIZE__SHIFT 0 +#define VIVS_HI_CHIP_SPECS_2_BUFFER_SIZE(x) (((x) << VIVS_HI_CHIP_SPECS_2_BUFFER_SIZE__SHIFT) & VIVS_HI_CHIP_SPECS_2_BUFFER_SIZE__MASK) +#define VIVS_HI_CHIP_SPECS_2_INSTRUCTION_COUNT__MASK 0x0000ff00 +#define VIVS_HI_CHIP_SPECS_2_INSTRUCTION_COUNT__SHIFT 8 +#define VIVS_HI_CHIP_SPECS_2_INSTRUCTION_COUNT(x) (((x) << VIVS_HI_CHIP_SPECS_2_INSTRUCTION_COUNT__SHIFT) & VIVS_HI_CHIP_SPECS_2_INSTRUCTION_COUNT__MASK) +#define VIVS_HI_CHIP_SPECS_2_NUM_CONSTANTS__MASK 0xffff0000 +#define VIVS_HI_CHIP_SPECS_2_NUM_CONSTANTS__SHIFT 16 +#define VIVS_HI_CHIP_SPECS_2_NUM_CONSTANTS(x) (((x) << VIVS_HI_CHIP_SPECS_2_NUM_CONSTANTS__SHIFT) & VIVS_HI_CHIP_SPECS_2_NUM_CONSTANTS__MASK) + +#define VIVS_HI_CHIP_MINOR_FEATURE_2 0x00000084 + +#define VIVS_HI_CHIP_MINOR_FEATURE_3 0x00000088 + +#define VIVS_HI_CHIP_MINOR_FEATURE_4 0x00000094 + +#define VIVS_PM 0x00000000 + +#define VIVS_PM_POWER_CONTROLS 0x00000100 +#define VIVS_PM_POWER_CONTROLS_ENABLE_MODULE_CLOCK_GATING 0x00000001 +#define VIVS_PM_POWER_CONTROLS_DISABLE_STALL_MODULE_CLOCK_GATING 0x00000002 +#define VIVS_PM_POWER_CONTROLS_DISABLE_STARVE_MODULE_CLOCK_GATING 0x00000004 +#define VIVS_PM_POWER_CONTROLS_TURN_ON_COUNTER__MASK 0x000000f0 +#define VIVS_PM_POWER_CONTROLS_TURN_ON_COUNTER__SHIFT 4 +#define VIVS_PM_POWER_CONTROLS_TURN_ON_COUNTER(x) (((x) << 
VIVS_PM_POWER_CONTROLS_TURN_ON_COUNTER__SHIFT) & VIVS_PM_POWER_CONTROLS_TURN_ON_COUNTER__MASK) +#define VIVS_PM_POWER_CONTROLS_TURN_OFF_COUNTER__MASK 0xffff0000 +#define VIVS_PM_POWER_CONTROLS_TURN_OFF_COUNTER__SHIFT 16 +#define VIVS_PM_POWER_CONTROLS_TURN_OFF_COUNTER(x) (((x) << VIVS_PM_POWER_CONTROLS_TURN_OFF_COUNTER__SHIFT) & VIVS_PM_POWER_CONTROLS_TURN_OFF_COUNTER__MASK) + +#define VIVS_PM_MODULE_CONTROLS 0x00000104 +#define VIVS_PM_MODULE_CONTROLS_DISABLE_MODULE_CLOCK_GATING_FE 0x00000001 +#define VIVS_PM_MODULE_CONTROLS_DISABLE_MODULE_CLOCK_GATING_DE 0x00000002 +#define VIVS_PM_MODULE_CONTROLS_DISABLE_MODULE_CLOCK_GATING_PE 0x00000004 + +#define VIVS_PM_MODULE_STATUS 0x00000108 +#define VIVS_PM_MODULE_STATUS_MODULE_CLOCK_GATED_FE 0x00000001 +#define VIVS_PM_MODULE_STATUS_MODULE_CLOCK_GATED_DE 0x00000002 +#define VIVS_PM_MODULE_STATUS_MODULE_CLOCK_GATED_PE 0x00000004 + +#define VIVS_PM_PULSE_EATER 0x0000010c + +#define VIVS_MMUv2 0x00000000 + +#define VIVS_MMUv2_SAFE_ADDRESS 0x00000180 + +#define VIVS_MMUv2_CONFIGURATION 0x00000184 +#define VIVS_MMUv2_CONFIGURATION_MODE__MASK 0x00000001 +#define VIVS_MMUv2_CONFIGURATION_MODE__SHIFT 0 +#define VIVS_MMUv2_CONFIGURATION_MODE_MODE4_K 0x00000000 +#define VIVS_MMUv2_CONFIGURATION_MODE_MODE1_K 0x00000001 +#define VIVS_MMUv2_CONFIGURATION_MODE_MASK 0x00000008 +#define VIVS_MMUv2_CONFIGURATION_FLUSH__MASK 0x00000010 +#define VIVS_MMUv2_CONFIGURATION_FLUSH__SHIFT 4 +#define VIVS_MMUv2_CONFIGURATION_FLUSH_FLUSH 0x00000010 +#define VIVS_MMUv2_CONFIGURATION_FLUSH_MASK 0x00000080 +#define VIVS_MMUv2_CONFIGURATION_ADDRESS_MASK 0x00000100 +#define VIVS_MMUv2_CONFIGURATION_ADDRESS__MASK 0xfffffc00 +#define VIVS_MMUv2_CONFIGURATION_ADDRESS__SHIFT 10 +#define VIVS_MMUv2_CONFIGURATION_ADDRESS(x) (((x) << VIVS_MMUv2_CONFIGURATION_ADDRESS__SHIFT) & VIVS_MMUv2_CONFIGURATION_ADDRESS__MASK) + +#define VIVS_MMUv2_STATUS 0x00000188 +#define VIVS_MMUv2_STATUS_EXCEPTION0__MASK 0x00000003 +#define VIVS_MMUv2_STATUS_EXCEPTION0__SHIFT 0 
+#define VIVS_MMUv2_STATUS_EXCEPTION0(x) (((x) << VIVS_MMUv2_STATUS_EXCEPTION0__SHIFT) & VIVS_MMUv2_STATUS_EXCEPTION0__MASK) +#define VIVS_MMUv2_STATUS_EXCEPTION1__MASK 0x00000030 +#define VIVS_MMUv2_STATUS_EXCEPTION1__SHIFT 4 +#define VIVS_MMUv2_STATUS_EXCEPTION1(x) (((x) << VIVS_MMUv2_STATUS_EXCEPTION1__SHIFT) & VIVS_MMUv2_STATUS_EXCEPTION1__MASK) +#define VIVS_MMUv2_STATUS_EXCEPTION2__MASK 0x00000300 +#define VIVS_MMUv2_STATUS_EXCEPTION2__SHIFT 8 +#define VIVS_MMUv2_STATUS_EXCEPTION2(x) (((x) << VIVS_MMUv2_STATUS_EXCEPTION2__SHIFT) & VIVS_MMUv2_STATUS_EXCEPTION2__MASK) +#define VIVS_MMUv2_STATUS_EXCEPTION3__MASK 0x00003000 +#define VIVS_MMUv2_STATUS_EXCEPTION3__SHIFT 12 +#define VIVS_MMUv2_STATUS_EXCEPTION3(x) (((x) << VIVS_MMUv2_STATUS_EXCEPTION3__SHIFT) & VIVS_MMUv2_STATUS_EXCEPTION3__MASK) + +#define VIVS_MMUv2_CONTROL 0x0000018c +#define VIVS_MMUv2_CONTROL_ENABLE 0x00000001 + +#define VIVS_MMUv2_EXCEPTION_ADDR(i0) (0x00000190 + 0x4*(i0)) +#define VIVS_MMUv2_EXCEPTION_ADDR__ESIZE 0x00000004 +#define VIVS_MMUv2_EXCEPTION_ADDR__LEN 0x00000004 + +#define VIVS_MC 0x00000000 + +#define VIVS_MC_MMU_FE_PAGE_TABLE 0x00000400 + +#define VIVS_MC_MMU_TX_PAGE_TABLE 0x00000404 + +#define VIVS_MC_MMU_PE_PAGE_TABLE 0x00000408 + +#define VIVS_MC_MMU_PEZ_PAGE_TABLE 0x0000040c + +#define VIVS_MC_MMU_RA_PAGE_TABLE 0x00000410 + +#define VIVS_MC_DEBUG_MEMORY 0x00000414 +#define VIVS_MC_DEBUG_MEMORY_SPECIAL_PATCH_GC320 0x00000008 +#define VIVS_MC_DEBUG_MEMORY_FAST_CLEAR_BYPASS 0x00100000 +#define VIVS_MC_DEBUG_MEMORY_COMPRESSION_BYPASS 0x00200000 + +#define VIVS_MC_MEMORY_BASE_ADDR_RA 0x00000418 + +#define VIVS_MC_MEMORY_BASE_ADDR_FE 0x0000041c + +#define VIVS_MC_MEMORY_BASE_ADDR_TX 0x00000420 + +#define VIVS_MC_MEMORY_BASE_ADDR_PEZ 0x00000424 + +#define VIVS_MC_MEMORY_BASE_ADDR_PE 0x00000428 + +#define VIVS_MC_MEMORY_TIMING_CONTROL 0x0000042c + +#define VIVS_MC_MEMORY_FLUSH 0x00000430 + +#define VIVS_MC_PROFILE_CYCLE_COUNTER 0x00000438 + +#define VIVS_MC_DEBUG_READ0 0x0000043c + 
+#define VIVS_MC_DEBUG_READ1 0x00000440 + +#define VIVS_MC_DEBUG_WRITE 0x00000444 + +#define VIVS_MC_PROFILE_RA_READ 0x00000448 + +#define VIVS_MC_PROFILE_TX_READ 0x0000044c + +#define VIVS_MC_PROFILE_FE_READ 0x00000450 + +#define VIVS_MC_PROFILE_PE_READ 0x00000454 + +#define VIVS_MC_PROFILE_DE_READ 0x00000458 + +#define VIVS_MC_PROFILE_SH_READ 0x0000045c + +#define VIVS_MC_PROFILE_PA_READ 0x00000460 + +#define VIVS_MC_PROFILE_SE_READ 0x00000464 + +#define VIVS_MC_PROFILE_MC_READ 0x00000468 + +#define VIVS_MC_PROFILE_HI_READ 0x0000046c + +#define VIVS_MC_PROFILE_CONFIG0 0x00000470 +#define VIVS_MC_PROFILE_CONFIG0_FE__MASK 0x0000000f +#define VIVS_MC_PROFILE_CONFIG0_FE__SHIFT 0 +#define VIVS_MC_PROFILE_CONFIG0_FE_RESET 0x0000000f +#define VIVS_MC_PROFILE_CONFIG0_DE__MASK 0x00000f00 +#define VIVS_MC_PROFILE_CONFIG0_DE__SHIFT 8 +#define VIVS_MC_PROFILE_CONFIG0_DE_RESET 0x00000f00 +#define VIVS_MC_PROFILE_CONFIG0_PE__MASK 0x000f0000 +#define VIVS_MC_PROFILE_CONFIG0_PE__SHIFT 16 +#define VIVS_MC_PROFILE_CONFIG0_PE_PIXEL_COUNT_KILLED_BY_COLOR_PIPE 0x00000000 +#define VIVS_MC_PROFILE_CONFIG0_PE_PIXEL_COUNT_KILLED_BY_DEPTH_PIPE 0x00010000 +#define VIVS_MC_PROFILE_CONFIG0_PE_PIXEL_COUNT_DRAWN_BY_COLOR_PIPE 0x00020000 +#define VIVS_MC_PROFILE_CONFIG0_PE_PIXEL_COUNT_DRAWN_BY_DEPTH_PIPE 0x00030000 +#define VIVS_MC_PROFILE_CONFIG0_PE_PIXELS_RENDERED_2D 0x000b0000 +#define VIVS_MC_PROFILE_CONFIG0_PE_RESET 0x000f0000 +#define VIVS_MC_PROFILE_CONFIG0_SH__MASK 0x0f000000 +#define VIVS_MC_PROFILE_CONFIG0_SH__SHIFT 24 +#define VIVS_MC_PROFILE_CONFIG0_SH_SHADER_CYCLES 0x04000000 +#define VIVS_MC_PROFILE_CONFIG0_SH_PS_INST_COUNTER 0x07000000 +#define VIVS_MC_PROFILE_CONFIG0_SH_RENDERED_PIXEL_COUNTER 0x08000000 +#define VIVS_MC_PROFILE_CONFIG0_SH_VS_INST_COUNTER 0x09000000 +#define VIVS_MC_PROFILE_CONFIG0_SH_RENDERED_VERTICE_COUNTER 0x0a000000 +#define VIVS_MC_PROFILE_CONFIG0_SH_VTX_BRANCH_INST_COUNTER 0x0b000000 +#define VIVS_MC_PROFILE_CONFIG0_SH_VTX_TEXLD_INST_COUNTER 0x0c000000 
+#define VIVS_MC_PROFILE_CONFIG0_SH_PXL_BRANCH_INST_COUNTER 0x0d000000
+#define VIVS_MC_PROFILE_CONFIG0_SH_PXL_TEXLD_INST_COUNTER 0x0e000000
+#define VIVS_MC_PROFILE_CONFIG0_SH_RESET 0x0f000000
+
+#define VIVS_MC_PROFILE_CONFIG1 0x00000474
+#define VIVS_MC_PROFILE_CONFIG1_PA__MASK 0x0000000f
+#define VIVS_MC_PROFILE_CONFIG1_PA__SHIFT 0
+#define VIVS_MC_PROFILE_CONFIG1_PA_INPUT_VTX_COUNTER 0x00000003
+#define VIVS_MC_PROFILE_CONFIG1_PA_INPUT_PRIM_COUNTER 0x00000004
+#define VIVS_MC_PROFILE_CONFIG1_PA_OUTPUT_PRIM_COUNTER 0x00000005
+#define VIVS_MC_PROFILE_CONFIG1_PA_DEPTH_CLIPPED_COUNTER 0x00000006
+#define VIVS_MC_PROFILE_CONFIG1_PA_TRIVIAL_REJECTED_COUNTER 0x00000007
+#define VIVS_MC_PROFILE_CONFIG1_PA_CULLED_COUNTER 0x00000008
+#define VIVS_MC_PROFILE_CONFIG1_PA_RESET 0x0000000f
+#define VIVS_MC_PROFILE_CONFIG1_SE__MASK 0x00000f00
+#define VIVS_MC_PROFILE_CONFIG1_SE__SHIFT 8
+#define VIVS_MC_PROFILE_CONFIG1_SE_CULLED_TRIANGLE_COUNT 0x00000000
+#define VIVS_MC_PROFILE_CONFIG1_SE_CULLED_LINES_COUNT 0x00000100
+#define VIVS_MC_PROFILE_CONFIG1_SE_RESET 0x00000f00
+#define VIVS_MC_PROFILE_CONFIG1_RA__MASK 0x000f0000
+#define VIVS_MC_PROFILE_CONFIG1_RA__SHIFT 16
+#define VIVS_MC_PROFILE_CONFIG1_RA_VALID_PIXEL_COUNT 0x00000000
+#define VIVS_MC_PROFILE_CONFIG1_RA_TOTAL_QUAD_COUNT 0x00010000
+#define VIVS_MC_PROFILE_CONFIG1_RA_VALID_QUAD_COUNT_AFTER_EARLY_Z 0x00020000
+#define VIVS_MC_PROFILE_CONFIG1_RA_TOTAL_PRIMITIVE_COUNT 0x00030000
+#define VIVS_MC_PROFILE_CONFIG1_RA_PIPE_CACHE_MISS_COUNTER 0x00090000
+#define VIVS_MC_PROFILE_CONFIG1_RA_PREFETCH_CACHE_MISS_COUNTER 0x000a0000
+#define VIVS_MC_PROFILE_CONFIG1_RA_CULLED_QUAD_COUNT 0x000b0000
+#define VIVS_MC_PROFILE_CONFIG1_RA_RESET 0x000f0000
+#define VIVS_MC_PROFILE_CONFIG1_TX__MASK 0x0f000000
+#define VIVS_MC_PROFILE_CONFIG1_TX__SHIFT 24
+#define VIVS_MC_PROFILE_CONFIG1_TX_TOTAL_BILINEAR_REQUESTS 0x00000000
+#define VIVS_MC_PROFILE_CONFIG1_TX_TOTAL_TRILINEAR_REQUESTS 0x01000000
+#define VIVS_MC_PROFILE_CONFIG1_TX_TOTAL_DISCARDED_TEXTURE_REQUESTS 0x02000000
+#define VIVS_MC_PROFILE_CONFIG1_TX_TOTAL_TEXTURE_REQUESTS 0x03000000
+#define VIVS_MC_PROFILE_CONFIG1_TX_UNKNOWN 0x04000000
+#define VIVS_MC_PROFILE_CONFIG1_TX_MEM_READ_COUNT 0x05000000
+#define VIVS_MC_PROFILE_CONFIG1_TX_MEM_READ_IN_8B_COUNT 0x06000000
+#define VIVS_MC_PROFILE_CONFIG1_TX_CACHE_MISS_COUNT 0x07000000
+#define VIVS_MC_PROFILE_CONFIG1_TX_CACHE_HIT_TEXEL_COUNT 0x08000000
+#define VIVS_MC_PROFILE_CONFIG1_TX_CACHE_MISS_TEXEL_COUNT 0x09000000
+#define VIVS_MC_PROFILE_CONFIG1_TX_RESET 0x0f000000
+
+#define VIVS_MC_PROFILE_CONFIG2 0x00000478
+#define VIVS_MC_PROFILE_CONFIG2_MC__MASK 0x0000000f
+#define VIVS_MC_PROFILE_CONFIG2_MC__SHIFT 0
+#define VIVS_MC_PROFILE_CONFIG2_MC_TOTAL_READ_REQ_8B_FROM_PIPELINE 0x00000001
+#define VIVS_MC_PROFILE_CONFIG2_MC_TOTAL_READ_REQ_8B_FROM_IP 0x00000002
+#define VIVS_MC_PROFILE_CONFIG2_MC_TOTAL_WRITE_REQ_8B_FROM_PIPELINE 0x00000003
+#define VIVS_MC_PROFILE_CONFIG2_MC_RESET 0x0000000f
+#define VIVS_MC_PROFILE_CONFIG2_HI__MASK 0x00000f00
+#define VIVS_MC_PROFILE_CONFIG2_HI__SHIFT 8
+#define VIVS_MC_PROFILE_CONFIG2_HI_AXI_CYCLES_READ_REQUEST_STALLED 0x00000000
+#define VIVS_MC_PROFILE_CONFIG2_HI_AXI_CYCLES_WRITE_REQUEST_STALLED 0x00000100
+#define VIVS_MC_PROFILE_CONFIG2_HI_AXI_CYCLES_WRITE_DATA_STALLED 0x00000200
+#define VIVS_MC_PROFILE_CONFIG2_HI_RESET 0x00000f00
+
+#define VIVS_MC_PROFILE_CONFIG3 0x0000047c
+
+#define VIVS_MC_BUS_CONFIG 0x00000480
+
+#define VIVS_MC_START_COMPOSITION 0x00000554
+
+#define VIVS_MC_128B_MERGE 0x00000558
+
+
+#endif /* STATE_HI_XML */
diff --git a/include/uapi/drm/etnaviv_drm.h b/include/uapi/drm/etnaviv_drm.h
new file mode 100644
index 000000000000..f7b5ac6f3842
--- /dev/null
+++ b/include/uapi/drm/etnaviv_drm.h
@@ -0,0 +1,225 @@
+/*
+ * Copyright (C) 2013 Red Hat
+ * Author: Rob Clark <robdclark@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ETNAVIV_DRM_H__
+#define __ETNAVIV_DRM_H__
+
+#include <stddef.h>
+#include <drm/drm.h>
+
+/* Please note that modifications to all structs defined here are
+ * subject to backwards-compatibility constraints:
+ *  1) Do not use pointers, use uint64_t instead for 32 bit / 64 bit
+ *     user/kernel compatibility
+ *  2) Keep fields aligned to their size
+ *  3) Because of how drm_ioctl() works, we can add new fields at
+ *     the end of an ioctl if some care is taken: drm_ioctl() will
+ *     zero out the new fields at the tail of the ioctl, so a zero
+ *     value should have a backwards compatible meaning.  And for
+ *     output params, userspace won't see the newly added output
+ *     fields.. so that has to be somehow ok.
+ */
+
+#define ETNA_PIPE_3D      0x00
+#define ETNA_PIPE_2D      0x01
+#define ETNA_PIPE_VG      0x02
+
+#define ETNA_MAX_PIPES    3
+
+/* timeouts are specified in clock-monotonic absolute times (to simplify
+ * restarting interrupted ioctls).  The following struct is logically the
+ * same as 'struct timespec' but 32/64b ABI safe.
+ */
+struct drm_etnaviv_timespec {
+	int64_t tv_sec;          /* seconds */
+	int64_t tv_nsec;         /* nanoseconds */
+};
+
+#define ETNAVIV_PARAM_GPU_MODEL                     0x01
+#define ETNAVIV_PARAM_GPU_REVISION                  0x02
+#define ETNAVIV_PARAM_GPU_FEATURES_0                0x03
+#define ETNAVIV_PARAM_GPU_FEATURES_1                0x04
+#define ETNAVIV_PARAM_GPU_FEATURES_2                0x05
+#define ETNAVIV_PARAM_GPU_FEATURES_3                0x06
+#define ETNAVIV_PARAM_GPU_FEATURES_4                0x07
+
+#define ETNAVIV_PARAM_GPU_STREAM_COUNT              0x10
+#define ETNAVIV_PARAM_GPU_REGISTER_MAX              0x11
+#define ETNAVIV_PARAM_GPU_THREAD_COUNT              0x12
+#define ETNAVIV_PARAM_GPU_VERTEX_CACHE_SIZE         0x13
+#define ETNAVIV_PARAM_GPU_SHADER_CORE_COUNT         0x14
+#define ETNAVIV_PARAM_GPU_PIXEL_PIPES               0x15
+#define ETNAVIV_PARAM_GPU_VERTEX_OUTPUT_BUFFER_SIZE 0x16
+#define ETNAVIV_PARAM_GPU_BUFFER_SIZE               0x17
+#define ETNAVIV_PARAM_GPU_INSTRUCTION_COUNT         0x18
+#define ETNAVIV_PARAM_GPU_NUM_CONSTANTS             0x19
+
+//#define MSM_PARAM_GMEM_SIZE 0x02
+
+struct drm_etnaviv_param {
+	uint32_t pipe;           /* in, ETNA_PIPE_x */
+	uint32_t param;          /* in, ETNAVIV_PARAM_x */
+	uint64_t value;          /* out (get_param) or in (set_param) */
+};
+
+/*
+ * GEM buffers:
+ */
+
+#define ETNA_BO_CMDSTREAM    0x00000001
+#define ETNA_BO_CACHE_MASK   0x000f0000
+/* cache modes */
+#define ETNA_BO_CACHED       0x00010000
+#define ETNA_BO_WC           0x00020000
+#define ETNA_BO_UNCACHED     0x00040000
+
+struct drm_etnaviv_gem_new {
+	uint64_t size;           /* in */
+	uint32_t flags;          /* in, mask of ETNA_BO_x */
+	uint32_t handle;         /* out */
+};
+
+struct drm_etnaviv_gem_info {
+	uint32_t handle;         /* in */
+	uint32_t pad;
+	uint64_t offset;         /* out, offset to pass to mmap() */
+};
+
+#define ETNA_PREP_READ        0x01
+#define ETNA_PREP_WRITE       0x02
+#define ETNA_PREP_NOSYNC      0x04
+
+struct drm_etnaviv_gem_cpu_prep {
+	uint32_t handle;         /* in */
+	uint32_t op;             /* in, mask of ETNA_PREP_x */
+	struct drm_etnaviv_timespec timeout;   /* in */
+};
+
+struct drm_etnaviv_gem_cpu_fini {
+	uint32_t handle;         /* in */
+};
+
+/*
+ * Cmdstream Submission:
+ */
+
+/* The value written into the cmdstream is logically:
+ *
+ *   ((relocbuf->gpuaddr + reloc_offset) << shift) | or
+ *
+ * When we have GPU's w/ >32bit ptrs, it should be possible to deal
+ * with this by emit'ing two reloc entries with appropriate shift
+ * values.  Or a new ETNA_SUBMIT_CMD_x type would also be an option.
+ *
+ * NOTE that reloc's must be sorted by order of increasing submit_offset,
+ * otherwise EINVAL.
+ */
+struct drm_etnaviv_gem_submit_reloc {
+	uint32_t submit_offset;  /* in, offset from submit_bo */
+	uint32_t or;             /* in, value OR'd with result */
+	int32_t  shift;          /* in, amount of left shift (can be negative) */
+	uint32_t reloc_idx;      /* in, index of reloc_bo buffer */
+	uint64_t reloc_offset;   /* in, offset from start of reloc_bo */
+};
+
+/* submit-types:
+ *   BUF - this cmd buffer is executed normally.
+ *   IB_TARGET_BUF - this cmd buffer is an IB target.  Reloc's are
+ *      processed normally, but the kernel does not setup an IB to
+ *      this buffer in the first-level ringbuffer
+ *   CTX_RESTORE_BUF - only executed if there has been a GPU context
+ *      switch since the last SUBMIT ioctl
+ */
+#define ETNA_SUBMIT_CMD_BUF             0x0001
+#define ETNA_SUBMIT_CMD_IB_TARGET_BUF   0x0002
+#define ETNA_SUBMIT_CMD_CTX_RESTORE_BUF 0x0003
+struct drm_etnaviv_gem_submit_cmd {
+	uint32_t type;           /* in, one of ETNA_SUBMIT_CMD_x */
+	uint32_t submit_idx;     /* in, index of submit_bo cmdstream buffer */
+	uint32_t submit_offset;  /* in, offset into submit_bo */
+	uint32_t size;           /* in, cmdstream size */
+	uint32_t pad;
+	uint32_t nr_relocs;      /* in, number of submit_reloc's */
+	uint64_t __user relocs;  /* in, ptr to array of submit_reloc's */
+};
+
+/* Each buffer referenced elsewhere in the cmdstream submit (ie. the
+ * cmdstream buffer(s) themselves or reloc entries) has one (and only
+ * one) entry in the submit->bos[] table.
+ *
+ * As an optimization, the current buffer (gpu virtual address) can be
+ * passed back through the 'presumed' field.  If on a subsequent reloc,
+ * userspace passes back a 'presumed' address that is still valid,
+ * then patching the cmdstream for this entry is skipped.  This can
+ * avoid kernel needing to map/access the cmdstream bo in the common
+ * case.
+ */
+#define ETNA_SUBMIT_BO_READ             0x0001
+#define ETNA_SUBMIT_BO_WRITE            0x0002
+struct drm_etnaviv_gem_submit_bo {
+	uint32_t flags;          /* in, mask of ETNA_SUBMIT_BO_x */
+	uint32_t handle;         /* in, GEM handle */
+	uint64_t presumed;       /* in/out, presumed buffer address */
+};
+
+/* Each cmdstream submit consists of a table of buffers involved, and
+ * one or more cmdstream buffers.  This allows for conditional execution
+ * (context-restore), and IB buffers needed for per tile/bin draw cmds.
+ */
+struct drm_etnaviv_gem_submit {
+	uint32_t pipe;           /* in, ETNA_PIPE_x */
+	uint32_t fence;          /* out */
+	uint32_t nr_bos;         /* in, number of submit_bo's */
+	uint32_t nr_cmds;        /* in, number of submit_cmd's */
+	uint64_t __user bos;     /* in, ptr to array of submit_bo's */
+	uint64_t __user cmds;    /* in, ptr to array of submit_cmd's */
+};
+
+/* The normal way to synchronize with the GPU is just to CPU_PREP on
+ * a buffer if you need to access it from the CPU (other cmdstream
+ * submission from same or other contexts, PAGE_FLIP ioctl, etc, all
+ * handle the required synchronization under the hood).  This ioctl
+ * mainly just exists as a way to implement the gallium pipe_fence
+ * APIs without requiring a dummy bo to synchronize on.
+ */
+struct drm_etnaviv_wait_fence {
+	uint32_t pipe;           /* in, ETNA_PIPE_x */
+	uint32_t fence;          /* in */
+	struct drm_etnaviv_timespec timeout;   /* in */
+};
+
+#define DRM_ETNAVIV_GET_PARAM          0x00
+/* placeholder:
+#define DRM_MSM_SET_PARAM              0x01
+ */
+#define DRM_ETNAVIV_GEM_NEW            0x02
+#define DRM_ETNAVIV_GEM_INFO           0x03
+#define DRM_ETNAVIV_GEM_CPU_PREP       0x04
+#define DRM_ETNAVIV_GEM_CPU_FINI       0x05
+#define DRM_ETNAVIV_GEM_SUBMIT         0x06
+#define DRM_ETNAVIV_WAIT_FENCE         0x07
+#define DRM_ETNAVIV_NUM_IOCTLS         0x08
+
+#define DRM_IOCTL_ETNAVIV_GET_PARAM    DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GET_PARAM, struct drm_etnaviv_param)
+#define DRM_IOCTL_ETNAVIV_GEM_NEW      DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_NEW, struct drm_etnaviv_gem_new)
+#define DRM_IOCTL_ETNAVIV_GEM_INFO     DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_INFO, struct drm_etnaviv_gem_info)
+#define DRM_IOCTL_ETNAVIV_GEM_CPU_PREP DRM_IOW (DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_CPU_PREP, struct drm_etnaviv_gem_cpu_prep)
+#define DRM_IOCTL_ETNAVIV_GEM_CPU_FINI DRM_IOW (DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_CPU_FINI, struct drm_etnaviv_gem_cpu_fini)
+#define DRM_IOCTL_ETNAVIV_GEM_SUBMIT   DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_SUBMIT, struct drm_etnaviv_gem_submit)
+#define DRM_IOCTL_ETNAVIV_WAIT_FENCE   DRM_IOW (DRM_COMMAND_BASE + DRM_ETNAVIV_WAIT_FENCE, struct drm_etnaviv_wait_fence)
+
+#endif /* __ETNAVIV_DRM_H__ */
2015-04-02 17:29 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
From: Christian Gmeiner <christian.gmeiner@gmail.com>
This is a consolidation by Russell King of Christian's drm work.
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
 drivers/staging/Kconfig                      |   2 +
 drivers/staging/Makefile                     |   1 +
 drivers/staging/etnaviv/Kconfig              |  20 +
 drivers/staging/etnaviv/Makefile             |  17 +
 drivers/staging/etnaviv/cmdstream.xml.h      | 218 ++++++
 drivers/staging/etnaviv/common.xml.h         | 253 +++++++
 drivers/staging/etnaviv/etnaviv_buffer.c     | 201 ++++++
 drivers/staging/etnaviv/etnaviv_drv.c        | 621 +++++++++++++++++
 drivers/staging/etnaviv/etnaviv_drv.h        | 143 ++++
 drivers/staging/etnaviv/etnaviv_gem.c        | 706 +++++++++++++++++++
 drivers/staging/etnaviv/etnaviv_gem.h        | 100 +++
 drivers/staging/etnaviv/etnaviv_gem_prime.c  |  56 ++
 drivers/staging/etnaviv/etnaviv_gem_submit.c | 407 +++++++++++
 drivers/staging/etnaviv/etnaviv_gpu.c        | 984 +++++++++++++++++++++++++++
 drivers/staging/etnaviv/etnaviv_gpu.h        | 152 +++++
 drivers/staging/etnaviv/etnaviv_iommu.c      | 185 +++++
 drivers/staging/etnaviv/etnaviv_iommu.h      |  25 +
 drivers/staging/etnaviv/etnaviv_iommu_v2.c   |  32 +
 drivers/staging/etnaviv/etnaviv_iommu_v2.h   |  25 +
 drivers/staging/etnaviv/etnaviv_mmu.c        | 111 +++
 drivers/staging/etnaviv/etnaviv_mmu.h        |  37 +
 drivers/staging/etnaviv/state.xml.h          | 348 ++++++++++
 drivers/staging/etnaviv/state_hi.xml.h       | 405 +++++++++++
 include/uapi/drm/etnaviv_drm.h               | 225 ++++++
 24 files changed, 5274 insertions(+)
 create mode 100644 drivers/staging/etnaviv/Kconfig
 create mode 100644 drivers/staging/etnaviv/Makefile
 create mode 100644 drivers/staging/etnaviv/cmdstream.xml.h
 create mode 100644 drivers/staging/etnaviv/common.xml.h
 create mode 100644 drivers/staging/etnaviv/etnaviv_buffer.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_drv.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_drv.h
 create mode 100644 drivers/staging/etnaviv/etnaviv_gem.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_gem.h
 create mode 100644 drivers/staging/etnaviv/etnaviv_gem_prime.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_gem_submit.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_gpu.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_gpu.h
 create mode 100644 drivers/staging/etnaviv/etnaviv_iommu.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_iommu.h
 create mode 100644 drivers/staging/etnaviv/etnaviv_iommu_v2.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_iommu_v2.h
 create mode 100644 drivers/staging/etnaviv/etnaviv_mmu.c
 create mode 100644 drivers/staging/etnaviv/etnaviv_mmu.h
 create mode 100644 drivers/staging/etnaviv/state.xml.h
 create mode 100644 drivers/staging/etnaviv/state_hi.xml.h
 create mode 100644 include/uapi/drm/etnaviv_drm.h
diff --git a/drivers/staging/Kconfig b/drivers/staging/Kconfig
index 45baa83be7ce..441b1afbfe4c 100644
--- a/drivers/staging/Kconfig
+++ b/drivers/staging/Kconfig
@@ -108,4 +108,6 @@ source "drivers/staging/fbtft/Kconfig"
source "drivers/staging/i2o/Kconfig"
+source "drivers/staging/etnaviv/Kconfig"
 endif # STAGING

diff --git a/drivers/staging/Makefile b/drivers/staging/Makefile
index 29160790841f..f53cf8412c0c 100644
--- a/drivers/staging/Makefile
+++ b/drivers/staging/Makefile
@@ -46,3 +46,4 @@ obj-$(CONFIG_UNISYSSPAR) += unisys/
 obj-$(CONFIG_COMMON_CLK_XLNX_CLKWZRD) += clocking-wizard/
 obj-$(CONFIG_FB_TFT) += fbtft/
 obj-$(CONFIG_I2O) += i2o/
+obj-$(CONFIG_DRM_ETNAVIV) += etnaviv/

diff --git a/drivers/staging/etnaviv/Kconfig b/drivers/staging/etnaviv/Kconfig
new file mode 100644
index 000000000000..6f034eda914c
--- /dev/null
+++ b/drivers/staging/etnaviv/Kconfig
@@ -0,0 +1,20 @@
+config DRM_ETNAVIV
tristate "etnaviv DRM"
depends on DRM
select SHMEM
select TMPFS
select IOMMU_API
select IOMMU_SUPPORT
default y
help
DRM driver for Vivante GPUs.
+config DRM_ETNAVIV_REGISTER_LOGGING
bool "etnaviv DRM register logging"
depends on DRM_ETNAVIV
default n
help
Compile in support for logging register reads/writes in a format
that can be parsed by envytools demsm tool. If enabled, register
logging can be switched on via etnaviv.reglog=y module param.
diff --git a/drivers/staging/etnaviv/Makefile b/drivers/staging/etnaviv/Makefile
new file mode 100644
index 000000000000..ef0cffabdcce
--- /dev/null
+++ b/drivers/staging/etnaviv/Makefile
@@ -0,0 +1,17 @@
+ccflags-y := -Iinclude/drm -Idrivers/staging/vivante
+ifeq (, $(findstring -W,$(EXTRA_CFLAGS)))
ccflags-y += -Werror
+endif
+etnaviv-y := \
etnaviv_drv.o \
etnaviv_gem.o \
etnaviv_gem_prime.o \
etnaviv_gem_submit.o \
etnaviv_gpu.o \
etnaviv_iommu.o \
etnaviv_iommu_v2.o \
etnaviv_mmu.o \
etnaviv_buffer.o
+obj-$(CONFIG_DRM_ETNAVIV) += etnaviv.o

diff --git a/drivers/staging/etnaviv/cmdstream.xml.h b/drivers/staging/etnaviv/cmdstream.xml.h
new file mode 100644
index 000000000000..844f82977e3e
--- /dev/null
+++ b/drivers/staging/etnaviv/cmdstream.xml.h
@@ -0,0 +1,218 @@
+#ifndef CMDSTREAM_XML
+#define CMDSTREAM_XML
+/* Autogenerated file, DO NOT EDIT manually!
+
+This file was generated by the rules-ng-ng headergen tool in this git repository:
+http://0x04.net/cgit/index.cgi/rules-ng-ng
+git clone git://0x04.net/rules-ng-ng
+
+The rules-ng-ng source files this header was generated from are:
+- /home/orion/projects/etna_viv/rnndb/cmdstream.xml ( 12589 bytes, from 2013-09-01 10:53:22)
+- /home/orion/projects/etna_viv/rnndb/common.xml    ( 18379 bytes, from 2014-01-27 15:58:05)
+
+Copyright (C) 2013
+*/
+#define FE_OPCODE_LOAD_STATE 0x00000001
+#define FE_OPCODE_END 0x00000002
+#define FE_OPCODE_NOP 0x00000003
+#define FE_OPCODE_DRAW_2D 0x00000004
+#define FE_OPCODE_DRAW_PRIMITIVES 0x00000005
+#define FE_OPCODE_DRAW_INDEXED_PRIMITIVES 0x00000006
+#define FE_OPCODE_WAIT 0x00000007
+#define FE_OPCODE_LINK 0x00000008
+#define FE_OPCODE_STALL 0x00000009
+#define FE_OPCODE_CALL 0x0000000a
+#define FE_OPCODE_RETURN 0x0000000b
+#define FE_OPCODE_CHIP_SELECT 0x0000000d
+#define PRIMITIVE_TYPE_POINTS 0x00000001
+#define PRIMITIVE_TYPE_LINES 0x00000002
+#define PRIMITIVE_TYPE_LINE_STRIP 0x00000003
+#define PRIMITIVE_TYPE_TRIANGLES 0x00000004
+#define PRIMITIVE_TYPE_TRIANGLE_STRIP 0x00000005
+#define PRIMITIVE_TYPE_TRIANGLE_FAN 0x00000006
+#define PRIMITIVE_TYPE_LINE_LOOP 0x00000007
+#define PRIMITIVE_TYPE_QUADS 0x00000008
+#define VIV_FE_LOAD_STATE 0x00000000
+#define VIV_FE_LOAD_STATE_HEADER 0x00000000
+#define VIV_FE_LOAD_STATE_HEADER_OP__MASK 0xf8000000
+#define VIV_FE_LOAD_STATE_HEADER_OP__SHIFT 27
+#define VIV_FE_LOAD_STATE_HEADER_OP_LOAD_STATE 0x08000000
+#define VIV_FE_LOAD_STATE_HEADER_FIXP 0x04000000
+#define VIV_FE_LOAD_STATE_HEADER_COUNT__MASK 0x03ff0000
+#define VIV_FE_LOAD_STATE_HEADER_COUNT__SHIFT 16
+#define VIV_FE_LOAD_STATE_HEADER_COUNT(x) (((x) << VIV_FE_LOAD_STATE_HEADER_COUNT__SHIFT) & VIV_FE_LOAD_STATE_HEADER_COUNT__MASK)
+#define VIV_FE_LOAD_STATE_HEADER_OFFSET__MASK 0x0000ffff
+#define VIV_FE_LOAD_STATE_HEADER_OFFSET__SHIFT 0
+#define VIV_FE_LOAD_STATE_HEADER_OFFSET(x) (((x) << VIV_FE_LOAD_STATE_HEADER_OFFSET__SHIFT) & VIV_FE_LOAD_STATE_HEADER_OFFSET__MASK)
+#define VIV_FE_LOAD_STATE_HEADER_OFFSET__SHR 2
+#define VIV_FE_END 0x00000000
+#define VIV_FE_END_HEADER 0x00000000
+#define VIV_FE_END_HEADER_EVENT_ID__MASK 0x0000001f
+#define VIV_FE_END_HEADER_EVENT_ID__SHIFT 0
+#define VIV_FE_END_HEADER_EVENT_ID(x) (((x) << VIV_FE_END_HEADER_EVENT_ID__SHIFT) & VIV_FE_END_HEADER_EVENT_ID__MASK)
+#define VIV_FE_END_HEADER_EVENT_ENABLE 0x00000100
+#define VIV_FE_END_HEADER_OP__MASK 0xf8000000
+#define VIV_FE_END_HEADER_OP__SHIFT 27
+#define VIV_FE_END_HEADER_OP_END 0x10000000
+#define VIV_FE_NOP 0x00000000
+#define VIV_FE_NOP_HEADER 0x00000000
+#define VIV_FE_NOP_HEADER_OP__MASK 0xf8000000
+#define VIV_FE_NOP_HEADER_OP__SHIFT 27
+#define VIV_FE_NOP_HEADER_OP_NOP 0x18000000
+#define VIV_FE_DRAW_2D 0x00000000
+#define VIV_FE_DRAW_2D_HEADER 0x00000000
+#define VIV_FE_DRAW_2D_HEADER_COUNT__MASK 0x0000ff00
+#define VIV_FE_DRAW_2D_HEADER_COUNT__SHIFT 8
+#define VIV_FE_DRAW_2D_HEADER_COUNT(x) (((x) << VIV_FE_DRAW_2D_HEADER_COUNT__SHIFT) & VIV_FE_DRAW_2D_HEADER_COUNT__MASK)
+#define VIV_FE_DRAW_2D_HEADER_DATA_COUNT__MASK 0x07ff0000
+#define VIV_FE_DRAW_2D_HEADER_DATA_COUNT__SHIFT 16
+#define VIV_FE_DRAW_2D_HEADER_DATA_COUNT(x) (((x) << VIV_FE_DRAW_2D_HEADER_DATA_COUNT__SHIFT) & VIV_FE_DRAW_2D_HEADER_DATA_COUNT__MASK)
+#define VIV_FE_DRAW_2D_HEADER_OP__MASK 0xf8000000
+#define VIV_FE_DRAW_2D_HEADER_OP__SHIFT 27
+#define VIV_FE_DRAW_2D_HEADER_OP_DRAW_2D 0x20000000
+#define VIV_FE_DRAW_2D_TOP_LEFT 0x00000008
+#define VIV_FE_DRAW_2D_TOP_LEFT_X__MASK 0x0000ffff
+#define VIV_FE_DRAW_2D_TOP_LEFT_X__SHIFT 0
+#define VIV_FE_DRAW_2D_TOP_LEFT_X(x) (((x) << VIV_FE_DRAW_2D_TOP_LEFT_X__SHIFT) & VIV_FE_DRAW_2D_TOP_LEFT_X__MASK)
+#define VIV_FE_DRAW_2D_TOP_LEFT_Y__MASK 0xffff0000
+#define VIV_FE_DRAW_2D_TOP_LEFT_Y__SHIFT 16
+#define VIV_FE_DRAW_2D_TOP_LEFT_Y(x) (((x) << VIV_FE_DRAW_2D_TOP_LEFT_Y__SHIFT) & VIV_FE_DRAW_2D_TOP_LEFT_Y__MASK)
+#define VIV_FE_DRAW_2D_BOTTOM_RIGHT 0x0000000c
+#define VIV_FE_DRAW_2D_BOTTOM_RIGHT_X__MASK 0x0000ffff
+#define VIV_FE_DRAW_2D_BOTTOM_RIGHT_X__SHIFT 0
+#define VIV_FE_DRAW_2D_BOTTOM_RIGHT_X(x) (((x) << VIV_FE_DRAW_2D_BOTTOM_RIGHT_X__SHIFT) & VIV_FE_DRAW_2D_BOTTOM_RIGHT_X__MASK)
+#define VIV_FE_DRAW_2D_BOTTOM_RIGHT_Y__MASK 0xffff0000
+#define VIV_FE_DRAW_2D_BOTTOM_RIGHT_Y__SHIFT 16
+#define VIV_FE_DRAW_2D_BOTTOM_RIGHT_Y(x) (((x) << VIV_FE_DRAW_2D_BOTTOM_RIGHT_Y__SHIFT) & VIV_FE_DRAW_2D_BOTTOM_RIGHT_Y__MASK)
+#define VIV_FE_DRAW_PRIMITIVES 0x00000000
+#define VIV_FE_DRAW_PRIMITIVES_HEADER 0x00000000
+#define VIV_FE_DRAW_PRIMITIVES_HEADER_OP__MASK 0xf8000000
+#define VIV_FE_DRAW_PRIMITIVES_HEADER_OP__SHIFT 27
+#define VIV_FE_DRAW_PRIMITIVES_HEADER_OP_DRAW_PRIMITIVES 0x28000000
+#define VIV_FE_DRAW_PRIMITIVES_COMMAND 0x00000004
+#define VIV_FE_DRAW_PRIMITIVES_COMMAND_TYPE__MASK 0x000000ff
+#define VIV_FE_DRAW_PRIMITIVES_COMMAND_TYPE__SHIFT 0
+#define VIV_FE_DRAW_PRIMITIVES_COMMAND_TYPE(x) (((x) << VIV_FE_DRAW_PRIMITIVES_COMMAND_TYPE__SHIFT) & VIV_FE_DRAW_PRIMITIVES_COMMAND_TYPE__MASK)
+#define VIV_FE_DRAW_PRIMITIVES_START 0x00000008
+#define VIV_FE_DRAW_PRIMITIVES_COUNT 0x0000000c
+#define VIV_FE_DRAW_INDEXED_PRIMITIVES 0x00000000
+#define VIV_FE_DRAW_INDEXED_PRIMITIVES_HEADER 0x00000000
+#define VIV_FE_DRAW_INDEXED_PRIMITIVES_HEADER_OP__MASK 0xf8000000
+#define VIV_FE_DRAW_INDEXED_PRIMITIVES_HEADER_OP__SHIFT 27
+#define VIV_FE_DRAW_INDEXED_PRIMITIVES_HEADER_OP_DRAW_INDEXED_PRIMITIVES 0x30000000
+#define VIV_FE_DRAW_INDEXED_PRIMITIVES_COMMAND 0x00000004
+#define VIV_FE_DRAW_INDEXED_PRIMITIVES_COMMAND_TYPE__MASK 0x000000ff
+#define VIV_FE_DRAW_INDEXED_PRIMITIVES_COMMAND_TYPE__SHIFT 0
+#define VIV_FE_DRAW_INDEXED_PRIMITIVES_COMMAND_TYPE(x) (((x) << VIV_FE_DRAW_INDEXED_PRIMITIVES_COMMAND_TYPE__SHIFT) & VIV_FE_DRAW_INDEXED_PRIMITIVES_COMMAND_TYPE__MASK)
+#define VIV_FE_DRAW_INDEXED_PRIMITIVES_START 0x00000008
+#define VIV_FE_DRAW_INDEXED_PRIMITIVES_COUNT 0x0000000c
+#define VIV_FE_DRAW_INDEXED_PRIMITIVES_OFFSET 0x00000010
+#define VIV_FE_WAIT 0x00000000
+#define VIV_FE_WAIT_HEADER 0x00000000
+#define VIV_FE_WAIT_HEADER_DELAY__MASK 0x0000ffff
+#define VIV_FE_WAIT_HEADER_DELAY__SHIFT 0
+#define VIV_FE_WAIT_HEADER_DELAY(x) (((x) << VIV_FE_WAIT_HEADER_DELAY__SHIFT) & VIV_FE_WAIT_HEADER_DELAY__MASK)
+#define VIV_FE_WAIT_HEADER_OP__MASK 0xf8000000
+#define VIV_FE_WAIT_HEADER_OP__SHIFT 27
+#define VIV_FE_WAIT_HEADER_OP_WAIT 0x38000000
+#define VIV_FE_LINK 0x00000000
+#define VIV_FE_LINK_HEADER 0x00000000
+#define VIV_FE_LINK_HEADER_PREFETCH__MASK 0x0000ffff
+#define VIV_FE_LINK_HEADER_PREFETCH__SHIFT 0
+#define VIV_FE_LINK_HEADER_PREFETCH(x) (((x) << VIV_FE_LINK_HEADER_PREFETCH__SHIFT) & VIV_FE_LINK_HEADER_PREFETCH__MASK)
+#define VIV_FE_LINK_HEADER_OP__MASK 0xf8000000
+#define VIV_FE_LINK_HEADER_OP__SHIFT 27
+#define VIV_FE_LINK_HEADER_OP_LINK 0x40000000
+#define VIV_FE_LINK_ADDRESS 0x00000004
+#define VIV_FE_STALL 0x00000000
+#define VIV_FE_STALL_HEADER 0x00000000
+#define VIV_FE_STALL_HEADER_OP__MASK 0xf8000000
+#define VIV_FE_STALL_HEADER_OP__SHIFT 27
+#define VIV_FE_STALL_HEADER_OP_STALL 0x48000000
+#define VIV_FE_STALL_TOKEN 0x00000004
+#define VIV_FE_STALL_TOKEN_FROM__MASK 0x0000001f
+#define VIV_FE_STALL_TOKEN_FROM__SHIFT 0
+#define VIV_FE_STALL_TOKEN_FROM(x) (((x) << VIV_FE_STALL_TOKEN_FROM__SHIFT) & VIV_FE_STALL_TOKEN_FROM__MASK)
+#define VIV_FE_STALL_TOKEN_TO__MASK 0x00001f00
+#define VIV_FE_STALL_TOKEN_TO__SHIFT 8
+#define VIV_FE_STALL_TOKEN_TO(x) (((x) << VIV_FE_STALL_TOKEN_TO__SHIFT) & VIV_FE_STALL_TOKEN_TO__MASK)
+#define VIV_FE_CALL 0x00000000
+#define VIV_FE_CALL_HEADER 0x00000000
+#define VIV_FE_CALL_HEADER_PREFETCH__MASK 0x0000ffff
+#define VIV_FE_CALL_HEADER_PREFETCH__SHIFT 0
+#define VIV_FE_CALL_HEADER_PREFETCH(x) (((x) << VIV_FE_CALL_HEADER_PREFETCH__SHIFT) & VIV_FE_CALL_HEADER_PREFETCH__MASK)
+#define VIV_FE_CALL_HEADER_OP__MASK 0xf8000000
+#define VIV_FE_CALL_HEADER_OP__SHIFT 27
+#define VIV_FE_CALL_HEADER_OP_CALL 0x50000000
+#define VIV_FE_CALL_ADDRESS 0x00000004
+#define VIV_FE_CALL_RETURN_PREFETCH 0x00000008
+#define VIV_FE_CALL_RETURN_ADDRESS 0x0000000c
+#define VIV_FE_RETURN 0x00000000
+#define VIV_FE_RETURN_HEADER 0x00000000
+#define VIV_FE_RETURN_HEADER_OP__MASK 0xf8000000
+#define VIV_FE_RETURN_HEADER_OP__SHIFT 27
+#define VIV_FE_RETURN_HEADER_OP_RETURN 0x58000000
+#define VIV_FE_CHIP_SELECT 0x00000000
+#define VIV_FE_CHIP_SELECT_HEADER 0x00000000
+#define VIV_FE_CHIP_SELECT_HEADER_OP__MASK 0xf8000000
+#define VIV_FE_CHIP_SELECT_HEADER_OP__SHIFT 27
+#define VIV_FE_CHIP_SELECT_HEADER_OP_CHIP_SELECT 0x68000000
+#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP15 0x00008000
+#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP14 0x00004000
+#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP13 0x00002000
+#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP12 0x00001000
+#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP11 0x00000800
+#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP10 0x00000400
+#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP9 0x00000200
+#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP8 0x00000100
+#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP7 0x00000080
+#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP6 0x00000040
+#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP5 0x00000020
+#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP4 0x00000010
+#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP3 0x00000008
+#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP2 0x00000004
+#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP1 0x00000002
+#define VIV_FE_CHIP_SELECT_HEADER_ENABLE_CHIP0 0x00000001
+#endif /* CMDSTREAM_XML */

diff --git a/drivers/staging/etnaviv/common.xml.h b/drivers/staging/etnaviv/common.xml.h
new file mode 100644
index 000000000000..36fa0e4cf56b
--- /dev/null
+++ b/drivers/staging/etnaviv/common.xml.h
@@ -0,0 +1,253 @@
+#ifndef COMMON_XML
+#define COMMON_XML
+/* Autogenerated file, DO NOT EDIT manually!
+
+This file was generated by the rules-ng-ng headergen tool in this git repository:
+http://0x04.net/cgit/index.cgi/rules-ng-ng
+git clone git://0x04.net/rules-ng-ng
+
+The rules-ng-ng source files this header was generated from are:
+- /home/orion/projects/etna_viv/rnndb/state.xml    ( 18526 bytes, from 2013-09-11 16:52:32)
+- /home/orion/projects/etna_viv/rnndb/common.xml   ( 18379 bytes, from 2014-01-27 15:58:05)
+- /home/orion/projects/etna_viv/rnndb/state_hi.xml ( 22236 bytes, from 2014-01-27 15:56:46)
+- /home/orion/projects/etna_viv/rnndb/state_2d.xml ( 51191 bytes, from 2013-10-04 06:36:55)
+- /home/orion/projects/etna_viv/rnndb/state_3d.xml ( 54570 bytes, from 2013-10-12 15:25:03)
+- /home/orion/projects/etna_viv/rnndb/state_vg.xml (  5942 bytes, from 2013-09-01 10:53:22)
+
+Copyright (C) 2014
+*/
+#define PIPE_ID_PIPE_3D 0x00000000
+#define PIPE_ID_PIPE_2D 0x00000001
+#define SYNC_RECIPIENT_FE 0x00000001
+#define SYNC_RECIPIENT_RA 0x00000005
+#define SYNC_RECIPIENT_PE 0x00000007
+#define SYNC_RECIPIENT_DE 0x0000000b
+#define SYNC_RECIPIENT_VG 0x0000000f
+#define SYNC_RECIPIENT_TESSELATOR 0x00000010
+#define SYNC_RECIPIENT_VG2 0x00000011
+#define SYNC_RECIPIENT_TESSELATOR2 0x00000012
+#define SYNC_RECIPIENT_VG3 0x00000013
+#define SYNC_RECIPIENT_TESSELATOR3 0x00000014
+#define ENDIAN_MODE_NO_SWAP 0x00000000
+#define ENDIAN_MODE_SWAP_16 0x00000001
+#define ENDIAN_MODE_SWAP_32 0x00000002
+#define chipModel_GC300 0x00000300
+#define chipModel_GC320 0x00000320
+#define chipModel_GC350 0x00000350
+#define chipModel_GC355 0x00000355
+#define chipModel_GC400 0x00000400
+#define chipModel_GC410 0x00000410
+#define chipModel_GC420 0x00000420
+#define chipModel_GC450 0x00000450
+#define chipModel_GC500 0x00000500
+#define chipModel_GC530 0x00000530
+#define chipModel_GC600 0x00000600
+#define chipModel_GC700 0x00000700
+#define chipModel_GC800 0x00000800
+#define chipModel_GC860 0x00000860
+#define chipModel_GC880 0x00000880
+#define chipModel_GC1000 0x00001000
+#define chipModel_GC2000 0x00002000
+#define chipModel_GC2100 0x00002100
+#define chipModel_GC4000 0x00004000
+#define RGBA_BITS_R 0x00000001
+#define RGBA_BITS_G 0x00000002
+#define RGBA_BITS_B 0x00000004
+#define RGBA_BITS_A 0x00000008
+#define chipFeatures_FAST_CLEAR 0x00000001
+#define chipFeatures_SPECIAL_ANTI_ALIASING 0x00000002
+#define chipFeatures_PIPE_3D 0x00000004
+#define chipFeatures_DXT_TEXTURE_COMPRESSION 0x00000008
+#define chipFeatures_DEBUG_MODE 0x00000010
+#define chipFeatures_Z_COMPRESSION 0x00000020
+#define chipFeatures_YUV420_SCALER 0x00000040
+#define chipFeatures_MSAA 0x00000080
+#define chipFeatures_DC 0x00000100
+#define chipFeatures_PIPE_2D 0x00000200
+#define chipFeatures_ETC1_TEXTURE_COMPRESSION 0x00000400
+#define chipFeatures_FAST_SCALER 0x00000800
+#define chipFeatures_HIGH_DYNAMIC_RANGE 0x00001000
+#define chipFeatures_YUV420_TILER 0x00002000
+#define chipFeatures_MODULE_CG 0x00004000
+#define chipFeatures_MIN_AREA 0x00008000
+#define chipFeatures_NO_EARLY_Z 0x00010000
+#define chipFeatures_NO_422_TEXTURE 0x00020000
+#define chipFeatures_BUFFER_INTERLEAVING 0x00040000
+#define chipFeatures_BYTE_WRITE_2D 0x00080000
+#define chipFeatures_NO_SCALER 0x00100000
+#define chipFeatures_YUY2_AVERAGING 0x00200000
+#define chipFeatures_HALF_PE_CACHE 0x00400000
+#define chipFeatures_HALF_TX_CACHE 0x00800000
+#define chipFeatures_YUY2_RENDER_TARGET 0x01000000
+#define chipFeatures_MEM32 0x02000000
+#define chipFeatures_PIPE_VG 0x04000000
+#define chipFeatures_VGTS 0x08000000
+#define chipFeatures_FE20 0x10000000
+#define chipFeatures_BYTE_WRITE_3D 0x20000000
+#define chipFeatures_RS_YUV_TARGET 0x40000000
+#define chipFeatures_32_BIT_INDICES 0x80000000
+#define chipMinorFeatures0_FLIP_Y 0x00000001
+#define chipMinorFeatures0_DUAL_RETURN_BUS 0x00000002
+#define chipMinorFeatures0_ENDIANNESS_CONFIG 0x00000004
+#define chipMinorFeatures0_TEXTURE_8K 0x00000008
+#define chipMinorFeatures0_CORRECT_TEXTURE_CONVERTER 0x00000010
+#define chipMinorFeatures0_SPECIAL_MSAA_LOD 0x00000020
+#define chipMinorFeatures0_FAST_CLEAR_FLUSH 0x00000040
+#define chipMinorFeatures0_2DPE20 0x00000080
+#define chipMinorFeatures0_CORRECT_AUTO_DISABLE 0x00000100
+#define chipMinorFeatures0_RENDERTARGET_8K 0x00000200
+#define chipMinorFeatures0_2BITPERTILE 0x00000400
+#define chipMinorFeatures0_SEPARATE_TILE_STATUS_WHEN_INTERLEAVED 0x00000800
+#define chipMinorFeatures0_SUPER_TILED 0x00001000
+#define chipMinorFeatures0_VG_20 0x00002000
+#define chipMinorFeatures0_TS_EXTENDED_COMMANDS 0x00004000
+#define chipMinorFeatures0_COMPRESSION_FIFO_FIXED 0x00008000
+#define chipMinorFeatures0_HAS_SIGN_FLOOR_CEIL 0x00010000
+#define chipMinorFeatures0_VG_FILTER 0x00020000
+#define chipMinorFeatures0_VG_21 0x00040000
+#define chipMinorFeatures0_SHADER_HAS_W 0x00080000
+#define chipMinorFeatures0_HAS_SQRT_TRIG 0x00100000
+#define chipMinorFeatures0_MORE_MINOR_FEATURES 0x00200000
+#define chipMinorFeatures0_MC20 0x00400000
+#define chipMinorFeatures0_MSAA_SIDEBAND 0x00800000
+#define chipMinorFeatures0_BUG_FIXES0 0x01000000
+#define chipMinorFeatures0_VAA 0x02000000
+#define chipMinorFeatures0_BYPASS_IN_MSAA 0x04000000
+#define chipMinorFeatures0_HZ 0x08000000
+#define chipMinorFeatures0_NEW_TEXTURE 0x10000000
+#define chipMinorFeatures0_2D_A8_TARGET 0x20000000
+#define chipMinorFeatures0_CORRECT_STENCIL 0x40000000
+#define chipMinorFeatures0_ENHANCE_VR 0x80000000
+#define chipMinorFeatures1_RSUV_SWIZZLE 0x00000001
+#define chipMinorFeatures1_V2_COMPRESSION 0x00000002
+#define chipMinorFeatures1_VG_DOUBLE_BUFFER 0x00000004
+#define chipMinorFeatures1_EXTRA_EVENT_STATES 0x00000008
+#define chipMinorFeatures1_NO_STRIPING_NEEDED 0x00000010
+#define chipMinorFeatures1_TEXTURE_STRIDE 0x00000020
+#define chipMinorFeatures1_BUG_FIXES3 0x00000040
+#define chipMinorFeatures1_AUTO_DISABLE 0x00000080
+#define chipMinorFeatures1_AUTO_RESTART_TS 0x00000100
+#define chipMinorFeatures1_DISABLE_PE_GATING 0x00000200
+#define chipMinorFeatures1_L2_WINDOWING 0x00000400
+#define chipMinorFeatures1_HALF_FLOAT 0x00000800
+#define chipMinorFeatures1_PIXEL_DITHER 0x00001000
+#define chipMinorFeatures1_TWO_STENCIL_REFERENCE 0x00002000
+#define chipMinorFeatures1_EXTENDED_PIXEL_FORMAT 0x00004000
+#define chipMinorFeatures1_CORRECT_MIN_MAX_DEPTH 0x00008000
+#define chipMinorFeatures1_2D_DITHER 0x00010000
+#define chipMinorFeatures1_BUG_FIXES5 0x00020000
+#define chipMinorFeatures1_NEW_2D 0x00040000
+#define chipMinorFeatures1_NEW_FP 0x00080000
+#define chipMinorFeatures1_TEXTURE_HALIGN 0x00100000
+#define chipMinorFeatures1_NON_POWER_OF_TWO 0x00200000
+#define chipMinorFeatures1_LINEAR_TEXTURE_SUPPORT 0x00400000
+#define chipMinorFeatures1_HALTI0 0x00800000
+#define chipMinorFeatures1_CORRECT_OVERFLOW_VG 0x01000000
+#define chipMinorFeatures1_NEGATIVE_LOG_FIX 0x02000000
+#define chipMinorFeatures1_RESOLVE_OFFSET 0x04000000
+#define chipMinorFeatures1_OK_TO_GATE_AXI_CLOCK 0x08000000
+#define chipMinorFeatures1_MMU_VERSION 0x10000000
+#define chipMinorFeatures1_WIDE_LINE 0x20000000
+#define chipMinorFeatures1_BUG_FIXES6 0x40000000
+#define chipMinorFeatures1_FC_FLUSH_STALL 0x80000000
+#define chipMinorFeatures2_LINE_LOOP 0x00000001
+#define chipMinorFeatures2_LOGIC_OP 0x00000002
+#define chipMinorFeatures2_UNK2 0x00000004
+#define chipMinorFeatures2_SUPERTILED_TEXTURE 0x00000008
+#define chipMinorFeatures2_UNK4 0x00000010
+#define chipMinorFeatures2_RECT_PRIMITIVE 0x00000020
+#define chipMinorFeatures2_COMPOSITION 0x00000040
+#define chipMinorFeatures2_CORRECT_AUTO_DISABLE_COUNT 0x00000080
+#define chipMinorFeatures2_UNK8 0x00000100
+#define chipMinorFeatures2_UNK9 0x00000200
+#define chipMinorFeatures2_UNK10 0x00000400
+#define chipMinorFeatures2_SAMPLERBASE_16 0x00000800
+#define chipMinorFeatures2_UNK12 0x00001000
+#define chipMinorFeatures2_UNK13 0x00002000
+#define chipMinorFeatures2_UNK14 0x00004000
+#define chipMinorFeatures2_EXTRA_TEXTURE_STATE 0x00008000
+#define chipMinorFeatures2_FULL_DIRECTFB 0x00010000
+#define chipMinorFeatures2_2D_TILING 0x00020000
+#define chipMinorFeatures2_THREAD_WALKER_IN_PS 0x00040000
+#define chipMinorFeatures2_TILE_FILLER 0x00080000
+#define chipMinorFeatures2_UNK20 0x00100000
+#define chipMinorFeatures2_2D_MULTI_SOURCE_BLIT 0x00200000
+#define chipMinorFeatures2_UNK22 0x00400000
+#define chipMinorFeatures2_UNK23 0x00800000
+#define chipMinorFeatures2_UNK24 0x01000000
+#define chipMinorFeatures2_MIXED_STREAMS 0x02000000
+#define chipMinorFeatures2_2D_420_L2CACHE 0x04000000
+#define chipMinorFeatures2_UNK27 0x08000000
+#define chipMinorFeatures2_2D_NO_INDEX8_BRUSH 0x10000000
+#define chipMinorFeatures2_TEXTURE_TILED_READ 0x20000000
+#define chipMinorFeatures2_UNK30 0x40000000
+#define chipMinorFeatures2_UNK31 0x80000000
+#define chipMinorFeatures3_ROTATION_STALL_FIX 0x00000001
+#define chipMinorFeatures3_UNK1 0x00000002
+#define chipMinorFeatures3_2D_MULTI_SOURCE_BLT_EX 0x00000004
+#define chipMinorFeatures3_UNK3 0x00000008
+#define chipMinorFeatures3_UNK4 0x00000010
+#define chipMinorFeatures3_UNK5 0x00000020
+#define chipMinorFeatures3_UNK6 0x00000040
+#define chipMinorFeatures3_UNK7 0x00000080
+#define chipMinorFeatures3_UNK8 0x00000100
+#define chipMinorFeatures3_UNK9 0x00000200
+#define chipMinorFeatures3_BUG_FIXES10 0x00000400
+#define chipMinorFeatures3_UNK11 0x00000800
+#define chipMinorFeatures3_BUG_FIXES11 0x00001000
+#define chipMinorFeatures3_UNK13 0x00002000
+#define chipMinorFeatures3_UNK14 0x00004000
+#define chipMinorFeatures3_UNK15 0x00008000
+#define chipMinorFeatures3_UNK16 0x00010000
+#define chipMinorFeatures3_UNK17 0x00020000
+#define chipMinorFeatures3_UNK18 0x00040000
+#define chipMinorFeatures3_UNK19 0x00080000
+#define chipMinorFeatures3_UNK20 0x00100000
+#define chipMinorFeatures3_UNK21 0x00200000
+#define chipMinorFeatures3_UNK22 0x00400000
+#define chipMinorFeatures3_UNK23 0x00800000
+#define chipMinorFeatures3_UNK24 0x01000000
+#define chipMinorFeatures3_UNK25 0x02000000
+#define chipMinorFeatures3_UNK26 0x04000000
+#define chipMinorFeatures3_UNK27 0x08000000
+#define chipMinorFeatures3_UNK28 0x10000000
+#define chipMinorFeatures3_UNK29 0x20000000
+#define chipMinorFeatures3_UNK30 0x40000000
+#define chipMinorFeatures3_UNK31 0x80000000
+#define chipMinorFeatures4_UNK0 0x00000001
+#define chipMinorFeatures4_UNK1 0x00000002
+#define chipMinorFeatures4_UNK2 0x00000004
+#define chipMinorFeatures4_UNK3 0x00000008
+#define chipMinorFeatures4_UNK4 0x00000010
+#define chipMinorFeatures4_UNK5 0x00000020
+#define chipMinorFeatures4_UNK6 0x00000040
+#define chipMinorFeatures4_UNK7 0x00000080
+#define chipMinorFeatures4_UNK8 0x00000100
+#define chipMinorFeatures4_UNK9 0x00000200
+#define chipMinorFeatures4_UNK10 0x00000400
+#define chipMinorFeatures4_UNK11 0x00000800
+#define chipMinorFeatures4_UNK12 0x00001000
+#define chipMinorFeatures4_UNK13 0x00002000
+#define chipMinorFeatures4_UNK14 0x00004000
+#define chipMinorFeatures4_UNK15 0x00008000
+#define chipMinorFeatures4_UNK16 0x00010000
+#define chipMinorFeatures4_UNK17 0x00020000
+#define chipMinorFeatures4_UNK18 0x00040000
+#define chipMinorFeatures4_UNK19 0x00080000
+#define chipMinorFeatures4_UNK20 0x00100000
+#define chipMinorFeatures4_UNK21 0x00200000
+#define chipMinorFeatures4_UNK22 0x00400000
+#define chipMinorFeatures4_UNK23 0x00800000
+#define chipMinorFeatures4_UNK24 0x01000000
+#define chipMinorFeatures4_UNK25 0x02000000
+#define chipMinorFeatures4_UNK26 0x04000000
+#define chipMinorFeatures4_UNK27 0x08000000
+#define chipMinorFeatures4_UNK28 0x10000000
+#define chipMinorFeatures4_UNK29 0x20000000
+#define chipMinorFeatures4_UNK30 0x40000000
+#define chipMinorFeatures4_UNK31 0x80000000
+#endif /* COMMON_XML */

diff --git a/drivers/staging/etnaviv/etnaviv_buffer.c b/drivers/staging/etnaviv/etnaviv_buffer.c
new file mode 100644
index 000000000000..32764e15c5f7
--- /dev/null
+++ b/drivers/staging/etnaviv/etnaviv_buffer.c
@@ -0,0 +1,201 @@
+/*
+ * Copyright (C) 2014 Etnaviv Project
+ * Author: Christian Gmeiner <christian.gmeiner@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#include "etnaviv_gpu.h"
+#include "etnaviv_gem.h"
+
+#include "common.xml.h"
+#include "state.xml.h"
+#include "cmdstream.xml.h"
+/*
+ * Command Buffer helper:
+ */
+static inline void OUT(struct etnaviv_gem_object *buffer, uint32_t data)
+{
u32 *vaddr = (u32 *)buffer->vaddr;
BUG_ON(buffer->offset >= buffer->base.size);
vaddr[buffer->offset++] = data;
+}
+static inline void CMD_LOAD_STATE(struct etnaviv_gem_object *buffer, u32 reg, u32 value)
+{
buffer->offset = ALIGN(buffer->offset, 2);
/* write a register via cmd stream */
OUT(buffer, VIV_FE_LOAD_STATE_HEADER_OP_LOAD_STATE | VIV_FE_LOAD_STATE_HEADER_COUNT(1) |
VIV_FE_LOAD_STATE_HEADER_OFFSET(reg >> VIV_FE_LOAD_STATE_HEADER_OFFSET__SHR));
OUT(buffer, value);
+}
+static inline void CMD_LOAD_STATES(struct etnaviv_gem_object *buffer, u32 reg, u16 count, u32 *values)
+{
u16 i;
buffer->offset = ALIGN(buffer->offset, 2);
OUT(buffer, VIV_FE_LOAD_STATE_HEADER_OP_LOAD_STATE | VIV_FE_LOAD_STATE_HEADER_COUNT(count) |
VIV_FE_LOAD_STATE_HEADER_OFFSET(reg >> VIV_FE_LOAD_STATE_HEADER_OFFSET__SHR));
for (i = 0; i < count; i++)
OUT(buffer, values[i]);
+}
+static inline void CMD_END(struct etnaviv_gem_object *buffer)
+{
buffer->offset = ALIGN(buffer->offset, 2);
OUT(buffer, VIV_FE_END_HEADER_OP_END);
+}
+static inline void CMD_NOP(struct etnaviv_gem_object *buffer)
+{
buffer->offset = ALIGN(buffer->offset, 2);
OUT(buffer, VIV_FE_NOP_HEADER_OP_NOP);
+}
+static inline void CMD_WAIT(struct etnaviv_gem_object *buffer)
+{
buffer->offset = ALIGN(buffer->offset, 2);
OUT(buffer, VIV_FE_WAIT_HEADER_OP_WAIT | 200);
+}
+static inline void CMD_LINK(struct etnaviv_gem_object *buffer, u16 prefetch, u32 address)
+{
buffer->offset = ALIGN(buffer->offset, 2);
OUT(buffer, VIV_FE_LINK_HEADER_OP_LINK | VIV_FE_LINK_HEADER_PREFETCH(prefetch));
OUT(buffer, address);
+}
+static inline void CMD_STALL(struct etnaviv_gem_object *buffer, u32 from, u32 to)
+{
buffer->offset = ALIGN(buffer->offset, 2);
OUT(buffer, VIV_FE_STALL_HEADER_OP_STALL);
OUT(buffer, VIV_FE_STALL_TOKEN_FROM(from) | VIV_FE_STALL_TOKEN_TO(to));
+}
+static void etnaviv_cmd_select_pipe(struct etnaviv_gem_object *buffer, u8 pipe)
+{
u32 flush;
u32 stall;
if (pipe == ETNA_PIPE_2D)
flush = VIVS_GL_FLUSH_CACHE_DEPTH | VIVS_GL_FLUSH_CACHE_COLOR;
else
flush = VIVS_GL_FLUSH_CACHE_TEXTURE;
stall = VIVS_GL_SEMAPHORE_TOKEN_FROM(SYNC_RECIPIENT_FE) |
VIVS_GL_SEMAPHORE_TOKEN_TO(SYNC_RECIPIENT_PE);
CMD_LOAD_STATE(buffer, VIVS_GL_FLUSH_CACHE, flush);
CMD_LOAD_STATE(buffer, VIVS_GL_SEMAPHORE_TOKEN, stall);
CMD_STALL(buffer, SYNC_RECIPIENT_FE, SYNC_RECIPIENT_PE);
CMD_LOAD_STATE(buffer, VIVS_GL_PIPE_SELECT, VIVS_GL_PIPE_SELECT_PIPE(pipe));
+}
+static void etnaviv_buffer_dump(struct etnaviv_gem_object *obj, u32 len)
+{
u32 size = obj->base.size;
u32 *ptr = obj->vaddr;
dev_dbg(obj->gpu->dev->dev, "virt %p phys 0x%08x free 0x%08x\n",
obj->vaddr, obj->paddr, size - len * 4);
print_hex_dump(KERN_INFO, "cmd ", DUMP_PREFIX_OFFSET, 16, 4,
ptr, len * 4, 0);
+}
+u32 etnaviv_buffer_init(struct etnaviv_gpu *gpu)
+{
struct etnaviv_gem_object *buffer = to_etnaviv_bo(gpu->buffer);
/* initialize buffer */
buffer->offset = 0;
etnaviv_cmd_select_pipe(buffer, gpu->pipe);
CMD_WAIT(buffer);
CMD_LINK(buffer, 2, buffer->paddr + ((buffer->offset - 1) * 4));
return buffer->offset;
+}
+void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct etnaviv_gem_submit *submit)
+{
struct etnaviv_gem_object *buffer = to_etnaviv_bo(gpu->buffer);
struct etnaviv_gem_object *cmd;
u32 *lw = buffer->vaddr + ((buffer->offset - 4) * 4);
u32 back;
u32 i;
etnaviv_buffer_dump(buffer, 0x50);
/* save offset back into main buffer */
back = buffer->offset;
/* trigger event */
CMD_LOAD_STATE(buffer, VIVS_GL_EVENT, VIVS_GL_EVENT_EVENT_ID(event) | VIVS_GL_EVENT_FROM_PE);
/* append WAIT/LINK to main buffer */
CMD_WAIT(buffer);
CMD_LINK(buffer, 2, buffer->paddr + ((buffer->offset - 1) * 4));
/* update offset for every cmd stream */
for (i = 0; i < submit->nr_cmds; i++)
submit->cmd[i].obj->offset = submit->cmd[i].size;
/* TODO: inter-connect all cmd buffers */
/* jump back from last cmd to main buffer */
cmd = submit->cmd[submit->nr_cmds - 1].obj;
CMD_LINK(cmd, 4, buffer->paddr + (back * 4));
printk(KERN_ERR "stream link @ 0x%08x\n", cmd->paddr + ((cmd->offset - 1) * 4));
printk(KERN_ERR "stream link @ %p\n", cmd->vaddr + ((cmd->offset - 1) * 4));
for (i = 0; i < submit->nr_cmds; i++) {
struct etnaviv_gem_object *obj = submit->cmd[i].obj;
/* TODO: remove later */
if (unlikely(drm_debug & DRM_UT_CORE))
etnaviv_buffer_dump(obj, obj->offset);
}
/* change ll to NOP */
printk(KERN_ERR "link op: %p\n", lw);
printk(KERN_ERR "link addr: %p\n", lw + 1);
printk(KERN_ERR "addr: 0x%08x\n", submit->cmd[0].obj->paddr);
printk(KERN_ERR "back: 0x%08x\n", buffer->paddr + (back * 4));
printk(KERN_ERR "event: %d\n", event);
/* Change WAIT into a LINK command; write the address first. */
i = VIV_FE_LINK_HEADER_OP_LINK | VIV_FE_LINK_HEADER_PREFETCH(submit->cmd[0].size * 2);
*(lw + 1) = submit->cmd[0].obj->paddr;
mb();
*(lw) = i;
mb();
etnaviv_buffer_dump(buffer, 0x50);
+}

diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c
new file mode 100644
index 000000000000..39586b45200d
--- /dev/null
+++ b/drivers/staging/etnaviv/etnaviv_drv.c
@@ -0,0 +1,621 @@
+/*
+ * Copyright (C) 2013 Red Hat
+ * Author: Rob Clark <robdclark@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#include <linux/component.h>
+#include <linux/of_platform.h>
+
+#include "etnaviv_drv.h"
+#include "etnaviv_gpu.h"
+void etnaviv_register_mmu(struct drm_device *dev, struct etnaviv_iommu *mmu)
+{
struct etnaviv_drm_private *priv = dev->dev_private;
priv->mmu = mmu;
+}
+#ifdef CONFIG_DRM_ETNAVIV_REGISTER_LOGGING
+static bool reglog = false;
+MODULE_PARM_DESC(reglog, "Enable register read/write logging");
+module_param(reglog, bool, 0600);
+#else
+#define reglog 0
+#endif
+void __iomem *etnaviv_ioremap(struct platform_device *pdev, const char *name,
const char *dbgname)
+{
struct resource *res;
unsigned long size;
void __iomem *ptr;
if (name)
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, name);
else
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
dev_err(&pdev->dev, "failed to get memory resource: %s\n", name);
return ERR_PTR(-EINVAL);
}
size = resource_size(res);
ptr = devm_ioremap_nocache(&pdev->dev, res->start, size);
if (!ptr) {
dev_err(&pdev->dev, "failed to ioremap: %s\n", name);
return ERR_PTR(-ENOMEM);
}
if (reglog)
printk(KERN_DEBUG "IO:region %s %08x %08lx\n", dbgname, (u32)ptr, size);
return ptr;
+}
+void etnaviv_writel(u32 data, void __iomem *addr)
+{
if (reglog)
printk(KERN_DEBUG "IO:W %08x %08x\n", (u32)addr, data);
writel(data, addr);
+}
+u32 etnaviv_readl(const void __iomem *addr)
+{
u32 val = readl(addr);
if (reglog)
printk(KERN_DEBUG "IO:R %08x %08x\n", (u32)addr, val);
return val;
+}
+/*
+ * DRM operations:
+ */
+static int etnaviv_unload(struct drm_device *dev)
+{
struct etnaviv_drm_private *priv = dev->dev_private;
unsigned int i;
flush_workqueue(priv->wq);
destroy_workqueue(priv->wq);
mutex_lock(&dev->struct_mutex);
for (i = 0; i < ETNA_MAX_PIPES; i++) {
struct etnaviv_gpu *g = priv->gpu[i];
if (g)
etnaviv_gpu_pm_suspend(g);
}
mutex_unlock(&dev->struct_mutex);
component_unbind_all(dev->dev, dev);
dev->dev_private = NULL;
kfree(priv);
return 0;
+}
+static void load_gpu(struct drm_device *dev)
+{
struct etnaviv_drm_private *priv = dev->dev_private;
unsigned int i;
mutex_lock(&dev->struct_mutex);
for (i = 0; i < ETNA_MAX_PIPES; i++) {
struct etnaviv_gpu *g = priv->gpu[i];
if (g) {
int ret;
etnaviv_gpu_pm_resume(g);
ret = etnaviv_gpu_init(g);
if (ret) {
dev_err(dev->dev, "%s hw init failed: %d\n", g->name, ret);
priv->gpu[i] = NULL;
}
}
}
mutex_unlock(&dev->struct_mutex);
+}
+static int etnaviv_load(struct drm_device *dev, unsigned long flags)
+{
struct platform_device *pdev = dev->platformdev;
struct etnaviv_drm_private *priv;
int err;
priv = kzalloc(sizeof(*priv), GFP_KERNEL);
if (!priv) {
dev_err(dev->dev, "failed to allocate private data\n");
return -ENOMEM;
}
dev->dev_private = priv;
priv->wq = alloc_ordered_workqueue("etnaviv", 0);
if (!priv->wq) {
	dev->dev_private = NULL;
	kfree(priv);
	return -ENOMEM;
}
init_waitqueue_head(&priv->fence_event);
INIT_LIST_HEAD(&priv->inactive_list);
platform_set_drvdata(pdev, dev);
err = component_bind_all(dev->dev, dev);
if (err < 0)
return err;
load_gpu(dev);
return 0;
+}
+static int etnaviv_open(struct drm_device *dev, struct drm_file *file)
+{
struct etnaviv_file_private *ctx;
ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
if (!ctx)
return -ENOMEM;
file->driver_priv = ctx;
return 0;
+}
+static void etnaviv_preclose(struct drm_device *dev, struct drm_file *file)
+{
struct etnaviv_drm_private *priv = dev->dev_private;
struct etnaviv_file_private *ctx = file->driver_priv;
mutex_lock(&dev->struct_mutex);
if (ctx == priv->lastctx)
priv->lastctx = NULL;
mutex_unlock(&dev->struct_mutex);
kfree(ctx);
+}
+/*
+ * DRM debugfs:
+ */
+#ifdef CONFIG_DEBUG_FS
+static int etnaviv_gpu_show(struct drm_device *dev, struct seq_file *m)
+{
struct etnaviv_drm_private *priv = dev->dev_private;
struct etnaviv_gpu *gpu;
unsigned int i;
for (i = 0; i < ETNA_MAX_PIPES; i++) {
gpu = priv->gpu[i];
if (gpu) {
seq_printf(m, "%s Status:\n", gpu->name);
etnaviv_gpu_debugfs(gpu, m);
}
}
return 0;
+}
+static int etnaviv_gem_show(struct drm_device *dev, struct seq_file *m)
+{
struct etnaviv_drm_private *priv = dev->dev_private;
struct etnaviv_gpu *gpu;
unsigned int i;
for (i = 0; i < ETNA_MAX_PIPES; i++) {
gpu = priv->gpu[i];
if (gpu) {
seq_printf(m, "Active Objects (%s):\n", gpu->name);
msm_gem_describe_objects(&gpu->active_list, m);
}
}
seq_puts(m, "Inactive Objects:\n");
msm_gem_describe_objects(&priv->inactive_list, m);
return 0;
+}
+static int etnaviv_mm_show(struct drm_device *dev, struct seq_file *m)
+{
return drm_mm_dump_table(m, &dev->vma_offset_manager->vm_addr_space_mm);
+}
+static int show_locked(struct seq_file *m, void *arg)
+{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
int (*show)(struct drm_device *dev, struct seq_file *m) =
node->info_ent->data;
int ret;
ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret)
return ret;
ret = show(dev, m);
mutex_unlock(&dev->struct_mutex);
return ret;
+}
+static struct drm_info_list ETNAVIV_debugfs_list[] = {
{"gpu", show_locked, 0, etnaviv_gpu_show},
{"gem", show_locked, 0, etnaviv_gem_show},
{"mm", show_locked, 0, etnaviv_mm_show},
+};
+static int etnaviv_debugfs_init(struct drm_minor *minor)
+{
struct drm_device *dev = minor->dev;
int ret;
ret = drm_debugfs_create_files(ETNAVIV_debugfs_list,
ARRAY_SIZE(ETNAVIV_debugfs_list),
minor->debugfs_root, minor);
if (ret) {
dev_err(dev->dev, "could not install ETNAVIV_debugfs_list\n");
return ret;
}
return ret;
+}
+static void etnaviv_debugfs_cleanup(struct drm_minor *minor)
+{
drm_debugfs_remove_files(ETNAVIV_debugfs_list,
ARRAY_SIZE(ETNAVIV_debugfs_list), minor);
+}
+#endif
+/*
+ * Fences:
+ */
+int etnaviv_wait_fence_interruptable(struct drm_device *dev, uint32_t pipe,
uint32_t fence, struct timespec *timeout)
+{
struct etnaviv_drm_private *priv = dev->dev_private;
struct etnaviv_gpu *gpu;
int ret;
if (pipe >= ETNA_MAX_PIPES)
return -EINVAL;
gpu = priv->gpu[pipe];
if (!gpu)
return -ENXIO;
if (fence > gpu->submitted_fence) {
DRM_ERROR("waiting on invalid fence: %u (of %u)\n",
fence, gpu->submitted_fence);
return -EINVAL;
}
if (!timeout) {
/* no-wait: */
ret = fence_completed(dev, fence) ? 0 : -EBUSY;
} else {
unsigned long timeout_jiffies = timespec_to_jiffies(timeout);
unsigned long start_jiffies = jiffies;
unsigned long remaining_jiffies;
if (time_after(start_jiffies, timeout_jiffies))
remaining_jiffies = 0;
else
remaining_jiffies = timeout_jiffies - start_jiffies;
ret = wait_event_interruptible_timeout(priv->fence_event,
fence_completed(dev, fence),
remaining_jiffies);
if (ret == 0) {
DBG("timeout waiting for fence: %u (completed: %u)",
fence, priv->completed_fence);
ret = -ETIMEDOUT;
} else if (ret != -ERESTARTSYS) {
ret = 0;
}
}
return ret;
+}
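The timeout path above converts the caller's timespec into a remaining-jiffies budget before sleeping, clamping it to zero once the deadline has passed. A userspace model of that clamping (plain integers standing in for jiffies; `remaining_ticks` and its parameters are illustrative names, not kernel API):

```c
#include <assert.h>

/* Model of the wait-budget computation: given the current tick count and an
 * absolute expiry tick, return how long is left to sleep.  A deadline that
 * already passed yields a zero budget, so the caller polls once and then
 * reports a timeout instead of sleeping with a huge unsigned difference. */
static unsigned long remaining_ticks(unsigned long now, unsigned long expiry)
{
	if (now >= expiry)
		return 0;	/* deadline already passed */
	return expiry - now;
}
```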
+/* called from workqueue */
+void etnaviv_update_fence(struct drm_device *dev, uint32_t fence)
+{
struct etnaviv_drm_private *priv = dev->dev_private;
mutex_lock(&dev->struct_mutex);
priv->completed_fence = max(fence, priv->completed_fence);
mutex_unlock(&dev->struct_mutex);
wake_up_all(&priv->fence_event);
+}
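The fence bookkeeping is intentionally simple: `etnaviv_update_fence()` only ever moves `completed_fence` forward via `max()`, and `fence_completed()` treats a fence as signalled once `completed_fence` has reached it. A toy userspace model of those two operations (struct and function names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal model of the driver's fence state. */
struct fence_state {
	uint32_t completed;	/* highest fence seqno seen so far */
};

/* Mirror of etnaviv_update_fence(): never move the counter backwards. */
static void update_fence(struct fence_state *s, uint32_t fence)
{
	if (fence > s->completed)
		s->completed = fence;
}

/* Mirror of fence_completed(): signalled once completed has caught up. */
static int fence_done(const struct fence_state *s, uint32_t fence)
{
	return s->completed >= fence;
}
```

Note the comparison is a plain `>=`, so this sketch (like the patch) does not handle 32-bit seqno wraparound.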
+/*
+ * DRM ioctls:
+ */
+static int etnaviv_ioctl_get_param(struct drm_device *dev, void *data,
struct drm_file *file)
+{
struct etnaviv_drm_private *priv = dev->dev_private;
struct drm_etnaviv_param *args = data;
struct etnaviv_gpu *gpu;
if (args->pipe >= ETNA_MAX_PIPES)
return -EINVAL;
gpu = priv->gpu[args->pipe];
if (!gpu)
return -ENXIO;
return etnaviv_gpu_get_param(gpu, args->param, &args->value);
+}
+static int etnaviv_ioctl_gem_new(struct drm_device *dev, void *data,
struct drm_file *file)
+{
struct drm_etnaviv_gem_new *args = data;
return etnaviv_gem_new_handle(dev, file, args->size,
args->flags, &args->handle);
+}
+#define TS(t) ((struct timespec){ .tv_sec = (t).tv_sec, .tv_nsec = (t).tv_nsec })
+static int etnaviv_ioctl_gem_cpu_prep(struct drm_device *dev, void *data,
struct drm_file *file)
+{
struct drm_etnaviv_gem_cpu_prep *args = data;
struct drm_gem_object *obj;
int ret;
obj = drm_gem_object_lookup(dev, file, args->handle);
if (!obj)
return -ENOENT;
ret = etnaviv_gem_cpu_prep(obj, args->op, &TS(args->timeout));
drm_gem_object_unreference_unlocked(obj);
return ret;
+}
+static int etnaviv_ioctl_gem_cpu_fini(struct drm_device *dev, void *data,
struct drm_file *file)
+{
struct drm_etnaviv_gem_cpu_fini *args = data;
struct drm_gem_object *obj;
int ret;
obj = drm_gem_object_lookup(dev, file, args->handle);
if (!obj)
return -ENOENT;
ret = etnaviv_gem_cpu_fini(obj);
drm_gem_object_unreference_unlocked(obj);
return ret;
+}
+static int etnaviv_ioctl_gem_info(struct drm_device *dev, void *data,
struct drm_file *file)
+{
struct drm_etnaviv_gem_info *args = data;
struct drm_gem_object *obj;
int ret = 0;
if (args->pad)
return -EINVAL;
obj = drm_gem_object_lookup(dev, file, args->handle);
if (!obj)
return -ENOENT;
args->offset = msm_gem_mmap_offset(obj);
drm_gem_object_unreference_unlocked(obj);
return ret;
+}
+static int etnaviv_ioctl_wait_fence(struct drm_device *dev, void *data,
struct drm_file *file)
+{
struct drm_etnaviv_wait_fence *args = data;
return etnaviv_wait_fence_interruptable(dev, args->pipe,
		args->fence, &TS(args->timeout));
+}
+static const struct drm_ioctl_desc etnaviv_ioctls[] = {
DRM_IOCTL_DEF_DRV(ETNAVIV_GET_PARAM, etnaviv_ioctl_get_param, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(ETNAVIV_GEM_NEW, etnaviv_ioctl_gem_new, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(ETNAVIV_GEM_INFO, etnaviv_ioctl_gem_info, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(ETNAVIV_GEM_CPU_PREP, etnaviv_ioctl_gem_cpu_prep, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(ETNAVIV_GEM_CPU_FINI, etnaviv_ioctl_gem_cpu_fini, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(ETNAVIV_GEM_SUBMIT, etnaviv_ioctl_gem_submit, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(ETNAVIV_WAIT_FENCE, etnaviv_ioctl_wait_fence, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW),
+};
+static const struct vm_operations_struct vm_ops = {
.fault = etnaviv_gem_fault,
.open = drm_gem_vm_open,
.close = drm_gem_vm_close,
+};
+static const struct file_operations fops = {
.owner = THIS_MODULE,
.open = drm_open,
.release = drm_release,
.unlocked_ioctl = drm_ioctl,
+#ifdef CONFIG_COMPAT
.compat_ioctl = drm_compat_ioctl,
+#endif
.poll = drm_poll,
.read = drm_read,
.llseek = no_llseek,
.mmap = etnaviv_gem_mmap,
+};
+static struct drm_driver etnaviv_drm_driver = {
.driver_features = DRIVER_HAVE_IRQ |
DRIVER_GEM |
DRIVER_PRIME |
DRIVER_RENDER,
.load = etnaviv_load,
.unload = etnaviv_unload,
.open = etnaviv_open,
.preclose = etnaviv_preclose,
.set_busid = drm_platform_set_busid,
.gem_free_object = etnaviv_gem_free_object,
.gem_vm_ops = &vm_ops,
.dumb_create = msm_gem_dumb_create,
.dumb_map_offset = msm_gem_dumb_map_offset,
.dumb_destroy = drm_gem_dumb_destroy,
.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
.gem_prime_export = drm_gem_prime_export,
.gem_prime_import = drm_gem_prime_import,
.gem_prime_pin = msm_gem_prime_pin,
.gem_prime_unpin = msm_gem_prime_unpin,
.gem_prime_get_sg_table = msm_gem_prime_get_sg_table,
.gem_prime_import_sg_table = msm_gem_prime_import_sg_table,
.gem_prime_vmap = msm_gem_prime_vmap,
.gem_prime_vunmap = msm_gem_prime_vunmap,
+#ifdef CONFIG_DEBUG_FS
.debugfs_init = etnaviv_debugfs_init,
.debugfs_cleanup = etnaviv_debugfs_cleanup,
+#endif
.ioctls = etnaviv_ioctls,
.num_ioctls = DRM_ETNAVIV_NUM_IOCTLS,
.fops = &fops,
.name = "etnaviv",
.desc = "etnaviv DRM",
.date = "20130625",
.major = 1,
.minor = 0,
+};
+/*
+ * Platform driver:
+ */
+static int etnaviv_compare(struct device *dev, void *data)
+{
struct device_node *np = data;
return dev->of_node == np;
+}
+static int etnaviv_add_components(struct device *master, struct master *m)
+{
struct device_node *np = master->of_node;
struct device_node *child_np;
child_np = of_get_next_available_child(np, NULL);
while (child_np) {
DRM_INFO("add child %s\n", child_np->name);
component_master_add_child(m, etnaviv_compare, child_np);
of_node_put(child_np);
child_np = of_get_next_available_child(np, child_np);
}
return 0;
+}
+static int etnaviv_bind(struct device *dev)
+{
return drm_platform_init(&etnaviv_drm_driver, to_platform_device(dev));
+}
+static void etnaviv_unbind(struct device *dev)
+{
drm_put_dev(dev_get_drvdata(dev));
+}
+static const struct component_master_ops etnaviv_master_ops = {
.add_components = etnaviv_add_components,
.bind = etnaviv_bind,
.unbind = etnaviv_unbind,
+};
+static int etnaviv_pdev_probe(struct platform_device *pdev)
+{
struct device *dev = &pdev->dev;
struct device_node *node = dev->of_node;
of_platform_populate(node, NULL, NULL, dev);
dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
return component_master_add(&pdev->dev, &etnaviv_master_ops);
+}
+static int etnaviv_pdev_remove(struct platform_device *pdev)
+{
component_master_del(&pdev->dev, &etnaviv_master_ops);
return 0;
+}
+static const struct of_device_id dt_match[] = {
{ .compatible = "vivante,gccore" },
{}
+};
+MODULE_DEVICE_TABLE(of, dt_match);
+static struct platform_driver etnaviv_platform_driver = {
.probe = etnaviv_pdev_probe,
.remove = etnaviv_pdev_remove,
.driver = {
.owner = THIS_MODULE,
.name = "vivante",
.of_match_table = dt_match,
},
+};
+static int __init etnaviv_init(void)
+{
int ret;
ret = platform_driver_register(&etnaviv_gpu_driver);
if (ret != 0)
return ret;
ret = platform_driver_register(&etnaviv_platform_driver);
if (ret != 0)
platform_driver_unregister(&etnaviv_gpu_driver);
return ret;
+}
+module_init(etnaviv_init);
+static void __exit etnaviv_exit(void)
+{
platform_driver_unregister(&etnaviv_platform_driver);
platform_driver_unregister(&etnaviv_gpu_driver);
+}
+module_exit(etnaviv_exit);
+MODULE_AUTHOR("Rob Clark <robdclark@gmail.com>");
+MODULE_DESCRIPTION("etnaviv DRM Driver");
+MODULE_LICENSE("GPL");

diff --git a/drivers/staging/etnaviv/etnaviv_drv.h b/drivers/staging/etnaviv/etnaviv_drv.h
new file mode 100644
index 000000000000..63994f22d8c9
--- /dev/null
+++ b/drivers/staging/etnaviv/etnaviv_drv.h
@@ -0,0 +1,143 @@
+/*
+ * Copyright (C) 2013 Red Hat
+ * Author: Rob Clark <robdclark@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ETNAVIV_DRV_H__
+#define __ETNAVIV_DRV_H__
+
+#include <linux/kernel.h>
+#include <linux/clk.h>
+#include <linux/cpufreq.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/pm.h>
+#include <linux/pm_runtime.h>
+#include <linux/slab.h>
+#include <linux/list.h>
+#include <linux/iommu.h>
+#include <linux/types.h>
+#include <linux/sizes.h>
+
+#include <drm/drmP.h>
+#include <drm/drm_crtc_helper.h>
+#include <drm/drm_fb_helper.h>
+#include <drm/drm_gem.h>
+#include <drm/etnaviv_drm.h>
+
+struct etnaviv_gpu;
+struct etnaviv_mmu;
+struct etnaviv_gem_submit;
+struct etnaviv_file_private {
/* currently we don't do anything useful with this.. but when
* per-context address spaces are supported we'd keep track of
* the context's page-tables here.
*/
int dummy;
+};
+struct etnaviv_drm_private {
struct etnaviv_gpu *gpu[ETNA_MAX_PIPES];
struct etnaviv_file_private *lastctx;
uint32_t next_fence, completed_fence;
wait_queue_head_t fence_event;
/* list of GEM objects: */
struct list_head inactive_list;
struct workqueue_struct *wq;
/* registered MMUs: */
struct etnaviv_iommu *mmu;
+};
+void etnaviv_register_mmu(struct drm_device *dev, struct etnaviv_iommu *mmu);
+int etnaviv_wait_fence_interruptable(struct drm_device *dev, uint32_t pipe,
uint32_t fence, struct timespec *timeout);
+void etnaviv_update_fence(struct drm_device *dev, uint32_t fence);
+int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
struct drm_file *file);
+int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
+int etnaviv_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf);
+uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj);
+int etnaviv_gem_get_iova_locked(struct etnaviv_gpu *gpu,
		struct drm_gem_object *obj, uint32_t *iova);
+int etnaviv_gem_get_iova(struct etnaviv_gpu *gpu, struct drm_gem_object *obj,
		int id, uint32_t *iova);
+struct page **etnaviv_gem_get_pages(struct drm_gem_object *obj);
+void msm_gem_put_pages(struct drm_gem_object *obj);
+void etnaviv_gem_put_iova(struct drm_gem_object *obj);
+int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
		struct drm_mode_create_dumb *args);
+int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
uint32_t handle, uint64_t *offset);
+struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj);
+void *msm_gem_prime_vmap(struct drm_gem_object *obj);
+void msm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
		size_t size, struct sg_table *sg);
+int msm_gem_prime_pin(struct drm_gem_object *obj);
+void msm_gem_prime_unpin(struct drm_gem_object *obj);
+void *etnaviv_gem_vaddr_locked(struct drm_gem_object *obj);
+void *msm_gem_vaddr(struct drm_gem_object *obj);
+dma_addr_t etnaviv_gem_paddr_locked(struct drm_gem_object *obj);
+void etnaviv_gem_move_to_active(struct drm_gem_object *obj,
		struct etnaviv_gpu *gpu, bool write, uint32_t fence);
+void etnaviv_gem_move_to_inactive(struct drm_gem_object *obj);
+int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op,
		struct timespec *timeout);
+int etnaviv_gem_cpu_fini(struct drm_gem_object *obj);
+void etnaviv_gem_free_object(struct drm_gem_object *obj);
+int etnaviv_gem_new_handle(struct drm_device *dev, struct drm_file *file,
		uint32_t size, uint32_t flags, uint32_t *handle);
+struct drm_gem_object *etnaviv_gem_new(struct drm_device *dev,
uint32_t size, uint32_t flags);
+struct drm_gem_object *msm_gem_import(struct drm_device *dev,
uint32_t size, struct sg_table *sgt);
+u32 etnaviv_buffer_init(struct etnaviv_gpu *gpu);
+void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event,
		struct etnaviv_gem_submit *submit);
+#ifdef CONFIG_DEBUG_FS
+void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m);
+void msm_gem_describe_objects(struct list_head *list, struct seq_file *m);
+void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m);
+#endif
+void __iomem *etnaviv_ioremap(struct platform_device *pdev, const char *name,
const char *dbgname);
+void etnaviv_writel(u32 data, void __iomem *addr);
+u32 etnaviv_readl(const void __iomem *addr);
+#define DBG(fmt, ...) DRM_DEBUG(fmt"\n", ##__VA_ARGS__)
+#define VERB(fmt, ...) if (0) DRM_DEBUG(fmt"\n", ##__VA_ARGS__)
+static inline bool fence_completed(struct drm_device *dev, uint32_t fence)
+{
struct etnaviv_drm_private *priv = dev->dev_private;
return priv->completed_fence >= fence;
+}
+static inline int align_pitch(int width, int bpp)
+{
int bytespp = (bpp + 7) / 8;
/* 32-pixel pitch alignment, inherited from the MSM/adreno driver: */
return bytespp * ALIGN(width, 32);
+}
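The `align_pitch()` helper above computes a dumb-buffer pitch as bytes per pixel times the width rounded up to 32 pixels. A standalone copy that can be exercised outside the kernel (`ALIGN_UP` stands in for the kernel's `ALIGN` macro):

```c
#include <assert.h>

/* Round x up to the next multiple of a (a must be a power of two). */
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))

/* Standalone copy of align_pitch(): bytes per pixel times the width
 * rounded up to 32 pixels. */
static int align_pitch(int width, int bpp)
{
	int bytespp = (bpp + 7) / 8;

	return bytespp * ALIGN_UP(width, 32);
}
```

For example, a 100-pixel-wide RGB888 (24 bpp) buffer gets a pitch of 3 * 128 = 384 bytes.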
+#endif /* __ETNAVIV_DRV_H__ */

diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c
new file mode 100644
index 000000000000..42149a2b7404
--- /dev/null
+++ b/drivers/staging/etnaviv/etnaviv_gem.c
@@ -0,0 +1,706 @@
+/*
+ * Copyright (C) 2013 Red Hat
+ * Author: Rob Clark <robdclark@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#include <linux/spinlock.h>
+#include <linux/shmem_fs.h>
+#include <linux/dma-buf.h>
+
+#include "etnaviv_drv.h"
+#include "etnaviv_gem.h"
+#include "etnaviv_gpu.h"
+#include "etnaviv_mmu.h"
+
+/* called with dev->struct_mutex held */
+static struct page **get_pages(struct drm_gem_object *obj)
+{
struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
if (!etnaviv_obj->pages) {
struct drm_device *dev = obj->dev;
struct page **p;
int npages = obj->size >> PAGE_SHIFT;
p = drm_gem_get_pages(obj);
if (IS_ERR(p)) {
dev_err(dev->dev, "could not get pages: %ld\n",
PTR_ERR(p));
return p;
}
etnaviv_obj->sgt = drm_prime_pages_to_sg(p, npages);
if (IS_ERR(etnaviv_obj->sgt)) {
dev_err(dev->dev, "failed to allocate sgt\n");
return ERR_CAST(etnaviv_obj->sgt);
}
etnaviv_obj->pages = p;
/* For non-cached buffers, ensure the new pages are clean
* because display controller, GPU, etc. are not coherent:
*/
if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_UNCACHED))
dma_map_sg(dev->dev, etnaviv_obj->sgt->sgl,
etnaviv_obj->sgt->nents, DMA_BIDIRECTIONAL);
}
return etnaviv_obj->pages;
+}
+static void put_pages(struct drm_gem_object *obj)
+{
struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
if (etnaviv_obj->pages) {
/* For non-cached buffers, ensure the new pages are clean
* because display controller, GPU, etc. are not coherent:
*/
if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_UNCACHED))
dma_unmap_sg(obj->dev->dev, etnaviv_obj->sgt->sgl,
etnaviv_obj->sgt->nents, DMA_BIDIRECTIONAL);
sg_free_table(etnaviv_obj->sgt);
kfree(etnaviv_obj->sgt);
drm_gem_put_pages(obj, etnaviv_obj->pages, true, false);
etnaviv_obj->pages = NULL;
}
+}
+struct page **etnaviv_gem_get_pages(struct drm_gem_object *obj)
+{
struct drm_device *dev = obj->dev;
struct page **p;
mutex_lock(&dev->struct_mutex);
p = get_pages(obj);
mutex_unlock(&dev->struct_mutex);
return p;
+}
+void msm_gem_put_pages(struct drm_gem_object *obj)
+{
/* when we start tracking the pin count, then do something here */
+}
+static int etnaviv_gem_mmap_cmd(struct drm_gem_object *obj,
struct vm_area_struct *vma)
+{
struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
int ret;
/*
* Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
* vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
* the whole buffer.
*/
vma->vm_flags &= ~VM_PFNMAP;
vma->vm_pgoff = 0;
ret = dma_mmap_coherent(obj->dev->dev, vma,
etnaviv_obj->vaddr, etnaviv_obj->paddr,
vma->vm_end - vma->vm_start);
return ret;
+}
+static int etnaviv_gem_mmap_obj(struct drm_gem_object *obj,
struct vm_area_struct *vma)
+{
struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
vma->vm_flags &= ~VM_PFNMAP;
vma->vm_flags |= VM_MIXEDMAP;
if (etnaviv_obj->flags & ETNA_BO_WC) {
vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
} else if (etnaviv_obj->flags & ETNA_BO_UNCACHED) {
vma->vm_page_prot = pgprot_noncached(vm_get_page_prot(vma->vm_flags));
} else {
/*
* Shunt off cached objs to shmem file so they have their own
* address_space (so unmap_mapping_range does what we want,
* in particular in the case of mmap'd dmabufs)
*/
fput(vma->vm_file);
get_file(obj->filp);
vma->vm_pgoff = 0;
vma->vm_file = obj->filp;
vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
}
return 0;
+}
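The function above picks one of three mapping modes from the BO's cache flags: write-combined, fully uncached, or cached with the shmem file backing the VMA. A compact userspace model of that decision (the flag values and names below are illustrative, not the real uapi constants, and WC wins if several flags are set, mirroring the if/else chain):

```c
#include <assert.h>

/* Illustrative stand-ins for the ETNA_BO_* cache flags. */
#define BO_CACHED   0x1
#define BO_WC       0x2
#define BO_UNCACHED 0x4

enum map_mode {
	MAP_WC,			/* write-combined user mapping */
	MAP_UNCACHED,		/* fully uncached mapping */
	MAP_CACHED_SHMEM,	/* cached, backed by the shmem file */
};

/* Mirror of the if/else chain in etnaviv_gem_mmap_obj(). */
static enum map_mode pick_map_mode(unsigned int flags)
{
	if (flags & BO_WC)
		return MAP_WC;
	if (flags & BO_UNCACHED)
		return MAP_UNCACHED;
	return MAP_CACHED_SHMEM;	/* default: cached */
}
```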
+int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma)
+{
struct etnaviv_gem_object *obj;
int ret;
ret = drm_gem_mmap(filp, vma);
if (ret) {
DBG("mmap failed: %d", ret);
return ret;
}
obj = to_etnaviv_bo(vma->vm_private_data);
if (obj->flags & ETNA_BO_CMDSTREAM)
ret = etnaviv_gem_mmap_cmd(vma->vm_private_data, vma);
else
ret = etnaviv_gem_mmap_obj(vma->vm_private_data, vma);
return ret;
+}
+int etnaviv_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
struct drm_gem_object *obj = vma->vm_private_data;
struct drm_device *dev = obj->dev;
struct page **pages;
unsigned long pfn;
pgoff_t pgoff;
int ret;
/* Make sure we don't parallel update on a fault, nor move or remove
* something from beneath our feet
*/
ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret)
goto out;
/* make sure we have pages attached now */
pages = get_pages(obj);
if (IS_ERR(pages)) {
ret = PTR_ERR(pages);
goto out_unlock;
}
/* We don't use vmf->pgoff since that has the fake offset: */
pgoff = ((unsigned long)vmf->virtual_address -
vma->vm_start) >> PAGE_SHIFT;
pfn = page_to_pfn(pages[pgoff]);
VERB("Inserting %p pfn %lx, pa %lx", vmf->virtual_address,
pfn, pfn << PAGE_SHIFT);
ret = vm_insert_mixed(vma, (unsigned long)vmf->virtual_address, pfn);
+out_unlock:
mutex_unlock(&dev->struct_mutex);
+out:
switch (ret) {
case -EAGAIN:
case 0:
case -ERESTARTSYS:
case -EINTR:
case -EBUSY:
/*
* EBUSY is ok: this just means that another thread
* already did the job.
*/
return VM_FAULT_NOPAGE;
case -ENOMEM:
return VM_FAULT_OOM;
default:
return VM_FAULT_SIGBUS;
}
+}
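As the comment in the fault handler notes, `vmf->pgoff` holds the fake DRM mmap offset, so the page index must be derived from the faulting address relative to `vm_start` instead. That address math in isolation (names and the 4 KiB page size are assumptions of this sketch):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12	/* assume 4 KiB pages */

/* Mirror of the pgoff computation in etnaviv_gem_fault(): the index into
 * the object's page array is the faulting address minus the start of the
 * mapping, in whole pages. */
static unsigned long fault_page_index(uintptr_t fault_addr, uintptr_t vm_start)
{
	return (fault_addr - vm_start) >> PAGE_SHIFT;
}
```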
+/* get mmap offset */
+static uint64_t mmap_offset(struct drm_gem_object *obj)
+{
struct drm_device *dev = obj->dev;
int ret;
WARN_ON(!mutex_is_locked(&dev->struct_mutex));
/* Make it mmapable */
ret = drm_gem_create_mmap_offset(obj);
if (ret) {
dev_err(dev->dev, "could not allocate mmap offset\n");
return 0;
}
return drm_vma_node_offset_addr(&obj->vma_node);
+}
+uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj)
+{
uint64_t offset;
mutex_lock(&obj->dev->struct_mutex);
offset = mmap_offset(obj);
mutex_unlock(&obj->dev->struct_mutex);
return offset;
+}
+/* should be called under struct_mutex.. although it can be called
+ * from atomic context without struct_mutex to acquire an extra
+ * iova ref if you know one is already held.
+ *
+ * That means when I do eventually need to add support for unpinning
+ * the refcnt counter needs to be atomic_t.
+ */
+int etnaviv_gem_get_iova_locked(struct etnaviv_gpu *gpu,
		struct drm_gem_object *obj,
uint32_t *iova)
+{
struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
int ret = 0;
if (!etnaviv_obj->iova && !(etnaviv_obj->flags & ETNA_BO_CMDSTREAM)) {
struct etnaviv_drm_private *priv = obj->dev->dev_private;
struct etnaviv_iommu *mmu = priv->mmu;
struct page **pages = get_pages(obj);
uint32_t offset;
struct drm_mm_node *node = NULL;
if (IS_ERR(pages))
return PTR_ERR(pages);
node = kzalloc(sizeof(*node), GFP_KERNEL);
if (!node)
return -ENOMEM;
ret = drm_mm_insert_node(&gpu->mm, node, obj->size, 0,
DRM_MM_SEARCH_DEFAULT);
if (!ret) {
offset = node->start;
etnaviv_obj->iova = offset;
etnaviv_obj->gpu_vram_node = node;
ret = etnaviv_iommu_map(mmu, offset, etnaviv_obj->sgt,
obj->size, IOMMU_READ | IOMMU_WRITE);
} else {
	kfree(node);
}
}
if (!ret)
*iova = etnaviv_obj->iova;
return ret;
+}
+int etnaviv_gem_get_iova(struct etnaviv_gpu *gpu, struct drm_gem_object *obj,
		int id, uint32_t *iova)
+{
struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
int ret;
/* this is safe right now because we don't unmap until the
* bo is deleted:
*/
if (etnaviv_obj->iova) {
*iova = etnaviv_obj->iova;
return 0;
}
mutex_lock(&obj->dev->struct_mutex);
ret = etnaviv_gem_get_iova_locked(gpu, obj, iova);
mutex_unlock(&obj->dev->struct_mutex);
return ret;
+}
+void etnaviv_gem_put_iova(struct drm_gem_object *obj)
+{
// XXX TODO ..
// NOTE: probably don't need a _locked() version.. we wouldn't
// normally unmap here, but instead just mark that it could be
// unmapped (if the iova refcnt drops to zero), but then later
// if another _get_iova_locked() fails we can start unmapping
// things that are no longer needed..
+}
+int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
struct drm_mode_create_dumb *args)
+{
args->pitch = align_pitch(args->width, args->bpp);
args->size = PAGE_ALIGN(args->pitch * args->height);
/* TODO: re-check flags */
return etnaviv_gem_new_handle(dev, file, args->size,
ETNA_BO_WC, &args->handle);
+}
+int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
uint32_t handle, uint64_t *offset)
+{
struct drm_gem_object *obj;
int ret = 0;
/* GEM does all our handle to object mapping */
obj = drm_gem_object_lookup(dev, file, handle);
if (obj == NULL) {
ret = -ENOENT;
goto fail;
}
*offset = msm_gem_mmap_offset(obj);
drm_gem_object_unreference_unlocked(obj);
+fail:
return ret;
+}
+void *etnaviv_gem_vaddr_locked(struct drm_gem_object *obj)
+{
struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
if (!etnaviv_obj->vaddr) {
struct page **pages = get_pages(obj);
if (IS_ERR(pages))
return ERR_CAST(pages);
etnaviv_obj->vaddr = vmap(pages, obj->size >> PAGE_SHIFT,
VM_MAP, pgprot_writecombine(PAGE_KERNEL));
}
return etnaviv_obj->vaddr;
+}
+void *msm_gem_vaddr(struct drm_gem_object *obj)
+{
void *ret;
mutex_lock(&obj->dev->struct_mutex);
ret = etnaviv_gem_vaddr_locked(obj);
mutex_unlock(&obj->dev->struct_mutex);
return ret;
+}
+dma_addr_t etnaviv_gem_paddr_locked(struct drm_gem_object *obj)
+{
struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
return etnaviv_obj->paddr;
+}
+void etnaviv_gem_move_to_active(struct drm_gem_object *obj,
struct etnaviv_gpu *gpu, bool write, uint32_t fence)
+{
struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
etnaviv_obj->gpu = gpu;
if (write)
etnaviv_obj->write_fence = fence;
else
etnaviv_obj->read_fence = fence;
list_del_init(&etnaviv_obj->mm_list);
list_add_tail(&etnaviv_obj->mm_list, &gpu->active_list);
+}
+void etnaviv_gem_move_to_inactive(struct drm_gem_object *obj)
+{
struct drm_device *dev = obj->dev;
struct etnaviv_drm_private *priv = dev->dev_private;
struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
WARN_ON(!mutex_is_locked(&dev->struct_mutex));
etnaviv_obj->gpu = NULL;
etnaviv_obj->read_fence = 0;
etnaviv_obj->write_fence = 0;
list_del_init(&etnaviv_obj->mm_list);
list_add_tail(&etnaviv_obj->mm_list, &priv->inactive_list);
+}
+int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op,
struct timespec *timeout)
+{
+/*
struct drm_device *dev = obj->dev;
struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
+*/
int ret = 0;
/* TODO */
+#if 0
if (is_active(etnaviv_obj)) {
uint32_t fence = 0;
if (op & MSM_PREP_READ)
fence = etnaviv_obj->write_fence;
if (op & MSM_PREP_WRITE)
fence = max(fence, etnaviv_obj->read_fence);
if (op & MSM_PREP_NOSYNC)
timeout = NULL;
ret = etnaviv_wait_fence_interruptable(dev, fence, timeout);
}
/* TODO cache maintenance */
+#endif
return ret;
+}
+int etnaviv_gem_cpu_fini(struct drm_gem_object *obj)
+{
/* TODO cache maintenance */
return 0;
+}
+#ifdef CONFIG_DEBUG_FS
+void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m)
+{
struct drm_device *dev = obj->dev;
struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
uint64_t off = drm_vma_node_start(&obj->vma_node);
WARN_ON(!mutex_is_locked(&dev->struct_mutex));
seq_printf(m, "%08x: %c(r=%u,w=%u) %2d (%2d) %08llx %p %d\n",
etnaviv_obj->flags, is_active(etnaviv_obj) ? 'A' : 'I',
etnaviv_obj->read_fence, etnaviv_obj->write_fence,
obj->name, obj->refcount.refcount.counter,
off, etnaviv_obj->vaddr, obj->size);
+}
+void msm_gem_describe_objects(struct list_head *list, struct seq_file *m)
+{
struct etnaviv_gem_object *etnaviv_obj;
int count = 0;
size_t size = 0;
list_for_each_entry(etnaviv_obj, list, mm_list) {
struct drm_gem_object *obj = &etnaviv_obj->base;
seq_puts(m, " ");
msm_gem_describe(obj, m);
count++;
size += obj->size;
}
seq_printf(m, "Total %d objects, %zu bytes\n", count, size);
+}
+#endif
+static void etnaviv_free_cmd(struct drm_gem_object *obj)
+{
struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
drm_gem_free_mmap_offset(obj);
dma_free_coherent(obj->dev->dev, obj->size,
etnaviv_obj->vaddr, etnaviv_obj->paddr);
drm_gem_object_release(obj);
+}
+static void etnaviv_free_obj(struct drm_gem_object *obj)
+{
struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
struct etnaviv_drm_private *priv = obj->dev->dev_private;
struct etnaviv_iommu *mmu = priv->mmu;
if (mmu && etnaviv_obj->iova) {
uint32_t offset = etnaviv_obj->gpu_vram_node->start;
etnaviv_iommu_unmap(mmu, offset, etnaviv_obj->sgt, obj->size);
drm_mm_remove_node(etnaviv_obj->gpu_vram_node);
kfree(etnaviv_obj->gpu_vram_node);
}
drm_gem_free_mmap_offset(obj);
if (obj->import_attach) {
if (etnaviv_obj->vaddr)
dma_buf_vunmap(obj->import_attach->dmabuf, etnaviv_obj->vaddr);
/* Don't drop the pages for imported dmabuf, as they are not
* ours, just free the array we allocated:
*/
if (etnaviv_obj->pages)
drm_free_large(etnaviv_obj->pages);
} else {
if (etnaviv_obj->vaddr)
vunmap(etnaviv_obj->vaddr);
put_pages(obj);
}
if (etnaviv_obj->resv == &etnaviv_obj->_resv)
reservation_object_fini(etnaviv_obj->resv);
drm_gem_object_release(obj);
+}
+void etnaviv_gem_free_object(struct drm_gem_object *obj)
+{
struct drm_device *dev = obj->dev;
struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
WARN_ON(!mutex_is_locked(&dev->struct_mutex));
/* object should not be on active list: */
WARN_ON(is_active(etnaviv_obj));
list_del(&etnaviv_obj->mm_list);
if (etnaviv_obj->flags & ETNA_BO_CMDSTREAM)
etnaviv_free_cmd(obj);
else
etnaviv_free_obj(obj);
kfree(etnaviv_obj);
+}
+/* convenience method to construct a GEM buffer object, and userspace handle */
+int etnaviv_gem_new_handle(struct drm_device *dev, struct drm_file *file,
uint32_t size, uint32_t flags, uint32_t *handle)
+{
struct drm_gem_object *obj;
int ret;
ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret)
return ret;
obj = etnaviv_gem_new(dev, size, flags);
mutex_unlock(&dev->struct_mutex);
if (IS_ERR(obj))
return PTR_ERR(obj);
ret = drm_gem_handle_create(file, obj, handle);
/* drop reference from allocate - handle holds it now */
drm_gem_object_unreference_unlocked(obj);
return ret;
+}
+static int etnaviv_gem_new_impl(struct drm_device *dev,
uint32_t size, uint32_t flags,
struct drm_gem_object **obj)
+{
struct etnaviv_drm_private *priv = dev->dev_private;
struct etnaviv_gem_object *etnaviv_obj;
unsigned sz = sizeof(*etnaviv_obj);
bool valid = true;
/* validate flags */
if (flags & ETNA_BO_CMDSTREAM) {
if ((flags & ETNA_BO_CACHE_MASK) != 0)
valid = false;
} else {
switch (flags & ETNA_BO_CACHE_MASK) {
case ETNA_BO_UNCACHED:
case ETNA_BO_CACHED:
case ETNA_BO_WC:
break;
default:
valid = false;
}
}
if (!valid) {
dev_err(dev->dev, "invalid cache flag: %x (cmd: %d)\n",
(flags & ETNA_BO_CACHE_MASK),
(flags & ETNA_BO_CMDSTREAM));
return -EINVAL;
}
etnaviv_obj = kzalloc(sz, GFP_KERNEL);
if (!etnaviv_obj)
return -ENOMEM;
if (flags & ETNA_BO_CMDSTREAM) {
etnaviv_obj->vaddr = dma_alloc_coherent(dev->dev, size,
&etnaviv_obj->paddr, GFP_KERNEL);
if (!etnaviv_obj->vaddr) {
kfree(etnaviv_obj);
return -ENOMEM;
}
}
etnaviv_obj->flags = flags;
etnaviv_obj->resv = &etnaviv_obj->_resv;
reservation_object_init(etnaviv_obj->resv);
INIT_LIST_HEAD(&etnaviv_obj->submit_entry);
list_add_tail(&etnaviv_obj->mm_list, &priv->inactive_list);
*obj = &etnaviv_obj->base;
return 0;
+}
+struct drm_gem_object *etnaviv_gem_new(struct drm_device *dev,
uint32_t size, uint32_t flags)
+{
struct drm_gem_object *obj = NULL;
int ret;
WARN_ON(!mutex_is_locked(&dev->struct_mutex));
size = PAGE_ALIGN(size);
ret = etnaviv_gem_new_impl(dev, size, flags, &obj);
if (ret)
goto fail;
ret = 0;
if (flags & ETNA_BO_CMDSTREAM)
drm_gem_private_object_init(dev, obj, size);
else
ret = drm_gem_object_init(dev, obj, size);
if (ret)
goto fail;
return obj;
+fail:
if (obj)
drm_gem_object_unreference(obj);
return ERR_PTR(ret);
+}
+struct drm_gem_object *msm_gem_import(struct drm_device *dev,
uint32_t size, struct sg_table *sgt)
+{
struct etnaviv_gem_object *etnaviv_obj;
struct drm_gem_object *obj;
int ret, npages;
size = PAGE_ALIGN(size);
ret = etnaviv_gem_new_impl(dev, size, ETNA_BO_WC, &obj);
if (ret)
goto fail;
drm_gem_private_object_init(dev, obj, size);
npages = size / PAGE_SIZE;
etnaviv_obj = to_etnaviv_bo(obj);
etnaviv_obj->sgt = sgt;
etnaviv_obj->pages = drm_malloc_ab(npages, sizeof(struct page *));
if (!etnaviv_obj->pages) {
ret = -ENOMEM;
goto fail;
}
ret = drm_prime_sg_to_page_addr_arrays(sgt, etnaviv_obj->pages, NULL, npages);
if (ret)
goto fail;
return obj;
+fail:
if (obj)
drm_gem_object_unreference_unlocked(obj);
return ERR_PTR(ret);
+}
diff --git a/drivers/staging/etnaviv/etnaviv_gem.h b/drivers/staging/etnaviv/etnaviv_gem.h
new file mode 100644
index 000000000000..597ff8233fb1
--- /dev/null
+++ b/drivers/staging/etnaviv/etnaviv_gem.h
@@ -0,0 +1,100 @@
+/*
+ * Copyright (C) 2013 Red Hat
+ * Author: Rob Clark <robdclark@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ETNAVIV_GEM_H__
+#define __ETNAVIV_GEM_H__
+#include <linux/reservation.h>
+#include "etnaviv_drv.h"
+struct etnaviv_gem_object {
struct drm_gem_object base;
uint32_t flags;
/* An object is either:
 * inactive - on priv->inactive_list
 * active - on one of the gpu's active lists.  At least for now we
 * don't have hw sync between the 2d and 3d cores on devices which
 * have both, meaning we need to block on submit if a bo is already
 * on the other ring.
 */
struct list_head mm_list;
struct etnaviv_gpu *gpu; /* non-null if active */
uint32_t read_fence, write_fence;
/* Transiently in the process of submit ioctl, objects associated
* with the submit are on submit->bo_list.. this only lasts for
* the duration of the ioctl, so one bo can never be on multiple
* submit lists.
*/
struct list_head submit_entry;
struct page **pages;
struct sg_table *sgt;
void *vaddr;
uint32_t iova;
/* for ETNA_BO_CMDSTREAM */
dma_addr_t paddr;
/* normally (resv == &_resv) except for imported bo's */
struct reservation_object *resv;
struct reservation_object _resv;
struct drm_mm_node *gpu_vram_node;
/* for buffer manipulation during submit */
u32 offset;
+};
+#define to_etnaviv_bo(x) container_of(x, struct etnaviv_gem_object, base)
+static inline bool is_active(struct etnaviv_gem_object *etnaviv_obj)
+{
return etnaviv_obj->gpu != NULL;
+}
+#define MAX_CMDS 4
+/* Created per submit-ioctl, to track bo's and cmdstream bufs, etc,
+ * associated with the cmdstream submission for synchronization (and
+ * make it easier to unwind when things go wrong, etc).  This only
+ * lasts for the duration of the submit-ioctl.
+ */
+struct etnaviv_gem_submit {
struct drm_device *dev;
struct etnaviv_gpu *gpu;
struct list_head bo_list;
struct ww_acquire_ctx ticket;
uint32_t fence;
bool valid;
unsigned int nr_cmds;
unsigned int nr_bos;
struct {
uint32_t type;
uint32_t size; /* in dwords */
struct etnaviv_gem_object *obj;
} cmd[MAX_CMDS];
struct {
uint32_t flags;
struct etnaviv_gem_object *obj;
uint32_t iova;
} bos[0];
+};
+#endif /* __ETNAVIV_GEM_H__ */
diff --git a/drivers/staging/etnaviv/etnaviv_gem_prime.c b/drivers/staging/etnaviv/etnaviv_gem_prime.c
new file mode 100644
index 000000000000..78dd843a8e97
--- /dev/null
+++ b/drivers/staging/etnaviv/etnaviv_gem_prime.c
@@ -0,0 +1,56 @@
+/*
+ * Copyright (C) 2013 Red Hat
+ * Author: Rob Clark <robdclark@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#include "etnaviv_drv.h"
+#include "etnaviv_gem.h"
+struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj)
+{
struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
BUG_ON(!etnaviv_obj->sgt); /* should have already pinned! */
return etnaviv_obj->sgt;
+}
+void *msm_gem_prime_vmap(struct drm_gem_object *obj)
+{
return msm_gem_vaddr(obj);
+}
+void msm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+{
/* TODO msm_gem_vunmap() */
+}
+struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
size_t size, struct sg_table *sg)
+{
return msm_gem_import(dev, size, sg);
+}
+int msm_gem_prime_pin(struct drm_gem_object *obj)
+{
if (!obj->import_attach)
etnaviv_gem_get_pages(obj);
return 0;
+}
+void msm_gem_prime_unpin(struct drm_gem_object *obj)
+{
if (!obj->import_attach)
msm_gem_put_pages(obj);
+}
diff --git a/drivers/staging/etnaviv/etnaviv_gem_submit.c b/drivers/staging/etnaviv/etnaviv_gem_submit.c
new file mode 100644
index 000000000000..dd87fdfe7ab5
--- /dev/null
+++ b/drivers/staging/etnaviv/etnaviv_gem_submit.c
@@ -0,0 +1,407 @@
+/*
+ * Copyright (C) 2013 Red Hat
+ * Author: Rob Clark <robdclark@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#include "etnaviv_drv.h"
+#include "etnaviv_gpu.h"
+#include "etnaviv_gem.h"
+/*
+ * Cmdstream submission:
+ */
+#define BO_INVALID_FLAGS ~(ETNA_SUBMIT_BO_READ | ETNA_SUBMIT_BO_WRITE)
+/* make sure these don't conflict w/ ETNA_SUBMIT_BO_x */
+#define BO_VALID 0x8000
+#define BO_LOCKED 0x4000
+#define BO_PINNED 0x2000
+static inline void __user *to_user_ptr(u64 address)
+{
return (void __user *)(uintptr_t)address;
+}
+static struct etnaviv_gem_submit *submit_create(struct drm_device *dev,
struct etnaviv_gpu *gpu, int nr)
+{
struct etnaviv_gem_submit *submit;
int sz = sizeof(*submit) + (nr * sizeof(submit->bos[0]));
submit = kmalloc(sz, GFP_TEMPORARY | __GFP_NOWARN | __GFP_NORETRY);
if (submit) {
submit->dev = dev;
submit->gpu = gpu;
/* initially, until copy_from_user() and bo lookup succeeds: */
submit->nr_bos = 0;
submit->nr_cmds = 0;
INIT_LIST_HEAD(&submit->bo_list);
ww_acquire_init(&submit->ticket, &reservation_ww_class);
}
return submit;
+}
+static int submit_lookup_objects(struct etnaviv_gem_submit *submit,
struct drm_etnaviv_gem_submit *args, struct drm_file *file)
+{
unsigned i;
int ret = 0;
spin_lock(&file->table_lock);
for (i = 0; i < args->nr_bos; i++) {
struct drm_etnaviv_gem_submit_bo submit_bo;
struct drm_gem_object *obj;
struct etnaviv_gem_object *etnaviv_obj;
void __user *userptr =
to_user_ptr(args->bos + (i * sizeof(submit_bo)));
ret = copy_from_user(&submit_bo, userptr, sizeof(submit_bo));
if (ret) {
ret = -EFAULT;
goto out_unlock;
}
if (submit_bo.flags & BO_INVALID_FLAGS) {
DRM_ERROR("invalid flags: %x\n", submit_bo.flags);
ret = -EINVAL;
goto out_unlock;
}
submit->bos[i].flags = submit_bo.flags;
/* in validate_objects() we figure out if this is true: */
submit->bos[i].iova = submit_bo.presumed;
/* normally use drm_gem_object_lookup(), but for bulk lookup
* all under single table_lock just hit object_idr directly:
*/
obj = idr_find(&file->object_idr, submit_bo.handle);
if (!obj) {
DRM_ERROR("invalid handle %u at index %u\n", submit_bo.handle, i);
ret = -EINVAL;
goto out_unlock;
}
etnaviv_obj = to_etnaviv_bo(obj);
if (!list_empty(&etnaviv_obj->submit_entry)) {
DRM_ERROR("handle %u at index %u already on submit list\n",
submit_bo.handle, i);
ret = -EINVAL;
goto out_unlock;
}
drm_gem_object_reference(obj);
submit->bos[i].obj = etnaviv_obj;
list_add_tail(&etnaviv_obj->submit_entry, &submit->bo_list);
}
+out_unlock:
submit->nr_bos = i;
spin_unlock(&file->table_lock);
return ret;
+}
+static void submit_unlock_unpin_bo(struct etnaviv_gem_submit *submit, int i)
+{
struct etnaviv_gem_object *etnaviv_obj = submit->bos[i].obj;
if (submit->bos[i].flags & BO_PINNED)
etnaviv_gem_put_iova(&etnaviv_obj->base);
if (submit->bos[i].flags & BO_LOCKED)
ww_mutex_unlock(&etnaviv_obj->resv->lock);
if (!(submit->bos[i].flags & BO_VALID))
submit->bos[i].iova = 0;
submit->bos[i].flags &= ~(BO_LOCKED | BO_PINNED);
+}
+/* This is where we make sure all the bo's are reserved and pin'd: */
+static int submit_validate_objects(struct etnaviv_gem_submit *submit)
+{
int contended, slow_locked = -1, i, ret = 0;
+retry:
submit->valid = true;
for (i = 0; i < submit->nr_bos; i++) {
struct etnaviv_gem_object *etnaviv_obj = submit->bos[i].obj;
uint32_t iova;
if (slow_locked == i)
slow_locked = -1;
contended = i;
if (!(submit->bos[i].flags & BO_LOCKED)) {
ret = ww_mutex_lock_interruptible(&etnaviv_obj->resv->lock,
&submit->ticket);
if (ret)
goto fail;
submit->bos[i].flags |= BO_LOCKED;
}
/* if locking succeeded, pin bo: */
ret = etnaviv_gem_get_iova_locked(submit->gpu, &etnaviv_obj->base, &iova);
/* this would break the logic in the fail path.. there is no
* reason for this to happen, but just to be on the safe side
* let's notice if this starts happening in the future:
*/
WARN_ON(ret == -EDEADLK);
if (ret)
goto fail;
submit->bos[i].flags |= BO_PINNED;
if (iova == submit->bos[i].iova) {
submit->bos[i].flags |= BO_VALID;
} else {
submit->bos[i].iova = iova;
submit->bos[i].flags &= ~BO_VALID;
submit->valid = false;
}
}
ww_acquire_done(&submit->ticket);
return 0;
+fail:
for (; i >= 0; i--)
submit_unlock_unpin_bo(submit, i);
if (slow_locked > 0)
submit_unlock_unpin_bo(submit, slow_locked);
if (ret == -EDEADLK) {
struct etnaviv_gem_object *etnaviv_obj = submit->bos[contended].obj;
/* we lost out in a seqno race, lock and retry.. */
ret = ww_mutex_lock_slow_interruptible(&etnaviv_obj->resv->lock,
&submit->ticket);
if (!ret) {
submit->bos[contended].flags |= BO_LOCKED;
slow_locked = contended;
goto retry;
}
}
return ret;
+}
+static int submit_bo(struct etnaviv_gem_submit *submit, uint32_t idx,
struct etnaviv_gem_object **obj, uint32_t *iova, bool *valid)
+{
if (idx >= submit->nr_bos) {
DRM_ERROR("invalid buffer index: %u (out of %u)\n",
idx, submit->nr_bos);
return -EINVAL;
}
if (obj)
*obj = submit->bos[idx].obj;
if (iova)
*iova = submit->bos[idx].iova;
if (valid)
*valid = !!(submit->bos[idx].flags & BO_VALID);
return 0;
+}
+/* process the reloc's and patch up the cmdstream as needed: */
+static int submit_reloc(struct etnaviv_gem_submit *submit, struct etnaviv_gem_object *obj,
uint32_t offset, uint32_t nr_relocs, uint64_t relocs)
+{
uint32_t i, last_offset = 0;
uint32_t *ptr = obj->vaddr;
int ret;
if (offset % 4) {
DRM_ERROR("non-aligned cmdstream buffer: %u\n", offset);
return -EINVAL;
}
for (i = 0; i < nr_relocs; i++) {
struct drm_etnaviv_gem_submit_reloc submit_reloc;
void __user *userptr =
to_user_ptr(relocs + (i * sizeof(submit_reloc)));
uint32_t iova, off;
bool valid;
ret = copy_from_user(&submit_reloc, userptr, sizeof(submit_reloc));
if (ret)
return -EFAULT;
if (submit_reloc.submit_offset % 4) {
DRM_ERROR("non-aligned reloc offset: %u\n",
submit_reloc.submit_offset);
return -EINVAL;
}
/* offset in dwords: */
off = submit_reloc.submit_offset / 4;
if ((off >= (obj->base.size / 4)) ||
(off < last_offset)) {
DRM_ERROR("invalid offset %u at reloc %u\n", off, i);
return -EINVAL;
}
ret = submit_bo(submit, submit_reloc.reloc_idx, NULL, &iova, &valid);
if (ret)
return ret;
if (valid)
continue;
iova += submit_reloc.reloc_offset;
if (submit_reloc.shift < 0)
iova >>= -submit_reloc.shift;
else
iova <<= submit_reloc.shift;
ptr[off] = iova | submit_reloc.or;
last_offset = off;
}
return 0;
+}
+static void submit_cleanup(struct etnaviv_gem_submit *submit, bool fail)
+{
unsigned i;
for (i = 0; i < submit->nr_bos; i++) {
struct etnaviv_gem_object *etnaviv_obj = submit->bos[i].obj;
submit_unlock_unpin_bo(submit, i);
list_del_init(&etnaviv_obj->submit_entry);
drm_gem_object_unreference(&etnaviv_obj->base);
}
ww_acquire_fini(&submit->ticket);
kfree(submit);
+}
+int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
struct drm_file *file)
+{
struct etnaviv_drm_private *priv = dev->dev_private;
struct drm_etnaviv_gem_submit *args = data;
struct etnaviv_file_private *ctx = file->driver_priv;
struct etnaviv_gem_submit *submit;
struct etnaviv_gpu *gpu;
unsigned i;
int ret;
if (args->pipe >= ETNA_MAX_PIPES)
return -EINVAL;
gpu = priv->gpu[args->pipe];
if (!gpu)
return -ENXIO;
if (args->nr_cmds > MAX_CMDS)
return -EINVAL;
mutex_lock(&dev->struct_mutex);
submit = submit_create(dev, gpu, args->nr_bos);
if (!submit) {
ret = -ENOMEM;
goto out;
}
ret = submit_lookup_objects(submit, args, file);
if (ret)
goto out;
ret = submit_validate_objects(submit);
if (ret)
goto out;
for (i = 0; i < args->nr_cmds; i++) {
struct drm_etnaviv_gem_submit_cmd submit_cmd;
void __user *userptr =
to_user_ptr(args->cmds + (i * sizeof(submit_cmd)));
struct etnaviv_gem_object *etnaviv_obj;
ret = copy_from_user(&submit_cmd, userptr, sizeof(submit_cmd));
if (ret) {
ret = -EFAULT;
goto out;
}
ret = submit_bo(submit, submit_cmd.submit_idx,
&etnaviv_obj, NULL, NULL);
if (ret)
goto out;
if (!(etnaviv_obj->flags & ETNA_BO_CMDSTREAM)) {
DRM_ERROR("cmdstream bo does not have ETNA_BO_CMDSTREAM flag set\n");
ret = -EINVAL;
goto out;
}
if (submit_cmd.size % 4) {
DRM_ERROR("non-aligned cmdstream buffer size: %u\n",
submit_cmd.size);
ret = -EINVAL;
goto out;
}
if ((submit_cmd.size + submit_cmd.submit_offset) >=
etnaviv_obj->base.size) {
DRM_ERROR("invalid cmdstream size: %u\n", submit_cmd.size);
ret = -EINVAL;
goto out;
}
submit->cmd[i].type = submit_cmd.type;
submit->cmd[i].size = submit_cmd.size / 4;
submit->cmd[i].obj = etnaviv_obj;
if (submit->valid)
continue;
ret = submit_reloc(submit, etnaviv_obj, submit_cmd.submit_offset,
submit_cmd.nr_relocs, submit_cmd.relocs);
if (ret)
goto out;
}
submit->nr_cmds = i;
ret = etnaviv_gpu_submit(gpu, submit, ctx);
args->fence = submit->fence;
+out:
if (submit)
submit_cleanup(submit, !!ret);
mutex_unlock(&dev->struct_mutex);
return ret;
+}
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c
new file mode 100644
index 000000000000..d2d0556a9bad
--- /dev/null
+++ b/drivers/staging/etnaviv/etnaviv_gpu.c
@@ -0,0 +1,984 @@
+/*
+ * Copyright (C) 2013 Red Hat
+ * Author: Rob Clark <robdclark@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#include <linux/component.h>
+#include <linux/of_device.h>
+#include "etnaviv_gpu.h"
+#include "etnaviv_gem.h"
+#include "etnaviv_mmu.h"
+#include "etnaviv_iommu.h"
+#include "etnaviv_iommu_v2.h"
+#include "common.xml.h"
+#include "state.xml.h"
+#include "state_hi.xml.h"
+#include "cmdstream.xml.h"
+/*
+ * Driver functions:
+ */
+int etnaviv_gpu_get_param(struct etnaviv_gpu *gpu, uint32_t param, uint64_t *value)
+{
switch (param) {
case ETNAVIV_PARAM_GPU_MODEL:
*value = gpu->identity.model;
break;
case ETNAVIV_PARAM_GPU_REVISION:
*value = gpu->identity.revision;
break;
case ETNAVIV_PARAM_GPU_FEATURES_0:
*value = gpu->identity.features;
break;
case ETNAVIV_PARAM_GPU_FEATURES_1:
*value = gpu->identity.minor_features0;
break;
case ETNAVIV_PARAM_GPU_FEATURES_2:
*value = gpu->identity.minor_features1;
break;
case ETNAVIV_PARAM_GPU_FEATURES_3:
*value = gpu->identity.minor_features2;
break;
case ETNAVIV_PARAM_GPU_FEATURES_4:
*value = gpu->identity.minor_features3;
break;
case ETNAVIV_PARAM_GPU_STREAM_COUNT:
*value = gpu->identity.stream_count;
break;
case ETNAVIV_PARAM_GPU_REGISTER_MAX:
*value = gpu->identity.register_max;
break;
case ETNAVIV_PARAM_GPU_THREAD_COUNT:
*value = gpu->identity.thread_count;
break;
case ETNAVIV_PARAM_GPU_VERTEX_CACHE_SIZE:
*value = gpu->identity.vertex_cache_size;
break;
case ETNAVIV_PARAM_GPU_SHADER_CORE_COUNT:
*value = gpu->identity.shader_core_count;
break;
case ETNAVIV_PARAM_GPU_PIXEL_PIPES:
*value = gpu->identity.pixel_pipes;
break;
case ETNAVIV_PARAM_GPU_VERTEX_OUTPUT_BUFFER_SIZE:
*value = gpu->identity.vertex_output_buffer_size;
break;
case ETNAVIV_PARAM_GPU_BUFFER_SIZE:
*value = gpu->identity.buffer_size;
break;
case ETNAVIV_PARAM_GPU_INSTRUCTION_COUNT:
*value = gpu->identity.instruction_count;
break;
case ETNAVIV_PARAM_GPU_NUM_CONSTANTS:
*value = gpu->identity.num_constants;
break;
default:
DBG("%s: invalid param: %u", gpu->name, param);
return -EINVAL;
}
return 0;
+}
+static void etnaviv_hw_specs(struct etnaviv_gpu *gpu)
+{
if (gpu->identity.minor_features0 & chipMinorFeatures0_MORE_MINOR_FEATURES) {
u32 specs[2];
specs[0] = gpu_read(gpu, VIVS_HI_CHIP_SPECS);
specs[1] = gpu_read(gpu, VIVS_HI_CHIP_SPECS_2);
gpu->identity.stream_count = (specs[0] & VIVS_HI_CHIP_SPECS_STREAM_COUNT__MASK)
>> VIVS_HI_CHIP_SPECS_STREAM_COUNT__SHIFT;
gpu->identity.register_max = (specs[0] & VIVS_HI_CHIP_SPECS_REGISTER_MAX__MASK)
>> VIVS_HI_CHIP_SPECS_REGISTER_MAX__SHIFT;
gpu->identity.thread_count = (specs[0] & VIVS_HI_CHIP_SPECS_THREAD_COUNT__MASK)
>> VIVS_HI_CHIP_SPECS_THREAD_COUNT__SHIFT;
gpu->identity.vertex_cache_size = (specs[0] & VIVS_HI_CHIP_SPECS_VERTEX_CACHE_SIZE__MASK)
>> VIVS_HI_CHIP_SPECS_VERTEX_CACHE_SIZE__SHIFT;
gpu->identity.shader_core_count = (specs[0] & VIVS_HI_CHIP_SPECS_SHADER_CORE_COUNT__MASK)
>> VIVS_HI_CHIP_SPECS_SHADER_CORE_COUNT__SHIFT;
gpu->identity.pixel_pipes = (specs[0] & VIVS_HI_CHIP_SPECS_PIXEL_PIPES__MASK)
>> VIVS_HI_CHIP_SPECS_PIXEL_PIPES__SHIFT;
gpu->identity.vertex_output_buffer_size = (specs[0] & VIVS_HI_CHIP_SPECS_VERTEX_OUTPUT_BUFFER_SIZE__MASK)
>> VIVS_HI_CHIP_SPECS_VERTEX_OUTPUT_BUFFER_SIZE__SHIFT;
gpu->identity.buffer_size = (specs[1] & VIVS_HI_CHIP_SPECS_2_BUFFER_SIZE__MASK)
>> VIVS_HI_CHIP_SPECS_2_BUFFER_SIZE__SHIFT;
gpu->identity.instruction_count = (specs[1] & VIVS_HI_CHIP_SPECS_2_INSTRUCTION_COUNT__MASK)
>> VIVS_HI_CHIP_SPECS_2_INSTRUCTION_COUNT__SHIFT;
gpu->identity.num_constants = (specs[1] & VIVS_HI_CHIP_SPECS_2_NUM_CONSTANTS__MASK)
>> VIVS_HI_CHIP_SPECS_2_NUM_CONSTANTS__SHIFT;
gpu->identity.register_max = 1 << gpu->identity.register_max;
gpu->identity.thread_count = 1 << gpu->identity.thread_count;
gpu->identity.vertex_output_buffer_size = 1 << gpu->identity.vertex_output_buffer_size;
} else {
dev_err(gpu->dev->dev, "TODO: determine GPU specs based on model\n");
}
switch (gpu->identity.instruction_count) {
case 0:
gpu->identity.instruction_count = 256;
break;
case 1:
gpu->identity.instruction_count = 1024;
break;
case 2:
gpu->identity.instruction_count = 2048;
break;
default:
gpu->identity.instruction_count = 256;
break;
}
dev_info(gpu->dev->dev, "stream_count: %x\n", gpu->identity.stream_count);
dev_info(gpu->dev->dev, "register_max: %x\n", gpu->identity.register_max);
dev_info(gpu->dev->dev, "thread_count: %x\n", gpu->identity.thread_count);
dev_info(gpu->dev->dev, "vertex_cache_size: %x\n", gpu->identity.vertex_cache_size);
dev_info(gpu->dev->dev, "shader_core_count: %x\n", gpu->identity.shader_core_count);
dev_info(gpu->dev->dev, "pixel_pipes: %x\n", gpu->identity.pixel_pipes);
dev_info(gpu->dev->dev, "vertex_output_buffer_size: %x\n", gpu->identity.vertex_output_buffer_size);
dev_info(gpu->dev->dev, "buffer_size: %x\n", gpu->identity.buffer_size);
dev_info(gpu->dev->dev, "instruction_count: %x\n", gpu->identity.instruction_count);
dev_info(gpu->dev->dev, "num_constants: %x\n", gpu->identity.num_constants);
+}
+static void etnaviv_hw_identify(struct etnaviv_gpu *gpu)
+{
u32 chipIdentity;
chipIdentity = gpu_read(gpu, VIVS_HI_CHIP_IDENTITY);
/* Special case for older graphic cores. */
if (VIVS_HI_CHIP_IDENTITY_FAMILY(chipIdentity) == 0x01) {
gpu->identity.model = 0x500; /* gc500 */
gpu->identity.revision = VIVS_HI_CHIP_IDENTITY_REVISION(chipIdentity);
} else {
gpu->identity.model = gpu_read(gpu, VIVS_HI_CHIP_MODEL);
gpu->identity.revision = gpu_read(gpu, VIVS_HI_CHIP_REV);
/* !!!! HACK ALERT !!!! */
/* Because people change device IDs without letting software know
** about it - here is the hack to make it all look the same. Only
** for GC400 family. Next time - TELL ME!!! */
if (((gpu->identity.model & 0xFF00) == 0x0400)
&& (gpu->identity.model != 0x0420)) {
gpu->identity.model = gpu->identity.model & 0x0400;
}
/* Another special case */
if ((gpu->identity.model == 0x300)
&& (gpu->identity.revision == 0x2201)) {
u32 chipDate = gpu_read(gpu, VIVS_HI_CHIP_DATE);
u32 chipTime = gpu_read(gpu, VIVS_HI_CHIP_TIME);
if ((chipDate == 0x20080814) && (chipTime == 0x12051100)) {
/* This IP has an ECO; put the correct revision in it. */
gpu->identity.revision = 0x1051;
}
}
}
dev_info(gpu->dev->dev, "model: %x\n", gpu->identity.model);
dev_info(gpu->dev->dev, "revision: %x\n", gpu->identity.revision);
gpu->identity.features = gpu_read(gpu, VIVS_HI_CHIP_FEATURE);
/* Disable fast clear on GC700. */
if (gpu->identity.model == 0x700)
gpu->identity.features &= ~BIT(0);
if (((gpu->identity.model == 0x500) && (gpu->identity.revision < 2))
|| ((gpu->identity.model == 0x300) && (gpu->identity.revision < 0x2000))) {
/* GC500 rev 1.x and GC300 rev < 2.0 don't have these registers. */
gpu->identity.minor_features0 = 0;
gpu->identity.minor_features1 = 0;
gpu->identity.minor_features2 = 0;
gpu->identity.minor_features3 = 0;
} else
gpu->identity.minor_features0 = gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_0);
if (gpu->identity.minor_features0 & BIT(21)) {
gpu->identity.minor_features1 = gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_1);
gpu->identity.minor_features2 = gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_2);
gpu->identity.minor_features3 = gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_3);
}
dev_info(gpu->dev->dev, "minor_features: %x\n", gpu->identity.minor_features0);
dev_info(gpu->dev->dev, "minor_features1: %x\n", gpu->identity.minor_features1);
dev_info(gpu->dev->dev, "minor_features2: %x\n", gpu->identity.minor_features2);
dev_info(gpu->dev->dev, "minor_features3: %x\n", gpu->identity.minor_features3);
etnaviv_hw_specs(gpu);
+}
+static void etnaviv_hw_reset(struct etnaviv_gpu *gpu)
+{
u32 control, idle;
/* TODO
*
* - clock gating
* - pulse eater
* - what about VG?
*/
while (true) {
control = gpu_read(gpu, VIVS_HI_CLOCK_CONTROL);
/* isolate the GPU. */
control |= VIVS_HI_CLOCK_CONTROL_ISOLATE_GPU;
gpu_write(gpu, VIVS_HI_CLOCK_CONTROL, control);
/* set soft reset. */
control |= VIVS_HI_CLOCK_CONTROL_SOFT_RESET;
gpu_write(gpu, VIVS_HI_CLOCK_CONTROL, control);
/* wait for reset. */
msleep(1);
/* reset soft reset bit. */
control &= ~VIVS_HI_CLOCK_CONTROL_SOFT_RESET;
gpu_write(gpu, VIVS_HI_CLOCK_CONTROL, control);
/* reset GPU isolation. */
control &= ~VIVS_HI_CLOCK_CONTROL_ISOLATE_GPU;
gpu_write(gpu, VIVS_HI_CLOCK_CONTROL, control);
/* read idle register. */
idle = gpu_read(gpu, VIVS_HI_IDLE_STATE);
/* try resetting again if the FE is not idle */
if ((idle & VIVS_HI_IDLE_STATE_FE) == 0) {
dev_dbg(gpu->dev->dev, "%s: FE is not idle\n", gpu->name);
continue;
}
/* read reset register. */
control = gpu_read(gpu, VIVS_HI_CLOCK_CONTROL);
/* is the GPU idle? */
if (((control & VIVS_HI_CLOCK_CONTROL_IDLE_3D) == 0)
|| ((control & VIVS_HI_CLOCK_CONTROL_IDLE_2D) == 0)) {
dev_dbg(gpu->dev->dev, "%s: GPU is not idle\n", gpu->name);
continue;
}
break;
}
+}
+int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
+{
int ret, i;
u32 words; /* 32 bit words */
struct iommu_domain *iommu;
bool mmuv2;
etnaviv_hw_identify(gpu);
etnaviv_hw_reset(gpu);
/* set base addresses */
gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_RA, 0x0);
gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_FE, 0x0);
gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_TX, 0x0);
gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_PEZ, 0x0);
gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_PE, 0x0);
/* Setup IOMMU.. eventually we will (I think) do this once per context
* and have separate page tables per context. For now, to keep things
* simple and to get something working, just use a single address space:
*/
mmuv2 = gpu->identity.minor_features1 & chipMinorFeatures1_MMU_VERSION;
dev_dbg(gpu->dev->dev, "mmuv2: %d\n", mmuv2);
if (!mmuv2)
iommu = etnaviv_iommu_domain_alloc(gpu);
else
iommu = etnaviv_iommu_v2_domain_alloc(gpu);
if (!iommu) {
ret = -ENOMEM;
goto fail;
}
/* TODO: we will leak memory here - fix it! */
gpu->mmu = etnaviv_iommu_new(gpu->dev, iommu);
if (!gpu->mmu) {
ret = -ENOMEM;
goto fail;
}
etnaviv_register_mmu(gpu->dev, gpu->mmu);
/* Create buffer: */
gpu->buffer = etnaviv_gem_new(gpu->dev, PAGE_SIZE, ETNA_BO_CMDSTREAM);
if (IS_ERR(gpu->buffer)) {
ret = PTR_ERR(gpu->buffer);
gpu->buffer = NULL;
dev_err(gpu->dev->dev, "could not create buffer: %d\n", ret);
goto fail;
}
/* Setup event management */
spin_lock_init(&gpu->event_spinlock);
init_completion(&gpu->event_free);
for (i = 0; i < ARRAY_SIZE(gpu->event_used); i++) {
gpu->event_used[i] = false;
complete(&gpu->event_free);
}
/* Start command processor */
words = etnaviv_buffer_init(gpu);
/* convert number of 32 bit words to number of 64 bit words */
words = ALIGN(words, 2) / 2;
gpu_write(gpu, VIVS_HI_INTR_ENBL, ~0U);
gpu_write(gpu, VIVS_FE_COMMAND_ADDRESS, etnaviv_gem_paddr_locked(gpu->buffer));
gpu_write(gpu, VIVS_FE_COMMAND_CONTROL, VIVS_FE_COMMAND_CONTROL_ENABLE | VIVS_FE_COMMAND_CONTROL_PREFETCH(words));
return 0;
+fail:
return ret;
+}
+#ifdef CONFIG_DEBUG_FS
+struct dma_debug {
u32 address[2];
u32 state[2];
+};
+static void verify_dma(struct etnaviv_gpu *gpu, struct dma_debug *debug)
+{
u32 i;
debug->address[0] = gpu_read(gpu, VIVS_FE_DMA_ADDRESS);
debug->state[0] = gpu_read(gpu, VIVS_FE_DMA_DEBUG_STATE);
for (i = 0; i < 500; i++) {
debug->address[1] = gpu_read(gpu, VIVS_FE_DMA_ADDRESS);
debug->state[1] = gpu_read(gpu, VIVS_FE_DMA_DEBUG_STATE);
if (debug->address[0] != debug->address[1])
break;
if (debug->state[0] != debug->state[1])
break;
}
+}
+void etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct seq_file *m)
+{
struct dma_debug debug;
u32 dma_lo = gpu_read(gpu, VIVS_FE_DMA_LOW);
u32 dma_hi = gpu_read(gpu, VIVS_FE_DMA_HIGH);
u32 axi = gpu_read(gpu, VIVS_HI_AXI_STATUS);
u32 idle = gpu_read(gpu, VIVS_HI_IDLE_STATE);
verify_dma(gpu, &debug);
seq_printf(m, "\taxi: 0x%08x\n", axi);
seq_printf(m, "\tidle: 0x%08x\n", idle);
if ((idle & VIVS_HI_IDLE_STATE_FE) == 0)
seq_puts(m, "\t FE is not idle\n");
if ((idle & VIVS_HI_IDLE_STATE_DE) == 0)
seq_puts(m, "\t DE is not idle\n");
if ((idle & VIVS_HI_IDLE_STATE_PE) == 0)
seq_puts(m, "\t PE is not idle\n");
if ((idle & VIVS_HI_IDLE_STATE_SH) == 0)
seq_puts(m, "\t SH is not idle\n");
if ((idle & VIVS_HI_IDLE_STATE_PA) == 0)
seq_puts(m, "\t PA is not idle\n");
if ((idle & VIVS_HI_IDLE_STATE_SE) == 0)
seq_puts(m, "\t SE is not idle\n");
if ((idle & VIVS_HI_IDLE_STATE_RA) == 0)
seq_puts(m, "\t RA is not idle\n");
if ((idle & VIVS_HI_IDLE_STATE_TX) == 0)
seq_puts(m, "\t TX is not idle\n");
if ((idle & VIVS_HI_IDLE_STATE_VG) == 0)
seq_puts(m, "\t VG is not idle\n");
if ((idle & VIVS_HI_IDLE_STATE_IM) == 0)
seq_puts(m, "\t IM is not idle\n");
if ((idle & VIVS_HI_IDLE_STATE_FP) == 0)
seq_puts(m, "\t FP is not idle\n");
if ((idle & VIVS_HI_IDLE_STATE_TS) == 0)
seq_puts(m, "\t TS is not idle\n");
if (idle & VIVS_HI_IDLE_STATE_AXI_LP)
seq_puts(m, "\t AXI low power mode\n");
if (gpu->identity.features & chipFeatures_DEBUG_MODE) {
u32 read0 = gpu_read(gpu, VIVS_MC_DEBUG_READ0);
u32 read1 = gpu_read(gpu, VIVS_MC_DEBUG_READ1);
u32 write = gpu_read(gpu, VIVS_MC_DEBUG_WRITE);
seq_puts(m, "\tMC\n");
seq_printf(m, "\t read0: 0x%08x\n", read0);
seq_printf(m, "\t read1: 0x%08x\n", read1);
seq_printf(m, "\t write: 0x%08x\n", write);
}
seq_puts(m, "\tDMA ");
if ((debug.address[0] == debug.address[1]) && (debug.state[0] == debug.state[1])) {
seq_puts(m, "seems to be stuck\n");
} else {
if (debug.address[0] == debug.address[1])
seq_puts(m, "address is constant\n");
else
seq_puts(m, "is running\n");
}
seq_printf(m, "\t address 0: 0x%08x\n", debug.address[0]);
seq_printf(m, "\t address 1: 0x%08x\n", debug.address[1]);
seq_printf(m, "\t state 0: 0x%08x\n", debug.state[0]);
seq_printf(m, "\t state 1: 0x%08x\n", debug.state[1]);
seq_printf(m, "\t last fetch 64 bit word: 0x%08x-0x%08x\n", dma_hi, dma_lo);
+}
+#endif
+/*
+ * Power Management:
+ */
+static int enable_pwrrail(struct etnaviv_gpu *gpu)
+{
+#if 0
struct drm_device *dev = gpu->dev;
int ret = 0;
if (gpu->gpu_reg) {
ret = regulator_enable(gpu->gpu_reg);
if (ret) {
dev_err(dev->dev, "failed to enable 'gpu_reg': %d\n", ret);
return ret;
}
}
if (gpu->gpu_cx) {
ret = regulator_enable(gpu->gpu_cx);
if (ret) {
dev_err(dev->dev, "failed to enable 'gpu_cx': %d\n", ret);
return ret;
}
}
#endif
return 0;
}

static int disable_pwrrail(struct etnaviv_gpu *gpu)
{
#if 0
if (gpu->gpu_cx)
regulator_disable(gpu->gpu_cx);
if (gpu->gpu_reg)
regulator_disable(gpu->gpu_reg);
#endif
return 0;
}

static int enable_clk(struct etnaviv_gpu *gpu)
{
if (gpu->clk_core)
clk_prepare_enable(gpu->clk_core);
if (gpu->clk_shader)
clk_prepare_enable(gpu->clk_shader);
return 0;
}

static int disable_clk(struct etnaviv_gpu *gpu)
{
if (gpu->clk_core)
clk_disable_unprepare(gpu->clk_core);
if (gpu->clk_shader)
clk_disable_unprepare(gpu->clk_shader);
return 0;
}

static int enable_axi(struct etnaviv_gpu *gpu)
{
if (gpu->clk_bus)
clk_prepare_enable(gpu->clk_bus);
return 0;
}

static int disable_axi(struct etnaviv_gpu *gpu)
{
if (gpu->clk_bus)
clk_disable_unprepare(gpu->clk_bus);
return 0;
}

int etnaviv_gpu_pm_resume(struct etnaviv_gpu *gpu)
{
int ret;
DBG("%s", gpu->name);
ret = enable_pwrrail(gpu);
if (ret)
return ret;
ret = enable_clk(gpu);
if (ret)
return ret;
ret = enable_axi(gpu);
if (ret)
return ret;
return 0;
}

int etnaviv_gpu_pm_suspend(struct etnaviv_gpu *gpu)
{
int ret;
DBG("%s", gpu->name);
ret = disable_axi(gpu);
if (ret)
return ret;
ret = disable_clk(gpu);
if (ret)
return ret;
ret = disable_pwrrail(gpu);
if (ret)
return ret;
return 0;
}

/*
 * Hangcheck detection for locked gpu:
 */
static void recover_worker(struct work_struct *work)
{
struct etnaviv_gpu *gpu = container_of(work, struct etnaviv_gpu, recover_work);
struct drm_device *dev = gpu->dev;
dev_err(dev->dev, "%s: hangcheck recover!\n", gpu->name);
mutex_lock(&dev->struct_mutex);
/* TODO gpu->funcs->recover(gpu); */
mutex_unlock(&dev->struct_mutex);
etnaviv_gpu_retire(gpu);
}

static void hangcheck_timer_reset(struct etnaviv_gpu *gpu)
{
DBG("%s", gpu->name);
mod_timer(&gpu->hangcheck_timer,
round_jiffies_up(jiffies + DRM_MSM_HANGCHECK_JIFFIES));
}

static void hangcheck_handler(unsigned long data)
{
struct etnaviv_gpu *gpu = (struct etnaviv_gpu *)data;
struct drm_device *dev = gpu->dev;
struct etnaviv_drm_private *priv = dev->dev_private;
uint32_t fence = gpu->retired_fence;
if (fence != gpu->hangcheck_fence) {
/* some progress has been made.. ya! */
gpu->hangcheck_fence = fence;
} else if (fence < gpu->submitted_fence) {
/* no progress and not done.. hung! */
gpu->hangcheck_fence = fence;
dev_err(dev->dev, "%s: hangcheck detected gpu lockup!\n",
gpu->name);
dev_err(dev->dev, "%s: completed fence: %u\n",
gpu->name, fence);
dev_err(dev->dev, "%s: submitted fence: %u\n",
gpu->name, gpu->submitted_fence);
queue_work(priv->wq, &gpu->recover_work);
}
/* if still more pending work, reset the hangcheck timer: */
if (gpu->submitted_fence > gpu->hangcheck_fence)
hangcheck_timer_reset(gpu);
}
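The decision in hangcheck_handler() reduces to comparing three monotonically increasing fence counters. A minimal userspace sketch of that policy (the helper names are hypothetical, not part of the driver):

```c
#include <stdint.h>

/* Mirrors the hangcheck policy above: the GPU is considered hung when no
 * fence retired since the last timer tick AND work is still outstanding.
 * Returns 1 when a recovery should be queued, 0 otherwise. */
static int hangcheck_is_hung(uint32_t retired, uint32_t last_seen,
			     uint32_t submitted)
{
	if (retired != last_seen)
		return 0;		/* some progress has been made */
	return retired < submitted;	/* no progress and not done */
}

/* The timer is re-armed only while work remains outstanding. */
static int hangcheck_should_rearm(uint32_t last_seen, uint32_t submitted)
{
	return submitted > last_seen;
}
```

Note that a fence equal to `submitted` means the last piece of work retired, so neither a recovery nor a timer re-arm is needed.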
/*
 * event management:
 */
static unsigned int event_alloc(struct etnaviv_gpu *gpu)
{
unsigned long ret, flags;
unsigned int i, event = ~0U;
ret = wait_for_completion_timeout(&gpu->event_free, msecs_to_jiffies(10 * 10000));
if (!ret)
dev_err(gpu->dev->dev, "wait_for_completion_timeout failed");
spin_lock_irqsave(&gpu->event_spinlock, flags);
/* find first free event */
for (i = 0; i < ARRAY_SIZE(gpu->event_used); i++) {
if (gpu->event_used[i] == false) {
gpu->event_used[i] = true;
event = i;
break;
}
}
spin_unlock_irqrestore(&gpu->event_spinlock, flags);
return event;
}

static void event_free(struct etnaviv_gpu *gpu, unsigned int event)
{
unsigned long flags;
spin_lock_irqsave(&gpu->event_spinlock, flags);
if (gpu->event_used[event] == false) {
dev_warn(gpu->dev->dev, "event %u is already marked as free", event);
spin_unlock_irqrestore(&gpu->event_spinlock, flags);
} else {
gpu->event_used[event] = false;
spin_unlock_irqrestore(&gpu->event_spinlock, flags);
complete(&gpu->event_free);
}
}
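Stripped of the completion and the spinlock, event_alloc()/event_free() implement a plain fixed-size slot allocator. A standalone model of that behaviour (names prefixed `model_` are hypothetical, for illustration only):

```c
#include <stdbool.h>

#define NUM_EVENTS 30

/* Userspace model of the driver's event table: a fixed pool of slots,
 * alloc hands out the first free one, free returns it. The kernel
 * version additionally blocks on a completion when the pool is empty
 * and protects the table with a spinlock; both are omitted here. */
struct event_table {
	bool used[NUM_EVENTS];
};

static unsigned int model_event_alloc(struct event_table *t)
{
	unsigned int i;

	for (i = 0; i < NUM_EVENTS; i++) {
		if (!t->used[i]) {
			t->used[i] = true;
			return i;
		}
	}
	return ~0U;	/* no free event */
}

static int model_event_free(struct event_table *t, unsigned int event)
{
	if (!t->used[event])
		return -1;	/* already free: the driver warns here */
	t->used[event] = false;
	return 0;
}
```

Because the table has exactly 30 slots, at most 30 submits can be in flight per core before event_alloc() has to wait for an interrupt to free a slot.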
/*
 * Cmdstream submission/retirement:
 */
static void retire_worker(struct work_struct *work)
{
struct etnaviv_gpu *gpu = container_of(work, struct etnaviv_gpu, retire_work);
struct drm_device *dev = gpu->dev;
uint32_t fence = gpu->retired_fence;
etnaviv_update_fence(gpu->dev, fence);
mutex_lock(&dev->struct_mutex);
while (!list_empty(&gpu->active_list)) {
struct etnaviv_gem_object *obj;
obj = list_first_entry(&gpu->active_list,
struct etnaviv_gem_object, mm_list);
if ((obj->read_fence <= fence) &&
(obj->write_fence <= fence)) {
/* move to inactive: */
etnaviv_gem_move_to_inactive(&obj->base);
etnaviv_gem_put_iova(&obj->base);
drm_gem_object_unreference(&obj->base);
} else {
break;
}
}
mutex_unlock(&dev->struct_mutex);
}

/* call from irq handler to schedule work to retire bo's */
void etnaviv_gpu_retire(struct etnaviv_gpu *gpu)
{
struct etnaviv_drm_private *priv = gpu->dev->dev_private;
queue_work(priv->wq, &gpu->retire_work);
}

/* add bo's to gpu's ring, and kick gpu: */
int etnaviv_gpu_submit(struct etnaviv_gpu *gpu, struct etnaviv_gem_submit *submit,
struct etnaviv_file_private *ctx)
{
struct drm_device *dev = gpu->dev;
struct etnaviv_drm_private *priv = dev->dev_private;
int ret = 0;
unsigned int event, i;
submit->fence = ++priv->next_fence;
gpu->submitted_fence = submit->fence;
/*
* TODO
*
* - flush
* - data endian
* - prefetch
*
*/
event = event_alloc(gpu);
if (unlikely(event == ~0U)) {
DRM_ERROR("no free event\n");
ret = -EBUSY;
goto fail;
}
gpu->event_to_fence[event] = submit->fence;
etnaviv_buffer_queue(gpu, event, submit);
priv->lastctx = ctx;
for (i = 0; i < submit->nr_bos; i++) {
struct etnaviv_gem_object *etnaviv_obj = submit->bos[i].obj;
/* can't happen yet.. but when we add 2d support we'll have
* to deal w/ cross-ring synchronization:
*/
WARN_ON(is_active(etnaviv_obj) && (etnaviv_obj->gpu != gpu));
if (!is_active(etnaviv_obj)) {
uint32_t iova;
/* ring takes a reference to the bo and iova: */
drm_gem_object_reference(&etnaviv_obj->base);
etnaviv_gem_get_iova_locked(gpu, &etnaviv_obj->base, &iova);
}
if (submit->bos[i].flags & ETNA_SUBMIT_BO_READ)
etnaviv_gem_move_to_active(&etnaviv_obj->base, gpu, false, submit->fence);
if (submit->bos[i].flags & ETNA_SUBMIT_BO_WRITE)
etnaviv_gem_move_to_active(&etnaviv_obj->base, gpu, true, submit->fence);
}
hangcheck_timer_reset(gpu);
fail:
return ret;
}

/*
 * Init/Cleanup:
 */
static irqreturn_t irq_handler(int irq, void *data)
{
struct etnaviv_gpu *gpu = data;
irqreturn_t ret = IRQ_NONE;
u32 intr = gpu_read(gpu, VIVS_HI_INTR_ACKNOWLEDGE);
if (intr != 0) {
dev_dbg(gpu->dev->dev, "intr 0x%08x\n", intr);
if (intr & VIVS_HI_INTR_ACKNOWLEDGE_AXI_BUS_ERROR)
dev_err(gpu->dev->dev, "AXI bus error\n");
else {
uint8_t event = __fls(intr);
dev_dbg(gpu->dev->dev, "event %u\n", event);
gpu->retired_fence = gpu->event_to_fence[event];
event_free(gpu, event);
etnaviv_gpu_retire(gpu);
}
ret = IRQ_HANDLED;
}
return ret;
}

static int etnaviv_gpu_bind(struct device *dev, struct device *master,
void *data)
{
struct drm_device *drm = data;
struct etnaviv_drm_private *priv = drm->dev_private;
struct etnaviv_gpu *gpu = dev_get_drvdata(dev);
int idx = gpu->pipe;
dev_info(dev, "pre gpu[idx]: 0x%08x\n", (u32)priv->gpu[idx]);
if (priv->gpu[idx] == 0) {
dev_info(dev, "adding core @idx %d\n", idx);
priv->gpu[idx] = gpu;
} else {
dev_err(dev, "failed to add core @idx %d\n", idx);
goto fail;
}
dev_info(dev, "post gpu[idx]: 0x%08x\n", (u32)priv->gpu[idx]);
gpu->dev = drm;
INIT_LIST_HEAD(&gpu->active_list);
INIT_WORK(&gpu->retire_work, retire_worker);
INIT_WORK(&gpu->recover_work, recover_worker);
setup_timer(&gpu->hangcheck_timer, hangcheck_handler,
(unsigned long)gpu);
return 0;
fail:
return -1;
}

static void etnaviv_gpu_unbind(struct device *dev, struct device *master,
void *data)
{
struct etnaviv_gpu *gpu = dev_get_drvdata(dev);
DBG("%s", gpu->name);
WARN_ON(!list_empty(&gpu->active_list));
if (gpu->buffer)
drm_gem_object_unreference(gpu->buffer);
if (gpu->mmu)
etnaviv_iommu_destroy(gpu->mmu);
drm_mm_takedown(&gpu->mm);
}

static const struct component_ops gpu_ops = {
.bind = etnaviv_gpu_bind,
.unbind = etnaviv_gpu_unbind,
};

static const struct of_device_id etnaviv_gpu_match[] = {
{
.compatible = "vivante,vivante-gpu-2d",
.data = (void *)ETNA_PIPE_2D
},
{
.compatible = "vivante,vivante-gpu-3d",
.data = (void *)ETNA_PIPE_3D
},
{
.compatible = "vivante,vivante-gpu-vg",
.data = (void *)ETNA_PIPE_VG
},
{ }
};

static int etnaviv_gpu_platform_probe(struct platform_device *pdev)
{
const struct of_device_id *match;
struct device *dev = &pdev->dev;
struct etnaviv_gpu *gpu;
int err = 0;
gpu = devm_kzalloc(dev, sizeof(*gpu), GFP_KERNEL);
if (!gpu)
return -ENOMEM;
match = of_match_device(etnaviv_gpu_match, &pdev->dev);
if (!match)
return -EINVAL;
gpu->name = pdev->name;
/* Map registers: */
gpu->mmio = etnaviv_ioremap(pdev, NULL, gpu->name);
if (IS_ERR(gpu->mmio))
return PTR_ERR(gpu->mmio);
/* Get Interrupt: */
gpu->irq = platform_get_irq(pdev, 0);
if (gpu->irq < 0) {
err = gpu->irq;
dev_err(dev, "failed to get irq: %d\n", err);
goto fail;
}
err = devm_request_irq(&pdev->dev, gpu->irq, irq_handler,
IRQF_TRIGGER_HIGH, gpu->name, gpu);
if (err) {
dev_err(dev, "failed to request IRQ%u: %d\n", gpu->irq, err);
goto fail;
}
/* Get Clocks: */
gpu->clk_bus = devm_clk_get(&pdev->dev, "bus");
DBG("clk_bus: %p", gpu->clk_bus);
if (IS_ERR(gpu->clk_bus))
gpu->clk_bus = NULL;
gpu->clk_core = devm_clk_get(&pdev->dev, "core");
DBG("clk_core: %p", gpu->clk_core);
if (IS_ERR(gpu->clk_core))
gpu->clk_core = NULL;
gpu->clk_shader = devm_clk_get(&pdev->dev, "shader");
DBG("clk_shader: %p", gpu->clk_shader);
if (IS_ERR(gpu->clk_shader))
gpu->clk_shader = NULL;
gpu->pipe = (int)match->data;
/* TODO: figure out max mapped size */
drm_mm_init(&gpu->mm, 0x80000000, SZ_1G);
dev_set_drvdata(dev, gpu);
err = component_add(&pdev->dev, &gpu_ops);
if (err < 0) {
dev_err(&pdev->dev, "failed to register component: %d\n", err);
goto fail;
}
return 0;
fail:
return err;
}

static int etnaviv_gpu_platform_remove(struct platform_device *pdev)
{
component_del(&pdev->dev, &gpu_ops);
return 0;
}

struct platform_driver etnaviv_gpu_driver = {
.driver = {
.name = "etnaviv-gpu",
.owner = THIS_MODULE,
.of_match_table = etnaviv_gpu_match,
},
.probe = etnaviv_gpu_platform_probe,
.remove = etnaviv_gpu_platform_remove,
};

diff --git a/drivers/staging/etnaviv/etnaviv_gpu.h b/drivers/staging/etnaviv/etnaviv_gpu.h
new file mode 100644
index 000000000000..707096b5fe98
--- /dev/null
+++ b/drivers/staging/etnaviv/etnaviv_gpu.h
@@ -0,0 +1,152 @@
/*
 * Copyright (C) 2013 Red Hat
 * Author: Rob Clark <robdclark@gmail.com>
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 as published by
 * the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 * more details.
 *
 * You should have received a copy of the GNU General Public License along with
 * this program.  If not, see <http://www.gnu.org/licenses/>.
 */

#ifndef __ETNAVIV_GPU_H__
#define __ETNAVIV_GPU_H__

#include <linux/clk.h>
#include <linux/regulator/consumer.h>

#include "etnaviv_drv.h"

struct etnaviv_gem_submit;

struct etnaviv_chip_identity {
/* Chip model. */
uint32_t model;
/* Revision value.*/
uint32_t revision;
/* Supported feature fields. */
uint32_t features;
/* Supported minor feature fields. */
uint32_t minor_features0;
/* Supported minor feature 1 fields. */
uint32_t minor_features1;
/* Supported minor feature 2 fields. */
uint32_t minor_features2;
/* Supported minor feature 3 fields. */
uint32_t minor_features3;
/* Number of streams supported. */
uint32_t stream_count;
/* Total number of temporary registers per thread. */
uint32_t register_max;
/* Maximum number of threads. */
uint32_t thread_count;
/* Number of shader cores. */
uint32_t shader_core_count;
/* Size of the vertex cache. */
uint32_t vertex_cache_size;
/* Number of entries in the vertex output buffer. */
uint32_t vertex_output_buffer_size;
/* Number of pixel pipes. */
uint32_t pixel_pipes;
/* Number of instructions. */
uint32_t instruction_count;
/* Number of constants. */
uint32_t num_constants;
/* Buffer size */
uint32_t buffer_size;
};

struct etnaviv_gpu {
const char *name;
struct drm_device *dev;
struct etnaviv_chip_identity identity;
int pipe;
/* 'ring'-buffer: */
struct drm_gem_object *buffer;
/* event management: */
bool event_used[30];
uint32_t event_to_fence[30];
struct completion event_free;
struct spinlock event_spinlock;
/* list of GEM active objects: */
struct list_head active_list;
uint32_t submitted_fence;
uint32_t retired_fence;
/* worker for handling active-list retiring: */
struct work_struct retire_work;
void __iomem *mmio;
int irq;
struct etnaviv_iommu *mmu;
/* memory manager for GPU address area */
struct drm_mm mm;
/* Power Control: */
struct clk *clk_bus;
struct clk *clk_core;
struct clk *clk_shader;
/* Hang Detection: */
#define DRM_MSM_HANGCHECK_PERIOD 500 /* in ms */
#define DRM_MSM_HANGCHECK_JIFFIES msecs_to_jiffies(DRM_MSM_HANGCHECK_PERIOD)
struct timer_list hangcheck_timer;
uint32_t hangcheck_fence;
struct work_struct recover_work;
};

static inline void gpu_write(struct etnaviv_gpu *gpu, u32 reg, u32 data)
{
etnaviv_writel(data, gpu->mmio + reg);
}

static inline u32 gpu_read(struct etnaviv_gpu *gpu, u32 reg)
{
return etnaviv_readl(gpu->mmio + reg);
}

int etnaviv_gpu_get_param(struct etnaviv_gpu *gpu, uint32_t param, uint64_t *value);
int etnaviv_gpu_init(struct etnaviv_gpu *gpu);
int etnaviv_gpu_pm_suspend(struct etnaviv_gpu *gpu);
int etnaviv_gpu_pm_resume(struct etnaviv_gpu *gpu);

#ifdef CONFIG_DEBUG_FS
void etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct seq_file *m);
#endif

void etnaviv_gpu_retire(struct etnaviv_gpu *gpu);
int etnaviv_gpu_submit(struct etnaviv_gpu *gpu, struct etnaviv_gem_submit *submit,
struct etnaviv_file_private *ctx);

extern struct platform_driver etnaviv_gpu_driver;

#endif /* __ETNAVIV_GPU_H__ */

diff --git a/drivers/staging/etnaviv/etnaviv_iommu.c b/drivers/staging/etnaviv/etnaviv_iommu.c
new file mode 100644
index 000000000000..d0811fb13363
--- /dev/null
+++ b/drivers/staging/etnaviv/etnaviv_iommu.c
@@ -0,0 +1,185 @@
/*
 * Copyright (C) 2014 Christian Gmeiner <christian.gmeiner@gmail.com>
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 as published by
 * the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 * more details.
 *
 * You should have received a copy of the GNU General Public License along with
 * this program.  If not, see <http://www.gnu.org/licenses/>.
 */

#include <linux/iommu.h>
#include <linux/platform_device.h>
#include <linux/sizes.h>
#include <linux/slab.h>
#include <linux/dma-mapping.h>
#include <linux/bitops.h>

#include "etnaviv_gpu.h"
#include "state_hi.xml.h"

#define PT_SIZE SZ_256K
#define PT_ENTRIES (PT_SIZE / sizeof(uint32_t))

#define GPU_MEM_START 0x80000000

struct etnaviv_iommu_domain_pgtable {
uint32_t *pgtable;
dma_addr_t paddr;
};

struct etnaviv_iommu_domain {
struct etnaviv_iommu_domain_pgtable pgtable;
spinlock_t map_lock;
};

static int pgtable_alloc(struct etnaviv_iommu_domain_pgtable *pgtable,
size_t size)
{
pgtable->pgtable = dma_alloc_coherent(NULL, size, &pgtable->paddr, GFP_KERNEL);
if (!pgtable->pgtable)
return -ENOMEM;
return 0;
}

static void pgtable_free(struct etnaviv_iommu_domain_pgtable *pgtable,
size_t size)
{
dma_free_coherent(NULL, size, pgtable->pgtable, pgtable->paddr);
}

static uint32_t pgtable_read(struct etnaviv_iommu_domain_pgtable *pgtable,
unsigned long iova)
{
/* calculate index into page table */
unsigned int index = (iova - GPU_MEM_START) / SZ_4K;
phys_addr_t paddr;
paddr = pgtable->pgtable[index];
return paddr;
}

static void pgtable_write(struct etnaviv_iommu_domain_pgtable *pgtable,
unsigned long iova, phys_addr_t paddr)
{
/* calculate index into page table */
unsigned int index = (iova - GPU_MEM_START) / SZ_4K;
pgtable->pgtable[index] = paddr;
}
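The MMUv1 page table above is a single flat array with one 32-bit entry per 4K page, indexed from the start of the GPU's virtual window. The index math in pgtable_read()/pgtable_write() can be checked in isolation (the `MODEL_` constants mirror GPU_MEM_START and SZ_4K; the helper name is hypothetical):

```c
#include <stdint.h>

#define MODEL_GPU_MEM_START 0x80000000u
#define MODEL_SZ_4K 4096u

/* One page-table slot per 4K page, counted from the base of the
 * GPU virtual address window at MODEL_GPU_MEM_START. */
static unsigned int model_pt_index(uint32_t iova)
{
	return (iova - MODEL_GPU_MEM_START) / MODEL_SZ_4K;
}
```

With PT_SIZE of 256K and 4 bytes per entry this gives 65536 entries, i.e. a 256MB window of translatable GPU address space for a one-level table.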
static int etnaviv_iommu_domain_init(struct iommu_domain *domain)
{
struct etnaviv_iommu_domain *etnaviv_domain;
int ret;
etnaviv_domain = kmalloc(sizeof(*etnaviv_domain), GFP_KERNEL);
if (!etnaviv_domain)
return -ENOMEM;
ret = pgtable_alloc(&etnaviv_domain->pgtable, PT_SIZE);
if (ret < 0) {
kfree(etnaviv_domain);
return ret;
}
spin_lock_init(&etnaviv_domain->map_lock);
domain->priv = etnaviv_domain;
return 0;
}

static void etnaviv_iommu_domain_destroy(struct iommu_domain *domain)
{
struct etnaviv_iommu_domain *etnaviv_domain = domain->priv;
pgtable_free(&etnaviv_domain->pgtable, PT_SIZE);
kfree(etnaviv_domain);
domain->priv = NULL;
}

static int etnaviv_iommu_map(struct iommu_domain *domain, unsigned long iova,
phys_addr_t paddr, size_t size, int prot)
{
struct etnaviv_iommu_domain *etnaviv_domain = domain->priv;
if (size != SZ_4K)
return -EINVAL;
spin_lock(&etnaviv_domain->map_lock);
pgtable_write(&etnaviv_domain->pgtable, iova, paddr);
spin_unlock(&etnaviv_domain->map_lock);
return 0;
}

static size_t etnaviv_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
size_t size)
{
struct etnaviv_iommu_domain *etnaviv_domain = domain->priv;
if (size != SZ_4K)
return -EINVAL;
spin_lock(&etnaviv_domain->map_lock);
pgtable_write(&etnaviv_domain->pgtable, iova, ~0);
spin_unlock(&etnaviv_domain->map_lock);
return 0;
}

phys_addr_t etnaviv_iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
{
struct etnaviv_iommu_domain *etnaviv_domain = domain->priv;
return pgtable_read(&etnaviv_domain->pgtable, iova);
}

static struct iommu_ops etnaviv_iommu_ops = {
.domain_init = etnaviv_iommu_domain_init,
.domain_destroy = etnaviv_iommu_domain_destroy,
.map = etnaviv_iommu_map,
.unmap = etnaviv_iommu_unmap,
.iova_to_phys = etnaviv_iommu_iova_to_phys,
.pgsize_bitmap = SZ_4K,
};

struct iommu_domain *etnaviv_iommu_domain_alloc(struct etnaviv_gpu *gpu)
{
struct iommu_domain *domain;
struct etnaviv_iommu_domain *etnaviv_domain;
int ret;
domain = kzalloc(sizeof(*domain), GFP_KERNEL);
if (!domain)
return NULL;
domain->ops = &etnaviv_iommu_ops;
ret = domain->ops->domain_init(domain);
if (ret)
goto out_free;
/* set page table address in MC */
etnaviv_domain = domain->priv;
gpu_write(gpu, VIVS_MC_MMU_FE_PAGE_TABLE, (uint32_t)etnaviv_domain->pgtable.paddr);
gpu_write(gpu, VIVS_MC_MMU_TX_PAGE_TABLE, (uint32_t)etnaviv_domain->pgtable.paddr);
gpu_write(gpu, VIVS_MC_MMU_PE_PAGE_TABLE, (uint32_t)etnaviv_domain->pgtable.paddr);
gpu_write(gpu, VIVS_MC_MMU_PEZ_PAGE_TABLE, (uint32_t)etnaviv_domain->pgtable.paddr);
gpu_write(gpu, VIVS_MC_MMU_RA_PAGE_TABLE, (uint32_t)etnaviv_domain->pgtable.paddr);
return domain;
out_free:
kfree(domain);
return NULL;
}

diff --git a/drivers/staging/etnaviv/etnaviv_iommu.h b/drivers/staging/etnaviv/etnaviv_iommu.h
new file mode 100644
index 000000000000..3103ff3efcbe
--- /dev/null
+++ b/drivers/staging/etnaviv/etnaviv_iommu.h
@@ -0,0 +1,25 @@
/*
 * Copyright (C) 2014 Christian Gmeiner <christian.gmeiner@gmail.com>
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 as published by
 * the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 * more details.
 *
 * You should have received a copy of the GNU General Public License along with
 * this program.  If not, see <http://www.gnu.org/licenses/>.
 */

#ifndef __ETNAVIV_IOMMU_H__
#define __ETNAVIV_IOMMU_H__

#include <linux/iommu.h>

struct etnaviv_gpu;

struct iommu_domain *etnaviv_iommu_domain_alloc(struct etnaviv_gpu *gpu);

#endif /* __ETNAVIV_IOMMU_H__ */

diff --git a/drivers/staging/etnaviv/etnaviv_iommu_v2.c b/drivers/staging/etnaviv/etnaviv_iommu_v2.c
new file mode 100644
index 000000000000..3039ee9cbc6d
--- /dev/null
+++ b/drivers/staging/etnaviv/etnaviv_iommu_v2.c
@@ -0,0 +1,32 @@
/*
 * Copyright (C) 2014 Christian Gmeiner <christian.gmeiner@gmail.com>
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 as published by
 * the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 * more details.
 *
 * You should have received a copy of the GNU General Public License along with
 * this program.  If not, see <http://www.gnu.org/licenses/>.
 */

#include <linux/iommu.h>
#include <linux/platform_device.h>
#include <linux/sizes.h>
#include <linux/slab.h>
#include <linux/dma-mapping.h>
#include <linux/bitops.h>

#include "etnaviv_gpu.h"
#include "state_hi.xml.h"

struct iommu_domain *etnaviv_iommu_v2_domain_alloc(struct etnaviv_gpu *gpu)
{
/* TODO */
return NULL;
}

diff --git a/drivers/staging/etnaviv/etnaviv_iommu_v2.h b/drivers/staging/etnaviv/etnaviv_iommu_v2.h
new file mode 100644
index 000000000000..603ea41c5389
--- /dev/null
+++ b/drivers/staging/etnaviv/etnaviv_iommu_v2.h
@@ -0,0 +1,25 @@
/*
 * Copyright (C) 2014 Christian Gmeiner <christian.gmeiner@gmail.com>
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 as published by
 * the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 * more details.
 *
 * You should have received a copy of the GNU General Public License along with
 * this program.  If not, see <http://www.gnu.org/licenses/>.
 */

#ifndef __ETNAVIV_IOMMU_V2_H__
#define __ETNAVIV_IOMMU_V2_H__

#include <linux/iommu.h>

struct etnaviv_gpu;

struct iommu_domain *etnaviv_iommu_v2_domain_alloc(struct etnaviv_gpu *gpu);

#endif /* __ETNAVIV_IOMMU_V2_H__ */

diff --git a/drivers/staging/etnaviv/etnaviv_mmu.c b/drivers/staging/etnaviv/etnaviv_mmu.c
new file mode 100644
index 000000000000..cee97e11117d
--- /dev/null
+++ b/drivers/staging/etnaviv/etnaviv_mmu.c
@@ -0,0 +1,111 @@
/*
 * Copyright (C) 2013 Red Hat
 * Author: Rob Clark <robdclark@gmail.com>
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 as published by
 * the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 * more details.
 *
 * You should have received a copy of the GNU General Public License along with
 * this program.  If not, see <http://www.gnu.org/licenses/>.
 */

#include "etnaviv_drv.h"
#include "etnaviv_mmu.h"

static int etnaviv_fault_handler(struct iommu_domain *iommu, struct device *dev,
unsigned long iova, int flags, void *arg)
{
DBG("*** fault: iova=%08lx, flags=%d", iova, flags);
return 0;
}

int etnaviv_iommu_map(struct etnaviv_iommu *iommu, uint32_t iova,
struct sg_table *sgt, unsigned len, int prot)
{
struct iommu_domain *domain = iommu->domain;
struct scatterlist *sg;
unsigned int da = iova;
unsigned int i, j;
int ret;
if (!domain || !sgt)
return -EINVAL;
for_each_sg(sgt->sgl, sg, sgt->nents, i) {
u32 pa = sg_phys(sg) - sg->offset;
size_t bytes = sg->length + sg->offset;
VERB("map[%d]: %08x %08x(%x)", i, iova, pa, bytes);
ret = iommu_map(domain, da, pa, bytes, prot);
if (ret)
goto fail;
da += bytes;
}
return 0;
fail:
da = iova;
for_each_sg(sgt->sgl, sg, i, j) {
size_t bytes = sg->length + sg->offset;
iommu_unmap(domain, da, bytes);
da += bytes;
}
return ret;
}
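etnaviv_iommu_map() above uses the standard map-with-rollback pattern: map each scatterlist entry in turn and, on failure, unmap the already-mapped prefix so the address space is left untouched. A self-contained model of that control flow (the `model_` names are hypothetical; a plain array stands in for the IOMMU):

```c
#include <stddef.h>

/* Map segments one by one; if segment 'fail_at' fails, unmap segments
 * 0..fail_at-1 and report the error. 'mapped' records how many bytes
 * are currently mapped at each slot. */
static int model_map_all(const size_t *bytes, size_t n, size_t fail_at,
			 size_t *mapped)
{
	size_t i, j;

	for (i = 0; i < n; i++) {
		if (i == fail_at)
			goto fail;	/* simulated iommu_map() failure */
		mapped[i] = bytes[i];
	}
	return 0;
fail:
	/* roll back only the prefix that was actually mapped */
	for (j = 0; j < i; j++)
		mapped[j] = 0;
	return -1;
}
```

The key invariant, as in the driver, is that after a failed call nothing remains mapped, so the caller never has to guess how far the loop got.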
int etnaviv_iommu_unmap(struct etnaviv_iommu *iommu, uint32_t iova,
struct sg_table *sgt, unsigned len)
{
struct iommu_domain *domain = iommu->domain;
struct scatterlist *sg;
unsigned int da = iova;
int i;
for_each_sg(sgt->sgl, sg, sgt->nents, i) {
size_t bytes = sg->length + sg->offset;
size_t unmapped;
unmapped = iommu_unmap(domain, da, bytes);
if (unmapped < bytes)
return unmapped;
VERB("unmap[%d]: %08x(%x)", i, iova, bytes);
BUG_ON(!PAGE_ALIGNED(bytes));
da += bytes;
}
return 0;
}

void etnaviv_iommu_destroy(struct etnaviv_iommu *mmu)
{
iommu_domain_free(mmu->domain);
kfree(mmu);
}

struct etnaviv_iommu *etnaviv_iommu_new(struct drm_device *dev, struct iommu_domain *domain)
{
struct etnaviv_iommu *mmu;
mmu = kzalloc(sizeof(*mmu), GFP_KERNEL);
if (!mmu)
return ERR_PTR(-ENOMEM);
mmu->domain = domain;
mmu->dev = dev;
iommu_set_fault_handler(domain, etnaviv_fault_handler, dev);
return mmu;
}

diff --git a/drivers/staging/etnaviv/etnaviv_mmu.h b/drivers/staging/etnaviv/etnaviv_mmu.h
new file mode 100644
index 000000000000..02e7adcc96d7
--- /dev/null
+++ b/drivers/staging/etnaviv/etnaviv_mmu.h
@@ -0,0 +1,37 @@
/*
 * Copyright (C) 2013 Red Hat
 * Author: Rob Clark <robdclark@gmail.com>
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 as published by
 * the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 * more details.
 *
 * You should have received a copy of the GNU General Public License along with
 * this program.  If not, see <http://www.gnu.org/licenses/>.
 */

#ifndef __ETNAVIV_MMU_H__
#define __ETNAVIV_MMU_H__

#include <linux/iommu.h>

struct etnaviv_iommu {
struct drm_device *dev;
struct iommu_domain *domain;
};

int etnaviv_iommu_attach(struct etnaviv_iommu *iommu, const char **names, int cnt);
int etnaviv_iommu_map(struct etnaviv_iommu *iommu, uint32_t iova, struct sg_table *sgt,
unsigned len, int prot);
int etnaviv_iommu_unmap(struct etnaviv_iommu *iommu, uint32_t iova, struct sg_table *sgt,
unsigned len);
void etnaviv_iommu_destroy(struct etnaviv_iommu *iommu);
struct etnaviv_iommu *etnaviv_iommu_new(struct drm_device *dev, struct iommu_domain *domain);

#endif /* __ETNAVIV_MMU_H__ */

diff --git a/drivers/staging/etnaviv/state.xml.h b/drivers/staging/etnaviv/state.xml.h
new file mode 100644
index 000000000000..e7b36df1e4e3
--- /dev/null
+++ b/drivers/staging/etnaviv/state.xml.h
@@ -0,0 +1,348 @@
#ifndef STATE_XML
#define STATE_XML
/* Autogenerated file, DO NOT EDIT manually!

This file was generated by the rules-ng-ng headergen tool in this git repository:
http://0x04.net/cgit/index.cgi/rules-ng-ng
git clone git://0x04.net/rules-ng-ng

The rules-ng-ng source files this header was generated from are:
- /home/orion/projects/etna_viv/rnndb/state.xml ( 18526 bytes, from 2013-09-11 16:52:32)
- /home/orion/projects/etna_viv/rnndb/common.xml ( 18379 bytes, from 2014-01-27 15:58:05)
- /home/orion/projects/etna_viv/rnndb/state_hi.xml ( 22236 bytes, from 2014-01-27 15:56:46)
- /home/orion/projects/etna_viv/rnndb/state_2d.xml ( 51191 bytes, from 2013-10-04 06:36:55)
- /home/orion/projects/etna_viv/rnndb/state_3d.xml ( 54570 bytes, from 2013-10-12 15:25:03)
- /home/orion/projects/etna_viv/rnndb/state_vg.xml ( 5942 bytes, from 2013-09-01 10:53:22)

Copyright (C) 2013
*/
+#define VARYING_COMPONENT_USE_UNUSED 0x00000000 +#define VARYING_COMPONENT_USE_USED 0x00000001 +#define VARYING_COMPONENT_USE_POINTCOORD_X 0x00000002 +#define VARYING_COMPONENT_USE_POINTCOORD_Y 0x00000003 +#define FE_VERTEX_STREAM_CONTROL_VERTEX_STRIDE__MASK 0x000000ff +#define FE_VERTEX_STREAM_CONTROL_VERTEX_STRIDE__SHIFT 0 +#define FE_VERTEX_STREAM_CONTROL_VERTEX_STRIDE(x) (((x) << FE_VERTEX_STREAM_CONTROL_VERTEX_STRIDE__SHIFT) & FE_VERTEX_STREAM_CONTROL_VERTEX_STRIDE__MASK) +#define VIVS_FE 0x00000000
+#define VIVS_FE_VERTEX_ELEMENT_CONFIG(i0) (0x00000600 + 0x4*(i0)) +#define VIVS_FE_VERTEX_ELEMENT_CONFIG__ESIZE 0x00000004 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG__LEN 0x00000010 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE__MASK 0x0000000f +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE__SHIFT 0 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_BYTE 0x00000000 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_UNSIGNED_BYTE 0x00000001 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_SHORT 0x00000002 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_UNSIGNED_SHORT 0x00000003 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_INT 0x00000004 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_UNSIGNED_INT 0x00000005 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_FLOAT 0x00000008 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_HALF_FLOAT 0x00000009 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_FIXED 0x0000000b +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_INT_10_10_10_2 0x0000000c +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_TYPE_UNSIGNED_INT_10_10_10_2 0x0000000d +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_ENDIAN__MASK 0x00000030 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_ENDIAN__SHIFT 4 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_ENDIAN(x) (((x) << VIVS_FE_VERTEX_ELEMENT_CONFIG_ENDIAN__SHIFT) & VIVS_FE_VERTEX_ELEMENT_CONFIG_ENDIAN__MASK) +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_NONCONSECUTIVE 0x00000080 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_STREAM__MASK 0x00000700 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_STREAM__SHIFT 8 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_STREAM(x) (((x) << VIVS_FE_VERTEX_ELEMENT_CONFIG_STREAM__SHIFT) & VIVS_FE_VERTEX_ELEMENT_CONFIG_STREAM__MASK) +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_NUM__MASK 0x00003000 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_NUM__SHIFT 12 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_NUM(x) (((x) << VIVS_FE_VERTEX_ELEMENT_CONFIG_NUM__SHIFT) & VIVS_FE_VERTEX_ELEMENT_CONFIG_NUM__MASK) +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_NORMALIZE__MASK 0x0000c000 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_NORMALIZE__SHIFT 14 
+#define VIVS_FE_VERTEX_ELEMENT_CONFIG_NORMALIZE_OFF 0x00000000 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_NORMALIZE_ON 0x00008000 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_START__MASK 0x00ff0000 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_START__SHIFT 16 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_START(x) (((x) << VIVS_FE_VERTEX_ELEMENT_CONFIG_START__SHIFT) & VIVS_FE_VERTEX_ELEMENT_CONFIG_START__MASK) +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_END__MASK 0xff000000 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_END__SHIFT 24 +#define VIVS_FE_VERTEX_ELEMENT_CONFIG_END(x) (((x) << VIVS_FE_VERTEX_ELEMENT_CONFIG_END__SHIFT) & VIVS_FE_VERTEX_ELEMENT_CONFIG_END__MASK)
+#define VIVS_FE_CMD_STREAM_BASE_ADDR 0x00000640
+#define VIVS_FE_INDEX_STREAM_BASE_ADDR 0x00000644
+#define VIVS_FE_INDEX_STREAM_CONTROL 0x00000648 +#define VIVS_FE_INDEX_STREAM_CONTROL_TYPE__MASK 0x00000003 +#define VIVS_FE_INDEX_STREAM_CONTROL_TYPE__SHIFT 0 +#define VIVS_FE_INDEX_STREAM_CONTROL_TYPE_UNSIGNED_CHAR 0x00000000 +#define VIVS_FE_INDEX_STREAM_CONTROL_TYPE_UNSIGNED_SHORT 0x00000001 +#define VIVS_FE_INDEX_STREAM_CONTROL_TYPE_UNSIGNED_INT 0x00000002
+#define VIVS_FE_VERTEX_STREAM_BASE_ADDR 0x0000064c
+#define VIVS_FE_VERTEX_STREAM_CONTROL 0x00000650
+#define VIVS_FE_COMMAND_ADDRESS 0x00000654
+#define VIVS_FE_COMMAND_CONTROL 0x00000658 +#define VIVS_FE_COMMAND_CONTROL_PREFETCH__MASK 0x0000ffff +#define VIVS_FE_COMMAND_CONTROL_PREFETCH__SHIFT 0 +#define VIVS_FE_COMMAND_CONTROL_PREFETCH(x) (((x) << VIVS_FE_COMMAND_CONTROL_PREFETCH__SHIFT) & VIVS_FE_COMMAND_CONTROL_PREFETCH__MASK) +#define VIVS_FE_COMMAND_CONTROL_ENABLE 0x00010000
+#define VIVS_FE_DMA_STATUS 0x0000065c
+#define VIVS_FE_DMA_DEBUG_STATE 0x00000660
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE__MASK 0x0000001f
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE__SHIFT 0
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_IDLE 0x00000000
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_DEC 0x00000001
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_ADR0 0x00000002
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_LOAD0 0x00000003
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_ADR1 0x00000004
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_LOAD1 0x00000005
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_3DADR 0x00000006
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_3DCMD 0x00000007
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_3DCNTL 0x00000008
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_3DIDXCNTL 0x00000009
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_INITREQDMA 0x0000000a
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_DRAWIDX 0x0000000b
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_DRAW 0x0000000c
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_2DRECT0 0x0000000d
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_2DRECT1 0x0000000e
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_2DDATA0 0x0000000f
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_2DDATA1 0x00000010
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_WAITFIFO 0x00000011
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_WAIT 0x00000012
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_LINK 0x00000013
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_END 0x00000014
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_STATE_STALL 0x00000015
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_DMA_STATE__MASK 0x00000300
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_DMA_STATE__SHIFT 8
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_DMA_STATE_IDLE 0x00000000
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_DMA_STATE_START 0x00000100
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_DMA_STATE_REQ 0x00000200
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_DMA_STATE_END 0x00000300
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_FETCH_STATE__MASK 0x00000c00
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_FETCH_STATE__SHIFT 10
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_FETCH_STATE_IDLE 0x00000000
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_FETCH_STATE_RAMVALID 0x00000400
+#define VIVS_FE_DMA_DEBUG_STATE_CMD_FETCH_STATE_VALID 0x00000800
+#define VIVS_FE_DMA_DEBUG_STATE_REQ_DMA_STATE__MASK 0x00003000
+#define VIVS_FE_DMA_DEBUG_STATE_REQ_DMA_STATE__SHIFT 12
+#define VIVS_FE_DMA_DEBUG_STATE_REQ_DMA_STATE_IDLE 0x00000000
+#define VIVS_FE_DMA_DEBUG_STATE_REQ_DMA_STATE_WAITIDX 0x00001000
+#define VIVS_FE_DMA_DEBUG_STATE_REQ_DMA_STATE_CAL 0x00002000
+#define VIVS_FE_DMA_DEBUG_STATE_CAL_STATE__MASK 0x0000c000
+#define VIVS_FE_DMA_DEBUG_STATE_CAL_STATE__SHIFT 14
+#define VIVS_FE_DMA_DEBUG_STATE_CAL_STATE_IDLE 0x00000000
+#define VIVS_FE_DMA_DEBUG_STATE_CAL_STATE_LDADR 0x00004000
+#define VIVS_FE_DMA_DEBUG_STATE_CAL_STATE_IDXCALC 0x00008000
+#define VIVS_FE_DMA_DEBUG_STATE_VE_REQ_STATE__MASK 0x00030000
+#define VIVS_FE_DMA_DEBUG_STATE_VE_REQ_STATE__SHIFT 16
+#define VIVS_FE_DMA_DEBUG_STATE_VE_REQ_STATE_IDLE 0x00000000
+#define VIVS_FE_DMA_DEBUG_STATE_VE_REQ_STATE_CKCACHE 0x00010000
+#define VIVS_FE_DMA_DEBUG_STATE_VE_REQ_STATE_MISS 0x00020000
+#define VIVS_FE_DMA_ADDRESS 0x00000664
+#define VIVS_FE_DMA_LOW 0x00000668
+#define VIVS_FE_DMA_HIGH 0x0000066c
+#define VIVS_FE_AUTO_FLUSH 0x00000670
+#define VIVS_FE_UNK00678 0x00000678
+#define VIVS_FE_UNK0067C 0x0000067c
+#define VIVS_FE_VERTEX_STREAMS(i0) (0x00000000 + 0x4*(i0))
+#define VIVS_FE_VERTEX_STREAMS__ESIZE 0x00000004
+#define VIVS_FE_VERTEX_STREAMS__LEN 0x00000008
+#define VIVS_FE_VERTEX_STREAMS_BASE_ADDR(i0) (0x00000680 + 0x4*(i0))
+#define VIVS_FE_VERTEX_STREAMS_CONTROL(i0) (0x000006a0 + 0x4*(i0))
+#define VIVS_FE_UNK00700(i0) (0x00000700 + 0x4*(i0))
+#define VIVS_FE_UNK00700__ESIZE 0x00000004
+#define VIVS_FE_UNK00700__LEN 0x00000010
+#define VIVS_FE_UNK00740(i0) (0x00000740 + 0x4*(i0))
+#define VIVS_FE_UNK00740__ESIZE 0x00000004
+#define VIVS_FE_UNK00740__LEN 0x00000010
+#define VIVS_FE_UNK00780(i0) (0x00000780 + 0x4*(i0))
+#define VIVS_FE_UNK00780__ESIZE 0x00000004
+#define VIVS_FE_UNK00780__LEN 0x00000010
+#define VIVS_GL 0x00000000
+#define VIVS_GL_PIPE_SELECT 0x00003800
+#define VIVS_GL_PIPE_SELECT_PIPE__MASK 0x00000001
+#define VIVS_GL_PIPE_SELECT_PIPE__SHIFT 0
+#define VIVS_GL_PIPE_SELECT_PIPE(x) (((x) << VIVS_GL_PIPE_SELECT_PIPE__SHIFT) & VIVS_GL_PIPE_SELECT_PIPE__MASK)
+#define VIVS_GL_EVENT 0x00003804
+#define VIVS_GL_EVENT_EVENT_ID__MASK 0x0000001f
+#define VIVS_GL_EVENT_EVENT_ID__SHIFT 0
+#define VIVS_GL_EVENT_EVENT_ID(x) (((x) << VIVS_GL_EVENT_EVENT_ID__SHIFT) & VIVS_GL_EVENT_EVENT_ID__MASK)
+#define VIVS_GL_EVENT_FROM_FE 0x00000020
+#define VIVS_GL_EVENT_FROM_PE 0x00000040
+#define VIVS_GL_EVENT_SOURCE__MASK 0x00001f00
+#define VIVS_GL_EVENT_SOURCE__SHIFT 8
+#define VIVS_GL_EVENT_SOURCE(x) (((x) << VIVS_GL_EVENT_SOURCE__SHIFT) & VIVS_GL_EVENT_SOURCE__MASK)
+#define VIVS_GL_SEMAPHORE_TOKEN 0x00003808
+#define VIVS_GL_SEMAPHORE_TOKEN_FROM__MASK 0x0000001f
+#define VIVS_GL_SEMAPHORE_TOKEN_FROM__SHIFT 0
+#define VIVS_GL_SEMAPHORE_TOKEN_FROM(x) (((x) << VIVS_GL_SEMAPHORE_TOKEN_FROM__SHIFT) & VIVS_GL_SEMAPHORE_TOKEN_FROM__MASK)
+#define VIVS_GL_SEMAPHORE_TOKEN_TO__MASK 0x00001f00
+#define VIVS_GL_SEMAPHORE_TOKEN_TO__SHIFT 8
+#define VIVS_GL_SEMAPHORE_TOKEN_TO(x) (((x) << VIVS_GL_SEMAPHORE_TOKEN_TO__SHIFT) & VIVS_GL_SEMAPHORE_TOKEN_TO__MASK)
+#define VIVS_GL_FLUSH_CACHE 0x0000380c
+#define VIVS_GL_FLUSH_CACHE_DEPTH 0x00000001
+#define VIVS_GL_FLUSH_CACHE_COLOR 0x00000002
+#define VIVS_GL_FLUSH_CACHE_TEXTURE 0x00000004
+#define VIVS_GL_FLUSH_CACHE_PE2D 0x00000008
+#define VIVS_GL_FLUSH_CACHE_TEXTUREVS 0x00000010
+#define VIVS_GL_FLUSH_CACHE_SHADER_L1 0x00000020
+#define VIVS_GL_FLUSH_CACHE_SHADER_L2 0x00000040
+#define VIVS_GL_FLUSH_MMU 0x00003810
+#define VIVS_GL_FLUSH_MMU_FLUSH_FEMMU 0x00000001
+#define VIVS_GL_FLUSH_MMU_FLUSH_PEMMU 0x00000002
+#define VIVS_GL_VERTEX_ELEMENT_CONFIG 0x00003814
+#define VIVS_GL_MULTI_SAMPLE_CONFIG 0x00003818
+#define VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_SAMPLES__MASK 0x00000003
+#define VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_SAMPLES__SHIFT 0
+#define VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_SAMPLES_NONE 0x00000000
+#define VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_SAMPLES_2X 0x00000001
+#define VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_SAMPLES_4X 0x00000002
+#define VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_SAMPLES_MASK 0x00000008
+#define VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_ENABLES__MASK 0x000000f0
+#define VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_ENABLES__SHIFT 4
+#define VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_ENABLES(x) (((x) << VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_ENABLES__SHIFT) & VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_ENABLES__MASK)
+#define VIVS_GL_MULTI_SAMPLE_CONFIG_MSAA_ENABLES_MASK 0x00000100
+#define VIVS_GL_MULTI_SAMPLE_CONFIG_UNK12__MASK 0x00007000
+#define VIVS_GL_MULTI_SAMPLE_CONFIG_UNK12__SHIFT 12
+#define VIVS_GL_MULTI_SAMPLE_CONFIG_UNK12(x) (((x) << VIVS_GL_MULTI_SAMPLE_CONFIG_UNK12__SHIFT) & VIVS_GL_MULTI_SAMPLE_CONFIG_UNK12__MASK)
+#define VIVS_GL_MULTI_SAMPLE_CONFIG_UNK12_MASK 0x00008000
+#define VIVS_GL_MULTI_SAMPLE_CONFIG_UNK16__MASK 0x00030000
+#define VIVS_GL_MULTI_SAMPLE_CONFIG_UNK16__SHIFT 16
+#define VIVS_GL_MULTI_SAMPLE_CONFIG_UNK16(x) (((x) << VIVS_GL_MULTI_SAMPLE_CONFIG_UNK16__SHIFT) & VIVS_GL_MULTI_SAMPLE_CONFIG_UNK16__MASK)
+#define VIVS_GL_MULTI_SAMPLE_CONFIG_UNK16_MASK 0x00080000
+#define VIVS_GL_VARYING_TOTAL_COMPONENTS 0x0000381c
+#define VIVS_GL_VARYING_TOTAL_COMPONENTS_NUM__MASK 0x000000ff
+#define VIVS_GL_VARYING_TOTAL_COMPONENTS_NUM__SHIFT 0
+#define VIVS_GL_VARYING_TOTAL_COMPONENTS_NUM(x) (((x) << VIVS_GL_VARYING_TOTAL_COMPONENTS_NUM__SHIFT) & VIVS_GL_VARYING_TOTAL_COMPONENTS_NUM__MASK)
+#define VIVS_GL_VARYING_NUM_COMPONENTS 0x00003820
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR0__MASK 0x00000007
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR0__SHIFT 0
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR0(x) (((x) << VIVS_GL_VARYING_NUM_COMPONENTS_VAR0__SHIFT) & VIVS_GL_VARYING_NUM_COMPONENTS_VAR0__MASK)
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR1__MASK 0x00000070
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR1__SHIFT 4
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR1(x) (((x) << VIVS_GL_VARYING_NUM_COMPONENTS_VAR1__SHIFT) & VIVS_GL_VARYING_NUM_COMPONENTS_VAR1__MASK)
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR2__MASK 0x00000700
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR2__SHIFT 8
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR2(x) (((x) << VIVS_GL_VARYING_NUM_COMPONENTS_VAR2__SHIFT) & VIVS_GL_VARYING_NUM_COMPONENTS_VAR2__MASK)
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR3__MASK 0x00007000
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR3__SHIFT 12
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR3(x) (((x) << VIVS_GL_VARYING_NUM_COMPONENTS_VAR3__SHIFT) & VIVS_GL_VARYING_NUM_COMPONENTS_VAR3__MASK)
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR4__MASK 0x00070000
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR4__SHIFT 16
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR4(x) (((x) << VIVS_GL_VARYING_NUM_COMPONENTS_VAR4__SHIFT) & VIVS_GL_VARYING_NUM_COMPONENTS_VAR4__MASK)
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR5__MASK 0x00700000
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR5__SHIFT 20
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR5(x) (((x) << VIVS_GL_VARYING_NUM_COMPONENTS_VAR5__SHIFT) & VIVS_GL_VARYING_NUM_COMPONENTS_VAR5__MASK)
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR6__MASK 0x07000000
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR6__SHIFT 24
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR6(x) (((x) << VIVS_GL_VARYING_NUM_COMPONENTS_VAR6__SHIFT) & VIVS_GL_VARYING_NUM_COMPONENTS_VAR6__MASK)
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR7__MASK 0x70000000
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR7__SHIFT 28
+#define VIVS_GL_VARYING_NUM_COMPONENTS_VAR7(x) (((x) << VIVS_GL_VARYING_NUM_COMPONENTS_VAR7__SHIFT) & VIVS_GL_VARYING_NUM_COMPONENTS_VAR7__MASK)
+#define VIVS_GL_VARYING_COMPONENT_USE(i0) (0x00003828 + 0x4*(i0))
+#define VIVS_GL_VARYING_COMPONENT_USE__ESIZE 0x00000004
+#define VIVS_GL_VARYING_COMPONENT_USE__LEN 0x00000002
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP0__MASK 0x00000003
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP0__SHIFT 0
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP0(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP0__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP0__MASK)
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP1__MASK 0x0000000c
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP1__SHIFT 2
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP1(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP1__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP1__MASK)
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP2__MASK 0x00000030
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP2__SHIFT 4
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP2(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP2__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP2__MASK)
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP3__MASK 0x000000c0
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP3__SHIFT 6
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP3(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP3__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP3__MASK)
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP4__MASK 0x00000300
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP4__SHIFT 8
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP4(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP4__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP4__MASK)
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP5__MASK 0x00000c00
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP5__SHIFT 10
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP5(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP5__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP5__MASK)
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP6__MASK 0x00003000
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP6__SHIFT 12
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP6(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP6__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP6__MASK)
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP7__MASK 0x0000c000
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP7__SHIFT 14
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP7(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP7__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP7__MASK)
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP8__MASK 0x00030000
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP8__SHIFT 16
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP8(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP8__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP8__MASK)
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP9__MASK 0x000c0000
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP9__SHIFT 18
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP9(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP9__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP9__MASK)
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP10__MASK 0x00300000
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP10__SHIFT 20
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP10(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP10__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP10__MASK)
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP11__MASK 0x00c00000
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP11__SHIFT 22
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP11(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP11__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP11__MASK)
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP12__MASK 0x03000000
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP12__SHIFT 24
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP12(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP12__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP12__MASK)
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP13__MASK 0x0c000000
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP13__SHIFT 26
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP13(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP13__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP13__MASK)
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP14__MASK 0x30000000
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP14__SHIFT 28
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP14(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP14__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP14__MASK)
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP15__MASK 0xc0000000
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP15__SHIFT 30
+#define VIVS_GL_VARYING_COMPONENT_USE_COMP15(x) (((x) << VIVS_GL_VARYING_COMPONENT_USE_COMP15__SHIFT) & VIVS_GL_VARYING_COMPONENT_USE_COMP15__MASK)
+#define VIVS_GL_UNK03834 0x00003834
+#define VIVS_GL_UNK03838 0x00003838
+#define VIVS_GL_API_MODE 0x0000384c
+#define VIVS_GL_API_MODE_OPENGL 0x00000000
+#define VIVS_GL_API_MODE_OPENVG 0x00000001
+#define VIVS_GL_API_MODE_OPENCL 0x00000002
+#define VIVS_GL_CONTEXT_POINTER 0x00003850
+#define VIVS_GL_UNK03A00 0x00003a00
+#define VIVS_GL_STALL_TOKEN 0x00003c00
+#define VIVS_GL_STALL_TOKEN_FROM__MASK 0x0000001f
+#define VIVS_GL_STALL_TOKEN_FROM__SHIFT 0
+#define VIVS_GL_STALL_TOKEN_FROM(x) (((x) << VIVS_GL_STALL_TOKEN_FROM__SHIFT) & VIVS_GL_STALL_TOKEN_FROM__MASK)
+#define VIVS_GL_STALL_TOKEN_TO__MASK 0x00001f00
+#define VIVS_GL_STALL_TOKEN_TO__SHIFT 8
+#define VIVS_GL_STALL_TOKEN_TO(x) (((x) << VIVS_GL_STALL_TOKEN_TO__SHIFT) & VIVS_GL_STALL_TOKEN_TO__MASK)
+#define VIVS_GL_STALL_TOKEN_FLIP0 0x40000000
+#define VIVS_GL_STALL_TOKEN_FLIP1 0x80000000
+#define VIVS_DUMMY 0x00000000
+#define VIVS_DUMMY_DUMMY 0x0003fffc
+#endif /* STATE_XML */
diff --git a/drivers/staging/etnaviv/state_hi.xml.h b/drivers/staging/etnaviv/state_hi.xml.h
new file mode 100644
index 000000000000..9799d7473e5e
--- /dev/null
+++ b/drivers/staging/etnaviv/state_hi.xml.h
@@ -0,0 +1,405 @@
+#ifndef STATE_HI_XML
+#define STATE_HI_XML
+/* Autogenerated file, DO NOT EDIT manually!
+This file was generated by the rules-ng-ng headergen tool in this git repository:
+http://0x04.net/cgit/index.cgi/rules-ng-ng
+git clone git://0x04.net/rules-ng-ng
+The rules-ng-ng source files this header was generated from are:
+- /home/christian/projects/etna_viv/rnndb/state.xml ( 18526 bytes, from 2014-09-06 05:57:57)
+- /home/christian/projects/etna_viv/rnndb/common.xml ( 18379 bytes, from 2014-09-06 05:57:57)
+- /home/christian/projects/etna_viv/rnndb/state_hi.xml ( 23176 bytes, from 2014-09-06 06:07:47)
+- /home/christian/projects/etna_viv/rnndb/state_2d.xml ( 51191 bytes, from 2014-09-06 05:57:57)
+- /home/christian/projects/etna_viv/rnndb/state_3d.xml ( 54570 bytes, from 2014-09-06 05:57:57)
+- /home/christian/projects/etna_viv/rnndb/state_vg.xml ( 5942 bytes, from 2014-09-06 05:57:57)
+Copyright (C) 2014
+*/
+#define MMU_EXCEPTION_SLAVE_NOT_PRESENT 0x00000001
+#define MMU_EXCEPTION_PAGE_NOT_PRESENT 0x00000002
+#define MMU_EXCEPTION_WRITE_VIOLATION 0x00000003
+#define VIVS_HI 0x00000000
+#define VIVS_HI_CLOCK_CONTROL 0x00000000
+#define VIVS_HI_CLOCK_CONTROL_CLK3D_DIS 0x00000001
+#define VIVS_HI_CLOCK_CONTROL_CLK2D_DIS 0x00000002
+#define VIVS_HI_CLOCK_CONTROL_FSCALE_VAL__MASK 0x000001fc
+#define VIVS_HI_CLOCK_CONTROL_FSCALE_VAL__SHIFT 2
+#define VIVS_HI_CLOCK_CONTROL_FSCALE_VAL(x) (((x) << VIVS_HI_CLOCK_CONTROL_FSCALE_VAL__SHIFT) & VIVS_HI_CLOCK_CONTROL_FSCALE_VAL__MASK)
+#define VIVS_HI_CLOCK_CONTROL_FSCALE_CMD_LOAD 0x00000200
+#define VIVS_HI_CLOCK_CONTROL_DISABLE_RAM_CLK_GATING 0x00000400
+#define VIVS_HI_CLOCK_CONTROL_DISABLE_DEBUG_REGISTERS 0x00000800
+#define VIVS_HI_CLOCK_CONTROL_SOFT_RESET 0x00001000
+#define VIVS_HI_CLOCK_CONTROL_IDLE_3D 0x00010000
+#define VIVS_HI_CLOCK_CONTROL_IDLE_2D 0x00020000
+#define VIVS_HI_CLOCK_CONTROL_IDLE_VG 0x00040000
+#define VIVS_HI_CLOCK_CONTROL_ISOLATE_GPU 0x00080000
+#define VIVS_HI_CLOCK_CONTROL_DEBUG_PIXEL_PIPE__MASK 0x00f00000
+#define VIVS_HI_CLOCK_CONTROL_DEBUG_PIXEL_PIPE__SHIFT 20
+#define VIVS_HI_CLOCK_CONTROL_DEBUG_PIXEL_PIPE(x) (((x) << VIVS_HI_CLOCK_CONTROL_DEBUG_PIXEL_PIPE__SHIFT) & VIVS_HI_CLOCK_CONTROL_DEBUG_PIXEL_PIPE__MASK)
+#define VIVS_HI_IDLE_STATE 0x00000004
+#define VIVS_HI_IDLE_STATE_FE 0x00000001
+#define VIVS_HI_IDLE_STATE_DE 0x00000002
+#define VIVS_HI_IDLE_STATE_PE 0x00000004
+#define VIVS_HI_IDLE_STATE_SH 0x00000008
+#define VIVS_HI_IDLE_STATE_PA 0x00000010
+#define VIVS_HI_IDLE_STATE_SE 0x00000020
+#define VIVS_HI_IDLE_STATE_RA 0x00000040
+#define VIVS_HI_IDLE_STATE_TX 0x00000080
+#define VIVS_HI_IDLE_STATE_VG 0x00000100
+#define VIVS_HI_IDLE_STATE_IM 0x00000200
+#define VIVS_HI_IDLE_STATE_FP 0x00000400
+#define VIVS_HI_IDLE_STATE_TS 0x00000800
+#define VIVS_HI_IDLE_STATE_AXI_LP 0x80000000
+#define VIVS_HI_AXI_CONFIG 0x00000008
+#define VIVS_HI_AXI_CONFIG_AWID__MASK 0x0000000f
+#define VIVS_HI_AXI_CONFIG_AWID__SHIFT 0
+#define VIVS_HI_AXI_CONFIG_AWID(x) (((x) << VIVS_HI_AXI_CONFIG_AWID__SHIFT) & VIVS_HI_AXI_CONFIG_AWID__MASK)
+#define VIVS_HI_AXI_CONFIG_ARID__MASK 0x000000f0
+#define VIVS_HI_AXI_CONFIG_ARID__SHIFT 4
+#define VIVS_HI_AXI_CONFIG_ARID(x) (((x) << VIVS_HI_AXI_CONFIG_ARID__SHIFT) & VIVS_HI_AXI_CONFIG_ARID__MASK)
+#define VIVS_HI_AXI_CONFIG_AWCACHE__MASK 0x00000f00
+#define VIVS_HI_AXI_CONFIG_AWCACHE__SHIFT 8
+#define VIVS_HI_AXI_CONFIG_AWCACHE(x) (((x) << VIVS_HI_AXI_CONFIG_AWCACHE__SHIFT) & VIVS_HI_AXI_CONFIG_AWCACHE__MASK)
+#define VIVS_HI_AXI_CONFIG_ARCACHE__MASK 0x0000f000
+#define VIVS_HI_AXI_CONFIG_ARCACHE__SHIFT 12
+#define VIVS_HI_AXI_CONFIG_ARCACHE(x) (((x) << VIVS_HI_AXI_CONFIG_ARCACHE__SHIFT) & VIVS_HI_AXI_CONFIG_ARCACHE__MASK)
+#define VIVS_HI_AXI_STATUS 0x0000000c
+#define VIVS_HI_AXI_STATUS_WR_ERR_ID__MASK 0x0000000f
+#define VIVS_HI_AXI_STATUS_WR_ERR_ID__SHIFT 0
+#define VIVS_HI_AXI_STATUS_WR_ERR_ID(x) (((x) << VIVS_HI_AXI_STATUS_WR_ERR_ID__SHIFT) & VIVS_HI_AXI_STATUS_WR_ERR_ID__MASK)
+#define VIVS_HI_AXI_STATUS_RD_ERR_ID__MASK 0x000000f0
+#define VIVS_HI_AXI_STATUS_RD_ERR_ID__SHIFT 4
+#define VIVS_HI_AXI_STATUS_RD_ERR_ID(x) (((x) << VIVS_HI_AXI_STATUS_RD_ERR_ID__SHIFT) & VIVS_HI_AXI_STATUS_RD_ERR_ID__MASK)
+#define VIVS_HI_AXI_STATUS_DET_WR_ERR 0x00000100
+#define VIVS_HI_AXI_STATUS_DET_RD_ERR 0x00000200
+#define VIVS_HI_INTR_ACKNOWLEDGE 0x00000010
+#define VIVS_HI_INTR_ACKNOWLEDGE_INTR_VEC__MASK 0x7fffffff
+#define VIVS_HI_INTR_ACKNOWLEDGE_INTR_VEC__SHIFT 0
+#define VIVS_HI_INTR_ACKNOWLEDGE_INTR_VEC(x) (((x) << VIVS_HI_INTR_ACKNOWLEDGE_INTR_VEC__SHIFT) & VIVS_HI_INTR_ACKNOWLEDGE_INTR_VEC__MASK)
+#define VIVS_HI_INTR_ACKNOWLEDGE_AXI_BUS_ERROR 0x80000000
+#define VIVS_HI_INTR_ENBL 0x00000014
+#define VIVS_HI_INTR_ENBL_INTR_ENBL_VEC__MASK 0xffffffff
+#define VIVS_HI_INTR_ENBL_INTR_ENBL_VEC__SHIFT 0
+#define VIVS_HI_INTR_ENBL_INTR_ENBL_VEC(x) (((x) << VIVS_HI_INTR_ENBL_INTR_ENBL_VEC__SHIFT) & VIVS_HI_INTR_ENBL_INTR_ENBL_VEC__MASK)
+#define VIVS_HI_CHIP_IDENTITY 0x00000018
+#define VIVS_HI_CHIP_IDENTITY_FAMILY__MASK 0xff000000
+#define VIVS_HI_CHIP_IDENTITY_FAMILY__SHIFT 24
+#define VIVS_HI_CHIP_IDENTITY_FAMILY(x) (((x) << VIVS_HI_CHIP_IDENTITY_FAMILY__SHIFT) & VIVS_HI_CHIP_IDENTITY_FAMILY__MASK)
+#define VIVS_HI_CHIP_IDENTITY_PRODUCT__MASK 0x00ff0000
+#define VIVS_HI_CHIP_IDENTITY_PRODUCT__SHIFT 16
+#define VIVS_HI_CHIP_IDENTITY_PRODUCT(x) (((x) << VIVS_HI_CHIP_IDENTITY_PRODUCT__SHIFT) & VIVS_HI_CHIP_IDENTITY_PRODUCT__MASK)
+#define VIVS_HI_CHIP_IDENTITY_REVISION__MASK 0x0000f000
+#define VIVS_HI_CHIP_IDENTITY_REVISION__SHIFT 12
+#define VIVS_HI_CHIP_IDENTITY_REVISION(x) (((x) << VIVS_HI_CHIP_IDENTITY_REVISION__SHIFT) & VIVS_HI_CHIP_IDENTITY_REVISION__MASK)
+#define VIVS_HI_CHIP_FEATURE 0x0000001c
+#define VIVS_HI_CHIP_MODEL 0x00000020
+#define VIVS_HI_CHIP_REV 0x00000024
+#define VIVS_HI_CHIP_DATE 0x00000028
+#define VIVS_HI_CHIP_TIME 0x0000002c
+#define VIVS_HI_CHIP_MINOR_FEATURE_0 0x00000034
+#define VIVS_HI_CACHE_CONTROL 0x00000038
+#define VIVS_HI_MEMORY_COUNTER_RESET 0x0000003c
+#define VIVS_HI_PROFILE_READ_BYTES8 0x00000040
+#define VIVS_HI_PROFILE_WRITE_BYTES8 0x00000044
+#define VIVS_HI_CHIP_SPECS 0x00000048
+#define VIVS_HI_CHIP_SPECS_STREAM_COUNT__MASK 0x0000000f
+#define VIVS_HI_CHIP_SPECS_STREAM_COUNT__SHIFT 0
+#define VIVS_HI_CHIP_SPECS_STREAM_COUNT(x) (((x) << VIVS_HI_CHIP_SPECS_STREAM_COUNT__SHIFT) & VIVS_HI_CHIP_SPECS_STREAM_COUNT__MASK)
+#define VIVS_HI_CHIP_SPECS_REGISTER_MAX__MASK 0x000000f0
+#define VIVS_HI_CHIP_SPECS_REGISTER_MAX__SHIFT 4
+#define VIVS_HI_CHIP_SPECS_REGISTER_MAX(x) (((x) << VIVS_HI_CHIP_SPECS_REGISTER_MAX__SHIFT) & VIVS_HI_CHIP_SPECS_REGISTER_MAX__MASK)
+#define VIVS_HI_CHIP_SPECS_THREAD_COUNT__MASK 0x00000f00
+#define VIVS_HI_CHIP_SPECS_THREAD_COUNT__SHIFT 8
+#define VIVS_HI_CHIP_SPECS_THREAD_COUNT(x) (((x) << VIVS_HI_CHIP_SPECS_THREAD_COUNT__SHIFT) & VIVS_HI_CHIP_SPECS_THREAD_COUNT__MASK)
+#define VIVS_HI_CHIP_SPECS_VERTEX_CACHE_SIZE__MASK 0x0001f000
+#define VIVS_HI_CHIP_SPECS_VERTEX_CACHE_SIZE__SHIFT 12
+#define VIVS_HI_CHIP_SPECS_VERTEX_CACHE_SIZE(x) (((x) << VIVS_HI_CHIP_SPECS_VERTEX_CACHE_SIZE__SHIFT) & VIVS_HI_CHIP_SPECS_VERTEX_CACHE_SIZE__MASK)
+#define VIVS_HI_CHIP_SPECS_SHADER_CORE_COUNT__MASK 0x01f00000
+#define VIVS_HI_CHIP_SPECS_SHADER_CORE_COUNT__SHIFT 20
+#define VIVS_HI_CHIP_SPECS_SHADER_CORE_COUNT(x) (((x) << VIVS_HI_CHIP_SPECS_SHADER_CORE_COUNT__SHIFT) & VIVS_HI_CHIP_SPECS_SHADER_CORE_COUNT__MASK)
+#define VIVS_HI_CHIP_SPECS_PIXEL_PIPES__MASK 0x0e000000
+#define VIVS_HI_CHIP_SPECS_PIXEL_PIPES__SHIFT 25
+#define VIVS_HI_CHIP_SPECS_PIXEL_PIPES(x) (((x) << VIVS_HI_CHIP_SPECS_PIXEL_PIPES__SHIFT) & VIVS_HI_CHIP_SPECS_PIXEL_PIPES__MASK)
+#define VIVS_HI_CHIP_SPECS_VERTEX_OUTPUT_BUFFER_SIZE__MASK 0xf0000000
+#define VIVS_HI_CHIP_SPECS_VERTEX_OUTPUT_BUFFER_SIZE__SHIFT 28
+#define VIVS_HI_CHIP_SPECS_VERTEX_OUTPUT_BUFFER_SIZE(x) (((x) << VIVS_HI_CHIP_SPECS_VERTEX_OUTPUT_BUFFER_SIZE__SHIFT) & VIVS_HI_CHIP_SPECS_VERTEX_OUTPUT_BUFFER_SIZE__MASK)
+#define VIVS_HI_PROFILE_WRITE_BURSTS 0x0000004c
+#define VIVS_HI_PROFILE_WRITE_REQUESTS 0x00000050
+#define VIVS_HI_PROFILE_READ_BURSTS 0x00000058
+#define VIVS_HI_PROFILE_READ_REQUESTS 0x0000005c
+#define VIVS_HI_PROFILE_READ_LASTS 0x00000060
+#define VIVS_HI_GP_OUT0 0x00000064
+#define VIVS_HI_GP_OUT1 0x00000068
+#define VIVS_HI_GP_OUT2 0x0000006c
+#define VIVS_HI_AXI_CONTROL 0x00000070
+#define VIVS_HI_AXI_CONTROL_WR_FULL_BURST_MODE 0x00000001
+#define VIVS_HI_CHIP_MINOR_FEATURE_1 0x00000074
+#define VIVS_HI_PROFILE_TOTAL_CYCLES 0x00000078
+#define VIVS_HI_PROFILE_IDLE_CYCLES 0x0000007c
+#define VIVS_HI_CHIP_SPECS_2 0x00000080
+#define VIVS_HI_CHIP_SPECS_2_BUFFER_SIZE__MASK 0x000000ff
+#define VIVS_HI_CHIP_SPECS_2_BUFFER_SIZE__SHIFT 0
+#define VIVS_HI_CHIP_SPECS_2_BUFFER_SIZE(x) (((x) << VIVS_HI_CHIP_SPECS_2_BUFFER_SIZE__SHIFT) & VIVS_HI_CHIP_SPECS_2_BUFFER_SIZE__MASK)
+#define VIVS_HI_CHIP_SPECS_2_INSTRUCTION_COUNT__MASK 0x0000ff00
+#define VIVS_HI_CHIP_SPECS_2_INSTRUCTION_COUNT__SHIFT 8
+#define VIVS_HI_CHIP_SPECS_2_INSTRUCTION_COUNT(x) (((x) << VIVS_HI_CHIP_SPECS_2_INSTRUCTION_COUNT__SHIFT) & VIVS_HI_CHIP_SPECS_2_INSTRUCTION_COUNT__MASK)
+#define VIVS_HI_CHIP_SPECS_2_NUM_CONSTANTS__MASK 0xffff0000
+#define VIVS_HI_CHIP_SPECS_2_NUM_CONSTANTS__SHIFT 16
+#define VIVS_HI_CHIP_SPECS_2_NUM_CONSTANTS(x) (((x) << VIVS_HI_CHIP_SPECS_2_NUM_CONSTANTS__SHIFT) & VIVS_HI_CHIP_SPECS_2_NUM_CONSTANTS__MASK)
+#define VIVS_HI_CHIP_MINOR_FEATURE_2 0x00000084
+#define VIVS_HI_CHIP_MINOR_FEATURE_3 0x00000088
+#define VIVS_HI_CHIP_MINOR_FEATURE_4 0x00000094
+#define VIVS_PM 0x00000000
+#define VIVS_PM_POWER_CONTROLS 0x00000100
+#define VIVS_PM_POWER_CONTROLS_ENABLE_MODULE_CLOCK_GATING 0x00000001
+#define VIVS_PM_POWER_CONTROLS_DISABLE_STALL_MODULE_CLOCK_GATING 0x00000002
+#define VIVS_PM_POWER_CONTROLS_DISABLE_STARVE_MODULE_CLOCK_GATING 0x00000004
+#define VIVS_PM_POWER_CONTROLS_TURN_ON_COUNTER__MASK 0x000000f0
+#define VIVS_PM_POWER_CONTROLS_TURN_ON_COUNTER__SHIFT 4
+#define VIVS_PM_POWER_CONTROLS_TURN_ON_COUNTER(x) (((x) << VIVS_PM_POWER_CONTROLS_TURN_ON_COUNTER__SHIFT) & VIVS_PM_POWER_CONTROLS_TURN_ON_COUNTER__MASK)
+#define VIVS_PM_POWER_CONTROLS_TURN_OFF_COUNTER__MASK 0xffff0000
+#define VIVS_PM_POWER_CONTROLS_TURN_OFF_COUNTER__SHIFT 16
+#define VIVS_PM_POWER_CONTROLS_TURN_OFF_COUNTER(x) (((x) << VIVS_PM_POWER_CONTROLS_TURN_OFF_COUNTER__SHIFT) & VIVS_PM_POWER_CONTROLS_TURN_OFF_COUNTER__MASK)
+#define VIVS_PM_MODULE_CONTROLS 0x00000104
+#define VIVS_PM_MODULE_CONTROLS_DISABLE_MODULE_CLOCK_GATING_FE 0x00000001
+#define VIVS_PM_MODULE_CONTROLS_DISABLE_MODULE_CLOCK_GATING_DE 0x00000002
+#define VIVS_PM_MODULE_CONTROLS_DISABLE_MODULE_CLOCK_GATING_PE 0x00000004
+#define VIVS_PM_MODULE_STATUS 0x00000108
+#define VIVS_PM_MODULE_STATUS_MODULE_CLOCK_GATED_FE 0x00000001
+#define VIVS_PM_MODULE_STATUS_MODULE_CLOCK_GATED_DE 0x00000002
+#define VIVS_PM_MODULE_STATUS_MODULE_CLOCK_GATED_PE 0x00000004
+#define VIVS_PM_PULSE_EATER 0x0000010c
+#define VIVS_MMUv2 0x00000000
+#define VIVS_MMUv2_SAFE_ADDRESS 0x00000180
+#define VIVS_MMUv2_CONFIGURATION 0x00000184
+#define VIVS_MMUv2_CONFIGURATION_MODE__MASK 0x00000001
+#define VIVS_MMUv2_CONFIGURATION_MODE__SHIFT 0
+#define VIVS_MMUv2_CONFIGURATION_MODE_MODE4_K 0x00000000
+#define VIVS_MMUv2_CONFIGURATION_MODE_MODE1_K 0x00000001
+#define VIVS_MMUv2_CONFIGURATION_MODE_MASK 0x00000008
+#define VIVS_MMUv2_CONFIGURATION_FLUSH__MASK 0x00000010
+#define VIVS_MMUv2_CONFIGURATION_FLUSH__SHIFT 4
+#define VIVS_MMUv2_CONFIGURATION_FLUSH_FLUSH 0x00000010
+#define VIVS_MMUv2_CONFIGURATION_FLUSH_MASK 0x00000080
+#define VIVS_MMUv2_CONFIGURATION_ADDRESS_MASK 0x00000100
+#define VIVS_MMUv2_CONFIGURATION_ADDRESS__MASK 0xfffffc00
+#define VIVS_MMUv2_CONFIGURATION_ADDRESS__SHIFT 10
+#define VIVS_MMUv2_CONFIGURATION_ADDRESS(x) (((x) << VIVS_MMUv2_CONFIGURATION_ADDRESS__SHIFT) & VIVS_MMUv2_CONFIGURATION_ADDRESS__MASK)
+#define VIVS_MMUv2_STATUS 0x00000188
+#define VIVS_MMUv2_STATUS_EXCEPTION0__MASK 0x00000003
+#define VIVS_MMUv2_STATUS_EXCEPTION0__SHIFT 0
+#define VIVS_MMUv2_STATUS_EXCEPTION0(x) (((x) << VIVS_MMUv2_STATUS_EXCEPTION0__SHIFT) & VIVS_MMUv2_STATUS_EXCEPTION0__MASK)
+#define VIVS_MMUv2_STATUS_EXCEPTION1__MASK 0x00000030
+#define VIVS_MMUv2_STATUS_EXCEPTION1__SHIFT 4
+#define VIVS_MMUv2_STATUS_EXCEPTION1(x) (((x) << VIVS_MMUv2_STATUS_EXCEPTION1__SHIFT) & VIVS_MMUv2_STATUS_EXCEPTION1__MASK)
+#define VIVS_MMUv2_STATUS_EXCEPTION2__MASK 0x00000300
+#define VIVS_MMUv2_STATUS_EXCEPTION2__SHIFT 8
+#define VIVS_MMUv2_STATUS_EXCEPTION2(x) (((x) << VIVS_MMUv2_STATUS_EXCEPTION2__SHIFT) & VIVS_MMUv2_STATUS_EXCEPTION2__MASK)
+#define VIVS_MMUv2_STATUS_EXCEPTION3__MASK 0x00003000
+#define VIVS_MMUv2_STATUS_EXCEPTION3__SHIFT 12
+#define VIVS_MMUv2_STATUS_EXCEPTION3(x) (((x) << VIVS_MMUv2_STATUS_EXCEPTION3__SHIFT) & VIVS_MMUv2_STATUS_EXCEPTION3__MASK)
+#define VIVS_MMUv2_CONTROL 0x0000018c
+#define VIVS_MMUv2_CONTROL_ENABLE 0x00000001
+#define VIVS_MMUv2_EXCEPTION_ADDR(i0) (0x00000190 + 0x4*(i0))
+#define VIVS_MMUv2_EXCEPTION_ADDR__ESIZE 0x00000004
+#define VIVS_MMUv2_EXCEPTION_ADDR__LEN 0x00000004
+#define VIVS_MC 0x00000000
+#define VIVS_MC_MMU_FE_PAGE_TABLE 0x00000400
+#define VIVS_MC_MMU_TX_PAGE_TABLE 0x00000404
+#define VIVS_MC_MMU_PE_PAGE_TABLE 0x00000408
+#define VIVS_MC_MMU_PEZ_PAGE_TABLE 0x0000040c
+#define VIVS_MC_MMU_RA_PAGE_TABLE 0x00000410
+#define VIVS_MC_DEBUG_MEMORY 0x00000414
+#define VIVS_MC_DEBUG_MEMORY_SPECIAL_PATCH_GC320 0x00000008
+#define VIVS_MC_DEBUG_MEMORY_FAST_CLEAR_BYPASS 0x00100000
+#define VIVS_MC_DEBUG_MEMORY_COMPRESSION_BYPASS 0x00200000
+#define VIVS_MC_MEMORY_BASE_ADDR_RA 0x00000418
+#define VIVS_MC_MEMORY_BASE_ADDR_FE 0x0000041c
+#define VIVS_MC_MEMORY_BASE_ADDR_TX 0x00000420
+#define VIVS_MC_MEMORY_BASE_ADDR_PEZ 0x00000424
+#define VIVS_MC_MEMORY_BASE_ADDR_PE 0x00000428
+#define VIVS_MC_MEMORY_TIMING_CONTROL 0x0000042c
+#define VIVS_MC_MEMORY_FLUSH 0x00000430
+#define VIVS_MC_PROFILE_CYCLE_COUNTER 0x00000438
+#define VIVS_MC_DEBUG_READ0 0x0000043c
+#define VIVS_MC_DEBUG_READ1 0x00000440
+#define VIVS_MC_DEBUG_WRITE 0x00000444
+#define VIVS_MC_PROFILE_RA_READ 0x00000448
+#define VIVS_MC_PROFILE_TX_READ 0x0000044c
+#define VIVS_MC_PROFILE_FE_READ 0x00000450
+#define VIVS_MC_PROFILE_PE_READ 0x00000454
+#define VIVS_MC_PROFILE_DE_READ 0x00000458
+#define VIVS_MC_PROFILE_SH_READ 0x0000045c
+#define VIVS_MC_PROFILE_PA_READ 0x00000460
+#define VIVS_MC_PROFILE_SE_READ 0x00000464
+#define VIVS_MC_PROFILE_MC_READ 0x00000468
+#define VIVS_MC_PROFILE_HI_READ 0x0000046c
+#define VIVS_MC_PROFILE_CONFIG0 0x00000470
+#define VIVS_MC_PROFILE_CONFIG0_FE__MASK 0x0000000f
+#define VIVS_MC_PROFILE_CONFIG0_FE__SHIFT 0
+#define VIVS_MC_PROFILE_CONFIG0_FE_RESET 0x0000000f
+#define VIVS_MC_PROFILE_CONFIG0_DE__MASK 0x00000f00
+#define VIVS_MC_PROFILE_CONFIG0_DE__SHIFT 8
+#define VIVS_MC_PROFILE_CONFIG0_DE_RESET 0x00000f00
+#define VIVS_MC_PROFILE_CONFIG0_PE__MASK 0x000f0000
+#define VIVS_MC_PROFILE_CONFIG0_PE__SHIFT 16
+#define VIVS_MC_PROFILE_CONFIG0_PE_PIXEL_COUNT_KILLED_BY_COLOR_PIPE 0x00000000
+#define VIVS_MC_PROFILE_CONFIG0_PE_PIXEL_COUNT_KILLED_BY_DEPTH_PIPE 0x00010000
+#define VIVS_MC_PROFILE_CONFIG0_PE_PIXEL_COUNT_DRAWN_BY_COLOR_PIPE 0x00020000
+#define VIVS_MC_PROFILE_CONFIG0_PE_PIXEL_COUNT_DRAWN_BY_DEPTH_PIPE 0x00030000
+#define VIVS_MC_PROFILE_CONFIG0_PE_PIXELS_RENDERED_2D 0x000b0000
+#define VIVS_MC_PROFILE_CONFIG0_PE_RESET 0x000f0000
+#define VIVS_MC_PROFILE_CONFIG0_SH__MASK 0x0f000000
+#define VIVS_MC_PROFILE_CONFIG0_SH__SHIFT 24
+#define VIVS_MC_PROFILE_CONFIG0_SH_SHADER_CYCLES 0x04000000
+#define VIVS_MC_PROFILE_CONFIG0_SH_PS_INST_COUNTER 0x07000000
+#define VIVS_MC_PROFILE_CONFIG0_SH_RENDERED_PIXEL_COUNTER 0x08000000
+#define VIVS_MC_PROFILE_CONFIG0_SH_VS_INST_COUNTER 0x09000000
+#define VIVS_MC_PROFILE_CONFIG0_SH_RENDERED_VERTICE_COUNTER 0x0a000000
+#define VIVS_MC_PROFILE_CONFIG0_SH_VTX_BRANCH_INST_COUNTER 0x0b000000
+#define VIVS_MC_PROFILE_CONFIG0_SH_VTX_TEXLD_INST_COUNTER 0x0c000000
+#define VIVS_MC_PROFILE_CONFIG0_SH_PXL_BRANCH_INST_COUNTER 0x0d000000
+#define VIVS_MC_PROFILE_CONFIG0_SH_PXL_TEXLD_INST_COUNTER 0x0e000000
+#define VIVS_MC_PROFILE_CONFIG0_SH_RESET 0x0f000000
+#define VIVS_MC_PROFILE_CONFIG1 0x00000474
+#define VIVS_MC_PROFILE_CONFIG1_PA__MASK 0x0000000f
+#define VIVS_MC_PROFILE_CONFIG1_PA__SHIFT 0
+#define VIVS_MC_PROFILE_CONFIG1_PA_INPUT_VTX_COUNTER 0x00000003
+#define VIVS_MC_PROFILE_CONFIG1_PA_INPUT_PRIM_COUNTER 0x00000004
+#define VIVS_MC_PROFILE_CONFIG1_PA_OUTPUT_PRIM_COUNTER 0x00000005
+#define VIVS_MC_PROFILE_CONFIG1_PA_DEPTH_CLIPPED_COUNTER 0x00000006
+#define VIVS_MC_PROFILE_CONFIG1_PA_TRIVIAL_REJECTED_COUNTER 0x00000007
+#define VIVS_MC_PROFILE_CONFIG1_PA_CULLED_COUNTER 0x00000008
+#define VIVS_MC_PROFILE_CONFIG1_PA_RESET 0x0000000f
+#define VIVS_MC_PROFILE_CONFIG1_SE__MASK 0x00000f00
+#define VIVS_MC_PROFILE_CONFIG1_SE__SHIFT 8
+#define VIVS_MC_PROFILE_CONFIG1_SE_CULLED_TRIANGLE_COUNT 0x00000000
+#define VIVS_MC_PROFILE_CONFIG1_SE_CULLED_LINES_COUNT 0x00000100
+#define VIVS_MC_PROFILE_CONFIG1_SE_RESET 0x00000f00
+#define VIVS_MC_PROFILE_CONFIG1_RA__MASK 0x000f0000
+#define VIVS_MC_PROFILE_CONFIG1_RA__SHIFT 16
+#define VIVS_MC_PROFILE_CONFIG1_RA_VALID_PIXEL_COUNT 0x00000000
+#define VIVS_MC_PROFILE_CONFIG1_RA_TOTAL_QUAD_COUNT 0x00010000
+#define VIVS_MC_PROFILE_CONFIG1_RA_VALID_QUAD_COUNT_AFTER_EARLY_Z 0x00020000
+#define VIVS_MC_PROFILE_CONFIG1_RA_TOTAL_PRIMITIVE_COUNT 0x00030000
+#define VIVS_MC_PROFILE_CONFIG1_RA_PIPE_CACHE_MISS_COUNTER 0x00090000
+#define VIVS_MC_PROFILE_CONFIG1_RA_PREFETCH_CACHE_MISS_COUNTER 0x000a0000
+#define VIVS_MC_PROFILE_CONFIG1_RA_CULLED_QUAD_COUNT 0x000b0000
+#define VIVS_MC_PROFILE_CONFIG1_RA_RESET 0x000f0000
+#define VIVS_MC_PROFILE_CONFIG1_TX__MASK 0x0f000000
+#define VIVS_MC_PROFILE_CONFIG1_TX__SHIFT 24
+#define VIVS_MC_PROFILE_CONFIG1_TX_TOTAL_BILINEAR_REQUESTS 0x00000000
+#define VIVS_MC_PROFILE_CONFIG1_TX_TOTAL_TRILINEAR_REQUESTS 0x01000000
+#define VIVS_MC_PROFILE_CONFIG1_TX_TOTAL_DISCARDED_TEXTURE_REQUESTS 0x02000000
+#define VIVS_MC_PROFILE_CONFIG1_TX_TOTAL_TEXTURE_REQUESTS 0x03000000
+#define VIVS_MC_PROFILE_CONFIG1_TX_UNKNOWN 0x04000000
+#define VIVS_MC_PROFILE_CONFIG1_TX_MEM_READ_COUNT 0x05000000
+#define VIVS_MC_PROFILE_CONFIG1_TX_MEM_READ_IN_8B_COUNT 0x06000000
+#define VIVS_MC_PROFILE_CONFIG1_TX_CACHE_MISS_COUNT 0x07000000
+#define VIVS_MC_PROFILE_CONFIG1_TX_CACHE_HIT_TEXEL_COUNT 0x08000000
+#define VIVS_MC_PROFILE_CONFIG1_TX_CACHE_MISS_TEXEL_COUNT 0x09000000
+#define VIVS_MC_PROFILE_CONFIG1_TX_RESET 0x0f000000
+#define VIVS_MC_PROFILE_CONFIG2 0x00000478
+#define VIVS_MC_PROFILE_CONFIG2_MC__MASK 0x0000000f
+#define VIVS_MC_PROFILE_CONFIG2_MC__SHIFT 0
+#define VIVS_MC_PROFILE_CONFIG2_MC_TOTAL_READ_REQ_8B_FROM_PIPELINE 0x00000001
+#define VIVS_MC_PROFILE_CONFIG2_MC_TOTAL_READ_REQ_8B_FROM_IP 0x00000002
+#define VIVS_MC_PROFILE_CONFIG2_MC_TOTAL_WRITE_REQ_8B_FROM_PIPELINE 0x00000003
+#define VIVS_MC_PROFILE_CONFIG2_MC_RESET 0x0000000f
+#define VIVS_MC_PROFILE_CONFIG2_HI__MASK 0x00000f00
+#define VIVS_MC_PROFILE_CONFIG2_HI__SHIFT 8
+#define VIVS_MC_PROFILE_CONFIG2_HI_AXI_CYCLES_READ_REQUEST_STALLED 0x00000000
+#define VIVS_MC_PROFILE_CONFIG2_HI_AXI_CYCLES_WRITE_REQUEST_STALLED 0x00000100
+#define VIVS_MC_PROFILE_CONFIG2_HI_AXI_CYCLES_WRITE_DATA_STALLED 0x00000200
+#define VIVS_MC_PROFILE_CONFIG2_HI_RESET 0x00000f00
+#define VIVS_MC_PROFILE_CONFIG3 0x0000047c
+#define VIVS_MC_BUS_CONFIG 0x00000480
+#define VIVS_MC_START_COMPOSITION 0x00000554
+#define VIVS_MC_128B_MERGE 0x00000558
+#endif /* STATE_HI_XML */ diff --git a/include/uapi/drm/etnaviv_drm.h b/include/uapi/drm/etnaviv_drm.h new file mode 100644 index 000000000000..f7b5ac6f3842 --- /dev/null +++ b/include/uapi/drm/etnaviv_drm.h @@ -0,0 +1,225 @@ +/*
- Copyright (C) 2013 Red Hat
- Author: Rob Clark robdclark@gmail.com
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License version 2 as published by
- the Free Software Foundation.
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
- You should have received a copy of the GNU General Public License along with
- this program. If not, see http://www.gnu.org/licenses/.
- */
+#ifndef __ETNAVIV_DRM_H__ +#define __ETNAVIV_DRM_H__
+#include <stddef.h> +#include <drm/drm.h>
+/* Please note that modifications to all structs defined here are
- subject to backwards-compatibility constraints:
- Do not use pointers, use uint64_t instead for 32 bit / 64 bit
user/kernel compatibility
- Keep fields aligned to their size
- Because of how drm_ioctl() works, we can add new fields at
the end of an ioctl if some care is taken: drm_ioctl() will
zero out the new fields at the tail of the ioctl, so a zero
value should have a backwards compatible meaning. And for
output params, userspace won't see the newly added output
fields.. so that has to be somehow ok.
- */
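As an illustration (not part of the patch), the "keep fields aligned to their size" rule above can be checked at compile time with `offsetof`. The struct below is a local mirror of the `drm_etnaviv_gem_new` struct defined further down, used only for this sketch:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Local mirror of drm_etnaviv_gem_new, only to illustrate the
 * alignment rule from the comment above. */
struct example_gem_new {
	uint64_t size;    /* 8-byte field at offset 0 */
	uint32_t flags;   /* 4-byte field at offset 8 */
	uint32_t handle;  /* 4-byte field at offset 12 */
};

/* Every field offset is a multiple of the field size, so the struct
 * has the same layout for 32-bit and 64-bit userspace and needs no
 * compat translation in the ioctl path. */
static_assert(offsetof(struct example_gem_new, size) % 8 == 0, "size aligned");
static_assert(offsetof(struct example_gem_new, flags) % 4 == 0, "flags aligned");
static_assert(offsetof(struct example_gem_new, handle) % 4 == 0, "handle aligned");
static_assert(sizeof(struct example_gem_new) == 16, "no hidden padding");
```

If a `uint32_t` were placed before the `uint64_t`, a 32-bit compiler could pack the struct differently from a 64-bit one, which is exactly what the rule prevents.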
+#define ETNA_PIPE_3D 0x00 +#define ETNA_PIPE_2D 0x01 +#define ETNA_PIPE_VG 0x02
+#define ETNA_MAX_PIPES 3
+/* timeouts are specified in clock-monotonic absolute times (to simplify
- restarting interrupted ioctls). The following struct is logically the
- same as 'struct timespec' but 32/64b ABI safe.
- */
+struct drm_etnaviv_timespec {
int64_t tv_sec; /* seconds */
int64_t tv_nsec; /* nanoseconds */
+};
+#define ETNAVIV_PARAM_GPU_MODEL 0x01 +#define ETNAVIV_PARAM_GPU_REVISION 0x02 +#define ETNAVIV_PARAM_GPU_FEATURES_0 0x03 +#define ETNAVIV_PARAM_GPU_FEATURES_1 0x04 +#define ETNAVIV_PARAM_GPU_FEATURES_2 0x05 +#define ETNAVIV_PARAM_GPU_FEATURES_3 0x06 +#define ETNAVIV_PARAM_GPU_FEATURES_4 0x07
+#define ETNAVIV_PARAM_GPU_STREAM_COUNT 0x10 +#define ETNAVIV_PARAM_GPU_REGISTER_MAX 0x11 +#define ETNAVIV_PARAM_GPU_THREAD_COUNT 0x12 +#define ETNAVIV_PARAM_GPU_VERTEX_CACHE_SIZE 0x13 +#define ETNAVIV_PARAM_GPU_SHADER_CORE_COUNT 0x14 +#define ETNAVIV_PARAM_GPU_PIXEL_PIPES 0x15 +#define ETNAVIV_PARAM_GPU_VERTEX_OUTPUT_BUFFER_SIZE 0x16 +#define ETNAVIV_PARAM_GPU_BUFFER_SIZE 0x17 +#define ETNAVIV_PARAM_GPU_INSTRUCTION_COUNT 0x18 +#define ETNAVIV_PARAM_GPU_NUM_CONSTANTS 0x19
+//#define MSM_PARAM_GMEM_SIZE 0x02
+struct drm_etnaviv_param {
uint32_t pipe; /* in, ETNA_PIPE_x */
uint32_t param; /* in, ETNAVIV_PARAM_x */
uint64_t value; /* out (get_param) or in (set_param) */
+};
+/*
- GEM buffers:
- */
+#define ETNA_BO_CMDSTREAM 0x00000001 +#define ETNA_BO_CACHE_MASK 0x000f0000 +/* cache modes */ +#define ETNA_BO_CACHED 0x00010000 +#define ETNA_BO_WC 0x00020000 +#define ETNA_BO_UNCACHED 0x00040000
+struct drm_etnaviv_gem_new {
uint64_t size; /* in */
uint32_t flags; /* in, mask of ETNA_BO_x */
uint32_t handle; /* out */
+};
+struct drm_etnaviv_gem_info {
uint32_t handle; /* in */
uint32_t pad;
uint64_t offset; /* out, offset to pass to mmap() */
+};
+#define ETNA_PREP_READ 0x01 +#define ETNA_PREP_WRITE 0x02 +#define ETNA_PREP_NOSYNC 0x04
+struct drm_etnaviv_gem_cpu_prep {
uint32_t handle; /* in */
uint32_t op; /* in, mask of ETNA_PREP_x */
struct drm_etnaviv_timespec timeout; /* in */
+};
+struct drm_etnaviv_gem_cpu_fini {
uint32_t handle; /* in */
+};
+/*
- Cmdstream Submission:
- */
+/* The value written into the cmdstream is logically:
- ((relocbuf->gpuaddr + reloc_offset) << shift) | or
- When we have GPU's w/ >32bit ptrs, it should be possible to deal
- with this by emit'ing two reloc entries with appropriate shift
- values. Or a new ETNA_SUBMIT_CMD_x type would also be an option.
- NOTE that reloc's must be sorted by order of increasing submit_offset,
- otherwise EINVAL.
- */
+struct drm_etnaviv_gem_submit_reloc {
uint32_t submit_offset; /* in, offset from submit_bo */
uint32_t or; /* in, value OR'd with result */
int32_t shift; /* in, amount of left shift (can be negative) */
uint32_t reloc_idx; /* in, index of reloc_bo buffer */
uint64_t reloc_offset; /* in, offset from start of reloc_bo */
+};
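The patching formula in the comment above can be written out as a small pure function. This is only a sketch of the documented semantics, with the assumption that a negative `shift` means a right shift:

```c
#include <stdint.h>

/* Compute the value the kernel would patch into the cmdstream for one
 * reloc entry, per the formula:
 *   ((relocbuf->gpuaddr + reloc_offset) << shift) | or
 * A negative shift is interpreted as a right shift. */
static uint32_t reloc_value(uint64_t bo_gpuaddr, uint64_t reloc_offset,
			    int32_t shift, uint32_t or_mask)
{
	uint64_t addr = bo_gpuaddr + reloc_offset;

	if (shift < 0)
		addr >>= -shift;
	else
		addr <<= shift;

	/* the written value is a single 32-bit word */
	return (uint32_t)addr | or_mask;
}
```

Note the cover letter argues that on Vivante hardware `shift` is always 0 and `or` always 0, since the registers take complete 32-bit addresses; the extra fields are inherited from the MSM interface.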
+/* submit-types:
- BUF - this cmd buffer is executed normally.
- IB_TARGET_BUF - this cmd buffer is an IB target. Reloc's are
processed normally, but the kernel does not setup an IB to
this buffer in the first-level ringbuffer
- CTX_RESTORE_BUF - only executed if there has been a GPU context
switch since the last SUBMIT ioctl
- */
+#define ETNA_SUBMIT_CMD_BUF 0x0001 +#define ETNA_SUBMIT_CMD_IB_TARGET_BUF 0x0002 +#define ETNA_SUBMIT_CMD_CTX_RESTORE_BUF 0x0003 +struct drm_etnaviv_gem_submit_cmd {
uint32_t type; /* in, one of ETNA_SUBMIT_CMD_x */
Do we need different types? I did not use this in my kernel tree.
uint32_t submit_idx; /* in, index of submit_bo cmdstream buffer */
uint32_t submit_offset; /* in, offset into submit_bo */
Do we really want/need the offset? I have removed it because it makes things in userspace more complex than needed.
uint32_t size; /* in, cmdstream size */
uint32_t pad;
uint32_t nr_relocs; /* in, number of submit_reloc's */
uint64_t __user relocs; /* in, ptr to array of submit_reloc's */
+};
+/* Each buffer referenced elsewhere in the cmdstream submit (ie. the
- cmdstream buffer(s) themselves or reloc entries) has one (and only
- one) entry in the submit->bos[] table.
- As an optimization, the current buffer (gpu virtual address) can be
- passed back through the 'presumed' field. If on a subsequent reloc,
- userspace passes back a 'presumed' address that is still valid,
- then patching the cmdstream for this entry is skipped. This can
- avoid kernel needing to map/access the cmdstream bo in the common
- case.
- */
+#define ETNA_SUBMIT_BO_READ 0x0001 +#define ETNA_SUBMIT_BO_WRITE 0x0002 +struct drm_etnaviv_gem_submit_bo {
uint32_t flags; /* in, mask of ETNA_SUBMIT_BO_x */
uint32_t handle; /* in, GEM handle */
uint64_t presumed; /* in/out, presumed buffer address */
Presumed address support should never hit the etnaviv driver.
+};
+/* Each cmdstream submit consists of a table of buffers involved, and
- one or more cmdstream buffers. This allows for conditional execution
- (context-restore), and IB buffers needed for per tile/bin draw cmds.
- */
+struct drm_etnaviv_gem_submit {
uint32_t pipe; /* in, ETNA_PIPE_x */
uint32_t fence; /* out */
uint32_t nr_bos; /* in, number of submit_bo's */
uint32_t nr_cmds; /* in, number of submit_cmd's */
Do we really need to support multiple cmds per submit? I have removed this from my kernel.
uint64_t __user bos; /* in, ptr to array of submit_bo's */
uint64_t __user cmds; /* in, ptr to array of submit_cmd's */
+};
+/* The normal way to synchronize with the GPU is just to CPU_PREP on
- a buffer if you need to access it from the CPU (other cmdstream
- submission from same or other contexts, PAGE_FLIP ioctl, etc, all
- handle the required synchronization under the hood). This ioctl
- mainly just exists as a way to implement the gallium pipe_fence
- APIs without requiring a dummy bo to synchronize on.
- */
+struct drm_etnaviv_wait_fence {
uint32_t pipe; /* in, ETNA_PIPE_x */
uint32_t fence; /* in */
struct drm_etnaviv_timespec timeout; /* in */
+};
+#define DRM_ETNAVIV_GET_PARAM 0x00 +/* placeholder: +#define DRM_MSM_SET_PARAM 0x01
- */
+#define DRM_ETNAVIV_GEM_NEW 0x02 +#define DRM_ETNAVIV_GEM_INFO 0x03 +#define DRM_ETNAVIV_GEM_CPU_PREP 0x04 +#define DRM_ETNAVIV_GEM_CPU_FINI 0x05 +#define DRM_ETNAVIV_GEM_SUBMIT 0x06 +#define DRM_ETNAVIV_WAIT_FENCE 0x07 +#define DRM_ETNAVIV_NUM_IOCTLS 0x08
+#define DRM_IOCTL_ETNAVIV_GET_PARAM DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GET_PARAM, struct drm_etnaviv_param) +#define DRM_IOCTL_ETNAVIV_GEM_NEW DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_NEW, struct drm_etnaviv_gem_new) +#define DRM_IOCTL_ETNAVIV_GEM_INFO DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_INFO, struct drm_etnaviv_gem_info) +#define DRM_IOCTL_ETNAVIV_GEM_CPU_PREP DRM_IOW (DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_CPU_PREP, struct drm_etnaviv_gem_cpu_prep) +#define DRM_IOCTL_ETNAVIV_GEM_CPU_FINI DRM_IOW (DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_CPU_FINI, struct drm_etnaviv_gem_cpu_fini) +#define DRM_IOCTL_ETNAVIV_GEM_SUBMIT DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_SUBMIT, struct drm_etnaviv_gem_submit) +#define DRM_IOCTL_ETNAVIV_WAIT_FENCE DRM_IOW (DRM_COMMAND_BASE + DRM_ETNAVIV_WAIT_FENCE, struct drm_etnaviv_wait_fence)
+#endif /* __ETNAVIV_DRM_H__ */
2.1.4
greets -- Christian Gmeiner, MSc
Am Sonntag, den 05.04.2015, 21:26 +0200 schrieb Christian Gmeiner:
2015-04-02 17:29 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
From: Christian Gmeiner christian.gmeiner@gmail.com
This is a consolidation by Russell King of Christian's drm work.
Signed-off-by: Christian Gmeiner christian.gmeiner@gmail.com Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk
[...]
+#endif /* STATE_HI_XML */ diff --git a/include/uapi/drm/etnaviv_drm.h b/include/uapi/drm/etnaviv_drm.h new file mode 100644 index 000000000000..f7b5ac6f3842 --- /dev/null +++ b/include/uapi/drm/etnaviv_drm.h @@ -0,0 +1,225 @@ +/*
- Copyright (C) 2013 Red Hat
- Author: Rob Clark robdclark@gmail.com
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License version 2 as published by
- the Free Software Foundation.
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
- You should have received a copy of the GNU General Public License along with
- this program. If not, see http://www.gnu.org/licenses/.
- */
+#ifndef __ETNAVIV_DRM_H__ +#define __ETNAVIV_DRM_H__
+#include <stddef.h> +#include <drm/drm.h>
+/* Please note that modifications to all structs defined here are
- subject to backwards-compatibility constraints:
- Do not use pointers, use uint64_t instead for 32 bit / 64 bit
user/kernel compatibility
- Keep fields aligned to their size
- Because of how drm_ioctl() works, we can add new fields at
the end of an ioctl if some care is taken: drm_ioctl() will
zero out the new fields at the tail of the ioctl, so a zero
value should have a backwards compatible meaning. And for
output params, userspace won't see the newly added output
fields.. so that has to be somehow ok.
- */
+#define ETNA_PIPE_3D 0x00 +#define ETNA_PIPE_2D 0x01 +#define ETNA_PIPE_VG 0x02
+#define ETNA_MAX_PIPES 3
+/* timeouts are specified in clock-monotonic absolute times (to simplify
- restarting interrupted ioctls). The following struct is logically the
- same as 'struct timespec' but 32/64b ABI safe.
- */
+struct drm_etnaviv_timespec {
int64_t tv_sec; /* seconds */
int64_t tv_nsec; /* nanoseconds */
+};
+#define ETNAVIV_PARAM_GPU_MODEL 0x01 +#define ETNAVIV_PARAM_GPU_REVISION 0x02 +#define ETNAVIV_PARAM_GPU_FEATURES_0 0x03 +#define ETNAVIV_PARAM_GPU_FEATURES_1 0x04 +#define ETNAVIV_PARAM_GPU_FEATURES_2 0x05 +#define ETNAVIV_PARAM_GPU_FEATURES_3 0x06 +#define ETNAVIV_PARAM_GPU_FEATURES_4 0x07
+#define ETNAVIV_PARAM_GPU_STREAM_COUNT 0x10 +#define ETNAVIV_PARAM_GPU_REGISTER_MAX 0x11 +#define ETNAVIV_PARAM_GPU_THREAD_COUNT 0x12 +#define ETNAVIV_PARAM_GPU_VERTEX_CACHE_SIZE 0x13 +#define ETNAVIV_PARAM_GPU_SHADER_CORE_COUNT 0x14 +#define ETNAVIV_PARAM_GPU_PIXEL_PIPES 0x15 +#define ETNAVIV_PARAM_GPU_VERTEX_OUTPUT_BUFFER_SIZE 0x16 +#define ETNAVIV_PARAM_GPU_BUFFER_SIZE 0x17 +#define ETNAVIV_PARAM_GPU_INSTRUCTION_COUNT 0x18 +#define ETNAVIV_PARAM_GPU_NUM_CONSTANTS 0x19
+//#define MSM_PARAM_GMEM_SIZE 0x02
+struct drm_etnaviv_param {
uint32_t pipe; /* in, ETNA_PIPE_x */
uint32_t param; /* in, ETNAVIV_PARAM_x */
uint64_t value; /* out (get_param) or in (set_param) */
+};
+/*
- GEM buffers:
- */
+#define ETNA_BO_CMDSTREAM 0x00000001 +#define ETNA_BO_CACHE_MASK 0x000f0000 +/* cache modes */ +#define ETNA_BO_CACHED 0x00010000 +#define ETNA_BO_WC 0x00020000 +#define ETNA_BO_UNCACHED 0x00040000
+struct drm_etnaviv_gem_new {
uint64_t size; /* in */
uint32_t flags; /* in, mask of ETNA_BO_x */
uint32_t handle; /* out */
+};
+struct drm_etnaviv_gem_info {
uint32_t handle; /* in */
uint32_t pad;
uint64_t offset; /* out, offset to pass to mmap() */
+};
+#define ETNA_PREP_READ 0x01 +#define ETNA_PREP_WRITE 0x02 +#define ETNA_PREP_NOSYNC 0x04
+struct drm_etnaviv_gem_cpu_prep {
uint32_t handle; /* in */
uint32_t op; /* in, mask of ETNA_PREP_x */
struct drm_etnaviv_timespec timeout; /* in */
+};
+struct drm_etnaviv_gem_cpu_fini {
uint32_t handle; /* in */
+};
+/*
- Cmdstream Submission:
- */
+/* The value written into the cmdstream is logically:
- ((relocbuf->gpuaddr + reloc_offset) << shift) | or
- When we have GPU's w/ >32bit ptrs, it should be possible to deal
- with this by emit'ing two reloc entries with appropriate shift
- values. Or a new ETNA_SUBMIT_CMD_x type would also be an option.
- NOTE that reloc's must be sorted by order of increasing submit_offset,
- otherwise EINVAL.
- */
+struct drm_etnaviv_gem_submit_reloc {
uint32_t submit_offset; /* in, offset from submit_bo */
uint32_t or; /* in, value OR'd with result */
int32_t shift; /* in, amount of left shift (can be negative) */
uint32_t reloc_idx; /* in, index of reloc_bo buffer */
uint64_t reloc_offset; /* in, offset from start of reloc_bo */
+};
+/* submit-types:
- BUF - this cmd buffer is executed normally.
- IB_TARGET_BUF - this cmd buffer is an IB target. Reloc's are
processed normally, but the kernel does not setup an IB to
this buffer in the first-level ringbuffer
- CTX_RESTORE_BUF - only executed if there has been a GPU context
switch since the last SUBMIT ioctl
- */
+#define ETNA_SUBMIT_CMD_BUF 0x0001 +#define ETNA_SUBMIT_CMD_IB_TARGET_BUF 0x0002 +#define ETNA_SUBMIT_CMD_CTX_RESTORE_BUF 0x0003 +struct drm_etnaviv_gem_submit_cmd {
uint32_t type; /* in, one of ETNA_SUBMIT_CMD_x */
Do we need different types? I did not use this in my kernel tree.
Please also note the commit cleaning up this API, which changes this a bit.
But yes, we need different types. At least the context restore buffer type is needed to properly implement GPU power management and context switching.
uint32_t submit_idx; /* in, index of submit_bo cmdstream buffer */
uint32_t submit_offset; /* in, offset into submit_bo */
Do we really want/need the offset? I have removed it because it makes things in userspace more complex than needed.
It makes things a bit more complex, but it allows for far more efficient buffer use if you are dealing with a lot of flushes. I don't see why we should prevent userspace from using this optimization.
uint32_t size; /* in, cmdstream size */
uint32_t pad;
uint32_t nr_relocs; /* in, number of submit_reloc's */
uint64_t __user relocs; /* in, ptr to array of submit_reloc's */
+};
+/* Each buffer referenced elsewhere in the cmdstream submit (ie. the
- cmdstream buffer(s) themselves or reloc entries) has one (and only
- one) entry in the submit->bos[] table.
- As an optimization, the current buffer (gpu virtual address) can be
- passed back through the 'presumed' field. If on a subsequent reloc,
- userspace passes back a 'presumed' address that is still valid,
- then patching the cmdstream for this entry is skipped. This can
- avoid kernel needing to map/access the cmdstream bo in the common
- case.
- */
+#define ETNA_SUBMIT_BO_READ 0x0001 +#define ETNA_SUBMIT_BO_WRITE 0x0002 +struct drm_etnaviv_gem_submit_bo {
uint32_t flags; /* in, mask of ETNA_SUBMIT_BO_x */
uint32_t handle; /* in, GEM handle */
uint64_t presumed; /* in/out, presumed buffer address */
Presumed address support should never hit the etnaviv driver.
As stated in the cover letter I think presumed support will become possible with MMUv2 and may provide a good optimization there. So I would rather leave this in here and just ignore it for now.
+};
+/* Each cmdstream submit consists of a table of buffers involved, and
- one or more cmdstream buffers. This allows for conditional execution
- (context-restore), and IB buffers needed for per tile/bin draw cmds.
- */
+struct drm_etnaviv_gem_submit {
uint32_t pipe; /* in, ETNA_PIPE_x */
uint32_t fence; /* out */
uint32_t nr_bos; /* in, number of submit_bo's */
uint32_t nr_cmds; /* in, number of submit_cmd's */
Do we really need to support multiple cmds per submit? I have removed this from my kernel.
We need to support at least one additional context buffer, so I don't see why we shouldn't support n buffers.
uint64_t __user bos; /* in, ptr to array of submit_bo's */
uint64_t __user cmds; /* in, ptr to array of submit_cmd's */
+};
+/* The normal way to synchronize with the GPU is just to CPU_PREP on
- a buffer if you need to access it from the CPU (other cmdstream
- submission from same or other contexts, PAGE_FLIP ioctl, etc, all
- handle the required synchronization under the hood). This ioctl
- mainly just exists as a way to implement the gallium pipe_fence
- APIs without requiring a dummy bo to synchronize on.
- */
+struct drm_etnaviv_wait_fence {
uint32_t pipe; /* in, ETNA_PIPE_x */
uint32_t fence; /* in */
struct drm_etnaviv_timespec timeout; /* in */
+};
+#define DRM_ETNAVIV_GET_PARAM 0x00 +/* placeholder: +#define DRM_MSM_SET_PARAM 0x01
- */
+#define DRM_ETNAVIV_GEM_NEW 0x02 +#define DRM_ETNAVIV_GEM_INFO 0x03 +#define DRM_ETNAVIV_GEM_CPU_PREP 0x04 +#define DRM_ETNAVIV_GEM_CPU_FINI 0x05 +#define DRM_ETNAVIV_GEM_SUBMIT 0x06 +#define DRM_ETNAVIV_WAIT_FENCE 0x07 +#define DRM_ETNAVIV_NUM_IOCTLS 0x08
+#define DRM_IOCTL_ETNAVIV_GET_PARAM DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GET_PARAM, struct drm_etnaviv_param) +#define DRM_IOCTL_ETNAVIV_GEM_NEW DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_NEW, struct drm_etnaviv_gem_new) +#define DRM_IOCTL_ETNAVIV_GEM_INFO DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_INFO, struct drm_etnaviv_gem_info) +#define DRM_IOCTL_ETNAVIV_GEM_CPU_PREP DRM_IOW (DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_CPU_PREP, struct drm_etnaviv_gem_cpu_prep) +#define DRM_IOCTL_ETNAVIV_GEM_CPU_FINI DRM_IOW (DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_CPU_FINI, struct drm_etnaviv_gem_cpu_fini) +#define DRM_IOCTL_ETNAVIV_GEM_SUBMIT DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_SUBMIT, struct drm_etnaviv_gem_submit) +#define DRM_IOCTL_ETNAVIV_WAIT_FENCE DRM_IOW (DRM_COMMAND_BASE + DRM_ETNAVIV_WAIT_FENCE, struct drm_etnaviv_wait_fence)
+#endif /* __ETNAVIV_DRM_H__ */
2.1.4
Regards, Lucas
Hi Lucas
2015-04-07 9:35 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
Am Sonntag, den 05.04.2015, 21:26 +0200 schrieb Christian Gmeiner:
2015-04-02 17:29 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
From: Christian Gmeiner christian.gmeiner@gmail.com
This is a consolidation by Russell King of Christian's drm work.
Signed-off-by: Christian Gmeiner christian.gmeiner@gmail.com Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk
[...]
+#endif /* STATE_HI_XML */ diff --git a/include/uapi/drm/etnaviv_drm.h b/include/uapi/drm/etnaviv_drm.h new file mode 100644 index 000000000000..f7b5ac6f3842 --- /dev/null +++ b/include/uapi/drm/etnaviv_drm.h @@ -0,0 +1,225 @@ +/*
- Copyright (C) 2013 Red Hat
- Author: Rob Clark robdclark@gmail.com
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License version 2 as published by
- the Free Software Foundation.
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
- You should have received a copy of the GNU General Public License along with
- this program. If not, see http://www.gnu.org/licenses/.
- */
+#ifndef __ETNAVIV_DRM_H__ +#define __ETNAVIV_DRM_H__
+#include <stddef.h> +#include <drm/drm.h>
+/* Please note that modifications to all structs defined here are
- subject to backwards-compatibility constraints:
- Do not use pointers, use uint64_t instead for 32 bit / 64 bit
user/kernel compatibility
- Keep fields aligned to their size
- Because of how drm_ioctl() works, we can add new fields at
the end of an ioctl if some care is taken: drm_ioctl() will
zero out the new fields at the tail of the ioctl, so a zero
value should have a backwards compatible meaning. And for
output params, userspace won't see the newly added output
fields.. so that has to be somehow ok.
- */
+#define ETNA_PIPE_3D 0x00 +#define ETNA_PIPE_2D 0x01 +#define ETNA_PIPE_VG 0x02
+#define ETNA_MAX_PIPES 3
+/* timeouts are specified in clock-monotonic absolute times (to simplify
- restarting interrupted ioctls). The following struct is logically the
- same as 'struct timespec' but 32/64b ABI safe.
- */
+struct drm_etnaviv_timespec {
int64_t tv_sec; /* seconds */
int64_t tv_nsec; /* nanoseconds */
+};
+#define ETNAVIV_PARAM_GPU_MODEL 0x01 +#define ETNAVIV_PARAM_GPU_REVISION 0x02 +#define ETNAVIV_PARAM_GPU_FEATURES_0 0x03 +#define ETNAVIV_PARAM_GPU_FEATURES_1 0x04 +#define ETNAVIV_PARAM_GPU_FEATURES_2 0x05 +#define ETNAVIV_PARAM_GPU_FEATURES_3 0x06 +#define ETNAVIV_PARAM_GPU_FEATURES_4 0x07
+#define ETNAVIV_PARAM_GPU_STREAM_COUNT 0x10 +#define ETNAVIV_PARAM_GPU_REGISTER_MAX 0x11 +#define ETNAVIV_PARAM_GPU_THREAD_COUNT 0x12 +#define ETNAVIV_PARAM_GPU_VERTEX_CACHE_SIZE 0x13 +#define ETNAVIV_PARAM_GPU_SHADER_CORE_COUNT 0x14 +#define ETNAVIV_PARAM_GPU_PIXEL_PIPES 0x15 +#define ETNAVIV_PARAM_GPU_VERTEX_OUTPUT_BUFFER_SIZE 0x16 +#define ETNAVIV_PARAM_GPU_BUFFER_SIZE 0x17 +#define ETNAVIV_PARAM_GPU_INSTRUCTION_COUNT 0x18 +#define ETNAVIV_PARAM_GPU_NUM_CONSTANTS 0x19
+//#define MSM_PARAM_GMEM_SIZE 0x02
+struct drm_etnaviv_param {
uint32_t pipe; /* in, ETNA_PIPE_x */
uint32_t param; /* in, ETNAVIV_PARAM_x */
uint64_t value; /* out (get_param) or in (set_param) */
+};
+/*
- GEM buffers:
- */
+#define ETNA_BO_CMDSTREAM 0x00000001 +#define ETNA_BO_CACHE_MASK 0x000f0000 +/* cache modes */ +#define ETNA_BO_CACHED 0x00010000 +#define ETNA_BO_WC 0x00020000 +#define ETNA_BO_UNCACHED 0x00040000
+struct drm_etnaviv_gem_new {
uint64_t size; /* in */
uint32_t flags; /* in, mask of ETNA_BO_x */
uint32_t handle; /* out */
+};
+struct drm_etnaviv_gem_info {
uint32_t handle; /* in */
uint32_t pad;
uint64_t offset; /* out, offset to pass to mmap() */
+};
+#define ETNA_PREP_READ 0x01 +#define ETNA_PREP_WRITE 0x02 +#define ETNA_PREP_NOSYNC 0x04
+struct drm_etnaviv_gem_cpu_prep {
uint32_t handle; /* in */
uint32_t op; /* in, mask of ETNA_PREP_x */
struct drm_etnaviv_timespec timeout; /* in */
+};
+struct drm_etnaviv_gem_cpu_fini {
uint32_t handle; /* in */
+};
+/*
- Cmdstream Submission:
- */
+/* The value written into the cmdstream is logically:
- ((relocbuf->gpuaddr + reloc_offset) << shift) | or
- When we have GPU's w/ >32bit ptrs, it should be possible to deal
- with this by emit'ing two reloc entries with appropriate shift
- values. Or a new ETNA_SUBMIT_CMD_x type would also be an option.
- NOTE that reloc's must be sorted by order of increasing submit_offset,
- otherwise EINVAL.
- */
+struct drm_etnaviv_gem_submit_reloc {
uint32_t submit_offset; /* in, offset from submit_bo */
uint32_t or; /* in, value OR'd with result */
int32_t shift; /* in, amount of left shift (can be negative) */
uint32_t reloc_idx; /* in, index of reloc_bo buffer */
uint64_t reloc_offset; /* in, offset from start of reloc_bo */
+};
+/* submit-types:
- BUF - this cmd buffer is executed normally.
- IB_TARGET_BUF - this cmd buffer is an IB target. Reloc's are
processed normally, but the kernel does not setup an IB to
this buffer in the first-level ringbuffer
- CTX_RESTORE_BUF - only executed if there has been a GPU context
switch since the last SUBMIT ioctl
- */
+#define ETNA_SUBMIT_CMD_BUF 0x0001 +#define ETNA_SUBMIT_CMD_IB_TARGET_BUF 0x0002 +#define ETNA_SUBMIT_CMD_CTX_RESTORE_BUF 0x0003 +struct drm_etnaviv_gem_submit_cmd {
uint32_t type; /* in, one of ETNA_SUBMIT_CMD_x */
Do we need different types? I did not use this in my kernel tree.
Please also note the commit cleaning up this API, which changes this a bit.
Ah yes.. I see it.
But yes we need different types. At least the context restore buffer type is needed to properly implement GPU power management and context switching.
What role does GPU power management play here? For context switching it could make sense. But for the 2D core the context is so small that it does not hurt to send it with every command stream. For the 3D core it is much bigger, but this could be done completely in the kernel. Or am I wrong here?
uint32_t submit_idx; /* in, index of submit_bo cmdstream buffer */
uint32_t submit_offset; /* in, offset into submit_bo */
Do we really want/need the offset? I have removed it because it makes things in userspace more complex than needed.
It makes things a bit more complex, but it allows for far more efficient buffer use if you are dealing with a lot of flushes. I don't see why we should prevent userspace from using this optimization.
I tend to get things up and running first and do the optimization step only if it is really worth it. Also I like stuff to be stupid simple. There is another interesting fact: flushing the IOMMUv2 is done via the command stream, so we need to reserve more space for the tail of the used BO. So if we reserve some space in the command buffer, we have different space limits for the tail depending on the hardware used.
uint32_t size; /* in, cmdstream size */
uint32_t pad;
uint32_t nr_relocs; /* in, number of submit_reloc's */
uint64_t __user relocs; /* in, ptr to array of submit_reloc's */
+};
+/* Each buffer referenced elsewhere in the cmdstream submit (ie. the
- cmdstream buffer(s) themselves or reloc entries) has one (and only
- one) entry in the submit->bos[] table.
- As an optimization, the current buffer (gpu virtual address) can be
- passed back through the 'presumed' field. If on a subsequent reloc,
- userspace passes back a 'presumed' address that is still valid,
- then patching the cmdstream for this entry is skipped. This can
- avoid kernel needing to map/access the cmdstream bo in the common
- case.
- */
+#define ETNA_SUBMIT_BO_READ 0x0001 +#define ETNA_SUBMIT_BO_WRITE 0x0002 +struct drm_etnaviv_gem_submit_bo {
uint32_t flags; /* in, mask of ETNA_SUBMIT_BO_x */
uint32_t handle; /* in, GEM handle */
uint64_t presumed; /* in/out, presumed buffer address */
Presumed address support should never hit the etnaviv driver.
As stated in the cover letter I think presumed support will become possible with MMUv2 and may provide a good optimization there. So I would rather leave this in here and just ignore it for now.
Your statement is funny as you have the following patch in your series: [PATCH RFC 070/111] staging: etnaviv: remove presumption of BO addresses
I have taken the idea of presumption from *drumroll* freedreno, as the etnaviv driver started as a 1:1 copy of freedreno. But what should I say, it should never have been there, and even freedreno does not make use of it (I checked about 2-3 months ago). Should userspace really know anything about physical addresses at all? It would be nice to hear the opinion of a DRM guru here, and maybe Russell.
+};
+/* Each cmdstream submit consists of a table of buffers involved, and
- one or more cmdstream buffers. This allows for conditional execution
- (context-restore), and IB buffers needed for per tile/bin draw cmds.
- */
+struct drm_etnaviv_gem_submit {
uint32_t pipe; /* in, ETNA_PIPE_x */
uint32_t fence; /* out */
uint32_t nr_bos; /* in, number of submit_bo's */
uint32_t nr_cmds; /* in, number of submit_cmd's */
Do we really need to support multiple cmds per submit? I have removed this from my kernel.
We need to support at least one additional context buffer, so I don't see why we shouldn't support n buffers.
Keep it stupid simple. In my libdrm repo, which you hopefully know, I have implemented the buffer handling from the original libetnaviv. We allocate 5 command buffers of a defined size and rotate through them. During command buffer building we reserve space in the stream. If there is not enough space we flush the current buffer, switch to the next one and use it. There is also a way to explicitly flush a command buffer.
For more details see: https://github.com/laanwj/etna_viv/tree/master/src/etnaviv https://github.com/austriancoder/libdrm
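The rotation scheme described above could be sketched roughly as follows. All names here are made up for illustration; this is not the actual libdrm-etnaviv API:

```c
#include <stdint.h>

#define NUM_CMD_BUFS 5
#define CMD_BUF_SIZE 4096 /* bytes per command buffer */

/* Hypothetical sketch of rotating through a fixed set of command
 * buffers, flushing and moving on when a reservation does not fit. */
struct cmd_stream {
	uint32_t buf[NUM_CMD_BUFS][CMD_BUF_SIZE / 4];
	unsigned int cur;     /* index of the active buffer */
	unsigned int offset;  /* used dwords in the active buffer */
	unsigned int flushes; /* how often we submitted so far */
};

static void stream_flush(struct cmd_stream *s)
{
	/* a real implementation would submit s->buf[s->cur] to the
	 * kernel here via the SUBMIT ioctl */
	s->flushes++;
	s->cur = (s->cur + 1) % NUM_CMD_BUFS;
	s->offset = 0;
}

/* Reserve room for ndwords in the active buffer, flushing and
 * rotating first if they do not fit. Returns the reserved space. */
static uint32_t *stream_reserve(struct cmd_stream *s, unsigned int ndwords)
{
	if (s->offset + ndwords > CMD_BUF_SIZE / 4)
		stream_flush(s);
	uint32_t *p = &s->buf[s->cur][s->offset];
	s->offset += ndwords;
	return p;
}
```

A real implementation would additionally have to wait on the fence of a buffer before reusing it, since the GPU may still be reading it.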
uint64_t __user bos; /* in, ptr to array of submit_bo's */
uint64_t __user cmds; /* in, ptr to array of submit_cmd's */
+};
+/* The normal way to synchronize with the GPU is just to CPU_PREP on
- a buffer if you need to access it from the CPU (other cmdstream
- submission from same or other contexts, PAGE_FLIP ioctl, etc, all
- handle the required synchronization under the hood). This ioctl
- mainly just exists as a way to implement the gallium pipe_fence
- APIs without requiring a dummy bo to synchronize on.
- */
+struct drm_etnaviv_wait_fence {
uint32_t pipe; /* in, ETNA_PIPE_x */
uint32_t fence; /* in */
struct drm_etnaviv_timespec timeout; /* in */
+};
+#define DRM_ETNAVIV_GET_PARAM 0x00 +/* placeholder: +#define DRM_MSM_SET_PARAM 0x01
- */
+#define DRM_ETNAVIV_GEM_NEW 0x02 +#define DRM_ETNAVIV_GEM_INFO 0x03 +#define DRM_ETNAVIV_GEM_CPU_PREP 0x04 +#define DRM_ETNAVIV_GEM_CPU_FINI 0x05 +#define DRM_ETNAVIV_GEM_SUBMIT 0x06 +#define DRM_ETNAVIV_WAIT_FENCE 0x07 +#define DRM_ETNAVIV_NUM_IOCTLS 0x08
+#define DRM_IOCTL_ETNAVIV_GET_PARAM DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GET_PARAM, struct drm_etnaviv_param) +#define DRM_IOCTL_ETNAVIV_GEM_NEW DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_NEW, struct drm_etnaviv_gem_new) +#define DRM_IOCTL_ETNAVIV_GEM_INFO DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_INFO, struct drm_etnaviv_gem_info) +#define DRM_IOCTL_ETNAVIV_GEM_CPU_PREP DRM_IOW (DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_CPU_PREP, struct drm_etnaviv_gem_cpu_prep) +#define DRM_IOCTL_ETNAVIV_GEM_CPU_FINI DRM_IOW (DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_CPU_FINI, struct drm_etnaviv_gem_cpu_fini) +#define DRM_IOCTL_ETNAVIV_GEM_SUBMIT DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_SUBMIT, struct drm_etnaviv_gem_submit) +#define DRM_IOCTL_ETNAVIV_WAIT_FENCE DRM_IOW (DRM_COMMAND_BASE + DRM_ETNAVIV_WAIT_FENCE, struct drm_etnaviv_wait_fence)
+#endif /* __ETNAVIV_DRM_H__ */
2.1.4
greets -- Christian Gmeiner, MSc
Am Dienstag, den 07.04.2015, 11:04 +0200 schrieb Christian Gmeiner:
Hi Lucas
2015-04-07 9:35 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
Am Sonntag, den 05.04.2015, 21:26 +0200 schrieb Christian Gmeiner:
2015-04-02 17:29 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
From: Christian Gmeiner christian.gmeiner@gmail.com
This is a consolidation by Russell King of Christian's drm work.
Signed-off-by: Christian Gmeiner christian.gmeiner@gmail.com Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk
[...]
+#endif /* STATE_HI_XML */ diff --git a/include/uapi/drm/etnaviv_drm.h b/include/uapi/drm/etnaviv_drm.h new file mode 100644 index 000000000000..f7b5ac6f3842 --- /dev/null +++ b/include/uapi/drm/etnaviv_drm.h @@ -0,0 +1,225 @@ +/*
- Copyright (C) 2013 Red Hat
- Author: Rob Clark robdclark@gmail.com
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License version 2 as published by
- the Free Software Foundation.
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
- You should have received a copy of the GNU General Public License along with
- this program. If not, see http://www.gnu.org/licenses/.
- */
+#ifndef __ETNAVIV_DRM_H__ +#define __ETNAVIV_DRM_H__
+#include <stddef.h>
+#include <drm/drm.h>
+/* Please note that modifications to all structs defined here are
+ * subject to backwards-compatibility constraints:
+ *  1) Do not use pointers, use uint64_t instead for 32 bit / 64 bit
+ *     user/kernel compatibility
+ *  2) Keep fields aligned to their size
+ *  3) Because of how drm_ioctl() works, we can add new fields at
+ *     the end of an ioctl if some care is taken: drm_ioctl() will
+ *     zero out the new fields at the tail of the ioctl, so a zero
+ *     value should have a backwards compatible meaning.  And for
+ *     output params, userspace won't see the newly added output
+ *     fields.. so that has to be somehow ok.
+ */
+#define ETNA_PIPE_3D 0x00
+#define ETNA_PIPE_2D 0x01
+#define ETNA_PIPE_VG 0x02
+#define ETNA_MAX_PIPES 3
+/* timeouts are specified in clock-monotonic absolute times (to simplify
+ * restarting interrupted ioctls).  The following struct is logically the
+ * same as 'struct timespec' but 32/64b ABI safe.
+ */
+struct drm_etnaviv_timespec {
int64_t tv_sec; /* seconds */
int64_t tv_nsec; /* nanoseconds */
+};
+#define ETNAVIV_PARAM_GPU_MODEL 0x01
+#define ETNAVIV_PARAM_GPU_REVISION 0x02
+#define ETNAVIV_PARAM_GPU_FEATURES_0 0x03
+#define ETNAVIV_PARAM_GPU_FEATURES_1 0x04
+#define ETNAVIV_PARAM_GPU_FEATURES_2 0x05
+#define ETNAVIV_PARAM_GPU_FEATURES_3 0x06
+#define ETNAVIV_PARAM_GPU_FEATURES_4 0x07
+#define ETNAVIV_PARAM_GPU_STREAM_COUNT 0x10
+#define ETNAVIV_PARAM_GPU_REGISTER_MAX 0x11
+#define ETNAVIV_PARAM_GPU_THREAD_COUNT 0x12
+#define ETNAVIV_PARAM_GPU_VERTEX_CACHE_SIZE 0x13
+#define ETNAVIV_PARAM_GPU_SHADER_CORE_COUNT 0x14
+#define ETNAVIV_PARAM_GPU_PIXEL_PIPES 0x15
+#define ETNAVIV_PARAM_GPU_VERTEX_OUTPUT_BUFFER_SIZE 0x16
+#define ETNAVIV_PARAM_GPU_BUFFER_SIZE 0x17
+#define ETNAVIV_PARAM_GPU_INSTRUCTION_COUNT 0x18
+#define ETNAVIV_PARAM_GPU_NUM_CONSTANTS 0x19
+//#define MSM_PARAM_GMEM_SIZE 0x02
+struct drm_etnaviv_param {
uint32_t pipe; /* in, ETNA_PIPE_x */
uint32_t param; /* in, ETNAVIV_PARAM_x */
uint64_t value; /* out (get_param) or in (set_param) */
+};
+/*
+ * GEM buffers:
+ */
+#define ETNA_BO_CMDSTREAM 0x00000001
+#define ETNA_BO_CACHE_MASK 0x000f0000
+/* cache modes */
+#define ETNA_BO_CACHED 0x00010000
+#define ETNA_BO_WC 0x00020000
+#define ETNA_BO_UNCACHED 0x00040000
+struct drm_etnaviv_gem_new {
uint64_t size; /* in */
uint32_t flags; /* in, mask of ETNA_BO_x */
uint32_t handle; /* out */
+};
+struct drm_etnaviv_gem_info {
uint32_t handle; /* in */
uint32_t pad;
uint64_t offset; /* out, offset to pass to mmap() */
+};
+#define ETNA_PREP_READ 0x01
+#define ETNA_PREP_WRITE 0x02
+#define ETNA_PREP_NOSYNC 0x04
+struct drm_etnaviv_gem_cpu_prep {
uint32_t handle; /* in */
uint32_t op; /* in, mask of ETNA_PREP_x */
struct drm_etnaviv_timespec timeout; /* in */
+};
+struct drm_etnaviv_gem_cpu_fini {
uint32_t handle; /* in */
+};
+/*
+ * Cmdstream Submission:
+ */
+/* The value written into the cmdstream is logically:
+ *
+ *   ((relocbuf->gpuaddr + reloc_offset) << shift) | or
+ *
+ * When we have GPU's w/ >32bit ptrs, it should be possible to deal
+ * with this by emit'ing two reloc entries with appropriate shift
+ * values.  Or a new ETNA_SUBMIT_CMD_x type would also be an option.
+ *
+ * NOTE that reloc's must be sorted by order of increasing submit_offset,
+ * otherwise EINVAL.
+ */
+struct drm_etnaviv_gem_submit_reloc {
uint32_t submit_offset; /* in, offset from submit_bo */
uint32_t or; /* in, value OR'd with result */
int32_t shift; /* in, amount of left shift (can be negative) */
uint32_t reloc_idx; /* in, index of reloc_bo buffer */
uint64_t reloc_offset; /* in, offset from start of reloc_bo */
+};
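(For reviewers: the patching rule from the comment above, written out as a plain helper to illustrate the semantics of 'or' and a possibly negative 'shift'. The function name is mine, not part of the interface.)

```c
#include <stdint.h>

/* The value written into the cmdstream is logically
 * ((bo_gpuaddr + reloc_offset) << shift) | or_mask,
 * where a negative shift means a right shift. */
static uint32_t etna_reloc_value(uint64_t bo_gpuaddr, uint64_t reloc_offset,
				 int32_t shift, uint32_t or_mask)
{
	uint64_t addr = bo_gpuaddr + reloc_offset;

	if (shift < 0)
		addr >>= -shift;
	else
		addr <<= shift;

	return (uint32_t)addr | or_mask;
}
```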
+/* submit-types:
+ *   BUF - this cmd buffer is executed normally.
+ *   IB_TARGET_BUF - this cmd buffer is an IB target.  Reloc's are
+ *      processed normally, but the kernel does not setup an IB to
+ *      this buffer in the first-level ringbuffer
+ *   CTX_RESTORE_BUF - only executed if there has been a GPU context
+ *      switch since the last SUBMIT ioctl
+ */
+#define ETNA_SUBMIT_CMD_BUF             0x0001
+#define ETNA_SUBMIT_CMD_IB_TARGET_BUF   0x0002
+#define ETNA_SUBMIT_CMD_CTX_RESTORE_BUF 0x0003
+struct drm_etnaviv_gem_submit_cmd {
uint32_t type; /* in, one of ETNA_SUBMIT_CMD_x */
Do we need different types? I did not use this in my kernel tree.
Please also note the commit cleaning up this API, which changes this a bit.
Ah yes.. I see it.
But yes we need different types. At least the context restore buffer type is needed to properly implement GPU power management and context switching.
What role does GPU power management play here? For context switching it could make sense. But for the 2D core the context is so small that it does not hurt to send it with every command stream. For the 3D core it is much bigger, but this could be done completely in the kernel. Or am I wrong here?
If you power down the GPU you lose the context. You are right that we could save/restore the context from kernel space, but that really takes a toll on CPU time. It is much better to have userspace provide a context buffer to get the GPU into the expected state, as you then only need to splice this into the execution stream to restore the context instead of pushing it with the CPU. Reading back the context on every switch will kill any performance.
uint32_t submit_idx; /* in, index of submit_bo cmdstream buffer */
uint32_t submit_offset; /* in, offset into submit_bo */
Do we really want/need the offset? I have removed it, because it makes things in userspace more complex than needed.
It makes things a bit more complex, but it allows for far more efficient buffer use if you are dealing with a lot of flushes. I don't see why we should prevent userspace from using this optimization.
I tend to get things up and running and do the optimization step only if it is really worth it. Also I like stuff to be stupid simple. There is another interesting fact: flushing the IOMMUv2 is done via the command stream and we need to reserve more space for the tail of the used BO. So if we reserve some space in the command buffer, we have different space limits for the tail depending on the used hardware.
You may be aware that once this is upstream there is no easy way to change the userspace interface anymore. So whatever is left out now is likely to be very hard to reintroduce later.
What's the problem with having a command buffer in the kernel to flush the MMUv2? Why do you need to insert those commands into the userspace command stream?
uint32_t size; /* in, cmdstream size */
uint32_t pad;
uint32_t nr_relocs; /* in, number of submit_reloc's */
uint64_t __user relocs; /* in, ptr to array of submit_reloc's */
+};
+/* Each buffer referenced elsewhere in the cmdstream submit (ie. the
+ * cmdstream buffer(s) themselves or reloc entries) has one (and only
+ * one) entry in the submit->bos[] table.
+ *
+ * As an optimization, the current buffer (gpu virtual address) can be
+ * passed back through the 'presumed' field.  If on a subsequent reloc,
+ * userspace passes back a 'presumed' address that is still valid,
+ * then patching the cmdstream for this entry is skipped.  This can
+ * avoid the kernel needing to map/access the cmdstream bo in the common
+ * case.
+ */
+#define ETNA_SUBMIT_BO_READ  0x0001
+#define ETNA_SUBMIT_BO_WRITE 0x0002
+struct drm_etnaviv_gem_submit_bo {
uint32_t flags; /* in, mask of ETNA_SUBMIT_BO_x */
uint32_t handle; /* in, GEM handle */
uint64_t presumed; /* in/out, presumed buffer address */
presumed support should never hit the etnaviv driver.
As stated in the cover letter I think presumed support will become possible with MMUv2 and may provide a good optimization there. So I would rather leave this in here and just ignore it for now.
Your statement is funny as you have the following patch in your series: [PATCH RFC 070/111] staging: etnaviv: remove presumption of BO addresses
You may notice the difference between interface and implementation.
I have taken the idea of presumption from *drumroll* freedreno, as the etnaviv driver started as a 1:1 copy of freedreno. But what should I say, it should never have been there, and even freedreno does not make use of it (I checked about 2-3 months ago). Should userspace really know anything about physical addresses at all? Would be nice to hear the opinion of a drm guru here and maybe Russell.
A presumed address cannot be a physical address, but is an address in the VM context of that process. Nouveau uses the same thing on NV50+, where you have a proper MMU to protect all GPU accesses. I would expect the same to be true for Vivante MMUv2.
+};
+/* Each cmdstream submit consists of a table of buffers involved, and
+ * one or more cmdstream buffers.  This allows for conditional execution
+ * (context-restore), and IB buffers needed for per tile/bin draw cmds.
+ */
+struct drm_etnaviv_gem_submit {
uint32_t pipe; /* in, ETNA_PIPE_x */
uint32_t fence; /* out */
uint32_t nr_bos; /* in, number of submit_bo's */
uint32_t nr_cmds; /* in, number of submit_cmd's */
Do we really need to support multiple cmds per submit? I have removed this from my kernel.
We need to support at least one additional context buffer, so I don't see why we shouldn't support n buffers.
Keep it stupid simple. In my libdrm repo, which you hopefully know, I have implemented the buffer handling from the original libetnaviv. We allocate 5 command buffers of a defined size and rotate through them. During command buffer building we reserve space in the stream. If there is not enough space we flush the current buffer stream, switch to the next one and use it. Then there is a way to explicitly flush a command buffer.
For more details see: https://github.com/laanwj/etna_viv/tree/master/src/etnaviv https://github.com/austriancoder/libdrm
Same argument as above really. We need at least the context buffer.
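To make the two-buffer case concrete, here is a sketch of a submit that carries a context restore buffer plus the actual cmdstream (the struct mirror, helper name and sizes are illustrative, not part of the patch):

```c
#include <stdint.h>

/* Minimal local mirrors of the UAPI definitions above, for illustration */
#define ETNA_SUBMIT_CMD_BUF             0x0001
#define ETNA_SUBMIT_CMD_CTX_RESTORE_BUF 0x0003

struct drm_etnaviv_gem_submit_cmd {
	uint32_t type;
	uint32_t submit_idx;
	uint32_t submit_offset;
	uint32_t size;
	uint32_t pad;
	uint32_t nr_relocs;
	uint64_t relocs;
};

/* Two cmds in one submit: the kernel would only splice cmds[0] into
 * the execution stream when the context is dirty. */
static void fill_submit_cmds(struct drm_etnaviv_gem_submit_cmd *cmds,
			     uint32_t ctx_size, uint32_t stream_size)
{
	cmds[0] = (struct drm_etnaviv_gem_submit_cmd){
		.type = ETNA_SUBMIT_CMD_CTX_RESTORE_BUF,
		.submit_idx = 0,	/* bos[0]: context restore buffer */
		.size = ctx_size,
	};
	cmds[1] = (struct drm_etnaviv_gem_submit_cmd){
		.type = ETNA_SUBMIT_CMD_BUF,
		.submit_idx = 1,	/* bos[1]: the actual cmdstream */
		.size = stream_size,
	};
}
```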
Regards, Lucas
2015-04-07 11:20 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
Am Dienstag, den 07.04.2015, 11:04 +0200 schrieb Christian Gmeiner:
Hi Lucas
2015-04-07 9:35 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
Am Sonntag, den 05.04.2015, 21:26 +0200 schrieb Christian Gmeiner:
2015-04-02 17:29 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
From: Christian Gmeiner christian.gmeiner@gmail.com
This is a consolidation by Russell King of Christian's drm work.
Signed-off-by: Christian Gmeiner christian.gmeiner@gmail.com Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk
[...]
[...]
+struct drm_etnaviv_gem_submit_cmd {
uint32_t type; /* in, one of ETNA_SUBMIT_CMD_x */
Do we need different types? I did not use this in my kernel tree.
Please also note the commit cleaning up this API, which changes this a bit.
Ah yes.. I see it.
But yes we need different types. At least the context restore buffer type is needed to properly implement GPU power management and context switching.
What role does GPU power management play here? For context switching it could make sense. But for the 2D core the context is so small that it does not hurt to send it with every command stream. For the 3D core it is much bigger, but this could be done completely in the kernel. Or am I wrong here?
If you power down the GPU you lose the context. You are right that we could save/restore the context from kernel space, but that really takes a toll on CPU time. It is much better to have userspace provide a context buffer to get the GPU into the expected state, as you then only need to splice this into the execution stream to restore the context instead of pushing it with the CPU. Reading back the context on every switch will kill any performance.
And for this you need your own command type? The context is nothing special, only load state commands in the command buffer. You can have an internal representation of the context in userspace (as libetnaviv does right now) and work with it. Then if you want to submit some render calls etc. you can check if the state is dirty and submit the whole context or the changed values. So I am not sure there is a need for a context buffer type, as it is nothing special.
uint32_t submit_idx; /* in, index of submit_bo cmdstream buffer */
uint32_t submit_offset; /* in, offset into submit_bo */
Do we really want/need the offset? I have removed it, because it makes things in userspace more complex than needed.
It makes things a bit more complex, but it allows for far more efficient buffer use if you are dealing with a lot of flushes. I don't see why we should prevent userspace from using this optimization.
I tend to get things up and running and do the optimization step only if it is really worth it. Also I like stuff to be stupid simple. There is another interesting fact: flushing the IOMMUv2 is done via the command stream and we need to reserve more space for the tail of the used BO. So if we reserve some space in the command buffer, we have different space limits for the tail depending on the used hardware.
You may be aware that once this is upstream there is no easy way to change the userspace interface anymore. So whatever is left out now is likely to be very hard to reintroduce later.
I am aware of this, and it gets even harder if you/we want to skip staging. This is why I bug you with all those questions, as I am also interested in getting it right.
What's the problem with having a command buffer in the kernel to flush the MMUv2? Why do you need to insert those commands into the userspace command stream?
There is no problem - ignore my concerns.
[...]
uint64_t presumed; /* in/out, presumed buffer address */
presumed support should never hit the etnaviv driver.
As stated in the cover letter I think presumed support will become possible with MMUv2 and may provide a good optimization there. So I would rather leave this in here and just ignore it for now.
Your statement is funny as you have the following patch in your series: [PATCH RFC 070/111] staging: etnaviv: remove presumption of BO addresses
You may notice the difference between interface and implementation.
I have taken the idea of presumption from *drumroll* freedreno, as the etnaviv driver started as a 1:1 copy of freedreno. But what should I say, it should never have been there, and even freedreno does not make use of it (I checked about 2-3 months ago). Should userspace really know anything about physical addresses at all? Would be nice to hear the opinion of a drm guru here and maybe Russell.
A presumed address cannot be a physical address, but is an address in the VM context of that process.
That is correct.
Nouveau uses the same thing on NV50+ where you have a proper MMU to protect all GPU accesses. I would expect the same thing to be true for Vivante MMUv2.
Okay - so for MMUv1 this will be a no-op. I can't wait to see your userspace.
[...]
uint32_t nr_cmds; /* in, number of submit_cmd's */
Do we really need to support multiple cmds per submit? I have removed this from my kernel.
We need to support at least one additional context buffer, so I don't see why we shouldn't support n buffers.
Keep it stupid simple. In my libdrm repo, which you hopefully know, I have implemented the buffer handling from the original libetnaviv. We allocate 5 command buffers of a defined size and rotate through them. During command buffer building we reserve space in the stream. If there is not enough space we flush the current buffer stream, switch to the next one and use it. Then there is a way to explicitly flush a command buffer.
For more details see: https://github.com/laanwj/etna_viv/tree/master/src/etnaviv https://github.com/austriancoder/libdrm
Same argument as above really. We need at least the context buffer.
I am still not sure about this.
greets -- Christian Gmeiner, MSc
Am Dienstag, den 07.04.2015, 11:40 +0200 schrieb Christian Gmeiner:
2015-04-07 11:20 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
Am Dienstag, den 07.04.2015, 11:04 +0200 schrieb Christian Gmeiner:
Hi Lucas
2015-04-07 9:35 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
Am Sonntag, den 05.04.2015, 21:26 +0200 schrieb Christian Gmeiner:
2015-04-02 17:29 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
From: Christian Gmeiner christian.gmeiner@gmail.com
This is a consolidation by Russell King of Christian's drm work.
Signed-off-by: Christian Gmeiner christian.gmeiner@gmail.com Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk
[...]
[...]
+struct drm_etnaviv_gem_submit_cmd {
uint32_t type; /* in, one of ETNA_SUBMIT_CMD_x */
Do we need different types? I did not use this in my kernel tree.
Please also note the commit cleaning up this API, which changes this a bit.
Ah yes.. I see it.
But yes we need different types. At least the context restore buffer type is needed to properly implement GPU power management and context switching.
What role does GPU power management play here? For context switching it could make sense. But for the 2D core the context is so small that it does not hurt to send it with every command stream. For the 3D core it is much bigger, but this could be done completely in the kernel. Or am I wrong here?
If you power down the GPU you lose the context. You are right that we could save/restore the context from kernel space, but that really takes a toll on CPU time. It is much better to have userspace provide a context buffer to get the GPU into the expected state, as you then only need to splice this into the execution stream to restore the context instead of pushing it with the CPU. Reading back the context on every switch will kill any performance.
And for this you need your own command type? The context is nothing special, only load state commands in the command buffer. You can have an internal representation of the context in userspace (as libetnaviv does right now) and work with it. Then if you want to submit some render calls etc. you can check if the state is dirty and submit the whole context or the changed values. So I am not sure there is a need for a context buffer type, as it is nothing special.
How would userspace know if another process, a GPU hang, or a power management event has dirtied the state of the GPU? Only the kernel knows if the state of the GPU has changed since the last submit of this process.
In the common case, when nothing has disturbed the context, we don't want to insert the context buffer, as we really want minimal state updates. We need a way to tell the kernel which command buffer is the context buffer, so the kernel only splices this buffer into the stream if the context is dirty.
Regards, Lucas
Hi Lucas.
2015-04-07 11:47 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
Am Dienstag, den 07.04.2015, 11:40 +0200 schrieb Christian Gmeiner:
2015-04-07 11:20 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
Am Dienstag, den 07.04.2015, 11:04 +0200 schrieb Christian Gmeiner:
Hi Lucas
2015-04-07 9:35 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
Am Sonntag, den 05.04.2015, 21:26 +0200 schrieb Christian Gmeiner:
2015-04-02 17:29 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
From: Christian Gmeiner christian.gmeiner@gmail.com
This is a consolidation by Russell King of Christian's drm work.
[...]
of ETNA_PREP_x */ > + struct drm_etnaviv_timespec timeout; /* in */ > +}; > + > +struct drm_etnaviv_gem_cpu_fini { > + uint32_t handle; /* in */ > +}; > + > +/* > + * Cmdstream Submission: > + */ > + > +/* The value written into the cmdstream is logically: > + * > + * ((relocbuf->gpuaddr + reloc_offset) << shift) | or > + * > + * When we have GPU's w/ >32bit ptrs, it should be possible to deal > + * with this by emit'ing two reloc entries with appropriate shift > + * values. Or a new ETNA_SUBMIT_CMD_x type would also be an option. > + * > + * NOTE that reloc's must be sorted by order of increasing submit_offset, > + * otherwise EINVAL. > + */ > +struct drm_etnaviv_gem_submit_reloc { > + uint32_t submit_offset; /* in, offset from submit_bo */ > + uint32_t or; /* in, value OR'd with result */ > + int32_t shift; /* in, amount of left shift (can be negative) */ > + uint32_t reloc_idx; /* in, index of reloc_bo buffer */ > + uint64_t reloc_offset; /* in, offset from start of reloc_bo */ > +}; > + > +/* submit-types: > + * BUF - this cmd buffer is executed normally. > + * IB_TARGET_BUF - this cmd buffer is an IB target. Reloc's are > + * processed normally, but the kernel does not setup an IB to > + * this buffer in the first-level ringbuffer > + * CTX_RESTORE_BUF - only executed if there has been a GPU context > + * switch since the last SUBMIT ioctl > + */ > +#define ETNA_SUBMIT_CMD_BUF 0x0001 > +#define ETNA_SUBMIT_CMD_IB_TARGET_BUF 0x0002 > +#define ETNA_SUBMIT_CMD_CTX_RESTORE_BUF 0x0003 > +struct drm_etnaviv_gem_submit_cmd { > + uint32_t type; /* in, one of ETNA_SUBMIT_CMD_x */
Do we need different types? I did not use this in my kernel tree.
Please also note the commit cleaning up this API, which changes this a bit.
Ah yes.. I see it.
But yes we need different types. At least the context restore buffer type is needed to properly implement GPU power management and context switching.
What role does GPU power management play here? For context switching it could make sense. But for the 2D core the context is so small that it does not hurt to send it with every command stream. For the 3D core it is much bigger, but this could be done completely in the kernel. Or am I wrong here?
If you power down the GPU you lose the context. You are right that we could save/restore the context from kernel space, but that really takes a toll on CPU time. It is much better to have userspace provide a context buffer to get the GPU into the expected state, as you then only need to splice this into the execution stream to restore the context instead of pushing it with the CPU. Reading back the context on every switch will kill any performance.
And for this you need a command type of its own? The context is nothing special, only load state commands in the command buffer. You can have an internal representation of the context in userspace (as libetnaviv does right now) and work with it. Then if you want to submit some render calls etc. you can check whether the state is dirty and submit the whole context or just the changed values. So I am not sure there is a need for a context buffer type, as it is nothing special.
How would userspace know if another process, a GPU hang, or a power management event has dirtied the state of the GPU? Only the kernel knows whether the GPU state has changed since the last submit of this process.
Okay got it.
In the common case when nothing has disturbed the context we don't want to insert the context buffer, as we really want minimal state updates in that case. We need a way to tell the kernel which command buffer is the context buffer, so the kernel only splices this buffer into the stream if the context is dirty.
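As an editorial aside, the kernel-side dirty tracking described here can be sketched in plain C. All names below (`etna_pipe`, `context_needs_restore`, the individual flags) are hypothetical illustrations, not the actual driver's code:

```c
#include <stdbool.h>
#include <stdint.h>

/* Per-pipe bookkeeping the kernel could use to decide whether the
 * submitted context restore buffer must be spliced into the stream. */
struct etna_pipe {
	uint32_t last_context;	/* id of the context that last touched the GPU */
	bool power_cycled;	/* set by runtime PM when the core lost state */
	bool hang_recovered;	/* set after a GPU reset */
};

/* Returns true if the context restore buffer must be executed before the
 * normal command buffer of this submit. */
static bool context_needs_restore(struct etna_pipe *pipe, uint32_t ctx_id)
{
	bool dirty = pipe->last_context != ctx_id ||
		     pipe->power_cycled || pipe->hang_recovered;

	/* after this submit, the hardware state belongs to ctx_id again */
	pipe->last_context = ctx_id;
	pipe->power_cycled = false;
	pipe->hang_recovered = false;
	return dirty;
}
```

The point is that only these three conditions, all invisible to userspace, decide whether the context buffer is used; userspace just supplies it on every submit.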
So the context buffer holds the full GPU context and the kernel does the partial update of the current hardware context. This makes the userspace a lot simpler, as we can send the whole context and do not need to take care of partial updates.
I like the idea.
greets -- Christian Gmeiner, MSc
On Tuesday, 07.04.2015 at 11:58 +0200, Christian Gmeiner wrote:
Hi Lucas.
2015-04-07 11:47 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
On Tuesday, 07.04.2015 at 11:40 +0200, Christian Gmeiner wrote:
2015-04-07 11:20 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
On Tuesday, 07.04.2015 at 11:04 +0200, Christian Gmeiner wrote:
Hi Lucas
2015-04-07 9:35 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
I still think this is not quite what I wanted to say with that.
You are right that I want to kick out all this state tracking on individual registers. But I don't want to move it into the kernel; I want to scrap it altogether.
Let me try to explain this in a bit more detail:
First let's postulate that we already have pretty good dirty state tracking on the gallium state object level. I don't think it buys us anything to do more fine grained state tracking.
The gallium userspace driver only pushes state _changes_ to a normal command buffer. For example this means that if nothing has changed since the last submit of this process except some vertex buffers, the only thing contained in the command buffer will be a few SET_STATEs for the vertex buffer addresses and the DRAW call.
The context buffer in contrast holds the full GPU context as of the last flush. So when you flush the stream, Mesa dumps the full Gallium state into the context buffer.
Now if you submit both buffers together, the kernel can check if your context is still valid (nothing else has touched the GPU since your last submit) and in that case only splice the normal command BO into the stream. If something has changed since your last submit, the kernel will splice the context buffer first, then the command buffer. This way you always get a predictable state without tracking any GPU state on a register level.
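To make the proposed split concrete, here is a small self-contained sketch of the userspace side of this scheme. Everything here (`toy_ctx`, `set_state`, `flush_stream`, the state count) is made up for illustration and is not part of any real driver:

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_STATES 4	/* toy number of state registers */

struct toy_ctx {
	uint32_t state[NUM_STATES];	/* gallium-level shadow state */
	bool dirty[NUM_STATES];
	uint32_t cmd[NUM_STATES];	/* normal cmd buffer: changes only */
	int cmd_len;
	uint32_t ctxbuf[NUM_STATES];	/* context buffer: full state dump */
};

/* record a state change; unchanged state stays out of the cmd buffer */
static void set_state(struct toy_ctx *c, int idx, uint32_t val)
{
	if (c->state[idx] == val)
		return;
	c->state[idx] = val;
	c->dirty[idx] = true;
}

/* on flush: changed values go into the normal command buffer, while the
 * full state is dumped into the context buffer for the kernel to splice
 * in only when the hardware context is dirty */
static void flush_stream(struct toy_ctx *c)
{
	c->cmd_len = 0;
	for (int i = 0; i < NUM_STATES; i++) {
		if (c->dirty[i])
			c->cmd[c->cmd_len++] = c->state[i];
		c->ctxbuf[i] = c->state[i];
		c->dirty[i] = false;
	}
}
```

Note how userspace never needs to know whether the context was disturbed; it unconditionally provides the full dump and lets the kernel decide.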
Regards, Lucas
My memory of trying to make this work in some other driver in some other time is pretty bad.
I think I realised that you never had the "last known good" state, you'd just generate the latest state,
so it was really hard to split things into two state buffers, one containing the "current" state and one with state updates; in the end I gave up and just submitted everything every time.
I'm not saying it's not possible, but the userspace driver model didn't lend itself to making it easy.
Dave.
On Wednesday, 08.04.2015 at 10:13 +1000, Dave Airlie wrote:
Hm, this together with the argument that we have to push out all state with relocs anyway on a single submit really makes me think we should do away with the context buffer stuff and just dirty all state on flush in the userspace driver.
Regards, Lucas
On Tue, Apr 07, 2015 at 11:20:10AM +0200, Lucas Stach wrote:
On Tuesday, 07.04.2015 at 11:04 +0200, Christian Gmeiner wrote:
What role does GPU power management play here? For context switching it could make sense. But for the 2D core the context is so small that it does not hurt to send it with every command stream. For the 3D core it is much bigger, but this could be done completely in the kernel. Or am I wrong here?
If you power down the GPU you lose the context. You are right that we could save/restore the context from kernel space, but that really takes a toll on CPU time. It is much better to have userspace provide a context buffer to get the GPU into the expected state, as you then only need to splice this into the execution stream to restore the context instead of pushing it with the CPU. Reading back the context on every switch will kill any performance.
For both Vivante and Etnaviv, it's already the accepted way that 2D cores need the full context loaded for each operation, and the few userspace bits we have comply with that today.
With Etnaviv DRM, we already must ensure that the command buffer submitted to the GPU contains all references to buffer objects to be operated on by that command block - or to put it another way, we need to ensure that each GPU operation is complete inside the submitted command buffer.
The 2D core is rather messy as far as which bits of state need to be preserved, especially when you consider that you have the 2D drawing and blit ops, as well as a video rasteriser which shares some state registers for the destination, but uses different state registers for the source. It quickly becomes rather messy to keep track of the GPU state.
In any case, the amount of state which needs to be loaded for 2D operations is small, so I think it really makes sense to require userspace to only submit complete, fully described 2D operations within a single command buffer.
I tend to get things up and running first and do the optimization step only if it is really worth it. Also I like stuff to be stupid simple. There is another interesting fact: flushing the IOMMUv2 is done via the command stream, and we need to reserve more space for the tail of the used BO. So if we reserve some space in the command buffer, we have different space limits for the tail depending on the hardware used.
I would much rather we only appended LINK commands to the submitted command BO, and added whatever GPU management commands to either the ring buffer (which is easy) or a separate GPU management command buffer. Given that I'm already doing this to flush the V1 MMU in the kernel ring buffer, this is the option I prefer.
You may be aware that once this is upstream there is no easy way to change the userspace interface anymore. So whatever is left out now is likely to be very hard to reintroduce later.
Indeed - we need to agree on what the submitted command buffer will contain, and how much space to enforce at the end of the command buffer.
The minimum space is one LINK command, which is 8 bytes. As long as we can add a LINK command, we can redirect the GPU's execution elsewhere to do whatever other operations we want to do.
I think the only danger there is if Vivante produce a GPU with 64-bit addressing for the command stream - if they do, commands like LINK will most likely change format, and possibly would be a different number of bits.
The simple solution to this would be to introduce into the API a property (like is done for the feature bits) which tells userspace the minimum number of bytes which must be reserved at the end of the command buffer. If we need to change that in the future, we have the flexibility to do so.
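The proposed property boils down to a simple check on submit. A hedged sketch; the parameter name and the sizes here are illustrative assumptions, not anything defined by the driver:

```c
#include <stdbool.h>
#include <stdint.h>

/* On current hardware a LINK is 8 bytes. The idea is that the kernel
 * reports the required tail reservation via a read-only parameter (much
 * like the feature bits), so the format can grow without an ABI break. */
#define HYPOTHETICAL_PARAM_CMDBUF_TAIL	8	/* bytes, assumed value */

/* Reject a submit whose command stream leaves less than the required
 * tail free at the end of its buffer object. */
static bool submit_fits(uint32_t bo_size, uint32_t stream_size,
			uint32_t required_tail)
{
	if (stream_size > bo_size)
		return false;
	return bo_size - stream_size >= required_tail;
}
```

Userspace would query the parameter once at startup and size its reservations accordingly; the kernel enforces the same value on every submit.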
What's the problem with having a command buffer in the kernel to flush the MMUv2? Why do you need to insert those commands into the userspace command stream?
That's certainly where I would put it.
+#define ETNA_SUBMIT_BO_READ 0x0001
+#define ETNA_SUBMIT_BO_WRITE 0x0002
+struct drm_etnaviv_gem_submit_bo {
+	uint32_t flags; /* in, mask of ETNA_SUBMIT_BO_x */
+	uint32_t handle; /* in, GEM handle */
+	uint64_t presumed; /* in/out, presumed buffer address */
Presumed support should never hit the etnaviv driver.
As stated in the cover letter I think presumed support will become possible with MMUv2 and may provide a good optimization there. So I would rather leave this in here and just ignore it for now.
Could we rename this member 'reserved' if it's something that we think we're going to implement in the near future - but also please add a flag which indicates whether the presumed address is present or not. Zero _can_ be a valid address too!
A presumed address cannot be a physical address, but is an address in the VM context of that process. Nouveau uses the same thing on NV50+ where you have a proper MMU to protect all GPU accesses. I would expect the same thing to be true for Vivante MMUv2.
I know that there are very strong opinions about exposing _physical_ addresses to userspace (David, for example, doesn't like it one bit.) If it's a GPU address, then that's less useful to userspace.
However, that data has to be treated as suspect coming from userspace.
You still need to look up the buffer object in the kernel, so that you can manage the buffer object's state. If you have the buffer object's state, then you most likely have easy access to its GPU mapping, which means you can retrieve the GPU address, which you can then use to validate the address passed from userspace... but if you've found the GPU address via this method, you haven't saved anything.
An alternative approach would be to look up the presumed address in (eg) an rbtree to locate the buffer object's state, which would save the lookup by object ID - but does this actually save anything, or does it just add complexity and additional kernel processing?
I'm not sure.
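The treat-as-suspect rule above can be sketched directly. This is an illustrative userspace model of the kernel-side check, with made-up names (`toy_bo`, `presumed_ok`, `TOY_BO_PRESUMED_VALID`); it also shows the presence flag suggested earlier, since zero can be a valid address:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct toy_bo { uint32_t handle; uint64_t gpu_addr; };

#define TOY_BO_PRESUMED_VALID 0x1	/* userspace actually filled 'presumed' */

/* The GEM handle remains the authoritative way to find the BO. */
static const struct toy_bo *toy_lookup(const struct toy_bo *table, int n,
				       uint32_t handle)
{
	for (int i = 0; i < n; i++)
		if (table[i].handle == handle)
			return &table[i];
	return NULL;	/* unknown handle */
}

/* The presumed address is only ever cross-checked against the kernel's
 * own mapping, never trusted on its own. */
static bool presumed_ok(const struct toy_bo *table, int n, uint32_t handle,
			uint32_t flags, uint64_t presumed)
{
	const struct toy_bo *bo = toy_lookup(table, n, handle);
	if (!bo)
		return false;
	if (!(flags & TOY_BO_PRESUMED_VALID))
		return true;	/* userspace made no claim to verify */
	return bo->gpu_addr == presumed;
}
```

This mirrors Russell's observation: once the handle lookup has happened, the GPU address is already at hand, so the presumed value saves nothing on this path.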
Keep it stupid simple. In my libdrm repo, which you hopefully know, I have implemented the buffer handling from the original libetnaviv. We allocate 5 command buffers of a defined size and rotate through them. During command buffer building we reserve space in the stream. If there is not enough space, we flush the current buffer stream, switch to the next one and use it. Then there is a way to explicitly flush a command buffer.
For more details see: https://github.com/laanwj/etna_viv/tree/master/src/etnaviv https://github.com/austriancoder/libdrm
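A minimal model of the rotating command buffer scheme described above, with made-up names (`toy_stream`, `toy_reserve`, `toy_flush`). It assumes the next buffer is already idle when rotation happens; a real implementation would have to wait for the GPU to finish with it:

```c
#include <stdint.h>

#define TOY_NUM_BUFS 5
#define TOY_BUF_SIZE 4096

struct toy_stream {
	int cur;			/* index of the active buffer */
	uint32_t used[TOY_NUM_BUFS];	/* bytes consumed per buffer */
	int flushes;			/* counts submits; stands in for the ioctl */
};

static void toy_flush(struct toy_stream *s)
{
	s->flushes++;			/* submit the current buffer */
	s->cur = (s->cur + 1) % TOY_NUM_BUFS;
	s->used[s->cur] = 0;		/* assume the next buffer is idle */
}

/* returns the offset of the reserved space in the (possibly new) buffer */
static uint32_t toy_reserve(struct toy_stream *s, uint32_t bytes)
{
	if (s->used[s->cur] + bytes > TOY_BUF_SIZE)
		toy_flush(s);
	uint32_t off = s->used[s->cur];
	s->used[s->cur] += bytes;
	return off;
}
```

The tail-reservation question from earlier in the thread would fold into the bounds check in `toy_reserve`.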
Same argument as above really. We need at least the context buffer.
An important question is whether the context buffer, built by userspace, should be submitted as one of these command buffers, or kept separate so the kernel can keep track of it and decide whether or not to use it according to the state it's tracking.
Another point to bring up here is about how command buffers are submitted.
Consider this scenario:
- Userspace creates a command buffer, and arranges for the initial commands to be time consuming (eg, long WAIT commands.) It fills the rest of the buffer with dummy LOAD STATE commands.
- Userspace submits this, the kernel validates the command buffer, and submits it to the GPU. The GPU starts executing the buffer.
- Userspace, which still has access to the command buffer, overwrites the LOAD STATE commands with malicious GPU commands.
- GPU executes malicious GPU commands.
This brings up several questions:
1. Do we care about this?
2. If we do care, should we insist that a command buffer is not mapped in userspace when it is submitted, and prevent an in-use command buffer being mapped?
3. If we don't care, what's the point of validating the supplied command buffer?
(2) would be quite an API change over what we have today, and would introduce some overhead, though that could be handled in the userspace library (eg, if we're modelling on etnaviv's five command buffer scheme, we could copy the command buffer immediately before submission.)
Given this, I think (3) has some value irrespective of the outcome of (1) as it gives us a way to catch silly errors from userspace before they hit the GPU and become a problem.
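The copy-before-submission mitigation mentioned under (2) is simple to illustrate. This is a userspace model with a hypothetical helper name, not the driver's code; the key property is that writes to the original buffer after the snapshot cannot reach what gets validated and executed:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Snapshot the command buffer at submit time. Validation and GPU
 * execution would both operate on the returned shadow copy, which
 * userspace has no mapping of. Caller frees the shadow. */
static uint32_t *shadow_for_submit(const uint32_t *user_buf, size_t words)
{
	uint32_t *shadow = malloc(words * sizeof(*shadow));
	if (shadow)
		memcpy(shadow, user_buf, words * sizeof(*shadow));
	return shadow;
}
```

With such a copy in place, validation also regains its meaning: what is checked is exactly what the GPU will run.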
On Tuesday, 07.04.2015 at 11:46 +0100, Russell King - ARM Linux wrote:
On Tue, Apr 07, 2015 at 11:20:10AM +0200, Lucas Stach wrote:
On Tuesday, 07.04.2015 at 11:04 +0200, Christian Gmeiner wrote:
What role does GPU power management play here? For context switching it could make sense. But for the 2D core the context is so small that it does not hurt to send it with every command stream. For the 3D core it is much bigger, but this could be done completely in the kernel. Or am I wrong here?
If you power down the GPU you lose the context. You are right that we could save/restore the context from kernel space, but that really takes a toll on CPU time. It is much better to have userspace provide a context buffer to get the GPU into the expected state, as you then only need to splice this into the execution stream to restore the context instead of pushing it with the CPU. Reading back the context on every switch will kill any performance.
For both Vivante and Etnaviv, it's already the accepted way that 2D cores need the full context loaded for each operation, and the few userspace bits we have comply with that today.
With Etnaviv DRM, we already must ensure that the command buffer submitted to the GPU contains all references to buffer objects to be operated on by that command block - or to put it another way, we need to ensure that each GPU operation is complete inside the submitted command buffer.
Right, that's one thing I really hadn't thought through until now. So this means we must at least emit all state that contains relocs, which may further reduce the possibility of doing minimal state updates. Urghs.
The 2D core is rather messy as far as which bits of state need to be preserved, especially when you consider that you have the 2D drawing and blit ops, as well as a video rasteriser which shares some state registers for the destination, but uses different state registers for the source. It quickly becomes rather messy to keep track of the GPU state.
In any case, the amount of state which needs to be loaded for 2D operations is small, so I think it really makes sense to require userspace to only submit complete, fully described 2D operations within a single command buffer.
I tend to get things up and running first and do the optimization step only if it is really worth it. Also I like stuff to be stupid simple. There is another interesting fact: flushing the IOMMUv2 is done via the command stream, and we need to reserve more space for the tail of the used BO. So if we reserve some space in the command buffer, we have different space limits for the tail depending on the hardware used.
I would much rather we only appended LINK commands to the submitted command BO, and added whatever GPU management commands to either the ring buffer (which is easy) or a separate GPU management command buffer. Given that I'm already doing this to flush the V1 MMU in the kernel ring buffer, this is the option I prefer.
You may be aware that once this is upstream there is no easy way to change the userspace interface anymore. So whatever is left out now is likely to be very hard to reintroduce later.
Indeed - we need to agree on what the submitted command buffer will contain, and how much space to enforce at the end of the command buffer.
The minimum space is one LINK command, which is 8 bytes. As long as we can add a LINK command, we can redirect the GPU's execution elsewhere to do whatever other operations we want to do.
I think the only danger there is if Vivante produce a GPU with 64-bit addressing for the command stream - if they do, commands like LINK will most likely change format, and possibly would be a different number of bits.
The simple solution to this would be to introduce into the API a property (as is done for the feature bits) which tells userspace the minimum number of bytes that must be reserved at the end of the command buffer. If we need to change that in the future, we have the flexibility to do so.
Yes, that seems to be the straightforward solution. Export a property from the kernel to tell userspace how much free space is needed at the end of the buffer, and reject any buffer violating this.
Though I agree that we should not overuse this and try to do as much as possible outside of the user command streams.
What's the problem with having a command buffer in the kernel to flush the MMUv2? Why do you need to insert those commands into the userspace command stream?
That's certainly where I would put it.
+#define ETNA_SUBMIT_BO_READ  0x0001
+#define ETNA_SUBMIT_BO_WRITE 0x0002
+struct drm_etnaviv_gem_submit_bo {
+	uint32_t flags;    /* in, mask of ETNA_SUBMIT_BO_x */
+	uint32_t handle;   /* in, GEM handle */
+	uint64_t presumed; /* in/out, presumed buffer address */
Presumed support should never hit the etnaviv driver.
As stated in the cover letter I think presumed support will become possible with MMUv2 and may provide a good optimization there. So I would rather leave this in here and just ignore it for now.
Could we rename this member to 'reserved' if it's something we think we're going to implement in the near future - but please also add a flag which indicates whether the presumed address is present or not. Zero _can_ be a valid address too!
A presumed address can not be a physical address, but is an address in the VM context of that process. Nouveau uses the same thing on NV50+ where you have a proper MMU to protect all GPU accesses. I would expect the same thing to be true for Vivante MMUv2.
I know that there are very strong opinions about exposing _physical_ addresses to userspace (David, for example, doesn't like it one bit.) If it's a GPU address, then that's less useful to userspace.
A GPU address with per-process pagetables and full translation support on the GPU MMU is as good as a CPU virtual address. I wouldn't expect any objections against those. MMUv1 is more like a GART window and doesn't provide full translation, so we shouldn't trust userspace with MMUv1.
However, that data has to be treated as suspect coming from userspace.
You still need to look up the buffer object in the kernel, so that you can manage the buffer object's state. If you have the buffer object's state, then you most likely have easy access to its GPU mapping, which means you can retrieve the GPU address, which you can then use to validate the address passed from userspace... but if you've found the GPU address via this method, you haven't saved anything.
An alternative approach would be to look up the presumed address in (eg) an rbtree to locate the buffer object's state, which would save the lookup by object ID - but does that actually save anything, or does it just add complexity and additional kernel processing?
I'm not sure.
I'm not sure about this. With MMUv2 and per-process pagetables there really is no need to validate the addresses from userspace as each process is only able to shoot itself in the foot.
Keep it stupid simple. In my libdrm repo, which you hopefully know, I have implemented the buffer handling from the original libetnaviv. We allocate 5 command buffers of a defined size and rotate through them. During command buffer building we reserve space in the stream. If there is not enough space we flush the current buffer stream, switch to the next one and use it. Then there is a way to explicitly flush a command buffer.
For more details see: https://github.com/laanwj/etna_viv/tree/master/src/etnaviv https://github.com/austriancoder/libdrm
Same argument as above really. We need at least the context buffer.
An important question is whether the context buffer, built by userspace, should be submitted as one of these command buffers, or kept separate so the kernel can keep track of it and decide whether or not to use it according to the state it's tracking.
Another point to bring up here is about how command buffers are submitted.
Consider this scenario:
- Userspace creates a command buffer, and arranges for the initial commands to be time consuming (eg, long WAIT commands.) It fills the rest of the buffer with dummy LOAD STATE commands.
- Userspace submits this, the kernel validates the command buffer, and submits it to the GPU. The GPU starts executing the buffer.
- Userspace, which still has access to the command buffer, overwrites the LOAD STATE commands with malicious GPU commands.
- GPU executes malicious GPU commands.
This brings up several questions:
- Do we care about this?
- If we do care, should we insist that a command buffer is not mapped in userspace when it is submitted, and prevent an in-use command buffer being mapped?
- If we don't care, what's the point of validating the supplied command buffer?
(2) would be quite an API change over what we have today, and introduce an amount of overhead, though something which could be handled in the userspace library (eg, if we're modelling on etnaviv's five command buffer model, we could copy the command buffer immediately before submission.)
Given this, I think (3) has some value irrespective of the outcome of (1) as it gives us a way to catch silly errors from userspace before they hit the GPU and become a problem.
I think we should care. I fail to see how this would have to be an API change. Why can't we just hand out buffers to userspace like we do now and copy their contents into an internal buffer as we validate and apply relocs? This model may be beneficial even without the security benefits, as we could hand out cached buffers to userspace, so we can read them more efficiently for validation and stuff things into an internal write-combined buffer.
Regards, Lucas
On Tue, Apr 07, 2015 at 02:52:31PM +0200, Lucas Stach wrote:
Am Dienstag, den 07.04.2015, 11:46 +0100 schrieb Russell King - ARM Linux:
For both Vivante and Etnaviv, it's already the accepted way that 2D cores need the full context loaded for each operation, and the few userspace bits we have comply with that today.
With Etnaviv DRM, we already must ensure that the command buffer submitted to the GPU contains all references to buffer objects to be operated on by that command block - or to put it another way, we need to ensure that each GPU operation is complete inside the submitted command buffer.
Right, that's one thing that I really hadn't thought through until now. So this means we must at least emit all states that contain relocs, which may further reduce the possibility of doing minimal state updates. Urghs.
Before trying hard to minimize the amount of state emitted, I would like to encourage you to actually benchmark this and see if it really makes a difference. I was once convinced it would be useful, but a simple benchmark proved me wrong. For example, you could draw a simple VBO over and over with a bunch of states, versus submitting the same VBO over and over and submitting the states once.
Turns out on other hardware the cost of tracking dirty state (CPU overhead) was more important than the very small fraction (I think it was barely significant with respect to the standard deviation) of performance improvement.
[...]
An important question is whether the context buffer, built by userspace, should be submitted as one of these command buffers, or kept separate so the kernel can keep track of it and decide whether or not to use it according to the state it's tracking.
Another point to bring up here is about how command buffers are submitted.
Consider this scenario:
- Userspace creates a command buffer, and arranges for the initial commands to be time consuming (eg, long WAIT commands.) It fills the rest of the buffer with dummy LOAD STATE commands.
- Userspace submits this, the kernel validates the command buffer, and submits it to the GPU. The GPU starts executing the buffer.
- Userspace, which still has access to the command buffer, overwrites the LOAD STATE commands with malicious GPU commands.
- GPU executes malicious GPU commands.
This brings up several questions:
- Do we care about this?
- If we do care, should we insist that a command buffer is not mapped in userspace when it is submitted, and prevent an in-use command buffer being mapped?
- If we don't care, what's the point of validating the supplied command buffer?
(2) would be quite an API change over what we have today, and introduce an amount of overhead, though something which could be handled in the userspace library (eg, if we're modelling on etnaviv's five command buffer model, we could copy the command buffer immediately before submission.)
Given this, I think (3) has some value irrespective of the outcome of (1) as it gives us a way to catch silly errors from userspace before they hit the GPU and become a problem.
I think we should care. I fail to see how this would have to be an API change. Why can't we just hand out buffers to userspace like we do now and copy their contents into an internal buffer as we validate and apply relocs? This model may be beneficial even without the security benefits, as we could hand out cached buffers to userspace, so we can read them more efficiently for validation and stuff things into an internal write-combined buffer.
You should definitely care about that. For instance in the radeon driver, for GPUs we cannot trust (i.e. GPUs where userspace could access physical memory through the GPU) we copy the userspace command buffer while validating it inside the kernel. Yes, there is an overhead for doing that, but it is the only way to have security on such GPUs.
In case you have a virtual address space and userspace cannot reprogram it from the command buffer, then yes, you can directly execute the user command buffer without copying or checking it.
I would strongly advise not to give up on security.
Cheers, Jérôme
Avoids leaking developer directories into the generated headers and adds some defines needed for proper MMU flush.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
---
 drivers/staging/etnaviv/cmdstream.xml.h |  6 +++---
 drivers/staging/etnaviv/common.xml.h    | 10 +++-------
 drivers/staging/etnaviv/state.xml.h     | 21 ++++++++++++---------
 drivers/staging/etnaviv/state_hi.xml.h  | 18 ++++++++++--------
 4 files changed, 28 insertions(+), 27 deletions(-)
diff --git a/drivers/staging/etnaviv/cmdstream.xml.h b/drivers/staging/etnaviv/cmdstream.xml.h index 844f82977e3e..8c44ba9a694e 100644 --- a/drivers/staging/etnaviv/cmdstream.xml.h +++ b/drivers/staging/etnaviv/cmdstream.xml.h @@ -8,10 +8,10 @@ http://0x04.net/cgit/index.cgi/rules-ng-ng git clone git://0x04.net/rules-ng-ng
The rules-ng-ng source files this header was generated from are: -- /home/orion/projects/etna_viv/rnndb/cmdstream.xml ( 12589 bytes, from 2013-09-01 10:53:22) -- /home/orion/projects/etna_viv/rnndb/common.xml ( 18379 bytes, from 2014-01-27 15:58:05) +- cmdstream.xml ( 12589 bytes, from 2014-02-17 14:57:56) +- common.xml ( 18437 bytes, from 2015-03-25 11:27:41)
-Copyright (C) 2013 +Copyright (C) 2014 */
diff --git a/drivers/staging/etnaviv/common.xml.h b/drivers/staging/etnaviv/common.xml.h index 36fa0e4cf56b..9e585d51fb78 100644 --- a/drivers/staging/etnaviv/common.xml.h +++ b/drivers/staging/etnaviv/common.xml.h @@ -8,14 +8,10 @@ http://0x04.net/cgit/index.cgi/rules-ng-ng git clone git://0x04.net/rules-ng-ng
The rules-ng-ng source files this header was generated from are: -- /home/orion/projects/etna_viv/rnndb/state.xml ( 18526 bytes, from 2013-09-11 16:52:32) -- /home/orion/projects/etna_viv/rnndb/common.xml ( 18379 bytes, from 2014-01-27 15:58:05) -- /home/orion/projects/etna_viv/rnndb/state_hi.xml ( 22236 bytes, from 2014-01-27 15:56:46) -- /home/orion/projects/etna_viv/rnndb/state_2d.xml ( 51191 bytes, from 2013-10-04 06:36:55) -- /home/orion/projects/etna_viv/rnndb/state_3d.xml ( 54570 bytes, from 2013-10-12 15:25:03) -- /home/orion/projects/etna_viv/rnndb/state_vg.xml ( 5942 bytes, from 2013-09-01 10:53:22) +- state_vg.xml ( 5973 bytes, from 2015-03-25 11:26:01) +- common.xml ( 18437 bytes, from 2015-03-25 11:27:41)
-Copyright (C) 2014 +Copyright (C) 2015 */
diff --git a/drivers/staging/etnaviv/state.xml.h b/drivers/staging/etnaviv/state.xml.h index e7b36df1e4e3..368218304566 100644 --- a/drivers/staging/etnaviv/state.xml.h +++ b/drivers/staging/etnaviv/state.xml.h @@ -8,14 +8,14 @@ http://0x04.net/cgit/index.cgi/rules-ng-ng git clone git://0x04.net/rules-ng-ng
The rules-ng-ng source files this header was generated from are: -- /home/orion/projects/etna_viv/rnndb/state.xml ( 18526 bytes, from 2013-09-11 16:52:32) -- /home/orion/projects/etna_viv/rnndb/common.xml ( 18379 bytes, from 2014-01-27 15:58:05) -- /home/orion/projects/etna_viv/rnndb/state_hi.xml ( 22236 bytes, from 2014-01-27 15:56:46) -- /home/orion/projects/etna_viv/rnndb/state_2d.xml ( 51191 bytes, from 2013-10-04 06:36:55) -- /home/orion/projects/etna_viv/rnndb/state_3d.xml ( 54570 bytes, from 2013-10-12 15:25:03) -- /home/orion/projects/etna_viv/rnndb/state_vg.xml ( 5942 bytes, from 2013-09-01 10:53:22) - -Copyright (C) 2013 +- state.xml ( 18882 bytes, from 2015-03-25 11:42:32) +- common.xml ( 18437 bytes, from 2015-03-25 11:27:41) +- state_hi.xml ( 23420 bytes, from 2015-03-25 11:47:21) +- state_2d.xml ( 51549 bytes, from 2015-03-25 11:25:06) +- state_3d.xml ( 54600 bytes, from 2015-03-25 11:25:19) +- state_vg.xml ( 5973 bytes, from 2015-03-25 11:26:01) + +Copyright (C) 2015 */
@@ -210,7 +210,10 @@ Copyright (C) 2013
 #define VIVS_GL_FLUSH_MMU				0x00003810
 #define VIVS_GL_FLUSH_MMU_FLUSH_FEMMU			0x00000001
-#define VIVS_GL_FLUSH_MMU_FLUSH_PEMMU			0x00000002
+#define VIVS_GL_FLUSH_MMU_FLUSH_UNK1			0x00000002
+#define VIVS_GL_FLUSH_MMU_FLUSH_UNK2			0x00000004
+#define VIVS_GL_FLUSH_MMU_FLUSH_PEMMU			0x00000008
+#define VIVS_GL_FLUSH_MMU_FLUSH_UNK4			0x00000010
#define VIVS_GL_VERTEX_ELEMENT_CONFIG 0x00003814
diff --git a/drivers/staging/etnaviv/state_hi.xml.h b/drivers/staging/etnaviv/state_hi.xml.h index 9799d7473e5e..0064f2640396 100644 --- a/drivers/staging/etnaviv/state_hi.xml.h +++ b/drivers/staging/etnaviv/state_hi.xml.h @@ -8,14 +8,10 @@ http://0x04.net/cgit/index.cgi/rules-ng-ng git clone git://0x04.net/rules-ng-ng
The rules-ng-ng source files this header was generated from are: -- /home/christian/projects/etna_viv/rnndb/state.xml ( 18526 bytes, from 2014-09-06 05:57:57) -- /home/christian/projects/etna_viv/rnndb/common.xml ( 18379 bytes, from 2014-09-06 05:57:57) -- /home/christian/projects/etna_viv/rnndb/state_hi.xml ( 23176 bytes, from 2014-09-06 06:07:47) -- /home/christian/projects/etna_viv/rnndb/state_2d.xml ( 51191 bytes, from 2014-09-06 05:57:57) -- /home/christian/projects/etna_viv/rnndb/state_3d.xml ( 54570 bytes, from 2014-09-06 05:57:57) -- /home/christian/projects/etna_viv/rnndb/state_vg.xml ( 5942 bytes, from 2014-09-06 05:57:57) - -Copyright (C) 2014 +- state_hi.xml ( 23420 bytes, from 2015-03-25 11:47:21) +- common.xml ( 18437 bytes, from 2015-03-25 11:27:41) + +Copyright (C) 2015 */
@@ -396,6 +392,12 @@ Copyright (C) 2014 #define VIVS_MC_PROFILE_CONFIG3 0x0000047c
 #define VIVS_MC_BUS_CONFIG					0x00000480
+#define VIVS_MC_BUS_CONFIG_FE_BUS_CONFIG__MASK		0x0000000f
+#define VIVS_MC_BUS_CONFIG_FE_BUS_CONFIG__SHIFT	0
+#define VIVS_MC_BUS_CONFIG_FE_BUS_CONFIG(x)		(((x) << VIVS_MC_BUS_CONFIG_FE_BUS_CONFIG__SHIFT) & VIVS_MC_BUS_CONFIG_FE_BUS_CONFIG__MASK)
+#define VIVS_MC_BUS_CONFIG_TX_BUS_CONFIG__MASK		0x000000f0
+#define VIVS_MC_BUS_CONFIG_TX_BUS_CONFIG__SHIFT	4
+#define VIVS_MC_BUS_CONFIG_TX_BUS_CONFIG(x)		(((x) << VIVS_MC_BUS_CONFIG_TX_BUS_CONFIG__SHIFT) & VIVS_MC_BUS_CONFIG_TX_BUS_CONFIG__MASK)
#define VIVS_MC_START_COMPOSITION 0x00000554
IOMMUv2 support isn't implemented yet, so don't pretend it is there.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
---
 drivers/staging/etnaviv/etnaviv_gpu.c      | 10 ++++++----
 drivers/staging/etnaviv/etnaviv_iommu_v2.c | 32 ------------------------------
 drivers/staging/etnaviv/etnaviv_iommu_v2.h | 25 -----------------------
 3 files changed, 6 insertions(+), 61 deletions(-)
 delete mode 100644 drivers/staging/etnaviv/etnaviv_iommu_v2.c
 delete mode 100644 drivers/staging/etnaviv/etnaviv_iommu_v2.h
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c
index d2d0556a9bad..e3b93c293dca 100644
--- a/drivers/staging/etnaviv/etnaviv_gpu.c
+++ b/drivers/staging/etnaviv/etnaviv_gpu.c
@@ -21,7 +21,6 @@
 #include "etnaviv_gem.h"
 #include "etnaviv_mmu.h"
 #include "etnaviv_iommu.h"
-#include "etnaviv_iommu_v2.h"
 #include "common.xml.h"
 #include "state.xml.h"
 #include "state_hi.xml.h"
@@ -329,10 +328,13 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
 	mmuv2 = gpu->identity.minor_features1 & chipMinorFeatures1_MMU_VERSION;
 	dev_dbg(gpu->dev->dev, "mmuv2: %d\n", mmuv2);

-	if (!mmuv2)
+	if (!mmuv2) {
 		iommu = etnaviv_iommu_domain_alloc(gpu);
-	else
-		iommu = etnaviv_iommu_v2_domain_alloc(gpu);
+	} else {
+		dev_err(gpu->dev, "IOMMUv2 support is not implemented yet!\n");
+		ret = -ENODEV;
+		goto fail;
+	}
if (!iommu) { ret = -ENOMEM; diff --git a/drivers/staging/etnaviv/etnaviv_iommu_v2.c b/drivers/staging/etnaviv/etnaviv_iommu_v2.c deleted file mode 100644 index 3039ee9cbc6d..000000000000 --- a/drivers/staging/etnaviv/etnaviv_iommu_v2.c +++ /dev/null @@ -1,32 +0,0 @@ -/* - * Copyright (C) 2014 Christian Gmeiner christian.gmeiner@gmail.com - * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License version 2 as published by - * the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. - * - * You should have received a copy of the GNU General Public License along with - * this program. If not, see http://www.gnu.org/licenses/. - */ - -#include <linux/iommu.h> -#include <linux/platform_device.h> -#include <linux/sizes.h> -#include <linux/slab.h> -#include <linux/dma-mapping.h> -#include <linux/bitops.h> - -#include "etnaviv_gpu.h" -#include "state_hi.xml.h" - - -struct iommu_domain *etnaviv_iommu_v2_domain_alloc(struct etnaviv_gpu *gpu) -{ - /* TODO */ - return NULL; -} diff --git a/drivers/staging/etnaviv/etnaviv_iommu_v2.h b/drivers/staging/etnaviv/etnaviv_iommu_v2.h deleted file mode 100644 index 603ea41c5389..000000000000 --- a/drivers/staging/etnaviv/etnaviv_iommu_v2.h +++ /dev/null @@ -1,25 +0,0 @@ -/* - * Copyright (C) 2014 Christian Gmeiner christian.gmeiner@gmail.com - * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License version 2 as published by - * the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for - * more details. - * - * You should have received a copy of the GNU General Public License along with - * this program. If not, see http://www.gnu.org/licenses/. - */ - -#ifndef __ETNAVIV_IOMMU_V2_H__ -#define __ETNAVIV_IOMMU_V2_H__ - -#include <linux/iommu.h> -struct etnaviv_gpu; - -struct iommu_domain *etnaviv_iommu_v2_domain_alloc(struct etnaviv_gpu *gpu); - -#endif /* __ETNAVIV_IOMMU_V2_H__ */
On Thu, Apr 2, 2015 at 10:29 AM, Lucas Stach l.stach@pengutronix.de wrote:
IOMMUv2 support isn't implemented yet, so don't pretend it is there.
Signed-off-by: Lucas Stach l.stach@pengutronix.de
drivers/staging/etnaviv/etnaviv_gpu.c | 10 ++++++---- drivers/staging/etnaviv/etnaviv_iommu_v2.c | 32 ------------------------------ drivers/staging/etnaviv/etnaviv_iommu_v2.h | 25 ----------------------- 3 files changed, 6 insertions(+), 61 deletions(-) delete mode 100644 drivers/staging/etnaviv/etnaviv_iommu_v2.c delete mode 100644 drivers/staging/etnaviv/etnaviv_iommu_v2.h
etnaviv_iommu_v2 is still referenced in the Makefile:
etnaviv_iommu_v2.o \
make[3]: *** No rule to make target 'drivers/staging/etnaviv/etnaviv_iommu_v2.o', needed by 'drivers/staging/etnaviv/etnaviv.o'. Stop. make[3]: *** Waiting for unfinished jobs....
Regards,
Am Donnerstag, den 02.04.2015, 12:14 -0500 schrieb Robert Nelson:
On Thu, Apr 2, 2015 at 10:29 AM, Lucas Stach l.stach@pengutronix.de wrote:
IOMMUv2 support isn't implemented yet, so don't pretend it is there.
Signed-off-by: Lucas Stach l.stach@pengutronix.de
drivers/staging/etnaviv/etnaviv_gpu.c | 10 ++++++---- drivers/staging/etnaviv/etnaviv_iommu_v2.c | 32 ------------------------------ drivers/staging/etnaviv/etnaviv_iommu_v2.h | 25 ----------------------- 3 files changed, 6 insertions(+), 61 deletions(-) delete mode 100644 drivers/staging/etnaviv/etnaviv_iommu_v2.c delete mode 100644 drivers/staging/etnaviv/etnaviv_iommu_v2.h
etnaviv_iommu_v2 is still referenced in the Makefile:
etnaviv_iommu_v2.o \
make[3]: *** No rule to make target 'drivers/staging/etnaviv/etnaviv_iommu_v2.o', needed by 'drivers/staging/etnaviv/etnaviv.o'. Stop. make[3]: *** Waiting for unfinished jobs....
Doh! I should have done a build test with a clean tree before sending this out.
Thanks for noticing, Lucas
2015-04-02 17:29 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
IOMMUv2 support isn't implemented yet, so don't pretend it is there.
Signed-off-by: Lucas Stach l.stach@pengutronix.de
drivers/staging/etnaviv/etnaviv_gpu.c | 10 ++++++---- drivers/staging/etnaviv/etnaviv_iommu_v2.c | 32 ------------------------------ drivers/staging/etnaviv/etnaviv_iommu_v2.h | 25 ----------------------- 3 files changed, 6 insertions(+), 61 deletions(-) delete mode 100644 drivers/staging/etnaviv/etnaviv_iommu_v2.c delete mode 100644 drivers/staging/etnaviv/etnaviv_iommu_v2.h
[...]
I am fine with this change. You may have seen that I have code for MMUv2 ready in my git tree, but at the moment I have no device to test it. So I will bring back support later.
greets -- Christian Gmeiner, MSc
Hi Christian,
Am Sonntag, den 05.04.2015, 20:32 +0200 schrieb Christian Gmeiner:
2015-04-02 17:29 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
IOMMUv2 support isn't implemented yet, so don't pretend it is there.
Signed-off-by: Lucas Stach l.stach@pengutronix.de
drivers/staging/etnaviv/etnaviv_gpu.c | 10 ++++++---- drivers/staging/etnaviv/etnaviv_iommu_v2.c | 32 ------------------------------ drivers/staging/etnaviv/etnaviv_iommu_v2.h | 25 ----------------------- 3 files changed, 6 insertions(+), 61 deletions(-) delete mode 100644 drivers/staging/etnaviv/etnaviv_iommu_v2.c delete mode 100644 drivers/staging/etnaviv/etnaviv_iommu_v2.h
[...]
I am fine with this change. You may have seen that I have code for MMUv2 ready in my git tree, but at the moment I have no device to test it. So I will bring back support later.
Yes, I noticed that you had something implemented. But given that I didn't see any hardware where one could test this, I would rather leave it out for now. I'm happy to pull it back in once it has been tested on real hardware.
Regards, Lucas
Hi Lucas
2015-04-07 9:24 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
Hi Christian,
Am Sonntag, den 05.04.2015, 20:32 +0200 schrieb Christian Gmeiner:
2015-04-02 17:29 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
IOMMUv2 support isn't implemented yet, so don't pretend it is there.
Signed-off-by: Lucas Stach l.stach@pengutronix.de
drivers/staging/etnaviv/etnaviv_gpu.c | 10 ++++++---- drivers/staging/etnaviv/etnaviv_iommu_v2.c | 32 ------------------------------ drivers/staging/etnaviv/etnaviv_iommu_v2.h | 25 ----------------------- 3 files changed, 6 insertions(+), 61 deletions(-) delete mode 100644 drivers/staging/etnaviv/etnaviv_iommu_v2.c delete mode 100644 drivers/staging/etnaviv/etnaviv_iommu_v2.h
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index d2d0556a9bad..e3b93c293dca 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -21,7 +21,6 @@ #include "etnaviv_gem.h" #include "etnaviv_mmu.h" #include "etnaviv_iommu.h" -#include "etnaviv_iommu_v2.h" #include "common.xml.h" #include "state.xml.h" #include "state_hi.xml.h" @@ -329,10 +328,13 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu) mmuv2 = gpu->identity.minor_features1 & chipMinorFeatures1_MMU_VERSION; dev_dbg(gpu->dev->dev, "mmuv2: %d\n", mmuv2);
-	if (!mmuv2)
+	if (!mmuv2) {
 		iommu = etnaviv_iommu_domain_alloc(gpu);
-	else
-		iommu = etnaviv_iommu_v2_domain_alloc(gpu);
+	} else {
+		dev_err(gpu->dev, "IOMMUv2 support is not implemented yet!\n");
+		ret = -ENODEV;
+		goto fail;
+	}
 
 	if (!iommu) {
 		ret = -ENOMEM;
diff --git a/drivers/staging/etnaviv/etnaviv_iommu_v2.c b/drivers/staging/etnaviv/etnaviv_iommu_v2.c deleted file mode 100644 index 3039ee9cbc6d..000000000000 --- a/drivers/staging/etnaviv/etnaviv_iommu_v2.c +++ /dev/null @@ -1,32 +0,0 @@ -/*
- Copyright (C) 2014 Christian Gmeiner christian.gmeiner@gmail.com
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License version 2 as published by
- the Free Software Foundation.
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
- You should have received a copy of the GNU General Public License along with
- this program. If not, see http://www.gnu.org/licenses/.
- */
-#include <linux/iommu.h> -#include <linux/platform_device.h> -#include <linux/sizes.h> -#include <linux/slab.h> -#include <linux/dma-mapping.h> -#include <linux/bitops.h>
-#include "etnaviv_gpu.h" -#include "state_hi.xml.h"
-struct iommu_domain *etnaviv_iommu_v2_domain_alloc(struct etnaviv_gpu *gpu) -{
-	/* TODO */
-	return NULL;
-} diff --git a/drivers/staging/etnaviv/etnaviv_iommu_v2.h b/drivers/staging/etnaviv/etnaviv_iommu_v2.h deleted file mode 100644 index 603ea41c5389..000000000000 --- a/drivers/staging/etnaviv/etnaviv_iommu_v2.h +++ /dev/null @@ -1,25 +0,0 @@ -/*
- Copyright (C) 2014 Christian Gmeiner christian.gmeiner@gmail.com
- This program is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License version 2 as published by
- the Free Software Foundation.
- This program is distributed in the hope that it will be useful, but WITHOUT
- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- more details.
- You should have received a copy of the GNU General Public License along with
- this program. If not, see http://www.gnu.org/licenses/.
- */
-#ifndef __ETNAVIV_IOMMU_V2_H__ -#define __ETNAVIV_IOMMU_V2_H__
-#include <linux/iommu.h> -struct etnaviv_gpu;
-struct iommu_domain *etnaviv_iommu_v2_domain_alloc(struct etnaviv_gpu *gpu);
-#endif /* __ETNAVIV_IOMMU_V2_H__ */
2.1.4
I am fine with this change. You may have seen that I have code for mmuv2 ready in my git tree, but at the moment I have no device to test it, so I will bring back support later.
Yes, I noticed that you had something implemented. But given that I didn't see any hardware where one could test this I would rather leave it out for now. I'm happy to pull this in once it has been tested on real hardware.
I know one :) Once it starts to work, there will also be patches.
<6>[1.1, swapper] [ 0.780917] Galcore version 4.6.9.9754
<4>[1.1, swapper] [ 0.780940] galcore options:
<4>[1.1, swapper] [ 0.780960] irqLine = 64
<4>[1.1, swapper] [ 0.780980] registerMemBase = 0xFA000000
<4>[1.1, swapper] [ 0.780999] registerMemSize = 0x00040000
<4>[1.1, swapper] [ 0.781018] irqLine2D = 65
<4>[1.1, swapper] [ 0.781036] registerMemBase2D = 0xFA040000
<4>[1.1, swapper] [ 0.781055] registerMemSize2D = 0x00040000
<4>[1.1, swapper] [ 0.781074] contiguousSize = 152363008
<4>[1.1, swapper] [ 0.781094] contiguousBase = 0x2B600000
<4>[1.1, swapper] [ 0.781113] bankSize = 0x02000000
<4>[1.1, swapper] [ 0.781133] fastClear = -1
<4>[1.1, swapper] [ 0.781150] compression = -1
<4>[1.1, swapper] [ 0.781168] signal = 48
<4>[1.1, swapper] [ 0.781186] baseAddress = 0x00000000
<4>[1.1, swapper] [ 0.781205] physSize = 0x80000000
<4>[1.1, swapper] [ 0.781224] logFileSize = 0 KB
<4>[1.1, swapper] [ 0.781242] powerManagement = 1
<4>[1.1, swapper] [ 0.781260] gpuProfiler = 0
<3>[1.0, swapper] [ 0.782305] Identity: chipModel=4000
<3>[1.0, swapper] [ 0.782332] Identity: chipRevision=5222
<3>[1.0, swapper] [ 0.782355] Identity: chipFeatures=0xE0287CAD
<3>[1.0, swapper] [ 0.782375] Identity: chipMinorFeatures=0xC1799EFB
<3>[1.0, swapper] [ 0.782395] Identity: chipMinorFeatures1=0xFEFBFAD9
<3>[1.0, swapper] [ 0.782415] Identity: chipMinorFeatures2=0xCB9E4BFF
<3>[1.0, swapper] [ 0.782435] Identity: chipMinorFeatures3=0x00000401
<3>[1.0, swapper] [ 0.785428] Identity: chipModel=320
<3>[1.0, swapper] [ 0.785452] Identity: chipRevision=5220
<3>[1.0, swapper] [ 0.785472] Identity: chipFeatures=0xE02C7ECA
<3>[1.0, swapper] [ 0.785492] Identity: chipMinorFeatures=0xC1399EFF
<3>[1.0, swapper] [ 0.785511] Identity: chipMinorFeatures1=0xFE1FB2DB
<3>[1.0, swapper] [ 0.785530] Identity: chipMinorFeatures2=0x02FE4080
<3>[1.0, swapper] [ 0.785550] Identity: chipMinorFeatures3=0x00000000
<3>[1.0, swapper] [ 0.858153] Identity: chipModel=4000
<3>[1.0, swapper] [ 0.858178] Identity: chipRevision=5222
<3>[1.0, swapper] [ 0.858199] Identity: chipFeatures=0xE0287CAD
<3>[1.0, swapper] [ 0.858219] Identity: chipMinorFeatures=0xC1799EFB
<3>[1.0, swapper] [ 0.858239] Identity: chipMinorFeatures1=0xFEFBFAD9
<3>[1.0, swapper] [ 0.858259] Identity: chipMinorFeatures2=0xCB9E4BFF
<3>[1.0, swapper] [ 0.858279] Identity: chipMinorFeatures3=0x00000401
<3>[1.0, swapper] [ 0.859965] Identity: chipModel=320
<3>[1.0, swapper] [ 0.859989] Identity: chipRevision=5220
<3>[1.0, swapper] [ 0.860009] Identity: chipFeatures=0xE02C7ECA
<3>[1.0, swapper] [ 0.860029] Identity: chipMinorFeatures=0xC1399EFF
<3>[1.0, swapper] [ 0.860049] Identity: chipMinorFeatures1=0xFE1FB2DB
<3>[1.0, swapper] [ 0.860068] Identity: chipMinorFeatures2=0x02FE4080
<3>[1.0, swapper] [ 0.860087] Identity: chipMinorFeatures3=0x00000000
greets -- Christian Gmeiner, MSc
From: Russell King rmk+kernel@arm.linux.org.uk
The oops below was found when unbinding the etnaviv drm driver. This is caused by using drm_gem_object_unreference() in a region which is not protected by drm_dev->struct_mutex. Fix this by using drm_gem_object_unreference_unlocked() instead.
Kernel BUG at c026cab8 [verbose debug info unavailable] Internal error: Oops - BUG: 0 [#1] PREEMPT ARM Modules linked in: etnaviv(C-) ... task: d41dda80 ti: d63f2000 task.ti: d63f2000 PC is at drm_gem_object_free+0x34/0x38 LR is at etnaviv_gpu_unbind+0x7c/0xb0 [etnaviv] pc : [<c026cab8>] lr : [<bf1067a8>] psr: 600d0013 sp : d63f3da0 ip : d63f3db0 fp : d63f3dac r10: 00020000 r9 : d63f2000 r8 : c000ece8 r7 : d4386c34 r6 : d4386c00 r5 : d42c4d80 r4 : d621ac10 r3 : d4386c00 r2 : 00000001 r1 : d608c410 r0 : d59da580 Flags: nZCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment user Control: 10c5387d Table: 14234019 DAC: 00000015 Process rmmod (pid: 1633, stack limit = 0xd63f2250) [<c026ca84>] (drm_gem_object_free) from [<bf1067a8>] (etnaviv_gpu_unbind+0x7c/0xb0 [etnaviv]) [<bf10672c>] (etnaviv_gpu_unbind [etnaviv]) from [<c028a47c>] (component_unbind+0x38/0x70) [<c028a444>] (component_unbind) from [<c028a52c>] (component_unbind_all+0x78/0xac) [<c028a4b4>] (component_unbind_all) from [<bf104560>] (etnaviv_unload+0x68/0x7c [etnaviv]) [<bf1044f8>] (etnaviv_unload [etnaviv]) from [<c0270498>] (drm_dev_unregister+0x2c/0xa0) [<c0270c90>] (drm_put_dev) from [<bf104058>] (etnaviv_unbind+0x14/0x18 [etnaviv]) [<bf104044>] (etnaviv_unbind [etnaviv]) from [<c028a2a4>] (take_down_master+0x2c/0x4c) ...
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gpu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index e3b93c293dca..8e44493038cb 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -868,7 +868,7 @@ static void etnaviv_gpu_unbind(struct device *dev, struct device *master, WARN_ON(!list_empty(&gpu->active_list));
 	if (gpu->buffer)
-		drm_gem_object_unreference(gpu->buffer);
+		drm_gem_object_unreference_unlocked(gpu->buffer);
if (gpu->mmu) etnaviv_iommu_destroy(gpu->mmu);
From: Russell King rmk+kernel@arm.linux.org.uk
We must delete the hangcheck timer before freeing its memory, otherwise we oops the kernel in the timer code when the memory is overwritten.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gpu.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 8e44493038cb..40ee6ac2ccd7 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -863,6 +863,8 @@ static void etnaviv_gpu_unbind(struct device *dev, struct device *master, { struct etnaviv_gpu *gpu = dev_get_drvdata(dev);
+	del_timer(&gpu->hangcheck_timer);
+
 	DBG("%s", gpu->name);
WARN_ON(!list_empty(&gpu->active_list));
From: Russell King rmk+kernel@arm.linux.org.uk
There are a couple of issues with etnaviv_add_components():

1. It releases each child node as it parses, which is unnecessary.
   Fix this by using the for_each_available_child_of_node() helper
   rather than open-coding this.
2. It fails to check the return value from component_master_add_child().
   In this case, we must drop the child reference before breaking out
   of the loop.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_drv.c | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index 39586b45200d..da7035ce07a2 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -527,19 +527,20 @@ static int etnaviv_compare(struct device *dev, void *data)
static int etnaviv_add_components(struct device *master, struct master *m) { - struct device_node *np = master->of_node; struct device_node *child_np; + int ret = 0;
- child_np = of_get_next_available_child(np, NULL); - - while (child_np) { + for_each_available_child_of_node(master->of_node, child_np) { DRM_INFO("add child %s\n", child_np->name); - component_master_add_child(m, etnaviv_compare, child_np); - of_node_put(child_np); - child_np = of_get_next_available_child(np, child_np); + + ret = component_master_add_child(m, etnaviv_compare, child_np); + if (ret) { + of_node_put(child_np); + break; + } }
- return 0; + return ret; }
static int etnaviv_bind(struct device *dev)
From: Russell King rmk+kernel@arm.linux.org.uk
Ensure that we reset the GPU with the clocks on, and restore the clock state after GPU reset has completed. Without this, register accesses can fail:
Unhandled fault: external abort on non-linefetch (0x1828) at 0xfe640418 Internal error: : 1828 [#1] PREEMPT ARM Modules linked in: etnaviv(C+) ... CPU: 0 PID: 1617 Comm: modprobe Tainted: G C 3.16.0+ #1010 task: d6210140 ti: d4102000 task.ti: d4102000 PC is at etnaviv_writel+0x2c/0x38 [etnaviv] LR is at etnaviv_gpu_init+0x304/0x5cc [etnaviv] pc : [<bf104960>] lr : [<bf107228>] psr: 600f0013 sp : d4103ba8 ip : d4103bc0 fp : d4103bbc r10: 00000000 r9 : 00000000 r8 : cd45b034 r7 : d43ee480 r6 : d5ab1c10 r5 : 00000000 r4 : fe640418 r3 : 00000000 r2 : 00000000 r1 : fe640418 r0 : 00000000 Flags: nZCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment user Control: 10c5387d Table: 143d4019 DAC: 00000015 Process modprobe (pid: 1617, stack limit = 0xd4102250) Backtrace: [<bf104934>] (etnaviv_writel [etnaviv]) from [<bf107228>] (etnaviv_gpu_init+0x304/0x5cc [etnaviv]) [<bf106f24>] (etnaviv_gpu_init [etnaviv]) from [<bf10481c>] (etnaviv_load+0xbc/0x128 [etnaviv]) [<bf104760>] (etnaviv_load [etnaviv]) from [<c0270a8c>] (drm_dev_register+0xa8/0x108) [<c02709e4>] (drm_dev_register) from [<c027238c>] (drm_platform_init+0x50/0xe8) ...
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gpu.c | 20 +++++++++++++++++++- 1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 40ee6ac2ccd7..56afba7625ed 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -260,7 +260,16 @@ static void etnaviv_hw_reset(struct etnaviv_gpu *gpu) */
 	while (true) {
-		control = gpu_read(gpu, VIVS_HI_CLOCK_CONTROL);
+		control = VIVS_HI_CLOCK_CONTROL_DISABLE_DEBUG_REGISTERS |
+			  VIVS_HI_CLOCK_CONTROL_FSCALE_VAL(0x40);
+
+		/* enable clock */
+		gpu_write(gpu, VIVS_HI_CLOCK_CONTROL, control |
+			  VIVS_HI_CLOCK_CONTROL_FSCALE_CMD_LOAD);
+		gpu_write(gpu, VIVS_HI_CLOCK_CONTROL, control);
+
+		/* Wait for stable clock. Vivante's code waited for 1ms */
+		usleep_range(1000, 10000);
/* isolate the GPU. */ control |= VIVS_HI_CLOCK_CONTROL_ISOLATE_GPU; @@ -302,6 +311,15 @@ static void etnaviv_hw_reset(struct etnaviv_gpu *gpu)
 		break;
 	}
+
+	/* We rely on the GPU running, so program the clock */
+	control = VIVS_HI_CLOCK_CONTROL_DISABLE_DEBUG_REGISTERS |
+		  VIVS_HI_CLOCK_CONTROL_FSCALE_VAL(0x40);
+
+	/* enable clock */
+	gpu_write(gpu, VIVS_HI_CLOCK_CONTROL, control |
+		  VIVS_HI_CLOCK_CONTROL_FSCALE_CMD_LOAD);
+	gpu_write(gpu, VIVS_HI_CLOCK_CONTROL, control);
 }
int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
From: Russell King rmk+kernel@arm.linux.org.uk
"0x08%x" is a confusing format: it prints every value with a literal "0x08" prefix followed by the unpadded hex value, so a value of 0xfe comes out as "0x08fe" and looks like a different number entirely. Fix the format string to place the % correctly.
Reverse the 64-bit values - even though the GPU wants 64-bit alignment in the command stream, and sees the stream as 64-bit values, it's more humanly understandable to print them as separate 32-bit values in "lo hi" format - the order that they appear in the command stream.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gpu.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 56afba7625ed..df5bef16ff4c 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -436,8 +436,8 @@ void etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct seq_file *m)
verify_dma(gpu, &debug);
- seq_printf(m, "\taxi: 0x08%x\n", axi); - seq_printf(m, "\tidle: 0x08%x\n", idle); + seq_printf(m, "\taxi: 0x%08x\n", axi); + seq_printf(m, "\tidle: 0x%08x\n", idle); if ((idle & VIVS_HI_IDLE_STATE_FE) == 0) seq_puts(m, "\t FE is not idle\n"); if ((idle & VIVS_HI_IDLE_STATE_DE) == 0) @@ -491,7 +491,8 @@ void etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct seq_file *m) seq_printf(m, "\t address 1: 0x%08x\n", debug.address[1]); seq_printf(m, "\t state 0: 0x%08x\n", debug.state[0]); seq_printf(m, "\t state 1: 0x%08x\n", debug.state[1]); - seq_printf(m, "\t last fetch 64 bit word: 0x%08x-0x%08x\n", dma_hi, dma_lo); + seq_printf(m, "\t last fetch 64 bit word: 0x%08x 0x%08x\n", + dma_lo, dma_hi); } #endif
From: Russell King rmk+kernel@arm.linux.org.uk
The fence implementation relied upon incrementing a 32-bit number and using unsigned comparisons. This is a limited number space which, when exhausted, will lead to unpredictable behaviour. Turn this into a circular number space, and use signed difference comparisons.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_drv.c | 5 +++-- drivers/staging/etnaviv/etnaviv_drv.h | 14 +++++++++++++- drivers/staging/etnaviv/etnaviv_gpu.c | 4 ++-- 3 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index da7035ce07a2..cc860b63447f 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -303,7 +303,7 @@ int etnaviv_wait_fence_interruptable(struct drm_device *dev, uint32_t pipe, if (!gpu) return -ENXIO;
- if (fence > gpu->submitted_fence) { + if (fence_after(fence, gpu->submitted_fence)) { DRM_ERROR("waiting on invalid fence: %u (of %u)\n", fence, gpu->submitted_fence); return -EINVAL; @@ -344,7 +344,8 @@ void etnaviv_update_fence(struct drm_device *dev, uint32_t fence) struct etnaviv_drm_private *priv = dev->dev_private;
mutex_lock(&dev->struct_mutex); - priv->completed_fence = max(fence, priv->completed_fence); + if (fence_after(fence, priv->completed_fence)) + priv->completed_fence = fence; mutex_unlock(&dev->struct_mutex);
wake_up_all(&priv->fence_event); diff --git a/drivers/staging/etnaviv/etnaviv_drv.h b/drivers/staging/etnaviv/etnaviv_drv.h index 63994f22d8c9..bf5d1d9cc891 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.h +++ b/drivers/staging/etnaviv/etnaviv_drv.h @@ -127,10 +127,22 @@ u32 etnaviv_readl(const void __iomem *addr); #define DBG(fmt, ...) DRM_DEBUG(fmt"\n", ##__VA_ARGS__) #define VERB(fmt, ...) if (0) DRM_DEBUG(fmt"\n", ##__VA_ARGS__)
+/* returns true if fence a comes after fence b */ +static inline bool fence_after(uint32_t a, uint32_t b) +{ + return (int32_t)(a - b) > 0; +} + +static inline bool fence_after_eq(uint32_t a, uint32_t b) +{ + return (int32_t)(a - b) >= 0; +} + static inline bool fence_completed(struct drm_device *dev, uint32_t fence) { struct etnaviv_drm_private *priv = dev->dev_private; - return priv->completed_fence >= fence; + + return fence_after_eq(priv->completed_fence, fence); }
static inline int align_pitch(int width, int bpp) diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index df5bef16ff4c..859edcccdda6 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -648,7 +648,7 @@ static void hangcheck_handler(unsigned long data) if (fence != gpu->hangcheck_fence) { /* some progress has been made.. ya! */ gpu->hangcheck_fence = fence; - } else if (fence < gpu->submitted_fence) { + } else if (fence_after(gpu->submitted_fence, fence)) { /* no progress and not done.. hung! */ gpu->hangcheck_fence = fence; dev_err(dev->dev, "%s: hangcheck detected gpu lockup!\n", @@ -661,7 +661,7 @@ static void hangcheck_handler(unsigned long data) }
/* if still more pending work, reset the hangcheck timer: */ - if (gpu->submitted_fence > gpu->hangcheck_fence) + if (fence_after(gpu->submitted_fence, gpu->hangcheck_fence)) hangcheck_timer_reset(gpu); }
From: Russell King rmk+kernel@arm.linux.org.uk
The buffer dumping code tries to dereference obj->gpu. However, for submitted command buffers, obj->gpu is only set after the buffer has been submitted to the GPU. Explicitly pass in the etnaviv_gpu struct.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_buffer.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_buffer.c b/drivers/staging/etnaviv/etnaviv_buffer.c index 32764e15c5f7..6afb9c702628 100644 --- a/drivers/staging/etnaviv/etnaviv_buffer.c +++ b/drivers/staging/etnaviv/etnaviv_buffer.c @@ -115,12 +115,13 @@ static void etnaviv_cmd_select_pipe(struct etnaviv_gem_object *buffer, u8 pipe) CMD_LOAD_STATE(buffer, VIVS_GL_PIPE_SELECT, VIVS_GL_PIPE_SELECT_PIPE(pipe)); }
-static void etnaviv_buffer_dump(struct etnaviv_gem_object *obj, u32 len) +static void etnaviv_buffer_dump(struct etnaviv_gpu *gpu, + struct etnaviv_gem_object *obj, u32 len) { u32 size = obj->base.size; u32 *ptr = obj->vaddr;
- dev_dbg(obj->gpu->dev->dev, "virt %p phys 0x%08x free 0x%08x\n", + dev_info(gpu->dev->dev, "virt %p phys 0x%08x free 0x%08x\n", obj->vaddr, obj->paddr, size - len * 4);
print_hex_dump(KERN_INFO, "cmd ", DUMP_PREFIX_OFFSET, 16, 4, @@ -150,7 +151,7 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct et u32 back; u32 i;
- etnaviv_buffer_dump(buffer, 0x50); + etnaviv_buffer_dump(gpu, buffer, 0x50);
/* save offset back into main buffer */ back = buffer->offset; @@ -180,7 +181,7 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct et
/* TODO: remove later */ if (unlikely(drm_debug & DRM_UT_CORE)) - etnaviv_buffer_dump(obj, obj->offset); + etnaviv_buffer_dump(gpu, obj, submit->cmd[i].size); }
/* change ll to NOP */ @@ -197,5 +198,5 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct et *(lw)= i; mb();
- etnaviv_buffer_dump(buffer, 0x50); + etnaviv_buffer_dump(gpu, buffer, 0x50); }
From: Russell King rmk+kernel@arm.linux.org.uk
The ring buffer offset is an index into an array of uint32_t, whereas obj->base.size is measured in bytes. Comparing these two is nonsense. Convert the index into a byte offset first.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_buffer.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/staging/etnaviv/etnaviv_buffer.c b/drivers/staging/etnaviv/etnaviv_buffer.c
index 6afb9c702628..729387571537 100644
--- a/drivers/staging/etnaviv/etnaviv_buffer.c
+++ b/drivers/staging/etnaviv/etnaviv_buffer.c
@@ -30,7 +30,7 @@ static inline void OUT(struct etnaviv_gem_object *buffer, uint32_t data)
 {
 	u32 *vaddr = (u32 *)buffer->vaddr;
-	BUG_ON(buffer->offset >= buffer->base.size);
+	BUG_ON(buffer->offset * sizeof(*vaddr) >= buffer->base.size);
vaddr[buffer->offset++] = data; }
From: Russell King rmk+kernel@arm.linux.org.uk
We need to call drm_prime_gem_destroy() to properly clean up imported dmabufs when their associated GEM object is deleted; otherwise we leak a reference on them, preventing them from being cleaned up.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 42149a2b7404..f98d5ee43853 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -519,6 +519,7 @@ static void etnaviv_free_obj(struct drm_gem_object *obj) if (etnaviv_obj->pages) drm_free_large(etnaviv_obj->pages);
+		drm_prime_gem_destroy(obj, etnaviv_obj->sgt);
 	} else {
 		if (etnaviv_obj->vaddr)
 			vunmap(etnaviv_obj->vaddr);
From: Russell King rmk+kernel@arm.linux.org.uk
Use %p for pointers rather than casting to u32 and using 0x%08x.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_drv.c | 4 ++-- drivers/staging/etnaviv/etnaviv_gpu.c | 6 +++--- 2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index cc860b63447f..863f9d6a0174 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -69,7 +69,7 @@ void __iomem *etnaviv_ioremap(struct platform_device *pdev, const char *name, void etnaviv_writel(u32 data, void __iomem *addr) { if (reglog) - printk(KERN_DEBUG "IO:W %08x %08x\n", (u32)addr, data); + printk(KERN_DEBUG "IO:W %p %08x\n", addr, data); writel(data, addr); }
@@ -77,7 +77,7 @@ u32 etnaviv_readl(const void __iomem *addr) { u32 val = readl(addr); if (reglog) - printk(KERN_ERR "IO:R %08x %08x\n", (u32)addr, val); + printk(KERN_DEBUG "IO:R %p %08x\n", addr, val); return val; }
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 859edcccdda6..d06665aa319b 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -852,9 +852,9 @@ static int etnaviv_gpu_bind(struct device *dev, struct device *master, struct etnaviv_gpu *gpu = dev_get_drvdata(dev); int idx = gpu->pipe;
- dev_info(dev, "pre gpu[idx]: 0x%08x\n", (u32)priv->gpu[idx]); + dev_info(dev, "pre gpu[idx]: %p\n", priv->gpu[idx]);
- if (priv->gpu[idx] == 0) { + if (priv->gpu[idx] == NULL) { dev_info(dev, "adding core @idx %d\n", idx); priv->gpu[idx] = gpu; } else { @@ -862,7 +862,7 @@ static int etnaviv_gpu_bind(struct device *dev, struct device *master, goto fail; }
- dev_info(dev, "post gpu[idx]: 0x%08x\n", (u32)priv->gpu[idx]); + dev_info(dev, "post gpu[idx]: %p\n", priv->gpu[idx]);
gpu->dev = drm;
From: Russell King rmk+kernel@arm.linux.org.uk
Ensure that we reject command buffers which are fully populated, as we always need to append two words for a LINK command to the end of the buffer.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem_submit.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem_submit.c b/drivers/staging/etnaviv/etnaviv_gem_submit.c index dd87fdfe7ab5..f8b733a0e313 100644 --- a/drivers/staging/etnaviv/etnaviv_gem_submit.c +++ b/drivers/staging/etnaviv/etnaviv_gem_submit.c @@ -348,6 +348,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, void __user *userptr = to_user_ptr(args->cmds + (i * sizeof(submit_cmd))); struct etnaviv_gem_object *etnaviv_obj; + unsigned max_size;
ret = copy_from_user(&submit_cmd, userptr, sizeof(submit_cmd)); if (ret) { @@ -373,8 +374,13 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, goto out; }
-		if ((submit_cmd.size + submit_cmd.submit_offset) >=
-		    etnaviv_obj->base.size) {
+		/*
+		 * We must have space to add a LINK command at the end of
+		 * the command buffer.
+		 */
+		max_size = etnaviv_obj->base.size - 8;
+
+		if ((submit_cmd.size + submit_cmd.submit_offset) > max_size) {
 			DRM_ERROR("invalid cmdstream size: %u\n", submit_cmd.size);
 			ret = -EINVAL;
 			goto out;
From: Russell King rmk+kernel@arm.linux.org.uk
Additions can overflow; when they do, they lead to incorrect results. When we verify that the buffer offset and size fit within the buffer object, we must do this safely.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem_submit.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem_submit.c b/drivers/staging/etnaviv/etnaviv_gem_submit.c index f8b733a0e313..39ae61ab43fd 100644 --- a/drivers/staging/etnaviv/etnaviv_gem_submit.c +++ b/drivers/staging/etnaviv/etnaviv_gem_submit.c @@ -380,7 +380,8 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, */ max_size = etnaviv_obj->base.size - 8;
-		if ((submit_cmd.size + submit_cmd.submit_offset) > max_size) {
+		if (submit_cmd.size > max_size ||
+		    submit_cmd.submit_offset > max_size - submit_cmd.size) {
 			DRM_ERROR("invalid cmdstream size: %u\n", submit_cmd.size);
 			ret = -EINVAL;
 			goto out;
From: Russell King rmk+kernel@arm.linux.org.uk
Currently, relocations can apply an unbounded amount of offset to the address member. This permits the offset to be used to access memory outside of the associated buffer.
Ensure that the offset is within the size of the object. This is not a complete fix, as we are unaware of the size of the GPU rectangles operation, but this at least ensures that we catch this form of abuse.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem_submit.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem_submit.c b/drivers/staging/etnaviv/etnaviv_gem_submit.c index 39ae61ab43fd..78c56adfcffc 100644 --- a/drivers/staging/etnaviv/etnaviv_gem_submit.c +++ b/drivers/staging/etnaviv/etnaviv_gem_submit.c @@ -245,6 +245,7 @@ static int submit_reloc(struct etnaviv_gem_submit *submit, struct etnaviv_gem_ob
for (i = 0; i < nr_relocs; i++) { struct drm_etnaviv_gem_submit_reloc submit_reloc; + struct etnaviv_gem_object *bobj; void __user *userptr = to_user_ptr(relocs + (i * sizeof(submit_reloc))); uint32_t iova, off; @@ -269,13 +270,20 @@ static int submit_reloc(struct etnaviv_gem_submit *submit, struct etnaviv_gem_ob return -EINVAL; }
- ret = submit_bo(submit, submit_reloc.reloc_idx, NULL, &iova, &valid); + ret = submit_bo(submit, submit_reloc.reloc_idx, &bobj, + &iova, &valid); if (ret) return ret;
if (valid) continue;
+ if (submit_reloc.reloc_offset >= + bobj->base.size - sizeof(*ptr)) { + DRM_ERROR("relocation %u outside object", i); + return -EINVAL; + } + iova += submit_reloc.reloc_offset;
if (submit_reloc.shift < 0)
From: Russell King rmk+kernel@arm.linux.org.uk
.../etnaviv_gem_submit.c:72:37: warning: dereference of noderef expression
.../etnaviv_gem_submit.c:364:37: warning: dereference of noderef expression
.../etnaviv_gem_submit.c:423:58: warning: dereference of noderef expression
.../etnaviv_iommu.c:139:13: warning: symbol 'etnaviv_iommu_iova_to_phys' was not declared. Should it be static?
.../etnaviv_iommu.c:156:21: warning: symbol 'etnaviv_iommu_domain_alloc' was not declared. Should it be static?
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_iommu.c | 4 +++- drivers/staging/etnaviv/etnaviv_iommu.h | 1 + include/uapi/drm/etnaviv_drm.h | 6 +++--- 3 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_iommu.c b/drivers/staging/etnaviv/etnaviv_iommu.c index d0811fb13363..5841a08f627f 100644 --- a/drivers/staging/etnaviv/etnaviv_iommu.c +++ b/drivers/staging/etnaviv/etnaviv_iommu.c @@ -22,6 +22,7 @@ #include <linux/bitops.h>
#include "etnaviv_gpu.h" +#include "etnaviv_iommu.h" #include "state_hi.xml.h"
#define PT_SIZE SZ_256K @@ -136,7 +137,8 @@ static size_t etnaviv_iommu_unmap(struct iommu_domain *domain, unsigned long iov return 0; }
-phys_addr_t etnaviv_iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova) +static phys_addr_t etnaviv_iommu_iova_to_phys(struct iommu_domain *domain, + dma_addr_t iova) { struct etnaviv_iommu_domain *etnaviv_domain = domain->priv;
diff --git a/drivers/staging/etnaviv/etnaviv_iommu.h b/drivers/staging/etnaviv/etnaviv_iommu.h index 3103ff3efcbe..c0c359d4f166 100644 --- a/drivers/staging/etnaviv/etnaviv_iommu.h +++ b/drivers/staging/etnaviv/etnaviv_iommu.h @@ -21,5 +21,6 @@ struct etnaviv_gpu;
struct iommu_domain *etnaviv_iommu_domain_alloc(struct etnaviv_gpu *gpu); +struct iommu_domain *etnaviv_iommu_v2_domain_alloc(struct etnaviv_gpu *gpu);
#endif /* __ETNAVIV_IOMMU_H__ */ diff --git a/include/uapi/drm/etnaviv_drm.h b/include/uapi/drm/etnaviv_drm.h index f7b5ac6f3842..a9f020ed71ea 100644 --- a/include/uapi/drm/etnaviv_drm.h +++ b/include/uapi/drm/etnaviv_drm.h @@ -154,7 +154,7 @@ struct drm_etnaviv_gem_submit_cmd { uint32_t size; /* in, cmdstream size */ uint32_t pad; uint32_t nr_relocs; /* in, number of submit_reloc's */ - uint64_t __user relocs; /* in, ptr to array of submit_reloc's */ + uint64_t relocs; /* in, ptr to array of submit_reloc's */ };
/* Each buffer referenced elsewhere in the cmdstream submit (ie. the @@ -185,8 +185,8 @@ struct drm_etnaviv_gem_submit { uint32_t fence; /* out */ uint32_t nr_bos; /* in, number of submit_bo's */ uint32_t nr_cmds; /* in, number of submit_cmd's */ - uint64_t __user bos; /* in, ptr to array of submit_bo's */ - uint64_t __user cmds; /* in, ptr to array of submit_cmd's */ + uint64_t bos; /* in, ptr to array of submit_bo's */ + uint64_t cmds; /* in, ptr to array of submit_cmd's */ };
/* The normal way to synchronize with the GPU is just to CPU_PREP on
From: Russell King rmk+kernel@arm.linux.org.uk
Use devm_ioremap_resource() rather than devm_ioremap_nocache() when remapping resources. devm_ioremap_resource() is the preferred interface for this, and is less error-prone than the older devm_ioremap_nocache().
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_drv.c | 20 +++++++------------- 1 file changed, 7 insertions(+), 13 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index 863f9d6a0174..7b4999c78417 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -39,7 +39,6 @@ void __iomem *etnaviv_ioremap(struct platform_device *pdev, const char *name, const char *dbgname) { struct resource *res; - unsigned long size; void __iomem *ptr;
if (name) @@ -47,21 +46,16 @@ void __iomem *etnaviv_ioremap(struct platform_device *pdev, const char *name, else res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- if (!res) { - dev_err(&pdev->dev, "failed to get memory resource: %s\n", name); - return ERR_PTR(-EINVAL); - } - - size = resource_size(res); - - ptr = devm_ioremap_nocache(&pdev->dev, res->start, size); - if (!ptr) { - dev_err(&pdev->dev, "failed to ioremap: %s\n", name); - return ERR_PTR(-ENOMEM); + ptr = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(ptr)) { + dev_err(&pdev->dev, "failed to ioremap %s: %ld\n", name, + PTR_ERR(ptr)); + return ptr; }
if (reglog) - printk(KERN_DEBUG "IO:region %s %08x %08lx\n", dbgname, (u32)ptr, size); + dev_printk(KERN_DEBUG, &pdev->dev, "IO:region %s 0x%p %08zx\n", + dbgname, ptr, (size_t)resource_size(res));
return ptr; }
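The error-pointer convention that makes devm_ioremap_resource() convenient here is that a single return value carries either a valid mapping or a negative errno, checked with IS_ERR()/PTR_ERR(). A minimal userspace model of that convention (the fake_ioremap_resource() helper is hypothetical, purely for illustration):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Userspace model of the kernel's ERR_PTR convention: small negative
 * error codes live at the very top of the pointer range, so one return
 * value can encode either a valid pointer or an errno. */
#define MAX_ERRNO 4095

static inline void *err_ptr(long error) { return (void *)error; }
static inline long ptr_err(const void *ptr) { return (long)ptr; }
static inline int is_err(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Hypothetical stand-in for devm_ioremap_resource(): never returns
 * NULL, only a valid pointer or an encoded error. */
static void *fake_ioremap_resource(void *res)
{
	static uint32_t regs[64];

	if (!res)
		return err_ptr(-EINVAL);
	return regs;
}
```

The caller then needs only one check (`if (IS_ERR(ptr)) return ptr;`), which is exactly the shape the patch above moves etnaviv_ioremap() to.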
From: Russell King rmk+kernel@arm.linux.org.uk
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_buffer.c | 10 ++++++++-- drivers/staging/etnaviv/etnaviv_gem.h | 1 + drivers/staging/etnaviv/etnaviv_gem_submit.c | 8 ++++++++ 3 files changed, 17 insertions(+), 2 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_buffer.c b/drivers/staging/etnaviv/etnaviv_buffer.c index 729387571537..30ef93aed22a 100644 --- a/drivers/staging/etnaviv/etnaviv_buffer.c +++ b/drivers/staging/etnaviv/etnaviv_buffer.c @@ -165,7 +165,8 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct et
/* update offset for every cmd stream */ for (i = 0; i < submit->nr_cmds; i++) - submit->cmd[i].obj->offset = submit->cmd[i].size; + submit->cmd[i].obj->offset = submit->cmd[i].offset + + submit->cmd[i].size;
/* TODO: inter-connect all cmd buffers */
@@ -173,6 +174,11 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct et cmd = submit->cmd[submit->nr_cmds - 1].obj; CMD_LINK(cmd, 4, buffer->paddr + (back * 4));
+ /* update the size */ + for (i = 0; i < submit->nr_cmds; i++) + submit->cmd[i].size = submit->cmd[i].obj->offset - + submit->cmd[i].offset; + printk(KERN_ERR "stream link @ 0x%08x\n", cmd->paddr + ((cmd->offset - 1) * 4)); printk(KERN_ERR "stream link @ %p\n", cmd->vaddr + ((cmd->offset - 1) * 4));
@@ -193,7 +199,7 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct et
/* Change WAIT into a LINK command; write the address first. */ i = VIV_FE_LINK_HEADER_OP_LINK | VIV_FE_LINK_HEADER_PREFETCH(submit->cmd[0].size * 2); - *(lw + 1) = submit->cmd[0].obj->paddr; + *(lw + 1) = submit->cmd[0].obj->paddr + submit->cmd[0].offset * 4; mb(); *(lw)= i; mb(); diff --git a/drivers/staging/etnaviv/etnaviv_gem.h b/drivers/staging/etnaviv/etnaviv_gem.h index 597ff8233fb1..97302ca6efaa 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.h +++ b/drivers/staging/etnaviv/etnaviv_gem.h @@ -87,6 +87,7 @@ struct etnaviv_gem_submit { unsigned int nr_bos; struct { uint32_t type; + uint32_t offset; /* in dwords */ uint32_t size; /* in dwords */ struct etnaviv_gem_object *obj; } cmd[MAX_CMDS]; diff --git a/drivers/staging/etnaviv/etnaviv_gem_submit.c b/drivers/staging/etnaviv/etnaviv_gem_submit.c index 78c56adfcffc..7eb02a121cff 100644 --- a/drivers/staging/etnaviv/etnaviv_gem_submit.c +++ b/drivers/staging/etnaviv/etnaviv_gem_submit.c @@ -382,6 +382,13 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, goto out; }
+ if (submit_cmd.submit_offset % 8) { + DRM_ERROR("non-aligned cmdstream buffer offset: %u\n", + submit_cmd.submit_offset); + ret = -EINVAL; + goto out; + } + /* * We must have space to add a LINK command at the end of * the command buffer. @@ -396,6 +403,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, goto out; }
submit->cmd[i].type = submit_cmd.type; + submit->cmd[i].offset = submit_cmd.submit_offset / 4; submit->cmd[i].size = submit_cmd.size / 4; submit->cmd[i].obj = etnaviv_obj;
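The submit path above takes byte offsets from userspace, rejects offsets that are not 8-byte aligned (a 64-bit command boundary), and stores both offset and size internally in dwords. A small model of that validation (struct and function names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* Model of the submit-time conversion: byte offset must be 8-byte
 * aligned; the kernel-side bookkeeping is kept in 32-bit dwords. */
struct cmd_desc {
	uint32_t offset;	/* in dwords */
	uint32_t size;		/* in dwords */
};

static int store_cmd(struct cmd_desc *cmd, uint32_t byte_offset,
		     uint32_t byte_size)
{
	if (byte_offset % 8)
		return -1;	/* -EINVAL in the kernel */

	cmd->offset = byte_offset / 4;
	cmd->size = byte_size / 4;
	return 0;
}
```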
From: Russell King rmk+kernel@arm.linux.org.uk
We don't always want to dump the start of the buffer. Pass a byte offset from the beginning of the buffer to be dumped.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_buffer.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_buffer.c b/drivers/staging/etnaviv/etnaviv_buffer.c index 30ef93aed22a..38b103543cce 100644 --- a/drivers/staging/etnaviv/etnaviv_buffer.c +++ b/drivers/staging/etnaviv/etnaviv_buffer.c @@ -116,13 +116,13 @@ static void etnaviv_cmd_select_pipe(struct etnaviv_gem_object *buffer, u8 pipe) }
static void etnaviv_buffer_dump(struct etnaviv_gpu *gpu, - struct etnaviv_gem_object *obj, u32 len) + struct etnaviv_gem_object *obj, u32 off, u32 len) { u32 size = obj->base.size; - u32 *ptr = obj->vaddr; + u32 *ptr = obj->vaddr + off;
dev_info(gpu->dev->dev, "virt %p phys 0x%08x free 0x%08x\n", - obj->vaddr, obj->paddr, size - len * 4); + ptr, obj->paddr + off, size - len * 4 - off);
print_hex_dump(KERN_INFO, "cmd ", DUMP_PREFIX_OFFSET, 16, 4, ptr, len * 4, 0); @@ -151,7 +151,7 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct et u32 back; u32 i;
- etnaviv_buffer_dump(gpu, buffer, 0x50); + etnaviv_buffer_dump(gpu, buffer, 0, 0x50);
/* save offset back into main buffer */ back = buffer->offset; @@ -187,7 +187,8 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct et
/* TODO: remove later */ if (unlikely(drm_debug & DRM_UT_CORE)) - etnaviv_buffer_dump(gpu, obj, submit->cmd[i].size); + etnaviv_buffer_dump(gpu, obj, submit->cmd[i].offset * 4, + submit->cmd[i].size); }
/* change ll to NOP */ @@ -204,5 +205,5 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct et *(lw)= i; mb();
- etnaviv_buffer_dump(gpu, buffer, 0x50); + etnaviv_buffer_dump(gpu, buffer, 0, 0x50); }
From: Russell King rmk+kernel@arm.linux.org.uk
The submission debug output was always printed, and at error level. Gate it behind DRM_UT_DRIVER and reduce it to info level.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_buffer.c | 37 +++++++++++++++++--------------- 1 file changed, 20 insertions(+), 17 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_buffer.c b/drivers/staging/etnaviv/etnaviv_buffer.c index 38b103543cce..945af22db3f1 100644 --- a/drivers/staging/etnaviv/etnaviv_buffer.c +++ b/drivers/staging/etnaviv/etnaviv_buffer.c @@ -151,7 +151,8 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct et u32 back; u32 i;
- etnaviv_buffer_dump(gpu, buffer, 0, 0x50); + if (drm_debug & DRM_UT_DRIVER) + etnaviv_buffer_dump(gpu, buffer, 0, 0x50);
/* save offset back into main buffer */ back = buffer->offset; @@ -179,24 +180,25 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct et submit->cmd[i].size = submit->cmd[i].obj->offset - submit->cmd[i].offset;
- printk(KERN_ERR "stream link @ 0x%08x\n", cmd->paddr + ((cmd->offset - 1) * 4)); - printk(KERN_ERR "stream link @ %p\n", cmd->vaddr + ((cmd->offset - 1) * 4)); + if (drm_debug & DRM_UT_DRIVER) { + pr_info("stream link @ 0x%08x\n", + cmd->paddr + ((cmd->offset - 1) * 4)); + pr_info("stream link @ %p\n", + cmd->vaddr + ((cmd->offset - 1) * 4));
- for (i = 0; i < submit->nr_cmds; i++) { - struct etnaviv_gem_object *obj = submit->cmd[i].obj; + for (i = 0; i < submit->nr_cmds; i++) { + struct etnaviv_gem_object *obj = submit->cmd[i].obj;
- /* TODO: remove later */ - if (unlikely(drm_debug & DRM_UT_CORE)) - etnaviv_buffer_dump(gpu, obj, submit->cmd[i].offset * 4, - submit->cmd[i].size); - } + etnaviv_buffer_dump(gpu, obj, submit->cmd[i].offset, + submit->cmd[i].size); + }
- /* change ll to NOP */ - printk(KERN_ERR "link op: %p\n", lw); - printk(KERN_ERR "link addr: %p\n", lw + 1); - printk(KERN_ERR "addr: 0x%08x\n", submit->cmd[0].obj->paddr); - printk(KERN_ERR "back: 0x%08x\n", buffer->paddr + (back * 4)); - printk(KERN_ERR "event: %d\n", event); + pr_info("link op: %p\n", lw); + pr_info("link addr: %p\n", lw + 1); + pr_info("addr: 0x%08x\n", submit->cmd[0].obj->paddr); + pr_info("back: 0x%08x\n", buffer->paddr + (back * 4)); + pr_info("event: %d\n", event); + }
/* Change WAIT into a LINK command; write the address first. */ i = VIV_FE_LINK_HEADER_OP_LINK | VIV_FE_LINK_HEADER_PREFETCH(submit->cmd[0].size * 2); @@ -205,5 +207,6 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct et *(lw)= i; mb();
- etnaviv_buffer_dump(gpu, buffer, 0, 0x50); + if (drm_debug & DRM_UT_DRIVER) + etnaviv_buffer_dump(gpu, buffer, 0, 0x50); }
From: Russell King rmk+kernel@arm.linux.org.uk
etnaviv_buffer_queue() could not handle multiple command buffers. We can handle this by adapting the existing code: record where we want to link to, then walk the submitted command buffers in reverse order, appending a LINK command to the previous target.
This also means that we conveniently end up with the address and size to link to when changing the previous WAIT command.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_buffer.c | 45 +++++++++++++++++--------------- 1 file changed, 24 insertions(+), 21 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_buffer.c b/drivers/staging/etnaviv/etnaviv_buffer.c index 945af22db3f1..be2b02ce9a46 100644 --- a/drivers/staging/etnaviv/etnaviv_buffer.c +++ b/drivers/staging/etnaviv/etnaviv_buffer.c @@ -148,7 +148,7 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct et struct etnaviv_gem_object *buffer = to_etnaviv_bo(gpu->buffer); struct etnaviv_gem_object *cmd; u32 *lw = buffer->vaddr + ((buffer->offset - 4) * 4); - u32 back; + u32 back, link_target, link_size; u32 i;
if (drm_debug & DRM_UT_DRIVER) @@ -156,6 +156,8 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct et
/* save offset back into main buffer */ back = buffer->offset; + link_target = buffer->paddr + buffer->offset * 4; + link_size = 6;
/* trigger event */ CMD_LOAD_STATE(buffer, VIVS_GL_EVENT, VIVS_GL_EVENT_EVENT_ID(event) | VIVS_GL_EVENT_FROM_PE); @@ -165,27 +167,28 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct et CMD_LINK(buffer, 2, buffer->paddr + ((buffer->offset - 1) * 4));
/* update offset for every cmd stream */ - for (i = 0; i < submit->nr_cmds; i++) - submit->cmd[i].obj->offset = submit->cmd[i].offset + - submit->cmd[i].size; + for (i = submit->nr_cmds; i--; ) { + cmd = submit->cmd[i].obj;
- /* TODO: inter-connect all cmd buffers */ + cmd->offset = submit->cmd[i].offset + submit->cmd[i].size;
- /* jump back from last cmd to main buffer */ - cmd = submit->cmd[submit->nr_cmds - 1].obj; - CMD_LINK(cmd, 4, buffer->paddr + (back * 4)); + if (drm_debug & DRM_UT_DRIVER) + pr_info("stream link from buffer %u to 0x%08x @ 0x%08x %p\n", + i, link_target, + cmd->paddr + cmd->offset * 4, + cmd->vaddr + cmd->offset * 4);
- /* update the size */ - for (i = 0; i < submit->nr_cmds; i++) - submit->cmd[i].size = submit->cmd[i].obj->offset - - submit->cmd[i].offset; + /* jump back from last cmd to main buffer */ + CMD_LINK(cmd, link_size, link_target);
- if (drm_debug & DRM_UT_DRIVER) { - pr_info("stream link @ 0x%08x\n", - cmd->paddr + ((cmd->offset - 1) * 4)); - pr_info("stream link @ %p\n", - cmd->vaddr + ((cmd->offset - 1) * 4)); + /* update the size */ + submit->cmd[i].size = cmd->offset - submit->cmd[i].offset; + + link_target = cmd->paddr + submit->cmd[i].offset * 4; + link_size = submit->cmd[i].size * 2; + }
+ if (drm_debug & DRM_UT_DRIVER) { for (i = 0; i < submit->nr_cmds; i++) { struct etnaviv_gem_object *obj = submit->cmd[i].obj;
@@ -195,16 +198,16 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct et
pr_info("link op: %p\n", lw); pr_info("link addr: %p\n", lw + 1); - pr_info("addr: 0x%08x\n", submit->cmd[0].obj->paddr); + pr_info("addr: 0x%08x\n", link_target); pr_info("back: 0x%08x\n", buffer->paddr + (back * 4)); pr_info("event: %d\n", event); }
/* Change WAIT into a LINK command; write the address first. */ - i = VIV_FE_LINK_HEADER_OP_LINK | VIV_FE_LINK_HEADER_PREFETCH(submit->cmd[0].size * 2); - *(lw + 1) = submit->cmd[0].obj->paddr + submit->cmd[0].offset * 4; + *(lw + 1) = link_target; mb(); - *(lw)= i; + *(lw) = VIV_FE_LINK_HEADER_OP_LINK | + VIV_FE_LINK_HEADER_PREFETCH(link_size); mb();
if (drm_debug & DRM_UT_DRIVER)
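The reverse walk in the patch above can be sketched as follows: the link target starts out as the ring buffer's return address, and each command buffer (visited last-to-first) gets a LINK to the previous target, so the finished chain runs cmd[0] → cmd[1] → … → ring. This is a simplified userspace model, not the driver code; sim_cmd and link_cmds are hypothetical names:

```c
#include <assert.h>
#include <stdint.h>

struct sim_cmd {
	uint32_t paddr;		/* GPU address of this buffer */
	uint32_t offset;	/* start of commands, in dwords */
	uint32_t size;		/* command size, in dwords */
	uint32_t link_to;	/* address the appended LINK points at */
};

/* Walk the buffers in reverse, linking each one to the previous
 * target; return the address of the first buffer's commands, which
 * the caller patches into the old WAIT to start execution. */
static uint32_t link_cmds(struct sim_cmd *cmd, unsigned int nr,
			  uint32_t ring_target)
{
	uint32_t link_target = ring_target;
	unsigned int i;

	for (i = nr; i--; ) {
		cmd[i].link_to = link_target;
		/* the next (earlier) buffer must link to this one */
		link_target = cmd[i].paddr + cmd[i].offset * 4;
	}

	return link_target;
}
```

The convenient side effect the commit message mentions falls out naturally: after the loop, link_target is exactly the address needed when rewriting the previous WAIT into a LINK.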
Is there a real use case for multiple command buffers per submit ioctl? I have removed this from my kernel tree.
2015-04-02 17:29 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
-- Christian Gmeiner, MSc
From: Russell King rmk+kernel@arm.linux.org.uk
Combine the event data into an array of etnaviv_event structures, rather than individual arrays.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gpu.c | 18 +++++++++--------- drivers/staging/etnaviv/etnaviv_gpu.h | 8 ++++++-- 2 files changed, 15 insertions(+), 11 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index d06665aa319b..55a58cb27a3d 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -380,8 +380,8 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu) /* Setup event management */ spin_lock_init(&gpu->event_spinlock); init_completion(&gpu->event_free); - for (i = 0; i < ARRAY_SIZE(gpu->event_used); i++) { - gpu->event_used[i] = false; + for (i = 0; i < ARRAY_SIZE(gpu->event); i++) { + gpu->event[i].used = false; complete(&gpu->event_free); }
@@ -681,9 +681,9 @@ static unsigned int event_alloc(struct etnaviv_gpu *gpu) spin_lock_irqsave(&gpu->event_spinlock, flags);
/* find first free event */ - for (i = 0; i < ARRAY_SIZE(gpu->event_used); i++) { - if (gpu->event_used[i] == false) { - gpu->event_used[i] = true; + for (i = 0; i < ARRAY_SIZE(gpu->event); i++) { + if (gpu->event[i].used == false) { + gpu->event[i].used = true; event = i; break; } @@ -700,11 +700,11 @@ static void event_free(struct etnaviv_gpu *gpu, unsigned int event)
spin_lock_irqsave(&gpu->event_spinlock, flags);
- if (gpu->event_used[event] == false) { + if (gpu->event[event].used == false) { dev_warn(gpu->dev->dev, "event %u is already marked as free", event); spin_unlock_irqrestore(&gpu->event_spinlock, flags); } else { - gpu->event_used[event] = false; + gpu->event[event].used = false; spin_unlock_irqrestore(&gpu->event_spinlock, flags);
complete(&gpu->event_free); @@ -781,7 +781,7 @@ int etnaviv_gpu_submit(struct etnaviv_gpu *gpu, struct etnaviv_gem_submit *submi goto fail; }
- gpu->event_to_fence[event] = submit->fence; + gpu->event[event].fence = submit->fence;
etnaviv_buffer_queue(gpu, event, submit);
@@ -833,7 +833,7 @@ static irqreturn_t irq_handler(int irq, void *data) else { uint8_t event = __fls(intr); dev_dbg(gpu->dev->dev, "event %u\n", event); - gpu->retired_fence = gpu->event_to_fence[event]; + gpu->retired_fence = gpu->event[event].fence; event_free(gpu, event); etnaviv_gpu_retire(gpu); } diff --git a/drivers/staging/etnaviv/etnaviv_gpu.h b/drivers/staging/etnaviv/etnaviv_gpu.h index 707096b5fe98..519b9344ed0c 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.h +++ b/drivers/staging/etnaviv/etnaviv_gpu.h @@ -78,6 +78,11 @@ struct etnaviv_chip_identity { uint32_t buffer_size; };
+struct etnaviv_event { + bool used; + uint32_t fence; +}; + struct etnaviv_gpu { const char *name; struct drm_device *dev; @@ -88,8 +93,7 @@ struct etnaviv_gpu { struct drm_gem_object *buffer;
/* event management: */ - bool event_used[30]; - uint32_t event_to_fence[30]; + struct etnaviv_event event[30]; struct completion event_free; struct spinlock event_spinlock;
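The consolidation above replaces the parallel event_used[]/event_to_fence[] arrays with one array of structures, so allocation and lookup touch a single slot. A minimal model of the alloc/free logic (locking and the completion-based flow control are deliberately omitted; names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NR_EVENTS 30

/* One struct per hardware event slot, mirroring struct etnaviv_event:
 * the "used" flag and the fence that slot will signal live together. */
struct sim_event {
	bool used;
	uint32_t fence;
};

static int event_alloc(struct sim_event *ev, unsigned int nr)
{
	unsigned int i;

	/* find first free event slot */
	for (i = 0; i < nr; i++) {
		if (!ev[i].used) {
			ev[i].used = true;
			return i;
		}
	}
	return -1;	/* none free; the driver waits on a completion */
}

static void event_free(struct sim_event *ev, int event)
{
	ev[event].used = false;
}
```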
From: Russell King rmk+kernel@arm.linux.org.uk
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_buffer.c | 4 ++++ drivers/staging/etnaviv/etnaviv_gpu.c | 3 +-- drivers/staging/etnaviv/etnaviv_gpu.h | 2 ++ 3 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_buffer.c b/drivers/staging/etnaviv/etnaviv_buffer.c index be2b02ce9a46..a93d5091828e 100644 --- a/drivers/staging/etnaviv/etnaviv_buffer.c +++ b/drivers/staging/etnaviv/etnaviv_buffer.c @@ -159,6 +159,10 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct et link_target = buffer->paddr + buffer->offset * 4; link_size = 6;
+ /* Save the event and buffer position of the new event trigger */ + gpu->event[event].fence = submit->fence; + gpu->event[event].ring_pos = buffer->offset; + /* trigger event */ CMD_LOAD_STATE(buffer, VIVS_GL_EVENT, VIVS_GL_EVENT_EVENT_ID(event) | VIVS_GL_EVENT_FROM_PE);
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 55a58cb27a3d..8c88940a2bc6 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -781,8 +781,6 @@ int etnaviv_gpu_submit(struct etnaviv_gpu *gpu, struct etnaviv_gem_submit *submi goto fail; }
- gpu->event[event].fence = submit->fence; - etnaviv_buffer_queue(gpu, event, submit);
priv->lastctx = ctx; @@ -834,6 +832,7 @@ static irqreturn_t irq_handler(int irq, void *data) uint8_t event = __fls(intr); dev_dbg(gpu->dev->dev, "event %u\n", event); gpu->retired_fence = gpu->event[event].fence; + gpu->last_ring_pos = gpu->event[event].ring_pos; event_free(gpu, event); etnaviv_gpu_retire(gpu); } diff --git a/drivers/staging/etnaviv/etnaviv_gpu.h b/drivers/staging/etnaviv/etnaviv_gpu.h index 519b9344ed0c..52db3dc54079 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.h +++ b/drivers/staging/etnaviv/etnaviv_gpu.h @@ -81,6 +81,7 @@ struct etnaviv_chip_identity { struct etnaviv_event { bool used; uint32_t fence; + uint32_t ring_pos; };
struct etnaviv_gpu { @@ -102,6 +103,7 @@ struct etnaviv_gpu {
uint32_t submitted_fence; uint32_t retired_fence; + uint32_t last_ring_pos;
/* worker for handling active-list retiring: */ struct work_struct retire_work;
From: Russell King rmk+kernel@arm.linux.org.uk
Ensure that the ring buffer wraps when we fill the buffer.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_buffer.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/etnaviv/etnaviv_buffer.c b/drivers/staging/etnaviv/etnaviv_buffer.c index a93d5091828e..a8b42a1ec7db 100644 --- a/drivers/staging/etnaviv/etnaviv_buffer.c +++ b/drivers/staging/etnaviv/etnaviv_buffer.c @@ -30,7 +30,7 @@ static inline void OUT(struct etnaviv_gem_object *buffer, uint32_t data) { u32 *vaddr = (u32 *)buffer->vaddr; - BUG_ON(buffer->offset * sizeof(*vaddr) >= buffer->base.size); + BUG_ON(buffer->offset >= buffer->base.size / sizeof(*vaddr));
vaddr[buffer->offset++] = data; } @@ -154,6 +154,12 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct et if (drm_debug & DRM_UT_DRIVER) etnaviv_buffer_dump(gpu, buffer, 0, 0x50);
+ /* + * if we are going to completely overflow the buffer, we need to wrap. + */ + if (buffer->offset + 6 > buffer->base.size / sizeof(uint32_t)) + buffer->offset = 0; + /* save offset back into main buffer */ back = buffer->offset; link_target = buffer->paddr + buffer->offset * 4;
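The wrap check above reserves 6 dwords (the event trigger plus WAIT/LINK) and restarts at offset 0 when they would not fit before the end of the ring. A sketch of that bound check, with sizes in bytes converted to dwords the same way (ring_reserve is a hypothetical helper, not a driver function):

```c
#include <assert.h>
#include <stdint.h>

/* If the needed dwords would run past the end of the ring buffer,
 * wrap back to the start; otherwise keep the current offset. */
static uint32_t ring_reserve(uint32_t offset, uint32_t buf_bytes,
			     uint32_t needed_dwords)
{
	if (offset + needed_dwords > buf_bytes / sizeof(uint32_t))
		return 0;	/* wrap to the start */
	return offset;
}
```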
From: Russell King rmk+kernel@arm.linux.org.uk
ERROR: do not initialise statics to 0 or NULL
+static bool reglog = false;

ERROR: "foo * bar" should be "foo *bar"
+int etnaviv_gem_get_iova_locked(struct etnaviv_gpu * gpu, struct drm_gem_object *obj,

ERROR: do not use C99 // comments
+	// XXX TODO ..
...
ERROR: spaces required around that '=' (ctx:VxW)
+	struct etnaviv_gem_object *etnaviv_obj= to_etnaviv_bo(obj);

ERROR: do not use C99 // comments
+//#define MSM_PARAM_GMEM_SIZE 0x02

ERROR: Macros with complex values should be enclosed in parenthesis
+#define DRM_IOCTL_ETNAVIV_GEM_CPU_PREP DRM_IOW (DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_CPU_PREP, struct drm_etnaviv_gem_cpu_prep)
...
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem.c | 18 ++++++++++-------- drivers/staging/etnaviv/etnaviv_gem_prime.c | 2 +- include/uapi/drm/etnaviv_drm.h | 8 ++++---- 3 files changed, 15 insertions(+), 13 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index f98d5ee43853..d65f202de32c 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -256,8 +256,8 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) * That means when I do eventually need to add support for unpinning * the refcnt counter needs to be atomic_t. */ -int etnaviv_gem_get_iova_locked(struct etnaviv_gpu * gpu, struct drm_gem_object *obj, - uint32_t *iova) +int etnaviv_gem_get_iova_locked(struct etnaviv_gpu *gpu, + struct drm_gem_object *obj, uint32_t *iova) { struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); int ret = 0; @@ -317,12 +317,14 @@ int etnaviv_gem_get_iova(struct etnaviv_gpu *gpu, struct drm_gem_object *obj, in
void etnaviv_gem_put_iova(struct drm_gem_object *obj) { - // XXX TODO .. - // NOTE: probably don't need a _locked() version.. we wouldn't - // normally unmap here, but instead just mark that it could be - // unmapped (if the iova refcnt drops to zero), but then later - // if another _get_iova_locked() fails we can start unmapping - // things that are no longer needed.. + /* + * XXX TODO .. + * NOTE: probably don't need a _locked() version.. we wouldn't + * normally unmap here, but instead just mark that it could be + * unmapped (if the iova refcnt drops to zero), but then later + * if another _get_iova_locked() fails we can start unmapping + * things that are no longer needed.. + */ }
int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, diff --git a/drivers/staging/etnaviv/etnaviv_gem_prime.c b/drivers/staging/etnaviv/etnaviv_gem_prime.c index 78dd843a8e97..f9af91f9ff10 100644 --- a/drivers/staging/etnaviv/etnaviv_gem_prime.c +++ b/drivers/staging/etnaviv/etnaviv_gem_prime.c @@ -21,7 +21,7 @@
struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj) { - struct etnaviv_gem_object *etnaviv_obj= to_etnaviv_bo(obj); + struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); BUG_ON(!etnaviv_obj->sgt); /* should have already pinned! */ return etnaviv_obj->sgt; } diff --git a/include/uapi/drm/etnaviv_drm.h b/include/uapi/drm/etnaviv_drm.h index a9f020ed71ea..76596fc46150 100644 --- a/include/uapi/drm/etnaviv_drm.h +++ b/include/uapi/drm/etnaviv_drm.h @@ -68,7 +68,7 @@ struct drm_etnaviv_timespec { #define ETNAVIV_PARAM_GPU_INSTRUCTION_COUNT 0x18 #define ETNAVIV_PARAM_GPU_NUM_CONSTANTS 0x19
-//#define MSM_PARAM_GMEM_SIZE 0x02 +/* #define MSM_PARAM_GMEM_SIZE 0x02 */
struct drm_etnaviv_param { uint32_t pipe; /* in, ETNA_PIPE_x */ @@ -217,9 +217,9 @@ struct drm_etnaviv_wait_fence { #define DRM_IOCTL_ETNAVIV_GET_PARAM DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GET_PARAM, struct drm_etnaviv_param) #define DRM_IOCTL_ETNAVIV_GEM_NEW DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_NEW, struct drm_etnaviv_gem_new) #define DRM_IOCTL_ETNAVIV_GEM_INFO DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_INFO, struct drm_etnaviv_gem_info) -#define DRM_IOCTL_ETNAVIV_GEM_CPU_PREP DRM_IOW (DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_CPU_PREP, struct drm_etnaviv_gem_cpu_prep) -#define DRM_IOCTL_ETNAVIV_GEM_CPU_FINI DRM_IOW (DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_CPU_FINI, struct drm_etnaviv_gem_cpu_fini) +#define DRM_IOCTL_ETNAVIV_GEM_CPU_PREP DRM_IOW(DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_CPU_PREP, struct drm_etnaviv_gem_cpu_prep) +#define DRM_IOCTL_ETNAVIV_GEM_CPU_FINI DRM_IOW(DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_CPU_FINI, struct drm_etnaviv_gem_cpu_fini) #define DRM_IOCTL_ETNAVIV_GEM_SUBMIT DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_SUBMIT, struct drm_etnaviv_gem_submit) -#define DRM_IOCTL_ETNAVIV_WAIT_FENCE DRM_IOW (DRM_COMMAND_BASE + DRM_ETNAVIV_WAIT_FENCE, struct drm_etnaviv_wait_fence) +#define DRM_IOCTL_ETNAVIV_WAIT_FENCE DRM_IOW(DRM_COMMAND_BASE + DRM_ETNAVIV_WAIT_FENCE, struct drm_etnaviv_wait_fence)
#endif /* __ETNAVIV_DRM_H__ */
From: Russell King rmk+kernel@arm.linux.org.uk
Fix many checkpatch warnings.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_buffer.c | 41 ++++-- drivers/staging/etnaviv/etnaviv_drv.c | 38 ++++-- drivers/staging/etnaviv/etnaviv_drv.h | 11 +- drivers/staging/etnaviv/etnaviv_gem.c | 42 ++++-- drivers/staging/etnaviv/etnaviv_gem_prime.c | 2 + drivers/staging/etnaviv/etnaviv_gem_submit.c | 25 ++-- drivers/staging/etnaviv/etnaviv_gpu.c | 185 +++++++++++++++++---------- drivers/staging/etnaviv/etnaviv_gpu.h | 9 +- drivers/staging/etnaviv/etnaviv_iommu.c | 16 ++- drivers/staging/etnaviv/etnaviv_mmu.c | 4 +- drivers/staging/etnaviv/etnaviv_mmu.h | 14 +- include/uapi/drm/etnaviv_drm.h | 2 +- 12 files changed, 261 insertions(+), 128 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_buffer.c b/drivers/staging/etnaviv/etnaviv_buffer.c index a8b42a1ec7db..026489baeda7 100644 --- a/drivers/staging/etnaviv/etnaviv_buffer.c +++ b/drivers/staging/etnaviv/etnaviv_buffer.c @@ -30,28 +30,37 @@ static inline void OUT(struct etnaviv_gem_object *buffer, uint32_t data) { u32 *vaddr = (u32 *)buffer->vaddr; + BUG_ON(buffer->offset >= buffer->base.size / sizeof(*vaddr));
vaddr[buffer->offset++] = data; }
-static inline void CMD_LOAD_STATE(struct etnaviv_gem_object *buffer, u32 reg, u32 value) +static inline void CMD_LOAD_STATE(struct etnaviv_gem_object *buffer, + u32 reg, u32 value) { + u32 index = reg >> VIV_FE_LOAD_STATE_HEADER_OFFSET__SHR; + buffer->offset = ALIGN(buffer->offset, 2);
/* write a register via cmd stream */ - OUT(buffer, VIV_FE_LOAD_STATE_HEADER_OP_LOAD_STATE | VIV_FE_LOAD_STATE_HEADER_COUNT(1) | - VIV_FE_LOAD_STATE_HEADER_OFFSET(reg >> VIV_FE_LOAD_STATE_HEADER_OFFSET__SHR)); + OUT(buffer, VIV_FE_LOAD_STATE_HEADER_OP_LOAD_STATE | + VIV_FE_LOAD_STATE_HEADER_COUNT(1) | + VIV_FE_LOAD_STATE_HEADER_OFFSET(index)); OUT(buffer, value); }
-static inline void CMD_LOAD_STATES(struct etnaviv_gem_object *buffer, u32 reg, u16 count, u32 *values) +static inline void CMD_LOAD_STATES(struct etnaviv_gem_object *buffer, + u32 reg, u16 count, u32 *values) { + u32 index = reg >> VIV_FE_LOAD_STATE_HEADER_OFFSET__SHR; u16 i; + buffer->offset = ALIGN(buffer->offset, 2);
- OUT(buffer, VIV_FE_LOAD_STATE_HEADER_OP_LOAD_STATE | VIV_FE_LOAD_STATE_HEADER_COUNT(count) | - VIV_FE_LOAD_STATE_HEADER_OFFSET(reg >> VIV_FE_LOAD_STATE_HEADER_OFFSET__SHR)); + OUT(buffer, VIV_FE_LOAD_STATE_HEADER_OP_LOAD_STATE | + VIV_FE_LOAD_STATE_HEADER_COUNT(count) | + VIV_FE_LOAD_STATE_HEADER_OFFSET(index));
for (i = 0; i < count; i++) OUT(buffer, values[i]); @@ -78,15 +87,18 @@ static inline void CMD_WAIT(struct etnaviv_gem_object *buffer) OUT(buffer, VIV_FE_WAIT_HEADER_OP_WAIT | 200); }
-static inline void CMD_LINK(struct etnaviv_gem_object *buffer, u16 prefetch, u32 address) +static inline void CMD_LINK(struct etnaviv_gem_object *buffer, + u16 prefetch, u32 address) { buffer->offset = ALIGN(buffer->offset, 2);
- OUT(buffer, VIV_FE_LINK_HEADER_OP_LINK | VIV_FE_LINK_HEADER_PREFETCH(prefetch)); + OUT(buffer, VIV_FE_LINK_HEADER_OP_LINK | + VIV_FE_LINK_HEADER_PREFETCH(prefetch)); OUT(buffer, address); }
-static inline void CMD_STALL(struct etnaviv_gem_object *buffer, u32 from, u32 to) +static inline void CMD_STALL(struct etnaviv_gem_object *buffer, + u32 from, u32 to) { buffer->offset = ALIGN(buffer->offset, 2);
@@ -105,14 +117,15 @@ static void etnaviv_cmd_select_pipe(struct etnaviv_gem_object *buffer, u8 pipe) flush = VIVS_GL_FLUSH_CACHE_TEXTURE;
stall = VIVS_GL_SEMAPHORE_TOKEN_FROM(SYNC_RECIPIENT_FE) | - VIVS_GL_SEMAPHORE_TOKEN_TO(SYNC_RECIPIENT_PE); + VIVS_GL_SEMAPHORE_TOKEN_TO(SYNC_RECIPIENT_PE);
CMD_LOAD_STATE(buffer, VIVS_GL_FLUSH_CACHE, flush); CMD_LOAD_STATE(buffer, VIVS_GL_SEMAPHORE_TOKEN, stall);
CMD_STALL(buffer, SYNC_RECIPIENT_FE, SYNC_RECIPIENT_PE);
- CMD_LOAD_STATE(buffer, VIVS_GL_PIPE_SELECT, VIVS_GL_PIPE_SELECT_PIPE(pipe)); + CMD_LOAD_STATE(buffer, VIVS_GL_PIPE_SELECT, + VIVS_GL_PIPE_SELECT_PIPE(pipe)); }
static void etnaviv_buffer_dump(struct etnaviv_gpu *gpu, @@ -143,7 +156,8 @@ u32 etnaviv_buffer_init(struct etnaviv_gpu *gpu) return buffer->offset; }
-void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct etnaviv_gem_submit *submit) +void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, + struct etnaviv_gem_submit *submit) { struct etnaviv_gem_object *buffer = to_etnaviv_bo(gpu->buffer); struct etnaviv_gem_object *cmd; @@ -170,7 +184,8 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct et gpu->event[event].ring_pos = buffer->offset;
/* trigger event */ - CMD_LOAD_STATE(buffer, VIVS_GL_EVENT, VIVS_GL_EVENT_EVENT_ID(event) | VIVS_GL_EVENT_FROM_PE); + CMD_LOAD_STATE(buffer, VIVS_GL_EVENT, VIVS_GL_EVENT_EVENT_ID(event) | + VIVS_GL_EVENT_FROM_PE);
/* append WAIT/LINK to main buffer */ CMD_WAIT(buffer); diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index 7b4999c78417..2d44fcd7299e 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -24,11 +24,12 @@ void etnaviv_register_mmu(struct drm_device *dev, struct etnaviv_iommu *mmu) { struct etnaviv_drm_private *priv = dev->dev_private; + priv->mmu = mmu; }
#ifdef CONFIG_DRM_ETNAVIV_REGISTER_LOGGING -static bool reglog = false; +static bool reglog; MODULE_PARM_DESC(reglog, "Enable register read/write logging"); module_param(reglog, bool, 0600); #else @@ -64,14 +65,17 @@ void etnaviv_writel(u32 data, void __iomem *addr) { if (reglog) printk(KERN_DEBUG "IO:W %p %08x\n", addr, data); + writel(data, addr); }
u32 etnaviv_readl(const void __iomem *addr) { u32 val = readl(addr); + if (reglog) printk(KERN_DEBUG "IO:R %p %08x\n", addr, val); + return val; }
@@ -90,6 +94,7 @@ static int etnaviv_unload(struct drm_device *dev) mutex_lock(&dev->struct_mutex); for (i = 0; i < ETNA_MAX_PIPES; i++) { struct etnaviv_gpu *g = priv->gpu[i]; + if (g) etnaviv_gpu_pm_suspend(g); } @@ -114,12 +119,15 @@ static void load_gpu(struct drm_device *dev)
for (i = 0; i < ETNA_MAX_PIPES; i++) { struct etnaviv_gpu *g = priv->gpu[i]; + if (g) { int ret; + etnaviv_gpu_pm_resume(g); ret = etnaviv_gpu_init(g); if (ret) { - dev_err(dev->dev, "%s hw init failed: %d\n", g->name, ret); + dev_err(dev->dev, "%s hw init failed: %d\n", + g->name, ret); priv->gpu[i] = NULL; } } @@ -370,11 +378,15 @@ static int etnaviv_ioctl_gem_new(struct drm_device *dev, void *data, struct drm_file *file) { struct drm_etnaviv_gem_new *args = data; + return etnaviv_gem_new_handle(dev, file, args->size, args->flags, &args->handle); }
-#define TS(t) ((struct timespec){ .tv_sec = (t).tv_sec, .tv_nsec = (t).tv_nsec }) +#define TS(t) ((struct timespec){ \ + .tv_sec = (t).tv_sec, \ + .tv_nsec = (t).tv_nsec \ +})
static int etnaviv_ioctl_gem_cpu_prep(struct drm_device *dev, void *data, struct drm_file *file) @@ -437,17 +449,21 @@ static int etnaviv_ioctl_wait_fence(struct drm_device *dev, void *data, struct drm_file *file) { struct drm_etnaviv_wait_fence *args = data; - return etnaviv_wait_fence_interruptable(dev, args->pipe, args->fence, &TS(args->timeout)); + + return etnaviv_wait_fence_interruptable(dev, args->pipe, args->fence, + &TS(args->timeout)); }
static const struct drm_ioctl_desc etnaviv_ioctls[] = { - DRM_IOCTL_DEF_DRV(ETNAVIV_GET_PARAM, etnaviv_ioctl_get_param, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), - DRM_IOCTL_DEF_DRV(ETNAVIV_GEM_NEW, etnaviv_ioctl_gem_new, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), - DRM_IOCTL_DEF_DRV(ETNAVIV_GEM_INFO, etnaviv_ioctl_gem_info, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), - DRM_IOCTL_DEF_DRV(ETNAVIV_GEM_CPU_PREP, etnaviv_ioctl_gem_cpu_prep, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), - DRM_IOCTL_DEF_DRV(ETNAVIV_GEM_CPU_FINI, etnaviv_ioctl_gem_cpu_fini, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), - DRM_IOCTL_DEF_DRV(ETNAVIV_GEM_SUBMIT, etnaviv_ioctl_gem_submit, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), - DRM_IOCTL_DEF_DRV(ETNAVIV_WAIT_FENCE, etnaviv_ioctl_wait_fence, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), +#define ETNA_IOCTL(n, func, flags) \ + DRM_IOCTL_DEF_DRV(ETNAVIV_##n, etnaviv_ioctl_##func, flags) + ETNA_IOCTL(GET_PARAM, get_param, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), + ETNA_IOCTL(GEM_NEW, gem_new, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), + ETNA_IOCTL(GEM_INFO, gem_info, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), + ETNA_IOCTL(GEM_CPU_PREP, gem_cpu_prep, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), + ETNA_IOCTL(GEM_CPU_FINI, gem_cpu_fini, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), + ETNA_IOCTL(GEM_SUBMIT, gem_submit, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), + ETNA_IOCTL(WAIT_FENCE, wait_fence, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), };
static const struct vm_operations_struct vm_ops = { diff --git a/drivers/staging/etnaviv/etnaviv_drv.h b/drivers/staging/etnaviv/etnaviv_drv.h index bf5d1d9cc891..a1543734bc2f 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.h +++ b/drivers/staging/etnaviv/etnaviv_drv.h @@ -77,9 +77,10 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma); int etnaviv_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf); uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj); -int etnaviv_gem_get_iova_locked(struct etnaviv_gpu *gpu, struct drm_gem_object *obj, - uint32_t *iova); -int etnaviv_gem_get_iova(struct etnaviv_gpu *gpu, struct drm_gem_object *obj, int id, uint32_t *iova); +int etnaviv_gem_get_iova_locked(struct etnaviv_gpu *gpu, + struct drm_gem_object *obj, uint32_t *iova); +int etnaviv_gem_get_iova(struct etnaviv_gpu *gpu, struct drm_gem_object *obj, + int id, uint32_t *iova); struct page **etnaviv_gem_get_pages(struct drm_gem_object *obj); void msm_gem_put_pages(struct drm_gem_object *obj); void etnaviv_gem_put_iova(struct drm_gem_object *obj); @@ -111,7 +112,8 @@ struct drm_gem_object *etnaviv_gem_new(struct drm_device *dev, struct drm_gem_object *msm_gem_import(struct drm_device *dev, uint32_t size, struct sg_table *sgt); u32 etnaviv_buffer_init(struct etnaviv_gpu *gpu); -void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct etnaviv_gem_submit *submit); +void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, + struct etnaviv_gem_submit *submit);
#ifdef CONFIG_DEBUG_FS void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m); @@ -148,6 +150,7 @@ static inline bool fence_completed(struct drm_device *dev, uint32_t fence) static inline int align_pitch(int width, int bpp) { int bytespp = (bpp + 7) / 8; + /* adreno needs pitch aligned to 32 pixels: */ return bytespp * ALIGN(width, 32); } diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index d65f202de32c..18f607b6532f 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -55,7 +55,7 @@ static struct page **get_pages(struct drm_gem_object *obj) */ if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_UNCACHED)) dma_map_sg(dev->dev, etnaviv_obj->sgt->sgl, - etnaviv_obj->sgt->nents, DMA_BIDIRECTIONAL); + etnaviv_obj->sgt->nents, DMA_BIDIRECTIONAL); }
return etnaviv_obj->pages; @@ -71,7 +71,8 @@ static void put_pages(struct drm_gem_object *obj) */ if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_UNCACHED)) dma_unmap_sg(obj->dev->dev, etnaviv_obj->sgt->sgl, - etnaviv_obj->sgt->nents, DMA_BIDIRECTIONAL); + etnaviv_obj->sgt->nents, + DMA_BIDIRECTIONAL); sg_free_table(etnaviv_obj->sgt); kfree(etnaviv_obj->sgt);
@@ -85,9 +86,11 @@ struct page **etnaviv_gem_get_pages(struct drm_gem_object *obj) { struct drm_device *dev = obj->dev; struct page **p; + mutex_lock(&dev->struct_mutex); p = get_pages(obj); mutex_unlock(&dev->struct_mutex); + return p; }
@@ -121,14 +124,17 @@ static int etnaviv_gem_mmap_obj(struct drm_gem_object *obj, struct vm_area_struct *vma) { struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); + pgprot_t vm_page_prot;
vma->vm_flags &= ~VM_PFNMAP; vma->vm_flags |= VM_MIXEDMAP;
+ vm_page_prot = vm_get_page_prot(vma->vm_flags); + if (etnaviv_obj->flags & ETNA_BO_WC) { - vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags)); + vma->vm_page_prot = pgprot_writecombine(vm_page_prot); } else if (etnaviv_obj->flags & ETNA_BO_UNCACHED) { - vma->vm_page_prot = pgprot_noncached(vm_get_page_prot(vma->vm_flags)); + vma->vm_page_prot = pgprot_noncached(vm_page_prot); } else { /* * Shunt off cached objs to shmem file so they have their own @@ -140,7 +146,7 @@ static int etnaviv_gem_mmap_obj(struct drm_gem_object *obj, vma->vm_pgoff = 0; vma->vm_file = obj->filp;
- vma->vm_page_prot = vm_get_page_prot(vma->vm_flags); + vma->vm_page_prot = vm_page_prot; }
return 0; @@ -243,9 +249,11 @@ static uint64_t mmap_offset(struct drm_gem_object *obj) uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) { uint64_t offset; + mutex_lock(&obj->dev->struct_mutex); offset = mmap_offset(obj); mutex_unlock(&obj->dev->struct_mutex); + return offset; }
@@ -296,7 +304,8 @@ int etnaviv_gem_get_iova_locked(struct etnaviv_gpu *gpu, return ret; }
-int etnaviv_gem_get_iova(struct etnaviv_gpu *gpu, struct drm_gem_object *obj, int id, uint32_t *iova) +int etnaviv_gem_get_iova(struct etnaviv_gpu *gpu, struct drm_gem_object *obj, + int id, uint32_t *iova) { struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); int ret; @@ -312,6 +321,7 @@ int etnaviv_gem_get_iova(struct etnaviv_gpu *gpu, struct drm_gem_object *obj, in mutex_lock(&obj->dev->struct_mutex); ret = etnaviv_gem_get_iova_locked(gpu, obj, iova); mutex_unlock(&obj->dev->struct_mutex); + return ret; }
@@ -361,29 +371,37 @@ fail: void *etnaviv_gem_vaddr_locked(struct drm_gem_object *obj) { struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); + WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex)); + if (!etnaviv_obj->vaddr) { struct page **pages = get_pages(obj); + if (IS_ERR(pages)) return ERR_CAST(pages); + etnaviv_obj->vaddr = vmap(pages, obj->size >> PAGE_SHIFT, VM_MAP, pgprot_writecombine(PAGE_KERNEL)); } + return etnaviv_obj->vaddr; }
void *msm_gem_vaddr(struct drm_gem_object *obj) { void *ret; + mutex_lock(&obj->dev->struct_mutex); ret = etnaviv_gem_vaddr_locked(obj); mutex_unlock(&obj->dev->struct_mutex); + return ret; }
dma_addr_t etnaviv_gem_paddr_locked(struct drm_gem_object *obj) { struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); + WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
return etnaviv_obj->paddr; @@ -393,11 +411,14 @@ void etnaviv_gem_move_to_active(struct drm_gem_object *obj, struct etnaviv_gpu *gpu, bool write, uint32_t fence) { struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); + etnaviv_obj->gpu = gpu; + if (write) etnaviv_obj->write_fence = fence; else etnaviv_obj->read_fence = fence; + list_del_init(&etnaviv_obj->mm_list); list_add_tail(&etnaviv_obj->mm_list, &gpu->active_list); } @@ -459,6 +480,7 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m) uint64_t off = drm_vma_node_start(&obj->vma_node);
WARN_ON(!mutex_is_locked(&dev->struct_mutex)); + seq_printf(m, "%08x: %c(r=%u,w=%u) %2d (%2d) %08llx %p %d\n", etnaviv_obj->flags, is_active(etnaviv_obj) ? 'A' : 'I', etnaviv_obj->read_fence, etnaviv_obj->write_fence, @@ -474,6 +496,7 @@ void msm_gem_describe_objects(struct list_head *list, struct seq_file *m)
list_for_each_entry(etnaviv_obj, list, mm_list) { struct drm_gem_object *obj = &etnaviv_obj->base; + seq_puts(m, " "); msm_gem_describe(obj, m); count++; @@ -504,6 +527,7 @@ static void etnaviv_free_obj(struct drm_gem_object *obj)
if (mmu && etnaviv_obj->iova) { uint32_t offset = etnaviv_obj->gpu_vram_node->start; + etnaviv_iommu_unmap(mmu, offset, etnaviv_obj->sgt, obj->size); drm_mm_remove_node(etnaviv_obj->gpu_vram_node); kfree(etnaviv_obj->gpu_vram_node); @@ -513,7 +537,8 @@ static void etnaviv_free_obj(struct drm_gem_object *obj)
if (obj->import_attach) { if (etnaviv_obj->vaddr) - dma_buf_vunmap(obj->import_attach->dmabuf, etnaviv_obj->vaddr); + dma_buf_vunmap(obj->import_attach->dmabuf, + etnaviv_obj->vaddr);
/* Don't drop the pages for imported dmabuf, as they are not * ours, just free the array we allocated: @@ -695,7 +720,8 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, goto fail; }
- ret = drm_prime_sg_to_page_addr_arrays(sgt, etnaviv_obj->pages, NULL, npages); + ret = drm_prime_sg_to_page_addr_arrays(sgt, etnaviv_obj->pages, + NULL, npages); if (ret) goto fail;
diff --git a/drivers/staging/etnaviv/etnaviv_gem_prime.c b/drivers/staging/etnaviv/etnaviv_gem_prime.c index f9af91f9ff10..9c152b5640bc 100644 --- a/drivers/staging/etnaviv/etnaviv_gem_prime.c +++ b/drivers/staging/etnaviv/etnaviv_gem_prime.c @@ -22,7 +22,9 @@ struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj) { struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); + BUG_ON(!etnaviv_obj->sgt); /* should have already pinned! */ + return etnaviv_obj->sgt; }
diff --git a/drivers/staging/etnaviv/etnaviv_gem_submit.c b/drivers/staging/etnaviv/etnaviv_gem_submit.c index 7eb02a121cff..af3718465ea1 100644 --- a/drivers/staging/etnaviv/etnaviv_gem_submit.c +++ b/drivers/staging/etnaviv/etnaviv_gem_submit.c @@ -92,7 +92,8 @@ static int submit_lookup_objects(struct etnaviv_gem_submit *submit, */ obj = idr_find(&file->object_idr, submit_bo.handle); if (!obj) { - DRM_ERROR("invalid handle %u at index %u\n", submit_bo.handle, i); + DRM_ERROR("invalid handle %u at index %u\n", + submit_bo.handle, i); ret = -EINVAL; goto out_unlock; } @@ -101,7 +102,7 @@ static int submit_lookup_objects(struct etnaviv_gem_submit *submit,
if (!list_empty(&etnaviv_obj->submit_entry)) { DRM_ERROR("handle %u at index %u already on submit list\n", - submit_bo.handle, i); + submit_bo.handle, i); ret = -EINVAL; goto out_unlock; } @@ -163,7 +164,8 @@ retry:
/* if locking succeeded, pin bo: */ - ret = etnaviv_gem_get_iova_locked(submit->gpu, &etnaviv_obj->base, &iova); + ret = etnaviv_gem_get_iova_locked(submit->gpu, + &etnaviv_obj->base, &iova);
/* this would break the logic in the fail path.. there is no * reason for this to happen, but just to be on the safe side @@ -197,7 +199,10 @@ fail: submit_unlock_unpin_bo(submit, slow_locked);
if (ret == -EDEADLK) { - struct etnaviv_gem_object *etnaviv_obj = submit->bos[contended].obj; + struct etnaviv_gem_object *etnaviv_obj; + + etnaviv_obj = submit->bos[contended].obj; + /* we lost out in a seqno race, lock and retry.. */ ret = ww_mutex_lock_slow_interruptible(&etnaviv_obj->resv->lock, &submit->ticket); @@ -251,7 +256,8 @@ static int submit_reloc(struct etnaviv_gem_submit *submit, struct etnaviv_gem_ob uint32_t iova, off; bool valid;
- ret = copy_from_user(&submit_reloc, userptr, sizeof(submit_reloc)); + ret = copy_from_user(&submit_reloc, userptr, + sizeof(submit_reloc)); if (ret) return -EFAULT;
@@ -305,6 +311,7 @@ static void submit_cleanup(struct etnaviv_gem_submit *submit, bool fail)
for (i = 0; i < submit->nr_bos; i++) { struct etnaviv_gem_object *etnaviv_obj = submit->bos[i].obj; + submit_unlock_unpin_bo(submit, i); list_del_init(&etnaviv_obj->submit_entry); drm_gem_object_unreference(&etnaviv_obj->base); @@ -397,7 +404,8 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
if (submit_cmd.size > max_size || submit_cmd.submit_offset > max_size - submit_cmd.size) { - DRM_ERROR("invalid cmdstream size: %u\n", submit_cmd.size); + DRM_ERROR("invalid cmdstream size: %u\n", + submit_cmd.size); ret = -EINVAL; goto out; } @@ -410,8 +418,9 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, if (submit->valid) continue;
- ret = submit_reloc(submit, etnaviv_obj, submit_cmd.submit_offset, - submit_cmd.nr_relocs, submit_cmd.relocs); + ret = submit_reloc(submit, etnaviv_obj, + submit_cmd.submit_offset, + submit_cmd.nr_relocs, submit_cmd.relocs); if (ret) goto out; } diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 8c88940a2bc6..85a0862e0347 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -31,7 +31,8 @@ * Driver functions: */
-int etnaviv_gpu_get_param(struct etnaviv_gpu *gpu, uint32_t param, uint64_t *value) +int etnaviv_gpu_get_param(struct etnaviv_gpu *gpu, uint32_t param, + uint64_t *value) { switch (param) { case ETNAVIV_PARAM_GPU_MODEL: @@ -112,37 +113,49 @@ int etnaviv_gpu_get_param(struct etnaviv_gpu *gpu, uint32_t param, uint64_t *val
static void etnaviv_hw_specs(struct etnaviv_gpu *gpu) { - if (gpu->identity.minor_features0 & chipMinorFeatures0_MORE_MINOR_FEATURES) { + if (gpu->identity.minor_features0 & + chipMinorFeatures0_MORE_MINOR_FEATURES) { u32 specs[2];
specs[0] = gpu_read(gpu, VIVS_HI_CHIP_SPECS); specs[1] = gpu_read(gpu, VIVS_HI_CHIP_SPECS_2);
- gpu->identity.stream_count = (specs[0] & VIVS_HI_CHIP_SPECS_STREAM_COUNT__MASK) + gpu->identity.stream_count = + (specs[0] & VIVS_HI_CHIP_SPECS_STREAM_COUNT__MASK) >> VIVS_HI_CHIP_SPECS_STREAM_COUNT__SHIFT; - gpu->identity.register_max = (specs[0] & VIVS_HI_CHIP_SPECS_REGISTER_MAX__MASK) + gpu->identity.register_max = + (specs[0] & VIVS_HI_CHIP_SPECS_REGISTER_MAX__MASK) >> VIVS_HI_CHIP_SPECS_REGISTER_MAX__SHIFT; - gpu->identity.thread_count = (specs[0] & VIVS_HI_CHIP_SPECS_THREAD_COUNT__MASK) + gpu->identity.thread_count = + (specs[0] & VIVS_HI_CHIP_SPECS_THREAD_COUNT__MASK) >> VIVS_HI_CHIP_SPECS_THREAD_COUNT__SHIFT; - gpu->identity.vertex_cache_size = (specs[0] & VIVS_HI_CHIP_SPECS_VERTEX_CACHE_SIZE__MASK) + gpu->identity.vertex_cache_size = + (specs[0] & VIVS_HI_CHIP_SPECS_VERTEX_CACHE_SIZE__MASK) >> VIVS_HI_CHIP_SPECS_VERTEX_CACHE_SIZE__SHIFT; - gpu->identity.shader_core_count = (specs[0] & VIVS_HI_CHIP_SPECS_SHADER_CORE_COUNT__MASK) + gpu->identity.shader_core_count = + (specs[0] & VIVS_HI_CHIP_SPECS_SHADER_CORE_COUNT__MASK) >> VIVS_HI_CHIP_SPECS_SHADER_CORE_COUNT__SHIFT; - gpu->identity.pixel_pipes = (specs[0] & VIVS_HI_CHIP_SPECS_PIXEL_PIPES__MASK) + gpu->identity.pixel_pipes = + (specs[0] & VIVS_HI_CHIP_SPECS_PIXEL_PIPES__MASK) >> VIVS_HI_CHIP_SPECS_PIXEL_PIPES__SHIFT; - gpu->identity.vertex_output_buffer_size = (specs[0] & VIVS_HI_CHIP_SPECS_VERTEX_OUTPUT_BUFFER_SIZE__MASK) + gpu->identity.vertex_output_buffer_size = + (specs[0] & VIVS_HI_CHIP_SPECS_VERTEX_OUTPUT_BUFFER_SIZE__MASK) >> VIVS_HI_CHIP_SPECS_VERTEX_OUTPUT_BUFFER_SIZE__SHIFT;
- gpu->identity.buffer_size = (specs[1] & VIVS_HI_CHIP_SPECS_2_BUFFER_SIZE__MASK) + gpu->identity.buffer_size = + (specs[1] & VIVS_HI_CHIP_SPECS_2_BUFFER_SIZE__MASK) >> VIVS_HI_CHIP_SPECS_2_BUFFER_SIZE__SHIFT; - gpu->identity.instruction_count = (specs[1] & VIVS_HI_CHIP_SPECS_2_INSTRUCTION_COUNT__MASK) + gpu->identity.instruction_count = + (specs[1] & VIVS_HI_CHIP_SPECS_2_INSTRUCTION_COUNT__MASK) >> VIVS_HI_CHIP_SPECS_2_INSTRUCTION_COUNT__SHIFT; - gpu->identity.num_constants = (specs[1] & VIVS_HI_CHIP_SPECS_2_NUM_CONSTANTS__MASK) + gpu->identity.num_constants = + (specs[1] & VIVS_HI_CHIP_SPECS_2_NUM_CONSTANTS__MASK) >> VIVS_HI_CHIP_SPECS_2_NUM_CONSTANTS__SHIFT;
gpu->identity.register_max = 1 << gpu->identity.register_max; gpu->identity.thread_count = 1 << gpu->identity.thread_count; - gpu->identity.vertex_output_buffer_size = 1 << gpu->identity.vertex_output_buffer_size; + gpu->identity.vertex_output_buffer_size = + 1 << gpu->identity.vertex_output_buffer_size; } else { dev_err(gpu->dev->dev, "TODO: determine GPU specs based on model\n"); } @@ -165,16 +178,26 @@ static void etnaviv_hw_specs(struct etnaviv_gpu *gpu) break; }
- dev_info(gpu->dev->dev, "stream_count: %x\n", gpu->identity.stream_count); - dev_info(gpu->dev->dev, "register_max: %x\n", gpu->identity.register_max); - dev_info(gpu->dev->dev, "thread_count: %x\n", gpu->identity.thread_count); - dev_info(gpu->dev->dev, "vertex_cache_size: %x\n", gpu->identity.vertex_cache_size); - dev_info(gpu->dev->dev, "shader_core_count: %x\n", gpu->identity.shader_core_count); - dev_info(gpu->dev->dev, "pixel_pipes: %x\n", gpu->identity.pixel_pipes); - dev_info(gpu->dev->dev, "vertex_output_buffer_size: %x\n", gpu->identity.vertex_output_buffer_size); - dev_info(gpu->dev->dev, "buffer_size: %x\n", gpu->identity.buffer_size); - dev_info(gpu->dev->dev, "instruction_count: %x\n", gpu->identity.instruction_count); - dev_info(gpu->dev->dev, "num_constants: %x\n", gpu->identity.num_constants); + dev_info(gpu->dev->dev, "stream_count: %x\n", + gpu->identity.stream_count); + dev_info(gpu->dev->dev, "register_max: %x\n", + gpu->identity.register_max); + dev_info(gpu->dev->dev, "thread_count: %x\n", + gpu->identity.thread_count); + dev_info(gpu->dev->dev, "vertex_cache_size: %x\n", + gpu->identity.vertex_cache_size); + dev_info(gpu->dev->dev, "shader_core_count: %x\n", + gpu->identity.shader_core_count); + dev_info(gpu->dev->dev, "pixel_pipes: %x\n", + gpu->identity.pixel_pipes); + dev_info(gpu->dev->dev, "vertex_output_buffer_size: %x\n", + gpu->identity.vertex_output_buffer_size); + dev_info(gpu->dev->dev, "buffer_size: %x\n", + gpu->identity.buffer_size); + dev_info(gpu->dev->dev, "instruction_count: %x\n", + gpu->identity.instruction_count); + dev_info(gpu->dev->dev, "num_constants: %x\n", + gpu->identity.num_constants); }
static void etnaviv_hw_identify(struct etnaviv_gpu *gpu) @@ -192,23 +215,28 @@ static void etnaviv_hw_identify(struct etnaviv_gpu *gpu) gpu->identity.model = gpu_read(gpu, VIVS_HI_CHIP_MODEL); gpu->identity.revision = gpu_read(gpu, VIVS_HI_CHIP_REV);
- /* !!!! HACK ALERT !!!! */ - /* Because people change device IDs without letting software know - ** about it - here is the hack to make it all look the same. Only - ** for GC400 family. Next time - TELL ME!!! */ - if (((gpu->identity.model & 0xFF00) == 0x0400) - && (gpu->identity.model != 0x0420)) { + /* + * !!!! HACK ALERT !!!! + * Because people change device IDs without letting software + * know about it - here is the hack to make it all look the + * same. Only for GC400 family. + */ + if ((gpu->identity.model & 0xff00) == 0x0400 && + gpu->identity.model != 0x0420) { gpu->identity.model = gpu->identity.model & 0x0400; }
- /* An other special case */ - if ((gpu->identity.model == 0x300) - && (gpu->identity.revision == 0x2201)) { + /* Another special case */ + if (gpu->identity.model == 0x300 && + gpu->identity.revision == 0x2201) { u32 chipDate = gpu_read(gpu, VIVS_HI_CHIP_DATE); u32 chipTime = gpu_read(gpu, VIVS_HI_CHIP_TIME);
- if ((chipDate == 0x20080814) && (chipTime == 0x12051100)) { - /* This IP has an ECO; put the correct revision in it. */ + if (chipDate == 0x20080814 && chipTime == 0x12051100) { + /* + * This IP has an ECO; put the correct + * revision in it. + */ gpu->identity.revision = 0x1051; } } @@ -223,27 +251,38 @@ static void etnaviv_hw_identify(struct etnaviv_gpu *gpu) if (gpu->identity.model == 0x700) gpu->identity.features &= ~BIT(0);
- if (((gpu->identity.model == 0x500) && (gpu->identity.revision < 2)) - || ((gpu->identity.model == 0x300) && (gpu->identity.revision < 0x2000))) { + if ((gpu->identity.model == 0x500 && gpu->identity.revision < 2) || + (gpu->identity.model == 0x300 && gpu->identity.revision < 0x2000)) {
- /* GC500 rev 1.x and GC300 rev < 2.0 doesn't have these registers. */ + /* + * GC500 rev 1.x and GC300 rev < 2.0 don't have these + * registers. + */ gpu->identity.minor_features0 = 0; gpu->identity.minor_features1 = 0; gpu->identity.minor_features2 = 0; gpu->identity.minor_features3 = 0; } else - gpu->identity.minor_features0 = gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_0); + gpu->identity.minor_features0 = + gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_0);
if (gpu->identity.minor_features0 & BIT(21)) { - gpu->identity.minor_features1 = gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_1); - gpu->identity.minor_features2 = gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_2); - gpu->identity.minor_features3 = gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_3); + gpu->identity.minor_features1 = + gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_1); + gpu->identity.minor_features2 = + gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_2); + gpu->identity.minor_features3 = + gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_3); }
- dev_info(gpu->dev->dev, "minor_features: %x\n", gpu->identity.minor_features0); - dev_info(gpu->dev->dev, "minor_features1: %x\n", gpu->identity.minor_features1); - dev_info(gpu->dev->dev, "minor_features2: %x\n", gpu->identity.minor_features2); - dev_info(gpu->dev->dev, "minor_features3: %x\n", gpu->identity.minor_features3); + dev_info(gpu->dev->dev, "minor_features: %x\n", + gpu->identity.minor_features0); + dev_info(gpu->dev->dev, "minor_features1: %x\n", + gpu->identity.minor_features1); + dev_info(gpu->dev->dev, "minor_features2: %x\n", + gpu->identity.minor_features2); + dev_info(gpu->dev->dev, "minor_features3: %x\n", + gpu->identity.minor_features3);
etnaviv_hw_specs(gpu); } @@ -295,7 +334,8 @@ static void etnaviv_hw_reset(struct etnaviv_gpu *gpu)
/* try reseting again if FE it not idle */ if ((idle & VIVS_HI_IDLE_STATE_FE) == 0) { - dev_dbg(gpu->dev->dev, "%s: FE is not idle\n", gpu->name); + dev_dbg(gpu->dev->dev, "%s: FE is not idle\n", + gpu->name); continue; }
@@ -305,7 +345,8 @@ static void etnaviv_hw_reset(struct etnaviv_gpu *gpu) /* is the GPU idle? */ if (((control & VIVS_HI_CLOCK_CONTROL_IDLE_3D) == 0) || ((control & VIVS_HI_CLOCK_CONTROL_IDLE_2D) == 0)) { - dev_dbg(gpu->dev->dev, "%s: GPU is not idle\n", gpu->name); + dev_dbg(gpu->dev->dev, "%s: GPU is not idle\n", + gpu->name); continue; }
@@ -392,8 +433,11 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu) words = ALIGN(words, 2) / 2;
gpu_write(gpu, VIVS_HI_INTR_ENBL, ~0U); - gpu_write(gpu, VIVS_FE_COMMAND_ADDRESS, etnaviv_gem_paddr_locked(gpu->buffer)); - gpu_write(gpu, VIVS_FE_COMMAND_CONTROL, VIVS_FE_COMMAND_CONTROL_ENABLE | VIVS_FE_COMMAND_CONTROL_PREFETCH(words)); + gpu_write(gpu, VIVS_FE_COMMAND_ADDRESS, + etnaviv_gem_paddr_locked(gpu->buffer)); + gpu_write(gpu, VIVS_FE_COMMAND_CONTROL, + VIVS_FE_COMMAND_CONTROL_ENABLE | + VIVS_FE_COMMAND_CONTROL_PREFETCH(words));
return 0;
@@ -478,13 +522,13 @@ void etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct seq_file *m)
seq_puts(m, "\tDMA ");
- if ((debug.address[0] == debug.address[1]) && (debug.state[0] == debug.state[1])) { + if (debug.address[0] == debug.address[1] && + debug.state[0] == debug.state[1]) { seq_puts(m, "seems to be stuck\n"); + } else if (debug.address[0] == debug.address[1]) { + seq_puts(m, "address is constant\n"); } else { - if (debug.address[0] == debug.address[1]) - seq_puts(m, "adress is constant\n"); - else - seq_puts(m, "is runing\n"); + seq_puts(m, "is running\n"); }
seq_printf(m, "\t address 0: 0x%08x\n", debug.address[0]); @@ -509,7 +553,8 @@ static int enable_pwrrail(struct etnaviv_gpu *gpu) if (gpu->gpu_reg) { ret = regulator_enable(gpu->gpu_reg); if (ret) { - dev_err(dev->dev, "failed to enable 'gpu_reg': %d\n", ret); + dev_err(dev->dev, "failed to enable 'gpu_reg': %d\n", + ret); return ret; } } @@ -517,7 +562,8 @@ static int enable_pwrrail(struct etnaviv_gpu *gpu) if (gpu->gpu_cx) { ret = regulator_enable(gpu->gpu_cx); if (ret) { - dev_err(dev->dev, "failed to enable 'gpu_cx': %d\n", ret); + dev_err(dev->dev, "failed to enable 'gpu_cx': %d\n", + ret); return ret; } } @@ -619,7 +665,8 @@ int etnaviv_gpu_pm_suspend(struct etnaviv_gpu *gpu) */ static void recover_worker(struct work_struct *work) { - struct etnaviv_gpu *gpu = container_of(work, struct etnaviv_gpu, recover_work); + struct etnaviv_gpu *gpu = container_of(work, struct etnaviv_gpu, + recover_work); struct drm_device *dev = gpu->dev;
dev_err(dev->dev, "%s: hangcheck recover!\n", gpu->name); @@ -674,7 +721,8 @@ static unsigned int event_alloc(struct etnaviv_gpu *gpu) unsigned long ret, flags; unsigned int i, event = ~0U;
- ret = wait_for_completion_timeout(&gpu->event_free, msecs_to_jiffies(10 * 10000)); + ret = wait_for_completion_timeout(&gpu->event_free, + msecs_to_jiffies(10 * 10000)); if (!ret) dev_err(gpu->dev->dev, "wait_for_completion_timeout failed");
@@ -701,7 +749,8 @@ static void event_free(struct etnaviv_gpu *gpu, unsigned int event) spin_lock_irqsave(&gpu->event_spinlock, flags);
if (gpu->event[event].used == false) { - dev_warn(gpu->dev->dev, "event %u is already marked as free", event); + dev_warn(gpu->dev->dev, "event %u is already marked as free\n", + event); spin_unlock_irqrestore(&gpu->event_spinlock, flags); } else { gpu->event[event].used = false; @@ -717,7 +766,8 @@ static void event_free(struct etnaviv_gpu *gpu, unsigned int event)
static void retire_worker(struct work_struct *work) { - struct etnaviv_gpu *gpu = container_of(work, struct etnaviv_gpu, retire_work); + struct etnaviv_gpu *gpu = container_of(work, struct etnaviv_gpu, + retire_work); struct drm_device *dev = gpu->dev; uint32_t fence = gpu->retired_fence;
@@ -749,12 +799,13 @@ static void retire_worker(struct work_struct *work) void etnaviv_gpu_retire(struct etnaviv_gpu *gpu) { struct etnaviv_drm_private *priv = gpu->dev->dev_private; + queue_work(priv->wq, &gpu->retire_work); }
/* add bo's to gpu's ring, and kick gpu: */ -int etnaviv_gpu_submit(struct etnaviv_gpu *gpu, struct etnaviv_gem_submit *submit, - struct etnaviv_file_private *ctx) +int etnaviv_gpu_submit(struct etnaviv_gpu *gpu, + struct etnaviv_gem_submit *submit, struct etnaviv_file_private *ctx) { struct drm_device *dev = gpu->dev; struct etnaviv_drm_private *priv = dev->dev_private; @@ -798,14 +849,17 @@ int etnaviv_gpu_submit(struct etnaviv_gpu *gpu, struct etnaviv_gem_submit *submi
/* ring takes a reference to the bo and iova: */ drm_gem_object_reference(&etnaviv_obj->base); - etnaviv_gem_get_iova_locked(gpu, &etnaviv_obj->base, &iova); + etnaviv_gem_get_iova_locked(gpu, &etnaviv_obj->base, + &iova); }
if (submit->bos[i].flags & ETNA_SUBMIT_BO_READ) - etnaviv_gem_move_to_active(&etnaviv_obj->base, gpu, false, submit->fence); + etnaviv_gem_move_to_active(&etnaviv_obj->base, gpu, + false, submit->fence);
if (submit->bos[i].flags & ETNA_SUBMIT_BO_WRITE) - etnaviv_gem_move_to_active(&etnaviv_obj->base, gpu, true, submit->fence); + etnaviv_gem_move_to_active(&etnaviv_obj->base, gpu, + true, submit->fence); } hangcheck_timer_reset(gpu);
@@ -830,6 +884,7 @@ static irqreturn_t irq_handler(int irq, void *data) dev_err(gpu->dev->dev, "AXI bus error\n"); else { uint8_t event = __fls(intr); + dev_dbg(gpu->dev->dev, "event %u\n", event); gpu->retired_fence = gpu->event[event].fence; gpu->last_ring_pos = gpu->event[event].ring_pos; diff --git a/drivers/staging/etnaviv/etnaviv_gpu.h b/drivers/staging/etnaviv/etnaviv_gpu.h index 52db3dc54079..5afa0f74106c 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.h +++ b/drivers/staging/etnaviv/etnaviv_gpu.h @@ -96,7 +96,7 @@ struct etnaviv_gpu { /* event management: */ struct etnaviv_event event[30]; struct completion event_free; - struct spinlock event_spinlock; + spinlock_t event_spinlock;
/* list of GEM active objects: */ struct list_head active_list; @@ -139,7 +139,8 @@ static inline u32 gpu_read(struct etnaviv_gpu *gpu, u32 reg) return etnaviv_readl(gpu->mmio + reg); }
-int etnaviv_gpu_get_param(struct etnaviv_gpu *gpu, uint32_t param, uint64_t *value); +int etnaviv_gpu_get_param(struct etnaviv_gpu *gpu, uint32_t param, + uint64_t *value);
int etnaviv_gpu_init(struct etnaviv_gpu *gpu); int etnaviv_gpu_pm_suspend(struct etnaviv_gpu *gpu); @@ -150,8 +151,8 @@ void etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct seq_file *m); #endif
void etnaviv_gpu_retire(struct etnaviv_gpu *gpu); -int etnaviv_gpu_submit(struct etnaviv_gpu *gpu, struct etnaviv_gem_submit *submit, - struct etnaviv_file_private *ctx); +int etnaviv_gpu_submit(struct etnaviv_gpu *gpu, + struct etnaviv_gem_submit *submit, struct etnaviv_file_private *ctx);
extern struct platform_driver etnaviv_gpu_driver;
diff --git a/drivers/staging/etnaviv/etnaviv_iommu.c b/drivers/staging/etnaviv/etnaviv_iommu.c index 5841a08f627f..6aa91bcf1893 100644 --- a/drivers/staging/etnaviv/etnaviv_iommu.c +++ b/drivers/staging/etnaviv/etnaviv_iommu.c @@ -122,8 +122,8 @@ static int etnaviv_iommu_map(struct iommu_domain *domain, unsigned long iova, return 0; }
-static size_t etnaviv_iommu_unmap(struct iommu_domain *domain, unsigned long iova, - size_t size) +static size_t etnaviv_iommu_unmap(struct iommu_domain *domain, + unsigned long iova, size_t size) { struct etnaviv_iommu_domain *etnaviv_domain = domain->priv;
@@ -158,6 +158,7 @@ struct iommu_domain *etnaviv_iommu_domain_alloc(struct etnaviv_gpu *gpu) { struct iommu_domain *domain; struct etnaviv_iommu_domain *etnaviv_domain; + uint32_t pgtable; int ret;
domain = kzalloc(sizeof(*domain), GFP_KERNEL); @@ -172,12 +173,13 @@ struct iommu_domain *etnaviv_iommu_domain_alloc(struct etnaviv_gpu *gpu)
/* set page table address in MC */ etnaviv_domain = domain->priv; + pgtable = (uint32_t)etnaviv_domain->pgtable.paddr;
- gpu_write(gpu, VIVS_MC_MMU_FE_PAGE_TABLE, (uint32_t)etnaviv_domain->pgtable.paddr); - gpu_write(gpu, VIVS_MC_MMU_TX_PAGE_TABLE, (uint32_t)etnaviv_domain->pgtable.paddr); - gpu_write(gpu, VIVS_MC_MMU_PE_PAGE_TABLE, (uint32_t)etnaviv_domain->pgtable.paddr); - gpu_write(gpu, VIVS_MC_MMU_PEZ_PAGE_TABLE, (uint32_t)etnaviv_domain->pgtable.paddr); - gpu_write(gpu, VIVS_MC_MMU_RA_PAGE_TABLE, (uint32_t)etnaviv_domain->pgtable.paddr); + gpu_write(gpu, VIVS_MC_MMU_FE_PAGE_TABLE, pgtable); + gpu_write(gpu, VIVS_MC_MMU_TX_PAGE_TABLE, pgtable); + gpu_write(gpu, VIVS_MC_MMU_PE_PAGE_TABLE, pgtable); + gpu_write(gpu, VIVS_MC_MMU_PEZ_PAGE_TABLE, pgtable); + gpu_write(gpu, VIVS_MC_MMU_RA_PAGE_TABLE, pgtable);
return domain;
diff --git a/drivers/staging/etnaviv/etnaviv_mmu.c b/drivers/staging/etnaviv/etnaviv_mmu.c index cee97e11117d..94a6aa9f9c6f 100644 --- a/drivers/staging/etnaviv/etnaviv_mmu.c +++ b/drivers/staging/etnaviv/etnaviv_mmu.c @@ -57,6 +57,7 @@ fail:
for_each_sg(sgt->sgl, sg, i, j) { size_t bytes = sg->length + sg->offset; + iommu_unmap(domain, da, bytes); da += bytes; } @@ -95,7 +96,8 @@ void etnaviv_iommu_destroy(struct etnaviv_iommu *mmu) kfree(mmu); }
-struct etnaviv_iommu *etnaviv_iommu_new(struct drm_device *dev, struct iommu_domain *domain) +struct etnaviv_iommu *etnaviv_iommu_new(struct drm_device *dev, + struct iommu_domain *domain) { struct etnaviv_iommu *mmu;
diff --git a/drivers/staging/etnaviv/etnaviv_mmu.h b/drivers/staging/etnaviv/etnaviv_mmu.h index 02e7adcc96d7..7b97ef35d290 100644 --- a/drivers/staging/etnaviv/etnaviv_mmu.h +++ b/drivers/staging/etnaviv/etnaviv_mmu.h @@ -25,13 +25,15 @@ struct etnaviv_iommu { struct iommu_domain *domain; };
-int etnaviv_iommu_attach(struct etnaviv_iommu *iommu, const char **names, int cnt); -int etnaviv_iommu_map(struct etnaviv_iommu *iommu, uint32_t iova, struct sg_table *sgt, - unsigned len, int prot); -int etnaviv_iommu_unmap(struct etnaviv_iommu *iommu, uint32_t iova, struct sg_table *sgt, - unsigned len); +int etnaviv_iommu_attach(struct etnaviv_iommu *iommu, const char **names, + int cnt); +int etnaviv_iommu_map(struct etnaviv_iommu *iommu, uint32_t iova, + struct sg_table *sgt, unsigned len, int prot); +int etnaviv_iommu_unmap(struct etnaviv_iommu *iommu, uint32_t iova, + struct sg_table *sgt, unsigned len); void etnaviv_iommu_destroy(struct etnaviv_iommu *iommu);
-struct etnaviv_iommu *etnaviv_iommu_new(struct drm_device *dev, struct iommu_domain *domain); +struct etnaviv_iommu *etnaviv_iommu_new(struct drm_device *dev, + struct iommu_domain *domain);
#endif /* __ETNAVIV_MMU_H__ */ diff --git a/include/uapi/drm/etnaviv_drm.h b/include/uapi/drm/etnaviv_drm.h index 76596fc46150..9654021017fd 100644 --- a/include/uapi/drm/etnaviv_drm.h +++ b/include/uapi/drm/etnaviv_drm.h @@ -131,7 +131,7 @@ struct drm_etnaviv_gem_cpu_fini { struct drm_etnaviv_gem_submit_reloc { uint32_t submit_offset; /* in, offset from submit_bo */ uint32_t or; /* in, value OR'd with result */ - int32_t shift; /* in, amount of left shift (can be negative) */ + int32_t shift; /* in, amount of left shift (can be -ve) */ uint32_t reloc_idx; /* in, index of reloc_bo buffer */ uint64_t reloc_offset; /* in, offset from start of reloc_bo */ };
From: Russell King rmk+kernel@arm.linux.org.uk
get_pages() tries to convert a page array into a scatterlist. If this fails, we bail out without freeing the page array. Add a call to drm_gem_put_pages() to drop the reference gained in drm_gem_get_pages(), indicating that we didn't access the pages.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 18f607b6532f..9e3cd61507d1 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -45,6 +45,7 @@ static struct page **get_pages(struct drm_gem_object *obj) etnaviv_obj->sgt = drm_prime_pages_to_sg(p, npages); if (IS_ERR(etnaviv_obj->sgt)) { dev_err(dev->dev, "failed to allocate sgt\n"); + drm_gem_put_pages(obj, p, false, false); return ERR_CAST(etnaviv_obj->sgt); }
From: Russell King rmk+kernel@arm.linux.org.uk
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem.c | 4 +++- drivers/staging/etnaviv/etnaviv_gem.h | 7 +++++++ 2 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 9e3cd61507d1..eface33ad445 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -536,7 +536,9 @@ static void etnaviv_free_obj(struct drm_gem_object *obj)
drm_gem_free_mmap_offset(obj);
- if (obj->import_attach) { + if (etnaviv_obj->ops) { + etnaviv_obj->ops->release(etnaviv_obj); + } else if (obj->import_attach) { if (etnaviv_obj->vaddr) dma_buf_vunmap(obj->import_attach->dmabuf, etnaviv_obj->vaddr); diff --git a/drivers/staging/etnaviv/etnaviv_gem.h b/drivers/staging/etnaviv/etnaviv_gem.h index 97302ca6efaa..676cbd46c600 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.h +++ b/drivers/staging/etnaviv/etnaviv_gem.h @@ -21,8 +21,11 @@ #include <linux/reservation.h> #include "etnaviv_drv.h"
+struct etnaviv_gem_ops; + struct etnaviv_gem_object { struct drm_gem_object base; + const struct etnaviv_gem_ops *ops;
uint32_t flags;
@@ -64,6 +67,10 @@ struct etnaviv_gem_object { }; #define to_etnaviv_bo(x) container_of(x, struct etnaviv_gem_object, base)
+struct etnaviv_gem_ops { + void (*release)(struct etnaviv_gem_object *); +}; + static inline bool is_active(struct etnaviv_gem_object *etnaviv_obj) { return etnaviv_obj->gpu != NULL;
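The ops structure added above is the standard kernel vtable idiom: each object carries a pointer to a const operations table chosen at creation time, and generic teardown code dispatches through it instead of testing object-type flags. A minimal userspace sketch of the same pattern (all names here are illustrative, not the driver's):

```c
#include <assert.h>

struct obj;

/* Per-type operations table, analogous to etnaviv_gem_ops. */
struct obj_ops {
	void (*release)(struct obj *);
};

struct obj {
	const struct obj_ops *ops;
	int released_as;	/* records which release ran, for demonstration */
};

static void shmem_release(struct obj *o) { o->released_as = 1; }
static void prime_release(struct obj *o) { o->released_as = 2; }

static const struct obj_ops shmem_ops = { .release = shmem_release };
static const struct obj_ops prime_ops = { .release = prime_release };

/* Generic cleanup: no flag or import_attach tests, just dispatch. */
static void obj_free(struct obj *o)
{
	o->ops->release(o);
}
```

The follow-up patches in this series move each object type's cleanup (prime, shmem, cmdstream) behind such a table, which is what eventually lets the type tests in etnaviv_free_obj() disappear.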
From: Russell King rmk+kernel@arm.linux.org.uk
Convert the prime import code to use the etnaviv_gem_ops release method to clean up the object. This removes the prime-specific code from the generic object cleanup path.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem.c | 32 ++++++++++++++++++++------------ 1 file changed, 20 insertions(+), 12 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index eface33ad445..865b34b8496c 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -538,18 +538,6 @@ static void etnaviv_free_obj(struct drm_gem_object *obj)
if (etnaviv_obj->ops) { etnaviv_obj->ops->release(etnaviv_obj); - } else if (obj->import_attach) { - if (etnaviv_obj->vaddr) - dma_buf_vunmap(obj->import_attach->dmabuf, - etnaviv_obj->vaddr); - - /* Don't drop the pages for imported dmabuf, as they are not - * ours, just free the array we allocated: - */ - if (etnaviv_obj->pages) - drm_free_large(etnaviv_obj->pages); - - drm_prime_gem_destroy(obj, etnaviv_obj->sgt); } else { if (etnaviv_obj->vaddr) vunmap(etnaviv_obj->vaddr); @@ -698,6 +686,25 @@ fail: return ERR_PTR(ret); }
+static void etnaviv_gem_prime_release(struct etnaviv_gem_object *etnaviv_obj) +{ + if (etnaviv_obj->vaddr) + dma_buf_vunmap(etnaviv_obj->base.import_attach->dmabuf, + etnaviv_obj->vaddr); + + /* Don't drop the pages for imported dmabuf, as they are not + * ours, just free the array we allocated: + */ + if (etnaviv_obj->pages) + drm_free_large(etnaviv_obj->pages); + + drm_prime_gem_destroy(&etnaviv_obj->base, etnaviv_obj->sgt); +} + +static const struct etnaviv_gem_ops etnaviv_gem_prime_ops = { + .release = etnaviv_gem_prime_release, +}; + struct drm_gem_object *msm_gem_import(struct drm_device *dev, uint32_t size, struct sg_table *sgt) { @@ -716,6 +723,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, npages = size / PAGE_SIZE;
etnaviv_obj = to_etnaviv_bo(obj); + etnaviv_obj->ops = &etnaviv_gem_prime_ops; etnaviv_obj->sgt = sgt; etnaviv_obj->pages = drm_malloc_ab(npages, sizeof(struct page *)); if (!etnaviv_obj->pages) {
From: Russell King rmk+kernel@arm.linux.org.uk
Convert the shmem object release to use the etnaviv_gem_ops release method.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem.c | 25 ++++++++++++++++--------- 1 file changed, 16 insertions(+), 9 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 865b34b8496c..38e6b8ab3124 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -536,13 +536,7 @@ static void etnaviv_free_obj(struct drm_gem_object *obj)
drm_gem_free_mmap_offset(obj);
- if (etnaviv_obj->ops) { - etnaviv_obj->ops->release(etnaviv_obj); - } else { - if (etnaviv_obj->vaddr) - vunmap(etnaviv_obj->vaddr); - put_pages(obj); - } + etnaviv_obj->ops->release(etnaviv_obj);
if (etnaviv_obj->resv == &etnaviv_obj->_resv) reservation_object_fini(etnaviv_obj->resv); @@ -550,6 +544,17 @@ static void etnaviv_free_obj(struct drm_gem_object *obj) drm_gem_object_release(obj); }
+static void etnaviv_gem_shmem_release(struct etnaviv_gem_object *etnaviv_obj) +{ + if (etnaviv_obj->vaddr) + vunmap(etnaviv_obj->vaddr); + put_pages(&etnaviv_obj->base); +} + +static const struct etnaviv_gem_ops etnaviv_gem_shmem_ops = { + .release = etnaviv_gem_shmem_release, +}; + void etnaviv_gem_free_object(struct drm_gem_object *obj) { struct drm_device *dev = obj->dev; @@ -669,10 +674,12 @@ struct drm_gem_object *etnaviv_gem_new(struct drm_device *dev, goto fail;
ret = 0; - if (flags & ETNA_BO_CMDSTREAM) + if (flags & ETNA_BO_CMDSTREAM) { drm_gem_private_object_init(dev, obj, size); - else + } else { + to_etnaviv_bo(obj)->ops = &etnaviv_gem_shmem_ops; ret = drm_gem_object_init(dev, obj, size); + }
if (ret) goto fail;
From: Russell King rmk+kernel@arm.linux.org.uk
Convert the command buffer release handling to use the etnaviv_gem_ops release method.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem.c | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 38e6b8ab3124..58a56b9e7abc 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -514,12 +514,21 @@ static void etnaviv_free_cmd(struct drm_gem_object *obj)
drm_gem_free_mmap_offset(obj);
- dma_free_coherent(obj->dev->dev, obj->size, + etnaviv_obj->ops->release(etnaviv_obj); +} + +static void etnaviv_gem_cmd_release(struct etnaviv_gem_object *etnaviv_obj) +{ + dma_free_coherent(etnaviv_obj->base.dev->dev, etnaviv_obj->base.size, etnaviv_obj->vaddr, etnaviv_obj->paddr);
drm_gem_object_release(obj); }
+static const struct etnaviv_gem_ops etnaviv_gem_cmd_ops = { + .release = etnaviv_gem_cmd_release, +}; + static void etnaviv_free_obj(struct drm_gem_object *obj) { struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); @@ -675,6 +684,7 @@ struct drm_gem_object *etnaviv_gem_new(struct drm_device *dev,
ret = 0; if (flags & ETNA_BO_CMDSTREAM) { + to_etnaviv_bo(obj)->ops = &etnaviv_gem_cmd_ops; drm_gem_private_object_init(dev, obj, size); } else { to_etnaviv_bo(obj)->ops = &etnaviv_gem_shmem_ops;
From: Russell King rmk+kernel@arm.linux.org.uk
We always call drm_gem_object_release() from both etnaviv_free_cmd() and etnaviv_free_obj(). Move this into the parent function so it is done in only one place.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 58a56b9e7abc..827622588ffc 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -521,8 +521,6 @@ static void etnaviv_gem_cmd_release(struct etnaviv_gem_object *etnaviv_obj) { dma_free_coherent(etnaviv_obj->base.dev->dev, etnaviv_obj->base.size, etnaviv_obj->vaddr, etnaviv_obj->paddr); - - drm_gem_object_release(obj); }
static const struct etnaviv_gem_ops etnaviv_gem_cmd_ops = { @@ -549,8 +547,6 @@ static void etnaviv_free_obj(struct drm_gem_object *obj)
if (etnaviv_obj->resv == &etnaviv_obj->_resv) reservation_object_fini(etnaviv_obj->resv); - - drm_gem_object_release(obj); }
static void etnaviv_gem_shmem_release(struct etnaviv_gem_object *etnaviv_obj) @@ -581,6 +577,8 @@ void etnaviv_gem_free_object(struct drm_gem_object *obj) else etnaviv_free_obj(obj);
+ drm_gem_object_release(obj); + kfree(etnaviv_obj); }
From: Russell King rmk+kernel@arm.linux.org.uk
The embedded reservation object is always initialised whenever we create an etnaviv buffer object, but it is not always cleaned up. Arrange this to be clearer: always initialise the embedded reservation object directly, and always clean the embedded reservation object up when removing a buffer object.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 827622588ffc..824338e0068d 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -544,9 +544,6 @@ static void etnaviv_free_obj(struct drm_gem_object *obj) drm_gem_free_mmap_offset(obj);
etnaviv_obj->ops->release(etnaviv_obj); - - if (etnaviv_obj->resv == &etnaviv_obj->_resv) - reservation_object_fini(etnaviv_obj->resv); }
static void etnaviv_gem_shmem_release(struct etnaviv_gem_object *etnaviv_obj) @@ -577,6 +574,7 @@ void etnaviv_gem_free_object(struct drm_gem_object *obj) else etnaviv_free_obj(obj);
+ reservation_object_fini(&etnaviv_obj->_resv); drm_gem_object_release(obj);
kfree(etnaviv_obj); @@ -656,7 +654,7 @@ static int etnaviv_gem_new_impl(struct drm_device *dev, etnaviv_obj->flags = flags;
etnaviv_obj->resv = &etnaviv_obj->_resv; - reservation_object_init(etnaviv_obj->resv); + reservation_object_init(&etnaviv_obj->_resv);
INIT_LIST_HEAD(&etnaviv_obj->submit_entry); list_add_tail(&etnaviv_obj->mm_list, &priv->inactive_list);
From: Russell King rmk+kernel@arm.linux.org.uk
As the tail of etnaviv_free_obj() is identical to etnaviv_free_cmd(), we can eliminate etnaviv_free_obj() entirely by moving it into etnaviv_gem_free_object().
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem.c | 19 +++---------------- 1 file changed, 3 insertions(+), 16 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 824338e0068d..0a0cf92ff13f 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -508,15 +508,6 @@ void msm_gem_describe_objects(struct list_head *list, struct seq_file *m) } #endif
-static void etnaviv_free_cmd(struct drm_gem_object *obj) -{ - struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); - - drm_gem_free_mmap_offset(obj); - - etnaviv_obj->ops->release(etnaviv_obj); -} - static void etnaviv_gem_cmd_release(struct etnaviv_gem_object *etnaviv_obj) { dma_free_coherent(etnaviv_obj->base.dev->dev, etnaviv_obj->base.size, @@ -540,10 +531,6 @@ static void etnaviv_free_obj(struct drm_gem_object *obj) drm_mm_remove_node(etnaviv_obj->gpu_vram_node); kfree(etnaviv_obj->gpu_vram_node); } - - drm_gem_free_mmap_offset(obj); - - etnaviv_obj->ops->release(etnaviv_obj); }
static void etnaviv_gem_shmem_release(struct etnaviv_gem_object *etnaviv_obj) @@ -569,11 +556,11 @@ void etnaviv_gem_free_object(struct drm_gem_object *obj)
list_del(&etnaviv_obj->mm_list);
- if (etnaviv_obj->flags & ETNA_BO_CMDSTREAM) - etnaviv_free_cmd(obj); - else + if (!(etnaviv_obj->flags & ETNA_BO_CMDSTREAM)) etnaviv_free_obj(obj);
+ drm_gem_free_mmap_offset(obj); + etnaviv_obj->ops->release(etnaviv_obj); reservation_object_fini(&etnaviv_obj->_resv); drm_gem_object_release(obj);
From: Russell King rmk+kernel@arm.linux.org.uk
Add etnaviv_gem_new_private(), which creates a private, non-shmem GEM object. Fix up msm_gem_import() to use it.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem.c | 32 ++++++++++++++++++++++---------- drivers/staging/etnaviv/etnaviv_gem.h | 3 +++ 2 files changed, 25 insertions(+), 10 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 0a0cf92ff13f..185cd1702b2e 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -686,6 +686,23 @@ fail: return ERR_PTR(ret); }
+int etnaviv_gem_new_private(struct drm_device *dev, size_t size, uint32_t flags, + struct etnaviv_gem_object **res) +{ + struct drm_gem_object *obj; + int ret; + + ret = etnaviv_gem_new_impl(dev, size, flags, &obj); + if (ret) + return ret; + + drm_gem_private_object_init(dev, obj, size); + + *res = to_etnaviv_bo(obj); + + return 0; +} + static void etnaviv_gem_prime_release(struct etnaviv_gem_object *etnaviv_obj) { if (etnaviv_obj->vaddr) @@ -709,20 +726,16 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, uint32_t size, struct sg_table *sgt) { struct etnaviv_gem_object *etnaviv_obj; - struct drm_gem_object *obj; int ret, npages;
size = PAGE_ALIGN(size);
- ret = etnaviv_gem_new_impl(dev, size, ETNA_BO_WC, &obj); - if (ret) - goto fail; - - drm_gem_private_object_init(dev, obj, size); + ret = etnaviv_gem_new_private(dev, size, ETNA_BO_WC, &etnaviv_obj); + if (ret < 0) + return ERR_PTR(ret);
npages = size / PAGE_SIZE;
- etnaviv_obj = to_etnaviv_bo(obj); etnaviv_obj->ops = &etnaviv_gem_prime_ops; etnaviv_obj->sgt = sgt; etnaviv_obj->pages = drm_malloc_ab(npages, sizeof(struct page *)); @@ -736,11 +749,10 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, if (ret) goto fail;
- return obj; + return &etnaviv_obj->base;
fail: - if (obj) - drm_gem_object_unreference_unlocked(obj); + drm_gem_object_unreference_unlocked(&etnaviv_obj->base);
return ERR_PTR(ret); } diff --git a/drivers/staging/etnaviv/etnaviv_gem.h b/drivers/staging/etnaviv/etnaviv_gem.h index 676cbd46c600..65c9740542da 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.h +++ b/drivers/staging/etnaviv/etnaviv_gem.h @@ -105,4 +105,7 @@ struct etnaviv_gem_submit { } bos[0]; };
+int etnaviv_gem_new_private(struct drm_device *dev, size_t size, uint32_t flags, + struct etnaviv_gem_object **res); + #endif /* __ETNAVIV_GEM_H__ */
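The new helper follows the kernel's 0-or-negative-errno constructor convention with an out-parameter, rather than the ERR_PTR-returning style used elsewhere in this file. A hedged userspace sketch of that convention (struct thing and thing_new() are invented for illustration, not part of the driver):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

struct thing {
	size_t size;
};

/* Constructor in the etnaviv_gem_new_private() style:
 * returns 0 on success and stores the object via *res,
 * or a negative errno on failure, leaving *res untouched. */
static int thing_new(size_t size, struct thing **res)
{
	struct thing *t;

	if (size == 0)
		return -EINVAL;

	t = malloc(sizeof(*t));
	if (!t)
		return -ENOMEM;

	t->size = size;
	*res = t;
	return 0;
}
```

Callers check `ret < 0` and translate back to ERR_PTR at the boundary where a pointer return is required, exactly as the reworked msm_gem_import() does above.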
From: Russell King rmk+kernel@arm.linux.org.uk
Move the prime import code out into etnaviv_gem_prime.c, which keeps all this functionality together.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem.c | 55 ----------------------------- drivers/staging/etnaviv/etnaviv_gem_prime.c | 55 +++++++++++++++++++++++++++++ 2 files changed, 55 insertions(+), 55 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 185cd1702b2e..1cd5c6bc2532 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -17,7 +17,6 @@
#include <linux/spinlock.h> #include <linux/shmem_fs.h> -#include <linux/dma-buf.h>
#include "etnaviv_drv.h" #include "etnaviv_gem.h" @@ -702,57 +701,3 @@ int etnaviv_gem_new_private(struct drm_device *dev, size_t size, uint32_t flags,
return 0; } - -static void etnaviv_gem_prime_release(struct etnaviv_gem_object *etnaviv_obj) -{ - if (etnaviv_obj->vaddr) - dma_buf_vunmap(etnaviv_obj->base.import_attach->dmabuf, - etnaviv_obj->vaddr); - - /* Don't drop the pages for imported dmabuf, as they are not - * ours, just free the array we allocated: - */ - if (etnaviv_obj->pages) - drm_free_large(etnaviv_obj->pages); - - drm_prime_gem_destroy(&etnaviv_obj->base, etnaviv_obj->sgt); -} - -static const struct etnaviv_gem_ops etnaviv_gem_prime_ops = { - .release = etnaviv_gem_prime_release, -}; - -struct drm_gem_object *msm_gem_import(struct drm_device *dev, - uint32_t size, struct sg_table *sgt) -{ - struct etnaviv_gem_object *etnaviv_obj; - int ret, npages; - - size = PAGE_ALIGN(size); - - ret = etnaviv_gem_new_private(dev, size, ETNA_BO_WC, &etnaviv_obj); - if (ret < 0) - return ERR_PTR(ret); - - npages = size / PAGE_SIZE; - - etnaviv_obj->ops = &etnaviv_gem_prime_ops; - etnaviv_obj->sgt = sgt; - etnaviv_obj->pages = drm_malloc_ab(npages, sizeof(struct page *)); - if (!etnaviv_obj->pages) { - ret = -ENOMEM; - goto fail; - } - - ret = drm_prime_sg_to_page_addr_arrays(sgt, etnaviv_obj->pages, - NULL, npages); - if (ret) - goto fail; - - return &etnaviv_obj->base; - -fail: - drm_gem_object_unreference_unlocked(&etnaviv_obj->base); - - return ERR_PTR(ret); -} diff --git a/drivers/staging/etnaviv/etnaviv_gem_prime.c b/drivers/staging/etnaviv/etnaviv_gem_prime.c index 9c152b5640bc..4cf9e043c604 100644 --- a/drivers/staging/etnaviv/etnaviv_gem_prime.c +++ b/drivers/staging/etnaviv/etnaviv_gem_prime.c @@ -15,6 +15,7 @@ * this program. If not, see http://www.gnu.org/licenses/. */
+#include <linux/dma-buf.h> #include "etnaviv_drv.h" #include "etnaviv_gem.h"
@@ -56,3 +57,57 @@ void msm_gem_prime_unpin(struct drm_gem_object *obj) if (!obj->import_attach) msm_gem_put_pages(obj); } + +static void etnaviv_gem_prime_release(struct etnaviv_gem_object *etnaviv_obj) +{ + if (etnaviv_obj->vaddr) + dma_buf_vunmap(etnaviv_obj->base.import_attach->dmabuf, + etnaviv_obj->vaddr); + + /* Don't drop the pages for imported dmabuf, as they are not + * ours, just free the array we allocated: + */ + if (etnaviv_obj->pages) + drm_free_large(etnaviv_obj->pages); + + drm_prime_gem_destroy(&etnaviv_obj->base, etnaviv_obj->sgt); +} + +static const struct etnaviv_gem_ops etnaviv_gem_prime_ops = { + .release = etnaviv_gem_prime_release, +}; + +struct drm_gem_object *msm_gem_import(struct drm_device *dev, + uint32_t size, struct sg_table *sgt) +{ + struct etnaviv_gem_object *etnaviv_obj; + int ret, npages; + + size = PAGE_ALIGN(size); + + ret = etnaviv_gem_new_private(dev, size, ETNA_BO_WC, &etnaviv_obj); + if (ret < 0) + return ERR_PTR(ret); + + npages = size / PAGE_SIZE; + + etnaviv_obj->ops = &etnaviv_gem_prime_ops; + etnaviv_obj->sgt = sgt; + etnaviv_obj->pages = drm_malloc_ab(npages, sizeof(struct page *)); + if (!etnaviv_obj->pages) { + ret = -ENOMEM; + goto fail; + } + + ret = drm_prime_sg_to_page_addr_arrays(sgt, etnaviv_obj->pages, + NULL, npages); + if (ret) + goto fail; + + return &etnaviv_obj->base; + +fail: + drm_gem_object_unreference_unlocked(&etnaviv_obj->base); + + return ERR_PTR(ret); +}
From: Russell King rmk+kernel@arm.linux.org.uk
Clean up the etnaviv prime import handling by combining msm_gem_import() and msm_gem_prime_import_sg_table(), and then giving it an etnaviv_ prefix.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_drv.c | 2 +- drivers/staging/etnaviv/etnaviv_drv.h | 6 ++---- drivers/staging/etnaviv/etnaviv_gem_prime.c | 13 +++---------- 3 files changed, 6 insertions(+), 15 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index 2d44fcd7299e..2b6800a782bb 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -508,7 +508,7 @@ static struct drm_driver etnaviv_drm_driver = { .gem_prime_pin = msm_gem_prime_pin, .gem_prime_unpin = msm_gem_prime_unpin, .gem_prime_get_sg_table = msm_gem_prime_get_sg_table, - .gem_prime_import_sg_table = msm_gem_prime_import_sg_table, + .gem_prime_import_sg_table = etnaviv_gem_prime_import_sg_table, .gem_prime_vmap = msm_gem_prime_vmap, .gem_prime_vunmap = msm_gem_prime_vunmap, #ifdef CONFIG_DEBUG_FS diff --git a/drivers/staging/etnaviv/etnaviv_drv.h b/drivers/staging/etnaviv/etnaviv_drv.h index a1543734bc2f..bdf6685ef6cc 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.h +++ b/drivers/staging/etnaviv/etnaviv_drv.h @@ -91,8 +91,8 @@ int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj); void *msm_gem_prime_vmap(struct drm_gem_object *obj); void msm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); -struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, - size_t size, struct sg_table *sg); +struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev, + struct dma_buf_attachment *attach, struct sg_table *sg); int msm_gem_prime_pin(struct drm_gem_object *obj); void msm_gem_prime_unpin(struct drm_gem_object *obj); void *etnaviv_gem_vaddr_locked(struct drm_gem_object *obj); @@ -109,8 +109,6 @@ int etnaviv_gem_new_handle(struct drm_device *dev, struct drm_file *file, uint32_t size, uint32_t flags, uint32_t *handle); struct drm_gem_object *etnaviv_gem_new(struct drm_device *dev, uint32_t size, uint32_t flags); -struct drm_gem_object *msm_gem_import(struct drm_device *dev, - uint32_t size, struct sg_table *sgt); u32 etnaviv_buffer_init(struct etnaviv_gpu *gpu); void 
etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct etnaviv_gem_submit *submit); diff --git a/drivers/staging/etnaviv/etnaviv_gem_prime.c b/drivers/staging/etnaviv/etnaviv_gem_prime.c index 4cf9e043c604..d9742ae1fad1 100644 --- a/drivers/staging/etnaviv/etnaviv_gem_prime.c +++ b/drivers/staging/etnaviv/etnaviv_gem_prime.c @@ -39,12 +39,6 @@ void msm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) /* TODO msm_gem_vunmap() */ }
-struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, - size_t size, struct sg_table *sg) -{ - return msm_gem_import(dev, size, sg); -} - int msm_gem_prime_pin(struct drm_gem_object *obj) { if (!obj->import_attach) @@ -77,14 +71,13 @@ static const struct etnaviv_gem_ops etnaviv_gem_prime_ops = { .release = etnaviv_gem_prime_release, };
-struct drm_gem_object *msm_gem_import(struct drm_device *dev, - uint32_t size, struct sg_table *sgt) +struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev, + struct dma_buf_attachment *attach, struct sg_table *sgt) { struct etnaviv_gem_object *etnaviv_obj; + size_t size = PAGE_ALIGN(attach->dmabuf->size); int ret, npages;
- size = PAGE_ALIGN(size); - ret = etnaviv_gem_new_private(dev, size, ETNA_BO_WC, &etnaviv_obj); if (ret < 0) return ERR_PTR(ret);
From: Russell King rmk+kernel@arm.linux.org.uk
Convert the internal get_pages()/put_pages() functions to take an etnaviv_obj rather than converting between drm_gem_object and our private one.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem.c | 33 ++++++++++++++++----------------- 1 file changed, 16 insertions(+), 17 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 1cd5c6bc2532..493ffc025569 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -24,16 +24,14 @@ #include "etnaviv_mmu.h"
/* called with dev->struct_mutex held */ -static struct page **get_pages(struct drm_gem_object *obj) +static struct page **get_pages(struct etnaviv_gem_object *etnaviv_obj) { - struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); - if (!etnaviv_obj->pages) { - struct drm_device *dev = obj->dev; + struct drm_device *dev = etnaviv_obj->base.dev; struct page **p; - int npages = obj->size >> PAGE_SHIFT; + int npages = etnaviv_obj->base.size >> PAGE_SHIFT;
- p = drm_gem_get_pages(obj); + p = drm_gem_get_pages(&etnaviv_obj->base);
if (IS_ERR(p)) { dev_err(dev->dev, "could not get pages: %ld\n", @@ -44,7 +42,7 @@ static struct page **get_pages(struct drm_gem_object *obj) etnaviv_obj->sgt = drm_prime_pages_to_sg(p, npages); if (IS_ERR(etnaviv_obj->sgt)) { dev_err(dev->dev, "failed to allocate sgt\n"); - drm_gem_put_pages(obj, p, false, false); + drm_gem_put_pages(&etnaviv_obj->base, p, false, false); return ERR_CAST(etnaviv_obj->sgt); }
@@ -61,22 +59,23 @@ static struct page **get_pages(struct drm_gem_object *obj) return etnaviv_obj->pages; }
-static void put_pages(struct drm_gem_object *obj) +static void put_pages(struct etnaviv_gem_object *etnaviv_obj) { - struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); - if (etnaviv_obj->pages) { + struct drm_device *dev = etnaviv_obj->base.dev; + /* For non-cached buffers, ensure the new pages are clean * because display controller, GPU, etc. are not coherent: */ if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_UNCACHED)) - dma_unmap_sg(obj->dev->dev, etnaviv_obj->sgt->sgl, + dma_unmap_sg(dev->dev, etnaviv_obj->sgt->sgl, etnaviv_obj->sgt->nents, DMA_BIDIRECTIONAL); sg_free_table(etnaviv_obj->sgt); kfree(etnaviv_obj->sgt);
- drm_gem_put_pages(obj, etnaviv_obj->pages, true, false); + drm_gem_put_pages(&etnaviv_obj->base, etnaviv_obj->pages, + true, false);
etnaviv_obj->pages = NULL; } @@ -88,7 +87,7 @@ struct page **etnaviv_gem_get_pages(struct drm_gem_object *obj) struct page **p;
mutex_lock(&dev->struct_mutex); - p = get_pages(obj); + p = get_pages(to_etnaviv_bo(obj)); mutex_unlock(&dev->struct_mutex);
return p; @@ -189,7 +188,7 @@ int etnaviv_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf) goto out;
/* make sure we have pages attached now */ - pages = get_pages(obj); + pages = get_pages(to_etnaviv_bo(obj)); if (IS_ERR(pages)) { ret = PTR_ERR(pages); goto out_unlock; @@ -273,7 +272,7 @@ int etnaviv_gem_get_iova_locked(struct etnaviv_gpu *gpu, if (!etnaviv_obj->iova && !(etnaviv_obj->flags & ETNA_BO_CMDSTREAM)) { struct etnaviv_drm_private *priv = obj->dev->dev_private; struct etnaviv_iommu *mmu = priv->mmu; - struct page **pages = get_pages(obj); + struct page **pages = get_pages(etnaviv_obj); uint32_t offset; struct drm_mm_node *node = NULL;
@@ -375,7 +374,7 @@ void *etnaviv_gem_vaddr_locked(struct drm_gem_object *obj) WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
if (!etnaviv_obj->vaddr) { - struct page **pages = get_pages(obj); + struct page **pages = get_pages(etnaviv_obj);
if (IS_ERR(pages)) return ERR_CAST(pages); @@ -536,7 +535,7 @@ static void etnaviv_gem_shmem_release(struct etnaviv_gem_object *etnaviv_obj) { if (etnaviv_obj->vaddr) vunmap(etnaviv_obj->vaddr); - put_pages(&etnaviv_obj->base); + put_pages(etnaviv_obj); }
static const struct etnaviv_gem_ops etnaviv_gem_shmem_ops = {
From: Russell King rmk+kernel@arm.linux.org.uk
Move the locking into etnaviv_gem_prime.c and pass an etnaviv_gem_object rather than drm_gem_object. As this becomes an internal gem function, move the prototype into etnaviv_gem.h.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_drv.h | 2 -- drivers/staging/etnaviv/etnaviv_gem.c | 13 +++---------- drivers/staging/etnaviv/etnaviv_gem.h | 2 ++ drivers/staging/etnaviv/etnaviv_gem_prime.c | 18 ++++++++++++++---- 4 files changed, 19 insertions(+), 16 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.h b/drivers/staging/etnaviv/etnaviv_drv.h index bdf6685ef6cc..79f22d15fc63 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.h +++ b/drivers/staging/etnaviv/etnaviv_drv.h @@ -81,8 +81,6 @@ int etnaviv_gem_get_iova_locked(struct etnaviv_gpu *gpu, struct drm_gem_object *obj, uint32_t *iova); int etnaviv_gem_get_iova(struct etnaviv_gpu *gpu, struct drm_gem_object *obj, int id, uint32_t *iova); -struct page **etnaviv_gem_get_pages(struct drm_gem_object *obj); -void msm_gem_put_pages(struct drm_gem_object *obj); void etnaviv_gem_put_iova(struct drm_gem_object *obj); int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, struct drm_mode_create_dumb *args); diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 493ffc025569..2120cd378d86 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -81,19 +81,12 @@ static void put_pages(struct etnaviv_gem_object *etnaviv_obj) } }
-struct page **etnaviv_gem_get_pages(struct drm_gem_object *obj) +struct page **etnaviv_gem_get_pages(struct etnaviv_gem_object *etnaviv_obj) { - struct drm_device *dev = obj->dev; - struct page **p; - - mutex_lock(&dev->struct_mutex); - p = get_pages(to_etnaviv_bo(obj)); - mutex_unlock(&dev->struct_mutex); - - return p; + return get_pages(etnaviv_obj); }
-void msm_gem_put_pages(struct drm_gem_object *obj) +void etnaviv_gem_put_pages(struct etnaviv_gem_object *etnaviv_obj) { /* when we start tracking the pin count, then do something here */ } diff --git a/drivers/staging/etnaviv/etnaviv_gem.h b/drivers/staging/etnaviv/etnaviv_gem.h index 65c9740542da..569a407cbd35 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.h +++ b/drivers/staging/etnaviv/etnaviv_gem.h @@ -107,5 +107,7 @@ struct etnaviv_gem_submit {
int etnaviv_gem_new_private(struct drm_device *dev, size_t size, uint32_t flags, struct etnaviv_gem_object **res); +struct page **etnaviv_gem_get_pages(struct etnaviv_gem_object *obj); +void etnaviv_gem_put_pages(struct etnaviv_gem_object *obj);
#endif /* __ETNAVIV_GEM_H__ */ diff --git a/drivers/staging/etnaviv/etnaviv_gem_prime.c b/drivers/staging/etnaviv/etnaviv_gem_prime.c index d9742ae1fad1..3a986ae9b00b 100644 --- a/drivers/staging/etnaviv/etnaviv_gem_prime.c +++ b/drivers/staging/etnaviv/etnaviv_gem_prime.c @@ -41,15 +41,25 @@ void msm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
int msm_gem_prime_pin(struct drm_gem_object *obj) { - if (!obj->import_attach) - etnaviv_gem_get_pages(obj); + if (!obj->import_attach) { + struct drm_device *dev = obj->dev; + + mutex_lock(&dev->struct_mutex); + etnaviv_gem_get_pages(to_etnaviv_bo(obj)); + mutex_unlock(&dev->struct_mutex); + } return 0; }
void msm_gem_prime_unpin(struct drm_gem_object *obj) { - if (!obj->import_attach) - msm_gem_put_pages(obj); + if (!obj->import_attach) { + struct drm_device *dev = obj->dev; + + mutex_lock(&dev->struct_mutex); + etnaviv_gem_put_pages(to_etnaviv_bo(obj)); + mutex_unlock(&dev->struct_mutex); + } }
static void etnaviv_gem_prime_release(struct etnaviv_gem_object *etnaviv_obj)
From: Russell King rmk+kernel@arm.linux.org.uk
Provide a get_pages() method for gem objects, which allows our objects to provide their own method to obtain the struct page array and scatterlist.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem.c | 67 ++++++++++++++++------------- drivers/staging/etnaviv/etnaviv_gem.h | 1 + drivers/staging/etnaviv/etnaviv_gem_prime.c | 1 + 3 files changed, 38 insertions(+), 31 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 2120cd378d86..e508d64aa2d3 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -24,39 +24,35 @@ #include "etnaviv_mmu.h"
/* called with dev->struct_mutex held */ -static struct page **get_pages(struct etnaviv_gem_object *etnaviv_obj) +static int etnaviv_gem_shmem_get_pages(struct etnaviv_gem_object *etnaviv_obj) { - if (!etnaviv_obj->pages) { - struct drm_device *dev = etnaviv_obj->base.dev; - struct page **p; - int npages = etnaviv_obj->base.size >> PAGE_SHIFT; - - p = drm_gem_get_pages(&etnaviv_obj->base); + struct drm_device *dev = etnaviv_obj->base.dev; + struct page **p; + int npages = etnaviv_obj->base.size >> PAGE_SHIFT;
- if (IS_ERR(p)) { - dev_err(dev->dev, "could not get pages: %ld\n", - PTR_ERR(p)); - return p; - } + p = drm_gem_get_pages(&etnaviv_obj->base); + if (IS_ERR(p)) { + dev_err(dev->dev, "could not get pages: %ld\n", PTR_ERR(p)); + return PTR_ERR(p); + }
- etnaviv_obj->sgt = drm_prime_pages_to_sg(p, npages); - if (IS_ERR(etnaviv_obj->sgt)) { - dev_err(dev->dev, "failed to allocate sgt\n"); - drm_gem_put_pages(&etnaviv_obj->base, p, false, false); - return ERR_CAST(etnaviv_obj->sgt); - } + etnaviv_obj->sgt = drm_prime_pages_to_sg(p, npages); + if (IS_ERR(etnaviv_obj->sgt)) { + dev_err(dev->dev, "failed to allocate sgt\n"); + drm_gem_put_pages(&etnaviv_obj->base, p, false, false); + return PTR_ERR(p); + }
- etnaviv_obj->pages = p; + etnaviv_obj->pages = p;
- /* For non-cached buffers, ensure the new pages are clean - * because display controller, GPU, etc. are not coherent: - */ - if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_UNCACHED)) - dma_map_sg(dev->dev, etnaviv_obj->sgt->sgl, - etnaviv_obj->sgt->nents, DMA_BIDIRECTIONAL); - } + /* For non-cached buffers, ensure the new pages are clean + * because display controller, GPU, etc. are not coherent: + */ + if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_UNCACHED)) + dma_map_sg(dev->dev, etnaviv_obj->sgt->sgl, + etnaviv_obj->sgt->nents, DMA_BIDIRECTIONAL);
- return etnaviv_obj->pages; + return 0; }
static void put_pages(struct etnaviv_gem_object *etnaviv_obj) @@ -83,7 +79,15 @@ static void put_pages(struct etnaviv_gem_object *etnaviv_obj)
struct page **etnaviv_gem_get_pages(struct etnaviv_gem_object *etnaviv_obj) { - return get_pages(etnaviv_obj); + int ret; + + if (!etnaviv_obj->pages) { + ret = etnaviv_obj->ops->get_pages(etnaviv_obj); + if (ret < 0) + return ERR_PTR(ret); + } + + return etnaviv_obj->pages; }
void etnaviv_gem_put_pages(struct etnaviv_gem_object *etnaviv_obj) @@ -181,7 +185,7 @@ int etnaviv_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf) goto out;
/* make sure we have pages attached now */ - pages = get_pages(to_etnaviv_bo(obj)); + pages = etnaviv_gem_get_pages(to_etnaviv_bo(obj)); if (IS_ERR(pages)) { ret = PTR_ERR(pages); goto out_unlock; @@ -265,7 +269,7 @@ int etnaviv_gem_get_iova_locked(struct etnaviv_gpu *gpu, if (!etnaviv_obj->iova && !(etnaviv_obj->flags & ETNA_BO_CMDSTREAM)) { struct etnaviv_drm_private *priv = obj->dev->dev_private; struct etnaviv_iommu *mmu = priv->mmu; - struct page **pages = get_pages(etnaviv_obj); + struct page **pages = etnaviv_gem_get_pages(etnaviv_obj); uint32_t offset; struct drm_mm_node *node = NULL;
@@ -367,7 +371,7 @@ void *etnaviv_gem_vaddr_locked(struct drm_gem_object *obj) WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
if (!etnaviv_obj->vaddr) { - struct page **pages = get_pages(etnaviv_obj); + struct page **pages = etnaviv_gem_get_pages(etnaviv_obj);
if (IS_ERR(pages)) return ERR_CAST(pages); @@ -532,6 +536,7 @@ static void etnaviv_gem_shmem_release(struct etnaviv_gem_object *etnaviv_obj) }
static const struct etnaviv_gem_ops etnaviv_gem_shmem_ops = { + .get_pages = etnaviv_gem_shmem_get_pages, .release = etnaviv_gem_shmem_release, };
diff --git a/drivers/staging/etnaviv/etnaviv_gem.h b/drivers/staging/etnaviv/etnaviv_gem.h index 569a407cbd35..bbfcbd7557fe 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.h +++ b/drivers/staging/etnaviv/etnaviv_gem.h @@ -68,6 +68,7 @@ struct etnaviv_gem_object { #define to_etnaviv_bo(x) container_of(x, struct etnaviv_gem_object, base)
struct etnaviv_gem_ops { + int (*get_pages)(struct etnaviv_gem_object *); void (*release)(struct etnaviv_gem_object *); };
diff --git a/drivers/staging/etnaviv/etnaviv_gem_prime.c b/drivers/staging/etnaviv/etnaviv_gem_prime.c index 3a986ae9b00b..d15f4b60fa47 100644 --- a/drivers/staging/etnaviv/etnaviv_gem_prime.c +++ b/drivers/staging/etnaviv/etnaviv_gem_prime.c @@ -78,6 +78,7 @@ static void etnaviv_gem_prime_release(struct etnaviv_gem_object *etnaviv_obj) }
static const struct etnaviv_gem_ops etnaviv_gem_prime_ops = { + /* .get_pages should never be called */ .release = etnaviv_gem_prime_release, };
From: Russell King rmk+kernel@arm.linux.org.uk
We test for write-combine and non-cacheable mappings before calling the DMA API. This is weird, because non-cacheable mappings are DMA coherent by definition, whereas cacheable mappings need cache maintenance provided by the DMA API.
This seems to be a typo: ETNA_BO_CACHED should be used rather than ETNA_BO_UNCACHED.
Moreover, add a comment to the dma_unmap_sg() site to remind people about the data-corrupting implications of this call if it is abused (as can happen with the etnaviv DRM code structure as it currently stands, with its long-term mapping of the buffer).
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem.c | 18 +++++++++++++++--- 1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index e508d64aa2d3..034ff732bdf4 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -48,7 +48,7 @@ static int etnaviv_gem_shmem_get_pages(struct etnaviv_gem_object *etnaviv_obj) /* For non-cached buffers, ensure the new pages are clean * because display controller, GPU, etc. are not coherent: */ - if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_UNCACHED)) + if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_CACHED)) dma_map_sg(dev->dev, etnaviv_obj->sgt->sgl, etnaviv_obj->sgt->nents, DMA_BIDIRECTIONAL);
@@ -60,10 +60,22 @@ static void put_pages(struct etnaviv_gem_object *etnaviv_obj) if (etnaviv_obj->pages) { struct drm_device *dev = etnaviv_obj->base.dev;
- /* For non-cached buffers, ensure the new pages are clean + /* + * For non-cached buffers, ensure the new pages are clean * because display controller, GPU, etc. are not coherent: + * + * WARNING: The DMA API does not support concurrent CPU + * and device access to the memory area. With BIDIRECTIONAL, + * we will clean the cache lines which overlap the region, + * and invalidate all cache lines (partially) contained in + * the region. + * + * If you have dirty data in the overlapping cache lines, + * that will corrupt the GPU-written data. If you have + * written into the remainder of the region, this can + * discard those writes. */ - if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_UNCACHED)) + if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_CACHED)) dma_unmap_sg(dev->dev, etnaviv_obj->sgt->sgl, etnaviv_obj->sgt->nents, DMA_BIDIRECTIONAL);
From: Russell King rmk+kernel@arm.linux.org.uk
Add a flag to indicate that the GPU MMU needs to be flushed before executing the next set of command buffers. This is necessary to ensure that the GPU sees updated page table entries which may have been modified by GEM.
It is expected that userspace will have flushed the caches at the end of the previous command buffers, so there will be no cache writebacks pending.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_buffer.c | 64 +++++++++++++++++++++++++------- drivers/staging/etnaviv/etnaviv_mmu.h | 1 + 2 files changed, 51 insertions(+), 14 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_buffer.c b/drivers/staging/etnaviv/etnaviv_buffer.c index 026489baeda7..96661e513d7d 100644 --- a/drivers/staging/etnaviv/etnaviv_buffer.c +++ b/drivers/staging/etnaviv/etnaviv_buffer.c @@ -17,6 +17,7 @@
#include "etnaviv_gpu.h" #include "etnaviv_gem.h" +#include "etnaviv_mmu.h"
#include "common.xml.h" #include "state.xml.h" @@ -162,34 +163,38 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct etnaviv_gem_object *buffer = to_etnaviv_bo(gpu->buffer); struct etnaviv_gem_object *cmd; u32 *lw = buffer->vaddr + ((buffer->offset - 4) * 4); - u32 back, link_target, link_size; + u32 back, link_target, link_size, reserve_size; u32 i;
if (drm_debug & DRM_UT_DRIVER) etnaviv_buffer_dump(gpu, buffer, 0, 0x50);
+ reserve_size = 6; + + /* + * If we need to flush the MMU prior to submitting this buffer, we + * will need to append a mmu flush load state, followed by a new + * link to this buffer - a total of four additional words. + */ + if (gpu->mmu->need_flush) + reserve_size += 4; + /* * if we are going to completely overflow the buffer, we need to wrap. */ - if (buffer->offset + 6 > buffer->base.size / sizeof(uint32_t)) + if (buffer->offset + reserve_size > + buffer->base.size / sizeof(uint32_t)) buffer->offset = 0;
/* save offset back into main buffer */ - back = buffer->offset; + back = buffer->offset + reserve_size - 6; link_target = buffer->paddr + buffer->offset * 4; link_size = 6;
- /* Save the event and buffer position of the new event trigger */ - gpu->event[event].fence = submit->fence; - gpu->event[event].ring_pos = buffer->offset; - - /* trigger event */ - CMD_LOAD_STATE(buffer, VIVS_GL_EVENT, VIVS_GL_EVENT_EVENT_ID(event) | - VIVS_GL_EVENT_FROM_PE); - - /* append WAIT/LINK to main buffer */ - CMD_WAIT(buffer); - CMD_LINK(buffer, 2, buffer->paddr + ((buffer->offset - 1) * 4)); + if (gpu->mmu->need_flush) { + /* Skip over the MMU flush and LINK instructions */ + link_target += 4 * sizeof(uint32_t); + }
/* update offset for every cmd stream */ for (i = submit->nr_cmds; i--; ) { @@ -228,6 +233,37 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, pr_info("event: %d\n", event); }
+ if (gpu->mmu->need_flush) { + uint32_t new_target = buffer->paddr + buffer->offset * + sizeof(uint32_t); + + /* Add the MMU flush */ + CMD_LOAD_STATE(buffer, VIVS_GL_FLUSH_MMU, + VIVS_GL_FLUSH_MMU_FLUSH_FEMMU | + VIVS_GL_FLUSH_MMU_FLUSH_PEMMU); + + /* And the link to the first buffer */ + CMD_LINK(buffer, link_size, link_target); + + /* Update the link target to point to the flush */ + link_target = new_target; + link_size = 4; + + gpu->mmu->need_flush = false; + } + + /* Save the event and buffer position of the new event trigger */ + gpu->event[event].fence = submit->fence; + gpu->event[event].ring_pos = buffer->offset; + + /* trigger event */ + CMD_LOAD_STATE(buffer, VIVS_GL_EVENT, VIVS_GL_EVENT_EVENT_ID(event) | + VIVS_GL_EVENT_FROM_PE); + + /* append WAIT/LINK to main buffer */ + CMD_WAIT(buffer); + CMD_LINK(buffer, 2, buffer->paddr + ((buffer->offset - 1) * 4)); + /* Change WAIT into a LINK command; write the address first. */ *(lw + 1) = link_target; mb(); diff --git a/drivers/staging/etnaviv/etnaviv_mmu.h b/drivers/staging/etnaviv/etnaviv_mmu.h index 7b97ef35d290..b3a0e3c98372 100644 --- a/drivers/staging/etnaviv/etnaviv_mmu.h +++ b/drivers/staging/etnaviv/etnaviv_mmu.h @@ -23,6 +23,7 @@ struct etnaviv_iommu { struct drm_device *dev; struct iommu_domain *domain; + bool need_flush; };
int etnaviv_iommu_attach(struct etnaviv_iommu *iommu, const char **names,
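For reviewers, the reservation logic above can be modelled as a small userspace sketch: the basic event/WAIT/LINK sequence needs 6 words, and a pending MMU flush appends a LOAD_STATE plus LINK, i.e. 4 more. The struct and function names here are made up for illustration, not the driver's:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* toy model of the ring buffer write position */
struct ring {
	uint32_t offset;     /* current write position, in words */
	uint32_t size_words; /* total buffer size, in words */
};

static uint32_t reserve_and_wrap(struct ring *r, bool need_mmu_flush)
{
	uint32_t reserve = 6;	/* event + WAIT/LINK */

	/* MMU flush load state plus a new LINK: four more words */
	if (need_mmu_flush)
		reserve += 4;

	/* wrap to the start if the reservation would overflow */
	if (r->offset + reserve > r->size_words)
		r->offset = 0;

	return reserve;
}
```

Without the flush, a write position 6 words from the end still fits; with the flush pending, the same position forces a wrap.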
From: Russell King rmk+kernel@arm.linux.org.uk
The GPU memory management (managed by a drm_mm object) is used to track which areas of the MMU address space are in-use. Therefore, this should be tied to the MMU object, rather than the GPU object.
This means we could (as comments suggest) have multiple MMU objects, one for each context, and switch between them. Each would need to be managed by its own drm_mm object.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem.c | 2 +- drivers/staging/etnaviv/etnaviv_gpu.c | 4 ---- drivers/staging/etnaviv/etnaviv_gpu.h | 3 --- drivers/staging/etnaviv/etnaviv_mmu.c | 4 ++++ drivers/staging/etnaviv/etnaviv_mmu.h | 3 +++ 5 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 034ff732bdf4..647815e4f1ba 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -292,7 +292,7 @@ int etnaviv_gem_get_iova_locked(struct etnaviv_gpu *gpu, if (!node) return -ENOMEM;
- ret = drm_mm_insert_node(&gpu->mm, node, obj->size, 0, + ret = drm_mm_insert_node(&mmu->mm, node, obj->size, 0, DRM_MM_SEARCH_DEFAULT);
if (!ret) { diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 85a0862e0347..91dc44f35a49 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -947,8 +947,6 @@ static void etnaviv_gpu_unbind(struct device *dev, struct device *master,
if (gpu->mmu) etnaviv_iommu_destroy(gpu->mmu); - - drm_mm_takedown(&gpu->mm); }
static const struct component_ops gpu_ops = { @@ -1028,8 +1026,6 @@ static int etnaviv_gpu_platform_probe(struct platform_device *pdev) gpu->pipe = (int)match->data;
/* TODO: figure out max mapped size */ - drm_mm_init(&gpu->mm, 0x80000000, SZ_1G); - dev_set_drvdata(dev, gpu);
err = component_add(&pdev->dev, &gpu_ops); diff --git a/drivers/staging/etnaviv/etnaviv_gpu.h b/drivers/staging/etnaviv/etnaviv_gpu.h index 5afa0f74106c..a26d0ded1019 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.h +++ b/drivers/staging/etnaviv/etnaviv_gpu.h @@ -113,9 +113,6 @@ struct etnaviv_gpu {
struct etnaviv_iommu *mmu;
- /* memory manager for GPU address area */ - struct drm_mm mm; - /* Power Control: */ struct clk *clk_bus; struct clk *clk_core; diff --git a/drivers/staging/etnaviv/etnaviv_mmu.c b/drivers/staging/etnaviv/etnaviv_mmu.c index 94a6aa9f9c6f..48a0818a3788 100644 --- a/drivers/staging/etnaviv/etnaviv_mmu.c +++ b/drivers/staging/etnaviv/etnaviv_mmu.c @@ -92,6 +92,7 @@ int etnaviv_iommu_unmap(struct etnaviv_iommu *iommu, uint32_t iova,
void etnaviv_iommu_destroy(struct etnaviv_iommu *mmu) { + drm_mm_takedown(&mmu->mm); iommu_domain_free(mmu->domain); kfree(mmu); } @@ -107,6 +108,9 @@ struct etnaviv_iommu *etnaviv_iommu_new(struct drm_device *dev,
mmu->domain = domain; mmu->dev = dev; + + drm_mm_init(&mmu->mm, 0x80000000, SZ_1G); + iommu_set_fault_handler(domain, etnaviv_fault_handler, dev);
return mmu; diff --git a/drivers/staging/etnaviv/etnaviv_mmu.h b/drivers/staging/etnaviv/etnaviv_mmu.h index b3a0e3c98372..1adcc3ab4ebc 100644 --- a/drivers/staging/etnaviv/etnaviv_mmu.h +++ b/drivers/staging/etnaviv/etnaviv_mmu.h @@ -23,6 +23,9 @@ struct etnaviv_iommu { struct drm_device *dev; struct iommu_domain *domain; + + /* memory manager for GPU address area */ + struct drm_mm mm; bool need_flush; };
From: Russell King rmk+kernel@arm.linux.org.uk
We model the GPU MMU using the iommu layer, which supports exporting the iommu domain geometry. Use this feature to publish the size of the MMU window, and initialise the MMU drm_mm object according to the available MMU window size.
As we only allocate an MMU page table which covers 256MB, yet initialised the drm_mm object to cover 1GB, this fixes an overflow of the MMU page table array.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_iommu.c | 2 ++ drivers/staging/etnaviv/etnaviv_mmu.c | 4 +++- 2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/etnaviv/etnaviv_iommu.c b/drivers/staging/etnaviv/etnaviv_iommu.c index 6aa91bcf1893..d8ac05aa2cd3 100644 --- a/drivers/staging/etnaviv/etnaviv_iommu.c +++ b/drivers/staging/etnaviv/etnaviv_iommu.c @@ -166,6 +166,8 @@ struct iommu_domain *etnaviv_iommu_domain_alloc(struct etnaviv_gpu *gpu) return NULL;
domain->ops = &etnaviv_iommu_ops; + domain->geometry.aperture_start = GPU_MEM_START; + domain->geometry.aperture_end = GPU_MEM_START + PT_ENTRIES * SZ_4K;
ret = domain->ops->domain_init(domain); if (ret) diff --git a/drivers/staging/etnaviv/etnaviv_mmu.c b/drivers/staging/etnaviv/etnaviv_mmu.c index 48a0818a3788..51d91e3d30ed 100644 --- a/drivers/staging/etnaviv/etnaviv_mmu.c +++ b/drivers/staging/etnaviv/etnaviv_mmu.c @@ -109,7 +109,9 @@ struct etnaviv_iommu *etnaviv_iommu_new(struct drm_device *dev, mmu->domain = domain; mmu->dev = dev;
- drm_mm_init(&mmu->mm, 0x80000000, SZ_1G); + drm_mm_init(&mmu->mm, domain->geometry.aperture_start, + domain->geometry.aperture_end - + domain->geometry.aperture_start + 1);
iommu_set_fault_handler(domain, etnaviv_fault_handler, dev);
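A quick sanity check of the geometry arithmetic: a v1 page table of 64K entries mapping 4KiB pages covers exactly 256MiB, a quarter of the 1GiB the drm_mm was previously told about. The PT_ENTRIES value below is an assumption based on the commit text; the real constant lives in the driver headers:

```c
#include <assert.h>
#include <stdint.h>

#define GPU_MEM_START	0x80000000u
#define PT_ENTRIES	65536u	/* assumed: 64K page-table entries */
#define SZ_4K		4096u

/* size of the window the drm_mm may hand out addresses from */
static uint32_t aperture_size(void)
{
	uint32_t end = GPU_MEM_START + PT_ENTRIES * SZ_4K; /* exclusive */

	return end - GPU_MEM_START;
}
```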
From: Russell King rmk+kernel@arm.linux.org.uk
Ensure that we unmap all MMU entries when unmapping a region. We fail to do this because we assume the return value from the unmap method should be zero, when it should be the size of the entry which has been unmapped.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_iommu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/staging/etnaviv/etnaviv_iommu.c b/drivers/staging/etnaviv/etnaviv_iommu.c index d8ac05aa2cd3..89bc2ffadf86 100644 --- a/drivers/staging/etnaviv/etnaviv_iommu.c +++ b/drivers/staging/etnaviv/etnaviv_iommu.c @@ -134,7 +134,7 @@ static size_t etnaviv_iommu_unmap(struct iommu_domain *domain, pgtable_write(&etnaviv_domain->pgtable, iova, ~0); spin_unlock(&etnaviv_domain->map_lock);
- return 0; + return SZ_4K; }
static phys_addr_t etnaviv_iommu_iova_to_phys(struct iommu_domain *domain,
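To illustrate why a 0 return truncates the unmap, here is a simplified model of a caller in the style of the generic iommu_unmap() loop, which advances by whatever size each page-level call reports and bails out on 0. This is a sketch, not the kernel's actual iommu core:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef size_t (*unmap_fn)(uint32_t iova, size_t pgsize);

static size_t unmap_range(unmap_fn unmap, uint32_t iova, size_t len)
{
	size_t unmapped = 0;

	while (unmapped < len) {
		size_t ret = unmap(iova + unmapped, 4096);

		/* a zero return terminates the walk early */
		if (ret == 0)
			break;
		unmapped += ret;
	}
	return unmapped;
}

/* pre-patch behaviour: always report 0 despite clearing the entry */
static size_t unmap_buggy(uint32_t iova, size_t pgsize)
{
	(void)iova; (void)pgsize;
	return 0;
}

/* post-patch behaviour: report the size actually unmapped */
static size_t unmap_fixed(uint32_t iova, size_t pgsize)
{
	(void)iova;
	return pgsize;
}
```

With the buggy return value, only the first entry of a multi-page region is ever touched.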
From: Russell King rmk+kernel@arm.linux.org.uk
Move the code which sets up and tears down the MMU mappings (in other words, allocating a node in the drm_mm, then calling the iommu to set up the actual mapping, and the reverse) into etnaviv_mmu.c.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem.c | 33 ++++----------------------- drivers/staging/etnaviv/etnaviv_mmu.c | 43 +++++++++++++++++++++++++++++++++++ drivers/staging/etnaviv/etnaviv_mmu.h | 6 +++++ 3 files changed, 53 insertions(+), 29 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 647815e4f1ba..ab7c6db4ec10 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -278,32 +278,12 @@ int etnaviv_gem_get_iova_locked(struct etnaviv_gpu *gpu, struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); int ret = 0;
- if (!etnaviv_obj->iova && !(etnaviv_obj->flags & ETNA_BO_CMDSTREAM)) { - struct etnaviv_drm_private *priv = obj->dev->dev_private; - struct etnaviv_iommu *mmu = priv->mmu; + if (!etnaviv_obj->iova && !(etnaviv_obj->flags & ETNA_BO_CMDSTREAM)) { struct page **pages = etnaviv_gem_get_pages(etnaviv_obj); - uint32_t offset; - struct drm_mm_node *node = NULL; - if (IS_ERR(pages)) return PTR_ERR(pages);
- node = kzalloc(sizeof(*node), GFP_KERNEL); - if (!node) - return -ENOMEM; - - ret = drm_mm_insert_node(&mmu->mm, node, obj->size, 0, - DRM_MM_SEARCH_DEFAULT); - - if (!ret) { - offset = node->start; - etnaviv_obj->iova = offset; - etnaviv_obj->gpu_vram_node = node; - - ret = etnaviv_iommu_map(mmu, offset, etnaviv_obj->sgt, - obj->size, IOMMU_READ | IOMMU_WRITE); - } else - kfree(node); + ret = etnaviv_iommu_map_gem(gpu->mmu, etnaviv_obj); }
if (!ret) @@ -531,13 +511,8 @@ static void etnaviv_free_obj(struct drm_gem_object *obj) struct etnaviv_drm_private *priv = obj->dev->dev_private; struct etnaviv_iommu *mmu = priv->mmu;
- if (mmu && etnaviv_obj->iova) { - uint32_t offset = etnaviv_obj->gpu_vram_node->start; - - etnaviv_iommu_unmap(mmu, offset, etnaviv_obj->sgt, obj->size); - drm_mm_remove_node(etnaviv_obj->gpu_vram_node); - kfree(etnaviv_obj->gpu_vram_node); - } + if (mmu) + etnaviv_iommu_unmap_gem(mmu, etnaviv_obj); }
static void etnaviv_gem_shmem_release(struct etnaviv_gem_object *etnaviv_obj) diff --git a/drivers/staging/etnaviv/etnaviv_mmu.c b/drivers/staging/etnaviv/etnaviv_mmu.c index 51d91e3d30ed..2effe75cb154 100644 --- a/drivers/staging/etnaviv/etnaviv_mmu.c +++ b/drivers/staging/etnaviv/etnaviv_mmu.c @@ -16,6 +16,7 @@ */
#include "etnaviv_drv.h" +#include "etnaviv_gem.h" #include "etnaviv_mmu.h"
static int etnaviv_fault_handler(struct iommu_domain *iommu, struct device *dev, @@ -90,6 +91,48 @@ int etnaviv_iommu_unmap(struct etnaviv_iommu *iommu, uint32_t iova, return 0; }
+int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, + struct etnaviv_gem_object *etnaviv_obj) +{ + struct sg_table *sgt = etnaviv_obj->sgt; + uint32_t offset; + struct drm_mm_node *node = NULL; + int ret; + + node = kzalloc(sizeof(*node), GFP_KERNEL); + if (!node) + return -ENOMEM; + + ret = drm_mm_insert_node(&mmu->mm, node, etnaviv_obj->base.size, 0, + DRM_MM_SEARCH_DEFAULT); + + if (!ret) { + offset = node->start; + etnaviv_obj->iova = offset; + etnaviv_obj->gpu_vram_node = node; + + ret = etnaviv_iommu_map(mmu, offset, sgt, + etnaviv_obj->base.size, + IOMMU_READ | IOMMU_WRITE); + } else + kfree(node); + + return ret; +} + +void etnaviv_iommu_unmap_gem(struct etnaviv_iommu *mmu, + struct etnaviv_gem_object *etnaviv_obj) +{ + if (etnaviv_obj->iova) { + uint32_t offset = etnaviv_obj->gpu_vram_node->start; + + etnaviv_iommu_unmap(mmu, offset, etnaviv_obj->sgt, + etnaviv_obj->base.size); + drm_mm_remove_node(etnaviv_obj->gpu_vram_node); + kfree(etnaviv_obj->gpu_vram_node); + } +} + void etnaviv_iommu_destroy(struct etnaviv_iommu *mmu) { drm_mm_takedown(&mmu->mm); diff --git a/drivers/staging/etnaviv/etnaviv_mmu.h b/drivers/staging/etnaviv/etnaviv_mmu.h index 1adcc3ab4ebc..262c4e26e901 100644 --- a/drivers/staging/etnaviv/etnaviv_mmu.h +++ b/drivers/staging/etnaviv/etnaviv_mmu.h @@ -29,12 +29,18 @@ struct etnaviv_iommu { bool need_flush; };
+struct etnaviv_gem_object; + int etnaviv_iommu_attach(struct etnaviv_iommu *iommu, const char **names, int cnt); int etnaviv_iommu_map(struct etnaviv_iommu *iommu, uint32_t iova, struct sg_table *sgt, unsigned len, int prot); int etnaviv_iommu_unmap(struct etnaviv_iommu *iommu, uint32_t iova, struct sg_table *sgt, unsigned len); +int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, + struct etnaviv_gem_object *etnaviv_obj); +void etnaviv_iommu_unmap_gem(struct etnaviv_iommu *mmu, + struct etnaviv_gem_object *etnaviv_obj); void etnaviv_iommu_destroy(struct etnaviv_iommu *iommu);
struct etnaviv_iommu *etnaviv_iommu_new(struct drm_device *dev,
From: Russell King rmk+kernel@arm.linux.org.uk
Bypass the iommu when we are dealing with single-entry scatterlists.
The etnaviv iommu code needs to be more intelligent: as it currently stands, it is unusable because it always allocates from the bottom upwards. This causes entries to be re-used without the MMU TLB being flushed, but in order to flush the MMU TLB, we have to insert a command into the GPU command stream. Doing this for every allocation/free is really sub-optimal.
To get things working as it currently stands, bypass this so that the armada DRM scanout buffer can at least be used with etnaviv DRM. This at least gets us /some/ usable acceleration on Dove.
To fix this properly, the MMU handling needs to be re-evaluated and probably rewritten.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_mmu.c | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/etnaviv/etnaviv_mmu.c b/drivers/staging/etnaviv/etnaviv_mmu.c index 2effe75cb154..4589995b83ff 100644 --- a/drivers/staging/etnaviv/etnaviv_mmu.c +++ b/drivers/staging/etnaviv/etnaviv_mmu.c @@ -99,6 +99,20 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, struct drm_mm_node *node = NULL; int ret;
+ /* v1 MMU can optimize single entry (contiguous) scatterlists */ + if (sgt->nents == 1) { + uint32_t iova; + + iova = sg_dma_address(sgt->sgl); + if (!iova) + iova = sg_phys(sgt->sgl) - sgt->sgl->offset; + + if (iova < 0x80000000 - sg_dma_len(sgt->sgl)) { + etnaviv_obj->iova = iova; + return 0; + } + } + node = kzalloc(sizeof(*node), GFP_KERNEL); if (!node) return -ENOMEM; @@ -123,7 +137,7 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, void etnaviv_iommu_unmap_gem(struct etnaviv_iommu *mmu, struct etnaviv_gem_object *etnaviv_obj) { - if (etnaviv_obj->iova) { + if (etnaviv_obj->gpu_vram_node) { uint32_t offset = etnaviv_obj->gpu_vram_node->start;
etnaviv_iommu_unmap(mmu, offset, etnaviv_obj->sgt,
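The bypass check above can be sketched like this (struct and field names are illustrative, not the driver's): a single contiguous entry that fits entirely below the 0x80000000 linear window can be handed to the GPU directly, with no MMU mapping at all.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* stand-in for a single scatterlist entry */
struct sg_entry {
	uint32_t dma_addr;
	uint32_t len;
};

static bool can_bypass_mmu(const struct sg_entry *sg, unsigned int nents)
{
	/* only a single (i.e. physically contiguous) entry qualifies */
	if (nents != 1)
		return false;

	/* the whole buffer must fit below the 2GiB linear window */
	return sg->dma_addr < 0x80000000u - sg->len;
}
```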
From: Russell King rmk+kernel@arm.linux.org.uk
In order to avoid flushing the GPU MMU every time we unmap and remap, allocate MMU addresses in a round-robin fashion. When we have to wrap back to the beginning, indicate that the MMU needs to be flushed.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_mmu.c | 23 +++++++++++++++++++++-- drivers/staging/etnaviv/etnaviv_mmu.h | 1 + 2 files changed, 22 insertions(+), 2 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_mmu.c b/drivers/staging/etnaviv/etnaviv_mmu.c index 4589995b83ff..4fbba26a3f37 100644 --- a/drivers/staging/etnaviv/etnaviv_mmu.c +++ b/drivers/staging/etnaviv/etnaviv_mmu.c @@ -117,10 +117,29 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, if (!node) return -ENOMEM;
- ret = drm_mm_insert_node(&mmu->mm, node, etnaviv_obj->base.size, 0, - DRM_MM_SEARCH_DEFAULT); + while (1) { + ret = drm_mm_insert_node_in_range(&mmu->mm, node, + etnaviv_obj->base.size, 0, mmu->last_iova, ~0UL, + DRM_MM_SEARCH_DEFAULT); + + if (ret != -ENOSPC) + break; + + /* + * If we did not search from the start of the MMU region, + * try again in case there are free slots. + */ + if (mmu->last_iova) { + mmu->last_iova = 0; + mmu->need_flush = true; + continue; + } + + break; + }
if (!ret) { + mmu->last_iova = node->start + etnaviv_obj->base.size; offset = node->start; etnaviv_obj->iova = offset; etnaviv_obj->gpu_vram_node = node; diff --git a/drivers/staging/etnaviv/etnaviv_mmu.h b/drivers/staging/etnaviv/etnaviv_mmu.h index 262c4e26e901..a37affda9590 100644 --- a/drivers/staging/etnaviv/etnaviv_mmu.h +++ b/drivers/staging/etnaviv/etnaviv_mmu.h @@ -26,6 +26,7 @@ struct etnaviv_iommu {
/* memory manager for GPU address area */ struct drm_mm mm; + uint32_t last_iova; bool need_flush; };
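A toy userspace model of the round-robin behaviour: search upward from last_iova; on exhaustion, restart from 0 exactly once and flag that the MMU TLB must be flushed, since previously-used entries may be handed out again. Here alloc_above_fn stands in for drm_mm_insert_node_in_range(), and the stub allocator models only exhaustion, not real hole tracking:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct mmu_state {
	uint32_t last_iova;
	bool need_flush;
};

/* returns an iova, or UINT32_MAX when no space above 'start' */
typedef uint32_t (*alloc_above_fn)(uint32_t start, uint32_t size);

static uint32_t rr_alloc(struct mmu_state *m, alloc_above_fn alloc,
			 uint32_t size)
{
	uint32_t iova = alloc(m->last_iova, size);

	if (iova == UINT32_MAX && m->last_iova) {
		/* wrap: retry from the bottom, flushing the MMU so the
		 * GPU cannot see stale TLB entries for reused iovas */
		m->last_iova = 0;
		m->need_flush = true;
		iova = alloc(0, size);
	}
	if (iova != UINT32_MAX)
		m->last_iova = iova + size;
	return iova;
}

/* stub: a 64KiB arena, succeeds iff the request fits above 'start' */
static uint32_t stub_alloc(uint32_t start, uint32_t size)
{
	return (start + size <= 0x10000u) ? start : UINT32_MAX;
}
```

Two allocations fill the arena; the third wraps, reuses the bottom, and raises need_flush.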
From: Russell King rmk+kernel@arm.linux.org.uk
If etnaviv_iommu_map() fails, we returned an error, but we didn't clean up the allocated drm_mm node. Simplify the return path and add the necessary failure clean up.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_mmu.c | 28 +++++++++++++++++----------- 1 file changed, 17 insertions(+), 11 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_mmu.c b/drivers/staging/etnaviv/etnaviv_mmu.c index 4fbba26a3f37..89c5713f52bc 100644 --- a/drivers/staging/etnaviv/etnaviv_mmu.c +++ b/drivers/staging/etnaviv/etnaviv_mmu.c @@ -95,8 +95,7 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, struct etnaviv_gem_object *etnaviv_obj) { struct sg_table *sgt = etnaviv_obj->sgt; - uint32_t offset; - struct drm_mm_node *node = NULL; + struct drm_mm_node *node; int ret;
/* v1 MMU can optimize single entry (contiguous) scatterlists */ @@ -138,18 +137,25 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, break; }
- if (!ret) { - mmu->last_iova = node->start + etnaviv_obj->base.size; - offset = node->start; - etnaviv_obj->iova = offset; - etnaviv_obj->gpu_vram_node = node; + if (ret < 0) { + kfree(node); + return ret; + } + + mmu->last_iova = node->start + etnaviv_obj->base.size; + etnaviv_obj->iova = node->start; + etnaviv_obj->gpu_vram_node = node; + ret = etnaviv_iommu_map(mmu, node->start, sgt, etnaviv_obj->base.size, + IOMMU_READ | IOMMU_WRITE);
- ret = etnaviv_iommu_map(mmu, offset, sgt, - etnaviv_obj->base.size, - IOMMU_READ | IOMMU_WRITE); - } else + if (ret < 0) { + drm_mm_remove_node(node); kfree(node);
+ etnaviv_obj->iova = 0; + etnaviv_obj->gpu_vram_node = NULL; + } + return ret; }
From: Russell King rmk+kernel@arm.linux.org.uk
We can easily exhaust the MMU space since we leave mappings in place until the underlying buffers are freed.
Solve this by reaping inactive MMU entries using the drm_mm scanning facility to select candidate(s), which will then have their MMU mappings released.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_mmu.c | 61 ++++++++++++++++++++++++++++++++++- 1 file changed, 60 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/etnaviv/etnaviv_mmu.c b/drivers/staging/etnaviv/etnaviv_mmu.c index 89c5713f52bc..5647768c2be4 100644 --- a/drivers/staging/etnaviv/etnaviv_mmu.c +++ b/drivers/staging/etnaviv/etnaviv_mmu.c @@ -94,6 +94,7 @@ int etnaviv_iommu_unmap(struct etnaviv_iommu *iommu, uint32_t iova, int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, struct etnaviv_gem_object *etnaviv_obj) { + struct etnaviv_drm_private *priv = etnaviv_obj->base.dev->dev_private; struct sg_table *sgt = etnaviv_obj->sgt; struct drm_mm_node *node; int ret; @@ -117,6 +118,10 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, return -ENOMEM;
while (1) { + struct etnaviv_gem_object *o, *n; + struct list_head list; + bool found; + ret = drm_mm_insert_node_in_range(&mmu->mm, node, etnaviv_obj->base.size, 0, mmu->last_iova, ~0UL, DRM_MM_SEARCH_DEFAULT); @@ -134,7 +139,58 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, continue; }
- break; + /* Try to retire some entries */ + drm_mm_init_scan(&mmu->mm, etnaviv_obj->base.size, 0, 0); + + found = 0; + INIT_LIST_HEAD(&list); + list_for_each_entry(o, &priv->inactive_list, mm_list) { + if (!o->gpu_vram_node || + o->gpu_vram_node->mm != &mmu->mm) + continue; + + /* + * If it's on the submit list, then it is part of + * a submission, and we want to keep its entry. + */ + if (!list_empty(&o->submit_entry)) + continue; + + list_add(&o->submit_entry, &list); + if (drm_mm_scan_add_block(o->gpu_vram_node)) { + found = true; + break; + } + } + + if (!found) { + /* Nothing found, clean up and fail */ + list_for_each_entry_safe(o, n, &list, submit_entry) + BUG_ON(drm_mm_scan_remove_block(o->gpu_vram_node)); + break; + } + + /* + * drm_mm does not allow any other operations while + * scanning, so we have to remove all blocks first. + * If drm_mm_scan_remove_block() returns false, we + * can leave the block pinned. + */ + list_for_each_entry_safe(o, n, &list, submit_entry) + if (!drm_mm_scan_remove_block(o->gpu_vram_node)) + list_del_init(&o->submit_entry); + + list_for_each_entry_safe(o, n, &list, submit_entry) { + list_del_init(&o->submit_entry); + etnaviv_iommu_unmap_gem(mmu, o); + } + + /* + * We removed enough mappings so that the new allocation will + * succeed. Ensure that the MMU will be flushed and retry + * the allocation one more time. + */ + mmu->need_flush = true; }
if (ret < 0) { @@ -169,6 +225,9 @@ void etnaviv_iommu_unmap_gem(struct etnaviv_iommu *mmu, etnaviv_obj->base.size); drm_mm_remove_node(etnaviv_obj->gpu_vram_node); kfree(etnaviv_obj->gpu_vram_node); + + etnaviv_obj->gpu_vram_node = NULL; + etnaviv_obj->iova = 0; } }
From: Russell King rmk+kernel@arm.linux.org.uk
Move the scatterlist creation from etnaviv_gem_shmem_get_pages() into etnaviv_gem_get_pages() as we always want a scatterlist internally for the IOMMU code. It makes little sense to have each get_pages() method re-implement this code.
However, we still allow a get_pages() method to override this by doing its own initialisation of the etnaviv_obj->sgt pointer.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem.c | 47 +++++++++++++++++++++-------------- 1 file changed, 28 insertions(+), 19 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index ab7c6db4ec10..ad8fa71ea920 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -27,37 +27,21 @@ static int etnaviv_gem_shmem_get_pages(struct etnaviv_gem_object *etnaviv_obj) { struct drm_device *dev = etnaviv_obj->base.dev; - struct page **p; - int npages = etnaviv_obj->base.size >> PAGE_SHIFT; + struct page **p = drm_gem_get_pages(&etnaviv_obj->base);
- p = drm_gem_get_pages(&etnaviv_obj->base); if (IS_ERR(p)) { dev_err(dev->dev, "could not get pages: %ld\n", PTR_ERR(p)); return PTR_ERR(p); }
- etnaviv_obj->sgt = drm_prime_pages_to_sg(p, npages); - if (IS_ERR(etnaviv_obj->sgt)) { - dev_err(dev->dev, "failed to allocate sgt\n"); - drm_gem_put_pages(&etnaviv_obj->base, p, false, false); - return PTR_ERR(p); - } - etnaviv_obj->pages = p;
- /* For non-cached buffers, ensure the new pages are clean - * because display controller, GPU, etc. are not coherent: - */ - if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_CACHED)) - dma_map_sg(dev->dev, etnaviv_obj->sgt->sgl, - etnaviv_obj->sgt->nents, DMA_BIDIRECTIONAL); - return 0; }
static void put_pages(struct etnaviv_gem_object *etnaviv_obj) { - if (etnaviv_obj->pages) { + if (etnaviv_obj->sgt) { struct drm_device *dev = etnaviv_obj->base.dev;
/* @@ -81,7 +65,9 @@ static void put_pages(struct etnaviv_gem_object *etnaviv_obj) DMA_BIDIRECTIONAL); sg_free_table(etnaviv_obj->sgt); kfree(etnaviv_obj->sgt); - + etnaviv_obj->sgt = NULL; + } + if (etnaviv_obj->pages) { drm_gem_put_pages(&etnaviv_obj->base, etnaviv_obj->pages, true, false);
@@ -99,6 +85,29 @@ struct page **etnaviv_gem_get_pages(struct etnaviv_gem_object *etnaviv_obj) return ERR_PTR(ret); }
+ if (!etnaviv_obj->sgt) { + struct drm_device *dev = etnaviv_obj->base.dev; + int npages = etnaviv_obj->base.size >> PAGE_SHIFT; + struct sg_table *sgt; + + sgt = drm_prime_pages_to_sg(etnaviv_obj->pages, npages); + if (IS_ERR(sgt)) { + dev_err(dev->dev, "failed to allocate sgt: %ld\n", + PTR_ERR(sgt)); + return ERR_CAST(sgt); + } + + etnaviv_obj->sgt = sgt; + + /* + * For non-cached buffers, ensure the new pages are clean + * because display controller, GPU, etc. are not coherent. + */ + if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_CACHED)) + dma_map_sg(dev->dev, etnaviv_obj->sgt->sgl, + etnaviv_obj->sgt->nents, DMA_BIDIRECTIONAL); + } + return etnaviv_obj->pages; }
From: Russell King rmk+kernel@arm.linux.org.uk
Add support for mapping userspace memory to the GPU. This is useful for cases where we have some malloc()'d memory, or shmem memory received via Xv, which we wish to pass to the GPU while avoiding the overhead of an additional memcpy(), especially as memcpy()ing a 1080p frame is expensive.
This is mostly taken from the 3.17 i915 userptr implementation, except we solve the held-mm problem in (imho) a nicer way, and we also avoid excessive spinning with -EAGAIN waiting for the queued work to run.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_drv.c | 30 +++++ drivers/staging/etnaviv/etnaviv_drv.h | 2 + drivers/staging/etnaviv/etnaviv_gem.c | 195 +++++++++++++++++++++++++++ drivers/staging/etnaviv/etnaviv_gem.h | 9 ++ drivers/staging/etnaviv/etnaviv_gem_submit.c | 9 ++ include/uapi/drm/etnaviv_drm.h | 13 +- 6 files changed, 257 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index 2b6800a782bb..5f386a7045ae 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -454,6 +454,35 @@ static int etnaviv_ioctl_wait_fence(struct drm_device *dev, void *data, &TS(args->timeout)); }
+static int etnaviv_ioctl_gem_userptr(struct drm_device *dev, void *data, + struct drm_file *file) +{ + struct drm_etnaviv_gem_userptr *args = data; + int access; + + if (args->flags & ~(ETNA_USERPTR_READ|ETNA_USERPTR_WRITE) || + args->flags == 0) + return -EINVAL; + + if (offset_in_page(args->user_ptr | args->user_size) || + (uintptr_t)args->user_ptr != args->user_ptr || + (uint32_t)args->user_size != args->user_size) + return -EINVAL; + + if (args->flags & ETNA_USERPTR_WRITE) + access = VERIFY_WRITE; + else + access = VERIFY_READ; + + if (!access_ok(access, (void __user *)(unsigned long)args->user_ptr, + args->user_size)) + return -EFAULT; + + return etnaviv_gem_new_userptr(dev, file, args->user_ptr, + args->user_size, args->flags, + &args->handle); +} + static const struct drm_ioctl_desc etnaviv_ioctls[] = { #define ETNA_IOCTL(n, func, flags) \ DRM_IOCTL_DEF_DRV(ETNAVIV_##n, etnaviv_ioctl_##func, flags) @@ -464,6 +493,7 @@ static const struct drm_ioctl_desc etnaviv_ioctls[] = { ETNA_IOCTL(GEM_CPU_FINI, gem_cpu_fini, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), ETNA_IOCTL(GEM_SUBMIT, gem_submit, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), ETNA_IOCTL(WAIT_FENCE, wait_fence, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), + ETNA_IOCTL(GEM_USERPTR, gem_userptr, DRM_UNLOCKED|DRM_RENDER_ALLOW), };
static const struct vm_operations_struct vm_ops = { diff --git a/drivers/staging/etnaviv/etnaviv_drv.h b/drivers/staging/etnaviv/etnaviv_drv.h index 79f22d15fc63..59aa4666d2cc 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.h +++ b/drivers/staging/etnaviv/etnaviv_drv.h @@ -107,6 +107,8 @@ int etnaviv_gem_new_handle(struct drm_device *dev, struct drm_file *file, uint32_t size, uint32_t flags, uint32_t *handle); struct drm_gem_object *etnaviv_gem_new(struct drm_device *dev, uint32_t size, uint32_t flags); +int etnaviv_gem_new_userptr(struct drm_device *dev, struct drm_file *file, + uintptr_t ptr, uint32_t size, uint32_t flags, uint32_t *handle); u32 etnaviv_buffer_init(struct etnaviv_gpu *gpu); void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct etnaviv_gem_submit *submit); diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index ad8fa71ea920..4e28c57b2409 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -694,3 +694,198 @@ int etnaviv_gem_new_private(struct drm_device *dev, size_t size, uint32_t flags,
return 0; } + +struct get_pages_work { + struct work_struct work; + struct mm_struct *mm; + struct task_struct *task; + struct etnaviv_gem_object *etnaviv_obj; +}; + +static struct page **etnaviv_gem_userptr_do_get_pages( + struct etnaviv_gem_object *etnaviv_obj, struct mm_struct *mm, struct task_struct *task) +{ + int ret, pinned, npages = etnaviv_obj->base.size >> PAGE_SHIFT; + struct page **pvec; + uintptr_t ptr; + + pvec = drm_malloc_ab(npages, sizeof(struct page *)); + if (!pvec) + return ERR_PTR(-ENOMEM); + + pinned = 0; + ptr = etnaviv_obj->userptr.ptr; + + down_read(&mm->mmap_sem); + while (pinned < npages) { + ret = get_user_pages(task, mm, ptr, npages - pinned, + !etnaviv_obj->userptr.ro, 0, + pvec + pinned, NULL); + if (ret < 0) + break; + + ptr += ret * PAGE_SIZE; + pinned += ret; + } + up_read(&mm->mmap_sem); + + if (ret < 0) { + release_pages(pvec, pinned, 0); + drm_free_large(pvec); + return ERR_PTR(ret); + } + + return pvec; +} + +static void __etnaviv_gem_userptr_get_pages(struct work_struct *_work) +{ + struct get_pages_work *work = container_of(_work, typeof(*work), work); + struct etnaviv_gem_object *etnaviv_obj = work->etnaviv_obj; + struct drm_device *dev = etnaviv_obj->base.dev; + struct page **pvec; + + pvec = etnaviv_gem_userptr_do_get_pages(etnaviv_obj, work->mm, work->task); + + mutex_lock(&dev->struct_mutex); + if (IS_ERR(pvec)) { + etnaviv_obj->userptr.work = ERR_CAST(pvec); + } else { + etnaviv_obj->userptr.work = NULL; + etnaviv_obj->pages = pvec; + } + + drm_gem_object_unreference(&etnaviv_obj->base); + mutex_unlock(&dev->struct_mutex); + + mmput(work->mm); + put_task_struct(work->task); + kfree(work); +} + +static int etnaviv_gem_userptr_get_pages(struct etnaviv_gem_object *etnaviv_obj) +{ + struct etnaviv_drm_private *priv; + struct page **pvec = NULL; + struct get_pages_work *work; + struct mm_struct *mm; + int ret, pinned, npages = etnaviv_obj->base.size >> PAGE_SHIFT; + + if (etnaviv_obj->userptr.work) { + if 
(IS_ERR(etnaviv_obj->userptr.work)) { + ret = PTR_ERR(etnaviv_obj->userptr.work); + etnaviv_obj->userptr.work = NULL; + } else { + ret = -EAGAIN; + } + return ret; + } + + mm = get_task_mm(etnaviv_obj->userptr.task); + pinned = 0; + if (mm == current->mm) { + pvec = drm_malloc_ab(npages, sizeof(struct page *)); + if (!pvec) { + mmput(mm); + return -ENOMEM; + } + + pinned = __get_user_pages_fast(etnaviv_obj->userptr.ptr, npages, + !etnaviv_obj->userptr.ro, pvec); + if (pinned < 0) { + drm_free_large(pvec); + mmput(mm); + return pinned; + } + + if (pinned == npages) { + etnaviv_obj->pages = pvec; + mmput(mm); + return 0; + } + } + + release_pages(pvec, pinned, 0); + drm_free_large(pvec); + + work = kmalloc(sizeof(*work), GFP_KERNEL); + if (!work) { + mmput(mm); + return -ENOMEM; + } + + get_task_struct(current); + drm_gem_object_reference(&etnaviv_obj->base); + + work->mm = mm; + work->task = current; + work->etnaviv_obj = etnaviv_obj; + + etnaviv_obj->userptr.work = &work->work; + INIT_WORK(&work->work, __etnaviv_gem_userptr_get_pages); + + priv = etnaviv_obj->base.dev->dev_private; + queue_work(priv->wq, &work->work); + + return -EAGAIN; +} + +static void etnaviv_gem_userptr_release(struct etnaviv_gem_object *etnaviv_obj) +{ + if (etnaviv_obj->sgt) { + /* + * For non-cached buffers, ensure the new pages are clean + * because display controller, GPU, etc. 
are not coherent: + */ + if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_CACHED)) + dma_unmap_sg(etnaviv_obj->base.dev->dev, + etnaviv_obj->sgt->sgl, + etnaviv_obj->sgt->nents, + DMA_BIDIRECTIONAL); + sg_free_table(etnaviv_obj->sgt); + kfree(etnaviv_obj->sgt); + } + if (etnaviv_obj->pages) { + int npages = etnaviv_obj->base.size >> PAGE_SHIFT; + + release_pages(etnaviv_obj->pages, npages, 0); + drm_free_large(etnaviv_obj->pages); + } + put_task_struct(etnaviv_obj->userptr.task); +} + +static const struct etnaviv_gem_ops etnaviv_gem_userptr_ops = { + .get_pages = etnaviv_gem_userptr_get_pages, + .release = etnaviv_gem_userptr_release, +}; + +int etnaviv_gem_new_userptr(struct drm_device *dev, struct drm_file *file, + uintptr_t ptr, uint32_t size, uint32_t flags, uint32_t *handle) +{ + struct etnaviv_gem_object *etnaviv_obj; + int ret; + + ret = mutex_lock_interruptible(&dev->struct_mutex); + if (ret) + return ret; + + ret = etnaviv_gem_new_private(dev, size, ETNA_BO_CACHED, &etnaviv_obj); + if (ret == 0) { + etnaviv_obj->ops = &etnaviv_gem_userptr_ops; + etnaviv_obj->userptr.ptr = ptr; + etnaviv_obj->userptr.task = current; + etnaviv_obj->userptr.ro = !(flags & ETNA_USERPTR_WRITE); + get_task_struct(current); + } + mutex_unlock(&dev->struct_mutex); + + if (ret) + return ret; + + ret = drm_gem_handle_create(file, &etnaviv_obj->base, handle); + + /* drop reference from allocate - handle holds it now */ + drm_gem_object_unreference_unlocked(&etnaviv_obj->base); + + return ret; +} diff --git a/drivers/staging/etnaviv/etnaviv_gem.h b/drivers/staging/etnaviv/etnaviv_gem.h index bbfcbd7557fe..add616338a9f 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.h +++ b/drivers/staging/etnaviv/etnaviv_gem.h @@ -23,6 +23,13 @@
struct etnaviv_gem_ops;
+struct etnaviv_gem_userptr { + uintptr_t ptr; + struct task_struct *task; + struct work_struct *work; + bool ro; +}; + struct etnaviv_gem_object { struct drm_gem_object base; const struct etnaviv_gem_ops *ops; @@ -64,6 +71,8 @@ struct etnaviv_gem_object {
/* for buffer manipulation during submit */ u32 offset; + + struct etnaviv_gem_userptr userptr; }; #define to_etnaviv_bo(x) container_of(x, struct etnaviv_gem_object, base)
diff --git a/drivers/staging/etnaviv/etnaviv_gem_submit.c b/drivers/staging/etnaviv/etnaviv_gem_submit.c index af3718465ea1..bbe2171b8eb4 100644 --- a/drivers/staging/etnaviv/etnaviv_gem_submit.c +++ b/drivers/staging/etnaviv/etnaviv_gem_submit.c @@ -435,5 +435,14 @@ out: if (submit) submit_cleanup(submit, !!ret); mutex_unlock(&dev->struct_mutex); + + /* + * If we're returning -EAGAIN, it could be due to the userptr code + * wanting to run its workqueue outside of the struct_mutex. + * Flush our workqueue to ensure that it is run in a timely manner. + */ + if (ret == -EAGAIN) + flush_workqueue(priv->wq); + return ret; } diff --git a/include/uapi/drm/etnaviv_drm.h b/include/uapi/drm/etnaviv_drm.h index 9654021017fd..a4c109ffbea4 100644 --- a/include/uapi/drm/etnaviv_drm.h +++ b/include/uapi/drm/etnaviv_drm.h @@ -202,6 +202,15 @@ struct drm_etnaviv_wait_fence { struct drm_etnaviv_timespec timeout; /* in */ };
+#define ETNA_USERPTR_READ 0x01 +#define ETNA_USERPTR_WRITE 0x02 +struct drm_etnaviv_gem_userptr { + uint64_t user_ptr; /* in, page aligned user pointer */ + uint64_t user_size; /* in, page aligned user size */ + uint32_t flags; /* in, flags */ + uint32_t handle; /* out, non-zero handle */ +}; + #define DRM_ETNAVIV_GET_PARAM 0x00 /* placeholder: #define DRM_MSM_SET_PARAM 0x01 @@ -212,7 +221,8 @@ struct drm_etnaviv_wait_fence { #define DRM_ETNAVIV_GEM_CPU_FINI 0x05 #define DRM_ETNAVIV_GEM_SUBMIT 0x06 #define DRM_ETNAVIV_WAIT_FENCE 0x07 -#define DRM_ETNAVIV_NUM_IOCTLS 0x08 +#define DRM_ETNAVIV_GEM_USERPTR 0x08 +#define DRM_ETNAVIV_NUM_IOCTLS 0x09
#define DRM_IOCTL_ETNAVIV_GET_PARAM DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GET_PARAM, struct drm_etnaviv_param) #define DRM_IOCTL_ETNAVIV_GEM_NEW DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_NEW, struct drm_etnaviv_gem_new) @@ -221,5 +231,6 @@ struct drm_etnaviv_wait_fence { #define DRM_IOCTL_ETNAVIV_GEM_CPU_FINI DRM_IOW(DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_CPU_FINI, struct drm_etnaviv_gem_cpu_fini) #define DRM_IOCTL_ETNAVIV_GEM_SUBMIT DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_SUBMIT, struct drm_etnaviv_gem_submit) #define DRM_IOCTL_ETNAVIV_WAIT_FENCE DRM_IOW(DRM_COMMAND_BASE + DRM_ETNAVIV_WAIT_FENCE, struct drm_etnaviv_wait_fence) +#define DRM_IOCTL_ETNAVIV_GEM_USERPTR DRM_IOWR(DRM_COMMAND_BASE + DRM_ETNAVIV_GEM_USERPTR, struct drm_etnaviv_gem_userptr)
#endif /* __ETNAVIV_DRM_H__ */
From: Russell King rmk+kernel@arm.linux.org.uk
Rename the DRM device pointer in the etnaviv_gpu structure to 'drm', so that we can add a struct device pointer alongside it.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_buffer.c | 2 +- drivers/staging/etnaviv/etnaviv_gpu.c | 74 ++++++++++++++++---------------- drivers/staging/etnaviv/etnaviv_gpu.h | 2 +- 3 files changed, 39 insertions(+), 39 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_buffer.c b/drivers/staging/etnaviv/etnaviv_buffer.c index 96661e513d7d..ad8ff55a59b4 100644 --- a/drivers/staging/etnaviv/etnaviv_buffer.c +++ b/drivers/staging/etnaviv/etnaviv_buffer.c @@ -135,7 +135,7 @@ static void etnaviv_buffer_dump(struct etnaviv_gpu *gpu, u32 size = obj->base.size; u32 *ptr = obj->vaddr + off;
- dev_info(gpu->dev->dev, "virt %p phys 0x%08x free 0x%08x\n", + dev_info(gpu->drm->dev, "virt %p phys 0x%08x free 0x%08x\n", ptr, obj->paddr + off, size - len * 4 - off);
print_hex_dump(KERN_INFO, "cmd ", DUMP_PREFIX_OFFSET, 16, 4, diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 91dc44f35a49..7ef9120bd7de 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -157,7 +157,7 @@ static void etnaviv_hw_specs(struct etnaviv_gpu *gpu) gpu->identity.vertex_output_buffer_size = 1 << gpu->identity.vertex_output_buffer_size; } else { - dev_err(gpu->dev->dev, "TODO: determine GPU specs based on model\n"); + dev_err(gpu->drm->dev, "TODO: determine GPU specs based on model\n"); }
switch (gpu->identity.instruction_count) { @@ -178,25 +178,25 @@ static void etnaviv_hw_specs(struct etnaviv_gpu *gpu) break; }
- dev_info(gpu->dev->dev, "stream_count: %x\n", + dev_info(gpu->drm->dev, "stream_count: %x\n", gpu->identity.stream_count); - dev_info(gpu->dev->dev, "register_max: %x\n", + dev_info(gpu->drm->dev, "register_max: %x\n", gpu->identity.register_max); - dev_info(gpu->dev->dev, "thread_count: %x\n", + dev_info(gpu->drm->dev, "thread_count: %x\n", gpu->identity.thread_count); - dev_info(gpu->dev->dev, "vertex_cache_size: %x\n", + dev_info(gpu->drm->dev, "vertex_cache_size: %x\n", gpu->identity.vertex_cache_size); - dev_info(gpu->dev->dev, "shader_core_count: %x\n", + dev_info(gpu->drm->dev, "shader_core_count: %x\n", gpu->identity.shader_core_count); - dev_info(gpu->dev->dev, "pixel_pipes: %x\n", + dev_info(gpu->drm->dev, "pixel_pipes: %x\n", gpu->identity.pixel_pipes); - dev_info(gpu->dev->dev, "vertex_output_buffer_size: %x\n", + dev_info(gpu->drm->dev, "vertex_output_buffer_size: %x\n", gpu->identity.vertex_output_buffer_size); - dev_info(gpu->dev->dev, "buffer_size: %x\n", + dev_info(gpu->drm->dev, "buffer_size: %x\n", gpu->identity.buffer_size); - dev_info(gpu->dev->dev, "instruction_count: %x\n", + dev_info(gpu->drm->dev, "instruction_count: %x\n", gpu->identity.instruction_count); - dev_info(gpu->dev->dev, "num_constants: %x\n", + dev_info(gpu->drm->dev, "num_constants: %x\n", gpu->identity.num_constants); }
@@ -242,8 +242,8 @@ static void etnaviv_hw_identify(struct etnaviv_gpu *gpu) } }
- dev_info(gpu->dev->dev, "model: %x\n", gpu->identity.model); - dev_info(gpu->dev->dev, "revision: %x\n", gpu->identity.revision); + dev_info(gpu->drm->dev, "model: %x\n", gpu->identity.model); + dev_info(gpu->drm->dev, "revision: %x\n", gpu->identity.revision);
gpu->identity.features = gpu_read(gpu, VIVS_HI_CHIP_FEATURE);
@@ -275,13 +275,13 @@ static void etnaviv_hw_identify(struct etnaviv_gpu *gpu) gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_3); }
- dev_info(gpu->dev->dev, "minor_features: %x\n", + dev_info(gpu->drm->dev, "minor_features: %x\n", gpu->identity.minor_features0); - dev_info(gpu->dev->dev, "minor_features1: %x\n", + dev_info(gpu->drm->dev, "minor_features1: %x\n", gpu->identity.minor_features1); - dev_info(gpu->dev->dev, "minor_features2: %x\n", + dev_info(gpu->drm->dev, "minor_features2: %x\n", gpu->identity.minor_features2); - dev_info(gpu->dev->dev, "minor_features3: %x\n", + dev_info(gpu->drm->dev, "minor_features3: %x\n", gpu->identity.minor_features3);
etnaviv_hw_specs(gpu); @@ -334,7 +334,7 @@ static void etnaviv_hw_reset(struct etnaviv_gpu *gpu)
/* try reseting again if FE it not idle */ if ((idle & VIVS_HI_IDLE_STATE_FE) == 0) { - dev_dbg(gpu->dev->dev, "%s: FE is not idle\n", + dev_dbg(gpu->drm->dev, "%s: FE is not idle\n", gpu->name); continue; } @@ -345,7 +345,7 @@ static void etnaviv_hw_reset(struct etnaviv_gpu *gpu) /* is the GPU idle? */ if (((control & VIVS_HI_CLOCK_CONTROL_IDLE_3D) == 0) || ((control & VIVS_HI_CLOCK_CONTROL_IDLE_2D) == 0)) { - dev_dbg(gpu->dev->dev, "%s: GPU is not idle\n", + dev_dbg(gpu->drm->dev, "%s: GPU is not idle\n", gpu->name); continue; } @@ -385,7 +385,7 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu) * simple and to get something working, just use a single address space: */ mmuv2 = gpu->identity.minor_features1 & chipMinorFeatures1_MMU_VERSION; - dev_dbg(gpu->dev->dev, "mmuv2: %d\n", mmuv2); + dev_dbg(gpu->drm->dev, "mmuv2: %d\n", mmuv2);
if (!mmuv2) { iommu = etnaviv_iommu_domain_alloc(gpu); @@ -402,19 +402,19 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
/* TODO: we will leak here memory - fix it! */
- gpu->mmu = etnaviv_iommu_new(gpu->dev, iommu); + gpu->mmu = etnaviv_iommu_new(gpu->drm, iommu); if (!gpu->mmu) { ret = -ENOMEM; goto fail; } - etnaviv_register_mmu(gpu->dev, gpu->mmu); + etnaviv_register_mmu(gpu->drm, gpu->mmu);
/* Create buffer: */ - gpu->buffer = etnaviv_gem_new(gpu->dev, PAGE_SIZE, ETNA_BO_CMDSTREAM); + gpu->buffer = etnaviv_gem_new(gpu->drm, PAGE_SIZE, ETNA_BO_CMDSTREAM); if (IS_ERR(gpu->buffer)) { ret = PTR_ERR(gpu->buffer); gpu->buffer = NULL; - dev_err(gpu->dev->dev, "could not create buffer: %d\n", ret); + dev_err(gpu->drm->dev, "could not create buffer: %d\n", ret); goto fail; }
@@ -547,7 +547,7 @@ void etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct seq_file *m) static int enable_pwrrail(struct etnaviv_gpu *gpu) { #if 0 - struct drm_device *dev = gpu->dev; + struct drm_device *dev = gpu->drm; int ret = 0;
if (gpu->gpu_reg) { @@ -667,7 +667,7 @@ static void recover_worker(struct work_struct *work) { struct etnaviv_gpu *gpu = container_of(work, struct etnaviv_gpu, recover_work); - struct drm_device *dev = gpu->dev; + struct drm_device *dev = gpu->drm;
dev_err(dev->dev, "%s: hangcheck recover!\n", gpu->name);
@@ -688,7 +688,7 @@ static void hangcheck_timer_reset(struct etnaviv_gpu *gpu) static void hangcheck_handler(unsigned long data) { struct etnaviv_gpu *gpu = (struct etnaviv_gpu *)data; - struct drm_device *dev = gpu->dev; + struct drm_device *dev = gpu->drm; struct etnaviv_drm_private *priv = dev->dev_private; uint32_t fence = gpu->retired_fence;
@@ -724,7 +724,7 @@ static unsigned int event_alloc(struct etnaviv_gpu *gpu) ret = wait_for_completion_timeout(&gpu->event_free, msecs_to_jiffies(10 * 10000)); if (!ret) - dev_err(gpu->dev->dev, "wait_for_completion_timeout failed"); + dev_err(gpu->drm->dev, "wait_for_completion_timeout failed");
spin_lock_irqsave(&gpu->event_spinlock, flags);
@@ -749,7 +749,7 @@ static void event_free(struct etnaviv_gpu *gpu, unsigned int event) spin_lock_irqsave(&gpu->event_spinlock, flags);
if (gpu->event[event].used == false) { - dev_warn(gpu->dev->dev, "event %u is already marked as free", + dev_warn(gpu->drm->dev, "event %u is already marked as free", event); spin_unlock_irqrestore(&gpu->event_spinlock, flags); } else { @@ -768,10 +768,10 @@ static void retire_worker(struct work_struct *work) { struct etnaviv_gpu *gpu = container_of(work, struct etnaviv_gpu, retire_work); - struct drm_device *dev = gpu->dev; + struct drm_device *dev = gpu->drm; uint32_t fence = gpu->retired_fence;
- etnaviv_update_fence(gpu->dev, fence); + etnaviv_update_fence(gpu->drm, fence);
mutex_lock(&dev->struct_mutex);
@@ -798,7 +798,7 @@ static void retire_worker(struct work_struct *work) /* call from irq handler to schedule work to retire bo's */ void etnaviv_gpu_retire(struct etnaviv_gpu *gpu) { - struct etnaviv_drm_private *priv = gpu->dev->dev_private; + struct etnaviv_drm_private *priv = gpu->drm->dev_private;
queue_work(priv->wq, &gpu->retire_work); } @@ -807,7 +807,7 @@ void etnaviv_gpu_retire(struct etnaviv_gpu *gpu) int etnaviv_gpu_submit(struct etnaviv_gpu *gpu, struct etnaviv_gem_submit *submit, struct etnaviv_file_private *ctx) { - struct drm_device *dev = gpu->dev; + struct drm_device *dev = gpu->drm; struct etnaviv_drm_private *priv = dev->dev_private; int ret = 0; unsigned int event, i; @@ -878,14 +878,14 @@ static irqreturn_t irq_handler(int irq, void *data) u32 intr = gpu_read(gpu, VIVS_HI_INTR_ACKNOWLEDGE);
if (intr != 0) { - dev_dbg(gpu->dev->dev, "intr 0x%08x\n", intr); + dev_dbg(gpu->drm->dev, "intr 0x%08x\n", intr);
if (intr & VIVS_HI_INTR_ACKNOWLEDGE_AXI_BUS_ERROR) - dev_err(gpu->dev->dev, "AXI bus error\n"); + dev_err(gpu->drm->dev, "AXI bus error\n"); else { uint8_t event = __fls(intr);
- dev_dbg(gpu->dev->dev, "event %u\n", event); + dev_dbg(gpu->drm->dev, "event %u\n", event); gpu->retired_fence = gpu->event[event].fence; gpu->last_ring_pos = gpu->event[event].ring_pos; event_free(gpu, event); @@ -918,7 +918,7 @@ static int etnaviv_gpu_bind(struct device *dev, struct device *master,
dev_info(dev, "post gpu[idx]: %p\n", priv->gpu[idx]);
- gpu->dev = drm; + gpu->drm = drm;
INIT_LIST_HEAD(&gpu->active_list); INIT_WORK(&gpu->retire_work, retire_worker); diff --git a/drivers/staging/etnaviv/etnaviv_gpu.h b/drivers/staging/etnaviv/etnaviv_gpu.h index a26d0ded1019..c9c482a8d569 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.h +++ b/drivers/staging/etnaviv/etnaviv_gpu.h @@ -86,7 +86,7 @@ struct etnaviv_event {
struct etnaviv_gpu { const char *name; - struct drm_device *dev; + struct drm_device *drm; struct etnaviv_chip_identity identity; int pipe;
From: Russell King rmk+kernel@arm.linux.org.uk
Report messages against the component device rather than the subsystem device, so that the responsible component is identified automatically instead of having to be formatted into each message string. This ensures that many of the debug messages are properly attributed to their appropriate component, which is especially important when a SoC contains multiple GPU cores.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_buffer.c | 2 +- drivers/staging/etnaviv/etnaviv_drv.c | 8 ++-- drivers/staging/etnaviv/etnaviv_gpu.c | 82 +++++++++++++++----------------- drivers/staging/etnaviv/etnaviv_gpu.h | 2 +- 4 files changed, 45 insertions(+), 49 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_buffer.c b/drivers/staging/etnaviv/etnaviv_buffer.c index ad8ff55a59b4..0ce1e4baafa4 100644 --- a/drivers/staging/etnaviv/etnaviv_buffer.c +++ b/drivers/staging/etnaviv/etnaviv_buffer.c @@ -135,7 +135,7 @@ static void etnaviv_buffer_dump(struct etnaviv_gpu *gpu, u32 size = obj->base.size; u32 *ptr = obj->vaddr + off;
- dev_info(gpu->drm->dev, "virt %p phys 0x%08x free 0x%08x\n", + dev_info(gpu->dev, "virt %p phys 0x%08x free 0x%08x\n", ptr, obj->paddr + off, size - len * 4 - off);
print_hex_dump(KERN_INFO, "cmd ", DUMP_PREFIX_OFFSET, 16, 4, diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index 5f386a7045ae..3dba228265ea 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -126,8 +126,7 @@ static void load_gpu(struct drm_device *dev) etnaviv_gpu_pm_resume(g); ret = etnaviv_gpu_init(g); if (ret) { - dev_err(dev->dev, "%s hw init failed: %d\n", - g->name, ret); + dev_err(g->dev, "hw init failed: %d\n", ret); priv->gpu[i] = NULL; } } @@ -206,7 +205,7 @@ static int etnaviv_gpu_show(struct drm_device *dev, struct seq_file *m) for (i = 0; i < ETNA_MAX_PIPES; i++) { gpu = priv->gpu[i]; if (gpu) { - seq_printf(m, "%s Status:\n", gpu->name); + seq_printf(m, "%s Status:\n", dev_name(gpu->dev)); etnaviv_gpu_debugfs(gpu, m); } } @@ -223,7 +222,8 @@ static int etnaviv_gem_show(struct drm_device *dev, struct seq_file *m) for (i = 0; i < ETNA_MAX_PIPES; i++) { gpu = priv->gpu[i]; if (gpu) { - seq_printf(m, "Active Objects (%s):\n", gpu->name); + seq_printf(m, "Active Objects (%s):\n", + dev_name(gpu->dev)); msm_gem_describe_objects(&gpu->active_list, m); } } diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 7ef9120bd7de..a5a47f34eba5 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -104,7 +104,7 @@ int etnaviv_gpu_get_param(struct etnaviv_gpu *gpu, uint32_t param, break;
default: - DBG("%s: invalid param: %u", gpu->name, param); + DBG("%s: invalid param: %u", dev_name(gpu->dev), param); return -EINVAL; }
@@ -157,7 +157,7 @@ static void etnaviv_hw_specs(struct etnaviv_gpu *gpu) gpu->identity.vertex_output_buffer_size = 1 << gpu->identity.vertex_output_buffer_size; } else { - dev_err(gpu->drm->dev, "TODO: determine GPU specs based on model\n"); + dev_err(gpu->dev, "TODO: determine GPU specs based on model\n"); }
switch (gpu->identity.instruction_count) { @@ -178,25 +178,25 @@ static void etnaviv_hw_specs(struct etnaviv_gpu *gpu) break; }
- dev_info(gpu->drm->dev, "stream_count: %x\n", + dev_info(gpu->dev, "stream_count: %x\n", gpu->identity.stream_count); - dev_info(gpu->drm->dev, "register_max: %x\n", + dev_info(gpu->dev, "register_max: %x\n", gpu->identity.register_max); - dev_info(gpu->drm->dev, "thread_count: %x\n", + dev_info(gpu->dev, "thread_count: %x\n", gpu->identity.thread_count); - dev_info(gpu->drm->dev, "vertex_cache_size: %x\n", + dev_info(gpu->dev, "vertex_cache_size: %x\n", gpu->identity.vertex_cache_size); - dev_info(gpu->drm->dev, "shader_core_count: %x\n", + dev_info(gpu->dev, "shader_core_count: %x\n", gpu->identity.shader_core_count); - dev_info(gpu->drm->dev, "pixel_pipes: %x\n", + dev_info(gpu->dev, "pixel_pipes: %x\n", gpu->identity.pixel_pipes); - dev_info(gpu->drm->dev, "vertex_output_buffer_size: %x\n", + dev_info(gpu->dev, "vertex_output_buffer_size: %x\n", gpu->identity.vertex_output_buffer_size); - dev_info(gpu->drm->dev, "buffer_size: %x\n", + dev_info(gpu->dev, "buffer_size: %x\n", gpu->identity.buffer_size); - dev_info(gpu->drm->dev, "instruction_count: %x\n", + dev_info(gpu->dev, "instruction_count: %x\n", gpu->identity.instruction_count); - dev_info(gpu->drm->dev, "num_constants: %x\n", + dev_info(gpu->dev, "num_constants: %x\n", gpu->identity.num_constants); }
@@ -242,8 +242,8 @@ static void etnaviv_hw_identify(struct etnaviv_gpu *gpu) } }
- dev_info(gpu->drm->dev, "model: %x\n", gpu->identity.model); - dev_info(gpu->drm->dev, "revision: %x\n", gpu->identity.revision); + dev_info(gpu->dev, "model: %x\n", gpu->identity.model); + dev_info(gpu->dev, "revision: %x\n", gpu->identity.revision);
gpu->identity.features = gpu_read(gpu, VIVS_HI_CHIP_FEATURE);
@@ -275,13 +275,13 @@ static void etnaviv_hw_identify(struct etnaviv_gpu *gpu) gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_3); }
- dev_info(gpu->drm->dev, "minor_features: %x\n", + dev_info(gpu->dev, "minor_features: %x\n", gpu->identity.minor_features0); - dev_info(gpu->drm->dev, "minor_features1: %x\n", + dev_info(gpu->dev, "minor_features1: %x\n", gpu->identity.minor_features1); - dev_info(gpu->drm->dev, "minor_features2: %x\n", + dev_info(gpu->dev, "minor_features2: %x\n", gpu->identity.minor_features2); - dev_info(gpu->drm->dev, "minor_features3: %x\n", + dev_info(gpu->dev, "minor_features3: %x\n", gpu->identity.minor_features3);
etnaviv_hw_specs(gpu); @@ -334,8 +334,7 @@ static void etnaviv_hw_reset(struct etnaviv_gpu *gpu)
/* try reseting again if FE it not idle */ if ((idle & VIVS_HI_IDLE_STATE_FE) == 0) { - dev_dbg(gpu->drm->dev, "%s: FE is not idle\n", - gpu->name); + dev_dbg(gpu->dev, "FE is not idle\n"); continue; }
@@ -345,8 +344,7 @@ static void etnaviv_hw_reset(struct etnaviv_gpu *gpu) /* is the GPU idle? */ if (((control & VIVS_HI_CLOCK_CONTROL_IDLE_3D) == 0) || ((control & VIVS_HI_CLOCK_CONTROL_IDLE_2D) == 0)) { - dev_dbg(gpu->drm->dev, "%s: GPU is not idle\n", - gpu->name); + dev_dbg(gpu->dev, "GPU is not idle\n"); continue; }
@@ -385,7 +383,7 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu) * simple and to get something working, just use a single address space: */ mmuv2 = gpu->identity.minor_features1 & chipMinorFeatures1_MMU_VERSION; - dev_dbg(gpu->drm->dev, "mmuv2: %d\n", mmuv2); + dev_dbg(gpu->dev, "mmuv2: %d\n", mmuv2);
if (!mmuv2) { iommu = etnaviv_iommu_domain_alloc(gpu); @@ -414,7 +412,7 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu) if (IS_ERR(gpu->buffer)) { ret = PTR_ERR(gpu->buffer); gpu->buffer = NULL; - dev_err(gpu->drm->dev, "could not create buffer: %d\n", ret); + dev_err(gpu->dev, "could not create buffer: %d\n", ret); goto fail; }
@@ -622,7 +620,7 @@ int etnaviv_gpu_pm_resume(struct etnaviv_gpu *gpu) { int ret;
- DBG("%s", gpu->name); + DBG("%s", dev_name(gpu->dev));
ret = enable_pwrrail(gpu); if (ret) @@ -643,7 +641,7 @@ int etnaviv_gpu_pm_suspend(struct etnaviv_gpu *gpu) { int ret;
- DBG("%s", gpu->name); + DBG("%s", dev_name(gpu->dev));
ret = disable_axi(gpu); if (ret) @@ -669,7 +667,7 @@ static void recover_worker(struct work_struct *work) recover_work); struct drm_device *dev = gpu->drm;
- dev_err(dev->dev, "%s: hangcheck recover!\n", gpu->name); + dev_err(gpu->dev, "hangcheck recover!\n");
mutex_lock(&dev->struct_mutex); /* TODO gpu->funcs->recover(gpu); */ @@ -680,7 +678,7 @@ static void recover_worker(struct work_struct *work)
static void hangcheck_timer_reset(struct etnaviv_gpu *gpu) { - DBG("%s", gpu->name); + DBG("%s", dev_name(gpu->dev)); mod_timer(&gpu->hangcheck_timer, round_jiffies_up(jiffies + DRM_MSM_HANGCHECK_JIFFIES)); } @@ -698,12 +696,10 @@ static void hangcheck_handler(unsigned long data) } else if (fence_after(gpu->submitted_fence, fence)) { /* no progress and not done.. hung! */ gpu->hangcheck_fence = fence; - dev_err(dev->dev, "%s: hangcheck detected gpu lockup!\n", - gpu->name); - dev_err(dev->dev, "%s: completed fence: %u\n", - gpu->name, fence); - dev_err(dev->dev, "%s: submitted fence: %u\n", - gpu->name, gpu->submitted_fence); + dev_err(gpu->dev, "hangcheck detected gpu lockup!\n"); + dev_err(gpu->dev, " completed fence: %u\n", fence); + dev_err(gpu->dev, " submitted fence: %u\n", + gpu->submitted_fence); queue_work(priv->wq, &gpu->recover_work); }
@@ -724,7 +720,7 @@ static unsigned int event_alloc(struct etnaviv_gpu *gpu) ret = wait_for_completion_timeout(&gpu->event_free, msecs_to_jiffies(10 * 10000)); if (!ret) - dev_err(gpu->drm->dev, "wait_for_completion_timeout failed"); + dev_err(gpu->dev, "wait_for_completion_timeout failed");
spin_lock_irqsave(&gpu->event_spinlock, flags);
@@ -749,7 +745,7 @@ static void event_free(struct etnaviv_gpu *gpu, unsigned int event) spin_lock_irqsave(&gpu->event_spinlock, flags);
if (gpu->event[event].used == false) { - dev_warn(gpu->drm->dev, "event %u is already marked as free", + dev_warn(gpu->dev, "event %u is already marked as free", event); spin_unlock_irqrestore(&gpu->event_spinlock, flags); } else { @@ -878,14 +874,14 @@ static irqreturn_t irq_handler(int irq, void *data) u32 intr = gpu_read(gpu, VIVS_HI_INTR_ACKNOWLEDGE);
if (intr != 0) { - dev_dbg(gpu->drm->dev, "intr 0x%08x\n", intr); + dev_dbg(gpu->dev, "intr 0x%08x\n", intr);
if (intr & VIVS_HI_INTR_ACKNOWLEDGE_AXI_BUS_ERROR) - dev_err(gpu->drm->dev, "AXI bus error\n"); + dev_err(gpu->dev, "AXI bus error\n"); else { uint8_t event = __fls(intr);
- dev_dbg(gpu->drm->dev, "event %u\n", event); + dev_dbg(gpu->dev, "event %u\n", event); gpu->retired_fence = gpu->event[event].fence; gpu->last_ring_pos = gpu->event[event].ring_pos; event_free(gpu, event); @@ -938,7 +934,7 @@ static void etnaviv_gpu_unbind(struct device *dev, struct device *master,
del_timer(&gpu->hangcheck_timer);
- DBG("%s", gpu->name); + DBG("%s", dev_name(gpu->dev));
WARN_ON(!list_empty(&gpu->active_list));
@@ -985,10 +981,10 @@ static int etnaviv_gpu_platform_probe(struct platform_device *pdev) if (!match) return -EINVAL;
- gpu->name = pdev->name; + gpu->dev = &pdev->dev;
/* Map registers: */ - gpu->mmio = etnaviv_ioremap(pdev, NULL, gpu->name); + gpu->mmio = etnaviv_ioremap(pdev, NULL, dev_name(gpu->dev)); if (IS_ERR(gpu->mmio)) return PTR_ERR(gpu->mmio);
@@ -1001,7 +997,7 @@ static int etnaviv_gpu_platform_probe(struct platform_device *pdev) }
err = devm_request_irq(&pdev->dev, gpu->irq, irq_handler, - IRQF_TRIGGER_HIGH, gpu->name, gpu); + IRQF_TRIGGER_HIGH, dev_name(gpu->dev), gpu); if (err) { dev_err(dev, "failed to request IRQ%u: %d\n", gpu->irq, err); goto fail; diff --git a/drivers/staging/etnaviv/etnaviv_gpu.h b/drivers/staging/etnaviv/etnaviv_gpu.h index c9c482a8d569..885eddf9fb1c 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.h +++ b/drivers/staging/etnaviv/etnaviv_gpu.h @@ -85,8 +85,8 @@ struct etnaviv_event { };
struct etnaviv_gpu { - const char *name; struct drm_device *drm; + struct device *dev; struct etnaviv_chip_identity identity; int pipe;
From: Russell King rmk+kernel@arm.linux.org.uk
We need to synchronously take down the hangcheck timer, and then cancel the recovery work when we're unbinding the GPU to avoid these timers and workers running after we clean up.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gpu.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index a5a47f34eba5..0547e93972e6 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -932,10 +932,12 @@ static void etnaviv_gpu_unbind(struct device *dev, struct device *master, { struct etnaviv_gpu *gpu = dev_get_drvdata(dev);
- del_timer(&gpu->hangcheck_timer); - DBG("%s", dev_name(gpu->dev));
+ /* Safely take down hangcheck */ + del_timer_sync(&gpu->hangcheck_timer); + cancel_work_sync(&gpu->recover_work); + WARN_ON(!list_empty(&gpu->active_list));
if (gpu->buffer)
From: Russell King rmk+kernel@arm.linux.org.uk
Provide a function to safely take down the hangcheck timer and workqueue.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gpu.c | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 0547e93972e6..24ed14804ebd 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -708,6 +708,12 @@ static void hangcheck_handler(unsigned long data) hangcheck_timer_reset(gpu); }
+static void hangcheck_disable(struct etnaviv_gpu *gpu) +{ + del_timer_sync(&gpu->hangcheck_timer); + cancel_work_sync(&gpu->recover_work); +} + /* * event management: */ @@ -934,9 +940,7 @@ static void etnaviv_gpu_unbind(struct device *dev, struct device *master,
DBG("%s", dev_name(gpu->dev));
- /* Safely take down hangcheck */ - del_timer_sync(&gpu->hangcheck_timer); - cancel_work_sync(&gpu->recover_work); + hangcheck_disable(gpu);
WARN_ON(!list_empty(&gpu->active_list));
From: Russell King rmk+kernel@arm.linux.org.uk
If we queue up a large command buffer (32K) containing about 164 1080p blit operations, it can take the GPU several seconds to complete before raising the next event. Our existing hangcheck code decides after a second that the GPU is stuck, and provokes a retirement of the events.
This can lead to errors as the GPU isn't stuck - we could end up overwriting the buffers which the GPU is currently executing.
Resolve this by also checking the current DMA address register, and monitoring it for progress. We have to be careful here, because if we get stuck in a WAIT LINK, the DMA address will still change, by up to 16 bytes inclusive, while the GPU spins in the loop - even though we may not have received the last event from the GPU (eg, because the PE is busy).
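The progress heuristic above can be sketched as a small userspace check (the helper name is ours, not the driver's):

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the hangcheck progress test: a front-end spinning in the
 * WAIT/LINK loop advances its DMA address by at most 16 bytes, so only
 * a backwards jump (a new command buffer) or a forward move of more
 * than 16 bytes counts as real progress. */
static bool dma_made_progress(uint32_t last, uint32_t now)
{
	int32_t change = (int32_t)(now - last);

	return change < 0 || change > 16;
}
```

This mirrors the signed `change` comparison in the patch below; a change of exactly 16 bytes is deliberately treated as "no progress", since that is what the WAIT/LINK loop itself produces.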
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gpu.c | 19 +++++++++++++++---- drivers/staging/etnaviv/etnaviv_gpu.h | 1 + 2 files changed, 16 insertions(+), 4 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 24ed14804ebd..cd308976dec9 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -689,13 +689,24 @@ static void hangcheck_handler(unsigned long data) struct drm_device *dev = gpu->drm; struct etnaviv_drm_private *priv = dev->dev_private; uint32_t fence = gpu->retired_fence; + bool progress = false;
if (fence != gpu->hangcheck_fence) { - /* some progress has been made.. ya! */ - gpu->hangcheck_fence = fence; - } else if (fence_after(gpu->submitted_fence, fence)) { - /* no progress and not done.. hung! */ gpu->hangcheck_fence = fence; + progress = true; + } + + if (!progress) { + uint32_t dma_addr = gpu_read(gpu, VIVS_FE_DMA_ADDRESS); + int change = dma_addr - gpu->hangcheck_dma_addr; + + if (change < 0 || change > 16) { + gpu->hangcheck_dma_addr = dma_addr; + progress = true; + } + } + + if (!progress && fence_after(gpu->submitted_fence, fence)) { dev_err(gpu->dev, "hangcheck detected gpu lockup!\n"); dev_err(gpu->dev, " completed fence: %u\n", fence); dev_err(gpu->dev, " submitted fence: %u\n", diff --git a/drivers/staging/etnaviv/etnaviv_gpu.h b/drivers/staging/etnaviv/etnaviv_gpu.h index 885eddf9fb1c..59dc9c1a048f 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.h +++ b/drivers/staging/etnaviv/etnaviv_gpu.h @@ -123,6 +123,7 @@ struct etnaviv_gpu { #define DRM_MSM_HANGCHECK_JIFFIES msecs_to_jiffies(DRM_MSM_HANGCHECK_PERIOD) struct timer_list hangcheck_timer; uint32_t hangcheck_fence; + uint32_t hangcheck_dma_addr; struct work_struct recover_work; };
From: Russell King rmk+kernel@arm.linux.org.uk
If we queue up multiple buffers, each with their own event, where the first buffer takes a while to execute, but subsequent buffers do not, we can end up receiving multiple events simultaneously. (eg, 0, 1, 2).
In this case, we only look at event 2, which updates the last fence, and then free event 2, leaving events 0 and 1 still allocated. If this is allowed to continue, eventually we consume all events, and we have no further way to progress.
However, we have to bear in mind that we could end up with events in other orders. For example, we could have three buffers committed at different times:
- buffer 0 is committed, getting event 0.
- buffer 1 is committed, getting event 1.
- buffer 0 completes, signalling event 0.
- we process event 0, and free it.
- buffer 2 is committed, is small, getting event 0.
- buffer 1 completes, signalling event 1.
- buffer 2 completes, signalling event 0 as well.
- we process both event 0 and event 1.

We must note that the fence from event 0 completed, and must not overwrite it with the fence from event 1.
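The fix below walks every pending bit with ffs() instead of handling only the highest event via __fls(). A rough stand-in for that loop (the encoding of the result is purely for illustration, not driver code):

```c
#include <stdint.h>
#include <strings.h>	/* ffs() */

/* Process every pending event, lowest bit first, so none are leaked.
 * Returns the processing order encoded as decimal digits of (event + 1),
 * e.g. events 0 then 2 => 13; in the real handler each iteration would
 * update the retired fence and free the event slot instead. */
static int process_events(uint32_t intr)
{
	int event, order = 0;

	while ((event = ffs(intr)) != 0) {
		event -= 1;		/* ffs() is 1-based */
		intr &= ~(1u << event);
		order = order * 10 + (event + 1);
	}

	return order;
}
```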
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gpu.c | 31 +++++++++++++++++++++++++------ 1 file changed, 25 insertions(+), 6 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index cd308976dec9..4cd84740eac8 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -891,20 +891,39 @@ static irqreturn_t irq_handler(int irq, void *data) u32 intr = gpu_read(gpu, VIVS_HI_INTR_ACKNOWLEDGE);
if (intr != 0) { + int event; + dev_dbg(gpu->dev, "intr 0x%08x\n", intr);
- if (intr & VIVS_HI_INTR_ACKNOWLEDGE_AXI_BUS_ERROR) + if (intr & VIVS_HI_INTR_ACKNOWLEDGE_AXI_BUS_ERROR) { dev_err(gpu->dev, "AXI bus error\n"); - else { - uint8_t event = __fls(intr); + intr &= ~VIVS_HI_INTR_ACKNOWLEDGE_AXI_BUS_ERROR; + } + + while ((event = ffs(intr)) != 0) { + event -= 1; + + intr &= ~(1 << event);
dev_dbg(gpu->dev, "event %u\n", event); - gpu->retired_fence = gpu->event[event].fence; - gpu->last_ring_pos = gpu->event[event].ring_pos; + /* + * Events can be processed out of order. Eg, + * - allocate and queue event 0 + * - allocate event 1 + * - event 0 completes, we process it + * - allocate and queue event 0 + * - event 1 and event 0 complete + * we can end up processing event 0 first, then 1. + */ + if (fence_after(gpu->event[event].fence, gpu->retired_fence)) { + gpu->retired_fence = gpu->event[event].fence; + gpu->last_ring_pos = gpu->event[event].ring_pos; + } event_free(gpu, event); - etnaviv_gpu_retire(gpu); }
+ etnaviv_gpu_retire(gpu); + ret = IRQ_HANDLED; }
From: Russell King rmk+kernel@arm.linux.org.uk
Rather than waiting indefinitely for the GPU to reset, bound the wait, and if the reset doesn't appear to be successful, bail out and report why we failed.
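The shape of the change is a bounded retry loop that returns an error instead of spinning forever. A minimal sketch, with the jiffies deadline replaced by an attempt counter so it can run in userspace (all helper names here are illustrative stand-ins, not driver code):

```c
#include <errno.h>
#include <stdbool.h>

/* Bounded-retry version of a reset loop: keep trying until the budget
 * is exhausted, then report -EBUSY rather than hanging the caller. */
static int reset_with_timeout(bool (*try_reset)(void), int max_attempts)
{
	while (max_attempts--)
		if (try_reset())
			return 0;	/* GPU went idle: reset succeeded */

	return -EBUSY;			/* report failure instead of spinning */
}

static int attempts;

static bool succeeds_on_third_try(void)
{
	return ++attempts >= 3;
}

static bool never_succeeds(void)
{
	return false;
}
```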
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gpu.c | 32 +++++++++++++++++++++++++++----- 1 file changed, 27 insertions(+), 5 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 4cd84740eac8..f2ce3c71e583 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -287,9 +287,11 @@ static void etnaviv_hw_identify(struct etnaviv_gpu *gpu) etnaviv_hw_specs(gpu); }
-static void etnaviv_hw_reset(struct etnaviv_gpu *gpu) +static int etnaviv_hw_reset(struct etnaviv_gpu *gpu) { u32 control, idle; + unsigned long timeout; + bool failed = true;
/* TODO * @@ -298,7 +300,10 @@ static void etnaviv_hw_reset(struct etnaviv_gpu *gpu) * - what about VG? */
- while (true) { + /* We hope that the GPU resets in under one second */ + timeout = jiffies + msecs_to_jiffies(1000); + + while (time_is_after_jiffies(timeout)) { control = VIVS_HI_CLOCK_CONTROL_DISABLE_DEBUG_REGISTERS | VIVS_HI_CLOCK_CONTROL_FSCALE_VAL(0x40);
@@ -342,15 +347,28 @@ static void etnaviv_hw_reset(struct etnaviv_gpu *gpu) control = gpu_read(gpu, VIVS_HI_CLOCK_CONTROL);
/* is the GPU idle? */ - if (((control & VIVS_HI_CLOCK_CONTROL_IDLE_3D) == 0) - || ((control & VIVS_HI_CLOCK_CONTROL_IDLE_2D) == 0)) { + if (((control & VIVS_HI_CLOCK_CONTROL_IDLE_3D) == 0) || + ((control & VIVS_HI_CLOCK_CONTROL_IDLE_2D) == 0)) { dev_dbg(gpu->dev, "GPU is not idle\n"); continue; }
+ failed = false; break; }
+ if (failed) { + idle = gpu_read(gpu, VIVS_HI_IDLE_STATE); + control = gpu_read(gpu, VIVS_HI_CLOCK_CONTROL); + + dev_err(gpu->dev, "GPU failed to reset: FE %sidle, 3D %sidle, 2D %sidle\n", + idle & VIVS_HI_IDLE_STATE_FE ? "" : "not ", + control & VIVS_HI_CLOCK_CONTROL_IDLE_3D ? "" : "not ", + control & VIVS_HI_CLOCK_CONTROL_IDLE_2D ? "" : "not "); + + return -EBUSY; + } + /* We rely on the GPU running, so program the clock */ control = VIVS_HI_CLOCK_CONTROL_DISABLE_DEBUG_REGISTERS | VIVS_HI_CLOCK_CONTROL_FSCALE_VAL(0x40); @@ -359,6 +377,8 @@ static void etnaviv_hw_reset(struct etnaviv_gpu *gpu) gpu_write(gpu, VIVS_HI_CLOCK_CONTROL, control | VIVS_HI_CLOCK_CONTROL_FSCALE_CMD_LOAD); gpu_write(gpu, VIVS_HI_CLOCK_CONTROL, control); + + return 0; }
int etnaviv_gpu_init(struct etnaviv_gpu *gpu) @@ -369,7 +389,9 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu) bool mmuv2;
etnaviv_hw_identify(gpu); - etnaviv_hw_reset(gpu); + ret = etnaviv_hw_reset(gpu); + if (ret) + return ret;
/* set base addresses */ gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_RA, 0x0);
From: Russell King rmk+kernel@arm.linux.org.uk
Add the workarounds from the GALCORE code (and confirmed to be required) for the GC320 2D core on the i.MX6 to etnaviv.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gpu.c | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index f2ce3c71e583..92a28f11bab6 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -393,6 +393,30 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu) if (ret) return ret;
+ if (gpu->identity.model == chipModel_GC320 && + gpu_read(gpu, VIVS_HI_CHIP_TIME) != 0x2062400 && + (gpu->identity.revision == 0x5007 || + gpu->identity.revision == 0x5220)) { + u32 mc_memory_debug; + + mc_memory_debug = gpu_read(gpu, VIVS_MC_DEBUG_MEMORY) & ~0xff; + + if (gpu->identity.revision == 0x5007) + mc_memory_debug |= 0x0c; + else + mc_memory_debug |= 0x08; + + gpu_write(gpu, VIVS_MC_DEBUG_MEMORY, mc_memory_debug); + } + + /* + * Update GPU AXI cache attribute to "cacheable, no allocate". + * This is necessary to prevent the iMX6 SoC locking up. + */ + gpu_write(gpu, VIVS_HI_AXI_CONFIG, + VIVS_HI_AXI_CONFIG_AWCACHE(2) | + VIVS_HI_AXI_CONFIG_ARCACHE(2)); + /* set base addresses */ gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_RA, 0x0); gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_FE, 0x0);
From: Russell King rmk+kernel@arm.linux.org.uk
Increase the iommu page table size to 512KiB so we can map a maximum of 512MiB of memory. GC600 (at least) seems happy with this change.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_iommu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/staging/etnaviv/etnaviv_iommu.c b/drivers/staging/etnaviv/etnaviv_iommu.c index 89bc2ffadf86..327ca703bc65 100644 --- a/drivers/staging/etnaviv/etnaviv_iommu.c +++ b/drivers/staging/etnaviv/etnaviv_iommu.c @@ -25,7 +25,7 @@ #include "etnaviv_iommu.h" #include "state_hi.xml.h"
-#define PT_SIZE SZ_256K +#define PT_SIZE SZ_512K #define PT_ENTRIES (PT_SIZE / sizeof(uint32_t))
#define GPU_MEM_START 0x80000000
From: Russell King rmk+kernel@arm.linux.org.uk
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_drv.c | 50 ++++++++++++++++++++--------------- drivers/staging/etnaviv/etnaviv_gpu.c | 18 ++++++++++--- 2 files changed, 42 insertions(+), 26 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index 3dba228265ea..568615154845 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -566,24 +566,6 @@ static int etnaviv_compare(struct device *dev, void *data) return dev->of_node == np; }
-static int etnaviv_add_components(struct device *master, struct master *m) -{ - struct device_node *child_np; - int ret = 0; - - for_each_available_child_of_node(master->of_node, child_np) { - DRM_INFO("add child %s\n", child_np->name); - - ret = component_master_add_child(m, etnaviv_compare, child_np); - if (ret) { - of_node_put(child_np); - break; - } - } - - return ret; -} - static int etnaviv_bind(struct device *dev) { return drm_platform_init(&etnaviv_drm_driver, to_platform_device(dev)); @@ -595,21 +577,43 @@ static void etnaviv_unbind(struct device *dev) }
static const struct component_master_ops etnaviv_master_ops = { - .add_components = etnaviv_add_components, .bind = etnaviv_bind, .unbind = etnaviv_unbind, };
+static int compare_str(struct device *dev, void *data) +{ + return !strcmp(dev_name(dev), data); +} + static int etnaviv_pdev_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; struct device_node *node = dev->of_node; - - of_platform_populate(node, NULL, NULL, dev); + struct component_match *match = NULL;
dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
- return component_master_add(&pdev->dev, &etnaviv_master_ops); + if (node) { + struct device_node *child_np; + + of_platform_populate(node, NULL, NULL, dev); + + for_each_available_child_of_node(node, child_np) { + DRM_INFO("add child %s\n", child_np->name); + + component_match_add(dev, &match, etnaviv_compare, + child_np); + } + } else if (dev->platform_data) { + char **names = dev->platform_data; + unsigned i; + + for (i = 0; names[i]; i++) + component_match_add(dev, &match, compare_str, names[i]); + } + + return component_master_add_with_match(dev, &etnaviv_master_ops, match); }
static int etnaviv_pdev_remove(struct platform_device *pdev) @@ -661,3 +665,5 @@ module_exit(etnaviv_exit); MODULE_AUTHOR("Rob Clark <robdclark@gmail.com"); MODULE_DESCRIPTION("etnaviv DRM Driver"); MODULE_LICENSE("GPL"); +MODULE_ALIAS("platform:vivante"); +MODULE_DEVICE_TABLE(of, dt_match); diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 92a28f11bab6..30abf443f2c9 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -26,6 +26,10 @@ #include "state_hi.xml.h" #include "cmdstream.xml.h"
+static const struct platform_device_id gpu_ids[] = { + { .name = "etnaviv-gpu,2d", .driver_data = ETNA_PIPE_2D, }, + { }, +};
/* * Driver functions: @@ -1059,9 +1063,16 @@ static int etnaviv_gpu_platform_probe(struct platform_device *pdev) if (!gpu) return -ENOMEM;
- match = of_match_device(etnaviv_gpu_match, &pdev->dev); - if (!match) + if (pdev->dev.of_node) { + match = of_match_device(etnaviv_gpu_match, &pdev->dev); + if (!match) + return -EINVAL; + gpu->pipe = (int)match->data; + } else if (pdev->id_entry) { + gpu->pipe = pdev->id_entry->driver_data; + } else { return -EINVAL; + }
gpu->dev = &pdev->dev;
@@ -1101,8 +1112,6 @@ static int etnaviv_gpu_platform_probe(struct platform_device *pdev) if (IS_ERR(gpu->clk_shader)) gpu->clk_shader = NULL;
- gpu->pipe = (int)match->data; - /* TODO: figure out max mapped size */ dev_set_drvdata(dev, gpu);
@@ -1132,4 +1141,5 @@ struct platform_driver etnaviv_gpu_driver = { }, .probe = etnaviv_gpu_platform_probe, .remove = etnaviv_gpu_platform_remove, + .id_table = gpu_ids, };
From: Russell King rmk+kernel@arm.linux.org.uk
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_drv.c | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index 568615154845..77e05b80f18d 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -20,6 +20,7 @@
#include "etnaviv_drv.h" #include "etnaviv_gpu.h" +#include "etnaviv_mmu.h"
void etnaviv_register_mmu(struct drm_device *dev, struct etnaviv_iommu *mmu) { @@ -239,6 +240,23 @@ static int etnaviv_mm_show(struct drm_device *dev, struct seq_file *m) return drm_mm_dump_table(m, &dev->vma_offset_manager->vm_addr_space_mm); }
+static int etnaviv_mmu_show(struct drm_device *dev, struct seq_file *m) +{ + struct etnaviv_drm_private *priv = dev->dev_private; + struct etnaviv_gpu *gpu; + unsigned int i; + + for (i = 0; i < ETNA_MAX_PIPES; i++) { + gpu = priv->gpu[i]; + if (gpu) { + seq_printf(m, "Active Objects (%s):\n", + dev_name(gpu->dev)); + drm_mm_dump_table(m, &gpu->mmu->mm); + } + } + return 0; +} + static int show_locked(struct seq_file *m, void *arg) { struct drm_info_node *node = (struct drm_info_node *) m->private; @@ -262,6 +280,7 @@ static struct drm_info_list ETNAVIV_debugfs_list[] = { {"gpu", show_locked, 0, etnaviv_gpu_show}, {"gem", show_locked, 0, etnaviv_gem_show}, { "mm", show_locked, 0, etnaviv_mm_show }, + {"mmu", show_locked, 0, etnaviv_mmu_show}, };
static int etnaviv_debugfs_init(struct drm_minor *minor)
From: Russell King rmk+kernel@arm.linux.org.uk
Use the etnaviv definitions for feature constants, rather than BIT()s.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gpu.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 30abf443f2c9..0e230b220f15 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -253,7 +253,7 @@ static void etnaviv_hw_identify(struct etnaviv_gpu *gpu)
/* Disable fast clear on GC700. */ if (gpu->identity.model == 0x700) - gpu->identity.features &= ~BIT(0); + gpu->identity.features &= ~chipFeatures_FAST_CLEAR;
if ((gpu->identity.model == 0x500 && gpu->identity.revision < 2) || (gpu->identity.model == 0x300 && gpu->identity.revision < 0x2000)) { @@ -270,7 +270,8 @@ static void etnaviv_hw_identify(struct etnaviv_gpu *gpu) gpu->identity.minor_features0 = gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_0);
- if (gpu->identity.minor_features0 & BIT(21)) { + if (gpu->identity.minor_features0 & + chipMinorFeatures0_MORE_MINOR_FEATURES) { gpu->identity.minor_features1 = gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_1); gpu->identity.minor_features2 =
From: Russell King rmk+kernel@arm.linux.org.uk
Gem objects compared fences using simple <= tests. This is problematic when the fences wrap past (uint32_t)~0. Resolve this by using our wrap-safe helpers.
However, this is complicated by the use of '0' to indicate that the fence has not been set (eg, because we are not writing to an object). Carry the access flags into the object and use them when determining whether the object can be retired.
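The wrap-safe comparison the helpers rely on is the usual signed-difference trick. A reconstruction (the names match `fence_after()`/`fence_after_eq()` as used in the patch, but the bodies are our assumption, not copied from the driver):

```c
#include <stdbool.h>
#include <stdint.h>

/* Compare two 32-bit fence values by the sign of their difference, so
 * the ordering stays correct when the counter wraps past (uint32_t)~0.
 * Only valid while the two fences are within 2^31 of each other. */
static bool fence_after(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) > 0;
}

static bool fence_after_eq(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) >= 0;
}
```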
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_drv.h | 2 +- drivers/staging/etnaviv/etnaviv_gem.c | 11 +++++++---- drivers/staging/etnaviv/etnaviv_gem.h | 1 + drivers/staging/etnaviv/etnaviv_gpu.c | 16 ++++++++-------- 4 files changed, 17 insertions(+), 13 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.h b/drivers/staging/etnaviv/etnaviv_drv.h index 59aa4666d2cc..47aa74b36235 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.h +++ b/drivers/staging/etnaviv/etnaviv_drv.h @@ -97,7 +97,7 @@ void *etnaviv_gem_vaddr_locked(struct drm_gem_object *obj); void *msm_gem_vaddr(struct drm_gem_object *obj); dma_addr_t etnaviv_gem_paddr_locked(struct drm_gem_object *obj); void etnaviv_gem_move_to_active(struct drm_gem_object *obj, - struct etnaviv_gpu *gpu, bool write, uint32_t fence); + struct etnaviv_gpu *gpu, uint32_t access, uint32_t fence); void etnaviv_gem_move_to_inactive(struct drm_gem_object *obj); int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, struct timespec *timeout); diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 4e28c57b2409..2f2bf5619ffd 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -405,16 +405,18 @@ dma_addr_t etnaviv_gem_paddr_locked(struct drm_gem_object *obj) }
void etnaviv_gem_move_to_active(struct drm_gem_object *obj, - struct etnaviv_gpu *gpu, bool write, uint32_t fence) + struct etnaviv_gpu *gpu, uint32_t access, uint32_t fence) { struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
etnaviv_obj->gpu = gpu;
- if (write) - etnaviv_obj->write_fence = fence; - else + if (access & ETNA_SUBMIT_BO_READ) etnaviv_obj->read_fence = fence; + if (access & ETNA_SUBMIT_BO_WRITE) + etnaviv_obj->write_fence = fence; + + etnaviv_obj->access |= access;
list_del_init(&etnaviv_obj->mm_list); list_add_tail(&etnaviv_obj->mm_list, &gpu->active_list); @@ -431,6 +433,7 @@ void etnaviv_gem_move_to_inactive(struct drm_gem_object *obj) etnaviv_obj->gpu = NULL; etnaviv_obj->read_fence = 0; etnaviv_obj->write_fence = 0; + etnaviv_obj->access = 0; list_del_init(&etnaviv_obj->mm_list); list_add_tail(&etnaviv_obj->mm_list, &priv->inactive_list); } diff --git a/drivers/staging/etnaviv/etnaviv_gem.h b/drivers/staging/etnaviv/etnaviv_gem.h index add616338a9f..7844c073ee61 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.h +++ b/drivers/staging/etnaviv/etnaviv_gem.h @@ -46,6 +46,7 @@ struct etnaviv_gem_object { */ struct list_head mm_list; struct etnaviv_gpu *gpu; /* non-null if active */ + uint32_t access; uint32_t read_fence, write_fence;
/* Transiently in the process of submit ioctl, objects associated diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 0e230b220f15..4b3d8a004374 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -845,8 +845,10 @@ static void retire_worker(struct work_struct *work) obj = list_first_entry(&gpu->active_list, struct etnaviv_gem_object, mm_list);
- if ((obj->read_fence <= fence) && - (obj->write_fence <= fence)) { + if ((!(obj->access & ETNA_SUBMIT_BO_READ) || + fence_after_eq(fence, obj->read_fence)) && + (!(obj->access & ETNA_SUBMIT_BO_WRITE) || + fence_after_eq(fence, obj->write_fence))) { /* move to inactive: */ etnaviv_gem_move_to_inactive(&obj->base); etnaviv_gem_put_iova(&obj->base); @@ -917,13 +919,11 @@ int etnaviv_gpu_submit(struct etnaviv_gpu *gpu, &iova); }
- if (submit->bos[i].flags & ETNA_SUBMIT_BO_READ) + if (submit->bos[i].flags & (ETNA_SUBMIT_BO_READ | + ETNA_SUBMIT_BO_WRITE)) etnaviv_gem_move_to_active(&etnaviv_obj->base, gpu, - false, submit->fence); - - if (submit->bos[i].flags & ETNA_SUBMIT_BO_WRITE) - etnaviv_gem_move_to_active(&etnaviv_obj->base, gpu, - true, submit->fence); + submit->bos[i].flags, + submit->fence); } hangcheck_timer_reset(gpu);
From: Russell King rmk+kernel@arm.linux.org.uk
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem.c | 89 ++++++++++++++++++++--------------- 1 file changed, 52 insertions(+), 37 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 2f2bf5619ffd..8eeafcafb4e9 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -23,6 +23,55 @@ #include "etnaviv_gpu.h" #include "etnaviv_mmu.h"
+static void etnaviv_gem_scatter_map(struct etnaviv_gem_object *etnaviv_obj) +{ + struct drm_device *dev = etnaviv_obj->base.dev; + + /* + * For non-cached buffers, ensure the new pages are clean + * because display controller, GPU, etc. are not coherent. + */ + if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_CACHED)) { + dma_map_sg(dev->dev, etnaviv_obj->sgt->sgl, + etnaviv_obj->sgt->nents, DMA_BIDIRECTIONAL); + } else { + struct scatterlist *sg; + unsigned int i; + + for_each_sg(sgt->sgl, sg, sgt->nents, i) { + sg_dma_address(sg) = sg_phys(sg); +#ifdef CONFIG_NEED_SG_DMA_LENGTH + sg_dma_len(sg) = sg->length; +#endif + } + } +} + +static void etnaviv_gem_scatterlist_unmap(struct etnaviv_gem_object *etnaviv_obj) +{ + struct drm_device *dev = etnaviv_obj->base.dev; + + /* + * For non-cached buffers, ensure the new pages are clean + * because display controller, GPU, etc. are not coherent: + * + * WARNING: The DMA API does not support concurrent CPU + * and device access to the memory area. With BIDIRECTIONAL, + * we will clean the cache lines which overlap the region, + * and invalidate all cache lines (partially) contained in + * the region. + * + * If you have dirty data in the overlapping cache lines, + * that will corrupt the GPU-written data. If you have + * written into the remainder of the region, this can + * discard those writes. + */ + if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_CACHED)) + dma_unmap_sg(dev->dev, etnaviv_obj->sgt->sgl, + etnaviv_obj->sgt->nents, + DMA_BIDIRECTIONAL); +} + /* called with dev->struct_mutex held */ static int etnaviv_gem_shmem_get_pages(struct etnaviv_gem_object *etnaviv_obj) { @@ -42,27 +91,7 @@ static int etnaviv_gem_shmem_get_pages(struct etnaviv_gem_object *etnaviv_obj) static void put_pages(struct etnaviv_gem_object *etnaviv_obj) { if (etnaviv_obj->sgt) { - struct drm_device *dev = etnaviv_obj->base.dev; - - /* - * For non-cached buffers, ensure the new pages are clean - * because display controller, GPU, etc. 
are not coherent: - * - * WARNING: The DMA API does not support concurrent CPU - * and device access to the memory area. With BIDIRECTIONAL, - * we will clean the cache lines which overlap the region, - * and invalidate all cache lines (partially) contained in - * the region. - * - * If you have dirty data in the overlapping cache lines, - * that will corrupt the GPU-written data. If you have - * written into the remainder of the region, this can - * discard those writes. - */ - if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_CACHED)) - dma_unmap_sg(dev->dev, etnaviv_obj->sgt->sgl, - etnaviv_obj->sgt->nents, - DMA_BIDIRECTIONAL); + etnaviv_gem_scatterlist_unmap(etnaviv_obj); sg_free_table(etnaviv_obj->sgt); kfree(etnaviv_obj->sgt); etnaviv_obj->sgt = NULL; @@ -99,13 +128,7 @@ struct page **etnaviv_gem_get_pages(struct etnaviv_gem_object *etnaviv_obj)
etnaviv_obj->sgt = sgt;
- /* - * For non-cached buffers, ensure the new pages are clean - * because display controller, GPU, etc. are not coherent. - */ - if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_CACHED)) - dma_map_sg(dev->dev, etnaviv_obj->sgt->sgl, - etnaviv_obj->sgt->nents, DMA_BIDIRECTIONAL); + etnaviv_gem_scatter_map(etnaviv_obj); }
return etnaviv_obj->pages; @@ -836,15 +859,7 @@ static int etnaviv_gem_userptr_get_pages(struct etnaviv_gem_object *etnaviv_obj) static void etnaviv_gem_userptr_release(struct etnaviv_gem_object *etnaviv_obj) { if (etnaviv_obj->sgt) { - /* - * For non-cached buffers, ensure the new pages are clean - * because display controller, GPU, etc. are not coherent: - */ - if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_CACHED)) - dma_unmap_sg(etnaviv_obj->base.dev->dev, - etnaviv_obj->sgt->sgl, - etnaviv_obj->sgt->nents, - DMA_BIDIRECTIONAL); + etnaviv_gem_scatterlist_unmap(etnaviv_obj); sg_free_table(etnaviv_obj->sgt); kfree(etnaviv_obj->sgt); }
From: Russell King rmk+kernel@arm.linux.org.uk
We never pass the GPU addresses of BOs to userspace, so userspace can never specify the correct address. Hence, this code serves no useful purpose, and can be removed.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem.h | 1 - drivers/staging/etnaviv/etnaviv_gem_submit.c | 36 +++++----------------------- 2 files changed, 6 insertions(+), 31 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.h b/drivers/staging/etnaviv/etnaviv_gem.h index 7844c073ee61..cfade337d4db 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.h +++ b/drivers/staging/etnaviv/etnaviv_gem.h @@ -100,7 +100,6 @@ struct etnaviv_gem_submit { struct list_head bo_list; struct ww_acquire_ctx ticket; uint32_t fence; - bool valid; unsigned int nr_cmds; unsigned int nr_bos; struct { diff --git a/drivers/staging/etnaviv/etnaviv_gem_submit.c b/drivers/staging/etnaviv/etnaviv_gem_submit.c index bbe2171b8eb4..c32fb4424eea 100644 --- a/drivers/staging/etnaviv/etnaviv_gem_submit.c +++ b/drivers/staging/etnaviv/etnaviv_gem_submit.c @@ -25,7 +25,6 @@
#define BO_INVALID_FLAGS ~(ETNA_SUBMIT_BO_READ | ETNA_SUBMIT_BO_WRITE) /* make sure these don't conflict w/ MSM_SUBMIT_BO_x */ -#define BO_VALID 0x8000 #define BO_LOCKED 0x4000 #define BO_PINNED 0x2000
@@ -84,8 +83,6 @@ static int submit_lookup_objects(struct etnaviv_gem_submit *submit, }
submit->bos[i].flags = submit_bo.flags; - /* in validate_objects() we figure out if this is true: */ - submit->bos[i].iova = submit_bo.presumed;
/* normally use drm_gem_object_lookup(), but for bulk lookup * all under single table_lock just hit object_idr directly: @@ -131,9 +128,7 @@ static void submit_unlock_unpin_bo(struct etnaviv_gem_submit *submit, int i) if (submit->bos[i].flags & BO_LOCKED) ww_mutex_unlock(&etnaviv_obj->resv->lock);
- if (!(submit->bos[i].flags & BO_VALID)) - submit->bos[i].iova = 0; - + submit->bos[i].iova = 0; submit->bos[i].flags &= ~(BO_LOCKED | BO_PINNED); }
@@ -143,8 +138,6 @@ static int submit_validate_objects(struct etnaviv_gem_submit *submit) int contended, slow_locked = -1, i, ret = 0;
retry: - submit->valid = true; - for (i = 0; i < submit->nr_bos; i++) { struct etnaviv_gem_object *etnaviv_obj = submit->bos[i].obj; uint32_t iova; @@ -177,14 +170,7 @@ retry: goto fail;
submit->bos[i].flags |= BO_PINNED; - - if (iova == submit->bos[i].iova) { - submit->bos[i].flags |= BO_VALID; - } else { - submit->bos[i].iova = iova; - submit->bos[i].flags &= ~BO_VALID; - submit->valid = false; - } + submit->bos[i].iova = iova; }
ww_acquire_done(&submit->ticket); @@ -217,7 +203,7 @@ fail: }
static int submit_bo(struct etnaviv_gem_submit *submit, uint32_t idx, - struct etnaviv_gem_object **obj, uint32_t *iova, bool *valid) + struct etnaviv_gem_object **obj, uint32_t *iova) { if (idx >= submit->nr_bos) { DRM_ERROR("invalid buffer index: %u (out of %u)\n", @@ -229,8 +215,6 @@ static int submit_bo(struct etnaviv_gem_submit *submit, uint32_t idx, *obj = submit->bos[idx].obj; if (iova) *iova = submit->bos[idx].iova; - if (valid) - *valid = !!(submit->bos[idx].flags & BO_VALID);
return 0; } @@ -254,7 +238,6 @@ static int submit_reloc(struct etnaviv_gem_submit *submit, struct etnaviv_gem_ob void __user *userptr = to_user_ptr(relocs + (i * sizeof(submit_reloc))); uint32_t iova, off; - bool valid;
ret = copy_from_user(&submit_reloc, userptr, sizeof(submit_reloc)); @@ -276,14 +259,10 @@ static int submit_reloc(struct etnaviv_gem_submit *submit, struct etnaviv_gem_ob return -EINVAL; }
- ret = submit_bo(submit, submit_reloc.reloc_idx, &bobj, - &iova, &valid); + ret = submit_bo(submit, submit_reloc.reloc_idx, &bobj, &iova); if (ret) return ret;
- if (valid) - continue; - if (submit_reloc.reloc_offset >= bobj->base.size - sizeof(*ptr)) { DRM_ERROR("relocation %u outside object", i); @@ -371,8 +350,8 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, goto out; }
- ret = submit_bo(submit, submit_cmd.submit_idx, - &etnaviv_obj, NULL, NULL); + ret = submit_bo(submit, submit_cmd.submit_idx, &etnaviv_obj, + NULL); if (ret) goto out;
@@ -415,9 +394,6 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, submit->cmd[i].size = submit_cmd.size / 4; submit->cmd[i].obj = etnaviv_obj;
- if (submit->valid) - continue; - ret = submit_reloc(submit, etnaviv_obj, submit_cmd.submit_offset, submit_cmd.nr_relocs, submit_cmd.relocs);
From: Russell King rmk+kernel@arm.linux.org.uk
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_mmu.c | 11 ++++------- 1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_mmu.c b/drivers/staging/etnaviv/etnaviv_mmu.c index 5647768c2be4..0a2f9dd5bb7a 100644 --- a/drivers/staging/etnaviv/etnaviv_mmu.c +++ b/drivers/staging/etnaviv/etnaviv_mmu.c @@ -39,8 +39,8 @@ int etnaviv_iommu_map(struct etnaviv_iommu *iommu, uint32_t iova, return -EINVAL;
for_each_sg(sgt->sgl, sg, sgt->nents, i) { - u32 pa = sg_phys(sg) - sg->offset; - size_t bytes = sg->length + sg->offset; + u32 pa = sg_dma_address(sg) - sg->offset; + size_t bytes = sg_dma_len(sg) + sg->offset;
VERB("map[%d]: %08x %08x(%x)", i, iova, pa, bytes);
@@ -57,7 +57,7 @@ fail: da = iova;
for_each_sg(sgt->sgl, sg, i, j) { - size_t bytes = sg->length + sg->offset; + size_t bytes = sg_dma_len(sg) + sg->offset;
iommu_unmap(domain, da, bytes); da += bytes; @@ -74,7 +74,7 @@ int etnaviv_iommu_unmap(struct etnaviv_iommu *iommu, uint32_t iova, int i;
for_each_sg(sgt->sgl, sg, sgt->nents, i) { - size_t bytes = sg->length + sg->offset; + size_t bytes = sg_dma_len(sg) + sg->offset; size_t unmapped;
unmapped = iommu_unmap(domain, da, bytes); @@ -104,9 +104,6 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, uint32_t iova;
iova = sg_dma_address(sgt->sgl); - if (!iova) - iova = sg_phys(sgt->sgl) - sgt->sgl->offset; - if (iova < 0x80000000 - sg_dma_len(sgt->sgl)) { etnaviv_obj->iova = iova; return 0;
From: Russell King rmk+kernel@arm.linux.org.uk
The DMA API usage by etnaviv is really not up to scratch. It does not respect the buffer ownership rules, which are vitally necessary when using DMA_BIDIRECTIONAL: mapping /only/ cleans the cache lines, causing dirty data to be written back to RAM, and an unmap /only/ invalidates them, causing any data in the cache to be discarded. Given the length of time these objects remain mapped, we can't hold them in the mapped state while hoping that no other CPU accesses to the buffer occur.
This has led to visible pixmap corruption in the X server. Work around this by mapping and then immediately unmapping the buffers: this causes data in the buffers to be written back to RAM and then purged from the caches.
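The ownership rules at play can be shown with a toy user-space model (this is purely illustrative, not the kernel DMA API: `struct line` and the three helpers are invented here). A BIDIRECTIONAL map cleans (writes back) a dirty cache line, an unmap invalidates (discards) it, so mapping and immediately unmapping pushes CPU writes out to RAM and leaves nothing stale in the cache:

```c
/* Toy model of one cache line under the ARM streaming-DMA behaviour
 * described above. Not kernel code; names are invented for illustration. */
struct line {
	int ram;	/* value currently in RAM */
	int cached;	/* value in the cache, -1 = line not present */
	int dirty;	/* cache line modified but not written back */
};

/* CPU write lands in the cache and marks the line dirty. */
static void cpu_write(struct line *l, int v)
{
	l->cached = v;
	l->dirty = 1;
}

/* dma_map_sg(..., DMA_BIDIRECTIONAL) analogue: clean (write back). */
static void map_bidir(struct line *l)
{
	if (l->dirty) {
		l->ram = l->cached;
		l->dirty = 0;
	}
}

/* dma_unmap_sg(..., DMA_BIDIRECTIONAL) analogue: invalidate (discard). */
static void unmap_bidir(struct line *l)
{
	l->cached = -1;
	l->dirty = 0;
}
```

After `cpu_write()` followed by `map_bidir()` and `unmap_bidir()`, the written value is in RAM and the cache is empty, which is exactly the state the GPU needs before it touches the buffer.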
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gem.c | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 8eeafcafb4e9..56e4ff5dd048 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -26,14 +26,15 @@ static void etnaviv_gem_scatter_map(struct etnaviv_gem_object *etnaviv_obj) { struct drm_device *dev = etnaviv_obj->base.dev; + struct sg_table *sgt = etnaviv_obj->sgt;
/* * For non-cached buffers, ensure the new pages are clean * because display controller, GPU, etc. are not coherent. */ if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_CACHED)) { - dma_map_sg(dev->dev, etnaviv_obj->sgt->sgl, - etnaviv_obj->sgt->nents, DMA_BIDIRECTIONAL); + dma_map_sg(dev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL); + dma_unmap_sg(dev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL); } else { struct scatterlist *sg; unsigned int i; @@ -50,6 +51,7 @@ static void etnaviv_gem_scatter_map(struct etnaviv_gem_object *etnaviv_obj) static void etnaviv_gem_scatterlist_unmap(struct etnaviv_gem_object *etnaviv_obj) { struct drm_device *dev = etnaviv_obj->base.dev; + struct sg_table *sgt = etnaviv_obj->sgt;
/* * For non-cached buffers, ensure the new pages are clean @@ -66,10 +68,10 @@ static void etnaviv_gem_scatterlist_unmap(struct etnaviv_gem_object *etnaviv_obj * written into the remainder of the region, this can * discard those writes. */ - if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_CACHED)) - dma_unmap_sg(dev->dev, etnaviv_obj->sgt->sgl, - etnaviv_obj->sgt->nents, - DMA_BIDIRECTIONAL); + if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_CACHED)) { + dma_map_sg(dev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL); + dma_unmap_sg(dev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL); + } }
/* called with dev->struct_mutex held */
From: Russell King rmk+kernel@arm.linux.org.uk
Add a poisoned bad page to map unused MMU entries to. This gives us a certain amount of protection against bad command streams causing us to hit regions of memory we really shouldn't be touching.
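The poison fill itself is simple; the patch below writes the marker 0xdead55aa into every word of a 4K page allocated with dma_alloc_coherent(). As a user-space sketch of just the fill step (malloc stands in for the DMA allocation):

```c
#include <stdint.h>
#include <stdlib.h>

#define PAGE_WORDS (4096 / 4)

/* Fill a spare page with the 0xdead55aa marker used by the patch, so a
 * stray GPU access through an unused MMU entry hits a value that is
 * recognisable in register/memory dumps instead of arbitrary memory. */
static uint32_t *alloc_bad_page(void)
{
	uint32_t *page = malloc(PAGE_WORDS * sizeof(*page));
	size_t i;

	if (page)
		for (i = 0; i < PAGE_WORDS; i++)
			page[i] = 0xdead55aa;
	return page;
}
```

Pointing every unused page-table entry at this one page (as the init loop in the patch does) means a rogue command stream reads poison rather than whatever happened to be at the stale physical address.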
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_iommu.c | 29 +++++++++++++++++++++++++++-- 1 file changed, 27 insertions(+), 2 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_iommu.c b/drivers/staging/etnaviv/etnaviv_iommu.c index 327ca703bc65..71f94dac650b 100644 --- a/drivers/staging/etnaviv/etnaviv_iommu.c +++ b/drivers/staging/etnaviv/etnaviv_iommu.c @@ -36,6 +36,8 @@ struct etnaviv_iommu_domain_pgtable { };
struct etnaviv_iommu_domain { + void *bad_page_cpu; + dma_addr_t bad_page_dma; struct etnaviv_iommu_domain_pgtable pgtable; spinlock_t map_lock; }; @@ -80,18 +82,38 @@ static void pgtable_write(struct etnaviv_iommu_domain_pgtable *pgtable, static int etnaviv_iommu_domain_init(struct iommu_domain *domain) { struct etnaviv_iommu_domain *etnaviv_domain; - int ret; + uint32_t iova, *p; + int ret, i;
etnaviv_domain = kmalloc(sizeof(*etnaviv_domain), GFP_KERNEL); if (!etnaviv_domain) return -ENOMEM;
+ etnaviv_domain->bad_page_cpu = dma_alloc_coherent(NULL, SZ_4K, + &etnaviv_domain->bad_page_dma, + GFP_KERNEL); + if (!etnaviv_domain->bad_page_cpu) { + kfree(etnaviv_domain); + return -ENOMEM; + } + p = etnaviv_domain->bad_page_cpu; + for (i = 0; i < SZ_4K / 4; i++) + *p++ = 0xdead55aa; + ret = pgtable_alloc(&etnaviv_domain->pgtable, PT_SIZE); if (ret < 0) { + dma_free_coherent(NULL, SZ_4K, etnaviv_domain->bad_page_cpu, + etnaviv_domain->bad_page_dma); kfree(etnaviv_domain); return ret; }
+ for (iova = domain->geometry.aperture_start; + iova < domain->geometry.aperture_end; iova += SZ_4K) { + pgtable_write(&etnaviv_domain->pgtable, iova, + etnaviv_domain->bad_page_dma); + } + spin_lock_init(&etnaviv_domain->map_lock); domain->priv = etnaviv_domain; return 0; @@ -103,6 +125,8 @@ static void etnaviv_iommu_domain_destroy(struct iommu_domain *domain)
pgtable_free(&etnaviv_domain->pgtable, PT_SIZE);
+ dma_free_coherent(NULL, SZ_4K, etnaviv_domain->bad_page_cpu, + etnaviv_domain->bad_page_dma); kfree(etnaviv_domain); domain->priv = NULL; } @@ -131,7 +155,8 @@ static size_t etnaviv_iommu_unmap(struct iommu_domain *domain, return -EINVAL;
spin_lock(&etnaviv_domain->map_lock); - pgtable_write(&etnaviv_domain->pgtable, iova, ~0); + pgtable_write(&etnaviv_domain->pgtable, iova, + etnaviv_domain->bad_page_dma); spin_unlock(&etnaviv_domain->map_lock);
return SZ_4K;
From: Russell King rmk+kernel@arm.linux.org.uk
Parse the submitted command buffer for allowable GPU commands, and validate that all commands fit wholly within the submitted buffer. We allow the following commands:
- load state - any of the draw commands - stall - nop
which denies attempts to link, call, return, etc. from the supplied command stream. This, at least, ensures that the GPU will reach the end of the submitted command set and return to our buffer.
Future validation of the load state commands will ensure that we prevent userspace providing physical addresses via the GPU command stream, with the possibility of later also validating that the boundaries of the drawing commands lie wholly within the requested buffers. For the time being, this functionality is disabled.
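The walk the parser performs can be sketched in user space as follows. The opcode lives in bits 31:27 of the first command word; the opcode numbers mirror the FE_OPCODE_* values from cmdstream.xml.h, but treat them — and the simplified LOAD_STATE count field in bits 25:16 — as illustrative rather than authoritative:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative front-end opcode values (cf. FE_OPCODE_* in cmdstream.xml.h). */
#define OP_LOAD_STATE 1
#define OP_NOP        3
#define OP_STALL      9

/* Walk a buffer of 32-bit command words: accept only known-safe opcodes
 * and require every command to fit wholly inside the buffer. */
static bool validate_stream(const uint32_t *buf, size_t words)
{
	size_t i = 0;

	while (i < words) {
		unsigned int op = buf[i] >> 27;	/* opcode in bits 31:27 */
		size_t len;

		switch (op) {
		case OP_LOAD_STATE:
			/* simplified count field: bits 25:16 */
			len = 1 + ((buf[i] >> 16) & 0x3ff);
			break;
		case OP_NOP:
		case OP_STALL:
			len = 2;
			break;
		default:
			return false;	/* LINK, CALL, RETURN, ... denied */
		}

		if (i + len > words)
			return false;	/* command overflows the buffer */

		i += (len + 1) & ~(size_t)1;	/* commands are 64-bit aligned */
	}
	return true;
}
```

A buffer ending in a LINK command is rejected outright, so the front end can never be redirected out of the submitted region.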
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/Makefile | 1 + drivers/staging/etnaviv/etnaviv_cmd_parser.c | 103 +++++++++++++++++++++++++++ drivers/staging/etnaviv/etnaviv_drv.h | 3 + drivers/staging/etnaviv/etnaviv_gem_submit.c | 7 ++ 4 files changed, 114 insertions(+) create mode 100644 drivers/staging/etnaviv/etnaviv_cmd_parser.c
diff --git a/drivers/staging/etnaviv/Makefile b/drivers/staging/etnaviv/Makefile index ef0cffabdcce..2b71c31b6501 100644 --- a/drivers/staging/etnaviv/Makefile +++ b/drivers/staging/etnaviv/Makefile @@ -4,6 +4,7 @@ ifeq (, $(findstring -W,$(EXTRA_CFLAGS))) endif
 etnaviv-y := \
+	etnaviv_cmd_parser.o \
 	etnaviv_drv.o \
 	etnaviv_gem.o \
 	etnaviv_gem_prime.o \
diff --git a/drivers/staging/etnaviv/etnaviv_cmd_parser.c b/drivers/staging/etnaviv/etnaviv_cmd_parser.c
new file mode 100644
index 000000000000..4cc6944e4a8f
--- /dev/null
+++ b/drivers/staging/etnaviv/etnaviv_cmd_parser.c
@@ -0,0 +1,103 @@
+#include <linux/kernel.h>
+
+#include "etnaviv_gem.h"
+#include "etnaviv_gpu.h"
+
+#include "cmdstream.xml.h"
+
+#define EXTRACT(val, field) (((val) & field##__MASK) >> field##__SHIFT)
+
+static bool etnaviv_validate_load_state(struct etnaviv_gpu *gpu, u32 *buf,
+	unsigned int state, unsigned int num)
+{
+	return true;
+	if (0x1200 - state < num * 4)
+		return false;
+	if (0x1228 - state < num * 4)
+		return false;
+	if (0x1238 - state < num * 4)
+		return false;
+	if (0x1284 - state < num * 4)
+		return false;
+	if (0x128c - state < num * 4)
+		return false;
+	if (0x1304 - state < num * 4)
+		return false;
+	if (0x1310 - state < num * 4)
+		return false;
+	if (0x1318 - state < num * 4)
+		return false;
+	if (0x1280c - state < num * 4 + 0x0c)
+		return false;
+	if (0x128ac - state < num * 4 + 0x0c)
+		return false;
+	if (0x128cc - state < num * 4 + 0x0c)
+		return false;
+	if (0x1297c - state < num * 4 + 0x0c)
+		return false;
+	return true;
+}
+
+bool etnaviv_cmd_validate_one(struct etnaviv_gpu *gpu,
+	struct etnaviv_gem_object *obj, unsigned int offset, unsigned int size)
+{
+	u32 *start = obj->vaddr + offset * 4;
+	u32 *buf = start;
+	u32 *end = buf + size;
+
+	while (buf < end) {
+		u32 cmd = *buf;
+		unsigned int len, n, off;
+		unsigned int op = cmd >> 27;
+
+		switch (op) {
+		case FE_OPCODE_LOAD_STATE:
+			n = EXTRACT(cmd, VIV_FE_LOAD_STATE_HEADER_COUNT);
+			len = 1 + n;
+			if (buf + len > end)
+				break;
+
+			off = EXTRACT(cmd, VIV_FE_LOAD_STATE_HEADER_OFFSET);
+			if (!etnaviv_validate_load_state(gpu, buf + 1,
+					off * 4, n)) {
+				dev_warn(gpu->dev, "%s: load state covers restricted state (0x%x-0x%x) at offset %u\n",
+					 __func__, off * 4, (off + n) * 4, buf - start);
+				return false;
+			}
+			break;
+
+		case FE_OPCODE_DRAW_2D:
+			n = EXTRACT(cmd, VIV_FE_DRAW_2D_HEADER_COUNT);
+			len = 2 + n * 2;
+			break;
+
+		case FE_OPCODE_DRAW_PRIMITIVES:
+			len = 4;
+			break;
+
+		case FE_OPCODE_DRAW_INDEXED_PRIMITIVES:
+			len = 6;
+			break;
+
+		case FE_OPCODE_NOP:
+		case FE_OPCODE_STALL:
+			len = 2;
+			break;
+
+		default:
+			dev_err(gpu->dev, "%s: op %u not permitted at offset %u\n",
+				__func__, op, buf - start);
+			return false;
+		}
+
+		buf += ALIGN(len, 2);
+	}
+
+	if (buf > end) {
+		dev_err(gpu->dev, "%s: commands overflow end of buffer: %u > %u\n",
+			__func__, buf - start, size);
+		return false;
+	}
+
+	return true;
+}
diff --git a/drivers/staging/etnaviv/etnaviv_drv.h b/drivers/staging/etnaviv/etnaviv_drv.h
index 47aa74b36235..18b929f9d268 100644
--- a/drivers/staging/etnaviv/etnaviv_drv.h
+++ b/drivers/staging/etnaviv/etnaviv_drv.h
@@ -39,6 +39,7 @@
struct etnaviv_gpu; struct etnaviv_mmu; +struct etnaviv_gem_object; struct etnaviv_gem_submit;
struct etnaviv_file_private { @@ -112,6 +113,8 @@ int etnaviv_gem_new_userptr(struct drm_device *dev, struct drm_file *file, u32 etnaviv_buffer_init(struct etnaviv_gpu *gpu); void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct etnaviv_gem_submit *submit); +bool etnaviv_cmd_validate_one(struct etnaviv_gpu *gpu, + struct etnaviv_gem_object *obj, unsigned int offset, unsigned int size);
#ifdef CONFIG_DEBUG_FS void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m); diff --git a/drivers/staging/etnaviv/etnaviv_gem_submit.c b/drivers/staging/etnaviv/etnaviv_gem_submit.c index c32fb4424eea..7bd4912ab8ad 100644 --- a/drivers/staging/etnaviv/etnaviv_gem_submit.c +++ b/drivers/staging/etnaviv/etnaviv_gem_submit.c @@ -394,6 +394,13 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, submit->cmd[i].size = submit_cmd.size / 4; submit->cmd[i].obj = etnaviv_obj;
+ if (!etnaviv_cmd_validate_one(gpu, etnaviv_obj, + submit->cmd[i].offset, + submit->cmd[i].size)) { + ret = -EINVAL; + goto out; + } + ret = submit_reloc(submit, etnaviv_obj, submit_cmd.submit_offset, submit_cmd.nr_relocs, submit_cmd.relocs);
From: Russell King rmk+kernel@arm.linux.org.uk
There is no need to restrict access to the get_param ioctl; this is used to obtain information about the device and has no side effects.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_drv.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index 77e05b80f18d..4bef6542daf3 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -505,7 +505,7 @@ static int etnaviv_ioctl_gem_userptr(struct drm_device *dev, void *data, static const struct drm_ioctl_desc etnaviv_ioctls[] = { #define ETNA_IOCTL(n, func, flags) \ DRM_IOCTL_DEF_DRV(ETNAVIV_##n, etnaviv_ioctl_##func, flags) - ETNA_IOCTL(GET_PARAM, get_param, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), + ETNA_IOCTL(GET_PARAM, get_param, DRM_UNLOCKED|DRM_RENDER_ALLOW), ETNA_IOCTL(GEM_NEW, gem_new, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), ETNA_IOCTL(GEM_INFO, gem_info, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), ETNA_IOCTL(GEM_CPU_PREP, gem_cpu_prep, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW),
From: Russell King rmk+kernel@arm.linux.org.uk
GC600 leaves the idle bits in its idle register clear for modules which are not present. Add handling to ensure that we don't misinterpret these zero bits as indicating that those modules are busy.
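The trick the patch applies in the debugfs dump can be sketched as follows: OR in "idle" for every bit outside the per-chip mask, so absent modules never read as busy. The bit positions here are illustrative placeholders; the real ones are the generated VIVS_HI_IDLE_STATE_* defines:

```c
#include <stdint.h>

/* Illustrative bit positions (cf. VIVS_HI_IDLE_STATE_* in the generated
 * register headers). */
#define IDLE_FE     (1u << 0)
#define IDLE_DE     (1u << 1)
#define IDLE_PE     (1u << 2)
#define IDLE_AXI_LP (1u << 31)	/* not a module-idle bit; excluded */

/* Force the bits for modules outside idle_mask to "idle" before testing,
 * so a GC600-style chip that reports 0 for a module it doesn't have is
 * not misread as busy. */
static uint32_t effective_idle(uint32_t idle_reg, uint32_t idle_mask)
{
	return idle_reg | (~idle_mask & ~IDLE_AXI_LP);
}
```

With a full-featured chip, `idle_mask` covers all module bits and the function is a no-op; with a reduced mask, only the modules actually present are checked against the hardware state.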
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gpu.c | 15 +++++++++++++++ drivers/staging/etnaviv/etnaviv_gpu.h | 1 + 2 files changed, 16 insertions(+)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 4b3d8a004374..2ed8de8c522a 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -289,6 +289,20 @@ static void etnaviv_hw_identify(struct etnaviv_gpu *gpu) dev_info(gpu->dev, "minor_features3: %x\n", gpu->identity.minor_features3);
+ /* GC600 idle register reports zero bits where modules aren't present */ + if (gpu->identity.model == chipModel_GC600) { + gpu->idle_mask = VIVS_HI_IDLE_STATE_TX | + VIVS_HI_IDLE_STATE_RA | + VIVS_HI_IDLE_STATE_SE | + VIVS_HI_IDLE_STATE_PA | + VIVS_HI_IDLE_STATE_SH | + VIVS_HI_IDLE_STATE_PE | + VIVS_HI_IDLE_STATE_DE | + VIVS_HI_IDLE_STATE_FE; + } else { + gpu->idle_mask = ~VIVS_HI_IDLE_STATE_AXI_LP; + } + etnaviv_hw_specs(gpu); }
@@ -531,6 +545,7 @@ void etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct seq_file *m)
seq_printf(m, "\taxi: 0x%08x\n", axi); seq_printf(m, "\tidle: 0x%08x\n", idle); + idle |= ~gpu->idle_mask & ~VIVS_HI_IDLE_STATE_AXI_LP; if ((idle & VIVS_HI_IDLE_STATE_FE) == 0) seq_puts(m, "\t FE is not idle\n"); if ((idle & VIVS_HI_IDLE_STATE_DE) == 0) diff --git a/drivers/staging/etnaviv/etnaviv_gpu.h b/drivers/staging/etnaviv/etnaviv_gpu.h index 59dc9c1a048f..484649530ccc 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.h +++ b/drivers/staging/etnaviv/etnaviv_gpu.h @@ -101,6 +101,7 @@ struct etnaviv_gpu { /* list of GEM active objects: */ struct list_head active_list;
+ uint32_t idle_mask; uint32_t submitted_fence; uint32_t retired_fence; uint32_t last_ring_pos;
From: Russell King rmk+kernel@arm.linux.org.uk
If we fail to allocate an event, we leave the submitted fence number incremented. This can cause an already running hangcheck timer to believe that we should be waiting for further events when no event has actually been queued.
Resolve this by moving the fence allocation (which can never fail) after the event allocation.
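The ordering fix boils down to: claim the fallible resource (the event) first, and only then bump the fence counter, which cannot fail. A minimal user-space model of that ordering (`struct fake_gpu` and `fake_submit()` are invented for illustration):

```c
#include <stdint.h>

struct fake_gpu {
	uint32_t next_fence;		/* models priv->next_fence */
	uint32_t submitted_fence;	/* models gpu->submitted_fence */
	int free_events;
};

/* Allocate the event first; only once nothing can fail is the fence
 * counter incremented, so a rejected submit leaves no fence number that
 * a running hangcheck timer could wait on. */
static int fake_submit(struct fake_gpu *gpu)
{
	if (gpu->free_events == 0)
		return -1;		/* -EBUSY in the real driver */
	gpu->free_events--;

	gpu->submitted_fence = ++gpu->next_fence;
	return 0;
}
```

In the buggy ordering, the increment happened before the event check, so a failed submit advanced `submitted_fence` past the last fence that would ever be signalled.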
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gpu.c | 15 ++++++--------- 1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 2ed8de8c522a..ffecce5236f9 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -890,13 +890,8 @@ int etnaviv_gpu_submit(struct etnaviv_gpu *gpu, { struct drm_device *dev = gpu->drm; struct etnaviv_drm_private *priv = dev->dev_private; - int ret = 0; unsigned int event, i;
- submit->fence = ++priv->next_fence; - - gpu->submitted_fence = submit->fence; - /* * TODO * @@ -909,10 +904,13 @@ int etnaviv_gpu_submit(struct etnaviv_gpu *gpu, event = event_alloc(gpu); if (unlikely(event == ~0U)) { DRM_ERROR("no free event\n"); - ret = -EBUSY; - goto fail; + return -EBUSY; }
+ submit->fence = ++priv->next_fence; + + gpu->submitted_fence = submit->fence; + etnaviv_buffer_queue(gpu, event, submit);
priv->lastctx = ctx; @@ -942,8 +940,7 @@ int etnaviv_gpu_submit(struct etnaviv_gpu *gpu, } hangcheck_timer_reset(gpu);
-fail: - return ret; + return 0; }
/*
From: Russell King rmk+kernel@arm.linux.org.uk
Remove the etnaviv specific power rail support, which is mostly disabled anyway. This really wants to use the power domain support instead, which allows the SoC specifics to be abstracted away from the driver, rather than carrying a home-cooked version of the same thing.
For example, on Dove, it is necessary to go through a specific isolation and reset sequence when removing and restoring the power to the GPU, which is something we don't want to have to place into the generic GPU driver.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gpu.c | 47 ----------------------------------- 1 file changed, 47 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index ffecce5236f9..7537ab13a47e 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -607,45 +607,6 @@ void etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct seq_file *m) /* * Power Management: */ - -static int enable_pwrrail(struct etnaviv_gpu *gpu) -{ -#if 0 - struct drm_device *dev = gpu->drm; - int ret = 0; - - if (gpu->gpu_reg) { - ret = regulator_enable(gpu->gpu_reg); - if (ret) { - dev_err(dev->dev, "failed to enable 'gpu_reg': %d\n", - ret); - return ret; - } - } - - if (gpu->gpu_cx) { - ret = regulator_enable(gpu->gpu_cx); - if (ret) { - dev_err(dev->dev, "failed to enable 'gpu_cx': %d\n", - ret); - return ret; - } - } -#endif - return 0; -} - -static int disable_pwrrail(struct etnaviv_gpu *gpu) -{ -#if 0 - if (gpu->gpu_cx) - regulator_disable(gpu->gpu_cx); - if (gpu->gpu_reg) - regulator_disable(gpu->gpu_reg); -#endif - return 0; -} - static int enable_clk(struct etnaviv_gpu *gpu) { if (gpu->clk_core) @@ -688,10 +649,6 @@ int etnaviv_gpu_pm_resume(struct etnaviv_gpu *gpu)
DBG("%s", dev_name(gpu->dev));
- ret = enable_pwrrail(gpu); - if (ret) - return ret; - ret = enable_clk(gpu); if (ret) return ret; @@ -717,10 +674,6 @@ int etnaviv_gpu_pm_suspend(struct etnaviv_gpu *gpu) if (ret) return ret;
- ret = disable_pwrrail(gpu); - if (ret) - return ret; - return 0; }
From: Russell King rmk+kernel@arm.linux.org.uk
The etnaviv_gpu structure can have a longer lifetime than the GPU command buffer, MMU and drm_device structures. When these other structures are freed (via the unbind method), we may be tempted to access them via other functions after they have been freed. Leaving stale pointers behind invites undetected use-after-free bugs; this has already happened while trying to develop runtime PM for the GPU.
Ensure that these bugs are obvious by NULLing out the pointers at the end of their lifetime.
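The pattern is free-and-clear in one step, so a later accidental use dereferences NULL (an immediate, obvious oops) rather than silently reading freed memory. A user-space sketch, with `free()`/`malloc()` standing in for the DRM/IOMMU teardown calls:

```c
#include <stddef.h>
#include <stdlib.h>

/* Invented stand-in for the parts of struct etnaviv_gpu being torn down. */
struct fake_ctx {
	void *buffer;
	void *mmu;
};

/* Release each resource and immediately NULL its pointer, so any stale
 * access after unbind crashes loudly instead of corrupting freed memory. */
static void fake_unbind(struct fake_ctx *ctx)
{
	free(ctx->buffer);
	ctx->buffer = NULL;
	free(ctx->mmu);
	ctx->mmu = NULL;
}
```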
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gpu.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 7537ab13a47e..7f041a261d54 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -990,11 +990,17 @@ static void etnaviv_gpu_unbind(struct device *dev, struct device *master,
WARN_ON(!list_empty(&gpu->active_list));
- if (gpu->buffer) + if (gpu->buffer) { drm_gem_object_unreference_unlocked(gpu->buffer); + gpu->buffer = NULL; + }
- if (gpu->mmu) + if (gpu->mmu) { etnaviv_iommu_destroy(gpu->mmu); + gpu->mmu = NULL; + } + + gpu->drm = NULL; }
static const struct component_ops gpu_ops = {
From: Russell King rmk+kernel@arm.linux.org.uk
Hold the mutex while calling component_{un,}bind_all() so that components can perform initialisation and teardown in their bind and unbind callbacks, which are called from the component helper.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_drv.c | 11 ++++++----- drivers/staging/etnaviv/etnaviv_gpu.c | 2 +- 2 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index 4bef6542daf3..2eb33d0074aa 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -99,10 +99,11 @@ static int etnaviv_unload(struct drm_device *dev) if (g) etnaviv_gpu_pm_suspend(g); } - mutex_unlock(&dev->struct_mutex);
component_unbind_all(dev->dev, dev);
+ mutex_unlock(&dev->struct_mutex); + dev->dev_private = NULL;
kfree(priv); @@ -116,8 +117,6 @@ static void load_gpu(struct drm_device *dev) struct etnaviv_drm_private *priv = dev->dev_private; unsigned int i;
- mutex_lock(&dev->struct_mutex); - for (i = 0; i < ETNA_MAX_PIPES; i++) { struct etnaviv_gpu *g = priv->gpu[i];
@@ -132,8 +131,6 @@ static void load_gpu(struct drm_device *dev) } } } - - mutex_unlock(&dev->struct_mutex); }
static int etnaviv_load(struct drm_device *dev, unsigned long flags) @@ -157,12 +154,16 @@ static int etnaviv_load(struct drm_device *dev, unsigned long flags)
platform_set_drvdata(pdev, dev);
+ mutex_lock(&dev->struct_mutex); + err = component_bind_all(dev->dev, dev); if (err < 0) return err;
load_gpu(dev);
+ mutex_unlock(&dev->struct_mutex); + return 0; }
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 7f041a261d54..6492214865b9 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -991,7 +991,7 @@ static void etnaviv_gpu_unbind(struct device *dev, struct device *master, WARN_ON(!list_empty(&gpu->active_list));
if (gpu->buffer) { - drm_gem_object_unreference_unlocked(gpu->buffer); + drm_gem_object_unreference(gpu->buffer); gpu->buffer = NULL; }
From: Russell King rmk+kernel@arm.linux.org.uk
Power management of each GPU core should be a matter for each core's own driver. Move the basic power management calls out of etnaviv_drv.c and into etnaviv_gpu.c so that we can add runtime PM.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_drv.c | 10 ----- drivers/staging/etnaviv/etnaviv_gpu.c | 73 +++++++++++++++++++---------------- drivers/staging/etnaviv/etnaviv_gpu.h | 2 - 3 files changed, 39 insertions(+), 46 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index 2eb33d0074aa..83cab36170f3 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -87,21 +87,12 @@ u32 etnaviv_readl(const void __iomem *addr) static int etnaviv_unload(struct drm_device *dev) { struct etnaviv_drm_private *priv = dev->dev_private; - unsigned int i;
flush_workqueue(priv->wq); destroy_workqueue(priv->wq);
mutex_lock(&dev->struct_mutex); - for (i = 0; i < ETNA_MAX_PIPES; i++) { - struct etnaviv_gpu *g = priv->gpu[i]; - - if (g) - etnaviv_gpu_pm_suspend(g); - } - component_unbind_all(dev->dev, dev); - mutex_unlock(&dev->struct_mutex);
dev->dev_private = NULL; @@ -123,7 +114,6 @@ static void load_gpu(struct drm_device *dev) if (g) { int ret;
- etnaviv_gpu_pm_resume(g); ret = etnaviv_gpu_init(g); if (ret) { dev_err(g->dev, "hw init failed: %d\n", ret); diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 6492214865b9..dd8dc6707ce4 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -643,40 +643,6 @@ static int disable_axi(struct etnaviv_gpu *gpu) return 0; }
-int etnaviv_gpu_pm_resume(struct etnaviv_gpu *gpu) -{ - int ret; - - DBG("%s", dev_name(gpu->dev)); - - ret = enable_clk(gpu); - if (ret) - return ret; - - ret = enable_axi(gpu); - if (ret) - return ret; - - return 0; -} - -int etnaviv_gpu_pm_suspend(struct etnaviv_gpu *gpu) -{ - int ret; - - DBG("%s", dev_name(gpu->dev)); - - ret = disable_axi(gpu); - if (ret) - return ret; - - ret = disable_clk(gpu); - if (ret) - return ret; - - return 0; -} - /* * Hangcheck detection for locked gpu: */ @@ -946,6 +912,38 @@ static irqreturn_t irq_handler(int irq, void *data) return ret; }
+static int etnaviv_gpu_resume(struct etnaviv_gpu *gpu) +{ + int ret; + + ret = enable_clk(gpu); + if (ret) + return ret; + + ret = enable_axi(gpu); + if (ret) { + disable_clk(gpu); + return ret; + } + + return 0; +} + +static int etnaviv_gpu_suspend(struct etnaviv_gpu *gpu) +{ + int ret; + + ret = disable_axi(gpu); + if (ret) + return ret; + + ret = disable_clk(gpu); + if (ret) + return ret; + + return 0; +} + static int etnaviv_gpu_bind(struct device *dev, struct device *master, void *data) { @@ -953,6 +951,7 @@ static int etnaviv_gpu_bind(struct device *dev, struct device *master, struct etnaviv_drm_private *priv = drm->dev_private; struct etnaviv_gpu *gpu = dev_get_drvdata(dev); int idx = gpu->pipe; + int ret;
dev_info(dev, "pre gpu[idx]: %p\n", priv->gpu[idx]);
@@ -966,6 +965,10 @@ static int etnaviv_gpu_bind(struct device *dev, struct device *master,
dev_info(dev, "post gpu[idx]: %p\n", priv->gpu[idx]);
+ ret = etnaviv_gpu_resume(gpu); + if (ret < 0) + return ret; + gpu->drm = drm;
INIT_LIST_HEAD(&gpu->active_list); @@ -990,6 +993,8 @@ static void etnaviv_gpu_unbind(struct device *dev, struct device *master,
WARN_ON(!list_empty(&gpu->active_list));
+ etnaviv_gpu_suspend(gpu); + if (gpu->buffer) { drm_gem_object_unreference(gpu->buffer); gpu->buffer = NULL; diff --git a/drivers/staging/etnaviv/etnaviv_gpu.h b/drivers/staging/etnaviv/etnaviv_gpu.h index 484649530ccc..bbc91b6ea8fe 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.h +++ b/drivers/staging/etnaviv/etnaviv_gpu.h @@ -142,8 +142,6 @@ int etnaviv_gpu_get_param(struct etnaviv_gpu *gpu, uint32_t param, uint64_t *value);
int etnaviv_gpu_init(struct etnaviv_gpu *gpu); -int etnaviv_gpu_pm_suspend(struct etnaviv_gpu *gpu); -int etnaviv_gpu_pm_resume(struct etnaviv_gpu *gpu);
#ifdef CONFIG_DEBUG_FS void etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct seq_file *m);
From: Russell King rmk+kernel@arm.linux.org.uk
We need to reprogram various registers when coming out of runtime PM, many of which are set up by the main initialisation path. Abstract this code out and arrange for the runtime PM resume method to call it.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_gpu.c | 51 ++++++++++++++++++++--------------- 1 file changed, 30 insertions(+), 21 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index dd8dc6707ce4..df3210424d09 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -400,17 +400,9 @@ static int etnaviv_hw_reset(struct etnaviv_gpu *gpu) return 0; }
-int etnaviv_gpu_init(struct etnaviv_gpu *gpu) +static void etnaviv_gpu_hw_init(struct etnaviv_gpu *gpu) { - int ret, i; u32 words; /* 32 bit words */ - struct iommu_domain *iommu; - bool mmuv2; - - etnaviv_hw_identify(gpu); - ret = etnaviv_hw_reset(gpu); - if (ret) - return ret;
if (gpu->identity.model == chipModel_GC320 && gpu_read(gpu, VIVS_HI_CHIP_TIME) != 0x2062400 && @@ -443,6 +435,33 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu) gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_PEZ, 0x0); gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_PE, 0x0);
+ /* FIXME: we need to program the GPU table pointer(s) here */ + + /* Start command processor */ + words = etnaviv_buffer_init(gpu); + + /* convert number of 32 bit words to number of 64 bit words */ + words = ALIGN(words, 2) / 2; + + gpu_write(gpu, VIVS_HI_INTR_ENBL, ~0U); + gpu_write(gpu, VIVS_FE_COMMAND_ADDRESS, + etnaviv_gem_paddr_locked(gpu->buffer)); + gpu_write(gpu, VIVS_FE_COMMAND_CONTROL, + VIVS_FE_COMMAND_CONTROL_ENABLE | + VIVS_FE_COMMAND_CONTROL_PREFETCH(words)); +} + +int etnaviv_gpu_init(struct etnaviv_gpu *gpu) +{ + int ret, i; + struct iommu_domain *iommu; + bool mmuv2; + + etnaviv_hw_identify(gpu); + ret = etnaviv_hw_reset(gpu); + if (ret) + return ret; + /* Setup IOMMU.. eventually we will (I think) do this once per context * and have separate page tables per context. For now, to keep things * simple and to get something working, just use a single address space: @@ -489,18 +508,8 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu) complete(&gpu->event_free); }
- /* Start command processor */ - words = etnaviv_buffer_init(gpu); - - /* convert number of 32 bit words to number of 64 bit words */ - words = ALIGN(words, 2) / 2; - - gpu_write(gpu, VIVS_HI_INTR_ENBL, ~0U); - gpu_write(gpu, VIVS_FE_COMMAND_ADDRESS, - etnaviv_gem_paddr_locked(gpu->buffer)); - gpu_write(gpu, VIVS_FE_COMMAND_CONTROL, - VIVS_FE_COMMAND_CONTROL_ENABLE | - VIVS_FE_COMMAND_CONTROL_PREFETCH(words)); + /* Now program the hardware */ + etnaviv_gpu_hw_init(gpu);
return 0;
From: Russell King rmk+kernel@arm.linux.org.uk
Add support to append an END command to the GPU command stream so we cleanly shut down the GPU on driver removal and suspend. This properly quiesces the GPU, allowing it to come back up on SoCs such as the i.MX6.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_buffer.c | 11 +++++++++++ drivers/staging/etnaviv/etnaviv_drv.h | 1 + drivers/staging/etnaviv/etnaviv_gpu.c | 29 +++++++++++++++++++++++++++++ 3 files changed, 41 insertions(+)
diff --git a/drivers/staging/etnaviv/etnaviv_buffer.c b/drivers/staging/etnaviv/etnaviv_buffer.c index 0ce1e4baafa4..17bf6f9abed0 100644 --- a/drivers/staging/etnaviv/etnaviv_buffer.c +++ b/drivers/staging/etnaviv/etnaviv_buffer.c @@ -157,6 +157,17 @@ u32 etnaviv_buffer_init(struct etnaviv_gpu *gpu) return buffer->offset; }
+void etnaviv_buffer_end(struct etnaviv_gpu *gpu) +{ + struct etnaviv_gem_object *buffer = to_etnaviv_bo(gpu->buffer); + + /* Replace the last WAIT with an END */ + buffer->offset -= 4; + + CMD_END(buffer); + mb(); +} + void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct etnaviv_gem_submit *submit) { diff --git a/drivers/staging/etnaviv/etnaviv_drv.h b/drivers/staging/etnaviv/etnaviv_drv.h index 18b929f9d268..4c848afab876 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.h +++ b/drivers/staging/etnaviv/etnaviv_drv.h @@ -111,6 +111,7 @@ struct drm_gem_object *etnaviv_gem_new(struct drm_device *dev, int etnaviv_gem_new_userptr(struct drm_device *dev, struct drm_file *file, uintptr_t ptr, uint32_t size, uint32_t flags, uint32_t *handle); u32 etnaviv_buffer_init(struct etnaviv_gpu *gpu); +void etnaviv_buffer_end(struct etnaviv_gpu *gpu); void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, struct etnaviv_gem_submit *submit); bool etnaviv_cmd_validate_one(struct etnaviv_gpu *gpu, diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index df3210424d09..a22c5ab558a7 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -942,6 +942,35 @@ static int etnaviv_gpu_suspend(struct etnaviv_gpu *gpu) { int ret;
+ if (gpu->buffer) { + unsigned long timeout; + + /* Replace the last WAIT with END */ + etnaviv_buffer_end(gpu); + + /* + * We know that only the FE is busy here, this should + * happen quickly (as the WAIT is only 200 cycles). If + * we fail, just warn and continue. + */ + timeout = jiffies + msecs_to_jiffies(100); + do { + u32 idle = gpu_read(gpu, VIVS_HI_IDLE_STATE); + + if ((idle & gpu->idle_mask) == gpu->idle_mask) + break; + + if (time_is_before_jiffies(timeout)) { + dev_warn(gpu->dev, + "timed out waiting for idle: idle=0x%x\n", + idle); + break; + } + + udelay(5); + } while (1); + } + ret = disable_axi(gpu); if (ret) return ret;
From: Russell King rmk+kernel@arm.linux.org.uk
Add initial runtime PM support. We manage the runtime PM state based on the events: when we allocate an event upon command submission, we "get" the runtime PM state, causing it to resume if not already resumed.
When we receive an interrupt, and free an event, we "put" the runtime state. When all events are eventually freed, the runtime PM state will then indicate that it can attempt to suspend the device.
We also include an idle callback which checks that the GPU modules (except for the front end) are idle before suspending. This way we ensure that the GPU is properly idle, and retry the suspend later if not.
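As a rough illustration of the get/put balancing described above, here is a plain user-space C sketch (not kernel code): pm_get()/pm_put() and the bare counter are our simplification of pm_runtime_get_sync()/pm_runtime_put_autosuspend(), and the event allocation is elided.

```c
#include <assert.h>

static int usage_count;   /* models the runtime PM usage counter */
static int device_active; /* 1 while the GPU is powered up */

static void pm_get(void)
{
	if (usage_count++ == 0)
		device_active = 1;	/* first user: resume the device */
}

static void pm_put(void)
{
	if (--usage_count == 0)
		device_active = 0;	/* last user gone: may autosuspend */
}

/* Command submission: take a PM reference for the event we allocate. */
static void submit(void)
{
	pm_get();
}

/* Interrupt path: an event completed, drop the matching reference. */
static void event_complete(void)
{
	pm_put();
}
```

The invariant is simply that every submit() is eventually balanced by one event_complete(), so the device can only suspend once all outstanding events have fired.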
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk Signed-off-by: Lucas Stach l.stach@pengutronix.de --- lst: squashed fix for PM_RUNTIME config symbol removal --- drivers/staging/etnaviv/etnaviv_gpu.c | 111 ++++++++++++++++++++++++++++++++-- drivers/staging/etnaviv/etnaviv_gpu.h | 1 + 2 files changed, 107 insertions(+), 5 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index a22c5ab558a7..32084e08a387 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -457,10 +457,14 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu) struct iommu_domain *iommu; bool mmuv2;
+ ret = pm_runtime_get_sync(gpu->dev); + if (ret < 0) + return ret; + etnaviv_hw_identify(gpu); ret = etnaviv_hw_reset(gpu); if (ret) - return ret; + goto fail;
/* Setup IOMMU.. eventually we will (I think) do this once per context * and have separate page tables per context. For now, to keep things @@ -510,10 +514,17 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
/* Now program the hardware */ etnaviv_gpu_hw_init(gpu); + gpu->initialized = true; + + pm_runtime_mark_last_busy(gpu->dev); + pm_runtime_put_autosuspend(gpu->dev);
return 0;
fail: + pm_runtime_mark_last_busy(gpu->dev); + pm_runtime_put_autosuspend(gpu->dev); + return ret; }
@@ -545,10 +556,15 @@ static void verify_dma(struct etnaviv_gpu *gpu, struct dma_debug *debug) void etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct seq_file *m) { struct dma_debug debug; - u32 dma_lo = gpu_read(gpu, VIVS_FE_DMA_LOW); - u32 dma_hi = gpu_read(gpu, VIVS_FE_DMA_HIGH); - u32 axi = gpu_read(gpu, VIVS_HI_AXI_STATUS); - u32 idle = gpu_read(gpu, VIVS_HI_IDLE_STATE); + u32 dma_lo, dma_hi, axi, idle; + + if (pm_runtime_get_sync(gpu->dev) < 0) + return; + + dma_lo = gpu_read(gpu, VIVS_FE_DMA_LOW); + dma_hi = gpu_read(gpu, VIVS_FE_DMA_HIGH); + axi = gpu_read(gpu, VIVS_HI_AXI_STATUS); + idle = gpu_read(gpu, VIVS_HI_IDLE_STATE);
verify_dma(gpu, &debug);
@@ -610,6 +626,9 @@ void etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct seq_file *m) seq_printf(m, "\t state 1: 0x%08x\n", debug.state[1]); seq_printf(m, "\t last fetch 64 bit word: 0x%08x 0x%08x\n", dma_lo, dma_hi); + + pm_runtime_mark_last_busy(gpu->dev); + pm_runtime_put_autosuspend(gpu->dev); } #endif
@@ -819,6 +838,11 @@ int etnaviv_gpu_submit(struct etnaviv_gpu *gpu, struct drm_device *dev = gpu->drm; struct etnaviv_drm_private *priv = dev->dev_private; unsigned int event, i; + int ret; + + ret = pm_runtime_get_sync(gpu->dev); + if (ret < 0) + return ret;
/* * TODO @@ -832,6 +856,7 @@ int etnaviv_gpu_submit(struct etnaviv_gpu *gpu, event = event_alloc(gpu); if (unlikely(event == ~0U)) { DRM_ERROR("no free event\n"); + pm_runtime_put_autosuspend(gpu->dev); return -EBUSY; }
@@ -884,6 +909,8 @@ static irqreturn_t irq_handler(int irq, void *data) if (intr != 0) { int event;
+ pm_runtime_mark_last_busy(gpu->dev); + dev_dbg(gpu->dev, "intr 0x%08x\n", intr);
if (intr & VIVS_HI_INTR_ACKNOWLEDGE_AXI_BUS_ERROR) { @@ -911,6 +938,15 @@ static irqreturn_t irq_handler(int irq, void *data) gpu->last_ring_pos = gpu->event[event].ring_pos; } event_free(gpu, event); + + /* + * We need to balance the runtime PM count caused by + * each submission. Upon submission, we increment + * the runtime PM counter, and allocate one event. + * So here, we put the runtime PM count for each + * completed event. + */ + pm_runtime_put_autosuspend(gpu->dev); }
etnaviv_gpu_retire(gpu); @@ -1003,7 +1039,11 @@ static int etnaviv_gpu_bind(struct device *dev, struct device *master,
dev_info(dev, "post gpu[idx]: %p\n", priv->gpu[idx]);
+#ifdef CONFIG_PM
+	ret = pm_runtime_get_sync(gpu->dev);
+#else
 	ret = etnaviv_gpu_resume(gpu);
+#endif
 	if (ret < 0)
 		return ret;
@@ -1015,6 +1055,10 @@ static int etnaviv_gpu_bind(struct device *dev, struct device *master,
setup_timer(&gpu->hangcheck_timer, hangcheck_handler, (unsigned long)gpu); + + pm_runtime_mark_last_busy(gpu->dev); + pm_runtime_put_autosuspend(gpu->dev); + return 0; fail: return -1; @@ -1031,7 +1075,12 @@ static void etnaviv_gpu_unbind(struct device *dev, struct device *master,
WARN_ON(!list_empty(&gpu->active_list));
+#ifdef CONFIG_PM
+	pm_runtime_get_sync(gpu->dev);
+	pm_runtime_put_sync_suspend(gpu->dev);
+#else
 	etnaviv_gpu_suspend(gpu);
+#endif
if (gpu->buffer) { drm_gem_object_unreference(gpu->buffer); @@ -1130,6 +1179,15 @@ static int etnaviv_gpu_platform_probe(struct platform_device *pdev) /* TODO: figure out max mapped size */ dev_set_drvdata(dev, gpu);
+ /* + * We treat the device as initially suspended. The runtime PM + * autosuspend delay is rather arbitary: no measurements have + * yet been performed to determine an appropriate value. + */ + pm_runtime_use_autosuspend(gpu->dev); + pm_runtime_set_autosuspend_delay(gpu->dev, 200); + pm_runtime_enable(gpu->dev); + err = component_add(&pdev->dev, &gpu_ops); if (err < 0) { dev_err(&pdev->dev, "failed to register component: %d\n", err); @@ -1145,13 +1203,56 @@ fail: static int etnaviv_gpu_platform_remove(struct platform_device *pdev) { component_del(&pdev->dev, &gpu_ops); + pm_runtime_disable(&pdev->dev); return 0; }
+#ifdef CONFIG_PM +static int etnaviv_gpu_rpm_suspend(struct device *dev) +{ + struct etnaviv_gpu *gpu = dev_get_drvdata(dev); + u32 idle, mask; + + /* If we have outstanding fences, we're not idle */ + if (gpu->retired_fence != gpu->submitted_fence) + return -EBUSY; + + /* Check whether the hardware (except FE) is idle */ + mask = gpu->idle_mask & ~VIVS_HI_IDLE_STATE_FE; + idle = gpu_read(gpu, VIVS_HI_IDLE_STATE) & mask; + if (idle != mask) + return -EBUSY; + + return etnaviv_gpu_suspend(gpu); +} + +static int etnaviv_gpu_rpm_resume(struct device *dev) +{ + struct etnaviv_gpu *gpu = dev_get_drvdata(dev); + int ret; + + ret = etnaviv_gpu_resume(gpu); + if (ret) + return ret; + + /* Re-initialise the basic hardware state */ + if (gpu->initialized) + etnaviv_gpu_hw_init(gpu); + + return 0; +} +#endif + +static const struct dev_pm_ops etnaviv_gpu_pm_ops = { + SET_RUNTIME_PM_OPS(etnaviv_gpu_rpm_suspend, etnaviv_gpu_rpm_resume, + NULL) +}; + struct platform_driver etnaviv_gpu_driver = { .driver = { .name = "etnaviv-gpu", .owner = THIS_MODULE, + .pm = &etnaviv_gpu_pm_ops, .of_match_table = etnaviv_gpu_match, }, .probe = etnaviv_gpu_platform_probe, diff --git a/drivers/staging/etnaviv/etnaviv_gpu.h b/drivers/staging/etnaviv/etnaviv_gpu.h index bbc91b6ea8fe..3a4c111a4978 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.h +++ b/drivers/staging/etnaviv/etnaviv_gpu.h @@ -89,6 +89,7 @@ struct etnaviv_gpu { struct device *dev; struct etnaviv_chip_identity identity; int pipe; + bool initialized;
/* 'ring'-buffer: */ struct drm_gem_object *buffer;
From: Russell King rmk+kernel@arm.linux.org.uk
On iMX6, memory starts at 256MB physical, and if we have 2GB of memory, this causes it to extend beyond 0x80000000. All memory is available for DMA coherent allocations, which can result in the command buffers being allocated from addresses above 0x80000000.
However, the Vivante GPU requires that command buffers are in the linear space (first 2GB of GPU addresses.) Trying to program an address with bit 31 set results in it being ignored.
The Vivante GPU has a solution to this: they provide a set of memory base address registers, which are added to the GPU linear space address before appearing on the bus. If we program these to the base address of physical memory, we can use all 2GB of RAM.
There are other reasons to do this: firstly, other SoCs may start their physical memory at other bus addresses, which may further reduce the range of physical addresses for command buffers. Secondly, it prevents the GPU being able to access addresses below RAM, which in the iMX6 case are hardware registers.
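A minimal sketch of the address arithmetic this enables (plain C; the i.MX6-style 256MB base is assumed for illustration, gpu_va() mirrors the helper added by this patch, the window check is our paraphrase of the sgt fast path):

```c
#include <stdint.h>

#define MEMORY_BASE 0x10000000u	/* i.MX6: RAM starts at 256MB physical */

/* GPU linear-space address: bus address minus the programmed base. */
static uint32_t gpu_va(uint32_t memory_base, uint32_t paddr)
{
	return paddr - memory_base;
}

/* A buffer is directly usable by the front end iff its whole extent
 * still falls inside the 2GB linear window after rebasing. */
static int fits_linear_window(uint32_t memory_base, uint32_t paddr,
			      uint32_t len)
{
	return gpu_va(memory_base, paddr) < 0x80000000u - len;
}
```

With the base programmed, a command buffer allocated at bus address 0x8ff00000 rebases to 0x7ff00000 and fits the window; with a zero base it would be rejected.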
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk --- drivers/staging/etnaviv/etnaviv_buffer.c | 21 +++++++++++++-------- drivers/staging/etnaviv/etnaviv_gem.c | 3 ++- drivers/staging/etnaviv/etnaviv_gpu.c | 20 ++++++++++++++------ drivers/staging/etnaviv/etnaviv_gpu.h | 3 +++ drivers/staging/etnaviv/etnaviv_mmu.c | 4 ++-- drivers/staging/etnaviv/etnaviv_mmu.h | 2 +- 6 files changed, 35 insertions(+), 18 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_buffer.c b/drivers/staging/etnaviv/etnaviv_buffer.c index 17bf6f9abed0..391b2afc7f59 100644 --- a/drivers/staging/etnaviv/etnaviv_buffer.c +++ b/drivers/staging/etnaviv/etnaviv_buffer.c @@ -129,6 +129,11 @@ static void etnaviv_cmd_select_pipe(struct etnaviv_gem_object *buffer, u8 pipe) VIVS_GL_PIPE_SELECT_PIPE(pipe)); }
+static u32 gpu_va(struct etnaviv_gpu *gpu, struct etnaviv_gem_object *obj) +{ + return obj->paddr - gpu->memory_base; +} + static void etnaviv_buffer_dump(struct etnaviv_gpu *gpu, struct etnaviv_gem_object *obj, u32 off, u32 len) { @@ -136,7 +141,7 @@ static void etnaviv_buffer_dump(struct etnaviv_gpu *gpu, u32 *ptr = obj->vaddr + off;
dev_info(gpu->dev, "virt %p phys 0x%08x free 0x%08x\n", - ptr, obj->paddr + off, size - len * 4 - off); + ptr, gpu_va(gpu, obj) + off, size - len * 4 - off);
print_hex_dump(KERN_INFO, "cmd ", DUMP_PREFIX_OFFSET, 16, 4, ptr, len * 4, 0); @@ -152,7 +157,7 @@ u32 etnaviv_buffer_init(struct etnaviv_gpu *gpu) etnaviv_cmd_select_pipe(buffer, gpu->pipe);
CMD_WAIT(buffer); - CMD_LINK(buffer, 2, buffer->paddr + ((buffer->offset - 1) * 4)); + CMD_LINK(buffer, 2, gpu_va(gpu, buffer) + ((buffer->offset - 1) * 4));
return buffer->offset; } @@ -199,7 +204,7 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event,
/* save offset back into main buffer */ back = buffer->offset + reserve_size - 6; - link_target = buffer->paddr + buffer->offset * 4; + link_target = gpu_va(gpu, buffer) + buffer->offset * 4; link_size = 6;
if (gpu->mmu->need_flush) { @@ -216,7 +221,7 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, if (drm_debug & DRM_UT_DRIVER) pr_info("stream link from buffer %u to 0x%08x @ 0x%08x %p\n", i, link_target, - cmd->paddr + cmd->offset * 4, + gpu_va(gpu, cmd) + cmd->offset * 4, cmd->vaddr + cmd->offset * 4);
/* jump back from last cmd to main buffer */ @@ -225,7 +230,7 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, /* update the size */ submit->cmd[i].size = cmd->offset - submit->cmd[i].offset;
- link_target = cmd->paddr + submit->cmd[i].offset * 4; + link_target = gpu_va(gpu, cmd) + submit->cmd[i].offset * 4; link_size = submit->cmd[i].size * 2; }
@@ -240,12 +245,12 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event, pr_info("link op: %p\n", lw); pr_info("link addr: %p\n", lw + 1); pr_info("addr: 0x%08x\n", link_target); - pr_info("back: 0x%08x\n", buffer->paddr + (back * 4)); + pr_info("back: 0x%08x\n", gpu_va(gpu, buffer) + (back * 4)); pr_info("event: %d\n", event); }
if (gpu->mmu->need_flush) { - uint32_t new_target = buffer->paddr + buffer->offset * + uint32_t new_target = gpu_va(gpu, buffer) + buffer->offset * sizeof(uint32_t);
/* Add the MMU flush */ @@ -273,7 +278,7 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event,
/* append WAIT/LINK to main buffer */ CMD_WAIT(buffer); - CMD_LINK(buffer, 2, buffer->paddr + ((buffer->offset - 1) * 4)); + CMD_LINK(buffer, 2, gpu_va(gpu, buffer) + ((buffer->offset - 1) * 4));
/* Change WAIT into a LINK command; write the address first. */ *(lw + 1) = link_target; diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 56e4ff5dd048..fc8dcfaf5f21 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -317,7 +317,8 @@ int etnaviv_gem_get_iova_locked(struct etnaviv_gpu *gpu, if (IS_ERR(pages)) return PTR_ERR(pages);
- ret = etnaviv_iommu_map_gem(gpu->mmu, etnaviv_obj); + ret = etnaviv_iommu_map_gem(gpu->mmu, etnaviv_obj, + gpu->memory_base); }
if (!ret) diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 32084e08a387..973b00346c7f 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -429,11 +429,11 @@ static void etnaviv_gpu_hw_init(struct etnaviv_gpu *gpu) VIVS_HI_AXI_CONFIG_ARCACHE(2));
/* set base addresses */ - gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_RA, 0x0); - gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_FE, 0x0); - gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_TX, 0x0); - gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_PEZ, 0x0); - gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_PE, 0x0); + gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_RA, gpu->memory_base); + gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_FE, gpu->memory_base); + gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_TX, gpu->memory_base); + gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_PEZ, gpu->memory_base); + gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_PE, gpu->memory_base);
/* FIXME: we need to program the GPU table pointer(s) here */
@@ -445,7 +445,7 @@ static void etnaviv_gpu_hw_init(struct etnaviv_gpu *gpu)
gpu_write(gpu, VIVS_HI_INTR_ENBL, ~0U); gpu_write(gpu, VIVS_FE_COMMAND_ADDRESS, - etnaviv_gem_paddr_locked(gpu->buffer)); + etnaviv_gem_paddr_locked(gpu->buffer) - gpu->memory_base); gpu_write(gpu, VIVS_FE_COMMAND_CONTROL, VIVS_FE_COMMAND_CONTROL_ENABLE | VIVS_FE_COMMAND_CONTROL_PREFETCH(words)); @@ -1140,6 +1140,14 @@ static int etnaviv_gpu_platform_probe(struct platform_device *pdev)
gpu->dev = &pdev->dev;
+ /* + * Set the GPU base address to the start of physical memory. This + * ensures that if we have up to 2GB, the v1 MMU can address the + * highest memory. This is important as command buffers may be + * allocated outside of this limit. + */ + gpu->memory_base = PHYS_OFFSET; + /* Map registers: */ gpu->mmio = etnaviv_ioremap(pdev, NULL, dev_name(gpu->dev)); if (IS_ERR(gpu->mmio)) diff --git a/drivers/staging/etnaviv/etnaviv_gpu.h b/drivers/staging/etnaviv/etnaviv_gpu.h index 3a4c111a4978..9feab07da457 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.h +++ b/drivers/staging/etnaviv/etnaviv_gpu.h @@ -94,6 +94,9 @@ struct etnaviv_gpu { /* 'ring'-buffer: */ struct drm_gem_object *buffer;
+ /* bus base address of memory */ + uint32_t memory_base; + /* event management: */ struct etnaviv_event event[30]; struct completion event_free; diff --git a/drivers/staging/etnaviv/etnaviv_mmu.c b/drivers/staging/etnaviv/etnaviv_mmu.c index 0a2f9dd5bb7a..897356d08e30 100644 --- a/drivers/staging/etnaviv/etnaviv_mmu.c +++ b/drivers/staging/etnaviv/etnaviv_mmu.c @@ -92,7 +92,7 @@ int etnaviv_iommu_unmap(struct etnaviv_iommu *iommu, uint32_t iova, }
int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, - struct etnaviv_gem_object *etnaviv_obj) + struct etnaviv_gem_object *etnaviv_obj, uint32_t memory_base) { struct etnaviv_drm_private *priv = etnaviv_obj->base.dev->dev_private; struct sg_table *sgt = etnaviv_obj->sgt; @@ -103,7 +103,7 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, if (sgt->nents == 1) { uint32_t iova;
- iova = sg_dma_address(sgt->sgl); + iova = sg_dma_address(sgt->sgl) - memory_base; if (iova < 0x80000000 - sg_dma_len(sgt->sgl)) { etnaviv_obj->iova = iova; return 0; diff --git a/drivers/staging/etnaviv/etnaviv_mmu.h b/drivers/staging/etnaviv/etnaviv_mmu.h index a37affda9590..29881e27dc7e 100644 --- a/drivers/staging/etnaviv/etnaviv_mmu.h +++ b/drivers/staging/etnaviv/etnaviv_mmu.h @@ -39,7 +39,7 @@ int etnaviv_iommu_map(struct etnaviv_iommu *iommu, uint32_t iova, int etnaviv_iommu_unmap(struct etnaviv_iommu *iommu, uint32_t iova, struct sg_table *sgt, unsigned len); int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, - struct etnaviv_gem_object *etnaviv_obj); + struct etnaviv_gem_object *etnaviv_obj, uint32_t memory_base); void etnaviv_iommu_unmap_gem(struct etnaviv_iommu *mmu, struct etnaviv_gem_object *etnaviv_obj); void etnaviv_iommu_destroy(struct etnaviv_iommu *iommu);
The hardware interprets a value of 0 as the maximum number of rectangles, which is 256. Allow this in the command parser.
Signed-off-by: Lucas Stach l.stach@pengutronix.de --- drivers/staging/etnaviv/etnaviv_cmd_parser.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/staging/etnaviv/etnaviv_cmd_parser.c b/drivers/staging/etnaviv/etnaviv_cmd_parser.c index 4cc6944e4a8f..61370d3ebf9d 100644 --- a/drivers/staging/etnaviv/etnaviv_cmd_parser.c +++ b/drivers/staging/etnaviv/etnaviv_cmd_parser.c @@ -68,6 +68,8 @@ bool etnaviv_cmd_validate_one(struct etnaviv_gpu *gpu,
 	case FE_OPCODE_DRAW_2D:
 		n = EXTRACT(cmd, VIV_FE_DRAW_2D_HEADER_COUNT);
+		if (n == 0)
+			n = 256;
 		len = 2 + n * 2;
 		break;
It is legal for userspace to pass in a command stream whose size is only 32-bit aligned, if that is where the last user command ends. The kernel then needs to insert a LINK command at the end of the stream, which must be 64-bit aligned, so the kernel may have to insert an additional 32 bits of padding into the stream. Align the stream size to account for that in the size and command stream validator checks.
Signed-off-by: Lucas Stach l.stach@pengutronix.de --- drivers/staging/etnaviv/etnaviv_gem_submit.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem_submit.c b/drivers/staging/etnaviv/etnaviv_gem_submit.c index 7bd4912ab8ad..965096be5219 100644 --- a/drivers/staging/etnaviv/etnaviv_gem_submit.c +++ b/drivers/staging/etnaviv/etnaviv_gem_submit.c @@ -377,8 +377,11 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
 	/*
 	 * We must have space to add a LINK command at the end of
-	 * the command buffer.
+	 * the command buffer. Align buffer size to the next 64bit
+	 * quantity, as that's the point where we need to insert the
+	 * next command.
 	 */
+	submit_cmd.size = ALIGN(submit_cmd.size, 8);
 	max_size = etnaviv_obj->base.size - 8;
if (submit_cmd.size > max_size ||
On Thu, Apr 02, 2015 at 05:30:29PM +0200, Lucas Stach wrote:
It is legal for the userspace to pass in a command stream of a size aligned to 32 bit, if that is where the last user command ends. The kernel then needs to insert a LINK command at the end of the stream, which needs to be aligned to 64 bit, so the kernel may insert an additional 32bits of padding in the stream. Align the stream size to account for that in the size and command stream validator checks.
Signed-off-by: Lucas Stach l.stach@pengutronix.de
drivers/staging/etnaviv/etnaviv_gem_submit.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem_submit.c b/drivers/staging/etnaviv/etnaviv_gem_submit.c index 7bd4912ab8ad..965096be5219 100644 --- a/drivers/staging/etnaviv/etnaviv_gem_submit.c +++ b/drivers/staging/etnaviv/etnaviv_gem_submit.c @@ -377,8 +377,11 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
/* * We must have space to add a LINK command at the end of
* the command buffer.
* the command buffer. Align buffer size to the next 64bit
* quantity, as that's the point where we need to insert the
* next command.
*/
submit_cmd.size = ALIGN(submit_cmd.size, 8);
max_size = etnaviv_obj->base.size - 8;
I wonder if it's an error if the command size is not a multiple of 8? I know that the command stream is always aligned to 8 bytes for the 2D cores, but I don't know about the 3D or VG cores.
Am Donnerstag, den 02.04.2015, 17:20 +0100 schrieb Russell King - ARM Linux:
On Thu, Apr 02, 2015 at 05:30:29PM +0200, Lucas Stach wrote:
It is legal for the userspace to pass in a command stream of a size aligned to 32 bit, if that is where the last user command ends. The kernel then needs to insert a LINK command at the end of the stream, which needs to be aligned to 64 bit, so the kernel may insert an additional 32bits of padding in the stream. Align the stream size to account for that in the size and command stream validator checks.
Signed-off-by: Lucas Stach l.stach@pengutronix.de
drivers/staging/etnaviv/etnaviv_gem_submit.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem_submit.c b/drivers/staging/etnaviv/etnaviv_gem_submit.c index 7bd4912ab8ad..965096be5219 100644 --- a/drivers/staging/etnaviv/etnaviv_gem_submit.c +++ b/drivers/staging/etnaviv/etnaviv_gem_submit.c @@ -377,8 +377,11 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
/* * We must have space to add a LINK command at the end of
* the command buffer.
* the command buffer. Align buffer size to the next 64bit
* quantity, as that's the point where we need to insert the
* next command.
*/
submit_cmd.size = ALIGN(submit_cmd.size, 8);
max_size = etnaviv_obj->base.size - 8;
I wonder if it's an error if the command size is not a multiple of 8? I know that the command stream is always aligned to 8 bytes for the 2D cores, but I don't know about the 3D or VG cores.
The start of the commands must always be 64bit aligned, it's the same for all pipes. The size can be dword aligned if the last command in the stream is something like SET_STATE with a length of 2. In that case one needs to insert another padding dword, but as it's the end of the stream userspace may omit that.
So that's more a question of policy: do we want userspace to always specify the size including the padding and reject streams whose size is not double-dword aligned, or do we just fix it up in the kernel, as it's equally easily done there? I was a bit on the fence here and decided to go the "let the kernel fix things" route.
Regards, Lucas
On Thu, Apr 02, 2015 at 06:29:24PM +0200, Lucas Stach wrote:
The start of the commands must always be 64bit aligned, it's the same for all pipes. The size can be dword aligned if the last command in the stream is something like SET_STATE with a length of 2. In that case one needs to insert another padding dword, but as it's the end of the stream userspace may omit that.
So that more a question of policy: do we want userspace to always specify the size including the padding and reject streams with a size not double-dword aligned, or do we just fix it up in the kernel, as it's equally easily done there. I was a bit on the fence here and decided to go the "let the kernel fix things" route.
Without really knowing the hardware, it's hard to say. However, they seem to be designed around a 64-bit architecture, and I would not be surprised if the front end DMA always fetches from the command buffer in 64-bit quantities.
You mention SET_STATE, but the same applies to NOP, WAIT and all of the other commands for the 2D cores - the command word must always be in the first 32-bits of each 64-bit naturally aligned word in memory.
Given that, my feeling (and it's only a feeling) is that it would be potentially dangerous to allow userspace to pass a buffer which isn't aligned. As you've found, the kernel would have to align the buffer size up to a 64-bit quantity to add the LINK command at the end anyway.
So, I would err on the side of having userspace always do that, and we initially enforce that on the kernel side. If we find that's too strict, we can always relax the kernel side - and we still remain compatible with userspace. If we do the reverse, it's harder for the kernel to become more strict without breaking userspace. Given the "no regressions" rule, that's something we all want to avoid. :)
Am Donnerstag, den 02.04.2015, 17:45 +0100 schrieb Russell King - ARM Linux:
On Thu, Apr 02, 2015 at 06:29:24PM +0200, Lucas Stach wrote:
The start of the commands must always be 64bit aligned, it's the same for all pipes. The size can be dword aligned if the last command in the stream is something like SET_STATE with a length of 2. In that case one needs to insert another padding dword, but as it's the end of the stream userspace may omit that.
So that more a question of policy: do we want userspace to always specify the size including the padding and reject streams with a size not double-dword aligned, or do we just fix it up in the kernel, as it's equally easily done there. I was a bit on the fence here and decided to go the "let the kernel fix things" route.
Without really knowing the hardware, it's hard to say. However, they seem to be designed around a 64-bit architecture, and I would not be surprised if the front end DMA always fetches from the command buffer in 64-bit quantities.
You mention SET_STATE, but the same applies to NOP, WAIT and all of the other commands for the 2D cores - the command word must always be in the first 32-bits of each 64-bit naturally aligned word in memory.
Given that, my feeling (and it's only a feeling) is that it would be potentially dangerous to allow userspace to pass a buffer which isn't aligned. As you've found, the kernel would have to align the buffer size up to a 64-bit quantity to add the LINK command at the end anyway.
So, I would err on the side of having userspace always do that, and we initially enforce that on the kernel side. If we find that's too strict, we can always relax the kernel side - and we still remain compatible with userspace. If we do the reverse, it's harder for the kernel to become more strict without breaking userspace. Given the "no regressions" rule, that's something we all want to avoid. :)
Yes, seems reasonable. I'll change this to just reject sizes that are not 64bit aligned and change my userspace accordingly.
Thanks, Lucas
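The check agreed on above could look roughly like this (a sketch, not the actual driver code: the function name and the exact error handling are made up for illustration). The FE fetches in 64-bit quantities, so stream sizes that are not a multiple of 8 bytes are rejected outright, and there must still be room for the kernel's trailing LINK command (one 64-bit word):

```c
#include <errno.h>
#include <stdint.h>

static int check_cmd_size(uint32_t size, uint32_t bo_size)
{
	/* Reject sizes that are not 64-bit aligned. */
	if (size & 7)
		return -EINVAL;

	/* Leave 8 bytes at the end of the BO for the LINK command. */
	if (size == 0 || size > bo_size - 8)
		return -EINVAL;

	return 0;
}
```

This follows Russell's reasoning: start strict, since relaxing the check later is compatible with existing userspace, while tightening it later would not be.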
2015-04-02 18:45 GMT+02:00 Russell King - ARM Linux linux@arm.linux.org.uk:
On Thu, Apr 02, 2015 at 06:29:24PM +0200, Lucas Stach wrote:
The start of the commands must always be 64bit aligned, it's the same for all pipes. The size can be dword aligned if the last command in the stream is something like SET_STATE with a length of 2. In that case one needs to insert another padding dword, but as it's the end of the stream userspace may omit that.
So that more a question of policy: do we want userspace to always specify the size including the padding and reject streams with a size not double-dword aligned, or do we just fix it up in the kernel, as it's equally easily done there. I was a bit on the fence here and decided to go the "let the kernel fix things" route.
Without really knowing the hardware, it's hard to say. However, they seem to be designed around a 64-bit architecture, and I would not be surprised if the front end DMA always fetches from the command buffer in 64-bit quantities.
That's what the hardware does.
You mention SET_STATE, but the same applies to NOP, WAIT and all of the other commands for the 2D cores - the command word must always be in the first 32-bits of each 64-bit naturally aligned word in memory.
Yup - see https://github.com/laanwj/etna_viv/blob/master/doc/hardware.md#command-strea...
Given that, my feeling (and it's only a feeling) is that it would be potentially dangerous to allow userspace to pass a buffer which isn't aligned. As you've found, the kernel would have to align the buffer size up to a 64-bit quantity to add the LINK command at the end anyway.
So, I would err on the side of having userspace always do that, and we initially enforce that on the kernel side. If we find that's too strict, we can always relax the kernel side - and we still remain compatible with userspace. If we do the reverse, it's harder for the kernel to become more strict without breaking userspace. Given the "no regressions" rule, that's something we all want to avoid. :)
-- Christian Gmeiner, MSc
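The padding rule agreed on above can be sketched as a tiny userspace helper (the helper name is made up for illustration; it is not part of the driver). The front-end DMA fetches 64-bit quantities, so the stream size, counted in 32-bit dwords, is rounded up to an even number before the kernel appends the LINK command; the extra dword can simply be a NOP.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper: round a command stream size (in 32-bit dwords)
 * up to the next 64-bit boundary, i.e. to an even number of dwords,
 * as the FE DMA always fetches 64-bit quantities. */
static uint32_t etna_pad_cmdstream(uint32_t size_in_dwords)
{
	/* round up to the next multiple of two dwords (one 64-bit word) */
	return (size_in_dwords + 1) & ~1u;
}
```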
These two GPUs seem to be in a transition state between two generations and report a wrong instruction count. Fix it up in the kernel like the Vivante driver does.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
---
 drivers/staging/etnaviv/etnaviv_gpu.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c
index 973b00346c7f..810a9ec04aea 100644
--- a/drivers/staging/etnaviv/etnaviv_gpu.c
+++ b/drivers/staging/etnaviv/etnaviv_gpu.c
@@ -166,7 +166,12 @@ static void etnaviv_hw_specs(struct etnaviv_gpu *gpu)
 
 	switch (gpu->identity.instruction_count) {
 	case 0:
-		gpu->identity.instruction_count = 256;
+		if ((gpu->identity.model == 0x2000 &&
+		     gpu->identity.revision == 0x5108) ||
+		    gpu->identity.model == 0x880)
+			gpu->identity.instruction_count = 512;
+		else
+			gpu->identity.instruction_count = 256;
 		break;
 
 	case 1:
This is taken from the Vivante kernel driver and seems to improve stability and performance.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
---
 drivers/staging/etnaviv/etnaviv_gpu.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c
index 810a9ec04aea..0a6c702621d8 100644
--- a/drivers/staging/etnaviv/etnaviv_gpu.c
+++ b/drivers/staging/etnaviv/etnaviv_gpu.c
@@ -433,6 +433,16 @@ static void etnaviv_gpu_hw_init(struct etnaviv_gpu *gpu)
 		  VIVS_HI_AXI_CONFIG_AWCACHE(2) |
 		  VIVS_HI_AXI_CONFIG_ARCACHE(2));
 
+	/* GC2000 rev 5108 needs a special bus config */
+	if (gpu->identity.model == 0x2000 && gpu->identity.revision == 0x5108) {
+		u32 bus_config = gpu_read(gpu, VIVS_MC_BUS_CONFIG);
+		bus_config &= ~(VIVS_MC_BUS_CONFIG_FE_BUS_CONFIG__MASK |
+				VIVS_MC_BUS_CONFIG_TX_BUS_CONFIG__MASK);
+		bus_config |= VIVS_MC_BUS_CONFIG_FE_BUS_CONFIG(1) |
+			      VIVS_MC_BUS_CONFIG_TX_BUS_CONFIG(0);
+		gpu_write(gpu, VIVS_MC_BUS_CONFIG, bus_config);
+	}
+
 	/* set base addresses */
 	gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_RA, gpu->memory_base);
 	gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_FE, gpu->memory_base);
The intention clearly was to do the same thing for WC and UC buffers, not for cached ones.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
---
 drivers/staging/etnaviv/etnaviv_gem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c
index fc8dcfaf5f21..849d5cbb510c 100644
--- a/drivers/staging/etnaviv/etnaviv_gem.c
+++ b/drivers/staging/etnaviv/etnaviv_gem.c
@@ -32,7 +32,7 @@ static void etnaviv_gem_scatter_map(struct etnaviv_gem_object *etnaviv_obj)
 	 * For non-cached buffers, ensure the new pages are clean
 	 * because display controller, GPU, etc. are not coherent.
	 */
-	if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_CACHED)) {
+	if (etnaviv_obj->flags & (ETNA_BO_WC|ETNA_BO_UNCACHED)) {
 		dma_map_sg(dev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
 		dma_unmap_sg(dev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
 	} else {
On Thu, Apr 02, 2015 at 05:30:32PM +0200, Lucas Stach wrote:
The intention clearly was to do the same thing for WC and UC buffers, not for cached ones.
Err, from one of my previous commits:
staging: etnaviv: fix DMA API usage
We test for write-combine and non-cacheable mappings before calling the DMA API. This is weird, because non-cacheable mappings are DMA coherent by definition, whereas cacheable mappings need cache maintenance provided by the DMA API.
This seems to be a typo: ETNA_BO_CACHED should be used rather than ETNA_BO_UNCACHED.
It's utterly senseless to use the DMA API on uncached mappings.
Am Donnerstag, den 02.04.2015, 17:22 +0100 schrieb Russell King - ARM Linux:
On Thu, Apr 02, 2015 at 05:30:32PM +0200, Lucas Stach wrote:
The intention clearly was to do the same thing for WC and UC buffers, not for cached ones.
Err, from one of my previous commits:
staging: etnaviv: fix DMA API usage

We test for write-combine and non-cacheable mappings before calling the DMA API. This is weird, because non-cacheable mappings are DMA coherent by definition, whereas cacheable mappings need cache maintenance provided by the DMA API.

This seems to be a typo: ETNA_BO_CACHED should be used rather than ETNA_BO_UNCACHED.
It's utterly senseless to use the DMA API on uncached mappings.
Note that this function isn't used to do cache maintenance on buffers while the driver owns them. It is used to clean the cache _before_ we do anything with those buffers.

As we are simply allocating SHM buffers here (and treat them as UC/WC for the userspace mappings) we cannot be sure whether any cache writeback is pending on the newly allocated pages. So this is basically a cache invalidate hidden behind the DMA API. You might treat this as abuse and I'm happy to fix it, but the point here is that we need to do this operation on UC buffers as well. I've certainly seen stray cache writeback corrupting my data in UC buffers otherwise.
Regards, Lucas
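The reasoning in this exchange boils down to a simple predicate, sketched here outside the kernel (the flag values and helper name are invented for illustration and are not the driver's real ETNA_BO_* values or API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative flag values standing in for ETNA_BO_*, not the real ones. */
#define BO_CACHED   (1u << 0)
#define BO_WC       (1u << 1)
#define BO_UNCACHED (1u << 2)

/* Freshly allocated SHM pages may still have dirty cache lines queued
 * for writeback.  For buffers that userspace maps WC or uncached, a
 * stray writeback later would silently corrupt the buffer contents, so
 * those need one initial clean/invalidate pass.  Cached buffers instead
 * get ongoing maintenance through the DMA API around each submit. */
static bool needs_initial_invalidate(uint32_t flags)
{
	return (flags & (BO_WC | BO_UNCACHED)) != 0;
}
```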
Avoids memory corruptions seen due to stale TLB entries.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
---
 drivers/staging/etnaviv/etnaviv_buffer.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/staging/etnaviv/etnaviv_buffer.c b/drivers/staging/etnaviv/etnaviv_buffer.c
index 391b2afc7f59..05e0da28cc97 100644
--- a/drivers/staging/etnaviv/etnaviv_buffer.c
+++ b/drivers/staging/etnaviv/etnaviv_buffer.c
@@ -256,7 +256,10 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event,
 		/* Add the MMU flush */
 		CMD_LOAD_STATE(buffer, VIVS_GL_FLUSH_MMU,
 			       VIVS_GL_FLUSH_MMU_FLUSH_FEMMU |
-			       VIVS_GL_FLUSH_MMU_FLUSH_PEMMU);
+			       VIVS_GL_FLUSH_MMU_FLUSH_UNK1 |
+			       VIVS_GL_FLUSH_MMU_FLUSH_UNK2 |
+			       VIVS_GL_FLUSH_MMU_FLUSH_PEMMU |
+			       VIVS_GL_FLUSH_MMU_FLUSH_UNK4);
 
 		/* And the link to the first buffer */
 		CMD_LINK(buffer, link_size, link_target);
This provides a bit more type safety.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
---
 drivers/staging/etnaviv/etnaviv_gem.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/staging/etnaviv/etnaviv_gem.h b/drivers/staging/etnaviv/etnaviv_gem.h
index cfade337d4db..fadd5198b3e8 100644
--- a/drivers/staging/etnaviv/etnaviv_gem.h
+++ b/drivers/staging/etnaviv/etnaviv_gem.h
@@ -75,7 +75,12 @@ struct etnaviv_gem_object {
 		struct etnaviv_gem_userptr userptr;
 	};
 
-#define to_etnaviv_bo(x) container_of(x, struct etnaviv_gem_object, base)
+
+static inline
+struct etnaviv_gem_object *to_etnaviv_bo(struct drm_gem_object *obj)
+{
+	return container_of(obj, struct etnaviv_gem_object, base);
+}
 
 struct etnaviv_gem_ops {
 	int (*get_pages)(struct etnaviv_gem_object *);
On Thu, Apr 02, 2015 at 05:30:34PM +0200, Lucas Stach wrote:
This provides a bit more type safety.
Signed-off-by: Lucas Stach l.stach@pengutronix.de
drivers/staging/etnaviv/etnaviv_gem.h | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.h b/drivers/staging/etnaviv/etnaviv_gem.h index cfade337d4db..fadd5198b3e8 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.h +++ b/drivers/staging/etnaviv/etnaviv_gem.h @@ -75,7 +75,12 @@ struct etnaviv_gem_object {
struct etnaviv_gem_userptr userptr; }; -#define to_etnaviv_bo(x) container_of(x, struct etnaviv_gem_object, base)
+static inline +struct etnaviv_gem_object *to_etnaviv_bo(struct drm_gem_object *obj) +{
- return container_of(obj, struct etnaviv_gem_object, base);
+}
I've always wondered about patches like this, and wondered how they're supposed to be more type safe.
The only thing which I can see is that the inline function will warn if you pass it a const or volatile pointer, whereas container_of() will only warn if it's passed a volatile pointer. Apart from that, I don't see any difference between the two.
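The difference Russell describes can be demonstrated outside the kernel with a simplified pair of accessors (the struct names here are invented for the demo, and this omits the type check the kernel's container_of performs): the macro compiles silently for almost any pointer, while the inline function lets the compiler type-check its argument, warning e.g. on a const pointer.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified demo structures, not from the driver. */
struct base { int x; };
struct wrapper { int pad; struct base base; };

/* macro version: the argument's pointer type is barely checked */
#define to_wrapper_macro(p) \
	((struct wrapper *)((char *)(p) - offsetof(struct wrapper, base)))

/* inline version: passing anything but a plain struct base *
 * (e.g. a const-qualified pointer) triggers a compiler diagnostic */
static inline struct wrapper *to_wrapper_fn(struct base *obj)
{
	return (struct wrapper *)((char *)obj - offsetof(struct wrapper, base));
}
```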
2015-04-02 18:29 GMT+02:00 Russell King - ARM Linux linux@arm.linux.org.uk:
On Thu, Apr 02, 2015 at 05:30:34PM +0200, Lucas Stach wrote:
This provides a bit more type safety.
Signed-off-by: Lucas Stach l.stach@pengutronix.de
I've always wondered about patches like this, and wondered how they're supposed to be more type safe.
The only thing which I can see is that the inline function will warn if you pass it a const or volatile pointer, whereas container_of() will only warn if it's passed a volatile pointer. Apart from that, I don't see any difference between the two.
Eclipse CDT is happier with functions than macros in some cases. I have no opinion about this patch.
-- Christian Gmeiner, MSc
Other users may only have the gpu pointer available and it's easy to convert from pipe to gpu in the ioctl path.
Signed-off-by: Lucas Stach l.stach@pengutronix.de --- drivers/staging/etnaviv/etnaviv_drv.c | 24 +++++++++++++----------- drivers/staging/etnaviv/etnaviv_drv.h | 5 +++-- 2 files changed, 16 insertions(+), 13 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index 83cab36170f3..30896f9afa1a 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -301,20 +301,13 @@ static void etnaviv_debugfs_cleanup(struct drm_minor *minor) /* * Fences: */ -int etnaviv_wait_fence_interruptable(struct drm_device *dev, uint32_t pipe, - uint32_t fence, struct timespec *timeout) +int etnaviv_wait_fence_interruptable(struct drm_device *dev, + struct etnaviv_gpu *gpu, uint32_t fence, + struct timespec *timeout) { struct etnaviv_drm_private *priv = dev->dev_private; - struct etnaviv_gpu *gpu; int ret;
- if (pipe >= ETNA_MAX_PIPES) - return -EINVAL; - - gpu = priv->gpu[pipe]; - if (!gpu) - return -ENXIO; - if (fence_after(fence, gpu->submitted_fence)) { DRM_ERROR("waiting on invalid fence: %u (of %u)\n", fence, gpu->submitted_fence); @@ -458,9 +451,18 @@ static int etnaviv_ioctl_gem_info(struct drm_device *dev, void *data, static int etnaviv_ioctl_wait_fence(struct drm_device *dev, void *data, struct drm_file *file) { + struct etnaviv_drm_private *priv = dev->dev_private; + struct etnaviv_gpu *gpu; struct drm_etnaviv_wait_fence *args = data;
- return etnaviv_wait_fence_interruptable(dev, args->pipe, args->fence, + if (args->pipe >= ETNA_MAX_PIPES) + return -EINVAL; + + gpu = priv->gpu[args->pipe]; + if (!gpu) + return -ENODEV; + + return etnaviv_wait_fence_interruptable(dev, gpu, args->fence, &TS(args->timeout)); }
diff --git a/drivers/staging/etnaviv/etnaviv_drv.h b/drivers/staging/etnaviv/etnaviv_drv.h index 4c848afab876..5c3250b772cc 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.h +++ b/drivers/staging/etnaviv/etnaviv_drv.h @@ -68,8 +68,9 @@ struct etnaviv_drm_private {
void etnaviv_register_mmu(struct drm_device *dev, struct etnaviv_iommu *mmu);
-int etnaviv_wait_fence_interruptable(struct drm_device *dev, uint32_t pipe, - uint32_t fence, struct timespec *timeout); +int etnaviv_wait_fence_interruptable(struct drm_device *dev, + struct etnaviv_gpu *gpu, uint32_t fence, + struct timespec *timeout); void etnaviv_update_fence(struct drm_device *dev, uint32_t fence);
int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
Allows userspace to properly synchronize with the GPU when accessing buffers.
Signed-off-by: Lucas Stach l.stach@pengutronix.de --- drivers/staging/etnaviv/etnaviv_gem.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 849d5cbb510c..57f3080fb632 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -467,28 +467,28 @@ void etnaviv_gem_move_to_inactive(struct drm_gem_object *obj) int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, struct timespec *timeout) { -/* + struct drm_device *dev = obj->dev; struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); -*/ + int ret = 0; - /* TODO */ -#if 0 + if (is_active(etnaviv_obj)) { uint32_t fence = 0;
- if (op & MSM_PREP_READ) + if (op & ETNA_PREP_READ) fence = etnaviv_obj->write_fence; - if (op & MSM_PREP_WRITE) + if (op & ETNA_PREP_WRITE) fence = max(fence, etnaviv_obj->read_fence); - if (op & MSM_PREP_NOSYNC) + if (op & ETNA_PREP_NOSYNC) timeout = NULL;
- ret = etnaviv_wait_fence_interruptable(dev, fence, timeout); + ret = etnaviv_wait_fence_interruptable(dev, etnaviv_obj->gpu, + fence, timeout); }
/* TODO cache maintenance */ -#endif + return ret; }
2015-04-02 17:30 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
Allows userspace to properly synchronize with the GPU when accessing buffers.
Signed-off-by: Lucas Stach l.stach@pengutronix.de
drivers/staging/etnaviv/etnaviv_gem.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 849d5cbb510c..57f3080fb632 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -467,28 +467,28 @@ void etnaviv_gem_move_to_inactive(struct drm_gem_object *obj) int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, struct timespec *timeout) { -/*
struct drm_device *dev = obj->dev; struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
-*/
int ret = 0;
/* TODO */
-#if 0
if (is_active(etnaviv_obj)) { uint32_t fence = 0;
if (op & MSM_PREP_READ)
if (op & ETNA_PREP_READ) fence = etnaviv_obj->write_fence;
if (op & MSM_PREP_WRITE)
if (op & ETNA_PREP_WRITE) fence = max(fence, etnaviv_obj->read_fence);
if (op & MSM_PREP_NOSYNC)
if (op & ETNA_PREP_NOSYNC) timeout = NULL;
ret = etnaviv_wait_fence_interruptable(dev, fence, timeout);
ret = etnaviv_wait_fence_interruptable(dev, etnaviv_obj->gpu,
fence, timeout); } /* TODO cache maintenance */
-#endif
return ret;
}
-- 2.1.4
looks like a part from https://github.com/austriancoder/linux/commit/0c347857d7eff27834bd82d5485c97...
-- Christian Gmeiner, MSc
Am Sonntag, den 05.04.2015, 20:51 +0200 schrieb Christian Gmeiner:
2015-04-02 17:30 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
Allows userspace to properly synchronize with the GPU when accessing buffers.
Signed-off-by: Lucas Stach l.stach@pengutronix.de
looks like a part from https://github.com/austriancoder/linux/commit/0c347857d7eff27834bd82d5485c97...
Oh, yep. Together with the previous commit this looks like the same thing. I would rather leave it as two commits, so it's logically separated, but would be happy to hand over authorship of the commits to you if you are ok with that.
Regards, Lucas
Hi Lucas,
2015-04-07 9:26 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
Am Sonntag, den 05.04.2015, 20:51 +0200 schrieb Christian Gmeiner:
2015-04-02 17:30 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
Allows userspace to properly synchronize with the GPU when accessing buffers.
Signed-off-by: Lucas Stach l.stach@pengutronix.de
looks like a part from https://github.com/austriancoder/linux/commit/0c347857d7eff27834bd82d5485c97...
Oh, yep. Together with the previous commit this looks like the same thing. I would rather leave it as two commits, so it's logically separated, but would be happy to hand over authorship of the commits to you if you are ok with that.
thanks - I am fine with that.
greets -- Christian Gmeiner, MSc
A single buffer object may be mapped into different address spaces at the same time. For now we only have two different address spaces, for the 3D and 2D pipes, but this may change as soon as we implement per-process page tables.
Allow this by having each buffer object manage a list of all its mappings into the respective address spaces.
Signed-off-by: Lucas Stach l.stach@pengutronix.de --- drivers/staging/etnaviv/etnaviv_gem.c | 44 +++++++++++++++++----- drivers/staging/etnaviv/etnaviv_gem.h | 13 ++++++- drivers/staging/etnaviv/etnaviv_mmu.c | 71 +++++++++++++++++++++-------------- drivers/staging/etnaviv/etnaviv_mmu.h | 8 +++- 4 files changed, 93 insertions(+), 43 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 57f3080fb632..04594dad27e2 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -310,19 +310,25 @@ int etnaviv_gem_get_iova_locked(struct etnaviv_gpu *gpu, struct drm_gem_object *obj, uint32_t *iova) { struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); + struct etnaviv_vram_mapping *mapping = + etnaviv_gem_get_vram_mapping(etnaviv_obj, gpu->mmu); int ret = 0;
- if (!etnaviv_obj->iova && !(etnaviv_obj->flags & ETNA_BO_CMDSTREAM)) { + if (etnaviv_obj->flags & ETNA_BO_CMDSTREAM) { + *iova = etnaviv_obj->paddr; + return 0; + } + + if (!mapping) { struct page **pages = etnaviv_gem_get_pages(etnaviv_obj); if (IS_ERR(pages)) return PTR_ERR(pages); - ret = etnaviv_iommu_map_gem(gpu->mmu, etnaviv_obj, - gpu->memory_base); + gpu->memory_base, &mapping); }
if (!ret) - *iova = etnaviv_obj->iova; + *iova = mapping->iova;
return ret; } @@ -331,13 +337,15 @@ int etnaviv_gem_get_iova(struct etnaviv_gpu *gpu, struct drm_gem_object *obj, int id, uint32_t *iova) { struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); + struct etnaviv_vram_mapping *mapping = + etnaviv_gem_get_vram_mapping(etnaviv_obj, gpu->mmu); int ret;
/* this is safe right now because we don't unmap until the * bo is deleted: */ - if (etnaviv_obj->iova) { - *iova = etnaviv_obj->iova; + if (mapping) { + *iova = mapping->iova; return 0; }
@@ -546,11 +554,12 @@ static const struct etnaviv_gem_ops etnaviv_gem_cmd_ops = { static void etnaviv_free_obj(struct drm_gem_object *obj) { struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); - struct etnaviv_drm_private *priv = obj->dev->dev_private; - struct etnaviv_iommu *mmu = priv->mmu; + struct etnaviv_vram_mapping *mapping, *tmp;
- if (mmu) - etnaviv_iommu_unmap_gem(mmu, etnaviv_obj); + list_for_each_entry_safe(mapping, tmp, &etnaviv_obj->vram_list, + obj_head) { + etnaviv_iommu_unmap_gem(mapping->mmu, etnaviv_obj, mapping); + } }
static void etnaviv_gem_shmem_release(struct etnaviv_gem_object *etnaviv_obj) @@ -665,6 +674,7 @@ static int etnaviv_gem_new_impl(struct drm_device *dev, reservation_object_init(&etnaviv_obj->_resv);
INIT_LIST_HEAD(&etnaviv_obj->submit_entry); + INIT_LIST_HEAD(&etnaviv_obj->vram_list); list_add_tail(&etnaviv_obj->mm_list, &priv->inactive_list);
*obj = &etnaviv_obj->base; @@ -724,6 +734,20 @@ int etnaviv_gem_new_private(struct drm_device *dev, size_t size, uint32_t flags, return 0; }
+struct etnaviv_vram_mapping * +etnaviv_gem_get_vram_mapping(struct etnaviv_gem_object *obj, + struct etnaviv_iommu *mmu) +{ + struct etnaviv_vram_mapping *mapping; + + list_for_each_entry(mapping, &obj->vram_list, obj_head) { + if (mapping->mmu == mmu) + return mapping; + } + + return NULL; +} + struct get_pages_work { struct work_struct work; struct mm_struct *mm; diff --git a/drivers/staging/etnaviv/etnaviv_gem.h b/drivers/staging/etnaviv/etnaviv_gem.h index fadd5198b3e8..b0e5e968912a 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.h +++ b/drivers/staging/etnaviv/etnaviv_gem.h @@ -30,6 +30,13 @@ struct etnaviv_gem_userptr { bool ro; };
+struct etnaviv_vram_mapping { + struct list_head obj_head; + struct etnaviv_iommu *mmu; + struct drm_mm_node vram_node; + uint32_t iova; +}; + struct etnaviv_gem_object { struct drm_gem_object base; const struct etnaviv_gem_ops *ops; @@ -59,7 +66,6 @@ struct etnaviv_gem_object { struct page **pages; struct sg_table *sgt; void *vaddr; - uint32_t iova;
/* for ETNA_BO_CMDSTREAM */ dma_addr_t paddr; @@ -68,7 +74,7 @@ struct etnaviv_gem_object { struct reservation_object *resv; struct reservation_object _resv;
- struct drm_mm_node *gpu_vram_node; + struct list_head vram_list;
/* for buffer manipulation during submit */ u32 offset; @@ -120,6 +126,9 @@ struct etnaviv_gem_submit { } bos[0]; };
+struct etnaviv_vram_mapping * +etnaviv_gem_get_vram_mapping(struct etnaviv_gem_object *obj, + struct etnaviv_iommu *mmu); int etnaviv_gem_new_private(struct drm_device *dev, size_t size, uint32_t flags, struct etnaviv_gem_object **res); struct page **etnaviv_gem_get_pages(struct etnaviv_gem_object *obj); diff --git a/drivers/staging/etnaviv/etnaviv_mmu.c b/drivers/staging/etnaviv/etnaviv_mmu.c index 897356d08e30..4bcc3eabb3a1 100644 --- a/drivers/staging/etnaviv/etnaviv_mmu.c +++ b/drivers/staging/etnaviv/etnaviv_mmu.c @@ -92,28 +92,38 @@ int etnaviv_iommu_unmap(struct etnaviv_iommu *iommu, uint32_t iova, }
int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, - struct etnaviv_gem_object *etnaviv_obj, uint32_t memory_base) + struct etnaviv_gem_object *etnaviv_obj, uint32_t memory_base, + struct etnaviv_vram_mapping **out_mapping) { struct etnaviv_drm_private *priv = etnaviv_obj->base.dev->dev_private; struct sg_table *sgt = etnaviv_obj->sgt; + struct etnaviv_vram_mapping *mapping, *free = NULL; struct drm_mm_node *node; int ret;
+ mapping = kzalloc(sizeof(*mapping), GFP_KERNEL); + if (!mapping) + return -ENOMEM; + + INIT_LIST_HEAD(&mapping->obj_head); + mapping->mmu = mmu; + /* v1 MMU can optimize single entry (contiguous) scatterlists */ if (sgt->nents == 1) { uint32_t iova;
iova = sg_dma_address(sgt->sgl) - memory_base; if (iova < 0x80000000 - sg_dma_len(sgt->sgl)) { - etnaviv_obj->iova = iova; + mapping->iova = iova; + list_add_tail(&mapping->obj_head, + &etnaviv_obj->vram_list); + if (out_mapping) + *out_mapping = mapping; return 0; } }
- node = kzalloc(sizeof(*node), GFP_KERNEL); - if (!node) - return -ENOMEM; - + node = &mapping->vram_node; while (1) { struct etnaviv_gem_object *o, *n; struct list_head list; @@ -142,8 +152,8 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, found = 0; INIT_LIST_HEAD(&list); list_for_each_entry(o, &priv->inactive_list, mm_list) { - if (!o->gpu_vram_node || - o->gpu_vram_node->mm != &mmu->mm) + free = etnaviv_gem_get_vram_mapping(o, mmu); + if (!free) continue;
/* @@ -154,7 +164,7 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, continue;
list_add(&o->submit_entry, &list); - if (drm_mm_scan_add_block(o->gpu_vram_node)) { + if (drm_mm_scan_add_block(&free->vram_node)) { found = true; break; } @@ -163,7 +173,7 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, if (!found) { /* Nothing found, clean up and fail */ list_for_each_entry_safe(o, n, &list, submit_entry) - BUG_ON(drm_mm_scan_remove_block(o->gpu_vram_node)); + BUG_ON(drm_mm_scan_remove_block(&free->vram_node)); break; }
@@ -174,12 +184,12 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, * can leave the block pinned. */ list_for_each_entry_safe(o, n, &list, submit_entry) - if (!drm_mm_scan_remove_block(o->gpu_vram_node)) + if (!drm_mm_scan_remove_block(&free->vram_node)) list_del_init(&o->submit_entry);
list_for_each_entry_safe(o, n, &list, submit_entry) { list_del_init(&o->submit_entry); - etnaviv_iommu_unmap_gem(mmu, o); + etnaviv_iommu_unmap_gem(mmu, o, free); }
/* @@ -191,40 +201,43 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, }
if (ret < 0) { - kfree(node); + kfree(mapping); return ret; }
mmu->last_iova = node->start + etnaviv_obj->base.size; - etnaviv_obj->iova = node->start; - etnaviv_obj->gpu_vram_node = node; + mapping->iova = node->start; ret = etnaviv_iommu_map(mmu, node->start, sgt, etnaviv_obj->base.size, IOMMU_READ | IOMMU_WRITE);
if (ret < 0) { drm_mm_remove_node(node); - kfree(node); - - etnaviv_obj->iova = 0; - etnaviv_obj->gpu_vram_node = NULL; + kfree(mapping); + return ret; }
+ list_add_tail(&mapping->obj_head, &etnaviv_obj->vram_list); + if (out_mapping) + *out_mapping = mapping; + return ret; }
void etnaviv_iommu_unmap_gem(struct etnaviv_iommu *mmu, - struct etnaviv_gem_object *etnaviv_obj) + struct etnaviv_gem_object *etnaviv_obj, + struct etnaviv_vram_mapping *mapping) { - if (etnaviv_obj->gpu_vram_node) { - uint32_t offset = etnaviv_obj->gpu_vram_node->start; + if (mapping) { + uint32_t offset = mapping->vram_node.start;
- etnaviv_iommu_unmap(mmu, offset, etnaviv_obj->sgt, - etnaviv_obj->base.size); - drm_mm_remove_node(etnaviv_obj->gpu_vram_node); - kfree(etnaviv_obj->gpu_vram_node); - - etnaviv_obj->gpu_vram_node = NULL; - etnaviv_obj->iova = 0; + if (mapping->iova >= 0x80000000) { + etnaviv_iommu_unmap(mmu, offset, etnaviv_obj->sgt, + etnaviv_obj->base.size); + drm_mm_remove_node(&mapping->vram_node); + } + list_del(&mapping->obj_head); + kfree(mapping); + mapping = NULL; } }
diff --git a/drivers/staging/etnaviv/etnaviv_mmu.h b/drivers/staging/etnaviv/etnaviv_mmu.h index 29881e27dc7e..9a4b493015f6 100644 --- a/drivers/staging/etnaviv/etnaviv_mmu.h +++ b/drivers/staging/etnaviv/etnaviv_mmu.h @@ -20,6 +20,8 @@
#include <linux/iommu.h>
+struct etnaviv_vram_mapping; + struct etnaviv_iommu { struct drm_device *dev; struct iommu_domain *domain; @@ -39,9 +41,11 @@ int etnaviv_iommu_map(struct etnaviv_iommu *iommu, uint32_t iova, int etnaviv_iommu_unmap(struct etnaviv_iommu *iommu, uint32_t iova, struct sg_table *sgt, unsigned len); int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu, - struct etnaviv_gem_object *etnaviv_obj, uint32_t memory_base); + struct etnaviv_gem_object *etnaviv_obj, uint32_t memory_base, + struct etnaviv_vram_mapping **mapping); void etnaviv_iommu_unmap_gem(struct etnaviv_iommu *mmu, - struct etnaviv_gem_object *etnaviv_obj); + struct etnaviv_gem_object *etnaviv_obj, + struct etnaviv_vram_mapping *mapping); void etnaviv_iommu_destroy(struct etnaviv_iommu *iommu);
struct etnaviv_iommu *etnaviv_iommu_new(struct drm_device *dev,
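The per-address-space lookup this patch introduces can be sketched in isolation (a plain singly linked list stands in for the kernel's list_head, and all names here are simplified stand-ins): each BO keeps one mapping entry per MMU it is mapped into, and the lookup walks that list, as etnaviv_gem_get_vram_mapping() does above.

```c
#include <assert.h>
#include <stddef.h>

/* simplified stand-ins for the kernel structures */
struct mmu { int id; };

struct vram_mapping {
	struct vram_mapping *next;   /* obj_head reduced to a singly linked list */
	struct mmu *mmu;             /* address space this mapping belongs to */
	unsigned int iova;           /* GPU virtual address in that space */
};

/* Return the buffer object's mapping in the given address space,
 * or NULL if the object is not mapped there yet. */
static struct vram_mapping *get_vram_mapping(struct vram_mapping *head,
					     struct mmu *mmu)
{
	for (struct vram_mapping *m = head; m; m = m->next)
		if (m->mmu == mmu)
			return m;
	return NULL;
}
```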
Each pipe has its own MMU, so there is no point in pretending to have a single one at the DRM driver level. All MMU management has to happen on a per-pipe level.
Signed-off-by: Lucas Stach l.stach@pengutronix.de --- drivers/staging/etnaviv/etnaviv_drv.c | 7 ------- drivers/staging/etnaviv/etnaviv_drv.h | 5 ----- drivers/staging/etnaviv/etnaviv_gpu.c | 1 - 3 files changed, 13 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index 30896f9afa1a..25c64319ab34 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -22,13 +22,6 @@ #include "etnaviv_gpu.h" #include "etnaviv_mmu.h"
-void etnaviv_register_mmu(struct drm_device *dev, struct etnaviv_iommu *mmu) -{ - struct etnaviv_drm_private *priv = dev->dev_private; - - priv->mmu = mmu; -} - #ifdef CONFIG_DRM_ETNAVIV_REGISTER_LOGGING static bool reglog; MODULE_PARM_DESC(reglog, "Enable register read/write logging"); diff --git a/drivers/staging/etnaviv/etnaviv_drv.h b/drivers/staging/etnaviv/etnaviv_drv.h index 5c3250b772cc..cf7e6f758dd7 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.h +++ b/drivers/staging/etnaviv/etnaviv_drv.h @@ -61,13 +61,8 @@ struct etnaviv_drm_private { struct list_head inactive_list;
struct workqueue_struct *wq; - - /* registered MMUs: */ - struct etnaviv_iommu *mmu; };
-void etnaviv_register_mmu(struct drm_device *dev, struct etnaviv_iommu *mmu); - int etnaviv_wait_fence_interruptable(struct drm_device *dev, struct etnaviv_gpu *gpu, uint32_t fence, struct timespec *timeout); diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 0a6c702621d8..78955055d2eb 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -508,7 +508,6 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu) ret = -ENOMEM; goto fail; } - etnaviv_register_mmu(gpu->drm, gpu->mmu);
/* Create buffer: */ gpu->buffer = etnaviv_gem_new(gpu->drm, PAGE_SIZE, ETNA_BO_CMDSTREAM);
The MMU is per GPU (pipe), rather than per DRM device, so it makes a lot more sense to have the MMU hang off the GPU device instead of the DRM device.
Signed-off-by: Lucas Stach l.stach@pengutronix.de --- drivers/staging/etnaviv/etnaviv_gpu.c | 2 +- drivers/staging/etnaviv/etnaviv_mmu.c | 2 +- drivers/staging/etnaviv/etnaviv_mmu.h | 4 ++-- 3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c
index 78955055d2eb..8221df820824 100644
--- a/drivers/staging/etnaviv/etnaviv_gpu.c
+++ b/drivers/staging/etnaviv/etnaviv_gpu.c
@@ -503,7 +503,7 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
 
 	/* TODO: we will leak here memory - fix it! */
 
-	gpu->mmu = etnaviv_iommu_new(gpu->drm, iommu);
+	gpu->mmu = etnaviv_iommu_new(gpu->dev, iommu);
 	if (!gpu->mmu) {
 		ret = -ENOMEM;
 		goto fail;
diff --git a/drivers/staging/etnaviv/etnaviv_mmu.c b/drivers/staging/etnaviv/etnaviv_mmu.c
index 4bcc3eabb3a1..f2a9f7c049e4 100644
--- a/drivers/staging/etnaviv/etnaviv_mmu.c
+++ b/drivers/staging/etnaviv/etnaviv_mmu.c
@@ -248,7 +248,7 @@ void etnaviv_iommu_destroy(struct etnaviv_iommu *mmu)
 	kfree(mmu);
 }
 
-struct etnaviv_iommu *etnaviv_iommu_new(struct drm_device *dev,
+struct etnaviv_iommu *etnaviv_iommu_new(struct device *dev,
 	struct iommu_domain *domain)
 {
 	struct etnaviv_iommu *mmu;
diff --git a/drivers/staging/etnaviv/etnaviv_mmu.h b/drivers/staging/etnaviv/etnaviv_mmu.h
index 9a4b493015f6..ca509441c76c 100644
--- a/drivers/staging/etnaviv/etnaviv_mmu.h
+++ b/drivers/staging/etnaviv/etnaviv_mmu.h
@@ -23,7 +23,7 @@ struct etnaviv_vram_mapping;
 
 struct etnaviv_iommu {
-	struct drm_device *dev;
+	struct device *dev;
 	struct iommu_domain *domain;
 
 	/* memory manager for GPU address area */
@@ -48,7 +48,7 @@ void etnaviv_iommu_unmap_gem(struct etnaviv_iommu *mmu,
 	struct etnaviv_vram_mapping *mapping);
 void etnaviv_iommu_destroy(struct etnaviv_iommu *iommu);
 
-struct etnaviv_iommu *etnaviv_iommu_new(struct drm_device *dev,
+struct etnaviv_iommu *etnaviv_iommu_new(struct device *dev,
 	struct iommu_domain *domain);
 
 #endif /* __ETNAVIV_MMU_H__ */
The MMU needs to be flushed when changing the render context to get rid of stale TLB entries left behind by the last context.
While we do not support context switching between different processes yet this commit fixes memory corruptions seen when executing different 3D applications one after another.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
---
 drivers/staging/etnaviv/etnaviv_gpu.c | 3 +++
 1 file changed, 3 insertions(+)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c
index 8221df820824..d7025101e929 100644
--- a/drivers/staging/etnaviv/etnaviv_gpu.c
+++ b/drivers/staging/etnaviv/etnaviv_gpu.c
@@ -878,6 +878,9 @@ int etnaviv_gpu_submit(struct etnaviv_gpu *gpu,
 
 	gpu->submitted_fence = submit->fence;
 
+	if (priv->lastctx != ctx)
+		gpu->mmu->need_flush = true;
+
 	etnaviv_buffer_queue(gpu, event, submit);
 
 	priv->lastctx = ctx;
At least the GC2000 I'm testing with seems to have a bug in that all vertex streams have to be mapped either through the MMU or without it. Mixing both mapping types in a single draw command results in corrupted vertex data.
As we can not guarantee that a buffer is mappable without the MMU, all vertex buffers need to go through the MMU. As userspace knows at allocation time which buffers may be used as vertex buffers, this adds a flag for userspace to specify in this situation.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
---
 drivers/staging/etnaviv/etnaviv_mmu.c | 2 +-
 include/uapi/drm/etnaviv_drm.h        | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/etnaviv/etnaviv_mmu.c b/drivers/staging/etnaviv/etnaviv_mmu.c
index f2a9f7c049e4..a59d27a2adfe 100644
--- a/drivers/staging/etnaviv/etnaviv_mmu.c
+++ b/drivers/staging/etnaviv/etnaviv_mmu.c
@@ -109,7 +109,7 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu *mmu,
 	mapping->mmu = mmu;
 
 	/* v1 MMU can optimize single entry (contiguous) scatterlists */
-	if (sgt->nents == 1) {
+	if (sgt->nents == 1 && !(etnaviv_obj->flags & ETNA_BO_FORCE_MMU)) {
 		uint32_t iova;
 
 		iova = sg_dma_address(sgt->sgl) - memory_base;
diff --git a/include/uapi/drm/etnaviv_drm.h b/include/uapi/drm/etnaviv_drm.h
index a4c109ffbea4..52c6989ad93f 100644
--- a/include/uapi/drm/etnaviv_drm.h
+++ b/include/uapi/drm/etnaviv_drm.h
@@ -86,6 +86,8 @@ struct drm_etnaviv_param {
 #define ETNA_BO_CACHED       0x00010000
 #define ETNA_BO_WC           0x00020000
 #define ETNA_BO_UNCACHED     0x00040000
+/* map flags */
+#define ETNA_BO_FORCE_MMU    0x00100000
 
 struct drm_etnaviv_gem_new {
 	uint64_t size;           /* in */
The GPU cores are possibly scattered in the SoC address space, so the current abstraction of having a parent node for the master device and the cores as child nodes doesn't fit too well.
Instead take the same approach as with imx-drm to have a logical master node that refers to the other components by a phandle, so those can be placed under their real parent buses in the DT.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
---
 drivers/staging/etnaviv/etnaviv_drv.c | 33 +++++++++++++++++----------------
 1 file changed, 17 insertions(+), 16 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c
index 25c64319ab34..eade6010ce42 100644
--- a/drivers/staging/etnaviv/etnaviv_drv.c
+++ b/drivers/staging/etnaviv/etnaviv_drv.c
@@ -563,14 +563,6 @@ static struct drm_driver etnaviv_drm_driver = {
 /*
  * Platform driver:
  */
-
-static int etnaviv_compare(struct device *dev, void *data)
-{
-	struct device_node *np = data;
-
-	return dev->of_node == np;
-}
-
 static int etnaviv_bind(struct device *dev)
 {
 	return drm_platform_init(&etnaviv_drm_driver, to_platform_device(dev));
@@ -586,6 +578,13 @@ static const struct component_master_ops etnaviv_master_ops = {
 	.unbind = etnaviv_unbind,
 };
 
+static int compare_of(struct device *dev, void *data)
+{
+	struct device_node *np = data;
+
+	return dev->of_node == np;
+}
+
 static int compare_str(struct device *dev, void *data)
 {
 	return !strcmp(dev_name(dev), data);
@@ -600,15 +599,17 @@ static int etnaviv_pdev_probe(struct platform_device *pdev)
 	dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
 
 	if (node) {
-		struct device_node *child_np;
-
-		of_platform_populate(node, NULL, NULL, dev);
+		struct device_node *core_node;
+		int i;
 
-		for_each_available_child_of_node(node, child_np) {
-			DRM_INFO("add child %s\n", child_np->name);
+		for (i = 0; ; i++) {
+			core_node = of_parse_phandle(node, "cores", i);
+			if (!core_node)
+				break;
 
-			component_match_add(dev, &match, etnaviv_compare,
-				child_np);
+			component_match_add(&pdev->dev, &match, compare_of,
+					    core_node);
+			of_node_put(core_node);
 		}
 	} else if (dev->platform_data) {
 		char **names = dev->platform_data;
@@ -629,7 +630,7 @@ static int etnaviv_pdev_remove(struct platform_device *pdev)
 }
 
 static const struct of_device_id dt_match[] = {
-	{ .compatible = "vivante,gccore" },
+	{ .compatible = "fsl,imx-gpu-subsystem" },
 	{}
 };
 MODULE_DEVICE_TABLE(of, dt_match);
The platform should specify the appropriate IRQ flags and it's a really bad idea to override them in individual drivers.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
---
 drivers/staging/etnaviv/etnaviv_gpu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c
index d7025101e929..df393b9ecbf8 100644
--- a/drivers/staging/etnaviv/etnaviv_gpu.c
+++ b/drivers/staging/etnaviv/etnaviv_gpu.c
@@ -1178,8 +1178,8 @@ static int etnaviv_gpu_platform_probe(struct platform_device *pdev)
 		goto fail;
 	}
 
-	err = devm_request_irq(&pdev->dev, gpu->irq, irq_handler,
-			IRQF_TRIGGER_HIGH, dev_name(gpu->dev), gpu);
+	err = devm_request_irq(&pdev->dev, gpu->irq, irq_handler, 0,
+			       dev_name(gpu->dev), gpu);
 	if (err) {
 		dev_err(dev, "failed to request IRQ%u: %d\n", gpu->irq, err);
 		goto fail;
While this isn't the case on i.MX6, a single GPU pipe can have multiple rendering backend states, which can be selected by the pipe switch command, so there is no strict mapping between the user "pipes" and the PIPE_2D/PIPE_3D execution states.
We need to respect this in the public userspace API. The exported pipes are the GC cores which may be able to support one or more execution states. The information which execution states are supported on a given pipe is available to userspace through the features0 param.
Userspace is responsible for choosing a pipe that matches the requirements of a specific task.
The submit ioctl now takes one more parameter for userspace to specify which execution state it expects a pipe to be in when starting execution of the command buffers. This allows the kernel to insert pipe switch commands only when really needed while maintaining separation between processes by making sure that no process can leave the pipe behind in an unexpected state, potentially hanging the next one to use the GPU.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
---
 drivers/staging/etnaviv/etnaviv_buffer.c     | 57 +++++++++++++++++-----------
 drivers/staging/etnaviv/etnaviv_drv.c        |  1 +
 drivers/staging/etnaviv/etnaviv_drv.h        |  1 +
 drivers/staging/etnaviv/etnaviv_gem.h        |  1 +
 drivers/staging/etnaviv/etnaviv_gem_submit.c |  1 +
 drivers/staging/etnaviv/etnaviv_gpu.c        | 48 ++++------------------
 drivers/staging/etnaviv/etnaviv_gpu.h        |  2 +-
 include/uapi/drm/etnaviv_drm.h               | 19 +++++-----
 8 files changed, 57 insertions(+), 73 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_buffer.c b/drivers/staging/etnaviv/etnaviv_buffer.c
index 05e0da28cc97..7c8014f07249 100644
--- a/drivers/staging/etnaviv/etnaviv_buffer.c
+++ b/drivers/staging/etnaviv/etnaviv_buffer.c
@@ -154,8 +154,6 @@ u32 etnaviv_buffer_init(struct etnaviv_gpu *gpu)
 	/* initialize buffer */
 	buffer->offset = 0;
 
-	etnaviv_cmd_select_pipe(buffer, gpu->pipe);
-
 	CMD_WAIT(buffer);
 	CMD_LINK(buffer, 2, gpu_va(gpu, buffer) + ((buffer->offset - 1) * 4));
 
@@ -179,21 +177,29 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event,
 	struct etnaviv_gem_object *buffer = to_etnaviv_bo(gpu->buffer);
 	struct etnaviv_gem_object *cmd;
 	u32 *lw = buffer->vaddr + ((buffer->offset - 4) * 4);
-	u32 back, link_target, link_size, reserve_size;
+	u32 back, link_target, link_size, reserve_size, extra_size = 0;
 	u32 i;
 
 	if (drm_debug & DRM_UT_DRIVER)
 		etnaviv_buffer_dump(gpu, buffer, 0, 0x50);
 
-	reserve_size = 6;
-
 	/*
 	 * If we need to flush the MMU prior to submitting this buffer, we
 	 * will need to append a mmu flush load state, followed by a new
 	 * link to this buffer - a total of four additional words.
 	 */
-	if (gpu->mmu->need_flush)
-		reserve_size += 4;
+	if (gpu->mmu->need_flush || gpu->switch_context) {
+		/* link command */
+		extra_size += 2;
+		/* flush command */
+		if (gpu->mmu->need_flush)
+			extra_size += 2;
+		/* pipe switch commands */
+		if (gpu->switch_context)
+			extra_size += 8;
+	}
+
+	reserve_size = 6 + extra_size;
 
 	/*
 	 * if we are going to completely overflow the buffer, we need to wrap.
@@ -207,10 +213,8 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event,
 	link_target = gpu_va(gpu, buffer) + buffer->offset * 4;
 	link_size = 6;
 
-	if (gpu->mmu->need_flush) {
-		/* Skip over the MMU flush and LINK instructions */
-		link_target += 4 * sizeof(uint32_t);
-	}
+	/* Skip over any extra instructions */
+	link_target += extra_size * sizeof(uint32_t);
 
 	/* update offset for every cmd stream */
 	for (i = submit->nr_cmds; i--; ) {
@@ -249,26 +253,33 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, unsigned int event,
 		pr_info("event: %d\n", event);
 	}
 
-	if (gpu->mmu->need_flush) {
+	if (gpu->mmu->need_flush || gpu->switch_context) {
 		uint32_t new_target = gpu_va(gpu, buffer) + buffer->offset *
 					sizeof(uint32_t);
 
-		/* Add the MMU flush */
-		CMD_LOAD_STATE(buffer, VIVS_GL_FLUSH_MMU,
-			       VIVS_GL_FLUSH_MMU_FLUSH_FEMMU |
-			       VIVS_GL_FLUSH_MMU_FLUSH_UNK1 |
-			       VIVS_GL_FLUSH_MMU_FLUSH_UNK2 |
-			       VIVS_GL_FLUSH_MMU_FLUSH_PEMMU |
-			       VIVS_GL_FLUSH_MMU_FLUSH_UNK4);
+		if (gpu->mmu->need_flush) {
+			/* Add the MMU flush */
+			CMD_LOAD_STATE(buffer, VIVS_GL_FLUSH_MMU,
+				       VIVS_GL_FLUSH_MMU_FLUSH_FEMMU |
+				       VIVS_GL_FLUSH_MMU_FLUSH_UNK1 |
+				       VIVS_GL_FLUSH_MMU_FLUSH_UNK2 |
+				       VIVS_GL_FLUSH_MMU_FLUSH_PEMMU |
+				       VIVS_GL_FLUSH_MMU_FLUSH_UNK4);
+
+			gpu->mmu->need_flush = false;
+		}
+
+		if (gpu->switch_context) {
+			etnaviv_cmd_select_pipe(buffer, submit->exec_state);
+			gpu->switch_context = false;
+		}
 
 		/* And the link to the first buffer */
 		CMD_LINK(buffer, link_size, link_target);
 
-		/* Update the link target to point to the flush */
+		/* Update the link target to point to above instructions */
 		link_target = new_target;
-		link_size = 4;
-
-		gpu->mmu->need_flush = false;
+		link_size = extra_size;
 	}
 	/* Save the event and buffer position of the new event trigger */
diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c
index eade6010ce42..828ed8ce347f 100644
--- a/drivers/staging/etnaviv/etnaviv_drv.c
+++ b/drivers/staging/etnaviv/etnaviv_drv.c
@@ -134,6 +134,7 @@ static int etnaviv_load(struct drm_device *dev, unsigned long flags)
 	init_waitqueue_head(&priv->fence_event);
 
 	INIT_LIST_HEAD(&priv->inactive_list);
+	priv->num_gpus = 0;
 
 	platform_set_drvdata(pdev, dev);
 
diff --git a/drivers/staging/etnaviv/etnaviv_drv.h b/drivers/staging/etnaviv/etnaviv_drv.h
index cf7e6f758dd7..4dfcd03c80ef 100644
--- a/drivers/staging/etnaviv/etnaviv_drv.h
+++ b/drivers/staging/etnaviv/etnaviv_drv.h
@@ -51,6 +51,7 @@ struct etnaviv_file_private {
 };
 
 struct etnaviv_drm_private {
+	int num_gpus;
 	struct etnaviv_gpu *gpu[ETNA_MAX_PIPES];
 	struct etnaviv_file_private *lastctx;
 
diff --git a/drivers/staging/etnaviv/etnaviv_gem.h b/drivers/staging/etnaviv/etnaviv_gem.h
index b0e5e968912a..6e0822674c8e 100644
--- a/drivers/staging/etnaviv/etnaviv_gem.h
+++ b/drivers/staging/etnaviv/etnaviv_gem.h
@@ -108,6 +108,7 @@ static inline bool is_active(struct etnaviv_gem_object *etnaviv_obj)
 struct etnaviv_gem_submit {
 	struct drm_device *dev;
 	struct etnaviv_gpu *gpu;
+	uint32_t exec_state;
 	struct list_head bo_list;
 	struct ww_acquire_ctx ticket;
 	uint32_t fence;
diff --git a/drivers/staging/etnaviv/etnaviv_gem_submit.c b/drivers/staging/etnaviv/etnaviv_gem_submit.c
index 965096be5219..9061f5f7ecc6 100644
--- a/drivers/staging/etnaviv/etnaviv_gem_submit.c
+++ b/drivers/staging/etnaviv/etnaviv_gem_submit.c
@@ -328,6 +328,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
 		ret = -ENOMEM;
 		goto out;
 	}
+	submit->exec_state = args->exec_state;
 	ret = submit_lookup_objects(submit, args, file);
 	if (ret)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c
index df393b9ecbf8..abadfecb447d 100644
--- a/drivers/staging/etnaviv/etnaviv_gpu.c
+++ b/drivers/staging/etnaviv/etnaviv_gpu.c
@@ -27,7 +27,7 @@
 #include "cmdstream.xml.h"
 
 static const struct platform_device_id gpu_ids[] = {
-	{ .name = "etnaviv-gpu,2d", .driver_data = ETNA_PIPE_2D, },
+	{ .name = "etnaviv-gpu,2d" },
 	{ },
 };
 
@@ -878,8 +878,10 @@ int etnaviv_gpu_submit(struct etnaviv_gpu *gpu,
 
 	gpu->submitted_fence = submit->fence;
 
-	if (priv->lastctx != ctx)
+	if (priv->lastctx != ctx) {
 		gpu->mmu->need_flush = true;
+		gpu->switch_context = true;
+	}
 
 	etnaviv_buffer_queue(gpu, event, submit);
 
@@ -1041,21 +1043,8 @@ static int etnaviv_gpu_bind(struct device *dev, struct device *master,
 	struct drm_device *drm = data;
 	struct etnaviv_drm_private *priv = drm->dev_private;
 	struct etnaviv_gpu *gpu = dev_get_drvdata(dev);
-	int idx = gpu->pipe;
 	int ret;
 
-	dev_info(dev, "pre gpu[idx]: %p\n", priv->gpu[idx]);
-
-	if (priv->gpu[idx] == NULL) {
-		dev_info(dev, "adding core @idx %d\n", idx);
-		priv->gpu[idx] = gpu;
-	} else {
-		dev_err(dev, "failed to add core @idx %d\n", idx);
-		goto fail;
-	}
-
-	dev_info(dev, "post gpu[idx]: %p\n", priv->gpu[idx]);
-
 #ifdef CONFIG_PM
 	ret = pm_runtime_get_sync(gpu->dev);
 #else
@@ -1073,12 +1062,12 @@ static int etnaviv_gpu_bind(struct device *dev, struct device *master,
 	setup_timer(&gpu->hangcheck_timer, hangcheck_handler,
 			(unsigned long)gpu);
 
+	priv->gpu[priv->num_gpus++] = gpu;
+
 	pm_runtime_mark_last_busy(gpu->dev);
 	pm_runtime_put_autosuspend(gpu->dev);
 
 	return 0;
-fail:
-	return -1;
 }
 
 static void etnaviv_gpu_unbind(struct device *dev, struct device *master,
@@ -1119,23 +1108,13 @@ static const struct component_ops gpu_ops = {
 
 static const struct of_device_id etnaviv_gpu_match[] = {
 	{
-		.compatible = "vivante,vivante-gpu-2d",
-		.data = (void *)ETNA_PIPE_2D
+		.compatible = "vivante,gc"
 	},
-	{
-		.compatible = "vivante,vivante-gpu-3d",
-		.data = (void *)ETNA_PIPE_3D
-	},
-	{
-		.compatible = "vivante,vivante-gpu-vg",
-		.data = (void *)ETNA_PIPE_VG
-	},
-	{ }
+	{ /* sentinel */ }
 };
 
 static int etnaviv_gpu_platform_probe(struct platform_device *pdev)
 {
-	const struct of_device_id *match;
 	struct device *dev = &pdev->dev;
 	struct etnaviv_gpu *gpu;
 	int err = 0;
@@ -1144,17 +1123,6 @@ static int etnaviv_gpu_platform_probe(struct platform_device *pdev)
 	if (!gpu)
 		return -ENOMEM;
 
-	if (pdev->dev.of_node) {
-		match = of_match_device(etnaviv_gpu_match, &pdev->dev);
-		if (!match)
-			return -EINVAL;
-		gpu->pipe = (int)match->data;
-	} else if (pdev->id_entry) {
-		gpu->pipe = pdev->id_entry->driver_data;
-	} else {
-		return -EINVAL;
-	}
-
 	gpu->dev = &pdev->dev;
 
 	/*
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.h b/drivers/staging/etnaviv/etnaviv_gpu.h
index 9feab07da457..9465f7f56cdf 100644
--- a/drivers/staging/etnaviv/etnaviv_gpu.h
+++ b/drivers/staging/etnaviv/etnaviv_gpu.h
@@ -88,8 +88,8 @@ struct etnaviv_gpu {
 	struct drm_device *drm;
 	struct device *dev;
 	struct etnaviv_chip_identity identity;
-	int pipe;
 	bool initialized;
+	bool switch_context;
 
 	/* 'ring'-buffer: */
 	struct drm_gem_object *buffer;
diff --git a/include/uapi/drm/etnaviv_drm.h b/include/uapi/drm/etnaviv_drm.h
index 52c6989ad93f..dfd51fcd56d6 100644
--- a/include/uapi/drm/etnaviv_drm.h
+++ b/include/uapi/drm/etnaviv_drm.h
@@ -34,12 +34,6 @@
  * fields.. so that has to be somehow ok.
  */
 
-#define ETNA_PIPE_3D 0x00
-#define ETNA_PIPE_2D 0x01
-#define ETNA_PIPE_VG 0x02
-
-#define ETNA_MAX_PIPES 3
-
 /* timeouts are specified in clock-monotonic absolute times (to simplify
  * restarting interrupted ioctls).  The following struct is logically the
  * same as 'struct timespec' but 32/64b ABI safe.
@@ -70,8 +64,10 @@ struct drm_etnaviv_timespec {
 
 /* #define MSM_PARAM_GMEM_SIZE 0x02 */
 
+#define ETNA_MAX_PIPES 4
+
 struct drm_etnaviv_param {
-	uint32_t pipe;           /* in, ETNA_PIPE_x */
+	uint32_t pipe;           /* in */
 	uint32_t param;          /* in, ETNAVIV_PARAM_x */
 	uint64_t value;          /* out (get_param) or in (set_param) */
 };
@@ -182,11 +178,16 @@ struct drm_etnaviv_gem_submit_bo {
 * one or more cmdstream buffers.  This allows for conditional execution
 * (context-restore), and IB buffers needed for per tile/bin draw cmds.
 */
+#define ETNA_PIPE_3D 0x00
+#define ETNA_PIPE_2D 0x01
+#define ETNA_PIPE_VG 0x02
 struct drm_etnaviv_gem_submit {
-	uint32_t pipe;           /* in, ETNA_PIPE_x */
+	uint32_t pipe;           /* in */
+	uint32_t exec_state;     /* in, initial execution state (ETNA_PIPE_x) */
 	uint32_t fence;          /* out */
 	uint32_t nr_bos;         /* in, number of submit_bo's */
 	uint32_t nr_cmds;        /* in, number of submit_cmd's */
+	uint32_t pad;
 	uint64_t bos;            /* in, ptr to array of submit_bo's */
 	uint64_t cmds;           /* in, ptr to array of submit_cmd's */
 };
@@ -199,7 +200,7 @@ struct drm_etnaviv_gem_submit {
 * APIs without requiring a dummy bo to synchronize on.
 */
 struct drm_etnaviv_wait_fence {
-	uint32_t pipe;           /* in, ETNA_PIPE_x */
+	uint32_t pipe;           /* in */
 	uint32_t fence;          /* in */
 	struct drm_etnaviv_timespec timeout;   /* in */
 };
On Thu, Apr 02, 2015 at 05:30:44PM +0200, Lucas Stach wrote:
While this isn't the case on i.MX6 a single GPU pipe can have multiple rendering backend states, which can be selected by the pipe switch command, so there is no strict mapping between the user "pipes" and the PIPE_2D/PIPE_3D execution states.
This is good, because on Dove we have a single Vivante core which supports both 2D and 3D together. It's always bugged me that etnadrm has not treated cores separately from their capabilities.
2015-04-02 18:37 GMT+02:00 Russell King - ARM Linux <linux@arm.linux.org.uk>:
On Thu, Apr 02, 2015 at 05:30:44PM +0200, Lucas Stach wrote:
While this isn't the case on i.MX6 a single GPU pipe can have multiple rendering backend states, which can be selected by the pipe switch command, so there is no strict mapping between the user "pipes" and the PIPE_2D/PIPE_3D execution states.
This is good, because on Dove we have a single Vivante core which supports both 2D and 3D together. It's always bugged me that etnadrm has not treated cores separately from their capabilities.
Today I finally got the idea how this multiple pipe stuff should be done the right way - thanks Russell. So maybe you/we need to rework how the driver is designed regarding cores and pipes.
On the imx6 we should get 3 device nodes each only supporting one pipe type. On the dove we should get only one device node supporting 2 pipe types. What do you think?
greets -- Christian Gmeiner, MSc
Am Sonntag, den 05.04.2015, 21:41 +0200 schrieb Christian Gmeiner:
2015-04-02 18:37 GMT+02:00 Russell King - ARM Linux <linux@arm.linux.org.uk>:
On Thu, Apr 02, 2015 at 05:30:44PM +0200, Lucas Stach wrote:
While this isn't the case on i.MX6 a single GPU pipe can have multiple rendering backend states, which can be selected by the pipe switch command, so there is no strict mapping between the user "pipes" and the PIPE_2D/PIPE_3D execution states.
This is good, because on Dove we have a single Vivante core which supports both 2D and 3D together. It's always bugged me that etnadrm has not treated cores separately from their capabilities.
Today I finally got the idea how this multiple pipe stuff should be done the right way - thanks Russell. So maybe you/we need to rework how the driver is designed regarding cores and pipes.
On the imx6 we should get 3 device nodes each only supporting one pipe type. On the dove we should get only one device node supporting 2 pipes types. What do you think?
Sorry, but I strongly object against the idea of having multiple DRM device nodes for the different pipes.
If we need the GPU2D and GPU3D to work together (and I can already see use-cases where we need to use the GPU2D in MESA to do things the GPU3D is incapable of) we would then need a lot more DMA-BUFs to get buffers across the devices. This is a waste of resources and complicates things a lot as we would then have to deal with DMA-BUF fences just to get the synchronization right, which is a no-brainer if we are on the same DRM device.
Also it does not allow us to make any simplifications to the userspace API, so I can't really see any benefit.
Also on Dove I think one would expect to get a single pipe capable of executing in both 2D and 3D state. If userspace takes advantage of that one could leave the sync between both engines to the FE, which is a good thing as this allows the kernel to do less work. I don't see why we should throw this away.
Regards, Lucas
Hi Lucas
2015-04-07 9:46 GMT+02:00 Lucas Stach <l.stach@pengutronix.de>:
Am Sonntag, den 05.04.2015, 21:41 +0200 schrieb Christian Gmeiner:
2015-04-02 18:37 GMT+02:00 Russell King - ARM Linux <linux@arm.linux.org.uk>:
On Thu, Apr 02, 2015 at 05:30:44PM +0200, Lucas Stach wrote:
While this isn't the case on i.MX6 a single GPU pipe can have multiple rendering backend states, which can be selected by the pipe switch command, so there is no strict mapping between the user "pipes" and the PIPE_2D/PIPE_3D execution states.
This is good, because on Dove we have a single Vivante core which supports both 2D and 3D together. It's always bugged me that etnadrm has not treated cores separately from their capabilities.
Today I finally got the idea how this multiple pipe stuff should be done the right way - thanks Russell. So maybe you/we need to rework how the driver is designed regarding cores and pipes.
On the imx6 we should get 3 device nodes each only supporting one pipe type. On the dove we should get only one device node supporting 2 pipes types. What do you think?
Sorry, but I strongly object against the idea of having multiple DRM device nodes for the different pipes.
But that would expose the way the hardware is designed.
If we need the GPU2D and GPU3D to work together (and I can already see use-cases where we need to use the GPU2D in MESA to do things the GPU3D is incapable of) we would then need a lot more DMA-BUFs to get buffers across the devices. This is a waste of resources and complicates things a lot as we would then have to deal with DMA-BUF fences just to get the synchronization right, which is a no-brainer if we are on the same DRM device.
Welcome to the wonderful world of render-only GPUs.
Also it does not allow us to make any simplifications to the userspace API, so I can't really see any benefit.
About what simplifications are you talking?
Also on Dove I think one would expect to get a single pipe capable of executing in both 2D and 3D state. If userspace takes advantage of that one could leave the sync between both engines to the FE, which is a good thing as this allows the kernel to do less work. I don't see why we should throw this away.
That is what I am talking of. On Dove we have one core which has a 2d and a 3d pipe. On the imx6 we would have up to 3 cores (2d, 3d, vg) with only one pipe.
--> Dove: 1 drm device node with 2 pipes
--> imx6: up to 3 drm device nodes, where each only has one supported pipe.
So the user space opens the device, checks if the wanted pipe is supported and creates a pipe representation that the user space can work with. As the kernel already has the pipe concept, it should work without too much refactoring.
Also how should the user space read out supported features in the case of Dove? Currently the driver is focused on the architecture of the imx6 (which is my fault).
greets -- Christian Gmeiner, MSc
Am Dienstag, den 07.04.2015, 10:03 +0200 schrieb Christian Gmeiner:
Hi Lucas
2015-04-07 9:46 GMT+02:00 Lucas Stach <l.stach@pengutronix.de>:
Am Sonntag, den 05.04.2015, 21:41 +0200 schrieb Christian Gmeiner:
2015-04-02 18:37 GMT+02:00 Russell King - ARM Linux <linux@arm.linux.org.uk>:
On Thu, Apr 02, 2015 at 05:30:44PM +0200, Lucas Stach wrote:
While this isn't the case on i.MX6 a single GPU pipe can have multiple rendering backend states, which can be selected by the pipe switch command, so there is no strict mapping between the user "pipes" and the PIPE_2D/PIPE_3D execution states.
This is good, because on Dove we have a single Vivante core which supports both 2D and 3D together. It's always bugged me that etnadrm has not treated cores separately from their capabilities.
Today I finally got the idea how this multiple pipe stuff should be done the right way - thanks Russell. So maybe you/we need to rework how the driver is designed regarding cores and pipes.
On the imx6 we should get 3 device nodes each only supporting one pipe type. On the dove we should get only one device node supporting 2 pipes types. What do you think?
Sorry, but I strongly object against the idea of having multiple DRM device nodes for the different pipes.
But that would expose the way the hardware is designed.
A single DRM device is equally capable of exposing the cores as separate entities. I don't see why we should make our lives harder just to have userspace deal with multiple DRM nodes.
If we need the GPU2D and GPU3D to work together (and I can already see use-cases where we need to use the GPU2D in MESA to do things the GPU3D is incapable of) we would then need a lot more DMA-BUFs to get buffers across the devices. This is a waste of resources and complicates things a lot as we would then have to deal with DMA-BUF fences just to get the synchronization right, which is a no-brainer if we are on the same DRM device.
Welcome to the wonderful world of render-only GPUs.
I object to making things complex for the sake of making things hard. ;)
Also it does not allow us to make any simplifications to the userspace API, so I can't really see any benefit.
About what simplifications are you taking?
There are none. Every bit in the API is still needed, even if we expose the cores as separate DRM devices.
Also on Dove I think one would expect to get a single pipe capable of executing in both 2D and 3D state. If userspace takes advantage of that one could leave the sync between both engines to the FE, which is a good thing as this allows the kernel to do less work. I don't see why we should throw this away.
That is what I am talking of. On Dove we have one core which has a 2d and a 3d pipe. On the imx6 we would have up to 3 cores (2d, 3d, vg) with only one pipe.
--> Dove: 1 drm device node with 2 pipes --> imx6: up to 3 drm devices nodes, where each only has one supported pipe.
So the user space opens the device, checks if the wanted pipe is supported and creates a pipe representation where the user space can work with. As the kernel has already the pipe concept it should work without to much refactoring.
Also how should the user space read out supported features in the case of Dove? Currently the driver is focused on the architecture of the imx6 (which is my fault).
The naming may not be perfect yet, but the same thing is already possible with the driver as posted. The "pipe" argument in the ioctls is not the execution state anymore. That is what I think you mean when you talk about 2d, 3d, vg pipe.
A pipe is now simply a channel to a single GPU core. Whether a given core is able to execute 2D, 3D or VG state is a matter of looking at the feature bits for this pipe. Why would we need a full blown DRM device for that? On i.MX6 you have 3 pipes where each of them is capable of executing in exactly one exec state. On Dove you get a single pipe that is able to switch between 2D and 3D exec state. If userspace is aware of that, one could even interleave 2D with 3D execution on a single submit.
Regards, Lucas
On Tue, Apr 07, 2015 at 11:05:50AM +0200, Lucas Stach wrote:
A pipe is now simply a channel to a single GPU core. If a given core is able to execute 2d, 3d or vg state is a matter of looking at the feature bits for this pipe. Why would we need a full blown DRM device for that? On i.MX6 you have 3 pipes where each of them is capable of executing in exactly one exec state. On Dove you get a single pipe that is able to switch between 2D and 3D exec state. If userspace is aware of that one could even interleave 2D with 3D execution on a single submit.
This is another issue which needs nailing down and clearly specified. :)
In the case of a GPU with multiple execution states, are you intending each command buffer submitted should ensure that the appropriate execution state is selected (via a SET PIPE command) or are you intending the kernel to know about this?
Am Dienstag, den 07.04.2015, 12:31 +0100 schrieb Russell King - ARM Linux:
On Tue, Apr 07, 2015 at 11:05:50AM +0200, Lucas Stach wrote:
A pipe is now simply a channel to a single GPU core. If a given core is able to execute 2d, 3d or vg state is a matter of looking at the feature bits for this pipe. Why would we need a full blown DRM device for that? On i.MX6 you have 3 pipes where each of them is capable of executing in exactly one exec state. On Dove you get a single pipe that is able to switch between 2D and 3D exec state. If userspace is aware of that one could even interleave 2D with 3D execution on a single submit.
This is another issue which needs nailing down and clearly specified. :)
In the case of a GPU with multiple execution states, are you intending each command buffer submitted should ensure that the appropriate execution state is selected (via a SET PIPE command) or are you intending the kernel to know about this?
Currently the kernel makes sure that the pipe is in the expected execution state for each submit if the GPU has been dirtied by anything other than the current submitting process.
As the pipe switch is a somewhat heavy command I'm not sure if we really want userspace to prepend it on each submit. On the other hand we also expect userspace to flush caches on each submit to be able to safely flush the MMU, so the pipe switch might turn out to be harmless in comparison.
I agree that there is currently a lack of documentation for those things.
Regards, Lucas
On Tue, Apr 7, 2015 at 3:46 AM, Lucas Stach <l.stach@pengutronix.de> wrote:
Am Sonntag, den 05.04.2015, 21:41 +0200 schrieb Christian Gmeiner:
2015-04-02 18:37 GMT+02:00 Russell King - ARM Linux <linux@arm.linux.org.uk>:
On Thu, Apr 02, 2015 at 05:30:44PM +0200, Lucas Stach wrote:
While this isn't the case on i.MX6 a single GPU pipe can have multiple rendering backend states, which can be selected by the pipe switch command, so there is no strict mapping between the user "pipes" and the PIPE_2D/PIPE_3D execution states.
This is good, because on Dove we have a single Vivante core which supports both 2D and 3D together. It's always bugged me that etnadrm has not treated cores separately from their capabilities.
Today I finally got the idea how this multiple pipe stuff should be done the right way - thanks Russell. So maybe you/we need to rework how the driver is designed regarding cores and pipes.
On the imx6 we should get 3 device nodes each only supporting one pipe type. On the dove we should get only one device node supporting 2 pipes types. What do you think?
Sorry, but I strongly object against the idea of having multiple DRM device nodes for the different pipes.
If we need the GPU2D and GPU3D to work together (and I can already see use-cases where we need to use the GPU2D in MESA to do things the GPU3D is incapable of) we would then need a lot more DMA-BUFs to get buffers across the devices. This is a waste of resources and complicates things a lot as we would then have to deal with DMA-BUF fences just to get the synchronization right, which is a no-brainer if we are on the same DRM device.
Also it does not allow us to make any simplifications to the userspace API, so I can't really see any benefit.
Also on Dove I think one would expect to get a single pipe capable of executing in both 2D and 3D state. If userspace takes advantage of that one could leave the sync between both engines to the FE, which is a good thing as this allows the kernel to do less work. I don't see why we should throw this away.
Just about all modern GPUs support varying combinations of independent pipelines and we currently support this just fine via a single device node in other drm drivers. E.g., modern radeons support one or more gfx, compute, dma, video decode and video encode engines. What combination is present depends on the asic.
Alex
Regards, Lucas
-- Pengutronix e.K. | Lucas Stach | Industrial Linux Solutions | http://www.pengutronix.de/ |
dri-devel mailing list dri-devel@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/dri-devel
On Tue, Apr 7, 2015 at 4:38 PM, Alex Deucher alexdeucher@gmail.com wrote:
Just about all modern GPUs support varying combinations of independent pipelines and we currently support this just fine via a single device node in other drm drivers. E.g., modern radeons support one or more gfx, compute, dma, video decode and video encode engines. What combination is present depends on the asic.
That reminds me. We should also have in the back of our heads that compute is supported by the newer Vivante chips. We will also need to support multiple independent 3d cores as that support has shown up in the V5 galcore drivers.
-Jon
Am Dienstag, den 07.04.2015, 16:51 +0200 schrieb Jon Nettleton:
That reminds me. We should also have in the back of our heads that compute is supported by the newer Vivante chips. We will also need to support multiple independent 3d cores as that support has shown up in the V5 galcore drivers.
AFAIK compute is just another state of the 3D pipe where instead of issuing a draw command you would kick the thread walker.
Multicore with a single FE is just a single pipe with chip selects set to the available backends and mirrored pagetables for the MMUs. With more than one FE you get more than one pipe which is more like a SLI setup on the desktop, where userspace has to deal with splitting the render targets into portions for each GPU. One more reason to keep things in one DRM device, as I think no one wants to deal with syncing pagetables across different devices.
Regards, Lucas
On Tue, Apr 7, 2015 at 5:01 PM, Lucas Stach l.stach@pengutronix.de wrote:
Am Dienstag, den 07.04.2015, 16:51 +0200 schrieb Jon Nettleton:
AFAIK compute is just another state of the 3D pipe where instead of issuing a draw command you would kick the thread walker.
I believe this is true, but I don't believe anyone has RE'd anything yet.
Multicore with a single FE is just a single pipe with chip selects set to the available backends and mirrored pagetables for the MMUs. With more than one FE you get more than one pipe which is more like a SLI setup on the desktop, where userspace has to deal with splitting the render targets into portions for each GPU.
Yes, galcore makes this a build-time configuration option, supporting both configs.
One more reason to keep things in one DRM device, as I think no one wants to deal with syncing pagetables across different devices.
2015-04-07 17:01 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
Am Dienstag, den 07.04.2015, 16:51 +0200 schrieb Jon Nettleton:
AFAIK compute is just another state of the 3D pipe where instead of issuing a draw command you would kick the thread walker.
Multicore with a single FE is just a single pipe with chip selects set to the available backends and mirrored pagetables for the MMUs. With more than one FE you get more than one pipe which is more like a SLI setup on the desktop, where userspace has to deal with splitting the render targets into portions for each GPU. One more reason to keep things in one DRM device, as I think no one wants to deal with syncing pagetables across different devices.
I don't get your naming scheme - sorry.
For me one Core has a single FE. This single FE can have one pipe or multiple pipes. A pipe is the execution unit selected via the SELECT_PIPE command (2D, 3D, ...).
In the Dove use case we have:
- 1 Core with one FE
- 2 pipelines
In the imx6 case we have:
- 3 Cores (each has only one FE)
- every FE only supports one type of pipeline.
And each Core(/FE) has its own device node. Does this make any sense?
greets -- Christian Gmeiner, MSc
Am Dienstag, den 07.04.2015, 17:13 +0200 schrieb Christian Gmeiner:
2015-04-07 17:01 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
Am Dienstag, den 07.04.2015, 16:51 +0200 schrieb Jon Nettleton:
I don't get your naming scheme - sorry.
For me one Core has a single FE. This single FE can have one pipe or multiple pipes. A pipe is the execution unit selected via the SELECT_PIPE command (2D, 3D, ...).
In the Dove use case we have:
- 1 Core with one FE
- 2 pipelines
In the imx6 case we have:
- 3 Cores (each has only one FE)
- every FE only supports one type of pipeline.
Okay let's keep it at this: a core is an entity with a FE at the front. A pipe is the backend fed by the FE selected by the SELECT_PIPE command.
This is currently confusing as I didn't change the naming in the API, but really the "pipe" parameter in the IOCTLs means core. I'll rename this for the next round.
And each Core(/FE) has its own device node. Does this make any sense?
And I don't get why each core needs to have a single device node. IMHO this is purely an implementation decision whether to have one device node for all cores or one device node per core.
For now I can only see that one device node per core makes things harder to get right, while I don't see a single benefit.
Regards, Lucas
Hi Lucas.
2015-04-07 17:29 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
Am Dienstag, den 07.04.2015, 17:13 +0200 schrieb Christian Gmeiner:
2015-04-07 17:01 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
Am Dienstag, den 07.04.2015, 16:51 +0200 schrieb Jon Nettleton:
Okay let's keep it at this: a core is an entity with a FE at the front. A pipe is the backend fed by the FE selected by the SELECT_PIPE command.
This is currently confusing as I didn't change the naming in the API, but really the "pipe" parameter in the IOCTLs means core. I'll rename this for the next round.
The current driver was written only for the imx6 use case, so it combines one pipe of the 3 GPU cores into one device node. And yes, the pipe parameter could be seen as core. But I think that this design is wrong; I did not know better at the time I started working on it. I think it would not be that hard to change the driver in such a way that every core has its own device node and the pipe parameter really is a pipe of that core.
And each Core(/FE) has its own device node. Does this make any sense?
And I don't get why each core needs to have a single device node. IMHO this is purely an implementation decision whether to have one device node for all cores or one device node per core.
It is an important decision. And I think that one device node per core reflects the hardware design to 100%.
For now I could only see that one device node per core makes things harder to get right, while I don't see a single benefit.
What makes it harder to get right? The needed changes to the kernel driver are not that hard. The userspace is another story, but that's because of the render-only thing, where we need to pass (prime) buffers around, do fence syncs, etc. In the end I do not see a showstopper in the userspace.
What would you do if - I know/hope that this will never happen - there is a SoC which integrates two 3D cores?
greets -- Christian Gmeiner, MSc
On Tue, Apr 07, 2015 at 06:59:59PM +0200, Christian Gmeiner wrote:
Hi Lucas.
2015-04-07 17:29 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
And I don't get why each core needs to have a single device node. IMHO this is purely an implementation decision whether to have one device node for all cores or one device node per core.
It is an important decision. And I think that one device node per core reflects the hardware design to 100%.
Since when do the interfaces to userspace need to reflect the hardware design?
Isn't the point of having a userspace interface, in part, to abstract the hardware design details and provide userspace with something that is relatively easy to use without needlessly exposing the variation of the underlying hardware?
Please get away from the idea that userspace interfaces should reflect the hardware design.
What makes it harder to get right? The needed changes to the kernel driver are not that hard. The userspace is another story, but that's because of the render-only thing, where we need to pass (prime) buffers around, do fence syncs, etc. In the end I do not see a showstopper in the userspace.
The fence syncs are an issue when you have multiple cores - that's something I started to sort out in my patch series, but when you appeared to refuse to accept some of the patches, I stopped...
The problem when you have multiple cores is that one global fence event counter, which gets compared against the fence values in each buffer object, no longer works.
Consider this scenario:
You have two threads, thread A making use of a 2D core, and thread B using the 3D core.
Thread B submits a big long render operation, and the buffers get assigned fence number 1.
Thread A submits a short render operation, and the buffers get assigned fence number 2.
The 2D core finishes, and sends its interrupt. Etnaviv updates the completed fence position to 2.
At this point, we believe that fence numbers 1 and 2 are now complete, despite the 3D core continuing to execute and operate on the buffers with fence number 1.
I'm certain that the fence implementation we currently have can't be made to work with multiple cores with a few tweaks - we need something better to cater for what is essentially out-of-order completion amongst the cores.
A simple resolution to that _would_ be your argument of exposing each GPU as a separate DRM node, because then we get completely separate accounting of each - but it needlessly adds an expense in userspace. Userspace would have to make multiple calls - to each GPU DRM node - to check whether the buffer is busy on any of the GPUs as it may not know which GPU could be using the buffer, especially if it got it via a dmabuf fd sent over the DRI3 protocol. To me, that sounds like a burden on userspace.
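The scenario above can be sketched in a few lines. This is illustrative C, not the actual etnaviv fence code; all names (`submit`, `core_finished`, `bo_idle`) are made up:

```c
/* Hedged sketch of the broken case: a single global "completed fence"
 * counter shared by all cores, as described in the scenario above. */
#include <stdbool.h>
#include <stdint.h>

static uint32_t next_fence;	/* assigned at submit time */
static uint32_t completed_fence;	/* advanced from the IRQ handler */

struct bo { uint32_t fence; };	/* fence stamped on the buffer object */

static uint32_t submit(struct bo *bo)
{
	bo->fence = ++next_fence;
	return bo->fence;
}

/* IRQ handler: whichever core finished reports its fence number. */
static void core_finished(uint32_t fence)
{
	if (fence > completed_fence)
		completed_fence = fence;
}

static bool bo_idle(const struct bo *bo)
{
	return completed_fence >= bo->fence;
}
```

Replaying the mail's scenario: the 3D job gets fence 1, the 2D job fence 2; when the 2D core's interrupt advances the global counter to 2, `bo_idle()` wrongly reports the 3D job's buffers idle even though the 3D core is still executing.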
2015-04-07 23:25 GMT+02:00 Russell King - ARM Linux linux@arm.linux.org.uk:
Since when do the interfaces to userspace need to reflect the hardware design?
Isn't the point of having a userspace interface, in part, to abstract the hardware design details and provide userspace with something that is relatively easy to use without needlessly exposing the variation of the underlying hardware?
Please get away from the idea that userspace interfaces should reflect the hardware design.
I think that we are in a phase of heavy discussion and we should talk about every aspect of the driver design - keep in mind that we could skip staging, and then the interface needs to be future-proof.
The fence syncs are an issue when you have multiple cores - that's something I started to sort out in my patch series, but when you appeared to refuse to accept some of the patches, I stopped...
I hope we can close this chapter soon. I am quite sorry about that, but if you had only answered a single mail or a single IRC message at that time, we could have sorted this out.
At this point, we believe that fence numbers 1 and 2 are now complete, despite the 3D core continuing to execute and operate on the buffers with fence number 1.
Yes, this _is_ a problem.
greets -- Christian Gmeiner, MSc
Am Dienstag, den 07.04.2015, 22:25 +0100 schrieb Russell King - ARM Linux:
On Tue, Apr 07, 2015 at 06:59:59PM +0200, Christian Gmeiner wrote:
Hi Lucas.
2015-04-07 17:29 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
And I don't get why each core needs to have a single device node. IMHO this is purely an implementation decision weather to have one device node for all cores or one device node per core.
It is an important decision. And I think that one device node per core reflects the hardware design to 100%.
Since when do the interfaces to userspace need to reflect the hardware design?
Isn't the point of having a userspace interface, in part, to abstract the hardware design details and provide userspace with something that is relatively easy to use without needlessly exposing the variation of the underlying hardware?
Please get away from the idea that userspace interfaces should reflect the hardware design.
What makes it harder to get right? The needed changes to the kernel driver are not that hard. The userspace is another story, but that's because of the render-only thing, where we need to pass (prime) buffers around, do fence syncs etc. In the end I do not see a showstopper in the userspace.
The fence syncs are an issue when you have multiple cores - that's something I started to sort out in my patch series, but when you appeared to refuse to accept some of the patches, I stopped...
The problem when you have multiple cores is that a single global fence event counter, which gets compared to the fence values in each buffer object, no longer works.
Consider this scenario:
You have two threads, thread A making use of a 2D core, and thread B using the 3D core.
Thread B submits a big long render operation, and the buffers get assigned fence number 1.
Thread A submits a short render operation, and the buffers get assigned fence number 2.
The 2D core finishes, and sends its interrupt. Etnaviv updates the completed fence position to 2.
At this point, we believe that fence numbers 1 and 2 are now complete, despite the 3D core continuing to execute and operate on the buffers with fence number 1.
I'm certain that the fence implementation we currently have can't be made to work with multiple cores with just a few tweaks - we need something better to cater for what is essentially out-of-order completion amongst the cores.
A simple resolution to that _would_ be your argument of exposing each GPU as a separate DRM node, because then we get completely separate accounting of each - but it needlessly adds an expense in userspace. Userspace would have to make multiple calls - to each GPU DRM node - to check whether the buffer is busy on any of the GPUs as it may not know which GPU could be using the buffer, especially if it got it via a dmabuf fd sent over the DRI3 protocol. To me, that sounds like a burden on userspace.
And even simpler would be to make the monotonically increasing fence queue per-core and allow each GEM object to be on multiple queues. So when waiting for buffer idle, the kernel can easily make sure the object is idle on all attached fence queues.
Same principle as with the MMU mappings right now: a single GEM object mapped to possibly different positions in the VM space of each core.
Regards, Lucas
On Tue, Apr 7, 2015 at 12:59 PM, Christian Gmeiner christian.gmeiner@gmail.com wrote:
And each Core(/FE) has its own device node. Does this make any sense?
And I don't get why each core needs to have its own device node. IMHO this is purely an implementation decision whether to have one device node for all cores or one device node per core.
It is an important decision. And I think that one device node per core reflects the hardware design 100%.
Although I haven't really added support for devices with multiple pipes, the pipe param in MSM ioctls is intended to deal with hw that has multiple pipes. (And I assume someday Adreno will sprout an extra compute pipe, where we'll need this.)
In your case, it sounds a bit like you should have an ioctl to enumerate the pipes, and a getcap that returns a bitmask of the compute engine(s) supported by a given pipe. Or something roughly like that.
For now I can only see that one device node per core makes things harder to get right, while I don't see a single benefit.
What makes it harder to get right? The needed changes to the kernel driver are not that hard. The userspace is another story, but that's because of the render-only thing, where we need to pass (prime) buffers around, do fence syncs etc. In the end I do not see a showstopper in the userspace.
I assume the hw gives you a way to do fencing between pipes? It seems at least convenient not to need to expose that via dmabuf+fence, since that is a bit heavyweight if you end up needing to do things like texture uploads/downloads or msaa resolve on one pipe synchronized to rendering happening on another..
BR, -R
Am Dienstag, den 07.04.2015, 18:14 -0400 schrieb Rob Clark:
On Tue, Apr 7, 2015 at 12:59 PM, Christian Gmeiner christian.gmeiner@gmail.com wrote:
And each Core(/FE) has its own device node. Does this make any sense?
And I don't get why each core needs to have its own device node. IMHO this is purely an implementation decision whether to have one device node for all cores or one device node per core.
It is an important decision. And I think that one device node per core reflects the hardware design 100%.
Although I haven't really added support for devices with multiple pipes, the pipe param in MSM ioctls is intended to deal with hw that has multiple pipes. (And I assume someday Adreno will sprout an extra compute pipe, where we'll need this.)
In your case, it sounds a bit like you should have an ioctl to enumerate the pipes, and a getcap that returns a bitmask of the compute engine(s) supported by a given pipe. Or something roughly like that.
The current interface already allows for that. Each core gets a simple integer assigned. Userspace can then just ask for the feature bits of a core, using an increasing integer as the index. The feature bits tell you whether the core is capable of executing 2D, 3D or VG pipe states.
Since we construct the DRM device only when all cores are probed, and tear it down when one of them goes away, there are no holes in the index space. So once you hit ENODEV when asking for the feature bits of a core, you know that there are no more cores to enumerate.
For now I can only see that one device node per core makes things harder to get right, while I don't see a single benefit.
What makes it harder to get right? The needed changes to the kernel driver are not that hard. The userspace is another story, but that's because of the render-only thing, where we need to pass (prime) buffers around, do fence syncs etc. In the end I do not see a showstopper in the userspace.
I assume the hw gives you a way to do fencing between pipes? It seems at least convenient not to need to expose that via dmabuf+fence, since that is a bit heavyweight if you end up needing to do things like texture uploads/downloads or msaa resolve on one pipe synchronized to rendering happening on another..
The cores are separate entities with no internal synchronization AFAIK.
Regards, Lucas
Am Dienstag, den 07.04.2015, 18:59 +0200 schrieb Christian Gmeiner:
Hi Lucas.
[...]
I don't get your naming scheme - sorry.
For me one core has a single FE. This single FE can have one pipe or multiple pipes. A pipe is the execution unit selected via the SELECT_PIPE command (2D, 3D, ...).
In the Dove use case we have:
- 1 Core with one FE
- 2 pipelines
In the i.MX6 case we have:
- 3 cores (each with only one FE)
- every FE supports only one type of pipeline.
Okay, let's keep it at this: a core is an entity with an FE at the front. A pipe is the backend fed by the FE, selected by the SELECT_PIPE command.
This is currently confusing as I didn't change the naming in the API, but really the "pipe" parameter in the IOCTLs means core. I'll rename this for the next round.
The current driver was written only for the i.MX6 use case, so it combines one pipe of each of the 3 GPU cores into one device node. And yes, the pipe parameter could be seen as a core. But I think that this design is wrong; I did not know better at the time I started working on it. I think it would not be that hard to change the driver so that every core has its own device node and the pipe parameter really selects a pipe of that core.
And each Core(/FE) has its own device node. Does this make any sense?
And I don't get why each core needs to have its own device node. IMHO this is purely an implementation decision whether to have one device node for all cores or one device node per core.
It is an important decision. And I think that one device node per core reflects the hardware design 100%.
I'll refer to Russell's mail for this.
For now I can only see that one device node per core makes things harder to get right, while I don't see a single benefit.
What makes it harder to get right? The needed changes to the kernel driver are not that hard. The userspace is another story, but that's because of the render-only thing, where we need to pass (prime) buffers around, do fence syncs etc. In the end I do not see a showstopper in the userspace.
DMA-BUFs and fences on them are no showstopper, but are a burden on userspace that we don't _need_ to impose. So why should we do this?
What would you do if - I know/hope that this will never happen - there is a SoC which integrates two 3D cores?
Please go back and read the patch at the top of this thread. Having multiple cores with the same pipe caps is entirely possible with the current driver. Each core gets assigned a simple integer index, and userspace is responsible for looking at the feature bits of each core/index, so having multiple 3D cores is not a problem at all.
Regards, Lucas
2015-04-07 16:38 GMT+02:00 Alex Deucher alexdeucher@gmail.com:
On Tue, Apr 7, 2015 at 3:46 AM, Lucas Stach l.stach@pengutronix.de wrote:
Am Sonntag, den 05.04.2015, 21:41 +0200 schrieb Christian Gmeiner:
2015-04-02 18:37 GMT+02:00 Russell King - ARM Linux linux@arm.linux.org.uk:
On Thu, Apr 02, 2015 at 05:30:44PM +0200, Lucas Stach wrote:
While this isn't the case on i.MX6, a single GPU pipe can have multiple rendering backend states, which can be selected by the pipe switch command, so there is no strict mapping between the user "pipes" and the PIPE_2D/PIPE_3D execution states.
This is good, because on Dove we have a single Vivante core which supports both 2D and 3D together. It's always bugged me that etnadrm has not treated cores separately from their capabilities.
Today I finally got the idea how this multiple pipe stuff should be done the right way - thanks Russell. So maybe you/we need to rework how the driver is designed regarding cores and pipes.
On the i.MX6 we should get 3 device nodes, each supporting only one pipe type. On the Dove we should get only one device node, supporting 2 pipe types. What do you think?
Sorry, but I strongly object against the idea of having multiple DRM device nodes for the different pipes.
If we need the GPU2D and GPU3D to work together (and I can already see use-cases where we need to use the GPU2D in MESA to do things the GPU3D is incapable of) we would then need a lot more DMA-BUFs to get buffers across the devices. This is a waste of resources and complicates things a lot as we would then have to deal with DMA-BUF fences just to get the synchronization right, which is a no-brainer if we are on the same DRM device.
Also it does not allow us to make any simplifications to the userspace API, so I can't really see any benefit.
Also on Dove I think one would expect to get a single pipe capable of executing in both 2D and 3D state. If userspace takes advantage of that one could leave the sync between both engines to the FE, which is a good thing as this allows the kernel to do less work. I don't see why we should throw this away.
Just about all modern GPUs support varying combinations of independent pipelines, and we currently support this just fine via a single device node in other DRM drivers. E.g., modern Radeons support one or more gfx, compute, dma, video decode and video encode engines. Which combination is present depends on the ASIC.
So if you have multiple GPUs (IP cores with separate IRQs, register addresses, ...) with combinations of independent pipelines, that would mean that every GPU gets its own device node and supports a combination of independent pipelines.
greets -- Christian Gmeiner, MSc
On 07.04.2015 16:52, Christian Gmeiner wrote:
2015-04-07 16:38 GMT+02:00 Alex Deucher alexdeucher@gmail.com:
On Tue, Apr 7, 2015 at 3:46 AM, Lucas Stach l.stach@pengutronix.de wrote:
Am Sonntag, den 05.04.2015, 21:41 +0200 schrieb Christian Gmeiner:
2015-04-02 18:37 GMT+02:00 Russell King - ARM Linux linux@arm.linux.org.uk:
On Thu, Apr 02, 2015 at 05:30:44PM +0200, Lucas Stach wrote:
While this isn't the case on i.MX6, a single GPU pipe can have multiple rendering backend states, which can be selected by the pipe switch command, so there is no strict mapping between the user "pipes" and the PIPE_2D/PIPE_3D execution states.
This is good, because on Dove we have a single Vivante core which supports both 2D and 3D together. It's always bugged me that etnadrm has not treated cores separately from their capabilities.
Today I finally got the idea how this multiple pipe stuff should be done the right way - thanks Russell. So maybe you/we need to rework how the driver is designed regarding cores and pipes.
On the i.MX6 we should get 3 device nodes, each supporting only one pipe type. On the Dove we should get only one device node, supporting 2 pipe types. What do you think?
Sorry, but I strongly object against the idea of having multiple DRM device nodes for the different pipes.
If we need the GPU2D and GPU3D to work together (and I can already see use-cases where we need to use the GPU2D in MESA to do things the GPU3D is incapable of) we would then need a lot more DMA-BUFs to get buffers across the devices. This is a waste of resources and complicates things a lot as we would then have to deal with DMA-BUF fences just to get the synchronization right, which is a no-brainer if we are on the same DRM device.
Also it does not allow us to make any simplifications to the userspace API, so I can't really see any benefit.
Also on Dove I think one would expect to get a single pipe capable of executing in both 2D and 3D state. If userspace takes advantage of that one could leave the sync between both engines to the FE, which is a good thing as this allows the kernel to do less work. I don't see why we should throw this away.
Just about all modern GPUs support varying combinations of independent pipelines, and we currently support this just fine via a single device node in other DRM drivers. E.g., modern Radeons support one or more gfx, compute, dma, video decode and video encode engines. Which combination is present depends on the ASIC.
So if you have multiple GPUs (IP cores with separate IRQs, register addresses, ...) with combinations of independent pipelines, that would mean that every GPU gets its own device node and supports a combination of independent pipelines.
Yeah, correct. For Radeon it actually depends on how the multiple GPUs/pipelines are wired up.
If you have multiple GPUs each one usually has a different internal address space and different resources (VRAM, special memory regions like LDS/GDS etc...) and a couple of different pipelines.
It won't make sense to create a separate device node for each pipeline, because, as noted, that would mean we have to share all resources using DMA-BUF file descriptors.
Regards, Christian.
greets
Christian Gmeiner, MSc
https://soundcloud.com/christian-gmeiner
Am Dienstag, den 07.04.2015, 16:52 +0200 schrieb Christian Gmeiner:
2015-04-07 16:38 GMT+02:00 Alex Deucher alexdeucher@gmail.com:
On Tue, Apr 7, 2015 at 3:46 AM, Lucas Stach l.stach@pengutronix.de wrote:
Am Sonntag, den 05.04.2015, 21:41 +0200 schrieb Christian Gmeiner:
2015-04-02 18:37 GMT+02:00 Russell King - ARM Linux linux@arm.linux.org.uk:
On Thu, Apr 02, 2015 at 05:30:44PM +0200, Lucas Stach wrote:
While this isn't the case on i.MX6, a single GPU pipe can have multiple rendering backend states, which can be selected by the pipe switch command, so there is no strict mapping between the user "pipes" and the PIPE_2D/PIPE_3D execution states.
This is good, because on Dove we have a single Vivante core which supports both 2D and 3D together. It's always bugged me that etnadrm has not treated cores separately from their capabilities.
Today I finally got the idea how this multiple pipe stuff should be done the right way - thanks Russell. So maybe you/we need to rework how the driver is designed regarding cores and pipes.
On the i.MX6 we should get 3 device nodes, each supporting only one pipe type. On the Dove we should get only one device node, supporting 2 pipe types. What do you think?
Sorry, but I strongly object against the idea of having multiple DRM device nodes for the different pipes.
If we need the GPU2D and GPU3D to work together (and I can already see use-cases where we need to use the GPU2D in MESA to do things the GPU3D is incapable of) we would then need a lot more DMA-BUFs to get buffers across the devices. This is a waste of resources and complicates things a lot as we would then have to deal with DMA-BUF fences just to get the synchronization right, which is a no-brainer if we are on the same DRM device.
Also it does not allow us to make any simplifications to the userspace API, so I can't really see any benefit.
Also on Dove I think one would expect to get a single pipe capable of executing in both 2D and 3D state. If userspace takes advantage of that one could leave the sync between both engines to the FE, which is a good thing as this allows the kernel to do less work. I don't see why we should throw this away.
Just about all modern GPUs support varying combinations of independent pipelines, and we currently support this just fine via a single device node in other DRM drivers. E.g., modern Radeons support one or more gfx, compute, dma, video decode and video encode engines. Which combination is present depends on the ASIC.
So if you have multiple GPUs (IP cores with separate IRQs, register addresses, ...) with combinations of independent pipelines, that would mean that every GPU gets its own device node and supports a combination of independent pipelines.
To merge the available GPU cores on one SoC into a single DRM device, or to construct a separate DRM device for each core, is purely an implementation decision. For now I haven't seen any compelling argument that having separate DRM devices would provide any benefit.
Regards, Lucas
In case the component bind fails, the mutex should not be left locked.
Signed-off-by: Lucas Stach l.stach@pengutronix.de --- drivers/staging/etnaviv/etnaviv_drv.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index 828ed8ce347f..799793ea0b38 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -141,8 +141,10 @@ static int etnaviv_load(struct drm_device *dev, unsigned long flags) mutex_lock(&dev->struct_mutex);
err = component_bind_all(dev->dev, dev); - if (err < 0) + if (err < 0) { + mutex_unlock(&dev->struct_mutex); return err; + }
load_gpu(dev);
Drop the last remaining MSM bits and things we don't need for Vivante GPUs. Those include shifting and or-ing of reloc addresses and IB buffers.
Signed-off-by: Russell King rmk+kernel@arm.linux.org.uk Signed-off-by: Lucas Stach l.stach@pengutronix.de --- drivers/staging/etnaviv/etnaviv_gem_submit.c | 9 +-------- include/uapi/drm/etnaviv_drm.h | 22 ++++------------------ 2 files changed, 5 insertions(+), 26 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gem_submit.c b/drivers/staging/etnaviv/etnaviv_gem_submit.c index 9061f5f7ecc6..2edaa1262fef 100644 --- a/drivers/staging/etnaviv/etnaviv_gem_submit.c +++ b/drivers/staging/etnaviv/etnaviv_gem_submit.c @@ -269,14 +269,7 @@ static int submit_reloc(struct etnaviv_gem_submit *submit, struct etnaviv_gem_ob return -EINVAL; }
- iova += submit_reloc.reloc_offset; - - if (submit_reloc.shift < 0) - iova >>= -submit_reloc.shift; - else - iova <<= submit_reloc.shift; - - ptr[off] = iova | submit_reloc.or; + ptr[off] = iova + submit_reloc.reloc_offset;
last_offset = off; } diff --git a/include/uapi/drm/etnaviv_drm.h b/include/uapi/drm/etnaviv_drm.h index dfd51fcd56d6..c6ce72ae4dbe 100644 --- a/include/uapi/drm/etnaviv_drm.h +++ b/include/uapi/drm/etnaviv_drm.h @@ -1,6 +1,5 @@ /* - * Copyright (C) 2013 Red Hat - * Author: Rob Clark robdclark@gmail.com + * Copyright (C) 2015 Etnaviv Project * * This program is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License version 2 as published by @@ -62,8 +61,6 @@ struct drm_etnaviv_timespec { #define ETNAVIV_PARAM_GPU_INSTRUCTION_COUNT 0x18 #define ETNAVIV_PARAM_GPU_NUM_CONSTANTS 0x19
-/* #define MSM_PARAM_GMEM_SIZE 0x02 */ - #define ETNA_MAX_PIPES 4
struct drm_etnaviv_param { @@ -116,35 +113,24 @@ struct drm_etnaviv_gem_cpu_fini { */
/* The value written into the cmdstream is logically: - * - * ((relocbuf->gpuaddr + reloc_offset) << shift) | or - * - * When we have GPU's w/ >32bit ptrs, it should be possible to deal - * with this by emit'ing two reloc entries with appropriate shift - * values. Or a new ETNA_SUBMIT_CMD_x type would also be an option. + * relocbuf->gpuaddr + reloc_offset * * NOTE that reloc's must be sorted by order of increasing submit_offset, * otherwise EINVAL. */ struct drm_etnaviv_gem_submit_reloc { uint32_t submit_offset; /* in, offset from submit_bo */ - uint32_t or; /* in, value OR'd with result */ - int32_t shift; /* in, amount of left shift (can be -ve) */ uint32_t reloc_idx; /* in, index of reloc_bo buffer */ uint64_t reloc_offset; /* in, offset from start of reloc_bo */ };
/* submit-types: * BUF - this cmd buffer is executed normally. - * IB_TARGET_BUF - this cmd buffer is an IB target. Reloc's are - * processed normally, but the kernel does not setup an IB to - * this buffer in the first-level ringbuffer * CTX_RESTORE_BUF - only executed if there has been a GPU context * switch since the last SUBMIT ioctl */ #define ETNA_SUBMIT_CMD_BUF 0x0001 -#define ETNA_SUBMIT_CMD_IB_TARGET_BUF 0x0002 -#define ETNA_SUBMIT_CMD_CTX_RESTORE_BUF 0x0003 +#define ETNA_SUBMIT_CMD_CTX_RESTORE_BUF 0x0002 struct drm_etnaviv_gem_submit_cmd { uint32_t type; /* in, one of ETNA_SUBMIT_CMD_x */ uint32_t submit_idx; /* in, index of submit_bo cmdstream buffer */ @@ -216,7 +202,7 @@ struct drm_etnaviv_gem_userptr {
#define DRM_ETNAVIV_GET_PARAM 0x00 /* placeholder: -#define DRM_MSM_SET_PARAM 0x01 +#define DRM_ETNAVIV_SET_PARAM 0x01 */ #define DRM_ETNAVIV_GEM_NEW 0x02 #define DRM_ETNAVIV_GEM_INFO 0x03
Dumb buffers must only be used as backing storage for scanout-only surfaces. Any acceleration operation on them is not allowed.
So there is no point in having dumb buffer support in a driver that isn't able to drive any scanout hardware.
Signed-off-by: Lucas Stach l.stach@pengutronix.de --- drivers/staging/etnaviv/etnaviv_drv.c | 3 --- drivers/staging/etnaviv/etnaviv_drv.h | 12 ------------ drivers/staging/etnaviv/etnaviv_gem.c | 31 ------------------------------- 3 files changed, 46 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index 799793ea0b38..d01af1290bb2 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -536,9 +536,6 @@ static struct drm_driver etnaviv_drm_driver = { .set_busid = drm_platform_set_busid, .gem_free_object = etnaviv_gem_free_object, .gem_vm_ops = &vm_ops, - .dumb_create = msm_gem_dumb_create, - .dumb_map_offset = msm_gem_dumb_map_offset, - .dumb_destroy = drm_gem_dumb_destroy, .prime_handle_to_fd = drm_gem_prime_handle_to_fd, .prime_fd_to_handle = drm_gem_prime_fd_to_handle, .gem_prime_export = drm_gem_prime_export, diff --git a/drivers/staging/etnaviv/etnaviv_drv.h b/drivers/staging/etnaviv/etnaviv_drv.h index 4dfcd03c80ef..8d835a5e2e2a 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.h +++ b/drivers/staging/etnaviv/etnaviv_drv.h @@ -80,10 +80,6 @@ int etnaviv_gem_get_iova_locked(struct etnaviv_gpu *gpu, int etnaviv_gem_get_iova(struct etnaviv_gpu *gpu, struct drm_gem_object *obj, int id, uint32_t *iova); void etnaviv_gem_put_iova(struct drm_gem_object *obj); -int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, - struct drm_mode_create_dumb *args); -int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, - uint32_t handle, uint64_t *offset); struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj); void *msm_gem_prime_vmap(struct drm_gem_object *obj); void msm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); @@ -146,12 +142,4 @@ static inline bool fence_completed(struct drm_device *dev, uint32_t fence) return fence_after_eq(priv->completed_fence, fence); }
-static inline int align_pitch(int width, int bpp) -{ - int bytespp = (bpp + 7) / 8; - - /* adreno needs pitch aligned to 32 pixels: */ - return bytespp * ALIGN(width, 32); -} - #endif /* __ETNAVIV_DRV_H__ */ diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 04594dad27e2..e396ee90bc5e 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -368,37 +368,6 @@ void etnaviv_gem_put_iova(struct drm_gem_object *obj) */ }
-int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, - struct drm_mode_create_dumb *args) -{ - args->pitch = align_pitch(args->width, args->bpp); - args->size = PAGE_ALIGN(args->pitch * args->height); - /* TODO: re-check flags */ - return etnaviv_gem_new_handle(dev, file, args->size, - ETNA_BO_WC, &args->handle); -} - -int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, - uint32_t handle, uint64_t *offset) -{ - struct drm_gem_object *obj; - int ret = 0; - - /* GEM does all our handle to object mapping */ - obj = drm_gem_object_lookup(dev, file, handle); - if (obj == NULL) { - ret = -ENOENT; - goto fail; - } - - *offset = msm_gem_mmap_offset(obj); - - drm_gem_object_unreference_unlocked(obj); - -fail: - return ret; -} - void *etnaviv_gem_vaddr_locked(struct drm_gem_object *obj) { struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
2015-04-02 17:30 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
Dumb buffers must only be used as backing storage for scanout-only surfaces. Any acceleration operation on them is not allowed.
So there is no point in having dumb buffer support in a driver that isn't able to drive any scanout hardware.
Signed-off-by: Lucas Stach l.stach@pengutronix.de
drivers/staging/etnaviv/etnaviv_drv.c | 3 --- drivers/staging/etnaviv/etnaviv_drv.h | 12 ------------ drivers/staging/etnaviv/etnaviv_gem.c | 31 ------------------------------- 3 files changed, 46 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index 799793ea0b38..d01af1290bb2 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -536,9 +536,6 @@ static struct drm_driver etnaviv_drm_driver = { .set_busid = drm_platform_set_busid, .gem_free_object = etnaviv_gem_free_object, .gem_vm_ops = &vm_ops,
.dumb_create = msm_gem_dumb_create,
.dumb_map_offset = msm_gem_dumb_map_offset,
.dumb_destroy = drm_gem_dumb_destroy, .prime_handle_to_fd = drm_gem_prime_handle_to_fd, .prime_fd_to_handle = drm_gem_prime_fd_to_handle, .gem_prime_export = drm_gem_prime_export,
diff --git a/drivers/staging/etnaviv/etnaviv_drv.h b/drivers/staging/etnaviv/etnaviv_drv.h index 4dfcd03c80ef..8d835a5e2e2a 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.h +++ b/drivers/staging/etnaviv/etnaviv_drv.h @@ -80,10 +80,6 @@ int etnaviv_gem_get_iova_locked(struct etnaviv_gpu *gpu, int etnaviv_gem_get_iova(struct etnaviv_gpu *gpu, struct drm_gem_object *obj, int id, uint32_t *iova); void etnaviv_gem_put_iova(struct drm_gem_object *obj); -int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
struct drm_mode_create_dumb *args);
-int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
uint32_t handle, uint64_t *offset);
struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj); void *msm_gem_prime_vmap(struct drm_gem_object *obj); void msm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); @@ -146,12 +142,4 @@ static inline bool fence_completed(struct drm_device *dev, uint32_t fence) return fence_after_eq(priv->completed_fence, fence); }
-static inline int align_pitch(int width, int bpp) -{
int bytespp = (bpp + 7) / 8;
/* adreno needs pitch aligned to 32 pixels: */
return bytespp * ALIGN(width, 32);
-}
#endif /* __ETNAVIV_DRV_H__ */ diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 04594dad27e2..e396ee90bc5e 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -368,37 +368,6 @@ void etnaviv_gem_put_iova(struct drm_gem_object *obj) */ }
-int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
struct drm_mode_create_dumb *args)
-{
args->pitch = align_pitch(args->width, args->bpp);
args->size = PAGE_ALIGN(args->pitch * args->height);
/* TODO: re-check flags */
return etnaviv_gem_new_handle(dev, file, args->size,
ETNA_BO_WC, &args->handle);
-}
-int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
uint32_t handle, uint64_t *offset)
-{
struct drm_gem_object *obj;
int ret = 0;
/* GEM does all our handle to object mapping */
obj = drm_gem_object_lookup(dev, file, handle);
if (obj == NULL) {
ret = -ENOENT;
goto fail;
}
*offset = msm_gem_mmap_offset(obj);
drm_gem_object_unreference_unlocked(obj);
-fail:
return ret;
-}
void *etnaviv_gem_vaddr_locked(struct drm_gem_object *obj) { struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); -- 2.1.4
And this one is also quite similar to this commit: https://github.com/austriancoder/linux/commit/cf8e8813ba730334195bc3c74017a2...
Except yours also removes align_pitch(..), which is good.
greets -- Christian Gmeiner, MSc
This is a simple rename without functional changes.
Signed-off-by: Lucas Stach l.stach@pengutronix.de --- drivers/staging/etnaviv/etnaviv_drv.c | 10 +++++----- drivers/staging/etnaviv/etnaviv_drv.h | 12 ++++++------ drivers/staging/etnaviv/etnaviv_gem.c | 2 +- drivers/staging/etnaviv/etnaviv_gem_prime.c | 17 ++++++++--------- 4 files changed, 20 insertions(+), 21 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index d01af1290bb2..5045dee1932d 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -540,12 +540,12 @@ static struct drm_driver etnaviv_drm_driver = { .prime_fd_to_handle = drm_gem_prime_fd_to_handle, .gem_prime_export = drm_gem_prime_export, .gem_prime_import = drm_gem_prime_import, - .gem_prime_pin = msm_gem_prime_pin, - .gem_prime_unpin = msm_gem_prime_unpin, - .gem_prime_get_sg_table = msm_gem_prime_get_sg_table, + .gem_prime_pin = etnaviv_gem_prime_pin, + .gem_prime_unpin = etnaviv_gem_prime_unpin, + .gem_prime_get_sg_table = etnaviv_gem_prime_get_sg_table, .gem_prime_import_sg_table = etnaviv_gem_prime_import_sg_table, - .gem_prime_vmap = msm_gem_prime_vmap, - .gem_prime_vunmap = msm_gem_prime_vunmap, + .gem_prime_vmap = etnaviv_gem_prime_vmap, + .gem_prime_vunmap = etnaviv_gem_prime_vunmap, #ifdef CONFIG_DEBUG_FS .debugfs_init = etnaviv_debugfs_init, .debugfs_cleanup = etnaviv_debugfs_cleanup, diff --git a/drivers/staging/etnaviv/etnaviv_drv.h b/drivers/staging/etnaviv/etnaviv_drv.h index 8d835a5e2e2a..c4892badb33b 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.h +++ b/drivers/staging/etnaviv/etnaviv_drv.h @@ -80,15 +80,15 @@ int etnaviv_gem_get_iova_locked(struct etnaviv_gpu *gpu, int etnaviv_gem_get_iova(struct etnaviv_gpu *gpu, struct drm_gem_object *obj, int id, uint32_t *iova); void etnaviv_gem_put_iova(struct drm_gem_object *obj); -struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj); -void *msm_gem_prime_vmap(struct drm_gem_object *obj); -void msm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); +struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj); +void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj); +void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev, 
struct dma_buf_attachment *attach, struct sg_table *sg); -int msm_gem_prime_pin(struct drm_gem_object *obj); -void msm_gem_prime_unpin(struct drm_gem_object *obj); +int etnaviv_gem_prime_pin(struct drm_gem_object *obj); +void etnaviv_gem_prime_unpin(struct drm_gem_object *obj); void *etnaviv_gem_vaddr_locked(struct drm_gem_object *obj); -void *msm_gem_vaddr(struct drm_gem_object *obj); +void *etnaviv_gem_vaddr(struct drm_gem_object *obj); dma_addr_t etnaviv_gem_paddr_locked(struct drm_gem_object *obj); void etnaviv_gem_move_to_active(struct drm_gem_object *obj, struct etnaviv_gpu *gpu, uint32_t access, uint32_t fence); diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index e396ee90bc5e..dd223b5e230c 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -387,7 +387,7 @@ void *etnaviv_gem_vaddr_locked(struct drm_gem_object *obj) return etnaviv_obj->vaddr; }
-void *msm_gem_vaddr(struct drm_gem_object *obj) +void *etnaviv_gem_vaddr(struct drm_gem_object *obj) { void *ret;
diff --git a/drivers/staging/etnaviv/etnaviv_gem_prime.c b/drivers/staging/etnaviv/etnaviv_gem_prime.c index d15f4b60fa47..a47fbbddb9f6 100644 --- a/drivers/staging/etnaviv/etnaviv_gem_prime.c +++ b/drivers/staging/etnaviv/etnaviv_gem_prime.c @@ -1,6 +1,5 @@ /* - * Copyright (C) 2013 Red Hat - * Author: Rob Clark robdclark@gmail.com + * Copyright (C) 2015 Etnaviv Project * * This program is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License version 2 as published by @@ -20,7 +19,7 @@ #include "etnaviv_gem.h"
-struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj) +struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj) { struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
@@ -29,17 +28,17 @@ struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj) return etnaviv_obj->sgt; }
-void *msm_gem_prime_vmap(struct drm_gem_object *obj) +void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj) { - return msm_gem_vaddr(obj); + return etnaviv_gem_vaddr(obj); }
-void msm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) +void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) { - /* TODO msm_gem_vunmap() */ + /* TODO etnaviv_gem_vunmap() */ }
-int msm_gem_prime_pin(struct drm_gem_object *obj) +int etnaviv_gem_prime_pin(struct drm_gem_object *obj) { if (!obj->import_attach) { struct drm_device *dev = obj->dev; @@ -51,7 +50,7 @@ int msm_gem_prime_pin(struct drm_gem_object *obj) return 0; }
-void msm_gem_prime_unpin(struct drm_gem_object *obj) +void etnaviv_gem_prime_unpin(struct drm_gem_object *obj) { if (!obj->import_attach) { struct drm_device *dev = obj->dev;
Hi Lucas
2015-04-02 17:30 GMT+02:00 Lucas Stach l.stach@pengutronix.de:
This is a simple rename without functional changes.
Signed-off-by: Lucas Stach l.stach@pengutronix.de
drivers/staging/etnaviv/etnaviv_drv.c | 10 +++++----- drivers/staging/etnaviv/etnaviv_drv.h | 12 ++++++------ drivers/staging/etnaviv/etnaviv_gem.c | 2 +- drivers/staging/etnaviv/etnaviv_gem_prime.c | 17 ++++++++--------- 4 files changed, 20 insertions(+), 21 deletions(-)
This change looks also very similar to this one:
https://github.com/austriancoder/linux/commit/6addb5f0a98e8670b703de787cc75b...
greets -- Christian Gmeiner, MSc
Signed-off-by: Lucas Stach l.stach@pengutronix.de --- drivers/staging/etnaviv/etnaviv_drv.c | 6 +++--- drivers/staging/etnaviv/etnaviv_drv.h | 7 +++---- drivers/staging/etnaviv/etnaviv_gem.c | 8 ++++---- drivers/staging/etnaviv/etnaviv_gem_submit.c | 2 +- drivers/staging/etnaviv/etnaviv_gpu.c | 2 +- drivers/staging/etnaviv/etnaviv_gpu.h | 4 ++-- 6 files changed, 14 insertions(+), 15 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index 5045dee1932d..713458bbf9c4 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -212,12 +212,12 @@ static int etnaviv_gem_show(struct drm_device *dev, struct seq_file *m) if (gpu) { seq_printf(m, "Active Objects (%s):\n", dev_name(gpu->dev)); - msm_gem_describe_objects(&gpu->active_list, m); + etnaviv_gem_describe_objects(&gpu->active_list, m); } }
seq_puts(m, "Inactive Objects:\n"); - msm_gem_describe_objects(&priv->inactive_list, m); + etnaviv_gem_describe_objects(&priv->inactive_list, m);
return 0; } @@ -437,7 +437,7 @@ static int etnaviv_ioctl_gem_info(struct drm_device *dev, void *data, if (!obj) return -ENOENT;
- args->offset = msm_gem_mmap_offset(obj); + args->offset = etnaviv_gem_mmap_offset(obj);
drm_gem_object_unreference_unlocked(obj);
diff --git a/drivers/staging/etnaviv/etnaviv_drv.h b/drivers/staging/etnaviv/etnaviv_drv.h index c4892badb33b..36fd56cfbe40 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.h +++ b/drivers/staging/etnaviv/etnaviv_drv.h @@ -74,7 +74,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma); int etnaviv_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf); -uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj); +uint64_t etnaviv_gem_mmap_offset(struct drm_gem_object *obj); int etnaviv_gem_get_iova_locked(struct etnaviv_gpu *gpu, struct drm_gem_object *obj, uint32_t *iova); int etnaviv_gem_get_iova(struct etnaviv_gpu *gpu, struct drm_gem_object *obj, @@ -111,9 +111,8 @@ bool etnaviv_cmd_validate_one(struct etnaviv_gpu *gpu, struct etnaviv_gem_object *obj, unsigned int offset, unsigned int size);
#ifdef CONFIG_DEBUG_FS -void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m); -void msm_gem_describe_objects(struct list_head *list, struct seq_file *m); -void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m); +void etnaviv_gem_describe(struct drm_gem_object *obj, struct seq_file *m); +void etnaviv_gem_describe_objects(struct list_head *list, struct seq_file *m); #endif
void __iomem *etnaviv_ioremap(struct platform_device *pdev, const char *name, diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index dd223b5e230c..053119f00b3e 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -288,7 +288,7 @@ static uint64_t mmap_offset(struct drm_gem_object *obj) return drm_vma_node_offset_addr(&obj->vma_node); }
-uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) +uint64_t etnaviv_gem_mmap_offset(struct drm_gem_object *obj) { uint64_t offset;
@@ -476,7 +476,7 @@ int etnaviv_gem_cpu_fini(struct drm_gem_object *obj) }
#ifdef CONFIG_DEBUG_FS -void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m) +void etnaviv_gem_describe(struct drm_gem_object *obj, struct seq_file *m) { struct drm_device *dev = obj->dev; struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj); @@ -491,7 +491,7 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m) off, etnaviv_obj->vaddr, obj->size); }
-void msm_gem_describe_objects(struct list_head *list, struct seq_file *m) +void etnaviv_gem_describe_objects(struct list_head *list, struct seq_file *m) { struct etnaviv_gem_object *etnaviv_obj; int count = 0; @@ -501,7 +501,7 @@ void msm_gem_describe_objects(struct list_head *list, struct seq_file *m) struct drm_gem_object *obj = &etnaviv_obj->base;
seq_puts(m, " "); - msm_gem_describe(obj, m); + etnaviv_gem_describe(obj, m); count++; size += obj->size; } diff --git a/drivers/staging/etnaviv/etnaviv_gem_submit.c b/drivers/staging/etnaviv/etnaviv_gem_submit.c index 2edaa1262fef..0c84e8bf782c 100644 --- a/drivers/staging/etnaviv/etnaviv_gem_submit.c +++ b/drivers/staging/etnaviv/etnaviv_gem_submit.c @@ -24,7 +24,7 @@ */
#define BO_INVALID_FLAGS ~(ETNA_SUBMIT_BO_READ | ETNA_SUBMIT_BO_WRITE) -/* make sure these don't conflict w/ MSM_SUBMIT_BO_x */ +/* make sure these don't conflict w/ ETNAVIV_SUBMIT_BO_x */ #define BO_LOCKED 0x4000 #define BO_PINNED 0x2000
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index abadfecb447d..82736f6a7c47 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -707,7 +707,7 @@ static void hangcheck_timer_reset(struct etnaviv_gpu *gpu) { DBG("%s", dev_name(gpu->dev)); mod_timer(&gpu->hangcheck_timer, - round_jiffies_up(jiffies + DRM_MSM_HANGCHECK_JIFFIES)); + round_jiffies_up(jiffies + DRM_ETNAVIV_HANGCHECK_JIFFIES)); }
static void hangcheck_handler(unsigned long data) diff --git a/drivers/staging/etnaviv/etnaviv_gpu.h b/drivers/staging/etnaviv/etnaviv_gpu.h index 9465f7f56cdf..b8332a981b7d 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.h +++ b/drivers/staging/etnaviv/etnaviv_gpu.h @@ -124,8 +124,8 @@ struct etnaviv_gpu { struct clk *clk_shader;
/* Hang Detction: */ -#define DRM_MSM_HANGCHECK_PERIOD 500 /* in ms */ -#define DRM_MSM_HANGCHECK_JIFFIES msecs_to_jiffies(DRM_MSM_HANGCHECK_PERIOD) +#define DRM_ETNAVIV_HANGCHECK_PERIOD 500 /* in ms */ +#define DRM_ETNAVIV_HANGCHECK_JIFFIES msecs_to_jiffies(DRM_ETNAVIV_HANGCHECK_PERIOD) struct timer_list hangcheck_timer; uint32_t hangcheck_fence; uint32_t hangcheck_dma_addr;
From: Christian Gmeiner christian.gmeiner@gmail.com
There is no need to spam the kernel logs with the GPU specs and features at startup. If someone wants to know about this stuff, debugfs is the right place to look.
Also use better format specifiers to make it easier for humans to read.
Signed-off-by: Christian Gmeiner christian.gmeiner@gmail.com Signed-off-by: Lucas Stach l.stach@pengutronix.de --- lst: - added commit message - squashed some more changes in to quieten down log spamming --- drivers/staging/etnaviv/etnaviv_gpu.c | 66 ++++++++++++++++++----------------- 1 file changed, 34 insertions(+), 32 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 82736f6a7c47..4da03f2d2dfa 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -186,27 +186,6 @@ static void etnaviv_hw_specs(struct etnaviv_gpu *gpu) gpu->identity.instruction_count = 256; break; } - - dev_info(gpu->dev, "stream_count: %x\n", - gpu->identity.stream_count); - dev_info(gpu->dev, "register_max: %x\n", - gpu->identity.register_max); - dev_info(gpu->dev, "thread_count: %x\n", - gpu->identity.thread_count); - dev_info(gpu->dev, "vertex_cache_size: %x\n", - gpu->identity.vertex_cache_size); - dev_info(gpu->dev, "shader_core_count: %x\n", - gpu->identity.shader_core_count); - dev_info(gpu->dev, "pixel_pipes: %x\n", - gpu->identity.pixel_pipes); - dev_info(gpu->dev, "vertex_output_buffer_size: %x\n", - gpu->identity.vertex_output_buffer_size); - dev_info(gpu->dev, "buffer_size: %x\n", - gpu->identity.buffer_size); - dev_info(gpu->dev, "instruction_count: %x\n", - gpu->identity.instruction_count); - dev_info(gpu->dev, "num_constants: %x\n", - gpu->identity.num_constants); }
static void etnaviv_hw_identify(struct etnaviv_gpu *gpu) @@ -251,8 +230,8 @@ static void etnaviv_hw_identify(struct etnaviv_gpu *gpu) } }
- dev_info(gpu->dev, "model: %x\n", gpu->identity.model); - dev_info(gpu->dev, "revision: %x\n", gpu->identity.revision); + dev_info(gpu->dev, "model: GC%x, revision: %x\n", + gpu->identity.model, gpu->identity.revision);
gpu->identity.features = gpu_read(gpu, VIVS_HI_CHIP_FEATURE);
@@ -285,15 +264,6 @@ static void etnaviv_hw_identify(struct etnaviv_gpu *gpu) gpu_read(gpu, VIVS_HI_CHIP_MINOR_FEATURE_3); }
- dev_info(gpu->dev, "minor_features: %x\n", - gpu->identity.minor_features0); - dev_info(gpu->dev, "minor_features1: %x\n", - gpu->identity.minor_features1); - dev_info(gpu->dev, "minor_features2: %x\n", - gpu->identity.minor_features2); - dev_info(gpu->dev, "minor_features3: %x\n", - gpu->identity.minor_features3); - /* GC600 idle register reports zero bits where modules aren't present */ if (gpu->identity.model == chipModel_GC600) { gpu->idle_mask = VIVS_HI_IDLE_STATE_TX | @@ -582,6 +552,38 @@ void etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct seq_file *m)
verify_dma(gpu, &debug);
+ seq_puts(m, "\tfeatures\n"); + seq_printf(m, "\t minor_features0: 0x%08x\n", + gpu->identity.minor_features0); + seq_printf(m, "\t minor_features1: 0x%08x\n", + gpu->identity.minor_features1); + seq_printf(m, "\t minor_features2: 0x%08x\n", + gpu->identity.minor_features2); + seq_printf(m, "\t minor_features3: 0x%08x\n", + gpu->identity.minor_features3); + + seq_puts(m, "\tspecs\n"); + seq_printf(m, "\t stream_count: %d\n", + gpu->identity.stream_count); + seq_printf(m, "\t register_max: %d\n", + gpu->identity.register_max); + seq_printf(m, "\t thread_count: %d\n", + gpu->identity.thread_count); + seq_printf(m, "\t vertex_cache_size: %d\n", + gpu->identity.vertex_cache_size); + seq_printf(m, "\t shader_core_count: %d\n", + gpu->identity.shader_core_count); + seq_printf(m, "\t pixel_pipes: %d\n", + gpu->identity.pixel_pipes); + seq_printf(m, "\t vertex_output_buffer_size: %d\n", + gpu->identity.vertex_output_buffer_size); + seq_printf(m, "\t buffer_size: %d\n", + gpu->identity.buffer_size); + seq_printf(m, "\t instruction_count: %d\n", + gpu->identity.instruction_count); + seq_printf(m, "\t num_constants: %d\n", + gpu->identity.num_constants); + seq_printf(m, "\taxi: 0x%08x\n", axi); seq_printf(m, "\tidle: 0x%08x\n", idle); idle |= ~gpu->idle_mask & ~VIVS_HI_IDLE_STATE_AXI_LP;
Signed-off-by: Lucas Stach l.stach@pengutronix.de --- drivers/staging/etnaviv/etnaviv_buffer.c | 2 +- drivers/staging/etnaviv/etnaviv_cmd_parser.c | 16 ++++++++++++++++ drivers/staging/etnaviv/etnaviv_drv.c | 3 +-- drivers/staging/etnaviv/etnaviv_drv.h | 3 +-- drivers/staging/etnaviv/etnaviv_gem.c | 3 +-- drivers/staging/etnaviv/etnaviv_gem.h | 3 +-- drivers/staging/etnaviv/etnaviv_gem_submit.c | 3 +-- drivers/staging/etnaviv/etnaviv_gpu.c | 3 +-- drivers/staging/etnaviv/etnaviv_gpu.h | 3 +-- drivers/staging/etnaviv/etnaviv_mmu.c | 3 +-- drivers/staging/etnaviv/etnaviv_mmu.h | 3 +-- 11 files changed, 26 insertions(+), 19 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_buffer.c b/drivers/staging/etnaviv/etnaviv_buffer.c index 7c8014f07249..41a7c7a0eda1 100644 --- a/drivers/staging/etnaviv/etnaviv_buffer.c +++ b/drivers/staging/etnaviv/etnaviv_buffer.c @@ -1,5 +1,5 @@ /* - * Copyright (C) 2014 2014 Etnaviv Project + * Copyright (C) 2014 Etnaviv Project * Author: Christian Gmeiner christian.gmeiner@gmail.com * * This program is free software; you can redistribute it and/or modify it diff --git a/drivers/staging/etnaviv/etnaviv_cmd_parser.c b/drivers/staging/etnaviv/etnaviv_cmd_parser.c index 61370d3ebf9d..2607771efe07 100644 --- a/drivers/staging/etnaviv/etnaviv_cmd_parser.c +++ b/drivers/staging/etnaviv/etnaviv_cmd_parser.c @@ -1,3 +1,19 @@ +/* + * Copyright (C) 2015 Etnaviv Project + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 as published by + * the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see http://www.gnu.org/licenses/. + */ + #include <linux/kernel.h>
#include "etnaviv_gem.h" diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index 713458bbf9c4..dd74425ff94b 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -1,6 +1,5 @@ /* - * Copyright (C) 2013 Red Hat - * Author: Rob Clark robdclark@gmail.com + * Copyright (C) 2015 Etnaviv Project * * This program is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License version 2 as published by diff --git a/drivers/staging/etnaviv/etnaviv_drv.h b/drivers/staging/etnaviv/etnaviv_drv.h index 36fd56cfbe40..a8ba22a3c7a1 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.h +++ b/drivers/staging/etnaviv/etnaviv_drv.h @@ -1,6 +1,5 @@ /* - * Copyright (C) 2013 Red Hat - * Author: Rob Clark robdclark@gmail.com + * Copyright (C) 2015 Etnaviv Project * * This program is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License version 2 as published by diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c index 053119f00b3e..67a8b5120f31 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.c +++ b/drivers/staging/etnaviv/etnaviv_gem.c @@ -1,6 +1,5 @@ /* - * Copyright (C) 2013 Red Hat - * Author: Rob Clark robdclark@gmail.com + * Copyright (C) 2015 Etnaviv Project * * This program is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License version 2 as published by diff --git a/drivers/staging/etnaviv/etnaviv_gem.h b/drivers/staging/etnaviv/etnaviv_gem.h index 6e0822674c8e..198a12b0ea03 100644 --- a/drivers/staging/etnaviv/etnaviv_gem.h +++ b/drivers/staging/etnaviv/etnaviv_gem.h @@ -1,6 +1,5 @@ /* - * Copyright (C) 2013 Red Hat - * Author: Rob Clark robdclark@gmail.com + * Copyright (C) 2015 Etnaviv Project * * This program is free software; you can redistribute it and/or modify it * under the terms of the GNU General 
Public License version 2 as published by diff --git a/drivers/staging/etnaviv/etnaviv_gem_submit.c b/drivers/staging/etnaviv/etnaviv_gem_submit.c index 0c84e8bf782c..5a2343084ca8 100644 --- a/drivers/staging/etnaviv/etnaviv_gem_submit.c +++ b/drivers/staging/etnaviv/etnaviv_gem_submit.c @@ -1,6 +1,5 @@ /* - * Copyright (C) 2013 Red Hat - * Author: Rob Clark robdclark@gmail.com + * Copyright (C) 2015 Etnaviv Project * * This program is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License version 2 as published by diff --git a/drivers/staging/etnaviv/etnaviv_gpu.c b/drivers/staging/etnaviv/etnaviv_gpu.c index 4da03f2d2dfa..b15b7c3c938c 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.c +++ b/drivers/staging/etnaviv/etnaviv_gpu.c @@ -1,6 +1,5 @@ /* - * Copyright (C) 2013 Red Hat - * Author: Rob Clark robdclark@gmail.com + * Copyright (C) 2015 Etnaviv Project * * This program is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License version 2 as published by diff --git a/drivers/staging/etnaviv/etnaviv_gpu.h b/drivers/staging/etnaviv/etnaviv_gpu.h index b8332a981b7d..12c62593b42b 100644 --- a/drivers/staging/etnaviv/etnaviv_gpu.h +++ b/drivers/staging/etnaviv/etnaviv_gpu.h @@ -1,6 +1,5 @@ /* - * Copyright (C) 2013 Red Hat - * Author: Rob Clark robdclark@gmail.com + * Copyright (C) 2015 Etnaviv Project * * This program is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License version 2 as published by diff --git a/drivers/staging/etnaviv/etnaviv_mmu.c b/drivers/staging/etnaviv/etnaviv_mmu.c index a59d27a2adfe..ea6874eac574 100644 --- a/drivers/staging/etnaviv/etnaviv_mmu.c +++ b/drivers/staging/etnaviv/etnaviv_mmu.c @@ -1,6 +1,5 @@ /* - * Copyright (C) 2013 Red Hat - * Author: Rob Clark robdclark@gmail.com + * Copyright (C) 2015 Etnaviv Project * * This program is free software; you can redistribute it and/or 
modify it * under the terms of the GNU General Public License version 2 as published by diff --git a/drivers/staging/etnaviv/etnaviv_mmu.h b/drivers/staging/etnaviv/etnaviv_mmu.h index ca509441c76c..c437063c7316 100644 --- a/drivers/staging/etnaviv/etnaviv_mmu.h +++ b/drivers/staging/etnaviv/etnaviv_mmu.h @@ -1,6 +1,5 @@ /* - * Copyright (C) 2013 Red Hat - * Author: Rob Clark robdclark@gmail.com + * Copyright (C) 2015 Etnaviv Project * * This program is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License version 2 as published by
- correct license to the proper GPLv2 - add correct author names - remove double MODULE_DEVICE_TABLE - update driver date
Signed-off-by: Lucas Stach l.stach@pengutronix.de --- drivers/staging/etnaviv/etnaviv_drv.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/drivers/staging/etnaviv/etnaviv_drv.c b/drivers/staging/etnaviv/etnaviv_drv.c index dd74425ff94b..6301aeb77c28 100644 --- a/drivers/staging/etnaviv/etnaviv_drv.c +++ b/drivers/staging/etnaviv/etnaviv_drv.c @@ -554,7 +554,7 @@ static struct drm_driver etnaviv_drm_driver = { .fops = &fops, .name = "etnaviv", .desc = "etnaviv DRM", - .date = "20130625", + .date = "20150302", .major = 1, .minor = 0, }; @@ -639,7 +639,7 @@ static struct platform_driver etnaviv_platform_driver = { .remove = etnaviv_pdev_remove, .driver = { .owner = THIS_MODULE, - .name = "vivante", + .name = "etnaviv", .of_match_table = dt_match, }, }; @@ -667,8 +667,9 @@ static void __exit etnaviv_exit(void) } module_exit(etnaviv_exit);
-MODULE_AUTHOR("Rob Clark <robdclark@gmail.com"); +MODULE_AUTHOR("Christian Gmeiner christian.gmeiner@gmail.com"); +MODULE_AUTHOR("Russell King rmk+kernel@arm.linux.org.uk"); +MODULE_AUTHOR("Lucas Stach l.stach@pengutronix.de"); MODULE_DESCRIPTION("etnaviv DRM Driver"); -MODULE_LICENSE("GPL"); -MODULE_ALIAS("platform:vivante"); -MODULE_DEVICE_TABLE(of, dt_match); +MODULE_LICENSE("GPL v2"); +MODULE_ALIAS("platform:etnaviv");
This adds the device nodes for 2D, 3D and VG GPU cores.
Signed-off-by: Lucas Stach l.stach@pengutronix.de --- arch/arm/boot/dts/imx6dl.dtsi | 5 +++++ arch/arm/boot/dts/imx6q.dtsi | 14 ++++++++++++++ arch/arm/boot/dts/imx6qdl.dtsi | 19 +++++++++++++++++++ 3 files changed, 38 insertions(+)
diff --git a/arch/arm/boot/dts/imx6dl.dtsi b/arch/arm/boot/dts/imx6dl.dtsi index f94bf72832af..188a2cfb3073 100644 --- a/arch/arm/boot/dts/imx6dl.dtsi +++ b/arch/arm/boot/dts/imx6dl.dtsi @@ -104,6 +104,11 @@ compatible = "fsl,imx-display-subsystem"; ports = <&ipu1_di0>, <&ipu1_di1>; }; + + gpu-subsystem { + compatible = "fsl,imx-gpu-subsystem"; + cores = <&gpu_2d>, <&gpu_3d>; + }; };
&hdmi { diff --git a/arch/arm/boot/dts/imx6q.dtsi b/arch/arm/boot/dts/imx6q.dtsi index 93ec79bb6b35..94518103c544 100644 --- a/arch/arm/boot/dts/imx6q.dtsi +++ b/arch/arm/boot/dts/imx6q.dtsi @@ -153,6 +153,15 @@ status = "disabled"; };
+ gpu_vg: gpu@02204000 { + compatible = "vivante,gc"; + reg = <0x02204000 0x4000>; + interrupts = <0 11 IRQ_TYPE_LEVEL_HIGH>; + clocks = <&clks IMX6QDL_CLK_OPENVG_AXI>, + <&clks IMX6QDL_CLK_GPU2D_CORE>; + clock-names = "bus", "core"; + }; + ipu2: ipu@02800000 { #address-cells = <1>; #size-cells = <0>; @@ -225,6 +234,11 @@ compatible = "fsl,imx-display-subsystem"; ports = <&ipu1_di0>, <&ipu1_di1>, <&ipu2_di0>, <&ipu2_di1>; }; + + gpu-subsystem { + compatible = "fsl,imx-gpu-subsystem"; + cores = <&gpu_2d>, <&gpu_3d>, <&gpu_vg>; + }; };
&hdmi { diff --git a/arch/arm/boot/dts/imx6qdl.dtsi b/arch/arm/boot/dts/imx6qdl.dtsi index d6c69ec44314..6fcd1138b532 100644 --- a/arch/arm/boot/dts/imx6qdl.dtsi +++ b/arch/arm/boot/dts/imx6qdl.dtsi @@ -118,6 +118,25 @@ status = "disabled"; };
+ gpu_3d: gpu@00130000 { + compatible = "vivante,gc"; + reg = <0x00130000 0x4000>; + interrupts = <0 9 IRQ_TYPE_LEVEL_HIGH>; + clocks = <&clks IMX6QDL_CLK_GPU3D_AXI>, + <&clks IMX6QDL_CLK_GPU3D_CORE>, + <&clks IMX6QDL_CLK_GPU3D_SHADER>; + clock-names = "bus", "core", "shader"; + }; + + gpu_2d: gpu@00134000 { + compatible = "vivante,gc"; + reg = <0x00134000 0x4000>; + interrupts = <0 10 IRQ_TYPE_LEVEL_HIGH>; + clocks = <&clks IMX6QDL_CLK_GPU2D_AXI>, + <&clks IMX6QDL_CLK_GPU2D_CORE>; + clock-names = "bus", "core"; + }; + timer@00a00600 { compatible = "arm,cortex-a9-twd-timer"; reg = <0x00a00600 0x20>;
On Thu, Apr 02, 2015 at 05:29:02PM +0200, Lucas Stach wrote:
Hey all,
this is the Etnaviv DRM driver for Vivante embedded GPUs. It is heavily influenced by the MSM driver, as can be clearly seen with the first commits.
You should be copying Greg KH for staging too.
On Thursday, 2015-04-02 at 16:43 +0100, Russell King - ARM Linux wrote:
You should be copying Greg KH for staging too.
I didn't do that on purpose. As stated below in the cover letter I'm not really happy to push things into staging. Especially after the experience with imx-drm, where it took us a considerable amount of work to even get people to look at the code after it had landed in staging.
If possible I would like to collect feedback now and only if someone feels genuinely unhappy about this code living under drivers/gpu/drm then keep it in staging. Otherwise I would like to move it when removing the RFC from this patchset.
Regards, Lucas
On Thu, Apr 2, 2015 at 10:59 AM, Lucas Stach l.stach@pengutronix.de wrote:
I didn't do that on purpose. As stated below in the cover letter I'm not really happy to push things into staging. Especially after the experience with imx-drm, where it took us a considerable amount of work to even get people to look at the code after it had landed in staging.
If possible I would like to collect feedback now and only if someone feels genuinely unhappy about this code living under drivers/gpu/drm then keep it in staging. Otherwise I would like to move it when removing the RFC from this patchset.
Looks good so far Lucas on my wand quad..
Where's your libdrm/mesa tree located that you've been working on?
debian@arm:~$ uname -r 4.0.0-rc6-armv7-devel-r24 debian@arm:~$ dmesg | grep etnaviv [ 3.711866] etnaviv gpu-subsystem: bound 134000.gpu (ops gpu_ops) [ 3.718015] etnaviv gpu-subsystem: bound 130000.gpu (ops gpu_ops) [ 3.724133] etnaviv gpu-subsystem: bound 2204000.gpu (ops gpu_ops) [ 3.730351] etnaviv-gpu 134000.gpu: model: GC320, revision: 5007 [ 3.771045] etnaviv-gpu 130000.gpu: model: GC2000, revision: 5108 [ 3.802887] etnaviv-gpu 2204000.gpu: model: GC355, revision: 1215 [ 3.848287] [drm] Initialized etnaviv 1.0.0 20150302 on minor 1
Regards,
2015-04-02 22:01 GMT+02:00 Robert Nelson robertcnelson@gmail.com:
Where can we find the userspace (libdrm, mesa, ..)?
-- Christian Gmeiner, MSc
Hi Lucas,
Thanks for the patch series! It sounds great, even if it probably needs to be squashed down to fewer patches, as most of the patches are fixes.
Where's your libdrm/mesa tree located that you've been working on?
Exactly the same question: I would like to give it a try on my board. How can I test it?
Thanks, JM
On Thu, Apr 02, 2015 at 05:59:36PM +0200, Lucas Stach wrote:
On Thursday, 02.04.2015 at 16:43 +0100, Russell King - ARM Linux wrote:
On Thu, Apr 02, 2015 at 05:29:02PM +0200, Lucas Stach wrote:
Hey all,
this is the Etnaviv DRM driver for Vivante embedded GPUs. It is heavily influenced by the MSM driver, as can be clearly seen with the first commits.
You should be copying Greg KH for staging too.
I didn't do that on purpose. As stated below in the cover letter I'm not really happy to push things into staging. Especially after the experience with imx-drm, where it took us a considerable amount of work to even get people to look at the code after it had landed in staging.
If possible I would like to collect feedback now, and keep it in staging only if someone feels genuinely unhappy about this code living under drivers/gpu/drm. Otherwise I would like to move it when removing the RFC from this patchset.
If you want feedback, can you please squash down the patch series so that it's clean? I don't like reading drivers and spotting problems just to realize that a few patches down an issue is fixed. -Daniel
Hello,
I've just built and booted the Etnaviv driver as a module with kernel 4.0.4.
When I unload the driver with rmmod an oops happens:
------------[ cut here ]------------
WARNING: CPU: 1 PID: 2192 at drivers/staging/etnaviv/etnaviv_gem.c:404 etnaviv_gem_paddr_locked+0x30/0x38 [etnaviv]()
Modules linked in: nft_reject_ipv6 nf_reject_ipv6 nf_log_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6 nf_tables_ipv6 nft_reject_ipv4 nf_reject_ipv4 nft_reject nf_log_ipv4 nf_log_common nft_log nft_counter nft_meta nf_conntrack_ipv4 nf_defrag_ipv4 nft_ct nf_conntrack nft_hash nft_rbtree nf_tables_ipv4 nf_tables nfnetlink bridge stp llc rfcomm bnep hci_uart sch_sfq nfsd auth_rpcgss lockd grace sunrpc imx_ipuv3_crtc dw_hdmi_imx dw_hdmi evdev imx_ipu_v3 brcmfmac imxdrm brcmutil ci_hdrc_imx ci_hdrc imx_thermal etnaviv(C-) usbmisc_imx drm_kms_helper gpio_keys
CPU: 1 PID: 2192 Comm: rmmod Tainted: G C 4.0.4-wandq-00210-g9240da9 #325
Hardware name: Freescale i.MX6 Quad/DualLite (Device Tree)
[<8010cd14>] (unwind_backtrace) from [<80109778>] (show_stack+0x10/0x14)
[<80109778>] (show_stack) from [<8068ed1c>] (dump_stack+0x94/0xd4)
[<8068ed1c>] (dump_stack) from [<80126a54>] (warn_slowpath_common+0x84/0xb4)
[<80126a54>] (warn_slowpath_common) from [<80126b20>] (warn_slowpath_null+0x1c/0x24)
[<80126b20>] (warn_slowpath_null) from [<7f059d28>] (etnaviv_gem_paddr_locked+0x30/0x38 [etnaviv])
[<7f059d28>] (etnaviv_gem_paddr_locked [etnaviv]) from [<7f05b198>] (etnaviv_gpu_hw_init+0xb4/0x18c [etnaviv])
[<7f05b198>] (etnaviv_gpu_hw_init [etnaviv]) from [<7f05bbf0>] (etnaviv_gpu_rpm_resume+0x70/0xcc [etnaviv])
[<7f05bbf0>] (etnaviv_gpu_rpm_resume [etnaviv]) from [<80413044>] (__rpm_callback+0x2c/0x60)
[<80413044>] (__rpm_callback) from [<80413098>] (rpm_callback+0x20/0x80)
[<80413098>] (rpm_callback) from [<80413e50>] (rpm_resume+0x350/0x524)
[<80413e50>] (rpm_resume) from [<80414070>] (__pm_runtime_resume+0x4c/0x64)
[<80414070>] (__pm_runtime_resume) from [<8040b9a8>] (__device_release_driver+0x1c/0xc4)
[<8040b9a8>] (__device_release_driver) from [<8040c0f4>] (driver_detach+0xac/0xb0)
[<8040c0f4>] (driver_detach) from [<8040b75c>] (bus_remove_driver+0x4c/0xa0)
[<8040b75c>] (bus_remove_driver) from [<7f05dcc8>] (etnaviv_exit+0x10/0x348 [etnaviv])
[<7f05dcc8>] (etnaviv_exit [etnaviv]) from [<801839d4>] (SyS_delete_module+0x174/0x1b8)
[<801839d4>] (SyS_delete_module) from [<80106420>] (ret_fast_syscall+0x0/0x34)
---[ end trace e3e10844e84f28b3 ]---
Unable to handle kernel NULL pointer dereference at virtual address 00000058
pgd = d0430000
[00000058] *pgd=60452831, *pte=00000000, *ppte=00000000
Internal error: Oops: 17 [#1] PREEMPT SMP ARM
Modules linked in: nft_reject_ipv6 nf_reject_ipv6 nf_log_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6 nf_tables_ipv6 nft_reject_ipv4 nf_reject_ipv4 nft_reject nf_log_ipv4 nf_log_common nft_log nft_counter nft_meta nf_conntrack_ipv4 nf_defrag_ipv4 nft_ct nf_conntrack nft_hash nft_rbtree nf_tables_ipv4 nf_tables nfnetlink bridge stp llc rfcomm bnep hci_uart sch_sfq nfsd auth_rpcgss lockd grace sunrpc imx_ipuv3_crtc dw_hdmi_imx dw_hdmi evdev imx_ipu_v3 brcmfmac imxdrm brcmutil ci_hdrc_imx ci_hdrc imx_thermal etnaviv(C-) usbmisc_imx drm_kms_helper gpio_keys
CPU: 1 PID: 2192 Comm: rmmod Tainted: G WC 4.0.4-wandq-00210-g9240da9 #325
Hardware name: Freescale i.MX6 Quad/DualLite (Device Tree)
task: d48ce300 ti: d1f26000 task.ti: d1f26000
PC is at etnaviv_buffer_init+0x4/0xa0 [etnaviv]
LR is at etnaviv_gpu_hw_init+0x98/0x18c [etnaviv]
pc : [<7f05d4f0>]    lr : [<7f05b17c>]    psr: a00e0013
sp : d1f27e98  ip : 00000000  fp : 01c09d58
r10: 00000000  r9 : d1f26000  r8 : 00000004
r7 : 80a02100  r6 : 80411c80  r5 : dd75bfc0  r4 : dd61f410
r3 : 00000000  r2 : 00000730  r1 : 00000000  r0 : dd61f410
Flags: NzCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment user
Control: 10c5387d  Table: 6043004a  DAC: 00000015
Process rmmod (pid: 2192, stack limit = 0xd1f26210)
Stack: (0xd1f27e98 to 0xd1f28000)
7e80: 00000001 dd61f410
7ea0: dd75bfc0 7f05bbf0 7f05bb80 de120a10 de120a74 80413044 de120a10 00000000
7ec0: de120410 80413098 80411c80 de120a10 00000000 80413e50 01c09d58 8068bebc
7ee0: ddaf2e25 8068c028 de1bb3a4 7f0601a4 de1d4044 de120a10 de120a74 00000004
7f00: 600e0013 801065a4 d1f26000 80414070 7f0601a4 de120a10 7f0601a4 de120a44
7f20: 00000081 8040b9a8 7f0601a4 de120a10 7f0601a4 8040c0f4 7f0601a4 01c09d8c
7f40: 00000000 8040b75c 7f060240 7f05dcc8 7f05dcb8 801839d4 d4b29488 616e7465
7f60: 00766976 00000000 d48ce728 00000000 d48ce728 00000000 80a4230c d48ce300
7f80: 01c09d58 8013da84 d4b29480 d1f26000 801065a4 00f27fb0 00000006 01c09d58
7fa0: 7e868e68 80106420 01c09d58 7e868e68 01c09d8c 00000800 63760a00 63760a00
7fc0: 01c09d58 7e868e68 00000000 00000081 00000001 7e869078 0002e1a0 01c09d58
7fe0: 76e94400 7e868e14 000186d0 76e9440c 600e0010 01c09d8c 3136315b 203a5d39
[<7f05d4f0>] (etnaviv_buffer_init [etnaviv]) from [<00000001>] (0x1)
Code: eb4c93a7 e28dd014 e8bd8030 e5903050 (e5932058)
---[ end trace e3e10844e84f28b4 ]---
I've tried it two times, both ended with the same oops. So it seems to be reproducible (here).
I haven't had a deeper look at the source, but after a quick look I assume a fix isn't that hard.
Regards,
Alexander Holler
Hi Alexander,
On Wednesday, 27.05.2015 at 14:45 +0200, Alexander Holler wrote:
Hello,
I've just built and booted the Etnaviv driver as a module with kernel 4.0.4.
When I unload the driver with rmmod an oops happens:
Thanks for the report.
I'm currently working on the patchstack to get it into shape for another submission. I'll take a look at this problem.
Regards, Lucas
On Wed, May 27, 2015 at 02:49:17PM +0200, Lucas Stach wrote:
Hi Alexander,
On Wednesday, 27.05.2015 at 14:45 +0200, Alexander Holler wrote:
Hello,
I've just built and booted the Etnaviv driver as a module with kernel 4.0.4.
When I unload the driver with rmmod an oops happens:
Thanks for the report.
I'm currently working on the patchstack to get it into shape for another submission. I'll take a look at this problem.
Lucas,
We definitely need to talk about this... please can you take a look at my patch stack which I mentioned in my reply to Alexander. I know that you already based your patch set off one of my out-dated patch stacks, but unless you grab a more recent one, you're going to be solving a lot of bugs that I've fixed.
Also, I've heard via IRC that you've updated my DDX - which is something I've also done (I think I did mention that I would be doing this... and I also have a bunch of work on the Xrender backend.) I suspect that makes your work there redundant.
There are at least two more issues that need to be discussed. The first concerns the command buffers and their DMA-coherent nature, which makes the kernel command parser expensive. In my perf measurements, it's right at the top of the list as the most expensive function.
I've made some improvements to the parser which reduce its perf figure a little, but it's still up there as the number one hot function, in spite of the code being as tight as the compiler can manage.
This is probably because we're reading from uncached memory: the CPU can't speculatively prefetch into the cache any of the data from memory.
It's _probably_ (I haven't benchmarked it) going to be faster to copy_from_user() the command buffers into the kernel, either directly into DMA coherent memory, or into cacheable memory, and then run them through the command parser, followed by either a copy to DMA coherent memory or using the DMA streaming API to push the cache lines out.
This has other advantages: by not directly exposing to userspace the memory from which the GPU executes its command stream, we prevent submit-then-change "attacks" bypassing the kernel command parser.
Another issue is that we incur quite a heavy overhead if we allocate an etnadrm buffer, and then map it into userspace. We end up allocating all pages for the buffer as soon as any page is faulted in (consider if it's a 1080p buffer...) and then setup the scatterlists. That's useless overhead if we later decide that we're not going to pass it to the GPU (eg, because Xorg allocated a pixmap which it then only performed CPU operations on.) I have "changes" for this which aren't prepared as proper patches yet (in other words, they're currently as a playground of changes.)
I have other improvements pending which need proper perf analysis before I can sort them out properly. These change the flow for a submitted command set - reading all data from userspace before taking dev->struct_mutex, since holding this lock over a page fault isn't particularly graceful. Do we really want to stall graphics on the entire GPU while some process' page happens to be swapped out to disk? I suspect not.
Hi Russell, et al.
first let me say I'm sorry that I have been less responsive than some of you would have hoped. I'm trying to get better at that, but juggling a large pile of components where none of them are currently mainline isn't an easy task, especially with downstream priorities not quite lining up with the mainlining efforts. I'm trying to put better time management in place, with slots reserved for the mainline stuff.
To make this a bit easier for me I would like to ask you to please keep the relevant discussions on email. Following another communication channel like IRC just makes things harder for me. I'm prepared to take critique on public mail, no need to keep things hidden. ;)
On Thursday, 28.05.2015 at 00:03 +0100, Russell King - ARM Linux wrote:
On Wed, May 27, 2015 at 02:49:17PM +0200, Lucas Stach wrote:
Hi Alexander,
On Wednesday, 27.05.2015 at 14:45 +0200, Alexander Holler wrote:
Hello,
I've just built and booted the Etnaviv driver as a module with kernel 4.0.4.
When I unload the driver with rmmod an oops happens:
Thanks for the report.
I'm currently working on the patchstack to get it into shape for another submission. I'll take a look at this problem.
Lucas,
We definitely need to talk about this... please can you take a look at my patch stack which I mentioned in my reply to Alexander. I know that you already based your patch set off one of my out-dated patch stacks, but unless you grab a more recent one, you're going to be solving a lot of bugs that I've fixed.
Right, thanks for the pointer. I took a look at this and it doesn't seem to clash with the things I've done so far, so it should be pretty easy for me to resync your changes into a common tree.
Also, I've heard via IRC that you've updated my DDX - which is something I've also done (I think I did mention that I would be doing this... and I also have a bunch of work on the Xrender backend.) I suspect that makes your work there redundant.
I've minimally beaten it into shape to work on top of the new kernel driver and fixed some XV accel bugs when used with GStreamer. I'm also looking at accelerating rotated blits to speed up the rotated display case. I consider none of this to be stable and tested enough to push it out right now. I'll keep watching the things you do to the driver and ping you if I have anything worthwhile to integrate.
There are at least two more issues that need to be discussed. The first concerns the command buffers and their DMA-coherent nature, which makes the kernel command parser expensive. In my perf measurements, it's right at the top of the list as the most expensive function.
I've made some improvements to the parser which reduce its perf figure a little, but it's still up there as the number one hot function, in spite of the code being as tight as the compiler can manage.
This is probably because we're reading from uncached memory: the CPU can't speculatively prefetch into the cache any of the data from memory.
Yes, I've already seen this coming up in the traces.
I was thinking about exposing cached memory to userspace for the command streams and then copying the commands into a write-combined buffer while validating them. This isn't far from your idea and I think it should let us optimize the CPU access for validation and buffer reloc patching, while avoiding the need to do additional cache synchronization and plugging the submit-then-change attack vector.
It's _probably_ (I haven't benchmarked it) going to be faster to copy_from_user() the command buffers into the kernel, either directly into DMA coherent memory, or into cacheable memory, and then run them through the command parser, followed by either a copy to DMA coherent memory or using the DMA streaming API to push the cache lines out.
This has other advantages: by not directly exposing to userspace the memory from which the GPU executes its command stream, we prevent submit-then-change "attacks" bypassing the kernel command parser.
Another issue is that we incur quite a heavy overhead if we allocate an etnadrm buffer, and then map it into userspace. We end up allocating all pages for the buffer as soon as any page is faulted in (consider if it's a 1080p buffer...) and then setup the scatterlists. That's useless overhead if we later decide that we're not going to pass it to the GPU (eg, because Xorg allocated a pixmap which it then only performed CPU operations on.) I have "changes" for this which aren't prepared as proper patches yet (in other words, they're currently as a playground of changes.)
I think this is a userspace problem. Userspace should not allocate any GEM objects until it is clear that the buffer is going to be used by the GPU. Experience with the Intel DDX shows that a crucial part for good X accel performance is to have a proper strategy for migrating pixmaps between the CPU and GPU in place, even with UMA.
I have other improvements pending which need proper perf analysis before I can sort them out properly. These change the flow for a submitted command set - reading all data from userspace before taking dev->struct_mutex, since holding this lock over a page fault isn't particularly graceful. Do we really want to stall graphics on the entire GPU while some process' page happens to be swapped out to disk? I suspect not.
As long as those changes don't incur any userspace visible changes it should be safe to keep them in the playground until the basics are mainline. Nonetheless I would be grateful if you could point me to those patches, so I can get a picture of how those changes would look like.
Regards, Lucas
On Tue, Jun 09, 2015 at 12:13:05PM +0200, Lucas Stach wrote:
To make this a bit easier for me I would like to ask you to please keep the relevant discussions on email. Following another communication channel like IRC just makes things harder for me. I'm prepared to take critique on public mail, no need to keep things hidden. ;)
Unfortunately, a lot of discussion does happen on IRC - that's why there exists a #etnaviv channel. It's not that it's hidden, it's just that you're only there very rarely.
On Thursday, 28.05.2015 at 00:03 +0100, Russell King - ARM Linux wrote:
Lucas,
We definitely need to talk about this... please can you take a look at my patch stack which I mentioned in my reply to Alexander. I know that you already based your patch set off one of my out-dated patch stacks, but unless you grab a more recent one, you're going to be solving a lot of bugs that I've fixed.
Right, thanks for the pointer. I took a look at this and it doesn't seem to clash with the things I've done so far, so it should be pretty easy for me to resync your changes into a common tree.
That's good.
Also, I've heard via IRC that you've updated my DDX - which is something I've also done (I think I did mention that I would be doing this... and I also have a bunch of work on the Xrender backend.) I suspect that makes your work there redundant.
I've minimally beaten it into shape to work on top of the new kernel driver and fixed some XV accel bugs when used with GStreamer.
What bugs are they? It would be nice to have at least bug fixes sent in a timely manner.
I'm also looking at accelerating rotated blits to speed up the rotated display case. I consider none of this to be stable and tested enough to push it out right now. I'll keep watching the things you do to the driver and ping you if I have anything worthwhile to integrate.
Rotated blits for general pixmaps don't make sense.
The only place where this becomes useful is in Xrender's composite method, where we have picture transforms to deal with.
However, I foresee problems there, as miComputeCompositeRegion() doesn't take account of any rotations. (The source pict->clientClip is in unrotated coordinates looking at miValidatePicture(), and miClipPictureSrc() will apply that clip region in an untranslated manner to the destination.) Even considering flips, I'm not sure it's valid to use the region computed by miComputeCompositeRegion(), as the computed region would appear (at least to me) to be wrong. Maybe I'm misunderstanding something in the Xrender extension though.
I have some experimental code I'm running on iMX6 which implements the Xrandr H and V flips, as well as 180° rotation (iow, H and V flip) but the conclusion I've come to is it will probably be much better (and more efficient) to use any abilities of the KMS driver to do the final rotation/flipping rather than having the GPU do this. Couple this with my point above, and it's the reason why I haven't put much effort into that bit.
Another issue is that we incur quite a heavy overhead if we allocate an etnadrm buffer, and then map it into userspace. We end up allocating all pages for the buffer as soon as any page is faulted in (consider if it's a 1080p buffer...) and then setup the scatterlists. That's useless overhead if we later decide that we're not going to pass it to the GPU (eg, because Xorg allocated a pixmap which it then only performed CPU operations on.) I have "changes" for this which aren't prepared as proper patches yet (in other words, they're currently as a playground of changes.)
I think this is a userspace problem. Userspace should not allocate any GEM objects until it is clear that the buffer is going to be used by the GPU. Experience with the Intel DDX shows that a crucial part for good X accel performance is to have a proper strategy for migrating pixmaps between the CPU and GPU in place, even with UMA.
I don't think so. There are certain cases where we know that a pixmap is never going to be supported by the GPU (eg, unsupported format) but there are a lot of cases where we just don't know.
The problem is that bouncing a 1080p pixmap between the CPU and GPU is expensive. If we're in the situation where we have multiple different buffers for a single pixmap (eg, one for the fb code, one for the GPU) then we'll be forever copying pixels between the two, which just adds to the overhead. Yes, I'm aware that i915 does damage tracking to reduce the amount of copying, but that still doesn't get away from reading from an uncached pixmap.
(btw, when this code runs on Armada DRM, it's running with fully cached pixmaps allocated by Armada DRM, it doesn't use the Etnaviv allocator, and it doesn't suffer from these overheads as a result.)
I have other improvements pending which need proper perf analysis before I can sort them out properly. These change the flow for a submitted command set - reading all data from userspace before taking dev->struct_mutex, since holding this lock over a page fault isn't particularly graceful. Do we really want to stall graphics on the entire GPU while some process' page happens to be swapped out to disk? I suspect not.
As long as those changes don't incur any userspace visible changes it should be safe to keep them in the playground until the basics are mainline. Nonetheless I would be grateful if you could point me to those patches, so I can get a picture of how those changes would look like.
They don't, but the one we do need to solve before we can think about mainlining anything is the cache status of the command buffers, and whether we still use bos in userspace for those. If we're going to switch to reading via copy_from_user() (which has the advantage that it'll work on virtual caches - having userspace write via the userspace alias, and then trying to read from the kernel space alias is fraught with problems for systems with virtual caches) then there's no point in having the commands stored in a bo.
On Wed, May 27, 2015 at 02:45:48PM +0200, Alexander Holler wrote:
Hello,
I've just built and booted the Etnaviv driver as a module with kernel 4.0.4.
You may wish to try using my patch set(s) at (url purposely obfuscated, sorry):
http : // www . home . arm . linux . org . uk / ~rmk / cubox
which is where I publish my Cubox-i/hummingboard patches. My advice would be to grab the latest tarball (even though it's against 4.1-rc1), and apply the etnaviv patches from it, and the appropriate DT update patches.
This has all Lucas' API updates incorporated, but also a lot of fixes and other improvements.
On 27.05.2015 at 19:35, Russell King - ARM Linux wrote:
You may wish to try using my patch set(s) at (url purposely obfuscated, sorry):
http : // www . home . arm . linux . org . uk / ~rmk / cubox
which is where I publish my Cubox-i/hummingboard patches. My advice would be to grab the latest tarball (even though it's against 4.1-rc1), and apply the etnaviv patches from it, and the appropriate DT update patches.
This has all Lucas' API updates incorporated, but also a lot of fixes and other improvements.
Thanks,
Alexander Holler