On Wed, Apr 24, 2013 at 11:35 PM, Laura Abbott <lauraa@codeaurora.org> wrote:
> Hi all,
>
> I've been looking at a better way to do custom dma allocation algorithms in
> a similar style to Ion heaps. Most drivers/clients have come up with a
> series of semi-standard ways to get memory (CMA, memblock_reserve,
> discontiguous pages, etc.). As these allocation schemes get more and more
> complex, there needs to be a single place where all clients (Ion based driver
> vs. DRM driver vs. ???) can independently take advantage of any
> optimizations and call a single API for the backing allocations.
>
> The dma_map_ops take care of almost everything needed for abstraction
> but the question is where should new allocation algorithms be located?
> Most of the work has been added to either arch/arm/mm/dma-mapping.c or
> dma-contiguous.c. My current thinking:
>
> 1) Split out the dma_map_ops currently in dma-mapping.c into separate files
> (dma-mapping-common.c, dma-mapping-iommu.c)
> 2) Extend dma-contiguous.c to support memblock_reserve memory
> 3) Place additional algorithms in either arch/arm/mm or
> drivers/base/dma-alloc/ as appropriate to the code. This is the part where
> I'm most unsure about the direction.
>
> I don't have anything written yet but I plan to draft some patches assuming
> the proposed approach sounds reasonable and no one else has started on
> something similar already.
>
> Thoughts? Opinions?
From my (oblivious to all the arm madness) pov the big thing is
getting dma allocations working for more than one struct device. This
way we could get rid of the "where do I need to allocate buffers"
duplication between the kernel and userspace (which needs to know this
to pick the right ion heap), which is my main gripe with ion ;-)
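To make that concrete, here's the kind of interface I mean (purely
hypothetical, nothing like this exists today; dma_alloc_for_devices is
a made-up name):

/* Hypothetical: allocate a buffer that every device in the list can
 * map, with the core picking the most constrained backing store
 * (CMA vs. carveout vs. plain pages) instead of userspace having to
 * guess an ion heap id. */
struct dma_buf *dma_alloc_for_devices(struct device **devs,
				      int num_devs, size_t size,
				      gfp_t gfp);

/* e.g. a frame shared between the gpu and a v4l2 decoder:
 *	struct device *devs[] = { gpu_dev, vdec_dev };
 *	buf = dma_alloc_for_devices(devs, 2, size, GFP_KERNEL);
 */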
Rob Clark sent out a quick rfc for that a while back:
http://lists.linaro.org/pipermail/linaro-mm-sig/2012-July/002250.html
But that's far from good enough for arm, especially now that cma
gets tightly bound to individual devices with the dt bindings. Also,
no one really followed up on Rob's patches, and personally I don't
really care that much since x86 is a bit saner ... But it should be
good enough for contiguous allocations, which leaves only really crazy
stuff unsolved.
So I think when you rework the various algorithms for allocating dma
mem and consolidate them, the result should also solve this little
multi-dev issue.
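For the consolidation itself, the dma_map_ops route Laura describes
looks like the natural hook: each backing allocator is just another
dma_map_ops instance that platform code attaches per device. A strawman
sketch under that assumption (all names made up, signatures as in the
3.9-era struct dma_map_ops):

#include <linux/device.h>
#include <linux/dma-mapping.h>

/* Strawman: a memblock_reserve()-backed allocator, living in e.g.
 * drivers/base/dma-alloc/dma-carveout.c */
static void *carveout_dma_alloc(struct device *dev, size_t size,
				dma_addr_t *handle, gfp_t gfp,
				struct dma_attrs *attrs)
{
	/* hand out pages from the memblock_reserve()d region */
	return NULL;	/* allocation logic elided */
}

static void carveout_dma_free(struct device *dev, size_t size,
			      void *cpu_addr, dma_addr_t handle,
			      struct dma_attrs *attrs)
{
	/* return the pages to the carveout */
}

static struct dma_map_ops carveout_dma_ops = {
	.alloc	= carveout_dma_alloc,
	.free	= carveout_dma_free,
	/* map_page/map_sg etc. shared with dma-mapping-common.c */
};

/* platform code then wires it up per device:
 *	set_dma_ops(dev, &carveout_dma_ops);
 */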
Adding tons more people/lists who might be interested.
Cheers, Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
Hi all,
Today's linux-next merge of the drm-intel tree got a conflict in
drivers/gpu/drm/i915/i915_reg.h between commit a65851af5938 ("drm/i915:
Make data/link N value power of two") from the drm tree and commit
72419203cab9 ("drm/i915: hw state readout support for fdi m/n") from the
drm-intel tree.
I fixed it up (see below) and can carry the fix as necessary (no action
is required).
--
Cheers,
Stephen Rothwell <sfr@canb.auug.org.au>
diff --cc drivers/gpu/drm/i915/i915_reg.h
index 83f9c26,b5d87bd..0000000
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@@ -2652,11 -2774,11 +2774,12 @@@
  #define _PIPEB_GMCH_DATA_M 0x71050
  
  /* Transfer unit size for display port - 1, default is 0x3f (for TU size 64) */
 -#define PIPE_GMCH_DATA_M_TU_SIZE_MASK (0x3f << 25)
 -#define PIPE_GMCH_DATA_M_TU_SIZE_SHIFT 25
 +#define TU_SIZE(x) (((x)-1) << 25) /* default size 64 */
 +#define TU_SIZE_MASK (0x3f << 25)
+ #define TU_SIZE_SHIFT 25
  
 -#define PIPE_GMCH_DATA_M_MASK (0xffffff)
 +#define DATA_LINK_M_N_MASK (0xffffff)
 +#define DATA_LINK_N_MAX (0x800000)
  
  #define _PIPEA_GMCH_DATA_N 0x70054
  #define _PIPEB_GMCH_DATA_N 0x71054
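An aside on the conflicting macros: both sides encode the transfer unit
size the same way (bits 31:25 hold the TU size minus one); only the
names changed, so the resolution is mechanical. A quick userspace
sanity check of that arithmetic, using the post-merge names:

#include <assert.h>

#define TU_SIZE(x)	(((x)-1) << 25)	/* default size 64 */
#define TU_SIZE_MASK	(0x3f << 25)
#define TU_SIZE_SHIFT	25

int main(void)
{
	unsigned int val = TU_SIZE(64);	/* the default TU size */

	/* same bits the old PIPE_GMCH_DATA_M_TU_SIZE_* macros described */
	assert(val == (0x3fu << 25));
	/* and the field decodes back to 64 */
	assert(((val & TU_SIZE_MASK) >> TU_SIZE_SHIFT) + 1 == 64);
	return 0;
}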
https://bugs.freedesktop.org/show_bug.cgi?id=64091
Michel Dänzer <michel@daenzer.net> changed:

           What      |Removed                         |Added
----------------------------------------------------------------------------
           Assignee  |mesa-dev@lists.freedesktop.org  |dri-devel@lists.freedesktop.org
           Component |Mesa core                       |Drivers/Gallium/r600

--- Comment #8 from Michel Dänzer <michel@daenzer.net> ---
It's unlikely that both drivers are affected by one and the same problem.
Let's start with the r600g driver and work our way up the stack as
necessary. :)
The r600g driver has been working around the issues Alex pointed out by
using the GPU's byte-swapping facilities, but that obviously isn't working for
depth/stencil readback.
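For reference, the usual CPU-side fallback when the GPU swapping
facilities don't cover a transfer is to swab the mapped buffer after
readback; a purely illustrative sketch (not the actual r600g code) for
a 32-bit-per-pixel depth/stencil format:

#include <stddef.h>
#include <stdint.h>

/* Illustrative only: byte-swap a mapped 32-bit-per-pixel buffer in
 * place after readback on a big-endian CPU. */
static void swab32_buffer(uint32_t *buf, size_t count)
{
	size_t i;

	for (i = 0; i < count; i++) {
		uint32_t v = buf[i];

		buf[i] = (v >> 24) | ((v >> 8) & 0x0000ff00) |
			 ((v << 8) & 0x00ff0000) | (v << 24);
	}
}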
--
You are receiving this mail because:
You are the assignee for the bug.