Oops, meant to put dri-devel on cc here.
On Fri, 2012-04-13 at 22:12 +0200, Lucas Stach wrote:
On Fri, 2012-04-13 at 08:33 +0100, Dave Airlie wrote:
On Fri, Apr 13, 2012 at 12:10 AM, Lucas Stach <dev@lynxeye.de> wrote:
On Wed, 2012-04-11 at 15:18 +0000, Arnd Bergmann wrote:
On Wednesday 11 April 2012, Thierry Reding wrote:
* Daniel Vetter wrote:
On Wed, Apr 11, 2012 at 03:23:26PM +0200, Thierry Reding wrote:
> * Daniel Vetter wrote:
> > On Wed, Apr 11, 2012 at 02:10:30PM +0200, Thierry Reding wrote:
> > > This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
> > > currently has rudimentary GEM support and can run a console on the
> > > framebuffer as well as X using the xf86-video-modesetting driver.
> > > Only the RGB output is supported. Quite a lot of things still need
> > > to be worked out and there is a lot of room for cleanup.
> >
> > Indeed, after a quick look there are tons of functions that are just
> > stubs ;-) One thing I wonder though is why you directly use the iommu
> > api and not wrap it up into dma_map? Is arm infrastructure just not
> > there yet or do you plan to tightly integrate the tegra drm with the
> > iommu (e.g. for process space switching or similarly funky stuff)?
>
> I'm not sure I know what you are referring to. Looking for all users of
> iommu_map() doesn't turn up anything related to dma_map. Can you point
> me in the right direction?
Well, you use the iommu api to map/unmap memory into the iommu for tegra, whereas usually device drivers just use the dma api to do that. The usual interface is dma_map_sg/dma_unmap_sg, but there are quite a few variants around. I'm just wondering why you've chosen this.
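To illustrate, a minimal sketch of the dma_map_sg()/dma_unmap_sg() pattern might look like the following; the tegra_bo_* names are made up, and the pre-built sg_table is assumed to come from the driver's buffer object code:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Hand a buffer's pages to the DMA layer; on an IOMMU-backed device
 * the IOVA mappings are created behind the scenes by the dma_map_ops. */
static int tegra_bo_dma_map(struct device *dev, struct sg_table *sgt)
{
	int nents;

	nents = dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
	if (nents == 0)
		return -ENOMEM;

	return 0;
}

static void tegra_bo_dma_unmap(struct device *dev, struct sg_table *sgt)
{
	dma_unmap_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
}

The point being that the driver never sees the IOMMU at all here; it only deals in struct device and scatterlists.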
I don't think this works on ARM. Maybe I'm not seeing the whole picture, but judging by a quick look through the kernel tree there aren't any users that map DMA memory through an IOMMU.
dma_map_sg is certainly the right interface to use, and Marek Szyprowski has patches to make that work on ARM, hopefully going into v3.5, so you could use those.
Just jumping in here to make sure everyone understands the limitations of the Tegra 2 GART IOMMU we are talking about. It has no isolation capabilities and a really small remapping window of only 32 MB, so it's impossible to remap every buffer used by the graphics engines. The only sane way to handle this is to set aside a chunk of stolen system memory as VRAM and let a memory manager like TTM handle the allocation of linear regions and GART mappings. This means a tighter integration of the DRM driver and the IOMMU, and I think that using the IOMMU API directly and completely controlling the GART from one driver is the right way to go, for a number of reasons. My biggest concern is that we can't implement sane handling of running out of remapping space if we go through the dma_map API.
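For comparison, directly taking ownership of the GART through the IOMMU API boils down to something like this sketch; the gart_* names are made up, and in a real driver the aperture offsets would come from TTM's range manager rather than being picked by hand:

#include <linux/iommu.h>
#include <linux/platform_device.h>

static struct iommu_domain *gart_domain;

/* Claim the GART for this driver: one domain, one device, full control
 * over which buffers occupy the scarce 32 MB aperture. */
static int gart_takeover(struct device *dev)
{
	gart_domain = iommu_domain_alloc(&platform_bus_type);
	if (!gart_domain)
		return -ENOMEM;

	return iommu_attach_device(gart_domain, dev);
}

/* Map a single page at a driver-chosen offset inside the aperture. */
static int gart_map_page(unsigned long iova, phys_addr_t phys)
{
	return iommu_map(gart_domain, iova, phys, PAGE_SIZE,
			 IOMMU_READ | IOMMU_WRITE);
}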
It sounds like the old AGP GARTs on PowerPC, where the CPU couldn't get mappings of the GTT space, and the GART was only available to the GPU device.
Yes, maybe we should treat the Tegra GART just like this. My current plan is to write a TTM backend on top of the IOMMU API, as I think this is the right level of abstraction: both the Tegra 2 GART and the Tegra 3 SMMU are available through this interface. I think we can just allocate the real pages from highmem and flip them into IOMMU address space as needed - just the normal TTM use case.
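A rough sketch of what the bind/unbind side of such a TTM backend could look like, assuming the ttm_backend_func interface; the tegra_ttm structure and its fields are made up, and error unwinding is omitted for brevity:

#include <linux/iommu.h>
#include <linux/mm.h>
#include <drm/ttm/ttm_bo_driver.h>

struct tegra_ttm {
	struct ttm_tt ttm;
	struct iommu_domain *domain;	/* GART or SMMU domain */
	unsigned long iova;		/* aperture offset chosen by TTM */
};

/* Called when TTM moves a buffer into the GART aperture. */
static int tegra_ttm_bind(struct ttm_tt *ttm, struct ttm_mem_reg *bo_mem)
{
	struct tegra_ttm *tt = container_of(ttm, struct tegra_ttm, ttm);
	unsigned long iova = bo_mem->start << PAGE_SHIFT;
	unsigned long i;
	int ret;

	for (i = 0; i < ttm->num_pages; i++) {
		ret = iommu_map(tt->domain, iova + i * PAGE_SIZE,
				page_to_phys(ttm->pages[i]), PAGE_SIZE,
				IOMMU_READ | IOMMU_WRITE);
		if (ret)
			return ret; /* unwind of partial mappings omitted */
	}

	tt->iova = iova;
	return 0;
}

/* Called when TTM evicts the buffer from the aperture again. */
static int tegra_ttm_unbind(struct ttm_tt *ttm)
{
	struct tegra_ttm *tt = container_of(ttm, struct tegra_ttm, ttm);
	unsigned long i;

	for (i = 0; i < ttm->num_pages; i++)
		iommu_unmap(tt->domain, tt->iova + i * PAGE_SIZE, PAGE_SIZE);

	return 0;
}

TTM's eviction logic then takes care of shuffling buffers in and out of the 32 MB window, which is exactly the out-of-remapping-space handling that would be hard to get through the dma_map API.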
-- Lucas