On Wed, Jun 27, 2012 at 05:29:14PM +0300, Hiroshi Doyu wrote:
On Wed, 27 Jun 2012 16:08:10 +0200 Thierry Reding <thierry.reding@avionic-design.de> wrote:
On Wed, Jun 27, 2012 at 03:59:07PM +0300, Hiroshi Doyu wrote:
On Wed, 27 Jun 2012 07:14:18 +0200 Thierry Reding <thierry.reding@avionic-design.de> wrote:
On Tue, Jun 26, 2012 at 08:48:18PM -0600, Stephen Warren wrote:
On 06/26/2012 08:32 PM, Mark Zhang wrote:
> On 06/26/2012 07:46 PM, Mark Zhang wrote:
>>>> On Tue, 26 Jun 2012 12:55:13 +0200
>>>> Thierry Reding <thierry.reding@avionic-design.de> wrote:
> ...
>>> I'm not sure I understand how information about the carveout would be
>>> obtained from the IOMMU API, though.
>>
>> I think that can be similar to the current GART implementation. Define the carveout as:
>>
>>     carveout {
>>         compatible = "nvidia,tegra20-carveout";
>>         size = <0x10000000>;
>>     };
>>
>> Then create a file such as "tegra-carveout.c" to pick up these definitions and
>> register itself as the platform device's IOMMU instance.
>
> The carveout isn't a HW object, so it doesn't seem appropriate to define a DT
> node to represent it.
Yes. But I think it's better to expose the size of the carveout as a configurable item, so we need to define it somewhere. How about defining the carveout as a property of the GART?
There already exists a way of preventing Linux from using certain chunks of memory: the /memreserve/ syntax. From a brief look at the dtc source, it looks like /memreserve/ entries can have labels, which implies that a property in the GART node could refer to a /memreserve/ entry by phandle in order to know which memory regions to use.
Wasn't the whole point of using a carveout that it would serve as a replacement for the GART?
Mostly agree. IIUC, we use both carveout- and GART-allocated buffers in Android on Tegra2.
As such I'd think the carveout should rather be a property of the host1x device.
Rather than introducing a new property, how about using "coherent_pool=??M" on the kernel command line if necessary? I think the carveout size depends on system usage/load.
I was hoping that we could get away with using CMA and perhaps initialize it based on device tree content. I agree that the carveout size depends on the use-case, but I still think it makes sense to specify it on a per-board basis.
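As a rough illustration of that idea (this is not code from the series; the "carveout-size" property and the helper name below are hypothetical), platform code could read a per-board size from the host1x node and give the device its own contiguous region via the CMA interface that was merged around this time, dma_declare_contiguous(). A real implementation would have to perform the reservation much earlier in boot than plain of_* accessors normally allow, so treat this purely as a sketch:

#include <linux/init.h>
#include <linux/errno.h>
#include <linux/device.h>
#include <linux/of.h>
#include <linux/dma-contiguous.h>

static int __init tegra_host1x_declare_carveout(struct device *host1x_dev)
{
	/* "carveout-size" is a hypothetical per-board property. */
	struct device_node *np = of_find_node_by_path("/host1x");
	u32 size = 0;
	int err = -ENODEV;

	if (np && !of_property_read_u32(np, "carveout-size", &size) && size)
		/* Reserve a dedicated contiguous area for host1x allocations. */
		err = dma_declare_contiguous(host1x_dev, size, 0, 0);

	of_node_put(np);
	return err;
}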
The DRM driver doesn't know whether it uses CMA or not, because DRM only uses the DMA API.
So how is DRM supposed to allocate buffers? Does it call the dma_alloc_from_contiguous() function to do that? I can see how it is used by arm_dma_ops, but how does it end up being used by the driver?
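For reference, a minimal sketch of how a driver that sticks to the DMA API ends up backed by CMA (tegra_drm_alloc_bo() is a hypothetical name, not something from the series): it simply calls dma_alloc_coherent(), and with CONFIG_CMA enabled on ARM the arm_dma_ops implementation routes that allocation through dma_alloc_from_contiguous() internally, so the driver never calls that function itself.

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

static void *tegra_drm_alloc_bo(struct device *dev, size_t size,
				dma_addr_t *dma_handle)
{
	/*
	 * The driver only sees the generic DMA API; on ARM with CMA,
	 * arm_dma_ops satisfies this from the contiguous area via
	 * dma_alloc_from_contiguous() behind the scenes.
	 */
	return dma_alloc_coherent(dev, size, dma_handle, GFP_KERNEL);
}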
I think "coherent_pool" only needs to be used when contiguous memory is in short supply on your system; otherwise it is unnecessary.
Could you explain a bit more why you want the carveout size on a per-board basis?
In the ideal case I would prefer not to have a carveout size at all. However, there may be situations where you need to make sure some driver can allocate a given amount of memory. Having to specify this using a kernel command-line parameter is cumbersome because it may require changes to the bootloader or whatever. So if you know that a particular board always needs 128 MiB of carveout, then it makes sense to specify it on a per-board basis.
Thierry