On Wed, 27 Jun 2012 04:48:18 +0200 Stephen Warren <swarren@nvidia.com> wrote:
On 06/26/2012 08:32 PM, Mark Zhang wrote:
On 06/26/2012 07:46 PM, Mark Zhang wrote:
On Tue, 26 Jun 2012 12:55:13 +0200 Thierry Reding <thierry.reding@avionic-design.de> wrote:
...
I'm not sure I understand how information about the carveout would be obtained from the IOMMU API, though.
I think that can be similar to the current GART implementation. Define the carveout as:
carveout {
        compatible = "nvidia,tegra20-carveout";
        size = <0x10000000>;
};
Then create a file such as "tegra-carveout.c" that picks up these definitions and registers itself as the platform device's IOMMU instance.
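
A minimal sketch of what such a file could look like, assuming the node above; this is not an existing driver, only the "size" property read uses real API, and the actual IOMMU registration is just noted in a comment:

#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>

static int tegra_carveout_probe(struct platform_device *pdev)
{
        u32 size;
        int err;

        /* Pick up the configurable size from the carveout DT node. */
        err = of_property_read_u32(pdev->dev.of_node, "size", &size);
        if (err < 0)
                return err;

        /*
         * A real driver would allocate a physically contiguous region
         * of "size" bytes here and register itself as the IOMMU
         * instance for the platform devices, e.g. a struct iommu_ops
         * passed to bus_set_iommu(). Omitted in this sketch.
         */
        return 0;
}

static const struct of_device_id tegra_carveout_of_match[] = {
        { .compatible = "nvidia,tegra20-carveout" },
        { }
};

static struct platform_driver tegra_carveout_driver = {
        .probe = tegra_carveout_probe,
        .driver = {
                .name = "tegra-carveout",
                .of_match_table = tegra_carveout_of_match,
        },
};
module_platform_driver(tegra_carveout_driver);

MODULE_LICENSE("GPL");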
The carveout isn't a HW object, so it doesn't seem appropriate to define a DT node to represent it.
Yes. But I think it's better to expose the carveout size as a configurable item, so we need to define it somewhere. How about defining the carveout as a property of the GART node?
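
For illustration, the reading side of such a binding could look like this; the "nvidia,carveout-size" property name is made up here, not an agreed binding:

#include <linux/of.h>

/*
 * Hypothetical: if the carveout were a property of the GART node, e.g.
 *
 *      gart {
 *              compatible = "nvidia,tegra20-gart";
 *              nvidia,carveout-size = <0x10000000>;
 *      };
 *
 * the GART driver could pick up the size like this.
 */
static int tegra_gart_carveout_size(struct device_node *np, u32 *size)
{
        return of_property_read_u32(np, "nvidia,carveout-size", size);
}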
There already exists a way of preventing Linux from using certain chunks of memory; the /memreserve/ syntax. From a brief look at the dtc source, it looks like /memreserve/ entries can have labels, which implies that a property in the GART node could refer to the /memreserve/ entry by phandle in order to know what memory regions to use.
I think we don't need a starting address for the carveout, only its size. The carveout memory is just an anonymous, physically contiguous buffer.
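
To make that concrete, a hedged sketch of a size-only reservation; the names are illustrative, and memblock_alloc() is used in its 2012-era form, which returns a physical address:

#include <linux/init.h>
#include <linux/memblock.h>

/*
 * Reserve "size" bytes of physically contiguous memory at early boot
 * without caring where the region ends up; only the size is specified.
 */
static phys_addr_t tegra_carveout_base;
static phys_addr_t tegra_carveout_size = 0x10000000; /* from the DT "size" */

static void __init tegra_carveout_reserve(void)
{
        /* 1 MiB alignment; any free physical address is acceptable. */
        tegra_carveout_base = memblock_alloc(tegra_carveout_size, 0x100000);
}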