Hi,
On Thu, Aug 20, 2020 at 09:07:29AM +0100, Ezequiel Garcia wrote:
> > For single-device allocations, would using the buffer allocation
> > functionality of that device's native API be better in most cases?
> > (Some other possibly relevant discussion at [1])
> That may be true for existing subsystems.
>
> However, what about a new subsystem/API that wants to leverage the heap
> API and avoid exposing yet another allocator API?
>
> Also, if we have a single allocator API, perhaps we could imagine a
> future where applications only need to worry about that one API, and not
> about each per-framework allocator.
Yeah, both are reasonable points. I was thinking in the context of the
thread I linked, where allocating lots of GEM handles for process-local
use is preferable to importing loads of dma_buf fds; but in that case the
userspace graphics driver is somewhat "special", rather than a generic
application in the traditional sense.

I do think the best usage model for applications, though, is to use
libraries which hide the details, instead of going at the kernel APIs
directly.
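To make that concrete: from userspace the "one allocator API" is just the
dma-heap character devices plus a single ioctl from <linux/dma-heap.h>,
which is exactly the kind of thing a small library would wrap. A minimal
sketch (the heap name "system" is only an example, error handling trimmed):

#include <fcntl.h>
#include <stddef.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>

/* Allocate 'len' bytes from /dev/dma_heap/<heap> and return a dma-buf fd. */
static int heap_alloc(const char *heap, size_t len)
{
	struct dma_heap_allocation_data data = {
		.len = len,
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	char path[64];
	int heap_fd, ret;

	snprintf(path, sizeof(path), "/dev/dma_heap/%s", heap);
	heap_fd = open(path, O_RDONLY | O_CLOEXEC);
	if (heap_fd < 0)
		return -1;

	ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
	close(heap_fd);
	if (ret < 0)
		return -1;

	return data.fd;	/* a dma-buf fd, usable with any importer */
}

The returned fd can then be handed to whichever device or framework imports
dma-bufs, without the application caring where the memory came from.
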
> > I can see that this can save some boilerplate for devices that want to
> > expose private chunks of memory, but might it also lead to 100 aliases
> > for the system's generic coherent memory pool?
> The idea (even if only a PoC) was to let drivers decide if they are
> special enough to add themselves (via dev_dma_heap_add).
OK, that makes sense. I think it's tricky to know how many "special" chunks of memory will already be hooked up to the DMA infrastructure and how many would need some extra/special work.
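For reference, the kernel side of "adding yourself" would presumably sit on
top of the existing dma_heap_add() registration, roughly along these lines.
This is only a sketch: my_heap_allocate() and my_driver_register_heap() are
made-up names, I'm not reproducing the PoC's exact dev_dma_heap_add()
signature, and the dma_heap_ops callbacks have changed between kernel
versions, so check the tree you're building against:

#include <linux/device.h>
#include <linux/dma-heap.h>
#include <linux/err.h>

/* Would allocate from the device's private pool and export it as a dma-buf */
static int my_heap_allocate(struct dma_heap *heap, unsigned long len,
			    unsigned long fd_flags, unsigned long heap_flags)
{
	return -ENOMEM;	/* placeholder */
}

static const struct dma_heap_ops my_heap_ops = {
	.allocate = my_heap_allocate,
};

/* Register one heap per device, named after the device itself */
static int my_driver_register_heap(struct device *dev)
{
	struct dma_heap_export_info exp_info = {
		.name = dev_name(dev),
		.ops = &my_heap_ops,
		.priv = dev,
	};
	struct dma_heap *heap;

	heap = dma_heap_add(&exp_info);
	return PTR_ERR_OR_ZERO(heap);
}

The interesting policy question is then exactly the one above: which drivers
should call something like this, and for which memory regions, so we don't
end up with N copies of the plain system heap.
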
Cheers,
-Brian