Hi, Christian,
We have an upcoming use-case in i915 where one solution would be sparsely populated TTM bos.
We had that at one point where ttm_tt pages were allocated on demand, but this time we'd rather be looking at multiple struct ttm_resources per bo and those resources could be from different managers.
There might theoretically be other ways we can handle this use-case but I wanted to check with you whether this is something AMD is already looking into and if not, your general opinion.
Thanks, Thomas
Hi Thomas,
On 19.11.21 at 15:28, Thomas Hellström wrote:
> Hi, Christian,
> We have an upcoming use-case in i915 where one solution would be sparsely populated TTM bos.
> We had that at one point where ttm_tt pages were allocated on demand, but this time we'd rather be looking at multiple struct ttm_resources per bo and those resources could be from different managers.
> There might theoretically be other ways we can handle this use-case but I wanted to check with you whether this is something AMD is already looking into and if not, your general opinion.
Oh yes, I've looked into this as well a very long time ago.
At that point the basic blocker was that we couldn't have different cache settings for the same VMA, but I think that's fixed by now.
Another thing is that you essentially need to move the LRU handling into the resource like I already planned to do anyway.
Regards, Christian.
> Thanks, Thomas
On Fri, Nov 19, 2021 at 05:35:53PM +0100, Christian König wrote:
> Hi Thomas,
>
> On 19.11.21 at 15:28, Thomas Hellström wrote:
>> Hi, Christian,
>> We have an upcoming use-case in i915 where one solution would be sparsely populated TTM bos.
>> We had that at one point where ttm_tt pages were allocated on demand, but this time we'd rather be looking at multiple struct ttm_resources per bo and those resources could be from different managers.
>> There might theoretically be other ways we can handle this use-case but I wanted to check with you whether this is something AMD is already looking into and if not, your general opinion.
>
> Oh yes, I've looked into this as well a very long time ago.
> At that point the basic blocker was that we couldn't have different cache settings for the same VMA, but I think that's fixed by now.
I think for cpu mmap we might just disallow them. Or we just migrate them back first so that cpu access is always done in the same (or at least a compatible) cache domain.
We can't really talk yet about what this thing is for, but "entire ttm_bo cpu mmap must have same caching mode" shouldn't be a real limitation for what we want to do here.
> Another thing is that you essentially need to move the LRU handling into the resource like I already planned to do anyway.
Yeah, hence why I suggested that going ttm_bo : ttm_resource 1:N might be a good idea in general, and we could piggy-back on top of this.

If you're all on board then I guess we'll try to prototype something, and maybe, if you're bored, we could resurrect some of the patches to move lru/dma_resv and whatever else from ttm_bo to ttm_resource? Just to see how much this would impact.

-Daniel
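For anyone following along, the 1:N layout plus per-resource LRU discussed above could look roughly like the sketch below. This is a hedged user-space mock with made-up names (mock_bo, mock_resource, mock_resource_manager, and a tiny list_head stand-in), not the real TTM API: each bo carries a list of resources instead of a single one, and the LRU link lives in the resource, so each manager's LRU walks resources directly even when one bo spans several managers.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal doubly linked list, standing in for the kernel's struct list_head. */
struct list_head { struct list_head *prev, *next; };

static void list_init(struct list_head *h) { h->prev = h->next = h; }

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev; n->next = h;
	h->prev->next = n; h->prev = n;
}

static void list_del_entry(struct list_head *n)
{
	n->prev->next = n->next; n->next->prev = n->prev;
	n->prev = n->next = n;
}

static int list_empty(const struct list_head *h) { return h->next == h; }

/* Hypothetical manager: owns the LRU that resources (not bos) sit on. */
struct mock_resource_manager {
	struct list_head lru;
};

/* Hypothetical resource: carries its own LRU link plus a link back to the
 * owning bo, so eviction can walk each manager's resources directly. */
struct mock_resource {
	struct mock_resource_manager *man;
	struct list_head lru_link;   /* position on man->lru */
	struct list_head bo_link;    /* position on bo->resources */
};

/* Hypothetical bo: no single ->resource, but a list of N resources,
 * possibly placed by different managers. */
struct mock_bo {
	struct list_head resources;
};

static void bo_attach_resource(struct mock_bo *bo, struct mock_resource *res,
			       struct mock_resource_manager *man)
{
	res->man = man;
	list_add_tail(&res->bo_link, &bo->resources);
	list_add_tail(&res->lru_link, &man->lru);
}

static void bo_detach_resource(struct mock_resource *res)
{
	list_del_entry(&res->lru_link);
	list_del_entry(&res->bo_link);
	res->man = NULL;
}
```

The point of the layout: attaching one bo to two managers puts one entry on each manager's LRU, and detaching a single resource only touches that manager's LRU while the bo keeps its other placements.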
dri-devel@lists.freedesktop.org