On Thu, May 27, 2021 at 07:08:04PM -0400, Felix Kuehling wrote:
> Now we're trying to migrate data to and from that memory using the migrate_vma_* helpers so we can support page-based migration in our unified memory allocations, while also supporting CPU access to those pages.
So you have completely coherent and indistinguishable GPU and CPU memory, and the need for migration is basically a lot like a NUMA policy choice - get better access locality?
> This patch series makes a few changes to make MEMORY_DEVICE_GENERIC pages behave correctly in the migrate_vma_* helpers. We are looking for feedback about this approach. If we're close, what's needed to make our patches acceptable upstream? If we're not close, any suggestions how else to achieve what we are trying to do (i.e. page migration and coherent CPU access to VRAM)?
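[For context, the migrate_vma_* flow being discussed follows the usual three-step setup/pages/finalize pattern. A rough sketch, assuming a system-to-VRAM direction; the helpers my_copy_to_vram() and my_pgmap_owner are hypothetical placeholders, everything else is the in-tree API:]

    static int my_migrate_to_vram(struct vm_area_struct *vma,
    			      unsigned long start, unsigned long end)
    {
    	unsigned long npages = (end - start) >> PAGE_SHIFT;
    	struct migrate_vma migrate = {};
    	int ret;

    	migrate.src = kvcalloc(npages, sizeof(*migrate.src), GFP_KERNEL);
    	migrate.dst = kvcalloc(npages, sizeof(*migrate.dst), GFP_KERNEL);
    	if (!migrate.src || !migrate.dst) {
    		ret = -ENOMEM;
    		goto out;
    	}

    	migrate.vma = vma;
    	migrate.start = start;
    	migrate.end = end;
    	migrate.flags = MIGRATE_VMA_SELECT_SYSTEM;
    	migrate.pgmap_owner = my_pgmap_owner;	/* hypothetical owner token */

    	/* Collect and isolate the source pages, filling migrate.src[] */
    	ret = migrate_vma_setup(&migrate);
    	if (ret)
    		goto out;

    	/* Allocate device pages into migrate.dst[] and copy the data */
    	my_copy_to_vram(&migrate);		/* hypothetical driver helper */

    	/* Install the new pages, then release the old ones */
    	migrate_vma_pages(&migrate);
    	migrate_vma_finalize(&migrate);
    out:
    	kvfree(migrate.src);
    	kvfree(migrate.dst);
    	return ret;
    }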
I'm not an expert in migrate, but it doesn't look outrageous.
Have you thought about allowing MEMORY_DEVICE_GENERIC to work with hmm_range_fault() so you can have nice uniform RDMA?
People have wanted to do that with MEMORY_DEVICE_PRIVATE, but nobody finished the work.
Jason