-----Original Message-----
From: Jason Gunthorpe <jgg@ziepe.ca>
Sent: Tuesday, June 30, 2020 12:17 PM
To: Xiong, Jianxin <jianxin.xiong@intel.com>
Cc: linux-rdma@vger.kernel.org; Doug Ledford <dledford@redhat.com>; Sumit Semwal <sumit.semwal@linaro.org>; Leon Romanovsky <leon@kernel.org>; Vetter, Daniel <daniel.vetter@intel.com>; Christian Koenig <christian.koenig@amd.com>; dri-devel@lists.freedesktop.org
Subject: Re: [RFC PATCH v2 0/3] RDMA: add dma-buf support
> On Tue, Jun 30, 2020 at 05:21:33PM +0000, Xiong, Jianxin wrote:
> > Heterogeneous Memory Management (HMM) utilizes mmu_interval_notifier and ZONE_DEVICE to support a shared virtual address space and page migration between system memory and device memory. HMM doesn't support pinning device memory, because pages located on the device must be able to migrate to system memory when accessed by the CPU. Peer-to-peer access is possible if the peer can handle page faults; for RDMA, that means the NIC must support on-demand paging (ODP).
> Peer-to-peer access is currently not possible with hmm_range_fault().
> > Currently hmm_range_fault() always sets the CPU access flag, and device-private pages are migrated to system RAM in the fault handler. However, the code flow could be modified to keep the device-private page info for use with peer-to-peer access.
> Sort of, but only within the same device. RDMA, or anything else generic, can't reach inside a DEVICE_PRIVATE page and extract anything useful.
> > But the pfn is supposed to be all that is needed.
> Needed for what? The PFN of a DEVICE_PRIVATE page is useless for anything.
Hmm. I thought the pfn corresponds to the address in the BAR range. I could be wrong here.
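For context, my understanding of how a driver drives hmm_range_fault() today is roughly the following (a sketch against the ~v5.8 API, which may have shifted since; demo_fault_range() is just an illustrative name). With HMM_PFN_REQ_FAULT set, any DEVICE_PRIVATE page in the range is migrated back to system RAM before its PFN is returned, which is the behavior discussed above:

```c
#include <linux/hmm.h>
#include <linux/mmu_notifier.h>
#include <linux/sched/mm.h>

/* Sketch only: fault a user VA range with hmm_range_fault() the way a
 * driver would, retrying around concurrent invalidations.
 */
static int demo_fault_range(struct mmu_interval_notifier *notifier,
			    unsigned long start, unsigned long end,
			    unsigned long *pfns)
{
	struct hmm_range range = {
		.notifier	= notifier,
		.start		= start,
		.end		= end,
		.hmm_pfns	= pfns,
		.default_flags	= HMM_PFN_REQ_FAULT,
	};
	int ret;

	do {
		range.notifier_seq = mmu_interval_read_begin(notifier);
		mmap_read_lock(current->mm);
		ret = hmm_range_fault(&range);
		mmap_read_unlock(current->mm);
		if (ret == -EBUSY)
			continue;	/* collided with an invalidation, retry */
		if (ret)
			return ret;
	} while (mmu_interval_read_retry(notifier, range.notifier_seq));

	return 0;
}
```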
> Well, what do you want to happen here? The RDMA parts are reasonable, but I don't want to add new functionality without a purpose - the other parts need to be settled out first.
On the RDMA side, we mainly want to check whether the changes are acceptable - for example, the part about adding an 'fd' to the device ops and the ioctl interface. All the previous comments have been very helpful for refining the patch so that we can be ready when GPU-side support becomes available.
> Well, I'm not totally happy with the way the umem and the fd are handled so roughly and incompletely..
Yes, this feedback is very helpful. Will work on improving the code.
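To make the discussion concrete, the import path we have in mind looks roughly like this - a sketch of the standard dma-buf importer calls, with error handling abbreviated; demo_import_dmabuf() is a made-up name, not the actual patch code:

```c
#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/scatterlist.h>

/* Sketch: import a dma-buf fd passed in from userspace and get a
 * DMA-mapped sg_table for the RDMA device.  Locking is elided.
 */
static struct sg_table *demo_import_dmabuf(int fd, struct device *dma_device,
					   struct dma_buf_attachment **out)
{
	struct dma_buf *dmabuf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	dmabuf = dma_buf_get(fd);	/* takes a reference on the fd's buffer */
	if (IS_ERR(dmabuf))
		return ERR_CAST(dmabuf);

	attach = dma_buf_attach(dmabuf, dma_device);
	if (IS_ERR(attach)) {
		dma_buf_put(dmabuf);
		return ERR_CAST(attach);
	}

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		dma_buf_detach(dmabuf, attach);
		dma_buf_put(dmabuf);
		return ERR_CAST(sgt);
	}

	*out = attach;
	return sgt;
}
```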
> Hum. This is not actually so hard to do. The whole dma-buf proposal would make a lot more sense if the 'dma buf MR' had to be the dynamic kind and the driver had to provide the faulting. It would not be so hard to change mlx5 to work like this, perhaps (the locking might be a bit tricky, though).
The main issue is that not all NICs support ODP.
> Sure, but there is a lot of infrastructure work to be done on dma-buf here; having a correct consumer in the form of ODP might be helpful to advance it.
Good point. Thanks.
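For the dynamic-importer direction, my understanding of the shape is something like the following (a sketch against dma_buf_dynamic_attach() as found in recent kernels; the demo_* names and demo_invalidate_mr() are hypothetical, standing in for whatever the ODP-capable MR code would provide). The exporter calls move_notify() when it wants to move the buffer, and the importer must invalidate its mapping and re-fault later:

```c
#include <linux/dma-buf.h>

struct demo_mr;				/* hypothetical driver MR type */
void demo_invalidate_mr(struct demo_mr *mr);	/* hypothetical callback */

static void demo_move_notify(struct dma_buf_attachment *attach)
{
	struct demo_mr *mr = attach->importer_priv;

	demo_invalidate_mr(mr);	/* zap NIC mappings; the next access faults */
}

static const struct dma_buf_attach_ops demo_attach_ops = {
	.move_notify = demo_move_notify,
};

/* Attach as a dynamic importer; mappings may move under move_notify. */
static struct dma_buf_attachment *demo_attach(struct dma_buf *dmabuf,
					      struct device *dev,
					      struct demo_mr *mr)
{
	return dma_buf_dynamic_attach(dmabuf, dev, &demo_attach_ops, mr);
}
```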
> Jason