On Tue, Dec 08, 2020 at 02:39:15PM -0800, Jianxin Xiong wrote:
> Implement the new driver method 'reg_user_mr_dmabuf'. Utilize the core
> functions to import a dma-buf based memory region and update the
> mappings.
>
> Add code to handle dma-buf related page faults.
> Signed-off-by: Jianxin Xiong <jianxin.xiong@intel.com>
> Reviewed-by: Sean Hefty <sean.hefty@intel.com>
> Acked-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
> Acked-by: Christian Koenig <christian.koenig@amd.com>
> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>
>  drivers/infiniband/hw/mlx5/main.c    |   2 +
>  drivers/infiniband/hw/mlx5/mlx5_ib.h |  18 +++++
>  drivers/infiniband/hw/mlx5/mr.c      | 128 +++++++++++++++++++++++++++++++++--
>  drivers/infiniband/hw/mlx5/odp.c     |  86 +++++++++++++++++++++--
>  4 files changed, 225 insertions(+), 9 deletions(-)
<...>
> +	umem = ib_umem_dmabuf_get(&dev->ib_dev, offset, length, fd, access_flags,
> +				  &mlx5_ib_dmabuf_attach_ops);
> +	if (IS_ERR(umem)) {
> +		mlx5_ib_dbg(dev, "umem get failed (%ld)\n", PTR_ERR(umem));
> +		return ERR_PTR(PTR_ERR(umem));

return ERR_CAST(umem);

> +	}
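ERR_CAST() is the helper for exactly this case: it forwards an error pointer of one type as another type, without the pointer to long to pointer round trip that ERR_PTR(PTR_ERR(umem)) open-codes. The error path would then read (same lines as above, only the return changed):

	if (IS_ERR(umem)) {
		mlx5_ib_dbg(dev, "umem get failed (%ld)\n", PTR_ERR(umem));
		return ERR_CAST(umem);
	}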
<...>
> +	dma_resv_lock(umem_dmabuf->attach->dmabuf->resv, NULL);
> +	err = ib_umem_dmabuf_map_pages(umem_dmabuf);
> +	if (!err) {
> +		page_size = mlx5_umem_find_best_pgsz(&umem_dmabuf->umem, mkc,
> +						     log_page_size, 0,
> +						     umem_dmabuf->umem.iova);
> +		if (unlikely(page_size < PAGE_SIZE)) {
> +			ib_umem_dmabuf_unmap_pages(umem_dmabuf);
> +			err = -EINVAL;
> +		} else {
> +			err = mlx5_ib_update_mr_pas(mr, xlt_flags);
> +		}
> +	}
> +	dma_resv_unlock(umem_dmabuf->attach->dmabuf->resv);
Let's write this section in kernel coding style, please
	dma_resv_lock(umem_dmabuf->attach->dmabuf->resv, NULL);
	err = ib_umem_dmabuf_map_pages(umem_dmabuf);
	if (err) {
		dma_resv_unlock(umem_dmabuf->attach->dmabuf->resv);
		return err;
	}
	.....
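With the early return, the rest of the hunk loses one level of nesting; roughly (untested, this is just the quoted code above reshuffled):

	page_size = mlx5_umem_find_best_pgsz(&umem_dmabuf->umem, mkc,
					     log_page_size, 0,
					     umem_dmabuf->umem.iova);
	if (unlikely(page_size < PAGE_SIZE)) {
		ib_umem_dmabuf_unmap_pages(umem_dmabuf);
		err = -EINVAL;
	} else {
		err = mlx5_ib_update_mr_pas(mr, xlt_flags);
	}
	/* both branches still need the unlock, so keep it shared */
	dma_resv_unlock(umem_dmabuf->attach->dmabuf->resv);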
Thanks