Allow hmm_range_fault() to return success (0) when the CPU page table entry points to the special shared zero page. The caller can then handle the zero page specially, for example by clearing device private memory directly instead of DMAing a page of zeroes.
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: Christoph Hellwig <hch@lst.de>
---
 mm/hmm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/hmm.c b/mm/hmm.c
index 06041d4399ff..7217912bef13 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -532,7 +532,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 		return -EBUSY;
 	} else if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) && pte_special(pte)) {
 		*pfn = range->values[HMM_PFN_SPECIAL];
-		return -EFAULT;
+		return is_zero_pfn(pte_pfn(pte)) ? 0 : -EFAULT;
 	}
 
 	*pfn = hmm_device_entry_from_pfn(range, pte_pfn(pte)) | cpu_flags;
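
For context (not part of the patch): a minimal sketch of how a caller might consume the new return value. The device, my_dev_clear_page() and my_dev_dma_page() are hypothetical driver-side helpers, not anything in the HMM API; only the hmm_range fields and the HMM_PFN_SPECIAL sentinel come from the kernel.

#include <linux/hmm.h>
#include <linux/mm.h>

struct my_device;	/* hypothetical driver device */
void my_dev_clear_page(struct my_device *dev, unsigned long addr);
void my_dev_dma_page(struct my_device *dev, struct hmm_range *range,
		     unsigned long i);

/*
 * With this patch, when hmm_range_fault() succeeds a pfn entry equal to
 * range->values[HMM_PFN_SPECIAL] can only be the shared zero page (any
 * other special PTE still returns -EFAULT), so the driver can clear its
 * device-private copy locally instead of DMAing a page of zeroes.
 */
static void my_driver_fill_range(struct my_device *dev,
				 struct hmm_range *range)
{
	unsigned long i, npages = (range->end - range->start) >> PAGE_SHIFT;

	for (i = 0; i < npages; i++) {
		if (range->pfns[i] == range->values[HMM_PFN_SPECIAL]) {
			/* Zero page: clear device memory, skip the DMA. */
			my_dev_clear_page(dev,
					  range->start + (i << PAGE_SHIFT));
			continue;
		}
		my_dev_dma_page(dev, range, i);
	}
}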