From: Alistair Popple <apopple@nvidia.com>
Sent: Thursday, February 25, 2021 11:18 PM
To: linux-mm@kvack.org; nouveau@lists.freedesktop.org; bskeggs@redhat.com; akpm@linux-foundation.org
Cc: linux-doc@vger.kernel.org; linux-kernel@vger.kernel.org; dri-devel@lists.freedesktop.org; John Hubbard <jhubbard@nvidia.com>; Ralph Campbell <rcampbell@nvidia.com>; jglisse@redhat.com; Jason Gunthorpe <jgg@nvidia.com>; hch@infradead.org; daniel@ffwll.ch; Alistair Popple <apopple@nvidia.com>
Subject: [PATCH v3 5/8] mm: Device exclusive memory access
Some devices require exclusive write access to shared virtual memory (SVM) ranges to perform atomic operations on that memory. This requires CPU page tables to be updated to deny access whilst atomic operations are occurring.
To do this, introduce a new swap entry type (SWP_DEVICE_EXCLUSIVE). When an SVM range needs to be marked for exclusive access by a device, all page table mappings for that range are replaced with device exclusive swap entries. This causes any CPU access to the page to result in a fault.
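As an illustrative sketch only (the helper names below are assumptions modelled on the existing device-private swap entry helpers, not necessarily the ones used in this series), encoding and testing such an entry might look like:

```c
/*
 * Sketch: encode a device exclusive swap entry for a page, and test
 * whether a swap entry is of that type. swp_entry()/swp_type() are the
 * existing generic helpers; the function names here are illustrative.
 */
static inline swp_entry_t make_device_exclusive_entry(struct page *page)
{
	return swp_entry(SWP_DEVICE_EXCLUSIVE, page_to_pfn(page));
}

static inline bool is_device_exclusive_entry(swp_entry_t entry)
{
	return swp_type(entry) == SWP_DEVICE_EXCLUSIVE;
}
```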
Faults are resolved by replacing the faulting entry with the original mapping. This results in MMU notifiers being called, which a driver uses to update access permissions, such as revoking atomic access. After the notifiers have been called the device will no longer have exclusive access to the region.
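From a driver's point of view the flow described above might be used roughly as follows. This is a sketch under assumptions: driver_issue_atomic() is an illustrative placeholder, and error handling is simplified.

```c
/*
 * Sketch: a driver takes exclusive access to one page before issuing a
 * device-side atomic operation on it. Names marked "illustrative" are
 * assumptions, not part of this patch series.
 */
static int driver_do_atomic(struct mm_struct *mm, unsigned long addr)
{
	struct page *page;
	int ret;

	/* Replace the CPU mappings for one page with exclusive entries. */
	ret = make_device_exclusive_range(mm, addr, addr + PAGE_SIZE, &page);
	if (ret <= 0)
		return ret ? ret : -EBUSY;

	/* Program the device to perform its atomic on 'page'. */
	driver_issue_atomic(page);	/* illustrative helper */

	/*
	 * If the CPU later faults on the range, the exclusive entry is
	 * replaced with the original mapping and an MMU notifier fires;
	 * the driver's notifier callback must then revoke the device's
	 * atomic access to the page.
	 */
	put_page(page);
	return 0;
}
```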
Signed-off-by: Alistair Popple <apopple@nvidia.com>
 Documentation/vm/hmm.rst |  15 ++++
 include/linux/rmap.h     |   3 +
 include/linux/swap.h     |   4 +-
 include/linux/swapops.h  |  44 ++++++++++-
 mm/hmm.c                 |   5 ++
 mm/memory.c              | 108 +++++++++++++++++++++++++-
 mm/mprotect.c            |   8 ++
 mm/page_vma_mapped.c     |   9 ++-
 mm/rmap.c                | 163 +++++++++++++++++++++++++++++++++++++++
 9 files changed, 352 insertions(+), 7 deletions(-)
...
+int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
+				unsigned long end, struct page **pages)
+{
+	long npages = (end - start) >> PAGE_SHIFT;
+	long i;
Nit: you should use unsigned long for 'i' and 'npages' to match start/end.
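For reference, the suggested change amounts to the following trivial fragment, making the locals match the unsigned long type of start/end:

```c
	unsigned long npages = (end - start) >> PAGE_SHIFT;
	unsigned long i;
```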