On 2/19/19 12:04 PM, jglisse@redhat.com wrote:
> From: Jérôme Glisse <jglisse@redhat.com>
>
> CPU page table update can happens for many reasons, not only as a result
s/update/updates s/happens/happen
> of a syscall (munmap(), mprotect(), mremap(), madvise(), ...) but also as
> a result of kernel activities (memory compression, reclaim, migration, ...).
>
> This patch introduce a set of enums that can be associated with each of
s/introduce/introduces
> the events triggering a mmu notifier. Latter patches take advantages of
> those enum values.
s/advantages/advantage
>
> - UNMAP: munmap() or mremap()
> - CLEAR: page table is cleared (migration, compaction, reclaim, ...)
> - PROTECTION_VMA: change in access protections for the range
> - PROTECTION_PAGE: change in access protections for page in the range
> - SOFT_DIRTY: soft dirtyness tracking
s/dirtyness/dirtiness
> Being able to identify munmap() and mremap() from other reasons why the
> page table is cleared is important to allow user of mmu notifier to update
> their own internal tracking structure accordingly (on munmap or mremap it
> is not longer needed to track range of virtual address as it becomes
> invalid).
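Just to make that use case concrete (nothing this patch needs to change): with the event available on the range, which a later patch in this series wires up, a notifier user could do something along these lines in its invalidate callback. This is only a rough sketch, and struct my_mirror plus the my_mirror_*() helpers are invented names:

/*
 * Hypothetical driver-side sketch.  struct my_mirror and the
 * my_mirror_*() helpers are made up; range->event only exists once a
 * later patch in this series adds it to struct mmu_notifier_range.
 */
struct my_mirror {
	struct mmu_notifier mn;
	/* ... device page table shadow, locks, etc. ... */
};

static int my_invalidate_range_start(struct mmu_notifier *mn,
				     const struct mmu_notifier_range *range)
{
	struct my_mirror *mirror = container_of(mn, struct my_mirror, mn);

	switch (range->event) {
	case MMU_NOTIFY_UNMAP:
		/* munmap()/mremap(): the virtual range itself goes away,
		 * so drop the tracking structure for it entirely. */
		my_mirror_free_range(mirror, range->start, range->end);
		break;
	default:
		/* CLEAR, PROTECTION_*, SOFT_DIRTY: the range is still live,
		 * only the CPU page table changed, so just invalidate the
		 * device mapping and keep the tracking structure around. */
		my_mirror_invalidate(mirror, range->start, range->end);
		break;
	}
	return 0;
}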
> Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Jani Nikula <jani.nikula@linux.intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Cc: Jan Kara <jack@suse.cz>
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Felix Kuehling <Felix.Kuehling@amd.com>
> Cc: Jason Gunthorpe <jgg@mellanox.com>
> Cc: Ross Zwisler <zwisler@kernel.org>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Radim Krčmář <rkrcmar@redhat.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Christian Koenig <christian.koenig@amd.com>
> Cc: Ralph Campbell <rcampbell@nvidia.com>
> Cc: John Hubbard <jhubbard@nvidia.com>
> Cc: kvm@vger.kernel.org
> Cc: dri-devel@lists.freedesktop.org
> Cc: linux-rdma@vger.kernel.org
> Cc: Arnd Bergmann <arnd@arndb.de>
>  include/linux/mmu_notifier.h | 30 ++++++++++++++++++++++++++++++
>  1 file changed, 30 insertions(+)
>
> diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
> index c8672c366f67..2386e71ac1b8 100644
> --- a/include/linux/mmu_notifier.h
> +++ b/include/linux/mmu_notifier.h
> @@ -10,6 +10,36 @@
>  struct mmu_notifier;
>  struct mmu_notifier_ops;
>
> +/**
> + * enum mmu_notifier_event - reason for the mmu notifier callback
> + * @MMU_NOTIFY_UNMAP: either munmap() that unmap the range or a mremap() that
> + * move the range
I would say something here about how the VMA covering the notifier range is being deleted. MMU notifier clients can then use this case to remove any policy or access counts associated with the range. Just changing the PTEs to "no access", as in the CLEAR case, doesn't mean that a policy which prefers device private memory over system memory should be cleared (see the sketch after the quoted hunk below).
> + * @MMU_NOTIFY_CLEAR: clear page table entry (many reasons for this like
> + * madvise() or replacing a page by another one, ...).
> + * @MMU_NOTIFY_PROTECTION_VMA: update is due to protection change for the range
> + * ie using the vma access permission (vm_page_prot) to update the whole range
> + * is enough no need to inspect changes to the CPU page table (mprotect()
> + * syscall)
> + * @MMU_NOTIFY_PROTECTION_PAGE: update is due to change in read/write flag for
> + * pages in the range so to mirror those changes the user must inspect the CPU
> + * page table (from the end callback).
> + * @MMU_NOTIFY_SOFT_DIRTY: soft dirty accounting (still same page and same
> + * access flags). User should soft dirty the page in the end callback to make
> + * sure that anyone relying on soft dirtyness catch pages that might be written
> + * through non CPU mappings.
> + */
> +enum mmu_notifier_event {
> +	MMU_NOTIFY_UNMAP = 0,
> +	MMU_NOTIFY_CLEAR,
> +	MMU_NOTIFY_PROTECTION_VMA,
> +	MMU_NOTIFY_PROTECTION_PAGE,
> +	MMU_NOTIFY_SOFT_DIRTY,
> +};
> +
>  #ifdef CONFIG_MMU_NOTIFIER
>  /*
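To make my earlier comment about MMU_NOTIFY_UNMAP vs. MMU_NOTIFY_CLEAR concrete, here is the kind of per-range state a client might keep. This is only a sketch and all the field names are invented: CLEAR/PROTECTION_*/SOFT_DIRTY should only zap the mirrored entries, while the placement policy and counters survive until UNMAP, because the virtual range still exists.

#include <linux/interval_tree.h>

/* Hypothetical per-range state kept by an mmu notifier client. */
struct my_range_node {
	struct interval_tree_node it;	/* the mirrored virtual range */
	void *device_ptes;		/* shadow of the device page table */
	bool prefer_device_memory;	/* placement policy for this range */
	unsigned long fault_count;	/* access statistics */
};

/*
 * MMU_NOTIFY_CLEAR / PROTECTION_* / SOFT_DIRTY: zap device_ptes only;
 * prefer_device_memory and fault_count stay because the range is still
 * mapped in the process.
 *
 * MMU_NOTIFY_UNMAP: remove the node from the interval tree and free it,
 * policy and counters included, since the range no longer exists.
 */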