On 26 Feb 2021, at 2:18, Alistair Popple wrote:
> Migration is currently implemented as a mode of operation for
> try_to_unmap_one() generally specified by passing the TTU_MIGRATION
> flag or, in the case of splitting a huge anonymous page,
> TTU_SPLIT_FREEZE.
>
> However it does not have much in common with the rest of the unmap
> functionality of try_to_unmap_one(), and thus splitting it into a
> separate function reduces the complexity of try_to_unmap_one(),
> making it more readable.
>
> Several simplifications can also be made in try_to_migrate_one()
> based on the following observations:
>
> - All users of TTU_MIGRATION also set TTU_IGNORE_MLOCK.
> - No users of TTU_MIGRATION ever set TTU_IGNORE_HWPOISON.
> - No users of TTU_MIGRATION ever set TTU_BATCH_FLUSH.
>
> TTU_SPLIT_FREEZE is a special case of migration used when splitting
> an anonymous page. This is most easily dealt with by calling the
> correct function from unmap_page() in mm/huge_memory.c - either
> try_to_migrate() for PageAnon or try_to_unmap().
> Signed-off-by: Alistair Popple <apopple@nvidia.com>
> ---
>  include/linux/rmap.h |   4 +-
>  mm/huge_memory.c     |  10 +-
>  mm/migrate.c         |   9 +-
>  mm/rmap.c            | 352 +++++++++++++++++++++++++++++++------------
>  4 files changed, 269 insertions(+), 106 deletions(-)
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index 7f1ee411bd7b..77fa17de51d7 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -86,8 +86,6 @@ struct anon_vma_chain {
>  };
>
>  enum ttu_flags {
> -	TTU_MIGRATION		= 0x1,	/* migration mode */
> -	TTU_SPLIT_HUGE_PMD	= 0x4,	/* split huge PMD if any */
TTU_SPLIT_HUGE_PMD now implies freeze in try_to_migrate() and no freeze in
try_to_unmap(). I think we need some comments here, above try_to_migrate(),
and above try_to_unmap(), to clarify the implication.
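For example, something along these lines above the two declarations in
rmap.h (the wording below is only my suggestion, not taken from the patch):

/*
 * try_to_migrate - replace all mappings of the page with swap/migration
 * entries. Passing TTU_SPLIT_HUGE_PMD splits any huge PMD mapping the
 * page and freezes the resulting ptes (they are replaced by migration
 * entries).
 */
bool try_to_migrate(struct page *page, enum ttu_flags flags);

/*
 * try_to_unmap - remove all mappings of the page. Passing
 * TTU_SPLIT_HUGE_PMD splits any huge PMD mapping the page but does not
 * freeze the resulting ptes.
 */
bool try_to_unmap(struct page *, enum ttu_flags flags);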
>  	TTU_IGNORE_MLOCK	= 0x8,	/* ignore mlock */
>  	TTU_IGNORE_HWPOISON	= 0x20,	/* corrupted page is recoverable */
> @@ -96,7 +94,6 @@ enum ttu_flags {
>  					 * do a final flush if necessary */
>  	TTU_RMAP_LOCKED		= 0x80,	/* do not grab rmap lock:
>  					 * caller holds it */
> -	TTU_SPLIT_FREEZE	= 0x100, /* freeze pte under splitting thp */
>  };
>
>  #ifdef CONFIG_MMU
> @@ -193,6 +190,7 @@ static inline void page_dup_rmap(struct page *page, bool compound)
>  int page_referenced(struct page *, int is_locked,
>  			struct mem_cgroup *memcg, unsigned long *vm_flags);
>
> +bool try_to_migrate(struct page *page, enum ttu_flags flags);
>  bool try_to_unmap(struct page *, enum ttu_flags flags);
>
>  /* Avoid racy checks */
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index d00b93dc2d9e..357052a4567b 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2351,16 +2351,16 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma,
>  static void unmap_page(struct page *page)
>  {
> -	enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK |
> -		TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD;
> +	enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD;
>  	bool unmap_success;
>
>  	VM_BUG_ON_PAGE(!PageHead(page), page);
>
>  	if (PageAnon(page))
> -		ttu_flags |= TTU_SPLIT_FREEZE;
> -
> -	unmap_success = try_to_unmap(page, ttu_flags);
> +		unmap_success = try_to_migrate(page, ttu_flags);
> +	else
> +		unmap_success = try_to_unmap(page, ttu_flags |
> +						TTU_IGNORE_MLOCK);
I think we need a comment here about why anonymous pages need try_to_migrate() and others need try_to_unmap().
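For example (this is just my understanding of the reason, so please reword
if I have it wrong):

	/*
	 * Anon pages need migration entries installed (their ptes are
	 * frozen) to preserve them across the split; file pages can
	 * simply be left unmapped and faulted back in on demand.
	 */
	if (PageAnon(page))
		unmap_success = try_to_migrate(page, ttu_flags);
	else
		unmap_success = try_to_unmap(page, ttu_flags |
						TTU_IGNORE_MLOCK);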
Thanks.
—
Best Regards,
Yan Zi