This patch series introduces MEMORY_DEVICE_COHERENT, a type of memory owned by a device that can be mapped into CPU page tables like MEMORY_DEVICE_GENERIC and can also be migrated like MEMORY_DEVICE_PRIVATE. We isolate the new memory type from other subsystems as much as possible, though a few small changes to other subsystems, such as filesystem DAX, are needed to handle the new memory type appropriately.
We use ZONE_DEVICE for this instead of NUMA so that the amdgpu allocator can manage it without conflicting with core mm for non-unified memory use cases.
How it works: The system BIOS advertises the GPU device memory (aka VRAM) as SPM (special purpose memory) in the UEFI system address map. The amdgpu driver registers the memory with devmap as MEMORY_DEVICE_COHERENT using devm_memremap_pages.
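For illustration only, here is a minimal sketch of what such a registration looks like on the driver side. This is not the amdgpu code; the function names, the ops stubs and the aperture base/size parameters are placeholders:

#include <linux/err.h>
#include <linux/memremap.h>
#include <linux/mm.h>

static vm_fault_t example_migrate_to_ram(struct vm_fault *vmf)
{
        /* A real driver migrates the faulting page back to system RAM here. */
        return VM_FAULT_SIGBUS;
}

static void example_page_free(struct page *page)
{
        /* A real driver returns the page to its own allocator here. */
}

static const struct dev_pagemap_ops example_pgmap_ops = {
        .page_free      = example_page_free,
        .migrate_to_ram = example_migrate_to_ram,
};

static int example_register_coherent_vram(struct device *dev,
                                          struct dev_pagemap *pgmap,
                                          phys_addr_t base, size_t size)
{
        void *r;

        pgmap->type        = MEMORY_DEVICE_COHERENT;
        pgmap->range.start = base;
        pgmap->range.end   = base + size - 1;
        pgmap->nr_range    = 1;
        pgmap->ops         = &example_pgmap_ops;
        pgmap->owner       = dev; /* matched against migrate_vma.pgmap_owner */

        r = devm_memremap_pages(dev, pgmap);
        return IS_ERR(r) ? PTR_ERR(r) : 0;
}

The owner field matters later in the series: migrate_vma_setup() only collects device pages whose pgmap->owner matches the caller's pgmap_owner.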
The initial user for this hardware page migration capability will be the Frontier supercomputer project. Our nodes in the lab have 0.5 TB of system memory plus 256 GB of device memory split across 4 GPUs, all in the same coherent address space. Page migration is expected to improve application efficiency significantly. We will report empirical results as they become available.
This includes patches originally by Ralph Campbell to change ZONE_DEVICE reference counting as requested in previous reviews of this patch series (see https://patchwork.freedesktop.org/series/90706/). We extended hmm_test to cover migration of MEMORY_DEVICE_COHERENT. This patch set builds on HMM and our SVM memory manager already merged in 5.14. We would like to complete review and merge this migration patchset for 5.16.
Alex Sierra (10):
  mm: add zone device coherent type memory support
  mm: add device coherent vma selection for memory migration
  drm/amdkfd: ref count init for device pages
  drm/amdkfd: add SPM support for SVM
  drm/amdkfd: coherent type as sys mem on migration to ram
  lib: test_hmm add ioctl to get zone device type
  lib: test_hmm add module param for zone device type
  lib: add support for device coherent type in test_hmm
  tools: update hmm-test to support device coherent type
  tools: update test_hmm script to support SP config
Ralph Campbell (2):
  ext4/xfs: add page refcount helper
  mm: remove extra ZONE_DEVICE struct page refcount
 arch/powerpc/kvm/book3s_hv_uvmem.c       |   2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_migrate.c |  40 ++--
 drivers/gpu/drm/nouveau/nouveau_dmem.c   |   2 +-
 fs/dax.c                                 |   8 +-
 fs/ext4/inode.c                          |   5 +-
 fs/fuse/dax.c                            |   4 +-
 fs/xfs/xfs_file.c                        |   4 +-
 include/linux/dax.h                      |  10 +
 include/linux/memremap.h                 |  15 +-
 include/linux/migrate.h                  |   1 +
 include/linux/mm.h                       |  19 +-
 lib/test_hmm.c                           | 276 +++++++++++++++++------
 lib/test_hmm_uapi.h                      |  20 +-
 mm/internal.h                            |   8 +
 mm/memcontrol.c                          |  12 +-
 mm/memory-failure.c                      |   6 +-
 mm/memremap.c                            |  71 ++----
 mm/migrate.c                             |  33 +--
 mm/page_alloc.c                          |   3 +
 mm/swap.c                                |  45 +---
 tools/testing/selftests/vm/hmm-tests.c   | 137 +++++++++--
 tools/testing/selftests/vm/test_hmm.sh   |  20 +-
 22 files changed, 490 insertions(+), 251 deletions(-)
From: Ralph Campbell <rcampbell@nvidia.com>
There are several places where the code assumes that a ZONE_DEVICE struct page with a reference count == 1 is idle and free. Instead of open coding this, add a helper function to hide this detail.
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Alex Sierra <alex.sierra@amd.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Theodore Ts'o <tytso@mit.edu>
Acked-by: Darrick J. Wong <djwong@kernel.org>
---
v3: [AS]: rename dax_layout_is_idle_page func to dax_page_unused
v4: [AS]: this ref count helper was missing in fs/fuse/dax.c; add it there too.
---
 fs/dax.c            |  4 ++--
 fs/ext4/inode.c     |  5 +----
 fs/fuse/dax.c       |  4 +---
 fs/xfs/xfs_file.c   |  4 +---
 include/linux/dax.h | 10 ++++++++++
 5 files changed, 15 insertions(+), 12 deletions(-)
diff --git a/fs/dax.c b/fs/dax.c index 62352cbcf0f4..c387d09e3e5a 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -369,7 +369,7 @@ static void dax_disassociate_entry(void *entry, struct address_space *mapping, for_each_mapped_pfn(entry, pfn) { struct page *page = pfn_to_page(pfn);
- WARN_ON_ONCE(trunc && page_ref_count(page) > 1); + WARN_ON_ONCE(trunc && !dax_page_unused(page)); WARN_ON_ONCE(page->mapping && page->mapping != mapping); page->mapping = NULL; page->index = 0; @@ -383,7 +383,7 @@ static struct page *dax_busy_page(void *entry) for_each_mapped_pfn(entry, pfn) { struct page *page = pfn_to_page(pfn);
- if (page_ref_count(page) > 1) + if (!dax_page_unused(page)) return page; } return NULL; diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index fe6045a46599..05ffe6875cb1 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -3971,10 +3971,7 @@ int ext4_break_layouts(struct inode *inode) if (!page) return 0;
- error = ___wait_var_event(&page->_refcount, - atomic_read(&page->_refcount) == 1, - TASK_INTERRUPTIBLE, 0, 0, - ext4_wait_dax_page(ei)); + error = dax_wait_page(ei, page, ext4_wait_dax_page); } while (error == 0);
return error; diff --git a/fs/fuse/dax.c b/fs/fuse/dax.c index ff99ab2a3c43..2b1f190ba78a 100644 --- a/fs/fuse/dax.c +++ b/fs/fuse/dax.c @@ -677,9 +677,7 @@ static int __fuse_dax_break_layouts(struct inode *inode, bool *retry, return 0;
*retry = true; - return ___wait_var_event(&page->_refcount, - atomic_read(&page->_refcount) == 1, TASK_INTERRUPTIBLE, - 0, 0, fuse_wait_dax_page(inode)); + return dax_wait_page(inode, page, fuse_wait_dax_page); }
/* dmap_end == 0 leads to unmapping of whole file */ diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c index 396ef36dcd0a..182057281086 100644 --- a/fs/xfs/xfs_file.c +++ b/fs/xfs/xfs_file.c @@ -840,9 +840,7 @@ xfs_break_dax_layouts( return 0;
*retry = true; - return ___wait_var_event(&page->_refcount, - atomic_read(&page->_refcount) == 1, TASK_INTERRUPTIBLE, - 0, 0, xfs_wait_dax_page(inode)); + return dax_wait_page(inode, page, xfs_wait_dax_page); }
int diff --git a/include/linux/dax.h b/include/linux/dax.h index b52f084aa643..8b5da1d60dbc 100644 --- a/include/linux/dax.h +++ b/include/linux/dax.h @@ -243,6 +243,16 @@ static inline bool dax_mapping(struct address_space *mapping) return mapping->host && IS_DAX(mapping->host); }
+static inline bool dax_page_unused(struct page *page) +{ + return page_ref_count(page) == 1; +} + +#define dax_wait_page(_inode, _page, _wait_cb) \ + ___wait_var_event(&(_page)->_refcount, \ + dax_page_unused(_page), \ + TASK_INTERRUPTIBLE, 0, 0, _wait_cb(_inode)) + #ifdef CONFIG_DEV_DAX_HMEM_DEVICES void hmem_register_device(int target_nid, struct resource *r); #else
From: Ralph Campbell <rcampbell@nvidia.com>
ZONE_DEVICE struct pages have an extra reference count that complicates the code for put_page() and several places in the kernel that need to check the reference count to see that a page is not being used (gup, compaction, migration, etc.). Clean up the code so the reference count doesn't need to be treated specially for ZONE_DEVICE.
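In other words, ZONE_DEVICE pages now start free at refcount 0 and go through the normal put_page() path. As a hedged sketch of what that means for a driver (illustrative only, not taken from any driver in this series):

#include <linux/mm.h>
#include <linux/pagemap.h>

/* Hand out a free device page taken from the driver's own free list. */
static struct page *example_prepare_device_page(struct page *dpage)
{
        /*
         * Free ZONE_DEVICE pages now sit at refcount 0 (see the
         * memmap_init_zone_device() change below), so take the first
         * reference explicitly; this used to be get_page().
         */
        init_page_count(dpage);
        lock_page(dpage);
        return dpage;
}

When the last put_page() drops the count back to 0, free_zone_device_page() calls pgmap->ops->page_free() and the driver can put the page back on its free list, re-initializing the count before the next use.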
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Alex Sierra <alex.sierra@amd.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
v2: AS: merged this patch into the Linux 5.11 version
v5: AS: add a condition in try_grab_page() to check for the zone device type when the page reference count is less than or equal to zero, since device zone pages have their reference counters initialized to zero.
v7: AS: the condition added to try_grab_page() in v5 was invalid. It was supposed to fix the xfstests generic/413 test; however, that test has a known issue where DIO from a DAX-mapped area to non-DAX is expected to fail.
https://patchwork.kernel.org/project/fstests/patch/1489463960-3579-1-git-sen...
The condition was removed after rebasing on top of the patch series
https://lore.kernel.org/r/20210813044133.1536842-4-jhubbard@nvidia.com
---
 arch/powerpc/kvm/book3s_hv_uvmem.c     |  2 +-
 drivers/gpu/drm/nouveau/nouveau_dmem.c |  2 +-
 fs/dax.c                               |  4 +-
 include/linux/dax.h                    |  2 +-
 include/linux/memremap.h               |  7 +--
 include/linux/mm.h                     | 11 ----
 lib/test_hmm.c                         |  2 +-
 mm/internal.h                          |  8 +++
 mm/memcontrol.c                        |  6 +--
 mm/memremap.c                          | 69 +++++++-------------
 mm/migrate.c                           |  5 --
 mm/page_alloc.c                        |  3 ++
 mm/swap.c                              | 45 ++---------------
 13 files changed, 46 insertions(+), 120 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c index 84e5a2dc8be5..acee67710620 100644 --- a/arch/powerpc/kvm/book3s_hv_uvmem.c +++ b/arch/powerpc/kvm/book3s_hv_uvmem.c @@ -711,7 +711,7 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm)
dpage = pfn_to_page(uvmem_pfn); dpage->zone_device_data = pvt; - get_page(dpage); + init_page_count(dpage); lock_page(dpage); return dpage; out_clear: diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c index 92987daa5e17..8bc7120e1216 100644 --- a/drivers/gpu/drm/nouveau/nouveau_dmem.c +++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c @@ -324,7 +324,7 @@ nouveau_dmem_page_alloc_locked(struct nouveau_drm *drm) return NULL; }
- get_page(page); + init_page_count(page); lock_page(page); return page; } diff --git a/fs/dax.c b/fs/dax.c index c387d09e3e5a..1166630b7190 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -571,14 +571,14 @@ static void *grab_mapping_entry(struct xa_state *xas,
/** * dax_layout_busy_page_range - find first pinned page in @mapping - * @mapping: address space to scan for a page with ref count > 1 + * @mapping: address space to scan for a page with ref count > 0 * @start: Starting offset. Page containing 'start' is included. * @end: End offset. Page containing 'end' is included. If 'end' is LLONG_MAX, * pages from 'start' till the end of file are included. * * DAX requires ZONE_DEVICE mapped pages. These pages are never * 'onlined' to the page allocator so they are considered idle when - * page->count == 1. A filesystem uses this interface to determine if + * page->count == 0. A filesystem uses this interface to determine if * any page in the mapping is busy, i.e. for DMA, or other * get_user_pages() usages. * diff --git a/include/linux/dax.h b/include/linux/dax.h index 8b5da1d60dbc..05fc982ce153 100644 --- a/include/linux/dax.h +++ b/include/linux/dax.h @@ -245,7 +245,7 @@ static inline bool dax_mapping(struct address_space *mapping)
static inline bool dax_page_unused(struct page *page) { - return page_ref_count(page) == 1; + return page_ref_count(page) == 0; }
#define dax_wait_page(_inode, _page, _wait_cb) \ diff --git a/include/linux/memremap.h b/include/linux/memremap.h index 45a79da89c5f..77ff5fd0685f 100644 --- a/include/linux/memremap.h +++ b/include/linux/memremap.h @@ -66,9 +66,10 @@ enum memory_type {
struct dev_pagemap_ops { /* - * Called once the page refcount reaches 1. (ZONE_DEVICE pages never - * reach 0 refcount unless there is a refcount bug. This allows the - * device driver to implement its own memory management.) + * Called once the page refcount reaches 0. The reference count + * should be reset to one with init_page_count(page) before reusing + * the page. This allows the device driver to implement its own + * memory management. */ void (*page_free)(struct page *page);
diff --git a/include/linux/mm.h b/include/linux/mm.h index d8f98d652164..e24c904deeec 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1220,17 +1220,6 @@ static inline void put_page(struct page *page) { page = compound_head(page);
- /* - * For devmap managed pages we need to catch refcount transition from - * 2 to 1, when refcount reach one it means the page is free and we - * need to inform the device driver through callback. See - * include/linux/memremap.h and HMM for details. - */ - if (page_is_devmap_managed(page)) { - put_devmap_managed_page(page); - return; - } - if (put_page_testzero(page)) __put_page(page); } diff --git a/lib/test_hmm.c b/lib/test_hmm.c index 80a78877bd93..6998f10350ea 100644 --- a/lib/test_hmm.c +++ b/lib/test_hmm.c @@ -561,7 +561,7 @@ static struct page *dmirror_devmem_alloc_page(struct dmirror_device *mdevice) }
dpage->zone_device_data = rpage; - get_page(dpage); + init_page_count(dpage); lock_page(dpage); return dpage;
diff --git a/mm/internal.h b/mm/internal.h index e8fdb531f887..5438cceca4b9 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -667,4 +667,12 @@ int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
void vunmap_range_noflush(unsigned long start, unsigned long end);
+#ifdef CONFIG_DEV_PAGEMAP_OPS +void free_zone_device_page(struct page *page); +#else +static inline void free_zone_device_page(struct page *page) +{ +} +#endif + #endif /* __MM_INTERNAL_H */ diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 64ada9e650a5..9a6bfb4fd36c 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -5350,11 +5350,7 @@ static struct page *mc_handle_swap_pte(struct vm_area_struct *vma, */ if (is_device_private_entry(ent)) { page = device_private_entry_to_page(ent); - /* - * MEMORY_DEVICE_PRIVATE means ZONE_DEVICE page and which have - * a refcount of 1 when free (unlike normal page) - */ - if (!page_ref_add_unless(page, 1, 1)) + if (!get_page_unless_zero(page)) return NULL; return page; } diff --git a/mm/memremap.c b/mm/memremap.c index 15a074ffb8d7..ab949a571e78 100644 --- a/mm/memremap.c +++ b/mm/memremap.c @@ -12,6 +12,7 @@ #include <linux/types.h> #include <linux/wait_bit.h> #include <linux/xarray.h> +#include "internal.h"
static DEFINE_XARRAY(pgmap_array);
@@ -37,32 +38,6 @@ unsigned long memremap_compat_align(void) EXPORT_SYMBOL_GPL(memremap_compat_align); #endif
-#ifdef CONFIG_DEV_PAGEMAP_OPS -DEFINE_STATIC_KEY_FALSE(devmap_managed_key); -EXPORT_SYMBOL(devmap_managed_key); - -static void devmap_managed_enable_put(struct dev_pagemap *pgmap) -{ - if (pgmap->type == MEMORY_DEVICE_PRIVATE || - pgmap->type == MEMORY_DEVICE_FS_DAX) - static_branch_dec(&devmap_managed_key); -} - -static void devmap_managed_enable_get(struct dev_pagemap *pgmap) -{ - if (pgmap->type == MEMORY_DEVICE_PRIVATE || - pgmap->type == MEMORY_DEVICE_FS_DAX) - static_branch_inc(&devmap_managed_key); -} -#else -static void devmap_managed_enable_get(struct dev_pagemap *pgmap) -{ -} -static void devmap_managed_enable_put(struct dev_pagemap *pgmap) -{ -} -#endif /* CONFIG_DEV_PAGEMAP_OPS */ - static void pgmap_array_delete(struct range *range) { xa_store_range(&pgmap_array, PHYS_PFN(range->start), PHYS_PFN(range->end), @@ -102,16 +77,6 @@ static unsigned long pfn_end(struct dev_pagemap *pgmap, int range_id) return (range->start + range_len(range)) >> PAGE_SHIFT; }
-static unsigned long pfn_next(unsigned long pfn) -{ - if (pfn % 1024 == 0) - cond_resched(); - return pfn + 1; -} - -#define for_each_device_pfn(pfn, map, i) \ - for (pfn = pfn_first(map, i); pfn < pfn_end(map, i); pfn = pfn_next(pfn)) - static void dev_pagemap_kill(struct dev_pagemap *pgmap) { if (pgmap->ops && pgmap->ops->kill) @@ -167,20 +132,18 @@ static void pageunmap_range(struct dev_pagemap *pgmap, int range_id)
void memunmap_pages(struct dev_pagemap *pgmap) { - unsigned long pfn; int i;
dev_pagemap_kill(pgmap); for (i = 0; i < pgmap->nr_range; i++) - for_each_device_pfn(pfn, pgmap, i) - put_page(pfn_to_page(pfn)); + percpu_ref_put_many(pgmap->ref, pfn_end(pgmap, i) - + pfn_first(pgmap, i)); dev_pagemap_cleanup(pgmap);
for (i = 0; i < pgmap->nr_range; i++) pageunmap_range(pgmap, i);
WARN_ONCE(pgmap->altmap.alloc, "failed to free all reserved pages\n"); - devmap_managed_enable_put(pgmap); } EXPORT_SYMBOL_GPL(memunmap_pages);
@@ -382,8 +345,6 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid) } }
- devmap_managed_enable_get(pgmap); - /* * Clear the pgmap nr_range as it will be incremented for each * successfully processed range. This communicates how many @@ -498,16 +459,9 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn, EXPORT_SYMBOL_GPL(get_dev_pagemap);
#ifdef CONFIG_DEV_PAGEMAP_OPS -void free_devmap_managed_page(struct page *page) +static void free_device_page(struct page *page) { - /* notify page idle for dax */ - if (!is_device_private_page(page)) { - wake_up_var(&page->_refcount); - return; - } - __ClearPageWaiters(page); - mem_cgroup_uncharge(page);
/* @@ -534,4 +488,19 @@ void free_devmap_managed_page(struct page *page) page->mapping = NULL; page->pgmap->ops->page_free(page); } + +void free_zone_device_page(struct page *page) +{ + switch (page->pgmap->type) { + case MEMORY_DEVICE_PRIVATE: + free_device_page(page); + return; + case MEMORY_DEVICE_FS_DAX: + /* notify page idle */ + wake_up_var(&page->_refcount); + return; + default: + return; + } +} #endif /* CONFIG_DEV_PAGEMAP_OPS */ diff --git a/mm/migrate.c b/mm/migrate.c index 41ff2c9896c4..e3a10e2a1bb3 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -350,11 +350,6 @@ static int expected_page_refs(struct address_space *mapping, struct page *page) { int expected_count = 1;
- /* - * Device private pages have an extra refcount as they are - * ZONE_DEVICE pages. - */ - expected_count += is_device_private_page(page); if (mapping) expected_count += thp_nr_pages(page) + page_has_private(page);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c index ef2265f86b91..1ef1f733af5b 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -6414,6 +6414,9 @@ void __ref memmap_init_zone_device(struct zone *zone,
__init_single_page(page, pfn, zone_idx, nid);
+ /* ZONE_DEVICE pages start with a zero reference count. */ + set_page_count(page, 0); + /* * Mark page reserved as it will need to wait for onlining * phase for it to be fully associated with a zone. diff --git a/mm/swap.c b/mm/swap.c index dfb48cf9c2c9..9e821f1951c5 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -114,12 +114,11 @@ static void __put_compound_page(struct page *page) void __put_page(struct page *page) { if (is_zone_device_page(page)) { - put_dev_pagemap(page->pgmap); - /* * The page belongs to the device that created pgmap. Do * not return it to page allocator. */ + free_zone_device_page(page); return; }
@@ -917,29 +916,18 @@ void release_pages(struct page **pages, int nr) if (is_huge_zero_page(page)) continue;
+ if (!put_page_testzero(page)) + continue; + if (is_zone_device_page(page)) { if (lruvec) { unlock_page_lruvec_irqrestore(lruvec, flags); lruvec = NULL; } - /* - * ZONE_DEVICE pages that return 'false' from - * page_is_devmap_managed() do not require special - * processing, and instead, expect a call to - * put_page_testzero(). - */ - if (page_is_devmap_managed(page)) { - put_devmap_managed_page(page); - continue; - } - if (put_page_testzero(page)) - put_dev_pagemap(page->pgmap); + free_zone_device_page(page); continue; }
- if (!put_page_testzero(page)) - continue; - if (PageCompound(page)) { if (lruvec) { unlock_page_lruvec_irqrestore(lruvec, flags); @@ -1143,26 +1131,3 @@ void __init swap_setup(void) * _really_ don't want to cluster much more */ } - -#ifdef CONFIG_DEV_PAGEMAP_OPS -void put_devmap_managed_page(struct page *page) -{ - int count; - - if (WARN_ON_ONCE(!page_is_devmap_managed(page))) - return; - - count = page_ref_dec_return(page); - - /* - * devmap page refcounts are 1-based, rather than 0-based: if - * refcount is 1, then the page is free and the refcount is - * stable because nobody holds a reference on the page. - */ - if (count == 1) - free_devmap_managed_page(page); - else if (!count) - __put_page(page); -} -EXPORT_SYMBOL(put_devmap_managed_page); -#endif
Device memory that is cache coherent from both the device and the CPU point of view. This is used on platforms that have an advanced system bus (like CAPI or CCIX). Any page of a process can be migrated to such memory. However, no one should be allowed to pin such memory so that it can always be evicted.
Signed-off-by: Alex Sierra <alex.sierra@amd.com>
---
 include/linux/memremap.h |  8 ++++++++
 include/linux/mm.h       |  8 ++++++++
 mm/memcontrol.c          |  6 +++---
 mm/memory-failure.c      |  6 +++++-
 mm/memremap.c            |  2 ++
 mm/migrate.c             | 19 ++++++++++++-------
 6 files changed, 38 insertions(+), 11 deletions(-)
diff --git a/include/linux/memremap.h b/include/linux/memremap.h index 77ff5fd0685f..d64cd2e8147a 100644 --- a/include/linux/memremap.h +++ b/include/linux/memremap.h @@ -39,6 +39,13 @@ struct vmem_altmap { * A more complete discussion of unaddressable memory may be found in * include/linux/hmm.h and Documentation/vm/hmm.rst. * + * MEMORY_DEVICE_COHERENT: + * Device memory that is cache coherent from device and CPU point of view. This + * is use on platform that have an advance system bus (like CAPI or CCIX). A + * driver can hotplug the device memory using ZONE_DEVICE and with that memory + * type. Any page of a process can be migrated to such memory. However no one + * should be allow to pin such memory so that it can always be evicted. + * * MEMORY_DEVICE_FS_DAX: * Host memory that has similar access semantics as System RAM i.e. DMA * coherent and supports page pinning. In support of coordinating page @@ -59,6 +66,7 @@ struct vmem_altmap { enum memory_type { /* 0 is reserved to catch uninitialized type fields */ MEMORY_DEVICE_PRIVATE = 1, + MEMORY_DEVICE_COHERENT, MEMORY_DEVICE_FS_DAX, MEMORY_DEVICE_GENERIC, MEMORY_DEVICE_PCI_P2PDMA, diff --git a/include/linux/mm.h b/include/linux/mm.h index e24c904deeec..8bc697006a5c 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1187,6 +1187,14 @@ static inline bool is_device_private_page(const struct page *page) page->pgmap->type == MEMORY_DEVICE_PRIVATE; }
+static inline bool is_device_page(const struct page *page) +{ + return IS_ENABLED(CONFIG_DEV_PAGEMAP_OPS) && + is_zone_device_page(page) && + (page->pgmap->type == MEMORY_DEVICE_PRIVATE || + page->pgmap->type == MEMORY_DEVICE_COHERENT); +} + static inline bool is_pci_p2pdma_page(const struct page *page) { return IS_ENABLED(CONFIG_DEV_PAGEMAP_OPS) && diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 9a6bfb4fd36c..fe5a96428dce 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -5526,8 +5526,8 @@ static int mem_cgroup_move_account(struct page *page, * 2(MC_TARGET_SWAP): if the swap entry corresponding to this pte is a * target for charge migration. if @target is not NULL, the entry is stored * in target->ent. - * 3(MC_TARGET_DEVICE): like MC_TARGET_PAGE but page is MEMORY_DEVICE_PRIVATE - * (so ZONE_DEVICE page and thus not on the lru). + * 3(MC_TARGET_DEVICE): like MC_TARGET_PAGE but page is MEMORY_DEVICE_COHERENT + * or MEMORY_DEVICE_PRIVATE (so ZONE_DEVICE page and thus not on the lru). * For now we such page is charge like a regular page would be as for all * intent and purposes it is just special memory taking the place of a * regular page. @@ -5561,7 +5561,7 @@ static enum mc_target_type get_mctgt_type(struct vm_area_struct *vma, */ if (page_memcg(page) == mc.from) { ret = MC_TARGET_PAGE; - if (is_device_private_page(page)) + if (is_device_page(page)) ret = MC_TARGET_DEVICE; if (target) target->page = page; diff --git a/mm/memory-failure.c b/mm/memory-failure.c index 6f5f78885ab4..1076f5a07370 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -1373,12 +1373,16 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags, goto unlock; }
- if (pgmap->type == MEMORY_DEVICE_PRIVATE) { + switch (pgmap->type) { + case MEMORY_DEVICE_PRIVATE: + case MEMORY_DEVICE_COHERENT: /* * TODO: Handle HMM pages which may need coordination * with device-side memory. */ goto unlock; + default: + break; }
/* diff --git a/mm/memremap.c b/mm/memremap.c index ab949a571e78..56033955d1f4 100644 --- a/mm/memremap.c +++ b/mm/memremap.c @@ -294,6 +294,7 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid)
switch (pgmap->type) { case MEMORY_DEVICE_PRIVATE: + case MEMORY_DEVICE_COHERENT: if (!IS_ENABLED(CONFIG_DEVICE_PRIVATE)) { WARN(1, "Device private memory not supported\n"); return ERR_PTR(-EINVAL); @@ -493,6 +494,7 @@ void free_zone_device_page(struct page *page) { switch (page->pgmap->type) { case MEMORY_DEVICE_PRIVATE: + case MEMORY_DEVICE_COHERENT: free_device_page(page); return; case MEMORY_DEVICE_FS_DAX: diff --git a/mm/migrate.c b/mm/migrate.c index e3a10e2a1bb3..2bda612f3650 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -2565,7 +2565,7 @@ static bool migrate_vma_check_page(struct page *page) * FIXME proper solution is to rework migration_entry_wait() so * it does not need to take a reference on page. */ - return is_device_private_page(page); + return is_device_page(page); }
/* For file back page */ @@ -2854,7 +2854,7 @@ EXPORT_SYMBOL(migrate_vma_setup); * handle_pte_fault() * do_anonymous_page() * to map in an anonymous zero page but the struct page will be a ZONE_DEVICE - * private page. + * private or coherent page. */ static void migrate_vma_insert_page(struct migrate_vma *migrate, unsigned long addr, @@ -2925,10 +2925,15 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
swp_entry = make_device_private_entry(page, vma->vm_flags & VM_WRITE); entry = swp_entry_to_pte(swp_entry); + } else if (is_device_page(page)) { + entry = pte_mkold(mk_pte(page, + READ_ONCE(vma->vm_page_prot))); + if (vma->vm_flags & VM_WRITE) + entry = pte_mkwrite(pte_mkdirty(entry)); } else { /* - * For now we only support migrating to un-addressable - * device memory. + * We support migrating to private and coherent types + * for device zone memory. */ pr_warn_once("Unsupported ZONE_DEVICE page type.\n"); goto abort; @@ -3034,10 +3039,10 @@ void migrate_vma_pages(struct migrate_vma *migrate) mapping = page_mapping(page);
if (is_zone_device_page(newpage)) { - if (is_device_private_page(newpage)) { + if (is_device_page(newpage)) { /* - * For now only support private anonymous when - * migrating to un-addressable device memory. + * For now only support private and coherent + * anonymous when migrating to device memory. */ if (mapping) { migrate->src[i] &= ~MIGRATE_PFN_MIGRATE;
This case is used to migrate pages from device memory back to system memory. Device coherent type memory is cache coherent from both the device and the CPU point of view.
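As a hedged sketch (illustrative only, not taken from a driver in this series), this is roughly how a caller selects its own coherent device pages as the migration source with the new flag:

#include <linux/migrate.h>

static int example_evict_to_sysram(struct vm_area_struct *vma,
                                   unsigned long start, unsigned long end,
                                   unsigned long *src, unsigned long *dst,
                                   void *pgmap_owner)
{
        struct migrate_vma args = {
                .vma         = vma,
                .start       = start,
                .end         = end,
                .src         = src,         /* (end - start) / PAGE_SIZE entries */
                .dst         = dst,
                .pgmap_owner = pgmap_owner, /* must match page->pgmap->owner */
                .flags       = MIGRATE_VMA_SELECT_DEVICE_COHERENT,
        };
        int ret = migrate_vma_setup(&args);

        if (ret)
                return ret;
        /* ... allocate system pages, copy the data, fill args.dst ... */
        migrate_vma_pages(&args);
        migrate_vma_finalize(&args);
        return 0;
}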
Signed-off-by: Alex Sierra <alex.sierra@amd.com>
---
v2: add a condition for migrations from device coherent pages.
---
 include/linux/migrate.h | 1 +
 mm/migrate.c            | 9 +++++++--
 2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h index 4bb4e519e3f5..b1cae5073d69 100644 --- a/include/linux/migrate.h +++ b/include/linux/migrate.h @@ -156,6 +156,7 @@ static inline unsigned long migrate_pfn(unsigned long pfn) enum migrate_vma_direction { MIGRATE_VMA_SELECT_SYSTEM = 1 << 0, MIGRATE_VMA_SELECT_DEVICE_PRIVATE = 1 << 1, + MIGRATE_VMA_SELECT_DEVICE_COHERENT = 1 << 2, };
struct migrate_vma { diff --git a/mm/migrate.c b/mm/migrate.c index 2bda612f3650..b40cd5a69f65 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -2406,8 +2406,6 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp, if (is_write_device_private_entry(entry)) mpfn |= MIGRATE_PFN_WRITE; } else { - if (!(migrate->flags & MIGRATE_VMA_SELECT_SYSTEM)) - goto next; pfn = pte_pfn(pte); if (is_zero_pfn(pfn)) { mpfn = MIGRATE_PFN_MIGRATE; @@ -2415,6 +2413,13 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp, goto next; } page = vm_normal_page(migrate->vma, addr, pte); + if (!is_zone_device_page(page) && + !(migrate->flags & MIGRATE_VMA_SELECT_SYSTEM)) + goto next; + if (is_zone_device_page(page) && + (!(migrate->flags & MIGRATE_VMA_SELECT_DEVICE_COHERENT) || + page->pgmap->owner != migrate->pgmap_owner)) + goto next; mpfn = migrate_pfn(pfn) | MIGRATE_PFN_MIGRATE; mpfn |= pte_write(pte) ? MIGRATE_PFN_WRITE : 0; }
The reference counter of device pages is initialized to zero in memmap_init_zone_device(). The first time a new device page is allocated to migrate data into it, its reference counter needs to be initialized to one.
Signed-off-by: Alex Sierra <alex.sierra@amd.com>
---
 drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c index dab290a4d19d..ffad39ffa8c6 100644 --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c @@ -220,7 +220,8 @@ svm_migrate_get_vram_page(struct svm_range *prange, unsigned long pfn) page = pfn_to_page(pfn); svm_range_bo_ref(prange->svm_bo); page->zone_device_data = prange->svm_bo; - get_page(page); + VM_BUG_ON_PAGE(page_ref_count(page), page); + init_page_count(page); lock_page(page); }
When the CPU is connected through XGMI, it has coherent access to the VRAM resource. In this case the resource range is taken from the device gmc aperture base. This resource is used along with the device type, which can be DEVICE_PRIVATE or DEVICE_COHERENT, to create the device page map region.
Signed-off-by: Alex Sierra <alex.sierra@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
---
v7: Remove the lookup_resource() call, so exporting that symbol is no longer required. The patch "kernel: resource: lookup_resource as exported symbol" has been dropped.
---
 drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 32 +++++++++++++++---------
 1 file changed, 20 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c index ffad39ffa8c6..9efc97d55077 100644 --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c @@ -866,7 +866,7 @@ int svm_migrate_init(struct amdgpu_device *adev) { struct kfd_dev *kfddev = adev->kfd.dev; struct dev_pagemap *pgmap; - struct resource *res; + struct resource *res = NULL; unsigned long size; void *r;
@@ -881,22 +881,29 @@ int svm_migrate_init(struct amdgpu_device *adev) * should remove reserved size */ size = ALIGN(adev->gmc.real_vram_size, 2ULL << 20); - res = devm_request_free_mem_region(adev->dev, &iomem_resource, size); - if (IS_ERR(res)) - return -ENOMEM; + if (adev->gmc.xgmi.connected_to_cpu) { + pgmap->range.start = adev->gmc.aper_base; + pgmap->range.end = adev->gmc.aper_base + adev->gmc.aper_size - 1; + pgmap->type = MEMORY_DEVICE_COHERENT; + } else { + res = devm_request_free_mem_region(adev->dev, &iomem_resource, size); + if (IS_ERR(res)) + return -ENOMEM; + pgmap->range.start = res->start; + pgmap->range.end = res->end; + pgmap->type = MEMORY_DEVICE_PRIVATE; + }
- pgmap->type = MEMORY_DEVICE_PRIVATE; pgmap->nr_range = 1; - pgmap->range.start = res->start; - pgmap->range.end = res->end; pgmap->ops = &svm_migrate_pgmap_ops; pgmap->owner = SVM_ADEV_PGMAP_OWNER(adev); - pgmap->flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE; + pgmap->flags = 0; r = devm_memremap_pages(adev->dev, pgmap); if (IS_ERR(r)) { pr_err("failed to register HMM device memory\n"); - devm_release_mem_region(adev->dev, res->start, - res->end - res->start + 1); + if (pgmap->type == MEMORY_DEVICE_PRIVATE) + devm_release_mem_region(adev->dev, res->start, + res->end - res->start + 1); return PTR_ERR(r); }
@@ -915,6 +922,7 @@ void svm_migrate_fini(struct amdgpu_device *adev) struct dev_pagemap *pgmap = &adev->kfd.dev->pgmap;
devm_memunmap_pages(adev->dev, pgmap); - devm_release_mem_region(adev->dev, pgmap->range.start, - pgmap->range.end - pgmap->range.start + 1); + if (pgmap->type == MEMORY_DEVICE_PRIVATE) + devm_release_mem_region(adev->dev, pgmap->range.start, + pgmap->range.end - pgmap->range.start + 1); }
For VRAM to RAM migration, device coherent type memory has similar access from the CPU as system RAM. The caller selects the migration source through the migrate_vma flags field, which in the coherent type case should be set to MIGRATE_VMA_SELECT_DEVICE_COHERENT.
Signed-off-by: Alex Sierra <alex.sierra@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
---
 drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c index 9efc97d55077..4ec7ac13f2b7 100644 --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c @@ -617,9 +617,12 @@ svm_migrate_vma_to_ram(struct amdgpu_device *adev, struct svm_range *prange, migrate.vma = vma; migrate.start = start; migrate.end = end; - migrate.flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE; migrate.pgmap_owner = SVM_ADEV_PGMAP_OWNER(adev);
+ if (adev->gmc.xgmi.connected_to_cpu) + migrate.flags = MIGRATE_VMA_SELECT_DEVICE_COHERENT; + else + migrate.flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE; size = 2 * sizeof(*migrate.src) + sizeof(uint64_t) + sizeof(dma_addr_t); size *= npages; buf = kvmalloc(size, GFP_KERNEL | __GFP_ZERO);
A new ioctl command is added to query the zone device type. This will be used once test_hmm adds the zone device coherent type.
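For illustration, this is roughly how a user space test would use the new command (a hedged sketch mirroring what the hmm-tests patch later in this series does):

#include <string.h>
#include <sys/ioctl.h>
#include "test_hmm_uapi.h"

/* Query which zone device type the dmirror device was configured with. */
static int example_query_zone_device_type(int fd, int *type)
{
        struct hmm_dmirror_cmd cmd;
        int ret;

        memset(&cmd, 0, sizeof(cmd));
        cmd.npages = 1;         /* the ioctl validates the (addr, npages) range */
        ret = ioctl(fd, HMM_DMIRROR_GET_MEM_DEV_TYPE, &cmd);
        if (ret)
                return ret;
        *type = cmd.zone_device_type;   /* e.g. HMM_DMIRROR_MEMORY_DEVICE_PRIVATE */
        return 0;
}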
Signed-off-by: Alex Sierra <alex.sierra@amd.com>
---
 lib/test_hmm.c      | 15 ++++++++++++++-
 lib/test_hmm_uapi.h |  7 +++++++
 2 files changed, 21 insertions(+), 1 deletion(-)
diff --git a/lib/test_hmm.c b/lib/test_hmm.c index 6998f10350ea..3cd91ca31dd7 100644 --- a/lib/test_hmm.c +++ b/lib/test_hmm.c @@ -82,6 +82,7 @@ struct dmirror_chunk { struct dmirror_device { struct cdev cdevice; struct hmm_devmem *devmem; + unsigned int zone_device_type;
unsigned int devmem_capacity; unsigned int devmem_count; @@ -468,6 +469,7 @@ static bool dmirror_allocate_chunk(struct dmirror_device *mdevice, if (IS_ERR(res)) goto err_devmem;
+ mdevice->zone_device_type = HMM_DMIRROR_MEMORY_DEVICE_PRIVATE; devmem->pagemap.type = MEMORY_DEVICE_PRIVATE; devmem->pagemap.range.start = res->start; devmem->pagemap.range.end = res->end; @@ -912,6 +914,15 @@ static int dmirror_snapshot(struct dmirror *dmirror, return ret; }
+static int dmirror_get_device_type(struct dmirror *dmirror, + struct hmm_dmirror_cmd *cmd) +{ + mutex_lock(&dmirror->mutex); + cmd->zone_device_type = dmirror->mdevice->zone_device_type; + mutex_unlock(&dmirror->mutex); + + return 0; +} static long dmirror_fops_unlocked_ioctl(struct file *filp, unsigned int command, unsigned long arg) @@ -952,7 +963,9 @@ static long dmirror_fops_unlocked_ioctl(struct file *filp, case HMM_DMIRROR_SNAPSHOT: ret = dmirror_snapshot(dmirror, &cmd); break; - + case HMM_DMIRROR_GET_MEM_DEV_TYPE: + ret = dmirror_get_device_type(dmirror, &cmd); + break; default: return -EINVAL; } diff --git a/lib/test_hmm_uapi.h b/lib/test_hmm_uapi.h index 670b4ef2a5b6..ee88701793d5 100644 --- a/lib/test_hmm_uapi.h +++ b/lib/test_hmm_uapi.h @@ -26,6 +26,7 @@ struct hmm_dmirror_cmd { __u64 npages; __u64 cpages; __u64 faults; + __u64 zone_device_type; };
/* Expose the address space of the calling process through hmm device file */ @@ -33,6 +34,7 @@ struct hmm_dmirror_cmd { #define HMM_DMIRROR_WRITE _IOWR('H', 0x01, struct hmm_dmirror_cmd) #define HMM_DMIRROR_MIGRATE _IOWR('H', 0x02, struct hmm_dmirror_cmd) #define HMM_DMIRROR_SNAPSHOT _IOWR('H', 0x03, struct hmm_dmirror_cmd) +#define HMM_DMIRROR_GET_MEM_DEV_TYPE _IOWR('H', 0x04, struct hmm_dmirror_cmd)
/* * Values returned in hmm_dmirror_cmd.ptr for HMM_DMIRROR_SNAPSHOT. @@ -60,4 +62,9 @@ enum { HMM_DMIRROR_PROT_DEV_PRIVATE_REMOTE = 0x30, };
+enum { + /* 0 is reserved to catch uninitialized type fields */ + HMM_DMIRROR_MEMORY_DEVICE_PRIVATE = 1, +}; + #endif /* _LIB_TEST_HMM_UAPI_H */
To configure device coherent memory in test_hmm, two module parameters must be passed, corresponding to the SP start address of each of the two devices: spm_addr_dev0 and spm_addr_dev1. If no parameters are passed, the private device type is configured.
Signed-off-by: Alex Sierra <alex.sierra@amd.com>
---
 lib/test_hmm.c      | 66 +++++++++++++++++++++++++++++++--------------
 lib/test_hmm_uapi.h |  1 +
 2 files changed, 47 insertions(+), 20 deletions(-)
diff --git a/lib/test_hmm.c b/lib/test_hmm.c index 3cd91ca31dd7..70a9be0efa00 100644 --- a/lib/test_hmm.c +++ b/lib/test_hmm.c @@ -33,6 +33,16 @@ #define DEVMEM_CHUNK_SIZE (256 * 1024 * 1024U) #define DEVMEM_CHUNKS_RESERVE 16
+static unsigned long spm_addr_dev0; +module_param(spm_addr_dev0, long, 0644); +MODULE_PARM_DESC(spm_addr_dev0, + "Specify start address for SPM (special purpose memory) used for device 0. By setting this Coherent device type will be used. Make sure spm_addr_dev1 is set too"); + +static unsigned long spm_addr_dev1; +module_param(spm_addr_dev1, long, 0644); +MODULE_PARM_DESC(spm_addr_dev1, + "Specify start address for SPM (special purpose memory) used for device 1. By setting this Coherent device type will be used. Make sure spm_addr_dev0 is set too"); + static const struct dev_pagemap_ops dmirror_devmem_ops; static const struct mmu_interval_notifier_ops dmirror_min_ops; static dev_t dmirror_dev; @@ -450,11 +460,11 @@ static int dmirror_write(struct dmirror *dmirror, struct hmm_dmirror_cmd *cmd) return ret; }
-static bool dmirror_allocate_chunk(struct dmirror_device *mdevice, +static int dmirror_allocate_chunk(struct dmirror_device *mdevice, struct page **ppage) { struct dmirror_chunk *devmem; - struct resource *res; + struct resource *res = NULL; unsigned long pfn; unsigned long pfn_first; unsigned long pfn_last; @@ -462,17 +472,29 @@ static bool dmirror_allocate_chunk(struct dmirror_device *mdevice,
devmem = kzalloc(sizeof(*devmem), GFP_KERNEL); if (!devmem) - return false; + return -ENOMEM;
- res = request_free_mem_region(&iomem_resource, DEVMEM_CHUNK_SIZE, - "hmm_dmirror"); - if (IS_ERR(res)) - goto err_devmem; + if (!spm_addr_dev0 && !spm_addr_dev1) { + res = request_free_mem_region(&iomem_resource, DEVMEM_CHUNK_SIZE, + "hmm_dmirror"); + if (IS_ERR_OR_NULL(res)) + goto err_devmem; + devmem->pagemap.range.start = res->start; + devmem->pagemap.range.end = res->end; + devmem->pagemap.type = MEMORY_DEVICE_PRIVATE; + mdevice->zone_device_type = HMM_DMIRROR_MEMORY_DEVICE_PRIVATE; + } else if (spm_addr_dev0 && spm_addr_dev1) { + devmem->pagemap.range.start = MINOR(mdevice->cdevice.dev) ? + spm_addr_dev0 : + spm_addr_dev1; + devmem->pagemap.range.end = devmem->pagemap.range.start + + DEVMEM_CHUNK_SIZE - 1; + devmem->pagemap.type = MEMORY_DEVICE_COHERENT; + mdevice->zone_device_type = HMM_DMIRROR_MEMORY_DEVICE_COHERENT; + } else { + pr_err("Both spm_addr_dev parameters should be set\n"); + }
- mdevice->zone_device_type = HMM_DMIRROR_MEMORY_DEVICE_PRIVATE; - devmem->pagemap.type = MEMORY_DEVICE_PRIVATE; - devmem->pagemap.range.start = res->start; - devmem->pagemap.range.end = res->end; devmem->pagemap.nr_range = 1; devmem->pagemap.ops = &dmirror_devmem_ops; devmem->pagemap.owner = mdevice; @@ -493,10 +515,14 @@ static bool dmirror_allocate_chunk(struct dmirror_device *mdevice, mdevice->devmem_capacity = new_capacity; mdevice->devmem_chunks = new_chunks; } - ptr = memremap_pages(&devmem->pagemap, numa_node_id()); - if (IS_ERR(ptr)) + if (IS_ERR_OR_NULL(ptr)) { + if (ptr) + ret = PTR_ERR(ptr); + else + ret = -EFAULT; goto err_release; + }
devmem->mdevice = mdevice; pfn_first = devmem->pagemap.range.start >> PAGE_SHIFT; @@ -529,7 +555,8 @@ static bool dmirror_allocate_chunk(struct dmirror_device *mdevice,
err_release: mutex_unlock(&mdevice->devmem_lock); - release_mem_region(devmem->pagemap.range.start, range_len(&devmem->pagemap.range)); + if (res) + release_mem_region(devmem->pagemap.range.start, range_len(&devmem->pagemap.range)); err_devmem: kfree(devmem);
@@ -1097,10 +1124,8 @@ static int dmirror_device_init(struct dmirror_device *mdevice, int id) if (ret) return ret;
- /* Build a list of free ZONE_DEVICE private struct pages */ - dmirror_allocate_chunk(mdevice, NULL); - - return 0; + /* Build a list of free ZONE_DEVICE struct pages */ + return dmirror_allocate_chunk(mdevice, NULL); }
static void dmirror_device_remove(struct dmirror_device *mdevice) @@ -1113,8 +1138,9 @@ static void dmirror_device_remove(struct dmirror_device *mdevice) mdevice->devmem_chunks[i];
memunmap_pages(&devmem->pagemap); - release_mem_region(devmem->pagemap.range.start, - range_len(&devmem->pagemap.range)); + if (devmem->pagemap.type == MEMORY_DEVICE_PRIVATE) + release_mem_region(devmem->pagemap.range.start, + range_len(&devmem->pagemap.range)); kfree(devmem); } kfree(mdevice->devmem_chunks); diff --git a/lib/test_hmm_uapi.h b/lib/test_hmm_uapi.h index ee88701793d5..f86754be64fd 100644 --- a/lib/test_hmm_uapi.h +++ b/lib/test_hmm_uapi.h @@ -65,6 +65,7 @@ enum { enum { /* 0 is reserved to catch uninitialized type fields */ HMM_DMIRROR_MEMORY_DEVICE_PRIVATE = 1, + HMM_DMIRROR_MEMORY_DEVICE_COHERENT, };
#endif /* _LIB_TEST_HMM_UAPI_H */
The device coherent type uses device memory that is coherently accessible by the CPU. This can show up as an SP (special purpose) memory range in the BIOS-e820 memory enumeration. If no SP memory is supported by the system, it can be faked by setting CONFIG_EFI_FAKE_MEMMAP.
Currently, test_hmm only supports two different SP ranges of at least 256MB each. These can be specified with the efi_fake_mem kernel parameter. For example, two 1GB SP ranges starting at physical addresses 0x100000000 and 0x140000000:
efi_fake_mem=1G@0x100000000:0x40000,1G@0x140000000:0x40000
Signed-off-by: Alex Sierra <alex.sierra@amd.com>
---
 lib/test_hmm.c      | 195 ++++++++++++++++++++++++++++++++------------
 lib/test_hmm_uapi.h |  16 +++-
 2 files changed, 157 insertions(+), 54 deletions(-)
diff --git a/lib/test_hmm.c b/lib/test_hmm.c index 70a9be0efa00..b349dd920f04 100644 --- a/lib/test_hmm.c +++ b/lib/test_hmm.c @@ -469,6 +469,7 @@ static int dmirror_allocate_chunk(struct dmirror_device *mdevice, unsigned long pfn_first; unsigned long pfn_last; void *ptr; + int ret = -ENOMEM;
devmem = kzalloc(sizeof(*devmem), GFP_KERNEL); if (!devmem) @@ -551,7 +552,7 @@ static int dmirror_allocate_chunk(struct dmirror_device *mdevice, } spin_unlock(&mdevice->lock);
- return true; + return 0;
err_release: mutex_unlock(&mdevice->devmem_lock); @@ -560,7 +561,7 @@ static int dmirror_allocate_chunk(struct dmirror_device *mdevice, err_devmem: kfree(devmem);
- return false; + return ret; }
static struct page *dmirror_devmem_alloc_page(struct dmirror_device *mdevice) @@ -569,13 +570,14 @@ static struct page *dmirror_devmem_alloc_page(struct dmirror_device *mdevice) struct page *rpage;
/* - * This is a fake device so we alloc real system memory to store - * our device memory. + * For ZONE_DEVICE private type, this is a fake device so we alloc real + * system memory to store our device memory. + * For ZONE_DEVICE coherent type we use the actual dpage to store the data + * and ignore rpage. */ rpage = alloc_page(GFP_HIGHUSER); if (!rpage) return NULL; - spin_lock(&mdevice->lock);
if (mdevice->free_pages) { @@ -603,7 +605,7 @@ static void dmirror_migrate_alloc_and_copy(struct migrate_vma *args, struct dmirror *dmirror) { struct dmirror_device *mdevice = dmirror->mdevice; - const unsigned long *src = args->src; + unsigned long *src = args->src; unsigned long *dst = args->dst; unsigned long addr;
@@ -621,12 +623,17 @@ static void dmirror_migrate_alloc_and_copy(struct migrate_vma *args, * unallocated pte_none() or read-only zero page. */ spage = migrate_pfn_to_page(*src); + if (spage && is_zone_device_page(spage)) + pr_err("page already in device spage pfn: 0x%lx\n", + page_to_pfn(spage)); + BUG_ON(spage && is_zone_device_page(spage));
dpage = dmirror_devmem_alloc_page(mdevice); if (!dpage) continue;
- rpage = dpage->zone_device_data; + rpage = is_device_private_page(dpage) ? dpage->zone_device_data : + dpage; if (spage) copy_highpage(rpage, spage); else @@ -638,8 +645,10 @@ static void dmirror_migrate_alloc_and_copy(struct migrate_vma *args, * the simulated device memory and that page holds the pointer * to the mirror. */ + rpage = dpage->zone_device_data; rpage->zone_device_data = dmirror; - + pr_debug("migrating from sys to dev pfn src: 0x%lx pfn dst: 0x%lx\n", + page_to_pfn(spage), page_to_pfn(dpage)); *dst = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED; if ((*src & MIGRATE_PFN_WRITE) || @@ -673,10 +682,13 @@ static int dmirror_migrate_finalize_and_map(struct migrate_vma *args, continue;
/* - * Store the page that holds the data so the page table - * doesn't have to deal with ZONE_DEVICE private pages. + * For ZONE_DEVICE private pages we store the page that + * holds the data so the page table doesn't have to deal it. + * For ZONE_DEVICE coherent pages we store the actual page, since + * the CPU has coherent access to the page. */ - entry = dpage->zone_device_data; + entry = is_device_private_page(dpage) ? dpage->zone_device_data : + dpage; if (*dst & MIGRATE_PFN_WRITE) entry = xa_tag_pointer(entry, DPT_XA_TAG_WRITE); entry = xa_store(&dmirror->pt, pfn, entry, GFP_ATOMIC); @@ -690,7 +702,110 @@ static int dmirror_migrate_finalize_and_map(struct migrate_vma *args, return 0; }
-static int dmirror_migrate(struct dmirror *dmirror, +static vm_fault_t dmirror_devmem_fault_alloc_and_copy(struct migrate_vma *args, + struct dmirror *dmirror) +{ + unsigned long *src = args->src; + unsigned long *dst = args->dst; + unsigned long start = args->start; + unsigned long end = args->end; + unsigned long addr; + + for (addr = start; addr < end; addr += PAGE_SIZE, + src++, dst++) { + struct page *dpage, *spage; + + spage = migrate_pfn_to_page(*src); + if (!spage || !(*src & MIGRATE_PFN_MIGRATE)) + continue; + + BUG_ON(!is_device_page(spage)); + spage = is_device_private_page(spage) ? spage->zone_device_data: + spage; + dpage = alloc_page_vma(GFP_HIGHUSER_MOVABLE, args->vma, addr); + if (!dpage) + continue; + pr_debug("migrating from dev to sys pfn src: 0x%lx pfn dst: 0x%lx\n", + page_to_pfn(spage), page_to_pfn(dpage)); + + lock_page(dpage); + xa_erase(&dmirror->pt, addr >> PAGE_SHIFT); + copy_highpage(dpage, spage); + *dst = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED; + if (*src & MIGRATE_PFN_WRITE) + *dst |= MIGRATE_PFN_WRITE; + } + return 0; +} + +static int dmirror_migrate_to_system(struct dmirror *dmirror, + struct hmm_dmirror_cmd *cmd) +{ + unsigned long start, end, addr; + unsigned long size = cmd->npages << PAGE_SHIFT; + struct mm_struct *mm = dmirror->notifier.mm; + struct vm_area_struct *vma; + unsigned long src_pfns[64]; + unsigned long dst_pfns[64]; + struct migrate_vma args; + unsigned long next; + int ret; + + start = cmd->addr; + end = start + size; + if (end < start) + return -EINVAL; + + /* Since the mm is for the mirrored process, get a reference first. */ + if (!mmget_not_zero(mm)) + return -EINVAL; + + mmap_read_lock(mm); + for (addr = start; addr < end; addr = next) { + vma = find_vma(mm, addr); + if (!vma || addr < vma->vm_start || + !(vma->vm_flags & VM_READ)) { + ret = -EINVAL; + goto out; + } + next = min(end, addr + (ARRAY_SIZE(src_pfns) << PAGE_SHIFT)); + if (next > vma->vm_end) + next = vma->vm_end; + + args.vma = vma; + args.src = src_pfns; + args.dst = dst_pfns; + args.start = addr; + args.end = next; + args.pgmap_owner = dmirror->mdevice; + args.flags = (dmirror->mdevice->zone_device_type == + HMM_DMIRROR_MEMORY_DEVICE_PRIVATE) ? + MIGRATE_VMA_SELECT_DEVICE_PRIVATE : + MIGRATE_VMA_SELECT_DEVICE_COHERENT; + + ret = migrate_vma_setup(&args); + if (ret) + goto out; + + pr_debug("Migrating from device mem to sys mem\n"); + dmirror_devmem_fault_alloc_and_copy(&args, dmirror); + + migrate_vma_pages(&args); + migrate_vma_finalize(&args); + } + mmap_read_unlock(mm); + mmput(mm); + + return ret; + +out: + mmap_read_unlock(mm); + mmput(mm); + return ret; +} + + +static int dmirror_migrate_to_device(struct dmirror *dmirror, struct hmm_dmirror_cmd *cmd) { unsigned long start, end, addr; @@ -736,6 +851,7 @@ static int dmirror_migrate(struct dmirror *dmirror, if (ret) goto out;
+ pr_debug("Migrating from sys mem to device mem\n"); dmirror_migrate_alloc_and_copy(&args, dmirror); migrate_vma_pages(&args); dmirror_migrate_finalize_and_map(&args, dmirror); @@ -744,7 +860,7 @@ static int dmirror_migrate(struct dmirror *dmirror, mmap_read_unlock(mm); mmput(mm);
- /* Return the migrated data for verification. */ + /* Return the migrated data for verification. only for pages in device zone */ ret = dmirror_bounce_init(&bounce, start, size); if (ret) return ret; @@ -758,6 +874,7 @@ static int dmirror_migrate(struct dmirror *dmirror, } cmd->cpages = bounce.cpages; dmirror_bounce_fini(&bounce); + return ret;
out: @@ -781,9 +898,15 @@ static void dmirror_mkentry(struct dmirror *dmirror, struct hmm_range *range, }
page = hmm_pfn_to_page(entry); - if (is_device_private_page(page)) { - /* Is the page migrated to this device or some other? */ - if (dmirror->mdevice == dmirror_page_to_device(page)) + if (is_device_page(page)) { + /* Is page ZONE_DEVICE coherent? */ + if (!is_device_private_page(page)) + *perm = HMM_DMIRROR_PROT_DEV_COHERENT; + /* + * Is page ZONE_DEVICE private migrated to + * this device or some other? + */ + else if (dmirror->mdevice == dmirror_page_to_device(page)) *perm = HMM_DMIRROR_PROT_DEV_PRIVATE_LOCAL; else *perm = HMM_DMIRROR_PROT_DEV_PRIVATE_REMOTE; @@ -983,8 +1106,12 @@ static long dmirror_fops_unlocked_ioctl(struct file *filp, ret = dmirror_write(dmirror, &cmd); break;
- case HMM_DMIRROR_MIGRATE: - ret = dmirror_migrate(dmirror, &cmd); + case HMM_DMIRROR_MIGRATE_TO_DEV: + ret = dmirror_migrate_to_device(dmirror, &cmd); + break; + + case HMM_DMIRROR_MIGRATE_TO_SYS: + ret = dmirror_migrate_to_system(dmirror, &cmd); break;
case HMM_DMIRROR_SNAPSHOT: @@ -1030,38 +1157,6 @@ static void dmirror_devmem_free(struct page *page) spin_unlock(&mdevice->lock); }
-static vm_fault_t dmirror_devmem_fault_alloc_and_copy(struct migrate_vma *args, - struct dmirror *dmirror) -{ - const unsigned long *src = args->src; - unsigned long *dst = args->dst; - unsigned long start = args->start; - unsigned long end = args->end; - unsigned long addr; - - for (addr = start; addr < end; addr += PAGE_SIZE, - src++, dst++) { - struct page *dpage, *spage; - - spage = migrate_pfn_to_page(*src); - if (!spage || !(*src & MIGRATE_PFN_MIGRATE)) - continue; - spage = spage->zone_device_data; - - dpage = alloc_page_vma(GFP_HIGHUSER_MOVABLE, args->vma, addr); - if (!dpage) - continue; - - lock_page(dpage); - xa_erase(&dmirror->pt, addr >> PAGE_SHIFT); - copy_highpage(dpage, spage); - *dst = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED; - if (*src & MIGRATE_PFN_WRITE) - *dst |= MIGRATE_PFN_WRITE; - } - return 0; -} - static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf) { struct migrate_vma args; diff --git a/lib/test_hmm_uapi.h b/lib/test_hmm_uapi.h index f86754be64fd..13cec485328d 100644 --- a/lib/test_hmm_uapi.h +++ b/lib/test_hmm_uapi.h @@ -17,8 +17,12 @@ * @addr: (in) user address the device will read/write * @ptr: (in) user address where device data is copied to/from * @npages: (in) number of pages to read/write + * @alloc_to_devmem: (in) desired allocation destination during migration. + * True if allocation is to device memory. + * False if allocation is to system memory. * @cpages: (out) number of pages copied * @faults: (out) number of device page faults seen + * @zone_device_type: (out) zone device memory type */ struct hmm_dmirror_cmd { __u64 addr; @@ -26,15 +30,16 @@ struct hmm_dmirror_cmd { __u64 npages; __u64 cpages; __u64 faults; - __u64 zone_device_type; + __u32 zone_device_type; };
/* Expose the address space of the calling process through hmm device file */ #define HMM_DMIRROR_READ _IOWR('H', 0x00, struct hmm_dmirror_cmd) #define HMM_DMIRROR_WRITE _IOWR('H', 0x01, struct hmm_dmirror_cmd) -#define HMM_DMIRROR_MIGRATE _IOWR('H', 0x02, struct hmm_dmirror_cmd) -#define HMM_DMIRROR_SNAPSHOT _IOWR('H', 0x03, struct hmm_dmirror_cmd) -#define HMM_DMIRROR_GET_MEM_DEV_TYPE _IOWR('H', 0x04, struct hmm_dmirror_cmd) +#define HMM_DMIRROR_MIGRATE_TO_DEV _IOWR('H', 0x02, struct hmm_dmirror_cmd) +#define HMM_DMIRROR_MIGRATE_TO_SYS _IOWR('H', 0x03, struct hmm_dmirror_cmd) +#define HMM_DMIRROR_SNAPSHOT _IOWR('H', 0x04, struct hmm_dmirror_cmd) +#define HMM_DMIRROR_GET_MEM_DEV_TYPE _IOWR('H', 0x05, struct hmm_dmirror_cmd)
/* * Values returned in hmm_dmirror_cmd.ptr for HMM_DMIRROR_SNAPSHOT. @@ -49,6 +54,8 @@ struct hmm_dmirror_cmd { * device the ioctl() is made * HMM_DMIRROR_PROT_DEV_PRIVATE_REMOTE: Migrated device private page on some * other device + * HMM_DMIRROR_PROT_DEV_COHERENT: Migrate device coherent page on the device + * the ioctl() is made */ enum { HMM_DMIRROR_PROT_ERROR = 0xFF, @@ -60,6 +67,7 @@ enum { HMM_DMIRROR_PROT_ZERO = 0x10, HMM_DMIRROR_PROT_DEV_PRIVATE_LOCAL = 0x20, HMM_DMIRROR_PROT_DEV_PRIVATE_REMOTE = 0x30, + HMM_DMIRROR_PROT_DEV_COHERENT = 0x40, };
enum {
Test cases such as migrate_fault and migrate_multiple were modified to explicitly migrate from device to system memory, without relying on page faults, when the device coherent type is used.
The snapshot test case was updated to read the memory device type first and, based on that, check the proper returned results. A migrate_ping_pong test case was added to test explicit migration from device to system memory for both private and coherent zone types.
Helpers to migrate from device to system memory and vice versa were also added.
Signed-off-by: Alex Sierra <alex.sierra@amd.com>
---
 tools/testing/selftests/vm/hmm-tests.c | 137 +++++++++++++++++++++----
 1 file changed, 119 insertions(+), 18 deletions(-)
diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c index 5d1ac691b9f4..e7fa87618dd5 100644 --- a/tools/testing/selftests/vm/hmm-tests.c +++ b/tools/testing/selftests/vm/hmm-tests.c @@ -44,6 +44,7 @@ struct hmm_buffer { int fd; uint64_t cpages; uint64_t faults; + int zone_device_type; };
#define TWOMEG (1 << 21) @@ -144,6 +145,7 @@ static int hmm_dmirror_cmd(int fd, } buffer->cpages = cmd.cpages; buffer->faults = cmd.faults; + buffer->zone_device_type = cmd.zone_device_type;
return 0; } @@ -211,6 +213,32 @@ static void hmm_nanosleep(unsigned int n) nanosleep(&t, NULL); }
+static int hmm_migrate_sys_to_dev(int fd, + struct hmm_buffer *buffer, + unsigned long npages) +{ + return hmm_dmirror_cmd(fd, HMM_DMIRROR_MIGRATE_TO_DEV, buffer, npages); +} + +static int hmm_migrate_dev_to_sys(int fd, + struct hmm_buffer *buffer, + unsigned long npages) +{ + return hmm_dmirror_cmd(fd, HMM_DMIRROR_MIGRATE_TO_SYS, buffer, npages); +} + +static int hmm_is_private_device(int fd, bool *res) +{ + struct hmm_buffer buffer; + int ret; + + buffer.ptr = 0; + ret = hmm_dmirror_cmd(fd, HMM_DMIRROR_GET_MEM_DEV_TYPE, &buffer, 1); + *res = (buffer.zone_device_type == HMM_DMIRROR_MEMORY_DEVICE_PRIVATE); + + return ret; +} + /* * Simple NULL test of device open/close. */ @@ -875,7 +903,7 @@ TEST_F(hmm, migrate) ptr[i] = i;
/* Migrate memory to device. */ - ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_MIGRATE, buffer, npages); + ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages); ASSERT_EQ(ret, 0); ASSERT_EQ(buffer->cpages, npages);
@@ -923,7 +951,7 @@ TEST_F(hmm, migrate_fault) ptr[i] = i;
/* Migrate memory to device. */ - ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_MIGRATE, buffer, npages); + ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages); ASSERT_EQ(ret, 0); ASSERT_EQ(buffer->cpages, npages);
@@ -936,7 +964,7 @@ TEST_F(hmm, migrate_fault) ASSERT_EQ(ptr[i], i);
/* Migrate memory to the device again. */ - ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_MIGRATE, buffer, npages); + ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages); ASSERT_EQ(ret, 0); ASSERT_EQ(buffer->cpages, npages);
@@ -976,7 +1004,7 @@ TEST_F(hmm, migrate_shared) ASSERT_NE(buffer->ptr, MAP_FAILED);
/* Migrate memory to device. */ - ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_MIGRATE, buffer, npages); + ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages); ASSERT_EQ(ret, -ENOENT);
hmm_buffer_free(buffer); @@ -1015,7 +1043,7 @@ TEST_F(hmm2, migrate_mixed) p = buffer->ptr;
/* Migrating a protected area should be an error. */ - ret = hmm_dmirror_cmd(self->fd1, HMM_DMIRROR_MIGRATE, buffer, npages); + ret = hmm_migrate_sys_to_dev(self->fd1, buffer, npages); ASSERT_EQ(ret, -EINVAL);
/* Punch a hole after the first page address. */ @@ -1023,7 +1051,7 @@ TEST_F(hmm2, migrate_mixed) ASSERT_EQ(ret, 0);
/* We expect an error if the vma doesn't cover the range. */ - ret = hmm_dmirror_cmd(self->fd1, HMM_DMIRROR_MIGRATE, buffer, 3); + ret = hmm_migrate_sys_to_dev(self->fd1, buffer, 3); ASSERT_EQ(ret, -EINVAL);
/* Page 2 will be a read-only zero page. */ @@ -1055,13 +1083,13 @@ TEST_F(hmm2, migrate_mixed)
/* Now try to migrate pages 2-5 to device 1. */ buffer->ptr = p + 2 * self->page_size; - ret = hmm_dmirror_cmd(self->fd1, HMM_DMIRROR_MIGRATE, buffer, 4); + ret = hmm_migrate_sys_to_dev(self->fd1, buffer, 4); ASSERT_EQ(ret, 0); ASSERT_EQ(buffer->cpages, 4);
/* Page 5 won't be migrated to device 0 because it's on device 1. */ buffer->ptr = p + 5 * self->page_size; - ret = hmm_dmirror_cmd(self->fd0, HMM_DMIRROR_MIGRATE, buffer, 1); + ret = hmm_migrate_sys_to_dev(self->fd0, buffer, 1); ASSERT_EQ(ret, -ENOENT); buffer->ptr = p;
@@ -1070,8 +1098,12 @@ TEST_F(hmm2, migrate_mixed) }
 /*
- * Migrate anonymous memory to device private memory and fault it back to system
- * memory multiple times.
+ * Migrate anonymous memory to device memory and back to system memory
+ * multiple times. In the private zone configuration, the pages are
+ * faulted back by CPU accesses. In the coherent zone configuration,
+ * the pages must be migrated back to system memory explicitly, because
+ * the coherent device zone is coherently accessible from the CPU and
+ * therefore does not generate any page fault.
  */
 TEST_F(hmm, migrate_multiple)
 {
@@ -1082,7 +1114,9 @@ TEST_F(hmm, migrate_multiple)
 	unsigned long c;
 	int *ptr;
 	int ret;
+	bool is_private;

+	ASSERT_EQ(hmm_is_private_device(self->fd, &is_private), 0);
 	npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift;
 	ASSERT_NE(npages, 0);
 	size = npages << self->page_shift;
@@ -1107,8 +1141,7 @@ TEST_F(hmm, migrate_multiple)
 		ptr[i] = i;

 		/* Migrate memory to device. */
-		ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_MIGRATE, buffer,
-				      npages);
+		ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
 		ASSERT_EQ(ret, 0);
 		ASSERT_EQ(buffer->cpages, npages);

@@ -1116,7 +1149,12 @@ TEST_F(hmm, migrate_multiple)
 		for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
 			ASSERT_EQ(ptr[i], i);

-		/* Fault pages back to system memory and check them. */
+		/* Migrate back to system memory and check them. */
+		if (!is_private) {
+			ret = hmm_migrate_dev_to_sys(self->fd, buffer, npages);
+			ASSERT_EQ(ret, 0);
+		}
+
 		for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
 			ASSERT_EQ(ptr[i], i);
@@ -1261,10 +1299,12 @@ TEST_F(hmm2, snapshot)
 	unsigned char *m;
 	int ret;
 	int val;
+	bool is_private;

 	npages = 7;
 	size = npages << self->page_shift;

+	ASSERT_EQ(hmm_is_private_device(self->fd0, &is_private), 0);
 	buffer = malloc(sizeof(*buffer));
 	ASSERT_NE(buffer, NULL);

@@ -1312,13 +1352,13 @@ TEST_F(hmm2, snapshot)

 	/* Page 5 will be migrated to device 0. */
 	buffer->ptr = p + 5 * self->page_size;
-	ret = hmm_dmirror_cmd(self->fd0, HMM_DMIRROR_MIGRATE, buffer, 1);
+	ret = hmm_migrate_sys_to_dev(self->fd0, buffer, 1);
 	ASSERT_EQ(ret, 0);
 	ASSERT_EQ(buffer->cpages, 1);

 	/* Page 6 will be migrated to device 1. */
 	buffer->ptr = p + 6 * self->page_size;
-	ret = hmm_dmirror_cmd(self->fd1, HMM_DMIRROR_MIGRATE, buffer, 1);
+	ret = hmm_migrate_sys_to_dev(self->fd1, buffer, 1);
 	ASSERT_EQ(ret, 0);
 	ASSERT_EQ(buffer->cpages, 1);

@@ -1335,9 +1375,16 @@ TEST_F(hmm2, snapshot)
 	ASSERT_EQ(m[2], HMM_DMIRROR_PROT_ZERO | HMM_DMIRROR_PROT_READ);
 	ASSERT_EQ(m[3], HMM_DMIRROR_PROT_READ);
 	ASSERT_EQ(m[4], HMM_DMIRROR_PROT_WRITE);
-	ASSERT_EQ(m[5], HMM_DMIRROR_PROT_DEV_PRIVATE_LOCAL |
-			HMM_DMIRROR_PROT_WRITE);
-	ASSERT_EQ(m[6], HMM_DMIRROR_PROT_NONE);
+	if (is_private) {
+		ASSERT_EQ(m[5], HMM_DMIRROR_PROT_DEV_PRIVATE_LOCAL |
+				HMM_DMIRROR_PROT_WRITE);
+		ASSERT_EQ(m[6], HMM_DMIRROR_PROT_NONE);
+	} else {
+		ASSERT_EQ(m[5], HMM_DMIRROR_PROT_DEV_COHERENT |
+				HMM_DMIRROR_PROT_WRITE);
+		ASSERT_EQ(m[6], HMM_DMIRROR_PROT_DEV_COHERENT |
+				HMM_DMIRROR_PROT_WRITE);
+	}

 	hmm_buffer_free(buffer);
 }
@@ -1485,4 +1532,58 @@ TEST_F(hmm2, double_map)
 	hmm_buffer_free(buffer);
 }
+/*
+ * Migrate anonymous memory to device memory and migrate back to system memory
+ * explicitly, without generating a page fault.
+ */
+TEST_F(hmm, migrate_ping_pong)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	int *ptr;
+	int ret;
+
+	npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift;
+	ASSERT_NE(npages, 0);
+	size = npages << self->page_shift;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+
+	buffer->ptr = mmap(NULL, size,
+			   PROT_READ | PROT_WRITE,
+			   MAP_PRIVATE | MAP_ANONYMOUS,
+			   buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	/* Initialize buffer in system memory. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ptr[i] = i;
+
+	/* Migrate memory to device. */
+	ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+	/* Check what the device read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i);
+
+	/* Migrate memory back to system mem. */
+	ret = hmm_migrate_dev_to_sys(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+
+	/* Check the buffer migrated back to system memory. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i);
+
+	hmm_buffer_free(buffer);
+}
+
 TEST_HARNESS_MAIN
Add two more parameters to set the spm_addr_dev0 and spm_addr_dev1 addresses. These two parameters configure the start SPM (special purpose memory) addresses for each device in the test_hmm driver, which in turn configures the zone device type as coherent.
Signed-off-by: Alex Sierra alex.sierra@amd.com
---
 tools/testing/selftests/vm/test_hmm.sh | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/vm/test_hmm.sh b/tools/testing/selftests/vm/test_hmm.sh
index 0647b525a625..3eeabe94399f 100755
--- a/tools/testing/selftests/vm/test_hmm.sh
+++ b/tools/testing/selftests/vm/test_hmm.sh
@@ -40,7 +40,18 @@ check_test_requirements()
 load_driver()
 {
-	modprobe $DRIVER > /dev/null 2>&1
+	if [ $# -eq 0 ]; then
+		modprobe $DRIVER > /dev/null 2>&1
+	else
+		if [ $# -eq 2 ]; then
+			modprobe $DRIVER spm_addr_dev0=$1 spm_addr_dev1=$2 \
+				> /dev/null 2>&1
+		else
+			echo "Missing module parameters. Make sure to pass"\
+				"spm_addr_dev0 and spm_addr_dev1"
+			usage
+		fi
+	fi
 	if [ $? == 0 ]; then
 		major=$(awk "\$2==\"HMM_DMIRROR\" {print \$1}" /proc/devices)
 		mknod /dev/hmm_dmirror0 c $major 0
@@ -58,7 +69,7 @@ run_smoke()
 {
 	echo "Running smoke test. Note, this test provides basic coverage."
-	load_driver
+	load_driver $1 $2
 	$(dirname "${BASH_SOURCE[0]}")/hmm-tests
 	unload_driver
 }
@@ -75,6 +86,9 @@ usage()
 	echo "# Smoke testing"
 	echo "./${TEST_NAME}.sh smoke"
 	echo
+	echo "# Smoke testing with SPM enabled"
+	echo "./${TEST_NAME}.sh smoke <spm_addr_dev0> <spm_addr_dev1>"
+	echo
 	exit 0
 }
@@ -84,7 +98,7 @@ function run_test()
 		usage
 	else
 		if [ "$1" = "smoke" ]; then
-			run_smoke
+			run_smoke $2 $3
 		else
 			usage
 		fi
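For reference, the spm_addr_dev0 and spm_addr_dev1 values passed to modprobe above correspond to module parameters added to lib/test_hmm.c by the "lib: test_hmm add module param for zone device type" patch in this series. Below is a minimal sketch of what such parameter declarations typically look like; the names come from this series, while the types, permissions and descriptions shown here are illustrative assumptions rather than the actual patch:

#include <linux/module.h>

/*
 * Sketch only -- the real declarations live in lib/test_hmm.c.
 * When a non-zero SPM address is given, the corresponding dmirror
 * device is expected to expose MEMORY_DEVICE_COHERENT pages instead
 * of MEMORY_DEVICE_PRIVATE ones.
 */
static unsigned long spm_addr_dev0;
module_param(spm_addr_dev0, ulong, 0644);
MODULE_PARM_DESC(spm_addr_dev0,
		 "SPM (special purpose memory) start address for device 0");

static unsigned long spm_addr_dev1;
module_param(spm_addr_dev1, ulong, 0644);
MODULE_PARM_DESC(spm_addr_dev1,
		 "SPM (special purpose memory) start address for device 1");

With the parameters in place, "./test_hmm.sh smoke <spm_addr_dev0> <spm_addr_dev1>" loads the driver with both addresses set and exercises the device-coherent paths of hmm-tests.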
On Tue, 12 Oct 2021 12:12:35 -0500 Alex Sierra alex.sierra@amd.com wrote:
This patch series introduces MEMORY_DEVICE_COHERENT, a type of memory owned by a device that can be mapped into CPU page tables like MEMORY_DEVICE_GENERIC and can also be migrated like MEMORY_DEVICE_PRIVATE. With MEMORY_DEVICE_COHERENT, we isolate the new memory type from other subsystems as far as possible, though there are some small changes to other subsystems such as filesystem DAX, to handle the new memory type appropriately.
We use ZONE_DEVICE for this instead of NUMA so that the amdgpu allocator can manage it without conflicting with core mm for non-unified memory use cases.
How it works: The system BIOS advertises the GPU device memory (aka VRAM) as SPM (special purpose memory) in the UEFI system address map. The amdgpu driver registers the memory with devmap as MEMORY_DEVICE_COHERENT using devm_memremap_pages.
The initial user for this hardware page migration capability will be the Frontier supercomputer project.
To what other uses will this infrastructure be put?
Because I must ask: if this feature is for one single computer which presumably has a custom kernel, why add it to mainline Linux?
Our nodes in the lab have .5 TB of system memory plus 256 GB of device memory split across 4 GPUs, all in the same coherent address space. Page migration is expected to improve application efficiency significantly. We will report empirical results as they become available.
This includes patches originally by Ralph Campbell to change ZONE_DEVICE reference counting as requested in previous reviews of this patch series (see https://patchwork.freedesktop.org/series/90706/). We extended hmm_test to cover migration of MEMORY_DEVICE_COHERENT. This patch set builds on HMM and our SVM memory manager already merged in 5.14. We would like to complete review and merge this migration patchset for 5.16.
On Tue, Oct 12, 2021 at 11:39:57AM -0700, Andrew Morton wrote:
On Tue, 12 Oct 2021 12:12:35 -0500 Alex Sierra alex.sierra@amd.com wrote:
This patch series introduces MEMORY_DEVICE_COHERENT, a type of memory owned by a device that can be mapped into CPU page tables like MEMORY_DEVICE_GENERIC and can also be migrated like MEMORY_DEVICE_PRIVATE. With MEMORY_DEVICE_COHERENT, we isolate the new memory type from other subsystems as far as possible, though there are some small changes to other subsystems such as filesystem DAX, to handle the new memory type appropriately.
We use ZONE_DEVICE for this instead of NUMA so that the amdgpu allocator can manage it without conflicting with core mm for non-unified memory use cases.
How it works: The system BIOS advertises the GPU device memory (aka VRAM) as SPM (special purpose memory) in the UEFI system address map. The amdgpu driver registers the memory with devmap as MEMORY_DEVICE_COHERENT using devm_memremap_pages.
The initial user for this hardware page migration capability will be the Frontier supercomputer project.
To what other uses will this infrastructure be put?
Because I must ask: if this feature is for one single computer which presumably has a custom kernel, why add it to mainline Linux?
Well, it certainly isn't just "one single computer". Overall I know of about, hmm, ~10 *datacenters* worth of installations that are using similar technology underpinnings.
"Frontier" is the code name for a specific installation but as the technology is proven out there will be many copies made of that same approach.
The previous program "Summit" was done with NVIDIA GPUs and PowerPC CPUs and also included a very similar capability. I think this is a good sign that this coherently attached accelerator will continue to be a theme in computing going forward. IIRC this was done using out of tree kernel patches and NUMA localities.
Specifically with CXL now being standardized and on a path to ubiquity I think we will see an explosion in deployments of coherently attached accelerator memory. This is the high end trickling down to wider usage.
I strongly think many CXL accelerators are going to want to manage their on-accelerator memory in this way as it makes universal sense to want to carefully manage memory access locality to optimize for performance.
Jason
On Tue, 12 Oct 2021 15:56:29 -0300 Jason Gunthorpe jgg@nvidia.com wrote:
To what other uses will this infrastructure be put?
Because I must ask: if this feature is for one single computer which presumably has a custom kernel, why add it to mainline Linux?
Well, it certainly isn't just "one single computer". Overall I know of about, hmm, ~10 *datacenters* worth of installations that are using similar technology underpinnings.
"Frontier" is the code name for a specific installation but as the technology is proven out there will be many copies made of that same approach.
The previous program "Summit" was done with NVIDIA GPUs and PowerPC CPUs and also included a very similar capability. I think this is a good sign that this coherently attached accelerator will continue to be a theme in computing going forward. IIRC this was done using out of tree kernel patches and NUMA localities.
Specifically with CXL now being standardized and on a path to ubiquity I think we will see an explosion in deployments of coherently attached accelerator memory. This is the high end trickling down to wider usage.
I strongly think many CXL accelerators are going to want to manage their on-accelerator memory in this way as it makes universal sense to want to carefully manage memory access locality to optimize for performance.
Thanks. Can we please get something like the above into the [0/n] changelog? Along with any other high-level info which is relevant?
It's rather important. "why should I review this", "why should we merge this", etc.
On 2021-10-12 at 3:03 p.m., Andrew Morton wrote:
On Tue, 12 Oct 2021 15:56:29 -0300 Jason Gunthorpe jgg@nvidia.com wrote:
To what other uses will this infrastructure be put?
Because I must ask: if this feature is for one single computer which presumably has a custom kernel, why add it to mainline Linux?
Well, it certainly isn't just "one single computer". Overall I know of about, hmm, ~10 *datacenters* worth of installations that are using similar technology underpinnings.
"Frontier" is the code name for a specific installation but as the technology is proven out there will be many copies made of that same approach.
The previous program "Summit" was done with NVIDIA GPUs and PowerPC CPUs and also included a very similar capability. I think this is a good sign that this coherently attached accelerator will continue to be a theme in computing going forward. IIRC this was done using out of tree kernel patches and NUMA localities.
Specifically with CXL now being standardized and on a path to ubiquity I think we will see an explosion in deployments of coherently attached accelerator memory. This is the high end trickling down to wider usage.
I strongly think many CXL accelerators are going to want to manage their on-accelerator memory in this way as it makes universal sense to want to carefully manage memory access locality to optimize for performance.
Thanks. Can we please get something like the above into the [0/n] changelog? Along with any other high-level info which is relevant?
It's rather important. "why should I review this", "why should we merge this", etc.
Using Jason's input, I suggest adding this text for the next revision of the cover letter:
DEVICE_PRIVATE memory emulates coherence between CPU and the device by migrating data back and forth. An application that accesses the same page (or huge page) from CPU and device concurrently can cause many migrations, each involving device cache flushes, page table updates and page faults on the CPU or device.
In contrast, DEVICE_COHERENT enables truly concurrent CPU and device access to ZONE_DEVICE pages by taking advantage of HW coherence protocols.
As a historical reference point, the Summit supercomputer implemented such a coherent memory architecture with NVidia GPUs and PowerPC CPUs.
The initial user for the DEVICE_COHERENT memory type will be the AMD GPU driver on the Frontier supercomputer. CXL standardizes a coherent peripheral interconnect, leading to more mainstream systems and devices with that capability.
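To make the registration path concrete, here is a rough sketch of how a driver would register its coherent VRAM with devm_memremap_pages() using the new memory type. It is illustrative only, not the amdgpu code; the my_page_free() callback and the start/size arguments are hypothetical placeholders:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/memremap.h>

/* Sketch: return the backing VRAM page to the driver's allocator here. */
static void my_page_free(struct page *page)
{
}

static const struct dev_pagemap_ops my_pgmap_ops = {
	.page_free = my_page_free,
};

static int register_coherent_vram(struct device *dev,
				  struct dev_pagemap *pgmap,
				  u64 start, u64 size)
{
	void *addr;

	pgmap->type = MEMORY_DEVICE_COHERENT;	/* new type added by this series */
	pgmap->range.start = start;		/* SPM range from the UEFI memory map */
	pgmap->range.end = start + size - 1;
	pgmap->nr_range = 1;
	pgmap->ops = &my_pgmap_ops;
	pgmap->owner = dev;			/* matched against pgmap_owner during migration */

	addr = devm_memremap_pages(dev, pgmap);
	return IS_ERR(addr) ? PTR_ERR(addr) : 0;
}

The resulting struct pages are ordinary ZONE_DEVICE pages, so they can be mapped into CPU page tables directly and migrated with the migrate_vma helpers, as described above.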
Best regards, Felix
On Tue, Oct 12, 2021 at 03:56:29PM -0300, Jason Gunthorpe wrote:
On Tue, Oct 12, 2021 at 11:39:57AM -0700, Andrew Morton wrote:
On Tue, 12 Oct 2021 12:12:35 -0500 Alex Sierra alex.sierra@amd.com wrote:
This patch series introduces MEMORY_DEVICE_COHERENT, a type of memory owned by a device that can be mapped into CPU page tables like MEMORY_DEVICE_GENERIC and can also be migrated like MEMORY_DEVICE_PRIVATE. With MEMORY_DEVICE_COHERENT, we isolate the new memory type from other subsystems as far as possible, though there are some small changes to other subsystems such as filesystem DAX, to handle the new memory type appropriately.
We use ZONE_DEVICE for this instead of NUMA so that the amdgpu allocator can manage it without conflicting with core mm for non-unified memory use cases.
How it works: The system BIOS advertises the GPU device memory (aka VRAM) as SPM (special purpose memory) in the UEFI system address map. The amdgpu driver registers the memory with devmap as MEMORY_DEVICE_COHERENT using devm_memremap_pages.
The initial user for this hardware page migration capability will be the Frontier supercomputer project.
To what other uses will this infrastructure be put?
Because I must ask: if this feature is for one single computer which presumably has a custom kernel, why add it to mainline Linux?
Well, it certainly isn't just "one single computer". Overall I know of about, hmm, ~10 *datacenters* worth of installations that are using similar technology underpinnings.
"Frontier" is the code name for a specific installation but as the technology is proven out there will be many copies made of that same approach.
The previous program "Summit" was done with NVIDIA GPUs and PowerPC CPUs and also included a very similar capability. I think this is a good sign that this coherently attached accelerator will continue to be a theme in computing going forward. IIRC this was done using out of tree kernel patches and NUMA localities.
Specifically with CXL now being standardized and on a path to ubiquity I think we will see an explosion in deployments of coherently attached accelerator memory. This is the high end trickling down to wider usage.
I strongly think many CXL accelerators are going to want to manage their on-accelerator memory in this way as it makes universal sense to want to carefully manage memory access locality to optimize for performance.
Yeah with CXL this will be used by a lot more drivers/devices, not even including nvidia's blob.
I guess you'll want to make sure you get an ack on this from the CXL folks, so that we don't end up with a mess. -Daniel
On 2021-10-12 at 2:39 p.m., Andrew Morton wrote:
On Tue, 12 Oct 2021 12:12:35 -0500 Alex Sierra alex.sierra@amd.com wrote:
This patch series introduces MEMORY_DEVICE_COHERENT, a type of memory owned by a device that can be mapped into CPU page tables like MEMORY_DEVICE_GENERIC and can also be migrated like MEMORY_DEVICE_PRIVATE. With MEMORY_DEVICE_COHERENT, we isolate the new memory type from other subsystems as far as possible, though there are some small changes to other subsystems such as filesystem DAX, to handle the new memory type appropriately.
We use ZONE_DEVICE for this instead of NUMA so that the amdgpu allocator can manage it without conflicting with core mm for non-unified memory use cases.
How it works: The system BIOS advertises the GPU device memory (aka VRAM) as SPM (special purpose memory) in the UEFI system address map. The amdgpu driver registers the memory with devmap as MEMORY_DEVICE_COHERENT using devm_memremap_pages.
The initial user for this hardware page migration capability will be the Frontier supercomputer project.
To what other uses will this infrastructure be put?
Because I must ask: if this feature is for one single computer which presumably has a custom kernel, why add it to mainline Linux?
I'm not sure this will be the only system with this architecture. This is only the first one I know of. I hope it's not a one-off, after all the work we did on it. ;)
The Linux kernel on this system is based on SLES. We are working with SUSE on backporting patches needed for this system. However, those patches need to be upstream first.
DEVICE_PUBLIC was removed because it had no users. We're trying to add it (or something like it) back because we now have a use case for it.
Regards, Felix
Our nodes in the lab have .5 TB of system memory plus 256 GB of device memory split across 4 GPUs, all in the same coherent address space. Page migration is expected to improve application efficiency significantly. We will report empirical results as they become available.
This includes patches originally by Ralph Campbell to change ZONE_DEVICE reference counting as requested in previous reviews of this patch series (see https://patchwork.freedesktop.org/series/90706/). We extended hmm_test to cover migration of MEMORY_DEVICE_COHERENT. This patch set builds on HMM and our SVM memory manager already merged in 5.14. We would like to complete review and merge this migration patchset for 5.16.
On Tue, Oct 12, 2021 at 11:39:57AM -0700, Andrew Morton wrote:
Because I must ask: if this feature is for one single computer which presumably has a custom kernel, why add it to mainline Linux?
I think in particular patch 2 deserves to be merged because it removes a ton of cruft from every call to put_page() (at least if you're using a distro config). It makes me nervous, but I think it's the right thing to do. It may well need more fixups after it has been merged, but that's life.
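For readers who have not looked at patch 2: the "cruft" is the devmap special case that every put_page() call currently carries. Roughly, the mainline include/linux/mm.h code at the time of this posting looks like the following; it is shown only to illustrate what the patch removes, and the comment wording is paraphrased:

static inline void put_page(struct page *page)
{
	page = compound_head(page);

	/*
	 * For devmap managed pages the refcount transition from 2 to 1
	 * must be caught, because that is when the page is considered
	 * free and the driver must be notified via its page_free()
	 * callback.
	 */
	if (page_is_devmap_managed(page)) {
		put_devmap_managed_page(page);
		return;
	}

	if (put_page_testzero(page))
		__put_page(page);
}

Patch 2 moves ZONE_DEVICE pages to ordinary refcounting, so this special case can largely disappear from the put_page() fast path.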
On 2021-10-12 at 3:11 p.m., Matthew Wilcox wrote:
On Tue, Oct 12, 2021 at 11:39:57AM -0700, Andrew Morton wrote:
Because I must ask: if this feature is for one single computer which presumably has a custom kernel, why add it to mainline Linux?
I think in particular patch 2 deserves to be merged because it removes a ton of cruft from every call to put_page() (at least if you're using a distro config). It makes me nervous, but I think it's the right thing to do. It may well need more fixups after it has been merged, but that's life.
Maybe we should split the first two patches into a separate series, and get it merged first, while the more controversial stuff is still under review?
Thanks, Felix
On Tue, Oct 12, 2021 at 04:24:25PM -0400, Felix Kuehling wrote:
On 2021-10-12 at 3:11 p.m., Matthew Wilcox wrote:
On Tue, Oct 12, 2021 at 11:39:57AM -0700, Andrew Morton wrote:
Because I must ask: if this feature is for one single computer which presumably has a custom kernel, why add it to mainline Linux?
I think in particular patch 2 deserves to be merged because it removes a ton of cruft from every call to put_page() (at least if you're using a distro config). It makes me nervous, but I think it's the right thing to do. It may well need more fixups after it has been merged, but that's life.
Maybe we should split the first two patches into a separate series, and get it merged first, while the more controversial stuff is still under review?
Yes, please. I've seen that first patch several times already. :)
--D
Thanks, Felix