On Wed, Aug 14, 2019 at 09:38:54AM +0200, Christoph Hellwig wrote:
On Tue, Aug 13, 2019 at 06:36:33PM -0700, Dan Williams wrote:
Section alignment constraints somewhat save us here. The only example I can think of where a PMD does not contain a uniform pgmap association for each pte is the case when the pgmap overlaps normal dram, i.e. shares the same 'struct memory_section' for a given span. Otherwise, distinct pgmaps arrange to manage their own exclusive sections (and now subsections as of v5.3); the implementation could not otherwise guarantee different mapping lifetimes.
That said, this seems to want a better mechanism to determine "pfn is ZONE_DEVICE".
So I guess this patch is fine for now, and once you provide a better mechanism we can switch over to it?
What about the version I sent to just get rid of all the strange put_dev_pagemap() calls while scanning? Odds are good we will work with only a single pagemap, so it makes some sense to cache it once we find it?
diff --git a/mm/hmm.c b/mm/hmm.c
index 9a908902e4cc38..4e30128c23a505 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -497,10 +497,6 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk,
 		}
 		pfns[i] = hmm_device_entry_from_pfn(range, pfn) | cpu_flags;
 	}
-	if (hmm_vma_walk->pgmap) {
-		put_dev_pagemap(hmm_vma_walk->pgmap);
-		hmm_vma_walk->pgmap = NULL;
-	}
 	hmm_vma_walk->last = end;
 	return 0;
 #else
@@ -604,10 +600,6 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 	return 0;
 
 fault:
-	if (hmm_vma_walk->pgmap) {
-		put_dev_pagemap(hmm_vma_walk->pgmap);
-		hmm_vma_walk->pgmap = NULL;
-	}
 	pte_unmap(ptep);
 	/* Fault any virtual address we were asked to fault */
 	return hmm_vma_walk_hole_(addr, end, fault, write_fault, walk);
@@ -690,16 +682,6 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 			return r;
 		}
 	}
-	if (hmm_vma_walk->pgmap) {
-		/*
-		 * We do put_dev_pagemap() here and not in hmm_vma_handle_pte()
-		 * so that we can leverage get_dev_pagemap() optimization which
-		 * will not re-take a reference on a pgmap if we already have
-		 * one.
-		 */
-		put_dev_pagemap(hmm_vma_walk->pgmap);
-		hmm_vma_walk->pgmap = NULL;
-	}
 	pte_unmap(ptep - 1);
 
 	hmm_vma_walk->last = addr;
@@ -751,10 +733,6 @@ static int hmm_vma_walk_pud(pud_t *pudp,
 		pfns[i] = hmm_device_entry_from_pfn(range, pfn) | cpu_flags;
 	}
-	if (hmm_vma_walk->pgmap) {
-		put_dev_pagemap(hmm_vma_walk->pgmap);
-		hmm_vma_walk->pgmap = NULL;
-	}
 	hmm_vma_walk->last = end;
 	return 0;
 }
@@ -1026,6 +1004,14 @@ long hmm_range_fault(struct hmm_range *range, unsigned int flags)
 		/* Keep trying while the range is valid. */
 	} while (ret == -EBUSY && range->valid);
 
+	/*
+	 * We do put_dev_pagemap() here so that we can leverage
+	 * get_dev_pagemap() optimization which will not re-take a
+	 * reference on a pgmap if we already have one.
+	 */
+	if (hmm_vma_walk->pgmap)
+		put_dev_pagemap(hmm_vma_walk->pgmap);
+
 	if (ret) {
 		unsigned long i;
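For reference, the caching this relies on comes from passing the previously held pgmap back into get_dev_pagemap(). A minimal sketch of the call site in the hmm_vma_handle_pte() path (paraphrased from the v5.3-era code, not a verbatim quote of the tree with this patch applied):

	if (pte_devmap(pte)) {
		/*
		 * get_dev_pagemap() reuses hmm_vma_walk->pgmap without
		 * re-taking a reference when the pfn still falls inside
		 * the cached pagemap; otherwise it drops the old one and
		 * does the slow lookup.
		 */
		hmm_vma_walk->pgmap = get_dev_pagemap(pte_pfn(pte),
						      hmm_vma_walk->pgmap);
		if (unlikely(!hmm_vma_walk->pgmap))
			return -EBUSY;
	}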
On Wed, Aug 14, 2019 at 6:28 AM Jason Gunthorpe jgg@mellanox.com wrote:
What about the version I sent to just get rid of all the strange put_dev_pagemap() calls while scanning? Odds are good we will work with only a single pagemap, so it makes some sense to cache it once we find it?
Yes, if the scan is over a single pmd then caching it makes sense.
On Wed, Aug 14, 2019 at 07:48:28AM -0700, Dan Williams wrote:
Yes, if the scan is over a single pmd then caching it makes sense.
Quite frankly, an easier and better solution is to remove the pagemap lookup. HMM users abide by the mmu notifiers, which means we will not make use of or dereference the struct page, so we are safe from any racing hotunplug of dax memory (as long as the device driver using hmm does not have a bug).
Cheers, Jérôme
On Thu, Aug 15, 2019 at 02:03:25PM -0400, Jerome Glisse wrote:
Quite frankly, an easier and better solution is to remove the pagemap lookup. HMM users abide by the mmu notifiers, which means we will not make use of or dereference the struct page, so we are safe from any racing hotunplug of dax memory (as long as the device driver using hmm does not have a bug).
Yes, I also would prefer to drop the confusing checks entirely - Christoph can you resend this patch?
Thanks, Jason
On Thu, Aug 15, 2019 at 11:07 AM Jerome Glisse jglisse@redhat.com wrote:
Quite frankly, an easier and better solution is to remove the pagemap lookup. HMM users abide by the mmu notifiers, which means we will not make use of or dereference the struct page, so we are safe from any racing hotunplug of dax memory (as long as the device driver using hmm does not have a bug).
Yes, as long as driver removal is synchronized against HMM operations via another mechanism, there is no need to take pagemap references. Can you briefly describe what that other mechanism is?
On Thu, Aug 15, 2019 at 12:36:58PM -0700, Dan Williams wrote:
Yes, as long as driver removal is synchronized against HMM operations via another mechanism, there is no need to take pagemap references. Can you briefly describe what that other mechanism is?
So if you hotunplug some dax memory, I assume that this can only happen once all the pages are unmapped (as they must have a zero refcount, well 1 because of the bias), and any unmap will trigger an mmu notifier callback. Users of hmm mirror abiding by the API will never make use of the information they get through the fault or snapshot functions until checking for a racing notifier under lock.
Cheers, Jérôme
On Thu, Aug 15, 2019 at 12:44 PM Jerome Glisse jglisse@redhat.com wrote:
So if you hotunplug some dax memory, I assume that this can only happen once all the pages are unmapped (as they must have a zero refcount, well 1 because of the bias), and any unmap will trigger an mmu notifier callback. Users of hmm mirror abiding by the API will never make use of the information they get through the fault or snapshot functions until checking for a racing notifier under lock.
Hmm, that first assumption is not guaranteed by the dev_pagemap core. The dev_pagemap end-of-life model is "disable, invalidate, drain", so it's possible to call devm_memunmap_pages() while pages are still mapped; it just won't complete the teardown of the pagemap until the last reference is dropped. New references are blocked during this teardown.
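In pseudocode, that end-of-life sequence is roughly the following (a paraphrase of the model above, not the literal devm_memunmap_pages() code; driver_invalidate_mappings() is a hypothetical stand-in for whatever unmap/fence work the driver and filesystem do):

	/* disable: percpu_ref_tryget_live() / get_dev_pagemap() now fail */
	percpu_ref_kill(pgmap->ref);

	/* invalidate: tear down mappings, firing mmu notifiers as usual */
	driver_invalidate_mappings(pgmap);

	/* drain: sleep until the last put_dev_pagemap() drops the reference */
	wait_for_completion(&pgmap->done);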
However, if the driver is validating the liveness of the mapping in the mmu-notifier path and blocking new references it sounds like it should be ok. Might there be GPU driver unit tests that cover this racing teardown case?
On Thu, Aug 15, 2019 at 01:12:22PM -0700, Dan Williams wrote:
Hmm, that first assumption is not guaranteed by the dev_pagemap core. The dev_pagemap end-of-life model is "disable, invalidate, drain", so it's possible to call devm_memunmap_pages() while pages are still mapped; it just won't complete the teardown of the pagemap until the last reference is dropped. New references are blocked during this teardown.
However, if the driver is validating the liveness of the mapping in the mmu-notifier path and blocking new references it sounds like it should be ok. Might there be GPU driver unit tests that cover this racing teardown case?
So neither HMM nor the driver should dereference the struct page (I do not think any iommu driver would either); they only care about the pfn. So even if we race with a teardown, as soon as we get the mmu notifier callback to invalidate the mapping we will do so. The pattern is:
mydriver_populate_vaddr_range(start, end) {
	hmm_range_register(range, start, end)
again:
	ret = hmm_range_fault(start, end)
	if (ret < 0)
		return ret;

	take_driver_page_table_lock();
	if (range.valid) {
		populate_device_page_table();
		release_driver_page_table_lock();
	} else {
		release_driver_page_table_lock();
		goto again;
	}
}
The mmu notifier callback takes the same page table lock, and we also have the range tracking going on. So either we populate the device page table before racing with the teardown, in which case the device page table entries are cleared through the mmu notifier callback; or, if we race, we see the racing mmu notifier call and retry, which will trigger a regular page fault that will return an error, I assume.
So in the end we have exactly the same behavior as if a CPU were trying to access that virtual address. This is the whole point of HMM: to behave exactly like a CPU access, failing in the same way and racing in the same way. So if DAX teardown is safe against racing CPU accesses to a vma that has that memory mapped, it will be the same for HMM users.
GPU driver test suites are not good at testing this. They are geared to test the GPU itself, not the interaction of the GPU driver with the rest of the kernel.
Cheers, Jérôme
On Thu, Aug 15, 2019 at 04:33:06PM -0400, Jerome Glisse wrote:
So neither HMM nor the driver should dereference the struct page (I do not think any iommu driver would either),
Er, they do technically deref the struct page:
nouveau_dmem_convert_pfn(struct nouveau_drm *drm, struct hmm_range *range)
		struct page *page;
		page = hmm_pfn_to_page(range, range->pfns[i]);
		if (!nouveau_dmem_page(drm, page)) {

nouveau_dmem_page(struct nouveau_drm *drm, struct page *page)
{
	return is_device_private_page(page) && drm->dmem == page_to_dmem(page)
Which does touch 'page->pgmap'
Is this OK without having a get_dev_pagemap() ?
Noting that the collision-retry scheme doesn't protect anything here as we can have a concurrent invalidation while doing the above deref.
Jason
On Thu, Aug 15, 2019 at 1:41 PM Jason Gunthorpe jgg@mellanox.com wrote:
Is this OK without having a get_dev_pagemap() ?
Noting that the collision-retry scheme doesn't protect anything here as we can have a concurrent invalidation while doing the above deref.
As long as take_driver_page_table_lock() in Jerome's flow can replace percpu_ref_tryget_live() on the pagemap reference, that should work. It seems nouveau_dmem_convert_pfn() happens after:
	mutex_lock(&svmm->mutex);
	if (!nouveau_range_done(&range)) {
...so I would expect that to be functionally equivalent to validating the reference count.
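Putting the pieces together, the ordering would look roughly like this (a sketch assembled from the snippets above; the surrounding control flow is an assumption, not verbatim nouveau code):

again:
	ret = hmm_range_fault(&range, 0);	/* no driver lock held here */
	if (ret <= 0)
		return ret;

	mutex_lock(&svmm->mutex);		/* the "driver page table lock" */
	if (!nouveau_range_done(&range)) {
		/* raced with an invalidation: drop the lock and retry */
		mutex_unlock(&svmm->mutex);
		goto again;
	}
	/* struct page / page->pgmap are only looked at under the lock */
	nouveau_dmem_convert_pfn(drm, &range);
	/* ... program the device page tables ... */
	mutex_unlock(&svmm->mutex);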
On Thu, Aug 15, 2019 at 01:47:12PM -0700, Dan Williams wrote:
As long as take_driver_page_table_lock() in Jerome's flow can replace percpu_ref_tryget_live() on the pagemap reference, that should work. It seems nouveau_dmem_convert_pfn() happens after:
	mutex_lock(&svmm->mutex);
	if (!nouveau_range_done(&range)) {
...so I would expect that to be functionally equivalent to validating the reference count.
Yes, OK, that makes sense, I was mostly surprised by the statement that the driver doesn't touch the struct page.
I suppose "doesn't touch the struct page out of the driver lock" is the case.
However, this means we cannot do any processing of ZONE_DEVICE pages outside the driver lock, so eg, doing any DMA map that might rely on MEMORY_DEVICE_PCI_P2PDMA has to be done in the driver lock, which is a bit unfortunate.
Jason
On Thu, Aug 15, 2019 at 5:41 PM Jason Gunthorpe jgg@mellanox.com wrote:
Yes, OK, that makes sense, I was mostly surprised by the statement that the driver doesn't touch the struct page.
I suppose "doesn't touch the struct page out of the driver lock" is the case.
However, this means we cannot do any processing of ZONE_DEVICE pages outside the driver lock, so eg, doing any DMA map that might rely on MEMORY_DEVICE_PCI_P2PDMA has to be done in the driver lock, which is a bit unfortunate.
Wouldn't P2PDMA use page pins? Not needing to hold a lock over ZONE_DEVICE page operations was one of the motivations for plumbing get_dev_pagemap() with a percpu-ref.
On Thu, Aug 15, 2019 at 08:54:46PM -0700, Dan Williams wrote:
Wouldn't P2PDMA use page pins? Not needing to hold a lock over ZONE_DEVICE page operations was one of the motivations for plumbing get_dev_pagemap() with a percpu-ref.
hmm_range_fault() doesn't use page pins at all, so if a ZONE_DEVICE page comes out of it then it needs to use another locking pattern.
If I follow it all right:
We can do a get_dev_pagemap inside the page_walk and touch the pgmap, or we can do the 'device mutex && retry' pattern and touch the pgmap in the driver, under that lock.
However in all cases the current get_dev_pagemap()'s in the page walk are not necessary, and we can delete them.
?
Jason
On Fri, Aug 16, 2019 at 5:24 AM Jason Gunthorpe jgg@mellanox.com wrote:
hmm_range_fault() doesn't use page pins at all, so if a ZONE_DEVICE page comes out of it then it needs to use another locking pattern.
If I follow it all right:
We can do a get_dev_pagemap inside the page_walk and touch the pgmap, or we can do the 'device mutex && retry' pattern and touch the pgmap in the driver, under that lock.
However in all cases the current get_dev_pagemap()'s in the page walk are not necessary, and we can delete them.
Yes, as long as 'struct page' instances resulting from that lookup are not passed outside of that lock.
On Fri, Aug 16, 2019 at 10:21:41AM -0700, Dan Williams wrote:
Yes, as long as 'struct page' instances resulting from that lookup are not passed outside of that lock.
Indeed.
Also, I was reflecting over lunch that hmm_range_fault() should only return DEVICE_PRIVATE pages for the caller's device (see the other thread with HCH), and in this case the caller should also be responsible for ensuring that the driver is not calling hmm_range_fault() at the same time it is deleting its own DEVICE_PRIVATE mapping, i.e. by fencing its page fault handler.
This does not apply to PCI_P2PDMA, but let's see how that looks when we get there.
So the whole thing seems pretty safe.
Jason
On 8/16/19 10:28 AM, Jason Gunthorpe wrote:
Also, I was reflecting over lunch that hmm_range_fault() should only return DEVICE_PRIVATE pages for the caller's device (see the other thread with HCH), and in this case the caller should also be responsible for ensuring that the driver is not calling hmm_range_fault() at the same time it is deleting its own DEVICE_PRIVATE mapping, i.e. by fencing its page fault handler.
Yes, that would make it a one-step process to access another device's migrated memory pages. Right now it has to be a two-step process where the caller calls hmm_range_fault(), checks the struct page to see if it is device private and not owned, then calls hmm_range_fault() again with range->pfns[i] |= range->flags[HMM_PFN_DEVICE_PRIVATE] to cause the other device to migrate the page back to system memory.
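For illustration, that two-step flow might look something like this (a sketch only; owned_by_this_device() is a hypothetical stand-in for a driver-specific check like nouveau_dmem_page()):

	bool refault = false;
	unsigned long i;

	ret = hmm_range_fault(&range, 0);		/* first pass */

	for (i = 0; i < npages; i++) {
		struct page *page = hmm_pfn_to_page(&range, range.pfns[i]);

		if (page && is_device_private_page(page) &&
		    !owned_by_this_device(page)) {
			/* request a fault so the foreign device migrates it back */
			range.pfns[i] |= range.flags[HMM_PFN_DEVICE_PRIVATE];
			refault = true;
		}
	}
	if (refault)
		ret = hmm_range_fault(&range, 0);	/* second pass */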
On Thu, Aug 15, 2019 at 08:41:33PM +0000, Jason Gunthorpe wrote:
Er, they do technically deref the struct page:
nouveau_dmem_convert_pfn(struct nouveau_drm *drm, struct hmm_range *range)
		struct page *page;
		page = hmm_pfn_to_page(range, range->pfns[i]);
		if (!nouveau_dmem_page(drm, page)) {

nouveau_dmem_page(struct nouveau_drm *drm, struct page *page)
{
	return is_device_private_page(page) && drm->dmem == page_to_dmem(page)
Which does touch 'page->pgmap'
Is this OK without having a get_dev_pagemap() ?
Noting that the collision-retry scheme doesn't protect anything here as we can have a concurrent invalidation while doing the above deref.
Uh? How so? We are not reading the same code, I think.
My read is that this function is called while holding the device lock, which excludes any racing mmu notifier from making forward progress, and it is also protected by the range, so at the time this happens it is safe to dereference the struct page. In any case we can update nouveau_dmem_page() to check that page->pgmap == the expected pgmap.
Cheers, Jérôme
On Thu, Aug 15, 2019 at 04:51:33PM -0400, Jerome Glisse wrote:
safe to dereference the struct page. In any case we can update nouveau_dmem_page() to check that page->pgmap == the expected pgmap.
I was also wondering if that is a problem... just blindly doing a container_of() on page->pgmap does seem like it assumes that only this driver is using DEVICE_PRIVATE.
It seems like something is missing in hmm_range_fault(): it should be told which DEVICE_PRIVATE pages are acceptable to trigger HMM_PFN_DEVICE_PRIVATE, and it should fault all others?
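One possible shape for that, purely hypothetical (neither the field nor the helper exists at this point; the names are made up for illustration):

	/* hypothetical field: which device-private owner the caller accepts */
	range.dev_private_owner = drm->dmem;

	/*
	 * Then, inside the walk, a device-private entry whose pgmap/owner does
	 * not match range->dev_private_owner would be treated as a normal
	 * fault (migrating the page back to system memory) instead of being
	 * returned with HMM_PFN_DEVICE_PRIVATE set.
	 */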
Jason