On Wed, Dec 04, 2019 at 04:51:36PM -0500, Jerome Glisse wrote:
On Tue, Dec 03, 2019 at 11:19:43AM -0800, Niranjan Vishwanathapura wrote:
On Tue, Nov 26, 2019 at 06:32:52PM +0000, Jason Gunthorpe wrote:
On Mon, Nov 25, 2019 at 11:33:27AM -0500, Jerome Glisse wrote:
On Fri, Nov 22, 2019 at 11:33:12PM +0000, Jason Gunthorpe wrote:
On Fri, Nov 22, 2019 at 12:57:27PM -0800, Niranjana Vishwanathapura wrote:
[...]
+static int
+i915_range_fault(struct i915_svm *svm, struct hmm_range *range)
+{
+	long ret;
+
+	range->default_flags = 0;
+	range->pfn_flags_mask = -1UL;
+
+	ret = hmm_range_register(range, &svm->mirror);
+	if (ret) {
+		up_read(&svm->mm->mmap_sem);
+		return (int)ret;
+	}
Using a temporary range is the pattern from nouveau; is it really necessary in this driver?
Just to comment on this: for a GPU the usage model is not that the application registers the ranges of virtual addresses it wants to use. Rather, the GPU can access _any_ valid CPU address just like the CPU would (modulo mmap of the device file).
This is because the API you want in userspace is the application passing arbitrary pointers to the GPU, and the GPU being able to chase down any kind of pointer chain (assuming it is all valid, i.e. pointing to valid virtual addresses for the process).
This is unlike the RDMA case.
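To make that model concrete, here is a minimal userspace sketch (svm_gpu_submit() is a hypothetical stand-in, not an existing i915 uAPI): the application hands the GPU the head of an ordinary malloc'ed linked list and the GPU is expected to chase the pointers itself.

#include <stdlib.h>

struct node {
	struct node *next;
	int payload;
};

/* Stub standing in for a hypothetical GPU submit call (not a real i915 uAPI). */
static int svm_gpu_submit(void *head)
{
	(void)head;
	return 0;
}

int main(void)
{
	/* Ordinary malloc'ed memory, never registered with the driver. */
	struct node *head = malloc(sizeof(*head));
	struct node *tail = malloc(sizeof(*tail));

	if (!head || !tail)
		return 1;

	head->next = tail;
	head->payload = 1;
	tail->next = NULL;
	tail->payload = 2;

	/*
	 * With SVM the GPU shares the process address space, so it can
	 * dereference head and head->next just like the CPU would; no
	 * per-range registration happens in userspace.
	 */
	return svm_gpu_submit(head);
}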
No, RDMA has the full address space option as well; it is called 'implicit ODP'.
This is implemented by registering ranges at a level in our page table (currently 512G) whenever that level has populated pages below it.
I think this is a better strategy than temporary ranges.
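For reference, a hedged sketch of what implicit ODP looks like from userspace with rdma-core (if I remember the convention right, addr = NULL and length = SIZE_MAX with IBV_ACCESS_ON_DEMAND requests the whole-address-space MR, and the kernel then populates page-table-level chunks underneath it on fault):

#include <stddef.h>
#include <stdint.h>
#include <infiniband/verbs.h>

/*
 * Sketch: create an implicit ODP MR spanning the whole address space.
 * The HCA faults pages in on demand; no per-buffer registration.
 */
static struct ibv_mr *register_implicit_odp(struct ibv_pd *pd)
{
	return ibv_reg_mr(pd, NULL, SIZE_MAX,
			  IBV_ACCESS_ON_DEMAND |
			  IBV_ACCESS_LOCAL_WRITE |
			  IBV_ACCESS_REMOTE_READ |
			  IBV_ACCESS_REMOTE_WRITE);
}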
HMM's original design did not have ranges and was well suited to nouveau. Recent changes tie it more to ranges and make it less suited to nouveau. I would not consider a 512GB implicit range a good thing. The plan I have is to create implicit ranges and align them to VMAs; a rough sketch of that is below. I do not know when I will have time to get to that.
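Something like the following is what I picture for VMA-aligned implicit ranges, building on the hmm_range_register() call quoted above. i915_svm_create_range() is a made-up helper, so treat this as a sketch only:

/*
 * Sketch only: one implicit hmm_range per VMA covering the faulting
 * address. i915_svm_create_range() is a hypothetical helper; caller
 * holds mmap_sem for read.
 */
static struct hmm_range *
i915_svm_implicit_range(struct i915_svm *svm, unsigned long addr)
{
	struct vm_area_struct *vma;
	struct hmm_range *range;

	vma = find_vma(svm->mm, addr);
	if (!vma || addr < vma->vm_start)
		return ERR_PTR(-EFAULT);

	/* Align the implicit range on the VMA instead of a fixed 512G chunk. */
	range = i915_svm_create_range(svm, vma->vm_start, vma->vm_end);
	if (IS_ERR(range))
		return range;

	range->default_flags = 0;
	range->pfn_flags_mask = -1UL;

	return range;
}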
For mlx5 the 512G is aligned to the page table levels, so it is a reasonable approximation. GPU could do the same.
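Just to spell out the arithmetic (a sketch, not mlx5 code): with 4K pages and 9 bits per page-table level, one top-level entry spans 2^(12+9+9+9) bytes = 512G, so a faulting address is simply masked down to a 512G-aligned implicit range.

#include <stdint.h>
#include <stdio.h>

/* One top-level page-table entry: 2^(12 + 9 + 9 + 9) bytes = 512G. */
#define SZ_512G	(1ULL << 39)

int main(void)
{
	uint64_t addr = 0x7f1234567000ULL;	/* example faulting address */
	uint64_t start = addr & ~(SZ_512G - 1);

	printf("fault %#llx -> implicit range [%#llx, %#llx)\n",
	       (unsigned long long)addr,
	       (unsigned long long)start,
	       (unsigned long long)(start + SZ_512G));
	return 0;
}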
Not sure VMA-aligned ranges are really any better..
Jason