On Tue, Nov 26, 2019 at 06:32:52PM +0000, Jason Gunthorpe wrote:
On Mon, Nov 25, 2019 at 11:33:27AM -0500, Jerome Glisse wrote:
On Fri, Nov 22, 2019 at 11:33:12PM +0000, Jason Gunthorpe wrote:
On Fri, Nov 22, 2019 at 12:57:27PM -0800, Niranjana Vishwanathapura wrote:
[...]
+static int
+i915_range_fault(struct i915_svm *svm, struct hmm_range *range)
+{
+        long ret;
+
+        range->default_flags = 0;
+        range->pfn_flags_mask = -1UL;
+
+        ret = hmm_range_register(range, &svm->mirror);
+        if (ret) {
+                up_read(&svm->mm->mmap_sem);
+                return (int)ret;
+        }
Using a temporary range is the pattern from nouveau; is it really necessary in this driver?
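(For reference, the "temporary range" pattern in question looks roughly like the sketch below: a struct hmm_range is set up per fault, registered against the mirror, used, and then unregistered again. This is a from-memory sketch of the hmm_range_register()-era interface, not a quote of the nouveau or i915 code; the helper name sketch_fault_one() and the elided fault step are assumptions.)

static int sketch_fault_one(struct i915_svm *svm, u64 start, u64 npages,
                            u64 *pfns)
{
        /* Temporary range, one per fault, living on the stack. */
        struct hmm_range range = {
                .start = start,
                .end = start + (npages << PAGE_SHIFT),
                .pfns = pfns,
                .default_flags = 0,
                .pfn_flags_mask = -1UL,
        };
        long ret;

        ret = hmm_range_register(&range, &svm->mirror);
        if (ret)
                return (int)ret;

        /* ... fault/snapshot the pages and fill the GPU page tables
         * here, then drop the short-lived registration again ... */

        hmm_range_unregister(&range);
        return 0;
}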
Just to comment on this: for a GPU, the usage model is not the application registering the ranges of virtual addresses it wants to use. It is that the GPU can access _any_ valid CPU address just like the CPU would (modulo mmaps of the device file).

This is because the API you want in userspace is the application passing an arbitrary pointer to the GPU, and the GPU being able to chase down any kind of pointer chain (assuming it is all valid, i.e. pointing to valid virtual addresses for the process).
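(To make that usage model concrete, here is an illustrative userspace sketch: the application builds ordinary pointer-linked data with malloc() and hands the GPU a raw pointer into it, with no per-range registration. gpu_submit_with_pointer() is a made-up placeholder, not a real driver interface.)

#include <stdlib.h>

struct node {
        struct node *next;
        int payload;
};

/* Stand-in for whatever submission ioctl hands work to the GPU; with SVM
 * the GPU is expected to walk the same pointers the CPU would. */
static void gpu_submit_with_pointer(void *ptr) { (void)ptr; }

int main(void)
{
        struct node *head = NULL;

        for (int i = 0; i < 16; i++) {
                struct node *n = malloc(sizeof(*n));
                if (!n)
                        return 1;
                n->payload = i;
                n->next = head;
                head = n;
        }

        /* Nothing registers the list's address ranges anywhere: the GPU
         * chases head->next->next->... exactly like the CPU would. */
        gpu_submit_with_pointer(head);
        return 0;
}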
That usage model is unlike the RDMA case.
No, RDMA has the full-address-space option as well; it is called 'implicit ODP'.
This is implemented by registering ranges at a level in our page table (currently 512G) whenever that level has populated pages below it.
I think this is a better strategy than temporary ranges.
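(A rough illustration of the granularity mentioned above: with 4-level x86-64 paging, one top-level page table entry spans 1ULL << 39 bytes = 512GB, so an implicit registration created lazily on first fault amounts to rounding the faulting address down to that boundary and mirroring the whole slice. LEVEL_SHIFT and level_bounds() below are illustrative, not taken from the ODP implementation.)

#include <stdint.h>
#include <stdio.h>

#define LEVEL_SHIFT 39                    /* 2^39 bytes = 512GB per entry */
#define LEVEL_SIZE  (1ULL << LEVEL_SHIFT)

/* Map a faulting address to the 512GB-aligned slice that would be
 * registered once and then reused for later faults in the same slice. */
static void level_bounds(uint64_t addr, uint64_t *start, uint64_t *end)
{
        *start = addr & ~(LEVEL_SIZE - 1);
        *end = *start + LEVEL_SIZE;
}

int main(void)
{
        uint64_t start, end;

        level_bounds(0x00007f3a12345000ULL, &start, &end);
        printf("mirror range [%#llx, %#llx)\n",
               (unsigned long long)start, (unsigned long long)end);
        return 0;
}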
But other GPU drivers like AMD's are using BO and TTM objects with fixed VA ranges, and the range is tied to the BO/TTM.
I'm not sure if this i915 usage is closer to AMD or closer to nouveau.
I don't know the full details of the HMM use cases in amd and nouveau. AMD seems to be using it for userptr objects, which are tied to a BO. I am not sure if nouveau has any BO tied to these address ranges.
Niranjana
Jason