The VMA manager is page-size based, so drm_vma_node_size() returns the size in pages. However, drm_gem_mmap_obj() requires the size in bytes. Apply PAGE_SHIFT to convert the node size so mmap no longer fails with -EINVAL because the buffer appears too small.
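For illustration only, a minimal user-space sketch of the unit mismatch. The stand-in helper name, the fixed page count and the 4 KiB page size are assumptions; only drm_vma_node_size() and drm_gem_mmap_obj() are the real kernel symbols involved:

#include <stdio.h>

#define PAGE_SHIFT 12	/* assume 4 KiB pages for this sketch */

/* stand-in for drm_vma_node_size(): the VMA manager counts pages */
static unsigned long fake_node_size_pages(void)
{
	return 16;	/* e.g. a 64 KiB GEM object spans 16 pages */
}

int main(void)
{
	unsigned long pages = fake_node_size_pages();
	unsigned long bytes = pages << PAGE_SHIFT;

	/*
	 * Passing 'pages' (16) where a byte count (65536) is expected
	 * makes any "is the object large enough for this mapping?"
	 * check fail, which is why mmap returned -EINVAL.
	 */
	printf("pages=%lu bytes=%lu\n", pages, bytes);
	return 0;
}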
This bug was introduced in commit 0de23977cfeb5b357ec884ba15417ae118ff9e9b ("drm/gem: convert to new unified vma manager").
Fixes the i915 GTT mmap failure reported by Sedat Dilek in the thread: Re: linux-next: Tree for Jul 25 [ call-trace: drm | drm-intel related? ]
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reported-by: Sedat Dilek <sedat.dilek@gmail.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
---
 drivers/gpu/drm/drm_gem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 3613b50..1f76572 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -666,7 +666,7 @@ int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
 	}
 
 	obj = container_of(node, struct drm_gem_object, vma_node);
-	ret = drm_gem_mmap_obj(obj, drm_vma_node_size(node), vma);
+	ret = drm_gem_mmap_obj(obj, drm_vma_node_size(node) << PAGE_SHIFT, vma);
 
 	mutex_unlock(&dev->struct_mutex);