Hi
This is the second part of the unified VMA manager. It implements proper access management so applications can map only buffers that they own handles for.
Patches of interest are probably:
  #1: Implement VMA access management helpers
  #9: Make TTM deny unprivileged mmap() calls
  #10: Do the same for GEM
The remaining patches just hook it up in all drivers.
The implementation is pretty easy. On gem_open_object() drivers add the "struct file*" pointer to the list of allowed open-files of a bo. On gem_close_object() drivers remove it again. For TTM, drivers do this manually as there is no access to TTM bo's from generic GEM code. For GEM, drivers can set DRIVER_GEM_MMAP (copied from airlied's proposal) and GEM core will take care of this.
If we want this to be more uniform, I can add gem_open_object() and gem_close_object() callbacks to all the GEM drivers and call drm_vma_node_allow() and drm_vma_node_revoke() respectively. Just let me know what you think is cleaner.
Cheers
David
David Herrmann (16):
  drm/vma: add access management helpers
  drm/ast: implement mmap access management
  drm/cirrus: implement mmap access management
  drm/mgag200: implement mmap access management
  drm/nouveau: implement mmap access management
  drm/radeon: implement mmap access management
  drm/qxl: implement mmap access management
  drm/vmwgfx: implement mmap access management
  drm/ttm: prevent mmap access to unauthorized users
  drm/gem: implement mmap access management
  drm/i915: enable GEM mmap access management
  drm/exynos: enable GEM mmap access management
  drm/gma500: enable GEM mmap access management
  drm/omap: enable GEM mmap access management
  drm/udl: enable GEM mmap access management
  drm/host1x: enable GEM mmap access management
 Documentation/DocBook/drm.tmpl           |  13 +++
 drivers/gpu/drm/ast/ast_drv.c            |   2 +
 drivers/gpu/drm/ast/ast_drv.h            |   4 +
 drivers/gpu/drm/ast/ast_main.c           |  15 +++
 drivers/gpu/drm/cirrus/cirrus_drv.h      |   4 +
 drivers/gpu/drm/cirrus/cirrus_main.c     |  15 +++
 drivers/gpu/drm/drm_gem.c                |  37 +++++++-
 drivers/gpu/drm/drm_vma_manager.c        | 155 +++++++++++++++++++++++++++++++
 drivers/gpu/drm/exynos/exynos_drm_drv.c  |   3 +-
 drivers/gpu/drm/gma500/psb_drv.c         |   3 +-
 drivers/gpu/drm/i915/i915_drv.c          |   3 +-
 drivers/gpu/drm/mgag200/mgag200_drv.c    |   2 +
 drivers/gpu/drm/mgag200/mgag200_drv.h    |   4 +
 drivers/gpu/drm/mgag200/mgag200_main.c   |  15 +++
 drivers/gpu/drm/nouveau/nouveau_gem.c    |  19 +++-
 drivers/gpu/drm/omapdrm/omap_drv.c       |   3 +-
 drivers/gpu/drm/qxl/qxl_gem.c            |   7 +-
 drivers/gpu/drm/radeon/radeon_gem.c      |   7 ++
 drivers/gpu/drm/ttm/ttm_bo_vm.c          |   9 +-
 drivers/gpu/drm/udl/udl_drv.c            |   3 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_resource.c |  29 +++---
 drivers/gpu/host1x/drm/drm.c             |   2 +-
 include/drm/drmP.h                       |   1 +
 include/drm/drm_vma_manager.h            |  21 ++++-
 24 files changed, 342 insertions(+), 34 deletions(-)
The VMA offset manager uses a device-global address space. Hence, any user can currently map any offset-node they want; they only need to guess the right offset. If we wanted per open-file offset spaces, we'd either need VM_NONLINEAR mappings or multiple "struct address_space" trees. As neither really scales, we implement access management in the VMA manager itself.
We use an rb-tree to store the open-files allowed on each VMA node. On each mmap() call, GEM, TTM or the drivers must check whether the current open-file is allowed to map the node.
We add a separate lock for each node as there is no generic lock available for the caller to protect the node easily.
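The allow/revoke/is-allowed semantics described above can be modelled in plain user-space C. In the sketch below, an unsorted array stands in for the kernel rbtree and the per-node lock is omitted; all names (vma_node, node_allow, MAX_FILES, ...) are illustrative and not the kernel API — this is a minimal model of the counting behaviour, not the actual drm_vma_manager code:

```c
/* User-space model of the per-node allow-list: each entry pairs an
 * open-file pointer with a ref-count, so allowing the same filp twice
 * requires revoking it twice before access is really gone. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_FILES 16

struct vma_file {
        void *filp;          /* stands in for struct file* */
        unsigned long count; /* one reference per allow() call */
};

struct vma_node {
        struct vma_file files[MAX_FILES];
        size_t nfiles;
};

static struct vma_file *node_find(struct vma_node *node, void *filp)
{
        for (size_t i = 0; i < node->nfiles; i++)
                if (node->files[i].filp == filp)
                        return &node->files[i];
        return NULL;
}

/* Grant @filp access; bump the ref-count if it is already listed. */
static int node_allow(struct vma_node *node, void *filp)
{
        struct vma_file *e = node_find(node, filp);

        if (e) {
                e->count++;
                return 0;
        }
        if (node->nfiles == MAX_FILES)
                return -1; /* models -ENOMEM */
        node->files[node->nfiles].filp = filp;
        node->files[node->nfiles].count = 1;
        node->nfiles++;
        return 0;
}

/* Drop one reference of @filp; remove the entry when it hits zero.
 * Revoking an unknown filp is a no-op, as in the patch. */
static void node_revoke(struct vma_node *node, void *filp)
{
        struct vma_file *e = node_find(node, filp);

        if (!e)
                return;
        if (--e->count == 0)
                *e = node->files[--node->nfiles]; /* swap-remove */
}

/* mmap() would consult this and fail with -EACCES on false. */
static bool node_is_allowed(struct vma_node *node, void *filp)
{
        return node_find(node, filp) != NULL;
}
```

The point of the counting is the rule stated in the drm_vma_node_allow() kernel-doc: every allow must be balanced by exactly one revoke before the node is destroyed.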
As we currently don't know whether an object may be used for mmap(), we have to do access management for all objects. If it turns out to slow down handle creation/deletion significantly, we can optimize it in several ways:
 - Most times only a single filp is added per bo, so we could use a static "struct file *main_filp" which is checked/added/removed first before we fall back to the rbtree+drm_vma_offset_file. This could even be done locklessly with RCU.
 - Let user-space pass a hint whether mmap() should be supported on the bo and avoid access-management if not.
 - ...there are probably more ideas once we have benchmarks...
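The first optimization idea above can be sketched as follows: a cached main_filp pointer is compared before any tree walk, so the common single-owner case never touches the fallback structure. Everything here (fast_node, fast_allow, the array fallback) is hypothetical and deliberately simplified — the fallback path drops the per-file ref-count to keep the sketch short:

```c
/* Sketch of the proposed "main_filp" fast path: the first (and usually
 * only) open-file is cached directly in the node; only additional files
 * fall back to the shared list, which stands in for the rbtree. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_EXTRA 8

struct fast_node {
        void *main_filp;          /* common case: the single owner */
        unsigned long main_count;
        void *extra[MAX_EXTRA];   /* fallback, models the rbtree */
        size_t nextra;
};

static int fast_allow(struct fast_node *n, void *filp)
{
        if (!n->main_filp || n->main_filp == filp) {
                n->main_filp = filp;  /* fast path: no tree walk */
                n->main_count++;
                return 0;
        }
        if (n->nextra == MAX_EXTRA)
                return -1;            /* models -ENOMEM */
        n->extra[n->nextra++] = filp; /* slow path, refcount omitted */
        return 0;
}

static bool fast_is_allowed(struct fast_node *n, void *filp)
{
        if (n->main_filp == filp)     /* single pointer compare first */
                return true;
        for (size_t i = 0; i < n->nextra; i++)
                if (n->extra[i] == filp)
                        return true;
        return false;
}
```

Since mmap() lookups would hit the cached pointer almost every time, this is also the shape that lends itself to an RCU-protected lockless read, as suggested above.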
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
---
 drivers/gpu/drm/drm_gem.c         |   1 +
 drivers/gpu/drm/drm_vma_manager.c | 155 ++++++++++++++++++++++++++++++++++++++
 include/drm/drm_vma_manager.h     |  21 +++++-
 3 files changed, 174 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 9ab038c..7043d89 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -156,6 +156,7 @@ void drm_gem_private_object_init(struct drm_device *dev,
 	kref_init(&obj->refcount);
 	atomic_set(&obj->handle_count, 0);
 	obj->size = size;
+	drm_vma_node_reset(&obj->vma_node);
 }
 EXPORT_SYMBOL(drm_gem_private_object_init);
diff --git a/drivers/gpu/drm/drm_vma_manager.c b/drivers/gpu/drm/drm_vma_manager.c
index 3837481..63b4712 100644
--- a/drivers/gpu/drm/drm_vma_manager.c
+++ b/drivers/gpu/drm/drm_vma_manager.c
@@ -25,6 +25,7 @@
 #include <drm/drmP.h>
 #include <drm/drm_mm.h>
 #include <drm/drm_vma_manager.h>
+#include <linux/fs.h>
 #include <linux/mm.h>
 #include <linux/module.h>
 #include <linux/rbtree.h>
@@ -58,6 +59,13 @@
  * must always be page-aligned (as usual).
  * If you want to get a valid byte-based user-space address for a given offset,
  * please see drm_vma_node_offset_addr().
+ *
+ * Additionally to offset management, the vma offset manager also handles access
+ * management. For every open-file context that is allowed to access a given
+ * node, you must call drm_vma_node_allow(). Otherwise, an mmap() call on this
+ * open-file with the offset of the node will fail with -EACCES. To revoke
+ * access again, use drm_vma_node_revoke(). However, the caller is responsible
+ * for destroying already existing mappings, if required.
  */
 /**
@@ -279,3 +287,150 @@ void drm_vma_offset_remove(struct drm_vma_offset_manager *mgr,
 	write_unlock(&mgr->vm_lock);
 }
 EXPORT_SYMBOL(drm_vma_offset_remove);
+
+/**
+ * drm_vma_node_allow - Add open-file to list of allowed users
+ * @node: Node to modify
+ * @filp: Open file to add
+ *
+ * Add @filp to the list of allowed open-files for this node. If @filp is
+ * already on this list, the ref-count is incremented.
+ *
+ * The list of allowed-users is preserved across drm_vma_offset_add() and
+ * drm_vma_offset_remove() calls. You may even call it if the node is currently
+ * not added to any offset-manager.
+ *
+ * You must remove all open-files the same number of times as you added them
+ * before destroying the node. Otherwise, you will leak memory.
+ *
+ * This is locked against concurrent access internally.
+ *
+ * RETURNS:
+ * 0 on success, negative error code on internal failure (out-of-mem)
+ */
+int drm_vma_node_allow(struct drm_vma_offset_node *node, struct file *filp)
+{
+	struct rb_node **iter;
+	struct rb_node *parent = NULL;
+	struct drm_vma_offset_file *new, *entry;
+	int ret = 0;
+
+	/* Preallocate entry to avoid atomic allocations below. It is quite
+	 * unlikely that an open-file is added twice to a single node so we
+	 * don't optimize for this case. OOM is checked below only if the
+	 * entry is actually used. */
+	new = kmalloc(sizeof(*entry), GFP_KERNEL);
+
+	write_lock(&node->vm_lock);
+
+	iter = &node->vm_files.rb_node;
+
+	while (likely(*iter)) {
+		parent = *iter;
+		entry = rb_entry(*iter, struct drm_vma_offset_file, vm_rb);
+
+		if (filp == entry->vm_filp) {
+			entry->vm_count++;
+			goto unlock;
+		} else if (filp > entry->vm_filp) {
+			iter = &(*iter)->rb_right;
+		} else {
+			iter = &(*iter)->rb_left;
+		}
+	}
+
+	if (!new) {
+		ret = -ENOMEM;
+		goto unlock;
+	}
+
+	new->vm_filp = filp;
+	new->vm_count = 1;
+	rb_link_node(&new->vm_rb, parent, iter);
+	rb_insert_color(&new->vm_rb, &node->vm_files);
+	new = NULL;
+
+unlock:
+	write_unlock(&node->vm_lock);
+	kfree(new);
+	return ret;
+}
+EXPORT_SYMBOL(drm_vma_node_allow);
+
+/**
+ * drm_vma_node_revoke - Remove open-file from list of allowed users
+ * @node: Node to modify
+ * @filp: Open file to remove
+ *
+ * Decrement the ref-count of @filp in the list of allowed open-files on @node.
+ * If the ref-count drops to zero, remove @filp from the list. You must call
+ * this once for every drm_vma_node_allow() on @filp.
+ *
+ * This is locked against concurrent access internally.
+ *
+ * If @filp is not on the list, nothing is done.
+ */
+void drm_vma_node_revoke(struct drm_vma_offset_node *node, struct file *filp)
+{
+	struct drm_vma_offset_file *entry;
+	struct rb_node *iter;
+
+	write_lock(&node->vm_lock);
+
+	iter = node->vm_files.rb_node;
+	while (likely(iter)) {
+		entry = rb_entry(iter, struct drm_vma_offset_file, vm_rb);
+		if (filp == entry->vm_filp) {
+			if (!--entry->vm_count) {
+				rb_erase(&entry->vm_rb, &node->vm_files);
+				kfree(entry);
+			}
+			break;
+		} else if (filp > entry->vm_filp) {
+			iter = iter->rb_right;
+		} else {
+			iter = iter->rb_left;
+		}
+	}
+
+	write_unlock(&node->vm_lock);
+}
+EXPORT_SYMBOL(drm_vma_node_revoke);
+
+/**
+ * drm_vma_node_is_allowed - Check whether an open-file is granted access
+ * @node: Node to check
+ * @filp: Open-file to check for
+ *
+ * Search the list in @node whether @filp is currently on the list of allowed
+ * open-files (see drm_vma_node_allow()).
+ *
+ * This is locked against concurrent access internally.
+ *
+ * RETURNS:
+ * true iff @filp is on the list
+ */
+bool drm_vma_node_is_allowed(struct drm_vma_offset_node *node,
+			     struct file *filp)
+{
+	struct drm_vma_offset_file *entry;
+	struct rb_node *iter;
+
+	read_lock(&node->vm_lock);
+
+	iter = node->vm_files.rb_node;
+	while (likely(iter)) {
+		entry = rb_entry(iter, struct drm_vma_offset_file, vm_rb);
+		if (filp == entry->vm_filp)
+			break;
+		else if (filp > entry->vm_filp)
+			iter = iter->rb_right;
+		else
+			iter = iter->rb_left;
+	}
+
+	read_unlock(&node->vm_lock);
+
+	return iter;
+}
+EXPORT_SYMBOL(drm_vma_node_is_allowed);
diff --git a/include/drm/drm_vma_manager.h b/include/drm/drm_vma_manager.h
index 22eedac..d66b6a2 100644
--- a/include/drm/drm_vma_manager.h
+++ b/include/drm/drm_vma_manager.h
@@ -24,15 +24,24 @@
  */
 #include <drm/drm_mm.h>
+#include <linux/fs.h>
 #include <linux/mm.h>
 #include <linux/module.h>
 #include <linux/rbtree.h>
 #include <linux/spinlock.h>
 #include <linux/types.h>

+struct drm_vma_offset_file {
+	struct rb_node vm_rb;
+	struct file *vm_filp;
+	unsigned long vm_count;
+};
+
 struct drm_vma_offset_node {
+	rwlock_t vm_lock;
 	struct drm_mm_node vm_node;
 	struct rb_node vm_rb;
+	struct rb_root vm_files;
 };

 struct drm_vma_offset_manager {
@@ -56,6 +65,11 @@ int drm_vma_offset_add(struct drm_vma_offset_manager *mgr,
 void drm_vma_offset_remove(struct drm_vma_offset_manager *mgr,
 			   struct drm_vma_offset_node *node);

+int drm_vma_node_allow(struct drm_vma_offset_node *node, struct file *filp);
+void drm_vma_node_revoke(struct drm_vma_offset_node *node, struct file *filp);
+bool drm_vma_node_is_allowed(struct drm_vma_offset_node *node,
+			     struct file *filp);
+
 /**
  * drm_vma_offset_exact_lookup() - Look up node by exact address
  * @mgr: Manager object
@@ -122,9 +136,8 @@ static inline void drm_vma_offset_unlock_lookup(struct drm_vma_offset_manager *m
  * drm_vma_node_reset() - Initialize or reset node object
  * @node: Node to initialize or reset
  *
- * Reset a node to its initial state. This must be called if @node isn't
- * already cleared (eg., via kzalloc) before using it with any VMA offset
- * manager.
+ * Reset a node to its initial state. This must be called before using it with
+ * any VMA offset manager.
  *
  * This must not be called on an already allocated node, or you will leak
  * memory.
@@ -132,6 +145,8 @@ static inline void drm_vma_offset_unlock_lookup(struct drm_vma_offset_manager *m
 static inline void drm_vma_node_reset(struct drm_vma_offset_node *node)
 {
 	memset(node, 0, sizeof(*node));
+	node->vm_files = RB_ROOT;
+	rwlock_init(&node->vm_lock);
 }
/**
Correctly allow and revoke buffer access on each open/close via the new VMA offset manager.
Cc: Dave Airlie <airlied@redhat.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
---
 drivers/gpu/drm/ast/ast_drv.c  |  2 ++
 drivers/gpu/drm/ast/ast_drv.h  |  4 ++++
 drivers/gpu/drm/ast/ast_main.c | 15 +++++++++++++++
 3 files changed, 21 insertions(+)
diff --git a/drivers/gpu/drm/ast/ast_drv.c b/drivers/gpu/drm/ast/ast_drv.c
index a144fb0..bc6f2c5 100644
--- a/drivers/gpu/drm/ast/ast_drv.c
+++ b/drivers/gpu/drm/ast/ast_drv.c
@@ -214,6 +214,8 @@ static struct drm_driver driver = {

 	.gem_init_object = ast_gem_init_object,
 	.gem_free_object = ast_gem_free_object,
+	.gem_open_object = ast_gem_open_object,
+	.gem_close_object = ast_gem_close_object,
 	.dumb_create = ast_dumb_create,
 	.dumb_map_offset = ast_dumb_mmap_offset,
 	.dumb_destroy = drm_gem_dumb_destroy,
diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
index 796dbb2..e6ab966 100644
--- a/drivers/gpu/drm/ast/ast_drv.h
+++ b/drivers/gpu/drm/ast/ast_drv.h
@@ -325,6 +325,10 @@ extern int ast_dumb_create(struct drm_file *file,

 extern int ast_gem_init_object(struct drm_gem_object *obj);
 extern void ast_gem_free_object(struct drm_gem_object *obj);
+extern int ast_gem_open_object(struct drm_gem_object *obj,
+			       struct drm_file *file_priv);
+extern void ast_gem_close_object(struct drm_gem_object *obj,
+				 struct drm_file *file_priv);
 extern int ast_dumb_mmap_offset(struct drm_file *file,
 				struct drm_device *dev,
 				uint32_t handle,
diff --git a/drivers/gpu/drm/ast/ast_main.c b/drivers/gpu/drm/ast/ast_main.c
index 7f6152d..b7b4b16 100644
--- a/drivers/gpu/drm/ast/ast_main.c
+++ b/drivers/gpu/drm/ast/ast_main.c
@@ -477,6 +477,21 @@ void ast_gem_free_object(struct drm_gem_object *obj)
 	ast_bo_unref(&ast_bo);
 }

+int ast_gem_open_object(struct drm_gem_object *obj,
+			struct drm_file *file_priv)
+{
+	struct ast_bo *ast_bo = gem_to_ast_bo(obj);
+
+	return drm_vma_node_allow(&ast_bo->bo.vma_node, file_priv->filp);
+}
+
+void ast_gem_close_object(struct drm_gem_object *obj,
+			  struct drm_file *file_priv)
+{
+	struct ast_bo *ast_bo = gem_to_ast_bo(obj);
+
+	drm_vma_node_revoke(&ast_bo->bo.vma_node, file_priv->filp);
+}
+
 static inline u64 ast_bo_mmap_offset(struct ast_bo *bo)
 {
Correctly allow and revoke buffer access on each open/close via the new VMA offset manager.
Cc: Dave Airlie <airlied@redhat.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
---
 drivers/gpu/drm/cirrus/cirrus_drv.h  |  4 ++++
 drivers/gpu/drm/cirrus/cirrus_main.c | 15 +++++++++++++++
 2 files changed, 19 insertions(+)
diff --git a/drivers/gpu/drm/cirrus/cirrus_drv.h b/drivers/gpu/drm/cirrus/cirrus_drv.h
index 9b0bb91..2fec34b 100644
--- a/drivers/gpu/drm/cirrus/cirrus_drv.h
+++ b/drivers/gpu/drm/cirrus/cirrus_drv.h
@@ -193,6 +193,10 @@ int cirrus_device_init(struct cirrus_device *cdev,
 void cirrus_device_fini(struct cirrus_device *cdev);
 int cirrus_gem_init_object(struct drm_gem_object *obj);
 void cirrus_gem_free_object(struct drm_gem_object *obj);
+int cirrus_gem_open_object(struct drm_gem_object *obj,
+			   struct drm_file *file_priv);
+void cirrus_gem_close_object(struct drm_gem_object *obj,
+			     struct drm_file *file_priv);
 int cirrus_dumb_mmap_offset(struct drm_file *file,
 			    struct drm_device *dev,
 			    uint32_t handle,
diff --git a/drivers/gpu/drm/cirrus/cirrus_main.c b/drivers/gpu/drm/cirrus/cirrus_main.c
index f130a53..7046c4b 100644
--- a/drivers/gpu/drm/cirrus/cirrus_main.c
+++ b/drivers/gpu/drm/cirrus/cirrus_main.c
@@ -284,6 +284,21 @@ void cirrus_gem_free_object(struct drm_gem_object *obj)
 	cirrus_bo_unref(&cirrus_bo);
 }

+int cirrus_gem_open_object(struct drm_gem_object *obj,
+			   struct drm_file *file_priv)
+{
+	struct cirrus_bo *bo = gem_to_cirrus_bo(obj);
+
+	return drm_vma_node_allow(&bo->bo.vma_node, file_priv->filp);
+}
+
+void cirrus_gem_close_object(struct drm_gem_object *obj,
+			     struct drm_file *file_priv)
+{
+	struct cirrus_bo *bo = gem_to_cirrus_bo(obj);
+
+	drm_vma_node_revoke(&bo->bo.vma_node, file_priv->filp);
+}
+
 static inline u64 cirrus_bo_mmap_offset(struct cirrus_bo *bo)
 {
Correctly allow and revoke buffer access on each open/close via the new VMA offset manager.
Cc: Dave Airlie <airlied@redhat.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
---
 drivers/gpu/drm/mgag200/mgag200_drv.c  |  2 ++
 drivers/gpu/drm/mgag200/mgag200_drv.h  |  4 ++++
 drivers/gpu/drm/mgag200/mgag200_main.c | 15 +++++++++++++++
 3 files changed, 21 insertions(+)
diff --git a/drivers/gpu/drm/mgag200/mgag200_drv.c b/drivers/gpu/drm/mgag200/mgag200_drv.c
index bd91964..c9b0aa7 100644
--- a/drivers/gpu/drm/mgag200/mgag200_drv.c
+++ b/drivers/gpu/drm/mgag200/mgag200_drv.c
@@ -102,6 +102,8 @@ static struct drm_driver driver = {

 	.gem_init_object = mgag200_gem_init_object,
 	.gem_free_object = mgag200_gem_free_object,
+	.gem_open_object = mgag200_gem_open_object,
+	.gem_close_object = mgag200_gem_close_object,
 	.dumb_create = mgag200_dumb_create,
 	.dumb_map_offset = mgag200_dumb_mmap_offset,
 	.dumb_destroy = drm_gem_dumb_destroy,
diff --git a/drivers/gpu/drm/mgag200/mgag200_drv.h b/drivers/gpu/drm/mgag200/mgag200_drv.h
index baaae19..3e39b67 100644
--- a/drivers/gpu/drm/mgag200/mgag200_drv.h
+++ b/drivers/gpu/drm/mgag200/mgag200_drv.h
@@ -265,6 +265,10 @@ int mgag200_dumb_create(struct drm_file *file,
 			struct drm_device *dev,
 			struct drm_mode_create_dumb *args);
 void mgag200_gem_free_object(struct drm_gem_object *obj);
+int mgag200_gem_open_object(struct drm_gem_object *obj,
+			    struct drm_file *file_priv);
+void mgag200_gem_close_object(struct drm_gem_object *obj,
+			      struct drm_file *file_priv);
 int mgag200_dumb_mmap_offset(struct drm_file *file,
 			     struct drm_device *dev,
diff --git a/drivers/gpu/drm/mgag200/mgag200_main.c b/drivers/gpu/drm/mgag200/mgag200_main.c
index 0f8b861..68578e7 100644
--- a/drivers/gpu/drm/mgag200/mgag200_main.c
+++ b/drivers/gpu/drm/mgag200/mgag200_main.c
@@ -339,6 +339,21 @@ void mgag200_gem_free_object(struct drm_gem_object *obj)
 	mgag200_bo_unref(&mgag200_bo);
 }

+int mgag200_gem_open_object(struct drm_gem_object *obj,
+			    struct drm_file *file_priv)
+{
+	struct mgag200_bo *bo = gem_to_mga_bo(obj);
+
+	return drm_vma_node_allow(&bo->bo.vma_node, file_priv->filp);
+}
+
+void mgag200_gem_close_object(struct drm_gem_object *obj,
+			      struct drm_file *file_priv)
+{
+	struct mgag200_bo *bo = gem_to_mga_bo(obj);
+
+	drm_vma_node_revoke(&bo->bo.vma_node, file_priv->filp);
+}
+
 static inline u64 mgag200_bo_mmap_offset(struct mgag200_bo *bo)
 {
Correctly allow and revoke buffer access on each open/close via the new VMA offset manager.
Cc: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
---
 drivers/gpu/drm/nouveau/nouveau_gem.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index 487242f..f2c4040 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -67,32 +67,41 @@ nouveau_gem_object_open(struct drm_gem_object *gem, struct drm_file *file_priv)
 	struct nouveau_vma *vma;
 	int ret;

+	ret = drm_vma_node_allow(&nvbo->bo.vma_node, file_priv->filp);
+	if (ret)
+		return ret;
+
 	if (!cli->base.vm)
 		return 0;

 	ret = ttm_bo_reserve(&nvbo->bo, false, false, false, 0);
 	if (ret)
-		return ret;
+		goto err_node;

 	vma = nouveau_bo_vma_find(nvbo, cli->base.vm);
 	if (!vma) {
 		vma = kzalloc(sizeof(*vma), GFP_KERNEL);
 		if (!vma) {
 			ret = -ENOMEM;
-			goto out;
+			goto err_reserve;
 		}

 		ret = nouveau_bo_vma_add(nvbo, cli->base.vm, vma);
 		if (ret) {
 			kfree(vma);
-			goto out;
+			goto err_reserve;
 		}
 	} else {
 		vma->refcount++;
 	}

-out:
 	ttm_bo_unreserve(&nvbo->bo);
+	return 0;
+
+err_reserve:
+	ttm_bo_unreserve(&nvbo->bo);
+err_node:
+	drm_vma_node_revoke(&nvbo->bo.vma_node, file_priv->filp);
 	return ret;
 }

@@ -139,6 +148,8 @@ nouveau_gem_object_close(struct drm_gem_object *gem, struct drm_file *file_priv)
 	struct nouveau_vma *vma;
 	int ret;

+	drm_vma_node_revoke(&nvbo->bo.vma_node, file_priv->filp);
+
 	if (!cli->base.vm)
 		return;
Correctly allow and revoke buffer access on each open/close via the new VMA offset manager.
Cc: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
---
 drivers/gpu/drm/radeon/radeon_gem.c | 7 +++++++
 1 file changed, 7 insertions(+)
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index dce99c8..58f5cc5 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -147,12 +147,17 @@ int radeon_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_pri
 	struct radeon_bo_va *bo_va;
 	int r;

+	r = drm_vma_node_allow(&rbo->tbo.vma_node, file_priv->filp);
+	if (r)
+		return r;
+
 	if (rdev->family < CHIP_CAYMAN) {
 		return 0;
 	}

 	r = radeon_bo_reserve(rbo, false);
 	if (r) {
+		drm_vma_node_revoke(&rbo->tbo.vma_node, file_priv->filp);
 		return r;
 	}

@@ -177,6 +182,8 @@ void radeon_gem_object_close(struct drm_gem_object *obj,
 	struct radeon_bo_va *bo_va;
 	int r;

+	drm_vma_node_revoke(&rbo->tbo.vma_node, file_priv->filp);
+
 	if (rdev->family < CHIP_CAYMAN) {
 		return;
 	}
Correctly allow and revoke buffer access on each open/close via the new VMA offset manager.
Cc: Dave Airlie <airlied@redhat.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
---
 drivers/gpu/drm/qxl/qxl_gem.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/qxl/qxl_gem.c b/drivers/gpu/drm/qxl/qxl_gem.c
index a235693..fb38a5e 100644
--- a/drivers/gpu/drm/qxl/qxl_gem.c
+++ b/drivers/gpu/drm/qxl/qxl_gem.c
@@ -129,12 +129,17 @@ void qxl_gem_object_unpin(struct drm_gem_object *obj)

 int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv)
 {
-	return 0;
+	struct qxl_bo *qobj = obj->driver_private;
+
+	return drm_vma_node_allow(&qobj->tbo.vma_node, file_priv->filp);
 }

 void qxl_gem_object_close(struct drm_gem_object *obj,
 			  struct drm_file *file_priv)
 {
+	struct qxl_bo *qobj = obj->driver_private;
+
+	drm_vma_node_revoke(&qobj->tbo.vma_node, file_priv->filp);
 }

 int qxl_gem_init(struct qxl_device *qdev)
Correctly allow and revoke buffer access on each open/close via the new VMA offset manager. We also need to make vmw_user_dmabuf_reference() correctly increase the vma-allow counter, but it is unused so remove it instead.
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
---
 drivers/gpu/drm/vmwgfx/vmwgfx_resource.c | 29 +++++++++++++++++------------
 1 file changed, 17 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
index 0e67cf4..4d3f0ae 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
@@ -499,6 +499,12 @@ int vmw_dmabuf_alloc_ioctl(struct drm_device *dev, void *data,
 	if (unlikely(ret != 0))
 		goto out_no_dmabuf;

+	ret = drm_vma_node_allow(&dma_buf->base.vma_node, file_priv->filp);
+	if (ret) {
+		vmw_dmabuf_unreference(&dma_buf);
+		goto out_no_dmabuf;
+	}
+
 	rep->handle = handle;
 	rep->map_handle = drm_vma_node_offset_addr(&dma_buf->base.vma_node);
 	rep->cur_gmr_id = handle;
@@ -517,7 +523,18 @@ int vmw_dmabuf_unref_ioctl(struct drm_device *dev, void *data,
 {
 	struct drm_vmw_unref_dmabuf_arg *arg =
 	    (struct drm_vmw_unref_dmabuf_arg *)data;
+	struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
+	struct vmw_dma_buffer *dma_buf;
+	int ret;
+
+	ret = vmw_user_dmabuf_lookup(tfile, arg->handle, &dma_buf);
+	if (ret)
+		return -EINVAL;
+
+	drm_vma_node_revoke(&dma_buf->base.vma_node, file_priv->filp);
+	vmw_dmabuf_unreference(&dma_buf);
+
+	/* FIXME: is this equivalent to vmw_dmabuf_unreference(dma_buf)? */
 	return ttm_ref_object_base_unref(vmw_fpriv(file_priv)->tfile,
 					 arg->handle, TTM_REF_USAGE);
@@ -551,18 +568,6 @@ int vmw_user_dmabuf_lookup(struct ttm_object_file *tfile,
 	return 0;
 }

-int vmw_user_dmabuf_reference(struct ttm_object_file *tfile,
-			      struct vmw_dma_buffer *dma_buf)
-{
-	struct vmw_user_dma_buffer *user_bo;
-
-	if (dma_buf->base.destroy != vmw_user_dmabuf_destroy)
-		return -EINVAL;
-
-	user_bo = container_of(dma_buf, struct vmw_user_dma_buffer, dma);
-	return ttm_ref_object_add(tfile, &user_bo->base, TTM_REF_USAGE, NULL);
-}
-
 /*
  * Stream management
  */
Hi
On Tue, Aug 13, 2013 at 9:38 PM, David Herrmann <dh.herrmann@gmail.com> wrote:
> Correctly allow and revoke buffer access on each open/close via the new VMA offset manager. We also need to make vmw_user_dmabuf_reference() correctly increase the vma-allow counter, but it is unused so remove it instead.
>
> Cc: Thomas Hellstrom <thellstrom@vmware.com>
> Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Just as a hint, this patch would allow to remove the "->access_verify()" callback in vmwgfx. No other driver uses it, afaik. I will try to add this in v2.
Regards
David
(CC'ing the proper people since I'm still on parental leave).
On 08/13/2013 11:44 PM, David Herrmann wrote:
Please see inline comments.
> Hi
>
> On Tue, Aug 13, 2013 at 9:38 PM, David Herrmann <dh.herrmann@gmail.com> wrote:
>> Correctly allow and revoke buffer access on each open/close via the new VMA offset manager.
I haven't yet had time to check the access policies of the new VMA offset manager, but anything that is identical or stricter than the current vmwgfx verify_access() would be fine. If it's stricter however, we need to double-check backwards user-space compatibility.
>> We also need to make vmw_user_dmabuf_reference() correctly increase the vma-allow counter, but it is unused so remove it instead.
IIRC this function or a derivative thereof is used heavily in an upcoming driver version, so if possible, please add a corrected version rather than remove the (currently) unused code. This will trigger a merge error and the upcoming code can be more easily corrected.
>> Cc: Thomas Hellstrom <thellstrom@vmware.com>
>> Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
>
> Just as a hint, this patch would allow to remove the "->access_verify()" callback in vmwgfx. No other driver uses it, afaik. I will try to add this in v2.
>
> Regards
> David
>> +	drm_vma_node_revoke(&dma_buf->base.vma_node, file_priv->filp);
>> +	vmw_dmabuf_unreference(&dma_buf);
>> +
>> +	/* FIXME: is this equivalent to vmw_dmabuf_unreference(dma_buf)? */
No. A ttm ref object is rather similar to a generic GEM object, only that it's generic in the sense that it is not restricted to buffers and can make any desired object visible to user-space. So, translated, the code below removes a reference that arg->handle holds on the "gem" object, potentially destroying the whole object in which the "gem" object is embedded.
>> 	return ttm_ref_object_base_unref(vmw_fpriv(file_priv)->tfile,
>> 					 arg->handle, TTM_REF_USAGE);
Otherwise looks OK to me.
Thanks, Thomas
Hi
On Wed, Aug 14, 2013 at 7:35 PM, Thomas Hellstrom <thellstrom@vmware.com> wrote:
> (CC'ing the proper people since I'm still on parental leave).
>
> On 08/13/2013 11:44 PM, David Herrmann wrote:
> Please see inline comments.
>> Hi
>>
>> On Tue, Aug 13, 2013 at 9:38 PM, David Herrmann <dh.herrmann@gmail.com> wrote:
>>> Correctly allow and revoke buffer access on each open/close via the new VMA offset manager.
>
> I haven't yet had time to check the access policies of the new VMA offset manager, but anything that is identical or stricter than the current vmwgfx verify_access() would be fine. If it's stricter however, we need to double-check backwards user-space compatibility.
My patches make vmw_dmabuf_alloc_ioctl() add the caller's open-file (file*) to the list of allowed users of the new bo; vmw_dmabuf_unref_ioctl() removes it again. I haven't seen any way to pass a user-dmabuf to another user, so there is currently at most one user for a vmw_dmabuf. vmw_user_dmabuf_reference() looks like it is intended for exactly this case, so it would have to add the caller's file* to the list of allowed users. I will change that in v2. This means every user who gets a handle for the buffer (as with gem_open) will be added to the allowed users; for TTM objects, currently only a single user is allowed.
So I replace vmw_user_bo->base.tfile with a list (actually an rbtree) of allowed files, which means more than one user can have access. This, however, breaks the "shareable" attribute, which I wasn't aware of. As far as I can see, "shareable" is only used by vmwgfx_surface.c and can be set by user-space to allow arbitrary processes to map the buffer (which sounds like a security issue similar to gem flink). I actually think we can replace the "shareable" attribute with proper access management in the vma-manager. But first I'd need to know whether "shareable = true" is actually used by vmwgfx user-space and how buffers are shared: do you simply pass the mmap offset between processes, or do you pass some handle?
If you really pass mmap offsets in user-space and rely on this, I guess there is no way I can make vmwgfx use the vma-manager access management. I will have to find a way to work around it or move the "shareable" flag to ttm_bo.
We also need to make vmw_user_dmabuf_reference() correctly increase the vma-allow counter, but it is unused so remove it instead.
IIRC this function or a derivative thereof is used heavily in an upcoming version driver, so if possible, please add a corrected version rather than remove the (currently) unused code. This will trigger a merge error and the upcoming code can be more easily corrected.
I will do so.
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Just as a hint, this patch would allow to remove the "->access_verify()" callback in vmwgfx. No other driver uses it, afaik. I will try to add this in v2.
Regards David
 drivers/gpu/drm/vmwgfx/vmwgfx_resource.c | 29 +++++++++++++++++------------
 1 file changed, 17 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
index 0e67cf4..4d3f0ae 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
@@ -499,6 +499,12 @@ int vmw_dmabuf_alloc_ioctl(struct drm_device *dev, void *data,
 	if (unlikely(ret != 0))
 		goto out_no_dmabuf;
 
+	ret = drm_vma_node_allow(&dma_buf->base.vma_node,
+				 file_priv->filp);
+	if (ret) {
+		vmw_dmabuf_unreference(&dma_buf);
+		goto out_no_dmabuf;
+	}
+
 	rep->handle = handle;
 	rep->map_handle = drm_vma_node_offset_addr(&dma_buf->base.vma_node);
 	rep->cur_gmr_id = handle;
@@ -517,7 +523,18 @@ int vmw_dmabuf_unref_ioctl(struct drm_device *dev, void *data,
 {
 	struct drm_vmw_unref_dmabuf_arg *arg =
 	    (struct drm_vmw_unref_dmabuf_arg *)data;
+	struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
+	struct vmw_dma_buffer *dma_buf;
+	int ret;
+
+	ret = vmw_user_dmabuf_lookup(tfile, arg->handle, &dma_buf);
+	if (ret)
+		return -EINVAL;
+
+	drm_vma_node_revoke(&dma_buf->base.vma_node, file_priv->filp);
+	vmw_dmabuf_unreference(&dma_buf);
 
+	/* FIXME: is this equivalent to vmw_dmabuf_unreference(dma_buf)? */
No. A ttm ref object is rather similar to a generic GEM object, except that it is generic in the sense that it is not restricted to buffers and can make any desired object visible to user-space. So, translated, the code below removes the reference that arg->handle holds on the "gem" object, potentially destroying the whole object in which the "gem" object is embedded.
So I actually need both lookups, vmw_user_dmabuf_lookup() and the lookup in ttm_ref_object_base_unref()? Ugh.. but ok, I will leave the function then as it is now but remove the comment.
 	return ttm_ref_object_base_unref(vmw_fpriv(file_priv)->tfile,
 					 arg->handle, TTM_REF_USAGE);
 
@@ -551,18 +568,6 @@ int vmw_user_dmabuf_lookup(struct ttm_object_file *tfile,
 	return 0;
 }
 
-int vmw_user_dmabuf_reference(struct ttm_object_file *tfile,
-			      struct vmw_dma_buffer *dma_buf)
-{
-	struct vmw_user_dma_buffer *user_bo;
-
-	if (dma_buf->base.destroy != vmw_user_dmabuf_destroy)
-		return -EINVAL;
-
-	user_bo = container_of(dma_buf, struct vmw_user_dma_buffer, dma);
-	return ttm_ref_object_add(tfile, &user_bo->base, TTM_REF_USAGE,
-				  NULL);
-}
-
 /*
  * Stream management
  */
-- 
1.8.3.4
Otherwise looks OK to me.
Thanks! David
On 08/16/2013 03:19 PM, David Herrmann wrote:
Hi
On Wed, Aug 14, 2013 at 7:35 PM, Thomas Hellstrom thellstrom@vmware.com wrote:
(CC'ing the proper people since I'm still on parental leave).
So I replace vmw_user_bo->base.tfile with a list (actually rbtree) of allowed files. So more than one user can have access. This, however, breaks the "shareable" attribute which I wasn't aware of. As far as I can see, "shareable" is only used by vmwgfx_surface.c and can be set by userspace to allow arbitrary processes to map this buffer (sounds like a security issue similar to gem flink). I actually think we can replace the "shareable" attribute with proper access-management in the vma-manager. But first I'd need to know whether "shareable = true" is actually used by vmwgfx user-space and how buffers are shared? Do you simply pass the mmap offset between processes? Or do you pass some handle?
Buffer- and surface sharing is done by passing an opaque (not mmap) handle. A process intending to map the shared buffer must obtain the map offset through a vmw_user_dmabuf_reference() call, and that only works if the buffer is "shareable". mmap offsets are never passed between processes, but valid only if obtained directly from the kernel driver.
This means that currently buffer mapping should have the same access restriction as the X server imposes on DRI clients; If a process is allowed to open the drm device, it also has map access to all "shareable" objects, which is a security hole in the sense that verify_access() should really check that the caller, if not the buffer owner, is also authenticated.
The reason verify_access() is there is to make the TTM buffer object transparent to how it is exported to user space (GEM or TTM objects). Apparently the GEM TTM drivers have ignored this hook for some unknown reason.
Some ideas: 1) Rather than having a list of allowable files on each buffer object, perhaps we should have a user and a group and a set of permissions (for user, group and system) more like how files are handled?
2) Rather than imposing a security policy in the VMA manager, could we perhaps have a set of utility functions that are called through verify_access()? Each driver could then have a wrapper to gather the needed information and hand it over to the VMA manager.
So I actually need both lookups, vmw_user_dmabuf_lookup() and the lookup in ttm_ref_object_base_unref()? Ugh.. but ok, I will leave the function then as it is now but remove the comment.
Yes. This seems odd, but IIRC the lookups are from different hash tables. The unref() call makes a lookup in a hash table private to the file.
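To make the pair of lookups less surprising, here is a toy userspace model of the scheme described above: a per-file reference table plus refcounted base objects. All structure and function names are invented for illustration; only the shape of the two lookups mirrors vmw_user_dmabuf_lookup() and ttm_ref_object_base_unref():

```c
/*
 * Toy model (assumed names, not the real TTM interfaces) of why
 * vmw_dmabuf_unref_ioctl() needs two lookups: one resolves the handle to
 * the object itself (e.g. to revoke vma access), the other drops the
 * reference that the file-private ref table holds on the base object,
 * potentially destroying it.
 */
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

struct base_object {
	int refcount;
	bool destroyed;
};

struct file_ref {
	int handle;
	struct base_object *base;   /* each ref pins the base object */
};

#define MAX_REFS 8
struct object_file {
	struct file_ref refs[MAX_REFS];
	int nrefs;
};

/* Lookup #1: resolve a handle to the object (vmw_user_dmabuf_lookup()-like). */
static struct base_object *file_lookup(struct object_file *tfile, int handle)
{
	for (int i = 0; i < tfile->nrefs; i++)
		if (tfile->refs[i].handle == handle)
			return tfile->refs[i].base;
	return NULL;
}

/* Lookup #2: drop the per-file ref (ttm_ref_object_base_unref()-like);
 * the last reference destroys the base object. */
static int file_base_unref(struct object_file *tfile, int handle)
{
	for (int i = 0; i < tfile->nrefs; i++) {
		if (tfile->refs[i].handle == handle) {
			struct base_object *base = tfile->refs[i].base;

			tfile->refs[i] = tfile->refs[--tfile->nrefs];
			if (--base->refcount == 0)
				base->destroyed = true;
			return 0;
		}
	}
	return -EINVAL;
}
```

The handle lives only in the per-file table, which is why the unref path must go through it even though the object was already found once.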
Hi
On Fri, Aug 16, 2013 at 5:33 PM, Thomas Hellstrom thellstrom@vmware.com wrote:
Buffer- and surface sharing is done by passing an opaque (not mmap) handle. A process intending to map the shared buffer must obtain the map offset through a vmw_user_dmabuf_reference() call, and that only works if the buffer is "shareable".
Ugh? That's not true. At least in upstream vmwgfx vmw_user_dmabuf_reference() is unused. Maybe you have access to some newer codebase? Anyway, I can easily make this function call drm_vma_node_allow() and then newer vmwgfx additions will work just fine. This means, every user who calls vmw_user_dmabuf_reference() will then also be allowed to mmap that buffer. But users who do not own a handle (that is, they didn't call vmw_user_dmabuf_reference() or they dropped the reference via vmw_user_dmabuf_unref_ioctl()) will get -EACCES if they try to mmap the buffer.
This is an extension to how it currently works, so I doubt that it breaks any user-space. Is that fine for vmwgfx?
mmap offsets are never passed between processes, but valid only if obtained directly from the kernel driver.
Good to hear. That means this patch doesn't break any existing userspace.
This means that currently buffer mapping should have the same access restriction as the X server imposes on DRI clients; If a process is allowed to open the drm device, it also has map access to all "shareable" objects, which is a security hole in the sense that verify_access() should really check that the caller, if not the buffer owner, is also authenticated.
I actually don't care for DRI. This series tries to fix exactly that. I don't want that. Users with DRM access shouldn't be able to map arbitrary buffers. Instead, users should only be able to map buffers that they own a handle for. Access management for handles is an orthogonal problem that this series does not change. dma-buf does a pretty good job there, anyway.
The reason verify_access() is there is to make the TTM buffer object transparent to how it is exported to user space (GEM or TTM objects). Apparently the GEM TTM drivers have ignored this hook for some unknown reason.
I don't think that we need any extended access-management here. Why would we ever need different access-modes for mmap than for handles? This series reduces mmap() access-management to handle-access-management. That is, the right to mmap() a buffer is now bound to a buffer handle. If you don't own a handle, you cannot mmap the buffer. But if you own a handle, you're always allowed to mmap the buffer. I think this should be the policy to go for, or am I missing something?
That's also why I think verify_access() is not needed at all. Drivers shouldn't care for mmap() access, instead they should take care only privileged users get handles (whatever they do to guarantee that, gem-flink, dma-buf, ...).
Cheers David
On 08/16/2013 07:01 PM, David Herrmann wrote:
Ugh? That's not true. At least in upstream vmwgfx vmw_user_dmabuf_reference() is unused. Maybe you have access to some newer codebase?
Yes, this is how TTM buffer management used to work in older TTM drivers and how the codebase for newer device versions will work.
Anyway, I can easily make this function call drm_vma_node_allow() and then newer vmwgfx additions will work just fine. This means, every user who calls vmw_user_dmabuf_reference() will then also be allowed to mmap that buffer. But users who do not own a handle (that is, they didn't call vmw_user_dmabuf_reference() or they dropped the reference via vmw_user_dmabuf_unref_ioctl()) will get -EACCES if they try to mmap the buffer.
This is an extension to how it currently works, so I doubt that it breaks any user-space. Is that fine for vmwgfx?
Yes, that sounds fine.
I actually don't care for DRI. This series tries to fix exactly that. I don't want that. Users with DRM access shouldn't be able to map arbitrary buffers. Instead, users should only be able to map buffers that they own a handle for. Access management for handles is an orthogonal problem that this series does not change. dma-buf does a pretty good job there, anyway.
Understood.
That's also why I think verify_access() is not needed at all. Drivers shouldn't care for mmap() access, instead they should take care only privileged users get handles (whatever they do to guarantee that, gem-flink, dma-buf, ...).
Sounds fair enough.
Thanks, Thomas
If a user does not have access to a given buffer, we must not allow them to mmap it. Otherwise, users could "guess" the buffer offsets of other users and get access to the buffer. Similar to mmap(), we also fix ttm_bo_io() which is the backend for read() and write() syscalls. It's currently unused, though.
All TTM drivers already use the new VMA offset manager access management so we can enable TTM mmap access management now.
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
---
 drivers/gpu/drm/ttm/ttm_bo_vm.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index 8c0e2c0..2c49953 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -219,7 +219,8 @@ static const struct vm_operations_struct ttm_bo_vm_ops = {
 	.close = ttm_bo_vm_close
 };
 
-static struct ttm_buffer_object *ttm_bo_vm_lookup(struct ttm_bo_device *bdev,
+static struct ttm_buffer_object *ttm_bo_vm_lookup(struct file *filp,
+						  struct ttm_bo_device *bdev,
 						  unsigned long offset,
 						  unsigned long pages)
 {
@@ -229,7 +230,7 @@ static struct ttm_buffer_object *ttm_bo_vm_lookup(struct ttm_bo_device *bdev,
 	drm_vma_offset_lock_lookup(&bdev->vma_manager);
 
 	node = drm_vma_offset_lookup_locked(&bdev->vma_manager, offset, pages);
-	if (likely(node)) {
+	if (likely(node) && drm_vma_node_is_allowed(node, filp)) {
 		bo = container_of(node, struct ttm_buffer_object, vma_node);
 		if (!kref_get_unless_zero(&bo->kref))
 			bo = NULL;
@@ -250,7 +251,7 @@ int ttm_bo_mmap(struct file *filp, struct vm_area_struct *vma,
 	struct ttm_buffer_object *bo;
 	int ret;
 
-	bo = ttm_bo_vm_lookup(bdev, vma->vm_pgoff, vma_pages(vma));
+	bo = ttm_bo_vm_lookup(filp, bdev, vma->vm_pgoff, vma_pages(vma));
 	if (unlikely(!bo))
 		return -EINVAL;
 
@@ -310,7 +311,7 @@ ssize_t ttm_bo_io(struct ttm_bo_device *bdev, struct file *filp,
 	bool no_wait = false;
 	bool dummy;
 
-	bo = ttm_bo_vm_lookup(bdev, dev_offset, 1);
+	bo = ttm_bo_vm_lookup(filp, bdev, dev_offset, 1);
 	if (unlikely(bo == NULL))
 		return -EFAULT;
Implement automatic access management for mmap offsets for all GEM drivers. This prevents user-space applications from "guessing" GEM BO offsets and accessing buffers which they don't own.
TTM drivers or other modesetting drivers with custom mm handling might make use of GEM but don't need its mmap helpers. To avoid unnecessary overhead, we limit GEM access management to drivers using DRIVER_GEM_MMAP. So for TTM drivers GEM will not call any *_allow() or *_revoke() helpers.
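The gating and the error-unwinding order in the handle-create path can be sketched as follows. Everything here is a hypothetical stand-in (flag values, struct layouts, stub hooks); it only mirrors the control flow of the patch below: grant vma access before calling the driver's open hook, and revoke it again if the hook fails, so a failed handle never leaves the file on the allow-list.

```c
/* Userspace sketch of DRIVER_GEM_MMAP gating; names and values are
 * illustrative, not the real DRM structures. */
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

#define DRIVER_GEM      0x1
#define DRIVER_GEM_MMAP 0x2   /* assumed flag value, for illustration only */

struct drm_device_stub {
	unsigned int driver_features;
	int (*gem_open_object)(void);   /* driver hook; may be NULL */
};

struct gem_object_stub {
	int allow_refs;   /* stand-in for the node's allow-list entry */
};

static bool check_feature(const struct drm_device_stub *dev, unsigned int f)
{
	return dev->driver_features & f;
}

/* Stub driver hooks used below. */
static int open_ok(void)   { return 0; }
static int open_fail(void) { return -EINVAL; }

/* Mirrors the drm_gem_handle_create() flow: allow first, unwind on error. */
static int handle_create(struct drm_device_stub *dev,
			 struct gem_object_stub *obj)
{
	int ret;

	if (check_feature(dev, DRIVER_GEM_MMAP))
		obj->allow_refs++;              /* drm_vma_node_allow() */

	if (dev->gem_open_object) {
		ret = dev->gem_open_object();
		if (ret)
			goto err_node;
	}
	return 0;

err_node:
	if (check_feature(dev, DRIVER_GEM_MMAP))
		obj->allow_refs--;              /* drm_vma_node_revoke() */
	return ret;
}

/* mmap() path: TTM-based drivers (no DRIVER_GEM_MMAP) skip the check. */
static int do_mmap(const struct drm_device_stub *dev,
		   const struct gem_object_stub *obj)
{
	if (check_feature(dev, DRIVER_GEM_MMAP) && obj->allow_refs == 0)
		return -EACCES;
	return 0;
}
```

The same flag check also guards the revoke calls on handle-delete and file release, so TTM drivers never see spurious allow/revoke traffic.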
Signed-off-by: David Herrmann dh.herrmann@gmail.com --- Documentation/DocBook/drm.tmpl | 13 +++++++++++++ drivers/gpu/drm/drm_gem.c | 36 ++++++++++++++++++++++++++++++++---- include/drm/drmP.h | 1 + 3 files changed, 46 insertions(+), 4 deletions(-)
diff --git a/Documentation/DocBook/drm.tmpl b/Documentation/DocBook/drm.tmpl index 87e22ec..a388749 100644 --- a/Documentation/DocBook/drm.tmpl +++ b/Documentation/DocBook/drm.tmpl @@ -223,6 +223,19 @@ </para></listitem> </varlistentry> <varlistentry> + <term>DRIVER_GEM_MMAP</term> + <listitem><para> + Driver uses default GEM mmap helpers. This flag guarantees that + GEM core takes care of buffer access management and prevents + unprivileged users from mapping random buffers. This flag should + only be set by GEM-only drivers that use the drm_gem_mmap_*() + helpers directly. TTM, on the other hand, takes care of access + management itself, even though drivers might use DRIVER_GEM and + TTM at the same time. See the DRM VMA Offset Manager interface for + more information on buffer mmap() access management. + </para></listitem> + </varlistentry> + <varlistentry> <term>DRIVER_MODESET</term> <listitem><para> Driver supports mode setting interfaces (KMS). diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c index 7043d89..887274f 100644 --- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -236,6 +236,9 @@ drm_gem_handle_delete(struct drm_file *filp, u32 handle)
drm_gem_remove_prime_handles(obj, filp);
+ if (drm_core_check_feature(dev, DRIVER_GEM_MMAP)) + drm_vma_node_revoke(&obj->vma_node, filp->filp); + if (dev->driver->gem_close_object) dev->driver->gem_close_object(obj, filp); drm_gem_object_handle_unreference_unlocked(obj); @@ -288,15 +291,26 @@ drm_gem_handle_create(struct drm_file *file_priv,
drm_gem_object_handle_reference(obj);
+	if (drm_core_check_feature(dev, DRIVER_GEM_MMAP)) {
+		ret = drm_vma_node_allow(&obj->vma_node, file_priv->filp);
+		if (ret)
+			goto err_handle;
+	}
+
 	if (dev->driver->gem_open_object) {
 		ret = dev->driver->gem_open_object(obj, file_priv);
-		if (ret) {
-			drm_gem_handle_delete(file_priv, *handlep);
-			return ret;
-		}
+		if (ret)
+			goto err_node;
 	}
 	return 0;
+
+err_node:
+	if (drm_core_check_feature(dev, DRIVER_GEM_MMAP))
+		drm_vma_node_revoke(&obj->vma_node, file_priv->filp);
+err_handle:
+	drm_gem_handle_delete(file_priv, *handlep);
+	return ret;
 }
 EXPORT_SYMBOL(drm_gem_handle_create);
@@ -486,6 +500,9 @@ drm_gem_object_release_handle(int id, void *ptr, void *data)
drm_gem_remove_prime_handles(obj, file_priv);
+	if (drm_core_check_feature(dev, DRIVER_GEM_MMAP))
+		drm_vma_node_revoke(&obj->vma_node, file_priv->filp);
+
 	if (dev->driver->gem_close_object)
 		dev->driver->gem_close_object(obj, file_priv);
@@ -610,6 +627,10 @@ EXPORT_SYMBOL(drm_gem_vm_close);
  * the GEM object is not looked up based on its fake offset. To implement the
  * DRM mmap operation, drivers should use the drm_gem_mmap() function.
  *
+ * drm_gem_mmap_obj() assumes the user is granted access to the buffer while
+ * drm_gem_mmap() prevents unprivileged users from mapping random objects. So
+ * callers must verify access restrictions before calling this helper.
+ *
  * NOTE: This function has to be protected with dev->struct_mutex
  *
  * Return 0 on success or -EINVAL if the object size is smaller than the VMA
@@ -658,6 +679,9 @@ EXPORT_SYMBOL(drm_gem_mmap_obj);
  * Look up the GEM object based on the offset passed in (vma->vm_pgoff will
  * contain the fake offset we created when the GTT map ioctl was called on
  * the object) and map it with a call to drm_gem_mmap_obj().
+ *
+ * If the caller is not granted access to the buffer object, the mmap will
+ * fail with EACCES. Please see DRIVER_GEM_MMAP for more information.
  */
 int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
 {
@@ -678,6 +702,10 @@ int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
 	if (!node) {
 		mutex_unlock(&dev->struct_mutex);
 		return drm_mmap(filp, vma);
+	} else if (drm_core_check_feature(dev, DRIVER_GEM_MMAP) &&
+		   !drm_vma_node_is_allowed(node, filp)) {
+		mutex_unlock(&dev->struct_mutex);
+		return -EACCES;
 	}
 	obj = container_of(node, struct drm_gem_object, vma_node);
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index 3ecdde6..d51accd 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -152,6 +152,7 @@ int drm_err(const char *func, const char *format, ...);
 #define DRIVER_GEM         0x1000
 #define DRIVER_MODESET     0x2000
 #define DRIVER_PRIME       0x4000
+#define DRIVER_GEM_MMAP    0x8000
 #define DRIVER_BUS_PCI 0x1
 #define DRIVER_BUS_PLATFORM 0x2
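The drm_vma_node_allow()/drm_vma_node_revoke()/drm_vma_node_is_allowed() helpers that this patch calls into are added in patch #1 of the series and are not shown here. As a rough, self-contained userspace sketch of what that bookkeeping amounts to: each object keeps a list of open-file pointers with per-file refcounts, so two handles held through the same file need two revokes before access is dropped. The names mirror the kernel helpers, but the singly-linked list, error codes, and lack of locking here are purely illustrative — the in-kernel implementation uses its own data structures under the manager's lock.

```c
#include <stdlib.h>

/* Stand-in for the kernel's "struct file *": any unique per-open token. */
struct file;

/* One entry per open-file that holds at least one handle to the object. */
struct vma_file_entry {
	struct file *filp;
	unsigned long refcount;		/* handles held through this file */
	struct vma_file_entry *next;
};

struct vma_node {
	struct vma_file_entry *files;	/* open-files allowed to mmap() */
};

/* Grant 'filp' access; called when a handle is created for it. */
static int vma_node_allow(struct vma_node *node, struct file *filp)
{
	struct vma_file_entry *e;

	for (e = node->files; e; e = e->next) {
		if (e->filp == filp) {
			e->refcount++;	/* second handle, same file */
			return 0;
		}
	}
	e = malloc(sizeof(*e));
	if (!e)
		return -1;		/* kernel code would return -ENOMEM */
	e->filp = filp;
	e->refcount = 1;
	e->next = node->files;
	node->files = e;
	return 0;
}

/* Drop one reference; called when a handle is destroyed. */
static void vma_node_revoke(struct vma_node *node, struct file *filp)
{
	struct vma_file_entry **p;

	for (p = &node->files; *p; p = &(*p)->next) {
		if ((*p)->filp == filp) {
			if (--(*p)->refcount == 0) {
				struct vma_file_entry *dead = *p;
				*p = dead->next;
				free(dead);
			}
			return;
		}
	}
}

/* mmap() path: deny unless the file was previously granted access. */
static int vma_node_is_allowed(struct vma_node *node, struct file *filp)
{
	struct vma_file_entry *e;

	for (e = node->files; e; e = e->next)
		if (e->filp == filp)
			return 1;
	return 0;
}
```

In this model, drm_gem_handle_create() maps to vma_node_allow(), drm_gem_handle_delete() to vma_node_revoke(), and the `-EACCES` check in drm_gem_mmap() to vma_node_is_allowed().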
On Tue, Aug 13, 2013 at 09:38:31PM +0200, David Herrmann wrote:
> Implement automatic access management for mmap offsets for all GEM
> drivers. This prevents user-space applications from "guessing" GEM BO
> offsets and accessing buffers which they don't own.
>
> TTM drivers or other modesetting drivers with custom mm handling might
> make use of GEM but don't need its mmap helpers. To avoid unnecessary
> overhead, we limit GEM access management to drivers using
> DRIVER_GEM_MMAP. So for TTM drivers GEM will not call any *_allow() or
> *_revoke() helpers.
>
> Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Overall I'm not sold why we need the GEM_MMAP feature flag. Drivers that don't use drm_gem_mmap shouldn't get hurt with it if we ignore the little bit of useless overhead due to obj->vma_node. But they already have that anyway with obj->vma, so meh. And more incentives to move ttm over to the gem object manager completely ;-)
One comment below.
Cheers, Daniel
> @@ -288,15 +291,26 @@ drm_gem_handle_create(struct drm_file *file_priv,
>
>  	drm_gem_object_handle_reference(obj);
>
> +	if (drm_core_check_feature(dev, DRIVER_GEM_MMAP)) {
> +		ret = drm_vma_node_allow(&obj->vma_node, file_priv->filp);
> +		if (ret)
> +			goto err_handle;
> +	}
> +
>  	if (dev->driver->gem_open_object) {
>  		ret = dev->driver->gem_open_object(obj, file_priv);
> -		if (ret) {
> -			drm_gem_handle_delete(file_priv, *handlep);
> -			return ret;
> -		}
> +		if (ret)
> +			goto err_node;
The error handling cleanup changes here aren't required since handle_delete will dtrt anyway. Or at least I hope it does ;-) -Daniel
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel
Hi
On Tue, Aug 13, 2013 at 11:05 PM, Daniel Vetter daniel@ffwll.ch wrote:
> Overall I'm not sold why we need the GEM_MMAP feature flag. Drivers
> that don't use drm_gem_mmap shouldn't get hurt with it if we ignore the
> little bit of useless overhead due to obj->vma_node. But they already
> have that anyway with obj->vma, so meh. And more incentives to move ttm
> over to the gem object manager completely ;-)
>
> One comment below.
I fixed both: the error path and the GEM_MMAP flag. Thanks!
Cheers David
Set DRIVER_GEM_MMAP to make GEM core track buffer accesses via the DRM VMA Offset Manager.
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
---
 drivers/gpu/drm/i915/i915_drv.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 13457e3e..c7e10ba 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1008,7 +1008,8 @@ static struct drm_driver driver = {
 	 */
 	.driver_features =
 	    DRIVER_USE_AGP | DRIVER_REQUIRE_AGP | /* DRIVER_USE_MTRR |*/
-	    DRIVER_HAVE_IRQ | DRIVER_IRQ_SHARED | DRIVER_GEM | DRIVER_PRIME,
+	    DRIVER_HAVE_IRQ | DRIVER_IRQ_SHARED | DRIVER_GEM | DRIVER_PRIME |
+	    DRIVER_GEM_MMAP,
 	.load = i915_driver_load,
 	.unload = i915_driver_unload,
 	.open = i915_driver_open,
Set DRIVER_GEM_MMAP to make GEM core track buffer accesses via the DRM VMA Offset Manager.
Cc: Inki Dae <inki.dae@samsung.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
---
 drivers/gpu/drm/exynos/exynos_drm_drv.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/exynos/exynos_drm_drv.c b/drivers/gpu/drm/exynos/exynos_drm_drv.c
index df81d3c..1d7f7cf 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_drv.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_drv.c
@@ -256,7 +256,8 @@ static const struct file_operations exynos_drm_driver_fops = {

 static struct drm_driver exynos_drm_driver = {
 	.driver_features	= DRIVER_HAVE_IRQ | DRIVER_MODESET |
-					DRIVER_GEM | DRIVER_PRIME,
+					DRIVER_GEM | DRIVER_PRIME |
+					DRIVER_GEM_MMAP,
 	.load			= exynos_drm_load,
 	.unload			= exynos_drm_unload,
 	.open			= exynos_drm_open,
Set DRIVER_GEM_MMAP to make GEM core track buffer accesses via the DRM VMA Offset Manager.
Cc: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
---
 drivers/gpu/drm/gma500/psb_drv.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/gma500/psb_drv.c b/drivers/gpu/drm/gma500/psb_drv.c
index d13c2fc..58790f4 100644
--- a/drivers/gpu/drm/gma500/psb_drv.c
+++ b/drivers/gpu/drm/gma500/psb_drv.c
@@ -628,7 +628,8 @@ static const struct file_operations psb_gem_fops = {

 static struct drm_driver driver = {
 	.driver_features = DRIVER_HAVE_IRQ | DRIVER_IRQ_SHARED | \
-			   DRIVER_IRQ_VBL | DRIVER_MODESET | DRIVER_GEM ,
+			   DRIVER_IRQ_VBL | DRIVER_MODESET | DRIVER_GEM |
+			   DRIVER_GEM_MMAP,
 	.load = psb_driver_load,
 	.unload = psb_driver_unload,
Set DRIVER_GEM_MMAP to make GEM core track buffer accesses via the DRM VMA Offset Manager.
Cc: Rob Clark <robdclark@gmail.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
---
 drivers/gpu/drm/omapdrm/omap_drv.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/omapdrm/omap_drv.c b/drivers/gpu/drm/omapdrm/omap_drv.c
index 2f9e22e..700e71f 100644
--- a/drivers/gpu/drm/omapdrm/omap_drv.c
+++ b/drivers/gpu/drm/omapdrm/omap_drv.c
@@ -605,7 +605,8 @@ static const struct file_operations omapdriver_fops = {

 static struct drm_driver omap_drm_driver = {
 	.driver_features =
-		DRIVER_HAVE_IRQ | DRIVER_MODESET | DRIVER_GEM | DRIVER_PRIME,
+		DRIVER_HAVE_IRQ | DRIVER_MODESET | DRIVER_GEM |
+		DRIVER_PRIME | DRIVER_GEM_MMAP,
 	.load = dev_load,
 	.unload = dev_unload,
 	.open = dev_open,
On Tue, Aug 13, 2013 at 3:38 PM, David Herrmann dh.herrmann@gmail.com wrote:
> Set DRIVER_GEM_MMAP to make GEM core track buffer accesses via the DRM
> VMA Offset Manager.
>
> Cc: Rob Clark <robdclark@gmail.com>
> Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Rob Clark robdclark@gmail.com
Set DRIVER_GEM_MMAP to make GEM core track buffer accesses via the DRM VMA Offset Manager.
Cc: David Airlie <airlied@linux.ie>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
---
 drivers/gpu/drm/udl/udl_drv.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/udl/udl_drv.c b/drivers/gpu/drm/udl/udl_drv.c
index bb0af58..145a8e1 100644
--- a/drivers/gpu/drm/udl/udl_drv.c
+++ b/drivers/gpu/drm/udl/udl_drv.c
@@ -73,7 +73,8 @@ static const struct file_operations udl_driver_fops = {
 };

 static struct drm_driver driver = {
-	.driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_PRIME,
+	.driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_PRIME |
+			   DRIVER_GEM_MMAP,
 	.load = udl_driver_load,
 	.unload = udl_driver_unload,
Set DRIVER_GEM_MMAP to make GEM core track buffer accesses via the DRM VMA Offset Manager.
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: "Terje Bergström" <tbergstrom@nvidia.com>
Cc: Arto Merilainen <amerilainen@nvidia.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
---
 drivers/gpu/host1x/drm/drm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/host1x/drm/drm.c b/drivers/gpu/host1x/drm/drm.c
index b128b90..1371a2b 100644
--- a/drivers/gpu/host1x/drm/drm.c
+++ b/drivers/gpu/host1x/drm/drm.c
@@ -613,7 +613,7 @@ static void tegra_debugfs_cleanup(struct drm_minor *minor)
 #endif

 struct drm_driver tegra_drm_driver = {
-	.driver_features = DRIVER_MODESET | DRIVER_GEM,
+	.driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_GEM_MMAP,
 	.load = tegra_drm_load,
 	.unload = tegra_drm_unload,
 	.open = tegra_drm_open,