flink_to is similar to flink, but the global name is only made available to one other drm fd, identified by the existing drm_magic_t cookie mechanism.
flink_to names are transient. Once the receiving fd opens a name, it goes away. flink_to also lets the application attach a binary blob of data to the name, which is passed to the receiving fd when it uses the open_with_data ioctl. This lets us pass metadata about the buffer along with the name, which generalizes how we currently pass object size, tiling and swizzle information.
With the flink_to ioctl, the X server can specify exactly which clients should be able to access the window buffers, which avoids leaking buffer access to other X servers or unprivileged X clients.
Signed-off-by: Kristian Høgsberg krh@bitplanet.net
---
We've discussed how the current drm security model is broken in different ways before. This is my proposal for fixing it:
- drm auth today is a bit in the file_priv, set by the drm master when a client requests it (typically through dri2 protocol). Doesn't take multiple masters or fine-grained buffer access into account.
- broken in a multiple-masters world: vt switch, server drops master, then anybody can become master and auth themselves. that client is now authenticated and can access all buffers from the X server.
- XACE: no way to restrict access to window buffers; once one client is direct rendering to a window, all authenticated drm clients can access its buffers.
What is flink_to
- similar to flink, but the global name is only made available to one other drm fd, identified by the existing drm_magic_t cookie mechanism.
- flink_to names are transient: once the receiving fd opens one, it goes away. this makes user space easier; we don't have to track "did we already flink_to this buffer to that client and what was the name".
- flink_to fails if the receiving fd already has a flink_to name pending for the buffer (EBUSY). this prevents unlimited flink_to names from piling up if the receiving fd doesn't open them.
- flink_to twice on the same bo gives different names.
- flink_to lets the application attach a binary blob of data to the name (see below).
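The name-table rules above can be sketched as a toy userspace model in plain C. This is an illustration only: `struct name_entry`, `table_flink_to` and `table_open` are made-up stand-ins, not the kernel implementation in the patch below.

```c
#include <assert.h>

#define EBUSY_ERR  -1
#define ENOENT_ERR -2
#define MAX_NAMES  16

/* One outstanding name: which object it refers to and which magic
 * (receiving fd) is allowed to open it; magic 0 means global. */
struct name_entry {
	int in_use;
	int obj;        /* stand-in for a buffer object */
	unsigned magic; /* receiver's drm_magic_t cookie */
	int name;       /* the global name handed out */
};

static struct name_entry table[MAX_NAMES];
static int next_name = 1;

/* flink_to: reuse a global (magic 0) name, refuse with EBUSY if the
 * receiver already has a name pending for this object, otherwise hand
 * out a fresh, never-reused name. */
static int table_flink_to(int obj, unsigned magic)
{
	int i, slot = -1;

	for (i = 0; i < MAX_NAMES; i++) {
		if (table[i].in_use && table[i].obj == obj) {
			if (table[i].magic == 0 && magic == 0)
				return table[i].name;
			if (table[i].magic == magic)
				return EBUSY_ERR;
		}
		if (!table[i].in_use)
			slot = i;
	}
	if (slot < 0)
		return EBUSY_ERR;
	table[slot].in_use = 1;
	table[slot].obj = obj;
	table[slot].magic = magic;
	table[slot].name = next_name++;
	return table[slot].name;
}

/* open: only the matching magic may open a name, and a non-global
 * name is consumed by the open, i.e. it is transient. */
static int table_open(int name, unsigned magic)
{
	int i;

	for (i = 0; i < MAX_NAMES; i++) {
		if (table[i].in_use && table[i].name == name &&
		    table[i].magic == magic) {
			if (magic != 0)
				table[i].in_use = 0; /* transient */
			return table[i].obj;
		}
	}
	return ENOENT_ERR;
}
```

Note how a second flink_to of the same bo to the same magic returns EBUSY until the pending name is opened, opening consumes the name so a later flink_to hands out a different name, and magic-0 names behave like classic flink.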
How does flink_to fix the security problems
- for dri2, the X server will use flink_to to grant each client access to the dri2 buffers, on a per-buffer basis, as they call dri2 getbuffers.
- drm applications that aren't X clients can't talk to dri2 and thus can't get access. vt switching doesn't affect this in any way.
- xace can deny individual clients access to the dri2 extension, preventing those applications from accessing window buffers they aren't authorized for.
Suggestion: drop the auth requirement for gem ioctls
- rely on /dev/dri/card0 file permissions to restrict gpu access to local users, similar to how it works for most other hw.
- access to buffers is what needs to be controlled in the drm-with-mm world.
- keep auth and master requirement for kms and old drm ioctls
- allows gpgpu type applications
- risks: you no longer need to be an X client to access the gpu, so malicious applications can starve or maybe crash the gpu. malicious programs could submit hand-crafted batch buffers that read from absolute addresses; the X front buffer location in the gtt is pretty predictable. needs a cmd checker or ppgtt or such to be 100% secure.
Attached data
- a mechanism to attach a binary blob to a flink_to buffer name; open_with_data returns the data. Userspace (typically libdrm) decides the layout and versioning of the blob, and the contents will be chipset specific. it's an opaque blob to the kernel, which doesn't need to know about stride, formats etc.
- applications use this to pass metadata about the buffer along with the buffer name. this keeps chipset specific details out of the ipc. We already share object size, tiling and swizzling information through the kernel and have to ioctl immediately after gem_open to get those details. this mechanism lets us pass that info along with other metadata and return it all as we open the buffer with open_with_data.
- EGLImage sharing: assign a global name to an EGLImage to share with a different process. Used by wayland or potentially other non-X systems. Khronos EGL extension in the works.
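As an illustration of what user space could put in the blob, here is a hypothetical libdrm-side layout. The `bo_metadata` struct, its fields and the leading version word are assumptions for this sketch; the patch deliberately leaves the layout up to userspace.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical chipset-specific metadata a driver could attach to a
 * flink_to name. The kernel never interprets any of this. */
struct bo_metadata {
	uint32_t version; /* layout version word, first by convention */
	uint32_t size;    /* object size in bytes */
	uint32_t stride;
	uint32_t tiling;
	uint32_t swizzle;
};

/* Serialize into the opaque blob handed to flink_to. */
static size_t pack_metadata(const struct bo_metadata *md,
			    void *blob, size_t blob_size)
{
	if (blob_size < sizeof *md)
		return 0;
	memcpy(blob, md, sizeof *md);
	return sizeof *md;
}

/* Deserialize the blob returned by open_with_data; reject layouts we
 * don't understand by checking the leading version word. */
static int unpack_metadata(const void *blob, size_t blob_size,
			   struct bo_metadata *md)
{
	uint32_t version;

	if (blob_size < sizeof version)
		return -1;
	memcpy(&version, blob, sizeof version);
	if (version != 1 || blob_size < sizeof *md)
		return -1;
	memcpy(md, blob, sizeof *md);
	return 0;
}
```

The version check is what makes it safe for the receiver to interpret an otherwise opaque blob that came from another process.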
Backwards compat
- a flink_to name can be opened by the old gem open, which succeeds if the name was flinked to magic 0 or to the file_priv of the opener. If the name was flinked to magic 0, the name is not transient (it's not reclaimed by opening it). The X server can then use flink_to in getbuffers, and old and new clients can still use plain gem_open to open the buffers. This requires that clients only gem_open a name once per getbuffers request, which is currently true for all dri2 drivers.
- or we could just introduce a new getbuffers request that returns flink_to names for all buffers and no metadata (requiring the ddx driver to attach the metadata to the name). this new request would also make it explicit that a name can only be opened once and that the driver must open all received names.
Oops, sorry, this got way too long...
Kristian
 drivers/gpu/drm/drm_drv.c           |    2 +
 drivers/gpu/drm/drm_gem.c           |  224 +++++++++++++++++++++++++----------
 drivers/gpu/drm/drm_info.c          |    4 +-
 drivers/gpu/drm/i915/i915_debugfs.c |   26 ++++-
 drivers/gpu/drm/i915/i915_gem.c     |    2 +-
 drivers/gpu/drm/i915/i915_irq.c     |    2 +-
 include/drm/drm.h                   |   35 ++++++
 include/drm/drmP.h                  |   20 +++-
 8 files changed, 243 insertions(+), 72 deletions(-)
diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
index 4a66201..ee58a9e 100644
--- a/drivers/gpu/drm/drm_drv.c
+++ b/drivers/gpu/drm/drm_drv.c
@@ -129,6 +129,8 @@ static struct drm_ioctl_desc drm_ioctls[] = {
 	DRM_IOCTL_DEF(DRM_IOCTL_GEM_CLOSE, drm_gem_close_ioctl, DRM_UNLOCKED),
 	DRM_IOCTL_DEF(DRM_IOCTL_GEM_FLINK, drm_gem_flink_ioctl, DRM_AUTH|DRM_UNLOCKED),
 	DRM_IOCTL_DEF(DRM_IOCTL_GEM_OPEN, drm_gem_open_ioctl, DRM_AUTH|DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_GEM_FLINK_TO, drm_gem_flink_to_ioctl, DRM_AUTH|DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_GEM_OPEN_WITH_DATA, drm_gem_open_with_data_ioctl, DRM_AUTH|DRM_UNLOCKED),

 	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETRESOURCES, drm_mode_getresources, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
 	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCRTC, drm_mode_getcrtc, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 33dad3f..8f8fd9f 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -140,6 +140,7 @@ int drm_gem_object_init(struct drm_device *dev,
 	kref_init(&obj->refcount);
 	kref_init(&obj->handlecount);
 	obj->size = size;
+	INIT_LIST_HEAD(&obj->name_list);
 	atomic_inc(&dev->object_count);
 	atomic_add(obj->size, &dev->object_memory);
@@ -293,27 +294,45 @@ drm_gem_close_ioctl(struct drm_device *dev, void *data,
 	return ret;
 }

-/**
- * Create a global name for an object, returning the name.
- *
- * Note that the name does not hold a reference; when the object
- * is freed, the name goes away.
- */
 int
-drm_gem_flink_ioctl(struct drm_device *dev, void *data,
-		    struct drm_file *file_priv)
+drm_gem_flink_to_ioctl(struct drm_device *dev, void *data,
+		       struct drm_file *file_priv)
 {
-	struct drm_gem_flink *args = data;
+	struct drm_gem_flink_to *args = data;
+	struct drm_gem_name *name, *n;
 	struct drm_gem_object *obj;
+	void __user *udata;
 	int ret;

 	if (!(dev->driver->driver_features & DRIVER_GEM))
 		return -ENODEV;

+	if (args->data_size > 2048)
+		return -EINVAL;
+
+	if (args->magic == 0 && args->data_size > 0)
+		return -EINVAL;
+
 	obj = drm_gem_object_lookup(dev, file_priv, args->handle);
 	if (obj == NULL)
 		return -EBADF;

+	name = kmalloc(sizeof *name + args->data_size, GFP_KERNEL);
+	if (name == NULL) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	name->obj = obj;
+	name->magic = args->magic;
+	name->data_size = args->data_size;
+
+	udata = (void __user *) (long) args->data;
+	if (copy_from_user(name->data, udata, args->data_size)) {
+		ret = -EFAULT;
+		goto err;
+	}
+
 again:
 	if (idr_pre_get(&dev->object_name_idr, GFP_KERNEL) == 0) {
 		ret = -ENOMEM;
@@ -321,31 +340,99 @@ again:
 	}

 	spin_lock(&dev->object_name_lock);
-	if (!obj->name) {
-		ret = idr_get_new_above(&dev->object_name_idr, obj, 1,
-					&obj->name);
-		args->name = (uint64_t) obj->name;
-		spin_unlock(&dev->object_name_lock);
-
-		if (ret == -EAGAIN)
-			goto again;
-
-		if (ret != 0)
-			goto err;
-
-		/* Allocate a reference for the name table. */
-		drm_gem_object_reference(obj);
-	} else {
-		args->name = (uint64_t) obj->name;
-		spin_unlock(&dev->object_name_lock);
-		ret = 0;
+
+	/*
+	 * Find out if there already is a name for the given magic for
+	 * this object.  If magic is 0, then we're done and will
+	 * return that name, but if magic is not 0, it's a bug and we
+	 * return -EBUSY.  Otherwise fall through and allocate a new
+	 * name in the idr.
+	 */
+	list_for_each_entry(n, &obj->name_list, list) {
+		if (n->magic == 0 && args->magic == 0) {
+			args->name = n->name;
+			ret = 0;
+			goto err_lock;
+		} else if (n->magic == args->magic) {
+			ret = -EBUSY;
+			goto err_lock;
+		}
 	}

+	ret = idr_get_new_above(&dev->object_name_idr, obj, 1, &name->name);
+	args->name = name->name;
+	if (ret == 0)
+		list_add_tail(&name->list, &obj->name_list);
+
+	spin_unlock(&dev->object_name_lock);
+
+	if (ret == -EAGAIN)
+		goto again;
+
+	if (ret != 0)
+		goto err;
+
+	/* Keep the reference for the name object. */
+
+	return 0;
+
+err_lock:
+	spin_unlock(&dev->object_name_lock);
 err:
+	kfree(name);
 	drm_gem_object_unreference_unlocked(obj);
 	return ret;
 }

+int
+drm_gem_open_with_data_ioctl(struct drm_device *dev, void *data,
+			     struct drm_file *file_priv)
+{
+	struct drm_gem_open_with_data *args = data;
+	struct drm_gem_name *name;
+	void __user *udata;
+	int ret = 0;
+
+	if (!(dev->driver->driver_features & DRIVER_GEM))
+		return -ENODEV;
+
+	spin_lock(&dev->object_name_lock);
+	name = idr_find(&dev->object_name_idr, (int) args->name);
+	if (name && file_priv->magic == name->magic)
+		list_del(&name->list);
+	if (name && name->magic != 0)
+		name = NULL;
+	spin_unlock(&dev->object_name_lock);
+	if (!name)
+		return -ENOENT;
+
+	/*
+	 * We now have a name object for either the global (magic 0)
+	 * name which stays around as long as the bo, or a name
+	 * object that we just took out of the list and will free
+	 * below.  In other words, we know we can access it outside the
+	 * object_name_lock.
+	 */
+
+	udata = (void __user *) (unsigned long) args->data;
+	if (copy_to_user(udata, name->data,
+			 min(args->data_size, name->data_size))) {
+		ret = -EFAULT;
+		goto err;
+	}
+
+	ret = drm_gem_handle_create(file_priv, name->obj, &args->handle);
+	args->data_size = name->data_size;
+
+ err:
+	if (name->magic != 0) {
+		drm_gem_object_unreference_unlocked(name->obj);
+		kfree(name);
+	}
+
+	return ret;
+}
+
 /**
  * Open an object using the global name, returning a handle and the size.
  *
@@ -357,30 +444,52 @@ drm_gem_open_ioctl(struct drm_device *dev, void *data,
 		   struct drm_file *file_priv)
 {
 	struct drm_gem_open *args = data;
+	struct drm_gem_open_with_data open_with_data;
 	struct drm_gem_object *obj;
 	int ret;
-	u32 handle;
-	if (!(dev->driver->driver_features & DRIVER_GEM))
-		return -ENODEV;
+	open_with_data.name = args->name;
+	open_with_data.data = 0;
+	open_with_data.data_size = 0;

-	spin_lock(&dev->object_name_lock);
-	obj = idr_find(&dev->object_name_idr, (int) args->name);
-	if (obj)
-		drm_gem_object_reference(obj);
-	spin_unlock(&dev->object_name_lock);
-	if (!obj)
-		return -ENOENT;
+	ret = drm_gem_open_with_data_ioctl(dev, &open_with_data, file_priv);

-	ret = drm_gem_handle_create(file_priv, obj, &handle);
-	drm_gem_object_unreference_unlocked(obj);
-	if (ret)
-		return ret;
+	if (ret == 0) {
+		args->handle = open_with_data.handle;
+		obj = drm_gem_object_lookup(dev, file_priv, args->handle);
+		BUG_ON(!obj);
+		args->size = obj->size;
+		drm_gem_object_unreference_unlocked(obj);
+	}

-	args->handle = handle;
-	args->size = obj->size;
+	return ret;
+}

-	return 0;
+/**
+ * Create a global name for an object, returning the name.
+ *
+ * Note that the name does not hold a reference; when the object
+ * is freed, the name goes away.
+ */
+int
+drm_gem_flink_ioctl(struct drm_device *dev, void *data,
+		    struct drm_file *file_priv)
+{
+	struct drm_gem_flink *args = data;
+	struct drm_gem_flink_to flink_to;
+	int ret;
+
+	flink_to.handle = args->handle;
+	flink_to.magic = 0;
+	flink_to.data = 0;
+	flink_to.data_size = 0;
+
+	ret = drm_gem_flink_to_ioctl(dev, &flink_to, file_priv);
+
+	if (ret == 0)
+		args->name = flink_to.name;
+
+	return ret;
 }

 /**
@@ -488,27 +597,20 @@ static void drm_gem_object_ref_bug(struct kref *list_kref)
 void
 drm_gem_object_handle_free(struct kref *kref)
 {
-	struct drm_gem_object *obj = container_of(kref,
-						  struct drm_gem_object,
-						  handlecount);
+	struct drm_gem_object *obj =
+		container_of(kref, struct drm_gem_object, handlecount);
 	struct drm_device *dev = obj->dev;
+	struct drm_gem_name *n, *tmp;

-	/* Remove any name for this object */
+	/* Remove all names for this object */
 	spin_lock(&dev->object_name_lock);
-	if (obj->name) {
-		idr_remove(&dev->object_name_idr, obj->name);
-		obj->name = 0;
-		spin_unlock(&dev->object_name_lock);
-		/*
-		 * The object name held a reference to this object, drop
-		 * that now.
-		 *
-		 * This cannot be the last reference, since the handle holds one too.
-		 */
+	list_for_each_entry_safe(n, tmp, &obj->name_list, list) {
+		idr_remove(&dev->object_name_idr, n->name);
 		kref_put(&obj->refcount, drm_gem_object_ref_bug);
-	} else
-		spin_unlock(&dev->object_name_lock);
-
+		list_del(&n->list);
+		kfree(n);
+	}
+	spin_unlock(&dev->object_name_lock);
 }
 EXPORT_SYMBOL(drm_gem_object_handle_free);
diff --git a/drivers/gpu/drm/drm_info.c b/drivers/gpu/drm/drm_info.c
index f0f6c6b..9abe00d 100644
--- a/drivers/gpu/drm/drm_info.c
+++ b/drivers/gpu/drm/drm_info.c
@@ -240,10 +240,10 @@ int drm_gem_one_name_info(int id, void *ptr, void *data)
 	struct drm_gem_object *obj = ptr;
 	struct seq_file *m = data;

-	seq_printf(m, "name %d size %zd\n", obj->name, obj->size);
+	seq_printf(m, "name %d size %zd\n", id, obj->size);

 	seq_printf(m, "%6d %8zd %7d %8d\n",
-		   obj->name, obj->size,
+		   id, obj->size,
 		   atomic_read(&obj->handlecount.refcount),
 		   atomic_read(&obj->refcount.refcount));
 	return 0;
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 52510ad..cbc1851 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -63,6 +63,25 @@ static const char *get_tiling_flag(struct drm_i915_gem_object *obj_priv)
 	}
 }

+
+static void print_object_names(struct seq_file *m,
+			       struct drm_device *dev,
+			       struct drm_gem_object *obj)
+{
+	struct drm_gem_name *n;
+
+	spin_lock(&dev->object_name_lock);
+
+	if (!list_empty(&obj->name_list)) {
+		seq_printf(m, " (names:");
+		list_for_each_entry(n, &obj->name_list, list)
+			seq_printf(m, " %d", n->name);
+		seq_printf(m, ")");
+	}
+
+	spin_unlock(&dev->object_name_lock);
+}
+
 static int i915_gem_object_list_info(struct seq_file *m, void *data)
 {
 	struct drm_info_node *node = (struct drm_info_node *) m->private;
@@ -106,8 +125,8 @@ static int i915_gem_object_list_info(struct seq_file *m, void *data)
 			   obj_priv->dirty ? " dirty" : "",
 			   obj_priv->madv == I915_MADV_DONTNEED ? " purgeable" : "");

-		if (obj_priv->base.name)
-			seq_printf(m, " (name: %d)", obj_priv->base.name);
+		print_object_names(m, dev, &obj_priv->base);
+
 		if (obj_priv->fence_reg != I915_FENCE_REG_NONE)
 			seq_printf(m, " (fence: %d)", obj_priv->fence_reg);
 		if (obj_priv->gtt_space != NULL)
@@ -235,8 +254,7 @@ static int i915_gem_fence_regs_info(struct seq_file *m, void *data)
 				   get_tiling_flag(obj_priv),
 				   obj->read_domains, obj->write_domain,
 				   obj_priv->last_rendering_seqno);
-			if (obj->name)
-				seq_printf(m, " (name: %d)", obj->name);
+			print_object_names(m, dev, obj);
 			seq_printf(m, "\n");
 		}
 	}
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 9ded3da..dac0f6d 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1225,7 +1225,7 @@ i915_gem_create_mmap_offset(struct drm_gem_object *obj)
 	list->file_offset_node = drm_mm_search_free(&mm->offset_manager,
 						    obj->size / PAGE_SIZE, 0, 0);
 	if (!list->file_offset_node) {
-		DRM_ERROR("failed to allocate offset for bo %d\n", obj->name);
+		DRM_ERROR("failed to allocate offset for bo %p\n", obj);
 		ret = -ENOMEM;
 		goto out_free_list;
 	}
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 2479be0..cfcca51 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -644,7 +644,7 @@ static void i915_capture_error_state(struct drm_device *dev)
 		struct drm_gem_object *obj = &obj_priv->base;

 		error->active_bo[i].size = obj->size;
-		error->active_bo[i].name = obj->name;
+		error->active_bo[i].name = 0;
 		error->active_bo[i].seqno = obj_priv->last_rendering_seqno;
 		error->active_bo[i].gtt_offset = obj_priv->gtt_offset;
 		error->active_bo[i].read_domains = obj->read_domains;
diff --git a/include/drm/drm.h b/include/drm/drm.h
index e3f46e0..cb3f938 100644
--- a/include/drm/drm.h
+++ b/include/drm/drm.h
@@ -608,6 +608,39 @@ struct drm_gem_open {
 	__u64 size;
 };
+struct drm_gem_flink_to {
+	/** Handle for the object being named */
+	__u32 handle;
+
+	/** Magic for destination file_priv */
+	__u32 magic;
+
+	/** Data to attach to name */
+	__u64 data;
+
+	/** Size of data, max is 2k bytes */
+	__u32 data_size;
+
+	/** Returned name for the object */
+	__u32 name;
+};
+
+struct drm_gem_open_with_data {
+	/** Name of object being opened */
+	__u32 name;
+
+	/** Returned handle for the object */
+	__u32 handle;
+
+	/** Pointer to userspace buffer for data */
+	__u64 data;
+
+	/** Size of userspace buffer, returned size of the object */
+	__u32 data_size;
+
+	__u32 pad;
+};
+
 #include "drm_mode.h"

 #define DRM_IOCTL_BASE			'd'
@@ -628,6 +661,8 @@ struct drm_gem_open {
 #define DRM_IOCTL_GEM_CLOSE		DRM_IOW (0x09, struct drm_gem_close)
 #define DRM_IOCTL_GEM_FLINK		DRM_IOWR(0x0a, struct drm_gem_flink)
 #define DRM_IOCTL_GEM_OPEN		DRM_IOWR(0x0b, struct drm_gem_open)
+#define DRM_IOCTL_GEM_FLINK_TO		DRM_IOWR(0x0c, struct drm_gem_flink_to)
+#define DRM_IOCTL_GEM_OPEN_WITH_DATA	DRM_IOWR(0x0d, struct drm_gem_open_with_data)

 #define DRM_IOCTL_SET_UNIQUE		DRM_IOW( 0x10, struct drm_unique)
 #define DRM_IOCTL_AUTH_MAGIC		DRM_IOW( 0x11, struct drm_auth)
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index c1b9871..f60b35c 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -597,6 +597,15 @@ struct drm_gem_mm {
 	struct drm_open_hash offset_hash;	/**< User token hash table for maps */
 };

+struct drm_gem_name {
+	struct drm_gem_object *obj;
+	drm_magic_t magic;
+	struct list_head list;
+	u32 name;
+	u32 data_size;
+	char data[0];
+};
+
 /**
  * This structure defines the drm_mm memory object, which will be used by the
  * DRM for its buffer objects.
@@ -624,10 +633,11 @@ struct drm_gem_object {
 	size_t size;

 	/**
-	 * Global name for this object, starts at 1. 0 means unnamed.
-	 * Access is covered by the object_name_lock in the related drm_device
+	 * Global names for this object, starts at 1. Empty list means
+	 * unnamed. Access is covered by the object_name_lock in the
+	 * related drm_device
 	 */
-	int name;
+	struct list_head name_list;

 	/**
 	 * Memory domains. These monitor which caches contain read/write data
@@ -1510,6 +1520,10 @@ int drm_gem_flink_ioctl(struct drm_device *dev, void *data,
 			struct drm_file *file_priv);
 int drm_gem_open_ioctl(struct drm_device *dev, void *data,
 		       struct drm_file *file_priv);
+int drm_gem_flink_to_ioctl(struct drm_device *dev, void *data,
+			   struct drm_file *file_priv);
+int drm_gem_open_with_data_ioctl(struct drm_device *dev, void *data,
+				 struct drm_file *file_priv);
 void drm_gem_open(struct drm_device *dev, struct drm_file *file_private);
 void drm_gem_release(struct drm_device *dev, struct drm_file *file_private);
On Thu, 8 Jul 2010 11:23:25 -0400, Kristian Høgsberg krh@bitplanet.net wrote:
- a mechanism to attach a binary blob to an flink_to buffer name. open_with_data returns the data. Userspace (typically libdrm) decides the layout and versioning of the blob and the contents will be chipset specific. it's an opaque blob to the kernel, which doesn't need to know about stride and formats etc.
Arbitrary binary blobs considered harmful? Even if the kernel doesn't need to know all of this data, having it in an explicit (versioned) format will protect applications from randomly mis-interpreting the data.
On Thu, Jul 8, 2010 at 11:59 AM, Keith Packard keithp@keithp.com wrote:
On Thu, 8 Jul 2010 11:23:25 -0400, Kristian Høgsberg krh@bitplanet.net wrote:
- a mechanism to attach a binary blob to an flink_to buffer name. open_with_data returns the data. Userspace (typically libdrm) decides the layout and versioning of the blob and the contents will be chipset specific. it's an opaque blob to the kernel, which doesn't need to know about stride and formats etc.
Arbitrary binary blobs considered harmful? Even if the kernel doesn't need to know all of this data, having it in an explicit (versioned) format will protect applications from randomly mis-interpreting the data.
I talked with ickle about that and whether or not to include a version+format u32 for the data in the ioctl args. He convinced me that the kernel didn't need to know about the layout of the blob and that requiring by convention that the first u32 of the blob is the version+format u32 would suffice. I can go either way on this, but I guess I have a small preference for making it part of the ioctl args as you suggest.
Kristian
On Thu, 8 Jul 2010 12:14:28 -0400, Kristian Høgsberg krh@bitplanet.net wrote:
I talked with ickle about that and whether or not to include a version+format u32 for the data in the ioctl args. He convinced me that the kernel didn't need to know about the layout of the blob and that requiring by convention that the first u32 of the blob is the version+format u32 would suffice. I can go either way on this, but I guess I have a small preference for making it part of the ioctl args as you suggest.
I am not going to argue with someone who has been tackling the issue of protocol extensions for 25 years... ;-)
My argument was based on the current system being designed as a directory of opaque objects, so the extended attributes should be kept opaque to the kernel as well and left open to interpretation by userland. What I am most unclear about is under which circumstances this backchannel communication is preferable to passing the extra information over the IPC that needs to be performed anyway in order to open a surface. -ickle
On Thu, 08 Jul 2010 17:37:20 +0100 Chris Wilson chris@chris-wilson.co.uk wrote:
My argument was based around that the current system is designed as a directory of opaque objects and so the extended attributes should be kept opaque to the kernel as well and left open to interpretation by userland. What I am most unclear about is under which circumstances is this backchannel communication preferable to passing the extra information over the IPC that needs to be performed anyway in order to open a surface.
That's the part I had trouble with as well. Passing the blob through the kernel saves a little IPC but also seems unnecessary, and so rubs against my kernel minimalist side...
My argument was based around that the current system is designed as a directory of opaque objects and so the extended attributes should be kept opaque to the kernel as well and left open to interpretation by userland. What I am most unclear about is under which circumstances is this backchannel communication preferable to passing the extra information over the IPC that needs to be performed anyway in order to open a surface.
That's the part I had trouble with as well. Passing the blob through the kernel saves a little IPC but also seems unnecessary, and so rubs against my kernel minimalist side...
Passing the blob through the kernel does give you a basis for more complex security since you can agree a blob format with the kernel security layer and add security hooks to the interface in future.
On Thu, 8 Jul 2010 09:49:26 -0700, Jesse Barnes jbarnes@virtuousgeek.org wrote:
That's the part I had trouble with as well. Passing the blob through the kernel saves a little IPC but also seems unnecessary, and so rubs against my kernel minimalist side...
Yeah, if the kernel doesn't need to know it, why is the kernel storing it? What options do we have here?
On Thu, Jul 8, 2010 at 12:49 PM, Jesse Barnes jbarnes@virtuousgeek.org wrote:
On Thu, 08 Jul 2010 17:37:20 +0100 Chris Wilson chris@chris-wilson.co.uk wrote:
My argument was based around that the current system is designed as a directory of opaque objects and so the extended attributes should be kept opaque to the kernel as well and left open to interpretation by userland. What I am most unclear about is under which circumstances is this backchannel communication preferable to passing the extra information over the IPC that needs to be performed anyway in order to open a surface.
That's the part I had trouble with as well. Passing the blob through the kernel saves a little IPC but also seems unnecessary, and so rubs against my kernel minimalist side...
There's nothing in the use cases I have in mind that strictly requires passing buffers around as integer IDs. We could standardize a serialization mechanism in libdrm that would give you a blob containing the bo name and the metadata, and you would then send that over the protocol when you need to share a buffer. Attaching a blob to the name instead and passing just the uint32_t name around makes life a little easier, though, in the EGL API and in the protocol code. It is IPC-ish, I guess, but in the same sense that xattrs are, for example.
The other point is that we already do this to some extent, with the object size (returned by open) and the tiling and swizzle parameters (communicated through the set and get ioctls). They're somewhat special in that the kernel also needs to know these values, but we do use the kernel to communicate them between processes. There's no reason gem_open needs to return the size in the ioctl args, and the intel get_tiling ioctl is only required because we don't pass the tiling metadata over dri2 protocol (it's a chipset specific thing after all). The flink_to blob approach combines the convenience of getting the data at gem_open time with the option to pass free-form chipset specific metadata.
In the work on the EGL extension, the other Khronos members have indicated that sharing buffers by passing an integer could work for their implementations too, and that's what the draft standard currently requires. I could try to get that changed to a size+bytearray couple, but then the "pass the blob" API makes it all the way into user APIs. We could implement the integer ID in userspace by creating a file in a shared directory by the name of the integer handle and serialize meta data into that. I'm not fond of either approach, but they're possible.
Kristian
2010/7/8 Kristian Høgsberg krh@bitplanet.net: ...
In the work on the EGL extension, the other Khronos members have indicated that sharing buffers by passing an integer could work for their implementations too, and that's what the draft standard currently requires. I could try to get that changed to a size+bytearray couple, but then the "pass the blob" API makes it all the way into user APIs. We could implement the integer ID in userspace by creating a file in a shared directory by the name of the integer handle and serialize meta data into that. I'm not fond of either approach, but they're possible.
Chris was asking for details about this extensions, and I thought I'd post it here too to give a better idea of what it might look like. We're looking at two new entry points:
EGLDisplayNameKHR eglGetDisplayNameKHR(EGLDisplay dpy);
EGLBoolean eglExportImageKHR(EGLDisplay dpy, EGLImageKHR image, EGLDisplayNameKHR target_dpy, EGLImageNameKHR *handle);
with
typedef khronos_uint32_t EGLDisplayNameKHR;
typedef khronos_uint32_t EGLImageNameKHR;
The process that wants to receive a shared EGLImage must call eglGetDisplayNameKHR() to get a global name for the EGLDisplay handle it wants to import the EGLImage into, and send that to the process sharing the image. The process sharing the EGLImage then calls eglExportImageKHR() and gets an integer handle for the EGLImage, which it sends back to the receiving process; the receiver passes it as the EGLClientBuffer argument to eglCreateImage(). At that point, the receiving process can use the EGLImage as usual.
As I said above, nothing here requires using integer names, but it makes an application level API easier to use and does fit in rather nicely with how GL otherwise uses integer names for various objects.
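The call sequence can be sketched in C with stubbed-out stand-ins for the draft entry points. Only the prototypes follow the draft above; the stub bodies, the fixed return values, and `share_image` itself are made up for illustration.

```c
#include <assert.h>
#include <stdint.h>

/* Toy stand-ins for the draft extension types; real definitions would
 * come from the EGL headers. */
typedef uint32_t EGLDisplayNameKHR;
typedef uint32_t EGLImageNameKHR;
typedef int EGLBoolean;
typedef void *EGLDisplay;
typedef void *EGLImageKHR;

/* Stub: return a global name for a display (made-up value). */
static EGLDisplayNameKHR eglGetDisplayNameKHR(EGLDisplay dpy)
{
	(void) dpy;
	return 42;
}

/* Stub: export an image to the target display, yielding the integer
 * name the receiver passes to eglCreateImage (made-up value). */
static EGLBoolean eglExportImageKHR(EGLDisplay dpy, EGLImageKHR image,
				    EGLDisplayNameKHR target_dpy,
				    EGLImageNameKHR *handle)
{
	(void) dpy; (void) image; (void) target_dpy;
	*handle = 7;
	return 1;
}

/* Sharing side: given the receiver's display, export the image and
 * return the name to send back over whatever IPC the app already has. */
static EGLImageNameKHR share_image(EGLDisplay sharer_dpy, EGLImageKHR image,
				   EGLDisplay receiver_dpy)
{
	EGLDisplayNameKHR target = eglGetDisplayNameKHR(receiver_dpy);
	EGLImageNameKHR name = 0;

	assert(eglExportImageKHR(sharer_dpy, image, target, &name));
	return name;
}
```

In a real implementation the receiver would first obtain its own display name and send it to the sharer; the sketch collapses that exchange into a single call for brevity.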
Kristian
dri-devel@lists.freedesktop.org