Hi
On Wed, Mar 26, 2014 at 9:40 PM, Thomas Hellstrom thellstrom@vmware.com wrote:
- Why not add a spin-lock to "drm_file" instead? Use that one to
manage master contexts, but keep "struct_mutex" held whenever calling into the driver callbacks (set_master/drop_master)?
See above. We can't put the lock in the drm_file structure, since what it protects is drm_minor data. Also, while it might be possible to restructure some code to use spinlocks instead of mutexes, I see no reason to. The established locking order is now master_mutex -> ttm_lock -> struct_mutex; since sleeping locks are taken inside it, master_mutex must itself be a mutex rather than a spinlock.
Thanks, that order really helps in understanding what these locks do. More, actually, than the commit message ;) It also shows how awful struct_mutex is: using it both for data protection and for execution synchronization is really weird. So I'm all for doing more fine-grained locking if it's as simple as with the master stuff.
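Just so I have it written down, a minimal sketch of the nesting order this establishes (the vmwgfx side is my assumption: ttm_read_lock() as the ttm_lock entry point and "vmaster" as a placeholder name, neither taken from the actual patch):

	/* Sketch only: the acquisition order master_mutex -> ttm_lock -> struct_mutex.
	 * Any path taking more than one of these must nest them like this,
	 * never the other way around, or lockdep will rightly complain. */
	mutex_lock(&dev->master_mutex);		/* outermost: master create/drop/auth */
	ttm_read_lock(&vmaster->lock, false);	/* vmwgfx ttm_lock nests inside it */
	mutex_lock(&dev->struct_mutex);		/* innermost: legacy data protection */

	/* ... critical section ... */

	mutex_unlock(&dev->struct_mutex);
	ttm_read_unlock(&vmaster->lock);
	mutex_unlock(&dev->master_mutex);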
- Why is master_mutex per device and not per minor? I thought masters
on minors are _entirely_ independent?
Because currently there is only one master-capable minor per device, so the two would be equivalent. And even if there were more, there is no reason to expect any contention, so a single master_mutex would be fine.
Fair enough.
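For reference, roughly how I picture the placement after this patch (a sketch; apart from master_mutex the field names are from memory, not copied out of the patch):

	struct drm_minor {
		struct drm_master *master;	/* currently active master on this minor */
		/* ... */
	};

	struct drm_device {
		struct mutex master_mutex;	/* new: protects drm_minor::master and authentication */
		struct mutex struct_mutex;	/* legacy lock, protects lots of unrelated state */
		/* ... */
	};

So the data sits per-minor while the lock sits per-device, which is fine as long as only one minor per device is master-capable.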
diff --git a/drivers/gpu/drm/drm_fops.c b/drivers/gpu/drm/drm_fops.c
index e6cdd0f..dad571f 100644
--- a/drivers/gpu/drm/drm_fops.c
+++ b/drivers/gpu/drm/drm_fops.c
@@ -231,12 +231,13 @@ static int drm_open_helper(struct inode *inode, struct file *filp,
 
 	/* if there is no current master make this fd it, but do not create
 	 * any master object for render clients */
-	mutex_lock(&dev->struct_mutex);
+	mutex_lock(&dev->master_mutex);
 	if (drm_is_legacy_client(priv) && !priv->minor->master) {
 		/* create a new master */
+		mutex_lock(&dev->struct_mutex);
 		priv->minor->master = drm_master_create(priv->minor);
+		mutex_unlock(&dev->struct_mutex);
 		if (!priv->minor->master) {
-			mutex_unlock(&dev->struct_mutex);
 			ret = -ENOMEM;
 			goto out_close;
 		}
@@ -245,28 +246,25 @@ static int drm_open_helper(struct inode *inode, struct file *filp,
 		/* take another reference for the copy in the local file priv */
 		priv->master = drm_master_get(priv->minor->master);
+		mutex_lock(&dev->struct_mutex);
 		priv->authenticated = 1;
+		mutex_unlock(&dev->struct_mutex);
What's that struct_mutex doing here? We're in ->open(), there is no-one racing against us.
Well, it was held at that point before, and the purpose of this patch is not to generally clean up struct_mutex usage.
Well, it now looks like this:
	mutex_lock(&dev->struct_mutex);
	priv->authenticated = 1;
	mutex_unlock(&dev->struct_mutex);
Which looks so _wrong_ that I thought we should fix it right away. But ok, I'm not going to force you to do so.
Quick lock-review:
 * drm_authmagic() uses drm_global_mutex to protect setting priv->authenticated (racing against us..)
 * The current context is ->open(), so no one else has access to "priv". We haven't even linked it into dev->filelist yet, but we did call into the driver, which might have done anything..
 * The drm fops read ->authenticated _unlocked_, so there is no reason at all to do explicit locking around a _single_ write (which is only needed in very rare cases, anyway).
 * It is never set to anything but 1; a _single_ barrier after setting it should be enough.
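Roughly what I have in mind for that cleanup (a sketch, not a tested patch; the exact barrier placement is my assumption):

	/* Sketch: drop the struct_mutex pair around the single write and rely
	 * on ordering instead; readers look at ->authenticated unlocked anyway. */
	priv->authenticated = 1;
	smp_wmb();	/* make the flag visible before priv becomes reachable */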
In case you don't want to incorporate that, I will send a cleanup.
Would be nice to have the mutex locking in drm-next to get some testing. v2 looks good; I haven't done a thorough locking review, though. But I guess you did, so feel free to include it in your pull.

Thanks
David