On 21.04.2012 17:57, Dave Airlie wrote:
2012/4/21 Jerome Glisse <j.glisse@gmail.com>:
2012/4/21 Christian König <deathsimple@vodafone.de>:
On 21.04.2012 16:08, Jerome Glisse wrote:
2012/4/21 Christian König <deathsimple@vodafone.de>:
Interesting, I'm pretty sure that I haven't touched the locking order of the cs_mutex vs. vm_mutex.
Maybe it is just some kind of side effect; going to look into it anyway.
Christian.
It's the usage: the init path takes the locks in a different order than the CS path.
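To make the inversion concrete, here is a minimal userspace sketch of the ABBA pattern being described (pthread mutexes as stand-ins for the kernel ones; the two functions are invented illustrations of the CS path and the init path, not the actual radeon code):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t cs_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t vm_mutex = PTHREAD_MUTEX_INITIALIZER;

/* CS path: cs_mutex first, then the VM mutex. */
static void *cs_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&cs_mutex);
    pthread_mutex_lock(&vm_mutex);   /* blocks if init path holds vm_mutex */
    /* ... emit command stream ... */
    pthread_mutex_unlock(&vm_mutex);
    pthread_mutex_unlock(&cs_mutex);
    return NULL;
}

/* Init path: VM mutex first, then cs_mutex -- the reverse order. */
static void *init_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&vm_mutex);
    pthread_mutex_lock(&cs_mutex);   /* blocks if CS path holds cs_mutex */
    /* ... set up page tables ... */
    pthread_mutex_unlock(&cs_mutex);
    pthread_mutex_unlock(&vm_mutex);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, cs_path, NULL);
    pthread_create(&b, NULL, init_path, NULL);
    pthread_join(a, NULL);   /* may never return: classic ABBA deadlock */
    pthread_join(b, NULL);
    puts("no deadlock this run");
    return 0;
}

Run it in a loop and it will eventually hang, each thread holding one mutex and waiting on the other.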
Well, could you explain to me why the vm code takes the cs_mutex in the first place?
It clearly has its own mutex, and it doesn't look like it deals with any CS-related data anyway.
Christian.
Lock simplification is on my todo list. The issue is that the vm manager is protected by the cs_mutex, while vm.mutex is specific to each vm and doesn't protect the global vm management. I didn't want to introduce a new global vm mutex, since vm activity is mostly triggered on behalf of cs, so I decided to use the cs_mutex.
That's why the non-cs path of the vm code needs to take the cs_mutex.
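Schematically, that scheme (and the resulting lock order in the non-cs path) looks like the sketch below; struct and function names are illustrative, not the real radeon code:

#include <pthread.h>

static pthread_mutex_t cs_mutex = PTHREAD_MUTEX_INITIALIZER;

struct vm {
    pthread_mutex_t mutex;   /* per-vm: covers only this vm's own state */
    /* page directory, mapped BOs, ... */
};

struct vm_manager {
    /* No lock of its own: global state (page-table space, vm ids, ...)
     * is protected by cs_mutex instead. */
    struct vm *active;
};

/* A non-cs path such as adding a BO to a vm therefore takes the per-vm
 * mutex first and cs_mutex second -- the reverse of the CS path, which
 * enters with cs_mutex already held. */
static void vm_bo_add(struct vm *vm)
{
    pthread_mutex_lock(&vm->mutex);
    pthread_mutex_lock(&cs_mutex);
    /* ... allocate page-table space, update the global manager ... */
    pthread_mutex_unlock(&cs_mutex);
    pthread_mutex_unlock(&vm->mutex);
}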
So if one app is adding a BO, and another is doing CS, isn't deadlock a real possibility?
Yeah, I think so.
I expect the VM code needs to take the CS mutex earlier then.
I would strongly suggest giving the vm code its own global mutex and removing the per-vm mutex, because the latter is pretty superfluous if the cs_mutex is also taken most of the time.
The attached patch is against drm-fixes and does exactly that.
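For illustration only, the shape of that change is roughly the following sketch (names are invented here; the actual change is the attached patch):

#include <pthread.h>

static pthread_mutex_t cs_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t vm_mutex = PTHREAD_MUTEX_INITIALIZER;  /* global, not per-vm */

struct vm {
    /* per-vm mutex removed; the global vm_mutex protects this now */
    /* page directory, mapped BOs, ... */
    int id;
};

/* Init/BO-add path: only the global vm mutex is needed. */
static void vm_bo_add(struct vm *vm)
{
    (void)vm;
    pthread_mutex_lock(&vm_mutex);
    /* ... update per-vm and global manager state ... */
    pthread_mutex_unlock(&vm_mutex);
}

/* CS path: cs_mutex first, then vm_mutex -- one consistent order
 * everywhere, so the ABBA inversion can no longer occur. */
static void cs_path(struct vm *vm)
{
    (void)vm;
    pthread_mutex_lock(&cs_mutex);
    pthread_mutex_lock(&vm_mutex);
    /* ... bind vm, emit command stream ... */
    pthread_mutex_unlock(&vm_mutex);
    pthread_mutex_unlock(&cs_mutex);
}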
Christian.