On 8/22/19 3:36 PM, Daniel Vetter wrote:
On Thu, Aug 22, 2019 at 3:30 PM Thomas Hellström (VMware) thomas_os@shipmail.org wrote:
On 8/22/19 3:07 PM, Daniel Vetter wrote:
Full audit of everyone:
i915, radeon, amdgpu should be clean per their maintainers.
vram helpers should be fine; they don't do command submission, so they have no business holding struct_mutex while doing copy_*_user in the first place. But I haven't checked them all.
panfrost seems to dma_resv_lock only in panfrost_job_push, which looks clean.
v3d holds dma_resv locks in the tail of its v3d_submit_cl_ioctl(), copying from/to userspace happens all in v3d_lookup_bos which is outside of the critical section.
vmwgfx has a bunch of ioctls that do their own copy_*_user:
- vmw_execbuf_process: First this does some copies in vmw_execbuf_cmdbuf() and also in the vmw_execbuf_process() itself. Then comes the usual ttm reserve/validate sequence, then actual submission/fencing, then unreserving, and finally some more copy_to_user in vmw_execbuf_copy_fence_user. Glossing over tons of details, but looks all safe.
- vmw_fence_event_ioctl: No ttm_reserve/dma_resv_lock anywhere to be seen, seems to only create a fence and copy it out.
- a pile of smaller ioctls in vmwgfx_ioctl.c; no reservations to be found there.
Summary: vmwgfx seems to be fine too.
virtio: There's virtio_gpu_execbuffer_ioctl, which does all the copying from userspace before even looking up objects through their handles, so safe. Plus the getparam/getcaps ioctl, also both safe.
qxl only has qxl_execbuffer_ioctl, which calls into qxl_process_single_command. There's a lovely comment before the __copy_from_user_inatomic that the slowpath should be copied from i915, but I guess that never happened. Try not to be unlucky and get your CS data evicted between when it's written and when the kernel tries to read it. The only other copy_from_user is for relocs, but those are done before qxl_release_reserve_list(), which seems to be the only thing reserving buffers (in the ttm/dma_resv sense) in that code. So it looks safe.
A debugfs file in nouveau_debugfs_pstate_set() and the usif ioctl in usif_ioctl() look safe. nouveau_gem_ioctl_pushbuf() otoh breaks this everywhere and needs to be fixed up.
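The safe shape that the clean drivers above share can be sketched as a userspace analogue. Everything here is an illustrative stand-in, not any real driver's code: `resv_locked` models dma_resv_lock(), `fake_copy_from_user()` models copy_from_user(), and `submit_ioctl()` is an invented name. The point is the ordering: all user copies happen before the reservation is taken, and the critical section only touches already-copied kernel memory.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Userspace analogue of the safe ioctl shape. resv_locked stands in
 * for dma_resv_lock(); fake_copy_from_user() for copy_from_user(). */
static int resv_locked;

static void resv_lock(void)   { assert(!resv_locked); resv_locked = 1; }
static void resv_unlock(void) { assert(resv_locked);  resv_locked = 0; }

/* The real copy_from_user() can fault and sleep (taking mmap_sem),
 * which is exactly why it must not run under the reservation. */
static int fake_copy_from_user(void *dst, const void *src, size_t n)
{
	assert(!resv_locked);   /* would be a lockdep splat in the kernel */
	memcpy(dst, src, n);
	return 0;
}

/* Safe shape: copy in -> lock -> submit from kernel memory -> unlock. */
static int submit_ioctl(const char *user_cmds, size_t n, char *hw_ring)
{
	char kbuf[64];

	if (n > sizeof(kbuf))
		return -1;
	if (fake_copy_from_user(kbuf, user_cmds, n))   /* no lock held */
		return -1;

	resv_lock();
	memcpy(hw_ring, kbuf, n);   /* "submission": kernel memory only */
	resv_unlock();
	return 0;
}
```

The qxl fastpath/slowpath comment is about the same shape: if a copy must happen under the lock, it has to be a non-faulting one with a fallback that drops the lock, retries the copy, and restarts.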
v2: Thomas pointed out that vmwgfx calls dma_resv_init while it holds a dma_resv lock of a different object already. Christian mentioned that ttm core does this too for ghost objects. intel-gfx-ci highlighted that i915 has similar issues.
Unfortunately we can't do this in the usual module init functions, because kernel threads don't have an ->mm - we have to wait around for some user thread to do this.
Solution is to spawn a worker (but only once). It's horrible, but it works.
v3: We can allocate mm! (Chris). Horrible worker hack out, clean initcall solution in.
v4: Annotate with __init (Rob Herring)
Cc: Rob Herring robh@kernel.org
Cc: Alex Deucher alexander.deucher@amd.com
Cc: Christian König christian.koenig@amd.com
Cc: Chris Wilson chris@chris-wilson.co.uk
Cc: Thomas Zimmermann tzimmermann@suse.de
Cc: Tomeu Vizoso tomeu.vizoso@collabora.com
Cc: Eric Anholt eric@anholt.net
Cc: Dave Airlie airlied@redhat.com
Cc: Gerd Hoffmann kraxel@redhat.com
Cc: Ben Skeggs bskeggs@redhat.com
Cc: "VMware Graphics" linux-graphics-maintainer@vmware.com
Cc: Thomas Hellstrom thellstrom@vmware.com
Reviewed-by: Christian König christian.koenig@amd.com
Reviewed-by: Chris Wilson chris@chris-wilson.co.uk
Tested-by: Chris Wilson chris@chris-wilson.co.uk
Signed-off-by: Daniel Vetter daniel.vetter@intel.com
 drivers/dma-buf/dma-resv.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 42a8f3f11681..97c4c4812d08 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -34,6 +34,7 @@

 #include <linux/dma-resv.h>
 #include <linux/export.h>
+#include <linux/sched/mm.h>

 /**
  * DOC: Reservation Object Overview
@@ -95,6 +96,29 @@ static void dma_resv_list_free(struct dma_resv_list *list)
 	kfree_rcu(list, rcu);
 }
+#if IS_ENABLED(CONFIG_LOCKDEP)
+static void __init dma_resv_lockdep(void)
+{
+	struct mm_struct *mm = mm_alloc();
+	struct dma_resv obj;
+
+	if (!mm)
+		return;
+
+	dma_resv_init(&obj);
+
+	down_read(&mm->mmap_sem);
I took a quick look into using lockdep macros in place of the actual locks, something along the lines of:

lock_acquire(&mm->mmap_sem.dep_map, 0, 0, 1, 1, NULL, _THIS_IP_);
Yeah, I'm not a fan of the magic numbers this needs :-/ And since this now runs once at startup, taking the fake locks for real, once, shouldn't hurt. Lockdep updating its data structures is going to be 100x more cpu cycles anyway :-)
+	ww_mutex_lock(&obj.lock, NULL);

lock_acquire(&obj.lock.dep_map, 0, 0, 0, 1, NULL, _THIS_IP_);

+	fs_reclaim_acquire(GFP_KERNEL);
+	fs_reclaim_release(GFP_KERNEL);
+	ww_mutex_unlock(&obj.lock);

lock_release(&obj.lock.dep_map, 0, _THIS_IP_);

+	up_read(&mm->mmap_sem);

lock_release(&mm->mmap_sem.dep_map, 0, _THIS_IP_);
Either way is fine with me, though.
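The reason taking the locks for real once at startup is enough: lockdep only needs to observe each ordering edge a single time, and from then on any acquisition that reverses a recorded edge is flagged, anywhere in the kernel. A toy userspace checker, with every name (`toy_acquire`, `edge`, the lock ids) invented for illustration and nothing like lockdep's real implementation, shows the principle:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy lock-order checker: records "A was held when B was taken"
 * edges, and rejects a later acquisition that reverses a recorded
 * edge. All names here are invented for this sketch. */
#define MAXLOCK 8
static bool edge[MAXLOCK][MAXLOCK];   /* edge[a][b]: a taken before b */
static bool held[MAXLOCK];

/* Returns false if taking `id` now would invert a known ordering. */
static bool toy_acquire(int id)
{
	for (int h = 0; h < MAXLOCK; h++) {
		if (!held[h])
			continue;
		if (edge[id][h])       /* id was once taken before h: inversion */
			return false;
		edge[h][id] = true;    /* record the edge h -> id */
	}
	held[id] = true;
	return true;
}

static void toy_release(int id) { held[id] = false; }

enum { MMAP_SEM, DMA_RESV, FS_RECLAIM };

/* The priming pass, mirroring what dma_resv_lockdep() does once at
 * boot: establish mmap_sem -> dma_resv -> fs_reclaim. */
static void prime(void)
{
	toy_acquire(MMAP_SEM);
	toy_acquire(DMA_RESV);
	toy_acquire(FS_RECLAIM);
	toy_release(FS_RECLAIM);
	toy_release(DMA_RESV);
	toy_release(MMAP_SEM);
}
```

After prime(), a driver that holds the reservation and then faults (taking mmap_sem, the copy_*_user-under-dma_resv bug this audit is hunting) trips the checker, even though that driver never participated in the priming.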
Reviewed-by: Thomas Hellström thellstrom@vmware.com
Thanks for your review comments.
Can you pls also run this in some test cycles, if that's easily possible? I'd like to have a tested-by from at least the big drivers - i915, amd, nouveau; vmwgfx is definitely using ttm to its fullest too, so it has the best chances of hitting an oversight.
Cheers, Daniel
Tested vmwgfx with a decent OpenGL / rendercheck stress test and no lockdep trips.
/Thomas
Tested-by: Thomas Hellström thellstrom@vmware.com