Full audit of everyone:
- i915, radeon, amdgpu should be clean per their maintainers.
- vram helpers should be fine, they don't do command submission, so really no business holding struct_mutex while doing copy_*_user. But I haven't checked them all.
- panfrost seems to dma_resv_lock only in panfrost_job_push, which looks clean.
- v3d holds dma_resv locks in the tail of its v3d_submit_cl_ioctl(), copying from/to userspace happens all in v3d_lookup_bos which is outside of the critical section.
- vmwgfx has a bunch of ioctls that do their own copy_*_user:
  - vmw_execbuf_process: First this does some copies in vmw_execbuf_cmdbuf() and also in vmw_execbuf_process() itself. Then comes the usual ttm reserve/validate sequence, then actual submission/fencing, then unreserving, and finally some more copy_to_user in vmw_execbuf_copy_fence_user. Glossing over tons of details, but looks all safe.
  - vmw_fence_event_ioctl: No ttm_reserve/dma_resv_lock anywhere to be seen, seems to only create a fence and copy it out.
  - a pile of smaller ioctls in vmwgfx_ioctl.c, no reservations to be found there.
  Summary: vmwgfx seems to be fine too.
- virtio: There's virtio_gpu_execbuffer_ioctl, which does all the copying from userspace before even looking up objects through their handles, so safe. Plus the getparam/getcaps ioctl, also both safe.
- qxl only has qxl_execbuffer_ioctl, which calls into qxl_process_single_command. There's a lovely comment before the __copy_from_user_inatomic that the slowpath should be copied from i915, but I guess that never happened. Try not to be unlucky and get your CS data evicted between when it's written and the kernel tries to read it. The only other copy_from_user is for relocs, but those are done before qxl_release_reserve_list(), which seems to be the only thing reserving buffers (in the ttm/dma_resv sense) in that code. So looks safe.
- A debugfs file in nouveau_debugfs_pstate_set() and the usif ioctl in usif_ioctl() look safe. nouveau_gem_ioctl_pushbuf() otoh breaks this everywhere and needs to be fixed up.
v2: Thomas pointed out that vmwgfx calls dma_resv_init while it already holds a dma_resv lock of a different object. Christian mentioned that ttm core does this too for ghost objects. intel-gfx-ci highlighted that i915 has similar issues.
Unfortunately we can't do this in the usual module init functions, because kernel threads don't have an ->mm - we have to wait around for some user thread to do this.
Solution is to spawn a worker (but only once). It's horrible, but it works.
v3: We can allocate mm! (Chris). Horrible worker hack out, clean initcall solution in.
v4: Annotate with __init (Rob Herring)
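To make the ordering concrete: the annotation below teaches lockdep that mmap_sem may be held while a dma_resv lock is taken, and that memory reclaim may run while a dma_resv lock is held. A purely illustrative sketch of the kind of driver pattern this then flags (resv, cmd, user_ptr and size are made-up names, not taken from any driver):

	/* Hypothetical snippet: the page fault behind copy_from_user() can
	 * acquire mmap_sem, nesting it inside the dma_resv lock -- the
	 * inverse of the order primed by dma_resv_lockdep() in the patch
	 * below, so lockdep will complain. */
	dma_resv_lock(resv, NULL);
	ret = copy_from_user(cmd, user_ptr, size);	/* may fault */
	dma_resv_unlock(resv);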
Cc: Rob Herring robh@kernel.org
Cc: Alex Deucher alexander.deucher@amd.com
Cc: Christian König christian.koenig@amd.com
Cc: Chris Wilson chris@chris-wilson.co.uk
Cc: Thomas Zimmermann tzimmermann@suse.de
Cc: Rob Herring robh@kernel.org
Cc: Tomeu Vizoso tomeu.vizoso@collabora.com
Cc: Eric Anholt eric@anholt.net
Cc: Dave Airlie airlied@redhat.com
Cc: Gerd Hoffmann kraxel@redhat.com
Cc: Ben Skeggs bskeggs@redhat.com
Cc: "VMware Graphics" linux-graphics-maintainer@vmware.com
Cc: Thomas Hellstrom thellstrom@vmware.com
Reviewed-by: Christian König christian.koenig@amd.com
Reviewed-by: Chris Wilson chris@chris-wilson.co.uk
Tested-by: Chris Wilson chris@chris-wilson.co.uk
Signed-off-by: Daniel Vetter daniel.vetter@intel.com
---
 drivers/dma-buf/dma-resv.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 709002515550..a05ff542be22 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -34,6 +34,7 @@

 #include <linux/dma-resv.h>
 #include <linux/export.h>
+#include <linux/sched/mm.h>

 /**
  * DOC: Reservation Object Overview
@@ -95,6 +96,29 @@ static void dma_resv_list_free(struct dma_resv_list *list)
 	kfree_rcu(list, rcu);
 }

+#if IS_ENABLED(CONFIG_LOCKDEP)
+static void __init dma_resv_lockdep(void)
+{
+	struct mm_struct *mm = mm_alloc();
+	struct dma_resv obj;
+
+	if (!mm)
+		return;
+
+	dma_resv_init(&obj);
+
+	down_read(&mm->mmap_sem);
+	ww_mutex_lock(&obj.lock, NULL);
+	fs_reclaim_acquire(GFP_KERNEL);
+	fs_reclaim_release(GFP_KERNEL);
+	ww_mutex_unlock(&obj.lock);
+	up_read(&mm->mmap_sem);
+
+	mmput(mm);
+}
+subsys_initcall(dma_resv_lockdep);
+#endif
+
 /**
  * dma_resv_init - initialize a reservation object
  * @obj: the reservation object
We can't copy_*_user while holding reservations, that will (soon even for nouveau) lead to deadlocks. And it breaks the cross-driver contract around dma_resv.
Fix this by adding a slowpath for when we need relocations, and by pushing the writeback of the new presumed offsets to the very end.
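Condensed to its control flow, nouveau_gem_ioctl_pushbuf() after this patch looks roughly like the sketch below (a simplified view of the diff that follows, error paths trimmed):

revalidate:
	ret = nouveau_gem_pushbuf_validate(chan, file_priv, bo,
					   req->nr_buffers, &op, &do_reloc);

	if (do_reloc && !reloc) {
		/* Slowpath: drop all reservations again, copy the relocs
		 * from userspace with no dma_resv lock held, then retry. */
		validate_fini(&op, chan, NULL, bo);
		reloc = u_memcpya(req->relocs, req->nr_relocs, sizeof(*reloc));
		goto revalidate;
	}

	if (do_reloc)
		ret = nouveau_gem_pushbuf_reloc_apply(cli, req, reloc, bo);

	/* push buffers and emit the fence */

	validate_fini(&op, chan, fence, bo);

	/* Only now, with no reservations held, write the updated presumed
	 * offsets back to userspace. */
	if (do_reloc) {
		struct drm_nouveau_gem_pushbuf_bo __user *upbbo =
			u64_to_user_ptr(req->buffers);

		for (i = 0; i < req->nr_buffers; i++)
			if (!bo[i].presumed.valid &&
			    copy_to_user(&upbbo[i].presumed, &bo[i].presumed,
					 sizeof(bo[i].presumed)))
				ret = -EFAULT;
	}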
Aside from "it compiles" entirely untested unfortunately.
Signed-off-by: Daniel Vetter daniel.vetter@intel.com
Cc: Ilia Mirkin imirkin@alum.mit.edu
Cc: Maarten Lankhorst maarten.lankhorst@linux.intel.com
Cc: Ben Skeggs bskeggs@redhat.com
Cc: nouveau@lists.freedesktop.org
---
 drivers/gpu/drm/nouveau/nouveau_gem.c | 57 ++++++++++++++++++---------
 1 file changed, 38 insertions(+), 19 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c index 1324c19f4e5c..05ec8edd6a8b 100644 --- a/drivers/gpu/drm/nouveau/nouveau_gem.c +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c @@ -484,12 +484,9 @@ validate_init(struct nouveau_channel *chan, struct drm_file *file_priv,
static int validate_list(struct nouveau_channel *chan, struct nouveau_cli *cli, - struct list_head *list, struct drm_nouveau_gem_pushbuf_bo *pbbo, - uint64_t user_pbbo_ptr) + struct list_head *list, struct drm_nouveau_gem_pushbuf_bo *pbbo) { struct nouveau_drm *drm = chan->drm; - struct drm_nouveau_gem_pushbuf_bo __user *upbbo = - (void __force __user *)(uintptr_t)user_pbbo_ptr; struct nouveau_bo *nvbo; int ret, relocs = 0;
@@ -533,10 +530,6 @@ validate_list(struct nouveau_channel *chan, struct nouveau_cli *cli, b->presumed.offset = nvbo->bo.offset; b->presumed.valid = 0; relocs++; - - if (copy_to_user(&upbbo[nvbo->pbbo_index].presumed, - &b->presumed, sizeof(b->presumed))) - return -EFAULT; } }
@@ -547,8 +540,8 @@ static int nouveau_gem_pushbuf_validate(struct nouveau_channel *chan, struct drm_file *file_priv, struct drm_nouveau_gem_pushbuf_bo *pbbo, - uint64_t user_buffers, int nr_buffers, - struct validate_op *op, int *apply_relocs) + int nr_buffers, + struct validate_op *op, bool *apply_relocs) { struct nouveau_cli *cli = nouveau_cli(file_priv); int ret; @@ -565,7 +558,7 @@ nouveau_gem_pushbuf_validate(struct nouveau_channel *chan, return ret; }
- ret = validate_list(chan, cli, &op->list, pbbo, user_buffers); + ret = validate_list(chan, cli, &op->list, pbbo); if (unlikely(ret < 0)) { if (ret != -ERESTARTSYS) NV_PRINTK(err, cli, "validating bo list\n"); @@ -605,16 +598,12 @@ u_memcpya(uint64_t user, unsigned nmemb, unsigned size) static int nouveau_gem_pushbuf_reloc_apply(struct nouveau_cli *cli, struct drm_nouveau_gem_pushbuf *req, + struct drm_nouveau_gem_pushbuf_reloc *reloc, struct drm_nouveau_gem_pushbuf_bo *bo) { - struct drm_nouveau_gem_pushbuf_reloc *reloc = NULL; int ret = 0; unsigned i;
- reloc = u_memcpya(req->relocs, req->nr_relocs, sizeof(*reloc)); - if (IS_ERR(reloc)) - return PTR_ERR(reloc); - for (i = 0; i < req->nr_relocs; i++) { struct drm_nouveau_gem_pushbuf_reloc *r = &reloc[i]; struct drm_nouveau_gem_pushbuf_bo *b; @@ -693,11 +682,13 @@ nouveau_gem_ioctl_pushbuf(struct drm_device *dev, void *data, struct nouveau_drm *drm = nouveau_drm(dev); struct drm_nouveau_gem_pushbuf *req = data; struct drm_nouveau_gem_pushbuf_push *push; + struct drm_nouveau_gem_pushbuf_reloc *reloc = NULL; struct drm_nouveau_gem_pushbuf_bo *bo; struct nouveau_channel *chan = NULL; struct validate_op op; struct nouveau_fence *fence = NULL; - int i, j, ret = 0, do_reloc = 0; + int i, j, ret = 0; + bool do_reloc = false;
if (unlikely(!abi16)) return -ENOMEM; @@ -755,7 +746,8 @@ nouveau_gem_ioctl_pushbuf(struct drm_device *dev, void *data, }
/* Validate buffer list */ - ret = nouveau_gem_pushbuf_validate(chan, file_priv, bo, req->buffers, +revalidate: + ret = nouveau_gem_pushbuf_validate(chan, file_priv, bo, req->nr_buffers, &op, &do_reloc); if (ret) { if (ret != -ERESTARTSYS) @@ -765,7 +757,18 @@ nouveau_gem_ioctl_pushbuf(struct drm_device *dev, void *data,
/* Apply any relocations that are required */ if (do_reloc) { - ret = nouveau_gem_pushbuf_reloc_apply(cli, req, bo); + if (!reloc) { + validate_fini(&op, chan, NULL, bo); + reloc = u_memcpya(req->relocs, req->nr_relocs, sizeof(*reloc)); + if (IS_ERR(reloc)) { + ret = PTR_ERR(reloc); + goto out_prevalid; + } + + goto revalidate; + } + + ret = nouveau_gem_pushbuf_reloc_apply(cli, req, reloc, bo); if (ret) { NV_PRINTK(err, cli, "reloc apply: %d\n", ret); goto out; @@ -851,6 +854,22 @@ nouveau_gem_ioctl_pushbuf(struct drm_device *dev, void *data, validate_fini(&op, chan, fence, bo); nouveau_fence_unref(&fence);
+ if (do_reloc) { + struct drm_nouveau_gem_pushbuf_bo __user *upbbo = + u64_to_user_ptr(req->buffers); + + for (i = 0; i < req->nr_buffers; i++) { + if (bo[i].presumed.valid) + continue; + + if (copy_to_user(&upbbo[i].presumed, &bo[i].presumed, + sizeof(bo[i].presumed))) { + ret = -EFAULT; + break; + } + } + u_free(reloc); + } out_prevalid: u_free(bo); u_free(push);
On Mon, Nov 04, 2019 at 06:38:00PM +0100, Daniel Vetter wrote:
We can't copy_*_user while holding reservations, that will (soon even for nouveau) lead to deadlocks. And it breaks the cross-driver contract around dma_resv.
Fix this by adding a slowpath for when we need relocations, and by pushing the writeback of the new presumed offsets to the very end.
Aside from "it compiles" entirely untested unfortunately.
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Ilia Mirkin imirkin@alum.mit.edu Cc: Maarten Lankhorst maarten.lankhorst@linux.intel.com Cc: Ben Skeggs bskeggs@redhat.com Cc: nouveau@lists.freedesktop.org
Ping for ack/review so I can land this entire series. intel-gfx-ci is all happy with the rebased version, so nouveau ack is the only hold-up here. -Daniel
drivers/gpu/drm/nouveau/nouveau_gem.c | 57 ++++++++++++++++++--------- 1 file changed, 38 insertions(+), 19 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c index 1324c19f4e5c..05ec8edd6a8b 100644 --- a/drivers/gpu/drm/nouveau/nouveau_gem.c +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c @@ -484,12 +484,9 @@ validate_init(struct nouveau_channel *chan, struct drm_file *file_priv,
static int validate_list(struct nouveau_channel *chan, struct nouveau_cli *cli,
struct list_head *list, struct drm_nouveau_gem_pushbuf_bo *pbbo,
uint64_t user_pbbo_ptr)
struct list_head *list, struct drm_nouveau_gem_pushbuf_bo *pbbo)
{ struct nouveau_drm *drm = chan->drm;
- struct drm_nouveau_gem_pushbuf_bo __user *upbbo =
- (void __force __user *)(uintptr_t)user_pbbo_ptr;
struct nouveau_bo *nvbo; int ret, relocs = 0;
@@ -533,10 +530,6 @@ validate_list(struct nouveau_channel *chan, struct nouveau_cli *cli, b->presumed.offset = nvbo->bo.offset; b->presumed.valid = 0; relocs++;
- if (copy_to_user(&upbbo[nvbo->pbbo_index].presumed,
- &b->presumed, sizeof(b->presumed)))
- return -EFAULT;
} }
@@ -547,8 +540,8 @@ static int nouveau_gem_pushbuf_validate(struct nouveau_channel *chan, struct drm_file *file_priv, struct drm_nouveau_gem_pushbuf_bo *pbbo,
uint64_t user_buffers, int nr_buffers,
struct validate_op *op, int *apply_relocs)
int nr_buffers,
struct validate_op *op, bool *apply_relocs)
{ struct nouveau_cli *cli = nouveau_cli(file_priv); int ret; @@ -565,7 +558,7 @@ nouveau_gem_pushbuf_validate(struct nouveau_channel *chan, return ret; }
- ret = validate_list(chan, cli, &op->list, pbbo, user_buffers);
+ ret = validate_list(chan, cli, &op->list, pbbo); if (unlikely(ret < 0)) { if (ret != -ERESTARTSYS) NV_PRINTK(err, cli, "validating bo list\n");
@@ -605,16 +598,12 @@ u_memcpya(uint64_t user, unsigned nmemb, unsigned size) static int nouveau_gem_pushbuf_reloc_apply(struct nouveau_cli *cli, struct drm_nouveau_gem_pushbuf *req,
+ struct drm_nouveau_gem_pushbuf_reloc *reloc,
struct drm_nouveau_gem_pushbuf_bo *bo)
{
- struct drm_nouveau_gem_pushbuf_reloc *reloc = NULL;
int ret = 0; unsigned i;
- reloc = u_memcpya(req->relocs, req->nr_relocs, sizeof(*reloc));
- if (IS_ERR(reloc))
- return PTR_ERR(reloc);
-
for (i = 0; i < req->nr_relocs; i++) { struct drm_nouveau_gem_pushbuf_reloc *r = &reloc[i]; struct drm_nouveau_gem_pushbuf_bo *b;
@@ -693,11 +682,13 @@ nouveau_gem_ioctl_pushbuf(struct drm_device *dev, void *data, struct nouveau_drm *drm = nouveau_drm(dev); struct drm_nouveau_gem_pushbuf *req = data; struct drm_nouveau_gem_pushbuf_push *push;
+ struct drm_nouveau_gem_pushbuf_reloc *reloc = NULL;
struct drm_nouveau_gem_pushbuf_bo *bo; struct nouveau_channel *chan = NULL; struct validate_op op; struct nouveau_fence *fence = NULL;
- int i, j, ret = 0, do_reloc = 0;
+ int i, j, ret = 0;
+ bool do_reloc = false;
if (unlikely(!abi16)) return -ENOMEM;
@@ -755,7 +746,8 @@ nouveau_gem_ioctl_pushbuf(struct drm_device *dev, void *data, }
/* Validate buffer list */
- ret = nouveau_gem_pushbuf_validate(chan, file_priv, bo, req->buffers,
+revalidate:
+ ret = nouveau_gem_pushbuf_validate(chan, file_priv, bo, req->nr_buffers, &op, &do_reloc); if (ret) { if (ret != -ERESTARTSYS)
@@ -765,7 +757,18 @@ nouveau_gem_ioctl_pushbuf(struct drm_device *dev, void *data,
/* Apply any relocations that are required */ if (do_reloc) {
- ret = nouveau_gem_pushbuf_reloc_apply(cli, req, bo);
+ if (!reloc) {
+ validate_fini(&op, chan, NULL, bo);
+ reloc = u_memcpya(req->relocs, req->nr_relocs, sizeof(*reloc));
+ if (IS_ERR(reloc)) {
+ ret = PTR_ERR(reloc);
+ goto out_prevalid;
+ }
+
+ goto revalidate;
+ }
+
+ ret = nouveau_gem_pushbuf_reloc_apply(cli, req, reloc, bo);
if (ret) { NV_PRINTK(err, cli, "reloc apply: %d\n", ret); goto out;
@@ -851,6 +854,22 @@ nouveau_gem_ioctl_pushbuf(struct drm_device *dev, void *data, validate_fini(&op, chan, fence, bo); nouveau_fence_unref(&fence);
+ if (do_reloc) {
+ struct drm_nouveau_gem_pushbuf_bo __user *upbbo =
+ u64_to_user_ptr(req->buffers);
+
+ for (i = 0; i < req->nr_buffers; i++) {
+ if (bo[i].presumed.valid)
+ continue;
+
+ if (copy_to_user(&upbbo[i].presumed, &bo[i].presumed,
+ sizeof(bo[i].presumed))) {
+ ret = -EFAULT;
+ break;
+ }
+ }
+ u_free(reloc);
+ }
out_prevalid: u_free(bo); u_free(push); -- 2.24.0.rc2
On 05-11-2019 at 12:04, Daniel Vetter wrote:
On Mon, Nov 04, 2019 at 06:38:00PM +0100, Daniel Vetter wrote:
We can't copy_*_user while holding reservations, that will (soon even for nouveau) lead to deadlocks. And it breaks the cross-driver contract around dma_resv.
Fix this by adding a slowpath for when we need relocations, and by pushing the writeback of the new presumed offsets to the very end.
Aside from "it compiles" entirely untested unfortunately.
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Ilia Mirkin imirkin@alum.mit.edu Cc: Maarten Lankhorst maarten.lankhorst@linux.intel.com Cc: Ben Skeggs bskeggs@redhat.com Cc: nouveau@lists.freedesktop.org
Ping for ack/review so I can land this entire series. intel-gfx-ci is all happy with the rebased version, so nouveau ack is the only hold-up here.
I don't feel confident reviewing this as I lack the hw, so all review caveats apply.
Having said that, this looks sane, and if it blows up we'll found out eventually. :)
Reviewed-by: Maarten Lankhorst maarten.lankhorst@linux.intel.com
With nouveau fixed, all ttm-using drivers have the correct nesting of mmap_sem vs dma_resv, and we can just lock the buffer.
Assuming I didn't screw up anything with my audit of course.
v2:
- Don't forget wu_mutex (Christian König)
- Keep the mmap_sem-less wait optimization (Thomas)
- Use _lock_interruptible to be good citizens (Thomas)
v3: Rebase over fault handler helperification.
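In short, the fault path now does roughly the following (a simplified sketch of the diff below; the FAULT_FLAG_RETRY_NOWAIT case is omitted):

	if (unlikely(!dma_resv_trylock(bo->base.resv))) {
		if (vmf->flags & FAULT_FLAG_ALLOW_RETRY) {
			/* Drop mmap_sem, wait for the reservation by taking
			 * it interruptibly, and let the core retry the
			 * fault -- no wu_mutex needed anymore. */
			ttm_bo_get(bo);
			up_read(&vmf->vma->vm_mm->mmap_sem);
			if (!dma_resv_lock_interruptible(bo->base.resv, NULL))
				dma_resv_unlock(bo->base.resv);
			ttm_bo_put(bo);
			return VM_FAULT_RETRY;
		}

		/* mmap_sem nests outside dma_resv now, so a blocking lock
		 * with mmap_sem held is fine. */
		if (dma_resv_lock_interruptible(bo->base.resv, NULL))
			return VM_FAULT_NOPAGE;
	}

	return 0;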
Reviewed-by: Christian König christian.koenig@amd.com (v2)
Reviewed-by: Thomas Hellström thellstrom@vmware.com (v2)
Signed-off-by: Daniel Vetter daniel.vetter@intel.com
Cc: Christian Koenig christian.koenig@amd.com
Cc: Huang Rui ray.huang@amd.com
Cc: Gerd Hoffmann kraxel@redhat.com
Cc: "VMware Graphics" linux-graphics-maintainer@vmware.com
Cc: Thomas Hellstrom thellstrom@vmware.com
---
 drivers/gpu/drm/ttm/ttm_bo.c      | 36 -------------------------------
 drivers/gpu/drm/ttm/ttm_bo_util.c |  1 -
 drivers/gpu/drm/ttm/ttm_bo_vm.c   | 18 +++++-----------
 include/drm/ttm/ttm_bo_api.h      |  4 ----
 4 files changed, 5 insertions(+), 54 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 8d91b0428af1..5df596fb0280 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -161,7 +161,6 @@ static void ttm_bo_release_list(struct kref *list_kref) dma_fence_put(bo->moving); if (!ttm_bo_uses_embedded_gem_object(bo)) dma_resv_fini(&bo->base._resv); - mutex_destroy(&bo->wu_mutex); bo->destroy(bo); ttm_mem_global_free(&ttm_mem_glob, acc_size); } @@ -1299,7 +1298,6 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev, INIT_LIST_HEAD(&bo->ddestroy); INIT_LIST_HEAD(&bo->swap); INIT_LIST_HEAD(&bo->io_reserve_lru); - mutex_init(&bo->wu_mutex); bo->bdev = bdev; bo->type = type; bo->num_pages = num_pages; @@ -1903,37 +1901,3 @@ void ttm_bo_swapout_all(struct ttm_bo_device *bdev) while (ttm_bo_swapout(&ttm_bo_glob, &ctx) == 0); } EXPORT_SYMBOL(ttm_bo_swapout_all); - -/** - * ttm_bo_wait_unreserved - interruptible wait for a buffer object to become - * unreserved - * - * @bo: Pointer to buffer - */ -int ttm_bo_wait_unreserved(struct ttm_buffer_object *bo) -{ - int ret; - - /* - * In the absense of a wait_unlocked API, - * Use the bo::wu_mutex to avoid triggering livelocks due to - * concurrent use of this function. Note that this use of - * bo::wu_mutex can go away if we change locking order to - * mmap_sem -> bo::reserve. - */ - ret = mutex_lock_interruptible(&bo->wu_mutex); - if (unlikely(ret != 0)) - return -ERESTARTSYS; - if (!dma_resv_is_locked(bo->base.resv)) - goto out_unlock; - ret = dma_resv_lock_interruptible(bo->base.resv, NULL); - if (ret == -EINTR) - ret = -ERESTARTSYS; - if (unlikely(ret != 0)) - goto out_unlock; - dma_resv_unlock(bo->base.resv); - -out_unlock: - mutex_unlock(&bo->wu_mutex); - return ret; -} diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c index 6b0883a1776e..2b0e5a088da0 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -504,7 +504,6 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo, INIT_LIST_HEAD(&fbo->base.lru); INIT_LIST_HEAD(&fbo->base.swap); INIT_LIST_HEAD(&fbo->base.io_reserve_lru); - mutex_init(&fbo->base.wu_mutex); fbo->base.moving = NULL; drm_vma_node_reset(&fbo->base.base.vma_node);
diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c index 11863fbdd5d6..91466cfb6f16 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c @@ -128,30 +128,22 @@ static unsigned long ttm_bo_io_mem_pfn(struct ttm_buffer_object *bo, vm_fault_t ttm_bo_vm_reserve(struct ttm_buffer_object *bo, struct vm_fault *vmf) { - /* - * Work around locking order reversal in fault / nopfn - * between mmap_sem and bo_reserve: Perform a trylock operation - * for reserve, and if it fails, retry the fault after waiting - * for the buffer to become unreserved. - */ if (unlikely(!dma_resv_trylock(bo->base.resv))) { if (vmf->flags & FAULT_FLAG_ALLOW_RETRY) { if (!(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) { ttm_bo_get(bo); up_read(&vmf->vma->vm_mm->mmap_sem); - (void) ttm_bo_wait_unreserved(bo); + if (!dma_resv_lock_interruptible(bo->base.resv, + NULL)) + dma_resv_unlock(bo->base.resv); ttm_bo_put(bo); }
return VM_FAULT_RETRY; }
- /* - * If we'd want to change locking order to - * mmap_sem -> bo::reserve, we'd use a blocking reserve here - * instead of retrying the fault... - */ - return VM_FAULT_NOPAGE; + if (dma_resv_lock_interruptible(bo->base.resv, NULL)) + return VM_FAULT_NOPAGE; }
return 0; diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h index 65e399d280f7..e8b0f0c66059 100644 --- a/include/drm/ttm/ttm_bo_api.h +++ b/include/drm/ttm/ttm_bo_api.h @@ -154,7 +154,6 @@ struct ttm_tt; * @offset: The current GPU offset, which can have different meanings * depending on the memory type. For SYSTEM type memory, it should be 0. * @cur_placement: Hint of current placement. - * @wu_mutex: Wait unreserved mutex. * * Base class for TTM buffer object, that deals with data placement and CPU * mappings. GPU mappings are really up to the driver, but for simpler GPUs @@ -222,8 +221,6 @@ struct ttm_buffer_object { uint64_t offset; /* GPU address space is independent of CPU word size */
struct sg_table *sg; - - struct mutex wu_mutex; };
/** @@ -707,7 +704,6 @@ ssize_t ttm_bo_io(struct ttm_bo_device *bdev, struct file *filp, int ttm_bo_swapout(struct ttm_bo_global *glob, struct ttm_operation_ctx *ctx); void ttm_bo_swapout_all(struct ttm_bo_device *bdev); -int ttm_bo_wait_unreserved(struct ttm_buffer_object *bo);
/** * ttm_bo_uses_embedded_gem_object - check if the given bo uses the
On Mon, Nov 04, 2019 at 06:38:01PM +0100, Daniel Vetter wrote:
With nouveau fixed all ttm-using drivers have the correct nesting of mmap_sem vs dma_resv, and we can just lock the buffer.
Assuming I didn't screw up anything with my audit of course.
v2:
- Don't forget wu_mutex (Christian König)
- Keep the mmap_sem-less wait optimization (Thomas)
- Use _lock_interruptible to be good citizens (Thomas)
v3: Rebase over fault handler helperification.
Reviewed-by: Christian König christian.koenig@amd.com (v2) Reviewed-by: Thomas Hellström thellstrom@vmware.com (v2) Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Christian Koenig christian.koenig@amd.com Cc: Huang Rui ray.huang@amd.com Cc: Gerd Hoffmann kraxel@redhat.com Cc: "VMware Graphics" linux-graphics-maintainer@vmware.com Cc: Thomas Hellstrom thellstrom@vmware.com
Entire series merged into drm-misc-next (probably for 5.6) with Dave's irc-ack for the nouveau patch. -Daniel
drivers/gpu/drm/ttm/ttm_bo.c | 36 ------------------------------- drivers/gpu/drm/ttm/ttm_bo_util.c | 1 - drivers/gpu/drm/ttm/ttm_bo_vm.c | 18 +++++----------- include/drm/ttm/ttm_bo_api.h | 4 ---- 4 files changed, 5 insertions(+), 54 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 8d91b0428af1..5df596fb0280 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -161,7 +161,6 @@ static void ttm_bo_release_list(struct kref *list_kref) dma_fence_put(bo->moving); if (!ttm_bo_uses_embedded_gem_object(bo)) dma_resv_fini(&bo->base._resv);
- mutex_destroy(&bo->wu_mutex); bo->destroy(bo); ttm_mem_global_free(&ttm_mem_glob, acc_size);
} @@ -1299,7 +1298,6 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev, INIT_LIST_HEAD(&bo->ddestroy); INIT_LIST_HEAD(&bo->swap); INIT_LIST_HEAD(&bo->io_reserve_lru);
- mutex_init(&bo->wu_mutex); bo->bdev = bdev; bo->type = type; bo->num_pages = num_pages;
@@ -1903,37 +1901,3 @@ void ttm_bo_swapout_all(struct ttm_bo_device *bdev) while (ttm_bo_swapout(&ttm_bo_glob, &ctx) == 0); } EXPORT_SYMBOL(ttm_bo_swapout_all);
-/**
- * ttm_bo_wait_unreserved - interruptible wait for a buffer object to become
- * unreserved
- *
- * @bo: Pointer to buffer
- */
-int ttm_bo_wait_unreserved(struct ttm_buffer_object *bo)
-{
-	int ret;
-
-	/*
-	 * In the absense of a wait_unlocked API,
-	 * Use the bo::wu_mutex to avoid triggering livelocks due to
-	 * concurrent use of this function. Note that this use of
-	 * bo::wu_mutex can go away if we change locking order to
-	 * mmap_sem -> bo::reserve.
-	 */
-	ret = mutex_lock_interruptible(&bo->wu_mutex);
-	if (unlikely(ret != 0))
-		return -ERESTARTSYS;
-	if (!dma_resv_is_locked(bo->base.resv))
-		goto out_unlock;
-	ret = dma_resv_lock_interruptible(bo->base.resv, NULL);
-	if (ret == -EINTR)
-		ret = -ERESTARTSYS;
-	if (unlikely(ret != 0))
-		goto out_unlock;
-	dma_resv_unlock(bo->base.resv);
-
-out_unlock:
-	mutex_unlock(&bo->wu_mutex);
-	return ret;
-} diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c index 6b0883a1776e..2b0e5a088da0 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -504,7 +504,6 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo, INIT_LIST_HEAD(&fbo->base.lru); INIT_LIST_HEAD(&fbo->base.swap); INIT_LIST_HEAD(&fbo->base.io_reserve_lru);
- mutex_init(&fbo->base.wu_mutex); fbo->base.moving = NULL; drm_vma_node_reset(&fbo->base.base.vma_node);
diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c index 11863fbdd5d6..91466cfb6f16 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c @@ -128,30 +128,22 @@ static unsigned long ttm_bo_io_mem_pfn(struct ttm_buffer_object *bo, vm_fault_t ttm_bo_vm_reserve(struct ttm_buffer_object *bo, struct vm_fault *vmf) {
-	/*
-	 * Work around locking order reversal in fault / nopfn
-	 * between mmap_sem and bo_reserve: Perform a trylock operation
-	 * for reserve, and if it fails, retry the fault after waiting
-	 * for the buffer to become unreserved.
-	 */
 	if (unlikely(!dma_resv_trylock(bo->base.resv))) {
 		if (vmf->flags & FAULT_FLAG_ALLOW_RETRY) {
 			if (!(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) {
 				ttm_bo_get(bo);
 				up_read(&vmf->vma->vm_mm->mmap_sem);
-				(void) ttm_bo_wait_unreserved(bo);
+				if (!dma_resv_lock_interruptible(bo->base.resv,
+								 NULL))
+					dma_resv_unlock(bo->base.resv);
 				ttm_bo_put(bo);
 			}

 			return VM_FAULT_RETRY;
 		}

-		/*
-		 * If we'd want to change locking order to
-		 * mmap_sem -> bo::reserve, we'd use a blocking reserve here
-		 * instead of retrying the fault...
-		 */
-		return VM_FAULT_NOPAGE;
+		if (dma_resv_lock_interruptible(bo->base.resv, NULL))
+			return VM_FAULT_NOPAGE;
}
return 0;
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h index 65e399d280f7..e8b0f0c66059 100644 --- a/include/drm/ttm/ttm_bo_api.h +++ b/include/drm/ttm/ttm_bo_api.h @@ -154,7 +154,6 @@ struct ttm_tt;
 * @offset: The current GPU offset, which can have different meanings
 * depending on the memory type. For SYSTEM type memory, it should be 0.
 * @cur_placement: Hint of current placement.
- * @wu_mutex: Wait unreserved mutex.
 *
 * Base class for TTM buffer object, that deals with data placement and CPU
 * mappings. GPU mappings are really up to the driver, but for simpler GPUs
@@ -222,8 +221,6 @@ struct ttm_buffer_object { uint64_t offset; /* GPU address space is independent of CPU word size */
struct sg_table *sg;
- struct mutex wu_mutex;
};
/** @@ -707,7 +704,6 @@ ssize_t ttm_bo_io(struct ttm_bo_device *bdev, struct file *filp, int ttm_bo_swapout(struct ttm_bo_global *glob, struct ttm_operation_ctx *ctx); void ttm_bo_swapout_all(struct ttm_bo_device *bdev); -int ttm_bo_wait_unreserved(struct ttm_buffer_object *bo);
/**
 * ttm_bo_uses_embedded_gem_object - check if the given bo uses the
-- 2.24.0.rc2
On Mon, Nov 04, 2019 at 06:37:59PM +0100, Daniel Vetter wrote:
Full audit of everyone:
i915, radeon, amdgpu should be clean per their maintainers.
vram helpers should be fine, they don't do command submission, so really no business holding struct_mutex while doing copy_*_user. But I haven't checked them all.
panfrost seems to dma_resv_lock only in panfrost_job_push, which looks clean.
v3d holds dma_resv locks in the tail of its v3d_submit_cl_ioctl(), copying from/to userspace happens all in v3d_lookup_bos which is outside of the critical section.
vmwgfx has a bunch of ioctls that do their own copy_*_user:
- vmw_execbuf_process: First this does some copies in vmw_execbuf_cmdbuf() and also in the vmw_execbuf_process() itself. Then comes the usual ttm reserve/validate sequence, then actual submission/fencing, then unreserving, and finally some more copy_to_user in vmw_execbuf_copy_fence_user. Glossing over tons of details, but looks all safe.
- vmw_fence_event_ioctl: No ttm_reserve/dma_resv_lock anywhere to be seen, seems to only create a fence and copy it out.
- a pile of smaller ioctl in vmwgfx_ioctl.c, no reservations to be found there.
Summary: vmwgfx seems to be fine too.
virtio: There's virtio_gpu_execbuffer_ioctl, which does all the copying from userspace before even looking up objects through their handles, so safe. Plus the getparam/getcaps ioctl, also both safe.
qxl only has qxl_execbuffer_ioctl, which calls into qxl_process_single_command. There's a lovely comment before the __copy_from_user_inatomic that the slowpath should be copied from i915, but I guess that never happened. Try not to be unlucky and get your CS data evicted between when it's written and the kernel tries to read it. The only other copy_from_user is for relocs, but those are done before qxl_release_reserve_list(), which seems to be the only thing reserving buffers (in the ttm/dma_resv sense) in that code. So looks safe.
A debugfs file in nouveau_debugfs_pstate_set() and the usif ioctl in usif_ioctl() look safe. nouveau_gem_ioctl_pushbuf() otoh breaks this everywhere and needs to be fixed up.
v2: Thomas pointed at that vmwgfx calls dma_resv_init while it holds a dma_resv lock of a different object already. Christian mentioned that ttm core does this too for ghost objects. intel-gfx-ci highlighted that i915 has similar issues.
Unfortunately we can't do this in the usual module init functions, because kernel threads don't have an ->mm - we have to wait around for some user thread to do this.
Solution is to spawn a worker (but only once). It's horrible, but it works.
v3: We can allocate mm! (Chris). Horrible worker hack out, clean initcall solution in.
v4: Annotate with __init (Rob Herring)
Cc: Rob Herring robh@kernel.org Cc: Alex Deucher alexander.deucher@amd.com Cc: Christian König christian.koenig@amd.com Cc: Chris Wilson chris@chris-wilson.co.uk Cc: Thomas Zimmermann tzimmermann@suse.de Cc: Rob Herring robh@kernel.org Cc: Tomeu Vizoso tomeu.vizoso@collabora.com Cc: Eric Anholt eric@anholt.net Cc: Dave Airlie airlied@redhat.com Cc: Gerd Hoffmann kraxel@redhat.com Cc: Ben Skeggs bskeggs@redhat.com Cc: "VMware Graphics" linux-graphics-maintainer@vmware.com Cc: Thomas Hellstrom thellstrom@vmware.com Reviewed-by: Christian König christian.koenig@amd.com Reviewed-by: Chris Wilson chris@chris-wilson.co.uk Tested-by: Chris Wilson chris@chris-wilson.co.uk Signed-off-by: Daniel Vetter daniel.vetter@intel.com
I lost the r-b from Thomas on the last round:
Reviewed-by: Thomas Hellstrom thellstrom@vmware.com
drivers/dma-buf/dma-resv.c | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index 709002515550..a05ff542be22 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -34,6 +34,7 @@
#include <linux/dma-resv.h> #include <linux/export.h> +#include <linux/sched/mm.h>
/**
- DOC: Reservation Object Overview
@@ -95,6 +96,29 @@ static void dma_resv_list_free(struct dma_resv_list *list) kfree_rcu(list, rcu); }
+#if IS_ENABLED(CONFIG_LOCKDEP) +static void __init dma_resv_lockdep(void) +{
+ struct mm_struct *mm = mm_alloc();
+ struct dma_resv obj;
+
+ if (!mm)
+ return;
+
+ dma_resv_init(&obj);
+
+ down_read(&mm->mmap_sem);
+ ww_mutex_lock(&obj.lock, NULL);
+ fs_reclaim_acquire(GFP_KERNEL);
+ fs_reclaim_release(GFP_KERNEL);
+ ww_mutex_unlock(&obj.lock);
+ up_read(&mm->mmap_sem);
+
+ mmput(mm);
+} +subsys_initcall(dma_resv_lockdep); +#endif
/**
- dma_resv_init - initialize a reservation object
- @obj: the reservation object
-- 2.24.0.rc2
On 04.11.19 at 18:37, Daniel Vetter wrote:
Full audit of everyone:
i915, radeon, amdgpu should be clean per their maintainers.
vram helpers should be fine, they don't do command submission, so really no business holding struct_mutex while doing copy_*_user. But I haven't checked them all.
panfrost seems to dma_resv_lock only in panfrost_job_push, which looks clean.
v3d holds dma_resv locks in the tail of its v3d_submit_cl_ioctl(), copying from/to userspace happens all in v3d_lookup_bos which is outside of the critical section.
vmwgfx has a bunch of ioctls that do their own copy_*_user:
- vmw_execbuf_process: First this does some copies in vmw_execbuf_cmdbuf() and also in the vmw_execbuf_process() itself. Then comes the usual ttm reserve/validate sequence, then actual submission/fencing, then unreserving, and finally some more copy_to_user in vmw_execbuf_copy_fence_user. Glossing over tons of details, but looks all safe.
- vmw_fence_event_ioctl: No ttm_reserve/dma_resv_lock anywhere to be seen, seems to only create a fence and copy it out.
- a pile of smaller ioctl in vmwgfx_ioctl.c, no reservations to be found there.
Summary: vmwgfx seems to be fine too.
virtio: There's virtio_gpu_execbuffer_ioctl, which does all the copying from userspace before even looking up objects through their handles, so safe. Plus the getparam/getcaps ioctl, also both safe.
qxl only has qxl_execbuffer_ioctl, which calls into qxl_process_single_command. There's a lovely comment before the __copy_from_user_inatomic that the slowpath should be copied from i915, but I guess that never happened. Try not to be unlucky and get your CS data evicted between when it's written and the kernel tries to read it. The only other copy_from_user is for relocs, but those are done before qxl_release_reserve_list(), which seems to be the only thing reserving buffers (in the ttm/dma_resv sense) in that code. So looks safe.
A debugfs file in nouveau_debugfs_pstate_set() and the usif ioctl in usif_ioctl() look safe. nouveau_gem_ioctl_pushbuf() otoh breaks this everywhere and needs to be fixed up.
v2: Thomas pointed at that vmwgfx calls dma_resv_init while it holds a dma_resv lock of a different object already. Christian mentioned that ttm core does this too for ghost objects. intel-gfx-ci highlighted that i915 has similar issues.
Unfortunately we can't do this in the usual module init functions, because kernel threads don't have an ->mm - we have to wait around for some user thread to do this.
Solution is to spawn a worker (but only once). It's horrible, but it works.
v3: We can allocate mm! (Chris). Horrible worker hack out, clean initcall solution in.
v4: Annotate with __init (Rob Herring)
Cc: Rob Herring robh@kernel.org Cc: Alex Deucher alexander.deucher@amd.com Cc: Christian König christian.koenig@amd.com Cc: Chris Wilson chris@chris-wilson.co.uk Cc: Thomas Zimmermann tzimmermann@suse.de Cc: Rob Herring robh@kernel.org Cc: Tomeu Vizoso tomeu.vizoso@collabora.com Cc: Eric Anholt eric@anholt.net Cc: Dave Airlie airlied@redhat.com Cc: Gerd Hoffmann kraxel@redhat.com Cc: Ben Skeggs bskeggs@redhat.com Cc: "VMware Graphics" linux-graphics-maintainer@vmware.com Cc: Thomas Hellstrom thellstrom@vmware.com Reviewed-by: Christian König christian.koenig@amd.com Reviewed-by: Chris Wilson chris@chris-wilson.co.uk Tested-by: Chris Wilson chris@chris-wilson.co.uk Signed-off-by: Daniel Vetter daniel.vetter@intel.com
What's holding you back to commit that?
Christian.
drivers/dma-buf/dma-resv.c | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index 709002515550..a05ff542be22 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -34,6 +34,7 @@
#include <linux/dma-resv.h> #include <linux/export.h> +#include <linux/sched/mm.h>
/**
- DOC: Reservation Object Overview
@@ -95,6 +96,29 @@ static void dma_resv_list_free(struct dma_resv_list *list) kfree_rcu(list, rcu); }
+#if IS_ENABLED(CONFIG_LOCKDEP) +static void __init dma_resv_lockdep(void) +{
- struct mm_struct *mm = mm_alloc();
- struct dma_resv obj;
- if (!mm)
return;
- dma_resv_init(&obj);
- down_read(&mm->mmap_sem);
- ww_mutex_lock(&obj.lock, NULL);
- fs_reclaim_acquire(GFP_KERNEL);
- fs_reclaim_release(GFP_KERNEL);
- ww_mutex_unlock(&obj.lock);
- up_read(&mm->mmap_sem);
- mmput(mm);
+} +subsys_initcall(dma_resv_lockdep); +#endif
- /**
- dma_resv_init - initialize a reservation object
- @obj: the reservation object
On Mon, Nov 04, 2019 at 08:01:09PM +0000, Koenig, Christian wrote:
On 04.11.19 at 18:37, Daniel Vetter wrote:
Full audit of everyone:
i915, radeon, amdgpu should be clean per their maintainers.
vram helpers should be fine, they don't do command submission, so really no business holding struct_mutex while doing copy_*_user. But I haven't checked them all.
panfrost seems to dma_resv_lock only in panfrost_job_push, which looks clean.
v3d holds dma_resv locks in the tail of its v3d_submit_cl_ioctl(), copying from/to userspace happens all in v3d_lookup_bos which is outside of the critical section.
vmwgfx has a bunch of ioctls that do their own copy_*_user:
- vmw_execbuf_process: First this does some copies in vmw_execbuf_cmdbuf() and also in the vmw_execbuf_process() itself. Then comes the usual ttm reserve/validate sequence, then actual submission/fencing, then unreserving, and finally some more copy_to_user in vmw_execbuf_copy_fence_user. Glossing over tons of details, but looks all safe.
- vmw_fence_event_ioctl: No ttm_reserve/dma_resv_lock anywhere to be seen, seems to only create a fence and copy it out.
- a pile of smaller ioctl in vmwgfx_ioctl.c, no reservations to be found there.
Summary: vmwgfx seems to be fine too.
virtio: There's virtio_gpu_execbuffer_ioctl, which does all the copying from userspace before even looking up objects through their handles, so safe. Plus the getparam/getcaps ioctl, also both safe.
qxl only has qxl_execbuffer_ioctl, which calls into qxl_process_single_command. There's a lovely comment before the __copy_from_user_inatomic that the slowpath should be copied from i915, but I guess that never happened. Try not to be unlucky and get your CS data evicted between when it's written and the kernel tries to read it. The only other copy_from_user is for relocs, but those are done before qxl_release_reserve_list(), which seems to be the only thing reserving buffers (in the ttm/dma_resv sense) in that code. So looks safe.
A debugfs file in nouveau_debugfs_pstate_set() and the usif ioctl in usif_ioctl() look safe. nouveau_gem_ioctl_pushbuf() otoh breaks this everywhere and needs to be fixed up.
v2: Thomas pointed at that vmwgfx calls dma_resv_init while it holds a dma_resv lock of a different object already. Christian mentioned that ttm core does this too for ghost objects. intel-gfx-ci highlighted that i915 has similar issues.
Unfortunately we can't do this in the usual module init functions, because kernel threads don't have an ->mm - we have to wait around for some user thread to do this.
Solution is to spawn a worker (but only once). It's horrible, but it works.
v3: We can allocate mm! (Chris). Horrible worker hack out, clean initcall solution in.
v4: Annotate with __init (Rob Herring)
Cc: Rob Herring robh@kernel.org Cc: Alex Deucher alexander.deucher@amd.com Cc: Christian König christian.koenig@amd.com Cc: Chris Wilson chris@chris-wilson.co.uk Cc: Thomas Zimmermann tzimmermann@suse.de Cc: Rob Herring robh@kernel.org Cc: Tomeu Vizoso tomeu.vizoso@collabora.com Cc: Eric Anholt eric@anholt.net Cc: Dave Airlie airlied@redhat.com Cc: Gerd Hoffmann kraxel@redhat.com Cc: Ben Skeggs bskeggs@redhat.com Cc: "VMware Graphics" linux-graphics-maintainer@vmware.com Cc: Thomas Hellstrom thellstrom@vmware.com Reviewed-by: Christian König christian.koenig@amd.com Reviewed-by: Chris Wilson chris@chris-wilson.co.uk Tested-by: Chris Wilson chris@chris-wilson.co.uk Signed-off-by: Daniel Vetter daniel.vetter@intel.com
What's holding you back to commit that?
The nouveau patch, can't push this one without also fixing nouveau :-/ -Daniel
Christian.
drivers/dma-buf/dma-resv.c | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index 709002515550..a05ff542be22 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -34,6 +34,7 @@
#include <linux/dma-resv.h> #include <linux/export.h> +#include <linux/sched/mm.h>
/**
- DOC: Reservation Object Overview
@@ -95,6 +96,29 @@ static void dma_resv_list_free(struct dma_resv_list *list) kfree_rcu(list, rcu); }
+#if IS_ENABLED(CONFIG_LOCKDEP) +static void __init dma_resv_lockdep(void) +{
- struct mm_struct *mm = mm_alloc();
- struct dma_resv obj;
- if (!mm)
return;
- dma_resv_init(&obj);
- down_read(&mm->mmap_sem);
- ww_mutex_lock(&obj.lock, NULL);
- fs_reclaim_acquire(GFP_KERNEL);
- fs_reclaim_release(GFP_KERNEL);
- ww_mutex_unlock(&obj.lock);
- up_read(&mm->mmap_sem);
- mmput(mm);
+} +subsys_initcall(dma_resv_lockdep); +#endif
- /**
- dma_resv_init - initialize a reservation object
- @obj: the reservation object
On 04/11/2019 17:37, Daniel Vetter wrote:
Full audit of everyone:
i915, radeon, amdgpu should be clean per their maintainers.
vram helpers should be fine, they don't do command submission, so really no business holding struct_mutex while doing copy_*_user. But I haven't checked them all.
panfrost seems to dma_resv_lock only in panfrost_job_push, which looks clean.
v3d holds dma_resv locks in the tail of its v3d_submit_cl_ioctl(), copying from/to userspace happens all in v3d_lookup_bos which is outside of the critical section.
vmwgfx has a bunch of ioctls that do their own copy_*_user:
- vmw_execbuf_process: First this does some copies in vmw_execbuf_cmdbuf() and also in the vmw_execbuf_process() itself. Then comes the usual ttm reserve/validate sequence, then actual submission/fencing, then unreserving, and finally some more copy_to_user in vmw_execbuf_copy_fence_user. Glossing over tons of details, but looks all safe.
- vmw_fence_event_ioctl: No ttm_reserve/dma_resv_lock anywhere to be seen, seems to only create a fence and copy it out.
- a pile of smaller ioctl in vmwgfx_ioctl.c, no reservations to be found there.
Summary: vmwgfx seems to be fine too.
virtio: There's virtio_gpu_execbuffer_ioctl, which does all the copying from userspace before even looking up objects through their handles, so safe. Plus the getparam/getcaps ioctl, also both safe.
qxl only has qxl_execbuffer_ioctl, which calls into qxl_process_single_command. There's a lovely comment before the __copy_from_user_inatomic that the slowpath should be copied from i915, but I guess that never happened. Try not to be unlucky and get your CS data evicted between when it's written and the kernel tries to read it. The only other copy_from_user is for relocs, but those are done before qxl_release_reserve_list(), which seems to be the only thing reserving buffers (in the ttm/dma_resv sense) in that code. So looks safe.
A debugfs file in nouveau_debugfs_pstate_set() and the usif ioctl in usif_ioctl() look safe. nouveau_gem_ioctl_pushbuf() otoh breaks this everywhere and needs to be fixed up.
v2: Thomas pointed at that vmwgfx calls dma_resv_init while it holds a dma_resv lock of a different object already. Christian mentioned that ttm core does this too for ghost objects. intel-gfx-ci highlighted that i915 has similar issues.
Unfortunately we can't do this in the usual module init functions, because kernel threads don't have an ->mm - we have to wait around for some user thread to do this.
Solution is to spawn a worker (but only once). It's horrible, but it works.
v3: We can allocate mm! (Chris). Horrible worker hack out, clean initcall solution in.
v4: Annotate with __init (Rob Herring)
Cc: Rob Herring robh@kernel.org Cc: Alex Deucher alexander.deucher@amd.com Cc: Christian König christian.koenig@amd.com Cc: Chris Wilson chris@chris-wilson.co.uk Cc: Thomas Zimmermann tzimmermann@suse.de Cc: Rob Herring robh@kernel.org Cc: Tomeu Vizoso tomeu.vizoso@collabora.com Cc: Eric Anholt eric@anholt.net Cc: Dave Airlie airlied@redhat.com Cc: Gerd Hoffmann kraxel@redhat.com Cc: Ben Skeggs bskeggs@redhat.com Cc: "VMware Graphics" linux-graphics-maintainer@vmware.com Cc: Thomas Hellstrom thellstrom@vmware.com Reviewed-by: Christian König christian.koenig@amd.com Reviewed-by: Chris Wilson chris@chris-wilson.co.uk Tested-by: Chris Wilson chris@chris-wilson.co.uk Signed-off-by: Daniel Vetter daniel.vetter@intel.com
drivers/dma-buf/dma-resv.c | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index 709002515550..a05ff542be22 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -34,6 +34,7 @@
#include <linux/dma-resv.h> #include <linux/export.h> +#include <linux/sched/mm.h>
/**
- DOC: Reservation Object Overview
@@ -95,6 +96,29 @@ static void dma_resv_list_free(struct dma_resv_list *list) kfree_rcu(list, rcu); }
+#if IS_ENABLED(CONFIG_LOCKDEP) +static void __init dma_resv_lockdep(void) +{
- struct mm_struct *mm = mm_alloc();
- struct dma_resv obj;
- if (!mm)
return;
- dma_resv_init(&obj);
- down_read(&mm->mmap_sem);
- ww_mutex_lock(&obj.lock, NULL);
- fs_reclaim_acquire(GFP_KERNEL);
- fs_reclaim_release(GFP_KERNEL);
- ww_mutex_unlock(&obj.lock);
- up_read(&mm->mmap_sem);
Nit: trailing whitespace
- mmput(mm);
+} +subsys_initcall(dma_resv_lockdep);
This expects a function returning int, but dma_resv_lockdep() is void. Causing:
drivers/dma-buf/dma-resv.c:119:17: error: initialization of ‘initcall_t’ {aka ‘int (*)(void)’} from incompatible pointer type ‘void (*)(void)’ [-Werror=incompatible-pointer-types] subsys_initcall(dma_resv_lockdep);
The below fixes it for me.
Steve
----8<----
From d07ea81611ed6e4fb8cc290f42d23dbcca2da2f8 Mon Sep 17 00:00:00 2001
From: Steven Price steven.price@arm.com
Date: Mon, 11 Nov 2019 13:07:19 +0000
Subject: [PATCH] dma_resv: Correct return type of dma_resv_lockdep()
subsys_initcall() expects a function which returns 'int'. Fix dma_resv_lockdep() so it returns an 'int' error code.
Fixes: b2a8116e2592 ("dma_resv: prime lockdep annotations")
Signed-off-by: Steven Price steven.price@arm.com
---
 drivers/dma-buf/dma-resv.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index a05ff542be22..9918a6e5cf91 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -97,13 +97,13 @@ static void dma_resv_list_free(struct dma_resv_list *list)
 }

 #if IS_ENABLED(CONFIG_LOCKDEP)
-static void __init dma_resv_lockdep(void)
+static int __init dma_resv_lockdep(void)
 {
 	struct mm_struct *mm = mm_alloc();
 	struct dma_resv obj;

 	if (!mm)
-		return;
+		return -ENOMEM;

 	dma_resv_init(&obj);

@@ -115,6 +115,8 @@ static void __init dma_resv_lockdep(void)
 	up_read(&mm->mmap_sem);

 	mmput(mm);
+
+	return 0;
 }
 subsys_initcall(dma_resv_lockdep);
 #endif
On Mon, Nov 11, 2019 at 2:11 PM Steven Price steven.price@arm.com wrote:
On 04/11/2019 17:37, Daniel Vetter wrote:
Full audit of everyone:
i915, radeon, amdgpu should be clean per their maintainers.
vram helpers should be fine, they don't do command submission, so really no business holding struct_mutex while doing copy_*_user. But I haven't checked them all.
panfrost seems to dma_resv_lock only in panfrost_job_push, which looks clean.
v3d holds dma_resv locks in the tail of its v3d_submit_cl_ioctl(), copying from/to userspace happens all in v3d_lookup_bos which is outside of the critical section.
vmwgfx has a bunch of ioctls that do their own copy_*_user:
- vmw_execbuf_process: First this does some copies in vmw_execbuf_cmdbuf() and also in the vmw_execbuf_process() itself. Then comes the usual ttm reserve/validate sequence, then actual submission/fencing, then unreserving, and finally some more copy_to_user in vmw_execbuf_copy_fence_user. Glossing over tons of details, but looks all safe.
- vmw_fence_event_ioctl: No ttm_reserve/dma_resv_lock anywhere to be seen, seems to only create a fence and copy it out.
- a pile of smaller ioctl in vmwgfx_ioctl.c, no reservations to be found there.
Summary: vmwgfx seems to be fine too.
virtio: There's virtio_gpu_execbuffer_ioctl, which does all the copying from userspace before even looking up objects through their handles, so safe. Plus the getparam/getcaps ioctl, also both safe.
qxl only has qxl_execbuffer_ioctl, which calls into qxl_process_single_command. There's a lovely comment before the __copy_from_user_inatomic that the slowpath should be copied from i915, but I guess that never happened. Try not to be unlucky and get your CS data evicted between when it's written and the kernel tries to read it. The only other copy_from_user is for relocs, but those are done before qxl_release_reserve_list(), which seems to be the only thing reserving buffers (in the ttm/dma_resv sense) in that code. So looks safe.
A debugfs file in nouveau_debugfs_pstate_set() and the usif ioctl in usif_ioctl() look safe. nouveau_gem_ioctl_pushbuf() otoh breaks this everywhere and needs to be fixed up.
v2: Thomas pointed at that vmwgfx calls dma_resv_init while it holds a dma_resv lock of a different object already. Christian mentioned that ttm core does this too for ghost objects. intel-gfx-ci highlighted that i915 has similar issues.
Unfortunately we can't do this in the usual module init functions, because kernel threads don't have an ->mm - we have to wait around for some user thread to do this.
Solution is to spawn a worker (but only once). It's horrible, but it works.
v3: We can allocate mm! (Chris). Horrible worker hack out, clean initcall solution in.
v4: Annotate with __init (Rob Herring)
Cc: Rob Herring robh@kernel.org Cc: Alex Deucher alexander.deucher@amd.com Cc: Christian König christian.koenig@amd.com Cc: Chris Wilson chris@chris-wilson.co.uk Cc: Thomas Zimmermann tzimmermann@suse.de Cc: Rob Herring robh@kernel.org Cc: Tomeu Vizoso tomeu.vizoso@collabora.com Cc: Eric Anholt eric@anholt.net Cc: Dave Airlie airlied@redhat.com Cc: Gerd Hoffmann kraxel@redhat.com Cc: Ben Skeggs bskeggs@redhat.com Cc: "VMware Graphics" linux-graphics-maintainer@vmware.com Cc: Thomas Hellstrom thellstrom@vmware.com Reviewed-by: Christian König christian.koenig@amd.com Reviewed-by: Chris Wilson chris@chris-wilson.co.uk Tested-by: Chris Wilson chris@chris-wilson.co.uk Signed-off-by: Daniel Vetter daniel.vetter@intel.com
drivers/dma-buf/dma-resv.c | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index 709002515550..a05ff542be22 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -34,6 +34,7 @@
#include <linux/dma-resv.h> #include <linux/export.h> +#include <linux/sched/mm.h>
/**
- DOC: Reservation Object Overview
@@ -95,6 +96,29 @@ static void dma_resv_list_free(struct dma_resv_list *list) kfree_rcu(list, rcu); }
+#if IS_ENABLED(CONFIG_LOCKDEP) +static void __init dma_resv_lockdep(void) +{
struct mm_struct *mm = mm_alloc();
struct dma_resv obj;
if (!mm)
return;
dma_resv_init(&obj);
down_read(&mm->mmap_sem);
ww_mutex_lock(&obj.lock, NULL);
fs_reclaim_acquire(GFP_KERNEL);
fs_reclaim_release(GFP_KERNEL);
ww_mutex_unlock(&obj.lock);
up_read(&mm->mmap_sem);
Nit: trailing whitespace
mmput(mm);
+} +subsys_initcall(dma_resv_lockdep);
This expects a function returning int, but dma_resv_lockdep() is void. Causing:
drivers/dma-buf/dma-resv.c:119:17: error: initialization of ‘initcall_t’ {aka ‘int (*)(void)’} from incompatible pointer type ‘void (*)(void)’ [-Werror=incompatible-pointer-types] subsys_initcall(dma_resv_lockdep);
The below fixes it for me.
Uh, so _that_ was what the 0day thing was all about, I totally misread that completely. Thanks for the patch.
Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch
Aside, do you need commit rights for pushing this kind of stuff? -Daniel
Steve
----8<---- From d07ea81611ed6e4fb8cc290f42d23dbcca2da2f8 Mon Sep 17 00:00:00 2001 From: Steven Price steven.price@arm.com Date: Mon, 11 Nov 2019 13:07:19 +0000 Subject: [PATCH] dma_resv: Correct return type of dma_resv_lockdep()
subsys_initcall() expects a function which returns 'int'. Fix dma_resv_lockdep() so it returns an 'int' error code.
Fixes: b2a8116e2592 ("dma_resv: prime lockdep annotations") Signed-off-by: Steven Price steven.price@arm.com
drivers/dma-buf/dma-resv.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index a05ff542be22..9918a6e5cf91 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -97,13 +97,13 @@ static void dma_resv_list_free(struct dma_resv_list *list) }
#if IS_ENABLED(CONFIG_LOCKDEP) -static void __init dma_resv_lockdep(void) +static int __init dma_resv_lockdep(void) { struct mm_struct *mm = mm_alloc(); struct dma_resv obj;
 	if (!mm)
-		return;
+		return -ENOMEM;

 	dma_resv_init(&obj);

@@ -115,6 +115,8 @@ static void __init dma_resv_lockdep(void)
 	up_read(&mm->mmap_sem);

 	mmput(mm);
+
+	return 0;
 }
 subsys_initcall(dma_resv_lockdep);
#endif
2.20.1
On 11/11/2019 15:42, Daniel Vetter wrote:
On Mon, Nov 11, 2019 at 2:11 PM Steven Price steven.price@arm.com wrote:
On 04/11/2019 17:37, Daniel Vetter wrote:
Full audit of everyone:
i915, radeon, amdgpu should be clean per their maintainers.
vram helpers should be fine, they don't do command submission, so really no business holding struct_mutex while doing copy_*_user. But I haven't checked them all.
panfrost seems to dma_resv_lock only in panfrost_job_push, which looks clean.
v3d holds dma_resv locks in the tail of its v3d_submit_cl_ioctl(), copying from/to userspace happens all in v3d_lookup_bos which is outside of the critical section.
vmwgfx has a bunch of ioctls that do their own copy_*_user:
- vmw_execbuf_process: First this does some copies in vmw_execbuf_cmdbuf() and also in the vmw_execbuf_process() itself. Then comes the usual ttm reserve/validate sequence, then actual submission/fencing, then unreserving, and finally some more copy_to_user in vmw_execbuf_copy_fence_user. Glossing over tons of details, but looks all safe.
- vmw_fence_event_ioctl: No ttm_reserve/dma_resv_lock anywhere to be seen, seems to only create a fence and copy it out.
- a pile of smaller ioctl in vmwgfx_ioctl.c, no reservations to be found there.
Summary: vmwgfx seems to be fine too.
virtio: There's virtio_gpu_execbuffer_ioctl, which does all the copying from userspace before even looking up objects through their handles, so safe. Plus the getparam/getcaps ioctl, also both safe.
qxl only has qxl_execbuffer_ioctl, which calls into qxl_process_single_command. There's a lovely comment before the __copy_from_user_inatomic that the slowpath should be copied from i915, but I guess that never happened. Try not to be unlucky and get your CS data evicted between when it's written and the kernel tries to read it. The only other copy_from_user is for relocs, but those are done before qxl_release_reserve_list(), which seems to be the only thing reserving buffers (in the ttm/dma_resv sense) in that code. So looks safe.
A debugfs file in nouveau_debugfs_pstate_set() and the usif ioctl in usif_ioctl() look safe. nouveau_gem_ioctl_pushbuf() otoh breaks this everywhere and needs to be fixed up.
v2: Thomas pointed at that vmwgfx calls dma_resv_init while it holds a dma_resv lock of a different object already. Christian mentioned that ttm core does this too for ghost objects. intel-gfx-ci highlighted that i915 has similar issues.
Unfortunately we can't do this in the usual module init functions, because kernel threads don't have an ->mm - we have to wait around for some user thread to do this.
Solution is to spawn a worker (but only once). It's horrible, but it works.
v3: We can allocate mm! (Chris). Horrible worker hack out, clean initcall solution in.
v4: Annotate with __init (Rob Herring)
Cc: Rob Herring robh@kernel.org Cc: Alex Deucher alexander.deucher@amd.com Cc: Christian König christian.koenig@amd.com Cc: Chris Wilson chris@chris-wilson.co.uk Cc: Thomas Zimmermann tzimmermann@suse.de Cc: Rob Herring robh@kernel.org Cc: Tomeu Vizoso tomeu.vizoso@collabora.com Cc: Eric Anholt eric@anholt.net Cc: Dave Airlie airlied@redhat.com Cc: Gerd Hoffmann kraxel@redhat.com Cc: Ben Skeggs bskeggs@redhat.com Cc: "VMware Graphics" linux-graphics-maintainer@vmware.com Cc: Thomas Hellstrom thellstrom@vmware.com Reviewed-by: Christian König christian.koenig@amd.com Reviewed-by: Chris Wilson chris@chris-wilson.co.uk Tested-by: Chris Wilson chris@chris-wilson.co.uk Signed-off-by: Daniel Vetter daniel.vetter@intel.com
drivers/dma-buf/dma-resv.c | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index 709002515550..a05ff542be22 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -34,6 +34,7 @@
#include <linux/dma-resv.h> #include <linux/export.h> +#include <linux/sched/mm.h>
/**
- DOC: Reservation Object Overview
@@ -95,6 +96,29 @@ static void dma_resv_list_free(struct dma_resv_list *list) kfree_rcu(list, rcu); }
+#if IS_ENABLED(CONFIG_LOCKDEP) +static void __init dma_resv_lockdep(void) +{
struct mm_struct *mm = mm_alloc();
struct dma_resv obj;
if (!mm)
return;
dma_resv_init(&obj);
down_read(&mm->mmap_sem);
ww_mutex_lock(&obj.lock, NULL);
fs_reclaim_acquire(GFP_KERNEL);
fs_reclaim_release(GFP_KERNEL);
ww_mutex_unlock(&obj.lock);
up_read(&mm->mmap_sem);
Nit: trailing whitespace
mmput(mm);
+} +subsys_initcall(dma_resv_lockdep);
This expects a function returning int, but dma_resv_lockdep() is void. Causing:
drivers/dma-buf/dma-resv.c:119:17: error: initialization of ‘initcall_t’ {aka ‘int (*)(void)’} from incompatible pointer type ‘void (*)(void)’ [-Werror=incompatible-pointer-types] subsys_initcall(dma_resv_lockdep);
The below fixes it for me.
Uh, so _that_ was what the 0day thing was all about, I totally misread that completely. Thanks for the patch.
Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch
Aside, do you need commit rights for pushing this kind of stuff?
I guess it's about time I got round to requesting that:
https://gitlab.freedesktop.org/freedesktop/freedesktop/issues/208
Thanks,
Steve
On Thu, Nov 14, 2019 at 11:50:28AM +0000, Steven Price wrote:
On 11/11/2019 15:42, Daniel Vetter wrote:
On Mon, Nov 11, 2019 at 2:11 PM Steven Price steven.price@arm.com wrote:
On 04/11/2019 17:37, Daniel Vetter wrote:
Full audit of everyone:
i915, radeon, amdgpu should be clean per their maintainers.
vram helpers should be fine, they don't do command submission, so really no business holding struct_mutex while doing copy_*_user. But I haven't checked them all.
panfrost seems to dma_resv_lock only in panfrost_job_push, which looks clean.
v3d holds dma_resv locks in the tail of its v3d_submit_cl_ioctl(), copying from/to userspace happens all in v3d_lookup_bos which is outside of the critical section.
vmwgfx has a bunch of ioctls that do their own copy_*_user:
- vmw_execbuf_process: First this does some copies in vmw_execbuf_cmdbuf() and also in the vmw_execbuf_process() itself. Then comes the usual ttm reserve/validate sequence, then actual submission/fencing, then unreserving, and finally some more copy_to_user in vmw_execbuf_copy_fence_user. Glossing over tons of details, but looks all safe.
- vmw_fence_event_ioctl: No ttm_reserve/dma_resv_lock anywhere to be seen, seems to only create a fence and copy it out.
- a pile of smaller ioctl in vmwgfx_ioctl.c, no reservations to be found there.
Summary: vmwgfx seems to be fine too.
virtio: There's virtio_gpu_execbuffer_ioctl, which does all the copying from userspace before even looking up objects through their handles, so safe. Plus the getparam/getcaps ioctl, also both safe.
qxl only has qxl_execbuffer_ioctl, which calls into qxl_process_single_command. There's a lovely comment before the __copy_from_user_inatomic that the slowpath should be copied from i915, but I guess that never happened. Try not to be unlucky and get your CS data evicted between when it's written and the kernel tries to read it. The only other copy_from_user is for relocs, but those are done before qxl_release_reserve_list(), which seems to be the only thing reserving buffers (in the ttm/dma_resv sense) in that code. So looks safe.
A debugfs file in nouveau_debugfs_pstate_set() and the usif ioctl in usif_ioctl() look safe. nouveau_gem_ioctl_pushbuf() otoh breaks this everywhere and needs to be fixed up.
v2: Thomas pointed at that vmwgfx calls dma_resv_init while it holds a dma_resv lock of a different object already. Christian mentioned that ttm core does this too for ghost objects. intel-gfx-ci highlighted that i915 has similar issues.
Unfortunately we can't do this in the usual module init functions, because kernel threads don't have an ->mm - we have to wait around for some user thread to do this.
Solution is to spawn a worker (but only once). It's horrible, but it works.
v3: We can allocate mm! (Chris). Horrible worker hack out, clean initcall solution in.
v4: Annotate with __init (Rob Herring)
Cc: Rob Herring robh@kernel.org Cc: Alex Deucher alexander.deucher@amd.com Cc: Christian König christian.koenig@amd.com Cc: Chris Wilson chris@chris-wilson.co.uk Cc: Thomas Zimmermann tzimmermann@suse.de Cc: Rob Herring robh@kernel.org Cc: Tomeu Vizoso tomeu.vizoso@collabora.com Cc: Eric Anholt eric@anholt.net Cc: Dave Airlie airlied@redhat.com Cc: Gerd Hoffmann kraxel@redhat.com Cc: Ben Skeggs bskeggs@redhat.com Cc: "VMware Graphics" linux-graphics-maintainer@vmware.com Cc: Thomas Hellstrom thellstrom@vmware.com Reviewed-by: Christian König christian.koenig@amd.com Reviewed-by: Chris Wilson chris@chris-wilson.co.uk Tested-by: Chris Wilson chris@chris-wilson.co.uk Signed-off-by: Daniel Vetter daniel.vetter@intel.com
drivers/dma-buf/dma-resv.c | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index 709002515550..a05ff542be22 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -34,6 +34,7 @@
#include <linux/dma-resv.h> #include <linux/export.h> +#include <linux/sched/mm.h>
/**
- DOC: Reservation Object Overview
@@ -95,6 +96,29 @@ static void dma_resv_list_free(struct dma_resv_list *list) kfree_rcu(list, rcu); }
+#if IS_ENABLED(CONFIG_LOCKDEP) +static void __init dma_resv_lockdep(void) +{
struct mm_struct *mm = mm_alloc();
struct dma_resv obj;
if (!mm)
return;
dma_resv_init(&obj);
down_read(&mm->mmap_sem);
ww_mutex_lock(&obj.lock, NULL);
fs_reclaim_acquire(GFP_KERNEL);
fs_reclaim_release(GFP_KERNEL);
ww_mutex_unlock(&obj.lock);
up_read(&mm->mmap_sem);
Nit: trailing whitespace
mmput(mm);
+} +subsys_initcall(dma_resv_lockdep);
This expects a function returning int, but dma_resv_lockdep() is void. Causing:
drivers/dma-buf/dma-resv.c:119:17: error: initialization of ‘initcall_t’ {aka ‘int (*)(void)’} from incompatible pointer type ‘void (*)(void)’ [-Werror=incompatible-pointer-types] subsys_initcall(dma_resv_lockdep);
The below fixes it for me.
Uh, so _that_ was what the 0day thing was all about, I totally misread that completely. Thanks for the patch.
Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch
Aside, do you need commit rights for pushing this kind of stuff?
I guess it's about time I got round to requesting that:
https://gitlab.freedesktop.org/freedesktop/freedesktop/issues/208
Since this seems a bit stuck in processing I went ahead and merged your fix meanwhile.
Thanks, Daniel