Hi all,
Quick new version since the previous one was a bit too broken:
- dropped the bug-on patch to avoid breaking amdgpu's gpu reset failure games
- another attempt at splitting job_init/arm, hopefully we're getting there.
Note that Christian has brought up a bikeshed on the new functions to add dependencies to drm_sched_jobs. I'm happy to repaint, if there's some kind of consensus on what it should be.
Testing and review very much welcome, as usual.
Cheers, Daniel
Daniel Vetter (18):
  drm/sched: Split drm_sched_job_init
  drm/sched: Barriers are needed for entity->last_scheduled
  drm/sched: Add dependency tracking
  drm/sched: drop entity parameter from drm_sched_push_job
  drm/sched: improve docs around drm_sched_entity
  drm/panfrost: use scheduler dependency tracking
  drm/lima: use scheduler dependency tracking
  drm/v3d: Move drm_sched_job_init to v3d_job_init
  drm/v3d: Use scheduler dependency handling
  drm/etnaviv: Use scheduler dependency handling
  drm/gem: Delete gem array fencing helpers
  drm/sched: Don't store self-dependencies
  drm/sched: Check locking in drm_sched_job_await_implicit
  drm/msm: Don't break exclusive fence ordering
  drm/etnaviv: Don't break exclusive fence ordering
  drm/i915: delete exclude argument from i915_sw_fence_await_reservation
  drm/i915: Don't break exclusive fence ordering
  dma-resv: Give the docs a do-over

 Documentation/gpu/drm-mm.rst                 |   3 +
 drivers/dma-buf/dma-resv.c                   |  24 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c       |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c      |   4 +-
 drivers/gpu/drm/drm_gem.c                    |  96 ---------
 drivers/gpu/drm/etnaviv/etnaviv_gem.h        |   5 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c |  64 +++---
 drivers/gpu/drm/etnaviv/etnaviv_sched.c      |  65 +-----
 drivers/gpu/drm/etnaviv/etnaviv_sched.h      |   3 +-
 drivers/gpu/drm/i915/display/intel_display.c |   4 +-
 drivers/gpu/drm/i915/gem/i915_gem_clflush.c  |   2 +-
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c   |   8 +-
 drivers/gpu/drm/i915/i915_sw_fence.c         |   6 +-
 drivers/gpu/drm/i915/i915_sw_fence.h         |   1 -
 drivers/gpu/drm/lima/lima_gem.c              |   7 +-
 drivers/gpu/drm/lima/lima_sched.c            |  28 +--
 drivers/gpu/drm/lima/lima_sched.h            |   6 +-
 drivers/gpu/drm/msm/msm_gem_submit.c         |   3 +-
 drivers/gpu/drm/panfrost/panfrost_drv.c      |  16 +-
 drivers/gpu/drm/panfrost/panfrost_job.c      |  39 +---
 drivers/gpu/drm/panfrost/panfrost_job.h      |   5 +-
 drivers/gpu/drm/scheduler/sched_entity.c     | 140 +++++++------
 drivers/gpu/drm/scheduler/sched_fence.c      |  19 +-
 drivers/gpu/drm/scheduler/sched_main.c       | 181 +++++++++++++++--
 drivers/gpu/drm/v3d/v3d_drv.h                |   6 +-
 drivers/gpu/drm/v3d/v3d_gem.c                | 115 +++++------
 drivers/gpu/drm/v3d/v3d_sched.c              |  44 +---
 include/drm/drm_gem.h                        |   5 -
 include/drm/gpu_scheduler.h                  | 186 ++++++++++++++----
 include/linux/dma-buf.h                      |   7 +
 include/linux/dma-resv.h                     | 104 +++++++++-
 31 files changed, 672 insertions(+), 528 deletions(-)
This is a very confusingly named function, because it does not just init an object, it also arms it and provides a point of no return for pushing a job into the scheduler. It would be nice if that's a bit clearer in the interface.
But the real reason is that I want to push the dependency tracking helpers into the scheduler code, and that means drm_sched_job_init must be called a lot earlier, without arming the job.
v2:
- don't change .gitignore (Steven)
- don't forget v3d (Emma)
v3: Emma noticed that I leak the memory allocated in drm_sched_job_init if we bail out before the point of no return in subsequent driver patches. To be able to fix this, change drm_sched_job_cleanup() so it can handle being called both before and after drm_sched_job_arm().
Also improve the kerneldoc for this.
v4:
- Fix the drm_sched_job_cleanup logic, I inverted the booleans, as usual (Melissa)
- Christian pointed out that drm_sched_entity_select_rq() also needs to be moved into drm_sched_job_arm, which made me realize that the job->id definitely needs to be moved too.
Shuffle things to fit between job_init and job_arm.
v5: Reshuffle the split between init/arm once more, since amdgpu abuses drm_sched.ready to signal gpu reset failures. Also document this somewhat. (Christian)
Cc: Melissa Wen melissa.srw@gmail.com
Acked-by: Steven Price steven.price@arm.com (v2)
Signed-off-by: Daniel Vetter daniel.vetter@intel.com
Cc: Lucas Stach l.stach@pengutronix.de
Cc: Russell King linux+etnaviv@armlinux.org.uk
Cc: Christian Gmeiner christian.gmeiner@gmail.com
Cc: Qiang Yu yuq825@gmail.com
Cc: Rob Herring robh@kernel.org
Cc: Tomeu Vizoso tomeu.vizoso@collabora.com
Cc: Steven Price steven.price@arm.com
Cc: Alyssa Rosenzweig alyssa.rosenzweig@collabora.com
Cc: David Airlie airlied@linux.ie
Cc: Daniel Vetter daniel@ffwll.ch
Cc: Sumit Semwal sumit.semwal@linaro.org
Cc: "Christian König" christian.koenig@amd.com
Cc: Masahiro Yamada masahiroy@kernel.org
Cc: Kees Cook keescook@chromium.org
Cc: Adam Borowski kilobyte@angband.pl
Cc: Nick Terrell terrelln@fb.com
Cc: Mauro Carvalho Chehab mchehab+huawei@kernel.org
Cc: Paul Menzel pmenzel@molgen.mpg.de
Cc: Sami Tolvanen samitolvanen@google.com
Cc: Viresh Kumar viresh.kumar@linaro.org
Cc: Alex Deucher alexander.deucher@amd.com
Cc: Dave Airlie airlied@redhat.com
Cc: Nirmoy Das nirmoy.das@amd.com
Cc: Deepak R Varma mh12gx2825@gmail.com
Cc: Lee Jones lee.jones@linaro.org
Cc: Kevin Wang kevin1.wang@amd.com
Cc: Chen Li chenli@uniontech.com
Cc: Luben Tuikov luben.tuikov@amd.com
Cc: "Marek Olšák" marek.olsak@amd.com
Cc: Dennis Li Dennis.Li@amd.com
Cc: Maarten Lankhorst maarten.lankhorst@linux.intel.com
Cc: Andrey Grodzovsky andrey.grodzovsky@amd.com
Cc: Sonny Jiang sonny.jiang@amd.com
Cc: Boris Brezillon boris.brezillon@collabora.com
Cc: Tian Tao tiantao6@hisilicon.com
Cc: etnaviv@lists.freedesktop.org
Cc: lima@lists.freedesktop.org
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
Cc: Emma Anholt emma@anholt.net
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c   |  2 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  2 +
 drivers/gpu/drm/etnaviv/etnaviv_sched.c  |  2 +
 drivers/gpu/drm/lima/lima_sched.c        |  2 +
 drivers/gpu/drm/panfrost/panfrost_job.c  |  2 +
 drivers/gpu/drm/scheduler/sched_entity.c |  6 +--
 drivers/gpu/drm/scheduler/sched_fence.c  | 19 ++++---
 drivers/gpu/drm/scheduler/sched_main.c   | 69 ++++++++++++++++++++----
 drivers/gpu/drm/v3d/v3d_gem.c            |  2 +
 include/drm/gpu_scheduler.h              |  7 ++-
 10 files changed, 91 insertions(+), 22 deletions(-)
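For reference, the driver-side flow this split is aiming for looks roughly like this (a sketch only; "foo_job" and foo_job_prepare() are made up, the actual driver changes are in the diff below and the follow-up patches):

#include <drm/gpu_scheduler.h>
#include <linux/dma-fence.h>

struct foo_job {
	struct drm_sched_job base;
	struct dma_fence *done_fence;
};

/* stands in for whatever can still fail after job_init (BO lookups, deps, ...) */
static int foo_job_prepare(struct foo_job *job);

static int foo_job_push(struct foo_job *job, struct drm_sched_entity *entity)
{
	int ret;

	/* Only allocates job->base.s_fence; nothing is committed to the scheduler yet. */
	ret = drm_sched_job_init(&job->base, entity, NULL);
	if (ret)
		return ret;

	ret = foo_job_prepare(job);
	if (ret) {
		/* Before arm this just frees the not-yet-initialized fence. */
		drm_sched_job_cleanup(&job->base);
		return ret;
	}

	/* Point of no return: binds the job to the scheduler and initializes its fences. */
	drm_sched_job_arm(&job->base);

	/* Only now is the finished fence valid and safe to hand out. */
	job->done_fence = dma_fence_get(&job->base.s_fence->finished);

	drm_sched_entity_push_job(&job->base, entity);

	return 0;
}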
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index c5386d13eb4a..a4ec092af9a7 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -1226,6 +1226,8 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p, if (r) goto error_unlock;
+ drm_sched_job_arm(&job->base); + /* No memory allocation is allowed while holding the notifier lock. * The lock is held until amdgpu_cs_submit is finished and fence is * added to BOs. diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c index d33e6d97cc89..5ddb955d2315 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c @@ -170,6 +170,8 @@ int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity, if (r) return r;
+ drm_sched_job_arm(&job->base); + *f = dma_fence_get(&job->base.s_fence->finished); amdgpu_job_free_resources(job); drm_sched_entity_push_job(&job->base, entity); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c index feb6da1b6ceb..05f412204118 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c @@ -163,6 +163,8 @@ int etnaviv_sched_push_job(struct drm_sched_entity *sched_entity, if (ret) goto out_unlock;
+ drm_sched_job_arm(&submit->sched_job); + submit->out_fence = dma_fence_get(&submit->sched_job.s_fence->finished); submit->out_fence_id = idr_alloc_cyclic(&submit->gpu->fence_idr, submit->out_fence, 0, diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c index dba8329937a3..38f755580507 100644 --- a/drivers/gpu/drm/lima/lima_sched.c +++ b/drivers/gpu/drm/lima/lima_sched.c @@ -129,6 +129,8 @@ int lima_sched_task_init(struct lima_sched_task *task, return err; }
+ drm_sched_job_arm(&task->base); + task->num_bos = num_bos; task->vm = lima_vm_get(vm);
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c index 71a72fb50e6b..2992dc85325f 100644 --- a/drivers/gpu/drm/panfrost/panfrost_job.c +++ b/drivers/gpu/drm/panfrost/panfrost_job.c @@ -288,6 +288,8 @@ int panfrost_job_push(struct panfrost_job *job) goto unlock; }
+ drm_sched_job_arm(&job->base); + job->render_done_fence = dma_fence_get(&job->base.s_fence->finished);
ret = panfrost_acquire_object_fences(job->bos, job->bo_count, diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c index 79554aa4dbb1..f7347c284886 100644 --- a/drivers/gpu/drm/scheduler/sched_entity.c +++ b/drivers/gpu/drm/scheduler/sched_entity.c @@ -485,9 +485,9 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity) * @sched_job: job to submit * @entity: scheduler entity * - * Note: To guarantee that the order of insertion to queue matches - * the job's fence sequence number this function should be - * called with drm_sched_job_init under common lock. + * Note: To guarantee that the order of insertion to queue matches the job's + * fence sequence number this function should be called with drm_sched_job_arm() + * under common lock. * * Returns 0 for success, negative error code otherwise. */ diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c index 69de2c76731f..bcea035cf4c6 100644 --- a/drivers/gpu/drm/scheduler/sched_fence.c +++ b/drivers/gpu/drm/scheduler/sched_fence.c @@ -90,7 +90,7 @@ static const char *drm_sched_fence_get_timeline_name(struct dma_fence *f) * * Free up the fence memory after the RCU grace period. */ -static void drm_sched_fence_free(struct rcu_head *rcu) +void drm_sched_fence_free(struct rcu_head *rcu) { struct dma_fence *f = container_of(rcu, struct dma_fence, rcu); struct drm_sched_fence *fence = to_drm_sched_fence(f); @@ -152,27 +152,32 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f) } EXPORT_SYMBOL(to_drm_sched_fence);
-struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *entity, - void *owner) +struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity, + void *owner) { struct drm_sched_fence *fence = NULL; - unsigned seq;
fence = kmem_cache_zalloc(sched_fence_slab, GFP_KERNEL); if (fence == NULL) return NULL;
fence->owner = owner; - fence->sched = entity->rq->sched; spin_lock_init(&fence->lock);
+ return fence; +} + +void drm_sched_fence_init(struct drm_sched_fence *fence, + struct drm_sched_entity *entity) +{ + unsigned seq; + + fence->sched = entity->rq->sched; seq = atomic_inc_return(&entity->fence_seq); dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled, &fence->lock, entity->fence_context, seq); dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished, &fence->lock, entity->fence_context + 1, seq); - - return fence; }
module_init(drm_sched_fence_slab_init); diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index 33c414d55fab..454cb6164bdc 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -48,9 +48,11 @@ #include <linux/wait.h> #include <linux/sched.h> #include <linux/completion.h> +#include <linux/dma-resv.h> #include <uapi/linux/sched/types.h>
#include <drm/drm_print.h> +#include <drm/drm_gem.h> #include <drm/gpu_scheduler.h> #include <drm/spsc_queue.h>
@@ -569,7 +571,6 @@ EXPORT_SYMBOL(drm_sched_resubmit_jobs_ext);
/** * drm_sched_job_init - init a scheduler job - * * @job: scheduler job to init * @entity: scheduler entity to use * @owner: job owner for debugging @@ -577,27 +578,28 @@ EXPORT_SYMBOL(drm_sched_resubmit_jobs_ext); * Refer to drm_sched_entity_push_job() documentation * for locking considerations. * + * Drivers must make sure drm_sched_job_cleanup() if this function returns + * successfully, even when @job is aborted before drm_sched_job_arm() is called. + * + * WARNING: amdgpu abuses &drm_sched.ready to signal when the hardware + * has died, which can mean that there's no valid runqueue for a @entity. + * This function returns -ENOENT in this case (which probably should be -EIO as + * a more meanigful return value). + * * Returns 0 for success, negative error code otherwise. */ int drm_sched_job_init(struct drm_sched_job *job, struct drm_sched_entity *entity, void *owner) { - struct drm_gpu_scheduler *sched; - drm_sched_entity_select_rq(entity); if (!entity->rq) return -ENOENT;
- sched = entity->rq->sched; - - job->sched = sched; job->entity = entity; - job->s_priority = entity->rq - sched->sched_rq; - job->s_fence = drm_sched_fence_create(entity, owner); + job->s_fence = drm_sched_fence_alloc(entity, owner); if (!job->s_fence) return -ENOMEM; - job->id = atomic64_inc_return(&sched->job_id_count);
INIT_LIST_HEAD(&job->list);
@@ -606,13 +608,58 @@ int drm_sched_job_init(struct drm_sched_job *job, EXPORT_SYMBOL(drm_sched_job_init);
/** - * drm_sched_job_cleanup - clean up scheduler job resources + * drm_sched_job_arm - arm a scheduler job for execution + * @job: scheduler job to arm + * + * This arms a scheduler job for execution. Specifically it initializes the + * &drm_sched_job.s_fence of @job, so that it can be attached to struct dma_resv + * or other places that need to track the completion of this job. + * + * Refer to drm_sched_entity_push_job() documentation for locking + * considerations. * + * This can only be called if drm_sched_job_init() succeeded. + */ +void drm_sched_job_arm(struct drm_sched_job *job) +{ + struct drm_gpu_scheduler *sched; + struct drm_sched_entity *entity = job->entity; + + BUG_ON(!entity); + + sched = entity->rq->sched; + + job->sched = sched; + job->s_priority = entity->rq - sched->sched_rq; + job->id = atomic64_inc_return(&sched->job_id_count); + + drm_sched_fence_init(job->s_fence, job->entity); +} +EXPORT_SYMBOL(drm_sched_job_arm); + +/** + * drm_sched_job_cleanup - clean up scheduler job resources * @job: scheduler job to clean up + * + * Cleans up the resources allocated with drm_sched_job_init(). + * + * Drivers should call this from their error unwind code if @job is aborted + * before drm_sched_job_arm() is called. + * + * After that point of no return @job is committed to be executed by the + * scheduler, and this function should be called from the + * &drm_sched_backend_ops.free_job callback. */ void drm_sched_job_cleanup(struct drm_sched_job *job) { - dma_fence_put(&job->s_fence->finished); + if (kref_read(&job->s_fence->finished.refcount)) { + /* drm_sched_job_arm() has been called */ + dma_fence_put(&job->s_fence->finished); + } else { + /* aborted job before committing to run it */ + drm_sched_fence_free(&job->s_fence->finished.rcu); + } + job->s_fence = NULL; } EXPORT_SYMBOL(drm_sched_job_cleanup); diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c index 4eb354226972..5c3a99027ecd 100644 --- a/drivers/gpu/drm/v3d/v3d_gem.c +++ b/drivers/gpu/drm/v3d/v3d_gem.c @@ -475,6 +475,8 @@ v3d_push_job(struct v3d_file_priv *v3d_priv, if (ret) return ret;
+ drm_sched_job_arm(&job->base); + job->done_fence = dma_fence_get(&job->base.s_fence->finished);
/* put by scheduler job completion */ diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h index 88ae7f331bb1..83afc3aa8e2f 100644 --- a/include/drm/gpu_scheduler.h +++ b/include/drm/gpu_scheduler.h @@ -348,6 +348,7 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched); int drm_sched_job_init(struct drm_sched_job *job, struct drm_sched_entity *entity, void *owner); +void drm_sched_job_arm(struct drm_sched_job *job); void drm_sched_entity_modify_sched(struct drm_sched_entity *entity, struct drm_gpu_scheduler **sched_list, unsigned int num_sched_list); @@ -387,8 +388,12 @@ void drm_sched_entity_set_priority(struct drm_sched_entity *entity, enum drm_sched_priority priority); bool drm_sched_entity_is_ready(struct drm_sched_entity *entity);
-struct drm_sched_fence *drm_sched_fence_create( +struct drm_sched_fence *drm_sched_fence_alloc( struct drm_sched_entity *s_entity, void *owner); +void drm_sched_fence_init(struct drm_sched_fence *fence, + struct drm_sched_entity *entity); +void drm_sched_fence_free(struct rcu_head *rcu); + void drm_sched_fence_scheduled(struct drm_sched_fence *fence); void drm_sched_fence_finished(struct drm_sched_fence *fence);
On Mon, Jul 12, 2021 at 1:01 PM Daniel Vetter daniel.vetter@ffwll.ch wrote:
Ack from me for the changes I was Cced on.
Am 12.07.21 um 19:53 schrieb Daniel Vetter:
Christian pointed out that drm_sched_entity_select_rq() also needs to be moved into drm_sched_job_arm, which made me realize that the job->id definitely needs to be moved too.
As far as I can see you still have drm_sched_entity_select_rq() in drm_sched_job_init()?
Christian.
On Tue, Jul 13, 2021 at 8:40 AM Christian König christian.koenig@amd.com wrote:
Christian pointed out that drm_sched_entity_select_rq() also needs to be moved into drm_sched_job_arm, which made me realize that the job->id definitely needs to be moved too.
As far as I can see you still have drm_sched_entity_select_rq() in drm_sched_job_init()?
Yeah, it's in there again, but everything else that changes entity->rq state isn't in job_init() anymore, it's in job_arm(). I also checked the cleanup code, and we only update entity state in there, not job state, so there are no additional complications for cleanup.

Of course this is quite a bit earlier than if we do it in job_arm(), but it's also not fundamentally a new race window. Just a bigger one, so assuming the current code is correct, this should be all fine. But also, very possible I missed something else again :-)
-Daniel
It might be good enough on x86 with just READ_ONCE, but the write side should then at least be WRITE_ONCE because x86 has total store order.
It's definitely not enough on arm.
Fix this properly, which means:
- explain the need for the barrier in both places
- point at the other side in each comment
Also pull out the !sched_list case as the first check, so that the code flow is clearer.
While at it sprinkle some comments around because it was very non-obvious to me what's actually going on here and why.
Note that we really need full barriers here: at first I thought store-release and load-acquire on ->last_scheduled would be enough, but we actually require ordering between that and the queue state.
v2: Put smp_rmb() in the right place and fix up comment (Andrey)
Signed-off-by: Daniel Vetter daniel.vetter@intel.com
Cc: "Christian König" christian.koenig@amd.com
Cc: Steven Price steven.price@arm.com
Cc: Daniel Vetter daniel.vetter@ffwll.ch
Cc: Andrey Grodzovsky andrey.grodzovsky@amd.com
Cc: Lee Jones lee.jones@linaro.org
Cc: Boris Brezillon boris.brezillon@collabora.com
---
 drivers/gpu/drm/scheduler/sched_entity.c | 27 ++++++++++++++++++++++--
 1 file changed, 25 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index f7347c284886..89e3f6eaf519 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -439,8 +439,16 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
 		dma_fence_set_error(&sched_job->s_fence->finished, -ECANCELED);
 
 	dma_fence_put(entity->last_scheduled);
+
 	entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished);
 
+	/*
+	 * If the queue is empty we allow drm_sched_entity_select_rq() to
+	 * locklessly access ->last_scheduled. This only works if we set the
+	 * pointer before we dequeue and if we a write barrier here.
+	 */
+	smp_wmb();
+
 	spsc_queue_pop(&entity->job_queue);
 	return sched_job;
 }
@@ -459,10 +467,25 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
 	struct drm_gpu_scheduler *sched;
 	struct drm_sched_rq *rq;
 
-	if (spsc_queue_count(&entity->job_queue) || !entity->sched_list)
+	/* single possible engine and already selected */
+	if (!entity->sched_list)
+		return;
+
+	/* queue non-empty, stay on the same engine */
+	if (spsc_queue_count(&entity->job_queue))
 		return;
 
-	fence = READ_ONCE(entity->last_scheduled);
+	/*
+	 * Only when the queue is empty are we guaranteed that the scheduler
+	 * thread cannot change ->last_scheduled. To enforce ordering we need
+	 * a read barrier here. See drm_sched_entity_pop_job() for the other
+	 * side.
+	 */
+	smp_rmb();
+
+	fence = entity->last_scheduled;
+
+	/* stay on the same engine if the previous job hasn't finished */
 	if (fence && !dma_fence_is_signaled(fence))
 		return;
 
Am 12.07.21 um 19:53 schrieb Daniel Vetter:
dma_fence_put(entity->last_scheduled);
entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished);
/*
* If the queue is empty we allow drm_sched_entity_select_rq() to
* locklessly access ->last_scheduled. This only works if we set the
* pointer before we dequeue and if we a write barrier here.
*/
smp_wmb();
Again, conceptually those barriers should be part of the spsc_queue container and not added externally.
Regards, Christian.
On Tue, Jul 13, 2021 at 8:35 AM Christian König christian.koenig@amd.com wrote:
Again, conceptually those barriers should be part of the spsc_queue container and not added externally.
That would be an extremely unusual API. Let's assume that your queue is very dumb, and protected by a simple lock. That's about the maximum any user could expect.

But then you still need barriers here, because linux locks (spinlock, mutex) are defined to be one-way barriers: stuff that's inside is guaranteed to be done inside, but stuff outside of the locked region can leak in. They're load-acquire/store-release barriers. So not good enough.
You really need to have barriers here, and they really all need to be documented properly. And yes that's a shit-ton of work in drm/sched, because it's full of yolo lockless stuff.
The other case you could make is that this works like a wakeup queue, or similar. The rules there are:
- wake_up (i.e. pushing something into the queue) is a store-release barrier
- the woken-up side (i.e. popping an entry) is a load-acquire barrier
Which is obviously needed because otherwise you don't have coherency for the data queued up. And again not the barriers you're looking for here.
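Roughly, that contract looks like this (sketch only, with made-up names, nothing to do with the actual spsc_queue code):

#include <asm/barrier.h>

struct item {
	int payload;
};

static struct item *slot;

/* producer, i.e. the wake_up/push side */
static void publish(struct item *it, int value)
{
	it->payload = value;		/* fill in the payload first */
	smp_store_release(&slot, it);	/* then make the entry visible */
}

/* consumer, i.e. the woken-up/pop side */
static int consume(void)
{
	struct item *it = smp_load_acquire(&slot);	/* observe the entry */

	return it ? it->payload : -1;	/* payload reads are ordered after it */
}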
Either way, we'd still need the comments, because it's still lockless trickery, and every single one of that needs to have a comment on both sides to explain what's going on.
Essentially replace spsc_queue with an llist underneath, and that's the amount of barriers a data structure should provide. Anything else is asking your datastructure to paper over bugs in your users.
This is similar to how atomic_t is by default completely unordered, and users need to add barriers as needed, with comments. I think this is all to make sure people don't just write lockless algorithms because it's a cool idea, but are forced to think this all through. Which seems to not have happened very consistently for drm/sched, so I guess needs to be fixed.
I'm definitely not going to hide all that by making the spsc_queue stuff provide random unjustified barriers just because that would paper over drm/sched bugs. We need to fix the actual bugs, and preferably all of them. I've found a few, but I wasn't involved in drm/sched thus far, so the best I can do is discover them as we go. -Daniel
-- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch
Am 13.07.21 um 08:50 schrieb Daniel Vetter:
On Tue, Jul 13, 2021 at 8:35 AM Christian König christian.koenig@amd.com wrote:
Am 12.07.21 um 19:53 schrieb Daniel Vetter:
It might be good enough on x86 with just READ_ONCE, but the write side should then at least be WRITE_ONCE because x86 has total store order.
It's definitely not enough on arm.
Fix this properly, which means
- explain the need for the barrier in both places
- point at the other side in each comment
Also pull out the !sched_list case as the first check, so that the code flow is clearer.
While at it sprinkle some comments around because it was very non-obvious to me what's actually going on here and why.
Note that we really need full barriers here, at first I thought store-release and load-acquire on ->last_scheduled would be enough, but we actually require ordering between that and the queue state.
v2: Put smp_rmb() in the right place and fix up comment (Andrey)
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: "Christian König" christian.koenig@amd.com Cc: Steven Price steven.price@arm.com Cc: Daniel Vetter daniel.vetter@ffwll.ch Cc: Andrey Grodzovsky andrey.grodzovsky@amd.com Cc: Lee Jones lee.jones@linaro.org Cc: Boris Brezillon boris.brezillon@collabora.com
drivers/gpu/drm/scheduler/sched_entity.c | 27 ++++++++++++++++++++++-- 1 file changed, 25 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index f7347c284886..89e3f6eaf519 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -439,8 +439,16 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
dma_fence_set_error(&sched_job->s_fence->finished, -ECANCELED);
dma_fence_put(entity->last_scheduled);
entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished);
/*
* If the queue is empty we allow drm_sched_entity_select_rq() to
* locklessly access ->last_scheduled. This only works if we set the
* pointer before we dequeue and if we add a write barrier here.
*/
smp_wmb();
Again, conceptually those barriers should be part of the spsc_queue container and not externally.
That would be extremely unusual api. Let's assume that your queue is very dumb, and protected by a simple lock. That's about the maximum any user could expect.
But then you still need barriers here, because linux locks (spinlock, mutex) are defined to be one-way barriers: Stuff that's inside is guaranteed to be done inside, but stuff outside of the locked region can leak in. They're load-acquire/store-release barriers. So not good enough.
You really need to have barriers here, and they really all need to be documented properly. And yes that's a shit-ton of work in drm/sched, because it's full of yolo lockless stuff.
The other case you could make is that this works like a wakeup queue, or similar. The rules there are:
- wake_up (i.e. pushing something into the queue) is a store-release barrier
- the woken up side (i.e. popping an entry) is a load acquire barrier
Which is obviously needed because otherwise you don't have coherency for the data queued up. And again not the barriers you're looking for here.
Exactly that was the idea, yes.
Either way, we'd still need the comments, because it's still lockless trickery, and every single one of them needs to have a comment on both sides to explain what's going on.
Essentially replace spsc_queue with an llist underneath, and that's the amount of barriers a data structure should provide. Anything else is asking your datastructure to paper over bugs in your users.
This is similar to how atomic_t is by default completely unordered, and users need to add barriers as needed, with comments.
My main problem is as always that kernel atomics work different than userspace atomics.
I think this is all to make sure people don't just write lockless algorithms because it's a cool idea, but are forced to think this all through. Which seems to not have happened very consistently for drm/sched, so I guess needs to be fixed.
Well at least initially that was all perfectly thought through. The problem is nobody is really maintaining that stuff.
I'm definitely not going to hide all that by making the spsc_queue stuff provide random unjustified barriers just because that would paper over drm/sched bugs. We need to fix the actual bugs, and preferably all of them. I've found a few, but I wasn't involved in drm/sched thus far, so best I can do is discover them as we go.
I don't think that those are random unjustified barriers at all and it sounds like you didn't grasp what I said here.
See the spsc queue must have the following semantics:
1. When you pop a job all changes made before you push the job must be visible.
2. When the queue becomes empty all the changes made before you pop the last job must be visible.
Otherwise I completely agree with you that the whole scheduler doesn't work at all and we need to add tons of external barriers.
Regards, Christian.
-Daniel
Regards, Christian.
spsc_queue_pop(&entity->job_queue); return sched_job;
} @@ -459,10 +467,25 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity) struct drm_gpu_scheduler *sched; struct drm_sched_rq *rq;
if (spsc_queue_count(&entity->job_queue) || !entity->sched_list)
/* single possible engine and already selected */
if (!entity->sched_list)
return;
/* queue non-empty, stay on the same engine */
if (spsc_queue_count(&entity->job_queue)) return;
fence = READ_ONCE(entity->last_scheduled);
/*
* Only when the queue is empty are we guaranteed that the scheduler
* thread cannot change ->last_scheduled. To enforce ordering we need
* a read barrier here. See drm_sched_entity_pop_job() for the other
* side.
*/
smp_rmb();
fence = entity->last_scheduled;
/* stay on the same engine if the previous job hasn't finished */ if (fence && !dma_fence_is_signaled(fence)) return;
-- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch
On Tue, Jul 13, 2021 at 9:25 AM Christian König christian.koenig@amd.com wrote:
Am 13.07.21 um 08:50 schrieb Daniel Vetter:
On Tue, Jul 13, 2021 at 8:35 AM Christian König christian.koenig@amd.com wrote:
Am 12.07.21 um 19:53 schrieb Daniel Vetter:
It might be good enough on x86 with just READ_ONCE, but the write side should then at least be WRITE_ONCE because x86 has total store order.
It's definitely not enough on arm.
Fix this properly, which means
- explain the need for the barrier in both places
- point at the other side in each comment
Also pull out the !sched_list case as the first check, so that the code flow is clearer.
While at it sprinkle some comments around because it was very non-obvious to me what's actually going on here and why.
Note that we really need full barriers here, at first I thought store-release and load-acquire on ->last_scheduled would be enough, but we actually require ordering between that and the queue state.
v2: Put smp_rmb() in the right place and fix up comment (Andrey)
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: "Christian König" christian.koenig@amd.com Cc: Steven Price steven.price@arm.com Cc: Daniel Vetter daniel.vetter@ffwll.ch Cc: Andrey Grodzovsky andrey.grodzovsky@amd.com Cc: Lee Jones lee.jones@linaro.org Cc: Boris Brezillon boris.brezillon@collabora.com
drivers/gpu/drm/scheduler/sched_entity.c | 27 ++++++++++++++++++++++-- 1 file changed, 25 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index f7347c284886..89e3f6eaf519 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -439,8 +439,16 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
dma_fence_set_error(&sched_job->s_fence->finished, -ECANCELED);
dma_fence_put(entity->last_scheduled);
entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished);
/*
* If the queue is empty we allow drm_sched_entity_select_rq() to
* locklessly access ->last_scheduled. This only works if we set the
* pointer before we dequeue and if we add a write barrier here.
*/
smp_wmb();
Again, conceptually those barriers should be part of the spsc_queue container and not externally.
That would be extremely unusual api. Let's assume that your queue is very dumb, and protected by a simple lock. That's about the maximum any user could expect.
But then you still need barriers here, because linux locks (spinlock, mutex) are defined to be one-way barriers: Stuff that's inside is guaranteed to be done inside, but stuff outside of the locked region can leak in. They're load-acquire/store-release barriers. So not good enough.
You really need to have barriers here, and they really all need to be documented properly. And yes that's a shit-ton of work in drm/sched, because it's full of yolo lockless stuff.
The other case you could make is that this works like a wakeup queue, or similar. The rules there are:
- wake_up (i.e. pushing something into the queue) is a store-release barrier
- the woken up side (i.e. popping an entry) is a load acquire barrier
Which is obviously needed because otherwise you don't have coherency for the data queued up. And again not the barriers you're looking for here.
Exactly that was the idea, yes.
Either way, we'd still need the comments, because it's still lockless trickery, and every single one of them needs to have a comment on both sides to explain what's going on.
Essentially replace spsc_queue with an llist underneath, and that's the amount of barriers a data structure should provide. Anything else is asking your datastructure to paper over bugs in your users.
This is similar to how atomic_t is by default completely unordered, and users need to add barriers as needed, with comments.
My main problem is as always that kernel atomics work different than userspace atomics.
I think this is all to make sure people don't just write lockless algorithms because it's a cool idea, but are forced to think this all through. Which seems to not have happened very consistently for drm/sched, so I guess needs to be fixed.
Well at least initially that was all perfectly thought through. The problem is nobody is really maintaining that stuff.
I'm definitely not going to hide all that by making the spsc_queue stuff provide random unjustified barriers just because that would paper over drm/sched bugs. We need to fix the actual bugs, and preferably all of them. I've found a few, but I wasn't involved in drm/sched thus far, so best I can do is discover them as we go.
I don't think that those are random unjustified barriers at all and it sounds like you didn't grasp what I said here.
See the spsc queue must have the following semantics:
- When you pop a job all changes made before you push the job must be
visible.
These are the standard barriers that wake-up queues also have, it's just store-release+load-acquire.
- When the queue becomes empty all the changes made before you pop the
last job must be visible.
This is very much non-standard for a queue. I guess you could make that part of the spsc_queue api between pop and is_empty (really we shouldn't expose the _count() function for this), but that's all very clever.
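(As a rough sketch of what baking that into the container could look like - the job_count field mirrors the existing spsc_queue, everything else here is made up for illustration:)

/* consumer side: release, pairs with the acquire in spsc_queue_is_empty() */
static inline void spsc_queue_pop_done(struct spsc_queue *queue)
{
	(void)atomic_dec_return_release(&queue->job_count);
}

/* observer side: acquire, pairs with the release in spsc_queue_pop_done() */
static inline bool spsc_queue_is_empty(struct spsc_queue *queue)
{
	return atomic_read_acquire(&queue->job_count) == 0;
}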
I think having explicit barriers in the code, with comments, is much more robust. Because it forces you to think about all this, and document it properly. Because there's also lockless stuff like drm_sched.ready, which doesn't look at all like it's ordered somehow.
E.g. there's also an rmb(); in drm_sched_entity_is_idle(), which
- probably should be an smp_rmb()
- really should document what it actually synchronizes against, and
the lack of an smp_wmb() somewhere else indicates it's probably busted. You always need two barriers.
Otherwise I completely agree with you that the whole scheduler doesn't work at all and we need to add tons of external barriers.
Imo that's what we need to do. And the most important part for maintainability is to properly document things with comments, and the most important part in that comment is pointing at the other side of a barrier (since a barrier on only one side orders nothing).
Also, on x86 almost nothing here matters, because both rmb() and wmb() are no-ops. Aside from the compiler barrier, which tends to not be the biggest issue. Only mb() does anything, because x86 is only allowed to reorder reads ahead of writes.
So in practice it's not quite as big a disaster, imo the big thing here is maintainability of all these tricks just not being documented. -Daniel
Regards, Christian.
-Daniel
Regards, Christian.
spsc_queue_pop(&entity->job_queue); return sched_job;
} @@ -459,10 +467,25 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity) struct drm_gpu_scheduler *sched; struct drm_sched_rq *rq;
if (spsc_queue_count(&entity->job_queue) || !entity->sched_list)
/* single possible engine and already selected */
if (!entity->sched_list)
return;
/* queue non-empty, stay on the same engine */
if (spsc_queue_count(&entity->job_queue)) return;
fence = READ_ONCE(entity->last_scheduled);
/*
* Only when the queue is empty are we guaranteed that the scheduler
* thread cannot change ->last_scheduled. To enforce ordering we need
* a read barrier here. See drm_sched_entity_pop_job() for the other
* side.
*/
smp_rmb();
fence = entity->last_scheduled;
/* stay on the same engine if the previous job hasn't finished */ if (fence && !dma_fence_is_signaled(fence)) return;
-- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch
Am 13.07.21 um 11:10 schrieb Daniel Vetter:
On Tue, Jul 13, 2021 at 9:25 AM Christian König christian.koenig@amd.com wrote:
Am 13.07.21 um 08:50 schrieb Daniel Vetter:
On Tue, Jul 13, 2021 at 8:35 AM Christian König christian.koenig@amd.com wrote:
Am 12.07.21 um 19:53 schrieb Daniel Vetter:
It might be good enough on x86 with just READ_ONCE, but the write side should then at least be WRITE_ONCE because x86 has total store order.
It's definitely not enough on arm.
Fix this properly, which means
- explain the need for the barrier in both places
- point at the other side in each comment
Also pull out the !sched_list case as the first check, so that the code flow is clearer.
While at it sprinkle some comments around because it was very non-obvious to me what's actually going on here and why.
Note that we really need full barriers here, at first I thought store-release and load-acquire on ->last_scheduled would be enough, but we actually require ordering between that and the queue state.
v2: Put smp_rmb() in the right place and fix up comment (Andrey)
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: "Christian König" christian.koenig@amd.com Cc: Steven Price steven.price@arm.com Cc: Daniel Vetter daniel.vetter@ffwll.ch Cc: Andrey Grodzovsky andrey.grodzovsky@amd.com Cc: Lee Jones lee.jones@linaro.org Cc: Boris Brezillon boris.brezillon@collabora.com
drivers/gpu/drm/scheduler/sched_entity.c | 27 ++++++++++++++++++++++-- 1 file changed, 25 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index f7347c284886..89e3f6eaf519 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -439,8 +439,16 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
dma_fence_set_error(&sched_job->s_fence->finished, -ECANCELED);
dma_fence_put(entity->last_scheduled);
entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished);
/*
* If the queue is empty we allow drm_sched_entity_select_rq() to
* locklessly access ->last_scheduled. This only works if we set the
* pointer before we dequeue and if we add a write barrier here.
*/
smp_wmb();
Again, conceptually those barriers should be part of the spsc_queue container and not externally.
That would be extremely unusual api. Let's assume that your queue is very dumb, and protected by a simple lock. That's about the maximum any user could expect.
But then you still need barriers here, because linux locks (spinlock, mutex) are defined to be one-way barriers: Stuff that's inside is guaranteed to be done inside, but stuff outside of the locked region can leak in. They're load-acquire/store-release barriers. So not good enough.
You really need to have barriers here, and they really all need to be documented properly. And yes that's a shit-ton of work in drm/sched, because it's full of yolo lockless stuff.
The other case you could make is that this works like a wakeup queue, or similar. The rules there are:
- wake_up (i.e. pushing something into the queue) is a store-release barrier
- the woken up side (i.e. popping an entry) is a load acquire barrier
Which is obviously needed because otherwise you don't have coherency for the data queued up. And again not the barriers you're looking for here.
Exactly that was the idea, yes.
Either way, we'd still need the comments, because it's still lockless trickery, and every single one of them needs to have a comment on both sides to explain what's going on.
Essentially replace spsc_queue with an llist underneath, and that's the amount of barriers a data structure should provide. Anything else is asking your datastructure to paper over bugs in your users.
This is similar to how atomic_t is by default completely unordered, and users need to add barriers as needed, with comments.
My main problem is as always that kernel atomics work different than userspace atomics.
I think this is all to make sure people don't just write lockless algorithms because it's a cool idea, but are forced to think this all through. Which seems to not have happened very consistently for drm/sched, so I guess needs to be fixed.
Well at least initially that was all perfectly thought through. The problem is nobody is really maintaining that stuff.
I'm definitely not going to hide all that by making the spsc_queue stuff provide random unjustified barriers just because that would paper over drm/sched bugs. We need to fix the actual bugs, and preferably all of them. I've found a few, but I wasn't involved in drm/sched thus far, so best I can do is discover them as we go.
I don't think that those are random unjustified barriers at all and it sounds like you didn't grasp what I said here.
See the spsc queue must have the following semantics:
- When you pop a job all changes made before you push the job must be
visible.
These are the standard barriers that wake-up queues also have, it's just store-release+load-acquire.
- When the queue becomes empty all the changes made before you pop the
last job must be visible.
This is very much non-standard for a queue. I guess you could make that part of the spsc_queue api between pop and is_empty (really we shouldn't expose the _count() function for this), but that's all very clever.
Yeah, even having count is superfluous. You can do this much more easily by checking whether the pointer is NULL or not.
I think having explicit barriers in the code, with comments, is much more robust. Because it forces you to think about all this, and document it properly. Because there's also lockless stuff like drm_sched.ready, which doesn't look at all like it's ordered somehow.
But then you have to fix drm_sched_entity_fini() as well which also relies on the same behavior.
Regards, Christian.
E.g. there's also an rmb(); in drm_sched_entity_is_idle(), which
- probably should be an smp_rmb()
- really should document what it actually synchronizes against, and
the lack of an smp_wmb() somewhere else indicates it's probably busted. You always need two barriers.
Otherwise I completely agree with you that the whole scheduler doesn't work at all and we need to add tons of external barriers.
Imo that's what we need to do. And the most important part for maintainability is to properly document things with comments, and the most important part in that comment is pointing at the other side of a barrier (since a barrier on only one side orders nothing).
Also, on x86 almost nothing here matters, because both rmb() and wmb() are no-ops. Aside from the compiler barrier, which tends to not be the biggest issue. Only mb() does anything, because x86 is only allowed to reorder reads ahead of writes.
So in practice it's not quite as big a disaster, imo the big thing here is maintainability of all these tricks just not being documented. -Daniel
Regards, Christian.
-Daniel
Regards, Christian.
spsc_queue_pop(&entity->job_queue); return sched_job; }
@@ -459,10 +467,25 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity) struct drm_gpu_scheduler *sched; struct drm_sched_rq *rq;
if (spsc_queue_count(&entity->job_queue) || !entity->sched_list)
/* single possible engine and already selected */
if (!entity->sched_list)
return;
/* queue non-empty, stay on the same engine */
if (spsc_queue_count(&entity->job_queue)) return;
fence = READ_ONCE(entity->last_scheduled);
/*
* Only when the queue is empty are we guaranteed that the scheduler
* thread cannot change ->last_scheduled. To enforce ordering we need
* a read barrier here. See drm_sched_entity_pop_job() for the other
* side.
*/
smp_rmb();
fence = entity->last_scheduled;
/* stay on the same engine if the previous job hasn't finished */ if (fence && !dma_fence_is_signaled(fence)) return;
-- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch
On 2021-07-13 5:10 a.m., Daniel Vetter wrote:
On Tue, Jul 13, 2021 at 9:25 AM Christian König christian.koenig@amd.com wrote:
Am 13.07.21 um 08:50 schrieb Daniel Vetter:
On Tue, Jul 13, 2021 at 8:35 AM Christian König christian.koenig@amd.com wrote:
Am 12.07.21 um 19:53 schrieb Daniel Vetter:
It might be good enough on x86 with just READ_ONCE, but the write side should then at least be WRITE_ONCE because x86 has total store order.
It's definitely not enough on arm.
Fix this properly, which means
- explain the need for the barrier in both places
- point at the other side in each comment
Also pull out the !sched_list case as the first check, so that the code flow is clearer.
While at it sprinkle some comments around because it was very non-obvious to me what's actually going on here and why.
Note that we really need full barriers here, at first I thought store-release and load-acquire on ->last_scheduled would be enough, but we actually require ordering between that and the queue state.
v2: Put smp_rmb() in the right place and fix up comment (Andrey)
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: "Christian König" christian.koenig@amd.com Cc: Steven Price steven.price@arm.com Cc: Daniel Vetter daniel.vetter@ffwll.ch Cc: Andrey Grodzovsky andrey.grodzovsky@amd.com Cc: Lee Jones lee.jones@linaro.org Cc: Boris Brezillon boris.brezillon@collabora.com
drivers/gpu/drm/scheduler/sched_entity.c | 27 ++++++++++++++++++++++-- 1 file changed, 25 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index f7347c284886..89e3f6eaf519 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -439,8 +439,16 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
dma_fence_set_error(&sched_job->s_fence->finished, -ECANCELED);
dma_fence_put(entity->last_scheduled);
entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished);
/*
* If the queue is empty we allow drm_sched_entity_select_rq() to
* locklessly access ->last_scheduled. This only works if we set the
* pointer before we dequeue and if we add a write barrier here.
*/
smp_wmb();
Again, conceptually those barriers should be part of the spsc_queue container and not externally.
That would be extremely unusual api. Let's assume that your queue is very dumb, and protected by a simple lock. That's about the maximum any user could expect.
But then you still need barriers here, because linux locks (spinlock, mutex) are defined to be one-way barriers: Stuff that's inside is guaranteed to be done inside, but stuff outside of the locked region can leak in. They're load-acquire/store-release barriers. So not good enough.
You really need to have barriers here, and they really all need to be documented properly. And yes that's a shit-ton of work in drm/sched, because it's full of yolo lockless stuff.
The other case you could make is that this works like a wakeup queue, or similar. The rules there are:
- wake_up (i.e. pushing something into the queue) is a store-release barrier
- the woken up side (i.e. popping an entry) is a load acquire barrier
Which is obviously needed because otherwise you don't have coherency for the data queued up. And again not the barriers you're looking for here.
Exactly that was the idea, yes.
Either way, we'd still need the comments, because it's still lockless trickery, and every single one of them needs to have a comment on both sides to explain what's going on.
Essentially replace spsc_queue with an llist underneath, and that's the amount of barriers a data structure should provide. Anything else is asking your datastructure to paper over bugs in your users.
This is similar to how atomic_t is by default completely unordered, and users need to add barriers as needed, with comments.
My main problem is as always that kernel atomics work different than userspace atomics.
I think this is all to make sure people don't just write lockless algorithms because it's a cool idea, but are forced to think this all through. Which seems to not have happened very consistently for drm/sched, so I guess needs to be fixed.
Well at least initially that was all perfectly thought through. The problem is nobody is really maintaining that stuff.
I'm definitely not going to hide all that by making the spsc_queue stuff provide random unjustified barriers just because that would paper over drm/sched bugs. We need to fix the actual bugs, and preferably all of them. I've found a few, but I wasn't involved in drm/sched thus far, so best I can do is discover them as we go.
I don't think that those are random unjustified barriers at all and it sounds like you didn't grasp what I said here.
See the spsc queue must have the following semantics:
- When you pop a job all changes made before you push the job must be
visible.
These are the standard barriers that wake-up queues also have, it's just store-release+load-acquire.
- When the queue becomes empty all the changes made before you pop the
last job must be visible.
This is very much non-standard for a queue. I guess you could make that part of the spsc_queue api between pop and is_empty (really we shouldn't expose the _count() function for this), but that's all very clever.
I think having explicit barriers in the code, with comments, is much more robust. Because it forces you to think about all this, and document it properly. Because there's also lockless stuff like drm_sched.ready, which doesn't look at all like it's ordered somehow.
At least for amdgpu, after drm_sched_fini is called (setting sched.ready = false) we call amdgpu_fence_wait_empty to ensure all in-progress jobs are done. Seems to me at least, this should guarantee that all in-flight consumers of sched.ready (those who still see sched.ready == true) are finished, while all later consumers will see sched.ready == false and will bail out.
On second thought, there is a gap between checking for sched.ready and inserting the HW fence for the new job, so this might still be a bug... Looks like we need to check for sched.ready after inserting the HW fence, and for this we will need a barrier or locking.
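(It's the classic store-buffering pattern, so it would need a full barrier on both sides; this is a sketch only, with the fence bookkeeping and field names made up for illustration:)

/* submission side: publish the HW fence first, then re-check ready */
ring->fence_drv.fences[seq] = fence;
smp_mb();	/* pairs with the smp_mb() in the teardown path below */
if (!READ_ONCE(sched->ready))
	return -EIO;	/* teardown raced with us, bail out */

/* teardown side: mark not-ready first, then wait for published fences */
WRITE_ONCE(sched->ready, false);
smp_mb();	/* pairs with the smp_mb() in the submission path above */
amdgpu_fence_wait_empty(ring);	/* now sees any fence published before the re-check */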
Andrey
E.g. there's also an rmb(); in drm_sched_entity_is_idle(), which
- probably should be an smp_rmb()
- really should document what it actually synchronizes against, and
the lack of an smp_wmb() somewhere else indicates it's probably busted. You always need two barriers.
Otherwise I completely agree with you that the whole scheduler doesn't work at all and we need to add tons of external barriers.
Imo that's what we need to do. And the most important part for maintainability is to properly document things with comments, and the most important part in that comment is pointing at the other side of a barrier (since a barrier on only one side orders nothing).
Also, on x86 almost nothing here matters, because both rmb() and wmb() are no-ops. Aside from the compiler barrier, which tends to not be the biggest issue. Only mb() does anything, because x86 is only allowed to reorder reads ahead of writes.
So in practice it's not quite as big a disaster, imo the big thing here is maintainability of all these tricks just not being documented. -Daniel
Regards, Christian.
-Daniel
Regards, Christian.
spsc_queue_pop(&entity->job_queue); return sched_job; }
@@ -459,10 +467,25 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity) struct drm_gpu_scheduler *sched; struct drm_sched_rq *rq;
if (spsc_queue_count(&entity->job_queue) || !entity->sched_list)
/* single possible engine and already selected */
if (!entity->sched_list)
return;
/* queue non-empty, stay on the same engine */
if (spsc_queue_count(&entity->job_queue)) return;
fence = READ_ONCE(entity->last_scheduled);
/*
* Only when the queue is empty are we guaranteed that the scheduler
* thread cannot change ->last_scheduled. To enforce ordering we need
* a read barrier here. See drm_sched_entity_pop_job() for the other
* side.
*/
smp_rmb();
fence = entity->last_scheduled;
/* stay on the same engine if the previous job hasn't finished */ if (fence && !dma_fence_is_signaled(fence)) return;
-- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch
On Tue, Jul 13, 2021 at 6:11 PM Andrey Grodzovsky andrey.grodzovsky@amd.com wrote:
On 2021-07-13 5:10 a.m., Daniel Vetter wrote:
On Tue, Jul 13, 2021 at 9:25 AM Christian König christian.koenig@amd.com wrote:
Am 13.07.21 um 08:50 schrieb Daniel Vetter:
On Tue, Jul 13, 2021 at 8:35 AM Christian König christian.koenig@amd.com wrote:
Am 12.07.21 um 19:53 schrieb Daniel Vetter:
It might be good enough on x86 with just READ_ONCE, but the write side should then at least be WRITE_ONCE because x86 has total store order.
It's definitely not enough on arm.
Fix this properly, which means
- explain the need for the barrier in both places
- point at the other side in each comment
Also pull out the !sched_list case as the first check, so that the code flow is clearer.
While at it sprinkle some comments around because it was very non-obvious to me what's actually going on here and why.
Note that we really need full barriers here, at first I thought store-release and load-acquire on ->last_scheduled would be enough, but we actually require ordering between that and the queue state.
v2: Put smp_rmb() in the right place and fix up comment (Andrey)
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: "Christian König" christian.koenig@amd.com Cc: Steven Price steven.price@arm.com Cc: Daniel Vetter daniel.vetter@ffwll.ch Cc: Andrey Grodzovsky andrey.grodzovsky@amd.com Cc: Lee Jones lee.jones@linaro.org Cc: Boris Brezillon boris.brezillon@collabora.com
drivers/gpu/drm/scheduler/sched_entity.c | 27 ++++++++++++++++++++++-- 1 file changed, 25 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index f7347c284886..89e3f6eaf519 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -439,8 +439,16 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
dma_fence_set_error(&sched_job->s_fence->finished, -ECANCELED);
dma_fence_put(entity->last_scheduled);
entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished);
/*
* If the queue is empty we allow drm_sched_entity_select_rq() to
* locklessly access ->last_scheduled. This only works if we set the
* pointer before we dequeue and if we add a write barrier here.
*/
smp_wmb();
Again, conceptually those barriers should be part of the spsc_queue container and not externally.
That would be extremely unusual api. Let's assume that your queue is very dumb, and protected by a simple lock. That's about the maximum any user could expect.
But then you still need barriers here, because linux locks (spinlock, mutex) are defined to be one-way barriers: Stuff that's inside is guaranteed to be done inside, but stuff outside of the locked region can leak in. They're load-acquire/store-release barriers. So not good enough.
You really need to have barriers here, and they really all need to be documented properly. And yes that's a shit-ton of work in drm/sched, because it's full of yolo lockless stuff.
The other case you could make is that this works like a wakeup queue, or similar. The rules there are:
- wake_up (i.e. pushing something into the queue) is a store-release barrier
- the woken up side (i.e. popping an entry) is a load acquire barrier
Which is obviously needed because otherwise you don't have coherency for the data queued up. And again not the barriers you're looking for here.
Exactly that was the idea, yes.
Either way, we'd still need the comments, because it's still lockless trickery, and every single one of them needs to have a comment on both sides to explain what's going on.
Essentially replace spsc_queue with an llist underneath, and that's the amount of barriers a data structure should provide. Anything else is asking your datastructure to paper over bugs in your users.
This is similar to how atomic_t is by default completely unordered, and users need to add barriers as needed, with comments.
My main problem is as always that kernel atomics work different than userspace atomics.
I think this is all to make sure people don't just write lockless algorithms because it's a cool idea, but are forced to think this all through. Which seems to not have happened very consistently for drm/sched, so I guess needs to be fixed.
Well at least initially that was all perfectly thought through. The problem is nobody is really maintaining that stuff.
I'm definitely not going to hide all that by making the spsc_queue stuff provide random unjustified barriers just because that would paper over drm/sched bugs. We need to fix the actual bugs, and preferably all of them. I've found a few, but I wasn't involved in drm/sched thus far, so best I can do is discover them as we go.
I don't think that those are random unjustified barriers at all and it sounds like you didn't grasp what I said here.
See the spsc queue must have the following semantics:
- When you pop a job all changes made before you push the job must be
visible.
These are the standard barriers that wake-up queues also have, it's just store-release+load-acquire.
- When the queue becomes empty all the changes made before you pop the
last job must be visible.
This is very much non-standard for a queue. I guess you could make that part of the spsc_queue api between pop and is_empty (really we shouldn't expose the _count() function for this), but that's all very clever.
I think having explicit barriers in the code, with comments, is much more robust. Because it forces you to think about all this, and document it properly. Because there's also lockless stuff like drm_sched.ready, which doesn't look at all like it's ordered somehow.
At least for amdgpu, after drm_sched_fini is called (setting sched.ready = false) we call amdgpu_fence_wait_empty to ensure all in-progress jobs are done. Seems to me at least, this should guarantee that all in-flight consumers of sched.ready (those who still see sched.ready == true) are finished, while all later consumers will see sched.ready == false and will bail out.
On second thought, there is a gap between checking for sched.ready and inserting the HW fence for the new job, so this might still be a bug... Looks like we need to check for sched.ready after inserting the HW fence, and for this we will need a barrier or locking.
Yeah, and at that point I think it's good to split up drm_sched.ready from a new thing for when the hw died, like drm_sched.wedged or .hw_death or similar, so that we can tell them apart. Trying to submit a job to a non-ready scheduler is a driver bug and should WARN, while submitting a job to a dead scheduler should probably result in -EIO being returned to userspace (instead of the current -ENOENT, assuming I haven't missed an errno remapping somewhere in amdgpu).
Also, then you could do a drm_sched_die() or similar function which combines setting the hw_died with the right barriers and cleaning up all the jobs.
Wrt the fundamental race: I think that's not fixable easily, so maybe the scheduler thread also needs to handle this and immediately fail these jobs by setting all fences to -EIO and completing them, without even calling into the driver. If you try to catch this synchronously I think it would require some kind of locking in push_job, plus failure handling, which would be a) slow and b) really ugly in the driver code. Just accepting that some jobs can slip through and letting the scheduler thread clean them up is I think cleaner.
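(Very roughly, with the hw_dead flag and the helper name being entirely hypothetical:)

/* hypothetical: called from the scheduler thread instead of run_job()
 * once the hw has been declared dead */
static void drm_sched_job_fail_hw_dead(struct drm_sched_job *s_job)
{
	struct drm_sched_fence *s_fence = s_job->s_fence;

	/* complete the fences with -EIO without touching the dead hw */
	dma_fence_set_error(&s_fence->finished, -EIO);
	drm_sched_fence_scheduled(s_fence);
	drm_sched_fence_finished(s_fence);
}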
If userspace then goes ahead and closes the ctx before all the jobs are cleaned up we can handle that with the normal drm_sched_entity cleanup logic. Which would be another reason to split normal cleanup from hw death. -Daniel
Andrey
E.g. there's also an rmb(); in drm_sched_entity_is_idle(), which
- probably should be an smp_rmb()
- really should document what it actually synchronizes against, and
the lack of an smp_wmb() somewhere else indicates it's probably busted. You always need two barriers.
Otherwise I completely agree with you that the whole scheduler doesn't work at all and we need to add tons of external barriers.
Imo that's what we need to do. And the most important part for maintainability is to properly document things with comments, and the most important part in that comment is pointing at the other side of a barrier (since a barrier on only one side orders nothing).
Also, on x86 almost nothing here matters, because both rmb() and wmb() are no-ops. Aside from the compiler barrier, which tends to not be the biggest issue. Only mb() does anything, because x86 is only allowed to reorder reads ahead of writes.
So in practice it's not quite as big a disaster, imo the big thing here is maintainability of all these tricks just not being documented. -Daniel
Regards, Christian.
-Daniel
Regards, Christian.
spsc_queue_pop(&entity->job_queue); return sched_job; }
@@ -459,10 +467,25 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity) struct drm_gpu_scheduler *sched; struct drm_sched_rq *rq;
if (spsc_queue_count(&entity->job_queue) || !entity->sched_list)
/* single possible engine and already selected */
if (!entity->sched_list)
return;
/* queue non-empty, stay on the same engine */
if (spsc_queue_count(&entity->job_queue)) return;
fence = READ_ONCE(entity->last_scheduled);
/*
* Only when the queue is empty are we guaranteed that the scheduler
* thread cannot change ->last_scheduled. To enforce ordering we need
* a read barrier here. See drm_sched_entity_pop_job() for the other
* side.
*/
smp_rmb();
fence = entity->last_scheduled;
/* stay on the same engine if the previous job hasn't finished */ if (fence && !dma_fence_is_signaled(fence)) return;
-- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch
On 2021-07-13 12:45 p.m., Daniel Vetter wrote:
On Tue, Jul 13, 2021 at 6:11 PM Andrey Grodzovsky andrey.grodzovsky@amd.com wrote:
On 2021-07-13 5:10 a.m., Daniel Vetter wrote:
On Tue, Jul 13, 2021 at 9:25 AM Christian König christian.koenig@amd.com wrote:
Am 13.07.21 um 08:50 schrieb Daniel Vetter:
On Tue, Jul 13, 2021 at 8:35 AM Christian König christian.koenig@amd.com wrote:
Am 12.07.21 um 19:53 schrieb Daniel Vetter: > It might be good enough on x86 with just READ_ONCE, but the write side > should then at least be WRITE_ONCE because x86 has total store order. > > It's definitely not enough on arm. > > Fix this proplery, which means > - explain the need for the barrier in both places > - point at the other side in each comment > > Also pull out the !sched_list case as the first check, so that the > code flow is clearer. > > While at it sprinkle some comments around because it was very > non-obvious to me what's actually going on here and why. > > Note that we really need full barriers here, at first I thought > store-release and load-acquire on ->last_scheduled would be enough, > but we actually requiring ordering between that and the queue state. > > v2: Put smp_rmp() in the right place and fix up comment (Andrey) > > Signed-off-by: Daniel Vetter daniel.vetter@intel.com > Cc: "Christian König" christian.koenig@amd.com > Cc: Steven Price steven.price@arm.com > Cc: Daniel Vetter daniel.vetter@ffwll.ch > Cc: Andrey Grodzovsky andrey.grodzovsky@amd.com > Cc: Lee Jones lee.jones@linaro.org > Cc: Boris Brezillon boris.brezillon@collabora.com > --- > drivers/gpu/drm/scheduler/sched_entity.c | 27 ++++++++++++++++++++++-- > 1 file changed, 25 insertions(+), 2 deletions(-) > > diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c > index f7347c284886..89e3f6eaf519 100644 > --- a/drivers/gpu/drm/scheduler/sched_entity.c > +++ b/drivers/gpu/drm/scheduler/sched_entity.c > @@ -439,8 +439,16 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity) > dma_fence_set_error(&sched_job->s_fence->finished, -ECANCELED); > > dma_fence_put(entity->last_scheduled); > + > entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished); > > + /* > + * If the queue is empty we allow drm_sched_entity_select_rq() to > + * locklessly access ->last_scheduled. This only works if we set the > + * pointer before we dequeue and if we a write barrier here. > + */ > + smp_wmb(); > + Again, conceptual those barriers should be part of the spsc_queue container and not externally.
That would be extremely unusual api. Let's assume that your queue is very dumb, and protected by a simple lock. That's about the maximum any user could expect.
But then you still need barriers here, because linux locks (spinlock, mutex) are defined to be one-way barriers: Stuff that's inside is guaranteed to be done inside, but stuff outside of the locked region can leak in. They're load-acquire/store-release barriers. So not good enough.
You really need to have barriers here, and they really all need to be documented properly. And yes that's a shit-ton of work in drm/sched, because it's full of yolo lockless stuff.
The other case you could make is that this works like a wakeup queue, or similar. The rules there are:
- wake_up (i.e. pushing something into the queue) is a store-release barrier
- the woken up side (i.e. popping an entry) is a load acquire barrier
Which is obviously needed because otherwise you don't have coherency for the data queued up. And again not the barriers you're looking for here.
Exactly that was the idea, yes.
Either way, we'd still need the comments, because it's still lockless trickery, and every single one of them needs to have a comment on both sides to explain what's going on.
Essentially replace spsc_queue with an llist underneath, and that's the amount of barriers a data structure should provide. Anything else is asking your datastructure to paper over bugs in your users.
This is similar to how atomic_t is by default completely unordered, and users need to add barriers as needed, with comments.
My main problem is as always that kernel atomics work different than userspace atomics.
I think this is all to make sure people don't just write lockless algorithms because it's a cool idea, but are forced to think this all through. Which seems to not have happened very consistently for drm/sched, so I guess needs to be fixed.
Well at least initially that was all perfectly thought through. The problem is nobody is really maintaining that stuff.
I'm definitely not going to hide all that by making the spsc_queue stuff provide random unjustified barriers just because that would paper over drm/sched bugs. We need to fix the actual bugs, and preferably all of them. I've found a few, but I wasn't involved in drm/sched thus far, so best I can do is discover them as we go.
I don't think that those are random unjustified barriers at all and it sounds like you didn't grasp what I said here.
See the spsc queue must have the following semantics:
- When you pop a job all changes made before you push the job must be
visible.
These are the standard barriers that wake-up queues also have, it's just store-release+load-acquire.
- When the queue becomes empty all the changes made before you pop the
last job must be visible.
This is very much non-standard for a queue. I guess you could make that part of the spsc_queue api between pop and is_empty (really we shouldn't expose the _count() function for this), but that's all very clever.
I think having explicit barriers in the code, with comments, is much more robust. Because it forces you to think about all this, and document it properly. Because there's also lockless stuff like drm_sched.ready, which doesn't look at all like it's ordered somehow.
At least for amdgpu, after drm_sched_fini is called (setting sched.ready = false) we call amdgpu_fence_wait_empty to ensure all in-progress jobs are done. Seems to me at least, this should guarantee that all in-flight consumers of sched.ready (those who still see sched.ready == true) are finished, while all later consumers will see sched.ready == false and will bail out.
On second thought, there is a gap between checking for sched.ready and inserting the HW fence for the new job, so this might still be a bug... Looks like we need to check for sched.ready after inserting the HW fence, and for this we will need a barrier or locking.
Yeah, and at that point I think it's good to split up drm_sched.ready from a new thing for when the hw died, like drm_sched.wedged or .hw_death or similar, so that we can tell them apart. Trying to submit a job to a non-ready scheduler is a driver bug and should WARN, while submitting a job to a dead scheduler should probably result in -EIO being returned to userspace (instead of the current -ENOENT, assuming I haven't missed an errno remapping somewhere in amdgpu).
Also, then you could do a drm_sched_die() or similar function which combines setting the hw_died with the right barriers and cleaning up all the jobs.
Wrt the fundamental race: I think that's not fixable easily, so maybe the scheduler thread also needs to handle this and immediately fail these jobs by setting all fences to -EIO and completing them, without even calling into the driver. If you try to catch this synchronously I think it would require some kind of locking in push_job, plus failure handling, which would be a) slow and b) really ugly in the driver code. Just accepting that some jobs can slip through and letting the scheduler thread clean them up is I think cleaner.
I agree about moving this check to the scheduler thread. I also don't quite understand why some places that clearly run after the job has been picked up by its scheduler thread, such as amdgpu_ib_schedule, still check for sched.ready... What's the point? Also there are direct submission cases where IB insertion into the HW ring is done without any scheduler involvement, and even more so in that case, why do we care that the scheduler is not ready?
Andrey
If userspace then goes ahead and closes the ctx before all the jobs are cleaned up we can handle that with the normal drm_sched_entity cleanup logic. Which would be another reason to split normal cleanup from hw death. -Daniel
Andrey
E.g. there's also an rmb(); in drm_sched_entity_is_idle(), which
- probably should be an smp_rmb()
- really should document what it actually synchronizes against, and
the lack of an smp_wmb() somewhere else indicates it's probably busted. You always need two barriers.
Otherwise I completely agree with you that the whole scheduler doesn't work at all and we need to add tons of external barriers.
Imo that's what we need to do. And the most important part for maintainability is to properly document things with comments, and the most important part in that comment is pointing at the other side of a barrier (since a barrier on only one side orders nothing).
Also, on x86 almost nothing here matters, because both rmb() and wmb() are no-ops. Aside from the compiler barrier, which tends to not be the biggest issue. Only mb() does anything, because x86 is only allowed to reorder reads ahead of writes.
So in practice it's not quite as big a disaster, imo the big thing here is maintainability of all these tricks just not being documented. -Daniel
Regards, Christian.
-Daniel
Regards, Christian.
> spsc_queue_pop(&entity->job_queue); > return sched_job; > } > @@ -459,10 +467,25 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity) > struct drm_gpu_scheduler *sched; > struct drm_sched_rq *rq; > > - if (spsc_queue_count(&entity->job_queue) || !entity->sched_list) > + /* single possible engine and already selected */ > + if (!entity->sched_list) > + return; > + > + /* queue non-empty, stay on the same engine */ > + if (spsc_queue_count(&entity->job_queue)) > return; > > - fence = READ_ONCE(entity->last_scheduled); > + /* > + * Only when the queue is empty are we guaranteed that the scheduler > + * thread cannot change ->last_scheduled. To enforce ordering we need > + * a read barrier here. See drm_sched_entity_pop_job() for the other > + * side. > + */ > + smp_rmb(); > + > + fence = entity->last_scheduled; > + > + /* stay on the same engine if the previous job hasn't finished */ > if (fence && !dma_fence_is_signaled(fence)) > return; >
-- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch
On Wed, Jul 14, 2021 at 06:12:54PM -0400, Andrey Grodzovsky wrote:
On 2021-07-13 12:45 p.m., Daniel Vetter wrote:
On Tue, Jul 13, 2021 at 6:11 PM Andrey Grodzovsky andrey.grodzovsky@amd.com wrote:
On 2021-07-13 5:10 a.m., Daniel Vetter wrote:
On Tue, Jul 13, 2021 at 9:25 AM Christian König christian.koenig@amd.com wrote:
Am 13.07.21 um 08:50 schrieb Daniel Vetter:
On Tue, Jul 13, 2021 at 8:35 AM Christian König christian.koenig@amd.com wrote: > Am 12.07.21 um 19:53 schrieb Daniel Vetter: > > It might be good enough on x86 with just READ_ONCE, but the write side > > should then at least be WRITE_ONCE because x86 has total store order. > > > > It's definitely not enough on arm. > > > > Fix this proplery, which means > > - explain the need for the barrier in both places > > - point at the other side in each comment > > > > Also pull out the !sched_list case as the first check, so that the > > code flow is clearer. > > > > While at it sprinkle some comments around because it was very > > non-obvious to me what's actually going on here and why. > > > > Note that we really need full barriers here, at first I thought > > store-release and load-acquire on ->last_scheduled would be enough, > > but we actually requiring ordering between that and the queue state. > > > > v2: Put smp_rmp() in the right place and fix up comment (Andrey) > > > > Signed-off-by: Daniel Vetter daniel.vetter@intel.com > > Cc: "Christian König" christian.koenig@amd.com > > Cc: Steven Price steven.price@arm.com > > Cc: Daniel Vetter daniel.vetter@ffwll.ch > > Cc: Andrey Grodzovsky andrey.grodzovsky@amd.com > > Cc: Lee Jones lee.jones@linaro.org > > Cc: Boris Brezillon boris.brezillon@collabora.com > > --- > > drivers/gpu/drm/scheduler/sched_entity.c | 27 ++++++++++++++++++++++-- > > 1 file changed, 25 insertions(+), 2 deletions(-) > > > > diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c > > index f7347c284886..89e3f6eaf519 100644 > > --- a/drivers/gpu/drm/scheduler/sched_entity.c > > +++ b/drivers/gpu/drm/scheduler/sched_entity.c > > @@ -439,8 +439,16 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity) > > dma_fence_set_error(&sched_job->s_fence->finished, -ECANCELED); > > > > dma_fence_put(entity->last_scheduled); > > + > > entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished); > > > > + /* > > + * If the queue is empty we allow drm_sched_entity_select_rq() to > > + * locklessly access ->last_scheduled. This only works if we set the > > + * pointer before we dequeue and if we a write barrier here. > > + */ > > + smp_wmb(); > > + > Again, conceptual those barriers should be part of the spsc_queue > container and not externally. That would be extremely unusual api. Let's assume that your queue is very dumb, and protected by a simple lock. That's about the maximum any user could expect.
But then you still need barriers here, because linux locks (spinlock, mutex) are defined to be one-way barriers: Stuff that's inside is guaranteed to be done inside, but stuff outside of the locked region can leak in. They're load-acquire/store-release barriers. So not good enough.
You really need to have barriers here, and they really all need to be documented properly. And yes that's a shit-ton of work in drm/sched, because it's full of yolo lockless stuff.
The other case you could make is that this works like a wakeup queue, or similar. The rules there are:
- wake_up (i.e. pushing something into the queue) is a store-release barrier
- the woken up side (i.e. popping an entry) is a load acquire barrier
Which is obviously needed because otherwise you don't have coherency for the data queued up. And again not the barriers you're looking for here.
Exactly that was the idea, yes.
Either way, we'd still need the comments, because it's still lockless trickery, and every single one of them needs to have a comment on both sides to explain what's going on.
Essentially replace spsc_queue with an llist underneath, and that's the amount of barriers a data structure should provide. Anything else is asking your datastructure to paper over bugs in your users.
This is similar to how atomic_t is by default completely unordered, and users need to add barriers as needed, with comments.
My main problem is as always that kernel atomics work different than userspace atomics.
I think this is all to make sure people don't just write lockless algorithms because it's a cool idea, but are forced to think this all through. Which seems to not have happened very consistently for drm/sched, so I guess needs to be fixed.
Well at least initially that was all perfectly thought through. The problem is nobody is really maintaining that stuff.
I'm definitely not going to hide all that by making the spsc_queue stuff provide random unjustified barriers just because that would paper over drm/sched bugs. We need to fix the actual bugs, and preferably all of them. I've found a few, but I wasn't involved in drm/sched thus far, so best I can do is discover them as we go.
I don't think that those are random unjustified barriers at all and it sounds like you didn't grasp what I said here.
See the spsc queue must have the following semantics:
- When you pop a job all changes made before you push the job must be
visible.
These are the standard barriers that wake-up queues also have, it's just store-release+load-acquire.
- When the queue becomes empty all the changes made before you pop the
last job must be visible.
This is very much non-standard for a queue. I guess you could make that part of the spsc_queue api between pop and is_empty (really we shouldn't expose the _count() function for this), but that's all very clever.
I think having explicit barriers in the code, with comments, is much more robust. Because it forces you to think about all this, and document it properly. Because there's also lockless stuff like drm_sched.ready, which doesn't look at all like it's ordered somehow.
At least for amdgpu, after drm_sched_fini is called (setting sched.ready = false) we call amdgpu_fence_wait_empty to ensure all in-progress jobs are done. Seems to me at least, this should guarantee that all in-flight consumers of sched.ready (those who still see sched.ready == true) are finished, while all later consumers will see sched.ready == false and will bail out.
On second thought, there is a gap between checking for sched.ready and inserting the HW fence for the new job, so this might still be a bug... Looks like we need to check for sched.ready after inserting the HW fence, and for this we will need a barrier or locking.
Yeah, and at that point I think it's good to split up drm_sched.ready from a new thing for when the hw died, like drm_sched.wedged or .hw_death or similar, so that we can tell them apart. Trying to submit a job to a non-ready scheduler is a driver bug and should WARN, while submitting a job to a dead scheduler should probably result in -EIO being returned to userspace (instead of the current -ENOENT, assuming I haven't missed an errno remapping somewhere in amdgpu).
Also, then you could do a drm_sched_die() or similar function which combines setting the hw_died with the right barriers and cleaning up all the jobs.
Wrt the fundamental race: I think that's not easily fixable, so maybe the scheduler thread also needs to handle this and immediately fail these jobs by setting all fences to -EIO and completing them, without even calling into the driver. If you try to catch this synchronously I think it would require some kind of locking in push_job, plus failure handling, which would be a) slow and b) really ugly in the driver code. Just accepting that some jobs can slip through and letting the scheduler thread clean them up is I think cleaner.
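A rough sketch of what that could look like in the scheduler thread; sched->hw_died is the hypothetical new flag discussed above (it does not exist today), and the surrounding loop structure is elided:

    /* in the scheduler main loop, after popping a job but before run_job() */
    if (READ_ONCE(sched->hw_died)) {
        /* the hw is gone, fail the job without calling into the driver */
        dma_fence_set_error(&s_fence->finished, -EIO);
        drm_sched_fence_scheduled(s_fence);
        drm_sched_fence_finished(s_fence);
        continue;
    }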
I agree about moving this check to the scheduler thread. I also don't quite understand why some places that clearly run after the job has been picked up by its scheduler thread, such as amdgpu_ib_schedule, still check sched.ready... What's the point? Also, there are direct submission cases where IB insertion into the HW ring is done without any scheduler involvement, and in that case it's even less clear why we care that the scheduler is not ready.
I think (but I haven't checked the code in full detail) that this is because there's a mixup of what ->ready means:
- Setup/teardown ordering, where we sometimes try to submit stuff without the scheduler actually being ready yet (or maybe the hw isn't ready yet) and want to transparently fall back to something else.
- The actual "the hw died irrecoverably and reset couldn't resurrect it" case.
That's why I want to tear these two apart, so it's clear why we check things. Also in general I think solving the former problem with checks littered all over is bad style, but sometimes unavoidable (like when you're deep in a callchain through ttm to evict buffers for suspend). Usually it's better to order the code such that you never try to submit to hw when it's not ready.
Ofc the hw death is a different beast and can happen at any time, hence it needs to be treated differently - there are actual races possible with that, whereas the code ordering issues around suspend/resume and driver load/unload are all single-threaded, so they can't race. Ok, maybe hotunplug is more like hw death, since it can happen while we use the hw. -Daniel
Andrey
If userspace then goes ahead and closes the ctx before all the jobs are cleaned up we can handle that with the normal drm_sched_entity cleanup logic. Which would be another reason to split normal cleanup from hw death. -Daniel
Andrey
E.g. there's also an rmb(); in drm_sched_entity_is_idle(), which
- probably should be an smp_rmb()
- really should document what it actually synchronizes against, and
the lack of an smp_wmb() somewhere else indicates it's probably busted. You always need two barriers.
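For illustration, a properly paired (and properly commented) version of such a construct would look roughly like this; the idle flag and last_job pointer are made up for the example, not the real entity fields:

    /* writer side, e.g. the scheduler thread after the last job is popped */
    entity->last_job = NULL;
    smp_wmb();  /* pairs with smp_rmb() in drm_sched_entity_is_idle() */
    WRITE_ONCE(entity->idle, true);

    /* reader side, drm_sched_entity_is_idle() */
    if (READ_ONCE(entity->idle)) {
        smp_rmb();  /* pairs with smp_wmb() in the writer above */
        /* ->last_job and friends are now safe to look at */
    }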
Otherwise I completely agree with you that the whole scheduler doesn't work at all and we need to add tons of external barriers.
Imo that's what we need to do. And the most important part for maintainability is to properly document thing with comments, and the most important part in that comment is pointing at the other side of a barrier (since a barrier on one side only orders nothing).
Also, on x86 almost nothing here matters, because both rmb() and wmb() are no-ops. Aside from the compiler barrier, which tends not to be the biggest issue. Only mb() does anything, because x86 is only allowed to reorder reads ahead of writes.
So in practice it's not quite as big a disaster, imo the big thing here is maintainability of all these tricks just not being documented. -Daniel
Regards, Christian.
-Daniel
> Regards, > Christian. > > > spsc_queue_pop(&entity->job_queue); > > return sched_job; > > } > > @@ -459,10 +467,25 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity) > > struct drm_gpu_scheduler *sched; > > struct drm_sched_rq *rq; > > > > - if (spsc_queue_count(&entity->job_queue) || !entity->sched_list) > > + /* single possible engine and already selected */ > > + if (!entity->sched_list) > > + return; > > + > > + /* queue non-empty, stay on the same engine */ > > + if (spsc_queue_count(&entity->job_queue)) > > return; > > > > - fence = READ_ONCE(entity->last_scheduled); > > + /* > > + * Only when the queue is empty are we guaranteed that the scheduler > > + * thread cannot change ->last_scheduled. To enforce ordering we need > > + * a read barrier here. See drm_sched_entity_pop_job() for the other > > + * side. > > + */ > > + smp_rmb(); > > + > > + fence = entity->last_scheduled; > > + > > + /* stay on the same engine if the previous job hasn't finished */ > > if (fence && !dma_fence_is_signaled(fence)) > > return;
Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch
Instead of just a callback we can just glue in the gem helpers that panfrost, v3d and lima currently use. There really aren't that many ways to skin this cat.
On the naming bikeshed: The idea for using _await_ to denote adding dependencies to a job comes from i915, where that's used quite extensively all over the place, in lots of datastructures.
v2/3: Rebased.
Reviewed-by: Steven Price steven.price@arm.com (v1) Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: David Airlie airlied@linux.ie Cc: Daniel Vetter daniel@ffwll.ch Cc: Sumit Semwal sumit.semwal@linaro.org Cc: "Christian König" christian.koenig@amd.com Cc: Andrey Grodzovsky andrey.grodzovsky@amd.com Cc: Lee Jones lee.jones@linaro.org Cc: Nirmoy Das nirmoy.aiemd@gmail.com Cc: Boris Brezillon boris.brezillon@collabora.com Cc: Luben Tuikov luben.tuikov@amd.com Cc: Alex Deucher alexander.deucher@amd.com Cc: Jack Zhang Jack.Zhang1@amd.com Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org --- drivers/gpu/drm/scheduler/sched_entity.c | 18 +++- drivers/gpu/drm/scheduler/sched_main.c | 103 +++++++++++++++++++++++ include/drm/gpu_scheduler.h | 31 ++++++- 3 files changed, 146 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c index 89e3f6eaf519..381fbf462ea7 100644 --- a/drivers/gpu/drm/scheduler/sched_entity.c +++ b/drivers/gpu/drm/scheduler/sched_entity.c @@ -211,6 +211,19 @@ static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f, job->sched->ops->free_job(job); }
+static struct dma_fence * +drm_sched_job_dependency(struct drm_sched_job *job, + struct drm_sched_entity *entity) +{ + if (!xa_empty(&job->dependencies)) + return xa_erase(&job->dependencies, job->last_dependency++); + + if (job->sched->ops->dependency) + return job->sched->ops->dependency(job, entity); + + return NULL; +} + /** * drm_sched_entity_kill_jobs - Make sure all remaining jobs are killed * @@ -229,7 +242,7 @@ static void drm_sched_entity_kill_jobs(struct drm_sched_entity *entity) struct drm_sched_fence *s_fence = job->s_fence;
/* Wait for all dependencies to avoid data corruptions */ - while ((f = job->sched->ops->dependency(job, entity))) + while ((f = drm_sched_job_dependency(job, entity))) dma_fence_wait(f, false);
drm_sched_fence_scheduled(s_fence); @@ -419,7 +432,6 @@ static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity) */ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity) { - struct drm_gpu_scheduler *sched = entity->rq->sched; struct drm_sched_job *sched_job;
sched_job = to_drm_sched_job(spsc_queue_peek(&entity->job_queue)); @@ -427,7 +439,7 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity) return NULL;
while ((entity->dependency = - sched->ops->dependency(sched_job, entity))) { + drm_sched_job_dependency(sched_job, entity))) { trace_drm_sched_job_wait_dep(sched_job, entity->dependency);
if (drm_sched_entity_add_dependency_cb(entity)) diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index 454cb6164bdc..84c30badb78e 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -603,6 +603,8 @@ int drm_sched_job_init(struct drm_sched_job *job,
INIT_LIST_HEAD(&job->list);
+ xa_init_flags(&job->dependencies, XA_FLAGS_ALLOC); + return 0; } EXPORT_SYMBOL(drm_sched_job_init); @@ -637,6 +639,98 @@ void drm_sched_job_arm(struct drm_sched_job *job) } EXPORT_SYMBOL(drm_sched_job_arm);
+/** + * drm_sched_job_await_fence - adds the fence as a job dependency + * @job: scheduler job to add the dependencies to + * @fence: the dma_fence to add to the list of dependencies. + * + * Note that @fence is consumed in both the success and error cases. + * + * Returns: + * 0 on success, or an error on failing to expand the array. + */ +int drm_sched_job_await_fence(struct drm_sched_job *job, + struct dma_fence *fence) +{ + struct dma_fence *entry; + unsigned long index; + u32 id = 0; + int ret; + + if (!fence) + return 0; + + /* Deduplicate if we already depend on a fence from the same context. + * This lets the size of the array of deps scale with the number of + * engines involved, rather than the number of BOs. + */ + xa_for_each(&job->dependencies, index, entry) { + if (entry->context != fence->context) + continue; + + if (dma_fence_is_later(fence, entry)) { + dma_fence_put(entry); + xa_store(&job->dependencies, index, fence, GFP_KERNEL); + } else { + dma_fence_put(fence); + } + return 0; + } + + ret = xa_alloc(&job->dependencies, &id, fence, xa_limit_32b, GFP_KERNEL); + if (ret != 0) + dma_fence_put(fence); + + return ret; +} +EXPORT_SYMBOL(drm_sched_job_await_fence); + +/** + * drm_sched_job_await_implicit - adds implicit dependencies as job dependencies + * @job: scheduler job to add the dependencies to + * @obj: the gem object to add new dependencies from. + * @write: whether the job might write the object (so we need to depend on + * shared fences in the reservation object). + * + * This should be called after drm_gem_lock_reservations() on your array of + * GEM objects used in the job but before updating the reservations with your + * own fences. + * + * Returns: + * 0 on success, or an error on failing to expand the array. + */ +int drm_sched_job_await_implicit(struct drm_sched_job *job, + struct drm_gem_object *obj, + bool write) +{ + int ret; + struct dma_fence **fences; + unsigned int i, fence_count; + + if (!write) { + struct dma_fence *fence = dma_resv_get_excl_unlocked(obj->resv); + + return drm_sched_job_await_fence(job, fence); + } + + ret = dma_resv_get_fences(obj->resv, NULL, &fence_count, &fences); + if (ret || !fence_count) + return ret; + + for (i = 0; i < fence_count; i++) { + ret = drm_sched_job_await_fence(job, fences[i]); + if (ret) + break; + } + + for (; i < fence_count; i++) + dma_fence_put(fences[i]); + kfree(fences); + return ret; +} +EXPORT_SYMBOL(drm_sched_job_await_implicit); + + /** * drm_sched_job_cleanup - clean up scheduler job resources * @job: scheduler job to clean up @@ -652,6 +746,9 @@ EXPORT_SYMBOL(drm_sched_job_arm); */ void drm_sched_job_cleanup(struct drm_sched_job *job) { + struct dma_fence *fence; + unsigned long index; + if (kref_read(&job->s_fence->finished.refcount)) { /* drm_sched_job_arm() has been called */ dma_fence_put(&job->s_fence->finished); @@ -661,6 +758,12 @@ void drm_sched_job_cleanup(struct drm_sched_job *job) }
job->s_fence = NULL; + + xa_for_each(&job->dependencies, index, fence) { + dma_fence_put(fence); + } + xa_destroy(&job->dependencies); + } EXPORT_SYMBOL(drm_sched_job_cleanup);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h index 83afc3aa8e2f..74fb321dbc44 100644 --- a/include/drm/gpu_scheduler.h +++ b/include/drm/gpu_scheduler.h @@ -27,9 +27,12 @@ #include <drm/spsc_queue.h> #include <linux/dma-fence.h> #include <linux/completion.h> +#include <linux/xarray.h>
#define MAX_WAIT_SCHED_ENTITY_Q_EMPTY msecs_to_jiffies(1000)
+struct drm_gem_object; + struct drm_gpu_scheduler; struct drm_sched_rq;
@@ -198,6 +201,16 @@ struct drm_sched_job { enum drm_sched_priority s_priority; struct drm_sched_entity *entity; struct dma_fence_cb cb; + /** + * @dependencies: + * + * Contains the dependencies as struct dma_fence for this job, see + * drm_sched_job_await_fence() and drm_sched_job_await_implicit(). + */ + struct xarray dependencies; + + /** @last_dependency: tracks @dependencies as they signal */ + unsigned long last_dependency; };
static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job, @@ -220,9 +233,14 @@ enum drm_gpu_sched_stat { */ struct drm_sched_backend_ops { /** - * @dependency: Called when the scheduler is considering scheduling - * this job next, to get another struct dma_fence for this job to - * block on. Once it returns NULL, run_job() may be called. + * @dependency: + * + * Called when the scheduler is considering scheduling this job next, to + * get another struct dma_fence for this job to block on. Once it + * returns NULL, run_job() may be called. + * + * If a driver exclusively uses drm_sched_job_await_fence() and + * drm_sched_job_await_implicit() this can be ommitted and left as NULL. */ struct dma_fence *(*dependency)(struct drm_sched_job *sched_job, struct drm_sched_entity *s_entity); @@ -349,6 +367,13 @@ int drm_sched_job_init(struct drm_sched_job *job, struct drm_sched_entity *entity, void *owner); void drm_sched_job_arm(struct drm_sched_job *job); +int drm_sched_job_await_fence(struct drm_sched_job *job, + struct dma_fence *fence); +int drm_sched_job_await_implicit(struct drm_sched_job *job, + struct drm_gem_object *obj, + bool write); + + void drm_sched_entity_modify_sched(struct drm_sched_entity *entity, struct drm_gpu_scheduler **sched_list, unsigned int num_sched_list);
Adding a few more people to this bikeshed.
On Mon, Jul 12, 2021 at 10:02 PM Daniel Vetter daniel.vetter@ffwll.ch wrote:
@@ -349,6 +367,13 @@ int drm_sched_job_init(struct drm_sched_job *job, struct drm_sched_entity *entity, void *owner); void drm_sched_job_arm(struct drm_sched_job *job); +int drm_sched_job_await_fence(struct drm_sched_job *job,
struct dma_fence *fence);
+int drm_sched_job_await_implicit(struct drm_sched_job *job,
struct drm_gem_object *obj,
bool write);
I'm still waiting on the paint delivery for these two functions so I can finish this shed.
Thanks, Daniel
void drm_sched_entity_modify_sched(struct drm_sched_entity *entity, struct drm_gpu_scheduler **sched_list, unsigned int num_sched_list); -- 2.32.0
Am 27.07.21 um 13:09 schrieb Daniel Vetter:
Adding a few more people to this bikeshed.
On Mon, Jul 12, 2021 at 10:02 PM Daniel Vetter daniel.vetter@ffwll.ch wrote:
@@ -349,6 +367,13 @@ int drm_sched_job_init(struct drm_sched_job *job, struct drm_sched_entity *entity, void *owner); void drm_sched_job_arm(struct drm_sched_job *job); +int drm_sched_job_await_fence(struct drm_sched_job *job,
struct dma_fence *fence);
+int drm_sched_job_await_implicit(struct drm_sched_job *job,
struct drm_gem_object *obj,
bool write);
I'm still waiting on the paint delivery for these two functions so I can finish this shed.
Well I wouldn't call that bike shedding, good names are important.
Just imaging we would have called the exclusive-fence write-fence instead.
What speaks against calling them add_dependency() and _add_implicit_dependencies()?
Regards, Christian.
Thanks, Daniel
void drm_sched_entity_modify_sched(struct drm_sched_entity *entity, struct drm_gpu_scheduler **sched_list, unsigned int num_sched_list); -- 2.32.0
On Wed, Jul 28, 2021 at 1:29 PM Christian König ckoenig.leichtzumerken@gmail.com wrote:
Am 27.07.21 um 13:09 schrieb Daniel Vetter:
Adding a few more people to this bikeshed.
On Mon, Jul 12, 2021 at 10:02 PM Daniel Vetter daniel.vetter@ffwll.ch wrote:
@@ -349,6 +367,13 @@ int drm_sched_job_init(struct drm_sched_job *job, struct drm_sched_entity *entity, void *owner); void drm_sched_job_arm(struct drm_sched_job *job); +int drm_sched_job_await_fence(struct drm_sched_job *job,
struct dma_fence *fence);
+int drm_sched_job_await_implicit(struct drm_sched_job *job,
struct drm_gem_object *obj,
bool write);
I'm still waiting on the paint delivery for these two functions so I can finish this shed.
Well I wouldn't call that bike shedding, good names are important.
Just imaging we would have called the exclusive-fence write-fence instead.
Sure, naming matters, but at least to my understanding of English there's no semantic difference between telling something to await something else (i.e. add a dependency) or telling something to add a dependency (i.e. await that thing later on before you start doing your own thing).
Exclusive vs write fence otoh is a pretty big difference in what it means.
But also if there's consensus that I'm wrong then I'm happy to pick the more preferred of the two options I deem equivalent.
What speaks against calling them add_dependency() and _add_implicit_depencencies() ?
Nothing. I'd just like another ack on this before I rename it all. Also I wasn't sure what you'd want to name the implicit dependency thing.
Lucas, Boris, Melissa, any acks here? -Daniel
Regards, Christian.
Thanks, Daniel
void drm_sched_entity_modify_sched(struct drm_sched_entity *entity, struct drm_gpu_scheduler **sched_list, unsigned int num_sched_list); -- 2.32.0
Am 28.07.21 um 14:09 schrieb Daniel Vetter:
On Wed, Jul 28, 2021 at 1:29 PM Christian König ckoenig.leichtzumerken@gmail.com wrote:
Am 27.07.21 um 13:09 schrieb Daniel Vetter:
Adding a few more people to this bikeshed.
On Mon, Jul 12, 2021 at 10:02 PM Daniel Vetter daniel.vetter@ffwll.ch wrote:
@@ -349,6 +367,13 @@ int drm_sched_job_init(struct drm_sched_job *job, struct drm_sched_entity *entity, void *owner); void drm_sched_job_arm(struct drm_sched_job *job); +int drm_sched_job_await_fence(struct drm_sched_job *job,
struct dma_fence *fence);
+int drm_sched_job_await_implicit(struct drm_sched_job *job,
struct drm_gem_object *obj,
bool write);
I'm still waiting on the paint delivery for these two functions so I can finish this shed.
Well I wouldn't call that bike shedding, good names are important.
Just imaging we would have called the exclusive-fence write-fence instead.
Sure naming matters, but at least to my English understanding there's not a semantic different between telling something to await for something else (i.e. add a dependency) or to tell something to add a dependency (i.e. await that thing later on before you start doing your own thing).
To be honest I had to google what await means when you first mentioned it, because I didn't have that in my English vocabulary.
(But I have to note that my English education is basically non-existent. I speak German and a good bunch of Dutch and just infer most of the words.)
Regards, Christian.
Exclusive vs write fence otoh is a pretty big difference in what it means.
But also if there's consensus that I'm wrong then I'm happy to pick the more preferred of the two options I deem equivalent.
What speaks against calling them add_dependency() and _add_implicit_depencencies() ?
Nothing. I just like another ack on this before I rename it all. Also I wasnt sure what you'd want to name the implicit dependency thing.
Lucas, Boris, Melissa, any acks here? -Daniel
Regards, Christian.
Thanks, Daniel
void drm_sched_entity_modify_sched(struct drm_sched_entity *entity, struct drm_gpu_scheduler **sched_list, unsigned int num_sched_list); -- 2.32.0
On 07/28, Daniel Vetter wrote:
On Wed, Jul 28, 2021 at 1:29 PM Christian König ckoenig.leichtzumerken@gmail.com wrote:
Am 27.07.21 um 13:09 schrieb Daniel Vetter:
Adding a few more people to this bikeshed.
On Mon, Jul 12, 2021 at 10:02 PM Daniel Vetter daniel.vetter@ffwll.ch wrote:
@@ -349,6 +367,13 @@ int drm_sched_job_init(struct drm_sched_job *job, struct drm_sched_entity *entity, void *owner); void drm_sched_job_arm(struct drm_sched_job *job); +int drm_sched_job_await_fence(struct drm_sched_job *job,
struct dma_fence *fence);
+int drm_sched_job_await_implicit(struct drm_sched_job *job,
struct drm_gem_object *obj,
bool write);
I'm still waiting on the paint delivery for these two functions so I can finish this shed.
Well I wouldn't call that bike shedding, good names are important.
Just imaging we would have called the exclusive-fence write-fence instead.
Sure naming matters, but at least to my English understanding there's not a semantic different between telling something to await for something else (i.e. add a dependency) or to tell something to add a dependency (i.e. await that thing later on before you start doing your own thing).
Exclusive vs write fence otoh is a pretty big difference in what it means.
But also if there's consensus that I'm wrong then I'm happy to pick the more preferred of the two options I deem equivalent.
What speaks against calling them add_dependency() and _add_implicit_depencencies() ?
Nothing. I just like another ack on this before I rename it all. Also I wasnt sure what you'd want to name the implicit dependency thing.
Lucas, Boris, Melissa, any acks here?
so, my English is far from good; but _add_dependency sounds good to me.
Melissa
-Daniel
Regards, Christian.
Thanks, Daniel
void drm_sched_entity_modify_sched(struct drm_sched_entity *entity, struct drm_gpu_scheduler **sched_list, unsigned int num_sched_list); -- 2.32.0
-- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch
Originally a job was only bound to the queue when we pushed it, but now that's done in drm_sched_job_init, making that parameter entirely redundant.
Remove it.
The same applies to the context parameter in lima_sched_context_queue_task, simplify that too.
Reviewed-by: Steven Price steven.price@arm.com (v1) Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Lucas Stach l.stach@pengutronix.de Cc: Russell King linux+etnaviv@armlinux.org.uk Cc: Christian Gmeiner christian.gmeiner@gmail.com Cc: Qiang Yu yuq825@gmail.com Cc: Rob Herring robh@kernel.org Cc: Tomeu Vizoso tomeu.vizoso@collabora.com Cc: Steven Price steven.price@arm.com Cc: Alyssa Rosenzweig alyssa.rosenzweig@collabora.com Cc: Emma Anholt emma@anholt.net Cc: David Airlie airlied@linux.ie Cc: Daniel Vetter daniel@ffwll.ch Cc: Sumit Semwal sumit.semwal@linaro.org Cc: "Christian König" christian.koenig@amd.com Cc: Alex Deucher alexander.deucher@amd.com Cc: Nirmoy Das nirmoy.das@amd.com Cc: Dave Airlie airlied@redhat.com Cc: Chen Li chenli@uniontech.com Cc: Lee Jones lee.jones@linaro.org Cc: Deepak R Varma mh12gx2825@gmail.com Cc: Kevin Wang kevin1.wang@amd.com Cc: Luben Tuikov luben.tuikov@amd.com Cc: "Marek Olšák" marek.olsak@amd.com Cc: Maarten Lankhorst maarten.lankhorst@linux.intel.com Cc: Andrey Grodzovsky andrey.grodzovsky@amd.com Cc: Dennis Li Dennis.Li@amd.com Cc: Boris Brezillon boris.brezillon@collabora.com Cc: etnaviv@lists.freedesktop.org Cc: lima@lists.freedesktop.org Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 2 +- drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 +- drivers/gpu/drm/etnaviv/etnaviv_sched.c | 2 +- drivers/gpu/drm/lima/lima_gem.c | 3 +-- drivers/gpu/drm/lima/lima_sched.c | 5 ++--- drivers/gpu/drm/lima/lima_sched.h | 3 +-- drivers/gpu/drm/panfrost/panfrost_job.c | 2 +- drivers/gpu/drm/scheduler/sched_entity.c | 6 ++---- drivers/gpu/drm/v3d/v3d_gem.c | 2 +- include/drm/gpu_scheduler.h | 3 +-- 10 files changed, 12 insertions(+), 18 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index a4ec092af9a7..18f63567fb69 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -1267,7 +1267,7 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
trace_amdgpu_cs_ioctl(job); amdgpu_vm_bo_trace_cs(&fpriv->vm, &p->ticket); - drm_sched_entity_push_job(&job->base, entity); + drm_sched_entity_push_job(&job->base);
amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c index 5ddb955d2315..b8609cccc9c1 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c @@ -174,7 +174,7 @@ int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity,
*f = dma_fence_get(&job->base.s_fence->finished); amdgpu_job_free_resources(job); - drm_sched_entity_push_job(&job->base, entity); + drm_sched_entity_push_job(&job->base);
return 0; } diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c index 05f412204118..180bb633d5c5 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c @@ -178,7 +178,7 @@ int etnaviv_sched_push_job(struct drm_sched_entity *sched_entity, /* the scheduler holds on to the job now */ kref_get(&submit->refcount);
- drm_sched_entity_push_job(&submit->sched_job, sched_entity); + drm_sched_entity_push_job(&submit->sched_job);
out_unlock: mutex_unlock(&submit->gpu->fence_lock); diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c index de62966243cd..c528f40981bb 100644 --- a/drivers/gpu/drm/lima/lima_gem.c +++ b/drivers/gpu/drm/lima/lima_gem.c @@ -359,8 +359,7 @@ int lima_gem_submit(struct drm_file *file, struct lima_submit *submit) goto err_out2; }
- fence = lima_sched_context_queue_task( - submit->ctx->context + submit->pipe, submit->task); + fence = lima_sched_context_queue_task(submit->task);
for (i = 0; i < submit->nr_bos; i++) { if (submit->bos[i].flags & LIMA_SUBMIT_BO_WRITE) diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c index 38f755580507..e968b5a8f0b0 100644 --- a/drivers/gpu/drm/lima/lima_sched.c +++ b/drivers/gpu/drm/lima/lima_sched.c @@ -177,13 +177,12 @@ void lima_sched_context_fini(struct lima_sched_pipe *pipe, drm_sched_entity_fini(&context->base); }
-struct dma_fence *lima_sched_context_queue_task(struct lima_sched_context *context, - struct lima_sched_task *task) +struct dma_fence *lima_sched_context_queue_task(struct lima_sched_task *task) { struct dma_fence *fence = dma_fence_get(&task->base.s_fence->finished);
trace_lima_task_submit(task); - drm_sched_entity_push_job(&task->base, &context->base); + drm_sched_entity_push_job(&task->base); return fence; }
diff --git a/drivers/gpu/drm/lima/lima_sched.h b/drivers/gpu/drm/lima/lima_sched.h index 90f03c48ef4a..ac70006b0e26 100644 --- a/drivers/gpu/drm/lima/lima_sched.h +++ b/drivers/gpu/drm/lima/lima_sched.h @@ -98,8 +98,7 @@ int lima_sched_context_init(struct lima_sched_pipe *pipe, atomic_t *guilty); void lima_sched_context_fini(struct lima_sched_pipe *pipe, struct lima_sched_context *context); -struct dma_fence *lima_sched_context_queue_task(struct lima_sched_context *context, - struct lima_sched_task *task); +struct dma_fence *lima_sched_context_queue_task(struct lima_sched_task *task);
int lima_sched_pipe_init(struct lima_sched_pipe *pipe, const char *name); void lima_sched_pipe_fini(struct lima_sched_pipe *pipe); diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c index 2992dc85325f..4bc962763e1f 100644 --- a/drivers/gpu/drm/panfrost/panfrost_job.c +++ b/drivers/gpu/drm/panfrost/panfrost_job.c @@ -301,7 +301,7 @@ int panfrost_job_push(struct panfrost_job *job)
kref_get(&job->refcount); /* put by scheduler job completion */
- drm_sched_entity_push_job(&job->base, entity); + drm_sched_entity_push_job(&job->base);
mutex_unlock(&pfdev->sched_lock);
diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c index 381fbf462ea7..e4d33db1eb45 100644 --- a/drivers/gpu/drm/scheduler/sched_entity.c +++ b/drivers/gpu/drm/scheduler/sched_entity.c @@ -516,9 +516,7 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
/** * drm_sched_entity_push_job - Submit a job to the entity's job queue - * * @sched_job: job to submit - * @entity: scheduler entity * * Note: To guarantee that the order of insertion to queue matches the job's * fence sequence number this function should be called with drm_sched_job_arm() @@ -526,9 +524,9 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity) * * Returns 0 for success, negative error code otherwise. */ -void drm_sched_entity_push_job(struct drm_sched_job *sched_job, - struct drm_sched_entity *entity) +void drm_sched_entity_push_job(struct drm_sched_job *sched_job) { + struct drm_sched_entity *entity = sched_job->entity; bool first;
trace_drm_sched_job(sched_job, entity); diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c index 5c3a99027ecd..69ac20e11b09 100644 --- a/drivers/gpu/drm/v3d/v3d_gem.c +++ b/drivers/gpu/drm/v3d/v3d_gem.c @@ -482,7 +482,7 @@ v3d_push_job(struct v3d_file_priv *v3d_priv, /* put by scheduler job completion */ kref_get(&job->refcount);
- drm_sched_entity_push_job(&job->base, &v3d_priv->sched_entity[queue]); + drm_sched_entity_push_job(&job->base);
return 0; } diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h index 74fb321dbc44..2bb1869f2352 100644 --- a/include/drm/gpu_scheduler.h +++ b/include/drm/gpu_scheduler.h @@ -407,8 +407,7 @@ void drm_sched_entity_fini(struct drm_sched_entity *entity); void drm_sched_entity_destroy(struct drm_sched_entity *entity); void drm_sched_entity_select_rq(struct drm_sched_entity *entity); struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity); -void drm_sched_entity_push_job(struct drm_sched_job *sched_job, - struct drm_sched_entity *entity); +void drm_sched_entity_push_job(struct drm_sched_job *sched_job); void drm_sched_entity_set_priority(struct drm_sched_entity *entity, enum drm_sched_priority priority); bool drm_sched_entity_is_ready(struct drm_sched_entity *entity);
I found a few too many things that are tricky and not documented, so I started typing.
I found a few more things that looked broken while typing; see the various FIXMEs in drm_sched_entity.
Also some of the usual cleanups: - actually include the sched_entity.c declarations; that was lost in the move in 620e762f9a98 ("drm/scheduler: move entity handling into separate file")
- Ditch the kerneldoc for internal functions, keep the comments where they're describing more than what the function name already implies.
- Switch drm_sched_entity to inline docs.
Signed-off-by: Daniel Vetter daniel.vetter@intel.com --- Documentation/gpu/drm-mm.rst | 3 + drivers/gpu/drm/scheduler/sched_entity.c | 85 ++++--------- include/drm/gpu_scheduler.h | 145 ++++++++++++++++++----- 3 files changed, 146 insertions(+), 87 deletions(-)
diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst index d5a73fa2c9ef..0198fa43d254 100644 --- a/Documentation/gpu/drm-mm.rst +++ b/Documentation/gpu/drm-mm.rst @@ -504,3 +504,6 @@ Scheduler Function References
.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c :export: + +.. kernel-doc:: drivers/gpu/drm/scheduler/sched_entity.c + :export: diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c index e4d33db1eb45..27e1573af96e 100644 --- a/drivers/gpu/drm/scheduler/sched_entity.c +++ b/drivers/gpu/drm/scheduler/sched_entity.c @@ -45,8 +45,14 @@ * @guilty: atomic_t set to 1 when a job on this queue * is found to be guilty causing a timeout * - * Note: the sched_list should have at least one element to schedule - * the entity + * Note that the &sched_list must have at least one element to schedule the entity. + * + * For changing @priority later on at runtime see + * drm_sched_entity_set_priority(). For changing the set of schedulers + * @sched_list at runtime see drm_sched_entity_modify_sched(). + * + * An entity is cleaned up by callind drm_sched_entity_fini(). See also + * drm_sched_entity_destroy(). * * Returns 0 on success or a negative error code on failure. */ @@ -92,6 +98,11 @@ EXPORT_SYMBOL(drm_sched_entity_init); * @sched_list: the list of new drm scheds which will replace * existing entity->sched_list * @num_sched_list: number of drm sched in sched_list + * + * Note that this must be called under the same common lock for @entity as + * drm_sched_job_arm() and drm_sched_entity_push_job(), or the driver needs to + * guarantee through some other means that this is never called while new jobs + * can be pushed to @entity. */ void drm_sched_entity_modify_sched(struct drm_sched_entity *entity, struct drm_gpu_scheduler **sched_list, @@ -104,13 +115,6 @@ void drm_sched_entity_modify_sched(struct drm_sched_entity *entity, } EXPORT_SYMBOL(drm_sched_entity_modify_sched);
-/** - * drm_sched_entity_is_idle - Check if entity is idle - * - * @entity: scheduler entity - * - * Returns true if the entity does not have any unscheduled jobs. - */ static bool drm_sched_entity_is_idle(struct drm_sched_entity *entity) { rmb(); /* for list_empty to work without lock */ @@ -123,13 +127,7 @@ static bool drm_sched_entity_is_idle(struct drm_sched_entity *entity) return false; }
-/** - * drm_sched_entity_is_ready - Check if entity is ready - * - * @entity: scheduler entity - * - * Return true if entity could provide a job. - */ +/* Return true if entity could provide a job. */ bool drm_sched_entity_is_ready(struct drm_sched_entity *entity) { if (spsc_queue_peek(&entity->job_queue) == NULL) @@ -192,14 +190,7 @@ long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout) } EXPORT_SYMBOL(drm_sched_entity_flush);
-/** - * drm_sched_entity_kill_jobs_cb - helper for drm_sched_entity_kill_jobs - * - * @f: signaled fence - * @cb: our callback structure - * - * Signal the scheduler finished fence when the entity in question is killed. - */ +/* Signal the scheduler finished fence when the entity in question is killed. */ static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f, struct dma_fence_cb *cb) { @@ -224,14 +215,6 @@ drm_sched_job_dependency(struct drm_sched_job *job, return NULL; }
-/** - * drm_sched_entity_kill_jobs - Make sure all remaining jobs are killed - * - * @entity: entity which is cleaned up - * - * Makes sure that all remaining jobs in an entity are killed before it is - * destroyed. - */ static void drm_sched_entity_kill_jobs(struct drm_sched_entity *entity) { struct drm_sched_job *job; @@ -273,9 +256,11 @@ static void drm_sched_entity_kill_jobs(struct drm_sched_entity *entity) * * @entity: scheduler entity * - * This should be called after @drm_sched_entity_do_release. It goes over the - * entity and signals all jobs with an error code if the process was killed. + * Cleanups up @entity which has been initialized by drm_sched_entity_init(). * + * If there are potentially job still in flight or getting newly queued + * drm_sched_entity_flush() must be called first. This function then goes over + * the entity and signals all jobs with an error code if the process was killed. */ void drm_sched_entity_fini(struct drm_sched_entity *entity) { @@ -315,10 +300,10 @@ EXPORT_SYMBOL(drm_sched_entity_fini);
/** * drm_sched_entity_destroy - Destroy a context entity - * * @entity: scheduler entity * - * Calls drm_sched_entity_do_release() and drm_sched_entity_cleanup() + * Calls drm_sched_entity_flush() and drm_sched_entity_fini() as a + * convenience wrapper. */ void drm_sched_entity_destroy(struct drm_sched_entity *entity) { @@ -327,9 +312,7 @@ void drm_sched_entity_destroy(struct drm_sched_entity *entity) } EXPORT_SYMBOL(drm_sched_entity_destroy);
-/* - * drm_sched_entity_clear_dep - callback to clear the entities dependency - */ +/* drm_sched_entity_clear_dep - callback to clear the entities dependency */ static void drm_sched_entity_clear_dep(struct dma_fence *f, struct dma_fence_cb *cb) { @@ -371,11 +354,7 @@ void drm_sched_entity_set_priority(struct drm_sched_entity *entity, } EXPORT_SYMBOL(drm_sched_entity_set_priority);
-/** - * drm_sched_entity_add_dependency_cb - add callback for the entities dependency - * - * @entity: entity with dependency - * +/* * Add a callback to the current dependency of the entity to wake up the * scheduler when the entity becomes available. */ @@ -423,13 +402,6 @@ static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity) return false; }
-/** - * drm_sched_entity_pop_job - get a ready to be scheduled job from the entity - * - * @entity: entity to get the job from - * - * Process all dependencies and try to get one job from the entities queue. - */ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity) { struct drm_sched_job *sched_job; @@ -465,14 +437,6 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity) return sched_job; }
-/** - * drm_sched_entity_select_rq - select a new rq for the entity - * - * @entity: scheduler entity - * - * Check all prerequisites and select a new rq for the entity for load - * balancing. - */ void drm_sched_entity_select_rq(struct drm_sched_entity *entity) { struct dma_fence *fence; @@ -520,7 +484,8 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity) * * Note: To guarantee that the order of insertion to queue matches the job's * fence sequence number this function should be called with drm_sched_job_arm() - * under common lock. + * under common lock for the struct drm_sched_entity that was set up for + * @sched_job in drm_sched_job_init(). * * Returns 0 for success, negative error code otherwise. */ diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h index 2bb1869f2352..4451336bc758 100644 --- a/include/drm/gpu_scheduler.h +++ b/include/drm/gpu_scheduler.h @@ -53,56 +53,147 @@ enum drm_sched_priority { * struct drm_sched_entity - A wrapper around a job queue (typically * attached to the DRM file_priv). * - * @list: used to append this struct to the list of entities in the - * runqueue. - * @rq: runqueue on which this entity is currently scheduled. - * @sched_list: A list of schedulers (drm_gpu_schedulers). - * Jobs from this entity can be scheduled on any scheduler - * on this list. - * @num_sched_list: number of drm_gpu_schedulers in the sched_list. - * @priority: priority of the entity - * @rq_lock: lock to modify the runqueue to which this entity belongs. - * @job_queue: the list of jobs of this entity. - * @fence_seq: a linearly increasing seqno incremented with each - * new &drm_sched_fence which is part of the entity. - * @fence_context: a unique context for all the fences which belong - * to this entity. - * The &drm_sched_fence.scheduled uses the - * fence_context but &drm_sched_fence.finished uses - * fence_context + 1. - * @dependency: the dependency fence of the job which is on the top - * of the job queue. - * @cb: callback for the dependency fence above. - * @guilty: points to ctx's guilty. - * @fini_status: contains the exit status in case the process was signalled. - * @last_scheduled: points to the finished fence of the last scheduled job. - * @last_user: last group leader pushing a job into the entity. - * @stopped: Marks the enity as removed from rq and destined for termination. - * @entity_idle: Signals when enityt is not in use - * * Entities will emit jobs in order to their corresponding hardware * ring, and the scheduler will alternate between entities based on * scheduling policy. */ struct drm_sched_entity { + /** + * @list: + * + * Used to append this struct to the list of entities in the runqueue + * @rq under &drm_sched_rq.entities. + * + * Protected by &drm_sched_rq.lock of @rq. + */ struct list_head list; + + /** + * @rq: + * + * Runqueue on which this entity is currently scheduled. + * + * FIXME: Locking is very unclear for this. Writers are protected by + * @rq_lock, but readers are generally lockless and seem to just race + * with not even a READ_ONCE. + */ struct drm_sched_rq *rq; + + /** + * @sched_list: + * + * A list of schedulers (struct drm_gpu_scheduler). Jobs from this entity can + * be scheduled on any scheduler on this list. + * + * This can be modified by calling drm_sched_entity_modify_sched(). + * Locking is entirely up to the driver, see the above function for more + * details. + * + * This will be set to NULL if &num_sched_list equals 1 and @rq has been + * set already. 
+ * + * FIXME: This means priority changes through + * drm_sched_entity_set_priority() will be lost henceforth in this case. + */ struct drm_gpu_scheduler **sched_list; + + /** + * @num_sched_list: + * + * Number of drm_gpu_schedulers in the @sched_list. + */ unsigned int num_sched_list; + + /** + * @priority: + * + * Priority of the entity. This can be modified by calling + * drm_sched_entity_set_priority(). Protected by &rq_lock. + */ enum drm_sched_priority priority; + + /** + * @rq_lock: + * + * Lock to modify the runqueue to which this entity belongs. + */ spinlock_t rq_lock;
+ /** + * @job_queue: the list of jobs of this entity. + */ struct spsc_queue job_queue;
+ /** + * @fence_seq: + * + * A linearly increasing seqno incremented with each new + * &drm_sched_fence which is part of the entity. + * + * FIXME: Callers of drm_sched_job_arm() need to ensure correct locking, + * this doesn't need to be atomic. + */ atomic_t fence_seq; + + /** + * @fence_context: + * + * A unique context for all the fences which belong to this entity. The + * &drm_sched_fence.scheduled uses the fence_context but + * &drm_sched_fence.finished uses fence_context + 1. + */ uint64_t fence_context;
+ /** + * @dependency: + * + * The dependency fence of the job which is on the top of the job queue. + */ struct dma_fence *dependency; + + /** + * @cb: + * + * Callback for the dependency fence above. + */ struct dma_fence_cb cb; + + /** + * @guilty: + * + * Points to entities' guilty. + */ atomic_t *guilty; + + /** + * @last_scheduled: + * + * Points to the finished fence of the last scheduled job. Only written + * by the scheduler thread, can be accessed locklessly from + * drm_sched_job_arm() iff the queue is empty. + */ struct dma_fence *last_scheduled; + + /** + * @last_user: last group leader pushing a job into the entity. + */ struct task_struct *last_user; + + /** + * @stopped: + * + * Marks the enity as removed from rq and destined for + * termination. This is set by calling drm_sched_entity_flush() and by + * drm_sched_fini(). + */ bool stopped; + + /** + * @entity_idle: + * + * Signals when entity is not in use, used to sequence entity cleanup in + * drm_sched_entity_fini(). + */ struct completion entity_idle; };
Just deletes some code that's now more shared.
Note that thanks to the split into drm_sched_job_init/arm we can now easily pull the _init() part from under the submission lock way ahead where we're adding the sync file in-fences as dependencies.
v2: Correctly clean up the partially set up job, now that job_init() and job_arm() are apart (Emma).
Reviewed-by: Steven Price steven.price@arm.com Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Rob Herring robh@kernel.org Cc: Tomeu Vizoso tomeu.vizoso@collabora.com Cc: Steven Price steven.price@arm.com Cc: Alyssa Rosenzweig alyssa.rosenzweig@collabora.com Cc: Sumit Semwal sumit.semwal@linaro.org Cc: "Christian König" christian.koenig@amd.com Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org --- drivers/gpu/drm/panfrost/panfrost_drv.c | 16 ++++++++--- drivers/gpu/drm/panfrost/panfrost_job.c | 37 +++---------------------- drivers/gpu/drm/panfrost/panfrost_job.h | 5 +--- 3 files changed, 17 insertions(+), 41 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c index 1ffaef5ec5ff..9f53bea07d61 100644 --- a/drivers/gpu/drm/panfrost/panfrost_drv.c +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c @@ -218,7 +218,7 @@ panfrost_copy_in_sync(struct drm_device *dev, if (ret) goto fail;
- ret = drm_gem_fence_array_add(&job->deps, fence); + ret = drm_sched_job_await_fence(&job->base, fence);
if (ret) goto fail; @@ -236,7 +236,7 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data, struct drm_panfrost_submit *args = data; struct drm_syncobj *sync_out = NULL; struct panfrost_job *job; - int ret = 0; + int ret = 0, slot;
if (!args->jc) return -EINVAL; @@ -258,14 +258,20 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
kref_init(&job->refcount);
- xa_init_flags(&job->deps, XA_FLAGS_ALLOC); - job->pfdev = pfdev; job->jc = args->jc; job->requirements = args->requirements; job->flush_id = panfrost_gpu_get_latest_flush_id(pfdev); job->file_priv = file->driver_priv;
+ slot = panfrost_job_get_slot(job); + + ret = drm_sched_job_init(&job->base, + &job->file_priv->sched_entity[slot], + NULL); + if (ret) + goto fail_job_put; + ret = panfrost_copy_in_sync(dev, file, args, job); if (ret) goto fail_job; @@ -283,6 +289,8 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data, drm_syncobj_replace_fence(sync_out, job->render_done_fence);
fail_job: + drm_sched_job_cleanup(&job->base); +fail_job_put: panfrost_job_put(job); fail_out_sync: if (sync_out) diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c index 4bc962763e1f..86c843d8822e 100644 --- a/drivers/gpu/drm/panfrost/panfrost_job.c +++ b/drivers/gpu/drm/panfrost/panfrost_job.c @@ -102,7 +102,7 @@ static struct dma_fence *panfrost_fence_create(struct panfrost_device *pfdev, in return &fence->base; }
-static int panfrost_job_get_slot(struct panfrost_job *job) +int panfrost_job_get_slot(struct panfrost_job *job) { /* JS0: fragment jobs. * JS1: vertex/tiler jobs @@ -242,13 +242,13 @@ static void panfrost_job_hw_submit(struct panfrost_job *job, int js)
static int panfrost_acquire_object_fences(struct drm_gem_object **bos, int bo_count, - struct xarray *deps) + struct drm_sched_job *job) { int i, ret;
for (i = 0; i < bo_count; i++) { /* panfrost always uses write mode in its current uapi */ - ret = drm_gem_fence_array_add_implicit(deps, bos[i], true); + ret = drm_sched_job_await_implicit(job, bos[i], true); if (ret) return ret; } @@ -269,31 +269,21 @@ static void panfrost_attach_object_fences(struct drm_gem_object **bos, int panfrost_job_push(struct panfrost_job *job) { struct panfrost_device *pfdev = job->pfdev; - int slot = panfrost_job_get_slot(job); - struct drm_sched_entity *entity = &job->file_priv->sched_entity[slot]; struct ww_acquire_ctx acquire_ctx; int ret = 0;
- ret = drm_gem_lock_reservations(job->bos, job->bo_count, &acquire_ctx); if (ret) return ret;
mutex_lock(&pfdev->sched_lock); - - ret = drm_sched_job_init(&job->base, entity, NULL); - if (ret) { - mutex_unlock(&pfdev->sched_lock); - goto unlock; - } - drm_sched_job_arm(&job->base);
job->render_done_fence = dma_fence_get(&job->base.s_fence->finished);
ret = panfrost_acquire_object_fences(job->bos, job->bo_count, - &job->deps); + &job->base); if (ret) { mutex_unlock(&pfdev->sched_lock); goto unlock; @@ -318,15 +308,8 @@ static void panfrost_job_cleanup(struct kref *ref) { struct panfrost_job *job = container_of(ref, struct panfrost_job, refcount); - struct dma_fence *fence; - unsigned long index; unsigned int i;
- xa_for_each(&job->deps, index, fence) { - dma_fence_put(fence); - } - xa_destroy(&job->deps); - dma_fence_put(job->done_fence); dma_fence_put(job->render_done_fence);
@@ -365,17 +348,6 @@ static void panfrost_job_free(struct drm_sched_job *sched_job) panfrost_job_put(job); }
-static struct dma_fence *panfrost_job_dependency(struct drm_sched_job *sched_job, - struct drm_sched_entity *s_entity) -{ - struct panfrost_job *job = to_panfrost_job(sched_job); - - if (!xa_empty(&job->deps)) - return xa_erase(&job->deps, job->last_dep++); - - return NULL; -} - static struct dma_fence *panfrost_job_run(struct drm_sched_job *sched_job) { struct panfrost_job *job = to_panfrost_job(sched_job); @@ -765,7 +737,6 @@ static void panfrost_reset_work(struct work_struct *work) }
static const struct drm_sched_backend_ops panfrost_sched_ops = { - .dependency = panfrost_job_dependency, .run_job = panfrost_job_run, .timedout_job = panfrost_job_timedout, .free_job = panfrost_job_free diff --git a/drivers/gpu/drm/panfrost/panfrost_job.h b/drivers/gpu/drm/panfrost/panfrost_job.h index 82306a03b57e..77e6d0e6f612 100644 --- a/drivers/gpu/drm/panfrost/panfrost_job.h +++ b/drivers/gpu/drm/panfrost/panfrost_job.h @@ -19,10 +19,6 @@ struct panfrost_job { struct panfrost_device *pfdev; struct panfrost_file_priv *file_priv;
- /* Contains both explicit and implicit fences */ - struct xarray deps; - unsigned long last_dep; - /* Fence to be signaled by IRQ handler when the job is complete. */ struct dma_fence *done_fence;
@@ -42,6 +38,7 @@ int panfrost_job_init(struct panfrost_device *pfdev); void panfrost_job_fini(struct panfrost_device *pfdev); int panfrost_job_open(struct panfrost_file_priv *panfrost_priv); void panfrost_job_close(struct panfrost_file_priv *panfrost_priv); +int panfrost_job_get_slot(struct panfrost_job *job); int panfrost_job_push(struct panfrost_job *job); void panfrost_job_put(struct panfrost_job *job); void panfrost_job_enable_interrupts(struct panfrost_device *pfdev);
Nothing special going on here.
An aside from reviewing the code: it seems like drm_sched_job_arm() should be moved into lima_sched_context_queue_task and put under some mutex together with drm_sched_push_job(). See the kerneldoc for drm_sched_push_job(); a sketch of what that would look like follows below.
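Concretely, that suggestion amounts to something like this; ctx->submit_lock is a hypothetical lock shared by everyone pushing to this entity (lima has no such lock today, and it isn't plumbed through in this sketch):

    struct dma_fence *lima_sched_context_queue_task(struct lima_sched_task *task)
    {
        struct dma_fence *fence;

        mutex_lock(&ctx->submit_lock);     /* hypothetical common submit lock */
        drm_sched_job_arm(&task->base);
        fence = dma_fence_get(&task->base.s_fence->finished);
        trace_lima_task_submit(task);
        drm_sched_entity_push_job(&task->base);
        mutex_unlock(&ctx->submit_lock);

        return fence;
    }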
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Qiang Yu yuq825@gmail.com Cc: Sumit Semwal sumit.semwal@linaro.org Cc: "Christian König" christian.koenig@amd.com Cc: lima@lists.freedesktop.org Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org --- drivers/gpu/drm/lima/lima_gem.c | 4 ++-- drivers/gpu/drm/lima/lima_sched.c | 21 --------------------- drivers/gpu/drm/lima/lima_sched.h | 3 --- 3 files changed, 2 insertions(+), 26 deletions(-)
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c index c528f40981bb..e54a88d5037a 100644 --- a/drivers/gpu/drm/lima/lima_gem.c +++ b/drivers/gpu/drm/lima/lima_gem.c @@ -267,7 +267,7 @@ static int lima_gem_sync_bo(struct lima_sched_task *task, struct lima_bo *bo, if (explicit) return 0;
- return drm_gem_fence_array_add_implicit(&task->deps, &bo->base.base, write); + return drm_sched_job_await_implicit(&task->base, &bo->base.base, write); }
static int lima_gem_add_deps(struct drm_file *file, struct lima_submit *submit) @@ -285,7 +285,7 @@ static int lima_gem_add_deps(struct drm_file *file, struct lima_submit *submit) if (err) return err;
- err = drm_gem_fence_array_add(&submit->task->deps, fence); + err = drm_sched_job_await_fence(&submit->task->base, fence); if (err) { dma_fence_put(fence); return err; diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c index e968b5a8f0b0..99d5f6f1a882 100644 --- a/drivers/gpu/drm/lima/lima_sched.c +++ b/drivers/gpu/drm/lima/lima_sched.c @@ -134,24 +134,15 @@ int lima_sched_task_init(struct lima_sched_task *task, task->num_bos = num_bos; task->vm = lima_vm_get(vm);
- xa_init_flags(&task->deps, XA_FLAGS_ALLOC); - return 0; }
void lima_sched_task_fini(struct lima_sched_task *task) { - struct dma_fence *fence; - unsigned long index; int i;
drm_sched_job_cleanup(&task->base);
- xa_for_each(&task->deps, index, fence) { - dma_fence_put(fence); - } - xa_destroy(&task->deps); - if (task->bos) { for (i = 0; i < task->num_bos; i++) drm_gem_object_put(&task->bos[i]->base.base); @@ -186,17 +177,6 @@ struct dma_fence *lima_sched_context_queue_task(struct lima_sched_task *task) return fence; }
-static struct dma_fence *lima_sched_dependency(struct drm_sched_job *job, - struct drm_sched_entity *entity) -{ - struct lima_sched_task *task = to_lima_task(job); - - if (!xa_empty(&task->deps)) - return xa_erase(&task->deps, task->last_dep++); - - return NULL; -} - static int lima_pm_busy(struct lima_device *ldev) { int ret; @@ -472,7 +452,6 @@ static void lima_sched_free_job(struct drm_sched_job *job) }
static const struct drm_sched_backend_ops lima_sched_ops = { - .dependency = lima_sched_dependency, .run_job = lima_sched_run_job, .timedout_job = lima_sched_timedout_job, .free_job = lima_sched_free_job, diff --git a/drivers/gpu/drm/lima/lima_sched.h b/drivers/gpu/drm/lima/lima_sched.h index ac70006b0e26..6a11764d87b3 100644 --- a/drivers/gpu/drm/lima/lima_sched.h +++ b/drivers/gpu/drm/lima/lima_sched.h @@ -23,9 +23,6 @@ struct lima_sched_task { struct lima_vm *vm; void *frame;
- struct xarray deps; - unsigned long last_dep; - struct lima_bo **bos; int num_bos;
Prep work for using the scheduler dependency handling. We need to call drm_sched_job_init earlier so we can use the new drm_sched_job_await* functions for dependency handling here.
v2: Slightly better commit message and rebase to include the drm_sched_job_arm() call (Emma).
v3: Clean up jobs under construction correctly (Emma)
Cc: Melissa Wen melissa.srw@gmail.com Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Emma Anholt emma@anholt.net --- drivers/gpu/drm/v3d/v3d_drv.h | 1 + drivers/gpu/drm/v3d/v3d_gem.c | 88 ++++++++++++++------------------- drivers/gpu/drm/v3d/v3d_sched.c | 15 +++--- 3 files changed, 44 insertions(+), 60 deletions(-)
diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h index 8a390738d65b..1d870261eaac 100644 --- a/drivers/gpu/drm/v3d/v3d_drv.h +++ b/drivers/gpu/drm/v3d/v3d_drv.h @@ -332,6 +332,7 @@ int v3d_submit_csd_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); int v3d_wait_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +void v3d_job_cleanup(struct v3d_job *job); void v3d_job_put(struct v3d_job *job); void v3d_reset(struct v3d_dev *v3d); void v3d_invalidate_caches(struct v3d_dev *v3d); diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c index 69ac20e11b09..5eccd3658938 100644 --- a/drivers/gpu/drm/v3d/v3d_gem.c +++ b/drivers/gpu/drm/v3d/v3d_gem.c @@ -392,6 +392,12 @@ v3d_render_job_free(struct kref *ref) v3d_job_free(ref); }
+void v3d_job_cleanup(struct v3d_job *job) +{ + drm_sched_job_cleanup(&job->base); + v3d_job_put(job); +} + void v3d_job_put(struct v3d_job *job) { kref_put(&job->refcount, job->free); @@ -433,9 +439,10 @@ v3d_wait_bo_ioctl(struct drm_device *dev, void *data, static int v3d_job_init(struct v3d_dev *v3d, struct drm_file *file_priv, struct v3d_job *job, void (*free)(struct kref *ref), - u32 in_sync) + u32 in_sync, enum v3d_queue queue) { struct dma_fence *in_fence = NULL; + struct v3d_file_priv *v3d_priv = file_priv->driver_priv; int ret;
job->v3d = v3d; @@ -446,35 +453,33 @@ v3d_job_init(struct v3d_dev *v3d, struct drm_file *file_priv, return ret;
xa_init_flags(&job->deps, XA_FLAGS_ALLOC); + ret = drm_sched_job_init(&job->base, &v3d_priv->sched_entity[queue], + v3d_priv); + if (ret) + goto fail;
ret = drm_syncobj_find_fence(file_priv, in_sync, 0, 0, &in_fence); if (ret == -EINVAL) - goto fail; + goto fail_job;
ret = drm_gem_fence_array_add(&job->deps, in_fence); if (ret) - goto fail; + goto fail_job;
kref_init(&job->refcount);
return 0; +fail_job: + drm_sched_job_cleanup(&job->base); fail: xa_destroy(&job->deps); pm_runtime_put_autosuspend(v3d->drm.dev); return ret; }
-static int -v3d_push_job(struct v3d_file_priv *v3d_priv, - struct v3d_job *job, enum v3d_queue queue) +static void +v3d_push_job(struct v3d_job *job) { - int ret; - - ret = drm_sched_job_init(&job->base, &v3d_priv->sched_entity[queue], - v3d_priv); - if (ret) - return ret; - drm_sched_job_arm(&job->base);
job->done_fence = dma_fence_get(&job->base.s_fence->finished); @@ -483,8 +488,6 @@ v3d_push_job(struct v3d_file_priv *v3d_priv, kref_get(&job->refcount);
drm_sched_entity_push_job(&job->base); - - return 0; }
static void @@ -530,7 +533,6 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) { struct v3d_dev *v3d = to_v3d_dev(dev); - struct v3d_file_priv *v3d_priv = file_priv->driver_priv; struct drm_v3d_submit_cl *args = data; struct v3d_bin_job *bin = NULL; struct v3d_render_job *render; @@ -556,7 +558,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data, INIT_LIST_HEAD(&render->unref_list);
ret = v3d_job_init(v3d, file_priv, &render->base, - v3d_render_job_free, args->in_sync_rcl); + v3d_render_job_free, args->in_sync_rcl, V3D_RENDER); if (ret) { kfree(render); return ret; @@ -570,7 +572,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data, }
ret = v3d_job_init(v3d, file_priv, &bin->base, - v3d_job_free, args->in_sync_bcl); + v3d_job_free, args->in_sync_bcl, V3D_BIN); if (ret) { v3d_job_put(&render->base); kfree(bin); @@ -592,7 +594,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data, goto fail; }
- ret = v3d_job_init(v3d, file_priv, clean_job, v3d_job_free, 0); + ret = v3d_job_init(v3d, file_priv, clean_job, v3d_job_free, 0, V3D_CACHE_CLEAN); if (ret) { kfree(clean_job); clean_job = NULL; @@ -615,9 +617,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
mutex_lock(&v3d->sched_lock); if (bin) { - ret = v3d_push_job(v3d_priv, &bin->base, V3D_BIN); - if (ret) - goto fail_unreserve; + v3d_push_job(&bin->base);
ret = drm_gem_fence_array_add(&render->base.deps, dma_fence_get(bin->base.done_fence)); @@ -625,9 +625,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data, goto fail_unreserve; }
- ret = v3d_push_job(v3d_priv, &render->base, V3D_RENDER); - if (ret) - goto fail_unreserve; + v3d_push_job(&render->base);
if (clean_job) { struct dma_fence *render_fence = @@ -635,9 +633,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data, ret = drm_gem_fence_array_add(&clean_job->deps, render_fence); if (ret) goto fail_unreserve; - ret = v3d_push_job(v3d_priv, clean_job, V3D_CACHE_CLEAN); - if (ret) - goto fail_unreserve; + v3d_push_job(clean_job); }
mutex_unlock(&v3d->sched_lock); @@ -662,10 +658,10 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data, last_job->bo_count, &acquire_ctx); fail: if (bin) - v3d_job_put(&bin->base); - v3d_job_put(&render->base); + v3d_job_cleanup(&bin->base); + v3d_job_cleanup(&render->base); if (clean_job) - v3d_job_put(clean_job); + v3d_job_cleanup(clean_job);
return ret; } @@ -684,7 +680,6 @@ v3d_submit_tfu_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) { struct v3d_dev *v3d = to_v3d_dev(dev); - struct v3d_file_priv *v3d_priv = file_priv->driver_priv; struct drm_v3d_submit_tfu *args = data; struct v3d_tfu_job *job; struct ww_acquire_ctx acquire_ctx; @@ -697,7 +692,7 @@ v3d_submit_tfu_ioctl(struct drm_device *dev, void *data, return -ENOMEM;
ret = v3d_job_init(v3d, file_priv, &job->base, - v3d_job_free, args->in_sync); + v3d_job_free, args->in_sync, V3D_TFU); if (ret) { kfree(job); return ret; @@ -741,9 +736,7 @@ v3d_submit_tfu_ioctl(struct drm_device *dev, void *data, goto fail;
mutex_lock(&v3d->sched_lock); - ret = v3d_push_job(v3d_priv, &job->base, V3D_TFU); - if (ret) - goto fail_unreserve; + v3d_push_job(&job->base); mutex_unlock(&v3d->sched_lock);
v3d_attach_fences_and_unlock_reservation(file_priv, @@ -755,12 +748,8 @@ v3d_submit_tfu_ioctl(struct drm_device *dev, void *data,
return 0;
-fail_unreserve: - mutex_unlock(&v3d->sched_lock); - drm_gem_unlock_reservations(job->base.bo, job->base.bo_count, - &acquire_ctx); fail: - v3d_job_put(&job->base); + v3d_job_cleanup(&job->base);
return ret; } @@ -779,7 +768,6 @@ v3d_submit_csd_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) { struct v3d_dev *v3d = to_v3d_dev(dev); - struct v3d_file_priv *v3d_priv = file_priv->driver_priv; struct drm_v3d_submit_csd *args = data; struct v3d_csd_job *job; struct v3d_job *clean_job; @@ -798,7 +786,7 @@ v3d_submit_csd_ioctl(struct drm_device *dev, void *data, return -ENOMEM;
ret = v3d_job_init(v3d, file_priv, &job->base, - v3d_job_free, args->in_sync); + v3d_job_free, args->in_sync, V3D_CSD); if (ret) { kfree(job); return ret; @@ -811,7 +799,7 @@ v3d_submit_csd_ioctl(struct drm_device *dev, void *data, return -ENOMEM; }
- ret = v3d_job_init(v3d, file_priv, clean_job, v3d_job_free, 0); + ret = v3d_job_init(v3d, file_priv, clean_job, v3d_job_free, 0, V3D_CACHE_CLEAN); if (ret) { v3d_job_put(&job->base); kfree(clean_job); @@ -830,18 +818,14 @@ v3d_submit_csd_ioctl(struct drm_device *dev, void *data, goto fail;
mutex_lock(&v3d->sched_lock); - ret = v3d_push_job(v3d_priv, &job->base, V3D_CSD); - if (ret) - goto fail_unreserve; + v3d_push_job(&job->base);
ret = drm_gem_fence_array_add(&clean_job->deps, dma_fence_get(job->base.done_fence)); if (ret) goto fail_unreserve;
- ret = v3d_push_job(v3d_priv, clean_job, V3D_CACHE_CLEAN); - if (ret) - goto fail_unreserve; + v3d_push_job(clean_job); mutex_unlock(&v3d->sched_lock);
v3d_attach_fences_and_unlock_reservation(file_priv, @@ -860,8 +844,8 @@ v3d_submit_csd_ioctl(struct drm_device *dev, void *data, drm_gem_unlock_reservations(clean_job->bo, clean_job->bo_count, &acquire_ctx); fail: - v3d_job_put(&job->base); - v3d_job_put(clean_job); + v3d_job_cleanup(&job->base); + v3d_job_cleanup(clean_job);
return ret; } diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c index a39bdd5cfc4f..3f352d73af9c 100644 --- a/drivers/gpu/drm/v3d/v3d_sched.c +++ b/drivers/gpu/drm/v3d/v3d_sched.c @@ -55,12 +55,11 @@ to_csd_job(struct drm_sched_job *sched_job) }
static void -v3d_job_free(struct drm_sched_job *sched_job) +v3d_sched_job_free(struct drm_sched_job *sched_job) { struct v3d_job *job = to_v3d_job(sched_job);
- drm_sched_job_cleanup(sched_job); - v3d_job_put(job); + v3d_job_cleanup(job); }
/* @@ -360,35 +359,35 @@ static const struct drm_sched_backend_ops v3d_bin_sched_ops = { .dependency = v3d_job_dependency, .run_job = v3d_bin_job_run, .timedout_job = v3d_bin_job_timedout, - .free_job = v3d_job_free, + .free_job = v3d_sched_job_free, };
static const struct drm_sched_backend_ops v3d_render_sched_ops = { .dependency = v3d_job_dependency, .run_job = v3d_render_job_run, .timedout_job = v3d_render_job_timedout, - .free_job = v3d_job_free, + .free_job = v3d_sched_job_free, };
static const struct drm_sched_backend_ops v3d_tfu_sched_ops = { .dependency = v3d_job_dependency, .run_job = v3d_tfu_job_run, .timedout_job = v3d_generic_job_timedout, - .free_job = v3d_job_free, + .free_job = v3d_sched_job_free, };
static const struct drm_sched_backend_ops v3d_csd_sched_ops = { .dependency = v3d_job_dependency, .run_job = v3d_csd_job_run, .timedout_job = v3d_csd_job_timedout, - .free_job = v3d_job_free + .free_job = v3d_sched_job_free };
static const struct drm_sched_backend_ops v3d_cache_clean_sched_ops = { .dependency = v3d_job_dependency, .run_job = v3d_cache_clean_job_run, .timedout_job = v3d_generic_job_timedout, - .free_job = v3d_job_free + .free_job = v3d_sched_job_free };
int
On 07/12, Daniel Vetter wrote:
Prep work for using the scheduler dependency handling. We need to call drm_sched_job_init earlier so we can use the new drm_sched_job_await* functions for dependency handling here.
v2: Slightly better commit message and rebase to include the drm_sched_job_arm() call (Emma).
v3: Cleanup jobs under construction correctly (Emma)
Cc: Melissa Wen melissa.srw@gmail.com Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Emma Anholt emma@anholt.net
drivers/gpu/drm/v3d/v3d_drv.h | 1 + drivers/gpu/drm/v3d/v3d_gem.c | 88 ++++++++++++++------------------- drivers/gpu/drm/v3d/v3d_sched.c | 15 +++--- 3 files changed, 44 insertions(+), 60 deletions(-)
Hi Daniel,
lgtm too.
Reviewed-by: Melissa Wen mwen@igalia.com
Thanks, Melissa
With the prep work out of the way this isn't tricky anymore.
Aside: The chaining of the various jobs is a bit awkward, with the possibility of failure in bad places. I think with the drm_sched_job_init/arm split and maybe preloading the job->dependencies xarray this should be fixable.
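To make the aside a bit more concrete, here is a rough sketch (not actual v3d code, the function is made up; helper names as introduced in this series) of the submit flow the init/arm split enables. Everything that can fail sits before drm_sched_job_arm(), so the error path only ever needs drm_sched_job_cleanup() and never has to unwind a job the scheduler already owns:

#include <linux/dma-fence.h>
#include <drm/gpu_scheduler.h>

static int example_submit(struct drm_sched_job *job,
			  struct drm_sched_entity *entity,
			  void *owner, struct dma_fence *in_fence)
{
	int ret;

	ret = drm_sched_job_init(job, entity, owner);
	if (ret)
		return ret;

	/* collect dependencies while bailing out is still cheap */
	ret = drm_sched_job_await_fence(job, in_fence);
	if (ret)
		goto err_cleanup;

	/* point of no return: the scheduler fence seqno is allocated here */
	drm_sched_job_arm(job);
	drm_sched_entity_push_job(job);
	return 0;

err_cleanup:
	/* safe before drm_sched_job_arm() with the v3 cleanup rework */
	drm_sched_job_cleanup(job);
	return ret;
}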
Cc: Melissa Wen melissa.srw@gmail.com Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Emma Anholt emma@anholt.net --- drivers/gpu/drm/v3d/v3d_drv.h | 5 ----- drivers/gpu/drm/v3d/v3d_gem.c | 25 ++++++++----------------- drivers/gpu/drm/v3d/v3d_sched.c | 29 +---------------------------- 3 files changed, 9 insertions(+), 50 deletions(-)
diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h index 1d870261eaac..f80f4ff1f7aa 100644 --- a/drivers/gpu/drm/v3d/v3d_drv.h +++ b/drivers/gpu/drm/v3d/v3d_drv.h @@ -192,11 +192,6 @@ struct v3d_job { struct drm_gem_object **bo; u32 bo_count;
- /* Array of struct dma_fence * to block on before submitting this job. - */ - struct xarray deps; - unsigned long last_dep; - /* v3d fence to be signaled by IRQ handler when the job is complete. */ struct dma_fence *irq_fence;
diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c index 5eccd3658938..42b07ffbea5e 100644 --- a/drivers/gpu/drm/v3d/v3d_gem.c +++ b/drivers/gpu/drm/v3d/v3d_gem.c @@ -257,8 +257,8 @@ v3d_lock_bo_reservations(struct v3d_job *job, return ret;
for (i = 0; i < job->bo_count; i++) { - ret = drm_gem_fence_array_add_implicit(&job->deps, - job->bo[i], true); + ret = drm_sched_job_await_implicit(&job->base, + job->bo[i], true); if (ret) { drm_gem_unlock_reservations(job->bo, job->bo_count, acquire_ctx); @@ -354,8 +354,6 @@ static void v3d_job_free(struct kref *ref) { struct v3d_job *job = container_of(ref, struct v3d_job, refcount); - unsigned long index; - struct dma_fence *fence; int i;
for (i = 0; i < job->bo_count; i++) { @@ -364,11 +362,6 @@ v3d_job_free(struct kref *ref) } kvfree(job->bo);
- xa_for_each(&job->deps, index, fence) { - dma_fence_put(fence); - } - xa_destroy(&job->deps); - dma_fence_put(job->irq_fence); dma_fence_put(job->done_fence);
@@ -452,7 +445,6 @@ v3d_job_init(struct v3d_dev *v3d, struct drm_file *file_priv, if (ret < 0) return ret;
- xa_init_flags(&job->deps, XA_FLAGS_ALLOC); ret = drm_sched_job_init(&job->base, &v3d_priv->sched_entity[queue], v3d_priv); if (ret) @@ -462,7 +454,7 @@ v3d_job_init(struct v3d_dev *v3d, struct drm_file *file_priv, if (ret == -EINVAL) goto fail_job;
- ret = drm_gem_fence_array_add(&job->deps, in_fence); + ret = drm_sched_job_await_fence(&job->base, in_fence); if (ret) goto fail_job;
@@ -472,7 +464,6 @@ v3d_job_init(struct v3d_dev *v3d, struct drm_file *file_priv, fail_job: drm_sched_job_cleanup(&job->base); fail: - xa_destroy(&job->deps); pm_runtime_put_autosuspend(v3d->drm.dev); return ret; } @@ -619,8 +610,8 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data, if (bin) { v3d_push_job(&bin->base);
- ret = drm_gem_fence_array_add(&render->base.deps, - dma_fence_get(bin->base.done_fence)); + ret = drm_sched_job_await_fence(&render->base.base, + dma_fence_get(bin->base.done_fence)); if (ret) goto fail_unreserve; } @@ -630,7 +621,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data, if (clean_job) { struct dma_fence *render_fence = dma_fence_get(render->base.done_fence); - ret = drm_gem_fence_array_add(&clean_job->deps, render_fence); + ret = drm_sched_job_await_fence(&clean_job->base, render_fence); if (ret) goto fail_unreserve; v3d_push_job(clean_job); @@ -820,8 +811,8 @@ v3d_submit_csd_ioctl(struct drm_device *dev, void *data, mutex_lock(&v3d->sched_lock); v3d_push_job(&job->base);
- ret = drm_gem_fence_array_add(&clean_job->deps, - dma_fence_get(job->base.done_fence)); + ret = drm_sched_job_await_fence(&clean_job->base, + dma_fence_get(job->base.done_fence)); if (ret) goto fail_unreserve;
diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c index 3f352d73af9c..f0de584f452c 100644 --- a/drivers/gpu/drm/v3d/v3d_sched.c +++ b/drivers/gpu/drm/v3d/v3d_sched.c @@ -13,7 +13,7 @@ * jobs when bulk background jobs are queued up, we submit a new job * to the HW only when it has completed the last one, instead of * filling up the CT[01]Q FIFOs with jobs. Similarly, we use - * v3d_job_dependency() to manage the dependency between bin and + * drm_sched_job_await_fence() to manage the dependency between bin and * render, instead of having the clients submit jobs using the HW's * semaphores to interlock between them. */ @@ -62,28 +62,6 @@ v3d_sched_job_free(struct drm_sched_job *sched_job) v3d_job_cleanup(job); }
-/* - * Returns the fences that the job depends on, one by one. - * - * If placed in the scheduler's .dependency method, the corresponding - * .run_job won't be called until all of them have been signaled. - */ -static struct dma_fence * -v3d_job_dependency(struct drm_sched_job *sched_job, - struct drm_sched_entity *s_entity) -{ - struct v3d_job *job = to_v3d_job(sched_job); - - /* XXX: Wait on a fence for switching the GMP if necessary, - * and then do so. - */ - - if (!xa_empty(&job->deps)) - return xa_erase(&job->deps, job->last_dep++); - - return NULL; -} - static struct dma_fence *v3d_bin_job_run(struct drm_sched_job *sched_job) { struct v3d_bin_job *job = to_bin_job(sched_job); @@ -356,35 +334,30 @@ v3d_csd_job_timedout(struct drm_sched_job *sched_job) }
static const struct drm_sched_backend_ops v3d_bin_sched_ops = { - .dependency = v3d_job_dependency, .run_job = v3d_bin_job_run, .timedout_job = v3d_bin_job_timedout, .free_job = v3d_sched_job_free, };
static const struct drm_sched_backend_ops v3d_render_sched_ops = { - .dependency = v3d_job_dependency, .run_job = v3d_render_job_run, .timedout_job = v3d_render_job_timedout, .free_job = v3d_sched_job_free, };
static const struct drm_sched_backend_ops v3d_tfu_sched_ops = { - .dependency = v3d_job_dependency, .run_job = v3d_tfu_job_run, .timedout_job = v3d_generic_job_timedout, .free_job = v3d_sched_job_free, };
static const struct drm_sched_backend_ops v3d_csd_sched_ops = { - .dependency = v3d_job_dependency, .run_job = v3d_csd_job_run, .timedout_job = v3d_csd_job_timedout, .free_job = v3d_sched_job_free };
static const struct drm_sched_backend_ops v3d_cache_clean_sched_ops = { - .dependency = v3d_job_dependency, .run_job = v3d_cache_clean_job_run, .timedout_job = v3d_generic_job_timedout, .free_job = v3d_sched_job_free
On 07/12, Daniel Vetter wrote:
With the prep work out of the way this isn't tricky anymore.
Aside: The chaining of the various jobs is a bit awkward, with the possibility of failure in bad places. I think with the drm_sched_job_init/arm split and maybe preloading the job->dependencies xarray this should be fixable.
Cc: Melissa Wen melissa.srw@gmail.com Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Emma Anholt emma@anholt.net
drivers/gpu/drm/v3d/v3d_drv.h | 5 ----- drivers/gpu/drm/v3d/v3d_gem.c | 25 ++++++++----------------- drivers/gpu/drm/v3d/v3d_sched.c | 29 +---------------------------- 3 files changed, 9 insertions(+), 50 deletions(-)
Also here.
Reviewed-by: Melissa Wen mwen@igalia.com
We need to pull the drm_sched_job_init much earlier, but that's very minor surgery.
v2: Actually fix up cleanup paths by calling drm_sched_job_init, which I wanted to do in the previous round (and did, for all other drivers). Spotted by Lucas.
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Lucas Stach l.stach@pengutronix.de Cc: Russell King linux+etnaviv@armlinux.org.uk Cc: Christian Gmeiner christian.gmeiner@gmail.com Cc: Sumit Semwal sumit.semwal@linaro.org Cc: "Christian König" christian.koenig@amd.com Cc: etnaviv@lists.freedesktop.org Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org --- drivers/gpu/drm/etnaviv/etnaviv_gem.h | 5 +- drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c | 58 +++++++++--------- drivers/gpu/drm/etnaviv/etnaviv_sched.c | 63 +------------------- drivers/gpu/drm/etnaviv/etnaviv_sched.h | 3 +- 4 files changed, 35 insertions(+), 94 deletions(-)
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.h b/drivers/gpu/drm/etnaviv/etnaviv_gem.h index 98e60df882b6..63688e6e4580 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.h @@ -80,9 +80,6 @@ struct etnaviv_gem_submit_bo { u64 va; struct etnaviv_gem_object *obj; struct etnaviv_vram_mapping *mapping; - struct dma_fence *excl; - unsigned int nr_shared; - struct dma_fence **shared; };
/* Created per submit-ioctl, to track bo's and cmdstream bufs, etc, @@ -95,7 +92,7 @@ struct etnaviv_gem_submit { struct etnaviv_file_private *ctx; struct etnaviv_gpu *gpu; struct etnaviv_iommu_context *mmu_context, *prev_mmu_context; - struct dma_fence *out_fence, *in_fence; + struct dma_fence *out_fence; int out_fence_id; struct list_head node; /* GPU active submit list */ struct etnaviv_cmdbuf cmdbuf; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c index 4dd7d9d541c0..5b97ce1299ad 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c @@ -188,16 +188,10 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit) if (submit->flags & ETNA_SUBMIT_NO_IMPLICIT) continue;
- if (bo->flags & ETNA_SUBMIT_BO_WRITE) { - ret = dma_resv_get_fences(robj, &bo->excl, - &bo->nr_shared, - &bo->shared); - if (ret) - return ret; - } else { - bo->excl = dma_resv_get_excl_unlocked(robj); - } - + ret = drm_sched_job_await_implicit(&submit->sched_job, &bo->obj->base, + bo->flags & ETNA_SUBMIT_BO_WRITE); + if (ret) + return ret; }
return ret; @@ -403,8 +397,6 @@ static void submit_cleanup(struct kref *kref)
wake_up_all(&submit->gpu->fence_event);
- if (submit->in_fence) - dma_fence_put(submit->in_fence); if (submit->out_fence) { /* first remove from IDR, so fence can not be found anymore */ mutex_lock(&submit->gpu->fence_lock); @@ -529,7 +521,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, ret = etnaviv_cmdbuf_init(priv->cmdbuf_suballoc, &submit->cmdbuf, ALIGN(args->stream_size, 8) + 8); if (ret) - goto err_submit_objects; + goto err_submit_put;
submit->ctx = file->driver_priv; etnaviv_iommu_context_get(submit->ctx->mmu); @@ -537,51 +529,61 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, submit->exec_state = args->exec_state; submit->flags = args->flags;
+ ret = drm_sched_job_init(&submit->sched_job, + &ctx->sched_entity[args->pipe], + submit->ctx); + if (ret) + goto err_submit_put; + ret = submit_lookup_objects(submit, file, bos, args->nr_bos); if (ret) - goto err_submit_objects; + goto err_submit_job;
if ((priv->mmu_global->version != ETNAVIV_IOMMU_V2) && !etnaviv_cmd_validate_one(gpu, stream, args->stream_size / 4, relocs, args->nr_relocs)) { ret = -EINVAL; - goto err_submit_objects; + goto err_submit_job; }
if (args->flags & ETNA_SUBMIT_FENCE_FD_IN) { - submit->in_fence = sync_file_get_fence(args->fence_fd); - if (!submit->in_fence) { + struct dma_fence *in_fence = sync_file_get_fence(args->fence_fd); + if (!in_fence) { ret = -EINVAL; - goto err_submit_objects; + goto err_submit_job; } + + ret = drm_sched_job_await_fence(&submit->sched_job, in_fence); + if (ret) + goto err_submit_job; }
ret = submit_pin_objects(submit); if (ret) - goto err_submit_objects; + goto err_submit_job;
ret = submit_reloc(submit, stream, args->stream_size / 4, relocs, args->nr_relocs); if (ret) - goto err_submit_objects; + goto err_submit_job;
ret = submit_perfmon_validate(submit, args->exec_state, pmrs); if (ret) - goto err_submit_objects; + goto err_submit_job;
memcpy(submit->cmdbuf.vaddr, stream, args->stream_size);
ret = submit_lock_objects(submit, &ticket); if (ret) - goto err_submit_objects; + goto err_submit_job;
ret = submit_fence_sync(submit); if (ret) - goto err_submit_objects; + goto err_submit_job;
- ret = etnaviv_sched_push_job(&ctx->sched_entity[args->pipe], submit); + ret = etnaviv_sched_push_job(submit); if (ret) - goto err_submit_objects; + goto err_submit_job;
submit_attach_object_fences(submit);
@@ -595,7 +597,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, sync_file = sync_file_create(submit->out_fence); if (!sync_file) { ret = -ENOMEM; - goto err_submit_objects; + goto err_submit_job; } fd_install(out_fence_fd, sync_file->file); } @@ -603,7 +605,9 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, args->fence_fd = out_fence_fd; args->fence = submit->out_fence_id;
-err_submit_objects: +err_submit_job: + drm_sched_job_cleanup(&submit->sched_job); +err_submit_put: etnaviv_submit_put(submit);
err_submit_ww_acquire: diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c index 180bb633d5c5..2bbbd6ccc95e 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c @@ -17,58 +17,6 @@ module_param_named(job_hang_limit, etnaviv_job_hang_limit, int , 0444); static int etnaviv_hw_jobs_limit = 4; module_param_named(hw_job_limit, etnaviv_hw_jobs_limit, int , 0444);
-static struct dma_fence * -etnaviv_sched_dependency(struct drm_sched_job *sched_job, - struct drm_sched_entity *entity) -{ - struct etnaviv_gem_submit *submit = to_etnaviv_submit(sched_job); - struct dma_fence *fence; - int i; - - if (unlikely(submit->in_fence)) { - fence = submit->in_fence; - submit->in_fence = NULL; - - if (!dma_fence_is_signaled(fence)) - return fence; - - dma_fence_put(fence); - } - - for (i = 0; i < submit->nr_bos; i++) { - struct etnaviv_gem_submit_bo *bo = &submit->bos[i]; - int j; - - if (bo->excl) { - fence = bo->excl; - bo->excl = NULL; - - if (!dma_fence_is_signaled(fence)) - return fence; - - dma_fence_put(fence); - } - - for (j = 0; j < bo->nr_shared; j++) { - if (!bo->shared[j]) - continue; - - fence = bo->shared[j]; - bo->shared[j] = NULL; - - if (!dma_fence_is_signaled(fence)) - return fence; - - dma_fence_put(fence); - } - kfree(bo->shared); - bo->nr_shared = 0; - bo->shared = NULL; - } - - return NULL; -} - static struct dma_fence *etnaviv_sched_run_job(struct drm_sched_job *sched_job) { struct etnaviv_gem_submit *submit = to_etnaviv_submit(sched_job); @@ -140,29 +88,22 @@ static void etnaviv_sched_free_job(struct drm_sched_job *sched_job) }
static const struct drm_sched_backend_ops etnaviv_sched_ops = { - .dependency = etnaviv_sched_dependency, .run_job = etnaviv_sched_run_job, .timedout_job = etnaviv_sched_timedout_job, .free_job = etnaviv_sched_free_job, };
-int etnaviv_sched_push_job(struct drm_sched_entity *sched_entity, - struct etnaviv_gem_submit *submit) +int etnaviv_sched_push_job(struct etnaviv_gem_submit *submit) { int ret = 0;
/* * Hold the fence lock across the whole operation to avoid jobs being * pushed out of order with regard to their sched fence seqnos as - * allocated in drm_sched_job_init. + * allocated in drm_sched_job_arm. */ mutex_lock(&submit->gpu->fence_lock);
- ret = drm_sched_job_init(&submit->sched_job, sched_entity, - submit->ctx); - if (ret) - goto out_unlock; - drm_sched_job_arm(&submit->sched_job);
submit->out_fence = dma_fence_get(&submit->sched_job.s_fence->finished); diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.h b/drivers/gpu/drm/etnaviv/etnaviv_sched.h index c0a6796e22c9..baebfa069afc 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.h @@ -18,7 +18,6 @@ struct etnaviv_gem_submit *to_etnaviv_submit(struct drm_sched_job *sched_job)
int etnaviv_sched_init(struct etnaviv_gpu *gpu); void etnaviv_sched_fini(struct etnaviv_gpu *gpu); -int etnaviv_sched_push_job(struct drm_sched_entity *sched_entity, - struct etnaviv_gem_submit *submit); +int etnaviv_sched_push_job(struct etnaviv_gem_submit *submit);
#endif /* __ETNAVIV_SCHED_H__ */
Integrated into the scheduler now and all users converted over.
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Maarten Lankhorst maarten.lankhorst@linux.intel.com Cc: Maxime Ripard mripard@kernel.org Cc: Thomas Zimmermann tzimmermann@suse.de Cc: David Airlie airlied@linux.ie Cc: Daniel Vetter daniel@ffwll.ch Cc: Sumit Semwal sumit.semwal@linaro.org Cc: "Christian König" christian.koenig@amd.com Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org --- drivers/gpu/drm/drm_gem.c | 96 --------------------------------------- include/drm/drm_gem.h | 5 -- 2 files changed, 101 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c index 68deb1de8235..24d49a2636e0 100644 --- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -1294,99 +1294,3 @@ drm_gem_unlock_reservations(struct drm_gem_object **objs, int count, ww_acquire_fini(acquire_ctx); } EXPORT_SYMBOL(drm_gem_unlock_reservations); - -/** - * drm_gem_fence_array_add - Adds the fence to an array of fences to be - * waited on, deduplicating fences from the same context. - * - * @fence_array: array of dma_fence * for the job to block on. - * @fence: the dma_fence to add to the list of dependencies. - * - * This functions consumes the reference for @fence both on success and error - * cases. - * - * Returns: - * 0 on success, or an error on failing to expand the array. - */ -int drm_gem_fence_array_add(struct xarray *fence_array, - struct dma_fence *fence) -{ - struct dma_fence *entry; - unsigned long index; - u32 id = 0; - int ret; - - if (!fence) - return 0; - - /* Deduplicate if we already depend on a fence from the same context. - * This lets the size of the array of deps scale with the number of - * engines involved, rather than the number of BOs. - */ - xa_for_each(fence_array, index, entry) { - if (entry->context != fence->context) - continue; - - if (dma_fence_is_later(fence, entry)) { - dma_fence_put(entry); - xa_store(fence_array, index, fence, GFP_KERNEL); - } else { - dma_fence_put(fence); - } - return 0; - } - - ret = xa_alloc(fence_array, &id, fence, xa_limit_32b, GFP_KERNEL); - if (ret != 0) - dma_fence_put(fence); - - return ret; -} -EXPORT_SYMBOL(drm_gem_fence_array_add); - -/** - * drm_gem_fence_array_add_implicit - Adds the implicit dependencies tracked - * in the GEM object's reservation object to an array of dma_fences for use in - * scheduling a rendering job. - * - * This should be called after drm_gem_lock_reservations() on your array of - * GEM objects used in the job but before updating the reservations with your - * own fences. - * - * @fence_array: array of dma_fence * for the job to block on. - * @obj: the gem object to add new dependencies from. - * @write: whether the job might write the object (so we need to depend on - * shared fences in the reservation object). 
- */ -int drm_gem_fence_array_add_implicit(struct xarray *fence_array, - struct drm_gem_object *obj, - bool write) -{ - int ret; - struct dma_fence **fences; - unsigned int i, fence_count; - - if (!write) { - struct dma_fence *fence = - dma_resv_get_excl_unlocked(obj->resv); - - return drm_gem_fence_array_add(fence_array, fence); - } - - ret = dma_resv_get_fences(obj->resv, NULL, - &fence_count, &fences); - if (ret || !fence_count) - return ret; - - for (i = 0; i < fence_count; i++) { - ret = drm_gem_fence_array_add(fence_array, fences[i]); - if (ret) - break; - } - - for (; i < fence_count; i++) - dma_fence_put(fences[i]); - kfree(fences); - return ret; -} -EXPORT_SYMBOL(drm_gem_fence_array_add_implicit); diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h index 240049566592..6d5e33b89074 100644 --- a/include/drm/drm_gem.h +++ b/include/drm/drm_gem.h @@ -409,11 +409,6 @@ int drm_gem_lock_reservations(struct drm_gem_object **objs, int count, struct ww_acquire_ctx *acquire_ctx); void drm_gem_unlock_reservations(struct drm_gem_object **objs, int count, struct ww_acquire_ctx *acquire_ctx); -int drm_gem_fence_array_add(struct xarray *fence_array, - struct dma_fence *fence); -int drm_gem_fence_array_add_implicit(struct xarray *fence_array, - struct drm_gem_object *obj, - bool write); int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, u32 handle, u64 *offset);
This is essentially part of drm_sched_dependency_optimized(), which only amdgpu seems to make use of. Use it a bit more.
This would mean that as-is amdgpu can't use the dependency helpers, at least not with the current approach amdgpu has for deciding whether a vm_flush is needed. Since amdgpu also has very special rules around implicit fencing it can't use those helpers either, and adding a drm_sched_job_await_fence_always or similar for amdgpu wouldn't be too onerous. That way the special-case handling for amdgpu sticks out even more, and we have higher chances that reviewers who go across all drivers won't miss it.
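To make the "or similar" concrete, one possible shape of such a hypothetical helper (not part of this series; the deduplication against same-context fences that drm_sched_job_await_fence does is omitted for brevity, and job->dependencies is the xarray added earlier in the series):

int drm_sched_job_await_fence_always(struct drm_sched_job *job,
				     struct dma_fence *fence)
{
	u32 id = 0;
	int ret;

	if (!fence)
		return 0;

	/* deliberately no fence->context check against
	 * job->entity->fence_context, amdgpu wants to keep those fences */
	ret = xa_alloc(&job->dependencies, &id, fence, xa_limit_32b,
		       GFP_KERNEL);
	if (ret)
		dma_fence_put(fence);

	return ret;
}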
Reviewed-by: Lucas Stach l.stach@pengutronix.de Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: "Christian König" christian.koenig@amd.com Cc: Daniel Vetter daniel.vetter@ffwll.ch Cc: Luben Tuikov luben.tuikov@amd.com Cc: Andrey Grodzovsky andrey.grodzovsky@amd.com Cc: Alex Deucher alexander.deucher@amd.com Cc: Jack Zhang Jack.Zhang1@amd.com --- drivers/gpu/drm/scheduler/sched_main.c | 7 +++++++ 1 file changed, 7 insertions(+)
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index 84c30badb78e..fd52db906b90 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -660,6 +660,13 @@ int drm_sched_job_await_fence(struct drm_sched_job *job, if (!fence) return 0;
+ /* if it's a fence from us it's guaranteed to be earlier */ + if (fence->context == job->entity->fence_context || + fence->context == job->entity->fence_context + 1) { + dma_fence_put(fence); + return 0; + } + /* Deduplicate if we already depend on a fence from the same context. * This lets the size of the array of deps scale with the number of * engines involved, rather than the number of BOs.
You really need to hold the reservation here or all kinds of funny things can happen between grabbing the dependencies and inserting the new fences.
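As a usage sketch of the calling pattern the new assert enforces (loosely modeled on the v3d path, function name made up): the reservation locks have to stay held from collecting the implicit dependencies until the job's own fence is published, otherwise another submit can slip its fences in between the two steps.

#include <drm/drm_gem.h>
#include "v3d_drv.h"

static int example_sync_and_publish(struct v3d_job *job,
				    struct ww_acquire_ctx *acquire_ctx)
{
	int i, ret;

	ret = drm_gem_lock_reservations(job->bo, job->bo_count, acquire_ctx);
	if (ret)
		return ret;

	for (i = 0; i < job->bo_count; i++) {
		/* hits the new dma_resv_assert_held() check */
		ret = drm_sched_job_await_implicit(&job->base,
						   job->bo[i], true);
		if (ret)
			goto out_unlock;
	}

	/* ... drm_sched_job_arm() and push would go here ... */

	for (i = 0; i < job->bo_count; i++)
		dma_resv_add_excl_fence(job->bo[i]->resv, job->done_fence);

out_unlock:
	drm_gem_unlock_reservations(job->bo, job->bo_count, acquire_ctx);
	return ret;
}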
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: "Christian König" christian.koenig@amd.com Cc: Daniel Vetter daniel.vetter@ffwll.ch Cc: Luben Tuikov luben.tuikov@amd.com Cc: Andrey Grodzovsky andrey.grodzovsky@amd.com Cc: Alex Deucher alexander.deucher@amd.com Cc: Jack Zhang Jack.Zhang1@amd.com --- drivers/gpu/drm/scheduler/sched_main.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index fd52db906b90..6fa6ccd30d2a 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -714,6 +714,8 @@ int drm_sched_job_await_implicit(struct drm_sched_job *job, struct dma_fence **fences; unsigned int i, fence_count;
+ dma_resv_assert_held(obj->resv); + if (!write) { struct dma_fence *fence = dma_resv_get_excl_unlocked(obj->resv);
There's only one exclusive slot, and we must not break the ordering.
Adding a new exclusive fence drops all previous fences from the dma_resv. To avoid violating the signalling order we err on the side of over-synchronizing by waiting for the existing fences, even if userspace asked us to ignore them.
A better fix would be to use a dma_fence_chain or _array like e.g. amdgpu now uses, but
- msm has a synchronous dma_fence_wait for anything from another context, so doesn't seem to care much,
- and it probably makes sense to lift this into dma-resv.c code as a proper concept, so that drivers don't have to hack up their own solution each on their own.
v2: Improve commit message per Lucas' suggestion.
Cc: Lucas Stach l.stach@pengutronix.de Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Rob Clark robdclark@gmail.com Cc: Sean Paul sean@poorly.run Cc: linux-arm-msm@vger.kernel.org Cc: freedreno@lists.freedesktop.org --- drivers/gpu/drm/msm/msm_gem_submit.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index b71da71a3dd8..edd0051d849f 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -306,7 +306,8 @@ static int submit_fence_sync(struct msm_gem_submit *submit, bool no_implicit) return ret; }
- if (no_implicit) + /* exclusive fences must be ordered */ + if (no_implicit && !write) continue;
ret = msm_gem_sync_object(&msm_obj->base, submit->ring->fctx,
On Mon, Jul 12, 2021 at 1:02 PM Daniel Vetter daniel.vetter@ffwll.ch wrote:
There's only one exclusive slot, and we must not break the ordering.
In practice, modern userspace (the kind that is more likely to set the no-implicit flag on every submit) also sets MSM_SUBMIT_BO_WRITE on every bo, to shave some cpu overhead, so I suppose this would not really hurt anything.
Do you know if this is covered in any piglit/etc test?
BR, -R
On Tue, Jul 13, 2021 at 6:51 PM Rob Clark robdclark@gmail.com wrote:
In practice, modern userspace (the kind that is more likely to set the no-implicit flag on every submit) also sets MSM_SUBMIT_BO_WRITE on every bo, to shave some cpu overhead so I suppose this would not really hurt anything
Do you know if this is covered in any piglit/etc test?
You need some command submission, plus buffer sharing with vgem setting it's own exclusive fences, plus checking with dma_buf poll() whether it signals all in the right order. That's pretty low-level, so maybe something in igt, but I haven't typed that. Maybe I need to do that for i915 at least. -Daniel
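For reference, the poll() side of such a test could look roughly like this (userspace sketch, not an existing igt test; dmabuf_fd and earlier_fence_fd are assumed to come from whatever submission and vgem setup the test does):

#include <assert.h>
#include <poll.h>

/* Wait until all fences on the shared dma-buf have signalled (POLLOUT on
 * a dma-buf waits for shared + exclusive fences), then check that the
 * sync_file from the earlier submit has signalled as well. */
static void check_fence_ordering(int dmabuf_fd, int earlier_fence_fd)
{
	struct pollfd resv = { .fd = dmabuf_fd, .events = POLLOUT };
	struct pollfd prev = { .fd = earlier_fence_fd, .events = POLLIN };

	assert(poll(&resv, 1, -1) == 1);

	/* zero timeout: the earlier fence must already have signalled */
	assert(poll(&prev, 1, 0) == 1 && (prev.revents & POLLIN));
}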
On Tue, Jul 13, 2021 at 9:58 AM Daniel Vetter daniel.vetter@ffwll.ch wrote:
You need some command submission, plus buffer sharing with vgem setting it's own exclusive fences, plus checking with dma_buf poll() whether it signals all in the right order. That's pretty low-level, so maybe something in igt, but I haven't typed that. Maybe I need to do that for i915 at least.
ok, you lost me at vgem ;-)
(the vgem vs cache situation on arm is kinda hopeless)
BR, -R
On Tue, Jul 13, 2021 at 7:42 PM Rob Clark robdclark@gmail.com wrote:
ok, you lost me at vgem ;-)
(the vgem vs cache situation on arm is kinda hopeless)
Oh that explains a few things ... I just found out why vgem is failing for wc buffers on x86 (on some of our less-coherent igpu at least), and wondered how the heck this works on arm. Sounds like it just doesn't :-/
On the testcase: You'd never actually check buffer contents, only fences, so the test would still work. -Daniel
There's only one exclusive slot, and we must not break the ordering. Adding a new exclusive fence drops all previous fences from the dma_resv. To avoid violating the signalling order we err on the side of over-synchronizing by waiting for the existing fences, even if userspace asked us to ignore them.
A better fix would be to use a dma_fence_chain or _array like e.g. amdgpu now uses, but it probably makes sense to lift this into dma-resv.c code as a proper concept, so that drivers don't have to hack up their own solution each on their own. Hence go with the simple fix for now.
Another option is the fence import ioctl from Jason:
https://lore.kernel.org/dri-devel/20210610210925.642582-7-jason@jlekstrand.n...
v2: Improve commit message per Lucas' suggestion.
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Lucas Stach l.stach@pengutronix.de Cc: Russell King linux+etnaviv@armlinux.org.uk Cc: Christian Gmeiner christian.gmeiner@gmail.com Cc: etnaviv@lists.freedesktop.org --- drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c index 5b97ce1299ad..07454db4b150 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c @@ -178,18 +178,20 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit) for (i = 0; i < submit->nr_bos; i++) { struct etnaviv_gem_submit_bo *bo = &submit->bos[i]; struct dma_resv *robj = bo->obj->base.resv; + bool write = bo->flags & ETNA_SUBMIT_BO_WRITE;
- if (!(bo->flags & ETNA_SUBMIT_BO_WRITE)) { + if (!(write)) { ret = dma_resv_reserve_shared(robj, 1); if (ret) return ret; }
- if (submit->flags & ETNA_SUBMIT_NO_IMPLICIT) + /* exclusive fences must be ordered */ + if (submit->flags & ETNA_SUBMIT_NO_IMPLICIT && !write) continue;
ret = drm_sched_job_await_implicit(&submit->sched_job, &bo->obj->base, - bo->flags & ETNA_SUBMIT_BO_WRITE); + write); if (ret) return ret; }
No longer used, the last user disappeared with
commit d07f0e59b2c762584478920cd2d11fba2980a94a Author: Chris Wilson chris@chris-wilson.co.uk Date: Fri Oct 28 13:58:44 2016 +0100
drm/i915: Move GEM activity tracking into a common struct reservation_object
Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Maarten Lankhorst maarten.lankhorst@linux.intel.com Cc: "Thomas Hellström" thomas.hellstrom@linux.intel.com Cc: Jason Ekstrand jason@jlekstrand.net --- drivers/gpu/drm/i915/display/intel_display.c | 4 ++-- drivers/gpu/drm/i915/gem/i915_gem_clflush.c | 2 +- drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 2 +- drivers/gpu/drm/i915/i915_sw_fence.c | 6 +----- drivers/gpu/drm/i915/i915_sw_fence.h | 1 - 5 files changed, 5 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c index 98e0f4ed7e4a..678c7839034e 100644 --- a/drivers/gpu/drm/i915/display/intel_display.c +++ b/drivers/gpu/drm/i915/display/intel_display.c @@ -11119,7 +11119,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane, */ if (intel_crtc_needs_modeset(crtc_state)) { ret = i915_sw_fence_await_reservation(&state->commit_ready, - old_obj->base.resv, NULL, + old_obj->base.resv, false, 0, GFP_KERNEL); if (ret < 0) @@ -11153,7 +11153,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane, struct dma_fence *fence;
ret = i915_sw_fence_await_reservation(&state->commit_ready, - obj->base.resv, NULL, + obj->base.resv, false, i915_fence_timeout(dev_priv), GFP_KERNEL); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c index daf9284ef1f5..93439d2c7a58 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c @@ -106,7 +106,7 @@ bool i915_gem_clflush_object(struct drm_i915_gem_object *obj, clflush = clflush_work_create(obj); if (clflush) { i915_sw_fence_await_reservation(&clflush->base.chain, - obj->base.resv, NULL, true, + obj->base.resv, true, i915_fence_timeout(to_i915(obj->base.dev)), I915_FENCE_GFP); dma_resv_add_excl_fence(obj->base.resv, &clflush->base.dma); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index b95c8927d465..b4a77eba8631 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -2087,7 +2087,7 @@ static int eb_parse_pipeline(struct i915_execbuffer *eb,
/* Wait for all writes (and relocs) into the batch to complete */ err = i915_sw_fence_await_reservation(&pw->base.chain, - pw->batch->resv, NULL, false, + pw->batch->resv, false, 0, I915_FENCE_GFP); if (err < 0) goto err_commit; diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c index c589a681da77..91711a46b1c7 100644 --- a/drivers/gpu/drm/i915/i915_sw_fence.c +++ b/drivers/gpu/drm/i915/i915_sw_fence.c @@ -567,7 +567,6 @@ int __i915_sw_fence_await_dma_fence(struct i915_sw_fence *fence,
int i915_sw_fence_await_reservation(struct i915_sw_fence *fence, struct dma_resv *resv, - const struct dma_fence_ops *exclude, bool write, unsigned long timeout, gfp_t gfp) @@ -587,9 +586,6 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence, return ret;
for (i = 0; i < count; i++) { - if (shared[i]->ops == exclude) - continue; - pending = i915_sw_fence_await_dma_fence(fence, shared[i], timeout, @@ -609,7 +605,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence, excl = dma_resv_get_excl_unlocked(resv); }
- if (ret >= 0 && excl && excl->ops != exclude) { + if (ret >= 0 && excl) { pending = i915_sw_fence_await_dma_fence(fence, excl, timeout, diff --git a/drivers/gpu/drm/i915/i915_sw_fence.h b/drivers/gpu/drm/i915/i915_sw_fence.h index 30a863353ee6..6572f01668e4 100644 --- a/drivers/gpu/drm/i915/i915_sw_fence.h +++ b/drivers/gpu/drm/i915/i915_sw_fence.h @@ -86,7 +86,6 @@ int i915_sw_fence_await_dma_fence(struct i915_sw_fence *fence,
int i915_sw_fence_await_reservation(struct i915_sw_fence *fence, struct dma_resv *resv, - const struct dma_fence_ops *exclude, bool write, unsigned long timeout, gfp_t gfp);
There's only one exclusive slot, and we must not break the ordering. Adding a new exclusive fence drops all previous fences from the dma_resv. To avoid violating the signalling order we err on the side of over-synchronizing by waiting for the existing fences, even if userspace asked us to ignore them.
A better fix would be to use a dma_fence_chain or _array like e.g. amdgpu now uses, but it probably makes sense to lift this into dma-resv.c code as a proper concept, so that drivers don't each have to hack up their own solution. Hence go with the simple fix for now.
Another option is the fence import ioctl from Jason:
https://lore.kernel.org/dri-devel/20210610210925.642582-7-jason@jlekstrand.n...
v2: Improve commit message per Lucas' suggestion.
Cc: Lucas Stach l.stach@pengutronix.de Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Maarten Lankhorst maarten.lankhorst@linux.intel.com Cc: "Thomas Hellström" thomas.hellstrom@linux.intel.com Cc: Jason Ekstrand jason@jlekstrand.net --- drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index b4a77eba8631..b3d675987493 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -1767,6 +1767,7 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb) struct i915_vma *vma = ev->vma; unsigned int flags = ev->flags; struct drm_i915_gem_object *obj = vma->obj; + bool async, write;
assert_vma_held(vma);
@@ -1798,7 +1799,10 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb) flags &= ~EXEC_OBJECT_ASYNC; }
- if (err == 0 && !(flags & EXEC_OBJECT_ASYNC)) { + async = flags & EXEC_OBJECT_ASYNC; + write = flags & EXEC_OBJECT_WRITE; + + if (err == 0 && (!async || write)) { err = i915_request_await_object (eb->request, obj, flags & EXEC_OBJECT_WRITE); }
Specifically document the new/clarified rules around how the shared fences do not have any ordering requirements against the exclusive fence.
But also document all the things a bit better: given how central struct dma_resv is to dynamic buffer management, the docs have been very inadequate.
- Lots more links to other pieces of the puzzle. Unfortunately ttm_buffer_object has no docs, so no links :-(
- Explain/complain a bit about dma_resv_locking_ctx(). I still don't like that one, but fixing the ttm call chains is going to be horrible. Plus we want to plug in real slowpath locking when we do that anyway.
- Main part of the patch is some actual docs for struct dma_resv.
Overall I think we still have a lot of bad naming in this area (e.g. dma_resv.fence is singular, but contains the multiple shared fences), but I think that's more indicative of how the semantics and rules are just not great.
Another thing that's really awkward is how chaining exclusive fences right now means direct dma_resv.fence_excl pointer access with an rcu_assign_pointer. Not so great either.
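For illustration, here's roughly what that direct pointer access pattern looks like (a hand-written sketch, not code from any driver; it also glosses over the seqcount update the real helpers do, which is part of why this is awkward):

#include <linux/dma-fence-chain.h>
#include <linux/dma-resv.h>

/* Sketch only: append @fence behind the current exclusive fence without
 * touching the shared fences. @chain is a pre-allocated chain node, @seqno is
 * whatever monotonic counter the caller uses. Caller holds the resv lock. */
static void chain_excl_fence_sketch(struct dma_resv *obj,
				    struct dma_fence_chain *chain,
				    struct dma_fence *fence, u64 seqno)
{
	struct dma_fence *prev;

	dma_resv_assert_held(obj);

	/* the resv's reference on the old exclusive fence moves into the chain */
	prev = dma_resv_excl_fence(obj);
	dma_fence_chain_init(chain, prev, dma_fence_get(fence), seqno);

	/* and the resv now holds a reference on the chain node instead */
	rcu_assign_pointer(obj->fence_excl, dma_fence_get(&chain->base));
}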
v2: - Fix a pile of typos (Matt, Jason) - Hammer it in that breaking the rules leads to use-after-free issues around dma-buf sharing (Christian)
Reviewed-by: Christian König christian.koenig@amd.com Cc: Jason Ekstrand jason@jlekstrand.net Cc: Matthew Auld matthew.auld@intel.com Reviewed-by: Matthew Auld matthew.auld@intel.com Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Sumit Semwal sumit.semwal@linaro.org Cc: "Christian König" christian.koenig@amd.com Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org --- drivers/dma-buf/dma-resv.c | 24 ++++++--- include/linux/dma-buf.h | 7 +++ include/linux/dma-resv.h | 104 +++++++++++++++++++++++++++++++++++-- 3 files changed, 124 insertions(+), 11 deletions(-)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index e744fd87c63c..84fbe60629e3 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -48,6 +48,8 @@ * write operations) or N shared fences (read operations). The RCU * mechanism is used to protect read access to fences from locked * write-side updates. + * + * See struct dma_resv for more details. */
DEFINE_WD_CLASS(reservation_ww_class); @@ -137,7 +139,11 @@ EXPORT_SYMBOL(dma_resv_fini); * @num_fences: number of fences we want to add * * Should be called before dma_resv_add_shared_fence(). Must - * be called with obj->lock held. + * be called with @obj locked through dma_resv_lock(). + * + * Note that the preallocated slots need to be re-reserved if @obj is unlocked + * at any time before calling dma_resv_add_shared_fence(). This is validated + * when CONFIG_DEBUG_MUTEXES is enabled. * * RETURNS * Zero for success, or -errno @@ -234,8 +240,10 @@ EXPORT_SYMBOL(dma_resv_reset_shared_max); * @obj: the reservation object * @fence: the shared fence to add * - * Add a fence to a shared slot, obj->lock must be held, and + * Add a fence to a shared slot, @obj must be locked with dma_resv_lock(), and * dma_resv_reserve_shared() has been called. + * + * See also &dma_resv.fence for a discussion of the semantics. */ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence) { @@ -278,9 +286,11 @@ EXPORT_SYMBOL(dma_resv_add_shared_fence); /** * dma_resv_add_excl_fence - Add an exclusive fence. * @obj: the reservation object - * @fence: the shared fence to add + * @fence: the exclusive fence to add * - * Add a fence to the exclusive slot. The obj->lock must be held. + * Add a fence to the exclusive slot. @obj must be locked with dma_resv_lock(). + * Note that this function replaces all fences attached to @obj, see also + * &dma_resv.fence_excl for a discussion of the semantics. */ void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence) { @@ -609,9 +619,11 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence) * fence * * Callers are not required to hold specific locks, but maybe hold - * dma_resv_lock() already + * dma_resv_lock() already. + * * RETURNS - * true if all fences signaled, else false + * + * True if all fences signaled, else false. */ bool dma_resv_test_signaled(struct dma_resv *obj, bool test_all) { diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index 2b814fde0d11..8cc0c55877a6 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -420,6 +420,13 @@ struct dma_buf { * - Dynamic importers should set fences for any access that they can't * disable immediately from their &dma_buf_attach_ops.move_notify * callback. + * + * IMPORTANT: + * + * All drivers must obey the struct dma_resv rules, specifically the + * rules for updating fences, see &dma_resv.fence_excl and + * &dma_resv.fence. If these dependency rules are broken access tracking + * can be lost resulting in use after free issues. */ struct dma_resv *resv;
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h index e1ca2080a1ff..9100dd3dc21f 100644 --- a/include/linux/dma-resv.h +++ b/include/linux/dma-resv.h @@ -62,16 +62,90 @@ struct dma_resv_list {
/** * struct dma_resv - a reservation object manages fences for a buffer - * @lock: update side lock - * @seq: sequence count for managing RCU read-side synchronization - * @fence_excl: the exclusive fence, if there is one currently - * @fence: list of current shared fences + * + * There are multiple uses for this, with sometimes slightly different rules in + * how the fence slots are used. + * + * One use is to synchronize cross-driver access to a struct dma_buf, either for + * dynamic buffer management or just to handle implicit synchronization between + * different users of the buffer in userspace. See &dma_buf.resv for a more + * in-depth discussion. + * + * The other major use is to manage access and locking within a driver in a + * buffer based memory manager. struct ttm_buffer_object is the canonical + * example here, since this is where reservation objects originated from. But + * use in drivers is spreading and some drivers also manage struct + * drm_gem_object with the same scheme. */ struct dma_resv { + /** + * @lock: + * + * Update side lock. Don't use directly, instead use the wrapper + * functions like dma_resv_lock() and dma_resv_unlock(). + * + * Drivers which use the reservation object to manage memory dynamically + * also use this lock to protect buffer object state like placement, + * allocation policies or throughout command submission. + */ struct ww_mutex lock; + + /** + * @seq: + * + * Sequence count for managing RCU read-side synchronization, allows + * read-only access to @fence_excl and @fence while ensuring we take a + * consistent snapshot. + */ seqcount_ww_mutex_t seq;
+ /** + * @fence_excl: + * + * The exclusive fence, if there is one currently. + * + * There are two ways to update this fence: + * + * - First by calling dma_resv_add_excl_fence(), which replaces all + * fences attached to the reservation object. To guarantee that no + * fences are lost, this new fence must signal only after all previous + * fences, both shared and exclusive, have signalled. In some cases it + * is convenient to achieve that by attaching a struct dma_fence_array + * with all the new and old fences. + * + * - Alternatively the fence can be set directly, which leaves the + * shared fences unchanged. To guarantee that no fences are lost, this + * new fence must signal only after the previous exclusive fence has + * signalled. Since the shared fences are staying intact, it is not + * necessary to maintain any ordering against those. If semantically + * only a new access is added without actually treating the previous + * one as a dependency the exclusive fences can be strung together + * using struct dma_fence_chain. + * + * Note that actual semantics of what an exclusive or shared fence mean + * is defined by the user, for reservation objects shared across drivers + * see &dma_buf.resv. + */ struct dma_fence __rcu *fence_excl; + + /** + * @fence: + * + * List of current shared fences. + * + * There are no ordering constraints of shared fences against the + * exclusive fence slot. If a waiter needs to wait for all access, it + * has to wait for both sets of fences to signal. + * + * A new fence is added by calling dma_resv_add_shared_fence(). Since + * this often needs to be done past the point of no return in command + * submission it cannot fail, and therefore sufficient slots need to be + * reserved by calling dma_resv_reserve_shared(). + * + * Note that actual semantics of what an exclusive or shared fence mean + * is defined by the user, for reservation objects shared across drivers + * see &dma_buf.resv. + */ struct dma_resv_list __rcu *fence; };
@@ -98,6 +172,13 @@ static inline void dma_resv_reset_shared_max(struct dma_resv *obj) {} * undefined order, a #ww_acquire_ctx is passed to unwind if a cycle * is detected. See ww_mutex_lock() and ww_acquire_init(). A reservation * object may be locked by itself by passing NULL as @ctx. + * + * When a die situation is indicated by returning -EDEADLK all locks held by + * @ctx must be unlocked and then dma_resv_lock_slow() called on @obj. + * + * Unlocked by calling dma_resv_unlock(). + * + * See also dma_resv_lock_interruptible() for the interruptible variant. */ static inline int dma_resv_lock(struct dma_resv *obj, struct ww_acquire_ctx *ctx) @@ -119,6 +200,12 @@ static inline int dma_resv_lock(struct dma_resv *obj, * undefined order, a #ww_acquire_ctx is passed to unwind if a cycle * is detected. See ww_mutex_lock() and ww_acquire_init(). A reservation * object may be locked by itself by passing NULL as @ctx. + * + * When a die situation is indicated by returning -EDEADLK all locks held by + * @ctx must be unlocked and then dma_resv_lock_slow_interruptible() called on + * @obj. + * + * Unlocked by calling dma_resv_unlock(). */ static inline int dma_resv_lock_interruptible(struct dma_resv *obj, struct ww_acquire_ctx *ctx) @@ -134,6 +221,8 @@ static inline int dma_resv_lock_interruptible(struct dma_resv *obj, * Acquires the reservation object after a die case. This function * will sleep until the lock becomes available. See dma_resv_lock() as * well. + * + * See also dma_resv_lock_slow_interruptible() for the interruptible variant. */ static inline void dma_resv_lock_slow(struct dma_resv *obj, struct ww_acquire_ctx *ctx) @@ -167,7 +256,7 @@ static inline int dma_resv_lock_slow_interruptible(struct dma_resv *obj, * if they overlap with a writer. * * Also note that since no context is provided, no deadlock protection is - * possible. + * possible, which is also not needed for a trylock. * * Returns true if the lock was acquired, false otherwise. */ @@ -193,6 +282,11 @@ static inline bool dma_resv_is_locked(struct dma_resv *obj) * * Returns the context used to lock a reservation object or NULL if no context * was used or the object is not locked at all. + * + * WARNING: This interface is pretty horrible, but TTM needs it because it + * doesn't pass the struct ww_acquire_ctx around in some very long callchains. + * Everyone else just uses it to check whether they're holding a reservation or + * not. */ static inline struct ww_acquire_ctx *dma_resv_locking_ctx(struct dma_resv *obj) {
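As a concrete example of the -EDEADLK dance the kerneldoc above describes, here's a minimal sketch (not part of this patch; the caller eventually has to dma_resv_unlock() both objects and call ww_acquire_fini()):

#include <linux/dma-resv.h>
#include <linux/kernel.h>

/* Lock two reservation objects in undefined order, backing off and using the
 * slowpath lock on the contended object whenever -EDEADLK is returned. */
static int lock_both_resvs(struct dma_resv *a, struct dma_resv *b,
			   struct ww_acquire_ctx *ctx)
{
	int ret;

	ww_acquire_init(ctx, &reservation_ww_class);

	ret = dma_resv_lock(a, ctx);
	if (ret)
		goto out_fini;

	ret = dma_resv_lock(b, ctx);
	while (ret == -EDEADLK) {
		/* die case: drop all held locks, then take the contended one
		 * with the slowpath variant before trying again */
		dma_resv_unlock(a);
		dma_resv_lock_slow(b, ctx);

		swap(a, b);
		ret = dma_resv_lock(b, ctx);
	}
	if (ret) {
		dma_resv_unlock(a);
		goto out_fini;
	}

	ww_acquire_done(ctx);
	return 0;

out_fini:
	ww_acquire_fini(ctx);
	return ret;
}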
On Mon, 12 Jul 2021 19:53:34 +0200 Daniel Vetter daniel.vetter@ffwll.ch wrote:
Daniel Vetter (18): drm/sched: Split drm_sched_job_init drm/sched: Barriers are needed for entity->last_scheduled drm/sched: Add dependency tracking drm/sched: drop entity parameter from drm_sched_push_job drm/sched: improve docs around drm_sched_entity
Patches 1, 3, 4 and 5 are
Reviewed-by: Boris Brezillon boris.brezillon@collabora.com
On 07/12, Daniel Vetter wrote:
Hi,
I've tested it some time ago, but now, for v3d, don't forget to rebase.
Also, common parts lgtm, so for them:
Acked-by: Melissa Wen mwen@igalia.com