On Mon, Aug 6, 2018 at 6:45 PM Christian König <ckoenig.leichtzumerken@gmail.com> wrote:
On 06.08.2018 at 15:11, Nayan Deshmukh wrote:
Hi Christian,
On Mon, Aug 6, 2018 at 5:49 PM Christian König <ckoenig.leichtzumerken@gmail.com> wrote:
Update job earlier with the scheduler it is supposed to be scheduled on.
Otherwise we could incorrectly optimize dependencies when moving an entity from one scheduler to another.
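A rough sketch of the submission path this patch touches, as the scheduler code of this era suggests (the flow summary below is an illustration, not part of the patch):

	/*
	 * drm_sched_job_init()          - creates job->s_fence; runs first,
	 *                                 in the submitting process
	 * drm_sched_entity_push_job()   - queues the job on the entity; with
	 *                                 this patch, job->sched and
	 *                                 job->s_fence->sched are set here
	 * drm_sched_entity_pop_job()    - runs later, from the scheduler
	 *                                 thread; calls
	 *                                 sched->ops->dependency() and is
	 *                                 where the assignment used to live
	 */

So "earlier" means at push time rather than pop time.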
I don't think this change is really required; the old code should work fine. Read below for more comments.
Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/scheduler/gpu_scheduler.c | 4 ++--
 drivers/gpu/drm/scheduler/sched_fence.c   | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
index 65078dd3c82c..029863726c99 100644
--- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
@@ -533,8 +533,6 @@ drm_sched_entity_pop_job(struct drm_sched_entity *entity)
 	if (!sched_job)
 		return NULL;
 
-	sched_job->sched = sched;
-	sched_job->s_fence->sched = sched;
 	while ((entity->dependency = sched->ops->dependency(sched_job,
 							    entity)))
The job should have the correct scheduler set before the sched->ops->dependency() function is called for the first time, hence I placed the assignment here.
It might also be that I am missing some case, so please let me know if there is a case where this could lead to a wrong optimization.
The problem is that the scheduler fence of job A could be the dependency of another job B. Job B then makes an incorrect optimization because it sees the wrong scheduler in the scheduler fence of job A.
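For reference, the optimization in question is the same-scheduler check in drm_sched_entity_add_dependency_cb(); a simplified sketch, abridged from the scheduler code of this era (error handling and callback setup elided):

	s_fence = to_drm_sched_fence(entity->dependency);
	if (s_fence && s_fence->sched == sched) {
		/*
		 * The dependency claims to come from the scheduler this
		 * entity runs on, so instead of waiting for job A to finish
		 * on the hardware it is enough to wait for it to be
		 * scheduled.
		 */
		fence = dma_fence_get(&s_fence->scheduled);
		dma_fence_put(entity->dependency);
		entity->dependency = fence;
	}

If s_fence->sched is stale because job A's entity moved to another scheduler after the fence was created, this branch is taken incorrectly and job B can run before job A's hardware work has completed.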
Ah, I missed this case. This makes sense now. Reviewed-by: Nayan Deshmukh <nayan26deshmukh@gmail.com>
Regards, Christian.
Regards, Nayan
 		if (drm_sched_entity_add_dependency_cb(entity))
 			return NULL;
@@ -581,6 +579,8 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
 		spin_unlock(&entity->rq_lock);
 	}
 
+	sched_job->sched = entity->rq->sched;
+	sched_job->s_fence->sched = entity->rq->sched;
 	trace_drm_sched_job(sched_job, entity);
 	atomic_inc(&entity->rq->sched->num_jobs);
 	WRITE_ONCE(entity->last_user, current->group_leader);
diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
index 4029312fdd81..6dab18d288d7 100644
--- a/drivers/gpu/drm/scheduler/sched_fence.c
+++ b/drivers/gpu/drm/scheduler/sched_fence.c
@@ -172,7 +172,7 @@ struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *entity,
 		return NULL;
 
 	fence->owner = owner;
-	fence->sched = entity->rq->sched;
+	fence->sched = NULL;
 	spin_lock_init(&fence->lock);
 
 	seq = atomic_inc_return(&entity->fence_seq);
--
2.14.1