On 24.06.21 at 16:00, Daniel Vetter wrote:
This is essentially part of drm_sched_dependency_optimized(), which only amdgpu seems to make use of. Use it a bit more.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Luben Tuikov <luben.tuikov@amd.com>
Cc: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Jack Zhang <Jack.Zhang1@amd.com>
 drivers/gpu/drm/scheduler/sched_main.c | 7 +++++++
 1 file changed, 7 insertions(+)
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 370c336d383f..c31d7cf7df74 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -649,6 +649,13 @@ int drm_sched_job_await_fence(struct drm_sched_job *job,
 	if (!fence)
 		return 0;
+	/* if it's a fence from us it's guaranteed to be earlier */
+	if (fence->context == job->entity->fence_context ||
+	    fence->context == job->entity->fence_context + 1) {
+		dma_fence_put(fence);
+		return 0;
+	}
+
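For reference, the two contexts checked in the hunk above both belong to the entity itself; a simplified sketch (not the verbatim kernel code) of how they are assigned:

	/* drm_sched_entity_init() reserves two dma_fence contexts per entity */
	entity->fence_context = dma_fence_context_alloc(2);

	/* drm_sched_fence_init() then hands out one context per fence type:
	 * the "scheduled" fence uses fence_context, the "finished" fence
	 * uses fence_context + 1, so both match the check in the patch.
	 */
	dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
		       &fence->lock, entity->fence_context, seq);
	dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished,
		       &fence->lock, entity->fence_context + 1, seq);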
Well NAK. That would break Vulkan.
The problem is that Vulkan can insert dependencies between jobs which run on the same queue.
So we need to track those as well, and if the previous job for the same queue/scheduler is not yet finished, a pipeline synchronization needs to be inserted (see the sketch below).
That's one of the reasons we haven't been able to unify the dependency handling yet.
Christian.
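A hypothetical driver-side sketch of the tracking described above; the needs_pipe_sync flag is made up for illustration and is not taken from amdgpu:

	/* Instead of silently dropping a same-context dependency, remember
	 * that the previous job on this queue may still be running so the
	 * ring backend can emit a pipeline synchronization before this
	 * job's commands.
	 */
	if (fence->context == job->entity->fence_context ||
	    fence->context == job->entity->fence_context + 1) {
		if (!dma_fence_is_signaled(fence))
			job->needs_pipe_sync = true;	/* hypothetical flag */
		dma_fence_put(fence);
		return 0;
	}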
	/* Deduplicate if we already depend on a fence from the same context.
	 * This lets the size of the array of deps scale with the number of
	 * engines involved, rather than the number of BOs.
	 */
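A rough sketch of the deduplication that comment describes, assuming the job keeps its dependencies in an xarray as in this series; the field and variable names are illustrative:

	/* Walk the already-recorded dependencies; if one shares the new
	 * fence's context, keep only the later of the two fences instead
	 * of growing the array.
	 */
	xa_for_each(&job->dependencies, index, entry) {
		if (entry->context != fence->context)
			continue;

		if (dma_fence_is_later(fence, entry)) {
			dma_fence_put(entry);
			xa_store(&job->dependencies, index, fence, GFP_KERNEL);
		} else {
			dma_fence_put(fence);
		}
		return 0;
	}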