On Thu, Aug 19, 2021 at 05:01:03PM -0700, Daniele Ceraolo Spurio wrote:
On 8/18/2021 11:16 PM, Matthew Brost wrote:
Don't drop ce->guc_active.lock when unwinding a context after reset. At one point we had to drop this because of a lock inversion but that is no longer the case. It is much safer to hold the lock so let's do that.
Fixes: eb5e7da736f3 ("drm/i915/guc: Reset implementation for new GuC interface")
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Cc: stable@vger.kernel.org
Reviewed-by: Daniele Ceraolo Spurio daniele.ceraolospurio@intel.com
Do we have a trybot of this series with GuC enabled? I've checked the functions called in the previously unlocked chunk and didn't spot anything requiring the lock to be dropped, but I'd feel safer if we had lockdep results as well.
RKL uses GuC submission in BAT. I've also tested this series thoroughly with no lockdep splats. Can kick off a trybot on the next rev of this series too.
Matt
Daniele
 drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 4 ----
 1 file changed, 4 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 9ca0ba4ea85a..e4a099f8f820 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -812,8 +812,6 @@ __unwind_incomplete_requests(struct intel_context *ce)
 			continue;
 
 		list_del_init(&rq->sched.link);
-		spin_unlock(&ce->guc_active.lock);
-
 		__i915_request_unsubmit(rq);
 
 		/* Push the request back into the queue for later resubmission. */
@@ -826,8 +824,6 @@ __unwind_incomplete_requests(struct intel_context *ce)
 
 		list_add(&rq->sched.link, pl);
 		set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
-
-		spin_lock(&ce->guc_active.lock);
 	}
 	spin_unlock(&ce->guc_active.lock);
 	spin_unlock_irqrestore(&sched_engine->lock, flags);
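For anyone skimming the thread: the pattern the patch moves to can be sketched in a minimal user-space program, holding one lock across the whole unwind loop instead of dropping and retaking it around each unsubmit. This is illustration only, not the driver code; the names `request`, `context`, and `unwind_all` are hypothetical stand-ins, and a pthread mutex stands in for ce->guc_active.lock.

```c
#include <pthread.h>
#include <stddef.h>

struct request {
	struct request *next;
	int unsubmitted;
};

struct context {
	pthread_mutex_t lock;     /* stand-in for ce->guc_active.lock */
	struct request *inflight; /* requests to unwind after a reset */
	struct request *pending;  /* queue for later resubmission */
};

/*
 * Unwind every in-flight request while holding the context lock for the
 * entire loop -- the post-patch pattern. Before the patch, the lock was
 * dropped and retaken around the stand-in for __i915_request_unsubmit().
 */
static void unwind_all(struct context *ce)
{
	pthread_mutex_lock(&ce->lock);
	while (ce->inflight) {
		struct request *rq = ce->inflight;

		ce->inflight = rq->next;
		rq->unsubmitted = 1;    /* stand-in for __i915_request_unsubmit() */
		rq->next = ce->pending; /* push back for resubmission */
		ce->pending = rq;
	}
	pthread_mutex_unlock(&ce->lock);
}
```

Because nothing in the loop body sleeps or takes a conflicting lock, there is no longer a reason to open the window where another thread could observe the list half-unwound.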