Panfrost uses multiple schedulers (one for each slot, so 2 in reality), and on a timeout has to stop all the schedulers to safely perform a reset. However, more than one scheduler can trigger a timeout at the same time. This race condition results in jobs being freed while they are still in use.
When stopping other slots, use cancel_delayed_work_sync() to ensure that any timeout started for that slot has completed. Also use mutex_trylock() to obtain reset_lock. This means that only one thread attempts the reset; the other threads will simply complete without doing anything (the first thread will wait for this in the call to cancel_delayed_work_sync()).
While we're here and since the function is already dependent on sched_job not being NULL, let's remove the unnecessary checks, along with a commented out call to panfrost_core_dump() which has never existed in mainline.
Signed-off-by: Steven Price <steven.price@arm.com>
---
This is a tidied up version of the patch originally posted here:
http://lkml.kernel.org/r/26ae2a4d-8df1-e8db-3060-41638ed63e2a%40arm.com
 drivers/gpu/drm/panfrost/panfrost_job.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index a58551668d9a..dcc9a7603685 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -381,13 +381,19 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
 		job_read(pfdev, JS_TAIL_LO(js)),
 		sched_job);
 
-	mutex_lock(&pfdev->reset_lock);
+	if (!mutex_trylock(&pfdev->reset_lock))
+		return;
 
-	for (i = 0; i < NUM_JOB_SLOTS; i++)
-		drm_sched_stop(&pfdev->js->queue[i].sched, sched_job);
+	for (i = 0; i < NUM_JOB_SLOTS; i++) {
+		struct drm_gpu_scheduler *sched = &pfdev->js->queue[i].sched;
+
+		drm_sched_stop(sched, sched_job);
+		if (js != i)
+			/* Ensure any timeouts on other slots have finished */
+			cancel_delayed_work_sync(&sched->work_tdr);
+	}
 
-	if (sched_job)
-		drm_sched_increase_karma(sched_job);
+	drm_sched_increase_karma(sched_job);
 
 	spin_lock_irqsave(&pfdev->js->job_lock, flags);
 	for (i = 0; i < NUM_JOB_SLOTS; i++) {
@@ -398,7 +404,6 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
 	}
 	spin_unlock_irqrestore(&pfdev->js->job_lock, flags);
 
-	/* panfrost_core_dump(pfdev); */
 	panfrost_devfreq_record_transition(pfdev, js);
 	panfrost_device_reset(pfdev);
Hi Steven,
On 07/10/2019 14:50, Steven Price wrote:
> Panfrost uses multiple schedulers (one for each slot, so 2 in reality),
> and on a timeout has to stop all the schedulers to safely perform a
> reset. [...]
A Fixes: tag would be welcome here so it can be backported to v5.3.
> [...]
>
> -	if (sched_job)
> -		drm_sched_increase_karma(sched_job);
> +	drm_sched_increase_karma(sched_job);
Indeed looks cleaner.
> [...]
>
> -	/* panfrost_core_dump(pfdev); */
This should be cleaned up in a separate patch!
Thanks, testing it right now with the last change removed (it doesn't apply on v5.3 with it); results in a few hours... or minutes!
Neil
On 10/7/19 6:09 AM, Neil Armstrong wrote:
> [...]
>
>> -	/* panfrost_core_dump(pfdev); */
>
> This should be cleaned in another patch !
Seems to me that this should be some kind of TODO, see etnaviv_core_dump() for the kind of things we could be doing.
Maybe we can delete this line and mention this in the TODO file?
Cheers,
Tomeu
On 07/10/2019 17:14, Tomeu Vizoso wrote:
> [...]
>
>>> -	/* panfrost_core_dump(pfdev); */
>>
>> This should be cleaned in another patch !
>
> Seems to me that this should be some kind of TODO, see
> etnaviv_core_dump() for the kind of things we could be doing.
>
> Maybe we can delete this line and mention this in the TODO file?
Fair enough - I'll split this into a separate patch and add an entry to the TODO file. kbase has a mechanism to "dump on job fault" [1],[2] so we could do something similar.
Steve
[1] https://gitlab.freedesktop.org/panfrost/mali_kbase/blob/master/driver/produc...
[2] https://gitlab.freedesktop.org/panfrost/mali_kbase/blob/master/driver/produc...
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
On 07/10/2019 14:50, Steven Price wrote:
> [...]
It successfully ran 10 dEQP tests without crashing the Amlogic S912 with Mali T820:

Tested-by: Neil Armstrong <narmstrong@baylibre.com>