+ dri-devel
Since the scheduler is a shared component, please add dri-devel to all scheduler patches.
On Wed, Aug 18, 2021 at 7:21 AM Jingwen Chen Jingwen.Chen2@amd.com wrote:
On 2021-08-18 10:02 a.m., Alex Deucher wrote:
While this is true for amdgpu, it has no meaning for other drivers for which we haven't done the refactoring of embedding the HW fence (parent) into the job structure. In fact, thinking about it, unless you do the HW fence embedding for all the drivers using the scheduler, you cannot revert this patch or you will just break them.
Andrey
On Wed, Aug 18, 2021 at 10:26:25AM -0400, Andrey Grodzovsky wrote:
btw, why did you do that embedding? I do still have my patches with dma_fence annotations floating around, but my idea at least was to fix that issue with a mempool, not with embedding. What was the motivation for embedding the hw fence? -Daniel
On 2021-08-18 10:32 a.m., Daniel Vetter wrote:
The motivation was twofold. First, avoid memory allocation during job submission (the HW fence allocation), because as Christian explained this leads to deadlock with mm code during evictions due to memory pressure (Christian can clarify if I messed up this explanation). Second, to exactly revert this patch: while it solved the issue described in the patch, it created another one with drivers that bailed out early during TDR handling for various reasons, where the job would just leak because it was already removed from the pending list.
Andrey
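To make the embedding concrete, here is a minimal sketch of the idea - all names below (my_job, my_hw_fence, my_fence_ops and friends) are illustrative assumptions, not the actual amdgpu structures:

#include <drm/gpu_scheduler.h>
#include <linux/dma-fence.h>

static const struct dma_fence_ops my_fence_ops;	/* .release must free the job */
static DEFINE_SPINLOCK(my_fence_lock);
static u64 my_fence_context;			/* from dma_fence_context_alloc(1) */
static u64 my_fence_seqno;

/* the HW fence lives inside the job, so run_job() allocates nothing */
struct my_hw_fence {
	struct dma_fence base;
};

struct my_job {
	struct drm_sched_job base;
	struct my_hw_fence hw_fence;	/* preallocated together with the job */
};

static struct dma_fence *my_run_job(struct drm_sched_job *sched_job)
{
	struct my_job *job = container_of(sched_job, struct my_job, base);

	/* initializing the embedded fence replaces the allocation that used
	 * to happen here - this is what keeps the submit path out of the
	 * memory-reclaim deadlock described above */
	dma_fence_init(&job->hw_fence.base, &my_fence_ops, &my_fence_lock,
		       my_fence_context, ++my_fence_seqno);

	/* ... write the job to the ring ... */

	return dma_fence_get(&job->hw_fence.base);
}

The flip side, and the reason this cannot stay an amdgpu-only change, is that the job's lifetime is now tied to the fence's refcount: the fence release callback has to free the job, which is exactly the cross-driver lifetime question debated below.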
On Wed, Aug 18, 2021 at 10:36:32AM -0400, Andrey Grodzovsky wrote:
Yeah, that's the exact same thing I've chased with my dma_fence annotations, but thus far zero to no interest in getting it sorted. I think it'd be good to have some cross-driver agreement on how this should be solved before someone just charges ahead ...
Can't we reinsert it before we restart the scheduler thread? It might need a separate list for that due to the lockless queue tricks. Or am I thinking about the wrong kind of "we lost the job"? -Daniel
On 2021-08-18 10:42 a.m., Daniel Vetter wrote:
If you look at the original patch, it would reinsert it even earlier - right after stopping the SW scheduler thread - and even then it was too late for some drivers, as they would decide to return from their TDR handler even before that. It is solvable, but in an ugly way as far as I see: you would need to require each driver, in its own code, to put the job back in the list if it bails out before reaching the place where the scheduler framework does it. Seems like spaghetti code to me.
Andrey
On Wed, Aug 18, 2021 at 10:51:00AM -0400, Andrey Grodzovsky wrote:
Hm yeah I didn't realize this all happens before we stop the scheduler thread.
Why can't we stop the scheduler thread first, so that there's guaranteed no race? I've recently had a lot of discussions with panfrost folks about their reset that spans across engines, and without stopping the scheduler thread first before you touch anything it's just plain impossible.
I'm also still not understanding what exactly you guys have done; can someone please dig out the amdgpu patches that motivate all this, maybe that's clearer? A full explanation would still be good since I've only started in scheduler stuff.
Another thing I recently pondered for tdr races looking at i915 code is whether the tdr should first block the completion fence for that job. My motivation is to have a race-free error capture (if the completion races then we might start evicting memory and everything goes boom), but maybe that helps here too. Some kind of atomic "block this fence from completing" thing.
Or am I completely guessing in the wrong direction? -Daniel
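For illustration, one conceivable shape of that "block this fence from completing" idea - purely hypothetical, no such dma_fence facility exists, and every name here is made up:

struct my_hw_fence {
	struct dma_fence base;
	atomic_t tdr_blocked;	/* set by the TDR handler before error capture */
};

/* driver's completion (irq) path */
static void my_process_fence(struct my_hw_fence *f)
{
	/* TDR claimed this fence: hold off signaling until capture is done */
	if (atomic_read(&f->tdr_blocked))
		return;

	dma_fence_signal(&f->base);
}

How the fence then eventually gets signalled or errored out after the capture is exactly the hard part such a scheme would have to answer.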
Hi Daniel
Why can't we stop the scheduler thread first, so that there's guaranteed no race? I've recently had a lot of discussions with panfrost folks about their reset that spans across engines, and without stopping the scheduler thread first before you touch anything it's just plain impossible.
Yeah, we had this thought as well in our mind.
Our second approach is to call kthread_park() in the job_timedout() routine, so that the "bad" job is guaranteed to be usable without the scheduler touching or freeing it. Please check this sample patch as well:
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index a2a9536..50a49cb 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -319,17 +319,12 @@ static void drm_sched_job_timedout(struct work_struct *work)
 	sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
 
 	/* Protects against concurrent deletion in drm_sched_get_cleanup_job */
+	kthread_park(sched->thread);
 	spin_lock(&sched->job_list_lock);
 	job = list_first_entry_or_null(&sched->pending_list,
 				       struct drm_sched_job, list);
 
 	if (job) {
-		/*
-		 * Remove the bad job so it cannot be freed by concurrent
-		 * drm_sched_cleanup_jobs. It will be reinserted back after sched->thread
-		 * is parked at which point it's safe.
-		 */
-		list_del_init(&job->list);
 		spin_unlock(&sched->job_list_lock);
 
 		status = job->sched->ops->timedout_job(job);
@@ -345,6 +340,7 @@ static void drm_sched_job_timedout(struct work_struct *work)
 	} else {
 		spin_unlock(&sched->job_list_lock);
 	}
+	kthread_unpark(sched->thread);
 
 	if (status != DRM_GPU_SCHED_STAT_ENODEV) {
 		spin_lock(&sched->job_list_lock);
@@ -393,20 +389,6 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 	kthread_park(sched->thread);
 
 	/*
-	 * Reinsert back the bad job here - now it's safe as
-	 * drm_sched_get_cleanup_job cannot race against us and release the
-	 * bad job at this point - we parked (waited for) any in progress
-	 * (earlier) cleanups and drm_sched_get_cleanup_job will not be called
-	 * now until the scheduler thread is unparked.
-	 */
-	if (bad && bad->sched == sched)
-		/*
-		 * Add at the head of the queue to reflect it was the earliest
-		 * job extracted.
-		 */
-		list_add(&bad->list, &sched->pending_list);
-
-	/*
 	 * Iterate the job list from later to earlier one and either deactive
 	 * their HW callbacks or remove them from pending list if they already
 	 * signaled.
Thanks
------------------------------------------ Monk Liu | Cloud-GPU Core team ------------------------------------------
@Daniel Vetter @Grodzovsky, Andrey @Koenig, Christian
Do you have any concerns about the kthread_park() approach?
Theoretically speaking, sched_main shall run there exclusively with job_timeout, since they both touch jobs, and stopping the scheduler during job_timeout won't impact performance, since in that scenario there was already something wrong/stuck on that ring/scheduler.
Thanks
------------------------------------------ Monk Liu | Cloud-GPU Core team ------------------------------------------
No, that perfectly works for me.
The problem we used to have with this approach was that we potentially have multiple timeouts at the same time.
But when we serialize the timeout handling by using a single workqueue, as suggested by Daniel now as well, then that isn't an issue any more.
Regards, Christian.
On 20.08.21 09:12, Liu, Monk wrote:
I believe we have some minor confusion here
On 2021-08-20 4:09 a.m., Jingwen Chen wrote:
While we do use a single work queue by default (system_wq) for this, we use different work items, one per scheduler, which means they still run in parallel. I didn't see the original mail by Daniel, but from what Christian mentioned I assume he suggested to serialize all TO handlers from all possible engines, by either using a single work item for the TO handler or by using a single-threaded queue for all TO handlers. So I believe it's premature to send the V3 patch without also switching all TDR handling to actual single-threaded handling per entire ASIC - and in the case of amdgpu we actually need to consider XGMI hives, so it goes beyond a single device.
Andrey
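For reference, the fully serialized variant being discussed could look roughly like this - a single ordered workqueue shared by every scheduler on the ASIC (or XGMI hive), so at most one TO handler ever runs at a time; tdr_wq and the queueing site are assumptions, not existing scheduler code:

/* one ordered queue per device/hive: work items execute one at a time,
 * in queueing order */
struct workqueue_struct *tdr_wq = alloc_ordered_workqueue("gpu-tdr", 0);

/* arm each scheduler's timeout work on the shared ordered queue instead
 * of the default system_wq */
queue_delayed_work(tdr_wq, &sched->work_tdr, sched->timeout);

(drm_sched_init() has since gained a dedicated timeout workqueue parameter upstream for exactly this kind of setup.)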
On Fri, Aug 20, 2021 at 09:20:42AM +0200, Christian König wrote:
Sorry, I got massively buried in everything, catching up. Iirc there's a special function for parking schedulers (which panfrost now uses to handle its cross-engine reset), would be good to use that.
And yeah, if your reset code potentially spans across engines I think you need a single workqueue to make sure stuff doesn't go boom. Tbh might be best to check out what panfrost has done and ask panfrost folks for an ack on your approach. -Daniel
On 2021-08-20 3:12 a.m., Liu, Monk wrote:
Regarding the last paragraph, and specifically the claim that there was already something wrong if the TO handler starts execution - I'm not sure about this, and I wonder if we have a potential bug here. When we start the timeout timer in drm_sched_job_begin, we do it for each new incoming job. In a constant rapid stream of jobs, each new job coming in will try to start the timer, but most of the time this operation just bails out, as there is already a pending timer from one of the previous jobs, which cancels out any new ones [1]. So when the TO handler does execute eventually, it's not because something is wrong but simply because the TO has expired. If in this case the pending list is not empty, a false TDR will be triggered. I think long ago we used a TO handler per job and not per scheduler; this would solve this problem but hurt the serialization issue we are trying to solve. So not sure what to do.
[1] - https://elixir.bootlin.com/linux/v5.14-rc1/source/kernel/workqueue.c#L1665
Andrey
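The workqueue behaviour referenced in [1] boils down to this difference:

/* no-op if &sched->work_tdr is already pending: the timer keeps running on
 * behalf of the oldest job - this is what drm_sched_start_timeout() does
 * via schedule_delayed_work() */
queue_delayed_work(system_wq, &sched->work_tdr, sched->timeout);

/* re-arms the timer even if the work is already pending, restarting the
 * countdown - the alternative if a per-job restart is wanted */
mod_delayed_work(system_wq, &sched->work_tdr, sched->timeout);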
Hi Andrey
Sorry, it is really hard for me to get any particular or solid potential bugs from your reply; can you be more specific, e.g. what kind of race issue is introduced by this kthread_park/unpark approach?
On your other question/concern:
"In a constant rapid stream of jobs, each new job coming in will try to start the timer, but most of the time this operation just bails out, as there is already a pending timer from one of the previous jobs, which cancels out any new ones [1]. So when the TO handler does execute eventually, it's not because something is wrong but simply because the TO has expired."
I totally agree with you on this point, and I think I have a patch to address it, but this problem is not related to our current topic at all ... our current topic is the bail-out bad-job handling from the advanced TDR mode.
The bug here is that our current TO handler only does the counting on the first job of the given scheduler, and the following jobs won't recalculate the TO at all. I can assure you that this is a regression, because when I implemented TDR years ago I already planned for such a problem. Please check this change to resolve it:
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index a2a9536..7b5f99a 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -235,6 +235,13 @@ static void drm_sched_start_timeout(struct drm_gpu_scheduler *sched)
 		schedule_delayed_work(&sched->work_tdr, sched->timeout);
 }
 
+static void drm_sched_restart_timeout(struct drm_gpu_scheduler *sched)
+{
+	if (sched->timeout != MAX_SCHEDULE_TIMEOUT &&
+	    !list_empty(&sched->pending_list))
+		mod_delayed_work(system_wq, &sched->work_tdr, sched->timeout);
+}
+
 /**
  * drm_sched_fault - immediately start timeout handler
  *
@@ -693,6 +682,11 @@ drm_sched_get_cleanup_job(struct drm_gpu_scheduler *sched)
 	if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
 		/* remove job from pending_list */
 		list_del_init(&job->list);
+
+		/* once the job deleted from pending list we should restart
+		 * the timeout calculation for the next job.
+		 */
+		drm_sched_restart_timeout(sched);
 		/* make the scheduled timestamp more accurate */
 		next = list_first_entry_or_null(&sched->pending_list,
 						typeof(*next), list);
If you guys do not have concerns I can submit this patch for review, but again, let's focus on the bail-out bad-job handling as our priority. We are very close to our purpose; let me know what your concerned race issue is and we can address it.
Thanks
------------------------------------------ Monk Liu | Cloud-GPU Core team ------------------------------------------
On 2021-08-24 3:24 a.m., Liu, Monk wrote:
Hey, you might have missed my replies in the thread regarding this. Check them here.
https://www.spinics.net/lists/amd-gfx/msg67041.html https://www.spinics.net/lists/amd-gfx/msg67090.html
In summary, IMHO we can park/unpark only within a section serialized against all other possible TDR handlers (at the whole-ASIC or even XGMI-hive level). Today we achieve this by locking. In the new proposal there is no locking - so we either add one or just serialize TDRs to single-thread execution. Let me know if you think it's not an issue actually - I might be missing something.
Andrey
On 2021-08-19 5:30 a.m., Daniel Vetter wrote:
Talked with Christian on that: for each TDR we actually stop the schedulers for all the rings, and not only the hung ring, since an ASIC reset will impact all the rings anyway. So we cannot allow timeout handlers for other rings to run in parallel to ours, as they will stop/restart the threads we just stopped and rely on being stopped. So it's all done with a device-wide lock inside the amdgpu TDR handler; only inside the locked section may we stop/restart the scheduler threads. Christian also mentioned that you proposed at some point to serialize all TDR handling into a single thread for all rings - this seems like something that could be used. We then don't need any locking against TDR handlers from other rings, and then we may stop the scheduler thread as a first step.
https://gitlab.freedesktop.org/agd5f/linux/-/commit/de7515d43659f852590645a6...
I think we already do it here - https://elixir.bootlin.com/linux/v5.14-rc1/source/drivers/gpu/drm/scheduler/...
Andrey
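In sketch form, the scheme described above looks like this - adev, reset_lock, num_rings and rings are assumed names standing in for the real amdgpu recovery code:

/* device- (or hive-) wide lock serializes all TDR handlers */
mutex_lock(&adev->reset_lock);

/* park every ring's scheduler, not just the hung one, since the ASIC
 * reset takes all of them down anyway */
for (i = 0; i < adev->num_rings; i++)
	drm_sched_stop(&adev->rings[i]->sched, bad_job);

/* ... HW reset, then resubmit the pending jobs ... */

for (i = 0; i < adev->num_rings; i++)
	drm_sched_start(&adev->rings[i]->sched, true);

mutex_unlock(&adev->reset_lock);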
On Thu, Aug 19, 2021 at 11:25:09AM -0400, Andrey Grodzovsky wrote:
Uh, it would have been really good if this was discussed a bit more widely beforehand. Now we have rather diverging approaches to this. It would also be really good to resurrect the dma_fence annotations.
Can you guys pls spend a bit of time on this? Shouldn't be too hard to type up rfc conversion patches for the other drivers.
Ah yes, this works because drm/sched has a separate hw fence from the logical job fence. -Daniel
On Thu, Aug 26, 2021 at 11:04:14AM +0200, Daniel Vetter wrote:
Ping for this. Currently the hw fence is returned from the ->run_job callback, and that's not great design.
If we embed it, then I think it should start existing at the latest from drm_sched_job_arm(). Maybe not yet initialized, but at least allocated. So the right thing to do here is to have the hw fence as a pointer in struct drm_sched_job, and check in drm_sched_job_arm() that it's at least allocated.
Otherwise we're just diverging across drivers and tempting them to do the wrong thing with the current ->run_job callback interface.
Can you guys look into this? -Daniel
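In sketch form, the proposal amounts to something like this - not merged code, just the shape of the suggestion:

struct drm_sched_job {
	/* ... existing members ... */
	struct dma_fence *hw_fence;	/* allocated by the driver up front,
					 * initialized/emitted in run_job() */
};

void drm_sched_job_arm(struct drm_sched_job *job)
{
	/* the latest point at which the hw fence must exist */
	WARN_ON(!job->hw_fence);

	/* ... existing arming logic ... */
}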
On 2021-08-31 9:11 a.m., Daniel Vetter wrote:
What's the problem you see there?
Why do we need to allocate the HW fence if it's embedded within the job struct?
Otherwise we're just diverging across drivers and tempting them to do the wrong thing with the current ->run_job callback interface.
Maybe we should switch from embedding in the driver-level job struct, as it's done now, to embedding in drm_sched_job, and just leave the fence initialization to driver-specific code?
Andrey
On Tue, Aug 31, 2021 at 02:24:52PM -0400, Andrey Grodzovsky wrote:
For one, all other drivers work like that, and it's not great to be inconsistent. And it allows that inconsistent/wrong pattern to continue.
Second, I'm not even sure you can embed the hw fence, because there's this job-restarting going on, which at least thus far allocated a new hw fence. So this needs consideration.
The hw fence is a refcounted struct, and the drm_sched_job is a different struct. And we didn't have a dri-devel discussion about whether it's correct to conflate these two lifetimes; amdgpu folks simply hacked something together.
Maybe? Like I've not been involved in these discussions on the amd side at all, I'm just noticing that we now have a rather inconsistently used interface across drivers. Which is no good. -Daniel
On 2021-09-02 10:28 a.m., Daniel Vetter wrote:
There is a solution to this, at least at the amdgpu level, see here - https://www.spinics.net/lists/amd-gfx/msg66614.html So we would reset the embedded fence seqno for this purpose (see amdgpu_fence_emit).
Obviously scheduler-level changes must be discussed at the dri-devel forum level. What happened here, as Monk already mentioned: we had an internal discussion about how to fix the problem in the header of this thread - avoiding accessing a freed job from the TDR handler without the current hack in place of removal and back-insertion into the pending list. It's there we came up (I think Christian first mentioned this) with the idea of embedding the HW fence into the amdgpu job - both for avoiding memory allocations, and because this allows us to use the HW fence's refcounting as a solution to the above problem, by simply grabbing a reference to the next fence in the pending list as a first step in the TDR handler. What we didn't take into account at the time is that indeed this change cannot be limited to the amdgpu level - this we noticed much later, during final code reviews.
Andrey
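The seqno-reset trick from the linked patch, in rough outline, reusing the hypothetical my_job layout sketched earlier in the thread - see amdgpu_fence_emit() for the real thing, including the care it takes with the refcount of a fence that gets re-initialized:

static struct dma_fence *my_fence_emit(struct my_job *job,
				       struct my_ring *ring)
{
	struct dma_fence *f = &job->hw_fence.base;

	/* on (re)submission, re-init the embedded fence with a fresh seqno
	 * instead of allocating a new fence object */
	dma_fence_init(f, &my_fence_ops, &ring->fence_lock,
		       ring->fence_context, ++ring->fence_seqno);

	/* ... write the fence packet to the ring ... */

	return dma_fence_get(f);
}

This is what lets the TDR handler simply dma_fence_get() the fence of the next job in the pending list and know that job cannot be freed underneath it.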
On Thu, Sep 02, 2021 at 11:36:34AM -0400, Andrey Grodzovsky wrote:
I think stuff like this really should be lifted into standard behaviour. I have no idea whether this is doable across the board in all drivers, and having incompatible solutions here without understanding the constraints across drivers is no good at all.
Not sure where this fell through the cracks, but imo at least changing where the hw fence is allocated is a very fundamental change, and at the latest at that point you should have discussed this on dri-devel.
But even the tdr races would probably have been good to start on dri-devel. Now it looks like Monk&team have lost 6 months for nothing. -Daniel
On 07.09.21 10:47, Daniel Vetter wrote:
I'm the one who kicked this off in April, and I made a nice internal presentation to explain what the problem is etc... So the idea of embedding the hardware fence into the job came from me.
But during the presentation I also noted that we need to sync up with a guy named Daniel Vetter, because it was his patch set which surfaced this issue by annotating fence completion prerequisites in lockdep.
But even the tdr races would probably have been good to start on dri-devel. Now it looks like Monk&team have lost 6 months for nothing.
Well, to make it clear: I noted during the presentation in April that this needs to be discussed with you, I also noted to the first guy working on this that this needs to be discussed on dri-devel instead of internally, and I'm pretty sure that I noted this a couple more times after it moved to somebody else. And IIRC Andrey also noted pretty early that we should not discuss this internally.
So if people are not listening it is not a surprise that they spend time on stuff which isn't upstreamable like this.
Christian.
Hi Andrey and Daniel
We worked for a really long time on this new feature for AMD that finally can pick out the bad job from all timed-out ones, and the change in the scheduler (get/put fence in drm_sched_job_timedout, and removing the bad-job delete and put-back) is the last piece for us.
While we understand and realize that after the "bad job list node delete" logic is removed from job_timedout, there will be race issues introduced if a vendor's job_timeout callback accesses the bad job in parallel with the scheduler doing "sched->ops->free_job(cleanup_job)".
And to not introduce any impact at all on those vendors, I'd like to propose a very simple change (which introduces a new bool member for the scheduler to indicate whether the del/put-back logic is needed or not); check the patch below:
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 47ea468..5e0bdc4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -495,6 +495,8 @@ int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,
 			return r;
 	}
 
+	ring->sched.keep_bad_job = true;
+
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 92d8de2..e7ac384 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -314,6 +314,7 @@ static void drm_sched_job_timedout(struct work_struct *work)
 {
 	struct drm_gpu_scheduler *sched;
 	struct drm_sched_job *job;
+	struct dma_fence *f = NULL;
 
 	sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
 
@@ -328,7 +329,11 @@ static void drm_sched_job_timedout(struct work_struct *work)
 		 * drm_sched_cleanup_jobs. It will be reinserted back after sched->thread
 		 * is parked at which point it's safe.
 		 */
-		list_del_init(&job->list);
+		if (sched->keep_bad_job == false)
+			list_del_init(&job->list);
+		else
+			f = dma_fence_get(job->s_fence->parent); // get parent fence here to prevent the hw_fence from dropping to zero due to sched-main's cleanup_jobs; for amdgpu, once the parent fence drops to zero the sched_job will be kfree-ed
+
 		spin_unlock(&sched->job_list_lock);
 
 		job->sched->ops->timedout_job(job);
@@ -341,6 +346,8 @@ static void drm_sched_job_timedout(struct work_struct *work)
 			job->sched->ops->free_job(job);
 			sched->free_guilty = false;
 		}
+
+		dma_fence_put(f);
 	} else {
 		spin_unlock(&sched->job_list_lock);
 	}
@@ -396,7 +403,7 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 	 * (earlier) cleanups and drm_sched_get_cleanup_job will not be called
 	 * now until the scheduler thread is unparked.
 	 */
-	if (bad && bad->sched == sched)
+	if (bad && bad->sched == sched && sched->keep_bad_job == false)
 		/*
 		 * Add at the head of the queue to reflect it was the earliest
 		 * job extracted.
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 4ea8606..5f9a640 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -301,6 +301,7 @@ struct drm_gpu_scheduler {
 	atomic_t			_score;
 	bool				ready;
 	bool				free_guilty;
+	bool				keep_bad_job;
 };
 
 int drm_sched_init(struct drm_gpu_scheduler *sched,
Thanks
------------------------------------------ Monk Liu | Cloud-GPU Core team ------------------------------------------
On Thu, Aug 19, 2021 at 03:01:26AM +0000, Liu, Monk wrote:
If everyone operates like that, then the shared code becomes a massive mess of incompatible options and becomes unmaintainable. I don't think that's a good path forward. -Daniel
On Wed, Aug 18, 2021 at 10:02:06AM -0400, Alex Deucher wrote:
Do we need a MAINTAINERS entry specifically for this, or just oversight?
Does this also hold for all other drivers? In general the commit message feels rather rushed and I have no idea what's really going on.
Also, at least around tdr there have been some solid clarifications about how this is supposed to work between tdr and the main scheduler thread; it would be good to explain how that all fits together. Or should fit together. -Daniel