From: Michel Dänzer michel.daenzer@amd.com
Not doing this causes piglit hangs[0] on my Cape Verde card. No issues on Bonaire and Kaveri though.
[0] Same symptoms as those fixed on CIK by 'drm/radeon: set VM base addr using the PFP v2'.
Signed-off-by: Michel Dänzer michel.daenzer@amd.com
---
 drivers/gpu/drm/radeon/radeon_vm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/radeon/radeon_vm.c b/drivers/gpu/drm/radeon/radeon_vm.c
index ccae4d9..898cbb7 100644
--- a/drivers/gpu/drm/radeon/radeon_vm.c
+++ b/drivers/gpu/drm/radeon/radeon_vm.c
@@ -238,7 +238,9 @@ void radeon_vm_flush(struct radeon_device *rdev,
 	uint64_t pd_addr = radeon_bo_gpu_offset(vm->page_directory);

 	/* if we can't remember our last VM flush then flush now! */
-	if (!vm->last_flush || pd_addr != vm->pd_gpu_addr) {
+	/* XXX figure out why we have to flush all the time before CIK */
+	if (rdev->family < CHIP_BONAIRE ||
+	    !vm->last_flush || pd_addr != vm->pd_gpu_addr) {
 		trace_radeon_vm_flush(pd_addr, ring, vm->id);
 		vm->pd_gpu_addr = pd_addr;
 		radeon_ring_vm_flush(rdev, ring, vm);
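For readers skimming the diff, the strengthened condition can be modeled as a small standalone predicate (plain C with hypothetical stand-in types; a sketch of the logic, not the kernel code itself):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical stand-ins for the radeon types involved. */
enum chip_family { CHIP_VERDE, CHIP_BONAIRE };

struct vm_state {
	void *last_flush;     /* fence of the last VM flush, NULL if unknown */
	uint64_t pd_gpu_addr; /* page directory address at the last flush */
};

/* Mirrors the patched condition: pre-CIK parts (family < CHIP_BONAIRE)
 * always flush; newer parts only flush when the last flush is unknown
 * or the page directory address changed. */
static bool needs_vm_flush(enum chip_family family,
			   const struct vm_state *vm, uint64_t pd_addr)
{
	return family < CHIP_BONAIRE ||
	       !vm->last_flush || pd_addr != vm->pd_gpu_addr;
}
```

So on Cape Verde (pre-CIK) the VM is flushed on every submission, while Bonaire and newer keep the cached-flush optimization.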
On Thu, Aug 7, 2014 at 3:46 AM, Michel Dänzer michel@daenzer.net wrote:
From: Michel Dänzer michel.daenzer@amd.com
Not doing this causes piglit hangs[0] on my Cape Verde card. No issues on Bonaire and Kaveri though.
[0] Same symptoms as those fixed on CIK by 'drm/radeon: set VM base addr using the PFP v2'.
Signed-off-by: Michel Dänzer michel.daenzer@amd.com
We should be using PFP as much as possible. Does the attached patch help?
Alex
dri-devel mailing list dri-devel@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/dri-devel
So what's the difference between WRITE_DATA with PFP vs ME? Would it also be preferable for DMA_DATA and COPY_DATA?
Marek
The GFX CP is split up into two different engines - the PFP and the ME (starting with SI you additionally get the CE as well).
The PFP is responsible for reading commands out of memory and forwarding them to the ME (or the CE). Some commands can be executed on the PFP as well, like simple register writes, but most commands can only run on the ME.
The PFP and the ME are connected through an 8-entry ring buffer (IIRC), so when you do something on the ME that the PFP depends on, you need to block the PFP until the ME has finished its operation.
Whether we should use the PFP or the ME strongly depends on what we want to do, but in most cases (like writing to memory) only the ME can do the operation anyway.
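As a toy illustration of that run-ahead behavior (ordinary C with made-up names, the 8-entry depth taken from the "IIRC" above; a model of the concept, not of the real hardware):

```c
#include <stdint.h>

#define PFP_ME_FIFO_DEPTH 8  /* assumed depth, per the "IIRC" above */

struct cp_model {
	int fetched;  /* commands the PFP has pulled from memory */
	int retired;  /* commands the ME has finished executing */
};

/* The PFP keeps fetching ahead of the ME until the FIFO between them
 * is full; only then does it stall. */
static void pfp_fetch(struct cp_model *cp)
{
	if (cp->fetched - cp->retired < PFP_ME_FIFO_DEPTH)
		cp->fetched++;
}

static void me_retire(struct cp_model *cp)
{
	if (cp->retired < cp->fetched)
		cp->retired++;
}
```

The FIFO only bounds how far ahead the PFP can run; any command whose effects the PFP itself depends on still needs an explicit block/sync.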
Regards, Christian.
Am 07.08.2014 um 17:38 schrieb Marek Olšák:
So what's the difference between WRITE_DATA with PFP vs ME? Would it also be preferable for DMA_DATA and COPY_DATA?
Marek
On Thu, Aug 7, 2014 at 11:38 AM, Marek Olšák maraeo@gmail.com wrote:
So what's the difference between WRITE_DATA with PFP vs ME? Would it also be preferable for DMA_DATA and COPY_DATA?
The PFP comes before the ME in the pipeline. Note that there is no PFP (or CE) on the compute queues, so we can't use the PFP (or CE) for compute. According to the internal gfx teams, we should use the PFP whenever possible since the PFP is rarely as busy as the ME. Note also that the engine bit is not always consistent (for some packets 0 = ME, 1 = PFP, and for others 1 = ME and 0 = PFP).
Alex
On 08.08.2014 00:55, Alex Deucher wrote:
Note that there is no PFP (or CE) on the compute queues so we can't use PFP (or CE) for compute.
AFAICT cik_hdp_flush_cp_ring_emit() always uses the PFP though.
Note also that the engine bit is not always consistent (for some packets 0 = ME, 1 = PFP, and for others 1 = ME and 0 = PFP).
Ugh. Then we should probably use explicit *_ENGINE_PFP/ME macros instead of *_ENGINE(lucky_number). :)
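A sketch of that suggestion (the macro names, packet families, and bit position are made up here; the real values would come from the respective *d.h register headers):

```c
#include <stdint.h>

/* Hypothetical engine-select field at an assumed bit position. Because the
 * meaning of the engine bit flips between packet families, each family gets
 * its own explicitly named PFP/ME macros instead of a shared *_ENGINE(n)
 * taking a lucky number at the call site. */
#define CP_ENGINE_SHIFT		30

/* Packet family where 0 = ME and 1 = PFP. */
#define FOO_PKT_ENGINE_ME	(0u << CP_ENGINE_SHIFT)
#define FOO_PKT_ENGINE_PFP	(1u << CP_ENGINE_SHIFT)

/* Packet family where the encoding is inverted: 0 = PFP and 1 = ME. */
#define BAR_PKT_ENGINE_PFP	(0u << CP_ENGINE_SHIFT)
#define BAR_PKT_ENGINE_ME	(1u << CP_ENGINE_SHIFT)
```

The caller then always writes `..._ENGINE_PFP` or `..._ENGINE_ME` and the per-packet inversion stays hidden in the header.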
On Thu, Aug 7, 2014 at 3:59 PM, Alex Deucher alexdeucher@gmail.com wrote:
We should be using PFP as much as possible. Does the attached patch help?
Unfortunately not.
On Thu, Aug 7, 2014 at 3:59 PM, Alex Deucher alexdeucher@gmail.com wrote:
We should be using PFP as much as possible. Does the attached patch help?
Unfortunately not.
Maybe add a readback of the VM base addr pointer to make sure that the write has really reached the SRBM?
Otherwise I'm out of ideas as well, Christian.
On 08.08.2014 17:44, Christian König wrote:
On Thu, Aug 7, 2014 at 3:59 PM, Alex Deucher alexdeucher@gmail.com wrote:
We should be using PFP as much as possible. Does the attached patch help?
Unfortunately not.
Maybe add a readback of the VM base addr pointer to make sure that the write has really reached the SRBM?
I'm not sure what exactly you're thinking of, but I'm happy to test any patches you guys come up with. :)
On Fri, Aug 8, 2014 at 4:50 AM, Michel Dänzer michel@daenzer.net wrote:
On 08.08.2014 17:44, Christian König wrote:
On Thu, Aug 7, 2014 at 3:59 PM, Alex Deucher alexdeucher@gmail.com wrote:
We should be using PFP as much as possible. Does the attached patch help?
Unfortunately not.
Maybe add a readback of the VM base addr pointer to make sure that the write has really reached the SRBM?
I'm not sure what exactly you're thinking of, but I'm happy to test any patches you guys come up with. :)
Maybe some variant of this patch?
Alex
On Fri, Aug 8, 2014 at 9:31 AM, Alex Deucher alexdeucher@gmail.com wrote:
Maybe some variant of this patch?
Ignore that one. typo. Try this one instead.
Alex
On 08.08.2014 22:34, Alex Deucher wrote:
Ignore that one. typo. Try this one instead.
Thanks, but still no luck.
On Mon, Aug 11, 2014 at 4:42 AM, Michel Dänzer michel@daenzer.net wrote:
Thanks, but still no luck.
I'm out of ideas at the moment. I'll apply your patch unless Christian can think of anything else.
Alex
Am 11.08.2014 um 17:00 schrieb Alex Deucher:
I'm out of ideas at the moment. I'll apply your patch unless Christian can think of anything else.
Unfortunately not, so apply the patch for now.
Christian.
Hey guys,
I think I've figured out what the cause of the remaining issues is while working on the implicit sync stuff.
The issue happens when the flush is done because of a CS to the DMA ring, directly followed by a CS to the GFX ring which depends on the DMA submission being finished.
In this situation we insert a semaphore command so that the GFX ring waits for the DMA ring to finish execution, and we normally don't flush on the GFX ring a second time (the flush should have been done on the DMA ring, and we waited for that to finish).
The problem here is that semaphores can't be executed on the PFP, so the PFP doesn't wait for the semaphore to be completed and happily starts fetching commands while the flush on the DMA ring isn't completed.
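A tiny ordering model of that race (plain C, invented names; just to make the failure mode concrete, under the assumption stated above that the semaphore wait stalls only the ME):

```c
#include <stdbool.h>

/* State of one GFX submission that depends on prior DMA-ring work. */
struct gfx_submit {
	bool dma_flush_signaled; /* DMA ring finished its VM flush */
	bool pfp_synced;         /* an explicit PFP sync follows the wait */
};

/* The ME-side semaphore wait orders ME execution after the DMA flush... */
static bool me_sees_flushed_vm(const struct gfx_submit *s)
{
	return s->dma_flush_signaled;
}

/* ...but the PFP's command fetch is only ordered after the flush if the
 * PFP was explicitly told to wait for the ME as well. */
static bool pfp_sees_flushed_vm(const struct gfx_submit *s)
{
	return s->dma_flush_signaled && s->pfp_synced;
}
```

Without the extra sync the PFP can fetch (and act on) commands through page tables the DMA ring has not finished flushing.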
@Michel: can you give this branch a try and see if it now works for you: http://cgit.freedesktop.org/~deathsimple/linux/log/?h=vm-flushing
We should keep that behavior in mind should we switch to putting IBs into normal BOs, because when those are swapped out, the synchronization won't wait for swapping them back in using the DMA either.
Thanks, Christian.
On 15.08.2014 23:52, Christian König wrote:
@Michel: can you give this branch a try and see if it now works for you: http://cgit.freedesktop.org/~deathsimple/linux/log/?h=vm-flushing
Unfortunately not; in fact, it seems to make the problem occur even faster, after just hundreds of piglit tests instead of after thousands.
However, based on your description above, I came up with the patch below, which fixes the problem for me, with or without your 'drop RADEON_FENCE_SIGNALED_SEQ' patch.
From: Michel Dänzer michel.daenzer@amd.com
Date: Mon, 18 Aug 2014 17:29:17 +0900
Subject: [PATCH] drm/radeon: Sync ME and PFP after CP semaphore waits on >= Cayman
Fixes lockups due to CP read GPUVM faults when running piglit on Cape Verde.
Signed-off-by: Michel Dänzer michel.daenzer@amd.com
---
If the PACKET3_PFP_SYNC_ME packet was already supported before Cayman, it might be a good idea to do this wherever possible, to avoid any other issues the PFP running ahead of semaphore waits might cause.
 drivers/gpu/drm/radeon/cik.c         | 17 +++++++++++++++++
 drivers/gpu/drm/radeon/ni.c          | 33 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/radeon/nid.h         |  2 ++
 drivers/gpu/drm/radeon/radeon_asic.c |  4 ++--
 drivers/gpu/drm/radeon/radeon_asic.h |  4 ++++
 5 files changed, 58 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
index 81d07e6..49707ac 100644
--- a/drivers/gpu/drm/radeon/cik.c
+++ b/drivers/gpu/drm/radeon/cik.c
@@ -3920,6 +3920,17 @@ void cik_fence_compute_ring_emit(struct radeon_device *rdev,
 	radeon_ring_write(ring, 0);
 }

+/**
+ * cik_semaphore_ring_emit - emit a semaphore on the CP ring
+ *
+ * @rdev: radeon_device pointer
+ * @ring: radeon ring buffer object
+ * @semaphore: radeon semaphore object
+ * @emit_wait: Is this a semaphore wait?
+ *
+ * Emits a semaphore signal/wait packet to the CP ring and prevents the PFP
+ * from running ahead of semaphore waits.
+ */
 bool cik_semaphore_ring_emit(struct radeon_device *rdev,
 			     struct radeon_ring *ring,
 			     struct radeon_semaphore *semaphore,
@@ -3932,6 +3943,12 @@ bool cik_semaphore_ring_emit(struct radeon_device *rdev,
 	radeon_ring_write(ring, lower_32_bits(addr));
 	radeon_ring_write(ring, (upper_32_bits(addr) & 0xffff) | sel);

+	if (emit_wait) {
+		/* Prevent the PFP from running ahead of the semaphore wait */
+		radeon_ring_write(ring, PACKET3(PACKET3_PFP_SYNC_ME, 0));
+		radeon_ring_write(ring, 0x0);
+	}
+
 	return true;
 }

diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
index ba89375..4e586c7 100644
--- a/drivers/gpu/drm/radeon/ni.c
+++ b/drivers/gpu/drm/radeon/ni.c
@@ -1363,6 +1363,39 @@ void cayman_fence_ring_emit(struct radeon_device *rdev,
 	radeon_ring_write(ring, 0);
 }

+
+/**
+ * cayman_semaphore_ring_emit - emit a semaphore on the CP ring
+ *
+ * @rdev: radeon_device pointer
+ * @ring: radeon ring buffer object
+ * @semaphore: radeon semaphore object
+ * @emit_wait: Is this a semaphore wait?
+ *
+ * Emits a semaphore signal/wait packet to the CP ring and prevents the PFP
+ * from running ahead of semaphore waits.
+ */
+bool cayman_semaphore_ring_emit(struct radeon_device *rdev,
+				struct radeon_ring *ring,
+				struct radeon_semaphore *semaphore,
+				bool emit_wait)
+{
+	uint64_t addr = semaphore->gpu_addr;
+	unsigned sel = emit_wait ? PACKET3_SEM_SEL_WAIT : PACKET3_SEM_SEL_SIGNAL;
+
+	radeon_ring_write(ring, PACKET3(PACKET3_MEM_SEMAPHORE, 1));
+	radeon_ring_write(ring, lower_32_bits(addr));
+	radeon_ring_write(ring, (upper_32_bits(addr) & 0xff) | sel);
+
+	if (emit_wait) {
+		/* Prevent the PFP from running ahead of the semaphore wait */
+		radeon_ring_write(ring, PACKET3(PACKET3_PFP_SYNC_ME, 0));
+		radeon_ring_write(ring, 0x0);
+	}
+
+	return true;
+}
+
 void cayman_ring_ib_execute(struct radeon_device *rdev, struct radeon_ib *ib)
 {
 	struct radeon_ring *ring = &rdev->ring[ib->ring];
diff --git a/drivers/gpu/drm/radeon/nid.h b/drivers/gpu/drm/radeon/nid.h
index 2e12e4d..dba6d37 100644
--- a/drivers/gpu/drm/radeon/nid.h
+++ b/drivers/gpu/drm/radeon/nid.h
@@ -1131,6 +1131,8 @@
 #define	PACKET3_DRAW_INDEX_MULTI_ELEMENT	0x36
 #define	PACKET3_WRITE_DATA			0x37
 #define	PACKET3_MEM_SEMAPHORE			0x39
+#	define PACKET3_SEM_SEL_SIGNAL		(0x6 << 29)
+#	define PACKET3_SEM_SEL_WAIT		(0x7 << 29)
 #define	PACKET3_MPEG_INDEX			0x3A
 #define	PACKET3_WAIT_REG_MEM			0x3C
 #define	PACKET3_MEM_WRITE			0x3D
diff --git a/drivers/gpu/drm/radeon/radeon_asic.c b/drivers/gpu/drm/radeon/radeon_asic.c
index eeeeabe..67eb267 100644
--- a/drivers/gpu/drm/radeon/radeon_asic.c
+++ b/drivers/gpu/drm/radeon/radeon_asic.c
@@ -1555,7 +1555,7 @@ static struct radeon_asic_ring cayman_gfx_ring = {
 	.ib_execute = &cayman_ring_ib_execute,
 	.ib_parse = &evergreen_ib_parse,
 	.emit_fence = &cayman_fence_ring_emit,
-	.emit_semaphore = &r600_semaphore_ring_emit,
+	.emit_semaphore = &cayman_semaphore_ring_emit,
 	.cs_parse = &evergreen_cs_parse,
 	.ring_test = &r600_ring_test,
 	.ib_test = &r600_ib_test,
@@ -1804,7 +1804,7 @@ static struct radeon_asic_ring si_gfx_ring = {
 	.ib_execute = &si_ring_ib_execute,
 	.ib_parse = &si_ib_parse,
 	.emit_fence = &si_fence_ring_emit,
-	.emit_semaphore = &r600_semaphore_ring_emit,
+	.emit_semaphore = &cayman_semaphore_ring_emit,
 	.cs_parse = NULL,
 	.ring_test = &r600_ring_test,
 	.ib_test = &r600_ib_test,
diff --git a/drivers/gpu/drm/radeon/radeon_asic.h b/drivers/gpu/drm/radeon/radeon_asic.h
index 275a5dc..6b4c757 100644
--- a/drivers/gpu/drm/radeon/radeon_asic.h
+++ b/drivers/gpu/drm/radeon/radeon_asic.h
@@ -590,6 +590,10 @@ int sumo_dpm_force_performance_level(struct radeon_device *rdev,
  */
 void cayman_fence_ring_emit(struct radeon_device *rdev,
 			    struct radeon_fence *fence);
+bool cayman_semaphore_ring_emit(struct radeon_device *rdev,
+				struct radeon_ring *cp,
+				struct radeon_semaphore *semaphore,
+				bool emit_wait);
 void cayman_pcie_gart_tlb_flush(struct radeon_device *rdev);
 int cayman_init(struct radeon_device *rdev);
 void cayman_fini(struct radeon_device *rdev);
Am 18.08.2014 um 11:10 schrieb Michel Dänzer:
On 15.08.2014 23:52, Christian König wrote:
I think I've figured out what the cause of the remaining issues is while working on the implicit sync stuff.
The issue happens when the flush is done because of a CS to the DMA ring and a CS to the GFX ring directly after that which depends on the DMA submission to be finished.
In this situation we insert semaphore command so that the GFX ring wait for the DMA ring to finish execution and normally don't flush on the GFX ring a second time (the flush should be done on the DMA ring and we waited for that to finish).
The problem here is that semaphores can't be executed on the PFP, so the PFP doesn't wait for the semaphore to be completed and happily starts fetching commands while the flush on the DMA ring isn't completed.
@Michel: can you give this branch a try and see if it now works for you: http://cgit.freedesktop.org/~deathsimple/linux/log/?h=vm-flushing
Unfortunately not; in fact, it seems to make the problem occur even faster, after just hundreds of piglit tests instead of after thousands.
However, based on your description above, I came up with the patch below, which fixes the problem for me, with or without your 'drop RADEON_FENCE_SIGNALED_SEQ' patch.
Oh, yes of course! That's indeed much simpler, and the PFP_SYNC_ME packet should be available even on R600. I'm going to take care of this and supply patches for all the hardware generations we have.
Thanks for pointing me to the right solution, Christian.