I'm afraid this won't always work and it can be a source of bugs.
Userspace doesn't have to call GEM_WAIT_IDLE before a CPU access to a VRAM buffer. For example, consider a wait-idle request with a non-zero timeout: that is implemented as a loop that calls GEM_BUSY, so GEM_WAIT_IDLE (and this flush) is never reached. Also, userspace can use fences (granted, they are backed by 1-page VRAM buffers at the moment), and in the future it may use real fences that aren't tied to a buffer object at all.
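For illustration, here is a minimal sketch of such a timed wait built purely on GEM_BUSY; it never enters GEM_WAIT_IDLE, so it never gets the MMIO flush. This is not Mesa's actual code: radeon_bo_wait_timeout and os_time_ns are made-up names, and it assumes libdrm's drmCommandWriteRead and the radeon GEM_BUSY ioctl.

/* Hypothetical userspace sketch (not Mesa code): wait for a BO with a
 * timeout by polling DRM_RADEON_GEM_BUSY.  The kernel's GEM_WAIT_IDLE
 * path -- and hence its HDP flush -- is never reached. */
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <time.h>
#include <xf86drm.h>
#include <radeon_drm.h>

static uint64_t os_time_ns(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* Returns true if the BO went idle before the timeout expired. */
static bool radeon_bo_wait_timeout(int fd, uint32_t handle, uint64_t timeout_ns)
{
	struct drm_radeon_gem_busy args = { .handle = handle };
	uint64_t deadline = os_time_ns() + timeout_ns;

	/* GEM_BUSY returns -EBUSY while the BO is still busy (other
	 * errors are ignored here for brevity). */
	while (drmCommandWriteRead(fd, DRM_RADEON_GEM_BUSY,
				   &args, sizeof(args)) == -EBUSY) {
		if (os_time_ns() >= deadline)
			return false;
	}
	return true;
}

A caller that times out may go ahead and map the buffer anyway, still without any HDP flush having happened.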
If the HDP flush isn't allowed in userspace IBs, I think we will have to expose it as an ioctl and call it explicitly from userspace.
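Something like the following, purely as a hypothetical sketch of what that interface could look like (no such ioctl exists today; struct drm_radeon_hdp_flush and radeon_hdp_flush_ioctl are made-up names):

/* Hypothetical UAPI -- not part of radeon_drm.h today. */
struct drm_radeon_hdp_flush {
	uint32_t handle;	/* BO the CPU is about to read */
	uint32_t pad;
};

/* Hypothetical kernel handler: kick the MMIO HDP flush if the ASIC has one.
 * The handle in 'data' could be used to skip the flush when the BO isn't
 * in VRAM; it is ignored in this sketch. */
static int radeon_hdp_flush_ioctl(struct drm_device *dev, void *data,
				  struct drm_file *filp)
{
	struct radeon_device *rdev = dev->dev_private;

	if (rdev->asic->mmio_hdp_flush)
		rdev->asic->mmio_hdp_flush(rdev);
	return 0;
}

Userspace would then call it right before a CPU read of a VRAM BO instead of relying on GEM_WAIT_IDLE as a side effect.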
Marek
On Fri, Aug 1, 2014 at 10:22 AM, Michel Dänzer <michel@daenzer.net> wrote:
From: Michel Dänzer <michel.daenzer@amd.com>
The HDP cache only applies to CPU access to VRAM.
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
 drivers/gpu/drm/radeon/radeon_gem.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index a350cf9..8f2cb58 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -359,15 +359,17 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
 	struct drm_gem_object *gobj;
 	struct radeon_bo *robj;
 	int r;
+	uint32_t cur_placement = 0;
 
 	gobj = drm_gem_object_lookup(dev, filp, args->handle);
 	if (gobj == NULL) {
 		return -ENOENT;
 	}
 	robj = gem_to_radeon_bo(gobj);
-	r = radeon_bo_wait(robj, NULL, false);
+	r = radeon_bo_wait(robj, &cur_placement, false);
 	/* Flush HDP cache via MMIO if necessary */
-	if (rdev->asic->mmio_hdp_flush)
+	if (rdev->asic->mmio_hdp_flush &&
+	    radeon_mem_type_to_domain(cur_placement) == RADEON_GEM_DOMAIN_VRAM)
 		robj->rdev->asic->mmio_hdp_flush(rdev);
 	drm_gem_object_unreference_unlocked(gobj);
 	r = radeon_gem_handle_lockup(rdev, r);
--
2.0.1