On Thu, Jul 22, 2021 at 08:40:56PM +0200, Thomas Zimmermann wrote:
Hi
On 13.07.21 at 22:51, Daniel Vetter wrote:
intel-gfx-ci realized that something is not quite coherent anymore on some platforms for our i915+vgem tests, when I tried to switch vgem over to shmem helpers.
After lots of head-scratching I realized that I've removed calls to drm_clflush. And we need those. To make this a bit cleaner, use the same page allocation tooling as ttm, which internally does clflush (and more, as needed on any platform, instead of just the Intel x86 CPUs that i915 can be combined with).
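(For context, a hedged sketch of the kind of flush the commit message refers to; the exact call sites removed in the vgem conversion are not shown in this thread. `example_flush_after_cpu_write` is a hypothetical name.)

```c
/* Sketch only: drm_clflush_pages() is the DRM helper for flushing CPU
 * caches over an array of pages, e.g. after writing through a cached
 * kernel mapping so the data is visible to later wc/uncached access.
 * On architectures other than x86 it largely cannot do this job, which
 * is the portability problem discussed later in this thread. */
#include <drm/drm_cache.h>

static void example_flush_after_cpu_write(struct page **pages,
					  unsigned long num_pages)
{
	drm_clflush_pages(pages, num_pages);
}
```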
Vgem would therefore not work correctly on non-X86 platforms?
Anything using the shmem helpers doesn't work correctly on non-x86 platforms, at least if it uses wc (write-combining) mappings.
vgem with intel-gfx-ci is simply running some very nasty tests that catch the bugs.
I'm kinda hoping that someone from the armsoc world would care enough to fix this there. But it's a tricky issue.
Unfortunately this doesn't exist on arm, or as a generic feature. For that I think only the dma-api can get at wc memory reliably, so maybe we'd need some kind of GFP_WC flag to do this properly.
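(Illustrative sketch of the portable route through the DMA API that Daniel alludes to; `example_alloc_wc`/`example_free_wc` are hypothetical wrappers, not part of the patch. Note this hands back a kernel mapping plus a dma_addr_t, not the struct page array that the shmem helpers need.)

```c
/* Sketch only: dma_alloc_wc() is the one interface that can reliably
 * return write-combined memory across architectures, because the
 * architecture-specific wc allocator is hidden behind the DMA API. */
#include <linux/dma-mapping.h>

static void *example_alloc_wc(struct device *dev, size_t size,
			      dma_addr_t *dma_addr)
{
	return dma_alloc_wc(dev, size, dma_addr, GFP_KERNEL);
}

static void example_free_wc(struct device *dev, size_t size,
			    void *vaddr, dma_addr_t dma_addr)
{
	dma_free_wc(dev, size, vaddr, dma_addr);
}
```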
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
 drivers/gpu/drm/drm_gem_shmem_helper.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 296ab1b7c07f..657d2490aaa5 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -10,6 +10,10 @@
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
 
+#ifdef CONFIG_X86
+#include <asm/set_memory.h>
+#endif
+
 #include <drm/drm.h>
 #include <drm/drm_device.h>
 #include <drm/drm_drv.h>
@@ -162,6 +166,11 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
 		return PTR_ERR(pages);
 	}
 
+#ifdef CONFIG_X86
+	if (shmem->map_wc)
+		set_pages_array_wc(pages, obj->size >> PAGE_SHIFT);
+#endif
+
I cannot comment much on the technical details of caching on the various architectures. If this patch goes in, there should be a longer comment that reflects the discussion in this thread. It's apparently a workaround.
I think the call itself should be hidden behind a DRM API, which depends on CONFIG_X86. Something simple like
#ifdef CONFIG_X86
drm_set_pages_array_wc()
{
	set_pages_array_wc();
}
#else
drm_set_pages_array_wc()
{
}
#endif
Maybe in drm_cache.h?
We do have a bunch of this in drm_cache.h already, and architecture maintainers hate us for it.
The real fix is to get at the architecture-specific wc allocator, which is currently not exposed but hidden within the dma api. I think having this stick out like this is better than hiding it behind fake generic code (like we do with drm_clflush, which de facto also only really works on x86).
Also note that ttm has the exact same #ifdef in its page allocator, but it does fall back to using dma_alloc_coherent on other platforms.
-Daniel
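(Paraphrased sketch, not a verbatim ttm excerpt, of the pattern Daniel describes: rewrite page attributes on x86, otherwise fall back to the DMA API. `example_get_wc_memory` is a hypothetical helper.)

```c
#include <linux/dma-mapping.h>
#ifdef CONFIG_X86
#include <asm/set_memory.h>
#endif

static void *example_get_wc_memory(struct device *dev, struct page **pages,
				   unsigned int num_pages, dma_addr_t *dma)
{
#ifdef CONFIG_X86
	/* Flushes caches and marks the existing pages write-combined;
	 * the caller keeps using its own page array. */
	set_pages_array_wc(pages, num_pages);
	return NULL;
#else
	/* No generic way to change caching attributes of arbitrary
	 * pages; allocate wc memory through the DMA API instead. */
	return dma_alloc_wc(dev, (size_t)num_pages << PAGE_SHIFT, dma,
			    GFP_KERNEL);
#endif
}
```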
Best regards
Thomas
 	shmem->pages = pages;
 
 	return 0;
@@ -203,6 +212,11 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 	if (--shmem->pages_use_count > 0)
 		return;
 
+#ifdef CONFIG_X86
+	if (shmem->map_wc)
+		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
+#endif
+
 	drm_gem_put_pages(obj, shmem->pages,
 			  shmem->pages_mark_dirty_on_put,
 			  shmem->pages_mark_accessed_on_put);
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer