Hi
On 23.07.21 09:36, Daniel Vetter wrote:
> The real fix is to get at the architecture-specific WC allocator, which is currently not exposed but hidden within the DMA API. I think having this stick out like this is better than hiding it behind fake generic code (as we do with drm_clflush, which de facto also only really works on x86).
> Also note that TTM has the exact same ifdef in its page allocator, but it does fall back to dma_alloc_coherent() on other platforms.
If this fixes a real problem and there's no full solution yet, let's take what we have. So if you can extract the essence of this comment into a TODO comment that explains how to fix the issue, feel free to add my
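Something along these lines, perhaps (just a sketch of the wording, not the final comment):

/*
 * TODO: The set_pages_array_*() helpers used here for write-combining
 * caching only exist on x86; other architectures keep their WC
 * allocators hidden behind the DMA API (e.g. dma_alloc_coherent(),
 * which TTM's page allocator uses as its fallback). The real fix is
 * to expose the architecture-specific WC allocator so this code no
 * longer needs the CONFIG_X86 ifdef.
 */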
Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
Best regards
Thomas
 	shmem->pages = pages;

 	return 0;
@@ -203,6 +212,11 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 	if (--shmem->pages_use_count > 0)
 		return;

+#ifdef CONFIG_X86
+	if (shmem->map_wc)
+		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
+#endif
+
 	drm_gem_put_pages(obj, shmem->pages,
 			  shmem->pages_mark_dirty_on_put,
 			  shmem->pages_mark_accessed_on_put);
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer