On Tue, Aug 06, 2019 at 09:31:55AM -0700, Rob Clark wrote:
> On Tue, Aug 6, 2019 at 7:35 AM Mark Rutland <mark.rutland@arm.com> wrote:
> > On Tue, Aug 06, 2019 at 07:11:41AM -0700, Rob Clark wrote:
> > > On Tue, Aug 6, 2019 at 1:48 AM Christoph Hellwig <hch@lst.de> wrote:
> > > > This goes in the wrong direction. drm_cflush_* are a bad API we need to get rid of, not add new uses of. The reason for that is two-fold:
> > > >
> > > > a) it doesn't address how cache maintenance actually works on most platforms. When talking about a cache we have three fundamental operations:
> > > >
> > > >  1) write back - this writes the content of the cache back to the backing memory
> > > >  2) invalidate - this removes the content of the cache
> > > >  3) write back + invalidate - do both of the above
> > > Agreed that drm_cflush_* isn't a great API. In this particular case (IIUC), I need wb+inv so that there aren't dirty cache lines that drop out to memory later, and so that I don't get a cache hit on uncached/wc mmap'ing.
> > Is there a cacheable alias lying around (e.g. the linear map), or are these addresses only mapped uncached/wc?
> >
> > If there's a cacheable alias, performing an invalidate isn't sufficient, since a CPU can allocate a new (clean) entry at any point in time (e.g. as a result of prefetching or arbitrary speculation).
> I *believe* that there are no alias mappings (that I don't control myself) for pages coming from shmem_file_setup()/shmem_read_mapping_page()..
AFAICT, that's regular anonymous memory, so there will be a cacheable alias in the linear/direct map.
> Digging around at what dma_sync_sg_* does under the hood, it looks like it is just arch_sync_dma_for_cpu/device(), so I guess that should be sufficient for what I need.
I don't think that's the case, per the example I gave above.
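
[Editor's note: a minimal, hypothetical sketch of the streaming-DMA pattern being discussed, not code from the thread. Function and variable names are illustrative; dma_map_sg(), dma_sync_sg_for_cpu() and dma_sync_sg_for_device() are the real kernel APIs. The point above is that while a cacheable alias (the linear map) exists, a one-shot flush is not enough, because clean lines can be re-allocated speculatively at any time; the streaming API repeats the architecture-appropriate maintenance at every CPU/device ownership handoff.]

```c
/* Hypothetical sketch: owning pages via the streaming DMA API
 * instead of calling drm_clflush_* directly. */
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int example_map_for_device(struct device *dev, struct sg_table *sgt)
{
	int nents;

	/* Performs the writeback (and invalidate, per direction) the
	 * architecture requires, including on the kernel's cacheable
	 * linear-map alias of these pages. */
	nents = dma_map_sg(dev, sgt->sgl, sgt->orig_nents,
			   DMA_BIDIRECTIONAL);
	if (!nents)
		return -ENOMEM;
	sgt->nents = nents;
	return 0;
}

static void example_cpu_access(struct device *dev, struct sg_table *sgt)
{
	/* Hand ownership back to the CPU before touching the buffer... */
	dma_sync_sg_for_cpu(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);

	/* ... CPU reads/writes here ... */

	/* ...and back to the device afterwards. Because the cacheable
	 * alias persists, this must happen around *every* handoff, not
	 * just once at allocation time. */
	dma_sync_sg_for_device(dev, sgt->sgl, sgt->nents,
			       DMA_BIDIRECTIONAL);
}
```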
Thanks, Mark.