On Fri, Oct 16, 2020 at 12:04 PM John Stultz <john.stultz@linaro.org> wrote:
> On Thu, Oct 8, 2020 at 4:51 AM Brian Starkey <brian.starkey@arm.com> wrote:
> > On Sat, Oct 03, 2020 at 04:02:57AM +0000, John Stultz wrote:
> > > @@ -393,6 +424,16 @@ static int system_heap_allocate(struct dma_heap *heap,
> > >  		/* just return, as put will call release and that will free */
> > >  		return ret;
> > >  	}
> > > +
> > > +	/*
> > > +	 * For uncached buffers, we need to initially flush cpu cache, since
> > > +	 * the __GFP_ZERO on the allocation means the zeroing was done by the
> > > +	 * cpu and thus it is likely cached. Map (and implicitly flush) it out
> > > +	 * now so we don't get corruption later on.
> > > +	 */
> > > +	if (buffer->uncached)
> > > +		dma_map_sgtable(dma_heap_get_dev(heap), table, DMA_BIDIRECTIONAL, 0);
> > Do we have to keep this mapping around for the entire lifetime of the buffer?
> Yea, I guess we can just map and unmap it right there. It will look a
> little absurd, but that sort of aligns with your next point.
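Concretely, that could be as small as the following untested sketch,
reusing the existing heap/table variables from the allocate path: the
dma_map_sgtable() call is made purely for its flush side effect and the
mapping is torn down immediately, so nothing lingers for the buffer's
lifetime.

	/*
	 * Untested sketch: map only to force the zeroed pages out of
	 * the CPU cache, then unmap right away so the device mapping
	 * doesn't stay around for the life of the buffer.
	 */
	if (buffer->uncached) {
		struct device *dev = dma_heap_get_dev(heap);

		if (!dma_map_sgtable(dev, table, DMA_BIDIRECTIONAL, 0))
			dma_unmap_sgtable(dev, table, DMA_BIDIRECTIONAL, 0);
	}

It does look a bit odd, but at least the intent (flush, not map) is
explicit.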
> > Also, this problem (and solution) keeps lingering around. It really
> > feels like there should be a better way to solve "clean the linear
> > mapping all the way to DRAM", but I don't know what that should be.
> Yea, something better here would be nice...
In ION, we had a little helper function named ion_buffer_prep_noncached()
that called arch_dma_prep_coherent() on all the sg entries, like so:

	for_each_sg(table->sgl, sg, table->orig_nents, i)
		arch_dma_prep_coherent(sg_page(sg), sg->length);
Would that help?
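Transplanted into the system heap, it might look something like the
untested sketch below (the function name and the buffer/sg_table field
names are guesses against your patch, so adjust to taste). It writes the
zeroed pages back to DRAM without ever creating a device mapping:

	/*
	 * Hypothetical helper, modeled on the ION approach above:
	 * write back (and invalidate) the CPU cache for every page in
	 * the buffer so the __GFP_ZERO contents reach DRAM before the
	 * buffer is handed out as uncached.
	 */
	static void system_heap_prep_noncached(struct system_heap_buffer *buffer)
	{
		struct sg_table *table = &buffer->sg_table;
		struct scatterlist *sg;
		int i;

		for_each_sg(table->sgl, sg, table->orig_nents, i)
			arch_dma_prep_coherent(sg_page(sg), sg->length);
	}

One caveat: arch_dma_prep_coherent() operates on the linear mapping
directly, so it sidesteps whatever dma_map_sgtable() would otherwise do
(IOMMU setup, swiotlb, etc.) - that's the point here, but it's worth
double-checking it's valid for all the configs the heap supports.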