On 6/23/2021 2:37 PM, Will Deacon wrote:
On Wed, Jun 23, 2021 at 12:39:29PM -0400, Qian Cai wrote:
On 6/18/2021 11:40 PM, Claire Chang wrote:
Propagate the swiotlb_force into io_tlb_default_mem->force_bounce and use it to determine whether to bounce the data or not. This will be useful later to allow for different pools.
Signed-off-by: Claire Chang <tientzu@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
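For context, the propagation described above amounts to a one-line assignment when the pool is initialised. A minimal sketch, assuming the pool is set up in a swiotlb_init_io_tlb_mem()-style helper (the exact init function and surrounding code in the patch may differ):

    /* Sketch only, not the actual patch: copy the global swiotlb_force
     * setting into the per-pool io_tlb_mem so callers can test the pool
     * rather than the global, which allows different pools later. */
    static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem,
                                        phys_addr_t start,
                                        unsigned long nslabs)
    {
            mem->nslabs = nslabs;
            mem->start = start;
            /* Record whether bouncing is forced for this pool. */
            mem->force_bounce = (swiotlb_force == SWIOTLB_FORCE);
            /* ... slot initialisation elided ... */
    }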
Reverting the rest of the series up to this patch fixed a boot crash with NVMe on today's linux-next.
Hmm, so that makes patch 7 the suspicious one, right?
Will, no. It is rather patch #6 (this patch). Only patches #6 through #12 were reverted to fix the issue. Also, looking at the offset of the crash:
pc : dma_direct_map_sg+0x304/0x8f0
is_swiotlb_force_bounce at /usr/src/linux-next/./include/linux/swiotlb.h:119
is_swiotlb_force_bounce() is the new function introduced in this patch:
+static inline bool is_swiotlb_force_bounce(struct device *dev)
+{
+	return dev->dma_io_tlb_mem->force_bounce;
+}
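For reference, this helper is consulted on the dma-direct map path, which is where the faulting pc above points. A rough sketch of that call site (reconstructed, not quoted from the patch); note that dev->dma_io_tlb_mem is dereferenced unconditionally in the helper, so a device whose io_tlb_mem was never set up would fault right here:

    /* Sketch of the dma-direct map path consulting the new helper. */
    static inline dma_addr_t dma_direct_map_page(struct device *dev,
                    struct page *page, unsigned long offset, size_t size,
                    enum dma_data_direction dir, unsigned long attrs)
    {
            phys_addr_t phys = page_to_phys(page) + offset;

            /* is_swiotlb_force_bounce() reads
             * dev->dma_io_tlb_mem->force_bounce with no NULL check. */
            if (is_swiotlb_force_bounce(dev))
                    return swiotlb_map(dev, phys, size, dir, attrs);

            /* ... normal direct-mapping path elided ... */
            return phys_to_dma(dev, phys);
    }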
Looking at that one more closely, it looks like swiotlb_find_slots() takes 'alloc_size + offset' as its 'alloc_size' parameter from swiotlb_tbl_map_single() and initialises 'mem->slots[i].alloc_size' based on 'alloc_size + offset'. That is a change in behaviour from the old code, which didn't include the offset there.
swiotlb_release_slots() then adds the offset back on, AFAICT, so we end up accounting for it twice and possibly unmapping more than we're supposed to?
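To make the suspected double accounting concrete, an illustrative sketch of the two paths (names mirror the discussion above; this is not the actual swiotlb code):

    /* Map path: swiotlb_tbl_map_single() passes (alloc_size + offset)
     * down to swiotlb_find_slots(), so the recorded size already
     * includes the offset once: */
    mem->slots[index].alloc_size = alloc_size + offset;

    /* Release path: swiotlb_release_slots() then derives the slot count
     * from the recorded size plus the offset, counting the offset a
     * second time and potentially freeing more slots than were used: */
    nslots = nr_slots(mem->slots[index].alloc_size + offset);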
Will