On Mon, Sep 26, 2016 at 9:32 PM, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
Some subdevices (e.g., fb/nv50.c and fb/gf100.c) map a scratch page using dma_map_page() well before the TTM layer has had a chance to set the DMA mask. This may prevent the driver from loading at all on platforms whose system memory is not covered by the default 32-bit DMA mask (i.e., when all RAM is above 4 GB).
So set a preliminary DMA mask right after constructing the PCI device, and base it on the .dma_bits member of the MMU subdevice, which is what the TTM layer will base the DMA mask on as well.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 drivers/gpu/drm/nouveau/nouveau_drm.c | 11 +++++++++++
 1 file changed, 11 insertions(+)
diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
index 652ab111dd74..e61e9a0adb51 100644
--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
@@ -361,6 +361,17 @@ static int nouveau_drm_probe(struct pci_dev *pdev,
 	pci_set_master(pdev);
 
+	/*
+	 * Set a preliminary DMA mask based on the .dma_bits member of the
+	 * MMU subdevice. This allows other subdevices to create DMA mappings
+	 * in their init() functions, which are called before the TTM layer
+	 * sets the DMA mask definitively.
+	 * This is necessary for platforms where the default 32-bit DMA mask
+	 * does not cover any system memory, i.e., when all RAM is above 4 GB.
+	 */
+	dma_set_mask_and_coherent(device->dev,
+				  DMA_BIT_MASK(device->mmu->dma_bits));
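For anyone not familiar with the failure mode: the early scratch-page mappings the commit message refers to look roughly like the sketch below. The struct and function names here are illustrative placeholders, not the actual fb/nv50.c or fb/gf100.c code; the point is only that dma_map_page() runs from a subdevice init path, so if the default 32-bit mask is still in place and all RAM lives above 4 GB, the mapping fails and the probe aborts.

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>

/* illustrative stand-in for the real fb subdevice state */
struct example_fb {
	struct page *scratch_page;
	dma_addr_t scratch_addr;
};

/* called from subdevice init, i.e., before TTM sets the final DMA mask */
static int example_fb_init(struct device *dev, struct example_fb *fb)
{
	fb->scratch_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
	if (!fb->scratch_page)
		return -ENOMEM;

	fb->scratch_addr = dma_map_page(dev, fb->scratch_page, 0, PAGE_SIZE,
					DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, fb->scratch_addr)) {
		/* with a 32-bit mask and all RAM above 4 GB, we end up here */
		__free_page(fb->scratch_page);
		return -EFAULT;
	}

	return 0;
}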
I would just move this to nvkm_device_pci_new(), so that it mirrors the equivalent call in nvkm_device_tegra_new(), which was added for the same purpose. Otherwise, this looks good to me.
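To spell that out, what I have in mind is roughly the following at the end of nvkm_device_pci_new() in nvkm/engine/device/pci.c, once nvkm_device_ctor() has succeeded and the MMU subdevice (and thus its dma_bits) is available. This is an untested sketch, not a tested change: "pci_dev" stands for the struct pci_dev passed into nvkm_device_pci_new() and "device" for the freshly constructed nvkm_device, and the exact variable names there may differ.

	/* after nvkm_device_ctor() has populated device->mmu */
	dma_set_mask_and_coherent(&pci_dev->dev,
				  DMA_BIT_MASK(device->mmu->dma_bits));

That would keep the PCI path symmetric with the dma_set_mask_and_coherent() call nvkm_device_tegra_new() already makes for the same reason.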