This patch fixes the problem of mapping gem objects that are backed by non-contiguous dma buffers. These buffers are described by an SG table made up of SG lists, where each valid SG list points to a single page or a group of physically contiguous pages.

The current implementation maps only the first page of each SG list and leaves the remaining pages unmapped, leading to a crash.

The fix finds the page structs for all the pages covered by the faulting SG list and maps them one by one, ensuring that every page of the current SG list is mapped. The next SG list of the same buffer is mapped when a page fault occurs for an offset belonging to that list.
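For illustration only (not part of the patch): the lookup added below can be sketched in plain C as a walk over per-entry page counts. The entry sizes and names here are made up; the logic mirrors the for_each_sg() walk in the diff.

#include <stdio.h>

#define PAGE_SHIFT 12

/* Hypothetical stand-ins for the byte lengths of three SG entries:
 * 4 pages, 1 page, 8 pages. */
static unsigned int sg_lengths[] = { 0x4000, 0x1000, 0x8000 };

/* Return the index of the entry covering the faulting page_offset and
 * store the page's index within that entry; return -1 if the offset is
 * past the end of the buffer (the patch returns -EINVAL there). */
static int find_entry(unsigned long page_offset, unsigned long *in_entry)
{
	unsigned int i;

	for (i = 0; i < sizeof(sg_lengths) / sizeof(sg_lengths[0]); i++) {
		unsigned long npages = sg_lengths[i] >> PAGE_SHIFT;

		if (page_offset < npages) {
			*in_entry = page_offset;
			return i;
		}
		page_offset -= npages;
	}
	return -1;
}

int main(void)
{
	unsigned long in_entry = 0;
	int entry = find_entry(5, &in_entry);

	/* Page 5 of the buffer lies past the 4-page and 1-page entries,
	 * so it lands on page 0 of entry 2. */
	printf("entry %d, page %lu within it\n", entry, in_entry);
	return 0;
}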
This patch is based on branch exynos-drm-next-iommu at git://git.kernel.org/pub/scm/linux/kernel/git/daeinki/drm-exynos.git
Signed-off-by: Rahul Sharma <rahul.sharma@samsung.com>
---
 drivers/gpu/drm/exynos/exynos_drm_gem.c |   34 +++++++++++++++++++++++++++---
 1 files changed, 30 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
index 7057729..d2d6188 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
@@ -95,17 +95,43 @@ static int exynos_drm_gem_map_buf(struct drm_gem_object *obj,
 {
 	struct exynos_drm_gem_obj *exynos_gem_obj = to_exynos_gem_obj(obj);
 	struct exynos_drm_gem_buf *buf = exynos_gem_obj->buffer;
+	struct scatterlist *sgl;
 	unsigned long pfn;
+	unsigned int phys_addr;
+	int ret, i;
 
 	if (exynos_gem_obj->flags & EXYNOS_BO_NONCONTIG) {
 		if (!buf->pages)
 			return -EINTR;
 
-		pfn = page_to_pfn(buf->pages[page_offset++]);
-	} else
+		sgl = buf->sgt->sgl;
+		for_each_sg(buf->sgt->sgl, sgl, buf->sgt->nents, i) {
+			if (page_offset < (sgl->length >> PAGE_SHIFT))
+				break;
+			page_offset -= (sgl->length >> PAGE_SHIFT);
+		}
+
+		if (i >= buf->sgt->nents) {
+			DRM_ERROR("invalid page offset\n");
+			return -EINVAL;
+		}
+
+		phys_addr = sg_phys(sgl);
+
+		for (i = 0; i < (sgl->length >> PAGE_SHIFT); i++) {
+			pfn = __phys_to_pfn(phys_addr + (i << PAGE_SHIFT));
+			ret = vm_insert_mixed(vma, f_vaddr + (i << PAGE_SHIFT),
+						pfn);
+			if (ret < 0) {
+				DRM_ERROR("failed to map page.\n");
+				return ret;
+			}
+		}
+		return 0;
+	} else {
 		pfn = (buf->dma_addr >> PAGE_SHIFT) + page_offset;
-
-	return vm_insert_mixed(vma, f_vaddr, pfn);
+		return vm_insert_mixed(vma, f_vaddr, pfn);
+	}
 }
 
 static int exynos_drm_gem_handle_create(struct drm_gem_object *obj,
It's a good patch. Right, that was the point I had missed: I thought each sgl would always carry a single 4K page. But the dma mapping api allocates physical memory as contiguously as possible, so an sgl can carry a group of pages. For example (illustrative numbers only), a 24K non-contiguous allocation may come back as a 16K run plus an 8K run, so the table has two entries covering six 4K pages.
Below are my comments.
-----Original Message-----
From: Rahul Sharma [mailto:rahul.sharma@samsung.com]
Sent: Thursday, November 01, 2012 9:35 PM
To: dri-devel@lists.freedesktop.org
Cc: sw0312.kim@samsung.com; inki.dae@samsung.com; jy0922.shim@samsung.com; kyungmin.park@samsung.com; prashanth.g@samsung.com; joshi@samsung.com; s.shirish@samsung.com; r.sh.open@gmail.com; rahul.sharma@samsung.com
Subject: [PATCH] drm: exynos: fix for mapping non contig dma buffers
> [patch description and diffstat snipped; see the original posting above]
> diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> index 7057729..d2d6188 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
> +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> @@ -95,17 +95,43 @@ static int exynos_drm_gem_map_buf(struct drm_gem_object *obj,
>  {
>  	struct exynos_drm_gem_obj *exynos_gem_obj = to_exynos_gem_obj(obj);
>  	struct exynos_drm_gem_buf *buf = exynos_gem_obj->buffer;
> +	struct scatterlist *sgl;
>  	unsigned long pfn;
> +	unsigned int phys_addr;
> +	int ret, i;
> 
>  	if (exynos_gem_obj->flags & EXYNOS_BO_NONCONTIG) {
>  		if (!buf->pages)
>  			return -EINTR;
buf->pages should be checked in all cases, not only the non-contiguous one. For this, see my comment further below.
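In code, the suggestion amounts to hoisting the check above the branch. A rough sketch of the direction (not the final patch):

	/* sketch: with iommu both paths end up relying on buf->pages,
	 * so validate it before branching on the buffer type */
	if (!buf->pages)
		return -EINTR;

	if (exynos_gem_obj->flags & EXYNOS_BO_NONCONTIG) {
		/* non-contiguous path as added by this patch */
	} else {
		/* contiguous path, see the next comment below */
	}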
> -		pfn = page_to_pfn(buf->pages[page_offset++]);
> -	} else
> +		sgl = buf->sgt->sgl;
> +		for_each_sg(buf->sgt->sgl, sgl, buf->sgt->nents, i) {
> +			if (page_offset < (sgl->length >> PAGE_SHIFT))
> +				break;
> +			page_offset -= (sgl->length >> PAGE_SHIFT);
> +		}
> +
> +		if (i >= buf->sgt->nents) {
> +			DRM_ERROR("invalid page offset\n");
> +			return -EINVAL;
> +		}
> +
> +		phys_addr = sg_phys(sgl);
> +
> +		for (i = 0; i < (sgl->length >> PAGE_SHIFT); i++) {
> +			pfn = __phys_to_pfn(phys_addr + (i << PAGE_SHIFT));
> +			ret = vm_insert_mixed(vma, f_vaddr + (i << PAGE_SHIFT),
> +						pfn);
> +			if (ret < 0) {
> +				DRM_ERROR("failed to map page.\n");
> +				return ret;
> +			}
> +		}
> +		return 0;
> +	} else {
>  		pfn = (buf->dma_addr >> PAGE_SHIFT) + page_offset;
It seems like you missed testing for the EXYNOS_BO_CONTIG type. With iommu, buf->dma_addr holds a device address, not a physical address, so a pfn cannot be derived from it. Correct it like below:

	pfn = page_to_pfn(buf->pages[0]) + page_offset;
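Applied to the else branch, the suggested direction would look roughly like this (a sketch, not a tested patch):

	} else {
		/* with iommu, buf->dma_addr is a device address, so derive
		 * the pfn from buf->pages instead; a CONTIG buffer is
		 * physically contiguous, so one base pfn plus the page
		 * offset is enough */
		pfn = page_to_pfn(buf->pages[0]) + page_offset;
		return vm_insert_mixed(vma, f_vaddr, pfn);
	}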
And one more posting? :)
Thanks,
Inki Dae
> -
> -	return vm_insert_mixed(vma, f_vaddr, pfn);
> +		return vm_insert_mixed(vma, f_vaddr, pfn);
> +	}
>  }
> 
>  static int exynos_drm_gem_handle_create(struct drm_gem_object *obj,
> --
> 1.7.0.4
Thanks Mr. Dae,

I have reworked the above patch and submitted another patch for the contiguous buffer case.

Regards,
Rahul Sharma
On Fri, Nov 2, 2012 at 10:14 AM, Inki Dae <inki.dae@samsung.com> wrote:
> [full quote of the above reply snipped]