From: Rob Clark <rob@ti.com>
In the process of adding GEM support for the omapdrm driver, I noticed that I was adding code for creating/freeing mmap offsets which was virtually identical to what was already duplicated in the i915 and gma500 drivers. And the code for attach/detach_pages was quite similar as well.
Rather than duplicating the code a 3rd time, it seemed like a good idea to move it to the GEM core.
Note that I don't actually have a way to test psb or i915, but the changes seem straightforward enough.
v1: initial patches
v2: rebase + add common get/put_pages functions
Rob Clark (6):
  drm/gem: add functions for mmap offset creation
  drm/i915: use common functions for mmap offset creation
  drm/gma500: use common functions for mmap offset creation
  drm/gem: add functions to get/put pages
  drm/i915: use common functions for get/put pages
  drm/gma500: use common functions for get/put pages
 drivers/gpu/drm/drm_gem.c         |  156 +++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_gem.c   |  136 +++----------------------------
 drivers/staging/gma500/gem.c      |    2 +-
 drivers/staging/gma500/gem_glue.c |   61 +--------------
 drivers/staging/gma500/gem_glue.h |    1 -
 drivers/staging/gma500/gtt.c      |   47 +++--------
 include/drm/drmP.h                |    6 ++
 7 files changed, 188 insertions(+), 221 deletions(-)
From: Rob Clark <rob@ti.com>
Signed-off-by: Rob Clark <rob@ti.com>
---
 drivers/gpu/drm/drm_gem.c |   88 +++++++++++++++++++++++++++++++++++++++++++++
 include/drm/drmP.h        |    3 ++
 2 files changed, 91 insertions(+), 0 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 186d62e..396e60c 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -285,6 +285,94 @@ again:
 }
 EXPORT_SYMBOL(drm_gem_handle_create);
+
+/**
+ * drm_gem_free_mmap_offset - release a fake mmap offset for an object
+ * @obj: obj in question
+ *
+ * This routine frees fake offsets allocated by drm_gem_create_mmap_offset().
+ */
+void
+drm_gem_free_mmap_offset(struct drm_gem_object *obj)
+{
+        struct drm_device *dev = obj->dev;
+        struct drm_gem_mm *mm = dev->mm_private;
+        struct drm_map_list *list = &obj->map_list;
+
+        drm_ht_remove_item(&mm->offset_hash, &list->hash);
+        drm_mm_put_block(list->file_offset_node);
+        kfree(list->map);
+        list->map = NULL;
+}
+EXPORT_SYMBOL(drm_gem_free_mmap_offset);
+
+/**
+ * drm_gem_create_mmap_offset - create a fake mmap offset for an object
+ * @obj: obj in question
+ *
+ * GEM memory mapping works by handing back to userspace a fake mmap offset
+ * it can use in a subsequent mmap(2) call. The DRM core code then looks
+ * up the object based on the offset and sets up the various memory mapping
+ * structures.
+ *
+ * This routine allocates and attaches a fake offset for @obj.
+ */
+int
+drm_gem_create_mmap_offset(struct drm_gem_object *obj)
+{
+        struct drm_device *dev = obj->dev;
+        struct drm_gem_mm *mm = dev->mm_private;
+        struct drm_map_list *list;
+        struct drm_local_map *map;
+        int ret = 0;
+
+        /* Set the object up for mmap'ing */
+        list = &obj->map_list;
+        list->map = kzalloc(sizeof(struct drm_map_list), GFP_KERNEL);
+        if (!list->map)
+                return -ENOMEM;
+
+        map = list->map;
+        map->type = _DRM_GEM;
+        map->size = obj->size;
+        map->handle = obj;
+
+        /* Get a DRM GEM mmap offset allocated... */
+        list->file_offset_node = drm_mm_search_free(&mm->offset_manager,
+                        obj->size / PAGE_SIZE, 0, 0);
+
+        if (!list->file_offset_node) {
+                DRM_ERROR("failed to allocate offset for bo %d\n", obj->name);
+                ret = -ENOSPC;
+                goto out_free_list;
+        }
+
+        list->file_offset_node = drm_mm_get_block(list->file_offset_node,
+                        obj->size / PAGE_SIZE, 0);
+        if (!list->file_offset_node) {
+                ret = -ENOMEM;
+                goto out_free_list;
+        }
+
+        list->hash.key = list->file_offset_node->start;
+        ret = drm_ht_insert_item(&mm->offset_hash, &list->hash);
+        if (ret) {
+                DRM_ERROR("failed to add to map hash\n");
+                goto out_free_mm;
+        }
+
+        return 0;
+
+out_free_mm:
+        drm_mm_put_block(list->file_offset_node);
+out_free_list:
+        kfree(list->map);
+        list->map = NULL;
+
+        return ret;
+}
+EXPORT_SYMBOL(drm_gem_create_mmap_offset);
+
 /** Returns a reference to the object named by the handle. */
 struct drm_gem_object *
 drm_gem_object_lookup(struct drm_device *dev, struct drm_file *filp,
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index 9b7c2bb..43538b6 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -1624,6 +1624,9 @@ drm_gem_object_handle_unreference_unlocked(struct drm_gem_object *obj)
         drm_gem_object_unreference_unlocked(obj);
 }
+void drm_gem_free_mmap_offset(struct drm_gem_object *obj);
+int drm_gem_create_mmap_offset(struct drm_gem_object *obj);
+
 struct drm_gem_object *drm_gem_object_lookup(struct drm_device *dev,
                                              struct drm_file *filp,
                                              u32 handle);
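For illustration, a minimal sketch of how a driver would consume the new helpers in its map-gtt path (the function name my_gem_mmap_gtt and surrounding glue are hypothetical; the i915 and gma500 conversions later in the series follow this same shape):

/* hypothetical driver glue, not part of the patch */
static int my_gem_mmap_gtt(struct drm_gem_object *obj, uint64_t *offset)
{
        int ret;

        /* lazily allocate the fake offset on first use */
        if (!obj->map_list.map) {
                ret = drm_gem_create_mmap_offset(obj);
                if (ret)
                        return ret;
        }

        /* hand the offset back to userspace for mmap(2) on the drm fd */
        *offset = (uint64_t)obj->map_list.hash.key << PAGE_SHIFT;
        return 0;
}

The matching cleanup would call drm_gem_free_mmap_offset(obj) from the driver's object-free path, as the i915 patch below does.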
From: Rob Clark <rob@ti.com>
Signed-off-by: Rob Clark <rob@ti.com>
---
 drivers/gpu/drm/i915/i915_gem.c |   85 +--------------------------------------
 1 files changed, 2 insertions(+), 83 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index a546a71..ee59f31 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1265,74 +1265,6 @@ out:
 }
 /**
- * i915_gem_create_mmap_offset - create a fake mmap offset for an object
- * @obj: obj in question
- *
- * GEM memory mapping works by handing back to userspace a fake mmap offset
- * it can use in a subsequent mmap(2) call. The DRM core code then looks
- * up the object based on the offset and sets up the various memory mapping
- * structures.
- *
- * This routine allocates and attaches a fake offset for @obj.
- */
-static int
-i915_gem_create_mmap_offset(struct drm_i915_gem_object *obj)
-{
-        struct drm_device *dev = obj->base.dev;
-        struct drm_gem_mm *mm = dev->mm_private;
-        struct drm_map_list *list;
-        struct drm_local_map *map;
-        int ret = 0;
-
-        /* Set the object up for mmap'ing */
-        list = &obj->base.map_list;
-        list->map = kzalloc(sizeof(struct drm_map_list), GFP_KERNEL);
-        if (!list->map)
-                return -ENOMEM;
-
-        map = list->map;
-        map->type = _DRM_GEM;
-        map->size = obj->base.size;
-        map->handle = obj;
-
-        /* Get a DRM GEM mmap offset allocated... */
-        list->file_offset_node = drm_mm_search_free(&mm->offset_manager,
-                        obj->base.size / PAGE_SIZE,
-                        0, 0);
-        if (!list->file_offset_node) {
-                DRM_ERROR("failed to allocate offset for bo %d\n",
-                                obj->base.name);
-                ret = -ENOSPC;
-                goto out_free_list;
-        }
-
-        list->file_offset_node = drm_mm_get_block(list->file_offset_node,
-                        obj->base.size / PAGE_SIZE,
-                        0);
-        if (!list->file_offset_node) {
-                ret = -ENOMEM;
-                goto out_free_list;
-        }
-
-        list->hash.key = list->file_offset_node->start;
-        ret = drm_ht_insert_item(&mm->offset_hash, &list->hash);
-        if (ret) {
-                DRM_ERROR("failed to add to map hash\n");
-                goto out_free_mm;
-        }
-
-        return 0;
-
-out_free_mm:
-        drm_mm_put_block(list->file_offset_node);
-out_free_list:
-        kfree(list->map);
-        list->map = NULL;
-
-        return ret;
-}
-
-/**
  * i915_gem_release_mmap - remove physical page mappings
  * @obj: obj in question
  *
@@ -1360,19 +1292,6 @@ i915_gem_release_mmap(struct drm_i915_gem_object *obj)
         obj->fault_mappable = false;
 }
-static void
-i915_gem_free_mmap_offset(struct drm_i915_gem_object *obj)
-{
-        struct drm_device *dev = obj->base.dev;
-        struct drm_gem_mm *mm = dev->mm_private;
-        struct drm_map_list *list = &obj->base.map_list;
-
-        drm_ht_remove_item(&mm->offset_hash, &list->hash);
-        drm_mm_put_block(list->file_offset_node);
-        kfree(list->map);
-        list->map = NULL;
-}
-
 static uint32_t
 i915_gem_get_gtt_size(struct drm_device *dev, uint32_t size, int tiling_mode)
 {
@@ -1485,7 +1404,7 @@ i915_gem_mmap_gtt(struct drm_file *file,
         }
         if (!obj->base.map_list.map) {
-                ret = i915_gem_create_mmap_offset(obj);
+                ret = drm_gem_create_mmap_offset(&obj->base);
                 if (ret)
                         goto out;
         }
@@ -3752,7 +3671,7 @@ static void i915_gem_free_object_tail(struct drm_i915_gem_object *obj)
         trace_i915_gem_object_destroy(obj);
         if (obj->base.map_list.map)
-                i915_gem_free_mmap_offset(obj);
+                drm_gem_free_mmap_offset(&obj->base);
         drm_gem_object_release(&obj->base);
         i915_gem_info_remove_obj(dev_priv, obj->base.size);
On Mon, Sep 12, 2011 at 02:21:22PM -0500, Rob Clark wrote:
From: Rob Clark <rob@ti.com>
Signed-off-by: Rob Clark <rob@ti.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
From: Rob Clark <rob@ti.com>
Signed-off-by: Rob Clark <rob@ti.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
---
 drivers/staging/gma500/gem.c      |    2 +-
 drivers/staging/gma500/gem_glue.c |   61 +------------------------------------
 drivers/staging/gma500/gem_glue.h |    1 -
 3 files changed, 2 insertions(+), 62 deletions(-)
diff --git a/drivers/staging/gma500/gem.c b/drivers/staging/gma500/gem.c
index 65fdd6b..1ac438a 100644
--- a/drivers/staging/gma500/gem.c
+++ b/drivers/staging/gma500/gem.c
@@ -77,7 +77,7 @@ int psb_gem_dumb_map_gtt(struct drm_file *file, struct drm_device *dev,
         /* Make it mmapable */
         if (!obj->map_list.map) {
-                ret = gem_create_mmap_offset(obj);
+                ret = drm_gem_create_mmap_offset(obj);
                 if (ret)
                         goto out;
         }
diff --git a/drivers/staging/gma500/gem_glue.c b/drivers/staging/gma500/gem_glue.c
index daac121..90e5f98 100644
--- a/drivers/staging/gma500/gem_glue.c
+++ b/drivers/staging/gma500/gem_glue.c
@@ -24,66 +24,7 @@ void drm_gem_object_release_wrap(struct drm_gem_object *obj)
 {
         /* Remove the list map if one is present */
         if (obj->map_list.map) {
-                struct drm_gem_mm *mm = obj->dev->mm_private;
-                struct drm_map_list *list = &obj->map_list;
-                drm_ht_remove_item(&mm->offset_hash, &list->hash);
-                drm_mm_put_block(list->file_offset_node);
-                kfree(list->map);
-                list->map = NULL;
+                drm_gem_free_mmap_offset(obj);
         }
         drm_gem_object_release(obj);
 }
-
-/**
- * gem_create_mmap_offset - invent an mmap offset
- * @obj: our object
- *
- * Standard implementation of offset generation for mmap as is
- * duplicated in several drivers. This belongs in GEM.
- */
-int gem_create_mmap_offset(struct drm_gem_object *obj)
-{
-        struct drm_device *dev = obj->dev;
-        struct drm_gem_mm *mm = dev->mm_private;
-        struct drm_map_list *list;
-        struct drm_local_map *map;
-        int ret;
-
-        list = &obj->map_list;
-        list->map = kzalloc(sizeof(struct drm_map_list), GFP_KERNEL);
-        if (list->map == NULL)
-                return -ENOMEM;
-        map = list->map;
-        map->type = _DRM_GEM;
-        map->size = obj->size;
-        map->handle = obj;
-
-        list->file_offset_node = drm_mm_search_free(&mm->offset_manager,
-                        obj->size / PAGE_SIZE, 0, 0);
-        if (!list->file_offset_node) {
-                dev_err(dev->dev, "failed to allocate offset for bo %d\n",
-                                obj->name);
-                ret = -ENOSPC;
-                goto free_it;
-        }
-        list->file_offset_node = drm_mm_get_block(list->file_offset_node,
-                        obj->size / PAGE_SIZE, 0);
-        if (!list->file_offset_node) {
-                ret = -ENOMEM;
-                goto free_it;
-        }
-        list->hash.key = list->file_offset_node->start;
-        ret = drm_ht_insert_item(&mm->offset_hash, &list->hash);
-        if (ret) {
-                dev_err(dev->dev, "failed to add to map hash\n");
-                goto free_mm;
-        }
-        return 0;
-
-free_mm:
-        drm_mm_put_block(list->file_offset_node);
-free_it:
-        kfree(list->map);
-        list->map = NULL;
-        return ret;
-}
diff --git a/drivers/staging/gma500/gem_glue.h b/drivers/staging/gma500/gem_glue.h
index ce5ce30..891b7b6 100644
--- a/drivers/staging/gma500/gem_glue.h
+++ b/drivers/staging/gma500/gem_glue.h
@@ -1,2 +1 @@
 extern void drm_gem_object_release_wrap(struct drm_gem_object *obj);
-extern int gem_create_mmap_offset(struct drm_gem_object *obj);
From: Rob Clark <rob@ti.com>
This factors out common code from psb_gtt_attach_pages()/i915_gem_object_get_pages_gtt() and psb_gtt_detach_pages()/i915_gem_object_put_pages_gtt().
Signed-off-by: Rob Clark <rob@ti.com>
---
 drivers/gpu/drm/drm_gem.c |   68 +++++++++++++++++++++++++++++++++++++++++++++
 include/drm/drmP.h        |    3 ++
 2 files changed, 71 insertions(+), 0 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 396e60c..05113c3 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -285,6 +285,74 @@ again:
 }
 EXPORT_SYMBOL(drm_gem_handle_create);
+/**
+ * drm_gem_get_pages - helper to allocate backing pages for a GEM object
+ * @obj: obj in question
+ * @gfpmask: gfp mask of requested pages
+ */
+struct page ** drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
+{
+        struct inode *inode;
+        struct address_space *mapping;
+        struct page *p, **pages;
+        int i, npages;
+
+        /* This is the shared memory object that backs the GEM resource */
+        inode = obj->filp->f_path.dentry->d_inode;
+        mapping = inode->i_mapping;
+
+        npages = obj->size >> PAGE_SHIFT;
+
+        pages = drm_malloc_ab(npages, sizeof(struct page *));
+        if (pages == NULL)
+                return ERR_PTR(-ENOMEM);
+
+        gfpmask |= mapping_gfp_mask(mapping);
+
+        for (i = 0; i < npages; i++) {
+                p = shmem_read_mapping_page_gfp(mapping, i, gfpmask);
+                if (IS_ERR(p))
+                        goto fail;
+                pages[i] = p;
+        }
+
+        return pages;
+
+fail:
+        while (i--) {
+                page_cache_release(pages[i]);
+        }
+        drm_free_large(pages);
+        return ERR_PTR(PTR_ERR(p));
+}
+EXPORT_SYMBOL(drm_gem_get_pages);
+
+/**
+ * drm_gem_put_pages - helper to free backing pages for a GEM object
+ * @obj: obj in question
+ * @pages: pages to free
+ */
+void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
+                bool dirty, bool accessed)
+{
+        int i, npages;
+
+        npages = obj->size >> PAGE_SHIFT;
+
+        for (i = 0; i < npages; i++) {
+                if (dirty)
+                        set_page_dirty(pages[i]);
+
+                if (accessed)
+                        mark_page_accessed(pages[i]);
+
+                /* Undo the reference we took when populating the table */
+                page_cache_release(pages[i]);
+        }
+
+        drm_free_large(pages);
+}
+EXPORT_SYMBOL(drm_gem_put_pages);
+
 /**
  * drm_gem_free_mmap_offset - release a fake mmap offset for an object
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index 43538b6..a62d8fe 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -1624,6 +1624,9 @@ drm_gem_object_handle_unreference_unlocked(struct drm_gem_object *obj)
         drm_gem_object_unreference_unlocked(obj);
 }
+struct page ** drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask);
+void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
+                bool dirty, bool accessed);
 void drm_gem_free_mmap_offset(struct drm_gem_object *obj);
 int drm_gem_create_mmap_offset(struct drm_gem_object *obj);
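As a rough usage sketch (hypothetical driver code, not part of the patch): a driver pins the backing store with drm_gem_get_pages() and drops it again with drm_gem_put_pages(); the dirty/accessed flags replace the per-driver set_page_dirty()/mark_page_accessed() loops that the later patches delete:

        struct page **pages;

        pages = drm_gem_get_pages(obj, GFP_KERNEL);     /* or | __GFP_DMA32 */
        if (IS_ERR(pages))
                return PTR_ERR(pages);

        /* ... program the pages into the GPU MMU / GART and use them ... */

        /* write contents back on swap-out, skip the LRU touch */
        drm_gem_put_pages(obj, pages, true, false);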
On Mon, Sep 12, 2011 at 2:21 PM, Rob Clark <rob.clark@linaro.org> wrote:
From: Rob Clark <rob@ti.com>
This factors out common code from psb_gtt_attach_pages()/i915_gem_object_get_pages_gtt() and psb_gtt_detach_pages()/i915_gem_object_put_pages_gtt().
Signed-off-by: Rob Clark <rob@ti.com>
 drivers/gpu/drm/drm_gem.c |   68 +++++++++++++++++++++++++++++++++++++++++++++
 include/drm/drmP.h        |    3 ++
 2 files changed, 71 insertions(+), 0 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 396e60c..05113c3 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -285,6 +285,74 @@ again:
 }
 EXPORT_SYMBOL(drm_gem_handle_create);
+/**
+ * drm_gem_get_pages - helper to allocate backing pages for a GEM object
+ * @obj: obj in question
+ * @gfpmask: gfp mask of requested pages
+ */
+struct page ** drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
+{
+        struct inode *inode;
+        struct address_space *mapping;
+        struct page *p, **pages;
+        int i, npages;
+
+        /* This is the shared memory object that backs the GEM resource */
+        inode = obj->filp->f_path.dentry->d_inode;
+        mapping = inode->i_mapping;
+
+        npages = obj->size >> PAGE_SHIFT;
+
+        pages = drm_malloc_ab(npages, sizeof(struct page *));
+        if (pages == NULL)
+                return ERR_PTR(-ENOMEM);
+
+        gfpmask |= mapping_gfp_mask(mapping);
+
+        for (i = 0; i < npages; i++) {
+                p = shmem_read_mapping_page_gfp(mapping, i, gfpmask);
note: I'll send an updated version of this patch w/ a
BUG_ON((gfpmask & __GFP_DMA32) && (page_to_pfn(p) >= 0x00100000UL));
or something roughly like this, to catch cases where shmem_read_mapping_page_gfp() doesn't actually give us a page in the low 4GB..
It is only a theoretical issue currently, as (AFAIK) no devices w/ 4GB restriction currently have enough memory to hit this problem. But it would be good to have some error checking in case shmem_read_mapping_page_gfp() isn't fixed by the time we have devices that would have this problem.
BR, -R
+                if (IS_ERR(p))
+                        goto fail;
+                pages[i] = p;
+        }
+
+        return pages;
+
+fail:
+        while (i--) {
+                page_cache_release(pages[i]);
+        }
+        drm_free_large(pages);
+        return ERR_PTR(PTR_ERR(p));
+}
+EXPORT_SYMBOL(drm_gem_get_pages);
+
+/**
+ * drm_gem_put_pages - helper to free backing pages for a GEM object
+ * @obj: obj in question
+ * @pages: pages to free
+ */
+void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
+                bool dirty, bool accessed)
+{
+        int i, npages;
+
+        npages = obj->size >> PAGE_SHIFT;
+
+        for (i = 0; i < npages; i++) {
+                if (dirty)
+                        set_page_dirty(pages[i]);
+
+                if (accessed)
+                        mark_page_accessed(pages[i]);
+
+                /* Undo the reference we took when populating the table */
+                page_cache_release(pages[i]);
+        }
+
+        drm_free_large(pages);
+}
+EXPORT_SYMBOL(drm_gem_put_pages);
+
 /**
  * drm_gem_free_mmap_offset - release a fake mmap offset for an object
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index 43538b6..a62d8fe 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -1624,6 +1624,9 @@ drm_gem_object_handle_unreference_unlocked(struct drm_gem_object *obj)
         drm_gem_object_unreference_unlocked(obj);
 }
+struct page ** drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask);
+void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
+                bool dirty, bool accessed);
 void drm_gem_free_mmap_offset(struct drm_gem_object *obj);
 int drm_gem_create_mmap_offset(struct drm_gem_object *obj);
--
1.7.5.4
From: Rob Clark <rob@ti.com>
This factors out common code from psb_gtt_attach_pages()/i915_gem_object_get_pages_gtt() and psb_gtt_detach_pages()/i915_gem_object_put_pages_gtt().
Signed-off-by: Rob Clark <rob@ti.com>
---
 drivers/gpu/drm/drm_gem.c |   87 +++++++++++++++++++++++++++++++++++++++++++++
 include/drm/drmP.h        |    3 ++
 2 files changed, 90 insertions(+), 0 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 396e60c..821ba8a 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -285,6 +285,93 @@ again:
 }
 EXPORT_SYMBOL(drm_gem_handle_create);
+/**
+ * drm_gem_get_pages - helper to allocate backing pages for a GEM object
+ * @obj: obj in question
+ * @gfpmask: gfp mask of requested pages
+ */
+struct page ** drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
+{
+        struct inode *inode;
+        struct address_space *mapping;
+        struct page *p, **pages;
+        int i, npages;
+
+        /* This is the shared memory object that backs the GEM resource */
+        inode = obj->filp->f_path.dentry->d_inode;
+        mapping = inode->i_mapping;
+
+        npages = obj->size >> PAGE_SHIFT;
+
+        pages = drm_malloc_ab(npages, sizeof(struct page *));
+        if (pages == NULL)
+                return ERR_PTR(-ENOMEM);
+
+        gfpmask |= mapping_gfp_mask(mapping);
+
+        for (i = 0; i < npages; i++) {
+                p = shmem_read_mapping_page_gfp(mapping, i, gfpmask);
+                if (IS_ERR(p))
+                        goto fail;
+                pages[i] = p;
+
+                /* There is a hypothetical issue w/ drivers that require
+                 * buffer memory in the low 4GB.. if the pages are un-
+                 * pinned, and swapped out, they can end up swapped back
+                 * in above 4GB. If pages are already in memory, then
+                 * shmem_read_mapping_page_gfp will ignore the gfpmask,
+                 * even if the already in-memory page disobeys the mask.
+                 *
+                 * It is only a theoretical issue today, because none of
+                 * the devices with this limitation can be populated with
+                 * enough memory to trigger the issue. But this BUG_ON()
+                 * is here as a reminder in case the problem with
+                 * shmem_read_mapping_page_gfp() isn't solved by the time
+                 * it does become a real issue.
+                 *
+                 * See this thread: http://lkml.org/lkml/2011/7/11/238
+                 */
+                BUG_ON((gfpmask & __GFP_DMA32) &&
+                                (page_to_pfn(p) >= 0x00100000UL));
+        }
+
+        return pages;
+
+fail:
+        while (i--) {
+                page_cache_release(pages[i]);
+        }
+        drm_free_large(pages);
+        return ERR_PTR(PTR_ERR(p));
+}
+EXPORT_SYMBOL(drm_gem_get_pages);
+
+/**
+ * drm_gem_put_pages - helper to free backing pages for a GEM object
+ * @obj: obj in question
+ * @pages: pages to free
+ */
+void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
+                bool dirty, bool accessed)
+{
+        int i, npages;
+
+        npages = obj->size >> PAGE_SHIFT;
+
+        for (i = 0; i < npages; i++) {
+                if (dirty)
+                        set_page_dirty(pages[i]);
+
+                if (accessed)
+                        mark_page_accessed(pages[i]);
+
+                /* Undo the reference we took when populating the table */
+                page_cache_release(pages[i]);
+        }
+
+        drm_free_large(pages);
+}
+EXPORT_SYMBOL(drm_gem_put_pages);
+
 /**
  * drm_gem_free_mmap_offset - release a fake mmap offset for an object
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index 43538b6..a62d8fe 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -1624,6 +1624,9 @@ drm_gem_object_handle_unreference_unlocked(struct drm_gem_object *obj)
         drm_gem_object_unreference_unlocked(obj);
 }
+struct page ** drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask);
+void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
+                bool dirty, bool accessed);
 void drm_gem_free_mmap_offset(struct drm_gem_object *obj);
 int drm_gem_create_mmap_offset(struct drm_gem_object *obj);
On Thu, Sep 15, 2011 at 5:47 PM, Rob Clark <rob.clark@linaro.org> wrote:
+/**
+ * drm_gem_get_pages - helper to allocate backing pages for a GEM object
+ * @obj: obj in question
+ * @gfpmask: gfp mask of requested pages
+ */
+struct page ** drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
+{
Hmm, in working through tiled buffer support over the weekend, I think I've hit a case where I want to decouple the physical size (in terms of pages) from virtual size.. which means I don't want to rely on the same obj->size value for mmap offset creation as for determining # of pages to allocate.
Since the patch for drm_gem_{get,put}_pages() doesn't seem to be on drm-core-next yet, I think the more straightforward thing to do is add a size (or numpages) arg to the get/put_pages functions and resubmit this patch..
BR, -R
+        struct inode *inode;
+        struct address_space *mapping;
+        struct page *p, **pages;
+        int i, npages;
+
+        /* This is the shared memory object that backs the GEM resource */
+        inode = obj->filp->f_path.dentry->d_inode;
+        mapping = inode->i_mapping;
+
+        npages = obj->size >> PAGE_SHIFT;
+
+        pages = drm_malloc_ab(npages, sizeof(struct page *));
+        if (pages == NULL)
+                return ERR_PTR(-ENOMEM);
+
+        gfpmask |= mapping_gfp_mask(mapping);
+
+        for (i = 0; i < npages; i++) {
+                p = shmem_read_mapping_page_gfp(mapping, i, gfpmask);
+                if (IS_ERR(p))
+                        goto fail;
+                pages[i] = p;
+
+                /* There is a hypothetical issue w/ drivers that require
+                 * buffer memory in the low 4GB.. if the pages are un-
+                 * pinned, and swapped out, they can end up swapped back
+                 * in above 4GB. If pages are already in memory, then
+                 * shmem_read_mapping_page_gfp will ignore the gfpmask,
+                 * even if the already in-memory page disobeys the mask.
+                 *
+                 * It is only a theoretical issue today, because none of
+                 * the devices with this limitation can be populated with
+                 * enough memory to trigger the issue. But this BUG_ON()
+                 * is here as a reminder in case the problem with
+                 * shmem_read_mapping_page_gfp() isn't solved by the time
+                 * it does become a real issue.
+                 *
+                 * See this thread: http://lkml.org/lkml/2011/7/11/238
+                 */
+                BUG_ON((gfpmask & __GFP_DMA32) &&
+                                (page_to_pfn(p) >= 0x00100000UL));
+        }
+
+        return pages;
+
+fail:
+        while (i--) {
+                page_cache_release(pages[i]);
+        }
+        drm_free_large(pages);
+        return ERR_PTR(PTR_ERR(p));
+}
+EXPORT_SYMBOL(drm_gem_get_pages);
+
+/**
+ * drm_gem_put_pages - helper to free backing pages for a GEM object
+ * @obj: obj in question
+ * @pages: pages to free
+ */
+void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
+                bool dirty, bool accessed)
+{
+        int i, npages;
+
+        npages = obj->size >> PAGE_SHIFT;
+
+        for (i = 0; i < npages; i++) {
+                if (dirty)
+                        set_page_dirty(pages[i]);
+
+                if (accessed)
+                        mark_page_accessed(pages[i]);
+
+                /* Undo the reference we took when populating the table */
+                page_cache_release(pages[i]);
+        }
+
+        drm_free_large(pages);
+}
+EXPORT_SYMBOL(drm_gem_put_pages);
+
 /**
  * drm_gem_free_mmap_offset - release a fake mmap offset for an object
On Mon, Sep 26, 2011 at 01:18:40PM -0500, Rob Clark wrote:
On Thu, Sep 15, 2011 at 5:47 PM, Rob Clark <rob.clark@linaro.org> wrote:
+/**
+ * drm_gem_get_pages - helper to allocate backing pages for a GEM object
+ * @obj: obj in question
+ * @gfpmask: gfp mask of requested pages
+ */
+struct page ** drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
+{
Hmm, in working through tiled buffer support over the weekend, I think I've hit a case where I want to decouple the physical size (in terms of pages) from virtual size.. which means I don't want to rely on the same obj->size value for mmap offset creation as for determining # of pages to allocate.
Since the patch for drm_gem_{get,put}_pages() doesn't seem to be on drm-core-next yet, I think the more straightforward thing to do is add a size (or numpages) arg to the get/put_pages functions and resubmit this patch..
I think making obj->size not be the size of the backing storage is the wrong thing. In i915 we have a similar issue with tiled regions on gen2/3: the minimal sizes for these are pretty large, so objects are smaller than the tiled region. So backing storage and maps are both obj->size large, but the area the object occupies in the gtt may be much larger. To put some backing storage behind the not-used region (which sounds like what you're trying to do here) we simply use one global dummy page. Userspace should never use this, so that's fine.
Or is this a case of insane arm hw, and you can't actually put the same physical page at different gtt locations?
-Daniel
On Mon, Sep 26, 2011 at 2:43 PM, Daniel Vetter <daniel@ffwll.ch> wrote:
On Mon, Sep 26, 2011 at 01:18:40PM -0500, Rob Clark wrote:
On Thu, Sep 15, 2011 at 5:47 PM, Rob Clark <rob.clark@linaro.org> wrote:
+/**
+ * drm_gem_get_pages - helper to allocate backing pages for a GEM object
+ * @obj: obj in question
+ * @gfpmask: gfp mask of requested pages
+ */
+struct page ** drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
+{
Hmm, in working through tiled buffer support over the weekend, I think I've hit a case where I want to decouple the physical size (in terms of pages) from virtual size.. which means I don't want to rely on the same obj->size value for mmap offset creation as for determining # of pages to allocate.
Since the patch for drm_gem_{get,put}_pages() doesn't seem to be on drm-core-next yet, I think the more straightforward thing to do is add a size (or numpages) arg to the get/put_pages functions and resubmit this patch..
I think making obj->size not be the size of the backing storage is the wrong thing. In i915 we have a similar issue with tiled regions on gen2/3: the minimal sizes for these are pretty large, so objects are smaller than the tiled region. So backing storage and maps are both obj->size large, but the area the object occupies in the gtt may be much larger. To put some backing storage behind the not-used region (which sounds like what you're trying to do here) we simply use one global dummy page. Userspace should never use this, so that's fine.
Or is this a case of insane arm hw, and you can't actually put the same physical page at different gtt locations?
Well, the "GART" in our case is a bit restrictive.. we can have the same page in multiple locations, but not arbitrary locations. Things need to go to slot boundaries. So it isn't really possible to get row n+1 to appear directly after row n. To handle this we just round the buffer stride up to the next 4KB boundary and insert some dummy page in the remaining slots to pad out to 4KB.
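To make that concrete, a loose sketch of the padding scheme described above (all names like gart_map_slot, slots_per_row and dummy_page are hypothetical, assuming each GART slot maps one PAGE_SIZE page and each row of the buffer must start on a slot boundary):

        unsigned int data = DIV_ROUND_UP(stride, PAGE_SIZE); /* pages of pixels per row */
        unsigned int r, s;

        for (r = 0; r < nrows; r++)
                for (s = 0; s < slots_per_row; s++)
                        /* real backing page for pixel data, one shared
                         * dummy page for the pad out to the slot boundary */
                        gart_map_slot(gart, r * slots_per_row + s,
                                        s < data ? pages[r * data + s]
                                                 : dummy_page);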
The only other way I could think of to handle that would be to have a separate vsize and psize in 'struct drm_gem_object'..
BR, -R
-Daniel
On Mon, Sep 26, 2011 at 21:56, Rob Clark <rob.clark@linaro.org> wrote:
On Mon, Sep 26, 2011 at 2:43 PM, Daniel Vetter <daniel@ffwll.ch> wrote:
On Mon, Sep 26, 2011 at 01:18:40PM -0500, Rob Clark wrote:
On Thu, Sep 15, 2011 at 5:47 PM, Rob Clark <rob.clark@linaro.org> wrote:
+/**
+ * drm_gem_get_pages - helper to allocate backing pages for a GEM object
+ * @obj: obj in question
+ * @gfpmask: gfp mask of requested pages
+ */
+struct page ** drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
+{
Hmm, in working through tiled buffer support over the weekend, I think I've hit a case where I want to decouple the physical size (in terms of pages) from virtual size.. which means I don't want to rely on the same obj->size value for mmap offset creation as for determining # of pages to allocate.
Since the patch for drm_gem_{get,put}_pages() doesn't seem to be on drm-core-next yet, I think the more straightforward thing to do is add a size (or numpages) arg to the get/put_pages functions and resubmit this patch..
I think making obj->size not be the size of the backing storage is the wrong thing. In i915 we have a similar issue with tiled regions on gen2/3: the minimal sizes for these are pretty large, so objects are smaller than the tiled region. So backing storage and maps are both obj->size large, but the area the object occupies in the gtt may be much larger. To put some backing storage behind the not-used region (which sounds like what you're trying to do here) we simply use one global dummy page. Userspace should never use this, so that's fine.
Or is this a case of insane arm hw, and you can't actually put the same physical page at different gtt locations?
Well, the "GART" in our case is a bit restrictive.. we can have the same page in multiple locations, but not arbitrary locations. Things need to go to slot boundaries. So it isn't really possible to get row n+1 to appear directly after row n. To handle this we just round the buffer stride up to the next 4KB boundary and insert some dummy page in the remaining slots to pad out to 4KB.
The only other way I could think of to handle that would be to have a separate vsize and psize in 'struct drm_gem_object'..
Well I think for this case the solution is simple: Tiling not allowed if userspace is too dumb to properly round the buffer up so it fulfills whatever odd requirement the hw has. I think hiding the fact that certain buffers need more backing storage than a naive userspace might assume is ripe for ugly problems down the road. -Daniel
On Mon, Sep 26, 2011 at 3:22 PM, Daniel Vetter <daniel@ffwll.ch> wrote:
On Mon, Sep 26, 2011 at 21:56, Rob Clark <rob.clark@linaro.org> wrote:
On Mon, Sep 26, 2011 at 2:43 PM, Daniel Vetter <daniel@ffwll.ch> wrote:
On Mon, Sep 26, 2011 at 01:18:40PM -0500, Rob Clark wrote:
On Thu, Sep 15, 2011 at 5:47 PM, Rob Clark <rob.clark@linaro.org> wrote:
+/**
+ * drm_gem_get_pages - helper to allocate backing pages for a GEM object
+ * @obj: obj in question
+ * @gfpmask: gfp mask of requested pages
+ */
+struct page ** drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
+{
Hmm, in working through tiled buffer support over the weekend, I think I've hit a case where I want to decouple the physical size (in terms of pages) from virtual size.. which means I don't want to rely on the same obj->size value for mmap offset creation as for determining # of pages to allocate.
Since the patch for drm_gem_{get,put}_pages() doesn't seem to be on drm-core-next yet, I think the more straightforward thing to do is add a size (or numpages) arg to the get/put_pages functions and resubmit this patch..
I think making obj->size not be the size of the backing storage is the wrong thing. In i915 we have a similar issue with tiled regions on gen2/3: the minimal sizes for these are pretty large, so objects are smaller than the tiled region. So backing storage and maps are both obj->size large, but the area the object occupies in the gtt may be much larger. To put some backing storage behind the not-used region (which sounds like what you're trying to do here) we simply use one global dummy page. Userspace should never use this, so that's fine.
Or is this a case of insane arm hw, and you can't actually put the same physical page at different gtt locations?
Well, the "GART" in our case is a bit restrictive.. we can have the same page in multiple locations, but not arbitrary locations. Things need to go to slot boundaries. So it isn't really possible to get row n+1 to appear directly after row n. To handle this we just round the buffer stride up to the next 4KB boundary and insert some dummy page in the remaining slots to pad out to 4KB.
The only other way I could think of to handle that would be to have a separate vsize and psize in 'struct drm_gem_object'..
Well I think for this case the solution is simple: Tiling not allowed if userspace is too dumb to properly round the buffer up so it fulfills whatever odd requirement the hw has. I think hiding the fact that certain buffers need more backing storage than a naive userspace might assume is ripe for ugly problems down the road.
I don't think this is quite the issue.. in the case of tiled buffers, the way I had set up the interface, userspace passes width/height and the kernel side calculates the sizes..
It is just that if we require the size in # of backing pages and the size of the userspace mapping to be the same, then we end up allocating a lot of extra pages: for every random tiled buffer that might end up being mapped to userspace, we have to round the size up to a stride aligned to 4KB.. whereas everything beyond the nearest slot boundary doesn't really need to have a backing page.
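To put rough (hypothetical) numbers on that: a 640x480 buffer at 16bpp has a 1280 byte stride, so the pixel data itself needs 150 pages (~600KB); if the mmap'able size has to be backed 1:1 after rounding each row up to 4KB, that becomes 480 pages (~1.9MB), i.e. more than 3x the backing pages actually required.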
BR, -R
-Daniel
Well I think for this case the solution is simple: Tiling not allowed if userspace is too dumb to properly round the buffer up so it fulfills whatever odd requirement the hw has. I think hiding the fact that certain buffers need more backing storage than a naive userspace might assume is ripe for ugly problems down the road.
That depends a lot upon the interface. One good reason for hiding it for example is that if you have hardware where a limit goes away (or worse yet appears) in some rev of the device or an erratum you don't have to issue a new X server.
For some of the other interfaces like the dumb fb api it's even more important the code doesn't know.
I don't however think the helper should know about padding because I think a driver can implement its own function which wraps the helper and then adds the padding itself ?
Alan
On Tue, Sep 27, 2011 at 4:35 AM, Alan Cox <alan@lxorguk.ukuu.org.uk> wrote:
Well I think for this case the solution is simple: Tiling not allowed if userspace is too dumb to properly round the buffer up so it fulfills whatever odd requirement the hw has. I think hiding the fact that certain buffers need more backing storage than a naive userspace might assume is ripe for ugly problems down the road.
That depends a lot upon the interface. One good reason for hiding it for example is that if you have hardware where a limit goes away (or worse yet appears) in some rev of the device or an erratum you don't have to issue a new X server.
For some of the other interfaces like the dumb fb api it's even more important the code doesn't know.
I don't however think the helper should know about padding because I think a driver can implement its own function which wraps the helper and then adds the padding itself ?
fwiw, Daniel convinced me to go a slightly different route, and keep get/put_pages() as-is, but instead go with a variant of drm_gem_create_mmap_offset() that takes a size parameter.. ie. roughly like:
int drm_gem_create_mmap_offset(struct drm_gem_object *obj)
{
        return drm_gem_create_mmap_offset_size(obj, obj->size);
}

int drm_gem_create_mmap_offset_size(struct drm_gem_object *obj, size_t size)
{
        ...
}
I'll just call drm_gem_create_mmap_offset_size() directly, normal drivers can just use drm_gem_create_mmap_offset(). That seems like it should work..
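A sketch of what the omapdrm-side caller might then look like (the padded-size math and variable names are hypothetical; only the _size variant comes from the proposal above):

        /* backing pages cover only the physical size (obj->size)... */
        pages = drm_gem_get_pages(obj, GFP_KERNEL);

        /* ...while the mmap offset spans the padded virtual size */
        vsize = PAGE_ALIGN(nrows * round_up(stride, 4096)); /* hypothetical */
        ret = drm_gem_create_mmap_offset_size(obj, vsize);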
BR, -R
Alan
From: Rob Clark <rob@ti.com>
Signed-off-by: Rob Clark <rob@ti.com>
---
 drivers/gpu/drm/i915/i915_gem.c |   51 +++++++------------------------------
 1 files changed, 10 insertions(+), 41 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index ee59f31..6b49b4e 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1450,52 +1450,29 @@ static int
 i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj,
                               gfp_t gfpmask)
 {
-        int page_count, i;
-        struct address_space *mapping;
-        struct inode *inode;
-        struct page *page;
+        struct page **pages;
-        /* Get the list of pages out of our struct file. They'll be pinned
-         * at this point until we release them.
-         */
-        page_count = obj->base.size / PAGE_SIZE;
         BUG_ON(obj->pages != NULL);
-        obj->pages = drm_malloc_ab(page_count, sizeof(struct page *));
-        if (obj->pages == NULL)
-                return -ENOMEM;
-
-        inode = obj->base.filp->f_path.dentry->d_inode;
-        mapping = inode->i_mapping;
-        gfpmask |= mapping_gfp_mask(mapping);
-        for (i = 0; i < page_count; i++) {
-                page = shmem_read_mapping_page_gfp(mapping, i, gfpmask);
-                if (IS_ERR(page))
-                        goto err_pages;
+        pages = drm_gem_get_pages(&obj->base, gfpmask);
-                obj->pages[i] = page;
+        if (IS_ERR(pages)) {
+                dev_err(obj->base.dev->dev,
+                                "could not get pages: %ld\n", PTR_ERR(pages));
+                return PTR_ERR(pages);
         }
         if (obj->tiling_mode != I915_TILING_NONE)
                 i915_gem_object_do_bit_17_swizzle(obj);
-        return 0;
-
-err_pages:
-        while (i--)
-                page_cache_release(obj->pages[i]);
+        obj->pages = pages;
-        drm_free_large(obj->pages);
-        obj->pages = NULL;
-        return PTR_ERR(page);
+        return 0;
 }
 static void
 i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj)
 {
-        int page_count = obj->base.size / PAGE_SIZE;
-        int i;
-
         BUG_ON(obj->madv == __I915_MADV_PURGED);
         if (obj->tiling_mode != I915_TILING_NONE)
@@ -1504,18 +1481,10 @@ i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj)
         if (obj->madv == I915_MADV_DONTNEED)
                 obj->dirty = 0;
-        for (i = 0; i < page_count; i++) {
-                if (obj->dirty)
-                        set_page_dirty(obj->pages[i]);
-
-                if (obj->madv == I915_MADV_WILLNEED)
-                        mark_page_accessed(obj->pages[i]);
+        drm_gem_put_pages(&obj->base, obj->pages, obj->dirty,
+                        obj->madv == I915_MADV_WILLNEED);
-                page_cache_release(obj->pages[i]);
-        }
         obj->dirty = 0;
-
-        drm_free_large(obj->pages);
         obj->pages = NULL;
 }
On Mon, Sep 12, 2011 at 02:21:25PM -0500, Rob Clark wrote:
From: Rob Clark <rob@ti.com>
Signed-off-by: Rob Clark <rob@ti.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
From: Rob Clark <rob@ti.com>
Signed-off-by: Rob Clark <rob@ti.com>
---
 drivers/staging/gma500/gtt.c |   47 ++++++++++------------------------------
 1 files changed, 12 insertions(+), 35 deletions(-)
diff --git a/drivers/staging/gma500/gtt.c b/drivers/staging/gma500/gtt.c
index 461ead2..f453321 100644
--- a/drivers/staging/gma500/gtt.c
+++ b/drivers/staging/gma500/gtt.c
@@ -140,39 +140,21 @@ static void psb_gtt_remove(struct drm_device *dev, struct gtt_range *r)
  */
 static int psb_gtt_attach_pages(struct gtt_range *gt)
 {
-        struct inode *inode;
-        struct address_space *mapping;
-        int i;
-        struct page *p;
-        int pages = gt->gem.size / PAGE_SIZE;
+        struct page **pages;
WARN_ON(gt->pages);
-        /* This is the shared memory object that backs the GEM resource */
-        inode = gt->gem.filp->f_path.dentry->d_inode;
-        mapping = inode->i_mapping;
-
-        gt->pages = kmalloc(pages * sizeof(struct page *), GFP_KERNEL);
-        if (gt->pages == NULL)
-                return -ENOMEM;
-        gt->npage = pages;
-
-        for (i = 0; i < pages; i++) {
-                /* FIXME: review flags later */
-                p = read_cache_page_gfp(mapping, i,
-                                __GFP_COLD | GFP_KERNEL);
-                if (IS_ERR(p))
-                        goto err;
-                gt->pages[i] = p;
+        /* FIXME: review flags later */
+        pages = drm_gem_get_pages(&gt->gem,
+                        __GFP_DMA32 | __GFP_COLD | GFP_KERNEL);
+        if (IS_ERR(pages)) {
+                dev_err(gt->gem.dev->dev, "could not get pages: %ld\n",
+                                PTR_ERR(pages));
+                return PTR_ERR(pages);
         }
-        return 0;
-err:
-        while (i--)
-                page_cache_release(gt->pages[i]);
-        kfree(gt->pages);
-        gt->pages = NULL;
-        return PTR_ERR(p);
+        gt->pages = pages;
+        return 0;
 }
 /**
@@ -185,13 +167,8 @@ err:
  */
 static void psb_gtt_detach_pages(struct gtt_range *gt)
 {
-        int i;
-
-        for (i = 0; i < gt->npage; i++) {
-                /* FIXME: do we need to force dirty */
-                set_page_dirty(gt->pages[i]);
-                page_cache_release(gt->pages[i]);
-        }
-        kfree(gt->pages);
+        /* FIXME: do we need to force dirty */
+        drm_gem_put_pages(&gt->gem, gt->pages, true, false);
         gt->pages = NULL;
 }
On Mon, 12 Sep 2011 14:21:26 -0500 Rob Clark <rob.clark@linaro.org> wrote:
From: Rob Clark <rob@ti.com>
Signed-off-by: Rob Clark <rob@ti.com>
Generally looks sensible but:
1. This is a staging driver, so good practise is to cc the staging maintainer and preferably the author (though I'm on dri-devel so it's ok). It needs to be co-ordinated with existing staging changes and the maintainer needs to know what is going on.
2. It needs a changelog. Your 0/6 won't be in the git tree and someone chasing regressions may only see the individual patch changelog and not be sure what it relates to. From/Signed off by alone is not helpful.
3. GMA500 used the old way of doing things because in the last mail conversation I had with Hugh the cleaned up interfaces could not guarantee the page is mapped in the low 32bits, as needed for any of the GMA500/600 series devices.
Has that changed ? I think I'd also prefer it if the methods had a BUG_ON() getting a high page when they asked for DMA32.
Alan
On Mon, Sep 12, 2011 at 3:31 PM, Alan Cox <alan@lxorguk.ukuu.org.uk> wrote:
On Mon, 12 Sep 2011 14:21:26 -0500 Rob Clark <rob.clark@linaro.org> wrote:
From: Rob Clark <rob@ti.com>
Signed-off-by: Rob Clark <rob@ti.com>
Generally looks sensible but:
1. This is a staging driver, so good practise is to cc the staging maintainer and preferably the author (though I'm on dri-devel so it's ok). It needs to be co-ordinated with existing staging changes and the maintainer needs to know what is going on.
ok, will do that in the future
2. It needs a changelog. Your 0/6 won't be in the git tree and someone chasing regressions may only see the individual patch changelog and not be sure what it relates to. From/Signed off by alone is not helpful.
ok.. in this case the changelog only applied to the first patches (initial patchset didn't have the drm_gem_get/put_pages()) but I will do this in the future as needed
3. GMA500 used the old way of doing things because in the last mail conversation I had with Hugh the cleaned up interfaces could not guarantee the page is mapped in the low 32bits, as needed for any of the GMA500/600 series devices.
Has that changed ? I think I'd also prefer it if the methods had a BUG_ON() getting a high page when they asked for DMA32.
Hmm, ok, I found this thread, which is I guess what you are referring to:
https://lkml.org/lkml/2011/7/11/269
I'm not really sure if anything has changed.. But it doesn't look like the gma500 driver ever unpins anything.. so I guess it isn't a problem (yet). (Well, either that or I am overlooking something.)
btw, I could be missing something, but it seems like as long as you remove the pages from any userspace mmap'ing before you unpin the pages, you shouldn't hit the case of a page getting swapped back in above 4GB.. if you are always in control of bringing the page back into memory, you can ensure that DMA32 is always specified. Not sure if that helps at all. But at some point in the not too distant future, I'll be in the same boat, so I am interested in a good way to handle this.
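For what it's worth, a sketch of that "remove from userspace mappings before unpinning" idea (hypothetical driver code, loosely modeled on the i915 release-mmap path; the re-populate step is the assumption being described above):

        /* tear down any CPU mappings of the fake offset, so the next
         * userspace access faults back through the driver... */
        if (obj->dev->dev_mapping)
                unmap_mapping_range(obj->dev->dev_mapping,
                                (loff_t)obj->map_list.hash.key << PAGE_SHIFT,
                                obj->size, 1);

        /* ...and when re-populating, the driver controls the gfp mask */
        pages = drm_gem_get_pages(obj, GFP_KERNEL | __GFP_DMA32);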
re: BUG_ON(): I wonder if a check would better belong in shmem_read_mapping_page_gfp() or shmem_getpage_gfp()?
BR, -R
Alan
3. GMA500 used the old way of doing things because in the last mail conversation I had with Hugh the cleaned up interfaces could not guarantee the page is mapped in the low 32bits, as needed for any of the GMA500/600 series devices.
Has that changed ? I think I'd also prefer it if the methods had a BUG_ON() getting a high page when they asked for DMA32.
Hmm, ok, I found this thread, which is I guess what you are referring to:
https://lkml.org/lkml/2011/7/11/269
I'm not really sure if anything has changed.. But it doesn't look like the gma500 driver ever unpins anything.. so I guess it isn't a problem (yet). (Well, either that or I am overlooking something.)
It unpins objects when they are removed from the GTT.
btw, I could be missing something, but it seems like as long as you remove the pages from any userspace mmap'ing before you unpin the pages, you shouldn't hit the case of a page getting swapped back in above 4GB..
I don't see how you can guarantee another task won't page it in or have it referenced (eg via shmem and GEM's FLINK feature). So it's pretty hard to do.
if you are always in control of bringing the page back into memory, you can ensure that DMA32 is always specified. Not sure if that helps at all. But at some point in the not too distant future, I'll be in the same boat, so I am interested in a good way to handle this.
Might be worth following up with Hugh if you also need it. Right now we kind of dodge it on GMA500 as you can't buy a box with that much memory attached to one, so it's theoretical for the moment.
We did briefly discuss offlist but didn't chase it down because it was theoretical for the moment. One possibility would be for the shmem/vfs code to be smart enough to invalidate and move the page.
re: BUG_ON(): I wonder if a check would better belong in shmem_read_mapping_page_gfp() or shmem_getpage_gfp()?
Agreed. The BUG_ON is sufficient for GMA500 itself - just so if we ever forget about it we bug rather than do something very nasty.
On Mon, Sep 12, 2011 at 8:21 PM, Rob Clark <rob.clark@linaro.org> wrote:
From: Rob Clark <rob@ti.com>
In the process of adding GEM support for the omapdrm driver, I noticed that I was adding code for creating/freeing mmap offsets which was virtually identical to what was already duplicated in the i915 and gma500 drivers. And the code for attach/detach_pages was quite similar as well.
Rather than duplicating the code a 3rd time, it seemed like a good idea to move it to the GEM core.
Note that I don't actually have a way to test psb or i915, but the changes seem straightforward enough.
I've merged bits of this already,
http://cgit.freedesktop.org/~airlied/linux/log/?h=drm-core-next
this is the temporary -next home. care to rebase?
Dave.