Hi all,
This patch set changes the algorithm in drm_mm.c so that it no longer needs additional allocations to track free space, and it adds an API to make embedding struct drm_mm_node possible. Benefits:
- If a struct drm_mm_node is provided, drm_mm no longer needs to do any allocations of its own. It looks like some decent surgery, but ttm should be able to drop its preallocation dance (a rough sketch of the driver-side usage follows below).
- void *priv is back, but done right ;)
- Avoids a pointer chase when lru-scanning in i915 and saves a few bytes.
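For the first point, this is roughly what driver code ends up looking like with an embedded node and the drm_mm_insert_node/drm_mm_remove_node API added later in this series. The foo_bo structure and function names are made up for illustration, and error handling is elided:

#include <drm/drm_mm.h>

struct foo_bo {
	struct drm_mm_node node;	/* embedded, no separate kmalloc */
	/* ... other driver state ... */
};

static int foo_bo_bind(struct drm_mm *mm, struct foo_bo *bo,
		       unsigned long size, unsigned alignment)
{
	/* search for a suitable hole and set up bo->node in one step */
	return drm_mm_insert_node(mm, &bo->node, size, alignment);
}

static void foo_bo_unbind(struct foo_bo *bo)
{
	/* with the node embedded, "bound" is a flag, not a NULL check */
	if (drm_mm_node_allocated(&bo->node))
		drm_mm_remove_node(&bo->node);
}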
As a proof of concept I've converted i915. Beware though: the drm/i915 patches depend on my direct-gtt patches (which are actually the reason for this series).
Tested on my i855gm, i945gme, ironlake and agp rv570.
Comments, flames, reviews highly welcome.
Please consider merging the core drm parts (and the nouveau prep patch) for -next; the i915 patches need coordination with Chris Wilson, as they're rather invasive.
Thanks, Daniel
Daniel Vetter (9):
      drm/nouveau: don't munge in drm_mm internals
      drm: mm: track free areas implicitly
      drm: mm: extract node insert helper functions
      drm: mm: add api for embedding struct drm_mm_node
      drm/i915: embed struct drm_mm_node into struct drm_i915_gem_object
      drm/i915: kill obj->gtt_offset
      drm/i915: kill gtt_list
      drm: mm: add helper to unwind scan state
      drm/i915: use drm_mm_for_each_scanned_node_reverse helper
 drivers/gpu/drm/drm_mm.c                 | 570 ++++++++++++++++-------------
 drivers/gpu/drm/i915/i915_debugfs.c      |  22 +-
 drivers/gpu/drm/i915/i915_drv.h          |  13 +-
 drivers/gpu/drm/i915/i915_gem.c          | 173 ++++-----
 drivers/gpu/drm/i915/i915_gem_debug.c    |  10 +-
 drivers/gpu/drm/i915/i915_gem_evict.c    |  37 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c      |  14 +-
 drivers/gpu/drm/i915/i915_gem_tiling.c   |   6 +-
 drivers/gpu/drm/i915/i915_irq.c          |  34 +-
 drivers/gpu/drm/i915/intel_display.c     |  26 +-
 drivers/gpu/drm/i915/intel_fb.c          |   6 +-
 drivers/gpu/drm/i915/intel_overlay.c     |  14 +-
 drivers/gpu/drm/i915/intel_ringbuffer.c  |  10 +-
 drivers/gpu/drm/nouveau/nouveau_object.c |   2 +-
 drivers/gpu/drm/nouveau/nv50_instmem.c   |   2 +-
 include/drm/drm_mm.h                     |  49 ++-
 16 files changed, 525 insertions(+), 463 deletions(-)
[PATCH 1/9] drm/nouveau: don't munge in drm_mm internals

Nouveau was checking drm_mm internals on teardown to see whether the memory manager was initialized. Hide these internals in a small inline helper function.
Cc: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/nouveau/nouveau_object.c | 2 +-
 drivers/gpu/drm/nouveau/nv50_instmem.c   | 2 +-
 include/drm/drm_mm.h                     | 5 +++++
 3 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_object.c b/drivers/gpu/drm/nouveau/nouveau_object.c index 896cf86..09646b3 100644 --- a/drivers/gpu/drm/nouveau/nouveau_object.c +++ b/drivers/gpu/drm/nouveau/nouveau_object.c @@ -779,7 +779,7 @@ nouveau_gpuobj_channel_takedown(struct nouveau_channel *chan) for (i = 0; i < dev_priv->vm_vram_pt_nr; i++) nouveau_gpuobj_ref(NULL, &chan->vm_vram_pt[i]);
- if (chan->ramin_heap.free_stack.next) + if (drm_mm_initialized(&chan->ramin_heap)) drm_mm_takedown(&chan->ramin_heap); nouveau_gpuobj_ref(NULL, &chan->ramin); } diff --git a/drivers/gpu/drm/nouveau/nv50_instmem.c b/drivers/gpu/drm/nouveau/nv50_instmem.c index a53fc97..8073cc2 100644 --- a/drivers/gpu/drm/nouveau/nv50_instmem.c +++ b/drivers/gpu/drm/nouveau/nv50_instmem.c @@ -49,7 +49,7 @@ nv50_channel_del(struct nouveau_channel **pchan)
nouveau_gpuobj_ref(NULL, &chan->ramfc); nouveau_gpuobj_ref(NULL, &chan->vm_pd); - if (chan->ramin_heap.free_stack.next) + if (drm_mm_initialized(&chan->ramin_heap)) drm_mm_takedown(&chan->ramin_heap); nouveau_gpuobj_ref(NULL, &chan->ramin); kfree(chan); diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h index e391777..0d79146 100644 --- a/include/drm/drm_mm.h +++ b/include/drm/drm_mm.h @@ -72,6 +72,11 @@ struct drm_mm { unsigned long scan_end; };
+static inline bool drm_mm_initialized(struct drm_mm *mm) +{ + return mm->free_stack.next; +} + /* * Basic range manager support (drm_mm.c) */
[PATCH 2/9] drm: mm: track free areas implicitly

The idea is to track free holes implicitly by marking the allocation immediately preceding a hole.
To avoid an ugly corner case, add a dummy head_node to struct drm_mm to track the hole that spans the complete allocation area when the memory manager is empty.
To guarantee that there's always a preceding/following node (that might be marked as hole_follows == 1), move the mm->node_list list_head to the head_node.
The main allocator and fair-lru scan code actually becomes simpler. Only the debug code slightly suffers because free areas are no longer explicit.
Also add drm_mm_for_each_node (which will be much more useful when struct drm_mm_node is embeddable).
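For illustration, the resulting bookkeeping boils down to this: every node followed by free space (including the dummy head_node) sits on mm->hole_stack, and its hole runs from the end of that node to the start of the next node on the node list. A minimal sketch of that invariant, using a made-up foo_total_free helper that mirrors what the reworked debug code computes:

#include <linux/list.h>
#include <drm/drm_mm.h>

static unsigned long foo_total_free(struct drm_mm *mm)
{
	struct drm_mm_node *entry, *next;
	unsigned long free = 0;

	/* every node with hole_follows set is on mm->hole_stack,
	 * including head_node when the area before the first
	 * allocation (or the whole range, if empty) is free */
	list_for_each_entry(entry, &mm->hole_stack, hole_stack) {
		next = list_entry(entry->node_list.next,
				  struct drm_mm_node, node_list);
		/* hole spans from the end of entry to the start of next */
		free += next->start - (entry->start + entry->size);
	}

	return free;
}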
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/drm_mm.c | 464 +++++++++++++++++++++-------------------------
 include/drm/drm_mm.h     |  21 ++-
 2 files changed, 225 insertions(+), 260 deletions(-)
diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c index c59515b..4fa33e1 100644 --- a/drivers/gpu/drm/drm_mm.c +++ b/drivers/gpu/drm/drm_mm.c @@ -64,8 +64,8 @@ static struct drm_mm_node *drm_mm_kmalloc(struct drm_mm *mm, int atomic) else { child = list_entry(mm->unused_nodes.next, - struct drm_mm_node, free_stack); - list_del(&child->free_stack); + struct drm_mm_node, node_list); + list_del(&child->node_list); --mm->num_unused; } spin_unlock(&mm->unused_lock); @@ -94,126 +94,123 @@ int drm_mm_pre_get(struct drm_mm *mm) return ret; } ++mm->num_unused; - list_add_tail(&node->free_stack, &mm->unused_nodes); + list_add_tail(&node->node_list, &mm->unused_nodes); } spin_unlock(&mm->unused_lock); return 0; } EXPORT_SYMBOL(drm_mm_pre_get);
-static int drm_mm_create_tail_node(struct drm_mm *mm, - unsigned long start, - unsigned long size, int atomic) +static inline unsigned long drm_mm_hole_node_start(struct drm_mm_node *hole_node) { - struct drm_mm_node *child; - - child = drm_mm_kmalloc(mm, atomic); - if (unlikely(child == NULL)) - return -ENOMEM; - - child->free = 1; - child->size = size; - child->start = start; - child->mm = mm; - - list_add_tail(&child->node_list, &mm->node_list); - list_add_tail(&child->free_stack, &mm->free_stack); - - return 0; + return hole_node->start + hole_node->size; }
-static struct drm_mm_node *drm_mm_split_at_start(struct drm_mm_node *parent, - unsigned long size, - int atomic) +static inline unsigned long drm_mm_hole_node_end(struct drm_mm_node *hole_node) { - struct drm_mm_node *child; - - child = drm_mm_kmalloc(parent->mm, atomic); - if (unlikely(child == NULL)) - return NULL; - - INIT_LIST_HEAD(&child->free_stack); + struct drm_mm_node *next_node = + list_entry(hole_node->node_list.next, struct drm_mm_node, + node_list);
- child->size = size; - child->start = parent->start; - child->mm = parent->mm; - - list_add_tail(&child->node_list, &parent->node_list); - INIT_LIST_HEAD(&child->free_stack); - - parent->size -= size; - parent->start += size; - return child; + return next_node->start; }
- -struct drm_mm_node *drm_mm_get_block_generic(struct drm_mm_node *node, +struct drm_mm_node *drm_mm_get_block_generic(struct drm_mm_node *hole_node, unsigned long size, unsigned alignment, int atomic) {
- struct drm_mm_node *align_splitoff = NULL; - unsigned tmp = 0; + struct drm_mm_node *node; + struct drm_mm *mm = hole_node->mm; + unsigned long tmp = 0, wasted = 0; + unsigned long hole_start = drm_mm_hole_node_start(hole_node); + unsigned long hole_end = drm_mm_hole_node_end(hole_node); + + BUG_ON(!hole_node->hole_follows); + + node = drm_mm_kmalloc(mm, atomic); + if (unlikely(node == NULL)) + return NULL;
if (alignment) - tmp = node->start % alignment; + tmp = hole_start % alignment;
- if (tmp) { - align_splitoff = - drm_mm_split_at_start(node, alignment - tmp, atomic); - if (unlikely(align_splitoff == NULL)) - return NULL; - } + if (!tmp) { + hole_node->hole_follows = 0; + list_del_init(&hole_node->hole_stack); + } else + wasted = alignment - tmp; + + node->start = hole_start + wasted; + node->size = size; + node->mm = mm;
- if (node->size == size) { - list_del_init(&node->free_stack); - node->free = 0; + INIT_LIST_HEAD(&node->hole_stack); + list_add(&node->node_list, &hole_node->node_list); + + BUG_ON(node->start + node->size > hole_end); + + if (node->start + node->size < hole_end) { + list_add(&node->hole_stack, &mm->hole_stack); + node->hole_follows = 1; } else { - node = drm_mm_split_at_start(node, size, atomic); + node->hole_follows = 0; }
- if (align_splitoff) - drm_mm_put_block(align_splitoff); - return node; } EXPORT_SYMBOL(drm_mm_get_block_generic);
-struct drm_mm_node *drm_mm_get_block_range_generic(struct drm_mm_node *node, +struct drm_mm_node *drm_mm_get_block_range_generic(struct drm_mm_node *hole_node, unsigned long size, unsigned alignment, unsigned long start, unsigned long end, int atomic) { - struct drm_mm_node *align_splitoff = NULL; - unsigned tmp = 0; - unsigned wasted = 0; + struct drm_mm_node *node; + struct drm_mm *mm = hole_node->mm; + unsigned long tmp = 0, wasted = 0; + unsigned long hole_start = drm_mm_hole_node_start(hole_node); + unsigned long hole_end = drm_mm_hole_node_end(hole_node);
- if (node->start < start) - wasted += start - node->start; + BUG_ON(!hole_node->hole_follows); + + node = drm_mm_kmalloc(mm, atomic); + if (unlikely(node == NULL)) + return NULL; + + if (hole_start < start) + wasted += start - hole_start; if (alignment) - tmp = ((node->start + wasted) % alignment); + tmp = (hole_start + wasted) % alignment;
if (tmp) wasted += alignment - tmp; - if (wasted) { - align_splitoff = drm_mm_split_at_start(node, wasted, atomic); - if (unlikely(align_splitoff == NULL)) - return NULL; + + if (!wasted) { + hole_node->hole_follows = 0; + list_del_init(&hole_node->hole_stack); }
- if (node->size == size) { - list_del_init(&node->free_stack); - node->free = 0; + node->start = hole_start + wasted; + node->size = size; + node->mm = mm; + + INIT_LIST_HEAD(&node->hole_stack); + list_add(&node->node_list, &hole_node->node_list); + + BUG_ON(node->start + node->size > hole_end); + BUG_ON(node->start + node->size > end); + + if (node->start + node->size < hole_end) { + list_add(&node->hole_stack, &mm->hole_stack); + node->hole_follows = 1; } else { - node = drm_mm_split_at_start(node, size, atomic); + node->hole_follows = 0; }
- if (align_splitoff) - drm_mm_put_block(align_splitoff); - return node; } EXPORT_SYMBOL(drm_mm_get_block_range_generic); @@ -223,66 +220,41 @@ EXPORT_SYMBOL(drm_mm_get_block_range_generic); * Otherwise add to the free stack. */
-void drm_mm_put_block(struct drm_mm_node *cur) +void drm_mm_put_block(struct drm_mm_node *node) {
- struct drm_mm *mm = cur->mm; - struct list_head *cur_head = &cur->node_list; - struct list_head *root_head = &mm->node_list; - struct drm_mm_node *prev_node = NULL; - struct drm_mm_node *next_node; + struct drm_mm *mm = node->mm; + struct drm_mm_node *prev_node;
- int merged = 0; + BUG_ON(node->scanned_block || node->scanned_prev_free + || node->scanned_next_free);
- BUG_ON(cur->scanned_block || cur->scanned_prev_free - || cur->scanned_next_free); + prev_node = + list_entry(node->node_list.prev, struct drm_mm_node, node_list);
- if (cur_head->prev != root_head) { - prev_node = - list_entry(cur_head->prev, struct drm_mm_node, node_list); - if (prev_node->free) { - prev_node->size += cur->size; - merged = 1; - } - } - if (cur_head->next != root_head) { - next_node = - list_entry(cur_head->next, struct drm_mm_node, node_list); - if (next_node->free) { - if (merged) { - prev_node->size += next_node->size; - list_del(&next_node->node_list); - list_del(&next_node->free_stack); - spin_lock(&mm->unused_lock); - if (mm->num_unused < MM_UNUSED_TARGET) { - list_add(&next_node->free_stack, - &mm->unused_nodes); - ++mm->num_unused; - } else - kfree(next_node); - spin_unlock(&mm->unused_lock); - } else { - next_node->size += cur->size; - next_node->start = cur->start; - merged = 1; - } - } - } - if (!merged) { - cur->free = 1; - list_add(&cur->free_stack, &mm->free_stack); - } else { - list_del(&cur->node_list); - spin_lock(&mm->unused_lock); - if (mm->num_unused < MM_UNUSED_TARGET) { - list_add(&cur->free_stack, &mm->unused_nodes); - ++mm->num_unused; - } else - kfree(cur); - spin_unlock(&mm->unused_lock); - } -} + if (node->hole_follows) { + BUG_ON(drm_mm_hole_node_start(node) + == drm_mm_hole_node_end(node)); + list_del(&node->hole_stack); + } else + BUG_ON(drm_mm_hole_node_start(node) + != drm_mm_hole_node_end(node));
+ if (!prev_node->hole_follows) { + prev_node->hole_follows = 1; + list_add(&prev_node->hole_stack, &mm->hole_stack); + } else + list_move(&prev_node->hole_stack, &mm->hole_stack); + + list_del(&node->node_list); + spin_lock(&mm->unused_lock); + if (mm->num_unused < MM_UNUSED_TARGET) { + list_add(&node->node_list, &mm->unused_nodes); + ++mm->num_unused; + } else + kfree(node); + spin_unlock(&mm->unused_lock); +} EXPORT_SYMBOL(drm_mm_put_block);
static int check_free_hole(unsigned long start, unsigned long end, @@ -319,8 +291,10 @@ struct drm_mm_node *drm_mm_search_free(const struct drm_mm *mm, best = NULL; best_size = ~0UL;
- list_for_each_entry(entry, &mm->free_stack, free_stack) { - if (!check_free_hole(entry->start, entry->start + entry->size, + list_for_each_entry(entry, &mm->hole_stack, hole_stack) { + BUG_ON(!entry->hole_follows); + if (!check_free_hole(drm_mm_hole_node_start(entry), + drm_mm_hole_node_end(entry), size, alignment)) continue;
@@ -353,12 +327,13 @@ struct drm_mm_node *drm_mm_search_free_in_range(const struct drm_mm *mm, best = NULL; best_size = ~0UL;
- list_for_each_entry(entry, &mm->free_stack, free_stack) { - unsigned long adj_start = entry->start < start ? - start : entry->start; - unsigned long adj_end = entry->start + entry->size > end ? - end : entry->start + entry->size; + list_for_each_entry(entry, &mm->hole_stack, hole_stack) { + unsigned long adj_start = drm_mm_hole_node_start(entry) < start ? + start : drm_mm_hole_node_start(entry); + unsigned long adj_end = drm_mm_hole_node_end(entry) > end ? + end : drm_mm_hole_node_end(entry);
+ BUG_ON(!entry->hole_follows); if (!check_free_hole(adj_start, adj_end, size, alignment)) continue;
@@ -430,70 +405,40 @@ EXPORT_SYMBOL(drm_mm_init_scan_with_range); int drm_mm_scan_add_block(struct drm_mm_node *node) { struct drm_mm *mm = node->mm; - struct list_head *prev_free, *next_free; - struct drm_mm_node *prev_node, *next_node; + struct drm_mm_node *prev_node; + unsigned long hole_start, hole_end; unsigned long adj_start; unsigned long adj_end;
mm->scanned_blocks++;
- prev_free = next_free = NULL; - - BUG_ON(node->free); + BUG_ON(node->scanned_block); node->scanned_block = 1; - node->free = 1; - - if (node->node_list.prev != &mm->node_list) { - prev_node = list_entry(node->node_list.prev, struct drm_mm_node, - node_list); - - if (prev_node->free) { - list_del(&prev_node->node_list); - - node->start = prev_node->start; - node->size += prev_node->size; - - prev_node->scanned_prev_free = 1; - - prev_free = &prev_node->free_stack; - } - } - - if (node->node_list.next != &mm->node_list) { - next_node = list_entry(node->node_list.next, struct drm_mm_node, - node_list); - - if (next_node->free) { - list_del(&next_node->node_list); - - node->size += next_node->size; - - next_node->scanned_next_free = 1;
- next_free = &next_node->free_stack; - } - } + prev_node = list_entry(node->node_list.prev, struct drm_mm_node, + node_list);
- /* The free_stack list is not used for allocated objects, so these two - * pointers can be abused (as long as no allocations in this memory - * manager happens). */ - node->free_stack.prev = prev_free; - node->free_stack.next = next_free; + node->scanned_preceeds_hole = prev_node->hole_follows; + prev_node->hole_follows = 1; + list_del(&node->node_list); + node->node_list.prev = &prev_node->node_list;
+ hole_start = drm_mm_hole_node_start(prev_node); + hole_end = drm_mm_hole_node_end(prev_node); if (mm->scan_check_range) { - adj_start = node->start < mm->scan_start ? - mm->scan_start : node->start; - adj_end = node->start + node->size > mm->scan_end ? - mm->scan_end : node->start + node->size; + adj_start = hole_start < mm->scan_start ? + mm->scan_start : hole_start; + adj_end = hole_end > mm->scan_end ? + mm->scan_end : hole_end; } else { - adj_start = node->start; - adj_end = node->start + node->size; + adj_start = hole_start; + adj_end = hole_end; }
if (check_free_hole(adj_start , adj_end, mm->scan_size, mm->scan_alignment)) { - mm->scan_hit_start = node->start; - mm->scan_hit_size = node->size; + mm->scan_hit_start = hole_start; + mm->scan_hit_size = hole_end;
return 1; } @@ -519,39 +464,19 @@ EXPORT_SYMBOL(drm_mm_scan_add_block); int drm_mm_scan_remove_block(struct drm_mm_node *node) { struct drm_mm *mm = node->mm; - struct drm_mm_node *prev_node, *next_node; + struct drm_mm_node *prev_node;
mm->scanned_blocks--;
BUG_ON(!node->scanned_block); node->scanned_block = 0; - node->free = 0; - - prev_node = list_entry(node->free_stack.prev, struct drm_mm_node, - free_stack); - next_node = list_entry(node->free_stack.next, struct drm_mm_node, - free_stack); - - if (prev_node) { - BUG_ON(!prev_node->scanned_prev_free); - prev_node->scanned_prev_free = 0; - - list_add_tail(&prev_node->node_list, &node->node_list);
- node->start = prev_node->start + prev_node->size; - node->size -= prev_node->size; - } - - if (next_node) { - BUG_ON(!next_node->scanned_next_free); - next_node->scanned_next_free = 0; - - list_add(&next_node->node_list, &node->node_list); - - node->size -= next_node->size; - } + prev_node = list_entry(node->node_list.prev, struct drm_mm_node, + node_list);
- INIT_LIST_HEAD(&node->free_stack); + prev_node->hole_follows = node->scanned_preceeds_hole; + INIT_LIST_HEAD(&node->node_list); + list_add(&node->node_list, &prev_node->node_list);
/* Only need to check for containement because start&size for the * complete resulting free block (not just the desired part) is @@ -568,7 +493,7 @@ EXPORT_SYMBOL(drm_mm_scan_remove_block);
int drm_mm_clean(struct drm_mm * mm) { - struct list_head *head = &mm->node_list; + struct list_head *head = &mm->head_node.node_list;
return (head->next->next == head); } @@ -576,38 +501,40 @@ EXPORT_SYMBOL(drm_mm_clean);
int drm_mm_init(struct drm_mm * mm, unsigned long start, unsigned long size) { - INIT_LIST_HEAD(&mm->node_list); - INIT_LIST_HEAD(&mm->free_stack); + INIT_LIST_HEAD(&mm->hole_stack); INIT_LIST_HEAD(&mm->unused_nodes); mm->num_unused = 0; mm->scanned_blocks = 0; spin_lock_init(&mm->unused_lock);
- return drm_mm_create_tail_node(mm, start, size, 0); + /* Clever trick to avoid a special case in the free hole tracking. */ + INIT_LIST_HEAD(&mm->head_node.node_list); + INIT_LIST_HEAD(&mm->head_node.hole_stack); + mm->head_node.hole_follows = 1; + mm->head_node.scanned_block = 0; + mm->head_node.scanned_prev_free = 0; + mm->head_node.scanned_next_free = 0; + mm->head_node.mm = mm; + mm->head_node.start = start + size; + mm->head_node.size = start - mm->head_node.start; + list_add_tail(&mm->head_node.hole_stack, &mm->hole_stack); + + return 0; } EXPORT_SYMBOL(drm_mm_init);
void drm_mm_takedown(struct drm_mm * mm) { - struct list_head *bnode = mm->free_stack.next; - struct drm_mm_node *entry; - struct drm_mm_node *next; + struct drm_mm_node *entry, *next;
- entry = list_entry(bnode, struct drm_mm_node, free_stack); - - if (entry->node_list.next != &mm->node_list || - entry->free_stack.next != &mm->free_stack) { + if (!list_empty(&mm->head_node.node_list)) { DRM_ERROR("Memory manager not clean. Delaying takedown\n"); return; }
- list_del(&entry->free_stack); - list_del(&entry->node_list); - kfree(entry); - spin_lock(&mm->unused_lock); - list_for_each_entry_safe(entry, next, &mm->unused_nodes, free_stack) { - list_del(&entry->free_stack); + list_for_each_entry_safe(entry, next, &mm->unused_nodes, node_list) { + list_del(&entry->node_list); kfree(entry); --mm->num_unused; } @@ -620,19 +547,37 @@ EXPORT_SYMBOL(drm_mm_takedown); void drm_mm_debug_table(struct drm_mm *mm, const char *prefix) { struct drm_mm_node *entry; - int total_used = 0, total_free = 0, total = 0; - - list_for_each_entry(entry, &mm->node_list, node_list) { - printk(KERN_DEBUG "%s 0x%08lx-0x%08lx: %8ld: %s\n", + unsigned long total_used = 0, total_free = 0, total = 0; + unsigned long hole_start, hole_end, hole_size; + + hole_start = drm_mm_hole_node_start(&mm->head_node); + hole_end = drm_mm_hole_node_end(&mm->head_node); + hole_size = hole_end - hole_start; + if (hole_size) + printk(KERN_DEBUG "%s 0x%08lx-0x%08lx: %8lu: free\n", + prefix, hole_start, hole_end, + hole_size); + total_free += hole_size; + + drm_mm_for_each_node(entry, mm) { + printk(KERN_DEBUG "%s 0x%08lx-0x%08lx: %8lu: used\n", prefix, entry->start, entry->start + entry->size, - entry->size, entry->free ? "free" : "used"); - total += entry->size; - if (entry->free) - total_free += entry->size; - else - total_used += entry->size; + entry->size); + total_used += entry->size; + + if (entry->hole_follows) { + hole_start = drm_mm_hole_node_start(entry); + hole_end = drm_mm_hole_node_end(entry); + hole_size = hole_end - hole_start; + printk(KERN_DEBUG "%s 0x%08lx-0x%08lx: %8lu: free\n", + prefix, hole_start, hole_end, + hole_size); + total_free += hole_size; + } } - printk(KERN_DEBUG "%s total: %d, used %d free %d\n", prefix, total, + total = total_free + total_used; + + printk(KERN_DEBUG "%s total: %lu, used %lu free %lu\n", prefix, total, total_used, total_free); } EXPORT_SYMBOL(drm_mm_debug_table); @@ -641,17 +586,34 @@ EXPORT_SYMBOL(drm_mm_debug_table); int drm_mm_dump_table(struct seq_file *m, struct drm_mm *mm) { struct drm_mm_node *entry; - int total_used = 0, total_free = 0, total = 0; - - list_for_each_entry(entry, &mm->node_list, node_list) { - seq_printf(m, "0x%08lx-0x%08lx: 0x%08lx: %s\n", entry->start, entry->start + entry->size, entry->size, entry->free ? 
"free" : "used"); - total += entry->size; - if (entry->free) - total_free += entry->size; - else - total_used += entry->size; + unsigned long total_used = 0, total_free = 0, total = 0; + unsigned long hole_start, hole_end, hole_size; + + hole_start = drm_mm_hole_node_start(&mm->head_node); + hole_end = drm_mm_hole_node_end(&mm->head_node); + hole_size = hole_end - hole_start; + if (hole_size) + seq_printf(m, "0x%08lx-0x%08lx: 0x%08lx: free\n", + hole_start, hole_end, hole_size); + total_free += hole_size; + + drm_mm_for_each_node(entry, mm) { + seq_printf(m, "0x%08lx-0x%08lx: 0x%08lx: used\n", + entry->start, entry->start + entry->size, + entry->size); + total_used += entry->size; + if (entry->hole_follows) { + hole_start = drm_mm_hole_node_start(&mm->head_node); + hole_end = drm_mm_hole_node_end(&mm->head_node); + hole_size = hole_end - hole_start; + seq_printf(m, "0x%08lx-0x%08lx: 0x%08lx: free\n", + hole_start, hole_end, hole_size); + total_free += hole_size; + } } - seq_printf(m, "total: %d, used %d free %d\n", total, total_used, total_free); + total = total_free + total_used; + + seq_printf(m, "total: %lu, used %lu free %lu\n", total, total_used, total_free); return 0; } EXPORT_SYMBOL(drm_mm_dump_table); diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h index 0d79146..34fa36f 100644 --- a/include/drm/drm_mm.h +++ b/include/drm/drm_mm.h @@ -42,23 +42,24 @@ #endif
struct drm_mm_node { - struct list_head free_stack; struct list_head node_list; - unsigned free : 1; + struct list_head hole_stack; + unsigned hole_follows : 1; unsigned scanned_block : 1; unsigned scanned_prev_free : 1; unsigned scanned_next_free : 1; + unsigned scanned_preceeds_hole : 1; unsigned long start; unsigned long size; struct drm_mm *mm; };
struct drm_mm { - /* List of free memory blocks, most recently freed ordered. */ - struct list_head free_stack; - /* List of all memory nodes, ordered according to the (increasing) start - * address of the memory node. */ - struct list_head node_list; + /* List of all memory nodes that immediatly preceed a free hole. */ + struct list_head hole_stack; + /* head_node.node_list is the list of all memory nodes, ordered + * according to the (increasing) start address of the memory node. */ + struct drm_mm_node head_node; struct list_head unused_nodes; int num_unused; spinlock_t unused_lock; @@ -74,9 +75,11 @@ struct drm_mm {
static inline bool drm_mm_initialized(struct drm_mm *mm) { - return mm->free_stack.next; + return mm->hole_stack.next; } - +#define drm_mm_for_each_node(entry, mm) list_for_each_entry(entry, \ + &(mm)->head_node.node_list, \ + node_list); /* * Basic range manager support (drm_mm.c) */
[PATCH 3/9] drm: mm: extract node insert helper functions

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/drm_mm.c | 67 ++++++++++++++++++++++++++++-----------------
 1 files changed, 42 insertions(+), 25 deletions(-)
diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c index 4fa33e1..fecb406 100644 --- a/drivers/gpu/drm/drm_mm.c +++ b/drivers/gpu/drm/drm_mm.c @@ -115,24 +115,15 @@ static inline unsigned long drm_mm_hole_node_end(struct drm_mm_node *hole_node) return next_node->start; }
-struct drm_mm_node *drm_mm_get_block_generic(struct drm_mm_node *hole_node, - unsigned long size, - unsigned alignment, - int atomic) +static void drm_mm_insert_helper(struct drm_mm_node *hole_node, + struct drm_mm_node *node, + unsigned long size, unsigned alignment) { - - struct drm_mm_node *node; struct drm_mm *mm = hole_node->mm; unsigned long tmp = 0, wasted = 0; unsigned long hole_start = drm_mm_hole_node_start(hole_node); unsigned long hole_end = drm_mm_hole_node_end(hole_node);
- BUG_ON(!hole_node->hole_follows); - - node = drm_mm_kmalloc(mm, atomic); - if (unlikely(node == NULL)) - return NULL; - if (alignment) tmp = hole_start % alignment;
@@ -157,30 +148,37 @@ struct drm_mm_node *drm_mm_get_block_generic(struct drm_mm_node *hole_node, } else { node->hole_follows = 0; } +} + +struct drm_mm_node *drm_mm_get_block_generic(struct drm_mm_node *hole_node, + unsigned long size, + unsigned alignment, + int atomic) +{ + struct drm_mm_node *node; + + BUG_ON(!hole_node->hole_follows); + + node = drm_mm_kmalloc(hole_node->mm, atomic); + if (unlikely(node == NULL)) + return NULL; + + drm_mm_insert_helper(hole_node, node, size, alignment);
return node; } EXPORT_SYMBOL(drm_mm_get_block_generic);
-struct drm_mm_node *drm_mm_get_block_range_generic(struct drm_mm_node *hole_node, - unsigned long size, - unsigned alignment, - unsigned long start, - unsigned long end, - int atomic) +static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node, + struct drm_mm_node *node, + unsigned long size, unsigned alignment, + unsigned long start, unsigned long end) { - struct drm_mm_node *node; struct drm_mm *mm = hole_node->mm; unsigned long tmp = 0, wasted = 0; unsigned long hole_start = drm_mm_hole_node_start(hole_node); unsigned long hole_end = drm_mm_hole_node_end(hole_node);
- BUG_ON(!hole_node->hole_follows); - - node = drm_mm_kmalloc(mm, atomic); - if (unlikely(node == NULL)) - return NULL; - if (hole_start < start) wasted += start - hole_start; if (alignment) @@ -210,6 +208,25 @@ struct drm_mm_node *drm_mm_get_block_range_generic(struct drm_mm_node *hole_node } else { node->hole_follows = 0; } +} + +struct drm_mm_node *drm_mm_get_block_range_generic(struct drm_mm_node *hole_node, + unsigned long size, + unsigned alignment, + unsigned long start, + unsigned long end, + int atomic) +{ + struct drm_mm_node *node; + + BUG_ON(!hole_node->hole_follows); + + node = drm_mm_kmalloc(hole_node->mm, atomic); + if (unlikely(node == NULL)) + return NULL; + + drm_mm_insert_helper_range(hole_node, node, size, alignment, + start, end);
return node; }
[PATCH 4/9] drm: mm: add api for embedding struct drm_mm_node

The old API has a two-step process: first search for a suitable free hole, then allocate from that specific hole. No user used this to do anything clever, so drop it.
With struct drm_mm_node embedded, we can no longer track allocations by checking for a NULL pointer. So keep track of this explicitly and add a small helper, drm_mm_node_allocated.
Also add a function to move allocations between different struct drm_mm_node.
v2: Implement suggestions by Chris Wilson.
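As a rough sketch of what this buys a driver with an embedded node (made-up foo_bo names, not part of the patch): the old NULL-pointer check becomes a flag check, and an allocation can be handed over to another embedded node without a round trip through the allocator:

#include <linux/types.h>
#include <drm/drm_mm.h>

struct foo_bo {
	struct drm_mm_node node;	/* embedded allocation */
};

/* "is this object bound?" is now a flag check instead of a NULL check
 * on a separately allocated struct drm_mm_node */
static bool foo_bo_is_bound(struct foo_bo *bo)
{
	return drm_mm_node_allocated(&bo->node);
}

/* hand the allocation over to another embedded node, e.g. when the
 * containing object gets reallocated; new_bo inherits start/size and
 * the hole bookkeeping, old_bo ends up unallocated */
static void foo_bo_transfer(struct foo_bo *old_bo, struct foo_bo *new_bo)
{
	drm_mm_replace_node(&old_bo->node, &new_bo->node);
}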
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/drm_mm.c | 93 +++++++++++++++++++++++++++++++++++++++++----
 include/drm/drm_mm.h     | 19 +++++++--
 2 files changed, 98 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c index fecb406..d6432f9 100644 --- a/drivers/gpu/drm/drm_mm.c +++ b/drivers/gpu/drm/drm_mm.c @@ -124,6 +124,8 @@ static void drm_mm_insert_helper(struct drm_mm_node *hole_node, unsigned long hole_start = drm_mm_hole_node_start(hole_node); unsigned long hole_end = drm_mm_hole_node_end(hole_node);
+ BUG_ON(!hole_node->hole_follows || node->allocated); + if (alignment) tmp = hole_start % alignment;
@@ -136,6 +138,7 @@ static void drm_mm_insert_helper(struct drm_mm_node *hole_node, node->start = hole_start + wasted; node->size = size; node->mm = mm; + node->allocated = 1;
INIT_LIST_HEAD(&node->hole_stack); list_add(&node->node_list, &hole_node->node_list); @@ -157,8 +160,6 @@ struct drm_mm_node *drm_mm_get_block_generic(struct drm_mm_node *hole_node, { struct drm_mm_node *node;
- BUG_ON(!hole_node->hole_follows); - node = drm_mm_kmalloc(hole_node->mm, atomic); if (unlikely(node == NULL)) return NULL; @@ -169,6 +170,26 @@ struct drm_mm_node *drm_mm_get_block_generic(struct drm_mm_node *hole_node, } EXPORT_SYMBOL(drm_mm_get_block_generic);
+/** + * Search for free space and insert a preallocated memory node. Returns + * -ENOSPC if no suitable free area is available. The preallocated memory node + * must be cleared. + */ +int drm_mm_insert_node(struct drm_mm *mm, struct drm_mm_node *node, + unsigned long size, unsigned alignment) +{ + struct drm_mm_node *hole_node; + + hole_node = drm_mm_search_free(mm, size, alignment, 0); + if (!hole_node) + return -ENOSPC; + + drm_mm_insert_helper(hole_node, node, size, alignment); + + return 0; +} +EXPORT_SYMBOL(drm_mm_insert_node); + static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node, struct drm_mm_node *node, unsigned long size, unsigned alignment, @@ -179,6 +200,8 @@ static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node, unsigned long hole_start = drm_mm_hole_node_start(hole_node); unsigned long hole_end = drm_mm_hole_node_end(hole_node);
+ BUG_ON(!hole_node->hole_follows || node->allocated); + if (hole_start < start) wasted += start - hole_start; if (alignment) @@ -195,6 +218,7 @@ static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node, node->start = hole_start + wasted; node->size = size; node->mm = mm; + node->allocated = 1;
INIT_LIST_HEAD(&node->hole_stack); list_add(&node->node_list, &hole_node->node_list); @@ -219,8 +243,6 @@ struct drm_mm_node *drm_mm_get_block_range_generic(struct drm_mm_node *hole_node { struct drm_mm_node *node;
- BUG_ON(!hole_node->hole_follows); - node = drm_mm_kmalloc(hole_node->mm, atomic); if (unlikely(node == NULL)) return NULL; @@ -232,14 +254,34 @@ struct drm_mm_node *drm_mm_get_block_range_generic(struct drm_mm_node *hole_node } EXPORT_SYMBOL(drm_mm_get_block_range_generic);
-/* - * Put a block. Merge with the previous and / or next block if they are free. - * Otherwise add to the free stack. +/** + * Search for free space and insert a preallocated memory node. Returns + * -ENOSPC if no suitable free area is available. This is for range + * restricted allocations. The preallocated memory node must be cleared. */ - -void drm_mm_put_block(struct drm_mm_node *node) +int drm_mm_insert_node_in_range(struct drm_mm *mm, struct drm_mm_node *node, + unsigned long size, unsigned alignment, + unsigned long start, unsigned long end) { + struct drm_mm_node *hole_node; + + hole_node = drm_mm_search_free_in_range(mm, size, alignment, + start, end, 0); + if (!hole_node) + return -ENOSPC; + + drm_mm_insert_helper_range(hole_node, node, size, alignment, + start, end);
+ return 0; +} +EXPORT_SYMBOL(drm_mm_insert_node_in_range); + +/** + * Remove a memory node from the allocator. + */ +void drm_mm_remove_node(struct drm_mm_node *node) +{ struct drm_mm *mm = node->mm; struct drm_mm_node *prev_node;
@@ -264,6 +306,22 @@ void drm_mm_put_block(struct drm_mm_node *node) list_move(&prev_node->hole_stack, &mm->hole_stack);
list_del(&node->node_list); + node->allocated = 0; +} +EXPORT_SYMBOL(drm_mm_remove_node); + +/* + * Remove a memory node from the allocator and free the allocated struct + * drm_mm_node. Only to be used on a struct drm_mm_node obtained by one of the + * drm_mm_get_block functions. + */ +void drm_mm_put_block(struct drm_mm_node *node) +{ + + struct drm_mm *mm = node->mm; + + drm_mm_remove_node(node); + spin_lock(&mm->unused_lock); if (mm->num_unused < MM_UNUSED_TARGET) { list_add(&node->node_list, &mm->unused_nodes); @@ -368,6 +426,23 @@ struct drm_mm_node *drm_mm_search_free_in_range(const struct drm_mm *mm, EXPORT_SYMBOL(drm_mm_search_free_in_range);
/** + * Moves an allocation. To be used with embedded struct drm_mm_node. + */ +void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new) +{ + list_replace(&old->node_list, &new->node_list); + list_replace(&old->node_list, &new->hole_stack); + new->hole_follows = old->hole_follows; + new->mm = old->mm; + new->start = old->start; + new->size = old->size; + + old->allocated = 0; + new->allocated = 1; +} +EXPORT_SYMBOL(drm_mm_replace_node); + +/** * Initializa lru scanning. * * This simply sets up the scanning routines with the parameters for the desired diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h index 34fa36f..17a070e 100644 --- a/include/drm/drm_mm.h +++ b/include/drm/drm_mm.h @@ -49,6 +49,7 @@ struct drm_mm_node { unsigned scanned_prev_free : 1; unsigned scanned_next_free : 1; unsigned scanned_preceeds_hole : 1; + unsigned allocated : 1; unsigned long start; unsigned long size; struct drm_mm *mm; @@ -73,6 +74,11 @@ struct drm_mm { unsigned long scan_end; };
+static inline bool drm_mm_node_allocated(struct drm_mm_node *node) +{ + return node->allocated; +} + static inline bool drm_mm_initialized(struct drm_mm *mm) { return mm->hole_stack.next; @@ -126,7 +132,15 @@ static inline struct drm_mm_node *drm_mm_get_block_atomic_range( return drm_mm_get_block_range_generic(parent, size, alignment, start, end, 1); } +extern int drm_mm_insert_node(struct drm_mm *mm, struct drm_mm_node *node, + unsigned long size, unsigned alignment); +extern int drm_mm_insert_node_in_range(struct drm_mm *mm, + struct drm_mm_node *node, + unsigned long size, unsigned alignment, + unsigned long start, unsigned long end); extern void drm_mm_put_block(struct drm_mm_node *cur); +extern void drm_mm_remove_node(struct drm_mm_node *node); +extern void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new); extern struct drm_mm_node *drm_mm_search_free(const struct drm_mm *mm, unsigned long size, unsigned alignment, @@ -142,11 +156,6 @@ extern int drm_mm_init(struct drm_mm *mm, unsigned long start, unsigned long size); extern void drm_mm_takedown(struct drm_mm *mm); extern int drm_mm_clean(struct drm_mm *mm); -extern unsigned long drm_mm_tail_space(struct drm_mm *mm); -extern int drm_mm_remove_space_from_tail(struct drm_mm *mm, - unsigned long size); -extern int drm_mm_add_space_to_tail(struct drm_mm *mm, - unsigned long size, int atomic); extern int drm_mm_pre_get(struct drm_mm *mm);
static inline struct drm_mm *drm_get_mm(struct drm_mm_node *block)
[PATCH 5/9] drm/i915: embed struct drm_mm_node into struct drm_i915_gem_object

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/i915/i915_debugfs.c   |  6 +-
 drivers/gpu/drm/i915/i915_drv.h       |  2 +-
 drivers/gpu/drm/i915/i915_gem.c       | 93 ++++++++++++++-------------
 drivers/gpu/drm/i915/i915_gem_evict.c |  6 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c   | 10 ++--
 5 files changed, 52 insertions(+), 65 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c index 8caa55f..af133ac 100644 --- a/drivers/gpu/drm/i915/i915_debugfs.c +++ b/drivers/gpu/drm/i915/i915_debugfs.c @@ -124,9 +124,9 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) seq_printf(m, " (name: %d)", obj->base.name); if (obj->fence_reg != I915_FENCE_REG_NONE) seq_printf(m, " (fence: %d)", obj->fence_reg); - if (obj->gtt_space != NULL) + if (drm_mm_node_allocated(&obj->gtt_space)) seq_printf(m, " (gtt offset: %08x, size: %08x)", - obj->gtt_offset, (unsigned int)obj->gtt_space->size); + obj->gtt_offset, (unsigned int)obj->gtt_space.size); if (obj->pin_mappable || obj->fault_mappable) seq_printf(m, " (mappable)"); if (obj->ring != NULL) @@ -180,7 +180,7 @@ static int i915_gem_object_list_info(struct seq_file *m, void *data) describe_obj(m, obj); seq_printf(m, "\n"); total_obj_size += obj->base.size; - total_gtt_size += obj->gtt_space->size; + total_gtt_size += obj->gtt_space.size; count++; } mutex_unlock(&dev->struct_mutex); diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index 8a4b247..bdb05c2 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -712,7 +712,7 @@ struct drm_i915_gem_object { struct drm_gem_object base;
/** Current space allocated to this object in the GTT, if any. */ - struct drm_mm_node *gtt_space; + struct drm_mm_node gtt_space; struct list_head gtt_list;
/** This object's place on the active/flushing/inactive lists */ diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index 868d3a1..f8612be 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -87,10 +87,10 @@ static void i915_gem_info_add_gtt(struct drm_i915_private *dev_priv, struct drm_i915_gem_object *obj) { dev_priv->mm.gtt_count++; - dev_priv->mm.gtt_memory += obj->gtt_space->size; + dev_priv->mm.gtt_memory += obj->gtt_space.size; if (obj->gtt_offset < dev_priv->mm.gtt_mappable_end) { dev_priv->mm.mappable_gtt_used += - min_t(size_t, obj->gtt_space->size, + min_t(size_t, obj->gtt_space.size, dev_priv->mm.gtt_mappable_end - obj->gtt_offset); } list_add_tail(&obj->gtt_list, &dev_priv->mm.gtt_list); @@ -100,10 +100,10 @@ static void i915_gem_info_remove_gtt(struct drm_i915_private *dev_priv, struct drm_i915_gem_object *obj) { dev_priv->mm.gtt_count--; - dev_priv->mm.gtt_memory -= obj->gtt_space->size; + dev_priv->mm.gtt_memory -= obj->gtt_space.size; if (obj->gtt_offset < dev_priv->mm.gtt_mappable_end) { dev_priv->mm.mappable_gtt_used -= - min_t(size_t, obj->gtt_space->size, + min_t(size_t, obj->gtt_space.size, dev_priv->mm.gtt_mappable_end - obj->gtt_offset); } list_del_init(&obj->gtt_list); @@ -124,13 +124,13 @@ i915_gem_info_update_mappable(struct drm_i915_private *dev_priv, /* Combined state was already mappable. */ return; dev_priv->mm.gtt_mappable_count++; - dev_priv->mm.gtt_mappable_memory += obj->gtt_space->size; + dev_priv->mm.gtt_mappable_memory += obj->gtt_space.size; } else { if (obj->pin_mappable || obj->fault_mappable) /* Combined state still mappable. */ return; dev_priv->mm.gtt_mappable_count--; - dev_priv->mm.gtt_mappable_memory -= obj->gtt_space->size; + dev_priv->mm.gtt_mappable_memory -= obj->gtt_space.size; } }
@@ -139,7 +139,7 @@ static void i915_gem_info_add_pin(struct drm_i915_private *dev_priv, bool mappable) { dev_priv->mm.pin_count++; - dev_priv->mm.pin_memory += obj->gtt_space->size; + dev_priv->mm.pin_memory += obj->gtt_space.size; if (mappable) { obj->pin_mappable = true; i915_gem_info_update_mappable(dev_priv, obj, true); @@ -150,7 +150,7 @@ static void i915_gem_info_remove_pin(struct drm_i915_private *dev_priv, struct drm_i915_gem_object *obj) { dev_priv->mm.pin_count--; - dev_priv->mm.pin_memory -= obj->gtt_space->size; + dev_priv->mm.pin_memory -= obj->gtt_space.size; if (obj->pin_mappable) { obj->pin_mappable = false; i915_gem_info_update_mappable(dev_priv, obj, false); @@ -212,7 +212,8 @@ static int i915_mutex_lock_interruptible(struct drm_device *dev) static inline bool i915_gem_object_is_inactive(struct drm_i915_gem_object *obj) { - return obj->gtt_space && !obj->active && obj->pin_count == 0; + return drm_mm_node_allocated(&obj->gtt_space) + && !obj->active && obj->pin_count == 0; }
int i915_gem_do_init(struct drm_device *dev, @@ -1059,7 +1060,7 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data, if (obj->phys_obj) ret = i915_gem_phys_pwrite(dev, obj, args, file); else if (obj->tiling_mode == I915_TILING_NONE && - obj->gtt_space && + drm_mm_node_allocated(&obj->gtt_space) && obj->base.write_domain != I915_GEM_DOMAIN_CPU) { ret = i915_gem_object_pin(obj, 0, true); if (ret) @@ -1283,7 +1284,7 @@ int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf) mutex_lock(&dev->struct_mutex); BUG_ON(obj->pin_count && !obj->pin_mappable);
- if (obj->gtt_space) { + if (drm_mm_node_allocated(&obj->gtt_space)) { if (!obj->map_and_fenceable) { ret = i915_gem_object_unbind(obj); if (ret) @@ -1291,7 +1292,7 @@ int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf) } }
- if (!obj->gtt_space) { + if (!drm_mm_node_allocated(&obj->gtt_space)) { ret = i915_gem_object_bind_to_gtt(obj, 0, true); if (ret) goto unlock; @@ -2193,7 +2194,7 @@ i915_gem_object_unbind(struct drm_i915_gem_object *obj) struct drm_i915_private *dev_priv = dev->dev_private; int ret = 0;
- if (obj->gtt_space == NULL) + if (!drm_mm_node_allocated(&obj->gtt_space)) return 0;
if (obj->pin_count != 0) { @@ -2235,8 +2236,7 @@ i915_gem_object_unbind(struct drm_i915_gem_object *obj) /* Avoid an unnecessary call to unbind on rebind. */ obj->map_and_fenceable = true;
- drm_mm_put_block(obj->gtt_space); - obj->gtt_space = NULL; + drm_mm_remove_node(&obj->gtt_space); obj->gtt_offset = 0;
if (i915_gem_object_is_purgeable(obj)) @@ -2292,7 +2292,7 @@ static void sandybridge_write_fence_reg(struct drm_i915_gem_object *obj) { struct drm_device *dev = obj->base.dev; drm_i915_private_t *dev_priv = dev->dev_private; - u32 size = obj->gtt_space->size; + u32 size = obj->gtt_space.size; int regnum = obj->fence_reg; uint64_t val;
@@ -2313,7 +2313,7 @@ static void i965_write_fence_reg(struct drm_i915_gem_object *obj) { struct drm_device *dev = obj->base.dev; drm_i915_private_t *dev_priv = dev->dev_private; - u32 size = obj->gtt_space->size; + u32 size = obj->gtt_space.size; int regnum = obj->fence_reg; uint64_t val;
@@ -2332,7 +2332,7 @@ static void i915_write_fence_reg(struct drm_i915_gem_object *obj) { struct drm_device *dev = obj->base.dev; drm_i915_private_t *dev_priv = dev->dev_private; - u32 size = obj->gtt_space->size; + u32 size = obj->gtt_space.size; uint32_t fence_reg, val, pitch_val; int tile_width;
@@ -2340,7 +2340,7 @@ static void i915_write_fence_reg(struct drm_i915_gem_object *obj) (obj->gtt_offset & (size - 1))) { WARN(1, "%s: object 0x%08x [fenceable? %d] not 1M or size (0x%08x) aligned [gtt_space offset=%lx, size=%lx]\n", __func__, obj->gtt_offset, obj->map_and_fenceable, size, - obj->gtt_space->start, obj->gtt_space->size); + obj->gtt_space.start, obj->gtt_space.size); return; }
@@ -2379,7 +2379,7 @@ static void i830_write_fence_reg(struct drm_i915_gem_object *obj) { struct drm_device *dev = obj->base.dev; drm_i915_private_t *dev_priv = dev->dev_private; - u32 size = obj->gtt_space->size; + u32 size = obj->gtt_space.size; int regnum = obj->fence_reg; uint32_t val; uint32_t pitch_val; @@ -2642,7 +2642,6 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj, { struct drm_device *dev = obj->base.dev; drm_i915_private_t *dev_priv = dev->dev_private; - struct drm_mm_node *free_space; gfp_t gfpmask = __GFP_NORETRY | __GFP_NOWARN; u32 size, fence_size, fence_alignment; bool mappable, fenceable; @@ -2676,27 +2675,17 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj,
search_free: if (map_and_fenceable) - free_space = - drm_mm_search_free_in_range(&dev_priv->mm.gtt_space, + ret = + drm_mm_insert_node_in_range(&dev_priv->mm.gtt_space, + &obj->gtt_space, size, alignment, 0, - dev_priv->mm.gtt_mappable_end, - 0); + dev_priv->mm.gtt_mappable_end); else - free_space = drm_mm_search_free(&dev_priv->mm.gtt_space, - size, alignment, 0); - - if (free_space != NULL) { - if (map_and_fenceable) - obj->gtt_space = - drm_mm_get_block_range_generic(free_space, - size, alignment, 0, - dev_priv->mm.gtt_mappable_end, - 0); - else - obj->gtt_space = - drm_mm_get_block(free_space, size, alignment); - } - if (obj->gtt_space == NULL) { + ret = drm_mm_insert_node(&dev_priv->mm.gtt_space, + &obj->gtt_space, + size, alignment); + + if (ret != 0) { /* If the gtt is empty and we're still having trouble * fitting our object in, we're out of memory. */ @@ -2710,8 +2699,7 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj,
ret = i915_gem_object_get_pages_gtt(obj, gfpmask); if (ret) { - drm_mm_put_block(obj->gtt_space); - obj->gtt_space = NULL; + drm_mm_remove_node(&obj->gtt_space);
if (ret == -ENOMEM) { /* first try to clear up some space from the GTT */ @@ -2737,8 +2725,7 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj, ret = i915_gem_gtt_bind_object(obj); if (ret) { i915_gem_object_put_pages_gtt(obj); - drm_mm_put_block(obj->gtt_space); - obj->gtt_space = NULL; + drm_mm_remove_node(&obj->gtt_space);
ret = i915_gem_evict_something(dev, size, alignment, map_and_fenceable); @@ -2748,7 +2735,7 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj, goto search_free; }
- obj->gtt_offset = obj->gtt_space->start; + obj->gtt_offset = obj->gtt_space.start;
/* keep track of bounds object by adding it to the inactive list */ list_add_tail(&obj->mm_list, &dev_priv->mm.inactive_list); @@ -2764,8 +2751,8 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj, trace_i915_gem_object_bind(obj, obj->gtt_offset, map_and_fenceable);
fenceable = - obj->gtt_space->size == fence_size && - (obj->gtt_space->start & (fence_alignment -1)) == 0; + obj->gtt_space.size == fence_size && + (obj->gtt_space.start & (fence_alignment -1)) == 0;
mappable = obj->gtt_offset + obj->base.size <= dev_priv->mm.gtt_mappable_end; @@ -2866,7 +2853,7 @@ i915_gem_object_set_to_gtt_domain(struct drm_i915_gem_object *obj, int write) int ret;
/* Not valid to be called on unbound objects. */ - if (obj->gtt_space == NULL) + if (!drm_mm_node_allocated(&obj->gtt_space)) return -EINVAL;
ret = i915_gem_object_flush_gpu_write_domain(obj, false); @@ -2914,7 +2901,7 @@ i915_gem_object_set_to_display_plane(struct drm_i915_gem_object *obj, int ret;
/* Not valid to be called on unbound objects. */ - if (obj->gtt_space == NULL) + if (!drm_mm_node_allocated(&obj->gtt_space)) return -EINVAL;
ret = i915_gem_object_flush_gpu_write_domain(obj, true); @@ -4084,7 +4071,7 @@ i915_gem_object_pin(struct drm_i915_gem_object *obj, BUG_ON(map_and_fenceable && !map_and_fenceable); WARN_ON(i915_verify_lists(dev));
- if (obj->gtt_space != NULL) { + if (drm_mm_node_allocated(&obj->gtt_space)) { if ((alignment && obj->gtt_offset & (alignment - 1)) || (map_and_fenceable && !obj->map_and_fenceable)) { WARN(obj->pin_count, @@ -4100,7 +4087,7 @@ i915_gem_object_pin(struct drm_i915_gem_object *obj, } }
- if (obj->gtt_space == NULL) { + if (!drm_mm_node_allocated(&obj->gtt_space)) { ret = i915_gem_object_bind_to_gtt(obj, alignment, map_and_fenceable); if (ret) @@ -4127,7 +4114,7 @@ i915_gem_object_unpin(struct drm_i915_gem_object *obj)
WARN_ON(i915_verify_lists(dev)); BUG_ON(obj->pin_count == 0); - BUG_ON(obj->gtt_space == NULL); + BUG_ON(!drm_mm_node_allocated(&obj->gtt_space));
if (--obj->pin_count == 0) { if (!obj->active) @@ -4319,7 +4306,7 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
/* if the object is no longer bound, discard its backing storage */ if (i915_gem_object_is_purgeable(obj) && - obj->gtt_space == NULL) + !drm_mm_node_allocated(&obj->gtt_space)) i915_gem_object_truncate(obj);
args->retained = obj->madv != __I915_MADV_PURGED; diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c index 03e15d3..ea252a4 100644 --- a/drivers/gpu/drm/i915/i915_gem_evict.c +++ b/drivers/gpu/drm/i915/i915_gem_evict.c @@ -36,7 +36,7 @@ mark_free(struct drm_i915_gem_object *obj, struct list_head *unwind) { list_add(&obj->evict_list, unwind); drm_gem_object_reference(&obj->base); - return drm_mm_scan_add_block(obj->gtt_space); + return drm_mm_scan_add_block(&obj->gtt_space); }
int @@ -128,7 +128,7 @@ i915_gem_evict_something(struct drm_device *dev, int min_size,
/* Nothing found, clean up and bail out! */ list_for_each_entry(obj, &unwind_list, evict_list) { - ret = drm_mm_scan_remove_block(obj->gtt_space); + ret = drm_mm_scan_remove_block(&obj->gtt_space); BUG_ON(ret); drm_gem_object_unreference(&obj->base); } @@ -147,7 +147,7 @@ found: obj = list_first_entry(&unwind_list, struct drm_i915_gem_object, evict_list); - if (drm_mm_scan_remove_block(obj->gtt_space)) { + if (drm_mm_scan_remove_block(&obj->gtt_space)) { list_move(&obj->evict_list, &eviction_list); continue; } diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c index 71c2b0f..d4537b5 100644 --- a/drivers/gpu/drm/i915/i915_gem_gtt.c +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c @@ -40,11 +40,11 @@ void i915_gem_restore_gtt_mappings(struct drm_device *dev)
intel_gtt_insert_sg_entries(obj->sg_list, obj->num_sg, - obj->gtt_space->start + obj->gtt_space.start >> PAGE_SHIFT, obj->agp_type); } else - intel_gtt_insert_pages(obj->gtt_space->start + intel_gtt_insert_pages(obj->gtt_space.start >> PAGE_SHIFT, obj->base.size >> PAGE_SHIFT, obj->pages, @@ -71,10 +71,10 @@ int i915_gem_gtt_bind_object(struct drm_i915_gem_object *obj)
intel_gtt_insert_sg_entries(obj->sg_list, obj->num_sg, - obj->gtt_space->start >> PAGE_SHIFT, + obj->gtt_space.start >> PAGE_SHIFT, obj->agp_type); } else - intel_gtt_insert_pages(obj->gtt_space->start >> PAGE_SHIFT, + intel_gtt_insert_pages(obj->gtt_space.start >> PAGE_SHIFT, obj->base.size >> PAGE_SHIFT, obj->pages, obj->agp_type); @@ -93,6 +93,6 @@ void i915_gem_gtt_unbind_object(struct drm_i915_gem_object *obj) obj->num_sg = 0; }
- intel_gtt_clear_range(obj->gtt_space->start >> PAGE_SHIFT, + intel_gtt_clear_range(obj->gtt_space.start >> PAGE_SHIFT, obj->base.size >> PAGE_SHIFT); }
[PATCH 6/9] drm/i915: kill obj->gtt_offset

It's just a copy of obj->gtt_space.start. With all the recent massive sed-processing over the tree, removing it won't hurt. And with gtt_space embedded, it can't be called a performance optimization anymore.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/i915/i915_debugfs.c     | 18 ++++---
 drivers/gpu/drm/i915/i915_drv.h         |  7 ---
 drivers/gpu/drm/i915/i915_gem.c         | 78 +++++++++++++++---------------
 drivers/gpu/drm/i915/i915_gem_debug.c   | 10 ++--
 drivers/gpu/drm/i915/i915_gem_tiling.c  |  6 +-
 drivers/gpu/drm/i915/i915_irq.c         | 34 +++++++-------
 drivers/gpu/drm/i915/intel_display.c    | 26 +++++-----
 drivers/gpu/drm/i915/intel_fb.c         |  6 +-
 drivers/gpu/drm/i915/intel_overlay.c    | 14 +++---
 drivers/gpu/drm/i915/intel_ringbuffer.c | 10 ++--
 10 files changed, 103 insertions(+), 106 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c index af133ac..2ef746c 100644 --- a/drivers/gpu/drm/i915/i915_debugfs.c +++ b/drivers/gpu/drm/i915/i915_debugfs.c @@ -125,8 +125,8 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) if (obj->fence_reg != I915_FENCE_REG_NONE) seq_printf(m, " (fence: %d)", obj->fence_reg); if (drm_mm_node_allocated(&obj->gtt_space)) - seq_printf(m, " (gtt offset: %08x, size: %08x)", - obj->gtt_offset, (unsigned int)obj->gtt_space.size); + seq_printf(m, " (gtt offset: %08lx, size: %08x)", + obj->gtt_space.start, (unsigned int)obj->gtt_space.size); if (obj->pin_mappable || obj->fault_mappable) seq_printf(m, " (mappable)"); if (obj->ring != NULL) @@ -253,12 +253,14 @@ static int i915_gem_pageflip_info(struct seq_file *m, void *data) if (work->old_fb_obj) { struct drm_i915_gem_object *obj = work->old_fb_obj; if (obj) - seq_printf(m, "Old framebuffer gtt_offset 0x%08x\n", obj->gtt_offset); + seq_printf(m, "Old framebuffer gtt_offset 0x%08lx\n", + obj->gtt_space.start); } if (work->pending_flip_obj) { struct drm_i915_gem_object *obj = work->pending_flip_obj; if (obj) - seq_printf(m, "New framebuffer gtt_offset 0x%08x\n", obj->gtt_offset); + seq_printf(m, "New framebuffer gtt_offset 0x%08lx\n", + obj->gtt_space.start); } } spin_unlock_irqrestore(&dev->event_lock, flags); @@ -472,7 +474,7 @@ static void i915_dump_object(struct seq_file *m, page_count = obj->base.size / PAGE_SIZE; for (page = 0; page < page_count; page++) { u32 *mem = io_mapping_map_wc(mapping, - obj->gtt_offset + page * PAGE_SIZE); + obj->gtt_space.start + page * PAGE_SIZE); for (i = 0; i < PAGE_SIZE; i += 4) seq_printf(m, "%08x : %08x\n", i, mem[i / 4]); io_mapping_unmap(mem); @@ -493,7 +495,8 @@ static int i915_batchbuffer_info(struct seq_file *m, void *data)
list_for_each_entry(obj, &dev_priv->mm.active_list, mm_list) { if (obj->base.read_domains & I915_GEM_DOMAIN_COMMAND) { - seq_printf(m, "--- gtt_offset = 0x%08x\n", obj->gtt_offset); + seq_printf(m, "--- gtt_offset = 0x%08lx\n", + obj->gtt_space.start); i915_dump_object(m, dev_priv->mm.gtt_mapping, obj); } } @@ -683,7 +686,8 @@ static int i915_error_state(struct seq_file *m, void *unused) if (error->batchbuffer[i]) { struct drm_i915_error_object *obj = error->batchbuffer[i];
- seq_printf(m, "--- gtt_offset = 0x%08x\n", obj->gtt_offset); + seq_printf(m, "--- gtt_offset = 0x%08x\n", + obj->gtt_offset); offset = 0; for (page = 0; page < obj->page_count; page++) { for (elt = 0; elt < PAGE_SIZE/4; elt++) { diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index bdb05c2..d3739c7 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -795,13 +795,6 @@ struct drm_i915_gem_object { struct scatterlist *sg_list; int num_sg;
- /** - * Current offset of the object in GTT space. - * - * This is the same as gtt_space->start - */ - uint32_t gtt_offset; - /* Which ring is refering to is this object */ struct intel_ring_buffer *ring;
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index f8612be..80ad876 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -88,10 +88,11 @@ static void i915_gem_info_add_gtt(struct drm_i915_private *dev_priv, { dev_priv->mm.gtt_count++; dev_priv->mm.gtt_memory += obj->gtt_space.size; - if (obj->gtt_offset < dev_priv->mm.gtt_mappable_end) { + if (obj->gtt_space.start < dev_priv->mm.gtt_mappable_end) { dev_priv->mm.mappable_gtt_used += min_t(size_t, obj->gtt_space.size, - dev_priv->mm.gtt_mappable_end - obj->gtt_offset); + dev_priv->mm.gtt_mappable_end + - obj->gtt_space.start); } list_add_tail(&obj->gtt_list, &dev_priv->mm.gtt_list); } @@ -101,10 +102,11 @@ static void i915_gem_info_remove_gtt(struct drm_i915_private *dev_priv, { dev_priv->mm.gtt_count--; dev_priv->mm.gtt_memory -= obj->gtt_space.size; - if (obj->gtt_offset < dev_priv->mm.gtt_mappable_end) { + if (obj->gtt_space.start < dev_priv->mm.gtt_mappable_end) { dev_priv->mm.mappable_gtt_used -= min_t(size_t, obj->gtt_space.size, - dev_priv->mm.gtt_mappable_end - obj->gtt_offset); + dev_priv->mm.gtt_mappable_end + - obj->gtt_space.start); } list_del_init(&obj->gtt_list); } @@ -691,7 +693,7 @@ i915_gem_gtt_pwrite_fast(struct drm_device *dev, user_data = (char __user *) (uintptr_t) args->data_ptr; remain = args->size;
- offset = obj->gtt_offset + args->offset; + offset = obj->gtt_space.start + args->offset;
while (remain > 0) { /* Operation in this page @@ -776,7 +778,7 @@ i915_gem_gtt_pwrite_slow(struct drm_device *dev, if (ret) goto out_unpin_pages;
- offset = obj->gtt_offset + args->offset; + offset = obj->gtt_space.start + args->offset;
while (remain > 0) { /* Operation in this page @@ -1317,7 +1319,7 @@ int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf) if (i915_gem_object_is_inactive(obj)) list_move_tail(&obj->mm_list, &dev_priv->mm.inactive_list);
- pfn = ((dev->agp->base + obj->gtt_offset) >> PAGE_SHIFT) + + pfn = ((dev->agp->base + obj->gtt_space.start) >> PAGE_SHIFT) + page_offset;
/* Finally, remap it using the new GTT offset */ @@ -2237,7 +2239,6 @@ i915_gem_object_unbind(struct drm_i915_gem_object *obj) obj->map_and_fenceable = true;
drm_mm_remove_node(&obj->gtt_space); - obj->gtt_offset = 0;
if (i915_gem_object_is_purgeable(obj)) i915_gem_object_truncate(obj); @@ -2296,9 +2297,9 @@ static void sandybridge_write_fence_reg(struct drm_i915_gem_object *obj) int regnum = obj->fence_reg; uint64_t val;
- val = (uint64_t)((obj->gtt_offset + size - 4096) & + val = (uint64_t)((obj->gtt_space.start + size - 4096) & 0xfffff000) << 32; - val |= obj->gtt_offset & 0xfffff000; + val |= obj->gtt_space.start & 0xfffff000; val |= (uint64_t)((obj->stride / 128) - 1) << SANDYBRIDGE_FENCE_PITCH_SHIFT;
@@ -2317,9 +2318,9 @@ static void i965_write_fence_reg(struct drm_i915_gem_object *obj) int regnum = obj->fence_reg; uint64_t val;
- val = (uint64_t)((obj->gtt_offset + size - 4096) & + val = (uint64_t)((obj->gtt_space.start + size - 4096) & 0xfffff000) << 32; - val |= obj->gtt_offset & 0xfffff000; + val |= obj->gtt_space.start & 0xfffff000; val |= ((obj->stride / 128) - 1) << I965_FENCE_PITCH_SHIFT; if (obj->tiling_mode == I915_TILING_Y) val |= 1 << I965_FENCE_TILING_Y_SHIFT; @@ -2336,10 +2337,10 @@ static void i915_write_fence_reg(struct drm_i915_gem_object *obj) uint32_t fence_reg, val, pitch_val; int tile_width;
- if ((obj->gtt_offset & ~I915_FENCE_START_MASK) || - (obj->gtt_offset & (size - 1))) { - WARN(1, "%s: object 0x%08x [fenceable? %d] not 1M or size (0x%08x) aligned [gtt_space offset=%lx, size=%lx]\n", - __func__, obj->gtt_offset, obj->map_and_fenceable, size, + if ((obj->gtt_space.start & ~I915_FENCE_START_MASK) || + (obj->gtt_space.start & (size - 1))) { + WARN(1, "%s: object 0x%08lx [fenceable? %d] not 1M or size (0x%08x) aligned [gtt_space offset=%lx, size=%lx]\n", + __func__, obj->gtt_space.start, obj->map_and_fenceable, size, obj->gtt_space.start, obj->gtt_space.size); return; } @@ -2360,7 +2361,7 @@ static void i915_write_fence_reg(struct drm_i915_gem_object *obj) else WARN_ON(pitch_val > I915_FENCE_MAX_PITCH_VAL);
- val = obj->gtt_offset; + val = obj->gtt_space.start; if (obj->tiling_mode == I915_TILING_Y) val |= 1 << I830_FENCE_TILING_Y_SHIFT; val |= I915_FENCE_SIZE_BITS(size); @@ -2385,10 +2386,10 @@ static void i830_write_fence_reg(struct drm_i915_gem_object *obj) uint32_t pitch_val; uint32_t fence_size_bits;
- if ((obj->gtt_offset & ~I830_FENCE_START_MASK) || - (obj->gtt_offset & (obj->base.size - 1))) { - WARN(1, "%s: object 0x%08x not 512K or size aligned\n", - __func__, obj->gtt_offset); + if ((obj->gtt_space.start & ~I830_FENCE_START_MASK) || + (obj->gtt_space.start & (obj->base.size - 1))) { + WARN(1, "%s: object 0x%08lx not 512K or size aligned\n", + __func__, obj->gtt_space.start); return; }
@@ -2396,7 +2397,7 @@ static void i830_write_fence_reg(struct drm_i915_gem_object *obj) pitch_val = ffs(pitch_val) - 1; WARN_ON(pitch_val > I830_FENCE_MAX_PITCH_VAL);
- val = obj->gtt_offset; + val = obj->gtt_space.start; if (obj->tiling_mode == I915_TILING_Y) val |= 1 << I830_FENCE_TILING_Y_SHIFT; fence_size_bits = I830_FENCE_SIZE_BITS(size); @@ -2496,15 +2497,15 @@ i915_gem_object_get_fence_reg(struct drm_i915_gem_object *obj, if (!obj->stride) return -EINVAL; WARN((obj->stride & (512 - 1)), - "object 0x%08x is X tiled but has non-512B pitch\n", - obj->gtt_offset); + "object 0x%08lx is X tiled but has non-512B pitch\n", + obj->gtt_space.start); break; case I915_TILING_Y: if (!obj->stride) return -EINVAL; WARN((obj->stride & (128 - 1)), - "object 0x%08x is Y tiled but has non-128B pitch\n", - obj->gtt_offset); + "object 0x%08lx is Y tiled but has non-128B pitch\n", + obj->gtt_space.start); break; }
@@ -2735,8 +2736,6 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj, goto search_free; }
- obj->gtt_offset = obj->gtt_space.start; - /* keep track of bounds object by adding it to the inactive list */ list_add_tail(&obj->mm_list, &dev_priv->mm.inactive_list); i915_gem_info_add_gtt(dev_priv, obj); @@ -2748,14 +2747,15 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj, BUG_ON(obj->base.read_domains & I915_GEM_GPU_DOMAINS); BUG_ON(obj->base.write_domain & I915_GEM_GPU_DOMAINS);
- trace_i915_gem_object_bind(obj, obj->gtt_offset, map_and_fenceable); + trace_i915_gem_object_bind(obj, obj->gtt_space.start, map_and_fenceable);
fenceable = obj->gtt_space.size == fence_size && (obj->gtt_space.start & (fence_alignment -1)) == 0;
mappable = - obj->gtt_offset + obj->base.size <= dev_priv->mm.gtt_mappable_end; + obj->gtt_space.start + obj->base.size + <= dev_priv->mm.gtt_mappable_end;
obj->map_and_fenceable = mappable && fenceable;
@@ -3294,7 +3294,7 @@ i915_gem_execbuffer_relocate(struct drm_i915_gem_object *obj,
target_handle = reloc.target_handle; } - target_offset = to_intel_bo(target_obj)->gtt_offset; + target_offset = to_intel_bo(target_obj)->gtt_space.start;
#if WATCH_RELOC DRM_INFO("%s: obj %p offset %08x target %d " @@ -3412,7 +3412,7 @@ i915_gem_execbuffer_relocate(struct drm_i915_gem_object *obj, break;
/* Map the page containing the relocation we're going to perform. */ - reloc.offset += obj->gtt_offset; + reloc.offset += obj->gtt_space.start; reloc_page = io_mapping_map_atomic_wc(dev_priv->mm.gtt_mapping, reloc.offset & PAGE_MASK); reloc_entry = (uint32_t __iomem *) @@ -3487,7 +3487,7 @@ i915_gem_execbuffer_pin(struct drm_device *dev, dev_priv->fence_regs[obj->fence_reg].gpu = true; }
- entry->offset = obj->gtt_offset; + entry->offset = obj->gtt_space.start; }
while (i--) @@ -3803,7 +3803,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data, batch_obj->base.pending_read_domains |= I915_GEM_DOMAIN_COMMAND;
/* Sanity check the batch buffer */ - exec_offset = batch_obj->gtt_offset; + exec_offset = batch_obj->gtt_space.start; ret = i915_gem_check_execbuffer(args, exec_offset); if (ret != 0) { DRM_ERROR("execbuf with invalid offset/length\n"); @@ -4072,13 +4072,13 @@ i915_gem_object_pin(struct drm_i915_gem_object *obj, WARN_ON(i915_verify_lists(dev));
if (drm_mm_node_allocated(&obj->gtt_space)) { - if ((alignment && obj->gtt_offset & (alignment - 1)) || + if ((alignment && obj->gtt_space.start & (alignment - 1)) || (map_and_fenceable && !obj->map_and_fenceable)) { WARN(obj->pin_count, "bo is already pinned with incorrect alignment:" - " offset=%x, req.alignment=%x, req.map_and_fenceable=%d," + " offset=%lx, req.alignment=%x, req.map_and_fenceable=%d," " obj->map_and_fenceable=%d\n", - obj->gtt_offset, alignment, + obj->gtt_space.start, alignment, map_and_fenceable, obj->map_and_fenceable); ret = i915_gem_object_unbind(obj); @@ -4168,7 +4168,7 @@ i915_gem_pin_ioctl(struct drm_device *dev, void *data, * as the X server doesn't manage domains yet */ i915_gem_object_flush_cpu_write_domain(obj); - args->offset = obj->gtt_offset; + args->offset = obj->gtt_space.start; out: drm_gem_object_unreference(&obj->base); unlock: @@ -4468,7 +4468,7 @@ i915_gem_init_pipe_control(struct drm_device *dev) if (ret) goto err_unref;
- dev_priv->seqno_gfx_addr = obj->gtt_offset; + dev_priv->seqno_gfx_addr = obj->gtt_space.start; dev_priv->seqno_page = kmap(obj->pages[0]); if (dev_priv->seqno_page == NULL) goto err_unpin; diff --git a/drivers/gpu/drm/i915/i915_gem_debug.c b/drivers/gpu/drm/i915/i915_gem_debug.c index 29d014c..068fe57 100644 --- a/drivers/gpu/drm/i915/i915_gem_debug.c +++ b/drivers/gpu/drm/i915/i915_gem_debug.c @@ -157,7 +157,7 @@ i915_gem_dump_object(struct drm_i915_gem_object *obj, int len, { int page;
- DRM_INFO("%s: object at offset %08x\n", where, obj->gtt_offset); + DRM_INFO("%s: object at offset %08x\n", where, obj->gtt_space.start); for (page = 0; page < (len + PAGE_SIZE-1) / PAGE_SIZE; page++) { int page_len, chunk, chunk_len;
@@ -171,7 +171,7 @@ i915_gem_dump_object(struct drm_i915_gem_object *obj, int len, chunk_len = 128; i915_gem_dump_page(obj->pages[page], chunk, chunk + chunk_len, - obj->gtt_offset + + obj->gtt_space.start + page * PAGE_SIZE, mark); } @@ -190,10 +190,10 @@ i915_gem_object_check_coherency(struct drm_i915_gem_object *obj, int handle) int bad_count = 0;
DRM_INFO("%s: checking coherency of object %p@0x%08x (%d, %zdkb):\n", - __func__, obj, obj->gtt_offset, handle, + __func__, obj, obj->gtt_space.start, handle, obj->size / 1024);
- gtt_mapping = ioremap(dev->agp->base + obj->gtt_offset, obj->base.size); + gtt_mapping = ioremap(dev->agp->base + obj->gtt_space.start, obj->base.size); if (gtt_mapping == NULL) { DRM_ERROR("failed to map GTT space\n"); return; @@ -217,7 +217,7 @@ i915_gem_object_check_coherency(struct drm_i915_gem_object *obj, int handle) if (cpuval != gttval) { DRM_INFO("incoherent CPU vs GPU at 0x%08x: " "0x%08x vs 0x%08x\n", - (int)(obj->gtt_offset + + (int)(obj->gtt_space.start + page * PAGE_SIZE + i * 4), cpuval, gttval); if (bad_count++ >= 8) { diff --git a/drivers/gpu/drm/i915/i915_gem_tiling.c b/drivers/gpu/drm/i915/i915_gem_tiling.c index 257302a..05fa803 100644 --- a/drivers/gpu/drm/i915/i915_gem_tiling.c +++ b/drivers/gpu/drm/i915/i915_gem_tiling.c @@ -256,14 +256,14 @@ i915_gem_object_fence_ok(struct drm_i915_gem_object *obj, int tiling_mode) while (size < obj->base.size) size <<= 1;
- if (obj->gtt_offset & (size - 1)) + if (obj->gtt_space.start & (size - 1)) return false;
if (INTEL_INFO(obj->base.dev)->gen == 3) { - if (obj->gtt_offset & ~I915_FENCE_START_MASK) + if (obj->gtt_space.start & ~I915_FENCE_START_MASK) return false; } else { - if (obj->gtt_offset & ~I830_FENCE_START_MASK) + if (obj->gtt_space.start & ~I830_FENCE_START_MASK) return false; }
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c index 909ab59..699a920 100644 --- a/drivers/gpu/drm/i915/i915_irq.c +++ b/drivers/gpu/drm/i915/i915_irq.c @@ -439,7 +439,7 @@ i915_error_object_create(struct drm_device *dev, if (dst == NULL) return NULL;
- reloc_offset = src->gtt_offset; + reloc_offset = src->gtt_space.start; for (page = 0; page < page_count; page++) { unsigned long flags; void __iomem *s; @@ -461,7 +461,7 @@ i915_error_object_create(struct drm_device *dev, reloc_offset += PAGE_SIZE; } dst->page_count = page_count; - dst->gtt_offset = src->gtt_offset; + dst->gtt_offset = src->gtt_space.start;
return dst;
@@ -631,13 +631,13 @@ static void i915_capture_error_state(struct drm_device *dev) count = 0; list_for_each_entry(obj, &dev_priv->mm.active_list, mm_list) { if (batchbuffer[0] == NULL && - bbaddr >= obj->gtt_offset && - bbaddr < obj->gtt_offset + obj->base.size) + bbaddr >= obj->gtt_space.start && + bbaddr < obj->gtt_space.start + obj->base.size) batchbuffer[0] = obj;
if (batchbuffer[1] == NULL && - error->acthd >= obj->gtt_offset && - error->acthd < obj->gtt_offset + obj->base.size) + error->acthd >= obj->gtt_space.start && + error->acthd < obj->gtt_space.start + obj->base.size) batchbuffer[1] = obj;
count++; @@ -646,13 +646,13 @@ static void i915_capture_error_state(struct drm_device *dev) if (batchbuffer[0] == NULL || batchbuffer[1] == NULL) { list_for_each_entry(obj, &dev_priv->mm.flushing_list, mm_list) { if (batchbuffer[0] == NULL && - bbaddr >= obj->gtt_offset && - bbaddr < obj->gtt_offset + obj->base.size) + bbaddr >= obj->gtt_space.start && + bbaddr < obj->gtt_space.start + obj->base.size) batchbuffer[0] = obj;
if (batchbuffer[1] == NULL && - error->acthd >= obj->gtt_offset && - error->acthd < obj->gtt_offset + obj->base.size) + error->acthd >= obj->gtt_space.start && + error->acthd < obj->gtt_space.start + obj->base.size) batchbuffer[1] = obj;
if (batchbuffer[0] && batchbuffer[1]) @@ -662,13 +662,13 @@ static void i915_capture_error_state(struct drm_device *dev) if (batchbuffer[0] == NULL || batchbuffer[1] == NULL) { list_for_each_entry(obj, &dev_priv->mm.inactive_list, mm_list) { if (batchbuffer[0] == NULL && - bbaddr >= obj->gtt_offset && - bbaddr < obj->gtt_offset + obj->base.size) + bbaddr >= obj->gtt_space.start && + bbaddr < obj->gtt_space.start + obj->base.size) batchbuffer[0] = obj;
if (batchbuffer[1] == NULL && - error->acthd >= obj->gtt_offset && - error->acthd < obj->gtt_offset + obj->base.size) + error->acthd >= obj->gtt_space.start && + error->acthd < obj->gtt_space.start + obj->base.size) batchbuffer[1] = obj;
if (batchbuffer[0] && batchbuffer[1]) @@ -703,7 +703,7 @@ static void i915_capture_error_state(struct drm_device *dev) error->active_bo[i].size = obj->base.size; error->active_bo[i].name = obj->base.name; error->active_bo[i].seqno = obj->last_rendering_seqno; - error->active_bo[i].gtt_offset = obj->gtt_offset; + error->active_bo[i].gtt_offset = obj->gtt_space.start; error->active_bo[i].read_domains = obj->base.read_domains; error->active_bo[i].write_domain = obj->base.write_domain; error->active_bo[i].fence_reg = obj->fence_reg; @@ -929,10 +929,10 @@ static void i915_pageflip_stall_check(struct drm_device *dev, int pipe) obj = work->pending_flip_obj; if (INTEL_INFO(dev)->gen >= 4) { int dspsurf = intel_crtc->plane == 0 ? DSPASURF : DSPBSURF; - stall_detected = I915_READ(dspsurf) == obj->gtt_offset; + stall_detected = I915_READ(dspsurf) == obj->gtt_space.start; } else { int dspaddr = intel_crtc->plane == 0 ? DSPAADDR : DSPBADDR; - stall_detected = I915_READ(dspaddr) == (obj->gtt_offset + + stall_detected = I915_READ(dspaddr) == (obj->gtt_space.start + crtc->y * crtc->fb->pitch + crtc->x * crtc->fb->bits_per_pixel/8); } diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c index 74fd980..7e189d9 100644 --- a/drivers/gpu/drm/i915/intel_display.c +++ b/drivers/gpu/drm/i915/intel_display.c @@ -1232,7 +1232,7 @@ static void ironlake_enable_fbc(struct drm_crtc *crtc, unsigned long interval) if (dev_priv->cfb_pitch == dev_priv->cfb_pitch / 64 - 1 && dev_priv->cfb_fence == obj->fence_reg && dev_priv->cfb_plane == intel_crtc->plane && - dev_priv->cfb_offset == obj->gtt_offset && + dev_priv->cfb_offset == obj->gtt_space.start && dev_priv->cfb_y == crtc->y) return;
@@ -1244,7 +1244,7 @@ static void ironlake_enable_fbc(struct drm_crtc *crtc, unsigned long interval) dev_priv->cfb_pitch = (dev_priv->cfb_pitch / 64) - 1; dev_priv->cfb_fence = obj->fence_reg; dev_priv->cfb_plane = intel_crtc->plane; - dev_priv->cfb_offset = obj->gtt_offset; + dev_priv->cfb_offset = obj->gtt_space.start; dev_priv->cfb_y = crtc->y;
dpfc_ctl &= DPFC_RESERVED; @@ -1260,7 +1260,7 @@ static void ironlake_enable_fbc(struct drm_crtc *crtc, unsigned long interval) (stall_watermark << DPFC_RECOMP_STALL_WM_SHIFT) | (interval << DPFC_RECOMP_TIMER_COUNT_SHIFT)); I915_WRITE(ILK_DPFC_FENCE_YOFF, crtc->y); - I915_WRITE(ILK_FBC_RT_BASE, obj->gtt_offset | ILK_FBC_RT_VALID); + I915_WRITE(ILK_FBC_RT_BASE, obj->gtt_space.start | ILK_FBC_RT_VALID); /* enable it... */ I915_WRITE(ILK_DPFC_CONTROL, dpfc_ctl | DPFC_CTL_EN);
@@ -1549,7 +1549,7 @@ intel_pipe_set_base_atomic(struct drm_crtc *crtc, struct drm_framebuffer *fb,
I915_WRITE(reg, dspcntr);
- Start = obj->gtt_offset; + Start = obj->gtt_space.start; Offset = y * fb->pitch + x * (fb->bits_per_pixel / 8);
DRM_DEBUG_KMS("Writing base %08lX %08lX %d %d %d\n", @@ -4371,7 +4371,7 @@ static int intel_crtc_cursor_set(struct drm_crtc *crtc, goto fail_unpin; }
- addr = obj->gtt_offset; + addr = obj->gtt_space.start; } else { int align = IS_I830(dev) ? 16 * 1024 : 256; ret = i915_gem_attach_phys_object(dev, obj, @@ -5135,7 +5135,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc, OUT_RING(MI_DISPLAY_FLIP | MI_DISPLAY_FLIP_PLANE(intel_crtc->plane)); OUT_RING(fb->pitch); - OUT_RING(obj->gtt_offset + offset); + OUT_RING(obj->gtt_space.start + offset); OUT_RING(MI_NOOP); break;
@@ -5143,7 +5143,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc, OUT_RING(MI_DISPLAY_FLIP_I915 | MI_DISPLAY_FLIP_PLANE(intel_crtc->plane)); OUT_RING(fb->pitch); - OUT_RING(obj->gtt_offset + offset); + OUT_RING(obj->gtt_space.start + offset); OUT_RING(MI_NOOP); break;
@@ -5156,7 +5156,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc, OUT_RING(MI_DISPLAY_FLIP | MI_DISPLAY_FLIP_PLANE(intel_crtc->plane)); OUT_RING(fb->pitch); - OUT_RING(obj->gtt_offset | obj->tiling_mode); + OUT_RING(obj->gtt_space.start | obj->tiling_mode);
/* XXX Enabling the panel-fitter across page-flip is so far * untested on non-native modes, so ignore it for now. @@ -5171,7 +5171,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc, OUT_RING(MI_DISPLAY_FLIP | MI_DISPLAY_FLIP_PLANE(intel_crtc->plane)); OUT_RING(fb->pitch | obj->tiling_mode); - OUT_RING(obj->gtt_offset); + OUT_RING(obj->gtt_space.start);
pf = I915_READ(pipe == 0 ? PFA_CTL_1 : PFB_CTL_1) & PF_ENABLE; pipesrc = I915_READ(pipe == 0 ? PIPEASRC : PIPEBSRC) & 0x0fff0fff; @@ -5866,7 +5866,7 @@ void intel_init_clock_gating(struct drm_device *dev) struct drm_i915_gem_object *obj = dev_priv->renderctx; if (BEGIN_LP_RING(4) == 0) { OUT_RING(MI_SET_CONTEXT); - OUT_RING(obj->gtt_offset | + OUT_RING(obj->gtt_space.start | MI_MM_SPACE_GTT | MI_SAVE_EXT_STATE_EN | MI_RESTORE_EXT_STATE_EN | @@ -5885,7 +5885,7 @@ void intel_init_clock_gating(struct drm_device *dev) dev_priv->pwrctx = intel_alloc_context_page(dev); if (dev_priv->pwrctx) { struct drm_i915_gem_object *obj = dev_priv->pwrctx; - I915_WRITE(PWRCTXA, obj->gtt_offset | PWRCTX_EN); + I915_WRITE(PWRCTXA, obj->gtt_space.start | PWRCTX_EN); I915_WRITE(MCHBAR_RENDER_STANDBY, I915_READ(MCHBAR_RENDER_STANDBY) & ~RCX_SW_EXIT); } @@ -6162,7 +6162,7 @@ void intel_modeset_cleanup(struct drm_device *dev) if (dev_priv->renderctx) { struct drm_i915_gem_object *obj = dev_priv->renderctx;
- I915_WRITE(CCID, obj->gtt_offset &~ CCID_EN); + I915_WRITE(CCID, obj->gtt_space.start &~ CCID_EN); POSTING_READ(CCID);
i915_gem_object_unpin(obj); @@ -6173,7 +6173,7 @@ void intel_modeset_cleanup(struct drm_device *dev) if (dev_priv->pwrctx) { struct drm_i915_gem_object *obj = dev_priv->pwrctx;
- I915_WRITE(PWRCTXA, obj->gtt_offset &~ PWRCTX_EN); + I915_WRITE(PWRCTXA, obj->gtt_space.start &~ PWRCTX_EN); POSTING_READ(PWRCTXA);
i915_gem_object_unpin(obj); diff --git a/drivers/gpu/drm/i915/intel_fb.c b/drivers/gpu/drm/i915/intel_fb.c index c2cffeb..1bb2007 100644 --- a/drivers/gpu/drm/i915/intel_fb.c +++ b/drivers/gpu/drm/i915/intel_fb.c @@ -132,10 +132,10 @@ static int intelfb_create(struct intel_fbdev *ifbdev, else info->apertures->ranges[0].size = pci_resource_len(dev->pdev, 0);
- info->fix.smem_start = dev->mode_config.fb_base + obj->gtt_offset; + info->fix.smem_start = dev->mode_config.fb_base + obj->gtt_space.start; info->fix.smem_len = size;
- info->screen_base = ioremap_wc(dev->agp->base + obj->gtt_offset, size); + info->screen_base = ioremap_wc(dev->agp->base + obj->gtt_space.start, size); if (!info->screen_base) { ret = -ENOSPC; goto out_unpin; @@ -165,7 +165,7 @@ static int intelfb_create(struct intel_fbdev *ifbdev,
DRM_DEBUG_KMS("allocated %dx%d fb: 0x%08x, bo %p\n", fb->width, fb->height, - obj->gtt_offset, obj); + obj->gtt_space.start, obj);
mutex_unlock(&dev->struct_mutex); diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c index af715cc..36cd0d6 100644 --- a/drivers/gpu/drm/i915/intel_overlay.c +++ b/drivers/gpu/drm/i915/intel_overlay.c @@ -199,7 +199,7 @@ intel_overlay_map_regs(struct intel_overlay *overlay) regs = overlay->reg_bo->phys_obj->handle->vaddr; else regs = io_mapping_map_wc(dev_priv->mm.gtt_mapping, - overlay->reg_bo->gtt_offset); + overlay->reg_bo->gtt_space.start);
return regs; } @@ -823,7 +823,7 @@ static int intel_overlay_do_put_image(struct intel_overlay *overlay, regs->SWIDTHSW = calc_swidthsw(overlay->dev, params->offset_Y, tmp_width); regs->SHEIGHT = params->src_h; - regs->OBUF_0Y = new_bo->gtt_offset + params-> offset_Y; + regs->OBUF_0Y = new_bo->gtt_space.start + params-> offset_Y; regs->OSTRIDE = params->stride_Y;
if (params->format & I915_OVERLAY_YUV_PLANAR) { @@ -837,8 +837,8 @@ static int intel_overlay_do_put_image(struct intel_overlay *overlay, params->src_w/uv_hscale); regs->SWIDTHSW |= max_t(u32, tmp_U, tmp_V) << 16; regs->SHEIGHT |= (params->src_h/uv_vscale) << 16; - regs->OBUF_0U = new_bo->gtt_offset + params->offset_U; - regs->OBUF_0V = new_bo->gtt_offset + params->offset_V; + regs->OBUF_0U = new_bo->gtt_space.start + params->offset_U; + regs->OBUF_0V = new_bo->gtt_space.start + params->offset_V; regs->OSTRIDE |= params->stride_UV << 16; }
@@ -1428,7 +1428,7 @@ void intel_setup_overlay(struct drm_device *dev) DRM_ERROR("failed to pin overlay register bo\n"); goto out_free_bo; } - overlay->flip_addr = reg_bo->gtt_offset; + overlay->flip_addr = reg_bo->gtt_space.start;
ret = i915_gem_object_set_to_gtt_domain(reg_bo, true); if (ret) { @@ -1502,7 +1502,7 @@ intel_overlay_map_regs_atomic(struct intel_overlay *overlay) regs = overlay->reg_bo->phys_obj->handle->vaddr; else regs = io_mapping_map_atomic_wc(dev_priv->mm.gtt_mapping, - overlay->reg_bo->gtt_offset); + overlay->reg_bo->gtt_space.start);
return regs; } @@ -1535,7 +1535,7 @@ intel_overlay_capture_error_state(struct drm_device *dev) if (OVERLAY_NEEDS_PHYSICAL(overlay->dev)) error->base = (long) overlay->reg_bo->phys_obj->handle->vaddr; else - error->base = (long) overlay->reg_bo->gtt_offset; + error->base = (long) overlay->reg_bo->gtt_space.start;
regs = intel_overlay_map_regs_atomic(overlay); if (!regs) diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c index d5cf97f..953c677 100644 --- a/drivers/gpu/drm/i915/intel_ringbuffer.c +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c @@ -148,7 +148,7 @@ static int init_ring_common(struct intel_ring_buffer *ring) ring->write_tail(ring, 0);
/* Initialize the ring. */ - I915_WRITE_START(ring, obj->gtt_offset); + I915_WRITE_START(ring, obj->gtt_space.start); head = I915_READ_HEAD(ring) & HEAD_ADDR;
/* G45 ring initialization fails to reset head to zero */ @@ -178,7 +178,7 @@ static int init_ring_common(struct intel_ring_buffer *ring)
/* If the head is still not zero, the ring is dead */ if ((I915_READ_CTL(ring) & RING_VALID) == 0 || - I915_READ_START(ring) != obj->gtt_offset || + I915_READ_START(ring) != obj->gtt_space.start || (I915_READ_HEAD(ring) & HEAD_ADDR) != 0) { if (IS_GEN6(ring->dev) && ring->dev->pdev->revision <= 8) { /* Early revisions of Sandybridge do not like @@ -564,7 +564,7 @@ static int init_status_page(struct intel_ring_buffer *ring) goto err_unref; }
- ring->status_page.gfx_addr = obj->gtt_offset; + ring->status_page.gfx_addr = obj->gtt_space.start; ring->status_page.page_addr = kmap(obj->pages[0]); if (ring->status_page.page_addr == NULL) { memset(&dev_priv->hws_map, 0, sizeof(dev_priv->hws_map)); @@ -618,7 +618,7 @@ int intel_init_ring_buffer(struct drm_device *dev, goto err_unref;
ring->map.size = ring->size; - ring->map.offset = dev->agp->base + obj->gtt_offset; + ring->map.offset = dev->agp->base + obj->gtt_space.start; ring->map.type = 0; ring->map.flags = 0; ring->map.mtrr = 0; @@ -949,7 +949,7 @@ static int blt_ring_begin(struct intel_ring_buffer *ring, return ret;
intel_ring_emit(ring, MI_BATCH_BUFFER_START); - intel_ring_emit(ring, to_blt_workaround(ring)->gtt_offset); + intel_ring_emit(ring, to_blt_workaround(ring)->gtt_space.start);
return 0; } else
Use the list iterator provided by drm_mm instead.
Signed-off-by: Daniel Vetter daniel.vetter@ffwll.ch --- drivers/gpu/drm/i915/i915_drv.h | 4 ---- drivers/gpu/drm/i915/i915_gem.c | 4 ---- drivers/gpu/drm/i915/i915_gem_gtt.c | 4 +++- 3 files changed, 3 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index d3739c7..f227985 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -544,9 +544,6 @@ typedef struct drm_i915_private { struct drm_mm vram; /** Memory allocator for GTT */ struct drm_mm gtt_space; - /** List of all objects in gtt_space. Used to restore gtt - * mappings on resume */ - struct list_head gtt_list; /** End of mappable part of GTT */ unsigned long gtt_mappable_end;
@@ -713,7 +710,6 @@ struct drm_i915_gem_object {
/** Current space allocated to this object in the GTT, if any. */ struct drm_mm_node gtt_space; - struct list_head gtt_list;
/** This object's place on the active/flushing/inactive lists */ struct list_head ring_list; diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index 80ad876..0b3c781 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -94,7 +94,6 @@ static void i915_gem_info_add_gtt(struct drm_i915_private *dev_priv, dev_priv->mm.gtt_mappable_end - obj->gtt_space.start); } - list_add_tail(&obj->gtt_list, &dev_priv->mm.gtt_list); }
static void i915_gem_info_remove_gtt(struct drm_i915_private *dev_priv, @@ -108,7 +107,6 @@ static void i915_gem_info_remove_gtt(struct drm_i915_private *dev_priv, dev_priv->mm.gtt_mappable_end - obj->gtt_space.start); } - list_del_init(&obj->gtt_list); }
/** @@ -4342,7 +4340,6 @@ struct drm_i915_gem_object *i915_gem_alloc_object(struct drm_device *dev, obj->base.driver_private = NULL; obj->fence_reg = I915_FENCE_REG_NONE; INIT_LIST_HEAD(&obj->mm_list); - INIT_LIST_HEAD(&obj->gtt_list); INIT_LIST_HEAD(&obj->ring_list); INIT_LIST_HEAD(&obj->gpu_write_list); obj->madv = I915_MADV_WILLNEED; @@ -4650,7 +4647,6 @@ i915_gem_load(struct drm_device *dev) INIT_LIST_HEAD(&dev_priv->mm.pinned_list); INIT_LIST_HEAD(&dev_priv->mm.fence_list); INIT_LIST_HEAD(&dev_priv->mm.deferred_free_list); - INIT_LIST_HEAD(&dev_priv->mm.gtt_list); init_ring_lists(&dev_priv->render_ring); init_ring_lists(&dev_priv->bsd_ring); init_ring_lists(&dev_priv->blt_ring); diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c index d4537b5..68334cb 100644 --- a/drivers/gpu/drm/i915/i915_gem_gtt.c +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c @@ -33,8 +33,10 @@ void i915_gem_restore_gtt_mappings(struct drm_device *dev) { struct drm_i915_private *dev_priv = dev->dev_private; struct drm_i915_gem_object *obj; + struct drm_mm_node *node;
- list_for_each_entry(obj, &dev_priv->mm.gtt_list, gtt_list) { + drm_mm_for_each_node(node, &dev_priv->mm.gtt_space) { + obj = container_of(node, struct drm_i915_gem_object, gtt_space); if (dev_priv->mm.gtt->needs_dmar) { BUG_ON(!obj->sg_list);
With the switch to implicit free space accounting, one pointer became unused while scanning. Use it to create a singly linked list to ensure correct unwinding of the scan state.
Signed-off-by: Daniel Vetter daniel.vetter@ffwll.ch --- drivers/gpu/drm/drm_mm.c | 4 ++++ include/drm/drm_mm.h | 8 ++++++++ 2 files changed, 12 insertions(+), 0 deletions(-)
diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c index d6432f9..add1737 100644 --- a/drivers/gpu/drm/drm_mm.c +++ b/drivers/gpu/drm/drm_mm.c @@ -460,6 +460,7 @@ void drm_mm_init_scan(struct drm_mm *mm, unsigned long size, mm->scan_hit_start = 0; mm->scan_hit_size = 0; mm->scan_check_range = 0; + mm->prev_scanned_node = NULL; } EXPORT_SYMBOL(drm_mm_init_scan);
@@ -485,6 +486,7 @@ void drm_mm_init_scan_with_range(struct drm_mm *mm, unsigned long size, mm->scan_start = start; mm->scan_end = end; mm->scan_check_range = 1; + mm->prev_scanned_node = NULL; } EXPORT_SYMBOL(drm_mm_init_scan_with_range);
@@ -514,6 +516,8 @@ int drm_mm_scan_add_block(struct drm_mm_node *node) prev_node->hole_follows = 1; list_del(&node->node_list); node->node_list.prev = &prev_node->node_list; + node->node_list.next = &mm->prev_scanned_node->node_list; + mm->prev_scanned_node = node;
hole_start = drm_mm_hole_node_start(prev_node); hole_end = drm_mm_hole_node_end(prev_node); diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h index 17a070e..b1e7809 100644 --- a/include/drm/drm_mm.h +++ b/include/drm/drm_mm.h @@ -72,6 +72,7 @@ struct drm_mm { unsigned scanned_blocks; unsigned long scan_start; unsigned long scan_end; + struct drm_mm_node *prev_scanned_node; };
static inline bool drm_mm_node_allocated(struct drm_mm_node *node) @@ -86,6 +87,13 @@ static inline bool drm_mm_initialized(struct drm_mm *mm) #define drm_mm_for_each_node(entry, mm) list_for_each_entry(entry, \ &(mm)->head_node.node_list, \ node_list); +#define drm_mm_for_each_scanned_node_reverse(entry, n, mm) \ + for (entry = (mm)->prev_scanned_node, \ + next = entry ? list_entry(entry->node_list.next, \ + struct drm_mm_node, node_list) : NULL; \ + entry != NULL; entry = next, \ + next = entry ? list_entry(entry->node_list.next, \ + struct drm_mm_node, node_list) : NULL) \ /* * Basic range manager support (drm_mm.c) */
Doesn't really buy much, but looks nicer.
Signed-off-by: Daniel Vetter daniel.vetter@ffwll.ch --- drivers/gpu/drm/i915/i915_gem_evict.c | 31 ++++++++++++++++--------------- 1 files changed, 16 insertions(+), 15 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c index ea252a4..845b6ad 100644 --- a/drivers/gpu/drm/i915/i915_gem_evict.c +++ b/drivers/gpu/drm/i915/i915_gem_evict.c @@ -32,9 +32,8 @@ #include "i915_drm.h"
static bool -mark_free(struct drm_i915_gem_object *obj, struct list_head *unwind) +mark_free(struct drm_i915_gem_object *obj) { - list_add(&obj->evict_list, unwind); drm_gem_object_reference(&obj->base); return drm_mm_scan_add_block(&obj->gtt_space); } @@ -44,8 +43,9 @@ i915_gem_evict_something(struct drm_device *dev, int min_size, unsigned alignment, bool mappable) { drm_i915_private_t *dev_priv = dev->dev_private; - struct list_head eviction_list, unwind_list; + struct list_head eviction_list; struct drm_i915_gem_object *obj; + struct drm_mm_node *node, *next; int ret = 0;
i915_gem_retire_requests(dev); @@ -86,7 +86,6 @@ i915_gem_evict_something(struct drm_device *dev, int min_size, * object on the TAIL. */
- INIT_LIST_HEAD(&unwind_list); if (mappable) drm_mm_init_scan_with_range(&dev_priv->mm.gtt_space, min_size, alignment, 0, @@ -96,7 +95,7 @@ i915_gem_evict_something(struct drm_device *dev, int min_size,
/* First see if there is a large enough contiguous idle region... */ list_for_each_entry(obj, &dev_priv->mm.inactive_list, mm_list) { - if (mark_free(obj, &unwind_list)) + if (mark_free(obj)) goto found; }
@@ -106,7 +105,7 @@ i915_gem_evict_something(struct drm_device *dev, int min_size, if (obj->base.write_domain || obj->pin_count) continue;
- if (mark_free(obj, &unwind_list)) + if (mark_free(obj)) goto found; }
@@ -115,19 +114,22 @@ i915_gem_evict_something(struct drm_device *dev, int min_size, if (obj->pin_count) continue;
- if (mark_free(obj, &unwind_list)) + if (mark_free(obj)) goto found; } list_for_each_entry(obj, &dev_priv->mm.active_list, mm_list) { if (! obj->base.write_domain || obj->pin_count) continue;
- if (mark_free(obj, &unwind_list)) + if (mark_free(obj)) goto found; }
/* Nothing found, clean up and bail out! */ - list_for_each_entry(obj, &unwind_list, evict_list) { + drm_mm_for_each_scanned_node_reverse(node, next, + &dev_priv->mm.gtt_space) { + obj = container_of(node, struct drm_i915_gem_object, gtt_space); + ret = drm_mm_scan_remove_block(&obj->gtt_space); BUG_ON(ret); drm_gem_object_unreference(&obj->base); @@ -143,15 +145,14 @@ found: * scanning, therefore store to be evicted objects on a * temporary list. */ INIT_LIST_HEAD(&eviction_list); - while (!list_empty(&unwind_list)) { - obj = list_first_entry(&unwind_list, - struct drm_i915_gem_object, - evict_list); + drm_mm_for_each_scanned_node_reverse(node, next, + &dev_priv->mm.gtt_space) { + obj = container_of(node, struct drm_i915_gem_object, gtt_space); + if (drm_mm_scan_remove_block(&obj->gtt_space)) { - list_move(&obj->evict_list, &eviction_list); + list_add(&obj->evict_list, &eviction_list); continue; } - list_del(&obj->evict_list); drm_gem_object_unreference(&obj->base); }
On Fri, 12 Nov 2010 18:36:32 +0100, Daniel Vetter daniel.vetter@ffwll.ch wrote:
Hi all,
This patch-set changes the algorithm in drm_mm.c to not need additional allocations to track free space and adds an api to make embedding struct drm_mm_node possible.
I like the end result for i915 in that it couples the bos much more tightly with their allocations; to manage the bo is to manage those allocations. This aligns well with my review of the memory management for i915.
Reviewed-by: Chris Wilson chris@chris-wilson.co.uk -Chris
On 11/12/2010 06:36 PM, Daniel Vetter wrote:
Hi all,
This patch-set changes the algorithm in drm_mm.c to not need additional allocations to track free space and adds an api to make embedding struct drm_mm_node possible. Benefits:
- If struct drm_mm_node is provided, no allocations need to be done anymore in drm_mm. It looks like some decent surgery, but ttm should be able to drop its preallocation dance.
- void *priv is back, but done right ;)
- Avoids a pointer chase when lru-scanning in i915 and saves a few bytes.
As a proof of concept I've converted i915. Beware though, the drm/i915 patches depend on my direct-gtt patches (which are actually the reason for this series here).
Tested on my i855gm, i945gme, ironlake and agp rv570.
Comments, flames, reviews highly welcome.
Hi, Daniel!
Nice work, although I have some comments about general applicability that we perhaps need to think about.
1) The space representation and space allocation algorithm is something that is private to the aperture management system. For a specialized implementation like i915 that is all fine, but Ben has recently abstracted that part out of the core TTM bo implementation. As an example, vmwgfx is now using kernel idas to manage aperture space (see the small sketch after these two points), and drm_mm objects for traditional VRAM space. Hence, embedding drm_mm objects into ttm bos will not really be worthwhile. At least not for aperture space management, and TTM will need to continue to "dance", both in the ida case and in the drm_mm case. For device address space, the situation is different, though, and it should be possible to embed the drm_mm objects, but that brings up the next thing:
2) The algorithm used by drm_mm has been around for a while and has seen a fair amount of changes, but nobody has yet attacked the algorithm used to search for free space, which was just quickly put together as an improvement on what was the old mesa range manager. In moderate fragmentation situations, the performance will degrade, particularly with "best match" searches. In the near future we'd probably want to add something like a "hole rb tree" rather than a "hole list", and a choice of algorithm for the user. With embeddable objects, unless you want to waste space for unused members, you'd need a separate drm_mm node subclass for each algorithm, whereas if you don't embed, you only need to allocate what you need.
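To make the ida remark in point 1 concrete: aperture management there is just integer slot management, roughly along these lines. This is made-up illustration code, not vmwgfx's actual implementation, so take the names with a grain of salt.

#include <linux/idr.h>
#include <linux/gfp.h>

/* Hypothetical ida-based aperture manager: each bo only needs an integer
 * slot, so no drm_mm_node (embedded or not) is involved at all. */
static int example_aperture_slot_alloc(struct ida *ida, int *slot)
{
	int ret;

	do {
		if (!ida_pre_get(ida, GFP_KERNEL))
			return -ENOMEM;
		ret = ida_get_new(ida, slot);
	} while (ret == -EAGAIN);

	return ret;
}

static void example_aperture_slot_free(struct ida *ida, int slot)
{
	ida_remove(ida, slot);
}

(The ida itself would be set up once with ida_init() during device init.)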
/Thomas
Hi Thomas,
On Mon, Nov 15, 2010 at 08:58:13AM +0100, Thomas Hellstrom wrote:
Nice work, although I have some comments about general applicability that we perhaps need to think about.
- The space representation and space allocation algorithm is
something that is private to the aperture management system. For a specialized implementation like i915 that is all fine, but Ben has recently abstracted that part out of the core TTM bo implementation. As an example, vmwgfx is now using kernel idas to manage aperture space, and drm_mm objects for traditional VRAM space. Hence, embedding drm_mm objects into ttm bos will not really be worthwhile. At least not for aperture space management, and TTM will need to continue to "dance", both in the ida case and in the drm_mm case.
Yep, I've looked into this and noticed the recent addition of the ida support. This is why I've added the "decent surgery" comment. Embedding the drm_mm_node still looks possible, albeit perhaps not feasible (at least I won't tackle this in the immediate future).
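For reference, on the i915 side the embedding boils down to roughly the following. The insert helper's name is assumed from the "add api for embedding struct drm_mm_node" patch rather than copied from it, so read this as a sketch, not a verbatim excerpt of the series.

#include <drm/drm_mm.h>

/* Hypothetical driver object with an embedded node, in the style of
 * drm_i915_gem_object after this series: binding needs no kmalloc, just
 * an insert/remove on the embedded member. */
struct example_bo {
	struct drm_mm_node gtt_space;	/* embedded allocation */
	/* ... other driver state ... */
};

static int example_bind(struct drm_mm *gtt, struct example_bo *bo,
			unsigned long size, unsigned alignment)
{
	return drm_mm_insert_node(gtt, &bo->gtt_space, size, alignment);
}

static void example_unbind(struct example_bo *bo)
{
	if (drm_mm_node_allocated(&bo->gtt_space))
		drm_mm_remove_node(&bo->gtt_space);
}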
For device address space, the situation is different, though, and it should be possible to embed the drm_mm objects, but that brings up the next thing:
- The algorithm used by drm_mm has been around for a while and has
seen a fair amount of changes, but nobody has yet attacked the algorithm used to search for free space, which was just quickly put together as an improvement on what was the old mesa range manager. In moderate fragmentation situations, the performance will degrade, particularly with "best match" searches. In the near future we'd probably want to add something like a "hole rb tree" rather than a "hole list", and a choice of algorithm for the user. With embeddable objects, unless you want to waste space for unused members, you'd need a separate drm_mm node subclass for each algorithm, whereas if you don't embed, you only need to allocate what you need.
First a small rant about "best match" (to get it out of the way ;-)
- "best match" is simply a misleading name: with alignment > size (at least on older hw) and mixes of unrestricted and range-restricted allocations (ironlake has 2G of gtt, just 256M of it mappable), which is all possible with the latest experimental i915 patches, "best match" can do worse than the simpler approach.
- doing a full linear scan for every tiny state buffer/pixmap cache is slow. On top of that, it serves as an excuse not to implement proper eviction support.
</rant> [If you agree, I'll happily write the patch to rip it out. It just doesn't bother me 'cause it's only a few lines in drm_mm.c and I can ignore the actual users.]
Now to the more useful discussion: IMHO drm_mm.c should be an allocator for vram/(g)tt, i.e. it needs to support:
- a mix of large/small sizes.
- fancy alignment constraints (new patches for drm/i915 are pushing things there).
- range-restricted allocations. I think current users only ever have one (start, end) set for restricted allocations, so this might actually be simplified.
If other users don't fit into this anymore, mea culpa, they need their own allocator. You've already taken this path for vmwgfx by using the ida allocator. And if the linear scan for the gem mmap offset allocator ever shows up in profiles, I think it's better served with a buddy-style pot-sized, pot-aligned allocator. After all, fragmentation of virtual address space isn't that severe a problem.
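The pot-sizing part of that is trivial, by the way; a throwaway sketch (not code from any existing driver, size assumed non-zero):

#include <linux/log2.h>
#include <linux/mm.h>

/* Round an mmap offset request up to a power-of-two number of bytes so a
 * buddy-style allocator can hand out naturally aligned blocks. */
static unsigned long mmap_offset_pot_size(unsigned long size)
{
	return roundup_pow_of_two(PAGE_ALIGN(size));
}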
Hence I think that drivers with extremely specific needs should roll their own allocator. So I don't think we should anticipate different allocator algorithms. I see driver-specific stuff more in the area of clever eviction algorithms - i915 is currently at 5 lru's for gtt mapped bos, and we're still adding.
Of course I've spent a bunch of brain-cycles on creating a more efficient allocator - O(n) just doesn't look that good. Now - it should be fast in the common case - and not degenerate into O(n) for ugly corner cases. Which, for the above allocation requirements of (u64 size, u32 alignment, bool range_restricted), leaves us with two 2d range trees. Now factoring in that lru-scanning is also O(n_{gtt_mapped}) gives us a data structure I'm not really eager to create.
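To spell out what that lru-scanning refers to: the scan protocol from the patches boils down to roughly this, condensed from the i915_gem_evict changes above with refcounting, locking and the actual eviction left out (so a sketch, not the real function).

#include "i915_drv.h"	/* assumes the usual i915 headers */

static int example_scan(struct drm_i915_private *dev_priv,
			int min_size, unsigned alignment)
{
	struct drm_i915_gem_object *obj;
	struct drm_mm_node *node, *next;

	drm_mm_init_scan(&dev_priv->mm.gtt_space, min_size, alignment);

	/* Feed candidates in lru order until a large enough hole shows up. */
	list_for_each_entry(obj, &dev_priv->mm.inactive_list, mm_list)
		if (drm_mm_scan_add_block(&obj->gtt_space))
			goto found;

	/* Nothing found: every block fed into the scan must be removed
	 * again, which the new reverse iterator does without a side list. */
	drm_mm_for_each_scanned_node_reverse(node, next,
					     &dev_priv->mm.gtt_space)
		drm_mm_scan_remove_block(node);
	return -ENOSPC;

found:
	/* unwind the same way, actually evicting the blocks that hit */
	return 0;
}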
Current code seems to fare rather well because the hole_stack fifo is good at avoiding the linear scan worst-case. And as soon as we start to thrash the gtt, everything is totally snowed under by clflush overhead on i915 anyway.
To make a long story short, I've opted to make the current code faster by avoiding kmalloc and spoiling fewer cache-lines with useless data. And if the linear scan ever shows up in profiles, we could always add some stats to bail out early for large allocations. Or add a tree to heuristically find a suitable hole (assuming worst-case waste due to alignment).
Thanks a lot for your input on this.
Yours, Daniel
On 11/15/2010 08:45 PM, Daniel Vetter wrote:
Hi Thomas,
On Mon, Nov 15, 2010 at 08:58:13AM +0100, Thomas Hellstrom wrote:
Nice work, although I have some comments about general applicability that we perhaps need to think about.
- The space representation and space allocation algorithm is
something that is private to the aperture management system. For a specialized implementation like i915 that is all fine, but Ben has recently abstracted that part out of the core TTM bo implementation. As an example, vmwgfx is now using kernel idas to manage aperture space, and drm_mm objects for traditional VRAM space. Hence, embedding drm_mm objects into ttm bos will not really be worthwhile. At least not for aperture space management, and TTM will need to continue to "dance", both in the ida case and in the drm_mm case.
Yep, I've looked into this and noticed the recent addition of the ida support. This is why I've added the "decent surgery" comment. Embedding the drm_mm_node still looks possible, albeit perhaps not feasible (at least I won't tackle this in the immediate future).
Indeed, it's possible, but for drivers that don't use it, it will sit unused.
- The algorithm used by drm_mm has been around for a while and has
seen a fair amount of changes, but nobody has yet attacked the algorithm used to search for free space, which was just quickly put together as an improvement on what was the old mesa range manager. In moderate fragmentation situations, the performance will degrade, particularly with "best match" searches. In the near future we'd probably want to add something like a "hole rb tree" rather than a "hole list", and a choice of algorithm for the user. With embeddable objects, unless you want to waste space for unused members, you'd need a separate drm_mm node subclass for each algorithm, whereas if you don't embed, you only need to allocate what you need.
First a small rant about "best match" (to get it out of the way;-)
- "best match" is simply a misleading name: with alignment> size (at least on older hw) and mixes of unrestricted and range restricted allocations (ironlake has 2G of gtt, just 256 of it mappable), which is all possible with the latest experimental i915 patches, "best match" can do worse than the simpler approach.
- doing a full linear scan for every tiny state buffer/pixmap cache is slow.
On top of that, it serves as an excuse not to implement proper eviction support.
</rant> [If you agree, I'll happily write the patch to rip it out. It just doesn't bother me 'cause it's only a few lines in drm_mm.c and I can ignore the actual users.]
Now to the more useful discussion: IMHO drm_mm.c should be an allocator for vram/(g)tt, i.e. it needs to support:
- a mix of large/small sizes.
- fancy alignment constraints (new patches for drm/i915 are pushing things there).
- range-restricted allocations. I think current users only ever have one (start, end) set for restricted allocations, so this might actually be simplified.
If other users don't fit into this anymore, mea culpa, they need their own allocator. You've already taken this path for vmwgfx by using the ida allocator. And if the linear scan for the gem mmap offset allocator ever shows up in profiles, I think it's better served with a buddy-style pot-sized, pot-aligned allocator. After all, fragmentation of virtual address space isn't that severe a problem.
I must admit I haven't done detailed benchmarking, particularly not for cases with a _huge_ number of bos, but I'm prepared to accept the fact that "first match" gives good enough results. For the mmap offset fragmentation it's less of a problem since it should be straightforward to move bos in the address space with an additional translation of the user-space offset (if ever needed).
Hence I think that drivers with extremely specific needs should roll their own allocator. So I don't think we should anticipate different allocator algorithms. I see driver-specific stuff more in the area of clever eviction algorithms - i915 is currently at 5 lru's for gtt mapped bos, and we're still adding.
Yes, I agree. My point was merely that one should think twice before embedding drm_mm objects in generic buffer objects intended also for drivers with special needs.
To make a long story short, I've opted to make the current code faster by avoiding kmalloc and spoiling fewer cache-lines with useless data. And if the linear scan ever shows up in profiles, we could always add some stats to bail out early for large allocations. Or add a tree to heuristically find a suitable hole (assuming worst-case waste due to alignment).
Thanks a lot for your input on this.
Yours, Daniel
Thanks, Thomas
On Mon, Nov 15, 2010 at 09:40:14PM +0100, Thomas Hellstrom wrote:
Hence I think that drivers with extremely specific needs should roll their own allocator. So I don't think we should anticipate different allocator algorithms. I see driver-specific stuff more in the area of clever eviction algorithms - i915 is currently at 5 lru's for gtt mapped bos, and we're still adding.
Yes, I agree. My point was merely that one should think twice before embedding drm_mm objects in generic buffer objects intended also for drivers with special needs.
Ok, I see. Looks like I've slightly shot over the mark here ;-) As I've said, I don't have any immediate plans to create havoc in ttm. And if I start doing so, I'll do it in (hopefully) incrementally useful steps.
Thanks, Thomas
Thanks, Daniel