On Wed, 19 Dec 2012 11:56:18 +1000, Dave Airlie <airlied@gmail.com> wrote:
> From: Dave Airlie <airlied@redhat.com>
>
> So we have two offset manager implementations for dealing with VMA
> offsets: GEM had one using a hash table, TTM had one using an rbtree.
> I'd rather we just had one to rule them all. Since TTM is using the
> rbtree variant to allow sub-mappings, and we need to keep that ABI,
> the rbtree one is the one to standardise on.
>
> This also adds a bunch of inline helpers to avoid gem/ttm poking
> around inside the vma_offset objects. TTM needs a reset helper to
> remove a vma_offset when it copies an object to another one for
> buffer moves. The helpers also let drivers avoid the
> map_list.hash_key << PAGE_SHIFT nonsense.
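For readers skimming the thread: the kind of helper being described
would look roughly like the sketch below. The helper name and the
vma_node field layout are assumptions in line with the hunk further
down, not code quoted from the patch.

static inline u64 drm_vma_node_offset_addr(struct drm_vma_offset_node *node)
{
	/* the byte offset userspace passes to mmap(), taken from the
	 * drm_mm allocation rather than map_list.hash_key << PAGE_SHIFT */
	return ((u64)node->vm_node->start) << PAGE_SHIFT;
}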
Any clue as to the performance difference between the two implementations? What does it add to the cost of a pagefault?
Hmm, don't have an i-g-t handy for scalability testing of the fault handlers.
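To make the cost question concrete: with the rbtree manager every
pagefault does an O(log n) descent under the read lock, versus the
O(1) hash probe GEM does today. A minimal sketch of the shape of that
lookup follows; the addr_space_rb/vm_rb field names and the
containment test are assumptions, not the patch's actual code.

static struct drm_vma_offset_node *
vma_offset_lookup_sketch(struct drm_vma_offset_manager *man,
			 unsigned long pgoff)
{
	struct drm_vma_offset_node *found = NULL;
	struct rb_node *iter;

	read_lock(&man->vm_lock);
	iter = man->addr_space_rb.rb_node;
	while (iter) {
		struct drm_vma_offset_node *node =
			rb_entry(iter, struct drm_vma_offset_node, vm_rb);

		if (pgoff >= node->vm_node->start &&
		    pgoff < node->vm_node->start + node->num_pages) {
			/* fault offset falls inside this mapping */
			found = node;
			break;
		}
		if (pgoff < node->vm_node->start)
			iter = iter->rb_left;
		else
			iter = iter->rb_right;
	}
	read_unlock(&man->vm_lock);

	return found;
}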
> +int drm_vma_offset_setup(struct drm_vma_offset_manager *man,
> +			 struct drm_vma_offset_node *node,
> +			 unsigned long num_pages)
> +{
> +	int ret;
> +
> +retry_pre_get:
> +	ret = drm_mm_pre_get(&man->addr_space_mm);
> +	if (unlikely(ret != 0))
> +		return ret;
> +
> +	write_lock(&man->vm_lock);
> +	node->vm_node = drm_mm_search_free(&man->addr_space_mm,
> +					   num_pages, 0, 0);
> +	if (unlikely(node->vm_node == NULL)) {
> +		ret = -ENOMEM;
ret = -ENOSPC;

-ENOSPC is depended upon by the higher layers for determining when to
purge their caches; see i-g-t/gem_mmap_offset_exhaustion.
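That is, callers do roughly the following (a sketch of the pattern
with a hypothetical purge helper; only the -ENOSPC check is the point
here):

	ret = drm_vma_offset_setup(man, &obj->vma_node,
				   obj->size >> PAGE_SHIFT);
	if (ret == -ENOSPC) {
		/* mmap offset space exhausted: purge idle objects, retry */
		driver_purge_mmap_offsets(dev);	/* hypothetical */
		ret = drm_vma_offset_setup(man, &obj->vma_node,
					   obj->size >> PAGE_SHIFT);
	}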
> +		goto out_unlock;
> +	}
> +
> +	node->vm_node = drm_mm_get_block_atomic(node->vm_node,
> +						num_pages, 0);
I'd feel happier if this tried to only take from the preallocated pool rather than actually try a GFP_ATOMIC allocation.
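Something like the sketch below, i.e. take only from the pool filled
by drm_mm_pre_get() and return NULL so that we drop vm_lock and loop
back to retry_pre_get, instead of kmallocing with GFP_ATOMIC under the
lock. The unused_nodes/num_unused/unused_lock/node_list names assume
the current drm_mm prealloc pool layout.

static struct drm_mm_node *drm_mm_get_unused(struct drm_mm *mm)
{
	struct drm_mm_node *node = NULL;

	spin_lock(&mm->unused_lock);
	if (!list_empty(&mm->unused_nodes)) {
		/* pop a node preallocated by drm_mm_pre_get() */
		node = list_first_entry(&mm->unused_nodes,
					struct drm_mm_node, node_list);
		list_del(&node->node_list);
		--mm->num_unused;
	}
	spin_unlock(&mm->unused_lock);

	return node;	/* NULL: caller retries drm_mm_pre_get() */
}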
> +	if (unlikely(node->vm_node == NULL)) {
> +		write_unlock(&man->vm_lock);
> +		goto retry_pre_get;
> +	}
> +
> +	node->num_pages = num_pages;
> +	drm_vma_offset_insert_rb(man, node);
> +	write_unlock(&man->vm_lock);
> +
> +	return 0;
> +
> +out_unlock:
> +	write_unlock(&man->vm_lock);
> +	return ret;
> +}
> +EXPORT_SYMBOL(drm_vma_offset_setup);
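For reference, a driver's mmap-offset path would then reduce to
something like this sketch (the vma_offset_manager/vma_node placements
and the offset helper are assumptions consistent with the commit
message, not quoted from the series):

	ret = drm_vma_offset_setup(&dev->vma_offset_manager,
				   &obj->vma_node,
				   obj->size >> PAGE_SHIFT);
	if (ret)
		return ret;

	/* byte offset to hand back to userspace for mmap() */
	args->offset = drm_vma_node_offset_addr(&obj->vma_node);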