Important fix to patch 14: fix the accounting of the ghost bo. When creating a ghost bo we don't account for it, so set its acc_size to 0 so that when the ghost is released we don't over-free.
I wonder how I didn't run into this before.
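The gist of the fix, as a minimal sketch (ttm_bo_create_ghost() below is a hypothetical stand-in for illustration; the real change lives in patch 14, which is not reproduced here):

static struct ttm_buffer_object *
ttm_bo_create_ghost(struct ttm_buffer_object *bo)
{
	struct ttm_buffer_object *ghost;

	ghost = kmalloc(sizeof(*ghost), GFP_KERNEL);
	if (ghost == NULL)
		return NULL;

	*ghost = *bo;
	/*
	 * The ghost bo was never charged to the memory accounting,
	 * so zero its acc_size or the release path will give back
	 * accounting the ghost never consumed.
	 */
	ghost->acc_size = 0;
	return ghost;
}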
Patches are also available at
http://people.freedesktop.org/~glisse/ttmdma/
Cheers, Jerome
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Expose swiotlb_nr_tbl() so that modules have a mechanism to detect whether SWIOTLB is enabled. We also fix the spelling - it was swioltb instead of swiotlb.
CC: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
[v1: Ripped out swiotlb_enabled]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/xen/swiotlb-xen.c |    2 +-
 include/linux/swiotlb.h   |    2 +-
 lib/swiotlb.c             |    5 +++--
 3 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index 8e964b9..4864e5d 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -153,7 +153,7 @@ void __init xen_swiotlb_init(int verbose) char *m = NULL; unsigned int repeat = 3;
- nr_tbl = swioltb_nr_tbl(); + nr_tbl = swiotlb_nr_tbl(); if (nr_tbl) xen_io_tlb_nslabs = nr_tbl; else { diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index 445702c..e872526 100644 --- a/include/linux/swiotlb.h +++ b/include/linux/swiotlb.h @@ -24,7 +24,7 @@ extern int swiotlb_force;
extern void swiotlb_init(int verbose); extern void swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose); -extern unsigned long swioltb_nr_tbl(void); +extern unsigned long swiotlb_nr_tbl(void);
/* * Enumeration for sync targets diff --git a/lib/swiotlb.c b/lib/swiotlb.c index 99093b3..058935e 100644 --- a/lib/swiotlb.c +++ b/lib/swiotlb.c @@ -110,11 +110,11 @@ setup_io_tlb_npages(char *str) __setup("swiotlb=", setup_io_tlb_npages); /* make io_tlb_overflow tunable too? */
-unsigned long swioltb_nr_tbl(void) +unsigned long swiotlb_nr_tbl(void) { return io_tlb_nslabs; } - +EXPORT_SYMBOL_GPL(swiotlb_nr_tbl); /* Note that this doesn't work with highmem page */ static dma_addr_t swiotlb_virt_to_bus(struct device *hwdev, volatile void *address) @@ -321,6 +321,7 @@ void __init swiotlb_free(void) free_bootmem_late(__pa(io_tlb_start), PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT)); } + io_tlb_nslabs = 0; }
static int is_swiotlb_buffer(phys_addr_t paddr)
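With swiotlb_nr_tbl() exported and io_tlb_nslabs reset to 0 in swiotlb_free(), a module can now probe at init time whether the SWIOTLB is actually available - a minimal sketch (the module itself is hypothetical, not part of the patch):

#include <linux/module.h>
#include <linux/swiotlb.h>

static int __init swiotlb_probe_init(void)
{
	unsigned long nr_tbl = swiotlb_nr_tbl();

	if (nr_tbl)
		pr_info("SWIOTLB enabled, %lu slabs\n", nr_tbl);
	else
		pr_info("SWIOTLB disabled or already freed\n");
	return 0;
}
module_init(swiotlb_probe_init);

MODULE_LICENSE("GPL");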
From: Jerome Glisse <jglisse@redhat.com>
This was never used by any of the drivers; properly using userspace pages for a bo would need more code (vma interaction mostly). Remove this dead code in preparation for the ttm_tt & backend merge.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
---
 drivers/gpu/drm/ttm/ttm_bo.c    |   22 --------
 drivers/gpu/drm/ttm/ttm_tt.c    |  105 +--------------------------------------
 include/drm/ttm/ttm_bo_api.h    |    5 --
 include/drm/ttm/ttm_bo_driver.h |   24 ---------
 4 files changed, 1 insertions(+), 155 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 617b646..4bde335 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -342,22 +342,6 @@ static int ttm_bo_add_ttm(struct ttm_buffer_object *bo, bool zero_alloc) if (unlikely(bo->ttm == NULL)) ret = -ENOMEM; break; - case ttm_bo_type_user: - bo->ttm = ttm_tt_create(bdev, bo->num_pages << PAGE_SHIFT, - page_flags | TTM_PAGE_FLAG_USER, - glob->dummy_read_page); - if (unlikely(bo->ttm == NULL)) { - ret = -ENOMEM; - break; - } - - ret = ttm_tt_set_user(bo->ttm, current, - bo->buffer_start, bo->num_pages); - if (unlikely(ret != 0)) { - ttm_tt_destroy(bo->ttm); - bo->ttm = NULL; - } - break; default: printk(KERN_ERR TTM_PFX "Illegal buffer object type\n"); ret = -EINVAL; @@ -907,16 +891,12 @@ static uint32_t ttm_bo_select_caching(struct ttm_mem_type_manager *man, }
static bool ttm_bo_mt_compatible(struct ttm_mem_type_manager *man, - bool disallow_fixed, uint32_t mem_type, uint32_t proposed_placement, uint32_t *masked_placement) { uint32_t cur_flags = ttm_bo_type_flags(mem_type);
- if ((man->flags & TTM_MEMTYPE_FLAG_FIXED) && disallow_fixed) - return false; - if ((cur_flags & proposed_placement & TTM_PL_MASK_MEM) == 0) return false;
@@ -961,7 +941,6 @@ int ttm_bo_mem_space(struct ttm_buffer_object *bo, man = &bdev->man[mem_type];
type_ok = ttm_bo_mt_compatible(man, - bo->type == ttm_bo_type_user, mem_type, placement->placement[i], &cur_flags); @@ -1009,7 +988,6 @@ int ttm_bo_mem_space(struct ttm_buffer_object *bo, if (!man->has_type) continue; if (!ttm_bo_mt_compatible(man, - bo->type == ttm_bo_type_user, mem_type, placement->busy_placement[i], &cur_flags)) diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c index f9cc548..c68b0e7 100644 --- a/drivers/gpu/drm/ttm/ttm_tt.c +++ b/drivers/gpu/drm/ttm/ttm_tt.c @@ -63,43 +63,6 @@ static void ttm_tt_free_page_directory(struct ttm_tt *ttm) ttm->dma_address = NULL; }
-static void ttm_tt_free_user_pages(struct ttm_tt *ttm) -{ - int write; - int dirty; - struct page *page; - int i; - struct ttm_backend *be = ttm->be; - - BUG_ON(!(ttm->page_flags & TTM_PAGE_FLAG_USER)); - write = ((ttm->page_flags & TTM_PAGE_FLAG_WRITE) != 0); - dirty = ((ttm->page_flags & TTM_PAGE_FLAG_USER_DIRTY) != 0); - - if (be) - be->func->clear(be); - - for (i = 0; i < ttm->num_pages; ++i) { - page = ttm->pages[i]; - if (page == NULL) - continue; - - if (page == ttm->dummy_read_page) { - BUG_ON(write); - continue; - } - - if (write && dirty && !PageReserved(page)) - set_page_dirty_lock(page); - - ttm->pages[i] = NULL; - ttm_mem_global_free(ttm->glob->mem_glob, PAGE_SIZE); - put_page(page); - } - ttm->state = tt_unpopulated; - ttm->first_himem_page = ttm->num_pages; - ttm->last_lomem_page = -1; -} - static struct page *__ttm_tt_get_page(struct ttm_tt *ttm, int index) { struct page *p; @@ -326,10 +289,7 @@ void ttm_tt_destroy(struct ttm_tt *ttm) }
if (likely(ttm->pages != NULL)) { - if (ttm->page_flags & TTM_PAGE_FLAG_USER) - ttm_tt_free_user_pages(ttm); - else - ttm_tt_free_alloced_pages(ttm); + ttm_tt_free_alloced_pages(ttm);
ttm_tt_free_page_directory(ttm); } @@ -341,45 +301,6 @@ void ttm_tt_destroy(struct ttm_tt *ttm) kfree(ttm); }
-int ttm_tt_set_user(struct ttm_tt *ttm, - struct task_struct *tsk, - unsigned long start, unsigned long num_pages) -{ - struct mm_struct *mm = tsk->mm; - int ret; - int write = (ttm->page_flags & TTM_PAGE_FLAG_WRITE) != 0; - struct ttm_mem_global *mem_glob = ttm->glob->mem_glob; - - BUG_ON(num_pages != ttm->num_pages); - BUG_ON((ttm->page_flags & TTM_PAGE_FLAG_USER) == 0); - - /** - * Account user pages as lowmem pages for now. - */ - - ret = ttm_mem_global_alloc(mem_glob, num_pages * PAGE_SIZE, - false, false); - if (unlikely(ret != 0)) - return ret; - - down_read(&mm->mmap_sem); - ret = get_user_pages(tsk, mm, start, num_pages, - write, 0, ttm->pages, NULL); - up_read(&mm->mmap_sem); - - if (ret != num_pages && write) { - ttm_tt_free_user_pages(ttm); - ttm_mem_global_free(mem_glob, num_pages * PAGE_SIZE); - return -ENOMEM; - } - - ttm->tsk = tsk; - ttm->start = start; - ttm->state = tt_unbound; - - return 0; -} - struct ttm_tt *ttm_tt_create(struct ttm_bo_device *bdev, unsigned long size, uint32_t page_flags, struct page *dummy_read_page) { @@ -453,8 +374,6 @@ int ttm_tt_bind(struct ttm_tt *ttm, struct ttm_mem_reg *bo_mem)
ttm->state = tt_bound;
- if (ttm->page_flags & TTM_PAGE_FLAG_USER) - ttm->page_flags |= TTM_PAGE_FLAG_USER_DIRTY; return 0; } EXPORT_SYMBOL(ttm_tt_bind); @@ -470,16 +389,6 @@ static int ttm_tt_swapin(struct ttm_tt *ttm) int i; int ret = -ENOMEM;
- if (ttm->page_flags & TTM_PAGE_FLAG_USER) { - ret = ttm_tt_set_user(ttm, ttm->tsk, ttm->start, - ttm->num_pages); - if (unlikely(ret != 0)) - return ret; - - ttm->page_flags &= ~TTM_PAGE_FLAG_SWAPPED; - return 0; - } - swap_storage = ttm->swap_storage; BUG_ON(swap_storage == NULL);
@@ -530,18 +439,6 @@ int ttm_tt_swapout(struct ttm_tt *ttm, struct file *persistent_swap_storage) BUG_ON(ttm->state != tt_unbound && ttm->state != tt_unpopulated); BUG_ON(ttm->caching_state != tt_cached);
- /* - * For user buffers, just unpin the pages, as there should be - * vma references. - */ - - if (ttm->page_flags & TTM_PAGE_FLAG_USER) { - ttm_tt_free_user_pages(ttm); - ttm->page_flags |= TTM_PAGE_FLAG_SWAPPED; - ttm->swap_storage = NULL; - return 0; - } - if (!persistent_swap_storage) { swap_storage = shmem_file_setup("ttm swap", ttm->num_pages << PAGE_SHIFT, diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h index 42e3469..8d95a42 100644 --- a/include/drm/ttm/ttm_bo_api.h +++ b/include/drm/ttm/ttm_bo_api.h @@ -122,17 +122,12 @@ struct ttm_mem_reg { * be mmapped by user space. Each of these bos occupy a slot in the * device address space, that can be used for normal vm operations. * - * @ttm_bo_type_user: These are user-space memory areas that are made - * available to the GPU by mapping the buffer pages into the GPU aperture - * space. These buffers cannot be mmaped from the device address space. - * * @ttm_bo_type_kernel: These buffers are like ttm_bo_type_device buffers, * but they cannot be accessed from user-space. For kernel-only use. */
enum ttm_bo_type { ttm_bo_type_device, - ttm_bo_type_user, ttm_bo_type_kernel };
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h index 94eb143..37527d6 100644 --- a/include/drm/ttm/ttm_bo_driver.h +++ b/include/drm/ttm/ttm_bo_driver.h @@ -118,8 +118,6 @@ struct ttm_backend { struct ttm_backend_func *func; };
-#define TTM_PAGE_FLAG_USER (1 << 1) -#define TTM_PAGE_FLAG_USER_DIRTY (1 << 2) #define TTM_PAGE_FLAG_WRITE (1 << 3) #define TTM_PAGE_FLAG_SWAPPED (1 << 4) #define TTM_PAGE_FLAG_PERSISTENT_SWAP (1 << 5) @@ -146,8 +144,6 @@ enum ttm_caching_state { * @num_pages: Number of pages in the page array. * @bdev: Pointer to the current struct ttm_bo_device. * @be: Pointer to the ttm backend. - * @tsk: The task for user ttm. - * @start: virtual address for user ttm. * @swap_storage: Pointer to shmem struct file for swap storage. * @caching_state: The current caching state of the pages. * @state: The current binding state of the pages. @@ -167,8 +163,6 @@ struct ttm_tt { unsigned long num_pages; struct ttm_bo_global *glob; struct ttm_backend *be; - struct task_struct *tsk; - unsigned long start; struct file *swap_storage; enum ttm_caching_state caching_state; enum { @@ -618,24 +612,6 @@ extern struct ttm_tt *ttm_tt_create(struct ttm_bo_device *bdev, struct page *dummy_read_page);
/** - * ttm_tt_set_user: - * - * @ttm: The struct ttm_tt to populate. - * @tsk: A struct task_struct for which @start is a valid user-space address. - * @start: A valid user-space address. - * @num_pages: Size in pages of the user memory area. - * - * Populate a struct ttm_tt with a user-space memory area after first pinning - * the pages backing it. - * Returns: - * !0: Error. - */ - -extern int ttm_tt_set_user(struct ttm_tt *ttm, - struct task_struct *tsk, - unsigned long start, unsigned long num_pages); - -/** * ttm_ttm_bind: * * @ttm: The struct ttm_tt containing backing pages.
From: Jerome Glisse <jglisse@redhat.com>
The split between highmem and lowmem pages was rendered useless by the pool code. Remove it. Note that further cleanup could change the ttm page allocation helpers to actually take an array instead of relying on a list; this could drastically reduce the number of function calls in the common case of allocating a whole buffer.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
---
 drivers/gpu/drm/ttm/ttm_tt.c    |   11 ++---------
 include/drm/ttm/ttm_bo_driver.h |    7 -------
 2 files changed, 2 insertions(+), 16 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c index c68b0e7..f0c5ffd 100644 --- a/drivers/gpu/drm/ttm/ttm_tt.c +++ b/drivers/gpu/drm/ttm/ttm_tt.c @@ -70,7 +70,7 @@ static struct page *__ttm_tt_get_page(struct ttm_tt *ttm, int index) struct ttm_mem_global *mem_glob = ttm->glob->mem_glob; int ret;
- while (NULL == (p = ttm->pages[index])) { + if (NULL == (p = ttm->pages[index])) {
INIT_LIST_HEAD(&h);
@@ -86,10 +86,7 @@ static struct page *__ttm_tt_get_page(struct ttm_tt *ttm, int index) if (unlikely(ret != 0)) goto out_err;
- if (PageHighMem(p)) - ttm->pages[--ttm->first_himem_page] = p; - else - ttm->pages[++ttm->last_lomem_page] = p; + ttm->pages[index] = p; } return p; out_err: @@ -271,8 +268,6 @@ static void ttm_tt_free_alloced_pages(struct ttm_tt *ttm) ttm_put_pages(&h, count, ttm->page_flags, ttm->caching_state, ttm->dma_address); ttm->state = tt_unpopulated; - ttm->first_himem_page = ttm->num_pages; - ttm->last_lomem_page = -1; }
void ttm_tt_destroy(struct ttm_tt *ttm) @@ -316,8 +311,6 @@ struct ttm_tt *ttm_tt_create(struct ttm_bo_device *bdev, unsigned long size,
ttm->glob = bdev->glob; ttm->num_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT; - ttm->first_himem_page = ttm->num_pages; - ttm->last_lomem_page = -1; ttm->caching_state = tt_cached; ttm->page_flags = page_flags;
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h index 37527d6..9da182b 100644 --- a/include/drm/ttm/ttm_bo_driver.h +++ b/include/drm/ttm/ttm_bo_driver.h @@ -136,11 +136,6 @@ enum ttm_caching_state { * @dummy_read_page: Page to map where the ttm_tt page array contains a NULL * pointer. * @pages: Array of pages backing the data. - * @first_himem_page: Himem pages are put last in the page array, which - * enables us to run caching attribute changes on only the first part - * of the page array containing lomem pages. This is the index of the - * first himem page. - * @last_lomem_page: Index of the last lomem page in the page array. * @num_pages: Number of pages in the page array. * @bdev: Pointer to the current struct ttm_bo_device. * @be: Pointer to the ttm backend. @@ -157,8 +152,6 @@ enum ttm_caching_state { struct ttm_tt { struct page *dummy_read_page; struct page **pages; - long first_himem_page; - long last_lomem_page; uint32_t page_flags; unsigned long num_pages; struct ttm_bo_global *glob;
From: Jerome Glisse <jglisse@redhat.com>
This field is not used by any of the drivers; just drop it.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
---
 drivers/gpu/drm/radeon/radeon_ttm.c |    1 -
 include/drm/ttm/ttm_bo_driver.h     |    2 --
 2 files changed, 0 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c index 0b5468b..97c76ae 100644 --- a/drivers/gpu/drm/radeon/radeon_ttm.c +++ b/drivers/gpu/drm/radeon/radeon_ttm.c @@ -787,7 +787,6 @@ struct ttm_backend *radeon_ttm_backend_create(struct radeon_device *rdev) return NULL; } gtt->backend.bdev = &rdev->mman.bdev; - gtt->backend.flags = 0; gtt->backend.func = &radeon_backend_func; gtt->rdev = rdev; gtt->pages = NULL; diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h index 9da182b..6d17140 100644 --- a/include/drm/ttm/ttm_bo_driver.h +++ b/include/drm/ttm/ttm_bo_driver.h @@ -106,7 +106,6 @@ struct ttm_backend_func { * struct ttm_backend * * @bdev: Pointer to a struct ttm_bo_device. - * @flags: For driver use. * @func: Pointer to a struct ttm_backend_func that describes * the backend methods. * @@ -114,7 +113,6 @@ struct ttm_backend_func {
struct ttm_backend { struct ttm_bo_device *bdev; - uint32_t flags; struct ttm_backend_func *func; };
From: Jerome Glisse <jglisse@redhat.com>
On failure we need to make sure the page we free has the wb cache attribute. Do this by calling the proper ttm page helper function.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
---
 drivers/gpu/drm/ttm/ttm_tt.c |    5 ++++-
 1 files changed, 4 insertions(+), 1 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c index f0c5ffd..90527a2 100644 --- a/drivers/gpu/drm/ttm/ttm_tt.c +++ b/drivers/gpu/drm/ttm/ttm_tt.c @@ -90,7 +90,10 @@ static struct page *__ttm_tt_get_page(struct ttm_tt *ttm, int index) } return p; out_err: - put_page(p); + INIT_LIST_HEAD(&h); + list_add(&p->lru, &h); + ttm_put_pages(&h, 1, ttm->page_flags, + ttm->caching_state, &ttm->dma_address[index]); return NULL; }
From: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
---
 drivers/gpu/drm/ttm/ttm_tt.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c index 90527a2..54bbbad 100644 --- a/drivers/gpu/drm/ttm/ttm_tt.c +++ b/drivers/gpu/drm/ttm/ttm_tt.c @@ -320,7 +320,7 @@ struct ttm_tt *ttm_tt_create(struct ttm_bo_device *bdev, unsigned long size, ttm->dummy_read_page = dummy_read_page;
ttm_tt_alloc_page_directory(ttm); - if (!ttm->pages) { + if (!ttm->pages || !ttm->dma_address) { ttm_tt_destroy(ttm); printk(KERN_ERR TTM_PFX "Failed allocating page table\n"); return NULL;
From: Jerome Glisse <jglisse@redhat.com>
Use the ttm_tt pages array for page allocation, and move the list unwinding into the page allocation functions.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
---
 drivers/gpu/drm/ttm/ttm_page_alloc.c |   85 +++++++++++++++++++++-------------
 drivers/gpu/drm/ttm/ttm_tt.c         |   36 +++------------
 include/drm/ttm/ttm_page_alloc.h     |    8 ++--
 3 files changed, 63 insertions(+), 66 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c index 727e93d..0f3e6d2 100644 --- a/drivers/gpu/drm/ttm/ttm_page_alloc.c +++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c @@ -619,8 +619,10 @@ static void ttm_page_pool_fill_locked(struct ttm_page_pool *pool, * @return count of pages still required to fulfill the request. */ static unsigned ttm_page_pool_get_pages(struct ttm_page_pool *pool, - struct list_head *pages, int ttm_flags, - enum ttm_caching_state cstate, unsigned count) + struct list_head *pages, + int ttm_flags, + enum ttm_caching_state cstate, + unsigned count) { unsigned long irq_flags; struct list_head *p; @@ -664,13 +666,15 @@ out: * On success pages list will hold count number of correctly * cached pages. */ -int ttm_get_pages(struct list_head *pages, int flags, - enum ttm_caching_state cstate, unsigned count, +int ttm_get_pages(struct page **pages, int flags, + enum ttm_caching_state cstate, unsigned npages, dma_addr_t *dma_address) { struct ttm_page_pool *pool = ttm_get_pool(flags, cstate); + struct list_head plist; struct page *p = NULL; gfp_t gfp_flags = GFP_USER; + unsigned count; int r;
/* set zero flag for page allocation if required */ @@ -684,7 +688,7 @@ int ttm_get_pages(struct list_head *pages, int flags, else gfp_flags |= GFP_HIGHUSER;
- for (r = 0; r < count; ++r) { + for (r = 0; r < npages; ++r) { p = alloc_page(gfp_flags); if (!p) {
@@ -693,85 +697,100 @@ int ttm_get_pages(struct list_head *pages, int flags, return -ENOMEM; }
- list_add(&p->lru, pages); + pages[r] = p; } return 0; }
- /* combine zero flag to pool flags */ gfp_flags |= pool->gfp_flags;
/* First we take pages from the pool */ - count = ttm_page_pool_get_pages(pool, pages, flags, cstate, count); + INIT_LIST_HEAD(&plist); + npages = ttm_page_pool_get_pages(pool, &plist, flags, cstate, npages); + count = 0; + list_for_each_entry(p, &plist, lru) { + pages[count++] = p; + }
/* clear the pages coming from the pool if requested */ if (flags & TTM_PAGE_FLAG_ZERO_ALLOC) { - list_for_each_entry(p, pages, lru) { + list_for_each_entry(p, &plist, lru) { clear_page(page_address(p)); } }
/* If pool didn't have enough pages allocate new one. */ - if (count > 0) { + if (npages > 0) { /* ttm_alloc_new_pages doesn't reference pool so we can run * multiple requests in parallel. **/ - r = ttm_alloc_new_pages(pages, gfp_flags, flags, cstate, count); + INIT_LIST_HEAD(&plist); + r = ttm_alloc_new_pages(&plist, gfp_flags, flags, cstate, npages); + list_for_each_entry(p, &plist, lru) { + pages[count++] = p; + } if (r) { /* If there is any pages in the list put them back to * the pool. */ printk(KERN_ERR TTM_PFX "Failed to allocate extra pages " "for large request."); - ttm_put_pages(pages, 0, flags, cstate, NULL); + ttm_put_pages(pages, count, flags, cstate, NULL); return r; } }
- return 0; }
/* Put all pages in pages list to correct pool to wait for reuse */ -void ttm_put_pages(struct list_head *pages, unsigned page_count, int flags, +void ttm_put_pages(struct page **pages, unsigned npages, int flags, enum ttm_caching_state cstate, dma_addr_t *dma_address) { unsigned long irq_flags; struct ttm_page_pool *pool = ttm_get_pool(flags, cstate); - struct page *p, *tmp; + unsigned i;
if (pool == NULL) { /* No pool for this memory type so free the pages */ - - list_for_each_entry_safe(p, tmp, pages, lru) { - __free_page(p); + for (i = 0; i < npages; i++) { + if (pages[i]) { + if (page_count(pages[i]) != 1) + printk(KERN_ERR TTM_PFX + "Erroneous page count. " + "Leaking pages.\n"); + __free_page(pages[i]); + pages[i] = NULL; + } } - /* Make the pages list empty */ - INIT_LIST_HEAD(pages); return; } - if (page_count == 0) { - list_for_each_entry_safe(p, tmp, pages, lru) { - ++page_count; - } - }
spin_lock_irqsave(&pool->lock, irq_flags); - list_splice_init(pages, &pool->list); - pool->npages += page_count; + for (i = 0; i < npages; i++) { + if (pages[i]) { + if (page_count(pages[i]) != 1) + printk(KERN_ERR TTM_PFX + "Erroneous page count. " + "Leaking pages.\n"); + list_add_tail(&pages[i]->lru, &pool->list); + pages[i] = NULL; + pool->npages++; + } + } /* Check that we don't go over the pool limit */ - page_count = 0; + npages = 0; if (pool->npages > _manager->options.max_size) { - page_count = pool->npages - _manager->options.max_size; + npages = pool->npages - _manager->options.max_size; /* free at least NUM_PAGES_TO_ALLOC number of pages * to reduce calls to set_memory_wb */ - if (page_count < NUM_PAGES_TO_ALLOC) - page_count = NUM_PAGES_TO_ALLOC; + if (npages < NUM_PAGES_TO_ALLOC) + npages = NUM_PAGES_TO_ALLOC; } spin_unlock_irqrestore(&pool->lock, irq_flags); - if (page_count) - ttm_page_pool_free(pool, page_count); + if (npages) + ttm_page_pool_free(pool, npages); }
static void ttm_page_pool_init_locked(struct ttm_page_pool *pool, int flags, diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c index 54bbbad..6e079de 100644 --- a/drivers/gpu/drm/ttm/ttm_tt.c +++ b/drivers/gpu/drm/ttm/ttm_tt.c @@ -66,22 +66,16 @@ static void ttm_tt_free_page_directory(struct ttm_tt *ttm) static struct page *__ttm_tt_get_page(struct ttm_tt *ttm, int index) { struct page *p; - struct list_head h; struct ttm_mem_global *mem_glob = ttm->glob->mem_glob; int ret;
if (NULL == (p = ttm->pages[index])) {
- INIT_LIST_HEAD(&h); - - ret = ttm_get_pages(&h, ttm->page_flags, ttm->caching_state, 1, + ret = ttm_get_pages(&p, ttm->page_flags, ttm->caching_state, 1, &ttm->dma_address[index]); - if (ret != 0) return NULL;
- p = list_first_entry(&h, struct page, lru); - ret = ttm_mem_global_alloc_page(mem_glob, p, false, false); if (unlikely(ret != 0)) goto out_err; @@ -90,9 +84,7 @@ static struct page *__ttm_tt_get_page(struct ttm_tt *ttm, int index) } return p; out_err: - INIT_LIST_HEAD(&h); - list_add(&p->lru, &h); - ttm_put_pages(&h, 1, ttm->page_flags, + ttm_put_pages(&p, 1, ttm->page_flags, ttm->caching_state, &ttm->dma_address[index]); return NULL; } @@ -243,33 +235,19 @@ EXPORT_SYMBOL(ttm_tt_set_placement_caching);
static void ttm_tt_free_alloced_pages(struct ttm_tt *ttm) { - int i; - unsigned count = 0; - struct list_head h; - struct page *cur_page; struct ttm_backend *be = ttm->be; - - INIT_LIST_HEAD(&h); + unsigned i;
if (be) be->func->clear(be); for (i = 0; i < ttm->num_pages; ++i) { - - cur_page = ttm->pages[i]; - ttm->pages[i] = NULL; - if (cur_page) { - if (page_count(cur_page) != 1) - printk(KERN_ERR TTM_PFX - "Erroneous page count. " - "Leaking pages.\n"); + if (ttm->pages[i]) { ttm_mem_global_free_page(ttm->glob->mem_glob, - cur_page); - list_add(&cur_page->lru, &h); - count++; + ttm->pages[i]); + ttm_put_pages(&ttm->pages[i], 1, ttm->page_flags, + ttm->caching_state, &ttm->dma_address[i]); } } - ttm_put_pages(&h, count, ttm->page_flags, ttm->caching_state, - ttm->dma_address); ttm->state = tt_unpopulated; }
diff --git a/include/drm/ttm/ttm_page_alloc.h b/include/drm/ttm/ttm_page_alloc.h index 129de12..fe61c8d 100644 --- a/include/drm/ttm/ttm_page_alloc.h +++ b/include/drm/ttm/ttm_page_alloc.h @@ -38,10 +38,10 @@ * @count: number of pages to allocate. * @dma_address: The DMA (bus) address of pages (if TTM_PAGE_FLAG_DMA32 set). */ -int ttm_get_pages(struct list_head *pages, +int ttm_get_pages(struct page **pages, int flags, enum ttm_caching_state cstate, - unsigned count, + unsigned npages, dma_addr_t *dma_address); /** * Put linked list of pages to pool. @@ -53,8 +53,8 @@ int ttm_get_pages(struct list_head *pages, * @cstate: ttm caching state. * @dma_address: The DMA (bus) address of pages (if TTM_PAGE_FLAG_DMA32 set). */ -void ttm_put_pages(struct list_head *pages, - unsigned page_count, +void ttm_put_pages(struct page **pages, + unsigned npages, int flags, enum ttm_caching_state cstate, dma_addr_t *dma_address);
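With the array-based interface the common allocation path no longer touches a list at all. A minimal usage sketch (the caller and the fixed page count are made up for illustration; dma_address is only filled in for the DMA32 case, per the header comment):

#include <drm/ttm/ttm_page_alloc.h>

static int example_alloc_four_pages(void)
{
	struct page *pages[4] = { NULL, };
	dma_addr_t dma_address[4];
	int flags = TTM_PAGE_FLAG_ZERO_ALLOC;
	int r;

	r = ttm_get_pages(pages, flags, tt_cached, 4, dma_address);
	if (r)
		return r;

	/* ... use pages[0..3] ... */

	/* pages go back to the correct pool and the entries are NULLed */
	ttm_put_pages(pages, 4, flags, tt_cached, dma_address);
	return 0;
}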
From: Jerome Glisse <jglisse@redhat.com>
ttm_backend will only exist with a ttm_tt, and ttm_tt will only be of interest when bound to a backend. Merge them to avoid code and data duplication.
V2 Rebase on top of the memory accounting overhaul
V3 Rebase on top of more memory accounting changes
V4 Rebase on top of no memory accounting changes (where/when is my
DeLorean when I need it?)
V5 Make sure the ttm is unbound before destroying it; change the commit
message on suggestion from Tormod Volden
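For driver writers the resulting pattern is the same in nouveau, radeon and vmwgfx: embed a struct ttm_tt in the driver structure, point its func at the backend functions, and call ttm_tt_init(). A condensed sketch (the example_* names are placeholders, not part of the patch):

#include <linux/slab.h>
#include <drm/ttm/ttm_bo_driver.h>

struct example_ttm_tt {
	struct ttm_tt ttm;	/* must come first if the driver casts, as nouveau does */
	/* driver-private state goes here */
};

static int example_bind(struct ttm_tt *ttm, struct ttm_mem_reg *bo_mem)
{
	/* program the GART/GMR using ttm->pages / ttm->dma_address */
	return 0;
}

static int example_unbind(struct ttm_tt *ttm)
{
	/* clear the GART/GMR entries again */
	return 0;
}

static void example_destroy(struct ttm_tt *ttm)
{
	/* called back from ttm_tt_destroy(); only free the container */
	kfree(container_of(ttm, struct example_ttm_tt, ttm));
}

static struct ttm_backend_func example_backend_func = {
	.bind = example_bind,
	.unbind = example_unbind,
	.destroy = example_destroy,
};

static struct ttm_tt *example_ttm_tt_create(struct ttm_bo_device *bdev,
					    unsigned long size,
					    uint32_t page_flags,
					    struct page *dummy_read_page)
{
	struct example_ttm_tt *tt;

	tt = kzalloc(sizeof(*tt), GFP_KERNEL);
	if (tt == NULL)
		return NULL;

	tt->ttm.func = &example_backend_func;
	if (ttm_tt_init(&tt->ttm, bdev, size, page_flags, dummy_read_page)) {
		/* ttm_tt_init() calls func->destroy() on failure, which
		 * already freed tt, so it must not be freed again here. */
		return NULL;
	}
	return &tt->ttm;
}

The driver then hooks this up as .ttm_tt_create in its struct ttm_bo_driver, replacing the old .create_ttm_backend_entry.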
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
---
 drivers/gpu/drm/nouveau/nouveau_bo.c    |   14 ++-
 drivers/gpu/drm/nouveau/nouveau_drv.h   |    5 +-
 drivers/gpu/drm/nouveau/nouveau_sgdma.c |  188 ++++++++++++---------------
 drivers/gpu/drm/radeon/radeon_ttm.c     |  219 ++++++++++++-------------------
 drivers/gpu/drm/ttm/ttm_agp_backend.c   |   88 +++++--------
 drivers/gpu/drm/ttm/ttm_bo.c            |    9 +-
 drivers/gpu/drm/ttm/ttm_tt.c            |   59 ++-------
 drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c  |   66 +++-------
 include/drm/ttm/ttm_bo_driver.h         |  104 ++++++---------
 9 files changed, 294 insertions(+), 458 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c index 7cc37e6..3f116ba 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.c +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c @@ -343,8 +343,10 @@ nouveau_bo_wr32(struct nouveau_bo *nvbo, unsigned index, u32 val) *mem = val; }
-static struct ttm_backend * -nouveau_bo_create_ttm_backend_entry(struct ttm_bo_device *bdev) +static struct ttm_tt * +nouveau_ttm_tt_create(struct ttm_bo_device *bdev, + unsigned long size, uint32_t page_flags, + struct page *dummy_read_page) { struct drm_nouveau_private *dev_priv = nouveau_bdev(bdev); struct drm_device *dev = dev_priv->dev; @@ -352,11 +354,13 @@ nouveau_bo_create_ttm_backend_entry(struct ttm_bo_device *bdev) switch (dev_priv->gart_info.type) { #if __OS_HAS_AGP case NOUVEAU_GART_AGP: - return ttm_agp_backend_init(bdev, dev->agp->bridge); + return ttm_agp_tt_create(bdev, dev->agp->bridge, + size, page_flags, dummy_read_page); #endif case NOUVEAU_GART_PDMA: case NOUVEAU_GART_HW: - return nouveau_sgdma_init_ttm(dev); + return nouveau_sgdma_create_ttm(bdev, size, page_flags, + dummy_read_page); default: NV_ERROR(dev, "Unknown GART type %d\n", dev_priv->gart_info.type); @@ -1045,7 +1049,7 @@ nouveau_bo_fence(struct nouveau_bo *nvbo, struct nouveau_fence *fence) }
struct ttm_bo_driver nouveau_bo_driver = { - .create_ttm_backend_entry = nouveau_bo_create_ttm_backend_entry, + .ttm_tt_create = &nouveau_ttm_tt_create, .invalidate_caches = nouveau_bo_invalidate_caches, .init_mem_type = nouveau_bo_init_mem_type, .evict_flags = nouveau_bo_evict_flags, diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h b/drivers/gpu/drm/nouveau/nouveau_drv.h index 29837da..0c53e39 100644 --- a/drivers/gpu/drm/nouveau/nouveau_drv.h +++ b/drivers/gpu/drm/nouveau/nouveau_drv.h @@ -1000,7 +1000,10 @@ extern int nouveau_sgdma_init(struct drm_device *); extern void nouveau_sgdma_takedown(struct drm_device *); extern uint32_t nouveau_sgdma_get_physical(struct drm_device *, uint32_t offset); -extern struct ttm_backend *nouveau_sgdma_init_ttm(struct drm_device *); +extern struct ttm_tt *nouveau_sgdma_create_ttm(struct ttm_bo_device *bdev, + unsigned long size, + uint32_t page_flags, + struct page *dummy_read_page);
/* nouveau_debugfs.c */ #if defined(CONFIG_DRM_NOUVEAU_DEBUG) diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c b/drivers/gpu/drm/nouveau/nouveau_sgdma.c index b75258a..bc2ab90 100644 --- a/drivers/gpu/drm/nouveau/nouveau_sgdma.c +++ b/drivers/gpu/drm/nouveau/nouveau_sgdma.c @@ -8,44 +8,23 @@ #define NV_CTXDMA_PAGE_MASK (NV_CTXDMA_PAGE_SIZE - 1)
struct nouveau_sgdma_be { - struct ttm_backend backend; + struct ttm_tt ttm; struct drm_device *dev; - - dma_addr_t *pages; - unsigned nr_pages; - bool unmap_pages; - u64 offset; - bool bound; };
static int -nouveau_sgdma_populate(struct ttm_backend *be, unsigned long num_pages, - struct page **pages, struct page *dummy_read_page, - dma_addr_t *dma_addrs) +nouveau_sgdma_dma_map(struct ttm_tt *ttm) { - struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)be; + struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)ttm; struct drm_device *dev = nvbe->dev; int i;
- NV_DEBUG(nvbe->dev, "num_pages = %ld\n", num_pages); - - nvbe->pages = dma_addrs; - nvbe->nr_pages = num_pages; - nvbe->unmap_pages = true; - - /* this code path isn't called and is incorrect anyways */ - if (0) { /* dma_addrs[0] != DMA_ERROR_CODE) { */ - nvbe->unmap_pages = false; - return 0; - } - - for (i = 0; i < num_pages; i++) { - nvbe->pages[i] = pci_map_page(dev->pdev, pages[i], 0, - PAGE_SIZE, PCI_DMA_BIDIRECTIONAL); - if (pci_dma_mapping_error(dev->pdev, nvbe->pages[i])) { - nvbe->nr_pages = --i; - be->func->clear(be); + for (i = 0; i < ttm->num_pages; i++) { + ttm->dma_address[i] = pci_map_page(dev->pdev, ttm->pages[i], + 0, PAGE_SIZE, + PCI_DMA_BIDIRECTIONAL); + if (pci_dma_mapping_error(dev->pdev, ttm->dma_address[i])) { return -EFAULT; } } @@ -54,53 +33,52 @@ nouveau_sgdma_populate(struct ttm_backend *be, unsigned long num_pages, }
static void -nouveau_sgdma_clear(struct ttm_backend *be) +nouveau_sgdma_dma_unmap(struct ttm_tt *ttm) { - struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)be; + struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)ttm; struct drm_device *dev = nvbe->dev; + int i;
- if (nvbe->bound) - be->func->unbind(be); - - if (nvbe->unmap_pages) { - while (nvbe->nr_pages--) { - pci_unmap_page(dev->pdev, nvbe->pages[nvbe->nr_pages], + for (i = 0; i < ttm->num_pages; i++) { + if (ttm->dma_address[i]) { + pci_unmap_page(dev->pdev, ttm->dma_address[i], PAGE_SIZE, PCI_DMA_BIDIRECTIONAL); } + ttm->dma_address[i] = 0; } }
static void -nouveau_sgdma_destroy(struct ttm_backend *be) +nouveau_sgdma_destroy(struct ttm_tt *ttm) { - struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)be; + struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)ttm;
- if (be) { + if (ttm) { NV_DEBUG(nvbe->dev, "\n"); - - if (nvbe) { - if (nvbe->pages) - be->func->clear(be); - kfree(nvbe); - } + kfree(nvbe); } }
static int -nv04_sgdma_bind(struct ttm_backend *be, struct ttm_mem_reg *mem) +nv04_sgdma_bind(struct ttm_tt *ttm, struct ttm_mem_reg *mem) { - struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)be; + struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)ttm; struct drm_device *dev = nvbe->dev; struct drm_nouveau_private *dev_priv = dev->dev_private; struct nouveau_gpuobj *gpuobj = dev_priv->gart_info.sg_ctxdma; unsigned i, j, pte; + int r;
NV_DEBUG(dev, "pg=0x%lx\n", mem->start); + r = nouveau_sgdma_dma_map(ttm); + if (r) { + return r; + }
nvbe->offset = mem->start << PAGE_SHIFT; pte = (nvbe->offset >> NV_CTXDMA_PAGE_SHIFT) + 2; - for (i = 0; i < nvbe->nr_pages; i++) { - dma_addr_t dma_offset = nvbe->pages[i]; + for (i = 0; i < ttm->num_pages; i++) { + dma_addr_t dma_offset = ttm->dma_address[i]; uint32_t offset_l = lower_32_bits(dma_offset);
for (j = 0; j < PAGE_SIZE / NV_CTXDMA_PAGE_SIZE; j++, pte++) { @@ -109,14 +87,13 @@ nv04_sgdma_bind(struct ttm_backend *be, struct ttm_mem_reg *mem) } }
- nvbe->bound = true; return 0; }
static int -nv04_sgdma_unbind(struct ttm_backend *be) +nv04_sgdma_unbind(struct ttm_tt *ttm) { - struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)be; + struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)ttm; struct drm_device *dev = nvbe->dev; struct drm_nouveau_private *dev_priv = dev->dev_private; struct nouveau_gpuobj *gpuobj = dev_priv->gart_info.sg_ctxdma; @@ -124,22 +101,20 @@ nv04_sgdma_unbind(struct ttm_backend *be)
NV_DEBUG(dev, "\n");
- if (!nvbe->bound) + if (ttm->state != tt_bound) return 0;
pte = (nvbe->offset >> NV_CTXDMA_PAGE_SHIFT) + 2; - for (i = 0; i < nvbe->nr_pages; i++) { + for (i = 0; i < ttm->num_pages; i++) { for (j = 0; j < PAGE_SIZE / NV_CTXDMA_PAGE_SIZE; j++, pte++) nv_wo32(gpuobj, (pte * 4) + 0, 0x00000000); }
- nvbe->bound = false; + nouveau_sgdma_dma_unmap(ttm); return 0; }
static struct ttm_backend_func nv04_sgdma_backend = { - .populate = nouveau_sgdma_populate, - .clear = nouveau_sgdma_clear, .bind = nv04_sgdma_bind, .unbind = nv04_sgdma_unbind, .destroy = nouveau_sgdma_destroy @@ -158,16 +133,21 @@ nv41_sgdma_flush(struct nouveau_sgdma_be *nvbe) }
static int -nv41_sgdma_bind(struct ttm_backend *be, struct ttm_mem_reg *mem) +nv41_sgdma_bind(struct ttm_tt *ttm, struct ttm_mem_reg *mem) { - struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)be; + struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)ttm; struct drm_nouveau_private *dev_priv = nvbe->dev->dev_private; struct nouveau_gpuobj *pgt = dev_priv->gart_info.sg_ctxdma; - dma_addr_t *list = nvbe->pages; + dma_addr_t *list = ttm->dma_address; u32 pte = mem->start << 2; - u32 cnt = nvbe->nr_pages; + u32 cnt = ttm->num_pages; + int r;
nvbe->offset = mem->start << PAGE_SHIFT; + r = nouveau_sgdma_dma_map(ttm); + if (r) { + return r; + }
while (cnt--) { nv_wo32(pgt, pte, (*list++ >> 7) | 1); @@ -175,18 +155,17 @@ nv41_sgdma_bind(struct ttm_backend *be, struct ttm_mem_reg *mem) }
nv41_sgdma_flush(nvbe); - nvbe->bound = true; return 0; }
static int -nv41_sgdma_unbind(struct ttm_backend *be) +nv41_sgdma_unbind(struct ttm_tt *ttm) { - struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)be; + struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)ttm; struct drm_nouveau_private *dev_priv = nvbe->dev->dev_private; struct nouveau_gpuobj *pgt = dev_priv->gart_info.sg_ctxdma; u32 pte = (nvbe->offset >> 12) << 2; - u32 cnt = nvbe->nr_pages; + u32 cnt = ttm->num_pages;
while (cnt--) { nv_wo32(pgt, pte, 0x00000000); @@ -194,24 +173,23 @@ nv41_sgdma_unbind(struct ttm_backend *be) }
nv41_sgdma_flush(nvbe); - nvbe->bound = false; + nouveau_sgdma_dma_unmap(ttm); return 0; }
static struct ttm_backend_func nv41_sgdma_backend = { - .populate = nouveau_sgdma_populate, - .clear = nouveau_sgdma_clear, .bind = nv41_sgdma_bind, .unbind = nv41_sgdma_unbind, .destroy = nouveau_sgdma_destroy };
static void -nv44_sgdma_flush(struct nouveau_sgdma_be *nvbe) +nv44_sgdma_flush(struct ttm_tt *ttm) { + struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)ttm; struct drm_device *dev = nvbe->dev;
- nv_wr32(dev, 0x100814, (nvbe->nr_pages - 1) << 12); + nv_wr32(dev, 0x100814, (ttm->num_pages - 1) << 12); nv_wr32(dev, 0x100808, nvbe->offset | 0x20); if (!nv_wait(dev, 0x100808, 0x00000001, 0x00000001)) NV_ERROR(dev, "gart flush timeout: 0x%08x\n", @@ -270,17 +248,21 @@ nv44_sgdma_fill(struct nouveau_gpuobj *pgt, dma_addr_t *list, u32 base, u32 cnt) }
static int -nv44_sgdma_bind(struct ttm_backend *be, struct ttm_mem_reg *mem) +nv44_sgdma_bind(struct ttm_tt *ttm, struct ttm_mem_reg *mem) { - struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)be; + struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)ttm; struct drm_nouveau_private *dev_priv = nvbe->dev->dev_private; struct nouveau_gpuobj *pgt = dev_priv->gart_info.sg_ctxdma; - dma_addr_t *list = nvbe->pages; + dma_addr_t *list = ttm->dma_address; u32 pte = mem->start << 2, tmp[4]; - u32 cnt = nvbe->nr_pages; - int i; + u32 cnt = ttm->num_pages; + int i, r;
nvbe->offset = mem->start << PAGE_SHIFT; + r = nouveau_sgdma_dma_map(ttm); + if (r) { + return r; + }
if (pte & 0x0000000c) { u32 max = 4 - ((pte >> 2) & 0x3); @@ -305,19 +287,18 @@ nv44_sgdma_bind(struct ttm_backend *be, struct ttm_mem_reg *mem) if (cnt) nv44_sgdma_fill(pgt, list, pte, cnt);
- nv44_sgdma_flush(nvbe); - nvbe->bound = true; + nv44_sgdma_flush(ttm); return 0; }
static int -nv44_sgdma_unbind(struct ttm_backend *be) +nv44_sgdma_unbind(struct ttm_tt *ttm) { - struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)be; + struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)ttm; struct drm_nouveau_private *dev_priv = nvbe->dev->dev_private; struct nouveau_gpuobj *pgt = dev_priv->gart_info.sg_ctxdma; u32 pte = (nvbe->offset >> 12) << 2; - u32 cnt = nvbe->nr_pages; + u32 cnt = ttm->num_pages;
if (pte & 0x0000000c) { u32 max = 4 - ((pte >> 2) & 0x3); @@ -339,55 +320,53 @@ nv44_sgdma_unbind(struct ttm_backend *be) if (cnt) nv44_sgdma_fill(pgt, NULL, pte, cnt);
- nv44_sgdma_flush(nvbe); - nvbe->bound = false; + nv44_sgdma_flush(ttm); + nouveau_sgdma_dma_unmap(ttm); return 0; }
static struct ttm_backend_func nv44_sgdma_backend = { - .populate = nouveau_sgdma_populate, - .clear = nouveau_sgdma_clear, .bind = nv44_sgdma_bind, .unbind = nv44_sgdma_unbind, .destroy = nouveau_sgdma_destroy };
static int -nv50_sgdma_bind(struct ttm_backend *be, struct ttm_mem_reg *mem) +nv50_sgdma_bind(struct ttm_tt *ttm, struct ttm_mem_reg *mem) { - struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)be; struct nouveau_mem *node = mem->mm_node; + int r; + /* noop: bound in move_notify() */ - node->pages = nvbe->pages; - nvbe->pages = (dma_addr_t *)node; - nvbe->bound = true; + r = nouveau_sgdma_dma_map(ttm); + if (r) { + return r; + } + node->pages = ttm->dma_address; return 0; }
static int -nv50_sgdma_unbind(struct ttm_backend *be) +nv50_sgdma_unbind(struct ttm_tt *ttm) { - struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)be; - struct nouveau_mem *node = (struct nouveau_mem *)nvbe->pages; /* noop: unbound in move_notify() */ - nvbe->pages = node->pages; - node->pages = NULL; - nvbe->bound = false; + nouveau_sgdma_dma_unmap(ttm); return 0; }
static struct ttm_backend_func nv50_sgdma_backend = { - .populate = nouveau_sgdma_populate, - .clear = nouveau_sgdma_clear, .bind = nv50_sgdma_bind, .unbind = nv50_sgdma_unbind, .destroy = nouveau_sgdma_destroy };
-struct ttm_backend * -nouveau_sgdma_init_ttm(struct drm_device *dev) +struct ttm_tt * +nouveau_sgdma_create_ttm(struct ttm_bo_device *bdev, + unsigned long size, uint32_t page_flags, + struct page *dummy_read_page) { - struct drm_nouveau_private *dev_priv = dev->dev_private; + struct drm_nouveau_private *dev_priv = nouveau_bdev(bdev); + struct drm_device *dev = dev_priv->dev; struct nouveau_sgdma_be *nvbe;
nvbe = kzalloc(sizeof(*nvbe), GFP_KERNEL); @@ -395,9 +374,12 @@ nouveau_sgdma_init_ttm(struct drm_device *dev) return NULL;
nvbe->dev = dev; + nvbe->ttm.func = dev_priv->gart_info.func;
- nvbe->backend.func = dev_priv->gart_info.func; - return &nvbe->backend; + if (ttm_tt_init(&nvbe->ttm, bdev, size, page_flags, dummy_read_page)) { + return NULL; + } + return &nvbe->ttm; }
int diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c index 97c76ae..af4d5f2 100644 --- a/drivers/gpu/drm/radeon/radeon_ttm.c +++ b/drivers/gpu/drm/radeon/radeon_ttm.c @@ -114,24 +114,6 @@ static void radeon_ttm_global_fini(struct radeon_device *rdev) } }
-struct ttm_backend *radeon_ttm_backend_create(struct radeon_device *rdev); - -static struct ttm_backend* -radeon_create_ttm_backend_entry(struct ttm_bo_device *bdev) -{ - struct radeon_device *rdev; - - rdev = radeon_get_rdev(bdev); -#if __OS_HAS_AGP - if (rdev->flags & RADEON_IS_AGP) { - return ttm_agp_backend_init(bdev, rdev->ddev->agp->bridge); - } else -#endif - { - return radeon_ttm_backend_create(rdev); - } -} - static int radeon_invalidate_caches(struct ttm_bo_device *bdev, uint32_t flags) { return 0; @@ -515,8 +497,90 @@ static bool radeon_sync_obj_signaled(void *sync_obj, void *sync_arg) return radeon_fence_signaled((struct radeon_fence *)sync_obj); }
+/* + * TTM backend functions. + */ +struct radeon_ttm_tt { + struct ttm_tt ttm; + struct radeon_device *rdev; + u64 offset; +}; + +static int radeon_ttm_backend_bind(struct ttm_tt *ttm, + struct ttm_mem_reg *bo_mem) +{ + struct radeon_ttm_tt *gtt; + int r; + + gtt = container_of(ttm, struct radeon_ttm_tt, ttm); + gtt->offset = (unsigned long)(bo_mem->start << PAGE_SHIFT); + if (!ttm->num_pages) { + WARN(1, "nothing to bind %lu pages for mreg %p back %p!\n", + ttm->num_pages, bo_mem, ttm); + } + r = radeon_gart_bind(gtt->rdev, gtt->offset, + ttm->num_pages, ttm->pages, ttm->dma_address); + if (r) { + DRM_ERROR("failed to bind %lu pages at 0x%08X\n", + ttm->num_pages, (unsigned)gtt->offset); + return r; + } + return 0; +} + +static int radeon_ttm_backend_unbind(struct ttm_tt *ttm) +{ + struct radeon_ttm_tt *gtt; + + gtt = container_of(ttm, struct radeon_ttm_tt, ttm); + radeon_gart_unbind(gtt->rdev, gtt->offset, ttm->num_pages); + return 0; +} + +static void radeon_ttm_backend_destroy(struct ttm_tt *ttm) +{ + struct radeon_ttm_tt *gtt; + + gtt = container_of(ttm, struct radeon_ttm_tt, ttm); + kfree(gtt); +} + +static struct ttm_backend_func radeon_backend_func = { + .bind = &radeon_ttm_backend_bind, + .unbind = &radeon_ttm_backend_unbind, + .destroy = &radeon_ttm_backend_destroy, +}; + +struct ttm_tt *radeon_ttm_tt_create(struct ttm_bo_device *bdev, + unsigned long size, uint32_t page_flags, + struct page *dummy_read_page) +{ + struct radeon_device *rdev; + struct radeon_ttm_tt *gtt; + + rdev = radeon_get_rdev(bdev); +#if __OS_HAS_AGP + if (rdev->flags & RADEON_IS_AGP) { + return ttm_agp_tt_create(bdev, rdev->ddev->agp->bridge, + size, page_flags, dummy_read_page); + } +#endif + + gtt = kzalloc(sizeof(struct radeon_ttm_tt), GFP_KERNEL); + if (gtt == NULL) { + return NULL; + } + gtt->ttm.func = &radeon_backend_func; + gtt->rdev = rdev; + if (ttm_tt_init(&gtt->ttm, bdev, size, page_flags, dummy_read_page)) { + return NULL; + } + return &gtt->ttm; +} + + static struct ttm_bo_driver radeon_bo_driver = { - .create_ttm_backend_entry = &radeon_create_ttm_backend_entry, + .ttm_tt_create = &radeon_ttm_tt_create, .invalidate_caches = &radeon_invalidate_caches, .init_mem_type = &radeon_init_mem_type, .evict_flags = &radeon_evict_flags, @@ -680,123 +744,6 @@ int radeon_mmap(struct file *filp, struct vm_area_struct *vma) }
-/* - * TTM backend functions. - */ -struct radeon_ttm_backend { - struct ttm_backend backend; - struct radeon_device *rdev; - unsigned long num_pages; - struct page **pages; - struct page *dummy_read_page; - dma_addr_t *dma_addrs; - bool populated; - bool bound; - unsigned offset; -}; - -static int radeon_ttm_backend_populate(struct ttm_backend *backend, - unsigned long num_pages, - struct page **pages, - struct page *dummy_read_page, - dma_addr_t *dma_addrs) -{ - struct radeon_ttm_backend *gtt; - - gtt = container_of(backend, struct radeon_ttm_backend, backend); - gtt->pages = pages; - gtt->dma_addrs = dma_addrs; - gtt->num_pages = num_pages; - gtt->dummy_read_page = dummy_read_page; - gtt->populated = true; - return 0; -} - -static void radeon_ttm_backend_clear(struct ttm_backend *backend) -{ - struct radeon_ttm_backend *gtt; - - gtt = container_of(backend, struct radeon_ttm_backend, backend); - gtt->pages = NULL; - gtt->dma_addrs = NULL; - gtt->num_pages = 0; - gtt->dummy_read_page = NULL; - gtt->populated = false; - gtt->bound = false; -} - - -static int radeon_ttm_backend_bind(struct ttm_backend *backend, - struct ttm_mem_reg *bo_mem) -{ - struct radeon_ttm_backend *gtt; - int r; - - gtt = container_of(backend, struct radeon_ttm_backend, backend); - gtt->offset = bo_mem->start << PAGE_SHIFT; - if (!gtt->num_pages) { - WARN(1, "nothing to bind %lu pages for mreg %p back %p!\n", - gtt->num_pages, bo_mem, backend); - } - r = radeon_gart_bind(gtt->rdev, gtt->offset, - gtt->num_pages, gtt->pages, gtt->dma_addrs); - if (r) { - DRM_ERROR("failed to bind %lu pages at 0x%08X\n", - gtt->num_pages, gtt->offset); - return r; - } - gtt->bound = true; - return 0; -} - -static int radeon_ttm_backend_unbind(struct ttm_backend *backend) -{ - struct radeon_ttm_backend *gtt; - - gtt = container_of(backend, struct radeon_ttm_backend, backend); - radeon_gart_unbind(gtt->rdev, gtt->offset, gtt->num_pages); - gtt->bound = false; - return 0; -} - -static void radeon_ttm_backend_destroy(struct ttm_backend *backend) -{ - struct radeon_ttm_backend *gtt; - - gtt = container_of(backend, struct radeon_ttm_backend, backend); - if (gtt->bound) { - radeon_ttm_backend_unbind(backend); - } - kfree(gtt); -} - -static struct ttm_backend_func radeon_backend_func = { - .populate = &radeon_ttm_backend_populate, - .clear = &radeon_ttm_backend_clear, - .bind = &radeon_ttm_backend_bind, - .unbind = &radeon_ttm_backend_unbind, - .destroy = &radeon_ttm_backend_destroy, -}; - -struct ttm_backend *radeon_ttm_backend_create(struct radeon_device *rdev) -{ - struct radeon_ttm_backend *gtt; - - gtt = kzalloc(sizeof(struct radeon_ttm_backend), GFP_KERNEL); - if (gtt == NULL) { - return NULL; - } - gtt->backend.bdev = &rdev->mman.bdev; - gtt->backend.func = &radeon_backend_func; - gtt->rdev = rdev; - gtt->pages = NULL; - gtt->num_pages = 0; - gtt->dummy_read_page = NULL; - gtt->populated = false; - gtt->bound = false; - return &gtt->backend; -} - #define RADEON_DEBUGFS_MEM_TYPES 2
#if defined(CONFIG_DEBUG_FS) diff --git a/drivers/gpu/drm/ttm/ttm_agp_backend.c b/drivers/gpu/drm/ttm/ttm_agp_backend.c index 1c4a72f..14ebd36 100644 --- a/drivers/gpu/drm/ttm/ttm_agp_backend.c +++ b/drivers/gpu/drm/ttm/ttm_agp_backend.c @@ -40,45 +40,33 @@ #include <asm/agp.h>
struct ttm_agp_backend { - struct ttm_backend backend; + struct ttm_tt ttm; struct agp_memory *mem; struct agp_bridge_data *bridge; };
-static int ttm_agp_populate(struct ttm_backend *backend, - unsigned long num_pages, struct page **pages, - struct page *dummy_read_page, - dma_addr_t *dma_addrs) +static int ttm_agp_bind(struct ttm_tt *ttm, struct ttm_mem_reg *bo_mem) { - struct ttm_agp_backend *agp_be = - container_of(backend, struct ttm_agp_backend, backend); - struct page **cur_page, **last_page = pages + num_pages; + struct ttm_agp_backend *agp_be = container_of(ttm, struct ttm_agp_backend, ttm); + struct drm_mm_node *node = bo_mem->mm_node; struct agp_memory *mem; + int ret, cached = (bo_mem->placement & TTM_PL_FLAG_CACHED); + unsigned i;
- mem = agp_allocate_memory(agp_be->bridge, num_pages, AGP_USER_MEMORY); + mem = agp_allocate_memory(agp_be->bridge, ttm->num_pages, AGP_USER_MEMORY); if (unlikely(mem == NULL)) return -ENOMEM;
mem->page_count = 0; - for (cur_page = pages; cur_page < last_page; ++cur_page) { - struct page *page = *cur_page; + for (i = 0; i < ttm->num_pages; i++) { + struct page *page = ttm->pages[i]; + if (!page) - page = dummy_read_page; + page = ttm->dummy_read_page;
mem->pages[mem->page_count++] = page; } agp_be->mem = mem; - return 0; -} - -static int ttm_agp_bind(struct ttm_backend *backend, struct ttm_mem_reg *bo_mem) -{ - struct ttm_agp_backend *agp_be = - container_of(backend, struct ttm_agp_backend, backend); - struct drm_mm_node *node = bo_mem->mm_node; - struct agp_memory *mem = agp_be->mem; - int cached = (bo_mem->placement & TTM_PL_FLAG_CACHED); - int ret;
mem->is_flushed = 1; mem->type = (cached) ? AGP_USER_CACHED_MEMORY : AGP_USER_MEMORY; @@ -90,50 +78,38 @@ static int ttm_agp_bind(struct ttm_backend *backend, struct ttm_mem_reg *bo_mem) return ret; }
-static int ttm_agp_unbind(struct ttm_backend *backend) +static int ttm_agp_unbind(struct ttm_tt *ttm) { - struct ttm_agp_backend *agp_be = - container_of(backend, struct ttm_agp_backend, backend); - - if (agp_be->mem->is_bound) - return agp_unbind_memory(agp_be->mem); - else - return 0; -} + struct ttm_agp_backend *agp_be = container_of(ttm, struct ttm_agp_backend, ttm);
-static void ttm_agp_clear(struct ttm_backend *backend) -{ - struct ttm_agp_backend *agp_be = - container_of(backend, struct ttm_agp_backend, backend); - struct agp_memory *mem = agp_be->mem; - - if (mem) { - ttm_agp_unbind(backend); - agp_free_memory(mem); + if (agp_be->mem) { + if (agp_be->mem->is_bound) + return agp_unbind_memory(agp_be->mem); + agp_free_memory(agp_be->mem); + agp_be->mem = NULL; } - agp_be->mem = NULL; + return 0; }
-static void ttm_agp_destroy(struct ttm_backend *backend) +static void ttm_agp_destroy(struct ttm_tt *ttm) { - struct ttm_agp_backend *agp_be = - container_of(backend, struct ttm_agp_backend, backend); + struct ttm_agp_backend *agp_be = container_of(ttm, struct ttm_agp_backend, ttm);
if (agp_be->mem) - ttm_agp_clear(backend); + ttm_agp_unbind(ttm); kfree(agp_be); }
static struct ttm_backend_func ttm_agp_func = { - .populate = ttm_agp_populate, - .clear = ttm_agp_clear, .bind = ttm_agp_bind, .unbind = ttm_agp_unbind, .destroy = ttm_agp_destroy, };
-struct ttm_backend *ttm_agp_backend_init(struct ttm_bo_device *bdev, - struct agp_bridge_data *bridge) +struct ttm_tt *ttm_agp_tt_create(struct ttm_bo_device *bdev, + struct agp_bridge_data *bridge, + unsigned long size, uint32_t page_flags, + struct page *dummy_read_page) { struct ttm_agp_backend *agp_be;
@@ -143,10 +119,14 @@ struct ttm_backend *ttm_agp_backend_init(struct ttm_bo_device *bdev,
agp_be->mem = NULL; agp_be->bridge = bridge; - agp_be->backend.func = &ttm_agp_func; - agp_be->backend.bdev = bdev; - return &agp_be->backend; + agp_be->ttm.func = &ttm_agp_func; + + if (ttm_tt_init(&agp_be->ttm, bdev, size, page_flags, dummy_read_page)) { + return NULL; + } + + return &agp_be->ttm; } -EXPORT_SYMBOL(ttm_agp_backend_init); +EXPORT_SYMBOL(ttm_agp_tt_create);
#endif diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index 4bde335..cb73527 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -337,8 +337,8 @@ static int ttm_bo_add_ttm(struct ttm_buffer_object *bo, bool zero_alloc) if (zero_alloc) page_flags |= TTM_PAGE_FLAG_ZERO_ALLOC; case ttm_bo_type_kernel: - bo->ttm = ttm_tt_create(bdev, bo->num_pages << PAGE_SHIFT, - page_flags, glob->dummy_read_page); + bo->ttm = bdev->driver->ttm_tt_create(bdev, bo->num_pages << PAGE_SHIFT, + page_flags, glob->dummy_read_page); if (unlikely(bo->ttm == NULL)) ret = -ENOMEM; break; @@ -1437,10 +1437,7 @@ int ttm_bo_global_init(struct drm_global_reference *ref) goto out_no_shrink; }
- glob->ttm_bo_extra_size = - ttm_round_pot(sizeof(struct ttm_tt)) + - ttm_round_pot(sizeof(struct ttm_backend)); - + glob->ttm_bo_extra_size = ttm_round_pot(sizeof(struct ttm_tt)); glob->ttm_bo_size = glob->ttm_bo_extra_size + ttm_round_pot(sizeof(struct ttm_buffer_object));
diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c index 6e079de..fbc90dc 100644 --- a/drivers/gpu/drm/ttm/ttm_tt.c +++ b/drivers/gpu/drm/ttm/ttm_tt.c @@ -105,7 +105,6 @@ int ttm_tt_populate(struct ttm_tt *ttm) { struct page *page; unsigned long i; - struct ttm_backend *be; int ret;
if (ttm->state != tt_unpopulated) @@ -117,16 +116,11 @@ int ttm_tt_populate(struct ttm_tt *ttm) return ret; }
- be = ttm->be; - for (i = 0; i < ttm->num_pages; ++i) { page = __ttm_tt_get_page(ttm, i); if (!page) return -ENOMEM; } - - be->func->populate(be, ttm->num_pages, ttm->pages, - ttm->dummy_read_page, ttm->dma_address); ttm->state = tt_unbound; return 0; } @@ -235,11 +229,8 @@ EXPORT_SYMBOL(ttm_tt_set_placement_caching);
static void ttm_tt_free_alloced_pages(struct ttm_tt *ttm) { - struct ttm_backend *be = ttm->be; unsigned i;
- if (be) - be->func->clear(be); for (i = 0; i < ttm->num_pages; ++i) { if (ttm->pages[i]) { ttm_mem_global_free_page(ttm->glob->mem_glob, @@ -253,20 +244,15 @@ static void ttm_tt_free_alloced_pages(struct ttm_tt *ttm)
void ttm_tt_destroy(struct ttm_tt *ttm) { - struct ttm_backend *be; - if (unlikely(ttm == NULL)) return;
- be = ttm->be; - if (likely(be != NULL)) { - be->func->destroy(be); - ttm->be = NULL; + if (ttm->state == tt_bound) { + ttm_tt_unbind(ttm); }
if (likely(ttm->pages != NULL)) { ttm_tt_free_alloced_pages(ttm); - ttm_tt_free_page_directory(ttm); }
@@ -274,52 +260,38 @@ void ttm_tt_destroy(struct ttm_tt *ttm) ttm->swap_storage) fput(ttm->swap_storage);
- kfree(ttm); + ttm->swap_storage = NULL; + ttm->func->destroy(ttm); }
-struct ttm_tt *ttm_tt_create(struct ttm_bo_device *bdev, unsigned long size, - uint32_t page_flags, struct page *dummy_read_page) +int ttm_tt_init(struct ttm_tt *ttm, struct ttm_bo_device *bdev, + unsigned long size, uint32_t page_flags, + struct page *dummy_read_page) { - struct ttm_bo_driver *bo_driver = bdev->driver; - struct ttm_tt *ttm; - - if (!bo_driver) - return NULL; - - ttm = kzalloc(sizeof(*ttm), GFP_KERNEL); - if (!ttm) - return NULL; - + ttm->bdev = bdev; ttm->glob = bdev->glob; ttm->num_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT; ttm->caching_state = tt_cached; ttm->page_flags = page_flags; - ttm->dummy_read_page = dummy_read_page; + ttm->state = tt_unpopulated;
ttm_tt_alloc_page_directory(ttm); if (!ttm->pages || !ttm->dma_address) { ttm_tt_destroy(ttm); printk(KERN_ERR TTM_PFX "Failed allocating page table\n"); - return NULL; - } - ttm->be = bo_driver->create_ttm_backend_entry(bdev); - if (!ttm->be) { - ttm_tt_destroy(ttm); - printk(KERN_ERR TTM_PFX "Failed creating ttm backend entry\n"); - return NULL; + return -ENOMEM; } - ttm->state = tt_unpopulated; - return ttm; + return 0; } +EXPORT_SYMBOL(ttm_tt_init);
void ttm_tt_unbind(struct ttm_tt *ttm) { int ret; - struct ttm_backend *be = ttm->be;
if (ttm->state == tt_bound) { - ret = be->func->unbind(be); + ret = ttm->func->unbind(ttm); BUG_ON(ret); ttm->state = tt_unbound; } @@ -328,7 +300,6 @@ void ttm_tt_unbind(struct ttm_tt *ttm) int ttm_tt_bind(struct ttm_tt *ttm, struct ttm_mem_reg *bo_mem) { int ret = 0; - struct ttm_backend *be;
if (!ttm) return -EINVAL; @@ -336,13 +307,11 @@ int ttm_tt_bind(struct ttm_tt *ttm, struct ttm_mem_reg *bo_mem) if (ttm->state == tt_bound) return 0;
- be = ttm->be; - ret = ttm_tt_populate(ttm); if (ret) return ret;
- ret = be->func->bind(be, bo_mem); + ret = ttm->func->bind(ttm, bo_mem); if (unlikely(ret != 0)) return ret;
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c index 5a72ed9..cc72435 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c @@ -139,85 +139,61 @@ struct ttm_placement vmw_srf_placement = { .busy_placement = gmr_vram_placement_flags };
-struct vmw_ttm_backend { - struct ttm_backend backend; - struct page **pages; - unsigned long num_pages; +struct vmw_ttm_tt { + struct ttm_tt ttm; struct vmw_private *dev_priv; int gmr_id; };
-static int vmw_ttm_populate(struct ttm_backend *backend, - unsigned long num_pages, struct page **pages, - struct page *dummy_read_page, - dma_addr_t *dma_addrs) +static int vmw_ttm_bind(struct ttm_tt *ttm, struct ttm_mem_reg *bo_mem) { - struct vmw_ttm_backend *vmw_be = - container_of(backend, struct vmw_ttm_backend, backend); - - vmw_be->pages = pages; - vmw_be->num_pages = num_pages; - - return 0; -} - -static int vmw_ttm_bind(struct ttm_backend *backend, struct ttm_mem_reg *bo_mem) -{ - struct vmw_ttm_backend *vmw_be = - container_of(backend, struct vmw_ttm_backend, backend); + struct vmw_ttm_tt *vmw_be = container_of(ttm, struct vmw_ttm_tt, ttm);
vmw_be->gmr_id = bo_mem->start;
- return vmw_gmr_bind(vmw_be->dev_priv, vmw_be->pages, - vmw_be->num_pages, vmw_be->gmr_id); + return vmw_gmr_bind(vmw_be->dev_priv, ttm->pages, + ttm->num_pages, vmw_be->gmr_id); }
-static int vmw_ttm_unbind(struct ttm_backend *backend) +static int vmw_ttm_unbind(struct ttm_tt *ttm) { - struct vmw_ttm_backend *vmw_be = - container_of(backend, struct vmw_ttm_backend, backend); + struct vmw_ttm_tt *vmw_be = container_of(ttm, struct vmw_ttm_tt, ttm);
vmw_gmr_unbind(vmw_be->dev_priv, vmw_be->gmr_id); return 0; }
-static void vmw_ttm_clear(struct ttm_backend *backend) +static void vmw_ttm_destroy(struct ttm_tt *ttm) { - struct vmw_ttm_backend *vmw_be = - container_of(backend, struct vmw_ttm_backend, backend); - - vmw_be->pages = NULL; - vmw_be->num_pages = 0; -} - -static void vmw_ttm_destroy(struct ttm_backend *backend) -{ - struct vmw_ttm_backend *vmw_be = - container_of(backend, struct vmw_ttm_backend, backend); + struct vmw_ttm_tt *vmw_be = container_of(ttm, struct vmw_ttm_tt, ttm);
kfree(vmw_be); }
static struct ttm_backend_func vmw_ttm_func = { - .populate = vmw_ttm_populate, - .clear = vmw_ttm_clear, .bind = vmw_ttm_bind, .unbind = vmw_ttm_unbind, .destroy = vmw_ttm_destroy, };
-struct ttm_backend *vmw_ttm_backend_init(struct ttm_bo_device *bdev) +struct ttm_tt *vmw_ttm_tt_create(struct ttm_bo_device *bdev, + unsigned long size, uint32_t page_flags, + struct page *dummy_read_page) { - struct vmw_ttm_backend *vmw_be; + struct vmw_ttm_tt *vmw_be;
vmw_be = kmalloc(sizeof(*vmw_be), GFP_KERNEL); if (!vmw_be) return NULL;
- vmw_be->backend.func = &vmw_ttm_func; + vmw_be->ttm.func = &vmw_ttm_func; vmw_be->dev_priv = container_of(bdev, struct vmw_private, bdev);
- return &vmw_be->backend; + if (ttm_tt_init(&vmw_be->ttm, bdev, size, page_flags, dummy_read_page)) { + return NULL; + } + + return &vmw_be->ttm; }
int vmw_invalidate_caches(struct ttm_bo_device *bdev, uint32_t flags) @@ -357,7 +333,7 @@ static int vmw_sync_obj_wait(void *sync_obj, void *sync_arg, }
struct ttm_bo_driver vmw_bo_driver = { - .create_ttm_backend_entry = vmw_ttm_backend_init, + .ttm_tt_create = &vmw_ttm_tt_create, .invalidate_caches = vmw_invalidate_caches, .init_mem_type = vmw_init_mem_type, .evict_flags = vmw_evict_flags, diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h index 6d17140..6b8c5cd 100644 --- a/include/drm/ttm/ttm_bo_driver.h +++ b/include/drm/ttm/ttm_bo_driver.h @@ -43,36 +43,9 @@ struct ttm_backend;
struct ttm_backend_func { /** - * struct ttm_backend_func member populate - * - * @backend: Pointer to a struct ttm_backend. - * @num_pages: Number of pages to populate. - * @pages: Array of pointers to ttm pages. - * @dummy_read_page: Page to be used instead of NULL pages in the - * array @pages. - * @dma_addrs: Array of DMA (bus) address of the ttm pages. - * - * Populate the backend with ttm pages. Depending on the backend, - * it may or may not copy the @pages array. - */ - int (*populate) (struct ttm_backend *backend, - unsigned long num_pages, struct page **pages, - struct page *dummy_read_page, - dma_addr_t *dma_addrs); - /** - * struct ttm_backend_func member clear - * - * @backend: Pointer to a struct ttm_backend. - * - * This is an "unpopulate" function. Release all resources - * allocated with populate. - */ - void (*clear) (struct ttm_backend *backend); - - /** * struct ttm_backend_func member bind * - * @backend: Pointer to a struct ttm_backend. + * @ttm: Pointer to a struct ttm_tt. * @bo_mem: Pointer to a struct ttm_mem_reg describing the * memory type and location for binding. * @@ -80,40 +53,27 @@ struct ttm_backend_func { * indicated by @bo_mem. This function should be able to handle * differences between aperture and system page sizes. */ - int (*bind) (struct ttm_backend *backend, struct ttm_mem_reg *bo_mem); + int (*bind) (struct ttm_tt *ttm, struct ttm_mem_reg *bo_mem);
/** * struct ttm_backend_func member unbind * - * @backend: Pointer to a struct ttm_backend. + * @ttm: Pointer to a struct ttm_tt. * * Unbind previously bound backend pages. This function should be * able to handle differences between aperture and system page sizes. */ - int (*unbind) (struct ttm_backend *backend); + int (*unbind) (struct ttm_tt *ttm);
	/**
	 * struct ttm_backend_func member destroy
	 *
-	 * @backend: Pointer to a struct ttm_backend.
+	 * @ttm: Pointer to a struct ttm_tt.
	 *
-	 * Destroy the backend.
+	 * Destroy the backend. This will be called back from ttm_tt_destroy, so
+	 * don't call ttm_tt_destroy from the callback, or you get an infinite loop.
	 */
-	void (*destroy) (struct ttm_backend *backend);
-};
-
-/**
- * struct ttm_backend
- *
- * @bdev: Pointer to a struct ttm_bo_device.
- * @func: Pointer to a struct ttm_backend_func that describes
- * the backend methods.
- *
- */
-
-struct ttm_backend {
-	struct ttm_bo_device *bdev;
-	struct ttm_backend_func *func;
+	void (*destroy) (struct ttm_tt *ttm);
 };
#define TTM_PAGE_FLAG_WRITE (1 << 3) @@ -131,6 +91,9 @@ enum ttm_caching_state { /** * struct ttm_tt * + * @bdev: Pointer to a struct ttm_bo_device. + * @func: Pointer to a struct ttm_backend_func that describes + * the backend methods. * @dummy_read_page: Page to map where the ttm_tt page array contains a NULL * pointer. * @pages: Array of pages backing the data. @@ -148,6 +111,8 @@ enum ttm_caching_state { */
struct ttm_tt { + struct ttm_bo_device *bdev; + struct ttm_backend_func *func; struct page *dummy_read_page; struct page **pages; uint32_t page_flags; @@ -336,15 +301,22 @@ struct ttm_mem_type_manager {
struct ttm_bo_driver { /** - * struct ttm_bo_driver member create_ttm_backend_entry + * ttm_tt_create * - * @bdev: The buffer object device. + * @bdev: pointer to a struct ttm_bo_device: + * @size: Size of the data needed backing. + * @page_flags: Page flags as identified by TTM_PAGE_FLAG_XX flags. + * @dummy_read_page: See struct ttm_bo_device. * - * Create a driver specific struct ttm_backend. + * Create a struct ttm_tt to back data with system memory pages. + * No pages are actually allocated. + * Returns: + * NULL: Out of memory. */ - - struct ttm_backend *(*create_ttm_backend_entry) - (struct ttm_bo_device *bdev); + struct ttm_tt *(*ttm_tt_create)(struct ttm_bo_device *bdev, + unsigned long size, + uint32_t page_flags, + struct page *dummy_read_page);
/** * struct ttm_bo_driver member invalidate_caches @@ -585,8 +557,9 @@ ttm_flag_masked(uint32_t *old, uint32_t new, uint32_t mask) }
/** - * ttm_tt_create + * ttm_tt_init * + * @ttm: The struct ttm_tt. * @bdev: pointer to a struct ttm_bo_device: * @size: Size of the data needed backing. * @page_flags: Page flags as identified by TTM_PAGE_FLAG_XX flags. @@ -597,10 +570,9 @@ ttm_flag_masked(uint32_t *old, uint32_t new, uint32_t mask) * Returns: * NULL: Out of memory. */ -extern struct ttm_tt *ttm_tt_create(struct ttm_bo_device *bdev, - unsigned long size, - uint32_t page_flags, - struct page *dummy_read_page); +extern int ttm_tt_init(struct ttm_tt *ttm, struct ttm_bo_device *bdev, + unsigned long size, uint32_t page_flags, + struct page *dummy_read_page);
/** * ttm_ttm_bind: @@ -626,7 +598,7 @@ extern int ttm_tt_populate(struct ttm_tt *ttm); * * @ttm: The struct ttm_tt. * - * Unbind, unpopulate and destroy a struct ttm_tt. + * Unbind, unpopulate and destroy common struct ttm_tt. */ extern void ttm_tt_destroy(struct ttm_tt *ttm);
@@ -1013,17 +985,23 @@ extern const struct ttm_mem_type_manager_func ttm_bo_manager_func; #include <linux/agp_backend.h>
/** - * ttm_agp_backend_init + * ttm_agp_tt_create * * @bdev: Pointer to a struct ttm_bo_device. * @bridge: The agp bridge this device is sitting on. + * @size: Size of the data needed backing. + * @page_flags: Page flags as identified by TTM_PAGE_FLAG_XX flags. + * @dummy_read_page: See struct ttm_bo_device. + * * * Create a TTM backend that uses the indicated AGP bridge as an aperture * for TT memory. This function uses the linux agpgart interface to * bind and unbind memory backing a ttm_tt. */ -extern struct ttm_backend *ttm_agp_backend_init(struct ttm_bo_device *bdev, - struct agp_bridge_data *bridge); +extern struct ttm_tt *ttm_agp_tt_create(struct ttm_bo_device *bdev, + struct agp_bridge_data *bridge, + unsigned long size, uint32_t page_flags, + struct page *dummy_read_page); #endif
#endif
From: Jerome Glisse jglisse@redhat.com
Move page allocation and freeing into driver callbacks, and provide TTM helper functions for those.
The most intrusive change is that we now only ever fully populate an object; this simplifies some of the code that was designed around the page-fault path.
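For a driver with no special page needs the new wiring is just two extra hooks pointed at the TTM pool helpers. A minimal sketch (the foo_* names are hypothetical stand-ins, not from any driver in this series):

/* Sketch only: forward the new hooks to the TTM pool helpers. */
static struct ttm_bo_driver foo_bo_driver = {
	.ttm_tt_create     = &foo_ttm_tt_create,    /* allocates struct ttm_tt, no pages yet */
	.ttm_tt_populate   = &ttm_pool_populate,    /* allocates all backing pages at once */
	.ttm_tt_unpopulate = &ttm_pool_unpopulate,  /* returns them all to the pool */
	/* ... remaining members unchanged ... */
};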
V2 Rebase on top of the memory accounting overhaul
V3 New rebase on top of more memory accounting changes
V4 Rebase on top of no memory accounting changes (where/when is my DeLorean when I need it?)
Signed-off-by: Jerome Glisse jglisse@redhat.com Reviewed-by: Konrad Rzeszutek Wilk konrad.wilk@oracle.com Reviewed-by: Thomas Hellstrom thellstrom@vmware.com --- drivers/gpu/drm/nouveau/nouveau_bo.c | 3 + drivers/gpu/drm/radeon/radeon_ttm.c | 2 + drivers/gpu/drm/ttm/ttm_bo_util.c | 31 ++++++----- drivers/gpu/drm/ttm/ttm_bo_vm.c | 9 +++- drivers/gpu/drm/ttm/ttm_page_alloc.c | 57 ++++++++++++++++++++ drivers/gpu/drm/ttm/ttm_tt.c | 91 ++------------------------------ drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c | 3 + include/drm/ttm/ttm_bo_driver.h | 41 ++++++++------ include/drm/ttm/ttm_page_alloc.h | 18 ++++++ 9 files changed, 135 insertions(+), 120 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c index 3f116ba..3271001 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.c +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c @@ -28,6 +28,7 @@ */
#include "drmP.h" +#include "ttm/ttm_page_alloc.h"
#include "nouveau_drm.h" #include "nouveau_drv.h" @@ -1050,6 +1051,8 @@ nouveau_bo_fence(struct nouveau_bo *nvbo, struct nouveau_fence *fence)
struct ttm_bo_driver nouveau_bo_driver = { .ttm_tt_create = &nouveau_ttm_tt_create, + .ttm_tt_populate = &ttm_pool_populate, + .ttm_tt_unpopulate = &ttm_pool_unpopulate, .invalidate_caches = nouveau_bo_invalidate_caches, .init_mem_type = nouveau_bo_init_mem_type, .evict_flags = nouveau_bo_evict_flags, diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c index af4d5f2..b1768cb 100644 --- a/drivers/gpu/drm/radeon/radeon_ttm.c +++ b/drivers/gpu/drm/radeon/radeon_ttm.c @@ -581,6 +581,8 @@ struct ttm_tt *radeon_ttm_tt_create(struct ttm_bo_device *bdev,
static struct ttm_bo_driver radeon_bo_driver = { .ttm_tt_create = &radeon_ttm_tt_create, + .ttm_tt_populate = &ttm_pool_populate, + .ttm_tt_unpopulate = &ttm_pool_unpopulate, .invalidate_caches = &radeon_invalidate_caches, .init_mem_type = &radeon_init_mem_type, .evict_flags = &radeon_evict_flags, diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c index 082fcae..60f204d 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -244,7 +244,7 @@ static int ttm_copy_io_ttm_page(struct ttm_tt *ttm, void *src, unsigned long page, pgprot_t prot) { - struct page *d = ttm_tt_get_page(ttm, page); + struct page *d = ttm->pages[page]; void *dst;
if (!d) @@ -281,7 +281,7 @@ static int ttm_copy_ttm_io_page(struct ttm_tt *ttm, void *dst, unsigned long page, pgprot_t prot) { - struct page *s = ttm_tt_get_page(ttm, page); + struct page *s = ttm->pages[page]; void *src;
if (!s) @@ -342,6 +342,12 @@ int ttm_bo_move_memcpy(struct ttm_buffer_object *bo, if (old_iomap == NULL && ttm == NULL) goto out2;
+ if (ttm->state == tt_unpopulated) { + ret = ttm->bdev->driver->ttm_tt_populate(ttm); + if (ret) + goto out1; + } + add = 0; dir = 1;
@@ -502,10 +508,16 @@ static int ttm_bo_kmap_ttm(struct ttm_buffer_object *bo, { struct ttm_mem_reg *mem = &bo->mem; pgprot_t prot; struct ttm_tt *ttm = bo->ttm; - struct page *d; - int i; + int ret;
BUG_ON(!ttm); + + if (ttm->state == tt_unpopulated) { + ret = ttm->bdev->driver->ttm_tt_populate(ttm); + if (ret) + return ret; + } + if (num_pages == 1 && (mem->placement & TTM_PL_FLAG_CACHED)) { /* * We're mapping a single page, and the desired @@ -513,18 +525,9 @@ static int ttm_bo_kmap_ttm(struct ttm_buffer_object *bo, */
map->bo_kmap_type = ttm_bo_map_kmap; - map->page = ttm_tt_get_page(ttm, start_page); + map->page = ttm->pages[start_page]; map->virtual = kmap(map->page); } else { - /* - * Populate the part we're mapping; - */ - for (i = start_page; i < start_page + num_pages; ++i) { - d = ttm_tt_get_page(ttm, i); - if (!d) - return -ENOMEM; - } - /* * We need to use vmap to get the desired page protection * or to make the buffer object look contiguous. diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c index 221b924..5441284 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c @@ -174,18 +174,23 @@ static int ttm_bo_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf) vma->vm_page_prot = (bo->mem.placement & TTM_PL_FLAG_CACHED) ? vm_get_page_prot(vma->vm_flags) : ttm_io_prot(bo->mem.placement, vma->vm_page_prot); + + /* Allocate all page at once, most common usage */ + if (ttm->bdev->driver->ttm_tt_populate(ttm)) { + retval = VM_FAULT_OOM; + goto out_io_unlock; + } }
/* * Speculatively prefault a number of pages. Only error on * first page. */ - for (i = 0; i < TTM_BO_VM_NUM_PREFAULT; ++i) { if (bo->mem.bus.is_iomem) pfn = ((bo->mem.bus.base + bo->mem.bus.offset) >> PAGE_SHIFT) + page_offset; else { - page = ttm_tt_get_page(ttm, page_offset); + page = ttm->pages[page_offset]; if (unlikely(!page && i == 0)) { retval = VM_FAULT_OOM; goto out_io_unlock; diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c index 0f3e6d2..8d6267e 100644 --- a/drivers/gpu/drm/ttm/ttm_page_alloc.c +++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c @@ -855,6 +855,63 @@ void ttm_page_alloc_fini(void) _manager = NULL; }
+int ttm_pool_populate(struct ttm_tt *ttm) +{ + struct ttm_mem_global *mem_glob = ttm->glob->mem_glob; + unsigned i; + int ret; + + if (ttm->state != tt_unpopulated) + return 0; + + for (i = 0; i < ttm->num_pages; ++i) { + ret = ttm_get_pages(&ttm->pages[i], ttm->page_flags, + ttm->caching_state, 1, + &ttm->dma_address[i]); + if (ret != 0) { + ttm_pool_unpopulate(ttm); + return -ENOMEM; + } + + ret = ttm_mem_global_alloc_page(mem_glob, ttm->pages[i], + false, false); + if (unlikely(ret != 0)) { + ttm_pool_unpopulate(ttm); + return -ENOMEM; + } + } + + if (unlikely(ttm->page_flags & TTM_PAGE_FLAG_SWAPPED)) { + ret = ttm_tt_swapin(ttm); + if (unlikely(ret != 0)) { + ttm_pool_unpopulate(ttm); + return ret; + } + } + + ttm->state = tt_unbound; + return 0; +} +EXPORT_SYMBOL(ttm_pool_populate); + +void ttm_pool_unpopulate(struct ttm_tt *ttm) +{ + unsigned i; + + for (i = 0; i < ttm->num_pages; ++i) { + if (ttm->pages[i]) { + ttm_mem_global_free_page(ttm->glob->mem_glob, + ttm->pages[i]); + ttm_put_pages(&ttm->pages[i], 1, + ttm->page_flags, + ttm->caching_state, + ttm->dma_address); + } + } + ttm->state = tt_unpopulated; +} +EXPORT_SYMBOL(ttm_pool_unpopulate); + int ttm_page_alloc_debugfs(struct seq_file *m, void *data) { struct ttm_page_pool *p; diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c index fbc90dc..77f0e6f 100644 --- a/drivers/gpu/drm/ttm/ttm_tt.c +++ b/drivers/gpu/drm/ttm/ttm_tt.c @@ -43,8 +43,6 @@ #include "ttm/ttm_placement.h" #include "ttm/ttm_page_alloc.h"
-static int ttm_tt_swapin(struct ttm_tt *ttm); - /** * Allocates storage for pointers to the pages that back the ttm. */ @@ -63,69 +61,6 @@ static void ttm_tt_free_page_directory(struct ttm_tt *ttm) ttm->dma_address = NULL; }
-static struct page *__ttm_tt_get_page(struct ttm_tt *ttm, int index) -{ - struct page *p; - struct ttm_mem_global *mem_glob = ttm->glob->mem_glob; - int ret; - - if (NULL == (p = ttm->pages[index])) { - - ret = ttm_get_pages(&p, ttm->page_flags, ttm->caching_state, 1, - &ttm->dma_address[index]); - if (ret != 0) - return NULL; - - ret = ttm_mem_global_alloc_page(mem_glob, p, false, false); - if (unlikely(ret != 0)) - goto out_err; - - ttm->pages[index] = p; - } - return p; -out_err: - ttm_put_pages(&p, 1, ttm->page_flags, - ttm->caching_state, &ttm->dma_address[index]); - return NULL; -} - -struct page *ttm_tt_get_page(struct ttm_tt *ttm, int index) -{ - int ret; - - if (unlikely(ttm->page_flags & TTM_PAGE_FLAG_SWAPPED)) { - ret = ttm_tt_swapin(ttm); - if (unlikely(ret != 0)) - return NULL; - } - return __ttm_tt_get_page(ttm, index); -} - -int ttm_tt_populate(struct ttm_tt *ttm) -{ - struct page *page; - unsigned long i; - int ret; - - if (ttm->state != tt_unpopulated) - return 0; - - if (unlikely(ttm->page_flags & TTM_PAGE_FLAG_SWAPPED)) { - ret = ttm_tt_swapin(ttm); - if (unlikely(ret != 0)) - return ret; - } - - for (i = 0; i < ttm->num_pages; ++i) { - page = __ttm_tt_get_page(ttm, i); - if (!page) - return -ENOMEM; - } - ttm->state = tt_unbound; - return 0; -} -EXPORT_SYMBOL(ttm_tt_populate); - #ifdef CONFIG_X86 static inline int ttm_tt_set_page_caching(struct page *p, enum ttm_caching_state c_old, @@ -227,21 +162,6 @@ int ttm_tt_set_placement_caching(struct ttm_tt *ttm, uint32_t placement) } EXPORT_SYMBOL(ttm_tt_set_placement_caching);
-static void ttm_tt_free_alloced_pages(struct ttm_tt *ttm) -{ - unsigned i; - - for (i = 0; i < ttm->num_pages; ++i) { - if (ttm->pages[i]) { - ttm_mem_global_free_page(ttm->glob->mem_glob, - ttm->pages[i]); - ttm_put_pages(&ttm->pages[i], 1, ttm->page_flags, - ttm->caching_state, &ttm->dma_address[i]); - } - } - ttm->state = tt_unpopulated; -} - void ttm_tt_destroy(struct ttm_tt *ttm) { if (unlikely(ttm == NULL)) @@ -252,7 +172,7 @@ void ttm_tt_destroy(struct ttm_tt *ttm) }
if (likely(ttm->pages != NULL)) { - ttm_tt_free_alloced_pages(ttm); + ttm->bdev->driver->ttm_tt_unpopulate(ttm); ttm_tt_free_page_directory(ttm); }
@@ -307,7 +227,7 @@ int ttm_tt_bind(struct ttm_tt *ttm, struct ttm_mem_reg *bo_mem) if (ttm->state == tt_bound) return 0;
- ret = ttm_tt_populate(ttm); + ret = ttm->bdev->driver->ttm_tt_populate(ttm); if (ret) return ret;
@@ -321,7 +241,7 @@ int ttm_tt_bind(struct ttm_tt *ttm, struct ttm_mem_reg *bo_mem) } EXPORT_SYMBOL(ttm_tt_bind);
-static int ttm_tt_swapin(struct ttm_tt *ttm) +int ttm_tt_swapin(struct ttm_tt *ttm) { struct address_space *swap_space; struct file *swap_storage; @@ -343,7 +263,7 @@ static int ttm_tt_swapin(struct ttm_tt *ttm) ret = PTR_ERR(from_page); goto out_err; } - to_page = __ttm_tt_get_page(ttm, i); + to_page = ttm->pages[i]; if (unlikely(to_page == NULL)) goto out_err;
@@ -364,7 +284,6 @@ static int ttm_tt_swapin(struct ttm_tt *ttm)
return 0; out_err: - ttm_tt_free_alloced_pages(ttm); return ret; }
@@ -416,7 +335,7 @@ int ttm_tt_swapout(struct ttm_tt *ttm, struct file *persistent_swap_storage) page_cache_release(to_page); }
- ttm_tt_free_alloced_pages(ttm); + ttm->bdev->driver->ttm_tt_unpopulate(ttm); ttm->swap_storage = swap_storage; ttm->page_flags |= TTM_PAGE_FLAG_SWAPPED; if (persistent_swap_storage) diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c index cc72435..3986d74 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c @@ -28,6 +28,7 @@ #include "vmwgfx_drv.h" #include "ttm/ttm_bo_driver.h" #include "ttm/ttm_placement.h" +#include "ttm/ttm_page_alloc.h"
static uint32_t vram_placement_flags = TTM_PL_FLAG_VRAM | TTM_PL_FLAG_CACHED; @@ -334,6 +335,8 @@ static int vmw_sync_obj_wait(void *sync_obj, void *sync_arg,
struct ttm_bo_driver vmw_bo_driver = { .ttm_tt_create = &vmw_ttm_tt_create, + .ttm_tt_populate = &ttm_pool_populate, + .ttm_tt_unpopulate = &ttm_pool_unpopulate, .invalidate_caches = vmw_invalidate_caches, .init_mem_type = vmw_init_mem_type, .evict_flags = vmw_evict_flags, diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h index 6b8c5cd..ae06e42 100644 --- a/include/drm/ttm/ttm_bo_driver.h +++ b/include/drm/ttm/ttm_bo_driver.h @@ -319,6 +319,26 @@ struct ttm_bo_driver { struct page *dummy_read_page);
/**
+	 * ttm_tt_populate
+	 *
+	 * @ttm: The struct ttm_tt to contain the backing pages.
+	 *
+	 * Allocate all backing pages.
+	 * Returns:
+	 * -ENOMEM: Out of memory.
+	 */
+	int (*ttm_tt_populate)(struct ttm_tt *ttm);
+
+	/**
+	 * ttm_tt_unpopulate
+	 *
+	 * @ttm: The struct ttm_tt to contain the backing pages.
+	 *
+	 * Free all backing pages.
+	 */
+	void (*ttm_tt_unpopulate)(struct ttm_tt *ttm);
+
+	/**
	 * struct ttm_bo_driver member invalidate_caches
	 *
	 * @bdev: the buffer object device.
@@ -585,15 +605,6 @@ extern int ttm_tt_init(struct ttm_tt *ttm, struct ttm_bo_device *bdev,
 extern int ttm_tt_bind(struct ttm_tt *ttm, struct ttm_mem_reg *bo_mem);
/** - * ttm_tt_populate: - * - * @ttm: The struct ttm_tt to contain the backing pages. - * - * Add backing pages to all of @ttm - */ -extern int ttm_tt_populate(struct ttm_tt *ttm); - -/** * ttm_ttm_destroy: * * @ttm: The struct ttm_tt. @@ -612,19 +623,13 @@ extern void ttm_tt_destroy(struct ttm_tt *ttm); extern void ttm_tt_unbind(struct ttm_tt *ttm);
/**
- * ttm_ttm_destroy:
+ * ttm_tt_swapin:
 *
 * @ttm: The struct ttm_tt.
- * @index: Index of the desired page.
 *
- * Return a pointer to the struct page backing @ttm at page
- * index @index. If the page is unpopulated, one will be allocated to
- * populate that index.
- *
- * Returns:
- * NULL on OOM.
+ * Swap in a previously swapped-out ttm_tt.
 */
-extern struct page *ttm_tt_get_page(struct ttm_tt *ttm, int index);
+extern int ttm_tt_swapin(struct ttm_tt *ttm);
/** * ttm_tt_cache_flush: diff --git a/include/drm/ttm/ttm_page_alloc.h b/include/drm/ttm/ttm_page_alloc.h index fe61c8d..18deeee 100644 --- a/include/drm/ttm/ttm_page_alloc.h +++ b/include/drm/ttm/ttm_page_alloc.h @@ -68,6 +68,24 @@ int ttm_page_alloc_init(struct ttm_mem_global *glob, unsigned max_pages); void ttm_page_alloc_fini(void);
/**
+ * ttm_pool_populate:
+ *
+ * @ttm: The struct ttm_tt to contain the backing pages.
+ *
+ * Add backing pages to all of @ttm
+ */
+extern int ttm_pool_populate(struct ttm_tt *ttm);
+
+/**
+ * ttm_pool_unpopulate:
+ *
+ * @ttm: The struct ttm_tt from which to free the backing pages.
+ *
+ * Free all pages of @ttm
+ */
+extern void ttm_pool_unpopulate(struct ttm_tt *ttm);
+
+/**
 * Output the state of pools to debugfs file
 */
extern int ttm_page_alloc_debugfs(struct seq_file *m, void *data);
From: Konrad Rzeszutek Wilk konrad.wilk@oracle.com
In the TTM world the pages for the graphics drivers are kept in three different pools: write-combined, uncached, and cached (write-back). When the pages are used by the graphics driver, the graphics adapter programs them in via its built-in MMU (or AGP). The programming requires the virtual address (from the graphics adapter's perspective) and the physical address (either system RAM or the memory on the card), which is obtained using the pci_map_* calls (which do the virtual-to-physical, or bus address, translation). During the graphics application's "life" those pages can be shuffled around, swapped out to disk, moved from VRAM to system RAM or vice versa. This all works with the existing TTM pool code - except when we want to use the software IOTLB (SWIOTLB) code to "map" the physical addresses to the graphics adapter's MMU: we end up programming the bounce buffer's physical address instead of the TTM pool memory's, and get a non-working driver. There are two solutions: 1) use the DMA API to allocate pages that the DMA API has already screened as device-reachable, or 2) use the pci_sync_* calls to copy the pages to and from the bounce buffer.
This patch fixes the issue by allocating pages using the DMA API. The second option is viable, but it has performance drawbacks and potential correctness issues: think of a write-combined page being bounced (SWIOTLB->TTM), with WC set on the TTM page and the copy from the SWIOTLB buffer not making it to the TTM page until the page has been recycled in the pool (and used by another application).
The bounce buffer does not get activated often - only in cases where we have a 32-bit capable card and we want to use a page that is allocated above the 4GB limit. The bounce buffer offers the solution of copying the contents of that above-4GB page to a location below 4GB and then back when the operation has been completed (or vice versa). This is done by using the 'pci_sync_*' calls. Note: if you look carefully enough in the existing TTM page pool code you will notice the GFP_DMA32 flag is used - which should guarantee that the provided page is under 4GB. That is indeed the case, except it gets ignored in two situations:
 - If the user specifies 'swiotlb=force', which bounces _every_ page.
 - If the user is running a Xen PV Linux guest (which uses the SWIOTLB, and where the underlying PFNs aren't necessarily under 4GB).
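For reference, option 2) would look roughly like the following around every CPU access to a bounced page (sketch only; pdev and bus_addr stand in for the real driver state):

/* Option 2 (not what this patch does): keep the existing pool pages and
 * pay for a copy through the bounce buffer in each direction.
 */
pci_dma_sync_single_for_cpu(pdev, bus_addr, PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
/* ... CPU reads/writes the page ... */
pci_dma_sync_single_for_device(pdev, bus_addr, PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);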
To avoid this extra copying, the other option is to allocate the pages using the DMA API, so that there is no need to map the page and perform the expensive 'pci_sync_*' calls.
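Concretely, each pool page then comes from dma_alloc_coherent(), which hands back both a kernel virtual address and a bus address the device can use directly, so nothing ever goes through the bounce buffer. A minimal sketch ('dev' being the graphics adapter's struct device):

/* Sketch: allocate one page that is guaranteed device-reachable. */
void *vaddr;
dma_addr_t bus_addr;

vaddr = dma_alloc_coherent(dev, PAGE_SIZE, &bus_addr, GFP_KERNEL);
if (!vaddr)
	return -ENOMEM;
/* program bus_addr into the GPU MMU; on teardown: */
dma_free_coherent(dev, PAGE_SIZE, vaddr, bus_addr);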
For this, the DMA API-capable TTM pool requires the 'struct device' in order to call the DMA API properly. It also has to track the virtual and bus address of each page it hands out, in case the page ends up being swapped out or de-allocated - to make sure it is de-allocated using the proper 'struct device'.
Implementation-wise, the code keeps two lists: one that is attached to the 'struct device' (via the dev->dma_pools list) and a global one to be used when the 'struct device' is unavailable (think shrinker code). The global list can iterate over all of the 'struct device's and their associated dma_pools; the list in dev->dma_pools can only iterate that device's dma_pools.

[Diagram: two pools associated with the device (WC and UC), and the parallel list containing the 'struct dev' and 'struct dma_pool' entries]
The maximum number of DMA pools a device can have is six: write-combined, uncached, and cached, plus the DMA32 variants: write-combined dma32, uncached dma32, and cached dma32.
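The pool type is just the caching state with an optional DMA32 bit OR-ed in, so e.g. a write-combined allocation for a 32-bit-only device picks the "wc dma32" pool:

/* Worked example of the flag composition (see ttm_to_type() in the patch):
 * tt_wc caching + TTM_PAGE_FLAG_DMA32 selects POOL_IS_WC_DMA32.
 */
enum pool_type type = IS_WC | IS_DMA32;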
Currently this code only gets activated when any variant of the SWIOTLB IOMMU code is running (Intel without VT-d, AMD without GART, IBM Calgary and Xen PV with PCI devices).
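Whether any SWIOTLB is in use can be probed with the swiotlb_nr_tbl() helper exported by the first patch of this series; a driver could gate its populate hook on that, roughly like this (the foo_* names are hypothetical):

#include <linux/swiotlb.h>

static int foo_ttm_tt_populate(struct ttm_tt *ttm)
{
	struct device *dev = foo_get_device(ttm);	/* hypothetical helper */

	/* Use the DMA-aware pool only when a SWIOTLB is actually active. */
	if (swiotlb_nr_tbl())
		return ttm_dma_populate(ttm, dev);
	return ttm_pool_populate(ttm);
}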
Tested-by: Michel Dänzer michel@daenzer.net
[v1: Using swiotlb_nr_tbl instead of swiotlb_enabled]
[v2: Major overhaul - added 'inuse_list' to separate in-use pages from free ones, and reordered the lists to get better performance.]
[v3: Added comments and some logic based on review, added Jerome's tag]
[v4: rebase on top of ttm_tt & ttm_backend merge]
[v5: rebase on top of ttm memory accounting overhaul]
[v6: New rebase on top of more memory accounting changes]
[v7: well, rebase on top of no memory accounting changes]
[v8: make sure pages list is initialized empty]
[v9: call ttm_mem_global_free_page in unpopulate for accurate accounting]
Signed-off-by: Konrad Rzeszutek Wilk konrad.wilk@oracle.com
Reviewed-by: Jerome Glisse jglisse@redhat.com
Acked-by: Thomas Hellstrom thellstrom@vmware.com
--- drivers/gpu/drm/ttm/Makefile | 4 + drivers/gpu/drm/ttm/ttm_memory.c | 2 + drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 1134 ++++++++++++++++++++++++++++++ drivers/gpu/drm/ttm/ttm_tt.c | 1 + include/drm/ttm/ttm_bo_driver.h | 2 + include/drm/ttm/ttm_page_alloc.h | 36 + 6 files changed, 1179 insertions(+), 0 deletions(-) create mode 100644 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
diff --git a/drivers/gpu/drm/ttm/Makefile b/drivers/gpu/drm/ttm/Makefile index f3cf6f0..b2b33dd 100644 --- a/drivers/gpu/drm/ttm/Makefile +++ b/drivers/gpu/drm/ttm/Makefile @@ -7,4 +7,8 @@ ttm-y := ttm_agp_backend.o ttm_memory.o ttm_tt.o ttm_bo.o \ ttm_object.o ttm_lock.o ttm_execbuf_util.o ttm_page_alloc.o \ ttm_bo_manager.o
+ifeq ($(CONFIG_SWIOTLB),y) +ttm-y += ttm_page_alloc_dma.o +endif + obj-$(CONFIG_DRM_TTM) += ttm.o diff --git a/drivers/gpu/drm/ttm/ttm_memory.c b/drivers/gpu/drm/ttm/ttm_memory.c index e70ddd8..9eba8e9 100644 --- a/drivers/gpu/drm/ttm/ttm_memory.c +++ b/drivers/gpu/drm/ttm/ttm_memory.c @@ -395,6 +395,7 @@ int ttm_mem_global_init(struct ttm_mem_global *glob) zone->name, (unsigned long long) zone->max_mem >> 10); } ttm_page_alloc_init(glob, glob->zone_kernel->max_mem/(2*PAGE_SIZE)); + ttm_dma_page_alloc_init(glob, glob->zone_kernel->max_mem/(2*PAGE_SIZE)); return 0; out_no_zone: ttm_mem_global_release(glob); @@ -409,6 +410,7 @@ void ttm_mem_global_release(struct ttm_mem_global *glob)
/* let the page allocator first stop the shrink work. */ ttm_page_alloc_fini(); + ttm_dma_page_alloc_fini();
flush_workqueue(glob->swap_queue); destroy_workqueue(glob->swap_queue); diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c new file mode 100644 index 0000000..7a47793 --- /dev/null +++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c @@ -0,0 +1,1134 @@ +/* + * Copyright 2011 (c) Oracle Corp. + + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sub license, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice (including the + * next paragraph) shall be included in all copies or substantial portions + * of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + * DEALINGS IN THE SOFTWARE. + * + * Author: Konrad Rzeszutek Wilk konrad.wilk@oracle.com + */ + +/* + * A simple DMA pool losely based on dmapool.c. It has certain advantages + * over the DMA pools: + * - Pool collects resently freed pages for reuse (and hooks up to + * the shrinker). + * - Tracks currently in use pages + * - Tracks whether the page is UC, WB or cached (and reverts to WB + * when freed). + */ + +#include <linux/dma-mapping.h> +#include <linux/list.h> +#include <linux/seq_file.h> /* for seq_printf */ +#include <linux/slab.h> +#include <linux/spinlock.h> +#include <linux/highmem.h> +#include <linux/mm_types.h> +#include <linux/module.h> +#include <linux/mm.h> +#include <linux/atomic.h> +#include <linux/device.h> +#include <linux/kthread.h> +#include "ttm/ttm_bo_driver.h" +#include "ttm/ttm_page_alloc.h" +#ifdef TTM_HAS_AGP +#include <asm/agp.h> +#endif + +#define NUM_PAGES_TO_ALLOC (PAGE_SIZE/sizeof(struct page *)) +#define SMALL_ALLOCATION 4 +#define FREE_ALL_PAGES (~0U) +/* times are in msecs */ +#define IS_UNDEFINED (0) +#define IS_WC (1<<1) +#define IS_UC (1<<2) +#define IS_CACHED (1<<3) +#define IS_DMA32 (1<<4) + +enum pool_type { + POOL_IS_UNDEFINED, + POOL_IS_WC = IS_WC, + POOL_IS_UC = IS_UC, + POOL_IS_CACHED = IS_CACHED, + POOL_IS_WC_DMA32 = IS_WC | IS_DMA32, + POOL_IS_UC_DMA32 = IS_UC | IS_DMA32, + POOL_IS_CACHED_DMA32 = IS_CACHED | IS_DMA32, +}; +/* + * The pool structure. There are usually six pools: + * - generic (not restricted to DMA32): + * - write combined, uncached, cached. + * - dma32 (up to 2^32 - so up 4GB): + * - write combined, uncached, cached. + * for each 'struct device'. The 'cached' is for pages that are actively used. + * The other ones can be shrunk by the shrinker API if neccessary. + * @pools: The 'struct device->dma_pools' link. + * @type: Type of the pool + * @lock: Protects the inuse_list and free_list from concurrnet access. Must be + * used with irqsave/irqrestore variants because pool allocator maybe called + * from delayed work. + * @inuse_list: Pool of pages that are in use. 
The order is very important and + * it is in the order that the TTM pages that are put back are in. + * @free_list: Pool of pages that are free to be used. No order requirements. + * @dev: The device that is associated with these pools. + * @size: Size used during DMA allocation. + * @npages_free: Count of available pages for re-use. + * @npages_in_use: Count of pages that are in use. + * @nfrees: Stats when pool is shrinking. + * @nrefills: Stats when the pool is grown. + * @gfp_flags: Flags to pass for alloc_page. + * @name: Name of the pool. + * @dev_name: Name derieved from dev - similar to how dev_info works. + * Used during shutdown as the dev_info during release is unavailable. + */ +struct dma_pool { + struct list_head pools; /* The 'struct device->dma_pools link */ + enum pool_type type; + spinlock_t lock; + struct list_head inuse_list; + struct list_head free_list; + struct device *dev; + unsigned size; + unsigned npages_free; + unsigned npages_in_use; + unsigned long nfrees; /* Stats when shrunk. */ + unsigned long nrefills; /* Stats when grown. */ + gfp_t gfp_flags; + char name[13]; /* "cached dma32" */ + char dev_name[64]; /* Constructed from dev */ +}; + +/* + * The accounting page keeping track of the allocated page along with + * the DMA address. + * @page_list: The link to the 'page_list' in 'struct dma_pool'. + * @vaddr: The virtual address of the page + * @dma: The bus address of the page. If the page is not allocated + * via the DMA API, it will be -1. + */ +struct dma_page { + struct list_head page_list; + void *vaddr; + struct page *p; + dma_addr_t dma; +}; + +/* + * Limits for the pool. They are handled without locks because only place where + * they may change is in sysfs store. They won't have immediate effect anyway + * so forcing serialization to access them is pointless. + */ + +struct ttm_pool_opts { + unsigned alloc_size; + unsigned max_size; + unsigned small; +}; + +/* + * Contains the list of all of the 'struct device' and their corresponding + * DMA pools. Guarded by _mutex->lock. + * @pools: The link to 'struct ttm_pool_manager->pools' + * @dev: The 'struct device' associated with the 'pool' + * @pool: The 'struct dma_pool' associated with the 'dev' + */ +struct device_pools { + struct list_head pools; + struct device *dev; + struct dma_pool *pool; +}; + +/* + * struct ttm_pool_manager - Holds memory pools for fast allocation + * + * @lock: Lock used when adding/removing from pools + * @pools: List of 'struct device' and 'struct dma_pool' tuples. + * @options: Limits for the pool. + * @npools: Total amount of pools in existence. 
+ * @shrinker: The structure used by [un|]register_shrinker + */ +struct ttm_pool_manager { + struct mutex lock; + struct list_head pools; + struct ttm_pool_opts options; + unsigned npools; + struct shrinker mm_shrink; + struct kobject kobj; +}; + +static struct ttm_pool_manager *_manager; + +static struct attribute ttm_page_pool_max = { + .name = "pool_max_size", + .mode = S_IRUGO | S_IWUSR +}; +static struct attribute ttm_page_pool_small = { + .name = "pool_small_allocation", + .mode = S_IRUGO | S_IWUSR +}; +static struct attribute ttm_page_pool_alloc_size = { + .name = "pool_allocation_size", + .mode = S_IRUGO | S_IWUSR +}; + +static struct attribute *ttm_pool_attrs[] = { + &ttm_page_pool_max, + &ttm_page_pool_small, + &ttm_page_pool_alloc_size, + NULL +}; + +static void ttm_pool_kobj_release(struct kobject *kobj) +{ + struct ttm_pool_manager *m = + container_of(kobj, struct ttm_pool_manager, kobj); + kfree(m); +} + +static ssize_t ttm_pool_store(struct kobject *kobj, struct attribute *attr, + const char *buffer, size_t size) +{ + struct ttm_pool_manager *m = + container_of(kobj, struct ttm_pool_manager, kobj); + int chars; + unsigned val; + chars = sscanf(buffer, "%u", &val); + if (chars == 0) + return size; + + /* Convert kb to number of pages */ + val = val / (PAGE_SIZE >> 10); + + if (attr == &ttm_page_pool_max) + m->options.max_size = val; + else if (attr == &ttm_page_pool_small) + m->options.small = val; + else if (attr == &ttm_page_pool_alloc_size) { + if (val > NUM_PAGES_TO_ALLOC*8) { + printk(KERN_ERR TTM_PFX + "Setting allocation size to %lu " + "is not allowed. Recommended size is " + "%lu\n", + NUM_PAGES_TO_ALLOC*(PAGE_SIZE >> 7), + NUM_PAGES_TO_ALLOC*(PAGE_SIZE >> 10)); + return size; + } else if (val > NUM_PAGES_TO_ALLOC) { + printk(KERN_WARNING TTM_PFX + "Setting allocation size to " + "larger than %lu is not recommended.\n", + NUM_PAGES_TO_ALLOC*(PAGE_SIZE >> 10)); + } + m->options.alloc_size = val; + } + + return size; +} + +static ssize_t ttm_pool_show(struct kobject *kobj, struct attribute *attr, + char *buffer) +{ + struct ttm_pool_manager *m = + container_of(kobj, struct ttm_pool_manager, kobj); + unsigned val = 0; + + if (attr == &ttm_page_pool_max) + val = m->options.max_size; + else if (attr == &ttm_page_pool_small) + val = m->options.small; + else if (attr == &ttm_page_pool_alloc_size) + val = m->options.alloc_size; + + val = val * (PAGE_SIZE >> 10); + + return snprintf(buffer, PAGE_SIZE, "%u\n", val); +} + +static const struct sysfs_ops ttm_pool_sysfs_ops = { + .show = &ttm_pool_show, + .store = &ttm_pool_store, +}; + +static struct kobj_type ttm_pool_kobj_type = { + .release = &ttm_pool_kobj_release, + .sysfs_ops = &ttm_pool_sysfs_ops, + .default_attrs = ttm_pool_attrs, +}; + +#ifndef CONFIG_X86 +static int set_pages_array_wb(struct page **pages, int addrinarray) +{ +#ifdef TTM_HAS_AGP + int i; + + for (i = 0; i < addrinarray; i++) + unmap_page_from_agp(pages[i]); +#endif + return 0; +} + +static int set_pages_array_wc(struct page **pages, int addrinarray) +{ +#ifdef TTM_HAS_AGP + int i; + + for (i = 0; i < addrinarray; i++) + map_page_into_agp(pages[i]); +#endif + return 0; +} + +static int set_pages_array_uc(struct page **pages, int addrinarray) +{ +#ifdef TTM_HAS_AGP + int i; + + for (i = 0; i < addrinarray; i++) + map_page_into_agp(pages[i]); +#endif + return 0; +} +#endif /* for !CONFIG_X86 */ + +static int ttm_set_pages_caching(struct dma_pool *pool, + struct page **pages, unsigned cpages) +{ + int r = 0; + /* Set page caching */ + if (pool->type & 
IS_UC) { + r = set_pages_array_uc(pages, cpages); + if (r) + pr_err(TTM_PFX + "%s: Failed to set %d pages to uc!\n", + pool->dev_name, cpages); + } + if (pool->type & IS_WC) { + r = set_pages_array_wc(pages, cpages); + if (r) + pr_err(TTM_PFX + "%s: Failed to set %d pages to wc!\n", + pool->dev_name, cpages); + } + return r; +} + +static void __ttm_dma_free_page(struct dma_pool *pool, struct dma_page *d_page) +{ + dma_addr_t dma = d_page->dma; + dma_free_coherent(pool->dev, pool->size, d_page->vaddr, dma); + + kfree(d_page); + d_page = NULL; +} +static struct dma_page *__ttm_dma_alloc_page(struct dma_pool *pool) +{ + struct dma_page *d_page; + + d_page = kmalloc(sizeof(struct dma_page), GFP_KERNEL); + if (!d_page) + return NULL; + + d_page->vaddr = dma_alloc_coherent(pool->dev, pool->size, + &d_page->dma, + pool->gfp_flags); + if (d_page->vaddr) + d_page->p = virt_to_page(d_page->vaddr); + else { + kfree(d_page); + d_page = NULL; + } + return d_page; +} +static enum pool_type ttm_to_type(int flags, enum ttm_caching_state cstate) +{ + enum pool_type type = IS_UNDEFINED; + + if (flags & TTM_PAGE_FLAG_DMA32) + type |= IS_DMA32; + if (cstate == tt_cached) + type |= IS_CACHED; + else if (cstate == tt_uncached) + type |= IS_UC; + else + type |= IS_WC; + + return type; +} + +static void ttm_pool_update_free_locked(struct dma_pool *pool, + unsigned freed_pages) +{ + pool->npages_free -= freed_pages; + pool->nfrees += freed_pages; + +} + +/* set memory back to wb and free the pages. */ +static void ttm_dma_pages_put(struct dma_pool *pool, struct list_head *d_pages, + struct page *pages[], unsigned npages) +{ + struct dma_page *d_page, *tmp; + + if (npages && set_pages_array_wb(pages, npages)) + pr_err(TTM_PFX "%s: Failed to set %d pages to wb!\n", + pool->dev_name, npages); + + list_for_each_entry_safe(d_page, tmp, d_pages, page_list) { + list_del(&d_page->page_list); + __ttm_dma_free_page(pool, d_page); + } +} + +static void ttm_dma_page_put(struct dma_pool *pool, struct dma_page *d_page) +{ + if (set_pages_array_wb(&d_page->p, 1)) + pr_err(TTM_PFX "%s: Failed to set %d pages to wb!\n", + pool->dev_name, 1); + + list_del(&d_page->page_list); + __ttm_dma_free_page(pool, d_page); +} + +/* + * Free pages from pool. + * + * To prevent hogging the ttm_swap process we only free NUM_PAGES_TO_ALLOC + * number of pages in one go. + * + * @pool: to free the pages from + * @nr_free: If set to true will free all pages in pool + **/ +static unsigned ttm_dma_page_pool_free(struct dma_pool *pool, unsigned nr_free) +{ + unsigned long irq_flags; + struct dma_page *dma_p, *tmp; + struct page **pages_to_free; + struct list_head d_pages; + unsigned freed_pages = 0, + npages_to_free = nr_free; + + if (NUM_PAGES_TO_ALLOC < nr_free) + npages_to_free = NUM_PAGES_TO_ALLOC; +#if 0 + if (nr_free > 1) { + pr_debug("%s: (%s:%d) Attempting to free %d (%d) pages\n", + pool->dev_name, pool->name, current->pid, + npages_to_free, nr_free); + } +#endif + pages_to_free = kmalloc(npages_to_free * sizeof(struct page *), + GFP_KERNEL); + + if (!pages_to_free) { + pr_err(TTM_PFX + "%s: Failed to allocate memory for pool free operation.\n", + pool->dev_name); + return 0; + } + INIT_LIST_HEAD(&d_pages); +restart: + spin_lock_irqsave(&pool->lock, irq_flags); + + /* We picking the oldest ones off the list */ + list_for_each_entry_safe_reverse(dma_p, tmp, &pool->free_list, + page_list) { + if (freed_pages >= npages_to_free) + break; + + /* Move the dma_page from one list to another. 
*/ + list_move(&dma_p->page_list, &d_pages); + + pages_to_free[freed_pages++] = dma_p->p; + /* We can only remove NUM_PAGES_TO_ALLOC at a time. */ + if (freed_pages >= NUM_PAGES_TO_ALLOC) { + + ttm_pool_update_free_locked(pool, freed_pages); + /** + * Because changing page caching is costly + * we unlock the pool to prevent stalling. + */ + spin_unlock_irqrestore(&pool->lock, irq_flags); + + ttm_dma_pages_put(pool, &d_pages, pages_to_free, + freed_pages); + + INIT_LIST_HEAD(&d_pages); + + if (likely(nr_free != FREE_ALL_PAGES)) + nr_free -= freed_pages; + + if (NUM_PAGES_TO_ALLOC >= nr_free) + npages_to_free = nr_free; + else + npages_to_free = NUM_PAGES_TO_ALLOC; + + freed_pages = 0; + + /* free all so restart the processing */ + if (nr_free) + goto restart; + + /* Not allowed to fall through or break because + * following context is inside spinlock while we are + * outside here. + */ + goto out; + + } + } + + /* remove range of pages from the pool */ + if (freed_pages) { + ttm_pool_update_free_locked(pool, freed_pages); + nr_free -= freed_pages; + } + + spin_unlock_irqrestore(&pool->lock, irq_flags); + + if (freed_pages) + ttm_dma_pages_put(pool, &d_pages, pages_to_free, freed_pages); +out: + kfree(pages_to_free); + return nr_free; +} + +static void ttm_dma_free_pool(struct device *dev, enum pool_type type) +{ + struct device_pools *p; + struct dma_pool *pool; + + if (!dev) + return; + + mutex_lock(&_manager->lock); + list_for_each_entry_reverse(p, &_manager->pools, pools) { + if (p->dev != dev) + continue; + pool = p->pool; + if (pool->type != type) + continue; + + list_del(&p->pools); + kfree(p); + _manager->npools--; + break; + } + list_for_each_entry_reverse(pool, &dev->dma_pools, pools) { + if (pool->type != type) + continue; + /* Takes a spinlock.. */ + ttm_dma_page_pool_free(pool, FREE_ALL_PAGES); + WARN_ON(((pool->npages_in_use + pool->npages_free) != 0)); + /* This code path is called after _all_ references to the + * struct device has been dropped - so nobody should be + * touching it. In case somebody is trying to _add_ we are + * guarded by the mutex. */ + list_del(&pool->pools); + kfree(pool); + break; + } + mutex_unlock(&_manager->lock); +} + +/* + * On free-ing of the 'struct device' this deconstructor is run. + * Albeit the pool might have already been freed earlier. 
+ */ +static void ttm_dma_pool_release(struct device *dev, void *res) +{ + struct dma_pool *pool = *(struct dma_pool **)res; + + if (pool) + ttm_dma_free_pool(dev, pool->type); +} + +static int ttm_dma_pool_match(struct device *dev, void *res, void *match_data) +{ + return *(struct dma_pool **)res == match_data; +} + +static struct dma_pool *ttm_dma_pool_init(struct device *dev, gfp_t flags, + enum pool_type type) +{ + char *n[] = {"wc", "uc", "cached", " dma32", "unknown",}; + enum pool_type t[] = {IS_WC, IS_UC, IS_CACHED, IS_DMA32, IS_UNDEFINED}; + struct device_pools *sec_pool = NULL; + struct dma_pool *pool = NULL, **ptr; + unsigned i; + int ret = -ENODEV; + char *p; + + if (!dev) + return NULL; + + ptr = devres_alloc(ttm_dma_pool_release, sizeof(*ptr), GFP_KERNEL); + if (!ptr) + return NULL; + + ret = -ENOMEM; + + pool = kmalloc_node(sizeof(struct dma_pool), GFP_KERNEL, + dev_to_node(dev)); + if (!pool) + goto err_mem; + + sec_pool = kmalloc_node(sizeof(struct device_pools), GFP_KERNEL, + dev_to_node(dev)); + if (!sec_pool) + goto err_mem; + + INIT_LIST_HEAD(&sec_pool->pools); + sec_pool->dev = dev; + sec_pool->pool = pool; + + INIT_LIST_HEAD(&pool->free_list); + INIT_LIST_HEAD(&pool->inuse_list); + INIT_LIST_HEAD(&pool->pools); + spin_lock_init(&pool->lock); + pool->dev = dev; + pool->npages_free = pool->npages_in_use = 0; + pool->nfrees = 0; + pool->gfp_flags = flags; + pool->size = PAGE_SIZE; + pool->type = type; + pool->nrefills = 0; + p = pool->name; + for (i = 0; i < 5; i++) { + if (type & t[i]) { + p += snprintf(p, sizeof(pool->name) - (p - pool->name), + "%s", n[i]); + } + } + *p = 0; + /* We copy the name for pr_ calls b/c when dma_pool_destroy is called + * - the kobj->name has already been deallocated.*/ + snprintf(pool->dev_name, sizeof(pool->dev_name), "%s %s", + dev_driver_string(dev), dev_name(dev)); + mutex_lock(&_manager->lock); + /* You can get the dma_pool from either the global: */ + list_add(&sec_pool->pools, &_manager->pools); + _manager->npools++; + /* or from 'struct device': */ + list_add(&pool->pools, &dev->dma_pools); + mutex_unlock(&_manager->lock); + + *ptr = pool; + devres_add(dev, ptr); + + return pool; +err_mem: + devres_free(ptr); + kfree(sec_pool); + kfree(pool); + return ERR_PTR(ret); +} + +static struct dma_pool *ttm_dma_find_pool(struct device *dev, + enum pool_type type) +{ + struct dma_pool *pool, *tmp, *found = NULL; + + if (type == IS_UNDEFINED) + return found; + + /* NB: We iterate on the 'struct dev' which has no spinlock, but + * it does have a kref which we have taken. The kref is taken during + * graphic driver loading - in the drm_pci_init it calls either + * pci_dev_get or pci_register_driver which both end up taking a kref + * on 'struct device'. + * + * On teardown, the graphic drivers end up quiescing the TTM (put_pages) + * and calls the dev_res deconstructors: ttm_dma_pool_release. The nice + * thing is at that point of time there are no pages associated with the + * driver so this function will not be called. + */ + list_for_each_entry_safe(pool, tmp, &dev->dma_pools, pools) { + if (pool->type != type) + continue; + found = pool; + break; + } + return found; +} + +/* + * Free pages the pages that failed to change the caching state. If there + * are pages that have changed their caching state already put them to the + * pool. 
+ */ +static void ttm_dma_handle_caching_state_failure(struct dma_pool *pool, + struct list_head *d_pages, + struct page **failed_pages, + unsigned cpages) +{ + struct dma_page *d_page, *tmp; + struct page *p; + unsigned i = 0; + + p = failed_pages[0]; + if (!p) + return; + /* Find the failed page. */ + list_for_each_entry_safe(d_page, tmp, d_pages, page_list) { + if (d_page->p != p) + continue; + /* .. and then progress over the full list. */ + list_del(&d_page->page_list); + __ttm_dma_free_page(pool, d_page); + if (++i < cpages) + p = failed_pages[i]; + else + break; + } + +} + +/* + * Allocate 'count' pages, and put 'need' number of them on the + * 'pages' and as well on the 'dma_address' starting at 'dma_offset' offset. + * The full list of pages should also be on 'd_pages'. + * We return zero for success, and negative numbers as errors. + */ +static int ttm_dma_pool_alloc_new_pages(struct dma_pool *pool, + struct list_head *d_pages, + unsigned count) +{ + struct page **caching_array; + struct dma_page *dma_p; + struct page *p; + int r = 0; + unsigned i, cpages; + unsigned max_cpages = min(count, + (unsigned)(PAGE_SIZE/sizeof(struct page *))); + + /* allocate array for page caching change */ + caching_array = kmalloc(max_cpages*sizeof(struct page *), GFP_KERNEL); + + if (!caching_array) { + pr_err(TTM_PFX + "%s: Unable to allocate table for new pages.", + pool->dev_name); + return -ENOMEM; + } + + if (count > 1) { + pr_debug("%s: (%s:%d) Getting %d pages\n", + pool->dev_name, pool->name, current->pid, + count); + } + + for (i = 0, cpages = 0; i < count; ++i) { + dma_p = __ttm_dma_alloc_page(pool); + if (!dma_p) { + pr_err(TTM_PFX "%s: Unable to get page %u.\n", + pool->dev_name, i); + + /* store already allocated pages in the pool after + * setting the caching state */ + if (cpages) { + r = ttm_set_pages_caching(pool, caching_array, + cpages); + if (r) + ttm_dma_handle_caching_state_failure( + pool, d_pages, caching_array, + cpages); + } + r = -ENOMEM; + goto out; + } + p = dma_p->p; +#ifdef CONFIG_HIGHMEM + /* gfp flags of highmem page should never be dma32 so we + * we should be fine in such case + */ + if (!PageHighMem(p)) +#endif + { + caching_array[cpages++] = p; + if (cpages == max_cpages) { + /* Note: Cannot hold the spinlock */ + r = ttm_set_pages_caching(pool, caching_array, + cpages); + if (r) { + ttm_dma_handle_caching_state_failure( + pool, d_pages, caching_array, + cpages); + goto out; + } + cpages = 0; + } + } + list_add(&dma_p->page_list, d_pages); + } + + if (cpages) { + r = ttm_set_pages_caching(pool, caching_array, cpages); + if (r) + ttm_dma_handle_caching_state_failure(pool, d_pages, + caching_array, cpages); + } +out: + kfree(caching_array); + return r; +} + +/* + * @return count of pages still required to fulfill the request. +*/ +static int ttm_dma_page_pool_fill_locked(struct dma_pool *pool, + unsigned long *irq_flags) +{ + unsigned count = _manager->options.small; + int r = pool->npages_free; + + if (count > pool->npages_free) { + struct list_head d_pages; + + INIT_LIST_HEAD(&d_pages); + + spin_unlock_irqrestore(&pool->lock, *irq_flags); + + /* Returns how many more are neccessary to fulfill the + * request. */ + r = ttm_dma_pool_alloc_new_pages(pool, &d_pages, count); + + spin_lock_irqsave(&pool->lock, *irq_flags); + if (!r) { + /* Add the fresh to the end.. 
*/ + list_splice(&d_pages, &pool->free_list); + ++pool->nrefills; + pool->npages_free += count; + r = count; + } else { + struct dma_page *d_page; + unsigned cpages = 0; + + pr_err(TTM_PFX "%s: Failed to fill %s pool (r:%d)!\n", + pool->dev_name, pool->name, r); + + list_for_each_entry(d_page, &d_pages, page_list) { + cpages++; + } + list_splice_tail(&d_pages, &pool->free_list); + pool->npages_free += cpages; + r = cpages; + } + } + return r; +} + +/* + * @return count of pages still required to fulfill the request. + * The populate list is actually a stack (not that is matters as TTM + * allocates one page at a time. + */ +static int ttm_dma_pool_get_pages(struct dma_pool *pool, + struct ttm_tt *ttm, + unsigned index) +{ + struct dma_page *d_page; + unsigned long irq_flags; + int count, r = -ENOMEM; + + spin_lock_irqsave(&pool->lock, irq_flags); + count = ttm_dma_page_pool_fill_locked(pool, &irq_flags); + if (count) { + d_page = list_first_entry(&pool->free_list, struct dma_page, page_list); + ttm->pages[index] = d_page->p; + ttm->dma_address[index] = d_page->dma; + list_move_tail(&d_page->page_list, &ttm->alloc_list); + r = 0; + pool->npages_in_use += 1; + pool->npages_free -= 1; + } + spin_unlock_irqrestore(&pool->lock, irq_flags); + return r; +} + +/* + * On success pages list will hold count number of correctly + * cached pages. On failure will hold the negative return value (-ENOMEM, etc). + */ +int ttm_dma_populate(struct ttm_tt *ttm, struct device *dev) +{ + struct ttm_mem_global *mem_glob = ttm->glob->mem_glob; + struct dma_pool *pool; + enum pool_type type; + unsigned i; + gfp_t gfp_flags; + int ret; + + if (ttm->state != tt_unpopulated) + return 0; + + type = ttm_to_type(ttm->page_flags, ttm->caching_state); + if (ttm->page_flags & TTM_PAGE_FLAG_DMA32) + gfp_flags = GFP_USER | GFP_DMA32; + else + gfp_flags = GFP_HIGHUSER; + if (ttm->page_flags & TTM_PAGE_FLAG_ZERO_ALLOC) + gfp_flags |= __GFP_ZERO; + + pool = ttm_dma_find_pool(dev, type); + if (!pool) { + pool = ttm_dma_pool_init(dev, gfp_flags, type); + if (IS_ERR_OR_NULL(pool)) { + return -ENOMEM; + } + } + + INIT_LIST_HEAD(&ttm->alloc_list); + for (i = 0; i < ttm->num_pages; ++i) { + ret = ttm_dma_pool_get_pages(pool, ttm, i); + if (ret != 0) { + ttm_dma_unpopulate(ttm, dev); + return -ENOMEM; + } + + ret = ttm_mem_global_alloc_page(mem_glob, ttm->pages[i], + false, false); + if (unlikely(ret != 0)) { + ttm_dma_unpopulate(ttm, dev); + return -ENOMEM; + } + } + + if (unlikely(ttm->page_flags & TTM_PAGE_FLAG_SWAPPED)) { + ret = ttm_tt_swapin(ttm); + if (unlikely(ret != 0)) { + ttm_dma_unpopulate(ttm, dev); + return ret; + } + } + + ttm->state = tt_unbound; + return 0; +} +EXPORT_SYMBOL_GPL(ttm_dma_populate); + +/* Get good estimation how many pages are free in pools */ +static int ttm_dma_pool_get_num_unused_pages(void) +{ + struct device_pools *p; + unsigned total = 0; + + mutex_lock(&_manager->lock); + list_for_each_entry(p, &_manager->pools, pools) { + if (p) + total += p->pool->npages_free; + } + mutex_unlock(&_manager->lock); + return total; +} + +/* Put all pages in pages list to correct pool to wait for reuse */ +void ttm_dma_unpopulate(struct ttm_tt *ttm, struct device *dev) +{ + struct dma_pool *pool; + struct dma_page *d_page, *next; + enum pool_type type; + bool is_cached = false; + unsigned count = 0, i; + unsigned long irq_flags; + + type = ttm_to_type(ttm->page_flags, ttm->caching_state); + pool = ttm_dma_find_pool(dev, type); + if (!pool) { + WARN_ON(!pool); + return; + } + is_cached = 
(ttm_dma_find_pool(pool->dev, + ttm_to_type(ttm->page_flags, tt_cached)) == pool); + + /* make sure pages array match list and count number of pages */ + list_for_each_entry(d_page, &ttm->alloc_list, page_list) { + ttm->pages[count] = d_page->p; + count++; + } + + spin_lock_irqsave(&pool->lock, irq_flags); + pool->npages_in_use -= count; + if (is_cached) { + pool->nfrees += count; + } else { + pool->npages_free += count; + list_splice(&ttm->alloc_list, &pool->free_list); + if (pool->npages_free > _manager->options.max_size) { + count = pool->npages_free - _manager->options.max_size; + } + } + spin_unlock_irqrestore(&pool->lock, irq_flags); + + if (is_cached) { + list_for_each_entry_safe(d_page, next, &ttm->alloc_list, page_list) { + ttm_mem_global_free_page(ttm->glob->mem_glob, + d_page->p); + ttm_dma_page_put(pool, d_page); + } + } else { + for (i = 0; i < count; i++) { + ttm_mem_global_free_page(ttm->glob->mem_glob, + ttm->pages[i]); + } + } + + INIT_LIST_HEAD(&ttm->alloc_list); + for (i = 0; i < ttm->num_pages; i++) { + ttm->pages[i] = NULL; + ttm->dma_address[i] = 0; + } + + /* shrink pool if necessary */ + if (count) + ttm_dma_page_pool_free(pool, count); + ttm->state = tt_unpopulated; +} +EXPORT_SYMBOL_GPL(ttm_dma_unpopulate); + +/** + * Callback for mm to request pool to reduce number of page held. + */ +static int ttm_dma_pool_mm_shrink(struct shrinker *shrink, + struct shrink_control *sc) +{ + static atomic_t start_pool = ATOMIC_INIT(0); + unsigned idx = 0; + unsigned pool_offset = atomic_add_return(1, &start_pool); + unsigned shrink_pages = sc->nr_to_scan; + struct device_pools *p; + + if (list_empty(&_manager->pools)) + return 0; + + mutex_lock(&_manager->lock); + pool_offset = pool_offset % _manager->npools; + list_for_each_entry(p, &_manager->pools, pools) { + unsigned nr_free; + + if (!p && !p->dev) + continue; + if (shrink_pages == 0) + break; + /* Do it in round-robin fashion. 
*/ + if (++idx < pool_offset) + continue; + nr_free = shrink_pages; + shrink_pages = ttm_dma_page_pool_free(p->pool, nr_free); + pr_debug("%s: (%s:%d) Asked to shrink %d, have %d more to go\n", + p->pool->dev_name, p->pool->name, current->pid, nr_free, + shrink_pages); + } + mutex_unlock(&_manager->lock); + /* return estimated number of unused pages in pool */ + return ttm_dma_pool_get_num_unused_pages(); +} + +static void ttm_dma_pool_mm_shrink_init(struct ttm_pool_manager *manager) +{ + manager->mm_shrink.shrink = &ttm_dma_pool_mm_shrink; + manager->mm_shrink.seeks = 1; + register_shrinker(&manager->mm_shrink); +} + +static void ttm_dma_pool_mm_shrink_fini(struct ttm_pool_manager *manager) +{ + unregister_shrinker(&manager->mm_shrink); +} + +int ttm_dma_page_alloc_init(struct ttm_mem_global *glob, unsigned max_pages) +{ + int ret = -ENOMEM; + + WARN_ON(_manager); + + printk(KERN_INFO TTM_PFX "Initializing DMA pool allocator.\n"); + + _manager = kzalloc(sizeof(*_manager), GFP_KERNEL); + if (!_manager) + goto err_manager; + + mutex_init(&_manager->lock); + INIT_LIST_HEAD(&_manager->pools); + + _manager->options.max_size = max_pages; + _manager->options.small = SMALL_ALLOCATION; + _manager->options.alloc_size = NUM_PAGES_TO_ALLOC; + + /* This takes care of auto-freeing the _manager */ + ret = kobject_init_and_add(&_manager->kobj, &ttm_pool_kobj_type, + &glob->kobj, "dma_pool"); + if (unlikely(ret != 0)) { + kobject_put(&_manager->kobj); + goto err; + } + ttm_dma_pool_mm_shrink_init(_manager); + return 0; +err_manager: + kfree(_manager); + _manager = NULL; +err: + return ret; +} + +void ttm_dma_page_alloc_fini(void) +{ + struct device_pools *p, *t; + + printk(KERN_INFO TTM_PFX "Finalizing DMA pool allocator.\n"); + ttm_dma_pool_mm_shrink_fini(_manager); + + list_for_each_entry_safe_reverse(p, t, &_manager->pools, pools) { + dev_dbg(p->dev, "(%s:%d) Freeing.\n", p->pool->name, + current->pid); + WARN_ON(devres_destroy(p->dev, ttm_dma_pool_release, + ttm_dma_pool_match, p->pool)); + ttm_dma_free_pool(p->dev, p->pool->type); + } + kobject_put(&_manager->kobj); + _manager = NULL; +} + +int ttm_dma_page_alloc_debugfs(struct seq_file *m, void *data) +{ + struct device_pools *p; + struct dma_pool *pool = NULL; + char *h[] = {"pool", "refills", "pages freed", "inuse", "available", + "name", "virt", "busaddr"}; + + if (!_manager) { + seq_printf(m, "No pool allocator running.\n"); + return 0; + } + seq_printf(m, "%13s %12s %13s %8s %8s %8s\n", + h[0], h[1], h[2], h[3], h[4], h[5]); + mutex_lock(&_manager->lock); + list_for_each_entry(p, &_manager->pools, pools) { + struct device *dev = p->dev; + if (!dev) + continue; + pool = p->pool; + seq_printf(m, "%13s %12ld %13ld %8d %8d %8s\n", + pool->name, pool->nrefills, + pool->nfrees, pool->npages_in_use, + pool->npages_free, + pool->dev_name); + } + mutex_unlock(&_manager->lock); + return 0; +} +EXPORT_SYMBOL_GPL(ttm_dma_page_alloc_debugfs); diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c index 77f0e6f..1625739 100644 --- a/drivers/gpu/drm/ttm/ttm_tt.c +++ b/drivers/gpu/drm/ttm/ttm_tt.c @@ -196,6 +196,7 @@ int ttm_tt_init(struct ttm_tt *ttm, struct ttm_bo_device *bdev, ttm->dummy_read_page = dummy_read_page; ttm->state = tt_unpopulated;
+ INIT_LIST_HEAD(&ttm->alloc_list); ttm_tt_alloc_page_directory(ttm); if (!ttm->pages || !ttm->dma_address) { ttm_tt_destroy(ttm); diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h index ae06e42..beef9ab 100644 --- a/include/drm/ttm/ttm_bo_driver.h +++ b/include/drm/ttm/ttm_bo_driver.h @@ -104,6 +104,7 @@ enum ttm_caching_state { * @caching_state: The current caching state of the pages. * @state: The current binding state of the pages. * @dma_address: The DMA (bus) addresses of the pages (if TTM_PAGE_FLAG_DMA32) + * @alloc_list: used by some page allocation backend * * This is a structure holding the pages, caching- and aperture binding * status for a buffer object that isn't backed by fixed (VRAM / AGP) @@ -127,6 +128,7 @@ struct ttm_tt { tt_unpopulated, } state; dma_addr_t *dma_address; + struct list_head alloc_list; };
#define TTM_MEMTYPE_FLAG_FIXED (1 << 0) /* Fixed (on-card) PCI memory */ diff --git a/include/drm/ttm/ttm_page_alloc.h b/include/drm/ttm/ttm_page_alloc.h index 18deeee..1e1337e 100644 --- a/include/drm/ttm/ttm_page_alloc.h +++ b/include/drm/ttm/ttm_page_alloc.h @@ -89,4 +89,40 @@ extern void ttm_pool_unpopulate(struct ttm_tt *ttm); * Output the state of pools to debugfs file */ extern int ttm_page_alloc_debugfs(struct seq_file *m, void *data); + + +#ifdef CONFIG_SWIOTLB +/** + * Initialize pool allocator. + */ +int ttm_dma_page_alloc_init(struct ttm_mem_global *glob, unsigned max_pages); + +/** + * Free pool allocator. + */ +void ttm_dma_page_alloc_fini(void); + +/** + * Output the state of pools to debugfs file + */ +extern int ttm_dma_page_alloc_debugfs(struct seq_file *m, void *data); + +int ttm_dma_populate(struct ttm_tt *ttm, struct device *dev); +extern void ttm_dma_unpopulate(struct ttm_tt *ttm, struct device *dev); + +#else +static inline int ttm_dma_page_alloc_init(struct ttm_mem_global *glob, + unsigned max_pages) +{ + return -ENODEV; +} + +static inline void ttm_dma_page_alloc_fini(void) { return; } + +static inline int ttm_dma_page_alloc_debugfs(struct seq_file *m, void *data) +{ + return 0; +} +#endif + #endif
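As an aside for readers who want to play with the shrinker logic above outside the kernel: the interesting bit is the rotating start offset, which spreads shrink pressure across device pools instead of always draining the first pool in the list. Below is a minimal userspace sketch of that idea; the pool array, the counts and the wrap-around policy are invented for illustration and simplified from what the patch actually does (the patch skips pools below the offset rather than wrapping).

#include <stdio.h>

#define NPOOLS 4

/* Toy model of the round-robin start offset in ttm_dma_pool_mm_shrink():
 * each shrink request starts at a different pool. The kernel bumps an
 * atomic_t here; a plain static is enough for a single-threaded model. */
static unsigned start_pool;

static unsigned shrink_pools(unsigned pages[NPOOLS], unsigned nr_to_scan)
{
	unsigned offset = start_pool++ % NPOOLS;
	unsigned i;

	for (i = 0; i < NPOOLS && nr_to_scan; i++) {
		unsigned idx = (offset + i) % NPOOLS;
		unsigned freed = pages[idx] < nr_to_scan ? pages[idx] : nr_to_scan;

		pages[idx] -= freed;
		nr_to_scan -= freed;
	}
	return nr_to_scan; /* how much of the request we could not satisfy */
}

int main(void)
{
	unsigned pools[NPOOLS] = { 10, 0, 7, 3 };

	printf("unsatisfied: %u\n", shrink_pools(pools, 15));
	printf("unsatisfied: %u\n", shrink_pools(pools, 15));
	return 0;
}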
From: Konrad Rzeszutek Wilk konrad.wilk@oracle.com
With the exception that we do not handle the AGP case. We only deal with PCIe cards, such as the ATI ES1000 or HD3200, that have been detected as only capable of 32-bit DMA.
V2 force dma32 if we fail to set a bigger dma mask
V3 rebase on top of no memory accounting changes (where/when is my delorean when i need it ?)
V4 add the debugfs entry if swiotlb is active, not only when we are on a 32-bit-DMA-only gpu
CC: Dave Airlie airlied@redhat.com CC: Alex Deucher alexdeucher@gmail.com Signed-off-by: Konrad Rzeszutek Wilk konrad.wilk@oracle.com Reviewed-by: Jerome Glisse jglisse@redhat.com --- drivers/gpu/drm/radeon/radeon.h | 1 - drivers/gpu/drm/radeon/radeon_device.c | 6 ++ drivers/gpu/drm/radeon/radeon_gart.c | 29 +----------- drivers/gpu/drm/radeon/radeon_ttm.c | 83 +++++++++++++++++++++++++++++-- 4 files changed, 84 insertions(+), 35 deletions(-)
diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h index fc5a1d6..de38e70 100644 --- a/drivers/gpu/drm/radeon/radeon.h +++ b/drivers/gpu/drm/radeon/radeon.h @@ -320,7 +320,6 @@ struct radeon_gart { unsigned table_size; struct page **pages; dma_addr_t *pages_addr; - bool *ttm_alloced; bool ready; };
diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c index c4d00a1..fb347a8 100644 --- a/drivers/gpu/drm/radeon/radeon_device.c +++ b/drivers/gpu/drm/radeon/radeon_device.c @@ -765,8 +765,14 @@ int radeon_device_init(struct radeon_device *rdev, r = pci_set_dma_mask(rdev->pdev, DMA_BIT_MASK(dma_bits)); if (r) { rdev->need_dma32 = true; + dma_bits = 32; printk(KERN_WARNING "radeon: No suitable DMA available.\n"); } + r = pci_set_consistent_dma_mask(rdev->pdev, DMA_BIT_MASK(dma_bits)); + if (r) { + pci_set_consistent_dma_mask(rdev->pdev, DMA_BIT_MASK(32)); + printk(KERN_WARNING "radeon: No coherent DMA available.\n"); + }
/* Registers mapping */ /* TODO: block userspace mapping of io register */ diff --git a/drivers/gpu/drm/radeon/radeon_gart.c b/drivers/gpu/drm/radeon/radeon_gart.c index ba7ab79..a4d9816 100644 --- a/drivers/gpu/drm/radeon/radeon_gart.c +++ b/drivers/gpu/drm/radeon/radeon_gart.c @@ -157,9 +157,6 @@ void radeon_gart_unbind(struct radeon_device *rdev, unsigned offset, p = t / (PAGE_SIZE / RADEON_GPU_PAGE_SIZE); for (i = 0; i < pages; i++, p++) { if (rdev->gart.pages[p]) { - if (!rdev->gart.ttm_alloced[p]) - pci_unmap_page(rdev->pdev, rdev->gart.pages_addr[p], - PAGE_SIZE, PCI_DMA_BIDIRECTIONAL); rdev->gart.pages[p] = NULL; rdev->gart.pages_addr[p] = rdev->dummy_page.addr; page_base = rdev->gart.pages_addr[p]; @@ -191,23 +188,7 @@ int radeon_gart_bind(struct radeon_device *rdev, unsigned offset, p = t / (PAGE_SIZE / RADEON_GPU_PAGE_SIZE);
for (i = 0; i < pages; i++, p++) { - /* we reverted the patch using dma_addr in TTM for now but this - * code stops building on alpha so just comment it out for now */ - if (0) { /*dma_addr[i] != DMA_ERROR_CODE) */ - rdev->gart.ttm_alloced[p] = true; - rdev->gart.pages_addr[p] = dma_addr[i]; - } else { - /* we need to support large memory configurations */ - /* assume that unbind have already been call on the range */ - rdev->gart.pages_addr[p] = pci_map_page(rdev->pdev, pagelist[i], - 0, PAGE_SIZE, - PCI_DMA_BIDIRECTIONAL); - if (pci_dma_mapping_error(rdev->pdev, rdev->gart.pages_addr[p])) { - /* FIXME: failed to map page (return -ENOMEM?) */ - radeon_gart_unbind(rdev, offset, pages); - return -ENOMEM; - } - } + rdev->gart.pages_addr[p] = dma_addr[i]; rdev->gart.pages[p] = pagelist[i]; if (rdev->gart.ptr) { page_base = rdev->gart.pages_addr[p]; @@ -274,12 +255,6 @@ int radeon_gart_init(struct radeon_device *rdev) radeon_gart_fini(rdev); return -ENOMEM; } - rdev->gart.ttm_alloced = kzalloc(sizeof(bool) * - rdev->gart.num_cpu_pages, GFP_KERNEL); - if (rdev->gart.ttm_alloced == NULL) { - radeon_gart_fini(rdev); - return -ENOMEM; - } /* set GART entry to point to the dummy page by default */ for (i = 0; i < rdev->gart.num_cpu_pages; i++) { rdev->gart.pages_addr[i] = rdev->dummy_page.addr; @@ -296,10 +271,8 @@ void radeon_gart_fini(struct radeon_device *rdev) rdev->gart.ready = false; kfree(rdev->gart.pages); kfree(rdev->gart.pages_addr); - kfree(rdev->gart.ttm_alloced); rdev->gart.pages = NULL; rdev->gart.pages_addr = NULL; - rdev->gart.ttm_alloced = NULL;
radeon_dummy_page_fini(rdev); } diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c index b1768cb..f499b2c 100644 --- a/drivers/gpu/drm/radeon/radeon_ttm.c +++ b/drivers/gpu/drm/radeon/radeon_ttm.c @@ -578,11 +578,73 @@ struct ttm_tt *radeon_ttm_tt_create(struct ttm_bo_device *bdev, return &gtt->ttm; }
+static int radeon_ttm_tt_populate(struct ttm_tt *ttm) +{ + struct radeon_device *rdev; + unsigned i; + int r; + + if (ttm->state != tt_unpopulated) + return 0; + + rdev = radeon_get_rdev(ttm->bdev); + +#ifdef CONFIG_SWIOTLB + if (swiotlb_nr_tbl()) { + return ttm_dma_populate(ttm, rdev->dev); + } +#endif + + r = ttm_pool_populate(ttm); + if (r) { + return r; + } + + for (i = 0; i < ttm->num_pages; i++) { + ttm->dma_address[i] = pci_map_page(rdev->pdev, ttm->pages[i], + 0, PAGE_SIZE, + PCI_DMA_BIDIRECTIONAL); + if (pci_dma_mapping_error(rdev->pdev, ttm->dma_address[i])) { + while (--i) { + pci_unmap_page(rdev->pdev, ttm->dma_address[i], + PAGE_SIZE, PCI_DMA_BIDIRECTIONAL); + ttm->dma_address[i] = 0; + } + ttm_pool_unpopulate(ttm); + return -EFAULT; + } + } + return 0; +} + +static void radeon_ttm_tt_unpopulate(struct ttm_tt *ttm) +{ + struct radeon_device *rdev; + unsigned i; + + rdev = radeon_get_rdev(ttm->bdev); + +#ifdef CONFIG_SWIOTLB + if (swiotlb_nr_tbl()) { + ttm_dma_unpopulate(ttm, rdev->dev); + return; + } +#endif + + for (i = 0; i < ttm->num_pages; i++) { + if (ttm->dma_address[i]) { + pci_unmap_page(rdev->pdev, ttm->dma_address[i], + PAGE_SIZE, PCI_DMA_BIDIRECTIONAL); + } + } + + ttm_pool_unpopulate(ttm); +}
static struct ttm_bo_driver radeon_bo_driver = { .ttm_tt_create = &radeon_ttm_tt_create, - .ttm_tt_populate = &ttm_pool_populate, - .ttm_tt_unpopulate = &ttm_pool_unpopulate, + .ttm_tt_populate = &radeon_ttm_tt_populate, + .ttm_tt_unpopulate = &radeon_ttm_tt_unpopulate, .invalidate_caches = &radeon_invalidate_caches, .init_mem_type = &radeon_init_mem_type, .evict_flags = &radeon_evict_flags, @@ -768,8 +830,8 @@ static int radeon_mm_dump_table(struct seq_file *m, void *data) static int radeon_ttm_debugfs_init(struct radeon_device *rdev) { #if defined(CONFIG_DEBUG_FS) - static struct drm_info_list radeon_mem_types_list[RADEON_DEBUGFS_MEM_TYPES+1]; - static char radeon_mem_types_names[RADEON_DEBUGFS_MEM_TYPES+1][32]; + static struct drm_info_list radeon_mem_types_list[RADEON_DEBUGFS_MEM_TYPES+2]; + static char radeon_mem_types_names[RADEON_DEBUGFS_MEM_TYPES+2][32]; unsigned i;
for (i = 0; i < RADEON_DEBUGFS_MEM_TYPES; i++) { @@ -791,8 +853,17 @@ static int radeon_ttm_debugfs_init(struct radeon_device *rdev) radeon_mem_types_list[i].name = radeon_mem_types_names[i]; radeon_mem_types_list[i].show = &ttm_page_alloc_debugfs; radeon_mem_types_list[i].driver_features = 0; - radeon_mem_types_list[i].data = NULL; - return radeon_debugfs_add_files(rdev, radeon_mem_types_list, RADEON_DEBUGFS_MEM_TYPES+1); + radeon_mem_types_list[i++].data = NULL; +#ifdef CONFIG_SWIOTLB + if (swiotlb_nr_tbl()) { + sprintf(radeon_mem_types_names[i], "ttm_dma_page_pool"); + radeon_mem_types_list[i].name = radeon_mem_types_names[i]; + radeon_mem_types_list[i].show = &ttm_dma_page_alloc_debugfs; + radeon_mem_types_list[i].driver_features = 0; + radeon_mem_types_list[i++].data = NULL; + } +#endif + return radeon_debugfs_add_files(rdev, radeon_mem_types_list, i);
#endif return 0;
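For context on the radeon populate/unpopulate callbacks above: when the SWIOTLB-backed DMA pool is not in use, the driver falls back to the regular page pool and maps each page itself, unwinding on a mapping failure. The sketch below is a hypothetical userspace model of that shape, not the driver code; map_page()/unmap_page() are invented stand-ins for pci_map_page()/pci_unmap_page(), and the unwind here deliberately also releases page 0.

#include <stdio.h>

#define NPAGES 8
#define BAD_ADDR 0UL /* stand-in for pci_dma_mapping_error() */

/* Pretend the 6th mapping fails so the error path runs. */
static unsigned long map_page(int i)
{
	return i == 5 ? BAD_ADDR : 0x1000UL * (i + 1);
}

static void unmap_page(unsigned long addr)
{
	printf("unmap %#lx\n", addr);
}

/* Map every page; on failure unwind everything mapped so far,
 * including page 0 (i-- visits indices i-1 down to 0). */
static int populate(unsigned long dma[NPAGES])
{
	int i;

	for (i = 0; i < NPAGES; i++) {
		dma[i] = map_page(i);
		if (dma[i] == BAD_ADDR) {
			while (i--) {
				unmap_page(dma[i]);
				dma[i] = 0;
			}
			return -1;
		}
	}
	return 0;
}

int main(void)
{
	unsigned long dma[NPAGES] = { 0 };

	return populate(dma) ? 1 : 0;
}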
From: Konrad Rzeszutek Wilk konrad.wilk@oracle.com
If the card is capable of more than 32-bit DMA addressing, use the default TTM page pool code, which allocates from anywhere in memory.
Note: If the 'ttm.no_dma' parameter is set, the override is ignored and the default TTM pool is used.
V2 use pci_set_consistent_dma_mask
V3 rebase on top of no memory accounting changes (where/when is my delorean when i need it ?)
CC: Ben Skeggs bskeggs@redhat.com CC: Francisco Jerez currojerez@riseup.net CC: Dave Airlie airlied@redhat.com Signed-off-by: Konrad Rzeszutek Wilk konrad.wilk@oracle.com Reviewed-by: Jerome Glisse jglisse@redhat.com --- drivers/gpu/drm/nouveau/nouveau_bo.c | 73 ++++++++++++++++++++++++++++- drivers/gpu/drm/nouveau/nouveau_debugfs.c | 1 + drivers/gpu/drm/nouveau/nouveau_mem.c | 6 ++ drivers/gpu/drm/nouveau/nouveau_sgdma.c | 60 +----------------------- 4 files changed, 79 insertions(+), 61 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c index 3271001..e603909 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.c +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c @@ -1049,10 +1049,79 @@ nouveau_bo_fence(struct nouveau_bo *nvbo, struct nouveau_fence *fence) nouveau_fence_unref(&old_fence); }
+static int +nouveau_ttm_tt_populate(struct ttm_tt *ttm) +{ + struct drm_nouveau_private *dev_priv; + struct drm_device *dev; + unsigned i; + int r; + + if (ttm->state != tt_unpopulated) + return 0; + + dev_priv = nouveau_bdev(ttm->bdev); + dev = dev_priv->dev; + +#ifdef CONFIG_SWIOTLB + if (swiotlb_nr_tbl()) { + return ttm_dma_populate(ttm, dev->dev); + } +#endif + + r = ttm_pool_populate(ttm); + if (r) { + return r; + } + + for (i = 0; i < ttm->num_pages; i++) { + ttm->dma_address[i] = pci_map_page(dev->pdev, ttm->pages[i], + 0, PAGE_SIZE, + PCI_DMA_BIDIRECTIONAL); + if (pci_dma_mapping_error(dev->pdev, ttm->dma_address[i])) { + while (--i) { + pci_unmap_page(dev->pdev, ttm->dma_address[i], + PAGE_SIZE, PCI_DMA_BIDIRECTIONAL); + ttm->dma_address[i] = 0; + } + ttm_pool_unpopulate(ttm); + return -EFAULT; + } + } + return 0; +} + +static void +nouveau_ttm_tt_unpopulate(struct ttm_tt *ttm) +{ + struct drm_nouveau_private *dev_priv; + struct drm_device *dev; + unsigned i; + + dev_priv = nouveau_bdev(ttm->bdev); + dev = dev_priv->dev; + +#ifdef CONFIG_SWIOTLB + if (swiotlb_nr_tbl()) { + ttm_dma_unpopulate(ttm, dev->dev); + return; + } +#endif + + for (i = 0; i < ttm->num_pages; i++) { + if (ttm->dma_address[i]) { + pci_unmap_page(dev->pdev, ttm->dma_address[i], + PAGE_SIZE, PCI_DMA_BIDIRECTIONAL); + } + } + + ttm_pool_unpopulate(ttm); +} + struct ttm_bo_driver nouveau_bo_driver = { .ttm_tt_create = &nouveau_ttm_tt_create, - .ttm_tt_populate = &ttm_pool_populate, - .ttm_tt_unpopulate = &ttm_pool_unpopulate, + .ttm_tt_populate = &nouveau_ttm_tt_populate, + .ttm_tt_unpopulate = &nouveau_ttm_tt_unpopulate, .invalidate_caches = nouveau_bo_invalidate_caches, .init_mem_type = nouveau_bo_init_mem_type, .evict_flags = nouveau_bo_evict_flags, diff --git a/drivers/gpu/drm/nouveau/nouveau_debugfs.c b/drivers/gpu/drm/nouveau/nouveau_debugfs.c index 8e15923..f52c2db 100644 --- a/drivers/gpu/drm/nouveau/nouveau_debugfs.c +++ b/drivers/gpu/drm/nouveau/nouveau_debugfs.c @@ -178,6 +178,7 @@ static struct drm_info_list nouveau_debugfs_list[] = { { "memory", nouveau_debugfs_memory_info, 0, NULL }, { "vbios.rom", nouveau_debugfs_vbios_image, 0, NULL }, { "ttm_page_pool", ttm_page_alloc_debugfs, 0, NULL }, + { "ttm_dma_page_pool", ttm_dma_page_alloc_debugfs, 0, NULL }, }; #define NOUVEAU_DEBUGFS_ENTRIES ARRAY_SIZE(nouveau_debugfs_list)
diff --git a/drivers/gpu/drm/nouveau/nouveau_mem.c b/drivers/gpu/drm/nouveau/nouveau_mem.c index 36bec48..37fcaa2 100644 --- a/drivers/gpu/drm/nouveau/nouveau_mem.c +++ b/drivers/gpu/drm/nouveau/nouveau_mem.c @@ -407,6 +407,12 @@ nouveau_mem_vram_init(struct drm_device *dev) ret = pci_set_dma_mask(dev->pdev, DMA_BIT_MASK(dma_bits)); if (ret) return ret; + ret = pci_set_consistent_dma_mask(dev->pdev, DMA_BIT_MASK(dma_bits)); + if (ret) { + /* Reset to default value. */ + pci_set_consistent_dma_mask(dev->pdev, DMA_BIT_MASK(32)); + } +
ret = nouveau_ttm_global_init(dev_priv); if (ret) diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c b/drivers/gpu/drm/nouveau/nouveau_sgdma.c index bc2ab90..ee1eb7c 100644 --- a/drivers/gpu/drm/nouveau/nouveau_sgdma.c +++ b/drivers/gpu/drm/nouveau/nouveau_sgdma.c @@ -13,41 +13,6 @@ struct nouveau_sgdma_be { u64 offset; };
-static int -nouveau_sgdma_dma_map(struct ttm_tt *ttm) -{ - struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)ttm; - struct drm_device *dev = nvbe->dev; - int i; - - for (i = 0; i < ttm->num_pages; i++) { - ttm->dma_address[i] = pci_map_page(dev->pdev, ttm->pages[i], - 0, PAGE_SIZE, - PCI_DMA_BIDIRECTIONAL); - if (pci_dma_mapping_error(dev->pdev, ttm->dma_address[i])) { - return -EFAULT; - } - } - - return 0; -} - -static void -nouveau_sgdma_dma_unmap(struct ttm_tt *ttm) -{ - struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)ttm; - struct drm_device *dev = nvbe->dev; - int i; - - for (i = 0; i < ttm->num_pages; i++) { - if (ttm->dma_address[i]) { - pci_unmap_page(dev->pdev, ttm->dma_address[i], - PAGE_SIZE, PCI_DMA_BIDIRECTIONAL); - } - ttm->dma_address[i] = 0; - } -} - static void nouveau_sgdma_destroy(struct ttm_tt *ttm) { @@ -67,13 +32,8 @@ nv04_sgdma_bind(struct ttm_tt *ttm, struct ttm_mem_reg *mem) struct drm_nouveau_private *dev_priv = dev->dev_private; struct nouveau_gpuobj *gpuobj = dev_priv->gart_info.sg_ctxdma; unsigned i, j, pte; - int r;
NV_DEBUG(dev, "pg=0x%lx\n", mem->start); - r = nouveau_sgdma_dma_map(ttm); - if (r) { - return r; - }
nvbe->offset = mem->start << PAGE_SHIFT; pte = (nvbe->offset >> NV_CTXDMA_PAGE_SHIFT) + 2; @@ -110,7 +70,6 @@ nv04_sgdma_unbind(struct ttm_tt *ttm) nv_wo32(gpuobj, (pte * 4) + 0, 0x00000000); }
- nouveau_sgdma_dma_unmap(ttm); return 0; }
@@ -141,13 +100,8 @@ nv41_sgdma_bind(struct ttm_tt *ttm, struct ttm_mem_reg *mem) dma_addr_t *list = ttm->dma_address; u32 pte = mem->start << 2; u32 cnt = ttm->num_pages; - int r;
nvbe->offset = mem->start << PAGE_SHIFT; - r = nouveau_sgdma_dma_map(ttm); - if (r) { - return r; - }
while (cnt--) { nv_wo32(pgt, pte, (*list++ >> 7) | 1); @@ -173,7 +127,6 @@ nv41_sgdma_unbind(struct ttm_tt *ttm) }
nv41_sgdma_flush(nvbe); - nouveau_sgdma_dma_unmap(ttm); return 0; }
@@ -256,13 +209,9 @@ nv44_sgdma_bind(struct ttm_tt *ttm, struct ttm_mem_reg *mem) dma_addr_t *list = ttm->dma_address; u32 pte = mem->start << 2, tmp[4]; u32 cnt = ttm->num_pages; - int i, r; + int i;
nvbe->offset = mem->start << PAGE_SHIFT; - r = nouveau_sgdma_dma_map(ttm); - if (r) { - return r; - }
if (pte & 0x0000000c) { u32 max = 4 - ((pte >> 2) & 0x3); @@ -321,7 +270,6 @@ nv44_sgdma_unbind(struct ttm_tt *ttm) nv44_sgdma_fill(pgt, NULL, pte, cnt);
nv44_sgdma_flush(ttm); - nouveau_sgdma_dma_unmap(ttm); return 0; }
@@ -335,13 +283,8 @@ static int nv50_sgdma_bind(struct ttm_tt *ttm, struct ttm_mem_reg *mem) { struct nouveau_mem *node = mem->mm_node; - int r;
/* noop: bound in move_notify() */ - r = nouveau_sgdma_dma_map(ttm); - if (r) { - return r; - } node->pages = ttm->dma_address; return 0; } @@ -350,7 +293,6 @@ static int nv50_sgdma_unbind(struct ttm_tt *ttm) { /* noop: unbound in move_notify() */ - nouveau_sgdma_dma_unmap(ttm); return 0; }
From: Jerome Glisse jglisse@redhat.com
Move the dma data to a superset ttm_dma_tt structure which inherits from ttm_tt. This allows drivers that don't use the dma functionality to avoid wasting memory on it.
V2 rebase on top of no memory accounting changes (where/when is my delorean when i need it ?)
V3 make sure the page list is initialized empty
V4 typo/syntax fixes
Signed-off-by: Jerome Glisse jglisse@redhat.com Reviewed-by: Thomas Hellstrom thellstrom@vmware.com --- drivers/gpu/drm/nouveau/nouveau_bo.c | 18 +++-- drivers/gpu/drm/nouveau/nouveau_sgdma.c | 22 ++++-- drivers/gpu/drm/radeon/radeon_ttm.c | 43 ++++++------ drivers/gpu/drm/ttm/ttm_page_alloc.c | 114 +++++++++++++++--------------- drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 35 +++++---- drivers/gpu/drm/ttm/ttm_tt.c | 60 +++++++++++++--- drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c | 2 + include/drm/ttm/ttm_bo_driver.h | 32 ++++++++- include/drm/ttm/ttm_page_alloc.h | 33 +-------- 9 files changed, 203 insertions(+), 156 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c index e603909..4347776 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.c +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c @@ -1052,6 +1052,7 @@ nouveau_bo_fence(struct nouveau_bo *nvbo, struct nouveau_fence *fence) static int nouveau_ttm_tt_populate(struct ttm_tt *ttm) { + struct ttm_dma_tt *ttm_dma = (void *)ttm; struct drm_nouveau_private *dev_priv; struct drm_device *dev; unsigned i; @@ -1065,7 +1066,7 @@ nouveau_ttm_tt_populate(struct ttm_tt *ttm)
#ifdef CONFIG_SWIOTLB if (swiotlb_nr_tbl()) { - return ttm_dma_populate(ttm, dev->dev); + return ttm_dma_populate((void *)ttm, dev->dev); } #endif
@@ -1075,14 +1076,14 @@ nouveau_ttm_tt_populate(struct ttm_tt *ttm) }
for (i = 0; i < ttm->num_pages; i++) { - ttm->dma_address[i] = pci_map_page(dev->pdev, ttm->pages[i], + ttm_dma->dma_address[i] = pci_map_page(dev->pdev, ttm->pages[i], 0, PAGE_SIZE, PCI_DMA_BIDIRECTIONAL); - if (pci_dma_mapping_error(dev->pdev, ttm->dma_address[i])) { + if (pci_dma_mapping_error(dev->pdev, ttm_dma->dma_address[i])) { while (--i) { - pci_unmap_page(dev->pdev, ttm->dma_address[i], + pci_unmap_page(dev->pdev, ttm_dma->dma_address[i], PAGE_SIZE, PCI_DMA_BIDIRECTIONAL); - ttm->dma_address[i] = 0; + ttm_dma->dma_address[i] = 0; } ttm_pool_unpopulate(ttm); return -EFAULT; @@ -1094,6 +1095,7 @@ nouveau_ttm_tt_populate(struct ttm_tt *ttm) static void nouveau_ttm_tt_unpopulate(struct ttm_tt *ttm) { + struct ttm_dma_tt *ttm_dma = (void *)ttm; struct drm_nouveau_private *dev_priv; struct drm_device *dev; unsigned i; @@ -1103,14 +1105,14 @@ nouveau_ttm_tt_unpopulate(struct ttm_tt *ttm)
#ifdef CONFIG_SWIOTLB if (swiotlb_nr_tbl()) { - ttm_dma_unpopulate(ttm, dev->dev); + ttm_dma_unpopulate((void *)ttm, dev->dev); return; } #endif
for (i = 0; i < ttm->num_pages; i++) { - if (ttm->dma_address[i]) { - pci_unmap_page(dev->pdev, ttm->dma_address[i], + if (ttm_dma->dma_address[i]) { + pci_unmap_page(dev->pdev, ttm_dma->dma_address[i], PAGE_SIZE, PCI_DMA_BIDIRECTIONAL); } } diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c b/drivers/gpu/drm/nouveau/nouveau_sgdma.c index ee1eb7c..47f245e 100644 --- a/drivers/gpu/drm/nouveau/nouveau_sgdma.c +++ b/drivers/gpu/drm/nouveau/nouveau_sgdma.c @@ -8,7 +8,10 @@ #define NV_CTXDMA_PAGE_MASK (NV_CTXDMA_PAGE_SIZE - 1)
struct nouveau_sgdma_be { - struct ttm_tt ttm; + /* this has to be the first field so populate/unpopulate in + * nouveau_bo.c work properly, otherwise they would have to move here + */ + struct ttm_dma_tt ttm; struct drm_device *dev; u64 offset; }; @@ -20,6 +23,7 @@ nouveau_sgdma_destroy(struct ttm_tt *ttm)
if (ttm) { NV_DEBUG(nvbe->dev, "\n"); + ttm_dma_tt_fini(&nvbe->ttm); kfree(nvbe); } } @@ -38,7 +42,7 @@ nv04_sgdma_bind(struct ttm_tt *ttm, struct ttm_mem_reg *mem) nvbe->offset = mem->start << PAGE_SHIFT; pte = (nvbe->offset >> NV_CTXDMA_PAGE_SHIFT) + 2; for (i = 0; i < ttm->num_pages; i++) { - dma_addr_t dma_offset = ttm->dma_address[i]; + dma_addr_t dma_offset = nvbe->ttm.dma_address[i]; uint32_t offset_l = lower_32_bits(dma_offset);
for (j = 0; j < PAGE_SIZE / NV_CTXDMA_PAGE_SIZE; j++, pte++) { @@ -97,7 +101,7 @@ nv41_sgdma_bind(struct ttm_tt *ttm, struct ttm_mem_reg *mem) struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)ttm; struct drm_nouveau_private *dev_priv = nvbe->dev->dev_private; struct nouveau_gpuobj *pgt = dev_priv->gart_info.sg_ctxdma; - dma_addr_t *list = ttm->dma_address; + dma_addr_t *list = nvbe->ttm.dma_address; u32 pte = mem->start << 2; u32 cnt = ttm->num_pages;
@@ -206,7 +210,7 @@ nv44_sgdma_bind(struct ttm_tt *ttm, struct ttm_mem_reg *mem) struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)ttm; struct drm_nouveau_private *dev_priv = nvbe->dev->dev_private; struct nouveau_gpuobj *pgt = dev_priv->gart_info.sg_ctxdma; - dma_addr_t *list = ttm->dma_address; + dma_addr_t *list = nvbe->ttm.dma_address; u32 pte = mem->start << 2, tmp[4]; u32 cnt = ttm->num_pages; int i; @@ -282,10 +286,11 @@ static struct ttm_backend_func nv44_sgdma_backend = { static int nv50_sgdma_bind(struct ttm_tt *ttm, struct ttm_mem_reg *mem) { + struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)ttm; struct nouveau_mem *node = mem->mm_node;
/* noop: bound in move_notify() */ - node->pages = ttm->dma_address; + node->pages = nvbe->ttm.dma_address; return 0; }
@@ -316,12 +321,13 @@ nouveau_sgdma_create_ttm(struct ttm_bo_device *bdev, return NULL;
nvbe->dev = dev; - nvbe->ttm.func = dev_priv->gart_info.func; + nvbe->ttm.ttm.func = dev_priv->gart_info.func;
- if (ttm_tt_init(&nvbe->ttm, bdev, size, page_flags, dummy_read_page)) { + if (ttm_dma_tt_init(&nvbe->ttm, bdev, size, page_flags, dummy_read_page)) { + kfree(nvbe); return NULL; } - return &nvbe->ttm; + return &nvbe->ttm.ttm; }
int diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c index f499b2c..e111a38 100644 --- a/drivers/gpu/drm/radeon/radeon_ttm.c +++ b/drivers/gpu/drm/radeon/radeon_ttm.c @@ -501,7 +501,7 @@ static bool radeon_sync_obj_signaled(void *sync_obj, void *sync_arg) * TTM backend functions. */ struct radeon_ttm_tt { - struct ttm_tt ttm; + struct ttm_dma_tt ttm; struct radeon_device *rdev; u64 offset; }; @@ -509,17 +509,16 @@ struct radeon_ttm_tt { static int radeon_ttm_backend_bind(struct ttm_tt *ttm, struct ttm_mem_reg *bo_mem) { - struct radeon_ttm_tt *gtt; + struct radeon_ttm_tt *gtt = (void*)ttm; int r;
- gtt = container_of(ttm, struct radeon_ttm_tt, ttm); gtt->offset = (unsigned long)(bo_mem->start << PAGE_SHIFT); if (!ttm->num_pages) { WARN(1, "nothing to bind %lu pages for mreg %p back %p!\n", ttm->num_pages, bo_mem, ttm); } r = radeon_gart_bind(gtt->rdev, gtt->offset, - ttm->num_pages, ttm->pages, ttm->dma_address); + ttm->num_pages, ttm->pages, gtt->ttm.dma_address); if (r) { DRM_ERROR("failed to bind %lu pages at 0x%08X\n", ttm->num_pages, (unsigned)gtt->offset); @@ -530,18 +529,17 @@ static int radeon_ttm_backend_bind(struct ttm_tt *ttm,
static int radeon_ttm_backend_unbind(struct ttm_tt *ttm) { - struct radeon_ttm_tt *gtt; + struct radeon_ttm_tt *gtt = (void *)ttm;
- gtt = container_of(ttm, struct radeon_ttm_tt, ttm); radeon_gart_unbind(gtt->rdev, gtt->offset, ttm->num_pages); return 0; }
static void radeon_ttm_backend_destroy(struct ttm_tt *ttm) { - struct radeon_ttm_tt *gtt; + struct radeon_ttm_tt *gtt = (void *)ttm;
- gtt = container_of(ttm, struct radeon_ttm_tt, ttm); + ttm_dma_tt_fini(&gtt->ttm); kfree(gtt); }
@@ -570,17 +568,19 @@ struct ttm_tt *radeon_ttm_tt_create(struct ttm_bo_device *bdev, if (gtt == NULL) { return NULL; } - gtt->ttm.func = &radeon_backend_func; + gtt->ttm.ttm.func = &radeon_backend_func; gtt->rdev = rdev; - if (ttm_tt_init(&gtt->ttm, bdev, size, page_flags, dummy_read_page)) { + if (ttm_dma_tt_init(&gtt->ttm, bdev, size, page_flags, dummy_read_page)) { + kfree(gtt); return NULL; } - return &gtt->ttm; + return &gtt->ttm.ttm; }
static int radeon_ttm_tt_populate(struct ttm_tt *ttm) { struct radeon_device *rdev; + struct radeon_ttm_tt *gtt = (void *)ttm; unsigned i; int r;
@@ -591,7 +591,7 @@ static int radeon_ttm_tt_populate(struct ttm_tt *ttm)
#ifdef CONFIG_SWIOTLB if (swiotlb_nr_tbl()) { - return ttm_dma_populate(ttm, rdev->dev); + return ttm_dma_populate(&gtt->ttm, rdev->dev); } #endif
@@ -601,14 +601,14 @@ static int radeon_ttm_tt_populate(struct ttm_tt *ttm) }
for (i = 0; i < ttm->num_pages; i++) { - ttm->dma_address[i] = pci_map_page(rdev->pdev, ttm->pages[i], - 0, PAGE_SIZE, - PCI_DMA_BIDIRECTIONAL); - if (pci_dma_mapping_error(rdev->pdev, ttm->dma_address[i])) { + gtt->ttm.dma_address[i] = pci_map_page(rdev->pdev, ttm->pages[i], + 0, PAGE_SIZE, + PCI_DMA_BIDIRECTIONAL); + if (pci_dma_mapping_error(rdev->pdev, gtt->ttm.dma_address[i])) { while (--i) { - pci_unmap_page(rdev->pdev, ttm->dma_address[i], + pci_unmap_page(rdev->pdev, gtt->ttm.dma_address[i], PAGE_SIZE, PCI_DMA_BIDIRECTIONAL); - ttm->dma_address[i] = 0; + gtt->ttm.dma_address[i] = 0; } ttm_pool_unpopulate(ttm); return -EFAULT; @@ -620,20 +620,21 @@ static int radeon_ttm_tt_populate(struct ttm_tt *ttm) static void radeon_ttm_tt_unpopulate(struct ttm_tt *ttm) { struct radeon_device *rdev; + struct radeon_ttm_tt *gtt = (void *)ttm; unsigned i;
rdev = radeon_get_rdev(ttm->bdev);
#ifdef CONFIG_SWIOTLB if (swiotlb_nr_tbl()) { - ttm_dma_unpopulate(ttm, rdev->dev); + ttm_dma_unpopulate(&gtt->ttm, rdev->dev); return; } #endif
for (i = 0; i < ttm->num_pages; i++) { - if (ttm->dma_address[i]) { - pci_unmap_page(rdev->pdev, ttm->dma_address[i], + if (gtt->ttm.dma_address[i]) { + pci_unmap_page(rdev->pdev, gtt->ttm.dma_address[i], PAGE_SIZE, PCI_DMA_BIDIRECTIONAL); } } diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c index 8d6267e..499debd 100644 --- a/drivers/gpu/drm/ttm/ttm_page_alloc.c +++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c @@ -662,13 +662,61 @@ out: return count; }
+/* Put all pages in pages list to correct pool to wait for reuse */ +static void ttm_put_pages(struct page **pages, unsigned npages, int flags, + enum ttm_caching_state cstate) +{ + unsigned long irq_flags; + struct ttm_page_pool *pool = ttm_get_pool(flags, cstate); + unsigned i; + + if (pool == NULL) { + /* No pool for this memory type so free the pages */ + for (i = 0; i < npages; i++) { + if (pages[i]) { + if (page_count(pages[i]) != 1) + printk(KERN_ERR TTM_PFX + "Erroneous page count. " + "Leaking pages.\n"); + __free_page(pages[i]); + pages[i] = NULL; + } + } + return; + } + + spin_lock_irqsave(&pool->lock, irq_flags); + for (i = 0; i < npages; i++) { + if (pages[i]) { + if (page_count(pages[i]) != 1) + printk(KERN_ERR TTM_PFX + "Erroneous page count. " + "Leaking pages.\n"); + list_add_tail(&pages[i]->lru, &pool->list); + pages[i] = NULL; + pool->npages++; + } + } + /* Check that we don't go over the pool limit */ + npages = 0; + if (pool->npages > _manager->options.max_size) { + npages = pool->npages - _manager->options.max_size; + /* free at least NUM_PAGES_TO_ALLOC number of pages + * to reduce calls to set_memory_wb */ + if (npages < NUM_PAGES_TO_ALLOC) + npages = NUM_PAGES_TO_ALLOC; + } + spin_unlock_irqrestore(&pool->lock, irq_flags); + if (npages) + ttm_page_pool_free(pool, npages); +} + /* * On success pages list will hold count number of correctly * cached pages. */ -int ttm_get_pages(struct page **pages, int flags, - enum ttm_caching_state cstate, unsigned npages, - dma_addr_t *dma_address) +static int ttm_get_pages(struct page **pages, unsigned npages, int flags, + enum ttm_caching_state cstate) { struct ttm_page_pool *pool = ttm_get_pool(flags, cstate); struct list_head plist; @@ -736,7 +784,7 @@ int ttm_get_pages(struct page **pages, int flags, printk(KERN_ERR TTM_PFX "Failed to allocate extra pages " "for large request."); - ttm_put_pages(pages, count, flags, cstate, NULL); + ttm_put_pages(pages, count, flags, cstate); return r; } } @@ -744,55 +792,6 @@ int ttm_get_pages(struct page **pages, int flags, return 0; }
-/* Put all pages in pages list to correct pool to wait for reuse */ -void ttm_put_pages(struct page **pages, unsigned npages, int flags, - enum ttm_caching_state cstate, dma_addr_t *dma_address) -{ - unsigned long irq_flags; - struct ttm_page_pool *pool = ttm_get_pool(flags, cstate); - unsigned i; - - if (pool == NULL) { - /* No pool for this memory type so free the pages */ - for (i = 0; i < npages; i++) { - if (pages[i]) { - if (page_count(pages[i]) != 1) - printk(KERN_ERR TTM_PFX - "Erroneous page count. " - "Leaking pages.\n"); - __free_page(pages[i]); - pages[i] = NULL; - } - } - return; - } - - spin_lock_irqsave(&pool->lock, irq_flags); - for (i = 0; i < npages; i++) { - if (pages[i]) { - if (page_count(pages[i]) != 1) - printk(KERN_ERR TTM_PFX - "Erroneous page count. " - "Leaking pages.\n"); - list_add_tail(&pages[i]->lru, &pool->list); - pages[i] = NULL; - pool->npages++; - } - } - /* Check that we don't go over the pool limit */ - npages = 0; - if (pool->npages > _manager->options.max_size) { - npages = pool->npages - _manager->options.max_size; - /* free at least NUM_PAGES_TO_ALLOC number of pages - * to reduce calls to set_memory_wb */ - if (npages < NUM_PAGES_TO_ALLOC) - npages = NUM_PAGES_TO_ALLOC; - } - spin_unlock_irqrestore(&pool->lock, irq_flags); - if (npages) - ttm_page_pool_free(pool, npages); -} - static void ttm_page_pool_init_locked(struct ttm_page_pool *pool, int flags, char *name) { @@ -865,9 +864,9 @@ int ttm_pool_populate(struct ttm_tt *ttm) return 0;
for (i = 0; i < ttm->num_pages; ++i) { - ret = ttm_get_pages(&ttm->pages[i], ttm->page_flags, - ttm->caching_state, 1, - &ttm->dma_address[i]); + ret = ttm_get_pages(&ttm->pages[i], 1, + ttm->page_flags, + ttm->caching_state); if (ret != 0) { ttm_pool_unpopulate(ttm); return -ENOMEM; @@ -904,8 +903,7 @@ void ttm_pool_unpopulate(struct ttm_tt *ttm) ttm->pages[i]); ttm_put_pages(&ttm->pages[i], 1, ttm->page_flags, - ttm->caching_state, - ttm->dma_address); + ttm->caching_state); } } ttm->state = tt_unpopulated; diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c index 7a47793..6678abc 100644 --- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c +++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c @@ -789,7 +789,7 @@ out:
/* * @return count of pages still required to fulfill the request. -*/ + */ static int ttm_dma_page_pool_fill_locked(struct dma_pool *pool, unsigned long *irq_flags) { @@ -838,10 +838,11 @@ static int ttm_dma_page_pool_fill_locked(struct dma_pool *pool, * allocates one page at a time. */ static int ttm_dma_pool_get_pages(struct dma_pool *pool, - struct ttm_tt *ttm, + struct ttm_dma_tt *ttm_dma, unsigned index) { struct dma_page *d_page; + struct ttm_tt *ttm = &ttm_dma->ttm; unsigned long irq_flags; int count, r = -ENOMEM;
@@ -850,8 +851,8 @@ static int ttm_dma_pool_get_pages(struct dma_pool *pool, if (count) { d_page = list_first_entry(&pool->free_list, struct dma_page, page_list); ttm->pages[index] = d_page->p; - ttm->dma_address[index] = d_page->dma; - list_move_tail(&d_page->page_list, &ttm->alloc_list); + ttm_dma->dma_address[index] = d_page->dma; + list_move_tail(&d_page->page_list, &ttm_dma->pages_list); r = 0; pool->npages_in_use += 1; pool->npages_free -= 1; @@ -864,8 +865,9 @@ static int ttm_dma_pool_get_pages(struct dma_pool *pool, * On success pages list will hold count number of correctly * cached pages. On failure will hold the negative return value (-ENOMEM, etc). */ -int ttm_dma_populate(struct ttm_tt *ttm, struct device *dev) +int ttm_dma_populate(struct ttm_dma_tt *ttm_dma, struct device *dev) { + struct ttm_tt *ttm = &ttm_dma->ttm; struct ttm_mem_global *mem_glob = ttm->glob->mem_glob; struct dma_pool *pool; enum pool_type type; @@ -892,18 +894,18 @@ int ttm_dma_populate(struct ttm_tt *ttm, struct device *dev) } }
- INIT_LIST_HEAD(&ttm->alloc_list); + INIT_LIST_HEAD(&ttm_dma->pages_list); for (i = 0; i < ttm->num_pages; ++i) { - ret = ttm_dma_pool_get_pages(pool, ttm, i); + ret = ttm_dma_pool_get_pages(pool, ttm_dma, i); if (ret != 0) { - ttm_dma_unpopulate(ttm, dev); + ttm_dma_unpopulate(ttm_dma, dev); return -ENOMEM; }
ret = ttm_mem_global_alloc_page(mem_glob, ttm->pages[i], false, false); if (unlikely(ret != 0)) { - ttm_dma_unpopulate(ttm, dev); + ttm_dma_unpopulate(ttm_dma, dev); return -ENOMEM; } } @@ -911,7 +913,7 @@ int ttm_dma_populate(struct ttm_tt *ttm, struct device *dev) if (unlikely(ttm->page_flags & TTM_PAGE_FLAG_SWAPPED)) { ret = ttm_tt_swapin(ttm); if (unlikely(ret != 0)) { - ttm_dma_unpopulate(ttm, dev); + ttm_dma_unpopulate(ttm_dma, dev); return ret; } } @@ -937,8 +939,9 @@ static int ttm_dma_pool_get_num_unused_pages(void) }
/* Put all pages in pages list to correct pool to wait for reuse */ -void ttm_dma_unpopulate(struct ttm_tt *ttm, struct device *dev) +void ttm_dma_unpopulate(struct ttm_dma_tt *ttm_dma, struct device *dev) { + struct ttm_tt *ttm = &ttm_dma->ttm; struct dma_pool *pool; struct dma_page *d_page, *next; enum pool_type type; @@ -956,7 +959,7 @@ void ttm_dma_unpopulate(struct ttm_tt *ttm, struct device *dev) ttm_to_type(ttm->page_flags, tt_cached)) == pool);
/* make sure pages array match list and count number of pages */ - list_for_each_entry(d_page, &ttm->alloc_list, page_list) { + list_for_each_entry(d_page, &ttm_dma->pages_list, page_list) { ttm->pages[count] = d_page->p; count++; } @@ -967,7 +970,7 @@ void ttm_dma_unpopulate(struct ttm_tt *ttm, struct device *dev) pool->nfrees += count; } else { pool->npages_free += count; - list_splice(&ttm->alloc_list, &pool->free_list); + list_splice(&ttm_dma->pages_list, &pool->free_list); if (pool->npages_free > _manager->options.max_size) { count = pool->npages_free - _manager->options.max_size; } @@ -975,7 +978,7 @@ void ttm_dma_unpopulate(struct ttm_tt *ttm, struct device *dev) spin_unlock_irqrestore(&pool->lock, irq_flags);
if (is_cached) { - list_for_each_entry_safe(d_page, next, &ttm->alloc_list, page_list) { + list_for_each_entry_safe(d_page, next, &ttm_dma->pages_list, page_list) { ttm_mem_global_free_page(ttm->glob->mem_glob, d_page->p); ttm_dma_page_put(pool, d_page); @@ -987,10 +990,10 @@ void ttm_dma_unpopulate(struct ttm_tt *ttm, struct device *dev) } }
- INIT_LIST_HEAD(&ttm->alloc_list); + INIT_LIST_HEAD(&ttm_dma->pages_list); for (i = 0; i < ttm->num_pages; i++) { ttm->pages[i] = NULL; - ttm->dma_address[i] = 0; + ttm_dma->dma_address[i] = 0; }
/* shrink pool if necessary */ diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c index 1625739..58e1fa1 100644 --- a/drivers/gpu/drm/ttm/ttm_tt.c +++ b/drivers/gpu/drm/ttm/ttm_tt.c @@ -48,17 +48,14 @@ */ static void ttm_tt_alloc_page_directory(struct ttm_tt *ttm) { - ttm->pages = drm_calloc_large(ttm->num_pages, sizeof(*ttm->pages)); - ttm->dma_address = drm_calloc_large(ttm->num_pages, - sizeof(*ttm->dma_address)); + ttm->pages = drm_calloc_large(ttm->num_pages, sizeof(void*)); }
-static void ttm_tt_free_page_directory(struct ttm_tt *ttm) +static void ttm_dma_tt_alloc_page_directory(struct ttm_dma_tt *ttm) { - drm_free_large(ttm->pages); - ttm->pages = NULL; - drm_free_large(ttm->dma_address); - ttm->dma_address = NULL; + ttm->ttm.pages = drm_calloc_large(ttm->ttm.num_pages, sizeof(void*)); + ttm->dma_address = drm_calloc_large(ttm->ttm.num_pages, + sizeof(*ttm->dma_address)); }
#ifdef CONFIG_X86 @@ -173,7 +170,6 @@ void ttm_tt_destroy(struct ttm_tt *ttm)
if (likely(ttm->pages != NULL)) { ttm->bdev->driver->ttm_tt_unpopulate(ttm); - ttm_tt_free_page_directory(ttm); }
if (!(ttm->page_flags & TTM_PAGE_FLAG_PERSISTENT_SWAP) && @@ -196,9 +192,8 @@ int ttm_tt_init(struct ttm_tt *ttm, struct ttm_bo_device *bdev, ttm->dummy_read_page = dummy_read_page; ttm->state = tt_unpopulated;
- INIT_LIST_HEAD(&ttm->alloc_list); ttm_tt_alloc_page_directory(ttm); - if (!ttm->pages || !ttm->dma_address) { + if (!ttm->pages) { ttm_tt_destroy(ttm); printk(KERN_ERR TTM_PFX "Failed allocating page table\n"); return -ENOMEM; @@ -207,6 +202,49 @@ int ttm_tt_init(struct ttm_tt *ttm, struct ttm_bo_device *bdev, } EXPORT_SYMBOL(ttm_tt_init);
+void ttm_tt_fini(struct ttm_tt *ttm) +{ + drm_free_large(ttm->pages); + ttm->pages = NULL; +} +EXPORT_SYMBOL(ttm_tt_fini); + +int ttm_dma_tt_init(struct ttm_dma_tt *ttm_dma, struct ttm_bo_device *bdev, + unsigned long size, uint32_t page_flags, + struct page *dummy_read_page) +{ + struct ttm_tt *ttm = &ttm_dma->ttm; + + ttm->bdev = bdev; + ttm->glob = bdev->glob; + ttm->num_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT; + ttm->caching_state = tt_cached; + ttm->page_flags = page_flags; + ttm->dummy_read_page = dummy_read_page; + ttm->state = tt_unpopulated; + + INIT_LIST_HEAD(&ttm_dma->pages_list); + ttm_dma_tt_alloc_page_directory(ttm_dma); + if (!ttm->pages || !ttm_dma->dma_address) { + ttm_tt_destroy(ttm); + printk(KERN_ERR TTM_PFX "Failed allocating page table\n"); + return -ENOMEM; + } + return 0; +} +EXPORT_SYMBOL(ttm_dma_tt_init); + +void ttm_dma_tt_fini(struct ttm_dma_tt *ttm_dma) +{ + struct ttm_tt *ttm = &ttm_dma->ttm; + + drm_free_large(ttm->pages); + ttm->pages = NULL; + drm_free_large(ttm_dma->dma_address); + ttm_dma->dma_address = NULL; +} +EXPORT_SYMBOL(ttm_dma_tt_fini); + void ttm_tt_unbind(struct ttm_tt *ttm) { int ret; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c index 3986d74..1e2c0fb 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c @@ -168,6 +168,7 @@ static void vmw_ttm_destroy(struct ttm_tt *ttm) { struct vmw_ttm_tt *vmw_be = container_of(ttm, struct vmw_ttm_tt, ttm);
+ ttm_tt_fini(ttm); kfree(vmw_be); }
@@ -191,6 +192,7 @@ struct ttm_tt *vmw_ttm_tt_create(struct ttm_bo_device *bdev, vmw_be->dev_priv = container_of(bdev, struct vmw_private, bdev);
if (ttm_tt_init(&vmw_be->ttm, bdev, size, page_flags, dummy_read_page)) { + kfree(vmw_be); return NULL; }
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h index beef9ab..b2a0848 100644 --- a/include/drm/ttm/ttm_bo_driver.h +++ b/include/drm/ttm/ttm_bo_driver.h @@ -103,8 +103,6 @@ enum ttm_caching_state { * @swap_storage: Pointer to shmem struct file for swap storage. * @caching_state: The current caching state of the pages. * @state: The current binding state of the pages. - * @dma_address: The DMA (bus) addresses of the pages (if TTM_PAGE_FLAG_DMA32) - * @alloc_list: used by some page allocation backend * * This is a structure holding the pages, caching- and aperture binding * status for a buffer object that isn't backed by fixed (VRAM / AGP) @@ -127,8 +125,23 @@ struct ttm_tt { tt_unbound, tt_unpopulated, } state; +}; + +/** + * struct ttm_dma_tt + * + * @ttm: Base ttm_tt struct. + * @dma_address: The DMA (bus) addresses of the pages + * @pages_list: used by some page allocation backend + * + * This is a structure holding the pages, caching- and aperture binding + * status for a buffer object that isn't backed by fixed (VRAM / AGP) + * memory. + */ +struct ttm_dma_tt { + struct ttm_tt ttm; dma_addr_t *dma_address; - struct list_head alloc_list; + struct list_head pages_list; };
#define TTM_MEMTYPE_FLAG_FIXED (1 << 0) /* Fixed (on-card) PCI memory */ @@ -595,6 +608,19 @@ ttm_flag_masked(uint32_t *old, uint32_t new, uint32_t mask) extern int ttm_tt_init(struct ttm_tt *ttm, struct ttm_bo_device *bdev, unsigned long size, uint32_t page_flags, struct page *dummy_read_page); +extern int ttm_dma_tt_init(struct ttm_dma_tt *ttm_dma, struct ttm_bo_device *bdev, + unsigned long size, uint32_t page_flags, + struct page *dummy_read_page); + +/** + * ttm_tt_fini + * + * @ttm: the ttm_tt structure. + * + * Free memory of ttm_tt structure + */ +extern void ttm_tt_fini(struct ttm_tt *ttm); +extern void ttm_dma_tt_fini(struct ttm_dma_tt *ttm_dma);
/** * ttm_ttm_bind: diff --git a/include/drm/ttm/ttm_page_alloc.h b/include/drm/ttm/ttm_page_alloc.h index 1e1337e..5fe2740 100644 --- a/include/drm/ttm/ttm_page_alloc.h +++ b/include/drm/ttm/ttm_page_alloc.h @@ -30,35 +30,6 @@ #include "ttm_memory.h"
/** - * Get count number of pages from pool to pages list. - * - * @pages: head of empty linked list where pages are filled. - * @flags: ttm flags for page allocation. - * @cstate: ttm caching state for the page. - * @count: number of pages to allocate. - * @dma_address: The DMA (bus) address of pages (if TTM_PAGE_FLAG_DMA32 set). - */ -int ttm_get_pages(struct page **pages, - int flags, - enum ttm_caching_state cstate, - unsigned npages, - dma_addr_t *dma_address); -/** - * Put linked list of pages to pool. - * - * @pages: list of pages to free. - * @page_count: number of pages in the list. Zero can be passed for unknown - * count. - * @flags: ttm flags for page allocation. - * @cstate: ttm caching state. - * @dma_address: The DMA (bus) address of pages (if TTM_PAGE_FLAG_DMA32 set). - */ -void ttm_put_pages(struct page **pages, - unsigned npages, - int flags, - enum ttm_caching_state cstate, - dma_addr_t *dma_address); -/** * Initialize pool allocator. */ int ttm_page_alloc_init(struct ttm_mem_global *glob, unsigned max_pages); @@ -107,8 +78,8 @@ void ttm_dma_page_alloc_fini(void); */ extern int ttm_dma_page_alloc_debugfs(struct seq_file *m, void *data);
-int ttm_dma_populate(struct ttm_tt *ttm, struct device *dev); -extern void ttm_dma_unpopulate(struct ttm_tt *ttm, struct device *dev); +extern int ttm_dma_populate(struct ttm_dma_tt *ttm_dma, struct device *dev); +extern void ttm_dma_unpopulate(struct ttm_dma_tt *ttm_dma, struct device *dev);
#else static inline int ttm_dma_page_alloc_init(struct ttm_mem_global *glob,
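The core trick of this patch is plain C "inheritance" by embedding: struct ttm_dma_tt carries a struct ttm_tt as its first member, so the existing callbacks keep taking a struct ttm_tt * and the DMA-aware backends cast back to the containing type. A standalone illustration with trimmed-down structs follows; the field lists are invented, only the layout idea mirrors the patch.

#include <stdio.h>

struct ttm_tt {
	unsigned long num_pages;
};

struct ttm_dma_tt {
	struct ttm_tt ttm;	/* must stay the first member */
	unsigned long *dma_address;
};

/* Callbacks keep the base type in their signature... */
static void populate(struct ttm_tt *ttm)
{
	/* ...and recover the containing object. Because ttm is the first
	 * member the cast is valid; container_of() would also work and
	 * does not depend on member position. */
	struct ttm_dma_tt *ttm_dma = (struct ttm_dma_tt *)ttm;

	printf("%lu pages, dma table at %p\n",
	       ttm->num_pages, (void *)ttm_dma->dma_address);
}

int main(void)
{
	unsigned long table[4] = { 0 };
	struct ttm_dma_tt t = {
		.ttm = { .num_pages = 4 },
		.dma_address = table,
	};

	populate(&t.ttm);
	return 0;
}

Drivers that never touch dma_address keep allocating a bare ttm_tt and pay nothing for the extra fields, which is the memory saving the commit message describes.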
From: Jerome Glisse jglisse@redhat.com
Provide helper functions to compute the kernel memory size needed for each buffer object. Move all the accounting inside ttm, simplifying the drivers and avoiding code duplication across them.
v2 fix accounting of ghost objects; one would have thought that i would have run into the issue long ago, but it seems ghost objects are rare when you have plenty of vram ;)
Signed-off-by: Jerome Glisse jglisse@redhat.com --- drivers/gpu/drm/nouveau/nouveau_bo.c | 6 +++- drivers/gpu/drm/radeon/radeon_object.c | 8 +++- drivers/gpu/drm/ttm/ttm_bo.c | 52 +++++++++++++++++++++++------- drivers/gpu/drm/ttm/ttm_bo_util.c | 1 + drivers/gpu/drm/vmwgfx/vmwgfx_resource.c | 35 +------------------- include/drm/ttm/ttm_bo_api.h | 19 ++++++++++- include/drm/ttm/ttm_bo_driver.h | 5 --- 7 files changed, 71 insertions(+), 55 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c index 4347776..857bca4 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.c +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c @@ -93,6 +93,7 @@ nouveau_bo_new(struct drm_device *dev, int size, int align, { struct drm_nouveau_private *dev_priv = dev->dev_private; struct nouveau_bo *nvbo; + size_t acc_size; int ret;
nvbo = kzalloc(sizeof(struct nouveau_bo), GFP_KERNEL); @@ -115,9 +116,12 @@ nouveau_bo_new(struct drm_device *dev, int size, int align, nvbo->bo.mem.num_pages = size >> PAGE_SHIFT; nouveau_bo_placement_set(nvbo, flags, 0);
+ acc_size = ttm_bo_dma_acc_size(&dev_priv->ttm.bdev, size, + sizeof(struct nouveau_bo)); + ret = ttm_bo_init(&dev_priv->ttm.bdev, &nvbo->bo, size, ttm_bo_type_device, &nvbo->placement, - align >> PAGE_SHIFT, 0, false, NULL, size, + align >> PAGE_SHIFT, 0, false, NULL, acc_size, nouveau_bo_del_ttm); if (ret) { /* ttm will call nouveau_bo_del_ttm if it fails.. */ diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c index 1c85152..695b480 100644 --- a/drivers/gpu/drm/radeon/radeon_object.c +++ b/drivers/gpu/drm/radeon/radeon_object.c @@ -95,6 +95,7 @@ int radeon_bo_create(struct radeon_device *rdev, enum ttm_bo_type type; unsigned long page_align = roundup(byte_align, PAGE_SIZE) >> PAGE_SHIFT; unsigned long max_size = 0; + size_t acc_size; int r;
size = ALIGN(size, PAGE_SIZE); @@ -117,6 +118,9 @@ int radeon_bo_create(struct radeon_device *rdev, return -ENOMEM; }
+ acc_size = ttm_bo_dma_acc_size(&rdev->mman.bdev, size, + sizeof(struct radeon_bo)); + retry: bo = kzalloc(sizeof(struct radeon_bo), GFP_KERNEL); if (bo == NULL) @@ -134,8 +138,8 @@ retry: /* Kernel allocation are uninterruptible */ mutex_lock(&rdev->vram_mutex); r = ttm_bo_init(&rdev->mman.bdev, &bo->tbo, size, type, - &bo->placement, page_align, 0, !kernel, NULL, size, - &radeon_ttm_bo_destroy); + &bo->placement, page_align, 0, !kernel, NULL, + acc_size, &radeon_ttm_bo_destroy); mutex_unlock(&rdev->vram_mutex); if (unlikely(r != 0)) { if (r != -ERESTARTSYS) { diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index cb73527..de7ad99 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -137,6 +137,7 @@ static void ttm_bo_release_list(struct kref *list_kref) struct ttm_buffer_object *bo = container_of(list_kref, struct ttm_buffer_object, list_kref); struct ttm_bo_device *bdev = bo->bdev; + size_t acc_size = bo->acc_size;
BUG_ON(atomic_read(&bo->list_kref.refcount)); BUG_ON(atomic_read(&bo->kref.refcount)); @@ -152,9 +153,9 @@ static void ttm_bo_release_list(struct kref *list_kref) if (bo->destroy) bo->destroy(bo); else { - ttm_mem_global_free(bdev->glob->mem_glob, bo->acc_size); kfree(bo); } + ttm_mem_global_free(bdev->glob->mem_glob, acc_size); }
int ttm_bo_wait_unreserved(struct ttm_buffer_object *bo, bool interruptible) @@ -1157,6 +1158,17 @@ int ttm_bo_init(struct ttm_bo_device *bdev, { int ret = 0; unsigned long num_pages; + struct ttm_mem_global *mem_glob = bdev->glob->mem_glob; + + ret = ttm_mem_global_alloc(mem_glob, acc_size, false, false); + if (ret) { + printk(KERN_ERR TTM_PFX "Out of kernel memory.\n"); + if (destroy) + (*destroy)(bo); + else + kfree(bo); + return -ENOMEM; + }
size += buffer_start & ~PAGE_MASK; num_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT; @@ -1227,14 +1239,34 @@ out_err: } EXPORT_SYMBOL(ttm_bo_init);
-static inline size_t ttm_bo_size(struct ttm_bo_global *glob, - unsigned long num_pages) +size_t ttm_bo_acc_size(struct ttm_bo_device *bdev, + unsigned long bo_size, + unsigned struct_size) { - size_t page_array_size = (num_pages * sizeof(void *) + PAGE_SIZE - 1) & - PAGE_MASK; + unsigned npages = (PAGE_ALIGN(bo_size)) >> PAGE_SHIFT; + size_t size = 0;
- return glob->ttm_bo_size + 2 * page_array_size; + size += ttm_round_pot(struct_size); + size += PAGE_ALIGN(npages * sizeof(void *)); + size += ttm_round_pot(sizeof(struct ttm_tt)); + return size; } +EXPORT_SYMBOL(ttm_bo_acc_size); + +size_t ttm_bo_dma_acc_size(struct ttm_bo_device *bdev, + unsigned long bo_size, + unsigned struct_size) +{ + unsigned npages = (PAGE_ALIGN(bo_size)) >> PAGE_SHIFT; + size_t size = 0; + + size += ttm_round_pot(struct_size); + size += PAGE_ALIGN(npages * sizeof(void *)); + size += PAGE_ALIGN(npages * sizeof(dma_addr_t)); + size += ttm_round_pot(sizeof(struct ttm_dma_tt)); + return size; +} +EXPORT_SYMBOL(ttm_bo_dma_acc_size);
int ttm_bo_create(struct ttm_bo_device *bdev, unsigned long size, @@ -1248,10 +1280,10 @@ int ttm_bo_create(struct ttm_bo_device *bdev, { struct ttm_buffer_object *bo; struct ttm_mem_global *mem_glob = bdev->glob->mem_glob; + size_t acc_size; int ret;
- size_t acc_size = - ttm_bo_size(bdev->glob, (size + PAGE_SIZE - 1) >> PAGE_SHIFT); + acc_size = ttm_bo_acc_size(bdev, size, sizeof(struct ttm_buffer_object)); ret = ttm_mem_global_alloc(mem_glob, acc_size, false, false); if (unlikely(ret != 0)) return ret; @@ -1437,10 +1469,6 @@ int ttm_bo_global_init(struct drm_global_reference *ref) goto out_no_shrink; }
- glob->ttm_bo_extra_size = ttm_round_pot(sizeof(struct ttm_tt)); - glob->ttm_bo_size = glob->ttm_bo_extra_size + - ttm_round_pot(sizeof(struct ttm_buffer_object)); - atomic_set(&glob->bo_count, 0);
ret = kobject_init_and_add( diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c index 60f204d..f8187ea 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -445,6 +445,7 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo, kref_init(&fbo->list_kref); kref_init(&fbo->kref); fbo->destroy = &ttm_transfered_destroy; + fbo->acc_size = 0;
*new_obj = fbo; return 0; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c index 86c5e4c..2eb84a5 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c @@ -1517,29 +1517,10 @@ out_bad_surface: /** * Buffer management. */ - -static size_t vmw_dmabuf_acc_size(struct ttm_bo_global *glob, - unsigned long num_pages) -{ - static size_t bo_user_size = ~0; - - size_t page_array_size = - (num_pages * sizeof(void *) + PAGE_SIZE - 1) & PAGE_MASK; - - if (unlikely(bo_user_size == ~0)) { - bo_user_size = glob->ttm_bo_extra_size + - ttm_round_pot(sizeof(struct vmw_dma_buffer)); - } - - return bo_user_size + page_array_size; -} - void vmw_dmabuf_bo_free(struct ttm_buffer_object *bo) { struct vmw_dma_buffer *vmw_bo = vmw_dma_buffer(bo); - struct ttm_bo_global *glob = bo->glob;
- ttm_mem_global_free(glob->mem_glob, bo->acc_size); kfree(vmw_bo); }
@@ -1550,24 +1531,12 @@ int vmw_dmabuf_init(struct vmw_private *dev_priv, void (*bo_free) (struct ttm_buffer_object *bo)) { struct ttm_bo_device *bdev = &dev_priv->bdev; - struct ttm_mem_global *mem_glob = bdev->glob->mem_glob; size_t acc_size; int ret;
BUG_ON(!bo_free);
- acc_size = - vmw_dmabuf_acc_size(bdev->glob, - (size + PAGE_SIZE - 1) >> PAGE_SHIFT); - - ret = ttm_mem_global_alloc(mem_glob, acc_size, false, false); - if (unlikely(ret != 0)) { - /* we must free the bo here as - * ttm_buffer_object_init does so as well */ - bo_free(&vmw_bo->base); - return ret; - } - + acc_size = ttm_bo_acc_size(bdev, size, sizeof(struct vmw_dma_buffer)); memset(vmw_bo, 0, sizeof(*vmw_bo));
INIT_LIST_HEAD(&vmw_bo->validate_list); @@ -1582,9 +1551,7 @@ int vmw_dmabuf_init(struct vmw_private *dev_priv, static void vmw_user_dmabuf_destroy(struct ttm_buffer_object *bo) { struct vmw_user_dma_buffer *vmw_user_bo = vmw_user_dma_buffer(bo); - struct ttm_bo_global *glob = bo->glob;
- ttm_mem_global_free(glob->mem_glob, bo->acc_size); kfree(vmw_user_bo); }
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h index 8d95a42..974c8f8 100644 --- a/include/drm/ttm/ttm_bo_api.h +++ b/include/drm/ttm/ttm_bo_api.h @@ -429,9 +429,9 @@ extern void ttm_bo_unlock_delayed_workqueue(struct ttm_bo_device *bdev, * -EBUSY if the buffer is busy and no_wait is true. * -ERESTARTSYS if interrupted by a signal. */ - extern int ttm_bo_synccpu_write_grab(struct ttm_buffer_object *bo, bool no_wait); + /** * ttm_bo_synccpu_write_release: * @@ -442,6 +442,22 @@ ttm_bo_synccpu_write_grab(struct ttm_buffer_object *bo, bool no_wait); extern void ttm_bo_synccpu_write_release(struct ttm_buffer_object *bo);
/** + * ttm_bo_acc_size + * + * @bdev: Pointer to a ttm_bo_device struct. + * @bo_size: size of the buffer object in bytes. + * @struct_size: size of the structure holding buffer object data + * + * Returns the size to account for a buffer object + */ +size_t ttm_bo_acc_size(struct ttm_bo_device *bdev, + unsigned long bo_size, + unsigned struct_size); +size_t ttm_bo_dma_acc_size(struct ttm_bo_device *bdev, + unsigned long bo_size, + unsigned struct_size); + +/** * ttm_bo_init * * @bdev: Pointer to a ttm_bo_device struct. @@ -488,6 +504,7 @@ extern int ttm_bo_init(struct ttm_bo_device *bdev, struct file *persistent_swap_storage, size_t acc_size, void (*destroy) (struct ttm_buffer_object *)); + /** * ttm_bo_synccpu_object_init * diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h index b2a0848..2be8891 100644 --- a/include/drm/ttm/ttm_bo_driver.h +++ b/include/drm/ttm/ttm_bo_driver.h @@ -469,9 +469,6 @@ struct ttm_bo_global_ref { * @dummy_read_page: Pointer to a dummy page used for mapping requests * of unpopulated pages. * @shrink: A shrink callback object used for buffer object swap. - * @ttm_bo_extra_size: Extra size (sizeof(struct ttm_buffer_object) excluded) * used by a buffer object. This is excluding page arrays and backing pages. - * @ttm_bo_size: This is @ttm_bo_extra_size + sizeof(struct ttm_buffer_object). * @device_list_mutex: Mutex protecting the device list. * This mutex is held while traversing the device list for pm options. * @lru_lock: Spinlock protecting the bo subsystem lru lists. @@ -489,8 +486,6 @@ struct ttm_bo_global { struct ttm_mem_global *mem_glob; struct page *dummy_read_page; struct ttm_mem_shrink shrink; - size_t ttm_bo_extra_size; - size_t ttm_bo_size; struct mutex device_list_mutex; spinlock_t lru_lock;
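To see what the new helpers actually charge against the global memory accounting, here is a userspace re-derivation of ttm_bo_dma_acc_size: the driver structure and the tt structure are rounded to a power of two, and the two per-page arrays are page-aligned. round_pot() below is a simplified model of the kernel's ttm_round_pot(), and all the concrete sizes in main() are invented for the example.

#include <stdio.h>

#define PAGE_SIZE 4096UL
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

/* Simplified model of ttm_round_pot(): next power of two >= size. */
static unsigned long round_pot(unsigned long size)
{
	unsigned long pot = 1;

	while (pot < size)
		pot <<= 1;
	return pot;
}

static unsigned long dma_acc_size(unsigned long bo_size,
				  unsigned long struct_size,
				  unsigned long dma_tt_size)
{
	unsigned long npages = PAGE_ALIGN(bo_size) / PAGE_SIZE;
	unsigned long size = 0;

	size += round_pot(struct_size);			/* the driver's bo struct */
	size += PAGE_ALIGN(npages * sizeof(void *));	/* pages[] directory */
	size += PAGE_ALIGN(npages * 8);			/* dma_address[], 64-bit dma_addr_t */
	size += round_pot(dma_tt_size);			/* struct ttm_dma_tt */
	return size;
}

int main(void)
{
	/* invented sizes: a 1 MiB bo, 600-byte driver struct, 96-byte tt */
	printf("acc_size = %lu bytes\n", dma_acc_size(1UL << 20, 600, 96));
	return 0;
}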
On Fri, Nov 18, 2011 at 08:04:47PM -0500, j.glisse@gmail.com wrote:
Oh i forgot to add some of the reviewed-by tags; i updated the patches on fdo, i won't resend them to the ml.
Cheers, Jerome
On Fri, Nov 18, 2011 at 08:08:36PM -0500, Jerome Glisse wrote:
Oh i forgot to add some of the reviewed-by tags; i updated the patches on fdo, i won't resend them to the ml.
Great! How should we ask Dave to pull them? Does he prefer to do it via a git tree? In which case I can create a branch with those patches and send him a GIT PULL email? Or is there a more convenient way?
----- Original Message -----
From: "Konrad Rzeszutek Wilk" konrad.wilk@oracle.com To: "Jerome Glisse" j.glisse@gmail.com, dri-devel@lists.freedesktop.org, "konrad wilk" konrad.wilk@oracle.com, thellstrom@vmware.com, airlied@gmail.com, "Jerome Glisse" jglisse@redhat.com Sent: Friday, 2 December, 2011 2:19:01 PM Subject: Re: ttm: merge ttm_backend & ttm_tt, introduce ttm dma allocator V7
Great! How should we ask Dave to pull them? Does he prefer to do it via a git tree? In which case I can create a branch with those patches and send him a GIT PULL email? Or is there a more convenient way?
If someone could suck them all into a git tree + all the Reviewed-by tags from the people who reviewed them it would make it a lot easier,
I lost track of where this ended up as Jerome had a few balls in the air and I know Thomas wasn't liking some of them.
So please send me a git url and I'll go review that next week.
Dave.
On Sat, Dec 3, 2011 at 9:52 AM, David Airlie airlied@redhat.com wrote:
If someone could suck them all into a git tree + all the Reviewed-by tags from the people who reviewed them it would make it a lot easier,
I lost track of where this ended up as Jerome had a few balls in the air and I know Thomas wasn't liking some of them.
So please send me a git url and I'll go review that next week.
Dave.
http://cgit.freedesktop.org/~glisse/linux/log/
I need to add the last reviewed-by from Thomas. Note this tree also has other patches (the latest multi-ring patchset) which we want for next too, i believe.
I will update this tree on monday with all the missing reviewed by
Cheers, Jerome
On Sat, Dec 3, 2011 at 4:41 PM, Jerome Glisse j.glisse@gmail.com wrote:
Just to be extra informative my tree has:
drm/ttm: callback move_notify any time bo placement change v4 (got the reviewed-by from Thomas recently)

drm/ttm: simplify memory accounting for ttm user v2 (Thomas reviewed version 1; version 2 only fixes a bug, so i think it's safe to assume i have his reviewed-by for v2)
Cheers, Jerome