From: Jérôme Glisse <jglisse@redhat.com>
Current code never allowed the page pool to actually fill up. Fix this so that we only start freeing pages from the pool once it grows beyond the configured pool size.
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Tested-by: Michel Dänzer <michel@daenzer.net>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index c96db43..0194a93 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -953,14 +953,8 @@ void ttm_dma_unpopulate(struct ttm_dma_tt *ttm_dma, struct device *dev)
 	} else {
 		pool->npages_free += count;
 		list_splice(&ttm_dma->pages_list, &pool->free_list);
-		npages = count;
-		if (pool->npages_free > _manager->options.max_size) {
+		if (pool->npages_free > _manager->options.max_size)
 			npages = pool->npages_free - _manager->options.max_size;
-			/* free at least NUM_PAGES_TO_ALLOC number of pages
-			 * to reduce calls to set_memory_wb */
-			if (npages < NUM_PAGES_TO_ALLOC)
-				npages = NUM_PAGES_TO_ALLOC;
-		}
 	}
 	spin_unlock_irqrestore(&pool->lock, irq_flags);
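For readers unfamiliar with this code path, below is a compilable sketch of the trim decision before and after the patch. This is not the kernel code: it is simplified from ttm_dma_unpopulate(), the locking and list handling are omitted, and the NUM_PAGES_TO_ALLOC value here is illustrative rather than the driver's actual definition.

/* Sketch of the pool-trim decision in ttm_dma_unpopulate(); simplified,
 * not the actual kernel code. */
#define NUM_PAGES_TO_ALLOC 512	/* illustrative; the driver derives its own value */

/* Old logic: npages starts at count, so every call frees at least the
 * pages just returned and the pool can never fill up to max_size. */
static unsigned int pages_to_free_old(unsigned int npages_free,
				      unsigned int max_size,
				      unsigned int count)
{
	unsigned int npages = count;

	if (npages_free > max_size) {
		npages = npages_free - max_size;
		/* free at least NUM_PAGES_TO_ALLOC pages per trim */
		if (npages < NUM_PAGES_TO_ALLOC)
			npages = NUM_PAGES_TO_ALLOC;
	}
	return npages;
}

/* New logic: free nothing until the pool actually exceeds max_size,
 * then free only the excess. */
static unsigned int pages_to_free_new(unsigned int npages_free,
				      unsigned int max_size)
{
	unsigned int npages = 0;

	if (npages_free > max_size)
		npages = npages_free - max_size;
	return npages;
}

With the old logic, pages_to_free_old() never returns 0, which is why the pool could not accumulate free pages between calls.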
From: Jérôme Glisse <jglisse@redhat.com>
Calls to set_memory_wb() incur a heavy TLB flush and IPI cost. To minimize that cost, wait until the pool grows beyond the batch size before draining it.
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Cc: Michel Dänzer <michel@daenzer.net>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index 0194a93..8028dd6 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -953,7 +953,12 @@ void ttm_dma_unpopulate(struct ttm_dma_tt *ttm_dma, struct device *dev)
 	} else {
 		pool->npages_free += count;
 		list_splice(&ttm_dma->pages_list, &pool->free_list);
-		if (pool->npages_free > _manager->options.max_size)
+		/*
+		 * Wait to have at least NUM_PAGES_TO_ALLOC number of pages
+		 * to free in order to minimize calls to set_memory_wb().
+		 */
+		if (pool->npages_free >= (_manager->options.max_size +
+					  NUM_PAGES_TO_ALLOC))
 			npages = pool->npages_free - _manager->options.max_size;
 	}
 	spin_unlock_irqrestore(&pool->lock, irq_flags);
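To see what the extra NUM_PAGES_TO_ALLOC slack buys, below is a small self-contained simulation of how often the trim path fires as pages trickle back into the pool. The pool limit, batch size, and per-call page count are made-up numbers; each "trim" stands in for one pass through set_memory_wb(), i.e. one TLB flush plus IPI round:

#include <stdio.h>

#define MAX_SIZE           1024	/* stand-in for _manager->options.max_size */
#define NUM_PAGES_TO_ALLOC  512	/* stand-in for the driver's batch size */

int main(void)
{
	unsigned int free_old = MAX_SIZE, free_new = MAX_SIZE;
	unsigned int trims_old = 0, trims_new = 0;
	unsigned int i;

	/* 64 unpopulate calls, each returning 16 pages to a full pool */
	for (i = 0; i < 64; i++) {
		/* old check: any overshoot of max_size triggers a trim */
		free_old += 16;
		if (free_old > MAX_SIZE) {
			trims_old++;
			free_old = MAX_SIZE;
		}

		/* new check: trim only once a full batch has accumulated */
		free_new += 16;
		if (free_new >= MAX_SIZE + NUM_PAGES_TO_ALLOC) {
			trims_new++;
			free_new = MAX_SIZE;
		}
	}

	/* prints: old: 64 trims, new: 2 trims */
	printf("old: %u trims, new: %u trims\n", trims_old, trims_new);
	return 0;
}

One trim corresponds to one set_memory_wb() round, so batching divides the TLB-flush/IPI traffic by the batch size, at the cost of the pool temporarily holding up to NUM_PAGES_TO_ALLOC pages above max_size.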
On Wed, Jul 08, 2015 at 02:16:37PM -0400, j.glisse@gmail.com wrote:
From: Jérôme Glisse <jglisse@redhat.com>
Calls to set_memory_wb() incur a heavy TLB flush and IPI cost. To minimize that cost, wait until the pool grows beyond the batch size before draining it.
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Cc: Michel Dänzer <michel@daenzer.net>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index 0194a93..8028dd6 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -953,7 +953,12 @@ void ttm_dma_unpopulate(struct ttm_dma_tt *ttm_dma, struct device *dev)
 	} else {
 		pool->npages_free += count;
 		list_splice(&ttm_dma->pages_list, &pool->free_list);
-		if (pool->npages_free > _manager->options.max_size)
+		/*
+		 * Wait to have at least NUM_PAGES_TO_ALLOC number of pages
+		 * to free in order to minimize calls to set_memory_wb().
+		 */
+		if (pool->npages_free >= (_manager->options.max_size +
+					  NUM_PAGES_TO_ALLOC))
 			npages = pool->npages_free - _manager->options.max_size;
 	}
 	spin_unlock_irqrestore(&pool->lock, irq_flags);
--
1.8.3.1
On Wed, Jul 08, 2015 at 02:16:36PM -0400, j.glisse@gmail.com wrote:
From: Jérôme Glisse <jglisse@redhat.com>
Current code never allowed the page pool to actually fill up. Fix this so that we only start freeing pages from the pool once it grows beyond the configured pool size.
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Tested-by: Michel Dänzer <michel@daenzer.net>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index c96db43..0194a93 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -953,14 +953,8 @@ void ttm_dma_unpopulate(struct ttm_dma_tt *ttm_dma, struct device *dev)
 	} else {
 		pool->npages_free += count;
 		list_splice(&ttm_dma->pages_list, &pool->free_list);
-		npages = count;
-		if (pool->npages_free > _manager->options.max_size) {
+		if (pool->npages_free > _manager->options.max_size)
 			npages = pool->npages_free - _manager->options.max_size;
-			/* free at least NUM_PAGES_TO_ALLOC number of pages
-			 * to reduce calls to set_memory_wb */
-			if (npages < NUM_PAGES_TO_ALLOC)
-				npages = NUM_PAGES_TO_ALLOC;
-		}
 	}
 	spin_unlock_irqrestore(&pool->lock, irq_flags);
--
1.8.3.1
On 09.07.2015 03:16, j.glisse@gmail.com wrote:
From: Jérôme Glisse <jglisse@redhat.com>
Current code never allowed the page pool to actually fill up. Fix this so that we only start freeing pages from the pool once it grows beyond the configured pool size.
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
[...]
-		if (pool->npages_free > _manager->options.max_size) {
+		if (pool->npages_free > _manager->options.max_size)
 			npages = pool->npages_free - _manager->options.max_size;
-			/* free at least NUM_PAGES_TO_ALLOC number of pages
-			 * to reduce calls to set_memory_wb */
-			if (npages < NUM_PAGES_TO_ALLOC)
-				npages = NUM_PAGES_TO_ALLOC;
-		}
This should be part of patch 2. With that fixed, both patches are
Reviewed-and-Tested-by: Michel Dänzer michel.daenzer@amd.com