From: Jérôme Glisse <jglisse@redhat.com>
Current code never allowed the page pool to actually fill anyway. This fixes it, so that we only start freeing pages from the pool once we go over the pool size.
Changed since v1:
- Move the page batching optimization to its separate patch.

Changed since v2:
- Do not remove code that is part of the batching optimization with this patch.
- Better commit message.
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-and-Tested-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
---
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 1 -
 1 file changed, 1 deletion(-)
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index 3077f15..af23080 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -963,7 +963,6 @@ void ttm_dma_unpopulate(struct ttm_dma_tt *ttm_dma, struct device *dev)
 	} else {
 		pool->npages_free += count;
 		list_splice(&ttm_dma->pages_list, &pool->free_list);
-		npages = count;
 		if (pool->npages_free > _manager->options.max_size) {
 			npages = pool->npages_free - _manager->options.max_size;
 			/* free at least NUM_PAGES_TO_ALLOC number of pages
From: Jérôme Glisse <jglisse@redhat.com>
Calls to set_memory_wb() incur heavy TLB flush and IPI cost. To minimize those, wait until the pool grows beyond the batch size before draining it.
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-and-Tested-by: Michel Dänzer <michel@daenzer.net>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
---
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index af23080..624d941 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -963,13 +963,13 @@ void ttm_dma_unpopulate(struct ttm_dma_tt *ttm_dma, struct device *dev)
 	} else {
 		pool->npages_free += count;
 		list_splice(&ttm_dma->pages_list, &pool->free_list);
-		if (pool->npages_free > _manager->options.max_size) {
+		/*
+		 * Wait to have at least NUM_PAGES_TO_ALLOC number of pages
+		 * to free in order to minimize calls to set_memory_wb().
+		 */
+		if (pool->npages_free >= (_manager->options.max_size +
+					  NUM_PAGES_TO_ALLOC))
 			npages = pool->npages_free - _manager->options.max_size;
-			/* free at least NUM_PAGES_TO_ALLOC number of pages
-			 * to reduce calls to set_memory_wb */
-			if (npages < NUM_PAGES_TO_ALLOC)
-				npages = NUM_PAGES_TO_ALLOC;
-		}
 	}
 	spin_unlock_irqrestore(&pool->lock, irq_flags);
On Thu, Jul 9, 2015 at 2:19 PM, j.glisse@gmail.com wrote:
From: Jérôme Glisse <jglisse@redhat.com>
Calls to set_memory_wb() incur heavy TLB flush and IPI cost. To minimize those, wait until the pool grows beyond the batch size before draining it.
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-and-Tested-by: Michel Dänzer <michel@daenzer.net>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index af23080..624d941 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -963,13 +963,13 @@ void ttm_dma_unpopulate(struct ttm_dma_tt *ttm_dma, struct device *dev)
 	} else {
 		pool->npages_free += count;
 		list_splice(&ttm_dma->pages_list, &pool->free_list);
-		if (pool->npages_free > _manager->options.max_size) {
+		/*
+		 * Wait to have at least NUM_PAGES_TO_ALLOC number of pages
+		 * to free in order to minimize calls to set_memory_wb().
+		 */
+		if (pool->npages_free >= (_manager->options.max_size +
+					  NUM_PAGES_TO_ALLOC))
 			npages = pool->npages_free - _manager->options.max_size;
-			/* free at least NUM_PAGES_TO_ALLOC number of pages
-			 * to reduce calls to set_memory_wb */
-			if (npages < NUM_PAGES_TO_ALLOC)
-				npages = NUM_PAGES_TO_ALLOC;
-		}
 	}
 	spin_unlock_irqrestore(&pool->lock, irq_flags);
-- 1.8.3.1
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel
On Thu, Jul 9, 2015 at 2:19 PM, j.glisse@gmail.com wrote:
From: Jérôme Glisse <jglisse@redhat.com>
Current code never allowed the page pool to actually fill anyway. This fixes it, so that we only start freeing pages from the pool once we go over the pool size.
Changed since v1:
- Move the page batching optimization to its separate patch.
Changed since v2:
- Do not remove code that is part of the batching optimization with this patch.
- Better commit message.
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-and-Tested-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 1 -
 1 file changed, 1 deletion(-)
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index 3077f15..af23080 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -963,7 +963,6 @@ void ttm_dma_unpopulate(struct ttm_dma_tt *ttm_dma, struct device *dev)
 	} else {
 		pool->npages_free += count;
 		list_splice(&ttm_dma->pages_list, &pool->free_list);
-		npages = count;
 		if (pool->npages_free > _manager->options.max_size) {
 			npages = pool->npages_free - _manager->options.max_size;
 			/* free at least NUM_PAGES_TO_ALLOC number of pages
-- 1.8.3.1