Hi Jérôme,
On 13.08.2014 12:52, Jérôme Glisse wrote:
From: Jérôme Glisse <jglisse@redhat.com>
Current code never allowed the page pool to actually fill in any way. This fixes that and also allows the pool to grow over its limit, until it exceeds the limit by the batch size used for allocation and deallocation.
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Tested-by: Michel Dänzer <michel@daenzer.net>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index c96db43..a076ff3 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -953,14 +953,9 @@ void ttm_dma_unpopulate(struct ttm_dma_tt *ttm_dma, struct device *dev)
 	} else {
 		pool->npages_free += count;
 		list_splice(&ttm_dma->pages_list, &pool->free_list);
-		npages = count;
-		if (pool->npages_free > _manager->options.max_size) {
+		if (pool->npages_free >= (_manager->options.max_size +
+					  NUM_PAGES_TO_ALLOC))
 			npages = pool->npages_free - _manager->options.max_size;
-			/* free at least NUM_PAGES_TO_ALLOC number of pages
-			 * to reduce calls to set_memory_wb */
-			if (npages < NUM_PAGES_TO_ALLOC)
-				npages = NUM_PAGES_TO_ALLOC;
-		}
 	}
 	spin_unlock_irqrestore(&pool->lock, irq_flags);
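For reference, here is a small stand-alone model of what the change does. This is my own sketch, not TTM code; POOL_MAX_SIZE, BATCH and the two helper functions are made-up stand-ins for _manager->options.max_size, NUM_PAGES_TO_ALLOC and the trimming logic. With the old code npages starts out as count, so the batch that was just returned is handed straight back to the system and the pool never retains anything; with the new code nothing is freed until the pool exceeds its limit by at least one batch.

/* Stand-alone model of the trimming decision in ttm_dma_unpopulate(),
 * before and after the patch.  POOL_MAX_SIZE, BATCH and both helpers
 * are illustrative stand-ins, not TTM identifiers. */
#include <stdio.h>

#define POOL_MAX_SIZE 16384	/* stand-in for _manager->options.max_size */
#define BATCH         64	/* stand-in for NUM_PAGES_TO_ALLOC */

/* Old logic: npages starts at 'count', so the pages that were just
 * returned are always freed back to the system and the pool never
 * actually fills. */
static unsigned old_pages_to_free(unsigned npages_free, unsigned count)
{
	unsigned npages = count;

	if (npages_free > POOL_MAX_SIZE) {
		npages = npages_free - POOL_MAX_SIZE;
		if (npages < BATCH)
			npages = BATCH;
	}
	return npages;
}

/* New logic: nothing is freed until the pool exceeds its limit by at
 * least one batch; the pool can therefore fill up and is then trimmed
 * back down to the limit. */
static unsigned new_pages_to_free(unsigned npages_free)
{
	unsigned npages = 0;

	if (npages_free >= POOL_MAX_SIZE + BATCH)
		npages = npages_free - POOL_MAX_SIZE;
	return npages;
}

int main(void)
{
	/* Pool well below the limit: old code still frees the whole batch,
	 * new code keeps it in the pool. */
	printf("below limit: old frees %u, new frees %u\n",
	       old_pages_to_free(1024, 256), new_pages_to_free(1024));

	/* Pool past the limit by more than one batch: both trim it back. */
	printf("over limit:  old frees %u, new frees %u\n",
	       old_pages_to_free(POOL_MAX_SIZE + 128, 256),
	       new_pages_to_free(POOL_MAX_SIZE + 128));
	return 0;
}

The first case prints "old frees 256, new frees 0", which is exactly the "pool never fills" behaviour the commit message describes.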
Colleagues of mine have measured significant performance gains for some workloads with this patch. Without it, a lot of CPU cycles are spent changing the caching attributes of pages on allocation.
Note that the performance effect seems to mostly disappear when applying patch 1 as well, so apparently 64MB is too small for the max pool size.
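To make the mechanism a bit more concrete, below is a toy simulation of repeated populate/unpopulate cycles. Again this is only a sketch, nothing in it is TTM code, and the pool size, batch size and cycle count are arbitrary; it simply counts how many pages would need a caching-attribute change (the set_memory_*() path the removed comment refers to) under the old and the new trimming behaviour.

/* Toy simulation: count pages needing a caching-attribute change over
 * repeated populate/unpopulate cycles.  Purely illustrative. */
#include <stdio.h>

#define POOL_MAX 16384	/* pages the pool may keep */
#define BATCH    256	/* pages per populate/unpopulate */
#define CYCLES   1000

int main(void)
{
	unsigned pool_old = 0, pool_new = 0;
	unsigned long changes_old = 0, changes_new = 0;

	for (int i = 0; i < CYCLES; i++) {
		/* populate: pages coming from the pool already have the
		 * right caching attributes, freshly allocated pages must
		 * be converted. */
		unsigned hit_old = pool_old < BATCH ? pool_old : BATCH;
		unsigned hit_new = pool_new < BATCH ? pool_new : BATCH;

		changes_old += BATCH - hit_old;
		changes_new += BATCH - hit_new;
		pool_old -= hit_old;
		pool_new -= hit_new;

		/* unpopulate: the old logic returns the whole batch to the
		 * system (converting it back), so the pool stays empty;
		 * the new logic keeps the pages until the limit is
		 * exceeded by a full batch. */
		changes_old += BATCH;
		pool_new += BATCH;
		if (pool_new >= POOL_MAX + BATCH) {
			changes_new += pool_new - POOL_MAX;
			pool_new = POOL_MAX;
		}
	}

	printf("attribute changes over %d cycles: old=%lu new=%lu\n",
	       CYCLES, changes_old, changes_new);
	return 0;
}

With the old logic every cycle converts two full batches; with the new logic the pool absorbs the traffic after the first cycle, which matches the gains measured here.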
Is there any chance this patch could be applied without the controversial patch 3? If not, do you have ideas for addressing the concerns raised against patch 3?