On 09/26/2014 12:40 PM, Chuck Ebbert wrote:
On Fri, 26 Sep 2014 09:15:57 +0200 Thomas Hellstrom <thellstrom@vmware.com> wrote:
On 09/26/2014 01:52 AM, Peter Hurley wrote:
On 09/25/2014 03:35 PM, Chuck Ebbert wrote:
There are six ttm patches queued for 3.16.4:
drm-ttm-choose-a-pool-to-shrink-correctly-in-ttm_dma_pool_shrink_scan.patch
drm-ttm-fix-handling-of-ttm_pl_flag_topdown-v2.patch
drm-ttm-fix-possible-division-by-0-in-ttm_dma_pool_shrink_scan.patch
drm-ttm-fix-possible-stack-overflow-by-recursive-shrinker-calls.patch
drm-ttm-pass-gfp-flags-in-order-to-avoid-deadlock.patch
drm-ttm-use-mutex_trylock-to-avoid-deadlock-inside-shrinker-functions.patch
Thanks for the info, Chuck.
Unfortunately, none of these fix the root problem: TTM's DMA allocations being served by the CMA allocator.
Regards, Peter Hurley
The problem is not really in TTM but in CMA. There was a guy offering to fix this in the CMA code, but I guess he never did, probably because he didn't receive any feedback.
Yeah, the "solution" to this problem seems to be "don't enable CMA on x86". Maybe it should even be disabled in the config system.
Or, as previously suggested, don't use CMA for order-0 (single page) allocations...
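To make that concrete, a minimal sketch of the idea, assuming the check would land somewhere in the CMA-backed DMA allocation path. dma_alloc_one() is a hypothetical helper invented purely for illustration; dma_alloc_from_contiguous() and alloc_pages() are the real kernel interfaces. This is guesswork about the shape of a fix, not a proposed patch:

#include <linux/device.h>
#include <linux/dma-contiguous.h>
#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Hypothetical helper: serve order-0 requests from the normal page
 * allocator and reserve CMA for larger, truly contiguous requests
 * that actually need it.
 */
static struct page *dma_alloc_one(struct device *dev, unsigned int order,
				  gfp_t gfp)
{
	/* A single page is always contiguous; skip CMA entirely. */
	if (order == 0)
		return alloc_pages(gfp, 0);

	/* Multi-page requests may still come from the CMA region. */
	return dma_alloc_from_contiguous(dev, 1 << order, order);
}

Something along those lines would keep the TTM DMA pool, which allocates single pages, off the CMA region without disabling CMA for everyone.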
/Thomas