Hi,
While making some changes to x86's PAT code and thus running with 'debugpat', I noticed some odd behavior on a server running linux-next as of the following commit -- yes, reverting it does 'fix' the issue:
90f479ae51a (drm/mgag200: Replace struct mga_fbdev with generic framebuffer emulation)
With that commit, the following splat is seen over and over for the same range:
x86/PAT: Overlap at 0xd0000000-0xd1000000
x86/PAT: reserve_memtype added [mem 0xd0000000-0xd02fffff], track write-combining, req write-combining, ret write-combining
x86/PAT: free_memtype request [mem 0xd0000000-0xd02fffff]
And all of these ioremaps come from drm_fb_helper_dirty_work():
[ 114.330825] reserve_memtype+0x1b0/0x410
[ 114.330829] ? ttm_bo_kmap+0x1d7/0x270 [ttm]
[ 114.330830] __ioremap_caller.constprop.14+0xf6/0x300
[ 114.330832] ? soft_cursor+0x1f9/0x220
[ 114.330835] ttm_bo_kmap+0x1d7/0x270 [ttm]
[ 114.330838] ? ttm_bo_del_sub_from_lru+0x29/0x40 [ttm]
[ 114.330841] drm_gem_vram_kmap+0x54/0x70 [drm_vram_helper]
[ 114.330842] drm_gem_vram_object_vmap+0x23/0x40 [drm_vram_helper]
[ 114.330853] drm_gem_vmap+0x1f/0x60 [drm]
[ 114.477697] drm_client_buffer_vmap+0x1d/0x30 [drm]
[ 114.477703] drm_fb_helper_dirty_work+0x92/0x180 [drm_kms_helper]
[ 114.477706] process_one_work+0x1f4/0x3e0
[ 114.477707] worker_thread+0x2d/0x3e0
Before that commit, the same range was also added, but only once; fwiw, the behavior is the same with either 24 or 32 bpp.
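To put it another way, each dirty-worker pass appears to do a full map/unmap round trip, roughly like this (a hypothetical condensation on my part -- only the drm_client_buffer_vmap()/vunmap() callees come from the trace above):

	#include <linux/err.h>
	#include <drm/drm_client.h>

	/* Hypothetical condensation of the per-damage path in the trace
	 * above; not actual kernel code. */
	static void dirty_work_roundtrip(struct drm_client_buffer *buffer)
	{
		void *vaddr;

		/* vmap goes through ttm_bo_kmap() -> ioremap() -> reserve_memtype() */
		vaddr = drm_client_buffer_vmap(buffer);
		if (IS_ERR(vaddr))
			return;

		/* ... copy the damaged scanlines into vaddr ... */

		/* vunmap tears the mapping down again -> free_memtype() */
		drm_client_buffer_vunmap(buffer);
	}

So every damage flush seems to reserve and free the same memtype range.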
Any thoughts?
Thanks, Davidlohr
Hi
On 04.09.19 at 08:49, Davidlohr Bueso wrote:
> Hi,
>
> While making some changes to x86's PAT code and thus running with 'debugpat', I noticed some odd behavior on a server running linux-next as of the following commit -- yes, reverting it does 'fix' the issue:
>
> 90f479ae51a (drm/mgag200: Replace struct mga_fbdev with generic framebuffer emulation)
>
> With that commit, the following splat is seen over and over for the same range:
>
> x86/PAT: Overlap at 0xd0000000-0xd1000000
> x86/PAT: reserve_memtype added [mem 0xd0000000-0xd02fffff], track write-combining, req write-combining, ret write-combining
> x86/PAT: free_memtype request [mem 0xd0000000-0xd02fffff]
>
> And all of these ioremaps come from drm_fb_helper_dirty_work():
>
> [ 114.330825] reserve_memtype+0x1b0/0x410
> [ 114.330829] ? ttm_bo_kmap+0x1d7/0x270 [ttm]
> [ 114.330830] __ioremap_caller.constprop.14+0xf6/0x300
> [ 114.330832] ? soft_cursor+0x1f9/0x220
> [ 114.330835] ttm_bo_kmap+0x1d7/0x270 [ttm]
> [ 114.330838] ? ttm_bo_del_sub_from_lru+0x29/0x40 [ttm]
> [ 114.330841] drm_gem_vram_kmap+0x54/0x70 [drm_vram_helper]
> [ 114.330842] drm_gem_vram_object_vmap+0x23/0x40 [drm_vram_helper]
> [ 114.330853] drm_gem_vmap+0x1f/0x60 [drm]
> [ 114.477697] drm_client_buffer_vmap+0x1d/0x30 [drm]
> [ 114.477703] drm_fb_helper_dirty_work+0x92/0x180 [drm_kms_helper]
> [ 114.477706] process_one_work+0x1f4/0x3e0
> [ 114.477707] worker_thread+0x2d/0x3e0
>
> Before that commit, the same range was also added, but only once; fwiw, the behavior is the same with either 24 or 32 bpp.
>
> Any thoughts?
Thanks for reporting. The original code kept memory mappings around for a longer time, while the new code remaps frequently. I've just submitted a patch set that restores the old behavior. [1] It fixes the problem on my test machine.
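Roughly, the idea is to cache the kernel mapping instead of redoing the ioremap on every damage pass. Just to illustrate (this is only a sketch, not code from the series; the 'vaddr' cache field and the function name are made up):

	#include <linux/err.h>
	#include <drm/drm_client.h>
	#include <drm/drm_gem.h>

	/* Illustration only: keep the mapping around instead of remapping
	 * on every damage pass. The 'vaddr' cache field is assumed here
	 * and is not necessarily what the series [1] actually adds. */
	static void *drm_client_buffer_vmap_cached(struct drm_client_buffer *buffer)
	{
		void *vaddr;

		if (buffer->vaddr)			/* assumed cache field */
			return buffer->vaddr;

		vaddr = drm_gem_vmap(buffer->gem);	/* reserve_memtype() runs once here */
		if (IS_ERR(vaddr))
			return vaddr;

		buffer->vaddr = vaddr;
		return vaddr;
	}

The unmap side would then only drop the cached pointer when the client buffer is torn down, so the PAT entry is reserved and freed once per framebuffer rather than once per dirty pass.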
Best regards
Thomas
[1] https://patchwork.freedesktop.org/series/66210/
> Thanks, Davidlohr