On Sun, Nov 20, 2011 at 4:30 AM, Thomas Hellstrom <thellstrom@vmware.com> wrote:
On 11/19/2011 11:54 PM, Jerome Glisse wrote:
As mentioned previously, and in the discussion with Ben, the page tables would not need to be rebuilt on each CS. They would be rebuilt only on the first CS following a move_notify that caused a page table invalidation.
move_notify:

    if (is_incompatible(new_mem_type)) {
        bo->page_tables_invalid = true;
        invalidate_page_tables(bo);
    }

command_submission:

    if (bo->page_tables_invalid) {
        set_up_page_tables(bo);
        bo->page_tables_invalid = false;
    }
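For concreteness, here is roughly how that protocol could look in driver C code. This is only a sketch: the struct and the is_incompatible()/my_*() helpers are placeholders for driver internals, not TTM API.

#include <drm/ttm/ttm_bo_driver.h>

struct my_driver_bo {
        struct ttm_buffer_object base;
        bool page_tables_invalid; /* set in move_notify, cleared at CS */
};

/* ttm_bo_driver::move_notify hook: only tears the mappings down,
 * never rebuilds them. */
static void my_move_notify(struct ttm_buffer_object *bo,
                           struct ttm_mem_reg *new_mem)
{
        struct my_driver_bo *mbo =
                container_of(bo, struct my_driver_bo, base);

        if (is_incompatible(new_mem->mem_type)) {
                mbo->page_tables_invalid = true;
                my_invalidate_gpu_ptes(mbo); /* hypothetical helper */
        }
}

/* Called for each bo during command submission, after the bo has been
 * reserved and validated into a compatible placement. */
static int my_cs_prepare_bo(struct my_driver_bo *mbo)
{
        if (mbo->page_tables_invalid) {
                int ret = my_populate_gpu_ptes(mbo); /* hypothetical */
                if (ret)
                        return ret;
                mbo->page_tables_invalid = false;
        }
        return 0;
}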
Why is that different from updating the page tables in move_notify? I don't see any bonus here; all the information we need is already available in move_notify.
I've iterated the pros of this approach at least two times before, but for completeness let's do it again:
8<----------------------------------------------------------------------------------------------------
1) TTM doesn't need to care about the driver re-populating its GPU page tables. Since swapin is handled from the tt layer, not the bo layer, this makes it a bit easier on us.

2) Transition to page-faulted GPU virtual maps is straightforward and consistent. A non-page-faulting driver sets up the maps at CS time; a page-faulting driver can set them up directly from an irq handler without reserving, since the bo is properly fenced or pinned when the pagefault happens (see the sketch below).

3) A non-page-faulting driver knows at CS time exactly which page-table entries really do need populating, and can do this more efficiently.
8<-----------------------------------------------------------------------------------------------------
And some extra items like partially populated TTMs that were mentioned elsewhere.
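To make item 2 above concrete, here is a minimal sketch of the page-faulting path; the my_*() names are hypothetical driver internals, not TTM API.

#include <linux/interrupt.h>

/* Hypothetical fault irq handler: no reservation is taken. The bo
 * that faulted must already be fenced or pinned for the GPU to have
 * been accessing it, so its backing pages can't move underneath us. */
static irqreturn_t my_gpu_fault_irq(int irq, void *arg)
{
        struct my_device *mdev = arg;
        struct my_driver_bo *mbo = my_lookup_faulting_bo(mdev);

        if (!mbo)
                return IRQ_NONE;

        my_populate_gpu_ptes(mbo); /* safe: bo is fenced/pinned */
        my_resume_faulted_context(mdev);
        return IRQ_HANDLED;
}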
If done in move_notify, I don't see why 1 or 2 would be different. I agree that in some cases 3 is true. When move_notify is called, the ttm_tt is always fully populated at that point (the only exception is the destroy path, but that's a special case of its own). If the driver populates in move_notify, it doesn't change anything from TTM's point of view.
Memory types in TTM are completely orthogonal to allowed GPU usage. The GPU may access a bo if it's reserved, fenced or pinned, regardless of its placement.
A TT memory type is a *single* GPU aperture that may be mapped from the aperture side by the CPU (AGP). It may also be used by a single unmappable aperture that wants to use TTM's range management and eviction (vmwgfx GMR). The driver can define more than one such memory type (psb), but a bo can only be placed in one of those at a time, so this approach is unsuitable for multiple apertures pointing to the same pages.
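To illustrate with the interfaces of this era (a sketch; the aperture sizes and the my_init_apertures() wrapper are made up): two private apertures become two memory types, and although a placement may list both as alternatives, validation leaves the bo in exactly one of them.

#include <drm/ttm/ttm_bo_driver.h>
#include <drm/ttm/ttm_placement.h>

/* Register two driver-private apertures as two TTM memory types. */
static int my_init_apertures(struct ttm_bo_device *bdev,
                             unsigned long ap0_pages,
                             unsigned long ap1_pages)
{
        int ret = ttm_bo_init_mm(bdev, TTM_PL_PRIV0, ap0_pages);
        if (ret)
                return ret;
        return ttm_bo_init_mm(bdev, TTM_PL_PRIV1, ap1_pages);
}

/* A bo may list both apertures as acceptable placements, but after
 * validation it resides in exactly one of them, which is why this
 * model can't express several apertures mapping the same pages. */
static const uint32_t aperture_flags[] = {
        TTM_PL_FLAG_PRIV0 | TTM_PL_FLAG_CACHED,
        TTM_PL_FLAG_PRIV1 | TTM_PL_FLAG_CACHED,
};

static const struct ttm_placement either_aperture = {
        .num_placement = 2,
        .placement = aperture_flags,
        .num_busy_placement = 2,
        .busy_placement = aperture_flags,
};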
Radeon virtual memory has a special address space, the system address space, which is managed by TTM through a ttm_tt (the exact same code as the current one). All the other address spaces are not managed by TTM, but we require a bo to be bound to a ttm_tt to be used, though we could relax that. That's the reason why I consider the system placement as different.
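A rough sketch of that split, with illustrative names rather than the actual radeon code: the system address space is the existing TTM path (a ttm_tt bound through a TT memory type), while the other address spaces are driver-managed and just point their PTEs at the bound ttm_tt's pages.

#include <drm/drm_mm.h>
#include <drm/ttm/ttm_bo_driver.h>

/* Illustrative only; my_gpu_va_space and my_write_ptes() are made up. */
struct my_gpu_va_space {
        struct drm_mm mm; /* driver-side range allocator */
        /* per-address-space page directory, etc. */
};

static int my_bind_bo_in_va_space(struct my_gpu_va_space *vm,
                                  struct ttm_buffer_object *bo,
                                  uint64_t gpu_offset)
{
        struct ttm_tt *ttm = bo->ttm;

        /* The bo must be bound to a populated ttm_tt; the VM entries
         * simply point at the same pages. */
        if (!ttm)
                return -EINVAL;

        return my_write_ptes(vm, gpu_offset, ttm->pages, ttm->num_pages);
}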
Yes, for Radeon system memory may be different, and that's fine. But as also previously mentioned, we're trying to design a generic interface here, in which we need to consider GPU-mappable system memory.
I think the pros of this interface design compared to populating in bo_move are pretty well established, so can you please explain why you keep arguing against it? What is it that I have missed?
/Thomas
It's just that I see absolutely no difference in doing it in move_notify (points 1 and 2 don't change from my point of view). I fail to see the pros, that's all.
Cheers, Jerome