On 30.08.2017 09.40, Daniel Vetter wrote:
On Tue, Aug 29, 2017 at 10:40:04AM -0700, Eric Anholt wrote:
Daniel Vetter <daniel@ffwll.ch> writes:
On Mon, Aug 28, 2017 at 8:44 PM, Noralf Trønnes <noralf@tronnes.org> wrote:
Hi,
Currently I'm using the CMA library with tinydrm because it was so simple to use, even though I have to work around the fact that reads are uncached. A bigger problem that I have become aware of is that it restricts which dma buffers it can import, since they have to be contiguous.
So I looked at udl, which uses shmem. Fine, let's make a shmem GEM library similar to the CMA library.
Now I have done so and have started to think about the DOC: section explaining what the library does. And I'm stuck: what's the benefit of using shmem compared to just using alloc_page()?
Gives you swapping (and eventually maybe even migration) since there's a real filesystem behind it. Atm this only works if you register a shrinker callback, which for display drivers is a bit overkill. See i915 or msm for examples (or ttm, if you want an entire fancy framework), and git grep shrinker -- drivers/gpu.
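For illustration, the shrinker pattern looks roughly like this; my_dev, my_obj and the list handling are made-up names, the locking a real driver needs is omitted, and only drm_gem_put_pages() and register_shrinker() are the real kernel interfaces:

struct my_obj {
	struct drm_gem_object base;
	struct page **pages;	/* cached result of drm_gem_get_pages() */
	bool dirty;
	struct list_head node;	/* on my_dev.unpinned_list */
};

struct my_dev {
	struct shrinker shrinker;
	struct list_head unpinned_list;	/* unpinned, but pages still cached */
};

static unsigned long my_shrink_count(struct shrinker *shrinker,
				     struct shrink_control *sc)
{
	struct my_dev *dev = container_of(shrinker, struct my_dev, shrinker);
	struct my_obj *obj;
	unsigned long count = 0;

	/* Tell the core how much we could free if asked to scan. */
	list_for_each_entry(obj, &dev->unpinned_list, node)
		if (obj->pages)
			count += obj->base.size >> PAGE_SHIFT;

	return count;
}

static unsigned long my_shrink_scan(struct shrinker *shrinker,
				    struct shrink_control *sc)
{
	struct my_dev *dev = container_of(shrinker, struct my_dev, shrinker);
	struct my_obj *obj;
	unsigned long freed = 0;

	list_for_each_entry(obj, &dev->unpinned_list, node) {
		if (freed >= sc->nr_to_scan)
			break;
		if (!obj->pages)
			continue;
		/* Drop the cached pages array so the shmem pages become
		 * swappable again. */
		drm_gem_put_pages(&obj->base, obj->pages, obj->dirty, false);
		obj->pages = NULL;
		freed += obj->base.size >> PAGE_SHIFT;
	}

	return freed;
}

static int my_register_shrinker(struct my_dev *dev)
{
	dev->shrinker.count_objects = my_shrink_count;
	dev->shrinker.scan_objects = my_shrink_scan;
	dev->shrinker.seeks = DEFAULT_SEEKS;

	return register_shrinker(&dev->shrinker);
}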
The shrinker is only needed if you need some impetus to unbind objects from your page tables, right? If you're only binding the pages for the moment that you're doing SPI transfers to the display, then the rest of the time the buffer could be swapped out, right?
Yup, and for SPI the setup overhead shouldn't matter. But everyone else probably wants to cache mappings and page lists, and that means some kind of shrinker to drop them when needed.
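For a tinydrm-style driver that pattern could look roughly like this; my_fb_flush and my_spi_flush are made-up names, only drm_gem_get_pages()/drm_gem_put_pages() are the real helpers:

static int my_fb_flush(struct drm_gem_object *obj)
{
	struct page **pages;
	int ret;

	/* Swaps the pages back in if needed and takes a reference on
	 * each, so they can't be swapped out during the transfer. */
	pages = drm_gem_get_pages(obj);
	if (IS_ERR(pages))
		return PTR_ERR(pages);

	ret = my_spi_flush(pages, obj->size);

	/* Drop the references again: dirty=false since we only read
	 * the buffer, accessed=true to keep the pages warm in the LRU. */
	drm_gem_put_pages(obj, pages, false, true);

	return ret;
}

Between flushes no reference is held, so under memory pressure the shmem pages can be written out to swap without the driver doing anything.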
Let me see if I've understood this correctly:
The first time I call drm_gem_get_pages() the buffer pages are allocated and pinned. When I then call drm_gem_put_pages() the pages are unpinned, but not freed. The kernel is now free to swap out the pages if necessary. Calling drm_gem_get_pages() a second time will swap the pages back in if necessary and pin them again.
If this is correct, where are the pages freed?
Noralf.