On Tue, May 8, 2012 at 12:23 PM, Daniel Vetter daniel@ffwll.ch wrote:
On Tue, May 08, 2012 at 08:25:38AM -0700, Linus Torvalds wrote:
On Mon, May 7, 2012 at 4:13 PM, Stéphane Marchesin marcheu@chromium.org wrote:
In the end, I came up with the ugly workaround of just leaking the offending pages in shmem.c.
Don't leak it.
Instead, add it to some RCU list, and free it using RCU. Or some one-second timer or something.
That kind of approach should guarantee that it
(a) gets returned to the system
but
(b) the returning to the system gets delayed sufficiently that if the i915 driver is doing lots of allocations it will be getting other pages.
Hmm?
The problem is also that this only affects Sandybridge gpus, so we'd need to funnel this down to shmfs somehow ... Rob Clark from Linaro will be working on a gemfs to make backing storage allocation more flexible - they need that to support some arm gpus. That way round we wouldn't need to put some ugly drm/i915 stuff into core shmfs. Rob?

Well, a bit hand-wavey at this point, but the idea is to let the driver have control of the page allocation via 'struct address_space_operations'.. but otherwise work in a similar way as shmfs.
Something like get_xip_mem() is almost what we want, except we don't want it to populate the pages, we don't want to force a kernel mapping, and shmem doesn't use it..
I suppose we still need a short term fix for i915, but at least it would only be temporary.
BR, -R
-Daniel