On Wed, Dec 22, 2010 at 4:06 PM, Mario Kleiner mario.kleiner@tuebingen.mpg.de wrote:
There's a new drm module parameter for selecting the timeout: echo 50 > /sys/module/drm/parameters/vblankoffdelay would set the timeout to 50 msecs. A setting of zero disables the timer, so vblank IRQs stay on all the time.
The default setting is still 5000 msecs as before, but reducing this to 100 msecs wouldn't be a real problem imho. At least I didn't observe any miscounting during extensive testing with 100 msecs.
The patches in drm-next fix a couple of races that I observed on Intel and Radeon during testing, and a few that I didn't see but could imagine happening. They try to make sure that the saved final count of the software vblank_count and the GPU counter at vblank IRQ disable are consistent - no off-by-one errors. They also try to detect and filter out spurious vblank interrupts at vblank enable time, e.g., on the Radeon.
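The filter boils down to a timestamp comparison, roughly like this (a schematic sketch with illustrative names, not the actual drm-next code; the timestamp representation is simplified):

#include <stdbool.h>
#include <sys/time.h>

/*
 * Schematic sketch of the spurious-irq filter: a vblank irq that fires
 * right after enable, for a vblank that was already accounted for,
 * computes the same vblank timestamp as the one stored for the current
 * count, so it must not bump the count again. Names here are
 * illustrative, not the actual drm-next identifiers.
 */
static bool vblank_irq_is_spurious(const struct timeval *stored,
				   const struct timeval *fresh)
{
	return stored->tv_sec == fresh->tv_sec &&
	       stored->tv_usec == fresh->tv_usec;
}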
There's still one possible race in the disable path which I will try to fix: We don't know exactly when the hardware counter increments relative to the processing of the vblank interrupt - it could increment a few (dozen/hundred) microseconds before or after the IRQ handler runs, so if you happen to query the hardware counter while the GPU is inside the vblank, you can't be sure whether you picked up the old or the new count for that vblank.
That's disgusting. Does this affect many GPUs? (I can't imagine why any sensible implementation wouldn't guarantee that the counter increments just before the IRQ.)
This only matters during vblank disable, which is why it's not such a good idea to disable vblank IRQs from within the vblank IRQ handler. I tried that and it didn't work well: doing it from within the IRQ basically maximizes the chance of hitting the time window in which the race can happen. Delaying inside the IRQ handler by a millisecond would fix that, but that's not what we want.
Having the disable in a function triggered by a timer, as it is now, is the simplest solution I could come up with. There we can burn a few dozen microseconds if necessary to remove this race.
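In shape, the mechanism looks something like this (a minimal sketch loosely modeled on the structures in drivers/gpu/drm/drm_irq.c of that time; the real code differs in detail):

#include "drmP.h"	/* as in drivers/gpu/drm/drm_irq.c */

/* Timer callback: runs in softirq context, well away from the vblank
 * irq handler, so it can afford to wait out the racy window around the
 * vblank interval before saving the final count. */
static void vblank_disable_fn(unsigned long arg)
{
	struct drm_device *dev = (struct drm_device *)arg;
	unsigned long irqflags;
	int crtc;

	for (crtc = 0; crtc < dev->num_crtcs; crtc++) {
		spin_lock_irqsave(&dev->vbl_lock, irqflags);
		if (atomic_read(&dev->vblank_refcount[crtc]) == 0 &&
		    dev->vblank_enabled[crtc])
			vblank_disable_and_save(dev, crtc);
		spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
	}
}

/* Called when the last reference is dropped: don't disable right away,
 * just (re-)arm the timer with the configurable off-delay. */
static void schedule_vblank_disable(struct drm_device *dev)
{
	if (drm_vblank_offdelay > 0)
		mod_timer(&dev->vblank_disable_timer,
			  jiffies + msecs_to_jiffies(drm_vblank_offdelay));
}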
Maybe I'm missing something, but other than the race above (which seems like it shouldn't exist on sane hardware), the new code seems more complicated than necessary.
Why not do nothing at all on vblank disable and, on enable, do this:
Call get_vblank_counter and declare the return value to be correct. On each vblank IRQ (or maybe just the first one after enable), read the counter again and if it matches the cached value, then assume we raced on enable (i.e. we enabled right at the beginning of the vblank interval, and this interrupt is redundant). In that case, do nothing. Otherwise increment the cached value by one.
On hardware with the silly race where get_vblank_counter in the IRQ can return a number one too low, use the scanout pos to fix it (i.e. if the scanout position is at the end of the frame, assume we hit that race and increment by 1).
This means that it would be safe to disable vblanks from any context at all, because it has no effect other than turning off the interrupt.
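Roughly, in pseudo-C (the lower-case helpers are stand-ins for whatever the driver provides, not real DRM entry points; the cached count would be per-CRTC in reality):

#include "drmP.h"

/* Placeholders for driver-provided hooks: */
extern void enable_vblank_irq(struct drm_device *dev, int crtc);
extern u32 hw_vblank_counter(struct drm_device *dev, int crtc);
extern bool scanout_near_frame_end(struct drm_device *dev, int crtc);

static u32 cached_count;	/* software copy of the vblank count */

static void vblank_enable(struct drm_device *dev, int crtc)
{
	enable_vblank_irq(dev, crtc);
	cached_count = hw_vblank_counter(dev, crtc); /* trust this value */
}

static void vblank_irq(struct drm_device *dev, int crtc)
{
	u32 hw = hw_vblank_counter(dev, crtc);

	if (hw == cached_count) {
		/* Either a redundant irq from enabling right at the start
		 * of the vblank interval (do nothing), or hardware with
		 * the silly race where the counter increments only after
		 * the irq: use the scanout position to tell them apart. */
		if (scanout_near_frame_end(dev, crtc))
			cached_count++;
	} else {
		cached_count++;		/* normal case: one new vblank */
	}
}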
--Andy
There could be other races that I couldn't think of and that I didn't see during my testing with my 2 Radeons and 1 Intel GPU. Therefore I think we should keep vblank IRQs enabled for at least 2 or maybe 3 refresh cycles if they aren't used by anyone. Apps that want to schedule swaps very precisely, and the ddx/drm/kms-driver itself, do multiple queries in quick succession for a typical swapbuffers call, i.e., drm_vblank_get() -> query -> drm_vblank_put() (sketched below), so on an otherwise idle graphics system the refcount will toggle between zero and one multiple times during a swap, usually within a few milliseconds.

If the timeout is big enough that IRQs don't get disabled within such a sequence of toggles, even if there's a bit of scheduling delay for the X server or client, then a client will at least see a consistent vblank count during a swap, even if there are still some races hiding somewhere that we don't handle properly. That should be good enough, and paranoid clients can always increase the timeout value or set it to infinite.
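For concreteness, such a query sequence has roughly this shape (simplified, minus locking and error paths; drm_vblank_get()/drm_vblank_count()/drm_vblank_put() are the real DRM calls):

#include "drmP.h"

/* Simplified shape of one vblank query as issued for a swapbuffers
 * call: the refcount toggles 0 -> 1 -> 0 around each query, and
 * drm_vblank_put() only (re-)arms the disable timer. */
static int query_vblank_count(struct drm_device *dev, int crtc, u32 *count)
{
	int ret = drm_vblank_get(dev, crtc); /* enables the irq if refcount was 0 */

	if (ret)
		return ret;
	*count = drm_vblank_count(dev, crtc);
	drm_vblank_put(dev, crtc);	/* refcount back to 0, timer (re)armed */
	return 0;
}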
E.g., my toolkit schedules a swap for a specific system time like this (a schematic code version follows the list):
1. glXGetSyncValuesOML(... &base_msc, &base_ust);
2. Calculate target_msc based on the user-provided swap deadline t and (base_msc, base_ust) as a baseline.
3. glXSwapBuffersMscOML(..., target_msc, ...);
4. glXWaitForSbcOML(), or use INTEL_swap_event, to retrieve the true msc and ust of swap completion.
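In simplified C, assuming the GLX_OML_sync_control entry points have already been resolved via glXGetProcAddress and that ust counts microseconds on this implementation (the deadline arithmetic is schematic):

#include <GL/glx.h>
#include <GL/glxext.h>

/* Schematic version of steps 1-4 above. dpy/win are an already set-up
 * display and GLX drawable; deadline_sec is the user-provided swap
 * deadline on the ust timebase; refresh_hz is the display refresh
 * rate. The ust unit is implementation-defined; microseconds assumed. */
static void schedule_swap(Display *dpy, GLXDrawable win,
			  double deadline_sec, double refresh_hz)
{
	int64_t base_ust, base_msc, sbc, target_msc, swap_sbc;
	int64_t ust, msc;

	/* 1. Get a (base_msc, base_ust) baseline. */
	glXGetSyncValuesOML(dpy, win, &base_ust, &base_msc, &sbc);

	/* 2. Turn the deadline into a target vblank count. */
	target_msc = base_msc + (int64_t)
		((deadline_sec - base_ust * 1e-6) * refresh_hz + 0.5);

	/* 3. Queue the swap for that vblank. */
	swap_sbc = glXSwapBuffersMscOML(dpy, win, target_msc, 0, 0);

	/* 4. Block until it completes; msc/ust then hold the true values. */
	glXWaitForSbcOML(dpy, win, swap_sbc, &ust, &msc, &sbc);
}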
=> It doesn't matter if there were an off-by-one error in vblank counting due to an unknown race, as long as it doesn't happen between steps 1 and 4. As long as there aren't any client/X-server scheduling delays of more than /sys/module/drm/parameters/vblankoffdelay msecs between steps 1 and 3, nothing can go wrong even if there are race conditions left in that area.
=> 50-100 msecs as the new default would probably be good enough, and at the same time prevent the "blinking cursor keeps vblank IRQs on all the time" problem.
I didn't reduce the timeout in the current patches because the filtering for race conditions and other GPU funkiness relies on somewhat precise vblank timestamps, and the timestamping hooks aren't yet implemented in the nouveau KMS. Maybe I'll manage to get this working over Christmas. Patches to nouveau would be simple; I just don't know the MMIO register addresses for CRTC scanout position on NVIDIA GPUs.
-mario
Mario Kleiner
Max Planck Institute for Biological Cybernetics
Spemannstr. 38
72076 Tuebingen
Germany

e-mail: mario.kleiner@tuebingen.mpg.de
office: +49 (0)7071/601-1623
fax: +49 (0)7071/601-616
www: http://www.kyb.tuebingen.mpg.de/~kleinerm
"For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled." (Richard Feynman)