On Fri, Jul 19, 2019 at 05:23:12PM +0200, Daniel Vetter wrote:
Noticed while reviewing code. I'm not sure whether this explains any of the missed vblank hilarity we've been seeing. I think those cases all go through the vblank completion event, which has unconditional barriers since it always takes the spinlock. Therefore no cc stable.
v2:
- Barriers are hard, put them in the right order (Chris).
- Improve the comments a bit.
Cc: Rodrigo Siqueira <rodrigosiqueiramelo@gmail.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
 drivers/gpu/drm/drm_vblank.c | 38 +++++++++++++++++++++++++++++++++++-
 include/drm/drm_vblank.h     | 13 +++++++++++-
 2 files changed, 49 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
index 603ab105125d..eb2a8304536c 100644
--- a/drivers/gpu/drm/drm_vblank.c
+++ b/drivers/gpu/drm/drm_vblank.c
@@ -295,11 +295,23 @@ static void drm_update_vblank_count(struct drm_device *dev, unsigned int pipe,
 static u64 drm_vblank_count(struct drm_device *dev, unsigned int pipe)
 {
 	struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
+	u64 count;
 
 	if (WARN_ON(pipe >= dev->num_crtcs))
 		return 0;
 
-	return vblank->count;
+	count = vblank->count;
Hmm. This is now a 64-bit quantity, which means the load/store won't be atomic on 32-bit. That doesn't seem particularly great.
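For contrast, the seqlock-protected path doesn't have this problem: the retry loop in drm_vblank_count_and_time() rereads the snapshot if store_vblank() ran concurrently, so a torn 64-bit load on 32-bit gets discarded and redone. Roughly like this (paraphrased and trimmed from drivers/gpu/drm/drm_vblank.c, not verbatim):

    static u64 drm_vblank_count_and_time(struct drm_device *dev, unsigned int pipe,
                                         ktime_t *vblanktime)
    {
    	struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
    	u64 vblank_count;
    	unsigned int seq;

    	do {
    		/*
    		 * Snapshot count and timestamp together; if the writer
    		 * touched the seqlock meanwhile, a possibly torn 64-bit
    		 * read is thrown away and the loop retries.
    		 */
    		seq = read_seqbegin(&vblank->seqlock);
    		vblank_count = vblank->count;
    		*vblanktime = vblank->time;
    	} while (read_seqretry(&vblank->seqlock, seq));

    	return vblank_count;
    }

The bare drm_vblank_count() read below gets the ordering from smp_rmb() but nothing guards against tearing of the two halves of the u64.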
+
+	/*
+	 * This read barrier corresponds to the implicit write barrier of the
+	 * write seqlock in store_vblank(). Note that this is the only place
+	 * where we need an explicit barrier, since all other access goes
+	 * through drm_vblank_count_and_time(), which already has the required
+	 * read barrier courtesy of the read seqlock.
+	 */
+	smp_rmb();
+
+	return count;
 }
 
 /**
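For anyone following along, the write side this pairs with is store_vblank(), which publishes the new count under the write seqlock; releasing it is the implicit write barrier the new comment refers to. A paraphrased, trimmed sketch (not the verbatim upstream code):

    static void store_vblank(struct drm_device *dev, unsigned int pipe,
                             u32 vblank_count_inc,
                             ktime_t t_vblank, u32 last)
    {
    	struct drm_vblank_crtc *vblank = &dev->vblank[pipe];

    	vblank->last = last;

    	/*
    	 * The write seqlock bumps the sequence count and orders the
    	 * stores to time and count; this is the implicit write barrier
    	 * that the smp_rmb() in drm_vblank_count() pairs with.
    	 */
    	write_seqlock(&vblank->seqlock);
    	vblank->time = t_vblank;
    	vblank->count += vblank_count_inc;
    	write_sequnlock(&vblank->seqlock);
    }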