From: Nicolai Hähnle <Nicolai.Haehnle@amd.com>
The function will be re-used in subsequent patches.
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Maarten Lankhorst <dev@mblankhorst.nl>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Nicolai Hähnle <Nicolai.Haehnle@amd.com>
---
 kernel/locking/mutex.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 0afa998..200629a 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -277,6 +277,13 @@ static __always_inline void ww_mutex_lock_acquired(struct ww_mutex *ww,
 	ww_ctx->acquired++;
 }
 
+static inline bool __sched
+__ww_mutex_stamp_after(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
+{
+	return a->stamp - b->stamp <= LONG_MAX &&
+	       (a->stamp != b->stamp || a > b);
+}
+
 /*
  * After acquiring lock with fastpath or when we lost out in contested
  * slowpath, set ctx and wake up any waiters so they can recheck.
@@ -610,8 +617,7 @@ __ww_mutex_lock_check_stamp(struct mutex *lock, struct ww_acquire_ctx *ctx)
 	if (!hold_ctx)
 		return 0;
 
-	if (ctx->stamp - hold_ctx->stamp <= LONG_MAX &&
-	    (ctx->stamp != hold_ctx->stamp || ctx > hold_ctx)) {
+	if (__ww_mutex_stamp_after(ctx, hold_ctx)) {
 #ifdef CONFIG_DEBUG_MUTEXES
 		DEBUG_LOCKS_WARN_ON(ctx->contending_lock);
 		ctx->contending_lock = ww;
On Thu, Dec 01, 2016 at 03:06:46PM +0100, Nicolai Hähnle wrote:
> From: Nicolai Hähnle <Nicolai.Haehnle@amd.com>
> 
> The function will be re-used in subsequent patches.
> 
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Maarten Lankhorst <dev@mblankhorst.nl>
> Cc: Daniel Vetter <daniel@ffwll.ch>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: dri-devel@lists.freedesktop.org
> Signed-off-by: Nicolai Hähnle <Nicolai.Haehnle@amd.com>
> ---
>  kernel/locking/mutex.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> index 0afa998..200629a 100644
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -277,6 +277,13 @@ static __always_inline void ww_mutex_lock_acquired(struct ww_mutex *ww,
>  	ww_ctx->acquired++;
>  }
> 
> +static inline bool __sched
> +__ww_mutex_stamp_after(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
Should it be ww_mutex_stamp or ww_acquire_stamp / ww_ctx_stamp?
Nothing else operates on the ww_acquire_ctx without a ww_mutex, so it might look a bit odd if it didn't use ww_mutex.
Patch only does what it says on the tin, so
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-Chris