Forwarding this patch by request.
This is also Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
/Thomas
-------- Forwarded Message --------
Subject: [PATCH] kref: prefer atomic_inc_not_zero to atomic_add_unless
Date: Sat, 10 Oct 2015 12:56:34 +0200
From: Jason A. Donenfeld <Jason@zx2c4.com>
To: Dave Airlie <airlied@redhat.com>, Thomas Hellstrom <thellstrom@vmware.com>, linux-kernel@vger.kernel.org
CC: Jason A. Donenfeld <Jason@zx2c4.com>
On most platforms, atomic_inc_not_zero exists only as this fallback define:
#define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0)
On those platforms, this patch is a functional no-op. However, on PPC there is an explicit implementation of atomic_inc_not_zero, with its own assembly that is slightly more optimized than the generic atomic_add_unless. So this patch changes kref to use atomic_inc_not_zero instead, for PPC and for any future platforms that provide an explicit implementation.
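For reference, the generic atomic_add_unless() fallback boils down to a compare-and-swap retry loop. Here is a minimal userspace C11 sketch of the equivalent inc-not-zero operation (an illustration of the pattern, not the kernel's code):

#include <stdatomic.h>
#include <stdbool.h>

/*
 * Userspace sketch of what the generic atomic_add_unless(v, 1, 0)
 * fallback does: retry a compare-and-swap until the increment lands,
 * giving up only if the counter is observed at zero.
 */
static bool inc_not_zero(atomic_int *v)
{
	int c = atomic_load(v);

	while (c != 0) {
		/* On failure, c is reloaded with the current value. */
		if (atomic_compare_exchange_weak(v, &c, c + 1))
			return true;
	}
	return false;
}

An architecture with load-linked/store-conditional, such as PPC with lwarx/stwcx., can fold the zero test straight into the reservation loop rather than going through a full cmpxchg retry, which is where the small win for an explicit atomic_inc_not_zero comes from.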
This also puts this usage of kref more in line with a verbatim reading of the examples in Paul McKenney's paper [1] in the section titled "2.4 Atomic Counting With Check and Release Memory Barrier", which uses atomic_inc_not_zero.
[1] http://open-std.org/jtc1/sc22...
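To make the paper's pattern concrete, here is a userspace C11 analogue (obj, obj_get and obj_put are made-up names, and the kernel's kref uses its own primitives rather than C11 atomics): the "check" is the inc-not-zero on the get side, and the "release memory barrier" pairs the final decrement with the free on the put side.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

struct obj {
	atomic_int refcount;		/* starts at 1 for the creator */
};

/* "Atomic counting with check": take a reference only while the
 * count is still nonzero. */
static bool obj_get(struct obj *o)
{
	int c = atomic_load(&o->refcount);

	while (c != 0)
		if (atomic_compare_exchange_weak(&o->refcount, &c, c + 1))
			return true;
	return false;			/* object already on its way out */
}

/* "... and release memory barrier": release/acquire ordering makes
 * every prior access to *o visible before the object is freed. */
static void obj_put(struct obj *o)
{
	if (atomic_fetch_sub_explicit(&o->refcount, 1,
				      memory_order_release) == 1) {
		atomic_thread_fence(memory_order_acquire);
		free(o);
	}
}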
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 include/linux/kref.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/kref.h b/include/linux/kref.h
index 484604d..83d1f94 100644
--- a/include/linux/kref.h
+++ b/include/linux/kref.h
@@ -166,6 +166,6 @@ static inline int kref_put_mutex(struct kref *kref,
  */
 static inline int __must_check kref_get_unless_zero(struct kref *kref)
 {
-	return atomic_add_unless(&kref->refcount, 1, 0);
+	return atomic_inc_not_zero(&kref->refcount);
 }
 #endif /* _KREF_H_ */
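For context, here is a sketch of a typical caller of the function this hunk touches. The names (foo, foo_find, foo_release) are hypothetical; the point is that a lookup racing with the final kref_put() must use kref_get_unless_zero() rather than kref_get(), so it cannot resurrect an object whose refcount has already hit zero:

#include <linux/kref.h>
#include <linux/list.h>
#include <linux/slab.h>

struct foo {
	struct kref ref;
	struct list_head node;
	int id;
};

static void foo_release(struct kref *kref)
{
	kfree(container_of(kref, struct foo, ref));
}

/*
 * Caller holds the lock that also serializes removal from the list,
 * but the final kref_put() on another CPU may already have dropped
 * the count to zero before release() gets to unlink the object; in
 * that window the lookup must fail rather than take a reference.
 */
static struct foo *foo_find(struct list_head *head, int id)
{
	struct foo *f;

	list_for_each_entry(f, head, node)
		if (f->id == id && kref_get_unless_zero(&f->ref))
			return f;
	return NULL;
}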