Hey,
Op 05-11-12 14:31, Thomas Hellstrom schreef:
Reservation locking currently always takes place under the LRU spinlock. Hence, strictly there is no need for an atomic_cmpxchg call; we can use atomic_read followed by atomic_write since nobody else will ever reserve without the lru spinlock held. At least on Intel this should remove a locked bus cycle on successful reserve.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Is that really a good thing to submit when I am busy killing lru lock around reserve? :-)
-	while (unlikely(atomic_cmpxchg(&bo->reserved, 0, 1) != 0)) {
+	while (unlikely(atomic_xchg(&bo->reserved, 1) != 0)) {
Works without lru lock too!
In fact mutexes are implemented in a similar way [1], except with some extra magic, and the unlocked state is 1, not 0. However, I do think that to get that right (it saves an irq disable in the unlock path, and means fewer wakeups in the contended case), I should really just post the mutex extension patches for reservations and ride the flames. It's getting too close to real mutexes, so I really want it to be a mutex in that case. So let's convert it.. Soon! :-)
~Maarten
[1] See linux/include/asm-generic/mutex-xchg.h and linux/include/asm-generic/mutex-dec.h for how archs generally implement mutex fastpaths.