On 02-04-18 at 19:49, Thomas Hellstrom wrote:
Maarten, Daniel,
Do we have any ww-mutex performance tests somewhere that can be used to measure the impact of implementation details under various locking scenarios?
Thanks,
/Thomas

On 04/03/2018 10:37 AM, Maarten Lankhorst wrote:
The thing that comes to mind is some of the kms_cursor_legacy tests, which have proven sensitive to locking issues before. All subtests matching (pipe-*/all-pipes)-(single/forked/torture)-(bo/move) try to do cursor updates as fast as possible and report how many updates were done.
~Maarten
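
To make concrete what those subtests measure, here is a rough userspace sketch, not the actual IGT code, of the kind of loop the throughput variants run. It assumes an already-open DRM fd, a valid crtc_id and a configured cursor BO, and simply counts how many legacy cursor moves complete in roughly one second.

```c
/*
 * Rough sketch only (not kms_cursor_legacy itself): spin on the legacy
 * cursor-move ioctl and count how many updates complete in ~1 second.
 * The DRM fd, crtc_id and an already-set-up cursor BO are assumed.
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static unsigned long legacy_cursor_updates_per_second(int drm_fd,
						      uint32_t crtc_id)
{
	struct timespec start, now;
	unsigned long updates = 0;

	clock_gettime(CLOCK_MONOTONIC, &start);
	do {
		/*
		 * drmModeMoveCursor() is a cheap legacy ioctl, but the
		 * kernel still takes the relevant modeset/plane locks on
		 * every call, which is why the reported update count is
		 * sensitive to locking changes.
		 */
		if (drmModeMoveCursor(drm_fd, crtc_id,
				      (int)(updates % 64), 0))
			break;
		updates++;
		clock_gettime(CLOCK_MONOTONIC, &now);
	} while (now.tv_sec - start.tv_sec < 1);

	printf("%lu legacy cursor updates in ~1s\n", updates);
	return updates;
}
```

The forked and torture variants presumably add more processes and contention on top, but the reported metric is the same: updates completed per run, so any extra cost in the locking paths shows up directly in the number.
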
Thanks, Maarten.
/Thomas

On 04.04.2018 at 20:02, Daniel Vetter wrote:
AMD folks have a bunch of tests to exercise their CS paths, I think; those should be interesting for the multi-ww_mutex/backoff paths. Adding Christian et al.
-Daniel
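
For context on what those multi-ww_mutex/backoff paths have to do per submission, here is a sketch of the standard acquire/backoff pattern, closely following the example in Documentation/locking/ww-mutex-design.rst rather than any driver's actual CS code; the bo/bo_entry structures and bo_ww_class are made up for illustration.

```c
/*
 * Sketch of the generic multi-object acquire/backoff pattern (adapted
 * from Documentation/locking/ww-mutex-design.rst); bo, bo_entry and
 * bo_ww_class are illustrative stand-ins, not any driver's real types.
 */
#include <linux/list.h>
#include <linux/ww_mutex.h>

static DEFINE_WW_CLASS(bo_ww_class);

struct bo {
	struct ww_mutex lock;
	/* buffer object payload */
};

struct bo_entry {
	struct list_head head;
	struct bo *bo;
};

/* Lock every BO on @list; on -EDEADLK back off completely and retry. */
static int lock_bos(struct list_head *list, struct ww_acquire_ctx *ctx)
{
	struct bo *res_bo = NULL;	/* BO taken via the slow path */
	struct bo_entry *contended = NULL;
	struct bo_entry *entry;
	int ret;

	ww_acquire_init(ctx, &bo_ww_class);
retry:
	list_for_each_entry(entry, list, head) {
		if (entry->bo == res_bo) {
			/* Already locked by ww_mutex_lock_slow() below. */
			res_bo = NULL;
			continue;
		}
		ret = ww_mutex_lock(&entry->bo->lock, ctx);
		if (ret < 0) {
			contended = entry;
			goto err;
		}
	}
	ww_acquire_done(ctx);
	/* Caller unlocks all BOs and calls ww_acquire_fini() when done. */
	return 0;

err:
	/* Drop everything acquired so far, in reverse order. */
	list_for_each_entry_continue_reverse(entry, list, head)
		ww_mutex_unlock(&entry->bo->lock);
	if (res_bo)
		ww_mutex_unlock(&res_bo->lock);

	if (ret == -EDEADLK) {
		/* Lost the ordering race: sleep on the contended BO, retry. */
		ww_mutex_lock_slow(&contended->bo->lock, ctx);
		res_bo = contended->bo;
		goto retry;
	}
	ww_acquire_fini(ctx);
	return ret;
}
```

The performance-relevant parts are the per-BO lock/unlock cost in the common, uncontended case and how often the -EDEADLK backoff is actually taken, which is exactly what a CS-heavy workload ends up exercising.
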
Well, not a dedicated test for this, but at least I usually run glmark with thread offloading disabled to measure the command submission overhead.
Except for the usual stuff like copying things from userspace, the last time I looked our command submission overhead was dominated by work done for individual BOs.
I'm not sure how much of that is accounted for by locking the ww_mutex of the BOs, but I would guess it is quite a bit.
Christian.
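
To put a number on how much of that per-BO work is the ww_mutex itself, something like the following hypothetical kernel-module microbenchmark could give a ballpark figure: it times the uncontended lock/unlock cycle, including the per-submission acquire-context setup. bench_ww_class and the iteration count are made up for illustration; this is not an existing test.

```c
/*
 * Hypothetical microbenchmark sketch, not an existing test: time the
 * uncontended ww_mutex lock/unlock cycle, including the per-submission
 * acquire-context setup, to get a ballpark per-BO locking cost.
 */
#include <linux/init.h>
#include <linux/ktime.h>
#include <linux/module.h>
#include <linux/ww_mutex.h>

static DEFINE_WW_CLASS(bench_ww_class);
static struct ww_mutex bench_lock;

static int __init ww_bench_init(void)
{
	const unsigned int iters = 1000000;
	struct ww_acquire_ctx ctx;
	unsigned int i;
	u64 start, end;

	ww_mutex_init(&bench_lock, &bench_ww_class);

	start = ktime_get_ns();
	for (i = 0; i < iters; i++) {
		ww_acquire_init(&ctx, &bench_ww_class);
		if (!ww_mutex_lock(&bench_lock, &ctx))
			ww_mutex_unlock(&bench_lock);
		ww_acquire_fini(&ctx);
	}
	end = ktime_get_ns();

	pr_info("ww_mutex: ~%llu ns per uncontended lock/unlock cycle\n",
		(unsigned long long)((end - start) / iters));
	return 0;
}

static void __exit ww_bench_exit(void)
{
	ww_mutex_destroy(&bench_lock);
}

module_init(ww_bench_init);
module_exit(ww_bench_exit);
MODULE_LICENSE("GPL");
```

Multiplying that per-cycle cost by the number of BOs per submission gives a rough lower bound for the locking share of the overhead glmark sees; it deliberately ignores contention and the -EDEADLK backoff path.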