From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
The current processing landscape is increasingly composed of pipelines where computations are spread across multiple hardware devices. Furthermore, some non-CPU devices, like the many GPUs supported by the i915 driver, actually support priority-based scheduling, which is currently rather inaccessible to the user (in terms of being controllable from the outside).
From these two observations a question arises: how do we allow for a simple,
effective and consolidated user experience? In other words, why should the user not be able to do something like:
  $ nice ffmpeg ...transcode my videos...
  $ my-favourite-game
And have the nice hint apply to GPU parts of the transcode pipeline as well?
This would in fact follow the approach taken by the kernel's block I/O scheduler, where ionice is by default inherited from the process nice value.
This series implements the same idea by inheriting the submitter's nice value, at batch buffer submission time, as the context nice. To avoid influencing GPU-scheduling-aware clients, this is done only for contexts where userspace hasn't explicitly specified a non-default scheduling priority.
The approach is completely compatible with GuC and drm/scheduler, since both support at least low/normal/high priority levels, with just the granularity of available control differing. In other words, with GuC scheduling there is no difference between nice 5 and 10 (both map to low priority), but the general case of positive or negative nice, versus nice 0, is still correctly propagated to the firmware scheduler.
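As a rough illustration of the granularity point (a standalone sketch with made-up names, not actual i915 or GuC code), collapsing the full nice range into three priority buckets could look like this:

```c
#include <assert.h>

/*
 * Illustrative sketch only - these identifiers are invented, not i915 or
 * GuC ones. It shows how a full nice range could collapse into three
 * priority buckets, matching the point above that nice 5 and nice 10
 * are indistinguishable under GuC while the sign of the nice value
 * still matters.
 */
enum sketch_guc_prio {
	SKETCH_GUC_PRIO_LOW,
	SKETCH_GUC_PRIO_NORMAL,
	SKETCH_GUC_PRIO_HIGH,
};

static enum sketch_guc_prio sketch_nice_to_guc_prio(int nice)
{
	if (nice > 0)
		return SKETCH_GUC_PRIO_LOW;	/* any positive nice -> low */
	if (nice < 0)
		return SKETCH_GUC_PRIO_HIGH;	/* any negative nice -> high */
	return SKETCH_GUC_PRIO_NORMAL;		/* default nice 0 */
}
```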
With the series applied I simulated the scenario of a background GPU task running simultaneously with an interactive client, varying the former's nice value.
The non-interactive GPU background task was simulated with:

  vblank_mode=0 nice -n <N> glxgears -geometry 1600x800
The interactive client was simulated with:

  gem_wsim -w ~/test.wsim -r 300 -v
  # (This one is self-capped at ~60fps.)
These were the results on DG1, first with execlists (default):
  Background nice | Interactive FPS
 -----------------+-----------------
    <not running> |       59
                0 |       35
               10 |       42
As we can see, running the background load with nice 10 can somewhat help the performance of the interactive/foreground task. (Note that without the fair scheduler completed there are possible starvation issues, depending on the workload, which this patch cannot fix.)
Now results with GuC (although it is not default on DG1):
  Background nice | Interactive FPS
 -----------------+-----------------
    <not running> |       58
                0 |       26
               10 |       25
Unfortunately GuC is not showing any change (25 vs 26 is within rounding/run error). But the reverse measurement, with the background client at nice 0 and the foreground at nice -10, does give 40 FPS, proving the priority adjustment works. (The same reverse test gives 46 FPS with execlists.) What is happening with GuC here is something to be looked at, since it seems normal-vs-low GuC priority time slices differently than normal-vs-high. Normal does not seem to be preferred over low, in this test at least.
v2:
 * Moved notifier outside task_rq_lock.
 * Some improvements and restructuring on the i915 side of the series.

v3:
 * Dropped the task nice notifier - inheriting nice at request submit time is
   good enough.

v4:
 * Realised this can be heavily simplified and only one simple patch is enough
   to achieve the desired behaviour.
 * Fixed the priority adjustment location so it actually works after rebase!
 * Re-did the benchmarking.

v5:
 * I sent out the wrong files (v4) yet again, apologies for the spam.
Tvrtko Ursulin (1):
  drm/i915: Inherit submitter nice when scheduling requests

 drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 8 ++++++++
 1 file changed, 8 insertions(+)
From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Inherit the submitter's nice value at the point of request submission, to account for long-running processes being re-niced either externally or by themselves.
This accounts for the current processing landscape where computational pipelines are composed of CPU and GPU parts working in tandem.
The nice value will only apply to requests which originate from user contexts and have the default context priority. This is to avoid disturbing any choices the application has made between low and high (batch processing and latency-sensitive compositing respectively). In this case the nice value adjusts the effective priority within the narrow band of -19 to +20 around I915_CONTEXT_DEFAULT_PRIORITY.
This means that userspace using the context priority uapi directly has a wider range of possible adjustments (in practice that only applies to execlists platforms - with GuC there are only three priority buckets), but in all cases the nice adjustment has the expected effect: positive nice lowers the scheduling priority and negative nice raises it.
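To make the inheritance rule concrete, here is a small standalone sketch (not driver code; the function name is made up, and it assumes the default context priority is 0, as I915_CONTEXT_DEFAULT_PRIORITY is, with task nice in the usual [-20, 19] range):

```c
#include <assert.h>

/*
 * Sketch of the rule described above, with invented names - not i915
 * code. Assumes the default context priority is 0 and that task nice
 * stays within [-20, 19].
 */
static int sketch_effective_priority(int ctx_priority, int nice)
{
	/* Only contexts left at the default priority inherit nice. */
	if (ctx_priority != 0)
		return ctx_priority;

	/* Positive nice lowers effective priority, negative nice raises it. */
	return -nice;
}
```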
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 8 ++++++++
 1 file changed, 8 insertions(+)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 50cbc8b4885b..2d5e71029d7c 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -3043,6 +3043,14 @@ static int eb_request_add(struct i915_execbuffer *eb, struct i915_request *rq,
 	/* Check that the context wasn't destroyed before submission */
 	if (likely(!intel_context_is_closed(eb->context))) {
 		attr = eb->gem_context->sched;
+		/*
+		 * Inherit process nice when scheduling user contexts but only
+		 * if context has the default priority to avoid touching
+		 * contexts where GEM uapi has been used to explicitly lower
+		 * or elevate it.
+		 */
+		if (attr.priority == I915_CONTEXT_DEFAULT_PRIORITY)
+			attr.priority = -task_nice(current);
 	} else {
 		/* Serialise with context_close via the add_to_timeline */
 		i915_request_set_error_once(rq, -ENOENT);