On Wed, Oct 6, 2021 at 5:15 PM Wanghui (John) <john.wanghui@huawei.com> wrote:
> Hi Tvrtko
>
> On 2021/10/4 22:36, Tvrtko Ursulin wrote:
>>  void set_user_nice(struct task_struct *p, long nice)
>>  {
>>  	bool queued, running;
>> -	int old_prio;
>> +	int old_prio, ret;
>>  	struct rq_flags rf;
>>  	struct rq *rq;
>> @@ -6915,6 +6947,9 @@ void set_user_nice(struct task_struct *p, long nice)
>>  out_unlock:
>>  	task_rq_unlock(rq, p, &rf);
>> +
>> +	ret = atomic_notifier_call_chain(&user_nice_notifier_list, nice, p);
>> +	WARN_ON_ONCE(ret != NOTIFY_DONE);
>>  }
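
For reference, a driver consuming this chain would do something along these
lines (a sketch only; it assumes user_nice_notifier_list is exported to
modules, which this hunk alone doesn't show):

#include <linux/notifier.h>
#include <linux/printk.h>
#include <linux/sched.h>

extern struct atomic_notifier_head user_nice_notifier_list;

/* Called with the new nice value and the task whose nice changed. */
static int gpu_nice_changed(struct notifier_block *nb, unsigned long val,
			    void *data)
{
	struct task_struct *p = data;

	/* Re-prioritise any GPU jobs queued on behalf of @p here. */
	pr_debug("task %d nice is now %ld\n", p->pid, (long)val);

	return NOTIFY_DONE;
}

static struct notifier_block gpu_nice_nb = {
	.notifier_call = gpu_nice_changed,
};

/* At driver init/fini:
 *	atomic_notifier_chain_register(&user_nice_notifier_list, &gpu_nice_nb);
 *	atomic_notifier_chain_unregister(&user_nice_notifier_list, &gpu_nice_nb);
 */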
> How about adding a new "io_nice" to task_struct and moving the call chain
> to sched_setattr/getattr? There are two benefits:
We already have ionice for the block I/O scheduler, and this new io_nice can
hardly be generic to all I/O. The patchset is trying to link a process' nice
with the GPU's scheduler; to some extent that makes more sense than having a
common io_nice, because systems have many I/O devices and we wouldn't know
which I/O the io_nice of task_struct should apply to.

Maybe we could have an ionice dedicated to the GPU, just like the ionice for
the CFQ bio/request scheduler.
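
For comparison, the existing block-layer ionice boils down to the
ioprio_set() syscall in userspace; a GPU-dedicated knob would presumably
mirror this (a sketch, with the ioprio constants spelled out):

#include <sys/syscall.h>
#include <unistd.h>

#define IOPRIO_WHO_PROCESS	1
#define IOPRIO_CLASS_BE		2
#define IOPRIO_CLASS_SHIFT	13
#define IOPRIO_PRIO_VALUE(class, data) \
	(((class) << IOPRIO_CLASS_SHIFT) | (data))

int main(void)
{
	/* Best-effort class, level 4, for the calling process (pid 0). */
	return syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
		       IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 4));
}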
> - Decoupled from the fair scheduler. In our use case, high-priority tasks
>   often use the RT scheduler.
Is it possible to tell the GPU about RT priority in the same way we are
telling it about CFS nice?
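
Perhaps the notifier consumer could fold both scheduling classes onto one
scale, something like the below (a hypothetical helper, only to illustrate
the idea):

#include <linux/sched.h>
#include <linux/sched/rt.h>

/* Hypothetical: collapse the scheduling classes onto the nice scale. */
static long gpu_effective_nice(struct task_struct *p)
{
	if (rt_task(p))
		return -20;	/* treat all RT tasks as maximum boost */

	return task_nice(p);	/* CFS: use the nice value directly */
}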
> - The range of values doesn't need to be bound to -20~19 or 0~139.
We could build a mapping between process priorities and GPU priorities; that
seems like no big deal.
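
Something as simple as the below would do (a sketch; the GPU priority range
of -512..511 is made up purely for illustration):

#define GPU_PRIO_MIN	(-512)
#define GPU_PRIO_MAX	511

/* Map nice -20..19 linearly onto GPU_PRIO_MAX..GPU_PRIO_MIN
 * (lower nice == higher GPU priority). */
static int nice_to_gpu_prio(long nice)
{
	return GPU_PRIO_MAX -
	       (int)((nice + 20) * (GPU_PRIO_MAX - GPU_PRIO_MIN) / 39);
}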
Thanks
Barry