Hi all,
Background of this version: This is v8 of the patch set and has a smaller scope. I had to split the series into two: EM changes and thermal changes, due to devfreq dependencies. Patches 9-14 from v7, which change devfreq cooling, will be sent in a separate patch series just after this set gets merged into mainline. The EM-related patches have received acks and can hopefully go through the linux-pm tree. The later thermal patches will go through the thermal tree.
The idea and purpose of the Energy Model framework changes: This patch set introduces support for devices in the Energy Model (EM) framework. It unifies the power model for the thermal subsystem and makes it simpler to add support for new devices that want to use more advanced features (like Intelligent Power Allocation). It should now require less knowledge and effort for a driver developer to add e.g. a GPU driver with a simple energy model. A more sophisticated energy model in the thermal framework is also possible; the driver needs to provide a dedicated callback function. More information can be found in the updated documentation file.
The first 7 patches refactor the Energy Model framework to add support for devices other than CPUs. They change:
- the naming convention from 'capacity' to 'performance' state,
- the API arguments, adding a device pointer instead of relying only on a cpumask,
- the naming where 'cpu' was used; now it's a 'device',
- the internal structure to maintain registered devices,
- the users, updating them to the new API.
Patch 8 updates an OPP framework helper function to be more generic, not CPU specific.
The patch set is based on linux-pm branch linux-next 813946019dfd.
Changes:
v8:
- split the patch set in two: EM changes and thermal changes
- re-based on top of linux-pm branch linux-next (asked by Daniel)
- EM: changed comments content pointed out by Quentin
- added Acked-by from Quentin to almost all EM patches

v7 [7]:
- EM: added em_perf_domain structure into struct device in order to simplify code (suggested by Daniel)
- EM: removed kref, em_pd, debug_dir from em_device (suggested by Daniel)
- EM: refactored code and removed unused functions
- EM: refactored checking if EM exists for a CPU and deleted em_cpus_pd_exist()
- EM: simplified em_pd_get() and em_cpu_get()
- EM: removed em_debug_type_show()
- EM: removed a few unused debug prints
- EM: changed the debug dir name in order to remove the em_debug_type_show() and em_debug_dev_show() functions
- EM: removed em_dev_list and em_device since it is now possible to use struct device
- thermal: split patch 9/10 from v6 as requested by Daniel and created 5 new patches:
  -- patch v7 9/15 with only the tracing change - exactly the same code, so I keep 'Reviewed-by # for tracing code' from Steven Rostedt
  -- patches 10/15 and 11/15 take a more consistent state of the devfreq device
  -- patch 12/15, which adds Energy Model register/unregister functions
  -- patch 13/15, which adjusts the headers' license to the new SPDX

v6 [6]:
- split patch 1/5 from v5 into smaller patches as requested by Daniel and dropped the ACK from Quentin which was given for the old version
- added function em_dev_register_perf_domain as suggested by Daniel, which should help the transition to the new API
- changed 'cs' (capacity state) in different places into 'ps' (performance state), since now there are many smaller patches (previously skipped because of the too big size of the patch with the main features and left to do later)
- changed cpumask_equal() to cpumask_intersects() when checking if 'cpus' coming as an argument to the registration function might overlap with already known CPUs; this shouldn't be an issue when the cpufreq policy is OK, but the check doesn't harm
- added Reviewed-by from Alyssa to the Panfrost related patch
- dropped Matthias' patch with PM QoS from the series since it's in -next now

v5 [5]:
- devfreq cooling: rebased on top of the pending patch introducing PM QoS limits
- devfreq cooling: added Matthias's patch to make this series pass the build check
- devfreq cooling: removed the OPP disable code and switched to PM QoS
- devfreq cooling: since the thermal code always used a pointer to devfreq_dev_status, switched to working on a local copy to avoid a potential race when either busy_time or total_time could change in the background
- devfreq cooling: added _normalize_load() to handle all scenarios when busy_time and total_time could have odd values (even raw counters)
- Energy Model patch 2/4: removed prints from cpufreq drivers and added a print inside dev_pm_opp_of_register_em()
- updated the patch 2/4 description to better reflect the upcoming changes
- collected an ACK from Quentin for patch 1/4 and a Reviewed-by from Steven for 4/4

v4 [4]:
- devfreq cooling: added two new registration functions, which take care of registering the EM for the device and simplify driver code (suggested by Robin and Rob)
- Energy Model: changed the unregistering code, added a kref to track usage, added code freeing the tables, added a helper function
- added a return value to dev_pm_opp_of_register_em(), updated the CPUFreq drivers' code, and added debug prints in case of failure
- updated comments in devfreq cooling, removing the statement that only the simple_ondemand devfreq governor is supported to work with the power extensions
- fixed spelling in the documentation (reported by Randy)

v3 [3]:
- added back the cpumask 'cpus' in em_perf_domain due to potential cache misses
- removed _is_cpu_em() since there is no need for it
- changed the function name from em_pd_energy() to em_cpu_energy(), which is optimized for usage from the scheduler, making some assumptions and not validating arguments to speed up; there is a comment stressing that it should be used only for a CPU em_perf_domain
- changed em_get_pd() to em_pd_get(), which is now aligned with the em_cpu_get() naming
- Energy Model: added code which checks if the EM is already registered for the devfreq device
- extended the comment in em_cpu_get() describing the need for this function
- fixed a build warning reported on x86 by the kbuild test robot in devfreq_cooling.c
- updated the documentation in energy-model.rst
- changed print messages from 'energy_model' to 'EM'
- changed dev_warn to dev_dbg, which should calm down test scripts in case the platform has less efficient OPPs in the OPP table (some of them are there for cooling reasons; we shouldn't warn in this case, debug info is enough)

v2 [2]:
- changed the EM API em_register_perf_domain(), adding a cpumask_t pointer as the last argument (which was discussed with Dietmar and Quentin)
- removed the dependency on PM_OPP, thanks to the cpumask_t argument
- removed enum em_type and the em->type dependent code
- em_get_pd() can handle a CPU device as well as a devfreq device
- updated the EM documentation
- in devfreq cooling, added code which prevents a race condition with devfreq governors which are trying to use OPPs while thermal is in the middle of disabling them
- in devfreq cooling, added code which updates the state of the devfreq device to avoid working on stale data when the governor has not updated it for a long time
- in devfreq cooling, added a backward compatibility frequency table for drivers which did not provide an EM
- added Steven's Reviewed-by to the trace code in thermal
- added another CPUFreq driver which needs to be updated to the new API

The v1 can be found here [1].
Regards,
Lukasz Luba
[1] https://lkml.org/lkml/2020/1/16/619
[2] https://lkml.org/lkml/2020/2/6/377
[3] https://lkml.org/lkml/2020/2/21/1910
[4] https://lkml.org/lkml/2020/3/9/471
[5] https://lkml.org/lkml/2020/3/18/351
[6] https://lkml.org/lkml/2020/4/10/108
[7] https://lkml.org/lkml/2020/5/11/326
Lukasz Luba (8):
  PM / EM: change naming convention from 'capacity' to 'performance'
  PM / EM: introduce em_dev_register_perf_domain function
  PM / EM: update callback structure and add device pointer
  PM / EM: add support for other devices than CPUs in Energy Model
  PM / EM: remove em_register_perf_domain
  PM / EM: change name of em_pd_energy to em_cpu_energy
  Documentation: power: update Energy Model description
  OPP: refactor dev_pm_opp_of_register_em() and update related drivers
 Documentation/power/energy-model.rst   | 135 ++++++------
 drivers/cpufreq/cpufreq-dt.c           |   2 +-
 drivers/cpufreq/imx6q-cpufreq.c        |   2 +-
 drivers/cpufreq/mediatek-cpufreq.c     |   2 +-
 drivers/cpufreq/omap-cpufreq.c         |   2 +-
 drivers/cpufreq/qcom-cpufreq-hw.c      |   2 +-
 drivers/cpufreq/scmi-cpufreq.c         |  11 +-
 drivers/cpufreq/scpi-cpufreq.c         |   2 +-
 drivers/cpufreq/vexpress-spc-cpufreq.c |   2 +-
 drivers/opp/of.c                       |  76 ++++---
 drivers/thermal/cpufreq_cooling.c      |  12 +-
 include/linux/device.h                 |   5 +
 include/linux/energy_model.h           | 149 +++++++------
 include/linux/pm_opp.h                 |  15 +-
 kernel/power/energy_model.c            | 285 ++++++++++++++++---------
 kernel/sched/fair.c                    |   2 +-
 kernel/sched/topology.c                |  20 +-
 17 files changed, 441 insertions(+), 283 deletions(-)
The Energy Model uses the concepts of a performance domain and capacity states in order to calculate the power used by CPUs. Changing the naming convention from capacity to performance state enables wider usage in the future, e.g. the upcoming support for devices other than CPUs.
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Quentin Perret <qperret@google.com>
Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
---
 drivers/thermal/cpufreq_cooling.c | 12 ++---
 include/linux/energy_model.h      | 86 +++++++++++++++++--------------
 kernel/power/energy_model.c       | 44 ++++++++--------
 kernel/sched/topology.c           | 20 +++----
 4 files changed, 84 insertions(+), 78 deletions(-)
diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c index e297e135c031..ad8971e26538 100644 --- a/drivers/thermal/cpufreq_cooling.c +++ b/drivers/thermal/cpufreq_cooling.c @@ -333,18 +333,18 @@ static inline bool em_is_sane(struct cpufreq_cooling_device *cpufreq_cdev, return false;
policy = cpufreq_cdev->policy; - if (!cpumask_equal(policy->related_cpus, to_cpumask(em->cpus))) { + if (!cpumask_equal(policy->related_cpus, em_span_cpus(em))) { pr_err("The span of pd %*pbl is misaligned with cpufreq policy %*pbl\n", - cpumask_pr_args(to_cpumask(em->cpus)), + cpumask_pr_args(em_span_cpus(em)), cpumask_pr_args(policy->related_cpus)); return false; }
nr_levels = cpufreq_cdev->max_level + 1; - if (em->nr_cap_states != nr_levels) { - pr_err("The number of cap states in pd %*pbl (%u) doesn't match the number of cooling levels (%u)\n", - cpumask_pr_args(to_cpumask(em->cpus)), - em->nr_cap_states, nr_levels); + if (em_pd_nr_perf_states(em) != nr_levels) { + pr_err("The number of performance states in pd %*pbl (%u) doesn't match the number of cooling levels (%u)\n", + cpumask_pr_args(em_span_cpus(em)), + em_pd_nr_perf_states(em), nr_levels); return false; }
diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h index ade6486a3382..fe336a9eb5d4 100644 --- a/include/linux/energy_model.h +++ b/include/linux/energy_model.h @@ -10,13 +10,13 @@ #include <linux/types.h>
/** - * em_cap_state - Capacity state of a performance domain + * em_perf_state - Performance state of a performance domain * @frequency: The CPU frequency in KHz, for consistency with CPUFreq * @power: The power consumed by 1 CPU at this level, in milli-watts * @cost: The cost coefficient associated with this level, used during * energy calculation. Equal to: power * max_frequency / frequency */ -struct em_cap_state { +struct em_perf_state { unsigned long frequency; unsigned long power; unsigned long cost; @@ -24,8 +24,8 @@ struct em_cap_state {
/** * em_perf_domain - Performance domain - * @table: List of capacity states, in ascending order - * @nr_cap_states: Number of capacity states + * @table: List of performance states, in ascending order + * @nr_perf_states: Number of performance states * @cpus: Cpumask covering the CPUs of the domain * * A "performance domain" represents a group of CPUs whose performance is @@ -34,22 +34,27 @@ struct em_cap_state { * CPUFreq policies. */ struct em_perf_domain { - struct em_cap_state *table; - int nr_cap_states; + struct em_perf_state *table; + int nr_perf_states; unsigned long cpus[]; };
+#define em_span_cpus(em) (to_cpumask((em)->cpus)) + #ifdef CONFIG_ENERGY_MODEL #define EM_CPU_MAX_POWER 0xFFFF
struct em_data_callback { /** - * active_power() - Provide power at the next capacity state of a CPU - * @power : Active power at the capacity state in mW (modified) - * @freq : Frequency at the capacity state in kHz (modified) + * active_power() - Provide power at the next performance state of + * a CPU + * @power : Active power at the performance state in mW + * (modified) + * @freq : Frequency at the performance state in kHz + * (modified) * @cpu : CPU for which we do this operation * - * active_power() must find the lowest capacity state of 'cpu' above + * active_power() must find the lowest performance state of 'cpu' above * 'freq' and update 'power' and 'freq' to the matching active power * and frequency. * @@ -80,46 +85,46 @@ static inline unsigned long em_pd_energy(struct em_perf_domain *pd, unsigned long max_util, unsigned long sum_util) { unsigned long freq, scale_cpu; - struct em_cap_state *cs; + struct em_perf_state *ps; int i, cpu;
/* - * In order to predict the capacity state, map the utilization of the - * most utilized CPU of the performance domain to a requested frequency, - * like schedutil. + * In order to predict the performance state, map the utilization of + * the most utilized CPU of the performance domain to a requested + * frequency, like schedutil. */ cpu = cpumask_first(to_cpumask(pd->cpus)); scale_cpu = arch_scale_cpu_capacity(cpu); - cs = &pd->table[pd->nr_cap_states - 1]; - freq = map_util_freq(max_util, cs->frequency, scale_cpu); + ps = &pd->table[pd->nr_perf_states - 1]; + freq = map_util_freq(max_util, ps->frequency, scale_cpu);
/* - * Find the lowest capacity state of the Energy Model above the + * Find the lowest performance state of the Energy Model above the * requested frequency. */ - for (i = 0; i < pd->nr_cap_states; i++) { - cs = &pd->table[i]; - if (cs->frequency >= freq) + for (i = 0; i < pd->nr_perf_states; i++) { + ps = &pd->table[i]; + if (ps->frequency >= freq) break; }
/* - * The capacity of a CPU in the domain at that capacity state (cs) + * The capacity of a CPU in the domain at the performance state (ps) * can be computed as: * - * cs->freq * scale_cpu - * cs->cap = -------------------- (1) + * ps->freq * scale_cpu + * ps->cap = -------------------- (1) * cpu_max_freq * * So, ignoring the costs of idle states (which are not available in - * the EM), the energy consumed by this CPU at that capacity state is - * estimated as: + * the EM), the energy consumed by this CPU at that performance state + * is estimated as: * - * cs->power * cpu_util + * ps->power * cpu_util * cpu_nrg = -------------------- (2) - * cs->cap + * ps->cap * - * since 'cpu_util / cs->cap' represents its percentage of busy time. + * since 'cpu_util / ps->cap' represents its percentage of busy time. * * NOTE: Although the result of this computation actually is in * units of power, it can be manipulated as an energy value @@ -129,34 +134,35 @@ static inline unsigned long em_pd_energy(struct em_perf_domain *pd, * By injecting (1) in (2), 'cpu_nrg' can be re-expressed as a product * of two terms: * - * cs->power * cpu_max_freq cpu_util + * ps->power * cpu_max_freq cpu_util * cpu_nrg = ------------------------ * --------- (3) - * cs->freq scale_cpu + * ps->freq scale_cpu * - * The first term is static, and is stored in the em_cap_state struct - * as 'cs->cost'. + * The first term is static, and is stored in the em_perf_state struct + * as 'ps->cost'. * * Since all CPUs of the domain have the same micro-architecture, they - * share the same 'cs->cost', and the same CPU capacity. Hence, the + * share the same 'ps->cost', and the same CPU capacity. Hence, the * total energy of the domain (which is the simple sum of the energy of * all of its CPUs) can be factorized as: * - * cs->cost * \Sum cpu_util + * ps->cost * \Sum cpu_util * pd_nrg = ------------------------ (4) * scale_cpu */ - return cs->cost * sum_util / scale_cpu; + return ps->cost * sum_util / scale_cpu; }
/** - * em_pd_nr_cap_states() - Get the number of capacity states of a perf. domain + * em_pd_nr_perf_states() - Get the number of performance states of a perf. + * domain * @pd : performance domain for which this must be done * - * Return: the number of capacity states in the performance domain table + * Return: the number of performance states in the performance domain table */ -static inline int em_pd_nr_cap_states(struct em_perf_domain *pd) +static inline int em_pd_nr_perf_states(struct em_perf_domain *pd) { - return pd->nr_cap_states; + return pd->nr_perf_states; }
#else @@ -177,7 +183,7 @@ static inline unsigned long em_pd_energy(struct em_perf_domain *pd, { return 0; } -static inline int em_pd_nr_cap_states(struct em_perf_domain *pd) +static inline int em_pd_nr_perf_states(struct em_perf_domain *pd) { return 0; } diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c index 0a9326f5f421..9892d548a0fa 100644 --- a/kernel/power/energy_model.c +++ b/kernel/power/energy_model.c @@ -27,18 +27,18 @@ static DEFINE_MUTEX(em_pd_mutex); #ifdef CONFIG_DEBUG_FS static struct dentry *rootdir;
-static void em_debug_create_cs(struct em_cap_state *cs, struct dentry *pd) +static void em_debug_create_ps(struct em_perf_state *ps, struct dentry *pd) { struct dentry *d; char name[24];
- snprintf(name, sizeof(name), "cs:%lu", cs->frequency); + snprintf(name, sizeof(name), "ps:%lu", ps->frequency);
- /* Create per-cs directory */ + /* Create per-ps directory */ d = debugfs_create_dir(name, pd); - debugfs_create_ulong("frequency", 0444, d, &cs->frequency); - debugfs_create_ulong("power", 0444, d, &cs->power); - debugfs_create_ulong("cost", 0444, d, &cs->cost); + debugfs_create_ulong("frequency", 0444, d, &ps->frequency); + debugfs_create_ulong("power", 0444, d, &ps->power); + debugfs_create_ulong("cost", 0444, d, &ps->cost); }
static int em_debug_cpus_show(struct seq_file *s, void *unused) @@ -62,9 +62,9 @@ static void em_debug_create_pd(struct em_perf_domain *pd, int cpu)
debugfs_create_file("cpus", 0444, d, pd->cpus, &em_debug_cpus_fops);
- /* Create a sub-directory for each capacity state */ - for (i = 0; i < pd->nr_cap_states; i++) - em_debug_create_cs(&pd->table[i], d); + /* Create a sub-directory for each performance state */ + for (i = 0; i < pd->nr_perf_states; i++) + em_debug_create_ps(&pd->table[i], d); }
static int __init em_debug_init(void) @@ -84,7 +84,7 @@ static struct em_perf_domain *em_create_pd(cpumask_t *span, int nr_states, unsigned long opp_eff, prev_opp_eff = ULONG_MAX; unsigned long power, freq, prev_freq = 0; int i, ret, cpu = cpumask_first(span); - struct em_cap_state *table; + struct em_perf_state *table; struct em_perf_domain *pd; u64 fmax;
@@ -99,26 +99,26 @@ static struct em_perf_domain *em_create_pd(cpumask_t *span, int nr_states, if (!table) goto free_pd;
- /* Build the list of capacity states for this performance domain */ + /* Build the list of performance states for this performance domain */ for (i = 0, freq = 0; i < nr_states; i++, freq++) { /* * active_power() is a driver callback which ceils 'freq' to - * lowest capacity state of 'cpu' above 'freq' and updates + * lowest performance state of 'cpu' above 'freq' and updates * 'power' and 'freq' accordingly. */ ret = cb->active_power(&power, &freq, cpu); if (ret) { - pr_err("pd%d: invalid cap. state: %d\n", cpu, ret); - goto free_cs_table; + pr_err("pd%d: invalid perf. state: %d\n", cpu, ret); + goto free_ps_table; }
/* * We expect the driver callback to increase the frequency for - * higher capacity states. + * higher performance states. */ if (freq <= prev_freq) { pr_err("pd%d: non-increasing freq: %lu\n", cpu, freq); - goto free_cs_table; + goto free_ps_table; }
/* @@ -127,7 +127,7 @@ static struct em_perf_domain *em_create_pd(cpumask_t *span, int nr_states, */ if (!power || power > EM_CPU_MAX_POWER) { pr_err("pd%d: invalid power: %lu\n", cpu, power); - goto free_cs_table; + goto free_ps_table; }
table[i].power = power; @@ -141,12 +141,12 @@ static struct em_perf_domain *em_create_pd(cpumask_t *span, int nr_states, */ opp_eff = freq / power; if (opp_eff >= prev_opp_eff) - pr_warn("pd%d: hertz/watts ratio non-monotonically decreasing: em_cap_state %d >= em_cap_state%d\n", + pr_warn("pd%d: hertz/watts ratio non-monotonically decreasing: em_perf_state %d >= em_perf_state%d\n", cpu, i, i - 1); prev_opp_eff = opp_eff; }
- /* Compute the cost of each capacity_state. */ + /* Compute the cost of each performance state. */ fmax = (u64) table[nr_states - 1].frequency; for (i = 0; i < nr_states; i++) { table[i].cost = div64_u64(fmax * table[i].power, @@ -154,14 +154,14 @@ static struct em_perf_domain *em_create_pd(cpumask_t *span, int nr_states, }
pd->table = table; - pd->nr_cap_states = nr_states; + pd->nr_perf_states = nr_states; cpumask_copy(to_cpumask(pd->cpus), span);
em_debug_create_pd(pd, cpu);
return pd;
-free_cs_table: +free_ps_table: kfree(table); free_pd: kfree(pd); @@ -185,7 +185,7 @@ EXPORT_SYMBOL_GPL(em_cpu_get); /** * em_register_perf_domain() - Register the Energy Model of a performance domain * @span : Mask of CPUs in the performance domain - * @nr_states : Number of capacity states to register + * @nr_states : Number of performance states to register * @cb : Callback functions providing the data of the Energy Model * * Create Energy Model tables for a performance domain using the callbacks diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c index 8344757bba6e..0a411df05fb3 100644 --- a/kernel/sched/topology.c +++ b/kernel/sched/topology.c @@ -282,10 +282,10 @@ static void perf_domain_debug(const struct cpumask *cpu_map, printk(KERN_DEBUG "root_domain %*pbl:", cpumask_pr_args(cpu_map));
while (pd) { - printk(KERN_CONT " pd%d:{ cpus=%*pbl nr_cstate=%d }", + printk(KERN_CONT " pd%d:{ cpus=%*pbl nr_pstate=%d }", cpumask_first(perf_domain_span(pd)), cpumask_pr_args(perf_domain_span(pd)), - em_pd_nr_cap_states(pd->em_pd)); + em_pd_nr_perf_states(pd->em_pd)); pd = pd->next; }
@@ -323,26 +323,26 @@ static void sched_energy_set(bool has_eas) * * The complexity of the Energy Model is defined as: * - * C = nr_pd * (nr_cpus + nr_cs) + * C = nr_pd * (nr_cpus + nr_ps) * * with parameters defined as: * - nr_pd: the number of performance domains * - nr_cpus: the number of CPUs - * - nr_cs: the sum of the number of capacity states of all performance + * - nr_ps: the sum of the number of performance states of all performance * domains (for example, on a system with 2 performance domains, - * with 10 capacity states each, nr_cs = 2 * 10 = 20). + * with 10 performance states each, nr_ps = 2 * 10 = 20). * * It is generally not a good idea to use such a model in the wake-up path on * very complex platforms because of the associated scheduling overheads. The * arbitrary constraint below prevents that. It makes EAS usable up to 16 CPUs - * with per-CPU DVFS and less than 8 capacity states each, for example. + * with per-CPU DVFS and less than 8 performance states each, for example. */ #define EM_MAX_COMPLEXITY 2048
extern struct cpufreq_governor schedutil_gov; static bool build_perf_domains(const struct cpumask *cpu_map) { - int i, nr_pd = 0, nr_cs = 0, nr_cpus = cpumask_weight(cpu_map); + int i, nr_pd = 0, nr_ps = 0, nr_cpus = cpumask_weight(cpu_map); struct perf_domain *pd = NULL, *tmp; int cpu = cpumask_first(cpu_map); struct root_domain *rd = cpu_rq(cpu)->rd; @@ -394,15 +394,15 @@ static bool build_perf_domains(const struct cpumask *cpu_map) pd = tmp;
/* - * Count performance domains and capacity states for the + * Count performance domains and performance states for the * complexity check. */ nr_pd++; - nr_cs += em_pd_nr_cap_states(pd->em_pd); + nr_ps += em_pd_nr_perf_states(pd->em_pd); }
/* Bail out if the Energy Model complexity is too high. */ - if (nr_pd * (nr_cs + nr_cpus) > EM_MAX_COMPLEXITY) { + if (nr_pd * (nr_ps + nr_cpus) > EM_MAX_COMPLEXITY) { WARN(1, "rd %*pbl: Failed to start EAS, EM complexity is too high\n", cpumask_pr_args(cpu_map)); goto free;
Add a new function to the Energy Model framework which is going to support new devices. This function will help make the transition smoother. For now it still checks that the cpumask is a valid pointer; this check will be removed later, when the new structures and infrastructure are ready.
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Quentin Perret <qperret@google.com>
Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
---
 include/linux/energy_model.h | 13 ++++++++++--
 kernel/power/energy_model.c  | 40 ++++++++++++++++++++++++++++++------
 2 files changed, 45 insertions(+), 8 deletions(-)
diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h index fe336a9eb5d4..7c048df98447 100644 --- a/include/linux/energy_model.h +++ b/include/linux/energy_model.h @@ -2,6 +2,7 @@ #ifndef _LINUX_ENERGY_MODEL_H #define _LINUX_ENERGY_MODEL_H #include <linux/cpumask.h> +#include <linux/device.h> #include <linux/jump_label.h> #include <linux/kobject.h> #include <linux/rcupdate.h> @@ -42,7 +43,7 @@ struct em_perf_domain { #define em_span_cpus(em) (to_cpumask((em)->cpus))
#ifdef CONFIG_ENERGY_MODEL -#define EM_CPU_MAX_POWER 0xFFFF +#define EM_MAX_POWER 0xFFFF
struct em_data_callback { /** @@ -59,7 +60,7 @@ struct em_data_callback { * and frequency. * * The power is the one of a single CPU in the domain, expressed in - * milli-watts. It is expected to fit in the [0, EM_CPU_MAX_POWER] + * milli-watts. It is expected to fit in the [0, EM_MAX_POWER] * range. * * Return 0 on success. @@ -71,6 +72,8 @@ struct em_data_callback { struct em_perf_domain *em_cpu_get(int cpu); int em_register_perf_domain(cpumask_t *span, unsigned int nr_states, struct em_data_callback *cb); +int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, + struct em_data_callback *cb, cpumask_t *span);
/** * em_pd_energy() - Estimates the energy consumed by the CPUs of a perf. domain @@ -174,6 +177,12 @@ static inline int em_register_perf_domain(cpumask_t *span, { return -EINVAL; } +static inline +int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, + struct em_data_callback *cb, cpumask_t *span) +{ + return -EINVAL; +} static inline struct em_perf_domain *em_cpu_get(int cpu) { return NULL; diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c index 9892d548a0fa..875b163e54ab 100644 --- a/kernel/power/energy_model.c +++ b/kernel/power/energy_model.c @@ -125,7 +125,7 @@ static struct em_perf_domain *em_create_pd(cpumask_t *span, int nr_states, * The power returned by active_state() is expected to be * positive, in milli-watts and to fit into 16 bits. */ - if (!power || power > EM_CPU_MAX_POWER) { + if (!power || power > EM_MAX_POWER) { pr_err("pd%d: invalid power: %lu\n", cpu, power); goto free_ps_table; } @@ -183,10 +183,13 @@ struct em_perf_domain *em_cpu_get(int cpu) EXPORT_SYMBOL_GPL(em_cpu_get);
/** - * em_register_perf_domain() - Register the Energy Model of a performance domain - * @span : Mask of CPUs in the performance domain + * em_dev_register_perf_domain() - Register the Energy Model (EM) for a device + * @dev : Device for which the EM is to register * @nr_states : Number of performance states to register * @cb : Callback functions providing the data of the Energy Model + * @span : Pointer to cpumask_t, which in case of a CPU device is + * obligatory. It can be taken from i.e. 'policy->cpus'. For other + * type of devices this should be set to NULL. * * Create Energy Model tables for a performance domain using the callbacks * defined in cb. @@ -196,14 +199,14 @@ EXPORT_SYMBOL_GPL(em_cpu_get); * * Return 0 on success */ -int em_register_perf_domain(cpumask_t *span, unsigned int nr_states, - struct em_data_callback *cb) +int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, + struct em_data_callback *cb, cpumask_t *span) { unsigned long cap, prev_cap = 0; struct em_perf_domain *pd; int cpu, ret = 0;
- if (!span || !nr_states || !cb) + if (!dev || !span || !nr_states || !cb) return -EINVAL;
/* @@ -255,4 +258,29 @@ int em_register_perf_domain(cpumask_t *span, unsigned int nr_states,
return ret; } +EXPORT_SYMBOL_GPL(em_dev_register_perf_domain); + +/** + * em_register_perf_domain() - Register the Energy Model of a performance domain + * @span : Mask of CPUs in the performance domain + * @nr_states : Number of capacity states to register + * @cb : Callback functions providing the data of the Energy Model + * + * Create Energy Model tables for a performance domain using the callbacks + * defined in cb. + * + * If multiple clients register the same performance domain, all but the first + * registration will be ignored. + * + * Return 0 on success + */ +int em_register_perf_domain(cpumask_t *span, unsigned int nr_states, + struct em_data_callback *cb) +{ + struct device *cpu_dev; + + cpu_dev = get_cpu_device(cpumask_first(span)); + + return em_dev_register_perf_domain(cpu_dev, nr_states, cb, span); +} EXPORT_SYMBOL_GPL(em_register_perf_domain);
The Energy Model framework is going to support devices other than CPUs. In order to make this happen, change the callback function and add a pointer to a device as an argument.

Update the related users to use the new function and the new callback from the Energy Model.
Acked-by: Quentin Perret <qperret@google.com>
Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
---
 drivers/cpufreq/scmi-cpufreq.c | 11 +++--------
 drivers/opp/of.c               |  9 ++------
 include/linux/energy_model.h   | 15 ++++++++------
 kernel/power/energy_model.c    |  9 +++++----
 4 files changed, 18 insertions(+), 26 deletions(-)
diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c index 61623e2ff149..11ee24e06d12 100644 --- a/drivers/cpufreq/scmi-cpufreq.c +++ b/drivers/cpufreq/scmi-cpufreq.c @@ -103,17 +103,12 @@ scmi_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask) }
static int __maybe_unused -scmi_get_cpu_power(unsigned long *power, unsigned long *KHz, int cpu) +scmi_get_cpu_power(unsigned long *power, unsigned long *KHz, + struct device *cpu_dev) { - struct device *cpu_dev = get_cpu_device(cpu); unsigned long Hz; int ret, domain;
- if (!cpu_dev) { - pr_err("failed to get cpu%d device\n", cpu); - return -ENODEV; - } - domain = handle->perf_ops->device_domain_id(cpu_dev); if (domain < 0) return domain; @@ -200,7 +195,7 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
policy->fast_switch_possible = true;
- em_register_perf_domain(policy->cpus, nr_opp, &em_cb); + em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb, policy->cpus);
return 0;
diff --git a/drivers/opp/of.c b/drivers/opp/of.c index 9cd8f0adacae..5b75829a915d 100644 --- a/drivers/opp/of.c +++ b/drivers/opp/of.c @@ -1047,9 +1047,8 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_of_node); * calculation failed because of missing parameters, 0 otherwise. */ static int __maybe_unused _get_cpu_power(unsigned long *mW, unsigned long *kHz, - int cpu) + struct device *cpu_dev) { - struct device *cpu_dev; struct dev_pm_opp *opp; struct device_node *np; unsigned long mV, Hz; @@ -1057,10 +1056,6 @@ static int __maybe_unused _get_cpu_power(unsigned long *mW, unsigned long *kHz, u64 tmp; int ret;
- cpu_dev = get_cpu_device(cpu); - if (!cpu_dev) - return -ENODEV; - np = of_node_get(cpu_dev->of_node); if (!np) return -EINVAL; @@ -1128,6 +1123,6 @@ void dev_pm_opp_of_register_em(struct cpumask *cpus) if (ret || !cap) return;
- em_register_perf_domain(cpus, nr_opp, &em_cb); + em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb, cpus); } EXPORT_SYMBOL_GPL(dev_pm_opp_of_register_em); diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h index 7c048df98447..7076cb22b247 100644 --- a/include/linux/energy_model.h +++ b/include/linux/energy_model.h @@ -48,24 +48,25 @@ struct em_perf_domain { struct em_data_callback { /** * active_power() - Provide power at the next performance state of - * a CPU + * a device * @power : Active power at the performance state in mW * (modified) * @freq : Frequency at the performance state in kHz * (modified) - * @cpu : CPU for which we do this operation + * @dev : Device for which we do this operation (can be a CPU) * - * active_power() must find the lowest performance state of 'cpu' above + * active_power() must find the lowest performance state of 'dev' above * 'freq' and update 'power' and 'freq' to the matching active power * and frequency. * - * The power is the one of a single CPU in the domain, expressed in - * milli-watts. It is expected to fit in the [0, EM_MAX_POWER] - * range. + * In case of CPUs, the power is the one of a single CPU in the domain, + * expressed in milli-watts. It is expected to fit in the + * [0, EM_MAX_POWER] range. * * Return 0 on success. */ - int (*active_power)(unsigned long *power, unsigned long *freq, int cpu); + int (*active_power)(unsigned long *power, unsigned long *freq, + struct device *dev); }; #define EM_DATA_CB(_active_power_cb) { .active_power = &_active_power_cb }
diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c index 875b163e54ab..5b8a1566526a 100644 --- a/kernel/power/energy_model.c +++ b/kernel/power/energy_model.c @@ -78,8 +78,9 @@ core_initcall(em_debug_init); #else /* CONFIG_DEBUG_FS */ static void em_debug_create_pd(struct em_perf_domain *pd, int cpu) {} #endif -static struct em_perf_domain *em_create_pd(cpumask_t *span, int nr_states, - struct em_data_callback *cb) +static struct em_perf_domain * +em_create_pd(struct device *dev, int nr_states, struct em_data_callback *cb, + cpumask_t *span) { unsigned long opp_eff, prev_opp_eff = ULONG_MAX; unsigned long power, freq, prev_freq = 0; @@ -106,7 +107,7 @@ static struct em_perf_domain *em_create_pd(cpumask_t *span, int nr_states, * lowest performance state of 'cpu' above 'freq' and updates * 'power' and 'freq' accordingly. */ - ret = cb->active_power(&power, &freq, cpu); + ret = cb->active_power(&power, &freq, dev); if (ret) { pr_err("pd%d: invalid perf. state: %d\n", cpu, ret); goto free_ps_table; @@ -237,7 +238,7 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, }
 	/* Create the performance domain and add it to the Energy Model. */
-	pd = em_create_pd(span, nr_states, cb);
+	pd = em_create_pd(dev, nr_states, cb, span);
 	if (!pd) {
 		ret = -EINVAL;
 		goto unlock;
On 27/05/2020 11:58, Lukasz Luba wrote:
The Energy Model framework is going to support devices other than CPUs. In order to make this happen, change the callback function and add a pointer to a device as an argument.
Update the related users to use the new function and the new callback from the Energy Model.
Acked-by: Quentin Perret <qperret@google.com>
Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
[ ... ]
On 5/29/20 6:43 PM, Daniel Lezcano wrote:
On 27/05/2020 11:58, Lukasz Luba wrote:
[ ... ]
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Thank you Daniel!
Regards, Lukasz
Add support for devices other than CPUs. The registration function does not require a valid cpumask pointer and is ready to handle new devices. Some of the internal structures have been reorganized in order to keep a consistent view (like removing the per_cpu pd pointers).
Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
---
 include/linux/device.h       |   5 +
 include/linux/energy_model.h |  29 ++++-
 kernel/power/energy_model.c  | 239 ++++++++++++++++++++++++-----------
 3 files changed, 189 insertions(+), 84 deletions(-)
diff --git a/include/linux/device.h b/include/linux/device.h index ac8e37cd716a..7023d3ea189b 100644 --- a/include/linux/device.h +++ b/include/linux/device.h @@ -13,6 +13,7 @@ #define _DEVICE_H_
#include <linux/dev_printk.h> +#include <linux/energy_model.h> #include <linux/ioport.h> #include <linux/kobject.h> #include <linux/klist.h> @@ -559,6 +560,10 @@ struct device { struct dev_pm_info power; struct dev_pm_domain *pm_domain;
+#ifdef CONFIG_ENERGY_MODEL + struct em_perf_domain *em_pd; +#endif + #ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN struct irq_domain *msi_domain; #endif diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h index 7076cb22b247..0ebb083b15a0 100644 --- a/include/linux/energy_model.h +++ b/include/linux/energy_model.h @@ -12,8 +12,10 @@
/** * em_perf_state - Performance state of a performance domain - * @frequency: The CPU frequency in KHz, for consistency with CPUFreq - * @power: The power consumed by 1 CPU at this level, in milli-watts + * @frequency: The frequency in KHz, for consistency with CPUFreq + * @power: The power consumed at this level, in milli-watts (by 1 CPU or + by a registered device). It can be a total power: static and + dynamic. * @cost: The cost coefficient associated with this level, used during * energy calculation. Equal to: power * max_frequency / frequency */ @@ -27,12 +29,16 @@ struct em_perf_state { * em_perf_domain - Performance domain * @table: List of performance states, in ascending order * @nr_perf_states: Number of performance states - * @cpus: Cpumask covering the CPUs of the domain + * @cpus: Cpumask covering the CPUs of the domain. It's here + * for performance reasons to avoid potential cache + * misses during energy calculations in the scheduler + * and simplifies allocating/freeing that memory region. * - * A "performance domain" represents a group of CPUs whose performance is - * scaled together. All CPUs of a performance domain must have the same - * micro-architecture. Performance domains often have a 1-to-1 mapping with - * CPUFreq policies. + * In case of CPU device, a "performance domain" represents a group of CPUs + * whose performance is scaled together. All CPUs of a performance domain + * must have the same micro-architecture. Performance domains often have + * a 1-to-1 mapping with CPUFreq policies. In case of other devices the 'cpus' + * field is unused. */ struct em_perf_domain { struct em_perf_state *table; @@ -71,10 +77,12 @@ struct em_data_callback { #define EM_DATA_CB(_active_power_cb) { .active_power = &_active_power_cb }
struct em_perf_domain *em_cpu_get(int cpu); +struct em_perf_domain *em_pd_get(struct device *dev); int em_register_perf_domain(cpumask_t *span, unsigned int nr_states, struct em_data_callback *cb); int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, struct em_data_callback *cb, cpumask_t *span); +void em_dev_unregister_perf_domain(struct device *dev);
/** * em_pd_energy() - Estimates the energy consumed by the CPUs of a perf. domain @@ -184,10 +192,17 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, { return -EINVAL; } +static inline void em_dev_unregister_perf_domain(struct device *dev) +{ +} static inline struct em_perf_domain *em_cpu_get(int cpu) { return NULL; } +static inline struct em_perf_domain *em_pd_get(struct device *dev) +{ + return NULL; +} static inline unsigned long em_pd_energy(struct em_perf_domain *pd, unsigned long max_util, unsigned long sum_util) { diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c index 5b8a1566526a..07e8307460bc 100644 --- a/kernel/power/energy_model.c +++ b/kernel/power/energy_model.c @@ -1,9 +1,10 @@ // SPDX-License-Identifier: GPL-2.0 /* - * Energy Model of CPUs + * Energy Model of devices * - * Copyright (c) 2018, Arm ltd. + * Copyright (c) 2018-2020, Arm ltd. * Written by: Quentin Perret, Arm ltd. + * Improvements provided by: Lukasz Luba, Arm ltd. */
#define pr_fmt(fmt) "energy_model: " fmt @@ -15,15 +16,17 @@ #include <linux/sched/topology.h> #include <linux/slab.h>
-/* Mapping of each CPU to the performance domain to which it belongs. */ -static DEFINE_PER_CPU(struct em_perf_domain *, em_data); - /* * Mutex serializing the registrations of performance domains and letting * callbacks defined by drivers sleep. */ static DEFINE_MUTEX(em_pd_mutex);
+static bool _is_cpu_device(struct device *dev)
+{
+	return (dev->bus == &cpu_subsys);
+}
+
 #ifdef CONFIG_DEBUG_FS
 static struct dentry *rootdir;
@@ -49,22 +52,30 @@ static int em_debug_cpus_show(struct seq_file *s, void *unused) } DEFINE_SHOW_ATTRIBUTE(em_debug_cpus);
-static void em_debug_create_pd(struct em_perf_domain *pd, int cpu) +static void em_debug_create_pd(struct device *dev) { struct dentry *d; - char name[8]; int i;
- snprintf(name, sizeof(name), "pd%d", cpu); - /* Create the directory of the performance domain */ - d = debugfs_create_dir(name, rootdir); + d = debugfs_create_dir(dev_name(dev), rootdir);
- debugfs_create_file("cpus", 0444, d, pd->cpus, &em_debug_cpus_fops); + if (_is_cpu_device(dev)) + debugfs_create_file("cpus", 0444, d, dev->em_pd->cpus, + &em_debug_cpus_fops);
/* Create a sub-directory for each performance state */ - for (i = 0; i < pd->nr_perf_states; i++) - em_debug_create_ps(&pd->table[i], d); + for (i = 0; i < dev->em_pd->nr_perf_states; i++) + em_debug_create_ps(&dev->em_pd->table[i], d); + +} + +static void em_debug_remove_pd(struct device *dev) +{ + struct dentry *debug_dir; + + debug_dir = debugfs_lookup(dev_name(dev), rootdir); + debugfs_remove_recursive(debug_dir); }
static int __init em_debug_init(void) @@ -76,40 +87,34 @@ static int __init em_debug_init(void) } core_initcall(em_debug_init); #else /* CONFIG_DEBUG_FS */ -static void em_debug_create_pd(struct em_perf_domain *pd, int cpu) {} +static void em_debug_create_pd(struct device *dev) {} +static void em_debug_remove_pd(struct device *dev) {} #endif -static struct em_perf_domain * -em_create_pd(struct device *dev, int nr_states, struct em_data_callback *cb, - cpumask_t *span) + +static int em_create_perf_table(struct device *dev, struct em_perf_domain *pd, + int nr_states, struct em_data_callback *cb) { unsigned long opp_eff, prev_opp_eff = ULONG_MAX; unsigned long power, freq, prev_freq = 0; - int i, ret, cpu = cpumask_first(span); struct em_perf_state *table; - struct em_perf_domain *pd; + int i, ret; u64 fmax;
- if (!cb->active_power) - return NULL; - - pd = kzalloc(sizeof(*pd) + cpumask_size(), GFP_KERNEL); - if (!pd) - return NULL; - table = kcalloc(nr_states, sizeof(*table), GFP_KERNEL); if (!table) - goto free_pd; + return -ENOMEM;
/* Build the list of performance states for this performance domain */ for (i = 0, freq = 0; i < nr_states; i++, freq++) { /* * active_power() is a driver callback which ceils 'freq' to - * lowest performance state of 'cpu' above 'freq' and updates + * lowest performance state of 'dev' above 'freq' and updates * 'power' and 'freq' accordingly. */ ret = cb->active_power(&power, &freq, dev); if (ret) { - pr_err("pd%d: invalid perf. state: %d\n", cpu, ret); + dev_err(dev, "EM: invalid perf. state: %d\n", + ret); goto free_ps_table; }
@@ -118,7 +123,8 @@ em_create_pd(struct device *dev, int nr_states, struct em_data_callback *cb, * higher performance states. */ if (freq <= prev_freq) { - pr_err("pd%d: non-increasing freq: %lu\n", cpu, freq); + dev_err(dev, "EM: non-increasing freq: %lu\n", + freq); goto free_ps_table; }
@@ -127,7 +133,8 @@ em_create_pd(struct device *dev, int nr_states, struct em_data_callback *cb, * positive, in milli-watts and to fit into 16 bits. */ if (!power || power > EM_MAX_POWER) { - pr_err("pd%d: invalid power: %lu\n", cpu, power); + dev_err(dev, "EM: invalid power: %lu\n", + power); goto free_ps_table; }
@@ -142,8 +149,8 @@ em_create_pd(struct device *dev, int nr_states, struct em_data_callback *cb, */ opp_eff = freq / power; if (opp_eff >= prev_opp_eff) - pr_warn("pd%d: hertz/watts ratio non-monotonically decreasing: em_perf_state %d >= em_perf_state%d\n", - cpu, i, i - 1); + dev_dbg(dev, "EM: hertz/watts ratio non-monotonically decreasing: em_perf_state %d >= em_perf_state%d\n", + i, i - 1); prev_opp_eff = opp_eff; }
@@ -156,30 +163,82 @@ em_create_pd(struct device *dev, int nr_states, struct em_data_callback *cb,
 	pd->table = table;
 	pd->nr_perf_states = nr_states;
-	cpumask_copy(to_cpumask(pd->cpus), span);
-
-	em_debug_create_pd(pd, cpu);

-	return pd;
+	return 0;
free_ps_table: kfree(table); -free_pd: - kfree(pd); + return -EINVAL; +} + +static int em_create_pd(struct device *dev, int nr_states, + struct em_data_callback *cb, cpumask_t *cpus) +{ + struct em_perf_domain *pd; + struct device *cpu_dev; + int cpu, ret; + + if (_is_cpu_device(dev)) { + pd = kzalloc(sizeof(*pd) + cpumask_size(), GFP_KERNEL); + if (!pd) + return -ENOMEM; + + cpumask_copy(em_span_cpus(pd), cpus); + } else { + pd = kzalloc(sizeof(*pd), GFP_KERNEL); + if (!pd) + return -ENOMEM; + } + + ret = em_create_perf_table(dev, pd, nr_states, cb); + if (ret) { + kfree(pd); + return ret; + }
- return NULL; + if (_is_cpu_device(dev)) + for_each_cpu(cpu, cpus) { + cpu_dev = get_cpu_device(cpu); + cpu_dev->em_pd = pd; + } + else + dev->em_pd = pd; + + return 0; }
+/** + * em_pd_get() - Return the performance domain for a device + * @dev : Device to find the performance domain for + * + * Returns the performance domain to which 'dev' belongs, or NULL if it doesn't + * exist. + */ +struct em_perf_domain *em_pd_get(struct device *dev) +{ + if (IS_ERR_OR_NULL(dev)) + return NULL; + + return dev->em_pd; +} +EXPORT_SYMBOL_GPL(em_pd_get); + /** * em_cpu_get() - Return the performance domain for a CPU * @cpu : CPU to find the performance domain for * - * Return: the performance domain to which 'cpu' belongs, or NULL if it doesn't + * Returns the performance domain to which 'cpu' belongs, or NULL if it doesn't * exist. */ struct em_perf_domain *em_cpu_get(int cpu) { - return READ_ONCE(per_cpu(em_data, cpu)); + struct device *cpu_dev; + + cpu_dev = get_cpu_device(cpu); + if (!cpu_dev) + return NULL; + + return em_pd_get(cpu_dev); } EXPORT_SYMBOL_GPL(em_cpu_get);
@@ -188,7 +247,7 @@ EXPORT_SYMBOL_GPL(em_cpu_get); * @dev : Device for which the EM is to register * @nr_states : Number of performance states to register * @cb : Callback functions providing the data of the Energy Model - * @span : Pointer to cpumask_t, which in case of a CPU device is + * @cpus : Pointer to cpumask_t, which in case of a CPU device is * obligatory. It can be taken from i.e. 'policy->cpus'. For other * type of devices this should be set to NULL. * @@ -201,13 +260,12 @@ EXPORT_SYMBOL_GPL(em_cpu_get); * Return 0 on success */ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, - struct em_data_callback *cb, cpumask_t *span) + struct em_data_callback *cb, cpumask_t *cpus) { unsigned long cap, prev_cap = 0; - struct em_perf_domain *pd; - int cpu, ret = 0; + int cpu, ret;
- if (!dev || !span || !nr_states || !cb) + if (!dev || !nr_states || !cb) return -EINVAL;
/* @@ -216,47 +274,50 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, */ mutex_lock(&em_pd_mutex);
- for_each_cpu(cpu, span) { - /* Make sure we don't register again an existing domain. */ - if (READ_ONCE(per_cpu(em_data, cpu))) { - ret = -EEXIST; - goto unlock; - } + if (dev->em_pd) { + ret = -EEXIST; + goto unlock; + }
- /* - * All CPUs of a domain must have the same micro-architecture - * since they all share the same table. - */ - cap = arch_scale_cpu_capacity(cpu); - if (prev_cap && prev_cap != cap) { - pr_err("CPUs of %*pbl must have the same capacity\n", - cpumask_pr_args(span)); + if (_is_cpu_device(dev)) { + if (!cpus) { + dev_err(dev, "EM: invalid CPU mask\n"); ret = -EINVAL; goto unlock; } - prev_cap = cap; + + for_each_cpu(cpu, cpus) { + if (em_cpu_get(cpu)) { + dev_err(dev, "EM: exists for CPU%d\n", cpu); + ret = -EEXIST; + goto unlock; + } + /* + * All CPUs of a domain must have the same + * micro-architecture since they all share the same + * table. + */ + cap = arch_scale_cpu_capacity(cpu); + if (prev_cap && prev_cap != cap) { + dev_err(dev, "EM: CPUs of %*pbl must have the same capacity\n", + cpumask_pr_args(cpus)); + + ret = -EINVAL; + goto unlock; + } + prev_cap = cap; + } }
- /* Create the performance domain and add it to the Energy Model. */ - pd = em_create_pd(dev, nr_states, cb, span); - if (!pd) { - ret = -EINVAL; + ret = em_create_pd(dev, nr_states, cb, cpus); + if (ret) goto unlock; - }
- for_each_cpu(cpu, span) { - /* - * The per-cpu array can be read concurrently from em_cpu_get(). - * The barrier enforces the ordering needed to make sure readers - * can only access well formed em_perf_domain structs. - */ - smp_store_release(per_cpu_ptr(&em_data, cpu), pd); - } + em_debug_create_pd(dev); + dev_info(dev, "EM: created perf domain\n");
- pr_debug("Created perf domain %*pbl\n", cpumask_pr_args(span)); unlock: mutex_unlock(&em_pd_mutex); - return ret; } EXPORT_SYMBOL_GPL(em_dev_register_perf_domain); @@ -285,3 +346,27 @@ int em_register_perf_domain(cpumask_t *span, unsigned int nr_states, return em_dev_register_perf_domain(cpu_dev, nr_states, cb, span); } EXPORT_SYMBOL_GPL(em_register_perf_domain); + +/** + * em_dev_unregister_perf_domain() - Unregister Energy Model (EM) for a device + * @dev : Device for which the EM is registered + * + * Try to unregister the EM for the specified device (but not a CPU). + */ +void em_dev_unregister_perf_domain(struct device *dev) +{ + if (IS_ERR_OR_NULL(dev) || !dev->em_pd) + return; + + if (_is_cpu_device(dev)) + return; + + mutex_lock(&em_pd_mutex); + em_debug_remove_pd(dev); + + kfree(dev->em_pd->table); + kfree(dev->em_pd); + dev->em_pd = NULL; + mutex_unlock(&em_pd_mutex); +} +EXPORT_SYMBOL_GPL(em_dev_unregister_perf_domain);
On 27/05/2020 11:58, Lukasz Luba wrote:
Add support for devices other than CPUs. The registration function does not require a valid cpumask pointer and is ready to handle new devices. Some of the internal structures have been reorganized in order to keep a consistent view (like removing the per_cpu pd pointers).
Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
[ ... ]
 }
 EXPORT_SYMBOL_GPL(em_register_perf_domain);
+/**
+ * em_dev_unregister_perf_domain() - Unregister Energy Model (EM) for a device
+ * @dev : Device for which the EM is registered
+ *
+ * Try to unregister the EM for the specified device (but not a CPU).
+ */
+void em_dev_unregister_perf_domain(struct device *dev)
+{
+	if (IS_ERR_OR_NULL(dev) || !dev->em_pd)
+		return;
+
+	if (_is_cpu_device(dev))
+		return;
+
+	mutex_lock(&em_pd_mutex);
Is the mutex really needed?
If this function is called that means there is no more user of the em_pd, no?
+	em_debug_remove_pd(dev);
+
+	kfree(dev->em_pd->table);
+	kfree(dev->em_pd);
+	dev->em_pd = NULL;
+	mutex_unlock(&em_pd_mutex);
+}
+EXPORT_SYMBOL_GPL(em_dev_unregister_perf_domain);
Hi Daniel,
On 6/1/20 10:44 PM, Daniel Lezcano wrote:
On 27/05/2020 11:58, Lukasz Luba wrote:
Add support for devices other than CPUs. The registration function does not require a valid cpumask pointer and is ready to handle new devices. Some of the internal structures have been reorganized in order to keep a consistent view (like removing the per_cpu pd pointers).
Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
[ ... ]
 }
 EXPORT_SYMBOL_GPL(em_register_perf_domain);
+/**
+ * em_dev_unregister_perf_domain() - Unregister Energy Model (EM) for a device
+ * @dev : Device for which the EM is registered
+ *
+ * Try to unregister the EM for the specified device (but not a CPU).
+ */
+void em_dev_unregister_perf_domain(struct device *dev)
+{
+	if (IS_ERR_OR_NULL(dev) || !dev->em_pd)
+		return;
+
+	if (_is_cpu_device(dev))
+		return;
+
+	mutex_lock(&em_pd_mutex);
Is the mutex really needed?
I just wanted to align this unregister code with the register path. Since there are the debugfs dir lookup and the device's EM existence checks, I thought it wouldn't harm just to lock for a while and make sure the registration path is not used. These two paths shouldn't affect each other, but with modules loading/unloading I wanted to play safe. I can maybe change it to just a dmb() at the end of the function if it's a big performance problem in this unloading path. What do you think?
If this function is called that means there is no more user of the em_pd, no?
True, the EM users should already be unregistered, i.e. thermal cooling.
+	em_debug_remove_pd(dev);
+
+	kfree(dev->em_pd->table);
+	kfree(dev->em_pd);
+	dev->em_pd = NULL;
+	mutex_unlock(&em_pd_mutex);
+}
+EXPORT_SYMBOL_GPL(em_dev_unregister_perf_domain);
Thank you for reviewing this.
Regards, Lukasz
On Tue, Jun 2, 2020 at 1:31 PM Lukasz Luba lukasz.luba@arm.com wrote:
Hi Daniel,
On 6/1/20 10:44 PM, Daniel Lezcano wrote:
[ ... ]
Is the mutex really needed?
I just wanted to align this unregister code with the register path. Since there are the debugfs dir lookup and the device's EM existence checks, I thought it wouldn't harm just to lock for a while and make sure the registration path is not used. These two paths shouldn't affect each other, but with modules loading/unloading I wanted to play safe.
I can maybe change it to just a dmb() at the end of the function if it's a big performance problem in this unloading path. What do you think?
I would rather leave the mutex locking as is.
However, the question to ask is what exactly may go wrong without that locking in place? Is there any specific race condition that you are concerned about?
On 6/3/20 4:13 PM, Rafael J. Wysocki wrote:
[ ... ]
I would rather leave the mutex locking as is.
However, the question to ask is what exactly may go wrong without that locking in place? Is there any specific race condition that you are concerned about?
I tried to test this with module loading & unloading with the panfrost driver and CPU hotplug (which should bail out quickly) and it was OK. I don't see any particular race. I don't know too much about the debugfs code, though. That's why I tried to protect against scripts/services which try to re-load the driver.
Apart from that, there is maybe just the dev->em_pd = NULL store which needs to be propagated to all CPUs, and mutex_unlock() synchronizes that for free here.
Regards, Lukasz
On Wed, Jun 3, 2020 at 5:26 PM Lukasz Luba lukasz.luba@arm.com wrote:
[ ... ]
I tried to test this with module loading & unloading with the panfrost driver and CPU hotplug (which should bail out quickly) and it was OK. I don't see any particular race. I don't know too much about the debugfs code, though. That's why I tried to protect against scripts/services which try to re-load the driver.
Apart from that, there is maybe just the dev->em_pd = NULL store which needs to be propagated to all CPUs, and mutex_unlock() synchronizes that for free here.
If it may run concurrently with the registration for the same device, the locking is necessary, but in that case the !dev->em_pd check needs to go under the mutex too IMO, or you may end up leaking the pd if the registration can run between that check and the point at which the mutex is taken.
Apart from this, your kerneldoc comments might be improved IMO.
First of all, you can use @dev inside of a kerneldoc (if @dev represents an argument of the documented function) and that will produce the right output automatically.
Second, it is better to avoid saying things like "Try to unregister ..." in kerneldoc comments (the "Try to" part is redundant). Simply say "Unregister ..." instead.
Thanks!
On 6/3/20 4:40 PM, Rafael J. Wysocki wrote:
[ ... ]
If it may run concurrently with the registration for the same device, the locking is necessary, but in that case the !dev->em_pd check needs to go under the mutex too IMO, or you may end up leaking the pd if the registration can run between that check and the point at which the mutex is taken.
They don't run concurrently for the same device, and the users of that EM are already gone. I just wanted to be sure that everything is cleaned and synced properly. Here is an example of the directories under /sys/kernel/debug/energy_model: cpu0, cpu4, gpu, dsp, etc.
The only worry that I had was the debugfs dir name, which is a string from dev_name() and will be the same for the next registration if the module is re-loaded. So the 'name' is reused, and debugfs_create_dir() and debugfs_remove_recursive() use fsnotify, but they operate under inode_lock()/inode_unlock() on the parent dir 'energy_model'. Then there is also debugfs_lookup(), which is slightly different.
That's why I put a mutex to separate all registrations and unregistrations for all devices. It should work without the mutex in the unregister path, but I think it does not harm to take it just in case and also have the CPU variable sync for free.
Apart from this, your kerneldoc comments might be improved IMO.
First of all, you can use @dev inside of a kerneldoc (if @dev represents an argument of the documented function) and that will produce the right output automatically.
OK
Second, it is better to avoid saying things like "Try to unregister ..." in kerneldoc comments (the "Try to" part is redundant). Simply say "Unregister ..." instead.
Good point, thanks, I will use "Unregister ..." then.
Thanks!
Shall I send a 'resend patch' which changes these @dev and 'unregister' comments? Or wait for you to finish reviewing the other patches and send a v9?
On Wed, Jun 3, 2020 at 6:12 PM Lukasz Luba lukasz.luba@arm.com wrote:
[ ... ]
The only worry that I had was the debugfs dir name, which is a string from dev_name() and will be the same for the next registration if module is re-loaded.
OK, so that needs to be explained in a comment.
So the 'name' is reused and debugfs_create_dir() and debugfs_remove_recursive() uses this fsnotify, but they are operating under inode_lock/unlock() on the parent dir 'energy_model'. Then there is also this debugfs_lookup() which is slightly different.
That's why I put a mutex to separate all registration and unregistration for all devices. It should work without the mutex in unregister path, but I think it does not harm to take
Well, fair enough, but I still think that the !dev->em_pd check should be done under the mutex or it will be confusing.
it just in case and also have the CPU variable sync for free.
I'm not sure what you mean by the last part here?
Apart from this your kerneldoc comments might me improved IMO.
First of all, you can use @dev inside of a kerneldoc (if @dev represents an argument of the documented function) and that will produce the right output automatically.
OK
Second, it is better to avoid saying things like "Try to unregister ..." in kerneldoc comments (the "Try to" part is redundant). Simply say "Unregister ..." instead.
Good point, thanks, I will use "Unregister ..." then.
Thanks!
Shell I send a 'resend patch' which changes these @dev and 'unregister' comments?
Yes, please, but see the comments above too.
Or wait for you to finish reviewing the other patches and send v9?
That is not necessary, unless you want to make kerneldoc improvements in the other patches.
Thanks!
On 6/3/20 5:22 PM, Rafael J. Wysocki wrote:
On Wed, Jun 3, 2020 at 6:12 PM Lukasz Luba lukasz.luba@arm.com wrote:
On 6/3/20 4:40 PM, Rafael J. Wysocki wrote:
On Wed, Jun 3, 2020 at 5:26 PM Lukasz Luba lukasz.luba@arm.com wrote:
On 6/3/20 4:13 PM, Rafael J. Wysocki wrote:
On Tue, Jun 2, 2020 at 1:31 PM Lukasz Luba lukasz.luba@arm.com wrote:
Hi Daniel,
On 6/1/20 10:44 PM, Daniel Lezcano wrote:
> On 27/05/2020 11:58, Lukasz Luba wrote:
>> Add support for other devices than CPUs. The registration function
>> does not require a valid cpumask pointer and is ready to handle new
>> devices. Some of the internal structures have been reorganized in order
>> to keep a consistent view (like removing per_cpu pd pointers).
>>
>> Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
>> ---
>
> [ ... ]
>
>> }
>> EXPORT_SYMBOL_GPL(em_register_perf_domain);
>> +
>> +/**
>> + * em_dev_unregister_perf_domain() - Unregister Energy Model (EM) for a device
>> + * @dev : Device for which the EM is registered
>> + *
>> + * Try to unregister the EM for the specified device (but not a CPU).
>> + */
>> +void em_dev_unregister_perf_domain(struct device *dev)
>> +{
>> +	if (IS_ERR_OR_NULL(dev) || !dev->em_pd)
>> +		return;
>> +
>> +	if (_is_cpu_device(dev))
>> +		return;
>> +
>> +	mutex_lock(&em_pd_mutex);
>
> Is the mutex really needed?
I just wanted to align this unregister code with register. Since there is debugfs dir lookup and the device's EM existence checks I thought it wouldn't harm just to lock for a while and make sure the registration path is not used. These two paths shouldn't affect each other, but with modules loading/unloading I wanted to play safe.
I can maybe change it to just a dmb() at the end of the function if it's a big performance problem in this unloading path. What do you think?
I would rather leave the mutex locking as is.
However, the question to ask is what exactly may go wrong without that locking in place? Is there any specific race condition that you are concerned about?
I tried to test this with module loading & unloading with the panfrost driver and CPU hotplug (which should bail out quickly) and it was OK. I don't see any particular race. I don't know too much about the debugfs code, though. That's why I tried to protect against some scripts/services which try to re-load the driver.
Apart from that, there is just this dev->em = NULL write that needs to be propagated to all CPUs, which mutex_unlock() synchronizes for free here.
If it may run concurrently with the registration for the same device, the locking is necessary, but in that case the !dev->em_pd check needs to go under the mutex too IMO, or you may end up leaking the pd if the registration can run between that check and the point at which the mutex is taken.
They don't run concurrently for the same device and the users of that EM are already gone. I just wanted to be sure that everything is cleaned and synced properly. Here are some examples of the directories under /sys/kernel/debug/energy_model: cpu0, cpu4, gpu, dsp, etc.
The only worry that I had was the debugfs dir name, which is a string from dev_name() and will be the same for the next registration if module is re-loaded.
OK, so that needs to be explained in a comment.
OK, I will add it.
So the 'name' is reused, and debugfs_create_dir() and debugfs_remove_recursive() use fsnotify, but they are operating under inode_lock()/unlock() on the parent dir 'energy_model'. Then there is also debugfs_lookup(), which is slightly different.
That's why I put a mutex to separate all registration and unregistration for all devices. It should work without the mutex in the unregister path, but I think it does no harm to take
Well, fair enough, but I still think that the !dev->em_pd check should be done under the mutex or it will be confusing.
it just in case and also have the CPU variable sync for free.
I'm not sure what you mean by the last part here?
The mutex_unlock() for me also means a dmb() took place. ARM has a slightly different memory model than x86 and I just wanted to be sure that these new values reach memory and become visible to other cores. mutex_unlock() just guarantees this for me.
Apart from this, your kerneldoc comments might be improved IMO.
First of all, you can use @dev inside of a kerneldoc (if @dev represents an argument of the documented function) and that will produce the right output automatically.
OK
Second, it is better to avoid saying things like "Try to unregister ..." in kerneldoc comments (the "Try to" part is redundant). Simply say "Unregister ..." instead.
Good point, thanks, I will use "Unregister ..." then.
Thanks!
Shall I send a 'resend patch' which changes these @dev and 'unregister' comments?
Yes, please, but see the comments above too.
Saw it
Or wait for you to finish reviewing the other patches and send v9?
That is not necessary, unless you want to make kerneldoc improvements in the other patches.
I will check them, but if they are OKish then I will just resend this one.
Thank you for the review.
Regards, Lukasz
Hi Lukasz,
I love your patch! Perhaps something to improve:
url:    https://github.com/0day-ci/linux/commits/Lukasz-Luba/Add-support-for-devices...
base:   https://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git linux-next
config: i386-randconfig-m021-20200605 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-13) 9.3.0

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
smatch warnings: kernel/power/energy_model.c:316 em_dev_register_perf_domain() error: we previously assumed 'dev->em_pd' could be null (see line 277)
# https://github.com/0day-ci/linux/commit/110d050cb7ba1c96e63ada498979d1fd9952...
git remote add linux-review https://github.com/0day-ci/linux
git remote update linux-review
git checkout 110d050cb7ba1c96e63ada498979d1fd99529be2
vim +316 kernel/power/energy_model.c
0e294e607adaf3 Lukasz Luba     2020-05-27  262  int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
110d050cb7ba1c Lukasz Luba     2020-05-27  263  				struct em_data_callback *cb, cpumask_t *cpus)
27871f7a8a341e Quentin Perret  2018-12-03  264  {
27871f7a8a341e Quentin Perret  2018-12-03  265  	unsigned long cap, prev_cap = 0;
110d050cb7ba1c Lukasz Luba     2020-05-27  266  	int cpu, ret;
27871f7a8a341e Quentin Perret  2018-12-03  267  
110d050cb7ba1c Lukasz Luba     2020-05-27  268  	if (!dev || !nr_states || !cb)
27871f7a8a341e Quentin Perret  2018-12-03  269  		return -EINVAL;
27871f7a8a341e Quentin Perret  2018-12-03  270  
27871f7a8a341e Quentin Perret  2018-12-03  271  	/*
27871f7a8a341e Quentin Perret  2018-12-03  272  	 * Use a mutex to serialize the registration of performance domains and
27871f7a8a341e Quentin Perret  2018-12-03  273  	 * let the driver-defined callback functions sleep.
27871f7a8a341e Quentin Perret  2018-12-03  274  	 */
27871f7a8a341e Quentin Perret  2018-12-03  275  	mutex_lock(&em_pd_mutex);
27871f7a8a341e Quentin Perret  2018-12-03  276  
110d050cb7ba1c Lukasz Luba     2020-05-27 @277  	if (dev->em_pd) {
                                                	    ^^^^^^^^^^ Check for NULL.
27871f7a8a341e Quentin Perret  2018-12-03  278  		ret = -EEXIST;
27871f7a8a341e Quentin Perret  2018-12-03  279  		goto unlock;
27871f7a8a341e Quentin Perret  2018-12-03  280  	}
27871f7a8a341e Quentin Perret  2018-12-03  281  
110d050cb7ba1c Lukasz Luba     2020-05-27  282  	if (_is_cpu_device(dev)) {
110d050cb7ba1c Lukasz Luba     2020-05-27  283  		if (!cpus) {
110d050cb7ba1c Lukasz Luba     2020-05-27  284  			dev_err(dev, "EM: invalid CPU mask\n");
110d050cb7ba1c Lukasz Luba     2020-05-27  285  			ret = -EINVAL;
110d050cb7ba1c Lukasz Luba     2020-05-27  286  			goto unlock;
110d050cb7ba1c Lukasz Luba     2020-05-27  287  		}
110d050cb7ba1c Lukasz Luba     2020-05-27  288  
110d050cb7ba1c Lukasz Luba     2020-05-27  289  		for_each_cpu(cpu, cpus) {
110d050cb7ba1c Lukasz Luba     2020-05-27  290  			if (em_cpu_get(cpu)) {
110d050cb7ba1c Lukasz Luba     2020-05-27  291  				dev_err(dev, "EM: exists for CPU%d\n", cpu);
110d050cb7ba1c Lukasz Luba     2020-05-27  292  				ret = -EEXIST;
110d050cb7ba1c Lukasz Luba     2020-05-27  293  				goto unlock;
110d050cb7ba1c Lukasz Luba     2020-05-27  294  			}
27871f7a8a341e Quentin Perret  2018-12-03  295  			/*
110d050cb7ba1c Lukasz Luba     2020-05-27  296  			 * All CPUs of a domain must have the same
110d050cb7ba1c Lukasz Luba     2020-05-27  297  			 * micro-architecture since they all share the same
110d050cb7ba1c Lukasz Luba     2020-05-27  298  			 * table.
27871f7a8a341e Quentin Perret  2018-12-03  299  			 */
8ec59c0f5f4966 Vincent Guittot 2019-06-17  300  			cap = arch_scale_cpu_capacity(cpu);
27871f7a8a341e Quentin Perret  2018-12-03  301  			if (prev_cap && prev_cap != cap) {
110d050cb7ba1c Lukasz Luba     2020-05-27  302  				dev_err(dev, "EM: CPUs of %*pbl must have the same capacity\n",
110d050cb7ba1c Lukasz Luba     2020-05-27  303  					cpumask_pr_args(cpus));
110d050cb7ba1c Lukasz Luba     2020-05-27  304  
27871f7a8a341e Quentin Perret  2018-12-03  305  				ret = -EINVAL;
27871f7a8a341e Quentin Perret  2018-12-03  306  				goto unlock;
27871f7a8a341e Quentin Perret  2018-12-03  307  			}
27871f7a8a341e Quentin Perret  2018-12-03  308  			prev_cap = cap;
27871f7a8a341e Quentin Perret  2018-12-03  309  		}
110d050cb7ba1c Lukasz Luba     2020-05-27  310  	}
27871f7a8a341e Quentin Perret  2018-12-03  311  
110d050cb7ba1c Lukasz Luba     2020-05-27  312  	ret = em_create_pd(dev, nr_states, cb, cpus);
If it's a _is_cpu_device() then it iterates through a bunch of devices and sets up cpu_dev->em_pd for each. Presumably one of the devices is "dev" or this would crash pretty early on in testing?
110d050cb7ba1c Lukasz Luba     2020-05-27  313  	if (ret)
27871f7a8a341e Quentin Perret  2018-12-03  314  		goto unlock;
27871f7a8a341e Quentin Perret  2018-12-03  315  
110d050cb7ba1c Lukasz Luba     2020-05-27 @316  	em_debug_create_pd(dev);
                                                	^^^ Dereferences dev->em_pd.
110d050cb7ba1c Lukasz Luba     2020-05-27  317  	dev_info(dev, "EM: created perf domain\n");
27871f7a8a341e Quentin Perret  2018-12-03  318  
27871f7a8a341e Quentin Perret  2018-12-03  319  unlock:
27871f7a8a341e Quentin Perret  2018-12-03  320  	mutex_unlock(&em_pd_mutex);
27871f7a8a341e Quentin Perret  2018-12-03  321  	return ret;
27871f7a8a341e Quentin Perret  2018-12-03  322  }
0e294e607adaf3 Lukasz Luba     2020-05-27  323  EXPORT_SYMBOL_GPL(em_dev_register_perf_domain);
--- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
Hi Dan,
Thank you for your analysis.
On 6/8/20 12:51 PM, Dan Carpenter wrote:
If it's a _is_cpu_device() then it iterates through a bunch of devices and sets up cpu_dev->em_pd for each. Presumably one of the devices is "dev" or this would crash pretty early on in testing?
Yes, all of the devices taken from the 'cpus' mask will get em_pd set, including the suspected @dev. To calm down this static analyzer I can remove the 'else' in line 204 so that 'dev->em_pd = pd' is always set:

199	if (_is_cpu_device(dev))
200		for_each_cpu(cpu, cpus) {
201			cpu_dev = get_cpu_device(cpu);
202			cpu_dev->em_pd = pd;
203		}
204	else
205		dev->em_pd = pd;
Do you think it's a good solution and will work for this tool?
Regards, Lukasz
On Mon, Jun 08, 2020 at 01:34:37PM +0100, Lukasz Luba wrote:
It's not about the tool... Ignore the tool when it's wrong. But I do think the code is slightly more clear without the else statement.
Arguments could be made either way. Removing the else statement means we set dev->em_pd twice... But I *personally* lean vaguely towards removing the else statement. :P
That would make the warning go away as well.
regards, dan carpenter
On 6/8/20 1:51 PM, Dan Carpenter wrote:
Thanks, I will remove the else statement and add your 'Reported-by'
Regards, Lukasz
It's not really a proper bug report so it doesn't deserve a reported-by.
Thanks, though!
regards, dan carpenter
On 6/8/20 2:25 PM, Dan Carpenter wrote:
Thank you Dan for your work. This is very much appreciated!
Lukasz
Add support for other devices than CPUs. The registration function does not require a valid cpumask pointer and is ready to handle new devices. Some of the internal structures have been reorganized in order to keep a consistent view (like removing per_cpu pd pointers).
Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
---
Hi all,
This is just a small change compared to v8, addressing Rafael's comments and Dan's static analysis. Here are the changes:
- added a comment about mutex usage in the unregister function
- changed 'dev' into @dev in the kerneldoc comments
- removed the 'else' statement from em_create_pd() to calm down static analyzers
Regards, Lukasz
 include/linux/device.h       |   5 +
 include/linux/energy_model.h |  29 ++++-
 kernel/power/energy_model.c  | 244 ++++++++++++++++++++++++-----------
 3 files changed, 194 insertions(+), 84 deletions(-)
diff --git a/include/linux/device.h b/include/linux/device.h
index ac8e37cd716a..7023d3ea189b 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -13,6 +13,7 @@
 #define _DEVICE_H_

 #include <linux/dev_printk.h>
+#include <linux/energy_model.h>
 #include <linux/ioport.h>
 #include <linux/kobject.h>
 #include <linux/klist.h>
@@ -559,6 +560,10 @@ struct device {
 	struct dev_pm_info	power;
 	struct dev_pm_domain	*pm_domain;

+#ifdef CONFIG_ENERGY_MODEL
+	struct em_perf_domain	*em_pd;
+#endif
+
 #ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN
 	struct irq_domain	*msi_domain;
 #endif
diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
index 7076cb22b247..2d4689964029 100644
--- a/include/linux/energy_model.h
+++ b/include/linux/energy_model.h
@@ -12,8 +12,10 @@
 /**
  * em_perf_state - Performance state of a performance domain
- * @frequency:	The CPU frequency in KHz, for consistency with CPUFreq
- * @power:	The power consumed by 1 CPU at this level, in milli-watts
+ * @frequency:	The frequency in KHz, for consistency with CPUFreq
+ * @power:	The power consumed at this level, in milli-watts (by 1 CPU or
+		by a registered device). It can be a total power: static and
+		dynamic.
  * @cost:	The cost coefficient associated with this level, used during
  *		energy calculation. Equal to: power * max_frequency / frequency
  */
@@ -27,12 +29,16 @@ struct em_perf_state {
  * em_perf_domain - Performance domain
  * @table:		List of performance states, in ascending order
  * @nr_perf_states:	Number of performance states
- * @cpus:		Cpumask covering the CPUs of the domain
+ * @cpus:		Cpumask covering the CPUs of the domain. It's here
+ *			for performance reasons to avoid potential cache
+ *			misses during energy calculations in the scheduler
+ *			and simplifies allocating/freeing that memory region.
  *
- * A "performance domain" represents a group of CPUs whose performance is
- * scaled together. All CPUs of a performance domain must have the same
- * micro-architecture. Performance domains often have a 1-to-1 mapping with
- * CPUFreq policies.
+ * In case of CPU device, a "performance domain" represents a group of CPUs
+ * whose performance is scaled together. All CPUs of a performance domain
+ * must have the same micro-architecture. Performance domains often have
+ * a 1-to-1 mapping with CPUFreq policies. In case of other devices the @cpus
+ * field is unused.
  */
 struct em_perf_domain {
 	struct em_perf_state *table;
@@ -71,10 +77,12 @@ struct em_data_callback {
 #define EM_DATA_CB(_active_power_cb) { .active_power = &_active_power_cb }

 struct em_perf_domain *em_cpu_get(int cpu);
+struct em_perf_domain *em_pd_get(struct device *dev);
 int em_register_perf_domain(cpumask_t *span, unsigned int nr_states,
 			    struct em_data_callback *cb);
 int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
 				struct em_data_callback *cb, cpumask_t *span);
+void em_dev_unregister_perf_domain(struct device *dev);

 /**
  * em_pd_energy() - Estimates the energy consumed by the CPUs of a perf. domain
@@ -184,10 +192,17 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
 {
 	return -EINVAL;
 }
+static inline void em_dev_unregister_perf_domain(struct device *dev)
+{
+}
 static inline struct em_perf_domain *em_cpu_get(int cpu)
 {
 	return NULL;
 }
+static inline struct em_perf_domain *em_pd_get(struct device *dev)
+{
+	return NULL;
+}
 static inline unsigned long em_pd_energy(struct em_perf_domain *pd,
 				unsigned long max_util, unsigned long sum_util)
 {
diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
index 5b8a1566526a..32d76e78f992 100644
--- a/kernel/power/energy_model.c
+++ b/kernel/power/energy_model.c
@@ -1,9 +1,10 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
- * Energy Model of CPUs
+ * Energy Model of devices
  *
- * Copyright (c) 2018, Arm ltd.
+ * Copyright (c) 2018-2020, Arm ltd.
  * Written by: Quentin Perret, Arm ltd.
+ * Improvements provided by: Lukasz Luba, Arm ltd.
  */
#define pr_fmt(fmt) "energy_model: " fmt @@ -15,15 +16,17 @@ #include <linux/sched/topology.h> #include <linux/slab.h>
-/* Mapping of each CPU to the performance domain to which it belongs. */ -static DEFINE_PER_CPU(struct em_perf_domain *, em_data); - /* * Mutex serializing the registrations of performance domains and letting * callbacks defined by drivers sleep. */ static DEFINE_MUTEX(em_pd_mutex);
+static bool _is_cpu_device(struct device *dev) +{ + return (dev->bus == &cpu_subsys); +} + #ifdef CONFIG_DEBUG_FS static struct dentry *rootdir;
@@ -49,22 +52,30 @@ static int em_debug_cpus_show(struct seq_file *s, void *unused) } DEFINE_SHOW_ATTRIBUTE(em_debug_cpus);
-static void em_debug_create_pd(struct em_perf_domain *pd, int cpu) +static void em_debug_create_pd(struct device *dev) { struct dentry *d; - char name[8]; int i;
- snprintf(name, sizeof(name), "pd%d", cpu); - /* Create the directory of the performance domain */ - d = debugfs_create_dir(name, rootdir); + d = debugfs_create_dir(dev_name(dev), rootdir);
- debugfs_create_file("cpus", 0444, d, pd->cpus, &em_debug_cpus_fops); + if (_is_cpu_device(dev)) + debugfs_create_file("cpus", 0444, d, dev->em_pd->cpus, + &em_debug_cpus_fops);
/* Create a sub-directory for each performance state */ - for (i = 0; i < pd->nr_perf_states; i++) - em_debug_create_ps(&pd->table[i], d); + for (i = 0; i < dev->em_pd->nr_perf_states; i++) + em_debug_create_ps(&dev->em_pd->table[i], d); + +} + +static void em_debug_remove_pd(struct device *dev) +{ + struct dentry *debug_dir; + + debug_dir = debugfs_lookup(dev_name(dev), rootdir); + debugfs_remove_recursive(debug_dir); }
static int __init em_debug_init(void) @@ -76,40 +87,34 @@ static int __init em_debug_init(void) } core_initcall(em_debug_init); #else /* CONFIG_DEBUG_FS */ -static void em_debug_create_pd(struct em_perf_domain *pd, int cpu) {} +static void em_debug_create_pd(struct device *dev) {} +static void em_debug_remove_pd(struct device *dev) {} #endif -static struct em_perf_domain * -em_create_pd(struct device *dev, int nr_states, struct em_data_callback *cb, - cpumask_t *span) + +static int em_create_perf_table(struct device *dev, struct em_perf_domain *pd, + int nr_states, struct em_data_callback *cb) { unsigned long opp_eff, prev_opp_eff = ULONG_MAX; unsigned long power, freq, prev_freq = 0; - int i, ret, cpu = cpumask_first(span); struct em_perf_state *table; - struct em_perf_domain *pd; + int i, ret; u64 fmax;
- if (!cb->active_power) - return NULL; - - pd = kzalloc(sizeof(*pd) + cpumask_size(), GFP_KERNEL); - if (!pd) - return NULL; - table = kcalloc(nr_states, sizeof(*table), GFP_KERNEL); if (!table) - goto free_pd; + return -ENOMEM;
/* Build the list of performance states for this performance domain */ for (i = 0, freq = 0; i < nr_states; i++, freq++) { /* * active_power() is a driver callback which ceils 'freq' to - * lowest performance state of 'cpu' above 'freq' and updates + * lowest performance state of 'dev' above 'freq' and updates * 'power' and 'freq' accordingly. */ ret = cb->active_power(&power, &freq, dev); if (ret) { - pr_err("pd%d: invalid perf. state: %d\n", cpu, ret); + dev_err(dev, "EM: invalid perf. state: %d\n", + ret); goto free_ps_table; }
@@ -118,7 +123,8 @@ em_create_pd(struct device *dev, int nr_states, struct em_data_callback *cb, * higher performance states. */ if (freq <= prev_freq) { - pr_err("pd%d: non-increasing freq: %lu\n", cpu, freq); + dev_err(dev, "EM: non-increasing freq: %lu\n", + freq); goto free_ps_table; }
@@ -127,7 +133,8 @@ em_create_pd(struct device *dev, int nr_states, struct em_data_callback *cb, * positive, in milli-watts and to fit into 16 bits. */ if (!power || power > EM_MAX_POWER) { - pr_err("pd%d: invalid power: %lu\n", cpu, power); + dev_err(dev, "EM: invalid power: %lu\n", + power); goto free_ps_table; }
@@ -142,8 +149,8 @@ em_create_pd(struct device *dev, int nr_states, struct em_data_callback *cb, */ opp_eff = freq / power; if (opp_eff >= prev_opp_eff) - pr_warn("pd%d: hertz/watts ratio non-monotonically decreasing: em_perf_state %d >= em_perf_state%d\n", - cpu, i, i - 1); + dev_dbg(dev, "EM: hertz/watts ratio non-monotonically decreasing: em_perf_state %d >= em_perf_state%d\n", + i, i - 1); prev_opp_eff = opp_eff; }
@@ -156,30 +163,82 @@ em_create_pd(struct device *dev, int nr_states, struct em_data_callback *cb,
pd->table = table; pd->nr_perf_states = nr_states; - cpumask_copy(to_cpumask(pd->cpus), span);
- em_debug_create_pd(pd, cpu); - - return pd; + return 0;
free_ps_table: kfree(table); -free_pd: - kfree(pd); + return -EINVAL; +} + +static int em_create_pd(struct device *dev, int nr_states, + struct em_data_callback *cb, cpumask_t *cpus) +{ + struct em_perf_domain *pd; + struct device *cpu_dev; + int cpu, ret; + + if (_is_cpu_device(dev)) { + pd = kzalloc(sizeof(*pd) + cpumask_size(), GFP_KERNEL); + if (!pd) + return -ENOMEM; + + cpumask_copy(em_span_cpus(pd), cpus); + } else { + pd = kzalloc(sizeof(*pd), GFP_KERNEL); + if (!pd) + return -ENOMEM; + } + + ret = em_create_perf_table(dev, pd, nr_states, cb); + if (ret) { + kfree(pd); + return ret; + } + + if (_is_cpu_device(dev)) + for_each_cpu(cpu, cpus) { + cpu_dev = get_cpu_device(cpu); + cpu_dev->em_pd = pd; + } + + dev->em_pd = pd; + + return 0; +} + +/** + * em_pd_get() - Return the performance domain for a device + * @dev : Device to find the performance domain for + * + * Returns the performance domain to which @dev belongs, or NULL if it doesn't + * exist. + */ +struct em_perf_domain *em_pd_get(struct device *dev) +{ + if (IS_ERR_OR_NULL(dev)) + return NULL;
- return NULL; + return dev->em_pd; } +EXPORT_SYMBOL_GPL(em_pd_get);
/** * em_cpu_get() - Return the performance domain for a CPU * @cpu : CPU to find the performance domain for * - * Return: the performance domain to which 'cpu' belongs, or NULL if it doesn't + * Returns the performance domain to which @cpu belongs, or NULL if it doesn't * exist. */ struct em_perf_domain *em_cpu_get(int cpu) { - return READ_ONCE(per_cpu(em_data, cpu)); + struct device *cpu_dev; + + cpu_dev = get_cpu_device(cpu); + if (!cpu_dev) + return NULL; + + return em_pd_get(cpu_dev); } EXPORT_SYMBOL_GPL(em_cpu_get);
@@ -188,7 +247,7 @@ EXPORT_SYMBOL_GPL(em_cpu_get); * @dev : Device for which the EM is to register * @nr_states : Number of performance states to register * @cb : Callback functions providing the data of the Energy Model - * @span : Pointer to cpumask_t, which in case of a CPU device is + * @cpus : Pointer to cpumask_t, which in case of a CPU device is * obligatory. It can be taken from i.e. 'policy->cpus'. For other * type of devices this should be set to NULL. * @@ -201,13 +260,12 @@ EXPORT_SYMBOL_GPL(em_cpu_get); * Return 0 on success */ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, - struct em_data_callback *cb, cpumask_t *span) + struct em_data_callback *cb, cpumask_t *cpus) { unsigned long cap, prev_cap = 0; - struct em_perf_domain *pd; - int cpu, ret = 0; + int cpu, ret;
- if (!dev || !span || !nr_states || !cb) + if (!dev || !nr_states || !cb) return -EINVAL;
/* @@ -216,47 +274,50 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, */ mutex_lock(&em_pd_mutex);
- for_each_cpu(cpu, span) { - /* Make sure we don't register again an existing domain. */ - if (READ_ONCE(per_cpu(em_data, cpu))) { - ret = -EEXIST; - goto unlock; - } + if (dev->em_pd) { + ret = -EEXIST; + goto unlock; + }
- /* - * All CPUs of a domain must have the same micro-architecture - * since they all share the same table. - */ - cap = arch_scale_cpu_capacity(cpu); - if (prev_cap && prev_cap != cap) { - pr_err("CPUs of %*pbl must have the same capacity\n", - cpumask_pr_args(span)); + if (_is_cpu_device(dev)) { + if (!cpus) { + dev_err(dev, "EM: invalid CPU mask\n"); ret = -EINVAL; goto unlock; } - prev_cap = cap; + + for_each_cpu(cpu, cpus) { + if (em_cpu_get(cpu)) { + dev_err(dev, "EM: exists for CPU%d\n", cpu); + ret = -EEXIST; + goto unlock; + } + /* + * All CPUs of a domain must have the same + * micro-architecture since they all share the same + * table. + */ + cap = arch_scale_cpu_capacity(cpu); + if (prev_cap && prev_cap != cap) { + dev_err(dev, "EM: CPUs of %*pbl must have the same capacity\n", + cpumask_pr_args(cpus)); + + ret = -EINVAL; + goto unlock; + } + prev_cap = cap; + } }
- /* Create the performance domain and add it to the Energy Model. */ - pd = em_create_pd(dev, nr_states, cb, span); - if (!pd) { - ret = -EINVAL; + ret = em_create_pd(dev, nr_states, cb, cpus); + if (ret) goto unlock; - }
- for_each_cpu(cpu, span) { - /* - * The per-cpu array can be read concurrently from em_cpu_get(). - * The barrier enforces the ordering needed to make sure readers - * can only access well formed em_perf_domain structs. - */ - smp_store_release(per_cpu_ptr(&em_data, cpu), pd); - } + em_debug_create_pd(dev); + dev_info(dev, "EM: created perf domain\n");
- pr_debug("Created perf domain %*pbl\n", cpumask_pr_args(span)); unlock: mutex_unlock(&em_pd_mutex); - return ret; } EXPORT_SYMBOL_GPL(em_dev_register_perf_domain); @@ -285,3 +346,32 @@ int em_register_perf_domain(cpumask_t *span, unsigned int nr_states, return em_dev_register_perf_domain(cpu_dev, nr_states, cb, span); } EXPORT_SYMBOL_GPL(em_register_perf_domain); + +/** + * em_dev_unregister_perf_domain() - Unregister Energy Model (EM) for a device + * @dev : Device for which the EM is registered + * + * Unregister the EM for the specified @dev (but not a CPU device). + */ +void em_dev_unregister_perf_domain(struct device *dev) +{ + if (IS_ERR_OR_NULL(dev) || !dev->em_pd) + return; + + if (_is_cpu_device(dev)) + return; + + /* + * The mutex separates all register/unregister requests and protects + * from potential clean-up/setup issues in the debugfs directories. + * The debugfs directory name is the same as device's name. + */ + mutex_lock(&em_pd_mutex); + em_debug_remove_pd(dev); + + kfree(dev->em_pd->table); + kfree(dev->em_pd); + dev->em_pd = NULL; + mutex_unlock(&em_pd_mutex); +} +EXPORT_SYMBOL_GPL(em_dev_unregister_perf_domain);
On Wed, Jun 10, 2020 at 12:12 PM Lukasz Luba <lukasz.luba@arm.com> wrote:
> Add support for devices other than CPUs. The registration function does
> not require a valid cpumask pointer and is ready to handle new devices.
> Some of the internal structures have been reorganized in order to keep a
> consistent view (like removing the per_cpu pd pointers).
>
> Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
>
> Hi all,
>
> This is just a small change compared to v8, addressing Rafael's comments
> and Dan's static analyses. Here are the changes:
> - added a comment about mutex usage in the unregister function
> - changed 'dev' into @dev in the kerneldoc comments
> - removed the 'else' statement from em_create_pd() to calm down static
>   analyzers

I've applied the series as 5.9 material with patch [4/8] replaced with this one.

Sorry for the delays in handling this!

Thanks!
>  include/linux/device.h       |   5 +
>  include/linux/energy_model.h |  29 ++++-
>  kernel/power/energy_model.c  | 244 ++++++++++++++++++++++++-----------
>  3 files changed, 194 insertions(+), 84 deletions(-)
On 6/24/20 4:21 PM, Rafael J. Wysocki wrote:
> I've applied the series as 5.9 material with patch [4/8] replaced with
> this one.
>
> Sorry for the delays in handling this!
No worries. Thank you very much!
Regards,
Lukasz
Thanks!
include/linux/device.h | 5 + include/linux/energy_model.h | 29 ++++- kernel/power/energy_model.c | 244 ++++++++++++++++++++++++----------- 3 files changed, 194 insertions(+), 84 deletions(-)
diff --git a/include/linux/device.h b/include/linux/device.h index ac8e37cd716a..7023d3ea189b 100644 --- a/include/linux/device.h +++ b/include/linux/device.h @@ -13,6 +13,7 @@ #define _DEVICE_H_
#include <linux/dev_printk.h> +#include <linux/energy_model.h> #include <linux/ioport.h> #include <linux/kobject.h> #include <linux/klist.h> @@ -559,6 +560,10 @@ struct device { struct dev_pm_info power; struct dev_pm_domain *pm_domain;
+#ifdef CONFIG_ENERGY_MODEL
struct em_perf_domain *em_pd;
+#endif
- #ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN struct irq_domain *msi_domain; #endif
diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h index 7076cb22b247..2d4689964029 100644 --- a/include/linux/energy_model.h +++ b/include/linux/energy_model.h @@ -12,8 +12,10 @@
/**
- em_perf_state - Performance state of a performance domain
- @frequency: The CPU frequency in KHz, for consistency with CPUFreq
- @power: The power consumed by 1 CPU at this level, in milli-watts
- @frequency: The frequency in KHz, for consistency with CPUFreq
- @power: The power consumed at this level, in milli-watts (by 1 CPU or
by a registered device). It can be a total power: static and
dynamic.
*/
- @cost: The cost coefficient associated with this level, used during
energy calculation. Equal to: power * max_frequency / frequency
@@ -27,12 +29,16 @@ struct em_perf_state {
- em_perf_domain - Performance domain
- @table: List of performance states, in ascending order
- @nr_perf_states: Number of performance states
- @cpus: Cpumask covering the CPUs of the domain
- @cpus: Cpumask covering the CPUs of the domain. It's here
for performance reasons to avoid potential cache
misses during energy calculations in the scheduler
and simplifies allocating/freeing that memory region.
- A "performance domain" represents a group of CPUs whose performance is
- scaled together. All CPUs of a performance domain must have the same
- micro-architecture. Performance domains often have a 1-to-1 mapping with
- CPUFreq policies.
- In case of CPU device, a "performance domain" represents a group of CPUs
- whose performance is scaled together. All CPUs of a performance domain
- must have the same micro-architecture. Performance domains often have
- a 1-to-1 mapping with CPUFreq policies. In case of other devices the @cpus
*/ struct em_perf_domain { struct em_perf_state *table;
- field is unused.
@@ -71,10 +77,12 @@ struct em_data_callback { #define EM_DATA_CB(_active_power_cb) { .active_power = &_active_power_cb }
struct em_perf_domain *em_cpu_get(int cpu); +struct em_perf_domain *em_pd_get(struct device *dev); int em_register_perf_domain(cpumask_t *span, unsigned int nr_states, struct em_data_callback *cb); int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, struct em_data_callback *cb, cpumask_t *span); +void em_dev_unregister_perf_domain(struct device *dev);
/**
- em_pd_energy() - Estimates the energy consumed by the CPUs of a perf. domain
@@ -184,10 +192,17 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, { return -EINVAL; } +static inline void em_dev_unregister_perf_domain(struct device *dev) +{ +} static inline struct em_perf_domain *em_cpu_get(int cpu) { return NULL; } +static inline struct em_perf_domain *em_pd_get(struct device *dev) +{
return NULL;
+} static inline unsigned long em_pd_energy(struct em_perf_domain *pd, unsigned long max_util, unsigned long sum_util) { diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c index 5b8a1566526a..32d76e78f992 100644 --- a/kernel/power/energy_model.c +++ b/kernel/power/energy_model.c @@ -1,9 +1,10 @@ // SPDX-License-Identifier: GPL-2.0 /*
- Energy Model of CPUs
- Energy Model of devices
- Copyright (c) 2018, Arm ltd.
- Copyright (c) 2018-2020, Arm ltd.
- Written by: Quentin Perret, Arm ltd.
- Improvements provided by: Lukasz Luba, Arm ltd.
*/
#define pr_fmt(fmt) "energy_model: " fmt
@@ -15,15 +16,17 @@ #include <linux/sched/topology.h> #include <linux/slab.h>
-/* Mapping of each CPU to the performance domain to which it belongs. */ -static DEFINE_PER_CPU(struct em_perf_domain *, em_data);
- /*
*/ static DEFINE_MUTEX(em_pd_mutex);
- Mutex serializing the registrations of performance domains and letting
- callbacks defined by drivers sleep.
+static bool _is_cpu_device(struct device *dev) +{
return (dev->bus == &cpu_subsys);
+}
- #ifdef CONFIG_DEBUG_FS static struct dentry *rootdir;
@@ -49,22 +52,30 @@ static int em_debug_cpus_show(struct seq_file *s, void *unused)
 }
 DEFINE_SHOW_ATTRIBUTE(em_debug_cpus);

-static void em_debug_create_pd(struct em_perf_domain *pd, int cpu)
+static void em_debug_create_pd(struct device *dev)
 {
 	struct dentry *d;
-	char name[8];
 	int i;

-	snprintf(name, sizeof(name), "pd%d", cpu);
-
 	/* Create the directory of the performance domain */
-	d = debugfs_create_dir(name, rootdir);
+	d = debugfs_create_dir(dev_name(dev), rootdir);

-	debugfs_create_file("cpus", 0444, d, pd->cpus, &em_debug_cpus_fops);
+	if (_is_cpu_device(dev))
+		debugfs_create_file("cpus", 0444, d, dev->em_pd->cpus,
+				    &em_debug_cpus_fops);

 	/* Create a sub-directory for each performance state */
-	for (i = 0; i < pd->nr_perf_states; i++)
-		em_debug_create_ps(&pd->table[i], d);
+	for (i = 0; i < dev->em_pd->nr_perf_states; i++)
+		em_debug_create_ps(&dev->em_pd->table[i], d);
+}
+
+static void em_debug_remove_pd(struct device *dev)
+{
+	struct dentry *debug_dir;
+
+	debug_dir = debugfs_lookup(dev_name(dev), rootdir);
+	debugfs_remove_recursive(debug_dir);
 }

 static int __init em_debug_init(void)
@@ -76,40 +87,34 @@ static int __init em_debug_init(void)
 }
 core_initcall(em_debug_init);
 #else /* CONFIG_DEBUG_FS */
-static void em_debug_create_pd(struct em_perf_domain *pd, int cpu) {}
+static void em_debug_create_pd(struct device *dev) {}
+static void em_debug_remove_pd(struct device *dev) {}
 #endif

-static struct em_perf_domain *
-em_create_pd(struct device *dev, int nr_states, struct em_data_callback *cb,
-	     cpumask_t *span)
+static int em_create_perf_table(struct device *dev, struct em_perf_domain *pd,
+				int nr_states, struct em_data_callback *cb)
 {
 	unsigned long opp_eff, prev_opp_eff = ULONG_MAX;
 	unsigned long power, freq, prev_freq = 0;
-	int i, ret, cpu = cpumask_first(span);
 	struct em_perf_state *table;
-	struct em_perf_domain *pd;
+	int i, ret;
 	u64 fmax;

-	if (!cb->active_power)
-		return NULL;
-
-	pd = kzalloc(sizeof(*pd) + cpumask_size(), GFP_KERNEL);
-	if (!pd)
-		return NULL;
-
 	table = kcalloc(nr_states, sizeof(*table), GFP_KERNEL);
 	if (!table)
-		goto free_pd;
+		return -ENOMEM;

 	/* Build the list of performance states for this performance domain */
 	for (i = 0, freq = 0; i < nr_states; i++, freq++) {
 		/*
 		 * active_power() is a driver callback which ceils 'freq' to
-		 * lowest performance state of 'cpu' above 'freq' and updates
+		 * lowest performance state of 'dev' above 'freq' and updates
 		 * 'power' and 'freq' accordingly.
 		 */
 		ret = cb->active_power(&power, &freq, dev);
 		if (ret) {
-			pr_err("pd%d: invalid perf. state: %d\n", cpu, ret);
+			dev_err(dev, "EM: invalid perf. state: %d\n",
+				ret);
 			goto free_ps_table;
 		}

@@ -118,7 +123,8 @@ em_create_pd(struct device *dev, int nr_states, struct em_data_callback *cb,
 		 * higher performance states.
 		 */
 		if (freq <= prev_freq) {
-			pr_err("pd%d: non-increasing freq: %lu\n", cpu, freq);
+			dev_err(dev, "EM: non-increasing freq: %lu\n",
+				freq);
 			goto free_ps_table;
 		}

@@ -127,7 +133,8 @@ em_create_pd(struct device *dev, int nr_states, struct em_data_callback *cb,
 		 * positive, in milli-watts and to fit into 16 bits.
 		 */
 		if (!power || power > EM_MAX_POWER) {
-			pr_err("pd%d: invalid power: %lu\n", cpu, power);
+			dev_err(dev, "EM: invalid power: %lu\n",
+				power);
 			goto free_ps_table;
 		}

@@ -142,8 +149,8 @@ em_create_pd(struct device *dev, int nr_states, struct em_data_callback *cb,
 		 */
 		opp_eff = freq / power;
 		if (opp_eff >= prev_opp_eff)
-			pr_warn("pd%d: hertz/watts ratio non-monotonically decreasing: em_perf_state %d >= em_perf_state%d\n",
-					cpu, i, i - 1);
+			dev_dbg(dev, "EM: hertz/watts ratio non-monotonically decreasing: em_perf_state %d >= em_perf_state%d\n",
+				i, i - 1);
 		prev_opp_eff = opp_eff;
 	}

@@ -156,30 +163,82 @@ em_create_pd(struct device *dev, int nr_states, struct em_data_callback *cb,
 	pd->table = table;
 	pd->nr_perf_states = nr_states;
-	cpumask_copy(to_cpumask(pd->cpus), span);

-	em_debug_create_pd(pd, cpu);
-
-	return pd;
+	return 0;

 free_ps_table:
 	kfree(table);
-free_pd:
-	kfree(pd);
-
-	return NULL;
+	return -EINVAL;
+}
+static int em_create_pd(struct device *dev, int nr_states,
+			struct em_data_callback *cb, cpumask_t *cpus)
+{
+	struct em_perf_domain *pd;
+	struct device *cpu_dev;
+	int cpu, ret;
+
+	if (_is_cpu_device(dev)) {
+		pd = kzalloc(sizeof(*pd) + cpumask_size(), GFP_KERNEL);
+		if (!pd)
+			return -ENOMEM;
+
+		cpumask_copy(em_span_cpus(pd), cpus);
+	} else {
+		pd = kzalloc(sizeof(*pd), GFP_KERNEL);
+		if (!pd)
+			return -ENOMEM;
+	}
+
+	ret = em_create_perf_table(dev, pd, nr_states, cb);
+	if (ret) {
+		kfree(pd);
+		return ret;
+	}
+
+	if (_is_cpu_device(dev))
+		for_each_cpu(cpu, cpus) {
+			cpu_dev = get_cpu_device(cpu);
+			cpu_dev->em_pd = pd;
+		}
+
+	dev->em_pd = pd;
+
+	return 0;
+}
+/**
+ * em_pd_get() - Return the performance domain for a device
+ * @dev : Device to find the performance domain for
+ *
+ * Returns the performance domain to which @dev belongs, or NULL if it doesn't
+ * exist.
+ */
+struct em_perf_domain *em_pd_get(struct device *dev)
+{
+	if (IS_ERR_OR_NULL(dev))
+		return NULL;
+
+	return dev->em_pd;
+}
+EXPORT_SYMBOL_GPL(em_pd_get);
+
 /**
  * em_cpu_get() - Return the performance domain for a CPU
  * @cpu : CPU to find the performance domain for
  *
- * Return: the performance domain to which 'cpu' belongs, or NULL if it doesn't
+ * Returns the performance domain to which @cpu belongs, or NULL if it doesn't
  * exist.
  */
 struct em_perf_domain *em_cpu_get(int cpu)
 {
-	return READ_ONCE(per_cpu(em_data, cpu));
+	struct device *cpu_dev;
+
+	cpu_dev = get_cpu_device(cpu);
+	if (!cpu_dev)
+		return NULL;
+
+	return em_pd_get(cpu_dev);
 }
 EXPORT_SYMBOL_GPL(em_cpu_get);
@@ -188,7 +247,7 @@ EXPORT_SYMBOL_GPL(em_cpu_get);
  * @dev		: Device for which the EM is to register
  * @nr_states	: Number of performance states to register
  * @cb		: Callback functions providing the data of the Energy Model
- * @span	: Pointer to cpumask_t, which in case of a CPU device is
+ * @cpus	: Pointer to cpumask_t, which in case of a CPU device is
  *		  obligatory. It can be taken from i.e. 'policy->cpus'. For other
  *		  type of devices this should be set to NULL.
@@ -201,13 +260,12 @@ EXPORT_SYMBOL_GPL(em_cpu_get);
  * Return 0 on success
  */
 int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
-				struct em_data_callback *cb, cpumask_t *span)
+				struct em_data_callback *cb, cpumask_t *cpus)
 {
 	unsigned long cap, prev_cap = 0;
-	struct em_perf_domain *pd;
-	int cpu, ret = 0;
+	int cpu, ret;

-	if (!dev || !span || !nr_states || !cb)
+	if (!dev || !nr_states || !cb)
 		return -EINVAL;

 	/*
@@ -216,47 +274,50 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
 	 */
 	mutex_lock(&em_pd_mutex);
-	for_each_cpu(cpu, span) {
-		/* Make sure we don't register again an existing domain. */
-		if (READ_ONCE(per_cpu(em_data, cpu))) {
-			ret = -EEXIST;
-			goto unlock;
-		}
+	if (dev->em_pd) {
+		ret = -EEXIST;
+		goto unlock;
+	}

-		/*
-		 * All CPUs of a domain must have the same micro-architecture
-		 * since they all share the same table.
-		 */
-		cap = arch_scale_cpu_capacity(cpu);
-		if (prev_cap && prev_cap != cap) {
-			pr_err("CPUs of %*pbl must have the same capacity\n",
-			       cpumask_pr_args(span));
+	if (_is_cpu_device(dev)) {
+		if (!cpus) {
+			dev_err(dev, "EM: invalid CPU mask\n");
 			ret = -EINVAL;
 			goto unlock;
 		}

-		prev_cap = cap;
+		for_each_cpu(cpu, cpus) {
+			if (em_cpu_get(cpu)) {
+				dev_err(dev, "EM: exists for CPU%d\n", cpu);
+				ret = -EEXIST;
+				goto unlock;
+			}
+			/*
+			 * All CPUs of a domain must have the same
+			 * micro-architecture since they all share the same
+			 * table.
+			 */
+			cap = arch_scale_cpu_capacity(cpu);
+			if (prev_cap && prev_cap != cap) {
+				dev_err(dev, "EM: CPUs of %*pbl must have the same capacity\n",
+					cpumask_pr_args(cpus));
+				ret = -EINVAL;
+				goto unlock;
+			}
+			prev_cap = cap;
+		}
 	}

 	/* Create the performance domain and add it to the Energy Model. */
-	pd = em_create_pd(dev, nr_states, cb, span);
-	if (!pd) {
-		ret = -EINVAL;
+	ret = em_create_pd(dev, nr_states, cb, cpus);
+	if (ret)
 		goto unlock;
-	}

-	for_each_cpu(cpu, span) {
-		/*
-		 * The per-cpu array can be read concurrently from em_cpu_get().
-		 * The barrier enforces the ordering needed to make sure readers
-		 * can only access well formed em_perf_domain structs.
-		 */
-		smp_store_release(per_cpu_ptr(&em_data, cpu), pd);
-	}
+	em_debug_create_pd(dev);
+	dev_info(dev, "EM: created perf domain\n");

-	pr_debug("Created perf domain %*pbl\n", cpumask_pr_args(span));
 unlock:
 	mutex_unlock(&em_pd_mutex);
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(em_dev_register_perf_domain);
@@ -285,3 +346,32 @@ int em_register_perf_domain(cpumask_t *span, unsigned int nr_states,
 	return em_dev_register_perf_domain(cpu_dev, nr_states, cb, span);
 }
 EXPORT_SYMBOL_GPL(em_register_perf_domain);
+
+/**
+ * em_dev_unregister_perf_domain() - Unregister Energy Model (EM) for a device
+ * @dev		: Device for which the EM is registered
+ *
+ * Unregister the EM for the specified @dev (but not a CPU device).
+ */
+void em_dev_unregister_perf_domain(struct device *dev)
+{
+	if (IS_ERR_OR_NULL(dev) || !dev->em_pd)
+		return;
+
+	if (_is_cpu_device(dev))
+		return;
+
+	/*
+	 * The mutex separates all register/unregister requests and protects
+	 * from potential clean-up/setup issues in the debugfs directories.
+	 * The debugfs directory name is the same as device's name.
+	 */
+	mutex_lock(&em_pd_mutex);
+	em_debug_remove_pd(dev);
+
+	kfree(dev->em_pd->table);
+	kfree(dev->em_pd);
+	dev->em_pd = NULL;
+	mutex_unlock(&em_pd_mutex);
+}
+EXPORT_SYMBOL_GPL(em_dev_unregister_perf_domain);
--
2.17.1
Remove the old function em_register_perf_domain(), which is no longer needed. em_dev_register_perf_domain() covers the old use cases as well as the new ones.
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Quentin Perret <qperret@google.com>
Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
---
 include/linux/energy_model.h |  7 -------
 kernel/power/energy_model.c  | 25 -------------------------
 2 files changed, 32 deletions(-)
diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
index 0ebb083b15a0..3aaa0bc43429 100644
--- a/include/linux/energy_model.h
+++ b/include/linux/energy_model.h
@@ -78,8 +78,6 @@ struct em_data_callback {
 struct em_perf_domain *em_cpu_get(int cpu);
 struct em_perf_domain *em_pd_get(struct device *dev);
-int em_register_perf_domain(cpumask_t *span, unsigned int nr_states,
-			    struct em_data_callback *cb);
 int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
 				struct em_data_callback *cb, cpumask_t *span);
 void em_dev_unregister_perf_domain(struct device *dev);
@@ -181,11 +179,6 @@ static inline int em_pd_nr_perf_states(struct em_perf_domain *pd)
 struct em_data_callback {};
 #define EM_DATA_CB(_active_power_cb) { }

-static inline int em_register_perf_domain(cpumask_t *span,
-			unsigned int nr_states, struct em_data_callback *cb)
-{
-	return -EINVAL;
-}
 static inline int em_dev_register_perf_domain(struct device *dev,
 				unsigned int nr_states,
 				struct em_data_callback *cb, cpumask_t *span)
diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
index 07e8307460bc..73e44c1fc4bd 100644
--- a/kernel/power/energy_model.c
+++ b/kernel/power/energy_model.c
@@ -322,31 +322,6 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
 }
 EXPORT_SYMBOL_GPL(em_dev_register_perf_domain);

-/**
- * em_register_perf_domain() - Register the Energy Model of a performance domain
- * @span	: Mask of CPUs in the performance domain
- * @nr_states	: Number of capacity states to register
- * @cb		: Callback functions providing the data of the Energy Model
- *
- * Create Energy Model tables for a performance domain using the callbacks
- * defined in cb.
- *
- * If multiple clients register the same performance domain, all but the first
- * registration will be ignored.
- *
- * Return 0 on success
- */
-int em_register_perf_domain(cpumask_t *span, unsigned int nr_states,
-			    struct em_data_callback *cb)
-{
-	struct device *cpu_dev;
-
-	cpu_dev = get_cpu_device(cpumask_first(span));
-
-	return em_dev_register_perf_domain(cpu_dev, nr_states, cb, span);
-}
-EXPORT_SYMBOL_GPL(em_register_perf_domain);
-
 /**
  * em_dev_unregister_perf_domain() - Unregister Energy Model (EM) for a device
  * @dev		: Device for which the EM is registered
The Energy Model framework now supports devices other than CPUs. Refactor some of the functions in order to prevent wrong usage. The old function em_pd_energy() has too generic a name. It must not be used without a proper cpumask pointer, which is only available for CPU devices. Thus, rename it and add a proper description to warn of potential wrong usage for other devices.
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Quentin Perret <qperret@google.com>
Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
---
 include/linux/energy_model.h | 11 ++++++++---
 kernel/sched/fair.c          |  2 +-
 2 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
index 3aaa0bc43429..a0aff705b535 100644
--- a/include/linux/energy_model.h
+++ b/include/linux/energy_model.h
@@ -83,15 +83,20 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
 void em_dev_unregister_perf_domain(struct device *dev);

 /**
- * em_pd_energy() - Estimates the energy consumed by the CPUs of a perf. domain
+ * em_cpu_energy() - Estimates the energy consumed by the CPUs of a
+		performance domain
  * @pd		: performance domain for which energy has to be estimated
  * @max_util	: highest utilization among CPUs of the domain
  * @sum_util	: sum of the utilization of all CPUs in the domain
  *
+ * This function must be used only for CPU devices. There is no validation,
+ * i.e. if the EM is a CPU type and has cpumask allocated. It is called from
+ * the scheduler code quite frequently and that is why there is not checks.
+ *
  * Return: the sum of the energy consumed by the CPUs of the domain assuming
  * a capacity state satisfying the max utilization of the domain.
  */
-static inline unsigned long em_pd_energy(struct em_perf_domain *pd,
+static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
 				unsigned long max_util, unsigned long sum_util)
 {
 	unsigned long freq, scale_cpu;
@@ -196,7 +201,7 @@ static inline struct em_perf_domain *em_pd_get(struct device *dev)
 {
 	return NULL;
 }
-static inline unsigned long em_pd_energy(struct em_perf_domain *pd,
+static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
 				unsigned long max_util, unsigned long sum_util)
 {
 	return 0;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 538ba5d94e99..c28f1de43f5e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6489,7 +6489,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
 		max_util = max(max_util, cpu_util);
 	}

-	return em_pd_energy(pd->em_pd, max_util, sum_util);
+	return em_cpu_energy(pd->em_pd, max_util, sum_util);
 }

 /*
The Energy Model framework now supports devices other than CPUs. Update the related information and add a description for the new usage.
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Quentin Perret <qperret@google.com>
Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
---
 Documentation/power/energy-model.rst | 135 +++++++++++++++------------
 1 file changed, 75 insertions(+), 60 deletions(-)
diff --git a/Documentation/power/energy-model.rst b/Documentation/power/energy-model.rst
index 90a345d57ae9..a6fb986abe3c 100644
--- a/Documentation/power/energy-model.rst
+++ b/Documentation/power/energy-model.rst
@@ -1,15 +1,17 @@
-====================
-Energy Model of CPUs
-====================
+.. SPDX-License-Identifier: GPL-2.0
+
+=======================
+Energy Model of devices
+=======================
 1. Overview
 -----------
 The Energy Model (EM) framework serves as an interface between drivers knowing
-the power consumed by CPUs at various performance levels, and the kernel
+the power consumed by devices at various performance levels, and the kernel
 subsystems willing to use that information to make energy-aware decisions.
-The source of the information about the power consumed by CPUs can vary greatly
+The source of the information about the power consumed by devices can vary greatly
 from one platform to another. These power costs can be estimated using
 devicetree data in some cases. In others, the firmware will know better.
 Alternatively, userspace might be best positioned. And so on. In order to avoid
@@ -25,7 +27,7 @@ framework, and interested clients reading the data from it::

        +---------------+  +-----------------+  +---------------+
        | Thermal (IPA) |  | Scheduler (EAS) |  |     Other     |
        +---------------+  +-----------------+  +---------------+
-               |                   | em_pd_energy()   |
+               |                   | em_cpu_energy()  |
                |                   | em_cpu_get()     |
                +---------+         |         +---------+
                          |         |         |
@@ -35,7 +37,7 @@ framework, and interested clients reading the data from it::
                         |     Framework       |
                         +---------------------+
                            ^       ^       ^
-                           |       |       | em_register_perf_domain()
+                           |       |       | em_dev_register_perf_domain()
                 +----------+       |       +---------+
                 |                  |                 |
         +---------------+  +---------------+  +--------------+
@@ -47,12 +49,12 @@ framework, and interested clients reading the data from it::
         | Device Tree   |  |   Firmware    |  |      ?       |
         +--------------+  +---------------+  +--------------+
-The EM framework manages power cost tables per 'performance domain' in the
-system. A performance domain is a group of CPUs whose performance is scaled
-together. Performance domains generally have a 1-to-1 mapping with CPUFreq
-policies. All CPUs in a performance domain are required to have the same
-micro-architecture. CPUs in different performance domains can have different
-micro-architectures.
+In case of CPU devices the EM framework manages power cost tables per
+'performance domain' in the system. A performance domain is a group of CPUs
+whose performance is scaled together. Performance domains generally have a
+1-to-1 mapping with CPUFreq policies. All CPUs in a performance domain are
+required to have the same micro-architecture. CPUs in different performance
+domains can have different micro-architectures.
 2. Core APIs
 ------------

@@ -70,14 +72,16 @@ CONFIG_ENERGY_MODEL must be enabled to use the EM framework.
 Drivers are expected to register performance domains into the EM framework by
 calling the following API::
-  int em_register_perf_domain(cpumask_t *span, unsigned int nr_states,
-			      struct em_data_callback *cb);
+  int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
+		struct em_data_callback *cb, cpumask_t *cpus);
-Drivers must specify the CPUs of the performance domains using the cpumask
-argument, and provide a callback function returning <frequency, power> tuples
-for each capacity state. The callback function provided by the driver is free
+Drivers must provide a callback function returning <frequency, power> tuples
+for each performance state. The callback function provided by the driver is free
 to fetch data from any relevant location (DT, firmware, ...), and by any mean
-deemed necessary. See Section 3. for an example of driver implementing this
+deemed necessary. Only for CPU devices, drivers must specify the CPUs of the
+performance domains using cpumask. For other devices than CPUs the last
+argument must be set to NULL.
+See Section 3. for an example of driver implementing this
 callback, and kernel/power/energy_model.c for further documentation on this
 API.
@@ -85,13 +89,20 @@ API.

 2.3 Accessing performance domains
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+There are two API functions which provide the access to the energy model:
+em_cpu_get() which takes CPU id as an argument and em_pd_get() with device
+pointer as an argument. It depends on the subsystem which interface it is
+going to use, but in case of CPU devices both functions return the same
+performance domain.
+
 Subsystems interested in the energy model of a CPU can retrieve it using the
 em_cpu_get() API. The energy model tables are allocated once upon creation of
 the performance domains, and kept in memory untouched.
 The energy consumed by a performance domain can be estimated using the
-em_pd_energy() API. The estimation is performed assuming that the schedutil
-CPUfreq governor is in use.
+em_cpu_energy() API. The estimation is performed assuming that the schedutil
+CPUfreq governor is in use in case of CPU device. Currently this calculation is
+not provided for other type of devices.
More details about the above APIs can be found in include/linux/energy_model.h.
@@ -106,42 +117,46 @@ EM framework::
-> drivers/cpufreq/foo_cpufreq.c
-  01	static int est_power(unsigned long *mW, unsigned long *KHz, int cpu)
-  02	{
-  03		long freq, power;
-  04
-  05		/* Use the 'foo' protocol to ceil the frequency */
-  06		freq = foo_get_freq_ceil(cpu, *KHz);
-  07		if (freq < 0);
-  08			return freq;
-  09
-  10		/* Estimate the power cost for the CPU at the relevant freq. */
-  11		power = foo_estimate_power(cpu, freq);
-  12		if (power < 0);
-  13			return power;
-  14
-  15		/* Return the values to the EM framework */
-  16		*mW = power;
-  17		*KHz = freq;
-  18
-  19		return 0;
-  20	}
-  21
-  22	static int foo_cpufreq_init(struct cpufreq_policy *policy)
-  23	{
-  24		struct em_data_callback em_cb = EM_DATA_CB(est_power);
-  25		int nr_opp, ret;
-  26
-  27		/* Do the actual CPUFreq init work ... */
-  28		ret = do_foo_cpufreq_init(policy);
-  29		if (ret)
-  30			return ret;
-  31
-  32		/* Find the number of OPPs for this policy */
-  33		nr_opp = foo_get_nr_opp(policy);
-  34
-  35		/* And register the new performance domain */
-  36		em_register_perf_domain(policy->cpus, nr_opp, &em_cb);
-  37
-  38		return 0;
-  39	}
+  01	static int est_power(unsigned long *mW, unsigned long *KHz,
+  02			struct device *dev)
+  03	{
+  04		long freq, power;
+  05
+  06		/* Use the 'foo' protocol to ceil the frequency */
+  07		freq = foo_get_freq_ceil(dev, *KHz);
+  08		if (freq < 0);
+  09			return freq;
+  10
+  11		/* Estimate the power cost for the dev at the relevant freq. */
+  12		power = foo_estimate_power(dev, freq);
+  13		if (power < 0);
+  14			return power;
+  15
+  16		/* Return the values to the EM framework */
+  17		*mW = power;
+  18		*KHz = freq;
+  19
+  20		return 0;
+  21	}
+  22
+  23	static int foo_cpufreq_init(struct cpufreq_policy *policy)
+  24	{
+  25		struct em_data_callback em_cb = EM_DATA_CB(est_power);
+  26		struct device *cpu_dev;
+  27		int nr_opp, ret;
+  28
+  29		cpu_dev = get_cpu_device(cpumask_first(policy->cpus));
+  30
+  31		/* Do the actual CPUFreq init work ... */
+  32		ret = do_foo_cpufreq_init(policy);
+  33		if (ret)
+  34			return ret;
+  35
+  36		/* Find the number of OPPs for this policy */
+  37		nr_opp = foo_get_nr_opp(policy);
+  38
+  39		/* And register the new performance domain */
+  40		em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb, policy->cpus);
+  41
+  42		return 0;
+  43	}
The Energy Model framework supports not only CPU devices. Drop the CPU-specific interface taking a cpumask and add a struct device argument instead. Also add a return value, which callers might use. This new interface provides an easy way to create a simple Energy Model, which can then be used by e.g. the thermal subsystem.
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
---
 drivers/cpufreq/cpufreq-dt.c           |  2 +-
 drivers/cpufreq/imx6q-cpufreq.c        |  2 +-
 drivers/cpufreq/mediatek-cpufreq.c     |  2 +-
 drivers/cpufreq/omap-cpufreq.c         |  2 +-
 drivers/cpufreq/qcom-cpufreq-hw.c      |  2 +-
 drivers/cpufreq/scpi-cpufreq.c         |  2 +-
 drivers/cpufreq/vexpress-spc-cpufreq.c |  2 +-
 drivers/opp/of.c                       | 71 ++++++++++++++++----------
 include/linux/pm_opp.h                 | 15 +++++-
 9 files changed, 65 insertions(+), 35 deletions(-)
diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
index 26fe8dfb9ce6..f9f03fd49b83 100644
--- a/drivers/cpufreq/cpufreq-dt.c
+++ b/drivers/cpufreq/cpufreq-dt.c
@@ -275,7 +275,7 @@ static int cpufreq_init(struct cpufreq_policy *policy)
 	policy->cpuinfo.transition_latency = transition_latency;
 	policy->dvfs_possible_from_any_cpu = true;

-	dev_pm_opp_of_register_em(policy->cpus);
+	dev_pm_opp_of_register_em(cpu_dev, policy->cpus);

 	return 0;

diff --git a/drivers/cpufreq/imx6q-cpufreq.c b/drivers/cpufreq/imx6q-cpufreq.c
index fdb2ffffbd15..ef7b34c1fd2b 100644
--- a/drivers/cpufreq/imx6q-cpufreq.c
+++ b/drivers/cpufreq/imx6q-cpufreq.c
@@ -193,7 +193,7 @@ static int imx6q_cpufreq_init(struct cpufreq_policy *policy)
 	policy->clk = clks[ARM].clk;
 	cpufreq_generic_init(policy, freq_table, transition_latency);
 	policy->suspend_freq = max_freq;
-	dev_pm_opp_of_register_em(policy->cpus);
+	dev_pm_opp_of_register_em(cpu_dev, policy->cpus);

 	return 0;
 }
diff --git a/drivers/cpufreq/mediatek-cpufreq.c b/drivers/cpufreq/mediatek-cpufreq.c
index 0c98dd08273d..7d1212c9b7c8 100644
--- a/drivers/cpufreq/mediatek-cpufreq.c
+++ b/drivers/cpufreq/mediatek-cpufreq.c
@@ -448,7 +448,7 @@ static int mtk_cpufreq_init(struct cpufreq_policy *policy)
 	policy->driver_data = info;
 	policy->clk = info->cpu_clk;

-	dev_pm_opp_of_register_em(policy->cpus);
+	dev_pm_opp_of_register_em(info->cpu_dev, policy->cpus);

 	return 0;
 }
diff --git a/drivers/cpufreq/omap-cpufreq.c b/drivers/cpufreq/omap-cpufreq.c
index 8d14b42a8c6f..3694bb030df3 100644
--- a/drivers/cpufreq/omap-cpufreq.c
+++ b/drivers/cpufreq/omap-cpufreq.c
@@ -131,7 +131,7 @@ static int omap_cpu_init(struct cpufreq_policy *policy)

 	/* FIXME: what's the actual transition time? */
 	cpufreq_generic_init(policy, freq_table, 300 * 1000);
-	dev_pm_opp_of_register_em(policy->cpus);
+	dev_pm_opp_of_register_em(mpu_dev, policy->cpus);

 	return 0;
 }
diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c
index fc92a8842e25..0a04b6f03b9a 100644
--- a/drivers/cpufreq/qcom-cpufreq-hw.c
+++ b/drivers/cpufreq/qcom-cpufreq-hw.c
@@ -238,7 +238,7 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
 		goto error;
 	}

-	dev_pm_opp_of_register_em(policy->cpus);
+	dev_pm_opp_of_register_em(cpu_dev, policy->cpus);

 	policy->fast_switch_possible = true;

diff --git a/drivers/cpufreq/scpi-cpufreq.c b/drivers/cpufreq/scpi-cpufreq.c
index 20d1f85d5f5a..b0f5388b8854 100644
--- a/drivers/cpufreq/scpi-cpufreq.c
+++ b/drivers/cpufreq/scpi-cpufreq.c
@@ -167,7 +167,7 @@ static int scpi_cpufreq_init(struct cpufreq_policy *policy)

 	policy->fast_switch_possible = false;

-	dev_pm_opp_of_register_em(policy->cpus);
+	dev_pm_opp_of_register_em(cpu_dev, policy->cpus);

 	return 0;

diff --git a/drivers/cpufreq/vexpress-spc-cpufreq.c b/drivers/cpufreq/vexpress-spc-cpufreq.c
index 83c85d3d67e3..4e8b1dee7c9a 100644
--- a/drivers/cpufreq/vexpress-spc-cpufreq.c
+++ b/drivers/cpufreq/vexpress-spc-cpufreq.c
@@ -450,7 +450,7 @@ static int ve_spc_cpufreq_init(struct cpufreq_policy *policy)
 	policy->freq_table = freq_table[cur_cluster];
 	policy->cpuinfo.transition_latency = 1000000; /* 1 ms */

-	dev_pm_opp_of_register_em(policy->cpus);
+	dev_pm_opp_of_register_em(cpu_dev, policy->cpus);
 	if (is_bL_switching_enabled())
 		per_cpu(cpu_last_req_freq, policy->cpu) =
diff --git a/drivers/opp/of.c b/drivers/opp/of.c
index 5b75829a915d..4500ce46d476 100644
--- a/drivers/opp/of.c
+++ b/drivers/opp/of.c
@@ -1036,18 +1036,18 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_of_node);

 /*
  * Callback function provided to the Energy Model framework upon registration.
- * This computes the power estimated by @CPU at @kHz if it is the frequency
+ * This computes the power estimated by @dev at @kHz if it is the frequency
  * of an existing OPP, or at the frequency of the first OPP above @kHz otherwise
  * (see dev_pm_opp_find_freq_ceil()). This function updates @kHz to the ceiled
  * frequency and @mW to the associated power. The power is estimated as
- * P = C * V^2 * f with C being the CPU's capacitance and V and f respectively
- * the voltage and frequency of the OPP.
+ * P = C * V^2 * f with C being the device's capacitance and V and f
+ * respectively the voltage and frequency of the OPP.
  *
- * Returns -ENODEV if the CPU device cannot be found, -EINVAL if the power
- * calculation failed because of missing parameters, 0 otherwise.
+ * Returns -EINVAL if the power calculation failed because of missing
+ * parameters, 0 otherwise.
  */
-static int __maybe_unused _get_cpu_power(unsigned long *mW, unsigned long *kHz,
-					 struct device *cpu_dev)
+static int __maybe_unused _get_power(unsigned long *mW, unsigned long *kHz,
+				     struct device *dev)
 {
 	struct dev_pm_opp *opp;
 	struct device_node *np;
@@ -1056,7 +1056,7 @@ static int __maybe_unused _get_cpu_power(unsigned long *mW, unsigned long *kHz,
 	u64 tmp;
 	int ret;

-	np = of_node_get(cpu_dev->of_node);
+	np = of_node_get(dev->of_node);
 	if (!np)
 		return -EINVAL;

@@ -1066,7 +1066,7 @@ static int __maybe_unused _get_cpu_power(unsigned long *mW, unsigned long *kHz,
 		return -EINVAL;

 	Hz = *kHz * 1000;
-	opp = dev_pm_opp_find_freq_ceil(cpu_dev, &Hz);
+	opp = dev_pm_opp_find_freq_ceil(dev, &Hz);
 	if (IS_ERR(opp))
 		return -EINVAL;
@@ -1086,30 +1086,38 @@ static int __maybe_unused _get_cpu_power(unsigned long *mW, unsigned long *kHz,

 /**
  * dev_pm_opp_of_register_em() - Attempt to register an Energy Model
- * @cpus	: CPUs for which an Energy Model has to be registered
+ * @dev		: Device for which an Energy Model has to be registered
+ * @cpus	: CPUs for which an Energy Model has to be registered. For
+ *		  other type of devices it should be set to NULL.
  *
  * This checks whether the "dynamic-power-coefficient" devicetree property has
  * been specified, and tries to register an Energy Model with it if it has.
+ * Having this property means the voltages are known for OPPs and the EM
+ * might be calculated.
  */
-void dev_pm_opp_of_register_em(struct cpumask *cpus)
+int dev_pm_opp_of_register_em(struct device *dev, struct cpumask *cpus)
 {
-	struct em_data_callback em_cb = EM_DATA_CB(_get_cpu_power);
-	int ret, nr_opp, cpu = cpumask_first(cpus);
-	struct device *cpu_dev;
+	struct em_data_callback em_cb = EM_DATA_CB(_get_power);
 	struct device_node *np;
+	int ret, nr_opp;
 	u32 cap;

-	cpu_dev = get_cpu_device(cpu);
-	if (!cpu_dev)
-		return;
+	if (IS_ERR_OR_NULL(dev)) {
+		ret = -EINVAL;
+		goto failed;
+	}

-	nr_opp = dev_pm_opp_get_opp_count(cpu_dev);
-	if (nr_opp <= 0)
-		return;
+	nr_opp = dev_pm_opp_get_opp_count(dev);
+	if (nr_opp <= 0) {
+		ret = -EINVAL;
+		goto failed;
+	}

-	np = of_node_get(cpu_dev->of_node);
-	if (!np)
-		return;
+	np = of_node_get(dev->of_node);
+	if (!np) {
+		ret = -EINVAL;
+		goto failed;
+	}

 	/*
 	 * Register an EM only if the 'dynamic-power-coefficient' property is
@@ -1120,9 +1128,20 @@ void dev_pm_opp_of_register_em(struct cpumask *cpus)
 	 */
 	ret = of_property_read_u32(np, "dynamic-power-coefficient", &cap);
 	of_node_put(np);
-	if (ret || !cap)
-		return;
+	if (ret || !cap) {
+		dev_dbg(dev, "Couldn't find proper 'dynamic-power-coefficient' in DT\n");
+		ret = -EINVAL;
+		goto failed;
+	}
-	em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb, cpus);
+	ret = em_dev_register_perf_domain(dev, nr_opp, &em_cb, cpus);
+	if (ret)
+		goto failed;
+
+	return 0;
+
+failed:
+	dev_dbg(dev, "Couldn't register Energy Model %d\n", ret);
+	return ret;
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_of_register_em);
diff --git a/include/linux/pm_opp.h b/include/linux/pm_opp.h
index 747861816f4f..8df8b82054e5 100644
--- a/include/linux/pm_opp.h
+++ b/include/linux/pm_opp.h
@@ -11,6 +11,7 @@
 #ifndef __LINUX_OPP_H__
 #define __LINUX_OPP_H__

+#include <linux/energy_model.h>
 #include <linux/err.h>
 #include <linux/notifier.h>

@@ -360,7 +361,11 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask);
 struct device_node *dev_pm_opp_of_get_opp_desc_node(struct device *dev);
 struct device_node *dev_pm_opp_get_of_node(struct dev_pm_opp *opp);
 int of_get_required_opp_performance_state(struct device_node *np, int index);
-void dev_pm_opp_of_register_em(struct cpumask *cpus);
+int dev_pm_opp_of_register_em(struct device *dev, struct cpumask *cpus);
+
+static inline void dev_pm_opp_of_unregister_em(struct device *dev)
+{
+	em_dev_unregister_perf_domain(dev);
+}
 #else
 static inline int dev_pm_opp_of_add_table(struct device *dev)
 {
@@ -400,7 +405,13 @@ static inline struct device_node *dev_pm_opp_get_of_node(struct dev_pm_opp *opp)
 	return NULL;
 }

-static inline void dev_pm_opp_of_register_em(struct cpumask *cpus)
+static inline int dev_pm_opp_of_register_em(struct device *dev,
+					struct cpumask *cpus)
+{
+	return -ENOTSUPP;
+}
+
+static inline void dev_pm_opp_of_unregister_em(struct device *dev)
 {
 }
Hi Rafael,
On 5/27/20 10:58 AM, Lukasz Luba wrote:
> Hi all,
>
> [...]
Could you take the patch set via your linux-pm?
Regards, Lukasz
On Fri, May 29, 2020 at 5:01 PM Lukasz Luba lukasz.luba@arm.com wrote:
Could you take the patch set via your linux-pm?
I can do that, but I didn't realize that it was targeted at me, so I need some more time to review the patches.
Thanks!
On 5/29/20 5:18 PM, Rafael J. Wysocki wrote:
I can do that, but I didn't realize that it was targeted at me, so I need some more time to review the patches.
Thanks!
No worries. Thank you for your time!
Regards, Lukasz
Hi Rafael,
I can do that, but I didn't realize that it was targeted at me, so I need some more time to review the patches.
Thanks!
Gentle ping. I've resent patch 4/8 [1], which addresses your comments as well as a remark regarding the static analysis conducted by Dan [2].
Do you need more time for this patch set? If it would help, I am happy to rebase the series; just point me to the desired branch.
Regards, Lukasz
[1] https://lore.kernel.org/lkml/20200610101223.7152-1-lukasz.luba@arm.com/ [2] https://lore.kernel.org/lkml/da0debe1-73da-33f1-c24e-154c2123c522@arm.com/