From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Continuing the identically named thread. The first six patches are i915-specific, so please focus mostly on the last one, which discusses the common options for DRM drivers.
I haven't updated intel_gpu_top to use this yet, so I can't report any performance numbers.
Tvrtko Ursulin (7):
  drm/i915: Explicitly track DRM clients
  drm/i915: Update client name on context create
  drm/i915: Make GEM contexts track DRM clients
  drm/i915: Track runtime spent in closed and unreachable GEM contexts
  drm/i915: Track all user contexts per client
  drm/i915: Track context current active time
  drm/i915: Expose client engine utilisation via fdinfo
 drivers/gpu/drm/i915/Makefile                 |   5 +-
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  61 ++++-
 .../gpu/drm/i915/gem/i915_gem_context_types.h |  16 +-
 drivers/gpu/drm/i915/gt/intel_context.c       |  27 +-
 drivers/gpu/drm/i915/gt/intel_context.h       |  15 +-
 drivers/gpu/drm/i915/gt/intel_context_types.h |  24 +-
 .../drm/i915/gt/intel_execlists_submission.c  |  23 +-
 .../gpu/drm/i915/gt/intel_gt_clock_utils.c    |   4 +
 drivers/gpu/drm/i915/gt/intel_lrc.c           |  27 +-
 drivers/gpu/drm/i915/gt/intel_lrc.h           |  24 ++
 drivers/gpu/drm/i915/gt/selftest_lrc.c        |  10 +-
 drivers/gpu/drm/i915/i915_drm_client.c        | 238 ++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h        | 107 ++++++++
 drivers/gpu/drm/i915/i915_drv.c               |   9 +
 drivers/gpu/drm/i915/i915_drv.h               |   5 +
 drivers/gpu/drm/i915/i915_gem.c               |  21 +-
 drivers/gpu/drm/i915/i915_gpu_error.c         |  31 +--
 drivers/gpu/drm/i915/i915_gpu_error.h         |   2 +-
 18 files changed, 568 insertions(+), 81 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h
From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Tracking DRM clients more explicitly will allow later patches to accumulate past and current GPU usage in a centralised place and also consolidate access to owning task pid/name.
A unique client id is also assigned to distinguish between (and consolidate) multiple file descriptors owned by the same process.
v2 (Chris Wilson):
 * Enclose new members into dedicated structs.
 * Protect against failed sysfs registration.
v3: * sysfs_attr_init.
v4: * Fix for internal clients.
v5:
 * Use cyclic ida for client id. (Chris)
 * Do not leak pid reference. (Chris)
 * Tidy code with some locals.
v6:
 * Use xa_alloc_cyclic to simplify locking. (Chris)
 * No need to unregister individual sysfs files. (Chris)
 * Rebase on top of fpriv kref.
 * Track client closed status and reflect in sysfs.
v7: * Make drm_client more standalone concept.
v8:
 * Simplify sysfs show. (Chris)
 * Always track name and pid.
v9: * Fix cyclic id assignment.
v10:
 * No need for a mutex around xa_alloc_cyclic.
 * Refactor sysfs into own function.
 * Unregister sysfs before freeing pid and name.
 * Move clients setup into own function.
v11: * Call clients init directly from driver init. (Chris)
v12: * Do not fail client add on id wrap. (Maciej)
v13 (Lucas): Rebase.
v14: * Dropped sysfs bits.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> # v11
Reviewed-by: Aravind Iddamsetty <aravind.iddamsetty@intel.com> # v11
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/Makefile          |   5 +-
 drivers/gpu/drm/i915/i915_drm_client.c | 113 +++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h |  61 +++++++++++++
 drivers/gpu/drm/i915/i915_drv.c        |   6 ++
 drivers/gpu/drm/i915/i915_drv.h        |   5 ++
 drivers/gpu/drm/i915/i915_gem.c        |  21 ++++-
 6 files changed, 206 insertions(+), 5 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h
diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile index 6947495bf34b..f3f5c4571623 100644 --- a/drivers/gpu/drm/i915/Makefile +++ b/drivers/gpu/drm/i915/Makefile @@ -33,8 +33,9 @@ subdir-ccflags-y += -I$(srctree)/$(src) # Please keep these build lists sorted!
# core driver code -i915-y += i915_drv.o \ - i915_config.o \ +i915-y += i915_config.o \ + i915_drm_client.o \ + i915_drv.o \ i915_irq.o \ i915_getparam.o \ i915_mitigations.o \ diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c new file mode 100644 index 000000000000..83080d9836b0 --- /dev/null +++ b/drivers/gpu/drm/i915/i915_drm_client.c @@ -0,0 +1,113 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2020 Intel Corporation + */ + +#include <linux/kernel.h> +#include <linux/slab.h> +#include <linux/types.h> + +#include "i915_drm_client.h" +#include "i915_gem.h" +#include "i915_utils.h" + +void i915_drm_clients_init(struct i915_drm_clients *clients, + struct drm_i915_private *i915) +{ + clients->i915 = i915; + + clients->next_id = 0; + xa_init_flags(&clients->xarray, XA_FLAGS_ALLOC); +} + +static int +__i915_drm_client_register(struct i915_drm_client *client, + struct task_struct *task) +{ + char *name; + + name = kstrdup(task->comm, GFP_KERNEL); + if (!name) + return -ENOMEM; + + client->pid = get_task_pid(task, PIDTYPE_PID); + client->name = name; + + return 0; +} + +static void __i915_drm_client_unregister(struct i915_drm_client *client) +{ + put_pid(fetch_and_zero(&client->pid)); + kfree(fetch_and_zero(&client->name)); +} + +static void __rcu_i915_drm_client_free(struct work_struct *wrk) +{ + struct i915_drm_client *client = + container_of(wrk, typeof(*client), rcu.work); + + xa_erase(&client->clients->xarray, client->id); + + __i915_drm_client_unregister(client); + + kfree(client); +} + +struct i915_drm_client * +i915_drm_client_add(struct i915_drm_clients *clients, struct task_struct *task) +{ + struct i915_drm_client *client; + int ret; + + client = kzalloc(sizeof(*client), GFP_KERNEL); + if (!client) + return ERR_PTR(-ENOMEM); + + kref_init(&client->kref); + client->clients = clients; + INIT_RCU_WORK(&client->rcu, __rcu_i915_drm_client_free); + + ret = xa_alloc_cyclic(&clients->xarray, &client->id, client, + xa_limit_32b, &clients->next_id, GFP_KERNEL); + if (ret < 0) + goto err_id; + + ret = __i915_drm_client_register(client, task); + if (ret) + goto err_register; + + return client; + +err_register: + xa_erase(&clients->xarray, client->id); +err_id: + kfree(client); + + return ERR_PTR(ret); +} + +void __i915_drm_client_free(struct kref *kref) +{ + struct i915_drm_client *client = + container_of(kref, typeof(*client), kref); + + queue_rcu_work(system_wq, &client->rcu); +} + +void i915_drm_client_close(struct i915_drm_client *client) +{ + GEM_BUG_ON(READ_ONCE(client->closed)); + WRITE_ONCE(client->closed, true); + i915_drm_client_put(client); +} + +void i915_drm_clients_fini(struct i915_drm_clients *clients) +{ + while (!xa_empty(&clients->xarray)) { + rcu_barrier(); + flush_workqueue(system_wq); + } + + xa_destroy(&clients->xarray); +} diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h new file mode 100644 index 000000000000..396f1e336b3f --- /dev/null +++ b/drivers/gpu/drm/i915/i915_drm_client.h @@ -0,0 +1,61 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2020 Intel Corporation + */ + +#ifndef __I915_DRM_CLIENT_H__ +#define __I915_DRM_CLIENT_H__ + +#include <linux/kref.h> +#include <linux/pid.h> +#include <linux/rcupdate.h> +#include <linux/sched.h> +#include <linux/xarray.h> + +struct drm_i915_private; + +struct i915_drm_clients { + struct drm_i915_private *i915; + + struct xarray xarray; + u32 next_id; +}; + +struct i915_drm_client { + struct kref kref; + + struct rcu_work rcu; 
+ + unsigned int id; + struct pid *pid; + char *name; + bool closed; + + struct i915_drm_clients *clients; +}; + +void i915_drm_clients_init(struct i915_drm_clients *clients, + struct drm_i915_private *i915); + +static inline struct i915_drm_client * +i915_drm_client_get(struct i915_drm_client *client) +{ + kref_get(&client->kref); + return client; +} + +void __i915_drm_client_free(struct kref *kref); + +static inline void i915_drm_client_put(struct i915_drm_client *client) +{ + kref_put(&client->kref, __i915_drm_client_free); +} + +void i915_drm_client_close(struct i915_drm_client *client); + +struct i915_drm_client *i915_drm_client_add(struct i915_drm_clients *clients, + struct task_struct *task); + +void i915_drm_clients_fini(struct i915_drm_clients *clients); + +#endif /* !__I915_DRM_CLIENT_H__ */ diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c index 2f06bb7b3ed2..33eb7b52b58b 100644 --- a/drivers/gpu/drm/i915/i915_drv.c +++ b/drivers/gpu/drm/i915/i915_drv.c @@ -69,6 +69,7 @@ #include "gt/intel_rc6.h"
#include "i915_debugfs.h" +#include "i915_drm_client.h" #include "i915_drv.h" #include "i915_ioc32.h" #include "i915_irq.h" @@ -339,6 +340,8 @@ static int i915_driver_early_probe(struct drm_i915_private *dev_priv)
intel_gt_init_early(&dev_priv->gt, dev_priv);
+ i915_drm_clients_init(&dev_priv->clients, dev_priv); + i915_gem_init_early(dev_priv);
/* This must be called before any calls to HAS_PCH_* */ @@ -358,6 +361,7 @@ static int i915_driver_early_probe(struct drm_i915_private *dev_priv)
err_gem: i915_gem_cleanup_early(dev_priv); + i915_drm_clients_fini(&dev_priv->clients); intel_gt_driver_late_release(&dev_priv->gt); vlv_suspend_cleanup(dev_priv); err_workqueues: @@ -375,6 +379,7 @@ static void i915_driver_late_release(struct drm_i915_private *dev_priv) intel_irq_fini(dev_priv); intel_power_domains_cleanup(dev_priv); i915_gem_cleanup_early(dev_priv); + i915_drm_clients_fini(&dev_priv->clients); intel_gt_driver_late_release(&dev_priv->gt); vlv_suspend_cleanup(dev_priv); i915_workqueues_cleanup(dev_priv); @@ -984,6 +989,7 @@ static void i915_driver_postclose(struct drm_device *dev, struct drm_file *file) struct drm_i915_file_private *file_priv = file->driver_priv;
i915_gem_context_close(file); + i915_drm_client_close(file_priv->client);
kfree_rcu(file_priv, rcu);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index 9cb02618ba15..f0bbe8d3195e 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -94,6 +94,7 @@ #include "intel_wakeref.h" #include "intel_wopcm.h"
+#include "i915_drm_client.h" #include "i915_gem.h" #include "i915_gem_gtt.h" #include "i915_gpu_error.h" @@ -220,6 +221,8 @@ struct drm_i915_file_private { /** ban_score: Accumulated score of all ctx bans and fast hangs. */ atomic_t ban_score; unsigned long hang_timestamp; + + struct i915_drm_client *client; };
/* Interface history: @@ -1161,6 +1164,8 @@ struct drm_i915_private {
struct i915_pmu pmu;
+ struct i915_drm_clients clients; + struct i915_hdcp_comp_master *hdcp_master; bool hdcp_comp_added;
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index cffd7f4f87dc..1445c7a7ef76 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -1181,25 +1181,40 @@ void i915_gem_cleanup_early(struct drm_i915_private *dev_priv) int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file) { struct drm_i915_file_private *file_priv; - int ret; + struct i915_drm_client *client; + int ret = -ENOMEM;
DRM_DEBUG("\n");
file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL); if (!file_priv) - return -ENOMEM; + goto err_alloc; + + client = i915_drm_client_add(&i915->clients, current); + if (IS_ERR(client)) { + ret = PTR_ERR(client); + goto err_client; + }
file->driver_priv = file_priv; file_priv->dev_priv = i915; file_priv->file = file; + file_priv->client = client;
file_priv->bsd_engine = -1; file_priv->hang_timestamp = jiffies;
ret = i915_gem_context_open(i915, file); if (ret) - kfree(file_priv); + goto err_context; + + return 0;
+err_context: + i915_drm_client_close(client); +err_client: + kfree(file_priv); +err_alloc: return ret; }
From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Some clients have the DRM fd passed to them over a socket by the X server.
Grab the real client and pid when they create their first context and update the exposed data for more useful enumeration.
To enable lockless access to the client name and pid from the following patches, we also make these fields RCU protected. This covers the asynchronous code paths: contexts which remain after the client has exited, and queries of the client name and pid which run in parallel with updates caused by context creation.
v2: * Do not leak the pid reference and borrow context idr_lock. (Chris)
v3: * More avoiding leaks. (Chris)
v4:
 * Move update completely to drm client. (Chris)
 * Do not lose previous client data on failure to re-register and
   simplify update to only touch what it needs.
v5: * Reuse ext_data local. (Chris)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Aravind Iddamsetty <aravind.iddamsetty@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c |  5 ++
 drivers/gpu/drm/i915/i915_drm_client.c      | 66 +++++++++++++++++++--
 drivers/gpu/drm/i915/i915_drm_client.h      | 34 ++++++++++-
 3 files changed, 97 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c index 188dee13e017..e5f8d94666e8 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c @@ -76,6 +76,7 @@ #include "gt/intel_gpu_commands.h" #include "gt/intel_ring.h"
+#include "i915_drm_client.h" #include "i915_gem_context.h" #include "i915_globals.h" #include "i915_trace.h" @@ -2321,6 +2322,10 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data, return -EIO; }
+ ret = i915_drm_client_update(ext_data.fpriv->client, current); + if (ret) + return ret; + ext_data.ctx = i915_gem_create_context(i915, args->flags); if (IS_ERR(ext_data.ctx)) return PTR_ERR(ext_data.ctx); diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c index 83080d9836b0..0b7a70ed61d0 100644 --- a/drivers/gpu/drm/i915/i915_drm_client.c +++ b/drivers/gpu/drm/i915/i915_drm_client.c @@ -7,7 +7,10 @@ #include <linux/slab.h> #include <linux/types.h>
+#include <drm/drm_print.h> + #include "i915_drm_client.h" +#include "i915_drv.h" #include "i915_gem.h" #include "i915_utils.h"
@@ -20,26 +23,57 @@ void i915_drm_clients_init(struct i915_drm_clients *clients, xa_init_flags(&clients->xarray, XA_FLAGS_ALLOC); }
+static struct i915_drm_client_name *get_name(struct i915_drm_client *client, + struct task_struct *task) +{ + struct i915_drm_client_name *name; + int len = strlen(task->comm); + + name = kmalloc(struct_size(name, name, len + 1), GFP_KERNEL); + if (!name) + return NULL; + + init_rcu_head(&name->rcu); + name->client = client; + name->pid = get_task_pid(task, PIDTYPE_PID); + memcpy(name->name, task->comm, len + 1); + + return name; +} + +static void free_name(struct rcu_head *rcu) +{ + struct i915_drm_client_name *name = + container_of(rcu, typeof(*name), rcu); + + put_pid(name->pid); + kfree(name); +} + static int __i915_drm_client_register(struct i915_drm_client *client, struct task_struct *task) { - char *name; + struct i915_drm_client_name *name;
- name = kstrdup(task->comm, GFP_KERNEL); + name = get_name(client, task); if (!name) return -ENOMEM;
- client->pid = get_task_pid(task, PIDTYPE_PID); - client->name = name; + RCU_INIT_POINTER(client->name, name);
return 0; }
static void __i915_drm_client_unregister(struct i915_drm_client *client) { - put_pid(fetch_and_zero(&client->pid)); - kfree(fetch_and_zero(&client->name)); + struct i915_drm_client_name *name; + + mutex_lock(&client->update_lock); + name = rcu_replace_pointer(client->name, NULL, true); + mutex_unlock(&client->update_lock); + + call_rcu(&name->rcu, free_name); }
static void __rcu_i915_drm_client_free(struct work_struct *wrk) @@ -65,6 +99,7 @@ i915_drm_client_add(struct i915_drm_clients *clients, struct task_struct *task) return ERR_PTR(-ENOMEM);
kref_init(&client->kref); + mutex_init(&client->update_lock); client->clients = clients; INIT_RCU_WORK(&client->rcu, __rcu_i915_drm_client_free);
@@ -102,6 +137,25 @@ void i915_drm_client_close(struct i915_drm_client *client) i915_drm_client_put(client); }
+int +i915_drm_client_update(struct i915_drm_client *client, + struct task_struct *task) +{ + struct i915_drm_client_name *name; + + name = get_name(client, task); + if (!name) + return -ENOMEM; + + mutex_lock(&client->update_lock); + if (name->pid != rcu_dereference_protected(client->name, true)->pid) + name = rcu_replace_pointer(client->name, name, true); + mutex_unlock(&client->update_lock); + + call_rcu(&name->rcu, free_name); + return 0; +} + void i915_drm_clients_fini(struct i915_drm_clients *clients) { while (!xa_empty(&clients->xarray)) { diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h index 396f1e336b3f..6d55f77a08f1 100644 --- a/drivers/gpu/drm/i915/i915_drm_client.h +++ b/drivers/gpu/drm/i915/i915_drm_client.h @@ -7,6 +7,7 @@ #define __I915_DRM_CLIENT_H__
#include <linux/kref.h> +#include <linux/mutex.h> #include <linux/pid.h> #include <linux/rcupdate.h> #include <linux/sched.h> @@ -21,14 +22,22 @@ struct i915_drm_clients { u32 next_id; };
+struct i915_drm_client_name { + struct rcu_head rcu; + struct i915_drm_client *client; + struct pid *pid; + char name[]; +}; + struct i915_drm_client { struct kref kref;
struct rcu_work rcu;
+ struct mutex update_lock; /* Serializes name and pid updates. */ + unsigned int id; - struct pid *pid; - char *name; + struct i915_drm_client_name __rcu *name; bool closed;
struct i915_drm_clients *clients; @@ -56,6 +65,27 @@ void i915_drm_client_close(struct i915_drm_client *client); struct i915_drm_client *i915_drm_client_add(struct i915_drm_clients *clients, struct task_struct *task);
+int i915_drm_client_update(struct i915_drm_client *client, + struct task_struct *task); + +static inline const struct i915_drm_client_name * +__i915_drm_client_name(const struct i915_drm_client *client) +{ + return rcu_dereference(client->name); +} + +static inline const char * +i915_drm_client_name(const struct i915_drm_client *client) +{ + return __i915_drm_client_name(client)->name; +} + +static inline struct pid * +i915_drm_client_pid(const struct i915_drm_client *client) +{ + return __i915_drm_client_name(client)->pid; +} + void i915_drm_clients_fini(struct i915_drm_clients *clients);
#endif /* !__I915_DRM_CLIENT_H__ */
From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
If we make GEM contexts keep a reference to i915_drm_client for the whole of their lifetime, we can consolidate the current task pid and name usage by getting it from the client.
v2: Don't bother supporting selftests contexts from debugfs. (Chris)
v3 (Lucas): Finish constructing ctx before adding it to the list.
v4 (Ram): Rebase.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Aravind Iddamsetty <aravind.iddamsetty@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c   | 20 ++++++++++++-----
 .../gpu/drm/i915/gem/i915_gem_context_types.h | 13 +++--------
 drivers/gpu/drm/i915/i915_gpu_error.c         | 22 +++++++++++--------
 3 files changed, 30 insertions(+), 25 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c index e5f8d94666e8..5ea42d5b0b1a 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c @@ -345,13 +345,14 @@ void i915_gem_context_release(struct kref *ref) trace_i915_context_free(ctx); GEM_BUG_ON(!i915_gem_context_is_closed(ctx));
- mutex_destroy(&ctx->engines_mutex); - mutex_destroy(&ctx->lut_mutex); + if (ctx->client) + i915_drm_client_put(ctx->client);
if (ctx->timeline) intel_timeline_put(ctx->timeline);
- put_pid(ctx->pid); + mutex_destroy(&ctx->engines_mutex); + mutex_destroy(&ctx->lut_mutex); mutex_destroy(&ctx->mutex);
kfree_rcu(ctx, rcu); @@ -895,6 +896,7 @@ static int gem_context_register(struct i915_gem_context *ctx, u32 *id) { struct drm_i915_private *i915 = ctx->i915; + struct i915_drm_client *client; struct i915_address_space *vm; int ret;
@@ -906,15 +908,21 @@ static int gem_context_register(struct i915_gem_context *ctx, WRITE_ONCE(vm->file, fpriv); /* XXX */ mutex_unlock(&ctx->mutex);
- ctx->pid = get_task_pid(current, PIDTYPE_PID); + client = i915_drm_client_get(fpriv->client); + + rcu_read_lock(); snprintf(ctx->name, sizeof(ctx->name), "%s[%d]", - current->comm, pid_nr(ctx->pid)); + i915_drm_client_name(client), + pid_nr(i915_drm_client_pid(client))); + rcu_read_unlock();
/* And finally expose ourselves to userspace via the idr */ ret = xa_alloc(&fpriv->context_xa, id, ctx, xa_limit_32b, GFP_KERNEL); if (ret) goto err_pid;
+ ctx->client = client; + spin_lock(&i915->gem.contexts.lock); list_add_tail(&ctx->link, &i915->gem.contexts.list); spin_unlock(&i915->gem.contexts.lock); @@ -922,7 +930,7 @@ static int gem_context_register(struct i915_gem_context *ctx, return 0;
err_pid: - put_pid(fetch_and_zero(&ctx->pid)); + i915_drm_client_put(client); return ret; }
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h index 340473aa70de..eb098f2896c5 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h @@ -96,19 +96,12 @@ struct i915_gem_context { */ struct i915_address_space __rcu *vm;
- /** - * @pid: process id of creator - * - * Note that who created the context may not be the principle user, - * as the context may be shared across a local socket. However, - * that should only affect the default context, all contexts created - * explicitly by the client are expected to be isolated. - */ - struct pid *pid; - /** link: place with &drm_i915_private.context_list */ struct list_head link;
+ /** client: struct i915_drm_client */ + struct i915_drm_client *client; + /** * @ref: reference count * diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c index 8b964e355cb5..f5dfc15f5f76 100644 --- a/drivers/gpu/drm/i915/i915_gpu_error.c +++ b/drivers/gpu/drm/i915/i915_gpu_error.c @@ -1235,7 +1235,9 @@ static void record_request(const struct i915_request *request,
ctx = rcu_dereference(request->context->gem_context); if (ctx) - erq->pid = pid_nr(ctx->pid); + erq->pid = I915_SELFTEST_ONLY(!ctx->client) ? + 0 : + pid_nr(i915_drm_client_pid(ctx->client)); } rcu_read_unlock(); } @@ -1256,23 +1258,25 @@ static bool record_context(struct i915_gem_context_coredump *e, const struct i915_request *rq) { struct i915_gem_context *ctx; - struct task_struct *task; bool simulated;
rcu_read_lock(); + ctx = rcu_dereference(rq->context->gem_context); if (ctx && !kref_get_unless_zero(&ctx->ref)) ctx = NULL; - rcu_read_unlock(); - if (!ctx) + if (!ctx) { + rcu_read_unlock(); return true; + }
- rcu_read_lock(); - task = pid_task(ctx->pid, PIDTYPE_PID); - if (task) { - strcpy(e->comm, task->comm); - e->pid = task->pid; + if (I915_SELFTEST_ONLY(!ctx->client)) { + strcpy(e->comm, "[kernel]"); + } else { + strcpy(e->comm, i915_drm_client_name(ctx->client)); + e->pid = pid_nr(i915_drm_client_pid(ctx->client)); } + rcu_read_unlock();
e->sched_attr = ctx->sched;
From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
As contexts are abandoned we want to remember how much GPU time they used (per class) so later we can use it for smarter purposes.

As GEM contexts are closed we want to have the DRM client remember how much GPU time they used (per class) so later we can use it for smarter purposes.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Aravind Iddamsetty <aravind.iddamsetty@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c | 24 +++++++++++++++++++--
 drivers/gpu/drm/i915/i915_drm_client.h      |  7 ++++++
 2 files changed, 29 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c index 5ea42d5b0b1a..b8d8366a2cce 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c @@ -262,23 +262,43 @@ static void free_engines_rcu(struct rcu_head *rcu) free_engines(engines); }
+static void accumulate_runtime(struct i915_drm_client *client, + struct i915_gem_engines *engines) +{ + struct i915_gem_engines_iter it; + struct intel_context *ce; + + if (!client) + return; + + /* Transfer accumulated runtime to the parent GEM context. */ + for_each_gem_engine(ce, engines, it) { + unsigned int class = ce->engine->uabi_class; + + GEM_BUG_ON(class >= ARRAY_SIZE(client->past_runtime)); + atomic64_add(intel_context_get_total_runtime_ns(ce), + &client->past_runtime[class]); + } +} + static int __i915_sw_fence_call engines_notify(struct i915_sw_fence *fence, enum i915_sw_fence_notify state) { struct i915_gem_engines *engines = container_of(fence, typeof(*engines), fence); + struct i915_gem_context *ctx = engines->ctx;
switch (state) { case FENCE_COMPLETE: if (!list_empty(&engines->link)) { - struct i915_gem_context *ctx = engines->ctx; unsigned long flags;
spin_lock_irqsave(&ctx->stale.lock, flags); list_del(&engines->link); spin_unlock_irqrestore(&ctx->stale.lock, flags); } - i915_gem_context_put(engines->ctx); + accumulate_runtime(ctx->client, engines); + i915_gem_context_put(ctx); break;
case FENCE_FREE: diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h index 6d55f77a08f1..db82180f5859 100644 --- a/drivers/gpu/drm/i915/i915_drm_client.h +++ b/drivers/gpu/drm/i915/i915_drm_client.h @@ -13,6 +13,8 @@ #include <linux/sched.h> #include <linux/xarray.h>
+#include "gt/intel_engine_types.h" + struct drm_i915_private;
struct i915_drm_clients { @@ -41,6 +43,11 @@ struct i915_drm_client { bool closed;
struct i915_drm_clients *clients; + + /** + * @past_runtime: Accumulation of pphwsp runtimes from closed contexts. + */ + atomic64_t past_runtime[MAX_ENGINE_CLASS + 1]; };
void i915_drm_clients_init(struct i915_drm_clients *clients,
From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
We soon want to start answering questions like: how much GPU time are the contexts belonging to a client which has exited still using?

To enable this we start tracking all contexts belonging to a client on a separate list.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Aravind Iddamsetty <aravind.iddamsetty@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c       | 12 ++++++++++++
 drivers/gpu/drm/i915/gem/i915_gem_context_types.h |  3 +++
 drivers/gpu/drm/i915/i915_drm_client.c            |  3 +++
 drivers/gpu/drm/i915/i915_drm_client.h            |  5 +++++
 4 files changed, 23 insertions(+)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c index b8d8366a2cce..1595a608de92 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c @@ -573,6 +573,7 @@ static void set_closed_name(struct i915_gem_context *ctx) static void context_close(struct i915_gem_context *ctx) { struct i915_address_space *vm; + struct i915_drm_client *client;
/* Flush any concurrent set_engines() */ mutex_lock(&ctx->engines_mutex); @@ -601,6 +602,13 @@ static void context_close(struct i915_gem_context *ctx) list_del(&ctx->link); spin_unlock(&ctx->i915->gem.contexts.lock);
+ client = ctx->client; + if (client) { + spin_lock(&client->ctx_lock); + list_del_rcu(&ctx->client_link); + spin_unlock(&client->ctx_lock); + } + mutex_unlock(&ctx->mutex);
/* @@ -943,6 +951,10 @@ static int gem_context_register(struct i915_gem_context *ctx,
ctx->client = client;
+ spin_lock(&client->ctx_lock); + list_add_tail_rcu(&ctx->client_link, &client->ctx_list); + spin_unlock(&client->ctx_lock); + spin_lock(&i915->gem.contexts.lock); list_add_tail(&ctx->link, &i915->gem.contexts.list); spin_unlock(&i915->gem.contexts.lock); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h index eb098f2896c5..8ea3fe3e7414 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h @@ -102,6 +102,9 @@ struct i915_gem_context { /** client: struct i915_drm_client */ struct i915_drm_client *client;
+ /** link: &drm_client.context_list */ + struct list_head client_link; + /** * @ref: reference count * diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c index 0b7a70ed61d0..1e5db7753276 100644 --- a/drivers/gpu/drm/i915/i915_drm_client.c +++ b/drivers/gpu/drm/i915/i915_drm_client.c @@ -100,6 +100,9 @@ i915_drm_client_add(struct i915_drm_clients *clients, struct task_struct *task)
kref_init(&client->kref); mutex_init(&client->update_lock); + spin_lock_init(&client->ctx_lock); + INIT_LIST_HEAD(&client->ctx_list); + client->clients = clients; INIT_RCU_WORK(&client->rcu, __rcu_i915_drm_client_free);
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h index db82180f5859..b2b69d6985e4 100644 --- a/drivers/gpu/drm/i915/i915_drm_client.h +++ b/drivers/gpu/drm/i915/i915_drm_client.h @@ -7,10 +7,12 @@ #define __I915_DRM_CLIENT_H__
#include <linux/kref.h> +#include <linux/list.h> #include <linux/mutex.h> #include <linux/pid.h> #include <linux/rcupdate.h> #include <linux/sched.h> +#include <linux/spinlock.h> #include <linux/xarray.h>
#include "gt/intel_engine_types.h" @@ -42,6 +44,9 @@ struct i915_drm_client { struct i915_drm_client_name __rcu *name; bool closed;
+ spinlock_t ctx_lock; /* For add/remove from ctx_list. */ + struct list_head ctx_list; /* List of contexts belonging to client. */ + struct i915_drm_clients *clients;
/**
From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Track context active (on hardware) status together with the start timestamp.
This will be used to provide better granularity of context runtime reporting in conjunction with already tracked pphwsp accumulated runtime.
The latter is only updated on context save, so it does not give us visibility into any currently executing work.
As part of the patch the existing runtime tracking data is moved under the new ce->stats member and updated under the seqlock. This provides the ability to atomically read out accumulated plus active runtime.
v2: * Rename and make __intel_context_get_active_time unlocked.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Aravind Iddamsetty <aravind.iddamsetty@intel.com> # v1
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/intel_context.c       | 27 ++++++++++++++++++-
 drivers/gpu/drm/i915/gt/intel_context.h       | 15 ++++-------
 drivers/gpu/drm/i915/gt/intel_context_types.h | 24 +++++++++++------
 .../drm/i915/gt/intel_execlists_submission.c  | 23 ++++++++++++----
 .../gpu/drm/i915/gt/intel_gt_clock_utils.c    |  4 +++
 drivers/gpu/drm/i915/gt/intel_lrc.c           | 27 ++++++++++---------
 drivers/gpu/drm/i915/gt/intel_lrc.h           | 24 +++++++++++++++++
 drivers/gpu/drm/i915/gt/selftest_lrc.c        | 10 +++----
 drivers/gpu/drm/i915/i915_gpu_error.c         |  9 +++----
 drivers/gpu/drm/i915/i915_gpu_error.h         |  2 +-
 10 files changed, 116 insertions(+), 49 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c index 4033184f13b9..bc021244c3b2 100644 --- a/drivers/gpu/drm/i915/gt/intel_context.c +++ b/drivers/gpu/drm/i915/gt/intel_context.c @@ -373,7 +373,7 @@ intel_context_init(struct intel_context *ce, struct intel_engine_cs *engine) ce->sseu = engine->sseu; ce->ring = __intel_context_ring_size(SZ_4K);
- ewma_runtime_init(&ce->runtime.avg); + ewma_runtime_init(&ce->stats.runtime.avg);
ce->vm = i915_vm_get(engine->gt->vm);
@@ -499,6 +499,31 @@ struct i915_request *intel_context_create_request(struct intel_context *ce) return rq; }
+u64 intel_context_get_total_runtime_ns(const struct intel_context *ce) +{ + u64 total, active; + + total = ce->stats.runtime.total; + if (ce->ops->flags & COPS_RUNTIME_CYCLES) + total *= ce->engine->gt->clock_period_ns; + + active = READ_ONCE(ce->stats.active); + if (active) + active = intel_context_clock() - active; + + return total + active; +} + +u64 intel_context_get_avg_runtime_ns(struct intel_context *ce) +{ + u64 avg = ewma_runtime_read(&ce->stats.runtime.avg); + + if (ce->ops->flags & COPS_RUNTIME_CYCLES) + avg *= ce->engine->gt->clock_period_ns; + + return avg; +} + #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) #include "selftest_context.c" #endif diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h index f83a73a2b39f..a9125768b1b4 100644 --- a/drivers/gpu/drm/i915/gt/intel_context.h +++ b/drivers/gpu/drm/i915/gt/intel_context.h @@ -250,18 +250,13 @@ intel_context_clear_nopreempt(struct intel_context *ce) clear_bit(CONTEXT_NOPREEMPT, &ce->flags); }
-static inline u64 intel_context_get_total_runtime_ns(struct intel_context *ce) -{ - const u32 period = ce->engine->gt->clock_period_ns; - - return READ_ONCE(ce->runtime.total) * period; -} +u64 intel_context_get_total_runtime_ns(const struct intel_context *ce); +u64 intel_context_get_avg_runtime_ns(struct intel_context *ce);
-static inline u64 intel_context_get_avg_runtime_ns(struct intel_context *ce) +static inline u64 intel_context_clock(void) { - const u32 period = ce->engine->gt->clock_period_ns; - - return mul_u32_u32(ewma_runtime_read(&ce->runtime.avg), period); + /* As we mix CS cycles with CPU clocks, use the raw monotonic clock. */ + return ktime_get_raw_fast_ns(); }
#endif /* __INTEL_CONTEXT_H__ */ diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h index ed8c447a7346..65a5730a4f5b 100644 --- a/drivers/gpu/drm/i915/gt/intel_context_types.h +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h @@ -33,6 +33,9 @@ struct intel_context_ops { #define COPS_HAS_INFLIGHT_BIT 0 #define COPS_HAS_INFLIGHT BIT(COPS_HAS_INFLIGHT_BIT)
+#define COPS_RUNTIME_CYCLES_BIT 1 +#define COPS_RUNTIME_CYCLES BIT(COPS_RUNTIME_CYCLES_BIT) + int (*alloc)(struct intel_context *ce);
int (*pre_pin)(struct intel_context *ce, struct i915_gem_ww_ctx *ww, void **vaddr); @@ -110,14 +113,19 @@ struct intel_context { } lrc; u32 tag; /* cookie passed to HW to track this context on submission */
- /* Time on GPU as tracked by the hw. */ - struct { - struct ewma_runtime avg; - u64 total; - u32 last; - I915_SELFTEST_DECLARE(u32 num_underflow); - I915_SELFTEST_DECLARE(u32 max_underflow); - } runtime; + /** stats: Context GPU engine busyness tracking. */ + struct intel_context_stats { + u64 active; + + /* Time on GPU as tracked by the hw. */ + struct { + struct ewma_runtime avg; + u64 total; + u32 last; + I915_SELFTEST_DECLARE(u32 num_underflow); + I915_SELFTEST_DECLARE(u32 max_underflow); + } runtime; + } stats;
unsigned int active_count; /* protected by timeline->mutex */
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c index de124870af44..18d9c1d96d36 100644 --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c @@ -605,8 +605,6 @@ static void __execlists_schedule_out(struct i915_request * const rq, GEM_BUG_ON(test_bit(ccid - 1, &engine->context_tag)); __set_bit(ccid - 1, &engine->context_tag); } - - lrc_update_runtime(ce); intel_engine_context_out(engine); execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT); if (engine->fw_domain && !--engine->fw_active) @@ -1955,8 +1953,23 @@ process_csb(struct intel_engine_cs *engine, struct i915_request **inactive) * and merits a fresh timeslice. We reinstall the timer after * inspecting the queue to see if we need to resumbit. */ - if (*prev != *execlists->active) /* elide lite-restores */ + if (*prev != *execlists->active) { /* elide lite-restores */ + /* + * Note the inherent discrepancy between the HW runtime, + * recorded as part of the context switch, and the CPU + * adjustment for active contexts. We have to hope that + * the delay in processing the CS event is very small + * and consistent. It works to our advantage to have + * the CPU adjustment _undershoot_ (i.e. start later than) + * the CS timestamp so we never overreport the runtime + * and correct overselves later when updating from HW. + */ + if (*prev) + lrc_runtime_stop((*prev)->context); + if (*execlists->active) + lrc_runtime_start((*execlists->active)->context); new_timeslice(execlists); + }
return inactive; } @@ -2495,7 +2508,7 @@ static int execlists_context_alloc(struct intel_context *ce) }
static const struct intel_context_ops execlists_context_ops = { - .flags = COPS_HAS_INFLIGHT, + .flags = COPS_HAS_INFLIGHT | COPS_RUNTIME_CYCLES,
.alloc = execlists_context_alloc,
@@ -3401,7 +3414,7 @@ static void virtual_context_exit(struct intel_context *ce) }
static const struct intel_context_ops virtual_context_ops = { - .flags = COPS_HAS_INFLIGHT, + .flags = COPS_HAS_INFLIGHT | COPS_RUNTIME_CYCLES,
.alloc = virtual_context_alloc,
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_clock_utils.c b/drivers/gpu/drm/i915/gt/intel_gt_clock_utils.c index 582fcaee11aa..f8c79efb1a87 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_clock_utils.c +++ b/drivers/gpu/drm/i915/gt/intel_gt_clock_utils.c @@ -159,6 +159,10 @@ void intel_gt_init_clock_frequency(struct intel_gt *gt) if (gt->clock_frequency) gt->clock_period_ns = intel_gt_clock_interval_to_ns(gt, 1);
+ /* Icelake appears to use another fixed frequency for CTX_TIMESTAMP */ + if (IS_GEN(gt->i915, 11)) + gt->clock_period_ns = NSEC_PER_SEC / 13750000; + GT_TRACE(gt, "Using clock frequency: %dkHz, period: %dns, wrap: %lldms\n", gt->clock_frequency / 1000, diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c index aafe2a4df496..c145e4723279 100644 --- a/drivers/gpu/drm/i915/gt/intel_lrc.c +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c @@ -642,7 +642,7 @@ static void init_common_regs(u32 * const regs, CTX_CTRL_RS_CTX_ENABLE); regs[CTX_CONTEXT_CONTROL] = ctl;
- regs[CTX_TIMESTAMP] = ce->runtime.last; + regs[CTX_TIMESTAMP] = ce->stats.runtime.last; }
static void init_wa_bb_regs(u32 * const regs, @@ -1565,35 +1565,36 @@ void lrc_init_wa_ctx(struct intel_engine_cs *engine) } }
-static void st_update_runtime_underflow(struct intel_context *ce, s32 dt) +static void st_runtime_underflow(struct intel_context_stats *stats, s32 dt) { #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) - ce->runtime.num_underflow++; - ce->runtime.max_underflow = max_t(u32, ce->runtime.max_underflow, -dt); + stats->runtime.num_underflow++; + stats->runtime.max_underflow = + max_t(u32, stats->runtime.max_underflow, -dt); #endif }
void lrc_update_runtime(struct intel_context *ce) { + struct intel_context_stats *stats = &ce->stats; u32 old; s32 dt;
- if (intel_context_is_barrier(ce)) + old = stats->runtime.last; + stats->runtime.last = lrc_get_runtime(ce); + dt = stats->runtime.last - old; + if (!dt) return;
- old = ce->runtime.last; - ce->runtime.last = lrc_get_runtime(ce); - dt = ce->runtime.last - old; - if (unlikely(dt < 0)) { CE_TRACE(ce, "runtime underflow: last=%u, new=%u, delta=%d\n", - old, ce->runtime.last, dt); - st_update_runtime_underflow(ce, dt); + old, stats->runtime.last, dt); + st_runtime_underflow(stats, dt); return; }
- ewma_runtime_add(&ce->runtime.avg, dt); - ce->runtime.total += dt; + ewma_runtime_add(&stats->runtime.avg, dt); + stats->runtime.total += dt; }
#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.h b/drivers/gpu/drm/i915/gt/intel_lrc.h index 7f697845c4cf..8073674538d7 100644 --- a/drivers/gpu/drm/i915/gt/intel_lrc.h +++ b/drivers/gpu/drm/i915/gt/intel_lrc.h @@ -79,4 +79,28 @@ static inline u32 lrc_get_runtime(const struct intel_context *ce) return READ_ONCE(ce->lrc_reg_state[CTX_TIMESTAMP]); }
+static inline void lrc_runtime_start(struct intel_context *ce) +{ + struct intel_context_stats *stats = &ce->stats; + + if (intel_context_is_barrier(ce)) + return; + + if (stats->active) + return; + + WRITE_ONCE(stats->active, intel_context_clock()); +} + +static inline void lrc_runtime_stop(struct intel_context *ce) +{ + struct intel_context_stats *stats = &ce->stats; + + if (!stats->active) + return; + + lrc_update_runtime(ce); + WRITE_ONCE(stats->active, 0); +} + #endif /* __INTEL_LRC_H__ */ diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c index d8f6623524e8..d2f950a5d9b5 100644 --- a/drivers/gpu/drm/i915/gt/selftest_lrc.c +++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c @@ -1751,8 +1751,8 @@ static int __live_pphwsp_runtime(struct intel_engine_cs *engine) if (IS_ERR(ce)) return PTR_ERR(ce);
- ce->runtime.num_underflow = 0; - ce->runtime.max_underflow = 0; + ce->stats.runtime.num_underflow = 0; + ce->stats.runtime.max_underflow = 0;
do { unsigned int loop = 1024; @@ -1790,11 +1790,11 @@ static int __live_pphwsp_runtime(struct intel_engine_cs *engine) intel_context_get_avg_runtime_ns(ce));
err = 0; - if (ce->runtime.num_underflow) { + if (ce->stats.runtime.num_underflow) { pr_err("%s: pphwsp underflow %u time(s), max %u cycles!\n", engine->name, - ce->runtime.num_underflow, - ce->runtime.max_underflow); + ce->stats.runtime.num_underflow, + ce->stats.runtime.max_underflow); GEM_TRACE_DUMP(); err = -EOVERFLOW; } diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c index f5dfc15f5f76..67410dba6faf 100644 --- a/drivers/gpu/drm/i915/i915_gpu_error.c +++ b/drivers/gpu/drm/i915/i915_gpu_error.c @@ -484,13 +484,10 @@ static void error_print_context(struct drm_i915_error_state_buf *m, const char *header, const struct i915_gem_context_coredump *ctx) { - const u32 period = m->i915->gt.clock_period_ns; - err_printf(m, "%s%s[%d] prio %d, guilty %d active %d, runtime total %lluns, avg %lluns\n", header, ctx->comm, ctx->pid, ctx->sched_attr.priority, ctx->guilty, ctx->active, - ctx->total_runtime * period, - mul_u32_u32(ctx->avg_runtime, period)); + ctx->total_runtime, ctx->avg_runtime); }
static struct i915_vma_coredump * @@ -1283,8 +1280,8 @@ static bool record_context(struct i915_gem_context_coredump *e, e->guilty = atomic_read(&ctx->guilty_count); e->active = atomic_read(&ctx->active_count);
- e->total_runtime = rq->context->runtime.total; - e->avg_runtime = ewma_runtime_read(&rq->context->runtime.avg); + e->total_runtime = intel_context_get_total_runtime_ns(rq->context); + e->avg_runtime = intel_context_get_avg_runtime_ns(rq->context);
simulated = i915_gem_context_no_error_capture(ctx);
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.h b/drivers/gpu/drm/i915/i915_gpu_error.h index b98d8cdbe4f2..b11deb547672 100644 --- a/drivers/gpu/drm/i915/i915_gpu_error.h +++ b/drivers/gpu/drm/i915/i915_gpu_error.h @@ -90,7 +90,7 @@ struct intel_engine_coredump { char comm[TASK_COMM_LEN];
u64 total_runtime; - u32 avg_runtime; + u64 avg_runtime;
pid_t pid; int active;
From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Similar to AMD commit 874442541133 ("drm/amdgpu: Add show_fdinfo() interface"), using the infrastructure added in previous patches, we add basic client info and GPU engine utilisation for i915.
Example of the output:
pos:    0
flags:  0100002
mnt_id: 21
drm-driver:     i915
drm-pdev:       0000:00:02.0
drm-client-id:  7
drm-engine-render:      9288864723 ns
drm-engine-copy:        2035071108 ns
drm-engine-video:       0 ns
drm-engine-video-enhance:       0 ns
DRM related fields are appropriately prefixed for easy parsing and separation from generic fdinfo fields.
The idea is for some fields to become either fully or partially standardised in order to enable writing of generic top-like tools.
Initial proposal for fully standardised common fields:
drm-driver: <str>
drm-pdev: aaaa:bb.cc.d
Optional fully standardised:
drm-client-id: <uint>
Optional partially standardised:
engine-<str>: <u64> ns
memory-<str>: <u64> KiB
Once agreed the format would need to go to some README or kerneldoc in DRM core.
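To illustrate, a minimal userspace consumer of the above fields could look something like this (a sketch only, not part of the series; it assumes the example layout shown earlier):

```c
/*
 * Sketch of a userspace reader for the proposed fields; not part of
 * this series. Extracts one "drm-engine-<class>" value in nanoseconds
 * from a /proc/<pid>/fdinfo/<fd> style file.
 */
#include <stdio.h>
#include <string.h>

static unsigned long long read_engine_ns(const char *path, const char *key)
{
	unsigned long long val = 0;
	char line[256];
	FILE *f = fopen(path, "r");

	if (!f)
		return 0;

	while (fgets(line, sizeof(line), f)) {
		char name[128];
		unsigned long long v;

		/* Lines look like "drm-engine-render:\t9288864723 ns". */
		if (sscanf(line, "%127[^:]: %llu ns", name, &v) == 2 &&
		    !strcmp(name, key)) {
			val = v;
			break;
		}
	}

	fclose(f);
	return val;
}

int main(int argc, char **argv)
{
	const char *path = argv[1]; /* e.g. /proc/1234/fdinfo/5 */

	if (argc < 2)
		return 1;

	printf("render: %llu ns\n", read_engine_ns(path, "drm-engine-render"));
	return 0;
}
```

Sampling this twice and dividing the delta by the elapsed wall time would give per-client utilisation, which is what an intel_gpu_top style tool would do.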
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: David M Nieto <David.Nieto@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
---
 drivers/gpu/drm/i915/i915_drm_client.c | 68 ++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h |  4 ++
 drivers/gpu/drm/i915/i915_drv.c        |  3 ++
 3 files changed, 75 insertions(+)
diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
index 1e5db7753276..5e9cfba1116b 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.c
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -9,6 +9,11 @@

 #include <drm/drm_print.h>

+#include <uapi/drm/i915_drm.h>
+
+#include "gem/i915_gem_context.h"
+#include "gt/intel_engine_user.h"
+
 #include "i915_drm_client.h"
 #include "i915_drv.h"
 #include "i915_gem.h"
@@ -168,3 +173,66 @@ void i915_drm_clients_fini(struct i915_drm_clients *clients)

 	xa_destroy(&clients->xarray);
 }
+
+#ifdef CONFIG_PROC_FS
+static const char * const uabi_class_names[] = {
+	[I915_ENGINE_CLASS_RENDER] = "render",
+	[I915_ENGINE_CLASS_COPY] = "copy",
+	[I915_ENGINE_CLASS_VIDEO] = "video",
+	[I915_ENGINE_CLASS_VIDEO_ENHANCE] = "video-enhance",
+};
+
+static u64 busy_add(struct i915_gem_context *ctx, unsigned int class)
+{
+	struct i915_gem_engines_iter it;
+	struct intel_context *ce;
+	u64 total = 0;
+
+	for_each_gem_engine(ce, rcu_dereference(ctx->engines), it) {
+		if (ce->engine->uabi_class != class)
+			continue;
+
+		total += intel_context_get_total_runtime_ns(ce);
+	}
+
+	return total;
+}
+
+static void
+show_client_class(struct seq_file *m,
+		  struct i915_drm_client *client,
+		  unsigned int class)
+{
+	const struct list_head *list = &client->ctx_list;
+	u64 total = atomic64_read(&client->past_runtime[class]);
+	struct i915_gem_context *ctx;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(ctx, list, client_link)
+		total += busy_add(ctx, class);
+	rcu_read_unlock();
+
+	seq_printf(m, "drm-engine-%s:\t%llu ns\n",
+		   uabi_class_names[class], total);
+}
+
+void i915_drm_client_fdinfo(struct seq_file *m, struct file *f)
+{
+	struct drm_file *file = f->private_data;
+	struct drm_i915_file_private *file_priv = file->driver_priv;
+	struct drm_i915_private *i915 = file_priv->dev_priv;
+	struct i915_drm_client *client = file_priv->client;
+	struct pci_dev *pdev = to_pci_dev(i915->drm.dev);
+	unsigned int i;
+
+	seq_printf(m, "drm-driver:\ti915\n");
+	seq_printf(m, "drm-pdev:\t%04x:%02x:%02x.%d\n",
+		   pci_domain_nr(pdev->bus), pdev->bus->number,
+		   PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
+
+	seq_printf(m, "drm-client-id:\t%u\n", client->id);
+
+	for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++)
+		show_client_class(m, client, i);
+}
+#endif
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
index b2b69d6985e4..9885002433a0 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.h
+++ b/drivers/gpu/drm/i915/i915_drm_client.h
@@ -98,6 +98,10 @@ i915_drm_client_pid(const struct i915_drm_client *client)
 	return __i915_drm_client_name(client)->pid;
 }

+#ifdef CONFIG_PROC_FS
+void i915_drm_client_fdinfo(struct seq_file *m, struct file *f);
+#endif
+
 void i915_drm_clients_fini(struct i915_drm_clients *clients);

 #endif /* !__I915_DRM_CLIENT_H__ */
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 33eb7b52b58b..6b63fe4b3c26 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1694,6 +1694,9 @@ static const struct file_operations i915_driver_fops = {
 	.read = drm_read,
 	.compat_ioctl = i915_ioc32_compat_ioctl,
 	.llseek = noop_llseek,
+#ifdef CONFIG_PROC_FS
+	.show_fdinfo = i915_drm_client_fdinfo,
+#endif
 };

static int
I would like to add a unit marker for the stats that we monitor in the fd. As we discussed, we are currently displaying the usage percentage, because we wanted to provide single-query percentages, but this may evolve with time.
May I suggest adding two new fields:
drm-stat-interval: <64 bit> ns
drm-stat-timestamp: <64 bit> ns
If interval is set, engine utilisation is calculated with a single read:

  <perc render> = 100 * <drm-engine-render> / <drm-stat-interval>

If interval is not set, two reads are needed:

  <perc render> = 100 * (<drm-engine-render>_1 - <drm-engine-render>_0) /
                  (<drm-stat-timestamp>_1 - <drm-stat-timestamp>_0)
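For example, the two-read case would look something like this in a monitoring tool (just a sketch of the math, with the hypothetical fields proposed above):

```c
/*
 * Sketch only: engine utilisation from two fdinfo samples, using the
 * proposed (hypothetical) drm-stat-timestamp field alongside
 * drm-engine-render.
 */
static double render_busy_percent(unsigned long long engine_ns0,
				  unsigned long long ts_ns0,
				  unsigned long long engine_ns1,
				  unsigned long long ts_ns1)
{
	unsigned long long wall_ns = ts_ns1 - ts_ns0;

	/* Guard against identical timestamps between the two reads. */
	if (!wall_ns)
		return 0.0;

	return 100.0 * (double)(engine_ns1 - engine_ns0) / (double)wall_ns;
}
```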
Regards,
David
Yeah, having the timestamp is a good idea as well.
> drm-driver: i915
I think we should rather add something like printing file_operations->owner->name to the common fdinfo code.
This way we would have something common for all drivers in the system. I'm just not sure if that also works if they are compiled into the kernel.
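Roughly what I have in mind, as a completely untested sketch (the built-in case is exactly my doubt, since ->owner would be NULL there):

```c
/*
 * Untested sketch: print the owning module from common fdinfo code.
 * For drivers built into the kernel f_op->owner is NULL, so some
 * driver-provided fallback (hypothetical here) would still be needed.
 */
static void drm_show_fdinfo_driver(struct seq_file *m, struct file *f)
{
	const struct module *owner = f->f_op->owner;

	if (owner)
		seq_printf(m, "drm-driver:\t%s\n", owner->name);
	/* else: fall back to e.g. drm_driver.name for built-in drivers. */
}
```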
Regards, Christian.
On Thu, May 20, 2021 at 6:31 PM Christian König christian.koenig@amd.com wrote:
Yeah, having the timestamp is a good idea as well.
drm-driver: i915
I think we should rather add something like printing file_operations->owner->name to the common fdinfo code.
This way we would have something common for all drivers in the system. I'm just not sure if that also works for drivers which are compiled into the kernel.
Yeah common code could print driver name, busid and all that stuff. I think the common code should also provide some helpers for the key: value pair formatting (and maybe check for all lower-case and stuff like that) because if we don't then this is going to be a complete mess that's not parseable.
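To sketch the idea — the helper name and the key check are invented for illustration, not an existing DRM API:

/* Hypothetical DRM core helper: emit one "key: value unit" fdinfo line,
 * enforcing the lower-case "drm-" key convention so output stays parseable. */
static void drm_fdinfo_print_u64(struct seq_file *m, const char *key,
				 u64 value, const char *unit)
{
	WARN_ON(strncmp(key, "drm-", 4));
	seq_printf(m, "%s:\t%llu %s\n", key, value, unit);
}

A driver would then emit e.g. drm_fdinfo_print_u64(m, "drm-engine-render", total, "ns") and the format would stay uniform across drivers.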
And value should be real semantic stuff, not "here's a string". So, for example, accumulated time as a struct ktime. -Daniel
On 20/05/2021 18:47, Daniel Vetter wrote:
On Thu, May 20, 2021 at 6:31 PM Christian König christian.koenig@amd.com wrote:
Yeah, having the timestamp is a good idea as well.
drm-driver: i915
I think we should rather add something like printing file_operations->owner->name to the common fdinfo code.
This way we would have something common for all drivers in the system. I'm just not sure if that also works for drivers which are compiled into the kernel.
Yeah common code could print driver name, busid and all that stuff. I think the common code should also provide some helpers for the key: value pair formatting (and maybe check for all lower-case and stuff like that) because if we don't then this is going to be a complete mess that's not parseable.
I see we could have a few options here, a non-exhaustive list (especially omitting some sub-options):
1) DRM core implements fdinfo, which emits the common parts, calling into the driver to do the rest.
2) DRM adds helpers for drivers to emit common parts of fdinfo.
3) DRM core establishes a "spec" defining the common fields, the optional ones, and formats.
I was leaning towards 3) because it is the most lightweight, and my feeling is there isn't that much value in extracting a tiny bit of commonality in code. Proof in the pudding is how short the fdinfo vfunc is in this patch.
And value should be real semantic stuff, not "here's a string". So, for example, accumulated time as a struct ktime.
Ideally yes, but I have a feeling the ways amdgpu and i915 track things are quite different, so let's first learn more about that.
Am 20.05.21 um 18:26 schrieb Nieto, David M:
I would like to add a unit marker for the stats that we monitor in the fd. As we discussed, we are currently displaying the usage percentage, because we wanted to provide single-query percentages, but this may evolve with time.
May I suggest adding two new fields:
drm-stat-interval: <64 bit> ns
drm-stat-timestamp: <64 bit> ns
If interval is set, engine utilization is calculated as:
<perc render> = 100 * <drm-engine-render> / <drm-stat-interval>
If interval is not set, two reads are needed:
<perc render> = 100 * (<drm-engine-render1> - <drm-engine-render0>) / (<drm-stat-timestamp1> - <drm-stat-timestamp0>)
I would like to understand how amdgpu tracks GPU time, since I am not getting these fields yet.
1) You suggest having a timestamp because of different clock domains?
2) With the interval option - do you actually have a restarting counter? Do you keep that in the driver or get it from the hw itself?
Regards,
Tvrtko
Am 21.05.21 um 14:26 schrieb Tvrtko Ursulin:
I see we could have a few options here, a non-exhaustive list (especially omitting some sub-options):
1) DRM core implements fdinfo, which emits the common parts, calling into the driver to do the rest.
2) DRM adds helpers for drivers to emit common parts of fdinfo.
3) DRM core establishes a "spec" defining the common fields, the optional ones, and formats.
I was leaning towards 3) because it is the most lightweight, and my feeling is there isn't that much value in extracting a tiny bit of commonality in code. Proof in the pudding is how short the fdinfo vfunc is in this patch.
I would say that we should add printing the module name to the common fdinfo function for the whole kernel.
And for the DRM specific stuff, either 2 or 3 is the way to go I think. Number 1 sounds too much like mid-layering to me.
Regards, Christian.