[RFC PATCH 1/8] tracing/gpu: modify gpu_mem_total
Gurchetan Singh
gurchetansingh at chromium.org
Thu Oct 21 03:10:20 UTC 2021
The existing gpu_mem_total tracepoint [1] is not currently used by
any in-tree consumers; we should add some.
In addition, there's a desire to report imported memory via the
counters too [2].
To do this, we'll have to redefine the event to:
a) Change 'pid' to 'ctx_id'
The reason is that the DRM subsystem is designed with GEM objects, DRM
devices and DRM files in mind. A GEM object is associated with a DRM
device, and it may be shared between one or more DRM files.
Per-instance (or "context") counters therefore make more sense than
per-process counters for DRM. For GPUs that already report per-process
counters (kgsl), this change is backwards compatible.
b) add an "import_mem_total" field
We're just appending a field, so this is backwards compatible. "size"
is also renamed to "mem_total" (name changes are backwards compatible).
[1] https://lore.kernel.org/r/20200302234840.57188-1-zzyiwei@google.com/
[2] https://www.spinics.net/lists/kernel/msg4062769.html
Signed-off-by: Gurchetan Singh <gurchetansingh at chromium.org>
---
include/trace/events/gpu_mem.h | 61 ++++++++++++++++++++++++----------
1 file changed, 43 insertions(+), 18 deletions(-)
diff --git a/include/trace/events/gpu_mem.h b/include/trace/events/gpu_mem.h
index 26d871f96e94..198b87f50356 100644
--- a/include/trace/events/gpu_mem.h
+++ b/include/trace/events/gpu_mem.h
@@ -14,41 +14,66 @@
#include <linux/tracepoint.h>
/*
- * The gpu_memory_total event indicates that there's an update to either the
- * global or process total gpu memory counters.
+ * The gpu_mem_total event indicates that there's an update to local or
+ * global gpu memory counters.
*
- * This event should be emitted whenever the kernel device driver allocates,
- * frees, imports, unimports memory in the GPU addressable space.
+ * This event should be emitted whenever a GPU device (ctx_id == 0):
*
- * @gpu_id: This is the gpu id.
+ * 1) allocates memory.
+ * 2) frees memory.
+ * 3) imports memory from an external exporter.
*
- * @pid: Put 0 for global total, while positive pid for process total.
+ * OR when a GPU device instance (ctx_id != 0):
*
- * @size: Size of the allocation in bytes.
+ * 1) allocates or acquires a reference to memory from another instance.
+ * 2) frees or releases a reference to memory from another instance.
+ * 3) imports memory from another GPU device instance.
*
+ * When ctx_id == 0, both mem_total and import_mem_total represent
+ * device-wide (global) totals. When ctx_id != 0, these counters
+ * represent instance-specific totals.
+ *
+ * Note that an allocation does not necessarily mean backing the memory
+ *
+ * @gpu_id: unique ID of the GPU.
+ *
+ * @ctx_id: an ID for a specific instance of the GPU device.
+ *
+ * @mem_total: - total size of memory known to a GPU device, including
+ * imports (ctx_id == 0)
+ * - total size of memory known to a GPU device instance
+ * (ctx_id != 0)
+ *
+ * @import_mem_total: - size of memory imported from outside the GPU
+ *                      device (ctx_id == 0)
+ *                    - size of memory imported into a GPU device
+ *                      instance (ctx_id != 0)
*/
TRACE_EVENT(gpu_mem_total,
- TP_PROTO(uint32_t gpu_id, uint32_t pid, uint64_t size),
+ TP_PROTO(u32 gpu_id, u32 ctx_id, u64 mem_total, u64 import_mem_total),
- TP_ARGS(gpu_id, pid, size),
+ TP_ARGS(gpu_id, ctx_id, mem_total, import_mem_total),
TP_STRUCT__entry(
- __field(uint32_t, gpu_id)
- __field(uint32_t, pid)
- __field(uint64_t, size)
+ __field(u32, gpu_id)
+ __field(u32, ctx_id)
+ __field(u64, mem_total)
+ __field(u64, import_mem_total)
),
TP_fast_assign(
__entry->gpu_id = gpu_id;
- __entry->pid = pid;
- __entry->size = size;
+ __entry->ctx_id = ctx_id;
+ __entry->mem_total = mem_total;
+ __entry->import_mem_total = import_mem_total;
),
- TP_printk("gpu_id=%u pid=%u size=%llu",
- __entry->gpu_id,
- __entry->pid,
- __entry->size)
+ TP_printk("gpu_id=%u, ctx_id=%u, mem total=%llu, mem import total=%llu",
+ __entry->gpu_id,
+ __entry->ctx_id,
+ __entry->mem_total,
+ __entry->import_mem_total)
);
#endif /* _TRACE_GPU_MEM_H */
--
2.25.1