[Intel-gfx] [RFC 00/12] Per client engine busyness

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Mon Mar 9 18:31:17 UTC 2020


From: Tvrtko Ursulin <tvrtko.ursulin at intel.com>

Another re-spin of the per-client engine busyness series. Highlights from this
version:

 * A different way of tracking the runtime of exited/unreachable contexts. This
   time round I accumulate those per context/client and engine class, while
   active contexts are kept in a list and tallied on sysfs reads. (A sketch of
   this bookkeeping follows the list.)
 * A small tweak in the engine release code, since I need the GEM context for a
   bit longer so that I can accumulate the intel_context runtime into it as it
   is getting freed. (Context complete can be late.)
 * The PPHWSP method is back and even comes first in the series this time. It
   still can't show the currently running workloads, but the software tracking
   method suffers from the CSB processing delay with high frequency and very
   short batches.
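
To illustrate the first bullet, here is a minimal sketch of the bookkeeping.
Structure and field names (client_ctx_stats, past_runtime_ns, ...) are
assumptions for illustration, not the exact code from the series: runtimes of
freed contexts are folded into per-client, per-engine-class totals, while
still-active contexts remain on a list which is walked and tallied at read
time.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/types.h>

#define NUM_ENGINE_CLASSES 4 /* render, copy, video, video enhance */

/* Per-GEM-context runtime, one slot per engine class. Names are a sketch. */
struct client_ctx_stats {
        struct list_head client_link;
        u64 runtime_ns[NUM_ENGINE_CLASSES];
};

struct i915_drm_client {
        spinlock_t ctx_lock;
        struct list_head ctx_list; /* still-active contexts */
        /* Runtime already accumulated from freed contexts. */
        u64 past_runtime_ns[NUM_ENGINE_CLASSES];
};

/* As a context is freed, fold its runtime into the client totals. */
static void client_context_closed(struct i915_drm_client *client,
                                  struct client_ctx_stats *stats)
{
        unsigned int i;

        spin_lock(&client->ctx_lock);
        list_del(&stats->client_link);
        for (i = 0; i < NUM_ENGINE_CLASSES; i++)
                client->past_runtime_ns[i] += stats->runtime_ns[i];
        spin_unlock(&client->ctx_lock);
}

/* sysfs read: past runtime plus a tally of the still-active contexts. */
static u64 client_busy_ns(struct i915_drm_client *client, unsigned int class)
{
        struct client_ctx_stats *stats;
        u64 total;

        spin_lock(&client->ctx_lock);
        total = client->past_runtime_ns[class];
        list_for_each_entry(stats, &client->ctx_list, client_link)
                total += stats->runtime_ns[class];
        spin_unlock(&client->ctx_lock);

        return total;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~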

Internally we track time spent on engines for each struct intel_context. This
can serve as a building block for several features from the want list:
smarter scheduler decisions, getrusage(2)-like per-GEM-context functionality
wanted by some customers, a cgroups controller, dynamic SSEU tuning, ...
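
As a rough illustration of the software tracking idea, per-context busyness can
be sampled by timestamping when a context goes on and off the hardware; the
hook and field names below are assumptions for illustration only. Since the
schedule-out side runs from CSB processing, this is also where the accumulation
lag mentioned above comes from for very short, high frequency batches.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#include <linux/ktime.h>
#include <linux/types.h>

/* Sketch only -- names are illustrative, not the code from the series. */
struct ctx_busy_stats {
        ktime_t start;  /* when the context last went on hardware */
        u64 total_ns;   /* accumulated engine time */
};

/* Called when CSB processing sees the context schedule in... */
static void ctx_sched_in(struct ctx_busy_stats *stats)
{
        stats->start = ktime_get();
}

/* ...and out; the in/out delta is the time spent on the engine. */
static void ctx_sched_out(struct ctx_busy_stats *stats)
{
        stats->total_ns += ktime_to_ns(ktime_sub(ktime_get(), stats->start));
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~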

Externally, in sysfs, we expose time spent on GPU per client and per engine
class.

The sysfs interface enables us to implement a "top-like" tool for GPU tasks.
Or, with a "screenshot":
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
intel-gpu-top -  906/ 955 MHz;    0% RC6;  5.30 Watts;      933 irqs/s

      IMC reads:     4414 MiB/s
     IMC writes:     3805 MiB/s

          ENGINE      BUSY                                      MI_SEMA MI_WAIT
     Render/3D/0   93.46% |████████████████████████████████▋  |      0%      0%
       Blitter/0    0.00% |                                   |      0%      0%
         Video/0    0.00% |                                   |      0%      0%
  VideoEnhance/0    0.00% |                                   |      0%      0%

  PID            NAME  Render/3D      Blitter        Video      VideoEnhance
 2733       neverball |██████▌     ||            ||            ||            |
 2047            Xorg |███▊        ||            ||            ||            |
 2737        glxgears |█▍          ||            ||            ||            |
 2128           xfwm4 |            ||            ||            ||            |
 2047            Xorg |            ||            ||            ||            |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Implementation-wise we add a bunch of files in sysfs like:

	# cd /sys/class/drm/card0/clients/
	# tree
	.
	├── 7
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	├── 8
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	└── 9
	    ├── busy
	    │   ├── 0
	    │   ├── 1
	    │   ├── 2
	    │   └── 3
	    ├── name
	    └── pid

Files in the 'busy' directories are numbered using the engine class ABI values
and contain the accumulated nanoseconds each client spent on engines of the
respective class.
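
To illustrate how the interface could be consumed (this example is not part of
the series; client id 7 and engine class 0 are placeholder values), busyness
over an interval is simply the delta between two reads of a busy file divided
by the elapsed wall time:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

static unsigned long long read_busy_ns(const char *path)
{
        unsigned long long v = 0;
        FILE *f = fopen(path, "r");

        if (!f) {
                perror(path);
                exit(1);
        }
        if (fscanf(f, "%llu", &v) != 1)
                v = 0;
        fclose(f);

        return v;
}

int main(void)
{
        /* Client 7, engine class 0 (render) -- example values. */
        const char *path = "/sys/class/drm/card0/clients/7/busy/0";
        unsigned long long t0, t1;
        struct timespec ts0, ts1;
        double wall_ns;

        t0 = read_busy_ns(path);
        clock_gettime(CLOCK_MONOTONIC, &ts0);
        sleep(1);
        t1 = read_busy_ns(path);
        clock_gettime(CLOCK_MONOTONIC, &ts1);

        wall_ns = (ts1.tv_sec - ts0.tv_sec) * 1e9 +
                  (ts1.tv_nsec - ts0.tv_nsec);
        printf("render busy: %.2f%%\n", 100.0 * (t1 - t0) / wall_ns);

        return 0;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~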

It is still an RFC since it is missing dedicated test cases to ensure things
really work as advertised.

Tvrtko Ursulin (12):
  drm/i915: Expose list of clients in sysfs
  drm/i915: Update client name on context create
  drm/i915: Make GEM contexts track DRM clients
  drm/i915: Use explicit flag to mark unreachable intel_context
  drm/i915: Track runtime spent in unreachable intel_contexts
  drm/i915: Track runtime spent in closed GEM contexts
  drm/i915: Track all user contexts per client
  drm/i915: Expose per-engine client busyness
  drm/i915: Track per-context engine busyness
  drm/i915: Carry over past software tracked context runtime
  drm/i915: Prefer software tracked context busyness
  compare runtimes

 drivers/gpu/drm/i915/Makefile                 |   3 +-
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  83 +++-
 .../gpu/drm/i915/gem/i915_gem_context_types.h |  26 +-
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |   4 +-
 drivers/gpu/drm/i915/gt/intel_context.c       |  20 +
 drivers/gpu/drm/i915/gt/intel_context.h       |  13 +
 drivers/gpu/drm/i915/gt/intel_context_types.h |  10 +
 drivers/gpu/drm/i915/gt/intel_engine_cs.c     |  15 +-
 drivers/gpu/drm/i915/gt/intel_lrc.c           |  34 +-
 drivers/gpu/drm/i915/i915_debugfs.c           |  31 +-
 drivers/gpu/drm/i915/i915_drm_client.c        | 413 ++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h        | 100 +++++
 drivers/gpu/drm/i915/i915_drv.h               |   5 +
 drivers/gpu/drm/i915/i915_gem.c               |  37 +-
 drivers/gpu/drm/i915/i915_gpu_error.c         |  21 +-
 drivers/gpu/drm/i915/i915_sysfs.c             |   8 +
 16 files changed, 767 insertions(+), 56 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h

-- 
2.20.1


