[PATCH v5 00/11] drm/msm+PM+icc: Make job_run() reclaim-safe

Rob Clark robdclark at gmail.com
Tue Aug 22 18:01:47 UTC 2023


From: Rob Clark <robdclark at chromium.org>

Inspired by https://lore.kernel.org/dri-devel/20200604081224.863494-10-daniel.vetter@ffwll.ch/
it seemed like a good idea to get rid of memory allocation in the
job_run() fence-signaling path, and to use lockdep annotations to yell
at us about anything that could deadlock against shrinker/reclaim.
Anything that can trigger reclaim, or block on any other thread that
has triggered reclaim, can block the GPU shrinker from releasing
memory if the shrinker is waiting on the job to complete, causing
deadlock.
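To make the hazard concrete, here is a sketch (illustrative only, not
actual msm code; 'my_hw_fence' and 'bad_job_run' are made-up names) of
the kind of allocation this series removes from the job_run() path:

	struct my_hw_fence {
		struct dma_fence base;
	};

	static struct dma_fence *bad_job_run(struct drm_sched_job *job)
	{
		struct my_hw_fence *f;

		/*
		 * BUG (the point of the sketch): GFP_KERNEL may enter
		 * direct reclaim, and reclaim may call into the GPU
		 * shrinker.  If the shrinker is waiting on fences from
		 * this scheduler to signal, it is waiting on us -- and
		 * we are waiting on reclaim.  Deadlock.  The fix in
		 * this series is to preallocate the hw_fence at submit
		 * time instead.  (Fence init omitted.)
		 */
		f = kzalloc(sizeof(*f), GFP_KERNEL);
		if (!f)
			return NULL;

		return &f->base;
	}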

The first two patches decouple allocation from devfreq->lock, and teach
lockdep that devfreq->lock can be acquired in paths that the shrinker
indirectly depends on.
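For reference, the lockdep-priming pattern is roughly the following (a
sketch; see the patch itself for the exact placement and comment):

	#include <linux/sched/mm.h>  /* fs_reclaim_acquire()/release() */

	static void devfreq_teach_lockdep(struct devfreq *devfreq)
	{
		/*
		 * Pretend we are in reclaim and take the lock once, so
		 * lockdep records the ordering up front and complains
		 * about any GFP_KERNEL allocation under devfreq->lock
		 * immediately, rather than only if/when reclaim
		 * actually recurses into the shrinker at runtime.
		 */
		fs_reclaim_acquire(GFP_KERNEL);
		mutex_lock(&devfreq->lock);
		mutex_unlock(&devfreq->lock);
		fs_reclaim_release(GFP_KERNEL);
	}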

The next three patches do the same for PM QoS, and the following two
do the same for interconnect.
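The recurring trick in these fixes is to move allocation out from
under the lock that reclaim can depend on.  A sketch with illustrative
names (not the actual qos.c or icc code):

	struct example_req {
		struct list_head node;
	};

	struct example_dev {
		struct mutex lock;	/* stands in for dev_pm_qos_mtx */
		struct list_head requests;
	};

	static int example_add_request(struct example_dev *dev)
	{
		struct example_req *req;

		/* GFP_KERNEL can recurse into reclaim, so allocate first... */
		req = kzalloc(sizeof(*req), GFP_KERNEL);
		if (!req)
			return -ENOMEM;

		/*
		 * ...and only then take the reclaim-visible lock, now
		 * that no further allocations are needed under it.
		 */
		mutex_lock(&dev->lock);
		list_add(&req->node, &dev->requests);
		mutex_unlock(&dev->lock);

		return 0;
	}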

Finally, the last two patches enable the lockdep fence-signalling
annotations.
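The annotations themselves are the existing dma_fence critical-section
helpers; roughly, the scheduler can wrap its run_job path like this
(a sketch, assuming the driver opts in):

	static void sched_run_job_annotated(struct drm_sched_job *job)
	{
		struct dma_fence *fence;
		bool cookie;

		cookie = dma_fence_begin_signalling();
		/*
		 * Everything in here is now treated by lockdep as being
		 * in the signaling path of the job's fences: any lock or
		 * allocation that can wait on reclaim will be reported.
		 */
		fence = job->sched->ops->run_job(job);
		/* (the real scheduler attaches completion callbacks to
		 * 'fence' here) */
		dma_fence_end_signalling(cookie);
	}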


v2: Switch from embedding hw_fence in submit/job object to preallocating
    the hw_fence.  Rework "fenced unpin" locking to drop the obj lock
    from the fence-signaling path (i.e. the part that was still WIP in
    the first iteration of the patchset).  Add the final patch to enable
    fence-signaling annotations now that job_run() and job_free() are
    safe.  The PM devfreq/QoS and interconnect patches are unchanged.

v3: Mostly unchanged, but the series is much smaller now that the drm
    changes have landed; what remains is primarily the devfreq/qos/
    interconnect fixes.

v4: Re-work PM / QoS patch based on Rafael's suggestion

v5: Add a couple more drm/msm patches for issues I found while making
    my way to the bottom of the rabbit hole.  In particular, I had to
    move the power enable earlier, before enqueuing to the scheduler,
    rather than after the scheduler waits for in-fences, which means
    we could be powering up slightly earlier than needed.  If runpm
    had a separate prepare + enable, similar to the clk framework, we
    wouldn't need this.
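For context, the reordering amounts to roughly this in the submit path
('gpu' and 'job' stand in for the msm equivalents, and the matching
pm_runtime_put() moves to job retirement):

	static void submit_push(struct msm_gpu *gpu, struct drm_sched_job *job)
	{
		/*
		 * Grab the runpm reference at submit (ioctl) time,
		 * before the job is handed to the scheduler:
		 * pm_runtime_get_sync() can allocate and take locks
		 * that are not safe once we are in the fence-signaling
		 * path after the in-fence wait.  Downside: we may power
		 * up slightly earlier than needed.
		 */
		pm_runtime_get_sync(&gpu->pdev->dev);

		drm_sched_entity_push_job(job);
	}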

Rob Clark (11):
  PM / devfreq: Drop unneeded locking to appease lockdep
  PM / devfreq: Teach lockdep about locking order
  PM / QoS: Fix constraints alloc vs reclaim locking
  PM / QoS: Decouple request alloc from dev_pm_qos_mtx
  PM / QoS: Teach lockdep about dev_pm_qos_mtx locking order
  interconnect: Fix locking for runpm vs reclaim
  interconnect: Teach lockdep about icc_bw_lock order
  drm/msm/a6xx: Remove GMU lock from runpm paths
  drm/msm: Move runpm enable in submit path
  drm/sched: Add (optional) fence signaling annotation
  drm/msm: Enable fence signalling annotations

 drivers/base/power/qos.c               | 98 +++++++++++++++++++-------
 drivers/devfreq/devfreq.c              | 52 +++++++-------
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c  | 15 +---
 drivers/gpu/drm/msm/msm_gem_submit.c   |  2 +
 drivers/gpu/drm/msm/msm_gpu.c          |  2 -
 drivers/gpu/drm/msm/msm_ringbuffer.c   |  1 +
 drivers/gpu/drm/scheduler/sched_main.c |  9 +++
 drivers/interconnect/core.c            | 18 ++++-
 include/drm/gpu_scheduler.h            |  2 +
 9 files changed, 130 insertions(+), 69 deletions(-)

-- 
2.41.0


