[PATCH v2 1/2] drm: Add GPU reset sysfs event

Rob Clark robdclark at gmail.com
Thu Mar 10 17:16:05 UTC 2022


On Thu, Mar 10, 2022 at 8:27 AM Andrey Grodzovsky
<andrey.grodzovsky at amd.com> wrote:
>
>
> On 2022-03-10 11:21, Sharma, Shashank wrote:
> >
> >
> > On 3/10/2022 4:24 PM, Rob Clark wrote:
> >> On Thu, Mar 10, 2022 at 1:55 AM Christian König
> >> <ckoenig.leichtzumerken at gmail.com> wrote:
> >>>
> >>>
> >>>
> >>> Am 09.03.22 um 19:12 schrieb Rob Clark:
> >>>> On Tue, Mar 8, 2022 at 11:40 PM Shashank Sharma
> >>>> <contactshashanksharma at gmail.com> wrote:
> >>>>> From: Shashank Sharma <shashank.sharma at amd.com>
> >>>>>
> >>>>> This patch adds a new sysfs event, which will notify
> >>>>> userspace about a GPU reset, and can also provide
> >>>>> some information like:
> >>>>> - process ID of the process involved in the GPU reset
> >>>>> - process name of the involved process
> >>>>> - the GPU status info (using flags)
> >>>>>
> >>>>> This patch also introduces the first flag of the flags
> >>>>> bitmap, which can be appended as and when required.
> >>>> Why invent something new, rather than using the already existing
> >>>> devcoredump?
> >>>
> >>> Yeah, that's a really valid question.
> >>>
> >>>> I don't think we need (or should encourage/allow) something drm
> >>>> specific when there is already an existing solution used by both drm
> >>>> and non-drm drivers.  Userspace should not have to learn to support
> >>>> yet another mechanism to do the same thing.
> >>>
> >>> Question is how is userspace notified about new available core dumps?
> >>
> >> I haven't looked into it too closely, as the CrOS userspace
> >> crash-reporter already had support for devcoredump, so it "just
> >> worked" out of the box[1].  I believe a udev event is what triggers
> >> the crash-reporter to go read the devcore dump out of sysfs.
> >
> > I had a quick look at the devcoredump code, and it doesn't look like
> > it sends an event to userspace, so we still need an event to
> > indicate a GPU reset.
> >
> > - Shashank
>
>
> Another point I raised in another thread is that it looks like we might
> want to use devcoredump during ASIC reset to dump more HW-related data,
> which is useful for debugging. It means the user client will have to
> extract the pid and process name from a bigger data set - is that OK?
> We can probably put it at the beginning for easiest parsing.
>

Yes, this is what we do for drm/msm.. the start of the devcore file
looks something like:

----
kernel: 5.14.0-rc3-debug+
module: msm
time: 1632763923.453207637
comm: deqp-gles3:sq0
cmdline: ./deqp-gles31 --deqp-surface-width=256
--deqp-surface-height=256 --deqp-gl-config-name=rgba8888d24s8ms0
--deqp-visibility=hidden
--deqp-caselist-file=/home/robclark/src/deqp/build/modules/gles31/new-run/c33.r1.caselist.txt
--deqp-log-filename=/home/robclark/src/deqp/build/modules/gles31/new-run/c33.r1.qpa
--deqp-log-flush=disable
--deqp-shadercache-filename=/home/robclark/src/deqp/build/modules/gles31/new-run/t499826814672.shader_cache
--deqp-shadercache-truncate=disable
revision: 618 (6.1.8.0)
----

We capture quite a lot of state: the cmdstream that triggered the
hang, register/state dumps, microcontroller state, etc.  But by
default we go out of our way to not capture textures or caches that
might contain texture data (for privacy reasons).

It has been hugely useful for debugging a few issues that happen
rarely enough that they are difficult to reproduce.  I guess that is
crowd-sourced debugging ;-)
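On the notification side mentioned earlier: the devcoredump core
registers a "devcoredump" class and creates a devcd<N> device per
dump, so the device add generates a udev event that a collector can
match on. A hypothetical rule along these lines (the collector script
path is made up for illustration):

```
# e.g. /etc/udev/rules.d/90-devcoredump.rules
# Run a collector when a new devcoredump class device appears; the
# dump body is then readable from /sys/class/devcoredump/devcd<N>/data,
# and writing to "data" discards the dump before it auto-expires.
SUBSYSTEM=="devcoredump", ACTION=="add", \
    RUN+="/usr/local/bin/collect-devcore.sh $env{DEVPATH}"
```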

BR,
-R

