[Intel-gfx] [RFC 10/11] drm/i915: Debugfs interface for per-engine hang recovery.

Tomas Elf tomas.elf at intel.com
Tue Jun 9 10:28:26 PDT 2015


On 09/06/2015 13:27, Chris Wilson wrote:
> On Tue, Jun 09, 2015 at 12:18:28PM +0100, Tomas Elf wrote:
>> On 08/06/2015 18:45, Chris Wilson wrote:
>>> On Mon, Jun 08, 2015 at 06:03:28PM +0100, Tomas Elf wrote:
>>>> 1. The i915_wedged_set function allows us to schedule three forms of hang recovery:
>>>>
>>>> 	a) Legacy hang recovery: By passing e.g. -1 we trigger the legacy full
>>>> 	GPU reset recovery path.
>>>>
>>>> 	b) Single engine hang recovery: By passing an engine ID in the interval
>>>> 	of [0, I915_NUM_RINGS) we can schedule hang recovery of any single
>>>> 	engine assuming that the context submission consistency requirements
>>>> 	are met (otherwise the hang recovery path will simply exit early and
>>>> 	wait for another hang detection). The values are assumed to use up bits
>>>> 	3:0 only since we certainly do not support as many as 16 engines.
>>>>
>>>> 	This mode is supported since there are several legacy test applications
>>>> 	that rely on this interface.
>>>
>>> Are there? I don't see them in igt - and let's not start making debugfs
>>> ABI.
>>
>> They're not in IGT, only internal to VPG. I guess we could limit
>> these changes and adapt the internal test suite in VPG instead of
>> upstreaming changes that only VPG validation cares about.
>
> Also note that there are quite a few concurrent hang tests in igt that
> this series should aim to fix.
>
> You will be expected to provide basic validation tests for igt as well,
> which will be using the debugfs interface I guess.
>

Yeah, once we get past the RFC stage I can start looking into the IGTs. 
Obviously, the existing tests must not break and I'll add tests for 
per-engine recovery, full GPU reset promotion and watchdog timeout.

Daniel Vetter has already said that he wants me to add more hang 
concurrency tests that run a wider variety of tests with intermittent 
hangs triggering different hang recovery modes. Having dealt with 
long-duration operations testing with concurrent rendering for more 
than a year now during TDR development, I know what kind of havoc the 
TDR mechanism can wreak when interacting with subsystems like the 
display driver and the shrinker, so I'm hoping that we can distill some 
of those system-level tests into a smaller IGT form that can be run 
more easily and perhaps more deterministically.

I haven't started looking into that yet, but it will have to be done 
once I start submitting the patch series proper.

>>>> 	c) Multiple engine hang recovery: By passing in an engine flag mask in
>>>> 	bits 31:8 (bit 8 corresponds to engine 0 = RCS, bit 9 corresponds to
>>>> 	engine 1 = VCS etc) we can schedule any combination of engine hang
>>>> 	recoveries as we please. For example, by passing in the value 0x3 << 8
>>>> 	we would schedule hang recovery for engines 0 and 1 (RCS and VCS) at
>>>> 	the same time.
>>>
>>> Seems fine. But I don't see the reason for the extra complication.
>>
>> I wanted to make sure that we could test multiple concurrent hang
>> recoveries, but to be fair nobody is actually using this at the
>> moment so unless someone actually _needs_ this we probably don't
>> need to upstream it.
>>
>> I guess we could leave it in its currently upstreamed form where it
>> only allows full GPU reset. Or would it be of use to anyone to
>> support per-engine recovery?
>
> I like the per-engine flags, I was just arguing that having the
> interface do both seems overly complicated (when the existing behaviour
> can be retained by using -1).
>

Sure, we'll just go with the per-engine flags then.
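For reference, a minimal userspace sketch of what exercising that 
interface could look like once the rework is done. The debugfs path is 
the standard one, but the flag layout (bit 0 = RCS, bit 1 = VCS, and so 
on) is an assumption until the patch is respun:

  /* Hypothetical test snippet: schedule concurrent hang recovery on
   * RCS and VCS by writing an engine flag mask to i915_wedged.
   * The flag layout is assumed, pending the reworked interface.
   * Passing -1 (all bits set) would select all engines, which keeps
   * the legacy full-GPU-reset behaviour reachable.
   */
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  #define RCS_FLAG (1u << 0)	/* engine 0 */
  #define VCS_FLAG (1u << 1)	/* engine 1 */

  int main(void)
  {
  	int fd = open("/sys/kernel/debug/dri/0/i915_wedged", O_WRONLY);

  	if (fd < 0) {
  		perror("open");
  		return 1;
  	}

  	/* Request recovery of engines 0 and 1 at the same time. */
  	dprintf(fd, "0x%x", RCS_FLAG | VCS_FLAG);

  	close(fd);
  	return 0;
  }
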

>>>> 	If bits in fields 3:0 and 31:8 are both used then single engine hang
>>>> 	recovery mode takes precedence and bits 31:8 are ignored.
>>>>
>>>> 2. The i915_wedged_get function produces a set of statistics related to:
>>>
>>> Add it to hangcheck_info instead.
>>
>> Yeah, I considered that but I felt that hangcheck_info had too much
>> text and it would be too much of a hassle to parse out the data. But
>> having spoken to the validation guys it seems like they're fine with
>> updating the parser. So I could update hangcheck_info with this new
>> information.
>
> It can more or less just be a search for the start of your info block.
> A quick string search on the new debugfs name, then the old debugfs
> name, could even provide backwards compatibility in the test.
>
>>> i915_wedged_get could be updated to give the ring mask of wedged rings?
>>> If that concept exists.
>>> -Chris
>>>
>>
>> Nah, no need, I'll just add the information to hangcheck_info.
>> Besides, wedged_get needs to provide more information than just the
>> current wedged state. It also provides information about the number
>> of resets, the number of watchdog timeouts etc. So it's not that
>> easy to summarise it as a ring mask.
>
> We are still talking about today's single-valued debugfs/i915_wedged
> rather than the extended info?

Gah, you're right, I completely screwed up there (both in the patch and 
in the subsequent discussion): I'm not talking about i915_wedged_get 
(which produces a single value), I'm talking about i915_hangcheck_read 
(which produces extended info). For some reason my brain keeps 
convincing me that I've changed i915_wedged_set and i915_wedged_get 
(probably because they form a setter/getter pair of sorts, even though 
they're not named that way). So, to make sure that we're all on the 
same page here:

* I've updated i915_wedged_set. This one will be changed to only accept 
engine flags.
* I've added i915_hangcheck_read. This function will be removed and 
replaced by i915_hangcheck_info instead (see the parsing sketch after 
this list).
* I won't be touching i915_wedged_get, unless that is something that is 
requested; the VPG tests don't need it, at least.
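As for the test-side parsing you mentioned, something along these lines 
would probably do: open i915_hangcheck_info, fall back to the old 
i915_hangcheck_read name for backwards compatibility, and string-search 
for the start of the stats block. A rough sketch (the "Hangcheck" 
marker string is a placeholder for whatever the info block ends up 
being prefixed with):

  /* Hypothetical parser sketch for the hang statistics block.
   * Falls back from the new debugfs name to the old one, as
   * suggested above. The marker string is an assumption.
   */
  #include <stdio.h>
  #include <string.h>

  static FILE *open_hangcheck(const char *dri_dir)
  {
  	char path[256];
  	FILE *f;

  	snprintf(path, sizeof(path), "%s/i915_hangcheck_info", dri_dir);
  	f = fopen(path, "r");
  	if (!f) {
  		/* Older kernels: fall back to the previous name. */
  		snprintf(path, sizeof(path),
  			 "%s/i915_hangcheck_read", dri_dir);
  		f = fopen(path, "r");
  	}
  	return f;
  }

  int main(void)
  {
  	FILE *f = open_hangcheck("/sys/kernel/debug/dri/0");
  	char line[512];
  	int found = 0;

  	if (!f)
  		return 1;

  	/* Print everything from the start of the stats block onwards. */
  	while (fgets(line, sizeof(line), f)) {
  		if (strstr(line, "Hangcheck"))
  			found = 1;
  		if (found)
  			fputs(line, stdout);
  	}

  	fclose(f);
  	return 0;
  }
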

Anything else?

>
> Oh, whilst I am thinking of it, you could also add the reset stats to
> the error state.

Sure.
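
Something like the following is roughly what I have in mind (a sketch 
only; the struct and field names are made up, not what will land):

  /* Hypothetical: reset statistics captured alongside the rest of
   * the error state. Names are illustrative only.
   */
  #include <linux/types.h>

  #define NUM_ENGINES 5	/* stand-in for I915_NUM_RINGS */

  struct drm_i915_error_reset_stats {
  	u32 full_reset_count;			/* full GPU resets */
  	u32 engine_reset_count[NUM_ENGINES];	/* per-engine recoveries */
  	u32 watchdog_count[NUM_ENGINES];	/* watchdog timeouts */
  };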

Thanks,
Tomas

> -Chris
>
