[Intel-gfx] [PATCH 12/12] drm/i915: Cache the reset_counter for the request
Daniel Vetter
daniel at ffwll.ch
Wed Nov 25 01:12:53 PST 2015
On Tue, Nov 24, 2015 at 09:43:32PM +0000, Chris Wilson wrote:
> On Tue, Nov 24, 2015 at 05:43:22PM +0100, Daniel Vetter wrote:
> > On Fri, Nov 20, 2015 at 12:43:52PM +0000, Chris Wilson wrote:
> > > Instead of querying the reset counter before every access to the ring,
> > > query it the first time we touch the ring, and do a final compare when
> > > submitting the request. For correctness, we need to then sanitize how
> > > the reset_counter is incremented to prevent broken submission and
> > > waiting across resets, in the process fixing the persistent -EIO we
> > > still see today on failed waits.
> >
> > tbh that explanation went straight over my head. Questions:
> > - Where do we wait again over a reset? With all the wakeups we should
> > catch them properly. I guess this needs the detailed scenario to help me
> > out, since I have dim memories that there is something wrong ;-)
>
> TDR. In the future (and in past speculative patches) we have proposed
> keeping requests over a reset and requeuing them. That is a complication
> to the simplification of bailing out from the wait. What I have in mind,
> is the recovery code has to fix up the request list somehow, though that
> will be fun.
But even with requeueing, the waiters need to bail out and restart the
ioctl. So I don't see why requeueing of requests matters.
And my question was about what exactly breaks when waiting over a
reset. As long as we detect the reset and restart the ioctl we should pick
up whatever state fixups the reset work has done and recover
perfectly, irrespective of whether the reset work has requeued some of the
requests or not.
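In pseudo-code, the bail-out I mean is just this (hypothetical waiter,
names as in the current code):

	/* A reset happened (or started) since we sampled the counter:
	 * stop touching GPU state and return -EAGAIN.  The ioctl gets
	 * restarted, and the restarted call sees whatever the reset
	 * work has fixed up, requeued requests included.
	 */
	if (reset_counter != atomic_read(&dev_priv->gpu_error.reset_counter))
		return -EAGAIN;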
> > - What precisely is the effect for waiters? I only see moving the
> > atomic_inc under the dev->struct_mutex, which shouldn't do a whole lot
> > for anyone. Plus not returning -EAGAIN when reset_in_progress, which
> > looks like it might result in missed wakeups and deadlocks with the
> > reset work.
>
> At the moment, I can trivially wedge the machine. This patch fixes that.
> The patch also ensures that we cannot raise unhandled errors from
> wait-request (EIO/EAGAIN handling has to be explicitly added to the
> caller).
Hm, how/where do we wedge the machine, and how does the fix work?
> The wait-request interface still has the wait-seqno legacy of having to
> acquire the reset_counter under the mutex before calling. That is quite
> hairy and causes a major complication later where we want to amalgamate
> waiters.
Yeah, that's a sensible change. But since I don't yet understand where
exactly the current logic results in a wedged GPU I can't convince myself
that the new one is ok.
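For reference, the legacy calling convention Chris means is roughly this
(sketch from memory of a typical caller, details elided):

	/* Under struct_mutex: bail if wedged/resetting, then sample the
	 * counter that __i915_wait_request compares against.
	 */
	ret = i915_gem_check_wedge(&dev_priv->gpu_error, interruptible);
	if (ret)
		return ret;

	reset_counter = atomic_read(&dev_priv->gpu_error.reset_counter);

	/* Drop the lock for the actual wait, passing the sampled counter
	 * along so the wait can detect a reset that happened meanwhile.
	 */
	mutex_unlock(&dev->struct_mutex);
	ret = __i915_wait_request(req, reset_counter, interruptible,
				  timeout, rps);
	mutex_lock(&dev->struct_mutex);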
> Re EAGAIN. No, it cannot result in missed wakeups since that is internal
> to the wait_request function, nor can it result in new deadlocks with the
> reset worker.
Yeah I think today with just struct_mutex it's impossible. But if we have
more locks the waiting thread could continue to grab more locks instead of
bailing out straight away. And then the reset handler would be somewhat screwed
since it isn't guaranteed to make forward progress.
> > I /thought/ that the -EIO is from check_wedge in intel_ring_begin, but
> > that part seems essentially unchanged.
>
> For two reasons, we need to protect the access to the ring, and you
> argued that (reporting of EIO from previous wedged GPUs) it was part of
> the ABI. The other ABI that I think is important is the reporting of
> EIO if the user explicitly waits on a request and it is incomplete (i.e.
> wait_ioctl).
Well then I don't get where the -EIO is from. That leaves a truly wedged
GPU as the cause, and for that case I don't get how it can happen by
accident with the current code.
> > Aside from all that, shuffling the atomic_inc (if I can convince myself
> > that it's safe) to avoid the reload_in_reset hack looks like a good
> > improvement.
>
> That's what I said at the time :-p
>
> > More confused comments below.
>
> A large proportion of the actual patch is spent trying to make using the
> atomic reset_counter safer (wrt it changing values between multiple
> tests and the subsequent action).
>
> > > diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
> > > index d83f35c8df34..107711ec956f 100644
> > > --- a/drivers/gpu/drm/i915/i915_debugfs.c
> > > +++ b/drivers/gpu/drm/i915/i915_debugfs.c
> > > @@ -4415,7 +4415,7 @@ i915_wedged_get(void *data, u64 *val)
> > > struct drm_device *dev = data;
> > > struct drm_i915_private *dev_priv = dev->dev_private;
> > >
> > > - *val = atomic_read(&dev_priv->gpu_error.reset_counter);
> > > + *val = i915_terminally_wedged(&dev_priv->gpu_error);
> >
> > Don't we have a few tests left that look at this still?
>
> One. I strongly believe that the debug interface should not be reporting
> the reset_counter but the wedged status (as that is what it is called).
And that one only writes, so we're perfectly fine I think.
> > > diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
> > > index ffd99d27fac7..5838882e10a1 100644
> > > --- a/drivers/gpu/drm/i915/i915_drv.c
> > > +++ b/drivers/gpu/drm/i915/i915_drv.c
> > > @@ -841,23 +841,31 @@ int i915_resume_legacy(struct drm_device *dev)
> > > int i915_reset(struct drm_device *dev)
> > > {
> > > struct drm_i915_private *dev_priv = dev->dev_private;
> > > + struct i915_gpu_error *error = &dev_priv->gpu_error;
> > > + unsigned reset_counter;
> > > bool simulated;
> > > int ret;
> > >
> > > intel_reset_gt_powersave(dev);
> > >
> > > mutex_lock(&dev->struct_mutex);
> > > + atomic_clear_mask(I915_WEDGED, &error->reset_counter);
> > > + reset_counter = atomic_inc_return(&error->reset_counter);
> >
> > This publishes the reset-complete too early for the modeset code I think.
> > At least it crucially relies upon dev->struct_mutex serializing
> > everything in our driver, and I don't like to cement that assumption in
> > even more.
>
> Hmm? It is already cemented in, as the reset / continuation is ordered
> with the struct_mutex. We have kicks in place for the modesetting code,
> but we have not actually ordered that with anything inside the reset
> recovery code. Judging by the few changes to i915_reset(), I am doubtful
> that has actually been improved for atomic, so that it basically relies
> on display being unaffected by anything but pending work.
Currently the reset/continuation is ordered with the 2-phase atomic_inc of
the reset counter, with the requirement that anyone who sees
reset_in_progress bails out of the kernel asap. That most of gem is
serialized with struct_mutex is just a side-effect of the current gem
locking. With the current design we could rework gem locking and it should
all still work; with this approach we pretty much cement the BKL locking
approach, I think.
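Concretely, the current scheme is roughly this (constants from
i915_drv.h, simplified from i915_handle_error / i915_reset_and_wakeup):

	#define I915_RESET_IN_PROGRESS_FLAG	1
	#define I915_WEDGED			(1 << 31)

	/* phase 1, hang declared: publish "reset in progress".  Everyone
	 * who samples the counter from now on has to bail out of the
	 * kernel asap.
	 */
	atomic_set_mask(I915_RESET_IN_PROGRESS_FLAG, &error->reset_counter);

	/* ... the error work grabs struct_mutex and recovers the GPU ... */

	/* phase 2, recovery done: the inc clears the in-progress bit (odd
	 * back to even) and bumps the reset count, then waiters are kicked
	 * so they restart their ioctls.
	 */
	atomic_inc(&error->reset_counter);
	wake_up_all(&error->reset_queue);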
> > Why do we need to move that atomic_inc again?
>
> The inc is to acknowledge the reset and to allow us to begin using the
> device again (inside the reset func). It is the equivalent of adding
> another bit for reload_in_reset. I moved it to the beginning for my
> convenience.
Ok, this needs a code comment I think.
But I thought that the crucial change was to move check_wedge from
ring_begin to wait_for_space, and so keep it from ever being hit from the
reset code.
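On the comment: something along these lines on top of the
atomic_clear_mask/atomic_inc_return hunk above, perhaps (just sketching
the comment, the intent being what you describe):

	mutex_lock(&dev->struct_mutex);

	/* Acknowledge the pending reset before touching the rings again:
	 * clear a stale wedged status from a previous failed recovery and
	 * flip the counter out of the "reset pending" phase so that the
	 * recovery code below can submit to the rings.  This replaces the
	 * old reload_in_reset flag.
	 */
	atomic_clear_mask(I915_WEDGED, &error->reset_counter);
	reset_counter = atomic_inc_return(&error->reset_counter);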
> > I guess what we could try is to set it right after we've reset the hw, but
> > before we start to re-initialize it. So move the atomic stuff below
> > intel_gpu_reset, with some neat barrier again. Maybe we should even push
> > i915_gem_reset down below that too. Not sure.
>
> Sure, I liked the simplicity though :)
>
> > > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > > index d3609e111647..6ac9c80244fa 100644
> > > --- a/drivers/gpu/drm/i915/i915_gem.c
> > > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > > @@ -85,9 +85,7 @@ i915_gem_wait_for_error(struct i915_gpu_error *error)
> > > {
> > > int ret;
> > >
> > > -#define EXIT_COND (!i915_reset_in_progress(error) || \
> > > - i915_terminally_wedged(error))
> > > - if (EXIT_COND)
> > > + if (!i915_reset_in_progress(error))
> > > return 0;
> > >
> > > /*
> > > @@ -96,17 +94,16 @@ i915_gem_wait_for_error(struct i915_gpu_error *error)
> > > * we should simply try to bail out and fail as gracefully as possible.
> > > */
> > > ret = wait_event_interruptible_timeout(error->reset_queue,
> > > - EXIT_COND,
> > > + !i915_reset_in_progress(error),
> > > 10*HZ);
> > > if (ret == 0) {
> > > DRM_ERROR("Timed out waiting for the gpu reset to complete\n");
> > > return -EIO;
> > > } else if (ret < 0) {
> > > return ret;
> > > + } else {
> > > + return 0;
> > > }
> > > -#undef EXIT_COND
> > > -
> > > - return 0;
> >
> > I like the existing color better ;-)
>
> You would like
> #define EXIT_COND (!i915_reset_in_progress(error)) ?
>
> > > int
> > > -i915_gem_check_wedge(struct i915_gpu_error *error,
> > > +i915_gem_check_wedge(unsigned reset_counter,
> > > bool interruptible)
> > > {
> > > - if (i915_reset_in_progress(error)) {
> > > + if (__i915_reset_in_progress_or_wedged(reset_counter)) {
> > > /* Non-interruptible callers can't handle -EAGAIN, hence return
> > > * -EIO unconditionally for these. */
> > > if (!interruptible)
> > > return -EIO;
> > >
> > > /* Recovery complete, but the reset failed ... */
> > > - if (i915_terminally_wedged(error))
> > > + if (__i915_terminally_wedged(reset_counter))
> > > return -EIO;
> > >
> > > - /*
> > > - * Check if GPU Reset is in progress - we need intel_ring_begin
> > > - * to work properly to reinit the hw state while the gpu is
> > > - * still marked as reset-in-progress. Handle this with a flag.
> > > - */
> > > - if (!error->reload_in_reset)
> > > - return -EAGAIN;
> > > + return -EAGAIN;
> >
> > This only works because you move the check_wedge from ring_begin to
> > wait_space, so assumes that we never change that again. Rather fragile
> > imo.
>
> Pardon? If you argue that intel_ring_begin() is a better place to check
> you have misunderstood the reason behind the patch entirely. Note that
> we still call this function to preserve ABI. The second reason for
> checking in wait_for_space is that it is the only place where we do care
> about the masked pending reset.
I looked at this under the assumption that the atomic_inc(reset_counter)
stays where it currently is.
> > > @@ -1200,7 +1191,6 @@ static int __i915_spin_request(struct drm_i915_gem_request *req, int state)
> > > /**
> > > * __i915_wait_request - wait until execution of request has finished
> > > * @req: duh!
> > > - * @reset_counter: reset sequence associated with the given request
> > > * @interruptible: do an interruptible wait (normally yes)
> > > * @timeout: in - how long to wait (NULL forever); out - how much time remaining
> > > *
> > > @@ -1215,7 +1205,6 @@ static int __i915_spin_request(struct drm_i915_gem_request *req, int state)
> > > * errno with remaining time filled in timeout argument.
> > > */
> > > int __i915_wait_request(struct drm_i915_gem_request *req,
> > > - unsigned reset_counter,
> > > bool interruptible,
> > > s64 *timeout,
> > > struct intel_rps_client *rps)
> > > @@ -1264,12 +1253,12 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
> > >
> > > /* We need to check whether any gpu reset happened in between
> > > * the caller grabbing the seqno and now ... */
> > > - if (reset_counter != atomic_read(&dev_priv->gpu_error.reset_counter)) {
> > > - /* ... but upgrade the -EAGAIN to an -EIO if the gpu
> > > - * is truely gone. */
> > > - ret = i915_gem_check_wedge(&dev_priv->gpu_error, interruptible);
> > > - if (ret == 0)
> > > - ret = -EAGAIN;
> > > + if (req->reset_counter != i915_reset_counter(&dev_priv->gpu_error)) {
> > > + /* As we do not requeue the request over a GPU reset,
> > > + * if one does occur we know that the request is
> > > + * effectively complete.
> > > + */
> > > + ret = 0;
> > > break;
> >
> > This now no longer bails out straight when a reset is in progress with
> > -EAGAIN. I fear for the deadlocks.
>
> You fear incorrectly here. Either we hold the struct_mutex in which case
> the reset waits and we complete our work without needing anything else,
> or we may want to take the struct_mutex to complete our work and so may
> end up waiting for the reset to complete. Either way the state of this
> request is the same as to when the GPU reset actually occurs. There is
> an inconsistency that we don't mark it as complete though, so we
> may try and wait on it again (only to find that the reset is in
> progress and bail out again).
We only bail once per reset since we wait for resets to complete before
acquiring dev->struct_mutex. On the modeset side we unfortunately can't do
that since that code is in the drm core.
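That is, the gem ioctls all funnel through roughly this (from memory)
before touching any gem state:

	int i915_mutex_lock_interruptible(struct drm_device *dev)
	{
		struct drm_i915_private *dev_priv = dev->dev_private;
		int ret;

		/* Wait for a pending reset to finish (or time out) ... */
		ret = i915_gem_wait_for_error(&dev_priv->gpu_error);
		if (ret)
			return ret;

		/* ... so that by the time we hold struct_mutex the recovery
		 * is done and a restarted ioctl bails at most once.
		 */
		return mutex_lock_interruptible(&dev->struct_mutex);
	}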
> The issue is not a potential deadlock (because that would be an already
> existing ABBA in the code) but resources that can be depleted by
> requests.
Besides deadlocks my other concern is that you encode reset behaviour
outside of the reset code here. If we unconditionally EAGAIN here, then
everyone will pick up whatever kind of fixup the reset code applied. Which
means requeueing of requests is contained to the reset logic. But if we
stop doing the unconditional EAGAIN then we have to fix up things here
too.
On the flip side I don't see what's easier about returning 0 here instead
of -EAGAIN.
> > > @@ -2252,6 +2250,14 @@ static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
> > > if (ret)
> > > return ret;
> > >
> > > + /* If the request was completed due to a GPU hang, we want to
> > > + * error out before we continue to emit more commands to the GPU.
> > > + */
> > > + ret = i915_gem_check_wedge(i915_reset_counter(&to_i915(engine->dev)->gpu_error),
> > > + to_i915(engine->dev)->mm.interruptible);
> > > + if (ret)
> > > + return ret;
> >
> > Don't we still have the problem with -EIO now here, just less likely than
> > from intel_ring_begin?
>
> What's the problem? intel_ring_begin() is supposed to return
> -EIO/-EAGAIN.
Hm, I thought the entire problem was that ring_begin can return -EIO,
since that's how mesa usually dies. But with your mail you're saying that
we accidentally wedge the GPU, and that's the part I don't understand yet.
Probably best if I figure this out with an irc chat directly ;-)
But without that figured out I'm not sure how to best untangle the other
bits (specifically reload_in_reset) you're changing in here.
Thanks, Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch