[Linaro-mm-sig] [RFC 1/3] dma-fence: Add boost fence op

Daniel Vetter daniel at ffwll.ch
Thu May 20 17:08:13 UTC 2021


On Thu, May 20, 2021 at 6:41 PM Christian König
<christian.koenig at amd.com> wrote:
>
> Am 20.05.21 um 18:34 schrieb Daniel Vetter:
> > On Thu, May 20, 2021 at 06:01:39PM +0200, Christian König wrote:
> >> Am 20.05.21 um 16:54 schrieb Rob Clark:
> >>> On Thu, May 20, 2021 at 7:11 AM Christian König
> >>> <christian.koenig at amd.com> wrote:
> >>>>
> >>>> Am 20.05.21 um 16:07 schrieb Rob Clark:
> >>>>> On Wed, May 19, 2021 at 11:47 PM Christian König
> >>>>> <christian.koenig at amd.com> wrote:
> >>>>>> Uff, that looks very hardware specific to me.
> >>>>> How so?  I'm not sure I agree.. and even if it were not useful for
> >>>>> some hw, it should be useful for enough drivers (and harm none), so
> >>>>> I still think it is a good idea.
> >>>>>
> >>>>> The fallback plan is to go the i915 route and stop using atomic
> >>>>> helpers and do the same thing inside the driver, but that doesn't help
> >>>>> any of the cases where you have a separate kms and gpu driver.
> >>>> Yeah, that's certainly not something we want.
> >>>>
> >>>>>> As far as I can see you could also implement this completely inside
> >>>>>> the backend by starting a timer on enable_signaling, couldn't you?
> >>>>> Not really.. I mean, the fact that something waited on a fence could
> >>>>> be a useful input signal to a gpu freq governor, but it is entirely
> >>>>> insufficient..
> >>>>>
> >>>>> If the cpu is spending a lot of time waiting on a fence, cpufreq will
> >>>>> clock down so you spend less time waiting.  And no problem has been
> >>>>> solved.  You absolutely need the concept of a missed deadline, and a
> >>>>> timer doesn't give you that.
> >>>> Ok then I probably don't understand the use case here.
> >>>>
> >>>> What exactly are you trying to solve?
> >>> Basically situations where you are ping-ponging between GPU and CPU..
> >>> for example if you are double buffering instead of triple buffering,
> >>> and doing vblank sync'd pageflips.  The GPU, without any extra signal,
> >>> could get stuck at 30fps and a low gpu freq, because it ends up idle
> >>> while waiting for an extra vblank cycle for the next back-buffer to
> >>> become available.  Whereas if it boosted up to a higher freq and
> >>> stopped missing a vblank deadline, it would be less idle due to
> >>> getting the next back-buffer sooner (due to not missing a vblank
> >>> deadline).
> >> Ok, that is the why, but what about the how?
> >>
> >> How does it help to have this boost callback instead of just starting a
> >> timer on enable_signaling and stopping it when the signal arrives?
> > Because the render side (or drm/scheduler, if msm were using that) has
> > no idea which vblank a rendering is actually for.
>
> AH! So we are basically telling the fence backend that we have just
> missed an event we waited for.
>
> So what we want to know is how long the frontend wanted to wait instead
> of how long the backend took for rendering.

tbh I'm not sure the timestamp matters at all. What we do in i915 is
boost quite aggressively, and then let the usual clock tuning whittle
it down if we overshot. Plus some cool-down to prevent
abuse/continuous boosting. I think we also differentiate between
display boosts and userspace waits.
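
Roughly the kind of thing I mean, as a hand-waved sketch (all names
here are made up for illustration, this is not actual i915 code):

#include <linux/jiffies.h>
#include <linux/spinlock.h>

void gpu_request_boost_freq(void);	/* hypothetical driver hook */

struct boost_state {
	spinlock_t lock;
	unsigned long last_boost;	/* jiffies of the last boost */
	unsigned long cooldown;		/* min jiffies between boosts */
};

/* Kick the clocks up hard, but rate-limited so that continuous
 * misses can't keep us pinned at max freq forever.
 */
static void boost_rate_limited(struct boost_state *b)
{
	unsigned long flags;

	spin_lock_irqsave(&b->lock, flags);
	if (time_after(jiffies, b->last_boost + b->cooldown)) {
		b->last_boost = jiffies;
		/* jump straight to a high clock; the usual governor
		 * whittles it back down if we overshot
		 */
		gpu_request_boost_freq();
	}
	spin_unlock_irqrestore(&b->lock, flags);
}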

On the display side we also wait until the vblank we aimed for has
passed (atm always the next one; we don't have target_frame support
like amdgpu), to avoid boosting when there's no point.

> > So boosting right when you've missed your frame (not what Rob implements
> > currently, but fixable) is the right semantics.
> >
> > The other issue is that for cpu waits, we want to differentiate between
> > fence waits that userspace does intentionally (e.g. the wait ioctl) and
> > waits that random other things are doing within the kernel to keep
> > track of progress.
> >
> > For the former we know that userspace is stuck waiting for the gpu, and we
> > probably want to boost. For the latter we most definitely do _not_ want to
> > boost.
> >
> > Otoh I do agree with you that the current API is a bit awkward, so perhaps
> > we do need a dma_fence_userspace_wait wrapper which boosts automatically
> > after a bit. And similarly perhaps a drm_vblank_dma_fence_wait, where you
> > give it a vblank target, and if the fence isn't signalled by then, we kick
> > it real hard.
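
Strawman of what that wrapper could look like, reusing the
dma_fence_boost() hook from this RFC (the 8ms grace period is
completely made up):

static signed long dma_fence_userspace_wait(struct dma_fence *fence,
					    bool intr,
					    signed long timeout)
{
	/* wait out a grace period first; only treat it as a missed
	 * deadline if the fence still hasn't signalled by then
	 */
	signed long grace = min_t(signed long, timeout,
				  msecs_to_jiffies(8));
	signed long ret;

	ret = dma_fence_wait_timeout(fence, intr, grace);
	if (ret)
		return ret;	/* signalled (or error) in time */

	/* deadline missed: boost, then wait out the rest */
	dma_fence_boost(fence);

	return dma_fence_wait_timeout(fence, intr, timeout - grace);
}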
>
> Yeah, something like a use-case-driven API would be nice to have.
>
> For this particular case I suggest that we somehow extend the
> enable_signaling callback.
>
> > But otherwise yes this is absolutely a thing that matters a ton. If you
> > look at Matt Brost's scheduler rfc, there's also a line item in there
> > about adding this kind of boosting to drm/scheduler.
>
> BTW: I still can't see this in my inbox.

You've replied already:

https://lore.kernel.org/dri-devel/20210518235830.133834-1-matthew.brost@intel.com/

It's just the big-picture plan of what areas we're all trying to
tackle, with some of the why, so that everyone knows what's coming in
the next half year at least. Probably longer until this is all sorted.
I think Matt has some PoC hacked-up pile, but nothing really to show.
-Daniel

> Do you have a link?
>
> Christian.
>
> > -Daniel
> >
> >
> >> Regards,
> >> Christian.
> >>
> >>> BR,
> >>> -R
> >>>
> >>>> Thanks,
> >>>> Christian.
> >>>>
> >>>>> BR,
> >>>>> -R
> >>>>>
> >>>>>> Christian.
> >>>>>>
> >>>>>> Am 19.05.21 um 20:38 schrieb Rob Clark:
> >>>>>>> From: Rob Clark <robdclark at chromium.org>
> >>>>>>>
> >>>>>>> Add a way to hint to the fence signaler that a fence waiter has missed a
> >>>>>>> deadline waiting on the fence.
> >>>>>>>
> >>>>>>> In some cases, missing a vblank can result in lower gpu utilization,
> >>>>>>> when really we want to go in the opposite direction and boost gpu freq.
> >>>>>>> The boost callback gives some feedback to the fence signaler that we
> >>>>>>> are missing deadlines, so it can take this into account in its freq/
> >>>>>>> utilization calculations.
> >>>>>>>
> >>>>>>> Signed-off-by: Rob Clark <robdclark at chromium.org>
> >>>>>>> ---
> >>>>>>>      include/linux/dma-fence.h | 26 ++++++++++++++++++++++++++
> >>>>>>>      1 file changed, 26 insertions(+)
> >>>>>>>
> >>>>>>> diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
> >>>>>>> index 9f12efaaa93a..172702521acc 100644
> >>>>>>> --- a/include/linux/dma-fence.h
> >>>>>>> +++ b/include/linux/dma-fence.h
> >>>>>>> @@ -231,6 +231,17 @@ struct dma_fence_ops {
> >>>>>>>          signed long (*wait)(struct dma_fence *fence,
> >>>>>>>                              bool intr, signed long timeout);
> >>>>>>>
> >>>>>>> +     /**
> >>>>>>> +      * @boost:
> >>>>>>> +      *
> >>>>>>> +      * Optional callback, to indicate that a fence waiter missed a deadline.
> >>>>>>> +      * This can serve as a signal that (if possible) whatever signals the
> >>>>>>> +      * fence should boost its clocks.
> >>>>>>> +      *
> >>>>>>> +      * This can be called in any context that can call dma_fence_wait().
> >>>>>>> +      */
> >>>>>>> +     void (*boost)(struct dma_fence *fence);
> >>>>>>> +
> >>>>>>>          /**
> >>>>>>>           * @release:
> >>>>>>>           *
> >>>>>>> @@ -586,6 +597,21 @@ static inline signed long dma_fence_wait(struct dma_fence *fence, bool intr)
> >>>>>>>          return ret < 0 ? ret : 0;
> >>>>>>>      }
> >>>>>>>
> >>>>>>> +/**
> >>>>>>> + * dma_fence_boost - hint from waiter that it missed a deadline
> >>>>>>> + *
> >>>>>>> + * @fence: the fence that caused the missed deadline
> >>>>>>> + *
> >>>>>>> + * This function gives a hint from a fence waiter that a deadline was
> >>>>>>> + * missed, so that the fence signaler can factor this into device
> >>>>>>> + * power state decisions.
> >>>>>>> + */
> >>>>>>> +static inline void dma_fence_boost(struct dma_fence *fence)
> >>>>>>> +{
> >>>>>>> +     if (fence->ops->boost)
> >>>>>>> +             fence->ops->boost(fence);
> >>>>>>> +}
> >>>>>>> +
> >>>>>>>      struct dma_fence *dma_fence_get_stub(void);
> >>>>>>>      u64 dma_fence_context_alloc(unsigned num);
> >>>>>>>
> >>> _______________________________________________
> >>> Linaro-mm-sig mailing list
> >>> Linaro-mm-sig at lists.linaro.org
> >>> https://lists.linaro.org/mailman/listinfo/linaro-mm-sig
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

