[Intel-gfx] Time, where did it go?
Dave Airlie
airlied at gmail.com
Sun Aug 9 20:01:08 UTC 2020
On Fri, 7 Aug 2020 at 17:12, Chris Wilson <chris at chris-wilson.co.uk> wrote:
>
> Quoting Dave Airlie (2020-08-04 22:45:25)
> > On Mon, 3 Aug 2020 at 05:36, Chris Wilson <chris at chris-wilson.co.uk> wrote:
> > >
> > > Quoting Dave Airlie (2020-08-02 18:56:44)
> > > > On Mon, 3 Aug 2020 at 02:44, Chris Wilson <chris at chris-wilson.co.uk> wrote:
> > > > >
> > > > > Lots of small incremental improvements to reduce execution latency,
> > > > > which basically offset the small regressions incurred when compared to
> > > > > 5.7. And then there are some major fixes found while staring agape at
> > > > > lockstat.
> > > >
> > > > What introduced the 5.7 regressions? Are they documented somewhere?
> > >
> > > No. There's a 5.8-rc1 bisect (to the merge, but not into rc1) for
> > > something in the core causing perf fluctuations, but I have not yet
> > > reproduced that one to bisect into the rc1 merge. [The system that showed
> > > the issue has historically seen strong swings from p-state setup; might
> > > it be that again?] This is from measuring simulated transcode workloads
> > > that we've built up to track KPIs, which we can then compare against the
> > > real workloads run by other groups.
> > >
> > > > What is the goal here? Is there a benchmark or application that
> > > > benefits, and can you quantify the benefit?
> > >
> > > Entirely motivated by not wanting to have to explain why there's even a
> > > 1% regression in their client metrics. They wouldn't even notice for a
> > > few releases, by which point the problem is likely compounded and we
> > > suddenly have crisis meetings.
> > >
> > > > Is the lack of userspace command submission a problem vs other vendors here?
> > >
> > > If you mean HW scheduling (which is the bit we most sorely need in
> > > order to replace this series), not really; our closest equivalent has not
> > > yet proven itself, at least in previous incarnations, adequate to their
> > > requirements.
> >
> > I don't think this sort of thing is acceptable for upstream. This is
> > the platform problem going crazy.
> > Something regresses in the kernel core, and you refactor the i915
> > driver, making it horribly more complicated, to avoid fixing the core
> > kernel regression?
>
> Far from it. We are removing the complication we added to submit to the
> HW from two places and only allowing it to be done from one, with the
> resulting simplification and removal of the associated locking.
>
Care to share the software you are using to produce these numbers?
Why isn't the initial regression bisected here? As I said, this isn't how
we respond to regression reports.
None of this tells me whether the initial regression is still there, or
whether you've just optimised something else to paper over the problem.
I'll have to dig into the stats, since I've no idea what code is producing
them. Is there a latency target for the driver, or a defined workload we
are trying to hit? What userspace is causing this, and could that
userspace be fixed?
> As for the impact of shaving an average of 0.4us from the submission
> paths?
You know you've just defined nano-optimisation here. Where is the
real-world benefit, and what applications do we see that this affects?
I'm still not sure you get the message here: stop micro-optimising
stuff to appease userspace that could instead be fixed. Adding complexity,
not just in these patches but across i915 GEM (lockless lists, nested
locking, trylocking, lockdep avoidance strategies, GPU relocs), to avoid
other teams inside Intel having to fix their userspace isn't maintaining
i915 to upstream expectations. The expression "your org chart is showing"
comes to mind. If the media driver is broken, *you* go fix the media
driver; just because you can optimise things in the kernel doesn't
mean you should.
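
To be concrete about the trylock point: a minimal, hypothetical sketch of
a trylock-with-fallback pattern (plain userspace C with invented names,
not actual i915 code) looks something like this:

/*
 * Illustration only: two submission paths guarded by a trylock.
 * The fast path submits directly when the lock is uncontended;
 * otherwise the request is deferred. It is this dual-path shape
 * that makes the locking hard to audit.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t submit_lock = PTHREAD_MUTEX_INITIALIZER;

static void submit_direct(int request)
{
        printf("submitted request %d directly\n", request);
}

static void defer_to_worker(int request)
{
        /* A real driver would hand this off to a tasklet or worker. */
        printf("deferred request %d to a worker\n", request);
}

static void submit_request(int request)
{
        if (pthread_mutex_trylock(&submit_lock) == 0) {
                /* Fast path: lock was free, submit immediately. */
                submit_direct(request);
                pthread_mutex_unlock(&submit_lock);
        } else {
                /* Slow path: contended, defer rather than block. */
                defer_to_worker(request);
        }
}

int main(void)
{
        submit_request(1);
        return 0;
}

Every call site now has two behaviours to reason about instead of one,
and that is the maintenance cost I'm objecting to.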
Please provide the simulation software so we can review these patches
on a level playing field.
Dave.