[Intel-gfx] [PATCH 0/4] Drop wbinvd_on_all_cpus usage
Thomas Hellström
thomas.hellstrom at linux.intel.com
Mon Mar 21 12:33:05 UTC 2022
On Mon, 2022-03-21 at 12:22 +0000, Tvrtko Ursulin wrote:
>
> On 21/03/2022 11:03, Thomas Hellström wrote:
> > Hi, Tvrtko.
> >
> > On 3/21/22 11:27, Tvrtko Ursulin wrote:
> > >
> > > On 19/03/2022 19:42, Michael Cheng wrote:
> > > > To align with the discussion in [1][2], this patch series drops
> > > > all usage of wbinvd_on_all_cpus within i915 by either replacing
> > > > the call with certain drm clflush helpers, or reverting to the
> > > > previous logic.
> > >
> > > AFAIU, the complaint from [1] was that it is wrong to provide
> > > non-x86 implementations under the wbinvd_on_all_cpus name.
> > > Instead, an arch-agnostic helper which achieves the same effect
> > > could be created. Does Arm have such a concept?
> >
> > I also understand Linus' email to mean that we shouldn't leak
> > incoherent IO to other architectures, i.e. any remaining wbinvd()s
> > should be x86 only.
>
> The last part is completely obvious since it is an x86 instruction
> name.
Yeah, I meant the function implementing wbinvd() semantics.
>
> But I think we can't pick a solution until we know how the concept
> maps to Arm, and that will also include seeing how drm_clflush_sg for
> Arm would look. Is there a range-based solution, or just a big hammer
> there? If the latter, then it is no good to churn all these reverts;
> instead an arch-agnostic wrapper, with a generic name, would be the
> way to go.
But my impression was that ARM would not need the range-based interface
either, because ARM is only relevant for discrete, and with discrete
we're always coherent.
So in essence it all would become:
1) Any cache flushing intended for incoherent IO is x86 only.
2) Prefer range-based flushing where possible, once any implications are
sorted out (rough sketch below).
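
To make 2) a bit more concrete, here is a minimal sketch of what a call
site could look like. The helper name and the plain sg_table parameter
are just for illustration, not taken from the series:

#include <linux/scatterlist.h>	/* struct sg_table */
#include <drm/drm_cache.h>	/* drm_clflush_sg() */
#ifdef CONFIG_X86
#include <asm/smp.h>		/* wbinvd_on_all_cpus() */
#endif

/*
 * Sketch only, name made up for illustration: flush the pages backing
 * a single object for incoherent IO. The range-based path is
 * preferred; the full-cache fallback is compiled out everywhere
 * except x86.
 */
static void flush_range_for_incoherent_io(struct sg_table *pages)
{
	if (pages) {
		/* 2) flush only the range we actually touched */
		drm_clflush_sg(pages);
		return;
	}

#ifdef CONFIG_X86
	/* 1) the big hammer stays x86-only */
	wbinvd_on_all_cpus();
#endif
}

On Arm we'd never get here anyway since discrete is coherent, which is
why the wbinvd_on_all_cpus() fallback can stay behind CONFIG_X86
instead of growing a non-x86 stub.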
/Thomas
>
> Regards,
>
> Tvrtko
>
> > Also, wbinvd_on_all_cpus() can become very costly, hence prefer the
> > range APIs when possible, provided they can be verified not to
> > degrade performance.
> >
> >
> > >
> > > Given that the series seems to be taking a different route,
> > > avoiding the need to call wbinvd_on_all_cpus rather than doing
> > > what [1] suggests (note drm_clflush_sg can still call it!?), the
> > > concern is that the series has a bunch of reverts and each one
> > > needs to be analyzed.
> >
> >
> > Agreed.
> >
> > /Thomas
> >
> >
> >
> > >
> > > For instance, looking at just the last one, 64b95df91f44, who has
> > > looked at the locking consequences that commit describes:
> > >
> > > """
> > > Inside gtt_restore_mappings() we currently take the
> > > obj->resv->lock, but in the future we need to avoid taking this
> > > fs-reclaim tainted lock as we need to extend the coverage of the
> > > vm->mutex. Take advantage of the single-threaded nature of the
> > > early resume phase, and do a single wbinvd() to flush all the GTT
> > > objects en masse.
> > >
> > > """
> > >
> > > ?
> > >
> > > Then there are suspend and freeze reverts which presumably can
> > > regress the suspend times. Any data on those?
> > >
> > > Adding Matt since he was the reviewer for that work, so he might
> > > remember something.
> > >
> > > Regards,
> > >
> > > Tvrtko
> > >
> > >
> > > > [1]. https://lists.freedesktop.org/archives/dri-devel/2021-November/330928.html
> > > >
> > > > [2]. https://patchwork.freedesktop.org/patch/475752/?series=99991&rev=5
> > > >
> > > > Michael Cheng (4):
> > > > i915/gem: drop wbinvd_on_all_cpus usage
> > > > Revert "drm/i915/gem: Almagamate clflushes on suspend"
> > > > i915/gem: Revert i915_gem_freeze to previous logic
> > > > drm/i915/gt: Revert ggtt_resume to previous logic
> > > >
> > > > drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 9 +---
> > > >   drivers/gpu/drm/i915/gem/i915_gem_pm.c     | 56 ++++++++++++++--------
> > > > drivers/gpu/drm/i915/gt/intel_ggtt.c | 17 +++----
> > > > drivers/gpu/drm/i915/gt/intel_gtt.h | 2 +-
> > > > 4 files changed, 46 insertions(+), 38 deletions(-)
> > > >