[Intel-gfx] [PATCH 2/4] drm/cache: Try to be smarter about clflushing on x86

Ben Widawsky ben at bwidawsk.net
Mon Dec 15 11:54:04 PST 2014


On Sun, Dec 14, 2014 at 08:06:20PM -0800, Jesse Barnes wrote:
> On 12/14/2014 4:59 AM, Chris Wilson wrote:
> >One of the things wbinvd is considered evil for is that it blocks the
> >CPU for an indeterminate amount of time - upsetting latency critical
> >aspects of the OS. For example, the x86/mm has similar code to use
> >wbinvd for large clflushes that caused a bit of controversy with RT:
> >
> >http://linux-kernel.2935.n7.nabble.com/PATCH-x86-Use-clflush-instead-of-wbinvd-whenever-possible-when-changing-mapping-td493751.html
> >
> >and also the use of wbinvd in the nvidia driver has also been noted as
> >evil by RT folks.
> >
> >However, as wbinvd still exists, it can't be all that bad...

That patch looks eerily similar. I guess mine is slightly better in that it
takes the cache size into account.
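
For reference, here is a minimal sketch of the heuristic I have in mind (the
helper name, the use of boot_cpu_data.x86_cache_size as the threshold, and the
>= comparison are illustrative, not the exact patch). For scale, a ~140MB
object is roughly 2.3 million 64-byte cachelines if flushed one clflush at a
time.

#include <asm/processor.h>	/* boot_cpu_data */
#include <asm/smp.h>		/* wbinvd_on_all_cpus() */
#include <asm/special_insns.h>	/* clflushopt() */
#include <asm/barrier.h>	/* mb() */

/*
 * Illustrative only: clflush cacheline by cacheline for small ranges,
 * and fall back to a full writeback-invalidate once the range exceeds
 * the boot CPU's reported cache size.  Picking the right threshold is
 * the open question, not a settled value.
 */
static void drm_clflush_virt_range_sketch(void *addr, unsigned long length)
{
	const unsigned long line = boot_cpu_data.x86_clflush_size;
	void *end = addr + length;

	/* x86_cache_size is reported in KB */
	if (length >= (unsigned long)boot_cpu_data.x86_cache_size << 10) {
		wbinvd_on_all_cpus();
		return;
	}

	mb();			/* order prior writes before flushing */
	addr = (void *)((unsigned long)addr & ~(line - 1));
	for (; addr < end; addr += line)
		clflushopt(addr);
	mb();			/* make the flushes globally visible */
}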

> 
> Yeah there are definitely tradeoffs here.  In this particular case, we're
> trying to flush out a ~140M object on every frame, which just seems silly.
> 
> There's definitely room for optimization in Mesa too; avoiding a mapping
> that marks a large bo as dirty would be good, but if we improve the kernel
> in this area, even sloppy apps and existing binaries will speed up.
> 
> Maybe we could apply this only on !llc systems or something?  I wonder how
> much wbinvd performance varies across microarchitectures; maybe Thomas's
> issue isn't really relevant anymore (at least one can hope).

I am pretty sure that from the Mesa perspective we do not hit this path on
non-LLC systems, because we end up with a linear buffer and just CPU-map it.
Beyond trying to manage cache dirtiness ourselves, the current code that
decides how to manage subregions within large textures could use some eyes,
to make sure the order in which we pick the various fallbacks makes sense for
all sizes and all platforms. I /think/ we can do better, but when I was
writing a patch, the code got messy fast.

> 
> When digging into this, I found that an optimization to remove the IPI for
> wbinvd was clobbered during a merge; maybe that should be resurrected too.
> Surely a single, global wbinvd is sufficient; we don't need to do n_cpus^2
> wbinvd + the associated invalidation bus signals here...

If this actually works, then there should be no CPU stall at all.
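
For context, the IPI-based helper as it stands is tiny (this mirrors
arch/x86/lib/cache-smp.c): every CPU executes its own wbinvd in IPI context,
and the caller waits for all of them, which is where the cross-CPU stalls
come from.

#include <linux/smp.h>		/* on_each_cpu() */
#include <asm/special_insns.h>	/* wbinvd() */

/* Runs on each online CPU via IPI; every core writes back and
 * invalidates its own caches. */
static void __wbinvd(void *dummy)
{
	wbinvd();
}

void wbinvd_on_all_cpus(void)
{
	/* wait=1: the caller blocks until every CPU has finished */
	on_each_cpu(__wbinvd, NULL, 1);
}

If the cross-call really is unnecessary for this use case, dropping it
reduces the cost to a single local wbinvd.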

> 
> Alternately, we could insert some delays into this path just to make it
> extra clear to userspace that they really shouldn't be hitting this in the
> common case (and provide some additional interfaces to let them avoid it by
> allowing flushing and dirty management in userspace).

I don't think such an extreme step is needed. If we really don't need the IPI,
then I think this path can only be faster than CLFLUSH.

> 
> Jesse

