[Intel-gfx] [PATCH 2/4] drm/cache: Try to be smarter about clflushing on x86

Jesse Barnes jbarnes at virtuousgeek.org
Sun Dec 14 20:06:20 PST 2014


On 12/14/2014 4:59 AM, Chris Wilson wrote:
> One of the things wbinvd is considered evil for is that it blocks the
> CPU for an indeterminate amount of time - upsetting latency critical
> aspects of the OS. For example, the x86/mm has similar code to use
> wbinvd for large clflushes that caused a bit of controversy with RT:
>
> http://linux-kernel.2935.n7.nabble.com/PATCH-x86-Use-clflush-instead-of-wbinvd-whenever-possible-when-changing-mapping-td493751.html
>
> and the use of wbinvd in the nvidia driver has also been noted as
> evil by RT folks.
>
> However as the wbinvd still exists, it can't be all that bad...

Yeah there are definitely tradeoffs here.  In this particular case, 
we're trying to flush a ~140MB object on every frame, which just 
seems silly.
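
Just to make the idea concrete, here's the rough shape of the heuristic 
I'm thinking of for drm_cache.c -- the threshold, the function name, and 
the plumbing are all made up for illustration, not the actual patch:

#include <linux/highmem.h>	/* kmap_atomic()/kunmap_atomic() */
#include <asm/smp.h>		/* wbinvd_on_all_cpus() */
#include <asm/cacheflush.h>	/* clflush_cache_range() */

/* Illustrative cutoff only; the right value needs measuring. */
#define WBINVD_THRESHOLD_PAGES	(8 * 1024)	/* 32MB of 4K pages */

static void drm_cache_flush_pages(struct page *pages[],
				  unsigned long num_pages)
{
	unsigned long i;

	/* Large objects: one global writeback+invalidate beats
	 * clflushing tens of thousands of cache lines, at the cost
	 * of stalling every CPU. */
	if (num_pages >= WBINVD_THRESHOLD_PAGES) {
		wbinvd_on_all_cpus();
		return;
	}

	/* Small objects: flush the affected lines only. */
	for (i = 0; i < num_pages; i++) {
		void *addr = kmap_atomic(pages[i]);

		clflush_cache_range(addr, PAGE_SIZE);
		kunmap_atomic(addr);
	}
}

Where exactly the cutoff sits would need benchmarking on a few 
different parts; below it we pay a per-cacheline clflush cost, above it 
we pay one wbinvd stall on every CPU.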

There's definitely room for optimization in Mesa too; avoiding a mapping 
that marks a large bo as dirty would be good, but if we improve the 
kernel in this area, even sloppy apps and existing binaries will speed up.

Maybe we could apply this only on !llc systems or something?  I wonder 
how much wbinvd performance varies across microarchitectures; maybe 
Thomas's issue isn't really relevant anymore (at least one can hope).
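
If we did restrict it, the gating could live in the i915 caller rather 
than drm core -- something like the sketch below, where HAS_LLC() is the 
existing i915 platform flag and the size cutoff is again just 
illustrative:

#include "i915_drv.h"	/* HAS_LLC(), struct drm_i915_gem_object */

/*
 * Hypothetical helper: only consider the wbinvd path on parts without
 * a shared LLC, where we actually have to clflush GEM objects from the
 * CPU in the first place.
 */
static bool i915_gem_obj_wants_wbinvd(struct drm_i915_gem_object *obj)
{
	struct drm_device *dev = obj->base.dev;

	return !HAS_LLC(dev) &&
	       obj->base.size >= 32 * 1024 * 1024; /* illustrative 32MB */
}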

When digging into this, I found that an optimization to remove the IPI 
for wbinvd was clobbered during a merge; maybe that should be 
resurrected too.  Surely a single, global wbinvd is sufficient; we don't 
need n_cpus separate wbinvds plus the roughly n_cpus^2 invalidation bus 
signals that go with them here...
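
For reference, the broadcast today is roughly the following (paraphrased 
from arch/x86/lib/cache-smp.c, so treat it as approximate): every CPU 
takes an IPI and runs its own wbinvd, so on top of the stalls themselves 
we also pay the interrupt and the cross-CPU coherence traffic:

#include <linux/smp.h>		/* on_each_cpu() */
#include <asm/special_insns.h>	/* wbinvd() */

/* Runs on each CPU in IPI context. */
static void __wbinvd(void *dummy)
{
	wbinvd();
}

int wbinvd_on_all_cpus(void)
{
	/* IPI every online CPU and wait for them all to finish. */
	on_each_cpu(__wbinvd, NULL, 1);
	return 0;
}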

Alternatively, we could insert some delays into this path just to make it 
extra clear to userspace that they really shouldn't be hitting this in 
the common case (and provide some additional interfaces to let them 
avoid it by allowing flushing and dirty management in userspace).

Jesse

