[Intel-gfx] [PATCH 4/4] drm/i915: Use unlocked register accesses for LUT loads

Ville Syrjälä ville.syrjala at linux.intel.com
Tue Nov 9 22:56:30 UTC 2021


On Mon, Nov 08, 2021 at 08:59:31PM +0000, Shankar, Uma wrote:
> 
> 
> > -----Original Message-----
> > From: Intel-gfx <intel-gfx-bounces at lists.freedesktop.org> On Behalf Of Ville Syrjala
> > Sent: Thursday, October 21, 2021 4:04 AM
> > To: intel-gfx at lists.freedesktop.org
> > Subject: [Intel-gfx] [PATCH 4/4] drm/i915: Use unlocked register accesses for LUT
> > loads
> > 
> > From: Ville Syrjälä <ville.syrjala at linux.intel.com>
> > 
> > We have to bash in a lot of registers to load the higher precision LUT modes. The
> > locking overhead is significant, especially as we have to get this done as quickly as
> > possible during vblank.
> > So let's switch to unlocked accesses for these. Fortunately the LUT registers are
> > mostly spread around such that two pipes do not have any registers on the same
> > cacheline. So as long as commits on the same pipe are serialized (which they are) we
> > should get away with this without angering the hardware.
> > 
> > The only exceptions are the PREC_PIPEGCMAX registers on ilk/snb which we don't
> > use atm as they are only used in the 12bit gamma mode. If/when we add support for
> > that we may need to remember to still serialize those registers, though I'm not sure
> > ilk/snb are actually affected by the same cacheline issue. I think ivb/hsw at least
> > were, but they use a different set of registers for the precision LUT.
> > 
> > I have a test case which is updating the LUTs on two pipes from a single atomic
> > commit. Running that in a loop for a minute I get the following worst case with the
> > locks in place:
> >  intel_crtc_vblank_work_start: pipe B, frame=10037, scanline=1081
> >  intel_crtc_vblank_work_start: pipe A, frame=12274, scanline=769
> >  intel_crtc_vblank_work_end: pipe A, frame=12274, scanline=58
> >  intel_crtc_vblank_work_end: pipe B, frame=10037, scanline=74
> > 
> > And here's the worst case with the locks removed:
> >  intel_crtc_vblank_work_start: pipe B, frame=5869, scanline=1081
> >  intel_crtc_vblank_work_start: pipe A, frame=7616, scanline=769
> >  intel_crtc_vblank_work_end: pipe B, frame=5869, scanline=1096
> >  intel_crtc_vblank_work_end: pipe A, frame=7616, scanline=777
> > 
> > The test was done on a snb using the 10bit 1024 entry LUT mode.
> > The vtotals for the two displays are 793 and 1125. So we can see that with the locks
> > ripped out the LUT updates are pretty nicely confined within the vblank, whereas
> > with the locks in place we're routinely blasting past the vblank end which causes
> > visual artifacts near the top of the screen.
> 
> Unprotected writes should be OK to use for LUT updates. Looks good to me.
> Reviewed-by: Uma Shankar <uma.shankar at intel.com>
> 
> One side query though: what happens when we go for higher refresh rates like 300+ Hz?
> In some cases the vblank period shrinks to as low as 8us (480Hz RR panels).

If the vblank is short enough then we're pretty much screwed and will
have to accept tearing. Maybe there's a little bit of micro
optimization left we could do to .load_lut(). But I wouldn't expect
miracles from that since we still have to do the actual mmio, and that
we can't speed up.

Long ago I did have some numbers on how long each mmio access will
take on specific platforms but I don't recall the details anymore.
I guess it might be interesting to measure it again just to have
some idea on the lower bound, and then compare that to what we
currently achieve in .load_lut(). Would at least tell us if there's
any juice left in the tank.

And of course DSB might save us on new platforms since it should
be faster than mmio, but I've not done any measurements on it.
Should be interesting to find out how it performs.

-- 
Ville Syrjälä
Intel

