[Intel-gfx] [PATCH v4 0/6] Finally fix watermarks

Ville Syrjälä ville.syrjala at linux.intel.com
Tue Aug 2 15:59:33 UTC 2016


On Tue, Aug 02, 2016 at 05:41:50PM +0200, Maarten Lankhorst wrote:
> > On 01-08-16 at 13:48, Ville Syrjälä wrote:
> > On Mon, Aug 01, 2016 at 10:48:37AM +0200, Maarten Lankhorst wrote:
> >> On 29-07-16 at 22:33, Matt Roper wrote:
> >>> On Fri, Jul 29, 2016 at 12:39:05PM +0300, Ville Syrjälä wrote:
> >>>> On Thu, Jul 28, 2016 at 05:03:52PM -0700, Matt Roper wrote:
> >>>>> This is completely untested (and probably horribly broken/buggy), but
> >>>>> here's a quick mockup of the general approach I was thinking of for
> >>>>> ensuring DDB & WMs can be updated together while honoring the
> >>>>> three-step pipe flushing process:
> >>>>>
> >>>>>         https://github.com/mattrope/kernel/commits/experimental/lyude_ddb
> >>>>>
> >>>>> Basically the idea is to take note of what's happening to the pipe's DDB
> >>>>> allocation (shrinking, growing, unchanged, etc.) during the atomic check
> >>>>> phase;
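
For reference, the kind of per-pipe DDB bookkeeping being described could
look roughly like the sketch below. The struct and enum names are made up
for illustration; this is not the code in the branch linked above.

#include <stdint.h>

struct ddb_entry {
	uint16_t start, end;		/* block range, end exclusive */
};

enum ddb_change {
	DDB_UNCHANGED,
	DDB_SHRINKING,
	DDB_GROWING,
	DDB_MOVING,			/* same size, different position */
};

/* Classify how a pipe's DDB allocation changes between the old and new
 * state, computed during the atomic check phase. */
static enum ddb_change classify_ddb_change(const struct ddb_entry *old,
					   const struct ddb_entry *new)
{
	uint16_t old_size = old->end - old->start;
	uint16_t new_size = new->end - new->start;

	if (old->start == new->start && old->end == new->end)
		return DDB_UNCHANGED;
	if (new_size < old_size)
		return DDB_SHRINKING;
	if (new_size > old_size)
		return DDB_GROWING;
	return DDB_MOVING;
}
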
> >>>> Didn't look too closely, but I think you can't actually do that unless
> >>>> you lock all the crtcs whenever the number of active pipes is going to
> >>>> change. Meaning we'd essentially be back to the one-big-modeset-lock
> >>>> approach, which will cause missed flips and whatnot on the other pipes.
> >>>>
> >>>> The alternative I think would consist of:
> >>>> - make sure level 0 watermark never exceeds total_ddb_size/max_pipes,
> >>>>   so that a modeset doesn't have to care about the wms for the other
> >>>>   pipes not fitting in
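
As a rough illustration of that constraint (the numbers and names here are
assumptions, not actual driver code), the check would amount to:

#include <stdbool.h>
#include <stdint.h>

#define TOTAL_DDB_BLOCKS	512	/* BXT; SKL/KBL would be 896 */
#define MAX_PIPES		3

/* A pipe's level 0 watermark is only accepted if it fits in an equal
 * share of the DDB, so turning other pipes on or off can never make it
 * stop fitting. */
static bool level0_fits_worst_case(uint16_t level0_blocks)
{
	return level0_blocks <= TOTAL_DDB_BLOCKS / MAX_PIPES;
}
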
> >>> Unfortunately this part is the problem.  You might get away with doing
> >>> this on SKL/KBL which only have three planes max per pipe and a large
> >>> (896 block) DDB, but on BXT you have up to four planes (we don't
> >>> actually enable the topmost plane in a full-featured manner just yet,
> >>> but need to soon), yet the total DDB is only 512 blocks.  Sadly, the
> >>> platform with more planes was given a smaller DDB...   :-(
> >>> We're already in trouble because users that are running setups like 3x
> >>> 4K with most/all planes in use at large sizes can't find level 0
> >>> watermarks that satisfy their needs and we have to reject the whole
> >>> configuration.  If we further limit each pipe's usage to total/maxpipes
> >>> (i.e., 170 blocks per pipe on BXT), then we're going to hit similar
> >>> issues when only driving one or two displays with all of the planes
> >>> in use, even though we should have had more DDB space to work with.
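
To put rough numbers on that concern (plain C, using only the plane count
and block sizes mentioned above):

#include <stdio.h>

int main(void)
{
	int total_ddb = 512;		/* BXT DDB size in blocks */
	int max_pipes = 3;
	int planes_per_pipe = 4;	/* incl. the topmost plane */

	int per_pipe_cap = total_ddb / max_pipes;		/* 170 */
	int per_plane_avg = per_pipe_cap / planes_per_pipe;	/* 42 */

	printf("capped: %d blocks/pipe, ~%d blocks/plane (vs %d blocks free with one active pipe)\n",
	       per_pipe_cap, per_plane_avg, total_ddb);
	return 0;
}
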
> >>>
> >>> I guess serious plane usage isn't too common in desktop setups today,
> >>> but it's a very critical feature in the embedded world; we can't really
> >>> afford to cripple plane usage further.  Unfortunately, as you point out
> >>> above, this means that we have to follow the bspec's DDB allocation
> >>> method, which in turn means we need to grab _all_ CRTC locks any time
> >>> _any_ CRTC is being turned on or turned off, which is a BKL-style way of
> >>> doing things.
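
Concretely, that BKL-style scheme would boil down to something like the
following in ->atomic_check(). This is only a sketch with an invented
helper name and relies on the usual DRM headers; the helpers it calls,
drm_for_each_crtc() and drm_atomic_get_crtc_state(), are the standard ones.

static int grab_all_crtc_locks(struct drm_device *dev,
			       struct drm_atomic_state *state)
{
	struct drm_crtc *crtc;

	/* Pulling every CRTC's state into the commit takes every CRTC
	 * lock (or backs off with -EDEADLK), serializing the DDB
	 * redistribution against all other pipes. */
	drm_for_each_crtc(crtc, dev) {
		struct drm_crtc_state *crtc_state =
			drm_atomic_get_crtc_state(state, crtc);

		if (IS_ERR(crtc_state))
			return PTR_ERR(crtc_state);
	}

	return 0;
}
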
> >> Meh, I'm running into a similar issue on vlv/chv. I don't see a way around it. :(
> > How are you hitting it w/ vlv/chv? They don't even have a shared DDB.
> Not really the same, but it's about determining which power saving levels + values to set when updates may be happening in parallel.

Just like on ILK, we just need to track reasonably closely how many pipes
are active, and redo the wm level selection when that changes. As long as
the code never thinks there are fewer pipes active than there really are,
it should all be safe.
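
I.e. something along these lines, as a minimal sketch with invented names
(not actual vlv/chv code):

#include <stdint.h>

struct wm_state {
	uint8_t num_active_pipes;	/* conservative: >= the real count */
};

/* Bump the count before the pipe is actually enabled, and recompute the
 * allowed wm levels for the larger pipe count first... */
static void wm_pre_enable_pipe(struct wm_state *wm)
{
	wm->num_active_pipes++;
}

/* ...and drop it only after the pipe is fully disabled, so the wm code
 * never assumes fewer active pipes than the hardware really has. */
static void wm_post_disable_pipe(struct wm_state *wm)
{
	wm->num_active_pipes--;
}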

In theory we wouldn't even need to do that, since PM5/DDR DVFS don't
actually depend on the number of active pipes. It's just that the
combination was never validated, so it's not something that's recommended
to be used. And since the whole DDR DVFS extra latency vs. DDL deadlines
situation is such a mess, it would probably just end up causing more
underruns.

-- 
Ville Syrjälä
Intel OTC
