[PATCH v4 12/21] drm/i915/dmc: Reload PIPEDMC MMIO registers for pipe C/D on PTL+
Shankar, Uma
uma.shankar at intel.com
Thu Jun 12 05:05:35 UTC 2025
> -----Original Message-----
> From: Ville Syrjälä <ville.syrjala at linux.intel.com>
> Sent: Wednesday, June 11, 2025 7:56 PM
> To: Shankar, Uma <uma.shankar at intel.com>
> Cc: intel-gfx at lists.freedesktop.org; intel-xe at lists.freedesktop.org
> Subject: Re: [PATCH v4 12/21] drm/i915/dmc: Reload PIPEDMC MMIO registers
> for pipe C/D on PTL+
>
> On Tue, Jun 10, 2025 at 11:24:58PM +0000, Shankar, Uma wrote:
> >
> >
> > > -----Original Message-----
> > > From: Intel-xe <intel-xe-bounces at lists.freedesktop.org> On Behalf Of
> > > Ville Syrjala
> > > Sent: Monday, June 9, 2025 7:41 PM
> > > To: intel-gfx at lists.freedesktop.org
> > > Cc: intel-xe at lists.freedesktop.org
> > > Subject: [PATCH v4 12/21] drm/i915/dmc: Reload PIPEDMC MMIO
> > > registers for pipe C/D on PTL+
> > >
> > > From: Ville Syrjälä <ville.syrjala at linux.intel.com>
> > >
> > > On PTL+ the PIPEDMC on pipes C/D loses its MMIO state occasionally.
> > > Not quite sure what the specific sequence is that makes this happen
> > > (eg. simply disbling PG2 doesn't seem to be enough to trigger this on its own).
> >
> > Nit: Typo in disabling
> >
> > > Reload the MMIO registers for the affected pipes when enabling the
> > > PIPEDMC. So far I've not seen this happen on PTL pipe A/B, nor on any
> > > pipe on any other platform.
> > >
> > > The DMC program RAM doesn't appear to need manual restoring, though
> > > Windows appears to be doing exactly that on most platforms (for some
> > > of the pipes). None of this is properly documented anywhere it seems.
> >
> > Yeah, can't find any documentation for the same. Let's go with the
> > empirical behaviour, will try to get this updated in the spec as well.
>
> CI did catch the fact that TGL/derivatives lose the entire pipe DMC state when
> PG1 is disabled, and the main DMC does not restore any of it. I'll cook up some
> extra patches to deal with that. The pipe DMC is only needed for LACE on these
> platforms, so we could perhaps just not load it at all, but I think I'll keep it around
> just in case we ever want to implement some LACE stuff at some point.
Oh ok, sure. Yeah, good to keep it.
> >
> > Looks good to me.
> > Reviewed-by: Uma Shankar <uma.shankar at intel.com>
> >
> > > Signed-off-by: Ville Syrjälä <ville.syrjala at linux.intel.com>
> > > ---
> > >  drivers/gpu/drm/i915/display/intel_dmc.c | 23 +++++++++++++++++------
> > >  1 file changed, 17 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/i915/display/intel_dmc.c b/drivers/gpu/drm/i915/display/intel_dmc.c
> > > index 5a43298cd0e7..247e88265cf3 100644
> > > --- a/drivers/gpu/drm/i915/display/intel_dmc.c
> > > +++ b/drivers/gpu/drm/i915/display/intel_dmc.c
> > > @@ -578,6 +578,17 @@ static u32 dmc_mmiodata(struct intel_display *display,
> > >  	return dmc->dmc_info[dmc_id].mmiodata[i];
> > >  }
> > >
> > > +static void intel_dmc_load_mmio(struct intel_display *display, enum intel_dmc_id dmc_id)
> > > +{
> > > +	struct intel_dmc *dmc = display_to_dmc(display);
> > > +	int i;
> > > +
> > > +	for (i = 0; i < dmc->dmc_info[dmc_id].mmio_count; i++) {
> > > +		intel_de_write(display, dmc->dmc_info[dmc_id].mmioaddr[i],
> > > +			       dmc_mmiodata(display, dmc, dmc_id, i));
> > > +	}
> > > +}
> > > +
> > >  void intel_dmc_enable_pipe(struct intel_display *display, enum pipe pipe)
> > >  {
> > >  	enum intel_dmc_id dmc_id = PIPE_TO_DMC_ID(pipe);
> > > @@ -585,6 +596,10 @@ void intel_dmc_enable_pipe(struct intel_display *display, enum pipe pipe)
> > >  	if (!is_valid_dmc_id(dmc_id) || !has_dmc_id_fw(display, dmc_id))
> > >  		return;
> > >
> > > +	/* on PTL pipe C/D PIPEDMC MMIO state is lost sometimes */
> > > +	if (DISPLAY_VER(display) >= 30 && pipe >= PIPE_C)
> > > +		intel_dmc_load_mmio(display, dmc_id);
> > > +
> > >  	if (DISPLAY_VER(display) >= 20) {
> > >  		intel_de_write(display, PIPEDMC_INTERRUPT(pipe), pipedmc_interrupt_mask(display));
> > >  		intel_de_write(display, PIPEDMC_INTERRUPT_MASK(pipe), ~pipedmc_interrupt_mask(display));
> > > @@ -710,12 +725,8 @@ void intel_dmc_load_program(struct intel_display *display)
> > >
> > > preempt_enable();
> > >
> > > -	for_each_dmc_id(dmc_id) {
> > > -		for (i = 0; i < dmc->dmc_info[dmc_id].mmio_count; i++) {
> > > -			intel_de_write(display, dmc->dmc_info[dmc_id].mmioaddr[i],
> > > -				       dmc_mmiodata(display, dmc, dmc_id, i));
> > > -		}
> > > -	}
> > > +	for_each_dmc_id(dmc_id)
> > > +		intel_dmc_load_mmio(display, dmc_id);
> > >
> > > power_domains->dc_state = 0;
> > >
> > > --
> > > 2.49.0
> >
>
> --
> Ville Syrjälä
> Intel