[PATCH 1/2] drm/xe: Init MCR before any mcr register read

Upadhyay, Tejas tejas.upadhyay at intel.com
Fri Aug 9 09:11:50 UTC 2024



> -----Original Message-----
> From: Auld, Matthew <matthew.auld at intel.com>
> Sent: Friday, August 9, 2024 2:25 PM
> To: Roper, Matthew D <matthew.d.roper at intel.com>; Upadhyay, Tejas
> <tejas.upadhyay at intel.com>
> Cc: intel-xe at lists.freedesktop.org; De Marchi, Lucas
> <lucas.demarchi at intel.com>
> Subject: Re: [PATCH 1/2] drm/xe: Init MCR before any mcr register read
> 
> On 08/08/2024 20:51, Matt Roper wrote:
> > On Thu, Aug 08, 2024 at 02:58:25PM +0530, Tejas Upadhyay wrote:
> >> enable host l2 VRAM is example where MCR register is getting read
> >> before fully MCR init is done. Lets move
> >
> > "is example" makes it sound like this is just one of multiple places
> > where we're incorrectly using MCR registers before MCR handling is
> > initialized.  If there are indeed other cases, then I'd expect to see
> > more patches in this series making other changes as well.
> >
> > Also, the title seems slightly misleading as well; it makes it sound
> > like we're changing where we do the MCR init (which would run into
> > other chicken-and-egg problems with topology and GuC init), but in
> > reality we're just moving the L2 thing for Wa_16023588340 a bit later
> > in the init process, so it might be best to focus the wording on that.
> >
> >  From a quick skim of Wa_16023588340 it just says that we need to do
> > this as part of initialization (and GT reset) without really
> > specifying any specific ordering with respect to other initialization
> > tasks.  If I remember this workaround correctly it's probably
> > something that we want to do before we start having a bunch of CPU
> > writes to the LMEMBAR.  I think there are some CPU -> VRAM writes
> > during GuC initialization (e.g., the GuC ADS), but maybe not enough to
> > require doing this workaround earlier; we should probably test this
> > carefully and/or double check with the architects on whether they
> > think this would be a concern.  I know the really heavy CPU -> LMEMBAR
> > traffic would probably come later once we initialize display and the
> > fbcon starts writing to the fbdev framebuffer, so the new placement is
> > still early enough to avoid those at least.
> 
> Yeah, it seemed reasonable to enable the host l2 caching before we started
> touching vram from the host. I don't recall if it was this version of the WA or
> the one that did the timer reg write thing, but I do seem to remember the
> placement being sensitive even during module load (IIRC gt_init vs
> gt_init_early), where if done too late (gt_init) you would sometimes still get
> the issue with the register read timing out in the interrupt handler. But in
> xe_gt_init_hwconfig perhaps not an issue, since it is still quite early.

Should we have the validation team test this extensively on BMG before going ahead?

Thanks,
Tejas
> 
> >
> >
> > Matt
> >
> >> enable host l2 VRAM after MCR init.
> >>
> >> V1(Lucas):
> >>   - Reorder patch and reorder flow of L2 VRAM enable
> >>
> >> Cc: Lucas De Marchi <lucas.demarchi at intel.com>
> >> Signed-off-by: Tejas Upadhyay <tejas.upadhyay at intel.com>
> >> ---
> >>   drivers/gpu/drm/xe/xe_gt.c | 2 +-
> >>   1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
> >> index 58895ed22f6e..238c7d1053f0 100644
> >> --- a/drivers/gpu/drm/xe/xe_gt.c
> >> +++ b/drivers/gpu/drm/xe/xe_gt.c
> >> @@ -557,7 +557,6 @@ int xe_gt_init_hwconfig(struct xe_gt *gt)
> >>
> >>   	xe_gt_mcr_init_early(gt);
> >>   	xe_pat_init(gt);
> >> -	xe_gt_enable_host_l2_vram(gt);
> >>
> >>   	err = xe_uc_init(&gt->uc);
> >>   	if (err)
> >> @@ -569,6 +568,7 @@ int xe_gt_init_hwconfig(struct xe_gt *gt)
> >>
> >>   	xe_gt_topology_init(gt);
> >>   	xe_gt_mcr_init(gt);
> >> +	xe_gt_enable_host_l2_vram(gt);
> >>
> >>   out_fw:
> >>   	xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
> >> --
> >> 2.25.1
> >>
> >

