[PATCH] drm/xe/migrate: fix copy direction in access_memory

Matthew Brost matthew.brost at intel.com
Thu Jul 10 20:10:38 UTC 2025


On Thu, Jul 10, 2025 at 03:04:39PM -0500, Lucas De Marchi wrote:
> On Thu, Jul 10, 2025 at 10:22:47AM -0700, Matthew Brost wrote:
> > On Thu, Jul 10, 2025 at 02:41:29PM +0100, Matthew Auld wrote:
> > > After we do the modification on the host side, ensure we write the
> > > result back to VRAM and not the other way around, otherwise the
> > > modification will be lost if treated like a read.
> > > 
> > > Fixes: 270172f64b11 ("drm/xe: Update xe_ttm_access_memory to use GPU for non-visible access")
> > > Signed-off-by: Matthew Auld <matthew.auld at intel.com>
> > > Cc: Matthew Brost <matthew.brost at intel.com>
> > 
> > Reviewed-by: Matthew Brost <matthew.brost at intel.com>
> > 
> > > ---
> > >  drivers/gpu/drm/xe/xe_migrate.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > 
> > > diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
> > > index 4e2bdf70eb70..2adf95d35c31 100644
> > > --- a/drivers/gpu/drm/xe/xe_migrate.c
> > > +++ b/drivers/gpu/drm/xe/xe_migrate.c
> > > @@ -1848,7 +1848,7 @@ int xe_migrate_access_memory(struct xe_migrate *m, struct xe_bo *bo,
> > >  				err = xe_migrate_access_memory(m, bo,
> > >  							       offset & ~XE_CACHELINE_MASK,
> > >  							       (void *)ptr,
> > > -							       sizeof(bounce), 0);
> > > +							       sizeof(bounce), write);
> 
> drive-by comment... isn't the alignment check just above this snippet
> also wrong? This one:
> 
> 	/* Use bounce buffer for small access and unaligned access */
> 	if (len & XE_CACHELINE_MASK ||
> 	    ((uintptr_t)buf | offset) & XE_CACHELINE_MASK) {
> 
> We have:
> 
> 	XE_CACHELINE_MASK == 0x3f
> 
> and supposing:
> 
> 	buf == 0xffffffff1234563f
> 	offset == 1
> 	len == XE_CACHELINE_BYTES
> 
> this would take the small/unaligned-access path, even though the access
> is actually aligned: we are copying XE_CACHELINE_BYTES from 0xffffffff12345640.
> 
> I guess we wanted this instead?
> 
> 	if (len & XE_CACHELINE_MASK ||
> 	    ((unsigned long)buf + offset) & XE_CACHELINE_MASK)
> 
> or even:
> 
> 	if (!IS_ALIGNED(len, XE_CACHELINE_BYTES) ||
> 	    !IS_ALIGNED((unsigned long)buf + offset, XE_CACHELINE_BYTES))
> 

Yeah, I think you are right. We likely need another fixes patch here.

Want to post this one, since you spotted this?

Matt

> Lucas De Marchi
> 
> > >  				if (err)
> > >  					return err;
> > >  			} else {
> > > --
> > > 2.50.0
> > > 