[PATCH 3/7] drm/gpusvm: mark pages as dirty

Matthew Brost matthew.brost at intel.com
Thu Mar 20 19:33:22 UTC 2025


On Thu, Mar 20, 2025 at 08:29:42PM +0100, Thomas Hellström wrote:
> On Thu, 2025-03-20 at 17:30 +0000, Matthew Auld wrote:
> > If the memory is going to be accessed by the device, make sure we
> > mark the pages accordingly such that the kernel knows this. This
> > aligns with the xe-userptr code.
> > 
> > Signed-off-by: Matthew Auld <matthew.auld at intel.com>
> > Cc: Thomas Hellström <thomas.hellstrom at linux.intel.com>
> > Cc: Matthew Brost <matthew.brost at intel.com>
> > ---
> >  drivers/gpu/drm/drm_gpusvm.c | 9 +++++++++
> >  1 file changed, 9 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
> > index 7f1cf5492bba..5b4ecd36dff1 100644
> > --- a/drivers/gpu/drm/drm_gpusvm.c
> > +++ b/drivers/gpu/drm/drm_gpusvm.c
> > @@ -1471,6 +1471,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
> >  			pages[i] = page;
> >  		} else {
> >  			dma_addr_t addr;
> > +			unsigned int k;
> >  
> >  			if (is_zone_device_page(page) || zdd) {
> >  				err = -EOPNOTSUPP;
> > @@ -1489,6 +1490,14 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
> >  			range->dma_addr[j] = drm_pagemap_device_addr_encode
> >  				(addr, DRM_INTERCONNECT_SYSTEM, order,
> >  				 dma_dir);
> > +
> > +			for (k = 0; k < 1u << order; k++) {
> > +				if (!ctx->read_only)
> > +					set_page_dirty_lock(page);
> > +
> > +				mark_page_accessed(page);
> > +				page++;
> > +			}
> 
> Actually I think the userptr code did this unnecessarily. Dirtying is
> done in the CPU page-fault handler, which means it's taken care of
> during hmm_range_fault(). If the CPU PTE happens to be present and
> writeable there will be no fault, but then the dirtying was already
> done when the page was originally faulted in.
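
For anyone skimming the archive, the pattern Thomas describes looks
roughly like the sketch below for the !read_only case. This is purely
illustrative with made-up names (example_fault_writable() is not a real
drm_gpusvm function); the point is that requesting HMM_PFN_REQ_WRITE
makes hmm_range_fault() go through the normal CPU fault path for any
page that isn't already mapped writable, and that path is what dirties
and marks the page accessed.

#include <linux/hmm.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>

/*
 * Illustrative only: fault a range in writable via hmm_range_fault().
 * Pages not yet mapped writable take the regular CPU fault path, which
 * marks them dirty and accessed; pages already mapped writable were
 * dirtied when they were originally faulted in. Either way, an extra
 * set_page_dirty_lock()/mark_page_accessed() in the caller adds nothing.
 */
static int example_fault_writable(struct mmu_interval_notifier *notifier,
				  struct mm_struct *mm,
				  unsigned long start, unsigned long end,
				  unsigned long *hmm_pfns)
{
	struct hmm_range range = {
		.notifier	= notifier,
		.start		= start,
		.end		= end,
		.hmm_pfns	= hmm_pfns,
		/* Ask for a writable mapping of every page in the range. */
		.default_flags	= HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
	};
	int err;

	do {
		range.notifier_seq = mmu_interval_read_begin(notifier);
		mmap_read_lock(mm);
		err = hmm_range_fault(&range);
		mmap_read_unlock(mm);
	} while (err == -EBUSY);

	/*
	 * A real caller still has to take its driver lock and check
	 * mmu_interval_read_retry() before trusting hmm_pfns[].
	 */
	return err;
}
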
> 
> If there was a page-cleaning event in between and the dirty flag was
> dropped, then my understanding is that, in addition to firing an
> invalidation notifier, the CPU PTE is also zapped, so the page will be
> dirtied again on the next write access, either by the CPU faulting the
> page or by hmm_range_fault() if there is a GPU page-fault.
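
The cleaning side can be sketched the same way. Again these are
hypothetical names (example_svm_range / example_svm_unmap_gpu are
stand-ins, not drm_gpusvm symbols): when the core mm cleans a page it
write-protects/zaps the CPU PTE and fires the interval notifier, so the
driver only needs to bump its sequence number and drop the GPU mapping;
the next write, whether a CPU fault or a GPU fault serviced through
hmm_range_fault(), re-dirties the page.

#include <linux/mmu_notifier.h>
#include <linux/mutex.h>

struct example_svm_range {
	struct mmu_interval_notifier notifier;
	struct mutex lock;
	/* GPU page-table state for the range would live here. */
};

/* Hypothetical helper: tear down the GPU PTEs covering [start, end). */
static void example_svm_unmap_gpu(struct example_svm_range *r,
				  unsigned long start, unsigned long end)
{
	/* Zap the GPU mapping; details are driver specific. */
}

static bool example_svm_invalidate(struct mmu_interval_notifier *mni,
				   const struct mmu_notifier_range *range,
				   unsigned long cur_seq)
{
	struct example_svm_range *r = container_of(mni, typeof(*r), notifier);

	if (mmu_notifier_range_blockable(range))
		mutex_lock(&r->lock);
	else if (!mutex_trylock(&r->lock))
		return false;

	/* Invalidate any hmm_range_fault() walk racing with us. */
	mmu_interval_set_seq(mni, cur_seq);

	/* Drop the GPU mapping; the next access re-faults and re-dirties. */
	example_svm_unmap_gpu(r, range->start, range->end);

	mutex_unlock(&r->lock);
	return true;
}

static const struct mmu_interval_notifier_ops example_svm_notifier_ops = {
	.invalidate = example_svm_invalidate,
};
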
> 
> So I think we're good without this patch.
> 

I was going to suggest the same thing as Thomas - we are good without
this patch for the reasons he states.

Matt

> /Thomas
> 
> 
> 
> >  		}
> >  		i += 1 << order;
> >  		num_dma_mapped = i;
> 

