[PATCH v2 1/1] drm/xe: Disable compression on SVM
Matthew Brost
matthew.brost at intel.com
Thu Aug 7 00:46:46 UTC 2025
On Wed, Aug 06, 2025 at 10:12:42AM +0100, Matthew Auld wrote:
> On 05/08/2025 23:06, Matthew Brost wrote:
> > This is not yet supported, so forcefully disable compression by remapping
> > the pat_index to the default XE_CACHE_WB entry for CPU address mirror VMAs.
> >
> > v2:
> > - Use XE_CACHE_WB (Auld)
> > - Only modify pat_index if compressed (Himal)
> >
> > Cc: stable at vger.kernel.org
> > Fixes: b43e864af0d4 ("drm/xe/uapi: Add DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR")
> > Signed-off-by: Matthew Brost <matthew.brost at intel.com>
> > ---
> > drivers/gpu/drm/xe/xe_pat.c | 10 ++++++++++
> > drivers/gpu/drm/xe/xe_pat.h | 10 ++++++++++
> > drivers/gpu/drm/xe/xe_vm.c | 7 ++++++-
> > 3 files changed, 26 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_pat.c b/drivers/gpu/drm/xe/xe_pat.c
> > index 2e7cb99ae87a..ac1767c812aa 100644
> > --- a/drivers/gpu/drm/xe/xe_pat.c
> > +++ b/drivers/gpu/drm/xe/xe_pat.c
> > @@ -154,6 +154,16 @@ static const struct xe_pat_table_entry xe2_pat_table[] = {
> > static const struct xe_pat_table_entry xe2_pat_ats = XE2_PAT( 0, 0, 0, 0, 3, 3 );
> > static const struct xe_pat_table_entry xe2_pat_pta = XE2_PAT( 0, 0, 0, 0, 3, 0 );
> > +bool xe_pat_index_get_comp_mode(struct xe_device *xe, u16 pat_index)
> > +{
> > + WARN_ON(pat_index >= xe->pat.n_entries);
> > +
> > + if (xe->pat.table != xe2_pat_table)
> > + return false;
> > +
> > + return xe->pat.table[pat_index].value & XE2_COMP_EN;
> > +}
> > +
> > u16 xe_pat_index_get_coh_mode(struct xe_device *xe, u16 pat_index)
> > {
> > WARN_ON(pat_index >= xe->pat.n_entries);
> > diff --git a/drivers/gpu/drm/xe/xe_pat.h b/drivers/gpu/drm/xe/xe_pat.h
> > index fa0dfbe525cd..8be2856a73af 100644
> > --- a/drivers/gpu/drm/xe/xe_pat.h
> > +++ b/drivers/gpu/drm/xe/xe_pat.h
> > @@ -58,4 +58,14 @@ void xe_pat_dump(struct xe_gt *gt, struct drm_printer *p);
> > */
> > u16 xe_pat_index_get_coh_mode(struct xe_device *xe, u16 pat_index);
> > +/**
> > + * xe_pat_index_get_comp_mode() - Extract the compression mode for the given
> > + * pat_index.
> > + * @xe: xe device
> > + * @pat_index: The pat_index to query
> > + *
> > + * Return: True if pat_index is compressed, False otherwise
> > + */
> > +bool xe_pat_index_get_comp_mode(struct xe_device *xe, u16 pat_index);
> > +
> > #endif
> > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > index 432ea325677d..83f507ea7f30 100644
> > --- a/drivers/gpu/drm/xe/xe_vm.c
> > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > @@ -2362,7 +2362,12 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
> > op->map.is_cpu_addr_mirror = flags &
> > DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR;
> > op->map.dumpable = flags & DRM_XE_VM_BIND_FLAG_DUMPABLE;
> > - op->map.pat_index = pat_index;
> > + /* XXX: We don't support SVM + compression yet */
> > + if (op->map.is_cpu_addr_mirror &&
> > + xe_pat_index_get_comp_mode(vm->xe, pat_index))
> > + op->map.pat_index = vm->xe->pat.idx[XE_CACHE_WB];
> > + else
> > + op->map.pat_index = pat_index;
>
> Just to double check here, there is nothing scary with coh_none here on igpu
> + svm? It gets rejected somewhere, or somehow there is no way to bypass CPU
> clearing on the host side?

I don't think igpu vs. dgpu matters here, as system memory can be mapped into
the GPU on either. If we apply userptr rules here, I think we'd disallow
coh_none too.

Matt

>
> > op->map.invalidate_on_bind =
> > __xe_vm_needs_clear_scratch_pages(vm, flags);
> > } else if (__op->op == DRM_GPUVA_OP_PREFETCH) {
>
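
As a minimal sketch, assuming we did want to apply the userptr rule mentioned
above and disallow coh_none for CPU address mirror VMAs as well, the bind path
could reject non-coherent PAT entries up front. xe_pat_index_get_coh_mode() is
the existing helper referenced in the diff; XE_COH_NONE and the exact placement
inside vm_bind_ioctl_ops_create() are assumptions for illustration, not part of
this patch:

	/*
	 * Hypothetical check, mirroring the userptr coherency rule: reject
	 * non-coherent PAT entries for CPU address mirror (SVM) VMAs instead
	 * of remapping them. XE_COH_NONE and the error path are assumed here.
	 */
	if (op->map.is_cpu_addr_mirror &&
	    xe_pat_index_get_coh_mode(vm->xe, pat_index) == XE_COH_NONE)
		return ERR_PTR(-EINVAL);

That would only tighten validation; the compression override in the patch is
still needed either way.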