[PATCH] drm/i915/ttm: don't leak the ccs state
C, Ramalingam
ramalingam.c at intel.com
Thu Jul 28 09:36:13 UTC 2022
> -----Original Message-----
> From: Auld, Matthew <matthew.auld at intel.com>
> Sent: Thursday, July 28, 2022 1:38 PM
> To: C, Ramalingam <ramalingam.c at intel.com>; intel-gfx at lists.freedesktop.org
> Cc: dri-devel at lists.freedesktop.org; Thomas Hellström <thomas.hellstrom at linux.intel.com>
> Subject: Re: [PATCH] drm/i915/ttm: don't leak the ccs state
>
> On 28/07/2022 00:16, C, Ramalingam wrote:
> >> -----Original Message-----
> >> From: Auld, Matthew <matthew.auld at intel.com>
> >> Sent: Wednesday, July 27, 2022 10:14 PM
> >> To: intel-gfx at lists.freedesktop.org
> >> Cc: dri-devel at lists.freedesktop.org; Thomas Hellström
> >> <thomas.hellstrom at linux.intel.com>; C, Ramalingam
> >> <ramalingam.c at intel.com>
> >> Subject: [PATCH] drm/i915/ttm: don't leak the ccs state
> >>
> >> The kernel only manages the CCS state for lmem-only objects; however,
> >> the kernel should still take care not to leak the CCS state from the previous user.
> >>
> >> Fixes: 48760ffe923a ("drm/i915/gt: Clear compress metadata for Flat-ccs objects")
> >> Signed-off-by: Matthew Auld <matthew.auld at intel.com>
> >> Cc: Thomas Hellström <thomas.hellstrom at linux.intel.com>
> >> Cc: Ramalingam C <ramalingam.c at intel.com>
> >> ---
> >> drivers/gpu/drm/i915/gt/intel_migrate.c | 23 ++++++++++++++++++++++-
> >> 1 file changed, 22 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/drivers/gpu/drm/i915/gt/intel_migrate.c b/drivers/gpu/drm/i915/gt/intel_migrate.c
> >> index a69b244f14d0..9a0814422ba4 100644
> >> --- a/drivers/gpu/drm/i915/gt/intel_migrate.c
> >> +++ b/drivers/gpu/drm/i915/gt/intel_migrate.c
> >> @@ -708,7 +708,7 @@ intel_context_migrate_copy(struct intel_context *ce,
> >> u8 src_access, dst_access;
> >> struct i915_request *rq;
> >> int src_sz, dst_sz;
> >> - bool ccs_is_src;
> >> + bool ccs_is_src, overwrite_ccs;
> >> int err;
> >>
> >> GEM_BUG_ON(ce->vm != ce->engine->gt->migrate.context->vm);
> >> @@ -749,6 +749,8 @@ intel_context_migrate_copy(struct intel_context *ce,
> >> get_ccs_sg_sgt(&it_ccs, bytes_to_cpy);
> >> }
> >>
> >> + overwrite_ccs = HAS_FLAT_CCS(i915) && !ccs_bytes_to_cpy && dst_is_lmem;
> >> +
> >> src_offset = 0;
> >> dst_offset = CHUNK_SZ;
> >> if (HAS_64K_PAGES(ce->engine->i915)) {
> >> @@ -852,6 +854,25 @@ intel_context_migrate_copy(struct intel_context *ce,
> >> if (err)
> >> goto out_rq;
> >> ccs_bytes_to_cpy -= ccs_sz;
> >> + } else if (overwrite_ccs) {
> >> + err = rq->engine->emit_flush(rq, EMIT_INVALIDATE);
> >> + if (err)
> >> + goto out_rq;
> >> +
> >> + /*
> >> + * While we can't always restore/manage the CCS state,
> >> + * we still need to ensure we don't leak the CCS state
> >> + * from the previous user, so make sure we overwrite it
> >> + * with something.
> >> + */
> >> + err = emit_copy_ccs(rq, dst_offset, INDIRECT_ACCESS,
> >> + dst_offset, DIRECT_ACCESS, len);
> >> + if (err)
> >> + goto out_rq;
> >> +
> >> + err = rq->engine->emit_flush(rq, EMIT_INVALIDATE);
> >> + if (err)
> >> + goto out_rq;
> > The change looks good for its purpose. But shouldn't this be part of the lmem allocation itself?
>
> Hmm, what do you mean by the lmem allocation itself? The scenarios I had in mind here were:
>
> - { lmem, smem } buffer, object is allocated in smem (like with initial
> mmap) and then moved to lmem (smem -> lmem).
>
> - { lmem, smem } buffer, object is allocated in lmem, but then evicted to smem. Object then moved
> back to lmem (smem -> lmem).
>
> - { lmem, smem } buffer with CPU_ACCESS flag on a small-bar system, object is allocated in
> non-mappable lmem, and then moved to the mappable part of lmem on fault (lmem -> lmem).
>
> In all the above cases the CCS state is left uninitialised, AFAICT.
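
To make the scenarios above concrete, here is a minimal sketch in plain C (not the real i915
code; needs_ccs_overwrite() is a made-up helper whose parameters mirror the locals in
intel_context_migrate_copy()) of when the new overwrite path is taken:

#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative sketch only, not the actual i915 helpers. The parameters
 * mirror the locals used in intel_context_migrate_copy().
 */
static bool needs_ccs_overwrite(bool has_flat_ccs, bool dst_is_lmem,
                                uint64_t ccs_bytes_to_cpy)
{
        /*
         * No CCS data travels with the object (for lmem+smem placements the
         * kernel doesn't manage the CCS state, so ccs_bytes_to_cpy is 0),
         * but the destination is lmem on a flat-CCS platform, so whatever
         * stale CCS state backs the destination must be overwritten.
         */
        return has_flat_ccs && !ccs_bytes_to_cpy && dst_is_lmem;
}

In all three cases the source side contributes no CCS data while the destination is lmem, so
the helper would return true and the copy also scrubs the stale CCS state via emit_copy_ccs().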
As we discussed offline, this will clear the old CCS state of the destination lmem when a
smem+lmem object is migrated from smem to lmem. This seems to be the right place to fix
the info leak of the previous CCS state.
Reviewed-by: Ramalingam C <ramalingam.c at intel.com>
>
> >
> > Ram.
> >> }
> >>
> >> /* Arbitration is re-enabled between requests. */
> >> --
> >> 2.37.1
> >