[PATCH 2/2] drm/i915/gem: Migrate to system at dma-buf attach time (v5)

Matthew Auld matthew.william.auld at gmail.com
Tue Jul 13 15:06:13 UTC 2021


On Tue, 13 Jul 2021 at 15:44, Daniel Vetter <daniel at ffwll.ch> wrote:
>
> On Mon, Jul 12, 2021 at 06:12:34PM -0500, Jason Ekstrand wrote:
> > From: Thomas Hellström <thomas.hellstrom at linux.intel.com>
> >
> > Until we support p2p dma or as a complement to that, migrate data
> > to system memory at dma-buf attach time if possible.
> >
> > v2:
> > - Rebase on dynamic exporter. Update the igt_dmabuf_import_same_driver
> >   selftest to migrate if we are LMEM capable.
> > v3:
> > - Migrate also in the pin() callback.
> > v4:
> > - Migrate in attach
> > v5: (jason)
> > - Lock around the migration
> >
> > Signed-off-by: Thomas Hellström <thomas.hellstrom at linux.intel.com>
> > Signed-off-by: Michael J. Ruhl <michael.j.ruhl at intel.com>
> > Reported-by: kernel test robot <lkp at intel.com>
> > Signed-off-by: Jason Ekstrand <jason at jlekstrand.net>
> > Reviewed-by: Jason Ekstrand <jason at jlekstrand.net>
> > ---
> >  drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    | 25 ++++++++++++++++++-
> >  .../drm/i915/gem/selftests/i915_gem_dmabuf.c  |  4 ++-
> >  2 files changed, 27 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > index 9a655f69a0671..3163f00554476 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> > @@ -170,8 +170,31 @@ static int i915_gem_dmabuf_attach(struct dma_buf *dmabuf,
> >                                 struct dma_buf_attachment *attach)
> >  {
> >       struct drm_i915_gem_object *obj = dma_buf_to_obj(dmabuf);
> > +     struct i915_gem_ww_ctx ww;
> > +     int err;
> > +
> > +     for_i915_gem_ww(&ww, err, true) {
> > +             err = i915_gem_object_lock(obj, &ww);
> > +             if (err)
> > +                     continue;
> > +
> > +             if (!i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM)) {
> > +                     err = -EOPNOTSUPP;
> > +                     continue;
> > +             }
> > +
> > +             err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_SMEM);
> > +             if (err)
> > +                     continue;
> >
> > -     return i915_gem_object_pin_pages_unlocked(obj);
> > +             err = i915_gem_object_wait_migration(obj, 0);
> > +             if (err)
> > +                     continue;
> > +
> > +             err = i915_gem_object_pin_pages(obj);
> > +     }
> > +
> > +     return err;
> >  }
> >
> >  static void i915_gem_dmabuf_detach(struct dma_buf *dmabuf,
> > diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> > index 3dc0f8b3cdab0..4f7e77b1c0152 100644
> > --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> > +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> > @@ -106,7 +106,9 @@ static int igt_dmabuf_import_same_driver(void *arg)
> >       int err;
> >
> >       force_different_devices = true;
> > -     obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
> > +     obj = i915_gem_object_create_lmem(i915, PAGE_SIZE, 0);
>
> I'm wondering (and couldn't answer) whether this creates an lmem+smem
> buffer, since if we create an lmem-only buffer then the migration above
> should fail.

It's lmem-only, but it's also a kernel-internal object, so the
migration path will still happily migrate it if asked. On the other
hand, if it's a userspace object then we always have to respect the
placements it was created with.

I think for now the only use case for that is in the selftests.
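
If we want a testcase for the lmem+smem case you mention, the selftest
could also create the object with an explicit two-region placement list,
so the attach-time migration gets exercised against a userspace-style
object. Rough sketch only, assuming a user-object helper along the lines
of __i915_gem_object_create_user() that takes a placement array (exact
names may differ from what eventually lands):

	struct intel_memory_region *placements[2];
	struct drm_i915_gem_object *obj;

	/* Skip if the device has no local memory. */
	if (!i915->mm.regions[INTEL_REGION_LMEM])
		return 0;

	placements[0] = i915->mm.regions[INTEL_REGION_LMEM];
	placements[1] = i915->mm.regions[INTEL_REGION_SMEM];

	/*
	 * Userspace-style object: the placement list is respected, so the
	 * attach-time migration to SMEM has to land on the second placement
	 * rather than failing with -EOPNOTSUPP.
	 */
	obj = __i915_gem_object_create_user(i915, PAGE_SIZE,
					    placements, ARRAY_SIZE(placements));
	if (IS_ERR(obj))
		return PTR_ERR(obj);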

>
> Which I'm also not sure we have a testcase for either ...
>
> I tried to read some code here, but got a bit lost. Ideas?
> -Daniel
>
> > +     if (IS_ERR(obj))
> > +             obj = i915_gem_object_create_shmem(i915, PAGE_SIZE);
> >       if (IS_ERR(obj))
> >               goto out_ret;
> >
> > --
> > 2.31.1
> >
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
