[PATCH 5/7] drm/i915/gem/ttm: Respect the object region in placement_from_obj
Matthew Auld
matthew.william.auld at gmail.com
Mon Jul 19 13:34:44 UTC 2021
On Fri, 16 Jul 2021 at 20:49, Jason Ekstrand <jason at jlekstrand.net> wrote:
>
> On Fri, Jul 16, 2021 at 1:45 PM Matthew Auld
> <matthew.william.auld at gmail.com> wrote:
> >
> > On Fri, 16 Jul 2021 at 18:39, Jason Ekstrand <jason at jlekstrand.net> wrote:
> > >
> > > On Fri, Jul 16, 2021 at 11:00 AM Matthew Auld
> > > <matthew.william.auld at gmail.com> wrote:
> > > >
> > > > On Fri, 16 Jul 2021 at 16:52, Matthew Auld
> > > > <matthew.william.auld at gmail.com> wrote:
> > > > >
> > > > > On Fri, 16 Jul 2021 at 15:10, Jason Ekstrand <jason at jlekstrand.net> wrote:
> > > > > >
> > > > > > On Fri, Jul 16, 2021 at 8:54 AM Matthew Auld
> > > > > > <matthew.william.auld at gmail.com> wrote:
> > > > > > >
> > > > > > > On Thu, 15 Jul 2021 at 23:39, Jason Ekstrand <jason at jlekstrand.net> wrote:
> > > > > > > >
> > > > > > > > Whenever we had a user object (n_placements > 0), we were ignoring
> > > > > > > > obj->mm.region and always putting obj->placements[0] as the requested
> > > > > > > > region. For LMEM+SMEM objects, this was causing them to get shoved into
> > > > > > > > LMEM on every i915_ttm_get_pages() even when SMEM was requested by, say,
> > > > > > > > i915_gem_object_migrate().
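
For reference, the change being described boils down to roughly the
following -- a simplified sketch of i915_ttm_placement_from_obj(), with
the helper signatures written from memory, so not the literal diff:

	static void
	i915_ttm_placement_from_obj(const struct drm_i915_gem_object *obj,
				    struct ttm_place *requested,
				    struct ttm_place *busy,
				    struct ttm_placement *placement)
	{
		unsigned int num_allowed = obj->mm.n_placements;
		unsigned int i;

		placement->num_placement = 1;
		/* Previously the requested place was placements[0] whenever
		 * the user supplied a placement list; with the patch we
		 * always honour the object's current region instead. */
		i915_ttm_place_from_region(obj->mm.region, requested);

		/* The user-supplied placements still make up the busy list. */
		placement->num_busy_placement = num_allowed;
		for (i = 0; i < num_allowed; ++i)
			i915_ttm_place_from_region(obj->mm.placements[i],
						   busy + i);

		if (!num_allowed) {
			*busy = *requested;
			placement->num_busy_placement = 1;
		}

		placement->placement = requested;
		placement->busy_placement = busy;
	}
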
> > > > > > >
> > > > > > > i915_ttm_migrate calls i915_ttm_place_from_region() directly with the
> > > > > > > requested region, so there shouldn't be an issue with migration right?
> > > > > > > Do you have some more details?
> > > > > >
> > > > > > With i915_ttm_migrate directly, no. But, in the last patch in the
> > > > > > series, we're trying to migrate LMEM+SMEM buffers into SMEM on
> > > > > > attach() and pin it there. This blows up in a very unexpected (IMO)
> > > > > > way. The flow goes something like this:
> > > > > >
> > > > > > - Client attempts a dma-buf import from another device
> > > > > > - In attach() we call i915_gem_object_migrate() which calls
> > > > > > i915_ttm_migrate() which migrates as requested.
> > > > > > - Once the migration is complete, we call i915_gem_object_pin_pages()
> > > > > > which calls i915_ttm_get_pages() which depends on
> > > > > > i915_ttm_placement_from_obj() and so migrates it right back to LMEM.
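
For concreteness, the attach() path being described looks something like
the below -- paraphrased into a rough sketch from the flow above, not the
exact code in the series, and the exact helper calls are an assumption on
my part:

	static int i915_gem_dmabuf_attach(struct dma_buf *dmabuf,
					  struct dma_buf_attachment *attach)
	{
		struct drm_i915_gem_object *obj = dma_buf_to_obj(dmabuf);
		int err;

		err = i915_gem_object_lock_interruptible(obj, NULL);
		if (err)
			return err;

		/* Step 1: migrate the LMEM+SMEM object into SMEM. This part
		 * behaves as expected. */
		err = i915_gem_object_migrate(obj, NULL, INTEL_REGION_SMEM);

		/* Step 2: pin the pages there. i915_ttm_get_pages() builds
		 * its placement via i915_ttm_placement_from_obj(), which
		 * prefers placements[0] (LMEM), so without this patch the
		 * object can bounce straight back to LMEM here. */
		if (!err)
			err = __i915_gem_object_get_pages(obj);

		i915_gem_object_unlock(obj);
		return err;
	}
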
> > > > >
> > > > > The mm.pages must be NULL here, otherwise it would just increment the
> > > > > pages_pin_count?
> > >
> > > Given that the test is using the ____four_underscores version, it
> > > doesn't have that check. However, this executes after we've done the
> > > dma-buf import which pinned pages. So we should definitely have
> > > pages.
> >
> > We shouldn't call ____four_underscores() if we might already have
> > pages though. Under non-TTM that would leak the pages, and in TTM we
> > might hit the WARN_ON(mm->pages) in __i915_ttm_get_pages(), if for
> > example nothing was moved. I take it we can't just call pin_pages()?
> > Four scary underscores usually mean "don't call this in normal code".
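
i.e. the usual pattern would be something like the below -- a hand-written
illustration of the point, not code copied from the tree:

	i915_gem_object_lock(obj, NULL);
	err = __i915_gem_object_get_pages(obj);	/* checks has_pages() first */
	i915_gem_object_unlock(obj);
	if (err)
		return err;

	/* ... pages are populated and pinned, safe to use here ... */

	i915_gem_object_unpin_pages(obj);
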
>
> I've switched the ____four_underscores call to a __two_underscores in
> the selftests and it had no effect, good or bad. But, still, probably
> better to call that one.
>
> > >
> > > > > >
> > > > > > Maybe the problem here is actually that our TTM code isn't respecting
> > > > > > obj->mm.pages_pin_count?
> > > > >
> > > > > I think if the resource is moved, we always nuke the mm.pages after
> > > > > being notified of the move. Also, TTM is not allowed to move
> > > > > pinned buffers.
> > > > >
> > > > > I guess if we are evicted/swapped (assuming we are not holding the
> > > > > object lock and the object isn't pinned), the future call to
> > > > > get_pages() will see mm.pages = NULL, even though the ttm_resource
> > > > > is still there, and because we prioritise placements[0] instead of
> > > > > mm.region we end up moving it for no good reason. But in your case
> > > > > you are holding the lock, or it's pinned? Also, is this just with
> > > > > the selftest, or something real?
> > > >
> > > > Or at least in the selftest I see ____i915_gem_object_get_pages()
> > > > which doesn't even consider the mm.pages AFAIK.
> > >
> > > The bogus migration is happening as part of the
> > > __i915_gem_object_get_pages() (2 __underscores) call in
> > > i915_gem_dmabuf_attach (see last patch). That code is attempting to
> > > migrate the BO to SMEM and then pin it there using the obvious calls
> > > to do so. However, in the pin_pages call, it gets implicitly migrated
> > > back to LMEM thanks to i915_ttm_get_pages(). Why is _get_pages()
> > > migrating things at all?
> >
> > Not sure yet, but __two_underscores() checks if
> > i915_gem_object_has_pages() before actually calling into
> > i915_ttm_get_pages(), so the mm.pages would have to be NULL here for
> > some reason, so best guess is something to do with move_notify().
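
For reference, __two_underscores() is roughly the following (simplified,
quoting from memory rather than the exact tree):

	int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
	{
		int err;

		assert_object_held(obj);

		if (unlikely(!i915_gem_object_has_pages(obj))) {
			GEM_BUG_ON(i915_gem_object_has_pinned_pages(obj));

			/* Only here do we reach i915_ttm_get_pages(). */
			err = ____i915_gem_object_get_pages(obj);
			if (err)
				return err;

			smp_mb__before_atomic();
		}
		atomic_inc(&obj->mm.pages_pin_count);

		return 0;
	}

So it should only end up in i915_ttm_get_pages() when mm.pages is NULL.
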
>
> Did a bit of experimenting along those lines and added the following
> to the self-test BEFORE the export/import:
>
> i915_gem_object_lock(obj, NULL);
> err = __i915_gem_object_get_pages(obj);
> if (!err)
>         __i915_gem_object_unpin_pages(obj);
> i915_gem_object_unlock(obj);
> if (err) {
>         pr_err("__i915_gem_object_get_pages failed with err=%d\n", err);
>         goto out_ret;
> }
>
> This seems to make the migration happen as expected without this
> patch. So it seems the problem only exists on buffers that haven't
> gotten any backing storage yet (if I'm understanding get_pages
> correctly).
>
> One potential work-around (not sure if this is a good idea or not!)
> would be to do this inside dmabuf_attach(). Is this reliable? Once
> it has pages will it always have pages? Or are there crazy races I
> need to be worried about here?
It turns out that the i915_ttm_adjust_gem_after_move() call in
ttm_object_init will always update mm.region to system memory (so that it
matches the ttm resource), which seems reasonable given the default system
placeholder thing, but does seem slightly iffy, since we haven't actually
moved/allocated anything.
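
Roughly speaking, something like the below happens at init time -- this is
an illustrative sketch only, not the real function body, and the set_region
helper is made up for the example:

	struct drm_i915_private *i915 = to_i915(bo->base.dev);

	/* Sync the GEM bookkeeping to the current TTM resource, which for a
	 * freshly created BO is the SYSTEM placeholder, so mm.region ends up
	 * pointing at smem even though nothing has been allocated or moved
	 * yet. */
	if (bo->resource->mem_type == TTM_PL_SYSTEM)
		i915_gem_object_set_region(obj, i915->mm.regions[INTEL_REGION_SMEM]); /* hypothetical helper */
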
So effectively i915_ttm_migrate(SYSTEM) becomes a noop here, since
mm.region == mr. Which of course means that when we actually call
get_pages(), all that happens is that we allocate the pages in system
memory (or, without this patch, in placements[0]). Also, with this patch an
lmem+smem object will always be placed in smem first, regardless of the
placements ordering.
For now we could maybe just split i915_ttm_adjust_gem_after_move(), so that
we skip the part which updates mm.region here in the init portion, since
that should only happen when we try to place the object for real?
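
Something like the below, just to illustrate what I mean (completely
untested, and the split-out helper names are made up):

	/* Hypothetical split: keep the flags/cache-level sync and the
	 * mm.region sync in separate helpers. */
	static void i915_ttm_adjust_gem_flags_after_move(struct drm_i915_gem_object *obj);  /* hypothetical */
	static void i915_ttm_adjust_gem_region_after_move(struct drm_i915_gem_object *obj); /* hypothetical */

	static void i915_ttm_adjust_gem_after_move(struct drm_i915_gem_object *obj)
	{
		i915_ttm_adjust_gem_flags_after_move(obj);
		i915_ttm_adjust_gem_region_after_move(obj);
	}

	/* ...and the object-init path would then call only
	 * i915_ttm_adjust_gem_flags_after_move(), leaving mm.region alone
	 * until we actually place the object for real. */
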
>
> --Jason