[PATCH 37/45] drm/ttm: add a helper to allocate a temp tt for copies.

Daniel Vetter daniel at ffwll.ch
Fri Sep 25 13:17:55 UTC 2020


On Fri, Sep 25, 2020 at 11:34 AM Christian König
<christian.koenig at amd.com> wrote:
>
> Am 25.09.20 um 10:18 schrieb Daniel Vetter:
> > On Fri, Sep 25, 2020 at 10:16 AM Daniel Vetter <daniel at ffwll.ch> wrote:
> >> On Fri, Sep 25, 2020 at 9:39 AM Christian König
> >> <christian.koenig at amd.com> wrote:
> >>> Am 25.09.20 um 01:14 schrieb Dave Airlie:
> >>>> On Thu, 24 Sep 2020 at 22:42, Christian König <christian.koenig at amd.com> wrote:
> >>>>> Am 24.09.20 um 07:18 schrieb Dave Airlie:
> >>>>>> From: Dave Airlie <airlied at redhat.com>
> >>>>>>
> >>>>>> All the accel moves do the same pattern here, provide a helper
> >>>>> And exactly that pattern I want to get away from.
> >>>> Currently this is just refactoring out the helper code in each driver, but I see
> >>>> that since it calls bo_mem_space we are probably moving a bit in the wrong direction.
> >>> Exactly that's why I'm noting this.
> >>>
> >>>>> Look at what happens if we (for example) have a VRAM -> SYSTEM move:
> >>>>>
> >>>>> 1. TTM allocates a new ttm_resource object in the SYSTEM domain.
> >>>>> 2. We call the driver to move from VRAM to SYSTEM.
> >>>>> 3. The driver finds that it can't do this directly and calls TTM to allocate GTT.
> >>>>> 4. Since we are maybe out of GTT, TTM evicts a different BO from GTT to
> >>>>> SYSTEM and calls the driver again.
> >>>>>
> >>>>> This is a horrible ping/pong between driver/TTM/driver/TTM/driver and we
> >>>>> should stop that immediately.
> >>>>>
> >>>>> My suggestion is that we rewrite how drivers call the ttm_bo_validate()
> >>>>> function so that we can guarantee that this never happens.
> >>>>>
> >>>>> What do you think?
> >>>> I think that is likely the next step I'd like to take after this
> >>>> refactor; it's a lot bigger, and I'm not sure how it will look yet.
> >>> Agree, yes. I have some ideas in mind for that, but not fully baked either.
> >>>
> >>>> Do we envision the driver calling validate in a loop, where validate
> >>>> tells the driver when it can't find space, and the driver then does
> >>>> eviction and calls validate again?
> >>> Not in a loop, but more like in a chain.
> >>>
> >>> My plan is something like this:
> >>> Instead of having "normal" and "busy" placements we have a flag in the
> >>> context that says whether evictions are allowed or not.
> >>> The call to ttm_bo_validate is then replaced with two calls: first one
> >>> without evictions, and if that didn't work, one with evictions.
> >>>
> >>> Then the normal validate sequence should look like this:
> >>> 1. If a BO is in the SYSTEM (or SWAP) domain we validate it to GTT first
> >>> with evictions=true.
> >>> 2. If a BO should be in VRAM we then validate it to VRAM. Evictions
> >>> are only allowed if the GEM flags say that GTT is not desired.
> >> That solves the trouble when you move a bo into vram as part of
> >> validate. But I'm not seeing how this solves the "need gtt mapping to
> >> move something out of vram" problem.
>
> Eviction is not a problem because the driver gets asked where to put an
> evicted BO and then TTM does all the moving.

Hm, I guess then I don't quite get where you see the ping-pong
happening; I thought that only happens when evicting stuff. But hey,
not much actual working experience with ttm over here, I'm just
reading :-) I thought the issue is that ttm wants to evict from
$something to SYSTEM, and to do that the driver first needs to set up
a GTT mapping for the SYSTEM ttm_resource allocation, so that it can
use the blitter/sdma engine or whatever to move the data over. But for
swap-in/validation I'm confused how you can end up with the "wrong"
placement; that feels like a driver bug.
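
I.e. roughly this shape in the driver's ->move() callback, which is
the pattern the patch here consolidates (hand-waving sketch only; all
the driver_* helpers are made-up names for what e.g. the various
*_move_vram_ram() functions do around ttm_bo_mem_space()):

static int driver_move_vram_to_system(struct ttm_buffer_object *bo,
				      struct ttm_operation_ctx *ctx,
				      struct ttm_resource *new_mem)
{
	struct ttm_resource tmp_mem;
	int ret;

	/* TTM asked for VRAM -> SYSTEM, but the copy engine can only see
	 * memory through a GTT mapping, so grab a temporary GTT placement
	 * for the destination. This is the call back into
	 * ttm_bo_mem_space() that can trigger further evictions, i.e. the
	 * ping-pong. */
	ret = driver_alloc_tmp_gtt(bo, ctx, &tmp_mem);		/* made up */
	if (ret)
		return ret;

	/* Bind the system pages into that GTT range so the blitter/sdma
	 * engine can reach them. */
	ret = driver_bind_tmp_gtt(bo, &tmp_mem);		/* made up */
	if (ret)
		goto out;

	/* Do the actual VRAM -> (GTT-mapped) system copy. */
	ret = driver_copy_blit(bo, &tmp_mem, ctx);		/* made up */
	if (ret)
		goto out;

	/* Finally switch the bo over to new_mem (SYSTEM). */
	ret = driver_finish_move(bo, ctx, &tmp_mem, new_mem);	/* made up */

out:
	driver_put_tmp_gtt(bo, &tmp_mem);			/* made up */
	return ret;
}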

How exactly can you get into a situation with validation where ttm
gives you SYSTEM but not GTT, and the driver has to fix that up? I'm
not really following, I think; I guess there's something obvious I'm
missing.

> >> Or should we instead move the entire eviction logic out from ttm into
> >> drivers, building it up from helpers?
>
> I've been playing with that thought for a while as well, but then
> decided against it.
>
> The main problem I see is that we sometimes need to evict things from
> other drivers.
>
> E.g. when we have overcommitted system memory and need to move things
> to swap.

Hm, yeah, ttm has that limit to avoid stepping into the shrinker.
Directly calling into another driver to stay within the limit, while
ignoring that there are other memory users and caches out there, still
feels wrong; it's kinda a parallel world vs shrinker callbacks. And
there's nothing stopping you from doing the SYSTEM->SWAP movement from
a shrinker callback with the locking rules we've established around
dma_resv (it just needs to be a trylock).

So it feels a bit backwards if we design ttm eviction around this part of it ...
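
Something in this direction should work (rough sketch only; my_device,
my_bo and my_bo_swapout() are made-up driver bits, the shrinker and
dma_resv calls are the real APIs):

#include <linux/dma-resv.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/shrinker.h>
#include <linux/spinlock.h>

struct my_bo {
	struct dma_resv *resv;		/* the object's reservation lock */
	struct list_head lru;		/* on my_device.system_lru */
};

struct my_device {
	struct shrinker shrinker;
	spinlock_t lru_lock;
	struct list_head system_lru;	/* BOs currently in SYSTEM */
	unsigned long shrinkable_pages;
};

static unsigned long my_shrink_count(struct shrinker *shrink,
				     struct shrink_control *sc)
{
	struct my_device *dev =
		container_of(shrink, struct my_device, shrinker);

	return READ_ONCE(dev->shrinkable_pages);
}

static unsigned long my_shrink_scan(struct shrinker *shrink,
				    struct shrink_control *sc)
{
	struct my_device *dev =
		container_of(shrink, struct my_device, shrinker);
	unsigned long scanned = 0, freed = 0;
	struct my_bo *bo;

	spin_lock(&dev->lru_lock);
	while (scanned < sc->nr_to_scan &&
	       (bo = list_first_entry_or_null(&dev->system_lru,
					      struct my_bo, lru))) {
		scanned++;

		/* Reclaim context: only ever trylock the reservation. */
		if (!dma_resv_trylock(bo->resv)) {
			list_move_tail(&bo->lru, &dev->system_lru);
			continue;
		}

		list_del_init(&bo->lru);
		spin_unlock(&dev->lru_lock);

		/* SYSTEM -> SWAP; made-up helper, assumed to return the
		 * number of pages freed and to reparent the bo. */
		freed += my_bo_swapout(bo);

		dma_resv_unlock(bo->resv);
		spin_lock(&dev->lru_lock);
	}
	spin_unlock(&dev->lru_lock);

	return freed ? freed : SHRINK_STOP;
}

/* hooked up via dev->shrinker.count_objects/scan_objects and
 * register_shrinker(&dev->shrinker) at init time */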

> >> Then drivers which need gtt for
> >> moving stuff out of vram can do that right away. Also, this would
> >> allow us to implement very fancy eviction algorithms like all the
> >> nonsense we're doing in i915 for gtt handling on gen2/3 (but I really
> >> hope that never ever becomes a thing again in future gpus, so this is
> >> maybe more a what-if kind of thing). Not sure what that would look
> >> like; maybe a special validate function which takes a ttm_resource the
> >> driver already found (through evicting stuff or whatever) and then ttm
> >> just does the move and book-keeping and everything. And drivers would
> >> at first only call validate without allowing any eviction. Ofc anyone
> >> without special needs could use the standard eviction function that
> >> validate already has.
> > Spinning this a bit more, we could have different default eviction
> > functions with this, e.g. so all the drivers that need gtt mapping for
> > moving stuff around can share that code, but with specific & flat
> > control flow instead of lots of ping-pong. And drivers that don't need
> > gtt mapping (like i915, we just need dma_map_sg which we assume works
> > always, or something from the ttm dma page pool, which really always
> > works) can then use something simpler that's completely flat.
>
> Ok you need to explain a bit more what exactly the problem with the GTT
> eviction is here :)

So the full set of limitations is:
- range limits
- power-of-two alignment of the start
- some other (smaller) power-of-two alignment for the size (lol)
- "color", i.e. different caching modes need at least one page of
empty space in-between

Stuffing all that into a generic eviction logic is imo silly. On top
of that we have the eviction collector, where we scan the entire thing
until we've built up a sufficiently big hole and then evict just those
buffers. If we don't do this, then pretty much any big buffer with
constraints results in the entire GTT getting evicted. Again, that's
something that's only worth it if you have ridiculous placement
constraints like these old intel chips. gen2/3 in i915.ko is maybe a
bit extreme, but having the driver in control of the eviction code
feels like a much better design than ttm inflicting a
one-size-fits-all on everyone. Ofc with defaults and building blocks
and all that.
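
For reference, the scan part is roughly this shape (hand-waving sketch
of what i915_gem_evict_something() does; my_ggtt/my_vma and
my_vma_unbind() are made-up stand-ins, the drm_mm scan helpers are the
real API):

#include <drm/drm_mm.h>
#include <linux/errno.h>
#include <linux/list.h>

struct my_vma {
	struct drm_mm_node node;
	struct list_head bind_link;	/* on my_ggtt.bound_list */
	struct list_head evict_link;	/* temporary, for the scan */
};

struct my_ggtt {
	struct drm_mm mm;
	struct list_head bound_list;	/* all currently bound my_vma */
};

static int my_ggtt_evict_for_hole(struct my_ggtt *ggtt, u64 size,
				  u64 alignment, unsigned long color,
				  u64 start, u64 end)
{
	struct drm_mm_scan scan;
	struct my_vma *vma, *next;
	LIST_HEAD(eviction_list);
	bool found = false;

	/* Describe the hole we need: size, alignment, caching "color",
	 * allowed range. */
	drm_mm_scan_init_with_range(&scan, &ggtt->mm, size, alignment, color,
				    start, end, DRM_MM_INSERT_BEST);

	/* Feed bound objects to the scan in eviction order until the free
	 * space plus the scanned nodes add up to a big enough hole.
	 * list_add() prepends, so eviction_list ends up in reverse scan
	 * order, which is what the removal step below requires. */
	list_for_each_entry(vma, &ggtt->bound_list, bind_link) {
		list_add(&vma->evict_link, &eviction_list);
		if (drm_mm_scan_add_block(&scan, &vma->node)) {
			found = true;
			break;
		}
	}

	/* Every node added to the scan must be removed again; only those
	 * where drm_mm_scan_remove_block() returns true actually sit in
	 * the hole and need to be evicted, the rest stay bound. */
	list_for_each_entry_safe(vma, next, &eviction_list, evict_link) {
		if (!drm_mm_scan_remove_block(&scan, &vma->node))
			list_del(&vma->evict_link);
	}
	if (!found)
		return -ENOSPC;

	/* Unbind just the survivors, then retry the insertion. */
	list_for_each_entry_safe(vma, next, &eviction_list, evict_link)
		my_vma_unbind(vma);		/* made up */

	return 0;
}
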
-Daniel

> Christian.
>
> > -Daniel
> >
> >>> For special BOs, like amdgpu's GDS, GWS and OA domains or VMWGFX's
> >>> special domains, that will obviously look a bit different.
> >>>
> >>> Christian.
> >>>
> >>>> Dave.
> >>> _______________________________________________
> >>> dri-devel mailing list
> >>> dri-devel at lists.freedesktop.org
> >>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
> >>
> >>
> >> --
> >> Daniel Vetter
> >> Software Engineer, Intel Corporation
> >> http://blog.ffwll.ch
> >
> >
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

