[PATCH 4/7] drm/ttm: move LRU walk defines into new internal header
Daniel Vetter
daniel.vetter at ffwll.ch
Mon Aug 19 14:14:36 UTC 2024
On Mon, Aug 19, 2024 at 01:38:56PM +0200, Thomas Hellström wrote:
> Hi, Christian,
>
> On Mon, 2024-08-19 at 13:03 +0200, Christian König wrote:
> > Am 06.08.24 um 10:29 schrieb Thomas Hellström:
> > > Hi, Christian.
> > >
> > > On Thu, 2024-07-11 at 14:01 +0200, Christian König wrote:
> > > > Am 10.07.24 um 20:19 schrieb Matthew Brost:
> > > > > On Wed, Jul 10, 2024 at 02:42:58PM +0200, Christian König
> > > > > wrote:
> > > > > > That is something drivers really shouldn't mess with.
> > > > > >
> > > > > Thomas uses this in Xe to implement a shrinker [1]. Seems to
> > > > > need
> > > > > to
> > > > > remain available for drivers.
> > > > No, that is exactly what I try to prevent with that.
> > > >
> > > > This is an internally thing of TTM and drivers should never use
> > > > it
> > > > directly.
> > > That driver-facing LRU walker is a direct response/solution to this
> > > comment that you made in the first shrinker series:
> > >
> > > https://lore.kernel.org/linux-mm/b7491378-defd-4f1c-31e2-29e4c77e2d67@amd.com/T/#ma918844aa8a6efe8768fdcda0c6590d5c93850c9
> >
> > Ah, yeah that was about how we are should be avoiding middle layer
> > design.
> >
> > But a function which returns the next candidate for eviction and a
> > function which calls a callback for eviction is exactly the opposite.
> >
> > > That is also mentioned in the cover letter of the recent shrinker
> > > series, and this walker has been around in that shrinker series for
> > > more than half a year now so if you think it's not the correct
> > > driver
> > > facing API IMHO that should be addressed by a review comment in
> > > that
> > > series rather than in posting a conflicting patch?
> >
> > I actually outlined that in the review comments for the patch series.
> > E.g. a walker function with a callback is basically a middle layer.
> >
> > What outlined in the link above is that a function which returns the
> > next eviction candidate is a better approach than a callback.
> >
> > > So assuming that we still want the driver to register the shrinker,
> > > IMO that helper abstracts away all the nasty locking and pitfalls
> > > for a
> > > driver-registered shrinker, and is similar in structure to for
> > > example
> > > the pagewalk helper in mm/pagewalk.c.
> > >
> > > An alternative that could be tried as a driver-facing API is to
> > > provide
> > > a for_each_bo_in_lru_lock() macro where the driver open-codes
> > > "process_bo()" inside the for loop but I tried this and found it
> > > quite
> > > fragile since the driver might exit the loop without performing the
> > > necessary cleanup.
> >
> > The point is that the shrinker should *never* need to have context.
> > E.g.
> > a walker which allows going over multiple BOs for eviction is exactly
> > the wrong approach for that.
> >
> > The shrinker should evict always only exactly one BO and the next
> > invocation of a shrinker should not depend on the result of the
> > previous
> > one.
> >
> > Or am I missing something vital here?
>
> A couple of things,
>
> 1) I'd like to think of the middle-layer vs helper like the helper has
> its own ops, and can be used optionally from the driver. IIRC, the
> atomic modesetting / pageflip ops are implemented in exactly this way.
>
> Sometimes a certain loop operation can't be easily or at least robustly
> implemented using a for_each_.. approach. Like for example
> mm/pagewalk.c. In this shrinking case I think it's probably possible
> using the scoped_guard() in cleanup.h. This way we could get rid of
> this middle layer discussion by turning the interface inside-out:
>
> for_each_bo_on_lru_locked(xxx)
> driver_shrink();
>
> But I do think the currently suggested approach is less fragile and
> prone to driver error.
>
> FWIW in addition to the above examples, also drm_gem_lru_scan works
> like this.
a iteration state structure (like drm_connector_iter) plus then a macro
for the common case that uses cleanup.h should get the job done.
> 2) The shrinkers suggest to shrink a number of pages, not a single bo,
> again drm_gem_lru_scan works like this. i915 works like this. I think
> we should align with those.
Yeah that's how shrinkers work, so if we demidlayer then it realls needs
to be a loop in the driver, not a "here's the next bo to nuke" I think.
> 3) Even if we had a function to obtain the driver to shrink, the driver
> needs to have its say about the suitability for shrinking, so a
> callback is needed anyway (eviction_valuable).
> In addition, if shrinking fails for some reason, what would stop that
> function to return the same bo, again and again just like the problem
> we previously had in TTM?
Yeah I think if consensus moves back to drivers not looking at ttm lru
directly, then that entire shrinker looping needs to be as some kind of
midlayer in ttm itself. Otherwise I don't think it works, so agreeing with
Thomas here.
> So basically all the restartable LRU work was motivated by this use-
> case in mind, so making that private I must say comes as a pretty major
> surprise.
>
> I could have a look at the
>
> for_each_bo_on_lru_locked(xxx)
> driver_shrink();
>
> approach, but having TTM just blindly return a single bo without
> context will not work IMO.
Another thing to keep in mind is that at least experience from really
resource-starved igpu platforms says that cpu consumption for shrinking
matters. So really need to not toss list walk state, and also at least
from I think msm experience and maybe also i915 (i915-gem's a bit ... too
complex to really understand it anymore) is that parallelism matters too.
Eventually under memory pressures multiple cpu cores just aboslutely
hammer the shrinkers, so being stuck on locks is no good.
But maybe let's get this off the ground first.
-Sima
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
More information about the dri-devel
mailing list