[PATCH 01/13] drm: execution context for GEM buffers v4
Boris Brezillon
boris.brezillon at collabora.com
Mon Jun 19 10:23:11 UTC 2023
On Mon, 19 Jun 2023 11:20:06 +0200
Christian König <christian.koenig at amd.com> wrote:
> Hi guys,
>
> Am 19.06.23 um 10:59 schrieb Thomas Hellström (Intel):
> > [SNIP]
> >>>>
> >>>> I really need to find some time to work on that anyway.
> >> I've been playing with drm_exec for a couple of weeks now, and I
> >> wanted to share something I hacked to try and make the API simpler
> >> and more robust against misuse (see the diff below, which is a
> >> slightly adjusted version of your work).
> >
> > It would be good if we could have someone take charge of this series
> > and address all review comments. I see some of my comments getting
> > lost; we have multiple submitters and I can't find a dri-devel
> > patchwork entry for this. Anyway, some comments below.
>
> I can try to find some time for the series this week (as long as
> nobody comes along with a burning roof).
That's great news!
>
> >
> >>
> >> In this version, the user is no longer in control of the retry
> >> loop. Instead, they provide an expression (a call to a
> >> sub-function) to be re-evaluated each time a contention is
> >> detected. IMHO, this makes the 'prepare-objs' functions easier to
> >> follow, and avoids mistakes like calling
> >> drm_exec_continue_on_contention() in an inner loop, or breaking
> >> out of the drm_exec_while_all_locked() loop unintentionally.
> >
> > In i915 we've had a very similar helper to this, and while I agree
> > this newer version would probably help make code cleaner, OTOH
> > there are also some places where a short drm_exec_while_all_locked()
> > -like block doesn't really motivate a separate function. Porting i915
> > to the current version will take some work. For the xe driver both
> > versions would work fine.
>
> Yeah, this is actually what my first version of this looked like. But I
> abandoned that approach because we have a lot of cases where we just
> quickly want to lock a few GEM objects and don't want the extra overhead
> of putting all the state into some bag to forward it to a function.
If you're talking about verbosity, it might be the case, though I guess
it's mostly a matter of taste (I do like it when things are well
isolated). As for runtime overhead, I'd expect the compiler to inline
the function anyway, so it's unlikely to change anything.
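
To illustrate what I mean, here's roughly the shape I have in mind
(names below are made up for the example, not necessarily the exact
API from the diff):

static int myjob_prepare_objs(struct drm_exec *exec, void *data)
{
        /* Re-run from scratch every time a contention is detected,
         * so the caller never writes the retry loop itself. */
        struct myjob *job = data;
        unsigned int i;
        int ret;

        for (i = 0; i < job->bo_count; i++) {
                ret = drm_exec_prepare_obj(exec, job->bos[i], 1);
                if (ret)
                        return ret;
        }

        return 0;
}

        /* Caller side: all the state the callback needs travels
         * through the opaque pointer ("the bag"). */
        ret = drm_exec_until_all_locked(&exec, myjob_prepare_objs, job);

The "bag" is just that opaque pointer, and since the callback is
static and called from a single place, the compiler should be able to
inline it.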
> >> +/* Track the locked object in the array */
> >> +static int drm_exec_obj_locked(struct drm_exec *exec,
> >> + struct drm_gem_object *obj)
> >> +{
> >> + if (unlikely(exec->num_objects == exec->max_objects)) {
> >> + size_t size = exec->max_objects * sizeof(void *);
> >> + void *tmp;
> >> +
> >> + tmp = kvrealloc(exec->objects, size, size + PAGE_SIZE,
> >> + GFP_KERNEL);
> >> + if (!tmp)
> >> + return -ENOMEM;
> >
> > Sometimes you need to just temporarily lock an object and then unlock
> > it again if it goes out of scope before reaching the end of
> > _until_all_locked(). In that case you might need to remove a lock from
> > the array. I *think* for all use-cases in i915 it would suffice to
> > take a snapshot of num_objects and unlock everything above that,
> > having exec->objects behave like a stack, but was a list ever
> > considered instead of a realloc'ed array?
>
> Yes, the problem is that linked lists really suck regarding their cache
> line locality. That's why I came up with this approach here.
Hm, maybe I'm missing something, but if you place the list_head member
you use to stack the locked objects close enough to the resv pointer,
and aligned on a cache line, it shouldn't really be a problem, given
you have to dereference the GEM object to retrieve its resv anyway.
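
Something along these lines (purely illustrative, field name and exact
placement made up for the example):

struct drm_gem_object {
        /* ... other fields ... */

        /* Keep the node used to stack locked objects in the same
         * cache line as the resv pointer: walking the locked list
         * then touches the line we have to load anyway to get at
         * obj->resv. */
        struct dma_resv *resv ____cacheline_aligned;
        struct list_head exec_node;
};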