[PATCH v4 04/40] drm/sched: Add enqueue credit limit
Danilo Krummrich
dakr at kernel.org
Fri May 23 06:58:58 UTC 2025
On Thu, May 22, 2025 at 07:31:28PM -0700, Rob Clark wrote:
> On Thu, May 22, 2025 at 8:53 AM Danilo Krummrich <dakr at kernel.org> wrote:
> > On Thu, May 22, 2025 at 07:47:17AM -0700, Rob Clark wrote:
> > > On Thu, May 22, 2025 at 4:00 AM Danilo Krummrich <dakr at kernel.org> wrote:
> > > > Ok, but what about the other way around? What's the performance impact if the
> > > > limit is chosen rather small, but we're running on a very powerful machine?
> > > >
> > > > Since you already have the implementation for hardware you have access to, can
> > > > you please check if and how performance degrades when you use a very small
> > > > threshold?
> > >
> > > I mean, considering that some drivers (asahi, at least), _only_
> > > implement synchronous VM_BIND, I guess blocking in extreme cases isn't
> > > so bad.
> >
> > Which is not even upstream yet and eventually will support async VM_BIND too,
> > AFAIK.
>
> the uapi is upstream
And will be extended once they have the corresponding async implementation in
the driver.
> > > But I think you are overthinking this. 4MB of pagetables is
> > > enough to map ~8GB of buffers.
> > >
> > > Perhaps drivers would want to set their limit based on the amount of
> > > memory the GPU could map, which might land them on a # larger than
> > > 1024, but still not an order of magnitude more.
> >
> > Nouveau currently supports an address space width of 128TiB.
> >
> > In general, we have to cover the range of some small laptop or handheld devices
> > to huge datacenter machines.
>
> sure.. and? It is still up to the user of sched to set their own
> limits, I'm not proposing that sched takes charge of that policy
>
> Maybe msm doesn't have to scale up quite as much (yet).. but it has to
> scale quite a bit further down (like watches). In the end it is the
> same. And also not really the point here.
>
> > > I don't really have a good setup for testing games that use this, atm,
> > > fex-emu isn't working for me atm. But I think Connor has a setup with
> > > proton working?
> >
> > I just want to be sure that an arbitrarily small limit that does the job for
> > a small device (i.e. makes it not fail VK CTS) can't regress performance on
> > large machines.
>
> why are we debating the limit I set outside of sched.. even that might
> be subject to some tuning for devices that have more memory, but that
> really outside the scope of this patch
We are not debating the number you set in MSM; we're talking about whether a
statically set number will be sufficient.
Also, do we really want it to be our quality standard to introduce a throttling
mechanism as generic infrastructure for drivers without even adding a comment
guiding drivers on how to choose a proper limit and what the potential pitfalls
in choosing it are?
When working on a driver, do you want to run into APIs that don't give you
proper guidance on how to use them correctly?
I think it would not be very nice to tell drivers, "Look, here's a throttling API
for when VK CTS (unknown test) ruins your day. We also can't give any advice on
the limit that should be set depending on the scale of the machine, since we
never looked into it."
> > So, kindly try to prove that we're not prone to extreme performance regression
> > with a static value as you propose.
> >
> > > > Also, I think we should probably put this throttle mechanism in a separate
> > > > component, that just wraps a counter of bytes or rather pages that can be
> > > > increased and decreased through an API and the increase just blocks at a certain
> > > > threshold.
> > >
> > > Maybe? I don't see why we need to explicitly define the units for the
> > > credit. This wasn't done for the existing credit mechanism.. which,
> > > seems like if you used some extra fences could also have been
> > > implemented externally.
> >
> > If you are referring to the credit mechanism in the scheduler for ring buffers,
> > that's a different case. Drivers know the size of their ring buffers exactly and
> > the scheduler has the responsibility of when to submit tasks to the ring buffer.
> > So the scheduler kind of owns the resource.
> >
> > However, the throttle mechanism you propose is independent from the scheduler,
> > it depends on the available system memory, a resource the scheduler doesn't own.
>
> it is a distinction that is perhaps a matter of opinion. I don't see
> such a big difference, it is all just a matter of managing physical
> resource usage in different stages of a scheduled job's lifetime.
Yes, but the ring buffer as a resource is owned by the scheduler, and hence
having the scheduler care about flow control makes sense.
Here you want to flow control the uAPI (i.e. the VM_BIND ioctl) -- let's do this
in a separate component, please.
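Something along these lines (rough sketch, nothing that exists today; the
inc/dec naming matches the drm_throttle suggestion quoted further down,
everything else is made up) -- just a counter plus its own waitqueue, with no
dependency on any drm_sched internals:

#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/wait.h>

struct drm_throttle {
	spinlock_t lock;
	u64 count;
	u64 limit;
	wait_queue_head_t wq;
};

static void drm_throttle_init(struct drm_throttle *t, u64 limit)
{
	spin_lock_init(&t->lock);
	t->count = 0;
	t->limit = limit;
	init_waitqueue_head(&t->wq);
}

/* Commit the increment only if it still fits below the limit. */
static bool drm_throttle_try_inc(struct drm_throttle *t, u64 value)
{
	bool ok;

	spin_lock(&t->lock);
	ok = t->count + value <= t->limit;
	if (ok)
		t->count += value;
	spin_unlock(&t->lock);

	return ok;
}

/* Block (interruptibly) until the increment fits below the limit. */
static int drm_throttle_inc(struct drm_throttle *t, u64 value)
{
	return wait_event_interruptible(t->wq,
					drm_throttle_try_inc(t, value));
}

/* Called when the tracked resource is released; wakes blocked waiters. */
static void drm_throttle_dec(struct drm_throttle *t, u64 value)
{
	spin_lock(&t->lock);
	t->count -= value;
	spin_unlock(&t->lock);

	wake_up(&t->wq);
}

Obviously just a sketch -- a real version would at least need to handle a
single request larger than the limit and settle on sensible units (bytes vs.
pages).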
> > > Maybe? This still has the same complaint I had about just
> > > implementing this in msm.. it would have to reach in and use the
> > > scheduler's job_scheduled wait-queue. Which, to me at least, seems
> > > like more of an internal detail about how the scheduler works.
> >
> > Why? The component should use its own waitqueue. Subsequently, from your code
> > that releases the pre-allocated memory, you can decrement the counter through
> > the drm_throttle API, which automatically kicks the waitqueue.
> >
> > For instance from your VM_BIND IOCTL you can call
> >
> > drm_throttle_inc(value)
> >
> > which blocks if the increment goes above the threshold. And when you release the
> > pre-allocated memory you call
> >
> > drm_throttle_dec(value)
> >
> > which wakes the waitqueue and unblocks the drm_throttle_inc() call from your
> > VM_BIND IOCTL.
>
> ok, sure, we could introduce another waitqueue, but with my proposal
> that is not needed. And like I said, the existing throttling could
> also be implemented externally to the scheduler.. so I'm not seeing
> any fundamental difference.
Yes, but you also implicitly force drivers to actually release the pre-allocated
memory before the scheduler's internal waitqueue is woken. Having such implicit
rules isn't nice.
Also, with that drivers would need to do so in run_job(), i.e. in the fence
signalling critical path, which some drivers may not be able to do.
And, it also adds complexity to the scheduler, which we're trying to reduce.
All this goes away with making this a separate component -- please do that
instead.
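For illustration, the driver side would then roughly look like this (again just
a sketch, all names hypothetical):

/* In the VM_BIND ioctl, before queueing the async bind job: */
ret = drm_throttle_inc(&dev_throttle, prealloc_bytes);
if (ret)
	return ret;	/* interrupted while blocked on the limit */

/* ... create and push the job through drm_sched as usual ... */

/*
 * Wherever the pre-allocated pagetable memory is actually released
 * (e.g. job cleanup / free_job(), not necessarily run_job()):
 */
drm_throttle_dec(&dev_throttle, prealloc_bytes);

No scheduler waitqueue involved, no implicit ordering rule, and the decrement
doesn't have to happen in the fence signalling critical path.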