[RFC PATCH 0/3] gpu: nova-core: add basic timer subdevice implementation

Alexandre Courbot acourbot at nvidia.com
Tue Feb 18 13:47:21 UTC 2025


Hi Dave,

On Tue Feb 18, 2025 at 10:42 AM JST, Dave Airlie wrote:
> On Tue, 18 Feb 2025 at 00:04, Alexandre Courbot <acourbot at nvidia.com> wrote:
>>
>> Hi everyone,
>>
>> This short RFC is based on top of Danilo's initial driver stub series
>> [1] and aims to initiate discussions, and hopefully some design
>> decisions, using the simplest subdevice of the GPU (the timer) as an
>> example, before implementing more devices to let the GPU
>> initialization sequence progress (Falcon being the logical next step
>> so we can get the GSP rolling).
>>
>> It is kept simple and short for that purpose, and to avoid hitting a
>> wall after writing much more device code only to discover that my
>> assumptions were incorrect.
>>
>> This is my first time trying to write Rust kernel code, and some of my
>> questions below are probably due to me not understanding yet how to use
>> the core kernel interfaces. So before going further I thought it would
>> make sense to raise the most obvious questions that came to my mind
>> while writing this draft:
>>
>> - Where and how to store subdevices. The timer device is currently a
>>   direct member of the GPU structure. This might work for GSP-based
>>   GPUs, which IIUC are supposed to have at least a few fixed devices
>>   required to bring the GSP up; but as a general rule this probably
>>   won't scale, as not all subdevices are present on all GPU variants,
>>   or in the same numbers. So we will probably need to find an
>>   equivalent to the `subdev` linked list in Nouveau.
>
> I deliberately avoided doing this.
>
> My reasoning is that this isn't like nouveau, where we control a bunch
> of devices. We have one mission: bring up GSP. If that entails a bunch
> of fixed-function blocks being set up in a particular order, then
> let's just deal with that.
>
> If things become optional later we can move to Option<> or just have a
> completely new path. But in those cases I'd make it
> Option<TuringGSPBootDevices> rather than Option<Sec2>, Option<NVDEC>,
> etc.; I think we need to look at the boot requirements on other GSP
> devices to know.
>
> I just don't see any case where we need to iterate over the subdevices
> in any form of list that makes sense and doesn't lead to the nouveau
> design, from which it is a pain in the ass to tease out any sense of
> ordering or hierarchy.
>
> Just be explicit, boot the devices you need in the order you need to
> get GSP up and running.

Right, I was looking too far ahead and lost track of the fact that our
main purpose is to get the GSP to run. For that a fixed set of devices
should do the job just fine.

I still suspect that at some point we will need to keep some kernel-side
state for the functions supported by the GSP, and thus will have to
introduce more flexibility, but let's think about that when we get
there. Core GSP boot + communication is fixed.
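
To make sure I am on the same page, this is roughly the structure I will
aim for - a minimal sketch, with `Timer`, `Falcon`, `Sec2` and `Nvdec`
as placeholder types and `TuringGspBootDevices` named after your
example:

    /// Placeholder subdevice types standing in for the real ones.
    struct Timer;
    struct Falcon;
    struct Sec2;
    struct Nvdec;

    /// Devices only needed to boot GSP on Turing, grouped so that the
    /// GPU structure holds a single Option rather than one per device.
    struct TuringGspBootDevices {
        sec2: Sec2,
        nvdec: Nvdec,
    }

    struct Gpu {
        // Subdevices needed on every variant are plain members...
        timer: Timer,
        falcon: Falcon,
        // ...while variant-specific ones sit behind a single Option.
        turing_boot: Option<TuringGspBootDevices>,
    }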

>
>>
>> - BAR sharing between subdevices. Right now each subdevice gets access
>>   to the full BAR range. I am wondering whether we couldn't split it
>>   into the relevant slices for each subdevice, and transfer ownership
>>   of each slice to the device that is supposed to use it. That way
>>   each register would have a single owner, which is arguably safer -
>>   but maybe not as flexible as we will need down the road?
>
> This could be useful, though again the mission is mostly not to be
> hitting registers, since GSP will deal with that. The only case I know
> of where it won't work is the GSP CPU sequencer: the code gets a
> script from the device, and the script tells you to directly hit
> registers with no real bounds checking, so I don't know if this is
> practical.
>
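To make the slicing idea a bit more concrete, here is the kind of
interface I was imagining - purely hypothetical, with `Bar` and
`BarSlice` standing in for whatever the kernel crate would actually
provide:

    /// Hypothetical owned view of a sub-range of the BAR; the real
    /// type would wrap the actual MMIO accessors.
    struct BarSlice {
        base: usize,
        len: usize,
    }

    struct Bar {
        len: usize,
        // Tracking of already-handed-out slices elided for brevity.
    }

    impl Bar {
        /// Splits off a slice covering `base..base + len`, so that each
        /// register range ends up with a single owner. Returns None if
        /// the range falls outside the BAR.
        fn slice(&self, base: usize, len: usize) -> Option<BarSlice> {
            if base.checked_add(len)? <= self.len {
                Some(BarSlice { base, len })
            } else {
                None
            }
        }
    }

The sequencer case you mention would indeed defeat this, since a script
can ask us to hit any offset; so slices probably cannot be the only way
to access the BAR.
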
>>
>> - On a related note, since the BAR is behind a Devres, its
>>   availability must first be secured using try_access() before any
>>   hardware access. Doing this on a per-register or per-operation
>>   basis looks like overkill, so all methods that access the BAR take
>>   a reference to it, allowing try_access() to be called from the
>>   highest-level caller and thus reducing the number of times this
>>   needs to be performed. Doing so comes at the cost of an extra
>>   argument to most subdevice methods, but also with the benefit that
>>   we don't need to put the BAR behind another Arc and share it across
>>   all subdevices. I don't know which design is better here, and input
>>   would be very welcome.
>
> We can't pass down the bar, because it takes a devres lock and it
> interferes with lockdep ordering; even hanging onto the devres too
> long caused me lockdep issues.
>
> We should only be doing try_access() on registers that are
> runtime-sized. Is this a lot of them? Do we expect to be hitting a lot
> of registers in an actual fast path?

For this particular driver, I don't think we do. But other drivers will
probably also have this issue, and the challenge for them will be to
find the right granularity at which to invoke try_access(). My worry is
that this can explode down the road without warning if invoked in the
wrong place, which is the kind of behavior we are trying to avoid by
introducing Rust.
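
For reference, the pattern the current draft follows looks roughly like
this - a minimal sketch, with `Devres` reduced to a toy stand-in for the
real kernel type and `read_counter`/`uptime_ns` being made-up names:

    /// Toy stand-in for the kernel's Devres wrapper: the resource can
    /// be revoked at any time, so access must be checked first.
    struct Devres<T>(Option<T>);

    impl<T> Devres<T> {
        /// Returns a reference only while the resource is still alive.
        fn try_access(&self) -> Option<&T> {
            self.0.as_ref()
        }
    }

    struct Bar;
    struct Timer;

    impl Timer {
        /// Subdevice methods take the already-secured BAR reference, so
        /// the cost of try_access() is paid once per high-level
        /// operation rather than once per register.
        fn read_counter(&self, bar: &Bar) -> u64 {
            let _ = bar; // actual register read elided in this sketch
            0
        }
    }

    struct Gpu {
        bar: Devres<Bar>,
        timer: Timer,
    }

    impl Gpu {
        fn uptime_ns(&self) -> Option<u64> {
            // The highest-level caller secures the BAR once...
            let bar = self.bar.try_access()?;
            // ...and threads the reference through each subdevice call.
            Some(self.timer.read_counter(bar))
        }
    }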

>
>> - We will probably need something like a `Subdevice` trait down the
>>   road, but I'll wait until we have more than one subdevice to think
>>   about it.
>
> Again, I'm kinda opposed to this idea; I don't think it buys anything.
> With GSP we just want to boot; after that we never touch most of the
> subdevices ever again.

Yep, it's definitely not something we need to worry about at the moment.
Minimal static set of devices, and let's get the GSP running. :)

Thanks,
Alex.


