[Intel-gfx] [PATCH 22/22] drm/i915/guc: Add GuC kernel doc

Michal Wajdeczko michal.wajdeczko at intel.com
Tue Aug 17 20:41:00 UTC 2021



On 17.08.2021 19:34, Daniel Vetter wrote:
> On Tue, Aug 17, 2021 at 07:27:18PM +0200, Michal Wajdeczko wrote:
>>
>>
>> On 17.08.2021 19:20, Daniel Vetter wrote:
>>> On Tue, Aug 17, 2021 at 09:36:49AM -0700, Matthew Brost wrote:
>>>> On Tue, Aug 17, 2021 at 01:11:41PM +0200, Daniel Vetter wrote:
>>>>> On Mon, Aug 16, 2021 at 06:51:39AM -0700, Matthew Brost wrote:
>>>>>> Add GuC kernel doc for all structures added thus far for GuC submission
>>>>>> and update the main GuC submission section with the new interface
>>>>>> details.
>>>>>>
>>>>>> Signed-off-by: Matthew Brost <matthew.brost at intel.com>
>>>>>
>>>>> There's quite a bit more, e.g. intel_guc_ct, which has its own world of
>>>>> locking design that also doesn't feel too consistent.
>>>>>
>>>>
>>>> That is a different layer than GuC submission, so I don't think we should
>>>> mention anything about that layer here. I didn't really write that layer,
>>>> and it's super painful to touch that code, so I'm going to stay out of any
>>>> rework you think we need to do there.
>>>
>>> Well, there are three locks
>>
>> It's likely me.
>>
>> There is one lock for the recv CTB, one for the send CTB, and one for the
>> list of read messages ready to post-process - do you want a single lock
>> for both CTBs, or a single lock for all cases in CT?
>>
>> Michal
>>
>> disclaimer: outstanding_g2h is not part of the CTB layer
> 
> Why? Apparently that layer doesn't provide enough right now, so Matt is
> papering over the gap with more book-keeping in the next layer. If the
> layer is not doing a good job, it's either the wrong layer or it
> shouldn't be a layer at all.

Note that all the "outstanding g2h" used by Matt are unsolicited "event"
messages received from the GuC, which the CTB layer is unable to
correlate. CTB only tracks "request" messages for which a "response" (or
"error") reply is expected. Thus if a CTB client expects some extra
message from its previous communication with the GuC, it must track it
on its own, as only the client knows where in the CTB message payload
the actual correlation data (like the context ID) is stored.
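
To illustrate (a rough sketch with made-up names, not the actual i915
code): the client bumps its own counter when sending an H2G that will
be confirmed by a later G2H event, and decrements it in its own event
handler, because only the client knows the payload layout:

#include <linux/atomic.h>
#include <linux/types.h>

/* hypothetical client-side state, not a real driver struct */
struct client {
        atomic_t outstanding_g2h;       /* G2H events we still expect */
};

static void client_send_action(struct client *c)
{
        /* one more unsolicited G2H "event" will follow this H2G */
        atomic_inc(&c->outstanding_g2h);

        /* ... build the H2G message and send it via the CTB ... */
}

/* called for every G2H message that is not a "response"/"error" reply */
static void client_handle_event(struct client *c, const u32 *payload)
{
        u32 ctx_id = payload[0];        /* layout known only to the client */

        /* ... look up the context by ctx_id and update its state ... */
        atomic_dec(&c->outstanding_g2h);
}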

> 
> And yeah, the locking looks like a serious amount of overkill - was it
> benchmarked that we need the 3 separate locks for this?

I'm not aware of any (micro)benchmarking, but we definitely need some.
We were just gradually moving from single-threaded blocking CTB calls
(waiting for CTB descriptor updates under a mutex) to non-blocking calls
(protecting only reads/writes to the CTB descriptors with a spinlock, to
allow CTB usage from tasklet/irq context).

And I was just assuming that we can sacrifice a few more integers [1],
have dedicated spinlocks, and avoid premature optimization.
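
For reference, a minimal sketch of that split (simplified names, not
the actual intel_guc_ct code; ct_write() stands in for the helper that
copies the message into the buffer and bumps the tail):

#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/types.h>

/* simplified CT state: one spinlock per CTB, one for the message list */
struct ct_sketch {
        spinlock_t send_lock;           /* protects send CTB descriptor/tail */
        spinlock_t recv_lock;           /* protects recv CTB descriptor/head */
        spinlock_t list_lock;           /* protects the post-process list */
        struct list_head incoming;      /* messages awaiting post-processing */
};

int ct_write(struct ct_sketch *ct, const u32 *msg, u32 len);

static int ct_send_nb(struct ct_sketch *ct, const u32 *msg, u32 len)
{
        unsigned long flags;
        int err;

        /* a spinlock, not a mutex, so this is safe from tasklet/irq */
        spin_lock_irqsave(&ct->send_lock, flags);
        err = ct_write(ct, msg, len);
        spin_unlock_irqrestore(&ct->send_lock, flags);

        return err;
}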

> 
> While reading the ctb code I also noticed that a bunch of stuff is checked
> before we grab the relevant spinlocks, and it's not
> - wrapped in a WARN_ON or GEM_BUG_ON or similar to just check everything
>   works as expected
> - protected by any other lock
> 
> So it's either racy, buggy, or playing some extremely clever tricks. None
> of which is very good.
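
(Presumably you mean patterns like the hypothetical one below, reusing
the ct_sketch from above plus <linux/errno.h> - an illustration of the
concern, not code quoted from the driver: a check done before the lock
is taken can be stale by the time the lock is held, unless it is
re-checked under the lock or is only an optimization.)

static int ct_dequeue_one(struct ct_sketch *ct)
{
        /* racy: the list can change before list_lock is taken below */
        if (list_empty(&ct->incoming))
                return -ENODATA;

        spin_lock(&ct->list_lock);
        /* a safe variant re-checks list_empty() here, under the lock */
        /* ... dequeue and post-process one message ... */
        spin_unlock(&ct->list_lock);

        return 0;
}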

I'm open to improving that code as needed, but maybe in exchange, and to
increase motivation, please provide feedback on the already posted fixes
[2] ;)

Michal

[1]
https://elixir.bootlin.com/linux/latest/source/arch/ia64/include/asm/spinlock_types.h#L10
[2] https://patchwork.freedesktop.org/series/92118/

> -Daniel
> 
>>
>>
>>> there, plus it leaks out (you have your outstanding_submission_g2h
>>> atomic_t, which is very closely tied to, well, outstanding GuC
>>> transmissions), so I guess I need someone else for that?
>>>
> 

