[Intel-gfx] [PATCH v4] x86/quirks: Replace QFLAG_APPLY_ONCE with static locals

Lucas De Marchi lucas.demarchi at intel.com
Thu Jan 13 01:28:29 UTC 2022


On Wed, Jan 12, 2022 at 07:06:45PM -0600, Bjorn Helgaas wrote:
>On Wed, Jan 12, 2022 at 04:21:28PM -0800, Lucas De Marchi wrote:
>> On Wed, Jan 12, 2022 at 06:08:05PM -0600, Bjorn Helgaas wrote:
>> > On Wed, Jan 12, 2022 at 03:30:43PM -0800, Lucas De Marchi wrote:
>> > > The flags are only used to mark a quirk to be called once and nothing
>> > > else. That logic may also not be appropriate if the quirk wants to do
>> > > additional filtering and mark itself as applied on its own.
>> > >
>> > > So replace the uses of QFLAG_APPLY_ONCE with static local variables in
>> > > the few quirks that use this logic and remove all the flags logic.
>> > >
>> > > Signed-off-by: Lucas De Marchi <lucas.demarchi at intel.com>
>> > > Reviewed-by: Bjorn Helgaas <bhelgaas at google.com>
>> >
>> > It only occurred to me now, but another, less intrusive approach would
>> > be to just remove QFLAG_APPLY_ONCE from intel_graphics_quirks() and do
>> > its bookkeeping internally, e.g.,
>>
>> That is actually what I suggested after your comment in v2: this would
>> be the first patch with the "minimal fix". But then, to keep it
>> consistent with the other calls, follow up with additional patches on
>> top converting them as well. Maybe what I wrote wasn't clear about the
>> direction? Copying it here:
>>
>> 	1) add the static local only to the Intel graphics quirk and remove
>> 	the flag from this item
>> 	2 and 3) add the static local to the other functions and remove the
>> 	flag from those items
>> 	4) remove the flag from the table, the defines, and its usage
>> 	5) fix the coding style (to be clear, it's already wrong, not
>> 	something introduced here... maybe it could be squashed into (4)?)
>
>Oh, sorry, I guess I just skimmed over that without really
>comprehending it.
>
>The patch below is basically just 1) from above, though, and doesn't
>require any changes to the other functions or to the flags themselves
>(2-4 above).
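
(For anyone following the thread without the patches in front of them:
the minimal fix being discussed amounts to roughly the sketch below. The
function body is elided and written from memory, so treat it as an
illustration rather than the actual diff.)

	/* arch/x86/kernel/early-quirks.c -- sketch of the minimal fix */
	static void __init intel_graphics_quirks(int num, int slot, int func)
	{
		/* replaces the QFLAG_APPLY_ONCE bookkeeping in the caller */
		static bool quirk_applied;

		if (quirk_applied)
			return;
		quirk_applied = true;

		/* ... existing quirk body (stolen memory handling etc.) ... */
	}

The QFLAG_APPLY_ONCE flag would then be dropped from the
intel_graphics_quirks entry in the early_qrk[] table, and nothing else
needs to change.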

Yes, but I would do the rest of the conversion anyway; it would be odd
to leave just a few functions inconsistent. So in the end I think we
would achieve the same goal.
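
The rest of the conversion then deletes the flags machinery entirely:
the QFLAG_* defines and the flags field of struct chipset go away, and
the table walk in check_dev_quirk() calls every match unconditionally.
Roughly (again a sketch from memory, not the exact diff):

	struct chipset {
		u32 vendor;
		u32 device;
		u32 class;
		u32 class_mask;
		void (*f)(int num, int slot, int func);	/* no u32 flags */
	};

	/* inside check_dev_quirk(): */
	for (i = 0; early_qrk[i].f != NULL; i++) {
		if ((early_qrk[i].vendor == PCI_ANY_ID ||
		     early_qrk[i].vendor == vendor) &&
		    (early_qrk[i].device == PCI_ANY_ID ||
		     early_qrk[i].device == device) &&
		    !((early_qrk[i].class ^ class) & early_qrk[i].class_mask))
			/* apply-once quirks now bail out on their own */
			early_qrk[i].f(num, slot, func);
	}

That keeps the apply-once decision next to whatever filtering each quirk
already does, which is the point of the series.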

I would really prefer this approach, with the bug fix coming first, if I
were concerned about having to backport this to linux-stable beyond
5.10.y (we have a trivial conflict on 5.10).

However, given this situation is both new (Intel integrated GPU + Intel
discrete GPU) and rare (it also needs the PCI topology to be laid out in
a certain way to reproduce), I'm not too concerned. I'm not even sure
it's worth submitting to linux-stable.

I'll wait for others to chime in on one way vs. the other.

thanks
Lucas De Marchi
