[PATCH v9 04/18] kunit: test: add kunit_stream a std::stream like logger
John Ogness
john.ogness at linutronix.de
Fri Aug 2 07:37:53 UTC 2019
On 2019-08-01, Brendan Higgins <brendanhiggins at google.com> wrote:
> On Fri, Jul 26, 2019 at 1:31 AM Petr Mladek <pmladek at suse.com> wrote:
>> On Thu 2019-07-25 13:21:12, Brendan Higgins wrote:
>>> On Wed, Jul 24, 2019 at 12:31 AM Petr Mladek <pmladek at suse.com> wrote:
>>>> On Mon 2019-07-22 16:54:10, Stephen Boyd wrote:
>>>>> Quoting Brendan Higgins (2019-07-22 15:30:49)
>>>>>> On Mon, Jul 22, 2019 at 1:03 PM Stephen Boyd <sboyd at kernel.org> wrote:
>>>>>>> What's the calling context of the assertions and expectations? I
>>>>>>> still don't like the fact that string stream needs to allocate
>>>>>>> buffers and throw them into a list somewhere because the calling
>>>>>>> context matters there.
>>>>>>
>>>>>> The calling context is the same as before, which is anywhere.
>>>>>
>>>>> Ok. That's concerning then.
>>>>>
>>>>>>> I'd prefer we just wrote directly to the console/log via printk
>>>>>>> instead. That way things are simple because we use the existing
>>>>>>> buffering path of printk, but maybe there's some benefit to the
>>>>>>> string stream that I don't see? Right now it looks like it
>>>>>>> builds a string and then dumps it to printk, so I'm sort of lost
>>>>>>> as to what the benefit is over just writing directly with printk.
>>>>>>
>>>>>> It's just buffering it so the whole string gets printed
>>>>>> uninterrupted. If we were to print out piecemeal to printk,
>>>>>> couldn't we have another call to printk come in causing it to
>>>>>> garble the KUnit message we are in the middle of printing?
>>>>>
>>>>> Yes, printing piecemeal by calling printk many times could lead
>>>>> to interleaving of messages if something else, such as an
>>>>> interrupt, prints something in the middle. Printk has some support
>>>>> for holding "records", but I'm not sure how that would work here
>>>>> because KERN_CONT talks about only being used early in boot code.
>>>>> I haven't looked at printk in detail though, so maybe I'm all
>>>>> wrong and KERN_CONT just works?
>>>>
>>>> KERN_CONT does not guarantee that the message will get printed
>>>> together. The pieces can get interleaved with messages printed in
>>>> parallel.
>>>>
>>>> Note that KERN_CONT was originally really meant to be used only
>>>> during boot. It was later used more widely and ended up in the
>>>> best-effort category.
>>>>
>>>> There have been several attempts to make it more reliable, but
>>>> they were always either too complicated, too error-prone, or both.
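[For illustration, a minimal sketch of the hazard Petr describes; the
function name and messages are made up, not from the KUnit patch:

  #include <linux/printk.h>

  static void example_cont_logging(void)
  {
          /* Two pieces of one logical line, joined only best effort. */
          pr_info("kunit: expectation failed:");
          /* A printk() from another context (e.g. an interrupt) can
           * land between the two pieces, splitting the line. */
          pr_cont(" expected 1, got 2\n");
  }
]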
>>>>
>>>> You need to use your own buffering if you really want perfect
>>>> output. The question is whether it is really worth the complexity.
>>>> Also note that any buffering reduces the chance that the messages
>>>> will reach the console.
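[A sketch of such local buffering, using nothing beyond scnprintf()
and a stack buffer; the names and message are illustrative:

  #include <linux/kernel.h>
  #include <linux/printk.h>

  static void example_buffered_logging(int expected, int actual)
  {
          char buf[128];
          int len = 0;

          /* Compose the whole record locally ... */
          len += scnprintf(buf + len, sizeof(buf) - len,
                           "kunit: expectation failed: ");
          len += scnprintf(buf + len, sizeof(buf) - len,
                           "expected %d, got %d", expected, actual);
          /* ... then hand printk() one complete message that other
           * callers cannot split. */
          pr_err("%s\n", buf);
  }
]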
>>>
>>> Seems like that settles it then. Thanks!
>>>
>>>> BTW: There is work in progress on a lockless printk ring buffer.
>>>> It will make printk() safer with respect to deadlocks. But it
>>>> might make transparent handling of continuation lines even more
>>>> tricky.
>>>>
>>>> I guess that local buffering, before calling printk(), will be
>>>> even more important then. Well, it might really force us to create
>>>> an API for it.
>>>
>>> Cool! Can you CC me on that discussion?
>>
>> Adding John Ogness to CC.
>>
>> John, please CC Brendan Higgins on the patchsets that will
>> eventually switch printk() to the lockless buffer. The test
>> framework is going to do its own buffering to keep related messages
>> together.
>>
>> The lockless ringbuffer might make handling of related (partial)
>> lines better or worse. It might justify KUnit's extra buffering, or
>> it might allow KUnit to get rid of it.
>
> Thanks for CC'ing me on the printk ringbuffer thread. It looks like
> it probably won't affect my needs for KUnit logging. The biggest
> reason I need some sort of buffering system is to be able to compose
> a message piecemeal and then print it to the user as a single
> message, with no output from other printk callers appearing in the
> middle of mine.
printk has this same requirement for its CONT messages. You can read
about how I propose to implement that here[0], using a separate prb
ringbuffer for buffered storage until all the pieces are available.
It is not my goal to have multiple subsystems start making use of the
prb ringbuffer. However, its features can be attractive if you don't
want to worry about multiple writers/readers or calling context
(including NMI). Before writing "yet another ringbuffer" [1] it might
be worth the effort to at least see if one of the existing
implementations can work (or be extended to work) for you.
John Ogness
[0] https://lkml.kernel.org/r/87imt2bl0k.fsf@linutronix.de
[1] https://lwn.net/Articles/789603/
> The prb does look interesting; however, it appears that to get the
> semantics I need, I would have to put my entire message in a single
> data block and would therefore need to know the size of my message a
> priori, which is problematic. Consequently, it seems as though I will
> probably need to compose my entire message using my own buffering
> system.
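[Composing without knowing the total size up front can be done with a
list of fragments joined only at print time, which is roughly the
shape of the kunit_stream approach. A rough sketch of the idea; the
names and details are illustrative, not the patch's actual API. Note
that the GFP_KERNEL allocations are exactly why the calling context
matters, per Stephen's earlier concern:

  #include <linux/errno.h>
  #include <linux/list.h>
  #include <linux/slab.h>
  #include <linux/string.h>

  /* One piece of a message under construction. */
  struct frag {
          struct list_head node;
          char *str;
  };

  /* Append a piece; the total message size need not be known yet. */
  static int frag_add(struct list_head *frags, const char *str)
  {
          struct frag *f = kmalloc(sizeof(*f), GFP_KERNEL);

          if (!f)
                  return -ENOMEM;
          f->str = kstrdup(str, GFP_KERNEL);
          if (!f->str) {
                  kfree(f);
                  return -ENOMEM;
          }
          list_add_tail(&f->node, frags);
          return 0;
  }

  /* Join all pieces into one buffer suitable for a single printk(). */
  static char *frag_join(struct list_head *frags)
  {
          struct frag *f;
          size_t len = 1; /* room for the terminating NUL */
          char *out;

          list_for_each_entry(f, frags, node)
                  len += strlen(f->str);
          out = kzalloc(len, GFP_KERNEL);
          if (!out)
                  return NULL;
          list_for_each_entry(f, frags, node)
                  strcat(out, f->str);
          return out;
  }
]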
>
>>>> Note that storing the messages into the printk log is basically
>>>> safe in any context. It uses temporary per-CPU buffers for
>>>> recursive messages and for messages in NMI context. The only
>>>> problem is panic(), when some CPU gets stuck with the lock taken.
>>>> This will get solved by the lockless ringbuffer. Also, the
>>>> temporary buffers will not be necessary any longer.
>>>
>>> Sure, I think Stephen's concern is all the supporting code that is
>>> involved, not printk specifically. It just means a lot more of
>>> KUnit has to be IRQ safe.
>>
>> I see.
>>
>> BTW: I wonder if KUnit could reuse the existing seq_buf
>> implementation for buffering messages.
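[For comparison, composing a record with seq_buf and emitting it in
one printk() call would look roughly like this; a sketch, with the
message text made up:

  #include <linux/printk.h>
  #include <linux/seq_buf.h>

  static void example_seq_buf_logging(int expected, int actual)
  {
          char buf[128];
          struct seq_buf s;

          /* seq_buf writes into caller-provided storage, so no
           * allocation is needed here. */
          seq_buf_init(&s, buf, sizeof(buf));
          seq_buf_printf(&s, "kunit: expectation failed: ");
          seq_buf_printf(&s, "expected %d, got %d", expected, actual);
          /* Emit the composed record with a single printk() call. */
          pr_err("%.*s\n", (int)seq_buf_used(&s), buf);
  }
]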
>>
>> I am sorry if this has already been proposed and rejected for some
>> reason. I might have missed it. Feel free to just point me to some
>> older mail.
>
> Yeah, we discussed it briefly here:
>
> https://lkml.org/lkml/2019/5/17/497
>
> Looks like I forgot to include my reasoning in the commit text, sorry
> about that.
>
>>>> Much bigger problems are with the consoles. There are many of
>>>> them, which means a lot of code and more locks involved, including
>>>> scheduler locks. Note that the console lock is a semaphore.
>>>
>>> That shouldn't affect us though, right? As long as we continue to
>>> use the printk interface?
>>
>> I guess that it should not affect KUnit.
>>
>> The only problem might be if the testing framework calls printk()
>> inside scheduler or console code, and even then only when the tested
>> code uses the same locks that will be used by the called printk().
>
> Yeah, well printk will not be our only problem in those instances.
>
>> To be honest, I do not fully understand the KUnit design. I am not
>> completely sure how the tested code is isolated from the running
>> system. Namely, I do not know if the tested code shares the same
>> locks with the system running the test.
>
> No worries, I don't expect printk to be the hang up in those cases. It
> sounds like KUnit has a long way to evolve before printk is going to
> be a limitation.
>
> Thanks!