[Piglit] [Patch v2 1/4] log_tests.py: Add tests from framework/log.py

Ilia Mirkin imirkin at alum.mit.edu
Tue Feb 18 16:07:19 PST 2014


On Tue, Feb 18, 2014 at 6:41 PM, Tom Stellard <tom at stellard.net> wrote:
> Hi Dylan,
>
> I've tested version 2 of this series, and I have a few
> questions/comments:
>
> + I really like being able to see how many tests have run and
>   how many have completed.  The pass/fail rates are nice and
>   help me identify bad test runs right away.
>
> + Would it be possible to print the test names in the non-verbose
>   output mode?  Would this work in concurrency mode?
>
> + When I use the verbose output, the output looks like this:
>
> running :: Program/Execute/get-global-id, skip: 11, warn: 1 Running Test(s): 253

Bah, the test name is too short... the OpenCL tests need to get with
the program -- at least 50 chars per test name, like the OpenGL tests
:) The old contents of the line aren't being fully overwritten...

I wonder if, instead of relying on \r + overwriting, we should emit a
^L (clear screen) -- although obviously not for the verbose output. I
also wonder whether this style of output should only happen when the
output is a tty. That would also allow showing all of the
currently-running tests, one per line or something (to satisfy the
previous request), and it would resolve the issue where the line
becomes longer than the terminal width and you get wrapping.

And verbose mode could live without the summary.
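
Roughly what I have in mind, as a quick sketch -- print_progress is
just an illustrative name, not anything that exists in
framework/log.py, and the escape handling would need polish:

    import sys

    def print_progress(line):
        # Sketch only: overwrite-in-place output when stdout is a tty,
        # one line per update when it's redirected to a file/pipe.
        if sys.stdout.isatty():
            # \r returns to column 0; the ANSI "erase to end of line"
            # sequence keeps a shorter message from leaving stale
            # characters from the previous, longer line behind.
            sys.stdout.write('\r\x1b[K' + line)
        else:
            sys.stdout.write(line + '\n')
        sys.stdout.flush()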

>
>   It looks like the "running" lines are being merged with the "status" lines.
>   Is there any way to fix this?  It makes the output hard to read.
>
> + When I use piglit I need to be able to determine the following from
>   the output:
>
>   1. Whether or not the test run is invalid, meaning that I have some
>      configuration problem on my system which is causing all the tests
>      to fail.  With the old output I would do this by watching the first
>      20 tests and checking whether they all failed.  With the new output
>      I can do this by looking at the pass/fail stats and/or enabling
>      verbose output.
>
>   2. Which test is running when the system hangs.  Enabling
>      verbose output in the new logging allows me to do this.

IME, a history is good for this too -- sometimes a test does something
bad, but is able to exit before the system dies.
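
E.g. even something as simple as a bounded history of the last few
completed tests would help -- a made-up sketch, not part of the patch:

    import collections

    # Sketch: keep the last N completed tests around so that, if the
    # machine hangs or dies, the tail of the history still points at
    # the likely culprits.
    history = collections.deque(maxlen=20)

    def on_test_complete(name, result):
        history.append((name, result))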

>
>
> Thanks,
> Tom
>
> On Tue, Feb 18, 2014 at 05:32:34AM -0800, Dylan Baker wrote:
>> Signed-off-by: Dylan Baker <baker.dylan.c at gmail.com>
>> Reviewed-by: Ilia Mirkin <imirkin at alum.mit.edu>
>> ---
>>  framework/tests/log_tests.py | 85 ++++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 85 insertions(+)
>>  create mode 100644 framework/tests/log_tests.py
>>
>> diff --git a/framework/tests/log_tests.py b/framework/tests/log_tests.py
>> new file mode 100644
>> index 0000000..5f0640f
>> --- /dev/null
>> +++ b/framework/tests/log_tests.py
>> @@ -0,0 +1,85 @@
>> +# Copyright (c) 2014 Intel Corporation
>> +
>> +# Permission is hereby granted, free of charge, to any person obtaining a copy
>> +# of this software and associated documentation files (the "Software"), to deal
>> +# in the Software without restriction, including without limitation the rights
>> +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
>> +# copies of the Software, and to permit persons to whom the Software is
>> +# furnished to do so, subject to the following conditions:
>> +
>> +# The above copyright notice and this permission notice shall be included in
>> +# all copies or substantial portions of the Software.
>> +
>> +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>> +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>> +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
>> +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
>> +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
>> +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
>> +# SOFTWARE.
>> +
>> +""" Module provides tests for log.py module """
>> +
>> +from types import *  # This is a special * safe module
>> +import nose.tools as nt
>> +from framework.log import Log
>> +
>> +
>> +def test_initialize_log():
>> +    """ Test that Log initializes with """
>> +    log = Log(100)
>> +    assert log
>> +
>> +
>> +def test_get_current_return():
>> +    """ Test that pre_log returns a number """
>> +    log = Log(100)
>> +
>> +    ret = log.get_current()
>> +    nt.assert_true(isinstance(ret, (IntType, FloatType, LongType)),
>> +                   msg="Log.get_current() didn't return a numeric type!")
>> +
>> +
>> +def test_mark_complete_increment_complete():
>> +    """ Tests that Log.mark_complete() increments self.__complete """
>> +    log = Log(100)
>> +    ret = log.get_current()
>> +    log.mark_complete(ret, 'pass')
>> +    nt.assert_equal(log._Log__complete, 1,
>> +                    msg="Log.mark_complete() did not properly increment "
>> +                        "Log.__complete")
>> +
>> +
>> +def check_mark_complete_increment_summary(stat):
>> +    """ Test that passing a result to mark_complete works correctly """
>> +    log = Log(100)
>> +    ret = log.get_current()
>> +    log.mark_complete(ret, stat)
>> +    print log._Log__summary
>> +    nt.assert_equal(log._Log__summary[stat], 1,
>> +                    msg="Log.__summary[{}] was not properly "
>> +                        "incremented".format(stat))
>> +
>> +
>> +def test_mark_complete_increment_summary():
>> +    """ Generator that creates tests for self.__summary """
>> +
>> +
>> +    valid_statuses = ('pass', 'fail', 'crash', 'warn', 'dmesg-warn',
>> +                      'dmesg-fail', 'skip')
>> +
>> +    yieldable = check_mark_complete_increment_summary
>> +
>> +    for stat in valid_statuses:
>> +        yieldable.description = ("Test that Log.mark_complete increments "
>> +                                 "Log.__summary[{}]".format(stat))
>> +        yield yieldable, stat
>> +
>> +
>> +def test_mark_complete_removes_complete():
>> +    """ Test that Log.mark_complete() removes finished tests from __running """
>> +    log = Log(100)
>> +    ret = log.get_current()
>> +    log.mark_complete(ret, 'pass')
>> +    nt.assert_not_in(ret, log._Log__running,
>> +                     msg="Running tests not removed from running list")
>> --
>> 1.9.0
>>
>> _______________________________________________
>> Piglit mailing list
>> Piglit at lists.freedesktop.org
>> http://lists.freedesktop.org/mailman/listinfo/piglit
> _______________________________________________
> Piglit mailing list
> Piglit at lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/piglit

