[Piglit] [Patch v2 1/4] log_tests.py: Add tests from framework/log.py

Tom Stellard tom at stellard.net
Tue Feb 18 15:41:51 PST 2014


Hi Dylan,

I've tested version 2 of this series, and I have a few
questions/comments:

+ I really like being able to see how many tests have run and
  how many have completed.  The pass/fail rates are nice and
  help me identify bad test runs right away.

+ Would it be possible to print the test names in the non-verbose
  output mode?  Would this work in concurrency mode?

+ When I use the verbose output, the output looks like this:

running :: Program/Execute/get-global-id, skip: 11, warn: 1 Running Test(s): 253

  It looks like the "running" lines are being merged with the "status" lines.
  Is there any way to fix this?  It makes the output hard to read.

+ When I use piglit I need to be able to determine the following from
  the output:

  1. Whether or not the test run is invalid, meaning that some
     configuration problem on my system is causing all the tests
     to fail.  With the old output I would do this by watching the first 20
     tests and checking whether they all failed.  With the new output I can
     do this by looking at the pass/fail stats and/or enabling verbose
     output.

  2. Which test is running when the system hangs.  Enabling
     verbose output in the new logging allows me to do this.


Thanks,
Tom

On Tue, Feb 18, 2014 at 05:32:34AM -0800, Dylan Baker wrote:
> Signed-off-by: Dylan Baker <baker.dylan.c at gmail.com>
> Reviewed-by: Ilia Mirkin <imirkin at alum.mit.edu>
> ---
>  framework/tests/log_tests.py | 85 ++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 85 insertions(+)
>  create mode 100644 framework/tests/log_tests.py
> 
> diff --git a/framework/tests/log_tests.py b/framework/tests/log_tests.py
> new file mode 100644
> index 0000000..5f0640f
> --- /dev/null
> +++ b/framework/tests/log_tests.py
> @@ -0,0 +1,85 @@
> +# Copyright (c) 2014 Intel Corporation
> +
> +# Permission is hereby granted, free of charge, to any person obtaining a copy
> +# of this software and associated documentation files (the "Software"), to deal
> +# in the Software without restriction, including without limitation the rights
> +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
> +# copies of the Software, and to permit persons to whom the Software is
> +# furnished to do so, subject to the following conditions:
> +
> +# The above copyright notice and this permission notice shall be included in
> +# all copies or substantial portions of the Software.
> +
> +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
> +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> +# SOFTWARE.
> +
> +""" Module provides tests for log.py module """
> +
> +from types import *  # This is a special * safe module
> +import nose.tools as nt
> +from framework.log import Log
> +
> +
> +def test_initialize_log():
> +    """ Test that Log initializes without errors """
> +    log = Log(100)
> +    assert log
> +
> +
> +def test_get_current_return():
> +    """ Test that pre_log returns a number """
> +    log = Log(100)
> +
> +    ret = log.get_current()
> +    nt.assert_true(isinstance(ret, (IntType, FloatType, LongType)),
> +                   msg="Log.get_current() didn't return a numeric type!")
> +
> +
> +def test_mark_complete_increment_complete():
> +    """ Tests that Log.mark_complete() increments self.__complete """
> +    log = Log(100)
> +    ret = log.get_current()
> +    log.mark_complete(ret, 'pass')
> +    nt.assert_equal(log._Log__complete, 1,
> +                    msg="Log.mark_complete() did not properly increment "
> +                        "Log.__complete")
> +
> +
> +def check_mark_complete_increment_summary(stat):
> +    """ Test that passing a result to mark_complete works correctly """
> +    log = Log(100)
> +    ret = log.get_current()
> +    log.mark_complete(ret, stat)
> +    print log._Log__summary
> +    nt.assert_equal(log._Log__summary[stat], 1,
> +                    msg="Log.__summary[{}] was not properly "
> +                        "incremented".format(stat))
> +
> +
> +def test_mark_complete_increment_summary():
> +    """ Generator that creates tests for self.__summary """
> +
> +
> +    valid_statuses = ('pass', 'fail', 'crash', 'warn', 'dmesg-warn',
> +                      'dmesg-fail', 'skip')
> +
> +    yieldable = check_mark_complete_increment_summary
> +
> +    for stat in valid_statuses:
> +        yieldable.description = ("Test that Log.mark_complete increments "
> +                                 "self.__summary[{}]".format(stat))
> +        yield yieldable, stat
> +
> +
> +def test_mark_complete_removes_complete():
> +    """ Test that Log.mark_complete() removes finished tests from __running """
> +    log = Log(100)
> +    ret = log.get_current()
> +    log.mark_complete(ret, 'pass')
> +    nt.assert_not_in(ret, log._Log__running,
> +                     msg="Running tests not removed from running list")
> -- 
> 1.9.0
> 
> _______________________________________________
> Piglit mailing list
> Piglit at lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/piglit
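
The tests in the patch reach Log's private counters through Python's
name mangling (e.g. `log._Log__complete`).  A minimal standalone sketch of
why that works, using a hypothetical stand-in rather than piglit's real
Log class:

```python
class Log(object):
    """Hypothetical stand-in for piglit's Log, only to show name mangling."""

    def __init__(self, total):
        # Double-underscore attributes are rewritten by the compiler to
        # _ClassName__attr, so these are stored on the instance as
        # _Log__total and _Log__complete.
        self.__total = total
        self.__complete = 0

    def mark_complete(self):
        self.__complete += 1


log = Log(100)
log.mark_complete()
# This is how the tests above assert on private state from outside the class.
assert log._Log__complete == 1
```

This is also why the patch's assertion messages should name the mangled or
double-underscore attribute consistently, since that is the name a reader
has to grep for.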


More information about the Piglit mailing list