<p dir="ltr"><br>
On Aug 19, 2014 8:13 PM, "Michel Dänzer" <<a href="mailto:michel@daenzer.net">michel@daenzer.net</a>> wrote:<br>
><br>
> From: Michel Dänzer <<a href="mailto:michel.daenzer@amd.com">michel.daenzer@amd.com</a>><br>
><br>
> This reverts commit acb824ddc53c446124d88e37db610a4f8c59d56c.<br>
><br>
> This decreases the runtime of the gpu.py profile from around 15 minutes to<br>
> around 12 minutes on my machine, with no change in results.</p>
<p dir="ltr">Do you use a compositing window manager? If so, then each window gets its own front buffer and you can use the -c option and run all the tests in parallel without problems. If you are not running a compositor then there could be a problem if the concurrent test pops up a window on top of the non-concurrent test. In this case the concurrent test's window would destroy the front buffer of the nom-concurrent test.</p>
<p dir="ltr">This will only work if we can guarantee that the non-concurrent test doesn't pop up any windows.</p>
<p dir="ltr">--Jason Ekstrand<br>
><br>
> If in the future there are tests which really can't run in parallel with any<br>
> other tests, a new category should be added for them.<br>
><br>
> Signed-off-by: Michel Dänzer <<a href="mailto:michel.daenzer@amd.com">michel.daenzer@amd.com</a>><br>
> ---<br>
> framework/profile.py | 29 +++++++++++++++--------------<br>
> 1 file changed, 15 insertions(+), 14 deletions(-)<br>
><br>
> diff --git a/framework/profile.py b/framework/profile.py<br>
> index 5428890..1bb2b50 100644<br>
> --- a/framework/profile.py<br>
> +++ b/framework/profile.py<br>
> @@ -189,8 +189,6 @@ class TestProfile(object):<br>
> self._pre_run_hook()<br>
> framework.exectest.Test.OPTS = opts<br>
><br>
> - chunksize = 1<br>
> -<br>
> self._prepare_test_list(opts)<br>
> log = Log(len(self.test_list), opts.verbose)<br>
><br>
> @@ -203,30 +201,33 @@ class TestProfile(object):<br>
> name, test = pair<br>
> test.execute(name, log, json_writer, self.dmesg)<br>
><br>
> - def run_threads(pool, testlist):<br>
> - """ Open a pool, close it, and join it """<br>
> - pool.imap(test, testlist, chunksize)<br>
> - pool.close()<br>
> - pool.join()<br>
> -<br>
> # Multiprocessing.dummy is a wrapper around Threading that provides a<br>
> # multiprocessing compatible API<br>
> #<br>
> # The default value of pool is the number of virtual processor cores<br>
> single = multiprocessing.dummy.Pool(1)<br>
> multi = multiprocessing.dummy.Pool()<br>
> + chunksize = 1<br>
><br>
> if opts.concurrent == "all":<br>
> - run_threads(multi, self.test_list.iteritems())<br>
> + multi.imap(test, self.test_list.iteritems(), chunksize)<br>
> elif opts.concurrent == "none":<br>
> - run_threads(single, self.test_list.iteritems())<br>
> + single.imap(test, self.test_list.iteritems(), chunksize)<br>
> else:<br>
> # Filter and return only thread safe tests to the threaded pool<br>
> - run_threads(multi, (x for x in self.test_list.iteritems()<br>
> - if x[1].run_concurrent))<br>
> + multi.imap(test, (x for x in self.test_list.iteritems()<br>
> + if x[1].run_concurrent), chunksize)<br>
> # Filter and return the non thread safe tests to the single pool<br>
> - run_threads(single, (x for x in self.test_list.iteritems()<br>
> - if not x[1].run_concurrent))<br>
> + single.imap(test, (x for x in self.test_list.iteritems()<br>
> + if not x[1].run_concurrent), chunksize)<br>
> +<br>
> + # Close and join the pools<br>
> + # If we don't close and join the pools the script will exit before<br>
> + # the pools finish running<br>
> + multi.close()<br>
> + single.close()<br>
> + multi.join()<br>
> + single.join()<br>
><br>
> log.summary()<br>
><br>
> --<br>
> 2.1.0<br>
><br>
> _______________________________________________<br>
> Piglit mailing list<br>
> <a href="mailto:Piglit@lists.freedesktop.org">Piglit@lists.freedesktop.org</a><br>
> <a href="http://lists.freedesktop.org/mailman/listinfo/piglit">http://lists.freedesktop.org/mailman/listinfo/piglit</a><br>
</p>