[Mesa-dev] A proposal for new testing requirements for stable releases

Carl Worth cworth at cworth.org
Tue Jul 8 16:10:02 PDT 2014


I've been doing stable-branch releases of mesa for close to a year now.

In all of that time, there's one thing that I've never been very
comfortable with. Namely, release candidates are not tested very
thoroughly prior to release.[*] I'm glad that we haven't had any major
problems yet with broken releases. But I'd like to improve our release
testing rather than just trust to future luck.

Here are the changes that I'm proposing now for comments:

  1. The release schedule needs to be predictable.

	The current ideal is "every 2 weeks" for stable releases. I
	believe that is healthy and feasible.

	I propose that releases occur consistently every 2 weeks on
	Friday, (in the release manager's timezone---currently
	America/Los_Angeles).

  2. The release candidate needs to be made available in advance to
     allow for testing.

	I propose that the release manager push a candidate branch to
	the upstream repository by the end of the Monday prior to each
	scheduled Friday release.

  3. I'd like to receive a testing report from each driver team.

	This is the meat of my proposal. I'm requesting that each driver
	team designate one (or more) people that will be responsible for
	running (or collecting) tests for each release-candidate and
	reporting the results to me.

	With a new release-candidate pushed by the end of the day on
	Monday, and with me starting the actual release work on Friday,
	that gives each team at least 72 hours to perform testing.

	I'm happy to let each driver team decide its own testing
	strategy. My hope is that it would be based on running piglit
	across a set of "interesting" hardware configurations, (and
	augmenting piglit as necessary as bug fixes are made). A rough
	sketch of what such a run might look like appears just after
	this list. But I do not plan to be involved in specifying what
	those configurations are. All I need is something along the
	lines of:

		"The radeon team is OK with releasing commit <foo>"

	sent to me before the scheduled day of the release.

	Obviously, any negative report from any team can trigger changes
	to the patches to be included in the release.

	And in order to put some teeth into this scheme:

	I propose that on the day of the release, the release manager
	drop all driver-specific patches for any driver for which the
	driver team has not provided an affirmative testing report.
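
For concreteness, here's a rough sketch of what a driver team's test run
against a candidate might look like. The branch name, piglit profile, and
result paths below are only placeholders, and the exact piglit invocation
may vary with the piglit version in use:

	# Fetch and build the release candidate, (the branch name here
	# is hypothetical; configure flags omitted).
	$ git fetch origin
	$ git checkout origin/10.2-candidate
	$ ./autogen.sh && make -j$(nproc)

	# Run piglit on each hardware configuration the team cares
	# about and compare against results from the previous stable
	# release, (older piglit may want "tests/quick.py" instead of
	# "quick").
	$ piglit run quick results/10.2-candidate
	$ piglit summary console results/10.2 results/10.2-candidate

If that shows no new regressions on the configurations the team cares
about, the report back to me can be as short as the one-liner quoted
above.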

Does the above sound like a reasonable scheme? If so, who from each driver
team is willing to volunteer to send me testing reports? I'll be happy
to send personal reminders to such volunteers as releases are getting
close and testing reports are missing, (before yanking out any
driver-specific patches).

Thanks in advance for any thoughts or comments,

-Carl

PS. This same testing scheme can obviously be used for major releases as
well, presumably with the same volunteers, (but different details on
release timing).

[*] For background, here's the testing scheme I've been using so far:

  1. Running each release candidate through piglit on my laptop

  2. Trying to leave the stable branch pushed out for at least a day or
     so to allow testing prior to release.

The testing I do on my laptop at least gives some sanity checking for
core Mesa, but very little driver-specific testing. Meanwhile, the
majority of stable-branch fixes seem to be in driver-specific code.

And I don't know if there is any serious testing of the stable branch by
anyone else between when I push it and when I release, (I haven't heard
much either way). I know it doesn't help that the timing of my releases
has not often been predictable.

