[Mesa-dev] [RFC] Mesa 17.3.x release problems and process improvements

Juan A. Suarez Romero jasuarez at igalia.com
Wed Apr 4 17:48:16 UTC 2018


On Wed, 2018-04-04 at 10:07 -0700, Mark Janes wrote:
> Emil Velikov <emil.l.velikov at gmail.com> writes:
> 
> > Hi all,
> > 
> > Having gone through the thread a few times, I believe it can be
> > summarised as follows:
> >  * Greater transparency is needed.
> >  * Subsystem/team maintainers.
> >  * Unfit and late nominations.
> >  * Developers/everyone should be more involved.
> >  * Greater automation must be explored.
> > 
> > 
> > NOTES:
> >  * Some of the details are not listed in the thread, but have been
> >    raised in one form or another.
> >  * The details focus more on the goals than on the actual means.
> >  * That said, some details may have been missed - I'm a mere human.
> > 
> > 
> > In detail:
> >  * make the patch queue, release date and blockers accessible at any
> >    point in time:
> >     * queued patches can be accessed, via a branch - say wip/17.3,
> >       wip/18.0, wip/18.1, etc. The branch _will be_ rebased, although
> >       normally reverts are recommended.
> 
> I created a bot that applies commits from master to wip stable branches
> and tests in CI.  It runs several times a day and identifies patches
> that do not cherry-pick cleanly.  Branches are here:
> 
>  https://github.com/janesma/mesa/tree/wip/17.3
>  https://github.com/janesma/mesa/tree/wip/18.0
> 
> I've sent a couple of mails to developers when their recent patches
> don't apply.  So far it handles about 85% of the commits containing
> stable tags without intervention.
> 
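As a rough illustration, the scan for stable-nominated commits can key off the tags that submitters put in commit messages. A minimal sketch (hypothetical helper, not the actual bot's code; the patterns assume the usual `Cc: mesa-stable` and `Fixes:` conventions):

```python
import re

# Patterns that mark a commit as a stable-branch nomination.
# These follow the common Mesa conventions; adjust as needed.
STABLE_CC = re.compile(r'^\s*Cc:.*mesa-stable', re.IGNORECASE | re.MULTILINE)
FIXES_TAG = re.compile(r'^\s*Fixes:\s*[0-9a-f]{8,40}', re.IGNORECASE | re.MULTILINE)

def is_stable_nomination(commit_message: str) -> bool:
    """Return True if the commit message nominates the patch for stable."""
    return bool(STABLE_CC.search(commit_message) or FIXES_TAG.search(commit_message))
```

Each commit message (e.g. from `git log --format=%B` over the range of interest) would be run through a check like this before the bot attempts the cherry-pick.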

Cool! I was thinking of a similar approach here:

* Every time a push happens, a job/bot scans the pushed patches and creates a
pull request with the stable patches. If some of the patches do not apply, it
sends an email informing the authors. I group all the stable patches in one PR
because when a push is done, I assume that all the patches belong to the same
feature/bugfix, and thus it makes sense to deal with them as one PR.
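That grouping step could be sketched as follows (hypothetical names; the push is modelled as a list of commits, and `try_apply` stands in for whatever cherry-pick check the bot performs):

```python
from typing import Callable, Dict, List, Tuple

Commit = Dict[str, str]  # keys: "sha", "author", "message"

def group_push_into_pr(
    commits: List[Commit],
    is_stable: Callable[[str], bool],
    try_apply: Callable[[Commit], bool],
) -> Tuple[List[Commit], List[str]]:
    """Collect the stable-nominated commits of one push into a single PR.

    Returns (pr_commits, authors_to_notify): commits that apply cleanly
    go into one PR together (one push ~= one feature/bugfix); authors of
    commits that fail to apply are collected for the notification email.
    """
    pr_commits, notify = [], []
    for commit in commits:
        if not is_stable(commit["message"]):
            continue  # not nominated for the stable branch
        if try_apply(commit):
            pr_commits.append(commit)
        elif commit["author"] not in notify:
            notify.append(commit["author"])
    return pr_commits, notify
```

In a real bot, `try_apply` would be a `git cherry-pick` attempt against the wip branch, and the notify list would feed the outgoing mail.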

* There's a bot listening for PRs. Every time a new PR arrives, it starts the
proper testing. If the test is successful, it automatically merges the PR;
otherwise it just sends an email reporting the failure. An important point here
is that if another PR is already under testing, the bot waits until that test
finishes and that PR is either merged or rejected. If it is merged, the bot
rebases the new PR on top of it and then starts the test. This way, we
guarantee the test runs against a version that won't change while the test is
happening. If you are more interested, I was thinking of using Marge-bot for
this.
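A toy model of that serialised loop (hypothetical names; branch state reduced to a list of commit ids, and `run_tests` standing in for the real CI run):

```python
from typing import Callable, List

def process_merge_queue(
    branch: List[str],
    queue: List[List[str]],
    run_tests: Callable[[List[str]], bool],
) -> List[str]:
    """Merge queued PRs strictly one at a time.

    Each PR (a list of commit ids) is rebased onto the *current* branch
    head, tested in exactly that state, and merged only if tests pass.
    Because PRs are handled one after another, the branch can never
    change underneath a test run that is in flight.
    """
    for pr_commits in queue:
        candidate = branch + pr_commits  # "rebase" the PR on the current head
        if run_tests(candidate):
            branch = candidate           # merge: the tested state becomes the head
        # else: PR rejected; the failure email would be sent here
    return branch
```

The key property is that what gets merged is byte-for-byte what was tested, which is the guarantee Marge-bot-style tooling provides.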


	J.A.



> >     * rejected patches must be listed alongside the reason why and
> >       author+reviewer must be informed (email & IRC?) ASAP.
> >        * we already document and track those in .cherry-ignore. can we
> >          reuse that?
> > 
> >     * patches with trivial conflicts can be merged to the wip branch
> >       after another release manager, or patch author/reviewer has
> >       confirmed the changes.
> > 
> >     * patches that require backports will be rejected. usual rejection
> >       procedure applies (described above).
> > 
> >     * if there is delay due to extra testing time or otherwise, the
> >       release manager must list the blocking issues and ETA must be
> >       provided. ETA must be updated before it's reached. it may be
> >       worth having the ETA and rejections in a single place - inside
> >       the wip/ branch, html page, elsewhere.
> > 
> >     * the current metabug with release blockers must be made more
> >       obvious.
> > 
> >     * release manager can contact Phoronix and/or similar media to
> >       publicise expected delays, blockers or seek request for testing.
> > 
> > 
> >  * teams are encouraged to have one or multiple maintainers. some of
> >    the goals of having such people include:
> >     * individuals that have greater interaction with the team and
> >       knowledge about the team plans. rough examples include:
> >        * backport/bug is needed, yet person is not available - on a
> >          leave (sick, sabbatical, other) or busy with other things.
> >        * team has higher priority with details not publicly available.
> > 
> >     * can approve unfit or late nominations - see next section.
> >     * to ensure cover and minimise stress it's encouraged to have
> >       multiple maintainers per team and they are rotated regularly.
> >     * list of maintainers must be documented
> > 
> > 
> >  * unfit and late nominations:
> >     * nominations rejected as unfit under the existing criteria can
> >       still be merged as long as:
> >        * subsystem specific patches are approved by the team
> >          maintainer(s).
> >        * patches that cover multiple subsystems are approved by 50%+1
> >          of the maintainers of the affected subsystems.
> > 
> >     * late nominations can be made after the pre-release announcement.
> >       they must be approved by the subsystem maintainers up to X hours
> >       before the actual release. approval specifics are identical to the
> >       ones listed in 'unfit' section just above.
> > 
> > 
> >  * developers/everyone should be more involved:
> >     * with the patch queue accessible at any point, everyone is
> >       encouraged to keep an eye open and report issues.
> > 
> >     * developers should be more active in providing backports and
> >       updating the status of release blocking bugs.
> > 
> >     * release managers and team maintainers must check with developer
> >       (via email, IRC, other) if no action has been made for X days.
> > 
> >     * everyone is encouraged to provide a piglit/dEQP/etc testing
> >       summary (via email, attachment, html page, etc). if they do,
> >       please ensure that the summary is consistently available,
> >       regardless of whether there are any regressions or not. if extra
> >       time is needed, reply to the list informing the release managers
> > 
> >     * in case of regressions, a bisection must be provided.
> > 
> > 
> >  * testing - pre and post merge, automation:
> > 
> >    NOTE: implementation specifics are up to each team, with goals of:
> >    a) results must be accessible reasonably easy
> >    b) high level documentation of the setup and contact points are
> >       documented
> > 
> >     * with over 120 developers contributing to mesa, ambiguous patch
> >       nominations will always exist.
> > 
> >     * the obvious ones can be automated, others will be applied manually.
> > 
> >     * release manager should deploy automation ensuring that all common
> >       combinations build correctly. if a particular combination is missing,
> >       interested parties should provide basic information/assistance for
> >       setting one up.
> > 
> >     * release manager will push the wip branch, after ensuring that
> >       patches follow the criteria and pass build testing
> > 
> >     * pre: automated runtime testing can be utilised at a later stage
> >       with gitlab. it does not seem feasible atm.
> > 
> >     * post: teams can setup piglit/dEQP/etc testing, summary and/or
> >       bisection. it should be documented if the testing is triggered on
> >       push, polled every so often or otherwise.
> > 
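As an illustration of covering "all common combinations build correctly", a configuration matrix can be expanded mechanically into build invocations. The option names below are illustrative only, not Mesa's canonical set:

```python
from itertools import product
from typing import Dict, List

def build_matrix(options: Dict[str, List[str]]) -> List[str]:
    """Expand a dict of build options into meson-style command lines.

    Every combination of option values becomes one build invocation;
    a CI job would then run each line and report failures.
    """
    names = sorted(options)
    lines = []
    for values in product(*(options[n] for n in names)):
        flags = " ".join(f"-D{n}={v}" for n, v in zip(names, values))
        lines.append(f"meson build {flags}")
    return lines
```

With four drivers and three platform options this already yields twelve builds, which is why automating the expansion beats maintaining the list by hand.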
> > 
> > I believe that we all agree on the above. If so the next step is to
> > update the documentation and each of us to grab a piece.
> > 
> > If you have feedback on any point, be that positive or negative, please
> > reply only with the hunk you have in mind.
> > 
> > Thanks
> > Emil
> 
> 

