[PATCH xorg-gtest 1/4] xserver: increase timeout to 3 seconds

Peter Hutterer peter.hutterer at who-t.net
Sun Oct 28 15:48:21 PDT 2012


On Fri, Oct 26, 2012 at 09:27:11AM -0700, Chase Douglas wrote:
> On Thu, Oct 25, 2012 at 4:11 PM, Peter Hutterer
> <peter.hutterer at who-t.net> wrote:
> > A server without a config file that inits all input devices takes just over
> > a second on my machine. Up this timeout so we don't get spurious signals
> > later.
> >
> > Signed-off-by: Peter Hutterer <peter.hutterer at who-t.net>
> > ---
> >  src/xserver.cpp | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/src/xserver.cpp b/src/xserver.cpp
> > index 29c0430..9b163bb 100644
> > --- a/src/xserver.cpp
> > +++ b/src/xserver.cpp
> > @@ -419,7 +419,7 @@ void xorg::testing::XServer::Start(const std::string &program) {
> >    std::string err_msg;
> >
> >    sigset_t sig_mask;
> > -  struct timespec sig_timeout = {1, 0}; /* 1 sec + 0 nsec */
> > +  struct timespec sig_timeout = {3, 0}; /* 3 sec + 0 nsec */
> >
> >    /* add SIGUSR1 to the signal mask */
> >    sigemptyset(&sig_mask);
> 
> One might wonder if the test failing to start the server in 1 second
> is a real failure :). I'm ok with this change, it just seems like it
> shouldn't take that long...

The case where I hit this was a VM that was generally on the slow side. So
yeah, it shouldn't take that long, but it's a use case that can happen, and
it shouldn't lead to spurious signals later.

> For the series:
> 
> Reviewed-by: Chase Douglas <chase.douglas at ubuntu.com>

Thanks

Cheers,
   Peter
