[igt-dev] [PATCH i-g-t] igt/drm_read: Exercise waking up the next waiter
Chris Wilson
chris at chris-wilson.co.uk
Wed Feb 27 21:13:02 UTC 2019
Quoting Ville Syrjälä (2019-02-27 20:46:12)
> On Wed, Feb 27, 2019 at 10:20:19AM +0000, Chris Wilson wrote:
> > Try to hit a bug in the kernel whereby a short reader does not wakeup
> > the next waiter (on the same fd) leading to forever blocking.
> >
> > Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> > Cc: Daniel Vetter <daniel.vetter at ffwll.ch>
> > ---
> > tests/drm_read.c | 88 ++++++++++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 88 insertions(+)
> >
> > diff --git a/tests/drm_read.c b/tests/drm_read.c
> > index 309f389f6..923a9f750 100644
> > --- a/tests/drm_read.c
> > +++ b/tests/drm_read.c
> > @@ -43,6 +43,7 @@
> > #include <sys/ioctl.h>
> > #include <sys/time.h>
> > #include <sys/poll.h>
> > +#include <pthread.h>
> > #include "drm.h"
> >
> > IGT_TEST_DESCRIPTION("Call read(drm) and see if it behaves.");
> > @@ -166,6 +167,90 @@ static void test_short_buffer(int in, int nonblock, enum pipe pipe)
> > teardown(fd);
> > }
> >
> > +struct short_buffer_wakeup {
> > + pthread_mutex_t mutex;
> > + pthread_cond_t send, recv;
> > + int counter;
> > + int done;
> > + int fd;
> > +};
> > +
> > +static void *thread_short_buffer_wakeup(void *arg)
> > +{
> > + struct short_buffer_wakeup *w = arg;
> > + char buffer[1024]; /* events are typically 32 bytes */
> > +
> > + while (!w->done) {
> > + /* Short read, does not consume the event. */
> > + igt_assert_eq(read(w->fd, buffer, 4), 0);
> > +
> > + pthread_mutex_lock(&w->mutex);
> > + if (!--w->counter)
> > + pthread_cond_signal(&w->send);
> > + pthread_cond_wait(&w->recv, &w->mutex);
> > + pthread_mutex_unlock(&w->mutex);
> > + }
> > +
> > + return NULL;
> > +}
> > +
> > +static void test_short_buffer_wakeup(int in, enum pipe pipe)
> > +{
> > + const int nt = sysconf(_SC_NPROCESSORS_ONLN) + 1;
>
> Hmm. I guess with fewer threads there's a better chance
> they'd all escape the wait_for_event() before the list
> becomes empty and just all pile up on the mutex. So I
> was wondering if even more would be better. But since
> CI already found the problem with this, I guess it's
> sufficient.
I think the biggest challenge is actually making sure that at least 2
threads enter the read and hit the wait condition before the parent
generates the event. If we bound everyone to a single CPU, made the
children RT and the parent low priority, we could probably do it
successfully with just 2 child threads.
But since it only takes about 3 passes (less than 0.1s) for this to
trigger across the machines, I think that's, as you say, good enough.
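
Roughly, such a setup might look like the sketch below. This is
illustrative only: the helper names, the choice of CPU 0 and the FIFO
priority are mine, not part of the posted patch.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Pin the calling thread to CPU 0 so parent and children must share it. */
static void pin_to_one_cpu(void)
{
	cpu_set_t mask;

	CPU_ZERO(&mask);
	CPU_SET(0, &mask);
	sched_setaffinity(0 /* calling thread */, sizeof(mask), &mask);
}

/* Promote a child to SCHED_FIFO; with everyone confined to one CPU, the
 * SCHED_OTHER parent then cannot run (and so cannot generate the event)
 * until every RT child has blocked in read().
 */
static void make_child_rt(pthread_t child)
{
	struct sched_param param = { .sched_priority = 1 };

	pthread_setschedparam(child, SCHED_FIFO, &param);
}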
-Chris