[igt-dev] [PATCH i-g-t] tests/gem_evict_everything fix for mlocked-* subtests

Ewelina Musial ewelina.musial at intel.com
Thu Jun 7 14:27:23 UTC 2018


On Thu, Jun 07, 2018 at 03:07:31PM +0100, Chris Wilson wrote:
> Quoting Ewelina Musial (2018-06-07 14:21:41)
> > Add a check to avoid locking more memory than is available.
> > Trying to allocate the size returned by intel_get_avail_ram_mb()
> > triggers the OOM killer. It looks like the available RAM is not
> > fully free, so the idea is to decrease that size by 1024MB and
> > then, before allocating, increase the memory size in a loop until
> > it is as big as possible.
> > 
> > Also, changed surface_count. A few thousand iterations are not
> > necessary and take a lot of time. This change should cover the
> > previous functionality.
> > 
> > Using __igt_waitchildren() instead of igt_waitchildren() avoids
> > failing the test if the OOM killer stops a process run by
> > igt_fork() (the OOM killer is expected).
> > 
> > __igt_waitchildren() is implemented by Chris in another patch, so
> > this will be failing for now. I will update this patch after his
> > code is merged :)
> > 
> > Signed-off-by: Ewelina Musial <ewelina.musial at intel.com>
> > ---
> >  tests/eviction_common.c      | 51 ++++++++++++++++++++++++++++++++++++++------
> >  tests/gem_evict_everything.c |  6 +++---
> >  2 files changed, 48 insertions(+), 9 deletions(-)
> > 
> > diff --git a/tests/eviction_common.c b/tests/eviction_common.c
> > index 300eb03d..7ae48654 100644
> > --- a/tests/eviction_common.c
> > +++ b/tests/eviction_common.c
> > @@ -134,14 +134,20 @@ static void mlocked_evictions(int fd, struct igt_eviction_test_ops *ops,
> >                               uint64_t surface_count)
> >  {
> >         unsigned int *can_mlock;
> > -       uint64_t sz, pin;
> > +       uint64_t sz, pin, *update_pin = 0;
> > +       bool flag = false;
> >  
> >         intel_require_memory(surface_count, surface_size, CHECK_RAM);
> >  
> >         sz = surface_size*surface_count;
> > -       pin = intel_get_avail_ram_mb();
> > +       pin = intel_get_avail_ram_mb() - 1024;
> >         pin *= 1024 * 1024;
> >         igt_require(pin > sz);
> > +
> > +       update_pin = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED | MAP_ANON, -1, 0);
> > +       igt_assert(update_pin != MAP_FAILED);
> > +       *update_pin = pin;
> > +
> >         pin -= sz;
> >  
> >         igt_debug("Pinning [%'lld, %'lld] MiB\n",
> > @@ -151,16 +157,48 @@ static void mlocked_evictions(int fd, struct igt_eviction_test_ops *ops,
> >         can_mlock = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED | MAP_ANON, -1, 0);
> >         igt_assert(can_mlock != MAP_FAILED);
> >  
> > +       igt_fork(child, 1)
> > +       {
> > +               void *lock;
> > +               uint64_t update_step = (5 << 22);
> 
> 5 * 4 MiB
> 
> Any particular reason? Just curious.

Not really. If that value is too small this process will take a very
long time, and if it is too big the rest of the test could lose its
point, so I just tried to choose a reasonable value in between.
(5 << 22 is 20 MiB per step, so probing a few GiB of RAM takes on the
order of a few hundred mlock() calls.)
> 
> We try to follow the kernel coding style (K&R braces, just the right
> amount of brackets, and start multiline comments with
> /*
>  * Hello World!
>  */)
> 
You are right, thanks :)

> > +
> > +               do {
> > +                       lock = malloc(*update_pin + update_step);
> > +                       if (lock && !mlock(lock, *update_pin + update_step)) {
> > +                               free(lock);
> > +                               *update_pin += update_step;
> > +                       } else {
> > +                               /* If mlock() fails this fork will be killed
> > +                                * and the flag state will not be changed,
> > +                                * but for better understanding it is good
> > +                                * to set it here.
> > +                                */
> > +                               flag = true;
> > +                       }
> > +               } while (!flag);
> > +       }
> 
> flag is only used inside this loop; a separate process no less, so the
> value set above doesn't reach here anyway.
> 
> You can kill flag entirely and make that a break; inside a
> do { } while (1);

Sounds good, I will use your suggestion
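
Something along these lines, then (just a rough sketch reusing the
names from the patch above, not tested yet; it still needs rebasing
onto your __igt_waitchildren() patch):

	igt_fork(child, 1) {
		void *lock;
		uint64_t update_step = 5 << 22; /* 20 MiB per probe */

		do {
			lock = malloc(*update_pin + update_step);
			if (lock && !mlock(lock, *update_pin + update_step)) {
				/* Locked successfully, try a bigger size. */
				free(lock);
				*update_pin += update_step;
			} else {
				/*
				 * mlock() failed (or the OOM killer got us
				 * first), so *update_pin already holds the
				 * largest size we managed to lock.
				 */
				free(lock);
				break;
			}
		} while (1);
	}

With that, flag goes away completely.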
> 
> > +
> > +       __igt_waitchildren();
> > +       igt_require(*update_pin);
> > +
> >         igt_fork(child, 1) {
> >                 void *locked;
> >  
> > -               locked = malloc(pin + sz);
> > -               if (locked != NULL && !mlock(locked, pin + sz))
> > +               locked = malloc(*update_pin);
> > +               if (locked && !mlock(locked, *update_pin)) {
> >                         *can_mlock = 1;
> > +                       if (*update_pin >= pin + sz)
> > +                               pin = *update_pin - sz;
> > +                       else
> > +                               exit(1);
> > +               }
> >         }
> > +
> >         igt_waitchildren();
> 
> I typically place the igt_waitchildren() tight against the igt_fork()
> loop unless the code is very, very busy.
> 
> Overall, looks to be the same approach as the discovery mechanism I
> used. So it should work so long as the kernel's oomkiller doesn't kill
> everything on the system other than our child.
> -Chris

-Ewelina

