[PATCH i-g-t 2/6] RELOC WIP SQUASH

Thomas Hellström thomas.hellstrom at linux.intel.com
Fri Aug 6 09:42:05 UTC 2021


From: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>

Squashed commit of the following:

commit 2013c7885cf325c093fc028ceda597f153c6e35b
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:19:17 2021 +0200

    HAX: remove gttfill for tgl ci

commit 616d28ad12ec654958ed94cc73067c25755f6926
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:19:16 2021 +0200

    WIP: tests/gem_exec_schedule: @deep - NOT WORKING; @wide (ok); reorder_wide (still to be fixed)

commit a5c9123efeb4e9230a6d6d08ad5e5fecd690d19a
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:19:15 2021 +0200

    WIP: NOT WORKING: gem_exec_await

commit 41cb243db92082738ae00af8091035bba7f7d1dd
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:19:14 2021 +0200

    WIP: tests/gem_ctx_shared: Convert to use no-reloc

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>

commit 37c507cca4ef11bead1113af550ff82b75d42651
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:19:13 2021 +0200

    WIP: tests/gem_ctx_persistence: Adapt to use allocator

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>

commit 076264f58e1504a5c4aa69b4989c2fbed370441b
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:19:12 2021 +0200

    WIP: tests/gem_exec_fence: rewrite to no-reloc

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Signed-off-by: Andrzej Turko <andrzej.turko at linux.intel.com>

commit 48698a85620d62daf547b9d6d81d3a0374fb9120
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:19:11 2021 +0200

    WIP: tests/gem_exec_whisper

commit b9d570496805f3381b5b10e1135bd59f9310fbc0
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:19:10 2021 +0200

    tests/sysfs_timeslice_duration: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 3a9c63c2883b01c0b3b00556e5fc8157edacb31d
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:19:09 2021 +0200

    tests/sysfs_preempt_timeout: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit a4e703bcca5ff1e9cbff223bc8bf17bc93c216a7
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:19:08 2021 +0200

    tests/sysfs_heartbeat_interval: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 378633a8dff65f3ae0a44d83f7415a6308f4b01c
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:19:07 2021 +0200

    tests/perf_pmu: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Lionel Landwerlin <lionel.g.landwerlin at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit d087c567f95c9823c1e706f669ee2365b0495d52
Author: Bhanuprakash Modem <bhanuprakash.modem at intel.com>
Date:   Tue Aug 3 11:19:06 2021 +0200

    tests/kms_vblank: Adapt to use allocator

    For newer gens the kernel will reject relocations, returning -EINVAL,
    so we should just provide the allocator handle to inject the hang.

    Signed-off-by: Bhanuprakash Modem <bhanuprakash.modem at intel.com>
    Cc: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 60aa339bb1301deb5db428d2e592b0696d7f3378
Author: Bhanuprakash Modem <bhanuprakash.modem at intel.com>
Date:   Tue Aug 3 11:19:05 2021 +0200

    tests/kms_flip: Adapt to use allocator

    For newer gens the kernel will reject relocations, returning -EINVAL,
    so we should just provide the allocator handle to inject the hang.

    Signed-off-by: Bhanuprakash Modem <bhanuprakash.modem at intel.com>
    Cc: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 82da29de483b3943d30c4f7e678c4bca347e7b69
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:19:04 2021 +0200

    tests/kms_cursor_legacy: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>

commit 29228dfd9d07ee9f4b44fb5d48b099be99bb238c
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:19:03 2021 +0200

    tests/kms_busy: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 36f36d6f5a9654c1bc69bc7871f0f8be10022f9f
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:19:02 2021 +0200

    tests/i915_pm_rps: Alter to use no-reloc

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 9b771584f7964e1aecc7b414d71acf727cad75aa
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:19:01 2021 +0200

    tests/i915_pm_rpm: Adapt to use no-reloc

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 9aa66563a1b46411bafc5152068c6627513ce1b4
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:19:00 2021 +0200

    tests/i915_pm_rc6_residency: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 1dba442b22f9cf7de8147ddb8a281d912bad4fc5
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:59 2021 +0200

    tests/i915_module_load: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 7347fecadbdd1e60e7b4486af7bd9444f0d1b7c5
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:58 2021 +0200

    tests/i915_hangman: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 196dd9e34decef4cf646889c65e3572d4a66113a
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:57 2021 +0200

    tests/gem_workarounds: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit a4cb838b7b0836c8080c8e598ae3ebe66333aa04
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:56 2021 +0200

    tests/gem_watchdog: Adapt to use no-reloc

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit e27739aea2ca2cde78b1cb27e0540e2628ee5928
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:55 2021 +0200

    tests/gem_wait: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 84423ce10391e7499f9efff9f4948ab51f39f504
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:54 2021 +0200

    tests/gem_userptr_blits: Adapt to use allocator

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 5bebc94b7b27db84a085996901069e7748ab7e49
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:53 2021 +0200

    tests/gem_unref_active_buffers: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 9e1dfecc225494071f96c3b1db0607593e838b4f
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:52 2021 +0200

    tests/gem_unfence_active_buffers: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 35bd5bc1f7853c708a71bc462efbc601cd031475
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:51 2021 +0200

    tests/gem_tiled_fence_blits: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit b47809868c296994d3b007208b69c8e2adb16e6f
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:50 2021 +0200

    tests/gem_spin_batch: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 719d611177b52059ebf863fd38d88fa89565cdd2
Author: Andrzej Turko <andrzej.turko at linux.intel.com>
Date:   Tue Aug 3 11:18:49 2021 +0200

    tests/gem_softpin: Exercise eviction with softpinning

    Exercise eviction of many gem objects. The added tests are analogous
    to gem_exec_gttfill, but they use softpin and do not require relocation
    support.

    Signed-off-by: Andrzej Turko <andrzej.turko at linux.intel.com>
    Cc: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 83dacba88acc59ac5becc46f0a95bc31a1fdc23c
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:48 2021 +0200

    tests/gem_ringfill: Adapt to use allocator

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 9b88fd566289eff5a0c1200fc0fb43d88a49ca72
Author: Sai Gowtham <sai.gowtham.ch at intel.com>
Date:   Tue Aug 3 11:18:47 2021 +0200

    tests/gem_request_retire: Add allocator support

    Signed-off-by: Sai Gowtham <sai.gowtham.ch at intel.com>
    Cc: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit df8d086b970ea2b30d4401e2ef67d229481487da
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:46 2021 +0200

    tests/gem_mmap_wc: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 4db77ffa0af80f9b9e1a4e4f7e05f63919fe49af
Author: Sai Gowtham <sai.gowtham.ch at intel.com>
Date:   Tue Aug 3 11:18:45 2021 +0200

    tests/gem_mmap_offset: Add allocator support

    Signed-off-by: Sai Gowtham <sai.gowtham.ch at intel.com>
    Cc: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 84d3a9a56502efd9a6b057dee2dcb307e4b70884
Author: Ch Sai Gowtham <sai.gowtham.ch at intel.com>
Date:   Tue Aug 3 11:18:44 2021 +0200

    tests/gem_mmap_gtt: Add allocator support

    Signed-off-by: Ch Sai Gowtham <sai.gowtham.ch at intel.com>
    Cc: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit e9d9f6986943f67c9ce0c6d6c7e4c1f81dfa1a0e
Author: Sai Gowtham <sai.gowtham.ch at intel.com>
Date:   Tue Aug 3 11:18:43 2021 +0200

    tests/gem_mmap: Add allocator support

    Signed-off-by: Sai Gowtham <sai.gowtham.ch at intel.com>
    Cc: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit a1b2c975b66bf610c8ef324d61a7ebc35b066944
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:42 2021 +0200

    tests/gem_fenced_exec_thrash: Adapt to use allocator

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>

commit 80682ef590f6b9c8f5210c07e14f1cccd343c5ba
Author: Sai Gowtham <sai.gowtham.ch at intel.com>
Date:   Tue Aug 3 11:18:41 2021 +0200

    tests/gem_exec_params: Support gens without relocations

    When relocations are not available, tests must assign addresses to
    objects by themselves instead of relying on the driver. We use the
    allocator for that purpose.
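
    A minimal sketch of the idea (the handle and the address below are
    illustrative only, not taken from this patch):

        /* Softpin: the test picks the address, so no relocation entry is
         * needed and the kernel must not move the object. */
        struct drm_i915_gem_exec_object2 obj = {
                .handle = handle,
                .offset = 0x100000,          /* address chosen by the test */
                .flags = EXEC_OBJECT_PINNED,
                .relocation_count = 0,
        };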

    Signed-off-by: Sai Gowtham <sai.gowtham.ch at intel.com>
    Cc: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit a32baa5a2bfcf7bd27fce68cab4c965467008b72
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:40 2021 +0200

    tests/gem_exec_parallel: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 581f02f3ec1ffc9376f001099bb43073f87c0aa3
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:39 2021 +0200

    tests/gem_exec_suspend: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 6909e00d5d77474af2aabc7aac199285149919e1
Author: Andrzej Turko <andrzej.turko at linux.intel.com>
Date:   Tue Aug 3 11:18:38 2021 +0200

    tests/gem_exec_store: Support gens without relocations

    With relocations disabled on newer generations, tests must assign
    addresses to objects by themselves instead of relying on the driver.

    Signed-off-by: Andrzej Turko <andrzej.turko at linux.intel.com>
    Cc: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>

commit ae15aad8d1161053c2e769104d3d74b33e16d23d
Author: Andrzej Turko <andrzej.turko at linux.intel.com>
Date:   Tue Aug 3 11:18:37 2021 +0200

    tests/gem_exec_gttfill: Require relocation support

    Since this test uses relocations, which are now disabled on newer
    generations, we need to skip the test if they are not supported.
    In order to maintain coverage, a slightly modified version of this
    test, using softpinning instead of relocations, is added to gem_softpin.
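
    The skip can be expressed with the usual requirement idiom (a sketch,
    assuming the gem_has_relocations() helper):

        igt_fixture {
                /* Bail out on gens whose execbuf rejects relocations. */
                igt_require(gem_has_relocations(i915));
        }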

    Signed-off-by: Andrzej Turko <andrzej.turko at linux.intel.com>
    Cc: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>

commit ae490ff0e05108b684776a6c9c069848d09fa300
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:36 2021 +0200

    tests/gem_exec_fair: Add softpin support

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 76c2ab01f649e7656caf698767d25c754f237097
Author: Andrzej Turko <andrzej.turko at linux.intel.com>
Date:   Tue Aug 3 11:18:35 2021 +0200

    tests/gem_exec_capture: Support gens without relocations

    With relocations disabled on newer generations, tests must assign
    addresses to objects by themselves instead of relying on the driver.

    Signed-off-by: Andrzej Turko <andrzej.turko at linux.intel.com>
    Cc: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>

commit 880b8dbf6ec5a2de155814133d01c8a7147f158f
Author: Andrzej Turko <andrzej.turko at linux.intel.com>
Date:   Tue Aug 3 11:18:34 2021 +0200

    tests/gem_exec_big: Require relocation support

    This test only verifies the correctness of relocations, so it should
    be skipped when running on a platform that does not support them.

    Signed-off-by: Andrzej Turko <andrzej.turko at linux.intel.com>
    Cc: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit e601be656b00b1c21ba3d5b5d05dd49e2daeb500
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:33 2021 +0200

    tests/gem_exec_async: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 6b9200037edf58fd670aeab8a4b24b570498837a
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:32 2021 +0200

    tests/gem_eio: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 781904cf85339d47276a568016cccac366a8b4fd
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:31 2021 +0200

    tests/gem_ctx_param: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 2957597e2dd33afc6932cf732c7ea4f2fa2f5337
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:30 2021 +0200

    tests/gem_ctx_isolation: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 00a832719f1a79e47dbe29e6bcf939d2d82505eb
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:29 2021 +0200

    tests/gem_ctx_freq: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 976dfd2eaba08265198a7666d81134afbee0fa2a
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:28 2021 +0200

    tests/gem_ctx_exec: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit e84f5a72619179c29be83533781cfc585ed00b3b
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:27 2021 +0200

    tests/gem_ctx_engines: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit fd9659fe910c69d2d160947102d12d206a19797a
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:26 2021 +0200

    tests/gem_create: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 5619567812c77be991418b28490c69b92c85c650
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:25 2021 +0200

    tests/gem_busy: Adapt to use allocator

    For newer gens we're not able to rely on relocations. Adapt to use
    offsets acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 244c1ca98b099aa4d95878d5c08a4c1ff31f8fae
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:24 2021 +0200

    tests/gem_bad_reloc: Skip on gens where relocations are not supported

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 8bef22614beff25cc5b96c42a5aa2a1e84d68870
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:23 2021 +0200

    lib/huc_copy: Extend huc copy prototype to pass allocator handle

    For testing gem_huc_copy on no-reloc platforms we need to pass the
    allocator handle and object sizes so that offsets can be properly
    acquired from the allocator.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>

commit 6a2f9518a4f4ed412ce7c2f47aff9ab0f8f2dd35
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:22 2021 +0200

    lib/intel_batchbuffer: Try to avoid relocations in blitting

    We propose non-overlapping offsets in both blitter copy functions
    so we can try to skip relocations.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>
    Cc: Chris Wilson <chris at chris-wilson.co.uk>

commit 768f6499592b807c55e84ade829e9792776f955e
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:21 2021 +0200

    lib/intel_batchbuffer: Add allocator support in blitter src copy

    Adjust igt_fb library + prime_vgem test as they are blitter src copy
    users.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>
    Cc: Chris Wilson <chris at chris-wilson.co.uk>

commit f0806a51ce6cc47a4c64efa75ea8a0eef21d3991
Author: Bhanuprakash Modem <bhanuprakash.modem at intel.com>
Date:   Tue Aug 3 11:18:20 2021 +0200

    lib/intel_batchbuffer: Add allocator support in blitter fast copy

    For newer gens the kernel will reject relocations by returning -EINVAL,
    so we should support the allocator and acquire offsets for the blit.

    Signed-off-by: Bhanuprakash Modem <bhanuprakash.modem at intel.com>
    Cc: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>
    Cc: Chris Wilson <chris at chris-wilson.co.uk>

commit d38bfcbe20c5f50cc6d73dae1f15c8476a7cb1f6
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:19 2021 +0200

    lib/intel_batchbuffer: Ensure relocation code will be called

    Currently we cannot be sure the relocation code will be called
    (presumed_offset == offset == 0), so enforce it. Passing presumed_offset
    and offset to the auxiliary functions prepares the code for switching
    to no-reloc mode.
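
    A sketch of the enforcement (values illustrative): the kernel skips a
    relocation whose presumed_offset already matches the object's current
    offset, so seeding the two with differing values guarantees the
    relocation path is exercised:

        /* Make presumed_offset stale so execbuf must process the entry. */
        reloc.presumed_offset = -1;
        obj.offset = 0;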

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>
    Cc: Chris Wilson <chris at chris-wilson.co.uk>

commit 1b0d8923912e1c9d247b9c8dca374ee938b1bfb3
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:18 2021 +0200

    lib/igt_gt: Allow passing ahnd as an argument to igt_hang

    Required as a spinner is used; see gem_ringfill.c.

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>
    Cc: Chris Wilson <chris at chris-wilson.co.uk>

commit fd9b68e29fc9dd5586551a3fc8722560868eacb9
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:17 2021 +0200

    lib/intel_allocator: Add a few helper functions for common use

    Add a few helper functions which can be used in reloc/no-reloc tests.

    The common naming scheme is get_<ALLOCATOR_TYPE>_ahnd(i915, ctx), e.g.
    get_reloc_ahnd() and get_simple_ahnd(). As the simple allocator allows
    acquiring offsets starting from the top or the bottom of the vm, two
    additional helpers were added: get_simple_l2h_ahnd() and
    get_simple_h2l_ahnd(). put_ahnd() closes the allocator handle (if it
    is valid).

    To acquire / release an offset, get_offset() and put_offset() were
    added. When the allocator handle is invalid (equal to zero), get_offset()
    just returns 0 and put_offset() does nothing. We can then call them
    regardless of reloc/no-reloc mode, keeping the conditional code inside
    these functions.

    Be aware that each get_..._ahnd() call checks the kernel's relocation
    capabilities. This generates an extra execbuf ioctl() call (but without
    queueing a job to the gpu). If that is a problem and we want to avoid
    the additional execbuf calls, relocation caps should be checked at the
    beginning of the test, and the allocator handle should then be opened
    conditionally according to the result of this check.
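
    A minimal sketch of the intended usage pattern (the function and the
    object size below are illustrative, not part of this patch):

        static void exec_one(int i915, const intel_ctx_t *ctx, uint32_t handle)
        {
                struct drm_i915_gem_exec_object2 obj = { .handle = handle };
                struct drm_i915_gem_execbuffer2 execbuf = {
                        .buffers_ptr = to_user_pointer(&obj),
                        .buffer_count = 1,
                        .rsvd1 = ctx->id,
                };
                /* 0 when relocations are supported, valid handle otherwise */
                uint64_t ahnd = get_reloc_ahnd(i915, ctx->id);

                if (ahnd) {
                        /* no-reloc: softpin at an allocator-provided offset */
                        obj.offset = get_offset(ahnd, handle, 4096, 0);
                        obj.flags |= EXEC_OBJECT_PINNED;
                }

                gem_execbuf(i915, &execbuf);
                gem_sync(i915, handle);

                /* Both are no-ops when ahnd is 0 (reloc mode). */
                put_offset(ahnd, handle);
                put_ahnd(ahnd);
        }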

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>
    Cc: Chris Wilson <chris at chris-wilson.co.uk>

commit 51c0d0d1a7a03f4d14929ac9af2733640a3742e1
Author: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
Date:   Tue Aug 3 11:18:16 2021 +0200

    lib/igt_dummyload: Add support for using the allocator in igt spinner

    For gens without relocations we need to use softpin with valid offsets
    which do not overlap other execbuf objects. As the spinner, at creation
    time, knows nothing about the vm it will run in, an allocator handle
    must be passed so that offsets can be properly acquired from the
    allocator instance.
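
    Call sites then follow the pattern visible throughout the diff below:

        uint64_t ahnd = get_reloc_ahnd(i915, ctx->id);
        igt_spin_t *spin = igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx);

        /* ... run the test while the spinner keeps the engine busy ... */

        igt_spin_free(i915, spin);
        put_ahnd(ahnd);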

    Signed-off-by: Zbigniew Kempczyński <zbigniew.kempczynski at intel.com>
    Cc: Petri Latvala <petri.latvala at intel.com>
    Cc: Ashutosh Dixit <ashutosh.dixit at intel.com>
    Cc: Chris Wilson <chris at chris-wilson.co.uk>
---
 tests/i915/gem_ctx_persistence.c      | 120 ++++++--
 tests/i915/gem_ctx_shared.c           | 111 +++++--
 tests/i915/gem_exec_await.c           |  14 +-
 tests/i915/gem_exec_fence.c           | 244 +++++++++++-----
 tests/i915/gem_exec_schedule.c        | 405 ++++++++++++++++++++------
 tests/i915/gem_exec_whisper.c         |  39 ++-
 tests/intel-ci/fast-feedback.testlist |   1 -
 7 files changed, 724 insertions(+), 210 deletions(-)

diff --git a/tests/i915/gem_ctx_persistence.c b/tests/i915/gem_ctx_persistence.c
index c6db06b8..fafd8bb2 100644
--- a/tests/i915/gem_ctx_persistence.c
+++ b/tests/i915/gem_ctx_persistence.c
@@ -43,6 +43,7 @@
 #include "igt_sysfs.h"
 #include "igt_params.h"
 #include "ioctl_wrappers.h" /* gem_wait()! */
+#include "intel_allocator.h"
 #include "sw_sync.h"
 
 #define RESET_TIMEOUT_MS 2 * MSEC_PER_SEC; /* default: 640ms */
@@ -161,6 +162,7 @@ static void test_persistence(int i915, const intel_ctx_cfg_t *cfg,
 	igt_spin_t *spin;
 	int64_t timeout;
 	const intel_ctx_t *ctx;
+	uint64_t ahnd;
 
 	/*
 	 * Default behaviour are contexts remain alive until their last active
@@ -168,8 +170,9 @@ static void test_persistence(int i915, const intel_ctx_cfg_t *cfg,
 	 */
 
 	ctx = ctx_create_persistence(i915, cfg, true);
+	ahnd = get_reloc_ahnd(i915, ctx->id);
 
-	spin = igt_spin_new(i915, .ctx = ctx,
+	spin = igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
 			    .engine = engine,
 			    .flags = IGT_SPIN_FENCE_OUT);
 	intel_ctx_destroy(i915, ctx);
@@ -184,6 +187,7 @@ static void test_persistence(int i915, const intel_ctx_cfg_t *cfg,
 	igt_assert_eq(sync_fence_status(spin->out_fence), 1);
 
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 }
 
 static void test_nonpersistent_cleanup(int i915, const intel_ctx_cfg_t *cfg,
@@ -192,6 +196,7 @@ static void test_nonpersistent_cleanup(int i915, const intel_ctx_cfg_t *cfg,
 	int64_t timeout = reset_timeout_ms * NSEC_PER_MSEC;
 	igt_spin_t *spin;
 	const intel_ctx_t *ctx;
+	uint64_t ahnd;
 
 	/*
 	 * A nonpersistent context is terminated immediately upon closure,
@@ -199,8 +204,9 @@ static void test_nonpersistent_cleanup(int i915, const intel_ctx_cfg_t *cfg,
 	 */
 
 	ctx = ctx_create_persistence(i915, cfg, false);
+	ahnd = get_reloc_ahnd(i915, ctx->id);
 
-	spin = igt_spin_new(i915, .ctx = ctx,
+	spin = igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
 			    .engine = engine,
 			    .flags = IGT_SPIN_FENCE_OUT);
 	intel_ctx_destroy(i915, ctx);
@@ -209,6 +215,7 @@ static void test_nonpersistent_cleanup(int i915, const intel_ctx_cfg_t *cfg,
 	igt_assert_eq(sync_fence_status(spin->out_fence), -EIO);
 
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 }
 
 static void test_nonpersistent_mixed(int i915, const intel_ctx_cfg_t *cfg,
@@ -225,15 +232,18 @@ static void test_nonpersistent_mixed(int i915, const intel_ctx_cfg_t *cfg,
 	for (int i = 0; i < ARRAY_SIZE(fence); i++) {
 		igt_spin_t *spin;
 		const intel_ctx_t *ctx;
+		uint64_t ahnd;
 
 		ctx = ctx_create_persistence(i915, cfg, i & 1);
+		ahnd = get_reloc_ahnd(i915, ctx->id);
 
-		spin = igt_spin_new(i915, .ctx = ctx,
+		spin = igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
 				    .engine = engine,
 				    .flags = IGT_SPIN_FENCE_OUT);
 		intel_ctx_destroy(i915, ctx);
 
 		fence[i] = spin->out_fence;
+		put_ahnd(ahnd);
 	}
 
 	/* Outer pair of contexts were non-persistent and killed */
@@ -250,6 +260,7 @@ static void test_nonpersistent_hostile(int i915, const intel_ctx_cfg_t *cfg,
 	int64_t timeout = reset_timeout_ms * NSEC_PER_MSEC;
 	igt_spin_t *spin;
 	const intel_ctx_t *ctx;
+	uint64_t ahnd;
 
 	/*
 	 * If we cannot cleanly cancel the non-persistent context on closure,
@@ -258,8 +269,9 @@ static void test_nonpersistent_hostile(int i915, const intel_ctx_cfg_t *cfg,
 	 */
 
 	ctx = ctx_create_persistence(i915, cfg, false);
+	ahnd = get_reloc_ahnd(i915, ctx->id);
 
-	spin = igt_spin_new(i915, .ctx = ctx,
+	spin = igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
 			    .engine = engine,
 			    .flags = IGT_SPIN_NO_PREEMPTION);
 	intel_ctx_destroy(i915, ctx);
@@ -267,6 +279,7 @@ static void test_nonpersistent_hostile(int i915, const intel_ctx_cfg_t *cfg,
 	igt_assert_eq(gem_wait(i915, spin->handle, &timeout), 0);
 
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 }
 
 static void test_nonpersistent_hostile_preempt(int i915, const intel_ctx_cfg_t *cfg,
@@ -275,6 +288,7 @@ static void test_nonpersistent_hostile_preempt(int i915, const intel_ctx_cfg_t *
 	int64_t timeout = reset_timeout_ms * NSEC_PER_MSEC;
 	igt_spin_t *spin[2];
 	const intel_ctx_t *ctx;
+	uint64_t ahnd;
 
 	/*
 	 * Double plus ungood.
@@ -289,7 +303,8 @@ static void test_nonpersistent_hostile_preempt(int i915, const intel_ctx_cfg_t *
 
 	ctx = ctx_create_persistence(i915, cfg, true);
 	gem_context_set_priority(i915, ctx->id, 0);
-	spin[0] = igt_spin_new(i915, .ctx = ctx,
+	ahnd = get_reloc_ahnd(i915, ctx->id);
+	spin[0] = igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
 			       .engine = engine,
 			       .flags = (IGT_SPIN_NO_PREEMPTION |
 					 IGT_SPIN_POLL_RUN));
@@ -299,7 +314,7 @@ static void test_nonpersistent_hostile_preempt(int i915, const intel_ctx_cfg_t *
 
 	ctx = ctx_create_persistence(i915, cfg, false);
 	gem_context_set_priority(i915, ctx->id, 1); /* higher priority than 0 */
-	spin[1] = igt_spin_new(i915, .ctx = ctx,
+	spin[1] = igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
 			       .engine = engine,
 			       .flags = IGT_SPIN_NO_PREEMPTION);
 	intel_ctx_destroy(i915, ctx);
@@ -308,6 +323,7 @@ static void test_nonpersistent_hostile_preempt(int i915, const intel_ctx_cfg_t *
 
 	igt_spin_free(i915, spin[1]);
 	igt_spin_free(i915, spin[0]);
+	put_ahnd(ahnd);
 }
 
 static void test_nonpersistent_hang(int i915, const intel_ctx_cfg_t *cfg,
@@ -316,15 +332,16 @@ static void test_nonpersistent_hang(int i915, const intel_ctx_cfg_t *cfg,
 	int64_t timeout = reset_timeout_ms * NSEC_PER_MSEC;
 	igt_spin_t *spin;
 	const intel_ctx_t *ctx;
-
+	uint64_t ahnd;
 	/*
 	 * The user made a simple mistake and submitted an invalid batch,
 	 * but fortunately under a nonpersistent context. Do we detect it?
 	 */
 
 	ctx = ctx_create_persistence(i915, cfg, false);
+	ahnd = get_reloc_ahnd(i915, ctx->id);
 
-	spin = igt_spin_new(i915, .ctx = ctx,
+	spin = igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
 			    .engine = engine,
 			    .flags = IGT_SPIN_INVALID_CS);
 	intel_ctx_destroy(i915, ctx);
@@ -332,6 +349,7 @@ static void test_nonpersistent_hang(int i915, const intel_ctx_cfg_t *cfg,
 	igt_assert_eq(gem_wait(i915, spin->handle, &timeout), 0);
 
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 }
 
 static void test_nohangcheck_hostile(int i915, const intel_ctx_cfg_t *cfg)
@@ -354,9 +372,10 @@ static void test_nohangcheck_hostile(int i915, const intel_ctx_cfg_t *cfg)
 	for_each_ctx_cfg_engine(i915, cfg, e) {
 		int64_t timeout = reset_timeout_ms * NSEC_PER_MSEC;
 		const intel_ctx_t *ctx = intel_ctx_create(i915, cfg);
+		uint64_t ahnd = get_reloc_ahnd(i915, ctx->id);
 		igt_spin_t *spin;
 
-		spin = igt_spin_new(i915, .ctx = ctx,
+		spin = igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
 				    .engine = e->flags,
 				    .flags = IGT_SPIN_NO_PREEMPTION);
 		intel_ctx_destroy(i915, ctx);
@@ -364,6 +383,7 @@ static void test_nohangcheck_hostile(int i915, const intel_ctx_cfg_t *cfg)
 		igt_assert_eq(gem_wait(i915, spin->handle, &timeout), 0);
 
 		igt_spin_free(i915, spin);
+		put_ahnd(ahnd);
 	}
 
 	igt_require(__enable_hangcheck(dir, true));
@@ -398,12 +418,14 @@ static void test_nohangcheck_hang(int i915, const intel_ctx_cfg_t *cfg)
 		int64_t timeout = reset_timeout_ms * NSEC_PER_MSEC;
 		const intel_ctx_t *ctx;
 		igt_spin_t *spin;
+		uint64_t ahnd;
 
 		if (!gem_engine_has_cmdparser(i915, cfg, e->flags))
 			continue;
 
 		ctx = intel_ctx_create(i915, cfg);
-		spin = igt_spin_new(i915, .ctx = ctx,
+		ahnd = get_reloc_ahnd(i915, ctx->id);
+		spin = igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
 				    .engine = e->flags,
 				    .flags = IGT_SPIN_INVALID_CS);
 		intel_ctx_destroy(i915, ctx);
@@ -411,6 +433,7 @@ static void test_nohangcheck_hang(int i915, const intel_ctx_cfg_t *cfg)
 		igt_assert_eq(gem_wait(i915, spin->handle, &timeout), 0);
 
 		igt_spin_free(i915, spin);
+		put_ahnd(ahnd);
 	}
 
 	igt_require(__enable_hangcheck(dir, true));
@@ -468,6 +491,7 @@ static void test_noheartbeat_many(int i915, int count, unsigned int flags)
 
 	for_each_physical_ring(e, i915) {
 		igt_spin_t *spin[count];
+		uint64_t ahnd;
 
 		if (!set_preempt_timeout(i915, e->full_name, 250))
 			continue;
@@ -481,8 +505,8 @@ static void test_noheartbeat_many(int i915, int count, unsigned int flags)
 			const intel_ctx_t *ctx;
 
 			ctx = intel_ctx_create(i915, NULL);
-
-			spin[n] = igt_spin_new(i915, .ctx = ctx,
+			ahnd = get_reloc_ahnd(i915, ctx->id);
+			spin[n] = igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
 					       .engine = eb_ring(e),
 					       .flags = (IGT_SPIN_FENCE_OUT |
 							 IGT_SPIN_POLL_RUN |
@@ -499,8 +523,11 @@ static void test_noheartbeat_many(int i915, int count, unsigned int flags)
 				      -EIO);
 		}
 
-		for (int n = 0; n < ARRAY_SIZE(spin); n++)
+		for (int n = 0; n < ARRAY_SIZE(spin); n++) {
+			ahnd = spin[n]->ahnd;
 			igt_spin_free(i915, spin[n]);
+			put_ahnd(ahnd);
+		}
 
 		set_heartbeat(i915, e->full_name, 2500);
 		cleanup(i915);
@@ -525,6 +552,7 @@ static void test_noheartbeat_close(int i915, unsigned int flags)
 	for_each_physical_ring(e, i915) {
 		igt_spin_t *spin;
 		const intel_ctx_t *ctx;
+		uint64_t ahnd;
 		int err;
 
 		if (!set_preempt_timeout(i915, e->full_name, 250))
@@ -534,7 +562,8 @@ static void test_noheartbeat_close(int i915, unsigned int flags)
 			continue;
 
 		ctx = intel_ctx_create(i915, NULL);
-		spin = igt_spin_new(i915, .ctx = ctx,
+		ahnd = get_reloc_ahnd(i915, ctx->id);
+		spin = igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
 				    .engine = eb_ring(e),
 				    .flags = (IGT_SPIN_FENCE_OUT |
 					      IGT_SPIN_POLL_RUN |
@@ -547,6 +576,7 @@ static void test_noheartbeat_close(int i915, unsigned int flags)
 
 		set_heartbeat(i915, e->full_name, 2500);
 		igt_spin_free(i915, spin);
+		put_ahnd(ahnd);
 
 		igt_assert_eq(err, -EIO);
 		cleanup(i915);
@@ -559,6 +589,7 @@ static void test_nonpersistent_file(int i915)
 {
 	int debugfs = i915;
 	igt_spin_t *spin;
+	uint64_t ahnd;
 
 	cleanup(i915);
 
@@ -569,8 +600,9 @@ static void test_nonpersistent_file(int i915)
 
 	i915 = gem_reopen_driver(i915);
 
+	ahnd = get_reloc_ahnd(i915, 0);
 	gem_context_set_persistence(i915, 0, false);
-	spin = igt_spin_new(i915, .flags = IGT_SPIN_FENCE_OUT);
+	spin = igt_spin_new(i915, .ahnd = ahnd, .flags = IGT_SPIN_FENCE_OUT);
 
 	close(i915);
 	flush_delayed_fput(debugfs);
@@ -579,6 +611,7 @@ static void test_nonpersistent_file(int i915)
 
 	spin->handle = 0;
 	igt_spin_free(-1, spin);
+	put_ahnd(ahnd);
 }
 
 static int __execbuf_wr(int i915, struct drm_i915_gem_execbuffer2 *execbuf)
@@ -607,6 +640,7 @@ static void test_nonpersistent_queued(int i915, const intel_ctx_cfg_t *cfg,
 	igt_spin_t *spin;
 	int fence = -1;
 	const intel_ctx_t *ctx;
+	uint64_t ahnd;
 
 	/*
 	 * Not only must the immediate batch be cancelled, but
@@ -614,7 +648,8 @@ static void test_nonpersistent_queued(int i915, const intel_ctx_cfg_t *cfg,
 	 */
 
 	ctx = ctx_create_persistence(i915, cfg, false);
-	spin = igt_spin_new(i915, .ctx = ctx,
+	ahnd = get_reloc_ahnd(i915, ctx->id);
+	spin = igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
 			    .engine = engine,
 			    .flags = IGT_SPIN_FENCE_OUT);
 
@@ -648,6 +683,7 @@ static void test_nonpersistent_queued(int i915, const intel_ctx_cfg_t *cfg,
 	igt_assert_eq(wait_for_status(fence, reset_timeout_ms), -EIO);
 
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 }
 
 static void sendfd(int socket, int fd)
@@ -703,12 +739,16 @@ static void test_process(int i915)
 
 	igt_fork(child, 1) {
 		igt_spin_t *spin;
+		uint64_t ahnd;
 
+		intel_allocator_init();
 		i915 = gem_reopen_driver(i915);
 		gem_quiescent_gpu(i915);
 
 		gem_context_set_persistence(i915, 0, false);
-		spin = igt_spin_new(i915, .flags = IGT_SPIN_FENCE_OUT);
+		ahnd = get_reloc_ahnd(i915, 0);
+		spin = igt_spin_new(i915, .ahnd = ahnd,
+				    .flags = IGT_SPIN_FENCE_OUT);
 		sendfd(sv[0], spin->out_fence);
 
 		igt_list_del(&spin->link); /* prevent autocleanup */
@@ -747,12 +787,16 @@ static void test_userptr(int i915)
 
 	igt_fork(child, 1) {
 		igt_spin_t *spin;
+		uint64_t ahnd;
 
+		intel_allocator_init();
 		i915 = gem_reopen_driver(i915);
 		gem_quiescent_gpu(i915);
 
 		gem_context_set_persistence(i915, 0, false);
-		spin = igt_spin_new(i915, .flags = IGT_SPIN_FENCE_OUT | IGT_SPIN_USERPTR);
+		ahnd = get_reloc_ahnd(i915, 0);
+		spin = igt_spin_new(i915, .ahnd = ahnd,
+				    .flags = IGT_SPIN_FENCE_OUT | IGT_SPIN_USERPTR);
 		sendfd(sv[0], spin->out_fence);
 
 		igt_list_del(&spin->link); /* prevent autocleanup */
@@ -795,9 +839,12 @@ static void test_process_mixed(int pfd, const intel_ctx_cfg_t *cfg,
 		for (int persists = 0; persists <= 1; persists++) {
 			igt_spin_t *spin;
 			const intel_ctx_t *ctx;
+			uint64_t ahnd;
 
+			intel_allocator_init();
 			ctx = ctx_create_persistence(i915, cfg, persists);
-			spin = igt_spin_new(i915, .ctx = ctx,
+			ahnd = get_reloc_ahnd(i915, ctx->id);
+			spin = igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
 					    .engine = engine,
 					    .flags = IGT_SPIN_FENCE_OUT);
 
@@ -835,6 +882,7 @@ test_saturated_hostile(int i915, const intel_ctx_t *base_ctx,
 	const struct intel_execution_engine2 *other;
 	igt_spin_t *spin;
 	const intel_ctx_t *ctx;
+	uint64_t ahnd = get_reloc_ahnd(i915, base_ctx->id);
 	int fence = -1;
 
 	cleanup(i915);
@@ -855,7 +903,7 @@ test_saturated_hostile(int i915, const intel_ctx_t *base_ctx,
 		if (other->flags == engine->flags)
 			continue;
 
-		spin = igt_spin_new(i915, .ctx = base_ctx,
+		spin = igt_spin_new(i915, .ahnd = ahnd, .ctx = base_ctx,
 				   .engine = other->flags,
 				   .flags = (IGT_SPIN_NO_PREEMPTION |
 					     IGT_SPIN_FENCE_OUT));
@@ -873,10 +921,12 @@ test_saturated_hostile(int i915, const intel_ctx_t *base_ctx,
 		}
 		spin->out_fence = -1;
 	}
+	put_ahnd(ahnd);
 	igt_require(fence != -1);
 
 	ctx = ctx_create_persistence(i915, &base_ctx->cfg, false);
-	spin = igt_spin_new(i915, .ctx = ctx,
+	ahnd = get_reloc_ahnd(i915, ctx->id);
+	spin = igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
 			    .engine = engine->flags,
 			    .flags = (IGT_SPIN_NO_PREEMPTION |
 				      IGT_SPIN_POLL_RUN |
@@ -891,6 +941,7 @@ test_saturated_hostile(int i915, const intel_ctx_t *base_ctx,
 	gem_quiescent_gpu(i915);
 	igt_assert_eq(wait_for_status(fence, reset_timeout_ms), 1);
 	close(fence);
+	put_ahnd(ahnd);
 }
 
 static void test_processes(int i915)
@@ -912,11 +963,15 @@ static void test_processes(int i915)
 		igt_fork(child, 1) {
 			igt_spin_t *spin;
 			int pid;
+			uint64_t ahnd;
 
+			intel_allocator_init();
 			i915 = gem_reopen_driver(i915);
 			gem_context_set_persistence(i915, 0, i);
 
-			spin = igt_spin_new(i915, .flags = IGT_SPIN_FENCE_OUT);
+			ahnd = get_reloc_ahnd(i915, 0);
+			spin = igt_spin_new(i915, .ahnd = ahnd,
+					    .flags = IGT_SPIN_FENCE_OUT);
 			/* prevent autocleanup */
 			igt_list_del(&spin->link);
 
@@ -978,10 +1033,12 @@ static void __smoker(int i915, const intel_ctx_cfg_t *cfg,
 	int fence = -1;
 	int fd, extra;
 	const intel_ctx_t *ctx;
+	uint64_t ahnd;
 
 	fd = gem_reopen_driver(i915);
 	ctx = ctx_create_persistence(fd, cfg, expected > 0);
-	spin = igt_spin_new(fd, .ctx = ctx, .engine = engine,
+	ahnd = get_reloc_ahnd(fd, ctx->id);
+	spin = igt_spin_new(fd, .ahnd = ahnd, .ctx = ctx, .engine = engine,
 			    .flags = IGT_SPIN_FENCE_OUT);
 
 	extra = rand() % 8;
@@ -1010,6 +1067,7 @@ static void __smoker(int i915, const intel_ctx_cfg_t *cfg,
 
 	spin->handle = 0;
 	igt_spin_free(fd, spin);
+	put_ahnd(ahnd);
 }
 
 static void smoker(int i915, const intel_ctx_cfg_t *cfg,
@@ -1065,6 +1123,7 @@ static void many_contexts(int i915, const intel_ctx_cfg_t *cfg)
 	const struct intel_execution_engine2 *e;
 	int64_t timeout = NSEC_PER_SEC;
 	igt_spin_t *spin;
+	uint64_t ahnd = get_reloc_ahnd(i915, 0);
 
 	cleanup(i915);
 
@@ -1074,7 +1133,7 @@ static void many_contexts(int i915, const intel_ctx_cfg_t *cfg)
 	 * creating new contexts, and submitting new execbuf.
 	 */
 
-	spin = igt_spin_new(i915, .flags = IGT_SPIN_NO_PREEMPTION);
+	spin = igt_spin_new(i915, .ahnd = ahnd, .flags = IGT_SPIN_NO_PREEMPTION);
 	igt_spin_end(spin);
 
 	gem_sync(i915, spin->handle);
@@ -1104,6 +1163,7 @@ static void many_contexts(int i915, const intel_ctx_cfg_t *cfg)
 
 	igt_spin_free(i915, spin);
 	gem_quiescent_gpu(i915);
+	put_ahnd(ahnd);
 }
 
 static void do_test(void (*test)(int i915, const intel_ctx_cfg_t *cfg,
@@ -1256,9 +1316,21 @@ igt_main
 
 		igt_subtest("many-contexts")
 			many_contexts(i915, &ctx->cfg);
+	}
+
+	igt_subtest_group {
+		igt_fixture {
+			gem_require_contexts(i915);
+			intel_allocator_multiprocess_start();
+		}
 
 		igt_subtest("smoketest")
 			smoketest(i915, &ctx->cfg);
+
+		igt_fixture {
+			intel_allocator_multiprocess_stop();
+		}
+
 	}
 
 	igt_fixture {
diff --git a/tests/i915/gem_ctx_shared.c b/tests/i915/gem_ctx_shared.c
index 4441e6eb..9bfa5115 100644
--- a/tests/i915/gem_ctx_shared.c
+++ b/tests/i915/gem_ctx_shared.c
@@ -157,6 +157,7 @@ static void disjoint_timelines(int i915, const intel_ctx_cfg_t *cfg)
 	const intel_ctx_t *ctx[2];
 	igt_spin_t *spin[2];
 	uint32_t plug;
+	uint64_t ahnd;
 
 	igt_require(gem_has_execlists(i915));
 
@@ -169,11 +170,13 @@ static void disjoint_timelines(int i915, const intel_ctx_cfg_t *cfg)
 	vm_cfg.vm = gem_vm_create(i915);
 	ctx[0] = intel_ctx_create(i915, &vm_cfg);
 	ctx[1] = intel_ctx_create(i915, &vm_cfg);
+	/* Context id is not important, we share vm */
+	ahnd = get_reloc_ahnd(i915, 0);
 
 	plug = igt_cork_plug(&cork, i915);
 
-	spin[0] = __igt_spin_new(i915, .ctx = ctx[0], .dependency = plug);
-	spin[1] = __igt_spin_new(i915, .ctx = ctx[1]);
+	spin[0] = __igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx[0], .dependency = plug);
+	spin[1] = __igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx[1]);
 
 	/* Wait for the second spinner, will hang if stuck behind the first */
 	igt_spin_end(spin[1]);
@@ -183,6 +186,7 @@ static void disjoint_timelines(int i915, const intel_ctx_cfg_t *cfg)
 
 	igt_spin_free(i915, spin[1]);
 	igt_spin_free(i915, spin[0]);
+	put_ahnd(ahnd);
 
 	intel_ctx_destroy(i915, ctx[0]);
 	intel_ctx_destroy(i915, ctx[1]);
@@ -391,11 +395,12 @@ static void single_timeline(int i915, const intel_ctx_cfg_t *cfg)
 	intel_ctx_cfg_t st_cfg;
 	const intel_ctx_t *ctx;
 	igt_spin_t *spin;
+	uint64_t ahnd = get_reloc_ahnd(i915, 0);
 	int n;
 
 	igt_require(gem_context_has_single_timeline(i915));
 
-	spin = igt_spin_new(i915);
+	spin = igt_spin_new(i915, .ahnd = ahnd);
 
 	/*
 	 * For a "single timeline" context, each ring is on the common
@@ -429,6 +434,7 @@ static void single_timeline(int i915, const intel_ctx_cfg_t *cfg)
 		igt_assert(!strcmp(rings[0].obj_name, rings[i].obj_name));
 	}
 	intel_ctx_destroy(i915, ctx);
+	put_ahnd(ahnd);
 }
 
 static void exec_single_timeline(int i915, const intel_ctx_cfg_t *cfg,
@@ -438,19 +444,22 @@ static void exec_single_timeline(int i915, const intel_ctx_cfg_t *cfg,
 	igt_spin_t *spin;
 	intel_ctx_cfg_t st_cfg;
 	const intel_ctx_t *ctx;
+	uint64_t ahnd;
 
 	/*
 	 * On an ordinary context, a blockage on one engine doesn't prevent
 	 * execution on an other.
 	 */
 	ctx = intel_ctx_create(i915, cfg);
+	ahnd = get_reloc_ahnd(i915, ctx->id);
 	spin = NULL;
 	for_each_ctx_cfg_engine(i915, cfg, e) {
 		if (e->flags == engine)
 			continue;
 
 		if (spin == NULL) {
-			spin = __igt_spin_new(i915, .ctx = ctx, .engine = e->flags);
+			spin = __igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
+					      .engine = e->flags);
 		} else {
 			struct drm_i915_gem_execbuffer2 execbuf = {
 				.buffers_ptr = spin->execbuf.buffers_ptr,
@@ -465,6 +474,7 @@ static void exec_single_timeline(int i915, const intel_ctx_cfg_t *cfg,
 	igt_assert_eq(nop_sync(i915, ctx, engine, NSEC_PER_SEC), 0);
 	igt_spin_free(i915, spin);
 	intel_ctx_destroy(i915, ctx);
+	put_ahnd(ahnd);
 
 	/*
 	 * But if we create a context with just a single shared timeline,
@@ -474,13 +484,15 @@ static void exec_single_timeline(int i915, const intel_ctx_cfg_t *cfg,
 	st_cfg = *cfg;
 	st_cfg.flags |= I915_CONTEXT_CREATE_FLAGS_SINGLE_TIMELINE;
 	ctx = intel_ctx_create(i915, &st_cfg);
+	ahnd = get_reloc_ahnd(i915, ctx->id);
 	spin = NULL;
 	for_each_ctx_cfg_engine(i915, &st_cfg, e) {
 		if (e->flags == engine)
 			continue;
 
 		if (spin == NULL) {
-			spin = __igt_spin_new(i915, .ctx = ctx, .engine = e->flags);
+			spin = __igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
+					      .engine = e->flags);
 		} else {
 			struct drm_i915_gem_execbuffer2 execbuf = {
 				.buffers_ptr = spin->execbuf.buffers_ptr,
@@ -495,11 +507,12 @@ static void exec_single_timeline(int i915, const intel_ctx_cfg_t *cfg,
 	igt_assert_eq(nop_sync(i915, ctx, engine, NSEC_PER_SEC), -ETIME);
 	igt_spin_free(i915, spin);
 	intel_ctx_destroy(i915, ctx);
+	put_ahnd(ahnd);
 }
 
-static void store_dword(int i915, const intel_ctx_t *ctx, unsigned ring,
-			uint32_t target, uint32_t offset, uint32_t value,
-			uint32_t cork, unsigned write_domain)
+static void store_dword(int i915, uint64_t ahnd, const intel_ctx_t *ctx,
+			unsigned ring, uint32_t target, uint32_t offset,
+		        uint32_t value, uint32_t cork, unsigned write_domain)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	struct drm_i915_gem_exec_object2 obj[3];
@@ -533,7 +546,15 @@ static void store_dword(int i915, const intel_ctx_t *ctx, unsigned ring,
 	reloc.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
 	reloc.write_domain = write_domain;
 	obj[2].relocs_ptr = to_user_pointer(&reloc);
-	obj[2].relocation_count = 1;
+	if (ahnd) {
+		obj[0].flags |= EXEC_OBJECT_PINNED;
+		obj[1].flags |= EXEC_OBJECT_PINNED;
+		if (write_domain)
+			obj[1].flags |= EXEC_OBJECT_WRITE;
+		obj[2].flags |= EXEC_OBJECT_PINNED;
+	} else {
+		obj[2].relocation_count = 1;
+	}
 
 	i = 0;
 	batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
@@ -574,21 +595,28 @@ static void unplug_show_queue(int i915, struct igt_cork *c,
 			      const intel_ctx_cfg_t *cfg, unsigned int engine)
 {
 	igt_spin_t *spin[MAX_ELSP_QLEN];
+	uint64_t ahnd;
 
 	for (int n = 0; n < ARRAY_SIZE(spin); n++) {
 		const intel_ctx_t *ctx = create_highest_priority(i915, cfg);
-		spin[n] = __igt_spin_new(i915, .ctx = ctx, .engine = engine);
+		ahnd = get_reloc_ahnd(i915, ctx->id);
+		spin[n] = __igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
+					 .engine = engine);
 		intel_ctx_destroy(i915, ctx);
 	}
 
 	igt_cork_unplug(c); /* batches will now be queued on the engine */
 	igt_debugfs_dump(i915, "i915_engine_info");
 
-	for (int n = 0; n < ARRAY_SIZE(spin); n++)
+	for (int n = 0; n < ARRAY_SIZE(spin); n++) {
+		ahnd = spin[n]->ahnd;
 		igt_spin_free(i915, spin[n]);
+		put_ahnd(ahnd);
+	}
 }
 
 static uint32_t store_timestamp(int i915,
+				uint64_t ahnd,
 				const intel_ctx_t *ctx,
 				unsigned ring,
 				unsigned mmio_base,
@@ -599,7 +627,7 @@ static uint32_t store_timestamp(int i915,
 	uint32_t handle = gem_create(i915, 4096);
 	struct drm_i915_gem_exec_object2 obj = {
 		.handle = handle,
-		.relocation_count = 1,
+		.relocation_count = !ahnd ? 1 : 0,
 		.offset = (32 << 20) + (handle << 16),
 	};
 	struct drm_i915_gem_relocation_entry reloc = {
@@ -652,6 +680,7 @@ static void independent(int i915, const intel_ctx_cfg_t *cfg,
 	unsigned int mmio_base;
 	IGT_CORK_FENCE(cork);
 	int fence;
+	uint64_t ahnd;
 
 	mmio_base = gem_engine_mmio_base(i915, e->name);
 	igt_require_f(mmio_base, "mmio base not known\n");
@@ -662,18 +691,22 @@ static void independent(int i915, const intel_ctx_cfg_t *cfg,
 
 	for (int n = 0; n < ARRAY_SIZE(spin); n++) {
 		const intel_ctx_t *ctx = create_highest_priority(i915, &q_cfg);
-		spin[n] = __igt_spin_new(i915, .ctx = ctx, .engine = e->flags);
+		ahnd = get_reloc_ahnd(i915, ctx->id);
+		spin[n] = __igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
+					 .engine = e->flags);
 		intel_ctx_destroy(i915, ctx);
 	}
 
 	fence = igt_cork_plug(&cork, i915);
 	for (int i = 0; i < ARRAY_SIZE(priorities); i++) {
 		const intel_ctx_t *ctx = create_highest_priority(i915, &q_cfg);
+		ahnd = get_reloc_ahnd(i915, ctx->id);
 		gem_context_set_priority(i915, ctx->id, priorities[i]);
-		handle[i] = store_timestamp(i915, ctx,
+		handle[i] = store_timestamp(i915, ahnd, ctx,
 					    e->flags, mmio_base,
 					    fence, TIMESTAMP);
 		intel_ctx_destroy(i915, ctx);
+		put_ahnd(ahnd);
 	}
 	close(fence);
 	kick_tasklets(); /* XXX try to hide cmdparser delays XXX */
@@ -681,8 +714,11 @@ static void independent(int i915, const intel_ctx_cfg_t *cfg,
 	igt_cork_unplug(&cork);
 	igt_debugfs_dump(i915, "i915_engine_info");
 
-	for (int n = 0; n < ARRAY_SIZE(spin); n++)
+	for (int n = 0; n < ARRAY_SIZE(spin); n++) {
+		ahnd = spin[n]->ahnd;
 		igt_spin_free(i915, spin[n]);
+		put_ahnd(ahnd);
+	}
 
 	for (int i = 0; i < ARRAY_SIZE(priorities); i++) {
 		uint32_t *ptr;
@@ -714,6 +750,7 @@ static void reorder(int i915, const intel_ctx_cfg_t *cfg,
 	intel_ctx_cfg_t q_cfg;
 	const intel_ctx_t *ctx[2];
 	uint32_t plug;
+	uint64_t ahnd = get_reloc_ahnd(i915, 0);
 
 	q_cfg = *cfg;
 	q_cfg.vm = gem_vm_create(i915);
@@ -731,8 +768,8 @@ static void reorder(int i915, const intel_ctx_cfg_t *cfg,
 	/* We expect the high priority context to be executed first, and
 	 * so the final result will be value from the low priority context.
 	 */
-	store_dword(i915, ctx[LO], ring, scratch, 0, ctx[LO]->id, plug, 0);
-	store_dword(i915, ctx[HI], ring, scratch, 0, ctx[HI]->id, plug, 0);
+	store_dword(i915, ahnd, ctx[LO], ring, scratch, 0, ctx[LO]->id, plug, 0);
+	store_dword(i915, ahnd, ctx[HI], ring, scratch, 0, ctx[HI]->id, plug, 0);
 
 	unplug_show_queue(i915, &cork, &q_cfg, ring);
 	gem_close(i915, plug);
@@ -750,6 +787,7 @@ static void reorder(int i915, const intel_ctx_cfg_t *cfg,
 
 	intel_ctx_destroy(i915, ctx[LO]);
 	intel_ctx_destroy(i915, ctx[HI]);
+	put_ahnd(ahnd);
 
 	gem_vm_destroy(i915, q_cfg.vm);
 }
@@ -762,6 +800,7 @@ static void promotion(int i915, const intel_ctx_cfg_t *cfg, unsigned ring)
 	intel_ctx_cfg_t q_cfg;
 	const intel_ctx_t *ctx[3];
 	uint32_t plug;
+	uint64_t ahnd[3];
 
 	q_cfg = *cfg;
 	q_cfg.vm = gem_vm_create(i915);
@@ -769,12 +808,15 @@ static void promotion(int i915, const intel_ctx_cfg_t *cfg, unsigned ring)
 
 	ctx[LO] = intel_ctx_create(i915, &q_cfg);
 	gem_context_set_priority(i915, ctx[LO]->id, MIN_PRIO);
+	ahnd[LO] = get_reloc_ahnd(i915, ctx[LO]->id);
 
 	ctx[HI] = intel_ctx_create(i915, &q_cfg);
 	gem_context_set_priority(i915, ctx[HI]->id, 0);
+	ahnd[HI] = get_reloc_ahnd(i915, ctx[HI]->id);
 
 	ctx[NOISE] = intel_ctx_create(i915, &q_cfg);
 	gem_context_set_priority(i915, ctx[NOISE]->id, MIN_PRIO/2);
+	ahnd[NOISE] = get_reloc_ahnd(i915, ctx[NOISE]->id);
 
 	result = gem_create(i915, 4096);
 	dep = gem_create(i915, 4096);
@@ -786,14 +828,14 @@ static void promotion(int i915, const intel_ctx_cfg_t *cfg, unsigned ring)
 	 * fifo would be NOISE, LO, HI.
 	 * strict priority would be  HI, NOISE, LO
 	 */
-	store_dword(i915, ctx[NOISE], ring, result, 0, ctx[NOISE]->id, plug, 0);
-	store_dword(i915, ctx[LO], ring, result, 0, ctx[LO]->id, plug, 0);
+	store_dword(i915, ahnd[NOISE], ctx[NOISE], ring, result, 0, ctx[NOISE]->id, plug, 0);
+	store_dword(i915, ahnd[LO], ctx[LO], ring, result, 0, ctx[LO]->id, plug, 0);
 
 	/* link LO <-> HI via a dependency on another buffer */
-	store_dword(i915, ctx[LO], ring, dep, 0, ctx[LO]->id, 0, I915_GEM_DOMAIN_INSTRUCTION);
-	store_dword(i915, ctx[HI], ring, dep, 0, ctx[HI]->id, 0, 0);
+	store_dword(i915, ahnd[LO], ctx[LO], ring, dep, 0, ctx[LO]->id, 0, I915_GEM_DOMAIN_INSTRUCTION);
+	store_dword(i915, ahnd[HI], ctx[HI], ring, dep, 0, ctx[HI]->id, 0, 0);
 
-	store_dword(i915, ctx[HI], ring, result, 0, ctx[HI]->id, 0, 0);
+	store_dword(i915, ahnd[HI], ctx[HI], ring, result, 0, ctx[HI]->id, 0, 0);
 
 	unplug_show_queue(i915, &cork, &q_cfg, ring);
 	gem_close(i915, plug);
@@ -817,6 +859,9 @@ static void promotion(int i915, const intel_ctx_cfg_t *cfg, unsigned ring)
 	intel_ctx_destroy(i915, ctx[NOISE]);
 	intel_ctx_destroy(i915, ctx[LO]);
 	intel_ctx_destroy(i915, ctx[HI]);
+	put_ahnd(ahnd[NOISE]);
+	put_ahnd(ahnd[LO]);
+	put_ahnd(ahnd[HI]);
 
 	gem_vm_destroy(i915, q_cfg.vm);
 }
@@ -831,6 +876,7 @@ static void smoketest(int i915, const intel_ctx_cfg_t *cfg,
 	unsigned engine;
 	uint32_t scratch;
 	uint32_t *ptr;
+	uint64_t ahnd;
 
 	q_cfg = *cfg;
 	q_cfg.vm = gem_vm_create(i915);
@@ -855,6 +901,7 @@ static void smoketest(int i915, const intel_ctx_cfg_t *cfg,
 		hars_petruska_f54_1_random_perturb(child);
 
 		ctx = intel_ctx_create(i915, &q_cfg);
+		ahnd = get_reloc_ahnd(i915, ctx->id);
 		igt_until_timeout(timeout) {
 			int prio;
 
@@ -862,15 +909,16 @@ static void smoketest(int i915, const intel_ctx_cfg_t *cfg,
 			gem_context_set_priority(i915, ctx->id, prio);
 
 			engine = engines[hars_petruska_f54_1_random_unsafe_max(nengine)];
-			store_dword(i915, ctx, engine, scratch,
+			store_dword(i915, ahnd, ctx, engine, scratch,
 				    8*child + 0, ~child,
 				    0, 0);
 			for (unsigned int step = 0; step < 8; step++)
-				store_dword(i915, ctx, engine, scratch,
+				store_dword(i915, ahnd, ctx, engine, scratch,
 					    8*child + 4, count++,
 					    0, 0);
 		}
 		intel_ctx_destroy(i915, ctx);
+		put_ahnd(ahnd);
 	}
 	igt_waitchildren();
 
@@ -974,6 +1022,16 @@ igt_main
 				for_each_queue(e, i915, &cfg)
 					promotion(i915, &cfg, e->flags);
 			}
+		}
+
+		igt_subtest_group {
+			igt_fixture {
+				igt_require(gem_scheduler_enabled(i915));
+				igt_require(gem_scheduler_has_ctx_priority(i915));
+				igt_require(gem_has_vm(i915));
+				igt_require(gem_context_has_single_timeline(i915));
+				intel_allocator_multiprocess_start();
+			}
 
 			igt_subtest_with_dynamic("Q-smoketest") {
 				for_each_queue(e, i915, &cfg)
@@ -982,8 +1040,13 @@ igt_main
 
 			igt_subtest("Q-smoketest-all")
 				smoketest(i915, &cfg, -1, 30);
+
+			igt_fixture {
+				intel_allocator_multiprocess_stop();
+			}
 		}
 
 		igt_subtest("exhaust-shared-gtt")
 			exhaust_shared_gtt(i915, 0);
 
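For reference, every conversion in this series follows the same offset-handle
lifecycle; a minimal sketch, using only the intel_allocator helpers the hunks
above already rely on:

	/* one allocator handle per context, i.e. per ppGTT address space */
	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id);
	igt_spin_t *spin;

	/* spinners draw their batch offsets from the handle they are given */
	spin = igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx);

	/* the spinner keeps a copy, so the handle can be recovered and
	 * released after the context (and the local variable) are gone */
	ahnd = spin->ahnd;
	igt_spin_free(i915, spin);
	put_ahnd(ahnd);
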
diff --git a/tests/i915/gem_exec_await.c b/tests/i915/gem_exec_await.c
index bea57c61..8b45e10d 100644
--- a/tests/i915/gem_exec_await.c
+++ b/tests/i915/gem_exec_await.c
@@ -72,6 +72,7 @@ static void wide(int fd, const intel_ctx_t *ctx, int ring_size,
 	unsigned engines[I915_EXEC_RING_MASK + 1], nengine;
 	unsigned long count;
 	double time;
+	uint64_t ahnd = get_reloc_ahnd(fd, 0);
 
 	nengine = 0;
 	for_each_ctx_engine(fd, ctx, engine) {
@@ -87,17 +88,22 @@ static void wide(int fd, const intel_ctx_t *ctx, int ring_size,
 	exec = calloc(nengine, sizeof(*exec));
 	igt_assert(exec);
 
 	intel_require_memory(nengine*(2 + ring_size), 4096, CHECK_RAM);
 	obj = calloc(nengine*ring_size + 1, sizeof(*obj));
 	igt_assert(obj);
 
+	igt_info("nengine: %u, ring_size: %u\n", nengine, ring_size);
 	for (unsigned e = 0; e < nengine; e++) {
 		exec[e].obj = calloc(ring_size, sizeof(*exec[e].obj));
 		igt_assert(exec[e].obj);
 		for (unsigned n = 0; n < ring_size; n++)  {
 			exec[e].obj[n].handle = gem_create(fd, 4096);
 			exec[e].obj[n].flags = EXEC_OBJECT_WRITE;
-
+			if (ahnd) {
+				exec[e].obj[n].offset =
+					get_offset(ahnd,
+						   exec[e].obj[n].handle,
+						   4096, 0);
+				exec[e].obj[n].flags |= EXEC_OBJECT_PINNED;
+				obj[e*ring_size + n].offset = exec[e].obj[n].offset;
+			}
 			obj[e*ring_size + n].handle = exec[e].obj[n].handle;
 		}
 
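The per-object setup above is the dual reloc/softpin pattern used across the
converted tests; sketched below, assuming get_offset()/put_offset() degrade
to no-ops when ahnd == 0:

	if (ahnd) {
		/* softpin: fix the GTT address up front, no relocations */
		obj->offset = get_offset(ahnd, obj->handle, 4096, 0);
		obj->flags |= EXEC_OBJECT_PINNED;
		obj->relocation_count = 0;
	} else {
		/* legacy path: the kernel patches the batch via the reloc */
		obj->relocs_ptr = to_user_pointer(&reloc);
		obj->relocation_count = 1;
	}
	...
	gem_close(fd, obj->handle);
	put_offset(ahnd, obj->handle);	/* no-op without an allocator */
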
diff --git a/tests/i915/gem_exec_fence.c b/tests/i915/gem_exec_fence.c
index ef1bb0ca..52335c91 100644
--- a/tests/i915/gem_exec_fence.c
+++ b/tests/i915/gem_exec_fence.c
@@ -57,9 +57,10 @@ struct sync_merge_data {
 #define   MI_SEMAPHORE_SAD_EQ_SDD       (4 << 12)
 #define   MI_SEMAPHORE_SAD_NEQ_SDD      (5 << 12)
 
-static void store(int fd, const intel_ctx_t *ctx,
+static void store(int fd, uint64_t ahnd, const intel_ctx_t *ctx,
 		  const struct intel_execution_engine2 *e,
-		  int fence, uint32_t target, unsigned offset_value)
+		  int fence, uint32_t target, uint64_t target_offset,
+		  unsigned offset_value)
 {
 	const int SCRATCH = 0;
 	const int BATCH = 1;
@@ -67,7 +68,8 @@ static void store(int fd, const intel_ctx_t *ctx,
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
-	uint32_t batch[16];
+	uint32_t batch[16], delta;
+	uint64_t bb_offset;
 	int i;
 
 	memset(&execbuf, 0, sizeof(execbuf));
@@ -84,33 +86,43 @@ static void store(int fd, const intel_ctx_t *ctx,
 
 	obj[BATCH].handle = gem_create(fd, 4096);
 	obj[BATCH].relocs_ptr = to_user_pointer(&reloc);
-	obj[BATCH].relocation_count = 1;
+	obj[BATCH].relocation_count = !ahnd ? 1 : 0;
+	bb_offset = get_offset(ahnd, obj[BATCH].handle, 4096, 0);
 	memset(&reloc, 0, sizeof(reloc));
 
 	i = 0;
-	reloc.target_handle = obj[SCRATCH].handle;
-	reloc.presumed_offset = -1;
-	reloc.offset = sizeof(uint32_t) * (i + 1);
-	reloc.delta = sizeof(uint32_t) * offset_value;
-	reloc.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
-	reloc.write_domain = I915_GEM_DOMAIN_INSTRUCTION;
+	delta = sizeof(uint32_t) * offset_value;
+	if (!ahnd) {
+		reloc.target_handle = obj[SCRATCH].handle;
+		reloc.presumed_offset = -1;
+		reloc.offset = sizeof(uint32_t) * (i + 1);
+		reloc.delta = delta;
+		reloc.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
+		reloc.write_domain = I915_GEM_DOMAIN_INSTRUCTION;
+	} else {
+		obj[SCRATCH].offset = target_offset;
+		obj[SCRATCH].flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
+		obj[BATCH].offset = bb_offset;
+		obj[BATCH].flags |= EXEC_OBJECT_PINNED;
+	}
 	batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
 	if (gen >= 8) {
-		batch[++i] = reloc.delta;
-		batch[++i] = 0;
+		batch[++i] = target_offset + delta;
+		batch[++i] = target_offset >> 32;
 	} else if (gen >= 4) {
 		batch[++i] = 0;
-		batch[++i] = reloc.delta;
+		batch[++i] = delta;
 		reloc.offset += sizeof(uint32_t);
 	} else {
 		batch[i]--;
-		batch[++i] = reloc.delta;
+		batch[++i] = delta;
 	}
 	batch[++i] = offset_value;
 	batch[++i] = MI_BATCH_BUFFER_END;
 	gem_write(fd, obj[BATCH].handle, 0, batch, sizeof(batch));
 	gem_execbuf(fd, &execbuf);
 	gem_close(fd, obj[BATCH].handle);
+	put_offset(ahnd, obj[BATCH].handle);
 }
 
 static bool fence_busy(int fence)
@@ -132,6 +144,7 @@ static void test_fence_busy(int fd, const intel_ctx_t *ctx,
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct timespec tv;
 	uint32_t *batch;
+	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
 	int fence, i, timeout;
 
 	if ((flags & HANG) == 0)
@@ -147,10 +160,7 @@ static void test_fence_busy(int fd, const intel_ctx_t *ctx,
 
 	memset(&obj, 0, sizeof(obj));
 	obj.handle = gem_create(fd, 4096);
-
-	obj.relocs_ptr = to_user_pointer(&reloc);
-	obj.relocation_count = 1;
-	memset(&reloc, 0, sizeof(reloc));
+	obj.offset = get_offset(ahnd, obj.handle, 4096, 0);
 
 	batch = gem_mmap__wc(fd, obj.handle, 0, 4096, PROT_WRITE);
 	gem_set_domain(fd, obj.handle,
@@ -160,26 +170,33 @@ static void test_fence_busy(int fd, const intel_ctx_t *ctx,
 	if ((flags & HANG) == 0)
 		batch[i++] = 0x5 << 23;
 
-	reloc.target_handle = obj.handle; /* recurse */
-	reloc.presumed_offset = 0;
-	reloc.offset = (i + 1) * sizeof(uint32_t);
-	reloc.delta = 0;
-	reloc.read_domains = I915_GEM_DOMAIN_COMMAND;
-	reloc.write_domain = 0;
+	if (!ahnd) {
+		obj.relocs_ptr = to_user_pointer(&reloc);
+		obj.relocation_count = 1;
+		memset(&reloc, 0, sizeof(reloc));
+		reloc.target_handle = obj.handle; /* recurse */
+		reloc.presumed_offset = obj.offset;
+		reloc.offset = (i + 1) * sizeof(uint32_t);
+		reloc.delta = 0;
+		reloc.read_domains = I915_GEM_DOMAIN_COMMAND;
+		reloc.write_domain = 0;
+	} else {
+		obj.flags |= EXEC_OBJECT_PINNED;
+	}
 
 	batch[i] = MI_BATCH_BUFFER_START;
 	if (gen >= 8) {
 		batch[i] |= 1 << 8 | 1;
-		batch[++i] = 0;
-		batch[++i] = 0;
+		batch[++i] = obj.offset;
+		batch[++i] = obj.offset >> 32;
 	} else if (gen >= 6) {
 		batch[i] |= 1 << 8;
-		batch[++i] = 0;
+		batch[++i] = obj.offset;
 	} else {
 		batch[i] |= 2 << 6;
-		batch[++i] = 0;
+		batch[++i] = obj.offset;
 		if (gen < 4) {
-			batch[i] |= 1;
+			batch[i]++;
 			reloc.delta = 1;
 		}
 	}
@@ -216,6 +233,8 @@ static void test_fence_busy(int fd, const intel_ctx_t *ctx,
 
 	close(fence);
 	gem_close(fd, obj.handle);
+	put_offset(ahnd, obj.handle);
+	put_ahnd(ahnd);
 
 	gem_quiescent_gpu(fd);
 }
@@ -229,6 +248,7 @@ static void test_fence_busy_all(int fd, const intel_ctx_t *ctx, unsigned flags)
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct timespec tv;
 	uint32_t *batch;
+	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
 	int all, i, timeout;
 
 	gem_quiescent_gpu(fd);
@@ -239,10 +259,8 @@ static void test_fence_busy_all(int fd, const intel_ctx_t *ctx, unsigned flags)
 
 	memset(&obj, 0, sizeof(obj));
 	obj.handle = gem_create(fd, 4096);
-
-	obj.relocs_ptr = to_user_pointer(&reloc);
-	obj.relocation_count = 1;
-	memset(&reloc, 0, sizeof(reloc));
+	obj.offset = get_offset(ahnd, obj.handle, 4096, 0);
+	igt_assert(obj.offset != -1);
 
 	batch = gem_mmap__wc(fd, obj.handle, 0, 4096, PROT_WRITE);
 	gem_set_domain(fd, obj.handle,
@@ -252,26 +270,33 @@ static void test_fence_busy_all(int fd, const intel_ctx_t *ctx, unsigned flags)
 	if ((flags & HANG) == 0)
 		batch[i++] = 0x5 << 23;
 
-	reloc.target_handle = obj.handle; /* recurse */
-	reloc.presumed_offset = 0;
-	reloc.offset = (i + 1) * sizeof(uint32_t);
-	reloc.delta = 0;
-	reloc.read_domains = I915_GEM_DOMAIN_COMMAND;
-	reloc.write_domain = 0;
+	if (!ahnd) {
+		obj.relocs_ptr = to_user_pointer(&reloc);
+		obj.relocation_count = 1;
+		memset(&reloc, 0, sizeof(reloc));
+		reloc.target_handle = obj.handle; /* recurse */
+		reloc.presumed_offset = obj.offset;
+		reloc.offset = (i + 1) * sizeof(uint32_t);
+		reloc.delta = 0;
+		reloc.read_domains = I915_GEM_DOMAIN_COMMAND;
+		reloc.write_domain = 0;
+	} else {
+		obj.flags |= EXEC_OBJECT_PINNED;
+	}
 
 	batch[i] = MI_BATCH_BUFFER_START;
 	if (gen >= 8) {
 		batch[i] |= 1 << 8 | 1;
-		batch[++i] = 0;
-		batch[++i] = 0;
+		batch[++i] = obj.offset;
+		batch[++i] = obj.offset >> 32;
 	} else if (gen >= 6) {
 		batch[i] |= 1 << 8;
-		batch[++i] = 0;
+		batch[++i] = obj.offset;
 	} else {
 		batch[i] |= 2 << 6;
-		batch[++i] = 0;
+		batch[++i] = obj.offset;
 		if (gen < 4) {
-			batch[i] |= 1;
+			batch[i]++;
 			reloc.delta = 1;
 		}
 	}
@@ -331,6 +356,8 @@ static void test_fence_busy_all(int fd, const intel_ctx_t *ctx, unsigned flags)
 
 	close(all);
 	gem_close(fd, obj.handle);
+	put_offset(ahnd, obj.handle);
+	put_ahnd(ahnd);
 
 	gem_quiescent_gpu(fd);
 }
@@ -351,13 +378,17 @@ static void test_fence_await(int fd, const intel_ctx_t *ctx,
 	uint32_t scratch = gem_create(fd, 4096);
 	igt_spin_t *spin;
 	uint32_t *out;
+	uint64_t scratch_offset, ahnd = get_reloc_ahnd(fd, ctx->id);
 	int i;
 
+	scratch_offset = get_offset(ahnd, scratch, 4096, 0);
+
 	out = gem_mmap__wc(fd, scratch, 0, 4096, PROT_WRITE);
 	gem_set_domain(fd, scratch,
 			I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
 
 	spin = igt_spin_new(fd,
+			    .ahnd = ahnd,
 			    .ctx = ctx,
 			    .engine = e->flags,
 			    .flags = IGT_SPIN_FENCE_OUT | spin_hang(flags));
@@ -369,10 +400,15 @@ static void test_fence_await(int fd, const intel_ctx_t *ctx,
 			continue;
 
 		if (flags & NONBLOCK) {
-			store(fd, ctx, e2, spin->out_fence, scratch, i);
+			store(fd, ahnd, ctx, e2, spin->out_fence,
+			      scratch, scratch_offset, i);
 		} else {
-			igt_fork(child, 1)
-				store(fd, ctx, e2, spin->out_fence, scratch, i);
+			igt_fork(child, 1) {
+				ahnd = get_reloc_ahnd(fd, ctx->id);
+				store(fd, ahnd, ctx, e2, spin->out_fence,
+				      scratch, scratch_offset, i);
+				put_ahnd(ahnd);
+			}
 		}
 
 		i++;
@@ -398,6 +434,8 @@ static void test_fence_await(int fd, const intel_ctx_t *ctx,
 
 	igt_spin_free(fd, spin);
 	gem_close(fd, scratch);
+	put_offset(ahnd, scratch);
+	put_ahnd(ahnd);
 }
 
 static uint32_t timeslicing_batches(int i915, uint32_t *offset)
@@ -623,9 +661,12 @@ static void test_parallel(int i915, const intel_ctx_t *ctx,
 	igt_spin_t *spin;
 	int fence;
 	int x = 0;
+	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id), bb_offset;
+	uint64_t scratch_offset = get_offset(ahnd, scratch, 4096, 0);
 
 	fence = igt_cork_plug(&cork, i915),
 	spin = igt_spin_new(i915,
+			    .ahnd = ahnd,
 			    .ctx = ctx,
 			    .engine = e->flags,
 			    .fence = fence,
@@ -644,7 +685,7 @@ static void test_parallel(int i915, const intel_ctx_t *ctx,
 			{ .handle = scratch, },
 			{
 				.relocs_ptr = to_user_pointer(&reloc),
-				.relocation_count = 1,
+				.relocation_count = !ahnd ? 1 : 0,
 			}
 		};
 		struct drm_i915_gem_execbuffer2 execbuf = {
@@ -662,11 +703,19 @@ static void test_parallel(int i915, const intel_ctx_t *ctx,
 
 		obj[1].handle = gem_create(i915, 4096);
 
+		if (ahnd) {
+			bb_offset = get_offset(ahnd, obj[1].handle, 4096, 0);
+			obj[1].offset = bb_offset;
+			obj[1].flags = EXEC_OBJECT_PINNED;
+			obj[0].offset = scratch_offset;
+			obj[0].flags = EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
+		}
+
 		i = 0;
 		batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
 		if (gen >= 8) {
-			batch[++i] = reloc.delta;
-			batch[++i] = 0;
+			batch[++i] = scratch_offset + reloc.delta;
+			batch[++i] = scratch_offset >> 32;
 		} else if (gen >= 4) {
 			batch[++i] = 0;
 			batch[++i] = reloc.delta;
@@ -687,6 +736,7 @@ static void test_parallel(int i915, const intel_ctx_t *ctx,
 	}
 	igt_assert(gem_bo_busy(i915, spin->handle));
 	gem_close(i915, scratch);
+	put_offset(ahnd, scratch);
 	igt_require(x);
 
 	/*
@@ -713,18 +763,21 @@ static void test_parallel(int i915, const intel_ctx_t *ctx,
 
 		igt_assert_eq_u32(out[i], ~i);
 		gem_close(i915, handle[i]);
+		put_offset(ahnd, handle[i]);
 	}
 	munmap(out, 4096);
 
 	/* Master should still be spinning, but all output should be written */
 	igt_assert(gem_bo_busy(i915, spin->handle));
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 }
 
 static void test_concurrent(int i915, const intel_ctx_t *ctx,
 			    const struct intel_execution_engine2 *e)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
+	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id);
 	struct drm_i915_gem_relocation_entry reloc = {
 		.target_handle =  gem_create(i915, 4096),
 		.write_domain = I915_GEM_DOMAIN_RENDER,
@@ -735,7 +788,7 @@ static void test_concurrent(int i915, const intel_ctx_t *ctx,
 		{
 			.handle = gem_create(i915, 4096),
 			.relocs_ptr = to_user_pointer(&reloc),
-			.relocation_count = 1,
+			.relocation_count = !ahnd ? 1 : 0,
 		}
 	};
 	struct drm_i915_gem_execbuffer2 execbuf = {
@@ -749,9 +802,19 @@ static void test_concurrent(int i915, const intel_ctx_t *ctx,
 	igt_spin_t *spin;
 	const intel_ctx_t *tmp_ctx;
 	uint32_t result;
+	uint64_t bb_offset, target_offset;
 	int fence;
 	int i;
 
+	bb_offset = get_offset(ahnd, obj[1].handle, 4096, 0);
+	target_offset = get_offset(ahnd, obj[0].handle, 4096, 0);
+	if (ahnd) {
+		obj[1].offset = bb_offset;
+		obj[1].flags = EXEC_OBJECT_PINNED;
+		obj[0].offset = target_offset;
+		obj[0].flags = EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;
+	}
+
 	/*
 	 * A variant of test_parallel() that runs a bonded pair on a single
 	 * engine and ensures that the secondary batch cannot start before
@@ -760,6 +823,7 @@ static void test_concurrent(int i915, const intel_ctx_t *ctx,
 
 	fence = igt_cork_plug(&cork, i915),
 	      spin = igt_spin_new(i915,
+				  .ahnd = ahnd,
 				  .ctx = ctx,
 				  .engine = e->flags,
 				  .fence = fence,
@@ -770,8 +834,8 @@ static void test_concurrent(int i915, const intel_ctx_t *ctx,
 	i = 0;
 	batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
 	if (gen >= 8) {
-		batch[++i] = reloc.delta;
-		batch[++i] = 0;
+		batch[++i] = target_offset + reloc.delta;
+		batch[++i] = target_offset >> 32;
 	} else if (gen >= 4) {
 		batch[++i] = 0;
 		batch[++i] = reloc.delta;
@@ -793,6 +857,7 @@ static void test_concurrent(int i915, const intel_ctx_t *ctx,
 	gem_execbuf(i915, &execbuf);
 	intel_ctx_destroy(i915, tmp_ctx);
 	gem_close(i915, obj[1].handle);
+	put_offset(ahnd, obj[1].handle);
 
 	/*
 	 * No secondary should be executed since master is stalled. If there
@@ -814,10 +879,12 @@ static void test_concurrent(int i915, const intel_ctx_t *ctx,
 	gem_read(i915, obj[0].handle, 0, &result, sizeof(result));
 	igt_assert_eq_u32(result, 0xd0df0d);
 	gem_close(i915, obj[0].handle);
+	put_offset(ahnd, obj[0].handle);
 
 	/* Master should still be spinning, but all output should be written */
 	igt_assert(gem_bo_busy(i915, spin->handle));
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 }
 
 static void test_submit_chain(int i915, const intel_ctx_t *ctx)
@@ -827,12 +894,14 @@ static void test_submit_chain(int i915, const intel_ctx_t *ctx)
 	IGT_LIST_HEAD(list);
 	IGT_CORK_FENCE(cork);
 	int fence;
+	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id);
 
 	/* Check that we can simultaneously launch spinners on each engine */
 
 	fence = igt_cork_plug(&cork, i915);
 	for_each_ctx_engine(i915, ctx, e) {
 		spin = igt_spin_new(i915,
+				    .ahnd = ahnd,
 				    .ctx = ctx,
 				    .engine = e->flags,
 				    .fence = fence,
@@ -860,6 +929,7 @@ static void test_submit_chain(int i915, const intel_ctx_t *ctx)
 		igt_assert_eq(sync_fence_status(spin->out_fence), 1);
 		igt_spin_free(i915, spin);
 	}
+	put_ahnd(ahnd);
 }
 
 static uint32_t batch_create(int fd)
@@ -889,9 +959,10 @@ static void test_keep_in_fence(int fd, const intel_ctx_t *ctx,
 	unsigned long count, last;
 	struct itimerval itv;
 	igt_spin_t *spin;
+	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
 	int fence;
 
-	spin = igt_spin_new(fd, .ctx = ctx, .engine = e->flags);
+	spin = igt_spin_new(fd, .ahnd = ahnd, .ctx = ctx, .engine = e->flags);
 
 	gem_execbuf_wr(fd, &execbuf);
 	fence = upper_32_bits(execbuf.rsvd2);
@@ -940,6 +1011,7 @@ static void test_keep_in_fence(int fd, const intel_ctx_t *ctx,
 
 	igt_spin_free(fd, spin);
 	gem_quiescent_gpu(fd);
+	put_ahnd(ahnd);
 }
 
 #define EXPIRED 0x10000
@@ -1165,7 +1237,8 @@ static void test_syncobj_unused_fence(int fd)
 	struct drm_i915_gem_exec_fence fence = {
 		.handle = syncobj_create(fd, 0),
 	};
-	igt_spin_t *spin = igt_spin_new(fd);
+	uint64_t ahnd = get_reloc_ahnd(fd, 0);
+	igt_spin_t *spin = igt_spin_new(fd, .ahnd = ahnd);
 
 	/* sanity check our syncobj_to_sync_file interface */
 	igt_assert_eq(__syncobj_to_sync_file(fd, 0), -ENOENT);
@@ -1191,6 +1264,7 @@ static void test_syncobj_unused_fence(int fd)
 	syncobj_destroy(fd, fence.handle);
 
 	igt_spin_free(fd, spin);
+	put_ahnd(ahnd);
 }
 
 static void test_syncobj_invalid_wait(int fd)
@@ -1257,7 +1331,8 @@ static void test_syncobj_signal(int fd)
 	struct drm_i915_gem_exec_fence fence = {
 		.handle = syncobj_create(fd, 0),
 	};
-	igt_spin_t *spin = igt_spin_new(fd);
+	uint64_t ahnd = get_reloc_ahnd(fd, 0);
+	igt_spin_t *spin = igt_spin_new(fd, .ahnd = ahnd);
 
 	/* Check that the syncobj is signaled only when our request/fence is */
 
@@ -1286,6 +1361,7 @@ static void test_syncobj_signal(int fd)
 
 	gem_close(fd, obj.handle);
 	syncobj_destroy(fd, fence.handle);
+	put_ahnd(ahnd);
 }
 
 static void test_syncobj_wait(int fd, const intel_ctx_t *ctx)
@@ -1300,6 +1376,7 @@ static void test_syncobj_wait(int fd, const intel_ctx_t *ctx)
 	unsigned handle[I915_EXEC_RING_MASK + 1];
 	igt_spin_t *spin;
 	int n;
+	uint64_t ahnd = get_reloc_ahnd(fd, 0);
 
 	/* Check that we can use the syncobj to asynchronous wait prior to
 	 * execution.
@@ -1307,7 +1384,7 @@ static void test_syncobj_wait(int fd, const intel_ctx_t *ctx)
 
 	gem_quiescent_gpu(fd);
 
-	spin = igt_spin_new(fd);
+	spin = igt_spin_new(fd, .ahnd = ahnd);
 
 	memset(&execbuf, 0, sizeof(execbuf));
 	execbuf.buffers_ptr = to_user_pointer(&obj);
@@ -1357,6 +1434,8 @@ static void test_syncobj_wait(int fd, const intel_ctx_t *ctx)
 		gem_sync(fd, handle[i]);
 		gem_close(fd, handle[i]);
 	}
+
+	put_ahnd(ahnd);
 }
 
 static void test_syncobj_export(int fd)
@@ -1368,7 +1447,10 @@ static void test_syncobj_export(int fd)
 		.handle = syncobj_create(fd, 0),
 	};
 	int export[2];
-	igt_spin_t *spin = igt_spin_new(fd);
+	igt_spin_t *spin;
+	uint64_t ahnd = get_reloc_ahnd(fd, 0);
+
+	spin = igt_spin_new(fd, .ahnd = ahnd);
 
 	/* Check that if we export the syncobj prior to use it picks up
 	 * the later fence. This allows a syncobj to establish a channel
@@ -1416,6 +1498,8 @@ static void test_syncobj_export(int fd)
 		syncobj_destroy(fd, import);
 		close(export[n]);
 	}
+
+	put_ahnd(ahnd);
 }
 
 static void test_syncobj_repeat(int fd)
@@ -1426,7 +1510,10 @@ static void test_syncobj_repeat(int fd)
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct drm_i915_gem_exec_fence *fence;
 	int export;
-	igt_spin_t *spin = igt_spin_new(fd);
+	igt_spin_t *spin;
+	uint64_t ahnd = get_reloc_ahnd(fd, 0);
+
+	spin = igt_spin_new(fd, .ahnd = ahnd);
 
 	/* Check that we can wait on the same fence multiple times */
 	fence = calloc(nfences, sizeof(*fence));
@@ -1474,6 +1561,8 @@ static void test_syncobj_repeat(int fd)
 		syncobj_destroy(fd, fence[i].handle);
 	}
 	free(fence);
+
+	put_ahnd(ahnd);
 }
 
 static void test_syncobj_import(int fd)
@@ -1481,7 +1570,8 @@ static void test_syncobj_import(int fd)
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	struct drm_i915_gem_exec_object2 obj;
 	struct drm_i915_gem_execbuffer2 execbuf;
-	igt_spin_t *spin = igt_spin_new(fd);
+	uint64_t ahnd = get_reloc_ahnd(fd, 0);
+	igt_spin_t *spin = igt_spin_new(fd, .ahnd = ahnd);
 	uint32_t sync = syncobj_create(fd, 0);
 	int fence;
 
@@ -1517,6 +1607,7 @@ static void test_syncobj_import(int fd)
 
 	gem_close(fd, obj.handle);
 	syncobj_destroy(fd, sync);
+	put_ahnd(ahnd);
 }
 
 static void test_syncobj_channel(int fd)
@@ -1808,8 +1899,8 @@ static void test_syncobj_timeline_unused_fence(int fd)
 	struct drm_i915_gem_exec_fence fence = {
 		.handle = syncobj_create(fd, 0),
 	};
-	igt_spin_t *spin = igt_spin_new(fd);
-	uint64_t value = 1;
+	uint64_t value = 1, ahnd = get_reloc_ahnd(fd, 0);
+	igt_spin_t *spin = igt_spin_new(fd, .ahnd = ahnd);
 
 	/* sanity check our syncobj_to_sync_file interface */
 	igt_assert_eq(__syncobj_to_sync_file(fd, 0), -ENOENT);
@@ -1841,6 +1932,7 @@ static void test_syncobj_timeline_unused_fence(int fd)
 	syncobj_destroy(fd, fence.handle);
 
 	igt_spin_free(fd, spin);
+	put_ahnd(ahnd);
 }
 
 static const char *test_syncobj_timeline_invalid_wait_desc =
@@ -1949,7 +2041,7 @@ static void test_syncobj_timeline_signal(int fd)
 	struct drm_i915_gem_exec_fence fence = {
 		.handle = syncobj_create(fd, 0),
 	};
-	uint64_t value = 42, query_value;
+	uint64_t value = 42, query_value, ahnd = get_reloc_ahnd(fd, 0);
 	igt_spin_t *spin;
 
 	/* Check that the syncobj is signaled only when our request/fence is */
@@ -1974,7 +2066,7 @@ static void test_syncobj_timeline_signal(int fd)
 	fence.flags = I915_EXEC_FENCE_SIGNAL;
 
 	/* Check syncobj after waiting on the buffer handle. */
-	spin = igt_spin_new(fd);
+	spin = igt_spin_new(fd, .ahnd = ahnd);
 	gem_execbuf(fd, &execbuf);
 
 	igt_assert(gem_bo_busy(fd, obj.handle));
@@ -1993,7 +2085,7 @@ static void test_syncobj_timeline_signal(int fd)
 	syncobj_timeline_query(fd, &fence.handle, &query_value, 1);
 	igt_assert_eq(query_value, value);
 
-	spin = igt_spin_new(fd);
+	spin = igt_spin_new(fd, .ahnd = ahnd);
 
 	/*
 	 * Wait on the syncobj and verify the state of the buffer
@@ -2024,6 +2116,7 @@ static void test_syncobj_timeline_signal(int fd)
 
 	gem_close(fd, obj.handle);
 	syncobj_destroy(fd, fence.handle);
+	put_ahnd(ahnd);
 }
 
 static const char *test_syncobj_timeline_wait_desc =
@@ -2046,7 +2139,7 @@ static void test_syncobj_timeline_wait(int fd, const intel_ctx_t *ctx)
 	};
 	unsigned handle[I915_EXEC_RING_MASK + 1];
 	const struct intel_execution_engine2 *e;
-	uint64_t value = 1;
+	uint64_t value = 1, ahnd = get_reloc_ahnd(fd, ctx->id);
 	igt_spin_t *spin;
 	int n;
 
@@ -2056,7 +2149,7 @@ static void test_syncobj_timeline_wait(int fd, const intel_ctx_t *ctx)
 
 	gem_quiescent_gpu(fd);
 
-	spin = igt_spin_new(fd, .ctx = ctx, .engine = ALL_ENGINES);
+	spin = igt_spin_new(fd, .ahnd = ahnd, .ctx = ctx, .engine = ALL_ENGINES);
 
 	memset(&timeline_fences, 0, sizeof(timeline_fences));
 	timeline_fences.base.name = DRM_I915_GEM_EXECBUFFER_EXT_TIMELINE_FENCES;
@@ -2105,6 +2198,7 @@ static void test_syncobj_timeline_wait(int fd, const intel_ctx_t *ctx)
 		gem_sync(fd, handle[i]);
 		gem_close(fd, handle[i]);
 	}
+	put_ahnd(ahnd);
 }
 
 static const char *test_syncobj_timeline_export_desc =
@@ -2121,9 +2215,9 @@ static void test_syncobj_timeline_export(int fd)
 	struct drm_i915_gem_exec_fence fence = {
 		.handle = syncobj_create(fd, 0),
 	};
-	uint64_t value = 1;
+	uint64_t value = 1, ahnd = get_reloc_ahnd(fd, 0);
 	int export[2];
-	igt_spin_t *spin = igt_spin_new(fd);
+	igt_spin_t *spin = igt_spin_new(fd, .ahnd = ahnd);
 
 	/* Check that if we export the syncobj prior to use it picks up
 	 * the later fence. This allows a syncobj to establish a channel
@@ -2177,6 +2271,7 @@ static void test_syncobj_timeline_export(int fd)
 		syncobj_destroy(fd, import);
 		close(export[n]);
 	}
+	put_ahnd(ahnd);
 }
 
 static const char *test_syncobj_timeline_repeat_desc =
@@ -2193,9 +2288,9 @@ static void test_syncobj_timeline_repeat(int fd)
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct drm_i915_gem_execbuffer_ext_timeline_fences timeline_fences;
 	struct drm_i915_gem_exec_fence *fence;
-	uint64_t *values;
+	uint64_t *values, ahnd = get_reloc_ahnd(fd, 0);
 	int export;
-	igt_spin_t *spin = igt_spin_new(fd);
+	igt_spin_t *spin = igt_spin_new(fd, .ahnd = ahnd);
 
 	/* Check that we can wait on the same fence multiple times */
 	fence = calloc(nfences, sizeof(*fence));
@@ -2266,6 +2361,7 @@ static void test_syncobj_timeline_repeat(int fd)
 	}
 	free(fence);
 	free(values);
+	put_ahnd(ahnd);
 }
 
 static const char *test_syncobj_timeline_multiple_ext_nodes_desc =
@@ -3005,6 +3101,7 @@ igt_main
 		igt_subtest_group {
 			igt_fixture {
 				igt_fork_hang_detector(i915);
+				intel_allocator_multiprocess_start();
 			}
 
 			igt_subtest_with_dynamic("basic-busy") {
@@ -3097,6 +3194,7 @@ igt_main
 			}
 
 			igt_fixture {
+				intel_allocator_multiprocess_stop();
 				igt_stop_hang_detector();
 			}
 		}
@@ -3106,6 +3204,7 @@ igt_main
 
 			igt_fixture {
 				hang = igt_allow_hang(i915, 0, 0);
+				intel_allocator_multiprocess_start();
 			}
 
 			igt_subtest_with_dynamic("busy-hang") {
@@ -3133,6 +3232,7 @@ igt_main
 				}
 			}
 			igt_fixture {
+				intel_allocator_multiprocess_stop();
 				igt_disallow_hang(i915, hang);
 			}
 		}
@@ -3162,6 +3262,7 @@ igt_main
 			igt_require(exec_has_fence_array(i915));
 			igt_assert(has_syncobj(i915));
 			igt_fork_hang_detector(i915);
+			intel_allocator_multiprocess_start();
 		}
 
 		igt_subtest("invalid-fence-array")
@@ -3195,6 +3296,7 @@ igt_main
 			test_syncobj_channel(i915);
 
 		igt_fixture {
+			intel_allocator_multiprocess_stop();
 			igt_stop_hang_detector();
 		}
 	}
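
Worth noting on the fixture changes above: subtests that fork talk to the
allocator from several processes, so the hang-detector fixtures now also
bracket them with the allocator's multiprocess mode:

	igt_fixture {
		igt_fork_hang_detector(i915);
		intel_allocator_multiprocess_start();
	}

	/* ... forking subtests; children call get_reloc_ahnd() themselves ... */

	igt_fixture {
		intel_allocator_multiprocess_stop();
		igt_stop_hang_detector();
	}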
diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
index e5fb4598..f4ab1dab 100644
--- a/tests/i915/gem_exec_schedule.c
+++ b/tests/i915/gem_exec_schedule.c
@@ -91,8 +91,9 @@ void __sync_read_u32_count(int fd, uint32_t handle, uint32_t *dst, uint64_t size
 	gem_read(fd, handle, 0, dst, size);
 }
 
-static uint32_t __store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
-			      uint32_t target, uint32_t offset, uint32_t value,
+static uint32_t __store_dword(int fd, uint64_t ahnd, const intel_ctx_t *ctx,
+			      unsigned ring, uint32_t target, uint64_t target_offset,
+			      uint32_t offset, uint32_t value,
 			      uint32_t cork, int fence, unsigned write_domain)
 {
 	const unsigned int gen = intel_gen(intel_get_drm_devid(fd));
@@ -117,12 +118,26 @@ static uint32_t __store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
 
 	memset(obj, 0, sizeof(obj));
 	obj[0].handle = cork;
-	obj[0].offset = cork << 20;
 	obj[1].handle = target;
-	obj[1].offset = target << 20;
 	obj[2].handle = gem_create(fd, 4096);
-	obj[2].offset = 256 << 10;
-	obj[2].offset += (random() % 128) << 12;
+	if (ahnd) {
+		/* If the cork handle is 0, skip getting an offset for it */
+		if (obj[0].handle) {
+			obj[0].offset = get_offset(ahnd, obj[0].handle, 4096, 0);
+			obj[0].flags |= EXEC_OBJECT_PINNED;
+		}
+		obj[1].offset = target_offset;
+		obj[1].flags |= EXEC_OBJECT_PINNED;
+		if (write_domain)
+			obj[1].flags |= EXEC_OBJECT_WRITE;
+		obj[2].offset = get_offset(ahnd, obj[2].handle, 4096, 0);
+		obj[2].flags |= EXEC_OBJECT_PINNED;
+	} else {
+		obj[0].offset = cork << 20;
+		obj[1].offset = target << 20;
+		obj[2].offset = 256 << 10;
+		obj[2].offset += (random() % 128) << 12;
+	}
 
 	memset(&reloc, 0, sizeof(reloc));
 	reloc.target_handle = obj[1].handle;
@@ -132,13 +147,13 @@ static uint32_t __store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
 	reloc.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
 	reloc.write_domain = write_domain;
 	obj[2].relocs_ptr = to_user_pointer(&reloc);
-	obj[2].relocation_count = 1;
+	obj[2].relocation_count = !ahnd ? 1 : 0;
 
 	i = 0;
 	batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
 	if (gen >= 8) {
 		batch[++i] = reloc.presumed_offset + reloc.delta;
-		batch[++i] = 0;
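+		/* softpin addresses may exceed 4 GiB: emit the high dword */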
+		batch[++i] = (reloc.presumed_offset + reloc.delta) >> 32;
 	} else if (gen >= 4) {
 		batch[++i] = 0;
 		batch[++i] = reloc.presumed_offset + reloc.delta;
@@ -159,8 +174,19 @@ static void store_dword(int fd, const intel_ctx_t *ctx, unsigned ring,
 			uint32_t target, uint32_t offset, uint32_t value,
 			unsigned write_domain)
 {
-	gem_close(fd, __store_dword(fd, ctx, ring,
-				    target, offset, value,
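+	/* ahnd == 0 keeps the reloc path; the dummy offset is unused */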
+	gem_close(fd, __store_dword(fd, 0, ctx, ring,
+				    target, 123123, offset, value,
+				    0, -1, write_domain));
+}
+
+static void store_dword2(int fd, uint64_t ahnd, const intel_ctx_t *ctx,
+			 unsigned ring,
+			 uint32_t target, uint64_t target_offset,
+			 uint32_t offset, uint32_t value,
+			 unsigned write_domain)
+{
+	gem_close(fd, __store_dword(fd, ahnd, ctx, ring,
+				    target, target_offset, offset, value,
 				    0, -1, write_domain));
 }
 
@@ -168,8 +194,19 @@ static void store_dword_plug(int fd, const intel_ctx_t *ctx, unsigned ring,
 			     uint32_t target, uint32_t offset, uint32_t value,
 			     uint32_t cork, unsigned write_domain)
 {
-	gem_close(fd, __store_dword(fd, ctx, ring,
-				    target, offset, value,
+	gem_close(fd, __store_dword(fd, 0, ctx, ring,
+				    target, 123123, offset, value,
+				    cork, -1, write_domain));
+}
+
+static void store_dword_plug2(int fd, uint64_t ahnd, const intel_ctx_t *ctx,
+			      unsigned ring,
+			      uint32_t target, uint64_t target_offset,
+			      uint32_t offset, uint32_t value,
+			      uint32_t cork, unsigned write_domain)
+{
+	gem_close(fd, __store_dword(fd, ahnd, ctx, ring,
+				    target, target_offset, offset, value,
 				    cork, -1, write_domain));
 }
 
@@ -177,8 +214,19 @@ static void store_dword_fenced(int fd, const intel_ctx_t *ctx, unsigned ring,
 			       uint32_t target, uint32_t offset, uint32_t value,
 			       int fence, unsigned write_domain)
 {
-	gem_close(fd, __store_dword(fd, ctx, ring,
-				    target, offset, value,
+	gem_close(fd, __store_dword(fd, 0, ctx, ring,
+				    target, 123123, offset, value,
+				    0, fence, write_domain));
+}
+
+static void store_dword_fenced2(int fd, uint64_t ahnd, const intel_ctx_t *ctx,
+				unsigned ring,
+				uint32_t target, uint64_t target_offset,
+				uint32_t offset, uint32_t value,
+				int fence, unsigned write_domain)
+{
+	gem_close(fd, __store_dword(fd, ahnd, ctx, ring,
+				    target, target_offset, offset, value,
 				    0, fence, write_domain));
 }
 
@@ -222,26 +270,62 @@ static void unplug_show_queue(int fd, struct igt_cork *c,
 
 }
 
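+/*
+ * Allocator-aware variant of unplug_show_queue(); both are kept until all
+ * callers have been converted to pass explicit offsets.
+ */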
+static void unplug_show_queue2(int fd, struct igt_cork *c,
+			       const intel_ctx_cfg_t *cfg,
+			       unsigned int engine)
+{
+	igt_spin_t *spin[MAX_ELSP_QLEN];
+	int max = MAX_ELSP_QLEN;
+
+	/* If no scheduler, all batches are emitted in submission order */
+	if (!gem_scheduler_enabled(fd))
+		max = 1;
+
+	for (int n = 0; n < max; n++) {
+		const intel_ctx_t *ctx = create_highest_priority(fd, cfg);
+		uint64_t ahnd = get_reloc_ahnd(fd, ctx->id);
+
+		spin[n] = __igt_spin_new(fd, .ahnd = ahnd, .ctx = ctx,
+					 .engine = engine);
+		intel_ctx_destroy(fd, ctx);
+	}
+
+	igt_cork_unplug(c); /* batches will now be queued on the engine */
+	igt_debugfs_dump(fd, "i915_engine_info");
+
+	for (int n = 0; n < max; n++) {
+		uint64_t ahnd = spin[n]->ahnd;
+		igt_spin_free(fd, spin[n]);
+		put_ahnd(ahnd);
+	}
+}
+
 static void fifo(int fd, const intel_ctx_t *ctx, unsigned ring)
 {
 	IGT_CORK_FENCE(cork);
 	uint32_t scratch;
 	uint32_t result;
 	int fence;
+	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id), scratch_offset;
 
 	scratch = gem_create(fd, 4096);
+	scratch_offset = get_offset(ahnd, scratch, 4096, 0);
 
 	fence = igt_cork_plug(&cork, fd);
 
 	/* Same priority, same timeline, final result will be the second eb */
-	store_dword_fenced(fd, ctx, ring, scratch, 0, 1, fence, 0);
-	store_dword_fenced(fd, ctx, ring, scratch, 0, 2, fence, 0);
+	store_dword_fenced2(fd, ahnd, ctx, ring, scratch, scratch_offset,
+			    0, 1, fence, 0);
+	store_dword_fenced2(fd, ahnd, ctx, ring, scratch, scratch_offset,
+			    0, 2, fence, 0);
 
-	unplug_show_queue(fd, &cork, &ctx->cfg, ring);
+	unplug_show_queue2(fd, &cork, &ctx->cfg, ring);
 	close(fence);
 
 	result =  __sync_read_u32(fd, scratch, 0);
 	gem_close(fd, scratch);
+	put_ahnd(ahnd);
 
 	igt_assert_eq_u32(result, 2);
 }
@@ -260,6 +344,7 @@ static void implicit_rw(int i915, const intel_ctx_t *ctx, unsigned int ring,
 	uint32_t scratch;
 	uint32_t result;
 	int fence;
+	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id), scratch_offset;
 
 	count = 0;
 	for_each_ctx_engine(i915, ctx, e) {
@@ -274,11 +359,12 @@ static void implicit_rw(int i915, const intel_ctx_t *ctx, unsigned int ring,
 	igt_require(count);
 
 	scratch = gem_create(i915, 4096);
+	scratch_offset = get_offset(ahnd, scratch, 4096, 0);
 	fence = igt_cork_plug(&cork, i915);
 
 	if (dir & WRITE_READ)
-		store_dword_fenced(i915, ctx,
-				   ring, scratch, 0, ~ring,
-				   fence, I915_GEM_DOMAIN_RENDER);
+		store_dword_fenced2(i915, ahnd, ctx,
+				    ring, scratch, scratch_offset, 0, ~ring,
+				    fence, I915_GEM_DOMAIN_RENDER);
 
 	for_each_ctx_engine(i915, ctx, e) {
@@ -288,21 +374,22 @@ static void implicit_rw(int i915, const intel_ctx_t *ctx, unsigned int ring,
 		if (!gem_class_can_store_dword(i915, e->class))
 			continue;
 
-		store_dword_fenced(i915, ctx,
-				   e->flags, scratch, 0, e->flags,
-				   fence, 0);
+		store_dword_fenced2(i915, ahnd, ctx,
+				    e->flags, scratch, scratch_offset,
+				    0, e->flags, fence, 0);
 	}
 
 	if (dir & READ_WRITE)
-		store_dword_fenced(i915, ctx,
-				   ring, scratch, 0, ring,
-				   fence, I915_GEM_DOMAIN_RENDER);
+		store_dword_fenced2(i915, ahnd, ctx,
+				    ring, scratch, scratch_offset, 0, ring,
+				    fence, I915_GEM_DOMAIN_RENDER);
 
-	unplug_show_queue(i915, &cork, &ctx->cfg, ring);
+	unplug_show_queue2(i915, &cork, &ctx->cfg, ring);
 	close(fence);
 
 	result =  __sync_read_u32(i915, scratch, 0);
 	gem_close(i915, scratch);
+	put_ahnd(ahnd);
 
 	if (dir & WRITE_READ)
 		igt_assert_neq_u32(result, ~ring);
@@ -319,8 +406,10 @@ static void independent(int fd, const intel_ctx_t *ctx, unsigned int engine,
 	uint32_t scratch, batch;
 	uint32_t *ptr;
 	int fence;
+	uint64_t ahnd = get_reloc_ahnd(fd, ctx->id), scratch_offset;
 
 	scratch = gem_create(fd, 4096);
+	scratch_offset = get_offset(ahnd, scratch, 4096, 0);
 	ptr = gem_mmap__device_coherent(fd, scratch, 0, 4096, PROT_READ);
 	igt_assert_eq(ptr[0], 0);
 
@@ -336,6 +425,7 @@ static void independent(int fd, const intel_ctx_t *ctx, unsigned int engine,
 
 		if (spin == NULL) {
 			spin = __igt_spin_new(fd,
+					      .ahnd = ahnd,
 					      .ctx = ctx,
 					      .engine = e->flags,
 					      .flags = flags);
@@ -348,14 +438,17 @@ static void independent(int fd, const intel_ctx_t *ctx, unsigned int engine,
 			gem_execbuf(fd, &eb);
 		}
 
-		store_dword_fenced(fd, ctx, e->flags, scratch, 0, e->flags, fence, 0);
+		store_dword_fenced2(fd, ahnd, ctx, e->flags,
+				    scratch, scratch_offset,
+				    0, e->flags, fence, 0);
 	}
 	igt_require(spin);
 
 	/* Same priority, but different timeline (as different engine) */
-	batch = __store_dword(fd, ctx, engine, scratch, 0, engine, 0, fence, 0);
+	batch = __store_dword(fd, ahnd, ctx, engine, scratch, scratch_offset,
+			      0, engine, 0, fence, 0);
 
-	unplug_show_queue(fd, &cork, &ctx->cfg, engine);
+	unplug_show_queue2(fd, &cork, &ctx->cfg, engine);
 	close(fence);
 
 	gem_sync(fd, batch);
@@ -369,6 +462,7 @@ static void independent(int fd, const intel_ctx_t *ctx, unsigned int engine,
 
 	igt_spin_free(fd, spin);
 	gem_quiescent_gpu(fd);
+	put_ahnd(ahnd);
 
 	/* And we expect the others to have overwritten us, order unspecified */
 	igt_assert(!gem_bo_busy(fd, scratch));
@@ -388,6 +482,7 @@ static void smoketest(int fd, const intel_ctx_cfg_t *cfg,
 	unsigned engine;
 	uint32_t scratch;
 	uint32_t result[2 * ncpus];
+	uint64_t ahnd, scratch_offset;
 
 	nengine = 0;
 	if (ring == ALL_ENGINES) {
@@ -400,6 +495,8 @@ static void smoketest(int fd, const intel_ctx_cfg_t *cfg,
 	igt_require(nengine);
 
 	scratch = gem_create(fd, 4096);
 	igt_fork(child, ncpus) {
 		unsigned long count = 0;
 		const intel_ctx_t *ctx;
@@ -407,6 +504,8 @@ static void smoketest(int fd, const intel_ctx_cfg_t *cfg,
 		hars_petruska_f54_1_random_perturb(child);
 
 		ctx = intel_ctx_create(fd, cfg);
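+		/* each child opens its own allocator handle after the fork */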
+		ahnd = get_reloc_ahnd(fd, ctx->id);
+		scratch_offset = get_offset(ahnd, scratch, 4096, 0);
 		igt_until_timeout(timeout) {
 			int prio;
 
@@ -414,20 +513,23 @@ static void smoketest(int fd, const intel_ctx_cfg_t *cfg,
 			gem_context_set_priority(fd, ctx->id, prio);
 
 			engine = engines[hars_petruska_f54_1_random_unsafe_max(nengine)];
-			store_dword(fd, ctx, engine, scratch,
-				    8*child + 0, ~child,
-				    0);
+			store_dword2(fd, ahnd, ctx, engine,
+				     scratch, scratch_offset,
+				     8*child + 0, ~child, 0);
 			for (unsigned int step = 0; step < 8; step++)
-				store_dword(fd, ctx, engine, scratch,
-					    8*child + 4, count++,
-					    0);
+				store_dword2(fd, ahnd, ctx, engine,
+					     scratch, scratch_offset,
+					     8*child + 4, count++,
+					     0);
 		}
 		intel_ctx_destroy(fd, ctx);
+		put_ahnd(ahnd);
 	}
 	igt_waitchildren();
 
 	__sync_read_u32_count(fd, scratch, result, sizeof(result));
 	gem_close(fd, scratch);
 
 	for (unsigned n = 0; n < ncpus; n++) {
 		igt_assert_eq_u32(result[2 * n], ~n);
@@ -644,12 +746,15 @@ static void lateslice(int i915, const intel_ctx_cfg_t *cfg,
 {
 	const intel_ctx_t *ctx;
 	igt_spin_t *spin[3];
+	uint64_t ahnd[3];
 
 	igt_require(gem_scheduler_has_timeslicing(i915));
 	igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8);
 
 	ctx = intel_ctx_create(i915, cfg);
-	spin[0] = igt_spin_new(i915, .ctx = ctx, .engine = engine,
+	ahnd[0] = get_reloc_ahnd(i915, ctx->id);
+	spin[0] = igt_spin_new(i915, .ahnd = ahnd[0], .ctx = ctx,
+			       .engine = engine,
 			       .flags = (IGT_SPIN_POLL_RUN |
 					 IGT_SPIN_FENCE_OUT |
 					 flags));
@@ -658,7 +763,9 @@ static void lateslice(int i915, const intel_ctx_cfg_t *cfg,
 	igt_spin_busywait_until_started(spin[0]);
 
 	ctx = intel_ctx_create(i915, cfg);
-	spin[1] = igt_spin_new(i915, .ctx = ctx, .engine = engine,
+	ahnd[1] = get_reloc_ahnd(i915, ctx->id);
+	spin[1] = igt_spin_new(i915, .ahnd = ahnd[1], .ctx = ctx,
+			       .engine = engine,
 			       .fence = spin[0]->out_fence,
 			       .flags = (IGT_SPIN_POLL_RUN |
 					 IGT_SPIN_FENCE_IN |
@@ -675,7 +782,9 @@ static void lateslice(int i915, const intel_ctx_cfg_t *cfg,
 	 */
 
 	ctx = intel_ctx_create(i915, cfg);
-	spin[2] = igt_spin_new(i915, .ctx = ctx, .engine = engine,
+	ahnd[2] = get_reloc_ahnd(i915, ctx->id);
+	spin[2] = igt_spin_new(i915, .ahnd = ahnd[2], .ctx = ctx,
+			       .engine = engine,
 			       .flags = IGT_SPIN_POLL_RUN | flags);
 	intel_ctx_destroy(i915, ctx);
 
@@ -696,6 +805,9 @@ static void lateslice(int i915, const intel_ctx_cfg_t *cfg,
 
 	igt_assert(gem_bo_busy(i915, spin[1]->handle));
 	igt_spin_free(i915, spin[1]);
+
+	for (int i = 0; i < ARRAY_SIZE(ahnd); i++)
+		put_ahnd(ahnd[i]);
 }
 
 static void cancel_spinner(int i915,
@@ -742,6 +854,7 @@ static void submit_slice(int i915, const intel_ctx_cfg_t *cfg,
 		.num_engines = 1,
 	};
 	const intel_ctx_t *ctx;
+	uint64_t ahnd0 = get_reloc_ahnd(i915, 0);
 
 	/*
 	 * When using a submit fence, we do not want to block concurrent work,
@@ -755,13 +868,14 @@ static void submit_slice(int i915, const intel_ctx_cfg_t *cfg,
 		igt_spin_t *bg, *spin;
 		int timeline = -1;
 		int fence = -1;
+		uint64_t ahndN;
 
 		if (!gem_class_can_store_dword(i915, cancel->class))
 			continue;
 
 		igt_debug("Testing cancellation from %s\n", e->name);
 
-		bg = igt_spin_new(i915, .engine = e->flags);
+		bg = igt_spin_new(i915, .ahnd = ahnd0, .engine = e->flags);
 
 		if (flags & LATE_SUBMIT) {
 			timeline = sw_sync_timeline_create();
@@ -771,7 +885,8 @@ static void submit_slice(int i915, const intel_ctx_cfg_t *cfg,
 		engine_cfg.engines[0].engine_class = e->class;
 		engine_cfg.engines[0].engine_instance = e->instance;
 		ctx = intel_ctx_create(i915, &engine_cfg);
-		spin = igt_spin_new(i915, .ctx = ctx,
+		ahndN = get_reloc_ahnd(i915, ctx->id);
+		spin = igt_spin_new(i915, .ahnd = ahndN, .ctx = ctx,
 				    .fence = fence,
 				    .flags =
 				    IGT_SPIN_POLL_RUN |
@@ -800,7 +915,10 @@ static void submit_slice(int i915, const intel_ctx_cfg_t *cfg,
 		igt_spin_free(i915, bg);
 
 		intel_ctx_destroy(i915, ctx);
+		put_ahnd(ahndN);
 	}
+
+	put_ahnd(ahnd0);
 }
 
 static uint32_t __batch_create(int i915, uint32_t offset)
@@ -829,6 +947,7 @@ static void semaphore_userlock(int i915, const intel_ctx_t *ctx,
 	igt_spin_t *spin = NULL;
 	uint32_t scratch;
 	const intel_ctx_t *tmp_ctx;
+	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id);
 
 	igt_require(gem_scheduler_has_timeslicing(i915));
 
@@ -843,6 +962,7 @@ static void semaphore_userlock(int i915, const intel_ctx_t *ctx,
 	for_each_ctx_engine(i915, ctx, e) {
 		if (!spin) {
 			spin = igt_spin_new(i915,
+					    .ahnd = ahnd,
 					    .ctx = ctx,
 					    .dependency = scratch,
 					    .engine = e->flags,
@@ -885,6 +1005,7 @@ static void semaphore_userlock(int i915, const intel_ctx_t *ctx,
 	gem_close(i915, obj.handle);
 
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 }
 
 static void semaphore_codependency(int i915, const intel_ctx_t *ctx,
@@ -894,6 +1015,7 @@ static void semaphore_codependency(int i915, const intel_ctx_t *ctx,
 	struct {
 		igt_spin_t *xcs, *rcs;
 	} task[2];
+	uint64_t ahnd;
 	int i;
 
 	/*
@@ -919,9 +1041,11 @@ static void semaphore_codependency(int i915, const intel_ctx_t *ctx,
 			continue;
 
 		tmp_ctx = intel_ctx_create(i915, &ctx->cfg);
+		ahnd = get_simple_l2h_ahnd(i915, tmp_ctx->id);
 
 		task[i].xcs =
 			__igt_spin_new(i915,
+				       .ahnd = ahnd,
 				       .ctx = tmp_ctx,
 				       .engine = e->flags,
 				       .flags = IGT_SPIN_POLL_RUN | flags);
@@ -930,6 +1054,7 @@ static void semaphore_codependency(int i915, const intel_ctx_t *ctx,
 		/* Common rcs tasks will be queued in FIFO */
 		task[i].rcs =
 			__igt_spin_new(i915,
+				       .ahnd = ahnd,
 				       .ctx = tmp_ctx,
 				       .engine = 0,
 				       .dependency = task[i].xcs->handle);
@@ -952,8 +1077,10 @@ static void semaphore_codependency(int i915, const intel_ctx_t *ctx,
 	}
 
 	for (i = 0; i < ARRAY_SIZE(task); i++) {
+		ahnd = task[i].rcs->ahnd;
 		igt_spin_free(i915, task[i].xcs);
 		igt_spin_free(i915, task[i].rcs);
+		put_ahnd(ahnd);
 	}
 }
 
@@ -964,6 +1091,7 @@ static void semaphore_resolve(int i915, const intel_ctx_cfg_t *cfg,
 	const uint32_t SEMAPHORE_ADDR = 64 << 10;
 	uint32_t semaphore, *sema;
 	const intel_ctx_t *outer, *inner;
+	uint64_t ahnd = get_reloc_ahnd(i915, 0);
 
 	/*
 	 * Userspace may submit batches that wait upon unresolved
@@ -994,7 +1122,8 @@ static void semaphore_resolve(int i915, const intel_ctx_cfg_t *cfg,
 		if (!gem_class_can_store_dword(i915, e->class))
 			continue;
 
-		spin = __igt_spin_new(i915, .engine = e->flags, .flags = flags);
+		spin = __igt_spin_new(i915, .ahnd = ahnd,
+				      .engine = e->flags, .flags = flags);
 		igt_spin_end(spin); /* we just want its address for later */
 		gem_sync(i915, spin->handle);
 		igt_spin_reset(spin);
@@ -1086,6 +1215,7 @@ static void semaphore_resolve(int i915, const intel_ctx_cfg_t *cfg,
 
 	intel_ctx_destroy(i915, inner);
 	intel_ctx_destroy(i915, outer);
+	put_ahnd(ahnd);
 }
 
 static void semaphore_noskip(int i915, const intel_ctx_cfg_t *cfg,
@@ -1094,10 +1224,12 @@ static void semaphore_noskip(int i915, const intel_ctx_cfg_t *cfg,
 	const unsigned int gen = intel_gen(intel_get_drm_devid(i915));
 	const struct intel_execution_engine2 *outer, *inner;
 	const intel_ctx_t *ctx;
+	uint64_t ahnd;
 
 	igt_require(gen >= 6); /* MI_STORE_DWORD_IMM convenience */
 
 	ctx = intel_ctx_create(i915, cfg);
+	ahnd = get_reloc_ahnd(i915, ctx->id);
 
 	for_each_ctx_engine(i915, ctx, outer) {
 	for_each_ctx_engine(i915, ctx, inner) {
@@ -1110,10 +1242,10 @@ static void semaphore_noskip(int i915, const intel_ctx_cfg_t *cfg,
 		    !gem_class_can_store_dword(i915, inner->class))
 			continue;
 
-		chain = __igt_spin_new(i915, .ctx = ctx,
+		chain = __igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
 				       .engine = outer->flags, .flags = flags);
 
-		spin = __igt_spin_new(i915, .ctx = ctx,
+		spin = __igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
 				      .engine = inner->flags, .flags = flags);
 		igt_spin_end(spin); /* we just want its address for later */
 		gem_sync(i915, spin->handle);
@@ -1172,6 +1304,7 @@ static void semaphore_noskip(int i915, const intel_ctx_cfg_t *cfg,
 	}
 
 	intel_ctx_destroy(i915, ctx);
+	put_ahnd(ahnd);
 }
 
 static void
@@ -1197,6 +1330,7 @@ noreorder(int i915, const intel_ctx_cfg_t *cfg,
 	igt_spin_t *spin;
 	int fence = -1;
 	uint64_t addr;
+	uint64_t ahnd[2];
 
 	if (flags & CORKED)
 		fence = igt_cork_plug(&cork, i915);
@@ -1205,8 +1339,9 @@ noreorder(int i915, const intel_ctx_cfg_t *cfg,
 		vm_cfg.vm = gem_vm_create(i915);
 
 	ctx = intel_ctx_create(i915, &vm_cfg);
+	ahnd[0] = get_reloc_ahnd(i915, ctx->id);
 
-	spin = igt_spin_new(i915, .ctx = ctx,
+	spin = igt_spin_new(i915, .ahnd = ahnd[0], .ctx = ctx,
 			    .engine = engine,
 			    .fence = fence,
 			    .flags = IGT_SPIN_FENCE_OUT | IGT_SPIN_FENCE_IN);
@@ -1281,7 +1416,9 @@ noreorder(int i915, const intel_ctx_cfg_t *cfg,
 	 * Without timeslices, fallback to waiting a second.
 	 */
 	ctx = intel_ctx_create(i915, &vm_cfg);
+	ahnd[1] = get_reloc_ahnd(i915, ctx->id);
 	slice = igt_spin_new(i915,
+			    .ahnd = ahnd[1],
 			    .ctx = ctx,
 			    .engine = engine,
 			    .flags = IGT_SPIN_POLL_RUN);
@@ -1310,6 +1447,7 @@ static void reorder(int fd, const intel_ctx_cfg_t *cfg,
 	uint32_t result;
 	const intel_ctx_t *ctx[2];
 	int fence;
+	uint64_t ahnd = get_reloc_ahnd(fd, 0), scratch_offset;
 
 	ctx[LO] = intel_ctx_create(fd, cfg);
 	gem_context_set_priority(fd, ctx[LO]->id, MIN_PRIO);
@@ -1318,19 +1456,23 @@ static void reorder(int fd, const intel_ctx_cfg_t *cfg,
 	gem_context_set_priority(fd, ctx[HI]->id, flags & EQUAL ? MIN_PRIO : 0);
 
 	scratch = gem_create(fd, 4096);
+	scratch_offset = get_offset(ahnd, scratch, 4096, 0);
 	fence = igt_cork_plug(&cork, fd);
 
 	/* We expect the high priority context to be executed first, and
 	 * so the final result will be value from the low priority context.
 	 */
-	store_dword_fenced(fd, ctx[LO], ring, scratch, 0, ctx[LO]->id, fence, 0);
-	store_dword_fenced(fd, ctx[HI], ring, scratch, 0, ctx[HI]->id, fence, 0);
+	store_dword_fenced2(fd, ahnd, ctx[LO], ring, scratch, scratch_offset,
+			    0, ctx[LO]->id, fence, 0);
+	store_dword_fenced2(fd, ahnd, ctx[HI], ring, scratch, scratch_offset,
+			    0, ctx[HI]->id, fence, 0);
 
-	unplug_show_queue(fd, &cork, cfg, ring);
+	unplug_show_queue2(fd, &cork, cfg, ring);
 	close(fence);
 
 	result =  __sync_read_u32(fd, scratch, 0);
 	gem_close(fd, scratch);
+	put_ahnd(ahnd);
 
 	if (flags & EQUAL) /* equal priority, result will be fifo */
 		igt_assert_eq_u32(result, ctx[HI]->id);
@@ -1348,6 +1490,7 @@ static void promotion(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 	uint32_t result_read, dep_read;
 	const intel_ctx_t *ctx[3];
 	int fence;
+	uint64_t ahnd = get_reloc_ahnd(fd, 0), result_offset, dep_offset;
 
 	ctx[LO] = intel_ctx_create(fd, cfg);
 	gem_context_set_priority(fd, ctx[LO]->id, MIN_PRIO);
@@ -1359,7 +1502,10 @@ static void promotion(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 	gem_context_set_priority(fd, ctx[NOISE]->id, MIN_PRIO/2);
 
 	result = gem_create(fd, 4096);
+	result_offset = get_offset(ahnd, result, 4096, 0);
 	dep = gem_create(fd, 4096);
+	dep_offset = get_offset(ahnd, dep, 4096, 0);
 
 	fence = igt_cork_plug(&cork, fd);
 
@@ -1368,16 +1514,21 @@ static void promotion(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 	 * fifo would be NOISE, LO, HI.
 	 * strict priority would be  HI, NOISE, LO
 	 */
-	store_dword_fenced(fd, ctx[NOISE], ring, result, 0, ctx[NOISE]->id, fence, 0);
-	store_dword_fenced(fd, ctx[LO], ring, result, 0, ctx[LO]->id, fence, 0);
+	store_dword_fenced2(fd, ahnd, ctx[NOISE], ring, result, result_offset,
+			    0, ctx[NOISE]->id, fence, 0);
+	store_dword_fenced2(fd, ahnd, ctx[LO], ring, result, result_offset,
+			    0, ctx[LO]->id, fence, 0);
 
 	/* link LO <-> HI via a dependency on another buffer */
-	store_dword(fd, ctx[LO], ring, dep, 0, ctx[LO]->id, I915_GEM_DOMAIN_INSTRUCTION);
-	store_dword(fd, ctx[HI], ring, dep, 0, ctx[HI]->id, 0);
+	store_dword2(fd, ahnd, ctx[LO], ring, dep, dep_offset,
+		     0, ctx[LO]->id, I915_GEM_DOMAIN_INSTRUCTION);
+	store_dword2(fd, ahnd, ctx[HI], ring, dep, dep_offset,
+		     0, ctx[HI]->id, 0);
 
-	store_dword(fd, ctx[HI], ring, result, 0, ctx[HI]->id, 0);
+	store_dword2(fd, ahnd, ctx[HI], ring, result, result_offset,
+		     0, ctx[HI]->id, 0);
 
-	unplug_show_queue(fd, &cork, cfg, ring);
+	unplug_show_queue2(fd, &cork, cfg, ring);
 	close(fence);
 
 	dep_read = __sync_read_u32(fd, dep, 0);
@@ -1385,6 +1536,7 @@ static void promotion(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 
 	result_read = __sync_read_u32(fd, result, 0);
 	gem_close(fd, result);
+	put_ahnd(ahnd);
 
 	igt_assert_eq_u32(dep_read, ctx[HI]->id);
 	igt_assert_eq_u32(result_read, ctx[NOISE]->id);
@@ -1413,32 +1565,42 @@ static void preempt(int fd, const intel_ctx_cfg_t *cfg,
 	igt_spin_t *spin[MAX_ELSP_QLEN];
 	const intel_ctx_t *ctx[2];
 	igt_hang_t hang;
+	uint64_t ahnd = get_reloc_ahnd(fd, 0);
+	uint64_t ahnd_lo_arr[MAX_ELSP_QLEN], ahnd_lo;
+	uint64_t result_offset = get_offset(ahnd, result, 4096, 0);
 
 	/* Set a fast timeout to speed the test up (if available) */
 	set_preempt_timeout(fd, e, 150);
 
 	ctx[LO] = intel_ctx_create(fd, cfg);
 	gem_context_set_priority(fd, ctx[LO]->id, MIN_PRIO);
+	ahnd_lo = get_reloc_ahnd(fd, ctx[LO]->id);
 
 	ctx[HI] = intel_ctx_create(fd, cfg);
 	gem_context_set_priority(fd, ctx[HI]->id, MAX_PRIO);
 
 	if (flags & HANG_LP)
-		hang = igt_hang_ctx(fd, ctx[LO]->id, e->flags, 0);
+		hang = igt_hang_ctx_with_ahnd(fd, ahnd, ctx[LO]->id, e->flags, 0);
 
 	for (int n = 0; n < ARRAY_SIZE(spin); n++) {
+		uint64_t currahnd = ahnd_lo;
+
 		if (flags & NEW_CTX) {
 			intel_ctx_destroy(fd, ctx[LO]);
 			ctx[LO] = intel_ctx_create(fd, cfg);
 			gem_context_set_priority(fd, ctx[LO]->id, MIN_PRIO);
+			ahnd_lo_arr[n] = get_reloc_ahnd(fd, ctx[LO]->id);
+			currahnd = ahnd_lo_arr[n];
 		}
 		spin[n] = __igt_spin_new(fd,
+					 .ahnd = currahnd,
 					 .ctx = ctx[LO],
 					 .engine = e->flags,
 					 .flags = flags & USERPTR ? IGT_SPIN_USERPTR : 0);
 		igt_debug("spin[%d].handle=%d\n", n, spin[n]->handle);
 
-		store_dword(fd, ctx[HI], e->flags, result, 0, n + 1, I915_GEM_DOMAIN_RENDER);
+		store_dword2(fd, ahnd, ctx[HI], e->flags, result, result_offset,
+			     0, n + 1, I915_GEM_DOMAIN_RENDER);
 
 		result_read = __sync_read_u32(fd, result, 0);
 		igt_assert_eq_u32(result_read, n + 1);
@@ -1453,6 +1615,13 @@ static void preempt(int fd, const intel_ctx_cfg_t *cfg,
 
 	intel_ctx_destroy(fd, ctx[LO]);
 	intel_ctx_destroy(fd, ctx[HI]);
+	put_ahnd(ahnd);
+	put_ahnd(ahnd_lo);
+
+	if (flags & NEW_CTX) {
+		for (int n = 0; n < ARRAY_SIZE(spin); n++)
+			put_ahnd(ahnd_lo_arr[n]);
+	}
 
 	gem_close(fd, result);
 }
@@ -1460,7 +1629,7 @@ static void preempt(int fd, const intel_ctx_cfg_t *cfg,
 #define CHAIN 0x1
 #define CONTEXTS 0x2
 
-static igt_spin_t *__noise(int fd, const intel_ctx_t *ctx,
+static igt_spin_t *__noise(int fd, uint64_t ahnd, const intel_ctx_t *ctx,
 			   int prio, igt_spin_t *spin)
 {
 	const struct intel_execution_engine2 *e;
@@ -1470,6 +1639,7 @@ static igt_spin_t *__noise(int fd, const intel_ctx_t *ctx,
 	for_each_ctx_engine(fd, ctx, e) {
 		if (spin == NULL) {
 			spin = __igt_spin_new(fd,
+					      .ahnd = ahnd,
 					      .ctx = ctx,
 					      .engine = e->flags);
 		} else {
@@ -1487,6 +1657,7 @@ static igt_spin_t *__noise(int fd, const intel_ctx_t *ctx,
 }
 
 static void __preempt_other(int fd,
+			    uint64_t *ahnd,
 			    const intel_ctx_t **ctx,
 			    unsigned int target, unsigned int primary,
 			    unsigned flags)
@@ -1495,24 +1666,27 @@ static void __preempt_other(int fd,
 	uint32_t result = gem_create(fd, 4096);
 	uint32_t result_read[4096 / sizeof(uint32_t)];
 	unsigned int n, i;
+	uint64_t result_offset_lo = get_offset(ahnd[LO], result, 4096, 0);
+	uint64_t result_offset_hi = get_offset(ahnd[HI], result, 4096, 0);
 
 	n = 0;
-	store_dword(fd, ctx[LO], primary,
-		    result, (n + 1)*sizeof(uint32_t), n + 1,
-		    I915_GEM_DOMAIN_RENDER);
+	store_dword2(fd, ahnd[LO], ctx[LO], primary,
+		     result, result_offset_lo,
+		     (n + 1)*sizeof(uint32_t), n + 1,
+		     I915_GEM_DOMAIN_RENDER);
 	n++;
 
 	if (flags & CHAIN) {
 		for_each_ctx_engine(fd, ctx[LO], e) {
-			store_dword(fd, ctx[LO], e->flags,
-				    result, (n + 1)*sizeof(uint32_t), n + 1,
+			store_dword2(fd, ahnd[LO], ctx[LO], e->flags,
+				    result, result_offset_lo,
+				    (n + 1)*sizeof(uint32_t), n + 1,
 				    I915_GEM_DOMAIN_RENDER);
 			n++;
 		}
 	}
 
-	store_dword(fd, ctx[HI], target,
-		    result, (n + 1)*sizeof(uint32_t), n + 1,
+	store_dword2(fd, ahnd[HI], ctx[HI], target,
+		    result, result_offset_hi, (n + 1)*sizeof(uint32_t), n + 1,
 		    I915_GEM_DOMAIN_RENDER);
 
 	igt_debugfs_dump(fd, "i915_engine_info");
@@ -1533,6 +1707,7 @@ static void preempt_other(int fd, const intel_ctx_cfg_t *cfg,
 	const struct intel_execution_engine2 *e;
 	igt_spin_t *spin = NULL;
 	const intel_ctx_t *ctx[3];
+	uint64_t ahnd[3];
 
 	/* On each engine, insert
 	 * [NOISE] spinner,
@@ -1546,16 +1721,19 @@ static void preempt_other(int fd, const intel_ctx_cfg_t *cfg,
 
 	ctx[LO] = intel_ctx_create(fd, cfg);
 	gem_context_set_priority(fd, ctx[LO]->id, MIN_PRIO);
+	ahnd[LO] = get_reloc_ahnd(fd, ctx[LO]->id);
 
 	ctx[NOISE] = intel_ctx_create(fd, cfg);
-	spin = __noise(fd, ctx[NOISE], 0, NULL);
+	ahnd[NOISE] = get_reloc_ahnd(fd, ctx[NOISE]->id);
+	spin = __noise(fd, ahnd[NOISE], ctx[NOISE], 0, NULL);
 
 	ctx[HI] = intel_ctx_create(fd, cfg);
 	gem_context_set_priority(fd, ctx[HI]->id, MAX_PRIO);
+	ahnd[HI] = get_reloc_ahnd(fd, ctx[HI]->id);
 
 	for_each_ctx_cfg_engine(fd, cfg, e) {
 		igt_debug("Primary engine: %s\n", e->name);
-		__preempt_other(fd, ctx, ring, e->flags, flags);
+		__preempt_other(fd, ahnd, ctx, ring, e->flags, flags);
 
 	}
 
@@ -1565,6 +1743,9 @@ static void preempt_other(int fd, const intel_ctx_cfg_t *cfg,
 	intel_ctx_destroy(fd, ctx[LO]);
 	intel_ctx_destroy(fd, ctx[NOISE]);
 	intel_ctx_destroy(fd, ctx[HI]);
+	put_ahnd(ahnd[LO]);
+	put_ahnd(ahnd[NOISE]);
+	put_ahnd(ahnd[HI]);
 }
 
 static void __preempt_queue(int fd, const intel_ctx_cfg_t *cfg,
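
store_dword2() is assumed here to be the no-reloc variant of store_dword(): the caller passes the allocator handle and the target's pre-reserved GPU VA, and the helper pins the target there instead of attaching a relocation entry. Note also why __preempt_other() reserves the same BO once per handle: each context owns its own ppGTT, so the buffer may legitimately sit at different addresses in ahnd[LO] and ahnd[HI]. A sketch of the object setup assumed inside the helper:

	struct drm_i915_gem_exec_object2 obj = {
		.handle = target,
		.offset = target_offset,	/* from get_offset(ahnd, ...) */
	};

	if (ahnd) {
		obj.flags |= EXEC_OBJECT_PINNED;	/* softpin at that VA */
		obj.relocation_count = 0;
	} else {
		obj.relocs_ptr = to_user_pointer(&reloc);
		obj.relocation_count = 1;		/* legacy reloc path */
	}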
@@ -1574,12 +1755,18 @@ static void __preempt_queue(int fd, const intel_ctx_cfg_t *cfg,
 	const struct intel_execution_engine2 *e;
 	uint32_t result = gem_create(fd, 4096);
 	uint32_t result_read[4096 / sizeof(uint32_t)];
+	uint64_t result_offset;
 	igt_spin_t *above = NULL, *below = NULL;
 	const intel_ctx_t *ctx[3] = {
 		intel_ctx_create(fd, cfg),
 		intel_ctx_create(fd, cfg),
 		intel_ctx_create(fd, cfg),
 	};
+	uint64_t ahnd[3] = {
+		get_reloc_ahnd(fd, ctx[0]->id),
+		get_reloc_ahnd(fd, ctx[1]->id),
+		get_reloc_ahnd(fd, ctx[2]->id),
+	};
 	int prio = MAX_PRIO;
 	unsigned int n, i;
 
@@ -1588,7 +1775,7 @@ static void __preempt_queue(int fd, const intel_ctx_cfg_t *cfg,
 			intel_ctx_destroy(fd, ctx[NOISE]);
 			ctx[NOISE] = intel_ctx_create(fd, cfg);
 		}
-		above = __noise(fd, ctx[NOISE], prio--, above);
+		above = __noise(fd, ahnd[NOISE], ctx[NOISE], prio--, above);
 	}
 
 	gem_context_set_priority(fd, ctx[HI]->id, prio--);
@@ -1598,28 +1785,31 @@ static void __preempt_queue(int fd, const intel_ctx_cfg_t *cfg,
 			intel_ctx_destroy(fd, ctx[NOISE]);
 			ctx[NOISE] = intel_ctx_create(fd, cfg);
 		}
-		below = __noise(fd, ctx[NOISE], prio--, below);
+		below = __noise(fd, ahnd[NOISE], ctx[NOISE], prio--, below);
 	}
 
 	gem_context_set_priority(fd, ctx[LO]->id, prio--);
 
 	n = 0;
-	store_dword(fd, ctx[LO], primary,
-		    result, (n + 1)*sizeof(uint32_t), n + 1,
+	result_offset = get_offset(ahnd[LO], result, 4096, 0);
+	store_dword2(fd, ahnd[LO], ctx[LO], primary,
+		    result, result_offset, (n + 1)*sizeof(uint32_t), n + 1,
 		    I915_GEM_DOMAIN_RENDER);
 	n++;
 
 	if (flags & CHAIN) {
 		for_each_ctx_engine(fd, ctx[LO], e) {
-			store_dword(fd, ctx[LO], e->flags,
-				    result, (n + 1)*sizeof(uint32_t), n + 1,
+			store_dword2(fd, ahnd[LO], ctx[LO], e->flags,
+				    result, result_offset,
+				    (n + 1)*sizeof(uint32_t), n + 1,
 				    I915_GEM_DOMAIN_RENDER);
 			n++;
 		}
 	}
 
-	store_dword(fd, ctx[HI], target,
-		    result, (n + 1)*sizeof(uint32_t), n + 1,
+	result_offset = get_offset(ahnd[HI], result, 4096, 0);
+	store_dword2(fd, ahnd[HI], ctx[HI], target,
+		    result, result_offset, (n + 1)*sizeof(uint32_t), n + 1,
 		    I915_GEM_DOMAIN_RENDER);
 
 	igt_debugfs_dump(fd, "i915_engine_info");
@@ -1645,6 +1835,9 @@ static void __preempt_queue(int fd, const intel_ctx_cfg_t *cfg,
 	intel_ctx_destroy(fd, ctx[LO]);
 	intel_ctx_destroy(fd, ctx[NOISE]);
 	intel_ctx_destroy(fd, ctx[HI]);
+	put_ahnd(ahnd[LO]);
+	put_ahnd(ahnd[NOISE]);
+	put_ahnd(ahnd[HI]);
 
 	gem_close(fd, result);
 }
@@ -1679,6 +1872,7 @@ static void preempt_engines(int i915,
 	IGT_LIST_HEAD(plist);
 	igt_spin_t *spin, *sn;
 	const intel_ctx_t *ctx;
+	uint64_t ahnd;
 
 	/*
 	 * A quick test that each engine within a context is an independent
@@ -1694,12 +1888,14 @@ static void preempt_engines(int i915,
 		igt_list_add(&pnode[n].link, &plist);
 	}
 	ctx = intel_ctx_create(i915, &cfg);
+	ahnd = get_reloc_ahnd(i915, ctx->id);
 
 	for (int n = -(GEM_MAX_ENGINES - 1); n < GEM_MAX_ENGINES; n++) {
 		unsigned int engine = n & I915_EXEC_RING_MASK;
 
 		gem_context_set_priority(i915, ctx->id, n);
-		spin = igt_spin_new(i915, .ctx = ctx, .engine = engine);
+		spin = igt_spin_new(i915, .ahnd = ahnd, .ctx = ctx,
+				    .engine = engine);
 
 		igt_list_move_tail(&spin->link, &pnode[engine].spinners);
 		igt_list_move(&pnode[engine].link, &plist);
@@ -1713,6 +1909,7 @@ static void preempt_engines(int i915,
 		}
 	}
 	intel_ctx_destroy(i915, ctx);
+	put_ahnd(ahnd);
 }
 
 static void preempt_self(int fd, const intel_ctx_cfg_t *cfg,
@@ -1724,6 +1921,7 @@ static void preempt_self(int fd, const intel_ctx_cfg_t *cfg,
 	igt_spin_t *spin[MAX_ELSP_QLEN];
 	unsigned int n, i;
 	const intel_ctx_t *ctx[3];
+	uint64_t ahnd[3], result_offset;
 
 	/* On each engine, insert
 	 * [NOISE] spinner,
@@ -1735,21 +1933,26 @@ static void preempt_self(int fd, const intel_ctx_cfg_t *cfg,
 
 	ctx[NOISE] = intel_ctx_create(fd, cfg);
 	ctx[HI] = intel_ctx_create(fd, cfg);
+	ahnd[NOISE] = get_reloc_ahnd(fd, ctx[NOISE]->id);
+	ahnd[HI] = get_reloc_ahnd(fd, ctx[HI]->id);
+	result_offset = get_offset(ahnd[HI], result, 4096, 0);
 
 	n = 0;
 	gem_context_set_priority(fd, ctx[HI]->id, MIN_PRIO);
 	for_each_ctx_cfg_engine(fd, cfg, e) {
 		spin[n] = __igt_spin_new(fd,
+					 .ahnd = ahnd[NOISE],
 					 .ctx = ctx[NOISE],
 					 .engine = e->flags);
-		store_dword(fd, ctx[HI], e->flags,
-			    result, (n + 1)*sizeof(uint32_t), n + 1,
+		store_dword2(fd, ahnd[HI], ctx[HI], e->flags,
+			    result, result_offset,
+			    (n + 1)*sizeof(uint32_t), n + 1,
 			    I915_GEM_DOMAIN_RENDER);
 		n++;
 	}
 	gem_context_set_priority(fd, ctx[HI]->id, MAX_PRIO);
-	store_dword(fd, ctx[HI], ring,
-		    result, (n + 1)*sizeof(uint32_t), n + 1,
+	store_dword2(fd, ahnd[HI], ctx[HI], ring,
+		    result, result_offset, (n + 1)*sizeof(uint32_t), n + 1,
 		    I915_GEM_DOMAIN_RENDER);
 
 	gem_set_domain(fd, result, I915_GEM_DOMAIN_GTT, 0);
@@ -1767,6 +1970,8 @@ static void preempt_self(int fd, const intel_ctx_cfg_t *cfg,
 
 	intel_ctx_destroy(fd, ctx[NOISE]);
 	intel_ctx_destroy(fd, ctx[HI]);
+	put_ahnd(ahnd[NOISE]);
+	put_ahnd(ahnd[HI]);
 
 	gem_close(fd, result);
 }
@@ -1777,25 +1982,29 @@ static void preemptive_hang(int fd, const intel_ctx_cfg_t *cfg,
 	igt_spin_t *spin[MAX_ELSP_QLEN];
 	igt_hang_t hang;
 	const intel_ctx_t *ctx[2];
+	uint64_t ahnd_hi, ahnd_lo;
 
 	/* Set a fast timeout to speed the test up (if available) */
 	set_preempt_timeout(fd, e, 150);
 
 	ctx[HI] = intel_ctx_create(fd, cfg);
 	gem_context_set_priority(fd, ctx[HI]->id, MAX_PRIO);
+	ahnd_hi = get_reloc_ahnd(fd, ctx[HI]->id);
 
 	for (int n = 0; n < ARRAY_SIZE(spin); n++) {
 		ctx[LO] = intel_ctx_create(fd, cfg);
 		gem_context_set_priority(fd, ctx[LO]->id, MIN_PRIO);
+		ahnd_lo = get_reloc_ahnd(fd, ctx[LO]->id);
 
 		spin[n] = __igt_spin_new(fd,
+					 .ahnd = ahnd_lo,
 					 .ctx = ctx[LO],
 					 .engine = e->flags);
 
 		intel_ctx_destroy(fd, ctx[LO]);
 	}
 
-	hang = igt_hang_ctx(fd, ctx[HI]->id, e->flags, 0);
+	hang = igt_hang_ctx_with_ahnd(fd, ahnd_hi, ctx[HI]->id, e->flags, 0);
 	igt_post_hang_ring(fd, hang);
 
 	for (int n = 0; n < ARRAY_SIZE(spin); n++) {
@@ -1803,11 +2012,14 @@ static void preemptive_hang(int fd, const intel_ctx_cfg_t *cfg,
 		 * This is subject to change as the scheduler evolves. The test should
 		 * be updated to reflect such changes.
 		 */
+		ahnd_lo = spin[n]->ahnd;
 		igt_assert(gem_bo_busy(fd, spin[n]->handle));
 		igt_spin_free(fd, spin[n]);
+		put_ahnd(ahnd_lo);
 	}
 
 	intel_ctx_destroy(fd, ctx[HI]);
+	put_ahnd(ahnd_hi);
 }
 
 static void deep(int fd, const intel_ctx_cfg_t *cfg,
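
preemptive_hang() destroys each low-priority context while its spinner is still queued, so the allocator handle has to outlive the context; it is recovered from the spinner itself during cleanup. The ordering that matters, sketched for a single spinner:

	/* The spinner still holds the VM reservation: free it first,
	 * then drop the allocator handle it was created against. */
	uint64_t ahnd = spin->ahnd;

	igt_spin_free(fd, spin);
	put_ahnd(ahnd);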
@@ -1941,12 +2153,14 @@ static void wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 	const intel_ctx_t **ctx;
 	unsigned int count;
 	int fence;
+	uint64_t ahnd = get_reloc_ahnd(fd, 0), result_offset;
 
 	ctx = malloc(sizeof(*ctx)*MAX_CONTEXTS);
 	for (int n = 0; n < MAX_CONTEXTS; n++)
 		ctx[n] = intel_ctx_create(fd, cfg);
 
 	result = gem_create(fd, 4*MAX_CONTEXTS);
+	result_offset = get_offset(ahnd, result, 4 * MAX_CONTEXTS, 0);
 
 	fence = igt_cork_plug(&cork, fd);
 
@@ -1955,14 +2169,15 @@ static void wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 	     igt_seconds_elapsed(&tv) < 5 && count < ring_size;
 	     count++) {
 		for (int n = 0; n < MAX_CONTEXTS; n++) {
-			store_dword_fenced(fd, ctx[n], ring, result, 4*n, ctx[n]->id,
-					   fence, I915_GEM_DOMAIN_INSTRUCTION);
+			store_dword_fenced2(fd, ahnd, ctx[n], ring,
+					    result, result_offset, 4*n, ctx[n]->id,
+					    fence, I915_GEM_DOMAIN_INSTRUCTION);
 		}
 	}
 	igt_info("Submitted %d requests over %d contexts in %.1fms\n",
 		 count, MAX_CONTEXTS, igt_nsec_elapsed(&tv) * 1e-6);
 
-	unplug_show_queue(fd, &cork, cfg, ring);
+	unplug_show_queue2(fd, &cork, cfg, ring);
 	close(fence);
 
 	__sync_read_u32_count(fd, result, result_read, sizeof(result_read));
@@ -1974,6 +2189,7 @@ static void wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 
 	gem_close(fd, result);
 	free(ctx);
+	put_ahnd(ahnd);
 }
 
 static void reorder_wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
@@ -1989,8 +2205,11 @@ static void reorder_wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 	IGT_CORK_FENCE(cork);
 	uint32_t *expected;
 	int fence;
+	uint64_t ahnd = get_reloc_ahnd(fd, 0), result_offset;
+	unsigned int sz = ALIGN(ring_size * 64, 4096);
 
 	result = gem_create(fd, 4096);
+	result_offset = get_offset(ahnd, result, 4096, 0);
 	target = gem_create(fd, 4096);
 	fence = igt_cork_plug(&cork, fd);
 
@@ -2017,8 +2236,14 @@ static void reorder_wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 	execbuf.flags |= I915_EXEC_FENCE_IN;
 	execbuf.rsvd2 = fence;
 
+	if (ahnd) {
+		obj[0].flags |= EXEC_OBJECT_PINNED;
+		obj[0].offset = result_offset;
+		obj[1].flags |= EXEC_OBJECT_PINNED;
+		obj[1].relocation_count = 0;
+	}
+
 	for (int n = 0, x = 1; n < ARRAY_SIZE(priorities); n++, x++) {
-		unsigned int sz = ALIGN(ring_size * 64, 4096);
 		uint32_t *batch;
 		const intel_ctx_t *tmp_ctx;
 
@@ -2027,6 +2252,11 @@ static void reorder_wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 		execbuf.rsvd1 = tmp_ctx->id;
 
 		obj[1].handle = gem_create(fd, sz);
+		if (ahnd) {
+			obj[1].offset = get_offset(ahnd, obj[1].handle, sz, 0);
+			reloc.presumed_offset = obj[1].offset;
+		}
+
 		batch = gem_mmap__device_coherent(fd, obj[1].handle, 0, sz, PROT_WRITE);
 		gem_set_domain(fd, obj[1].handle, I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
 
@@ -2067,7 +2297,7 @@ static void reorder_wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring)
 		intel_ctx_destroy(fd, tmp_ctx);
 	}
 
-	unplug_show_queue(fd, &cork, cfg, ring);
+	unplug_show_queue2(fd, &cork, cfg, ring);
 	close(fence);
 
 	__sync_read_u32_count(fd, result, result_read, sizeof(result_read));
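
With softpin the kernel never patches the stores in reorder_wide(), so obj[1] is flagged EXEC_OBJECT_PINNED once up front and the per-iteration work reduces to reserving a VA for each fresh batch and seeding the detached relocation entry's presumed_offset with it, presumably so the store addresses derived from the entry match the pinned location:

	obj[1].handle = gem_create(fd, sz);
	if (ahnd) {
		/* New batch, new VA; keep the stale entry coherent. */
		obj[1].offset = get_offset(ahnd, obj[1].handle, sz, 0);
		reloc.presumed_offset = obj[1].offset;
	}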
@@ -2455,6 +2685,7 @@ static void test_pi_iova(int i915, const intel_ctx_cfg_t *cfg,
 	pthread_t hi, lo;
 	char poison[4096];
 	int ufd;
+	uint64_t ahnd = get_reloc_ahnd(i915, 0);
 
 	/*
 	 * In this scenario, we have a pair of contending contexts that
@@ -2521,7 +2752,7 @@ static void test_pi_iova(int i915, const intel_ctx_cfg_t *cfg,
 	 * the local tasklet will not run until after all signals have been
 	 * delivered... but another tasklet might).
 	 */
-	spin = igt_spin_new(i915, .engine = engine);
+	spin = igt_spin_new(i915, .ahnd = ahnd, .engine = engine);
 	for (int i = 0; i < MAX_ELSP_QLEN; i++) {
 		const intel_ctx_t *ctx = create_highest_priority(i915, cfg);
 		spin->execbuf.rsvd1 = ctx->id;
@@ -2554,6 +2785,7 @@ static void test_pi_iova(int i915, const intel_ctx_cfg_t *cfg,
 	pthread_mutex_unlock(&t.mutex);
 	igt_debugfs_dump(i915, "i915_engine_info");
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 
 	pthread_join(hi, NULL);
 	pthread_join(lo, NULL);
@@ -2703,9 +2935,16 @@ static uint32_t read_ctx_timestamp(int i915, const intel_ctx_t *ctx,
 #define RUNTIME (base + 0x3a8)
 	uint32_t *map, *cs;
 	uint32_t ts;
+	uint64_t ahnd = get_reloc_ahnd(i915, ctx->id);
 
 	igt_require(base);
 
+	if (ahnd) {
+		obj.offset = get_offset(ahnd, obj.handle, 4096, 0);
+		obj.flags |= EXEC_OBJECT_PINNED | EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
+		obj.relocation_count = 0;
+	}
+
 	cs = map = gem_mmap__device_coherent(i915, obj.handle,
 					     0, 4096, PROT_WRITE);
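
read_ctx_timestamp() additionally sets EXEC_OBJECT_SUPPORTS_48B_ADDRESS: by default the kernel keeps objects within the first 4 GiB of the ppGTT, and allocator offsets are not guaranteed to fall in that range, so a pinned object has to opt in to the full address space. The no-reloc setup is:

	if (ahnd) {
		obj.offset = get_offset(ahnd, obj.handle, 4096, 0);
		/* The reserved VA may lie above 4 GiB. */
		obj.flags |= EXEC_OBJECT_PINNED |
			     EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
		obj.relocation_count = 0;
	}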
 
@@ -2741,11 +2980,14 @@ static void fairslice(int i915, const intel_ctx_cfg_t *cfg,
 	double threshold;
 	const intel_ctx_t *ctx[3];
 	uint32_t ts[3];
+	uint64_t ahnd;
 
 	for (int i = 0; i < ARRAY_SIZE(ctx); i++) {
 		ctx[i] = intel_ctx_create(i915, cfg);
 		if (spin == NULL) {
+			ahnd = get_reloc_ahnd(i915, ctx[i]->id);
 			spin = __igt_spin_new(i915,
+					      .ahnd = ahnd,
 					      .ctx = ctx[i],
 					      .engine = e->flags,
 					      .flags = flags);
@@ -2770,6 +3012,7 @@ static void fairslice(int i915, const intel_ctx_cfg_t *cfg,
 	for (int i = 0; i < ARRAY_SIZE(ctx); i++)
 		intel_ctx_destroy(i915, ctx[i]);
 	igt_spin_free(i915, spin);
+	put_ahnd(ahnd);
 
 	/*
 	 * If we imagine that the timeslices are randomly distributed to
diff --git a/tests/i915/gem_exec_whisper.c b/tests/i915/gem_exec_whisper.c
index d1640920..104f0a16 100644
--- a/tests/i915/gem_exec_whisper.c
+++ b/tests/i915/gem_exec_whisper.c
@@ -89,6 +89,8 @@ struct hang {
 	struct drm_i915_gem_relocation_entry reloc;
 	struct drm_i915_gem_execbuffer2 execbuf;
 	int fd;
+	uint64_t ahnd;
+	uint64_t bb_offset;
 };
 
 static void init_hang(struct hang *h, int fd, const intel_ctx_cfg_t *cfg)
@@ -104,8 +106,10 @@ static void init_hang(struct hang *h, int fd, const intel_ctx_cfg_t *cfg)
 	if (gem_has_contexts(fd)) {
 		h->ctx = intel_ctx_create(h->fd, cfg);
 		h->execbuf.rsvd1 = h->ctx->id;
+		h->ahnd = get_reloc_ahnd(fd, h->ctx->id);
 	} else {
 		h->ctx = NULL;
+		h->ahnd = get_reloc_ahnd(fd, 0);
 	}
 
 	memset(&h->execbuf, 0, sizeof(h->execbuf));
@@ -114,9 +118,12 @@ static void init_hang(struct hang *h, int fd, const intel_ctx_cfg_t *cfg)
 
 	memset(&h->obj, 0, sizeof(h->obj));
 	h->obj.handle = gem_create(h->fd, 4096);
+	h->bb_offset = get_offset(h->ahnd, h->obj.handle, 4096, 0);
+	if (h->ahnd) {
+		h->obj.offset = h->bb_offset;
+		h->obj.flags |= EXEC_OBJECT_PINNED;
+	}
 
 	h->obj.relocs_ptr = to_user_pointer(&h->reloc);
-	h->obj.relocation_count = 1;
+	h->obj.relocation_count = !h->ahnd ? 1 : 0;
 	memset(&h->reloc, 0, sizeof(h->reloc));
 
 	batch = gem_mmap__cpu(h->fd, h->obj.handle, 0, 4096, PROT_WRITE);
@@ -138,8 +145,8 @@ static void init_hang(struct hang *h, int fd, const intel_ctx_cfg_t *cfg)
 	batch[i] = MI_BATCH_BUFFER_START;
 	if (gen >= 8) {
 		batch[i] |= 1 << 8 | 1;
-		batch[++i] = 0;
-		batch[++i] = 0;
+		batch[++i] = h->bb_offset;
+		batch[++i] = h->bb_offset >> 32;
 	} else if (gen >= 6) {
 		batch[i] |= 1 << 8;
 		batch[++i] = 0;
@@ -167,6 +174,8 @@ static void submit_hang(struct hang *h, unsigned *engines, int nengine, unsigned
 
 static void fini_hang(struct hang *h)
 {
+	put_offset(h->ahnd, h->bb_offset);
+	put_ahnd(h->ahnd);
 	intel_ctx_destroy(h->fd, h->ctx);
 	close(h->fd);
 }
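
The hang batch is the instructive case: with relocations the MI_BATCH_BUFFER_START target dwords were left as 0 for the kernel to patch, but a pinned batch must loop back to its own pre-reserved address, split across two dwords on gen8+, and the execbuf object must be pinned at that same address for the jump to land. Assuming bb_offset holds the VA returned by get_offset():

	batch[i] = MI_BATCH_BUFFER_START | 1 << 8 | 1;	/* gen8+ */
	batch[++i] = bb_offset;		/* target VA, low 32 bits */
	batch[++i] = bb_offset >> 32;	/* target VA, high 32 bits */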
@@ -206,6 +215,7 @@ static void whisper(int fd, const intel_ctx_t *ctx,
 	int i, n, loc;
 	int debugfs;
 	int nchild;
+	bool has_relocs = gem_has_relocations(fd);
 
 	if (flags & PRIORITY) {
 		igt_require(gem_scheduler_enabled(fd));
@@ -264,7 +274,7 @@ static void whisper(int fd, const intel_ctx_t *ctx,
 		memset(&store, 0, sizeof(store));
 		store.handle = gem_create(fd, 4096);
 		store.relocs_ptr = to_user_pointer(&reloc);
-		store.relocation_count = 1;
+		store.relocation_count = has_relocs ? 1 : 0;
 
 		memset(&reloc, 0, sizeof(reloc));
 		reloc.offset = sizeof(uint32_t);
@@ -288,12 +298,18 @@ static void whisper(int fd, const intel_ctx_t *ctx,
 			execbuf.flags |= I915_EXEC_NO_RELOC;
 			if (gen < 6)
 				execbuf.flags |= I915_EXEC_SECURE;
 			execbuf.rsvd1 = ctx->id;
 			igt_require(__gem_execbuf(fd, &execbuf) == 0);
 			scratch = tmp[0];
 			store = tmp[1];
 		}
 
+		if (!has_relocs) {
+			scratch.flags |= EXEC_OBJECT_PINNED;
+			store.flags |= EXEC_OBJECT_PINNED;
+		}
+
 		i = 0;
 		batch[i] = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
 		if (gen >= 8) {
@@ -355,7 +371,9 @@ static void whisper(int fd, const intel_ctx_t *ctx,
 			inter[n].presumed_offset = old_offset;
 			inter[n].delta = loc;
 			batches[n].relocs_ptr = to_user_pointer(&inter[n]);
-			batches[n].relocation_count = 1;
+			batches[n].relocation_count = has_relocs ? 1 : 0;
+			if (!has_relocs)
+				batches[n].flags |= EXEC_OBJECT_PINNED;
 			gem_write(fd, batches[n].handle, 0, batch, sizeof(batch));
 
 			old_offset = batches[n].offset;
@@ -396,7 +414,9 @@ static void whisper(int fd, const intel_ctx_t *ctx,
 				tmp[1] = store;
 				verify_reloc(fd, store.handle, &reloc);
 				execbuf.buffers_ptr = to_user_pointer(tmp);
 				gem_execbuf(fd, &execbuf);
 				igt_assert_eq_u64(reloc.presumed_offset, tmp[0].offset);
 				if (flags & SYNC)
 					gem_sync(fd, tmp[0].handle);
@@ -450,7 +470,7 @@ static void whisper(int fd, const intel_ctx_t *ctx,
 						gem_sync(this_fd, batches[n-1].handle);
 					relocations += inter[n].presumed_offset != old_offset;
 
-					batches[n-1].relocation_count = 1;
+					batches[n-1].relocation_count = has_relocs ? 1 : 0;
 					batches[n-1].flags &= ~EXEC_OBJECT_WRITE;
 
 					if (this_fd != fd) {
@@ -468,6 +488,8 @@ static void whisper(int fd, const intel_ctx_t *ctx,
 				tmp[0] = tmp[1];
 				tmp[0].relocation_count = 0;
 				tmp[0].flags = EXEC_OBJECT_WRITE;
+				if (!has_relocs)
+					tmp[0].flags |= EXEC_OBJECT_PINNED;
 				reloc_migrations += tmp[0].offset != inter[0].presumed_offset;
 				tmp[0].offset = inter[0].presumed_offset;
 				old_offset = tmp[0].offset;
@@ -478,6 +500,7 @@ static void whisper(int fd, const intel_ctx_t *ctx,
 					reloc_interruptions++;
 					inter[0].presumed_offset = tmp[0].offset;
 				}
 				igt_assert_eq_u64(inter[0].presumed_offset, tmp[0].offset);
 				relocations += inter[0].presumed_offset != old_offset;
 				batches[0] = tmp[1];
@@ -487,7 +510,7 @@ static void whisper(int fd, const intel_ctx_t *ctx,
 				igt_assert(tmp[0].flags & EXEC_OBJECT_WRITE);
 				igt_assert_eq_u64(reloc.presumed_offset, tmp[0].offset);
 				igt_assert(tmp[1].relocs_ptr == to_user_pointer(&reloc));
-				tmp[1].relocation_count = 1;
+				tmp[1].relocation_count = has_relocs ? 1 : 0;
 				tmp[1].flags &= ~EXEC_OBJECT_WRITE;
 				verify_reloc(fd, store.handle, &reloc);
 				gem_execbuf(fd, &execbuf);
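
gem_exec_whisper keeps its relocation entries around on the no-reloc path but detaches them, so every buffer that is handed from engine to engine must stay pinned wherever its recorded offset says it is. The recurring guard throughout whisper() is:

	bool has_relocs = gem_has_relocations(fd);

	/* Either the kernel patches through the entry, or the object is
	 * pinned at the address the entry already presumes. */
	store.relocation_count = has_relocs ? 1 : 0;
	if (!has_relocs)
		store.flags |= EXEC_OBJECT_PINNED;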
@@ -591,6 +614,7 @@ igt_main
 		ctx = intel_ctx_create_all_physical(fd);
 
 		igt_fork_hang_detector(fd);
+		intel_allocator_multiprocess_start();
 	}
 
 	for (const struct mode *m = modes; m->name; m++) {
@@ -631,6 +655,7 @@ igt_main
 	}
 
 	igt_fixture {
+		intel_allocator_multiprocess_stop();
 		intel_ctx_destroy(fd, ctx);
 		close(fd);
 	}
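
whisper() forks children and may reopen the device, so the fixture switches the allocator into message-passing mode for the whole run; children then query the parent's allocator instead of instantiating their own. A sketch of the required bracketing, using the IGT fork helpers:

	intel_allocator_multiprocess_start();

	igt_fork(child, nchild) {
		uint64_t ahnd = get_reloc_ahnd(fd, 0);

		/* submit work using allocator-provided offsets */

		put_ahnd(ahnd);
	}
	igt_waitchildren();

	intel_allocator_multiprocess_stop();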
diff --git a/tests/intel-ci/fast-feedback.testlist b/tests/intel-ci/fast-feedback.testlist
index fa5006d2..cac694b6 100644
--- a/tests/intel-ci/fast-feedback.testlist
+++ b/tests/intel-ci/fast-feedback.testlist
@@ -22,7 +22,6 @@ igt@gem_exec_fence@basic-busy
 igt@gem_exec_fence@basic-wait
 igt@gem_exec_fence@basic-await
 igt@gem_exec_fence@nb-await
-igt@gem_exec_gttfill@basic
 igt@gem_exec_parallel@engines
 igt@gem_exec_store@basic
 igt@gem_exec_suspend@basic-s0
-- 
2.31.1


