[PATCH v3.1] drm/xe: Implement VM snapshot support for BO's and userptr
Souza, Jose
jose.souza at intel.com
Tue Feb 6 20:08:32 UTC 2024
On Mon, 2024-02-05 at 12:35 -0800, José Roberto de Souza wrote:
> On Mon, 2024-02-05 at 08:32 -0800, José Roberto de Souza wrote:
> > On Fri, 2024-02-02 at 23:45 +0100, Maarten Lankhorst wrote:
> > > Since we cannot immediately capture the BOs and userptrs, perform the
> > > capture in two stages. The immediate stage takes a reference to each BO
> > > and userptr, while a delayed worker captures the contents and then drops
> > > the reference.
> > >
> > > This is required because in signaling context, no locks can be taken, no
> > > memory can be allocated, and no waits on userspace can be performed.
> > >
> > > With the delayed worker, all of this can be performed very easily,
> > > without having to resort to hacks.
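For reference, the two-stage split described above boils down to roughly the
following pattern (a loose sketch with made-up 'snapshot'/'capture_*' names;
the real entry points are xe_vm_snapshot_capture() and
xe_vm_snapshot_capture_delayed() in the patch below):

    /*
     * Stage 1: runs in dma-fence signalling context, so no blocking
     * allocations and no locks that may wait on userspace.  Only take
     * references and schedule the worker.
     */
    static void capture_immediate(struct snapshot *ss, struct xe_bo *bo)
    {
            ss->bo = xe_bo_get(bo);                 /* refcount bump only */
            queue_work(system_unbound_wq, &ss->work);
    }

    /*
     * Stage 2: ordinary worker context, where allocating memory, taking
     * the reservation lock and copying the BO/userptr contents is allowed.
     */
    static void capture_delayed(struct work_struct *work)
    {
            struct snapshot *ss = container_of(work, struct snapshot, work);

            ss->data = kvmalloc(ss->len, GFP_KERNEL);
            /* vmap the BO (or copy_from_user() for a userptr) into ss->data */
            xe_bo_put(ss->bo);                      /* drop the stage-1 reference */
    }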
> > >
> > > Changes since v1:
> > > - Fix crash on NULL captured vm.
> > > - Use ascii85_encode to capture BO contents and save some space.
> > > - Add length to coredump output for each captured area.
> > > Changes since v2:
> > > - Dump each mapping on their own line, to simplify tooling.
> > > - Fix null pointer deref in xe_vm_snapshot_free.
> >
> >
> > First crash dump with piglit already got:
> >
> >
> > [ 65.097735] xe 0000:00:02.0: [drm:intel_pps_vdd_off_sync_unlocked [xe]] [ENCODER:307:DDI A/PHY A] PPS 0 PP_STATUS: 0x80000008 PP_CONTROL:
> > 0x00000067
> > [ 65.238975] xe 0000:00:02.0: [drm:intel_power_well_disable [xe]] disabling PW_3
> > [ 65.239050] xe 0000:00:02.0: [drm:intel_power_well_disable [xe]] disabling PW_2
> > [ 305.225363] loop0: detected capacity change from 0 to 8
> > [ 430.689574] xe 0000:00:02.0: [drm] Timedout job: seqno=4294967169, guc_id=2, flags=0x8
> > [ 430.689933] ------------[ cut here ]------------
> > [ 430.689958] DEBUG_LOCKS_WARN_ON(lock->magic != lock)
> > [ 430.689964] WARNING: CPU: 6 PID: 94 at kernel/locking/mutex.c:587 __mutex_lock+0x50d/0xb80
> > [ 430.690007] Modules linked in: snd_hda_codec_hdmi snd_ctl_led ledtrig_audio snd_hda_codec_realtek snd_hda_codec_generic xe drm_ttm_helper gpu_sched
> > drm_suballoc_helper drm_gpuvm drm_exec i2c_algo_bit drm_buddy drm_display_helper ttm x86_pkg_temp_thermal mei_pxp mei_hdcp coretemp snd_hda_intel
> > wmi_bmof crct10dif_pclmul snd_intel_dspcfg crc32_pclmul snd_hda_codec e1000e video ghash_clmulni_intel snd_hwdep kvm_intel snd_hda_core ptp i2c_i801
> > snd_pcm pps_core i2c_smbus mei_me mei intel_pmc_core intel_vsec wmi pmt_telemetry pmt_class fuse
> > [ 430.690164] CPU: 6 PID: 94 Comm: kworker/u16:2 Not tainted 6.8.0-rc3-zeh-xe+ #1222
> > [ 430.690188] Hardware name: Dell Inc. Latitude 5420/01M3M4, BIOS 1.27.0 03/17/2023
> > [ 430.690212] Workqueue: gt-ordered-wq drm_sched_job_timedout [gpu_sched]
> > [ 430.690239] RIP: 0010:__mutex_lock+0x50d/0xb80
> > [ 430.690254] Code: ff 85 c0 0f 84 7d fb ff ff 8b 15 b2 ec ba 00 85 d2 0f 85 6f fb ff ff 48 c7 c6 ca 46 3a 82 48 c7 c7 73 d6 39 82 e8 73 19 40 ff
> > <0f> 0b e9 55 fb ff ff 31 c9 31 d2 4c 89 e7 e8 80 7e 47 ff 84 c0 0f
> > [ 430.690298] RSP: 0018:ffffc900005ebc60 EFLAGS: 00010282
> > [ 430.690313] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
> > [ 430.690332] RDX: 0000000000000002 RSI: 0000000000000027 RDI: 00000000ffffffff
> > [ 430.690356] RBP: ffffc900005ebcf0 R08: 00000000fffeffff R09: 0000000000000001
> > [ 430.690375] R10: 00000000fffeffff R11: ffff888287080000 R12: ffff88824998d4a8
> > [ 430.690394] R13: 0000000000000000 R14: ffff88824998d038 R15: ffff8881109b3e00
> > [ 430.690417] FS: 0000000000000000(0000) GS:ffff888287b00000(0000) knlGS:0000000000000000
> > [ 430.690438] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > [ 430.690455] CR2: 00007f5ccd99de3e CR3: 000000000564a003 CR4: 0000000000770ef0
> > [ 430.690475] PKRU: 55555554
> > [ 430.690485] Call Trace:
> > [ 430.690495] <TASK>
> > [ 430.690504] ? __mutex_lock+0x50d/0xb80
> > [ 430.690518] ? __warn+0x7c/0x170
> > [ 430.690538] ? __mutex_lock+0x50d/0xb80
> > [ 430.690575] ? report_bug+0x189/0x1c0
> > [ 430.690597] ? handle_bug+0x36/0x70
> > [ 430.690616] ? exc_invalid_op+0x13/0x60
> > [ 430.690637] ? asm_exc_invalid_op+0x16/0x20
> > [ 430.690661] ? __mutex_lock+0x50d/0xb80
> > [ 430.690680] ? __slab_alloc.isra.0+0x4d/0x90
> > [ 430.690703] ? __slab_alloc.isra.0+0x5a/0x90
> > [ 430.690722] ? xe_vm_snapshot_capture+0x35/0x1f0 [xe]
> > [ 430.690840] ? rcu_is_watching+0xd/0x40
> > [ 430.690863] ? __kmalloc+0x2bd/0x400
> > [ 430.690883] ? xe_vm_snapshot_capture+0x35/0x1f0 [xe]
> > [ 430.690984] xe_vm_snapshot_capture+0x35/0x1f0 [xe]
> > [ 430.691070] ? xe_sched_job_snapshot_capture+0x64/0x80 [xe]
> > [ 430.691155] xe_devcoredump+0x1b9/0x2e0 [xe]
> > [ 430.691226] guc_exec_queue_timedout_job+0x98/0x5a0 [xe]
> > [ 430.691305] drm_sched_job_timedout+0x77/0xe0 [gpu_sched]
> > [ 430.691337] ? process_one_work+0x18d/0x4d0
> > [ 430.691360] process_one_work+0x1f4/0x4d0
> > [ 430.691378] worker_thread+0x1d8/0x3c0
> > [ 430.691396] ? rescuer_thread+0x390/0x390
> > [ 430.691414] kthread+0xfb/0x130
> > [ 430.691429] ? kthread_complete_and_exit+0x20/0x20
> > [ 430.691451] ret_from_fork+0x28/0x40
> > [ 430.691470] ? kthread_complete_and_exit+0x20/0x20
> > [ 430.691495] ret_from_fork_asm+0x11/0x20
> > [ 430.691514] </TASK>
> > [ 430.691523] irq event stamp: 6077281
> > [ 430.691537] hardirqs last enabled at (6077281): [<ffffffff81d2dcaa>] _raw_spin_unlock_irqrestore+0x4a/0x70
> > [ 430.691598] hardirqs last disabled at (6077280): [<ffffffff81d2da7a>] _raw_spin_lock_irqsave+0x4a/0x50
> > [ 430.691640] softirqs last enabled at (6075212): [<ffffffff81131962>] irq_exit_rcu+0x82/0xe0
> > [ 430.691679] softirqs last disabled at (6075205): [<ffffffff81131962>] irq_exit_rcu+0x82/0xe0
> > [ 430.691722] ---[ end trace 0000000000000000 ]---
> > [ 430.691802] xe 0000:00:02.0: [drm] Xe device coredump has been created
>
> Hold off a bit on this crash: the VM is being destroyed by Iris before drm_exec is done.
> I think fixing that will fix this warning.
Yep, that was the case; it will be fixed by https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27500
>
> >
> > And again, this is missing the 'hw status', 'hw context' and 'GuC log buffer' dumps (using the i915 error dump names).
> >
If this could at least include the 'hw status' and 'hw context' buffers in the dump and address the other comments that I left, we would be ready to merge this.
The 'GuC log buffer' can be added later.
> >
> > >
> > > Signed-off-by: Maarten Lankhorst <maarten.lankhorst at linux.intel.com>
> > > ---
> > > drivers/gpu/drm/xe/xe_devcoredump.c | 33 ++++-
> > > drivers/gpu/drm/xe/xe_devcoredump_types.h | 8 ++
> > > drivers/gpu/drm/xe/xe_vm.c | 162 ++++++++++++++++++++++
> > > drivers/gpu/drm/xe/xe_vm.h | 5 +
> > > 4 files changed, 206 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_devcoredump.c
> > > index 08d3f6cb72292..3e863e51b9d4d 100644
> > > --- a/drivers/gpu/drm/xe/xe_devcoredump.c
> > > +++ b/drivers/gpu/drm/xe/xe_devcoredump.c
> > > @@ -17,6 +17,7 @@
> > > #include "xe_guc_submit.h"
> > > #include "xe_hw_engine.h"
> > > #include "xe_sched_job.h"
> > > +#include "xe_vm.h"
> > >
> > > /**
> > > * DOC: Xe device coredump
> > > @@ -59,12 +60,22 @@ static struct xe_guc *exec_queue_to_guc(struct xe_exec_queue *q)
> > > return &q->gt->uc.guc;
> > > }
> > >
> > > +static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
> > > +{
> > > + struct xe_devcoredump_snapshot *ss = container_of(work, typeof(*ss), work);
> > > +
> > > + xe_force_wake_get(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
> > > + if (ss->vm)
> > > + xe_vm_snapshot_capture_delayed(ss->vm);
> > > + xe_force_wake_put(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
> > > +}
> > > +
> > > static ssize_t xe_devcoredump_read(char *buffer, loff_t offset,
> > > size_t count, void *data, size_t datalen)
> > > {
> > > struct xe_devcoredump *coredump = data;
> > > struct xe_device *xe = coredump_to_xe(coredump);
> > > - struct xe_devcoredump_snapshot *ss;
> > > + struct xe_devcoredump_snapshot *ss = &coredump->snapshot;
> > > struct drm_printer p;
> > > struct drm_print_iterator iter;
> > > struct timespec64 ts;
> > > @@ -74,12 +85,14 @@ static ssize_t xe_devcoredump_read(char *buffer, loff_t offset,
> > > if (!data || !coredump_to_xe(coredump))
> > > return -ENODEV;
> > >
> > > + /* Ensure delayed work is captured before continuing */
> > > + flush_work(&ss->work);
> > > +
> > > iter.data = buffer;
> > > iter.offset = 0;
> > > iter.start = offset;
> > > iter.remain = count;
> > >
> > > - ss = &coredump->snapshot;
> > > p = drm_coredump_printer(&iter);
> > >
> > > drm_printf(&p, "**** Xe Device Coredump ****\n");
> > > @@ -104,6 +117,10 @@ static ssize_t xe_devcoredump_read(char *buffer, loff_t offset,
> > > if (coredump->snapshot.hwe[i])
> > > xe_hw_engine_snapshot_print(coredump->snapshot.hwe[i],
> > > &p);
> > > + if (coredump->snapshot.vm) {
> > > + drm_printf(&p, "\n**** VM state ****\n");
> > > + xe_vm_snapshot_print(coredump->snapshot.vm, &p);
> > > + }
> > >
> > > return count - iter.remain;
> > > }
> > > @@ -117,12 +134,16 @@ static void xe_devcoredump_free(void *data)
> > > if (!data || !coredump_to_xe(coredump))
> > > return;
> > >
> > > + cancel_work_sync(&coredump->snapshot.work);
> > > +
> > > xe_guc_ct_snapshot_free(coredump->snapshot.ct);
> > > xe_guc_exec_queue_snapshot_free(coredump->snapshot.ge);
> > > xe_sched_job_snapshot_free(coredump->snapshot.job);
> > > for (i = 0; i < XE_NUM_HW_ENGINES; i++)
> > > if (coredump->snapshot.hwe[i])
> > > xe_hw_engine_snapshot_free(coredump->snapshot.hwe[i]);
> > > + xe_vm_snapshot_free(coredump->snapshot.vm);
> > > + memset(&coredump->snapshot, 0, sizeof(coredump->snapshot));
> > >
> > > coredump->captured = false;
> > > drm_info(&coredump_to_xe(coredump)->drm,
> > > @@ -145,6 +166,9 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
> > > ss->snapshot_time = ktime_get_real();
> > > ss->boot_time = ktime_get_boottime();
> > >
> > > + ss->gt = q->gt;
> > > + INIT_WORK(&ss->work, xe_devcoredump_deferred_snap_work);
> > > +
> > > cookie = dma_fence_begin_signalling();
> > > for (i = 0; q->width > 1 && i < XE_HW_ENGINE_MAX_INSTANCE;) {
> > > if (adj_logical_mask & BIT(i)) {
> > > @@ -160,6 +184,7 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
> > > coredump->snapshot.ct = xe_guc_ct_snapshot_capture(&guc->ct, true);
> > > coredump->snapshot.ge = xe_guc_exec_queue_snapshot_capture(job);
> > > coredump->snapshot.job = xe_sched_job_snapshot_capture(job);
> > > + coredump->snapshot.vm = xe_vm_snapshot_capture(q->vm);
> > >
> > > for_each_hw_engine(hwe, q->gt, id) {
> > > if (hwe->class != q->hwe->class ||
> > > @@ -170,6 +195,9 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
> > > coredump->snapshot.hwe[id] = xe_hw_engine_snapshot_capture(hwe);
> > > }
> > >
> > > + if (ss->vm)
> > > + queue_work(system_unbound_wq, &ss->work);
> > > +
> > > xe_force_wake_put(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
> > > dma_fence_end_signalling(cookie);
> > > }
> > > @@ -203,3 +231,4 @@ void xe_devcoredump(struct xe_sched_job *job)
> > > xe_devcoredump_read, xe_devcoredump_free);
> > > }
> > > #endif
> > > +
> > > diff --git a/drivers/gpu/drm/xe/xe_devcoredump_types.h b/drivers/gpu/drm/xe/xe_devcoredump_types.h
> > > index d259119b2c980..b389c1a298e3d 100644
> > > --- a/drivers/gpu/drm/xe/xe_devcoredump_types.h
> > > +++ b/drivers/gpu/drm/xe/xe_devcoredump_types.h
> > > @@ -12,6 +12,7 @@
> > > #include "xe_hw_engine_types.h"
> > >
> > > struct xe_device;
> > > +struct xe_gt;
> > >
> > > /**
> > > * struct xe_devcoredump_snapshot - Crash snapshot
> > > @@ -26,6 +27,11 @@ struct xe_devcoredump_snapshot {
> > > /** @boot_time: Relative boot time so the uptime can be calculated. */
> > > ktime_t boot_time;
> > >
> > > + /** @gt: Affected GT, used by forcewake for delayed capture */
> > > + struct xe_gt *gt;
> > > + /** @work: Workqueue for deferred capture outside of signaling context */
> > > + struct work_struct work;
> > > +
> > > /* GuC snapshots */
> > > /** @ct: GuC CT snapshot */
> > > struct xe_guc_ct_snapshot *ct;
> > > @@ -36,6 +42,8 @@ struct xe_devcoredump_snapshot {
> > > struct xe_hw_engine_snapshot *hwe[XE_NUM_HW_ENGINES];
> > > /** @job: Snapshot of job state */
> > > struct xe_sched_job_snapshot *job;
> > > + /** @vm: Snapshot of VM state */
> > > + struct xe_vm_snapshot *vm;
> > > };
> > >
> > > /**
> > > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > > index 1f0d58bfd1046..6965fe15bcbea 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > @@ -13,6 +13,7 @@
> > > #include <drm/ttm/ttm_execbuf_util.h>
> > > #include <drm/ttm/ttm_tt.h>
> > > #include <drm/xe_drm.h>
> > > +#include <linux/ascii85.h>
> > > #include <linux/delay.h>
> > > #include <linux/kthread.h>
> > > #include <linux/mm.h>
> > > @@ -3267,3 +3268,164 @@ int xe_analyze_vm(struct drm_printer *p, struct xe_vm *vm, int gt_id)
> > >
> > > return 0;
> > > }
> > > +
> > > +struct xe_vm_snapshot {
> > > + unsigned long num_snaps;
> > > + struct {
> > > + uint64_t ofs, bo_ofs;
> > > + unsigned long len;
> > > + struct xe_bo *bo;
> > > + void *data;
> > > + struct mm_struct *mm;
> > > + } snap[];
> > > +};
> > > +
> > > +struct xe_vm_snapshot *xe_vm_snapshot_capture(struct xe_vm *vm)
> > > +{
> > > + unsigned long num_snaps = 0, i;
> > > + struct xe_vm_snapshot *snap = NULL;
> > > + struct drm_gpuva *gpuva;
> > > +
> > > + if (!vm)
> > > + return NULL;
> > > +
> > > + mutex_lock(&vm->snap_mutex);
> > > + drm_gpuvm_for_each_va(gpuva, &vm->gpuvm) {
> > > + if (gpuva->flags & XE_VMA_DUMPABLE)
> > > + num_snaps++;
> > > + }
> > > +
> > > + if (num_snaps)
> > > + snap = kvzalloc(offsetof(struct xe_vm_snapshot, snap[num_snaps]), GFP_NOWAIT);
> > > + if (!snap)
> > > + goto out_unlock;
> > > +
> > > + snap->num_snaps = num_snaps;
> > > + i = 0;
> > > + drm_gpuvm_for_each_va(gpuva, &vm->gpuvm) {
> > > + struct xe_vma *vma = gpuva_to_vma(gpuva);
> > > + struct xe_bo *bo = vma->gpuva.gem.obj ?
> > > + gem_to_xe_bo(vma->gpuva.gem.obj) : NULL;
> > > +
> > > + if (!(gpuva->flags & XE_VMA_DUMPABLE))
> > > + continue;
> > > +
> > > + snap->snap[i].ofs = xe_vma_start(vma);
> > > + snap->snap[i].len = xe_vma_size(vma);
> > > + if (bo) {
> > > + snap->snap[i].bo = xe_bo_get(bo);
> > > + snap->snap[i].bo_ofs = xe_vma_bo_offset(vma);
> > > + } else if (xe_vma_is_userptr(vma)) {
> > > + struct xe_userptr *userptr = &to_userptr_vma(vma)->userptr;
> > > + if (mmget_not_zero(userptr->notifier.mm))
> > > + snap->snap[i].mm = userptr->notifier.mm;
> > > + else
> > > + snap->snap[i].data = ERR_PTR(-EFAULT);
> > > + snap->snap[i].bo_ofs = xe_vma_userptr(vma);
> > > + } else {
> > > + snap->snap[i].data = ERR_PTR(-ENOENT);
> > > + }
> > > + i++;
> > > + }
> > > +
> > > +out_unlock:
> > > + mutex_unlock(&vm->snap_mutex);
> > > + return snap;
> > > +}
> > > +
> > > +void xe_vm_snapshot_capture_delayed(struct xe_vm_snapshot *snap)
> > > +{
> > > + for (int i = 0; i < snap->num_snaps; i++) {
> > > + struct xe_bo *bo = snap->snap[i].bo;
> > > + struct iosys_map src;
> > > + int err;
> > > +
> > > + if (IS_ERR(snap->snap[i].data))
> > > + continue;
> > > +
> > > + snap->snap[i].data = kvmalloc(snap->snap[i].len, GFP_USER);
> > > + if (!snap->snap[i].data) {
> > > + snap->snap[i].data = ERR_PTR(-ENOMEM);
> > > + goto cleanup_bo;
> > > + }
> > > +
> > > + if (bo) {
> > > + dma_resv_lock(bo->ttm.base.resv, NULL);
> > > + err = ttm_bo_vmap(&bo->ttm, &src);
> > > + if (!err) {
> > > + xe_map_memcpy_from(xe_bo_device(bo),
> > > + snap->snap[i].data,
> > > + &src, snap->snap[i].bo_ofs,
> > > + snap->snap[i].len);
> > > + ttm_bo_vunmap(&bo->ttm, &src);
> > > + }
> > > + dma_resv_unlock(bo->ttm.base.resv);
> > > + } else {
> > > + void __user *userptr = (void __user *)(size_t)snap->snap[i].bo_ofs;
> > > + kthread_use_mm(snap->snap[i].mm);
> > > +
> > > + if (!copy_from_user(snap->snap[i].data, userptr, snap->snap[i].len))
> > > + err = 0;
> > > + else
> > > + err = -EFAULT;
> > > + kthread_unuse_mm(snap->snap[i].mm);
> > > + mmput(snap->snap[i].mm);
> > > + snap->snap[i].mm = NULL;
> > > + }
> > > +
> > > + if (err) {
> > > + kvfree(snap->snap[i].data);
> > > + snap->snap[i].data = ERR_PTR(err);
> > > + }
> > > +
> > > +cleanup_bo:
> > > + xe_bo_put(bo);
> > > + snap->snap[i].bo = NULL;
> > > + }
> > > +}
> > > +
> > > +void xe_vm_snapshot_print(struct xe_vm_snapshot *snap, struct drm_printer *p)
> > > +{
> > > + unsigned long i, j;
> > > +
> > > + for (i = 0; i < snap->num_snaps; i++) {
> > > + if (IS_ERR(snap->snap[i].data))
> > > + goto uncaptured;
> > > +
> > > + drm_printf(p, "[%llx].length: 0x%lx\n", snap->snap[i].ofs, snap->snap[i].len);
> > > + drm_printf(p, "[%llx].data: ",
> > > + snap->snap[i].ofs + j);
> > > +
> > > + for (j = 0; j < snap->snap[i].len; j += sizeof(u32)) {
> > > + uint32_t *val = snap->snap[i].data + j;
> > > + char dumped[ASCII85_BUFSZ];
> > > +
> > > + drm_puts(p, ascii85_encode(*val, dumped));
> > > + }
> > > +
> > > + drm_puts(p, "\n");
> > > + continue;
> > > +
> > > +uncaptured:
> > > + drm_printf(p, "Unable to capture range [%llx-%llx]: %li\n",
> > > + snap->snap[i].ofs, snap->snap[i].ofs + snap->snap[i].len - 1,
> > > + PTR_ERR(snap->snap[i].data));
> > > + }
> > > +}
> > > +
> > > +void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
> > > +{
> > > + unsigned long i;
> > > +
> > > + if (!snap)
> > > + return;
> > > +
> > > + for (i = 0; i < snap->num_snaps; i++) {
> > > + if (!IS_ERR(snap->snap[i].data))
> > > + kvfree(snap->snap[i].data);
> > > + xe_bo_put(snap->snap[i].bo);
> > > + if (snap->snap[i].mm)
> > > + mmput(snap->snap[i].mm);
> > > + }
> > > + kvfree(snap);
> > > +}
> > > diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> > > index df4a82e960ff0..6df1f1c7f85d9 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm.h
> > > +++ b/drivers/gpu/drm/xe/xe_vm.h
> > > @@ -271,3 +271,8 @@ static inline void vm_dbg(const struct drm_device *dev,
> > > { /* noop */ }
> > > #endif
> > > #endif
> > > +
> > > +struct xe_vm_snapshot *xe_vm_snapshot_capture(struct xe_vm *vm);
> > > +void xe_vm_snapshot_capture_delayed(struct xe_vm_snapshot *snap);
> > > +void xe_vm_snapshot_print(struct xe_vm_snapshot *snap, struct drm_printer *p);
> > > +void xe_vm_snapshot_free(struct xe_vm_snapshot *snap);
> >
>