[Intel-gfx] [PATCH] drm/i915: Make vm eviction uninterruptible

Ben Widawsky ben at bwidawsk.net
Sun Apr 6 20:35:03 CEST 2014


On Sat, Apr 05, 2014 at 07:45:28PM -0700, Ben Widawsky wrote:
> On Sat, Apr 05, 2014 at 09:34:12PM +0100, Chris Wilson wrote:
> > On Sat, Apr 05, 2014 at 01:08:02PM -0700, Ben Widawsky wrote:
> > > Our current code cannot handle a failure to evict well. At the very
> > > least you'll get the following splat, and usually a lot worse fallout
> > > afterwards:
> > > 
> > > [  134.819441] ------------[ cut here ]------------
> > > [  134.819467] WARNING: CPU: 3 PID: 442 at drivers/gpu/drm/i915/i915_gem_evict.c:230 i915_gem_evict_vm+0x8a/0x1c0 [i915]()
> > > [  134.819471] Modules linked in: i915 drm_kms_helper drm intel_gtt agpgart i2c_algo_bit ext4 crc16 mbcache jbd2 x86_pkg_temp_thermal coretemp kvm_intel kvm crc32c_intel ghash_clmulni_intel aesni_intel aes_x86_64 lrw iTCO_wdt gf128mul iTCO_vendor_support glue_helper ablk_helper cryptd microcode serio_raw i2c_i801 fan thermal battery e1000e acpi_cpufreq evdev ptp ac acpi_pad pps_core processor lpc_ich mfd_core snd_seq_dummy snd_seq_oss snd_seq_midi_event snd_seq snd_seq_device snd_timer snd soundcore sd_mod crc_t10dif crct10dif_common ahci libahci libata ehci_pci ehci_hcd usbcore scsi_mod usb_common
> > > [  134.819565] CPU: 3 PID: 442 Comm: glxgears Not tainted 3.14.0-BEN+ #480
> > > [  134.819568] Hardware name: Intel Corporation Broadwell Client platform/WhiteTip Mountain 1, BIOS BDW-E1R1.86C.0063.R01.1402110503 02/11/2014
> > > [  134.819571]  0000000000000009 ffff88009b10fa80 ffffffff8159e6a5 0000000000000000
> > > [  134.819577]  ffff88009b10fab8 ffffffff8104895d ffff880145c353c0 ffff880145f400f8
> > > [  134.819584]  0000000000000000 ffff8800a274d300 ffff88009b10fb78 ffff88009b10fac8
> > > [  134.819590] Call Trace:
> > > [  134.819599]  [<ffffffff8159e6a5>] dump_stack+0x4e/0x7a
> > > [  134.819607]  [<ffffffff8104895d>] warn_slowpath_common+0x7d/0xa0
> > > [  134.819635]  [<ffffffff81048a3a>] warn_slowpath_null+0x1a/0x20
> > > [  134.819656]  [<ffffffffa050c82a>] i915_gem_evict_vm+0x8a/0x1c0 [i915]
> > > [  134.819677]  [<ffffffffa050a26b>] ppgtt_release+0x17b/0x1e0 [i915]
> > > [  134.819693]  [<ffffffffa050a34d>] i915_gem_context_free+0x7d/0x180 [i915]
> > > [  134.819707]  [<ffffffffa050a48c>] context_idr_cleanup+0x3c/0x40 [i915]
> > > [  134.819715]  [<ffffffff81332d14>] idr_for_each+0x104/0x1a0
> > > [  134.819730]  [<ffffffffa050a450>] ? i915_gem_context_free+0x180/0x180 [i915]
> > > [  134.819735]  [<ffffffff815a27fc>] ? mutex_lock_nested+0x28c/0x3d0
> > > [  134.819761]  [<ffffffffa057da75>] ? i915_driver_preclose+0x25/0x50 [i915]
> > > [  134.819778]  [<ffffffffa050b715>] i915_gem_context_close+0x35/0xa0 [i915]
> > > [  134.819802]  [<ffffffffa057da80>] i915_driver_preclose+0x30/0x50 [i915]
> > > [  134.819816]  [<ffffffffa03e6a7d>] drm_release+0x5d/0x5f0 [drm]
> > > [  134.819822]  [<ffffffff811aae3a>] __fput+0xea/0x240
> > > [  134.819827]  [<ffffffff811aafde>] ____fput+0xe/0x10
> > > [  134.819832]  [<ffffffff810701bc>] task_work_run+0xac/0xe0
> > > [  134.819837]  [<ffffffff8104b82f>] do_exit+0x2cf/0xcf0
> > > [  134.819844]  [<ffffffff815a5dac>] ? _raw_spin_unlock_irq+0x2c/0x60
> > > [  134.819849]  [<ffffffff8104c2dc>] do_group_exit+0x4c/0xc0
> > > [  134.819855]  [<ffffffff8105ef11>] get_signal_to_deliver+0x2d1/0x920
> > > [  134.819861]  [<ffffffff81002408>] do_signal+0x48/0x620
> > > [  134.819867]  [<ffffffff811aa0d9>] ? do_readv_writev+0x169/0x220
> > > [  134.819873]  [<ffffffff8109e33d>] ? trace_hardirqs_on_caller+0xfd/0x1c0
> > > [  134.819879]  [<ffffffff811c9b9d>] ? __fget_light+0x13d/0x160
> > > [  134.819886]  [<ffffffff815a744c>] ? sysret_signal+0x5/0x47
> > > [  134.819892]  [<ffffffff81002a45>] do_notify_resume+0x65/0x80
> > > [  134.819897]  [<ffffffff815a76da>] int_signal+0x12/0x17
> > > [  134.819901] ---[ end trace dbf4da2122c3d683 ]---
> > > 
> > > At first I was going to call this a band-aid for the problem. Upon
> > > further thought, however, I rather like the idea of making evictions
> > > atomic and less prone to failure anyway. The reason it can still
> > > somewhat be considered a band-aid is GPU hangs: it would be nice if we
> > > had some way to interrupt the process when the GPU is hung. I'll leave
> > > that for a follow-up patch though.
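> > > 
> > > To illustrate, a minimal sketch of what "uninterruptible eviction"
> > > could look like, assuming the driver's usual
> > > dev_priv->mm.interruptible pattern (this is the idea, not the literal
> > > patch):
> > > 
> > > 	bool was_interruptible = dev_priv->mm.interruptible;
> > > 	int ret;
> > > 
> > > 	/* Ignore signals for the duration of the eviction so a pending
> > > 	 * SIGKILL cannot abort the wait halfway through. */
> > > 	dev_priv->mm.interruptible = false;
> > > 	ret = i915_gem_evict_vm(vm, true);
> > > 	dev_priv->mm.interruptible = was_interruptible;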
> > 
> > No, this should be decided by the caller of i915_gem_evict_vm(), because
> > we very much do want it to be interruptible in most cases. The filp close
> > path is certainly one where being non-interruptible seems justifiable.
> > 
> > However, this is very much papering over the bug that we should never be
> > freeing an active vm in the first place.
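> > 
> > Roughly (a sketch only; the extra parameter is hypothetical, not the
> > current signature):
> > 
> > int i915_gem_evict_vm(struct i915_address_space *vm, bool do_idle,
> > 		      bool interruptible);
> > 
> > 	/* filp close path: we cannot back out, so do not honour signals */
> > 	ret = i915_gem_evict_vm(vm, true, false);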
> > -Chris
> > 
> 
> The issue I was seeing appeared to stem from SIGKILL. In such a case,
> the process may want to die before the context/work/address space is
> freeable. For example:
> 1. evict_vm called for whatever reason
> 2. wait_seqno because the VMA is still active

Hmm, something isn't right here. Why did I get to wait_seqno if pin_count
was 0? Just FYI, this wasn't hypothetical. I traced it all the way down to
the exact -ERESTARTSYS from wait_seqno.
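
For reference, the interruptible wait bottoms out in a check along these
lines (paraphrasing __wait_seqno from memory; simplified):

	/* an interruptible wait bails out as soon as a signal is pending */
	if (interruptible && signal_pending(current))
		return -ERESTARTSYS;

and that is exactly what a SIGKILL arriving during eviction trips over.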

By the way, another option in evict would be:

	while ((ret = i915_vma_unbind(vma)) == -ERESTARTSYS)
		; /* retry for as long as the unbind is interrupted */
	WARN_ON(ret);

> 3. receive signal break out of wait_seqno
> 4. return to evict_vm and the above WARN
> 
> Our error handling from there just spirals.
> 
> One issue I have with our current code is that I'd really like eviction
> to not be able to fail (obviously extreme cases are unavoidable). Perhaps
> another solution would be to make sure the context is idled before
> evicting its VM.
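> 
> Something along these lines, say (a sketch, assuming i915_gpu_idle() and
> i915_gem_retire_requests() behave as they do today):
> 
> 	/* Idle the GPU first so that no VMA in this vm is still active;
> 	 * the subsequent unbinds then never reach wait_seqno at all. */
> 	ret = i915_gpu_idle(dev);
> 	if (ret)
> 		return ret;
> 	i915_gem_retire_requests(dev);
> 
> 	ret = i915_gem_evict_vm(vm, false);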
> 
> 
> -- 
> Ben Widawsky, Intel Open Source Technology Center

-- 
Ben Widawsky, Intel Open Source Technology Center


