Possible use_mm() mis-uses

Paolo Bonzini pbonzini at redhat.com
Thu Aug 23 08:38:17 UTC 2018


On 23/08/2018 08:07, Zhenyu Wang wrote:
> On 2018.08.22 20:20:46 +0200, Paolo Bonzini wrote:
>> On 22/08/2018 18:44, Linus Torvalds wrote:
>>> An example of something that *isn't* right, is the i915 kvm interface,
>>> which does
>>>
>>>         use_mm(kvm->mm);
>>>
>>> on an mm that was initialized in virt/kvm/kvm_main.c using
>>>
>>>         mmgrab(current->mm);
>>>         kvm->mm = current->mm;
>>>
>>> which is *not* right. Using "mmgrab()" does indeed guarantee the
>>> lifetime of the 'struct mm_struct' itself, but it does *not* guarantee
>>> the lifetime of the page tables. You need to use "mmget()" and
>>> "mmput()", which get the reference to the actual process address
>>> space!
>>>
>>> Now, it is *possible* that the kvm use is correct too, because kvm
>>> does register a mmu_notifier chain, and in theory you can avoid the
>>> proper refcounting by just making sure the mmu "release" notifier
>>> kills any existing uses, but I don't really see kvm doing that. Kvm
>>> does register a release notifier, but that just flushes the shadow
>>> page tables, it doesn't kill any use_mm() use by some i915 use case.
>>
>> Yes, KVM is correct but the i915 bits are at least fishy.  It's probably
>> as simple as adding an mmget/mmput pair in kvmgt_guest_init and
>> kvmgt_guest_exit respectively, or maybe mmget_not_zero.
>>
> 
> Yeah, that's the cleanest way to fix this, imo. We only depend on the
> guest life cycle for proper access to guest memory. Here's a proposed
> fix; I'll verify and integrate it later.
> 
> Thanks!
> 
> From 5e5a8d0409aa150884adf5a4d0b956fd0b9906b3 Mon Sep 17 00:00:00 2001
> From: Zhenyu Wang <zhenyuw at linux.intel.com>
> Date: Thu, 23 Aug 2018 14:08:06 +0800
> Subject: [PATCH] drm/i915/gvt: Fix life cycle reference on KVM mm
> 
> Handle the guest mm access life cycle properly with mmget()/mmput()
> in guest init()/exit(). As noted by Linus, use_mm() depends on live,
> valid page tables, but KVM's mmgrab() doesn't guarantee that. Since
> vGPU usage depends on the guest VM life cycle, use mmget()/mmput()
> to guarantee access to the guest VM address space.
> 
> Cc: Linus Torvalds <torvalds at linux-foundation.org>
> Cc: Paolo Bonzini <pbonzini at redhat.com>
> Cc: Zhi Wang <zhi.a.wang at intel.com>
> Signed-off-by: Zhenyu Wang <zhenyuw at linux.intel.com>

Reviewed-by: Paolo Bonzini <pbonzini at redhat.com>

> ---
>  drivers/gpu/drm/i915/gvt/kvmgt.c | 12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
> index 71751be329e3..4a0988747d08 100644
> --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> @@ -32,6 +32,7 @@
>  #include <linux/device.h>
>  #include <linux/mm.h>
>  #include <linux/mmu_context.h>
> +#include <linux/sched/mm.h>
>  #include <linux/types.h>
>  #include <linux/list.h>
>  #include <linux/rbtree.h>
> @@ -1614,9 +1615,16 @@ static int kvmgt_guest_init(struct mdev_device *mdev)
>  	if (__kvmgt_vgpu_exist(vgpu, kvm))
>  		return -EEXIST;
>  
> +	if (!mmget_not_zero(kvm->mm)) {
> +		gvt_vgpu_err("Can't get KVM mm reference\n");
> +		return -EINVAL;
> +	}
> +
>  	info = vzalloc(sizeof(struct kvmgt_guest_info));
> -	if (!info)
> +	if (!info) {
> +		mmput(kvm->mm);
>  		return -ENOMEM;
> +	}
>  
>  	vgpu->handle = (unsigned long)info;
>  	info->vgpu = vgpu;
> @@ -1647,6 +1655,8 @@ static bool kvmgt_guest_exit(struct kvmgt_guest_info *info)
>  	debugfs_remove(info->debugfs_cache_entries);
>  
>  	kvm_page_track_unregister_notifier(info->kvm, &info->track_node);
> +	if (info->kvm->mm)
> +		mmput(info->kvm->mm);
>  	kvm_put_kvm(info->kvm);
>  	kvmgt_protect_table_destroy(info);
>  	gvt_cache_destroy(info->vgpu);
> 
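For reference, a minimal sketch of the rule being discussed (not part of the
patch): mmgrab()/mmdrop() only keep the struct mm_struct allocation alive,
while mmget_not_zero()/mmput() pin the address space itself, which is what
use_mm() needs. The access function below is hypothetical; only the mm
helpers are real kernel APIs.

        /*
         * Hypothetical guest-memory access path. With the patch applied,
         * kvmgt_guest_init() holds a reference taken with mmget_not_zero(),
         * so the page tables behind kvm->mm stay alive for the whole guest
         * life cycle; mmgrab() alone would only pin the struct mm_struct.
         */
        static int example_read_guest_va(struct kvm *kvm, void *buf,
                                         unsigned long gva, size_t len)
        {
                unsigned long left;

                use_mm(kvm->mm);        /* switch to the guest's mm */
                left = copy_from_user(buf, (void __user *)gva, len);
                unuse_mm(kvm->mm);      /* switch back */

                return left ? -EFAULT : 0;
        }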

