[PATCH] drm/vmwgfx: Keep a gem reference to user bos in surfaces
Sverdlin, Alexander
alexander.sverdlin at siemens.com
Thu Dec 21 10:54:27 UTC 2023
Hi Zack,
thank you for the patch!
On Thu, 2023-09-28 at 00:13 -0400, Zack Rusin wrote:
> From: Zack Rusin <zackr at vmware.com>
>
> Surfaces can be backed (i.e. stored in) memory objects (mob's) which
> are created and managed by the userspace as GEM buffers. Surfaces
> grab only a ttm reference which means that the gem object can
> be deleted underneath us, especially in cases where prime buffer
> export is used.
>
> Make sure that all userspace surfaces which are backed by gem objects
> hold a gem reference to make sure they're not deleted before vmw
> surfaces are done with them, which fixes:
> ------------[ cut here ]------------
> refcount_t: underflow; use-after-free.
> WARNING: CPU: 2 PID: 2632 at lib/refcount.c:28 refcount_warn_saturate+0xfb/0x150
> Modules linked in: overlay vsock_loopback vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vsock snd_ens1371 snd_ac97_codec ac97_bus snd_pcm gameport>
> CPU: 2 PID: 2632 Comm: vmw_ref_count Not tainted 6.5.0-rc2-vmwgfx #1
> Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 11/12/2020
> RIP: 0010:refcount_warn_saturate+0xfb/0x150
> Code: eb 9e 0f b6 1d 8b 5b a6 01 80 fb 01 0f 87 ba e4 80 00 83 e3 01 75 89 48 c7 c7 c0 3c f9 a3 c6 05 6f 5b a6 01 01 e8 15 81 98 ff <0f> 0b e9 6f ff ff ff 0f b>
> RSP: 0018:ffffbdc34344bba0 EFLAGS: 00010286
> RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000027
> RDX: ffff960475ea1548 RSI: 0000000000000001 RDI: ffff960475ea1540
> RBP: ffffbdc34344bba8 R08: 0000000000000003 R09: 65646e75203a745f
> R10: ffffffffa5b32b20 R11: 72657466612d6573 R12: ffff96037d6a6400
> R13: ffff9603484805b0 R14: 000000000000000b R15: ffff9603bed06060
> FS: 00007f5fd8520c40(0000) GS:ffff960475e80000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 00007f5fda755000 CR3: 000000010d012005 CR4: 00000000003706e0
> Call Trace:
> <TASK>
> ? show_regs+0x6e/0x80
> ? refcount_warn_saturate+0xfb/0x150
> ? __warn+0x91/0x150
> ? refcount_warn_saturate+0xfb/0x150
> ? report_bug+0x19d/0x1b0
> ? handle_bug+0x46/0x80
> ? exc_invalid_op+0x1d/0x80
> ? asm_exc_invalid_op+0x1f/0x30
> ? refcount_warn_saturate+0xfb/0x150
> drm_gem_object_handle_put_unlocked+0xba/0x110 [drm]
> drm_gem_object_release_handle+0x6e/0x80 [drm]
> drm_gem_handle_delete+0x6a/0xc0 [drm]
> ? __pfx_vmw_bo_unref_ioctl+0x10/0x10 [vmwgfx]
> vmw_bo_unref_ioctl+0x33/0x40 [vmwgfx]
> drm_ioctl_kernel+0xbc/0x160 [drm]
> drm_ioctl+0x2d2/0x580 [drm]
> ? __pfx_vmw_bo_unref_ioctl+0x10/0x10 [vmwgfx]
> ? do_vmi_munmap+0xee/0x180
> vmw_generic_ioctl+0xbd/0x180 [vmwgfx]
> vmw_unlocked_ioctl+0x19/0x20 [vmwgfx]
> __x64_sys_ioctl+0x99/0xd0
> do_syscall_64+0x5d/0x90
> ? syscall_exit_to_user_mode+0x2a/0x50
> ? do_syscall_64+0x6d/0x90
> ? handle_mm_fault+0x16e/0x2f0
> ? exit_to_user_mode_prepare+0x34/0x170
> ? irqentry_exit_to_user_mode+0xd/0x20
> ? irqentry_exit+0x3f/0x50
> ? exc_page_fault+0x8e/0x190
> entry_SYSCALL_64_after_hwframe+0x6e/0xd8
> RIP: 0033:0x7f5fda51aaff
> Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <41> 89 c0 3d 00 f0 ff ff 7>
> RSP: 002b:00007ffd536a4d30 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
> RAX: ffffffffffffffda RBX: 00007ffd536a4de0 RCX: 00007f5fda51aaff
> RDX: 00007ffd536a4de0 RSI: 0000000040086442 RDI: 0000000000000003
> RBP: 0000000040086442 R08: 000055fa603ada50 R09: 0000000000000000
> R10: 0000000000000001 R11: 0000000000000246 R12: 00007ffd536a51b8
> R13: 0000000000000003 R14: 000055fa5ebb4c80 R15: 00007f5fda90f040
> </TASK>
> ---[ end trace 0000000000000000 ]---
>
> A lot of the analysis on the bug was done by Murray McAllister and
> Ian Forbes.
>
> Reported-by: Murray McAllister <murray.mcallister at gmail.com>
> Cc: Ian Forbes <iforbes at vmware.com>
> Signed-off-by: Zack Rusin <zackr at vmware.com>
> Fixes: a950b989ea29 ("drm/vmwgfx: Do not drop the reference to the handle too soon")
> Cc: <stable at vger.kernel.org> # v6.2+
Do you remember the particular reason this was marked 6.2+?
We see this on Debian 6.1.67 (which at least has the mentioned
"drm/vmwgfx: Do not drop the reference to the handle too soon"):
------------[ cut here ]------------
refcount_t: underflow; use-after-free.
WARNING: CPU: 0 PID: 1945 at lib/refcount.c:28 refcount_warn_saturate+0xba/0x110
Modules linked in: ...
CPU: 0 PID: 1945 Comm: ksplashqml Tainted: G OE 6.1.0-16-amd64 #1 Debian 6.1.67-1
Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 11/12/2020
RIP: 0010:refcount_warn_saturate+0xba/0x110
Code: ...
RSP: 0018:ffffb3cb41cfbc58 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff8d7da05dde00 RCX: 0000000000000027
RDX: ffff8d802de203a8 RSI: 0000000000000001 RDI: ffff8d802de203a0
RBP: ffff8d7d110f5800 R08: 0000000000000000 R09: ffffb3cb41cfbad0
R10: 0000000000000003 R11: ffff8d803fec1b28 R12: 0000000000000015
R13: ffff8d7d110f5858 R14: ffff8d7d110f5840 R15: 0000000000000015
FS: 00007f6584cc99c0(0000) GS:ffff8d802de00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fca928a5000 CR3: 0000000143f12004 CR4: 00000000003706f0
Call Trace:
<TASK>
? __warn+0x7d/0xc0
? refcount_warn_saturate+0xba/0x110
? report_bug+0xe2/0x150
? handle_bug+0x41/0x70
? exc_invalid_op+0x13/0x60
? asm_exc_invalid_op+0x16/0x20
? refcount_warn_saturate+0xba/0x110
? refcount_warn_saturate+0xba/0x110
drm_gem_handle_delete+0x8c/0xd0 [drm]
? vmw_bo_create+0xa0/0xa0 [vmwgfx]
vmw_bo_unref_ioctl+0xf/0x20 [vmwgfx]
drm_ioctl_kernel+0xc6/0x170 [drm]
drm_ioctl+0x22f/0x410 [drm]
? vmw_bo_create+0xa0/0xa0 [vmwgfx]
? drm_ioctl_kernel+0x170/0x170 [drm]
vmw_generic_ioctl+0xa4/0x110 [vmwgfx]
__x64_sys_ioctl+0x8d/0xd0
do_syscall_64+0x58/0xc0
? exit_to_user_mode_prepare+0x40/0x1e0
? syscall_exit_to_user_mode+0x27/0x40
? do_syscall_64+0x67/0xc0
? vmw_generic_ioctl+0xa4/0x110 [vmwgfx]
? exit_to_user_mode_prepare+0x40/0x1e0
? syscall_exit_to_user_mode+0x27/0x40
? do_syscall_64+0x67/0xc0
? syscall_exit_to_user_mode+0x27/0x40
? do_syscall_64+0x67/0xc0
? do_syscall_64+0x67/0xc0
entry_SYSCALL_64_after_hwframe+0x64/0xce
RIP: 0033:0x7f6589f1cb5b
Code: ...
RSP: 002b:00007fff0cb4bb50 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f65446996c0 RCX: 00007f6589f1cb5b
RDX: 00007fff0cb4bbf0 RSI: 0000000040086442 RDI: 0000000000000009
RBP: 00007fff0cb4bbf0 R08: 0000000000000007 R09: 000055e1608c1fd0
R10: 73af2d263a1ce7d7 R11: 0000000000000246 R12: 0000000040086442
R13: 0000000000000009 R14: 000055e160638400 R15: 00007f658b616020
</TASK>
---[ end trace 0000000000000000 ]---
Perhaps the subject patch should be queued for the 6.1 stable tree as well?
> ---
> drivers/gpu/drm/vmwgfx/vmwgfx_bo.c | 7 +++---
> drivers/gpu/drm/vmwgfx/vmwgfx_bo.h | 17 +++++++++----
> drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c | 6 ++---
> drivers/gpu/drm/vmwgfx/vmwgfx_drv.h | 4 +++
> drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c | 10 +++++---
> drivers/gpu/drm/vmwgfx/vmwgfx_gem.c | 18 +++++++++++---
> drivers/gpu/drm/vmwgfx/vmwgfx_kms.c | 6 ++---
> drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c | 2 +-
> drivers/gpu/drm/vmwgfx/vmwgfx_resource.c | 12 ++++-----
> drivers/gpu/drm/vmwgfx/vmwgfx_shader.c | 4 +--
> drivers/gpu/drm/vmwgfx/vmwgfx_surface.c | 31 +++++++++---------------
> 11 files changed, 68 insertions(+), 49 deletions(-)
>
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> index c43853597776..2bfac3aad7b7 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
> @@ -34,6 +34,8 @@
>
> static void vmw_bo_release(struct vmw_bo *vbo)
> {
> + WARN_ON(vbo->tbo.base.funcs &&
> + kref_read(&vbo->tbo.base.refcount) != 0);
> vmw_bo_unmap(vbo);
> drm_gem_object_release(&vbo->tbo.base);
> }
> @@ -497,7 +499,7 @@ static int vmw_user_bo_synccpu_release(struct drm_file *filp,
> if (!(flags & drm_vmw_synccpu_allow_cs)) {
> atomic_dec(&vmw_bo->cpu_writers);
> }
> - vmw_user_bo_unref(vmw_bo);
> + vmw_user_bo_unref(&vmw_bo);
> }
>
> return ret;
> @@ -539,7 +541,7 @@ int vmw_user_bo_synccpu_ioctl(struct drm_device *dev, void *data,
> return ret;
>
> ret = vmw_user_bo_synccpu_grab(vbo, arg->flags);
> - vmw_user_bo_unref(vbo);
> + vmw_user_bo_unref(&vbo);
> if (unlikely(ret != 0)) {
> if (ret == -ERESTARTSYS || ret == -EBUSY)
> return -EBUSY;
> @@ -612,7 +614,6 @@ int vmw_user_bo_lookup(struct drm_file *filp,
> }
>
> *out = to_vmw_bo(gobj);
> - ttm_bo_get(&(*out)->tbo);
>
> return 0;
> }
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
> index 1d433fceed3d..0d496dc9c6af 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
> @@ -195,12 +195,19 @@ static inline struct vmw_bo *vmw_bo_reference(struct vmw_bo *buf)
> return buf;
> }
>
> -static inline void vmw_user_bo_unref(struct vmw_bo *vbo)
> +static inline struct vmw_bo *vmw_user_bo_ref(struct vmw_bo *vbo)
> {
> - if (vbo) {
> - ttm_bo_put(&vbo->tbo);
> - drm_gem_object_put(&vbo->tbo.base);
> - }
> + drm_gem_object_get(&vbo->tbo.base);
> + return vbo;
> +}
> +
> +static inline void vmw_user_bo_unref(struct vmw_bo **buf)
> +{
> + struct vmw_bo *tmp_buf = *buf;
> +
> + *buf = NULL;
> + if (tmp_buf)
> + drm_gem_object_put(&tmp_buf->tbo.base);
> }
>
> static inline struct vmw_bo *to_vmw_bo(struct drm_gem_object *gobj)
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c
> index c0b24d1cacbf..a7c07692262b 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c
> @@ -432,7 +432,7 @@ static int vmw_cotable_resize(struct vmw_resource *res, size_t new_size)
> * for the new COTable. Initially pin the buffer object to make sure
> * we can use tryreserve without failure.
> */
> - ret = vmw_bo_create(dev_priv, &bo_params, &buf);
> + ret = vmw_gem_object_create(dev_priv, &bo_params, &buf);
> if (ret) {
> DRM_ERROR("Failed initializing new cotable MOB.\n");
> goto out_done;
> @@ -502,7 +502,7 @@ static int vmw_cotable_resize(struct vmw_resource *res, size_t new_size)
>
> vmw_resource_mob_attach(res);
> /* Let go of the old mob. */
> - vmw_bo_unreference(&old_buf);
> + vmw_user_bo_unref(&old_buf);
> res->id = vcotbl->type;
>
> ret = dma_resv_reserve_fences(bo->base.resv, 1);
> @@ -521,7 +521,7 @@ static int vmw_cotable_resize(struct vmw_resource *res, size_t new_size)
> out_wait:
> ttm_bo_unpin(bo);
> ttm_bo_unreserve(bo);
> - vmw_bo_unreference(&buf);
> + vmw_user_bo_unref(&buf);
>
> out_done:
> MKS_STAT_TIME_POP(MKSSTAT_KERN_COTABLE_RESIZE);
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
> index 58bfdf203eca..3cd5090dedfc 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
> @@ -853,6 +853,10 @@ static inline bool vmw_resource_mob_attached(const struct vmw_resource *res)
> /**
> * GEM related functionality - vmwgfx_gem.c
> */
> +struct vmw_bo_params;
> +int vmw_gem_object_create(struct vmw_private *vmw,
> + struct vmw_bo_params *params,
> + struct vmw_bo **p_vbo);
> extern int vmw_gem_object_create_with_handle(struct vmw_private *dev_priv,
> struct drm_file *filp,
> uint32_t size,
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
> index 98e0723ca6f5..06b06350f61f 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
> @@ -1151,7 +1151,7 @@ static int vmw_translate_mob_ptr(struct vmw_private *dev_priv,
> SVGAMobId *id,
> struct vmw_bo **vmw_bo_p)
> {
> - struct vmw_bo *vmw_bo;
> + struct vmw_bo *vmw_bo, *tmp_bo;
> uint32_t handle = *id;
> struct vmw_relocation *reloc;
> int ret;
> @@ -1164,7 +1164,8 @@ static int vmw_translate_mob_ptr(struct vmw_private *dev_priv,
> }
> vmw_bo_placement_set(vmw_bo, VMW_BO_DOMAIN_MOB, VMW_BO_DOMAIN_MOB);
> ret = vmw_validation_add_bo(sw_context->ctx, vmw_bo);
> - vmw_user_bo_unref(vmw_bo);
> + tmp_bo = vmw_bo;
> + vmw_user_bo_unref(&tmp_bo);
> if (unlikely(ret != 0))
> return ret;
>
> @@ -1206,7 +1207,7 @@ static int vmw_translate_guest_ptr(struct vmw_private *dev_priv,
> SVGAGuestPtr *ptr,
> struct vmw_bo **vmw_bo_p)
> {
> - struct vmw_bo *vmw_bo;
> + struct vmw_bo *vmw_bo, *tmp_bo;
> uint32_t handle = ptr->gmrId;
> struct vmw_relocation *reloc;
> int ret;
> @@ -1220,7 +1221,8 @@ static int vmw_translate_guest_ptr(struct vmw_private *dev_priv,
> vmw_bo_placement_set(vmw_bo, VMW_BO_DOMAIN_GMR | VMW_BO_DOMAIN_VRAM,
> VMW_BO_DOMAIN_GMR | VMW_BO_DOMAIN_VRAM);
> ret = vmw_validation_add_bo(sw_context->ctx, vmw_bo);
> - vmw_user_bo_unref(vmw_bo);
> + tmp_bo = vmw_bo;
> + vmw_user_bo_unref(&tmp_bo);
> if (unlikely(ret != 0))
> return ret;
>
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c b/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
> index c0da89e16e6f..8b1eb0061610 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
> @@ -111,6 +111,20 @@ static const struct drm_gem_object_funcs vmw_gem_object_funcs = {
> .vm_ops = &vmw_vm_ops,
> };
>
> +int vmw_gem_object_create(struct vmw_private *vmw,
> + struct vmw_bo_params *params,
> + struct vmw_bo **p_vbo)
> +{
> + int ret = vmw_bo_create(vmw, params, p_vbo);
> +
> + if (ret != 0)
> + goto out_no_bo;
> +
> + (*p_vbo)->tbo.base.funcs = &vmw_gem_object_funcs;
> +out_no_bo:
> + return ret;
> +}
> +
> int vmw_gem_object_create_with_handle(struct vmw_private *dev_priv,
> struct drm_file *filp,
> uint32_t size,
> @@ -126,12 +140,10 @@ int vmw_gem_object_create_with_handle(struct vmw_private *dev_priv,
> .pin = false
> };
>
> - ret = vmw_bo_create(dev_priv, &params, p_vbo);
> + ret = vmw_gem_object_create(dev_priv, &params, p_vbo);
> if (ret != 0)
> goto out_no_bo;
>
> - (*p_vbo)->tbo.base.funcs = &vmw_gem_object_funcs;
> -
> ret = drm_gem_handle_create(filp, &(*p_vbo)->tbo.base, handle);
> out_no_bo:
> return ret;
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
> index 1489ad73c103..818b7f109f53 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
> @@ -1471,8 +1471,8 @@ static int vmw_create_bo_proxy(struct drm_device *dev,
> /* Reserve and switch the backing mob. */
> mutex_lock(&res->dev_priv->cmdbuf_mutex);
> (void) vmw_resource_reserve(res, false, true);
> - vmw_bo_unreference(&res->guest_memory_bo);
> - res->guest_memory_bo = vmw_bo_reference(bo_mob);
> + vmw_user_bo_unref(&res->guest_memory_bo);
> + res->guest_memory_bo = vmw_user_bo_ref(bo_mob);
> res->guest_memory_offset = 0;
> vmw_resource_unreserve(res, false, false, false, NULL, 0);
> mutex_unlock(&res->dev_priv->cmdbuf_mutex);
> @@ -1666,7 +1666,7 @@ static struct drm_framebuffer *vmw_kms_fb_create(struct drm_device *dev,
> err_out:
> /* vmw_user_lookup_handle takes one ref so does new_fb */
> if (bo)
> - vmw_user_bo_unref(bo);
> + vmw_user_bo_unref(&bo);
> if (surface)
> vmw_surface_unreference(&surface);
>
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c b/drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c
> index fb85f244c3d0..c45b4724e414 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c
> @@ -451,7 +451,7 @@ int vmw_overlay_ioctl(struct drm_device *dev, void *data,
>
> ret = vmw_overlay_update_stream(dev_priv, buf, arg, true);
>
> - vmw_user_bo_unref(buf);
> + vmw_user_bo_unref(&buf);
>
> out_unlock:
> mutex_unlock(&overlay->mutex);
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
> index 71eeabf001c8..ca300c7427d2 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
> @@ -141,7 +141,7 @@ static void vmw_resource_release(struct kref *kref)
> if (res->coherent)
> vmw_bo_dirty_release(res->guest_memory_bo);
> ttm_bo_unreserve(bo);
> - vmw_bo_unreference(&res->guest_memory_bo);
> + vmw_user_bo_unref(&res->guest_memory_bo);
> }
>
> if (likely(res->hw_destroy != NULL)) {
> @@ -338,7 +338,7 @@ static int vmw_resource_buf_alloc(struct vmw_resource *res,
> return 0;
> }
>
> - ret = vmw_bo_create(res->dev_priv, &bo_params, &gbo);
> + ret = vmw_gem_object_create(res->dev_priv, &bo_params, &gbo);
> if (unlikely(ret != 0))
> goto out_no_bo;
>
> @@ -457,11 +457,11 @@ void vmw_resource_unreserve(struct vmw_resource *res,
> vmw_resource_mob_detach(res);
> if (res->coherent)
> vmw_bo_dirty_release(res->guest_memory_bo);
> - vmw_bo_unreference(&res->guest_memory_bo);
> + vmw_user_bo_unref(&res->guest_memory_bo);
> }
>
> if (new_guest_memory_bo) {
> - res->guest_memory_bo = vmw_bo_reference(new_guest_memory_bo);
> + res->guest_memory_bo = vmw_user_bo_ref(new_guest_memory_bo);
>
> /*
> * The validation code should already have added a
> @@ -551,7 +551,7 @@ vmw_resource_check_buffer(struct ww_acquire_ctx *ticket,
> ttm_bo_put(val_buf->bo);
> val_buf->bo = NULL;
> if (guest_memory_dirty)
> - vmw_bo_unreference(&res->guest_memory_bo);
> + vmw_user_bo_unref(&res->guest_memory_bo);
>
> return ret;
> }
> @@ -727,7 +727,7 @@ int vmw_resource_validate(struct vmw_resource *res, bool intr,
> goto out_no_validate;
> else if (!res->func->needs_guest_memory && res->guest_memory_bo) {
> WARN_ON_ONCE(vmw_resource_mob_attached(res));
> - vmw_bo_unreference(&res->guest_memory_bo);
> + vmw_user_bo_unref(&res->guest_memory_bo);
> }
>
> return 0;
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
> index 1e81ff2422cf..a01ca3226d0a 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
> @@ -180,7 +180,7 @@ static int vmw_gb_shader_init(struct vmw_private *dev_priv,
>
> res->guest_memory_size = size;
> if (byte_code) {
> - res->guest_memory_bo = vmw_bo_reference(byte_code);
> + res->guest_memory_bo = vmw_user_bo_ref(byte_code);
> res->guest_memory_offset = offset;
> }
> shader->size = size;
> @@ -809,7 +809,7 @@ static int vmw_shader_define(struct drm_device *dev, struct drm_file *file_priv,
> shader_type, num_input_sig,
> num_output_sig, tfile, shader_handle);
> out_bad_arg:
> - vmw_user_bo_unref(buffer);
> + vmw_user_bo_unref(&buffer);
> return ret;
> }
>
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
> index 5db403ee8261..3829be282ff0 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
> @@ -686,9 +686,6 @@ static void vmw_user_surface_base_release(struct ttm_base_object **p_base)
> container_of(base, struct vmw_user_surface, prime.base);
> struct vmw_resource *res = &user_srf->srf.res;
>
> - if (res->guest_memory_bo)
> - drm_gem_object_put(&res->guest_memory_bo->tbo.base);
> -
> *p_base = NULL;
> vmw_resource_unreference(&res);
> }
> @@ -855,23 +852,21 @@ int vmw_surface_define_ioctl(struct drm_device *dev, void *data,
> * expect a backup buffer to be present.
> */
> if (dev_priv->has_mob && req->shareable) {
> - uint32_t backup_handle;
> -
> - ret = vmw_gem_object_create_with_handle(dev_priv,
> - file_priv,
> - res->guest_memory_size,
> - &backup_handle,
> - &res->guest_memory_bo);
> + struct vmw_bo_params params = {
> + .domain = VMW_BO_DOMAIN_SYS,
> + .busy_domain = VMW_BO_DOMAIN_SYS,
> + .bo_type = ttm_bo_type_device,
> + .size = res->guest_memory_size,
> + .pin = false
> + };
> +
> + ret = vmw_gem_object_create(dev_priv,
> + &params,
> + &res->guest_memory_bo);
> if (unlikely(ret != 0)) {
> vmw_resource_unreference(&res);
> goto out_unlock;
> }
> - vmw_bo_reference(res->guest_memory_bo);
> - /*
> - * We don't expose the handle to the userspace and surface
> - * already holds a gem reference
> - */
> - drm_gem_handle_delete(file_priv, backup_handle);
> }
>
> tmp = vmw_resource_reference(&srf->res);
> @@ -1512,7 +1507,7 @@ vmw_gb_surface_define_internal(struct drm_device *dev,
> if (ret == 0) {
> if (res->guest_memory_bo->tbo.base.size < res->guest_memory_size) {
> VMW_DEBUG_USER("Surface backup buffer too small.\n");
> - vmw_bo_unreference(&res->guest_memory_bo);
> + vmw_user_bo_unref(&res->guest_memory_bo);
> ret = -EINVAL;
> goto out_unlock;
> } else {
> @@ -1526,8 +1521,6 @@ vmw_gb_surface_define_internal(struct drm_device *dev,
> res->guest_memory_size,
> &backup_handle,
> &res->guest_memory_bo);
> - if (ret == 0)
> - vmw_bo_reference(res->guest_memory_bo);
> }
>
> if (unlikely(ret != 0)) {
--
Alexander Sverdlin
Siemens AG
www.siemens.com