Semantics of dma_{map,pin,unmap}_guest_page

Yan Zhao yan.y.zhao at intel.com
Tue May 26 06:02:29 UTC 2020


On Mon, May 25, 2020 at 05:32:39PM +0200, Julian Stecklina wrote:
> Hello,
> 
> as you know we are writing a hypervisor backend for i915/gvt. We were wondering
> about the semantics of dma_map_guest_page, dma_pin_guest_page, and
> dma_unmap_guest_page from intel_gvt_ops.
> 
> My current understanding is this: Map creates a new DMA mapping with a reference
> count of 1. Pin increases the reference count by one. Unmap decreases the
> reference count by 1 and if it reaches zero, removes the DMA mapping. Pretty
> straightforward.
>
Yes, that's right.

Guest modifications to the GGTT/PPGTT trigger Map/Unmap calls, which
pin/unpin the guest pages and create/remove the DMA mappings.
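
To make the lifecycle concrete, here is a minimal userspace C sketch of
those semantics. The names and the list-based cache are invented for
illustration and are not the actual kvmgt code (which keeps per-vGPU
cache entries), but the ref counting works the same way:

#include <stdlib.h>

/* Illustrative model only, not the real kvmgt implementation. */
struct dma_entry {
        unsigned long gfn;      /* guest page frame number */
        unsigned long dma_addr; /* dma_addr_t in the real code */
        int ref_count;
        struct dma_entry *next; /* the real cache is not a linked list */
};

static struct dma_entry *cache;

/* Map: pin the guest page, set up a DMA mapping, ref_count starts at 1. */
struct dma_entry *map_guest_page(unsigned long gfn)
{
        struct dma_entry *e = calloc(1, sizeof(*e));

        if (!e)
                return NULL;
        e->gfn = gfn;
        e->dma_addr = gfn << 12; /* stand-in for the real IOMMU mapping */
        e->ref_count = 1;
        e->next = cache;
        cache = e;
        return e;
}

/* Pin: take one more reference on an existing mapping. */
void pin_guest_page(struct dma_entry *e)
{
        e->ref_count++;
}

/* Unmap: drop one reference; tear down the mapping only at zero. */
void unmap_guest_page(struct dma_entry *e)
{
        struct dma_entry **p;

        if (--e->ref_count > 0)
                return;
        for (p = &cache; *p != e; p = &(*p)->next)
                ;
        *p = e->next; /* unlink from the cache */
        free(e);      /* the real code also unpins and unmaps here */
}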


> I would also expect that after I call `intel_gvt_ops->vgpu_destroy`, all DMA
> mappings are released by the mediator with the appropriate number of unmap
> calls. This doesn't seem to be the case, as I see many DMA mappings that are
> still alive after the vGPU is destroyed.
>
The Unmap calls are triggered by guest page table modifications, so
their count does not necessarily match that of the Map calls. But when
the vGPU is destroyed, gvt_cache_destroy() is called by
kvmgt_guest_exit() and removes all DMA mappings that may still be
alive, regardless of their ref counts.
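
So the destroy path is a final sweep rather than a balanced set of
Unmap calls. Continuing the sketch above (again with invented names,
modeled on what gvt_cache_destroy() does):

/* On vGPU teardown, release every cached mapping unconditionally,
 * no matter how many references are still outstanding. */
void cache_destroy(void)
{
        while (cache) {
                struct dma_entry *e = cache;

                cache = e->next;
                free(e); /* the real code unpins the page and unmaps DMA too */
        }
}

Cleaning up mappings whose ref count is still above zero at destroy
time is therefore expected; the sweep is what keeps them from leaking.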


> At this point, I'm a bit unsure what to do with these mappings, because they
> might still be in use. So the options are to either free them (and risk memory
> corruption) or keep them around and leak memory.
> 
> Do I have a flaw in my assumptions or is it expected behavior to clean up some
> mappings that still have a reference count >0 after the vGPU is destroyed?
> 
>
With gvt_cache_destroy() in place, I have not observed the leak you
mention. If you do encounter it, could you detail the steps to
reproduce it, so we can check whether it's an unnoticed bug?

Thanks
Yan


