Semantics of dma_{map,pin,unmap}_guest_page
Julian Stecklina
julian.stecklina at cyberus-technology.de
Tue May 26 08:11:12 UTC 2020
Hi Yan,
thanks for the quick response!
On Tue, 2020-05-26 at 02:02 -0400, Yan Zhao wrote:
> On Mon, May 25, 2020 at 05:32:39PM +0200, Julian Stecklina wrote:
> > I would also expect that after I call `intel_gvt_ops->vgpu_destroy`, all DMA
> > mappings are released by the mediator with the appropriate number of unmap
> > calls. This doesn't seem to be the case, as I see many DMA mappings that are
> > still alive after the vGPU is destroyed.
> >
> As the Unmap calls are triggered by guest page table modifications, their
> count does not necessarily match that of the Map calls.
> But once the vGPU is destroyed, gvt_cache_destroy() is called by
> kvmgt_guest_exit() and removes all DMA mappings that might still be alive,
> regardless of their ref count.
If page tables stay pinned across a call to vgpu_destroy, that would explain
what I'm seeing, and it would be harmless. I was worried that we were
accumulating these pins over time.
That being said, I've opened a ticket in our internal bug tracker to revisit
this and confirm these theories.
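Just to check my mental model, here is roughly how I picture the forced
cleanup on the backend side. All struct and function names below are
placeholders of mine, not the actual kvmgt code; the point is only that every
cached entry gets unmapped regardless of its ref count:

#include <linux/dma-mapping.h>
#include <linux/rbtree.h>
#include <linux/slab.h>

/* placeholder for whatever the backend caches per guest-page mapping */
struct dma_cache_entry {
        struct rb_node node;
        unsigned long gfn;
        dma_addr_t dma_addr;
        unsigned long size;
        int ref_count;          /* may still be > 0 when the vGPU goes away */
};

/* drop every remaining entry, ignoring ref_count */
static void cache_destroy_all(struct device *dev, struct rb_root *root)
{
        struct rb_node *node;
        struct dma_cache_entry *entry;

        while ((node = rb_first(root))) {
                entry = rb_entry(node, struct dma_cache_entry, node);
                /* unmap (and presumably also unpin) the guest page */
                dma_unmap_page(dev, entry->dma_addr, entry->size,
                               DMA_BIDIRECTIONAL);
                rb_erase(&entry->node, root);
                kfree(entry);
        }
}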
>
> > At this point, I'm a bit unsure what to do with these mappings, because
> > they might still be in use. So the options are to either free them (and
> > risk memory corruption) or keep them around and leak memory.
> >
> > Do I have a flaw in my assumptions, or is it expected behavior to clean
> > up some mappings that still have a reference count >0 after the vGPU is
> > destroyed?
> >
> >
> With gvt_cache_destroy() in place, I did not observe the mentioned leak.
> If you do encounter it, could you detail the steps to reproduce it, so we
> can check whether it's an unnoticed bug?
Yes, gvt_cache_destroy removes any remaining mappings. My question was only
whether it's actually safe to clean them up even if they still have a
reference count >0, and if I understand you correctly, that's the case.
Thanks for explaining!
That being said, it would be easier to catch bugs in this logic if
vgpu_destroy itself unpinned all DMA mappings it knows about. Then the
hypervisor backend could just yell if anything is left pinned when it cleans
up, but I'm not sure how hard this would be to implement.
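Roughly what I have in mind, again only a sketch with names I made up (the
unmap call below stands in for the real dma_unmap_guest_page hook, and the
list is just one way the mediator could track its own pins):

#include <linux/bug.h>
#include <linux/list.h>
#include <linux/rbtree.h>
#include <linux/slab.h>

/* placeholder for how the mediator could track its own pins per vGPU */
struct pinned_page {
        struct list_head list;
        dma_addr_t dma_addr;
};

/* mediator side: vgpu_destroy drops every pin it knows about itself */
static void vgpu_unpin_all(struct intel_vgpu *vgpu,
                           struct list_head *pinned_pages)
{
        struct pinned_page *p, *tmp;

        list_for_each_entry_safe(p, tmp, pinned_pages, list) {
                /* stands in for the real dma_unmap_guest_page hook */
                hypervisor_dma_unmap_guest_page(vgpu, p->dma_addr);
                list_del(&p->list);
                kfree(p);
        }
}

/* backend side: nothing should be left, so just yell if there is */
static void backend_check_empty(struct rb_root *cache_root)
{
        WARN(!RB_EMPTY_ROOT(cache_root),
             "vGPU destroyed with DMA mappings still pinned\n");
}

That way a missed unpin would show up as a warning in dmesg instead of being
silently swept up behind the mediator's back.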
Julian