Semantics of dma_{map,pin,unmap}_guest_page
Yan Zhao
yan.y.zhao at intel.com
Tue May 26 08:37:47 UTC 2020
On Tue, May 26, 2020 at 10:30:36AM +0200, Julian Stecklina wrote:
> On Tue, 2020-05-26 at 04:12 -0400, Yan Zhao wrote:
> > On Tue, May 26, 2020 at 10:11:12AM +0200, Julian Stecklina wrote:
> > > On Tue, 2020-05-26 at 02:02 -0400, Yan Zhao wrote:
> > > > As the Unmap calls are triggered by guest page table modifications, their
> > > > count does not necessarily match that of the Map calls.
> > > > But when the vGPU is destroyed, gvt_cache_destroy() is called by
> > > > kvmgt_guest_exit() and removes all DMA mappings that might still be
> > > > alive, regardless of their ref counts.
> > >
> > > If page tables stay pinned across a call to vgpu_destroy, that would explain
> > > what I'm seeing. This is then also harmless. I was worried that we were
> > > accumulating these pins over time.
> > >
> > > That being said, I've opened an issue in our internal bug tracker to
> > > revisit this issue and confirm the theories.
> > >
> > Guest page tables are not necessarily cleared before vgpu_destroy,
> > especially when the guest is killed or crashes,
> > so the Unmap count can end up lower than the Map count. I don't think it's
> > a bug, and it's safe to clear all DMA mappings generated for the guest and
> > unpin all previously pinned guest pages now that the guest is destroyed,
> > isn't it?
>
> It's fine. It was just a bit surprising to me.
>
> As I said before, it would be easier to spot bugs if vgpu_destroy cleaned up
> the DMA mappings it knows about, but it's mostly cosmetic.
>
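To recap the map/unmap semantics from the quoted discussion: the adapter layer
keeps a per-guest cache of pinned, DMA-mapped guest pages; Map calls take
references on cache entries, while Unmap calls, driven by guest page table
modifications, only drop references. A toy userspace model of that bookkeeping
might look like the sketch below. All names in it are illustrative, not the
real kvmgt code, which keeps its entries in a per-vGPU rb-tree and does real
page pinning and IOMMU mapping.

/*
 * Toy userspace model of the map/unmap bookkeeping discussed above.
 * All names are illustrative; this is not the real kvmgt code.
 */
#include <stdlib.h>

struct cache_entry {
    struct cache_entry *next;
    unsigned long gfn;       /* guest page frame number */
    unsigned long dma_addr;  /* stand-in for the device-visible address */
    int ref_count;           /* one reference per outstanding Map call */
};

static struct cache_entry *cache_head;

static struct cache_entry *cache_find(unsigned long gfn)
{
    struct cache_entry *e;

    for (e = cache_head; e; e = e->next)
        if (e->gfn == gfn)
            return e;
    return NULL;
}

/* "Map": pin and map on first use, otherwise just take another reference. */
static int map_guest_page(unsigned long gfn, unsigned long *dma_addr)
{
    struct cache_entry *e = cache_find(gfn);

    if (!e) {
        e = calloc(1, sizeof(*e));
        if (!e)
            return -1;
        e->gfn = gfn;
        e->dma_addr = gfn << 12;  /* pretend pin + IOMMU map happened */
        e->next = cache_head;
        cache_head = e;
    }
    e->ref_count++;
    *dma_addr = e->dma_addr;
    return 0;
}

/*
 * "Unmap": driven by guest page table modifications, so a killed or
 * crashed guest may never issue it for every mapped page, and entries
 * with ref_count > 0 can survive until the guest is destroyed.
 */
static void unmap_guest_page(unsigned long gfn)
{
    struct cache_entry **pp, *e;

    for (pp = &cache_head; (e = *pp) != NULL; pp = &e->next) {
        if (e->gfn != gfn)
            continue;
        if (--e->ref_count == 0) {
            *pp = e->next;  /* real code would unmap and unpin here */
            free(e);
        }
        return;
    }
}
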
Yes, if the vGPU maintained a list of its pinned pages and unpinned them in
vgpu_destroy, that would be fine.
But the hypervisor adapter layer would still need to maintain its own list in
order to catch the entries the vGPU missed, so that the cleanup is complete.
So keeping the DMA mapping cleanup in the hypervisor adapter layer, as sketched
below, makes our lives easier for now :)
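
The cleanup at guest exit then amounts to walking that cache and dropping every
entry, whether or not it still has live references. Continuing the toy model
above (again only a sketch of the idea behind gvt_cache_destroy() being called
from kvmgt_guest_exit(), not the actual code):

/*
 * Destroy-time cleanup, continuing the toy model above: when the guest
 * goes away, walk the cache and drop every entry regardless of its
 * ref_count, since the matching Unmap calls may never have arrived.
 */
static void cache_destroy_all(void)
{
    struct cache_entry *e, *next;

    for (e = cache_head; e; e = next) {
        next = e->next;
        /* real code would unmap the DMA address and unpin the page */
        free(e);
    }
    cache_head = NULL;
}

int main(void)
{
    unsigned long dma;

    map_guest_page(0x1000, &dma);   /* Map count: 2 ... */
    map_guest_page(0x2000, &dma);
    unmap_guest_page(0x1000);       /* ... Unmap count: 1 */

    cache_destroy_all();            /* the leftover entry is still released */
    return 0;
}

The small main() illustrates the point of this thread: the Unmap count stays
below the Map count, and the destroy path still releases the leftover entry.
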
Thanks
Yan