[Xen-devel] [PATCH v2 5/9] xen/gntdev: Allow mappings for DMA buffers
Boris Ostrovsky
boris.ostrovsky at oracle.com
Fri Jun 8 19:21:24 UTC 2018
On 06/08/2018 01:59 PM, Stefano Stabellini wrote:
>
>>>>>>>> @@ -325,6 +401,14 @@ static int map_grant_pages(struct
>>>>>>>> grant_map
>>>>>>>> *map)
>>>>>>>> map->unmap_ops[i].handle = map->map_ops[i].handle;
>>>>>>>> if (use_ptemod)
>>>>>>>> map->kunmap_ops[i].handle =
>>>>>>>> map->kmap_ops[i].handle;
>>>>>>>> +#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
>>>>>>>> + else if (map->dma_vaddr) {
>>>>>>>> + unsigned long mfn;
>>>>>>>> +
>>>>>>>> + mfn = __pfn_to_mfn(page_to_pfn(map->pages[i]));
>>>>>>> Not pfn_to_mfn()?
>>>>>> I'd love to, but pfn_to_mfn is only defined for x86, not ARM: [1]
>>>>>> and [2]
>>>>>> Thus,
>>>>>>
>>>>>> drivers/xen/gntdev.c:408:10: error: implicit declaration of function
>>>>>> ‘pfn_to_mfn’ [-Werror=implicit-function-declaration]
>>>>>> mfn = pfn_to_mfn(page_to_pfn(map->pages[i]));
>>>>>>
>>>>>> So, I'll keep __pfn_to_mfn
>>>>> How will this work on non-PV x86?
>>>> So, you mean I need:
>>>> #ifdef CONFIG_X86
>>>> mfn = pfn_to_mfn(page_to_pfn(map->pages[i]));
>>>> #else
>>>> mfn = __pfn_to_mfn(page_to_pfn(map->pages[i]));
>>>> #endif
>>>>
>>> I'd rather fix it in ARM code. Stefano, why does ARM uses the
>>> underscored version?
>> Do you want me to add one more patch for ARM to wrap __pfn_to_mfn
>> with static inline for ARM? e.g.
>> static inline ...pfn_to_mfn(...)
>> {
>> __pfn_to_mfn();
>> }
>
> A Xen on ARM guest doesn't actually know the mfns behind its own
> pseudo-physical pages. This is why we stopped using pfn_to_mfn and
> started using pfn_to_bfn instead, which will generally return "pfn",
> unless the page is a foreign grant. See include/xen/arm/page.h.
> pfn_to_bfn was also introduced on x86. For example, see the usage of
> pfn_to_bfn in drivers/xen/swiotlb-xen.c. Otherwise, if you don't care
> about other mapped grants, you can just use pfn_to_gfn, that always
> returns pfn.
I think then this code needs to use pfn_to_bfn().
>
> Also, for your information, we support different page granularities in
> Linux as a Xen guest, see the comment at include/xen/arm/page.h:
>
> /*
> * The pseudo-physical frame (pfn) used in all the helpers is always based
> * on Xen page granularity (i.e 4KB).
> *
> * A Linux page may be split across multiple non-contiguous Xen page so we
> * have to keep track with frame based on 4KB page granularity.
> *
> * PV drivers should never make a direct usage of those helpers (particularly
> * pfn_to_gfn and gfn_to_pfn).
> */
>
> A Linux page could be 64K, but a Xen page is always 4K. A granted page
> is also 4K. We have helpers to take into account the offsets to map
> multiple Xen grants in a single Linux page, see for example
> drivers/xen/grant-table.c:gnttab_foreach_grant. Most PV drivers have
> been converted to be able to work with 64K pages correctly, but if I
> remember correctly gntdev.c is the only remaining driver that doesn't
> support 64K pages yet, so you don't have to deal with it if you don't
> want to.
I believe somewhere in this series there is a test for PAGE_SIZE vs.
XEN_PAGE_SIZE. Right, Oleksandr?
Thanks for the explanation.
-boris
More information about the dri-devel
mailing list