Usage of dma-buf sg-tables

Daniel Vetter daniel at ffwll.ch
Fri Nov 1 20:58:25 CET 2013


On Fri, Nov 1, 2013 at 2:33 PM, Thomas Hellstrom <thellstrom at vmware.com> wrote:
> Considering that the Linux DMA-API states that information in an sg-list may
> be destroyed when it's mapped,
> it seems to me that at least one of the drm prime functions uses invalid
> assumptions.
>
> In particular, I don't think it's safe to assume that pages in a single
> sg-list segment are contiguous after mapping, so
> if we want struct page pointers we should use
>
> pfn = dma_to_phys((sg_dma_address(sg) + p_offset*PAGE_SIZE)) >> PAGE_SHIFT
>
> and if the pfn is valid, convert it to a struct page.
>
> (Incorrect code is, for example, in drm_prime_sg_to_page_addr_arrays)
>
> Or does dma-buf require that page info in sg-lists be kept intact across
> the map operation?
>
> BTW this brings up another question: it's stated that the above function is
> needed by the TTM driver in order to do
> correct fault handling. This seems odd; TTM shouldn't be able to mmap() or
> fault an imported dma-buf, right?
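
For reference, a minimal sketch of the lookup Thomas suggests above,
assuming dma_to_phys() is usable for the device in question (it is an
arch-specific helper rather than a portable driver API); the helper
prime_page_at and its parameters are hypothetical:

#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>
#include <linux/mm.h>

/* Walk the *mapped* sg-table and translate a DMA address back to a page. */
static struct page *prime_page_at(struct device *dev, struct sg_table *sgt,
				  unsigned long page_index)
{
	struct scatterlist *sg;
	unsigned int i;

	for_each_sg(sgt->sgl, sg, sgt->nents, i) {
		/* assumes DMA segment lengths are page multiples */
		unsigned int npages = sg_dma_len(sg) >> PAGE_SHIFT;

		if (page_index < npages) {
			dma_addr_t daddr = sg_dma_address(sg) +
					   ((dma_addr_t)page_index << PAGE_SHIFT);
			unsigned long pfn = dma_to_phys(dev, daddr) >> PAGE_SHIFT;

			/* only valid if the address maps back to real memory */
			return pfn_valid(pfn) ? pfn_to_page(pfn) : NULL;
		}
		page_index -= npages;
	}
	return NULL;
}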

The idea of the interface is that the backing storage is completely
opaque to the importer and might not even be backed by something with
a struct page attached. But since that'd require us to rework lots of
code (e.g. add a new case to the ttm code besides ioremap and kmap to
get at the backing storage from the cpu, and also frob the fault
handler a bit) Dave just hacked something up. No one ever touched it
since. And I'm actually not too sure how the underlying pages survive
the sg dma mapping ...
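
For illustration, a minimal sketch of what a well-behaved importer is
supposed to do with a mapped sg-table: touch only the DMA addresses and
lengths, never sg_page(), since there may be no struct page behind the
buffer at all. my_device and my_device_program_dma are hypothetical
stand-ins for the importer's own hardware setup.

#include <linux/scatterlist.h>

/* Hypothetical importer-side device and programming hook. */
struct my_device;
void my_device_program_dma(struct my_device *mydev, dma_addr_t addr,
			   unsigned int len);

static void my_importer_bind(struct my_device *mydev, struct sg_table *sgt)
{
	struct scatterlist *sg;
	unsigned int i;

	/* Only the dma_address/dma_len pairs are guaranteed to be valid. */
	for_each_sg(sgt->sgl, sg, sgt->nents, i)
		my_device_program_dma(mydev, sg_dma_address(sg),
				      sg_dma_len(sg));
}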
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

