[PATCH 1/2] ttm: add prime sharing support to TTM

Dave Airlie airlied at gmail.com
Fri May 4 07:49:03 PDT 2012


On Fri, May 4, 2012 at 3:33 PM, Konrad Rzeszutek Wilk
<konrad.wilk at oracle.com> wrote:
> On Fri, May 04, 2012 at 02:46:54PM +0100, Dave Airlie wrote:
>> From: Dave Airlie <airlied at redhat.com>
>>
>> This adds the ability for ttm common code to take an SG table
>> and use it as the backing for a slave TTM object.
>
> I took a look at both patches from the perspective of the TTM DMA pool
> code and it looks fine (thought I would prefer to test this first - but
> I don't have the hardware).
>
> What I am curious is that you are allowing to map two or more
> of SG pages to different PCI devices. And for that you are using
> the nouveau_gem_map_dma_buf, which does this (roughly)
>
>  ttm->sg = sg_alloc_table();
>  for_each_sg(ttm->sg->sgl, sg, nents, i)
>        sg_dma_address(sg) = dma_map(...);
>
> So I am seeing two potential issues:
>  1). ttm->sg = sg_alloc() is called for every new device.
>    In other words, if you end up with three of these attached PCI devices
>    you are going to leak the sg_alloc() result and overwrite
>    ttm->sg every time. But I might be misreading how the import code
>    works.

I think you are mixing up the export and import sides.

The export side takes a TTM object and creates an sg table mapped into
the importer's address space.

The import side takes an exported sg_table and wraps an object around it.

So for multiple importers, there are multiple sg_tables. For one
importer there should only be one sg_table per object.

>  2). You are going to fill in sg->dma_address() for different PCI devices.
>     Meaning - the DMA address might have a different value depending on
>     whether there is a different IOMMU for each of the devices.
>     On x86 that is usually not the case (the odd ball being the IBM high-end
>     boxes that have a Calgary-X IOMMU for PCI buses).
>    What this means is that the DMA address you are programming into
>    the subsequent PCI devices might be the DMA address for a previous PCI
>    device.
>
> But both of these issues are only a problem if the slave cards are
> different PCI devices. If they are the same - then I think you are OK
> (except the multiple calls to import, which would cause a memory leak).

Yeah, we need to stop a slave from calling map_dma_buf multiple times,
but we block that by keeping a list of currently imported buffer handles.

Dave.


More information about the dri-devel mailing list