[bug report] drm/ttm: add transparent huge page support for DMA allocations v2

Koenig, Christian Christian.Koenig at amd.com
Mon Jul 15 13:38:46 UTC 2019


On 15.07.19 at 14:50, Christoph Hellwig wrote:
> On Mon, Jul 15, 2019 at 10:41:14AM +0000, Koenig, Christian wrote:
> [SNIP]
>>>>> that are DMA coherent.  Adding a DMA_ATTR_UNCACHED would be mostly
>>>>> trivial, we just need to define proper semantics for it.
>>>> Sounds good. Can you do this? Cause I only know x86 and a few bits of ARM.
>>> So what semantics do you need?  Given that we have some architectures
>>> that can't set pages as uncached at runtime it either has to be a hint,
>>> or we could fail it if not supported by implementation.  Which one would
>>> you prefer?
>> Well first of all I think we need a function which can tell if it's
>> supported in general on the current architecture.
>>
>> Then I've asked around a bit and we unfortunately found a few more cases
>> I didn't know about before where uncached access to system memory is
>> mandatory. The only good news I have is that the AMD devices needing
>> that are all integrated into the CPU. So at least for AMD hardware we
>> can safely assume x86 for those cases.
>>
>> But because of that I would say we should hard fail if it is not
>> possible to get some uncached memory.
> So I guess the semantics preferred by you is a DMA_ATTR_UNCACHED flag
> that would fail if not supported.  That should be relatively easy
> to support.  Initially you'd need that on x86 with the direct mapping
> and AMD IOMMU?

Currently I need that for both dma_alloc_attrs() and dma_map_page_attrs().

But I hope to get rid of the uncached use case for dma_map_page_attrs() 
and only use dma_alloc_attrs().

So if that makes things easier you can just ignore dma_map_page_attrs() 
and we just continue to use the hack we already have till I manage to 
migrate all drivers using TTM away from that.

>> When we return a proper error code we at least give the user a good idea
>> of what's going wrong.
>>
>> I mean the only other possible workaround in the kernel I can see is,
>> instead of trying to map the page backing a certain userspace address,
>> to change which page that userspace address points to. You know what I
>> mean? (It's kind of hard to explain because I'm not a native English
>> speaker.) But that approach sounds like a deep rabbit hole to me.
> Isn't that kinda what we are doing for the device private memory
> work in hmm?  But it could certainly go down a rathole fast.

Oh, good point. Yeah that is very similar.

Instead of replacing the system memory page with a device memory page, 
we would replace one inaccessible system memory page with an accessible 
one. But apart from that it is essentially the same functionality.

Regards,
Christian.
