[EARLY RFC][PATCH] dma-buf: Add dma-buf heaps framework
Andrew F. Davis
afd at ti.com
Tue Mar 5 18:05:24 UTC 2019
On 3/4/19 7:16 PM, John Stultz wrote:
> On Mon, Mar 4, 2019 at 6:53 AM Andrew F. Davis <afd at ti.com> wrote:
>> On 3/1/19 6:06 AM, Brian Starkey wrote:
>>> On Mon, Feb 25, 2019 at 08:36:04AM -0600, Andrew F. Davis wrote:
>>>> +static long dma_heap_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
>>>> +{
>>>> + switch (cmd) {
>>>> + case DMA_HEAP_IOC_ALLOC:
>>>> + {
>>>> + struct dma_heap_allocation_data heap_allocation;
>>>> + struct dma_heap *heap = filp->private_data;
>>>> + int fd;
>>>> +
>>>> + if (copy_from_user(&heap_allocation, (void __user *)arg, _IOC_SIZE(cmd)))
>>>> + return -EFAULT;
>>>
>>> Am I right in thinking "cmd" comes from userspace? It might be a good
>>> idea to not trust _IOC_SIZE(cmd) and use our own. I'm looking at
>>> Documentation/ioctl/botching-up-ioctls.txt and drm_ioctl()
>>>
>>
>> Yes, cmd is from userspace, but we are in a switch-case that has already
>> matched cmd == DMA_HEAP_IOC_ALLOC, which is sized correctly.
>>
>
> Well, even so, I went through and made this cleanup over the weekend,
> as sizeof(heap_allocation) is probably more straightforward.
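Yeah, something like this should do it (untested sketch, pulling the
allocation case out into a helper and keeping the names from the patch
above):

static long dma_heap_ioctl_alloc(struct file *filp, unsigned long arg)
{
	struct dma_heap_allocation_data heap_allocation;
	struct dma_heap *heap = filp->private_data;

	/* check against our own struct size, not the userspace-controlled
	 * _IOC_SIZE(cmd), per Documentation/ioctl/botching-up-ioctls.txt */
	if (copy_from_user(&heap_allocation, (void __user *)arg,
			   sizeof(heap_allocation)))
		return -EFAULT;

	/* ... allocate from 'heap' and install the new dma-buf fd ... */
	return 0;
}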
>
> The current patchset against v5.0 (with hikey960 patches), which
> includes the flags and other suggested changes is here:
> https://git.linaro.org/people/john.stultz/android-dev.git/log/?h=dev/dma-buf-heap
>
> W/ userland support here:
> https://android-review.googlesource.com/c/device/linaro/hikey/+/909436
>
> I'm hoping to send this out for a real RFC in the next few days. So
> Andrew, if you can check it out and make sure it suits you ok I'd
> appreciate it!
>
Nothing at a high level, just a couple of little things I can point out
when you send it. I think this is in good shape for a real RFC.
>>>> + heap->num_of_buffers = 0;
>>>> + heap->num_of_alloc_bytes = 0;
>>>> + heap->alloc_bytes_wm = 0;
>>>> + spin_lock_init(&heap->stat_lock);
>>>> + heap_root = debugfs_create_dir(heap->name, dma_heap_debug_root);
>>>> + debugfs_create_u64("num_of_buffers", 0444, heap_root, &heap->num_of_buffers);
>>>> + debugfs_create_u64("num_of_alloc_bytes", 0444, heap_root, &heap->num_of_alloc_bytes);
>>>> + debugfs_create_u64("alloc_bytes_wm", 0444, heap_root, &heap->alloc_bytes_wm);
>>>
>>> I can see it being useful to be able to reset this one.
>>>
>>
>> Agreed, it looks like these will be pulled into the heaps themselves in
>> the next rev John is working on, so it shouldn't matter here.
>
> Yea. Sort of half-way done on this. I yanked the stats, but haven't
> re-added them back to the heaps yet.
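When you re-add them, making the watermark resettable should be simple
enough. A rough sketch, assuming the stats stay u64 fields on the heap
guarded by stat_lock as in the patch above (any write knocks the
watermark back down to the current usage):

static int wm_get(void *data, u64 *val)
{
	struct dma_heap *heap = data;

	spin_lock(&heap->stat_lock);
	*val = heap->alloc_bytes_wm;
	spin_unlock(&heap->stat_lock);
	return 0;
}

static int wm_set(void *data, u64 val)
{
	struct dma_heap *heap = data;

	/* any write resets the watermark to the current allocated bytes */
	spin_lock(&heap->stat_lock);
	heap->alloc_bytes_wm = heap->num_of_alloc_bytes;
	spin_unlock(&heap->stat_lock);
	return 0;
}
DEFINE_SIMPLE_ATTRIBUTE(wm_fops, wm_get, wm_set, "%llu\n");

	/* in the registration path: 0644 instead of 0444 so userspace
	 * can trigger the reset */
	debugfs_create_file("alloc_bytes_wm", 0644, heap_root, heap, &wm_fops);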
>
>> What we are moving to is a better (I think) ownership model: 'DMA-heaps'
>> tracks only 'heaps', and 'heaps' track their 'buffers'. In the above we
>> have 'DMA-heaps' tracking info on 'buffers', bypassing the 'heaps' layer,
>> so in the next rev the 'DMA-heaps' core will ask the 'heaps' to report
>> back stats.
>
> Yea. This matches my plan.
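For the record, what I have in mind on our side is just a stats hook in
the per-heap ops that the core can call, something like this (hypothetical
names, just to show the direction):

struct dma_heap_stats {
	u64 num_buffers;
	u64 alloc_bytes;
	u64 alloc_bytes_wm;
};

/* in the heap ops: the DMA-heaps core queries each heap for its stats
 * instead of tracking buffers behind the heap's back */
int (*get_stats)(struct dma_heap *heap, struct dma_heap_stats *stats);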
>
>>>> +/*
>>>> + * mappings of this buffer should be un-cached, otherwise dmabuf heaps will
>>>> + * need cache maintenance using DMA_BUF_SYNC_* when the buffer is mapped for dma
>>>> + */
>>>> +#define DMA_HEAP_FLAG_COHERENT 1
>>>
>>> I'm not really clear on the intention of this flag, and I do think
>>> that "COHERENT" is a confusing/overloaded term.
>>>
>>
>> It is, I wanted to use the term as used by the other DMA frameworks, but
>> I don't really like it either.
>>
>>> To me, if the buffer is actually coherent with DMA masters, then
>>> there's no need for any cache maintenance - which is the opposite of
>>> what the comment says.
>>>
>>
>> Buffers themselves can't be (non)coherent; they are just memory. Masters
>> on a bus can have coherent interactions depending on which masters are
>> talking and how the buffer in question is managed. So really the term
>> isn't right almost anywhere, as it only applies from the perspective of
>> the local core (running Linux), and only if simply not locally caching
>> the buffer is enough to make it "coherent". But that is a rant best
>> saved for another time.
>>
>> For us, let's drop that flag. If you want to allocate a non-locally-cached
>> buffer, then it can be its own heap that only provides that type of
>> allocation. Or the same heap exporter can provide both types by
>> registering itself twice with the DMA-heaps core: once for an interface
>> that gives out cached buffers and once for uncached ones.
>
> So I've not removed this yet. My only concern is that if it's a
> reasonably common attribute for heaps to implement, we probably should
> keep it, rather than pushing for new heaps for coherent/non-coherent.
> This comes from my experience creating the CLOCK_REALTIME_ALARM and
> CLOCK_BOOTTIME_ALARM clockids, then later realizing ALARM should have
> been just an attribute flag on the REALTIME/BOOTTIME clockids. I'd
> rather not rework the heaps to have system and system-coherent and
> cma and cma-coherent, if it's a general thing.
>
> That said, I did find the flag's meaning confusing myself initially,
> so maybe holding off on it for now (if we don't have a clear user) is
> a good idea?
>
I'd say hold off on it for now. My problem is that caching is not
something you can really switch at runtime; a heap's backing memory is
either cached or not. This is an issue Brian and I had discussed a bit
on the other thread.
Take a carveout heap, for instance: if in DT the reserved-memory node is
marked "no-map", then that memory will be non-cached and you cannot ever
make it cached. You could try to hand out cached userspace mappings on
mmap(), but those pages will not have a valid struct page, so most of
the dma_sync_* stuff will break. The same is true the other way: if it
is cached, the kernel will always have valid cached entries for that
memory in the kernel logical address area, so handing out non-cached
mappings to userspace breaks the aliased-mapping-attribute rules on ARM
(and is probably dangerous on other arches).
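Concretely, a non-cached heap bakes the attribute into its mmap op once
and for all. A sketch (struct carveout_buffer and its paddr field are
made up for illustration):

static int carveout_heap_mmap(struct dma_buf *dmabuf,
			      struct vm_area_struct *vma)
{
	struct carveout_buffer *buffer = dmabuf->priv; /* hypothetical */

	/* the non-cached heap always hands out writecombine mappings;
	 * there is no flag to flip at runtime */
	vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
	return remap_pfn_range(vma, vma->vm_start,
			       PFN_DOWN(buffer->paddr) + vma->vm_pgoff,
			       vma->vm_end - vma->vm_start,
			       vma->vm_page_prot);
}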
In the end you only have heaps with cached memory or heaps with
non-cached memory; you cannot switch between the two with a flag at
runtime.
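To make the register-twice suggestion from above concrete, a rough
sketch, assuming the dma_heap_add() style of registration from this
patchset (the carveout ops and names are made up):

static struct dma_heap carveout_cached_heap = {
	.name = "carveout",
	.ops = &carveout_cached_ops,	/* hands out cached mappings */
};

static struct dma_heap carveout_uncached_heap = {
	.name = "carveout-uncached",
	.ops = &carveout_uncached_ops,	/* hands out non-cached mappings */
};

static int __init carveout_heap_init(void)
{
	int ret;

	/* one exporter, two heap interfaces: userspace picks the caching
	 * behavior by which device node it allocates from */
	ret = dma_heap_add(&carveout_cached_heap);
	if (ret)
		return ret;
	return dma_heap_add(&carveout_uncached_heap);
}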
Thanks,
Andrew
> thanks
> -john
>