[RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)

Andrew F. Davis afd at ti.com
Sat Oct 19 13:41:27 UTC 2019


On 10/18/19 2:57 PM, Ayan Halder wrote:
> On Fri, Oct 18, 2019 at 11:49:22AM -0700, John Stultz wrote:
>> On Fri, Oct 18, 2019 at 11:41 AM Ayan Halder <Ayan.Halder at arm.com> wrote:
>>> On Fri, Oct 18, 2019 at 09:55:17AM +0000, Brian Starkey wrote:
>>>> On Thu, Oct 17, 2019 at 01:57:45PM -0700, John Stultz wrote:
>>>>> On Thu, Oct 17, 2019 at 12:29 PM Andrew F. Davis <afd at ti.com> wrote:
>>>>>> On 10/17/19 3:14 PM, John Stultz wrote:
>>>>>>> But if the objection stands, do you have a proposal for an alternative
>>>>>>> way to enumerate a subset of CMA heaps?
>>>>>>>
>>>>>> When it was in staging, ION had to reach into the CMA framework, as
>>>>>> the other direction would not be allowed, so cma_for_each_area() was
>>>>>> added. If DMA-BUF heaps is not in staging then we can do the opposite
>>>>>> and have the CMA framework register heaps itself using our framework.
>>>>>> That way the CMA system could decide which areas to export or not
>>>>>> (maybe based on a DT property or similar).
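
A minimal sketch of that direction, assuming the dma_heap_add() /
dma_heap_export_info interface from this series; cma_heap_ops and the
cma_area_is_exported() policy check are placeholders, not code from the
patches:

    #include <linux/cma.h>
    #include <linux/dma-heap.h>
    #include <linux/err.h>
    #include <linux/init.h>

    static int __init cma_register_heap(struct cma *cma, void *data)
    {
            struct dma_heap_export_info exp_info = {
                    .name = cma_get_name(cma),
                    .ops  = &cma_heap_ops,  /* placeholder heap ops */
                    .priv = cma,
            };

            /* hypothetical policy hook, e.g. driven by a DT property */
            if (!cma_area_is_exported(cma))
                    return 0;

            return PTR_ERR_OR_ZERO(dma_heap_add(&exp_info));
    }

    static int __init cma_heaps_init(void)
    {
            /* walk every CMA area and register the ones to be exposed */
            return cma_for_each_area(cma_register_heap, NULL);
    }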
>>>>>
>>>>> Ok. Though the CMA core doesn't have much sense of DT details either,
>>>>> so it would probably have to be done in the reserved_mem logic, which
>>>>> doesn't feel right to me.
>>>>>
>>>>> I'd guess we should have some sort of DT binding to describe a
>>>>> dmabuf cma heap and from that node link to a CMA node via a
>>>>> memory-region phandle. Along with maybe the default heap as well? Not
>>>>> eager to get into another binding review cycle, and I'm not sure what
>>>>> non-DT systems will do yet, but I'll take a shot at it and iterate.
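
If a binding along those lines does land, the heap side could presumably
resolve the memory-region phandle with the existing reserved-memory
helpers. A sketch under that assumption (probe only; the function name is
made up, and registration reuses the hypothetical cma_register_heap()
from the sketch above):

    #include <linux/dma-contiguous.h>
    #include <linux/of_reserved_mem.h>
    #include <linux/platform_device.h>

    static int dmabuf_cma_heap_probe(struct platform_device *pdev)
    {
            struct cma *cma;
            int ret;

            /* attach the CMA area named by the node's memory-region phandle */
            ret = of_reserved_mem_device_init(&pdev->dev);
            if (ret)
                    return ret;

            cma = dev_get_cma_area(&pdev->dev);
            if (!cma)
                    return -ENODEV;

            /* then register it as a dma-buf heap, as in the sketch above */
            return cma_register_heap(cma, NULL);
    }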
>>>>>
>>>>>> The end result is the same, so we can make this change later (it has
>>>>>> to come after DMA-BUF heaps lands anyway).
>>>>>
>>>>> Well, I'm hesitant to merge code that exposes all the CMA heaps and
>>>>> then add patches that become more selective, should anyone depend on
>>>>> the initial behavior. :/
>>>>
>>>> How about only auto-adding the system default CMA region (cma->name ==
>>>> "reserved")?
>>>>
>>>> And/or the CMA auto-add could be behind a config option? It seems a
>>>> shame to further delay this, and the CMA heap itself really is useful.
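
Brian's default-region-only idea could look roughly like the below,
assuming dev_get_cma_area(NULL) still returns the system default area and
reusing the hypothetical cma_register_heap() from the earlier sketch:

    static int __init add_default_cma_heap(void)
    {
            struct cma *default_cma = dev_get_cma_area(NULL);

            /* only the system default area (linux,cma-default / "reserved") */
            if (!default_cma)
                    return 0;

            return cma_register_heap(default_cma, NULL);
    }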
>>>>
>>> A bit of a detour: coming back to the issue of why the following node
>>> was not getting detected by the dma-buf heaps framework.
>>>
>>>         reserved-memory {
>>>                 #address-cells = <2>;
>>>                 #size-cells = <2>;
>>>                 ranges;
>>>
>>>                 display_reserved: framebuffer at 60000000 {
>>>                         compatible = "shared-dma-pool";
>>>                         linux,cma-default;
>>>                         reusable;    <<<----- This was missing in our earlier node
>>>                         reg = <0 0x60000000 0 0x08000000>;
>>>                 };
>>
>> Right. It has to be a CMA region for us to expose it from the cma heap.
>>
>>
>>> With 'reusable', rmem_cma_setup() succeeds, but the kernel crashes as follows:
>>>
>>> [    0.450562] WARNING: CPU: 2 PID: 1 at mm/cma.c:110 cma_init_reserved_areas+0xec/0x22c
>>
>> Is the value 0x60000000 you're using something you just guessed at? It
>> seems like the warning here is saying the pfn calculated from the base
>> address isn't valid.
> It is a valid memory region we use to allocate framebuffers.


But does it have a valid kernel virtual mapping? Most ARM systems (just
assuming you are working on ARM :)) that I'm familiar with have DRAM
starting at 0x80000000, so valid pfns don't start until that point. Is
the address you are reserving an SRAM?
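
One way to check that from a throwaway debug patch (illustrative only,
not something in this series) is to print whether the base address is
covered by memory the kernel knows about:

    #include <linux/memblock.h>
    #include <linux/mm.h>
    #include <linux/pfn.h>

    static void __init check_cma_base(phys_addr_t base)
    {
            /* e.g. check_cma_base(0x60000000) from early init */
            pr_info("cma base %pa: memblock_is_memory=%d pfn_valid=%d\n",
                    &base, memblock_is_memory(base),
                    pfn_valid(PHYS_PFN(base)));
    }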

Andrew


>>
>> thanks
>> -john

