[RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)

Ayan Halder Ayan.Halder at arm.com
Fri Oct 18 18:51:50 UTC 2019


++ john.stultz at linaro.org (Sorry, somehow I missed including your
address when sending. :( )
On Fri, Oct 18, 2019 at 06:41:24PM +0000, Ayan Halder wrote:
> On Fri, Oct 18, 2019 at 09:55:17AM +0000, Brian Starkey wrote:
> > On Thu, Oct 17, 2019 at 01:57:45PM -0700, John Stultz wrote:
> > > On Thu, Oct 17, 2019 at 12:29 PM Andrew F. Davis <afd at ti.com> wrote:
> > > > On 10/17/19 3:14 PM, John Stultz wrote:
> > > > > But if the objection stands, do you have a proposal for an alternative
> > > > > way to enumerate a subset of CMA heaps?
> > > > >
> > > > When in staging ION had to reach into the CMA framework as the other
> > > > direction would not be allowed, so cma_for_each_area() was added. If
> > > > DMA-BUF heaps is not in staging then we can do the opposite, and have
> > > > the CMA framework register heaps itself using our framework. That way
> > > > the CMA system could decide what areas to export or not (maybe based on
> > > > a DT property or similar).
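[Editor's note: a minimal sketch of the registration direction Andrew describes, assuming the `dma_heap_add()`/`struct dma_heap_export_info` interface from this patch series and the existing `cma_for_each_area()`/`cma_get_name()` helpers; the heap ops wiring and any DT-based filtering policy are elided:]

```c
/* Hedged sketch: the CMA side (or a thin glue module) registers each
 * CMA area as a dma-buf heap, instead of the heap code reaching into
 * the CMA framework. Exact ops wiring depends on the series' helpers. */
static int __add_cma_heap(struct cma *cma, void *data)
{
	struct dma_heap_export_info exp_info;

	exp_info.name = cma_get_name(cma);
	exp_info.ops = &cma_heap_ops;	/* allocate from this CMA area */
	exp_info.priv = cma;

	/* Policy hook: the CMA system could skip areas it does not
	 * want exported (e.g. based on a DT property or similar). */
	return PTR_ERR_OR_ZERO(dma_heap_add(&exp_info));
}

static int __init add_cma_heaps(void)
{
	cma_for_each_area(__add_cma_heap, NULL);
	return 0;
}
module_init(add_cma_heaps);
```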
> > > 
> > > Ok. Though the CMA core doesn't have much sense of DT details either,
> > > so it would probably have to be done in the reserved_mem logic, which
> > > doesn't feel right to me.
> > > 
> > > I'd probably guess we should have some sort of dt binding to describe
> > > a dmabuf cma heap and from that node link to a CMA node via a
> > > memory-region phandle. Along with maybe the default heap as well? Not
> > > eager to get into another binding review cycle, and I'm not sure what
> > > non-DT systems will do yet, but I'll take a shot at it and iterate.
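[Editor's note: purely as an illustration of the binding shape John sketches; no such binding exists at this point, and the compatible string and heap-name property below are invented for the example:]

```dts
/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		display_cma: framebuffer@60000000 {
			compatible = "shared-dma-pool";
			reusable;
			reg = <0 0x60000000 0 0x08000000>;
		};
	};

	/* Hypothetical consumer node selecting which CMA region to
	 * export as a dma-buf heap, linked via a memory-region phandle. */
	dmabuf-cma-heap {
		compatible = "linux,dmabuf-cma-heap";	/* invented */
		memory-region = <&display_cma>;
		linux,heap-name = "framebuffer";	/* invented */
	};
};
```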
> > > 
> > > > The end result is the same so we can make this change later (it has to
> > > > come after DMA-BUF heaps is in anyway).
> > > 
> > > Well, I'm hesitant to merge code that exposes all the CMA heaps and
> > > then add patches that becomes more selective, should anyone depend on
> > > the initial behavior. :/
> > 
> > How about only auto-adding the system default CMA region (cma->name ==
> > "reserved")?
> > 
> > And/or the CMA auto-add could be behind a config option? It seems a
> > shame to further delay this, and the CMA heap itself really is useful.
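[Editor's note: a minimal version of Brian's suggestion, sketched under the assumption that `dev_get_cma_area(NULL)` returns the system default CMA area and that `dma_heap_add()`/`cma_heap_ops` come from the CMA heap patch in this series:]

```c
/* Hedged sketch: export only the system default CMA area as a heap,
 * leaving named reserved-memory regions alone until the enumeration
 * question (DT binding, config option, etc.) is settled. */
static int __init add_default_cma_heap(void)
{
	struct cma *default_cma = dev_get_cma_area(NULL);
	struct dma_heap_export_info exp_info;

	if (!default_cma)
		return 0;

	exp_info.name = cma_get_name(default_cma);
	exp_info.ops = &cma_heap_ops;	/* ops from the CMA heap patch */
	exp_info.priv = default_cma;

	return PTR_ERR_OR_ZERO(dma_heap_add(&exp_info));
}
module_init(add_default_cma_heap);
```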
> >
> A bit of a detour: coming back to the issue of why the following node
> was not getting detected by the dma-buf heaps framework.
> 
>         reserved-memory {
>                 #address-cells = <2>;
>                 #size-cells = <2>;
>                 ranges;
> 
>                 display_reserved: framebuffer@60000000 {
>                         compatible = "shared-dma-pool";
>                         linux,cma-default;
>                         reusable;	/* <-- this was missing in our earlier node */
>                         reg = <0 0x60000000 0 0x08000000>;
>                 };
>         };
> 
> Quoting reserved-memory.txt:
> "The operating system can use the memory in this region with the limitation that
>  the device driver(s) owning the region need to be able to reclaim it back"
> 
> Thus, per my observation, without 'reusable', rmem_cma_setup()
> returns -EINVAL and the reserved memory is not added as a CMA region.
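[Editor's note: for reference, the guard that produces that -EINVAL sits at the top of rmem_cma_setup() in kernel/dma/contiguous.c; an approximate, from-memory excerpt of the v5.3-era code, with the tail elided:]

```c
/* Approximate excerpt: a reserved-memory node is only accepted as a
 * CMA region if it is marked "reusable", is not "no-map", and is
 * aligned to the CMA granularity (pageblock/MAX_ORDER alignment). */
static int __init rmem_cma_setup(struct reserved_mem *rmem)
{
	phys_addr_t align = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
	phys_addr_t mask = align - 1;
	unsigned long node = rmem->fdt_node;

	if (!of_get_flat_dt_prop(node, "reusable", NULL) ||
	    of_get_flat_dt_prop(node, "no-map", NULL))
		return -EINVAL;

	if ((rmem->base & mask) || (rmem->size & mask)) {
		pr_err("Reserved memory: incorrect alignment of CMA region\n");
		return -EINVAL;
	}
	/* ... cma_init_reserved_mem(), dma_contiguous_early_fixup() ... */
	return 0;
}
```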
> 
> With 'reusable', rmem_cma_setup() succeeds, but the kernel crashes as follows:
> 
> [    0.450562] WARNING: CPU: 2 PID: 1 at mm/cma.c:110 cma_init_reserved_areas+0xec/0x22c
> [    0.458415] Modules linked in:                                                                                                             
> [    0.461470] CPU: 2 PID: 1 Comm: swapper/0 Not tainted 5.3.0-rc4-01377-g51dbcf03884c-dirty #15                                              
> [    0.470017] Hardware name: ARM Juno development board (r0) (DT)                                                                            
> [    0.475953] pstate: 80000005 (Nzcv daif -PAN -UAO)                                                                                         
> [    0.480755] pc : cma_init_reserved_areas+0xec/0x22c  
> [    0.485643] lr : cma_init_reserved_areas+0xe8/0x22c 
> <----snip register dump --->
> 
> [    0.600646] Unable to handle kernel paging request at virtual address ffff7dffff800000
> [    0.608591] Mem abort info:
> [    0.611386]   ESR = 0x96000006
> <---snip uninteresting bits --->
> [    0.681069] pc : cma_init_reserved_areas+0x114/0x22c
> [    0.686043] lr : cma_init_reserved_areas+0xe8/0x22c
> 
> 
> I am looking into this now. My final objective is to get "/dev/dma_heap/framebuffer"
> (as a CMA heap).
> Any leads?
> 
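[Editor's note: for context, once a "framebuffer" heap does show up, the intended userspace flow is the standard dma-heap allocation ioctl from this series' UAPI (linux/dma-heap.h); the heap name and buffer size below are just examples, and the program needs a kernel with the heap present to actually run:]

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>

int main(void)
{
	struct dma_heap_allocation_data alloc = {
		.len = 8 * 1024 * 1024,		/* example: 8 MiB buffer */
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	int heap_fd = open("/dev/dma_heap/framebuffer", O_RDWR);

	if (heap_fd < 0) {
		perror("open heap");
		return 1;
	}
	if (ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc) < 0) {
		perror("DMA_HEAP_IOCTL_ALLOC");
		return 1;
	}
	printf("got dma-buf fd %d\n", alloc.fd);	/* fd filled in by the kernel */
	close(alloc.fd);
	close(heap_fd);
	return 0;
}
```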
> > Cheers,
> > -Brian
> > 
> > > 
> > > So, <sigh>, I'll start on the rework for the CMA bits.
> > > 
> > > That said, I'm definitely wanting to make some progress on this patch
> > > series, so maybe we can still merge the core/helpers/system heap and
> > > just hold the cma heap for a rework on the enumeration bits. That way
> > > we can at least get other folks working on switching their vendor
> > > heaps from ION.
> > > 
> > > Sumit: Does that sound ok? Assuming no other objections, can you take
> > > the v11 set minus the CMA heap patch?
> > > 
> > > thanks
> > > -john
> _______________________________________________
> dri-devel mailing list
> dri-devel at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

