DMA-heap driver hints

John Stultz jstultz at google.com
Wed Jan 25 18:59:25 UTC 2023


Sorry for the delay, this was almost ready to send, but then got
forgotten in my drafts folder.

On Mon, Jan 23, 2023 at 11:15 PM Christian König
<christian.koenig at amd.com> wrote:
> Am 24.01.23 um 06:19 schrieb John Stultz:
> > On Mon, Jan 23, 2023 at 8:29 AM Christian König
> > <christian.koenig at amd.com> wrote:
> >> Am 23.01.23 um 14:55 schrieb Laurent Pinchart:
> >>> - I assume some drivers will be able to support multiple heaps. How do
> >>>     you envision this being implemented ?
> >> I don't really see a use case for this.
> >>
> >> We do have some drivers which say: for this use case you can use
> >> whatever you want, but for that use case you need to use specific memory
> >> (scan out on GPUs for example works like this).
> >>
> > [snipping the constraints argument, which I agree with]
> >> What we do have is compatibility between heaps. E.g. a CMA heap is
> >> usually compatible with the system heap or might even be a subset of
> >> another CMA heap. But I wanted to add that as next step to the heaps
> >> framework itself.
> > So the difficult question is how is userland supposed to know which
> > heap is compatible with which?
>
> The heaps should know which other heap they are compatible with.
>
> E.g. the CMA heap should have a link to the system heap because it can
> handle all system memory allocations as well.
>
> If we have a specialized CMA heap (for example for 32bit DMA) it should
> have a link to the general CMA heap.

This is an interesting idea, but it seems to assume a linear, or at
least converging, "compatibility" order, which I don't think is always
the case.
(For instance, there may be secure heaps that only a small set of
devices have access to, but supporting secure memory doesn't imply
support for system memory on all devices, or vice versa.)

So I really think being able to express support for multiple heaps
would be important to resolve the majority of these edge cases.

Also, having a single link ordering means the links have to go from
the heap that satisfies more constraints to the heap that satisfies
fewer (which is sort of the reverse of how I'd think of
compatibility). That makes the solving logic for userland doable, but
somewhat complex and non-intuitive, since you're searching for the
most "satisfying" heap from the set, which will be one of the
starting points.

Whereas finding the intersection of lists seems a bit more straightforward.
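
To make that concrete, here's a rough userland sketch of the
intersection approach. It assumes each device can somehow be queried
for the list of heap names it accepts (that query interface is exactly
what's under discussion and doesn't exist today), so the lists and
heap names below are purely illustrative:

/*
 * Purely hypothetical sketch: assume userland can query each device
 * for the heap names it accepts.  Picking a heap both devices can use
 * is then just an intersection of the two lists.
 */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Return the first heap name present in both lists, or NULL if none. */
static const char *find_common_heap(const char *const *a, size_t a_len,
				    const char *const *b, size_t b_len)
{
	for (size_t i = 0; i < a_len; i++)
		for (size_t j = 0; j < b_len; j++)
			if (!strcmp(a[i], b[j]))
				return a[i];
	return NULL;
}

int main(void)
{
	/* Device 1 can only scan out of CMA; device 2 takes CMA or system. */
	const char *dev1[] = { "cma" };
	const char *dev2[] = { "cma", "system" };

	const char *heap = find_common_heap(dev1, 1, dev2, 2);
	printf("allocate from: %s\n", heap ? heap : "(no common heap)");
	return 0;
}

And if each device listed its heaps from most to least preferred,
taking the first match in that order would give a reasonable default
without any link chasing.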


> > If you have two devices, one that points to heap "foo" and the other
> > points to heap "bar", how does userland know that "foo" satisfies the
> > constraints of "bar" but "bar" doesn't satisfy the constraints of
> > "foo".
> > (foo = "cma", bar = "system")
> >
> > I think it would be much better for device 1 to list "foo" and device
> > 2 to list "foo" and "bar", so you can find that "foo" is the common
> > heap which will solve both devices' needs.
>
> I think that this would be a rather bad idea because then all devices
> need to know about all the possible different heaps they are compatible
> with.

I agree it is somewhat burdensome, but I suspect we'd eventually want
registration helpers to abstract out some of the relationships you
mention above (i.e. devices that can use any system memory will accept
CMA buffers, dma32 buffers, etc.). But at least that logic would be
in-kernel and not exposed to userland.
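
As a purely hypothetical sketch (none of the names below exist in the
heaps framework, and the heap names are made up for illustration),
such a helper could let a driver declare the broadest class of memory
it can handle and have the core expand that into the concrete heap
names exposed to userland:

/*
 * Hypothetical sketch only: the enum, tables and helper are invented
 * for illustration.  The idea is that a driver declares one memory
 * class and the core expands it into the full list of compatible
 * heaps, instead of each driver enumerating them itself.
 */
#include <stddef.h>
#include <stdio.h>

enum heap_mem_class {
	HEAP_MEM_SYSTEM,	/* any system memory is fine */
	HEAP_MEM_CONTIGUOUS,	/* needs physically contiguous buffers */
	HEAP_MEM_DMA32,		/* contiguous and 32-bit addressable */
};

static const char *const system_heaps[]     = { "system", "cma", "cma-dma32" };
static const char *const contiguous_heaps[] = { "cma", "cma-dma32" };
static const char *const dma32_heaps[]      = { "cma-dma32" };

/* Expand a declared memory class into the heap names that satisfy it. */
static size_t heap_class_to_heaps(enum heap_mem_class class,
				  const char *const **heaps)
{
	switch (class) {
	case HEAP_MEM_SYSTEM:
		*heaps = system_heaps;
		return 3;
	case HEAP_MEM_CONTIGUOUS:
		*heaps = contiguous_heaps;
		return 2;
	case HEAP_MEM_DMA32:
		*heaps = dma32_heaps;
		return 1;
	}
	return 0;
}

int main(void)
{
	const char *const *heaps;
	size_t n = heap_class_to_heaps(HEAP_MEM_CONTIGUOUS, &heaps);

	for (size_t i = 0; i < n; i++)
		printf("compatible heap: %s\n", heaps[i]);
	return 0;
}

That would keep the compatibility knowledge in one place in the
kernel, while userland still only ever sees flat per-device heap lists
it can intersect.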

> >>> - Devices could have different constraints based on particular
> >>>     configurations. For instance, a device may require specific memory
> >>>     layout for multi-planar YUV formats only (as in allocating the Y and C
> >>>     planes of NV12 from different memory banks). A dynamic API may thus be
> >>>     needed (but may also be very painful to use from userspace).
> >> Uff, good to know. But I'm not sure how to expose stuff like that.
> > Yeah. These edge cases are really hard to solve generically.  And
> > single devices that have separate constraints for different uses are
> > also not going to be solvable with a simple linking approach.
> >
> > But I do wonder if a generic solution to all cases is needed
> > (especially if it really isn't possible)? If we leave the option for
> > gralloc like omniscient device-specific userland policy, those edge
> > cases can be handled by those devices that can't run generic logic.
> > And those devices just won't be able to be supported by generic
> > distros, hopefully motivating future designs to have less odd
> > constraints?
>
> Potentially yes, but I think that anything more complex than "please
> allocate from this piece of memory for me" is not something that should
> be handled inside the device-independent framework.
>
> Especially device-specific memory and allocation constraints (e.g. things
> like "don't put those two things on the same memory channel") are *not*
> something we should have in an inter-device framework.
>
> In those cases we should just be able to say that an allocation should
> be made from a specific device and then let the device-specific drivers
> deal with the constraint.

Yeah. I don't think we can get away from needing omniscient userland,
but hopefully we can solve a large chunk of the issue with something
like your approach.

thanks
-john

