[RFC] drm: add overlays as first class KMS objects
robdclark at gmail.com
Thu Apr 28 09:24:34 PDT 2011
2011/4/25 Stéphane Marchesin <stephane.marchesin at gmail.com>:
> On Mon, Apr 25, 2011 at 16:22, Jesse Barnes <jbarnes at virtuousgeek.org> wrote:
>> On Mon, 25 Apr 2011 16:16:18 -0700
>> Keith Packard <keithp at keithp.com> wrote:
>>> On Mon, 25 Apr 2011 15:12:20 -0700, Jesse Barnes <jbarnes at virtuousgeek.org> wrote:
>>> > Overlays are a bit like half-CRTCs. They have a location and fb, but
>>> > don't drive outputs directly. Add support for handling them to the core
>>> > KMS code.
>>> Are overlays/underlays not associated with a specific CRTC? To my mind,
>>> overlays are another scanout buffer associated with a specific CRTC, so
>>> you'd create a scanout buffer and attach that to a specific scanout slot
>>> in a crtc, with the 'default' slot being the usual graphics plane.
>> Yes, that matches my understanding as well. I've deliberately made the
>> implementation flexible there though, under the assumption that some
>> hardware allows a plane to be directed at more than one CRTC (though
>> probably not simultaneously).
>> Arguably, this is something we should have done when the
>> connector/encoder split was done (making planes in general first class
>> objects). But with today's code, treating a CRTC as a pixel pump and a
>> primary plane seems fine, with overlays tacked onto the side as
>> secondary pixel sources but tied to a specific CRTC.
> What is the plan for supporting multiple formats? When I looked at
> this for nouveau it ended up growing out of control when adding
> support for all the YUV (planar, packed, 12 or 16 bpp formats) and RGB
> format combinations.
Maybe a dumb idea, but since all GEM buffer allocation is already
done through driver-specific ioctls, couldn't the color format (and the
one or more plane pointers) be something that the DRM overlay
infrastructure doesn't have to care about? I guess it is somewhat
analogous to the various tiled formats you might have.
If the layout of the bytes is a property of the actual buffer object,
then wouldn't it be OK for the DRM overlay infrastructure to ignore it,
and have the individual driver implementations just do the right thing
based on some private driver properties of the bo?
Maybe I'm over-simplifying or overlooking something, though..