[PATCH 1/1] drm/fourcc: Add documentation about software color conversion.

Pekka Paalanen ppaalanen at gmail.com
Fri Aug 18 13:24:15 UTC 2023


On Thu, 10 Aug 2023 09:45:27 +0200
Maxime Ripard <mripard at kernel.org> wrote:

> Hi
> 
> On Mon, Aug 07, 2023 at 03:45:15PM +0200, Jocelyn Falempe wrote:
> > After discussions on IRC, the consensus is that the DRM drivers should
> > not do software color conversion, and only advertise the supported formats.
> > Update the doc accordingly so that the rule and exceptions are clear for
> > everyone.
> > 
> > Signed-off-by: Jocelyn Falempe <jfalempe at redhat.com>
> > ---
> >  include/uapi/drm/drm_fourcc.h | 7 +++++++
> >  1 file changed, 7 insertions(+)
> > 
> > diff --git a/include/uapi/drm/drm_fourcc.h b/include/uapi/drm/drm_fourcc.h
> > index 8db7fd3f743e..00a29152da9f 100644
> > --- a/include/uapi/drm/drm_fourcc.h
> > +++ b/include/uapi/drm/drm_fourcc.h
> > @@ -38,6 +38,13 @@ extern "C" {
> >   * fourcc code, a Format Modifier may optionally be provided, in order to
> >   * further describe the buffer's format - for example tiling or compression.
> >   *
> > + * DRM drivers should not do software color conversion, and only advertise the
> > + * format they support in hardware. But there are two exceptions:  
> 
> I would do a bullet list here:
> https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html#lists-and-quote-like-blocks
> 
> > + * The first is to support XRGB8888 if the hardware doesn't support it, because
> > + * it's the de facto standard for userspace applications.  
> 
> We can also provide a bit more context here, something like:
> 
> All drivers must support XRGB8888, even if the hardware cannot support
> it. This has become the de-facto standard and a lot of user-space assume
> it will be present.
> 
> > + * The second is to drop the unused bits when sending the data to the hardware,
> > + * to improve the bandwidth, like dropping the "X" in XRGB8888.  
> 
> I think it can be made a bit more generic, with something like:
> 
> Any driver is free to modify its internal representation of the format,
> as long as it doesn't alter the visible content in any way. An example
> would be to drop the padding component from a format to save some memory
> bandwidth.

Hi,

to my understanding and desire, the rule to not "fake" pixel format
support is strictly about performance. When a KMS client does a
page flip, it usually does not expect a massive amount of CPU or GPU
work to occur just because of the flip. A name for such work is "copy",
referring to any kind of copying of large amounts of pixel data,
whether or not it includes a format conversion.

This is especially important with GPU rendering and hardware video
playback systems, where any such copy could destroy the usability of
the whole system. This is the main reason why KMS must not do any
expensive processing unexpectedly (as in, not documented in UAPI).
Doing any kind of copy could cause a vblank to be missed, ruining
display timings.

I believe the above is the spirit of the rule. Then there will be
exceptions. I'd like to think that everything below (except for
XRGB8888) can be derived from the above with common sense - that's what
I did.

XRGB8888 support is the prime exception. I suspect it originates from
the legacy KMS UAPI, and the practice that XRGB8888 has always been
widely supported. This makes it plausible for userspace to exist that
cannot produce any other format. Hence, it is good to support XRGB8888
through a conversion (copy) in the kernel for dumb buffers (that is,
for software rendered framebuffers). I would be very hesitant to extend
this exception to GPU rendered buffers, but OTOH if you have a GPU,
presumably you also have a display controller capable of scanning out
what the GPU renders, so you wouldn't even consider copying under the
hood.

DRM devices that cannot directly scan out buffers at all are a whole
category of exceptions. They include USB display adapters (literal USB,
not USB-C alt mode), perhaps networked and wireless displays, VKMS
which does everything in software, and so on. They simply have to
process the bulk pixel data with a CPU one way or another, and
hopefully they make use of damage rectangles to minimise the work.
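
For reference, the legacy way for userspace to report that damage is
drmModeDirtyFB(). A minimal sketch, assuming a drm_fd and an fb_id
created earlier (flush_damage() is just a made-up helper name):

#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/*
 * Tell the driver that only a small rectangle of the framebuffer
 * changed, so a USB/VKMS-style driver only has to convert and
 * transfer that region instead of the whole buffer.
 */
static int flush_damage(int drm_fd, uint32_t fb_id,
                        uint32_t x, uint32_t y, uint32_t w, uint32_t h)
{
        drmModeClip clip = {
                .x1 = x,
                .y1 = y,
                .x2 = x + w,
                .y2 = y + h,
        };

        return drmModeDirtyFB(drm_fd, fb_id, &clip, 1);
}

With the atomic API the same information goes into the per-plane
FB_DAMAGE_CLIPS property instead.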

Old-school cursor planes may have been using special pixel formats
that userspace cannot be expected to support. Cursors are usually
small images and they can make a huge performance impact, so it makes
sense to support ARGB8888 even with a CPU conversion.
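
For reference, the legacy cursor UAPI effectively assumes ARGB8888:
the GEM handle passed to drmModeSetCursor() is expected to contain an
ARGB8888 image of the size advertised via DRM_CAP_CURSOR_WIDTH/HEIGHT.
A rough sketch of the userspace side, assuming cursor_handle already
points at such an image:

#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static int show_cursor(int drm_fd, uint32_t crtc_id, uint32_t cursor_handle)
{
        uint64_t w = 64, h = 64;

        /* Drivers may advertise a preferred cursor size; 64x64 is the
         * traditional fallback. */
        drmGetCap(drm_fd, DRM_CAP_CURSOR_WIDTH, &w);
        drmGetCap(drm_fd, DRM_CAP_CURSOR_HEIGHT, &h);

        return drmModeSetCursor(drm_fd, crtc_id, cursor_handle,
                                (uint32_t)w, (uint32_t)h);
}

If the hardware cursor plane wants some other format, converting such a
small image on the CPU is cheap.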

Then we have display controllers without GPUs. Everything is
software-rendered. If it so happens that software rendering into sysram
and then copying (with conversion) into VRAM is more performant than
rendering into VRAM, then the copy is well justified.

Software-rendering into sysram and then copying into VRAM is actually
so commonly preferred, that KMS has a special flag to suggest userspace
does it: DRM_CAP_DUMB_PREFER_SHADOW [1]. A well-behaved
software-rendering KMS client checks this flag and honours it. If a
driver both sets the flag and does a copy itself, then that's two
copies for each flip. The driver's copy is unexpected, but is there a good
reason for the driver to do it?
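
Checking the flag from userspace is a one-liner with libdrm; a minimal
sketch:

#include <stdbool.h>
#include <stdint.h>
#include <xf86drm.h>

/* True if the driver asks userspace to render into a sysram shadow
 * buffer and only copy the finished result into the dumb buffer. */
static bool prefer_shadow(int drm_fd)
{
        uint64_t value = 0;

        if (drmGetCap(drm_fd, DRM_CAP_DUMB_PREFER_SHADOW, &value) != 0)
                return false; /* cap not supported, assume no preference */

        return value != 0;
}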

I can think of only one reason: the hardware scanout pixel format being
one that userspace cannot reasonably be expected to produce. I think
nowadays RGB888 (literally 3 bytes per pixel) counts as one.
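
To make the cost concrete, this is roughly the conversion in question:
dropping the padding byte of XRGB8888 to produce tightly packed RGB888
(little-endian, so bytes B, G, R in memory). Every flip that goes
through this touches every pixel once:

#include <stddef.h>
#include <stdint.h>

/* The per-pixel work a driver has to do on every update if it "fakes"
 * XRGB8888 support on RGB888-only scanout hardware. */
static void xrgb8888_to_rgb888(uint8_t *dst, const uint32_t *src,
                               size_t pixels)
{
        for (size_t i = 0; i < pixels; i++) {
                uint32_t px = src[i];

                *dst++ = px & 0xff;         /* B */
                *dst++ = (px >> 8) & 0xff;  /* G */
                *dst++ = (px >> 16) & 0xff; /* R */
        }
}

IIRC the kernel's drm_format_helper.c already has helpers along these
lines (drm_fb_xrgb8888_to_rgb888() and friends) that also take damage
rectangles into account.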

If hardware requires RGB888 to e.g. reach a specific resolution, should
the driver set DRM_CAP_DUMB_PREFER_SHADOW or not? If the driver always
allocates sysram as dumb buffers because there simply is not enough
VRAM to give out, then definitely not. That is a very good reason for
the driver to do a copy/conversion with damage under the hood. It
sucks, but it's the only way it can work.

But if the dumb buffers are allocated in VRAM, then
DRM_CAP_DUMB_PREFER_SHADOW should likely be set because direct VRAM
access tends to be slow, and the driver should not copy - unless maybe
for XRGB8888 and cursors. A CPU copy from VRAM into VRAM is the worst.

For RGB888 hardware and software rendering, it would be best if:

- Dumb buffers are allocated from VRAM, making them directly
  scanout-able for RGB888.

- DRM_CAP_DUMB_PREFER_SHADOW is set, telling userspace to render into a
  shadow and then copy into a dumb buffer.

- Userspace's copy into a dumb buffer produces RGB888, while the shadow
  buffer can be of any format userspace likes (see the sketch below).

This minimises the amount of work done, and page flips are literal
flips without a hidden copy in the driver.
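
The userspace side of that last bullet is the usual dumb-buffer dance,
just with bpp = 24 and DRM_FORMAT_RGB888. A rough sketch, error
handling omitted:

#include <stdint.h>
#include <drm_fourcc.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Create an RGB888 dumb buffer and wrap it in a framebuffer object. */
static int create_rgb888_fb(int drm_fd, uint32_t width, uint32_t height,
                            uint32_t *out_fb_id, uint32_t *out_handle)
{
        struct drm_mode_create_dumb create = {
                .width = width,
                .height = height,
                .bpp = 24, /* tightly packed RGB888 */
        };
        uint32_t handles[4] = { 0 }, pitches[4] = { 0 }, offsets[4] = { 0 };

        if (drmIoctl(drm_fd, DRM_IOCTL_MODE_CREATE_DUMB, &create) != 0)
                return -1;

        handles[0] = create.handle;
        pitches[0] = create.pitch;
        *out_handle = create.handle;

        return drmModeAddFB2(drm_fd, width, height, DRM_FORMAT_RGB888,
                             handles, pitches, offsets, out_fb_id, 0);
}

The shadow buffer itself never touches KMS, so its format is entirely
up to the renderer.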

If userspace does not support RGB888, things get hairy. If XRGB8888 is
faked via a driver copy, then you would not want to be copying from
VRAM into VRAM. The create-dumb ioctl passes in bpp, so the driver
could special-case 24 vs. 32, I guess, allocating 24 from VRAM and 32 from
sysram. But do you set DRM_CAP_DUMB_PREFER_SHADOW? It would always be
wrong for the other format.
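
In driver terms the special-casing would look something like this;
purely a hypothetical sketch, with my_vram_gem_create() and
my_shmem_gem_create() standing in for whatever allocation paths the
driver actually has:

#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
#include <drm/drm_mode.h>

/* Hypothetical driver-specific allocation helpers, not real DRM API. */
int my_vram_gem_create(struct drm_device *dev, struct drm_file *file,
                       struct drm_mode_create_dumb *args);
int my_shmem_gem_create(struct drm_device *dev, struct drm_file *file,
                        struct drm_mode_create_dumb *args);

/* Sketch of a .dumb_create hook: place scanout-capable RGB888 buffers
 * in VRAM, and 32 bpp buffers (converted by the driver on flip) in
 * system memory. */
static int my_dumb_create(struct drm_file *file, struct drm_device *dev,
                          struct drm_mode_create_dumb *args)
{
        if (args->bpp == 24)
                return my_vram_gem_create(dev, file, args);

        return my_shmem_gem_create(dev, file, args);
}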

Ideally, XRGB8888 would be supported without artificially crippling
more optimal pixel formats by lack of DRM_CAP_DUMB_PREFER_SHADOW, even
if XRGB8888 support is fake and hurt by DRM_CAP_DUMB_PREFER_SHADOW. But
that also depends on the expected userspace and which format it uses.


[1] https://dri.freedesktop.org/docs/drm/gpu/drm-uapi.html#c.DRM_CAP_DUMB_PREFER_SHADOW


Thanks,
pq