[PATCH V8 32/43] drm/colorop: Add 1D Curve Custom LUT type
Simon Ser
contact at emersion.fr
Tue Apr 15 16:25:12 UTC 2025
On Tuesday, April 15th, 2025 at 17:05, Harry Wentland <harry.wentland at amd.com> wrote:
> > > > We want to have just one change in the way we expose the hardware
> > > > capabilities else all looks good in general.
> > >
> > > I would really recommend leaving this as a follow-up extension. It's a complicated
> > > addition that requires more discussion.
> >
> > Hi Simon,
> > We have tried to solve the complex part and made it simple to understand and implement
> > along with a reference implementation [1] (can also help add the same for AMD case as well).
> > Without this we will end up with 2 interfaces for 1D LUTs, which is not nice, since the one above
> > can cover the current one. Let us know the problems with the proposed interface and we can
> > work to fix them. Having a common, single interface is good, and since the current one will not fit
> > Intel's color pipeline distribution the generic one will be needed anyway; it will also benefit userspace
> > to know the underlying LUT distribution when computing the LUT samples.
> >
> > [1] https://patchwork.freedesktop.org/series/129812/
>
> I think there is a lot of value in giving userspace a simple LUT
> to work with. There are many compositors and many compositor
> maintainers. When someone new jumps into color management usually
> the same thing happens. It starts with "it's not too complicated",
> and then over a period of time progresses to "this is very much
> non-trivial" as understanding one bit usually opens ten more
> questions.
>
> Forcing people to deal with another level of complexity will
> discourage implementations and be counterproductive to furthering
> adoption of color operations for HW acceleration, IMO.
>
> I am not opposed to a complex LUT definition, but I don't think
> it should replace a simple and well-understood definition.
Agreed. To add to this, I think shipping many additional features from
day one significantly increases the workload (more code to write,
review, and test at once), and we'd also need to go through supplementary
rounds to validate the API design and ensure it's not too
Intel-specific. Also, adding this feature as a second step will prove
that the API is as extensible as we intend.
I don't really understand why it's important to have this feature in
the first version. Intel has been converting simple LUTs into the
fancy distribution for the existing GAMMA_LUT and DEGAMMA_LUT for a
while, so it can do the same for colorop. The upsides of the fancy
distribution are more precise and smaller LUTs, but that doesn't seem
critical?
Simon