[RFC] Plane color pipeline KMS uAPI

Pekka Paalanen ppaalanen at gmail.com
Fri Jun 16 07:59:28 UTC 2023


On Thu, 15 Jun 2023 17:44:33 -0400
Christopher Braga <quic_cbraga at quicinc.com> wrote:

> On 6/14/2023 5:00 AM, Pekka Paalanen wrote:
> > On Tue, 13 Jun 2023 12:29:55 -0400
> > Christopher Braga <quic_cbraga at quicinc.com> wrote:
> >   
> >> On 6/13/2023 4:23 AM, Pekka Paalanen wrote:  
> >>> On Mon, 12 Jun 2023 12:56:57 -0400
> >>> Christopher Braga <quic_cbraga at quicinc.com> wrote:
> >>>      
> >>>> On 6/12/2023 5:21 AM, Pekka Paalanen wrote:  
> >>>>> On Fri, 9 Jun 2023 19:11:25 -0400
> >>>>> Christopher Braga <quic_cbraga at quicinc.com> wrote:
> >>>>>         
> >>>>>> On 6/9/2023 12:30 PM, Simon Ser wrote:  
> >>>>>>> Hi Christopher,
> >>>>>>>
> >>>>>>> On Friday, June 9th, 2023 at 17:52, Christopher Braga <quic_cbraga at quicinc.com> wrote:
> >>>>>>>            
> >>>>>>>>> The new COLOROP objects also expose a number of KMS properties. Each has a
> >>>>>>>>> type, a reference to the next COLOROP object in the linked list, and other
> >>>>>>>>> type-specific properties. Here is an example for a 1D LUT operation:
> >>>>>>>>>
> >>>>>>>>>          Color operation 42
> >>>>>>>>>          ├─ "type": enum {Bypass, 1D curve} = 1D curve
> >>>>>>>>>          ├─ "1d_curve_type": enum {LUT, sRGB, PQ, BT.709, HLG, …} = LUT  
> >>>>>>>> The options sRGB / PQ / BT.709 / HLG would select hard-coded 1D
> >>>>>>>> curves? Will different hardware be allowed to expose a subset of these
> >>>>>>>> enum values?  
> >>>>>>>
> >>>>>>> Yes. Only hardcoded LUTs supported by the HW are exposed as enum entries.
> >>>>>>>            
> >>>>>>>>>          ├─ "lut_size": immutable range = 4096
> >>>>>>>>>          ├─ "lut_data": blob
> >>>>>>>>>          └─ "next": immutable color operation ID = 43
> >>>>>>>>>           
> >>>>>>>> Some hardware has per channel 1D LUT values, while others use the same
> >>>>>>>> LUT for all channels.  We will definitely need to expose this in the
> >>>>>>>> UAPI in some form.  
> >>>>>>>
> >>>>>>> Hm, I was assuming per-channel 1D LUTs here, just like the existing GAMMA_LUT/
> >>>>>>> DEGAMMA_LUT properties work. If some hardware can't support that, it'll need
> >>>>>>> to get exposed as another color operation block.
> >>>>>>>            
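For reference, the existing GAMMA_LUT/DEGAMMA_LUT blobs that Simon defers to carry per-channel u16 entries (struct drm_color_lut in include/uapi/drm/drm_mode.h). A minimal sketch of building an identity 1D LUT payload for such a blob, using a local mirror of that struct so the snippet stays self-contained (the helper name is made up here):

```c
#include <stdint.h>
#include <stddef.h>

/* Local mirror of struct drm_color_lut from the DRM uAPI
 * (include/uapi/drm/drm_mode.h): one u16 value per channel. */
struct color_lut_entry {
	uint16_t red;
	uint16_t green;
	uint16_t blue;
	uint16_t reserved;
};

/* Fill an identity LUT: entry i maps to i scaled across the full
 * 16-bit range, identically for all three channels. */
static void fill_identity_lut(struct color_lut_entry *lut, size_t size)
{
	for (size_t i = 0; i < size; i++) {
		uint16_t v = (uint16_t)((i * 65535) / (size - 1));

		lut[i].red = v;
		lut[i].green = v;
		lut[i].blue = v;
		lut[i].reserved = 0;
	}
}
```

Userspace would then hand an array like this to a blob-create ioctl (e.g. via libdrm's drmModeCreatePropertyBlob()) and point "lut_data" at the resulting blob ID.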
> >>>>>>>>> To configure this hardware block, user-space can fill a KMS blob with
> >>>>>>>>> 4096 u32
> >>>>>>>>> entries, then set "lut_data" to the blob ID. Other color operation types
> >>>>>>>>> might
> >>>>>>>>> have different properties.
> >>>>>>>>>           
> >>>>>>>> The bit-depth of the LUT is an important piece of information we should
> >>>>>>>> include by default. Are we assuming that the DRM driver will always
> >>>>>>>> reduce the input values to the resolution supported by the pipeline?
> >>>>>>>> This could result in differences between the hardware behavior
> >>>>>>>> and the shader behavior.
> >>>>>>>>
> >>>>>>>> Additionally, some pipelines are floating point while others are fixed.
> >>>>>>>> How would user space know if it needs to pack 32 bit integer values vs
> >>>>>>>> 32 bit float values?  
> >>>>>>>
> >>>>>>> Again, I'm deferring to the existing GAMMA_LUT/DEGAMMA_LUT. These use a common
> >>>>>>> definition of LUT blob (u16 elements) and it's up to the driver to convert.
> >>>>>>>
> >>>>>>> Using a very precise format for the uAPI has the nice property of making the
> >>>>>>> uAPI much simpler to use. User-space sends high precision data and it's up to
> >>>>>>> drivers to map that to whatever the hardware accepts.
> >>>>>>>           
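To illustrate the driver-side conversion Simon describes for a fixed-point pipeline: a sketch of requantizing a u16 LUT entry down to a narrower hardware bit depth with round-to-nearest, in the spirit of the in-kernel drm_color_lut_extract() helper (the function below is written out for illustration, not taken from the kernel):

```c
#include <stdint.h>

/* Requantize a 16-bit LUT entry to `bits` bits of hardware precision,
 * rounding to nearest: scale 0..65535 onto 0..(2^bits - 1). */
static uint32_t lut_u16_to_hw(uint16_t v, unsigned int bits)
{
	uint32_t max = (1u << bits) - 1;

	/* +32767 implements round-to-nearest for the /65535 division. */
	return ((uint32_t)v * max + 32767) / 65535;
}
```

This direction (u16 down to, say, 10 or 12 bits) is cheap; the mess Christopher raises below is the floating-point direction, where no such one-line integer scaling exists.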
> >>>>>> Conversion from a larger uint type to a smaller type sounds low effort,
> >>>>>> however if a block works in a floating point space things are going to
> >>>>>> get messy really quickly. If the block operates in FP16 space and the
> >>>>>> interface is 16 bits we are good, but going from 32 bits to FP16 (such
> >>>>>> as in the matrix case or 3DLUT) is less than ideal.  
> >>>>>
> >>>>> Hi Christopher,
> >>>>>
> >>>>> are you thinking of precision loss, or the overhead of conversion?
> >>>>>
> >>>>> Conversion from N-bit fixed point to N-bit floating-point is generally
> >>>>> lossy, too, and the other direction as well.
> >>>>>
> >>>>> What exactly would be messy?
> >>>>>         
> >>>> Overhead of conversion is the primary concern here. Having to extract
> >>>> and / or calculate the significand + exponent components in the kernel
> >>>> is burdensome and imo a task better suited for user space. This also has
> >>>> to be done every blob set, meaning that if user space is re-using
> >>>> pre-calculated blobs we would be repeating the same conversion
> >>>> operations in kernel space unnecessarily.  
> >>>
> >>> What is burdensome in that calculation? I don't think you would need to
> >>> use any actual floating-point instructions. Logarithm for finding the
> >>> exponent is about finding the highest bit set in an integer and
> >>> everything is conveniently expressed in base-2. Finding significand is
> >>> just masking the integer based on the exponent.
> >>>      
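A sketch of that integer-only conversion: building IEEE-754 binary32 bits from a u32 with no floating-point instructions, taking the exponent from the highest set bit and the significand by shifting and masking (truncating rather than rounding when more than 24 bits are set, for brevity):

```c
#include <stdint.h>

/* Convert an unsigned 32-bit integer to IEEE-754 binary32 bit pattern
 * using only integer operations. The exponent is the position of the
 * highest set bit; the significand is the remaining bits shifted into
 * place. Truncates (round-toward-zero) when v needs more than 24 bits. */
static uint32_t u32_to_f32_bits(uint32_t v)
{
	if (v == 0)
		return 0;

	int msb = 31 - __builtin_clz(v);      /* floor(log2(v)) */
	uint32_t exp = (uint32_t)(msb + 127); /* biased exponent */
	uint32_t mant;

	if (msb > 23)
		mant = v >> (msb - 23);       /* drop low bits (truncate) */
	else
		mant = v << (23 - msb);

	/* Mask drops the implicit leading 1 of the significand. */
	return (exp << 23) | (mant & 0x7fffff);
}
```

The same scheme shrinks to FP16 by swapping the bias (15) and field widths (5-bit exponent, 10-bit significand).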
> >> Oh it definitely can be done, but I think this is just a difference of
> >> opinion at this point. At the end of the day we will do it if we have
> >> to, but it is just more optimal if a more agreeable common type is used.
> >>  
> >>> Can you not cache the converted data, keyed by the DRM blob unique
> >>> identity vs. the KMS property it is attached to?  
> >> If the userspace compositor has N common transforms (ex: standard P3 ->
> >> sRGB matrix), they would likely have N unique blobs. Obviously from the
> >> kernel end we wouldn't want to cache the transform of every blob passed
> >> down through the UAPI.  
> > 
> > Hi Christopher,
> > 
> > as long as the blob exists, why not?  
> 
> Generally because this is an unbounded amount of blobs. I'm not 100% 
> sure what the typical behavior is upstream, but in our driver we have 
> scenarios with per-frame blob updates (a unique blob every frame).

All kernel allocated blob-related data should be accounted to the
userspace process. I don't think that happens today, but I think it
definitely should. Userspace can create a practically unlimited number
of arbitrary sized blobs to begin with, consuming arbitrary amounts of
kernel memory at will, even without drivers caching any derived data.

It does not seem to me like refusing to cache derived blob data would
really help.

> Speaking of per-frame blob updates, there is one concern I neglected to 
> bring up. Internally we have seen scenarios where frequent blob 
> allocation can lead to memory allocation delays of two frames or higher. 
> This was typically seen when the system was under high memory pressure 
> and the blob allocation was > 1 page. The patch 
> https://patchwork.freedesktop.org/patch/525857/ was uploaded a few 
> months back to help mitigate these delays, but it didn't gain traction 
> at the time.

That is worrying.

As a userspace developer, I like the idea of limiting blob allocation
to DRM master only, but if the concern is the DRM master leaking, then
I'd imagine process accounting could at least point to the culprit.

Trying to defend against a malicious DRM master is in my opinion a
little moot. Untrusted processes should not be able to gain DRM master
to begin with.

Hmm, but DRM leasing...

> This color pipeline UAPI is ultimately going to have the same problem. 
> Frequent 3DLUT color block updates will result in large allocations, and 
> if there is high system memory usage this could see blob allocation 
> delays. So two things here:
> - Let's reconsider https://patchwork.freedesktop.org/patch/525857/ so 
> frequent blob allocation doesn't get unnecessarily delayed
> - Do we have any alternative methods at our disposal for sending down 
> the color configuration data? Generally blobs work fine for low update 
> or blob cycling use cases, but frequent blob data updates results in a 
> total per frame IOCTL sequence of:
>    (IOCTL_BLOB_DESTROY * #_of_blob_updates) +
>      (IOCTL_BLOB_CREATE * #_of_blob_updates) + IOCTL_DRM_ATOMIC

Good questions.

I have no ideas for that, but I got a random idea to mitigate the blob
conversion overhead:

What if we had a new kind of blob that is targeted to a specific
property of a specific KMS object at creation?

Then the driver could do the conversion work at create ioctl time, and
store only the derived data and not the original userspace data at all.
Then there are no unexpected delays due to allocation or conversion at
atomic commit time, and the memory cost is optimal for the specific
usage.

The disadvantage is that the blob is then tied to the specific property
of the specific KMS object, and cannot be used anywhere else. I'm not
sure how much of a problem that would be in practice for userspace
having to create maybe even more blobs per-plane or per-crtc, or a
problem for drivers that have a flexible mapping between KMS objects
and hardware blocks.
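Hypothetically, the create ioctl for such a targeted blob could carry the target alongside the payload. Everything below is invented for illustration; no such uAPI exists:

```c
#include <stdint.h>

/* Hypothetical ioctl argument for a blob that is bound to one property
 * of one KMS object at creation time, so the driver can convert the
 * payload once and keep only the derived data. Invented for
 * illustration; not an existing DRM uAPI struct. */
struct drm_mode_create_targeted_blob {
	uint64_t data;      /* user pointer to payload */
	uint32_t length;    /* payload size in bytes */
	uint32_t object_id; /* KMS object (plane/CRTC) the blob targets */
	uint32_t prop_id;   /* property on that object, e.g. "lut_data" */
	uint32_t blob_id;   /* out: ID usable only with that property */
};

/* uAPI structs must have no implicit padding, so the layout is stable
 * across 32-bit and 64-bit userspace. */
_Static_assert(sizeof(struct drm_mode_create_targeted_blob) == 24,
	       "unexpected padding in ioctl struct");
```

The out-param blob_id would then be rejected by the atomic ioctl anywhere except on (object_id, prop_id), which is exactly the restriction described above.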


Thanks
pq