[RFC PATCH] drm/ssd130x: Allocate buffer in the CRTC's .atomic_check() callback

Javier Martinez Canillas javierm at redhat.com
Wed Sep 6 12:04:22 UTC 2023


Maxime Ripard <mripard at kernel.org> writes:

> On Fri, Sep 01, 2023 at 02:08:11PM +0200, Geert Uytterhoeven wrote:
>> Hi Maxime,
>> 
>> On Fri, Sep 1, 2023 at 2:00 PM Maxime Ripard <mripard at kernel.org> wrote:
>> > On Fri, Sep 01, 2023 at 10:36:17AM +0200, Geert Uytterhoeven wrote:
>> > > On Fri, Sep 1, 2023 at 10:22 AM Maxime Ripard <mripard at kernel.org> wrote:
>> > > > On Wed, Aug 30, 2023 at 08:25:08AM +0200, Javier Martinez Canillas wrote:
>> > > > > The commit 45b58669e532 ("drm/ssd130x: Allocate buffer in the plane's
>> > > > > .atomic_check() callback") moved the allocation of the intermediate and
>> > > > > HW buffers from the encoder's .atomic_enable callback to primary plane's
>> > > > > .atomic_check callback.
>> > > > >
>> > > > > This was suggested by Maxime Ripard because drivers aren't allowed to fail
>> > > > > after drm_atomic_helper_swap_state() has been called, and the encoder's
>> > > > > .atomic_enable happens after the new atomic state has been swapped.
>> > > > >
>> > > > > But that change caused a performance regression in very slow platforms,
>> > > > > since now the allocation happens for every plane's atomic state commit.
>> > > > > For example, Geert Uytterhoeven reports that this is the case on a
>> > > > > VexRiscv softcore (a RISC-V CPU implementation on an FPGA).
>> > > >
>> > > > I'd like to have numbers on that. It's a bit surprising to me that,
>> > > > given how many objects we already allocate during a commit, two small
>> > > > additional allocations affect performances so dramatically, even on a
>> > > > slow platform.
>> > >
>> > > To be fair, I didn't benchmark that.  Perhaps it's just too slow due to
>> > > all these other allocations (and whatever else happens).
>> > >
>> > > I just find it extremely silly to allocate a buffer over and over again,
>> > > while we know that buffer is needed for each and every display update.
>> >
>> > Maybe it's silly, but I guess it depends on what you want to optimize
>> > for. You won't know the size of that buffer before you're in
>> > atomic_check. So it's a different trade-off than you would like, but I
>> > wouldn't call it extremely silly.
>> 
>> The size of ssd130x_plane_state.data_array[] is fixed, and depends
>> on the actual display connected.
>
> That one can be tied to the CRTC state if needed. It would only be
> allocated on each modeset, so probably once for that kind of device.
>

Yes.

>> The size of ssd130x_plane_state.buffer[] is also fixed, and depends
>> on the plane's size (which is currently fixed to the display size).
>
> Doesn't it depend on the format as well?
>

Yes and no. The buffer[] size is fixed, but whether that intermediate
buffer is needed at all depends on whether the native format was chosen.

So one could say that its size is either 0 (not used) or the fixed size
needed to do the format conversion from XRGB8888 to R1.

> Maxime
>

-- 
Best regards,

Javier Martinez Canillas
Core Platforms
Red Hat
