[PATCH v3 4/6] drm/ssd130x: Add support for the SSD132x OLED controller family

Javier Martinez Canillas javierm at redhat.com
Fri Oct 13 14:57:48 UTC 2023


Thomas Zimmermann <tzimmermann at suse.de> writes:

Hello Thomas,

Thanks a lot for your feedback.

> Hi Javier,
>
> thanks for this patch.
>
> Am 12.10.23 um 23:38 schrieb Javier Martinez Canillas:
> [...]
>>   
>> +static int ssd132x_fb_blit_rect(struct drm_framebuffer *fb,
>> +				const struct iosys_map *vmap,
>> +				struct drm_rect *rect, u8 *buf,
>> +				u8 *data_array)
>> +{
>> +	struct ssd130x_device *ssd130x = drm_to_ssd130x(fb->dev);
>> +	unsigned int dst_pitch = drm_rect_width(rect);
>> +	struct iosys_map dst;
>> +	int ret = 0;
>> +
>> +	/* Align x to display segment boundaries */
>> +	rect->x1 = round_down(rect->x1, SSD132X_SEGMENT_WIDTH);
>> +	rect->x2 = min_t(unsigned int, round_up(rect->x2, SSD132X_SEGMENT_WIDTH),
>> +			 ssd130x->width);
>> +
>> +	ret = drm_gem_fb_begin_cpu_access(fb, DMA_FROM_DEVICE);
>> +	if (ret)
>> +		return ret;
>> +
>> +	iosys_map_set_vaddr(&dst, buf);
>> +	drm_fb_xrgb8888_to_gray8(&dst, &dst_pitch, vmap, fb, rect);
>
> Here's an idea for a follow-up patchset.
>
> You could attempt to integrate the gray8 and mono conversions into 
> drm_fb_blit(). With the right parameters, both ssd130x and ssd132x 
> could use the same blitting code from BO to buffer.
>

Yeah, I considered that, but as mentioned in the commit message I first want
to see what the needs of the SSD133x controller family are (I bought a SSD1331
display but haven't had time to play with it yet) before trying to factor
out the common bits into helper functions.
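
If drm_fb_blit() learned about a grayscale/monochrome destination format, I
guess the ssd132x blit could end up being little more than this (only a rough
sketch of the idea, not tested, and assuming a DRM_FORMAT_R8 destination):

	iosys_map_set_vaddr(&dst, buf);

	/* Let the helper pick the conversion from fb->format to R8 */
	ret = drm_fb_blit(&dst, &dst_pitch, DRM_FORMAT_R8, vmap, fb, rect);
	if (ret)
		return ret;

with ssd130x doing the equivalent for its monochrome format, so both drivers
would share the same conversion path from the BO to the intermediate buffer.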

[...]

>> +
>> +	ssd130x_state->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
>> +	if (!ssd130x_state->buffer)
>> +		return -ENOMEM;
>
> It's unrelated to these patches and I know it's been discussed 
> endlessly, but I have a question about buffer allocation. That memory 
> acts as another shadow buffer for the device's memory, such that format 
> conversion becomes easier.
>

Correct.

> But then, why is ->buffer part of the plane_state? Shouldn't it be part 
> of the plane and never be re-allocated? The real size of that buffer is 
> <width> times <height> (not <pitch>). That size is static over the 
> lifetime of the device. That would represent the semantics much better.
>
> This would allow for additional changes: blit_rect and update_rect would 
> be much easier to separate: no more segment adjustments for the blit 
> code; only for updates. If the update code has high latency (IDK), you 
> could push it into a worker thread to run besides the DRM logic. The gud 
> and repaper drivers do something to this effect.
>
>

The idea of making it part of the plane state is that this buffer could be
optional, for example when user-space uses the native display format instead
of the emulated XRGB8888.

In that case, an intermediate buffer won't be used, because the shadow-plane
format will already be the native one (e.g. R1) and there won't be a need to
do any pixel format conversion (only the conversion to the data format
expected by the controller).

Take a look at Geert's patch adding R1 support to ssd130x for an example:

https://lore.kernel.org/all/72746f6d9c47f09fc057ad7a4bbb3b7f423af803.1689252746.git.geert@linux-m68k.org/

That's why it was decided that making it part of the plane state follows
the KMS model better: when using R1, this buffer won't even be allocated in
the primary plane's .atomic_check handler.
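
In other words, with native R1 support the primary plane's .atomic_check
could end up doing something along these lines (only a sketch to illustrate
the point, not the actual code from Geert's patch):

	struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(plane_state);
	const struct drm_format_info *fi;
	unsigned int pitch;

	if (plane_state->fb->format->format == DRM_FORMAT_R1) {
		/* Native format, no intermediate conversion buffer is needed */
		ssd130x_state->buffer = NULL;
		return 0;
	}

	/* Emulated XRGB8888, allocate a buffer to hold the converted pixels */
	fi = drm_format_info(DRM_FORMAT_R1);
	if (!fi)
		return -EINVAL;

	pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width);

	ssd130x_state->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
	if (!ssd130x_state->buffer)
		return -ENOMEM;

	return 0;

So the buffer's lifetime follows the plane state that actually needs it,
rather than the device.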

[...]

>> +	drm_atomic_helper_damage_iter_init(&iter, old_plane_state, plane_state);
>> +	drm_atomic_for_each_plane_damage(&iter, &damage) {
>> +		dst_clip = plane_state->dst;
>> +
>> +		if (!drm_rect_intersect(&dst_clip, &damage))
>> +			continue;
>> +
>> +		ssd132x_fb_blit_rect(fb, &shadow_plane_state->data[0], &dst_clip,
>> +				     ssd130x_plane_state->buffer,
>> +				     ssd130x_crtc_state->data_array);
>> +	}
>
> Here's another idea for a another follow-up patchset:
>
> You are allocating state->buffer to cover the whole display, right? It's 
> <pitch> times <height> IIRC.  Maybe it would make sense to split the 
> damage loop into two loops and inline the driver's blit_rect() function. 
> Something like this:
>
>    begin_cpu_access()
>
>    for_each(damage) {
>      drm_fb_blit( "from GEM BO to buffer" )
>    }
>
>    end_cpu_access()
>
>    for_each(damage) {
>      update_rect( "from buffer to device" )
>    }
>
> With the changes from the other comments, the first loop could become 
> entirely device-neutral AFAICT.
>

Regardless, splitting the blit and update rects might make sense and is an
interesting idea. I need to explore this, thanks for the suggestion.
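
Just to check that I follow the idea, for ssd132x that would roughly look
like this (only a sketch; it assumes the CPU access calls and the data_array
argument are moved out of ssd132x_fb_blit_rect(), and that
ssd132x_update_rect() does the packing into data_array plus the transfer to
the device):

	ret = drm_gem_fb_begin_cpu_access(fb, DMA_FROM_DEVICE);
	if (ret)
		return ret;

	/* First pass: only format conversion, could become device-neutral */
	drm_atomic_helper_damage_iter_init(&iter, old_plane_state, plane_state);
	drm_atomic_for_each_plane_damage(&iter, &damage) {
		dst_clip = plane_state->dst;

		if (!drm_rect_intersect(&dst_clip, &damage))
			continue;

		ssd132x_fb_blit_rect(fb, &shadow_plane_state->data[0], &dst_clip,
				     ssd130x_plane_state->buffer);
	}

	drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE);

	/* Second pass: device specific update, with the segment alignment */
	drm_atomic_helper_damage_iter_init(&iter, old_plane_state, plane_state);
	drm_atomic_for_each_plane_damage(&iter, &damage) {
		dst_clip = plane_state->dst;

		if (!drm_rect_intersect(&dst_clip, &damage))
			continue;

		ssd132x_update_rect(ssd130x, &dst_clip,
				    ssd130x_plane_state->buffer,
				    ssd130x_crtc_state->data_array);
	}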

Since you mention that these could be follow-up changes, I assume that you
agree with the current approach. Should I expect your review / ack for
this patch-set?

> Best regards
> Thomas
>

-- 
Best regards,

Javier Martinez Canillas
Core Platforms
Red Hat


