[PATCH] drm/ssd130x: Add drm_panic support

Javier Martinez Canillas javierm at redhat.com
Fri Jun 21 19:34:00 UTC 2024


Daniel Vetter <daniel at ffwll.ch> writes:

Hello Sima,

Thanks for your comments and explanations.

> On Fri, Jun 21, 2024 at 05:42:53PM +0200, Javier Martinez Canillas wrote:
>> Javier Martinez Canillas <javierm at redhat.com> writes:
>> 
>> > Jocelyn Falempe <jfalempe at redhat.com> writes:
>> >
>> > Hello Jocelyn, thanks for your feedback!
>> >
>> >> On 21/06/2024 00:22, Javier Martinez Canillas wrote:
>> >>> Add support for the drm_panic infrastructure, which allows displaying
>> >>> a user-friendly message on the screen when a Linux kernel panic occurs.
>> >>> 
>> >>> The display controller doesn't scan out the framebuffer; instead, the
>> >>> pixels are sent to the device using a transport bus. For this reason, a
>> >>> .panic_flush handler is needed to flush the panic image to the display.
>> >>
>> >> Thanks for this patch, it's really cool that drm_panic can work on
>> >> this device too.
>> >>
>> >
>> > Indeed, that's why I did it. Just to see if it could work :)
>> >
>> > [...]
>> >
>> >>> +static void ssd130x_primary_plane_helper_panic_flush(struct drm_plane *plane)
>> >>> +{
>> >>> +	struct drm_plane_state *plane_state = plane->state;
>> >>> +	struct ssd130x_plane_state *ssd130x_plane_state = to_ssd130x_plane_state(plane_state);
>> >>> +	struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state);
>> >>> +	struct drm_crtc *crtc = plane_state->crtc;
>> >>> +	struct ssd130x_crtc_state *ssd130x_crtc_state = to_ssd130x_crtc_state(crtc->state);
>> >>> +
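>> >>> +	/* Convert the shadow plane pixels to the panel format and push them out */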
>> >>> +	ssd130x_fb_blit_rect(plane_state->fb, &shadow_plane_state->data[0], &plane_state->dst,
>> >>> +			     ssd130x_plane_state->buffer, ssd130x_crtc_state->data_array,
>> >>> +			     &shadow_plane_state->fmtcnv_state);
>> >>
>> >> ssd130x_fb_blit_rect() will call regmap->write(), which involves a mutex
>> >> and might sleep. And if the mutex is already held when the panic occurs,
>> >> it might deadlock the panic handling.
>> >
>> > That's a good point and something I hadn't considered...
>> >
>> >> One solution would be to configure the regmap with config->fast_io and
>> >> config->use_raw_spinlock, and check that the lock is available with
>> >> try_lock(map->raw_spin_lock).
>> >> But that means it will waste CPU cycles busy-waiting during normal
>> >> operation, which is not good.
>> >>
>> >
>> > Yeah, I would prefer to not change the driver for normal operation.
>> >
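
(For reference, Jocelyn's suggestion above would map to roughly the following
in the driver's regmap config -- just a sketch, with the existing fields
omitted and only the i2c variant shown:)

static const struct regmap_config ssd130x_i2c_regmap_config = {
	/* ... existing fields left unchanged ... */
	.fast_io = true,		/* spinlock instead of mutex for register I/O */
	.use_raw_spinlock = true,	/* and make that spinlock a raw one */
};

The panic handler would then still need some way to trylock that raw
spinlock before writing, as Jocelyn points out.
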
>> 
>> Another option, which I believe makes more sense, is to just disable the
>> regmap locking (using the struct regmap_config.disable_locking field [0]).
>> 
>> This regmap is not shared with other drivers, so any concurrent access
>> should already be prevented by the DRM core locking scheme.
>> 
>> Is my understanding correct?
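
(Concretely, what I have in mind is something along these lines -- only a
sketch, again for the i2c variant and with the existing fields left as they
are:)

static const struct regmap_config ssd130x_i2c_regmap_config = {
	/* ... existing fields left unchanged ... */
	/*
	 * The regmap is private to this driver and every access already goes
	 * through the DRM commit machinery, so skip regmap's own locking and
	 * make the write path callable from the panic handler.
	 */
	.disable_locking = true,
};
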
>
> Quick irc discussion summary: Since these are panels that sit on i2c/spi
> buses, you need to put the raw spinlock panic locking into these
> subsystems. Which is going to be extreme amounts of fun, because:
>
> - You need to protect innermost register access with a raw spinlock, but
>   enough so that every access is still consistent.
>
> - You need separate panic paths which avoid all the existing subsystem
>   locking (i2c/spi have userspace apis, so they need mutexes) and only
>   rely on the caller having done the raw spinlock trylocking.
>
> - Bonus points: Who even owns that raw spinlock ...
>
> I'm afraid this is going to be a tough nut to crack :-/
>

Yeah, not worth the effort then. I'll just drop this patch.
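
For the record, in case someone wants to pick this up later: on the driver
side the panic path would have to look something like the sketch below.
Everything in it is hypothetical -- the trylock-able bus lock and the
lockless transfer helper do not exist and would first have to be added to
the i2c/spi cores, which is exactly the hard part described above.

/* Hypothetical sketch only -- none of these helpers exist today. */
static void ssd130x_panic_flush_data(struct i2c_client *client,
				     const u8 *buf, size_t len)
{
	/*
	 * Never block from the panic handler: if someone already holds
	 * the bus, skip drawing the panic screen entirely.
	 */
	if (!i2c_adapter_panic_trylock(client->adapter))
		return;

	/*
	 * A transfer path that bypasses the i2c core mutexes and relies
	 * only on the raw spinlock taken above.
	 */
	i2c_panic_master_send(client, buf, len);

	i2c_adapter_panic_unlock(client->adapter);
}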

> Cheers, Sima
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
>

-- 
Best regards,

Javier Martinez Canillas
Core Platforms
Red Hat


