[Spice-devel] [PATCH spice-server 2/2] Fix crash attempting to connect duplicate channels
Christophe de Dinechin
cdupontd at redhat.com
Wed Aug 30 09:02:27 UTC 2017
> On 30 Aug 2017, at 10:53, Frediano Ziglio <fziglio at redhat.com> wrote:
>
>>
>>> On 30 Aug 2017, at 10:19, Frediano Ziglio <fziglio at redhat.com> wrote:
>>>
>>> You could easily trigger this issue using multiple monitors and
>>> a modified client with this patch:
>>>
>>> --- a/src/channel-main.c
>>> +++ b/src/channel-main.c
>>> @@ -1699,6 +1699,7 @@ static gboolean _channel_new(channel_new_t *c)
>>> {
>>> g_return_val_if_fail(c != NULL, FALSE);
>>>
>>> + if (c->type == SPICE_CHANNEL_DISPLAY) c->id = 0;
>>> spice_channel_new(c->session, c->type, c->id);
>>
>> So what you are doing with this change is creating multiple channels
>> with the same id. Is that valid? Why would you want to do that
>> legitimately?
>>
>
> Because I'm the bad guy! :-)
>
> If you expect that everybody is good to you... well...
> then there is no security issue, right?
Oh, I missed that the patch was client side.
> In this case an authenticated user (maybe a normal one) can
> crash (so shut down) the guest, which normally requires admin privileges.
>
> But besides that, robust code should handle unexpected
> conditions; after all, anything can come from the network.
Agreed.
>
>> If it’s not valid, shouldn’t we rather assert at creation time?
>> Would that also fix the issue you were seeing?
>>
>
> assert -> crash?? Why do I have to crash the VM?
We shouldn’t. I misunderstood your experiment and thought
you were creating internal inconsistency, when in fact you were
generating bad input for the server.
>
>> I’m trying to understand if the condition your patch is addressing
>> is legitimate today, or if there is a reason to make it legitimate later,
>> or if it’s forever a “don’t go there” scenario :-)
>>
>>>
>>> g_object_unref(c->session);
>>>
>>>
>>> which causes a crash like
>>>
>>> (process:28742): Spice-WARNING **: Failed to create channel client: Client 0x40246f5d0: duplicate channel type 2 id 0
>>> 2017-08-24 09:36:57.451+0000: shutting down, reason=crashed
>>>
>>> The watch in RedsStream is supposed to be bound to the Qemu
>>> provided SpiceCoreInterface, while RedChannelClient bound it
>>> to the Glib one, causing a crash when the watch is deleted from
>>> RedsStream. To avoid this, the watch is now saved in
>>> RedChannelClientPrivate instead.
>>
>> For my education, could you explain why it’s harmless to move from a
>> per-stream watch to a per-channel client watch?
>>
>>
>
> It's just that RedsStream assumes that the stream is attached to
> the main SpiceCoreInterface, which is different from the RCC one.
> This difference basically causes the code to call a function
> (provided by the spice-server user) passing an invalid pointer.
> So storing the watch in the RCC fixes this wrong assumption.
OK. But then can’t you remove the stream watch? Or is it used elsewhere?
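If I read the explanation correctly, the failure mode is roughly the
following (a hypothetical sketch, not the actual spice-server code;
qemu_core and glib_core are made-up names for the two
SpiceCoreInterfaceInternal instances):

    /* A SpiceWatch is an opaque handle that only the core interface
     * that created it knows how to interpret. */
    SpiceWatch *watch = glib_core->watch_add(glib_core, stream->socket,
                                             SPICE_WATCH_EVENT_READ,
                                             red_channel_client_event, rcc);
    stream->watch = watch;   /* RCC stored it in RedsStream... */

    /* ...but RedsStream tears down assuming the watch belongs to the
     * main, Qemu-provided core interface: */
    qemu_core->watch_remove(qemu_core, stream->watch);   /* wrong core:
                                                            invalid pointer */

Is that a fair summary?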
> As a RedsStream is attached to a single RCC, both watches would
> in theory be per-channel.
>
> Another option would be to store the right SpiceCoreInterface
> inside RedsStream.
OK. Need to explore the code more to really understand that. For now,
I’ll trust your understanding of the matter ;-)
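(If I understand that alternative, it would look roughly like this; just
a sketch, the core field is made up and not in the current RedsStream:

    struct RedsStream {
        /* ... existing fields ... */
        SpiceWatch *watch;
        SpiceCoreInterfaceInternal *core;  /* the core that owns watch */
    };

    /* Every user of stream->watch would then go through stream->core,
     * whatever core interface the caller itself is bound to: */
    stream->core->watch_remove(stream->core, stream->watch);

so the watch is always removed through the interface that created it.)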
>
>>
>>>
>>> Signed-off-by: Frediano Ziglio <fziglio at redhat.com>
>>> ---
>>> server/red-channel-client.c | 39 ++++++++++++++++++++++++++++-----------
>>> 1 file changed, 28 insertions(+), 11 deletions(-)
>>>
>>> diff --git a/server/red-channel-client.c b/server/red-channel-client.c
>>> index 145ba43f..65392ed1 100644
>>> --- a/server/red-channel-client.c
>>> +++ b/server/red-channel-client.c
>>> @@ -120,6 +120,7 @@ struct RedChannelClientPrivate
>>> RedChannel *channel;
>>> RedClient *client;
>>> RedsStream *stream;
>>> + SpiceWatch *watch;
>>> gboolean monitor_latency;
>>>
>>> struct {
>>> @@ -339,6 +340,20 @@ red_channel_client_finalize(GObject *object)
>>> {
>>> RedChannelClient *self = RED_CHANNEL_CLIENT(object);
>>>
>>> + SpiceCoreInterfaceInternal *core = red_channel_get_core_interface(self->priv->channel);
>>> + if (self->priv->watch) {
>>> + core->watch_remove(core, self->priv->watch);
>>> + self->priv->watch = NULL;
>>> + }
>>> + if (self->priv->latency_monitor.timer) {
>>> + core->timer_remove(core, self->priv->latency_monitor.timer);
>>> + self->priv->latency_monitor.timer = NULL;
>>> + }
>>> + if (self->priv->connectivity_monitor.timer) {
>>> + core->timer_remove(core, self->priv->connectivity_monitor.timer);
>>> + self->priv->connectivity_monitor.timer = NULL;
>>> + }
>>> +
>>> reds_stream_free(self->priv->stream);
>>> self->priv->stream = NULL;
>>>
>>> @@ -922,7 +937,7 @@ static gboolean red_channel_client_initable_init(GInitable *initable,
>>>
>>> core = red_channel_get_core_interface(self->priv->channel);
>>> if (self->priv->stream)
>>> - self->priv->stream->watch =
>>> + self->priv->watch =
>>> core->watch_add(core, self->priv->stream->socket,
>>> SPICE_WATCH_EVENT_READ,
>>> red_channel_client_event,
>>> @@ -1003,9 +1018,11 @@ void red_channel_client_destroy(RedChannelClient *rcc)
>>> void red_channel_client_shutdown(RedChannelClient *rcc)
>>> {
>>> if (rcc->priv->stream && !rcc->priv->stream->shutdown) {
>>> - SpiceCoreInterfaceInternal *core = red_channel_get_core_interface(rcc->priv->channel);
>>> - core->watch_remove(core, rcc->priv->stream->watch);
>>> - rcc->priv->stream->watch = NULL;
>>> + if (rcc->priv->watch) {
>>> + SpiceCoreInterfaceInternal *core = red_channel_get_core_interface(rcc->priv->channel);
>>> + core->watch_remove(core, rcc->priv->watch);
>>> + rcc->priv->watch = NULL;
>>> + }
>>> shutdown(rcc->priv->stream->socket, SHUT_RDWR);
>>> rcc->priv->stream->shutdown = TRUE;
>>> }
>>> @@ -1294,10 +1311,10 @@ void red_channel_client_push(RedChannelClient *rcc)
>>> red_channel_client_send_item(rcc, pipe_item);
>>> }
>>> if (red_channel_client_no_item_being_sent(rcc) && g_queue_is_empty(&rcc->priv->pipe)
>>> - && rcc->priv->stream->watch) {
>>> + && rcc->priv->watch) {
>>> SpiceCoreInterfaceInternal *core;
>>> core = red_channel_get_core_interface(rcc->priv->channel);
>>> - core->watch_update_mask(core, rcc->priv->stream->watch,
>>> + core->watch_update_mask(core, rcc->priv->watch,
>>> SPICE_WATCH_EVENT_READ);
>>> }
>>> rcc->priv->during_send = FALSE;
>>> @@ -1511,10 +1528,10 @@ static inline gboolean prepare_pipe_add(RedChannelClient *rcc, RedPipeItem *item
>>> red_pipe_item_unref(item);
>>> return FALSE;
>>> }
>>> - if (g_queue_is_empty(&rcc->priv->pipe) && rcc->priv->stream->watch) {
>>> + if (g_queue_is_empty(&rcc->priv->pipe) && rcc->priv->watch) {
>>> SpiceCoreInterfaceInternal *core;
>>> core = red_channel_get_core_interface(rcc->priv->channel);
>>> - core->watch_update_mask(core, rcc->priv->stream->watch,
>>> + core->watch_update_mask(core, rcc->priv->watch,
>>> SPICE_WATCH_EVENT_READ |
>>> SPICE_WATCH_EVENT_WRITE);
>>> }
>>> return TRUE;
>>> @@ -1678,9 +1695,9 @@ void red_channel_client_disconnect(RedChannelClient *rcc)
>>> spice_printerr("rcc=%p (channel=%p type=%d id=%d)", rcc, channel,
>>> type, id);
>>> red_channel_client_pipe_clear(rcc);
>>> - if (rcc->priv->stream->watch) {
>>> - core->watch_remove(core, rcc->priv->stream->watch);
>>> - rcc->priv->stream->watch = NULL;
>>> + if (rcc->priv->watch) {
>>> + core->watch_remove(core, rcc->priv->watch);
>>> + rcc->priv->watch = NULL;
>>> }
>>> if (rcc->priv->latency_monitor.timer) {
>>> core->timer_remove(core, rcc->priv->latency_monitor.timer);
>
> Frediano