[Spice-devel] [PATCH spice-server v3 2/2] stream-channel: Send the full frame in a single message
Lukáš Hrázký
lhrazky at redhat.com
Thu Jun 14 10:23:41 UTC 2018
On Thu, 2018-06-14 at 05:10 -0400, Frediano Ziglio wrote:
> >
> > On Wed, 2018-06-13 at 07:13 -0400, Frediano Ziglio wrote:
> > > >
> > > > On Tue, 2018-06-12 at 13:30 +0200, Christophe Fergeau wrote:
> > > > > Hey,
> > > > >
> > > > > On Wed, May 09, 2018 at 01:20:19PM +0200, Lukáš Hrázký wrote:
> > > > > > On Wed, 2018-05-09 at 05:18 -0400, Frediano Ziglio wrote:
> > > > > > > The agent sends a single message, but reads/writes to the device are
> > > > > > > not atomic. Note that the current protocol introduces additional
> > > > > > > delays, as the frames cannot be partially decoded but must wait for
> > > > > > > the full message (maybe the client can change its read code to
> > > > > > > handle this, at the moment it does nothing about it; on the server
> > > > > > > it is less of a problem as the message is built quickly from device
> > > > > > > to memory, so not much delay is added).
> > > > > >
> > > > > > This is the part I don't understand... AFAICS, you read the whole
> > > > > > message in red-stream-device.c:handle_msg_data(). That should be the
> > > > > > whole frame? Then you send the whole frame with
> > > > > > stream_channel_send_data(). So it should never be partial?
> > > > >
> > > > > I would guess the data you get in handle_msg_data() got fragmented
> > > > > somewhere along the agent -> kernel -> virtio-serial -> qemu -> spice
> > > > > path. So one mjpeg frame will correspond to multiple calls to
> > > > > n = sif->read(sin, buf, MIN(sizeof(buf), dev->hdr.size));
> > > > > Then, with this patch, stream_channel_send_data() will coalesce all
> > > > > these reads into a single StreamDataItem and send it when its size
> > > > > corresponds to dev->hdr.size.
> > > >
> > > > Yeah, I figured that out when I revisited this a few days ago after a
> > > > discussion with Uri. He mentioned he's got another patch for it, though
> > > > I'm not sure where it is (I may have missed it).
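
Just to double-check I have the picture right, below is a small stand-alone
simulation of the behaviour described above (nothing in it is spice-server
code; all names and sizes are made up, and in particular I'm assuming the
item ends up sized to the full frame from the device header, which isn't
visible in the quoted hunk):

#include <stdio.h>

#define FRAME_SIZE   (600 * 1024)   /* like dev->hdr.size for one frame */
#define CHUNK_SIZE   (64 * 1024)    /* what one partial sif->read() might return */

static size_t item_size;            /* stands in for item->data.data_size */
static size_t item_pos;             /* stands in for channel->data_item_pos */
static int have_item;               /* stands in for channel->data_item != NULL */
static int items_queued;

/* mirrors the coalescing loop in stream_channel_send_data() */
static void send_data(size_t size)
{
    while (size) {
        if (!have_item) {           /* stream_channel_init_data_item() */
            item_size = FRAME_SIZE; /* assumption: sized to the full frame */
            item_pos = 0;
            have_item = 1;
        }
        size_t copy = item_size - item_pos;
        if (copy > size)
            copy = size;
        size -= copy;
        item_pos += copy;
        if (item_pos == item_size) {    /* frame complete: queue one pipe item */
            have_item = 0;
            items_queued++;
        }
    }
}

int main(void)
{
    size_t left = FRAME_SIZE;
    int reads = 0;
    while (left) {
        size_t n = left < CHUNK_SIZE ? left : CHUNK_SIZE;
        send_data(n);               /* one call per partial device read */
        left -= n;
        reads++;
    }
    printf("%d partial reads coalesced into %d pipe item(s)\n", reads, items_queued);
    return 0;
}

On my reading that prints "10 partial reads coalesced into 1 pipe item(s)",
which is the single-message-per-frame behaviour the patch is after.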
> > > >
> > > > > > > > >  void
> > > > > > > > > @@ -563,11 +609,25 @@ stream_channel_send_data(StreamChannel *channel, const void *data, size_t size,
> > > > > > > > >
> > > > > > > > >      RedChannel *red_channel = RED_CHANNEL(channel);
> > > > > > > > >
> > > > > > > > > -    StreamDataItem *item = stream_data_item_new(channel, size, mm_time);
> > > > > > > > > -    stream_channel_update_queue_stat(channel, 1, size);
> > > > > > > > > -    // TODO try to optimize avoiding the copy
> > > > > > > > > -    memcpy(item->data.data, data, size);
> > > > > > > > > -    red_channel_pipes_add(red_channel, &item->base);
> > > > > > > > > +    while (size) {
> > > > > > > > > +        if (channel->data_item == NULL) {
> > > > > > > > > +            stream_channel_init_data_item(channel, size, mm_time);
> > > > > > > > > +        }
> > > > > > > > > +
> > > > > > > > > +        StreamDataItem *item = channel->data_item;
> > > > > > > > > +
> > > > > > > > > +        size_t copy_size = item->data.data_size - channel->data_item_pos;
> > > > > > > > > +        copy_size = MIN(copy_size, size);
> > > > > > > > > +        // TODO try to optimize avoiding the copy
> > > > > > > > > +        memcpy(item->data.data + channel->data_item_pos, data, copy_size);
> > > > > > > > > +        size -= copy_size;
> > > > > > > > > +        channel->data_item_pos += copy_size;
> > > > > > > > > +        if (channel->data_item_pos == item->data.data_size) {
> > > > > > > > > +            channel->data_item = NULL;
> > > > > > > > > +            stream_channel_update_queue_stat(channel, 1, item->data.data_size);
> > > > > > > > > +            red_channel_pipes_add(red_channel, &item->base);
> > > > > > > > > +        }
> > > > > > > > > +    }
> > > > > > What does the while (size) loop do here? It will do more than one
> > > > > > iteration only if copy_size < size, which means there is not enough
> > > > > > space in the item buffer and in that case it seems to me it will loop
> > > > > > forever? Am I missing something?
> > > > >
> > > > > I don't think it will loop forever (channel->data_item will be set to
> > > > > NULL, and then a new one will be created on the next iteration, and
> > > > > this
> > > > > one will be sent, and at that point, size will be 0).
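
(Spelling it out for myself: in the normal case where the item is created
with data_size equal to the incoming size, the first iteration copies
everything, the item fills up and is queued, size drops to 0 and the loop
exits after a single pass.)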
> > > >
> > > > Same as above... figured it out, not sure what I was thinking at the
> > > > time :)
> > > >
> > > > > However, I'm also slightly confused as to the intent of that loop.
> > > > > Maybe it's there to handle the case when we receive more data than we
> > > > > expect?
> > > >
> > > > Yeah, coming back to look at it once again (unless I'm wrong again), it
> > > > seems you're right :) I don't think we can actually receive more data for
> > > > the message than the size given in the header? Because the size defines
> > > > the end of the message? So:
> > > >
> > > > 1. The loop should never do more than one iteration.
> > > >
> > >
> > > It does, in case the guest wants to send a very large message; in that
> > > case it is limited to 32MB.
> > >
> > > > 2. If it actually does, I think it will fragment the message once again
> > > > (sending two messages for a single frame), so the bug with the client
> > > > persists.
> > > >
> > >
> > > Simply not expected to be a regression in this case.
> >
> > I'm not following you here. Do you mean that you do not expect this to
> > happen? If so, and if the code actually doesn't work for that case, the
> > code doesn't need to be there...
> >
>
> What the "this" in the above sentence?
> Mjpeg with very large messages is not expected; other cases can be
> handled by that code.
What's the "that" in the above sentence? :D
The "this" means the message exceeding 32MB. So, to rephrase:
If you do not expect the 32MB limit to be exceeded and the code
fragments the message once again, which still breaks the client, the
code handling the > 32MB case doesn't need to be there.
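To spell out the fragmentation I mean: say the guest sends a (hypothetical)
40MB frame and an item is capped at 32MB. The first pass through the while
loop fills a 32MB item and queues it, the next pass creates a second item for
the remaining 8MB, so the client still gets two data messages for that one
frame.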
But below you say it does work for codec != Mjpeg ...
> > > We need a single message per frame only with Mjpeg.
> >
> > I don't understand this either. AFAIK the frame data messages are
> > handled the same regardless of the codec? My understanding was that the
> > Mjpeg messages are a bit longer so that they actually trigger the bug?
> >
> > I think this bug is actually present for all the messages received by
> > the server, it's just that most of them are too short to trigger it.
> > I'm not sure we should rely on it though.
> >
>
> No, it's just that other codecs (not mjpeg) are perfectly fine with splitting frame data.
How can it work? I should go dig it up in the code but a quick
explanation from you first would help...
> > > > I was also thinking the ideal way of handling this would be to "pipe
> > > > through" the data, i.e. send a header with the full data message size
> > > > to the streaming channel once a data message header arrives on the
> > > > streaming device and then send whatever fragments of the message arrive
> > > > on the device, until the message is complete. But I think that's not
> > > > possible with the channel interface?
> > > >
> > >
> > > Yes, I thought so too and it would be great. Currently the RedChannelClient
> > > requires each item to return a marshaller with all the data and will send
> > > data from the marshaller. RCC does not allow "pausing" when data is not
> > > available.
> > >
>
> I had another look at how this "pipe through" could be done and didn't
> find anything easy.
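For the record, the kind of interface I had in mind for the "pipe through"
would be roughly the following (completely hypothetical, none of these types
or callbacks exist in spice-server today):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical only: a pipe item that can report "no more data yet", so the
 * channel could start sending a frame before the device has delivered all of
 * it. */
typedef enum {
    STREAM_FILL_MORE,    /* some bytes were produced, call again when writable */
    STREAM_FILL_BLOCKED, /* nothing available yet, wait for more device data */
    STREAM_FILL_DONE     /* the full frame has been handed over */
} StreamFillResult;

typedef struct PausableStreamItem {
    size_t frame_size;   /* announced up front, taken from the device header */
    size_t sent;         /* how much has been marshalled so far */
    /* called by the channel when the socket is writable; copies at most *len
     * bytes into buf and updates *len with what was actually produced */
    StreamFillResult (*fill)(struct PausableStreamItem *item,
                             uint8_t *buf, size_t *len);
} PausableStreamItem;

That of course clashes with RCC expecting a complete marshaller per item,
which is exactly the limitation you describe above.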
> Currently, looking at how messages are handled, the way we process them
> can increase latency if they are big. We wait for full messages before
> processing any data (both on the server and the client). This means the
> server has to wait for the guest to send all the data, and the client
> will start decoding only when all the data is received. With the 32MB
> limit, for instance, on a 1Gb connection we would have to wait about
> 0.3 seconds before any processing can take place.
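(That figure checks out: 32 MB is about 268 Mbit, which at 1 Gbit/s is
roughly 0.27 s just to get the complete message across before anything can
be decoded.)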
Yeah, the frames are normally much smaller, but latency is very
important to us, and it can stack up in this spot in the server... Do
you think we'll need to address it, even though it won't be simple?
> > > > Cheers,
> > > > Lukas
> > > >
> > > > > Christophe
>
> Frediano