[Spice-devel] [RFC PATCH 3/3] worker: use manual flushing on stream to decrease packet fragmentation
Jonathon Jongsma
jjongsma at redhat.com
Mon Feb 22 17:17:38 UTC 2016
On Tue, 2016-02-16 at 15:36 +0000, Frediano Ziglio wrote:
> Signed-off-by: Frediano Ziglio <fziglio at redhat.com>
> ---
>  server/red-channel.c | 12 ++++++++----
>  server/red-worker.c  | 24 +++++++++++++-----------
> 2 files changed, 21 insertions(+), 15 deletions(-)
>
> diff --git a/server/red-channel.c b/server/red-channel.c
> index d7ff37e..c2a0c78 100644
> --- a/server/red-channel.c
> +++ b/server/red-channel.c
> @@ -1364,10 +1364,14 @@ void red_channel_client_push(RedChannelClient *rcc)
>      while ((pipe_item = red_channel_client_pipe_item_get(rcc))) {
>          red_channel_client_send_item(rcc, pipe_item);
>      }
> -    if (red_channel_client_no_item_being_sent(rcc) && ring_is_empty(&rcc->pipe)
> -        && rcc->stream->watch) {
> -        rcc->channel->core->watch_update_mask(rcc->stream->watch,
> -                                              SPICE_WATCH_EVENT_READ);
> +    if (red_channel_client_no_item_being_sent(rcc) && ring_is_empty(&rcc->pipe)) {
> +        if (rcc->stream->watch) {
> +            rcc->channel->core->watch_update_mask(rcc->stream->watch,
> +                                                  SPICE_WATCH_EVENT_READ);
> +        }
> +        reds_stream_flush(rcc->stream);
> +    } else if (red_channel_client_waiting_for_ack(rcc)) {
> +        reds_stream_flush(rcc->stream);
Can you explain the reason for flushing when we're waiting for an ACK? Is there
a reason we couldn't just move the flush completely outside of the if statement
and flush unconditionally at the end of this function?
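To make the question concrete, I was picturing something like the untested
sketch below. It keeps the original condition for the watch mask update and
assumes reds_stream_flush() is effectively a no-op when nothing has been
buffered since the last flush:

    /* untested sketch, not a patch: update the watch mask as before and
     * flush unconditionally once the pipe has been walked */
    if (red_channel_client_no_item_being_sent(rcc) && ring_is_empty(&rcc->pipe)
        && rcc->stream->watch) {
        rcc->channel->core->watch_update_mask(rcc->stream->watch,
                                              SPICE_WATCH_EVENT_READ);
    }
    reds_stream_flush(rcc->stream);
    rcc->during_send = FALSE;
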
>      }
>      rcc->during_send = FALSE;
>      red_channel_client_unref(rcc);
> diff --git a/server/red-worker.c b/server/red-worker.c
> index bf12a22..b368a63 100644
> --- a/server/red-worker.c
> +++ b/server/red-worker.c
> @@ -416,17 +416,19 @@ static int common_channel_config_socket(RedChannelClient *rcc)
>
>      // TODO - this should be dynamic, not one time at channel creation
>      ccc->is_low_bandwidth = main_channel_client_is_low_bandwidth(mcc);
> -    delay_val = ccc->is_low_bandwidth ? 0 : 1;
> -    /* FIXME: Using Nagle's Algorithm can lead to apparent delays, depending
> -     * on the delayed ack timeout on the other side.
> -     * Instead of using Nagle's, we need to implement message buffering on
> -     * the application level.
> -     * see: http://www.stuartcheshire.org/papers/NagleDelayedAck/
> -     */
> -    if (setsockopt(stream->socket, IPPROTO_TCP, TCP_NODELAY, &delay_val,
> -                   sizeof(delay_val)) == -1) {
> -        if (errno != ENOTSUP) {
> -            spice_warning("setsockopt failed, %s", strerror(errno));
> +    if (!reds_stream_set_auto_flush(rcc->stream, false)) {
> +        delay_val = ccc->is_low_bandwidth ? 0 : 1;
> +        /* FIXME: Using Nagle's Algorithm can lead to apparent delays, depending
> +         * on the delayed ack timeout on the other side.
> +         * Instead of using Nagle's, we need to implement message buffering on
> +         * the application level.
> +         * see: http://www.stuartcheshire.org/papers/NagleDelayedAck/
> +         */
> +        if (setsockopt(stream->socket, IPPROTO_TCP, TCP_NODELAY, &delay_val,
> +                       sizeof(delay_val)) == -1) {
> +            if (errno != ENOTSUP) {
> +                spice_warning("setsockopt failed, %s", strerror(errno));
> +            }
I don't quite understand why you're setting NODELAY only when enabling CORK
fails here. Can you provide a bit more justification/explanation, e.g. in the
commit message?
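For the sake of discussion, this is my guess at how the manual flushing works;
the helper names are made up and this is not meant to be the actual reds-stream
implementation, just the TCP_CORK pattern I assume it follows:

#include <stdbool.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Made-up helpers sketching the corking scheme as I understand it. */
static bool sketch_enable_manual_flush(int sock)
{
    int cork = 1;
    /* While TCP_CORK is set, the kernel holds back partial segments and
     * coalesces small writes instead of sending them immediately. */
    return setsockopt(sock, IPPROTO_TCP, TCP_CORK, &cork, sizeof(cork)) == 0;
}

static void sketch_flush(int sock)
{
    int cork = 0;
    /* Clearing TCP_CORK pushes out whatever is still queued... */
    setsockopt(sock, IPPROTO_TCP, TCP_CORK, &cork, sizeof(cork));
    /* ...and setting it again resumes coalescing for the next batch. */
    cork = 1;
    setsockopt(sock, IPPROTO_TCP, TCP_CORK, &cork, sizeof(cork));
}

If that is roughly what happens, then as far as I understand TCP_NODELAY is
mostly irrelevant while the socket stays corked, which would explain the
fallback, but it would be good to have that reasoning spelled out, along with
what happens on platforms where corking is not available.
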
>          }
>      }
>      return TRUE;