<div dir="ltr">Hi Pekka,<br><br><div class="gmail_quote"><div dir="ltr">On Tue, Jun 19, 2018 at 5:12 AM Pekka Paalanen <<a href="mailto:ppaalanen@gmail.com" target="_blank">ppaalanen@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">I think the aim should be to remove the abort(). I do not see any<br>
reason to leave it in even as an option.<br></blockquote><div><br></div><div>Thank you. Will proceeed in that direction.</div><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">The soft-buffer in libwayland-client is quite small, the kernel socket<br>
buffers are usually much larger in bytes. EAGAIN is triggered only when<br>
the flush to the kernel fails, so I don't think that would happen<br>
needlessly.<br></blockquote><div><br></div><div>Ack.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">If I'm reading the code right, MAX_FDS_OUT is only a limit for a single<br>
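For what it's worth, this is roughly how I understand a non-blocking client is
meant to deal with that EAGAIN, using only public API (illustrative sketch,
not library code; try_flush() is just a name I made up):

#include <errno.h>
#include <poll.h>
#include <wayland-client.h>

/* Returns the poll events the caller should watch on wl_display_get_fd().
 * wl_display_flush() sets errno to EAGAIN when the kernel socket buffer
 * is full, so the client arms POLLOUT and flushes again once writable. */
static short
try_flush(struct wl_display *display)
{
        if (wl_display_flush(display) >= 0)
                return POLLIN;                  /* everything queued was sent */

        if (errno == EAGAIN)
                return POLLIN | POLLOUT;        /* retry the flush when writable */

        return 0;                               /* fatal error on the connection */
}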
> If I'm reading the code right, MAX_FDS_OUT is only a limit for a single
> sendmsg() call. The fds_out buffer is capable of holding 1k fds.
>
> We have 4kB message data buffer (what I've been calling the
> soft-buffer, as opposed to in-kernel buffers). A minimal request to
> carry an fd would be header (8B) plus fd (4B), so that's 12B. That
> makes at most 341 fds until the soft-buffer is full. That should well
> fit into the 1k fds that fds_out can hold.
>
> One could have more fds in a single request. Let's say four. The
> request size becomes 24B carrying 4 fds. That makes 680 fds for a full
> soft-buffer. Should still be fine.

The limit is both the maximum number of fds per sendmsg() call and the
maximum number that can be sent by a single send request.

The fds are added one at a time by calls to wl_connection_put_fd(), which
forces a flush once the limit is reached. Those calls are made by
copy_fds_to_connection(), called once per message send from
wl_closure_send().

On a flush (forced or not), the fds are closed by close_fds(), which then
advances the tail to the head, emptying the ring buffer again.

That the output fd buffer is over-large at 4kB just allows much of the ring
buffer code to be reused. It will never actually hold 1024 fds enqueued for
output, though.

(The input fd buffer may allow 1k fds. I did not examine that carefully
enough to be sure.)

I think this limit is still a problem. I could change it to actually allow
1k fds to be queued up, but then that would make the dup() more likely to
fail (more in the other reply).
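To recap that mechanism in code form, this is roughly how I read
wl_connection_put_fd() in connection.c (a condensed sketch from memory, not
the verbatim source; wl_buffer_size()/wl_buffer_put() are the ring-buffer
helpers as I understand them):

#define MAX_FDS_OUT 28

int
wl_connection_put_fd(struct wl_connection *connection, int32_t fd)
{
        /* Ring already holds MAX_FDS_OUT fds: force a flush, which
         * sendmsg()s the pending data, then close_fds() closes the
         * dup'd fds and empties the ring again. */
        if (wl_buffer_size(&connection->fds_out) == MAX_FDS_OUT * sizeof fd) {
                connection->want_flush = 1;
                if (wl_connection_flush(connection) < 0)
                        return -1;
        }

        wl_buffer_put(&connection->fds_out, &fd, sizeof fd);

        return 0;
}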
> However, wl_connection_flush() attempts to write out at most
> MAX_FDS_OUT fds at a time, but all of the message data. OTOH, one
> cannot even try to flush out fds without any message data, IIRC
> sendmsg() doesn't support that. So it can run out of message data while
> fds accumulate and finally fill fds_out up, causing irrecoverable
> failure.
>
> Do your observations support this theory?

I haven't observed a problem case like that. Any normal individual request
will probably only use a few fds, and I don't think there is a way for a
request to not carry some non-fd data.

I can imagine someone creating a new protocol message that takes more than
28 fds in a single request, which would then fail to flush if there were no
other normal message data already queued up. But that seems a very abnormal
use. wl_closure_marshal() would be the best place for a check (it already
checks for null objects -- see "goto err_null"), and on a quick look through
the code I didn't find any other bounds check on the number of fds in a
request.

But I'll study the code a bit more in case there is some subtlety that
escapes me.
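Along the lines of the wl_closure_marshal() idea above, the check I have in
mind would be something like this (purely hypothetical sketch, not a patch;
count_fd_args() and check_fd_count() are made-up helpers):

#include <errno.h>
#include <wayland-util.h>       /* struct wl_message */

/* Count the fd ('h') arguments declared in a message signature. */
static int
count_fd_args(const struct wl_message *message)
{
        int count = 0;
        const char *sig;

        for (sig = message->signature; *sig != '\0'; sig++)
                if (*sig == 'h')
                        count++;

        return count;
}

/* Check that wl_closure_marshal() could perform next to its existing
 * argument checks: refuse a request whose fds could never be flushed
 * by a single sendmsg() (MAX_FDS_OUT is 28 in connection.c). */
static int
check_fd_count(const struct wl_message *message)
{
        if (count_fd_args(message) > 28 /* MAX_FDS_OUT */) {
                errno = EINVAL;
                return -1;
        }

        return 0;
}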
> It seems we should somehow flush less message data so that we can call
> sendmsg() enough times to handle all the fds and avoid them piling up.
> But if we do so, we must be very careful with any potential assumptions
> on receiving side on the ordering between message data and related fds.

I expect that the receiving code will just pull the next fd out of the
fds_in ring buffer as it turns each message back into a wl_closure and
realizes an fd is needed for an argument.

But I haven't looked at that code. Another thing I will check to be safe.
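Purely to illustrate that expectation (again, I have not verified this
against the real demarshalling code), I would guess the fd arguments are
satisfied by something equivalent to the following toy model; fd_ring and
take_next_fd() are made-up names:

#include <stdint.h>

/* Toy model of the expected receive path: fds arrive into a ring and
 * are consumed strictly in order, one popped off whenever the message
 * signature says the next argument is an 'h' (fd). */
struct fd_ring {
        int32_t fds[1024];      /* 4 kB worth of fds */
        uint32_t head, tail;
};

static int
take_next_fd(struct fd_ring *fds_in, int32_t *fd)
{
        if (fds_in->tail == fds_in->head)
                return -1;      /* message data arrived ahead of its fd */

        *fd = fds_in->fds[fds_in->tail++ % 1024];
        return 0;
}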
> Even the author of the client would usually not know. It would make a
> very hard API to use correctly.

Ok, I won't introduce anything like that.

> My objection is to the automatic blocking in general. We have a pretty
> nicely asynchronous protocol and we try to teach programmers to write
> their apps in a non-blocking way, so blocking inside libwayland-client
> is not nice. We have very few functions that are documented as
> potentially blocking. Extending that to all request sending functions
> is not acceptable in my mind.

I agree, and I appreciate that the core library is non-blocking and really
don't want to change that. That is why my original patch made the blocking
at least opt-in, and even then not part of the core client code.

- Lloyd