<div dir="ltr">Hi Pekka,<div><br></div><div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">- ABI to query a "flush recommended" flag; This flag would be set when<br>
the soft-buffer is at least half-full, and cleared when it drops<br>
to... below half? empty?</blockquote><div><br></div><div><div style="text-decoration-style:initial;text-decoration-color:initial;background-color:rgb(255,255,255)">This sounds reasonable, I'm happy to incorporate this first bit into my patch, but .....</div></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">- When a client is doing lots of request sending without returning to<br>
its main loop which would call wl_display_flush() anyway, it can<br>
query the flag to see if it needs to flush. </blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
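Just to check that I understand the intended usage, here is roughly what I
imagine a client doing with it. This is only a sketch:
wl_display_flush_is_recommended() is a placeholder name for whatever the
new ABI ends up being called; it does not exist in libwayland today, and
error handling of the flush itself is omitted here.

#include <wayland-client.h>

/* Sketch only: wl_display_flush_is_recommended() stands in for the
 * proposed "flush recommended" query and does not exist yet.  It would
 * return non-zero once the soft-buffer is at least half-full. */
static void
send_request_burst(struct wl_display *display, struct wl_surface *surface)
{
	for (int i = 0; i < 10000; i++) {
		/* Lots of request sending without returning to the main
		 * loop (which would call wl_display_flush() anyway). */
		wl_surface_damage(surface, 0, 0, 1, 1);

		/* Cheaper than calling wl_display_flush() outright on
		 * every iteration. */
		if (wl_display_flush_is_recommended(display))
			wl_display_flush(display);
	}
}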
> - If flush ever fails, stop all request sending, poll for writable and
>   try again. How to do this is left for the application. Most
>   importantly, the application could set some state and return to its
>   main event loop to do other stuff in the mean while.
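For the failure case, the pattern I have in mind looks roughly like the
untested sketch below, using only the existing wl_display_flush() and
wl_display_get_fd() API. As you say, a client could instead just record
the failure and return to its main event loop rather than blocking in
poll().

#include <errno.h>
#include <poll.h>
#include <wayland-client.h>

/* Untested sketch: if wl_display_flush() cannot write everything
 * (EAGAIN), stop sending, poll the display fd for writability and try
 * the flush again.  Any other error is treated as fatal. */
static int
flush_blocking(struct wl_display *display)
{
	while (wl_display_flush(display) < 0) {
		if (errno != EAGAIN)
			return -1;	/* e.g. EPIPE: give up */

		struct pollfd pfd = {
			.fd = wl_display_get_fd(display),
			.events = POLLOUT,
		};
		if (poll(&pfd, 1, -1) < 0 && errno != EINTR)
			return -1;
	}
	return 0;
}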
> You're right that this wouldn't help an application that sends requests
> from multiple threads a lot. They would need to be checking the flag
> practically for every few requests, but at least that would be cheaper
> than calling wl_display_flush() outright.
>
> [...]
> Can you think of any way to recover from dupfd failure without
> disconnecting?

There are only two ways I can think of to recover from the dupfd failure:

1) Don't dup(). The purpose of the dup() is to enqueue fds in the fds_out
   ring buffer across multiple send calls. If a send call carrying fds did
   an immediate flush instead, there would be no need to dup. The downside
   is that the caller no longer gets a chance to do its wl_display_flush()
   at a point where it can afford to wait.

2) Add a new "send-error" flag, breaking existing code and requiring the
   client to check after every send call, somehow do its own recovery, and
   retry the send calls.

I don't think anyone would want the second, except perhaps as part of some
fundamental redesign that new clients would be written against from scratch.

The first might actually be acceptable to me in practice.

Assuming the one wl_abort() after the call to wl_closure_send() becomes a
display_fatal_error(), and an EAGAIN here is converted to an EPIPE or
another fatal error, I don't think I will see EAGAIN very often, if ever.

As you said, the kernel buffers are large. I don't know whether that also
applies to fds, but I would expect the limits there to be much larger too.

But that is based on what I think I'm running into. Another client might
not want that.

I'm out of immediate ideas. What do you think?

- Lloyd