<div dir="ltr">Hi Pekka,<div><br></div><div>Sorry for the lack of any updates for (over) a month. I was on holiday for a while, and had a few other things come up delaying me from getting back to this.</div><div><br></div><div>I've created a first set of patches (send sets a fatal display error, introduce "wl_display_soft_flush" which does a wl_display_flush only if buffers are half-full), and I will try and get them uploaded soon.</div><div><br></div><div>I also have an almost completely untested code change which raises the buffered fd limit (and constraints messages to at most three fd's). It needs a test case, which I may or may not get to this week, as being included and exercised in a more production environment.</div><div><br></div><div>As it's been a while, and I haven't looked at what is going on with the move to gitlab for the Wayland project, should I still send my patches here, or should I send them there?</div><div><br></div><div>Thanks!</div><div><br></div><div>- Lloyd Pique</div></div><br><div class="gmail_quote"><div dir="ltr">On Thu, Jun 21, 2018 at 3:34 AM Pekka Paalanen <<a href="mailto:ppaalanen@gmail.com">ppaalanen@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Tue, 19 Jun 2018 21:06:15 -0700<br>
On Thu, Jun 21, 2018 at 3:34 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:

On Tue, 19 Jun 2018 21:06:15 -0700
Lloyd Pique <lpique@google.com> wrote:

> I did think of one more option to avoid the dup() abort (a minimal
> sketch follows this list):
>
> - Open MAX_FDS_OUT fds up front to reserve a pool (opening /dev/null
>   or something equally innocuous)
> - Whenever the core client needs to dup a caller fd, use dup2/dup3 to
>   retarget an fd in the pool
> - Whenever the core client is done with the caller fd, use dup2/dup3
>   to release it, pointing it back at something innocuous

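(For concreteness, a minimal sketch of the reserved-pool idea above; the
fd_pool_* helpers are hypothetical, slot bookkeeping is omitted, and
dup3() is Linux-specific.)

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    #define MAX_FDS_OUT 28  /* libwayland's buffered-fd limit */

    static int pool[MAX_FDS_OUT];
    static int null_fd;

    /* Reserve every descriptor up front so later "dups" cannot fail
     * with EMFILE at send time. */
    static int
    fd_pool_init(void)
    {
            null_fd = open("/dev/null", O_RDONLY | O_CLOEXEC);
            if (null_fd < 0)
                    return -1;
            for (int i = 0; i < MAX_FDS_OUT; i++) {
                    pool[i] = fcntl(null_fd, F_DUPFD_CLOEXEC, 0);
                    if (pool[i] < 0)
                            return -1;
            }
            return 0;
    }

    /* "Dup" a caller fd by retargeting a reserved slot: dup3() reuses
     * the existing descriptor number, so it cannot hit RLIMIT_NOFILE. */
    static int
    fd_pool_take(int slot, int caller_fd)
    {
            return dup3(caller_fd, pool[slot], O_CLOEXEC);
    }

    /* Release a slot by pointing it back at /dev/null. */
    static int
    fd_pool_release(int slot)
    {
            return dup3(null_fd, pool[slot], O_CLOEXEC);
    }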
Sorry, I don't think we are that desperate. :-)

I think we have established there probably isn't any reasonable way to
recover from dup() failure without disconnecting.

> Separately, as my client is a service-level nested server, I could
> effectively configure it with setrlimit, and bump the pool up to the
> ~1000 fds it takes to fill the entire ring buffer. That would work,
> but may not be a general solution for simpler clients, especially
> since the default limit is only 1024.

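(A sketch of that setrlimit adjustment; raise_fd_limit() is a hypothetical
helper, and the soft limit can only be raised up to the hard limit without
privileges:)

    #include <sys/resource.h>

    /* Raise the soft fd limit toward the hard limit at startup,
     * leaving headroom for ~1000 buffered fds over normal usage. */
    static int
    raise_fd_limit(rlim_t want)
    {
            struct rlimit rl;

            if (getrlimit(RLIMIT_NOFILE, &rl) < 0)
                    return -1;
            if (rl.rlim_cur >= want)
                    return 0;
            rl.rlim_cur = (want < rl.rlim_max) ? want : rl.rlim_max;
            return setrlimit(RLIMIT_NOFILE, &rl);
    }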
Only a very rare client would even be sending tons of fds. Usually a
client has a few buffers per wl_surface, and fds are sent when creating
wl_buffers but not when re-using them. A window being continuously
resized would be the usual cause of sending fds constantly, since a
buffer resize often implies an allocation. Even then, for a well-behaved
client the rate would be limited to one buffer per display refresh per
surface.

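(To illustrate that creation-vs-reuse point with the standard wl_shm API:
the pool fd crosses the socket exactly once, at pool creation, and
per-frame reuse of the wl_buffer sends no fds at all.)

    #include <wayland-client.h>

    /* Creation: the only step that queues an fd on the socket. */
    static struct wl_buffer *
    create_buffer(struct wl_shm *shm, int pool_fd,
                  int32_t width, int32_t height, int32_t stride)
    {
            struct wl_shm_pool *pool =
                    wl_shm_create_pool(shm, pool_fd, stride * height);
            struct wl_buffer *buffer =
                    wl_shm_pool_create_buffer(pool, 0, width, height,
                                              stride,
                                              WL_SHM_FORMAT_ARGB8888);
            wl_shm_pool_destroy(pool);
            return buffer;
    }

    /* Reuse, called every frame: attach/damage/commit carry no fds. */
    static void
    present(struct wl_surface *surface, struct wl_buffer *buffer,
            int32_t width, int32_t height)
    {
            wl_surface_attach(surface, buffer, 0, 0);
            wl_surface_damage(surface, 0, 0, width, height);
            wl_surface_commit(surface);
    }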
Outside of buffers, the use of fds is very rare in general. This is why
you are the first to seriously hit and investigate this problem. Yours
is probably the first app to hit the 28-fd limit; at least I don't
recall hearing of one before.

Therefore I claim that most clients return to their main event loop to
sleep well before they have queued even a dozen fds. Your Wayland
client could have its rlimit raised, and it sounds like it should be in
any case. This is why I think the dup() failure does not really need a
recovery mechanism that would keep the Wayland connection alive.


Thanks,
pq

> If blocking is out, maybe the only thing I can do is add an option to
> configure the number of fds the core will reserve/enqueue? However,
> while it isn't a dangerous option, it also wouldn't necessarily just
> work with a default of 28, without the developer knowing what larger
> value to set.
>
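(Purely to illustrate the knob being discussed; this API is hypothetical
and nothing like it exists in libwayland:)

    #include <wayland-client.h>

    /* Hypothetical: let a client that knows its fan-out raise the
     * number of fds the core will buffer before flushing. */
    int wl_display_set_max_buffered_fds(struct wl_display *display,
                                        uint32_t max_fds);

    /* e.g. a nested server expecting heavy fd traffic: */
    wl_display_set_max_buffered_fds(display, 1000);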
> (To make it clear, I'm thinking through this final issue with the fd
> limit as best I can before I go ahead and revise my patch)
>
> - Lloyd
