[PATCH] client: Allow send error recovery without an abort

Lloyd Pique lpique at google.com
Mon Aug 6 21:54:16 UTC 2018

Hi Pekka,

Sorry for the lack of any updates for (over) a month. I was on holiday for
a while, and had a few other things come up delaying me from getting back
to this.

I've created a first set of patches (send now sets a fatal display error;
they introduce "wl_display_soft_flush", which performs a wl_display_flush
only if the buffers are at least half full), and I will try to get them
uploaded soon.

I also have an almost completely untested code change which raises the
buffered fd limit (and constrains messages to at most three fds). It
needs a test case, which I may or may not get to this week, as well as
to be included and exercised in a more production-like environment.

As it's been a while, and I haven't looked at what is going on with the
move to gitlab for the Wayland project, should I still send my patches
here, or should I send them there?


- Lloyd Pique

On Thu, Jun 21, 2018 at 3:34 AM Pekka Paalanen <ppaalanen at gmail.com> wrote:

> On Tue, 19 Jun 2018 21:06:15 -0700
> Lloyd Pique <lpique at google.com> wrote:
> > I did think of one more option to avoid the dup() abort.
> >
> > - Open MAX_FDS_OUT fds to reserve a pool (open /dev/null or something
> > innocuous)
> > - Whenever the core client needs to dup a caller fd, use dup2/dup3 on an
> > fd in the pool
> > - Whenever the core client is done with the caller fd, use dup2/dup3 to
> > release it and point it back at something innocuous.
> Sorry, I don't think we are that desperate. :-)
> I think we have established there probably isn't any reasonable way to
> recover from dup() failure without disconnecting.
> > Separately, as my client is a service-level nested server, I could
> > effectively configure it with setrlimit, and bump up the pool to 1000
> > fds to fill the entire ring buffer. That would work, but may not be a
> > general solution for simpler clients, especially since the default
> > limit is only 1024.
> Very few clients would ever be sending tons of fds. Usually a client
> has a few buffers per wl_surface, and fds are sent when creating
> wl_buffers but not when re-using them. A window being continuously
> resized would be the usual cause of sending fds constantly since buffer
> resize often implies allocation. Even then, the rate would be limited
> to one buffer per display refresh and surface for a well-behaved client.
> Outside buffers, the use of fds is very rare in general. This is why
> you are the first to seriously hit and investigate this problem. You
> have probably the first app that is hitting the 28 fds limit, at least
> I don't recall hearing about such before.
> Therefore I claim that most clients return to their main event loop to
> sleep well before they have queued even a dozen fds. Your Wayland
> client could have its rlimit raised, and sounds like it should have in
> any case. This is why I think the dup() failure does not really need a
> recovery mechanism that would keep the Wayland connection alive.
> Thanks,
> pq
> > If blocking is out, maybe the only thing I can do is add an option to
> > configure the number of fds the core will reserve/enqueue? However,
> > while it isn't a dangerous option, it also wouldn't necessarily just
> > work with a default of 28, without the developer knowing what larger
> > value to set.
> >
> > (To make it clear, I'm thinking through this final issue with the fd
> > limit as best I can before I go ahead and revise my patch)
> >
> > - Lloyd
