[RFC wayland 01/18] shm: add getters for the fd and offset

Derek Foreman derekf at osg.samsung.com
Thu Feb 11 16:20:16 UTC 2016


On 11/02/16 02:52 AM, Pekka Paalanen wrote:
> On Wed, 10 Feb 2016 12:00:48 -0600 Derek Foreman
> <derekf at osg.samsung.com> wrote:
> 
>> On 10/02/16 07:08 AM, Pekka Paalanen wrote:
>>> On Tue,  9 Feb 2016 10:55:48 -0600 Derek Foreman
>>> <derekf at osg.samsung.com> wrote:
>>> 
>>>> From: Giulio Camuffo <giuliocamuffo at gmail.com>
>>>> 
>>>> This allows to share the buffer data by mmapping the fd again.
>>>> 
>>>> Reviewed-by: David FORT <contact at hardening-consulting.com>
>>>> Signed-off-by: Derek Foreman <derekf at osg.samsung.com>
>>>> ---
>>>>  src/wayland-server-core.h |  6 ++++++
>>>>  src/wayland-shm.c         | 16 +++++++++++++++-
>>>>  2 files changed, 21 insertions(+), 1 deletion(-)
>>>> 
>>>> diff --git a/src/wayland-server-core.h b/src/wayland-server-core.h
>>>> index e8e1e9c..3316022 100644
>>>> --- a/src/wayland-server-core.h
>>>> +++ b/src/wayland-server-core.h
>>>> @@ -480,6 +480,12 @@ wl_shm_buffer_create(struct wl_client *client,
>>>>  		     uint32_t id, int32_t width, int32_t height,
>>>>  		     int32_t stride, uint32_t format) WL_DEPRECATED;
>>>>  
>>>> +int
>>>> +wl_shm_buffer_get_offset(struct wl_shm_buffer *buffer);
>>>> +
>>>> +int
>>>> +wl_shm_buffer_get_fd(struct wl_shm_buffer *buffer);
>>>> +
>>>>  void
>>>>  wl_log_set_handler_server(wl_log_func_t handler);
>>>>  
>>>>  #ifdef  __cplusplus
>>>> diff --git a/src/wayland-shm.c b/src/wayland-shm.c
>>>> index a4343a4..911165d 100644
>>>> --- a/src/wayland-shm.c
>>>> +++ b/src/wayland-shm.c
>>>> @@ -55,6 +55,7 @@ struct wl_shm_pool {
>>>>  	int refcount;
>>>>  	char *data;
>>>>  	int32_t size;
>>>> +	int fd;
>>>>  };
>>>>  
>>>>  struct wl_shm_buffer {
>>>> @@ -80,6 +81,7 @@ shm_pool_unref(struct wl_shm_pool *pool)
>>>>  		return;
>>>>  
>>>>  	munmap(pool->data, pool->size);
>>>> +	close(pool->fd);
>>>>  	free(pool);
>>>>  }
>>>>  
>>>> @@ -253,7 +255,7 @@ shm_create_pool(struct wl_client *client, struct wl_resource *resource,
>>>>  			       "failed mmap fd %d", fd);
>>>>  		goto err_close;
>>>>  	}
>>>> -	close(fd);
>>>> +	pool->fd = fd;
>>>>  
>>>>  	pool->resource =
>>>>  		wl_resource_create(client, &wl_shm_pool_interface, 1, id);
>>>> @@ -421,6 +423,18 @@ wl_shm_pool_unref(struct wl_shm_pool *pool)
>>>>  	shm_pool_unref(pool);
>>>>  }
>>>>  
>>>> +WL_EXPORT int
>>>> +wl_shm_buffer_get_offset(struct wl_shm_buffer *buffer)
>>>> +{
>>>> +	return buffer->offset;
>>>> +}
>>>> +
>>>> +WL_EXPORT int
>>>> +wl_shm_buffer_get_fd(struct wl_shm_buffer *buffer)
>>>> +{
>>>> +	return buffer->pool->fd;
>>>> +}
>>>> +
>>>>  static void
>>>>  reraise_sigbus(void)
>>>>  {
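
For anyone skimming: the consumer-side usage I have in mind is roughly
the sketch below.  It's untested and illustrative only (the helper name
is made up); it just combines the new getters with the existing
wl_shm_buffer_get_stride()/wl_shm_buffer_get_height() accessors.

/* Untested sketch: map a client's shm buffer again from its pool fd.
 * mmap offsets must be page aligned, so we map from the start of the
 * pool and index into it.  Error handling and munmap() are omitted. */
#include <sys/mman.h>
#include <wayland-server.h>

static void *
remap_shm_buffer(struct wl_shm_buffer *buffer, size_t *map_len)
{
	int fd = wl_shm_buffer_get_fd(buffer);          /* new in this patch */
	int offset = wl_shm_buffer_get_offset(buffer);  /* new in this patch */
	int32_t stride = wl_shm_buffer_get_stride(buffer);
	int32_t height = wl_shm_buffer_get_height(buffer);
	void *pool_base;

	*map_len = (size_t)offset + (size_t)stride * height;
	pool_base = mmap(NULL, *map_len, PROT_READ, MAP_SHARED, fd, 0);
	if (pool_base == MAP_FAILED)
		return NULL;

	/* Pixel data for this buffer starts at pool_base + offset. */
	return pool_base;
}

Note that the fd stays owned by the pool (it's closed in
shm_pool_unref() above), so the caller must not close() it.
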
>>> 
>>> Hi,
>>> 
>>> Derek did wonder about this the last time with this patch, and
>>> I want to reiterate that leaving the pool fd open causes the
>>> server process to have many more open fds.
>> 
>> I don't think Giulio is carrying this patch anymore - I just
>> needed the functionality and he'd already done the work.  I'm
>> quite happy to take any beatings related to the implementation.
>> 
>> And yeah, *piles* of fds.  Shortly after I complained last time
>> I checked what my local limits are, and they're pretty huge.
>> 
>> Yesterday I learned that lots of other distributions are still
>> happy with 4096 as a hard limit.
>> 
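
(For reference, checking what limits the process actually got is just a
getrlimit() call; rough sketch, the helper name is made up:)

/* Rough sketch: report the soft/hard limits on open fds for this
 * process.  A compositor could also bump the soft limit up to the
 * hard limit with setrlimit(RLIMIT_NOFILE, ...). */
#include <stdio.h>
#include <sys/resource.h>

static void
print_fd_limits(void)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
		printf("open fd limits: soft %llu, hard %llu\n",
		       (unsigned long long)rl.rlim_cur,
		       (unsigned long long)rl.rlim_max);
}
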
>>> The server-side fds must be kept open even after the client has
>>> destroyed the wl_shm_pool (and can no longer resize it), as the
>>> usual usage pattern is to create a pool, create a wl_buffer, then
>>> destroy the pool. No fd is left open on the client, so the client
>>> is not at risk of running out of open file slots.
>>> 
>>> Not only that, but a single server process is now keeping the shm
>>> fds from all clients open.
>> 
>> Also, no matter how high the fd limit is raised, a client would
>> be able to create an unbounded number of open fds just by
>> allocating a bunch of small shm pools?
> 
> I don't think we've actually reviewed protocols against fd-flooding
> the compositor... but OTOH, is it even possible? Don't we get fds
> open already when libwayland-server buffers the incoming requests,
> which means the actual protocol semantics are irrelevant?

Yeah, they're already open when received.  For shm pools we currently
close them before we move on to the next closure, if I understand
things correctly.

What I'm wondering about now is drag-and-drop/selection behaviour...
I think we're ok there, since we divest ourselves of the open fd
before processing any additional closures...

Oh, dmabuf looks like fun: I just made weston hold open 5000 fds for a
single client that has closed all its own fds.  This seems
potentially problematic. :)

> So if we wanted to limit open fds per client, it needs to be done
> as early as right after the recvmsg() call, I think.
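
The per-message count is at least easy to get at that point; something
like this is the general shape (purely illustrative, not actual
libwayland code):

/* Purely illustrative: count the fds attached to one recvmsg().
 * This is where a per-client fd budget could be enforced. */
#include <sys/socket.h>

static int
count_attached_fds(struct msghdr *msg)
{
	struct cmsghdr *cmsg;
	int count = 0;

	for (cmsg = CMSG_FIRSTHDR(msg); cmsg != NULL;
	     cmsg = CMSG_NXTHDR(msg, cmsg)) {
		if (cmsg->cmsg_level == SOL_SOCKET &&
		    cmsg->cmsg_type == SCM_RIGHTS)
			count += (cmsg->cmsg_len - CMSG_LEN(0)) / sizeof(int);
	}

	return count;
}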

I can't off the top of my head think of a good way to check how many
files the current process has open.  libwayland will know how often
we're passed open file descriptors, but won't know when the compositor
has closed them.
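
(On Linux you could at least count the entries in /proc/self/fd;
illustrative only, and it still doesn't tell you which fds belong to
which client:)

/* Count this process's currently open fds by walking /proc/self/fd.
 * Linux-specific; the count includes the fd opendir() itself uses. */
#include <dirent.h>

static int
count_open_fds(void)
{
	DIR *dir = opendir("/proc/self/fd");
	struct dirent *ent;
	int count = 0;

	if (!dir)
		return -1;

	while ((ent = readdir(dir)) != NULL) {
		if (ent->d_name[0] == '.')
			continue;
		count++;
	}
	closedir(dir);

	return count;
}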

How would we limit the number of open file descriptors anyway? :)

> 
> Thanks, pq
> 
>> I don't think we currently have to do anything to limit the
>> number of fds a single client consumes, but if we did something
>> like this we'd need to.
>> 
>>> We should carefully consider that before accepting this patch.
>>> 
>> 
>> Indeed!
>> 
>>> Maybe it's time to start lobbying for raising the open fd
>>> limits? We use fds for so many things.
>> 
>> Couldn't hurt. ;)
>> 


