Wayland intermediate sized data transfer

Pekka Paalanen ppaalanen at gmail.com
Tue Dec 18 13:46:51 UTC 2018


On Sat, 15 Dec 2018 23:21:43 +0000
Simon Ser <contact at emersion.fr> wrote:

> On Monday, November 12, 2018 2:54 PM, Pekka Paalanen <ppaalanen at gmail.com> wrote:
> > On Mon, 12 Nov 2018 14:48:19 +0200
> > Pekka Paalanen <ppaalanen at gmail.com> wrote:
> >  
> > > Quite likely we need to revisit this in any case. Using shared memory
> > > feels complicated, but OTOH it is a relatively large amount of data.
> > > Even the kernel UABI does not use a flat list of format+modifier but
> > > a fairly "interesting" bitfield encoding. That's probably not
> > > appropriate for Wayland though, so maybe we have to use shared memory
> > > for it.  
> >
> > Hi,
> >
> > having thought about this, I have the feeling that Wayland handles
> > tiny bits of data well as protocol messages, and large chunks of data
> > well as shared memory file descriptors, but we seem to lack a good
> > solution for intermediate-sized data in the range 1 kB - 8 kB, just
> > to throw out some random numbers.
> >
> > It is too risky to put that much data through protocol messages in
> > line, yet the trouble of setting up a shared memory file seems
> > disproportionate to the amount of data. Still, shared memory seems
> > to be the only workable option.
> >
> > I started wondering if we should have a generic shared memory
> > interface, something like the following sketch of a Wayland extension.
> >
> > interface shm_factory
> > This is the global.
> >
> > -   request: create_shm_file(new shm_file, fd, size, seals, direction)
> >     Creates a new shm_file object that refers to the memory backing
> >     the fd, of the given size, and sealed with the given seals.
> >     Direction determines whether the server or the client will be
> >     the writer, so this will be a one-way street, but a re-usable
> >     one.
> >
> >     (This is a good chance to get memfd and seals properly used.)
> >
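> > As a rough illustration of that parenthetical (a sketch of mine, not
> > existing code), the client could create the fd it passes to
> > create_shm_file like this on Linux:
> >
> > #define _GNU_SOURCE
> > #include <fcntl.h>
> > #include <sys/mman.h>
> > #include <unistd.h>
> >
> > /* Create a size-sealed memfd, or return -1 on error. */
> > static int
> > create_sealed_memfd(size_t size)
> > {
> >         int fd;
> >
> >         fd = memfd_create("shm_file", MFD_CLOEXEC | MFD_ALLOW_SEALING);
> >         if (fd < 0)
> >                 return -1;
> >
> >         if (ftruncate(fd, size) < 0) {
> >                 close(fd);
> >                 return -1;
> >         }
> >
> >         /* Fix the size so neither side can shrink or grow the file
> >          * under the other, and forbid adding further seals. */
> >         if (fcntl(fd, F_ADD_SEALS,
> >                   F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_SEAL) < 0) {
> >                 close(fd);
> >                 return -1;
> >         }
> >
> >         return fd;
> > }
> >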
> > interface shm_file
> > Represents a piece of shared memory. Comes in two mutually
> > exclusive flavours:
> >
> > -   server-writable
> > -   client-writable
> >
> > Has a fixed size.
> >
> > The usage pattern is that the writer signals the reader when there
> > is data to copy out. This is done by a custom protocol message
> > carrying a shm_file as an argument, which makes the shm_file
> > read-locked. The reader copies the data out of the shared memory
> > and sends client_read_done or server_read_done ASAP, releasing the
> > read-lock. While the shm_file is read-locked, the writer may not
> > write into it. While the shm_file is not read-locked, the reader
> > may not read it. (A client-side sketch of this handshake follows
> > the request and event list below.)
> >
> > -   request: client_read_done
> >     Sent by the client when it has copied the data out. Releases
> >     the read-lock.
> >
> > -   event: server_read_done
> >     Sent by the server when it has copied the data out. Releases
> >     the read-lock.
> >
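> > To make the read-lock handshake concrete, a client-side handler for
> > a (hypothetical) custom event carrying a read-locked shm_file might
> > look roughly like this, assuming scanner-generated bindings; all the
> > names are made up:
> >
> > #include <string.h>
> >
> > struct reader {
> >         void *map;        /* mmap()ed shm_file contents */
> >         void *local_copy; /* reader-owned copy of the data */
> >         size_t size;
> > };
> >
> > /* Hypothetical listener for an event that hands the client a
> >  * read-locked, server-written shm_file. */
> > static void
> > handle_data_written(void *data, struct shm_file *file)
> > {
> >         struct reader *r = data;
> >
> >         /* While read-locked, the server will not write, so the
> >          * contents are stable; copy them out... */
> >         memcpy(r->local_copy, r->map, r->size);
> >
> >         /* ...and release the read-lock so the writer may reuse
> >          * this same shm_file for the next transfer. */
> >         shm_file_client_read_done(file);
> > }
> >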
> > When e.g. zwp_linux_dmabuf would provide the list of pixel formats
> > and modifiers, the server first sends the required shared memory
> > size to the client, the client creates a server-writable shm_file
> > and sends it to the server. The server fills in the data and sends
> > an event with the shm_file as an argument that tells the client to
> > read it (sets the read-lock). The rest goes according to the
> > generic protocol above.
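> >
> > The size-announcement step could then look something like this on
> > the client side; every event, request, and constant name below is
> > made up for illustration, reusing create_sealed_memfd() and the
> > includes from the earlier sketch:
> >
> > #include <stdint.h>
> >
> > struct client {
> >         struct shm_factory *factory;
> >         struct shm_file *formats_file;
> > };
> >
> > /* Hypothetical event: the server announces how much memory the
> >  * format+modifier list needs. */
> > static void
> > handle_formats_size(void *data, struct zwp_linux_dmabuf_v1 *dmabuf,
> >                     uint32_t size)
> > {
> >         struct client *c = data;
> >         int fd = create_sealed_memfd(size);
> >
> >         /* The client allocates, so the memory is accounted to it. */
> >         c->formats_file = shm_factory_create_shm_file(
> >                 c->factory, fd, size,
> >                 F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_SEAL,
> >                 SHM_FILE_DIRECTION_SERVER_WRITES);
> >         close(fd);
> >
> >         /* Hand the shm_file over; the server fills in the data and
> >          * then sends the event that sets the read-lock. */
> >         zwp_linux_dmabuf_v1_set_format_storage(dmabuf, c->formats_file);
> > }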
> >
> > Why all the roundtripping to get the shm_file created?
> >
> > Because I would prefer that the memory allocation is always
> > accounted to the client, not the server. We should try to keep
> > server allocations on behalf of clients to a minimum, so that the
> > OOM killer etc. can find the right culprit.
> >
> > Why so much copying?
> >
> > Because the amount of data should be small enough that copying it
> > is insignificant. Assuming that readers maintain their own copy
> > keeps the protocol simpler: there is no need to juggle multiple
> > shm_files like we do with wl_buffers.
> >
> > Why unidirectional?
> >
> > To keep it simple. Need bidirectional transfers? Make one shm_file
> > for each direction.
> >
> > Isn't creating and tearing down shared memory relatively expensive?
> >
> > Yes, but a shm_file is meant to be repeatedly re-used. After the
> > reader has read, the writer can write again. There is no need to
> > tear it down if you expect repeated transfers.
> >
> > While writing this, I have a strong feeling I am reinventing the
> > wheel here...
> >
> > Just throwing this idea out there, not sure if it was a good one.
> >
> > Thanks,
> > pq
> >  
> 
> Hi,
> 
> I've been thinking about this for a while, and I've been wondering: if we do
> a copy, why not use a pipe directly?

Hi Simon,

a good question. Would it work something like this:

The writer creates a pipe, sends the read-end fd over Wayland, adds
the write-end fd to its main event loop polling for writability, and
sets up the writing. Once writing is complete, it closes the fd.

The reader receives the fd, adds it to its main event loop for
reading, and sets up the reading. On EOF, the reader knows it got
everything.
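
For concreteness, the writer side might be set up roughly like this;
the event-loop and fd-passing helpers are stand-ins for whatever the
implementation actually uses:

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

struct writer {
        struct loop *loop; /* stand-in for the main event loop */
        int pipe_fd;
};

/* Stand-ins for the real helpers: one sends an fd in a Wayland
 * message, the other polls an fd for writability. */
void send_read_end(struct writer *w, int fd);
void watch_writable(struct loop *loop, int fd,
                    void (*cb)(struct writer *), struct writer *w);
void on_pipe_writable(struct writer *w);

/* Writer side: create the pipe, hand the read end to the peer,
 * and feed the write end from the event loop. */
int
start_pipe_transfer(struct writer *w)
{
        int fds[2];

        /* O_NONBLOCK so a slow reader cannot stall the event loop;
         * we write only when poll() reports the pipe writable. */
        if (pipe2(fds, O_CLOEXEC | O_NONBLOCK) < 0)
                return -1;

        send_read_end(w, fds[0]);
        close(fds[0]);

        w->pipe_fd = fds[1];
        /* The callback writes chunks until all data is sent, then
         * closes pipe_fd: the resulting EOF tells the reader it
         * has everything. */
        watch_writable(w->loop, w->pipe_fd, on_pipe_writable, w);
        return 0;
}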

I see one downside: with pipes, you need to resume your main event loop
before the transfer is guaranteed to complete. That means you cannot
process the protocol message completely on the spot. With a shared
memory piece, you have all the data the moment you process the protocol
message.

I don't think that is always a significant difference, but I suppose
having all the data on the spot might usually make the code simpler.


Thanks,
pq