[PATCH weston 0/6] ivi-shell proposal

Pekka Paalanen ppaalanen at gmail.com
Thu Sep 12 05:24:15 PDT 2013


On Tue, 10 Sep 2013 11:13:06 +0000
<Michael.Schuldt at bmw.de> wrote:

> Hi Jason, Tanibata-san
> 
> > Hi Jason,
> > 
> > Thank you very much for feedback.
> > 
> >> Michael & Nobuhiko,
> >> 
> >> First of all, thank you for the clarification and thank you for
> >> sending this to the list and being willing to work with the FOSS
> >> community to try and make a standard. I'm sorry that this reply is
> >> not inline. I think that would get disorganized and more confusing
> >> but I'll try to hit everything I saw.
> >> 
> >> The first distinction that needs to be made (and I think Pekka was
> >> trying to hint at this) is what should be standardized. If you
> >> look at the current [wayland core protocol][1] there is only one
> >> shell protocol called wl_shell. I have proposed another which will
> >> probably get called [wl_fullscreen_shell][2]. Both of these have
> >> something in common: they are purely client-side. There is nothing
> >> whatsoever in the standard about managing surfaces. I think that
> >> we should focus on what you have designated ivi_client.
> 
> Ok, now I got it. No, we do not want to push it into the core
> protocol. We want to define an IVI shell, like desktop-shell, and
> want to use Weston for the reference implementation. The ivi-shell
> extension should be the minimum subset which shall be provided by
> each IVI compositor. In terms of GENIVI compliance, we are able to
> define a shell protocol and say this is the minimal subset which is
> supported, and therefore we need a reference implementation too.
> Maybe we have mixed something up here on our side. We will clarify
> that.
Standardizing does not necessarily mean Wayland core. It could be an
IVI standard, as opposed to a private protocol in some IVI
implementation.

I think your proposal could be split into an IVI-standard part and
something that is private to your specific implementation, depending on
what you want to include in your IVI standard.

But if you IVI-standardize the control protocol interfaces, does that
mean that all implementations must then support them regardless? Maybe
some implementations would offer that functionality only as
compositor-internal APIs.

> >> What about the controller? If you look in the [weston protocol
> >> folder][3] you will find a number of different protocol files.
> >> Some of these are for experimental extensions such as subsurfaces
> >> which have not yet made it into wayland core. However, a number of
> >> them such as desktop-shell, screenshooter, etc. will *never* be
> >> standardized in the wayland core. These protocols are completely
> >> internal to weston and are considered implementation details. The
> >> primary example is desktop-shell. This protocol exists for the
> >> purpose of allowing the out-of-process shell controller manage
> >> surfaces similar to what you propose with ivi_shell. There are
> >> other shell plugins for weston (hawaii & orbital) that each have
> >> their own shell plugin and can have their own protocol for talking
> >> to an out-of-process controller.
> 
> Yes this is our focus too, like I have described above.
> 
> >> How does this impact your proposed protocol? Unless you are
> >> convinced that every single IVI system manufacturer will want to
> >> manage surfaces the same way, the controller should be left as a
> >> private implementation detail. You are free to do it
> >> out-of-process and talk the wayland protocol to do so
> >> (desktop-shell does) but there is no need to expose it as part of
> >> a standard protocol. By only standardizing the client interface
> >> you leave app developers (GPS, Media players, etc.) free to design
> >> their apps however they want and you leave IVI system
> >> manufacturers free to handle those clients and surfaces in
> >> whatever way they want.
> >> 
> >> Ok, now on to actual suggestions. From this point forward, I am
> >> going to completely ignore the controller side of things.
> >> 
> >> First, I would propose to follow the pattern of wl_shell and make
> >> two interfaces for clients to talk to the compositor. For now, I
> >> am going to call them wl_ivi_shell and wl_ivi_surface. We can come
> >> up with different names if you'd like, but those seem reasonable.
> >> If we follow the pattern of wl_shell, wl_ivi_shell will probably
> >> exist for the sole purpose of creating wl_ivi_surface objects.
> >> This pattern is common in the protocol (wl_shell,
> >> wl_subcompositor, wl_compositor, etc.).
> >> 
> >> The main question, then, becomes what to put in wl_ivi_surface. I'm
> >> not 100% sure what you intend with some of this surface and layer
> >> stuff, so I'm afraid I don't have a whole lot of specific
> >> suggestions on that for now. I do, however have some general
> >> thoughts and questions:
> >> 
> >>  First, I agree with Pekka that you can probably avoid the layers
> >> thing by simply using subsurfaces.
> >> 
> > I see. However, we have a use case where several applications in
> > different processes share a layer. E.g., the navigation map and
> > route guidance are separate applications. It may be a kind of
> > grouping of parent surfaces.

Do you really need layers as protocol objects, anyway?
Can't you just arrange surfaces into layers inside the compositor?

You could group surfaces based on surface roles. The big difference to
layers is that roles can form implicit groups, and you don't leak the
scenegraph outside of the compositor.

If, say, Navigation and Route guidance are separate processes, how do
you position them on the screens? Do they negotiate with each other who
goes where, or do they perhaps have predetermined places and cannot be
anywhere else?

> >> Second, Why are you specifying pixel formats in ivi_surface? Is the
> >> compositor supposed to tell the client what format to render in?
> >> 
> >> Third, concerning the "visibility" flag. The wayland protocol as it
> >> currently stands tries to avoid telling clients specifically
> >> whether or not they are visible and where they are on screen. This
> >> is because, when clients abuse this information, compositors lose
> >> the freedom to throw surfaces around how they want. Instead of a
> >> visibility flag, the wl_surface interface provides a "frame"
> >> callback that the clients can use to know when was the last time
> >> they were drawn to the screen. A client should throttle rendering
> >> based on these frame events. If the surface is offscreen and the
> >> compositor wants the client to stop rendering, it simply stops
> >> sending it frame events and the client will stop drawing.
> >> 
> > 
> > I have two concerns about using "frame" to realize invisibility.
> > - To become invisible, the application needs to clear its buffer
> > itself. I think that might be overhead for the GPU. If we can
> > realize it in the shell, the surface can simply be skipped during
> > compositing. Of course, the application shall stop drawing as well.

No. To make a surface invisible, the application just attaches a NULL
wl_buffer to it. You do not have to destroy the old wl_buffer either,
if you want to keep the current contents saved.

> > - Invisibility shall be controlled by a central controller for
> > safety reasons. It shall be handled at as low a level as possible.
> > Ideally, if we can allocate the surface to another physical plane,
> > that would be best. If the display controller doesn't support it,
> > the next option is the compositor.

The central controller being in the compositor (the ultimate dictator
for all things on screens), it can choose what surfaces are visible and
what are not, and there is nothing any client or application can do
about that. Use that power.

After all, the compositor is the only one that talks to the display
hardware. Doing otherwise gets really nasty really quick.


Thanks,
pq


More information about the wayland-devel mailing list