[PATCH weston] xdg-shell: Make stable
Giulio Camuffo
giuliocamuffo at gmail.com
Tue Aug 26 01:01:08 PDT 2014
2014-08-26 10:24 GMT+03:00 Pekka Paalanen <ppaalanen at gmail.com>:
> On Mon, 25 Aug 2014 21:51:57 -0700
> Jason Ekstrand <jason at jlekstrand.net> wrote:
>
>> Just a couple quick comments below.
>>
>> I can't find where this goes, so I'm putting it here: Why are we having
>> compositors send an initial configure event again? Given that we have a
>> serial, tiling compositors can just send a configure and wait until the
>> client responds to the event. Non-tiling compositors will just send 0x0
>> for "I don't care", right? I seem to recall something about saving a
>> repaint on application launch in the tiling case, but I don't remember
>> that well. I'm not sure that I like the required roundtrip any more than
>> the extra repaint. It's probably a wash.
>
> There is more to configure than just the size, though making the
> initial configure event optional, well...
>
> If clients are not required to wait for the initial configure event,
> they will draw in whatever state they happen to choose. If the
> compositor disagrees, the first drawing will be wasted, as the
> compositor will ask for a new one and just not use the first one,
> because it would glitch.
>
> But as configure is about more than just the size, the compositor
> cannot even check whether it disagrees. There is no request
> corresponding to the configure event that says "this is the state I
> used to draw"; there is only ack_configure.
>
> So if you want to make the initial configure event optional, you'll
> need more changes to the protocol.
>
> I do think that always having one roundtrip at window creation is
> better than sometimes wasting a drawing, even if you fixed the
> protocol to not have the state problem.
>
> We just have to make sure that any features that require feedback
> during window creation can be conflated into one and the same
> roundtrip: the client sends a series of requests followed by a
> wl_display_sync, the server replies with possibly multiple (e.g.
> configure) events, each overriding the previous, and once the sync
> callback fires, the latest configure event is the correct one.
>
> So yeah, we could make the initial configure event optional, but we
> cannot remove the roundtrip, just in case some compositor actually
> wants to do initial configuration efficiently.
>
> That is still worth considering in the spec: do we require all
> compositors to send the initial configure event, or do we only give
> them the option, and say that clients really should use a
> wl_display_sync once they have set up the xdg_surface state?
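Just so we are talking about the same flow, this is roughly what I would
expect the client side to do at creation time. A sketch only: I'm assuming
the generated xdg-shell client bindings with an xdg_surface.configure(width,
height, states, serial) event and an ack_configure(serial) request, so treat
the exact names and signatures as illustrative.

/* Sketch: set up the xdg_surface, do the one roundtrip, then draw using
 * the last configure received. Names from the generated bindings are
 * illustrative, not authoritative. */
#include <stdbool.h>
#include <stdint.h>
#include <wayland-client.h>
#include "xdg-shell-client-protocol.h"

struct window {
	struct xdg_surface *xdg_surface;   /* assumed already created */
	int32_t width, height;             /* 0x0 means "client decides" */
	uint32_t configure_serial;
	bool configured;
};

static void
handle_configure(void *data, struct xdg_surface *xdg_surface,
		 int32_t width, int32_t height,
		 struct wl_array *states, uint32_t serial)
{
	struct window *win = data;

	/* Later configure events override earlier ones; only the last
	 * one received before the sync/roundtrip returns matters. */
	win->width = width;
	win->height = height;
	win->configure_serial = serial;
	win->configured = true;
}

static void
handle_close(void *data, struct xdg_surface *xdg_surface)
{
}

static const struct xdg_surface_listener xdg_surface_listener = {
	handle_configure,
	handle_close,
};

static void
create_window(struct wl_display *display, struct window *win)
{
	xdg_surface_add_listener(win->xdg_surface, &xdg_surface_listener, win);
	xdg_surface_set_title(win->xdg_surface, "example");

	/* The one roundtrip at creation time: send all the setup requests,
	 * then sync. When this returns, the latest configure (if any) has
	 * been delivered. */
	wl_display_roundtrip(display);

	if (win->configured)
		xdg_surface_ack_configure(win->xdg_surface,
					  win->configure_serial);

	/* Only now draw, at win->width x win->height (or pick a size if it
	 * is still 0x0), then attach the buffer and commit. */
}
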
>
>> On Aug 22, 2014 1:48 PM, "Jasper St. Pierre" <jstpierre at mecheye.net> wrote:
>> >
>> > On Wed, Aug 6, 2014 at 9:39 AM, Pekka Paalanen <ppaalanen at gmail.com> wrote:
>> >>
>> >> On Thu, 17 Jul 2014 17:57:45 -0400
>> >> "Jasper St. Pierre" <jstpierre at mecheye.net> wrote:
>> >>
>
>> >> Oh btw, we still don't have the popup placement negotiation protocol
>> >> (to avoid the popup going off-screen), but the draft I read a long time
>> >> ago assumed that the client knows in advance the size of the popup
>> >> surface. We don't know the size here. I'm not sure if solving that
>> >> would change anything here.
>> >
>> >
>> > The compositor can pivot the menu around a point, which is very likely
>> > going to be the current cursor position. Specifying a box would be overall
>> > more correct if the popup instead stemmed from a button (so it could appear
>> > on all sides of the box) but I don't imagine that clients will ever use
>> > this on a button. We could add it for completeness if you're really
>> > concerned.
>>
>> Just thinking out loud here, but why not just have the client send a list
>> of locations in order of preference? That way the client can make its
>> popups more interesting without breaking the world. Also, the client
>> probably wants feedback on where the popup is going to be before it
>> renders. I'm thinking about GTK's little popup boxes with the little
>> pointer thing that points to whatever you clicked to get the menu. (I know
>> they have a name, I just can't remember it right now. They're all over the
>> place in GNOME shell.)
>
> That brings us back to the original idea of the popup placement
> protocol: after a probe or two, the client knows where the popup is
> placed and can render accordingly.
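To make that idea concrete, here is a very rough sketch of how such a
negotiation could look on the client side. Every xdg_popup request and the
"placed" event used below are hypothetical, invented only to illustrate the
shape of a "preference list + placement feedback" exchange; none of this
exists in any draft.

/* Hypothetical sketch only: the xdg_popup requests and the "placed" event
 * below do not exist in any xdg-shell draft; the names are invented purely
 * to illustrate the idea, listener registration is omitted. */
#include <stdint.h>
#include <wayland-client.h>

struct xdg_popup;   /* placeholder for the (hypothetical) bindings */

struct menu {
	struct xdg_popup *popup;
	int32_t x, y;        /* final position, as reported by the compositor */
	uint32_t placement;  /* which side was chosen, for the bubble arrow */
};

/* Hypothetical "placed" event: the compositor reports the chosen placement
 * before the client renders, so a GTK-style bubble can point its little
 * arrow at whatever was clicked. */
static void
handle_placed(void *data, struct xdg_popup *popup,
	      int32_t x, int32_t y, uint32_t placement)
{
	struct menu *menu = data;

	menu->x = x;
	menu->y = y;
	menu->placement = placement;
}

static void
place_menu(struct wl_display *display, struct menu *menu,
	   int32_t bx, int32_t by, int32_t bw, int32_t bh)
{
	/* Hypothetical requests: describe the anchor box (the clicked button,
	 * in parent surface coordinates) and a preference-ordered list of
	 * placements; the compositor picks the first that stays on-screen. */
	xdg_popup_set_anchor_rect(menu->popup, bx, by, bw, bh);
	xdg_popup_add_preferred_placement(menu->popup, XDG_POPUP_PLACEMENT_BELOW);
	xdg_popup_add_preferred_placement(menu->popup, XDG_POPUP_PLACEMENT_ABOVE);
	xdg_popup_add_preferred_placement(menu->popup, XDG_POPUP_PLACEMENT_RIGHT);

	/* One sync/roundtrip later the placement is known, and the menu can
	 * be drawn with the arrow on menu->placement at (menu->x, menu->y). */
	wl_display_roundtrip(display);
}
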
>
>
>> >> > <entry name="fullscreen" value="2" summary="the surface is fullscreen">
>> >> >   The surface is fullscreen. The window geometry specified in the
>> >> >   configure event must be obeyed by the client.
>> >>
>> >> Really? So, will we rely on wl_viewport for scaling low-res apps to
>> >> fullscreen? No provision for automatic black borders on an aspect ratio
>> >> or size mismatch, even if the display hardware would be able to generate
>> >> those for free while scanning out the client buffer, bypassing
>> >> compositing?
>> >
>> >
>> >> Since we have a big space for these states, I suppose we could handle
>> >> those mismatch cases as separate, explicit state variants of fullscreen,
>> >> could we not?
>> >
>> >
>> > I explicitly removed this feature from the first draft of the patch
>> > simply to make my life easier as a compositor writer. We could add
>> > additional states for this, or break fullscreen into multiple states:
>> > "fullscreen + size_strict" or "fullscreen + size_loose" or something.
>> >
>> > I am not familiar enough with the sixteen different configurations of
>> > the old fullscreen system to make an informed decision about what most
>> > clients want. Your help and experience are very much appreciated. I'm
>> > not keen to add back the matrix of configuration options.
>>
>> Yeah, not sure what to do here. I like the idea of the compositor doing it
>> for the client. Sure, you could use wl_viewport, subsurfaces, and a couple
>> of black surfaces for letterboxing. However, that is going to be far more
>> difficult for the compositor to translate into overlays/planes than just
>> the one surface and some scaling instructions.
>
> I don't think it would be that hard for a compositor to use overlays
> even then. Have one surface with a 1x1 wl_buffer, scaled with
> wl_viewport to fill the screen, and then have another surface on top
> with the video wl_buffers being fed in, scaled with wl_viewport to keep
> the aspect ratio. A compositor can easily put the video on an overlay,
> and if the CRTC hardware supports it, the compositor might even
> eliminate the black surface and replace it with a background color.
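If I understand the setup you describe correctly, it would look roughly like
this on the client side. A sketch only, assuming the core wl_subcompositor
interface and version 2 of weston's scaler protocol (wl_scaler/wl_viewport
with set_destination; with scaler version 1 the combined wl_viewport.set
request would be used instead). Buffer creation, the shell fullscreen request
and error handling are omitted.

/* Sketch of the letterboxing setup: a fullscreen black parent surface made
 * from a 1x1 buffer, and a centered video subsurface scaled to keep the
 * aspect ratio. */
#include <wayland-client.h>
#include "scaler-client-protocol.h"   /* generated from weston's scaler.xml */

struct fullscreen_video {
	struct wl_surface *black;      /* parent, made fullscreen by the shell */
	struct wl_surface *video;      /* subsurface carrying the video buffers */
	struct wl_subsurface *sub;
	struct wl_viewport *black_vp;
	struct wl_viewport *video_vp;
};

static void
setup_letterbox(struct fullscreen_video *fs,
		struct wl_compositor *compositor,
		struct wl_subcompositor *subcompositor,
		struct wl_scaler *scaler,
		struct wl_buffer *black_1x1, struct wl_buffer *video_frame,
		int32_t out_w, int32_t out_h,      /* output size from configure */
		int32_t vid_w, int32_t vid_h)      /* native video size */
{
	/* Aspect-correct destination size for the video. */
	int32_t dst_w = out_w, dst_h = vid_h * out_w / vid_w;
	if (dst_h > out_h) {
		dst_h = out_h;
		dst_w = vid_w * out_h / vid_h;
	}

	fs->black = wl_compositor_create_surface(compositor);
	fs->video = wl_compositor_create_surface(compositor);
	fs->sub = wl_subcompositor_get_subsurface(subcompositor,
						  fs->video, fs->black);

	/* 1x1 black buffer stretched to cover the whole output. */
	fs->black_vp = wl_scaler_get_viewport(scaler, fs->black);
	wl_viewport_set_destination(fs->black_vp, out_w, out_h);
	wl_surface_attach(fs->black, black_1x1, 0, 0);
	wl_surface_damage(fs->black, 0, 0, out_w, out_h);

	/* Video scaled to the aspect-correct size and centered. */
	fs->video_vp = wl_scaler_get_viewport(scaler, fs->video);
	wl_viewport_set_destination(fs->video_vp, dst_w, dst_h);
	wl_subsurface_set_position(fs->sub, (out_w - dst_w) / 2,
				   (out_h - dst_h) / 2);
	wl_surface_attach(fs->video, video_frame, 0, 0);
	wl_surface_damage(fs->video, 0, 0, dst_w, dst_h);

	/* Subsurfaces are synchronized by default: the video state is
	 * applied atomically when the parent (black) surface is committed. */
	wl_surface_commit(fs->video);
	wl_surface_commit(fs->black);
}
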
What is not clear to me is what advantages the
wl_subsurface+wl_viewport approach has compared to the compositor just
scaling the surface and putting a black surface behind it.
Is it just to remove the hint from the fullscreen request? That seems
like a lazy reason to me: implementing that hint in the compositor is
not hard, whereas dropping it increases the complexity of every client
that ever wants to go fullscreen.
There is also one use case for which the wl_subsurface+wl_viewport
approach cannot work: having the same surface fullscreen on two
differently sized outputs (think of presentations), like this:
http://im9.eu/picture/phx647
Sure, the compositor can send the configure event with one output's
size and scale the surface to fit the other one, but then what is the
purpose of wl_viewport here, if the compositor must scale the surface
anyway?
>
> In my previous reply, I concluded that wl_viewport+wl_subsurface would
> be enough (I surprised myself), and we would not really need yet
> another way from xdg_surface. Obviously, I forgot something that last
> night's IRC discussion brought back to my mind.
>
> The wl_shell fullscreening has three different cases for dealing with
> size mismatch between the output and the window:
> - aspect-correct scaling
> - centered, no scaling
> - please, I would really like a mode switch
>
> There is also a fourth case, in which the client does not care at all
> what the compositor does. This protocol was designed before
> sub-surfaces or wl_viewport were a thing.
>
> If we do not have that in xdg_shell, but instead rely on
> wl_viewport+wl_subsurface, there are two consequences:
> - scaling vs. centered is no longer a hint, but it is dictated by the
> client
> - the mode switch case is lost
> (- all desktop compositors are required to implement both wl_scaler
> and wl_subcompositor)
>
> The first point is likely not an issue, but the second may very well
> be. If xdg_surface requires fullscreen windows to be the exact output
> size, and clients obey, there is no way to trigger the mode switch.
>
> I suspect there are at least gamers out there who would not like this.
> In fact, they would be pushed back to the way X11 works right now: if
> you want a mode switch, use a helper app to permanently change the
> output resolution (which screws up your whole desktop layout), then
> launch the game, and afterwards switch back manually.
>
> No, sorry, that would actually be *worse* than the situation on X11
> right now.
>
> Recalling that, I do think we need something to support the mode switch
> case. A crucial part of a video mode for a gamer is the monitor refresh
> rate, which is why wl_shell_surface.set_fullscreen includes the
> framerate parameter.
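For comparison, this is all the old interface lets a client say; a minimal
sketch using the core wl_shell_surface request (DRIVER is the "please, I
would really like a mode switch" method, and the framerate is expressed in
mHz):

/* Minimal sketch of the legacy wl_shell fullscreen request being discussed:
 * the DRIVER method asks for a mode switch, framerate is in mHz
 * (0 = don't care). */
#include <wayland-client.h>

static void
go_fullscreen_with_modeswitch(struct wl_shell_surface *shsurf,
			      struct wl_output *output)
{
	/* A hint only: the compositor may honor it by switching the output
	 * to the closest mode, or ignore it and scale/center instead. */
	wl_shell_surface_set_fullscreen(shsurf,
					WL_SHELL_SURFACE_FULLSCREEN_METHOD_DRIVER,
					60000,      /* 60 Hz, in mHz */
					output);
}
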
I don't think games need the screen to be in an NxM pixel mode;
scaling up the surface would be fine, and possibly even better, since
the compositor can scale better than LCD screens usually do.
On the other hand, there is the framerate parameter, and games may
care about that... I'm not sure what the best course of action is
here.
--
Giulio
>
> Also remember that the mode switch is not meant to be mandatory when
> requested by a client. It is only a preference: when this app is
> active (and top-most?), it would really like the video mode to be
> changed to the nearest one compatible with the window. The compositor
> is free to switch between the game's mode and the native desktop mode
> at will. Minimize the game? Sure, switch back to the native desktop
> video mode. Bring the game back - switch back to the game mode.
> Alt+tab? Switch only when something other than the game gets
> activated, maybe.
>
> The axiom here is that people (e.g. gamers) sometimes really want to
> run an application in a different video mode than the normal desktop.
> It has been true in the past; can anyone claim it is not true
> nowadays? Or can anyone claim these people's use cases do not matter
> enough?
>
>
> Giulio also had some reasons to prefer the wl_shell way that he
> mentioned in IRC. Giulio, could you elaborate here?
>
>
>
> Thanks,
> pq