[PATCH weston] xdg-shell: Make stable
Jason Ekstrand
jason at jlekstrand.net
Tue Aug 26 07:39:59 PDT 2014
On Aug 26, 2014 1:01 AM, "Giulio Camuffo" <giuliocamuffo at gmail.com> wrote:
>
> 2014-08-26 10:24 GMT+03:00 Pekka Paalanen <ppaalanen at gmail.com>:
> > On Mon, 25 Aug 2014 21:51:57 -0700
> > Jason Ekstrand <jason at jlekstrand.net> wrote:
> >
> >> Just a couple quick comments below.
> >>
> >> I can't find where this goes, so I'm putting it here: Why are we having
> >> compositors send an initial configure event again? Given that we have a
> >> serial, tiling compositors can just send a configure and wait until the
> >> client responds to the event. Non-tiling compositors will just send 0x0
> >> for "I don't care", right? I seem to recall something about saving a
> >> repaint on application launch in the tiling case, but I don't remember
> >> that well. I'm not sure that I like the required roundtrip any more than
> >> the extra repaint. It's probably a wash.
> >
> > There is more to configure than just the size, though making the
> > initial configure event optional, well...
> >
> > If clients are not required to wait for the initial configure event,
> > they will draw in whatever state they happen to choose. If the
> > compositor disagrees, the first drawing will go wasted, as the
> > compositor will ask for a new one and just not use the first one
> > because it would glitch.
> >
> > But as configure is about more than just the size, the compositor
> > cannot even check whether it disagrees. There is no request
> > corresponding to the configure event that says "this is the state I
> > used to draw"; there is only ack_configure.
> >
> > So if you want to make the initial configure event optional, you'll
> > need more changes to the protocol.
> >
> > I do think one roundtrip always at window creation is better than a
> > wasted drawing sometimes, even if you fixed the protocol to not have
> > the state problem.
> >
> > We just have to make sure that any features that require feedback
> > during window creation can be conflated into one and the same
> > roundtrip: the client sending a series of requests and a wl_display_sync
> > at the end, the server replying with possibly multiple (e.g. configure)
> > events, each overriding the previous, and once the sync callback fires,
> > the latest configure event is the correct one.
> >
> > So yeah, we could make the initial configure event optional, but we
> > cannot remove the roundtrip, just in case some compositor actually
> > wants to do initial configuration efficiently.
> >
> > That is still worth considering in the spec; do we require all
> > compositors to send the initial configure event, or do we only give the
> > option, and say that clients really should use a wl_display_sync once
> > they have set up the xdg_surface state?
Ok, you've convinced me. Clients should at least round-trip to see if they
get an initial configure. They probably don't want to use
wl_display_roundtrip (unless they're being lazy) but they probably do want
to wait for something before they draw. I was still thinking in wl_shell
terms where stuff is a lot more client-driven.
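The "setup requests plus a sync, latest configure wins" startup pattern discussed above can be sketched as a toy model. This is not real libwayland code, just a minimal simulation of the event ordering; the class and method names are invented for illustration.

```python
# Toy model of the initial-roundtrip pattern: the client issues its setup
# requests plus a wl_display_sync; the compositor may emit several configure
# events before the sync callback fires, and only the latest one counts.

class Client:
    def __init__(self):
        self.pending_configure = None   # latest configure seen so far
        self.applied_configure = None   # configure actually acted upon

    def handle_configure(self, width, height, states, serial):
        # Each configure overrides the previous one; nothing is applied yet.
        self.pending_configure = (width, height, tuple(states), serial)

    def handle_sync_done(self):
        # The sync callback marks the end of the initial roundtrip:
        # whatever configure arrived last is the authoritative state.
        self.applied_configure = self.pending_configure


client = Client()
# Compositor replies to the setup requests with two configure events,
# the second overriding the first, then answers the sync.
client.handle_configure(0, 0, [], serial=1)
client.handle_configure(640, 480, ["maximized"], serial=2)
client.handle_sync_done()

assert client.applied_configure == (640, 480, ("maximized",), 2)
```

The point of the model is that the client draws nothing until handle_sync_done fires, so the first (possibly wrong) state never produces a wasted frame.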
> >> On Aug 22, 2014 1:48 PM, "Jasper St. Pierre" <jstpierre at mecheye.net> wrote:
> >> >
> >> > On Wed, Aug 6, 2014 at 9:39 AM, Pekka Paalanen <ppaalanen at gmail.com>
> >> wrote:
> >> >>
> >> >> On Thu, 17 Jul 2014 17:57:45 -0400
> >> >> "Jasper St. Pierre" <jstpierre at mecheye.net> wrote:
> >> >>
> >
> >> >> Oh btw. we still don't have the popup placement negotiation protocol
> >> >> (to avoid popup going off-screen), but the draft I read a long time
> >> >> ago assumed that the client knows in advance the size of the popup
> >> >> surface. We don't know the size here. I'm not sure if solving that
> >> >> would change something here.
> >> >
> >> >
> >> > The compositor can pivot the menu around a point, which is very
> >> > likely going to be the current cursor position. Specifying a box
> >> > would be overall more correct if the popup instead stemmed from a
> >> > button (so it could appear on all sides of the box), but I don't
> >> > imagine that clients will ever use this on a button. We could add it
> >> > for completeness if you're really concerned.
> >>
> >> Just thinking out loud here, but why not just have the client send a
> >> list of locations in order of preference? That way the client can make
> >> its popups more interesting without breaking the world. Also, the
> >> client probably wants feedback on where the popup is going to be before
> >> it renders. I'm thinking about GTK's little popup boxes with the little
> >> pointer thing that points to whatever you clicked to get the menu. (I
> >> know they have a name, I just can't remember it right now. They're all
> >> over the place in GNOME Shell.)
> >
> > That brings us back to the original idea of the popup placement
> > protocol: after a probe or two, the client knows where the popup is
> > placed and can render accordingly.
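The "list of locations in order of preference" idea could work roughly like this sketch, where the compositor picks the first candidate that keeps the popup fully on screen. The function name, signature, and fallback behavior are all hypothetical, not part of any real protocol.

```python
# Hypothetical placement logic: the client proposes anchor positions in
# order of preference; the compositor uses the first one whose popup
# rectangle fits the screen, and clamps the last one as a fallback.

def place_popup(preferred_positions, popup_size, screen_size):
    """Return the first (x, y) whose popup rect fits on screen,
    clamping the last preference if none fit."""
    pw, ph = popup_size
    sw, sh = screen_size
    for x, y in preferred_positions:
        if x >= 0 and y >= 0 and x + pw <= sw and y + ph <= sh:
            return (x, y)
    # Nothing fits as-is: clamp the last preference onto the screen.
    x, y = preferred_positions[-1]
    return (min(max(x, 0), sw - pw), min(max(y, 0), sh - ph))


# A popup near the right screen edge falls through to its second choice.
pos = place_popup([(1900, 100), (1700, 100)], popup_size=(200, 150),
                  screen_size=(1920, 1080))
assert pos == (1700, 100)
```

Feeding the chosen position back to the client before it renders is what would let GTK draw its pointer arrow on the correct side.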
> >
> >
> >> >> > <entry name="fullscreen" value="2" summary="the surface is fullscreen">
> >> >> > The surface is fullscreen. The window geometry specified in
> >> >> > the configure event must be obeyed by the client.
> >> >>
> >> >> Really? So, will we rely on wl_viewport for scaling low-res apps to
> >> >> fullscreen? No provision for automatic black borders in aspect ratio
> >> >> or size mismatch, even if the display hardware would be able to
> >> >> generate those for free while scanning out the client buffer,
> >> >> bypassing compositing?
> >> >
> >> >
> >> >> Since we have a big space for these states, I suppose we could do
> >> >> those mismatch cases in separate and explicit state variants of
> >> >> fullscreen, could we not?
> >> >
> >> >
> >> > I explicitly removed this feature from the first draft of the patch
> >> > simply to make my life easier as a compositor writer. We could add
> >> > additional states for this, or break fullscreen into multiple states:
> >> > "fullscreen + size_strict" or "fullscreen + size_loose" or something.
> >> >
> >> > I am not familiar enough with the sixteen different configurations of
> >> > the old fullscreen system to make an informed decision about what
> >> > most clients want. Your help and experience are very much
> >> > appreciated. I'm not keen to add back the matrix of configuration
> >> > options.
> >>
> >> Yeah, not sure what to do here. I like the idea of the compositor
> >> doing it for the client. Sure, you could use wl_viewport, subsurfaces,
> >> and a couple of black surfaces for letterboxing. However, that is going
> >> to be far more difficult for the compositor to translate into
> >> overlays/planes than just the one surface and some scaling
> >> instructions.
> >
> > I don't think it would be that hard for a compositor to use overlays
> > even then. Have one surface with a 1x1 wl_buffer, scaled with
> > wl_viewport to fill the screen, and then have another surface on top
> > with the video wl_buffers being fed in, scaled with wl_viewport to keep
> > the aspect ratio. A compositor can easily put the video on an overlay,
> > and if the CRTC hardware supports it, it might even eliminate the black
> > surface and replace it with a background color.
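Whichever side computes it (the compositor directly, or a client driving wl_viewport destination rectangles), the aspect-correct fullscreen geometry being discussed boils down to the same arithmetic. A small sketch, purely illustrative:

```python
# Aspect-correct fullscreen scaling: scale the buffer as large as possible
# without changing its aspect ratio, center it on the output, and leave
# black bars (letterbox/pillarbox) on the remaining axis.

def letterbox(src_w, src_h, out_w, out_h):
    """Return (x, y, w, h) of the scaled surface inside the output."""
    scale = min(out_w / src_w, out_h / src_h)
    w, h = round(src_w * scale), round(src_h * scale)
    return ((out_w - w) // 2, (out_h - h) // 2, w, h)


# A 4:3 800x600 window on a 16:9 1920x1080 output gets pillarboxed:
assert letterbox(800, 600, 1920, 1080) == (240, 0, 1440, 1080)
```

In the wl_viewport scheme, the (x, y, w, h) result would become the destination rectangle of the video subsurface, with the 1x1 black surface filling the rest of the screen behind it.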
>
> What is not clear to me is what advantages the
> wl_subsurface+wl_viewport approach has compared to the compositor just
> scaling the surface and putting a black surface behind it.
> Is it just to remove the hint in the wl_fullscreen request? This seems
> a lazy reason to me, implementing that hint in the compositor is not
> hard, and in turn it means increasing the complexity of every client
> that would ever want to go fullscreen.
> There is also one use case for which the wl_subsurface+wl_viewport
> approach cannot work: having the same surface fullscreen on two
> differently sized outputs (think of presentations), like this:
> http://im9.eu/picture/phx647
> Sure, the compositor can send the configure event with one output's
> size and scale the surface to fit the other one, but then what is the
> purpose of wl_viewport again here, if the compositor must scale the
> surface anyway?
Yeah, I think I still have to go with Giulio here. It's probably less work
for the compositor to implement fullscreen modes than to implement
viewport+subsurface correctly, and it's certainly less work for clients.
> >
> > In my previous reply, I concluded that wl_viewport+wl_subsurface would
> > be enough (I surprised myself), and we would not really need yet another
> > way from xdg_surface. Obviously, I forgot something, that the IRC
> > discussion of last night brought back to my mind.
> >
> > The wl_shell fullscreening has three different cases for dealing with
> > size mismatch between the output and the window:
> > - aspect-correct scaling
> > - centered, no scaling
> > - please, I would really like a mode switch
> >
> > There is also the fourth, which means the client does not care at all
> > what the compositor does. This protocol was designed before
> > sub-surfaces or wl_viewport were a thing.
> >
> > If we do not have that in xdg_shell, but instead rely on
> > wl_viewport+wl_subsurface, there are two consequences:
> > - scaling vs. centered is no longer a hint, but it is dictated by the
> > client
> > - the mode switch case is lost
> > (- all desktop compositors are required to implement both wl_scaler
> > and wl_subcompositor)
> >
> > The first point is likely not an issue, but the second may very well
> > be. If xdg_surface requires fullscreen windows to be the exact output
> > size, and clients obey, there is no way to trigger the mode switch.
> >
> > I suspect there are at least gamers out there, who would not like this.
> > In fact, they would be pushed back to the way X11 works right now: if
> > you want a mode switch, use a helper app to permanently change the
> > output resolution (this screws up your whole desktop layout), then
> > launch the game, afterwards do a manual switch back.
> >
> > No, sorry, that would actually be *worse* than the situation on X11
> > right now.
> >
> > Recalling that, I do think we need something to support the mode switch
> > case. A crucial part of a video mode for a gamer is the monitor refresh
> > rate, which is why wl_shell_surface.set_fullscreen includes the
> > framerate parameter.
>
> I don't think games need the screen to be at an NxM pixel mode;
> scaling up the surface would be good and possibly better, since we can
> scale better than what LCD screens usually do.
> On the other hand, there is the framerate parameter, and games may
> care about that... I'm not sure what the best course of action is
> here.
I'd agree. Even on X, when you do a "mode switch" it frequently isn't a
mode switch at all. I know that on the nVidia driver, X will do scaling on
the graphics card if it's plugged in via a digital connection (DVI, HDMI,
DP, etc.). There really is no "mode switch" involved. This is precisely
because of what Giulio mentioned: LCD monitors do crappy scaling.
That said, there's a counterexample! Look at LCD projectors. Unless you
have the projector oriented perfectly so that no keystoning is required
(BTW: this never happens in practice), the projector will transform the
image you give it to make it more rectangular on the projected screen. In
this case, the graphics card stretching the image and then the projector
transforming it will probably look worse than the projector transforming
the original image alone.
I think the point here is that the app can't really know whether it wants
scaling or a mode switch. Apps shouldn't need to be dealing with all the
details of what outputs are connected and how beyond maybe knowing that the
presentation goes on the projector. Compositors are much better situated
to decide when to do a mode switch vs. putting the fullscreen surface in a
plane.
I think the bigger issue here is framerate. The reason why games want to
run at these odd modes is that your card may not be able to render the
game at 1920x1080 at 60 fps (to use an extreme example). So what does the
game do? It asks for a lower resolution and maybe a lower framerate. If it
can only hit 50 fps consistently, then using a 50 Hz vsync looks better
than the jitter you would get on a 60 Hz vsync. Can we use a heuristic for
this? Maybe, maybe not.
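The jitter argument can be made concrete with a small numeric sketch (a toy model, not compositor code): frames rendered at a steady 50 fps but presented on a 60 Hz vsync land on uneven vblank boundaries, so on-screen frame intervals wobble, while on a 50 Hz vsync they stay constant.

```python
import math

# Display each frame at the first vblank at/after it is ready, and look
# at the intervals between consecutive on-screen presentations.

def presentation_intervals(frame_period_ms, vblank_period_ms, frames=6):
    """Return intervals (ms) between consecutive presented frames."""
    times = []
    for i in range(frames):
        ready = i * frame_period_ms
        times.append(math.ceil(ready / vblank_period_ms) * vblank_period_ms)
    return [round(b - a, 2) for a, b in zip(times, times[1:])]


# 50 fps content on a 60 Hz display: intervals wobble between ~16.67 and ~33.33 ms.
uneven = presentation_intervals(20.0, 1000 / 60)
# The same content on a 50 Hz display: every interval is exactly 20 ms.
even = presentation_intervals(20.0, 20.0)

assert len(set(even)) == 1
assert len(set(uneven)) > 1
```

That wobble is exactly the judder a game avoids by asking for a refresh rate it can actually sustain.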
> >
> > Also remember that the mode switch is not meant to be mandatory if
> > requested by a client. It is only a preference that, when this app is
> > active (and top-most?), the video mode should be changed to the nearest
> > one compatible with the window. The compositor is free to switch
> > between the game's mode and the native desktop mode at will. Minimize
> > the game? Sure, switch back to the native desktop video mode. Bring
> > the game back - switch back to the game mode. Alt+tab? Switch only
> > when something other than the game gets activated, maybe.
As I mentioned above, I don't think clients should care whether they get a
new mode or not beyond a possible FPS change. However, I don't know that I
like the word "hint" in the fullscreen spec in wl_shell_surface. The way
the spec was written before, the client could ask for centered and get a
mode switch or vice versa. What was the intent here?
Perhaps we can do everything we want by saying that fullscreen means "fill
the screen without changing aspect ratio" and adding a couple of hint
requests that the client calls before calling set_fullscreen:
- Preferred framerate: "I don't think I can render any faster than this,
so it will look better if you refresh at this rate." This could, in
theory, apply to non-fullscreen windows as well. Also, it could be a bogus
rate such as 24 fps (movies) and the compositor could try to refresh at a
multiple of 24 or something.
- Preferred fullscreen scaling: "I would like to be as large as possible",
"I would like to be pixel-perfect, even if that means smaller and
surrounded in black", etc. TBH, I don't know how many clients would
actually like the latter one. Maybe if they have text they don't want
getting blurry?
In any case, as long as the default is sane (don't change aspect ratio)
then we can easily add the rest as hints so it's not a show-stopper for now.
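The "refresh at a multiple of 24" idea above could be sketched as a simple selection policy; the function and its fallback rules are invented here for illustration, not a proposal for actual compositor behavior.

```python
# Given a content-rate hint and the refresh rates the output supports,
# prefer an exact match, then an integer multiple of the content rate,
# then simply the closest supported rate.

def pick_refresh(content_fps, supported_hz):
    """Pick an output refresh rate for the given content rate hint."""
    if content_fps in supported_hz:
        return content_fps
    multiples = [hz for hz in supported_hz if hz % content_fps == 0]
    if multiples:
        return min(multiples)   # lowest multiple still avoids judder
    return min(supported_hz, key=lambda hz: abs(hz - content_fps))


# A 24 fps movie on a display offering 50, 60 and 72 Hz: 72 is 3 x 24.
assert pick_refresh(24, [50, 60, 72]) == 72
```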
--Jason
> >
> > The axiom here is that people (e.g. gamers) sometimes really want to
> > run an application on a different video mode than the normal desktop.
> > It has been true in the past, can anyone claim it is not true nowadays?
> > Or can anyone claim these people's use cases do not matter enough?
> >
> >
> > Giulio also had some reasons to prefer the wl_shell way that he
> > mentioned in IRC. Giulio, could you elaborate here?
> >
> >
> >
> > Thanks,
> > pq