[protocol PATCH v2 1/2] add parameters for set_fullscreen
Pekka Paalanen
ppaalanen at gmail.com
Wed Jan 11 00:10:00 PST 2012
On Wed, 11 Jan 2012 15:07:05 +0800
Juan Zhao <juan.j.zhao at linux.intel.com> wrote:
> Thank you very much for your review. :)
>
> On Tue, 2012-01-10 at 13:42 +0200, Pekka Paalanen wrote:
>
> > These could also use commentary on what they mean. My suggestion for
> > them is the following.
> >
> > "none" means the application does not care what method a compositor
> > uses, so the compositor would probably choose either the cheapest
> > (fill? no-fill?) or the best desktop-integrated user experience (scale?)
> > method.
> "None" means no need for the compositor to do any work here, it is useful
> in case that client want to re-layout his UI components to fullscreen.
> Re-layout means rearrange the contents of that client other than scaling.
>
> For example, the current terminal demo code handles fullscreen by
> itself.
> Another example: when a video window is set to fullscreen, the details
> can be decided by the video app, with no need for the compositor to
> scale or change the mode. For instance, the scaling of video data from
> 720x576 to 1024x768 is handled by the video driver. When such clients
> set fullscreen, they do not expect the compositor to do anything; the
> video driver can decide how to scale it.
Sorry, I do not see your point.
In my view, "none" mode as you define it is implicit, and therefore
there is no need to specify it in the protocol. Consider the following:
1. client creates a surface
2. client sends the request set_fullscreen, and enters its event loop,
not rendering anything yet
3. client receives a configure event, which was triggered by the
set_fullscreen request
4. client chooses to use the exact size from the configure event, and
creates a buffer with that
5. client renders the content to the buffer
6. client attaches the buffer to the surface to show it.
That is the basic flow of communication for any surface that
starts out as fullscreen or maximised. I think it would also apply to
the case where a normal surface changes to fullscreen or maximised, in
which case you just ignore step 1.
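A rough sketch of that flow in client C code, assuming the current
wl_shell_surface interface (before this patch series); struct window,
create_buffer() and render() are hypothetical helpers, not real API:

#include <wayland-client.h>

/* Hypothetical application state and helpers, only for illustration. */
struct window {
	struct wl_surface *surface;
	struct wl_shell_surface *shell_surface;
	struct wl_buffer *buffer;
};

struct wl_buffer *create_buffer(struct window *w, int32_t width,
				int32_t height);
void render(struct window *w, int32_t width, int32_t height);

static void
handle_ping(void *data, struct wl_shell_surface *shell_surface,
	    uint32_t serial)
{
	wl_shell_surface_pong(shell_surface, serial);
}

static void
handle_configure(void *data, struct wl_shell_surface *shell_surface,
		 uint32_t edges, int32_t width, int32_t height)
{
	struct window *window = data;

	/* step 4: use the suggested size (the output size, for
	 * fullscreen) and create a buffer that matches it exactly */
	window->buffer = create_buffer(window, width, height);

	/* step 5: render the content into the buffer */
	render(window, width, height);

	/* step 6: attach the buffer to the surface to show it */
	wl_surface_attach(window->surface, window->buffer, 0, 0);
	wl_surface_damage(window->surface, 0, 0, width, height);
}

static const struct wl_shell_surface_listener shell_surface_listener = {
	handle_ping,
	handle_configure,
	/* popup_done is not relevant for a toplevel surface */
};

static void
go_fullscreen(struct window *window)
{
	/* step 1: window->surface and window->shell_surface were
	 * created earlier */
	wl_shell_surface_add_listener(window->shell_surface,
				      &shell_surface_listener, window);

	/* step 2: request fullscreen, then return to the event loop */
	wl_shell_surface_set_fullscreen(window->shell_surface);

	/* step 3: the compositor answers with a configure event
	 * carrying the chosen output's size, handled above */
}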
For fullscreen, the compositor has chosen an output, and sends the
output's size in the configure event. If the client uses that size, the
scaling mode is completely irrelevant, since the buffer has the
matching size. That is the "none" case.
The scaling modes apply only to the case where the client decides to
attach a buffer of a different size than what the compositor suggests
(which is the size of the assigned output).
The configure event is just a hint; an application can be well-behaved
even if it does not exactly follow the configure data. That is how we
are supposed to implement integer-cell resizing (what was the proper
term for this?): the client rounds the size to the nearest integer
multiple of a character size (think of terminals), for example.
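For instance, a terminal might round the size from a configure event
down to whole character cells before creating its buffer; a minimal
sketch, with a made-up cell size:

/* Round the suggested size down to a whole number of character
 * cells; the 8x16 cell size is just an example value. */
static void
round_to_cells(int32_t *width, int32_t *height)
{
	const int32_t cell_w = 8, cell_h = 16;

	*width -= *width % cell_w;
	*height -= *height % cell_h;
}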
> For your concern, I think we can add an "auto" mode and leave the
> choice of method to the compositor.
Yes, that would be nice. It allows the compositor to have a default
scaling mode, which then gets used by applications that do not have a
real preference (usually because they honour the size from the
configure events, in which case no scaling is performed).
Ok, so "auto" does sort of match your "none" in most of the
practical use cases, I would just change the semantics a bit.
Should we call it "default" mode?
> > "scale" is for preferring scaling in the compositor, the application
> > really would like to fill the whole screen, even if it renders a buffer
> > that is too small. The compositor might (be configured to) also switch
> > the video mode, if it wants to scan out the client surface.
> Yeah.
>
> > Scaling
> > would always preserve surface's aspect ratio. The surface is centered.
> Good idea, we need to preserve the aspect ratio. Some follow-up
> patches are needed.
>
>
> >
> > "fill" is for black borders. The application does not want other apps
> > showing at all, and it does not want to be scaled, because it might
> > look bad. This would be preferring 1:1 pixel mapping in the monitor
> > native video mode. The surface is centered.
> Yeah, agree.
>
> >
> > "force" is like "scale", the application wants to fill the whole
> > monitor, regardless of what size it renders its buffers in. The
> > preference is to switch video mode to the smallest mode that can fit
> > the client buffer. In the optimal case, the buffer size matches a video
> > mode, and will be scanned out directly. If the sizes do not match,
> > black borders are added, either by compositing or by the video driver.
> Agree. Some follow-up patches are needed for the case where the sizes
> do not match.
>
>
> > The position of the surface on an output is not defined here,
> > possibly allowing the video driver to scan out a surface smaller than
> > the video mode.
> I'm not sure I catch your idea here. I would like to center it just
> like in fill mode when the surface size does not match any mode.
My point is to let hardware scan out from the client surface, even if
the surface is smaller than the video mode. By not defining the image
to be centered, I am hoping to let more hardware support this case.
But, I don't really know about scan-out hardware, and this rationale
may be moot.
If some hardware can center the image, then it is welcome to do that.
If some hardware has to place the image at the top-left corner for it
to be scanned out, I don't see a reason to require centering in that
case, because centering would prevent scanning out the client surface.
Hence, I would not *require* centering for this scaling mode.
>
>
> > The aim of "force" mode is to be fast to render.
> Agree
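So, pulling the modes above together, the new parameter could end up
looking roughly like this; only a sketch of the idea, the names and
numeric values are not decided by this thread:

/* Hypothetical scaling methods for set_fullscreen, summarising the
 * discussion; names and values are placeholders, not final protocol. */
enum fullscreen_method {
	FULLSCREEN_METHOD_DEFAULT = 0, /* no preference, compositor decides */
	FULLSCREEN_METHOD_SCALE   = 1, /* compositor scales, keeping aspect */
	FULLSCREEN_METHOD_FILL    = 2, /* 1:1 pixels, black borders         */
	FULLSCREEN_METHOD_FORCE   = 3, /* switch video mode to fit buffer   */
};

/* The extended request would then take the method as an argument,
 * something like:
 *   wl_shell_surface_set_fullscreen(shell_surface,
 *                                   FULLSCREEN_METHOD_SCALE);
 */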
Thanks,
pq