[RFC v2] surface crop & scale protocol extension
ppaalanen at gmail.com
Sun Nov 10 01:38:56 PST 2013
On Fri, 8 Nov 2013 22:33:18 -0600
Jason Ekstrand <jason at jlekstrand.net> wrote:
> I've got just a couple of general comments below.
> On Nov 8, 2013 8:46 AM, "Pekka Paalanen" <ppaalanen at gmail.com> wrote:
> > Hi all,
> > this is the v2 of the crop and scale extension, as RFC.
> > The v1 was in:
> > http://lists.freedesktop.org/archives/wayland-devel/2013-April/008927.html
> > Based on v1, Jonny Lamb has been working on a Weston implementation,
> > starting from the protocol side, and working towards the renderer
> > implementations. That work is still very much in progress.
> > Introduction:
> > The primary use case for the crop & scale extension is for hardware
> > accelerated, zero-copy video display. Hardware video decoders produce
> > complete frames, ultimately wl_buffers, that are attached to
> > a wl_surface. The crop & scale extension allows the client to
> > request the compositor to crop and scale the frame so it fits the
> > client's purposes. Optimally, a compositor implements this by
> > programming a hardware overlay unit to do the requested cropping and
> > scaling. The major benefit is that the CPU never has to even see the
> > video pixels.
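For concreteness, the proposed additions have roughly this shape (interface and request names as in the branch linked below; argument lists are abbreviated here from memory, so details may differ from the actual patch):

```xml
<!-- Sketch only: a global factory plus a per-surface sub-interface. -->
<interface name="wl_scaler" version="1">
  <request name="get_viewport">
    <arg name="id" type="new_id" interface="wl_viewport"/>
    <arg name="surface" type="object" interface="wl_surface"/>
  </request>
</interface>

<interface name="wl_viewport" version="1">
  <request name="destroy" type="destructor"/>
  <!-- src_* select the rectangle of the buffer to show,
       dst_* give the resulting surface size in surface-local units. -->
  <request name="set">
    <arg name="src_x" type="fixed"/>
    <arg name="src_y" type="fixed"/>
    <arg name="src_width" type="fixed"/>
    <arg name="src_height" type="fixed"/>
    <arg name="dst_width" type="int"/>
    <arg name="dst_height" type="int"/>
  </request>
</interface>
```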
> > Probably a common case is to have a video in a sub-surface, so that
> > window decorations and overlaid GUI elements can be in other
> > (sub-)surfaces, and avoid touching the pixels in the video frame
> > buffers. Video as a browser element will need cropping when the element
> > is partially off-view. Scaling should be useful in general, e.g.
> > showing a video scaled up (or down) but still in a window.
> > However, I also see that crop & scale can be used to present videos
> > with non-square pixels in fullscreen (e.g. without sub-surfaces). Crop
> > & scale is needed in this case, because the fullscreening methods in
> > wl_shell do not allow changing the aspect ratio, or cropping parts of
> > the video out to fill the whole output area when video and output
> > aspect ratios differ. The fullscreening methods support only adding
> > black borders, but not scale-to-fill-and-crop.
> > Changes since v1:
> > The changes I have made are very small, and can be seen in patch form
> > at:
> > http://cgit.collabora.com/git/user/pq/weston.git/log/?h=clipscale-wip
> > The changes are:
> > - improve wording, add missing details
> > - take buffer_scale into account
> > - rewrite the coordinate transformations more clearly
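As an aside, the resulting surface-to-buffer mapping with crop, scale and buffer_scale can be sketched like this (my own illustration, not code from the patch; it assumes an identity buffer_transform and that the source rectangle is given in the post-buffer_scale coordinate space):

```python
def surface_to_buffer(x, y, src, dst_size, buffer_scale=1):
    """Map a surface-local point (x, y) to buffer pixel coordinates.

    src:      (src_x, src_y, src_width, src_height) crop rectangle,
              in the coordinate space after buffer_scale is applied.
    dst_size: (dst_width, dst_height), the resulting surface size.
    Assumes an identity buffer_transform.
    """
    src_x, src_y, src_w, src_h = src
    dst_w, dst_h = dst_size
    # Undo the scaling from the crop rectangle to the surface size...
    sx = src_x + x * src_w / dst_w
    sy = src_y + y * src_h / dst_h
    # ...then apply buffer_scale to land on actual buffer pixels.
    return sx * buffer_scale, sy * buffer_scale
```

For example, with a 1280x720 surface showing a 640x360 crop that starts at (100, 50) of a buffer_scale=2 buffer, the surface point (640, 360) lands on buffer pixel (840, 460).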
> > In the end, I did not get much else out from the discussions of v1.
> > I think some people do not like the structure of the protocol
> > additions, namely adding yet another global factory interface and a
> > sub-interface to wl_surface. If the consensus really is to move this
> > into a single wl_surface request, that is fine.
> > But, to me this would not make sense as a request in wl_subsurface. The
> > crop & scale state is supposed to be driven by the application
> > component that produces the wl_buffers and attaches them to a
> > wl_surface. The wl_subsurface interface OTOH is to be used by the
> > parent component, for e.g. setting the sub-surface position. The
> > situation is similar to a compositor vs. clients: clients define the
> > surface size and content, compositor the position. Also, the crop &
> > scale state is not well suited to be "forced on" by a parent software
> > component, as it changes how e.g. input event coordinates relate to the
> > wl_buffer contents. Finally, there is the fullscreen video use case I
> > described above.
> In the design of both the subsurfaces and crop&scale, I fear that you may
> be trying too hard to avoid out-of-band communication between components
> with little to no benefit. I don't want to start a bikeshedding landslide
> here and you are free to disagree. However, I'm concerned that we're
> overcomplicating things without gaining anything.
I don't really see how excluding wl_subsurface as a possible host
interface for crop & scale would complicate things. Using
wl_subsurface would only restrict the possible use cases without
any gain I can see.
> In some simple use-cases, you will be able to write one component that
> runs on a subsurface and another that uses it, and the two components are
> basically unaware of each other beyond the initial handshake. However, I
> feel that in practice what is more likely to happen is that the
> primary toolkit will handle everything. If this is the case, it makes no
> difference whether it's separate or inside wl_subsurface except that the
> toolkit has more extensions to wrangle.
There has to be communication after the initial handshake anyway,
at least for resizing. The thing I am trying to avoid is having to
synchronize the two components (which may run in different threads)
for every frame in the steady state. Yes, that complicated the
sub-surface protocol, but crop & scale is largely unrelated to it and
agnostic of sub-surfaces, unless it is made part of wl_subsurface.
Maybe the GStreamer video sink or Clutter et al. experts can
comment here. I seem to recall hearing about use cases where
two different toolkits are being used in the same client.
> Concerning full-screen video: After thinking about it a bit more, I think
> this is a failure of wl_subsurface more than a need for scaling regular
> surfaces. I think the primary issue there is that wl_subsurface provides
> no real way to make a surface that, itself, is empty but its subsurfaces
> have content. If this were possible, then it would be no problem putting
> crop&scale in the wl_subsurface protocol. It's worth noting that this same
> issue makes putting a video (from an external source) in a SHM subsurface
> frame really awkward. See also this e-mail from April:
Actually, crop & scale *not* being in wl_subsurface allows you to
have your invisible main surface! (If you really insist.) Make a
1x1 wl_shm buffer, make the pixel in it (0,0,0,0), and use crop &
scale to define the size of the main surface: you get an invisible
normal wl_surface of the size you want, without wasting memory, to
be used as you like.
Personally I don't like that use case, but it is possible.
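In pseudocode, the trick would look roughly like this (request names as proposed in this RFC; buffer setup is elided and the binding style is invented for brevity):

```
/* One ARGB32 pixel, value 0x00000000: fully transparent. */
buffer = wl_shm_pool.create_buffer(offset = 0, width = 1, height = 1,
                                   stride = 4, format = ARGB8888)

viewport = wl_scaler.get_viewport(main_surface)
viewport.set(0.0, 0.0, 1.0, 1.0,        /* crop: the whole 1x1 buffer  */
             want_width, want_height)   /* scale it to the wanted size */

main_surface.attach(buffer, 0, 0)
main_surface.commit()
```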
As for the video player with hw-accelerated video and shm
decorations, how about keeping e.g. the window title side
sub-surface always as the main surface, and video in a sub-surface?
When fullscreening, the title decoration surface would be
completely occluded by the video sub-surface. Then, when shell
scales the window to fill the output, the only visible surface will
be the video sub-surface. It should also trivially allow use of hw
overlays in the compositor. Would that work?
Sure, you'd need input to the video sub-surface, but that is
quite solvable. Any invisible surfaces covering everything might
prevent the use of hw overlays.