[PATCH v1 weston 07/11] tests: Add a fadein test

Pekka Paalanen ppaalanen at gmail.com
Wed Nov 26 23:48:21 PST 2014

On Wed, 26 Nov 2014 10:49:08 -0600
Derek Foreman <derekf at osg.samsung.com> wrote:

> On 26/11/14 02:43 AM, Pekka Paalanen wrote:
> > On Tue, 25 Nov 2014 10:15:04 -0600
> > Derek Foreman <derekf at osg.samsung.com> wrote:
> > 
> >> On 25/11/14 04:11 AM, Pekka Paalanen wrote:
> >>> On Mon, 24 Nov 2014 18:48:51 -0800
> >>> Bryce Harrington <bryce at osg.samsung.com> wrote:
> >>>
> >>>> On Mon, Nov 24, 2014 at 01:19:46PM +0200, Pekka Paalanen wrote:
> >>>>> On Wed, 19 Nov 2014 15:06:22 -0800
> >>>>> Bryce Harrington <bryce at osg.samsung.com> wrote:
> >>>>>
> >>>>>> This also serves as a proof of concept of the screen capture
> >>>>>> functionality and as a demo for snapshot-based rendering verification.
> >>>>>>
> >>>>>> Signed-off-by: Bryce Harrington <bryce at osg.samsung.com>
> >>>>>> ---
> >>>>>>  Makefile.am         |  7 +++++-
> >>>>>>  tests/fadein-test.c | 64 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> >>>>>>  2 files changed, 70 insertions(+), 1 deletion(-)
> >>>
> >>>>>> +TEST(fadein)

> > Timers I imagine we would not care about. Animations are not driven
> > by timers. There are things like idle timeout, which for testing
> > probably just want to be disabled, unless testing the idle timeout
> > itself.
> The only timer I was concerned about was the one in the headless backend
> frame completion stuff - which I think will have to be replaced by
> something that isn't actually a timer when we're controlling the clock
> anyway.

Correct. That timer would be replaced, maybe by an idle task/callback in
the main event loop. When we use client-driven repaint, the repaint
request should cause the finish_frame hook to run, but I'm not sure
it's safe to do that from a request handler. So, the request handler would
schedule an idle task that would do exactly what the timer handler does
now, but with the fake timestamp from the request. Something like that.

> So yeah, I think you're right and timers aren't going to need any attention.
> >> The client would advance the clock by a specified number of
> >> (nano?)seconds (monotonically, always increasing) via protocol.
> > 
> > Let's make it absolutely clear what happens: always send absolute time.
> > We even have some rare cases where weston actually tests for "monotonic"
> > time going backwards to detect broken things, IIRC.
> Ok, I'll do it that way.  :)
> >> headless renderer would need changes to fake a refresh rate - I don't
> >> think we'd want every clock advance to trigger a repaint.  I think we're
> >> better off with client driving the clock, and repaint being indirectly
> >> controlled.  Right now it just waits 16ms after a repaint then repaints
> >> again.
> > 
> > I'd suggest driving both independently from the test client. That way
> > we have the most control, and the compositor side is simpler.
> > 
> > It's up to the test to drive it right.
> > 
> > But, when we are not using client driven clock and repaint, then the
> > headless backend would indeed benefit from a timer to fake a refresh
> > rate.
> Right now it does have a 16ms timer to fake a refresh rate - do you see
> any reason this needs to be configurable or more complicated than it is now?
> For now I'm inclined to leave the un-client-driven repaint loop driven
> by the simple 16ms timer.

Yeah, that's fine. I just forgot it had a timer already. :-)

Btw. do check that the headless backend reports the wl_output refresh
rate as 60000 (mHz), not 60. I fixed that once already, but I forget if
I ever pushed it. (A free commit for you if so! ;-)

I was baffled when some tests of mine were taking a very long time to
complete, because something was sleeping based on the reported refresh
rate.

> > All we are planning here is basically for non-realtime testing, e.g.
> > testing in build-bots and such. We have had some talk about also doing
> > automated realtime testing on real hardware, like does Weston keep on
> > reaching the full refresh rate when playing a 1080p video on, say, RPi
> > if it once did. Relying on a particular real hardware platform allows
> > testing much more, including timing sensitive operations (hw video
> > decode combined with compositing) and driver/hw operations (does this
> > video surface actually go to a hardware overlay). So far these are just
> > plans, but this kind of things would never be part of the non-realtime
> > test suite of 'make check'.
> Very cool, except we run into a situation where make check can fail if a
> cron job wakes up in the background.  :D

It would be a full OS image, completely repeatable, and no funny
automatic random stuff running, flashed/booted by a test box controller
in a completely automated way. We have some infrastructure for such
things at Collabora, but not for Weston yet. Nor RPi, AFAIK.

So that's all just ponies at the moment, but planned.

> Testing if something actually ended up in a plane would probably need
> additional test protocol to query such things?

There is likely going to be a bit in the
presentation_feedback.presented flags for that. We've already done it
once at Collabora as a hack. I just need to get those flags sorted out
before Christmas so Weston 1.7 won't have a handicapped Presentation...

Mario Kleiner was very interested in presentation_feedback telling
exactly how the presentation was done, so it fits there quite nicely.

> Anyway, I'm getting ahead of myself, that's an interesting problem for
> another day. :)

Indeed. ;-)

> Maybe our first tests should be for all output transforms - gives a good
> way to validate the big transforms refactor...


> I think just drawing two rectangles of different colors in appropriate
> places and then using your suggested "check for solid color" functions
> to verify they're moved appropriately...
> (why two regions?  because if we do just one we can false positive if
> the whole screen ends up that color by mistake - bad zoom)

Or a 2x2 grid, with a distinct background.

> >>> You post damage to a surface and wait for the
> >>> presentation_feedback.presented event. Then do the screenshot. Or
> >>> actually you shouldn't even need to wait, because completing a
> >>> screenshot likely requires one more repaint cycle, so shooting itself
> >>> already guarantees a repaint.
> >>
> >> Hmm, I'm not sure why the screenshot would require an extra repaint cycle?
> > 
> > That's the way it's implemented. If you read the current frame on
> > screen, it may already be outdated if the scenegraph state has changed.
> > Your protocol stream just before the record_screenshot request
> > may include state changes. There might be also some technical reasons I
> > forget, but IIRC we read out the image just before pushing it to the
> > display hw.
> Ah, ok.  I see this now in the screenshooter code.  That's not how my
> current (direct to file) implementation operates.
> I'll follow screenshooter's example for the next revision.

Right. Btw. it will interact with the client-driven repaint, so you
need to think carefully about what triggers what, and what will end up
in the screenshot, both in terms of client state and the clock controls
for flip/current time.

Going with the current code initially is a good plan.


More information about the wayland-devel mailing list