<div dir="ltr"><div>Hi Zhao,<br><br>I've done testing with both 30 and 24 fps and received similar results.<br><br>I will test with the values you mentioned. Can you explain how num_units_in_tick and time_scale work? (What is a tick?)<br>
<br></div>Also, is there a good place in the Intel driver to dump the QP value used for each frame? I'd like to add some QP logging when an env variable is set.<br><br>Regards,<br><br>Chris<br></div><div class="gmail_extra">
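For the QP-logging idea, a minimal sketch of what an env-gated log helper could look like. The variable name QP_TRACE, the helper name, and the log format are all made up for illustration; the real work is finding the spot in libva-intel-driver where the per-frame QP is known and calling something like this there:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical env-gated QP logger.  Reads the (made-up) QP_TRACE
 * environment variable once, then logs one line per frame to stderr
 * when enabled.  Returns 1 if a line was logged, 0 otherwise. */
static int maybe_log_qp(unsigned frame_num, unsigned qp)
{
    static int enabled = -1;
    if (enabled < 0)
        enabled = (getenv("QP_TRACE") != NULL);   /* check env only once */
    if (!enabled)
        return 0;
    fprintf(stderr, "frame %u: qp %u\n", frame_num, qp);
    return 1;
}
```

Caching the getenv() result in a static keeps the per-frame cost negligible when tracing is off.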
<br><br><div class="gmail_quote">On Mon, Aug 18, 2014 at 1:30 AM, Zhao, Yakui <span dir="ltr"><<a href="mailto:yakui.zhao@intel.com" target="_blank">yakui.zhao@intel.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="">On Mon, 2014-08-18 at 01:13 -0600, Chris Healy wrote:<br>
> Hi Zhao,<br>
><br>
><br>
> I enabled LIBVA_TRACE recently and grabbed a bunch of output. Here's<br>
> a link to good size fragment of the output:<br>
><br>
> <a href="http://pastebin.com/KJYzGQAA" target="_blank">http://pastebin.com/KJYzGQAA</a><br>
><br>
><br>
> Here's answers to the specific questions you asked: (From LIBVA_TRACE<br>
> output)<br>
><br>
> [57113.237423] intra_period = 30<br>
> [57113.237424] intra_idr_period = 30<br>
> [57113.237425] ip_period = 1<br>
> [57113.237427] bits_per_second = 3700000<br>
> [57113.237428] max_num_ref_frames = 2<br>
> [57113.237469] num_units_in_tick = 15<br>
> [57113.237470] time_scale = 900<br>
<br>
</div>If the expected fps is 24, the setting of num_units_in_tick/time_scale<br>
is incorrect. It would be better to use the following<br>
settings in your tool:<br>
num_units_in_tick = 1<br>
time_scale = 2 * fps<br>
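To answer the tick question with a sketch: in the H.264 VUI timing info, a "tick" is the smallest unit of time the stream can express, 1/time_scale seconds, and a frame spans num_units_in_tick ticks per field. The helper below is purely illustrative (vui_fps is not a libva function):

```c
/* A "tick" lasts 1/time_scale seconds.  A field lasts num_units_in_tick
 * ticks, and a frame has two fields, so:
 *
 *     fps = time_scale / (2 * num_units_in_tick)
 */
static double vui_fps(unsigned num_units_in_tick, unsigned time_scale)
{
    return (double)time_scale / (2.0 * num_units_in_tick);
}
```

With the traced values this gives 900 / (2 * 15) = 30 fps, which does not match 24 fps content; num_units_in_tick = 1 with time_scale = 48 gives exactly 24 fps.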
<div class=""><br>
<br>
<br>
><br>
> I see avcenc.c, but it's unclear to me if I am dealing with an issue<br>
> with the encoder application or something lower down in libva or<br>
> libva-driver-intel or the HW itself.<br>
><br>
><br>
> Am I correct in believing (simplified) that the HW is just given a raw<br>
> video frame and a QP and the HW returns a chunk of encoded data that<br>
> is "some size" and that it is the responsibility of the SW above the<br>
> HW to dynamically adjust the QP to hit the target bitrate to meet<br>
> whatever the rate control algorithm deems correct?<br>
><br>
<br>
</div>When the CBR mode is used, the driver adjusts the QP dynamically so that<br>
the encoded bitrate meets the target bitrate,<br>
based on the input encoding parameters (for example: intra_period,<br>
ip_period, time_scale, num_units_in_tick, and so on).<br>
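As a toy illustration of that idea (this is NOT the actual libva-intel-driver algorithm, just the general shape of bit-budget-driven QP adjustment): each frame gets a budget of bits_per_second / fps bits, and the QP moves to steer the output toward it.

```c
/* Toy CBR-style QP adjustment, for illustration only.  If the last frame
 * overshot its bit budget, raise the QP (coarser quantization, fewer
 * bits); if it undershot, lower the QP.  H.264 QP range is 0..51. */
static int adjust_qp(int qp, long frame_bits, long budget_bits)
{
    if (frame_bits > budget_bits && qp < 51)
        qp++;                  /* too many bits: reduce quality */
    else if (frame_bits < budget_bits && qp > 1)
        qp--;                  /* spare bits: increase quality */
    return qp;
}
```

A real rate controller reacts faster and tracks a longer window (which is consistent with the running-average behaviour described below in the thread), but the feedback direction is the same.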
<div class="im HOEnZb"><br>
<br>
> If this is the case, where is the code that is dynamically adjusting<br>
> the QP? Also, in the HW, which registers and bits control the<br>
> QP? (I'm looking at the "Intel ® OpenSource HD Graphics Programmer’s<br>
> Reference Manual (PRM) Volume 2 Part 3: Multi-Format Transcoder – MFX<br>
> (Ivy Bridge)" so a reference to the registers might be helpful for me<br>
> to understand better.)<br>
><br>
><br>
> Regards,<br>
><br>
> Chris<br>
><br>
><br>
><br>
</div><div class="HOEnZb"><div class="h5">> On Sun, Aug 17, 2014 at 11:58 PM, Zhao, Yakui <<a href="mailto:yakui.zhao@intel.com">yakui.zhao@intel.com</a>><br>
> wrote:<br>
> On Sun, 2014-08-17 at 19:27 -0600, Chris Healy wrote:<br>
> > I've done some further analysis with our real stream and we<br>
> experience<br>
> > the same inconsistent bitrate behaviour as with the test<br>
> app. It<br>
> > seems to me that the way the bitrate control works doesn't<br>
> do a good<br>
> > job of handling certain input video sequences and the<br>
> encoded bitrate<br>
> > subsequently spikes as a result of this.<br>
> ><br>
> > To help understand what I'm dealing with, I've posted a<br>
> video on<br>
> > youtube showing the video being encoded:<br>
> ><br>
> > <a href="http://www.youtube.com/watch?v=LpYS_9IB0jU" target="_blank">www.youtube.com/watch?v=LpYS_9IB0jU</a><br>
> ><br>
> ><br>
> > I've also posted a bitrate graph online too that shows what<br>
> happens<br>
> > when encoding the video referenced above:<br>
> ><br>
> > <a href="http://snag.gy/imvBe.jpg" target="_blank">http://snag.gy/imvBe.jpg</a><br>
> ><br>
> ><br>
> > In the above graph, I set the targeted encode bitrate to<br>
> 3.7Mbps, CBR,<br>
> > and High Profile H.264. Most of the time the bitrate hovers<br>
> around<br>
> > 3.7Mbps, but sometimes the bitrate drops very low then<br>
> spikes up very<br>
> > high. I also notice that when the bitrate drops down low<br>
> then spikes<br>
> > up real high, the "highness" seems to be a function of how<br>
> much and<br>
> > long the bitrate was under 3.7Mbps. It seems that the rate<br>
> control<br>
> > logic is taking a 20 second running bitrate average and<br>
> trying its<br>
> > best to keep the aggregate bitrate at 3.7Mbps, so if the<br>
> scene<br>
> > complexity drops, the rate control logic reacts by cranking<br>
> the QP to<br>
> > a very low value (high quality) to bring the bitrate back<br>
> up. This<br>
> > behaviour combined with the fact that the video goes to a<br>
> simple fixed<br>
> > image, then crossfades to something complex in less than 20<br>
> seconds<br>
> > when the QP is a very low value results in the massive spike<br>
> in<br>
> > bitrate. (This is my naive understanding of what’s going<br>
> on.)<br>
> ><br>
> > The code I'm using to encode and stream is based in large<br>
> part on<br>
> > libva/test/encode/h264encode.c. I'm not sure if the logic<br>
> for doing<br>
> > rate control is in libva, libva-driver-intel, or supposed to<br>
> be driven<br>
> > by the code that uses libva. Am I dealing with an issue<br>
> with the<br>
> > encoder itself or is it more likely my code not correctly<br>
> driving the<br>
> > encoder?<br>
><br>
><br>
> Hi, Chris<br>
><br>
> Thank you for reporting the issue.<br>
> Will you please check the encoding parameters required by<br>
> CBR? (For<br>
> example: intra_period/ip_period/<br>
> num_units_in_tick/time_scale/bits_per_second in<br>
> VAEncSequenceParameterBufferH264.)<br>
><br>
> Will you please take a look at the example of<br>
> libva/test/encode/avcenc.c and see whether it is helpful?<br>
> (There are two H.264 encoding examples for historical<br>
> reasons. The<br>
> avcenc example is more consistent with the libva-intel-driver.)<br>
><br>
> Thanks.<br>
> Yakui<br>
><br>
> > What can be changed to keep the actual bitrate from being so<br>
> bursty<br>
> > given the video behaviour?<br>
> ><br>
> ><br>
> > Regards,<br>
> ><br>
> > Chris<br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> > On Fri, Aug 15, 2014 at 6:03 PM, Chris Healy<br>
> <<a href="mailto:cphealy@gmail.com">cphealy@gmail.com</a>><br>
> > wrote:<br>
> > I've been encoding h264 content using HD 4000 HW and<br>
> am not<br>
> > able to make heads or tails of the way the encoder<br>
> is behaving<br>
> > from the standpoint of the data size coming out of<br>
> the<br>
> > encoder.<br>
> ><br>
> > I have a 24 fps 720p video that is the same image<br>
> for ~8<br>
> > seconds, then a 1.5 second fade to the next image<br>
> followed by<br>
> > another ~8 seconds on that image. This goes on and<br>
> on<br>
> > indefinitely. I would have expected that the<br>
> bitrate would<br>
> > have been pretty low, then spike for 1.5 seconds<br>
> then go back<br>
> > to a similarly low value.<br>
> ><br>
> ><br>
> > When I look at the data coming out of the encoder<br>
> with a 4Mb/s<br>
> > bitrate set and CBR, I'm seeing almost the inverse<br>
> where most<br>
> > of the time, the bitrate is pretty close to 4Mb/s<br>
> then it<br>
> > spikes above 4Mb/s (presumably for the fade), then<br>
> it drops<br>
> > down to ~2Mbps for a second or so before going back<br>
> up to<br>
> > ~4Mb/s.<br>
> ><br>
> > The strangest part is that for the first ~30 seconds<br>
> of<br>
> > encode, across the board, the bitrate is ~2x the<br>
> bitrate from<br>
> > second 31 -> end of encode. (So, I'm hitting a<br>
> typical rate<br>
> > of 7Mbps and peaking out at 13Mbps.)<br>
> ><br>
> ><br>
> > Is this behaviour expected with gen7 HW? Is there<br>
> something I<br>
> > can do in the initial setup that will cap the MAX<br>
> bitrate<br>
> > regardless of the impact on encode quality?<br>
> ><br>
> > Regards,<br>
> ><br>
> > Chris<br>
> ><br>
> ><br>
> ><br>
><br>
><br>
><br>
><br>
><br>
<br>
<br>
</div></div></blockquote></div><br></div>