[uvch264src in 1.1.1.1] "Got data flow before segment event" warnings until crash
Peter Rennert
p.rennert at cs.ucl.ac.uk
Fri Jul 26 06:45:25 PDT 2013
After playing around with it a bit, I think the memory leak arises if
you set copy_threshold too high (25 still seems to be OK). In my patch
in the previous email I had set it to only 15, and capped max_buffers
to 100 in gstv4l2src.c.

Alternatively, I can also apply:
diff --git a/sys/v4l2/gstv4l2bufferpool.c b/sys/v4l2/gstv4l2bufferpool.c
index 1e74fc7..90e8470 100644
--- a/sys/v4l2/gstv4l2bufferpool.c
+++ b/sys/v4l2/gstv4l2bufferpool.c
@@ -376,7 +376,7 @@ gst_v4l2_buffer_pool_set_config (GstBufferPool * bpool, GstStructure * config)
   /* request a reasonable number of buffers when no max specified. We will
    * copy when we run out of buffers */
   if (max_buffers == 0)
-    num_buffers = 4;
+    num_buffers = 100;
   else
     num_buffers = max_buffers;
(using the original gstv4l2src.c) and leaving

    copy_threshold = 2;

intact. Now the pipeline runs without a memory leak for 4.35 min and
then crashes.

I get exactly the same behaviour if I set num_buffers = 100;
copy_threshold = 25; or num_buffers = 200; copy_threshold = 25;
On 07/26/2013 02:26 PM, Robert Krakora wrote:
> Basically, you want to set up v4l2src so that it ALWAYS copies its
> buffer pool buffers to freshly allocated buffers... this was its
> default behaviour in 0.10.
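>
> As an illustration only (hypothetical helper names, not the actual
> gstv4l2bufferpool.c code), the copy-on-threshold mechanism looks
> roughly like this:
>
>     /* When fewer than copy_threshold free buffers remain in the pool,
>      * hand downstream a copy and immediately return the original to
>      * the pool, so the capture driver never starves. The real code
>      * must deep-copy the data so the V4L2 memory can be reused. */
>     static GstFlowReturn
>     acquire_with_copy (GstBufferPool * bpool, guint free_buffers,
>         guint copy_threshold, GstBuffer ** outbuf)
>     {
>       GstBuffer *buf;
>       GstFlowReturn ret;
>
>       ret = gst_buffer_pool_acquire_buffer (bpool, &buf, NULL);
>       if (ret != GST_FLOW_OK)
>         return ret;
>
>       if (free_buffers < copy_threshold) {
>         *outbuf = gst_buffer_copy (buf);  /* copy for downstream */
>         gst_buffer_unref (buf);           /* original back to the pool */
>       } else {
>         *outbuf = buf;
>       }
>       return GST_FLOW_OK;
>     }
>
> With copy_threshold raised to the pool size (e.g. 100), the first
> branch is always taken, which is effectively always-copy=true.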
>
>
> On Fri, Jul 26, 2013 at 9:24 AM, Robert Krakora
> <rob.krakora at messagenetsystems.com> wrote:
>
> If the only thing I do is set the copy threshold to 100, I can
> run until the memory leak I mentioned before causes the OS to
> kill gst-launch... I don't see the error reported...
>
>
> On Fri, Jul 26, 2013 at 9:03 AM, Peter Rennert
> <p.rennert at cs.ucl.ac.uk> wrote:
>
> Hmm, that is not the solution. It still fails after a few
> minutes, reporting the same error as before. I am now trying to
> find the right way to manipulate the buffer numbers.
>
>
>
> On 07/26/2013 01:13 PM, Peter Rennert wrote:
>> Rob,
>>
>> Well spotted!
>>
>> It works for me. I am only monitoring the memory with htop at
>> the moment, but it seems to be stable. I think the memory leak
>> stems from the fact that max_buffers is set to 0, so the pool
>> will keep growing bigger and bigger. I think that is generally
>> a bad idea. This might also affect plain v4l2src applications.
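>>
>> Sketched, that suspicion looks like this (hypothetical helper, not
>> the real pool code):
>>
>>     /* With max_buffers == 0 there is no cap: whenever the free queue
>>      * is empty a new buffer is allocated, so if downstream keeps
>>      * buffers alive the pool grows without bound. */
>>     static GstBuffer *
>>     pool_take (GQueue * free_bufs, guint * allocated,
>>         guint max_buffers, gsize buf_size)
>>     {
>>       if (!g_queue_is_empty (free_bufs))
>>         return g_queue_pop_head (free_bufs);
>>
>>       if (max_buffers == 0 || *allocated < max_buffers) {
>>         (*allocated)++;
>>         return gst_buffer_new_allocate (NULL, buf_size, NULL);
>>       }
>>       return NULL;            /* capped: caller must wait or copy */
>>     }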
>>
>> Try this patch to get rid of the memory leak:
>>
>> diff --git a/sys/v4l2/gstv4l2bufferpool.c b/sys/v4l2/gstv4l2bufferpool.c
>> index 1e74fc7..282ac2b 100644
>> --- a/sys/v4l2/gstv4l2bufferpool.c
>> +++ b/sys/v4l2/gstv4l2bufferpool.c
>> @@ -411,7 +411,7 @@ gst_v4l2_buffer_pool_set_config (GstBufferPool * bpool, GstStructure * config)
>>      if (max_buffers == 0 || num_buffers < max_buffers) {
>>        /* if we are asked to provide more buffers than we have allocated, start
>>         * copying buffers when we only have 2 buffers left in the pool */
>> -      copy_threshold = 2;
>> +      copy_threshold = 15;  // 2;
>>      } else {
>>        /* we are certain that we have enough buffers so we don't need to
>>         * copy */
>>
>> diff --git a/sys/v4l2/gstv4l2src.c b/sys/v4l2/gstv4l2src.c
>> index 107ea21..0c5d91a 100644
>> --- a/sys/v4l2/gstv4l2src.c
>> +++ b/sys/v4l2/gstv4l2src.c
>> @@ -529,7 +529,8 @@ gst_v4l2src_decide_allocation (GstBaseSrc * bsrc, GstQuery * query)
>>      update = TRUE;
>>    } else {
>>      pool = NULL;
>> -    min = max = 0;
>> +    min = 0;
>> +    max = 100;
>>      size = 0;
>>      update = FALSE;
>>    }
>>
>>
>> On 07/25/2013 09:52 PM, Robert Krakora wrote:
>>> Hi Peter,
>>>
>>> I also wanted to note that when you apply the aforementioned
>>> "hack" to emulate the default operation of v4l2src in
>>> version 0.10 ("always-copy=true") with the pipeline below,
>>> your system will run out of memory due to a memory leak. The
>>> OS will then kill off the gst-launch process instantiated to
>>> run your pipeline. However, your pipeline will run for quite
>>> a while (a lot longer than the 36 seconds noted by Yusuf a
>>> couple of months ago).
>>>
>>> gst-launch-1.0 uvch264src device=/dev/video0 name=src auto-start=true src.vfsrc ! queue ! fakesink src.vidsrc ! queue ! video/x-h264 ! fakesink
>>>
>>> Best Regards,
>>>
>>> Rob Krakora
>>>
>>>
>>>
>>> On Thu, Jul 25, 2013 at 4:27 PM, Robert Krakora
>>> <rob.krakora at messagenetsystems.com> wrote:
>>>
>>> Hi Peter,
>>>
>>> I forgot to mention that the file that needs modification is
>>> named gstv4l2bufferpool.c and lives under sys/v4l2 in
>>> gst-plugins-good.
>>>
>>>
>>>     if (max_buffers == 0 || num_buffers < max_buffers) {
>>>       /* if we are asked to provide more buffers than we have allocated, start
>>>        * copying buffers when we only have 2 buffers left in the pool */
>>>       copy_threshold = 100;  // 2;
>>>     } else {
>>>       /* we are certain that we have enough buffers so we don't need to
>>>        * copy */
>>>       copy_threshold = 0;
>>>     }
>>>
>>>
>>> On Thu, Jul 25, 2013 at 4:21 PM, Robert Krakora
>>> <rob.krakora at messagenetsystems.com> wrote:
>>>
>>> Hi Peter,
>>>
>>> In version 0.10, v4l2src had a property to force buffers from
>>> its pool to always be copied prior to being pushed out. This
>>> property was called "always-copy" and defaulted to "true". It
>>> seems that this was removed in version 1.x. If you go to the
>>> 1.x good plugins and change "copy_threshold" from 2 to 100,
>>> you effectively get the same behaviour in this regard (the
>>> same as the "always-copy=true" default in 0.10 v4l2src), and
>>> there is no error after some 30-odd seconds that causes the
>>> stream to abort.
>>>
>>>     if (max_buffers == 0 || num_buffers < max_buffers) {
>>>       /* if we are asked to provide more buffers than we have allocated, start
>>>        * copying buffers when we only have 2 buffers left in the pool */
>>>       copy_threshold = 100;  // 2;
>>>     } else {
>>>       /* we are certain that we have enough buffers so we don't need to
>>>        * copy */
>>>       copy_threshold = 0;
>>>     }
>>>
>>> Best Regards,
>>>
>>> Rob Krakora
>>>
>>>
>>>
>>> On Thu, Jul 25, 2013 at 1:06 PM, Robert Krakora
>>> <rob.krakora at messagenetsystems.com> wrote:
>>>
>>> Hi Peter,
>>>
>>> I did some work yesterday on this: I enabled logging in
>>> uvcvideo.ko and was able to correlate the buffer sizes
>>> reported by it and by the uvch264src plugin. Below is the
>>> frame that failed... the size in the kernel module was the
>>> same as the size of the buffer once it got back up through
>>> v4l2src and uvch264src to the application.
>>>
>>> uvcvideo: Frame complete (EOF found).
>>> uvcvideo: EOF in empty payload.
>>> uvcvideo: uvc_v4l2_poll
>>> uvcvideo: uvc_v4l2_ioctl(VIDIOC_DQBUF)
>>> uvcvideo: HD Pro Webcam C920: PTS 1029592232 y 2948.098587 SOF 2948.098587 (x1 2149480048 x2 2179588048 y1 193593344 y2 199426048 SOF offset 39)
>>> uvcvideo: HD Pro Webcam C920: SOF 2948.098587 y 927116325 ts 589.071670 buf ts 589.232519 (x1 197984256/205/906 x2 204537856/49/995 y1 1000000000 y2 1099975668)
>>> uvcvideo: uvc_dequeue_buffer - buf->bytesused = 220410
>>> uvcvideo: uvc_v4l2_ioctl(VIDIOC_QBUF)
>>>
>>> Best Regards,
>>>
>>> Rob
>>>
>>>
>>> On Thu, Jul 25, 2013 at 12:22 PM, Peter Rennert
>>> <p.rennert at cs.ucl.ac.uk> wrote:
>>>
>>> A bit of an update today.
>>>
>>> To figure out what happens normally and what is different in
>>> the bad frame, I printed the debug line [line 508] that was
>>> already in the code, augmented with the total size of the
>>> buffer. The original line was:
>>>
>>>     GST_DEBUG_OBJECT (self,
>>>         "Found APP4 marker (%d). JPG: %d-%d - APP4: %d - %d",
>>>         segment_size, last_offset, i, i, i + 2 + segment_size);
>>>
>>> Each time through the loop (before the sanity test that
>>> eventually fails), I print:
>>>
>>> printf("Found APP4 marker (%d). JPG: %d-%d -
>>> APP4: %d - %d - size: %d\n", segment_size,
>>> last_offset, i, i, i + 2 + segment_size,
>>> (int)size);
>>>
>>> What I get for a normal frame is roughly this. The total buffer
>>> size changes, as does the size of the first APP4 chunk; however,
>>> I always seem to get four blocks per frame. The segment size of
>>> the first marker differs between individual frames, while the
>>> 2nd, 3rd and 4th markers are the same size every time, as far as
>>> I can tell:
>>>
>>> Found APP4 marker (13010). JPG: 0-8 - APP4: 8 - 13020 - size: 166660
>>> Found APP4 marker (65533). JPG: 13020-13020 - APP4: 13020 - 78555 - size: 166660
>>> Found APP4 marker (65533). JPG: 78555-78555 - APP4: 78555 - 144090 - size: 166660
>>> Found APP4 marker (22566). JPG: 144090-144090 - APP4: 144090 - 166658 - size: 166660
>>>
>>>
>>> The frame that causes the failure of the
>>> system has the following output:
>>>
>>> Found APP4 marker (12084). JPG: 0-8 - APP4: 8 - 12094 - size: 165485
>>> Found APP4 marker (65533). JPG: 12094-12094 - APP4: 12094 - 77629 - size: 165485
>>> Found APP4 marker (65533). JPG: 77629-77629 - APP4: 77629 - 143164 - size: 165485
>>> Found APP4 marker (22566). JPG: 143164-143164 - APP4: 143164 - 165732 - size: 165485
>>>
>>>
>>> As you can see, the last segment seems to reach beyond the total
>>> size of the buffer. Since the last segment always seems to have a
>>> length of 22566, I think that for some reason (one that does not
>>> occur in GStreamer 0.10) a piece of the H.264 stream is missing
>>> from this particular buffer.
>>>
>>> The number of bytes lost is not the same across trials, and
>>> neither is the cut-off point. For my second trial I got:
>>>
>>> Found APP4 marker (22566). JPG: 144796-144796 - APP4: 144796 - 167364 - size: 165815
>>>
>>> So where are these bytes? And why are they
>>> missing so regularly?
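>>>
>>> For reference, the sanity test that trips can be sketched like
>>> this (illustrative names; the real check is presumably in the
>>> uvch264 MJPG demuxer sources):
>>>
>>>     /* The APP4 segment starting at offset i claims segment_size
>>>      * payload bytes; together with the 2-byte marker overhead its
>>>      * end must still lie inside the mapped buffer of `size` bytes,
>>>      * mirroring the "APP4: i - (i + 2 + segment_size)" debug line. */
>>>     static gboolean
>>>     app4_segment_fits (gsize i, gsize segment_size, gsize size)
>>>     {
>>>       return i + 2 + segment_size <= size;
>>>     }
>>>
>>> Plugging in the numbers above: the good frame gives 144090 + 2 +
>>> 22566 = 166658 <= 166660, while the bad frame gives 143164 + 2 +
>>> 22566 = 165732 > 165485, so the check fails and the stream aborts.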
>>>
> --
> Rob Krakora
> MessageNet Systems
> 101 East Carmel Dr., Suite 105
> Carmel, IN 46032
> (317) 566-1677 Ext. 212
> (317) 663-0808 Fax
>
>