appsink python memory leak?? unref call on sample?

Turmel, Frederic Frederic.Turmel at arris.com
Fri Apr 15 19:09:12 UTC 2016


Further test results: it looks like something is wrong with the Python bindings:

gst-launch-1.0 -v udpsrc uri=udp://239.1.1.1.1:5724 buffer-size=50000000 ! tsdemux program-number=1 ! fakesink  NO MEMORY LEAK
python udpsrc->tsdemux program-number=1->appsink    MEMORY LEAK
python udpsrc->tsdemux program-number=1->fakesink   MEMORY LEAK

All tests use the same multicast source and properties. So it looks like something is going on with the Python bindings. Any suggestion for the next step?
BTW, I see the same behavior with 1.6.2 on Windows.

Is it possible that tsdemux emits signals that the Python bindings cannot handle?
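For reference, the fakesink variant of the Python test above is essentially the following (built with parse_launch; the address and port are just placeholders for my actual multicast source):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# same elements and properties as the gst-launch line above
pipeline = Gst.parse_launch(
    'udpsrc uri=udp://239.1.1.1:5724 buffer-size=50000000 ! '
    'tsdemux program-number=1 ! fakesink')
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()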

Thanks

-----Original Message-----
From: gstreamer-devel [mailto:gstreamer-devel-bounces at lists.freedesktop.org] On Behalf Of Turmel, Frederic
Sent: Thursday, April 14, 2016 11:00 PM
To: Discussion of the development of and with GStreamer <gstreamer-devel at lists.freedesktop.org>
Subject: RE: appsink python memory leak?? unref call on sample?

Hi, I have a weird behavior that I hope somebody here has insight into. I'm chasing the memory leak I described earlier, and I'm now testing only on Ubuntu with a 1.8.0 build.

To summarize again, I have the following pipeline:
udpsrc->tsdemux->decodebin->appsink

I have done various tests with the same test signal using gst-launch, bypassing Python. For example:

gst-launch-1.0 -v udpsrc uri=udp://239.1.1.1:1234 buffer-size=50000000 ! tsdemux program-number=1 ! decodebin ! fakesink

This does not expose a significant memory leak after several days of running.
If I build the same pipeline with appsink in Python, things get ugly. I'm doing the bare minimum in Python, as seen in the code below. I initially suspected that I was not pulling the buffers fast enough, but that does not seem to be the case, since setting the drop and max-buffers properties on appsink makes no difference. I tried both pulling manually and using the new-sample signal.
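To rule out the Python side keeping samples alive, I have also been trying a handler along these lines (the mapping part is only illustrative):

def on_new_sample_explicit(appsink):
    sample = appsink.emit('pull-sample')
    if sample is not None:
        buf = sample.get_buffer()
        ok, mapinfo = buf.map(Gst.MapFlags.READ)
        if ok:
            # ... look at mapinfo.data here ...
            buf.unmap(mapinfo)
        # dropping the last Python reference is the equivalent of
        # gst_sample_unref() in C
        sample = None
    return Gst.FlowReturn.OK

As far as I understand, PyGObject releases the underlying GstSample once the last Python reference goes away, so no explicit unref should be needed.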

What is puzzling me is the following:
If I ingest the same source in parallel with gst-launch and the Python code below, memory gets eaten slowly by the Python process. The UDP source is a multicast with a single program, but with a lot of private table types on PID 0. If I remove those private tables and keep only audio and video, I don't see memory increasing in Python.

The only difference is that tsdemux does not have to deal with those extra tables, which are not being passed downstream anyway. So to me that points to something wrong with the tsdemux plugin.
The big question is: if there were a problem with tsdemux, I would expect to see the same problem with gst-launch, but I don't.

How is it possible that something affects memory when using Python but not when using gst-launch?

What would make Python behave differently from gst-launch in that regard?

Any debug flag to suggest?
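One thing I'm planning to try is the leaks tracer that was added in 1.8, along the lines of:

GST_TRACERS=leaks GST_DEBUG="GST_TRACER:7" python myscript.py

(myscript.py being the code below), and the same for the gst-launch run, to compare what each reports on shutdown.

For reference, here is the Python code: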

import gi

gi.require_version('Gst', '1.0')
from gi.repository import Gst, GObject, GLib

import os
import time
from subprocess import check_output
from threading import Thread

Gst.init(None)

def on_new_buffer(appsink):
    # pull the sample; the Python reference is dropped when 'buf' goes out
    # of scope, which is the equivalent of gst_sample_unref() in C
    buf = appsink.emit('pull-sample')
    return Gst.FlowReturn.OK

def on_new_preroll(appsink):
    buf = appsink.emit('pull-preroll')
    print 'new preroll'
    return Gst.FlowReturn.OK
       
pipeline = Gst.Pipeline.new("Pipeline")

sourceA = Gst.ElementFactory.make('udpsrc', 'SourceA')
tsdemuxA = Gst.ElementFactory.make('tsdemux', 'tsdemuxA')
decodeA = Gst.ElementFactory.make('decodebin', 'decodeA')
appsinkA = Gst.ElementFactory.make('appsink', 'appsinkA')


appsinkA.set_property('emit-signals', True)
appsinkA.set_property('enable-last-sample', False)
appsinkA.set_property('max-buffers', 200)
appsinkA.set_property('drop', False)
appsinkA.set_property('sync', False)
appsinkA.set_property('qos', True)

sourceA.set_property('uri', 'udp://239.127.23.220:5724')
sourceA.set_property('buffer-size', 50000000)

tsdemuxA.set_property('program-number', 1)
tsdemuxA.set_property('parse-private-sections', False)

cap = Gst.Caps.from_string("video/x-raw")
appsinkA.set_property('caps', cap)


appsinkA.connect('new-sample', on_new_buffer)
appsinkA.connect('new-preroll', on_new_preroll)


pipeline.add(sourceA)
pipeline.add(tsdemuxA)
pipeline.add(decodeA)
pipeline.add(appsinkA)  

def on_decode_pad_addedA(element, pad):
    type = pad.query_caps(None).to_string()
    print "cap name is = " + type
    # only link video:
    if type.startswith('video/'):
        pad.link(appsinkA.get_static_pad("sink"))
    else:
        print 'new pad other than video so ignoring'

def on_decode_pad_addedtsdemuxA(element, pad):
    type = pad.query_caps(None).to_string()
    print "cap name is = " + type
    # only link video:
    if type.startswith('video/'):
        pad.link(decodeA.get_static_pad("sink"))
    else:
        print 'new pad other than video so ignoring'

decodeA.connect('pad-added', on_decode_pad_addedA)
tsdemuxA.connect('pad-added', on_decode_pad_addedtsdemuxA)

sourceA.link(tsdemuxA)
# tsdemux and decodebin expose their source pads dynamically, so these two
# links do nothing here; the actual linking happens in the pad-added callbacks
tsdemuxA.link(decodeA)
decodeA.link(appsinkA)

pipeline.set_state(Gst.State.PLAYING)

GObject.threads_init()
loop = GLib.MainLoop()
loop.run()
pipeline.set_state(Gst.State.PAUSED)

pipeline.set_state(Gst.State.NULL)
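To watch memory growth from inside the script, I'm thinking of adding something like this before loop.run() (Linux only; it just reads /proc/self/statm and assumes 4 kB pages):

def log_rss():
    with open('/proc/self/statm') as f:
        resident_pages = int(f.read().split()[1])
    print 'RSS: %d kB' % (resident_pages * 4)
    return True  # keep the timeout firing

GLib.timeout_add_seconds(60, log_rss)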





-----Original Message-----
From: gstreamer-devel [mailto:gstreamer-devel-bounces at lists.freedesktop.org] On Behalf Of Turmel, Frederic
Sent: Tuesday, April 12, 2016 8:15 AM
To: Discussion of the development of and with GStreamer <gstreamer-devel at lists.freedesktop.org>
Subject: RE: appsink python memory leak?? unref call on sample?

Hi Sebastian, I made some progress: I removed Python from the equation to try to pinpoint where the leak is happening. I'm currently testing with 1.8 on Windows and Ubuntu 1.8 in parallel.

gst-launch-1.0 udpsrc (receiving multicast) ! tsdemux program-number=101 ! h264parse ! avdec_h264 ! fakesink
That pipeline started at 70MB and ended up at 1.7GB after an overnight test (8 hours) on Windows.

gst-launch-1.0 -v udpsrc (receiving multicast) ! tsdemux program-number=101 ! fakesink
That pipeline went from 12172B to 13428B after 8 hours; will keep monitoring. (Ubuntu, 1.8)

gst-launch-1.0 -v udpsrc (receiving multicast) ! tsdemux program-number=102 ! h264parse ! fakesink
That pipeline went from 12764B to 13892B after 8 hours; will keep monitoring. (Ubuntu, 1.8)

I also see that one memory leak (EIT) was fixed in tsdemux right after 1.8 was built; that could potentially contribute to this, but it would be really small.

I'm starting more tests to see if avdec_h264 is the culprit.
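If it does turn out to be the decoder path, I'll probably also run the gst-launch version under valgrind to get a proper leak report, something like:

G_SLICE=always-malloc G_DEBUG=gc-friendly valgrind --leak-check=full gst-launch-1.0 udpsrc (receiving multicast) ! tsdemux program-number=101 ! h264parse ! avdec_h264 ! fakesink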

Thanks
FredT


-----Original Message-----
From: gstreamer-devel [mailto:gstreamer-devel-bounces at lists.freedesktop.org] On Behalf Of Sebastian Dröge
Sent: Monday, April 11, 2016 11:48 PM
To: Discussion of the development of and with GStreamer <gstreamer-devel at lists.freedesktop.org>
Subject: Re: appsink python memory leak?? unref call on sample?

On Di, 2016-04-12 at 02:59 +0000, Turmel, Frederic wrote:
> Hi, I’m observing a slow memory leak with the app sink in python. I’m 
> using appsink to receive raw video frame from a decodebin.
>  
> In the C API I see that we need to call “gst_sample_unref(sample)”
> after reading the sample
>  
> Is there an equivalent in python?
>  
> The leak does not seem to be caused by a frame buffer leak, since the
> leak is really small and a frame leak would be much bigger than what
> I’m seeing.
>  
> Pipeline is udpsrc->tsdemux->decodebin->appsink
>  
> Any information will be appreciated.

Please provide some code to reproduce the problem. Then we can decide whether it's a problem in your code or in the Python bindings; it's probably the latter, though.

--
Sebastian Dröge, Centricular Ltd · http://www.centricular.com

_______________________________________________
gstreamer-devel mailing list
gstreamer-devel at lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel