Tracking down a memory leak

Romain Picard romain.picard at oakbits.com
Sun Mar 9 13:40:09 PDT 2014


On Sun, Mar 9, 2014 at 4:49 AM, Chris Tapp <opensource at keylevel.com> wrote:
> Hi Tim,
> 
> On 9 Mar 2014, at 00:12, Tim Müller <tim at centricular.com> wrote:
> 
>> On Sat, 2014-03-08 at 17:37 +0000, Chris Tapp wrote:
>> 
>> Hi Chris,
>> 
>>> I've got a pipeline that's used to play audio and feed video to a
>>> fakesink so that I can grab the latest frame when I want it (for
>>> rendering in a GLES app).
>>> 
>>> This works fine on my development system (Ubuntu 12.04, AMD video
>>> card), but when I run it on the target hardware there is a small
>>> memory leak related to running the pipeline. I've found this when
>>> running some long-term robustness tests, and it takes many, many days
>>> of looping playback for the leak to get to the point where the
>>> application gets terminated. This isn't ideal, as the app is expected
>>> to run for months at a time. As I said, this leak does not happen on
>>> the development system.
>>> 
>>> The target is an Intel Cedartrail platform with the PVR drivers
>>> (needed for acceleration), and I suspect that the memory leak is
>>> related to these drivers and/or their interaction with GStreamer.
>>> 
>>> Are there any recommended techniques that I can use to try and get an
>>> idea of where the problem lies?
>> 
>> I would start with the usual tools: run your test case / app in
>> valgrind (--leak-check=yes, initially with --show-reachable=no); you
>> can also use valgrind's massif tool to track memory allocation over
>> time to see which functions are responsible for increasing the usage.
>> 
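
For reference, the valgrind invocations would look roughly like this
("your-app" is just a placeholder for the actual binary and arguments;
the G_SLICE/G_DEBUG settings are the usual GLib hints for valgrind runs):

  # leak check, hiding still-reachable blocks at first
  G_SLICE=always-malloc G_DEBUG=gc-friendly \
      valgrind --leak-check=yes --show-reachable=no ./your-app

  # heap profiling over time with massif
  valgrind --tool=massif ./your-app
  ms_print massif.out.<pid>
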
>> Alternatively, try to narrow down if any particular type of object or
>> mini object is being leaked (e.g. caps or events or so). If you don't
>> mind modifying gstreamer itself, you could use the internal alloc
>> trace stuff and regularly dump the list (counter) of allocated
>> objects. Once you know that, you can look at the code. This assumes
>> it's in GStreamer though.
> 
> Thanks, I'll give that a try. Valgrind didn't throw up anything obvious
> on a quick run (lots of 'possibles', outside of my code).
> 
>> I would try valgrind first. Note that you don't have to run it until
>> it aborts of course, just run it for a while (the longer, the more
>> likely the leak is to stand out). Could also try running it for an
>> hour and for two hours to see if any particular leaked/reachable
>> allocations increase/double.
>> 
>> If valgrind is too slow or changes the application's behaviour, there
>> are LD_PRELOAD-type memory alloc/free trackers as well, if I remember
>> correctly.
> 
> That could be useful, as it does run much slower (30 to 60 times!) and
> it takes quite a while for the problem to be visible.


You can also try edleak:
https://github.com/MainRo/edKit

This is a preloaded library that hooks the usual allocation calls, plus
python/html visualization tools that make it quite easy to find where
the leaks come from.
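
For context, the basic idea behind this kind of preloaded tracker is to
interpose malloc/free with LD_PRELOAD and keep counters that get dumped
periodically, so a slow leak shows up as a steadily climbing number of
live blocks. The following is only a rough sketch of that technique --
it is not edleak's actual code, and the file/library names are made up:

/* leaktrace.c -- minimal LD_PRELOAD allocation counter (sketch only).
 *
 * Build:  gcc -shared -fPIC -o libleaktrace.so leaktrace.c -ldl
 * Run:    LD_PRELOAD=./libleaktrace.so ./your-app
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void *(*real_malloc)(size_t) = NULL;
static void (*real_free)(void *) = NULL;

static long live_allocs = 0;   /* blocks allocated but not yet freed */
static long total_allocs = 0;  /* used to trigger periodic dumps */

static void init_hooks(void)
{
    real_malloc = dlsym(RTLD_NEXT, "malloc");
    real_free   = dlsym(RTLD_NEXT, "free");
}

void *malloc(size_t size)
{
    if (!real_malloc)
        init_hooks();

    void *ptr = real_malloc(size);
    if (ptr) {
        __sync_fetch_and_add(&live_allocs, 1);
        /* dump the live count every ~100000 allocations; use
         * snprintf+write to avoid re-entering malloc via stdio */
        if (__sync_fetch_and_add(&total_allocs, 1) % 100000 == 0) {
            char buf[64];
            int len = snprintf(buf, sizeof(buf),
                               "leaktrace: %ld blocks live\n", live_allocs);
            write(STDERR_FILENO, buf, len);
        }
    }
    return ptr;
}

void free(void *ptr)
{
    if (!real_free)
        init_hooks();
    if (ptr)
        __sync_fetch_and_sub(&live_allocs, 1);
    real_free(ptr);
}

A real tool also interposes calloc/realloc and records call sites and
sizes, which is what lets you see which code path keeps growing; the
sketch above only shows the interposition and periodic-dump idea.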

The documentation is rather sparse, but some posts on my blog may help
you use it:
http://blog.oakbits.com/

I use this tool on boards where valgrind and DUMA are too intrusive, so
it may help you.

Romain.

