[gst-devel] VideoFilter(transformation element) implementation

alex pkit at mail.ru
Thu Apr 1 23:17:02 CEST 2010


Hello to all.

I'm a novice at writing GStreamer plugins, so this is a novice question.
I'm using Debian squeeze with gstreamer-0.10.28 from the distribution.

I need to implement an element that can be described as a 
video_converter/video_filter.
Unfortunately there is no such example in the PWG (as of now I haven't 
read it to the end, just finished chapter 2); there are only the 
gsttransform.c and gstplugin.c templates. In particular, my element must 
have independent input and output buffers, because of the kind of 
processing it does.

At first I tried to build this on the gstplugin code, creating an 
additional GstBuffer to act as the output, but this causes a memory 
leak, because I can't free the memory before passing the data downstream.

My second try was to create a temporary buffer directly with the GLib 
memory management function g_memdup(), copying the input GstBuffer data 
into it, modifying it, copying the data back into the GstBuffer, freeing 
the allocated memory, and then pushing the modified GstBuffer out on the 
element's source pad. In this case I had no memory trouble, but it is 
visible (I see, for example, a blinking square: black = original, red = 
after modification) that the output data is sometimes overwritten by the 
input, since the element has only one buffer.
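For clarity, here is a minimal standalone sketch of the copy-then-modify 
pattern I mean, using plain malloc()/memcpy() in place of 
g_memdup()/GstBuffer so it compiles on its own; the invert_pixels() 
function is a hypothetical stand-in for whatever processing the element 
does:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical processing step, standing in for the real filter work. */
static void invert_pixels (unsigned char *data, size_t size)
{
  size_t i;
  for (i = 0; i < size; i++)
    data[i] = 255 - data[i];
}

/* Copy-then-modify: duplicate the buffer data, process the copy,
 * copy the result back, and only then free the temporary. */
static void process_in_copy (unsigned char *buf_data, size_t size)
{
  /* 1. duplicate the input data (what g_memdup() does in GLib terms) */
  unsigned char *tmp = malloc (size);
  memcpy (tmp, buf_data, size);

  /* 2. modify the copy, leaving the original untouched meanwhile */
  invert_pixels (tmp, size);

  /* 3. copy the result back and free the temporary *after* the copy,
   *    never before -- this ordering avoids both the leak and a
   *    use-after-free */
  memcpy (buf_data, tmp, size);
  free (tmp);
}
```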

Finally, as I understand it, there is no pre-made class for video 
filters, so I must derive my plugin from GstBaseTransform (i.e. use the 
gsttransform.c template), and everything I tried earlier was misguided.
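To show what I'm after, here is a sketch (untested) of a "normal mode" 
transform function for a GstBaseTransform subclass in 0.10, where the 
base class hands the element separate input and output buffers; 
GstMyFilter and gst_my_filter_transform are placeholder names, and the 
pixel inversion is just an example operation:

```c
/* Normal-mode transform: inbuf and outbuf are independent buffers
 * allocated by the base class, so no manual copying is needed. */
static GstFlowReturn
gst_my_filter_transform (GstBaseTransform * trans,
    GstBuffer * inbuf, GstBuffer * outbuf)
{
  guint8 *in = GST_BUFFER_DATA (inbuf);
  guint8 *out = GST_BUFFER_DATA (outbuf);
  guint size = MIN (GST_BUFFER_SIZE (inbuf), GST_BUFFER_SIZE (outbuf));
  guint i;

  /* read from `in`, write to `out` -- the buffers never alias */
  for (i = 0; i < size; i++)
    out[i] = 255 - in[i];

  return GST_FLOW_OK;
}

/* hooked up in class_init, alongside transform_caps/get_unit_size:
 *   base_class->transform = GST_DEBUG_FUNCPTR (gst_my_filter_transform);
 */
```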

In the gstreamer-libs documentation I found that I may need to implement 
a "prepare_output_buffer" function where the output buffer is created:
==============================================================
Normal mode
     * always_in_place flag is not set, or there is no transform_ip 
       function
     * Element will receive an input buffer and output buffer to 
       operate on.
     * Output buffer is allocated by calling the prepare_output_buffer 
       function.

Example elements
     * Videoscale, ffmpegcolorspace, audioconvert when doing 
       scaling/conversions

Special output buffer allocations
     * Elements which need to do special allocation of their output 
       buffers other than what gst_buffer_pad_alloc allows should 
       implement a prepare_output_buffer method, which calls the parent 
       implementation and passes the newly allocated buffer.
===============================================================
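From reading the headers, I imagine an implementation would look roughly 
like the following sketch (untested, for 0.10; the gst_my_filter_* names 
are placeholders). As I understand it, an ordinary video filter can often 
rely on the default allocation and only needs this vfunc when the output 
buffer requires special allocation:

```c
/* Sketch of a prepare_output_buffer implementation for
 * GstBaseTransform in GStreamer 0.10. */
static GstFlowReturn
gst_my_filter_prepare_output_buffer (GstBaseTransform * trans,
    GstBuffer * input, gint size, GstCaps * caps, GstBuffer ** buf)
{
  /* allocate a fresh, independent output buffer of the right size */
  *buf = gst_buffer_new_and_alloc (size);
  gst_buffer_set_caps (*buf, caps);

  /* carry the input's timestamps over to the output */
  gst_buffer_copy_metadata (*buf, input, GST_BUFFER_COPY_TIMESTAMPS);

  return GST_FLOW_OK;
}

/* in class_init:
 *   base_class->prepare_output_buffer =
 *       GST_DEBUG_FUNCPTR (gst_my_filter_prepare_output_buffer);
 */
```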

But I found no example of how to implement such a function.
Could anyone be so kind as to give an implementation example, or some 
direction on where I can learn how to write a video transformation 
element that works in "Normal mode"?

P.S. If you need any additional info, please ask. I did try to look at 
the "Example elements" pointed to in the docs (videoscale, 
ffmpegcolorspace), but they are rather complicated and overloaded by 
their concrete use case/implementation, and so not clear to a novice 
like me.

I found a solution with a temporary buffer that works (in the 
gstplugin.c case), so now I'm not sure that my attempt with 
g_memdup()/g_free() was totally wrong.
