[Mesa-dev] GSoC : Video decoding state tracker for Gallium3d
emeric.grange at gmail.com
Fri Mar 25 14:18:10 PDT 2011
Hi, thanks for the reply,
2011/3/24 Christian König <deathsimple at vodafone.de>:
>> Hi everyone,
>> My name is Emeric, I am a 22-year-old French student, and I am
>> currently looking to apply to the Google Summer of Code 2011.
>> I saw the "Gallium H.264 decoding" idea on the X.Org GSoC page, and I
>> am particularly interested in this project.
>> The idea would be to use shaders to do some work, in order to write a
>> generic video decoding solution, targeting r300g, r600g, nouveau, and
>> basically all Gallium3d drivers.
>> The project would be to write a state tracker which exposes some of
>> the most shader-friendly decoding operations (like motion
>> compensation, IDCT, intra-prediction, deblocking filter and maybe
>> VLC decoding) through a common API like VDPAU or VA-API.
>> These APIs can be used to decode MPEG-2, MPEG-4 ASP/AVC, VC-1 and
>> others, but at first I intend to focus on H.264 decoding to save
>> time, because I know it better and it is currently widely in use;
>> but again, the goal of the project is to be generic.
>> * I am quite familiar with the H.264 specification and its
>> implementation, so I can understand the amount of work required on
>> that side.
>> * I have some understanding of how a graphics card works, what can
>> be done with it and what should not, but I am not used to writing
>> shader programs. But hey, this is the fun part, and I am definitely
>> here to learn.
>> * As I see it there are no easy or magical gains when using shaders
>> to compute stuff like motion compensation or IDCT, but there are
>> probably some things that can be done to smooth over the decoding of
>> high-resolution content. Besides, with the emergence of H.265 in the
>> next few years, it seems that there will be more and more
>> computational power involved (bigger transform sizes, many more
>> intra and inter prediction schemes, insane resolutions, ...), so I
>> think spending some time preparing for that could be a good idea.
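For a sense of scale, the "IDCT" stage mentioned above is, in H.264, a
small 4x4 integer inverse transform built only from adds, subtracts and
shifts. A minimal CPU reference in Python, of the kind that could serve
as a baseline before a shader port (a sketch, not production code):

```python
def h264_inverse_transform_4x4(coeffs):
    """Inverse 4x4 integer transform from H.264 (the 'IDCT' stage).

    `coeffs` is a 4x4 list of scaled coefficients; returns the 4x4
    residual block after the final (x + 32) >> 6 rounding.
    """
    def butterfly(w):
        # One row/column pass: the same butterfly is applied both ways.
        e0 = w[0] + w[2]
        e1 = w[0] - w[2]
        e2 = (w[1] >> 1) - w[3]
        e3 = w[1] + (w[3] >> 1)
        return [e0 + e3, e1 + e2, e1 - e2, e0 - e3]

    rows = [butterfly(r) for r in coeffs]       # horizontal pass
    cols = [butterfly(c) for c in zip(*rows)]   # vertical pass
    block = list(zip(*cols))                    # back to row-major order
    return [[(x + 32) >> 6 for x in row] for row in block]

# A DC-only block of 64 decodes to a flat block of 1s:
dc_only = [[64, 0, 0, 0]] + [[0, 0, 0, 0]] * 3
print(h264_inverse_transform_4x4(dc_only))
```

Each output sample depends on one row and one column of inputs, which is
exactly the kind of data-parallel, branch-free arithmetic that maps well
onto fragment shaders.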
> Sounds like I finally get some more help with the pipe-video branch
I'll be glad to help on the pipe-video branch!
> Having a good idea of how H.264 works is a good start, but keep in
> mind that this is quite a bunch of work to do.
I realize that this is a lot of work, trying to define a proposal that
can be achieved in due time is gonna take some thinking...
> Now to your questions:
>> - First of all, does this project seems to be a good idea, would you
>> be interested in it ?
> Yes, I think it's quite a good idea to have an open source
> implementation of the different video APIs as state trackers in mesa.
> Besides the obvious value of shader-based decoding, the Mesa
> infrastructure seems to be the right place to even support the different
> hardware decoders.
>> - As the time to realize this project will be quite limited, what
>> would be the tasks to achieve in priority ?
> It generally seems to be a good idea to code the pipeline in reverse
> order, e.g. start with a pure CPU based implementation and then
> implement the step nearest to displaying a frame (usually MC), then the
> next stage and so on.
I'm OK with writing a pure CPU-based implementation first and then
digging into GPU offloading. As I understand it, the VDPAU API will
require the state tracker to be almost fully independent anyway. But
writing an (almost) fully featured H.264 decoder is already a heavy
task.
Would it be OK to pull some code from FFmpeg to cut down on development time?
The thing with a video decoder is that it can only produce pictures
if everything inside it is functional. Sure, some things can be
ignored, like interlacing-related functionality, the loop filter, and
anything else not in the High profile, but the core of the decoder
will still be a big piece of code. Doing like 80% of the work and
having 0% of the results might prove to be frustrating.
What could be done to split this GSoC project into independent tasks?
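To make the "start with MC" suggestion above concrete: the heart of
H.264 motion compensation is sub-pixel interpolation, e.g. the 6-tap
(1, -5, 20, 20, -5, 1) filter for luma half-pel positions. A minimal
CPU sketch in Python; the edge handling here is a simplifying
assumption (border replication), not the full spec behavior:

```python
def halfpel_row(samples):
    """Horizontal half-pel interpolation for one row of luma samples,
    using H.264's 6-tap filter (1, -5, 20, 20, -5, 1) with rounding.

    Edge handling is simplified: the row is padded by repeating the
    border samples. Output i is the half-pel value between input
    samples i and i+1.
    """
    def clip(x):
        # Keep results in the 8-bit sample range.
        return max(0, min(255, x))

    padded = [samples[0]] * 2 + list(samples) + [samples[-1]] * 3
    out = []
    for i in range(len(samples)):
        w = padded[i:i + 6]
        acc = (w[0] - 5 * w[1] + 20 * w[2]
               + 20 * w[3] - 5 * w[4] + w[5])
        out.append(clip((acc + 16) >> 5))
    return out

# On a flat row the filter is a no-op (the taps sum to 32, i.e. 1.0
# in fixed point):
print(halfpel_row([100] * 8))
```

Like the inverse transform, this is a short independent kernel per
output sample, so it can be validated on the CPU first and moved to a
shader later without touching the rest of the decoder.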
>> - What API would be better suited for the state tracker, VA-API or
>> VDPAU?
> Actually I don't really know; VDPAU and VA-API don't seem to have
> that many differences. I'm currently in the process of cleaning the
> stuff up and getting at least the presentation part of VDPAU
> working, so I would suggest starting there.
I think VDPAU has more support from multimedia software, and if you
have spent some time working on it already, that should settle the
issue: VDPAU it is. If the APIs are close enough and sufficient care
is taken during development, supporting both in the future may even
be possible.
>> - I understand my entry point into this project should be the
>> g3dvl state tracker and the Mesa pipe-video branch; I'd like to
>> learn more about the state of this work.
> A good start would be to checkout and compile the stuff and get a
> working development environment for testing up and running.
> Then start reading the code and get familiar with the coding style and
> the different ideas behind it. If you have any questions about g3dvl
> feel free to ask me.
I will try to get g3dvl working on a laptop with an nv50 GPU, as
suggested by Younes Manton, and see what happens "behind the scenes".
Shader computation might be hard for me to understand at first.
>> - Should I cross-post this on xorg or xorg-devel maillist ?
> No, I don't think that's necessary. X is just used for outputting
> the final rendering result these days (hopefully the X devs won't
> hurt me too hard for this).
Well, I finally did it before receiving your answer. The coding would
be on the Mesa side, but the GSoC is handled on the X.Org side, I
believe.
>> Thanks for reading me, and I want to apologize if some parts of
>> this mail are hard to understand; I am not a native English
>> speaker, so mistakes may have been made :-)
>> If there is anything that I forgot or that I should detail, do not
>> hesitate to tell me.
> I'm not a native speaker myself, and to be honest my English could
> also use some improvement, but I think that English is one of the
> easiest languages to learn, and most computer-specific vocabulary is
> based on English anyway.