[PATCH intel-gpu-tools 00/10] Upgrade module configuration - Part 2

Gaetan Nadon memsize at videotron.ca
Wed Jan 4 16:52:29 PST 2012

On 12-01-04 06:33 PM, Ben Widawsky wrote:
> On 01/04/2012 02:14 PM, Gaetan Nadon wrote:
>> On 12-01-04 02:38 PM, Ben Widawsky wrote:
>>> I should have added... The shader debugger isn't complete. This is a
>>> prototype/proof of concept. Somehow that got dropped in my earlier
>>> mail.
>>> On 01/04/2012 11:34 AM, Ben Widawsky wrote:
>>>> On Wed, Jan 04, 2012 at 03:31:48PM +0100, Daniel Vetter wrote:
>>>>> On Wed, Jan 4, 2012 at 15:17, Gaetan Nadon <memsize at videotron.ca>
>>>>> wrote:
>>>>>> On 12-01-03 09:42 PM, Eric Anholt wrote:
>>>>>>> On Mon, 02 Jan 2012 18:23:15 -0500, Gaetan
>>>>>>> Nadon <memsize at videotron.ca> wrote:
>>>>>>>> This series applies some xorg project policies and code reuse
>>>>>>>> from util-macros.
>>>>>>>> In some cases it reverts "upgrades" that were "too new" for the
>>>>>>>> overall xorg.
>>>>>>>> There were no bug fixes, things went smoothly.
>>>>>>> Both series for updates to automake infrastructure for this
>>>>>>> project are
>>>>>>> Acked-by: Eric Anholt <eric at anholt.net>
>>>>>>> I think I cribbed from xf86-video-intel when I originally did this
>>>>>>> stuff, and I didn't mean for it to be gratuitously different
>>>>>>> from our
>>>>>>> other projects, as I recall.
>>>>>> Things evolved gradually over the last 3 years to arrive at the
>>>>>> configuration we have today. I provided explanations so the
>>>>>> changes do
>>>>>> not appear to be gratuitous.
>>>>>> I noticed the system_routine configuration is rather convoluted. I
>>>>>> prototyped a formal automake Makefile.am and it simplifies things
>>>>>> quite
>>>>>> a bit, all the way up to configure.ac. As it is now, the
>>>>>> system_routine
>>>>>> does not build from a tarball due to a missing Makefile.
>>>>>> I'd need a little help to do a better job. I suppose "sr" is the
>>>>>> system
>>>>>> routine; how would this get used by someone who installed the
>>>>>> package
>>>>>> from a distro? I am wondering which files to install from this
>>>>>> subdir.
>>>>> The system_routine/debugger stuff is from Ben Widawsky. Adding him to
>>>>> cc so he can join the fun.
>>>>> -Daniel
>>>>> -- 
>>>>> Daniel Vetter
>>>>> daniel.vetter at ffwll.ch - +41 (0) 79 365 57 48 - http://blog.ffwll.ch
>>>> Wow, you're my hero. I spent at least a full day trying to get it
>>>> working with autotools, and just gave up.
>>>> The first question which must be answered is: does anyone still want
>>>> this? Maybe I didn't advertise the feature well enough, but nobody
>>>> seemed interested. To remind everyone, this is a HW-supported
>>>> feature to
>>>> do shader debugging on the GEN EUs.
>>>> As of now, the debugger directory has nothing which should be
>>>> installed
>>>> since my mesa patches never made it upstream (i.e. the debugger is
>>>> prototype/instructional only). Carl Worth volunteered to work on it a
>>>> bit, but since there were more promising tools to be developed, I
>>>> think
>>>> it got dropped on the floor. Honestly, even a successful build really
>>>> shouldn't be a requirement for the i-g-t package as of this instant. I
>>>> just wanted to have something in place so next time someone wants
>>>> to try
>>>> doing shader debugging on intel platforms, my hard work can be reused.
>> Now that the module is posted on x.org web and picked-up by a distro,
>> I'd like to avoid bug reports.
>>>> Now assuming you do want to get it working properly after reading
>>>> that...
>>>> The package relies on python3 and intel-gen4asm to assemble the
>>>> system_routine.
>>>> (http://cgit.freedesktop.org/xorg/app/intel-gen4asm/).
>>>> This was supposed to go away as the "compilation" was going to move to
>>>> mesa (though I personally preferred a discrete SR, others were
>>>> opposed).
>>>> In any case, the dependency is there now.
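A configure-time check for those two tools could look roughly like this (a hypothetical configure.ac fragment, not the actual intel-gpu-tools one; the variable and conditional names are made up for illustration):

```
# Only needed when regenerating the system routine from source;
# a tarball build could fall back to shipped pre-generated files.
AC_CHECK_PROG([PYTHON3], [python3], [python3])
AC_CHECK_PROG([INTEL_GEN4ASM], [intel-gen4asm], [intel-gen4asm])
AM_CONDITIONAL([BUILD_SR],
               [test -n "$PYTHON3" && test -n "$INTEL_GEN4ASM"])
```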
>> I noticed. That can be handled in configure. I can 'dist' the generated
>> .c files so one can build from a tarball without having python3/gen4asm.
>> We do this for lex & yacc, for example.
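The lex/yacc-style approach of shipping generated sources in the tarball might be sketched like this (a hypothetical Makefile.am fragment; the file names are guesses based on the directory layout discussed in this thread):

```makefile
# Distribute the generated sources so a tarball build needs
# neither python3 nor intel-gen4asm, the same way automake
# already ships lex/yacc output.
BUILT_SOURCES = sr_blob.c
EXTRA_DIST = $(BUILT_SOURCES) sr.g4a pre_cpp.py
```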
>>>> debugger/debug_rdata.c - help debug the debugger, shouldn't be
>>>> installed
>>>> debugger/system_routine/pre_cpp.py - roll my own preprocessor which
>>>> evaluates defines. Shouldn't be installed. Would prefer a better
>>>> solution here, but couldn't find one.
>>>> debugger/system_routine/eviction_macro.c - generate a lot of
>>>> repetitive
>>>> code for the sr. Shouldn't be installed. Not even relevant on Gen7+.
>>>> debugger/eudb - the client side debugger (equivalent of gdb). Must be
>>>> run as root. Probably would be an install target.
>>>> debugger/system_routine/sr.* - There are a few options here and it
>>>> depends on how mesa wanted to use it. One option is a binary blob
>>>> to be
>>>> read in at runtime. Another is to take the bytes as an array and build
>>>> it in to mesa (this was what the final mesa patches had). The most
>>>> reusable from mesa side would probably be to install the binary
>>>> somewhere, and mesa could just suck it in at runtime.
>> How about "test" and "helper"? The "helper" seems unfinished. Worth
>> keeping? I assume they wouldn't be installed either. The "test" data
>> object has the same name as the /usr/bin/test command.
> The only purpose of helper was to get output not in a tempfile. This
> was useful for catching bugs in intel-gen4asm, but probably shouldn't
> be distributed. It's fine with me if you remove this target completely.
In my prototype I did not use temp files (they were filling my /tmp) but
used several rules, each step being saved in a separate file. I can keep
it this way and remove "helper". Anyone who, like me, is discovering what
is being done will then have all the intermediate files to look at.
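That stepwise layout might be expressed with rules along these lines (hypothetical; the intel-gen4asm invocation and file names are assumptions, check the assembler's own documentation for the real flags):

```makefile
# Each step writes its output to a named file rather than a tempfile,
# so every intermediate form can be inspected afterwards.
sr.g4a.pp: sr.g4a pre_cpp.py
	$(PYTHON3) pre_cpp.py sr.g4a > sr.g4a.pp
sr.obj: sr.g4a.pp
	$(INTEL_GEN4ASM) -o sr.obj sr.g4a.pp
```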
> Test.g4a is the simplest possible system routine we can run on the GPU
> without actually hanging the system. Rename as you like. It's really
> useful to have around, but also should be fine to just remove this
> target. Anyone doing serious enough development can easily regenerate it.
> Keep in mind, though, that both test and helper will not generate ELF
> files, but rather binaries to run on the GPU.
>>>> As a side note, I'm willing to improve on this stuff if I found some
>>>> users of the feature. It sort of died a lonely death unfortunately.
>> I'll submit a patch which reflects your spec above. This will preserve
>> existing work. Additional work can be done afterwards. I'll test that it
>> all builds so that we don't get bug reports.
>> One question on the sr.*. I suppose it is only for Linux, but would it
>> not be the same code being generated on any and all Linux distros?
> The sr code is specific to the GPU. We would/could have many per GEN.
> Although with the other plan it would all be in the mesa IR.
> Now that I think about it, perhaps there should be a directory of GEN
> binaries (maybe libva does something similar?). Just a thought.
>> Thanks
> Thank you!

More information about the xorg-devel mailing list