[Mesa-dev] [PATCH 2/2] draw: inject frontface info into wireframe outputs

Roland Scheidegger sroland at vmware.com
Thu Aug 1 13:02:39 PDT 2013


On 01.08.2013 21:11, Jose Fonseca wrote:
> 
> 
> ----- Original Message -----
>>>> +   if (draw_will_inject_frontface(lp_context->draw) &&
>>> I think it's annoying that you have to do these calls to determine if
>>> there's a valid frontface here for each line instead of just once per
>>> draw call, but it doesn't seem easy to avoid.
>>
>> Yea, there's no trivial way of avoiding it.
>>
>>> Also, no love for llvmpipe point face? I realize d3d10 doesn't require
>>> it but OpenGL (and IIRC d3d9) do.
>>
>> I didn't know of any tests for the points, and we only care about lines
>> right now. It's just four extra lines of code or so, so I can trivially
>> add it, but I don't have anything to test it with.
>>
>>> Looks like quite a heavy interface (and it's sort of silly to allocate
>>> 128 bits in the vertex data (so actually twice that for one line) for
>>> 1 bit of information, but given that all the data passed on to the
>>> line/point funcs is float4 I don't really see any other easy way
>>> either), but it all seems necessary unfortunately. I guess another
>>> option would be to always pass the face info along with the vertex
>>> data, no matter what, for all primitives to the point/line/tri funcs
>>> (which would mean all those additional calls for setting up outputs,
>>> determining if there's a valid frontface etc. could go, along with the
>>> storage needed), but I'm not really thrilled about that idea either
>>> (passing it for tris, so it doesn't have to be recalculated, may or
>>> may not be a good idea).
>>
>> Yes, plus then we'd need a brand new pipeline stage that is always run
>> and is largely useless for the vast majority of rendering. It's sort of
>> a lose-lose scenario. The only thing that is clear is that we have to
>> pass the data along with the shader outputs; everything else is messy
>> glue to make that possible.
> 
> The only other thing I can think of would be to modify the draw module's
> unfilled stage so that it further decomposes the wireframe's lines into
> face-preserving triangles when it notices that the fragment shader reads
> the face register. The draw module already has stages to decompose lines
> into triangles, so it could just be a matter of patching things up.
> Performance wouldn't be as good, but it would avoid adding complexity to
> the draw->driver interface.
> 
> Just a thought. The patch looks good too.
> 

I think ideally we would change the interface quite radically, so we
don't just have a vertex_layout but also a prim_layout (whose entries
would certainly be scalars, not vec4s), and hence the point/tri/line
funcs would receive not just the vertex data but also the prim data,
passed separately. Apart from face, prim_id certainly belongs to that
category too.
Right now we're "back-plugging" those per-prim values into per-vertex
ones - that is, we're really trying to shove d3d10 semantics into legacy
(both gl and d3d9) ones, which only really had per-vertex values (they
did have front face, true enough, but it obviously never worked right
for exactly these special cases). But these values neither have a
meaning nor really exist at the vertex level. The only benefit seems to
be that no interface change was needed, and at setup we can just pretend
it's like any other (constant-interpolated) vertex data. For a single
tri those two are going to require 24 32-bit values (2 vec4 outputs per
vertex x 3 vertices) even though it's really just two scalars.
In any case I suspect more interface changes would be necessary for even
newer things anyway (d3d11 obviously has more semantics for hull/domain
shaders). And I haven't really thought through whether this would work
nicely :-).
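
Just to make that a bit more concrete, roughly something like this (all
names here are made up for illustration, this is not the existing
draw/setup interface):

   /* hypothetical per-primitive layout, analogous to vertex_layout but
    * with scalar entries */
   enum prim_semantic {
      PRIM_SEMANTIC_FACE,
      PRIM_SEMANTIC_PRIMID
   };

   struct prim_layout {
      unsigned num_values;
      enum prim_semantic semantic[2];
   };

   /* the point/line/tri funcs would then get the per-prim values
    * alongside the per-vertex data, instead of having them
    * back-plugged into extra vec4 vertex outputs */
   struct setup_context;
   void setup_tri(struct setup_context *setup,
                  const float (*v0)[4],
                  const float (*v1)[4],
                  const float (*v2)[4],
                  const uint32_t *prim_values);  /* face, prim_id, ... */

That way the face bit and prim_id wouldn't have to occupy a full vec4
slot per vertex just to reach setup.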

Roland

