[Mesa-dev] [RFC 1/2] gallium: add renderonly driver

Christian Gmeiner christian.gmeiner at gmail.com
Sun Oct 18 12:41:29 PDT 2015


Hi Thierry,

Thanks for reviewing these patches.

2015-10-16 15:31 GMT+02:00 Thierry Reding <thierry.reding at gmail.com>:
> Hi Christian,
>
> First off, thanks for reviving this effort. It's been one of the things
> that I've had nagging at me for much too long and I think it needs to be
> solved. So I'm hopeful that the more people we get looking at this the
> more likely it will be to come up with a solution that works well for
> everyone.
>

Yeah

> That said, I don't agree with the approach you've chosen here. I'll try
> to clarify why below.
>
> On Sun, Oct 11, 2015 at 05:09:21PM +0200, Christian Gmeiner wrote:
>> This commit adds a generic renderonly driver library, which fulfills
>> the requirements for tegra and etnaviv. As a result it is possible to
>> run unmodified egl software directly (without any compositor) on
>> supported devices.
>
> Technically this isn't a library but rather a midlayer. There's a subtle
> difference, but the implications are what concerns me.
>

Yes, the term midlayer fits better.

> Back when I wrote the original driver for Tegra/Nouveau I also looked
> into possibilities to make this more generic. Since I know how bad mid-
> layers can be (from kernel experience) I shied away from something like
> this early on. What I tried to do next was abstract away enough to make
> this usable by more than just a single driver. Unfortunately the end
> result was that not much could be reused, so drivers ended up still
> having to implement all of the pipe_* objects, only to call generic
> functions. Most of the code needed in the various callbacks ended up not
> being much more than just a single line, so the gains from a helper
> library weren't very big.
>

If I am not completely wrong, the ddebug and trace gallium drivers use the
same concept: we can create a chain of gallium drivers, where every
component can do its own special handling and then either call, or not call,
the next driver's function in the chain.
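
Roughly the wrapping pattern I have in mind, as a minimal sketch (the names
here are made up just for illustration):

struct wrapped_screen {
   struct pipe_screen base;  /* our vtable, filled with wrapper functions */
   struct pipe_screen *next; /* the next (wrapped) driver in the chain */
};

static int
wrapped_get_param(struct pipe_screen *pscreen, enum pipe_cap param)
{
   struct wrapped_screen *screen = (struct wrapped_screen *)pscreen;

   /* Do our special stuff here (logging, overriding, ...), or simply
    * forward the call to the next driver in the chain. */
   return screen->next->get_param(screen->next, param);
}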

> Another reason why I think this level of abstraction doesn't gain us
> much is that we already have a good level of abstraction, which is
> Gallium. I realize that implementing only the skeleton for a full

Do you see other ways than the midlayer approach to better support renderonly
GPUs? A tighter integration into Gallium?

> Gallium driver is rather complicated, but that's due to the fact that
> graphics drivers are complex beasts.
>
> That said, I think for some areas it might be beneficial to have helpers
> to reduce the amount of duplication. However I think at this point in
> time we haven't had enough real-world exposure for this kind of driver
> to know what the requirements are. For that reason I think it is
> premature to use a generic midlayer such as this. Yes, I know that the
> alternative is roughly 2000 lines of code per driver, but on one hand
> that's nothing compared to the amount of code required by a proper GPU
> driver and on the other hand this will (ideally) be temporary until we
> get a better picture of where things need to go. At which point it may
> become more obvious how we can solve the boilerplate problem while at
> the same time avoiding the restrictions imposed by the midlayer.
>

For the moment I have no problem going that way, but I am not sure how long
we should wait to collect the needed requirements. Do you know of any other
renderonly GPU coming in the near future (with open-source drivers)?

I am really no expert in this area, but the most interesting part of a
renderonly GPU midlayer is "how do we display the rendering result?".
So we need to support different tiling formats, and the case where an
intermediate buffer is needed (for IP combinations where the GPU and
scanout formats are not compatible).

For etnaviv we need to support all of these cases (with the GC2000+ coming in
the next months we can render into the scanout buffer directly, same as
Tegra).
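
To make the two cases concrete, here is a rough sketch of the scanout
resource creation I have in mind (all names and hooks here are made up for
illustration; this is not what the patch currently implements):

static struct pipe_resource *
renderonly_create_scanout_resource(struct renderonly_screen *screen,
                                   const struct pipe_resource *templ,
                                   struct pipe_resource *dumb)
{
   if (screen->ops->tiling_compatible &&
       screen->ops->tiling_compatible(screen, templ)) {
      /* Case 1: the scanout hardware understands the GPU tiling
       * (GC2000+, Tegra): set up tiling on the imported dumb buffer
       * and render into it directly. */
      screen->ops->setup_tiling(screen, dumb);
      return dumb;
   }

   /* Case 2: incompatible tiling: render into a private tiled
    * resource and resolve-blit it onto the dumb buffer before it is
    * handed over for scanout. */
   return screen->gpu->resource_create(screen->gpu, templ);
}

In the second case we would also have to remember the dumb buffer somewhere,
so that the flush path knows where to blit to.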

I know that it is quite hard to design a good API/midlayer/whatever that is
future-proof for the next 1-2 years without (bigger) refactoring work. But at
some point we need to jump into the cold water, start designing it and make
use of it.
I am open to any discussion about software architecture and requirements.

>> In every use case we import a dumb buffer from the scanout GPU into
>> the renderonly GPU.
>>
>> If the scanout hardware supports the tiling format used by the
>> renderonly GPU, a driver can define a function which is used to 'set up'
>> the needed tiling on that imported buffer. This function gets called
>> during rendertarget resource creation.
>>
>> If the scanout hardware does not support the used tiling format, we need
>> to create an extra rendertarget resource for the renderonly GPU.
>> During XXX we blit the renderonly rendertarget onto the imported dumb
>> buffer.
>>
>> We assume that the renderonly driver provides a blit function that is
>> capable of resolving the tiled buffer into an untiled one.
>
> I understand that there's a want to eliminate the amount of boilerplate,
> but I think this approach of using a midlayer has several flaws. One of
> the typical pitfalls with a midlayer such as this is that it has the
> potential to grow into an unmaintainable mess. Granted, this currently
> doesn't look all that bad, but that's primarily because it supports only
> two types of devices. I suspect that the more devices we add, the more
> hooks and quirks we'll need. Every combination of GPU and display is
> likely going to have their own specialties that need to be handled and
> which are beyond simple things like the tiling format.

Do you have some examples?

>
> We also know that there are issues with the current approach (EGL
> clients in Weston don't properly display). It's unknown what the reason
> for this is and it may require largish changes to the architecture to
> fix it.
>

That is something I am currently looking into. For me this is also a blocker,
but once it gets fixed I would say the architecture works.

> For all of the above reasons I think it'd be better to live with a
> little boilerplate for now and refactor things as they become obvious
> candidates for refactoring.
>

Okay. So I can rename the renderonly driver to imx and we are all happy :)

> [...]
>> diff --git a/src/gallium/drivers/renderonly/renderonly_screen.c b/src/gallium/drivers/renderonly/renderonly_screen.c
> [...]
>> +static const char *
>> +renderonly_get_vendor(struct pipe_screen *pscreen)
>> +{
>> +     return "renderonly";
>> +}
>
> I don't think this is going to do us much good. Applications may want to
> know more precisely what kind of device they're running on and change
> behaviour accordingly.
>

Yep, that is correct. Let's return a combination of the scanout and render
GPU vendor strings.
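
Something like this, as a sketch (combined_vendor and scanout_vendor are
hypothetical fields; the scanout side's name could e.g. be taken from
drmGetVersion() on the scanout fd):

static const char *
renderonly_get_vendor(struct pipe_screen *pscreen)
{
   struct renderonly_screen *screen = to_renderonly_screen(pscreen);

   /* Build the combined string once and cache it in the screen. */
   if (!screen->combined_vendor[0])
      snprintf(screen->combined_vendor, sizeof(screen->combined_vendor),
               "%s (scanout) + %s (render)",
               screen->scanout_vendor,
               screen->gpu->get_vendor(screen->gpu));

   return screen->combined_vendor;
}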

>> +static void renderonly_screen_destroy(struct pipe_screen *pscreen)
>> +{
>> +     struct renderonly_screen *screen = to_renderonly_screen(pscreen);
>> +
>> +     screen->gpu->destroy(screen->gpu);
>> +     free(pscreen);
>> +}
>> +
>> +static int
>> +renderonly_screen_get_param(struct pipe_screen *pscreen, enum pipe_cap param)
>> +{
>> +     struct renderonly_screen *screen = to_renderonly_screen(pscreen);
>> +
>> +     return screen->gpu->get_param(screen->gpu, param);
>> +}
>>
>> +static float
>> +renderonly_screen_get_paramf(struct pipe_screen *pscreen, enum pipe_capf param)
>> +{
>> +     struct renderonly_screen *screen = to_renderonly_screen(pscreen);
>> +
>> +     return screen->gpu->get_paramf(screen->gpu, param);
>> +}
>
> I can easily imagine cases where the drivers might want to override some
> of these parameters rather than simply propagate the result from the GPU
> driver.
>

Do you have some examples?

>> +static boolean
>> +renderonly_screen_is_format_supported(struct pipe_screen *pscreen,
>> +                              enum pipe_format format,
>> +                              enum pipe_texture_target target,
>> +                              unsigned sample_count,
>> +                              unsigned usage)
>> +{
>> +     struct renderonly_screen *screen = to_renderonly_screen(pscreen);
>> +
>> +     return screen->gpu->is_format_supported(screen->gpu, format, target,
>> +                                             sample_count, usage);
>> +}
>
> Same here. I'm almost certain that GPUs will support more formats than
> the display hardware (in general). Also I guess this depends a lot on
> the usage parameter as well.
>

Ok. I can come up with a struct renderonly_ops extension to support that.
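
Something along these lines (a sketch; both hooks would be optional, and
when they are NULL the midlayer simply forwards to the render GPU as it
does now):

struct renderonly_ops {
   /* ... existing hooks ... */

   /* Let the driver pair restrict formats, e.g. to what the scanout
    * engine can actually display when usage contains PIPE_BIND_SCANOUT
    * or PIPE_BIND_DISPLAY_TARGET. */
   boolean (*is_format_supported)(struct renderonly_screen *screen,
                                  enum pipe_format format,
                                  enum pipe_texture_target target,
                                  unsigned sample_count,
                                  unsigned usage);

   /* The same idea for caps that depend on the GPU/display combination. */
   int (*get_param)(struct renderonly_screen *screen,
                    enum pipe_cap param, int gpu_value);
};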

> I know that this is a copy of what I had in my original proposal, but I
> fully admit that it wasn't much more than a proof of concept with quite
> a few rough edges to work out.
>

Mine is also more of a proof of concept, but I want to get it merged
in some form :)

Greets
--
Christian Gmeiner, MSc

https://soundcloud.com/christian-gmeiner

