[Mesa-dev] [PATCH] mesa gallium: use compute shaders for vaapi blit

Ilia Mirkin imirkin at alum.mit.edu
Wed Apr 3 13:57:43 UTC 2019


On Wed, Apr 3, 2019 at 9:36 AM Marek Olšák <maraeo at gmail.com> wrote:
>
> On Wed, Apr 3, 2019 at 9:06 AM Ilia Mirkin <imirkin at alum.mit.edu> wrote:
>>
>> On Wed, Apr 3, 2019 at 8:38 AM Marek Olšák <maraeo at gmail.com> wrote:
>> >
>> > On Tue, Apr 2, 2019 at 2:14 PM Eric Anholt <eric at anholt.net> wrote:
>> >>
>> >> Ilia Mirkin <imirkin at alum.mit.edu> writes:
>> >>
>> >> > Shouldn't this sort of decision be left up to the driver? If the
>> >> > driver would like to use CS for blits, fine, but why not let it
>> >> > blit in the most optimal way possible rather than forcing it to
>> >> > use a compute shader?
>> >>
>> >> Yeah, commit messages require an explanation of why a change is being
>> >> made.
>> >
>> >
>> > We plan to create vaapi contexts with PIPE_CONTEXT_COMPUTE_ONLY for better GPU multitasking.
>> >
>> > RadeonSI uses async compute queues if PIPE_CONTEXT_COMPUTE_ONLY is set, so it can't do any graphics stuff, not even blit. (pipe_context::blit is NULL)
>>
>> Makes sense. Sounds like one of those would be a better condition than
>> the mere existence of compute support then?
>
>
> Or we can add PIPE_CAP_PREFER_COMPUTE_BLIT as a performance hint.
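
(Presumably that cap would be consulted by the state tracker roughly like
this. A sketch only: PIPE_CAP_PREFER_COMPUTE_BLIT is just the name proposed
above, and compute_blit() is a hypothetical frontend helper, not an
existing function.)

    if (screen->get_param(screen, PIPE_CAP_PREFER_COMPUTE_BLIT)) {
       /* blit with a frontend-provided compute shader */
       compute_blit(pipe, &info);
    } else {
       /* let the driver blit however it sees fit */
       pipe->blit(pipe, &info);
    }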

When would a driver set that, and when would a state tracker respect it?

As I see it, if the driver prefers compute blits, it can just do that
in its ->blit impl. If the state tracker created a
PIPE_CONTEXT_COMPUTE_ONLY context, then it can also decide not to use
->blit(). I don't see what the CAP adds, but perhaps I'm missing
something.
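
In other words, something like the following already covers both cases
without a new cap. (A sketch, not actual Mesa code; si_blit(),
si_should_use_compute_blit(), si_compute_blit(), si_gfx_blit() and
vl_compute_blit() are illustrative names.)

    /* Driver side: a driver that prefers compute blits just implements
     * pipe_context::blit that way internally. */
    static void si_blit(struct pipe_context *ctx,
                        const struct pipe_blit_info *info)
    {
       if (si_should_use_compute_blit(ctx, info))
          si_compute_blit(ctx, info);   /* driver's compute path */
       else
          si_gfx_blit(ctx, info);       /* driver's graphics path */
    }

    /* State tracker side: a frontend that created its context with
     * PIPE_CONTEXT_COMPUTE_ONLY already knows the driver may not expose
     * graphics blits (on radeonsi pipe_context::blit is NULL), so it
     * can go straight to its own compute shader path. */
    if (!pipe->blit)
       vl_compute_blit(pipe, &info);
    else
       pipe->blit(pipe, &info);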

  -ilia

