drm/scheduler for vc5

Christian König christian.koenig at amd.com
Wed Apr 4 07:13:56 UTC 2018


On 04.04.2018 at 01:08, Eric Anholt wrote:
> Christian König <christian.koenig at amd.com> writes:
>
>> Hi Eric,
>>
>> nice to see that the scheduler gets used more and more.
>>
>> The feature you need to solve both your binning/rendering and your
>> MMU problem is dependency handling. See the "dependency" callback
>> of the backend operations.
>>
>> With this callback the driver can return dma_fences which need to
>> signal (or at least be scheduled, if they target the same ring
>> buffer/fifo) before the job may run.
>>
>> Now you need a dma_fence as the result of your run_job callback for
>> the binning step anyway. So when you return this fence from the
>> binning step as a dependency for your rendering step, the scheduler
>> does exactly what you want, i.e. it does not start the rendering
>> before the binning is finished.
> It looks like in order to use the bin's fence returned from run_job,
> render first needs to depend on exec->bin.base.s_fence->scheduled so
> that run_job has been called.  Is there any reason not to just depend on
> exec->bin.base.s_fence->finished, instead?  Finished will be signaled
> basically immediately after the run_job fence completes, right?

Yes, exec->bin.base.s_fence->finished should be sufficient as well.
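
To make that concrete, here is a minimal sketch of how the render job
could depend on the binning job. The vc5_exec layout and the
vc5_render_dependency name are hypothetical; only struct drm_sched_job,
struct drm_sched_entity and the dependency callback signature come from
the scheduler API:

#include <drm/gpu_scheduler.h>
#include <linux/dma-fence.h>

/* Hypothetical per-submit state with one scheduler job for binning
 * and one for rendering. */
struct vc5_exec {
	struct drm_sched_job bin;
	struct drm_sched_job render;
	/* Grabbed at submit time, see below:
	 * exec->bin_done = dma_fence_get(&exec->bin.s_fence->finished);
	 */
	struct dma_fence *bin_done;
};

/* The scheduler calls this until it returns NULL; every fence it
 * returns must signal before run_job is invoked for the job. */
static struct dma_fence *
vc5_render_dependency(struct drm_sched_job *sched_job,
		      struct drm_sched_entity *entity)
{
	struct vc5_exec *exec =
		container_of(sched_job, struct vc5_exec, render);

	/* Hand out the binning job's finished fence exactly once. */
	return xchg(&exec->bin_done, NULL);
}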

See, there are three fences involved in the scheduler:
1. The hardware fence returned by the run_job callback.

The scheduler registers on that one to be notified of completion, so
that it can schedule the next job.
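
As a sketch (the vc5 names are made up, the signature is the
scheduler's):

/* run_job pushes the job to the hardware fifo and returns the
 * hardware fence on which the scheduler then registers itself. */
static struct dma_fence *vc5_bin_run_job(struct drm_sched_job *sched_job)
{
	struct vc5_exec *exec =
		container_of(sched_job, struct vc5_exec, bin);

	/* Hypothetical helper that writes the job into the fifo and
	 * returns a dma_fence that signals on completion. */
	return vc5_push_bin_to_fifo(exec);
}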

If you use the timeout feature, it can happen that we push a job to the
hardware multiple times; we replace this fence each time we do so.
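
For illustration, a rough sketch of such a recovery path, assuming the
current drm_sched_hw_job_reset()/drm_sched_job_recovery() helpers; the
vc5 pieces are hypothetical:

static void vc5_gpu_reset(struct vc5_dev *vc5)	/* hypothetical */
{
	/* Detach the scheduler from the old hardware fences. */
	drm_sched_hw_job_reset(&vc5->bin_sched, NULL);

	vc5_hw_reset(vc5);			/* hypothetical */

	/* Re-push the unfinished jobs: run_job is called again and
	 * the new hardware fences replace the old ones. */
	drm_sched_job_recovery(&vc5->bin_sched);
}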

2. s_fence->scheduled, which is signaled when the scheduler has picked
up a job.

It is the first one to be signaled and generally means that the job has
entered the hardware fifo.

3. s_fence->finished, which is signaled when the underlying hardware
fence signals.

The difference from the hardware fence is that it is created much
earlier, during command submission.
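
That is what makes the first sketch above work: right after
drm_sched_job_init() the fence already exists and can be grabbed, long
before run_job has produced the hardware fence (field names as declared
in include/drm/gpu_scheduler.h):

	/* At submit time, after drm_sched_job_init(&exec->bin, ...): */
	exec->bin_done = dma_fence_get(&exec->bin.s_fence->finished);

	/* &exec->bin.s_fence->scheduled is available here as well. */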

I should probably write all this into some kind of documentation.

Regards,
Christian.

>
> Also, I hadn't quite followed your suggestion about MMU switching
> before.  Your trick is that you return a newly-generated dependency on
> the MMU switch as the final dependency, so that you only decide on
> serializing the MMU switch once you're ready to run and the scheduler
> is about to pick your job anyway.  This seems good to me.
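
A rough sketch of that trick (all vc5_* helpers and fields are
hypothetical, only the callback signature is the scheduler's):

static struct dma_fence *
vc5_job_dependency(struct drm_sched_job *sched_job,
		   struct drm_sched_entity *entity)
{
	struct vc5_job *job = to_vc5_job(sched_job);	/* hypothetical */
	struct dma_fence *fence;

	/* Normal dependencies first: BO fences, bin->render, ... */
	fence = vc5_job_next_dependency(job);		/* hypothetical */
	if (fence)
		return fence;

	/* Everything else has signaled, so the scheduler is about to
	 * run this job: only now decide whether an MMU switch is
	 * needed and, if so, return a freshly created fence for it. */
	if (!job->mmu_checked) {
		job->mmu_checked = true;
		return vc5_mmu_switch_fence(job);	/* may be NULL */
	}

	return NULL;
}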


