drm/scheduler for vc5

Eric Anholt eric at anholt.net
Mon Apr 2 18:49:15 UTC 2018


Christian König <christian.koenig at amd.com> writes:

> Hi Eric,
>
> nice to see that the scheduler gets used more and more.
>
> The feature you need to solve both your binning/rendering and your
> MMU problem is dependency handling. See the "dependency" callback of
> the backend operations.
>
> With this callback the driver can return dma_fences which need to
> signal (or at least be scheduled, if they target the same ring
> buffer/FIFO) before the job may run.
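
For vc5 that callback could be as simple as handing back one fence at a
time from a per-job list collected at submit time.  A minimal sketch
(vc5_job, to_vc5_job() and the deps[] array are made-up names, not real
vc5 code):

static struct dma_fence *
vc5_job_dependency(struct drm_sched_job *sched_job,
                   struct drm_sched_entity *s_entity)
{
    struct vc5_job *job = to_vc5_job(sched_job);

    /* The scheduler keeps calling this until it gets NULL back,
     * waiting on each returned fence before it will consider the
     * job runnable.
     */
    if (job->num_deps)
        return job->deps[--job->num_deps];

    return NULL;
}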
>
> Now you need a dma_fence as the result of your run_job callback for
> the binning step anyway. So when you return this fence from the
> binning step as a dependency of your rendering step, the scheduler
> does exactly what you want, i.e. it won't start the rendering before
> the binning is finished.
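
Concretely, for the binning/rendering split that just means stashing
the binning job's scheduler fence when the render job is built.  A
sketch, using the same made-up vc5_job fields as above:

    /* At submit time: make the render job wait on the binning job's
     * "finished" fence, which signals once the fence returned from
     * the binning run_job() has signaled.
     */
    render->deps[render->num_deps++] =
        dma_fence_get(&bin->base.s_fence->finished);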
>
>
> The same idea can be used for the MMU switch. As an example of how to
> do this, see how the dependency callback is implemented in
> amdgpu_job_dependency():
>>     struct dma_fence *fence = amdgpu_sync_get_fence(&job->sync, &explicit);
>
> First we get the "normal" dependencies from our sync object (a
> container for fences).
>
> ...
>>     while (fence == NULL && vm && !job->vmid) {
>>         struct amdgpu_ring *ring = job->ring;
>>
>>         r = amdgpu_vmid_grab(vm, ring, &job->sync,
>>                      &job->base.s_fence->finished,
>>                      job);
>>         if (r)
>>             DRM_ERROR("Error getting VM ID (%d)\n", r);
>>
>>         fence = amdgpu_sync_get_fence(&job->sync, NULL);
>>     }
>
> If we don't have any more "normal" dependencies left, we call into
> the VMID subsystem to allocate one of the hardware's 16 MMU contexts
> (a VMID) for the job.
>
> This call will pick a VMID and remember that the job's process is now
> the owner of this VMID. If the VMID previously didn't belong to the
> current job's process, all fences of the old process are added to the
> job->sync object again.
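
Translated to a device with a single translation context, that pattern
might look like the sketch below (all the vc5_* names are hypothetical,
and locking and fence refcounting around the owner bookkeeping are
omitted):

static struct dma_fence *
vc5_job_dependency(struct drm_sched_job *sched_job,
                   struct drm_sched_entity *s_entity)
{
    struct vc5_job *job = to_vc5_job(sched_job);
    struct vc5_dev *vc5 = job->vc5;

    /* Drain the "normal" dependencies first. */
    if (job->num_deps)
        return job->deps[--job->num_deps];

    /* Then claim the single MMU context.  If another process owned
     * it, return the old owner's last fence so the page-table switch
     * doesn't happen under a still-running job.
     */
    if (!job->mmu_claimed) {
        job->mmu_claimed = true;
        if (vc5->mmu_owner != job->file_priv) {
            vc5->mmu_owner = job->file_priv;
            return vc5->mmu_last_fence;    /* may be NULL */
        }
    }

    return NULL;
}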

This makes some sense when you have many VMIDs and reuse won't happen
very often.  I'm concerned that when I effectively have one VMID that I
need to keep swapping, we're creating a specific serialization of the
jobs at the time they're submitted to the kernel (the dependency()
callback) rather than at the time the scheduler decides it would like
to submit to the HW (the run_job() callback, after it has picked a job
based on priority).
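
The alternative would be to leave the switch to run_job(), which is
called only once the scheduler has actually picked the job by priority.
Another sketch, assuming the hardware is idle between jobs so the
switch can be done synchronously (vc5_mmu_switch() and vc5_hw_submit()
are hypothetical):

static struct dma_fence *vc5_job_run(struct drm_sched_job *sched_job)
{
    struct vc5_job *job = to_vc5_job(sched_job);
    struct vc5_dev *vc5 = job->vc5;

    /* run_job() is called in the order the scheduler chose, so the
     * MMU switch is decided at scheduling time, not submit time.
     */
    if (vc5->mmu_owner != job->file_priv) {
        vc5_mmu_switch(vc5, job->file_priv);
        vc5->mmu_owner = job->file_priv;
    }

    return vc5_hw_submit(job);    /* returns the HW "done" fence */
}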