[PATCH] drm/panfrost: Implement per FD address spaces
Tomeu Vizoso
tomeu.vizoso at collabora.com
Fri Aug 9 07:53:14 UTC 2019
On 8/9/19 5:01 AM, Rob Herring wrote:
> On Thu, Aug 8, 2019 at 5:11 PM Alyssa Rosenzweig
> <alyssa.rosenzweig at collabora.com> wrote:
>>
>>> @@ -448,6 +453,7 @@ static irqreturn_t panfrost_job_irq_handler(int irq, void *data)
>>> }
>>>
>>> if (status & JOB_INT_MASK_DONE(j)) {
>>> + panfrost_mmu_as_put(pfdev, &pfdev->jobs[j]->file_priv->mmu);
>>> panfrost_devfreq_record_transition(pfdev, j);
>>> dma_fence_signal(pfdev->jobs[j]->done_fence);
>>> }
>>
>> Is the idea to switch AS's when an IRQ is fired corresponding to a
>> process with a particular address space? (Where do we switch back? Or
>> is that not how the MMU actually works here?)
>
> No. There are three states an AS can be in: free, allocated, and in use.
> When a job runs, it requests an address space and then marks it not in
> use when the job is complete (but it stays assigned). The first time thru, we
> find a free AS in the alloc_mask and assign the AS to the FD. Then the
> next time thru, we most likely already have our AS and we just mark it
> in use with a ref count. We need a ref count because we have multiple
> job slots. If the job/FD doesn't have an AS assigned and there are no
> free ones, then we pick an allocated one not in use from our LRU list
> and switch the AS from the old FD to the new one.
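The scheme Rob describes can be sketched in plain userspace C. Everything here is illustrative: the names (as_ctx, as_get, as_put, owner[], NUM_AS) are not the actual panfrost identifiers, the LRU is a crude timestamp rather than a kernel list, and a real implementation holds a lock around the whole path, as mentioned below.

```c
#include <assert.h>

#define NUM_AS 8                   /* e.g. 8 address spaces on T860 */

struct as_ctx {                    /* stands in for a per-FD file_priv */
	int as;                    /* assigned AS number, -1 if none */
	int refcount;              /* in-use count across job slots */
	unsigned long last_use;    /* crude LRU timestamp */
};

static struct as_ctx *owner[NUM_AS]; /* which FD owns each AS */
static unsigned long tick;

/* Acquire an AS for a context: reuse, take a free one, or reclaim LRU. */
static int as_get(struct as_ctx *ctx)
{
	if (ctx->as >= 0) {        /* fast path: AS already assigned */
		ctx->refcount++;
		ctx->last_use = ++tick;
		return ctx->as;
	}
	for (int i = 0; i < NUM_AS; i++)  /* look for a free AS */
		if (!owner[i]) {
			owner[i] = ctx;
			ctx->as = i;
			ctx->refcount = 1;
			ctx->last_use = ++tick;
			return i;
		}
	/* Reclaim: pick the least-recently-used AS that is not in use. */
	int lru = -1;
	for (int i = 0; i < NUM_AS; i++)
		if (owner[i]->refcount == 0 &&
		    (lru < 0 || owner[i]->last_use < owner[lru]->last_use))
			lru = i;
	if (lru < 0)
		return -1;         /* every AS is actively in use */
	owner[lru]->as = -1;       /* steal it from the old FD */
	owner[lru] = ctx;
	ctx->as = lru;
	ctx->refcount = 1;
	ctx->last_use = ++tick;
	return lru;                /* caller would now repoint the AS
	                              registers at ctx's page tables */
}

/* Job-done path: drop the in-use ref; the AS stays assigned to the FD. */
static void as_put(struct as_ctx *ctx)
{
	assert(ctx->refcount > 0);
	ctx->refcount--;
}
```

The refcount rather than a simple busy flag is what makes multiple job slots safe: two outstanding jobs from the same FD just bump the count to 2, and the AS only becomes reclaimable once both complete.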
>
> Switching an AS from one FD to another turns out to be quite simple.
> We simply update the AS registers to point to new page table base
> address and that's it.
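To make the "just update the registers" point concrete, here is a hedged sketch against a fake register file. The offsets follow the Midgard AS register layout (AS_TRANSTAB, AS_COMMAND at stride 0x40 from the MMU base), but the mmio[] backing store and gpu_write() helper are purely illustrative, not the driver's actual accessors.

```c
#include <stdint.h>

#define MMU_AS(as)          (0x2400 + ((as) << 6))
#define AS_TRANSTAB_LO(as)  (MMU_AS(as) + 0x00)
#define AS_TRANSTAB_HI(as)  (MMU_AS(as) + 0x04)
#define AS_COMMAND(as)      (MMU_AS(as) + 0x18)
#define AS_COMMAND_UPDATE   0x01

static uint32_t mmio[0x4000 / 4];  /* fake register file for the sketch */

static void gpu_write(uint32_t reg, uint32_t val)
{
	mmio[reg / 4] = val;
}

/* Switch address space 'as' to a new FD: repoint its page table base. */
static void as_switch(int as, uint64_t pgtable_base)
{
	gpu_write(AS_TRANSTAB_LO(as), pgtable_base & 0xffffffff);
	gpu_write(AS_TRANSTAB_HI(as), pgtable_base >> 32);
	gpu_write(AS_COMMAND(as), AS_COMMAND_UPDATE);  /* latch the change */
}
```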
>
>> Logically it seems sound, just armchair nervous about potential race
>> conditions with weird multithreading setups.
>
> But WebGL! :)
>
> I was worried too. It seems to be working pretty well though, but more
> testing would be good.
Soon we should be switching our Mesa CI to run dEQP tests concurrently,
which may well surface any such issues.
> I don't think there are a lot of usecases that
> use more AS than the h/w has (8 on T860), but I'm not sure.
Yeah, I think we'll often see more than 8 clients connected at the same
time, but they will very rarely all be submitting jobs simultaneously.
Cheers,
Tomeu
> I tried to come up with a lockless fastpath, but then just gave up and
> stuck a spinlock around the whole thing.
>>
>>> + /* Assign the free or reclaimed AS to the */
>>
>> to the....?
>
> FD
>
> Rob
>