[Intel-gfx] [PATCH 24/28] drm: use new iterator in drm_gem_plane_helper_prepare_fb v2

Christian König ckoenig.leichtzumerken at gmail.com
Tue Oct 19 15:51:38 UTC 2021


Am 19.10.21 um 16:30 schrieb Daniel Vetter:
> On Tue, Oct 19, 2021 at 03:02:26PM +0200, Christian König wrote:
>> Am 13.10.21 um 16:23 schrieb Daniel Vetter:
>>> On Tue, Oct 05, 2021 at 01:37:38PM +0200, Christian König wrote:
>>>> Makes the handling a bit more complex, but avoids the use of
>>>> dma_resv_get_excl_unlocked().
>>>>
>>>> v2: improve coding and documentation
>>>>
>>>> Signed-off-by: Christian König <christian.koenig at amd.com>
>>>> ---
>>>>    drivers/gpu/drm/drm_gem_atomic_helper.c | 13 +++++++++++--
>>>>    1 file changed, 11 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
>>>> index e570398abd78..8534f78d4d6d 100644
>>>> --- a/drivers/gpu/drm/drm_gem_atomic_helper.c
>>>> +++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
>>>> @@ -143,6 +143,7 @@
>>>>     */
>>>>    int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
>>>>    {
>>>> +	struct dma_resv_iter cursor;
>>>>    	struct drm_gem_object *obj;
>>>>    	struct dma_fence *fence;
>>>> @@ -150,9 +151,17 @@ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_st
>>>>    		return 0;
>>>>    	obj = drm_gem_fb_get_obj(state->fb, 0);
>>>> -	fence = dma_resv_get_excl_unlocked(obj->resv);
>>>> -	drm_atomic_set_fence_for_plane(state, fence);
>>>> +	dma_resv_iter_begin(&cursor, obj->resv, false);
>>>> +	dma_resv_for_each_fence_unlocked(&cursor, fence) {
>>>> +		/* TODO: We only use the first write fence here and need to fix
>>>> +		 * the drm_atomic_set_fence_for_plane() API to accept more than
>>>> +		 * one. */
>>> I'm confused, right now there is only one write fence. So no need to
>>> iterate, and also no need to add a TODO. If/when we add more write fences
>>> then I think this needs to be revisited, and ofc then we do need to update
>>> the set_fence helpers to carry an entire array of fences.
>> Well, it could be that I misunderstood you, but in your last explanation it
>> sounded like the drm_atomic_set_fence_for_plane() function needs fixing
>> anyway because a plane could have multiple BOs.
>>
>> So in my understanding what we need is a
>> drm_atomic_add_dependency_for_plane() function which records that a certain
>> fence needs to be signaled before a flip.
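
To make that idea a bit more concrete, here is a rough sketch of how such a 
helper could look. This is only an illustration, the helper doesn't exist 
yet, and accumulating the dependencies with a dma_fence_chain (from 
<linux/dma-fence-chain.h>) is just one possible implementation:

static int drm_atomic_add_dependency_for_plane(struct drm_plane_state *state,
                                               struct dma_fence *fence)
{
        struct dma_fence_chain *chain;

        if (!fence)
                return 0;

        /* The first dependency behaves exactly like
         * drm_atomic_set_fence_for_plane().
         */
        if (!state->fence) {
                state->fence = dma_fence_get(fence);
                return 0;
        }

        chain = dma_fence_chain_alloc();
        if (!chain)
                return -ENOMEM;

        /* Chain the new fence to the already recorded ones; waiting on the
         * chain head then waits for all of them. The seqno doesn't matter
         * for a pure wait-for-all chain.
         */
        dma_fence_chain_init(chain, state->fence, dma_fence_get(fence), 1);
        state->fence = &chain->base;
        return 0;
}

Drivers would then call that once per BO of the framebuffer and per write 
fence instead of overwriting the single plane fence.
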
> Yeah, that's another issue, but in practice there's no libva that decodes
> into planar yuv with different fences between the planes, so it's not a bug
> in practice.
>
> But this is entirely orthogonal to you picking up the wrong fence here if
> there's no exclusive fence set:
>
> - old code: Either pick the exclusive fence, or no fence if the exclusive
>    one is not set.
>
> - new code: Pick the exclusive fence or the first shared fence

Huh, what?

We use "dma_resv_iter_begin(&cursor, obj->resv, *false*);" here which 
means that only the exclusive fence is returned and no shared fences 
whatsoever.

My next step is to replace the boolean with a set of enums describing the 
use case. I hope that will make it much clearer what's going on here.
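
Roughly something like this, just to show the direction; the enum name and 
values are only placeholders, nothing is final yet:

enum dma_resv_usage {
        DMA_RESV_USAGE_WRITE,   /* implicitly synced writes only */
        DMA_RESV_USAGE_READ,    /* writes plus implicitly synced reads */
};

The call site in prepare_fb would then state explicitly that it only cares 
about writes instead of passing an opaque "false":

        /* No dependency at all when there is no write fence */
        fence = NULL;

        dma_resv_iter_begin(&cursor, obj->resv, DMA_RESV_USAGE_WRITE);
        dma_resv_for_each_fence_unlocked(&cursor, fence) {
                dma_fence_get(fence);
                break;
        }
        dma_resv_iter_end(&cursor);
        drm_atomic_set_fence_for_plane(state, fence);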

Christian.

> New behaviour is busted, because scanning out and reading from a buffer at
> the same time (for the next frame, e.g. to copy over damaged areas or some
> other tricks) is very much a supported thing. Atomic _only_ wants to look
> at the exclusive fence slot, which means "there is an implicitly synced
> write to this buffer". Implicitly synced reads _must_ be ignored.


>
> Now amdgpu doesn't have this distinction in its uapi, but many drivers do.
> -Daniel
>
>> Support for more than one write fence then comes totally naturally.
>>
>> Christian.
>>
>>> -Daniel
>>>
>>>> +		dma_fence_get(fence);
>>>> +		break;
>>>> +	}
>>>> +	dma_resv_iter_end(&cursor);
>>>> +	drm_atomic_set_fence_for_plane(state, fence);
>>>>    	return 0;
>>>>    }
>>>>    EXPORT_SYMBOL_GPL(drm_gem_plane_helper_prepare_fb);
>>>> -- 
>>>> 2.25.1
>>>>


