Soliciting DRM feedback on latest DC rework

Harry Wentland harry.wentland at amd.com
Mon May 8 18:54:22 UTC 2017


Hi Daniel,

Thanks for taking the time to look at DC.

I had a couple more questions/comments in regard to the patch you posted 
on IRC: http://paste.debian.net/plain/930704

My impression is that this item is the most important next step for us:

>     From a quick glance I think what we want instead is:
>     - amdgpu_crtc_state and amdgpu_plane_state as following:
>
>     struct amdgpu_crtc_state {
>             struct drm_crtc_state base;
>             struct dc_stream;
>     };

Unfortunately as sketched here it doesn't quite mesh with what we 
currently have, which is:

struct stream {
	struct core_stream protected;
	...
};

struct core_stream {
	struct dc_stream public;
	...
};

struct dc_stream {
	...
};
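
For reference, DC walks back down these layers with container_of, something like this (macro quoted from memory, so treat it as approximate):

#define DC_STREAM_TO_CORE(dc_stream) \
	container_of(dc_stream, struct core_stream, public)

so dc_stream has to stay the innermost public type, which is why your sketch doesn't map onto it directly.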

Any objections to doing something like this instead:

#ifdef LINUX
#include "drm_crtc.h"
#endif

struct dc_stream {
#ifdef LINUX
	struct drm_crtc_state base;
#endif
	...
};

The ifdefs would then be stripped from the Linux copy of DC, while other 
OSes would simply build without the LINUX flag defined.
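
With that layout the usual embed-and-container_of pattern would still 
work on the DRM side. A rough sketch (the to_dc_stream helper is made 
up here for illustration):

static inline struct dc_stream *
to_dc_stream(struct drm_crtc_state *state)
{
	return container_of(state, struct dc_stream, base);
}

Our atomic_duplicate_state hook would then allocate a new dc_stream, 
copy the old one, and run __drm_atomic_helper_crtc_duplicate_state() 
on &stream->base.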

>     - validate_context->res_ctx->pool looks extremely fishy. In principle,
>       refcounting does not mesh with the duplicate, check, then
>       free-or-commit semantics of atomic. Are you sure this works
>       correctly when doing TEST_ONLY commits? igt or weston (with Daniel's
>       patches) are your friend (or enemy) here :-)
>
>       The common approach all other drivers implement:
>       - Same duplicate semantics for private objects as for core drm
>         objects. The private obj helpers will help with cutting down the
>         boiler-plate.
>       - No refcounts, instead an allocation bitmask in the object one up
>         in the hierarchy (usually global state, but sometimes also
>         crtc/stream).
>       - If you want to keep the current design, then you
>         must do a deep copy of pool (i.e. not just the struct, but also
>         every struct it points to that might change). The above
>         accomplishes that essentially, except in standard atomic patterns.
>       - You'll probably need a bunch of for_each_foo_in_context macros,
>         built using the private state helpers.
>
>       Trying to review correctness of atomic_check when private resources
>       are refcounted is real hard. The only case we have is vc4, and it's
>       kinda not pretty (it has its own manual rollback hacks in the
>       release functions). Going entirely with free-standing stuff is much
>       better.

Good points here. I think I'll have to get IGT running against our code 
with TEST_ONLY commits and see what happens. Ultimately we'll probably 
have to deep copy, one way or another. I don't really want any rollback 
stuff as that would be nearly impossible to maintain.
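
For the deep copy I'd picture something along these lines in the 
duplicate path (the pool layout below is made up, just to show the 
shape):

static struct dm_atomic_state *
dm_duplicate_state(struct dm_atomic_state *old)
{
	struct dm_atomic_state *new;

	new = kmemdup(old, sizeof(*new), GFP_KERNEL);
	if (!new)
		return NULL;

	/* Deep-copy the resource pool rather than refcounting it, so a
	 * TEST_ONLY check can throw the copy away without any rollback. */
	new->res_ctx.pool = kmemdup(old->res_ctx.pool,
				    sizeof(*old->res_ctx.pool), GFP_KERNEL);
	if (!new->res_ctx.pool) {
		kfree(new);
		return NULL;
	}

	return new;
}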

>     This shouldn't be a huge conceptual change in the DC design (it
>     already has checks for "is this unchanged" and then ignore that
>     object), just quite a bit more invasive than sprinkling for_each_*
>     macros all over the place. From a glance it could be that you'd get
>     80% there by having a for_each_changed_*_in_context set of macros,
>     with the current code everywhere else, and the "jumps over unchanged
>     obj because they're not in drm_atomic_state/dc_validation_context"
>     logic on drm atomic.

Yeah, this should mostly fit with the DC design. We'll evaluate it once 
we link DC objects to DRM state objects.
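
Once that linking is done, a first cut of those macros could look like 
this (stream_count/streams fields assumed):

#define for_each_stream_in_context(ctx, stream, i)		\
	for ((i) = 0; (i) < (ctx)->stream_count &&		\
	     ((stream) = (ctx)->streams[(i)], 1); (i)++)

with a _changed_ variant on top that skips streams whose state didn't 
move.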

> @@ -1640,6 +1644,8 @@ static void dm_crtc_helper_disable(struct drm_crtc *crtc)
>  {
>  }
>
> +/* no dummy funcs pls, counts everywhere */
> +

Does the comment apply to the preceding function or the next one? What 
do you mean by "counts everywhere"?
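
If the point is just that empty stubs should go, I assume we can leave 
the hook unset, provided the helpers really skip NULL hooks at that 
callsite. Sketch:

static const struct drm_crtc_helper_funcs amdgpu_dm_crtc_helper_funcs = {
	/* no .disable: if the helpers check for NULL here, the empty
	 * dm_crtc_helper_disable() stub can simply be dropped */
	.atomic_check = dm_crtc_helper_atomic_check,
};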

>  static int dm_crtc_helper_atomic_check(
>  	struct drm_crtc *crtc,
>  	struct drm_crtc_state *state)


> @@ -3077,6 +3102,15 @@ int amdgpu_dm_atomic_check(struct drm_device *dev,
>
>  		acrtc = to_amdgpu_crtc(crtc);
>
> +		/* This is the wrong way round. If you have a requirement for a
> +		 * 1:1 connector/crtc mapping, then loop over connectors and
> +		 * grab the crtc from there. Plus make sure there's really only
> +		 * 1 connector per crtc (but the possible clones stuff should
> +		 * catch that for you I think, at least with latest atomic
> +		 * helpers).
> +		 *
> +		 * Similar for the same loop in commit.
> +		 */
>  		aconnector = amdgpu_dm_find_first_crct_matching_connector(state, crtc, true);
>
>  		action = get_dm_commit_action(crtc_state);

Good point. This code is still a bit of a mess.
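
Flipping the loop around would look roughly like this (sketch against 
the current atomic iterators):

	for_each_connector_in_state(state, connector, conn_state, i) {
		struct drm_crtc_state *crtc_state;

		if (!conn_state->crtc)
			continue;

		/* 1:1 mapping: the connector hands us its crtc directly,
		 * instead of searching a connector for each crtc */
		crtc_state = drm_atomic_get_crtc_state(state, conn_state->crtc);
		if (IS_ERR(crtc_state))
			return PTR_ERR(crtc_state);

		/* ... build/validate the dc_stream for this pair ... */
	}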

Harry


On 2017-05-03 03:12 PM, Harry Wentland wrote:
>
>
> On 2017-05-03 11:02 AM, Daniel Vetter wrote:
>> On Wed, May 03, 2017 at 04:26:51PM +0200, Christian König wrote:
>>> Hi Harry,
>>>
>>> while this looks more and more like it could work, something which would
>>> really help would be to have a set of patches squashed together and
>>> rebased
>>> on drm-next.
>>>
>>> The dc-drm-next-atomic-wip looks like a start, but we need something
>>> more like:
>>> drm/amdgpu: add base DC components
>>> drm/amdgpu: add DCE8 support for DC
>>> drm/amdgpu: add DCE9 support for DC
>>> drm/amdgpu: add DCE10 support for DC
>>> ...
>>> drm/amdgpu: enable DC globally
>>> drm/amdgpu: remove old display infrastructure
>>>
>>> Otherwise I fear that people will just be lost in the sheer number of
>>> patches we have in the branch.
>>
>> I think for a quick first feedback round simply something that's based on
>> top of recent drm-next is good enough, since I'll probably only look at
>> the aggregate diff anyway (or resulting tree).
>
> This was basically what I figured, which is why I didn't invest more time
> to squash changes. I agree with Christian that eventually we probably
> want something like his suggested split but for now I'd rather focus on
> tying a dc_surface to a drm_plane (and same for other objects). Starting
> to tie our objects to drm objects has been very eye-opening, to say the
> least.
>
> Daniel, do you think you'll find time to take a look at this this week
> or next?
>
> Harry
>
>> -Daniel
>>
>>>
>>> Regards,
>>> Christian.
>>>
>>> Am 03.05.2017 um 16:13 schrieb Harry Wentland:
>>>> Hi all,
>>>>
>>>> Over the last few months we (mostly Andrey and myself) have taken and
>>>> addressed some of the feedback received from December's DC RFC. A
>>>> lot of
>>>> our work so far centers around atomic. We were able to take a whole
>>>> bunch of the areas where we rolled our own solution and use DRM atomic
>>>> helpers instead.
>>>>
>>>> You can find our most recent drm-next based tree at
>>>> https://cgit.freedesktop.org/~hwentland/linux/log/?h=dc-drm-next-atomic-wip
>>>>
>>>>
>>>> An outline of the changes with commit hashes from that tree is listed
>>>> below. They aren't necessarily in order but are grouped by
>>>> functionality.
>>>>
>>>> I would like to solicit input on the changes and the current state of
>>>> amd/display in general.
>>>>
>>>> I'm on IRC every weekday from 9-5 eastern time, sometimes at other
>>>> hours. Feel free to ask questions, discuss, leave comments. Let me know
>>>> if there's anything else I can do to facilitate review.
>>>>
>>>> We know there's more work to be done but would much rather prioritize
>>>> that work based on community feedback than merely our own impressions.
>>>>
>>>> We haven't finished plumbing drm types to the dc types yet, and there
>>>> are still a bunch of other things in progress.  We are not looking to
>>>> re-hash the previous discussion, but rather we'd like some feedback on
>>>> our work so far.
>>>>
>>>> The list of changes (trimmed commit tags):
>>>>
>>>> == Use common helpers for pageflip ==
>>>> 144da239b047 Use pflip prepare and submit parts (v2)
>>>> ff7ac264a9a1 Fix page flip after daniel's acquire_ctx change
>>>>
>>>>
>>>> == implement atomic_get_properties ==
>>>> cf4a84df7189 Fallback on legacy properties in atomic_get_properties
>>>> 01f96706b6ca get_atomic_property missing for drm_connector_funcs
>>>>
>>>>
>>>> == Use common helpers for gamma ==
>>>> 3f547d7098de Use atomic helpers for gamma
>>>>
>>>>
>>>> == Use atomic helpers for commit ==
>>>> 41831f55bd58 Refactor atomic commit implementation. (v2)
>>>> 6c67dd3c5cd5 Refactor headless to use atomic commit.
>>>> eb22ef1ecb16 Remove page_fleep_needed function.
>>>>
>>>>
>>>> == Use atomic helpers for S3 ==
>>>> 5a6ae6f76249 Switch to DRM helpers in s3.
>>>>
>>>>
>>>> == Simplify mapping between DRM and DC objects ==
>>>> 84a3ee023b9b Remove get_connector_for_link.
>>>> 6d8978a98b40 Remove get_connector_for_sink.
>>>>
>>>>
>>>> == Use DRM EDID read instead of DC ==
>>>> 566969dacfad Fix i2c write flag.
>>>> 527c3699ff3c Refactor edid read.
>>>> 5ac51023d275 Fix missing irq refactor causing potential i2c race
>>>>
>>>>
>>>> == Save DC validate_ctx in atomic_check and use in commit ==
>>>> 8c194d8e4ee9 pull commit_surfaces out of atomic_commit into helper
>>>> function
>>>> 27eef98b38c8 Return context from validate_context
>>>> ca3ee10e915b Add DM global atomic state
>>>> 8ba1ca856532 Hook dm private state into atomic_check
>>>> 10160455ac6d Add correct retain/release for objs in dm_atomic_state
>>>> 0f1b2e2aecbb Commit validation set from state
>>>> 258e6a91fc61 Add validate_context to atomic_state
>>>> 64f569b5df20 Use validate_context from atomic_check in commit
>>>>
>>>>
>>>> == Start multi-plane implementation ==
>>>> 601ff4e70b7c decouple per-crtc-plane model
>>>> a0b859a0b114 Fix cleanup in amdgpu_dm_initialize_drm_device
>>>> ee02010d7a82 update plane functionalities
>>>> 3b7345fd1abb Allow planes on all crtcs
>>>> d9cf37462156 initialize YUV plane capabilities
>>>> 248f06b2613e Universal cursor plane hook-up.
>>>>
>>>>
>>>> == Minor cleanups ==
>>>> d99bf02cb8fc Rename atomic_commit parameter.
>>>> f15fb9726502 Use amdgpu mode funcs statically
>>>> d3e37fa70643 Remove unused define from amdgpu_dm_types
>>>> 80a38ad58125 Add lock around updating freesync property
>>>> 8c7f16853824 Use new drm_dp_find_vcpi_slots to compute slots
>>>>
>>>> Regards,
>>>> Harry

