[RFC] Using DC in amdgpu for upcoming GPU
Harry Wentland
harry.wentland at amd.com
Thu Dec 8 02:02:13 UTC 2016
We propose to use the Display Core (DC) driver for display support on
AMD's upcoming GPU (referred to as uGPU in the rest of the doc). In
order to avoid a flag day, the plan is to support only uGPU initially
and transition older ASICs over gradually.
The DC component has received extensive testing within AMD for DCE8, 10,
and 11 GPUs and is being prepared for uGPU. Support should be better
than amdgpu's current display support.
* All of our QA effort is focused on DC
* All of our CQE effort is focused on DC
* All of our OEM preloads and custom engagements use DC
* DC behavior mirrors what we do for other OSes
The new ASIC utilizes a completely redesigned atom interface, so we
cannot easily leverage much of the existing atom-based code.
We introduced DC to the community earlier in 2016 and received a fair
amount of feedback. Some of what we've addressed so far:
* Self-contained ASIC-specific code. We did a bunch of work to pull
common sequences into dc/dce and leave ASIC-specific code in
separate folders.
* Started to expose AUX and I2C through generic kernel/drm
functionality and are mostly using that. Some of that code is still
needlessly convoluted; this cleanup is in progress. (A rough sketch of
the drm_dp_aux hookup follows after this list.)
* Integrated Dave and Jerome’s work on removing abstraction in bios
parser.
* Retired the adapter service and asic capability code.
* Removed some abstraction in GPIO.
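
To make the AUX/I2C bullet above a little more concrete, below is a
minimal sketch of what routing DP AUX through the DRM core's drm_dp_aux
helper can look like. The drm_dp_aux pieces are the generic kernel API;
the amdgpu_dm_connector wrapper, its field names, and the
dc_aux_transfer() helper are illustrative placeholders for the DC side,
not our actual interfaces.

#include <drm/drm_dp_helper.h>

struct dc_link;  /* opaque handle into DC; illustrative only */

/* Stand-in for the DC-side helper that performs the native or
 * I2C-over-AUX transaction and fills in the reply field. */
ssize_t dc_aux_transfer(struct dc_link *link, uint32_t address,
                        uint8_t request, void *buffer, size_t size,
                        uint8_t *reply);

/* Hypothetical connector wrapper; field names are illustrative. */
struct amdgpu_dm_connector {
        struct drm_dp_aux dm_dp_aux;
        struct dc_link *dc_link;
};

/* drm_dp_aux::transfer hook: forward a DRM AUX message to DC. */
static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux,
                                  struct drm_dp_aux_msg *msg)
{
        struct amdgpu_dm_connector *aconnector =
                container_of(aux, struct amdgpu_dm_connector, dm_dp_aux);

        return dc_aux_transfer(aconnector->dc_link, msg->address,
                               msg->request, msg->buffer, msg->size,
                               &msg->reply);
}

static int dm_register_aux(struct device *dev,
                           struct amdgpu_dm_connector *aconnector)
{
        aconnector->dm_dp_aux.name = "dmdp";
        aconnector->dm_dp_aux.dev = dev;
        aconnector->dm_dp_aux.transfer = dm_dp_aux_transfer;

        /* Registering with the DRM core exposes the channel as a
         * /dev/drm_dp_auxN device and lets drm_dp_dpcd_read()/write()
         * go through this single transfer hook. */
        return drm_dp_aux_register(&aconnector->dm_dp_aux);
}

The same idea applies to I2C: an i2c_adapter whose master_xfer callback
lands in DC, rather than DC keeping its own I2C stack.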
Since a lot of our code is shared with pre- and post-silicon validation
suites, changes need to be made gradually to prevent breakages due to a
major flag day. This, coupled with adding support for new ASICs and
lots of new feature introductions, means progress has not been as quick
as we would have liked. We have made a lot of progress nonetheless.
The remaining concerns brought up during the last review that we are
working on addressing:
* Continuing to clean up and reduce the abstractions in DC where it
makes sense.
* Removing duplicate code in I2C and AUX as we transition to using the
DRM core interfaces. We can't fully transition until we've helped
fill in the gaps in the drm core that we need for certain features.
* Making sure Atomic API support is correct. Some of the semantics of
the Atomic API were not particularly clear when we started this;
however, that is improving a lot as the core DRM documentation
improves. Getting this code upstream and in the hands of more
atomic users will further help us identify and rectify any gaps we
have. (A sketch of the hooks involved follows after this list.)
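
To make the atomic bullet above a bit more concrete, here is a rough
sketch of the hooks involved; the appendix at the end shows what the
commit side expands into. The drm_* helpers are the real DRM core
interfaces, while amdgpu_dm_atomic_check() and dc_validate_resources()
below are illustrative placeholders for where DC's validation would plug
into the check phase, not our actual functions.

#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc.h>

/* Commits the validated state to DC; see the appendix below. */
int amdgpu_dm_atomic_commit(struct drm_device *dev,
                            struct drm_atomic_state *state,
                            bool nonblock);

/* Stand-in for building a DC context from 'state' and asking DC
 * whether the configuration fits (pipes, clocks, bandwidth) without
 * touching hardware. */
int dc_validate_resources(struct drm_device *dev,
                          struct drm_atomic_state *state);

static int amdgpu_dm_atomic_check(struct drm_device *dev,
                                  struct drm_atomic_state *state)
{
        int ret;

        /* Core checks: per-plane/CRTC constraints, modeset scope, etc. */
        ret = drm_atomic_helper_check(dev, state);
        if (ret)
                return ret;

        /* Anything the core cannot know about (DC resource limits)
         * gets rejected here, so that a TEST_ONLY commit is meaningful. */
        return dc_validate_resources(dev, state);
}

static const struct drm_mode_config_funcs amdgpu_dm_mode_funcs = {
        .atomic_check  = amdgpu_dm_atomic_check,
        .atomic_commit = amdgpu_dm_atomic_commit,
};

The TEST_ONLY path in particular is where atomic userspace will find any
gaps, which is why getting this into more hands helps.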
Unfortunately, we cannot expose code for uGPU yet. However, the
refactor/cleanup work on DC is public. We're currently transitioning to a public
patch review. You can follow our progress on the amd-gfx mailing list.
We value community feedback on our work.
As an appendix I've included a brief overview of how the code
currently works, to make understanding and reviewing it easier.
Prior discussions on DC:
* https://lists.freedesktop.org/archives/dri-devel/2016-March/103398.html
* https://lists.freedesktop.org/archives/dri-devel/2016-February/100524.html
Current version of DC:
* https://cgit.freedesktop.org/~agd5f/linux/tree/drivers/gpu/drm/amd/display?h=amd-staging-4.7
Once Alex pulls in the latest patches:
* https://cgit.freedesktop.org/~agd5f/linux/tree/drivers/gpu/drm/amd/display?h=amd-staging-4.7
Best Regards,
Harry
************************************************
*** Appendix: A Day in the Life of a Modeset ***
************************************************
Below is a high-level overview of a modeset with DC. Some of this might
be a little out of date since it's based on my XDC presentation, but it
should be more or less the same.
amdgpu_dm_atomic_commit()
{
  /* setup atomic state */
  drm_atomic_helper_prepare_planes(dev, state);
  drm_atomic_helper_swap_state(dev, state);
  drm_atomic_helper_update_legacy_modeset_state(dev, state);

  /* create or remove targets */

  /********************************************************************
   * *** Call into DC to commit targets with list of all known targets
   ********************************************************************/
  /* DC is optimized not to do anything if 'targets' didn't change. */
  dc_commit_targets(dm->dc, commit_targets, commit_targets_count)
  {
    /******************************************************************
     * *** Build context (function also used for validation)
     ******************************************************************/
    result = core_dc->res_pool->funcs->validate_with_context(
        core_dc, set, target_count, context);

    /******************************************************************
     * *** Apply safe power state
     ******************************************************************/
    pplib_apply_safe_state(core_dc);

    /******************************************************************
     * *** Apply the context to HW (program HW)
     ******************************************************************/
    result = core_dc->hwss.apply_ctx_to_hw(core_dc, context)
    {
      /* reset pipes that need reprogramming */
      /* disable pipe power gating */
      /* set safe watermarks */

      /* for all pipes with an attached stream */
      /****************************************************************
       * *** Programming all per-pipe contexts
       ****************************************************************/
      status = apply_single_controller_ctx_to_hw(...)
      {
        pipe_ctx->tg->funcs->set_blank(...);
        pipe_ctx->clock_source->funcs->program_pix_clk(...);
        pipe_ctx->tg->funcs->program_timing(...);
        pipe_ctx->mi->funcs->allocate_mem_input(...);
        pipe_ctx->tg->funcs->enable_crtc(...);
        bios_parser_crtc_source_select(...);

        pipe_ctx->opp->funcs->opp_set_dyn_expansion(...);
        pipe_ctx->opp->funcs->opp_program_fmt(...);

        stream->sink->link->link_enc->funcs->setup(...);
        pipe_ctx->stream_enc->funcs->dp_set_stream_attribute(...);
        pipe_ctx->tg->funcs->set_blank_color(...);

        core_link_enable_stream(pipe_ctx);
        unblank_stream(pipe_ctx, ...);
        program_scaler(dc, pipe_ctx);
      }

      /* program audio for all pipes */
      /* update watermarks */
    }

    program_timing_sync(core_dc, context);

    /* for all targets */
    target_enable_memory_requests(...);

    /* Update ASIC power states */
    pplib_apply_display_requirements(...);

    /* update surface or page flip */
  }
}
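
For reference, the flow above is what runs underneath a userspace atomic
commit. A minimal libdrm sequence that would exercise this path looks
roughly like the sketch below; the object and property IDs are
placeholders that a real client would first look up via
drmModeGetResources()/drmModeObjectGetProperties(), and a complete
commit would also set the plane's CRTC_ID and SRC/CRTC coordinates.

#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Sketch of a userspace atomic modeset that ends up in
 * amdgpu_dm_atomic_commit() above. All IDs are placeholders. */
int do_atomic_modeset(int fd, uint32_t crtc_id, uint32_t conn_id,
                      uint32_t plane_id, uint32_t fb_id,
                      uint32_t mode_blob_id,
                      uint32_t prop_active, uint32_t prop_mode_id,
                      uint32_t prop_crtc_id, uint32_t prop_fb_id)
{
        drmModeAtomicReq *req;
        int ret;

        /* Opt in to the atomic API. */
        if (drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1))
                return -1;

        req = drmModeAtomicAlloc();
        if (!req)
                return -1;

        /* Enable the CRTC with the new mode and point the connector
         * and the primary plane's framebuffer at it. */
        drmModeAtomicAddProperty(req, crtc_id, prop_active, 1);
        drmModeAtomicAddProperty(req, crtc_id, prop_mode_id, mode_blob_id);
        drmModeAtomicAddProperty(req, conn_id, prop_crtc_id, crtc_id);
        drmModeAtomicAddProperty(req, plane_id, prop_fb_id, fb_id);

        /* ALLOW_MODESET permits a full modeset, i.e. the
         * dc_commit_targets() path rather than just a flip. */
        ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_ALLOW_MODESET,
                                  NULL);
        drmModeAtomicFree(req);
        return ret;
}

None of this is DC-specific on the userspace side, which is the point:
standard atomic clients should just work.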