[RFC] Using DC in amdgpu for upcoming GPU
Rob Clark
robdclark at gmail.com
Tue Dec 13 14:59:04 UTC 2016
On Mon, Dec 12, 2016 at 11:10 PM, Cheng, Tony <tony.cheng at amd.com> wrote:
> We need to treat most of the resources that don't map well as global. One
> example is the pixel PLL. We have 6 display pipes but only 2 or 3 PLLs in
> CI/VI, so we are limited in the number of HDMI or DVI outputs we can drive
> at the same time. The pixel PLL can also be used to drive DP, so there is
> another layer of HW-specific sharing that we can't really contain in the
> crtc or encoder by itself. Doing this resource allocation requires
> knowledge of the whole system: knowing which pixel PLLs are already in
> use, and what we can support with the remaining PLLs.
>
> Another ask: let's say we are driving 2 displays. We would always want
> instance 0 and instance 1 of the scaler, timing generator, etc. to get
> used. We want to avoid the possibility that, due to a different user-mode
> commit sequence, we end up driving the 2 displays with instances 0 and 2
> of the HW. Not only is that configuration not really validated in the
> lab, we are also less effective at power gating, since instances 0 and 1
> are on the same tile. Instead of having 2/3 of the processing pipeline
> silicon power gated we can only power gate 1/3. And if we power gate the
> wrong one, one of the 2 displays will not light up.
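Tony's PLL example boils down to allocating from a small pool shared
across all pipes, where each allocation has to look at every existing
assignment. A minimal sketch of that whole-system check (all names,
like pll_pool and PLL_UNUSED, are illustrative, not AMD hw
definitions):

#define NUM_PLLS   3
#define PLL_UNUSED (-1)

struct pll_pool {
	/* which display pipe (0..5) drives each PLL, PLL_UNUSED if free */
	int pll_user[NUM_PLLS];
};

/* whole-system check: can one more HDMI/DVI/DP output get a PLL? */
static int find_free_pll(const struct pll_pool *pool)
{
	for (int i = 0; i < NUM_PLLS; i++) {
		if (pool->pll_user[i] == PLL_UNUSED)
			return i;	/* index of a free PLL */
	}
	return PLL_UNUSED;	/* all in use: reject the new output */
}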
Note that as of 4.10, drm/msm/mdp5 is dynamically assigning hwpipes to
planes, tracked as part of the driver's global atomic state. (And for
future hw we will need to dynamically assign layer mixers to CRTCs.)
I'm also using global state for allocating SMP (basically FIFO)
blocks. And drm/i915 is also using global atomic state for shared
resources.
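That assignment can follow the same pattern as any other global
resource: keep the shared hw blocks in one global state object,
duplicate it at the start of atomic check, mutate the copy, and swap
it in on commit (or throw it away on failure). A minimal,
self-contained sketch of the pattern -- not the actual drm/msm code;
global_state, assign_hwpipe etc. are illustrative names:

#include <stdlib.h>
#include <string.h>

#define NUM_HWPIPES 4

struct global_state {
	/* which plane (by index) owns each hw pipe, -1 if free */
	int hwpipe_owner[NUM_HWPIPES];
};

/* duplicate the current state so ->atomic_check can mutate a copy */
static struct global_state *
global_state_duplicate(const struct global_state *cur)
{
	struct global_state *dup = malloc(sizeof(*dup));

	if (dup)
		memcpy(dup, cur, sizeof(*dup));
	return dup;
}

/*
 * Always hand out the lowest-numbered free instance first, which also
 * gives the deterministic "instance 0 and 1" assignment asked about
 * above. Returns the pipe id, or -1 if over-committed (the atomic
 * check then fails and the duplicated state is discarded).
 */
static int assign_hwpipe(struct global_state *s, int plane)
{
	for (int i = 0; i < NUM_HWPIPES; i++) {
		if (s->hwpipe_owner[i] < 0) {
			s->hwpipe_owner[i] = plane;
			return i;
		}
	}
	return -1;
}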
Dynamic assignment of hw resources to KMS objects is not a problem,
and the locking model in atomic allows for it. (I introduced one new
global modeset_lock to protect the global state, so only multiple
parallel updates which both touch shared state will serialize.)
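A rough sketch of how that looks, assuming a driver-private
glob_state_lock and a my_get_global_state() helper (both hypothetical
names; drm_modeset_lock() itself is the real helper). Any update that
never calls this doesn't take the lock, so it can run in parallel:

#include <drm/drm_atomic.h>
#include <drm/drm_modeset_lock.h>
#include <linux/err.h>

struct my_private {
	/* protects the driver's shared-resource state */
	struct drm_modeset_lock glob_state_lock;
	struct my_global_state *glob_state;
};

static struct my_global_state *
my_get_global_state(struct drm_atomic_state *state)
{
	struct my_private *priv = state->dev->dev_private;
	int ret;

	/* serializes only against other updates touching shared state */
	ret = drm_modeset_lock(&priv->glob_state_lock, state->acquire_ctx);
	if (ret)
		return ERR_PTR(ret);

	/* ... then duplicate priv->glob_state for this update (elided;
	 * returning the current state just to keep the sketch short) */
	return priv->glob_state;
}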
BR,
-R