[PATCH] drm/amdgpu: Program ring for vce instance 1 at its register space
Christian König
deathsimple at vodafone.de
Tue May 30 16:15:25 UTC 2017
On 30.05.2017 at 17:56, Leo Liu wrote:
> We need to program the ring buffer in the instance 1 register space
> domain only when instance 1 alone is available; with two instances,
> or with only instance 0, we need to program only the instance 0
> register space domain for the ring.
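In other words (a minimal sketch of the selection rule described above;
the helper name vce_v3_0_pick_reg_space() is hypothetical, not part of
the patch):

static int vce_v3_0_pick_reg_space(struct amdgpu_device *adev)
{
	/* If VCE0 is harvested, instance 1 is the only one left, so
	 * the ring registers must be reached through instance 1's
	 * register space; in every other configuration instance 0's
	 * register space is used.
	 */
	if (adev->vce.harvest_config == AMDGPU_VCE_HARVEST_VCE0)
		return 1;
	return 0;
}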
>
> Signed-off-by: Leo Liu <leo.liu at amd.com>
> Reviewed-by: Alex Deucher <alexander.deucher at amd.com>
> Cc: stable at vger.kernel.org
> ---
> drivers/gpu/drm/amd/amdgpu/vce_v3_0.c | 95 +++++++++++++++++++++++++----------
> 1 file changed, 68 insertions(+), 27 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
> index fb08193..7e39e42 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
> @@ -77,13 +77,26 @@ static int vce_v3_0_set_clockgating_state(void *handle,
> static uint64_t vce_v3_0_ring_get_rptr(struct amdgpu_ring *ring)
> {
> struct amdgpu_device *adev = ring->adev;
> + u32 v;
> +
> + if (adev->vce.harvest_config == AMDGPU_VCE_HARVEST_VCE0) {
> + mutex_lock(&adev->grbm_idx_mutex);
> + WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(1));
> + }
Did you miss my comment? I think we should always grab the mutex and
program mmGRBM_GFX_INDEX.
Otherwise we silently rely on it being set correctly when the function
is called, and we end up with different code paths for different
harvested configs.
Apart from that the patch looks good to me.
Christian.
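Something like the following is what that would look like (a rough,
untested sketch of the unconditional variant, shown for get_rptr only;
get_wptr and set_wptr would follow the same pattern):

static uint64_t vce_v3_0_ring_get_rptr(struct amdgpu_ring *ring)
{
	struct amdgpu_device *adev = ring->adev;
	u32 v;

	/* Always serialize and select the register space explicitly,
	 * instead of relying on GRBM_GFX_INDEX already being correct.
	 */
	mutex_lock(&adev->grbm_idx_mutex);
	if (adev->vce.harvest_config == AMDGPU_VCE_HARVEST_VCE0)
		WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(1));
	else
		WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(0));

	if (ring == &adev->vce.ring[0])
		v = RREG32(mmVCE_RB_RPTR);
	else if (ring == &adev->vce.ring[1])
		v = RREG32(mmVCE_RB_RPTR2);
	else
		v = RREG32(mmVCE_RB_RPTR3);

	WREG32(mmGRBM_GFX_INDEX, mmGRBM_GFX_INDEX_DEFAULT);
	mutex_unlock(&adev->grbm_idx_mutex);

	return v;
}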
>
> if (ring == &adev->vce.ring[0])
> - return RREG32(mmVCE_RB_RPTR);
> + v = RREG32(mmVCE_RB_RPTR);
> else if (ring == &adev->vce.ring[1])
> - return RREG32(mmVCE_RB_RPTR2);
> + v = RREG32(mmVCE_RB_RPTR2);
> else
> - return RREG32(mmVCE_RB_RPTR3);
> + v = RREG32(mmVCE_RB_RPTR3);
> +
> + if (adev->vce.harvest_config == AMDGPU_VCE_HARVEST_VCE0) {
> + WREG32(mmGRBM_GFX_INDEX, mmGRBM_GFX_INDEX_DEFAULT);
> + mutex_unlock(&adev->grbm_idx_mutex);
> + }
> +
> + return v;
> }
>
> /**
> @@ -96,13 +109,26 @@ static uint64_t vce_v3_0_ring_get_rptr(struct amdgpu_ring *ring)
> static uint64_t vce_v3_0_ring_get_wptr(struct amdgpu_ring *ring)
> {
> struct amdgpu_device *adev = ring->adev;
> + u32 v;
> +
> + if (adev->vce.harvest_config == AMDGPU_VCE_HARVEST_VCE0) {
> + mutex_lock(&adev->grbm_idx_mutex);
> + WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(1));
> + }
>
> if (ring == &adev->vce.ring[0])
> - return RREG32(mmVCE_RB_WPTR);
> + v = RREG32(mmVCE_RB_WPTR);
> else if (ring == &adev->vce.ring[1])
> - return RREG32(mmVCE_RB_WPTR2);
> + v = RREG32(mmVCE_RB_WPTR2);
> else
> - return RREG32(mmVCE_RB_WPTR3);
> + v = RREG32(mmVCE_RB_WPTR3);
> +
> + if (adev->vce.harvest_config == AMDGPU_VCE_HARVEST_VCE0) {
> + WREG32(mmGRBM_GFX_INDEX, mmGRBM_GFX_INDEX_DEFAULT);
> + mutex_unlock(&adev->grbm_idx_mutex);
> + }
> +
> + return v;
> }
>
> /**
> @@ -116,12 +142,22 @@ static void vce_v3_0_ring_set_wptr(struct amdgpu_ring *ring)
> {
> struct amdgpu_device *adev = ring->adev;
>
> + if (adev->vce.harvest_config == AMDGPU_VCE_HARVEST_VCE0) {
> + mutex_lock(&adev->grbm_idx_mutex);
> + WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(1));
> + }
> +
> if (ring == &adev->vce.ring[0])
> WREG32(mmVCE_RB_WPTR, lower_32_bits(ring->wptr));
> else if (ring == &adev->vce.ring[1])
> WREG32(mmVCE_RB_WPTR2, lower_32_bits(ring->wptr));
> else
> WREG32(mmVCE_RB_WPTR3, lower_32_bits(ring->wptr));
> +
> + if (adev->vce.harvest_config == AMDGPU_VCE_HARVEST_VCE0) {
> + WREG32(mmGRBM_GFX_INDEX, mmGRBM_GFX_INDEX_DEFAULT);
> + mutex_unlock(&adev->grbm_idx_mutex);
> + }
> }
>
> static void vce_v3_0_override_vce_clock_gating(struct amdgpu_device *adev, bool override)
> @@ -231,33 +267,38 @@ static int vce_v3_0_start(struct amdgpu_device *adev)
> struct amdgpu_ring *ring;
> int idx, r;
>
> - ring = &adev->vce.ring[0];
> - WREG32(mmVCE_RB_RPTR, lower_32_bits(ring->wptr));
> - WREG32(mmVCE_RB_WPTR, lower_32_bits(ring->wptr));
> - WREG32(mmVCE_RB_BASE_LO, ring->gpu_addr);
> - WREG32(mmVCE_RB_BASE_HI, upper_32_bits(ring->gpu_addr));
> - WREG32(mmVCE_RB_SIZE, ring->ring_size / 4);
> -
> - ring = &adev->vce.ring[1];
> - WREG32(mmVCE_RB_RPTR2, lower_32_bits(ring->wptr));
> - WREG32(mmVCE_RB_WPTR2, lower_32_bits(ring->wptr));
> - WREG32(mmVCE_RB_BASE_LO2, ring->gpu_addr);
> - WREG32(mmVCE_RB_BASE_HI2, upper_32_bits(ring->gpu_addr));
> - WREG32(mmVCE_RB_SIZE2, ring->ring_size / 4);
> -
> - ring = &adev->vce.ring[2];
> - WREG32(mmVCE_RB_RPTR3, lower_32_bits(ring->wptr));
> - WREG32(mmVCE_RB_WPTR3, lower_32_bits(ring->wptr));
> - WREG32(mmVCE_RB_BASE_LO3, ring->gpu_addr);
> - WREG32(mmVCE_RB_BASE_HI3, upper_32_bits(ring->gpu_addr));
> - WREG32(mmVCE_RB_SIZE3, ring->ring_size / 4);
> -
> mutex_lock(&adev->grbm_idx_mutex);
> for (idx = 0; idx < 2; ++idx) {
> if (adev->vce.harvest_config & (1 << idx))
> continue;
>
> WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(idx));
> +
> +		/* Program the instance 0 reg space when two instances,
> +		 * or only instance 0, are available; program the
> +		 * instance 1 reg space when only instance 1 is
> +		 * available */
> + if (idx != 1 || adev->vce.harvest_config == AMDGPU_VCE_HARVEST_VCE0) {
> + ring = &adev->vce.ring[0];
> + WREG32(mmVCE_RB_RPTR, lower_32_bits(ring->wptr));
> + WREG32(mmVCE_RB_WPTR, lower_32_bits(ring->wptr));
> + WREG32(mmVCE_RB_BASE_LO, ring->gpu_addr);
> + WREG32(mmVCE_RB_BASE_HI, upper_32_bits(ring->gpu_addr));
> + WREG32(mmVCE_RB_SIZE, ring->ring_size / 4);
> +
> + ring = &adev->vce.ring[1];
> + WREG32(mmVCE_RB_RPTR2, lower_32_bits(ring->wptr));
> + WREG32(mmVCE_RB_WPTR2, lower_32_bits(ring->wptr));
> + WREG32(mmVCE_RB_BASE_LO2, ring->gpu_addr);
> + WREG32(mmVCE_RB_BASE_HI2, upper_32_bits(ring->gpu_addr));
> + WREG32(mmVCE_RB_SIZE2, ring->ring_size / 4);
> +
> + ring = &adev->vce.ring[2];
> + WREG32(mmVCE_RB_RPTR3, lower_32_bits(ring->wptr));
> + WREG32(mmVCE_RB_WPTR3, lower_32_bits(ring->wptr));
> + WREG32(mmVCE_RB_BASE_LO3, ring->gpu_addr);
> + WREG32(mmVCE_RB_BASE_HI3, upper_32_bits(ring->gpu_addr));
> + WREG32(mmVCE_RB_SIZE3, ring->ring_size / 4);
> + }
> +
> vce_v3_0_mc_resume(adev, idx);
> WREG32_FIELD(VCE_STATUS, JOB_BUSY, 1);
>
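For readers following along: the condition in the loop above makes the
RB register block get programmed exactly once per reset, on the one
register space that the rings are actually driven through. A
hypothetical helper (not in the patch) capturing the same rule:

static bool vce_v3_0_program_rb_here(struct amdgpu_device *adev, int idx)
{
	/* idx 0: always program the RB registers (instance 0 space).
	 * idx 1: program them only when VCE0 is harvested, i.e. when
	 * instance 1 is the sole instance; otherwise instance 0 has
	 * already covered it and idx 1 only needs mc_resume.
	 */
	return idx != 1 ||
	       adev->vce.harvest_config == AMDGPU_VCE_HARVEST_VCE0;
}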