[PATCH] drm/amd/powerplay: add smu message mutex
Huang, Ray
Ray.Huang at amd.com
Tue Jun 4 05:37:38 UTC 2019
> -----Original Message-----
> From: amd-gfx <amd-gfx-bounces at lists.freedesktop.org> On Behalf Of Xiao, Jack
> Sent: Monday, June 03, 2019 3:02 PM
> To: amd-gfx at lists.freedesktop.org; Deucher, Alexander <Alexander.Deucher at amd.com>; Zhang, Hawking <Hawking.Zhang at amd.com>
> Cc: Xiao, Jack <Jack.Xiao at amd.com>
> Subject: [PATCH] drm/amd/powerplay: add smu message mutex
>
> Add an smu message mutex to guard against race conditions.
>
> Signed-off-by: Jack Xiao <Jack.Xiao at amd.com>
> ---
> drivers/gpu/drm/amd/powerplay/amdgpu_smu.c | 1 +
> drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h | 1 +
> drivers/gpu/drm/amd/powerplay/smu_v11_0.c | 7 ++++++-
> 3 files changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
> index 3026c7e..db2bbec 100644
> --- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
> +++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
> @@ -350,6 +350,7 @@ static int smu_early_init(void *handle)
> smu->adev = adev;
> smu->pm_enabled = !!amdgpu_dpm;
> mutex_init(&smu->mutex);
> + mutex_init(&smu->msg_mutex);
As we discussed, we should use smu->mutex to protect the whole context across threads instead of introducing a message-specific mutex, because msg_mutex cannot protect the cases where multiple messages must be paired. And yes, this is the key issue in swSMU so far.
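To illustrate the pairing problem Ray describes, here is a minimal userspace sketch (hypothetical names, pthread mutexes standing in for the kernel mutex): a per-message lock would be dropped between two messages that belong together, letting another thread slip its own message in between, while one context-wide lock held across the whole sequence keeps the pair atomic.

```c
#include <pthread.h>
#include <stdint.h>

/* Hypothetical userspace model of the swSMU locking issue: each
 * "message" writes a shared mailbox register, and some operations
 * need two messages in sequence. A per-message msg_mutex would be
 * released between the two sends; only a context-wide lock (like
 * smu->mutex) held across the whole sequence keeps the pair atomic. */

struct smu_model {
	pthread_mutex_t mutex;   /* context-wide lock, like smu->mutex */
	uint32_t mailbox;        /* last message written, like C2PMSG_66 */
	int bad_interleave;      /* set if a pair was split by another thread */
};

/* Model of one message send (WREG32 + wait_for_response). */
static void send_msg(struct smu_model *smu, uint32_t msg)
{
	smu->mailbox = msg;
}

/* A two-message operation kept atomic by the context mutex. */
static void set_freq_range(struct smu_model *smu, uint32_t lo, uint32_t hi)
{
	pthread_mutex_lock(&smu->mutex);
	send_msg(smu, lo);
	if (smu->mailbox != lo)  /* another thread slipped in between */
		smu->bad_interleave = 1;
	send_msg(smu, hi);
	pthread_mutex_unlock(&smu->mutex);
}

static void *worker(void *arg)
{
	struct smu_model *smu = arg;
	for (int i = 0; i < 100000; i++)
		set_freq_range(smu, 2 * (uint32_t)i, 2 * (uint32_t)i + 1);
	return NULL;
}
```

With the lock taken per message instead, the check between the two sends could observe another thread's write, which is exactly the multi-message race that msg_mutex cannot prevent.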
+ Linux power folks,
Kevin, could you please use smu->mutex to protect the callbacks below, which are called from the gpu_info ioctl:
amdgpu_dpm_get_sclk
amdgpu_dpm_get_mclk
We also need smu->mutex to protect smu_dpm_set_uvd_enable/smu_dpm_set_vce_enable, because they are called during VCN command submissions. We should look over every ioctl/sysfs interface in the driver; they all need the mutex.
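The shape Ray is asking for might look like the following sketch (a userspace model with hypothetical names mirroring the kernel ones, not the actual driver code): every swSMU entry point reachable from an ioctl or sysfs node takes smu->mutex around its SMU traffic, so concurrent callers are serialized at the context level.

```c
#include <pthread.h>
#include <stdint.h>

/* Hypothetical model: any entry point reachable from userspace
 * (ioctl/sysfs) serializes on the context-wide mutex rather than
 * relying on per-message locking inside smu_v11_0_send_msg(). */

struct smu_context_model {
	pthread_mutex_t mutex;   /* stands in for smu->mutex */
	uint32_t sclk;           /* cached clock; the real code would
	                          * send GetDpmClockFreq-style messages */
};

/* Model of amdgpu_dpm_get_sclk: the whole query holds the lock. */
static uint32_t dpm_get_sclk(struct smu_context_model *smu)
{
	uint32_t v;

	pthread_mutex_lock(&smu->mutex);
	v = smu->sclk;           /* message sequence would happen here */
	pthread_mutex_unlock(&smu->mutex);
	return v;
}
```

amdgpu_dpm_get_mclk, smu_dpm_set_uvd_enable, and smu_dpm_set_vce_enable would follow the same pattern, each holding smu->mutex for its entire message sequence.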
Thanks,
Ray
>
> return smu_set_funcs(adev);
> }
> diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
> index 3eb1de9..735233e 100644
> --- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
> +++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
> @@ -374,6 +374,7 @@ struct smu_context
> const struct smu_funcs *funcs;
> const struct pptable_funcs *ppt_funcs;
> struct mutex mutex;
> + struct mutex msg_mutex;
> uint64_t pool_size;
>
> struct smu_table_context smu_table;
> diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
> index d2eeb62..de737a0 100644
> --- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
> +++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
> @@ -104,6 +104,8 @@ static int smu_v11_0_send_msg(struct smu_context *smu, uint16_t msg)
> 	if (index < 0)
> 		return index;
>
> + mutex_lock(&smu->msg_mutex);
> +
> smu_v11_0_wait_for_response(smu);
>
> 	WREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_90, 0);
> @@ -111,11 +113,11 @@ static int smu_v11_0_send_msg(struct smu_context *smu, uint16_t msg)
> smu_v11_0_send_msg_without_waiting(smu, (uint16_t)index);
>
> ret = smu_v11_0_wait_for_response(smu);
> -
> if (ret)
> 		pr_err("Failed to send message 0x%x, response 0x%x\n", index, ret);
>
> + mutex_unlock(&smu->msg_mutex);
> return ret;
>
> }
> @@ -132,6 +134,8 @@ static int smu_v11_0_send_msg(struct smu_context *smu, uint16_t msg)
> if (index < 0)
> return index;
>
> + mutex_lock(&smu->msg_mutex);
> +
> ret = smu_v11_0_wait_for_response(smu);
> if (ret)
> 		pr_err("Failed to send message 0x%x, response 0x%x, param 0x%x\n",
> @@ -148,6 +152,7 @@ static int smu_v11_0_send_msg(struct smu_context *smu, uint16_t msg)
> 		pr_err("Failed to send message 0x%x, response 0x%x param 0x%x\n",
> 			index, ret, param);
>
> + mutex_unlock(&smu->msg_mutex);
> return ret;
> }
>
> --
> 1.9.1
>
> _______________________________________________
> amd-gfx mailing list
> amd-gfx at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx