<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<!--[if !mso]><style>v\:* {behavior:url(#default#VML);}
o\:* {behavior:url(#default#VML);}
w\:* {behavior:url(#default#VML);}
.shape {behavior:url(#default#VML);}
</style><![endif]--><style><!--
/* Font Definitions */
@font-face
        {font-family:SimSun;
        panose-1:2 1 6 0 3 1 1 1 1 1;}
@font-face
        {font-family:"Cambria Math";
        panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
        {font-family:DengXian;
        panose-1:2 1 6 0 3 1 1 1 1 1;}
@font-face
        {font-family:Calibri;
        panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
        {margin:0in;
        margin-bottom:.0001pt;
        font-size:11.0pt;
        font-family:"Calibri",sans-serif;}
a:link, span.MsoHyperlink
        {mso-style-priority:99;
        color:blue;
        text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
        {mso-style-priority:99;
        color:purple;
        text-decoration:underline;}
p.msonormal0, li.msonormal0, div.msonormal0
        {mso-style-name:msonormal;
        margin:0in;
        margin-bottom:.0001pt;
        font-size:11.0pt;
        font-family:"Calibri",sans-serif;}
span.EmailStyle20
        {mso-style-type:personal-reply;
        font-family:"Calibri",sans-serif;
        color:windowtext;}
.MsoChpDefault
        {mso-style-type:export-only;
        font-size:10.0pt;}
@page WordSection1
        {size:8.5in 11.0in;
        margin:1.0in 1.25in 1.0in 1.25in;}
div.WordSection1
        {page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="EN-US" link="blue" vlink="purple">
<div class="WordSection1">
<p class="MsoNormal">>[kevin]:<o:p></o:p></p>
<p class="MsoNormal">>for these pairs of messages, the smu driver had better add a lock to protect against the case below in multi-threaded use (e.g. cat on sysfs):<o:p></o:p></p>
<p class="MsoNormal">>High A + Low B<o:p></o:p></p>
<p class="MsoNormal">>High B + Low A<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Hmm, that’s what I was trying to avoid (adding internal lock protections).<o:p></o:p></p>
<p class="MsoNormal">These sequences should be properly protected/locked by the top-level APIs (e.g. smu_read_sensor) in amdgpu_smu.c.<o:p></o:p></p>
<p class="MsoNormal">Adding more internal locks brings more trouble/confusion than benefit.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<div style="border:none;border-left:solid blue 1.5pt;padding:0in 0in 0in 4.0pt">
<div>
<div style="border:none;border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal"><b>From:</b> Wang, Kevin(Yang) <Kevin1.Wang@amd.com> <br>
<b>Sent:</b> Sunday, January 5, 2020 2:11 PM<br>
<b>To:</b> Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org<br>
<b>Cc:</b> Deucher, Alexander <Alexander.Deucher@amd.com><br>
<b>Subject:</b> Re: [PATCH] drm/amd/powerplay: unified VRAM address for driver table interaction with SMU V2<o:p></o:p></p>
</div>
</div>
<p class="MsoNormal"><o:p> </o:p></p>
<p style="margin:15.0pt"><span style="font-size:10.0pt;font-family:"Arial",sans-serif;color:#0078D7">[AMD Official Use Only - Internal Distribution Only]<o:p></o:p></span></p>
<p class="MsoNormal"><o:p> </o:p></p>
<div>
<div>
<p class="MsoNormal"><span style="font-size:12.0pt;color:black"><o:p> </o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span style="font-size:12.0pt;color:black"><o:p> </o:p></span></p>
</div>
<div class="MsoNormal" align="center" style="text-align:center">
<hr size="2" width="98%" align="center">
</div>
<div id="divRplyFwdMsg">
<p class="MsoNormal"><b><span style="color:black">From:</span></b><span style="color:black"> amd-gfx <<a href="mailto:amd-gfx-bounces@lists.freedesktop.org">amd-gfx-bounces@lists.freedesktop.org</a>> on behalf of Evan Quan <<a href="mailto:evan.quan@amd.com">evan.quan@amd.com</a>><br>
<b>Sent:</b> Thursday, January 2, 2020 10:39 AM<br>
<b>To:</b> <a href="mailto:amd-gfx@lists.freedesktop.org">amd-gfx@lists.freedesktop.org</a> <<a href="mailto:amd-gfx@lists.freedesktop.org">amd-gfx@lists.freedesktop.org</a>><br>
<b>Cc:</b> Deucher, Alexander <<a href="mailto:Alexander.Deucher@amd.com">Alexander.Deucher@amd.com</a>>; Quan, Evan <<a href="mailto:Evan.Quan@amd.com">Evan.Quan@amd.com</a>><br>
<b>Subject:</b> [PATCH] drm/amd/powerplay: unified VRAM address for driver table interaction with SMU V2</span>
<o:p></o:p></p>
<div>
<p class="MsoNormal"> <o:p></o:p></p>
</div>
</div>
<div>
<div>
<p class="MsoNormal">By this, we can avoid passing in the VRAM address on every table<br>
transfer. Doing that puts unnecessary extra traffic on the SMU in<br>
some cases (e.g. polling the amdgpu_pm_info sysfs interface).<br>
<br>
V2: document what the driver table is for and how it works<br>
<br>
Change-Id: Ifb74d9cd89790b301e88d472b29cdb9b0365b65a<br>
Signed-off-by: Evan Quan <<a href="mailto:evan.quan@amd.com">evan.quan@amd.com</a>><br>
---<br>
 drivers/gpu/drm/amd/powerplay/amdgpu_smu.c    | 98 ++++++++++++-------<br>
 drivers/gpu/drm/amd/powerplay/arcturus_ppt.c  |  3 +-<br>
 .../gpu/drm/amd/powerplay/inc/amdgpu_smu.h    | 10 ++<br>
 drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h |  2 +<br>
 drivers/gpu/drm/amd/powerplay/inc/smu_v12_0.h |  2 +<br>
 drivers/gpu/drm/amd/powerplay/navi10_ppt.c    |  1 +<br>
 drivers/gpu/drm/amd/powerplay/renoir_ppt.c    |  1 +<br>
 drivers/gpu/drm/amd/powerplay/smu_internal.h  |  2 +<br>
 drivers/gpu/drm/amd/powerplay/smu_v11_0.c     | 18 ++++<br>
 drivers/gpu/drm/amd/powerplay/smu_v12_0.c     | 26 +++--<br>
 drivers/gpu/drm/amd/powerplay/vega20_ppt.c    |  1 +<br>
 11 files changed, 117 insertions(+), 47 deletions(-)<br>
<br>
diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c<br>
index 95238ad38de8..beea4d9e82d4 100644<br>
--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c<br>
+++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c<br>
@@ -519,26 +519,19 @@ int smu_update_table(struct smu_context *smu, enum smu_table_id table_index, int<br>
 {<br>
         struct smu_table_context *smu_table = &smu->smu_table;<br>
         struct amdgpu_device *adev = smu->adev;<br>
-       struct smu_table *table = NULL;<br>
-       int ret = 0;<br>
+       struct smu_table *table = &smu_table->driver_table;<br>
         int table_id = smu_table_get_index(smu, table_index);<br>
+       uint32_t table_size;<br>
+       int ret = 0;<br>
 <br>
         if (!table_data || table_id >= SMU_TABLE_COUNT || table_id < 0)<br>
                 return -EINVAL;<br>
 <br>
-       table = &smu_table->tables[table_index];<br>
+       table_size = smu_table->tables[table_index].size;<br>
 <br>
         if (drv2smu)<br>
-               memcpy(table->cpu_addr, table_data, table->size);<br>
+               memcpy(table->cpu_addr, table_data, table_size);<br>
 <br>
-       ret = smu_send_smc_msg_with_param(smu, SMU_MSG_SetDriverDramAddrHigh,<br>
-                                         upper_32_bits(table->mc_address));<br>
-       if (ret)<br>
-               return ret;<br>
-       ret = smu_send_smc_msg_with_param(smu, SMU_MSG_SetDriverDramAddrLow,<br>
-                                         lower_32_bits(table->mc_address));<br>
-       if (ret)<br>
-               return ret;<br>
         ret = smu_send_smc_msg_with_param(smu, drv2smu ?<br>
                                           SMU_MSG_TransferTableDram2Smu :<br>
                                           SMU_MSG_TransferTableSmu2Dram,<br>
@@ -550,7 +543,7 @@ int smu_update_table(struct smu_context *smu, enum smu_table_id table_index, int<br>
         adev->nbio.funcs->hdp_flush(adev, NULL);<br>
 <br>
         if (!drv2smu)<br>
-               memcpy(table_data, table->cpu_addr, table->size);<br>
+               memcpy(table_data, table->cpu_addr, table_size);<br>
 <br>
         return ret;<br>
 }<br>
@@ -976,32 +969,56 @@ static int smu_init_fb_allocations(struct smu_context *smu)<br>
         struct amdgpu_device *adev = smu->adev;<br>
         struct smu_table_context *smu_table = &smu->smu_table;<br>
         struct smu_table *tables = smu_table->tables;<br>
+       struct smu_table *driver_table = &(smu_table->driver_table);<br>
+       uint32_t max_table_size = 0;<br>
         int ret, i;<br>
 <br>
-       for (i = 0; i < SMU_TABLE_COUNT; i++) {<br>
-               if (tables[i].size == 0)<br>
-                       continue;<br>
+       /* VRAM allocation for tool table */<br>
+       if (tables[SMU_TABLE_PMSTATUSLOG].size) {<br>
                 ret = amdgpu_bo_create_kernel(adev,<br>
-                                             tables[i].size,<br>
-                                             tables[i].align,<br>
-                                             tables[i].domain,<br>
-                                             &tables[i].bo,<br>
-                                             &tables[i].mc_address,<br>
-                                             &tables[i].cpu_addr);<br>
-               if (ret)<br>
-                       goto failed;<br>
+                                             tables[SMU_TABLE_PMSTATUSLOG].size,<br>
+                                             tables[SMU_TABLE_PMSTATUSLOG].align,<br>
+                                             tables[SMU_TABLE_PMSTATUSLOG].domain,<br>
+                                             &tables[SMU_TABLE_PMSTATUSLOG].bo,<br>
+                                             &tables[SMU_TABLE_PMSTATUSLOG].mc_address,<br>
+                                             &tables[SMU_TABLE_PMSTATUSLOG].cpu_addr);<br>
+               if (ret) {<br>
+                       pr_err("VRAM allocation for tool table failed!\n");<br>
+                       return ret;<br>
+               }<br>
         }<br>
 <br>
-       return 0;<br>
-failed:<br>
-       while (--i >= 0) {<br>
+       /* VRAM allocation for driver table */<br>
+       for (i = 0; i < SMU_TABLE_COUNT; i++) {<br>
                 if (tables[i].size == 0)<br>
                         continue;<br>
-               amdgpu_bo_free_kernel(&tables[i].bo,<br>
-                                     &tables[i].mc_address,<br>
-                                     &tables[i].cpu_addr);<br>
 <br>
+               if (i == SMU_TABLE_PMSTATUSLOG)<br>
+                       continue;<br>
+<br>
+               if (max_table_size < tables[i].size)<br>
+                       max_table_size = tables[i].size;<br>
+       }<br>
+<br>
+       driver_table->size = max_table_size;<br>
+       driver_table->align = PAGE_SIZE;<br>
+       driver_table->domain = AMDGPU_GEM_DOMAIN_VRAM;<br>
+<br>
+       ret = amdgpu_bo_create_kernel(adev,<br>
+                                     driver_table->size,<br>
+                                     driver_table->align,<br>
+                                     driver_table->domain,<br>
+                                     &driver_table->bo,<br>
+                                     &driver_table->mc_address,<br>
+                                     &driver_table->cpu_addr);<br>
+       if (ret) {<br>
+               pr_err("VRAM allocation for driver table failed!\n");<br>
+               if (tables[SMU_TABLE_PMSTATUSLOG].mc_address)<br>
+                       amdgpu_bo_free_kernel(&tables[SMU_TABLE_PMSTATUSLOG].bo,<br>
+                                             &tables[SMU_TABLE_PMSTATUSLOG].mc_address,<br>
+                                             &tables[SMU_TABLE_PMSTATUSLOG].cpu_addr);<br>
         }<br>
+<br>
         return ret;<br>
 }<br>
 <br>
@@ -1009,18 +1026,19 @@ static int smu_fini_fb_allocations(struct smu_context *smu)<br>
 {<br>
         struct smu_table_context *smu_table = &smu->smu_table;<br>
         struct smu_table *tables = smu_table->tables;<br>
-       uint32_t i = 0;<br>
+       struct smu_table *driver_table = &(smu_table->driver_table);<br>
 <br>
         if (!tables)<br>
                 return 0;<br>
 <br>
-       for (i = 0; i < SMU_TABLE_COUNT; i++) {<br>
-               if (tables[i].size == 0)<br>
-                       continue;<br>
-               amdgpu_bo_free_kernel(&tables[i].bo,<br>
-                                     &tables[i].mc_address,<br>
-                                     &tables[i].cpu_addr);<br>
-       }<br>
+       if (tables[SMU_TABLE_PMSTATUSLOG].mc_address)<br>
+               amdgpu_bo_free_kernel(&tables[SMU_TABLE_PMSTATUSLOG].bo,<br>
+                                     &tables[SMU_TABLE_PMSTATUSLOG].mc_address,<br>
+                                     &tables[SMU_TABLE_PMSTATUSLOG].cpu_addr);<br>
+<br>
+       amdgpu_bo_free_kernel(&driver_table->bo,<br>
+                             &driver_table->mc_address,<br>
+                             &driver_table->cpu_addr);<br>
 <br>
         return 0;<br>
 }<br>
@@ -1091,6 +1109,10 @@ static int smu_smc_table_hw_init(struct smu_context *smu,<br>
 <br>
         /* smu_dump_pptable(smu); */<br>
 <br>
+       ret = smu_set_driver_table_location(smu);<br>
+       if (ret)<br>
+               return ret;<br>
+<br>
         /*<br>
          * Copy pptable bo in the vram to smc with SMU MSGs such as<br>
          * SetDriverDramAddr and TransferTableDram2Smu.<br>
diff --git a/drivers/gpu/drm/amd/powerplay/arcturus_ppt.c b/drivers/gpu/drm/amd/powerplay/arcturus_ppt.c<br>
index 50b317f4b1e6..064b5201a8a7 100644<br>
--- a/drivers/gpu/drm/amd/powerplay/arcturus_ppt.c<br>
+++ b/drivers/gpu/drm/amd/powerplay/arcturus_ppt.c<br>
@@ -2022,7 +2022,7 @@ static int arcturus_i2c_eeprom_read_data(struct i2c_adapter *control,<br>
         SwI2cRequest_t req;<br>
         struct amdgpu_device *adev = to_amdgpu_device(control);<br>
         struct smu_table_context *smu_table = &adev->smu.smu_table;<br>
-       struct smu_table *table = &smu_table->tables[SMU_TABLE_I2C_COMMANDS];<br>
+       struct smu_table *table = &smu_table->driver_table;<br>
 <br>
         memset(&req, 0, sizeof(req));<br>
         arcturus_fill_eeprom_i2c_req(&req, false, address, numbytes, data);<br>
@@ -2261,6 +2261,7 @@ static const struct pptable_funcs arcturus_ppt_funcs = {<br>
         .check_fw_version = smu_v11_0_check_fw_version,<br>
         .write_pptable = smu_v11_0_write_pptable,<br>
         .set_min_dcef_deep_sleep = smu_v11_0_set_min_dcef_deep_sleep,<br>
+       .set_driver_table_location = smu_v11_0_set_driver_table_location,<br>
         .set_tool_table_location = smu_v11_0_set_tool_table_location,<br>
         .notify_memory_pool_location = smu_v11_0_notify_memory_pool_location,<br>
         .system_features_control = smu_v11_0_system_features_control,<br>
diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h<br>
index 02d33b50e735..b0591a8dda41 100644<br>
--- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h<br>
+++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h<br>
@@ -260,6 +260,15 @@ struct smu_table_context<br>
         struct smu_bios_boot_up_values  boot_values;<br>
         void                            *driver_pptable;<br>
         struct smu_table                *tables;<br>
+       /*<br>
+        * The driver table is just a staging buffer for<br>
+        * uploading/downloading content from the SMU.<br>
+        *<br>
+        * And the table_id for SMU_MSG_TransferTableSmu2Dram/<br>
+        * SMU_MSG_TransferTableDram2Smu instructs the SMU<br>
+        * which content the driver is interested in.<br>
+        */<br>
+       struct smu_table                driver_table;<br>
         struct smu_table                memory_pool;<br>
         uint8_t                         thermal_controller_type;<br>
 <br>
@@ -498,6 +507,7 @@ struct pptable_funcs {<br>
         int (*set_gfx_cgpg)(struct smu_context *smu, bool enable);<br>
         int (*write_pptable)(struct smu_context *smu);<br>
         int (*set_min_dcef_deep_sleep)(struct smu_context *smu);<br>
+       int (*set_driver_table_location)(struct smu_context *smu);<br>
         int (*set_tool_table_location)(struct smu_context *smu);<br>
         int (*notify_memory_pool_location)(struct smu_context *smu);<br>
         int (*set_last_dcef_min_deep_sleep_clk)(struct smu_context *smu);<br>
diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h<br>
index db3f78676aeb..662989296174 100644<br>
--- a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h<br>
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h<br>
@@ -170,6 +170,8 @@ int smu_v11_0_write_pptable(struct smu_context *smu);<br>
 <br>
 int smu_v11_0_set_min_dcef_deep_sleep(struct smu_context *smu);<br>
 <br>
+int smu_v11_0_set_driver_table_location(struct smu_context *smu);<br>
+<br>
 int smu_v11_0_set_tool_table_location(struct smu_context *smu);<br>
 <br>
 int smu_v11_0_notify_memory_pool_location(struct smu_context *smu);<br>
diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu_v12_0.h b/drivers/gpu/drm/amd/powerplay/inc/smu_v12_0.h<br>
index 3f1cd06e273c..d79e54b5ebf6 100644<br>
--- a/drivers/gpu/drm/amd/powerplay/inc/smu_v12_0.h<br>
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu_v12_0.h<br>
@@ -90,4 +90,6 @@ int smu_v12_0_mode2_reset(struct smu_context *smu);<br>
 int smu_v12_0_set_soft_freq_limited_range(struct smu_context *smu, enum smu_clk_type clk_type,<br>
                             uint32_t min, uint32_t max);<br>
 <br>
+int smu_v12_0_set_driver_table_location(struct smu_context *smu);<br>
+<br>
 #endif<br>
diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c<br>
index bb0915a6388e..a16af3a3843c 100644<br>
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c<br>
+++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c<br>
@@ -2112,6 +2112,7 @@ static const struct pptable_funcs navi10_ppt_funcs = {<br>
         .check_fw_version = smu_v11_0_check_fw_version,<br>
         .write_pptable = smu_v11_0_write_pptable,<br>
         .set_min_dcef_deep_sleep = smu_v11_0_set_min_dcef_deep_sleep,<br>
+       .set_driver_table_location = smu_v11_0_set_driver_table_location,<br>
         .set_tool_table_location = smu_v11_0_set_tool_table_location,<br>
         .notify_memory_pool_location = smu_v11_0_notify_memory_pool_location,<br>
         .system_features_control = smu_v11_0_system_features_control,<br>
diff --git a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c<br>
index 506cc6bf4bc0..861e6410363b 100644<br>
--- a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c<br>
+++ b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c<br>
@@ -920,6 +920,7 @@ static const struct pptable_funcs renoir_ppt_funcs = {<br>
         .get_dpm_ultimate_freq = smu_v12_0_get_dpm_ultimate_freq,<br>
         .mode2_reset = smu_v12_0_mode2_reset,<br>
         .set_soft_freq_limited_range = smu_v12_0_set_soft_freq_limited_range,<br>
+       .set_driver_table_location = smu_v12_0_set_driver_table_location,<br>
 };<br>
 <br>
 void renoir_set_ppt_funcs(struct smu_context *smu)<br>
diff --git a/drivers/gpu/drm/amd/powerplay/smu_internal.h b/drivers/gpu/drm/amd/powerplay/smu_internal.h<br>
index 77864e4236c4..783319ec8bf9 100644<br>
--- a/drivers/gpu/drm/amd/powerplay/smu_internal.h<br>
+++ b/drivers/gpu/drm/amd/powerplay/smu_internal.h<br>
@@ -61,6 +61,8 @@<br>
         ((smu)->ppt_funcs->write_pptable ? (smu)->ppt_funcs->write_pptable((smu)) : 0)<br>
 #define smu_set_min_dcef_deep_sleep(smu) \<br>
         ((smu)->ppt_funcs->set_min_dcef_deep_sleep ? (smu)->ppt_funcs->set_min_dcef_deep_sleep((smu)) : 0)<br>
+#define smu_set_driver_table_location(smu) \<br>
+       ((smu)->ppt_funcs->set_driver_table_location ? (smu)->ppt_funcs->set_driver_table_location((smu)) : 0)<br>
 #define smu_set_tool_table_location(smu) \<br>
         ((smu)->ppt_funcs->set_tool_table_location ? (smu)->ppt_funcs->set_tool_table_location((smu)) : 0)<br>
 #define smu_notify_memory_pool_location(smu) \<br>
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c<br>
index 6fb93eb6ab39..e804f9854027 100644<br>
--- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c<br>
+++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c<br>
@@ -776,6 +776,24 @@ int smu_v11_0_set_min_dcef_deep_sleep(struct smu_context *smu)<br>
         return smu_v11_0_set_deep_sleep_dcefclk(smu, table_context->boot_values.dcefclk / 100);<br>
 }<br>
 <br>
+int smu_v11_0_set_driver_table_location(struct smu_context *smu)<br>
+{<br>
+       struct smu_table *driver_table = &smu->smu_table.driver_table;<br>
+       int ret = 0;<br>
+<br>
+       if (driver_table->mc_address) {<br>
+               ret = smu_send_smc_msg_with_param(smu,<br>
+                               SMU_MSG_SetDriverDramAddrHigh,<br>
+                               upper_32_bits(driver_table->mc_address));<br>
+               if (!ret)<br>
+                       ret = smu_send_smc_msg_with_param(smu,<br>
+                               SMU_MSG_SetDriverDramAddrLow,<br>
+                               lower_32_bits(driver_table->mc_address));<br>
+       }<br>
+<br>
+       return ret;<br>
+}<br>
+<br>
 int smu_v11_0_set_tool_table_location(struct smu_context *smu)<br>
 {<br>
         int ret = 0;<br>
diff --git a/drivers/gpu/drm/amd/powerplay/smu_v12_0.c b/drivers/gpu/drm/amd/powerplay/smu_v12_0.c<br>
index 9e27462d0f4e..870e6db2907e 100644<br>
--- a/drivers/gpu/drm/amd/powerplay/smu_v12_0.c<br>
+++ b/drivers/gpu/drm/amd/powerplay/smu_v12_0.c<br>
@@ -318,14 +318,6 @@ int smu_v12_0_fini_smc_tables(struct smu_context *smu)<br>
 int smu_v12_0_populate_smc_tables(struct smu_context *smu)<br>
 {<br>
         struct smu_table_context *smu_table = &smu->smu_table;<br>
-       struct smu_table *table = NULL;<br>
-<br>
-       table = &smu_table->tables[SMU_TABLE_DPMCLOCKS];<br>
-       if (!table)<br>
-               return -EINVAL;<br>
-<br>
-       if (!table->cpu_addr)<br>
-               return -EINVAL;<br>
 <br>
         return smu_update_table(smu, SMU_TABLE_DPMCLOCKS, 0, smu_table->clocks_table, false);<br>
 }<br>
@@ -514,3 +506,21 @@ int smu_v12_0_set_soft_freq_limited_range(struct smu_context *smu, enum smu_clk_<br>
 <br>
         return ret;<br>
 }<br>
+<br>
+int smu_v12_0_set_driver_table_location(struct smu_context *smu)<br>
+{<br>
+       struct smu_table *driver_table = &smu->smu_table.driver_table;<br>
+       int ret = 0;<br>
+<br>
+       if (driver_table->mc_address) {<br>
+               ret = smu_send_smc_msg_with_param(smu,<br>
+                               SMU_MSG_SetDriverDramAddrHigh,<br>
+                               upper_32_bits(driver_table->mc_address));<br>
+               if (!ret)<br>
+                       ret = smu_send_smc_msg_with_param(smu,<br>
+                               SMU_MSG_SetDriverDramAddrLow,<br>
+                               lower_32_bits(driver_table->mc_address));<br>
+       }<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal">[kevin]:<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal">for these pairs of messages, the smu driver had better add a lock to protect against the case below in multi-threaded use (e.g. cat on sysfs)<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal">High A + Low B<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal">High B + Low A<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal">+<br>
+       return ret;<br>
+}<br>
diff --git a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c<br>
index 27bdcdeb08d9..38febd5ca4da 100644<br>
--- a/drivers/gpu/drm/amd/powerplay/vega20_ppt.c<br>
+++ b/drivers/gpu/drm/amd/powerplay/vega20_ppt.c<br>
@@ -3236,6 +3236,7 @@ static const struct pptable_funcs vega20_ppt_funcs = {<br>
         .check_fw_version = smu_v11_0_check_fw_version,<br>
         .write_pptable = smu_v11_0_write_pptable,<br>
         .set_min_dcef_deep_sleep = smu_v11_0_set_min_dcef_deep_sleep,<br>
+       .set_driver_table_location = smu_v11_0_set_driver_table_location,<br>
         .set_tool_table_location = smu_v11_0_set_tool_table_location,<br>
         .notify_memory_pool_location = smu_v11_0_notify_memory_pool_location,<br>
         .system_features_control = smu_v11_0_system_features_control,<br>
-- <br>
2.24.1<br>
<br>
_______________________________________________<br>
amd-gfx mailing list<br>
<a href="mailto:amd-gfx@lists.freedesktop.org">amd-gfx@lists.freedesktop.org</a><br>
<a href="https://lists.freedesktop.org/mailman/listinfo/amd-gfx">https://lists.freedesktop.org/mailman/listinfo/amd-gfx</a><o:p></o:p></p>
</div>
</div>
</div>
</div>
</div>
</body>
</html>