<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<!--[if !mso]><style>v\:* {behavior:url(#default#VML);}
o\:* {behavior:url(#default#VML);}
w\:* {behavior:url(#default#VML);}
.shape {behavior:url(#default#VML);}
</style><![endif]--><style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:DengXian;
panose-1:2 1 6 0 3 1 1 1 1 1;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
{font-family:"\@DengXian";
panose-1:2 1 6 0 3 1 1 1 1 1;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri",sans-serif;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:#0563C1;
text-decoration:underline;}
p.msipheadera92e061b, li.msipheadera92e061b, div.msipheadera92e061b
{mso-style-name:msipheadera92e061b;
mso-margin-top-alt:auto;
margin-right:0in;
mso-margin-bottom-alt:auto;
margin-left:0in;
font-size:11.0pt;
font-family:"Calibri",sans-serif;}
span.EmailStyle20
{mso-style-type:personal-compose;
font-family:"Arial",sans-serif;
color:#0078D7;}
.MsoChpDefault
{mso-style-type:export-only;
font-size:10.0pt;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="EN-US" link="#0563C1" vlink="#954F72">
<p style="font-family:Arial;font-size:10pt;color:#0078D7;margin:15pt;" align="Left">
[AMD Official Use Only - Internal Distribution Only]<br>
</p>
<br>
<div>
<div class="WordSection1">
<p class="msipheadera92e061b" style="margin:0in;margin-bottom:.0001pt"><span style="font-size:10.0pt;font-family:"Arial",sans-serif;color:#0078D7">[AMD Official Use Only - Internal Distribution Only]</span><o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Agreed. We actually discussed next step to unify legacy powerplay and swsmu support. Will keep this in pipe.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Regards,<br>
Hawking<o:p></o:p></p>
<div>
<div style="border:none;border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal"><b>From:</b> Deucher, Alexander <Alexander.Deucher@amd.com> <br>
<b>Sent:</b> Wednesday, May 6, 2020 23:24<br>
<b>To:</b> Wang, Kevin(Yang) <Kevin1.Wang@amd.com>; Zhang, Hawking <Hawking.Zhang@amd.com>; amd-gfx@lists.freedesktop.org<br>
<b>Cc:</b> Liu, Monk <Monk.Liu@amd.com>; Feng, Kenneth <Kenneth.Feng@amd.com><br>
<b>Subject:</b> Re: [PATCH 2/3] drm/amdgpu: optimize amdgpu device attribute code<o:p></o:p></p>
</div>
</div>
<p class="MsoNormal"><o:p> </o:p></p>
<p style="margin:15.0pt"><span style="font-size:10.0pt;font-family:"Arial",sans-serif;color:#0078D7">[AMD Official Use Only - Internal Distribution Only]<o:p></o:p></span></p>
<p class="MsoNormal"><o:p> </o:p></p>
<div>
<div>
<p class="MsoNormal"><span style="font-size:12.0pt;color:black">Perhaps it's too much churn for this patch set, but I'd like to unify the pp func callbacks between powerplay and swsmu so we can drop all of the is_swsmu_supported() and function pointer checks
sprinkled all through the code.<o:p></o:p></span></p>
</div>
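<div>
<p class="MsoNormal"><span style="font-size:12.0pt;color:black">Roughly something like the following, just as a sketch (hypothetical names here, not the existing powerplay/swsmu interfaces): a single ops table filled in by whichever backend probes, so callers only ever test one function pointer.<o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span style="font-size:12.0pt;color:black">/* Sketch only: one vtable filled in by either the powerplay or the swsmu<br>
 * backend, so callers test a single function pointer instead of scattering<br>
 * backend checks through every handler. */<br>
struct unified_pm_ops {<br>
        int (*get_power_state)(void *handle);<br>
        int (*set_power_state)(void *handle, int state);<br>
};<br>
<br>
struct unified_pm_ctx {<br>
        const struct unified_pm_ops *ops; /* set once at init by the backend */<br>
        void *handle;                     /* hwmgr or smu context */<br>
};<br>
<br>
static int unified_pm_get_power_state(struct unified_pm_ctx *pm)<br>
{<br>
        if (!pm->ops || !pm->ops->get_power_state)<br>
                return -1; /* not supported by this backend */<br>
        return pm->ops->get_power_state(pm->handle);<br>
}<o:p></o:p></span></p>
</div>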
<div>
<p class="MsoNormal"><span style="font-size:12.0pt;color:black"><o:p> </o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span style="font-size:12.0pt;color:black">Alex<o:p></o:p></span></p>
</div>
<div class="MsoNormal" align="center" style="text-align:center">
<hr size="2" width="98%" align="center">
</div>
<div id="divRplyFwdMsg">
<p class="MsoNormal"><b><span style="color:black">From:</span></b><span style="color:black"> Wang, Kevin(Yang) <<a href="mailto:Kevin1.Wang@amd.com">Kevin1.Wang@amd.com</a>><br>
<b>Sent:</b> Wednesday, May 6, 2020 7:04 AM<br>
<b>To:</b> Zhang, Hawking <<a href="mailto:Hawking.Zhang@amd.com">Hawking.Zhang@amd.com</a>>;
<a href="mailto:amd-gfx@lists.freedesktop.org">amd-gfx@lists.freedesktop.org</a> <<a href="mailto:amd-gfx@lists.freedesktop.org">amd-gfx@lists.freedesktop.org</a>><br>
<b>Cc:</b> Deucher, Alexander <<a href="mailto:Alexander.Deucher@amd.com">Alexander.Deucher@amd.com</a>>; Liu, Monk <<a href="mailto:Monk.Liu@amd.com">Monk.Liu@amd.com</a>>; Feng, Kenneth <<a href="mailto:Kenneth.Feng@amd.com">Kenneth.Feng@amd.com</a>><br>
<b>Subject:</b> Re: [PATCH 2/3] drm/amdgpu: optimize amdgpu device attribute code</span>
<o:p></o:p></p>
<div>
<p class="MsoNormal"> <o:p></o:p></p>
</div>
</div>
<div>
<p style="margin:15.0pt"><span style="font-size:10.0pt;font-family:"Arial",sans-serif;color:#0078D7">[AMD Official Use Only - Internal Distribution Only]<o:p></o:p></span></p>
<p class="MsoNormal"><o:p> </o:p></p>
<div>
<div>
<p class="MsoNormal"><span style="font-size:12.0pt;color:black"><o:p> </o:p></span></p>
</div>
<div class="MsoNormal" align="center" style="text-align:center">
<hr size="2" width="98%" align="center">
</div>
<div id="x_divRplyFwdMsg">
<p class="MsoNormal"><b><span style="color:black">From:</span></b><span style="color:black"> Zhang, Hawking <<a href="mailto:Hawking.Zhang@amd.com">Hawking.Zhang@amd.com</a>><br>
<b>Sent:</b> Wednesday, May 6, 2020 5:26 PM<br>
<b>To:</b> Wang, Kevin(Yang) <<a href="mailto:Kevin1.Wang@amd.com">Kevin1.Wang@amd.com</a>>;
<a href="mailto:amd-gfx@lists.freedesktop.org">amd-gfx@lists.freedesktop.org</a> <<a href="mailto:amd-gfx@lists.freedesktop.org">amd-gfx@lists.freedesktop.org</a>><br>
<b>Cc:</b> Deucher, Alexander <<a href="mailto:Alexander.Deucher@amd.com">Alexander.Deucher@amd.com</a>>; Liu, Monk <<a href="mailto:Monk.Liu@amd.com">Monk.Liu@amd.com</a>>; Feng, Kenneth <<a href="mailto:Kenneth.Feng@amd.com">Kenneth.Feng@amd.com</a>><br>
<b>Subject:</b> RE: [PATCH 2/3] drm/amdgpu: optimize amdgpu device attribute code</span>
<o:p></o:p></p>
<div>
<p class="MsoNormal"> <o:p></o:p></p>
</div>
</div>
<div>
<div>
<p class="MsoNormal">[AMD Official Use Only - Internal Distribution Only]<br>
<br>
Hi Kevin,<br>
<br>
Thanks for the series that removes the duplicated one_vf mode checks from all the amdgpu_dpm functions.<br>
<br>
Can we split the patch into two? One for refining the amdgpu device sysfs attr code with the new dev_attr structures, the other for retiring all of the unnecessary one_vf mode checks.<br>
<br>
<span style="color:black;background:white">thanks your comment.</span><br>
[kevin]: Q1, agree, i will split it into two patch.<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal"><br>
+enum amdgpu_device_attr_states {<br>
+ ATTR_STATE_UNSUPPORT = 0,<br>
+ ATTR_STATE_SUPPORT,<br>
+ ATTR_STATE_DEAD,<br>
+ ATTR_STATE_ALIVE,<br>
+};<br>
+<br>
The attr_states enum seems unnecessary to me. You need a flag to mark whether a particular attribute is supported by a specific ASIC or not, right? Then just a bool variable should be good enough for this purpose, like attr->supported. I'd like to understand the
use case for DEAD and ALIVE. Accordingly, you can simplify the logic to only remove the supported ones.<o:p></o:p></p>
</div>
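<div>
<p class="MsoNormal">For reference, a bare sketch of that simpler shape (a hypothetical simplification of the struct in this patch, not what it currently defines):<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal">#include <linux/device.h><br>
#include <linux/types.h><br>
<br>
/* hypothetical simplification: one bool instead of the<br>
 * UNSUPPORT/SUPPORT/DEAD/ALIVE enum */<br>
struct amdgpu_device_attr {<br>
        struct device_attribute dev_attr;<br>
        uint32_t flags;<br>
        bool supported; /* set once by the per-ASIC update callback */<br>
};<br>
<br>
/* teardown then needs no ALIVE/DEAD bookkeeping, e.g.:<br>
 *      if (attr->supported)<br>
 *              device_remove_file(adev->dev, &attr->dev_attr);<br>
 */<o:p></o:p></p>
</div>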
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal">[kevin]: Q2, the origin idea, it is used to store sysfs file state, but for this case, we can try to drop DEAD & ALIVE state, <o:p></o:p></p>
</div>
<div>
<p class="MsoNormal">because the origin code logic will exit directly when create file fail.<br>
<br>
If we have to introduce more complicated flags to indicate different status, I'd prefer to go directly to initialize one_vf mode attr sets and bare-metal attr sets directly.<o:p></o:p></p>
</div>
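<div>
<p class="MsoNormal">For comparison, that alternative might look roughly like this (hypothetical tables, reusing the macros from this patch, with the right table picked once in amdgpu_pm_sysfs_init()):<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal">/* hypothetical alternative: one static table per mode, selected once at<br>
 * init time instead of filtering a single table through a flags mask */<br>
static struct amdgpu_device_attr bare_metal_attrs[] = {<br>
        AMDGPU_DEVICE_ATTR_RW(power_dpm_state, ATTR_FLAG_BASIC),<br>
        AMDGPU_DEVICE_ATTR_RO(pp_num_states, ATTR_FLAG_BASIC),<br>
        /* ... the full bare-metal set ... */<br>
};<br>
<br>
static struct amdgpu_device_attr one_vf_attrs[] = {<br>
        AMDGPU_DEVICE_ATTR_RW(power_dpm_state, ATTR_FLAG_ONEVF),<br>
        AMDGPU_DEVICE_ATTR_RW(pp_dpm_sclk, ATTR_FLAG_ONEVF),<br>
        /* ... the one_vf subset ... */<br>
};<o:p></o:p></p>
</div>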
<div>
<p class="MsoNormal">[kevin]: Q3, i'd like to keep this patch code, in fact, not all sysfs devices need to be created on bare-metal mode.<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal">the driver must check it at runtime. eg: is_sw_smu_support(), if (asic_chip == XXX), etc..<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal"><br>
In addition, the function naming like default_attr_perform also confuses me. Is it the function used to update the attr status?
<br>
+static int default_attr_perform(struct amdgpu_device *adev, struct amdgpu_device_attr *attr,<br>
+ uint32_t mask)<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal">[kevin]: Q4, yes, the function is used to update attr status according to asic information at runtime.<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal">maybe rename to "attr_update" is better.<o:p></o:p></p>
</div>
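<div>
<p class="MsoNormal">e.g. something like this (a sketch only; same signature as in this patch, just renamed, with the struct member "perform" renamed to match):<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal">/* possible renamed hook, signature unchanged from the patch */<br>
static int default_attr_update(struct amdgpu_device *adev,<br>
                               struct amdgpu_device_attr *attr,<br>
                               uint32_t mask);<o:p></o:p></p>
</div>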
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal">Best Regards,<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal">Kevin<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal">Regards,<br>
Hawking<br>
<br>
-----Original Message-----<br>
From: Wang, Kevin(Yang) <<a href="mailto:Kevin1.Wang@amd.com">Kevin1.Wang@amd.com</a>>
<br>
Sent: Wednesday, May 6, 2020 14:23<br>
To: <a href="mailto:amd-gfx@lists.freedesktop.org">amd-gfx@lists.freedesktop.org</a><br>
Cc: Zhang, Hawking <<a href="mailto:Hawking.Zhang@amd.com">Hawking.Zhang@amd.com</a>>; Deucher, Alexander <<a href="mailto:Alexander.Deucher@amd.com">Alexander.Deucher@amd.com</a>>; Liu, Monk <<a href="mailto:Monk.Liu@amd.com">Monk.Liu@amd.com</a>>; Feng, Kenneth
<<a href="mailto:Kenneth.Feng@amd.com">Kenneth.Feng@amd.com</a>>; Wang, Kevin(Yang) <<a href="mailto:Kevin1.Wang@amd.com">Kevin1.Wang@amd.com</a>><br>
Subject: [PATCH 2/3] drm/amdgpu: optimize amdgpu device attribute code<br>
<br>
Unify the amdgpu device attribute node functions:<br>
1. add some helper functions to create amdgpu device attribute nodes.<br>
2. create device nodes according to the device attr flags for different VF modes.<br>
3. rename some functions to adapt to the new interface.<br>
4. remove unnecessary virt mode checks in internal functions (xxx_show, xxx_store).<br>
<br>
Signed-off-by: Kevin Wang <<a href="mailto:kevin1.wang@amd.com">kevin1.wang@amd.com</a>><br>
---<br>
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c | 577 ++++++++++---------------<br>
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.h |  48 ++<br>
2 files changed, 271 insertions(+), 354 deletions(-)<br>
<br>
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c<br>
index c762deb5abc7..367ac79418b9 100644<br>
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c<br>
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c<br>
@@ -154,18 +154,15 @@ int amdgpu_dpm_read_sensor(struct amdgpu_device *adev, enum amd_pp_sensors senso<br>
*<br>
*/<br>
<br>
-static ssize_t amdgpu_get_dpm_state(struct device *dev,<br>
- struct device_attribute *attr,<br>
- char *buf)<br>
+static ssize_t amdgpu_get_power_dpm_state(struct device *dev,<br>
+ struct device_attribute *attr,<br>
+ char *buf)<br>
{<br>
struct drm_device *ddev = dev_get_drvdata(dev);<br>
struct amdgpu_device *adev = ddev->dev_private;<br>
enum amd_pm_state_type pm;<br>
int ret;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return 0;<br>
-<br>
ret = pm_runtime_get_sync(ddev->dev);<br>
if (ret < 0)<br>
return ret;<br>
@@ -189,19 +186,16 @@ static ssize_t amdgpu_get_dpm_state(struct device *dev,<br>
(pm == POWER_STATE_TYPE_BALANCED) ? "balanced" : "performance");<br>
}<br>
<br>
-static ssize_t amdgpu_set_dpm_state(struct device *dev,<br>
- struct device_attribute *attr,<br>
- const char *buf,<br>
- size_t count)<br>
+static ssize_t amdgpu_set_power_dpm_state(struct device *dev,<br>
+ struct device_attribute *attr,<br>
+ const char *buf,<br>
+ size_t count)<br>
{<br>
struct drm_device *ddev = dev_get_drvdata(dev);<br>
struct amdgpu_device *adev = ddev->dev_private;<br>
enum amd_pm_state_type state;<br>
int ret;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return -EINVAL;<br>
-<br>
if (strncmp("battery", buf, strlen("battery")) == 0)<br>
state = POWER_STATE_TYPE_BATTERY;<br>
else if (strncmp("balanced", buf, strlen("balanced")) == 0) @@ -294,18 +288,15 @@ static ssize_t amdgpu_set_dpm_state(struct device *dev,<br>
*<br>
*/<br>
<br>
-static ssize_t amdgpu_get_dpm_forced_performance_level(struct device *dev,<br>
- struct device_attribute *attr,<br>
- char *buf)<br>
+static ssize_t amdgpu_get_power_dpm_force_performance_level(struct device *dev,<br>
+ struct device_attribute *attr,<br>
+ char *buf)<br>
{<br>
struct drm_device *ddev = dev_get_drvdata(dev);<br>
struct amdgpu_device *adev = ddev->dev_private;<br>
enum amd_dpm_forced_level level = 0xff;<br>
int ret;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return 0;<br>
-<br>
ret = pm_runtime_get_sync(ddev->dev);<br>
if (ret < 0)<br>
return ret;<br>
@@ -332,10 +323,10 @@ static ssize_t amdgpu_get_dpm_forced_performance_level(struct device *dev,<br>
"unknown");<br>
}<br>
<br>
-static ssize_t amdgpu_set_dpm_forced_performance_level(struct device *dev,<br>
- struct device_attribute *attr,<br>
- const char *buf,<br>
- size_t count)<br>
+static ssize_t amdgpu_set_power_dpm_force_performance_level(struct device *dev,<br>
+ struct device_attribute *attr,<br>
+ const char *buf,<br>
+ size_t count)<br>
{<br>
struct drm_device *ddev = dev_get_drvdata(dev);<br>
struct amdgpu_device *adev = ddev->dev_private;<br>
@@ -343,9 +334,6 @@ static ssize_t amdgpu_set_dpm_forced_performance_level(struct device *dev,<br>
enum amd_dpm_forced_level current_level = 0xff;<br>
int ret = 0;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return -EINVAL;<br>
-<br>
if (strncmp("low", buf, strlen("low")) == 0) {<br>
level = AMD_DPM_FORCED_LEVEL_LOW;<br>
} else if (strncmp("high", buf, strlen("high")) == 0) { @@ -475,9 +463,6 @@ static ssize_t amdgpu_get_pp_cur_state(struct device *dev,<br>
enum amd_pm_state_type pm = 0;<br>
int i = 0, ret = 0;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return 0;<br>
-<br>
ret = pm_runtime_get_sync(ddev->dev);<br>
if (ret < 0)<br>
return ret;<br>
@@ -514,9 +499,6 @@ static ssize_t amdgpu_get_pp_force_state(struct device *dev,<br>
struct drm_device *ddev = dev_get_drvdata(dev);<br>
struct amdgpu_device *adev = ddev->dev_private;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return 0;<br>
-<br>
if (adev->pp_force_state_enabled)<br>
return amdgpu_get_pp_cur_state(dev, attr, buf);<br>
else<br>
@@ -534,9 +516,6 @@ static ssize_t amdgpu_set_pp_force_state(struct device *dev,<br>
unsigned long idx;<br>
int ret;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return -EINVAL;<br>
-<br>
if (strlen(buf) == 1)<br>
adev->pp_force_state_enabled = false;<br>
else if (is_support_sw_smu(adev))<br>
@@ -592,9 +571,6 @@ static ssize_t amdgpu_get_pp_table(struct device *dev,<br>
char *table = NULL;<br>
int size, ret;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return 0;<br>
-<br>
ret = pm_runtime_get_sync(ddev->dev);<br>
if (ret < 0)<br>
return ret;<br>
@@ -634,9 +610,6 @@ static ssize_t amdgpu_set_pp_table(struct device *dev,<br>
struct amdgpu_device *adev = ddev->dev_private;<br>
int ret = 0;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return -EINVAL;<br>
-<br>
ret = pm_runtime_get_sync(ddev->dev);<br>
if (ret < 0)<br>
return ret;<br>
@@ -873,10 +846,10 @@ static ssize_t amdgpu_get_pp_od_clk_voltage(struct device *dev,<br>
* the corresponding bit from original ppfeature masks and input the<br>
* new ppfeature masks.<br>
*/<br>
-static ssize_t amdgpu_set_pp_feature_status(struct device *dev,<br>
- struct device_attribute *attr,<br>
- const char *buf,<br>
- size_t count)<br>
+static ssize_t amdgpu_set_pp_features(struct device *dev,<br>
+ struct device_attribute *attr,<br>
+ const char *buf,<br>
+ size_t count)<br>
{<br>
struct drm_device *ddev = dev_get_drvdata(dev);<br>
struct amdgpu_device *adev = ddev->dev_private;<br>
@@ -917,9 +890,9 @@ static ssize_t amdgpu_set_pp_feature_status(struct device *dev,<br>
return count;<br>
}<br>
<br>
-static ssize_t amdgpu_get_pp_feature_status(struct device *dev,<br>
- struct device_attribute *attr,<br>
- char *buf)<br>
+static ssize_t amdgpu_get_pp_features(struct device *dev,<br>
+ struct device_attribute *attr,<br>
+ char *buf)<br>
{<br>
struct drm_device *ddev = dev_get_drvdata(dev);<br>
struct amdgpu_device *adev = ddev->dev_private;<br>
@@ -985,9 +958,6 @@ static ssize_t amdgpu_get_pp_dpm_sclk(struct device *dev,<br>
ssize_t size;<br>
int ret;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return 0;<br>
-<br>
ret = pm_runtime_get_sync(ddev->dev);<br>
if (ret < 0)<br>
return ret;<br>
@@ -1051,9 +1021,6 @@ static ssize_t amdgpu_set_pp_dpm_sclk(struct device *dev,<br>
int ret;<br>
uint32_t mask = 0;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return -EINVAL;<br>
-<br>
ret = amdgpu_read_mask(buf, count, &mask);<br>
if (ret)<br>
return ret;<br>
@@ -1085,9 +1052,6 @@ static ssize_t amdgpu_get_pp_dpm_mclk(struct device *dev,<br>
ssize_t size;<br>
int ret;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return 0;<br>
-<br>
ret = pm_runtime_get_sync(ddev->dev);<br>
if (ret < 0)<br>
return ret;<br>
@@ -1115,9 +1079,6 @@ static ssize_t amdgpu_set_pp_dpm_mclk(struct device *dev,<br>
uint32_t mask = 0;<br>
int ret;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return -EINVAL;<br>
-<br>
ret = amdgpu_read_mask(buf, count, &mask);<br>
if (ret)<br>
return ret;<br>
@@ -1149,9 +1110,6 @@ static ssize_t amdgpu_get_pp_dpm_socclk(struct device *dev,<br>
ssize_t size;<br>
int ret;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return 0;<br>
-<br>
ret = pm_runtime_get_sync(ddev->dev);<br>
if (ret < 0)<br>
return ret;<br>
@@ -1179,9 +1137,6 @@ static ssize_t amdgpu_set_pp_dpm_socclk(struct device *dev,<br>
int ret;<br>
uint32_t mask = 0;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return -EINVAL;<br>
-<br>
ret = amdgpu_read_mask(buf, count, &mask);<br>
if (ret)<br>
return ret;<br>
@@ -1215,9 +1170,6 @@ static ssize_t amdgpu_get_pp_dpm_fclk(struct device *dev,<br>
ssize_t size;<br>
int ret;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return 0;<br>
-<br>
ret = pm_runtime_get_sync(ddev->dev);<br>
if (ret < 0)<br>
return ret;<br>
@@ -1245,9 +1197,6 @@ static ssize_t amdgpu_set_pp_dpm_fclk(struct device *dev,<br>
int ret;<br>
uint32_t mask = 0;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return -EINVAL;<br>
-<br>
ret = amdgpu_read_mask(buf, count, &mask);<br>
if (ret)<br>
return ret;<br>
@@ -1347,9 +1296,6 @@ static ssize_t amdgpu_get_pp_dpm_pcie(struct device *dev,<br>
ssize_t size;<br>
int ret;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return 0;<br>
-<br>
ret = pm_runtime_get_sync(ddev->dev);<br>
if (ret < 0)<br>
return ret;<br>
@@ -1377,9 +1323,6 @@ static ssize_t amdgpu_set_pp_dpm_pcie(struct device *dev,<br>
int ret;<br>
uint32_t mask = 0;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return -EINVAL;<br>
-<br>
ret = amdgpu_read_mask(buf, count, &mask);<br>
if (ret)<br>
return ret;<br>
@@ -1571,9 +1514,6 @@ static ssize_t amdgpu_get_pp_power_profile_mode(struct device *dev,<br>
ssize_t size;<br>
int ret;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return 0;<br>
-<br>
ret = pm_runtime_get_sync(ddev->dev);<br>
if (ret < 0)<br>
return ret;<br>
@@ -1615,9 +1555,6 @@ static ssize_t amdgpu_set_pp_power_profile_mode(struct device *dev,<br>
if (ret)<br>
return -EINVAL;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return -EINVAL;<br>
-<br>
if (profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {<br>
if (count < 2 || count > 127)<br>
return -EINVAL;<br>
@@ -1663,17 +1600,14 @@ static ssize_t amdgpu_set_pp_power_profile_mode(struct device *dev,<br>
* The SMU firmware computes a percentage of load based on the<br>
* aggregate activity level in the IP cores.<br>
*/<br>
-static ssize_t amdgpu_get_busy_percent(struct device *dev,<br>
- struct device_attribute *attr,<br>
- char *buf)<br>
+static ssize_t amdgpu_get_gpu_busy_percent(struct device *dev,<br>
+ struct device_attribute *attr,<br>
+ char *buf)<br>
{<br>
struct drm_device *ddev = dev_get_drvdata(dev);<br>
struct amdgpu_device *adev = ddev->dev_private;<br>
int r, value, size = sizeof(value);<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return 0;<br>
-<br>
r = pm_runtime_get_sync(ddev->dev);<br>
if (r < 0)<br>
return r;<br>
@@ -1699,17 +1633,14 @@ static ssize_t amdgpu_get_busy_percent(struct device *dev,<br>
* The SMU firmware computes a percentage of load based on the<br>
* aggregate activity level in the IP cores.<br>
*/<br>
-static ssize_t amdgpu_get_memory_busy_percent(struct device *dev,<br>
- struct device_attribute *attr,<br>
- char *buf)<br>
+static ssize_t amdgpu_get_mem_busy_percent(struct device *dev,<br>
+ struct device_attribute *attr,<br>
+ char *buf)<br>
{<br>
struct drm_device *ddev = dev_get_drvdata(dev);<br>
struct amdgpu_device *adev = ddev->dev_private;<br>
int r, value, size = sizeof(value);<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return 0;<br>
-<br>
r = pm_runtime_get_sync(ddev->dev);<br>
if (r < 0)<br>
return r;<br>
@@ -1748,9 +1679,6 @@ static ssize_t amdgpu_get_pcie_bw(struct device *dev,<br>
uint64_t count0, count1;<br>
int ret;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return 0;<br>
-<br>
ret = pm_runtime_get_sync(ddev->dev);<br>
if (ret < 0)<br>
return ret;<br>
@@ -1781,66 +1709,186 @@ static ssize_t amdgpu_get_unique_id(struct device *dev,<br>
struct drm_device *ddev = dev_get_drvdata(dev);<br>
struct amdgpu_device *adev = ddev->dev_private;<br>
<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
- return 0;<br>
-<br>
if (adev->unique_id)<br>
return snprintf(buf, PAGE_SIZE, "%016llx\n", adev->unique_id);<br>
<br>
return 0;<br>
}<br>
<br>
-static DEVICE_ATTR(power_dpm_state, S_IRUGO | S_IWUSR, amdgpu_get_dpm_state, amdgpu_set_dpm_state);<br>
-static DEVICE_ATTR(power_dpm_force_performance_level, S_IRUGO | S_IWUSR,<br>
- amdgpu_get_dpm_forced_performance_level,<br>
- amdgpu_set_dpm_forced_performance_level);<br>
-static DEVICE_ATTR(pp_num_states, S_IRUGO, amdgpu_get_pp_num_states, NULL);<br>
-static DEVICE_ATTR(pp_cur_state, S_IRUGO, amdgpu_get_pp_cur_state, NULL);<br>
-static DEVICE_ATTR(pp_force_state, S_IRUGO | S_IWUSR,<br>
- amdgpu_get_pp_force_state,<br>
- amdgpu_set_pp_force_state);<br>
-static DEVICE_ATTR(pp_table, S_IRUGO | S_IWUSR,<br>
- amdgpu_get_pp_table,<br>
- amdgpu_set_pp_table);<br>
-static DEVICE_ATTR(pp_dpm_sclk, S_IRUGO | S_IWUSR,<br>
- amdgpu_get_pp_dpm_sclk,<br>
- amdgpu_set_pp_dpm_sclk);<br>
-static DEVICE_ATTR(pp_dpm_mclk, S_IRUGO | S_IWUSR,<br>
- amdgpu_get_pp_dpm_mclk,<br>
- amdgpu_set_pp_dpm_mclk);<br>
-static DEVICE_ATTR(pp_dpm_socclk, S_IRUGO | S_IWUSR,<br>
- amdgpu_get_pp_dpm_socclk,<br>
- amdgpu_set_pp_dpm_socclk);<br>
-static DEVICE_ATTR(pp_dpm_fclk, S_IRUGO | S_IWUSR,<br>
- amdgpu_get_pp_dpm_fclk,<br>
- amdgpu_set_pp_dpm_fclk);<br>
-static DEVICE_ATTR(pp_dpm_dcefclk, S_IRUGO | S_IWUSR,<br>
- amdgpu_get_pp_dpm_dcefclk,<br>
- amdgpu_set_pp_dpm_dcefclk);<br>
-static DEVICE_ATTR(pp_dpm_pcie, S_IRUGO | S_IWUSR,<br>
- amdgpu_get_pp_dpm_pcie,<br>
- amdgpu_set_pp_dpm_pcie);<br>
-static DEVICE_ATTR(pp_sclk_od, S_IRUGO | S_IWUSR,<br>
- amdgpu_get_pp_sclk_od,<br>
- amdgpu_set_pp_sclk_od);<br>
-static DEVICE_ATTR(pp_mclk_od, S_IRUGO | S_IWUSR,<br>
- amdgpu_get_pp_mclk_od,<br>
- amdgpu_set_pp_mclk_od);<br>
-static DEVICE_ATTR(pp_power_profile_mode, S_IRUGO | S_IWUSR,<br>
- amdgpu_get_pp_power_profile_mode,<br>
- amdgpu_set_pp_power_profile_mode);<br>
-static DEVICE_ATTR(pp_od_clk_voltage, S_IRUGO | S_IWUSR,<br>
- amdgpu_get_pp_od_clk_voltage,<br>
- amdgpu_set_pp_od_clk_voltage);<br>
-static DEVICE_ATTR(gpu_busy_percent, S_IRUGO,<br>
- amdgpu_get_busy_percent, NULL);<br>
-static DEVICE_ATTR(mem_busy_percent, S_IRUGO,<br>
- amdgpu_get_memory_busy_percent, NULL);<br>
-static DEVICE_ATTR(pcie_bw, S_IRUGO, amdgpu_get_pcie_bw, NULL);<br>
-static DEVICE_ATTR(pp_features, S_IRUGO | S_IWUSR,<br>
- amdgpu_get_pp_feature_status,<br>
- amdgpu_set_pp_feature_status);<br>
-static DEVICE_ATTR(unique_id, S_IRUGO, amdgpu_get_unique_id, NULL);<br>
+static struct amdgpu_device_attr amdgpu_device_attrs[] = {<br>
+ AMDGPU_DEVICE_ATTR_RW(power_dpm_state, ATTR_FLAG_BASIC|ATTR_FLAG_ONEVF),<br>
+ AMDGPU_DEVICE_ATTR_RW(power_dpm_force_performance_level, ATTR_FLAG_BASIC|ATTR_FLAG_ONEVF),<br>
+ AMDGPU_DEVICE_ATTR_RO(pp_num_states, ATTR_FLAG_BASIC),<br>
+ AMDGPU_DEVICE_ATTR_RO(pp_cur_state, ATTR_FLAG_BASIC),<br>
+ AMDGPU_DEVICE_ATTR_RW(pp_force_state, ATTR_FLAG_BASIC),<br>
+ AMDGPU_DEVICE_ATTR_RW(pp_table, ATTR_FLAG_BASIC),<br>
+ AMDGPU_DEVICE_ATTR_RW(pp_dpm_sclk, ATTR_FLAG_BASIC|ATTR_FLAG_ONEVF),<br>
+ AMDGPU_DEVICE_ATTR_RW(pp_dpm_mclk, ATTR_FLAG_BASIC|ATTR_FLAG_ONEVF),<br>
+ AMDGPU_DEVICE_ATTR_RW(pp_dpm_socclk, ATTR_FLAG_BASIC|ATTR_FLAG_ONEVF),<br>
+ AMDGPU_DEVICE_ATTR_RW(pp_dpm_fclk, ATTR_FLAG_BASIC|ATTR_FLAG_ONEVF),<br>
+ AMDGPU_DEVICE_ATTR_RW(pp_dpm_dcefclk, ATTR_FLAG_BASIC),<br>
+ AMDGPU_DEVICE_ATTR_RW(pp_dpm_pcie, ATTR_FLAG_BASIC),<br>
+ AMDGPU_DEVICE_ATTR_RW(pp_sclk_od, ATTR_FLAG_BASIC),<br>
+ AMDGPU_DEVICE_ATTR_RW(pp_mclk_od, ATTR_FLAG_BASIC),<br>
+ AMDGPU_DEVICE_ATTR_RW(pp_power_profile_mode, ATTR_FLAG_BASIC),<br>
+ AMDGPU_DEVICE_ATTR_RW(pp_od_clk_voltage, ATTR_FLAG_BASIC),<br>
+ AMDGPU_DEVICE_ATTR_RO(gpu_busy_percent, ATTR_FLAG_BASIC),<br>
+ AMDGPU_DEVICE_ATTR_RO(mem_busy_percent, ATTR_FLAG_BASIC),<br>
+ AMDGPU_DEVICE_ATTR_RO(pcie_bw, ATTR_FLAG_BASIC),<br>
+ AMDGPU_DEVICE_ATTR_RW(pp_features, ATTR_FLAG_BASIC),<br>
+ AMDGPU_DEVICE_ATTR_RO(unique_id, ATTR_FLAG_BASIC),<br>
+};<br>
+<br>
+static int default_attr_perform(struct amdgpu_device *adev, struct amdgpu_device_attr *attr,<br>
+ uint32_t mask)<br>
+{<br>
+ struct device_attribute *dev_attr = &attr->dev_attr;<br>
+ const char *attr_name = dev_attr->attr.name;<br>
+ struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;<br>
+ enum amd_asic_type asic_type = adev->asic_type;<br>
+<br>
+ if (!(attr->flags & mask)) {<br>
+ attr->states = ATTR_STATE_UNSUPPORT;<br>
+ return 0;<br>
+ }<br>
+<br>
+#define DEVICE_ATTR_IS(_name) (!strcmp(attr_name, #_name))<br>
+<br>
+ if (DEVICE_ATTR_IS(pp_dpm_socclk)) {<br>
+ if (asic_type < CHIP_VEGA10)<br>
+ attr->states = ATTR_STATE_UNSUPPORT;<br>
+ } else if (DEVICE_ATTR_IS(pp_dpm_dcefclk)) {<br>
+ if (asic_type < CHIP_VEGA10 || asic_type == CHIP_ARCTURUS)<br>
+ attr->states = ATTR_STATE_UNSUPPORT;<br>
+ } else if (DEVICE_ATTR_IS(pp_dpm_fclk)) {<br>
+ if (asic_type < CHIP_VEGA20)<br>
+ attr->states = ATTR_STATE_UNSUPPORT;<br>
+ } else if (DEVICE_ATTR_IS(pp_dpm_pcie)) {<br>
+ if (asic_type == CHIP_ARCTURUS)<br>
+ attr->states = ATTR_STATE_UNSUPPORT;<br>
+ } else if (DEVICE_ATTR_IS(pp_od_clk_voltage)) {<br>
+ attr->states = ATTR_STATE_UNSUPPORT;<br>
+ if ((is_support_sw_smu(adev) && adev->smu.od_enabled) ||<br>
+ (!is_support_sw_smu(adev) && hwmgr->od_enabled))<br>
+ attr->states = ATTR_STATE_SUPPORT;<br>
+ } else if (DEVICE_ATTR_IS(mem_busy_percent)) {<br>
+ if (adev->flags & AMD_IS_APU || asic_type == CHIP_VEGA10)<br>
+ attr->states = ATTR_STATE_UNSUPPORT;<br>
+ } else if (DEVICE_ATTR_IS(pcie_bw)) {<br>
+ /* PCIe Perf counters won't work on APU nodes */<br>
+ if (adev->flags & AMD_IS_APU)<br>
+ attr->states = ATTR_STATE_UNSUPPORT;<br>
+ } else if (DEVICE_ATTR_IS(unique_id)) {<br>
+ if (!adev->unique_id)<br>
+ attr->states = ATTR_STATE_UNSUPPORT;<br>
+ } else if (DEVICE_ATTR_IS(pp_features)) {<br>
+ if (adev->flags & AMD_IS_APU || asic_type < CHIP_VEGA10)<br>
+ attr->states = ATTR_STATE_UNSUPPORT;<br>
+ }<br>
+<br>
+ if (asic_type == CHIP_ARCTURUS) {<br>
+ /* Arcturus does not support standalone mclk/socclk/fclk level setting */<br>
+ if (DEVICE_ATTR_IS(pp_dpm_mclk) ||<br>
+ DEVICE_ATTR_IS(pp_dpm_socclk) ||<br>
+ DEVICE_ATTR_IS(pp_dpm_fclk)) {<br>
+ dev_attr->attr.mode &= ~S_IWUGO;<br>
+ dev_attr->store = NULL;<br>
+ }<br>
+ }<br>
+<br>
+#undef DEVICE_ATTR_IS<br>
+<br>
+ return 0;<br>
+}<br>
+<br>
+<br>
+static int amdgpu_device_attr_create(struct amdgpu_device *adev,<br>
+ struct amdgpu_device_attr *attr,<br>
+ uint32_t mask)<br>
+{<br>
+ int ret = 0;<br>
+ struct device_attribute *dev_attr = &attr->dev_attr;<br>
+ const char *name = dev_attr->attr.name;<br>
+ int (*attr_perform)(struct amdgpu_device *adev, struct amdgpu_device_attr *attr,<br>
+ uint32_t mask) = default_attr_perform;<br>
+<br>
+ BUG_ON(!attr);<br>
+<br>
+ if (attr->states == ATTR_STATE_UNSUPPORT ||<br>
+ attr->states == ATTR_STATE_ALIVE)<br>
+ return 0;<br>
+<br>
+ if (attr->perform) {<br>
+ attr_perform = attr->perform;<br>
+ }<br>
+<br>
+ ret = attr_perform(adev, attr, mask);<br>
+ if (ret) {<br>
+ dev_err(adev->dev, "failed to perform device file %s, ret = %d\n",<br>
+ name, ret);<br>
+ return ret;<br>
+ }<br>
+<br>
+ /* attr->states may be changed after calling the attr->perform function */<br>
+ if (attr->states == ATTR_STATE_UNSUPPORT)<br>
+ return 0;<br>
+<br>
+ ret = device_create_file(adev->dev, dev_attr);<br>
+ if (ret) {<br>
+ dev_err(adev->dev, "failed to create device file %s, ret = %d\n",<br>
+ name, ret);<br>
+ }<br>
+<br>
+ attr->states = ATTR_STATE_ALIVE;<br>
+<br>
+ return ret;<br>
+}<br>
+<br>
+static void amdgpu_device_attr_remove(struct amdgpu_device *adev,<br>
+ struct amdgpu_device_attr *attr)<br>
+{<br>
+ struct device_attribute *dev_attr = &attr->dev_attr;<br>
+<br>
+ if (attr->states != ATTR_STATE_ALIVE)<br>
+ return;<br>
+<br>
+ device_remove_file(adev->dev, dev_attr);<br>
+<br>
+ attr->states = ATTR_STATE_DEAD;<br>
+}<br>
+<br>
+static int amdgpu_device_attr_create_groups(struct amdgpu_device *adev,<br>
+ struct amdgpu_device_attr *attrs,<br>
+ uint32_t counts,<br>
+ uint32_t mask)<br>
+{<br>
+ int ret = 0;<br>
+ uint32_t i = 0;<br>
+<br>
+ for (i = 0; i < counts; i++) {<br>
+ ret = amdgpu_device_attr_create(adev, &attrs[i], mask);<br>
+ if (ret)<br>
+ goto failed;<br>
+ }<br>
+<br>
+ return 0;<br>
+<br>
+failed:<br>
+ for (; i > 0; i--) {<br>
+ amdgpu_device_attr_remove(adev, &attrs[i - 1]);<br>
+ }<br>
+<br>
+ return ret;<br>
+}<br>
+<br>
+static void amdgpu_device_attr_remove_groups(struct amdgpu_device *adev,<br>
+ struct amdgpu_device_attr *attrs,<br>
+ uint32_t counts)<br>
+{<br>
+ uint32_t i = 0;<br>
+<br>
+ for (i = 0; i < counts; i++)<br>
+ amdgpu_device_attr_remove(adev, &attrs[i]);<br>
+}<br>
<br>
static ssize_t amdgpu_hwmon_show_temp(struct device *dev,<br>
struct device_attribute *attr,<br>
@@ -2790,7 +2838,7 @@ static umode_t hwmon_attributes_visible(struct kobject *kobj,<br>
umode_t effective_mode = attr->mode;<br>
<br>
/* under multi-vf mode, the hwmon attributes are all not supported */<br>
- if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))<br>
+ if (amdgpu_virt_get_sriov_vf_mode(adev) == SRIOV_VF_MODE_MULTI_VF)<br>
return 0;<br>
<br>
/* there is no fan under pp one vf mode */<br>
@@ -3241,8 +3289,8 @@ int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_versio<br>
<br>
int amdgpu_pm_sysfs_init(struct amdgpu_device *adev)<br>
{<br>
- struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;<br>
int ret;<br>
+ uint32_t mask = 0;<br>
<br>
if (adev->pm.sysfs_initialized)<br>
return 0;<br>
@@ -3260,168 +3308,25 @@ int amdgpu_pm_sysfs_init(struct amdgpu_device *adev)<br>
return ret;<br>
}<br>
<br>
- ret = device_create_file(adev->dev, &dev_attr_power_dpm_state);<br>
- if (ret) {<br>
- DRM_ERROR("failed to create device file for dpm state\n");<br>
- return ret;<br>
- }<br>
- ret = device_create_file(adev->dev, &dev_attr_power_dpm_force_performance_level);<br>
- if (ret) {<br>
- DRM_ERROR("failed to create device file for dpm state\n");<br>
- return ret;<br>
- }<br>
-<br>
- if (!amdgpu_sriov_vf(adev)) {<br>
- ret = device_create_file(adev->dev, &dev_attr_pp_num_states);<br>
- if (ret) {<br>
- DRM_ERROR("failed to create device file pp_num_states\n");<br>
- return ret;<br>
- }<br>
- ret = device_create_file(adev->dev, &dev_attr_pp_cur_state);<br>
- if (ret) {<br>
- DRM_ERROR("failed to create device file pp_cur_state\n");<br>
- return ret;<br>
- }<br>
- ret = device_create_file(adev->dev, &dev_attr_pp_force_state);<br>
- if (ret) {<br>
- DRM_ERROR("failed to create device file pp_force_state\n");<br>
- return ret;<br>
- }<br>
- ret = device_create_file(adev->dev, &dev_attr_pp_table);<br>
- if (ret) {<br>
- DRM_ERROR("failed to create device file pp_table\n");<br>
- return ret;<br>
- }<br>
- }<br>
-<br>
- ret = device_create_file(adev->dev, &dev_attr_pp_dpm_sclk);<br>
- if (ret) {<br>
- DRM_ERROR("failed to create device file pp_dpm_sclk\n");<br>
- return ret;<br>
- }<br>
-<br>
- /* Arcturus does not support standalone mclk/socclk/fclk level setting */<br>
- if (adev->asic_type == CHIP_ARCTURUS) {<br>
- dev_attr_pp_dpm_mclk.attr.mode &= ~S_IWUGO;<br>
- dev_attr_pp_dpm_mclk.store = NULL;<br>
-<br>
- dev_attr_pp_dpm_socclk.attr.mode &= ~S_IWUGO;<br>
- dev_attr_pp_dpm_socclk.store = NULL;<br>
-<br>
- dev_attr_pp_dpm_fclk.attr.mode &= ~S_IWUGO;<br>
- dev_attr_pp_dpm_fclk.store = NULL;<br>
- }<br>
-<br>
- ret = device_create_file(adev->dev, &dev_attr_pp_dpm_mclk);<br>
- if (ret) {<br>
- DRM_ERROR("failed to create device file pp_dpm_mclk\n");<br>
- return ret;<br>
- }<br>
- if (adev->asic_type >= CHIP_VEGA10) {<br>
- ret = device_create_file(adev->dev, &dev_attr_pp_dpm_socclk);<br>
- if (ret) {<br>
- DRM_ERROR("failed to create device file pp_dpm_socclk\n");<br>
- return ret;<br>
- }<br>
- if (adev->asic_type != CHIP_ARCTURUS) {<br>
- ret = device_create_file(adev->dev, &dev_attr_pp_dpm_dcefclk);<br>
- if (ret) {<br>
- DRM_ERROR("failed to create device file pp_dpm_dcefclk\n");<br>
- return ret;<br>
- }<br>
- }<br>
- }<br>
- if (adev->asic_type >= CHIP_VEGA20) {<br>
- ret = device_create_file(adev->dev, &dev_attr_pp_dpm_fclk);<br>
- if (ret) {<br>
- DRM_ERROR("failed to create device file pp_dpm_fclk\n");<br>
- return ret;<br>
- }<br>
- }<br>
-<br>
- /* the reset are not needed for SRIOV one vf mode */<br>
- if (amdgpu_sriov_vf(adev)) {<br>
- adev->pm.sysfs_initialized = true;<br>
- return ret;<br>
+ switch (amdgpu_virt_get_sriov_vf_mode(adev)) {<br>
+ case SRIOV_VF_MODE_ONE_VF:<br>
+ mask = ATTR_FLAG_ONEVF;<br>
+ break;<br>
+ case SRIOV_VF_MODE_MULTI_VF:<br>
+ mask = 0;<br>
+ break;<br>
+ case SRIOV_VF_MODE_BARE_METAL:<br>
+ default:<br>
+ mask = ATTR_FLAG_MASK_ALL;<br>
+ break;<br>
}<br>
<br>
- if (adev->asic_type != CHIP_ARCTURUS) {<br>
- ret = device_create_file(adev->dev, &dev_attr_pp_dpm_pcie);<br>
- if (ret) {<br>
- DRM_ERROR("failed to create device file pp_dpm_pcie\n");<br>
- return ret;<br>
- }<br>
- }<br>
- ret = device_create_file(adev->dev, &dev_attr_pp_sclk_od);<br>
- if (ret) {<br>
- DRM_ERROR("failed to create device file pp_sclk_od\n");<br>
- return ret;<br>
- }<br>
- ret = device_create_file(adev->dev, &dev_attr_pp_mclk_od);<br>
- if (ret) {<br>
- DRM_ERROR("failed to create device file pp_mclk_od\n");<br>
- return ret;<br>
- }<br>
- ret = device_create_file(adev->dev,<br>
- &dev_attr_pp_power_profile_mode);<br>
- if (ret) {<br>
- DRM_ERROR("failed to create device file "<br>
- "pp_power_profile_mode\n");<br>
- return ret;<br>
- }<br>
- if ((is_support_sw_smu(adev) && adev->smu.od_enabled) ||<br>
- (!is_support_sw_smu(adev) && hwmgr->od_enabled)) {<br>
- ret = device_create_file(adev->dev,<br>
- &dev_attr_pp_od_clk_voltage);<br>
- if (ret) {<br>
- DRM_ERROR("failed to create device file "<br>
- "pp_od_clk_voltage\n");<br>
- return ret;<br>
- }<br>
- }<br>
- ret = device_create_file(adev->dev,<br>
- &dev_attr_gpu_busy_percent);<br>
- if (ret) {<br>
- DRM_ERROR("failed to create device file "<br>
- "gpu_busy_level\n");<br>
- return ret;<br>
- }<br>
- /* APU does not have its own dedicated memory */<br>
- if (!(adev->flags & AMD_IS_APU) &&<br>
- (adev->asic_type != CHIP_VEGA10)) {<br>
- ret = device_create_file(adev->dev,<br>
- &dev_attr_mem_busy_percent);<br>
- if (ret) {<br>
- DRM_ERROR("failed to create device file "<br>
- "mem_busy_percent\n");<br>
- return ret;<br>
- }<br>
- }<br>
- /* PCIe Perf counters won't work on APU nodes */<br>
- if (!(adev->flags & AMD_IS_APU)) {<br>
- ret = device_create_file(adev->dev, &dev_attr_pcie_bw);<br>
- if (ret) {<br>
- DRM_ERROR("failed to create device file pcie_bw\n");<br>
- return ret;<br>
- }<br>
- }<br>
- if (adev->unique_id)<br>
- ret = device_create_file(adev->dev, &dev_attr_unique_id);<br>
- if (ret) {<br>
- DRM_ERROR("failed to create device file unique_id\n");<br>
+ ret = amdgpu_device_attr_create_groups(adev,<br>
+ amdgpu_device_attrs,<br>
+ ARRAY_SIZE(amdgpu_device_attrs),<br>
+ mask);<br>
+ if (ret)<br>
return ret;<br>
- }<br>
-<br>
- if ((adev->asic_type >= CHIP_VEGA10) &&<br>
- !(adev->flags & AMD_IS_APU)) {<br>
- ret = device_create_file(adev->dev,<br>
- &dev_attr_pp_features);<br>
- if (ret) {<br>
- DRM_ERROR("failed to create device file "<br>
- "pp_features\n");<br>
- return ret;<br>
- }<br>
- }<br>
<br>
adev->pm.sysfs_initialized = true;<br>
<br>
@@ -3430,51 +3335,15 @@ int amdgpu_pm_sysfs_init(struct amdgpu_device *adev)<br>
<br>
void amdgpu_pm_sysfs_fini(struct amdgpu_device *adev)<br>
{<br>
- struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;<br>
-<br>
if (adev->pm.dpm_enabled == 0)<br>
return;<br>
<br>
if (adev->pm.int_hwmon_dev)<br>
hwmon_device_unregister(adev->pm.int_hwmon_dev);<br>
- device_remove_file(adev->dev, &dev_attr_power_dpm_state);<br>
- device_remove_file(adev->dev, &dev_attr_power_dpm_force_performance_level);<br>
-<br>
- device_remove_file(adev->dev, &dev_attr_pp_num_states);<br>
- device_remove_file(adev->dev, &dev_attr_pp_cur_state);<br>
- device_remove_file(adev->dev, &dev_attr_pp_force_state);<br>
- device_remove_file(adev->dev, &dev_attr_pp_table);<br>
-<br>
- device_remove_file(adev->dev, &dev_attr_pp_dpm_sclk);<br>
- device_remove_file(adev->dev, &dev_attr_pp_dpm_mclk);<br>
- if (adev->asic_type >= CHIP_VEGA10) {<br>
- device_remove_file(adev->dev, &dev_attr_pp_dpm_socclk);<br>
- if (adev->asic_type != CHIP_ARCTURUS)<br>
- device_remove_file(adev->dev, &dev_attr_pp_dpm_dcefclk);<br>
- }<br>
- if (adev->asic_type != CHIP_ARCTURUS)<br>
- device_remove_file(adev->dev, &dev_attr_pp_dpm_pcie);<br>
- if (adev->asic_type >= CHIP_VEGA20)<br>
- device_remove_file(adev->dev, &dev_attr_pp_dpm_fclk);<br>
- device_remove_file(adev->dev, &dev_attr_pp_sclk_od);<br>
- device_remove_file(adev->dev, &dev_attr_pp_mclk_od);<br>
- device_remove_file(adev->dev,<br>
- &dev_attr_pp_power_profile_mode);<br>
- if ((is_support_sw_smu(adev) && adev->smu.od_enabled) ||<br>
- (!is_support_sw_smu(adev) && hwmgr->od_enabled))<br>
- device_remove_file(adev->dev,<br>
- &dev_attr_pp_od_clk_voltage);<br>
- device_remove_file(adev->dev, &dev_attr_gpu_busy_percent);<br>
- if (!(adev->flags & AMD_IS_APU) &&<br>
- (adev->asic_type != CHIP_VEGA10))<br>
- device_remove_file(adev->dev, &dev_attr_mem_busy_percent);<br>
- if (!(adev->flags & AMD_IS_APU))<br>
- device_remove_file(adev->dev, &dev_attr_pcie_bw);<br>
- if (adev->unique_id)<br>
- device_remove_file(adev->dev, &dev_attr_unique_id);<br>
- if ((adev->asic_type >= CHIP_VEGA10) &&<br>
- !(adev->flags & AMD_IS_APU))<br>
- device_remove_file(adev->dev, &dev_attr_pp_features);<br>
+<br>
+ amdgpu_device_attr_remove_groups(adev,<br>
+ amdgpu_device_attrs,<br>
+ ARRAY_SIZE(amdgpu_device_attrs));<br>
}<br>
<br>
void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)<br>
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.h<br>
index 5db0ef86e84c..5ca5f3f9e8c0 100644<br>
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.h<br>
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.h<br>
@@ -30,6 +30,54 @@ struct cg_flag_name<br>
const char *name;<br>
};<br>
<br>
+enum amdgpu_device_attr_flags {<br>
+ ATTR_FLAG_BASIC = (1 << 0),<br>
+ ATTR_FLAG_ONEVF = (1 << 16),<br>
+};<br>
+<br>
+#define ATTR_FLAG_TYPE_MASK (0x0000ffff)<br>
+#define ATTR_FLAG_MODE_MASK (0xffff0000)<br>
+#define ATTR_FLAG_MASK_ALL (0xffffffff)<br>
+<br>
+enum amdgpu_device_attr_states {<br>
+ ATTR_STATE_UNSUPPORT = 0,<br>
+ ATTR_STATE_SUPPORT,<br>
+ ATTR_STATE_DEAD,<br>
+ ATTR_STATE_ALIVE,<br>
+};<br>
+<br>
+struct amdgpu_device_attr {<br>
+ struct device_attribute dev_attr;<br>
+ enum amdgpu_device_attr_flags flags;<br>
+ enum amdgpu_device_attr_states states;<br>
+ int (*perform)(struct amdgpu_device *adev,<br>
+ struct amdgpu_device_attr* attr,<br>
+ uint32_t mask);<br>
+};<br>
+<br>
+#define to_amdgpu_device_attr(_dev_attr) \<br>
+ container_of(_dev_attr, struct amdgpu_device_attr, dev_attr)<br>
+<br>
+#define __AMDGPU_DEVICE_ATTR(_name, _mode, _show, _store, _flags, ...) \<br>
+ { .dev_attr = __ATTR(_name, _mode, _show, _store), \<br>
+ .flags = _flags, \<br>
+ .states = ATTR_STATE_SUPPORT, \<br>
+ ##__VA_ARGS__, }<br>
+<br>
+#define AMDGPU_DEVICE_ATTR(_name, _mode, _flags, ...) \<br>
+ __AMDGPU_DEVICE_ATTR(_name, _mode, \<br>
+ amdgpu_get_##_name, amdgpu_set_##_name, \<br>
+ _flags, ##__VA_ARGS__)<br>
+<br>
+#define AMDGPU_DEVICE_ATTR_RW(_name, _flags, ...) \<br>
+ AMDGPU_DEVICE_ATTR(_name, S_IRUGO | S_IWUSR, \<br>
+ _flags, ##__VA_ARGS__)<br>
+<br>
+#define AMDGPU_DEVICE_ATTR_RO(_name, _flags, ...) \<br>
+ __AMDGPU_DEVICE_ATTR(_name, S_IRUGO, \<br>
+ amdgpu_get_##_name, NULL, \<br>
+ _flags, ##__VA_ARGS__)<br>
+<br>
void amdgpu_pm_acpi_event_handler(struct amdgpu_device *adev);<br>
int amdgpu_pm_sysfs_init(struct amdgpu_device *adev);<br>
int amdgpu_pm_virt_sysfs_init(struct amdgpu_device *adev);<br>
--<br>
2.17.1<o:p></o:p></p>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</body>
</html>