[PATCH 1/2] drm/amd/pp: Remove manual mode for power_dpm_force_performance_level

Zhu, Rex Rex.Zhu at amd.com
Mon Jan 29 21:51:11 UTC 2018


>I'd prefer to keep the requirement to select manual mode to be able to
>manually mess with the clock levels.  This also simplifies the profile
>interface by only allowing you to change the profile if you select
>manual mode first.  That way we don't have to add an AUTO profile to
>the profile selection to let the driver pick the profile.  That can
>happen automatically if the user sets the force_performance_level to
>auto.


Hi Alex,


So you mean using the "auto" state of force_performance_level as a reset mode for all the performance-related sysfs files.


Currently we just unforce the clock range in the auto state.
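
[Note: for readers following the thread, a minimal sketch of that existing behaviour — "auto" doubles as the reset path that unforces the clocks. It is modelled on the cz_hwmgr handler quoted further down; only cz_phm_unforce_dpm_levels appears in the quoted patch, the other cz_phm_* prototypes are forward declarations assumed for illustration.]

/*
 * Sketch of the current force_dpm_level flow: writing "auto" to
 * power_dpm_force_performance_level is what clears forced clock masks.
 */
enum amd_dpm_forced_level {
	AMD_DPM_FORCED_LEVEL_AUTO   = 0x1,
	AMD_DPM_FORCED_LEVEL_MANUAL = 0x2,
	AMD_DPM_FORCED_LEVEL_LOW    = 0x4,
	AMD_DPM_FORCED_LEVEL_HIGH   = 0x8,
};

struct pp_hwmgr;                                   /* opaque for this sketch */
int cz_phm_force_dpm_highest(struct pp_hwmgr *hwmgr);   /* assumed helper */
int cz_phm_force_dpm_lowest(struct pp_hwmgr *hwmgr);    /* assumed helper */
int cz_phm_unforce_dpm_levels(struct pp_hwmgr *hwmgr);  /* from the patch below */

static int sketch_force_dpm_level(struct pp_hwmgr *hwmgr,
				  enum amd_dpm_forced_level level)
{
	switch (level) {
	case AMD_DPM_FORCED_LEVEL_HIGH:
		return cz_phm_force_dpm_highest(hwmgr);
	case AMD_DPM_FORCED_LEVEL_LOW:
		return cz_phm_force_dpm_lowest(hwmgr);
	case AMD_DPM_FORCED_LEVEL_AUTO:
		/* "auto" doubles as the reset path: drop forced clock masks */
		return cz_phm_unforce_dpm_levels(hwmgr);
	case AMD_DPM_FORCED_LEVEL_MANUAL:
	default:
		/* manual: leave whatever pp_dpm_sclk/mclk/pcie forced alone */
		return 0;
	}
}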


Anyway, I will drop this patch.


Best Regards

Rex


________________________________
From: Alex Deucher <alexdeucher at gmail.com>
Sent: Tuesday, January 30, 2018 2:02 AM
To: Zhu, Rex
Cc: Kuehling, Felix; amd-gfx at lists.freedesktop.org
Subject: Re: [PATCH 1/2] drm/amd/pp: Remove manual mode for power_dpm_force_performance_level

On Mon, Jan 29, 2018 at 7:03 AM, Zhu, Rex <Rex.Zhu at amd.com> wrote:
> Hi Alex,
>
>>How about we make the profile selection dependent on selecting the manual force_performance_level
>
> If so, we need to check pm.dpm.forced_level before "profile heuristics" and "disable power containment", the same as for pp_dpm_sclk/mclk/pcie:
>
> if (adev->pm.dpm.forced_level & (AMD_DPM_FORCED_LEVEL_AUTO |
>                                AMD_DPM_FORCED_LEVEL_LOW |
>                                AMD_DPM_FORCED_LEVEL_HIGH))
>                return -EINVAL;
>
> How about deleting the check before force_clock_level and just changing force_level to manual after force_clock_level? Please see the attached patch.
>
> This change has no impact on the existing interface and tools.
> It just refines the logic of power_dpm_force_performance_level and pp_dpm_sclk/mclk/pcie.
> It also has no impact on the power/compute profile mode on smu7.
>
> The difference is:
>
> New users have one less command to input if they want to use pp_dpm_sclk/mclk/pcie.
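
[Note: the attached patch is not included in the archive; the following is only a hypothetical sketch of the flow Rex describes — drop the level check in front of force_clock_level and flip the cached level to manual once a clock mask has been forced. All names here are illustrative, not the actual patch.]

#include <stdint.h>

enum amd_dpm_forced_level_sketch {
	AMD_DPM_FORCED_LEVEL_AUTO   = 0x1,
	AMD_DPM_FORCED_LEVEL_MANUAL = 0x2,
};

enum pp_clock_type_sketch { PP_SCLK, PP_MCLK, PP_PCIE };

struct hwmgr_sketch {
	enum amd_dpm_forced_level_sketch dpm_level;
};

/* assumed hook that actually applies the level bitmask in hardware */
int apply_clock_mask(struct hwmgr_sketch *hwmgr,
		     enum pp_clock_type_sketch type, uint32_t mask);

static int force_clock_level_sketch(struct hwmgr_sketch *hwmgr,
				    enum pp_clock_type_sketch type,
				    uint32_t mask)
{
	int ret;

	/* no -EINVAL gate on hwmgr->dpm_level here ... */
	ret = apply_clock_mask(hwmgr, type, mask);
	if (ret)
		return ret;

	/* ... but remember that the user is now driving the clocks manually */
	hwmgr->dpm_level = AMD_DPM_FORCED_LEVEL_MANUAL;
	return 0;
}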

I'd prefer to keep the requirement to select manual mode to be able to
manually mess with the clock levels.  This also simplifies the profile
interface by only allowing you to change the profile if you select
manual mode first.  That way we don't have to add an AUTO profile to
the profile selection to let the driver pick the profile.  That can
happen automatically if the user sets the force_performance_level to
auto.

Alex
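
[Note: concretely, keeping that requirement means keeping the gate this patch removes from the smu7/vega10 force_clock_level handlers; it is wrapped into a standalone helper here purely for illustration.]

#include <errno.h>
#include <stdint.h>

enum amd_dpm_forced_level_gate {
	AMD_DPM_FORCED_LEVEL_AUTO   = 0x1,
	AMD_DPM_FORCED_LEVEL_MANUAL = 0x2,
	AMD_DPM_FORCED_LEVEL_LOW    = 0x4,
	AMD_DPM_FORCED_LEVEL_HIGH   = 0x8,
};

/* Mirrors the check removed by patch 1/2: forcing a clock mask through
 * pp_dpm_sclk/mclk/pcie is only allowed once "manual" has been selected
 * via power_dpm_force_performance_level. */
static int clock_force_allowed(uint32_t request_dpm_level)
{
	if (request_dpm_level & (AMD_DPM_FORCED_LEVEL_AUTO |
				 AMD_DPM_FORCED_LEVEL_LOW |
				 AMD_DPM_FORCED_LEVEL_HIGH))
		return -EINVAL;
	return 0;
}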

>
>
> Best Regards
> Rex
> -----Original Message-----
> From: Alex Deucher [mailto:alexdeucher at gmail.com]
> Sent: Saturday, January 27, 2018 7:51 AM
> To: Zhu, Rex
> Cc: Kuehling, Felix; amd-gfx at lists.freedesktop.org
> Subject: Re: [PATCH 1/2] drm/amd/pp: Remove manual mode for power_dpm_force_performance_level
>
> I think we have two use cases for the profiles:
>
> 1. automatic profile switching for different driver use cases
> 2. manually tweaking profiles/clocks/power for testing
>
> How about we make the profile selection dependent on selecting the manual force_performance_level and not add an auto to the profile selector.  Then when you select manual you can tweak the clocks and profile heuristics and power containment via their respective knobs.
>
>
> Alex
>
> On Fri, Jan 26, 2018 at 3:08 PM, Zhu, Rex <Rex.Zhu at amd.com> wrote:
>>>Existing tools and users expect that switching back to auto removes
>>>the manual clock settings. If you allow changing the clock in auto
>>>mode, that won't happen any more.
>>
>>
>> I have sent patch v2 to fix this problem. Users can switch back to auto
>> mode and all manual clock settings will be removed.
>>
>>
>>>One more reason why allowing the user to set pp_dpm_sclk/mclk
>>>shouldn't be allowed in auto-mode.
>>
>> This is old logic, possibly carried over from the radeon driver.
>> The driver can allow setting the pp_dpm_sclk/mclk range in auto/high/low mode.
>>
>> Best Regards
>> Rex
>> ________________________________
>> From: Kuehling, Felix
>> Sent: Saturday, January 27, 2018 3:32 AM
>> To: Zhu, Rex; amd-gfx at lists.freedesktop.org
>> Subject: Re: [PATCH 1/2] drm/amd/pp: Remove manual mode for
>> power_dpm_force_performance_level
>>
>> On 2018-01-26 02:20 PM, Zhu, Rex wrote:
>>>
>>> >1. You're breaking the semantics of the existing pp_dpm_sclk/mclk/pcie
>>> >    interfaces, which affects existing tools
>>>
>>>
>>> Rex: I don't think the patch will affect existing tools.
>>>
>>>
>>> Users set "manual" on power_dpm_force_performance_level, and then change
>>> the clock range through pp_dpm_sclk/mclk/pcie.
>>>
>>>
>>> With this patch, users don't need to issue the "manual" command; if the
>>> driver still receives the manual command, it just returns success to the
>>> user in order not to break existing tools.
>>>
>>
>> Existing tools and users expect that switching back to auto removes
>> the manual clock settings. If you allow changing the clock in auto
>> mode, that won't happen any more.
>>
>>>
>>>  >2. You're taking the clock limits out of the power profile.
>>>  >  Automatically adjusting the minimum sclk/mclk is a requirement for
>>>  >   the compute power profile
>>>
>>>
>>> Rex: On vega10, under the default compute mode (with
>>> busy_set_point/fps/use_rlc_busy/min_active_level set), only two
>>> performance levels are left (level 0 and level 7), and the clock just
>>> switches between lowest and highest.
>>>
>>> I am not sure whether, in this case, the driver can still set the min sclk/mclk.
>>
>> One more reason why allowing the user to set pp_dpm_sclk/mclk
>> shouldn't be allowed in auto-mode.
>>
>> Regards,
>>   Felix
>>
>>>
>>> Best Regards
>>> Rex
>>>
>>>
>>> ---------------------------------------------------------------------
>>> ---
>>> *From:* Kuehling, Felix
>>> *Sent:* Saturday, January 27, 2018 12:49 AM
>>> *To:* Zhu, Rex; amd-gfx at lists.freedesktop.org
>>> *Subject:* Re: [PATCH 1/2] drm/amd/pp: Remove manual mode for
>>> power_dpm_force_performance_level
>>>
>>> Hi Rex,
>>>
>>> I think I understand what you're trying to do. To summarize my
>>> concerns, there are two reasons I'm against your plan:
>>>
>>>  1. You're breaking the semantics of the existing pp_dpm_sclk/mclk/pcie
>>>     interfaces, which affects existing tools
>>>  2. You're taking the clock limits out of the power profile.
>>>     Automatically adjusting the minimum sclk/mclk is a requirement for
>>>     the compute power profile
>>>
>>> Regards,
>>>   Felix
>>>
>>> On 2018-01-26 07:50 AM, Zhu, Rex wrote:
>>> >
>>> > Hi Felix,
>>> >
>>> >
>>> > >That would make sense. But switching to manual mode would disable
>>> > >profiles and automatic profile selection. That was one reason why
>>> > >I objected to your plan to control profile clock limits using
>>> > >these files.
>>> >
>>> > Rex:
>>> >
>>> >
>>> > I am not very clear on the old logic of the gfx/compute power profile switch.
>>> >
>>> >
>>> > But with the new sysfs files, the logic is (these sysfs entries are
>>> > independent):
>>> >
>>> >  1. configure uphyst/downhyst/min_activity through power_profile_mode,
>>> >
>>> >  2. adjust the clock range through pp_dpm_sclk/mclk/pcie (once this
>>> >     sysfs is written, set the dpm level mode to unknown; see the
>>> >     parsing sketch below),
>>> >
>>> >  3. adjust the power limit through pp_od_power_limit (maybe equivalent
>>> >     to disabling power containment).
>>> >
>>> >
>>> >
>>> > In those functions, the driver does not check the dpm level mode;
>>> >
>>> > the dpm level mode is only used by the power_dpm_force_performance_level
>>> > handlers.
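
[Note: the pp_dpm_sclk/mclk/pcie files mentioned in item 2 above take a space-separated list of level indices; as far as I recall, amdgpu_pm.c turns that into a level bitmask roughly as in this simplified user-space sketch (error handling omitted).]

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* e.g. `echo "2 5 7" > pp_dpm_sclk` enables DPM levels 2, 5 and 7;
 * each listed index becomes one bit of the mask that is eventually
 * passed to the hwmgr's force_clock_level hook. */
static uint32_t parse_level_mask(const char *buf)
{
	uint32_t mask = 0;
	char *end;
	long level;

	while (*buf) {
		level = strtol(buf, &end, 10);
		if (end == buf)
			break;                  /* no more numbers */
		if (level >= 0 && level < 32)
			mask |= 1u << level;    /* enable this DPM level */
		buf = end;
	}
	return mask;
}

int main(void)
{
	printf("mask = 0x%x\n", parse_level_mask("2 5 7"));   /* prints 0xa4 */
	return 0;
}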
>>> >
>>> >
>>> > Best Regards
>>> >
>>> > Rex
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > -------------------------------------------------------------------
>>> > -----
>>> > *From:* Kuehling, Felix
>>> > *Sent:* Friday, January 26, 2018 8:26 AM
>>> > *To:* Zhu, Rex; amd-gfx at lists.freedesktop.org
>>> > *Subject:* Re: [PATCH 1/2] drm/amd/pp: Remove manual mode for
>>> > power_dpm_force_performance_level
>>> >
>>> > On 2018-01-25 07:07 PM, Zhu, Rex wrote:
>>> > > I also thought about this problem.
>>> > > I just think users should unforce the clock levels through
>>> > > pp_dpm_sclk/mclk/pcie if they change the clock setup through those sysfs files.
>>> > >
>>> > > The logic seems weird, as we supply many sysfs files for adjusting the clock range.
>>> > >
>>> > > We can fix this problem by changing the current mode to manual mode
>>> > > after the user writes pp_dpm_sclk/mclk/pcie.
>>> > >
>>> > > But another thought: if the user changes the clock range back through
>>> > > pp_dpm_clk while we are in manual mode, and then the user sets auto
>>> > > mode, in fact the driver changes nothing.
>>> >
>>> > With profiles, switching back to auto mode would select the
>>> > appropriate profile, which may have a different clock mask. For
>>> > example for compute we enable only the highest two sclk levels.
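
[Note: as a concrete illustration of that last point, assuming 8 sclk DPM levels numbered 0..7, keeping only the two highest levels corresponds to a forced mask with just bits 6 and 7 set.]

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Assuming 8 sclk DPM levels (0..7): a compute profile that keeps
	 * only the two highest levels corresponds to a forced clock mask
	 * with just bits 6 and 7 set. */
	uint32_t mask = (1u << 6) | (1u << 7);

	printf("sclk mask for the top two levels: 0x%x\n", mask);  /* 0xc0 */
	return 0;
}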
>>> >
>>> > >
>>> > > Comparatively speaking, it is better to set manual mode after the user
>>> > > writes pp_dpm_clk.
>>> >
>>> > That would make sense. But switching to manual mode would disable
>>> > profiles and automatic profile selection. That was one reason why I
>>> > objected to your plan to control profile clock limits using these files.
>>> >
>>> > Regards,
>>> >   Felix
>>> >
>>> > > Thanks very much.
>>> > >
>>> > > Best Regards
>>> > > Rex
>>> > >
>>> ---------------------------------------------------------------------
>>> ---
>>> > > *From:* Kuehling, Felix
>>> > > *Sent:* Friday, January 26, 2018 12:55:19 AM
>>> > > *To:* amd-gfx at lists.freedesktop.org; Zhu, Rex
>>> > > *Subject:* Re: [PATCH 1/2] drm/amd/pp: Remove manual mode for
>>> > > power_dpm_force_performance_level
>>> > >
>>> > > This patch breaks unforcing of clocks, which is currently done by
>>> > > switching back from "manual" to "auto". By removing "manual"
>>> > > mode, you remove the ability to unset forced clocks.
>>> > >
>>> > > Regards,
>>> > >   Felix
>>> > >
>>> > >
>>> > > On 2018-01-25 06:26 AM, Rex Zhu wrote:
>>> > > > The driver does not maintain a manual mode for
>>> > > > dpm_force_performance_level; users can set the sclk/mclk/pcie range
>>> > > > through pp_dpm_sclk/pp_dpm_mclk/pp_dpm_pcie directly.
>>> > > >
>>> > > > In order to not break current tools, when "manual" is written to
>>> > > > power_dpm_force_performance_level the driver will do nothing and
>>> > > > just return success.
>>> > > >
>>> > > > Change-Id: Iaf672b9abc7fa57b765ceb7fa2fba6ad3e80c50b
>>> > > > Signed-off-by: Rex Zhu <Rex.Zhu at amd.com>
>>> > > > ---
>>> > > >  drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c             |  3 +--
>>> > > >  drivers/gpu/drm/amd/amdgpu/ci_dpm.c                |  5 -----
>>> > > >  drivers/gpu/drm/amd/include/kgd_pp_interface.h     | 15 +++++++--------
>>> > > >  drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c     |  4 ----
>>> > > >  drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c     |  1 -
>>> > > >  drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c   |  6 ------
>>> > > >  drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c |  6 ------
>>> > > >  7 files changed, 8 insertions(+), 32 deletions(-)
>>> > > >
>>> > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
>>> > > b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
>>> > > > index 1812009..66b4df0 100644
>>> > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
>>> > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
>>> > > > @@ -152,7 +152,6 @@ static ssize_t amdgpu_get_dpm_forced_performance_level(struct device *dev,
>>> > > >                        (level == AMD_DPM_FORCED_LEVEL_AUTO) ? "auto" :
>>> > > >                        (level == AMD_DPM_FORCED_LEVEL_LOW) ? "low" :
>>> > > >                        (level == AMD_DPM_FORCED_LEVEL_HIGH) ? "high" :
>>> > > > -                     (level == AMD_DPM_FORCED_LEVEL_MANUAL) ? "manual" :
>>> > > >                        (level == AMD_DPM_FORCED_LEVEL_PROFILE_STANDARD) ? "profile_standard" :
>>> > > >                        (level == AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK) ? "profile_min_sclk" :
>>> > > >                        (level == AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK) ? "profile_min_mclk" :
>>> > > > @@ -186,7 +185,7 @@ static ssize_t amdgpu_set_dpm_forced_performance_level(struct device *dev,
>>> > > >        } else if (strncmp("auto", buf, strlen("auto")) == 0) {
>>> > > >                level = AMD_DPM_FORCED_LEVEL_AUTO;
>>> > > >        } else if (strncmp("manual", buf, strlen("manual")) == 0) {
>>> > > > -             level = AMD_DPM_FORCED_LEVEL_MANUAL;
>>> > > > +             pr_info("No need to set manual mode, Just go ahead\n");
>>> > > >        } else if (strncmp("profile_exit", buf, strlen("profile_exit")) == 0) {
>>> > > >                level = AMD_DPM_FORCED_LEVEL_PROFILE_EXIT;
>>> > > >        } else if (strncmp("profile_standard", buf, strlen("profile_standard")) == 0) {
>>> > > > diff --git a/drivers/gpu/drm/amd/amdgpu/ci_dpm.c
>>> > > b/drivers/gpu/drm/amd/amdgpu/ci_dpm.c
>>> > > > index ab45232..8ddc978 100644
>>> > > > --- a/drivers/gpu/drm/amd/amdgpu/ci_dpm.c
>>> > > > +++ b/drivers/gpu/drm/amd/amdgpu/ci_dpm.c
>>> > > > @@ -6639,11 +6639,6 @@ static int ci_dpm_force_clock_level(void
>>> > *handle,
>>> > > >        struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> > > >        struct ci_power_info *pi = ci_get_pi(adev);
>>> > > >
>>> > > > -     if (adev->pm.dpm.forced_level & (AMD_DPM_FORCED_LEVEL_AUTO |
>>> > > > -                             AMD_DPM_FORCED_LEVEL_LOW |
>>> > > > -                             AMD_DPM_FORCED_LEVEL_HIGH))
>>> > > > -             return -EINVAL;
>>> > > > -
>>> > > >        switch (type) {
>>> > > >        case PP_SCLK:
>>> > > >                if (!pi->sclk_dpm_key_disabled)
>>> > > > diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
>>> > > > index b9aa9f4..3fab686 100644
>>> > > > --- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
>>> > > > +++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
>>> > > > @@ -41,14 +41,13 @@ struct amd_vce_state {
>>> > > >
>>> > > >  enum amd_dpm_forced_level {
>>> > > >        AMD_DPM_FORCED_LEVEL_AUTO = 0x1,
>>> > > > -     AMD_DPM_FORCED_LEVEL_MANUAL = 0x2,
>>> > > > -     AMD_DPM_FORCED_LEVEL_LOW = 0x4,
>>> > > > -     AMD_DPM_FORCED_LEVEL_HIGH = 0x8,
>>> > > > -     AMD_DPM_FORCED_LEVEL_PROFILE_STANDARD = 0x10,
>>> > > > -     AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK = 0x20,
>>> > > > -     AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK = 0x40,
>>> > > > -     AMD_DPM_FORCED_LEVEL_PROFILE_PEAK = 0x80,
>>> > > > -     AMD_DPM_FORCED_LEVEL_PROFILE_EXIT = 0x100,
>>> > > > +     AMD_DPM_FORCED_LEVEL_LOW = 0x2,
>>> > > > +     AMD_DPM_FORCED_LEVEL_HIGH = 0x4,
>>> > > > +     AMD_DPM_FORCED_LEVEL_PROFILE_STANDARD = 0x8,
>>> > > > +     AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK = 0x10,
>>> > > > +     AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK = 0x20,
>>> > > > +     AMD_DPM_FORCED_LEVEL_PROFILE_PEAK = 0x40,
>>> > > > +     AMD_DPM_FORCED_LEVEL_PROFILE_EXIT = 0x80,
>>> > > >  };
>>> > > >
>>> > > >  enum amd_pm_state_type {
>>> > > > diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c
>>> > > b/drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c
>>> > > > index dec8dd9..60d280c 100644
>>> > > > --- a/drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c
>>> > > > +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c
>>> > > > @@ -1250,7 +1250,6 @@ static int cz_dpm_force_dpm_level(struct
>>> > > pp_hwmgr *hwmgr,
>>> > > >        case AMD_DPM_FORCED_LEVEL_AUTO:
>>> > > >                ret = cz_phm_unforce_dpm_levels(hwmgr);
>>> > > >                break;
>>> > > > -     case AMD_DPM_FORCED_LEVEL_MANUAL:
>>> > > >        case AMD_DPM_FORCED_LEVEL_PROFILE_EXIT:
>>> > > >        default:
>>> > > >                break;
>>> > > > @@ -1558,9 +1557,6 @@ static int cz_get_dal_power_level(struct
>>> > > pp_hwmgr *hwmgr,
>>> > > >  static int cz_force_clock_level(struct pp_hwmgr *hwmgr,
>>> > > >                enum pp_clock_type type, uint32_t mask)  {
>>> > > > -     if (hwmgr->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL)
>>> > > > -             return -EINVAL;
>>> > > > -
>>> > > >        switch (type) {
>>> > > >        case PP_SCLK:
>>> > > >                smum_send_msg_to_smc_with_parameter(hwmgr,
>>> > > > diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c
>>> > > b/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c
>>> > > > index 409a56b..eddcbcd 100644
>>> > > > --- a/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c
>>> > > > +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c
>>> > > > @@ -605,7 +605,6 @@ static int rv_dpm_force_dpm_level(struct
>>> > > pp_hwmgr *hwmgr,
>>> > > >
>>> > > PPSMC_MSG_SetSoftMaxFclkByFreq,
>>> > > >
>>> > > RAVEN_UMD_PSTATE_MIN_FCLK);
>>> > > >                break;
>>> > > > -     case AMD_DPM_FORCED_LEVEL_MANUAL:
>>> > > >        case AMD_DPM_FORCED_LEVEL_PROFILE_EXIT:
>>> > > >        default:
>>> > > >                break;
>>> > > > diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
>>> > > b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
>>> > > > index 13db75c..e3a8374 100644
>>> > > > --- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
>>> > > > +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
>>> > > > @@ -2798,7 +2798,6 @@ static int smu7_force_dpm_level(struct
>>> > > pp_hwmgr *hwmgr,
>>> > > >                smu7_force_clock_level(hwmgr, PP_MCLK, 1<<mclk_mask);
>>> > > >                smu7_force_clock_level(hwmgr, PP_PCIE, 1<<pcie_mask);
>>> > > >                break;
>>> > > > -     case AMD_DPM_FORCED_LEVEL_MANUAL:
>>> > > >        case AMD_DPM_FORCED_LEVEL_PROFILE_EXIT:
>>> > > >        default:
>>> > > >                break;
>>> > > > @@ -4311,11 +4310,6 @@ static int smu7_force_clock_level(struct
>>> > > pp_hwmgr *hwmgr,
>>> > > >  {
>>> > > >        struct smu7_hwmgr *data = (struct smu7_hwmgr
>>> > *)(hwmgr->backend);
>>> > > >
>>> > > > -     if (hwmgr->request_dpm_level & (AMD_DPM_FORCED_LEVEL_AUTO |
>>> > > > -                                     AMD_DPM_FORCED_LEVEL_LOW |
>>> > > > -                                     AMD_DPM_FORCED_LEVEL_HIGH))
>>> > > > -             return -EINVAL;
>>> > > > -
>>> > > >        switch (type) {
>>> > > >        case PP_SCLK:
>>> > > >                if (!data->sclk_dpm_key_disabled)
>>> > > > diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
>>> > > > index 6b28896..828677e 100644
>>> > > > --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
>>> > > > +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
>>> > > > @@ -4241,7 +4241,6 @@ static int
>>> > > > vega10_dpm_force_dpm_level(struct
>>> > > pp_hwmgr *hwmgr,
>>> > > >                vega10_force_clock_level(hwmgr, PP_SCLK,
>>> 1<<sclk_mask);
>>> > > >                vega10_force_clock_level(hwmgr, PP_MCLK,
>>> 1<<mclk_mask);
>>> > > >                break;
>>> > > > -     case AMD_DPM_FORCED_LEVEL_MANUAL:
>>> > > >        case AMD_DPM_FORCED_LEVEL_PROFILE_EXIT:
>>> > > >        default:
>>> > > >                break;
>>> > > > @@ -4500,11 +4499,6 @@ static int
>>> > > > vega10_force_clock_level(struct
>>> > > pp_hwmgr *hwmgr,
>>> > > >  {
>>> > > >        struct vega10_hwmgr *data = (struct vega10_hwmgr
>>> > > *)(hwmgr->backend);
>>> > > >
>>> > > > -     if (hwmgr->request_dpm_level & (AMD_DPM_FORCED_LEVEL_AUTO |
>>> > > > -                             AMD_DPM_FORCED_LEVEL_LOW |
>>> > > > -                             AMD_DPM_FORCED_LEVEL_HIGH))
>>> > > > -             return -EINVAL;
>>> > > > -
>>> > > >        switch (type) {
>>> > > >        case PP_SCLK:
>>> > > >                data->smc_state_table.gfx_boot_level = mask ?
>>> > > (ffs(mask) - 1) : 0;
>>> > >
>>> >
>>>
>>
>>
>> _______________________________________________
>> amd-gfx mailing list
>> amd-gfx at lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>