[PATCH] drm/i915/display: Skip unavailable power wells based on pipe mask

Borah, Chaitanya Kumar chaitanya.kumar.borah at intel.com
Mon Jul 21 15:17:56 UTC 2025


On 7/19/2025 4:07 PM, Imre Deak wrote:
> On Fri, Jul 18, 2025 at 05:16:20PM -0300, Gustavo Sousa wrote:
> ...
>
> DDI TC3/4 are never wired on WCL, so I'd still confirm the above, i.e.
> do the PORT_AUX_CTL_USBC3/4 registers indeed exist (they do based on the
> spec), and do the 'PHY Power Request' / 'PHY Power State' flags get
> updated as expected when the request bit is set/cleared?
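For illustration, the kind of quick check being asked for could look roughly
like the hypothetical debug helper below: raise the power request bit in the
port's AUX_CTL register, give the HW a moment, and see whether the power
status flag follows. The helper and its register parameter are made up; the
POWER_REQUEST/POWER_STATUS bit names are assumed to match the existing xelpdp
AUX power well code and should be double-checked against the current headers.

/*
 * Hypothetical debug helper (not in the tree): check whether the 'PHY Power
 * State' flag follows the 'PHY Power Request' bit for a given AUX_CTL
 * register (e.g. PORT_AUX_CTL_USBC3/4 on WCL). Bit names assumed from the
 * existing xelpdp AUX power well code.
 */
static bool aux_power_request_acked(struct intel_display *display,
				    i915_reg_t aux_ctl_reg)
{
	bool acked;

	/* Raise the power request bit. */
	intel_de_rmw(display, aux_ctl_reg,
		     XELPDP_DP_AUX_CH_CTL_POWER_REQUEST,
		     XELPDP_DP_AUX_CH_CTL_POWER_REQUEST);

	/* Give the HW time to reflect the new power state. */
	usleep_range(600, 1200);

	acked = intel_de_read(display, aux_ctl_reg) &
		XELPDP_DP_AUX_CH_CTL_POWER_STATUS;

	/* Drop the request again so the well is not left enabled. */
	intel_de_rmw(display, aux_ctl_reg,
		     XELPDP_DP_AUX_CH_CTL_POWER_REQUEST, 0);

	return acked;
}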
>
>>   * As you explained in a previous reply, the driver needs to deal with
>>     power wells of fused-off pipes to ensure those get properly powered
>>     off in case whatever was controlling the display before the driver
>>     takes over left them enabled.
>>
>> So, I guess we could either
>>
>>    (1) go back to Chaitanya's original patch;
>>    (2) or tweak this patch to use a mask of pipes supported by the
>>        display IP instead of non-fused-off ones.
>>
>> I personally would prefer (1), since then we would make the presence of
>> power wells and their mapping more explicit in the code;
> Agreed, I'd also prefer (1). I'd still check if registering the
> AUX_TC3/4 power wells is correct based on the above.
>
I had a version of my first patch that excluded AUX_TC3/4.

That led to the following warning. I need to dig deeper to understand
the failure, but it might be more obvious to one of you.
At the time I just assumed that these power wells are actually present.

[    4.223342] ------------[ cut here ]------------
[    4.223343] xe 0000:00:02.0: [drm] drm_WARN_ON(aux_powered)
[    4.223367] WARNING: CPU: 0 PID: 145 at 
drivers/gpu/drm/i915/display/intel_tc.c:1447 
intel_tc_port_reset_mode+0x135/0x350 [xe]
[    4.223583] Modules linked in: xe(+) drm_ttm_helper ttm 
drm_suballoc_helper cec rc_core drm_buddy gpu_sched drm_gpuvm drm_exec 
drm_gpusvm drm_display_helper mac_hid video wmi pinctrl_intel_platform 
pinctrl_intel pwm_lpss
...
[    4.223801] Call Trace:
[    4.223803]  <TASK>
[    4.223806]  __intel_tc_port_lock+0xdf/0x130 [xe]
[    4.223943]  intel_tc_port_lock+0x1e/0x30 [xe]
[    4.224062]  intel_digital_port_connected+0x33/0xa0 [xe]
[    4.224258]  intel_dp_detect+0xef/0x860 [xe]
[    4.224474]  ? ww_mutex_lock+0xfd/0x110
[    4.224482]  detect_connector_status+0x1e/0x50
[    4.224490]  drm_helper_probe_detect+0x4b/0x80
[    4.224495] drm_helper_probe_single_connector_modes+0x3f4/0x680
[    4.224502]  drm_client_modeset_probe+0x251/0x1a40
[    4.224506]  ? kmem_cache_alloc_lru_noprof+0x2cd/0x3b0
[    4.224512]  ? __d_alloc+0x2e/0x1f0
[    4.224519]  ? __kmalloc_node_track_caller_noprof+0x347/0x4c0
[    4.224524] __drm_fb_helper_initial_config_and_unlock+0x3e/0x560
[    4.224529]  ? kstrdup+0x3c/0x70
[    4.224533]  ? kstrdup+0x52/0x70
[    4.224536]  drm_fb_helper_initial_config+0x36/0x40
[    4.224540]  drm_fbdev_client_hotplug+0x76/0xc0
[    4.224543]  drm_client_register+0x68/0xb0
[    4.224549]  drm_fbdev_client_setup+0xe8/0x1d0
[    4.224552]  drm_client_setup+0x5b/0x80
[    4.224555]  drm_client_setup_with_color_mode+0x29/0x40
[    4.224557]  intel_fbdev_setup+0x20f/0x4c0 [xe]
[    4.224699]  intel_display_driver_register+0xb9/0x100 [xe]
[    4.224905]  ? __pfx___drm_printfn_dbg+0x10/0x10
[    4.224909]  ? intel_display_driver_register+0x32/0x100 [xe]
[    4.225108]  xe_display_register+0x2c/0x40 [xe]
[    4.225321]  xe_device_probe+0x4af/0x580 [xe]
[    4.225467]  xe_pci_probe+0x9a2/0xcd0 [xe]
[    4.225653]  local_pci_probe+0x4c/0xb0
[    4.225657]  pci_device_probe+0xdb/0x230
[    4.225660]  really_probe+0xe2/0x390
[    4.225665]  __driver_probe_device+0x7e/0x160
[    4.225669]  driver_probe_device+0x23/0xa0
[    4.225673]  __driver_attach+0xe8/0x1e0
[    4.225677]  ? __pfx___driver_attach+0x10/0x10
[    4.225681]  bus_for_each_dev+0x7d/0xd0
[    4.225684]  driver_attach+0x22/0x30
[    4.225687]  bus_add_driver+0x118/0x240
[    4.225691]  driver_register+0x68/0x130
[    4.225695]  __pci_register_driver+0x65/0x70
[    4.225697]  xe_register_pci_driver+0x27/0x30 [xe]
[    4.225875]  xe_init+0x35/0x90 [xe]
[    4.226005]  ? __pfx_xe_init+0x10/0x10 [xe]
[    4.226131]  do_one_initcall+0x49/0x330
[    4.226137]  do_init_module+0x6a/0x2a0
[    4.226141]  load_module+0x21e6/0x22b0
[    4.226145]  ? kernel_read_file+0x240/0x2c0
[    4.226150]  init_module_from_file+0x9b/0xe0
[    4.226153]  ? init_module_from_file+0x9b/0xe0
[    4.226156]  idempotent_init_module+0x170/0x270
[    4.226159]  __x64_sys_finit_module+0x6f/0xe0
[    4.226162]  x64_sys_call+0x1b7a/0x2150
[    4.226165]  do_syscall_64+0x56/0x860
[    4.226169]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[    4.226172] RIP: 0033:0x7fab2003b25d
...
[    4.226190] ---[ end trace 0000000000000000 ]---

- Chaitanya

>
>> but I wouldn't be against (2).
>>
>> --
>> Gustavo Sousa
>>
>>>>>> * If the power well that has irq_pipe_mask is also used to power
>>>>>>    something other than the pipes, we could have issues if pipes in that
>>>>>>    mask are fused off.
>>>>>>
>>>>>> I'm leaning more toward a solution that makes POWER_DOMAIN_INIT map to
>>>>>> POWER_DOMAIN_PIPE_* based on DISPLAY_RUNTIME_INFO(display)->pipe_mask. I
>>>>>> have some idea of how to do that without rewriting code to use a
>>>>>> hierarchical structure (which IMO would be ideal, but takes more
>>>>>> effort).
>>>>>>
>>>>>> The idea is, during runtime initialization of the mapping, to set the
>>>>>> bit corresponding to POWER_DOMAIN_INIT in each power well that has a
>>>>>> POWER_DOMAIN_PIPE_* bit set for a non-fused-off pipe. That would also
>>>>>> require removing POWER_DOMAIN_INIT from the static mapping for the
>>>>>> power wells directly responsible for POWER_DOMAIN_PIPE_*.
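For illustration, a minimal sketch of that idea, assuming a mutable per-well
domain bitmap is available at mapping-init time (the current code keeps the
mapping in const descriptors, so an actual implementation would first have to
build a runtime copy of each well's domain mask):

/*
 * Sketch only: during runtime initialization of the power well -> domain
 * mapping, add POWER_DOMAIN_INIT to a power well that powers a pipe which
 * is actually present (i.e. not fused off). 'domains' is assumed to be a
 * mutable bitmap indexed by enum intel_display_power_domain for one well.
 */
static void add_init_domain_if_pipe_present(struct intel_display *display,
					    unsigned long *domains)
{
	enum pipe pipe;

	/* for_each_pipe() only visits pipes present in the runtime pipe_mask. */
	for_each_pipe(display, pipe) {
		if (test_bit(POWER_DOMAIN_PIPE(pipe), domains)) {
			set_bit(POWER_DOMAIN_INIT, domains);
			return;
		}
	}
}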
>>>>> Power wells that don't exist on a platform shouldn't be registered in
>>>>> the first place, so it's not enough to only remove them from the power
>>>>> well->domain mapping, while still registering the power well. Otherwise
>>>>> these non-existent power wells would still be accessed while disabling
>>>>> any unused power well during driver loading/resume. Also these power
>>>>> wells non-existent on a platform would be incorrectly listed in debugfs
>>>>> and other state dumps.
>>>>>
>>>>> However, I realized that for pipe power wells that do exist on a
>>>>> platform but whose corresponding pipe is fused off (for instance pipe
>>>>> A/B/C on WCL), we still need to register the power well. At least on
>>>>> some platforms such power wells may be enabled after HW reset / by the
>>>>> BIOS, so they still need to be checked and disabled if needed during
>>>>> driver loading/resume. I.e. instead of the above
>>>> Ah, I see. Yeah, that makes sense. Thanks for the details!
>>>>
>>>> Well, although Bspec overview page tells that WCL's display has only
>>>> pipes A, B and C, the page specific for power wells still lists power
>>>> well D. So I'm wondering if WCL display just has pipe D fused off and
>>>> the power well still exists or if power well D being listed in Bspec is
>>>> just a documentation mistake. I'll check with the hardware team.
>>>>
>>>>> DISPLAY_RUNTIME_INFO(display)->pipe_mask
>>>>>
>>>>> something like the following should be used:
>>>>>
>>>>> static u8 pipe_pw_mask(struct intel_display *display)
>>>>> {
>>>>>         if (DISPLAY_VERx100(display) == 3002)
>>>>>                 return BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C);
>>>>>
>>>>>         return BIT(I915_MAX_PIPES) - 1;
>>>>> }
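For illustration, this is roughly how such a mask could gate power well
registration. The is_power_well_available() name matches the patch hunk
quoted further below, but its actual body is not shown there, so the check
here (based on the descriptor's irq_pipe_mask) is only a hypothetical sketch:

/*
 * Hypothetical availability check: a power well tied to specific pipes is
 * only registered if all of those pipes can exist on the platform. The
 * irq_pipe_mask-based test is an illustration, not the actual patch.
 */
static bool is_power_well_available(struct intel_display *display,
				    const struct i915_power_well_desc *desc)
{
	if (!desc->irq_pipe_mask)
		return true;

	return !(desc->irq_pipe_mask & ~pipe_pw_mask(display));
}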
>>>> Well, if power well D indeed does not exist (i.e. it is not a case of
>>>> pipe D being fused off), we need either the above or to go back to
>>>> Chaitanya's original patch.
>>>>
>>>> I think I prefer the original patch, making the power well mapping
>>>> explicit.
>>>>
>>>> --
>>>> Gustavo Sousa
>>>>
>>>>>> --
>>>>>> Gustavo Sousa
>>>>>>
>>>>>>> +                return false;
>>>>>>> +
>>>>>>> +        return true;
>>>>>>> +}
>>>>>>> +
>>>>>>> static int
>>>>>>> __set_power_wells(struct i915_power_domains *power_domains,
>>>>>>>                    const struct i915_power_well_desc_list *power_well_descs,
>>>>>>> @@ -1763,8 +1773,10 @@ __set_power_wells(struct i915_power_domains *power_domains,
>>>>>>>          int power_well_count = 0;
>>>>>>>          int plt_idx = 0;
>>>>>>>
>>>>>>> -        for_each_power_well_instance(power_well_descs, power_well_descs_sz, desc_list, desc, inst)
>>>>>>> -                power_well_count++;
>>>>>>> +        for_each_power_well_instance(power_well_descs, power_well_descs_sz, desc_list, desc, inst) {
>>>>>>> +                if (is_power_well_available(display, desc))
>>>>>>> +                        power_well_count++;
>>>>>>> +        }
>>>>>>>
>>>>>>>          power_domains->power_well_count = power_well_count;
>>>>>>>          power_domains->power_wells =
>>>>>>> @@ -1778,6 +1790,9 @@ __set_power_wells(struct i915_power_domains *power_domains,
>>>>>>>                  struct i915_power_well *pw = &power_domains->power_wells[plt_idx];
>>>>>>>                  enum i915_power_well_id id = inst->id;
>>>>>>>
>>>>>>> +                if (!is_power_well_available(display, desc))
>>>>>>> +                        continue;
>>>>>>> +
>>>>>>>                  pw->desc = desc;
>>>>>>>                  drm_WARN_ON(display->drm,
>>>>>>>                              overflows_type(inst - desc->instances->list, pw->instance_idx));
>>>>>>> -- 
>>>>>>> 2.25.1
>>>>>>>

