[PATCH v2 5/8] drm/amdgpu: Refactor sysfs removal
Andrey Grodzovsky
Andrey.Grodzovsky at amd.com
Wed Dec 2 18:02:06 UTC 2020
On 12/2/20 12:34 PM, Greg KH wrote:
> On Wed, Dec 02, 2020 at 10:48:01AM -0500, Andrey Grodzovsky wrote:
>> On 11/11/20 10:34 AM, Greg KH wrote:
>>> On Wed, Nov 11, 2020 at 10:13:13AM -0500, Andrey Grodzovsky wrote:
>>>> On 11/10/20 12:59 PM, Greg KH wrote:
>>>>> On Tue, Nov 10, 2020 at 12:54:21PM -0500, Andrey Grodzovsky wrote:
>>>>>> Hi, back to this after a long context switch for some higher priority stuff.
>>>>>>
>>>>>> So here I was eventually able to drop all this code, and this change
>>>>>> https://cgit.freedesktop.org/~agrodzov/linux/commit/?h=amd-staging-drm-next-device-unplug&id=61852c8a59b4dd89d637693552c73175b9f2ccd6
>>>>>> was enough for me. It seems that while device_remove_file can handle the
>>>>>> case where the file and its parent directory are already gone,
>>>>>> sysfs_remove_group blows up in that case because kobj->sd has already
>>>>>> been unset on device removal.
>>>>> A driver shouldn't ever have to remove individual sysfs groups, the
>>>>> driver core/bus logic should do it for them automatically.
>>>>>
>>>>> And whenever a driver calls a sysfs_* call, that's a hint that something
>>>>> is not working properly.
>>>>
>>>> Do you mean that although the driver creates the groups and files
>>>> explicitly from its different subsystems, it should not explicitly remove
>>>> each one of them, because all of them are removed at once (and
>>>> recursively) when the device is removed?
>>> Individual drivers should never add groups/files in sysfs, the driver
>>> core should do it properly for you if you have everything set up
>>> properly. And yes, the driver core will automatically remove them as
>>> well.
>>>
>>> Please use the default groups attribute for your bus/subsystem and this
>>> will happen automagically.
>>
>> Hi Greg, I tried your suggestion of hanging amdgpu's sysfs attributes on the
>> default attributes in struct device.groups, but it turns out this isn't
>> usable: by the time amdgpu code gets access to the struct device, it has
>> already been initialized by the PCI core (i.e. we are past the point where
>> device_add->device_add_attrs->device_add_groups is called with dev->groups),
>> so I can't really use it.
> That's odd, why can't you just set the groups pointer in your pci_driver
> structure? That's what it is there for, right?
I am probably missing something, but amdgpu's sysfs attributes are per device,
not per driver: their life cycle is bound to the device, and their location in
the sysfs topology is under each device. Putting them in the driver's default
attributes will not place them in their current per-device location, and it
won't make them be destroyed automatically once a particular device goes
away, no?
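
To make the location issue concrete, here is roughly how one of these
attributes is wired up today, versus where a driver-level group would land as
I read your suggestion. This is a minimal sketch only; the attribute and group
names are made up and this is not the actual amdgpu code:

/* Per-device attribute, as it is done today (illustrative name): */
static ssize_t example_info_show(struct device *dev,
                                 struct device_attribute *attr, char *buf)
{
        return snprintf(buf, PAGE_SIZE, "per-device value\n");
}
static DEVICE_ATTR_RO(example_info);

/* Created per device, e.g. during amdgpu device init; the file shows up
 * under that device's directory, /sys/bus/pci/devices/<bdf>/ ... */
r = device_create_file(adev->dev, &dev_attr_example_info);

/* ... and removed per device on teardown. */
device_remove_file(adev->dev, &dev_attr_example_info);

/* Driver-level groups set via the pci_driver groups pointer would instead
 * end up under /sys/bus/pci/drivers/amdgpu/ (again illustrative): */
static const struct attribute_group *amdgpu_drv_groups[] = {
        &amdgpu_example_attr_group,     /* hypothetical group */
        NULL
};

static struct pci_driver amdgpu_kms_pci_driver = {
        .name   = "amdgpu",
        .groups = amdgpu_drv_groups,
        /* ... probe/remove etc. ... */
};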
Andrey
>
>> The only alternative I can think of is creating my own struct
>> attribute_group ** array in amdgpu, in which I aggregate all the amdgpu
>> sysfs attributes, calling device_add_groups with that array at the end of
>> the amdgpu PCI probe, and calling device_remove_groups with the same array
>> on device removal.
> Horrid, no, see above :)
>
> thanks,
>
> greg k-h
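
For completeness, the aggregation I described above would look something like
this; a rough sketch only, the group names are hypothetical:

static const struct attribute_group *amdgpu_sysfs_groups[] = {
        &amdgpu_fw_version_attr_group,  /* hypothetical, collected from the subsystems */
        &amdgpu_pm_attr_group,          /* hypothetical */
        NULL
};

/* at the end of the amdgpu PCI probe */
r = device_add_groups(adev->dev, amdgpu_sysfs_groups);
if (r)
        return r;

/* on device remove */
device_remove_groups(adev->dev, amdgpu_sysfs_groups);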