[PATCH 6.6 00/28] fix CVE-2024-46701
Yu Kuai
yukuai1 at huaweicloud.com
Sat Nov 9 01:38:56 UTC 2024
Hi,
On 2024/11/09 1:03, Liam R. Howlett wrote:
> * Chuck Lever III <chuck.lever at oracle.com> [241108 08:23]:
>>
>>
>>> On Nov 7, 2024, at 8:19 PM, Yu Kuai <yukuai1 at huaweicloud.com> wrote:
>>>
>>> Hi,
>>>
>>>> On 2024/11/07 22:41, Chuck Lever wrote:
>>>> On Thu, Nov 07, 2024 at 08:57:23AM +0800, Yu Kuai wrote:
>>>>> Hi,
>>>>>
>>>>> On 2024/11/06 23:19, Chuck Lever III wrote:
>>>>>>
>>>>>>
>>>>>>> On Nov 6, 2024, at 1:16 AM, Greg KH <gregkh at linuxfoundation.org> wrote:
>>>>>>>
>>>>>>> On Thu, Oct 24, 2024 at 09:19:41PM +0800, Yu Kuai wrote:
>>>>>>>> From: Yu Kuai <yukuai3 at huawei.com>
>>>>>>>>
>>>>>>>> The fix itself is patch 27; the patches it depends on are from:
>>>>>>
>>>>>> I assume patch 27 is:
>>>>>>
>>>>>> libfs: fix infinite directory reads for offset dir
>>>>>>
>>>>>> https://lore.kernel.org/stable/20241024132225.2271667-12-yukuai1@huaweicloud.com/
>>>>>>
>>>>>> I don't think the Maple tree patches are a hard
>>>>>> requirement for this fix. And note that libfs did
>>>>>> not use Maple tree originally because I was told
>>>>>> at that time that Maple tree was not yet mature.
>>>>>>
>>>>>> So, a better approach might be to fit the fix
>>>>>> onto linux-6.6.y while sticking with xarray.
>>>>>
>>>>> The painful part is that staying on the xarray is not acceptable: the
>>>>> offset is only 32 bits, and once it overflows, readdir will read
>>>>> nothing. That's why the maple tree has to be used.
>>>> A 32-bit range should be entirely adequate for this usage.
>>>> - The offset allocator wraps when it reaches the maximum; it
>>>> doesn't overflow unless there are actually billions of extant
>>>> entries in the directory, which IMO is not likely.
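
(For context, a simplified sketch of what the v6.6 allocator in
fs/libfs.c does, from memory, with the pre-checks and constants
trimmed, so treat it as an illustration rather than the exact code:

    static int simple_offset_add(struct offset_ctx *octx,
                                 struct dentry *dentry)
    {
            u32 offset;
            int ret;

            /*
             * xa_alloc_cyclic() hands out 32-bit ids, starting the
             * search at octx->next_offset; when it reaches the top
             * of the limit it wraps back to the minimum and returns
             * 1 instead of failing.
             */
            ret = xa_alloc_cyclic(&octx->xa, &offset, dentry,
                                  XA_LIMIT(2, U32_MAX),
                                  &octx->next_offset, GFP_KERNEL);
            if (ret < 0)
                    return ret;

            offset_set(dentry, offset);
            return 0;
    }
)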
>>>
>>> Yes, it's not likely, but it is possible, and not hard to trigger in
>>> a test.
>>
>> I question whether such a test reflects any real-world
>> workload.
>>
>> Besides, there are a number of other limits that will impact
>> the ability to create that many entries in one directory.
>> The number of inodes in one tmpfs instance is limited, for
>> instance.
>>
>>
>>> And please note that the offset increases for each new file, and a
>>> file can be removed while the offset counter stays where it is.
>>>> - The offset values are dense, so the directory can use all 2 or
>>>> 4 billion values in the 32-bit integer range before wrapping.
>>>
>>> Some simple math: if a user creates and removes one file per second,
>>> it will take about 130 years to overflow. If a user creates and
>>> removes 1000 files per second, it will take about a month and a half
>>> to overflow.
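
(For reference, the arithmetic behind those estimates, assuming the
full 32-bit range of roughly 4.3 billion offsets:

    2^32 seconds         = 4,294,967,296 s ~= 136 years
    2^32 / 1000 seconds ~=     4,294,967 s ~=  50 days
)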
>>
>> The question is what happens when there are no more offset
>> values available. xa_alloc_cyclic should fail, and file
>> creation is supposed to fail at that point. If it doesn't,
>> that's a bug that is outside of the use of xarray or Maple.
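
(The distinction between wrapping and failing matters here. Going by
the documented xa_alloc_cyclic() return values, the call site looks
roughly like this:

    /*
     * xa_alloc_cyclic() returns:
     *   0       allocated without wrapping
     *   1       allocated, but the cyclic counter wrapped
     *   -EBUSY  no free ids anywhere in the range
     *   -ENOMEM memory allocation failed
     */
    ret = xa_alloc_cyclic(&octx->xa, &offset, dentry, limit,
                          &octx->next_offset, GFP_KERNEL);
    if (ret < 0)
            return ret;     /* only -EBUSY/-ENOMEM fail creation */

So -EBUSY, and with it a failed create, only happens when billions of
entries are live at the same time; ordinary create/remove churn wraps
the counter and still succeeds with ret == 1.)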
>>
>>
>>> The maple tree uses a 64-bit value for the offset, which is impossible
>>> to overflow for the rest of our lives.
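
(Upstream, after the conversion, the allocation looks roughly like
this, simplified from current fs/libfs.c:

    unsigned long offset;
    int ret;

    /* 64-bit range, DIR_OFFSET_MIN .. LONG_MAX, so the cyclic
     * counter cannot realistically wrap. */
    ret = mtree_alloc_cyclic(&octx->mt, &offset, dentry,
                             DIR_OFFSET_MIN, LONG_MAX,
                             &octx->next_offset, GFP_KERNEL);
    if (ret < 0)
            return ret;
)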
>>>> - No-one complained about this limitation when offset_readdir() was
>>>> first merged. The xarray was replaced for performance reasons,
>>>> not because of the 32-bit range limit.
>>>> It is always possible that I have misunderstood your concern!
>>>
>>> The problem is that if next_offset overflows to 0, then after patch 27,
>>> offset_dir_open() will record that 0, and a later offset_readdir() will
>>> return immediately, even though the directory can still hold many files.
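
(Sketched out, simplified from patch 27's offset_dir_open() and
offset_readdir():

    static int offset_dir_open(struct inode *inode, struct file *file)
    {
            struct offset_ctx *ctx = inode->i_op->get_offset_ctx(inode);

            /* remember where allocation stood at open time */
            file->private_data = (void *)ctx->next_offset;
            return 0;
    }

offset_readdir() then stops once it reaches the remembered value, so
if next_offset has wrapped to a small number, nearly every existing
entry lies at or beyond it and readdir terminates immediately.)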
>>
>> That's a separate bug that has nothing to do with the maximum
>> number of entries one directory can have. Again, you don't
>> need Maple tree to address that.
>>
>> My understanding from Liam is that backporting Maple into
>> v6.6 is just not practical to do. We must explore alternate
>> ways to address these concerns.
>>
>
> The tree itself is in v6.6, but the evolution of the tree to fit the
> needs of this and other subsystems isn't something that would be well
> tested. This is really backporting features and that's not the point of
> stable.
Of course.
>
> I think this is what Lorenzo was saying about changing your approach: we
> can't backport 28 patches to fix this when it isn't needed.
I don't have another approach right now, so I won't follow up on fixing
this CVE. It would be great if someone finds a better approach. :)
Thanks,
Kuai
>
> Thanks,
> Liam
>
>