[PATCH v8 24/26] mm/gup: track FOLL_PIN pages

John Hubbard jhubbard at nvidia.com
Wed Dec 11 00:27:02 UTC 2019


On 12/10/19 5:39 AM, Jan Kara wrote:
...
>> +void grab_page(struct page *page, unsigned int flags)
>> +{
>> +	if (flags & FOLL_GET)
>> +		get_page(page);
>> +	else if (flags & FOLL_PIN) {
>> +		get_page(page);
>> +		WARN_ON_ONCE(flags & FOLL_GET);
>> +		/*
>> +		 * Use get_page(), above, to do the refcount error
>> +		 * checking. Then just add in the remaining references:
>> +		 */
>> +		page_ref_add(page, GUP_PIN_COUNTING_BIAS - 1);
> 
> This is wrong for two reasons:
> 
> 1) You miss compound_head() indirection from get_page() for this
> page_ref_add().

whoops, yes that is missing.

> 
> 2) page_ref_add() could overflow the counter without noticing.
> 
> Especially with GUP_PIN_COUNTING_BIAS being non-trivial, it is realistic
> that an attacker might try to overflow the page refcount and we have to
> protect the kernel against that. So I think that all the places that would
> use grab_page() actually need to use try_grab_page() and then gracefully
> deal with the failure.
> 

OK, I've replaced grab_page() everywhere with try_grab_page(), with the
above issues fixed. The v7 patchset had error handling for grab_page() failures
that had already been reviewed, so the relevant parts of that have reappeared.

I had initially hesitated to do this, but now I've gone ahead and added:

#define page_ref_zero_or_close_to_bias_overflow(page) \
	((unsigned int) page_ref_count(page) + \
		GUP_PIN_COUNTING_BIAS <= GUP_PIN_COUNTING_BIAS)

...which is used in the new try_grab_page() for protection. Because the
addition is done in unsigned arithmetic, that comparison catches both a zero
refcount and a refcount close enough to UINT_MAX that adding the bias would
wrap around.
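
For reference, here's roughly how the helper ends up using that check. This
is just a sketch of the shape of the thing, not a paste from the v9 patch, so
details (return type, exact WARN_ON_ONCE placement) may differ:

static bool try_grab_page(struct page *page, unsigned int flags)
{
	/* Callers should pass either FOLL_GET or FOLL_PIN, not both: */
	WARN_ON_ONCE((flags & (FOLL_GET | FOLL_PIN)) ==
		     (FOLL_GET | FOLL_PIN));

	if (flags & FOLL_GET)
		return try_get_page(page);

	if (flags & FOLL_PIN) {
		page = compound_head(page);

		/* Refuse to pin if the refcount is zero or would overflow: */
		if (WARN_ON_ONCE(page_ref_zero_or_close_to_bias_overflow(page)))
			return false;

		page_ref_add(page, GUP_PIN_COUNTING_BIAS);
	}

	return true;
}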


>> @@ -278,11 +425,23 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
>>  		goto retry;
>>  	}
>>  
>> -	if (flags & FOLL_GET) {
>> +	if (flags & (FOLL_PIN | FOLL_GET)) {
>> +		/*
>> +		 * Allow try_get_page() to take care of error handling, for
>> +		 * both cases: FOLL_GET or FOLL_PIN:
>> +		 */
>>  		if (unlikely(!try_get_page(page))) {
>>  			page = ERR_PTR(-ENOMEM);
>>  			goto out;
>>  		}
>> +
>> +		if (flags & FOLL_PIN) {
>> +			WARN_ON_ONCE(flags & FOLL_GET);
>> +
>> +			/* We got a +1 refcount from try_get_page(), above. */
>> +			page_ref_add(page, GUP_PIN_COUNTING_BIAS - 1);
>> +			__update_proc_vmstat(page, NR_FOLL_PIN_REQUESTED, 1);
>> +		}
>>  	}
> 
> The same problem here as above, plus this place should use the same
> try_grab..() helper, shouldn't it?


Yes: now that the new try_grab_page() has the behavior that this call site
needs, this site uses the same helper too. Done.
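
In other words, that hunk collapses down to something like this (a sketch of
the intent, not the exact v9 diff; I'm also assuming the
NR_FOLL_PIN_REQUESTED accounting can move into the helper):

	if (flags & (FOLL_PIN | FOLL_GET)) {
		/*
		 * try_grab_page() now does the compound_head() indirection
		 * and the overflow checking, for both FOLL_GET and FOLL_PIN:
		 */
		if (unlikely(!try_grab_page(page, flags))) {
			page = ERR_PTR(-ENOMEM);
			goto out;
		}
	}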


> 
>> @@ -544,8 +703,8 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
>>  	/* make this handle hugepd */
>>  	page = follow_huge_addr(mm, address, flags & FOLL_WRITE);
>>  	if (!IS_ERR(page)) {
>> -		BUG_ON(flags & FOLL_GET);
>> -		return page;
>> +		WARN_ON_ONCE(flags & (FOLL_GET | FOLL_PIN));
>> +		return NULL;
> 
> I agree with the change to WARN_ON_ONCE but why is correct the change of
> the return value? Note that this is actually a "success branch".
> 

Good catch, thanks! I worked through the logic...correctly at first, but then I must 
have become temporarily dazed by the raw destructive power of the pre-existing 
BUG_ON() statement, and screwed it up after all. :)
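
So for v9 that hunk keeps the WARN_ON_ONCE(), but goes back to returning the
page on this success path, roughly like this (sketch, not the actual resend):

	page = follow_huge_addr(mm, address, flags & FOLL_WRITE);
	if (!IS_ERR(page)) {
		/* FOLL_GET/FOLL_PIN refcounting is not supported here: */
		WARN_ON_ONCE(flags & (FOLL_GET | FOLL_PIN));
		return page;
	}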


thanks,
-- 
John Hubbard
NVIDIA


