[RFC PATCH net-next v6 02/15] net: page_pool: create hooks for custom page providers
Pavel Begunkov
asml.silence at gmail.com
Wed Mar 6 14:29:56 UTC 2024
On 3/5/24 22:36, Mina Almasry wrote:
> On Tue, Mar 5, 2024 at 1:55 PM David Wei <dw at davidwei.uk> wrote:
>>
>> On 2024-03-04 18:01, Mina Almasry wrote:
>>> +struct memory_provider_ops {
>>> + int (*init)(struct page_pool *pool);
>>> + void (*destroy)(struct page_pool *pool);
>>> + struct page *(*alloc_pages)(struct page_pool *pool, gfp_t gfp);
>>> + bool (*release_page)(struct page_pool *pool, struct page *page);
>>
>> For ZC Rx we added a scrub() function to memory_provider_ops that is
>> called from page_pool_scrub(). Does TCP devmem not need custom behaviour
>> waiting for all netmem_refs to return before destroying the page pool?
>> What happens if e.g. application crashes?
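>>
>> (For reference, the addition is roughly the following; the exact field
>> and hook-point names below are illustrative, not lifted verbatim from
>> either series:
>>
>>     struct memory_provider_ops {
>>         ...
>>         void (*scrub)(struct page_pool *pool);
>>     };
>>
>>     /* in page_pool_scrub(), before the pool is torn down */
>>     if (pool->mp_ops && pool->mp_ops->scrub)
>>         pool->mp_ops->scrub(pool);
>>
>> i.e. the provider gets one last chance to return buffers it is still
>> holding on behalf of userspace.)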
>
> (sorry for the long reply, but the refcounting is pretty complicated to
> explain and I feel like we need to agree on how things currently work)
>
> Yeah, the addition of the page_pool_scrub() function is a bit of a
> head scratcher for me. Here is how the (complicated) refcounting works
> for devmem TCP (assuming the driver is not doing its own recycling
> logic which complicates things further):
>
> 1. When a netmem_ref is allocated by the page_pool (from dmabuf or
> page), the netmem_get_pp_ref_count_ref()==1 and belongs to the page
> pool as long as the netmem is waiting in the pool for driver
> allocation.
>
> 2. When a netmem is allocated by the driver, no refcounting is
> changed, but the ownership of the netmem_get_pp_ref_count_ref() is
> implicitly transferred from the page pool to the driver. i.e. the ref
> now belongs to the driver until an skb is formed.
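>
> In driver terms that is just the ordinary allocation call. A rough
> sketch (the rxq type, its fields, and the allocator name are stand-ins,
> not real driver code):
>
>     static bool my_rxq_refill_one(struct my_rxq *rxq)
>     {
>         /* steps 1-2: pp ref count stays at 1; the netmem is now
>          * implicitly owned by the driver until it lands in an skb
>          */
>         netmem_ref netmem = page_pool_alloc_netmem(rxq->page_pool,
>                                                    GFP_ATOMIC);
>
>         if (!netmem)
>             return false;
>
>         rxq->pending[rxq->tail++] = netmem;
>         return true;
>     }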
>
> 3. When the driver forms an skb using skb_rx_add_frag_netmem(), no
> refcounting is changed, but the ownership of the
> netmem_get_pp_ref_count_ref() is transferred from the driver to the
> TCP stack.
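>
> Roughly, in the driver's completion path (argument list from memory;
> offset/length are whatever the driver computed for the buffer):
>
>     /* attach the frag: the pp ref now belongs to the networking
>      * stack (the skb) rather than the driver
>      */
>     skb_rx_add_frag_netmem(skb, 0, netmem, offset, frag_len, truesize);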
>
> 4. When the TCP stack hands the skb to the application, the TCP stack
> obtains an additional refcount, so netmem_get_pp_ref_count_ref()==2,
> and frees the skb using skb_frag_unref(), which drops the
> netmem_get_pp_ref_count_ref()==1.
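>
> Conceptually the receive path does something like this per frag
> (helper names here are illustrative):
>
>     /* take the application's ref: pp ref count goes 1 -> 2 */
>     netmem_ref_get(netmem);
>
>     /* remember the frag so DEVMEM_DONTNEED / close can find it */
>     xa_store(&sk->sk_user_frags, token,
>              (void *)(__force unsigned long)netmem, GFP_KERNEL);
>
>     /* drop the skb; skb_frag_unref() brings the count back to 1 */
>     skb_frag_unref(skb, i);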
>
> 5. When the user is done with the skb, the user calls the
> DEVMEM_DONTNEED setsockopt which calls napi_pp_put_netmem() which
> recycles the netmem back to the page pool. This doesn't modify any
> refcounting, but the refcount ownership transfers from the userspace
> back to the page pool, and we're back at step 1.
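>
> Mirroring the sock-destroy hunk further down, the DONTNEED handler is
> then essentially, per token returned by the application (sketch):
>
>     /* drop the application's ref; the netmem recycles into the pool */
>     void *netmem = xa_erase(&sk->sk_user_frags, token);
>
>     if (netmem)
>         WARN_ON_ONCE(!napi_pp_put_page((__force netmem_ref)netmem, false));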
>
> So all in all netmem can belong either to (a) the page pool, or (b)
> the driver, or (c) the TCP stack, or (d) the application depending on
> where exactly it is in the RX path.
>
> When an application running devmem TCP crashes, the netmem that belong
> to the page pool or driver are not touched, because the page pool is
> not tied to the application in our case really. However, the TCP stack
> notices the devmem socket of the application close, and when it does,
> the TCP stack will:
>
> 1. Free all the skbs in the socket's receive queue. This is not custom
> behavior for devmem TCP, it's just standard for TCP to free all skbs
> waiting to be received by the application.
> 2. The TCP stack will free references that belong to the application.
> Since the application crashed, it will not call the DEVMEM_DONTNEED
> setsockopt, so we need to free those on behalf of the application.
> This is done in this diff:
>
> @@ -2498,6 +2498,15 @@ static void tcp_md5sig_info_free_rcu(struct rcu_head *head)
>  void tcp_v4_destroy_sock(struct sock *sk)
>  {
>          struct tcp_sock *tp = tcp_sk(sk);
> +        __maybe_unused unsigned long index;
> +        __maybe_unused void *netmem;
> +
> +#ifdef CONFIG_PAGE_POOL
> +        xa_for_each(&sk->sk_user_frags, index, netmem)
> +                WARN_ON_ONCE(!napi_pp_put_page((__force netmem_ref)netmem, false));
> +#endif
> +
> +        xa_destroy(&sk->sk_user_frags);
>
>          trace_tcp_destroy_sock(sk);
>
> To be honest, I think it makes sense for the TCP stack to be
> responsible for putting the references that belong to it and the
> application. To me, it does not make much sense for the page pool to
> be responsible for putting the reference that belongs to the TCP stack
> or driver via a page_pool_scrub() function, as those references do not
> belong to the page pool really. I'm not sure why there is a difference
> between our use cases here because I'm not an io_uring expert. Why do
> you need to scrub all the references on page pool destruction? Don't
> these belong to non-page-pool components like the io_uring stack or TCP
> stack or otherwise?
That one is about cleaning up buffers that are in between steps 4 and 5,
i.e. owned by the user, which devmem does at socket destruction. io_uring
could get by without scrub, dropping user refs while unregistering the
ifq, but then it'd need to wait for all requests to finish so there is
no step 4 in the meantime. That might change and could be useful, but it
was much easier to hook into the pp release loop.
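
Roughly, all the hook has to do on our side is walk what userspace still
holds and put those refs back; a sketch, with every io_uring-side name
below being a placeholder:

    static void io_zc_scrub(struct page_pool *pool)
    {
        struct io_zc_ifq *ifq = io_pool_to_ifq(pool);   /* placeholder */
        netmem_ref netmem;

        /* drop the refs still held on behalf of userspace (the window
         * between steps 4 and 5 above) so the pool can fully drain
         */
        while ((netmem = io_zc_pop_user_buffer(ifq)))
            io_zc_return_netmem(pool, netmem);
    }
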
Another concern is who can reset the ifq / kill the pp outside of
io_uring/devmem, and when. I assume it can happen on a whim, which is
hard to handle gracefully.
--
Pavel Begunkov