[PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

Pavel Begunkov asml.silence at gmail.com
Fri Jun 7 15:46:57 UTC 2024


On 6/7/24 16:42, Pavel Begunkov wrote:
> On 6/7/24 15:27, David Ahern wrote:
>> On 6/7/24 7:42 AM, Pavel Begunkov wrote:
>>> I haven't seen any arguments against it from the (net) maintainers so
>>> far, nor do I see any objection from them against callbacks (considering
>>> that either option adds an if).
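
To make the "either option adds an if" point concrete: whichever way a
custom provider is plugged in, the allocation path grows roughly one
branch. A hypothetical sketch (the field and callback names below are
made up for illustration, not taken from the patch):

/* Illustration only: mp_ops and its alloc_pages() callback are
 * hypothetical names, not necessarily what the series uses.
 */
static struct page *page_pool_alloc_via_provider(struct page_pool *pool,
                                                 gfp_t gfp)
{
        /* one extra, well-predicted branch on the allocation path */
        if (pool->mp_ops)
                return pool->mp_ops->alloc_pages(pool, gfp);

        return __page_pool_alloc_pages_slow(pool, gfp);
}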
>>
>> I have said before that I do not understand why the dmabuf paradigm is
>> not sufficient for both device memory and host memory. It trades a
>> less-than-ideal control path (putting hostmem in a dmabuf wrapper) for
>> avoiding extra checks and changes in the datapath, and that trade
>> should always be preferred.
> 
> If we're talking specifically about types of memory, I'm not strictly
> against wrapping host memory into a dmabuf in the kernel, but that by
> itself doesn't buy us anything.

And the reason I don't have too strong an opinion on that is
mainly because it's just the setup/cleanup path.
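
For reference, that setup-path cost is roughly the usual exporter
boilerplate. A minimal sketch of wrapping pinned host pages into a
dmabuf (names are hypothetical and error handling is trimmed; this is
not code from the series):

#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/fcntl.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

struct hostmem_buf {
        struct page **pages;
        unsigned int nr_pages;
};

static struct sg_table *hostmem_map(struct dma_buf_attachment *attach,
                                    enum dma_data_direction dir)
{
        struct hostmem_buf *buf = attach->dmabuf->priv;
        struct sg_table *sgt;

        sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
        if (!sgt)
                return ERR_PTR(-ENOMEM);
        if (sg_alloc_table_from_pages(sgt, buf->pages, buf->nr_pages, 0,
                                      (unsigned long)buf->nr_pages * PAGE_SIZE,
                                      GFP_KERNEL)) {
                kfree(sgt);
                return ERR_PTR(-ENOMEM);
        }
        if (dma_map_sgtable(attach->dev, sgt, dir, 0)) {
                sg_free_table(sgt);
                kfree(sgt);
                return ERR_PTR(-ENOMEM);
        }
        return sgt;
}

static void hostmem_unmap(struct dma_buf_attachment *attach,
                          struct sg_table *sgt, enum dma_data_direction dir)
{
        dma_unmap_sgtable(attach->dev, sgt, dir, 0);
        sg_free_table(sgt);
        kfree(sgt);
}

static void hostmem_release(struct dma_buf *dmabuf)
{
        /* unpin/free buf->pages and the private struct here */
}

static const struct dma_buf_ops hostmem_dmabuf_ops = {
        .map_dma_buf    = hostmem_map,
        .unmap_dma_buf  = hostmem_unmap,
        .release        = hostmem_release,
};

static struct dma_buf *hostmem_export(struct hostmem_buf *buf, size_t size)
{
        DEFINE_DMA_BUF_EXPORT_INFO(exp_info);

        exp_info.ops = &hostmem_dmabuf_ops;
        exp_info.size = size;
        exp_info.flags = O_RDWR;
        exp_info.priv = buf;
        return dma_buf_export(&exp_info);
}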

> But the main reason for doing the allocations there is the difference
> in approach to the API. With io_uring, the allocation callback is
> responsible for getting buffers back from the user (via a shared
> ring). There is no locking for the ring, and the buffers are already
> in the context (napi) where they will be consumed. That removes some
> headaches for the user (like batching before returning buffers) and
> should work better with smaller buffers and such.
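
To sketch what that looks like (every name below is made up for
illustration; the real series defines its own structures): the
provider's allocation hook refills the page pool directly from a ring
shared with userspace, and since it only ever runs from the pool's
napi context, the kernel side of the ring needs no locking.

/* Hypothetical sketch, not from the series. The ring lives in memory
 * that is mmap()ed to userspace: the user posts indices of buffers it
 * is done with, and the kernel consumes them from napi context.
 */
struct zc_refill_ring {
        u32     *shared_tail;   /* advanced by userspace */
        u32     head;           /* kernel-private consumer head */
        u32     mask;
        u32     *entries;       /* buffer indices, in the shared mapping */
};

struct zc_ifq {
        struct zc_refill_ring   ring;
        struct page             **pages;        /* pre-pinned user buffers */
};

/* Called via the page_pool provider hook, always from napi context. */
static struct page *zc_provider_alloc(struct zc_ifq *ifq)
{
        struct zc_refill_ring *r = &ifq->ring;
        u32 tail = smp_load_acquire(r->shared_tail);

        if (r->head == tail)
                return NULL;    /* user has not returned buffers yet */

        return ifq->pages[r->entries[r->head++ & r->mask]];
}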
> 
>> I also do not understand why the ifq cache 
> 
> I'm not sure what you mean by ifq cache. Can you elaborate?
> 
>> and overloading xdp functions
> 
> Assuming this is about setup via XDP: it was marked for rework in the
> RFCs for longer than desired, but it's gone now in our tree (though
> maybe not in the latest posted series).
> 
>> have stuck around; I always thought both were added by Jonathan to
>> simplify kernel ports during early POC days.
> 

-- 
Pavel Begunkov

