[PATCH v6 02/15] net: generalise net_iov chunk owners

Pavel Begunkov asml.silence at gmail.com
Thu Oct 24 16:40:02 UTC 2024


On 10/24/24 17:06, Christoph Hellwig wrote:
> On Thu, Oct 24, 2024 at 03:23:06PM +0100, Pavel Begunkov wrote:
>>> That's not what this series does.  It adds the new memory_provider_ops
>>> set of hooks, with one implementation for dmabufs, and one for
>>> io_uring zero copy.
>>
>> First, it's not a _new_ abstraction over a buffer as you called it
>> before; the abstraction (net_iov) is already merged.
> 
> Umm, it is a new ops vector.

I don't understand what you mean. Callback?

>> Second, you mention devmem TCP, and it's not just a page pool with
>> "dmabufs", it's a user API to use it and other memory agnostic
>> allocation logic. And yes, dmabufs there is the least technically
>> important part. Just having a dmabuf handle solves absolutely nothing.
> 
> It solves a lot, because it provides a proper abstraction.

Then please go ahead and take a look at the patchset in question
and see how much dmabuf handling is there compared to pure
networking changes. The point stands that it's a new set of APIs
with lots of changes not directly related to dmabufs. dmabuf is
useful there as an abstraction, but it's a very long stretch to
say that the series is all about it.

> 
>>> So you are precluding zero copy RX into anything but your magic
>>> io_uring buffers, and using an odd abstraction for that.
>>
>> Right, the io_uring zero copy RX API expects transfers to happen into
>> io_uring controlled buffers, and that's the entire idea. Buffers that
>> are based on an existing network specific abstraction, which is not
>> restricted to pages or anything specific in the long run, but whose
>> flow from the net stack to user space and back is controlled by
>> io_uring. If you worry about abuse, io_uring can't even sanely
>> initialise those buffers itself and therefore asks the page pool
>> code to do that.
> 
> No, I worry about tying this to io_uring for no good reason. This

It sounds like the argument is that you just don't want any
io_uring APIs; I don't think I'd be able to help you with
that.

> precludes in-kernel uses which would be extremely useful for

Uses of what? devmem TCP is merged, I'm not removing it,
and the net_iov abstraction is in there, which can potentially
be reused by other in-kernel users if that'd even make sense.

> network storage drivers, and it precludes device memory of all
> kinds.

You can't use page pools to allocate for a storage device; it's
a network specific allocator. You can get a dmabuf around that
device's memory and zero copy into it, and there is no problem
with that. Either use devmem TCP or wait until io_uring adds
support for dmabufs, which is, again, trivial.

>> I'm even more confused how that would help. The user API has to
>> be implemented, and adding a new dmabuf gives nothing, not to
>> mention it's not clear what the semantics of that beast are
>> supposed to be.
>>
> 
> The dma-buf maintainers already explained to you last time
> that there is absolutely no need to use the dmabuf UAPI, you
> can use dma-bufs through in-kernel interfaces just fine.

You can, even though it's not needed and I don't see how
it'd be useful, but you're missing the point. A new dmabuf
implementation doesn't implement the uapi we need, nor does
it help to talk to the net layer.

-- 
Pavel Begunkov


More information about the dri-devel mailing list