[RFC PATCH v2 00/11] Device Memory TCP

Pavel Begunkov asml.silence at gmail.com
Thu Aug 17 18:00:35 UTC 2023


On 8/14/23 02:12, David Ahern wrote:
> On 8/9/23 7:57 PM, Mina Almasry wrote:
>> Changes in RFC v2:
>> ------------------
...
>> ** Test Setup
>>
>> Kernel: net-next with this RFC and memory provider API cherry-picked
>> locally.
>>
>> Hardware: Google Cloud A3 VMs.
>>
>> NIC: GVE with header split & RSS & flow steering support.
> 
> This set seems to depend on Jakub's memory provider patches and a netdev
> driver change which is not included. For the testing mentioned here, you
> must have a tree + branch with all of the patches. Is it publicly available?
> 
> It would be interesting to see how well (easy) this integrates with
> io_uring. Besides avoiding all of the syscalls for receiving the iov and
> releasing the buffers back to the pool, io_uring also brings in the
> ability to seed a page_pool with registered buffers which provides a
> means to get simpler Rx ZC for host memory.
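
For reference, the per-batch syscall flow being referred to looks roughly
like this from userspace (the cmsg type, the "return buffers" sockopt and
the struct below are placeholders made up for illustration, not the uapi
proposed in this series):

/*
 * Rough sketch of the syscall-per-batch flow that a ring based
 * interface would fold away: one recvmsg() to get the fragments as
 * cmsgs, one sockopt-style call to hand the buffers back.  All names
 * marked "placeholder" are invented for this sketch.
 */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

#define CMSG_RX_CHUNK   1    /* placeholder cmsg type */
#define SO_RETURN_BUFS  99   /* placeholder "give buffers back" sockopt */

struct rx_chunk {            /* placeholder: one received fragment */
	uint64_t offset;     /* offset into the registered region */
	uint32_t len;
	uint32_t token;      /* id handed back once we're done */
};

static void rx_once(int fd)
{
	char data[4096], ctrl[1024];
	struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
	struct msghdr msg = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = ctrl, .msg_controllen = sizeof(ctrl),
	};
	uint32_t tokens[64];
	int ntok = 0;

	/* syscall #1: receive, payload described by cmsgs */
	if (recvmsg(fd, &msg, 0) < 0)
		return;

	for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c;
	     c = CMSG_NXTHDR(&msg, c)) {
		struct rx_chunk chunk;

		if (c->cmsg_type != CMSG_RX_CHUNK)
			continue;
		memcpy(&chunk, CMSG_DATA(c), sizeof(chunk));
		/* ... consume chunk.offset / chunk.len ... */
		if (ntok < 64)
			tokens[ntok++] = chunk.token;
	}

	/* syscall #2: hand the buffers back to the kernel pool */
	setsockopt(fd, SOL_SOCKET, SO_RETURN_BUFS, tokens,
		   ntok * sizeof(tokens[0]));
}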

The patchset sounds pretty interesting. I've been working with David Wei
(CC'ing) on io_uring zc rx (currently at the prototype polishing stage),
which is based on a similar approach of allocating a dedicated rx queue.
It targets host memory, with device memory as an extra feature; the uapi
is different, and lifetimes are managed by / bound to io_uring.
Completions/buffers are returned to userspace via a separate queue
instead of cmsg, and handed back to the kernel at a per-buffer
granularity via another queue. I'll leave it to David to elaborate.
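
To sketch the shape of it (the struct layouts and names below are purely
illustrative, not the prototype's actual uapi): the kernel posts completed
buffers into a completion ring the application drains, and the application
returns them through a refill ring, with no syscall on either path:

/*
 * Illustrative only: shared-memory rings instead of cmsg.  Names and
 * layouts are made up for this sketch; ring-full handling is omitted.
 */
#include <stdatomic.h>
#include <stdint.h>

struct zcrx_cqe {                /* one completed receive fragment */
	uint64_t off;            /* offset into the registered area */
	uint32_t len;
	uint32_t buf_id;         /* identifies the buffer to refill */
};

struct zcrx_ring {
	_Atomic uint32_t head;   /* consumer index */
	_Atomic uint32_t tail;   /* producer index */
	uint32_t mask;           /* entries - 1, power of two */
	struct zcrx_cqe *entries;
};

/* application side: drain completions, then hand buffers back */
static void drain(struct zcrx_ring *cq, struct zcrx_ring *refill)
{
	uint32_t tail = atomic_load_explicit(&cq->tail, memory_order_acquire);
	uint32_t head = atomic_load_explicit(&cq->head, memory_order_relaxed);

	while (head != tail) {
		struct zcrx_cqe *cqe = &cq->entries[head & cq->mask];

		/* ... consume cqe->off / cqe->len ... */

		/* return the buffer: no syscall, just a refill entry */
		uint32_t rt = atomic_load_explicit(&refill->tail,
						   memory_order_relaxed);
		refill->entries[rt & refill->mask].buf_id = cqe->buf_id;
		atomic_store_explicit(&refill->tail, rt + 1,
				      memory_order_release);
		head++;
	}
	atomic_store_explicit(&cq->head, head, memory_order_release);
}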

It sounds like we have room for collaboration here, if not merging the
efforts then at least reusing the internals as much as we can, but we'd
need to look deeper into the details.

> Overall I like the intent and possibilities for extensions, but a lot of
> details are missing - perhaps some are answered by seeing an end-to-end
> implementation.

-- 
Pavel Begunkov
