[PATCH 1/2] dma-buf: heaps: DMA_HEAP_IOCTL_ALLOC_READ_FILE framework
Huan Yang
link at vivo.com
Wed Jul 24 07:12:55 UTC 2024
On 2024/7/18 1:03, Christoph Hellwig wrote:
> copy_file_range only work inside the same file system anyway, so
> it is completely irrelevant here.
>
> What should work just fine is using sendfile (or splice if you like it
> complicated) to write TO the dma buf. That just iterates over the page
> cache on the source file and calls ->write_iter from the page cache
> pages. Of course that requires that you actually implement
> ->write_iter, but given that dmabufs support mmaping there I can't
> see why you should not be able to write to it.
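If I follow, a rough sketch of such a ->write_iter for a heap-backed dma-buf
might look like the code below. This is only my sketch, assuming the exporter
can provide a kernel vmap (as the system heap can); the function name and how
it would be wired into the dma-buf fops are my own guesses, not part of this
series.

/*
 * Hypothetical sketch only: a ->write_iter for a heap-backed dma-buf,
 * copying from the iov_iter into the buffer's kernel mapping.
 */
#include <linux/dma-buf.h>
#include <linux/fs.h>
#include <linux/iosys-map.h>
#include <linux/minmax.h>
#include <linux/uio.h>

static ssize_t dma_buf_sketch_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
	/* dma-buf files keep the dma_buf in file->private_data. */
	struct dma_buf *dmabuf = iocb->ki_filp->private_data;
	struct iosys_map map;
	size_t count = iov_iter_count(from);
	loff_t pos = iocb->ki_pos;
	ssize_t ret;

	if (pos >= dmabuf->size)
		return -ENOSPC;
	count = min_t(size_t, count, dmabuf->size - pos);

	/* Requires the exporter to provide a kernel vmap of the buffer. */
	ret = dma_buf_vmap(dmabuf, &map);
	if (ret)
		return ret;

	/* Assumes a vaddr-backed (non-iomem) mapping, e.g. system heap pages. */
	ret = copy_from_iter(map.vaddr + pos, count, from);

	dma_buf_vunmap(dmabuf, &map);

	if (ret > 0)
		iocb->ki_pos += ret;
	return ret;
}
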
Today I tested reading a large file into a dma-buf with sendfile. Here are two
problems I found when reading a file opened with O_DIRECT.
1. sendfile/splice transfers data between the read side and the write side
through a pipe. Even when the read side does not populate the page cache, an
equivalent amount of CPU copying is still required.
The resulting performance degradation is particularly noticeable when
reading large files.
2. The pipe's capacity is 64K (in both my phone and arch tests). This means
that each iteration reads and then copies at most 64K, resulting in poor I/O
performance.
Based on my test observations, an O_DIRECT read of a 3GB file takes 7s on
average. The trace shows a lot of runnable and running time with some I/O in
between.
A buffered read of the same large file into the dma-buf via sendfile takes 2.3s, which is normal.
So maybe sendfile is not a good way to let dma-buf support direct I/O?
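For reference, the test loop was essentially the following. The paths and the
heap name are just from my local setup, and it of course assumes the dma-buf
file accepts writes (e.g. via a ->write_iter as sketched above), so treat it
as illustrative only.

/* Illustrative test: read a file into a dma-buf with sendfile(). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>
#include <linux/dma-heap.h>

int main(void)
{
	int heap_fd = open("/dev/dma_heap/system", O_RDONLY);
	int file_fd = open("/data/test_3g.bin", O_RDONLY | O_DIRECT);
	struct dma_heap_allocation_data alloc = { 0 };
	struct stat st;
	off_t off = 0;

	if (heap_fd < 0 || file_fd < 0 || fstat(file_fd, &st) < 0)
		return 1;

	/* Allocate a dma-buf large enough to hold the whole file. */
	alloc.len = st.st_size;
	alloc.fd_flags = O_RDWR | O_CLOEXEC;
	if (ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc) < 0)
		return 1;

	/*
	 * sendfile() moves the data through an internal pipe, so each call
	 * transfers at most the pipe capacity (64K here) and still performs
	 * a CPU copy from the pipe pages into the destination.
	 */
	while (off < st.st_size) {
		ssize_t n = sendfile(alloc.fd, file_fd, &off, st.st_size - off);
		if (n <= 0)
			break;
	}

	close(file_fd);
	close(heap_fd);
	/* alloc.fd now refers to the filled dma-buf. */
	return 0;
}
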
>
> Reading FROM the dma buf in that fashion should also work if you provide
> a ->read_iter wire up ->splice_read to copy_splice_read so that it
We currently care more about reading a file into a dma-buf, not writing from it. :)
> doesn't require any page cache.