[PATCH v3] dma-buf: cma_heap: Check for device max segment size when attaching
Andrew Davis
afd at ti.com
Mon Mar 6 16:51:43 UTC 2023
Although there usually is no such limitation (and when there is, it is
often only because the driver forgot to raise the super small default),
it is still correct here to split scatterlist elements into chunks no
larger than the attaching device's dma_get_max_seg_size().
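
For reference, the "super small default" is 64 KiB; as of v6.3 the
helper in include/linux/dma-mapping.h reads essentially as follows
(reproduced here for context, lightly trimmed):

	static inline unsigned int dma_get_max_seg_size(struct device *dev)
	{
		if (dev->dma_parms && dev->dma_parms->max_segment_size)
			return dev->dma_parms->max_segment_size;
		return SZ_64K;	/* fallback when the driver never set a limit */
	}

So a device whose driver never declared a limit will now see its
attachments split into 64 KiB scatterlist entries.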
This might cause issues for users with misbehaving drivers. If
bisecting has landed you on this commit, make sure your driver both
sets its maximum segment size with dma_set_max_seg_size() and checks
for contiguity correctly.
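
For anyone debugging this, a minimal sketch of the importer-side fixes;
the dev/attach variables and the UINT_MAX limit are illustrative, not
taken from any particular driver (also assumes dev->dma_parms has been
allocated, e.g. by the bus code):

	/* in the importing driver's probe(): advertise the real limit */
	ret = dma_set_max_seg_size(dev, UINT_MAX);
	if (ret)
		return ret;

	/*
	 * At import time, check contiguity of the mapped table instead
	 * of assuming it: a CMA buffer is only "contiguous" for this
	 * device if it mapped into a single DMA segment.
	 */
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt))
		return PTR_ERR(sgt);
	if (sgt->nents != 1) {
		dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
		return -EINVAL;
	}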
Signed-off-by: Andrew Davis <afd at ti.com>
---
Changes from v2:
- Rebased onto v6.3-rc1
Changes from v1:
- Fixed mixed declarations and code warning
drivers/dma-buf/heaps/cma_heap.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
index 1131fb943992..579261a46fa3 100644
--- a/drivers/dma-buf/heaps/cma_heap.c
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -53,16 +53,18 @@ static int cma_heap_attach(struct dma_buf *dmabuf,
 {
 	struct cma_heap_buffer *buffer = dmabuf->priv;
 	struct dma_heap_attachment *a;
+	size_t max_segment;
 	int ret;
 
 	a = kzalloc(sizeof(*a), GFP_KERNEL);
 	if (!a)
 		return -ENOMEM;
 
-	ret = sg_alloc_table_from_pages(&a->table, buffer->pages,
-					buffer->pagecount, 0,
-					buffer->pagecount << PAGE_SHIFT,
-					GFP_KERNEL);
+	max_segment = dma_get_max_seg_size(attachment->dev);
+	ret = sg_alloc_table_from_pages_segment(&a->table, buffer->pages,
+						buffer->pagecount, 0,
+						buffer->pagecount << PAGE_SHIFT,
+						max_segment, GFP_KERNEL);
 	if (ret) {
 		kfree(a);
 		return ret;
--
2.39.2