<!DOCTYPE html><html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body>
<p><br>
</p>
<div class="moz-cite-prefix">On 10/1/24 20:17, Lucas Stach wrote:<br>
</div>
<blockquote type="cite" cite="mid:7a6ffbb773784dee0ea3ee87e563ac4e4f7c9c90.camel@pengutronix.de">
<pre class="moz-quote-pre" wrap="">CAUTION: This email comes from a non Wind River email account!
Do not click links or open attachments unless you recognize the sender and know the content is safe.
Hi Xiaolei,
On Tuesday, 2024-09-03 at 10:08 +0800, Xiaolei Wang wrote:
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">Remove __GFP_HIGHMEM when requesting a page from DMA32 zone,
and since all vivante GPUs in the system will share the same
DMA constraints, move the check of whether to get a page from
DMA32 to etnaviv_bind().
Fixes: b72af445cd38 ("drm/etnaviv: request pages from DMA32 zone when needed")
Suggested-by: Sui Jingfeng <a class="moz-txt-link-rfc2396E" href="mailto:sui.jingfeng@linux.dev"><sui.jingfeng@linux.dev></a>
Signed-off-by: Xiaolei Wang <a class="moz-txt-link-rfc2396E" href="mailto:xiaolei.wang@windriver.com"><xiaolei.wang@windriver.com></a>
---
change log
v1:
<a class="moz-txt-link-freetext" href="https://patchwork.kernel.org/project/dri-devel/patch/20240806104733.2018783-1-xiaolei.wang@windriver.com/">https://patchwork.kernel.org/project/dri-devel/patch/20240806104733.2018783-1-xiaolei.wang@windriver.com/</a>
v2:
Fix the v1 issue of not retaining GFP_USER and update the commit log.
v3:
Use "priv->shm_gfp_mask = GFP_USER | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;"
instead of
"priv->shm_gfp_mask = GFP_HIGHUSER | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;"
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">
I don't understand this part of the changes in the new version. Why
should we drop the HIGHMEM bit always and not only in the case where
dma addressing is limited? This seems overly restrictive.</pre>
</blockquote>
<p>Makes sense, thanks for the reminder. In the next version I will only
drop the HIGHMEM bit when DMA addressing is limited:</p>
<pre>	if (dma_addressing_limited(dev)) {
		priv->shm_gfp_mask |= GFP_DMA32;
		priv->shm_gfp_mask &= ~__GFP_HIGHMEM;
	}</pre>
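<p>For reference, a rough sketch of how the full etnaviv_bind() hunk could
then look (untested, assuming the GFP_HIGHUSER base mask is kept as you
suggest):</p>
<pre>	priv->shm_gfp_mask = GFP_HIGHUSER | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;

	/*
	 * If the GPU is part of a system with DMA addressing limitations,
	 * request pages for our SHM backend buffers from the DMA32 zone and
	 * drop __GFP_HIGHMEM, to hopefully avoid performance killing SWIOTLB
	 * bounce buffering.
	 */
	if (dma_addressing_limited(dev)) {
		priv->shm_gfp_mask |= GFP_DMA32;
		priv->shm_gfp_mask &= ~__GFP_HIGHMEM;
	}</pre>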
<p>Thanks,</p>
<p>Xiaolei<br>
</p>
<blockquote type="cite" cite="mid:7a6ffbb773784dee0ea3ee87e563ac4e4f7c9c90.camel@pengutronix.de">
<pre class="moz-quote-pre" wrap="">
Regards,
Lucas
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">and move the check of whether to get a page from DMA32 to etnaviv_bind().
drivers/gpu/drm/etnaviv/etnaviv_drv.c | 10 +++++++++-
drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 8 --------
2 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
index 6500f3999c5f..8cb2c3ec8e5d 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
@@ -536,7 +536,15 @@ static int etnaviv_bind(struct device *dev)
mutex_init(&priv->gem_lock);
INIT_LIST_HEAD(&priv->gem_list);
priv->num_gpus = 0;
- priv->shm_gfp_mask = GFP_HIGHUSER | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;
+ priv->shm_gfp_mask = GFP_USER | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;
+
+ /*
+ * If the GPU is part of a system with DMA addressing limitations,
+ * request pages for our SHM backend buffers from the DMA32 zone to
+ * hopefully avoid performance killing SWIOTLB bounce buffering.
+ */
+ if (dma_addressing_limited(dev))
+ priv->shm_gfp_mask |= GFP_DMA32;
priv->cmdbuf_suballoc = etnaviv_cmdbuf_suballoc_new(drm->dev);
if (IS_ERR(priv->cmdbuf_suballoc)) {
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
index 7c7f97793ddd..5e753dd42f72 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
@@ -839,14 +839,6 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
if (ret)
goto fail;
- /*
- * If the GPU is part of a system with DMA addressing limitations,
- * request pages for our SHM backend buffers from the DMA32 zone to
- * hopefully avoid performance killing SWIOTLB bounce buffering.
- */
- if (dma_addressing_limited(gpu->dev))
- priv->shm_gfp_mask |= GFP_DMA32;
-
/* Create buffer: */
ret = etnaviv_cmdbuf_init(priv->cmdbuf_suballoc, &gpu->buffer,
PAGE_SIZE);
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">
</pre>
</blockquote>
</body>
</html>