[Mesa-dev] [PATCH 02/10] dri_interface: add __DRI_IMAGE_TRANSFER_USER_STRIDE

Nicolai Hähnle nhaehnle at gmail.com
Fri Apr 27 09:00:50 UTC 2018


On 26.04.2018 04:30, Marek Olšák wrote:
> On Wed, Apr 25, 2018 at 6:56 PM, Gurchetan Singh
> <gurchetansingh at chromium.org> wrote:
> 
>     On Wed, Apr 25, 2018 at 2:16 PM, Marek Olšák <maraeo at gmail.com> wrote:
>     > From: Nicolai Hähnle <nicolai.haehnle at amd.com>
>     >
>     > Allow the caller to specify the row stride (in bytes) with which an image
>     > should be mapped. Note that completely ignoring USER_STRIDE is a valid
>     > implementation of mapImage.
>     >
>     > This is horrible API design. Unfortunately, cros_gralloc itself has a
>     > horrible API, in that arbitrary images must be mappable with the
>     > stride that a linear image of the same width would have.
> 
>     Yes, unfortunately the gralloc API doesn't return the stride when
>     (*lock) is called.  However, the stride is returned during (*alloc).
>     Currently, for the dri backend, minigbm uses
>     __DRI_IMAGE_ATTRIB_STRIDE to compute the pixel stride given to
>     Android.
> 
>     Is AMD seeing problems with the current approach (I haven't seen any
>     bugs filed for this issue)?
> 
>     Another possible solution is to call mapImage()/unmapImage() right after
>     allocating the image, and use the stride returned by mapImage() to
>     compute the pixel stride.  That could also fix whatever bugs AMD is
>     seeing.
> 
> 
> Thanks. You cleared it up for me. It looks like everything we've been 
> told so far is BS. This series isn't needed.
> 
> The solution is to do this in the amdgpu minigbm backend at alloc time:
>     stride = align(width * Bpp, 64);
> 
> Later chips should change that to:
>     stride = align(width * Bpp, 256);
> 
> No querying needed. What do you think?
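
For reference, the alloc-time computation being proposed would look
roughly like this (a minimal sketch; align() and the names here are
illustrative, not actual minigbm code):

    #include <stdint.h>

    /* Round x up to the next multiple of a (a must be a power of two). */
    static inline uint32_t align(uint32_t x, uint32_t a)
    {
        return (x + a - 1) & ~(a - 1);
    }

    /* Fixed stride decided once at alloc time: pad each row of width
     * pixels at bpp bytes per pixel to a 64-byte boundary (256 on
     * later chips). */
    static uint32_t compute_stride(uint32_t width, uint32_t bpp)
    {
        return align(width * bpp, 64);
    }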

I don't see how this approach can possibly work unless we always map the 
whole texture.

The whole problem with the fixed stride is that in many cases, we 
allocate a staging texture. The reasonable thing to do for a sane API 
(i.e., everything except ChromeOS) is to allocate a staging texture that 
is sized correctly for the area that you want to map. But since we don't 
know at texture allocation time which regions of the texture will be 
mapped, we cannot do that.

That was the whole point of the USER_STRIDE business. There are two 
alternatives I can see:

1. Change minigbm so that it always maps the entire texture regardless 
of what the caller requests. This is a very simple fix, but it has the 
downside that the driver will always blit the entire surface to staging 
even if only a single pixel is needed. That can obviously bite us for 
performance and power, which is why I didn't want to go down that route.

2. Instead of having a USER_STRIDE interface, just simplify the story: 
whenever a transfer allocates a staging texture, it should allocate 
staging sized for the entire image, but perform blits only for the 
region that the user has requested (a rough sketch follows below).

The two amount to the same thing in the end, but option 2 is perhaps 
better as an interface because it's more explicit about what's happening.
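
To make option 2 concrete, here is a rough sketch of what a map
implementation could do under that rule (all the types and helpers
here are illustrative, not the actual DRI/Gallium code):

    #include <stdint.h>
    #include <stddef.h>

    struct image   { uint32_t width, height, bpp; /* ... */ };
    struct staging { uint32_t stride; /* ... */ };

    /* Hypothetical driver hooks. */
    struct staging *alloc_linear_staging(uint32_t w, uint32_t h);
    void blit_to_staging(struct staging *st, struct image *img,
                         uint32_t x0, uint32_t y0, uint32_t w, uint32_t h);
    void *cpu_map(struct staging *st);

    /* Staging always covers the full image, so the stride the caller
     * sees is stable across maps, but we only pay for blitting the
     * region that was actually requested. */
    struct map_result {
        void *ptr;       /* points at (x0, y0) within the staging copy */
        uint32_t stride; /* full-width linear stride in bytes */
    };

    static struct map_result
    map_region(struct image *img, uint32_t x0, uint32_t y0,
               uint32_t w, uint32_t h)
    {
        /* Allocate staging sized for the entire texture... */
        struct staging *st = alloc_linear_staging(img->width, img->height);

        /* ...but blit only the region the caller asked for. */
        blit_to_staging(st, img, x0, y0, w, h);

        struct map_result r;
        r.stride = st->stride;
        r.ptr = (char *)cpu_map(st) + (size_t)y0 * st->stride
                                    + (size_t)x0 * img->bpp;
        return r;
    }

That way the stride the caller sees is always the one a full-width
linear image would have, which is exactly what cros_gralloc assumes.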

By the way: the fact that gralloc wants to *cache* mappings will also 
cause us pain, but that's perhaps something we should really just change 
in gralloc.

Cheers,
Nicolai

-- 
Learn how the world really is,
But never forget how it ought to be.

