[Freedreno] [PATCH] drm/msm: Check for the GPU IOMMU during bind

Dmitry Baryshkov dmitry.baryshkov at linaro.org
Fri Jul 7 17:27:18 UTC 2023


On 07/07/2023 18:03, Jordan Crouse wrote:
> On Thu, Jul 06, 2023 at 09:55:13PM +0300, Dmitry Baryshkov wrote:
>>
>> On 10/03/2023 00:20, Jordan Crouse wrote:
>>> While booting with amd,imageon on a headless target the GPU probe was
>>> failing with -ENOSPC in get_pages() from msm_gem.c.
>>>
>>> Investigation showed that the driver was using the default 16MB VRAM
>>> carveout because msm_use_mmu() was returning false since headless devices
>>> use a dummy parent device. Avoid this by extending the existing is_a2xx
>>> priv member to check the GPU IOMMU state on all platforms and use that
>>> check in msm_use_mmu().
>>>
>>> This works for memory allocations but it doesn't prevent the VRAM carveout
>>> from being created because that happens before we have a chance to check
>>> the GPU IOMMU state in adreno_bind.
>>>
>>> There are a number of possible options to resolve this, but none of them are
>>> very clean. The easiest is likely to specify vram=0 as a module parameter
>>> on headless devices so that the memory doesn't get wasted.
>>
>> This patch was on my plate for quite a while, please excuse me for
>> taking so long with it.
> 
> No worries. I'm also chasing a bunch of other stuff too.
> 
>> I see the following problem with the current code. We have two different
>> instances that can access memory: MDP/DPU and GPU. And each of them can
>> either have or lack an MMU.
>>
>> For some time I toyed with the idea of determining whether the allocated
>> BO is going to be used by display or by GPU, but then I abandoned it. We
>> can have display BOs being filled by GPU, so handling it this way would
>> complicate things a lot.
>>
>> This actually rings a tiny bell in my head with the idea of splitting
>> the display and GPU parts into two different drivers, but I'm not sure
>> what the overall impact would be.
> 
> As I now exclusively work on headless devices, I would be 100% for this,
> but I'm sure that our laptop friends might not agree :)

I do not know here. This is probably a question for Rob, as he better 
understands the interaction between the GPU and display parts of userspace.

> 
>> More on msm_use_mmu() below.
>>
>>>
>>> Signed-off-by: Jordan Crouse <jorcrous at amazon.com>
>>> ---
>>>
>>>    drivers/gpu/drm/msm/adreno/adreno_device.c | 6 +++++-
>>>    drivers/gpu/drm/msm/msm_drv.c              | 7 +++----
>>>    drivers/gpu/drm/msm/msm_drv.h              | 2 +-
>>>    3 files changed, 9 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c
>>> index 36f062c7582f..4f19da28f80f 100644
>>> --- a/drivers/gpu/drm/msm/adreno/adreno_device.c
>>> +++ b/drivers/gpu/drm/msm/adreno/adreno_device.c
>>> @@ -539,7 +539,11 @@ static int adreno_bind(struct device *dev, struct device *master, void *data)
>>>        DBG("Found GPU: %u.%u.%u.%u", config.rev.core, config.rev.major,
>>>                config.rev.minor, config.rev.patchid);
>>>
>>> -     priv->is_a2xx = config.rev.core == 2;
>>> +     /*
>>> +      * A2xx has a built in IOMMU and all other IOMMU enabled targets will
>>> +      * have an ARM IOMMU attached
>>> +      */
>>> +     priv->has_gpu_iommu = config.rev.core == 2 || device_iommu_mapped(dev);
>>>        priv->has_cached_coherent = config.rev.core >= 6;
>>>
>>>        gpu = info->init(drm);
>>> diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
>>> index aca48c868c14..a125a351ec90 100644
>>> --- a/drivers/gpu/drm/msm/msm_drv.c
>>> +++ b/drivers/gpu/drm/msm/msm_drv.c
>>> @@ -318,11 +318,10 @@ bool msm_use_mmu(struct drm_device *dev)
>>>        struct msm_drm_private *priv = dev->dev_private;
>>>
>>>        /*
>>> -      * a2xx comes with its own MMU
>>> -      * On other platforms IOMMU can be declared specified either for the
>>> -      * MDP/DPU device or for its parent, MDSS device.
>>> +      * Return true if the GPU or the MDP/DPU or parent MDSS device has an
>>> +      * IOMMU
>>>         */
>>> -     return priv->is_a2xx ||
>>> +     return priv->has_gpu_iommu ||
>>>                device_iommu_mapped(dev->dev) ||
>>>                device_iommu_mapped(dev->dev->parent);
>>
>> I have a general feeling that neither the old nor the new code is fully
>> correct. Please correct me if I'm wrong:
>>
>> We should be using VRAM if either of the blocks cannot use remapped
>> memory. So this should have been:
>>
>> bool msm_use_mmu()
>> {
>>   if (!gpu_has_iommu)
>>     return false;
>>
>>   if (have_display_part && !display_has_mmu())
>>     return false;
>>
>>   return true;
>> }
>>
>> What do you think?
> 
> I would have to see (and try) the real code but that seems like it might
> be reasonable.

Sure, let me craft it then.
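
Roughly along these lines (an untested sketch, not the actual patch: it 
assumes the has_gpu_iommu flag added by this patch, uses a non-NULL 
priv->kms as the test for the display part being present, which I would 
still need to verify against the bind ordering, and keeps the existing 
device_iommu_mapped() checks for the display side):

bool msm_use_mmu(struct drm_device *dev)
{
	struct msm_drm_private *priv = dev->dev_private;

	/* If the GPU cannot remap memory, fall back to the VRAM carveout */
	if (!priv->has_gpu_iommu)
		return false;

	/*
	 * If a display block is present, it must also sit behind an IOMMU,
	 * which can be attached either to MDP/DPU or to the parent MDSS.
	 */
	if (priv->kms &&
	    !device_iommu_mapped(dev->dev) &&
	    !device_iommu_mapped(dev->dev->parent))
		return false;

	return true;
}

That way headless targets keep using the MMU as long as the GPU has one, 
and the carveout is only used when some block really cannot remap memory.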

> I would like to hear from some of the a2xx users too
> because this mostly affects them. On 3xx and newer you've always had the
> option of not having an MMU for the GPU or display, but I can't think of any
> use cases where you wouldn't want it.

msm8974 doesn't have a (working) IOMMU driver. I also think there was an 
issue somewhere around sdm630/660. And the wonderful msm8992/4, IIRC, 
also don't have one.

Also, the headless mode was quite useful for bringing up platforms, as it 
allowed us to test the GPU separately (and, of course, in some cases even 
without an MMU).

I have both a2xx (i.MX only for now) and a3xx here on my desk for testing.

-- 
With best wishes
Dmitry


