[PATCH 4/5] drm/tegra: Restrict IOVA space to DMA mask

Dmitry Osipenko digetx at gmail.com
Thu Jan 24 13:15:46 UTC 2019


24.01.2019 13:24, Mikko Perttunen wrote:
> On 23.1.2019 21.42, Dmitry Osipenko wrote:
>> 23.01.2019 18:55, Dmitry Osipenko wrote:
>>> 23.01.2019 17:04, Thierry Reding wrote:
>>>> On Wed, Jan 23, 2019 at 04:41:44PM +0300, Dmitry Osipenko wrote:
>>>>> 23.01.2019 12:39, Thierry Reding wrote:
>>>>>> From: Thierry Reding <treding at nvidia.com>
>>>>>>
>>>>>> On Tegra186 and later, the ARM SMMU provides an input address space that
>>>>>> is 48 bits wide. However, memory clients can only address up to 40 bits.
>>>>>> If the geometry is used as-is, allocations of IOVA space can end up in a
>>>>>> region that cannot be addressed by the memory clients.
>>>>>>
>>>>>> To fix this, restrict the IOVA space to the DMA mask of the host1x
>>>>>> device. Note that, technically, the IOVA space needs to be restricted to
>>>>>> the intersection of the DMA masks for all clients that are attached to
>>>>>> the IOMMU domain. In practice using the DMA mask of the host1x device is
>>>>>> sufficient because all host1x clients share the same DMA mask.
>>>>>>
>>>>>> Signed-off-by: Thierry Reding <treding at nvidia.com>
>>>>>> ---
>>>>>>   drivers/gpu/drm/tegra/drm.c | 5 +++--
>>>>>>   1 file changed, 3 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c
>>>>>> index 271c7a5fc954..0c5f1e6a0446 100644
>>>>>> --- a/drivers/gpu/drm/tegra/drm.c
>>>>>> +++ b/drivers/gpu/drm/tegra/drm.c
>>>>>> @@ -136,11 +136,12 @@ static int tegra_drm_load(struct drm_device *drm, unsigned long flags)
>>>>>>         if (tegra->domain) {
>>>>>>           u64 carveout_start, carveout_end, gem_start, gem_end;
>>>>>> +        u64 dma_mask = dma_get_mask(&device->dev);
>>>>>>           dma_addr_t start, end;
>>>>>>           unsigned long order;
>>>>>>   -        start = tegra->domain->geometry.aperture_start;
>>>>>> -        end = tegra->domain->geometry.aperture_end;
>>>>>> +        start = tegra->domain->geometry.aperture_start & dma_mask;
>>>>>> +        end = tegra->domain->geometry.aperture_end & dma_mask;
>>>>>>             gem_start = start;
>>>>>>           gem_end = end - CARVEOUT_SZ;
>>>>>>
>>>>>
>>>>> Wow, so the IOVA can address more than 32 bits on later Tegras. AFAIK,
>>>>> there is currently no support for properly programming 64-bit addresses
>>>>> in the driver code, so wouldn't it make sense to force the IOVA mask to
>>>>> 32 bits for now and hope that the upper half of the address registers
>>>>> happens to be 0x00000000 in HW?
>>>>
>>>> I think this restriction only applies to display at this point. In
>>>> practice you'd be hard put to trigger that case because IOVA memory is
>>>> allocated from the bottom, so you'd actually need to use up to 4 GiB of
>>>> IOVA space before hitting that.
>>>>
>>>> That said, I vaguely remember typing up the patch to support writing the
>>>> WINBUF_START_ADDR_HI register and friends, but it looks as if that was
>>>> never merged.
>>>>
>>>> I'll try to dig out that patch (or rewrite it, shouldn't be very
>>>> difficult) and make it part of this series. I'd rather fix that issue
>>>> than arbitrarily restrict the IOVA space, because that's likely to come
>>>> back and bite us at some point.
>>>
>>> Looking at falcon.c, it only writes a 32-bit address. Something needs to be done about that as well; it seems there is a FALCON_DMATRFMOFFS register to facilitate >32-bit addressing.
>>
>> Although, scratch that about FALCON_DMATRFMOFFS, it must be for something else. Mikko, could you please clarify whether the falcon can only load firmware from a 32-bit address space?
>>
> 
> The DMA base address is set using DMATRFBASE, which requires 256B alignment, meaning 40 bits are available for the address. The DMATRFFBOFFS I believe is then used as a 32-bit offset to that value.

The TRM (up to T196) suggests that DMATRFMOFFS is a 16-bit offset. Is that a TRM bug, or am I missing something?

I suppose it should be fine to just reserve the carveout from the bottom of the IOVA space and be done with it.

