[RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

Thomas Hellstrom thomas at shipmail.org
Fri May 24 10:37:51 UTC 2019


On 5/24/19 12:18 PM, Koenig, Christian wrote:
> On 24.05.19 at 11:55, Thomas Hellstrom wrote:
>> On 5/24/19 11:11 AM, Thomas Hellstrom wrote:
>>> Hi, Christian,
>>>
>>> On 5/24/19 10:37 AM, Koenig, Christian wrote:
>>>> On 24.05.19 at 10:11, Thomas Hellström (VMware) wrote:
>>>>> From: Thomas Hellstrom <thellstrom at vmware.com>
>>>>>
>>>>> With SEV encryption, all DMA memory must be marked decrypted
>>>>> (AKA "shared") for devices to be able to read it. In the future
>>>>> we might want to be able to switch normal (encrypted) memory to
>>>>> decrypted in exactly the same way as we handle caching states,
>>>>> and that would require additional memory pools. But for now, rely
>>>>> on memory allocated with dma_alloc_coherent() which is already
>>>>> decrypted with SEV enabled. Set up the page protection
>>>>> accordingly. Drivers must detect SEV enabled and switch to the
>>>>> dma page pool.
>>>>>
>>>>> This patch has not yet been tested. As a follow-up, we might want to
>>>>> cache decrypted pages in the dma page pool regardless of their caching
>>>>> state.
>>>> This patch is unnecessary; SEV support already works fine with at
>>>> least amdgpu, and I would expect that it also works with other
>>>> drivers as well.
>>>>
>>>> Also see this patch:
>>>>
>>>> commit 64e1f830ea5b3516a4256ed1c504a265d7f2a65c
>>>> Author: Christian König <christian.koenig at amd.com>
>>>> Date:   Wed Mar 13 10:11:19 2019 +0100
>>>>
>>>>        drm: fallback to dma_alloc_coherent when memory encryption is active
>>>>
>>>>        We can't just map any random page we get when memory
>>>>        encryption is active.
>>>>
>>>>        Signed-off-by: Christian König <christian.koenig at amd.com>
>>>>        Acked-by: Alex Deucher <alexander.deucher at amd.com>
>>>>        Link: https://patchwork.kernel.org/patch/10850833/
>>>>
>>>> Regards,
>>>> Christian.
>>> Yes, I noticed that. Although I fail to see where we automagically
>>> clear the PTE encrypted bit when mapping coherent memory? For the
>>> linear kernel map, that's done within dma_alloc_coherent(), but what
>>> about kernel vmaps and user-space maps? Is that done automatically
>>> by the x86 platform layer?
> Yes, I think so. Haven't looked too closely at this either.

This sounds a bit odd. If that were the case, the natural place would be 
the PAT tracking code, but AFAICT that only handles caching flags, not 
encryption flags.
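
To illustrate what I mean: a rough, untested sketch (x86-only, since 
pgprot_decrypted() is only defined there, and the helper name is made up 
for the example) of explicitly clearing the encryption bit in the 
protection used for kernel vmaps and user-space mappings of coherent 
memory:

#include <linux/mem_encrypt.h>
#include <asm/pgtable.h>

/*
 * Untested sketch: with SEV active the backing pages come from
 * dma_alloc_coherent() and are decrypted, so any additional kernel vmap
 * or user-space mapping must have the encryption bit cleared as well.
 * pgprot_decrypted() clears the SME/SEV encryption mask from the
 * protection bits.
 */
static pgprot_t ttm_prot_decrypted(pgprot_t prot)
{
	if (sev_active())
		prot = pgprot_decrypted(prot);

	return prot;
}

If the platform layer really did this automatically somewhere, a helper 
like the above would of course be redundant.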

But when you tested AMD with SEV, was that running as a hypervisor rather 
than as a guest, or did you run an SEV guest with PCI passthrough to the 
AMD device?

>
>>> /Thomas
>>>
>> And, as a follow-up question, why do we need dma_alloc_coherent() when
>> using SME? I thought the hardware performs the decryption when DMA-ing
>> to / from an encrypted page with SME, but not with SEV?
> I think the issue was that the DMA API would try to use a bounce buffer
> in this case.

SEV forces SWIOTLB bouncing on, but SME doesn't. So it should probably be 
possible to avoid dma_alloc_coherent() in the SME-only case.
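
Just to make the distinction concrete, a minimal sketch (untested; the 
function name is made up) of how a driver could decide whether it needs 
to switch to the dma page pool:

#include <linux/mem_encrypt.h>

/*
 * Sketch only: under SEV the guest's pages are encrypted and the DMA
 * API would otherwise bounce them through SWIOTLB, so coherent memory
 * (the dma page pool) is needed. With SME alone the hardware handles
 * DMA to/from encrypted pages, so the normal page pools should be fine.
 */
static bool ttm_need_dma_pool(void)
{
	return sev_active();
}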

/Thomas


>
> Christian.
>
>> Thanks, Thomas
>>
>>
>>


