[RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

Thomas Hellstrom thomas at shipmail.org
Tue May 28 15:11:13 UTC 2019


Hi, Tom,

Thanks for the reply. The question is not graphics-specific; it concerns
a point in your answer further below:

On 5/28/19 4:48 PM, Lendacky, Thomas wrote:
> On 5/28/19 2:31 AM, Thomas Hellstrom wrote:
>> Hi, Tom,
>>
>> Could you shed some light on this?
> I don't have a lot of GPU knowledge, so let me start with an overview of
> how everything should work and see if that answers the questions being
> asked.
>
> First, SME:
> The encryption bit is bit-47 of a physical address. So, if a device does
> not support at least 48-bit DMA, it will have to use the SWIOTLB and
> bounce buffer the data. This is handled automatically if the driver is
> using the Linux DMA API as all of SWIOTLB has been marked un-encrypted.
> Data is bounced between the un-encrypted SWIOTLB and the (presumably)
> encrypted area of the driver.
>
> For SEV:
> The encryption bit position is the same as SME. However, with SEV all
> DMA must use an un-encrypted area so all DMA goes through SWIOTLB. Just
> like SME, this is handled automatically if the driver is using the Linux
> DMA API as all of SWIOTLB has been marked un-encrypted. And just like SME,
> data is bounced between the un-encrypted SWIOTLB and the (presumably)
> encrypted area of the driver.
>
> There is an optimization for dma_alloc_coherent() where the pages are
> allocated and marked un-encrypted, thus avoiding the bouncing (see file
> kernel/dma/direct.c, dma_direct_alloc_pages()).
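
(My simplified reading of that code path, not the verbatim source:
allocate the pages, then flip their linear-map PTEs to unencrypted so
that no bouncing is needed:)

  struct page *page = alloc_pages(gfp, get_order(size));
  void *ret = page_address(page);

  if (sev_active())
          /* Clear the C-bit in the linear-map PTEs for this range. */
          set_memory_decrypted((unsigned long)ret, 1 << get_order(size));

  /* Simplified; the real code also takes care that the returned
   * handle does not carry the encryption bit in the decrypted case. */
  *dma_handle = phys_to_dma(dev, page_to_phys(page));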
>
> As for kernel vmaps and user-maps, those pages will be marked encrypted
> (unless explicitly made un-encrypted by calling set_memory_decrypted()).
> But, if you are copying to/from those areas into the un-encrypted DMA
> area then everything will be ok.

The question concerns the last quoted paragraph above.

AFAICT, set_memory_decrypted() only changes the PTEs of the fixed kernel
map. But when other aliased PTEs to the exact same decrypted pages are
set up, for example using dma_mmap_coherent(), kmap_atomic_prot(), vmap()
etc., what code is responsible for clearing the encryption flag on those
PTEs? Is there something in the x86 platform code doing that?
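
For concreteness, this is the kind of aliasing I mean (hypothetical
driver snippet, mine):

  void *cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
  /* The linear-map PTEs of these pages are now decrypted. But then: */

  /* Alias 1: a user-space mapping of the same pages. */
  dma_mmap_coherent(dev, vma, cpu_addr, dma_handle, size);

  /* Alias 2: a kernel vmap of the same pages. */
  void *va = vmap(pages, npages, VM_MAP, PAGE_KERNEL);

  /* Which code clears the encryption bit (_PAGE_ENC) in
   * vma->vm_page_prot and in the vmap PTEs above? */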

Thanks,
Thomas
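
PS. For reference, the direction my RFC patch takes is roughly the
following (a simplified sketch, not the verbatim patch): TTM's helper
that computes the page protection for its kernel and user-space
mappings additionally strips the encryption bit when SEV is active.

  pgprot_t ttm_io_prot(uint32_t caching_flags, pgprot_t tmp)
  {
          /* existing caching-state handling elided ... */

          if (sev_active())
                  tmp = pgprot_decrypted(tmp);
          return tmp;
  }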


>
> Things get fuzzy for me when it comes to the GPU access of the memory,
> and what is accessed and how.
>
> Thanks,
> Tom
>
>> Thanks,
>> Thomas
>>
>>
>> On 5/24/19 5:08 PM, Alex Deucher wrote:
>>> + Tom
>>>
>>> He's been looking into SEV as well.
>>>
>>> On Fri, May 24, 2019 at 8:30 AM Thomas Hellstrom <thomas at shipmail.org>
>>> wrote:
>>>> On 5/24/19 2:03 PM, Koenig, Christian wrote:
>>>>> On 24.05.19 at 12:37, Thomas Hellstrom wrote:
>>>>>> On 5/24/19 12:18 PM, Koenig, Christian wrote:
>>>>>>> On 24.05.19 at 11:55, Thomas Hellstrom wrote:
>>>>>>>> On 5/24/19 11:11 AM, Thomas Hellstrom wrote:
>>>>>>>>> Hi, Christian,
>>>>>>>>>
>>>>>>>>> On 5/24/19 10:37 AM, Koenig, Christian wrote:
>>>>>>>>>> On 24.05.19 at 10:11, Thomas Hellström (VMware) wrote:
>>>>>>>>>>> From: Thomas Hellstrom <thellstrom at vmware.com>
>>>>>>>>>>>
>>>>>>>>>>> With SEV encryption, all DMA memory must be marked decrypted
>>>>>>>>>>> (AKA "shared") for devices to be able to read it. In the
>>>>>>>>>>> future we might want to be able to switch normal (encrypted)
>>>>>>>>>>> memory to decrypted in exactly the same way as we handle
>>>>>>>>>>> caching states, and that would require additional memory
>>>>>>>>>>> pools. But for now, rely on memory allocated with
>>>>>>>>>>> dma_alloc_coherent(), which is already decrypted with SEV
>>>>>>>>>>> enabled. Set up the page protection accordingly. Drivers must
>>>>>>>>>>> detect that SEV is enabled and switch to the dma page pool.
>>>>>>>>>>>
>>>>>>>>>>> This patch has not yet been tested. As a follow-up, we might
>>>>>>>>>>> want to cache decrypted pages in the dma page pool regardless
>>>>>>>>>>> of their caching state.
>>>>>>>>>> This patch is unnecessary; SEV support already works fine with
>>>>>>>>>> at least amdgpu, and I would expect that it also works with
>>>>>>>>>> other drivers as well.
>>>>>>>>>>
>>>>>>>>>> Also see this patch:
>>>>>>>>>>
>>>>>>>>>> commit 64e1f830ea5b3516a4256ed1c504a265d7f2a65c
>>>>>>>>>> Author: Christian König <christian.koenig at amd.com>
>>>>>>>>>> Date:   Wed Mar 13 10:11:19 2019 +0100
>>>>>>>>>>
>>>>>>>>>>           drm: fallback to dma_alloc_coherent when memory
>>>>>>>>>>           encryption is active
>>>>>>>>>>
>>>>>>>>>>           We can't just map any random page we get when memory
>>>>>>>>>>           encryption is active.
>>>>>>>>>>
>>>>>>>>>>           Signed-off-by: Christian König <christian.koenig at amd.com>
>>>>>>>>>>           Acked-by: Alex Deucher <alexander.deucher at amd.com>
>>>>>>>>>>           Link: https://patchwork.kernel.org/patch/10850833/
>>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>> Christian.
>>>>>>>>> Yes, I noticed that. Although I fail to see where we automagically
>>>>>>>>> clear the PTE encryption bit when mapping coherent memory. For the
>>>>>>>>> linear kernel map, that's done within dma_alloc_coherent(), but what
>>>>>>>>> about kernel vmaps and user-space maps? Is that done automatically
>>>>>>>>> by the x86 platform layer?
>>>>>>> Yes, I think so. Haven't looked too closely at this either.
>>>>>> This sounds a bit odd. If that were the case, the natural place would be
>>>>>> the PAT tracking code, but it only handles caching flags AFAICT. Not
>>>>>> encryption flags.
>>>>>>
>>>>>> But when you tested AMD with SEV, was that running as a hypervisor
>>>>>> rather
>>>>>> than a guest, or did you run an SEV guest with PCI passthrough to the
>>>>>> AMD device?
>>>>> Yeah, well, the problem is we never tested this ourselves :)
>>>>>
>>>>>>>>> /Thomas
>>>>>>>>>
>>>>>>>> And, as a follow up question, why do we need dma_alloc_coherent() when
>>>>>>>> using SME? I thought the hardware performs the decryption when DMA-ing
>>>>>>>> to / from an encrypted page with SME, but not with SEV?
>>>>>>> I think the issue was that the DMA API would try to use a bounce buffer
>>>>>>> in this case.
>>>>>> SEV forces SWIOTLB bouncing on, but not SME. So it should probably be
>>>>>> possible to avoid dma_alloc_coherent() in the SME case.
>>>>> In this case I don't have an explanation for this.
>>>>>
>>>>> For background, what happened is that we got reports that SEV/SME
>>>>> doesn't work with amdgpu. So we told the people to try using the
>>>>> dma_alloc_coherent() path and that worked fine. Because of this we came
>>>>> up with the patch I noted earlier.
>>>>>
>>>>> I can confirm that it indeed works now for a couple of users, but we
>>>>> still don't have a test system for this in our team.
>>>>>
>>>>> Christian.
>>>> OK, understood,
>>>>
>>>> But unless there is some strange magic going on (which there might be,
>>>> of course), I do think the patch I sent is correct. The reason SEV
>>>> works would then be that the AMD card is used by the hypervisor and
>>>> not the guest, and that TTM is actually incorrectly creating
>>>> conflicting maps, treating the coherent memory as encrypted. But since
>>>> the memory is only accessed through encrypted PTEs, the hardware does
>>>> the right thing, using the hypervisor key for decryption...
>>>>
>>>> But that's only a guess, and this is not super-urgent. I will be able to
>>>> follow up if / when we bring vmwgfx up for SEV.
>>>>
>>>> /Thomas
>>>>
>>>>>> /Thomas
>>>>>>
>>>>>>
>>>>>>> Christian.
>>>>>>>
>>>>>>>> Thanks, Thomas
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>> _______________________________________________
>>>> dri-devel mailing list
>>>> dri-devel at lists.freedesktop.org
>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>>


