[Intel-gfx] [PATCH v13 01/12] swiotlb: Refactor swiotlb init functions

Tom Lendacky thomas.lendacky at amd.com
Fri Jun 18 14:09:17 UTC 2021


On 6/18/21 1:25 AM, Claire Chang wrote:
> On Fri, Jun 18, 2021 at 7:30 AM Stefano Stabellini
> <sstabellini at kernel.org> wrote:
>>
>> On Thu, 17 Jun 2021, Claire Chang wrote:
>>> Add a new function, swiotlb_init_io_tlb_mem, to handle io_tlb_mem struct
>>> initialization so the code can be reused.
>>>
>>> Signed-off-by: Claire Chang <tientzu at chromium.org>
>>> Reviewed-by: Christoph Hellwig <hch at lst.de>
>>> Tested-by: Stefano Stabellini <sstabellini at kernel.org>
>>> Tested-by: Will Deacon <will at kernel.org>
>>> ---
>>>  kernel/dma/swiotlb.c | 50 ++++++++++++++++++++++----------------------
>>>  1 file changed, 25 insertions(+), 25 deletions(-)
>>>
>>> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
>>> index 52e2ac526757..47bb2a766798 100644
>>> --- a/kernel/dma/swiotlb.c
>>> +++ b/kernel/dma/swiotlb.c
>>> @@ -168,9 +168,28 @@ void __init swiotlb_update_mem_attributes(void)
>>>       memset(vaddr, 0, bytes);
>>>  }
>>>
>>> -int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
>>> +static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
>>> +                                 unsigned long nslabs, bool late_alloc)
>>>  {
>>> +     void *vaddr = phys_to_virt(start);
>>>       unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
>>> +
>>> +     mem->nslabs = nslabs;
>>> +     mem->start = start;
>>> +     mem->end = mem->start + bytes;
>>> +     mem->index = 0;
>>> +     mem->late_alloc = late_alloc;
>>> +     spin_lock_init(&mem->lock);
>>> +     for (i = 0; i < mem->nslabs; i++) {
>>> +             mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
>>> +             mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
>>> +             mem->slots[i].alloc_size = 0;
>>> +     }
>>> +     memset(vaddr, 0, bytes);
>>> +}
>>> +
>>> +int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
>>> +{
>>>       struct io_tlb_mem *mem;
>>>       size_t alloc_size;
>>>
>>> @@ -186,16 +205,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
>>>       if (!mem)
>>>               panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
>>>                     __func__, alloc_size, PAGE_SIZE);
>>> -     mem->nslabs = nslabs;
>>> -     mem->start = __pa(tlb);
>>> -     mem->end = mem->start + bytes;
>>> -     mem->index = 0;
>>> -     spin_lock_init(&mem->lock);
>>> -     for (i = 0; i < mem->nslabs; i++) {
>>> -             mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
>>> -             mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
>>> -             mem->slots[i].alloc_size = 0;
>>> -     }
>>> +
>>> +     swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
>>>
>>>       io_tlb_default_mem = mem;
>>>       if (verbose)
>>> @@ -282,8 +293,8 @@ swiotlb_late_init_with_default_size(size_t default_size)
>>>  int
>>>  swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
>>>  {
>>> -     unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
>>>       struct io_tlb_mem *mem;
>>> +     unsigned long bytes = nslabs << IO_TLB_SHIFT;
>>>
>>>       if (swiotlb_force == SWIOTLB_NO_FORCE)
>>>               return 0;
>>> @@ -297,20 +308,9 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
>>>       if (!mem)
>>>               return -ENOMEM;
>>>
>>> -     mem->nslabs = nslabs;
>>> -     mem->start = virt_to_phys(tlb);
>>> -     mem->end = mem->start + bytes;
>>> -     mem->index = 0;
>>> -     mem->late_alloc = 1;
>>> -     spin_lock_init(&mem->lock);
>>> -     for (i = 0; i < mem->nslabs; i++) {
>>> -             mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
>>> -             mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
>>> -             mem->slots[i].alloc_size = 0;
>>> -     }
>>> -
>>> +     memset(mem, 0, sizeof(*mem));
>>> +     swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
>>>       set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
>>> -     memset(tlb, 0, bytes);
>>
>> This is good for swiotlb_late_init_with_tbl. However, I have just noticed
>> that mem could also be allocated from swiotlb_init_with_tbl, in which
>> case the zeroing is missing. I think we need another memset in
>> swiotlb_init_with_tbl as well. Or maybe it would be better to have a
>> single memset at the beginning of swiotlb_init_io_tlb_mem instead. Up to
>> you.
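>>
>> Something like this (untested), with the memset pulled to the top of the
>> helper so both init paths are covered:
>>
>>     static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
>>                                         unsigned long nslabs, bool late_alloc)
>>     {
>>             void *vaddr = phys_to_virt(start);
>>             unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
>>
>>             /* Zero the whole pool here instead of in each caller. */
>>             memset(vaddr, 0, bytes);
>>
>>             mem->nslabs = nslabs;
>>             mem->start = start;
>>             mem->end = mem->start + bytes;
>>             mem->index = 0;
>>             mem->late_alloc = late_alloc;
>>             spin_lock_init(&mem->lock);
>>             for (i = 0; i < mem->nslabs; i++) {
>>                     mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
>>                     mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
>>                     mem->slots[i].alloc_size = 0;
>>             }
>>     }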
> 
> swiotlb_init_with_tbl uses memblock_alloc to allocate the io_tlb_mem, and
> memblock_alloc [1] does the memset in memblock_alloc_try_nid [2], so
> swiotlb_init_with_tbl is also fine.
> I'm happy to add the memset in swiotlb_init_io_tlb_mem if you think
> that's clearer and safer.
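>
> For reference, the zeroing happens at the tail of memblock_alloc_try_nid()
> in mm/memblock.c (v5.13-rc6), which reads roughly:
>
>     ptr = memblock_alloc_internal(size, align,
>                                   min_addr, max_addr, nid, false);
>     if (ptr)
>             memset(ptr, 0, size);
>
>     return ptr;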

On x86, if the memset is done before set_memory_decrypted() and memory
encryption is active, then the memory will look like ciphertext afterwards
rather than zeroes. If zeroed memory is required, the memset must be
done after the set_memory_decrypted() calls.
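
I.e., the late path needs to keep the ordering the current code already has:

	/* Decrypt first; only then do the zeroes land in memory as cleartext. */
	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
	memset(tlb, 0, bytes);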

Thanks,
Tom

> 
> [1] https://elixir.bootlin.com/linux/v5.13-rc6/source/include/linux/memblock.h#L407
> [2] https://elixir.bootlin.com/linux/v5.13-rc6/source/mm/memblock.c#L1555
> 

