[Intel-gfx] [PATCH v7 14/15] dt-bindings: of: Add restricted DMA pool
Will Deacon
will@kernel.org
Wed May 26 12:13:23 UTC 2021
Hi Claire,
On Tue, May 18, 2021 at 02:42:14PM +0800, Claire Chang wrote:
> Introduce the new compatible string, restricted-dma-pool, for restricted
> DMA. One can specify the address and length of the restricted DMA memory
> region by restricted-dma-pool in the reserved-memory node.
>
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
> .../reserved-memory/reserved-memory.txt | 27 +++++++++++++++++++
> 1 file changed, 27 insertions(+)
>
> diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
> index e8d3096d922c..284aea659015 100644
> --- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
> +++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
> @@ -51,6 +51,23 @@ compatible (optional) - standard definition
> used as a shared pool of DMA buffers for a set of devices. It can
> be used by an operating system to instantiate the necessary pool
> management subsystem if necessary.
> + - restricted-dma-pool: This indicates a region of memory meant to be
> + used as a pool of restricted DMA buffers for a set of devices. The
> + memory region would be the only region accessible to those devices.
> + When using this, the no-map and reusable properties must not be set,
> + so the operating system can create a virtual mapping that will be used
> + for synchronization. The main purpose for restricted DMA is to
> + mitigate the lack of DMA access control on systems without an IOMMU,
> + which could result in the DMA accessing the system memory at
> + unexpected times and/or unexpected addresses, possibly leading to data
> + leakage or corruption. The feature on its own provides a basic level
> + of protection against the DMA overwriting buffer contents at
> + unexpected times. However, to protect against general data leakage and
> + system memory corruption, the system needs to provide a way to lock
> + down the memory access, e.g., an MPU. Note that since coherent
> + allocation needs remapping, one must set up another device coherent
> + pool via shared-dma-pool and use dma_alloc_from_dev_coherent instead
> + for atomic coherent allocation.
> - vendor specific string in the form <vendor>,[<device>-]<usage>
> no-map (optional) - empty property
> - Indicates the operating system must not create a virtual mapping
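The note about atomic coherent allocations might benefit from an example
in the binding. If I've understood it correctly, a device that also needs
atomic coherent allocations would pair the restricted pool with a
conventional shared-dma-pool region. An untested sketch (the coherent
pool node, the addresses and the multi-entry memory-region are my own
invention, not from the patch):

	reserved-memory {
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		/* Restricted pool, as in the example further down. */
		restricted_dma_mem_reserved: restricted-dma@50000000 {
			compatible = "restricted-dma-pool";
			reg = <0x50000000 0x400000>;
		};

		/*
		 * Hypothetical device coherent pool for atomic
		 * allocations via dma_alloc_from_dev_coherent().
		 */
		coherent_dma_mem_reserved: coherent-dma@50400000 {
			compatible = "shared-dma-pool";
			reg = <0x50400000 0x100000>;
			no-map;
		};
	};

	some_device: device@0 {
		memory-region = <&restricted_dma_mem_reserved>,
				<&coherent_dma_mem_reserved>;
		/* ... */
	};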
> @@ -120,6 +137,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
> compatible = "acme,multimedia-memory";
> reg = <0x77000000 0x4000000>;
> };
> +
> + restricted_dma_mem_reserved: restricted_dma_mem_reserved {
> + compatible = "restricted-dma-pool";
> + reg = <0x50000000 0x400000>;
> + };
nit: You need to update the old text that states "This example defines 3
contiguous regions ...".
> };
>
> /* ... */
> @@ -138,4 +160,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
> memory-region = <&multimedia_reserved>;
> /* ... */
> };
> +
> + pcie_device: pcie_device@0,0 {
> + memory-region = <&restricted_dma_mem_reserved>;
> + /* ... */
> + };
I still don't understand how this works for individual PCIe devices -- how
is dev->of_node set to point at the node you have above?
I tried adding the memory-region to the host controller instead, and then
I see it crop up in dmesg:
| pci-host-generic 40000000.pci: assigned reserved memory node restricted_dma_mem_reserved
but none of the actual PCI devices end up with 'dma_io_tlb_mem' set, and
so the restricted DMA area is not used. In fact, swiotlb isn't used at all.
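For reference, the host bridge part of my test setup looked roughly like
this (node name and compatible inferred from the dmesg line above; the
rest elided):

	pcie: pci@40000000 {
		compatible = "pci-host-generic";
		/* ... */
		memory-region = <&restricted_dma_mem_reserved>;
	};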
What am I missing to make this work with PCIe devices?
Thanks,
Will