[PATCH v4 00/23] device-dax: Support sub-dividing soft-reserved ranges

David Hildenbrand david at redhat.com
Fri Aug 21 10:15:03 UTC 2020


>>
>> 1. On x86-64, e820 indicates "soft-reserved" memory. This memory is not
>> automatically used in the buddy during boot, but remains untouched
>> (similar to pmem). But as it involves ACPI as well, it could also be
>> used on arm64 (-e820), correct?
> 
> Correct, arm64 also gets the EFI support for enumerating memory this
> way. However, I would clarify that whether soft-reserved is given to
> the buddy allocator by default or not is the kernel's policy choice;
> "buddy-by-default" is ok and is what will happen anyway with older
> kernels on platforms that enumerate a memory range this way.

Is "soft-reserved" then the right terminology for that? It sounds very
x86-64/e820 specific. Maybe a compressed form of "performance
differentiated memory" would be a better fit to expose to user space, no?

> 
>> 2. Soft-reserved memory is volatile RAM with differing performance
>> characteristics ("performance differentiated memory"). What would be
>> examples of such memory?
> 
> Likely the most prominent one that drove the creation of the "EFI
> Specific Purpose" attribute bit is high-bandwidth memory. One concrete
> example of that was a platform called Knights Landing [1] that ended
> up shipping firmware that lied to the OS about the latency
> characteristics of the memory, trying to trick the OS into not
> allocating from that memory range by default. With the EFI
> attribute, firmware performance tables can tell the truth about the
> performance characteristics of the memory range *and* indicate that
> the OS not use it for general purpose allocations by default.

Thanks for clarifying!

> 
> [1]: https://software.intel.com/content/www/us/en/develop/blogs/an-intro-to-mcdram-high-bandwidth-memory-on-knights-landing.html
> 
>> Like, memory that is faster than RAM (scratch
>> pad), or slower (pmem)? Or both? :)
> 
> Both, but note that PMEM is already hard-reserved by default.
> Soft-reserved is about a memory range that, for example, an
> administrator may want to reserve 100% for a weather simulation where,
> if even a small amount of memory were stolen for the page cache, the
> application may not meet its performance targets. It could also be a
> memory range that is so slow that only applications with higher
> latency tolerances would be prepared to consume it.
> 
> In other words the soft-reserved memory can be used to indicate memory
> that is either too precious, or too slow for general purpose OS
> allocations.

Right, so really "performance differentiated" in either direction :)

> 
>> Is it a valid use case to use pmem
>> in a hypervisor to back this memory?
> 
> Depends on the pmem. That performance capability is indicated by the
> ACPI HMAT, not the EFI soft-reserved designation.
> 
>> 3. There seem to be use cases where "soft-reserved" memory is used via
>> DAX. What is an example use case? I assume it's *not* to treat it like
>> PMEM but instead e.g., use it as a fast buffer inside applications or
>> similar.
> 
> Right, in that weather-simulation example that application could just
> mmap /dev/daxX.Y and never worry about contending for the "fast
> memory" resource on the platform. Alternatively, if that resource needs
> to be shared and/or over-committed, then kernel memory-management
> services are needed and that dax-device can be assigned to kmem.
> 
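
(Just to make sure I understand the consumption model for that case:
something as simple as the sketch below, I assume? Untested, and the
device name and mapping size are made up.)

  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #define MAP_LEN (1UL << 30)     /* 1 GiB of "fast memory", assumed */

  int main(void)
  {
          int fd = open("/dev/dax0.0", O_RDWR);  /* hypothetical instance */
          void *buf;

          if (fd < 0) {
                  perror("open");
                  return EXIT_FAILURE;
          }

          /* device-dax mappings are MAP_SHARED and backed directly by
           * the reserved range -- no page cache, no contention with
           * general purpose allocations.
           */
          buf = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE, MAP_SHARED,
                     fd, 0);
          if (buf == MAP_FAILED) {
                  perror("mmap");
                  return EXIT_FAILURE;
          }

          memset(buf, 0, MAP_LEN);  /* application-exclusive fast buffer */

          munmap(buf, MAP_LEN);
          close(fd);
          return EXIT_SUCCESS;
  }
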
>> 4. There seem to be use cases where some part of "soft-reserved" memory
>> is used via DAX, some other is given to the buddy. What is an example
>> use case? Is this really necessary or only some theoretical use case?
> 
> It's as necessary as pmem namespace partitioning, or the inclusion of
> dax-kmem upstream in the first place. In that kmem case the motivation
> was that some users want a portion of pmem provisioned for storage and
> some for volatile usage. The motivation is similar here, platform
> firmware can only identify memory attributes on coarse boundaries,
> finer grained provisioning decisions are up to the administrator /
> platform-owner, and the kernel is just a facilitator of that policy.
> 
>>
>> 5. The "provisioned along performance relevant address boundaries." part
>> is unclear to me. Can you give an example of what this would look like
>> from user space? Like, split that memory in blocks of size X with
>> alignment Y and give them to separate applications?
> 
> One example of platform address boundaries are the memory address
> ranges that alias in a direct-mapped memory-side-cache. In the
> direct-mapped case, aliasing may repeat every N GBs, where N is the
> ratio of far-to-near memory ("near memory" == cache, "far memory" ==
> backing memory). Also refer back to the background in the page
> allocator shuffling patches [2]. With this partitioning mechanism you
> could, for one example use case, assign different VMs to exclusive
> colors in the memory side cache.

Interesting, thanks!
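
(To check whether I got the coloring idea right: with a direct-mapped
memory-side cache everything that is a cache-size apart collides, so
the "color" of a chunk of the soft-reserved range would be computed
roughly like in the sketch below? Cache size and color count are made
up.)

  #include <stdint.h>
  #include <stdio.h>

  #define CACHE_SIZE      (4ULL << 30)  /* assumed 4 GiB of near memory */
  #define NUM_COLORS      8             /* assumed number of partitions */

  /* Offsets that are CACHE_SIZE apart alias to the same cache lines,
   * so the color only depends on the offset modulo the cache size.
   */
  static unsigned int cache_color(uint64_t offset)
  {
          return (offset % CACHE_SIZE) / (CACHE_SIZE / NUM_COLORS);
  }

  int main(void)
  {
          /* same color: these two offsets alias in the cache */
          printf("%u %u\n", cache_color(0), cache_color(CACHE_SIZE));
          /* different colors: adjacent CACHE_SIZE/NUM_COLORS chunks */
          printf("%u %u\n", cache_color(0),
                 cache_color(CACHE_SIZE / NUM_COLORS));
          return 0;
  }

(So a VM that should own color X exclusively would get the device-dax
chunks whose offsets map to X, i.e. one chunk per CACHE_SIZE period.)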

> 
> [2]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e900a918b098
> 
>> 6. If you add such memory to the buddy, is there any way the system can
>> differentiate it from other memory? E.g., via fake/other NUMA nodes?
> 
> NUMA node numbers are how performance differentiated memory ranges
> are enumerated. The expectation is that all distinct performance
> memory targets have unique ACPI proximity domains and Linux numa node
> numbers as a result.

Makes sense to me (although it's somewhat weird, because memory on the
same socket would be represented via different NUMA nodes), thanks!
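
(For completeness: once such a range is onlined via kmem and shows up
as its own node, binding to it from user space would be plain NUMA
policy, e.g. via libnuma. The node number below is made up.)

  #include <numa.h>       /* link with -lnuma */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(void)
  {
          int node = 2;           /* assumed node of the kmem-onlined range */
          size_t len = 1UL << 20;
          void *buf;

          if (numa_available() < 0) {
                  fprintf(stderr, "no NUMA support\n");
                  return EXIT_FAILURE;
          }

          /* allocate memory bound to the performance differentiated node */
          buf = numa_alloc_onnode(len, node);
          if (!buf) {
                  fprintf(stderr, "numa_alloc_onnode failed\n");
                  return EXIT_FAILURE;
          }

          memset(buf, 0, len);    /* fault the pages in on that node */
          numa_free(buf, len);
          return EXIT_SUCCESS;
  }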

> 
>> Also, can you give examples of how kmem-added memory is represented in
>> /proc/iomem for a) pmem and b) soft-reserved memory after this series
>> (skimming over the patches, I think there is a change for pmem, right?)?
> 
> I don't expect a change. The only difference is the parent resource
> will be marked "Soft Reserved" instead of "Persistent Memory".

Right, I misread patch #11 while skimming - I thought the device
resource would be dropped.

> 
>> I am really wondering if it's the right approach to squeeze this into
>> our pmem/nvdimm infrastructure just because it's easy to do. E.g., man
>> "ndctl" - "ndctl - Manage "libnvdimm" subsystem devices (Non-volatile
>> Memory)" speaks explicitly about non-volatile memory.
> 
> In fact it's not squeezed into PMEM infrastructure. dax-kmem and
> device-dax are independent of PMEM. PMEM is one source of potential
> device-dax instances, soft-reserved memory is another orthogonal
> source. This is why device-dax needs its own userspace policy-directed
> partitioning mechanism, because there is no PMEM to store the
> configuration for partitioned high-bandwidth memory. The userspace
> tooling for this mechanism is targeted for a tool called daxctl that
> has no PMEM dependencies. Look to Joao's use case that is using this
> infrastructure independent of PMEM with manual soft-reservations
> specified on the kernel command-line.

Thanks for clarifying; I was under the impression that we would be
reusing libnvdimm to manage that memory.
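
(For reference, I assume the manual soft-reservation in Joao's case is
done by tagging a range with the EFI "specific purpose" attribute on
the kernel command line, something along these lines? The size, base
address and the 0x40000 attribute value are my assumptions.)

  efi_fake_mem=4G@0x240000000:0x40000

If I understand the series correctly, that should make the 4G range at
0x240000000 show up as "Soft Reserved" and hence as a device-dax
instance that can then be sub-divided via daxctl.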

-- 
Thanks,

David / dhildenb


