Using a PCIe Slot to Install DRAM: New Samsung CXL.mem Expansion Module
by Dr. Ian Cutress on May 11, 2021 4:10 AM EST
Posted in: Compute Express Link, PCIe 5.0
In the computing industry, we’ve lived with PCIe as a standard for a long time. It is used to add almost any additional feature to a system: graphics, storage, USB ports, more storage, networking, add-in cards, storage, sound cards, Wi-Fi, oh and did I mention storage? Well, the one thing we haven’t been able to put into a PCIe slot is DRAM – not DRAM as a storage device, but memory that is actually added to the system as usable DRAM. Back in 2019 a new CXL standard was introduced, which uses a PCIe 5.0 link as the physical interface. Part of that standard is CXL.memory – the ability to add DRAM into a system through a CXL/PCIe slot. Today Samsung is unveiling the first DRAM module specifically designed in this way.
CXL: A Refresher
The original CXL standard started off as a research project inside Intel to create an interface that could support accelerators, IO, cache, and memory. It subsequently spun out into its own consortium, now with more than 50 members and support from key players across the industry: Intel, AMD, Arm, IBM, Broadcom, Marvell, NVIDIA, Samsung, SK Hynix, WD, and others. The latest standard is CXL 2.0, finalized in November 2020.
The CXL 1.1 standard covers three sets of intrinsics, known as CXL.io, CXL.memory, and CXL.cache. These allow for deeper control over the connected devices, as well as an expansion of what is possible. The CXL consortium sees three main areas for this:
The first type is a cache/accelerator, such as an offload engine or a SmartNIC (a smart network controller). With the CXL.io and CXL.cache intrinsics, this would allow the network controller to sort incoming data, analyze it, and filter what is needed directly into the main processor's memory.
The second type is an accelerator with memory, and direct access to the HBM on the accelerator from the processor (as well as access to DRAM from the accelerator). The idea is a pseudo-heterogeneous compute design allowing for simpler but dense computational solvers.
The third type is perhaps the one we’re most interested in today: memory buffers. Using CXL.memory, a memory buffer can be installed over a CXL link and the attached memory can be directly pooled with the system memory. This allows for either increased memory bandwidth or increased memory expansion, on the order of thousands of gigabytes.
CXL 2.0 also introduces CXL.security, support for persistent memory, and switching capabilities.
It should be noted that CXL is using the same electrical interface as PCIe. That means any CXL device will have what looks like a PCIe physical connector. Beyond that, CXL uses PCIe in its startup process, so currently any CXL supporting device has to also support a PCIe-to-PCIe link, making any CXL controller also a PCIe controller by default.
One of the common questions I’ve seen is what would happen if a CXL-only CPU were made. Because CXL and PCIe are intertwined, a CPU can’t be CXL-only; it would have to support PCIe connections as well. From the other direction, if we see CXL-based graphics cards, for example, they would also have to at least initialize over PCIe, though full working modes might not be possible if CXL isn’t initialized.
Intel is set to introduce CXL 1.1 over PCIe 5.0 with its Sapphire Rapids processors. Microchip has announced PCIe 5.0 and CXL-based retimers for motherboard trace extensions. Samsung's is today the third announcement of a CXL-supported device. IBM has a similar technology called OMI (OpenCAPI Memory Interface); however, that hasn't seen wide adoption outside of IBM’s own processors.
Samsung’s CXL Memory Module
Modern processors rely on memory controllers for attached DRAM access. The top-line x86 processors have eight channels of DDR4, while a number of accelerators have gone down the HBM route. One of the limiting factors in scaling up memory bandwidth is the number of controllers, which can also limit capacity; beyond that, memory needs to be validated and trained to work with a system. Most systems are not built to simply add or remove memory the same way you might do with a storage device.
Enter CXL, and the ability to add memory like a storage device. Samsung’s unveiling today is of a CXL-attached module packed to the max with DDR5. It uses a full PCIe 5.0 x16 link, running at 32 GT/s per lane in each direction, with multiple terabytes of memory behind a buffer controller. In much the same way that companies like Samsung pack NAND into a U.2-sized form factor, with sufficient cooling, Samsung does the same here but with DRAM.
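As a rough sanity check on that link speed, the raw numbers work out as follows (a quick sketch; this assumes standard PCIe 5.0 signaling with 128b/130b line encoding and ignores packet/protocol overhead):

```python
# Back-of-the-envelope PCIe 5.0 x16 bandwidth estimate.
# Assumes standard 128b/130b line encoding; protocol overhead ignored.
transfer_rate_gt_s = 32          # PCIe 5.0: 32 GT/s per lane
lanes = 16                       # full x16 link
encoding_efficiency = 128 / 130  # 128b/130b encoding

# Each transfer carries one bit per lane, so GT/s * lanes = raw Gbit/s.
raw_gbit_s = transfer_rate_gt_s * lanes
usable_gbyte_s = raw_gbit_s * encoding_efficiency / 8

print(f"~{usable_gbyte_s:.0f} GB/s per direction")  # ~63 GB/s
```

That puts the link in the same ballpark as a few channels of DDR5, which is why a buffer device like this is pitched at capacity expansion as much as bandwidth.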
The DRAM is still volatile memory, and data is lost if power is lost. (I doubt it is hot swappable either, but weirder things have happened.) Persistent memory can be used, but only with CXL 2.0. Samsung hasn't stated if their device supports CXL 2.0, but it should be at least CXL 1.1, as they state it is currently being tested with Intel's Sapphire Rapids platform.
It should be noted that a modern DRAM slot is usually rated for a maximum of ~18 W. The only modules at the top of that power window are Intel’s Optane DCPMM, though a 256 GB DDR4 module would be in the ~10+ W range. For a 2 TB add-in CXL module like this, I suspect we are looking at around 70-80 W, so adding that amount of DRAM through the CXL interface would likely require active cooling as well as the big heatsink that these renders suggest.
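That 70-80 W estimate can be reproduced with simple scaling; a sketch, assuming the ~10 W per 256 GB DDR4 module figure above and linear scaling of DRAM power with capacity (a real device would add buffer-controller power on top):

```python
# Rough power scaling from per-module DDR4 figures (linear scaling assumed).
module_capacity_gb = 256   # 256 GB DDR4 module
module_power_w = 10        # ~10 W per module (approximate, from the article)
target_capacity_gb = 2048  # hypothetical 2 TB CXL device

equivalent_modules = target_capacity_gb / module_capacity_gb
dram_power_w = equivalent_modules * module_power_w

print(f"~{dram_power_w:.0f} W for the DRAM alone")  # before controller power
```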
Samsung doesn’t give any details about the module they are unveiling, except that it is CXL-based and has DDR5 in it. Not only that, but the ‘photos’ provided look a lot like renders, so it’s hard to state whether they have an aesthetic unit available for photography, or if there’s simply a working controller in a bring-up lab somewhere that has been validated on a system. Update: Samsung has confirmed these are live shots, not renders.
As part of the announcement Samsung quoted AMD and Intel, indicating which partners they are more closely working with, and what they have today is being validated on Intel next-gen servers. Intel’s next-gen servers, Sapphire Rapids, are due to launch at the end of the year, in line with the Aurora supercomputing contract set to be initially shipped by year end.
Related Reading
- Compute eXpress Link 2.0 (CXL 2.0) Finalized: Switching, PMEM, Security
- CXL Consortium Formally Incorporated, Gets New Board Members & CXL 1.1 Specification
- CXL Specification 1.0 Released: New Industry High-Speed Interconnect From Intel
- Intel Agilex: 10nm FPGAs with PCIe 5.0, DDR5, and CXL
- Synopsys Demonstrates CXL and CCIX 1.1 over PCIe 5.0: Next-Gen In Action
- Microchip Announces PCIe 5.0 And CXL Retimers
- DDR5 Memory Specification Released: Setting the Stage for DDR5-6400 And Beyond
- Here's Some DDR5-4800: Hands-On First Look at Next Gen DRAM
- Insights into DDR5 Sub-timings and Latencies
Kamen Rider Blade - Tuesday, May 11, 2021
But OpenCAPI & its OMI interface seem to be about the connection between the CPU Memory Controller & RAM, by changing out the old parallel interface for a new serialized one in OMI.
Wereweeb - Wednesday, May 12, 2021
How long until non-serial DRAM basically becomes a fat L4 cache?
Billy Tallis - Thursday, May 13, 2021
I doubt it'll ever be handled as a cache by the hardware. Operating systems are probably going to want the different pools of DRAM exposed as separate NUMA nodes.
mode_13h - Thursday, May 13, 2021
I don't see these memory pools becoming a de facto standard. I think they have a few specific use cases:
* storage caching
* in-memory DBs
* sharing data among multiple CPUs/accelerators
guswillard - Monday, May 17, 2021
This seems to have some potential. I don't have much background on the OS and kernel side of things, but I'm trying to join some dots. How will the system see this type of memory? Will it appear as contiguous memory under /dev/mem or some other location? For example, if I run a 'dmidecode' command on a system with this memory attached, what type will it be? How will the OS map this memory? I'd appreciate any pointers to existing reading material.
mode_13h - Tuesday, May 18, 2021
> How will the system see this type of memory
I assume it'll get mapped into the system's address space. However, I think the OS won't treat it the same as direct-attached memory by default. There would probably be a special driver for it, or at least some way to explicitly allocate it.
AustinTechie - Monday, May 17, 2021
Based on the images, and the x16 interface shown, this is not a U.2 device... it is an EDSFF E3.S device using the new EDSFF PCIe interface.
Based on that, the current standard for EDSFF E3.S is a max of 40 W, and if they go to an EDSFF E3.L then they will have a 70 W envelope.