Caching And Tiering: Intel Optane Memory H20 and Enmotus FuzeDrive SSD Reviewed
by Billy Tallis on May 18, 2021 2:00 PM EST - Posted in
- SSDs
- Storage
- Intel
- SSD Caching
- 3D XPoint
- Optane
- Optane Memory
- Tiger Lake
The latest iteration of Intel's Optane Memory SSD caching is here. The new Optane Memory H20 is two NVMe drives in one, combining a 1TB QLC drive (derived from their recent 670p) with an updated 32GB Optane cache drive, all on one M.2 card. We're also taking a look at the Enmotus FuzeDrive SSD, a different take on the two-drives-in-one idea that augments its QLC with a dedicated pool of fast SLC NAND flash. Each of these drives is paired with software to intelligently manage data placement, putting heavily-used data on the faster, higher-endurance storage media. The overall goal of the two products is the same: to combine the affordable capacity of QLC NAND with the high-end performance and write endurance of SLC NAND or 3D XPoint memory.
SSD Caching History
There is a long history behind the general idea of combining fast and slow storage devices into one pool of storage that doesn't require end users to manually manage data placement. Caching data in RAM is ubiquitous: CPUs have multiple levels of cache, and hard drives and some SSDs have their own RAM caches, but all of those are temporary by nature. Persistent caches using a faster form of non-volatile storage have never been quite as pervasive, but there have been plenty of examples over the years.
In the consumer space, caching was of great interest when SSDs first started to go mainstream: they were far faster than hard drives, but not yet large enough to be used as a complete replacement for hard drives. Intel added Smart Response Technology (SRT) to their Rapid Storage Technology (RST) drivers a decade ago, starting with the Z68 chipset for Sandy Bridge. Hard drive manufacturers also introduced hybrid drives, but with such pitifully small NAND flash caches that they weren't of much use.
More recently, the migration of SSDs to store more bits of data per physical memory cell has led to consumer SSDs implementing their own transparent caching. All consumer SSDs using TLC or QLC NAND manage a cache layer that operates a portion of the storage as SLC (or occasionally MLC)—less dense, but faster.
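To make that idea concrete, here is a minimal sketch of how a drive-managed SLC cache behaves. This is not any vendor's firmware; the capacities, data structures, and function names are invented for illustration. Host writes land in the SLC region while there is room, and a background step later folds that data into denser QLC.

```python
# Toy model of a dynamic SLC write cache inside a TLC/QLC SSD.
# All sizes and names are hypothetical, for illustration only.

from collections import deque

SLC_CACHE_BLOCKS = 256          # hypothetical cache size, in blocks
qlc_store = {}                  # block address -> data (dense, slower writes)
slc_cache = {}                  # block address -> data (fast, low density)
fold_queue = deque()            # blocks waiting to be folded into QLC

def write_block(lba, data):
    """Host write: absorb into SLC if there is room, else write QLC directly."""
    if len(slc_cache) < SLC_CACHE_BLOCKS:
        slc_cache[lba] = data
        fold_queue.append(lba)
    else:
        # Cache is full: the write degrades to direct-to-QLC speed.
        qlc_store[lba] = data

def read_block(lba):
    """Host read: the SLC copy wins if present, otherwise read QLC."""
    return slc_cache.get(lba, qlc_store.get(lba))

def background_fold(max_blocks=8):
    """Idle-time work: migrate cached blocks into QLC to free SLC space."""
    for _ in range(min(max_blocks, len(fold_queue))):
        lba = fold_queue.popleft()
        if lba in slc_cache:
            qlc_store[lba] = slc_cache.pop(lba)
```

The key property this models is that write bursts smaller than the cache see SLC-like speed, while sustained writes fall through to native QLC performance once the cache fills.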
Optane Memory
Intel made another big push for SSD caching with their first Optane devices to hit the consumer market: tiny M.2 drives equipped with the promising new 3D XPoint memory, and rather confusingly branded Optane Memory as if they were DRAM alternatives instead of NVMe SSDs. Intel initially pitched these as cache devices for use in front of hard drives. The implementation of Optane Memory built on their RST work, but came with new platform requirements: motherboard firmware had to be able to understand the caching system in order to properly load an operating system from a cached volume, and that firmware support was only provided on Kaby Lake and newer platforms. The Optane + hard drive strategy never saw huge success; the continuing transition to TLC NAND meant SSDs that were big enough and fast enough became widely affordable. Multiple-drive caching setups were also a poor fit for the size and power constraints of notebooks. Optane caching in front of TLC NAND was possible, but not really worth the cost and complexity, especially with SLC caching working pretty well for mainstream single-drive setups.
QLC NAND provided a new opportunity for Optane caching, leading to the Optane Memory H10 and the new Optane Memory H20 we're reviewing today. These squeeze Intel's consumer QLC drives (660p and 670p respectively) and one of their Optane Memory cache drives onto a single M.2 card. This requires a somewhat non-standard interface: most systems cannot detect both devices, and will only be able to access either the QLC or the Optane side of the drive, not both. Some Intel consumer platforms starting with Coffee Lake can detect these drives and configure the PCIe x4 link to an M.2 slot as two separate x2 links.
The caching system for the Optane Memory H20 works pretty much the same as when using separate Optane and slow drives, though Intel has continued to refine their heuristics for data placement with successive releases of their RST drivers. One notable downside is that splitting the M.2 slot's four PCIe lanes into two x2 links puts a bottleneck on the QLC side; the Silicon Motion SSD controllers Intel uses support four lanes, but only two can be wired up on the H10 and H20. For the H10 this hardly mattered, because the QLC portion of that drive (equivalent to the Intel SSD 660p) could only rarely deliver more than 2GB/s, so limiting it to PCIe 3.0 x2 had only a minor impact. Intel's 670p is quite a bit faster thanks to more advanced QLC and a much-improved controller, so limiting it to PCIe 3.0 x2 on the Optane Memory H20 actually hurts.
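Intel has not disclosed the details of those RST placement heuristics, so the following is only a toy sketch of the generic "promote frequently accessed blocks to the fast tier" idea; the threshold, capacity, and function names are all invented for illustration and are not Intel's driver logic.

```python
# Toy hot-data promotion heuristic for a two-tier (Optane + QLC) volume.
# Thresholds, tier names, and APIs are invented for this example.

from collections import Counter

PROMOTE_THRESHOLD = 4           # accesses before a block is considered "hot"
OPTANE_CAPACITY_BLOCKS = 8192   # hypothetical fast-tier capacity

access_counts = Counter()
optane_tier = set()             # block addresses currently cached on Optane

def record_access(lba):
    """Track accesses and promote frequently-used blocks to the fast tier."""
    access_counts[lba] += 1
    if (access_counts[lba] >= PROMOTE_THRESHOLD
            and lba not in optane_tier
            and len(optane_tier) < OPTANE_CAPACITY_BLOCKS):
        optane_tier.add(lba)    # copy the block's data to Optane (not shown)

def route_read(lba):
    """Serve a read from whichever tier currently holds the block."""
    record_access(lba)
    return "optane" if lba in optane_tier else "qlc"
```

A real driver would also weigh recency, handle demotion when the fast tier fills, and persist its metadata across reboots, but the per-request routing decision looks roughly like this.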
Intel Optane Memory H20 Specifications

| | H20 | H10 |
|---|---|---|
| Form Factor | single-sided M.2 2280 | single-sided M.2 2280 |
| NAND Controller | Silicon Motion SM2265 | Silicon Motion SM2263 |
| NAND Flash | Intel 144L 3D QLC | Intel 64L 3D QLC |
| Optane Controller | Intel SLL3D | Intel SLL3D |
| Optane Media | Intel 128Gb 3D XPoint | Intel 128Gb 3D XPoint |
| QLC NAND Capacity | 512 GB / 1024 GB | 256 GB / 512 GB / 1024 GB |
| Optane Capacity | 32 GB | 16 GB / 32 GB / 32 GB |
| Sequential Read | up to 3300 MB/s | 1450 / 2300 / 2400 MB/s |
| Sequential Write | up to 2100 MB/s | 650 / 1300 / 1800 MB/s |
| Random Read IOPS | 65k (QD1) | 230k / 320k / 330k |
| Random Write IOPS | 40k (QD1) | 150k / 250k / 250k |
| Launched | May 2021 | April 2019 |
| System Requirements | 11th Gen Core CPU, 500 Series Chipset, RST Driver 18.1 | 8th Gen Core CPU, 300 Series Chipset, RST Driver 17.2 |
Both the Optane Memory H10 and H20 are rated for peak throughput in excess of what either the Optane or QLC portion can provide on its own. To achieve this, Intel's caching software has to be capable of doing some RAID0-like striping of data between the two sub-devices; it can't simply send requests to the Optane portion while falling back on the QLC only when strictly necessary.
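As an illustration of why that matters, here is a small sketch of RAID0-style request splitting across the two sub-devices. The stripe size and device labels are made up and this is not Intel's driver code; it only shows how alternating chunks keeps both links busy so peak throughput can approach the sum of the two devices.

```python
# Toy RAID0-like splitter for a large sequential request across the
# Optane and QLC sub-devices. Chunk size and device names are invented.

STRIPE_CHUNK = 128 * 1024       # hypothetical 128 KiB stripe unit

def split_request(offset, length):
    """Yield (device, offset, length) tuples alternating between sub-devices."""
    devices = ("optane", "qlc")
    i = 0
    while length > 0:
        chunk = min(STRIPE_CHUNK, length)
        yield (devices[i % 2], offset, chunk)
        offset += chunk
        length -= chunk
        i += 1

# Example: a 1 MiB read becomes eight chunks, four per sub-device,
# so both links transfer data concurrently.
for part in split_request(0, 1024 * 1024):
    print(part)
```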
At first glance, the Optane Memory H20 looks like a rehash of the H10, but it is a substantially upgraded product. The Optane portion of the H20 is a bit faster than previous Optane Memory products including the Optane portion of the H10. Intel didn't give specifics on how they improved performance here, but they are still using first-generation 3D XPoint memory rather than the second-generation 3DXP that is now shipping in the enterprise Optane P5800X SSD.
The QLC side of the drive gets a major upgrade from 64L to 144L QLC NAND and a controller upgrade from the Silicon Motion SM2263 to the SM2265. The new controller is an Intel-specific custom part for the 670p and the H20, derived from the SM2267 controller but lacking PCIe 4.0 capability. Cutting out PCIe 4.0 support was reasonable for the Intel 670p because its QLC isn't fast enough to go beyond PCIe 3.0 speeds anyway, and Intel can reduce power draw and maybe save a bit of money with the SM2265 instead of the SM2267. But for the Optane Memory H20 and its PCIe x2 limitation on the QLC portion, it would have been nice to be able to run those two lanes at Gen4 speed.
The Optane Memory H10 was initially planned for both OEM and retail sales, but the retail version was cancelled before release and the (somewhat spotty) support for H10 that was provided by retail Coffee Lake motherboards ended up being useless to consumers. The H20 is launching as an OEM-only product from the outset, which ensures it will only be used in compatible Intel-based systems. This allows Intel to largely avoid any issues with end-users needing to install and configure the caching software, because OEMs will take care of that. The Optane Memory H20 is planned to start shipping in new systems starting in June.
45 Comments
haukionkannel - Wednesday, May 19, 2021 - link
Most likely PCIe 5.0 or 6.0 in reality… and a bigger Optane part. Much bigger!
tuxRoller - Friday, May 21, 2021 - link
You made me curious regarding the history of HSM. The earliest one seems to be the IBM 3850 in the 70s.
So. Yeah. It's not exactly new tech:-|
Monstieur - Tuesday, May 18, 2021 - link
VMD changes the PID & VID so the NVMe drive will not be detected with generic drivers. This is the same behavior on X299, but those boards let you enable / disable VMD per PCIe slot. There is yet another feature called "CPU Attached RAID" which lets you use RST RAID or Optane Memory acceleration with non-VMD drives attached to the CPU lanes and not chipset lanes.
Monstieur - Tuesday, May 18, 2021 - link
500 Series:
VMD (CPU) > RST VMD driver / RST Optane Memory Acceleration with H10 / H20
Non-VMD (CPU) > Generic driver
CPU Attached RAID (CPU) > Generic or RST driver / RST RAID / RST Optane Memory Acceleration with H10 / H20 / 900p / 905p
RAID (PCH) > Generic or RST driver / RST RAID / RST Optane Memory Acceleration with H10 / H20 / 900p / 905p
AHCI (PCH) > Generic driver
X299:
VMD (CPU) > VROC VMD driver / VROC RAID
Non-VMD (CPU) > Generic driver
CPU Attached RAID (CPU) > Generic or RST driver / RST RAID / RST Optane Memory Acceleration with H10 / H20 / 900p / 905p
RAID (PCH) > Generic or RST driver / RST RAID / RST Optane Memory Acceleration with H10 / H20 / 900p / 905p
AHCI (PCH) > Generic driver
dwillmore - Tuesday, May 18, 2021 - link
This really looks like a piece of hardware to avoid unless you run Windows on the most recent generation of Intel hardware. So, that's a double "nope" from me. Thanks for the warning!
Billy Tallis - Tuesday, May 18, 2021 - link
VMD has been an important feature of Intel server platforms for years. As a result, Linux has supported VMD for years. You may not be able to do a clean install of Windows onto this Tiger Lake laptop without loading extra drivers, but Linux has no problem.
I had a multi-boot setup on a drive that was in the Whiskey Lake laptop. When I moved it over to the Tiger Lake laptop, grub tried to load its config from the wrong partition. But once I got past that issue, Linux booted with no trouble. Windows could only boot into its recovery environment. From there, I had to put RST drivers on a USB drive, load them in the recovery environment so it could detect the NVMe drive, then install them into the Windows image on the NVMe drive so it could boot on its own.
dsplover - Tuesday, May 18, 2021 - link
Great read, thanks. Love the combination's benefits being explained so well.
CaptainChaos - Tuesday, May 18, 2021 - link
The phrase "putting lipstick on a pig" comes to mind for Intel here!Tomatotech - Wednesday, May 19, 2021 - link
Other way round. Optane is stunning but Intel has persistently shot it in the foot for almost all their non-server releases.
In Intel’s defence, getting it right requires full-stack cooperation between Intel, Microsoft, and motherboard makers. You’d think they should be able to do it, given that such cooperation is the basis of their existence, but in Optane’s case it hasn’t been achievable.
Only Apple seems to be achieving this full stack integration with their M1 chip & unified memory & their OS, and it took them a long time to get to this point.
CaptainChaos - Wednesday, May 19, 2021 - link
Yes... I meant that Optane is the lipstick & QLC is the pig, Tomatotech dude! I use several Optane drives but see no advantage at this point for QLC! It's just not priced properly to provide a tempting alternative to TLC.