Intel Ethernet 800 Series To Support NVMe over TCP, PCIe 4.0
by Billy Tallis on September 24, 2019 3:30 PM EST - Posted in
- Networking
- Intel
- Ethernet
- PCIe 4.0
- NVMeoF
- 100G
- 100G Ethernet
- Columbiaville
Today at the SNIA Storage Developer Conference, Intel is sharing more information about their 100Gb Ethernet chips, first announced in April and due to hit the market next month. The upcoming 800 Series Ethernet controllers and adapters will be Intel's first 100Gb Ethernet solutions, and will also feature expanded capabilities for hardware-accelerated packet processing. Intel is now announcing that they have implemented support for the TCP transport of NVMe over Fabrics using the Application Device Queues (ADQ) technology that the 800 Series is introducing.
NVMe over Fabrics has become the SAN protocol of choice for new systems, allowing for remote access to storage with just a few microseconds of extra latency compared to local NVMe SSD access. NVMeoF was initially defined to support two transport protocols: Fibre Channel and RDMA, the latter of which can be provided by InfiniBand, iWARP, and RoCE-capable NICs. Intel already provides iWARP support on their X722 NICs, and RoCEv2 support was previously announced for the 800 Series. However, in the past year much of the interest in NVMeoF has shifted to the new NVMe over TCP transport specification, which makes NVMeoF usable over any IP network without requiring high-end RDMA-capable NICs or other niche network hardware. The NVMe over TCP spec was finalized in November 2018 and opened the doors to much wider use of NVMe over Fabrics.
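For anyone who wants to try the software path, NVMe over TCP needs no special hardware at all: the host driver has been in mainline Linux since kernel 5.0 and is driven with the standard nvme-cli tool. The commands below are a minimal sketch; the target address, port, and NQN are placeholder values for illustration, not anything Intel has published.

```sh
# Load the in-tree NVMe over TCP host driver (mainline since Linux 5.0)
modprobe nvme-tcp

# Ask a remote NVMe/TCP target which subsystems it exports
# (192.0.2.10 and port 4420 are placeholder values)
nvme discover -t tcp -a 192.0.2.10 -s 4420

# Connect to one of those subsystems; its namespaces then show up
# as ordinary local block devices such as /dev/nvme1n1
nvme connect -t tcp -a 192.0.2.10 -s 4420 \
    -n nqn.2019-09.com.example:storage-target
```

Any NIC can service that path; what the 800 Series adds is acceleration on top of it.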
Software-based NVMe over TCP implementations can use any network hardware, but for the high-performance applications that were originally the focus of NVMe over Fabrics, hardware acceleration is still required. Intel's ADQ functionality can be used to provide some acceleration of NVMe over TCP, and they are contributing code to support this in the Linux kernel. This makes the 800 Series Ethernet adapters capable of using NVMe over TCP with latency almost as low as RDMA-based NVMe over Fabrics. Intel has also announced that Lightbits Labs, one of the major commercial proponents of NVMe over TCP, will be adding ADQ support to their disaggregated storage solutions.
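Intel has not published the exact configuration recipe for pairing ADQ with NVMe over TCP, but ADQ is exposed through the kernel's existing traffic-classification plumbing, so a deployment would look roughly like the sketch below: carve out a dedicated hardware queue set with mqprio in channel mode, then steer NVMe/TCP traffic (port 4420 by default) into it with a hardware-offloaded flower filter. The interface name, queue counts, and port-only match are assumptions for illustration.

```sh
# Split the NIC's queues into two hardware traffic classes; "mode channel"
# gives traffic class 1 its own dedicated queue set (queues 8-15 here)
tc qdisc add dev eth0 root mqprio num_tc 2 map 0 0 0 0 1 1 1 1 \
    queues 8@0 8@8 hw 1 mode channel

# Steer inbound NVMe/TCP traffic (default port 4420) into that queue set
# with a flower filter offloaded to the NIC hardware (skip_sw)
tc qdisc add dev eth0 ingress
tc filter add dev eth0 protocol ip parent ffff: prio 1 flower \
    ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

The remaining piece is keeping the NVMe/TCP host driver's connections pinned to and polling those dedicated queues, which is presumably what Intel's kernel contributions address.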
Unrelated to NVMe over Fabrics, Intel has also announced that Aerospike 4.7 will be the first commercial database to make use of ADQ acceleration, and Aerospike will be publishing their own performance measurements showing improvements to throughput and QoS.
The Intel Ethernet Controller E810 and four 800 Series Ethernet adapters will be available from numerous distributors and OEMs over the next several weeks. The product brief for the E810 controller has been posted, and indicates that it supports up to a PCIe 4.0 x16 host interface—to be expected from a 100Gb NIC, but not something Intel PR is keen to highlight while their CPUs are still on PCIe 3.0.
Related Reading
- Intel Columbiaville: 800 Series Ethernet at 100G, with ADQ and DDP
- Intel’s Enterprise Extravaganza 2019: Launching Cascade Lake, Optane DCPMM, Agilex FPGAs, 100G Ethernet, and Xeon D-1600
- NVIDIA To Acquire Datacenter Networking Firm Mellanox for $6.9 Billion
- Western Digital to Exit Storage Systems: Sells Off IntelliFlash Division
- Marvell at FMS 2019: NVMe Over Fabrics Controllers, AI On SSD
48 Comments
godrilla - Wednesday, September 25, 2019 - link
"supports up to a PCIe 4.0 x16 host interface—to be expected from a 100Gb NIC, but not something Intel PR is keen to highlight while their CPUs are still on PCIe 3.0." Lol quite we don't want people to know we support the competition because we are inferior until....The highlight of the story.
dumanfu - Wednesday, September 25, 2019 - link
As PCIe 4.0, it is missing a "designed for AMD Epyc" logo LOL
Namisecond - Wednesday, September 25, 2019 - link
NVME over TCP? Why does this exist? To me, it sounds like a security breach waiting to happen. At NVME speeds. Add in Intel's recent track record on architectural security....
Billy Tallis - Wednesday, September 25, 2019 - link
Well, it was only a few weeks ago that news broke about doing cache timing attacks over RDMA with Intel NICs that support DDIO (DMA to L3 instead of DRAM). So NVMe over TCP sounds like it might be more secure than NVMe over RDMA in some cases. But in practice, most deployments and all the really high-performance deployments of this will be on fairly isolated/locked-down networks.
A5 - Wednesday, September 25, 2019 - link
"I don't understand something so it is useless" is a really bad look.I know you were just setting up for your big lolz hot take, but you just look ignorant.
Dug - Wednesday, September 25, 2019 - link
You need to look at this - https://www.flashmemorysummit.com/English/Collater...
Maybe that will help you understand.
alpha754293 - Friday, September 27, 2019 - link
What I would like to see is a way to jump between QSFP28 and SFP+ connections. The cost per port to go from 1 GbE to 10 GbE is actually HIGHER than it is to go from 1 GbE to 100 Gbps (be it either GbE or IB).
The problem is that 10GbE is coming online for a lot more systems and devices (either RJ45 or SFP+) and right now, I don't have a way to bridge/jump between those speeds in a single switch.
MajesticTrout - Monday, September 30, 2019 - link
Regarding a switch for home use, the Mikrotik CRS305 is doing pretty well for me. ~$140, fanless, 4 SFP+ ports for 10gb, and an RJ45 at 1gb for management and/or another switch port. Gives me 10gb for a desktop PC, NAS, VM host, and an uplink to a different switch that handles my 1gb and PoE devices.