Rebranded Ethernet Technology Consortium Unveils 800 Gigabit Ethernet
by Gavin Bonshor on April 9, 2020 11:00 AM EST - Posted in
- Networking
- Cisco
- Ethernet
- 800 GbE
- 400 GbE
- 800GBase-T
With increasing demand for networking speed and throughput within datacenters and high-performance computing clusters, the newly rebranded Ethernet Technology Consortium has announced a new 800 Gigabit Ethernet technology. Based on many of the existing technologies that power contemporary 400 Gigabit Ethernet, the 800GBASE-R standard looks to double performance once again, to feed ever-hungrier datacenters.
The recently finalized standard comes from the Ethernet Technology Consortium, the non-IEEE, tech industry-backed consortium formerly known as the 25 Gigabit Ethernet Consortium. The group was originally created to develop 25, 50, and 100 Gigabit Ethernet technology, and while IEEE Ethernet standards have since surpassed what the consortium achieved, the group has stayed together to push even faster networking speeds, changing its name to keep with the times. Some of the biggest contributors to and supporters of the ETC include Broadcom, Cisco, Google, and Microsoft, with more than 40 companies listed as integrators of its work.
800 Gigabit Ethernet Block Diagram
As for their new 800 Gigabit Ethernet standard, at a high level 800GbE can be thought of as essentially a wider version of 400GbE. The standard is primarily based around the existing 106.25G lanes pioneered for 400GbE, but doubles the total lane count from 4 to 8. And while this is a conceptually simple change, there is a significant amount of work involved in bonding together additional lanes in this fashion, which is what the new 800GbE standard has to sort out.
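To put the lane math in concrete terms, here is a minimal sketch. The lane counts and the 106.25G per-lane rate are the ETC's figures; the helper function is purely our own illustration:

```python
# Aggregate signaling rate is simply lane count x per-lane rate.
# 400GbE bonds 4 lanes at 106.25 Gb/s; 800GbE doubles the lane count to 8.

def aggregate_gbps(lanes: int, lane_rate_gbps: float) -> float:
    """Raw signaling rate across all bonded lanes, in Gb/s."""
    return lanes * lane_rate_gbps

print(aggregate_gbps(4, 106.25))   # 425.0 Gb/s raw, carrying a 400 Gb/s MAC rate
print(aggregate_gbps(8, 106.25))   # 850.0 Gb/s raw, carrying an 800 Gb/s MAC rate
```

The raw rates exceed the nominal MAC rates because of encoding and FEC overhead, which is where the odd-looking 106.25G figure comes from in the first place.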
Diving in, the new 800GBASE-R specification defines a new Media Access Control (MAC) layer and Physical Coding Sublayer (PCS), which in turn are built on top of two 400 GbE PCSs to create a single MAC operating at a combined 800 Gb/s. Each 400 GbE PCS uses 4 x 106.25 Gb/s lanes, which when doubled brings the total to the eight lanes used by the new 800 GbE standard. And while the focus is on 106.25G lanes, that's not a hard requirement; the ETC states that this architecture could also allow for larger groupings of slower lanes, such as 16 x 53.125G, should manufacturers decide to pursue the matter.
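As an aside, the 106.25G and 53.125G figures fall out of the encoding overhead. Assuming 800GBASE-R reuses the 400GBASE-R encoding chain of 256b/257b transcoding plus RS(544,514) FEC, which the two-PCS design implies, the per-lane rates can be derived like so:

```python
# Deriving per-lane rates from the 800 Gb/s MAC rate, assuming the
# 400GBASE-R encoding chain is reused: 256b/257b transcoding adds ~0.4%
# overhead, then RS(544,514) FEC adds parity on top of that.

MAC_RATE = 800.0                        # Gb/s
transcoded = MAC_RATE * 257 / 256       # 803.125 Gb/s after 256b/257b
fec_encoded = transcoded * 544 / 514    # 850.0 Gb/s after RS(544,514) parity

for lanes in (8, 16):
    print(f"{lanes:2d} lanes -> {fec_encoded / lanes:.3f} Gb/s per lane")
# 8 lanes  -> 106.250 Gb/s per lane (the baseline 800GBASE-R layout)
# 16 lanes -> 53.125 Gb/s per lane (the slower-lane grouping the ETC mentions)
```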
Focusing on the MAC itself, the ETC claims that 800 Gb Ethernet will inherit all of the attributes of the 400 GbE standard, including full-duplex support between two terminals and a minimum interpacket gap of 8 bit-times. The above diagram depicts each 400 GbE PCS with 16 x 10-bit lanes, with each 400 GbE data stream transcoding and scrambling packet data separately, and a bonding control that synchronizes and muxes the two PCSs together.
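To make the bonding-and-muxing idea concrete, here is a toy sketch of the flow described above. This is our own simplification, not the spec's actual distribution scheme; plain integers stand in for the blocks each PCS handles:

```python
from itertools import zip_longest

def distribute(stream):
    """Deal the 800G block stream out round-robin to the two 400G PCS pipelines."""
    return stream[0::2], stream[1::2]

def bond_and_mux(pcs_a, pcs_b):
    """Bonding control: re-interleave the two (independently scrambled and
    transcoded) PCS outputs in lockstep, preserving block order."""
    return [blk for pair in zip_longest(pcs_a, pcs_b)
            for blk in pair if blk is not None]

blocks = list(range(10))               # stand-ins for data blocks
a, b = distribute(blocks)
assert bond_and_mux(a, b) == blocks    # synchronization preserves ordering
```

The real work the standard has to sort out is keeping those two pipelines aligned despite skew between lanes; the sketch simply assumes perfect lockstep.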
All told, the 800GbE standard is the latest step for an industry that, as a whole, is moving toward Terabit (and beyond) Ethernet. And while those future standards will ultimately require faster SerDes to drive the necessary individual lane speeds, for now 800GBASE-R can deliver 800GbE on current-generation hardware. All of which should be a boon for the standard's intended hyperscaler and HPC operator customers, who are eager to get more bandwidth between systems.
The Ethernet Technology Consortium outlines the full 800 GbE specification in a PDF on its website. There's no information on when we might see 800GbE in products, but as it's largely based on existing technology, it should be a relatively short wait by datacenter networking standards. Datacenter operators will probably have to pay for the luxury, though; with even a 16-port Cisco Nexus 400 GbE switch costing upwards of $11,000, we don't expect 800GbE to come cheap.
Related Reading
- Sonnet Unveils Solo5G: A USB-C to 5 GbE Network Adapter
- Intel Launches Atom P5900: A 10nm Atom for Radio Access Networks
- D-Link Announces Nuclias Remote Management Solutions for SMB Networks
- TP-Link Updates Deco Mesh Networking Family with Wi-Fi 6
Source: Ethernet Technology Consortium
QSFP-DD Image Courtesy Optomind
75 Comments
JKflipflop98 - Monday, April 13, 2020
Kids must have forgotten to give close his Alzheimer's medicine today. Today's forecast is anger with a good chance of extreme confusion.
Deicidium369 - Wednesday, April 15, 2020
If someone could just get those damned kids off his lawn, it would all be OK.
NikosD - Thursday, April 16, 2020
@close always has the best comments in AnandTech's comment section, and you should read them with proper respect and care. Of course, in quarantine times due to COVID-19 and the government policies around it, anyone could exaggerate a little more than usual, especially after a provocative comment.
WaltC - Friday, April 17, 2020
Just be delighted that you don't have some moron government agency telling you what you *don't* need and making sure you can't get it even if you don't agree...;)
mode_13h - Monday, April 13, 2020
> Good thing you posted the press release here then.
It's not the press release. I'm sure you didn't even bother to check.
More to the point, if the content is not relevant or interesting for you, why did you even click it? Instead of attacking the site for publishing content not suited to your interests, why not just go somewhere else?
Deicidium369 - Wednesday, April 15, 2020
That retailer router-switch.com is a joke - good luck ever getting anything from them. CDW is more realistic - not that you are actually in the market for one; I know this is all speculative.
nismotigerwvu - Thursday, April 9, 2020
That's some mind-bending bandwidth! Sure, there will be overhead, but we're still talking about a link approaching 100 gigabytes per second (BYTES!).
MenhirMike - Thursday, April 9, 2020
It's crazy to think how to feed that pipe. I mean, I guess this isn't really meant as an Ethernet card for a single server but for aggregation of many servers, but still, feeding 100 GB/s into that pipe is mind-boggling. That's 3-4 Blu-ray discs (or one modern AAA game) every second.
firewrath9 - Thursday, April 9, 2020
These will probably be used as interconnects between servers. You could have a 42U server cabinet with, say, 40 1U compute servers, then have a networking switch handling cross-server traffic (maybe 40GbE to each of the 1U servers, then an 800GbE link to a central switch handling many server cabinets).
Cokie - Saturday, April 11, 2020
Even though my server workloads usually don't go past 1 or 10 Gbit/s, it's not unusual for me to have 1-4 x 10 Gbit/s just to the SAN. Not using 400 Gbit/s, but it definitely would be nice for SAN links that I already saturate. Cool to see the expensive tech, even if I can't afford it, and I hope it drives down prices to make it more accessible.