The Intel Xeon D Review: Performance Per Watt Server SoC Champion? by Johan De Gelas on June 23, 2015 8:35 AM EST
Broadwell in a Server SoC
In a nutshell, the Xeon D-1540 is two silicon dies in one highly integrated package. Eight 14 nm Broadwell cores, a shared L3 cache, a dual 10 Gigabit Ethernet MAC, and a PCIe 3.0 root complex with 24 lanes live on the SoC die, while the second die, a PCH, integrates four USB 3.0 controllers, four USB 2.0 controllers, six SATA 3 controllers, and a PCIe 2.0 root complex.
The Broadwell architecture brings small microarchitectural improvements: Intel claims about 5.5% higher IPC in integer processing. Other improvements include slightly lower VM exit/enter latencies, something Intel has improved with almost every recent generation (Sandy Bridge excluded).
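To put the 5.5% IPC claim in perspective, a back-of-the-envelope sketch: throughput scales with IPC times clock frequency, so at the same clock an IPC uplift translates directly into the same throughput uplift. The clock value below is an illustrative placeholder, not a spec from the review.

```python
# Sketch: throughput ~ IPC * clock. The ~5.5% integer IPC figure is
# Intel's claim; the 2.0 GHz clock is a placeholder, not a measured spec.

def relative_throughput(ipc, clock_ghz):
    """Instructions per second in arbitrary units (IPC * clock)."""
    return ipc * clock_ghz

haswell   = relative_throughput(ipc=1.000, clock_ghz=2.0)  # normalized baseline
broadwell = relative_throughput(ipc=1.055, clock_ghz=2.0)  # +5.5% IPC, same clock

uplift_pct = (broadwell / haswell - 1) * 100
print(f"Iso-clock throughput uplift: {uplift_pct:.1f}%")  # ~5.5%
```

In other words, at identical clocks the Broadwell core should be roughly 5.5% faster in integer work, assuming the workload is core-bound rather than memory-bound.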
Of course, if you are in the server business, you care little about small IPC improvements, so let us focus on the large, relevant ones. The big improvements over the Xeon E3-1200 v3 are:
- Twice as many cores and threads (8/16 vs 4/8)
- Support for 32 GB instead of 8 GB per DIMM, and for DDR4-2133
- Maximum memory capacity has quadrupled (128 GB vs 32 GB)
- 24 PCIe 3.0 lanes instead of 16 PCIe 3.0 lanes
- 12 MB L3 rather than 8 MB L3
- No separate C22x chipset necessary for SATA / USB
- Dual 10 Gbit Ethernet integrated ...
And last but not least, RAS (Reliability, Availability and Serviceability) features closer to those of the Xeon E5.
The only RAS features missing from the Xeon D are the expensive ones, such as memory mirroring. Those features are very rarely used anyway, and the Xeon D cannot offer them as it does not have a second memory controller.
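The memory-capacity quadrupling claimed above follows directly from the per-DIMM limits. A small sketch of that arithmetic, assuming four DIMM slots on both platforms (two channels, two DIMMs per channel, which is the usual layout; the slot count is our assumption, not stated in the review):

```python
# Sketch of the "quadrupled memory capacity" math.
# Assumption: four DIMM slots on both platforms (not stated above).
DIMM_SLOTS = 4

xeon_e3_v3_max = DIMM_SLOTS * 8    # 8 GB DDR3 DIMMs -> 32 GB total
xeon_d_max     = DIMM_SLOTS * 32   # 32 GB DDR4 DIMMs -> 128 GB total

print(xeon_e3_v3_max, xeon_d_max, xeon_d_max // xeon_e3_v3_max)  # 32 128 4
```

The 4x jump comes entirely from the larger DIMMs the DDR4 controller accepts, not from extra channels.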
Compared to the Atom C2000, the biggest improvement is that the Broadwell core is vastly more advanced than the Silvermont core. And that is not all:
- The Atom C2000 has no L3 cache, and is thus a lot slower in situations where the cores have to synchronize frequently (e.g., databases)
- No support for USB 3.0 (the Xeon D has four USB 3.0 controllers)
- As far as we know, Atom C2000 server boards were limited to two 1 Gbit PHYs (unless a separate 10 GbE controller is added)
- No support for PCIe 3.0, "only" 16 PCIe 2.0 lanes
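The PCIe gap is larger than the lane counts alone suggest, because PCIe 3.0 also roughly doubles per-lane throughput (8 GT/s with 128b/130b encoding versus 5 GT/s with 8b/10b). A rough aggregate-bandwidth sketch, using the well-known per-lane figures rather than anything measured in this review:

```python
# Rough aggregate PCIe bandwidth comparison (sketch, not a measurement).
# Per-lane effective throughput: PCIe 2.0 ~ 500 MB/s (5 GT/s, 8b/10b),
# PCIe 3.0 ~ 985 MB/s (8 GT/s, 128b/130b).
GEN2_MB_PER_LANE = 500
GEN3_MB_PER_LANE = 985

atom_c2000_mb = 16 * GEN2_MB_PER_LANE  # 16 Gen2 lanes
xeon_d_mb     = 24 * GEN3_MB_PER_LANE  # 24 Gen3 lanes

print(f"Atom C2000: {atom_c2000_mb} MB/s, Xeon D: {xeon_d_mb} MB/s")
```

So the Xeon D offers close to three times the aggregate PCIe bandwidth, a significant advantage for storage- and network-heavy appliances.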
There are more subtle differences of course, such as the use of a crossbar rather than a ring interconnect, but those are beyond the scope of this review.