An Overview of Server DIMM Types

Typically, desktop and mobile systems use Unbuffered DIMMs (UDIMMs). The memory controller inside the CPU addresses each memory chip on a UDIMM individually and in parallel. However, each memory chip places a certain amount of capacitance on the memory channel and thus attenuates the high-frequency signals traveling through it. As a result, the channel can only support a limited number of memory chips.

This is hardly an issue in the desktop world. Most people will be perfectly happy with 16GB (4x4GB) running at 1600 to 2133MHz while overvolting the DDR3 to 1.65V. It's only if you want to use 8GB DIMMs (at 1.5V) that you start to see the limitations: most boards will only let you install two of them, one per channel. Install four of them in a dual channel board and you will probably be limited to 1333MHz. But currently very few people will see any benefit from a slow 32GB instead of a fast 16GB of DDR3 (and you'd need Windows 7 Professional or Ultimate to use more than 16GB).

In the server world, vendors tend to be a lot more conservative. Running DIMMs at an out-of-spec 1.65V will shorten their life and drive the energy consumption a lot higher. Higher power consumption for 2-3% more performance is simply insane in a rack full of power hogging servers.

Memory validation is a very costly process, another good reason why server vendors like to play it safe. You can use UDIMMs (with ECC most of the time, unlike desktop DIMMs) in servers, but they are limited to lower capacities and clockspeeds. For example, Dell's best UDIMM is a 1333MHz 4GB DIMM, and you can only place two of them per channel (2 DPC = 2 DIMMs Per Channel). That means that a single Xeon E5 cannot address more than 32GB of RAM when using UDIMMs. In the current HP servers (Generation 8), you can get 8GB UDIMMs, which doubles the UDIMM capacity to 64GB per CPU.

In short, UDIMMs are the cheapest server DIMMs, but you sacrifice a lot of memory capacity and a bit of performance.

RDIMMs (Registered DIMMs) are a much better option for your server in most cases. The best RDIMMs today are 16GB modules running at 1600MHz (an 800MHz DDR clock). With RDIMMs, you can get up to three times more capacity: 4 channels x 3 DPC x 16GB = 192GB per CPU. The disadvantage is that at 3 DPC the clockspeed throttles back to 1066MHz.

If you want top speed, you have to limit yourself to 2 DPC (and 4 ranks). With 2 DPC, the RDIMMs will run at 1600MHz, and each CPU can then address up to 128GB (4 channels x 2 DPC x 16GB). That is still twice as much as with UDIMMs, while running at a 20% higher speed.
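The capacity figures above are simple products of channels, DIMMs per channel, and DIMM size. A quick sketch in Python (the configuration values are the article's; the function name is mine):

```python
def max_capacity_gb(channels, dimms_per_channel, dimm_gb):
    """Per-CPU memory capacity with every channel fully populated."""
    return channels * dimms_per_channel * dimm_gb

# A Xeon E5 has 4 memory channels per CPU.
print(max_capacity_gb(4, 2, 4))    # UDIMMs, 2 DPC, 4GB DIMMs  -> 32
print(max_capacity_gb(4, 3, 16))   # RDIMMs, 3 DPC, 16GB DIMMs -> 192
print(max_capacity_gb(4, 2, 16))   # RDIMMs, 2 DPC, 16GB DIMMs -> 128
```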

RDIMMs add a register, which buffers the address and command signals. The integrated memory controller in the CPU sees the register instead of addressing the memory chips directly. As a result, the number of ranks per channel is typically higher: the current Xeon E5 systems support up to eight ranks of RDIMMs per channel. That is four dual ranked DIMMs per channel (though you only have three DIMM slots per channel) or two quad ranked DIMMs per channel. If you combine quad ranks with the largest memory chips, you get the largest DIMM capacities. For example, a quad rank DIMM built from 4Gbit (x4) chips is a 32GB DIMM (16 chips x 512MB = 8GB per rank, x 4 ranks). So in that case we can get up to 256GB per CPU: 4 channels x 2 DPC x 32GB. Not all servers support quad ranks, though.


LRDIMMs can do even better. Load Reduced DIMMs replace the register with an Isolation Memory Buffer (iMB™ by Inphi) component. The iMB buffers the command, address, and data signals, isolating all electrical loading (including the data signals) of the memory chips on the (LR)DIMM from the host memory controller. Again, the host controller sees only the iMB and not the individual memory chips. As a result you can fill all DIMM slots with quad ranked DIMMs. In practice this means that you get 50% to 100% more memory capacity.
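Using the article's own figures, filling all three slots per channel with quad-rank 32GB LRDIMMs is where the 50% (vs. 2 DPC quad-rank RDIMMs) to 100% (vs. 3 DPC dual-rank 16GB RDIMMs) gain comes from; a quick check:

```python
lrdimm = 4 * 3 * 32       # 4 channels x 3 DPC x 32GB LRDIMMs = 384GB per CPU
rdimm_quad = 4 * 2 * 32   # quad-rank RDIMMs top out at 2 DPC = 256GB
rdimm_dual = 4 * 3 * 16   # dual-rank 16GB RDIMMs at 3 DPC    = 192GB
print(lrdimm, rdimm_quad, rdimm_dual)       # 384 256 192
print((lrdimm - rdimm_quad) / rdimm_quad)   # 0.5 -> +50%
print((lrdimm - rdimm_dual) / rdimm_dual)   # 1.0 -> +100%
```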

Supermicro's 2U Twin HyperCloud DIMMs
Comments

  • dgingeri - Friday, August 3, 2012 - link

"Most 2U servers are limited to 24 memory slots and as a result 384GB of RAM. With two nodes in a 2U server and 16 slots per node, you can cram up to 512GB of RDIMMs in one server."

It's not one server. It's actually 2 servers. Just because they're in a 2U x 1/2 width form factor doesn't mean they're just one system. There are 2 systems there. Sure, you can pack 512GB into 2U with 2 servers, but there are better ways.

    1. Dell makes a PowerEdge R620, where you can pack 384GB into 1U, two of those gives you the same number of systems in the same space, with 50% more memory.

2. Dell also has their new R720, which is 2U and has a capacity of 768GB in a 2U form factor. Again, 50% more memory capacity in the same 2U. However, that leaves you 2 processor sockets short.

3. Now, there's the new R820. 4 sockets, 1.5TB of memory, 7 slots, in 2U of space. It's a beast. I have one of these on the way from Dell for my test lab.

Working as an admin in a test lab, dealing with all brands of servers, my experiences with various brands give me a rather unique insight. I have had very few problems with Dell servers, despite having nearly 30% Dell servers. We've had 7 drives die (all Toshiba) and one faceplate LCD go out. Our HP boxes, at less than 10% of our lab, have had more failures. The IBMs, while also less than 10%, have had absolutely no hardware failures. Our Supermicros comprise about 25% of the lab, yet contribute >80% of the hardware problems, from motherboards that just quit recognizing memory to backplanes that quit recognizing drives. I'm not too happy with them.
  • JHBoricua - Monday, August 6, 2012 - link


Sure, you can load each of those Rxxx Dell servers with boatloads of memory, but you fail to mention that it comes with a significant performance penalty. The moment you put a third DIMM on a memory channel, your memory speed drops from 1600 (IF you started with 1600 memory to begin with) to 1066 or, worse, 800. On a virtualization host, that makes a big difference.
  • Casper42 - Friday, August 10, 2012 - link

    No one makes 32GB @ 1600 yet.
    So 512GB @ 2DPC would be 1333
    And 768GB @ 3DPC would be 1066 or 800 like you mentioned.

    384 using 16GB DIMMs would still be 3DPC and would drop from 1600 down to like 1066.

    256GB @ 1600 @ 2DPC still seems to be the sweet spot.

    BTW, why is the Dell R620 limited to 16GB DIMMs? The HP DL360p Gen8 is also 1U and supports 32GB LRDIMMs
  • ImSteevin - Friday, August 3, 2012 - link

    MMhmmm yeah
    Oh yeah ok
    I know some of these words.
  • thenew3 - Friday, August 3, 2012 - link

    The latest Dell R620's are 1U servers that can have two 8 core CPU's and 24 DIMM slots. Each slot can hold up to a 32GB DIMM giving total memory capacity of 768GB in a 1U space.

    We use these in our data centers for virtualization (we're 100% virtualized). Completely diskless (internal RAID 1 dual SD modules for ESXi)

    Each machine has four 10gb NIC plus two 1gb NIC. All storage on iSCSI SAN's through 10gb backbone.

    For most virtualization tasks, you really don't need the 2U R720, which has the same CPU/RAM options but gives you more drive bays and expansion slots.
  • ddr3memory - Sunday, August 5, 2012 - link

    A few corrections - the 192GB for HCDIMMs is incorrect - it should also be 384GB.

    There is no data available that confirms a 20% higher power consumption for HCDIMMs over LRDIMMs. There is a suspicious lack of benchmarks available for LRDIMMs. It is possible that figure arises from a comparison of 1.5V HCDIMMs vs. 1.35V LRDIMMs (as were available at IBM/HP).

    It is incorrect that LRDIMMs are somehow standard and HCDIMMs are non-standard.

    In fact HCDIMMs are 100% compatible with DDR3 RDIMM JEDEC standard.

    It is the LRDIMMs which are a new standard and are NOT compatible with DDR3 RDIMMs - you cannot use them together.

    The 1600MHz HCDIMM mention is interesting - would be good to hear more on that.
  • ddr3memory - Sunday, August 5, 2012 - link

    I have posted an article on the performance comparison of HyperCloud HCDIMMs (RDIMM-compatible) vs. LRDIMMs (RDIMM non-compatible).

    Cannot post link here it seems - search for the article on the blog:
    Awaiting 32GB HCDIMMs
  • ddr3memory - Monday, August 6, 2012 - link

    VMware has had good things to say about HCDIMM (not a word from VMware about LRDIMMs though). Search on the net for the article entitled:

    Memory for VMware virtualization servers
  • ddr3memory - Monday, August 6, 2012 - link

The prices mentioned may be off - I see IBM showing the same retail prices for 16GB LRDIMMs/HCDIMMs, and similar at the IBM resellers.

    These resellers show 16GB HCDIMMs selling at $431 at costcentral for example, $503 at glcomp and $424 at pcsuperstore.

    Search the internet for this article:

    What are IBM HCDIMMs and HP HDIMMs ?

    It has the links for the IBM/HP retail prices as well as the reseller prices.
