NVIDIA nForce 590 SLI: Reference Board Layout


NVIDIA designed a very well laid out board with all major connections easily reached. The board is free of most clearance issues and was very easy to install in our mid-size ATX case. The reference board features an excellent voltage regulator power design along with Rubycon and Sanyo capacitors that yielded superb stability and overclocking results even with our early BIOS and board design.


The DIMM slots are correctly color-coded for dual channel operation, with a different color for each memory bank. The memory modules are easy to install even with a full-size video card in the first PCI Express X16 slot. The 24-pin ATX power connector is located along the upper edge of the board along with the single IDE port connector.


The six NVIDIA SATA ports are color-coded black and are conveniently located below the NVIDIA MCP. The ports feature the newer clamp-and-latch design. We found their positioning to be excellent when utilizing either PCI-E X16 slot. The NVIDIA MCP does require a heatsink in this application.

The floppy drive connector is color-coded black and positioned along the left edge of the board. The location is not ideal, although the point of a floppy connector is becoming moot given the widespread availability of USB floppy and flash drives along with BIOS support for booting from them. The various chassis panel headers, USB connectors, and the second chassis fan header are located in the lower left portion of the board.


The board comes with two physical PCI Express X16 connectors, one PCI Express X1 connector, one PCI Express X4 connector, and two PCI 2.3 connectors. This layout offers a very good balance of expansion slots for a mainstream board while providing excellent clearance for graphics cards. Our main issue is that the first PCI 2.3 connector is rendered physically useless in an SLI setup. Although the positioning and number of PCI-E and PCI slots are up to the manufacturer, we certainly hope to see layouts that do not block the PCI slots in the near future. A four-pin Molex power connector, required for SLI operation, and the first chassis fan header are located along the edge of the board; the Molex connector sits in an unusual and hard-to-reach spot, the bottom left corner.


Returning to the CPU socket area, we find an ample amount of room for alternative cooling solutions. We utilized the stock heatsink/fan in our normal testing but also verified that a couple of larger Socket 775 cooling solutions would fit in this area during our overclocking tests.

The NVIDIA SPP is actively cooled with a medium-sized heatsink/fan unit that did not interfere with any installed peripherals. In fact, this unit kept the chipset cool enough that additional chipset voltage was not a factor in our overclocking tests, although we could see it heating up if the HSF is not seated properly. Our only concern is the lifespan of the fan, but it is very quiet during operation. NVIDIA places the 8-pin 12V auxiliary power connector at the top of the CPU socket area but out of the way of aftermarket cooling solutions.


The rear panel contains the standard PS/2 mouse and keyboard ports, LAN ports, and four USB ports. The LAN (RJ-45) ports have two LED indicators that show connection activity and speed. The audio panel consists of six ports that can be configured for 2-, 4-, 6-, or 8-channel audio. The rear panel also includes two S/PDIF (optical/coaxial) ports, an external SATA 3Gb/s port, and an IEEE 1394a port.

Comments (37)

  • bespoke - Tuesday, June 27, 2006 - link

    Once again, the southbridge chip and fan are right underneath the top video card slot. A large cooling solution on the video card will completely cover the SB chip - possibly preventing the video card from seating correctly and certainly not helping with airflow.

    Please move the SB chip or get rid of the fan! Arrrgh!
  • Gary Key - Wednesday, June 28, 2006 - link

    quote:

    Please move the SB chip or get rid of the fan! Arrrgh!


    Due to the required two-chip solution for dual x16 GPU operation, there is not another area on the board to place the chipset and still retain the required trace layouts. Because of the heat generated by the MCP, it requires active cooling or a large passive heatsink (as MSI did on their 570 board). These issues will be solved late this year when NVIDIA goes to a single-chip solution for their dual x16 boards. In the meantime, we are not happy either. ;-)
  • Anemone - Thursday, June 29, 2006 - link

    Probably should use DDR2-800 on the Asus and DDR2-667 on the 590 as the highest supported on each and re-compare. I know that feels unfair, but I'm saying that from a "highest supported" basis. Enthusiasts are likely to go beyond that, but you'll be giving the full OC tests a go in the next round.

    Initially, however, I think 533 on both skewed things.
  • Per Hansson - Tuesday, June 27, 2006 - link

    "The reference board features an excellent voltage regulator power design along with Rubycon and Sanyo capacitors that yielded superb stability and overclocking results even with our early BIOS and board design."
    Actually those capacitors with a T vent are Panasonic FL, on the 12V input for the VRM and also on the 5V or 12V input for the memory regulators...

    Still excellent capacitors; if only it were a requirement to also use them on the revised boards from the motherboard manufacturers... Wishful thinking, I guess, but with continued reporting like this from AnandTech on what components are used, eventually they will listen... (I hope, at least.) Again, thanks and great work! Hoping you will help ease the confusion on what chipset to go with for Conroe...
  • Griswold - Tuesday, June 27, 2006 - link

    For some reason, pictures of the mobo won't show in Opera (v9) for me. The benchmark charts are there, though. What gives? Anyone else experiencing this?

    Never had any kind of problem with Opera and AT before. :/
  • Per Hansson - Tuesday, June 27, 2006 - link

    Works fine in Opera 9 here. I think your issue might be that your browser is not set to enable referrer logging (under Advanced > Network).
  • Griswold - Tuesday, June 27, 2006 - link

    That was it. Not sure why that one was off, but it works now. Thanks a bunch!
  • Gary Key - Tuesday, June 27, 2006 - link

    I will load up Opera 9 and test it shortly.
  • Myrandex - Tuesday, June 27, 2006 - link

    That eSATA port looks a lot like the IEEE 1394b port; any relation? I heard there was a push once for eSATA to use FireWire cables, but I thought only one manufacturer was trying for that (maybe HighPoint?).
  • eskimoe - Tuesday, June 27, 2006 - link

    First off, thanks for one of the very few tests of the new nForce5 Intel edition! For a long time now I have built my PCs with AMD CPUs; next month will be the first time since the first P3s that I'll build another Intel PC! So at the moment I am not really sure which chipset to use. Of course it seems natural to use an Intel chipset for an Intel CPU, but the nForce chipsets have been very nice (at least for AMD), and there's more competition than in the Intel camp...

    The only thing that Intel has and NVIDIA doesn't is Intel Matrix Storage (btw, a single NVIDIA card shouldn't have any problems running on an Intel board or any disadvantage versus a single ATI card, should it?), which sounds very nice in my opinion. Therefore, I'd love to see some comparison in the RAID department between nForce5 and Intel 975/965, especially since I can't find any information on how RAID 5 performs on nForce5 and Intel chipsets. Until now, all onboard variants were very slow or used a lot of CPU (at least when writing). So I'd love to see some tests comparing RAID 0 performance/CPU utilization between the chipsets, as well as RAID 5 tests... and perhaps someone knows of some tests on Matrix RAID 5? The possibility of having 3x200GB drives, using for example 500GB as RAID 0 and 100GB as RAID 5, seems very promising, as long as the RAID 5 calculations are somewhat supported by chipset hardware, not only the CPU!

    Thanks a lot
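
As a quick sanity check of the Matrix RAID capacity split mentioned in the comment above, here is a minimal back-of-the-envelope sketch in Python. The function name and the drive/slice figures are illustrative assumptions for the 3x200GB scenario, not values from the article or any chipset documentation.

# Back-of-the-envelope usable-capacity math for a Matrix RAID-style
# split: each of n identical drives donates one slice to a RAID 0
# array and the remainder to a RAID 5 array on the same disks.

def matrix_raid_capacity(n_drives, drive_gb, raid0_slice_gb):
    """Return (raid0_usable_gb, raid5_usable_gb) for the split."""
    raid0_usable = n_drives * raid0_slice_gb      # striping: every slice counts
    raid5_slice = drive_gb - raid0_slice_gb       # what remains on each drive
    raid5_usable = (n_drives - 1) * raid5_slice   # RAID 5 loses one slice to parity
    return raid0_usable, raid5_usable

if __name__ == "__main__":
    # Three 200 GB drives with ~167 GB of each striped:
    r0, r5 = matrix_raid_capacity(3, 200, 167)
    print(f"RAID 0: {r0} GB, RAID 5: {r5} GB")    # RAID 0: 501 GB, RAID 5: 66 GB

By this arithmetic, a 500GB RAID 0 plus a 100GB RAID 5 slightly exceeds what three 200GB drives can hold; a 450GB/100GB split (150GB striped per drive, 50GB per drive left for the parity array) fits exactly.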
