Introduction and Evaluation Methodology

NAS units with four bays present the best balance between cost and expandability for home consumers. However, with increasing hard drive sizes, two bays suffice for many usage scenarios. Demand for high-performance, yet cost-effective NAS units has been picking up, and this is where modern ARM-based platforms come into play. Netgear's ReadyNAS 200 series was launched at the 2015 CES and sports an ARM Cortex-A15-based Annapurna Labs SoC. Netgear's offering differentiates itself from the competition through its use of btrfs as the file system for the data volume. We have already looked at the Intel Atom-based RN312. In this review, we will take a look at how the ARM-based RN202 performs with the ReadyNAS OS, and see how the unit stacks up against the competition in this space.

The specifications of the Netgear RN202 are provided in the table below.

Netgear RN202 Specifications
Processor Annapurna Labs SoC (2C/2T ARM Cortex A15 @ 1.4 GHz)
Drive Bays 2x 3.5"/2.5" SATA II / III HDD / SSD (Hot-Swappable)
Network Links 2x 1 GbE
External I/O Peripherals 3x USB 3.0, 1x eSATA
Expansion Slots N/A
VGA / Display Out N/A
Full Specifications Link Netgear RN202 Specifications
Price USD 282

The various specifications of the NAS are backed up by the data gleaned via SSH access to the unit.
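For the curious, a quick look over SSH (which can be enabled in the admin UI) is enough to confirm the platform details. The commands below are a sketch assuming the standard Linux procfs/sysfs interfaces are exposed by ReadyNAS OS; the IP address is a placeholder.

```shell
ssh root@192.168.1.10      # placeholder IP; the admin password is used

cat /proc/cpuinfo          # should report two Cortex-A15 cores
free -m                    # installed RAM
cat /proc/mdstat           # md RAID member status
btrfs filesystem show      # data volume layout
```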

The ReadyNAS 200 series also includes a 4-bay variant. The comparison between the two units (including the hardware specifications) is reproduced from Netgear's marketing material below.

The industrial design of the unit is the same as that of the ReadyNAS RN312. In terms of user experience with the hardware, the tool-less drive caddies are among the best designs we have seen across samples from most of the players in the industry. They are a bit non-intuitive for first-time users, but they get the job done while also providing some vibration dampening for 3.5" drives. 2.5" drives still need screws to be secured in the same caddy.

The setup process is quite straightforward. Upon connection to the network, the RN202 receives a DHCP address even in a diskless state. The IP address can be determined either from the DHCP server on the network or via Netgear's RAIDar utility. Accessing the IP address with the default 'admin'/'password' login enables the setup process shown in the gallery below. We started off with one disk in the unit, and it was configured as a JBOD volume with X-RAID by default. Support exists for manually defragmenting, scrubbing and balancing the btrfs volume. Hot-swapping of drives is possible, and adding a new drive or replacing a failed drive with X-RAID enabled automatically triggers expansion / rebuild.
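Under the hood, these maintenance tasks map onto standard btrfs operations. A sketch of the equivalent command-line invocations, assuming SSH access and that the data volume is mounted at /data (the mount point is illustrative):

```shell
btrfs filesystem defragment -r /data   # recursive defragment of the volume
btrfs scrub start /data                # verify checksums, repair from the mirror
btrfs scrub status /data               # monitor scrub progress
btrfs balance start /data              # redistribute / rewrite block groups
```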

Similar to almost all other NAS units in the market, there is support for viewing the S.M.A.R.T. attributes of the member disks. The settings section allows a choice of various services to enable (SMB, AFP, NFS, FTP, SSH, ReadyDLNA, UPnP etc.). The user experience is a bit inconsistent here in terms of the interface. While clicking on the service buttons in this section toggles the inner rectangle between green (enabled) and gray (disabled), there are other sections where similar buttons can't be clicked to toggle status. The settings page also allows configuration of other aspects such as update management, backing up settings and alerts.

The logs section records the various NAS activities with timestamps and the power section enables power scheduling, disk spin-down configuration, Wake-on-LAN settings and UPS configuration.

Creating new shares allows us to configure bit-rot protection (disabled by default, results in a performance hit on ARM-based systems), compression (since btrfs provides native compression capabilities), snapshot scheduling and protocols with which it can be accessed. The web UI also features a built-in file browser for the NAS contents and includes a timeline view (based on the snapshots).
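Both features build on native btrfs functionality. As a rough sketch of what the web UI is doing (the share name, paths and date are hypothetical):

```shell
# Enable transparent compression on the volume (lzo is the lightweight option)
mount -o remount,compress=lzo /data

# Take a read-only snapshot of a share; the timestamped path is illustrative
btrfs subvolume snapshot -r /data/photos /data/photos/.snaps/2015-09-25

# List subvolumes / snapshots on the volume
btrfs subvolume list /data
```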

The network settings allow the interfaces to be bonded. It is possible to set up 802.3ad LACP (amongst other bonding modes). There are a number of third-party apps available (though the selection is nowhere close to what Synology and QNAP have). Some cloud management features (remote access via VPN, replication over the Internet etc.) are also available.
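For reference, this is roughly what an 802.3ad bond looks like when set up by hand on a Linux box using iproute2; the interface names and address are illustrative, and the switch ports must also be configured for LACP:

```shell
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down; ip link set eth0 master bond0
ip link set eth1 down; ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.1.50/24 dev bond0   # illustrative address
```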

In the rest of the review, we will take a look at the single client performance for SMB and iSCSI, followed by a look at what enabling encryption entails. We will have three sections dealing with multi-client scenarios across a number of different client platforms as well as access protocols. Prior to all that, we will take a look at our testbed setup and testing methodology.

Testbed Setup and Testing Methodology

The Netgear RN202 can take up to 2 drives. Users can opt for JBOD, RAID 0 or RAID 1 configurations. We expect typical usage to be with a single RAID 1 volume. To keep things consistent across different NAS units, we benchmarked a single RAID 1 volume. Two Western Digital WD4000FYYZ RE drives were used as the test disks. Our testbed configuration is outlined below.

AnandTech NAS Testbed Configuration
Motherboard Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB
CPU 2 x Intel Xeon E5-2630L
Coolers 2 x Dynatron R17
Memory G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8x8GB) CAS 10-10-10-30
OS Drive OCZ Technology Vertex 4 128GB
Secondary Drive OCZ Technology Vertex 4 128GB
Tertiary Drive OCZ Z-Drive R4 CM88 (1.6TB PCIe SSD)
Other Drives 12 x OCZ Technology Vertex 4 64GB (Offline in the Host OS)
Network Cards 6 x Intel ESA I-340 Quad-GbE Port Network Adapter
Chassis SilverStoneTek Raven RV03
PSU SilverStoneTek Strider Plus Gold Evolution 850W
OS Windows Server 2008 R2
Network Switch Netgear ProSafe GSM7352S-200

The above testbed can run up to 25 Windows 7 or CentOS VMs simultaneously, each with a dedicated 1 Gbps network interface. This simulates a real-life workload of up to 25 clients for the NAS being evaluated. All the VMs connect to the network switch to which the NAS is also connected (with link aggregation, as applicable). The VMs generate the NAS traffic for performance evaluation. However, keeping in mind the nature of this unit, we restricted ourselves to a maximum of 10 simultaneous clients.
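As a back-of-the-envelope check on what a dedicated 1 Gbps interface can actually deliver to each client, the sketch below estimates the usable TCP payload rate after Ethernet framing and IP/TCP header overhead (standard 1500-byte MTU, no jumbo frames). SMB adds further protocol overhead on top of this, which is why real-world transfers top out slightly lower.

```python
# Usable TCP payload ceiling on a 1 GbE link with a 1500-byte MTU.
LINE_RATE = 10**9 / 8            # 1 Gbps expressed in bytes/second
MTU = 1500                       # IP packet size
IP_TCP_HEADERS = 20 + 20         # IPv4 + TCP headers (no options)
FRAMING = 14 + 4 + 8 + 12        # Ethernet header + FCS + preamble + inter-frame gap

payload = MTU - IP_TCP_HEADERS   # 1460 bytes of payload per frame
on_wire = MTU + FRAMING          # 1538 bytes consumed on the wire per frame
ceiling = LINE_RATE * payload / on_wire
print(f"{ceiling / 1e6:.1f} MB/s")   # ~118.7 MB/s before SMB overhead
```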

Thank You!

We thank the following companies for helping us out with our NAS testbed:

Comments

  • Duncan Macdonald - Friday, September 25, 2015 - link

    Any NAS system that is limited to GbE or lower speed will give poor performance compared to even budget SSDs. (A GbE link can transfer about 100MB/sec after allowing for overheads - even low performance SSDs can do much better.) To beat locally mounted SSDs requires 10GbE or faster links. NAS systems are only useful for sharing files (slowly) to multiple computers or providing a backup far enough away to be unlikely to be affected by a common disaster (eg a house fire).
    As for NAS systems with 100Mb/sec links - AVOID (A USB 2.0 stick can be faster!!!)
  • BillyONeal - Friday, September 25, 2015 - link

    But most of the NASes here are well below saturating GigE. A USB 2.0 stick can be faster in extremely limited scenarios but in most cases USB protocol overhead per transfer will make it worse for these kinds of workloads.
  • Metaluna - Friday, September 25, 2015 - link

    Where in the article did anyone suggest using a NAS as a performance alternative to locally attached SSDs? And as for NAS only being useful for sharing files to multiple computers, yeah, that's kind of the whole point for why local area networks and file servers were developed in the first place. That's like saying "A GPU is really only useful for displaying images on your screen"
  • colinstu - Friday, September 25, 2015 - link

    don't know what 'overheads' you're talking about but my Synology NAS and gb network regularly transfer at 115MB/s (114-116). Still not the max theoretical of 125MB/s, but closer to the max than '100'
  • azazel1024 - Saturday, September 26, 2015 - link

    No, max theoretical is not 125MB/sec. That is raw data rate, but you can't actually transfer 125MB/sec of usable data over a 1GbE link. SMB max rate is about 117.5MB/sec using 9k jumbo frames and about 115MB/sec using standard 1500MTU. That is covering TCP/IP overhead as well as SMB overhead. Smaller files will reduce the max by a bit no matter how fast the host and server are because of additional SMB overhead involved in "opening" and "closing" each file transfer.

    NAS are just fine, at least newer moderately fast ones. But, I do have to say, if running windows based clients...a windows based server, if you can't/don't want to move to 10GbE can be significantly higher performing than a NAS, even in "undemanding" file transfers. My G1610 based server manages 235MB/sec between it and my desktop, both running Windows 8.1. Dual GbE NICs combined with SMB Multichannel is a beautiful thing.
  • UtilityMax - Sunday, September 27, 2015 - link

    NAS storage is slower than a directly attached storage! Shocking stuff! News at 11.

    GiE is is actually pretty acceptable for most applications, except a few specialist tasks. 10GbE can still be pretty expensive and power hungry.
  • UtilityMax - Sunday, September 27, 2015 - link

    Sorry mean 10GbE instead of GiE
  • Wixman666 - Sunday, September 27, 2015 - link

    So you decided that comparing apples and walruses is ok? A SSD and a 2 bay NAS have nothing in common for function, capacity, or price. Troll on, dude.
  • johnny_boy - Thursday, October 1, 2015 - link

    Any SSD system that is limited to SATA or even PCIE will give poor performance compared to even budget RAM disks. (A SATA link can transfer about blah MB/sec after allowing for overheads - even low performance RAM disks can do much better.) To beat locally mounted RAM disks requires bleek GbE or faster links. SSDs are only useful for reading and writing data.
    As for SSDs with blomps Mb/sec links - AVOID (A USB 3.0 stick can be faster!!!)
  • Wardrop - Friday, September 25, 2015 - link

    Do the btrfs snapshots show up in Windows under the "Previous versions" tab?
