People who know me will know that, with a background predominantly in HP Proliant based server equipment, I have a strong bias towards Proliants based on my experiences using them in everything from SMBs through to large multinational corporates.
So the chance to get my hands on one of Dell’s new PowerEdge R710 Nehalem CPU based servers was too good an opportunity to pass up. Now, it’s been a few years since I last had any hands-on time with a Dell server and, to be totally honest, I was never that impressed with their build quality (compared to that of HP Proliants) or their mix n’ match approach to the use of components and 3rd party utilities.
With this in mind I was ready to unbox, see if things had changed and to check out this new Intel Nehalem based server offering from Dell…
Below is a TechHead first. I thought it would be interesting to attempt a video based hands-on review of the Dell PowerEdge R710. I used my Flip UltraHD and, combined with my basic knowledge of video editing on my MacBook, produced what you see below – let me know what you think. In the rest of this article I have provided a more traditional overview of the R710 with pictures and text.
As with most enterprise level servers the packaging is more than adequate and is designed to withstand most things whilst in transit. The R710’s packaging was no exception.
Along with the physical server there is a box containing the following:
• Server facia plate/cover
• Power cable (IEC)
• Dell OpenManage CD
• Install documentation
• Health & Safety information (that no one ever reads)
The Dell PowerEdge R710 is a 2U server with hot-pluggable hard disks, a CD/DVD reader, dual USB 2.0 ports, video out and a useful LCD screen which displays monitoring, alerting and offers basic management.
Without facia (NB: Red Fridge not included…) 🙂
Below: Close up of the power button, 2 x USB 2.0 ports, video out port, LCD management display and, on the far left, a retractable plastic card containing the useful service tag and MAC addresses of the onboard network cards.
The Dell PowerEdge R710 is powered by the new Intel Nehalem-EP processor. The Nehalem features quad-core processing to maximize performance and performance-per-watt for data center infrastructures and high-density server deployments. The Nehalem-EP processor is built on Intel’s 45nm Core microarchitecture.
A couple of features make the Nehalem well suited to hosting virtualization. The Nehalem-EP includes the new Intel Turbo Boost technology which, combined with Hyper-Threading, can give a noticeable increase in performance when required over the previous generation of dual- and quad-core Xeon processors.
The R710 has two CPU sockets each to house one of these new Nehalems. The demo unit, as can be seen below, came with a single CPU and a CPU blank in the CPU2 socket for thermal reasons.
Either side of the CPU sockets are nine DDR3 DIMM sockets, supporting either Registered ECC DDR3 DIMMs (RDIMMs) or Unbuffered ECC DDR3 DIMMs (UDIMMs). RDIMMs come in 2GB, 4GB or 8GB sizes, and UDIMMs in 1GB or 2GB. The memory type determines the maximum memory supported: up to 96GB of RDIMM memory (with twelve 8GB RDIMMs) or up to 24GB of UDIMM memory (with twelve 2GB UDIMMs).
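As a quick sanity check on those figures, here is a minimal sketch of the memory maths – the DIMM counts and module sizes are the ones quoted above, not a full Dell configuration matrix:

```python
# Maximum memory by DIMM type, using the module sizes quoted above.
MAX_DIMMS = 12  # maximum populated slots in the quoted configurations

dimm_sizes_gb = {
    "RDIMM": [2, 4, 8],  # registered ECC DDR3 module sizes
    "UDIMM": [1, 2],     # unbuffered ECC DDR3 module sizes
}

for dimm_type, sizes in dimm_sizes_gb.items():
    largest = max(sizes)
    print(f"{dimm_type}: {MAX_DIMMS} x {largest}GB = {MAX_DIMMS * largest}GB max")
# RDIMM: 12 x 8GB = 96GB max
# UDIMM: 12 x 2GB = 24GB max
```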
This particular demo model had 6 x hot-pluggable 3.5” SAS 15K drives. The drives were easily removed and re-inserted again.
The latch mechanism on the hard disk has a good sturdy feel to it.
The R710 can come with a variety of 2.5" and 3.5" SAS or SATA hot-pluggable disk configurations. One advantage of the 2.5" disks is that you can fit more of them into the chassis (eight, as opposed to six of the 3.5" form factor). As you would expect, though, the 2.5" disks each offer a smaller maximum capacity (300GB) than their 3.5" equivalents (450GB). For SAS/SATA drive mixing, one mixed 2.5" and 3.5" hard drive configuration is allowed:
• A pair of 2.5" 10k rpm SAS drives must be installed with an adapter in a 3.5" hard drive carrier in drive slots 0 and 1.
• The remaining hard drives must be 3.5" hard drives and must be either all SAS or all SATA.
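For what it’s worth, the raw capacity trade-off between the two chassis options described above works out as follows – a quick illustrative calculation, ignoring RAID overhead:

```python
# Raw capacity of the two drive bay options mentioned above.
configs = {
    '2.5" SAS': (8, 300),  # (max drives, max GB per drive)
    '3.5" SAS': (6, 450),
}

for name, (drives, gb_each) in configs.items():
    print(f'{name}: {drives} x {gb_each}GB = {drives * gb_each}GB raw')
# 2.5" SAS: 8 x 300GB = 2400GB raw
# 3.5" SAS: 6 x 450GB = 2700GB raw
```

So despite the smaller per-disk capacity, the eight-bay 2.5" option comes in only slightly behind the six-bay 3.5" option on raw capacity, while offering more spindles.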
The server comes with a Dell PERC 6/i storage controller which uses a dedicated PCI Express slot in riser 1 and can be used to control all drives in the enclosure at the front of the server. The PERC 6/i uses the LSI 1078 ROC (RAID on Chip) processor and DDR2 memory. A battery-backed write cache is available as an option.
The internal airflow of the server is well channelled via a large baffle that sits over the top of the CPU(s) and memory. It is of decent construction – solid plastic, not the flimsy variety you sometimes find used in servers.
Below: The airflow baffle removed.
Inside the front of the chassis there are a couple of ports of ‘port’icular interest. The first is an internal USB port – this can take a USB pen drive containing a hypervisor such as VMware ESXi, Citrix XenServer or Hyper-V. The server can boot from this pen drive, meaning that all the internal disk space can be allocated to VM storage – or there may be no internal disk storage installed at all, with the server accessing its VMs on shared storage (ie: a SAN) over iSCSI or a fibre channel HBA.
The second port is to connect an internal Solid State Disk (SSD). This is an optional extra and the SSDs come in both 25GB and 50GB models. One point to note is that if you install an SSD it will have to be connected to the PERC 6/i Integrated storage controller and cannot be mixed with any other type of hard drive.
Five hot-swappable fans (orange colour indicates that they are hot pluggable) are mounted just behind the hard disk enclosure at the front of the server. There is an additional fan integrated in each power supply to cool the power supply subsystem and also provide additional cooling for the whole system. Single processor configurations (as per the demo unit) will only have four fans populated.
The Embedded Server Management logic built into the server monitors the speed of the fans. A fan failure or over-temperature in the system results in a notification by the inbuilt systems management environment (iDRAC6 – Integrated Dell Remote Access Controller6). Redundant cooling is supported with one fan failing at a time.
I found the fans very easy to remove, and they gave a reassuringly definite click when re-inserted.
The hot pluggable cooling fan removed:
The base redundant system consists of two hot-plug 570W Energy Smart (energy efficient) power supplies in a 1+1 configuration. An 870W high-output power supply is also available. The demo unit had a pair of 870W power supplies, but even when benchmarking it I was nowhere near requiring all this extra power.
As with the cooling fans notice that the power supplies have an orange release tab indicating that they are hot-pluggable components. As you know hot-pluggable power supplies are fairly standard on most enterprise level servers these days.
I found the power supplies easy to remove and re-insert.
There is a pair of Broadcom 5709C dual-port embedded Gigabit Ethernet chips providing four 1Gb ports on the rear of the server. These embedded Gigabit Ethernet ports are TOE (TCP Offload Engine) and iSCSI enabled (via an optional hardware key). I found that the Broadcom 5709C NIC controllers were identified by both Windows Server 2008 and ESX, and the correct drivers were installed without any problems.
Sufficient network ports are the order of the day with more and more companies now using servers such as the Dell R710 and the HP DL360 or DL380 as part of their virtualization platforms. Where two embedded network ports were standard with the previous generation of servers, we are now seeing four as the norm, which saves having to buy and install additional PCI Express based NICs.
PCI Express Expansion
There are two PCI Express risers within the server (Riser 1 and Riser 2), each connected to the system board via a x16 PCI Express connector. Riser 1 provides two x4 slots plus a third x4 slot dedicated to internal SAS storage (the PERC 6/i), while Riser 2 provides two x8 PCI Express slots. An optional x16 Riser 2 can also be purchased, supporting one x16 PCI Express card for when you have a card that needs that extra bandwidth.
The Rear End…
The back of the R710 has all the expansion ports you’d expect to see: a remote management port (for the iDRAC6, Dell’s equivalent of HP’s iLO), a video out, a serial port, two USB 2.0 ports and the four 1Gb Broadcom NIC ports. Of course there are also the hot-pluggable power supplies. In all, nothing new and exciting to report here…
With its powerful CPUs, high memory capacity and good expansion options, the R710 is a strong candidate for virtualization purposes. As such it supports the following main hypervisors currently on the market:
• Microsoft Windows Server® 2008 Hyper-V
• VMware ESXi Version 4.0 and 3.5 update 4
• Citrix XenServer 5.0 with Hotfix 1 or later
From the VMware ESX systems compatibility list below you can see it is supported by VMware from ESX 3.5 U4 onwards.
Of course I had to install VMware ESX(i) on it just to make sure and can report that all hardware components were successfully detected. Below are some screenshots taken from VMware vCenter Server:
Storage: The internal disk storage was detected and, once a datastore was created, was presented as VMFS.
Network Adapters: No problems here..
Hardware Monitoring: There was an excellent level of hardware monitoring presented through from the R710 to vCenter Server.
I took some storage performance metrics from a VM running on the R710 using IOMeter. I will post the results at a later stage.
I have been pleasantly impressed with the build quality and the obvious thought that has gone into the design and construction of this server. When running, I found it to be one of the quieter servers I’ve used. Although noise is not a consideration when running a server in a data center, it can be an important factor when running it in a branch office or similar.
When reviewing the R710 I have found myself comparing it to the HP Proliant DL380 whose physical footprint and feature-set make it an obvious choice for comparison. Although in my opinion the DL380 G5/G6 is hard to beat from an engineering perspective the R710 is a real contender and offers a good honest alternative.
The warranty that comes as standard on the server is one year, next business day. If you were to use this as a business-critical server then you’d want to upgrade this warranty to something a little more responsive.
The R710, with its Intel Nehalem processor(s), its ability to house a large amount of physical memory (up to 96GB), adequate PCI Express expansion and four 1Gb LAN ports as standard, is well suited to being a virtualization host. Using VMware’s recommended 4:1 virtual CPU (vCPU) to physical core ratio, this server could (in theory) host 32 single-vCPU VMs. In reality, depending on how resource-intensive the VMs are and the amount of memory installed, you could be looking at fewer – or, with light workloads, closer to 35-40 VMs. I have clients who comfortably run 25-30 VMs on a slightly lower specification server with a maximum of 32GB memory and 2.00GHz non-Nehalem Xeon CPUs.
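The consolidation arithmetic above can be sketched as follows – single-vCPU VMs assumed, and bear in mind that in practice memory is usually the limiting factor rather than CPU:

```python
# Theoretical VM count from the 4:1 vCPU-to-core guideline quoted above.
sockets = 2            # CPU sockets in the R710
cores_per_socket = 4   # quad-core Nehalem
vcpus_per_core = 4     # VMware's commonly cited ratio at the time

physical_cores = sockets * cores_per_socket
theoretical_vms = physical_cores * vcpus_per_core
print(f"{theoretical_vms} single-vCPU VMs in theory")
# 32 single-vCPU VMs in theory
```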
In all the Dell PowerEdge R710 gets a TechHead thumbs up! If you want to find out more check out the links below or head on over to the Servers Plus site.
Demo Unit Specification
Model : Intel(R) Xeon(R) CPU – Nehalem EP: E5520 @ 2.27GHz
Speed : 2.4GHz
Cores per Processor : 4 Unit(s)
Threads per Core : 2 Unit(s)
Type : Quad-Core
Integrated Data Cache : 4x 32kB, Synchronous, Write-Thru, 8-way, 64 byte line size, 2 threads sharing
L2 On-board Cache : 4x 256kB, ECC, Synchronous, ATC, 8-way, 64 byte line size, 2 threads sharing
L3 On-board Cache : 8MB, ECC, Synchronous, ATC, 16-way, Exclusive, 64 byte line size, 8 threads sharing
System : Dell Inc. PowerEdge R710
Mainboard : Dell Inc. 0N047H
Bus(es) : ISA X-Bus PCI PCIe IMB USB i2c/SMBus
Multi-Processor (MP) Support : No
Multi-Processor Advanced PIC (APIC) : Yes
NUMA Support : 2 Node(s)
BIOS : Dell Inc. 1.0.4
Total Memory : 4GB ECC DDR3 Scrubbing
Model : Dell 5520 (Tylersburg-36D) I/O Hub
Front Side Bus Speed : 2x 3GHz (5.85GHz)
Model : Intel Xeon CPU Generic Non-Core Registers
Front Side Bus Speed : 2x 3GHz (5.85GHz)
Total Memory : 4GB ECC DDR3 Scrubbing
Memory Bus Speed : 2x 532MHz (1GHz)
Adapter : Standard VGA Graphics Adapter (PCI)
DELL PERC 6/i 146.2GB (RAID, NCQ)
TSSTcorp DVD-ROM TS-L333A (SATA150, DVD+-R, CD-R, 198kB Cache)
LPC Hub Controller 1 : Dell (ICH9) LPC Interface Controller
LPC Legacy Controller 1 : SMSC EMC2700LPC
Serial Port(s) : 2
Disk Controller : Dell PowerEdge R710 SATA IDE Controller
Disk Controller : Dell PERC 6/i Integrated RAID Controller
USB Controller 1 : Dell PowerEdge R710 USB UHCI Controller
USB Controller 2 : Dell PowerEdge R710 USB UHCI Controller
USB Controller 3 : Dell PowerEdge R710 USB EHCI Controller
USB Controller 4 : Dell PowerEdge R710 USB UHCI Controller
USB Controller 5 : Dell PowerEdge R710 USB UHCI Controller
USB Controller 6 : Dell PowerEdge R710 USB EHCI Controller
SMBus/i2c Controller 1 : IPMI T1 Controller