Thanks to the team over at ServersPlus for arranging this Dell PowerEdge R710 demo unit for review.
People who know me will know that my background is predominantly with HP ProLiant server equipment, so I have a strong bias towards ProLiants based on my experience using them in everything from SMBs through to large multinational corporates.
So the chance to get my hands on one of Dell’s new PowerEdge R710 Nehalem CPU based servers was too good an opportunity to pass up. Now, it’s been a few years since I last had any hands-on time with a Dell server and, to be totally honest, I was never that impressed with their build quality (compared to that of HP ProLiants) or their mix n’ match approach to components and third-party utilities.
With this in mind I was ready to unbox, see if things had changed and to check out this new Intel Nehalem based server offering from Dell…
Below is a TechHead first. I thought it would be interesting to attempt a video-based hands-on review of the Dell PowerEdge R710. I used my Flip UltraHD and, combined with my basic knowledge of video editing on my MacBook, produced what you see below – let me know what you think. In the rest of this article I have provided a more traditional overview of the R710 with pictures and text.
Unpacking
As with most enterprise-level servers the packaging is more than adequate and is designed to withstand most things whilst in transit. The R710’s packaging was no exception.
Along with the physical server there is a box containing the following:
• Server fascia plate/cover
• Power cable (IEC)
• Dell OpenManage CD
• Install documentation
• Health & Safety information (that no one ever reads)
The Dell PowerEdge R710 is a 2U server with hot-pluggable hard disks, a CD/DVD reader, dual USB 2.0 ports, video out and a useful LCD screen which displays monitoring and alerting information and offers basic management.
Without fascia (NB: Red fridge not included…) 🙂
With fascia…
Below: Close-up of the power button, 2 x USB 2.0 ports, video out port, LCD management display and, on the far left, a retractable plastic card containing the useful service tag and the MAC addresses of the onboard network cards.
Processor(s)
The Dell PowerEdge R710 is powered by Intel’s new Nehalem-EP (Efficient Performance) processor. The Nehalem features quad-core processing to maximize performance and performance-per-watt for data center infrastructures and high-density server deployments, and is built on Intel’s 45 nm process technology.
A couple of features that make the Nehalem well suited to hosting virtualization are:
• Intel 64 Technology for Virtualization.
• Intel VT-x and VT-d Technology.
The Nehalem-EP also includes the new Intel Turbo Boost technology, and this combined with Hyper-Threading can give a noticeable increase in performance, when required, over the previous generation of dual- and quad-core Xeon processors.
The R710 has two CPU sockets, each housing one of these new Nehalems. The demo unit, as can be seen below, came with a single CPU and a CPU blank in the CPU2 socket for thermal reasons.
Memory
Either side of the CPU sockets are nine DDR3 DIMM sockets, which support either Registered ECC DDR3 DIMMs (RDIMMs) or Unbuffered ECC DDR3 DIMMs (UDIMMs). RDIMMs come in 2GB, 4GB or 8GB sizes and UDIMMs in 1GB or 2GB. The memory type (i.e. UDIMM or RDIMM) determines the maximum supported: up to 96GB of RDIMM memory (with twelve 8GB RDIMMs) or up to 24GB of UDIMM memory (with twelve 2GB UDIMMs).
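As a quick worked sketch of that arithmetic, here is the maximum-memory calculation using the module counts and sizes quoted above (the helper function and names are purely illustrative, not anything Dell ships):

```python
# Maximum memory by DIMM type, using the module counts and sizes quoted above.
MAX_CONFIG = {
    "RDIMM": {"modules": 12, "largest_module_gb": 8},  # twelve 8GB RDIMMs
    "UDIMM": {"modules": 12, "largest_module_gb": 2},  # twelve 2GB UDIMMs
}

def max_memory_gb(dimm_type: str) -> int:
    """Return the maximum supported memory for a given DIMM type."""
    cfg = MAX_CONFIG[dimm_type]
    return cfg["modules"] * cfg["largest_module_gb"]

print(max_memory_gb("RDIMM"))  # 96
print(max_memory_gb("UDIMM"))  # 24
```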
Hot-Pluggable Drives
This particular demo model had 6 x hot-pluggable 3.5” SAS 15K drives. The drives were easily removed and re-inserted again.
The latch mechanism on the hard disk has a good sturdy feel to it.
The R710 can come with a variety of 2.5” and 3.5” SAS or SATA hot-pluggable disk configurations. One advantage of 2.5” disks is that you can fit more of them into the chassis (8) than the 3.5” form factor (6), though as you’d expect each 2.5” disk offers a smaller maximum capacity (300GB) than its 3.5” equivalent (450GB). For mixing SAS and SATA drives, only one mixed 2.5" and 3.5" hard drive configuration is allowed:
• A pair of 2.5" 10k rpm SAS drives must be installed with an adapter in a 3.5" hard drive carrier in drive slots 0 and 1.
• The remaining hard drives must be 3.5" hard drives and must be either all SAS or all SATA.
Storage Controller
The server comes with a Dell PERC 6/i storage controller, which uses a dedicated PCI Express slot in riser 1 and can control all of the drives in the enclosure at the front of the server. The PERC 6/i uses the LSI 1078 ROC (RAID on Chip) processor and DDR2 memory. A battery-backed write cache is available as an option.
Feeling Baffled…
The internal airflow of the server is well channelled via a large baffle that sits over the top of the CPU(s) and memory. It is of decent construction – solid plastic, not the flimsy variety you sometimes find used in servers.
Below: The airflow baffle removed.
Internal Connectivity
Inside the front of the chassis there are a couple of ports of ‘port’icular interest. The first is an internal USB port – this can take a USB pen drive containing a hypervisor such as VMware ESX, Citrix XenServer or Hyper-V. The server can then boot from this pen drive, meaning all of the internal disk space can be allocated to VM storage, or there may be no internal disk storage installed at all, with the server accessing its VMs on shared storage (i.e. a SAN) over iSCSI or a Fibre Channel HBA.
The second port is to connect an internal Solid State Disk (SSD). This is an optional extra and the SSDs come in both 25GB and 50GB models. One point to note is that if you install an SSD it will have to be connected to the PERC 6/i Integrated storage controller and cannot be mixed with any other type of hard drive.
Cooling Fans
Five hot-swappable fans (the orange colour indicates that they are hot-pluggable) are mounted just behind the hard disk enclosure at the front of the server. There is an additional fan integrated into each power supply to cool the power supply subsystem and provide additional cooling for the whole system. Single-processor configurations (as per the demo unit) only have four fans populated.
The embedded server management logic built into the server monitors the speed of the fans. A fan failure or over-temperature condition results in a notification from the inbuilt systems management environment (iDRAC6 – Integrated Dell Remote Access Controller 6). Cooling remains redundant with a single fan failure.
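If you want to keep an eye on those fan and temperature sensors yourself, here is a minimal sketch of polling them remotely over IPMI. It assumes ipmitool is installed on your workstation, that IPMI-over-LAN has been enabled on the iDRAC6, and the address and credentials shown are placeholders rather than real values:

```python
# Minimal sketch: read the R710's fan and temperature sensors via the iDRAC6
# using IPMI-over-LAN. Requires ipmitool and IPMI-over-LAN enabled on the iDRAC.
import subprocess

IDRAC_HOST = "192.168.0.120"   # placeholder iDRAC6 address
IDRAC_USER = "root"            # placeholder credentials
IDRAC_PASS = "changeme"

def read_sensors(sensor_type: str) -> str:
    """Return the raw sensor data repository (SDR) listing for one sensor type."""
    cmd = [
        "ipmitool", "-I", "lanplus",
        "-H", IDRAC_HOST, "-U", IDRAC_USER, "-P", IDRAC_PASS,
        "sdr", "type", sensor_type,
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(read_sensors("Fan"))          # one line per fan, e.g. speed in RPM
    print(read_sensors("Temperature"))  # ambient/CPU temperature readings
```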
I found the fans very easy to remove, and they gave a reassuringly definite click when re-inserted.
The hot pluggable cooling fan removed:
Power Supplies
The base redundant system consists of two hot-plug 570W Energy Smart (energy-efficient) power supplies in a 1+1 configuration. An 870W high-output power supply is also available. The demo unit had a pair of 870W power supplies, but even when benchmarking it I was nowhere near needing all of that extra power.
As with the cooling fans, notice that the power supplies have an orange release tab indicating that they are hot-pluggable components. Hot-pluggable power supplies are fairly standard on most enterprise-level servers these days.
I found the power supplies easy to remove and re-insert.
Network
A pair of embedded Broadcom 5709C dual-port gigabit Ethernet chips provide four 1Gb ports on the rear of the server. These embedded gigabit Ethernet ports are TOE (TCP Offload Engine) and iSCSI enabled (via an optional hardware key). I found that the Broadcom 5709C NIC controllers were identified by both Windows Server 2008 and ESX and that the correct drivers were installed without any problems.
Plenty of network ports are the order of the day, with more and more companies now using servers such as the Dell R710 and the HP DL360 or DL380 as part of their virtualization platforms. Where two embedded network ports were standard in the previous generation of servers we are now seeing four as the norm – saving you from having to buy and install additional PCI Express NICs.
PCI Express Expansion
There are two PCI Express risers within the server (riser 1 and riser 2), each connected to the system board via a x16 PCI Express connector. Riser 1 provides two x4 slots plus a third x4 slot dedicated to internal SAS storage (the PERC 6/i), while riser 2 provides two x8 PCI Express slots. There is also an optional x16 riser 2 that can be purchased, which supports one x16 PCI Express card for when you have a card that needs the extra bandwidth.
The Rear End…
The back of the R710 has all the expansion ports you’d expect to see. There is a remote management port (the iDRAC6, Dell’s equivalent of HP’s iLO), a video out, a serial port, two USB 2.0 ports and the four 1Gb Broadcom NIC ports. Of course there are also the hot-pluggable power supplies. In all, nothing new or exciting to report here…
Virtualization
With its powerful CPUs, high memory capacity and expansion options the R710 is a good candidate for virtualization duties. As such it supports the main hypervisors currently on the market:
• Microsoft Windows Server® 2008 Hyper-V
• VMware ESXi Version 4.0 and 3.5 update 4
• Citrix XenServer 5.0 with Hotfix 1 or later
From the VMware ESX systems compatibility list below you can see it is supported by VMware from ESX 3.5 U4 onwards.
Of course I had to install VMware ESX(i) on it just to make sure and can report that all hardware components were successfully detected. Below are some screenshots taken from VMware vCenter Server:
Storage: The internal disk storage was detected, and once a datastore was created it was presented through as VMFS.
Storage Adapters:
Network Adapters: No problems here..
Hardware Monitoring: There was an excellent level of hardware monitoring presented through from the R710 to vCenter Server.
I took some storage performance metrics from a VM running on the R710 using IOMeter. I will post the results at a later stage.
Conclusion
I have been pleasantly impressed with the build quality and the obvious thought that has gone into the design and construction of this server. When running I found it to be one of the quieter servers I’ve used. Although noise is not a consideration when running a server in a data center, it can be an important factor when running it in a branch office or similar.
When reviewing the R710 I found myself comparing it to the HP ProLiant DL380, whose physical footprint and feature set make it an obvious choice for comparison. Although in my opinion the DL380 G5/G6 is hard to beat from an engineering perspective, the R710 is a real contender and offers a good honest alternative.
The warranty that comes as standard with the server is 1 year, next business day. If you were to use this as a business-critical server then you’d want to upgrade the warranty to something a little more responsive.
The R710, with its Intel Nehalem processor(s), its ability to house a large amount of physical memory (up to 96GB), adequate PCI Express expansion and four 1Gb LAN ports as standard, is well suited to being a virtualization host. Using VMware’s recommended 4:1 virtual CPU (vCPU) to physical CPU ratio this server could, in theory, host 32 VMs. In reality, depending on how resource intensive the VMs are and the amount of memory installed, you could be looking at fewer, or perhaps closer to 35-40 VMs. I have clients who comfortably run 25-30 VMs on a slightly lower-specification server with a maximum of 32GB of memory and 2.00GHz non-Nehalem Xeon CPUs.
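Here is the back-of-the-envelope version of that estimate. The 4:1 ratio is the planning figure mentioned above; the installed-memory and per-VM memory figures are illustrative assumptions of mine, not measurements from the demo unit:

```python
# Rough VM-capacity estimate for a dual-socket, quad-core Nehalem R710.
SOCKETS = 2
CORES_PER_SOCKET = 4          # quad-core Nehalem-EP (e.g. Xeon E5520)
VCPU_TO_PCPU_RATIO = 4        # the 4:1 planning ratio quoted above
VCPUS_PER_VM = 1

physical_cores = SOCKETS * CORES_PER_SOCKET
cpu_bound_vms = (physical_cores * VCPU_TO_PCPU_RATIO) // VCPUS_PER_VM

INSTALLED_MEMORY_GB = 96      # assumed full RDIMM configuration
MEMORY_PER_VM_GB = 2          # assumed average allocation per VM
memory_bound_vms = INSTALLED_MEMORY_GB // MEMORY_PER_VM_GB

# The realistic ceiling is whichever resource runs out first.
print(f"CPU-bound estimate:    {cpu_bound_vms} VMs")     # 32
print(f"Memory-bound estimate: {memory_bound_vms} VMs")  # 48
print(f"Practical ceiling:     {min(cpu_bound_vms, memory_bound_vms)} VMs")
```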
In all the Dell PowerEdge R710 gets a TechHead thumbs up! If you want to find out more check out the links below or head on over to the Servers Plus site.
Demo Unit Specification
Processor
Model : Intel(R) Xeon(R) CPU – Nehalem EP: E5520 @ 2.27GHz
Speed : 2.4GHz
Cores per Processor : 4 Unit(s)
Threads per Core : 2 Unit(s)
Type : Quad-Core
Integrated Data Cache : 4x 32kB, Synchronous, Write-Thru, 8-way, 64 byte line size, 2 threads sharing
L2 On-board Cache : 4x 256kB, ECC, Synchronous, ATC, 8-way, 64 byte line size, 2 threads sharing
L3 On-board Cache : 8MB, ECC, Synchronous, ATC, 16-way, Exclusive, 64 byte line size, 8 threads sharing
System
System : Dell Inc. PowerEdge R710
Mainboard : Dell Inc. 0N047H
Bus(es) : ISA X-Bus PCI PCIe IMB USB i2c/SMBus
Multi-Processor (MP) Support : No
Multi-Processor Advanced PIC (APIC) : Yes
NUMA Support : 2 Node(s)
BIOS : Dell Inc. 1.0.4
Total Memory : 4GB ECC DDR3 Scrubbing
Chipset
Model : Dell 5520 (Tylersburg-36D) I/O Hub
Front Side Bus Speed : 2x 3GHz (5.85GHz)
Chipset
Model : Intel Xeon CPU Generic Non-Core Registers
Front Side Bus Speed : 2x 3GHz (5.85GHz)
Total Memory : 4GB ECC DDR3 Scrubbing
Memory Bus Speed : 2x 532MHz (1GHz)
Video System
Adapter : Standard VGA Graphics Adapter (PCI)
Storage Devices
DELL PERC 6/i 146.2GB (RAID, NCQ)
TSSTcorp DVD-ROM TS-L333A (SATA150, DVD+-R, CD-R, 198kB Cache)
Peripherals
LPC Hub Controller 1 : Dell (ICH9) LPC Interface Controller
LPC Legacy Controller 1 : SMSC EMC2700LPC
Serial Port(s) : 2
Disk Controller : Dell PowerEdge R710 SATA IDE Controller
Disk Controller : Dell PERC 6/i Integrated RAID Controller
USB Controller 1 : Dell PowerEdge R710 USB UHCI Controller
USB Controller 2 : Dell PowerEdge R710 USB UHCI Controller
USB Controller 3 : Dell PowerEdge R710 USB EHCI Controller
USB Controller 4 : Dell PowerEdge R710 USB UHCI Controller
USB Controller 5 : Dell PowerEdge R710 USB UHCI Controller
USB Controller 6 : Dell PowerEdge R710 USB EHCI Controller
SMBus/i2c Controller 1 : IPMI T1 Controller
Si
Nice review, does that mean you will now be switching over to Dell hardware? Are they as quiet as the new Proliant G6 range?
Do you happen to know if the iDRAC6 management port allows you to view the server at the boot-up screen, the same as on the ProLiants?
Also any chance you could get your hands on a Dell PE T610 for review?
Hey Martin,
Thanks for the comment.
Lol, I am still very much pro-HP though would certainly consider this server as an alternative. 🙂
Noise-wise the R710 was quite quiet compared to your average server of a similar specification. I was tempted to get a sound level meter to measure the noise level of servers, though unfortunately they were a little too expensive. At a guess I would estimate that it’d be similar in noise levels when up and running – though this isn’t very scientific, admittedly.
Yes, from memory the iDRAC6 management port allowed me to see the boot-up screen during the reboot process, so it is effectively the same as the HP iLO equivalent.
Good idea on the Dell PE T610 – I’ll see what I can do and will keep you posted. Nice looking SMB servers – as with the ML110/ML115 it’d potentially make a nice home lab.
Cheers,
Si
Hi! Isn’t the max 144 GB (18×8)?
I am curious because I want a couple of these with single cpus, limiting me to 9 dimms (I’m thinking 36 GB, 9x4GB) – does this sound right?
thanks,
Wes
Hi, I just want to know whether this server is compatible with ESXi 3.5 and ESXi 4?
Hi Wahyu,
It certainly is – you can confirm this on the VMware System Compatibility list here: (something of a long URL) http://www.vmware.com/resources/compatibility/search.php?action=search&deviceCategory=server&productId=1&advancedORbasic=advanced&maxDisplayRows=50&key=R710&release%5B%5D=-1&datePosted=-1&partnerId%5B%5D=23&formFactorId%5B%5D=2&filterByEVC=0&filterByFT=0&min_sockets=&min_cores=&min_memory=&rorre=0
Hope this helps,
Simon
Hi.
If I’m going to utilize this as a file server, what’s the maximum disk space and the optimal setup for the hard drives?
Hi there
Very nice review. I was wondering if the internal SD Card port and USB port are optional, or do they come standard with the R710 servers?
It’s hard to tell what is optional and what is standard.
Thanks
Little bit of an error in the article-
“The second port is to connect an internal Solid State Disk (SSD). This is an optional extra and the SSDs come in both 25GB and 50GB models. One point to note is that if you install an SSD it will have to be connected to the PERC 6/i Integrated storage controller and cannot be mixed with any other type of hard drive.”
That connection is NOT for an SSD – it is for the optional SD module (which I opted for). To answer the question another person had – yes, it works fantastically with ESXi; I installed 4.1 on the 1GB SD module. It is NOT standard – it was a line-item upgrade at $49. Totally worth it – you can have a backup ESXi installation on a USB stick internally if you’re feeling overzealous. Also, spring for the X5650 processors – they are 6 cores each and support Hyper-Threading, so ESXi sees 24 logical processors! I love my R710 and it is only a matter of time before I get another 🙂
Hi Tom,
Thanks for the correction on the second port and mixing of drive types – good info, that will definitely benefit others. I will amend the original article.
All the best,
Simon
Quick question,
Can you tell me how the integrated hypervisor works? I want to install the hypervisor and then use a 6-disk RAID 10 for the VM storage. Does the hypervisor install directly to the server, or would it install to my RAID 10 disks? Thanks very much.
Hi,
This article is indeed very useful. I have some more queries regarding the Dell R710:
1. Power requirements
2. Cooling requirements
3. Power Sockets – type and numbers
4. Rack Space – for mounting server
5. Rack Space – for installing server
Regards,
Amit kumar
Hi Amit,
Take a look at Dell’s web page on the R710 for extra information, that should answer your questions: http://www.dell.com/uk/business/p/poweredge-r710/pd
Cheers,
Simon
Hi Simon,
Thanks for writing this article.
I have a question here: I upgraded my Dell R710 with a new processor and 3 x 8GB DIMMs. I fitted the processor and RAM, but after reinstalling the OS the system won’t boot. There is no error on the screen, but I get “unsupported configuration: DIMM configuration on each CPU should match”. On CPU 1 the RAM configuration is 4-2-2, 4-2-2, 4-2-2, while on CPU 2 it is 8–, 8–, 8–. The total memory is 48GB and the memory scan shows 48GB, but regarding the DIMM configuration I still have no clue.
A nice review; I have one R710. The only doubt I have is: I already have 6 hard drives in the front.
Can I put in more hard drives, expanding with some PCI card, or an extension to a special storage enclosure for 10 drives? I do not want NAS storage because they use a Linux OS.
Thanks in advance,
IT enthusiast.
Juan
Hi Juan,
Glad you liked the review. 🙂
There is apparently an optional ‘flex bay’ that you can add to the R710 that allows you to add an extra two drives within the server chassis. See here for details: http://www.dell.com/us/business/p/poweredge-r710/pd?~ck=anav#TechSpec
As for taking it up to 10 drives, to my knowledge, you’d be looking at attaching some external storage. You could perhaps look at using a PCIe eSATA card and then attaching disks that way, or better still the Dell PowerVault DAS devices would be a potential option (though at a cost).
To be honest I have never tried either of these options, so I’d definitely look into it further before spending any money. 🙂
Hope this helps.
Simon