Here’s another great article from regular TechHead guest contributor, James Pearce. James is a Kent-based qualified accountant, currently working in information security and technical architecture, with most of his time “being spent on virtualisation and business continuity at the moment”.
EqualLogic’s PS4000 iSCSI storage arrays are targeted at the SME sector, particularly for VMware virtualisation, as well as “branch office” use for larger companies. The XV model is reviewed, with 15k SAS drives, dual controllers and dual power supplies.
The PS4000 is quite a complicated product, so I’ve split this into four sections:
Part 1 – An Introduction to the PS4000
Part 2 – EqualLogic Networking with Force10
Part 3 – System Management and Monitoring
Part 4 – Performance
There’s a lot to cover, so here goes!
Dell have put an introductory promotional video summarising the product on YouTube here. There are spades of the usual sales spin, but it does cover the gist of the product quite nicely.
The XV model has EqualLogic-branded Seagate Cheetah 15K.6 drives, boasting a 0.55% annual failure rate, 3.4ms seek, 15,000 rpm and 164MB/s sustained sequential throughput. Super, especially when there’s 16 of them.
The Cheetahs seem quite juicy at 16W apiece, but in performance terms they can easily do the work of two 8W SATA drives – and then some. In use the results don’t disappoint either, but we’ll come to that in Part 4.
These arrays work in pools spanning up to six units, managed together as a single group with a single virtual IP address. Each unit is shipped fully populated with 16 matching drives and can be configured only with a single RAID type across the lot – 10, 50, 5 or 6.
With two or more units operating in a group, different drive types and RAID levels can be mixed to get a balance of high-speed and high-capacity storage. The system itself determines which unit houses which LUNs based on usage patterns, and this can change over time (a ‘preferred’ RAID level can be specified for each LUN too).
Since each shelf has its own processors, performance scales out as more arrays are added. Of course the trade-off is that additional PS arrays are considerably more expensive than ‘dumb’ disk shelves.
Two identical controllers are fitted in an active-passive configuration, each with battery-backed mirrored write cache and two GbE iSCSI ports, plus a separate 100Mbps management port. Firmware is on an internal flash drive and so doesn’t use any disk space, although EqualLogic recommend keeping 100GB free on the disks for logs and other system use. The active-standby configuration and painless controller failover means that firmware can be updated online without interrupting workloads.
The two hefty PSUs operate in an active-active configuration, pulling about 200W each (400W total). This doesn’t change much with usage patterns since the controllers continually perform background patrol reads in idle periods.
The efficiency of the PSUs isn’t published, but the figures suggest they must be well into the 80 PLUS bracket, since the disks alone will be drawing around 250W in this unit.
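As a sanity check on that claim, here’s a rough back-of-the-envelope sketch. The controller and fan draw is my own assumption – only the disk and wall figures appear above:

```python
# Back-of-the-envelope PSU efficiency floor from the figures above.
# The 70W controller/fan allowance is an assumption - Dell doesn't
# publish a DC power budget for the PS4000.
disk_dc_watts = 16 * 16        # sixteen drives at ~16W apiece
controller_dc_watts = 70       # assumed draw for controllers and fans
wall_watts = 400               # measured draw across both PSUs

efficiency = (disk_dc_watts + controller_dc_watts) / wall_watts
print(f"Implied efficiency floor: {efficiency:.2f}")  # low 80s as a percentage
```

Even with a fairly conservative allowance for the controllers, the implied efficiency lands in the low 80s – consistent with the claim above.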
Fan noise is high through the boot cycle or if operating on one PSU, but at normal room temperature and with both PSUs powered the unit is quite quiet. A demonstration SSD-based model ran its fans at full speed continually, but this has apparently been fixed in a later firmware release.
Fault tolerance is demonstrated as part of the handover by pulling cables, disks and a controller and ensuring attached systems continue without issue. Pulling a controller does interrupt IO for a few seconds, but is quick enough not to cause a Windows VM running IOMeter to report any problem.
The chassis itself has clearly been designed for easy maintenance – controllers and power supplies pull out the back, and disks pull out the front. My only criticism is the service tag – right at the bottom on the back and in what must be 6-point text. Definitely one to note down somewhere before it’s racked.
Should the need arise, EqualLogic support is primarily through email. But getting support is easy and effective, with none of the usual messing about with “1st line” staff – it’s straight through to someone who can actually help and who knows the product inside out.
The EqualLogic configuration wizard reserves hot-spares automatically depending on the RAID level set – two for RAID 10 and 50, and one for RAID 5 and 6. A ‘no-spares’ configuration is also possible, but only from the telnet interface.
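The wizard’s spare reservations can be summarised in a few lines (this simply restates the counts given above):

```python
# Hot-spare counts reserved by the EqualLogic configuration wizard,
# per RAID level (a no-spares layout needs the telnet interface).
HOT_SPARES = {"RAID 10": 2, "RAID 50": 2, "RAID 5": 1, "RAID 6": 1}

TOTAL_DRIVES = 16  # each PS4000 ships fully populated

for level, spares in HOT_SPARES.items():
    usable = TOTAL_DRIVES - spares
    print(f"{level}: {usable} drives in the set, {spares} hot-spare(s)")
```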
A big issue with multi-terabyte parity-based RAID volumes is the ability to rebuild them after a drive failure, since drive capacities continue to rocket without corresponding improvements in the quoted unrecoverable error rates. To illustrate this, good 2TB SATA drives have quoted error rates equivalent to only about a hundred times their capacity: read the whole drive a hundred times over and, on average, you’ll hit one unrecoverable read error. In a RAID volume with 14 disks, where a rebuild must read the 13 surviving drives end to end, the statistics suddenly look daunting – around a one-in-ten chance of data loss during a rebuild.
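The one-in-ten figure can be sanity-checked with a quick Poisson approximation, using the article’s own figure of roughly one unrecoverable error per hundred full passes:

```python
import math

# Article's figure: a good 2TB SATA drive suffers roughly one
# unrecoverable read error per ~100 complete passes of its capacity.
passes_per_error = 100

# Rebuilding one failed drive in a 14-disk parity set means reading
# the 13 surviving drives end to end - 13 full "passes" worth of data.
surviving_drives = 13

expected_errors = surviving_drives / passes_per_error  # 0.13
# Poisson approximation: P(at least one unrecoverable error in rebuild)
p_failure = 1 - math.exp(-expected_errors)

print(f"P(error during rebuild) = {p_failure:.2f}")  # about 0.12
```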
To address this and to limit the performance impact of drive failure, the EqualLogic firmware continually patrols the volumes and attempts a straight disk-to-disk clone to a hot-spare if a disk is flagged as likely to fail soon. In this way, most of the data can hopefully be cloned before the disk fails, reducing the amount of time-consuming and processor-intensive parity-based reconstruction required.
Despite this, RAID-5 is still only recommended for DR and archival-type uses. Double-parity RAID-6, the only option capable of withstanding the failure of any two drives, actually performs very well, but for most purposes RAID-10 or RAID-50 will be the preferred choices. EqualLogic have produced a handy summary of the available options:
| Workload Requirement | RAID 10 | RAID 50 | RAID 5 | RAID 6 |
|---|---|---|---|---|
| Performance impact of drive failure/RAID reconstruction | Minimal | Moderate | Moderate to Heavy | Heavy |
The XV model here is around £15.5k, which includes three years of 4-hour-response support and a day of engineering time to commission the unit. The E model, with sixteen 500GB SATA drives, is around £12.5k, and the SAS model with 10k disks sits somewhere in between.
Although these prices seem high, it’s well worth noting that all software functionality is enabled out of the box; there is nothing extra to pay to get WAN replication going, for example. The support backing is also a real strength.
So that’s the PS4000 – quite expensive, quite clever, and hopefully quite reliable. In the next post of this four-part series I’ll look at the networking requirements when used with VMware ESX and Force10 switches – things turn out to be more complicated than expected…