I’ve been running VMware ESXi 4.1 on an HP MicroServer for a few weeks now, and the following post summarises my findings and answers a question many people have been asking - does the HP MicroServer make for a decent VMware vSphere lab server, and is it a true replacement for the HP ML115 G5?
If you want to read my conclusion on running vSphere on the MicroServer then skip to the end of the post, though if you want more detail on how the hardware is presented through to ESX/ESXi then read on. Also, if you want more hands-on information on the MicroServer, check out my previous posts here and here.
Installation: Installing VMware vSphere 4.1 onto the HP MicroServer didn’t raise any issues. The entire installation process went through without a hitch - none of the CD/DVD-not-recognised issues that were originally seen with the ML110 G6s. That said, I did have to provide my own CD/DVD drive, as the MicroServer doesn’t come with one as standard.
Let’s take a look at what ESX/ESXi sees of the underlying hardware once it has been installed. The following images show some of the key areas from the vSphere Client.
From within the ‘Summary’ tab it is clear that the AMD Athlon II Neo N36L (dual-core) laptop-grade CPU is detected without issue, along with the other core components of the HP MicroServer such as the memory, local storage and, of course, the network card.
Processor Information: The AMD Athlon N36L CPU has the AMD64 extensions which are necessary for VMware vSphere and 64-bit-only operating systems such as Windows Server 2008 R2. It also has the AMD-V virtualization enhancement extensions.
Memory Information: The full 8GB (2 x 4GB DIMMs) of memory was detected by ESX/ESXi 4.1.
Datastores – Local & NAS/SAN Storage: The local 160GB SATA disk was detected and presented through as a VMFS3 partition.
Network Adapter: The integrated Broadcom NC107i Gigabit network card was detected without issue.
Storage Adapters: The disk controller (SB700) is part of the MicroServer’s on-board AMD chipset. As this isn’t strictly a hardware-based array controller and requires drivers to run an array on the server, you will find that trying to run RAID under VMware ESX/ESXi won’t work. So if you require RAID functionality you’d need to look at adding a PCIe-based array controller or using shared SAN/NAS storage. However, if you are not worried about disk resilience in your vSphere lab, you could install ESX/ESXi onto a single local disk and keep the VMs on the same disk, or on another disk attached to the on-board controller.
VMDirectPath: Unsurprisingly, the MicroServer’s chipset does not support VMDirectPath pass-through.
Power Management: The AMD N36L CPU of the MicroServer supports the AMD PowerNow! instructions, meaning that the ESX/ESXi host will be able to throttle the frequency of the CPU at times of varying utilization, which can lead to power savings.
vMotion and Enhanced vMotion Compatibility (EVC): One area that I was interested to test was vMotion. As the MicroServer runs a low-powered processor rather than an AMD Opteron, how would it work alongside an Opteron-based server such as the ML115 G5?
As you’d expect, a vMotion between the HP MicroServer and an ML115 G5 fails due to the incompatibility between the AMD Athlon N36L and AMD Opteron CPU models.
But what if you apply Enhanced vMotion Compatibility (EVC) to a cluster containing the HP MicroServer and ML115 G5? The good news is that if you set the EVC mode of the cluster to AMD Opteron Generation 1 then vMotion, and subsequently DRS, will work between the Athlon- and Opteron-based CPUs.
Performance: Over the two weeks I ran on average 5 VMs on the MicroServer. Although the AMD N36L dual-core 1.297GHz processor in the MicroServer is of quite a low specification, I rarely saw it hit 80-90% of total CPU utilisation. In fact, most of the time my VMs only used 300-400MHz, which isn’t really surprising since they weren’t doing too much. This could of course change when heavier CPU workloads are applied to the VMs. If you think you’re going to be running moderately CPU-intensive applications or utilities on your VMs running on the MicroServer then you should take the speed and architecture of the N36L into consideration. With regard to memory there is nothing unusual here: ESX/ESXi and the guest OSs will consume most of the memory you give them.
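The utilisation figures above can be sanity-checked with some simple arithmetic. Here’s a rough sketch - the per-VM figure is the upper end of the 300-400MHz range I observed, not a measured average:

```python
# Rough CPU-budget sketch for the MicroServer figures above.
# Assumption: usable capacity is simply cores x clock speed; scheduling
# overhead and hyperthreading (absent on the N36L anyway) are ignored.

CORES = 2
CORE_MHZ = 1297      # AMD Athlon II Neo N36L, as reported by ESXi
AVG_VM_MHZ = 400     # assumed per-VM draw (upper end of what I saw)
VM_COUNT = 5         # the lab VMs from the two-week test

total_mhz = CORES * CORE_MHZ       # raw capacity of the host
used_mhz = VM_COUNT * AVG_VM_MHZ   # combined demand of the VMs
utilisation = used_mhz / total_mhz

print(f"Capacity: {total_mhz} MHz, demand: {used_mhz} MHz "
      f"({utilisation:.0%} utilised)")
```

Even with the pessimistic 400MHz-per-VM assumption, five idle-ish lab VMs leave plenty of headroom, which matches what I saw in practice.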
Conclusion – Running VMware vSphere on an HP MicroServer
What attracted me to the HP MicroServer initially was the form factor. For those of us running home labs this small form factor is highly beneficial, especially when combined with the low power consumption and noise generated by the server. When I first opened the box I was a little surprised by the height of it, as I was expecting a smaller form factor - something the size of a Shuttle PC. Check out my video in my earlier post here for a comparison in size of the MicroServer, ML115 and Shuttle. The build quality of the MicroServer is good, with the plastic components not feeling cheap and brittle as is the case with some entry-level servers.
Here’s a breakdown of my thoughts on the varying core components that make up the MicroServer in the context of running VMware vSphere on it:
CPU: Sufficient if you want to run relatively low CPU workloads on your VMs. What it lacks, however, is more GHz and a larger L2 cache to accommodate a larger number of VMs or CPU-intensive workloads. In my opinion the CPU is the main area that is lacking in this ‘server’ offering from HP. The low power draw of the CPU does mean that it only needs to be passively cooled via a heatsink, with the large extraction fan at the rear drawing air over it.
Memory: The 8GB memory maximum is fine for a small vSphere lab. The downside to the memory on the MicroServer, however, is all about cost. The two DIMM sockets in the MicroServer are rather limiting, in my opinion, even for a non-virtualised environment. It means that if you want to take the server up to the maximum 8GB, which is quite common these days, you’ll need to purchase 2 x 4GB DIMMs, which unfortunately aren’t the cheapest and will add considerable cost to the overall purchase price of the server. In a vSphere lab server you ideally need to be running 8GB, especially if running Windows Server 2008 VMs, which tend to consume more memory than other OSs.
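To put the 8GB ceiling in context, here’s a back-of-envelope sizing sketch. The hypervisor overhead and per-VM allocation are assumptions for illustration, not measurements from my lab, and memory overcommit/page sharing are deliberately ignored:

```python
# Back-of-envelope memory budget for an 8 GB MicroServer lab host.
# Assumptions (illustrative only): ~1 GB kept back for ESXi itself,
# ~1 GB configured per Windows Server 2008 VM, no overcommitment.

HOST_GB = 8
ESXI_OVERHEAD_GB = 1   # assumed hypervisor reservation
VM_GB = 1              # assumed per-VM allocation

usable_gb = HOST_GB - ESXI_OVERHEAD_GB
vm_count = usable_gb // VM_GB   # whole VMs that fit in the budget

print(f"{usable_gb} GB usable -> roughly {vm_count} x {VM_GB} GB VMs")
```

Under those assumptions you can fit a handful of 1GB VMs, which lines up with the 4-6 VM sweet spot mentioned in the conclusion; heavier Windows Server 2008 allocations eat into that quickly.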
Network: No complaints with the single embedded Broadcom NC107i Gigabit network card; it is detected without issue by ESX/ESXi and is sufficient for most small lab environments. Of course, additional network ports can be added via PCIe network cards if required - which is often preferable when you start running storage and vMotion network traffic.
Storage: The on-board SB700 storage controller is pretty basic stuff. Don’t expect to run RAID under ESX/ESXi on the server, and the performance of running VMs off a single disk won’t be blazingly fast - though it may well prove fine for many small vSphere lab environments. There are four drive bays within the MicroServer, so you could easily distribute your VMs onto separate spindles.
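As a toy illustration of spreading VMs across the four bays - assuming one single-disk VMFS datastore per bay, with datastore and VM names made up for the example - a simple round-robin placement looks like this:

```python
# Toy round-robin placement of lab VMs across the MicroServer's four
# drive bays. Assumes one single-disk VMFS datastore per bay; all names
# here are hypothetical examples, not anything ESX/ESXi produces.

from itertools import cycle

datastores = ["bay1-vmfs", "bay2-vmfs", "bay3-vmfs", "bay4-vmfs"]
vms = ["dc01", "vcenter", "sql01", "web01", "web02"]

# Pair each VM with the next datastore, wrapping around after bay 4.
placement = {vm: ds for vm, ds in zip(vms, cycle(datastores))}
for vm, ds in placement.items():
    print(f"{vm} -> {ds}")
```

The point is simply that with more VMs than spindles, the busiest VMs should end up on different disks so they aren’t contending for the same I/O.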
Overall: I fully acknowledge that it was never HP’s intention for the MicroServer to be used as a vSphere virtualization server, though with its attractive form factor and with the HP ML115 G5 going end of life, many people, including myself, are looking for an entry-level AMD alternative. For those wanting a small, basic vSphere lab server, definitely take a look at the MicroServer, though in my opinion, if you haven’t already committed to AMD-based servers in your vSphere lab, the Intel i3-530/540-based HP ML110 G6 offers much better bang for your buck/pound/euro/yen. It is comparable in price, has greater memory expansion options (i.e. 4 x DIMM sockets), comes with a CD/DVD drive as standard and has a higher-specification CPU. Though if you want a small-form-factor, super-quiet vSphere lab server to run 4-6 VMs with low/moderate CPU workloads then the HP MicroServer is definitely worth a look.