Creating your own full-featured VMware vSphere lab for either home or work does not need to be as expensive as you might at first think. Sure, there is the initial outlay for the hardware, but once you have purchased/acquired/liberated this you will be in good shape (depending on the specification of your hardware and whether it will run ESX/ESXi) for some time.
In this ‘build your own VMware vSphere lab’ series I will be co-posting with Simon (yes, two of us called Simon 🙂 ) from vinf.net. I will cover parts 1-5 below along with part 12, and Simon (the other one) will take it from there, covering topics such as running nested ESX instances and VMware Fault Tolerance (FT). This series will provide a step-by-step guide, from the basics of the hardware configuration, through installing and configuring shared storage, to running Lab Manager on your home lab.
The intention is to release these posts over a period of time, though we thought we’d map out the journey so you can see what’s coming up – which will hopefully whet your appetite for building your own VMware vSphere lab either at home or work.
Part 2: Lab Hardware Configuration (TechHead) – Coming Soon!
Part 3: ESXi Installation & Configuration (TechHead) – Coming Soon!
Part 4: Shared Storage Installation & Configuration
– EMC Celerra (TechHead) – Coming Soon!
– HP LeftHand (TechHead) – Coming Soon!
– StarWind iSCSI SAN (TechHead) – Coming Soon!
Part 5: Networking Configuration: VLAN’ing & Jumbo Frames (TechHead) – Coming Soon!
Part 6: VM’d ESXi (vinf.net) – Coming Soon!
Part 7: VM’d vCenter; auto start-up of VMs (vinf.net) – Coming Soon!
Part 8: VM’d FT and FT’ing vCenter VMs (vinf.net) – Coming Soon!
Part 9: FT on the ML115 series – benchmarking Exchange VMs (vinf.net) – Coming Soon!
Part 10: VM’d Lab Manager farm environment on a pair of ML’s (vinf.net) – Coming Soon!
Part 11: VM’d View 4 farm environment on a pair of ML’s (vinf.net) – Coming Soon!
Part 12: Backing up your ESXi lab (Both) – Coming Soon!
Why build a virtualization lab?
Building your own virtualization lab, either for home or work, can serve many purposes: an ideal test bed for those of you training for an exam, a place to test a new application or utility, or simply a way to become more familiar with building and running your own mini server infrastructure.
Gone are the days when running a multiple-server lab environment meant having a number of physical servers whirring away, running up costly electricity bills and taking up plenty of space – not to mention the noise and heat generated. As you no doubt know, the beauty of server virtualization is that you can now run multiple VM server instances on a single piece of server hardware, greatly reducing many of the drawbacks mentioned above.
From this single server you are able to run multiple operating systems (OS), virtual appliance (VA) firewalls and network switches, and even nested instances of the hypervisor itself (i.e. VMware ESX).
With the significant processing power found in modern processors and the reduced cost of high-capacity memory, there has never been a better time to build your own lab with as little as one server or a decent desktop PC.
On with the show…
So hopefully I’ve now sold you on the virtues of running your own virtualized server lab and sparked your interest in creating your own.
Here’s an overview of the hardware and software that will be used in this vSphere lab series:
For these postings we will be using the latest version of VMware ESX available at the time of writing, namely ESXi 4.0 Update 1 (U1), along with other components of the VMware vSphere suite. One of the reasons ESXi was chosen over the full-fat ESX version is that only ESXi can be installed onto a USB pen drive, allowing us to use 100% of the internal disk space of the servers as shared storage for the VMs. Also, as the service console portion of ESX is going to be replaced in a future version of ESX, now is a good time to familiarise yourself with the remote command line (RCLI).
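To give you a taste of what managing a host without a service console looks like, here is a minimal sketch that connects to an ESXi host over the vSphere API and lists its datastores, using the pyVmomi Python bindings. This isn’t something we’ll rely on in the series, and the host name and credentials below are placeholders for your own lab values – it simply illustrates that everything on ESXi is done remotely.
```python
# A rough sketch of managing ESXi remotely over the vSphere API using the
# pyVmomi Python bindings (pip install pyvmomi). The host name and
# credentials are placeholders for your own lab values.
import ssl
from pyVim.connect import SmartConnect, Disconnect

# A lab host typically uses the self-signed certificate ESXi ships with,
# so skip certificate verification here (don't do this in production).
ctx = ssl._create_unverified_context()

si = SmartConnect(host="esxi01.lab.local", user="root",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # A standalone ESXi host presents a single default datacenter.
    for dc in content.rootFolder.childEntity:
        for ds in dc.datastore:
            free_gb = ds.summary.freeSpace / (1024.0 ** 3)
            cap_gb = ds.summary.capacity / (1024.0 ** 3)
            print("%s: %.1f GB free of %.1f GB" % (ds.summary.name, free_gb, cap_gb))
finally:
    Disconnect(si)
```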
Those of you who have read TechHead before will already know that I favour the HP ProLiant ML110 and/or ML115 entry-level servers for use in my own virtualization test lab. My reasons for this are:
Reliability: I like HP kit as it has proven to be very reliable in my years of being in IT. I have been running two ML110’s and two ML115’s over the last 18 months and none of them has ever had a hardware failure, despite me working them hard at times. I have only heard of one failure I’d call serious, this being on Simon Gallagher’s ML115, where he had a motherboard failure – though this was resolved after HP shipped him a replacement board.
Cost: This is probably the most important factor for many in the server selection process. The ML110 and ML115’s have fluctuated in price, at least here in the UK, from a bargain low of £80 each about 18 months ago to their current price of around £190 – which still offers pretty good value for money when you look at the specification of the server. I’ve found that the prices from the various online vendors are usually much the same, though I have warmed to using ServersPlus as they have consistently provided the most competitive pricing and good pre- and after-sales support. As a result I recommend them to others and have arranged a free delivery deal for TechHead readers – as any saving in the current climate has got to be a good thing. Check out my ‘Hot Deals’ section for decent offers that I am told about or see – I try to keep this updated regularly.
The HP ML115 has also proven cheap to run, consuming between 80 and 95 watts under an average load – at least it does in my current VMware lab.
Compatibility: Although the ML110/ML115 have never been on the VMware ESX hardware compatibility list (HCL), they have proven to be almost fully compatible with VMware ESX. In the early ESX 3.5 days there were issues with some of the onboard network controllers, though in later 3.5 releases this was no longer an issue. The largest bugbear, as you’d likely expect, has been storage controller compatibility, though across both models of server things have been pretty stable since the ESX 3.5 U4 release. With the release of VMware vSphere and ESX 4.0, the G5 models of both the ML110 and ML115 now work 100% – although they are still not officially on the HCL, which may be a consideration from a VMware support perspective if you were thinking of putting these servers into a live production environment.
Here’s a video I put together that gives you a brief overview of the HP ML115 G5:
Portability? Just add wheels!
Also, with the relatively small form factor of the ML115 you can transport it much more easily than a full-sized enterprise-level server – an example of which can be seen with vinf’s vTARDIS.
Other options: Another popular approach is to build your own ESX white box. You can end up with quite a powerful and cheap ESX host if you put together the right ESX-compatible parts. There are some good sources out on the web that maintain active lists of which motherboards, disk controllers and network cards have been proven to work with the different versions of ESX. Here are some of those sites that you may want to take a look at if you are considering building an ESX white box solution.
Others such as vinf.net have looked towards a desktop white box solution such as the HP D530 for hosting their ESX environment.
For my networking hardware in my virtualization home lab I use a pair of eight-port Linksys SLM2008 gigabit switches. The reason I use two is that I need that many ports when running most of my ML110’s and ML115’s with shared iSCSI storage and dedicated network connections for vMotion and FT traffic, etc. I also have a main PC from which I manage my environment, which requires a port or two as well. As I posted here, I have found the Linksys SLM2008 switches to offer great bang for buck in a lab-type environment. They have the necessary features, such as VLAN tagging and jumbo frames, which come in useful when you start wanting to implement the more enterprise-level features with your ESX/ESXi hosts.
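To give you an idea of where those switch features get used on the ESXi side, here is a minimal sketch – again using the pyVmomi bindings against the vSphere API, with the vSwitch name, port group name and VLAN ID as placeholder values – of adding a VLAN-tagged port group and raising a vSwitch’s MTU for jumbo frames. Part 5 of this series will walk through this configuration in full.
```python
# Sketch: tagging a port group with a VLAN and enabling jumbo frames on a
# vSwitch via the vSphere API (pyVmomi). "si" is a connection obtained with
# SmartConnect, as in the earlier example; the vSwitch name, port group name
# and VLAN ID are placeholders for your own lab layout.
from pyVmomi import vim

content = si.RetrieveContent()
# On a standalone ESXi host the inventory is a single datacenter with one host.
host = content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
net_sys = host.configManager.networkSystem

# Raise the vSwitch MTU to 9000 so vmkernel ports on it can use jumbo frames.
vswitch = next(s for s in net_sys.networkInfo.vswitch if s.name == "vSwitch1")
spec = vswitch.spec
spec.mtu = 9000
net_sys.UpdateVirtualSwitch(vswitchName="vSwitch1", spec=spec)

# Add a VLAN-tagged port group (e.g. for iSCSI traffic) on the same vSwitch.
pg_spec = vim.host.PortGroup.Specification(
    name="iSCSI",
    vlanId=20,
    vswitchName="vSwitch1",
    policy=vim.host.NetworkPolicy())
net_sys.AddPortGroup(portgrp=pg_spec)
```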
To use many of the useful features within VMware ESX such as DRS, HA and FT you’ll need shared storage. For the purposes of this series I have decided to walk onto the storage vendor parking lot and kick a few tyres. The three vendors I have chosen (EMC, HP and StarWind) all have storage products suitable for virtualised environments that I have wanted to take a more in-depth look at for some time now.
All three of these storage vendors offer products that can be run as a virtual appliance, which will either pool and share the local disk of the ML115’s, or share out the local disk to ESX and then replicate it between both of the ESX nodes (i.e. the ML115’s). These are two different methods of presenting shared storage, so it’s going to be fun to go into more depth with them in the lab.
Here’s a summary of the storage virtual appliances I will be reviewing and using:
- EMC Celerra VSA
- HP LeftHand VSA
- StarWind iSCSI SAN Virtual Appliance
The good news is that if you are following this ‘build your own vSphere lab’ series by constructing your own home or work lab, you can download fully working evaluation copies of all of these products – I will be providing the links.
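Whichever VSA you pick, the end result is an iSCSI target that each ESXi host’s software iSCSI initiator needs to be pointed at. As a rough illustration of what that involves via the vSphere API (pyVmomi again – the HBA device name and the VSA’s IP address below are placeholders), the sketch below adds a dynamic discovery address and rescans for new LUNs; the storage parts of this series will walk through the equivalent steps in detail.
```python
# Sketch: pointing the ESXi software iSCSI initiator at a VSA's iSCSI target
# and rescanning, again via pyVmomi. The HBA device name (vmhba33) and the
# VSA's IP address are placeholders - check your own host for the real values.
from pyVmomi import vim

content = si.RetrieveContent()
host = content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
storage_sys = host.configManager.storageSystem

# Add the VSA as a dynamic discovery ("send targets") address.
target = vim.host.InternetScsiHba.SendTarget(address="192.168.1.50", port=3260)
storage_sys.UpdateInternetScsiSendTargets(iScsiHbaDevice="vmhba33",
                                          targets=[target])

# Rescan so any LUNs the VSA presents show up and can be formatted as VMFS.
storage_sys.RescanAllHba()
storage_sys.RescanVmfs()
```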
The End Bit
I hope you all enjoy this build your own vSphere lab series. I look forward to writing the posts and, as always, your feedback, suggestions and comments are most welcome. 🙂