In this second part of the 'Running the EMC Celerra Virtualized Storage Appliance (VSA) in your VMware vSphere lab' series we will take a look at how to install the Celerra UBER VSA. If you missed the first part of the series I would recommend heading back to the article here and having a read, as it will provide you with useful background on the Celerra storage appliance which will help when running the virtualised edition.
To kick things off, full credit again to Nick Weaver, who took the original EMC Celerra VSA, 'tweaked' it, called it 'UBER' and included plenty of automated configuration goodness, meaning that Celerra VSA newbies, such as myself, can get this virtual storage appliance up and running in no time at all. In fact the Celerra UBER VSA has such a decent level of automation during the deployment process that it can now configure itself based on the VM resources you allocate to it, eg: the amount of memory you allocate to the VSA determines the number of Data Movers it allows you to configure and the amount of storage presented up to the VSA. More on this in a few sections' time.
Before We Proceed Any Further….
…It is worth mentioning that although this may seem something of a long post, and it is, don't be fooled, as the deployment process of the EMC Celerra VSA is VERY SIMPLE and QUICK indeed. What you are seeing in this post is the step-by-step process complete with plenty of extra information around the VSA and the install process, which makes it look like a lengthier task than it really is. In reality you can have the Celerra VSA up and running within 12-15 minutes (including the actual OVA deployment time)!
Once up and running the Celerra VSA will provide you with many of the enterprise level storage functions found in the physical Celerra storage appliance, such as:
- File Compression
- VMware vSphere plug-in
And it’s FREE!
Under The Hood
It is worth spending a little time now to explain how the Celerra VSA works because, as you'd expect, it does operate a little differently than its physical counterpart. If you cast your mind back to the first part of this series you will remember that I outlined the four key components that make up a physical Celerra, "these being the X-Blade/Data Movers, the Storage Processor Enclosure (SPE), the Control Station and of course the actual disk shelves or disk array enclosures (DAE)". Now you might be asking yourself, "is there a separate VM for each of these components (apart from perhaps the DAE)?" The answer to which is "No, it's all done through the magic of emulation".
How the Celerra VSA performs its magic is as follows… are you holding on? The VM of the VSA itself is in fact the VM of the Control Station (Linux), with the Data Movers and Storage Processors (which provide all of the smarts and the Ethernet-based connectivity in and out of the Celerra VSA) being emulated using something called the Blackbird service from within the Control Station VM.
One important thing to remember about the Celerra VSA is that it is not officially supported by EMC and as such should not be run in a production environment. 'Unofficial' community-led support can however be found at the EMC forums: http://forums.emc.com. Another thing to be mindful of is that as the real brains behind the Celerra VSA (ie: the Data Movers) are actually emulated within a VM (ie: of the Control Station) you are never going to get the same level of performance that you'd get from a physical Celerra storage appliance. That said, for a lab environment it will prove to be more than sufficient and definitely isn't a slouch where performance is concerned.
Those of you that have read part 1 of this series or are familiar with the physical EMC Celerra storage appliance will be aware that it can have both Fibre Channel (FC) and Ethernet based connectivity to the outside world. As you can no doubt imagine it isn’t really practical to provide FC connectivity to the virtualized version meaning that all inbound and outbound traffic using the iSCSI, NFS or CIFS protocols is via Ethernet and provided by the VSA’s Data Mover (aka X-Blade).
How Many Data Movers Would You Like With That?
By default the Celerra UBER VSA is installed with a single Data Mover with a single Ethernet connection via the VSA VM. All of this is part of the automation that Nick has added to the VSA installation process, along with an option to add another Data Mover if the VSA has had 4GB of memory or more allocated to it. This is useful if you want to test the load balancing or failover functionality between Data Movers, though for the majority of people a single Data Mover will more than suffice. It should be pointed out that additional network ports can be allocated to a Data Mover if extra throughput is required. Also, one very important point to note is that once you have installed your Celerra VSA you are unable to change the number of Data Movers at a later stage without a complete re-deployment of the VSA.
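The memory-based automation can be summed up in a simple rule. Here's a rough sketch in Python (the 4GB threshold comes from the deployment behaviour described above; the cap of two Data Movers is my assumption for illustration):

```python
def data_movers_available(memory_gb: int) -> int:
    """Sketch of the UBER VSA deployment rule: a second Data Mover is
    only offered when the VM has 4GB of memory or more allocated.
    (The cap of two is an assumption for this illustration.)"""
    return 2 if memory_gb >= 4 else 1

print(data_movers_available(2))  # default 2GB deployment -> 1
print(data_movers_available(8))  # -> 2
```

Remember that whatever number the deployment settles on is fixed for the life of that VSA instance, so allocate the memory up front if you think you'll want a second Data Mover.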
Interesting, Though Important, Things You Should Know About The Celerra UBER VSA
Before we kick things off and start installing the UBER VSA there are a number of key points well worth knowing about this latest UBER VSA offering – for a complete and comprehensive list take a look at Nick's site here:
– The Celerra UBER VSA is running DART version 22.214.171.124 and runs 64-bit code
– Each UBER VSA is given its own unique ID, which although not that important if using the VSA for standard storage operations it is very important when you want to start using the replication facility to replicate VMs between Celerra VSAs for use with VMware SRM.
My EMC Celerra UBER VSA Lab Setup
I will be running my EMC Celerra UBER VSAs on the newest addition to the TechHead home lab, this being an HP Proliant MicroServer. I have chosen the MicroServer as it is relatively low powered, not just in power consumption but also in its specification. This will demonstrate that if the Celerra UBER VSA runs and performs well on my MicroServer then it will do so on most other lab setups out there. Here's a summary of my lab:
HP Proliant MicroServer
- CPU: Athlon II Neo N36L 1.3GHz Dual-Core
- Memory: 8GB DDR3
- Disks (SATA): 1 x 160GB Seagate Barracuda 7200RPM, 1 x 2TB Western Digital WD2001FASS 7200RPM
- Network Card: 1 x Embedded Broadcom BCM5723 Gigabit
The HP Proliant MicroServer is a good entry-level VMware vSphere lab server: it is extremely quiet, consumes little power (ie: approx 45W) and has enough horsepower under the hood to easily run a handful of low-utilisation VMs. The downside however is that despite having storage RAID available on its system board, this type of RAID isn't fully hardware based and requires drivers to be installed on the host OS or hypervisor. Unfortunately VMware vSphere doesn't contain or support these drivers, so you are unable to run the disks in a RAID configuration for added performance and/or resilience. This isn't necessarily a show-stopper, as at the end of the day it is just a lab server and as such should only be running non-production VMs; it's also a good excuse to experiment with and implement a VM and file-level backup application such as VMware's VDR or Veeam's Backup and Replication.
Another way to ensure the VMs on the MicroServer are backed up and available quickly in the event that the single disk containing the VM fails is to utilise the in-built replication functionality of the Celerra VSA. In doing this you are also getting to learn how to setup and configure replication between two storage devices (Celerra VSAs in this instance) which is useful and all round good fun (in a geeky kinda way).
So what you would end up with is two Celerra UBER VSA instances, each residing on and being run from its own single SATA disk (hence the reason I have two disks in the MicroServer), with the replication of the VMs occurring between the two VSA instances. If the SATA disk running the VMs fails then all is not lost, as you have a replicated copy on the second VSA. Of course there would be some downtime whilst you bring the replicated VMs on the second VSA back up and running, though as mentioned before, this is a lab environment and does it really matter? In a production or work vSphere lab where budget is not as much of an issue, hardware-based RAID resilience would be implemented. In part 7 of this series I will be covering how to implement this replication between the two VSA instances running on the same ESX/ESXi host.
The Celerra VSAs will be running on top of VMware ESXi 4.1 which is installed and boots from a 2GB USB key in the internal USB slot of the MicroServer. For simplicity and so as to avoid any confusion the ESXi 4.1 configuration on the MicroServer is a default build and hasn’t had any changes or ‘tweaks’ made to it.
Now seems a good time to mention the default network connections in and out of the Celerra VSA, of which there are two. The first is the 'Management' network connection and this, as you can probably guess, is for the connectivity to and from the Control Station and provides access to the Unisphere web management interface. The second network connection is used by the Data Mover for the presentation of the iSCSI, NFS or CIFS traffic out to the VMware ESX/ESXi host(s). Additional network connections can be added to the Data Mover, though we won't be covering this in this particular series, and in most cases for a lab environment a single 1Gb Ethernet connection will suffice. There are a couple of other network connections within the VSA though these are loop-back adapters internal to the Celerra VSA.
Here’s what my basic network configuration within ESXi 4.1 looks like:
Obviously in a ‘real world’ implementation you would want to separate your storage traffic away from all other VM and management traffic for performance and security reasons though as mentioned earlier I want to keep things as simple as possible for demonstrating this installation process. If there is enough interest I could write another post just specifically covering how to configure the Celerra VSA for separated network traffic types, etc. So leave me a comment at the bottom of this post if you’d like to see this.
Come Along For The Ride
For those of you following along who want to install, configure and run your own EMC Celerra VSA, you will need the following:
EMC Celerra UBER VSA DOWNLOAD LINK: To download the latest UBER VSA visit Nick Weaver’s site here, at the time of writing this post the latest offering from Nick was v3.2.
Let The EMC Celerra UBER VSA Installation Begin!
From Nick’s site you will notice that there are two flavours of Celerra UBER VSA that you can download. As clearly indicated one is for use with VMware Workstation and the other for VMware ESX/ESXi. Download the version that matches the host environment you will be running the VSA, in my case it is VMware ESXi v4.1 so I need the ‘vSphere Version OVA’.
With the vSphere OVA (Open Virtual Appliance) version of the UBER VSA downloaded it is time to start up the vSphere Client and connect it to the ESX/ESXi host where we will be running the VSA(s) from.
When you have connected through to your ESX/ESXi host, click on 'File' from the menu bar and select 'Deploy OVF Template'. The OVF is the descriptor file for the virtual appliance (OVA). This will provide an easy-to-follow wizard with step-by-step instructions on how to select and deploy the virtual appliance you want (ie: the Celerra VSA). For those of you who are not familiar with deploying a virtual appliance from an OVF template I have included the following steps (click on each image for a larger version). During the installation process, because I don't want to confuse things by veering off the path too much, we are going to select all of the default options (apart from changing the disk provisioning from 'thick' to 'thin'). By selecting these default options you'll have the Celerra VSA up in no time at all; I think you'll be amazed at how straightforward it is – of course it never used to be this way before the Celerra VSA was given the 'UBER' makeover.
The first thing you’ll get asked by the ‘Deploy OVF Template’ wizard is the location of your Celerra UBER VSA OVA file you downloaded. Point it to the location of the OVA (don’t worry that it isn’t the OVF file) and hit ‘Next’.
This next screen shows you the OVF template details. So for the Celerra VSA you can see that the OVA file is 2.2GB in size, will consume 40GB of space once provisioned (if thick provisioning is selected – more on this in a couple of steps' time) and, very importantly, it reminds us that the Celerra VSA is not supported for use in a production environment.
After clicking ‘Next’ you are shown the End User License Agreement and have to press ‘Accept’ before you are allowed to proceed with the deployment. I have yet to meet someone that has ever read an entire End User License Agreement on any product (and not fallen into a coma). Once you ‘Accept’, press ‘Next’.
Give your new Celerra VSA a name, I chose to name mine: ‘Celerra UBERv3.2 – 1’ as I will be adding a second one when I set up the replication in part 7 of this series.
You will have to select the datastore where you want the VM files associated with your Celerra VSA to live. I have placed this first Celerra VSA on the 'Local Disk – Seagate' datastore and I will later be installing my second Celerra VSA on the other local datastore, 'Local Disk WD' (Western Digital disk). This way each Celerra VSA instance will be running on its own underlying physical disk. Select your target datastore and press 'Next'.
At this point you can choose whether you want to deploy the Celerra in a ‘Thin’ or ‘Thick’ provisioned format. In the interests of saving disk capacity I chose ‘Thin provisioned format’. Click ‘Next’
In this next section, 'Network Mapping', you are asked to assign both the 'Management' (ie: Control Station VM running the Unisphere management web interface) and 'Data Mover' networks of the Celerra VSA to one of the host's VM networks. As mentioned earlier in this post I only have a single network port in my MicroServer, so in an effort to keep things as simple as possible, and since this is just my basic lab environment, I am using the single Virtual Machine Port Group, 'VMNetwork', that is created during the installation of ESX/ESXi. I will allocate both 'Management' and 'Data Mover' traffic to use this VM Port Group.
Notice how the OVF Deployment Wizard warns you that you have multiple source networks mapped to the same host network – nice touch. Click ‘Next’
The following screen provides a summary of the options and settings which will be applied during the deployment process. It pays to double check these as the deployment process kicks off after pressing the ‘Finish’ button. Go on, go for it… press the ‘Finish’ button!
Cogs begin to turn and creak in your ESX/ESXi host and the Celerra VSA starts to deploy… This entire process, depending on the specification of your ESX/ESXi host and the storage on which the Celerra VSA is being installed can take approximately 7-8 minutes. Time for a quick cup of tea.
Ding… Your new Celerra VSA has been deployed and is ready to be started up for the first time. Click the ‘Close’ button.
EMC Celerra UBER VSA Initial Configuration
With the Celerra VSA deployed from the OVA file we now want to start it up to configure such things as an IP address and a name. Notice (see image below) how the Celerra VSA has a single vCPU and 2GB of memory allocated and has consumed 6.68GB of disk space after the deployment.
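Those figures show why I chose thin provisioning earlier: the OVF summary quoted a 40GB thick-provisioned footprint, yet the thin-provisioned deployment has consumed only 6.68GB so far. A quick back-of-the-envelope check:

```python
thick_gb = 40.0  # space that would be reserved if thick provisioning had been selected
thin_gb = 6.68   # space actually consumed straight after the thin deployment
saved_gb = thick_gb - thin_gb

print(f"Thin provisioning saves {saved_gb:.2f}GB ({saved_gb / thick_gb:.0%}) per VSA")
# -> Thin provisioning saves 33.32GB (83%) per VSA
```

Bear in mind a thin disk grows as you write data to the VSA, so that saving shrinks over time; on a small lab datastore it's still well worth having.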
To kick things off and to start configuring your new shiny Celerra VSA appliance ‘Power On’ the VSA from within the vSphere Client.
I have included the next few screens for those of you who are not actually installing the Celerra VSA but want an appreciation of what the process looks like. From the screen below we can see that the Celerra VSA first starts to boot the Linux-based Control Station VM.
From this Celerra VSA process we can see that the Linux distro used appears to be Red Hat based.
As this is the first time we have started the Celerra VSA we need to provide it with some configuration details. Notice that the configuration process has detected that the Celerra VSA has less than 4GB of memory therefore only 1 Data Mover will be enabled.
The first piece of information we need to feed the Celerra VSA is the Management IP, ie: the IP address we will use to access the Celerra VSA via the web based Unisphere management interface or the console via SSH.
Next up, enter in the Subnet Mask and Gateway details.
Then provide your Celerra VSA with details of the Hostname, Domain name (if applicable), a local DNS server and an NTP time server. For the NTP time server I use the publicly accessible NTP time service ‘pool.ntp.org’, though for this to work your Celerra VSA will need to have access to the internet. Of course if this was an actual physical Celerra storage appliance and was running in a production environment we would point the Celerra at an internal NTP time service, eg: a local domain controller or physical centralised time service appliance.
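Before typing these values in it's worth sanity-checking them, as a mistyped gateway or subnet mask means sitting through another configure-and-reboot cycle. A small sketch using Python's ipaddress module (the management IP is the one from my lab; the gateway shown is an assumed value for illustration):

```python
import ipaddress

mgmt_ip = "192.168.123.121"  # the Management IP entered a couple of steps back
netmask = "255.255.255.0"
gateway = "192.168.123.1"    # assumed lab gateway for this illustration

# Derive the subnet the management IP lives in (strict=False allows a host address).
network = ipaddress.ip_network(f"{mgmt_ip}/{netmask}", strict=False)
gw = ipaddress.ip_address(gateway)

# The gateway must sit inside the same subnet as the management IP.
assert gw in network, "gateway is outside the management subnet"
print(f"{mgmt_ip}/{netmask} -> {network}, gateway {gateway} OK")
```

Nothing clever, but it catches the classic fat-fingered-octet mistake before the VSA applies the settings and reboots.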
Once all the details have been provided and you have confirmed that they are all correct then the Celerra VSA will take these details and will apply them.
This can take a few minutes and involves an automatic restart of the Celerra VSA. Once it has finished the configuration and has restarted it will present you with a screen confirming the IP address (ie: the one you provided a couple of steps earlier) through which you can access the Celerra VSA using the Unisphere web based management interface. Make note of this IP address, especially if you are running a number of VMs in your lab as all these IPs can get confusing after a while.
If you hit any key you will then see the Linux-based console window for the Celerra VSA's Control Station; remember how I mentioned at the start that the Celerra VSA itself is a VM of the Control Station, with an emulator (Blackbird) running on it that emulates the SPs and the Data Movers. You can log in to the console using the username and password 'nasadmin' (for both). To gain root access you would use the username 'root' and the password 'nasadmin'. You will have to log in as 'root' should you want to shut down or restart the VSA, as the 'nasadmin' user does not have sufficient privileges.
So let's now access the Celerra VSA via the Unisphere management web interface. Start up Internet Explorer (Firefox may give you a few issues with the Celerra VSA's certificate) and point it to the IP address allocated to the Management interface (in my case this was 192.168.123.121).
*** IMPORTANT NOTE ***
At this stage of the process you may receive a blue screen in your internet browser window when going to access EMC Unisphere. If this does happen don't worry, as this is a known "issue" and can be resolved easily by following the instructions on Nick's blog here. This blue screen problem won't happen to all of you; in fact I have never experienced it myself, though it is good to know there is a fix, just in case.
If everything goes to plan you should now be presented with the window below in your browser. Click on 'Start a new EMC Unisphere session'.
Don’t worry about the warning in the next box that pops up which informs you that the ‘web site’s certificate cannot be verified’, just check/tick the ‘Always trust content from this publisher’ box and click ‘Yes’. This will stop it from popping up again next time you go to access the Unisphere interface.
Another browser window will open, press ‘Continue’.
You’re almost there! At the EMC Unisphere logon screen you are prompted for a ‘Name’ and ‘Password’ to which you enter ‘nasadmin’ (case sensitive) for both. It should be noted that this username and password is also used on the default configuration of a physical Celerra storage appliance.
From this Unisphere login screen you can also see that there is an LDAP integration option, which would prove useful in a production environment, but since this is purely a lab environment we won't bother configuring LDAP integration.
…And behold! You have now successfully installed the Celerra VSA and have access to it via the feature-rich and simple-to-use Unisphere management interface. From here you can check the status of the Celerra VSA or multiple VSAs (more on this in the next post) as well as configure, allocate and present the underlying storage of the Celerra VSA.
This EMC Unisphere management interface is the same one used with the physical Celerra, so if you get to know it well you will be armed with know-how that can then be applied to the physical Celerra in a real-world production environment.
So here ends this rather lengthy post. I hope you found it useful and, as mentioned at the beginning, the actual process of installing and getting the Celerra VSA up and running is an easy and relatively quick one, though it is worth spending the time during your first Celerra VSA install to gain an appreciation of what you are doing. In the next post I will be covering how to manage your EMC Celerra VSA with Unisphere.
Here is an overall list of all posts in this Celerra VSA series:
- Part 1 – The Basics: A High Level Introduction to the EMC Celerra Physical Storage Appliance
- Part 2 – Installing the EMC Celerra VSA
- Part 3 – Managing your EMC Celerra VSA with Unisphere (integrated & also free) (coming soon)
- Part 4 – Configuring the EMC Celerra VSA for NFS (coming soon)
- Part 5 – Configuring the EMC Celerra VSA for iSCSI (coming soon)
- Part 6 – EMC vCenter Plug-ins & Other Cool Stuff (coming soon)
- Part 7 – Configuring replication between two EMC Celerra VSAs (coming soon)