This guest post is by well-known storage expert Chris M Evans, who writes his own blog on storage and virtualisation at www.thestoragearchitect.com. His blog is an excellent source of storage, virtualisation and enterprise information – well worth adding to your browser favourites and RSS feed.
One of the unique selling points of the Drobo series of devices is that they can detect and understand common operating-system file systems. For a DroboPro connected to a single host, the device can track file deletions and immediately reclaim the released space. But if you're using a DroboPro in a VMware environment, how exactly can you make use of this unique feature? The answer is to use RDM devices.
First, a little extra background on the Drobo. The original Drobo "classic" model offered up to four physical drives behind a single host connection over USB, with the host seeing one or more large 2TB LUNs depending on the installed capacity. Version 2 added FireWire, and the DroboPro model introduced single-host iSCSI. Last November saw the release of the Drobo S and DroboElite models, which provide additional capacity, eSATA connectivity and multi-host iSCSI support.
Using A DroboPro With VMware ESX
When the DroboPro is connected to a PC or physical server, it is formatted with the host's standard file system. Even over iSCSI, the device is directly mapped to the host, enabling the DroboPro to see and understand the file system. Under VMware, however, virtual disks are presented to guests as VMDK files on a VMFS partition. Drobo devices don't currently understand VMFS, and so the benefit of deleted-space reclamation is lost.
Use RDM Devices
There is a workaround. If the entire LUN is presented as an RDM (Raw Device Mapping) LUN in VMware, the DroboPro can see and understand the guest's file system and reclaim deleted space automatically. Here's how I tested this configuration on my existing Drobo environment.
My Drobo "lab" setup consists of a dual 4-core Intel CPU server, 16GB of memory and various storage devices. I’ve most data on an Iomega ix4-200d, internal SAS disks and a DroboPro. The ‘Pro is configured with 16 (sixteen) 2TB thin provisioned LUNs and 6.4TB of physical storage, however for this test I’ve removed all but two of the drives, leaving 2.4TB of raw storage available. The first screenshot from my vSphere client shows the drives, some of which I’ve specifically named as part of this test.
To obtain a fair comparison I'll be using two LUNs (LUN10 and LUN11) from the DroboPro: LUN10 will be presented to my test Windows 2008 host as an RDM device, while LUN11 will be formatted with VMFS as a datastore. The next two screenshots highlight this.
Once formatted with VMFS, LUN11 initially occupies 600MB of space.
So, I've presented the iSCSI LUN from the DroboPro to the test host using the vSphere client. The host is running Windows 2008 Server and the Drobo Dashboard, so we can see what's happening as files are created.
Here’s my DroboPro dashboard. The device has around 153MB allocated to the formatted iSCSI LUN, labelled "X:".
The next step is to create some data on this test LUN. For that I’ll be using the fsutil command, which lets me create a file of any size. There’s no real data in the file, but the Windows MFT entry will reflect that a file of the specified size has been created on the volume. If the DroboPro is watching the MFT, it should detect a file has been created and change the capacity figures in the dashboard accordingly. This can only be the case if it understands the file system layout, as we’re creating no real data with this test.
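On Windows the test files are created with `fsutil file createnew`, which records the file's size in the metadata without writing any actual data. For readers who want to see the effect for themselves, here's a sketch that reproduces the same idea on a Linux shell using `truncate`; the Windows form is shown as a comment, and all file names, sizes and paths here are illustrative examples, not the ones from the original test.

```shell
# Windows form used in the article (run in cmd.exe on the test host),
# creating a 100GB file without writing any real data:
#   fsutil file createnew X:\test1.dat 107374182400
#
# Linux analogue using truncate: the file's logical size is recorded in
# file system metadata, but no data blocks are actually written.
truncate -s 100M /tmp/drobo-test.dat

# Logical size reported by stat: 104857600 bytes (100MB)
stat -c %s /tmp/drobo-test.dat

# 512-byte blocks actually allocated on disk: effectively zero,
# since the file is sparse and contains no written data
stat -c %b /tmp/drobo-test.dat
```

This is exactly why the test is a good probe of the Drobo's file-system awareness: the only thing that changes on disk is the metadata describing the file, so any change in the Dashboard's capacity figures can only come from the device reading the file system itself.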
The following video shows the creation of multiple 100GB files on the test volume and the change in the DroboPro Dashboard as this process occurs. This test is performed on the iSCSI RDM LUN and clearly shows that the DroboPro is identifying the space usage via the file system.
So, it seems the DroboPro can detect the creation and deletion of files through an RDM device. Just to be certain, I repeated the test on the second LUN, presented to the host from the VMFS-formatted datastore. As expected, the Drobo couldn't detect the create/delete process; worse than that, it couldn't even report the amount of configured space in use until it resumed from standby.
I contacted Data Robotics and they provided this response:
Yes, this is a known behaviour. The 'cleverness engine' works for a list of known file system types. Currently the list includes NTFS, EXT3, HFS+ and FAT32. There is a level of effort in supporting VMFS and it is something we intend to do. However, we do not have a date for support.
In the meantime we recommend using VMware-based tools to track the utilization of the Smart Volume(s) on which datastores have been created.
Since we do not report utilization properly, we recommend that the actual available space in a DroboElite be the same or larger than the provisioned space. For instance, if you have 10TB usable in a DroboElite, the sum of all datastores on Smart Volumes should not be greater than 10TB.
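Data Robotics' recommendation amounts to a simple invariant: the total capacity provisioned across all datastores must not exceed the array's usable capacity. A quick sanity check makes the arithmetic concrete; the volume sizes below are made up for illustration.

```shell
# Usable capacity of the array (TB) and the sizes of each datastore
# created on its Smart Volumes -- all figures here are illustrative.
usable_tb=10
datastore_sizes_tb="4 3 2"

# Sum the provisioned datastore sizes
total_tb=0
for size in $datastore_sizes_tb; do
    total_tb=$((total_tb + size))
done

# The invariant: provisioned total must not exceed usable capacity
if [ "$total_tb" -le "$usable_tb" ]; then
    echo "OK: ${total_tb}TB provisioned of ${usable_tb}TB usable"
else
    echo "WARNING: provisioned ${total_tb}TB exceeds ${usable_tb}TB usable"
fi
```

In other words, until VMFS is supported you should treat the Drobo's Smart Volumes as thick-provisioned when planning datastore sizes, because the device can't see which blocks the guests have freed.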
What this means is that the thin-provisioning benefits of the Drobo are lost in a VMware environment, because the device can't track file system changes. RDM devices are a good solution, then, but only if you use them exclusively, since the Drobo can't keep track of a mixture of supported and unsupported file systems. I'm keen to see how VMFS will be supported; in the meantime, keep an eye open for my next test – DroboPro on Hyper-V.