Now that I can talk about vSphere 5, having joined the beta near the end of its run, I am really excited to start blogging about the new features. As much as vSphere 4 was a groundbreaking release in a lot of ways, I think v5 further cements VMware’s position as “the” software suite that powers the cloud. Look at their stock price of late if you need supporting evidence. I hope this will be the first in a series of posts digging into v5 and how we might take advantage of the new features in our businesses.
Storage has always been a topic requiring immense analysis, testing and reassessment, especially in a virtualized environment. I have been involved with enterprise storage since before I did my first production VMware deployment, but those projects were relatively easy without virtualization factored in.
It’s no mystery that the I/O workloads hitting a storage array in a virtualized environment are far more demanding, and hence harder to manage. Over the past three years, and even before I joined my current company, there have always been challenges scaling the storage system to meet the demands of the business. We have gone through two SAN/NAS hardware upgrades since I started.
Our approach has been a bit different from most, in that our company started virtualizing on DAS before we ever thought about getting a SAN/NAS. So in essence we added the storage to an already existing virtual environment. Because the storage was an afterthought, there hasn’t been much care, until recently, in properly sizing it for the virtual environment or for growth. vSphere 5, with its wealth of storage enhancements, seems like a good time to start fresh.
As the sole storage/virtualization architect and administrator at my company, I am responsible for supplying responsive VMs and storage to application owners, so I am often the first to hear about performance issues. Because of this, and despite our small deployment, I have been investing a lot of time in learning about storage tiering, both manual and automated.
I have been making upgrade purchases and changing our volume, LUN, and VMFS datastore layout to spread the load more evenly across the storage controllers and disks. This is a manual process for us at the moment because we don’t have any fancy technology to automate the task. So far, I would say we have Tiers 1, 1.5, and 2. Adding SSDs would give us a Tier 0 as well, but more on that later.
Going back to what I said about using virtualization for longer than we have had the SAN: this has given us a good sense of which applications have higher visibility, tighter latency requirements, and higher I/O demands than others. In some cases we have performed benchmarks and load tests to determine the I/O requirements. Only now are we aggregating this data to determine whether the spindles we have can supply enough IOPS to support a given application category.
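For anyone doing similar math, here is a minimal sketch of the kind of spindle-count sanity check I mean. The per-spindle IOPS figures and RAID write penalties below are common rules of thumb, not measurements from our array, and the workload mix is hypothetical:

```python
# Rough IOPS sizing sketch. Per-spindle numbers are ballpark rules of thumb,
# not measured values; RAID write penalties are the standard textbook figures.
SPINDLE_IOPS = {"fc_15k": 180, "fc_10k": 140, "sata_7200": 75}
RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def usable_iops(disk_type, spindles, raid, read_pct):
    """Effective host-visible IOPS a RAID group can sustain for a given read/write mix."""
    raw = SPINDLE_IOPS[disk_type] * spindles
    write_pct = 1.0 - read_pct
    # Each host write costs multiple back-end I/Os, so effective IOPS shrink
    # as the workload gets write-heavier.
    return raw / (read_pct + write_pct * RAID_WRITE_PENALTY[raid])

# Example: 14 x 15k FC spindles in RAID 10 serving a 70/30 read/write workload.
print(round(usable_iops("fc_15k", 14, "raid10", 0.70)))  # ~1938 IOPS
```

If the aggregated demand for an application category comes in above a number like that, you know before migration day that the tier needs more spindles.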
So, armed with this data, I am manually creating the new volumes, LUNs, and VMFS datastores on the appropriate FC 15k RPM, FC 10k RPM, and SATA storage. As you can imagine, this is a tedious task. Thankfully our environment doesn’t have a high degree of change at the moment, so it’s not so bad, but in an environment larger than ours… I don’t even want to think about it.
Manual tiering is a good education; it forces you to know your environment and understand how to balance the load across your storage. But it gets old as fast as deploying that 1,000th laptop image, and your time would be better spent providing value to the business.
The real strength of Storage DRS is that it takes these difficult storage placement and migration decisions off the virtualization administrator’s plate. I realize that EMC has a technology called FAST (Fully Automated Storage Tiering), but we are currently on NetApp storage, and NetApp doesn’t have an automated tiering technology yet. However, I am certain it is a feature they are working on.
Storage DRS introduces a new object called the “Datastore Cluster,” which lets you group datastores with similar performance profiles, for example one cluster for SSD storage, one for FC/SAS, and one for SATA. Once these clusters are created, VMware can automatically place new VMs on the appropriate datastore based on capacity and latency, and it can use Storage vMotion to move VMs between datastores to maintain optimal performance.
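To make the placement decision concrete, here is a toy model of that logic as I understand it, in plain Python. This is emphatically not the vSphere API; the datastore names, free-space figures, and latency numbers are invented for illustration:

```python
# Toy model of Storage DRS-style initial placement -- NOT the vSphere API.
# All names, capacities, and latency figures below are made up.
from dataclasses import dataclass

@dataclass
class Datastore:
    name: str
    free_gb: float
    avg_latency_ms: float  # observed device latency

@dataclass
class DatastoreCluster:
    name: str
    datastores: list

    def place(self, vm_size_gb, max_latency_ms):
        """Pick a datastore that satisfies both space and latency thresholds."""
        candidates = [d for d in self.datastores
                      if d.free_gb >= vm_size_gb and d.avg_latency_ms <= max_latency_ms]
        if not candidates:
            raise RuntimeError("No datastore in the cluster meets the requirements")
        # Favor the datastore with the most free space, breaking ties on latency.
        return max(candidates, key=lambda d: (d.free_gb, -d.avg_latency_ms))

# Hypothetical FC tier cluster.
fc_tier = DatastoreCluster("fc-15k-cluster", [
    Datastore("fc15k-ds01", free_gb=420, avg_latency_ms=8.2),
    Datastore("fc15k-ds02", free_gb=610, avg_latency_ms=11.5),
])
print(fc_tier.place(vm_size_gb=100, max_latency_ms=15).name)  # fc15k-ds02
```

In the real product the space-utilization and I/O-latency thresholds are configurable per datastore cluster, which is roughly what the `max_latency_ms` parameter stands in for here.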
Storage DRS was sorely needed to automate storage in private clouds as well as in the infrastructure used to build public clouds. By adding this auto-tiering-style technology at the virtualization layer, VMware has leveled the playing field for hardware vendors still working on their own auto-tiering, while putting automation front and center in the new release.
The new era is going to be all about automation, a cornerstone of the cloud. This is just one element moving us closer to predictively automated IT infrastructure, where we no longer have to care about how things work under the hood as we deliver ITaaS. A kind of AI, if you will. I expect this feature to accelerate our move toward a ubiquitous cloud future. Very exciting times indeed!