If you deal with cloud computing in any capacity, you likely fall into either the VMware camp or the OpenStack camp (or both) today. Some would say these solutions sit at opposite ends of the cloud software spectrum. You may also have heard the phrase “Pets vs. Cattle” in reference to servers: a Pet has a name and requires constant patching, updating, and altogether expensive maintenance, whereas Cattle are nameless and can be removed from the system, replaced with new gear, and be back online doing their job without skipping a beat.
Well, what if it were possible to have a zoo and a farm all in one? And what about storage?!
Normally when you think of storage, its persistent nature requires it to be a Pet and not Cattle, but with today’s more modern storage architectures, I’d like to propose that this isn’t necessarily the case. You can have both persistence of data and statelessness of the underlying components at the same time. Bear with me for a minute while I reason through this…
With a scale-out, shared-nothing node architecture, you can add and remove nodes on the fly without worrying about the health of your data. As you scale to a larger number of nodes, you care about each individual node even less: despite the fact that there is a greater quantity of data in the system, the importance of any one storage node is reduced. Add to that the fact that a well-built self-healing, auto-scaling system can heal itself faster when there are more “cattle” on the farm.
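To make that self-healing property concrete, here is a minimal toy model (not Coho's actual implementation; node names and the replication factor are made up for illustration) of why losing a node in a replicated, shared-nothing cluster doesn't mean losing data: the surviving replicas are simply copied to the remaining nodes.

```python
import random

REPLICAS = 2  # assumed replication factor for this sketch


class Cluster:
    """Toy model of a shared-nothing, self-healing storage cluster."""

    def __init__(self, node_names):
        self.nodes = {name: set() for name in node_names}

    def write(self, obj):
        # Place each object on REPLICAS distinct nodes.
        for name in random.sample(sorted(self.nodes), REPLICAS):
            self.nodes[name].add(obj)

    def remove_node(self, name):
        # "Cattle" behavior: losing a node triggers re-replication,
        # not data loss -- surviving copies are duplicated elsewhere.
        lost = self.nodes.pop(name)
        for obj in lost:
            candidates = [n for n, objs in self.nodes.items() if obj not in objs]
            if candidates:
                self.nodes[random.choice(candidates)].add(obj)

    def replica_count(self, obj):
        return sum(obj in objs for objs in self.nodes.values())


cluster = Cluster(["node-a", "node-b", "node-c", "node-d"])
for i in range(10):
    cluster.write(f"object-{i}")

# Pull a node out of the farm; every object still has REPLICAS copies.
cluster.remove_node("node-a")
```

Adding a node is the mirror image: new capacity joins the pool and the system rebalances onto it, which is why the individual node never needs to be a Pet.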
As a function of this architecture you can also remove nodes in much the same way, allowing you to return leased equipment or install newer, denser, more performant nodes, with everything working in a heterogeneous fashion and without skipping a beat. This is great from a TCO perspective as well. It’s much better than being locked in with a fixed amount of high-performance flash and capacity spinning disk for the next 3-5 year spending cycle. Extend this one step further and you can imagine automatically ordering new hardware to expand the system, adding it, then shipping the old gear back to the leasing company in a regular, predictable fashion.
One element of cloud-scale systems that allows this to happen is extensibility: being able to easily extend a system beyond its original reason for being. Typically this is enabled via APIs, and all of today’s next-generation storage systems are built from the ground up to support this type of integration. Being able to organically adjust to customers’ needs quickly by offering APIs, toolkits, and frameworks is a key ingredient in delivering web scale!
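As a sketch of what API-driven integration looks like from the operator's side, here is how a script might talk to a storage system's REST API to admit a new node into the pool. The endpoint, base URL, and payload here are entirely hypothetical, invented for illustration; they are not any vendor's real API.

```python
import json
import urllib.request

# Hypothetical management endpoint -- illustrative only.
BASE_URL = "https://storage.example.com/api/v1"


def build_request(path, payload=None, token="example-token"):
    """Build an authenticated JSON API request (without sending it)."""
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        BASE_URL + path,
        data=data,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST" if data else "GET",
    )


# e.g. ask the system to admit a newly racked node into the pool
req = build_request("/nodes", {"address": "10.0.0.42", "action": "join"})
```

The point is less the specific call than the pattern: because expansion is just an API request, it can be scripted into the same automation that orders, provisions, and retires hardware.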
The interesting part of this whole discussion is that despite the importance of persistence in the storage world, given the right architecture we CAN indeed have the best of both worlds. Look at Coho’s scale-out enterprise storage architecture and you can see that we combine elements of both Pets and Cattle, supporting the best of each, as any modern storage system should…
Here are some examples:
- Pets like NIC bonding for high availability – we’re cool with that
- Pets like to be managed carefully and thoughtfully – we build intelligence into our storage, but also give visibility to the admin
- Cattle can be auto-scaled by just plugging in a new node and allowing the system to grow – we do this as well
- Cattle are designed to accommodate failures – we build our failure domains across physical boundaries so that there is no single point of failure
- Pets like to have constant uptime – refer to the previous Cattle feature above; accommodating failure means the system stays online when a component fails
- Pets like to have high availability – we do this as well, allocating a minimum of two physical nodes in a shared-nothing hardware design
- Cattle work only when there is a shared-nothing architecture – utilizing independent nodes with object-based storage allows us to provide this as well!
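The "failure domains across physical boundaries" point above can be sketched as a placement policy: never let two replicas of the same object land inside one physical failure domain. The node and rack names below are made up for illustration, not a real topology.

```python
import itertools

# Toy topology: each node belongs to a physical failure domain (a rack here,
# but it could be a chassis or power zone). Names are illustrative.
NODES = {
    "node-1": "rack-a",
    "node-2": "rack-a",
    "node-3": "rack-b",
    "node-4": "rack-b",
}


def place_replicas(nodes, replicas=2):
    """Choose nodes so every replica lands in a distinct failure domain."""
    for combo in itertools.combinations(sorted(nodes), replicas):
        domains = {nodes[n] for n in combo}
        if len(domains) == replicas:
            return combo
    raise RuntimeError("not enough independent failure domains")


placement = place_replicas(NODES)
```

With placement constrained this way, losing an entire rack still leaves a live copy of every object, which is what lets the system stay online when a component fails.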
At Coho we see the need for these differing approaches to computing from a storage perspective. We started out providing storage for VMware workloads, and customers seem to like how we’re delivering on that so far. In addition, we see the need to support OpenStack from a storage perspective as well and are currently offering a tech preview of our OpenStack support. As a matter of fact, if you’re interested in becoming a BETA participant for OpenStack, you should definitely get in contact with us.
Thanks for reading!