I Can’t ‘Contain’ Myself

I blogged about this topic last week, but looking ahead to everything that will be announced at VMworld next week has made me want to revisit the subject and make some predictions on what the show will be all about this year. I couldn't (pardon the pun) 'contain' myself 😉 I do quite a bit of thinking on this topic every year to inform our Marketing team on what our messaging might look like as we approach VMworld, so these ideas have been percolating for some time now…

What exactly do containers mean for the virtualization admin, the storage admin, or anyone else in IT?

Containers such as Docker and Rocket, among other newer-generation technologies, aim to provide abstraction much like virtual machines, but without the overhead of virtualizing an entire OS and all of its supporting drivers. The result is a much more efficient method of application delivery, one that I have blogged about before and have discussed with colleagues such as Scott Drummonds over the past couple of years. Around the time CloudFoundry was first announced, we talked about what application delivery would look like without the need for a full OS. It would make for a higher-performance, transient, web-scale system that delivers the application without all of the fluff required by a typical virtual machine. We didn't know when it would happen, but we definitely glimpsed the possibilities and the implications for IT once this dream came true.

We are starting to see the results of that now with "containerization". VMware is at the forefront of this innovation (along with Docker and CoreOS) with projects such as Lightwave and Photon. You will see these heavily featured at the show, along with other DevOps- and microservices-related topics. I see these trends as a second or third iteration of the cloud native apps paradigm that CloudFoundry hinted at a few years back. Paul Maritz talked quite extensively about this during his tenure at VMware, saying something along the lines of: now that customers have realized the economic benefits of virtualization, they will start re-writing their apps for a cloud native future. It didn't happen right away, but I believe it's telling that we are now seeing the results of that effort. It definitely fits with VMware's "Ready for Any" theme for this year, I think you'll agree.

Storage and the next wave of containerization…

As an emerging, cutting-edge storage provider, we at Coho Data are also innovating around this trend as it relates to storage, especially persistent storage, which is still in its nascent phase. We will not only be demoing some microservices running directly on our array, but we'll also be talking about what a microservices future will look like in the context of storage. Think data-centric, not workload-centric or VM-centric. We care about storing and managing data regardless of the abstraction it sits in.

This is just a taste of what we have in store. Come visit the Coho Data booth (#1713) or attend our CTO Andy Warfield's breakout session on the topic to learn more.

Looking forward to seeing you there!

Hybrid or All-flash?

I've been seeing a lot of commentary lately (actually it's been happening for a while, but now that we're approaching VMworld, I guess I'm more conscious of it) about the cost advantages of hybrid storage systems. At the same time, the all-flash vendors out there are touting the storage efficiency features of their platforms (they have to!) to reach a price point that run-of-the-mill customers can stomach, while still providing very high performance.

There's been lots of FUD claiming that the economics of hybrid systems will always be more cost-effective than all-flash systems. By contrast, the all-flash vendors will say that now, with the announcement of a flash device larger than the biggest hard disk, all-flash will win out once prices come down. Finally, a select few vendors claim that because they sell both hybrid and all-flash systems, they are offering customers choice. I beg to differ.

How about a third option altogether, or if you will, no option at all.

What exactly do I mean by this?

Well, since the announcement of our 2.x release, Coho has been able to host both hybrid AND all-flash nodes within a single cluster, in a single namespace. Not only is this easier for the customer to set up and manage, it's also a much better story from an economic perspective.

Consider that, because we leverage software-defined networking together with software-defined storage, we can start with a single 2-node cluster, unlike our scale-out competitors, which must start with 4 or more. Second, because we can place data in the appropriate tier of storage based on a per-workload view (we present NFS today) of flash utilization over time, we make even more efficient use of the performance and capacity in the system, while simplifying its deployment for the customer. This means that if our analytical data indicates you are running out of high-performance flash, you can expand your system with an all-flash node, OR if the data says you'll run out of capacity while still maintaining the same level of performance, you can expand with a hybrid node. We take a data-centric view of the customer's workloads, so that they are freed from making these choices; and we've found that the message of simplicity is resonating quite well, thank you!
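To make the idea concrete, here's a minimal sketch of the kind of expansion decision that per-workload analytics enable. The function name, thresholds, and numbers are all illustrative assumptions of mine, not Coho's actual analytics code:

```python
# Hypothetical sketch: choosing a node type from utilization analytics.
# Thresholds and names are illustrative, not actual product logic.

def recommend_expansion(flash_used_pct, capacity_used_pct,
                        flash_limit=80.0, capacity_limit=80.0):
    """Suggest an expansion node type based on projected utilization."""
    if flash_used_pct >= flash_limit and capacity_used_pct < capacity_limit:
        return "all-flash node"    # working set is outgrowing the flash tier
    if capacity_used_pct >= capacity_limit:
        return "hybrid node"       # raw capacity is the constraint
    return "no expansion needed"

print(recommend_expansion(85, 40))   # flash-bound cluster
print(recommend_expansion(50, 90))   # capacity-bound cluster
```

The point is simply that the choice is driven by measured data, not by a purchasing decision the customer has to make up front.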

Extend this to next-generation media types such as NVDIMM, PCM, RRAM, MRAM, etc., and you can imagine why having MANY different types of flash and persistent memory in a single cluster and single namespace will become a must-have feature.

Coho Data is on the path to deliver this today. Our platform was built with this in mind from the word “Go”.

It’s time to upgrade your thinking!

Web-scale Economics… and Innovation?!

What is Web-scale?

A good percentage of those of us out there in the trenches of enterprise IT have probably heard the term "Web-scale" thrown around. It, like many IT terms, is equal parts marketing and technology, and hence not so well defined… and as a result, open to interpretation. My take on Web-scale is that it's, first and foremost, a way to architect enterprise IT systems that incorporates the best elements of public clouds. While it is very hard to mimic the architectural scale and resiliency of the clouds run by AWS, Google, Facebook and others, one can easily see the benefits of distributed, shared-nothing architectures, API-driven automation and orchestration, self-healing application stacks… and in Coho's case, closer integration of the network with the storage.

The Coho approach to Web-scale has some unique elements that separate us from the other vendors that purport to do it. Hyperconverged vendors are for the most part confined to growing all datacenter resources simultaneously. Scaling everything at the same time doesn't necessarily make sense unless your environment has very uniform workloads. My guess is that if you are a typical small/medium business or enterprise, your compute, network and storage requirements don't scale at an identical rate, so performance gets left on the table, or you end up licensing software you don't need just to grow your footprint. With Coho, we let the customer scale compute independently of the network and storage. As you add building blocks to a Coho scale-out cluster, you add 40Gbps (or more) of network bandwidth along with multiple TBs of PCIe NVMe flash. This is a hard requirement if you expect the cluster to exhibit linear performance scaling as you add capacity. Adding flash without adequate network bandwidth to push the bits over the wire is a battle lost before it even begins!
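A quick bit of back-of-the-envelope arithmetic shows why the network has to grow with the flash. The NVMe throughput figures below are my own illustrative assumptions, not Coho hardware specs; only the 40Gbps-per-node figure comes from the text above:

```python
# Illustrative arithmetic: each added node should bring enough wire
# speed to drain its own flash, or linear scaling breaks down.
GBIT_BYTES = 1e9 / 8                 # one gigabit per second, in bytes/s

nvme_devices_per_node = 2            # assumed device count
nvme_throughput_gbs = 2.5            # assumed ~GB/s per PCIe NVMe device

flash_gbs = nvme_devices_per_node * nvme_throughput_gbs
net_gbs = 40 * GBIT_BYTES / 1e9      # 40Gbps per node, per the text

print(f"flash: {flash_gbs} GB/s, network: {net_gbs} GB/s per node")
```

With these (assumed) numbers the two are balanced at roughly 5 GB/s each per node; add flash without the matching bandwidth and the extra IOPS simply can't reach the clients.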

This brings us to the economics part of the discussion as it relates to Web-scale…

Converged (non-hyperconverged) systems such as Coho's, which incorporate increased network capacity along with the storage, give customers the ability to combine the best elements of public clouds with the security and performance that can only be achieved with on-premises infrastructure. This simple fact has afforded us an opportunity to talk to customers in the $/GB/mo terms they are likely to see quoted from Amazon and others. The shift toward OPEX pricing is already top of mind for a great many CIOs, so it serves as a convenient reference point when we talk with customers. Even with operational costs figured into the economics, we often talk about prices that are 1/2 to 1/3 the cost of AWS. Now let's add a qualifier… we're not talking Amazon Glacier or the cheapest of the cheap that Amazon offers, but rather the AWS EFS (Elastic File System) service, which is advertised at around $0.30/GB/mo, all the while preserving the jobs of internal IT teams, protecting corporate IP (intellectual property), and providing better performance! Don't even get me started on the costs associated with getting data into and out of AWS once it's in their cloud. Ever heard of data gravity?
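To put some numbers on that claim, here's a back-of-the-envelope comparison. The $0.30/GB/mo EFS rate is from the text above; the footprint, amortization window, and the "1/3 the cost" on-prem rate are illustrative assumptions, not a quote:

```python
# Back-of-the-envelope $/GB/mo comparison (illustrative numbers only).
EFS_RATE = 0.30              # advertised AWS EFS rate at the time, $/GB/mo
onprem_rate = EFS_RATE / 3   # the "1/3 the cost of AWS" end of the claim

capacity_gb = 100_000        # a hypothetical 100 TB footprint
months = 36                  # a three-year amortization window

aws_cost = EFS_RATE * capacity_gb * months
onprem_cost = onprem_rate * capacity_gb * months
print(f"AWS EFS: ${aws_cost:,.0f}")
print(f"On-prem: ${onprem_cost:,.0f} (difference: ${aws_cost - onprem_cost:,.0f})")
```

Over three years on this hypothetical 100 TB footprint, the per-GB-month difference compounds into hundreds of thousands of dollars, and that's before any AWS data-egress charges.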

But wait, there’s more…

Because Coho is creating unique storage services directly on the array, leveraging Docker, Kubernetes, VXLAN and other cutting-edge technologies, we are able to offer alternatives to AWS without the need to move to the public cloud. This is the move toward "microservices" that you may have heard about. As a matter of fact, not only will Coho be demoing these technologies, in the form of on-the-fly transcoding, a search appliance and more, but our CTO, Andy Warfield, will also present a breakout session on this very topic. Why bother going to AWS for services that you can get as free upgrades with a paid support contract?

In my opinion, Coho is not only at the forefront of what Web-scale was intended to deliver, but taking it to a whole new level. Look for us at VMworld (booth 1713) to find out more… we’re looking forward to talking with you!

Come Meet the vSamurai at VMworld

If you've been following me via this blog or elsewhere on this series of tubes, you may know that I have taken on more of a customer- and partner-facing role in the past couple of months. This has given me the opportunity to do more of what I love, which is getting people excited and evangelizing what Coho Data is all about. The storage marketplace today is very crowded, with more start-ups arriving on the scene on what seems like a daily basis. It's hard for me to keep up with all that's going on, let alone for the buyers of the technology themselves. It's not going to get any easier until some major, sudden consolidation happens, but that's mere speculation…

I'm looking forward to talking with members of the VMware and virtualization community, customers, partners, and anyone else who finds their way to VMworld this year. Thinking back about a year and a half, to when I was deciding to leave NetApp and join a company that is truly innovating in the storage market, I saw the potential of what Coho has to offer. Now, I feel a lot of that promise is being realized. This will be our 2nd VMworld, and we'll be upgrading from Silver sponsor last year to Platinum sponsor this year! We're doubling down on VMware, and I am truly excited about what we'll be showing and talking about. I'll be privileged to share the details with you as we get closer to the show…

Until then, if you'd like to set up a meeting with me or one of our other experts, head on over here.

Or if you'd just like to sign up for a chance to win a free pass to attend the show, click below:

vmworld_freepass2015

Why I’m Excited About The Coho DataStream 2.5 Release!

A lot of engineering work has gone into the Coho v2.5 software release. Add to that the fact that we now have 3 distinct hardware offerings, and we've got a pretty extensive portfolio. I've been involved with testing on this release since the Alpha days, and I can honestly say it's our best release yet. I could tell from the beginning, based on my initial testing, that the code was much more robust than in some of the releases from 6 months to a year ago.

Here are the top 3 reasons why I’m most excited about this release:

#1 – Flashfit Analytics (Hit Ratio Curve)

We showed a technical preview of this at VMworld 2014 as well as Storage Field Day 6, and I think it's a truly unique differentiator in the market right now. Our analytics are extremely detailed and can pinpoint the exact amount of flash that will benefit each workload, on a per-workload basis. We are able to see so much detail about flash usage that we could make an educated guess about the application running inside the workload. A bit more work is required before we get there, but the fact that we can see it says a lot about the level of detail captured. The idea with Flashfit is that we give customers the data to decide whether they have sufficient flash for their current working set, need more capacity (a hybrid node), or need more performance (an all-flash node). This will work its way into QoS and storage profiles as we continue developing the feature. Combine this with the ability to choose an all-flash or hybrid node, and we give the customer unparalleled economics and TCO/ROI.
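For the curious, the classic way to build a hit-ratio curve like this from an access trace is the Mattson LRU stack-distance technique. Here's a tiny Python sketch of that textbook approach; to be clear, this is my illustration of the general idea, not Flashfit's actual implementation:

```python
# Textbook Mattson-style sketch: hit ratio vs. cache size from a trace.
# O(n*m) for clarity; real systems use trees or approximate counting.

def hit_ratio_curve(trace, cache_sizes):
    stack = []          # LRU stack: most recently used block at the end
    distances = []      # stack (reuse) distance for each access
    for block in trace:
        if block in stack:
            # Distance = how deep an LRU cache must be for this to hit
            distances.append(len(stack) - stack.index(block))
            stack.remove(block)
        else:
            distances.append(float("inf"))   # cold miss: never hits
        stack.append(block)
    total = len(trace)
    # Hit ratio at size s = fraction of accesses with distance <= s
    return {s: sum(d <= s for d in distances) / total for s in cache_sizes}

trace = ["a", "b", "a", "c", "b", "a", "d", "a"]
print(hit_ratio_curve(trace, [1, 2, 4]))
```

One pass over the trace yields the whole curve at once, which is exactly what makes "how much flash would this workload actually use?" an answerable question rather than a guess.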

#2 – Data Age

The Data Age view is something we also previewed an early version of a while back. It's a bit more abstract, but interesting in that we are able to show a cluster-wide view of how old the data is. You'll find that this graph gives more supporting evidence about the flash working set on the system, and shows that in all but the busiest customer environments, the data that's accessed frequently is a mere fraction of the total on the system. In other words, we give you real-time evidence showing that: 1) you probably don't require an all-flash array, and 2) if you do go with an all-flash option, you're paying a lot of money for a very, very small portion of hot data, while all of the rest would be better served by higher-density media.

#3 – Scalability Improvements

When I first started at Coho, approaching a year and a half ago now, we admittedly had some challenges around scalability. This new release introduces an improved global namespace that allows for orders of magnitude more objects in the cluster, and thus many, many more VMs (workloads). I'm happy to have played a small part by reporting my findings and helping get this prioritized and fixed. I can honestly say that we are truly living up to the promise of a scale-out storage system.

Well, that's it for now. I'm curious which features of DataStream OS v2.5 you're most excited about! Let me know in the comments.
