Coho Data at the Tokyo VMware vForum


Readers who know me personally know that I lived in Japan for almost 7 years (nearly half of my IT career). I’ve blogged a fair bit in Japanese, including about some of the events around the 2011 Tōhoku earthquake and tsunami here & here. Needless to say, Japan holds a special place in my heart, and it’s always great to get back for a visit.

It’s been close to 3 years since I was last back, and I’m excited to return to the Far East to see what’s new and what’s changed in enterprise IT in the years since. I suspect it’s virtually unrecognizable from when I lived there from 2004 to 2011.

In those days, VMware wasn’t used nearly as heavily in Japan as it is today. Not to mention that before the 2011 earthquake, business continuity and disaster recovery were merely an afterthought, despite Japan being one of the most seismically active countries on the planet!

Add to this the fact that the storage market has changed tremendously in the years since. Back then there were really only two major players, EMC and NetApp, and none of the multitude of next-gen storage vendors vying for market share today. Think about that for a minute: no Nimble Storage, no Pure Storage, no SolidFire, no Tegile, no Tintri, no Coho Data, and too many more to name. The new players are disrupting the market and taking share away from the old guard of EMC and NetApp. We all have our strengths, weaknesses, key use cases and varying costs, but one thing rings true: the next-gen storage vendors are innovating at a rate the legacy players are struggling to match!

These are just a couple of reasons why I am so excited for Coho Data to be a part of the vForum in Tokyo this year. I attended for 2-3 years while I lived there, but I suspect a whole lot has changed since then, especially seen from the vendor’s point of view. I’m really looking forward to disrupting the storage market in Japan, much like we’re doing here in North America.





I Can’t ‘Contain’ Myself


I blogged about this topic last week, but looking ahead to everything that will be announced at VMworld next week made me want to revisit the subject and make some predictions about what the show will be all about this year. I couldn’t (pardon the pun) ‘contain’ myself 😉 I do quite a bit of thinking on this every year to inform our Marketing team on what our messaging might look like as we approach VMworld, so these topics have been on my mind for some time now…

What exactly do containers mean for the virtualization admin, the storage admin, or anyone else?

Containers such as Docker and Rocket (rkt) aim to provide abstraction much like virtual machines, but without the overhead of virtualizing an entire OS and all of its supporting drivers. That makes them a much more efficient method for application delivery, something I have blogged about before and discussed with colleagues such as Scott Drummonds over the past couple of years. Around the time Cloud Foundry was first announced, we talked about what application delivery would look like without the need for a full OS. Obviously it would make for a higher-performance, transient, web-scale system that delivers the application without all of the fluff required by a typical virtual machine. We didn’t know when it would happen, but we definitely glimpsed the possibilities and the implications for IT once that dream came true.

We are starting to see the results now, with “containerization”. VMware is at the forefront of this innovation (along with Docker and CoreOS) with projects such as Lightwave and Photon. You will obviously see these heavily featured at the show, along with other DevOps- and microservices-related topics. I see these trends as a 2nd or 3rd iteration of the cloud-native apps paradigm that Cloud Foundry hinted at a few years back. Paul Maritz talked quite extensively about this during his tenure at VMware, saying something along the lines of: now that customers have realized the economic benefits of virtualization, they will start re-writing their apps for a cloud-native future. It didn’t happen right away, but I believe it’s telling that we are now seeing the results of that effort. This definitely fits with VMware’s “Ready for Any” theme for this year, I think you’ll agree.

Storage and the next wave of containerization…

As an emerging, cutting-edge storage provider, we at Coho Data are also innovating around this trend as it relates to storage, especially persistent storage for containers, which is still in its nascent phase. We will not only be demoing some microservices running directly on our array, but also talking about what a microservices future will look like in the context of storage. Think data-centric, not workload-centric or VM-centric. We care about storing and managing data regardless of the abstraction it sits in.

This is just a taste of what we have in store. Come visit the Coho Data booth (#1713) or attend our CTO Andy Warfield’s breakout session on the topic to learn more.

Looking forward to seeing you there!



Hybrid or All-flash?

I’ve been seeing a lot of commentary of late (actually it’s been happening for a while, but now that we’re approaching VMworld, I guess I’m more conscious of it) about the cost advantages of hybrid storage systems. At the same time, the all-flash vendors out there are touting the storage efficiency features of their platforms (they have to!) to get to a price point that run-of-the-mill customers can stomach, while still providing very high performance.

There’s been lots of FUD claiming that the economics of hybrid systems will always be more cost-effective than all-flash systems. By contrast, the all-flash vendors will say that with the announcement of a flash device larger than the biggest hard disk, all-flash will win out once prices come down. Finally, a select few vendors falsely claim that because they sell both hybrid and all-flash systems, they are offering customer choice. I beg to differ.

How about a third option altogether, or if you will, no option at all?

What exactly do I mean by this?

Well, since our 2.x release, Coho has been able to host both hybrid AND all-flash nodes within a single cluster, in a single namespace. Not only is this easier for the customer to set up and manage, it’s a much better story from an economic perspective as well.

Consider that, because we leverage software-defined networking together with software-defined storage, we can start with a single 2-node cluster, unlike our scale-out competitors that must start with 4 or more. Second, because we place data in the appropriate tier of storage based on a per-workload view (we present as NFS today) of flash utilization over time, we make more efficient use of the performance and capacity in the system while simplifying its deployment for the customer. This means that if our analytics indicate you are running out of high-performance flash, you can expand the system with an all-flash node, OR, if they indicate you’ll run out of capacity while still maintaining the same level of performance, you can expand with a hybrid node. We take a data-centric view of the customer’s workloads, so they are freed from making these choices; and we’ve found that the message of simplicity is resonating quite well, thank you!
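To make the expansion logic above concrete, here’s a minimal sketch of how an analytics-driven “which node should I add?” decision could work. The function name, the linear projection, and the 90% thresholds are all my illustrative assumptions, not Coho’s actual algorithm.

```python
# Hypothetical sketch of the expansion decision described above.
# Thresholds, projection method, and names are illustrative only.

def recommend_expansion(flash_util_trend, capacity_util_trend, horizon=30):
    """Given daily utilization samples (0.0-1.0) for the flash tier and
    for raw capacity, project linearly `horizon` days out and recommend
    which node type to add, if any."""
    def projected(samples):
        if len(samples) < 2:
            return samples[-1]
        daily_growth = (samples[-1] - samples[0]) / (len(samples) - 1)
        return samples[-1] + daily_growth * horizon

    flash_proj = projected(flash_util_trend)
    cap_proj = projected(capacity_util_trend)

    if flash_proj >= 0.9 and flash_proj >= cap_proj:
        return "add all-flash node"   # performance tier exhausting first
    if cap_proj >= 0.9:
        return "add hybrid node"      # raw capacity exhausting first
    return "no expansion needed"

# Flash utilization climbing fast, capacity nearly flat:
print(recommend_expansion([0.5, 0.6, 0.7], [0.3, 0.31, 0.32]))
# → add all-flash node
```

The point isn’t the specific math; it’s that the system, not the customer, holds the data needed to make the hybrid-vs-all-flash call.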

Extend this to next-generation memory types such as NVDIMM, PCM, RRAM, MRAM, etc., and you can imagine why supporting MANY different types of flash and persistent memory in a single cluster, single namespace will become a must-have feature.

Coho Data is on the path to deliver this today. Our platform was built with this in mind from the word “Go”.

It’s time to upgrade your thinking!




Web-scale Economics… and Innovation?!


What is Web-scale?

A good percentage of those of us out there in the trenches of enterprise IT have probably heard the term “Web-scale” thrown around. Like many IT terms, it is equal parts marketing and technology, hence not so well defined… and as a result, open to interpretation. My take is that Web-scale is, first and foremost, a way to architect enterprise IT systems that incorporates the best elements of public clouds. While it is very hard to mimic the architectural scale and resiliency of the clouds run by AWS, Google, Facebook and others, one can easily see the benefits of distributed, shared-nothing architectures, API-driven automation and orchestration, self-healing application stacks… and in Coho‘s case, closer integration of the network with the storage.

The Coho approach to Web-scale has some unique elements that separate us from the other vendors that purport to do it. Hyperconverged vendors are for the most part confined to growing all datacenter resources simultaneously. Scaling all datacenter resources at the same time doesn’t necessarily make sense unless your environment has very uniform workloads. My guess is that if you are a typical small/medium business or enterprise, your compute, network and storage requirements don’t grow at identical rates, so performance gets left on the table, or you end up licensing software you don’t need in order to grow your footprint. With Coho, we allow the customer to scale compute independently of network and storage. As you add building blocks to a Coho scale-out cluster, you add 40Gbps (or more) of network bandwidth along with multiple TBs of PCIe NVMe flash. This is a hard requirement if you expect the cluster to exhibit linear performance scaling as you add capacity. Adding flash without adequate network bandwidth to push the bits over the wire is a war lost before the battle even begins!
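A quick back-of-the-envelope check shows why bandwidth has to grow with flash. The device and network figures below are my own illustrative assumptions (roughly typical PCIe NVMe read rates of the era), not Coho specifications:

```python
# Sanity check: does a scale-out building block ship enough network
# bandwidth to actually expose its flash? All numbers are illustrative
# assumptions, not vendor specifications.

def node_is_balanced(num_devices, device_read_gbs, network_gbps):
    """True if the node's network can carry its aggregate flash read rate.
    device_read_gbs is per-device sequential read throughput in GB/s."""
    flash_gbps = num_devices * device_read_gbs * 8  # GB/s -> Gbps
    return network_gbps >= flash_gbps

# Two NVMe devices at ~2.5 GB/s reads vs. 40 Gbps of network per node:
print(node_is_balanced(2, 2.5, 40))   # 40 Gbps of flash, 40 Gbps of wire
```

Double the flash without touching the network and the check fails, which is exactly the “war lost before the battle” scenario above.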

This brings us to the economics part of the discussion as it relates to Web-scale…

Converged (as opposed to hyperconverged) systems that incorporate increased network capacity along with the storage, such as Coho’s, give customers the ability to combine the best elements of public clouds with the security and performance that can only be achieved with on-premises infrastructure. This simple fact lets us talk to customers in the $/GB/mo terms they are likely to see quoted by Amazon and others. The shift toward OPEX pricing is already top of mind for a great many CIOs, so it serves as a convenient reference point when we talk with customers. Even with operational costs figured into the economics, we often talk about prices that are 1/2 to 1/3 the cost of AWS. Now let’s add a qualifier… we’re not talking Amazon Glacier or the cheapest of the cheap that Amazon offers, but rather the AWS EFS (Elastic File System) service, which is advertised at around $0.30/GB/mo. All the while, we preserve the jobs of internal IT teams, keep corporate IP (intellectual property) secure, and provide better performance! Don’t even get me started on the costs associated with getting data into and out of AWS once it’s in their cloud. Ever heard of data gravity?
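The arithmetic here is simple enough to sketch. The EFS rate is the ~$0.30/GB/mo cited above; the on-prem all-in rate (hardware amortization plus operations) is a hypothetical figure I’ve picked to land in the 1/2-to-1/3 range we typically discuss:

```python
# Rough $/GB/mo comparison from the paragraph above. The on-prem rate
# is a hypothetical all-in estimate, not a published price.

def monthly_cost(usable_gb, price_per_gb_mo):
    """Monthly storage bill for a given usable capacity."""
    return usable_gb * price_per_gb_mo

aws_efs = monthly_cost(100_000, 0.30)   # 100 TB on EFS at ~$0.30/GB/mo
on_prem = monthly_cost(100_000, 0.12)   # assumed all-in on-prem rate

print(f"EFS:     ${aws_efs:,.0f}/mo")
print(f"On-prem: ${on_prem:,.0f}/mo ({aws_efs / on_prem:.1f}x cheaper)")
```

And that’s before any data-transfer charges for moving bits into or out of AWS, which only widen the gap.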

But wait, there’s more…

Since Coho is innovating by creating unique storage services directly on the array, leveraging Docker, Kubernetes, VXLAN and other cutting-edge technologies, we are able to offer alternatives to AWS without the need to move to the public cloud. This is the move toward “microservices” that you may have heard about. As a matter of fact, not only will Coho be demoing these technologies, in the form of on-the-fly transcoding, a search appliance and more, but our CTO, Andy Warfield, will also present a breakout session on this very topic. Why bother going to AWS for services that you can get as free upgrades with a paid support contract?

In my opinion, Coho is not only at the forefront of what Web-scale was intended to deliver, but taking it to a whole new level. Look for us at VMworld (booth 1713) to find out more… we’re looking forward to talking with you!



Come Meet the vSamurai at VMworld


If you’ve been following me via this blog or elsewhere on this series of tubes, you may know that I have taken on more of a customer- and partner-facing role over the past couple of months. This has given me the opportunity to do more of what I love: getting people excited and evangelizing what Coho Data is all about. The storage marketplace today is very crowded, with more start-ups arriving on the scene at what seems like a daily rate. It’s hard for me to keep up with all that’s going on, let alone the buyers of the technology themselves. It’s not going to get any easier until some major or sudden consolidation happens, but that’s mere speculation…

I’m looking forward to talking with members of the VMware and virtualization community, customers, partners, and anyone else who finds their way to VMworld this year. Thinking back about a year and a half, to when I was deciding to leave NetApp and join a company that is truly innovating in the storage market, I saw the potential that Coho has to offer. Now I feel that a lot of that promise is being realized. This will be our 2nd VMworld, and we’ll be upgrading from Silver sponsor last year to Platinum sponsor this year! We’re doubling down on VMware, and I am truly excited about what we’ll be showing and talking about. I’ll share the details as we get closer to the show…

Until then, if you’d like to set up a meeting with me or one of our other experts, head on over here.

Or if you’d just like to sign-up for a chance to win a free pass to attend the show, click below:



