Thoughts on the NetApp-SolidFire Acquisition

General Industry Observations

I’ve been reading and hearing a lot of rumors lately about change in the storage industry. “IPO” and “acquisition” are really the only two ways forward, and given the stumbles of Pure Storage, Violin and others, an IPO is simply not in the cards for most of the independents still looking to find their product/market fit. NetApp’s acquisition of SolidFire is only the latest in the ongoing consolidation of the storage space; it won’t be the last.

While this is definitely not the first time NetApp has acquired a company for a technology they needed, this is one that many see as absolutely necessary for NetApp to keep up with & remain relevant against the pure-play AFA vendors. I don’t normally comment on this kind of activity, but having spent the better part of three years at NetApp, I had to weigh in: this is their first major acquisition since I left, and it comes on the heels of all of the management shake-up over the past year.

Is SolidFire the Right Fit for NetApp?

No one really knows the answer to this or how it will pan out, but I do know that both SolidFire and NetApp have a lot of smart people working to make it happen; many of them I consider very close industry friends. I do see areas where NetApp is weak technically (block) and market-wise (service providers) where SolidFire could help fill a gap. Will SolidFire remain a separate product line? I hope NetApp has learned its lesson on that one.

Both NetApp and SolidFire have “scale-out” technologies that would seem to conflict with one another; I don’t see this as a strength.

What’s interesting to me about both NetApp and SolidFire is their reliance on dedupe and/or compression to reach optimal pricing economics across their product lines. I’ve also seen quite a bit of commentary lately about how dedupe will become less and less important as next-generation media decreases in cost and increases in density. As that happens over the next couple of years, their value prop will resonate less and less. The real winners will be vendors like Coho Data who have optimized (and continue to optimize) their stack for performance regardless of the storage medium, whether it be disk, PCIe flash, NVMe flash, SATA SSD, SAS SSD or even NVDIMM. Will we add dedupe? Absolutely, yes (it’s being worked on right now). But it’s really, really nice to know that however we implement it, we’ll rely on it less than the other guys. Add to this the fact that those leveraging complementary technologies, like SDN, to scale out are better positioned to grow from a performance perspective.

Who are the Winners and Losers?

The winners are NetApp in the short term, as they remove a competitor in the flash market and temporarily silence those who say they aren’t innovating and are no longer relevant in today’s storage market. The other winners are the SolidFire shareholders, of course. NetApp customers can also be considered winners, albeit with more choices and less clarity if they continue on as NetApp customers. The other storage start-ups could be considered winners as well, since the deal legitimizes them, brings them unhappy SolidFire and NetApp customers, and potentially starts a bidding war for the next best thing…

The losers are SolidFire customers, of course. SolidFire had a unique product and a solid customer base in the service provider market, but it remains unclear how NetApp will integrate them into the organization. What also remains to be seen is whether SolidFire can keep up the pace of innovation required to match Pure Storage and the other scrappy start-ups with unique, game-changing technology that will now challenge NetApp for the next generation of the storage market…

We live in interesting times, friends; interesting times indeed!

Coho Data at the Tokyo VMware vForum

Those readers who know me personally know that I lived in Japan for almost 7 years (nearly half of my IT career). I’ve blogged a fair bit in Japanese, and about some of the events around the 2011 Tōhoku earthquake and tsunami here & here. Needless to say, Japan holds a special place in my heart, and it’s always great to get back for a visit.

It’s been close to 3 years since I was last back. I’m excited about visiting the Far East to see what’s new and what’s changed in enterprise IT in the years since. I suspect it’s virtually unrecognizable from when I lived there from 2004 to 2011.

In those days, VMware wasn’t used nearly as heavily in Japan as it is today. Not to mention that Business Continuity and Disaster Recovery were merely an afterthought before the 2011 earthquake, despite Japan being one of the most seismically active countries on the planet!

Add to this the fact that the storage market has changed tremendously since then. Back then there were really only two major players, EMC and NetApp, and none of the multitude of next-gen storage vendors vying for market share like there are today. Think about that for a minute: no Nimble Storage, no Pure Storage, no SolidFire, no Tegile, no Tintri, no Coho Data, and too many more to name. The new guys are being disruptive and taking market share away from the old guard of EMC and NetApp. We all have our strengths and weaknesses, key use cases and varying costs, but one thing rings true: the next-gen storage vendors are innovating at a rate the legacy guys are struggling to match!

These are just a couple of the reasons why I am so excited for Coho Data to be a part of vForum Tokyo this year. I attended for 2-3 years while I lived there, but I suspect a whole lot has changed since then, especially from the vendor point-of-view. I’m really looking forward to disrupting the storage market in Japan, much like we’re doing here in North America.

I Can’t ‘Contain’ Myself

I blogged about this topic last week, but looking ahead to everything that will be announced at VMworld next week has made me want to revisit the subject and make some predictions about what the show will be all about this year. I couldn’t (pardon the pun) ‘contain’ myself 😉 I do quite a bit of thinking around this every year to help inform our Marketing team on what our messaging should look like as we approach VMworld, so these topics have been on my mind for some time now…

What exactly do containers mean for the virtualization admin, storage admin or otherwise? 

Containers such as Docker and Rocket, among other newer-generation technologies, aim to provide abstraction much like virtual machines, but without the added overhead of virtualizing an entire OS and all of its supporting drivers. They offer a much more efficient method of application delivery, which I have blogged about before and discussed with colleagues, such as Scott Drummonds, over the past couple of years. Around the time CloudFoundry was first announced, we talked about what application delivery would look like without the need for an OS: a higher-performance, transient, web-scale system that delivers the application without all of the fluff required by a typical virtual machine. We didn’t know when it would happen, but we definitely glimpsed the possibilities and implications for IT once this dream came true. We are starting to see the results of that now, with “containerization”.

VMware is at the forefront of this innovation (along with Docker and CoreOS) with projects such as Lightwave and Photon. You will obviously see these heavily featured at the show, along with other DevOps- and microservices-related topics. I see these trends as a 2nd or 3rd iteration of the cloud-native apps paradigm that CloudFoundry hinted at a few years back. Paul Maritz talked quite extensively about this during his tenure at VMware, saying something along the lines of: now that customers have realized the economic benefits of virtualization, they will start re-writing their apps for a cloud-native future. It didn’t happen right away, but I believe it’s telling that we are now seeing the results of that effort. It definitely fits with VMware’s “Ready for Any” theme this year, I think you’ll agree.

Storage and the next wave of containerization…

As an emerging, cutting-edge storage provider, we at Coho Data are also innovating around this trend as it relates to storage, especially persistent storage, which is still in its nascent phase. We will not only be demoing some microservices running directly on our array, but we’ll also be talking about what the microservices future looks like in the context of storage. Think data-centric, not workload-centric or VM-centric. We care about storing and managing data regardless of the abstraction it sits in.

This is just a taste of what we have in store. Come visit the Coho Data booth (#1713) or attend our CTO Andy Warfield’s breakout session on the topic to learn more.

Looking forward to seeing you there!

Hybrid or All-flash?

I’ve been seeing a lot of commentary of late (it’s actually been going on for a while, but with VMworld approaching, I guess I’m more conscious of it) about the cost advantages of hybrid storage systems. At the same time, the all-flash vendors out there are touting the storage efficiency features of their platforms (they have to!) to get to a price point that run-of-the-mill customers can stomach, while providing very high performance.

There’s been lots of FUD about how the economics of hybrid systems will always beat those of all-flash systems. By contrast, the all-flash vendors will say that now, with the announcement of a flash device larger than the biggest hard disk, all-flash will win out once prices come down. Finally, a select few vendors claim that because they offer both hybrid and all-flash systems, they are giving customers choice. I beg to differ.

How about a third option altogether, or if you will, no option at all?

What exactly do I mean by this?

Well, since the announcement of our 2.x release, Coho has been able to host both hybrid AND all-flash nodes within a single cluster, in a single namespace. Not only is this easier for the customer to set up and manage, it’s also a much better story from an economic perspective.

Consider that, because we leverage software-defined networking together with software-defined storage, we can start with a single 2-node cluster, unlike our scale-out competitors, which must start with 4 or more nodes. Second, because we can place data in the appropriate tier of storage based on a per-workload view (we present NFS today) of flash utilization over time, we make more efficient use of the performance and capacity in the system while simplifying its deployment for the customer. This means that if our analytical data indicates you are running out of high-performance flash, you can expand your system with an all-flash node; if it indicates you’ll run out of capacity while still maintaining the same level of performance, you can expand with a hybrid node. We take a data-centric view of our customers’ workloads so that they are freed from making these choices; and we’ve found that the message of simplicity is resonating quite well, thank you!
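The expansion decision described above can be sketched in a few lines. This is purely illustrative of the logic, not Coho’s actual analytics; the function name and the 80% thresholds are my own hypothetical choices:

```python
# Illustrative sketch of the expand-decision logic described above.
# Names and thresholds are hypothetical, not Coho's actual analytics.

def recommend_expansion(flash_used_pct, capacity_used_pct,
                        flash_threshold=80, capacity_threshold=80):
    """Suggest which node type to add to the cluster."""
    if flash_used_pct >= flash_threshold:
        # Workloads are exhausting high-performance flash first.
        return "all-flash node"
    if capacity_used_pct >= capacity_threshold:
        # Performance is fine; raw capacity is the bottleneck.
        return "hybrid node"
    return "no expansion needed"

print(recommend_expansion(85, 60))  # flash-bound cluster
print(recommend_expansion(40, 90))  # capacity-bound cluster
```

The point is that the customer never has to reason about tiers up front; the analytics drive the recommendation.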

Expand this to next-generation media types such as NVDIMM, PCM, RRAM, MRAM, etc., and you can imagine why having MANY different types of flash memory in a single cluster, single namespace will become a must-have feature.

Coho Data is on the path to deliver this today. Our platform was built with this in mind from the word “Go”.

It’s time to upgrade your thinking!

Web-scale Economics… and Innovation?!

What is Web-scale?

A good percentage of those of us out there in the trenches of enterprise IT have probably heard the term “Web-scale” thrown around. Like many IT terms, it is equal parts marketing term and technical term, and hence not-so-well-defined… and as a result, open to interpretation. My take on Web-scale is that it’s, first and foremost, a way to architect enterprise IT systems that incorporates the best elements of public clouds. While it is very hard to mimic the architectural scale and resiliency of the clouds run by AWS, Google, Facebook and others, one can easily see the benefits of distributed, shared-nothing architectures, API-driven automation and orchestration, self-healing application stacks… and in Coho‘s case, closer integration of the network with the storage.

The Coho approach to Web-scale has some unique elements that separate us from the other vendors that purport to do it. Hyperconverged vendors are, for the most part, confined to growing all datacenter resources simultaneously. Scaling everything at the same time doesn’t necessarily make sense unless your environment has very uniform workloads. My guess is that if you are a typical small/medium business or enterprise, your compute, network and storage requirements don’t scale at an identical rate, so performance gets left on the table, or you end up licensing software you don’t need in order to grow your footprint. With Coho, we allow the customer to scale compute independently of the network and storage. As you add building blocks to a Coho scale-out cluster, you add 40Gbps (or more) of network bandwidth along with multiple TBs of PCIe NVMe flash. This is a hard requirement if you expect the cluster to exhibit linear performance scaling as you add capacity. Adding flash without adequate network bandwidth to push the bits over the wire is a war lost before the battle even begins!

This brings us to the economics part of the discussion as it relates to Web-scale…

Converged (non-hyperconverged) systems that incorporate increased network capacity along with the storage, such as Coho’s, give customers the ability to combine the best elements of public clouds with the security and performance that can only be achieved with on-premises infrastructure. This simple fact has afforded us the opportunity to talk to customers in the $/GB/mo terms they are likely to see quoted by Amazon and others. The shift toward OPEX pricing is already top of mind for a great many CIOs, so it serves as a convenient reference point when we talk with customers. Even with operational costs figured into the economics, we often talk about prices that are 1/2 to 1/3 the cost of AWS. Now let’s add a qualifier: we’re not talking about Amazon Glacier or the cheapest of the cheap that Amazon offers, but rather the AWS EFS (Elastic File System) service, which is advertised at around $0.30/GB/mo, all the while preserving the jobs of the internal IT team, preserving corporate IP (intellectual property) security, and providing better performance! Don’t even get me started on the costs associated with getting data into and out of AWS once it’s in their cloud. Ever heard of data gravity?
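As a back-of-the-envelope illustration of the comparison: the EFS price is the roughly $0.30/GB/mo figure cited above, while the on-prem figure is a hypothetical all-in number I’ve chosen to illustrate the “1/2 to 1/3 the cost” claim, not an actual Coho quote:

```python
# Rough $/GB/month comparison, per the discussion above.
# EFS price is the ~$0.30/GB/mo figure cited in the text; the on-prem
# number is hypothetical, illustrating the "1/2 to 1/3 the cost" claim.

EFS_PER_GB_MO = 0.30
ONPREM_PER_GB_MO = 0.12     # assumed all-in on-prem cost (hardware + ops)

capacity_gb = 100_000       # a 100 TB working set
efs_monthly = capacity_gb * EFS_PER_GB_MO
onprem_monthly = capacity_gb * ONPREM_PER_GB_MO

print(f"EFS:     ${efs_monthly:,.0f}/mo")
print(f"On-prem: ${onprem_monthly:,.0f}/mo "
      f"({onprem_monthly / efs_monthly:.0%} of EFS)")
```

And that is before counting data-transfer charges for moving data into and out of AWS, which only widen the gap.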

But wait, there’s more…

Because Coho is innovating by creating unique storage services directly on the array, leveraging Docker, Kubernetes, VXLAN and other cutting-edge technologies, we are able to offer alternatives to AWS without the need to move to the public cloud. This is the move toward “microservices” that you may have heard about. In fact, not only will Coho be demoing these technologies, in the form of on-the-fly transcoding, a search appliance and more, but our CTO, Andy Warfield, will also present a breakout session on this very topic. Why bother going to AWS for services you can get as free upgrades with a paid support contract?

In my opinion, Coho is not only at the forefront of what Web-scale was intended to deliver, but is taking it to a whole new level. Look for us at VMworld (booth 1713) to find out more… we’re looking forward to talking with you!
