
Hybrid or All-flash?

I’ve been seeing a lot of commentary lately (actually it’s been happening for a while, but now that we’re approaching VMworld, I guess I’m more conscious of it) about the cost advantages of hybrid storage systems. At the same time, the all-flash vendors out there are touting the storage efficiency features of their platforms (they have to!) to reach a price point that run-of-the-mill customers can stomach, while still providing very high performance.

There’s been plenty of FUD claiming that the economics of hybrid systems will always be more cost effective than all-flash systems. By contrast, the all-flash vendors will say that now, with the announcement of a flash device larger than the biggest hard disk, all-flash will win out once prices come down. Finally, a select few vendors claim that because they sell both hybrid and all-flash systems, they’re offering customers choice. That choice is a false one, and I beg to differ.

How about a third option altogether, or if you will, no option at all?

What exactly do I mean by this?

Well, since the announcement of our 2.x release, Coho has been able to host both hybrid AND all-flash nodes within a single cluster, in a single namespace. Not only is this easier for the customer to set up and manage, it’s also a much better story from an economic perspective.

Consider this: because we leverage software-defined networking together with software-defined storage, we can start with a single 2-node cluster, unlike our scale-out competitors, which must start with 4 or more. Second, because we place data in the appropriate tier of storage based on a per-workload view (we present NFS today) of flash utilization over time, we make more efficient use of the performance and capacity in the system while simplifying its deployment for the customer. This means that if our analytical data indicates you are running out of high-performance flash, you can expand your system with an all-flash node; if the analytical data says you’ll run out of capacity while still maintaining the same level of performance, you can expand with a hybrid node. We take a data-centric view of customers’ workloads, so they are freed from making these choices; and we’ve found that the message of simplicity is resonating quite well, thank you!
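To make that decision logic concrete, here’s a minimal sketch of how two analytics signals could drive an expansion recommendation. The function name, inputs, and thresholds are all hypothetical illustrations, not Coho’s actual implementation:

```python
def recommend_expansion(working_set_gb, flash_gb, used_gb, total_gb,
                        flash_threshold=0.9, capacity_threshold=0.8):
    """Hypothetical sketch: turn two analytics signals into an expansion hint.

    If the hot working set is outgrowing the flash tier, an all-flash node
    helps most; if raw capacity is filling up first, a hybrid node does.
    """
    flash_pressure = working_set_gb / flash_gb      # how full is the fast tier
    capacity_pressure = used_gb / total_gb          # how full is the cluster
    if flash_pressure >= flash_threshold and flash_pressure >= capacity_pressure:
        return "all-flash node"
    if capacity_pressure >= capacity_threshold:
        return "hybrid node"
    return "no expansion needed"
```

A workload whose hot working set nearly fills the flash tier would get an all-flash recommendation even with plenty of raw capacity left, and vice versa.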

Extend this to next-generation media types such as NVDIMM, PCM, RRAM, MRAM, etc., and you can imagine why hosting MANY different types of flash memory in a single cluster and single namespace will become a must-have feature.

Coho Data is on the path to deliver this today. Our platform was built with this in mind from the word “Go”.

It’s time to upgrade your thinking!



Why I’m Excited About The Coho DataStream 2.5 Release!


A lot of engineering work has gone into the Coho v2.5 software release. Add to that the fact that we now have 3 distinct hardware offerings, and we’ve got a pretty extensive portfolio. I’ve been involved with testing on this release since the Alpha days, and I can honestly say it’s our best release yet. I could tell from the beginning: based on my initial testing, the code quality was much more robust than in some of the releases from 6 months to a year ago.

Here are the top 3 reasons why I’m most excited about this release:

#1 – Flashfit Analytics (Hit Ratio Curve)

[Image: hit ratio curve]

We showed a technical preview of this at VMworld 2014 as well as Storage Field Day 6, and I think it’s a really unique differentiator in the market right now. Our analytics are extremely detailed and can pinpoint the exact amount of flash that will benefit workloads, on a per-workload basis. We can see so much detail about flash usage that we could make an educated guess about the application running inside a workload. A bit more work is required before we do that, but the fact that we can says a lot about the level of detail captured here. The idea with Flashfit is that we give customers the data to decide whether they have sufficient flash for their current working set, need to add more capacity (a hybrid node), or need to add more performance (an all-flash node). This will work its way into QoS and storage profiles as we move forward with development of the feature. When you combine this with the ability to choose an all-flash or hybrid node, we give the customer unparalleled economics and TCO/ROI.
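Coho hasn’t published how Flashfit is computed, but the classic way to build a hit-ratio-versus-cache-size curve from an I/O trace in a single pass is Mattson’s stack-distance algorithm for LRU. A minimal sketch, purely illustrative:

```python
from collections import OrderedDict

def hit_ratio_curve(trace, max_size):
    """Hit ratio as a function of LRU cache size (in blocks), via
    Mattson's stack-distance algorithm: one pass over the trace."""
    stack = OrderedDict()              # keys ordered oldest -> most recent
    hist = [0] * (max_size + 1)        # hist[d] = hits needing cache size >= d
    for block in trace:
        if block in stack:
            keys = list(stack)
            dist = len(keys) - keys.index(block)   # 1 = most recently used
            if dist <= max_size:
                hist[dist] += 1
            stack.move_to_end(block)   # re-reference moves block to MRU end
        else:
            stack[block] = True        # cold miss at every cache size
    total = len(trace)
    curve, hits = [], 0
    for size in range(1, max_size + 1):
        hits += hist[size]
        curve.append(hits / total)     # hit ratio for a cache of `size` blocks
    return curve
```

The knee of the resulting curve is exactly the “how much flash does this workload really need” number the post is talking about: beyond the knee, extra flash buys almost no additional hits.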

#2 – Data Age

[Image: data age]

The Data Age view is something we also previewed an early version of a while back. It’s a bit more abstract, but interesting in that we are able to show a cluster-wide view of how old the data is. You’ll find that this graph gives more supporting evidence about the flash working set on the system and proves that, in all but the busiest customer environments, the data that’s accessed frequently is a mere fraction of the total data on the system. In other words, we give you real-time supporting evidence showing that: 1) you probably don’t require an all-flash array; 2) if you decided to go with an all-flash option, you’re paying a lot of money for a very, very small portion of hot data, while all the rest would be better served by higher-density media.
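A view like this is, at its core, a histogram of time-since-last-access over all blocks. Here’s a minimal sketch of that idea; the bucket edges, labels, and function name are hypothetical, not Coho’s actual format:

```python
from bisect import bisect_right

# Hypothetical age buckets (seconds): under a minute, an hour, a day, a week.
EDGES = [60, 3600, 86400, 604800]
LABELS = ["<1min", "<1h", "<1day", "<1wk", ">=1wk"]

def data_age_histogram(last_access, now):
    """last_access maps a block id to the timestamp of its last read/write.
    Returns the fraction of blocks falling into each age bucket."""
    counts = [0] * len(LABELS)
    for ts in last_access.values():
        counts[bisect_right(EDGES, now - ts)] += 1   # find the block's bucket
    total = len(last_access) or 1                    # avoid division by zero
    return dict(zip(LABELS, (c / total for c in counts)))
```

If most of the mass lands in the oldest bucket, that’s the post’s argument in one picture: only a sliver of the data is hot, and the rest belongs on cheaper, denser media.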

#3 – Scalability Improvements

When I first started at Coho, approaching a year and a half ago now, we admittedly had some challenges around scalability. This new release introduces an improved global namespace that allows for orders of magnitude more objects in the cluster, and thus many, many more VMs (workloads). I’m happy to have played a small part in reporting my findings and getting this prioritized and fixed. I can honestly say that we are truly living up to the promise of a scale-out storage system.

Well, that’s it for now. I’m curious which features of DataStream OS v2.5 you’re most excited about! Respond in the comments.


Coho DataStream 2000f All-Flash Storage Offering


It’s been a while since my last post, but I couldn’t pass up the chance to talk about the latest news here at Coho. The announcements we made today have been in the works for a while, and I am proud to have played a small part in bringing the all-flash offering to market, having tested it extensively for the last several months. The Series C funding round is nice, too, of course!

Our all-flash offering enters a crowded market, to be sure, but instead of becoming just another all-flash storage vendor, we’re approaching all-flash a bit differently here at Coho. We believe that “all-flash or hybrid” shouldn’t be a decision the customer has to make themselves. With intelligence about your workloads and how much flash they really need to function at peak performance, we can let all-flash and hybrid nodes co-exist within a single cluster and single namespace, for a balance of unmatched performance and economics.

Our Cascade tiering technology allows each AFA chassis to balance two different types of flash: NVMe flash cards for the upper tier, and up to twenty-four 2.5″ SSDs for the lower tier. All told, you can fit up to 50 TB of usable capacity in each 2U chassis (before accounting for space efficiency from compression, etc.).

Here’s a statement from our Technical Product Manager Forbes Guthrie’s own blog post about the release, and it’s a key point I use when talking to others about some of Coho’s key value:

“Coho realized early on that when you’re building a storage system with outrageously fast flash devices, and you have hungry servers waiting with insatiable appetites, you shouldn’t set out to funnel that I/O through an obvious choke point. Storage systems that come with a pair of controllers, with no way to grow alongside your expanding storage needs, are a short-sighted design choice. With AFA storage, this is (obviously) way more critical.

All Coho DataStream systems are comprised of shared-nothing “MicroArrays”. Each 2U disk chassis contains 2 independent storage nodes; each has its own pair of 10 GbE NICs, its own CPUs and memory. As you add our disk chassis to a cluster, any type of Coho chassis, you’re adding controller power and I/O aperture.”

Well, I think that statement speaks for itself. I am really excited about what the future holds here at Coho. Rounding out our catalog with the all-flash 2000f and the entry-level hybrid 800h, alongside our 1000h, puts us in a very good place in the market right now. Add to that the fact that we use data intelligently to predict how much flash a customer’s workloads need, and can place that data intelligently within a single namespace across the different types of storage, and we have the efficiency to compete very well. I say: “Bring it on!”

Links: 

http://www.theplatform.net/2015/05/22/the-future-of-flash-is-massive-scale/

http://www.theregister.co.uk/2015/05/21/coho_data_spawns_new_box_with_fresh_funding/

http://searchvirtualstorage.techtarget.com/news/4500246696/Coho-Data-goes-all-flash-with-new-scale-out-array

http://www.bizjournals.com/sanjose/blog/techflash/2015/05/coho-data-raises-30m-to-expand-flash-storage.html

http://vmblog.com/archive/2015/05/20/coho-data-raises-30-million-in-series-c-funding-led-by-march-capital-partners.aspx#.VVzT_1nBzRY

http://www.marketwatch.com/story/coho-data-raises-30-million-in-series-c-funding-led-by-march-capital-partners-2015-05-20

http://www.infostor.com/disk-arrays/ssd-drives/coho-data-raises-30-million-releases-all-flash-datastream-hardware.html

http://www.forbes.com/sites/benkepes/2015/05/20/coho-reels-in-a-30m-series-c-to-fuel-future-storage/

http://www.storagereview.com/coho_releases_allflash_array_and_raises_30_million_in_funding

http://www.cohodata.com/blog/2015/05/20/storage-the-coho-all-flash-2000f/

