Thoughts on the NetApp-SolidFire Acquisition


General Industry Observations

I’ve been reading and hearing a lot of rumors lately about change in the storage industry. “IPO” vs. “acquisition” is really the only way forward, and given the stumbles by Pure Storage, Violin and others, an IPO is simply not in the cards for most of the independents still looking to find their product/market fit. NetApp’s acquisition of SolidFire is only the latest move in the ongoing consolidation of the storage space; it won’t be the last.

While this is definitely not the first time NetApp has acquired a company for a technology it needed, many see this deal as absolutely necessary for NetApp to keep pace with, and remain relevant against, the pure-play AFA vendors. I don’t normally comment on this kind of activity, but having spent the better part of three years at NetApp, I had to weigh in: this is their first major acquisition since I left, and it comes on the heels of all the management shake-up over the past year.

Is SolidFire the Right Fit for NetApp?

No one really knows the answer to this or how it will pan out, but I do know that both SolidFire and NetApp have a lot of smart people working to make it happen; many of them I consider very close industry friends. I also see areas where NetApp is weak, both technically (block) and market-wise (service providers), where SolidFire could fill a gap. Will SolidFire remain a separate product line? I hope NetApp has learned its lesson on this one.

Both NetApp and SolidFire have “scale-out” technologies that would seem to conflict with one another; I don’t see this as a strength.

What’s interesting to me about both NetApp and SolidFire is their reliance on dedupe and/or compression to reach optimal pricing economies with their product lines. I’ve also seen quite a bit of commentary lately about how dedupe will become less and less important as next-generation media decreases in cost and increases in density. As that happens over the next couple of years, their value prop will resonate less and less. The real winners will be vendors like Coho Data who have optimized (and continue to optimize) their stack for performance regardless of the storage medium, whether it be disk, PCIe flash, NVMe flash, SATA SSD, SAS SSD or even NVDIMM. Will we add dedupe? Absolutely, yes (it’s being worked on right now). But it’s really nice to know that however we implement it, we’ll rely on it less than the other guys. Add to this the fact that those who leverage complementary technologies, like SDN, to scale out are better positioned to grow from a performance perspective.

Who are the Winners and Losers?

The winner in the short term is NetApp, as the deal removes a competitor in the flash market and temporarily silences those who say the company isn’t innovating and is no longer relevant in today’s storage market. The other winners are the SolidFire shareholders, of course. NetApp customers can also be considered winners, albeit with more choices and less clarity if they continue on as NetApp customers. The other guys out in the storage start-up world could be considered winners as well, since this deal legitimizes them, brings them unhappy SolidFire and NetApp customers, and potentially starts a bidding war for the next best thing…

The losers are SolidFire customers, of course. SolidFire had a unique product and a solid customer base in the service provider market, but it remains unclear how NetApp will integrate the company into its organization. What also remains to be seen is whether SolidFire can keep up the pace of innovation required to compete with Pure Storage and the other scrappy start-ups with unique, game-changing technology that will now challenge NetApp for the next generation of the storage market…

We live in interesting times, friends; interesting times indeed!



Implementing Site-to-Site Replication with Coho SiteProtect

Now that I’ve given you a quick overview of the architecture of Coho SiteProtect, I’d like to walk you through the basics of implementing SiteProtect in your data center. This is the second in my series of posts on our site-to-site replication offering. As I discover best practices for deploying SiteProtect in various infrastructures and scenarios, I’ll document those here as well, so stay tuned…

Without further ado, here is the step-by-step set-up procedure for SiteProtect…

Pairing the Sites

The first step in setting up remote replication is establishing a trusted relationship from the local site to the remote site. This is done from the Settings > Replication page in the Coho web UI, indicated by the gear (settings) icon (Figure 1).


Figure 1: Settings > Replication page

From here, click the “Begin replication setup” link which brings you to the configuration screen for the local site (Figure 2).


Figure 2: Settings > Replication > Local Site page

Here, you’ll specify the network settings for the site-to-site communication. It’s worth noting that replication traffic is sent on a VLAN to simplify network management in enterprise environments.

Here you can also configure bandwidth throttling for outbound traffic in case you need to limit usage of the site-to-site interconnect. The same can be done on the remote site, which means that both incoming and outgoing throughput can be controlled. Bear in mind that by limiting the traffic, you may increase the time it takes for a workload to finish replicating; in other words, you increase the effective RPO.
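To see why a throttle affects RPO, here’s a rough back-of-the-envelope sketch (hypothetical numbers, not a Coho feature; it ignores protocol overhead and compression) of how long one replication pass takes under a bandwidth cap:

```python
# Rough sketch with hypothetical numbers: estimate how long one
# replication pass takes when the site-to-site link is throttled.

def replication_time_seconds(delta_gib: float, cap_mbps: float) -> float:
    """Time to ship a snapshot delta of `delta_gib` GiB over a link
    capped at `cap_mbps` megabits per second."""
    delta_bits = delta_gib * 1024**3 * 8
    return delta_bits / (cap_mbps * 1_000_000)

# A 10 GiB delta over a 100 Mbps throttle takes ~859 seconds (~14.3 minutes),
# so a 15-minute replication interval would only just be achievable.
print(round(replication_time_seconds(10, 100)))
```

If the transfer time regularly exceeds your replication interval, either raise the cap or lengthen the interval.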

Once that’s complete, you’ll click “Next” and specify the IP and password of the remote DataStream. Click “Next” again to proceed (Figure 3).


Figure 3: Settings > Replication > Remote Credentials page

Once the wizard confirms a connection to the other side, you’ll specify the remote system’s VLAN, replication IP address, and netmask, as well as the default gateway for the other side and click “Next” (Figure 4).

Note: On this page the bandwidth limit relates to outbound traffic from the remote site; or put another way, the inbound replication traffic arriving at the local site.


Figure 4: Settings > Replication > Remote Network page

Finally, you’re brought to step 4, the “Summary” page, where you can review the configuration before applying the settings. Click “Apply and Connect” to complete the wizard (Figure 5).


Figure 5: Settings > Replication > Summary page

From this point forward, you’ll be presented with the following view when you go to the Settings > Replication page. Here (Figure 6), you can see the IP of the remote node and that replication is active.


Figure 6: Settings > Replication page (completed)

Configuring Workloads and Schedules

Now that the initial pairing is complete, you’ll visit the “Snapshots and Replication” page to customize which workloads are replicated as well as the snapshot & replication interval for each (Figure 7).


Figure 7: Snapshots / Replication > Overview page

Here (Figure 7), we provide an overview of the workloads. This is a dashboard which tells us the number of VMs with snapshots as well as replicated snapshots. For all of a site’s workloads to be protected, they should all have replicated snapshots, ensuring that any of those workloads can be recovered on the remote site in the event of a disaster.

We also provide a summary of the workloads covered by replication, how many bytes have been transferred, and the average replication time. These statistics provide assurance that replication is functional and also reveal the rate of change of the data, allowing you to determine whether your replication interval is appropriate for the bandwidth you have available. If your average replication time is greater than your snapshot interval, you should adjust the schedule accordingly.
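To make that check concrete, here’s a quick sanity check you could run yourself (hypothetical numbers, not part of the Coho UI): given the observed change rate and the configured interval, does the link keep up?

```python
# Back-of-the-envelope check with hypothetical numbers: can the data
# written between snapshots be replicated within one snapshot interval?

def interval_is_sustainable(change_rate_gib_per_hr: float,
                            interval_min: float,
                            link_mbps: float) -> bool:
    """True if one interval's worth of changed data can be shipped
    across the link before the next snapshot is due."""
    delta_gib = change_rate_gib_per_hr * interval_min / 60          # data changed per interval
    transfer_s = delta_gib * 1024**3 * 8 / (link_mbps * 1_000_000)  # time to ship it
    return transfer_s <= interval_min * 60

# 20 GiB/hour of change, a 15-minute interval, and a 50 Mbps link:
print(interval_is_sustainable(20, 15, 50))  # True: ~859 s of transfer fits in 900 s
```

When the check fails, your snapshots will queue up and the effective RPO grows beyond the configured interval.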

To configure or modify workloads, proceed to the “Workloads” page (Figure 8).


Figure 8: Snapshots and Replication > Workloads page

Here (Figure 8), we denote the local vs. the remote workloads, provide a record of when the last snapshot was taken, and display the assigned schedule.

Note: VMs which have been deleted are denoted with a strike through the name.

Under “Snapshot Record”, you can click the calendar icon to view each snapshot’s date, name and description, as well as its replication status. In this example, we have recently enabled the workload for replication, as denoted by the word “Scheduled” (Figure 9).


Figure 9: Snapshots and Replication > Workloads > Snapshot Record page

To manually protect a specific workload, click the camera icon next to that workload. This will allow you to take a manual snapshot and replicate that snapshot (Figure 10).


Figure 10: Snapshots and Replication > Workloads > Snapshot page

Most users will want to protect a number of VMs at once. The best way to do this is from the “Default Schedule” page (Figure 11).


Figure 11: Snapshots and Replication > Default Schedule page

In this example, we have selected an RPO of 15 minutes by replicating the snapshot every 15 minutes. The frequency of snapshots is best determined by the needs of the application, and Coho’s automated snapshot schedule offers flexibility from minutes to months.

Note: Quiescing snapshots puts the system in a state that maintains application consistency before the snapshot is taken; however, this is only available on the daily and weekly schedules. Taking quiesced snapshots more frequently may incur significant performance penalties. These penalties are not related to the Coho storage but to how snapshots are executed within the VMware environment. A crash-consistent snapshot (no quiesce) can be taken very frequently on Coho storage without a performance penalty.


In the event of a disaster, you’ll want to be able to bring up your applications at the remote site. This is done from the “Failover/Failback” view (Figure 12).


Figure 12: Snapshots and Replication > Failover/Failback page

Initially, failover and failback are disabled in order to protect you from instantiating multiple copies of the same VM. You make the decision (from either location) to put the disaster recovery plan in motion. If you’re ready to proceed, click the “Enable” button to enable failover (Figure 13).


Figure 13: Snapshots and Replication > Failover/Failback page (enabled)

You can now go to the remote DataStream and clone your replicated workloads to the remote system. Open up the web UI of the remote DataStream and, again, go to the Snapshots and Replication > Workloads page (Figure 14).


Figure 14: Snapshots and Replication > Workloads page (remote)

Click the “Remote Workloads” checkbox to filter to those workloads. These are the workloads available for failover from the primary site to the disaster recovery site. Choose a workload by clicking the calendar icon, browse the recent snapshots, and pick one to clone from by clicking the clone icon (Figure 15).


Figure 15: Snapshots and Replication > Workloads page (failover)

Once you’ve selected the desired snapshot, enter a VM name and choose a target vSphere host. Click “Clone” to clone the snapshot and recover it to the destination site. The workload has now failed over and can continue serving data to your users; just power it on in vCenter and you’re ready to go.


If at some point the primary site comes back online, we support failing workloads back to their original location. This is done from the Snapshots and Replication page. On the workload that you’d like to fail back (Figure 16), click the calendar icon to view the available snapshots, then click the red arrow to sync the snapshot to the original VM. Once the VM is powered on, your app will be back in its original location with all of the changed data from snapshots replicated from the remote site since the failure occurred; simple and easy, just like it should be.


Figure 16: Snapshots and Replication > Workloads page (failback)

Well, that’s it for the initial implementation. As you can see, Coho SiteProtect is easy to set up and configure in any environment. Next, we’ll dive into some best practices for configuring SiteProtect for optimal performance in environments of various sizes and requirements.

Until then, if you’d like more info about Coho SiteProtect, click here!



New Gear – Synology DS1813+


Late last week saw the arrival of a long-awaited new NAS, the Synology DS1813+. This unit will replace my extremely old ReadyNAS NVX, and I am now in the process of migrating data from the old box to the new one. Once that’s complete, I’ll have more information to share in a series of posts in the home lab and other sections of this blog. For now, I give you the hardware specs for the Synology DS1813+ to whet your appetite for more info!



  • CPU Frequency : Dual Core 2.13GHz
  • Floating Point
  • Memory : DDR3 2GB (Expandable, up to 4GB)
  • Internal HDD/SSD : 3.5″ or 2.5″ SATA(II) X 8 (Hard drive not included)
  • Max Internal Capacity : 32TB (8 X 4TB HDD) (Capacity may vary by RAID types) (See All Supported HDD)
  • Hot Swappable HDD
  • External HDD Interface : USB 3.0 Port X 2, USB 2.0 Port X 4, eSATA Port X 2
  • Size (HxWxD) : 157 X 340 X 233 mm
  • Weight : 5.21kg
  • LAN : Gigabit X 4
  • Link Aggregation
  • Wake on LAN/WAN
  • System Fan : 120x120mm X2
  • Easy Replacement System Fan
  • Wireless Support (dongle)
  • Noise Level : 24.1 dB(A)
  • Power Recovery
  • AC Input Power Voltage : 100V to 240V AC
  • Power Frequency : 50/60 Hz, Single Phase
  • Power Consumption : 75.19W (Access); 34.12W (HDD Hibernation)
  • Operating Temperature : 5°C to 35°C (41°F to 95°F)
  • Storage Temperature : -10°C to 70°C (14°F to 158°F)
  • Relative Humidity : 5% to 95% RH
  • Maximum Operating Altitude : 6,500 feet
  • Certification : FCC Class B, CE Class B, BSMI Class B
  • Warranty : 3 Years



Predictions for 2013

Just wanted to share this infographic of 2013 storage predictions from our CTO & SVP, Jay Kidd.

I am particularly interested in the one entitled: “Enterprise Alternatives to Public Drop Boxes Will Gain Traction”






Launch: NetApp Storage for Midsize Businesses

NetApp has just launched an update to the FAS2000 series for midsize businesses, along with some cool, new marketing collateral.

From the brochure:

  • Deploy virtual machines in minutes rather than hours and days
  • Cut backup time by 80%
  • Manage on average 500TB per admin
  • Use up to 83% less capacity for virtual machines
  • Achieve less than 5-minute recovery time objective
  • Achieve payback period in less than a year

I don’t normally talk about company product releases, but I found the launch video entitled “IT Survival Guide – Don’t Panic” particularly amusing. Check it out!

FAS2000 Series Datasheet

Midsize Business Portfolio Brochure

Disclaimer: I am an employee of NetApp, but this is not a NetApp blog. Opinions expressed above are mine and mine alone.


