Coming Full Circle


Legacy

I want to start by saying that I feel lucky to have been merely a participant in what we call the “Information Technology” field for going on 18 years now. I keep coming back to the realization that it’s been that long when I explain to my son and daughter how many more years of school they have before they graduate and enter the “real world”. At this point, I’ve been out of school (including college) for more years than I was in it… think about that for a moment. Besides making me feel old, it’s hard evidence of how finite our time on this earth is and how fast it flies. In fact, two years back I had my 20th high school reunion… and while people do, indeed, change, I can’t think of a bigger, more “tidal” shift than the one I’ve experienced during my time in the IT industry. The past couple of months (as I was looking for the next big thing in my career) have given me plenty of time to reflect on both life and, more specifically, my career.

 

Phase One

My first IT job, like that of many before me, was in technical support on the helpdesk of a technology company. I remember it well… 1998: I had just finished a barrage of Geology courses at my university and learned that I’d have to postpone graduation in order to complete two courses, as the professor who taught them was going on a year-long sabbatical. I had worked at RadioShack the summer before, but I wanted to step up to the big leagues and work for a real computer store, so I applied to CompUSA. I didn’t even know what an A+ certification was, but I was convinced I could be like one of the cool kids and fix computers in the repair shop. Hell, I’d had two computers of my own and had home-built a couple more by that point… how hard could it be?

As luck would have it, on the exact same day that I got a job offer from the manager at CompUSA, I received a call from one of the hiring managers at Inacom, a contractor for the local telecom, Frontier Communications (one of the oldest phone companies in the US). I had twin cousins working there in the IT department, managing the NetWare and Windows NT server environments (among other things), who were able to convince the manager at Inacom to give me a shot despite my having virtually no network or enterprise computing experience. They vowed to help me should I run into any difficulties in my new job. The crazy part was that I was making more money than I could have dreamed possible, given that my degree in Geology wasn’t even complete. But these were the days of the rise of the Internet, so I think anyone with a pulse could get a job if only they had a bit of technical chops. I feel very lucky to have been given that opportunity. When I returned to the Geology department the following year, I think some of the other undergrads were both shocked and deflated to learn the kind of money I was making doing tech support.

Once I finished my Geology degree, I stuck around a bit longer at Frontier and then headed west to Arizona. Frontier was also one of the first places I saw enterprise technologies such as OS/2, AS/400, mainframes and Linux! My tech support expertise led me to another tech support job at a leader in the CRM space, SalesLogix, which was later supplanted by Salesforce.com as the dominating force in the SaaS industry. At SalesLogix, I was fortunate enough to experiment with databases such as Oracle and MSSQL, as well as Linux. The big game-changer for me, though, was a piece of software one of my colleagues found that would let us run multiple copies of Windows, different CRM software and database versions, all encapsulated into a set of files running on top of the native operating system… yes, indeed, I’m talking about VMware Workstation 2.0! It was absolutely amazing to me what this software could do from a technology perspective, but also how much time and effort it could save me each and every day.

 

Phase Two 

Fast forward a couple of years and two promotions later, and I was using VMware quite heavily. Not only did I use Workstation, but I also had a chance to run test plans with our QA team, which was leveraging a new product called VMware GSX and, later, ESX. This led me to learn everything I could about VMware and how I could harness it to further my IT career. In 2003 I met my wife, and in 2004 we were married. Life was changing pretty fast for me. As my wife is from Japan, and I had an interest in Japanese culture, I ended up moving there for what I thought would be a year or so, but which instead became what I call the second phase of my IT career.

I tried doing some phone interviews with folks in Tokyo, but given the lack of face-to-face interaction, let alone the time difference, I found it daunting to land a job. I ended up taking a six-week leave from work and headed to Japan on a tourist visa. Within three weeks of interviewing with various companies, I landed a job in the IT department of a major English-as-a-second-language school in Tokyo. It was a new role for me, to be sure (no more phone support)… I found myself doing desktop support, server administration, backup administration, network administration and the like, all at once and all the time! I used the Linux skills I had picked up earlier to set up lots of server environments, both for production IT services and, more interestingly, for our developers: JBoss, PHP, MySQL, MSSQL and everything in between. I feel like this was the manual, legacy way of doing DevOps before it was called that; nobody had a separate term for that kind of work back then.

As part of my desire to make MySQL more reliable, as well as to minimize the number of servers we had to maintain, I went back to my roots with VMware and investigated where it might fit within a Japanese company’s IT infrastructure… good luck, right? (They just weren’t ready for virtualization yet.) At the time, I could get GSX, and we were using Workstation in much the same way I had at SalesLogix. Then an offering came along that would let me see just how far I could take VMware within the company. That service was called VMTN (I’m not talking about VMTN in its current form, but rather the $299 subscription that let you try all of the products in the VMware catalog for one low price). I ended up converting a couple of the web servers over to ESX 2.5, P2V’ed some of the servers off and onto the new ESX hosts as virtual machines, and didn’t look back… who cared if I didn’t have a proper license 😉 *It was around this time that I was also introduced to NAS storage (in the form of NetApp), but that’s a story for another day.

This led me to another job in Japan, primarily maintaining and architecting the NetApp and VMware environment at an international testing and certification company. I had the privilege of working for a young, very forward-thinking IT manager, and we made a plan to go “all-in” on the VMware ecosystem technologies that made sense for the organization, namely VMware Site Recovery Manager (SRM). In 2009, we met with several individuals from VMware about implementing the 1.x version of SRM. Not a single one of them had seen SRM, much less deployed it in production (with the exception of an internal demo environment), so I took it upon myself to become fluent in the technology. We made several successful test runs of the SRM software without issue, then had a rare opportunity to use it in production during the 2011 Tōhoku earthquake and tsunami. This led to a blog article, a VMworld session and considerable media attention on what exactly had taken place and how what we had learned could be leveraged at other enterprises throughout Japan and the rest of the world. I even presented a seminar at the VMware Tokyo offices to several high-profile customers that were (needless to say) quite interested in implementing SRM, especially in light of recent events.

 

Phase Three

About a year before the earthquake, I started blogging to get my thoughts on paper as well as to participate in the larger IT (mainly VMware) community, which at the time was under the tutelage of John Mark Troyer. In light of my presentation at VMworld, the interest in my story and my own interest in returning to the USA post-earthquake… I ended up taking a job that would turn out to be the biggest shift in my career to date: the world of Technical Marketing! That job was at NetApp, working on solutions in the End-User Computing and Cloud Computing space built around VMware technologies deployed on NetApp storage systems, and it meant an even deeper dive into the world of VMware. After a couple of years of that, I decided to try my hand at the same job, but in a start-up atmosphere in the heart of Silicon Valley. It was a whirlwind performing technical marketing duties for a new company, just shy of 60 employees at the time, with a product that had only GA’ed the month I joined. *The specifics of my time at this start-up are another topic for another time, so I’ll just summarize it here.

As we continued to develop a storage appliance for use with VMware, I decided to change responsibilities once again as we kick-started our channel partner program, since that team needed technical guidance. I also got a chance to work with enterprise customers and hear their requirements and use cases for our product in their VMware environments. This shift brings me to the fourth phase of my IT career…

 

Phase Four

After working on the sales side of the house for the better part of the last year, I decided to continue down this road and get re-acquainted with my roots: dealing directly with customers. This would seem to serve as a sort of “Phase Four” of my career. To that end, I’ve managed to land a role as a Sr. Systems Engineer on the Global Accounts team, and at VMware no less! I am very excited about this new stage in my life and my career, having done lots of different things over the years. Things really do “come full circle”, or so it would seem. Most will say that it’s merely a coincidence that I’m ending up at VMware, but I leave this as evidence… both VMware and my IT career started back in 1998 😉 OK… that may well be a coincidence, but the fact that the company to which I owe so much of my IT career has just hired me for the role I’d always dreamed of was no accident. I am overjoyed (and grateful for the outpouring of support and congratulations) about the prospects for the future of IT, no matter where it might lead. I’m humbled by the fact that I get to continue to be a part of the ride!


Thoughts on the NetApp-SolidFire Acquisition


General Industry Observations

I’ve been reading and hearing a lot of rumors these days about change in the storage industry. An IPO or an acquisition is really the only way forward, and given the stumbles of Pure Storage, Violin and others, an IPO is simply not in the cards for most of the independents still looking to find their product/market fit. NetApp‘s acquisition of SolidFire is only the latest move in the ongoing consolidation of the storage space; it won’t be the last.

While this is not the first time NetApp has acquired a company for a technology it needed, it is definitely one that many see as absolutely necessary for NetApp to keep up with, and remain relevant against, the pure-play AFA vendors. *I don’t normally comment on this kind of activity, but having been at NetApp for the better part of three years, I had to weigh in: this is their first major acquisition since I left, and it comes after all of the management shake-up of the past year.

Is SolidFire the Right Fit for NetApp?

No one really knows the answer to this or how it will pan out, but I happen to know that both SolidFire and NetApp have a lot of smart people who will try to make it happen; many of them I consider very close industry friends. I do see areas where NetApp is weak technically (block) and market-wise (service providers) where SolidFire could help fill the gap. Will SolidFire remain a separate product line? I hope NetApp has learned its lesson on this one.

Both NetApp and SolidFire have “scale-out” technologies that would seem to conflict with one another; I don’t see this as a strength.

What’s interesting to me about both NetApp and SolidFire is their reliance on dedupe and/or compression to reach attractive pricing economics across both of their product lines. I’ve also seen quite a bit of commentary lately about how dedupe is going to matter less and less as next-generation media decreases in cost and increases in density. As that happens over the next couple of years, their value prop will resonate less and less. The real winners will be vendors like Coho Data who have optimized (and are continuing to optimize) their stack from a performance perspective regardless of the storage medium, whether it be disk, PCIe flash, NVMe flash, SATA SSD, SAS SSD or even NVDIMM. Will we add dedupe? Absolutely, yes (it’s being worked on right now). But it’s really, really nice to know that, however we implement it, we’ll rely on it less than the other guys. Add to this the fact that those who leverage complementary technologies, like SDN, to scale out are better positioned to grow from a performance perspective.
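
To put some rough numbers behind that argument, here’s a quick back-of-the-envelope sketch. Every figure in it is hypothetical (my own illustrative prices and reduction ratios, not any vendor’s pricing), but it shows why a fixed data-reduction ratio buys you less and less as raw media prices fall:

```python
# Back-of-the-envelope math: effective $/GB = raw $/GB divided by the data-reduction ratio.
# All prices and ratios below are hypothetical, purely for illustration.

def effective_cost_per_gb(raw_cost_per_gb: float, reduction_ratio: float) -> float:
    """Effective cost per GB after dedupe/compression (4.0 means a 4:1 reduction)."""
    return raw_cost_per_gb / reduction_ratio

# Today (hypothetical): raw flash at $2.00/GB, so a 4:1 reduction does a lot of heavy lifting.
flash_today = effective_cost_per_gb(2.00, 4.0)   # -> $0.50/GB effective
savings_today = 2.00 - flash_today               # -> $1.50/GB saved by data reduction

# A couple of years out (hypothetical): denser, cheaper media at $0.40/GB raw.
flash_later = effective_cost_per_gb(0.40, 4.0)   # -> $0.10/GB effective
savings_later = 0.40 - flash_later               # -> $0.30/GB saved by the same 4:1 ratio

print(f"Today: ${flash_today:.2f}/GB effective (${savings_today:.2f}/GB saved by dedupe)")
print(f"Later: ${flash_later:.2f}/GB effective (${savings_later:.2f}/GB saved by dedupe)")
# Same reduction ratio, but the absolute savings shrink from $1.50/GB to $0.30/GB --
# which is why a value prop built primarily on dedupe resonates less over time.
```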

Who are the Winners and Losers?

The winners are NetApp in the short term, as they remove a competitor in the flash market and temporarily silence those who say they aren’t innovating and are no longer relevant in today’s storage market. The other winners are the SolidFire shareholders, of course. NetApp customers can also be considered winners, albeit with more choices and less clarity if they continue on as NetApp customers. The other guys out in the storage start-up world could be considered winners as well, since this legitimizes them, brings them unhappy SolidFire and NetApp customers, and potentially starts a bidding war for the next best thing…

The losers are SolidFire customers, of course. SolidFire had a unique product and a solid customer base in the service provider market, but it remains unclear how NetApp will integrate them into the organization. What also remains to be seen is whether SolidFire can keep up the pace of innovation required to match Pure Storage and the other scrappy start-ups with unique, game-changing technology that will now challenge NetApp for the next generation of the storage market…

We live in interesting times, friends; interesting times indeed!


Coho Data at the Tokyo VMware vForum


Those readers who know me personally know that I lived in Japan for almost seven years (nearly half of my IT career). I’ve blogged a fair bit in Japanese, and about some of the events around the 2011 Tōhoku earthquake and tsunami here & here. Needless to say, Japan holds a special place in my heart, and it’s always great to get back for a visit.

It’s been close to three years since I was last back, and I’m excited about visiting the Far East to see what’s new and what’s changed in enterprise IT in the years since. I suspect it’s virtually unrecognizable from when I lived there from 2004 to 2011.

In those days, VMware wasn’t used nearly as heavily in Japan as it is today. Not to mention that Business Continuity and Disaster Recovery were merely an afterthought before the 2011 earthquake, despite Japan being one of the most seismically active countries on the planet!

Add to this the fact that the storage market has changed tremendously in the years since. Back then there were really only two players in the market, EMC and NetApp, and none of the multitude of next-gen storage vendors vying for market share like there are today. Think about that for a minute: no Nimble Storage, no Pure Storage, no SolidFire, no Tegile, no Tintri, no Coho Data, and too many more to name. The new players are being disruptive and taking market share away from the old guard of EMC and NetApp. We all have our strengths and weaknesses, key use cases and varying costs, but one thing rings true: the next-gen storage vendors are innovating at a rate the legacy vendors are struggling to catch up with!

These are just a couple of reasons why I am so excited for Coho Data to be a part of the vForum in Tokyo this year. I attended for two or three years while I lived there, but I suspect a whole lot has changed since then, especially looking at things from the vendor point of view. I’m really looking forward to disrupting the storage market in Japan, much like we’re doing here in North America.


 


I Can’t ‘Contain’ Myself


I blogged about this topic last week, but looking ahead to everything that will be announced at VMworld next week has made me want to revisit the subject and make some predictions about what the show will be all about this year. I literally couldn’t (pardon the pun) ‘contain’ myself 😉 I do quite a bit of thinking around this every year in order to inform our Marketing team on what our messaging should look like as we approach VMworld, so these topics have been on my mind for some time now…

What exactly do containers mean for the virtualization admin, storage admin or otherwise? 

Containers such as Docker and Rocket, among other newer-generation technologies, aim to provide abstraction much like virtual machines, but without the added effort of virtualizing an entire OS and all of the supporting drivers. It’s a much more efficient method of application delivery, one I have blogged about before and have discussed with colleagues such as Scott Drummonds over the past couple of years. Around the time Cloud Foundry was first announced, we talked about what application delivery would look like without the need for an OS. Obviously it would make for a higher-performance, transient, web-scale system that would deliver the application without all of the fluff required by a typical virtual machine. We didn’t know when it would happen, but we definitely glimpsed the possibilities and the implications for IT once that dream came true. We are starting to see the results of that now with “containerization”. VMware is at the forefront of this innovation (along with Docker and CoreOS) with projects such as Lightwave and Photon. You will obviously see these heavily featured at the show, along with other DevOps- and microservices-related topics. I see these trends as a second or third iteration of the cloud-native apps paradigm that Cloud Foundry hinted at a few years back. Paul Maritz talked quite extensively about this during his tenure at VMware, stating something along the lines of: now that customers have realized the economic benefits of virtualization, they will start re-writing their apps for a cloud-native future. It didn’t happen right away, but I believe it’s telling that we are now seeing the results of that effort. This fits squarely with VMware’s “Ready for Any” theme for this year, I think you’ll agree.
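
If you want to feel the difference concretely, here’s a minimal sketch of container-style application delivery. It assumes a local Docker Engine and the Python Docker SDK (the `docker` package); the tiny Alpine image and the echo command are just my illustrative choices, nothing tied to the VMware or CoreOS projects mentioned above:

```python
# A minimal sketch of container-style application delivery, assuming a local
# Docker Engine is running and the Python Docker SDK is installed (pip install docker).
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Launch a throwaway container from a ~5 MB Alpine image: no guest OS to install,
# no virtual hardware or drivers to emulate -- just the app and a thin userland.
output = client.containers.run(
    image="alpine:latest",
    command='echo "hello from a container"',
    remove=True,  # clean the container up as soon as it exits
)
print(output.decode().strip())
```

Compare that to provisioning even the leanest virtual machine to run the same one-liner, and the “without all of the fluff” point above pretty much makes itself.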

Storage and the next wave of containerization…

As an emerging, cutting-edge storage provider, we at Coho Data are also innovating around this trend as it relates to storage, especially persistent storage for containers, which is still in its nascent phase. We will not only be demoing some microservices running directly on our array, but we’ll also be talking about what a microservices future will look like in the context of storage. Think data-centric, not workload-centric, VM-centric or otherwise: we care about storing and managing data regardless of the abstraction it sits in.

This is just a taste of what we have in store. Come visit the Coho Data booth (#1713), or attend our CTO Andy Warfield’s breakout session on the topic, to learn more.

Looking forward to seeing you there!


Hybrid or All-flash?

I’ve been seeing a lot of commentary of late (actually, it’s been happening for a while, but now that we’re approaching VMworld I guess I’m more conscious of it) about the cost advantages of hybrid storage systems, while at the same time the all-flash vendors out there are touting the storage efficiency features of their platforms (they have to!) in order to reach a price point that run-of-the-mill customers can stomach while still providing very high performance.

There’s been lots of FUD about how hybrid systems will always be more cost-effective than all-flash systems. By contrast, the all-flash vendors will say that now, with the announcement of a flash device larger than the biggest hard disk, all-flash will win out once prices come down. Finally, a select few vendors say that because they provide both hybrid and all-flash systems, they are offering customers choice, a claim I find false. I beg to differ with all of the above.

How about a third option altogether, or, if you will, no option at all?

What exactly do I mean by this?

Well, since the announcement of our 2.x release, Coho has been able to host both hybrid AND all-flash nodes within a single cluster, in a single namespace. Not only is this easier for the customer to set up and manage, it’s also a much better story from an economic perspective.

Consider that, because we leverage software-defined networking together with software-defined storage, we can, first, start with a single two-node cluster, unlike our scale-out competitors, which must start with four or more nodes. Second, we can place data in the appropriate tier of storage based on a per-workload view (we present NFS today) of flash utilization over time, which lets us make even more efficient use of the performance and capacity in the system while simplifying its deployment for the customer. This means that if our analytics indicate you are running out of high-performance flash, you can expand your system with an all-flash node; or, if the analytics say you’ll run out of capacity while still maintaining the same level of performance, you can expand with a hybrid node. We take a data-centric view of the customers’ workloads so that they are freed from making these choices, and we’ve found that the message of simplicity is resonating quite well, thank you!
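
To make that expansion guidance concrete, here’s a rough sketch of the kind of decision I’m describing. To be clear, this is my own simplified illustration with invented thresholds and function names, not Coho’s actual analytics code:

```python
# A simplified, hypothetical illustration of "expand with all-flash vs. hybrid" guidance
# driven by utilization trends. Thresholds and names are invented for this example;
# this is not Coho Data's actual implementation.

def expansion_recommendation(flash_util_pct: float,
                             capacity_util_pct: float,
                             threshold_pct: float = 80.0) -> str:
    """Recommend which node type to add, based on which resource runs out first."""
    if flash_util_pct >= threshold_pct and flash_util_pct >= capacity_util_pct:
        return "Add an all-flash node: the performance tier is the bottleneck."
    if capacity_util_pct >= threshold_pct:
        return "Add a hybrid node: raw capacity is the bottleneck."
    return "No expansion needed yet: both performance and capacity have headroom."

# Example: analytics show the flash tier 85% consumed, but overall capacity only 60%.
print(expansion_recommendation(flash_util_pct=85.0, capacity_util_pct=60.0))
```

The point of the single-cluster, single-namespace design is that the customer never has to re-platform to act on either answer; they simply add whichever node type the data calls for.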

Expand this to next-generation media types such as NVDIMM, PCM, RRAM, MRAM, etc., and you can imagine why having MANY different types of flash memory in a single cluster and a single namespace will become a must-have feature.

Coho Data is on the path to deliver this today. Our platform was built with this in mind from the word “Go”.

It’s time to upgrade your thinking!

