
Coming Full Circle

I want to start by saying that I feel lucky to have been merely a participant in what we call the “Information Technology” field for going on 18 years now. I keep coming back to the realization that it’s been that long when I explain to my son and daughter how many more years of school they have before they graduate and enter the “real world”. At this point, I’ve been finished with school (including college) for more years than I was in school… think about that for a moment. Besides making me feel old, it’s hard evidence of how finite our time on this earth is and how fast it flies. In fact, 2 years back, I had my 20th high school reunion… and while people do, indeed, change, I can’t think of a bigger, more “tidal” shift than the one I have experienced during my time in the IT industry. The past couple of months (as I was looking for the next big thing in my career) have given me plenty of time to reflect on life and, more specifically, my career.


Phase One

My first IT job (like many before me) was in Technical Support for the helpdesk of a technology company. I remember it well… 1998: I had just finished a barrage of Geology courses at my university and learned that I’d have to postpone graduation in order to complete 2 courses, as the professor for said courses was going on a year-long sabbatical. I had worked at RadioShack the summer before, but I wanted to step up to the big leagues and work for a real computer store, so I applied to CompUSA. I didn’t even know what an A+ certification was, but I was convinced I could be like one of the cool kids and fix computers in the repair shop. Hell, I’d had 2 computers of my own and had home-built a couple more by that point… how hard can it be?

As luck would have it, on the exact same day that I got a job offer from the manager at CompUSA, I received a call from one of the hiring managers at Inacom, a contractor for the local telecom, Frontier Communications (one of the oldest phone companies in the US). I had twin cousins working there in the IT department, managing the NetWare and Windows NT server environments (among other things), who were able to convince the manager at Inacom to give me a shot despite my having virtually no network or enterprise computing experience. They vowed to help me, should I have any difficulties in my new job. The crazy part was that I was making more money than I could have dreamed possible, given that my degree in Geology wasn’t even complete. But these were the days of the rise of the Internet, so I think anyone with a pulse could get a job, if only they had a bit of technical chops. I feel very lucky to have been given that opportunity. When I returned to the Geology department the following year, I think some of the other undergrads were both shocked and deflated to learn the kind of money I was making doing tech support.

Once I finished out my Geology degree, I stuck around a bit longer at Frontier and then headed west to Arizona. Frontier was also one of the first places I saw enterprise technologies such as OS/2, AS/400, mainframes and Linux! My tech support expertise led me to another tech support job at a leader in the CRM space, SalesLogix, which was later supplanted as a dominating force in the SaaS industry. At SalesLogix, I was fortunate enough to experiment with databases such as Oracle and MSSQL, as well as Linux. The big game-changer for me, though, was some software that one of my colleagues found that would allow us to run multiple copies of Windows, different CRM software and database versions, all encapsulated into a set of files running on top of the native operating system… yes, indeed, I’m talking about VMware Workstation 2.0! It was absolutely amazing to me what this software could do from a technology perspective, but also how much time and effort it could save me each and every day.


Phase Two 

Fast forward a couple of years and 2 promotions later, and I was using VMware quite heavily. Not only did I use Workstation, but I also had a chance to do some work running test plans with our QA team, which was leveraging a new product called VMware GSX, and later ESX. This led me to learn everything I could about VMware and how I could harness it to further my IT career. In 2003 I met my wife, and in 2004 we were married. Life was changing pretty fast for me. As my wife is from Japan, and I had an interest in Japanese culture, I ended up moving there for what I thought would be a year or so, but which instead became what I call the 2nd phase of my IT career.

I tried doing some phone interviews with folks in Tokyo, but given the lack of face-to-face interaction, let alone the time difference, finding a job proved daunting. I ended up taking a 6-week leave from work and headed to Japan on a tourist visa. Within 3 weeks of interviewing with various companies, I landed a job in the IT department of a major English-as-a-2nd-language school in Tokyo. It was a new role for me, to be sure (no more phone support)… I found myself doing desktop support, server administration, backup administration, network administration and the like, all at once and all the time! I used the Linux skills I’d learned before to set up lots of server environments, both for production IT services and, more interestingly, for our developers: JBoss, PHP, MySQL, MSSQL and everything in between. I feel like this was the manual, legacy way of doing DevOps, before it was called that. Back then, no one had even heard the term, let alone used a separate vocabulary for this kind of work.

As part of my desire to make MySQL more reliable, as well as to minimize the number of servers we had to maintain, I went back to my roots with VMware and investigated where it might fit within a Japanese company’s IT infrastructure… good luck, right? (They just weren’t ready for virtualization yet.) At the time, I could get GSX, and we were using Workstation in much the same way I had at SalesLogix. Then an offering came along that would allow me to really see how far I could take VMware within the company. That service was called VMTN (I’m not talking about VMTN in its current form, but rather the $299 subscription that let you try all of the products in the VMware catalog for one low price). I converted a couple of the web servers over to ESX 2.5, P2V’ed some of the servers off and onto the new ESX servers as virtual machines, and didn’t look back… who cared if I didn’t have a proper license? 😉 *It was around this time that I was also introduced to NAS storage (in the form of NetApp), but that’s a story for another day.

This led me to another job in Japan, primarily maintaining and architecting the NetApp and VMware environment at an international testing and certification company. I had the privilege of working for a young, very forward-thinking IT manager, and we made a plan to go “all-in” on the VMware ecosystem technologies that made sense for the organization, namely VMware Site Recovery Manager (SRM). In 2009, we met with several individuals from VMware about implementing the 1.x version of SRM. Not a single one of them had seen SRM, much less deployed it in production (with the exception of an internal demo environment), so I took it upon myself to get fluent in the technology. We made several successful test runs of the SRM software without issue, then had a rare opportunity to use it in production during the 2011 Tōhoku earthquake and tsunami. This led to a blog article, a VMworld session and considerable media attention on what exactly had taken place and how to apply what we had learned at other enterprises throughout Japan and the rest of the world. I even presented a seminar at the VMware Tokyo offices to several high-profile customers that were (needless to say) quite interested in implementing SRM, especially in light of recent events.


Phase Three

About a year before the earthquake, I started blogging to get my thoughts on paper, as well as to participate in the larger IT (mainly VMware) community, which at the time was under the tutelage of John Mark Troyer. In light of my presentation at VMworld, interest in my story and my desire to return to the USA post-earthquake… I ended up taking a job that would turn out to be the biggest shift in my career to date: the world of Technical Marketing! I joined NetApp, working on solutions in the End-User Computing and Cloud Computing space involving VMware technologies deployed on NetApp storage systems, and delving ever deeper into the world of VMware. After a couple of years of that, I decided to try my hand at the same job, but in a start-up atmosphere in the heart of Silicon Valley. It was a whirlwind performing technical marketing duties for a new company, just shy of 60 employees at the time, with a product that had only GA’ed the month I joined. *The specifics of my time at this start-up are another topic, for another time, so I’ll summarize here.

As we continued to develop a storage appliance for use with VMware, I decided to change responsibilities again as we kick-started our channel partner program, since that team needed technical guidance. I also got a chance to work with enterprise customers and hear their requirements and use cases for our product in their VMware environments. This shift brings me to the 4th phase of my IT career…


Phase Four

After working on the sales side of the house for the better part of the last year, I decided to continue down this road and get reacquainted with my roots: dealing directly with customers. This would seem to serve as a sort of “Phase Four” of my career. To that end, I’ve managed to land a role as a Sr. Systems Engineer on the Global Accounts team, and at VMware no less! I am very excited about this new stage in my life and my career, having done lots of different things over the years. Things really do “come full circle”, or so it would seem. Most will say that it’s merely a coincidence that I’m ending up back at VMware, but I leave this as evidence… VMware and my IT career both started back in 1998 😉 Ok… that may well be a coincidence, but the fact that the company to which I owe so much of my IT career has just hired me for the role I was always dreaming of was no accident. I am overjoyed by the enthusiasm (and the outpouring of support and congratulations) about the prospects for the future of IT, no matter where it might lead. I’m humbled that I get to continue to be a part of the ride!



Come Meet the vSamurai at VMworld


If you’ve been following me via this blog or elsewhere on this series of tubes, you may know that I have taken on more of a customer- and partner-facing role in the past couple of months. This has given me the opportunity to do more of what I love: getting people excited, as well as evangelizing what Coho Data is all about. The storage marketplace today is very crowded, with more and more start-ups appearing on the scene on what seems like a daily basis. It’s hard for me to keep up with all that’s going on, let alone the buyers of the technology themselves. It’s not going to get any easier until some major, sudden consolidation happens, but that’s mere speculation…

I’m looking forward to talking with members of the VMware and virtualization community, customers, partners, and anyone else who finds their way to VMworld this year. Thinking back about a year and a half, to when I was making the decision to leave NetApp and join a company that is truly innovating in the storage market, I saw the potential that Coho had to offer. Now, I feel that a lot of that promise is being realized. This will be our 2nd VMworld, and we’ll be upgrading from Silver sponsor last year to Platinum sponsor this year! We’re doubling down on VMware, and I am truly excited about what we’ll be showing and talking about this year. I’ll be privileged to share the details with you as we get closer to the show…

Until then, if you’d like to set up a meeting to talk with me or one of our other experts, head on over here.

Or if you’d just like to sign-up for a chance to win a free pass to attend the show, click below:




“Pets” vs. “Cattle”… In the Context of Storage?


Image by lamoney (CC BY-SA 2.0), via Wikimedia Commons

I’ve been thinking a little bit (more than usual) lately about the crossroads the IT industry is at today. I’ve been reflecting on some early posts I shared way back when virtualization was the tech de rigueur. Not only that, but my current company, Coho Data, sits at the nexus of this crossroads, if you will. Since we talk “web-scale”, “scale-out” and a multitude of other buzzwords in today’s IT world, it’s interesting to explore some of them in the context of the cloud and distributed systems that form the new reality of enterprise IT computing.

When dealing with cloud computing, proximally or otherwise, it’s likely that you fall into either the VMware camp or the OpenStack camp (or both) today. Some would say these solutions are at opposite ends of the cloud software spectrum. You may also have heard the term “Pets vs. Cattle” in reference to your servers: a Pet has a name and requires constant patching, updating and altogether expensive maintenance, whereas Cattle are nameless and can be removed from the system, replaced with new gear, and be back online doing their job without skipping a beat.

Well, what if it were possible to have a zoo and a farm all in one? And what about for storage?!

Normally when you think of storage, its persistent nature requires it to be a Pet and not Cattle, but with today’s more modern storage architectures, I’d like to propose that this isn’t necessarily the case. You can have both persistence of data and statelessness of the underlying components at the same time. Bear with me for a minute while I reason through this…

With a scale-out, shared-nothing node architecture, you have the ability to add and remove nodes on the fly without worrying about the health of your data. As you scale to a larger number of nodes, you care about each node even less. Despite the fact that you have a greater quantity of data in the system, the importance of any one individual storage node is reduced. Add to that the fact that a well-built self-healing, auto-scaling system can heal itself faster when there are more “cattle” on the farm.
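To make the “more cattle heal faster” point concrete, here is a minimal Python sketch (my own illustration, not any vendor’s actual rebuild logic): when a node fails, its data is re-replicated from surviving copies spread across all remaining nodes, so the per-node rebuild burden shrinks as the pool grows.

```python
def rebuild_load(num_nodes, objects_per_node):
    """Objects each surviving node must re-replicate after one node
    fails, assuming the failed node's data is healed evenly across
    all survivors (a common pattern in shared-nothing designs)."""
    survivors = num_nodes - 1
    return objects_per_node / survivors

# Losing one node in a small pool vs. a large pool (same data per node):
print(rebuild_load(4, 1000))   # each of 3 survivors rebuilds ~333 objects
print(rebuild_load(40, 1000))  # each of 39 survivors rebuilds ~26 objects
```

Ten times the nodes means roughly a tenth of the rebuild work per survivor, which is exactly why the system heals faster as it grows.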

As a function of this architecture, you can also remove nodes in much the same way, allowing you to return leased equipment or install newer, denser and more performant nodes into the system, with everything working in a heterogeneous fashion and without skipping a beat. This is great from a TCO perspective as well. It’s much better than being locked in with a fixed amount of high-performance flash and capacity spinning disk for the next 3-5 year spending cycle. Extend this even one step further and you can imagine automatically ordering new hardware to expand the system, adding it, then shipping the old back to the leasing company in a regular, predictable fashion.

One element of cloud-scale systems that allows this to happen is extensibility: being able to easily extend a system beyond its original reason for being. Typically this is enabled via APIs, and all of today’s next-generation storage systems are built from the ground up to support this type of integration. Being able to adjust organically and quickly to customers’ needs by offering APIs, toolkits and frameworks is a key ingredient in delivering web-scale!
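As a sketch of what API-driven extensibility might look like, here is an illustrative client for a hypothetical scale-out storage REST API. The class name, endpoint paths and payloads are all invented for this example and don’t correspond to any real product’s interface; a real client would issue actual HTTP calls against its system’s documented API.

```python
import json

class StorageAPIClient:
    """Illustrative client for a hypothetical scale-out storage REST API."""

    def __init__(self, base_url):
        self.base_url = base_url
        self.requests_log = []  # stand-in for actual HTTP traffic

    def _post(self, path, payload):
        # A real client would POST to base_url + path here (e.g. via urllib).
        self.requests_log.append((path, json.dumps(payload)))
        return {"status": "accepted"}

    def add_node(self, serial):
        """Ask the cluster to absorb a newly racked node."""
        return self._post("/v1/nodes", {"serial": serial})

    def evacuate_node(self, serial):
        """Drain a node's data so leased hardware can be returned."""
        return self._post("/v1/nodes/" + serial + "/evacuate", {})

client = StorageAPIClient("https://storage.example/api")
print(client.add_node("SN-1234")["status"])       # accepted
print(client.evacuate_node("SN-1234")["status"])  # accepted
```

The point is less the specific calls than the shape: grow, drain and replace operations exposed programmatically, so orchestration tools can treat storage nodes like any other cattle.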

The interesting part of this whole discussion is that, despite the importance of persistence in the storage world, given the right architecture we CAN indeed have the best of both worlds. Look at Coho’s scale-out enterprise storage architecture and you can see that we very much combine elements of both Pets and Cattle. We support the best of either approach, as any modern storage system should.

Here are some examples:

  • Pets like NIC bonding for high availability – we’re cool with that
  • Pets like to be managed carefully and thoughtfully – we build intelligence into our storage, but also give visibility to the admin
  • Cattle can be auto-scaled by just plugging in a new node and allowing the system to grow – we do this as well
  • Cattle are designed to accommodate failures – we build our failure domains across physical boundaries so that there is no single point of failure
  • Pets like to have constant uptime – refer to previous feature of cattle above; accommodating failure means the system stays online if a component fails
  • Pets like to have high availability – we do this as well, allocating a minimum of 2 physical nodes in a single, shared-nothing hardware design
  • Cattle work only when there is a shared-nothing architecture – utilizing independent nodes with object-based storage allows us to provide this as well!
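The failure-domain point in the list above can be sketched in a few lines of Python. The placement function and the chassis-based domains below are hypothetical illustrations of the general idea, not an actual placement algorithm: replicas are only co-located on nodes that sit in distinct physical failure domains.

```python
import itertools

def place_replicas(nodes, num_replicas=2):
    """Pick replica locations so no two copies share a failure domain
    (e.g. the same physical chassis). `nodes` maps node name -> domain.
    Illustrative sketch only."""
    for combo in itertools.combinations(nodes, num_replicas):
        domains = {nodes[n] for n in combo}
        if len(domains) == num_replicas:  # every copy in a distinct domain
            return combo
    raise RuntimeError("not enough failure domains for the replica count")

# Two nodes share chassis-A, so a copy must land on chassis-B:
cluster = {"n1": "chassis-A", "n2": "chassis-A", "n3": "chassis-B"}
print(place_replicas(cluster))  # ('n1', 'n3')
```

Because no chassis holds both copies, losing an entire chassis still leaves a live replica, which is what lets the system stay online through component failures.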

At Coho, we see the need for these differing approaches to computing from a storage perspective. We started out providing storage for VMware workloads, and customers seem to like how we’re delivering on that so far. In addition, we see the need to support OpenStack from a storage perspective as well, and are currently offering a tech preview of our OpenStack support. As a matter of fact, if you’re interested in becoming a BETA participant for OpenStack, you should definitely get in contact with us.

Thanks for reading!



Horizon View 6 – Reference Architecture

Since I joined Coho (back in March of last year; time flies), I’ve been hard at work delivering on our technical marketing solutions collateral, specifically with regard to VMware integrations, among many other duties. (Coho is a start-up, first and foremost.) My first order of business was to tackle a Reference Architecture for VMware Horizon View 6 on the Coho DataStream platform. This was not without its challenges, but through much blood, sweat and tears from Engineering and me, we finally have something of value!

Fast forward to now and we have another code release (our 2.4 release) under our belt, and we’re ready to share our findings and performance numbers for VMware VDI solutions on top of the Coho Data storage solution. I can’t wait to share all the advantages of Coho’s combination of speedy PCIe flash alongside our scale-out architecture, all the while leveraging some cool SDN hotness!

This is only the first in a long line of deep technical collateral coming rapidly down the pike to help our field, and especially our customers, truly leverage what “web scale” really means. Looking for more? Stay tuned!



Top Virtualization Blogs – 2015


The time has come again to vote for the Top VMware & Virtualization Blogs of 2015. I was in the middle of a transition to my current role at Coho Data during the voting in 2014, so I didn’t rank last year, but this blog has placed twice in the Top 50 before. I have been sharing quite a few details on Coho this year, so I hope that warrants a vote!

I think I was steadier with the delivery of content this year versus last year, blogging more frequently and with shorter articles that I hope were valuable to readers. I, of course, still like to do an involved, in-depth post here and there. I continue to work on the cadence and quality of my content. Thanks for your continued readership!

I would also like to mention the blog of my colleague, Forbes Guthrie (vReference), who is on the list as well. You may have seen his posts in my Twitter feed as well as my front page RSS.

Thanks for your vote!


