Analyzing VMware’s past to chart the future of multicloud

Source: siliconangle.com

In this, the 10th year of theCUBE’s coverage of VMworld, I’m going to take a look at the ebb and flow of VMware Inc.’s role in the waves of change in the technology industry, and at what’s coming next.

The rise of virtualization

Server virtualization is pervasive today, but in the early days, it took a lot of time to educate and sort out the details of what would and wouldn’t work. The timing of server virtualization was good: Server sprawl was prevalent in data centers, and the dot-com burst had companies looking at cost saving initiatives.

Server utilization was typically in the single digits, so the idea of consolidating many physical machines onto far fewer hosts using virtual machines was attractive. One of Wikibon’s earliest projects back in the 2000s, before I joined, was helping customers claim rebates from energy companies for deploying VMs. In these early days, the hardware requirements were very important: An incompatible BIOS could take a long time to sort out.
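
To make the consolidation math concrete, here is a back-of-the-envelope sketch in Python. The server counts and utilization figures are illustrative assumptions, not measurements from the era.

```python
import math

# Back-of-the-envelope consolidation math (illustrative numbers only).
physical_servers = 100     # standalone servers, roughly one app each
avg_utilization = 0.08     # ~8% CPU utilization, typical of the era
target_utilization = 0.60  # a conservative ceiling for a virtualized host

# Total useful work, expressed in whole-server equivalents.
useful_work = physical_servers * avg_utilization

# Hosts needed if each one runs at the target utilization.
hosts_needed = math.ceil(useful_work / target_utilization)

print(f"{physical_servers} servers -> {hosts_needed} hosts "
      f"(~{physical_servers / hosts_needed:.0f}:1 consolidation)")
```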

The best thing about VMware’s virtualization is that once you sort out the hardware and configuration issues, you encapsulate the operating system and application (typically one app per OS+VM), so you no longer need to worry about something breaking when the underlying infrastructure is refreshed. An early use case was Windows NT end-of-life: Stick it in a VM and don’t worry about the OS — or the underlying server — being supported. Just keep it running.

The downside is that customers were taking old applications that should have been modernized and keeping them running for another five or more years. With the introduction of vMotion — a truly “magical” feature that allowed a VM to move between physical machines without any downtime — you could keep an app running for years through generations of server refreshes.
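
For the curious, a vMotion can also be triggered programmatically. Below is a minimal sketch using the pyVmomi Python SDK; the vCenter address, credentials and object names are placeholders, and production code would validate certificates and wait on the task rather than fire and forget.

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only shortcut: skip certificate validation (placeholder host/creds).
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of `vimtype` with the given name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

vm = find_by_name(vim.VirtualMachine, "legacy-nt-app")      # placeholder VM
dest = find_by_name(vim.HostSystem, "esxi02.example.com")   # placeholder host

# MigrateVM_Task performs the live (vMotion) migration; the VM keeps running.
task = vm.MigrateVM_Task(
    host=dest, priority=vim.VirtualMachine.MovePriority.highPriority)
print("vMotion requested, task state:", task.info.state)

Disconnect(si)
```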

As VMware ESX (the server virtualization product that was eventually renamed vSphere) matured, it expanded from test/dev environments to production, and ultimately could support all applications that could live on bare metal (look back at our 2011 coverage of the vSphere 5 launch). Server virtualization is an abstraction, not a simplification, so one of the ripple effects was that it “broke” how storage and networking worked.

VMware had built strong relationships with the server vendors: Early OEM relationships with IBM Corp., Hewlett-Packard Co. and Dell Inc. drove adoption (and helped the server manufacturers sell bigger boxes that could support VM farms). As virtualization adoption grew, a strong ecosystem formed with VMware at the center, including lots of solutions from independent software vendors, a big focus from channel providers and much of the data center vendor ecosystem. Even in 2010, when theCUBE was at VMworld for the first time, we noted that cloud was a potential threat. But back then virtualization was growing fast, and serious challenges were far in the future.

Fixing networking and storage

In 2012, Pat Gelsinger took over from Paul Maritz as chief executive officer of VMware. Like Maritz, Gelsinger had been an executive at EMC Corp. (my first interactions with VMware came during my tenure at EMC, in the year and a half before EMC purchased VMware in 2003 for the bargain-of-the-century price of $625 million). Before EMC, Gelsinger had a long career at Intel Corp., and we saw his new mission at VMware as similar to what Intel had accomplished for decades: Be the center of an ecosystem, balancing the growth of internal product development with the stability and goodwill of partners.

The storage ecosystem was especially important: Not only was parent company EMC the leader in the space, but HP, IBM and Dell were critical server OEMs and storage partners. So VMware spent years enhancing application programming interfaces and integrations to allow storage to work better with VMs.

Networking took a similar path, but physical and virtual networking remained two different worlds. A swath of software-defined networking companies was emerging, promising to pull together physical, virtual and even cloud networking.

The general sentiment from my networking peers was that one company stood above the rest and even threatened VMware’s position: Nicira led this revolution. Cisco Systems Inc. made a move to acquire Nicira, but with the help of EMC’s venture team, VMware had the winning bid for more than $1 billion, turning a potential threat into what would be the core of VMware’s NSX networking years later. Here’s my interview with Nicira founder and now venture capitalist Martin Casado from VMworld 2012.

That was the first of Gelsinger’s many big moves to expand VMware beyond vSphere. It also marked an important point in the relationship with Cisco. The center of Cisco’s data center strategy is UCS, or Unified Computing System, which was designed to be the best platform for virtualized workloads and, in joint solutions with VMware, has been integrated into environments including EMC/VCE Vblocks, NetApp FlexPods, IBM VersaStacks and the like, driving billions of dollars in sales.

Although startups created SDN, the networking giant Cisco and virtualization leader VMware took over the conversation by 2013 (Cisco SVP Soni Jiandani laid out the complex relationship on theCUBE at VMworld in 2013). Six years later, VMware and Cisco are still working together — and competing with each other — over the boundaries in virtual and cloud networking.

By 2014, VMware had released vVols with the goal of enabling VM-aware storage. While key storage partners integrated with this offering, it was also an opportunity for VMware to create a storage solution at the hypervisor level, which fit into what we at Wikibon called Server SAN, a slightly broader definition than hyperconverged infrastructure, or HCI, the general term that much of the industry used.

VMware vSAN now has more than 20,000 customers and leads HCI in both revenue and units, while still maintaining a stronger relationship with the storage ecosystem than it has with the networking companies. The No. 2 company in HCI is Nutanix Inc., which has complicated relationships with both VMware and Dell: Dell resells Nutanix while also selling vSAN as part of VxRail.

Pivotal and how applications fit into infrastructure

As mentioned earlier, the typical workload of a VM was Windows. For many years, the biggest competitive threat to VMware’s dominance in hypervisors was Microsoft Hyper-V. Microsoft leveraged its strong position in enterprise applications to try to eliminate competition from VMware, closing the feature gap between the hypervisors and, if you run a Windows shop, providing Hyper-V for “free.”

Still, VMware maintained a strong position against other hypervisors (open-source options such as KVM and Xen were also free). Although VMware kept expanding what applications could live in a VM, there was a growing discussion of application modernization and platform as a service, which promised to allow applications to be developed and run in any environment: physical, virtualized or cloud.

A small group inside VMware created Cloud Foundry, which in 2013 became a central asset in the creation of Pivotal, a new company built from products and more than 1,000 employees drawn from EMC and VMware. In 2014, the Cloud Foundry Foundation was formed to provide full open-source governance. VMware focused on infrastructure software (bottom-up in the stack) and Pivotal on application modernization (top-down), so it made sense to have separation while maintaining a strong partnership.

Pivotal helps customers modernize both applications and their organizations to be more agile in development. Pivotal has relationships with all of the public cloud providers; customer applications can live on-premises and/or in cloud deployments. Microsoft’s Azure solution started as a PaaS offering, then shifted more into infrastructure as a service, yet one of the biggest impacts that Microsoft had on application deployment was the full push of Microsoft Office into a SaaS offering. Customers no longer need to think about rolling out and maintaining servers for these deployments: They simply consume the software.

Though the term PaaS has become passé, containers and Kubernetes are at the forefront of discussions around microservices architectures. Docker, the company that brought Linux containers into the mainstream, was a side project of a PaaS company, and Kubernetes came out of Google’s usage of containers at scale.
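
To ground that shift, here is a small sketch using the official Kubernetes Python client to declare a containerized app as a Deployment. The names and image are placeholders, and it assumes a working kubeconfig rather than any particular platform discussed here.

```python
from kubernetes import client, config

config.load_kube_config()  # uses your current kubectl context

# A Deployment declares desired state: three replicas of one container.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="web",
                    image="nginx:1.25",  # any container image works here
                    ports=[client.V1ContainerPort(container_port=80)]),
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=deployment)
print("Deployment created; Kubernetes will reconcile toward 3 replicas.")
```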

Fast forward through Dell buying EMC for $67 billion in 2016 (a deal that included the ownership stakes in VMware and Pivotal) and the subsequent Pivotal IPO in 2018. Pivotal has embraced containers and is headed down the path to fully embracing Kubernetes, something Red Hat OpenShift did years ago.

In 2018, VMware acquired Heptio, whose team included some of the Google engineers who created Kubernetes. At the time, I observed that it seemed odd that Heptio, and later Bitnami, were acquired by VMware rather than Pivotal. VMware has been going down the path of containers and Kubernetes, including working closely with Pivotal on PKS, but I always saw VMware as the infrastructure layer and Pivotal as the application layer.

Now things come full circle: Just this week, Pivotal was acquired by VMware. I’ve noted that they should call the cloud-native pieces of the company Containerware, since that work sits far from the VM side of the business. To be successful, Pivotal and VMware must continue to help companies modernize both their applications and their processes.

Clouds roll in

Remember that virtualization started in very limited deployments; so did cloud computing. For many years, cloud computing could safely be ignored or considered a niche solution for startups (often gaming or other “less serious” workloads); AWS was derided as “that bookseller” that didn’t really understand the needs of enterprise organizations.

When it became apparent that cloud computing was gaining significant traction, VMware created its own strategy: vCloud Air. After a number of years with tepid customer response, VMware made a big move: It sold off vCloud Air to OVH and focused on partnering with cloud providers.

Most people forget that the first announcement at VMworld 2017 was with IBM Cloud, because soon after the show, VMware partnered with AWS, and that has been the story of the last two years. A year of deep engineering integration led to a new solution that allows users to run the same software stack in this special cloud zone and in their own data centers.

Adding to that, AWS Outposts and VMware’s Project Dimension will allow a stack of VMware software to run on AWS hardware (the same as in the public cloud) or on Dell hardware, creating a deeply integrated hybrid cloud offering. Wikibon’s David Floyer laid out a deep Hybrid Cloud Taxonomy to examine the options in the marketplace today.

VMware also has hybrid cloud options with Microsoft Azure and Google Cloud, leveraging CloudSimple. Just as it did across all servers a decade ago, VMware is trying to position itself as an important player in the burgeoning multicloud world. With more than 600,000 customers, VMware is in a good position to educate customers and to build a bridge to the future for those looking to leverage cloud-native architectures and manage increasingly complex IT environments.

That said, as users improve their skills in public cloud environments, will they continue to find value in VMware, or will the bridge to public cloud lead to a decline in VMware’s central role in their management and purchasing decisions? Hybrid cloud realities mean that customers will not simply abandon VMware in the next few years, yet the recent acquisitions are key indicators that VMware knows it must rapidly adapt its position in a fast-changing landscape.

Action Item

CIOs today know that the key to any successful strategy is this: Decisions made today must allow for the agility to adapt and adopt new solutions as they become available. Applications are the long pole in the tent of modernization. Most users modernize their platforms (adopting public cloud and leveraging solutions such as HCI to create private clouds) as they rationalize and update their application portfolios. There is a balance between leveraging the depth of functionality that platforms offer and maintaining the flexibility to switch providers if needed.
