NetApp’s Project Astra Aims To Unite Persistent Storage, Kubernetes
Project Astra targets what NetApp calls a major obstacle to wider Kubernetes adoption: the lack of persistent storage as containerized applications and their data move from development through production, or between clouds and on-premises environments.
NetApp on Wednesday unveiled a new initiative aimed at bringing enterprise-class storage services to the Kubernetes container platform as a way to manage applications and data as they move between on-premises and multi-cloud environments.
The initiative, named Project Astra, is not yet a product. Rather, it shows how NetApp plans to work with Kubernetes in environments where customers require application and data mobility as containers move between clouds and on-premises infrastructure, said Beth Busenhart, market strategist for the Sunnyvale, Calif.-based storage vendor.
Project Astra is a storage and container platform based on Kubernetes, Busenhart told CRN.
[Related: CRN Exclusive: NetApp CEO George Kurian On Keystone, Clouds And Competition]
“Kubernetes is becoming ubiquitous as an orchestrator for container applications,” she said. “It is focused on portability. But the biggest block to adoption by enterprises today is that containers are stateless, and portability of the data is lost.”
With Project Astra, NetApp is bringing to bear its expertise in software-defined storage that will allow the company to marry storage services with Kubernetes, Busenhart said.
“It will allow stateful applications to run on Kubernetes in production environments and at scale,” she said.
NetApp previously offered NetApp Kubernetes Service, or NKS, but a couple of weeks ago said it would be backing away from that offering, Busenhart said. NKS was a proprietary Kubernetes distribution service that competed with 30 other similar services including GKE and Red Hat OpenShift.
“We had a single control plane in NKS to work with all cloud platforms,” she said. “But we wanted something that would let our technology integrate with any Kubernetes service that customers used. The big difference between NKS and Project Astra is that Project Astra allows customers to bring their data and workloads along on their portability journey.”
NetApp also has an open source technology for persistent storage volumes called Trident, Busenhart said. Trident will continue to be available as an open source project, but will also be included in the Project Astra platform, she said.
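As a rough illustration of how Trident exposes persistent storage to Kubernetes today: an administrator defines a StorageClass backed by Trident's CSI provisioner, and applications then request capacity through an ordinary PersistentVolumeClaim. This is a minimal sketch; the `backendType` value and all object names are illustrative and must match a backend the administrator has actually configured.

```yaml
# StorageClass backed by Trident's CSI provisioner.
# backendType is illustrative (here, an ONTAP NAS backend).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: netapp-trident-nas
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-nas"
---
# An application requests persistent capacity with a standard claim;
# Trident dynamically provisions the volume behind the scenes.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: netapp-trident-nas
```

Because the claim goes through the standard Kubernetes storage API, the application itself needs no NetApp-specific configuration.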
Project Astra represents the next generation of container technology with a focus on container management, said John Woodall, vice president of engineering west at General Datatech, a Dallas-based solution provider and NetApp channel partner.
Project Astra could do for containers what VMware did for virtualization, Woodall told CRN.
Before VMware, virtualization was already on the market, particularly on mainframes, but VMware brought support for Windows and Linux, leading to the virtualization of data center resources and, in turn, to entirely new processes such as new ways to protect data, Woodall said.
“Containers and container management may be the next wave,” he said. “Application portability, moving applications through devops to production, the ability to spin things up and down in a container, are becoming increasingly important. Businesses can set up 23 versions of something, and if one crashes, they go on to the next one.”
The biggest issue in that scenario has been the lack of persistent storage, Woodall said.
“What about an application with a database?” he said. “Every time a container starts, it has to point at that database. And as the application moves, the data needs to be available. … We’re in a new evolutionary phase. If containers are becoming the de facto standard for application development going forward, being able to provide not just container management but the services that go with it is important.”
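The database scenario Woodall describes is what Kubernetes persistent volumes address: the container mounts a claim, so the data outlives any individual container restart or rescheduling. A minimal sketch, with the claim name and database image chosen purely for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
    - name: postgres
      image: postgres:12           # illustrative database image
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data         # a pre-existing claim; data survives pod restarts
```

If the pod is rescheduled to another node, the claim is reattached and the database finds its data where it left it, provided the underlying storage is reachable from the new node.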
Project Astra looks at Kubernetes and storage together, and ties into NetApp’s Data Fabric platform, Woodall said.
“Rather than just give another choice for container management, NetApp is looking to provide container services beyond deployment,” he said. “The bulk of data is in the enterprise. They need services like data snapshots, backups, and management. If I can combine backup and recovery, easy migration and portability, cloning, and storage efficiency technologies, I’ve added value to make Kubernetes more workload-friendly.”
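Some of the services Woodall lists already have standard Kubernetes surfaces that a storage vendor can implement. Snapshots, for example, are exposed through the CSI VolumeSnapshot API; the sketch below assumes a driver-provided snapshot class, and all names are illustrative:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snap
spec:
  volumeSnapshotClassName: csi-snapclass   # illustrative; supplied by the CSI driver
  source:
    persistentVolumeClaimName: app-data    # the claim to snapshot
```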
Kubernetes plays right in with what NetApp is doing with its data storage and management technologies, Woodall said.
“A lot of vendors can move data between their own storage,” he said. “But what if you want to be portable to the cloud? Kubernetes can solve that problem. And NetApp provides persistent data management to go with it.”
NetApp on Wednesday is holding a preview of Project Astra for a select group of customers and channel partners seeking an early look at the technology, Busenhart said. The demonstration serves as an alpha look at the technology, with a beta version for customers expected to be available this summer, she said. General availability is expected this fall, she said.
Project Astra will be a great technology for NetApp’s channel partners because it is both Kubernetes-agnostic and storage-agnostic, Busenhart said.
“Our intention is to let customers manage their applications without contributing to vendor lock-in,” she said. “We will integrate with all public clouds, and it will be available for on-premises storage as well.”
Project Astra will first focus on the three primary public clouds, which Busenhart listed as Google Cloud Platform, Microsoft Azure, and Amazon Web Services. It will be available for other clouds depending on developer and user community requirements, she said.
Project Astra is the latest in a series of significant moves NetApp is making as part of its Data Fabric strategy for seamlessly moving data between on-premises and cloud infrastructures.
NetApp in October introduced NetApp Keystone, which lets customers purchase on-premises and cloud-based storage without worrying about future requirements: data can be migrated as needed across on-premises infrastructure, private clouds or public clouds, and storage can be purchased either outright or on a consumption model.