Juniper Adds Multi-Cloud Kubernetes Support to Juke
Juniper Networks released Juke 2.2, an update to the multi-cloud container platform it acquired when it bought composable infrastructure firm HTBASE late last year. The new version integrates with Kubernetes and its Container Storage Interface (CSI) and adds snapshot and scheduler capabilities.
The product itself isn’t composable infrastructure — it’s essentially container-focused software-defined storage, explained Scott Sneddon, senior director and evangelist for multi-cloud solutions at Juniper. But it partially plays in that space by bridging the multi-cloud gap between composable offerings.
“Juke is primarily a distributed, persistent storage solution,” he explained. “We’re really focused in on the storage problem that exists and would coexist with some of the composable solutions that are out there by delivering storage, and some orchestration capabilities to help manage containers — Kubernetes in particular.”
The product had been “getting some traction” prior to Juniper’s acquisition of HTBASE. But the 2.2 version aims to solve two key problems within the Kubernetes ecosystem, Sneddon said. One is the challenge associated with multi-cloud Kubernetes management. “The second challenge is that there really isn’t a good platform for persistent storage in Kubernetes,” he added.
The updates address these challenges by integrating with Kubernetes and its CSI. This enables Kubernetes cluster scale-out, with compute and storage nodes that can span clouds and sites or stay local to a single cluster availability zone.
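In practice, a CSI integration like this is consumed through a standard Kubernetes StorageClass and PersistentVolumeClaim. The sketch below shows the general shape; the provisioner name `csi.juke.example.com` and its parameters are placeholders, since Juniper has not published the driver identifiers here.

```yaml
# Hypothetical StorageClass backed by a Juke CSI driver.
# The provisioner string and parameters are assumptions for illustration.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: juke-distributed
provisioner: csi.juke.example.com
parameters:
  replication: "multi-site"   # placeholder: span clouds/sites vs. stay local
---
# A workload then claims persistent storage the usual Kubernetes way.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: juke-distributed
  resources:
    requests:
      storage: 10Gi
```

Because the driver sits behind the standard CSI interface, the application manifests stay portable across whichever clouds or sites the storage nodes span.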
The new version also adds volume snapshots and clone management for better reliability and mobility, as well as deployment improvements that make Juke easier to install.
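CSI-based snapshots are typically exposed through the standard Kubernetes `VolumeSnapshot` API, so a claim like the one a workload already uses can be captured declaratively. The snapshot class name below is a placeholder, not a documented Juke identifier.

```yaml
# Hypothetical snapshot of the "app-data" claim via the standard
# Kubernetes snapshot API; the class name is an assumption.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snap
spec:
  volumeSnapshotClassName: juke-snapclass
  source:
    persistentVolumeClaimName: app-data
```

A clone for mobility follows the same pattern: a new PersistentVolumeClaim can reference the snapshot as its `dataSource`, which is how a volume's contents move with a workload.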
These improvements give customers “the ability to manage multiple clusters across multiple clouds and then deliver a distributed storage platform to support containerized applications,” Sneddon said.
In addition, the ability to manage core Kubernetes storage objects from the Juke user interface means administrators can automate multi-cloud resource access to fit changing edge-compute or proximity requirements, and automate multi-cloud arbitrage for shifting day-and-night usage patterns.
“Because we are able to orchestrate this persistent distributed storage, we can also understand the latency between running containers and their access to storage,” Sneddon explained. “So we can take that latency and performance information, feed that back into the orchestrator, and determine where is the best place to deploy that workload.” A workload might run on-premises to save costs during low-usage periods, for example, then shift to the cloud when usage spikes.
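The feedback loop Sneddon describes can be sketched as a simple placement rule: measure container-to-storage latency per site, then pick the cheapest site that still meets the latency requirement. This is an illustrative sketch, not Juke's actual scheduler; the site names, numbers, and `best_site` function are invented for the example.

```python
# Illustrative placement sketch (not Juke's implementation): feed
# measured storage latency and current cost back into a decision
# about where to run a workload.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    storage_latency_ms: float  # measured container-to-storage latency
    cost_per_hour: float       # current compute cost at this site

def best_site(sites, max_latency_ms):
    """Pick the cheapest site whose storage latency meets the budget."""
    eligible = [s for s in sites if s.storage_latency_ms <= max_latency_ms]
    if not eligible:
        raise ValueError("no site meets the latency requirement")
    return min(eligible, key=lambda s: s.cost_per_hour)

# Overnight, low usage makes cheap on-prem capacity attractive; at peak,
# rising on-prem cost (or exhaustion) would tip the choice to a cloud site.
sites = [
    Site("on-prem", storage_latency_ms=2.0, cost_per_hour=0.05),
    Site("cloud-east", storage_latency_ms=8.0, cost_per_hour=0.12),
]
print(best_site(sites, max_latency_ms=10.0).name)  # -> on-prem
```

The real system would refresh the latency and cost inputs continuously, which is what makes day-and-night arbitrage automatic rather than a manual migration.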