Accelerating Kubernetes Development
Building a tool for Kubernetes developers means solving a bunch of problems (e.g., multiple clusters, kubeconfigs, secrets, kubectl functionality, how to represent the hierarchy of abstractions inside a cluster, etc.) and creating a user experience that promotes speed. A good dashboard lets you select from among multiple clusters and:
See the state of everything and work with whatever you’re permissioned to access (meaning IT operators never need to centrally administer dashboard-specific permissions).
Drill into abstractions—see what’s running and what’s dependent on what.
Identify what’s relevant to your work right now—i.e., let you constrain context.
Accelerate your work: navigate around local directories, control local applications (e.g., Git, VSCode, etc.) from the command line, view/edit/reapply abstractions, grab container logs, log into container shells, etc.
Browse and apply Helm charts, etc.
And maybe also maintain several contextualized work-sessions in parallel (different projects, different clusters, etc.), so you never need to spend cycles finding your place.
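Much of the multi-cluster bookkeeping above reduces to kubeconfig handling. As a minimal sketch (using a hypothetical, pared-down kubeconfig expressed as a Python dict; real kubeconfigs are YAML files, usually at `~/.kube/config`), here is the kind of context-resolution logic a dashboard performs when you pick a cluster:

```python
# Hypothetical, simplified kubeconfig: real files carry credentials,
# namespaces, and TLS material as well.
kubeconfig = {
    "current-context": "dev",
    "contexts": [
        {"name": "dev",  "context": {"cluster": "dev-cluster",  "user": "alice"}},
        {"name": "prod", "context": {"cluster": "prod-cluster", "user": "alice"}},
    ],
    "clusters": [
        {"name": "dev-cluster",  "cluster": {"server": "https://dev.example:6443"}},
        {"name": "prod-cluster", "cluster": {"server": "https://prod.example:6443"}},
    ],
}

def resolve_context(cfg, name=None):
    """Return (API server URL, user) for the named context, or the current one."""
    name = name or cfg["current-context"]
    ctx = next(c["context"] for c in cfg["contexts"] if c["name"] == name)
    cluster = next(c["cluster"] for c in cfg["clusters"] if c["name"] == ctx["cluster"])
    return cluster["server"], ctx["user"]

print(resolve_context(kubeconfig))           # the user's current context
print(resolve_context(kubeconfig, "prod"))   # an explicit switch
```

A dashboard that keeps several of these resolved contexts alive at once is what makes parallel, per-project work-sessions possible.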
What’s interesting is that solving these problems well means you’ve already solved some important additional problems. You know how to create and maintain contexts in which work can happen efficiently. And in so doing, you’ve mastered important aspects of the integration puzzle that turns a ‘context’ into a ‘workflow.’
Specifically, you’ve mastered the terminal, which is the interface most devs use to coordinate local (and remote) applications, CLIs, and the artifacts they create and consume. You’ve mastered the Kubernetes API, and through that, the substrate of tools (e.g., Helm) used to configure, deploy and lifecycle-manage applications, components and services, including the integrated, non-Kubernetes-native solutions on which you depend to support your bespoke applications (such as [third-party] Ingress or Message Queue or Metrics or Database, etc.).
Given this, a strong Kubernetes dashboard can, in principle, become a full-fledged Kubernetes IDE by creating some relatively simple integration ‘glue’: code that lets contributors build ‘extensions’ in a simple, secure, standardized way. A way that lets extensions automate what you yourself could do manually (assuming you knew how).
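The essential property of that ‘glue’ is that the host lends each extension the user’s existing context rather than giving extensions credentials of their own. A minimal sketch, with illustrative names (this is not any real dashboard’s API):

```python
class HostAPI:
    """What the dashboard lends an extension: the user's context, no more."""
    def __init__(self, user, kube_context):
        self.user = user
        self.kube_context = kube_context

class Extension:
    """Base class every extension implements. Names here are hypothetical."""
    name = "base"
    def activate(self, host):
        raise NotImplementedError

class IngressExtension(Extension):
    name = "ingress-helper"
    def activate(self, host):
        # The extension acts with the user's permissions -- never with a
        # privileged service account of its own.
        return f"{self.name} active for {host.user} on {host.kube_context}"

registry = {}
def register(ext_cls):
    """The standardized entry point: the host discovers extensions by name."""
    registry[ext_cls.name] = ext_cls()

register(IngressExtension)
host = HostAPI(user="alice", kube_context="dev")
print(registry["ingress-helper"].activate(host))
```

The registration step is what keeps the model standardized: every extension, whatever it drives, plugs in through the same small surface.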
For a specific kind of app, say an ingress controller, this might involve:
Inheriting your permissions (or, in some cases, organizing and remembering separate permissions for certain services).
Discovering relevant applications/services (i.e., ingress controller components, sidecars, manager workloads, operators, etc.) in the cluster you’re now looking at.
Dynamically representing the state of those ingress components within the ‘dashboard’ webUI.
Extending the webUI and/or dashboard configuration dialog(s) with bespoke sections that simplify ingress configuration.
Plus other features, such as automatically extracting part or all of the hierarchy of ingress objects, so you can store the files locally and place them in version control.
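The ‘discovery’ step above usually comes down to label selection, the same mechanism behind kubectl’s `-l` flag. A sketch, using stand-in resources where a real extension would query the Kubernetes API:

```python
# Stand-in resource list; a real extension would fetch these objects from
# the cluster's API server under the user's inherited permissions.
resources = [
    {"kind": "Deployment", "name": "ingress-nginx-controller",
     "labels": {"app.kubernetes.io/name": "ingress-nginx"}},
    {"kind": "Service", "name": "ingress-nginx",
     "labels": {"app.kubernetes.io/name": "ingress-nginx"}},
    {"kind": "Deployment", "name": "payments-api",
     "labels": {"app.kubernetes.io/name": "payments"}},
]

def select(objs, labels):
    """Return objects whose labels contain every key=value pair given,
    mirroring Kubernetes equality-based label selectors."""
    return [o for o in objs
            if all(o["labels"].get(k) == v for k, v in labels.items())]

ingress_parts = select(resources, {"app.kubernetes.io/name": "ingress-nginx"})
print([o["name"] for o in ingress_parts])
```

Everything the extension then renders in the webUI hangs off a selection like this one.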
It’s easy to imagine extending this fundamentally simple model (simple in principle, anyway; it can be tricky to implement in practice) to let extensions drive all sorts of components and provide a wide range of valuable services to developers. Assuming your dashboard already knows how to import and apply Helm charts, it should be simple to create the extension equivalent of an IaaS database-as-a-service (DBaaS) framework, where the extension maintains a perpetually updated list of trusted sources for 15 major databases and provides basic configuration options and one-click installation for each type, bookmarking access to product-specific configuration tools for further work.
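The core of such a DBaaS-style extension is just a curated catalog plus command generation. A hedged sketch (the catalog entries and repo URLs are illustrative; a real extension would vet and refresh them continuously):

```python
# Hypothetical trusted-source catalog: database type -> Helm chart location.
CATALOG = {
    "postgresql": {"repo": "https://charts.bitnami.com/bitnami", "chart": "postgresql"},
    "redis":      {"repo": "https://charts.bitnami.com/bitnami", "chart": "redis"},
    "mongodb":    {"repo": "https://charts.bitnami.com/bitnami", "chart": "mongodb"},
}

def install_command(db, release, namespace, values=None):
    """Build the helm install invocation a one-click flow would execute."""
    entry = CATALOG[db]
    cmd = ["helm", "install", release, entry["chart"],
           "--repo", entry["repo"], "--namespace", namespace]
    for k, v in (values or {}).items():
        cmd += ["--set", f"{k}={v}"]
    return " ".join(cmd)

print(install_command("postgresql", "orders-db", "dev",
                      {"auth.database": "orders"}))
```

In practice the dashboard would run this through its existing Helm machinery rather than shelling out, but the catalog-plus-defaults shape is the same.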
Such an arrangement around an extensible dashboard could benefit the entire community, including:
Developers: Who are free to install (or build) open source extensions for their favorite tools, thus creating flexible, highly-customizable, and efficient working environments within the dashboard. (Working environments that can, in principle, be packaged and shared among, for example, developers collaborating on a given project).
Operators: Who are also free to create and share collections of extensions, and are thus able to ‘soft-standardize’ working environments without making developers feel as though they’re being dictated to. Operators can, meanwhile, closely manage RBAC permissions on any cluster, ensuring that dashboard+extension users are never able to go ‘out of bounds.’
CNCF ecosystem participants: For whom building an extension that manages their components under inherited RBAC permissions, inside a dashboard that already supports terminal and Kubernetes API/kubectl sessions and provides sophisticated, widget-rich, web-like visualization and interaction, is easier than maintaining many integrations with specific development tools. It’s also potentially far more useful to developers, who can adopt subsystem- or tool-specific extensions as part of their evolving workflows, improving quality of life rather than learning yet another UI, CLI, or REST API. Making tools and components more consumable by developers (and operators) can be an important selling proposition in an ecosystem dominated by YAML and crude interaction patterns (edit YAML, change YAML, reapply YAML, rinse, repeat).
The critical factor, of course, is whether such an extensible dashboard can reach sufficient critical mass to become the preferred way of packaging functionality and accelerating development—completing its evolution into an ‘IDE,’ and perhaps even becoming a sort of framework.
Here, it feels to me as though what’s most important is that the dashboard sits where a certain subset of Kubernetes coders and operators do most of their work, which is in the tight, iterative loop between the desktop and the cluster. This is quite different from classic IDEs, built around code editors. Obviously, the latter are still vitally important when work is focused on new application coding or on maintaining mature application code, particularly where back-end processes are fully automated with CI/CD.
But as most Kubernetes-centric developers are coming to realize, there tend to be big gaps (and latencies) in this model that only a dashboard-centric, context-aware, extensible IDE is ‘fast enough’ to fill. Once the containers are built, people need tools purpose-designed for iteratively tweaking the abstractions that put those containers to work. Once those YAML files are tested and made part of application lifecycle and/or infra-as-code repositories, and once CI/CD has been extended to apply them automatically (in our ideal universe), there are now forensics, break/fix, and other new tasks that need to happen on that bleeding edge of dev-to-cluster iteration.