The future of devops for network engineers
Source – networkworld.com
If you still live in a world of script-driven automation, for both service provider and enterprise networks, you are going to reach limits. There is only so far you can go alone. Scripts leave a gap: they lack modeling and a database at a higher layer. Production-grade service provider and enterprise networks require a production-grade automation framework.
In today’s environment, the network infrastructure is the centerpiece, providing critical connection points. Over time, the role of infrastructure has expanded substantially; today it underpins critical business functions in both service provider and enterprise environments.
A service provider requires a straight-through process from receiving an order to provisioning the network. Competition is now so intense that providers must reduce time to market and costs at every opportunity. Getting it right the first time is essential to customer satisfaction throughout the entire workflow.
On the other hand, enterprises are driven more by quality and compliance than by cost savings. They want to get a handle on the change management process. At the same time, enterprises focus on ensuring that configurations meet compliance benchmarks.
Networking is complex
These requirements must be fulfilled on a network infrastructure that is by its very nature complex, since it consists of many moving parts. When you open the hood of a network, there are many bespoke elements. Primarily, the complexity is driven by unique network snowflakes and the variety of ways to implement each service type.
More often than not, the network will be multi-vendor, consisting of numerous domains with operational and architectural teams operating in silos. Computer networks are complex, and this complexity can be ’managed out’ by introducing a framework that abstracts it. Lowering complexity and getting things right in a standardized way introduces you to the world of automation.
Within a network, there are elements that are either easy or hard to automate. One should not assume that because something is easy to automate, we should immediately dive in without considering the ease-versus-impact ratio. Operating system upgrades are easy to automate but have a large impact if something goes wrong.
No one wants to live in a world of VLANs and ports. Realistically, they have a relatively low impact, with a very basic configuration that needs to be on every switch. This type of device-level automation is an easy on-ramp to automation as long as it does not touch any services.
Ideally, automation consists of multiple blocks. Looked at block by block, automation can be relatively simple. However, when you introduce an orchestration system, it gets more difficult. Most assume automation goes straight to the network, but there are other parts of the process that can be automated, such as creating the jobs and analyzing show command results.
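Analyzing show command results is one of those automatable steps that never touches the network itself. A minimal sketch of the idea in Python: the sample output and the regular expressions are invented for illustration, and real show-command formats vary by vendor and platform.

```python
import re

# Hypothetical sample of "show version" output collected from a device;
# real output formats differ between vendors and platforms.
SHOW_VERSION = """\
Cisco IOS Software, Version 15.2(4)M7
router1 uptime is 3 weeks, 2 days, 1 hour
System image file is "flash:c2900-universalk9-mz.SPA.152-4.M7.bin"
"""

def parse_show_version(text):
    """Extract the software version and uptime from show-command text."""
    version = re.search(r"Version\s+(\S+)", text)
    uptime = re.search(r"uptime is\s+(.+)", text)
    return {
        "version": version.group(1) if version else None,
        "uptime": uptime.group(1).strip() if uptime else None,
    }

facts = parse_show_version(SHOW_VERSION)
print(facts)
```

Once the raw text is parsed into a dictionary like this, the results can be checked automatically instead of being eyeballed line by line.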
Usually, additional challenges surface when you get into the topology layer and carry out network and service changes on multi-vendor equipment across multiple domains.
My experience of automation
These challenges pulled my imagination toward more advanced automation use cases, for example, a fully automated migration from one vendor to another. As an independent consultant in Europe, I had seen such migrations from both sides. In my experience, they meant long, steady contracts of at least a year, along with some painful experiences that are beyond words.
When I thought of automation at this layer, I remembered a former colleague of mine mentioning that he had carried out a migration of this complexity using automation. Like most of us, he had followed the traditional vendor certification path, and I was surprised to find he had reached such a high level of programming skill in such a short space of time.
We previously worked together on European projects as network architects, and at that time we never touched on automation. I was curious to know how he could carry out a multi-vendor migration in days rather than months. Therefore, I decided to meet with him for a weekend tech talk to discuss learning tactics and the training courses he had pursued.
Shortly after our previous contract expired, he began to work full-time for a company with a large but basic network. He decided to automate the boring stuff. Upon meeting, he revealed that he initially started with bash shell scripting to generate basic configurations, and soon moved on to a real programming language: Python. He admitted that the company paid him for his networking skills, not his automation skills. As a result, he found himself learning to code in his free time.
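Generating basic configurations from a template, the kind of task he started with, needs nothing beyond the standard library. A sketch of the approach: the interface names, descriptions and VLAN numbers below are made up, and a real template would mirror your vendor's configuration syntax.

```python
from string import Template

# Hypothetical per-port template; a real one would follow the
# configuration syntax of the target vendor.
PORT_TEMPLATE = Template(
    "interface $interface\n"
    " description $description\n"
    " switchport access vlan $vlan\n"
)

def render_ports(ports):
    """Render a configuration snippet for each port definition."""
    return "\n".join(PORT_TEMPLATE.substitute(p) for p in ports)

# Invented port data standing in for an inventory or order input.
ports = [
    {"interface": "GigabitEthernet0/1", "description": "uplink", "vlan": 10},
    {"interface": "GigabitEthernet0/2", "description": "printer", "vlan": 20},
]
print(render_ports(ports))
```

Separating the data from the template is the whole trick: the same template then generates consistent configuration for every switch.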
His skills included device-level automation that pushed configurations to multiple nodes. By reading and extracting information from devices, he was able to generate Excel spreadsheets and create, for example, device inventories.
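Turning extracted device facts into an inventory file might look like the sketch below. The device data is invented, and the standard csv module stands in for a real spreadsheet library such as the one he would have used for Excel output.

```python
import csv
import io

# Invented device facts standing in for data pulled from live nodes.
devices = [
    {"hostname": "core-sw1", "model": "C9300", "os_version": "17.3.4"},
    {"hostname": "edge-r1", "model": "ISR4331", "os_version": "16.9.6"},
]

def build_inventory(rows):
    """Write device facts to CSV text, one row per device."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["hostname", "model", "os_version"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(build_inventory(devices))
```

The resulting CSV opens directly in a spreadsheet, which is usually all a basic device inventory needs.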
The limitations of the script-driven approach
When I asked about limitations, his first comment was “a programming language can carry out a lot of tasks but how far can you go alone?” His instant reply actually made sense. For the basic building blocks, it’s easy to grab code from the Internet. Libraries are readily available to communicate with multi-vendor equipment and perform the various tasks.
However, if other people are working alongside you, additional requirements must be considered. Ideally, the work needs to be documented, and upon handover it should be error-proof and packaged as a repeatable process.
A custom script deals only with its specific implementation, and its creators are responsible for all areas of script management. There are specific automation blocks and goals that are customized, but each one entails its own piece of code without a holistic view of all the correlation points. The fundamental flaw is that the script-driven approach lacks modeling and a database at a higher layer.
He was approaching automation from a building-block perspective: one task here, one task there. Initially, there was no orchestration tool and no team of people behind it.
My curiosity about how he successfully completed an advanced multi-vendor migration with this level of skill was still mounting. Eventually, we began to discuss the next level of automation, from custom scripts to automation frameworks. Our discussion on his use of a production-ready automation framework was a real eye-opener for me.
The driver of an ideal automation solution
We agreed that network engineers should be network focused. With the right automation framework, engineers do not need to learn a programming language to carry out tasks. What is required is simply to put all the network knowledge into a framework that is flexible and transparent irrespective of the situation.
Some frameworks force you to code: they offer the basics, but the specifics require additional coding. However, not all engineers want to be programmers, and hence the proposed framework must be one that is welcomed, accepted and exercised.
The ideal automation solution consists of a complete toolset that has the flexibility to neatly fit into any network without custom development. From day one, the automation should offer a complete package within the framework that already exists.
In addition, the framework must support all network functions: design, build and operation. With the technologies in place to deal with all vendors at the communication layer, it should include all the tools for data-driven jobs and more. It should eliminate the task of building custom scripts.
The tools are already sitting in the system, so engineers do not need to learn a programming language, which more often than not would have to be picked up in their free time. Simply apply your network knowledge to the system and it takes care of the rest.
After our discussion, I started to question the future of devops for network engineers, considering the availability of such frameworks.
The future of devops
The ability to abstract network information and automate is useful for anyone. Therefore, everyone should learn to automate. Why not master automation and make certain things easier? But after our discussion, I started to think, ‘does automation require me to be a programmer?’ I feel it’s not a necessity with frameworks like Glue Networks, Cisco Network Services Orchestrator (NSO), and NetYCE.
Putting together little pieces of code that form individual case studies on production networks is nothing more than a playground. You will never get them to production grade, as they lack scalability, traceability, compliance, documentation and support. If you are going to program with custom scripts, you are going to reinvent the wheel every time.
Reinventing the wheel creates a gap, and there is a big gap between a programmer and an automation framework. An automation solution should connect to systems like Active Directory (AD) and the Lightweight Directory Access Protocol (LDAP), along with providing all the required support and traceability.
If you continue to put custom scripts together on the network, it’s like trying to hold on to specks of sand. The sand will eventually slide through your hands, and so too will the scripts, as there is no framework to support them.
My conclusion is that custom scripts are a playground. I recommend that you learn to program; there are great courses out there, such as Ivan Pepelnjak’s Building Network Automation Solutions course. However, if you want to automate on a production network, you need a production-grade automation framework.
It should be the framework that can support multi-vendor and multi-domain; a framework that can abstract all the network complexity with inbuilt tools. Simply tell it what you want it to do in networking language terms and it does all the hard work for you.
There is only so far we can go alone with scripts. On a production-grade network, those limits are hit with a hard landing if you do not move to a production-grade automation framework.