AppDynamics interviews container expert Liz Rice

Source – sys-con.com

I was delighted to spend time with container guru Liz Rice recently, ahead of the presentation Liz will deliver at AppD Summit Europe.

There are over 460K Dockerized applications, and more than 5 billion container images have been pulled so far. Why do you think containers as a concept have caught on so quickly?

Containerization-related technologies existed before Docker arrived on the scene, but Docker made them easy to use from the command line. This opened up the advantages of containers to mainstream developers and allowed them to explore how containers make developers’ lives easier.
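
To give a sense of how low Docker set the bar, a single command is enough to pull an image and start a container (the image tag here is just an illustration):

    # Download the ubuntu image if not already present, start a container,
    # and attach an interactive shell inside it
    docker run -it ubuntu:16.04 /bin/bash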

For example, a team can use container images to recreate exactly the same environment on their laptops as in production, simplifying the whole process of dependency management and avoiding the syndrome of “but it works fine on my machine.”
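
As a minimal sketch of how that works (the image names, versions, and paths here are illustrative, not from the interview), a Dockerfile pins the whole environment, and the resulting image runs identically on a laptop and in production:

    # Dockerfile: every dependency is baked into the image
    FROM python:3.6-slim
    COPY requirements.txt /app/requirements.txt
    RUN pip install -r /app/requirements.txt
    COPY . /app
    CMD ["python", "/app/main.py"]

Then:

    # Build once; the same tagged image runs anywhere Docker does
    docker build -t myorg/myservice:1.0 .
    docker run myorg/myservice:1.0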

The developer community soon started to explore how well containers fit with a microservices architecture, and with CI/CD (continuous integration / continuous deployment) pipelines that make building and shipping code really easy. This really appeals to organizations who want to be able to ship features quickly. Businesses always want their software teams delivering faster!
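
As a rough sketch of what such a pipeline can look like (the registry, image name, and test script are hypothetical), each commit might drive a sequence of steps like this:

    # 1. Build an image tagged with the commit being tested
    docker build -t registry.example.com/shop/checkout:$GIT_COMMIT .
    # 2. Run the test suite inside that exact image
    docker run registry.example.com/shop/checkout:$GIT_COMMIT ./run-tests.sh
    # 3. Publish the artifact that was actually tested
    docker push registry.example.com/shop/checkout:$GIT_COMMIT
    # 4. Tell the orchestrator to roll out the new tag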

What tips would you share with someone embarking on their container journey?

Containers can touch everything in the software life cycle, from the development process through continuous delivery approaches, orchestration, security, and site reliability — so the process of adopting containers can seem overwhelming at first. But you don’t necessarily have to do everything all at once!

It’s a good idea to remember why you are exploring container usage in the first place, and for many organizations, at a strategic level it’s all about increasing the speed of deployment. Containers are a helpful tool when your teams are working with an agile methodology and shipping code frequently through a CI/CD pipeline.

The journey to containerization can look very different in different companies, or even in different teams within the same business. We increasingly see enterprises running large-scale containerized workloads in production, but there are also organizations that use containers only to simplify their development processes. In many cases developer teams have brought containers into their workflow simply to make their own lives easier, and usage within the organization has grown from there. In other enterprises the initial impetus comes from a platform project that builds an orchestrated cluster and offers it to the rest of the business; individual software teams across the business are then invited to start moving their workloads into containers so that they can run within that cluster.

Fortunately, many people are willing to share stories about the approaches they took, talking about them in case study presentations at conferences and meetups.

At a practical level, there are a number of ways you can find out more about how and where to start. I’d recommend a useful site called Katacoda, an interactive learning platform which has courses and labs that introduce you to Docker and many other tools. Another option is to attend my session on May 4th, of course!

What are the typical challenges to container adoption? How can these be overcome?

One of the greatest challenges in advancing container usage is bringing the CISO on board. Security leads naturally have questions about how containers will affect the security of deployments across the enterprise.

For example, if you’re shipping code more often, does this increase your risk profile? The good news there is that if you are embracing continuous delivery, you are shipping small incremental changes, so there isn’t a hugely different risk profile from one update to the next, and you can introduce automated checks like container image scanning to detect any known vulnerabilities in your dependencies.
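
For illustration, here is what such a check can look like in a pipeline, using the open-source scanner Trivy as one example (Aqua and others also offer commercial scanners; the image name is hypothetical):

    # Fail the build if the image contains known high-severity vulnerabilities
    trivy image --exit-code 1 --severity HIGH,CRITICAL registry.example.com/shop/checkout:$GIT_COMMIT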

Let’s also not forget microservices, which go hand in hand with containers. It’s much easier to reason about what an individual microservice should be doing in terms of accessing particular resources or user IDs than it is across a huge monolithic codebase. At Aqua we have tools that make it simple to apply these policies at the level of individual containers.
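
A sketch of what a per-container policy can look like using standard Docker flags alone (the service name is illustrative, and dedicated tools express much richer policies than this):

    # Run one microservice under a tightly scoped policy:
    # a non-root user, a read-only filesystem, no Linux capabilities,
    # and a hard memory limit
    docker run --user 1000 --read-only --cap-drop ALL --memory 256m myorg/payments:1.2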

Equally, CTOs need to be confident that they can ship well-tested code using containerized processes. You need to sell the technical and cultural benefits of being able to ship code more quickly and follow a more agile path. Using containers can turn well-established deployment processes on their head, and they are not necessarily right for every organization. If your world is dominated by Gantt charts, then containers and CI/CD may not be for you!

Focusing in on container security, do you think there has been progress in this area?

Definitely. There were concerns around container security in the past, and those led to improvements in the Linux kernel to address them, such as user namespacing.
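
For instance, Docker exposes user namespacing as a daemon option, so that root inside a container maps to an unprivileged user on the host (a sketch; many installations set this via /etc/docker/daemon.json instead):

    # Remap container root to a non-root host user
    dockerd --userns-remap=default
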
In some ways, containerizing workloads can help with security issues. For example, the ShellShock vulnerability affected a shell that’s present on most Linux machines. Most microservices don’t need a shell, so you can run them in containers that don’t include one. If another ShellShock were discovered, you would still have to worry about patching the host, but you wouldn’t need to change those microservices’ code at all. That means less testing and less risk when the patch is applied.
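
A minimal sketch of such a shell-less image (the binary name is illustrative, and the service must be statically linked, since the image contains nothing else):

    # Start from an empty image: no shell, no package manager, nothing to ShellShock
    FROM scratch
    COPY myservice /myservice
    ENTRYPOINT ["/myservice"]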

Many sizable organizations are now using containers in secure environments, and the surrounding ecosystem has matured considerably. The Enterprise Edition of Docker, with its focus on testing and certification, is an example of this maturity, and of course tools like Aqua are now helping enterprises achieve really robust security for containerized deployments.

How will skill sets need to evolve to take advantage of the potential containers can offer?

Containers are making the traditional developer role easier, but there are a lot of skills to be learned on the operations side. Orchestrated deployment has a learning curve, as you won’t know exactly which machine will run which piece of code. As enterprises run bigger and bigger orchestrated deployments, there needs to be a shift in mindset from thinking about individual machines to taking a more holistic view. Monitoring, tracing, alerting, and diagnostics are all areas where the tooling is changing dramatically to take account of this. Traditional orchestration involves manual decisions, but in the container world it’s more automated, less proprietary, and more collaborative.
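
To illustrate that shift with Kubernetes as one example orchestrator (the label is hypothetical), you interrogate the service as a whole rather than any particular machine:

    # Not "what is running on server X?" but "how is the checkout service doing?"
    kubectl get pods -l app=checkout -o wide     # every instance, and which node it landed on
    kubectl logs -l app=checkout --tail=20       # recent logs across those instances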

Flash forward three years — how do you see containers developing?

Over the last year, I have seen a lot of growth in people moving from “playing” with containers to using them for real in production. Container fundamentals are already fairly stable, but there is still room for improvement in tooling for CI/CD, monitoring, diagnostics, and programmable infrastructure for example. Much like DevOps, containers are really moving into the mainstream.
