Windows 10: Containers are the future, and here’s what you need to know

Source: techrepublic.com

With two use cases for its containers, and five different container models, it would seem that Microsoft’s container strategy is ripe for confusion. But that’s not the case.

Microsoft offers many different container models on Windows. If you're running Windows 10, you're running several without even realising it: wrapping and isolating all your UWP apps; using thin virtual machines to deliver security; and, if you're a developer, either Windows or Linux Docker instances.

That layered container model is key to the future of Windows, one that reaches into the upcoming Windows 10X and out into the wider world of public and private clouds, with Docker Windows containers now officially part of Kubernetes. Microsoft is also working on shrinking Windows Server, producing lightweight container base images built on a more capable Windows.

Windows or Docker?
The desktop containers are intended to both simplify and secure your desktop applications, providing much-needed isolation for apps installed via appx or MSIX (and, in Windows 10X, for any other Win32 code). They're built on Windows' own process-isolation technology, not the familiar Docker model that we find in our cloud-hosted enterprise applications.

That's not to say Windows 10 can't run Docker containers. Microsoft is using Docker's services to underpin its Windows Server containers. You can build and test code running inside them on Windows PCs running either Pro or Enterprise builds, and the upcoming 2004 release of Windows 10 brings WSL2 and support for Linux containers running on Windows.
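As a quick sanity check, you can pull and run one of Microsoft's Windows base images (assuming Docker Desktop is installed and switched to Windows containers; the image tag here is just an example):

    docker run --rm mcr.microsoft.com/windows/nanoserver:1809 cmd /c echo Hello from a Windows container

On a Windows 10 PC this launches under Hyper-V isolation by default, so the container image's build doesn't have to match the host's.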

Docker has been developing a new version of its Docker Desktop tools for Windows around WSL2, making it as easy to develop and test Linux containers on Windows 10 as it is to work with Windows’ own containers. With Microsoft positioning Windows as a development platform for Kubernetes and other cloud platforms, first-class Docker support on Windows PCs is essential.
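With the WSL2 backend enabled in Docker Desktop's settings, one quick way to confirm that Linux containers really are running against the WSL2 kernel (the output shown is indicative, not exact):

    docker run --rm alpine uname -r
    # prints something like 4.19.84-microsoft-standard, the WSL2 kernel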

Windows containers in the hybrid cloud
It’s not only Linux containers in the cloud. Windows containers have a place too, hosting .NET and other Windows platforms. Instead of deploying SQL Server or another Windows server application in your cloud services, you can install it in a container and quickly deploy the code as part of a DevOps CI/CD deployment. Modern DevOps treats infrastructures (especially virtual infrastructures) as the end state of a build, so treating component applications in containers as one of many different types of build artifact makes a lot of sense.
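As a sketch of that idea, here's how a SQL Server instance might be started as just another build artifact in a pipeline (bash syntax; this uses the Linux SQL Server image as an example, and the password is a placeholder):

    docker run -d --name sql1 \
        -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Example!Passw0rd" \
        -p 1433:1433 \
        mcr.microsoft.com/mssql/server:2019-latest

A CI/CD pipeline can spin this up for integration tests and tear it down afterwards, the same way it handles any other dependency.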

What's important here is not the application, but how it's orchestrated and managed. That's where Kubernetes comes in, along with Red Hat's OpenShift Kubernetes service. Recent releases have added support for Windows containers alongside Linux, managing both from the same controller.

While both OpenShift and Kubernetes now support Windows containers, they're not actually running Windows containers on Linux hosts. There's no technical reason why they couldn't use a similar technique to the one Docker uses to run Linux containers on Windows; however, Windows Server's relatively strict licensing conditions would require a Windows licence for each virtual machine instance hosting the Windows containers.

Building Windows containers for Windows Server and Azure
Using Windows containers in Kubernetes means building a hybrid infrastructure that mixes Linux and Windows hosts, with Windows containers running on Windows Server-powered worker nodes. Using tools like OpenShift or the Azure Kubernetes Service automates the placement of code on those workers, managing a cross-OS cluster for your application. .NET code can be lifted into a Windows Docker container and deployed via the Azure Container Registry. You can manage those nodes from the same controller as your Linux nodes.
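A minimal sketch of that lift-and-shift, assuming a .NET Framework web app, with placeholder registry and image names:

    # Dockerfile: package the app on a Windows base image
    FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8
    COPY ./app/ /inetpub/wwwroot

    # build, then push to an Azure Container Registry
    docker build -t myregistry.azurecr.io/myapp:v1 .
    az acr login --name myregistry
    docker push myregistry.azurecr.io/myapp:v1

From there, the image can be referenced from Kubernetes manifests like any Linux image.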

There's no need to learn anything new if you're coming to Windows containers from Linux. You're using familiar Docker tools to build and manage your container images, and then the same Kubernetes tooling you'd use for a pure Linux application. Mixing and matching Windows and Linux microservices in a single application lets you take advantage of OS-specific features and keep the expertise of existing developer teams, even as you switch from a traditional monolithic application environment to a modern distributed system. The one Kubernetes detail worth knowing is that workloads must be steered onto the Windows worker nodes, as the sketch below shows.
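A minimal deployment sketch using the standard kubernetes.io/os node label (the image name is a placeholder carried over from the earlier example):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp-win
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: myapp-win
      template:
        metadata:
          labels:
            app: myapp-win
        spec:
          nodeSelector:
            kubernetes.io/os: windows    # schedule onto Windows workers only
          containers:
          - name: myapp
            image: myregistry.azurecr.io/myapp:v1

Everything else, from kubectl to services and ingress, works exactly as it does for Linux pods.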

Microsoft is building a suite of open-source tools to help manage Windows containers, with a GitHub repository for the first one, a logging tool. Improving logging makes sense for a distributed application, where multiple containers interact under the control of Kubernetes operators.
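That first tool is Log Monitor, in the microsoft/windows-container-tools repository on GitHub. It wraps a container's entry point and relays Windows log sources (Event Log, ETW, log files) to STDOUT, where Kubernetes expects to find them. A Dockerfile fragment sketching that pattern, with a placeholder application path:

    WORKDIR /LogMonitor
    COPY LogMonitor.exe LogMonitorConfig.json ./
    # LogMonitor wraps the real entry point and forwards its logs to STDOUT
    ENTRYPOINT ["C:\\LogMonitor\\LogMonitor.exe", "C:\\app\\service.exe"]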

Choosing isolation: process or Hyper-V?
Outside of Kubernetes, Windows containers on Windows Server have two different isolation modes. The first, process isolation, is similar to that used by Linux containers: multiple images run on a host OS, sharing the same kernel as the host. Namespaces keep the processes isolated, managing resources appropriately. It's an approach best used when you know and trust everything running on a server, so there's no risk of information leaking between different container images. The small security risk that comes with a shared kernel is why Microsoft offers a more secure alternative: isolated containers.
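On a Windows Server host you can request process isolation explicitly, and then check which mode a container actually got (the image tag is an example; for process isolation the image version needs to match the host):

    docker run -d --isolation=process --name svc mcr.microsoft.com/windows/servercore:ltsc2019 ping -t localhost
    docker inspect --format "{{.HostConfig.Isolation}}" svc
    # prints "process" or "hyperv"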

Under the hood of Windows Server's isolated containers is, of course, Hyper-V. Microsoft has been using it to improve the isolation of Docker containers on Windows, with a thin OS layer running on top of Hyper-V to host a Docker container image, keeping performance while ensuring that containers remain fully isolated. Each container is technically a virtual machine with its own kernel, but one optimised for running container images. Using virtualisation in this way adds a layer of hardware isolation between container images, making it harder for information to leak between them and giving you a platform that can safely host images from multiple tenants.

It's easy enough to make and run a Hyper-V container: all you need to do is set the isolation parameter on the Docker command line to 'hyperv', which launches the container using virtualisation to protect it. The default on desktop PCs is Hyper-V isolation; on servers it's process isolation. As a result, you may prefer to force Hyper-V containers on your Windows Server container hosts.
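For example (the image tag is illustrative):

    docker run --rm --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c ver

The same setting can also be made the daemon-wide default through Docker's exec options if you want every container on a host to use Hyper-V.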

Microsoft has been working hard to reduce the size of the Hyper-V server image that's used for Windows containers. It's gone down from nearly 5GB with Windows Server 1809 and 1903 to half the size, at 2.46GB, in the upcoming 2004 release. And that's Windows Server Core, not Nano! Building on Windows Server Core makes sense, as it has a larger API surface, reducing the risk of application incompatibility.
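Once the 2004 base images are published, the difference should be visible with a simple pull (the tags follow the convention Microsoft uses for these images):

    docker pull mcr.microsoft.com/windows/servercore:1903
    docker pull mcr.microsoft.com/windows/servercore:2004
    docker images mcr.microsoft.com/windows/servercore
    # compare the SIZE column across the two tags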

With two use cases for its containers, and five different container models, it would seem that Microsoft's container strategy is ripe for confusion. But that's not the case. Windows' own application isolation technologies are managed automatically by the installer, so all you need to consider is whether your server applications run using process isolation or in Hyper-V. And that decision comes down to whether you're running your applications on your own servers in your own data centre, or in the public cloud.
