How Kubernetes and Containers Enable Highly Scalable CI Applications

Source: rtinsights.com

Building continuous intelligence applications on cloud-native architectures using containers and Kubernetes makes it easier to deploy, maintain, and update those applications to meet changing business requirements.

The first blog in this series noted how cloud-native is increasingly the architecture of choice for building and deploying continuous intelligence (CI) applications. A cloud-native approach, combined with a streaming engine, gives companies the flexibility to rapidly develop and deploy new CI applications to meet fast-changing business requirements.

Specifically, cloud-native architectures are dynamic, with on-demand allocation and release of resources from a virtualized, shared pool. Such an elastic environment enables rapid scaling to meet the varying compute and performance demands of CI applications.

See also: Data Architecture Elements for Continuous Intelligence

In addition to being highly dynamic, cloud-native applications can quickly incorporate new features, applications, and data. They are composed of independent processes that work together. Such an architecture allows companies to more easily make use of new or different technology as needed by the business. For example, a company might improve a customer engagement chatbot by using a new cognitive app that delivers better insights into a text stream or improved language processing of a call. Or, a company could use new or different datasets as the input to their CI applications. The ability to use different or new data is critical since CI apps are powered by their data feeds.

In the past, switching to a new analysis algorithm or dataset might require massive efforts to rewrite, test, and deploy an application. A cloud-native approach, paired with a suitable streaming engine, provides that flexibility without requiring entire applications to be rewritten. For example, if a streaming application is made up of microservices that work together, changes can be made to a specific microservice without requiring changes to the rest of the application (a minimal sketch of this loose coupling follows the list below). Simply put, a cloud-native architecture offers a number of benefits, including:

Applications or services (microservices) are loosely coupled, so changes can be made to one without having to make changes in others
Applications or processes are run in software containers as isolated units, so they can be reused and do not have to be updated if other elements of a distributed CI application are changed
Processes are managed by a central orchestration manager to improve resource usage and reduce maintenance costs
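
To make that loose coupling concrete, here is a minimal Python sketch (all names are hypothetical) of a streaming microservice whose analysis step is a pluggable component. Swapping in a new algorithm means supplying a different function; the ingestion and publishing code does not change.

```python
from typing import Callable, Iterable

# Any callable that turns one raw event into an insight can serve as the
# analysis step; swapping algorithms means passing a different function.
Analyzer = Callable[[dict], dict]

def keyword_alert(event: dict) -> dict:
    """Toy analyzer: flag events whose text mentions 'fraud'."""
    return {"id": event["id"], "alert": "fraud" in event.get("text", "").lower()}

def process_stream(events: Iterable[dict], analyze: Analyzer) -> None:
    """Ingest events and publish insights; knows nothing about the analyzer."""
    for event in events:
        insight = analyze(event)
        print(insight)  # stand-in for publishing to a downstream topic or queue

if __name__ == "__main__":
    sample = [{"id": 1, "text": "possible FRAUD on card"}, {"id": 2, "text": "ok"}]
    process_stream(sample, keyword_alert)  # a new analyzer drops in here
```
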
Cloud-Native’s Core: Containers

CI applications often need changes to accommodate new data sources, new real-time analytics techniques, and more sophisticated artificial intelligence applications to meet changing business requirements. As a result, developing, deploying, and maintaining CI apps can require a lot of ongoing work. Cloud-native CI apps based on containers are easier to update and maintain.

A container is an executable unit of software in which application code is packaged, along with its libraries and dependencies, in a standard way so that it can run anywhere, whether on-premises or in the cloud.

To do this, containers take advantage of a form of operating system (OS) virtualization in which OS features (such as namespace and cgroup primitives) are used to isolate processes and control the amount of CPU, memory, and storage those processes may access. Essentially, programs running inside a container can only see the container’s contents and assigned devices.
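
To make that isolation tangible, the short Python sketch below reads the CPU and memory limits a container’s cgroup imposes on it. It assumes a Linux host using cgroup v2, where limits are exposed as files under /sys/fs/cgroup; exact paths and availability vary by container runtime and kernel.

```python
from pathlib import Path

def read_limit(name: str) -> str:
    """Read one cgroup v2 control file, if the kernel exposes it."""
    path = Path("/sys/fs/cgroup") / name
    return path.read_text().strip() if path.exists() else "not available"

# memory.max holds the byte limit (or "max" for unlimited);
# cpu.max holds "<quota> <period>" in microseconds (or "max <period>").
print("memory limit:", read_limit("memory.max"))
print("cpu quota:   ", read_limit("cpu.max"))
```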

The advantages of using containers are that they are:

Lightweight and scalable. Containers, which do not need a full OS instance, are small and do not consume large amounts of resources. Their smaller size—especially compared to virtual machines—means they can spin up quickly and better support cloud-native applications that scale horizontally.
Portable and platform independent. Containers can be used throughout an application’s lifecycle from development to test to production. Because containers carry all their dependencies with them, software can be written once and then run without needing to be reconfigured across laptops, cloud, and on-premises computing environments.
Supportive of modern development and architecture. Containers allow large applications to be broken into smaller components and presented to other applications as microservices. Due to a combination of portability across platforms and small size, containers are an ideal fit for modern development and application strategies such as DevOps, serverless, and microservices.

Comparable to the way virtual machines and virtualization helped simplify the management of compute workloads, containers are helping businesses more easily create, deploy, and scale cloud-native CI applications. Containers offer a way for processes and applications to work independently yet be bundled and run as a single distributed system.

Many businesses are moving to container-based microservices to develop modern applications, including new CI applications such as real-time fraud detection, decision support, and enhanced customer service using bots or personalized recommendations. The result can be a large number of containers that must be managed, maintained, and scaled over time.

Just as virtual machines rely on a hypervisor, containers can be orchestrated with Kubernetes, an open-source platform that automates the deployment and management of containerized applications. Specifically, Kubernetes provides service discovery and load balancing, storage orchestration, self-healing, automated rollouts and rollbacks, and more.
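
As an illustration, the sketch below uses the official Kubernetes Python client (pip install kubernetes) to trigger one of those automated rollouts: patching a Deployment’s container image causes Kubernetes to replace pods gradually, and a failed rollout can be rolled back. The deployment name, namespace, and image tag here are hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
apps = client.AppsV1Api()

# Changing the pod template's image triggers a rolling update: Kubernetes
# swaps pods out gradually while keeping the service available.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"name": "ci-analyzer", "image": "registry.example.com/ci-analyzer:v2"}
                ]
            }
        }
    }
}
apps.patch_namespaced_deployment(name="ci-analyzer", namespace="default", body=patch)
```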

Kubernetes has been embraced by the industry as the de facto standard for container orchestration. Groups like the Cloud Native Computing Foundation (CNCF), which is backed by IBM, Red Hat, and many other technology providers, have been Kubernetes proponents for years.

Kubernetes makes it easier to deploy and manage applications built on a microservices architecture. For example, Kubernetes can balance an application’s load across an infrastructure; control, monitor, and automatically limit resource consumption; and move application instances from one host to another.
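
One concrete way Kubernetes limits and reacts to resource consumption is a HorizontalPodAutoscaler. The sketch below, again with hypothetical names and using the autoscaling/v1 API via the Python client, asks Kubernetes to keep a Deployment between 2 and 10 replicas based on observed CPU utilization.

```python
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="ci-analyzer"),  # hypothetical name
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="ci-analyzer"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # scale out when average CPU exceeds 70%
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="default", body=hpa)
```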

Teaming with a Cloud-Native Technology Partner

IBM has been a leader in the industry move to cloud-native applications. In the past few years, it has transformed its software portfolio to be cloud-native and optimized it to run on Red Hat OpenShift, which is an enterprise-ready Kubernetes container platform. With this transformation, businesses can build mission-critical CI applications once and run them on all leading public clouds—including Amazon Web Services, Microsoft Azure, Google Cloud Platform, Alibaba, and IBM Cloud—as well as on private clouds.

With specific regard to CI, IBM addresses streaming data ingestion and analysis issues with IBM Cloud Pak for Data. IBM Cloud Pak for Data is a fully integrated data and AI platform that helps businesses collect, organize, and analyze data and infuse AI throughout their organizations. Built on Red Hat OpenShift, IBM Cloud Pak for Data integrates IBM Watson AI technology with IBM Hybrid Data Management Platform, DataOps, data governance, streaming analytics, and business analytics technologies. Together, these capabilities provide the architecture for CI that can meet ever-changing business needs.

IBM Cloud Pak for Data is easily extendable using a growing array of IBM and third-party services. It runs across any cloud, allowing businesses to integrate their analytics and applications to speed innovation.

Complementing IBM Cloud Pak for Data, IBM Cloud Pak for Data System is a cloud-native data and AI platform in a box that provides a pre-configured, governed, and secure environment to collect, organize, and analyze data. Built on the same Red Hat OpenShift container platform, IBM Cloud Pak for Data System gives businesses access to a broad set of data and AI services and allows quick integration of these capabilities into applications to accelerate innovation. The hyperconverged, plug-and-play system is easily deployable in four hours.
