ReplicaSets-DaemonSets-StatefulSets-Service-LoadBalancer-Discovery Services-CoreDNS-SANDEEP




Assignment – 1
What is a ReplicaSet?
A ReplicaSet is a Kubernetes controller that runs multiple instances of a Pod and keeps the specified number of Pods constant.
Its purpose is to maintain the specified number of Pod replicas running in the cluster at any given time, so users do not lose access to the application when a Pod fails or becomes unreachable.

How to work with ReplicaSets?
A ReplicaSet brings up a new instance of a Pod when an existing one fails,
scales up when the number of running instances falls below the specified count, and scales down or deletes Pods when extra instances carrying the same label appear.
A ReplicaSet thus ensures that the specified number of Pod replicas is running continuously, which also helps spread load when resource usage increases.

Example of ReplicaSets?
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replicaset
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: nginx

Assignment – 2
What is a DaemonSet?
A DaemonSet ensures that some or all of the cluster's nodes run a copy of a Pod; in effect, it runs a copy of the desired Pod on every eligible node.

How to work with DaemonSets?
When a new node is added to a Kubernetes cluster, a DaemonSet Pod is automatically added to that node.
When a node is removed, the DaemonSet controller ensures that the Pod associated with that node is garbage collected. Deleting a DaemonSet cleans up all the Pods that DaemonSet has created.
By default, the node a Pod runs on is decided by the Kubernetes scheduler; DaemonSet Pods, however, are created and scheduled by the DaemonSet controller.

Example of DaemonSets?
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: test-daemonset
  namespace: test-daemonset-namespace
  labels:
    app-type: test-app-type
spec:
  selector:
    matchLabels:
      name: test-daemonset-container
  template:
    metadata:
      labels:
        name: test-daemonset-container
    spec:
      containers:
      - name: test-daemonset-container
        image: nginx   # illustrative image; the original example omitted the container spec
ref : https://www.bmc.com/blogs/kubernetes-daemonset/

Assignment – 3
What is a StatefulSet?
A StatefulSet is a controller that helps you deploy and scale groups of Kubernetes Pods while giving each Pod a stable, unique identity.
How to work with StatefulSets?
When using Kubernetes, most of the time you don't care how your Pods are scheduled. Sometimes, however, you need Pods deployed in order, backed by persistent storage volumes, or given a unique, stable network identifier that survives restarts and rescheduling.
In those cases, StatefulSets help you accomplish those objectives.

Example of StatefulSets?
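This section had no example; the following is a minimal sketch of a StatefulSet manifest (the names web, nginx, and the www volume claim are illustrative), paired with the headless Service a StatefulSet requires:

```yaml
# Headless Service that gives each StatefulSet Pod a stable DNS name
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None           # headless: no virtual IP, per-Pod DNS records
  selector:
    app: nginx
  ports:
  - port: 80
    name: web
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx        # must match the headless Service above
  replicas: 2               # Pods are created in order: web-0, then web-1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
  volumeClaimTemplates:     # each Pod gets its own PersistentVolumeClaim
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

Each replica keeps its ordinal name (web-0, web-1) and its own volume claim across restarts and rescheduling.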

Assignment – 4
What is a Service?
A Service provides stable network connectivity to Pods.
The basic building block is the Pod, a resource that can be created and destroyed on demand.
Because a Pod can be moved or rescheduled to another node, any internal IP assigned to it can change over time.
If we connected to a Pod directly by IP to reach our application, that connection would break on the next re-deployment.
To make a Pod reachable to external networks or other parts of the cluster without relying on internal IPs, we need another layer of abstraction.
Kubernetes offers that abstraction with the Service object.

How does a Service work?
A Service selects a set of Pods by label and exposes them behind a single stable virtual IP and DNS name; kube-proxy on each node forwards traffic sent to that IP to one of the matching Pods.
What are the types of Service?
ClusterIP (the default; an internal-only virtual IP), NodePort (exposes the Service on a static port on every node), LoadBalancer (provisions an external load balancer from the cloud provider), and ExternalName (maps the Service to an external DNS name).
Example YAML of a Service
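A minimal sketch of a Service manifest (the name my-service and the app: my-app selector are illustrative); it exposes port 80 on a stable ClusterIP and forwards traffic to port 8080 on the matching Pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP        # default type; internal virtual IP only
  selector:
    app: my-app          # routes to Pods carrying this label
  ports:
  - port: 80             # port the Service listens on
    targetPort: 8080     # port the Pods' containers listen on
```

Changing type to NodePort or LoadBalancer exposes the same set of Pods outside the cluster without altering the selector.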

Assignment – 5
What is a Load Balancer?
A load balancer acts as a 'traffic controller' for your servers: it directs each request to an available server capable of fulfilling it efficiently.
This ensures that requests are answered quickly and that no single server is stressed to the point of degraded performance.
As an organization works to meet application demand, the load balancer decides which server can handle each request most efficiently, which creates a better user experience.

What are the types of Load Balancer?
a.) Network Load Balancer / Layer 4 (L4) Load Balancer: Transport Layer
Network load balancing distributes traffic at the transport level, making routing decisions from network variables such as IP addresses and destination ports. Such load balancing operates on TCP,
i.e. layer 4, and does not consider any application-level parameters such as content type, cookie data, headers, location, or application behavior.
Performing network address translation without inspecting the content of individual packets, a network load balancer cares only about network-layer information and directs traffic on that basis alone.
b.) Application Load Balancer / Layer 7 (L7) Load Balancer: Application Layer
Operating at the top of the OSI model, a Layer 7 load balancer distributes requests based on multiple parameters at the application level.
An L7 load balancer evaluates a much wider range of data, including HTTP headers and SSL session information, and distributes the server load based on decisions drawn from a combination of several variables.
In this way, application load balancers steer server traffic based on individual usage and behavior.
c.) Global Server Load Balancer / Multi-site Load Balancer:
With a growing number of applications hosted in cloud data centers across varied geographies, GSLB extends the capabilities of general L4 and L7 load balancing across data centers, enabling efficient global load distribution without degrading the experience for end users.
Besides efficient traffic balancing, multi-site load balancers also aid quick recovery and seamless business operations if a server or an entire data center suffers a disaster, since data centers in other parts of the world can take over for business continuity.
Load Balancing Methods
All kinds of load balancers receive balancing requests, which are processed according to a pre-configured algorithm.
Industry Standard Algorithms
The most common load balancing methods include:
a) Round Robin Algorithm:
It relies on a rotation system to sort traffic across servers of equal capability. Each request is sent to the first available server, and that server is then placed at the back of the line.
b) Weighted Round Robin Algorithm:
This algorithm balances load across servers with different characteristics: each server is assigned a weight, and more capable servers receive proportionally more requests.
c) Least Connections Algorithm:
In this algorithm, traffic is directed to the server with the fewest active connections. This helps maintain optimal performance, especially at peak hours, by keeping the load uniform across all servers.
d) Least Response Time Algorithm:
Like the least-connections algorithm, this directs traffic toward servers with fewer active connections, but it additionally gives top priority to the server with the lowest response time.
e) IP Hash Algorithm:
A fairly simple technique that hashes the client's IP address to choose a server, so a given client is consistently directed to the same server.
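The methods above can be sketched in a few lines of Python (a simplified illustration, not tied to any specific load balancer product; the server names and connection counts are made up):

```python
import itertools
import hashlib

servers = ["server-a", "server-b", "server-c"]

# a) Round robin: rotate through the servers in order.
_rotation = itertools.cycle(servers)
def round_robin():
    return next(_rotation)

# c) Least connections: pick the server with the fewest active connections.
active_connections = {"server-a": 5, "server-b": 2, "server-c": 7}
def least_connections():
    return min(active_connections, key=active_connections.get)

# e) IP hash: hash the client's IP so the same client always lands on
# the same server (session stickiness).
def ip_hash(client_ip):
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(round_robin())        # server-a (first in the rotation)
print(least_connections())  # server-b (only 2 active connections)
print(ip_hash("203.0.113.7") == ip_hash("203.0.113.7"))  # True: sticky
```

Weighted round robin (b) is the same rotation idea with each server repeated in proportion to its weight, and least response time (d) replaces the connection count with a combined connections-plus-latency score.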

How does a Load Balancer work?
It distributes a set of tasks over a pool of computing units (or related resources) to make the overall workload easier to execute and far more efficient.
By ensuring that no single server bears too much demand and by spreading the load evenly, it improves the responsiveness and availability of applications and websites for users.

Assignment – 6
What is Service Discovery?
Service discovery is the process of figuring out how to connect to a service.
It automatically detects devices and services on a network.
A service discovery protocol (SDP) is a networking standard that accomplishes this detection by identifying the resources available on the network.

Types of service discovery
There are two types of service discovery:
Server-side service discovery lets client applications find services through a router or a load balancer.
Client-side service discovery lets client applications find services by querying a service registry, in which all service instances and their endpoints are recorded.

How does Service Discovery work?
There are three components to service discovery: the service provider, the service consumer, and the service registry.
1) The service provider registers itself with the service registry when it enters the system and de-registers itself when it leaves.
2) The service consumer gets the location of a provider from the service registry and then connects to that provider.
3) The service registry is a database containing the network locations of service instances.
The service registry must be highly available and up to date, so that clients can rely on the network locations obtained from it.
A service registry consists of a cluster of servers that use a replication protocol to maintain consistency.
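The three components above can be sketched as follows (a toy, in-memory illustration only; a production registry such as etcd or Consul is replicated and highly available, and the service names and addresses here are hypothetical):

```python
# Toy service registry: maps a service name to the set of network
# locations (host:port endpoints) of its live instances.
registry = {}

# 1) Provider side: register on startup, de-register on shutdown.
def register(service_name, endpoint):
    registry.setdefault(service_name, set()).add(endpoint)

def deregister(service_name, endpoint):
    registry.get(service_name, set()).discard(endpoint)

# 2) Consumer side: look up a provider's location, then connect to it.
def discover(service_name):
    endpoints = registry.get(service_name)
    if not endpoints:
        raise LookupError(f"no live instance of {service_name}")
    return next(iter(endpoints))  # a real client would load-balance here

register("orders", "10.0.0.5:8080")
register("orders", "10.0.0.6:8080")
print(discover("orders"))           # one of the two registered endpoints
deregister("orders", "10.0.0.5:8080")
print(discover("orders"))           # only 10.0.0.6:8080 remains
```

In server-side discovery the lookup in discover() happens inside a router or load balancer rather than in the client itself.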
ref : https://avinetworks.com/glossary/service-discovery/

What is CoreDNS?
The CoreDNS add-on is a DNS server that provides domain name resolution for Kubernetes clusters.
CoreDNS is open-source software.
It provides a means for services to discover each other in cloud-native deployments.

Use of CoreDNS?
In a Kubernetes cluster, CoreDNS automatically discovers Services in the cluster and provides domain name resolution for them.
Since Kubernetes v1.11, CoreDNS has been the official default DNS server for all clusters going forward.
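For illustration, a minimal CoreDNS Corefile of the kind a cluster ships with by default (the exact plugin list varies by distribution, so treat this as a sketch). With the kubernetes plugin enabled, a Service named my-service in namespace my-ns resolves at my-service.my-ns.svc.cluster.local:

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
}
```

The kubernetes plugin serves cluster Service records, forward sends everything else to the node's upstream resolvers, and cache/reload keep lookups fast and the config hot-reloadable.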