Swetha- Kubernetes Assignment
What is a ReplicaSet?
A ReplicaSet is the next-generation, improved version of the ReplicationController, and its purpose is to maintain a stable set of replica Pods running at any given time.
Working with ReplicaSets, with an example:
A ReplicaSet is defined with fields, including a selector that specifies how to identify Pods it can acquire, a number of replicas indicating how many Pods it should be maintaining, and a pod template specifying the data of new Pods it should create to meet the number of replicas criteria. A ReplicaSet then fulfills its purpose by creating and deleting Pods as needed to reach the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod template.
vi rs.yaml
kubectl create -f rs.yaml
kubectl get rs

[root@swetha swetha]# kubectl get rs
NAME                 DESIRED   CURRENT   READY   AGE
replicaset-example   2         2         2       5s

kubectl get pods
kubectl edit rs replicaset-example
kubectl get pods
vi rs.yaml
kubectl apply -f rs.yaml
kubectl get pods
kubectl scale --replicas=2 -f rs.yaml
kubectl get pods
kubectl scale --replicas=3 rs/replicaset-example
kubectl get pods
kubectl delete rs replicaset-example
kubectl get pods
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  # Unique key of the ReplicaSet instance
  name: replicaset-example
spec:
  # 2 Pods should exist at all times.
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        # Run the nginx image
        - name: nginx
          image: scmgalaxy/nginx-devopsschoolv1
What is a DaemonSet?
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
Working with DaemonSets, with an example:
A DaemonSet ensures that all eligible nodes run a copy of a Pod. Normally, the node that a Pod runs on is selected by the Kubernetes scheduler. However, DaemonSet Pods are created and scheduled by the DaemonSet controller instead. Depending on the requirement, you can set up multiple DaemonSets for a single type of daemon, with different flags, memory, CPU, etc., to support multiple configurations and hardware types.
The apiVersion, kind, and metadata are required fields in every Kubernetes manifest. The DaemonSet specific fields come under the spec section.
vi ds.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  # Unique key of the DaemonSet instance
  name: daemonset-example
  namespace: test-daemonset
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        # Run the nginx image
        - name: nginx
          image: scmgalaxy/nginx-devopsschoolv1

[root@swetha swetha]# kubectl get daemonsets
NAME                DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset-example   1         1         1       1            1           <none>          5m15s
[root@swetha swetha]# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
daemonset-example-b9jkl   1/1     Running   0          86s

kubectl edit daemonsets daemonset-example
vi ds.yaml
kubectl apply -f ds.yaml
kubectl get pods
kubectl delete daemonsets daemonset-example
What is a StatefulSet?
StatefulSet is the workload API object used to manage stateful applications. It manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of those Pods.
Working with StatefulSets, with an example:
StatefulSets are valuable for applications that require one or more of the following:
- Stable, unique network identifiers.
- Stable, persistent storage.
- Ordered, graceful deployment and scaling.
- Ordered, automated rolling updates.
In the below example:
- A Headless Service, named nginx, is used to control the network domain.
- The StatefulSet, named web, has a Spec that indicates that 3 replicas of the nginx container will be launched in unique Pods.
- The volumeClaimTemplates will provide stable storage using PersistentVolumes provisioned by a PersistentVolume Provisioner.
The name of a StatefulSet object must be a valid DNS subdomain name.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: k8s.gcr.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "my-storage-class"
        resources:
          requests:
            storage: 1Gi
What is a Service?
A Service in Kubernetes is an abstraction which defines a logical set of Pods (which perform the same function) and a policy by which to access them. A Service is responsible for enabling network access to a set of Pods.
Working of Services, with an example:
Services match a set of Pods using labels and selectors.
Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the ServiceSpec:
- ClusterIP (default) – Exposes the Service on an internal IP in the cluster. This service is only reachable from within the cluster.
- NodePort – Exposes the Service on the same port of each selected Node in the cluster which makes a Service accessible from outside the cluster.
- LoadBalancer – Exposes the service via the cloud provider’s load balancer and assigns a fixed, external IP to the Service.
- ExternalName – Maps the Service to the contents of the externalName field, by returning a CNAME record with its value.
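As a sketch of a non-default type, a NodePort variant of a Service might look like the following (the Service name, label, and port numbers here are illustrative assumptions, not taken from a real cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service   # illustrative name
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80          # cluster-internal port of the Service
      targetPort: 9376  # port the matching Pods listen on
      nodePort: 30080   # port opened on every Node (default allowed range is 30000-32767)
```

Omitting the `nodePort` field lets Kubernetes pick a free port from the range automatically, which avoids collisions between Services.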
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
This specification creates a new Service object named “my-service”, which targets TCP port 9376 on any Pod with the app=MyApp label. Kubernetes assigns this Service an IP address (sometimes called the “cluster IP”), which is used by the Service proxies. The controller for the Service selector continuously scans for Pods that match its selector, and then POSTs any updates to an Endpoints object also named “my-service”.
What are load balancers and how do they work?
Load balancing is defined as the methodical and efficient distribution of network or application traffic across multiple servers in a server farm. Each load balancer sits between client devices and backend servers, receiving and then distributing incoming requests to any available server capable of fulfilling them. Load balancers detect the health of backend resources and do not send traffic to servers that are not able to fulfill requests.
A load balancer disburses traffic to different web servers in the resource pool to ensure that no single server becomes overworked and subsequently unreliable. Load balancers effectively minimize server response time and maximize throughput.
Types of Load balancers –
a.) Hardware Load Balancers: physical, on-premise hardware equipment that distributes traffic across various servers. They are capable of handling a huge volume of traffic, but are limited in terms of flexibility and are also fairly high in price.
b.) Software Load Balancers: computer applications that need to be installed on the system and function similarly to hardware load balancers. They come in two kinds – commercial and open source – and are a cost-effective alternative to their hardware counterparts.
c.) Virtual Load Balancers: the software of a hardware load balancer running on a virtual machine.
What is service discovery and how does it work?
Service discovery takes advantage of the labels and selectors to associate a service with a set of pods. A single pod or a ReplicaSet may be exposed to internal or external clients via services, which associate a set of pods with a specific criterion. Any pod whose labels match the selector defined in the service manifest will automatically be discovered by the service. This architecture provides a flexible, loosely-coupled mechanism for service discovery.
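Inside the cluster, a discovered Service is also reachable by its DNS name. As a minimal sketch (assuming a Service named my-service exists in the default namespace, as in the Service example in this document; the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: discovery-client   # illustrative name
spec:
  containers:
    - name: client
      image: busybox
      # The Service is resolvable as my-service, my-service.default,
      # or the fully qualified my-service.default.svc.cluster.local.
      command: ["sh", "-c", "wget -qO- http://my-service.default.svc.cluster.local"]
  restartPolicy: Never
```

Because the client addresses the Service name rather than Pod IPs, Pods behind the Service can come and go without the client needing any reconfiguration.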
What is CoreDNS?
CoreDNS is a flexible, extensible DNS server that can serve as the Kubernetes cluster DNS. Like Kubernetes, the CoreDNS project is hosted by the CNCF.
Use of CoreDNS –
We can use CoreDNS instead of kube-dns in our cluster by replacing kube-dns in an existing deployment, or by using tools like kubeadm that will deploy and upgrade the cluster for you.
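As a sketch, in a kubeadm-deployed cluster the CoreDNS configuration lives in a ConfigMap in the kube-system namespace; a typical Corefile looks roughly like the following (the exact plugin list may differ per cluster version):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        # Serve cluster.local records from the Kubernetes API
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        # Forward everything else to the node's resolver
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
```

The `kubernetes` plugin is what makes CoreDNS answer for Service and Pod names, while `forward` hands all other queries to the upstream DNS.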
How does certificate-based authentication work? Explain in the Kubernetes context.
– Kubernetes uses client certificates, bearer tokens, an authenticating proxy, or HTTP basic auth to authenticate API requests through authentication plugins
– Client certificates generated by kubeadm expire after 1 year.
– By default, kubeadm generates all the certificates needed for a cluster to run. You can override this behavior by providing your own certificates.
– If you provide your own certificates, you must place them in whatever directory is specified by the --cert-dir flag or the certificatesDir field of kubeadm’s ClusterConfiguration. By default this is /etc/kubernetes/pki.
If a given certificate and private key pair exists before running kubeadm init, kubeadm does not overwrite them. This means, for example, that you can copy an existing CA into /etc/kubernetes/pki/ca.crt and /etc/kubernetes/pki/ca.key, and kubeadm will use this CA for signing the rest of the certificates.
command : kubeadm certs check-expiration
User/administrator generates a private key (PK) and certificate signing request (CSR). Administrator approves the request and signs it with their CA. Administrator provides the resulting certificate (CRT) back to the user. User uses private key and certificate to login to API Server.
Admin creates a private key –> the key is used to generate a CSR –> Admin takes this CSR file and the CA and creates a .crt file.
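The flow above can be sketched with openssl against the kubeadm CA (the user name "swetha", group "dev", and validity period are illustrative assumptions; paths match the kubeadm defaults mentioned earlier):

```
# 1. User generates a private key
openssl genrsa -out swetha.key 2048

# 2. The key is used to generate a certificate signing request (CSR)
openssl req -new -key swetha.key -subj "/CN=swetha/O=dev" -out swetha.csr

# 3. Admin signs the CSR with the cluster CA (kubeadm keeps it under /etc/kubernetes/pki)
openssl x509 -req -in swetha.csr \
  -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key \
  -CAcreateserial -out swetha.crt -days 365

# 4. User authenticates to the API server with the key and certificate
kubectl --client-key=swetha.key --client-certificate=swetha.crt get pods
```

In practice the key/certificate pair would be embedded in the user's kubeconfig rather than passed as flags on every command.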
What is Block Storage?
Block storage, sometimes referred to as block-level storage, is a technology that is used to store data files on Storage Area Networks (SANs) or cloud-based storage environments. Developers favor block storage for computing situations where they require fast, efficient, and reliable data transportation.
Block storage breaks up data into blocks and then stores those blocks as separate pieces, each with a unique identifier. The SAN places those blocks of data wherever it is most efficient. That means it can store those blocks across different systems and each block can be configured (or partitioned) to work with different operating systems. Block storage also decouples data from user environments, allowing that data to be spread across multiple environments. This creates multiple paths to the data and allows the user to retrieve it quickly.
How do volume plugins in Kubernetes work?
Volumes offer storage shared between all containers in a Pod. This allows you to reliably use the same mounted file system with multiple services running in the same Pod. This is, however, not automatic. Containers that want to use a volume have to specify which volume they want to use, and where to mount it in the container’s file system.
The admin creates a PersistentVolume (PV), and users/developers claim it with a PersistentVolumeClaim (PVC).
What is a StorageClass? And how to work with it, with sample StorageClass, PV, PVC, and Pod YAML?
A StorageClass provides a way for administrators to describe the “classes” of storage they offer. Different classes might map to quality-of-service levels, or to backup policies, or to arbitrary policies determined by the cluster administrators.
Kubernetes itself is unopinionated about what classes represent.
This concept is sometimes called “profiles” in other storage systems.
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.
It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV.
This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources.
Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes.
Each StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: Immediate
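The question also asks for sample PV, PVC, and Pod manifests. Below is a minimal sketch pairing with a "standard" class like the one above; the object names, the 1Gi size, and the hostPath backing are illustrative assumptions (a real AWS cluster would let the EBS provisioner create the PV dynamically instead):

```yaml
# PersistentVolume: normally created by the admin, or dynamically by the provisioner
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  hostPath:            # illustrative only; real storage would be EBS, NFS, iSCSI, etc.
    path: /mnt/data
---
# PersistentVolumeClaim: the user's request for storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-example
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
---
# Pod that consumes the claim
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-example
```

The Pod never names the PV directly: it mounts the claim, and Kubernetes binds the claim to a PV of the matching StorageClass, size, and access mode.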