How certificates are used by your cluster
– Kubernetes uses client certificates, bearer tokens, an authenticating proxy, or HTTP basic auth to authenticate API requests through authentication plugins
– Client certificates generated by kubeadm expire after 1 year.
– By default, kubeadm generates all the certificates needed for a cluster to run. You can override this behavior by providing your own certificates.
– You must place them in the directory specified by the --cert-dir flag or the certificatesDir field of kubeadm's ClusterConfiguration. By default this is /etc/kubernetes/pki.
If a given certificate and private key pair exists before running kubeadm init, kubeadm does not overwrite them. This means you can, for example, copy an existing CA into /etc/kubernetes/pki/ca.crt and /etc/kubernetes/pki/ca.key, and kubeadm will use this CA for signing the rest of the certificates.
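As a sketch, a kubeadm ClusterConfiguration that sets the certificate directory explicitly might look like this (the value shown is the default; change it only if your certificates live elsewhere):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
# kubeadm reads certificates from (and writes them to) this directory.
# A ca.crt/ca.key pair placed here beforehand is reused, not overwritten.
certificatesDir: /etc/kubernetes/pki
```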
To check when certificates expire, run: kubeadm certs check-expiration
Kubernetes requires PKI for the following operations:
Client certificates for the kubelet to authenticate to the API server
Server certificate for the API server endpoint
Client certificates for administrators of the cluster to authenticate to the API server
Client certificates for the API server to talk to the kubelets
Client certificate for the API server to talk to etcd
Client certificate/kubeconfig for the controller manager to talk to the API server
Client certificate/kubeconfig for the scheduler to talk to the API server
Client and server certificates for the front-proxy
Where certificates are stored
If you install Kubernetes with kubeadm, certificates are stored in /etc/kubernetes/pki.
ref : https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/
Volume plugins in Kubernetes
A StorageClass provides a way for administrators to describe the “classes” of storage they offer. Different classes might map to quality-of-service levels, or to backup policies, or to arbitrary policies determined by the cluster administrators.
Kubernetes itself is unopinionated about what classes represent.
This concept is sometimes called “profiles” in other storage systems.
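For example, a StorageClass describing a hypothetical "fast" class might look like this (the class name, provisioner, and parameters are illustrative; the real provisioner depends on your cluster's storage backend):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                        # illustrative class name
provisioner: example.com/fast-storage   # placeholder; use your backend's provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer  # delay binding until a Pod uses the claim
```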
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.
It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes,
but have a lifecycle independent of any individual Pod that uses the PV.
This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources.
Pods can request specific levels of resources (CPU and memory); claims can request specific sizes and access modes.
There are two ways PVs may be provisioned: statically or dynamically.
A cluster administrator creates a number of PVs. They carry the details of the real storage, which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.
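A statically provisioned PV carrying the details of the real storage might look like this (the NFS server address and export path are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-example
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:                      # backend details live on the PV, not on the claim
    server: 192.0.2.10      # placeholder NFS server
    path: /exports/data     # placeholder export path
```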
When none of the static PVs the administrator created match a user’s PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC.
This provisioning is based on StorageClasses: the PVC must request a storage class and the administrator must have created and configured that class for dynamic provisioning to occur.
Claims that request the class “” effectively disable dynamic provisioning for themselves.
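A claim that requests a class (triggering dynamic provisioning if the administrator configured it) can be contrasted with one that requests the class "" (class names and sizes here are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-fast
spec:
  storageClassName: fast   # hypothetical class; dynamically provisions if configured
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-static-only
spec:
  storageClassName: ""     # empty class: bind only to statically provisioned PVs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```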
Block vs. object vs. file storage
Block storage is not alone in the world of data storage. Developers also use other systems, such as object storage and file storage. While the ultimate goal of each is to provide data to users and applications, each of those storage methods goes about storing and retrieving data differently.
– Object storage, which is also known as object-based storage, breaks data files up into pieces called objects. It then stores those objects in a single repository, which can be spread out across multiple networked systems.
– In practice, applications manage all of the objects, eliminating the need for a traditional file system. Each object receives a unique ID, which applications use to identify the object. And each object stores metadata—information about the files stored in the object.
– One important difference between object storage and block storage is how each handles metadata.
In object storage, metadata can be customized to include additional, detailed information about the data files stored in the object.
For example, metadata accompanying a video file could be customized to tell where the video was made, the type of camera used to shoot it, and even what subjects were captured in each frame. In block storage, metadata is limited to basic file attributes.
Object storage is best suited for static files that aren't changed often, because any change made to a file results in the creation of a new object.
– File storage, which is also referred to as file-level or file-based storage, is normally associated with Network Attached Storage (NAS) technology. NAS presents storage to users and applications using the same ideology as a traditional network file system.
– In other words, the user or application receives data through directory trees, folders, and individual files, much like a local hard drive. However, the NAS device or Network Operating System (NOS) handles access rights, file sharing, file locking, and other controls.
– File storage can be very easy to configure, but access to data is constrained by a single path to the data, which can impact performance compared to block or object storage. File storage also only operates with common file-level protocols, such as New Technology File System (NTFS) for Windows or Network File System (NFS) for Linux. This can limit usability across dissimilar systems.