Kubernetes Notes - Part 1
Kubernetes works with desired states. The user declares the final state she/he wants to have, and Kubernetes will take care of having that desire satisfied.
To do so, Kubernetes is divided into a few components, and each of those components has a task to perform.
Instead of having to provide a set of instructions, monitor things, and then provide more instructions, in the Kubernetes world you create an API object that is persisted on the Kube API Server until deletion. All the components work in parallel to drive the cluster towards that state.
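As an illustration, a minimal Deployment manifest declares a desired state (here, three replicas of an nginx container; the names are placeholders) and Kubernetes continuously reconciles the cluster towards it:

```yaml
# Desired state: three replicas of an nginx Pod.
# You declare this once; the components drive the cluster towards it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```

If a Pod dies, you do not issue new instructions; the declared state still says "three replicas", so Kubernetes creates a replacement.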
The scheduler is another component that monitors the Kubernetes API Server to find Pods with no node assigned to them. The scheduler then assigns each of those Pods to some node.
When a node comes up, it monitors the Kubernetes API Server to figure out what it should do. Nodes are actually monitoring the Kubernetes API Server to be sure their actual state matches the desired state defined by the scheduler.
Each component is responsible for itself, and nodes are no different. By not having a main process executing commands everywhere, Kubernetes makes any sort of recovery much easier. Nodes, for example, can easily recover from a crash just by asking the Kubernetes API Server what to do. Once a node knows what to do, it will execute all the internal commands to fulfill that state.
This way of working is called level triggered. That means events are never missed. It is a way to design the system to tolerate failure: in Kubernetes there is no single point of failure.
Another benefit of working this way is that, since every component works independently, the nodes won't change their state or stop working if the Kubernetes API Server goes down. They will keep running their last seen state until there are changes to be fetched from the recovered Kubernetes API Server.
Kubernetes control plane is transparent
There are no hidden internal APIs
That is a very interesting concept. Every API used by internal Kubernetes components to communicate with each other is documented and open to be used by any other component. That makes Kubernetes very modular. With Kubernetes you can "easily" replace some internal component with your own implementation of it.
Kubernetes API Data
Kubernetes can store sensitive information to be used by any component. An example is saving passwords as Secrets to be used by a container running on any node.
Kubernetes also has a ConfigMap API to store and fetch application configuration.
The Downward API allows you to fetch Pod information.
No change to your Application is needed
Kubernetes follows another principle, which is "Meet the user where they are". That means the user won't need to adapt their application to run on Kubernetes.
Actually, users can change their applications to use Kubernetes APIs if they like to do so! Since Kubernetes has no hidden APIs and everything is documented and transparent, the user can choose to call some of the APIs from their own application to get the information they need.
In order to meet the user where they are, Kubernetes gives you the ability to consume Secrets, ConfigMaps, and Downward API objects as files within the container or as environment variables. It is the user's duty, when creating the desired state, to decide what to expose to the container when it is brought up by a node. If your application knows how to read a file or an environment variable, it doesn't need any Kubernetes-specific implementation.
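As a sketch, a single Pod can consume a Secret as an environment variable, a ConfigMap as a mounted file, and its own Pod metadata via the Downward API. The object names below (`db-secret`, `app-config`, etc.) are placeholders, not real objects from the source:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app                     # hypothetical Pod name
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "env; cat /etc/config/app.properties; sleep 3600"]
    env:
    - name: DB_PASSWORD              # injected from a Secret named db-secret
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: password
    - name: POD_NAME                 # injected via the Downward API
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: config                   # ConfigMap exposed as plain files
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: app-config               # hypothetical ConfigMap name
```

The application just reads environment variables and files; nothing inside it is Kubernetes-specific.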
Kubernetes Volume plugins
Containers are ephemeral. That means that as soon as you terminate a container, all the data related to that container is also deleted.
Kubernetes Volume plugins give you a way to persist data beyond the life of an individual Pod.
They are plugins that allow you to plug into remote storage systems such as GCE persistent disks, Amazon EBS block volumes, NFS shares, etc.
Once again, it is the user's duty, when creating the Pod files, to decide what sort of volume plugin she/he wants to use to persist data.
It is then Kubernetes' job to figure out how to make that volume available inside of that container. To be more specific, there is a controller just for that: the Attach/Detach (A/D) controller.
Just like any other component, the A/D controller monitors the Kubernetes API Server looking for Pods that are already scheduled to a node and require a remote volume. When it finds one of these Pods, it figures out whether the volume is available on the node the Pod is scheduled to. If the volume is not available on that node, the A/D controller contacts the volume back-end, asking it to attach the volume to that specific node. After the attachment, the A/D controller updates the Kubernetes API Server so it knows a new volume is attached.
Later, when the node figures out something needs to change, it will also see that the new container needs the remote volume, so it monitors the attachment of that remote volume by the remote service and only moves on when it is there. With this, the user can be sure the volume is mounted and every read and write to that mounted folder is being executed on the remote volume.
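For example, a Pod can reference a volume plugin directly in its spec; the sketch below assumes a pre-provisioned Amazon EBS volume (the volume ID is a placeholder, and note that newer Kubernetes versions replace in-tree plugins like this one with CSI drivers):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db                          # hypothetical name
spec:
  containers:
  - name: db
    image: mysql:8.0
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql    # reads/writes here hit the remote volume
  volumes:
  - name: data
    awsElasticBlockStore:          # direct reference to a volume plugin
      volumeID: "<volume-id>"      # placeholder for a real EBS volume ID
      fsType: ext4
```

The A/D controller sees this Pod scheduled to a node, attaches the EBS volume to that node, and the node's kubelet waits for the attachment before mounting it into the container.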
PV and PVC
Kubernetes Volume plugins are pretty cool, but attaching them directly to your Pod is not a good idea: if you later have to move your cluster to a place in which that plugin or that network storage is not available, you will lose access to your persistence and the data you have there. The correct way to work is with PersistentVolumes and PersistentVolumeClaims.
PersistentVolume (PV) and PersistentVolumeClaim (PVC) are an abstraction to decouple storage implementation from storage consumption.
PV objects are created ahead of time and represent the storage available in your cluster.
A PVC is a simple Kubernetes object that contains information about the type of storage you want (capacity, read-only, read-write, etc.). These objects are managed by the PersistentVolume controller, which tries to match your request to the available storage: it will bind PVCs to available pre-created PVs.
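A minimal sketch of the pair (names, server, and capacities are placeholders): an administrator creates the PV ahead of time, and a user claims storage without knowing where it comes from:

```yaml
# Created ahead of time by an administrator: represents real storage.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-10g                  # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  nfs:                              # backed by an NFS share in this sketch
    server: nfs.example.internal    # placeholder server
    path: /exports/data
---
# Created by the user: only describes what storage is needed.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

The PersistentVolume controller binds `data-claim` to `pv-nfs-10g` because the PV satisfies the claim's requested capacity and access mode; the Pod then references only the claim, not the NFS details.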
Notes taken from:
- Kubernetes Design Principles: Understand the Why - Saad Ali, Google
- Kubernetes - Carson