As Kubernetes becomes more and more prevalent as a container orchestration system, it is important to properly architect your Kubernetes clusters to be highly available. This guide covers some of the options that are available for deploying highly available clusters, as well as an example of deploying a highly available cluster. In this article, we will be using the Rancher Kubernetes Engine (RKE) as the installation tool for our cluster. However, the concepts outlined here can easily be translated to other Kubernetes installers and tools. We'll be going over a few of the core components that are required to make a highly available Kubernetes cluster function.

Load Balancer type services are not included in this guide, but are also a consideration you should keep in mind. They are omitted due to the cloud-specific nature of the various load balancer implementations. If working with an on-premises deployment, it's highly recommended to look at a solution like MetalLB.

When using rke to deploy a Kubernetes cluster, controlplane designated nodes have a few unique components deployed onto them. To review, the kube-apiserver component runs the Kubernetes API server. It is important to keep in mind that API server availability must be maintained to ensure the functionality of your cluster. Without a functional API endpoint, your cluster will come to a halt. For example, the kubelet on each node will not be able to update the API server, the controllers will not be able to operate on the various control objects, and users will not be able to interact with the Kubernetes cluster using kubectl.

The kube-controller-manager component runs the various controllers that operate on the Kubernetes control objects, like Deployments, DaemonSets, ReplicaSets, Endpoints, etc. More information on this component can be found here. The kube-scheduler component is responsible for scheduling Pods to nodes. More information on the kube-scheduler can be found here.

The kube-apiserver is capable of being run in parallel across multiple nodes, providing a highly available solution when requests are balanced and failed over between nodes. When using RKE, the internal component of the API server load balancing is handled by the nginx-proxy container. User-facing API server communication must be configured out-of-band in order to ensure maximum availability.

User (External) Load Balancing of the kube-apiserver

Here is a basic diagram outlining a configuration for external load balancing of the kube-apiserver:

Note that the kube_config_cluster.yml file generated by rke will not be configured to access the API server through the load balancer, but rather through the first controlplane node in the list. It will be necessary to generate a specific kube_config file for users to utilize that includes the L4 API server load balancer as the server value. Additionally, if you are adding the L4 load balancer after the fact, it is important that you perform an RKE certificate rotation in order to properly add the additional hostname to the SAN list of the kube-apiserver.pem certificate.

User (Cluster Internal) Load Balancing of the kube-apiserver

By default, rke designates 10.43.0.1 as the internal Kubernetes API server endpoint (based on the default service cluster CIDR of 10.43.0.0/16). A ClusterIP service and endpoint named kubernetes are created in the default namespace, which resolve to the IP that is designated to the Kubernetes API server. In a default-configured RKE cluster, this ClusterIP is load balanced using iptables. More specifically, it uses NAT prerouting to rewrite traffic to the desired Kubernetes API server endpoint, with the probability of selecting each endpoint determined by the number of API server hosts available. This provides a generally highly available solution for the Kubernetes API server internally, and libraries which connect to the API server from within pods should be able to handle failover through retry.

kubelet / kube-proxy Load Balancing of the kube-apiserver

The kubelet and kube-proxy components on each Kubernetes cluster node are configured by rke to connect to 127.0.0.1:6443. You may be asking yourself, "how does 127.0.0.1:6443 resolve to anything on a worker node?" The reason this works is the nginx-proxy container that runs on each non-controlplane designated node. The nginx-proxy is a simple container that performs L4 round robin load balancing across the known controlplane node IPs, with health checking, ensuring that the nodes are able to continue operating even during transient failures.

High Availability of etcd

Etcd, when deployed, has high availability functionality built into it.
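To make the external load-balancing setup concrete, here is a sketch of the kind of user-facing kube_config that points at the L4 load balancer rather than the first controlplane node. The hostname `kube-api.example.com` and all credential fields are placeholders; the real values come from your cluster's generated kube_config_cluster.yml.

```yaml
# Sketch of a user-facing kube_config; only the server value differs
# from the rke-generated file. "kube-api.example.com" is a placeholder
# and must be present in the API server certificate's SAN list.
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://kube-api.example.com:6443   # L4 load balancer, not a node IP
    certificate-authority-data: <base64 CA certificate>
contexts:
- name: my-cluster
  context:
    cluster: my-cluster
    user: kube-admin
current-context: my-cluster
users:
- name: kube-admin
  user:
    client-certificate-data: <base64 client certificate>
    client-key-data: <base64 client key>
```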
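If the load balancer is added after cluster creation, its hostname has to be added to the API server certificate. A sketch of the relevant cluster.yml fragment, again assuming the placeholder hostname `kube-api.example.com`:

```yaml
# cluster.yml (fragment) -- the hostname below is a placeholder.
# Entries under sans are added to the SAN list of kube-apiserver.pem.
authentication:
  strategy: x509
  sans:
    - "kube-api.example.com"   # L4 load balancer hostname
```

After updating cluster.yml, a certificate rotation (with recent RKE versions, `rke cert rotate` followed by `rke up`) regenerates the API server certificate with the additional SAN; the exact workflow depends on your RKE version.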
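The iptables-based ClusterIP load balancing looks roughly like the rules below. Chain hashes, endpoint IPs, and rule details vary per cluster; these lines are illustrative only, in the style of `iptables-save` output:

```
# Traffic to the 10.43.0.1 ClusterIP is dispatched to a service chain,
# which picks one endpoint (API server host) at random and DNATs to it.
# Chain suffixes and IPs below are examples, not real output.
-A KUBE-SERVICES -d 10.43.0.1/32 -p tcp -m tcp --dport 443 -j KUBE-SVC-EXAMPLE
-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.33333 -j KUBE-SEP-NODE1
-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.50000 -j KUBE-SEP-NODE2
-A KUBE-SVC-EXAMPLE -j KUBE-SEP-NODE3
-A KUBE-SEP-NODE1 -p tcp -m tcp -j DNAT --to-destination 10.0.0.10:6443
```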
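The nginx-proxy's job can be pictured with a minimal hand-written equivalent of the configuration it carries; the real file is generated by rke, and the controlplane IPs below are hypothetical:

```nginx
# Sketch of an L4 (stream) round-robin proxy to the controlplane nodes,
# in the spirit of the nginx-proxy container. IPs are examples only.
stream {
    upstream kube_apiserver {
        server 10.0.0.10:6443 max_fails=3 fail_timeout=10s;  # controlplane-1
        server 10.0.0.11:6443 max_fails=3 fail_timeout=10s;  # controlplane-2
        server 10.0.0.12:6443 max_fails=3 fail_timeout=10s;  # controlplane-3
    }
    server {
        listen 127.0.0.1:6443;        # what kubelet / kube-proxy connect to
        proxy_pass kube_apiserver;
        proxy_connect_timeout 2s;     # skip unreachable controlplane nodes quickly
    }
}
```

The `max_fails` / `fail_timeout` parameters give passive health checking: a controlplane node that repeatedly fails to accept connections is temporarily taken out of the rotation, which is what lets workers ride out transient controlplane failures.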
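The per-rule probabilities in such chained rules are chosen so that each API server host receives an equal share of traffic: rule i out of N matches 1/(N - i) of the packets that reach it, and the last rule catches the remainder. A small self-contained sketch of the arithmetic (not RKE or kube-proxy code):

```python
def rule_probabilities(n):
    """Per-rule match probabilities for n endpoints (last rule is a catch-all)."""
    return [1.0 / (n - i) for i in range(n - 1)]

def effective_share(n):
    """Overall fraction of traffic each of the n endpoints receives."""
    shares = []
    remaining = 1.0
    for p in rule_probabilities(n):
        shares.append(remaining * p)   # packets matched by this rule
        remaining *= (1 - p)           # packets falling through to later rules
    shares.append(remaining)           # final catch-all rule
    return shares

print(effective_share(3))  # three API server hosts -> each gets ~1/3
```

This is why three controlplane nodes produce the 0.33333 / 0.50000 / catch-all pattern: the shares multiply out to an even split.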