Recently I got the chance to build a cluster on on-premise nodes, which was a great experience.
The process was very easy, so I will try to explain it briefly and show how to build one yourself :) just follow along.
* Prerequisite
- I highly recommend knowing what Ansible is, because you will need it.
https://github.com/kubernetes-sigs/kubespray
The repo above is Kubespray, which makes it really convenient to install Kubernetes on each node and bind them all together as a cluster.
Now let's look inside the repo.
As you can see, the repo contains a lot of stuff, which can be a bit confusing :( But do not worry, I am no great expert, but I do know how to use Kubespray for the minimum use case haha.
The first place you might want to visit is the inventory folder.
Everything there is simpler than it used to be haha.
You can just go into the sample folder and find inventory.ini.
The folder contains the information about the nodes to deploy to, and what to deploy on each of them, i.e. whether a node is a worker or a master node.
# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
[all]
# node1 ansible_host=95.54.0.12 # ip=10.3.0.1 etcd_member_name=etcd1
# node2 ansible_host=95.54.0.13 # ip=10.3.0.2 etcd_member_name=etcd2
# node3 ansible_host=95.54.0.14 # ip=10.3.0.3 etcd_member_name=etcd3
# node4 ansible_host=95.54.0.15 # ip=10.3.0.4 etcd_member_name=etcd4
# node5 ansible_host=95.54.0.16 # ip=10.3.0.5 etcd_member_name=etcd5
# node6 ansible_host=95.54.0.17 # ip=10.3.0.6 etcd_member_name=etcd6
# ## configure a bastion host if your nodes are not directly reachable
# [bastion]
# bastion ansible_host=x.x.x.x ansible_user=some_user
[kube_control_plane]
# node1
# node2
# node3
[etcd]
# node1
# node2
# node3
[kube_node]
# node2
# node3
# node4
# node5
# node6
[calico_rr]
[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr
The code above is the inventory.ini template, which will contain the node information.
FYI:
* Roughly speaking, Kubespray is just a bunch of Ansible scripts that deploy Kubernetes easily, which makes it a good repo : )
So, for example, say I want to deploy six nodes whose IPs are 192.168.1.101 through 192.168.1.106:
# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
[all]
node1 ansible_host=192.168.1.101 ansible_ssh_user=<USERNAME> ansible_ssh_port=22
node2 ansible_host=192.168.1.102 ansible_ssh_user=<USERNAME> ansible_ssh_port=22
node3 ansible_host=192.168.1.103 ansible_ssh_user=<USERNAME> ansible_ssh_port=22
node4 ansible_host=192.168.1.104 ansible_ssh_user=<USERNAME> ansible_ssh_port=22
node5 ansible_host=192.168.1.105 ansible_ssh_user=<USERNAME> ansible_ssh_port=22
node6 ansible_host=192.168.1.106 ansible_ssh_user=<USERNAME> ansible_ssh_port=22
# ## configure a bastion host if your nodes are not directly reachable
# [bastion]
# bastion ansible_host=x.x.x.x ansible_user=some_user
[kube_control_plane]
node1
node2
node3
[etcd]
node1
node2
node3
[kube_node]
node2
node3
node4
node5
node6
[calico_rr]
[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr
You can just fill in the USERNAME, and if a node serves SSH on a different port, you can change ansible_ssh_port as well.
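Once the inventory is filled in, it is worth checking that Ansible can actually reach every node before kicking off the long playbook run. A quick sanity check from the repo root (adjust the inventory path if you work on a copy of the sample folder):

```shell
# Ping every host in the inventory over SSH via Ansible's ping module
ansible -i inventory/sample/inventory.ini all -m ping

# Or just render the parsed inventory to confirm the groups look right
ansible-inventory -i inventory/sample/inventory.ini --graph
```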
* kube_control_plane
- This is the section where you designate which nodes become master nodes.
- kube-apiserver, kube-scheduler, and the other components needed to orchestrate the cluster will be deployed here. If a node is not also listed under kube_node, it will carry the control-plane taint, which means regular service pods will not be scheduled on it.
* etcd
- The key-value database that stores all of the cluster's state.
- If you set an even number of etcd nodes, an error will occur; the etcd cluster must have an odd number of members (so it can keep quorum).
* kube_node
- The working class.
- Your services will mainly be deployed on these nodes.
- A master node can also be a worker node ( which is a bit awkward to say : / )
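To make the taint point concrete: on a node that is in kube_control_plane but not in kube_node, the node object ends up with something like the following in its spec (an illustrative excerpt of what `kubectl get node node1 -o yaml` shows on a kubeadm-based cluster):

```yaml
spec:
  taints:
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule
```

Pods without a matching toleration will simply not be scheduled onto that node.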
Now that the inventory is set, let's look at group_vars, which holds the variables we need.
I only modified the k8s-cluster.yml file, because I only wanted to change the CNI configuration.
There are so many variables that I cannot explain them all ( sorry ).
I wish I could post the whole yml, but it is a bit long; around line 20 I changed the Kubernetes version to the version I wanted to test, and around line 70 I changed the CNI config to Cilium.
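For reference, the two variables involved in group_vars/k8s_cluster/k8s-cluster.yml look roughly like this (the version number below is just an example; use whichever release you want to test):

```yaml
## Change this to use another Kubernetes version
kube_version: v1.29.5

## Choose the network plugin (calico, cilium, flannel, etc.)
kube_network_plugin: cilium
```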
Now let's try to deploy it.
Going back up from the inventory folder to the root of the repo, we can find files named cluster.yml, reset.yml, scale.yml, and so on.
Those are the Ansible playbooks we are going to use.
* reset.yml resets the nodes, meaning it erases all the Kubernetes components from them, so use it when you are planning to wipe everything out.
* cluster.yml is the playbook we are going to use to set up the Kubernetes cluster.
* scale.yml, I think, is a playbook for scaling the cluster up ( adding new nodes ), but I am not sure because I did not run it.
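For example, tearing a cluster back down uses the same style of invocation as the install (a sketch; last I checked, reset.yml asks for confirmation before wiping the nodes):

```shell
# Wipe Kubernetes off every node in the inventory
ansible-playbook -i inventory/sample/inventory.ini --become --become-user=root --ask-become-pass reset.yml
```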
So the process is really, really simple:
ansible-playbook -i inventory/sample/inventory.ini --become --become-user=root --ask-become-pass cluster.yml
The command above is all there is to it. I added --ask-become-pass so Ansible prompts for the sudo password for privilege escalation.
After some time, the nodes will form a Kubernetes cluster, and that is it : )
I hope this post helps you, and if you have any questions, feel free to ask in the comments below.
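Once the playbook finishes, the admin kubeconfig lives on the control-plane nodes (Kubespray sets up the cluster with kubeadm, so the paths below are the kubeadm defaults). A common way to check the result from node1:

```shell
# On node1 (a control-plane node): copy the generated admin kubeconfig
mkdir -p ~/.kube
sudo cp /etc/kubernetes/admin.conf ~/.kube/config
sudo chown "$(id -u):$(id -g)" ~/.kube/config

# All the nodes should eventually report Ready
kubectl get nodes
```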