How to Set Up a Kubernetes Cluster on CentOS 7 (Bare-Metal Installation)

This post walks through how to set up a Kubernetes cluster on servers running CentOS 7 (bare-metal installation), as well as how to deploy add-on services such as DNS and the Kubernetes Dashboard.

Prerequisites: You need at least two servers to set up a Kubernetes cluster. For this blog, we are using three servers to form the cluster. Make sure that each of these servers has at least 1 CPU core and 2 GB of memory.

Master 172.16.106.140
Node1/Minion1 172.16.106.141
Node2/Minion2 172.16.106.142

Set the hostname on each server; the master is shown below (use node1 and node2 respectively on the other servers):

hostnamectl set-hostname kubemaster

Disable the firewall on master, node1, and node2:

systemctl disable firewalld && systemctl stop firewalld

Disable SELinux and add host entries on master, node1, and node2:

# setenforce 0

# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
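To confirm the change for the current session, getenforce should now report Permissive (the sed edit takes care of future reboots):

# getenforce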

# vi /etc/hosts

172.16.106.140 master

172.16.106.141 node1

172.16.106.142 node2

Cluster Configuration:

  1. Infrastructure private subnet IP range: 172.16.106.0/24
  2. Flannel subnet IP range: 172.30.0.0/16 (you can choose any IP range; just make sure it does not overlap with any other IP range)
  3. Service cluster IP range for Kubernetes: 10.254.0.0/16 (you can choose any IP range; just make sure it does not overlap with any other IP range)
  4. Kubernetes service IP: 10.254.0.1 (the first IP from the service cluster IP range is always allocated to the Kubernetes service)
  5. DNS service IP: 10.254.3.100 (you can use any IP from the service cluster IP range; just make sure the IP is not already allocated to another service)

Step 1: Create a repo on all hosts, i.e. master, node1, and node2.

#vim /etc/yum.repos.d/virt7-docker-common-release.repo

[virt7-docker-common-release]

name=virt7-docker-common-release

baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/

gpgcheck=0
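To verify that yum can see the new repo, you can list the enabled repositories:

#yum repolist enabled | grep virt7-docker-common-release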

Step 2: Install Kubernetes, etcd, and flannel (on all hosts).

#yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel
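You can confirm that the packages landed on each host with rpm:

#rpm -qa | egrep 'kubernetes|etcd|flannel'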

Step 3: Configure common Kubernetes components (on all hosts: master, node1, node2).

#vi /etc/kubernetes/config

Comment out the default KUBE_MASTER line with # and paste the following:

# Comma-separated list of nodes running the etcd cluster

KUBE_ETCD_SERVERS="--etcd-servers=http://172.16.106.140:2379"

# Logging will be stored in the system journal

KUBE_LOGTOSTDERR="--logtostderr=true"

# Journal message level, 0 is debug

KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers

KUBE_ALLOW_PRIV="--allow-privileged=false"

# API server endpoint used by the scheduler and controller-manager

KUBE_MASTER="--master=http://172.16.106.140:8080"

Step 4: ETCD Configuration (On Master)

Next, we need to configure etcd for the Kubernetes cluster. The etcd configuration is stored in /etc/etcd/etcd.conf

#vi /etc/etcd/etcd.conf

Search for and change the values below:

#[member]

ETCD_NAME=default

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

#[cluster]

ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"

Step 5: API Server Configuration (On Master)

Download this script and update line number 30 in the file.

#wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/saltbase/salt/generate-cert/make-ca-cert.sh

Check the group of the kube user (the script defaults to a kube-cert group, which does not exist on this install):

[root@master ~]# id kube

uid=992(kube) gid=989(kube) groups=989(kube)

vi make-ca-cert.sh

Change

cert_group=${CERT_GROUP:-kube-cert}

to

cert_group=${CERT_GROUP:-kube}

Run this command to generate the certificates:

#bash make-ca-cert.sh "172.16.106.140" "IP:172.16.106.140,IP:10.254.0.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local"


Here, 172.16.106.140 is the IP of the master server and 10.254.0.1 is the IP of the Kubernetes service.
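If the script completed successfully, the files referenced later in the API server and controller manager configuration should now exist under /srv/kubernetes:

#ls -l /srv/kubernetes/ca.crt /srv/kubernetes/server.cert /srv/kubernetes/server.key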

Step 6: API Server Configuration (On Master)

Comment out all the default lines and paste the following:

vi /etc/kubernetes/apiserver

# Bind the kube API server to this IP

KUBE_API_ADDRESS="--address=0.0.0.0"

# Port that the kube API server listens on

KUBE_API_PORT="--port=8080"

# Port the kubelet listens on

KUBELET_PORT="--kubelet-port=10250"

# Address range to use for services (the work unit of Kubernetes)

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# Default admission control policies

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!

KUBE_API_ARGS="--client-ca-file=/srv/kubernetes/ca.crt --tls-cert-file=/srv/kubernetes/server.cert --tls-private-key-file=/srv/kubernetes/server.key"

Step 7: Controller Manager Configuration (On Master)

vi /etc/kubernetes/controller-manager

# Add your own!

KUBE_CONTROLLER_MANAGER_ARGS="--root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key"

Step 8: Kubelet Configuration (On Node1)

Comment out the defaults and add these lines:

vi /etc/kubernetes/kubelet

# kubelet bind IP address (provide the private IP of the minion)

KUBELET_ADDRESS="--address=0.0.0.0"

# Port on which the kubelet listens

KUBELET_PORT="--port=10250"

# Leave this blank to use the hostname of the server

KUBELET_HOSTNAME="--hostname-override=172.16.106.141"

# Location of the api-server

KUBELET_API_SERVER="--api-servers=http://172.16.106.140:8080"

# Add your own!

KUBELET_ARGS=""

Step 9: Kubelet Configuration (On Node2)

Comment out the defaults and add these lines:

vi /etc/kubernetes/kubelet

# kubelet bind IP address (provide the private IP of the minion)

KUBELET_ADDRESS="--address=0.0.0.0"

# Port on which the kubelet listens

KUBELET_PORT="--port=10250"

# Leave this blank to use the hostname of the server

KUBELET_HOSTNAME="--hostname-override=172.16.106.142"

# Location of the api-server

KUBELET_API_SERVER="--api-servers=http://172.16.106.140:8080"

# Add your own!

KUBELET_ARGS=""

Step 10: Start etcd and configure Flannel (On Master)

systemctl start etcd && systemctl enable etcd
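Before writing the Flannel configuration, it is worth confirming that etcd is up and healthy:

etcdctl cluster-health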

Create a new key in etcd to store the Flannel configuration using the following command:

etcdctl mkdir /kube-centos/network

Next, we need to define the network configuration for Flannel using this command:

etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"

The above command allocates the 172.30.0.0/16 subnet to the Flannel network. A /24 Flannel subnet is then allocated to each server in the Kubernetes cluster.
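You can read the key back to confirm that the configuration was stored correctly; it should print the JSON document written above:

etcdctl get /kube-centos/network/config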

Now configure Flannel on the master:

vi /etc/sysconfig/flanneld

# etcd URL location. Point this to the server where etcd runs

FLANNEL_ETCD="http://172.16.106.140:2379"

# etcd config key. This is the configuration key that flannel queries

# For address range assignment

FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass

FLANNEL_OPTIONS=""

Note: Please make sure the Flannel subnet doesn't overlap with the infrastructure subnet or the service cluster IP range.

Step 11: Flannel Configuration (On Node1)

Comment out the defaults and add these lines:

vi /etc/sysconfig/flanneld

# etcd URL location. Point this to the server where etcd runs

FLANNEL_ETCD="http://172.16.106.140:2379"

# etcd config key. This is the configuration key that flannel queries

# For address range assignment

FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass

FLANNEL_OPTIONS=""

Step 12: Flannel Configuration (On Node2)

Comment out the defaults and add these lines:

vi /etc/sysconfig/flanneld

# etcd URL location. Point this to the server where etcd runs

FLANNEL_ETCD="http://172.16.106.140:2379"

# etcd config key. This is the configuration key that flannel queries

# For address range assignment

FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass

FLANNEL_OPTIONS=""

Step 13: Start services on Master

systemctl enable kube-apiserver && systemctl start kube-apiserver

systemctl enable kube-controller-manager && systemctl start kube-controller-manager

systemctl enable kube-scheduler && systemctl start kube-scheduler

systemctl enable etcd && systemctl start etcd

systemctl enable flanneld && systemctl start flanneld
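A quick sanity check on the master is to confirm that each unit is active and that the API server responds:

for s in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do systemctl is-active $s; done

kubectl cluster-info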

Step 14: Start services on Node 1 and Node 2 (run on both nodes)

systemctl enable kube-proxy && systemctl start kube-proxy

systemctl enable kubelet && systemctl start kubelet

systemctl enable flanneld && systemctl start flanneld

systemctl enable docker && systemctl start docker

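Once the node services are up, both nodes should register with the API server within a few moments. From the master:

kubectl get nodes

172.16.106.141 and 172.16.106.142 should both be listed with status Ready.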

Note: On node1 and node2, make sure that the IP address allocated to docker0 is the first IP address in that node's Flannel subnet; otherwise your cluster won't work properly. To check this, use the "ifconfig" command, as shown below.
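Flannel records the subnet it leased for the node in /run/flannel/subnet.env (the usual location for the flanneld systemd unit; verify the path on your install). Its FLANNEL_SUBNET value should match the docker0 address, e.g. if FLANNEL_SUBNET is 172.30.34.1/24, docker0 should show 172.30.34.1:

cat /run/flannel/subnet.env

ifconfig docker0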

Step 15: Deploying Add-ons in Kubernetes

Configuring DNS for Kubernetes Cluster

To enable service name discovery in our Kubernetes cluster, we need to configure DNS for it. To do so, we deploy a DNS pod and service in the cluster and configure the kubelet on each node to resolve all DNS queries through this local DNS service.

You can download DNS Replication Controller and Service YAML from my repository. You can also download the latest version of DNS from official Kubernetes repository (kubernetes/cluster/addons/dns).

Next, download the YAML files and use the following commands to create the replication controller and service:

mkdir kube-cluster-config
cd kube-cluster-config/
mkdir DNS
cd DNS
wget https://bitbucket.org/Dhyaniarun/kubernetes/raw/bcc3c46179773691ae51c11785eb70bfca58ba9e/DNS/skydns-rc.yaml
wget https://bitbucket.org/Dhyaniarun/kubernetes/raw/bcc3c46179773691ae51c11785eb70bfca58ba9e/DNS/skydns-svc.yaml

kubectl create -f skydns-rc.yaml
kubectl create -f skydns-svc.yaml

Note: In skydns-svc.yaml, set the clusterIP to your DNS service IP. We are using 10.254.3.100, the default already present in this file, so no change is needed for this walkthrough.

Check the status of the DNS pod and service:
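Assuming the downloaded YAML creates the objects in the kube-system namespace, as the upstream add-on does:

kubectl get pods --namespace=kube-system

kubectl get svc --namespace=kube-system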

Step 16: Configure the kubelet on all nodes to resolve DNS queries through our local DNS service

On Node1

vim /etc/kubernetes/kubelet

# Add your own!

KUBELET_ARGS="--cluster-dns=10.254.3.100 --cluster-domain=cluster.local"

systemctl restart kubelet

On Node2

vim /etc/kubernetes/kubelet

# Add your own!

KUBELET_ARGS="--cluster-dns=10.254.3.100 --cluster-domain=cluster.local"

systemctl restart kubelet
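To verify that cluster DNS actually works, you can run a throwaway busybox pod and resolve the kubernetes service from inside it (the busybox image is just an example; any image with nslookup will do):

kubectl run busybox --image=busybox --restart=Never -- sleep 3600

kubectl exec busybox -- nslookup kubernetes.default

kubectl delete pod busybox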

Step 17: Configuring Dashboard for Kubernetes Cluster

Kubernetes Dashboard provides a user interface through which we can manage Kubernetes work units. We can create, delete, or edit all work units from the Dashboard.

Kubernetes Dashboard is also deployed as a Pod in the cluster. You can download Dashboard Deployment and Service YAML from my repository. You can also download the latest version of Dashboard from official Kubernetes repository (kubernetes/cluster/addons/dashboard)

After downloading YAML, run the following commands from the master:

mkdir Dashboard

cd Dashboard/

wget https://bitbucket.org/Dhyaniarun/kubernetes/raw/bcc3c46179773691ae51c11785eb70bfca58ba9e/Dashboard/dashboard-controller.yaml

wget https://bitbucket.org/Dhyaniarun/kubernetes/raw/bcc3c46179773691ae51c11785eb70bfca58ba9e/Dashboard/dashboard-service.yaml

kubectl create -f dashboard-controller.yaml

kubectl create -f dashboard-service.yaml

Now you can access Kubernetes Dashboard on your browser.

Open http://master_public_ip:8080/ui on your browser.

Running your first containers in Kubernetes

The kubectl run line below will create two nginx pods listening on port 80. It will also create a deployment named my-nginx to ensure that there are always two pods running.

kubectl run my-nginx --image=nginx --replicas=2 --port=80

Once the pods are created, you can list them to see what is up and running:

kubectl get pods

You can also see the deployment that was created:

kubectl get deployment

Exposing your pods to the internet:

kubectl expose deploy my-nginx --port=8888 --target-port=80 --type=NodePort --external-ip=172.16.106.141
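To confirm the service was created and that nginx answers on the external IP, check the service and curl it from any machine that can reach the node:

kubectl get svc my-nginx

curl http://172.16.106.141:8888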
