Kubernetes with UI installation on Ubuntu 14.04



Kubernetes with UI over Docker

1. Install the prerequisite packages on both the master and the node machines:
$ sudo apt-get update
$ sudo apt-get install ssh curl vim

2. Docker installation
$ sudo apt-get update
$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

$ echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list
$ sudo apt-get update
$ sudo apt-get purge lxc-docker
$ sudo apt-cache policy docker-engine
$ sudo apt-get install docker-engine -y

3. Generate an SSH key pair:
$ ssh-keygen -t rsa

4. Copy the ssh id_rsa key locally (optional, for password-less auth):
$ ssh-copy-id -i /root/.ssh/id_rsa.pub 127.0.0.1

5. In case this command fails, use this alternative to add the key (optional):
$ cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

6. Validate the password-less ssh login (optional test):
$ ssh root@127.0.0.1
root@virtual-machine:~$ exit
logout
Connection to 127.0.0.1 closed

7. Get the Kubernetes release bundle from the official GitHub repository, for example (the version shown is only an example; pick the release you need from the project's releases page):

$ wget https://github.com/kubernetes/kubernetes/releases/download/v1.0.6/kubernetes.tar.gz


8. Untar the Kubernetes bundle in the same directory:
$ tar -xvf kubernetes.tar.gz

We will build the Kubernetes binaries specifically for an Ubuntu cluster.
9. Execute the following shell script:
$ cd kubernetes/cluster/ubuntu
$ ./build.sh

10. Configure the cluster information by editing only the following parameters of the file cluster/ubuntu/config-default.sh in the editor of your choice.

$ cd
$ vi kubernetes/cluster/ubuntu/config-default.sh
export nodes="root@127.0.0.1  root@172.16.1.8"
export roles="ai i"
export NUM_MINIONS=${NUM_MINIONS:-2}
export FLANNEL_NET=172.16.0.0/16

NOTE: for multiple nodes you need to set the roles as an array, e.g. export roles=("ai" "i" "i")  // works around a bug in the script

Only update the above-mentioned values in the file; the rest of the configuration remains as it is. The first variable, nodes, lists all the cluster machines; here the first entry (127.0.0.1) is configured as both master and node, and the second (172.16.1.8) as an additional node. The roles value assigns a role to each entry in order: "a" stands for master and "i" stands for node, so "ai i" makes the first machine master plus node and the second machine a node.
Now, we will start the cluster with the following command:
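For a larger cluster, the same section of config-default.sh just grows by one entry per machine. As a sketch, a three-machine cluster (one combined master/node plus two workers) might look like this; the IP addresses are placeholders, and the array form of roles works around the script bug mentioned in the note above:

```shell
# cluster/ubuntu/config-default.sh (excerpt) -- example values only
export nodes="root@192.168.0.10 root@192.168.0.11 root@192.168.0.12"
# one role per entry in nodes: "ai" = master+node, "i" = node only
export roles=("ai" "i" "i")
export NUM_MINIONS=${NUM_MINIONS:-3}
export FLANNEL_NET=172.16.0.0/16
```

Each position in roles corresponds to the same position in nodes, so the two lists must have the same number of entries.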

11. Start up the cluster:

$ cd kubernetes/cluster
$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
Cluster validation succeeded
Done, listing cluster services:
Note: if validation fails or hangs, stop the kube services on all machines and run kube-up.sh again.


Kubernetes master is running at http://127.0.0.1:8080



12. We will add the kubectl binary to the PATH in order to manage the Kubernetes cluster:

$ export PATH=$PATH:~/kubernetes/cluster/ubuntu/binaries  // to get command access

Now, we will validate whether the K8s cluster created above is properly configured:
$ kubectl get nodes

NAME        LABELS                             STATUS
127.0.0.1   kubernetes.io/hostname=127.0.0.1   Ready
13. The Kubernetes UI can be accessed at http://127.0.0.1:8080/ui . In case it is not accessible, create the following resources and try again:

// for UI configuration (optional)

$ kubectl create -f addons/kube-ui/kube-ui-rc.yaml --namespace=kube-system
$ kubectl create -f addons/kube-ui/kube-ui-svc.yaml --namespace=kube-system
14. Container monitoring UI (cAdvisor):

http://127.0.0.1:4194/containers/  // use the master's IP instead of 127.0.0.1 if you are on another machine

15. To check other resources through the UI:

http://127.0.0.1:8080/  // and select the desired option

To check the Kubernetes pod list:
# kubectl get pods
To get the Kubernetes services:
# kubectl get services

Run an nginx container (named my-nginx) listening on port 80 (without replicas):
# kubectl run-container my-nginx --image=nginx --port=80


To create a container from a YAML file:
#kubectl create -f replication.yaml
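The contents of replication.yaml are not shown above; a minimal replication controller manifest for the v1 API might look like the following sketch (the name and label values are illustrative, not taken from the original post):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx-rc
spec:
  replicas: 2
  # the controller manages every pod carrying this label
  selector:
    app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

The selector labels must match the labels in the pod template, otherwise the API server rejects the manifest.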


Replication/Scaling

Kubernetes container replication:
1. Pull the nginx image with Docker:
# docker pull nginx
2. Create a container with replication:
# kubectl run my-nginxtest --image=nginx --replicas=2 --port=80
3. To check the replication controller:
# kubectl get rc
4. To check the replicated pods:
# kubectl get pod

5. Create a load balancer for the my-nginxtest container:
# kubectl expose rc my-nginxtest --port=80 --create-external-load-balancer
6. To check full container details, including the IP (the pod name suffix is auto-generated; use the name shown by kubectl get pod):
# kubectl describe pod my-nginxtest-h91jo
7. To stop the replication controller:
# kubectl delete rc my-nginxtest

Scaling replication
Scaling our nginx replication:
# kubectl scale --current-replicas=2 --replicas=3 rc my-nginxtest

Comments

  1. Good explanation for a single node, but how do I add nodes to the master node?

  2. @cheftestn as explained in point 10, you can add a new node by updating config-default.sh. You have to add the new node there, as in this config for one node plus a master:

    export nodes="root@127.0.0.1 root@172.16.1.8"
    export roles="ai i"
    export NUM_MINIONS=${NUM_MINIONS:-2}

  3. how to setup Kubernetes cluster[multi node] on the top of docker 1.12.1[rc2]?

    thanks
    ~Yogesh

    Replies
    1. The steps remain the same; you just need to install the required Docker version on the node and the master.

  4. Hi Sir,
    I am getting the error below:
    root@ip-172-31-20-169:~/kubernetes/cluster# kubectl get nodes
    The connection to the server localhost:8080 was refused - did you specify the right host or port?

    Replies
    1. Sorry for the late reply. This error occurs because the kube-api service was not running. Please check the kube-api service.

  5. The post is very clear and easy to follow. However I see that my cluster build wasn't successful. It completed only on my master node.

    cp: cannot create regular file '/opt/bin/etcd': Text file busy
    cp: cannot create regular file '/opt/bin/flanneld': Text file busy
    cp: cannot create regular file '/opt/bin/kube-apiserver': Text file busy
    cp: cannot create regular file '/opt/bin/kube-controller-manager': Text file busy
    cp: cannot create regular file '/opt/bin/kube-scheduler': Text file busy
    start: Job is already running: etcd
    Connection to 192.168.56.150 closed.
    root@master:/home/admin/kubernetes/cluster# kubectl get nodes
    NAME LABELS STATUS
    root@master:/home/admin/kubernetes/cluster#

    The UI doesn't show any nodes, below are the configs from the config-default.sh
    roles=${roles:-"ai" "i" "i"}
    export roles=($roles)
    export nodes="root@192.168.56.150 root@192.168.56.151"

    I'm trying to build a two node cluster, nothing appears to have installed on the other node. Could you please help me in troubleshooting this issue ?

    Replies
    1. You have defined 3 servers in roles (roles=${roles:-"ai" "i" "i"}) but exported only 2 nodes; please correct that first. Also, if you have run kube-up multiple times, stop the kube services first and then run it again; that will work.

  6. Hi Jogendra Jangid, it is taking a long time to validate the nodes; for the last hour it has been stuck on Validating. Is that a configuration error or a version error?

    Validating xyz@1.2.3.4....................  (the dots continue indefinitely)


