What is Helm?
Helm is a package and operations manager for Kubernetes. A Helm chart will usually contain at least a
Deployment and a Service, but it can also contain an Ingress, PersistentVolumeClaims,
or any other Kubernetes object. Helm charts are used to
deploy an application, or one component of a larger application.
Helm is useful in several scenarios:
- Find and use popular software packaged as Kubernetes charts
- Share your own applications as Kubernetes charts
- Create reproducible builds of your Kubernetes applications
- Intelligently manage your Kubernetes object definitions
- Manage releases of Helm packages
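As an illustration, a typical Helm 2 workflow (the Helm version used in this course; the release name "my-db" below is an arbitrary example) might look like:

```shell
# Search the configured repositories for a chart (Helm 2 syntax)
helm search mysql

# Install a chart from the stable repository as a named release
# ("my-db" is a hypothetical release name)
helm install stable/mysql --name my-db

# List deployed releases; roll back to an earlier revision if needed
helm list
helm rollback my-db 1
```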
Creating a Kubernetes Cluster
In this lesson we will do a quick review of getting a Kubernetes cluster
up and running. This will be the basis for all of the future work that we do
using Helm. We will also cover the installation of the Rook volume provisioner.
All of this takes place on our Cloud Playground servers using the Cloud Native
Kubernetes image.
During the installation you might see a warning message in the
pre-flight checks indicating that the version of Docker has not been validated. It is
safe to ignore this warning; the installed version of Docker works
correctly with the installed Kubernetes version.
We will install Kubernetes version 1.13.12 using the following
commands on all servers/nodes.
apt install -y docker.io
systemctl enable docker
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubeadm=1.13\* kubectl=1.13\* kubelet=1.13\* kubernetes-cni=0.7\*
On the master node we will run the init command for the version of
Kubernetes that we are installing. The following commands are run only on the
master node.
kubeadm init --kubernetes-version stable-1.13 --token-ttl 0 --pod-network-cidr=10.244.0.0/16
Be sure to run the kubeadm join command (printed at the end of kubeadm init) on each
worker node. You also need to run the post-install commands on the master to create
the .kube directory, copy the admin config into it, and chown it to your user.
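For reference, these are the standard post-install and join commands that kubeadm init prints; the master IP, token, and hash below are placeholders that you should take from your own kubeadm init output:

```shell
# On the master: set up kubectl for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On each worker: join the cluster using the command printed by kubeadm init
# (<master-ip>, <token>, and <hash> are placeholders)
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```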
Then install Flannel:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
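Once Flannel is applied, you can confirm that the network pods come up and the nodes report Ready (a quick sketch; exact pod names will vary per node):

```shell
# Flannel runs as a DaemonSet in kube-system, one pod per node
kubectl get pods -n kube-system -l app=flannel

# All nodes should move to the Ready state once networking is up
kubectl get nodes
```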
Make sure that you get the correct version of Rook; in this course we
are using Rook 0.9.
git clone https://github.com/linuxacademy/content-kubernetes-helm.git ./rook
cd ./rook/cluster/examples/kubernetes/ceph
kubectl create -f operator.yaml
Once the agent, operator, and discover pods have started in the
rook-ceph-system namespace, set up the cluster:
kubectl create -f cluster.yaml
Once this is run, wait for the OSD pods to appear in the rook-ceph namespace:
kubectl get pods -n rook-ceph
Create a storage class so that workloads can request volumes from it.
kubectl create -f storageclass.yaml
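To confirm the provisioner works, you can create a test PersistentVolumeClaim against the new storage class. This is a sketch: the class name rook-ceph-block is what the Rook 0.9 example storageclass.yaml defines, and the claim name is hypothetical, so adjust both to match your files:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rook-test-pvc        # hypothetical name for this test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 1Gi
EOF

# The claim should move from Pending to Bound once provisioning succeeds
kubectl get pvc rook-test-pvc
```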
Installing Helm and Tiller
In this lesson, we
will look at installing Helm using available packages. These methods include
package management with Snap and installing from precompiled binaries.
As some commands have changed in recent versions, please ensure that you are
installing the same version that is being installed in the video.
We will explore the commands:
helm init
as well as:
helm init --upgrade
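After helm init, you can verify that the client and the Tiller server agree on a version and that the Tiller pod is running (a sketch; the labels shown are the ones Helm 2 applies to the Tiller deployment):

```shell
# Client and server (Tiller) versions should match after `helm init --upgrade`
helm version

# Tiller runs as a deployment in the kube-system namespace
kubectl get pods -n kube-system -l app=helm,name=tiller
```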
Installing Helm and Tiller, part 2
In this lesson we
continue with the installation of Helm and Tiller by compiling the Helm
binaries from source code. We will also quickly cover the setup of the golang
environment required to compile Helm. Once we have the binaries available we
will install Helm and Tiller. Then we'll discuss service accounts and ensure
that our installation is able to create a release.
The installation instructions for golang can be found at:
https://golang.org/doc/install
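A minimal golang environment setup might look like the following. The Go version in the URL is only an example, so substitute the current release from the page above; Helm 2's Makefile also expects the source checked out under $GOPATH/src/k8s.io/helm:

```shell
# Download and unpack Go (version in the URL is an example; check golang.org)
curl -LO https://dl.google.com/go/go1.11.4.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.11.4.linux-amd64.tar.gz

# Put the toolchain and the Go workspace bin directory on PATH
export PATH=$PATH:/usr/local/go/bin
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin

# Helm's build expects the repository at this import path
mkdir -p $GOPATH/src/k8s.io
```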
The glide project is located at:
https://github.com/Masterminds/glide
The official Helm repo is located at:
Here is a command reference for this lesson:
Build command for Helm:
make bootstrap build
Kubernetes service account:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
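With the tiller service account bound and patched in, you can check that Tiller is actually able to create a release. This is a sketch: the chart and release names are examples, and the chart must exist in your configured stable repository:

```shell
# Install a small chart as a smoke test (release name is an example)
helm install stable/nginx-ingress --name test-release

# Confirm the release was deployed, then clean up
helm list
helm delete --purge test-release
```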