Saturday 15 August 2020

Helm Charts

 

What is Helm?

Helm is a Kubernetes package and operations manager. A Helm chart will usually contain at least a Deployment and a Service, but it can also contain an Ingress, Persistent Volume Claims, or any other Kubernetes object. Helm charts are used to deploy an application, or one component of a larger application.
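As a rough sketch, a chart is just a directory of templated manifests plus metadata; the layout below is typical (the chart name mychart and the individual template files are illustrative):

mychart/
├── Chart.yaml          # chart metadata: name, version, description
├── values.yaml         # default configuration values for the templates
└── templates/          # Kubernetes manifests rendered with Go templating
    ├── deployment.yaml
    ├── service.yaml
    └── ingress.yaml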

 

Helm can be useful in several scenarios:

  • Find and use popular software packaged as Kubernetes charts
  • Share your own applications as Kubernetes charts
  • Create reproducible builds of your Kubernetes applications
  • Intelligently manage your Kubernetes object definitions
  • Manage releases of Helm packages
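In Helm 2 terms, those scenarios map onto a handful of everyday commands; a minimal sketch (the chart stable/wordpress and the release name my-release are placeholders):

helm search wordpress                      # find charts in the configured repos
helm install stable/wordpress              # deploy a chart as a new release
helm upgrade my-release stable/wordpress   # move a release to a new chart/config
helm rollback my-release 1                 # reproduce a previous revision
helm ls                                    # list the releases Helm is managing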

Making a Kubernetes Cluster

In this lesson we will do a quick review of getting a Kubernetes cluster up and running. This will be the basis for all of the future work that we do using Helm. We will also cover the installation of the Rook volume provisioner. All of this takes place on our Cloud Playground servers using the Cloud Native Kubernetes image.

During the installation you might see a warning message in the pre-flight checks indicating that the version of Docker is not validated. It is safe to ignore this warning; the version of Docker that is installed works correctly with the installed Kubernetes version.

We will be installing version 1.13.12 of Kubernetes using the following commands on all servers/nodes.

apt install -y docker.io
systemctl enable docker
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubeadm=1.13\* kubectl=1.13\* kubelet=1.13\* kubernetes-cni=0.7\*

On the master node we will run the init command for the version of Kubernetes that we are installing. The following commands are run only on the master node.

kubeadm init --kubernetes-version stable-1.13 --token-ttl 0 --pod-network-cidr=10.244.0.0/16

Be sure to run the kubeadm join command (printed at the end of kubeadm init) on each worker node. You also need to run the post-install commands on the master to create the .kube directory, copy in the admin config, and chown it, as sketched below.
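These are the standard steps that kubeadm init prints on completion; the token and hash below are placeholders for the values from your own init output:

# on the master, as the regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# on each worker node
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>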

Then install Flannel:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
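Once the Flannel pods are up, the nodes should move to Ready. A quick sanity check:

kubectl get nodes
kubectl get pods --all-namespaces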

Make sure that you get the correct version of Rook; in this course we are using Rook 0.9. Clone the course repository, which contains the matching manifests:

git clone https://github.com/linuxacademy/content-kubernetes-helm.git ./rook
cd ./rook/cluster/examples/kubernetes/ceph
kubectl create -f operator.yaml

Once the agent, operator, and discover pods are running in the rook-ceph-system namespace, set up the cluster:

kubectl create -f cluster.yaml

Once this has run, wait for the OSD pods to appear in the rook-ceph namespace:

kubectl get pods -n rook-ceph

Create a storage class so that workloads can request volumes from the Ceph cluster:

kubectl create -f storageclass.yaml
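To confirm that dynamic provisioning works end to end, you can create a test PersistentVolumeClaim against the new class. rook-ceph-block is the class name defined in the Rook 0.9 example storageclass.yaml; adjust it if your copy differs:

kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

kubectl get pvc test-claim   # should report Bound once provisioning succeeds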

Installing Helm and Tiller

 

In this lesson, we will look at installing Helm using available packages. These methods include package management with Snap and installing from precompiled binaries.
As some commands have changed in recent versions, please ensure that you are installing the same version that is being installed in the video.
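As a sketch of the two routes (the binary version below is illustrative; match it to the version used in the video):

# via Snap
sudo snap install helm --classic

# via a precompiled binary
curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
tar -xzf helm-v2.11.0-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm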

We will explore the commands:

helm init

as well as:

helm init --upgrade
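helm init installs Tiller into the cluster (in the kube-system namespace by default), while helm init --upgrade upgrades an existing Tiller to match the client. A quick check that the client and server versions agree:

helm version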

Installing Helm and Tiller, Part 2

 

In this lesson we continue with the installation of Helm and Tiller by compiling the Helm binaries from source code. We will also quickly cover the setup of the golang environment required to compile Helm. Once we have the binaries available we will install Helm and Tiller. Then we'll discuss service accounts and ensure that our installation is able to create a release.

The installation guide for golang can be found at:

https://golang.org/doc/install

The glide project is located at:

https://github.com/Masterminds/glide

The official Helm repo is located at:

https://github.com/helm/helm

Here is a command reference for this lesson:

Build command for Helm:

make bootstrap build
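For context, a minimal sketch of the Go environment this build expects; the paths follow the Helm 2-era GOPATH convention (see the golang install guide above) and may need adjusting:

export GOPATH=$HOME/go
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
mkdir -p $GOPATH/src/k8s.io
cd $GOPATH/src/k8s.io
git clone https://github.com/helm/helm.git
cd helm
make bootstrap build   # fetches dependencies and builds bin/helm and bin/tiller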

Kubernetes service account setup for Tiller:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
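After the patch, it is worth confirming that Tiller restarted under the new service account and that the client can reach it:

kubectl get pods -n kube-system | grep tiller
helm ls   # an empty list with no error means Tiller is reachable and authorised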

Sceptre Tool

Sceptre is a tool to drive AWS CloudFormation. It automates the mundane, repetitive and error-prone tasks, enabling you to concentrate on building better infrastructure.

Features

  • Code reuse by separating a Stack's template and its configuration
  • Support for templates written in JSON, YAML, Jinja2 or Python DSLs such as Troposphere
  • Dependency resolution by passing of Stack outputs to parameters of dependent Stacks
  • Stack Group support by bundling related Stacks into logical groups (e.g. dev and prod)
  • Stack Group-level commands, such as creating multiple Stacks with a single command
  • Fast, highly parallelised builds
  • Built in support for working with Stacks in multiple AWS accounts and regions
  • Infrastructure visibility with meta-operations such as Stack querying protection
  • Support for inserting dynamic values in templates via customisable Resolvers
  • Support for running arbitrary code as Hooks before/after Stack builds

Benefits

  • Utilises cloud-native Infrastructure as Code engines (CloudFormation)
  • You do not need to manage state
  • Simple templates using popular templating syntax - Yaml & Jinja
  • Powerful flexibility using a mature programming language - Python
  • Easy to integrate as part of a CI/CD pipeline by using Hooks
  • Simple CLI and API
  • Unopinionated - Sceptre does not force a specific project structure

Install

Using pip

$ pip install sceptre

More information on installing Sceptre can be found in the Installation Guide.

Example

Sceptre organises Stacks into "Stack Groups". Each Stack is represented by a YAML configuration file stored in a directory which represents the Stack Group. Here, we have two Stacks, vpc and subnets, in a Stack Group named dev:

$ tree
.
├── config
│   └── dev
│       ├── config.yaml
│       ├── subnets.yaml
│       └── vpc.yaml
└── templates
    ├── subnets.py
    └── vpc.py
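Each Stack's YAML file points at a template and supplies its configuration, while the Stack Group's config.yaml holds shared settings. A minimal sketch (the keys shown are the common ones; exact fields depend on your Sceptre version):

# config/dev/config.yaml
project_code: sceptre-demo
region: eu-west-1

# config/dev/vpc.yaml
template_path: templates/vpc.py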

We can create a Stack with the create command. This vpc Stack contains a VPC.

$ sceptre create dev/vpc.yaml

dev/vpc - Creating stack
dev/vpc VirtualPrivateCloud AWS::EC2::VPC CREATE_IN_PROGRESS
dev/vpc VirtualPrivateCloud AWS::EC2::VPC CREATE_COMPLETE
dev/vpc sceptre-demo-dev-vpc AWS::CloudFormation::Stack CREATE_COMPLETE

The subnets Stack contains a subnet which must be created in the VPC. To do this, we need to pass the VPC ID, which is exposed as a Stack output of the vpc Stack, to a parameter of the subnets Stack. Sceptre automatically resolves this dependency for us.
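In the Stack config this dependency is expressed with the stack_output resolver; the output name VpcId is an assumption about what the vpc template exports:

# config/dev/subnets.yaml
template_path: templates/subnets.py
parameters:
  VpcId: !stack_output vpc.yaml::VpcId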

$ sceptre create dev/subnets.yaml

dev/subnets - Creating stack
dev/subnets Subnet AWS::EC2::Subnet CREATE_IN_PROGRESS
dev/subnets Subnet AWS::EC2::Subnet CREATE_COMPLETE
dev/subnets sceptre-demo-dev-subnets AWS::CloudFormation::Stack CREATE_COMPLETE

Sceptre implements meta-operations, which allow us to find out information about our Stacks:

$ sceptre list resources dev/subnets.yaml

dev/subnets:
- LogicalResourceId: Subnet
  PhysicalResourceId: subnet-445e6e32
dev/vpc:
- LogicalResourceId: VirtualPrivateCloud
  PhysicalResourceId: vpc-c4715da0

Sceptre provides Stack Group level commands. This one deletes the whole dev Stack Group. The subnet exists within the vpc, so it must be deleted first. Sceptre handles this automatically:

$ sceptre delete dev

dev/subnets - Deleting stack
dev/subnets Subnet AWS::EC2::Subnet DELETE_IN_PROGRESS
dev/subnets - Stack deleted
dev/vpc - Deleting stack
dev/vpc VirtualPrivateCloud AWS::EC2::VPC DELETE_IN_PROGRESS
dev/vpc - Stack deleted

Note: Deleting Stacks will only delete a given Stack, or the Stacks that are directly in a given StackGroup. By default Stack dependencies that are external to the StackGroup are not deleted.

Sceptre can also handle cross-Stack-Group dependencies. Take the following example project:

$ tree
.
├── config
│   ├── dev
│   │   ├── network
│   │   │   └── vpc.yaml
│   │   ├── users
│   │   │   └── iam.yaml
│   │   ├── compute
│   │   │   └── ec2.yaml
│   │   └── config.yaml
│   └── staging
│       └── eu
│           ├── config.yaml
│           └── stack.yaml
├── hooks
│   └── stack.py
├── templates
│   ├── network.json
│   ├── iam.json
│   ├── ec2.json
│   └── stack.json
└── vars
    ├── dev.yaml
    └── staging.yaml

In this project, staging/eu/stack.yaml has a dependency on the output of dev/users/iam.yaml. If you want to create the Stack staging/eu/stack.yaml, Sceptre will resolve all of its dependencies, including dev/users/iam.yaml, before attempting to create the Stack.

Usage

Sceptre can be used from the CLI, or imported as a Python package.

CLI

Usage: sceptre [OPTIONS] COMMAND [ARGS]...

  Sceptre is a tool to manage your cloud native infrastructure deployments.

Options:
  --version              Show the version and exit.
  --debug                Turn on debug logging.
  --dir TEXT             Specify sceptre directory.
  --output [yaml|json]   The formatting style for command output.
  --no-colour            Turn off output colouring.
  --var TEXT             A variable to template into config files.
  --var-file FILENAME    A YAML file of variables to template into config
                         files.
  --ignore-dependencies  Ignore dependencies when executing command.
  --help                 Show this message and exit.

Commands:
  create         Creates a stack or a change set.
  delete         Deletes a stack or a change set.
  describe       Commands for describing attributes of stacks.
  estimate-cost  Estimates the cost of the template.
  execute        Executes a Change Set.
  generate       Prints the template.
  launch         Launch a Stack or StackGroup.
  list           Commands for listing attributes of stacks.
  new            Commands for initialising Sceptre projects.
  set-policy     Sets Stack policy.
  status         Print status of stack or stack_group.
  update         Update a stack.
  validate       Validates the template.

Python

Using Sceptre as a Python module is very straightforward. You need to create a SceptreContext, which tells Sceptre where your project path is and which path you want to execute on; we call this the "command path".

After you have created a SceptreContext, you need to pass it into a SceptrePlan. On instantiation, the SceptrePlan will handle all the required steps to make sure the actions you wish to take on the command path are resolved.

After you have instantiated a SceptrePlan, you can access all the actions you can take on a Stack, such as validate(), launch(), list(), and delete().

from sceptre.context import SceptreContext
from sceptre.plan.plan import SceptrePlan

context = SceptreContext("/path/to/project", "command_path")
plan = SceptrePlan(context)
plan.launch()