Wednesday, 16 June 2021

GCP Interview Questions

1) What are potential advantages in GCP application design?

Load balancers do not require pre-warming.

GCP offers encryption at rest and in flight for all services.

No separate queueing services are required.


2)Which of the following GCP resources are regional?

Ans: GCP App Engine applications


3)Which CLI tools are installed with the GCP SDK?

Ans:gcloud, gsutil, bq


4)Which statements regarding GCP automatic and custom subnetworks are correct?

Automatic mode creates one subnet in each region.

Automatic subnets can be converted to custom subnets, but cannot be converted back to automatic.


5)Each GCP region includes at least how many zones?

Ans:3


6)What is the name of the Google Cloud Platform DNS service?

Ans: Cloud DNS


7)To build a PCI compliant application in GCP, what level of isolation is recommended for your payment processing system?

Ans:GCP account


8)Which statements regarding the relationship between regions and availability zones are correct?

Ans: A region contains multiple zones.

If a zone fails, other zones within the region are not affected.

Zones are independent sections of a region.


10)Google Cloud Platform deployment templates are made available via 

Ans: GCP Marketplace


11)What statement(s) about GCP managed, multi-regional resources are correct? (Select all that apply)

They optimize efficiency and availability.

They are distributed in and across multiple regions.


12)Which of the following accurately describes Google Cloud Platform’s multi-region product availability?

Ans: GCP is divided globally into regions that map to continents. Each region has multiple zones, which constitute individual data centers in specific countries.


13) ____________________ is not an available API type in the Google Cloud Platform Console.

Ans:Kerberos


14)Which of the following is a zonal GCP resource?

Ans:GCP Compute Engine virtual machine instances


15)Which two instance data storage services are ideal for temporary data?

Ans:RAM Disks

Ans:Local SSDs




16) Which of these cannot be modified by the user?

Ans:Project number


17)Which of the following statements about the Google Cloud Platform pricing tool is true?

The pricing tool has features that let you save estimates and email them to yourself for easy preservation.

GCP pricing tool can generate multiple estimates for the same product. This means that if you need an estimate for several Cloud SQL instances, for example, you can add them all to one aggregate estimate.

It is possible to interact with the GCP pricing tool over an API, meaning you can create scripts to generate pricing reports.


18) _____ is similar to direct peering, but it goes further by creating, literally, a dedicated physical connection.

Ans: Dedicated Interconnect


19)In Google Cloud Compute Engine, which instance type isolates your VMs and workloads on their own physical servers?

Ans:sole-tenant


20) If you have configured Stackdriver Logging to export logs to BigQuery, but log entries are not getting exported to BigQuery, what is the most likely cause?

Ans:The Cloud Data Transfer Service has not been enabled.


21)Dave has been asked to track down logging on some actions that were initiated by the GCP infrastructure. He's also been asked to review the audit logs for some actions that are associated with API calls.  Which audit log(s) should Dave be interested in?

Ans:Admin activity and system event logs.


22)Which two instance data storage services are ideal for temporary data? 

Local SSDs

RAM disks


23) In terms of accessibility on Google Cloud Platform, static external IP addresses are a _____ resource.

Ans:Regional 


24)Which of the following GCP resources is regional?

Ans:App Engine


25)Which of these is always assigned by GCP, and cannot be modified by the user?

Ans:project number 


26)Using _____ in Visual Studio Code allows you to simply open a project, click on “Run on Kubernetes,” and test your application running on a local Kubernetes cluster without even leaving Visual Studio Code.

Ans:Google Cloud Code 


27)If your existing application runs on virtual machines, then the easiest way to migrate it to Google Cloud is to use _____ Engine.

Ans:Compute


28) The unit of deployment in Google Cloud Deployment Manager is called a _____.

Ans:deployment


29)Which of the following GCP resources are multi-regional? 

Ans:Cloud Storage data

Ans:GCP Virtual Private Cloud 


30)Which statements regarding GCP automatic and custom subnetworks are correct? (Choose 2 answers)

Ans:

Automatic subnets create one subnet in each region.

Automatic subnets can be converted to custom subnets, but not returned back to automatic.


31)GCP does not recommend editing the default permissions for ____________________, to avoid breaking service functionality for features such as auto scaling.

Ans:Google API service accounts



32)Which of the following statements about the Google Cloud Platform pricing tool is false?

Ans: Using the GCP pricing tool, I can generate a complete estimate for all of my needed services.


33)The CTO wants to build out a prototype cloud application in GCP using a serverless architecture model. Which GCP service will be of most interest to him?

Ans:App Engine


34)You have an application that is not intended for the web, or as a mobile app. It also is not built to host on containers. What GCP compute service would work best for this application?

Ans:Compute Engine


35)Which of the following statements is false?

Using Google Cloud Run to deploy your containerized applications gives you more control than deploying your own GKE cluster.

36)In Google Cloud Platform, what happens if the load on an instance group gets too high?

Ans:The autoscaler will add more instances.

37)Which of the following statements about Google Cloud Source Repositories is false?

For small applications with small development teams, you will likely use a combination of both branches and tags to keep code revisions organized.

38)The development team has submitted a ticket for a Cloud SQL instance for a web application that is crashing due to high CPU utilization. The network admin has suggested creating a Managed Instance Group to handle the load. Which suggested configuration below could solve the problem?

Ans:Create a Managed Instance Group with an autoscaling policy based on CPU utilization.


39) When you deploy a cluster in Google Cloud Platform that uses alias IP address ranges, it is referred to as a(n) _____.

Ans:VPC Native cluster


40) An engineer alerts you that a production application on a web server VM seems to be receiving updates from a development VM. The firewall log confirms that the production VM is receiving packets from the IP address of the development VM. You confirm that there is an ingress firewall rule that denies traffic with a target of the production VM, a source of the development VM, and a priority of 1000.

41) Which of the following would explain the problem?

Ans: There is another ingress firewall rule that allows traffic with a target of the production VM, a source of the development VM, and a priority of 1.


42)When designing VPC networks connected by a Cloud VPN, which statement below is a network design best practice Google recommends?

Ans: Use custom networks rather than auto mode networks.


43) In Google Cloud Platform, what does using a Shielded VM image offer over a regular image?

advanced security features---correct

the ability to share images

the ability to take snapshots

increased efficiency


44)Which of the following statements about Google Cloud Platform (GCP) firewall rules is false?

Every VPC network actually functions as a distributed firewall.

Connections to instances are allowed and denied on a per-network basis.---correct (this statement is false; GCP firewall rules are applied on a per-instance basis)

GCP firewall rules are used to allow or deny traffic to and from VM instances, based on your security needs.

Once configured and enabled, GCP firewall rules are always enforced, which means deployed instances are protected, regardless of their OS, configuration, or even startup status.



45)When adding storage to a compute instance in Google Cloud Platform, which of the following is not one of the available disk type choices?

zonal persistent

local SSDs

regional persistent

local flash---correct


46)In Google Cloud Platform, a(n) _____ allows you to operate an application across multiple identical VMs.

managed instance group--correct

access group

live migration

VM role


47)Fill in the blanks: To configure a Google Cloud instance's availability policy, you need to configure the instance's _____ behavior and _____ behavior.

maintenance, restart---correct

termination, migration

initialization, serialization

availability, downtime


48)Google Compute Engine provides _____ that you can add to VM instances to accelerate certain workloads such as data processing and machine learning.

GPUs--correct

load balancers

SSDs

functions

49)In Google Compute Engine, which type of workload is not a good choice for a managed instance group?

one in which you need to apply load balancing to groups of heterogeneous instances--correct

high-performance workloads

batch workloads

stateless workloads


50)Which of the following statements about reserving static internal IP addresses in Google Cloud Platform is false?

The reservation of a static internal IP address requires specific IAM permissions.--true 

By leveraging reserved static internal IP addresses, you can ensure that the resource that is assigned the reserved address always uses the same IP address, even if the resource is deleted and recreated.--true

You can reserve up to 200 static internal IP addresses per region, by default.--true

You can reserve static internal IP addresses for both VPC and legacy mode networks.--false

51)Which of the following statements about reserving static external IP addresses in Google Cloud Platform is true?

If you deploy an instance that requires a static external IP address that might change, you can reserve a static external IP for that instance.--false

A regional IP address can be used for global load balancers.--false

A reserved external IP address can be assigned to a new instance during creation of the instance.---correct 

A global IP address can be used by VM instances with one or more network interfaces.-false


52)To expand the primary IP range of an existing CIDR block subnet in Google Cloud Platform, you need to modify its _____.

subnet mask---correct

subnet identifier

router

host identifier


53) When a VM instance in Google Cloud Platform is stopped, what is lost?

MAC addresses

configured persistent disks

its application state---will be lost 

internal IPs


54)In Google Cloud Platform, the purpose of a(n) _____ is to facilitate the creation of identically configured instances.

shard

instance template--correct

image

snapshot


55) Which Google App Engine configuration file is used for overriding routing rules?

dispatch.yaml--correct

cron.yaml

queue.yaml

dos.yaml


56) In Google Kubernetes Engine, _____ are groups of nodes within a cluster that share the same configuration.

node controllers

containers

pods

node pools--correct


57)Which of the following is not one of the reasons we always want to use controllers when creating pods in Kubernetes?

Controllers reduce the monitoring workload.

Controllers help us ensure that pods are healthy.

Controllers give us more control.

Controllers allow us to work with the lowest level of abstraction possible.---correct


58)Which type of Kubernetes controller is designed for scenarios where we want to ensure that all or a specified set of nodes run a copy of the pod?

Deployment

ReplicaSet

StatefulSet

DaemonSet---correct
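As a hedged illustration of the DaemonSet answer above, a minimal manifest that runs one copy of a pod on every node might look like the following (the name log-agent and the image are hypothetical):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent            # hypothetical name for a node-level agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: agent
        image: fluentd       # example log-collection image
```

Because it is a DaemonSet, the scheduler places one pod per node, and new nodes automatically receive a copy.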


59)Which type of Kubernetes controller handles transitioning a set of pods from its current state to a defined desired state through declarative updates to a set of pods?

StatefulSet

ReplicaSet

Deployment--correct

DaemonSet


60)Which of the following statements about Standard and Flexible Google App Engine environments is true?


The Standard environment is less resilient.

The Flexible environment has a bit more flexibility with its autoscaling options.

The Standard environment gives you direct control over your application runtime via Docker files.

The Flexible environment is slower.---correct


61)In Kubernetes, servers are referred to as _____.

pods

nodes---correct

containers

controllers


62)Which of the following statements about service accounts in Google Cloud Identity and Access Management is true?

A service account can be treated as both an identity and a resource.--correct

A service account can be treated as an identity, but not as a resource.

A service account can be treated as a resource, but not as an identity.

A service account can be treated as neither an identity nor a resource


63)In Google Cloud Identity and Access Management, what is the topmost level of the resource hierarchy?

project

folder

resource

organization--correct


64)In Google Cloud Identity and Access Management, when you have a number of users and they all need similar permissions, it is useful to add _____ as members and assign roles to them.

organizations

Google Roles

Google Groups---correct

projects


65) In Google Cloud Identity and Access Management, a virtual identity attached to a cloud service is known as a _____.


service account--correct

virtual account

cloud account

cloud identity


66) The most common use case for folders in Google Cloud Identity and Access Management is providing a separate folder for each _____ in your organization.

department---correct

application

project

individual


67)In Google Operations (formerly Stackdriver), which type of audit logs record actions associated with API calls?

data access---correct

user access

system events

admin activity


68)_____ includes a centralized logging interface where you can see several different types of logs from different services in Google Cloud Platform in a single place.

Logdriver

storage buckets

Compute Engine

Google Operations (formerly Stackdriver)---correct


69)Through _____, you add data one record at a time to Google BigQuery, instead of adding a whole table at a time.

streaming--correct

queries

uploading

the command line


70)_____ were created to collect data from a wide variety of sources, and they were designed specifically for reporting and data analysis.


Cloud services

Query languages

Data warehouses---correct

Databases


71)Which is the most expensive Google BigQuery operation?

queries

storage

uploading data through the command line

streaming---correct


72)If you need to upload lots of files to Google BigQuery at the same time, the best choice is to use _____.

BigQuery's Large File Uploader tool

the command line---correct

the API

BigQuery's web interface


73)To find errors in your Google Cloud application, use Google _____.

Cloud Error Reporting--correct

Cloud Trace

Cloud Profiler

App Engine


74)Which of the following is not a step you can take to ensure the integrity of your Google Cloud Audit Logs?

Implement object versioning on the log buckets.---correct

Delete old log entries.

Apply the principle of least privilege.

Require two people to inspect the logs.


75)In Google Cloud Trace, what does each dot in the trace list represent?

an individual request to the application---correct

one minute

an error in the operation of your application

an individual user using the application


76) In Google Cloud Audit Logs, which type of audit log tracks Google's actions on Compute Engine resources?

Data Access

System Event---correct

Admin Activity

Compute


77)_____ is Google’s powerful monitoring, logging, and debugging tool.

Cloud Operations---correct

Cloud Audit Logs

BigQuery

Cloud Storage

Friday, 11 June 2021

Kubernetes Interview Questions

 1. How to run Kubernetes locally?

Ans: Kubernetes can be set up locally by using the minikube tool.


2. You have removed a Node from Service but it is kept in the cluster during the maintenance operation. How can you tell Kubernetes that it can resume scheduling new pods onto the node?

ans: kubectl uncordon <node-name>


3. Which command prints logs from a Pod that has multiple containers?

Ans: kubectl logs <podname> --all-containers=true


4. What does a Kubernetes service do?

Ans: Defines a set of Pods and a policy to access them
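The answer above can be sketched as a minimal Service manifest (names and ports are illustrative): the selector defines the set of Pods, and the ports define the access policy.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # the set of Pods this Service targets
  ports:
  - port: 80          # port clients connect to
    targetPort: 8080  # port the Pods actually listen on
```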


5. What services does Ingress expose?

Ans: HTTP and HTTPS


6. What role does IP forwarding play with respect to Kubernetes?

Ans: It allows the kernel to route traffic from containers to the outside world.



7. When configuring a Highly-Available Kubernetes cluster, how many machines are needed for the masters?

Ans: 3 



8. You are creating your own StorageClass with your cluster. You create your .yaml file with parameters customized for this cluster. What is the next step in order to complete this setup?

Ans: Apply your .yaml file with the kubectl create -f command.


9. The following is a kubectl command dealing with network policies:

Ans: --image=nginx



10. You have a Kubernetes application with multiple clusters being used. You would like to implement a way to monitor the application while also being able to visualize it with a dashboard and a means to query your data. Which would satisfy your request?

Ans: Prometheus


11. What are the 4 C’s of the Cloud Native Security paradigm?

Ans: Code, Container, Cluster, Cloud


12. You are monitoring your application and then you step away from it for one hour. When you come back you see that something has gone wrong with one of your clusters. What can you do to find out what happened during this hour?

Ans: kubectl logs --since=1h <podname>


13. What is a binary file?

Ans:  A non-text file


14. You need to expose the single Service with an Ingress named test-ingress by specifying a default backend with no rules using the kubectl apply -f command. How can you view the state of the Ingress you just added?

Ans:  kubectl get ingress test-ingress
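A sketch of the Ingress described in the question, assuming a backend Service named test on port 80 (a default backend with no rules):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  defaultBackend:      # no rules: all traffic goes to this backend
    service:
      name: test
      port:
        number: 80
```

After applying this file with kubectl apply -f, the kubectl get ingress test-ingress command from the answer shows its state.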


15. What language is the Kubernetes end-to-end testing framework written in?

Ans:Go


16. Inside a cluster, which command can list the service in the cluster?

Ans: kubectl get service dns-backend


17. How can you take a back up snapshot using the built-in snapshot method supported by etcd?

Ans:  By using the etcdctl snapshot save command.


18. Which Kubernetes object allows decoupling of an app's configuration from a Pod's specification?

Ans: ConfigMap
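As a minimal sketch of that decoupling (names and values are illustrative), a ConfigMap holds the settings and the Pod spec only references it:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: db.example.local
  LOG_LEVEL: info
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    envFrom:
    - configMapRef:
        name: app-config   # config can change without editing the Pod spec
```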

19. What is the load balancer in Kubernetes?

Ans: LoadBalancer is one of the most common Service types for exposing an application in Kubernetes. There are two kinds of load balancer:

Internal load balancer: manages and balances incoming load within the cluster's network.

External load balancer: manages external load and directs that load to the backend pods.

20. What are the main benefits that Deployments offer that Replication Controllers do not?

Ans:  Strong update and roll-back model.

21. How to validate the cluster in Kubernetes?

Ans: kops validate cluster (for clusters created with kops; kubeadm itself has no validate command)

22. Kubeadm command to create cluster?

Ans: kubeadm init 

23. You are deploying tightly coupled containers that need to share the same volume and network. How should you deploy them?

Ans: deploy the containers in the same pod

24. Which command gets detailed information about pods?

Ans: kubectl describe pods

25. Which component of Kubernetes registers the node with the cluster and waits for API server instructions?

Ans: Kubelet 

26. What is the default service type?

Ans: ClusterIP

27. What is the default protocol in the Kubernetes service?

Ans: TCP

28. Do containers in a pod share the same IP address?

Ans: yes

29. How to deploy a pod in a particular node? 

Ans: Using node affinity or a nodeSelector, you can schedule the pod onto a particular node.
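A minimal nodeSelector sketch (the label key and value are assumptions); the node must first be labeled, e.g. with kubectl label nodes <node-name> disktype=ssd:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  nodeSelector:
    disktype: ssd     # pod is scheduled only onto nodes carrying this label
  containers:
  - name: app
    image: nginx
```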

30. What is headless service?

Ans: A headless service is used to interface with the service discovery mechanism directly, without being tied to a single cluster IP; DNS returns the individual pod IPs instead.
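A headless service is declared by setting clusterIP to None; a minimal sketch (names and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None    # headless: DNS returns the individual pod IPs
  selector:
    app: db
  ports:
  - port: 5432
```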





Nexus installation on Linux

 Installation of Sonatype Nexus on a Linux server.

Sonatype Nexus System Requirements

  1. Minimum 1 VCPU & 2 GB Memory
  2. Server firewall opened for port 22 & 8081
  3. Java (OpenJDK 8 for Nexus 3)
  4. All Nexus processes should run as a non-root nexus user.


After login to the server 

step1: 

sudo yum update -y
sudo yum install wget -y

Step2:

sudo yum install java-1.8.0-openjdk.x86_64 -y

Step3:

sudo wget -O nexus.tar.gz https://download.sonatype.com/nexus/3/latest-unix.tar.gz

Step4:

sudo tar -xvf nexus.tar.gz -C /opt
cd /opt
sudo mv nexus-3* nexus

Step5:

sudo adduser nexus

Step6:

sudo chown -R nexus:nexus /opt/nexus
sudo chown -R nexus:nexus /opt/sonatype-work

Step7:

sudo vi /opt/nexus/bin/nexus.rc
run_as_user="nexus"

Step8:

sudo ln -s /opt/nexus/bin/nexus /etc/init.d/nexus
sudo chkconfig nexus on
sudo systemctl start nexus
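On systemd-based distributions, an alternative to chkconfig is a unit file; a sketch, assuming the /opt/nexus layout used above:

```ini
# /etc/systemd/system/nexus.service
[Unit]
Description=Sonatype Nexus Repository Manager
After=network.target

[Service]
Type=forking
User=nexus
ExecStart=/opt/nexus/bin/nexus start
ExecStop=/opt/nexus/bin/nexus stop
Restart=on-abort

[Install]
WantedBy=multi-user.target
```

The service can then be enabled and started with sudo systemctl enable --now nexus.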


Step9:

Open the ports 22 and 8081

Step10:

http://ipaddress:8081




Tuesday, 8 June 2021

Full Stack Developer Interview questions

 1) What is your favorite language and why?

  Full-stack developer means someone who knows or works with multiple technologies, such as the front end, the back end, DevOps, and so on.

 The panel would like to know what is on your mind, i.e., which programming languages you are comfortable with.

The very famous and old ones are HTML and CSS.

You can add JavaScript, AngularJS, and React JS.

Java and Python are also widely used programming languages nowadays.

You can pick any of the ones you are interested in.


2) What are the Backend technologies?

Oracle

MySQL

SQL Server

Cassandra

MongoDB

DB2




 







Saturday, 1 May 2021

Azure Interview Questions

1. Why might network route time be different within your Azure application?

Answer:  Azure prioritizes some routes over others.

2. Which of the following is a common practice for designing self healing systems in Azure?

Ans:  Embrace chaos engineering, which is intentionally injecting failures and abnormal conditions into your environment

3. What kind of access does a user need to provide feedback to your Azure application?

Ans: Stakeholder

4. What is HDInsight in Azure?

Ans: HDInsight is a cloud service that makes it easy, fast, and cost-effective to process massive amounts of data using open-source frameworks such as Spark, Hadoop, Hive, Storm, and R. HDInsight supports a broad range of scenarios, including ETL, data warehousing, and machine learning.

5. Where can you find why certain alerts were grouped into smart groups?

Ans:  The Smart Group Detail Page

6. How could you get a custom script extension to run every time the VM is rebooted?

Ans: Use the extension to create a Windows scheduled task that runs on start-up

6. What types of data do Azure-monitoring tools (such as SolarWinds) monitor?

Ans: Metrics & Logs

7. Explain the term 'service fabric' in Azure?

Ans: Service Fabric is a middleware platform for building and operating scalable, distributed applications. It provides a managed, reliable foundation for enterprise workloads.

8. Why would it be useful for a developer to track feature usage?

Ans: So that the developer can determine unused features and services for cost optimization

9. How would you store video or image feedback from users within Azure?

Ans: Use Azure Blob storage to store the unstructured data.

10. What are the three main components of Azure?

Ans: Compute, Storage, AppFabric

11. Name of the two blobs in Azure?

Ans: Block Blob and Page Blob

12. Which Azure tool visualizes user navigation within an application?

Ans: User Flows

13. You need to share your Azure Dashboard with other people in your organization. In order to best maintain your dashboard, what standard operating procedures should you implement before doing so?

Ans: Use Role-Based Access control. This allows you to control who can edit the dashboard and who only gets read access.

14. What is used to visually represent work items with columns to indicate progress in the software development lifecycle?

Ans:  Kanban Board

15. Within Azure DevOps, what defines the requirements, application, and elements that the development team needs from the user's feedback?

Ans: User Stories

16. Your company has undergone multiple outages to their applications hosted in Azure in the past few weeks. Each time, the company has been notified by a customer reporting the system is not accessible. What advice would you give to the company to be more proactive in finding out when outages occur?

Ans: Use Azure Application Insights and configure alarms to notify employees with a webhook or email when the application does not pass health checks.

17. How can you provide the ability to view exceptions for Azure applications and relate them to failed requests?

Ans: Application Insights

18. What functionality is provided by alerts in Microsoft Azure?

Ans: Alerts proactively notify you when important conditions are found in your monitoring data.

19. Which service offering monitors and detects anomalies such as poor performance and failures for your applications?

Ans: Application Insights

20. Your company is using the OData Analytics service in Azure to query and gain deeper insights on application data. What action does the following query parameters perform with OData?

/WorkItems?$select=WorkItemId,WorkItemType,Title,State&$filter=State eq 'Stopped'

Ans: It returns work items that are in the state of Stopped







Saturday, 15 August 2020

Helm Charts

 

What is helm?

Helm is a package and operations manager for Kubernetes. A Helm chart will usually contain at least a Deployment and a Service, but it can also contain an Ingress, Persistent Volume Claims, or any other Kubernetes object. Helm charts are used to deploy an application, or one component of a larger application.

 

Helm can be useful in different scenarios

  • Find and use popular software packaged as Kubernetes charts
  • Share your own applications as Kubernetes charts
  • Create reproducible builds of your Kubernetes applications
  • Intelligently manage your Kubernetes object definitions
  • Manage releases of Helm packages
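As a minimal, hypothetical sketch of how a chart's values feed its templates (file names follow the standard chart layout; the specific values are assumptions):

```yaml
# values.yaml
replicaCount: 2
image:
  repository: nginx
  tag: "1.17"

# templates/deployment.yaml would then reference these values, e.g.:
#   replicas: {{ .Values.replicaCount }}
#   image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Overriding a value at install time (for example with --set replicaCount=3) changes the rendered manifests without editing the chart itself.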

 

 

 

 

Making Kubernetes Cluster

In this lesson we will do a quick review of getting a Kubernetes cluster up and running. This will be the basis for all of the future work that we do using Helm. We will also cover the installation of the Rook volume provisioner. All of this takes place on our Cloud Playground servers using the Cloud Native Kubernetes image.

During the installation you might see a warning message in the pre-flight checks indicating that the version of Docker is not validated. It is safe to ignore this warning; the version of Docker that is installed works correctly with the installed Kubernetes version.

We will be installing version 1.13.12 of Kubernetes using the following commands on all servers/nodes.

apt install -y docker.io

systemctl enable docker

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list

deb http://apt.kubernetes.io/ kubernetes-xenial main

EOF

apt-get update

apt-get install -y kubeadm=1.13\* kubectl=1.13\* kubelet=1.13\* kubernetes-cni=0.7\*

On the master node we will run the init command for the version of Kubernetes that we are installing. The following commands are run only on the master node.

kubeadm init --kubernetes-version stable-1.13 --token-ttl 0 --pod-network-cidr=10.244.0.0/16

Be sure that you run the join command (with the token) on the worker nodes. You also need to run the post-install commands on the master: make the .kube directory, copy the admin config into it, and chown it.

Then install flannel,

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Make sure that you get the correct version of rook, in this course we are using rook 0.9

git clone https://github.com/linuxacademy/content-kubernetes-helm.git ./rook

cd ./rook/cluster/examples/kubernetes/ceph

 

 

kubectl create -f operator.yaml

Once the agent, operator and discover pods are started in the rook-ceph-system namespace then setup the cluster

kubectl create -f cluster.yaml

Once this is run, wait for the appearance of the OSD pods in the namespace rook-ceph:

kubectl get pods -n rook-ceph

Create a storage class so that we can attach to it.

kubectl create -f storageclass.yaml

 

 

 

 

 

 

 

 

 

 

 

 

Installing Helm and Tiller

 

In this lesson, we will look at installing Helm using available packages. These methods include package management with Snap and installing from binaries that are precompiled.
As some commands have changed in recent versions, please ensure that you are installing the same version that is being installed in the video.

We will explore the commands:

helm init

as well as:

helm init --upgrade

 

 

 



Installing Helm and Tiller part 2

 

In this lesson we continue with the installation of Helm and Tiller as we compile the Helm binaries from source code. We will also quickly cover the setup of the golang environment required to compile Helm. Once we have the binaries available, we will install Helm and Tiller. Then we'll discuss service accounts and ensure that our installation is able to create a release.

The installation for golang can be found at :

https://golang.org/doc/install

The glide project is located at:

https://github.com/Masterminds/glide

The official helm repo is located at

https://github.com/helm/helm

Here is a command reference for this lesson:

Build command for Helm:

make bootstrap build

Kubernetes service account:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'



 

Sceptre Tool

Sceptre is a tool to drive AWS CloudFormation. It automates the mundane, repetitive and error-prone tasks, enabling you to concentrate on building better infrastructure.

Features

  • Code reuse by separating a Stack's template and its configuration
  • Support for templates written in JSON, YAML, Jinja2 or Python DSLs such as Troposphere
  • Dependency resolution by passing of Stack outputs to parameters of dependent Stacks
  • Stack Group support by bundling related Stacks into logical groups (e.g. dev and prod)
  • Stack Group-level commands, such as creating multiple Stacks with a single command
  • Fast, highly parallelised builds
  • Built in support for working with Stacks in multiple AWS accounts and regions
  • Infrastructure visibility with meta-operations such as Stack querying protection
  • Support for inserting dynamic values in templates via customisable Resolvers
  • Support for running arbitrary code as Hooks before/after Stack builds

Benefits

  • Utilises cloud-native Infrastructure as Code engines (CloudFormation)
  • You do not need to manage state
  • Simple templates using popular templating syntax - Yaml & Jinja
  • Powerful flexibility using a mature programming language - Python
  • Easy to integrate as part of a CI/CD pipeline by using Hooks
  • Simple CLI and API
  • Unopinionated - Sceptre does not force a specific project structure

Install

Using pip

$ pip install sceptre

More information on installing sceptre can be found in our Installation Guide

Example

Sceptre organises Stacks into "Stack Groups". Each Stack is represented by a YAML configuration file stored in a directory which represents the Stack Group. Here, we have two Stacks, vpc and subnets, in a Stack Group named dev:

$ tree

.

├── config

│   └── dev

│        ├── config.yaml

│        ├── subnets.yaml

│        └── vpc.yaml

└── templates

    ├── subnets.py

    └── vpc.py

We can create a Stack with the create command. This vpc Stack contains a VPC.

$ sceptre create dev/vpc.yaml

dev/vpc - Creating stack dev/vpc
VirtualPrivateCloud AWS::EC2::VPC CREATE_IN_PROGRESS
dev/vpc VirtualPrivateCloud AWS::EC2::VPC CREATE_COMPLETE
dev/vpc sceptre-demo-dev-vpc AWS::CloudFormation::Stack CREATE_COMPLETE

The subnets Stack contains a subnet which must be created in the VPC. To do this, we need to pass the VPC ID, which is exposed as a Stack output of the vpc Stack, to a parameter of the subnets Stack. Sceptre automatically resolves this dependency for us.
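The dependency is declared in the subnets Stack's config using Sceptre's stack_output resolver; the VpcId parameter and output names below are illustrative:

```yaml
# config/dev/subnets.yaml (illustrative)
template_path: templates/subnets.py
parameters:
  # Resolved at create time from the VpcId output of the vpc Stack
  VpcId: !stack_output vpc.yaml::VpcId
```

Because subnets references an output of vpc, Sceptre knows vpc must exist first and will create it automatically if needed.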

$ sceptre create dev/subnets.yaml

dev/subnets - Creating stack
dev/subnets Subnet AWS::EC2::Subnet CREATE_IN_PROGRESS
dev/subnets Subnet AWS::EC2::Subnet CREATE_COMPLETE
dev/subnets sceptre-demo-dev-subnets AWS::CloudFormation::Stack CREATE_COMPLETE

Sceptre implements meta-operations, which allow us to find out information about our Stacks:

$ sceptre list resources dev/subnets.yaml

- LogicalResourceId: Subnet
  PhysicalResourceId: subnet-445e6e32
dev/vpc:
- LogicalResourceId: VirtualPrivateCloud
  PhysicalResourceId: vpc-c4715da0

Sceptre provides Stack Group level commands. This one deletes the whole dev Stack Group. The subnet exists within the vpc, so it must be deleted first. Sceptre handles this automatically:

$ sceptre delete dev

dev/subnets - Deleting stack
dev/subnets Subnet AWS::EC2::Subnet DELETE_IN_PROGRESS
dev/subnets - Stack deleted
dev/vpc - Deleting stack
dev/vpc VirtualPrivateCloud AWS::EC2::VPC DELETE_IN_PROGRESS
dev/vpc - Stack deleted

Note: Deleting Stacks will only delete a given Stack, or the Stacks that are directly in a given StackGroup. By default Stack dependencies that are external to the StackGroup are not deleted.
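The ordering Sceptre applies can be pictured as a topological sort of the Stack dependency graph: Stacks are created in dependency order and deleted in reverse. A minimal sketch of that idea, using the Stacks from the example above (this illustrates the concept, not Sceptre's actual implementation):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical dependency map for the dev Stack Group:
# each Stack maps to the Stacks it depends on.
dependencies = {
    "dev/vpc": [],
    "dev/subnets": ["dev/vpc"],  # subnets needs the VPC's output
}

# Creation order: dependencies come first.
create_order = list(TopologicalSorter(dependencies).static_order())

# Deletion order is the reverse: dependents are removed before
# the Stacks they depend on.
delete_order = list(reversed(create_order))

print(create_order)  # ['dev/vpc', 'dev/subnets']
print(delete_order)  # ['dev/subnets', 'dev/vpc']
```

This is why, in the delete output above, dev/subnets is removed before dev/vpc without any manual ordering.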

Sceptre can also handle cross Stack Group dependencies, take the following example project:

$ tree
.
├── config
│   ├── dev
│   │   ├── network
│   │   │   └── vpc.yaml
│   │   ├── users
│   │   │   └── iam.yaml
│   │   ├── compute
│   │   │   └── ec2.yaml
│   │   └── config.yaml
│   └── staging
│       └── eu
│           ├── config.yaml
│           └── stack.yaml
├── hooks
│   └── stack.py
├── templates
│   ├── network.json
│   ├── iam.json
│   ├── ec2.json
│   └── stack.json
└── vars
    ├── dev.yaml
    └── staging.yaml

In this project, staging/eu/stack.yaml has a dependency on an output of dev/users/iam.yaml. If you create the Stack staging/eu/stack.yaml, Sceptre will resolve all of its dependencies, including dev/users/iam.yaml, before attempting to create the Stack.

Usage

Sceptre can be used from the CLI, or imported as a Python package.

CLI

Usage: sceptre [OPTIONS] COMMAND [ARGS]...

  Sceptre is a tool to manage your cloud native infrastructure deployments.

Options:
  --version              Show the version and exit.
  --debug                Turn on debug logging.
  --dir TEXT             Specify sceptre directory.
  --output [yaml|json]   The formatting style for command output.
  --no-colour            Turn off output colouring.
  --var TEXT             A variable to template into config files.
  --var-file FILENAME    A YAML file of variables to template into config
                         files.
  --ignore-dependencies  Ignore dependencies when executing command.
  --help                 Show this message and exit.

Commands:
  create         Creates a stack or a change set.
  delete         Deletes a stack or a change set.
  describe       Commands for describing attributes of stacks.
  estimate-cost  Estimates the cost of the template.
  execute        Executes a Change Set.
  generate       Prints the template.
  launch         Launch a Stack or StackGroup.
  list           Commands for listing attributes of stacks.
  new            Commands for initialising Sceptre projects.
  set-policy     Sets Stack policy.
  status         Print status of stack or stack_group.
  update         Update a stack.
  validate       Validates the template.

Python

Using Sceptre as a Python module is straightforward. First, create a SceptreContext, which tells Sceptre where your project path is and which path you want to execute on; we call this the "command path".

Then pass the SceptreContext into a SceptrePlan. On instantiation, the SceptrePlan handles all the steps required to resolve the action you wish to take on the command path.

Once you have a SceptrePlan, you can access all the actions you can take on a Stack, such as validate(), launch(), list() and delete().

from sceptre.context import SceptreContext
from sceptre.plan.plan import SceptrePlan

context = SceptreContext("/path/to/project", "command_path")
plan = SceptrePlan(context)
plan.launch()