Sunday, 12 November 2017

Creating infra with Terraform and vSphere

Terraform can be used to create infrastructure with many providers; let us walk through one example with vSphere.

We will export the path for the Terraform log file as shown below. Note that the TF_LOG environment variable must also be set (for example, export TF_LOG=DEBUG) for logging to be enabled.

Step 1) export TF_LOG_PATH=./terraform.log

Step 2) Create a .tf file with the following contents.

# Configure the VMware vSphere Provider
provider "vsphere" {
  user                 = "${var.vsphere_user}"
  password             = "${var.vsphere_password}"
  vsphere_server       = "${var.vsphere_server}"
  allow_unverified_ssl = "true"
}

# Create a folder
resource "vsphere_folder" "backend" {
  path       = "VirtualMachines"
  datacenter = "datacenter1"
}

# Create a virtual machine
resource "vsphere_virtual_machine" "My_Vm" {
  name          = "My_Vm"
  vcpu          = 1
  memory        = 1024
  datacenter    = "datacenter1"
  resource_pool = "mypool"
  folder        = "${vsphere_folder.backend.path}"

  network_interface {
    label = "VM Network"
  }

  disk {
    datastore = "datastore1"
    vmdk      = "/test2/test2"
  }
}
The first section defines the provider with credentials; based on your understanding you can split this into separate files, but for now we write only one file.
The second section creates a folder "VirtualMachines" under "datacenter1".
The third section actually creates a VM. resource_pool defines where resources like CPU and memory are assigned from; network_interface refers to an existing network.
The fourth section defines the vmdk or template path (use any one of the two).

Step 3) Create a file to define the variables:
variable "vsphere_user" {}
variable "vsphere_password" {}
variable "vsphere_server" {}

Here we declare the variables that are used in the configuration file.

Step 4) Create a terraform.tfvars file where you place values for the variables:
vsphere_user = "root"
vsphere_password = "mypswd"
vsphere_server = "IpAddress"
In this file, provide the values for the declared variables. terraform.tfvars is the default name for the variables file; if you use any other file name, you need to pass it with -var-file when running Terraform.

Step 5) Run the following from the directory where all these files are placed:

$ terraform plan
$ terraform apply
$ terraform show

Sunday, 16 July 2017

AWS SAA Certification Questions and Answers

1) What is Amazon Glacier designed for?

    Ans: Amazon Glacier is designed for infrequently accessed data; it may take 3 to 5 hours to retrieve data from Glacier.

It pairs naturally with S3 lifecycle rules: infrequently accessed data can be archived to Glacier and deleted after some point of time.

2) Which of the following correctly applies to changing the DB subnet group of your DB instance?
An existing DB subnet group can be updated to add more subnets for existing Availability Zones.
Removing subnets from an existing subnet group can cause unavailability.
Updating an existing DB subnet group of a deployed DB instance is not currently allowed.
Explicitly changing the DB subnet group of a deployed DB instance is not currently allowed.

3) Which services can you use if you want to build your own payment application in AWS?
Amazon DevPay
Amazon FPS (Flexible Payments Service)

4) Which of the following should be referred to if you want to map an Amazon Elastic Block Store volume to an Amazon EC2 instance in AWS CloudFormation resources?
Reference the logical IDs of both the block store and the instance.

5) After creating an instance in AWS, you get the error message "Network error: connection timeout" or "Error connecting to [instance], reason: connection timed out". What should you check?
Ans: Verify that the private key file corresponds to the Amazon EC2 key pair assigned at launch.
Verify that you are entering the correct username.

6) What can Auto Scaling not do?
Auto Scaling cannot increase the instance size of an EC2 instance.
Auto Scaling scales horizontally.
Auto Scaling cannot do anything with respect to RDS instances.

All about S3?
Ans: S3 stands for Simple Storage Service.

S3 is a cloud file storage service, more akin to Dropbox or Google Drive.
S3 is very useful for storing static data, i.e., anything you treat as a file.
S3 is very useful when working on a full-stack application with multiple layers: if you want to move data from one layer to another, it can hold the intermediate state and save time.
The S3 namespace is global; bucket names are shared across all regions.
S3 and Glacier work hand in hand, because the latter is an extension of S3 for archival.
We can even lock data in Amazon Glacier with vault lock policies to protect it from deletion.
Lifecycle rules can be triggered after some point of time or expiry to move data or archives to Glacier.
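The lifecycle behaviour described above can be sketched as the rule document you would hand to S3. A minimal sketch in Python; the prefix and day counts are assumptions, and with boto3 this dict would be passed to the put_bucket_lifecycle_configuration call:

```python
# Sketch of an S3 lifecycle rule: archive objects under "logs/" to Glacier
# after 30 days, then delete them after 365 days. Prefix and day counts are
# illustrative; with boto3 the dict goes to
# s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=lifecycle).
lifecycle = {
    "Rules": [
        {
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }
    ]
}

# Sanity check: the Glacier transition must happen before the expiration.
rule = lifecycle["Rules"][0]
assert rule["Transitions"][0]["Days"] < rule["Expiration"]["Days"]
print(rule["Transitions"][0]["StorageClass"])  # GLACIER
```

The same rule document can equally be attached in the console or via the AWS CLI; the structure is what matters.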

7) How can an Amazon S3 bucket prevent and recover from accidental data loss?
Ans: Object versioning and multi-factor authentication (MFA Delete).

8) All about EBS?
An EBS volume lives in a single Availability Zone and cannot be attached across regions.
A placement group is a logical grouping of EC2 instances, not of EBS volumes.

9) What is CloudFormation?

CloudFormation is a service that lets you describe AWS resources in a template and provision and manage them together as a stack.

10) What are the different types of block store in AWS?
Instance store (ephemeral block storage)
Elastic Block Store (EBS)

11) Amazon offers several different options for databases:
RDS: Relational Database Service
DynamoDB: Amazon's NoSQL database
Redshift: Amazon's relational database for data warehouse solutions

12) Subnets are not distributed across Availability Zones; a subnet is limited to one Availability Zone.

Subnet and Availability Zone have a one-to-one relationship.

13) A customer has established an AWS Direct Connect connection to AWS. The link is up and routes
are being advertised from the customer's end, however the customer is unable to connect from
EC2 instances inside its VPC to servers residing in its datacenter.
Which of the following options provide a viable solution to remedy this situation?

 Add a route to the route table with an IPsec VPN connection as the target.
 Enable route propagation to the virtual private gateway (VGW).

14) What does the following command do with respect to Amazon EC2 security groups?
ec2-create-group CreateSecurityGroup

It creates a new security group named CreateSecurityGroup for use with your account.

15) If you want to launch Amazon Elastic Compute Cloud (EC2) instances and assign each instance a
predetermined private IP address, you should:

Launch the instances in a Virtual Private Cloud (VPC).

16) Why do you make subnets?
A network with a large number of hosts is very difficult and tedious to manage,
so we divide the network into subnets and manage the hosts in smaller groups.

There is one-to-one connectivity between a subnet and an Availability Zone.
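The subnet division described above can be made concrete with Python's standard ipaddress module; the /16 block here is an illustrative VPC-sized range, not from a real network:

```python
import ipaddress

# Carve a VPC-sized /16 block into /24 subnets, e.g. one per Availability
# Zone. The CIDR range is illustrative.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

print(len(subnets))    # 256 possible /24 subnets
print(subnets[0])      # 10.0.0.0/24
print(subnets[1])      # 10.0.1.0/24

# Each /24 gives 254 usable host addresses in classic IP networking
# (256 minus the network and broadcast addresses).
print(subnets[0].num_addresses - 2)  # 254
```

In AWS you would assign each of these smaller CIDR blocks to a subnet in a different Availability Zone, keeping the one-subnet-per-AZ relationship.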

17) What is CloudTrail?
AWS CloudTrail is designed to track and monitor API calls.
CloudTrail is enabled at the region level.
CloudTrail logs can be delivered to a single Amazon S3 bucket for aggregation.
CloudTrail and region: a one-to-one relationship.

18) Are keys associated with regions?
Yes, there is a one-to-one association between a KMS master key and a region.

19) What is MFA in AWS?
MFA stands for Multi-Factor Authentication, a very useful extra layer of security on top of your username and password:
you also enter an authentication code from an MFA device.
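AWS virtual MFA devices generate that authentication code with time-based one-time passwords (TOTP, RFC 6238). A minimal sketch using only the standard library; the secret below is the RFC test value, not a real MFA seed:

```python
import hashlib, hmac, struct

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    counter = timestamp // step                    # 30-second time window
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret (ASCII "12345678901234567890"); at T=59 the reference
# 8-digit value is 94287082, so the 6-digit code is its last six digits.
secret = b"12345678901234567890"
print(totp(secret, 59))  # 287082
```

A real device would call totp(secret, int(time.time())), so the code changes every 30 seconds; AWS checks it server-side against the same shared secret.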

20) When should I prefer Provisioned IOPS over standard RDS storage?
When you are running batch-oriented jobs that need consistent I/O, prefer Provisioned IOPS.

21) Which option meets the requirements for capturing and analyzing this data?
Use Amazon Kinesis Streams to collect and process large streams of data records in real time.

22) What happens when you create a topic on SNS?
When you create an SNS topic, an ARN (Amazon Resource Name) is created for it.
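The ARN handed back for a topic follows the general Amazon Resource Name layout arn:partition:service:region:account-id:resource. A quick illustration; the account ID and topic name are made up:

```python
# Split an example SNS topic ARN into its components.
# The account ID and topic name are fabricated for illustration.
arn = "arn:aws:sns:us-east-1:123456789012:my-topic"
prefix, partition, service, region, account, resource = arn.split(":", 5)

print(service)   # sns
print(region)    # us-east-1
print(resource)  # my-topic
```

The same layout applies to ARNs across AWS services, which is why they can be used uniformly in IAM policies.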

23) What happens to I/O operations while you take a database snapshot?
I/O operations to the database are suspended for a few minutes while the backup is in progress.

24) All about EBS volumes?
EBS volumes persist independently of the running life of an EC2 instance.
An EBS volume can be attached to more than one EC2 instance over its lifetime (but remember, not at the same time).
To view information about an Amazon EBS volume, open the EC2 console and go to Volumes.

25) Response time for premium support?
Ans: 1 hour

26) What is Oracle SQL Developer?
Ans: A graphical Java tool distributed free of cost by Oracle.

27) Where does SQL Server store logins and passwords?
Ans: SQL Server stores logins and passwords in the master database.

28) What can DynamoDB store?
Web session data
Large volumes of data

29) All about security groups?
You can change the rules of a security group on a running instance.
You can assign multiple security groups to an instance (the default limit is 5 per network interface).
You can delete rules from an existing security group.

30) All about Auto Scaling?
It can add instances when CPU utilization is above a threshold.
It can remove instances when CPU utilization is below a threshold.
It can maintain a fixed number of running instances.

31) To improve the performance of a t2.small, what steps can you take?
Increase the instance size; add an array of EBS volumes.

32) A company is looking to release all unused Elastic IP addresses that incur charges. When does an Elastic IP incur charges?
When it is allocated but not associated with a running instance.

33) EBS-backed-only instance types include C4, M4, and T2.

34) Which route must be added to the route table of the subnet your instances are in?
A route with the internet gateway as the target.

35) How are requests to an SQS queue authenticated?
Access Key ID and request signature
X.509 certificate

36) AWS Direct Connect?
Allows you to establish a dedicated network connection from your data center to AWS.

37) CloudFront expiration time?
The expiration setting controls how often CloudFront checks for an updated version of a file.
By default, each object expires 24 hours after arriving at an edge location.

38) How do you distribute content to end users?
CloudFront

Saturday, 6 May 2017

DevOps trainings and Job Support

DevOps is a very popular buzzword in the current IT industry, and it offers a lot of benefits to the entire delivery process of an organization.

DevOps is a cutting-edge practice.
DevOps changes the way we look at IT organization infrastructure.

We Offer DevOps trainings and Job Support.

Please contact us for trainings and job support. Email: and contact no: 7780199676

Thursday, 13 April 2017


For Django

yum_repository 'epel' do
  action :create
end

######  Global Install django through pip
package "python-pip" do
  action :install
end

execute "install django through pip" do
  command "pip install django"
end

######  Set up directories to create a virtual environment
execute "Start project and create virtual environment" do
  command "mkdir /root/project; cd /root/project"
end

For Memcached cookbook:

package "memcached" do
  action :upgrade
end

##### 2: Make sure memcached is enabled and running
service "memcached" do
  action [:enable, :start]
end


yum_repository 'epel' do
  action :create
end

execute "update package" do
  command "yum update -y"
  ignore_failure true
  action :nothing
end

###### Set up Postgresql with psycopg2 and python django required packages #######
%w{postgresql postgresql-server postgresql-contrib python-django python-pip python-psycopg2}.each do |pkg|
  package pkg do
    action :install
  end
end

######  Make sure the postgresql service is working and running fine
execute "Start Postgres daemon and enable it to start on boot" do
  command "service postgresql initdb; service postgresql start; chkconfig postgresql on"
end

Wednesday, 15 June 2016

Docker for sample applications

Sample Dockerfile:

FROM ubuntu:14.04

USER root
RUN apt-get update && apt-get install -y \
    python3 \
    curl && \
    rm -rf /var/lib/apt/lists/*
RUN groupadd -r nonroot && \
    useradd -r -g nonroot -d /home/nonroot -s /sbin/nologin -c "Nonroot User" nonroot && \
    mkdir /home/nonroot && \
    chown -R nonroot:nonroot /home/nonroot

USER nonroot
WORKDIR /home/nonroot/
RUN curl -o index.html

CMD ["python3", "-m", "http.server", "3000"]

sudo docker build -t docker-example .
sudo docker run -p 3000:3000 -i -t docker-example

yum install -y gcc-c++ make
curl -sL | sudo -E bash -
yum install -y nodejs
npm install -g http-server


FROM centos:centos6
#Install WGET
RUN yum install -y wget
#Install tar
RUN yum install -y tar
# Install JDK
RUN yum -y install java-1.7.0-openjdk-devel
#gunzip JDK
#RUN cd /opt;gunzip jdk-7u67-linux-x64.tar.gz
#RUN cd /opt;tar xvf jdk-7u67-linux-x64.tar
#RUN alternatives --install /usr/bin/java java /opt/jdk1.7.0_67/bin/java 2
# Download Apache Tomcat 7
RUN cd /tmp;wget
# untar and move to proper location
RUN cd /tmp;gunzip apache-tomcat-8.0.36.tar.gz
RUN cd /tmp;tar xvf apache-tomcat-8.0.36.tar
RUN cd /tmp;mv apache-tomcat-8.0.36 /opt/tomcat8
RUN chmod -R 755 /opt/tomcat8
ENV JAVA_HOME /usr/lib/jvm/java-1.7.0-openjdk-
CMD /opt/tomcat8/bin/ run

To run the container:
docker run -it --rm -p 8080:8080 tel/tom1