Tuesday, December 18, 2018

DevOps Interview Question Set-1

DevOps Interview Question

1) Which service should we use to connect a private instance in account A to a private instance in account B?

Options 
1) VPN 
2) VPC Peering
3) Internet gateway
4) All of Above


Tuesday, December 11, 2018

How to check a Python package version using pip

$ pip show <package>
===========



$ pip show boto3
Name: boto3
Version: 1.9.55
Summary: The AWS SDK for Python
Home-page: https://github.com/boto/boto3
Author: Amazon Web Services
Author-email: UNKNOWN
License: Apache License 2.0
Location: /Library/Python/2.7/site-packages
Requires: botocore, s3transfer, jmespath

Required-by: 
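The fields printed by pip show can also be read programmatically. Here is a minimal sketch that parses output in the format above into a dict (an illustration, not part of pip's API):

```python
def parse_pip_show(output):
    """Parse `pip show` output into a dict of field name -> value."""
    info = {}
    for line in output.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")  # split at the first colon only
            info[key] = value.strip()
    return info

sample = "Name: boto3\nVersion: 1.9.55\nSummary: The AWS SDK for Python"
print(parse_pip_show(sample)["Version"])  # → 1.9.55
```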

Tuesday, October 23, 2018

Installing and Configuring SSM Agent

AWS Systems Manager Agent (SSM Agent) is Amazon software that runs on your Amazon EC2 instances and your hybrid instances that are configured for Systems Manager (hybrid instances). SSM Agent processes requests from the Systems Manager service in the cloud and configures your machine as specified in the request. SSM Agent sends status and execution information back to the Systems Manager service by using the EC2 Messaging service. If you monitor traffic, you will see your instances communicating with ec2messages.* endpoints. For more information, see Reference: ec2messages, ssmmessages, and Other API Calls.
Starting with version 2.3.50.0 of SSM Agent, the agent creates a local user account called ssm-user and adds it to /etc/sudoers (Linux) or to the Administrators group (Windows) every time the agent starts. This ssm-user is the default OS user when a Session Manager session is started, and the password for this user is reset on every session. You can change the permissions by moving ssm-user to a less-privileged group or by changing the sudoers file. The ssm-user account is not removed from the system when SSM Agent is uninstalled.
SSM Agent is installed, by default, on the following Amazon EC2 Amazon Machine Images (AMIs):
  • Windows Server (all SKUs)
  • Amazon Linux
  • Amazon Linux 2
  • Ubuntu Server 16.04
  • Ubuntu Server 18.04
You must manually install SSM Agent on Amazon EC2 instances created from other Linux AMIs. You must also manually install SSM Agent on servers or virtual machines in your on-premises environment. For more information, see Setting Up AWS Systems Manager in Hybrid Environments.

REFERENCE:- https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html

Tuesday, October 16, 2018

What is DevSecOps?


DevSecOps is the philosophy of integrating security practices within the DevOps process. DevSecOps involves creating a ‘Security as Code’ culture with ongoing, flexible collaboration between release DevOps engineers and security teams. The DevSecOps movement, like DevOps itself, is focused on creating new solutions for application development processes within an agile framework.
In the collaborative framework of DevSecOps, security is a shared responsibility integrated from end to end. The mindset is so important that it gets its own name, "DevSecOps", to emphasize the need to build a secure foundation into DevOps initiatives.
DevOps isn’t just about development and operations teams. If you want to take full advantage of the agility and responsiveness of a DevOps approach, IT security must also play an integrated role in the full life cycle of your apps.

Why is DevSecOps important?
IT infrastructure has undergone huge changes in recent years. The shift to dynamic provisioning, shared resources, and cloud computing has driven benefits around IT speed, agility, and cost, and all of this has helped to improve application development.
“DevOps has become second nature for agile, high-performing enterprises and a foundation for the success of their online business,” says Pascal Geenens, a security evangelist and researcher at Radware.
“However, application security was often an afterthought, and at times perceived as a roadblock to staying ahead of the competition,” says Geenens. “Given the reliance on applications to keep operations running, bypassing security must be considered a high-risk strategy -- a distributed or permanent denial-of-service attack could easily catch you out.”

          KEEP READING THIS SPACE FOR LATEST DEVSECOPS UPDATE

Monday, October 8, 2018

Application Load Balancer

Your Amazon ECS service can optionally be configured to use Elastic Load Balancing to distribute traffic evenly across the tasks in your service.
Elastic Load Balancing supports the following types of load balancers: Application Load Balancers, Network Load Balancers, and Classic Load Balancers, and Amazon ECS services can use any of these types. Application Load Balancers are used to route HTTP/HTTPS (Layer 7) traffic. Network Load Balancers and Classic Load Balancers are used to route TCP (Layer 4) traffic. For more information, see Load Balancer Types.
Application Load Balancers offer several features that make them attractive for use with Amazon ECS services:


  • Application Load Balancers allow containers to use dynamic host port mapping (so that multiple tasks from the same service are allowed per container instance).
  • Application Load Balancers support path-based routing and priority rules (so that multiple services can use the same listener port on a single Application Load Balancer).
We recommend that you use Application Load Balancers for your Amazon ECS services so that you can take advantage of these latest features, unless your service requires a feature that is only available with Network Load Balancers or Classic Load Balancers. For more information about Elastic Load Balancing and the differences between the load balancer types, see the Elastic Load Balancing User Guide.


Tuesday, September 11, 2018

What is AWS CloudWatch?




Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services, and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. You can use these insights to react and keep your application running smoothly.


Monday, August 27, 2018

AWS Architect Interview Questions [Part-2]




Q1. What are the various AMI design options?
Ans. Fully Baked AMI, JeOS (just enough operating system) AMI, and Hybrid AMI.
Q2. What is Geo Restriction in CloudFront?
Ans. Geo-restriction, also known as geoblocking, is used to prevent users in specific geographic locations from accessing content that you’re distributing through a CloudFront web distribution.
Q3. What are T2 instances?
Ans. T2 instances are designed to provide a moderate baseline performance and the capability to burst to higher performance as required by the workload.
Q4. What is AWS Lambda?
Ans. AWS Lambda is a compute service that lets you run code in the AWS Cloud without provisioning or managing servers.
Q5. What is a Serverless application in AWS?
Ans. The AWS Serverless Application Model (AWS SAM) extends AWS CloudFormation to provide a simplified way of defining the Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables needed by your serverless application.
Q6. What is the use of Amazon ElastiCache?
Ans. Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud.
Q7. How is a buffer used in Amazon Web Services?
Ans. A buffer makes the system more robust against traffic or load spikes by synchronizing different components: a fast producer can enqueue work that slower consumers process at their own pace.
Q8. Differentiate between stopping and terminating an instance
Ans. When an instance is stopped, the instance performs a normal shutdown and then transitions to a stopped state.
When an instance is terminated, the instance performs a normal shutdown, then the attached Amazon EBS volumes are deleted unless the volume’s deleteOnTermination attribute is set to false.
Q9. Is it possible to change the private IP addresses of an EC2 while it is running/stopped in a VPC?
Ans. The primary private IP address cannot be changed. Secondary private addresses can be unassigned, assigned or moved between interfaces or instances at any point.
Q10. Give one instance where you would prefer Provisioned IOPS over Standard RDS storage?
Ans. When you have batch-oriented workloads.
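The buffering idea from Q7 above can be sketched with Python's standard library: a bounded queue (standing in for a service like Amazon SQS) decouples a bursty producer from a slower consumer. This is an illustrative sketch, not AWS code:

```python
import queue
import threading

# A bounded buffer absorbs a burst of work so the consumer is not overwhelmed.
buf = queue.Queue(maxsize=100)
processed = []

def producer():
    for i in range(10):      # a burst of incoming requests
        buf.put(i)
    buf.put(None)            # sentinel: no more work

def consumer():
    while True:
        item = buf.get()
        if item is None:
            break
        processed.append(item)   # simulate handling the request

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(processed)  # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

In AWS the same role is typically played by SQS sitting between components, so each side can scale independently.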








AWS Architect Interview Questions [Part-1]




Q1. What is auto-scaling?
Ans. Auto-scaling is an AWS feature that allows you to configure thresholds and automatically provision and spin up new instances without the need for your intervention.
Q2. What are the different types of cloud services?
Ans. Software as a Service (SaaS), Data as a Service (DaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS).
Q3. What is Amazon S3?
Ans. Amazon S3 (Simple Storage Service) is an object storage with a simple web service interface to store and retrieve any amount of data from anywhere on the web.
Q4. What is SimpleDB?
Ans. It is a structured data store that supports indexing and data queries to both EC2 and S3.
Q5. What is an AMI?
Ans. AMI (Amazon Machine Image) is a snapshot of the root filesystem.
Q6. What type of architecture runs half of the workload on the public cloud while the other half runs on local, on-premises storage?
Ans. Hybrid cloud architecture.
Q7. Can I vertically scale an Amazon instance? How do you do it?
Ans. Yes. Spin up a new, larger instance than the one you are running. Next, stop the live instance and detach its root EBS volume, noting its unique device ID. Detach and discard the new instance's root volume, attach the old root volume to the new server in its place, and start it again. This way you will have scaled vertically.
Q8. How can you send a request to Amazon S3?
Ans. You can send a request by using the REST API or the AWS SDK wrapper libraries that wrap the underlying Amazon S3 REST API.
Q9. How many buckets can be created in AWS by default?
Ans. By default, 100 buckets can be created.
Q10. Should encryption be used for S3?
Ans. Encryption should be considered for sensitive data as S3 is a proprietary technology.

Friday, August 24, 2018

What are the prerequisites to learn AWS Cloud Computing?





The cloud has been known to be a mystery. But, we have moved ahead in time and we have managed to reveal the mysterious cloud and make it a part and parcel of our daily lives – whether an individual or a business. Cloud has emerged from being an enigma to the soul of the IT industry.

So, What are the Prerequisites to learn Cloud Computing?

This is the question that bothers most of the IT professionals who want to delve into the world of cloud. There are also several myths surrounding the requirements to become a cloud computing professional. In this article, while taking you through the skills required to learn cloud computing, we will also attempt to bust the false assumptions about the requirements.
The term Cloud Computing is an umbrella term that encompasses many different concepts of Information Technology. Today it broadly covers the areas of IT that involve software infrastructures, hardware infrastructures, virtualization technologies, data center facilities, and software engineering concepts.

Knowledge of Operating Systems

As Cloud Computing is a broad area, it is essential to know the basic concepts related to Operating Systems, like Windows, Linux, etc. (e.g. how they work and operate at a high level).
Learning to use the Linux operating system is essential, as most organizations that work with web applications and scalable environments use Linux as their preferred operating system. Linux is also the main choice on Infrastructure-as-a-Service (IaaS) platforms such as AWS. The best way to learn Linux is to start using it and to go through the documentation and basic courses online.

Knowledge of Virtualization

Once you acquire a working knowledge of operating systems, the next thing to learn is Virtualization Technology. Virtualization plays a huge role in cloud computing.
Virtualization is a technique to host and run multiple operating systems (virtual machines) within a single physical machine. Each virtual machine has specific CPU, RAM, and disk space capacities and runs its own operating system.
Virtual machines share the same hardware and the same network equipment. They are just virtually separated from one another.

Knowledge of Networking

Networking is an essential element of AWS cloud computing as all operations in a cloud platform involve networking. To learn cloud computing, you should at least have the understanding of how IP addresses work and comprehend what public and private networks are.
Each cloud instance needs to be connected to the Internet. Mastering the concepts of networking can be a difficult task as it requires you to learn certain key skills that demand time to understand.
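As a small example of the public/private network distinction mentioned above, Python's standard ipaddress module can tell you whether an address falls in a private (RFC 1918) range:

```python
import ipaddress

def classify(addr):
    """Return 'private' for RFC 1918 / reserved addresses, else 'public'."""
    ip = ipaddress.ip_address(addr)
    return "private" if ip.is_private else "public"

print(classify("10.0.1.25"))     # → private (a typical VPC subnet address)
print(classify("54.239.28.85"))  # → public
```

Instances inside a VPC get private addresses like the first one; reaching the Internet requires a public address (or a NAT/Internet gateway).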

Understanding of the Difference Between Public and Private Cloud Computing

To become an AWS cloud computing professional, it is essential to understand the difference between Public Cloud Computing and Private Cloud Computing.
Public Cloud:  A publicly accessible cloud infrastructure that allows you to store data, virtual machines, and other cloud resources. Public clouds can be used with a pay per use approach. It is like renting an infrastructure for a specific period of time.
Private Cloud: It is similar to the public cloud in terms of services like flexibility and scalability and self-service, however, it is dedicated to a single enterprise and cannot be accessed publicly. In other words, it refers to an organization’s own private data centre that has all the advantages of Cloud Computing but everything is housed within the company’s own infrastructure which is managed privately.

Coding skills (Good To Have)

Although it is not a prerequisite, it is good to have knowledge of coding as building applications for the cloud and deploying them into the AWS cloud requires programming knowledge.
However, it is not mandatory to have coding skills as most of the cloud computing platforms like Amazon Web Services, contain API sets to automate all the operations and orchestrate all the resources with the organization software. Moreover, cloud computing has several facets spanning across different roles and can be grasped by non-programmers as well.
Now that you know the basic requirements to learn cloud computing, let's clear up the false assumptions.
It all depends on which particular service you want to learn or how you want to use AWS.
Let's assume you are talking about AWS as a whole. If you will be learning or using AWS, it is probably for one of two reasons:
  • You would want to use it to architect an AWS model for your company’s application meaning taking your application to the AWS Infrastructure.
  • You want to learn because you want to make a career shift to AWS and maybe want to become an AWS Solution Architect, or AWS Certified Developer or some other AWS Certified something. You can learn about all the AWS Certifications in this blog.
Well for either, you have to first know about all the AWS Services, you can get started with this AWS Tutorial.
Now there are no prerequisites for learning AWS, you can get started from scratch.
Also, let me clear up one thing: you don't need a coding background to learn AWS. A lot of people have this misconception, so this is for you!

Installing Jenkins on Ubuntu


wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install jenkins

Monday, February 26, 2018

What is AWS Lambda ?




AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running.

With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.




WHY LAMBDA?

NO SERVERS TO MANAGE

AWS Lambda automatically runs your code without requiring you to provision or manage servers. Just write the code and upload it to Lambda.



CONTINUOUS SCALING
AWS Lambda automatically scales your application by running code in response to each trigger. Your code runs in parallel and processes each trigger individually, scaling precisely with the size of the workload.



SUBSECOND METERING
With AWS Lambda, you are charged based on the number of times your code is triggered and on its execution duration, metered in 100 ms increments. You don't pay anything when your code isn't running.
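To see how 100 ms metering works out, here is a back-of-the-envelope cost calculation. The per-GB-second and per-request rates below are illustrative assumptions, not official figures; check the AWS Lambda pricing page for current rates:

```python
import math

# Hypothetical example rates -- see the AWS Lambda pricing page for real ones.
PRICE_PER_GB_SECOND = 0.00001667
PRICE_PER_REQUEST = 0.0000002

def lambda_cost(invocations, duration_ms, memory_mb):
    """Estimate Lambda compute + request cost (ignores the free tier)."""
    billed_ms = math.ceil(duration_ms / 100) * 100       # rounded up to 100 ms
    gb_seconds = invocations * (billed_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST

# 1M invocations, 120 ms each (billed as 200 ms), 128 MB of memory
print(round(lambda_cost(1_000_000, 120, 128), 2))
```

Note how a 120 ms run is billed as 200 ms: the rounding to 100 ms increments is what "subsecond metering" refers to.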


Lambda Function to Start an EC2 Instance

import boto3

# Enter the region your instances are in. Include only the region without specifying Availability Zone; e.g. 'us-east-1'
region = 'XX-XXXXX-X'
# Enter your instances here: ex. ['X-XXXXXXXX', 'X-XXXXXXXX']
instances = ['X-XXXXXXXX']

def lambda_handler(event, context):
    ec2 = boto3.client('ec2', region_name=region)
    ec2.start_instances(InstanceIds=instances)
    print('started your instances: ' + str(instances))



Please find the detailed step-by-step guide for EC2 start/stop through Lambda at https://aws.amazon.com/premiumsupport/knowledge-center/start-stop-lambda-cloudwatch/








Lambda Function to Stop an EC2 Instance


import boto3

# Enter the region your instances are in. Include only the region without specifying Availability Zone; e.g., 'us-east-1'
region = 'XX-XXXXX-X'
# Enter your instances here: ex. ['X-XXXXXXXX', 'X-XXXXXXXX']
instances = ['X-XXXXXXXX']

def lambda_handler(event, context):
    ec2 = boto3.client('ec2', region_name=region)
    ec2.stop_instances(InstanceIds=instances)
    print('stopped your instances: ' + str(instances))

Saturday, February 17, 2018

Run shell script on Your Linux Instance at Launch


When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. You can also pass this data into the launch wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls).
If you are interested in more complex automation scenarios, consider using AWS CloudFormation and AWS OpsWorks. For more information, see the AWS CloudFormation User Guide and the AWS OpsWorks User Guide.
For information about running commands on your Windows instance at launch, see Running Commands on Your Windows Instance at Launch and Managing Windows Instance Configuration in the Amazon EC2 User Guide for Windows Instances.
In the following examples, the commands from the Installing a LAMP Web Server tutorial are converted to a shell script and a set of cloud-init directives that executes when the instance launches. In each example, the following tasks are executed by the user data:

  • The distribution software packages are updated.
  • The necessary web server, php, and mysql packages are installed.
  • The httpd service is started and turned on via chkconfig.
  • The www group is added, and ec2-user is added to that group.
  • The appropriate ownership and file permissions are set for the web directory and the files contained within it.
  • A simple web page is created to test the web server and PHP engine.
By default, user data and cloud-init directives only run during the first boot cycle when you launch an instance. However, AWS Marketplace vendors and owners of third-party AMIs may have made their own customizations for how and when scripts run.

The following is an example user data shell script:

#!/bin/bash
yum update -y
service httpd start
chkconfig httpd on

Content reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts
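The bullet list above can be fleshed out into a fuller user data script. This is a sketch assuming an Amazon Linux 1 instance; the package names (httpd24, php70, mysql56-server, php70-mysqlnd) are assumptions that vary by distribution and release:

```shell
#!/bin/bash
# Sketch of a LAMP user data script for Amazon Linux (package names are assumptions).
yum update -y                                              # update distribution packages
yum install -y httpd24 php70 mysql56-server php70-mysqlnd  # web server, php, mysql
service httpd start                                        # start the web server
chkconfig httpd on                                         # enable it at boot
groupadd www                                               # add the www group
usermod -a -G www ec2-user                                 # add ec2-user to that group
chown -R root:www /var/www                                 # set ownership on the web directory
chmod 2775 /var/www                                        # group-writable, setgid directory
echo "<?php phpinfo(); ?>" > /var/www/html/phpinfo.php     # simple page to test PHP
```

Remember that user data runs only on the first boot cycle by default, as noted above.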

Friday, February 16, 2018

Amazon ElastiCache





Managed, in-memory data store services. Choose Redis or Memcached to power real-time applications.




Amazon ElastiCache offers fully managed Redis and Memcached. Seamlessly deploy, operate, and scale popular open source compatible in-memory data stores. Build data-intensive apps or improve the performance of your existing apps by retrieving data from high throughput and low latency in-memory data stores. Amazon ElastiCache is a popular choice for Gaming, Ad-Tech, Financial Services, Healthcare, and IoT apps.


Benefits

EXTREME PERFORMANCE

Amazon ElastiCache works as an in-memory data store and cache to support the most demanding applications requiring sub-millisecond response times. By utilizing an end-to-end optimized stack running on customer dedicated nodes, Amazon ElastiCache provides secure, blazing fast performance.

FULLY MANAGED

You no longer need to perform management tasks such as hardware provisioning, software patching, setup, configuration, monitoring, failure recovery, and backups. ElastiCache continuously monitors your clusters to keep your workloads up and running so that you can focus on higher value application development.

SCALABLE

Amazon ElastiCache can scale-out, scale-in, and scale-up to meet fluctuating application demands.  Write and memory scaling is supported with sharding. Replicas provide read scaling.
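The sharding idea behind write and memory scaling can be sketched in a few lines: each key hashes to one of N shards, so adding shards spreads writes and memory across nodes. This is a simplified illustration, not ElastiCache's or Redis Cluster's actual slot algorithm:

```python
import hashlib

def shard_for(key, num_shards):
    """Map a cache key to a shard index by hashing (simplified illustration)."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

# With 3 shards, keys distribute across nodes; replicas then handle read scaling.
for key in ["user:1", "user:2", "session:abc"]:
    print(key, "-> shard", shard_for(key, 3))
```

Because the mapping is deterministic, every client routes a given key to the same shard without coordination.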

Saturday, February 3, 2018

An Introduction to Terraform




Learn the basics of Terraform in this step-by-step tutorial on how to deploy an EC2 instance running a web server on AWS.

In this post, we’re going to introduce the basics of how to use Terraform to define and manage your infrastructure.




What is Terraform?


Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.


Infrastructure as Code


Infrastructure is described using a high-level configuration syntax. This allows a blueprint of your data center to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used.



The official Terraform Getting Started documentation does a good job of introducing the individual elements of Terraform (i.e. resources, input variables, output variables, etc.), so in this guide we're going to focus on how to put those elements together using a real example: deploying a web server on EC2.

This guide is targeted at AWS and Terraform newbies, so don’t worry if you haven’t used either one before. We’ll walk you through the entire process, step-by-step:
1.  Set up your AWS account
2. Install terraform
3. Configure the terraform script
4. Test the server accessibility
5. Clean up


Set Up Your AWS Account
Terraform can provision infrastructure across many different types of cloud providers, including AWS, Azure, Google Cloud, and many others. For this tutorial, we picked Amazon Web Services (AWS) because:
  • It provides a huge range of reliable and scalable cloud hosting services, including Elastic Compute Cloud (EC2), Auto Scaling Groups (ASGs), and Elastic Load Balancing (ELB).
  • AWS is by far the most popular cloud infrastructure provider, and it offers a generous Free Tier that should allow you to run all of these examples for free.
When you first register for AWS, you initially sign in as the root user. This user account has access permissions to everything, so from a security perspective, we recommend only using it to create other user accounts with more limited permissions (see IAM Best Practices).
To create a more limited user account, go to the Identity and Access Management (IAM) console, click “Users”, and then click the “Add user” button. Enter a name for the user and make sure “Programmatic access” is checked under Access type:



Once you're done with the credentials, click the “Next: Permissions” button. The next screen asks you to set permissions for the user and offers three options:
1) Add the user to a specific group
2) Copy permissions from an existing user
3) Attach a policy directly
Here we select option 3) Attach a policy directly. By default, a new IAM user does not have permission to do anything in the AWS account. To be able to use Terraform for the examples in this tutorial, add the AmazonEC2FullAccess policy (learn more about Managed IAM Policies here):


Click the “Create” button and you will see the security credentials for that user, which consist of an Access Key ID and a Secret Access Key. You MUST save these immediately, as they will never be shown again. We recommend storing them somewhere secure so you can use them a little later in this tutorial. You can also download a CSV file containing the new user's credentials.




Install Terraform
Follow the instructions here to install Terraform. When you’re done, you should be able to run the terraform command:


# terraform
Usage: terraform [--version] [--help] <command> [args]
In order for Terraform to be able to make changes in your AWS account, you will need to set the AWS credentials for the user you created earlier as environment variables:

export AWS_ACCESS_KEY_ID=(your access key id)
export AWS_SECRET_ACCESS_KEY=(your secret access key)

Deploy a Web server in EC2


Terraform code is written in a language called HCL, in files with the extension “.tf”. It is a declarative language: your goal is to describe the infrastructure you want, and Terraform figures out how to create it. Terraform can create infrastructure across a wide variety of platforms, or what it calls providers, including AWS, Azure, Google Cloud, and many others. The first step to using Terraform is typically to configure the provider(s) you want to use. Create a file called “main.tf” and put the following code in it:

 provider "aws" {
 
region = "eu-west-1"
}

This tells Terraform that you are going to be using the AWS provider and that you wish to deploy your infrastructure in the “eu-west-1” region (AWS has data centers all over the world, grouped into regions and availability zones, and eu-west-1 is the name for data centers in Ireland). You can configure other settings for the AWS provider, but for this example, since you’ve already configured your credentials as environment variables, you only need to specify the region.
For each provider, there are many different kinds of “resources” you can create, such as servers, databases, and load balancers. Before we deploy a whole cluster of servers, let's first figure out how to deploy a single server that will run a simple “Hello, World” web server. In Amazon Web Services, a server is called an “EC2 Instance”. To deploy an EC2 Instance, add the following code to main.tf:


 resource "aws_instance" "myfirstec2" {
 
ami = "ami-d834aba1"
 instance_type = 
"t2.micro"
}
Each resource specifies a type (in this case, “aws_instance”), a name (in this case, “myfirstec2”) to use as an identifier within the Terraform code, and a set of configuration parameters specific to the resource. The aws_instance resource documentation lists all the parameters it supports. Initially, you only need to set the following ones:
  • ami: The Amazon Machine Image to run on the EC2 Instance. The example uses Amazon Linux AMI 2017.09.1 (HVM), SSD Volume Type in eu-west-1.
  • instance_type: The type of EC2 Instance to run. Each EC2 Instance type has a different amount of CPU, memory, disk space, and networking capacity. The example above uses “t2.micro”, which has 1 virtual CPU, 1 GB of memory, and is part of the AWS Free Tier.
In a terminal, go into the folder where you created main.tf, and run the “terraform plan” command:


# terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_instance.myfirstec2
      id:                           <computed>
      ami:                          "ami-d834aba1"
      associate_public_ip_address:  <computed>
      availability_zone:            <computed>
      ebs_block_device.#:           <computed>
      ephemeral_block_device.#:     <computed>
      instance_state:               <computed>
      instance_type:                "t2.micro"
      ipv6_address_count:           <computed>
      ipv6_addresses.#:             <computed>
      key_name:                     <computed>
      network_interface.#:          <computed>
      network_interface_id:         <computed>
      placement_group:              <computed>
      primary_network_interface_id: <computed>
      private_dns:                  <computed>
      private_ip:                   <computed>
      public_dns:                   <computed>
      public_ip:                    <computed>
      root_block_device.#:          <computed>
      security_groups.#:            <computed>
      source_dest_check:            "true"
      subnet_id:                    <computed>
      tenancy:                      <computed>
      volume_tags.%:                <computed>
      vpc_security_group_ids.#:     <computed>


Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

The plan command lets you see what Terraform will do before actually doing it. This is a great way to sanity check your changes before unleashing them onto the world. The output of the plan command is a little like the output of the diff command: resources with a plus sign (+) are going to be created, resources with a minus sign (-) are going to be deleted, and resources with a tilde sign (~) are going to be modified. In the output above, you can see that Terraform is planning on creating a single EC2 Instance and nothing else, which is exactly what we want.  

To actually create the instance, run the “terraform apply” command:
    
# terraform apply

Terraform will perform the following actions:

  + aws_instance.myfirstec2
      id:                           <computed>
      ami:                          "ami-d834aba1"
      associate_public_ip_address:  <computed>
      availability_zone:            <computed>
      ebs_block_device.#:           <computed>
      ephemeral_block_device.#:     <computed>
      instance_state:               <computed>
      instance_type:                "t2.micro"
      ipv6_address_count:           <computed>
      ipv6_addresses.#:             <computed>
      key_name:                     <computed>
      network_interface.#:          <computed>
      network_interface_id:         <computed>
      placement_group:              <computed>
      primary_network_interface_id: <computed>
      private_dns:                  <computed>
      private_ip:                   <computed>
      public_dns:                   <computed>
      public_ip:                    <computed>
      root_block_device.#:          <computed>
      security_groups.#:            <computed>
      source_dest_check:            "true"
      subnet_id:                    <computed>
      tenancy:                      <computed>
      volume_tags.%:                <computed>
      vpc_security_group_ids.#:     <computed>

Plan: 1 to add, 0 to change, 0 to destroy.

You have deployed a server with Terraform! To verify this, log in to the AWS Console and you will see your EC2 Instance running in the EC2 Console, as in the view below:




It’s working, but it’s not the most exciting example. For one thing, the Instance doesn’t have a name. To add one, you can add a tag to the EC2 instance:

resource "aws_instance" "myfirstec2" {
  ami           = "ami-d834aba1"
  instance_type = "t2.micro"

  tags {
    Name = "terraform-myfirstec2"
  }
}

Run the plan command again to see what this would do:

# terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

aws_instance.myfirstec2: Refreshing state... (ID: i-00c8bbb1c554945e7)

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ aws_instance.myfirstec2
      tags.%:    "0" => "1"
      tags.Name: "" => "terraform-myfirstec2"


Plan: 0 to add, 1 to change, 0 to destroy.

------------------------------------------------------------------------


Terraform keeps track of all the resources it has already created for this set of templates, so it knows your EC2 Instance already exists (note how Terraform says “Refreshing state…” when you run the plan command), and it can show you a diff between what’s currently deployed and what’s in your Terraform code. The diff above shows that Terraform wants to create a single tag called “Name”, which is exactly what we want, so run the “apply” command again. When you refresh your EC2 console, you will see the change as below:




      Deploy an HTTP web server in EC2


The next step is to run a web server on this Instance. In a real-world use case you would deploy a full application, but to keep things simple we’re going to run a web server that always returns the text “Welcome to My First EC2 Instance Web Server” from an index.html file on the server:


#!/bin/bash
yum install -y httpd
echo "Welcome to My First EC2 Instance Web Server" > /var/www/html/index.html
service httpd start

This is a bash script that writes the text “Welcome to My First EC2 Instance Web Server” into index.html so it can be served at the URL “/”. The yum install -y httpd command installs the Apache web server, which creates the directory /var/www/html; the echo line creates the index.html file in that location, and service httpd start then starts Apache.




We’re going to run the script above as part of the EC2 Instance’s User Data, which AWS executes at boot time. This also means that when Auto Scaling launches a new instance, it comes up with the same configuration already installed.




resource "aws_instance" "myfirstec2" {
  ami           = "ami-d834aba1"
  instance_type = "t2.micro"

  user_data = <<-EOF
              #!/bin/bash
              yum install -y httpd
              echo "Welcome to My First EC2 Instance Web Server" > /var/www/html/index.html
              service httpd start
              EOF

  tags {
    Name = "terraform-myfirstec2"
  }
}


The “<<-EOF” and “EOF” are Terraform’s heredoc syntax, which allows you to create multiline strings without having to put “\n” all over the place (learn more about Terraform syntax here).
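As a side note, the dash in “<<-EOF” is what lets the script above be indented. A minimal sketch (a hypothetical snippet, not from the config above) showing why it matters:

```hcl
# Hypothetical snippet: the "-" in <<-EOF makes this an indented heredoc,
# so Terraform strips the leading whitespace shared by all lines before
# using the string. A plain <<EOF would keep that indentation.
user_data = <<-EOF
            #!/bin/bash
            echo "multi-line strings without \n escapes"
            EOF
```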

You need to do one more thing before this web server works. By default, AWS does not allow any incoming or outgoing traffic from an EC2 Instance. To allow the EC2 Instance to receive traffic on port 80, you need to create a security group:





resource "aws_security_group" "instance" {
  name = "terraform-example-instance"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
The code above creates a new resource called aws_security_group (notice how all resources for the AWS provider start with “aws_”) and specifies that this group allows incoming TCP requests on port 80 from the CIDR block 0.0.0.0/0. CIDR blocks are a concise way to specify IP address ranges. For example, a CIDR block of 10.0.0.0/24 represents all IP addresses between 10.0.0.0 and 10.0.0.255. The CIDR block 0.0.0.0/0 is an IP address range that includes all possible IP addresses, so the security group above allows incoming requests on port 80 from any IP.



Note that in the security group above, we hard-coded port 80 in two places (from_port and to_port). To keep your code DRY and to make it easy to configure, Terraform allows you to define input variables:


variable "web_server_port" {
  description = "The port the server will use for HTTP requests"
}
You can use this variable in your security group via Terraform interpolation syntax:
from_port = "${var.web_server_port}"
to_port   = "${var.web_server_port}"
If you now run the plan or apply command, Terraform will prompt you to enter a value for the web_server_port variable:
# terraform plan
var.web_server_port
 The port the server will use for HTTP requests

  Enter a value: 80
Another way to provide a value for the variable is to use the “-var” command line option:
# terraform plan -var web_server_port="80"
If you don’t want to enter the port manually every time, you can specify a default value as part of the variable declaration (note that this default can still be overridden via the “-var” command line option):
variable "web_server_port" {
  description = "The port the server will use for HTTP requests"
  default     = 80
}
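If you’d rather not pass “-var” flags at all, another standard option (not shown in this post) is to put values in a terraform.tfvars file in the same directory, which Terraform loads automatically:

```hcl
# terraform.tfvars -- example value; this file is picked up automatically
# by terraform plan/apply and overrides the variable's default.
web_server_port = 80
```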

One last thing to do: you need to tell the EC2 Instance to actually use the new security group. To do that, you need to pass the ID of the security group into the vpc_security_group_ids parameter of the aws_instance resource. How do you get this ID?
In Terraform, every resource has attributes that you can reference using the same syntax as interpolation. You can find the list of attributes in the documentation for each resource. For example, the aws_security_group attributes include the ID of the security group, which you can reference in the EC2 Instance as follows:
vpc_security_group_ids = ["${aws_security_group.instance.id}"]
The syntax is “${TYPE.NAME.ATTRIBUTE}”. When one resource references another resource, you create an implicit dependency. Terraform parses these dependencies, builds a dependency graph from them, and uses that to automatically figure out in what order it should create resources (e.g. Terraform knows it needs to create the security group before using it with the EC2 Instance). In fact, Terraform will create as many resources in parallel as it can, which means it is very fast at applying your changes. That’s the beauty of a declarative language: you just specify what you want and Terraform figures out the most efficient way to make it happen.
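Putting the snippets above together, the instance resource with the security group reference would look roughly like this (a sketch assembled from this post’s examples, not a verbatim listing):

```hcl
resource "aws_instance" "myfirstec2" {
  ami           = "ami-d834aba1"
  instance_type = "t2.micro"

  # Implicit dependency: referencing the security group's "id" attribute
  # tells Terraform to create the security group before the instance.
  vpc_security_group_ids = ["${aws_security_group.instance.id}"]

  user_data = <<-EOF
              #!/bin/bash
              yum install -y httpd
              echo "Welcome to My First EC2 Instance Web Server" > /var/www/html/index.html
              service httpd start
              EOF

  tags {
    Name = "terraform-myfirstec2"
  }
}
```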
If you run the plan command, you’ll see that Terraform wants to replace the original EC2 Instance with a new one that has the new user data (the “-/+” means “replace”) and to add a security group:
# terraform apply
var.web_server_port
  The port the server will use for HTTP requests

  Enter a value: 80

aws_instance.myfirstec2: Refreshing state... (ID: i-00c8bbb1c554945e7)

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_instance.myfirstec2
      id:                                    <computed>
      ami:                                   "ami-d834aba1"
      associate_public_ip_address:           <computed>
      availability_zone:                     <computed>
      ebs_block_device.#:                    <computed>
      ephemeral_block_device.#:              <computed>
      instance_state:                        <computed>
      instance_type:                         "t2.micro"
      ipv6_address_count:                    <computed>
      ipv6_addresses.#:                      <computed>
      key_name:                              <computed>
      network_interface.#:                   <computed>
      network_interface_id:                  <computed>
      placement_group:                       <computed>
      primary_network_interface_id:          <computed>
      private_dns:                           <computed>
      private_ip:                            <computed>
      public_dns:                            <computed>
      public_ip:                             <computed>
      root_block_device.#:                   <computed>
      security_groups.#:                     <computed>
      source_dest_check:                     "true"
      subnet_id:                             <computed>
      tags.%:                                "1"
      tags.Name:                             "terraform-myfirstec2"
      tenancy:                               <computed>
      user_data:                             "e24dcf69cef29301fdb38a432284c77f963990bb"
      volume_tags.%:                         <computed>
      vpc_security_group_ids.#:              <computed>

  + aws_security_group.instance
      id:                                    <computed>
      description:                           "Managed by Terraform"
      egress.#:                              <computed>
      ingress.#:                             "1"
      ingress.2214680975.cidr_blocks.#:      "1"
      ingress.2214680975.cidr_blocks.0:      "0.0.0.0/0"
      ingress.2214680975.description:        ""
      ingress.2214680975.from_port:          "80"
      ingress.2214680975.ipv6_cidr_blocks.#: "0"
      ingress.2214680975.protocol:           "tcp"
      ingress.2214680975.security_groups.#:  "0"
      ingress.2214680975.self:               "false"
      ingress.2214680975.to_port:            "80"
      name:                                  "terraform-security-group"
      owner_id:                              <computed>
      revoke_rules_on_delete:                "false"
      vpc_id:                                <computed>


Plan: 2 to add, 0 to change, 0 to destroy.
This is exactly what we want, so run the apply command again and you’ll see your new EC2 Instance deploying.
In the description panel at the bottom of the EC2 console, you’ll also see the public IP address of this EC2 Instance. Give it a minute or two to boot up, then fetch index.html on port 80 with wget, a browser, or the curl utility:

# wget http://34.217.9.203/index.html
--2018-01-31 11:08:27--  http://34.217.9.203/index.html
Connecting to 34.217.9.203:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 44 [text/html]
Saving to: 'index.html'

index.html          100%[===================>]      44  --.-KB/s    in 0s

2018-01-31 11:08:27 (9.65 MB/s) - 'index.html' saved [44/44]




Terraform also gives you a better way to find the public IP. Having to manually dig around the EC2 console for the IP address is not ideal; instead, you can use a Terraform output variable:

output "public_ip" {
  value = "${aws_instance.myfirstec2.public_ip}"
}

We’re using the interpolation syntax again to reference the public_ip attribute of the aws_instance resource. If you run the apply command again, Terraform will not apply any changes (since you haven’t changed any resources), but it’ll show you the new output:



# terraform apply
aws_security_group.instance: Refreshing state... (ID: sg-db91dba1)
aws_instance.example: Refreshing state... (ID: i-61744350)

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

public_ip = 34.217.9.203


