Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services, and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. You can use these insights to react and keep your application running smoothly.
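As a hedged sketch of what collecting a custom metric and setting an alarm looks like in practice (the MyApp namespace, PageLoadTime metric, and 100 ms threshold below are invented placeholders, not values from this article):

```shell
# Sketch only: MyApp, PageLoadTime, and the threshold are made-up examples.
# The runner prints each AWS CLI call instead of executing it; replace the
# echo with "$@" to run against a real account with credentials configured.
run() { echo "+ $*"; }

# Publish one data point for a custom application metric.
run aws cloudwatch put-metric-data --namespace MyApp \
    --metric-name PageLoadTime --value 83 --unit Milliseconds

# Alarm when the 5-minute average exceeds 100 ms.
run aws cloudwatch put-metric-alarm --alarm-name high-page-load \
    --namespace MyApp --metric-name PageLoadTime \
    --statistic Average --period 300 --threshold 100 \
    --comparison-operator GreaterThanThreshold --evaluation-periods 1
```

The alarm can then trigger an SNS notification or an Auto Scaling action, which is the "automatically react to changes" part described above.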
Saturday, January 20, 2018
EC2: A Backbone of AWS
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use. Amazon EC2 provides developers the tools to build failure resilient applications and isolate them from common failure scenarios.
ELASTIC WEB-SCALE COMPUTING
Amazon EC2 enables you to increase or decrease capacity within minutes, not hours or days. You can commission one, hundreds, or even thousands of server instances simultaneously. You can also use Amazon EC2 Auto Scaling to maintain availability of your EC2 fleet and automatically scale your fleet up and down depending on its needs in order to maximize performance and minimize cost. To scale multiple services, you can use AWS Auto Scaling.
COMPLETELY CONTROLLED
You have complete control of your instances including root access and the ability to interact with them as you would any machine. You can stop any instance while retaining the data on the boot partition, and then subsequently restart the same instance using web service APIs. Instances can be rebooted remotely using web service APIs, and you also have access to their console output.
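The stop/restart/reboot lifecycle described above can be sketched with the AWS CLI (the instance ID below is a placeholder):

```shell
# Sketch only: i-0123456789abcdef0 is a placeholder instance ID. The runner
# prints each call; replace the echo with "$@" to execute for real.
run() { echo "+ $*"; }

INSTANCE_ID="i-0123456789abcdef0"
run aws ec2 stop-instances --instance-ids "$INSTANCE_ID"    # boot-volume data persists
run aws ec2 start-instances --instance-ids "$INSTANCE_ID"   # restart the same instance
run aws ec2 reboot-instances --instance-ids "$INSTANCE_ID"  # remote reboot
run aws ec2 get-console-output --instance-id "$INSTANCE_ID" # read console output
```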
FLEXIBLE CLOUD HOSTING SERVICES
You have the choice of multiple instance types, operating systems, and software packages. Amazon EC2 allows you to select a configuration of memory, CPU, instance storage, and the boot partition size that is optimal for your choice of operating system and application. For example, choice of operating systems includes numerous Linux distributions and Microsoft Windows Server.
INTEGRATED
Amazon EC2 is integrated with most AWS services such as Amazon Simple Storage Service (Amazon S3), Amazon Relational Database Service (Amazon RDS), and Amazon Virtual Private Cloud (Amazon VPC) to provide a complete, secure solution for computing, query processing, and cloud storage across a wide range of applications.
RELIABLE
Amazon EC2 offers a highly reliable environment where replacement instances can be rapidly and predictably commissioned. The service runs within Amazon’s proven network infrastructure and data centers. The Amazon EC2 Service Level Agreement commitment is 99.99% availability for each Amazon EC2 Region.
SECURE
Cloud security at AWS is the highest priority. As an AWS customer, you will benefit from a data center and network architecture built to meet the requirements of the most security-sensitive organizations. Amazon EC2 works in conjunction with Amazon VPC to provide security and robust networking functionality for your compute resources.
INEXPENSIVE
Amazon EC2 passes on to you the financial benefits of Amazon’s scale. You pay a very low rate for the compute capacity you actually consume. See Amazon EC2 Instance Purchasing Options for more details.
Content source: https://aws.amazon.com/ec2
Friday, January 19, 2018
What is DynamoDB?
Amazon DynamoDB
Fast, Consistent Performance
DynamoDB is designed to deliver consistent, fast performance at any scale for all applications. Average service-side latencies are typically single-digit milliseconds. As your data volumes grow and application performance demands increase, DynamoDB uses automatic partitioning and SSD technologies to meet your throughput requirements and deliver low latencies at any scale.
Highly Scalable
DynamoDB automatically scales capacity up or down as application request volumes increase or decrease. Auto scaling is enabled by default, and you only need to specify the target utilization. Actual throughput consumption is continuously monitored in the background by CloudWatch alarms, and provisioned throughput is adjusted whenever utilization deviates from the target. If you need to scale your application to serve globally dispersed users, Global Tables enables you to automatically replicate your data across your choice of AWS Regions.
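A hedged sketch of configuring this target-tracking auto scaling with the AWS CLI (the table name MyTable, the 5–500 capacity range, and the 70% target are illustrative placeholders):

```shell
# Sketch only: MyTable and the capacity limits are placeholders. The runner
# prints each AWS CLI call; replace the echo with "$@" to execute for real.
run() { echo "+ $*"; }

# Target-tracking configuration: keep consumed read capacity near 70% of
# what is provisioned, matching the "target utilization" setting above.
cat > scaling-policy.json <<'EOF'
{
  "TargetValue": 70.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
  }
}
EOF

run aws application-autoscaling register-scalable-target \
    --service-namespace dynamodb --resource-id "table/MyTable" \
    --scalable-dimension "dynamodb:table:ReadCapacityUnits" \
    --min-capacity 5 --max-capacity 500

run aws application-autoscaling put-scaling-policy \
    --service-namespace dynamodb --resource-id "table/MyTable" \
    --scalable-dimension "dynamodb:table:ReadCapacityUnits" \
    --policy-name read-target-tracking --policy-type TargetTrackingScaling \
    --target-tracking-scaling-policy-configuration file://scaling-policy.json
```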
Fully Managed
DynamoDB is a fully managed non-relational database service – you simply create a database table, set your target utilization for Auto Scaling, and let the service handle the rest. You no longer need to worry about database management tasks such as hardware or software provisioning, setup and configuration, software patching, operating a reliable, distributed database cluster, or partitioning data over multiple instances as you scale. DynamoDB also lets you back up and restore all your tables for data archival, helping you meet your corporate and governmental regulatory requirements.
Event Driven Programming
DynamoDB integrates with AWS Lambda to provide triggers, which enable you to architect applications that automatically react to data changes.
Fine-grained Access Control
DynamoDB integrates with AWS Identity and Access Management (IAM) for fine-grained access control for users within your organization. You can assign unique security credentials to each user and control each user's access to services and resources.
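A sketch of what such fine-grained access control looks like, following the common pattern of restricting each user to items whose partition key matches their own identity (the account ID, table name, and web-identity variable below are placeholders):

```shell
# Sketch only: the account ID, table name, and the ${www.amazon.com:user_id}
# web identity variable are illustrative placeholders.
cat > dynamodb-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:Query"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable",
      "Condition": {
        "ForAllValues:StringEquals": {
          "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"]
        }
      }
    }
  ]
}
EOF

# Confirm the policy document is well-formed JSON.
python3 -m json.tool dynamodb-policy.json > /dev/null && echo "policy OK"
```

The dynamodb:LeadingKeys condition key restricts reads to items whose partition key value equals the caller's identity, so each user can only see their own rows.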
Flexible
DynamoDB supports both document and key-value data structures, giving you the flexibility to design the architecture that is optimal for your application.
Content Source: https://aws.amazon.com/dynamodb/
Friday, January 12, 2018
How to Set Up WordPress on an AWS EC2 Instance?
Host your WordPress website on Amazon Web Services using an AWS EC2 Free Tier instance! Easily and quickly install your WordPress blog on Amazon's servers.
Install WordPress on AWS EC2 in 5 Minutes
1. Install the HTTP, PHP, and MySQL server packages
2. Configure the database
3. Download WordPress
4. Configure WordPress to use the database
Steps
#!/bin/bash
# Run as root on Amazon Linux (for example, as EC2 user data).
yum update -y
yum install -y httpd            # Apache web server (the -y flag was missing and would hang the script)
yum install -y php php-mysql
yum install -y mysql-server
service httpd start
service mysqld start
# Create an empty database for WordPress; a fresh mysqld install has a blank
# root password, so set one before using this in production.
mysqladmin -uroot create mydb
cd /var/www/html
wget https://wordpress.org/latest.tar.gz
tar -xzf latest.tar.gz
mv wordpress/ testwordpress
cd testwordpress
# Copy the sample config and point it at the database created above (step 4).
cp wp-config-sample.php wp-config.php
sed -i "s/database_name_here/mydb/" wp-config.php
sed -i "s/username_here/root/" wp-config.php
sed -i "s/password_here//" wp-config.php
Tuesday, January 9, 2018
What is Amazon EBS?
Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability. Amazon EBS volumes offer the consistent and low-latency performance needed to run your workloads. With Amazon EBS, you can scale your usage up or down within minutes – all while paying a low price for only what you provision.
Amazon EBS is designed for application workloads that benefit from fine tuning for performance, cost and capacity. Typical use cases include Big Data analytics engines (like the Hadoop/HDFS ecosystem and Amazon EMR clusters), relational and NoSQL databases (like Microsoft SQL Server and MySQL or Cassandra and MongoDB), stream and log processing applications (like Kafka and Splunk), and data warehousing applications (like Vertica and Teradata).
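A hedged sketch of provisioning and attaching a volume with the AWS CLI (the Availability Zone, size, volume ID, and instance ID are placeholders):

```shell
# Sketch only: all IDs and the AZ below are placeholders. The runner prints
# each AWS CLI call; replace the echo with "$@" to execute for real.
run() { echo "+ $*"; }

# Provision a 100 GiB general-purpose SSD volume...
run aws ec2 create-volume --availability-zone us-east-1a \
    --size 100 --volume-type gp2

# ...and attach it to an instance in the same Availability Zone.
run aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/sdf
```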
Content source: https://aws.amazon.com/ebs
Monday, January 8, 2018
What is CloudFront?
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, for example, .html, .css, .php, image, and media files, to end users. CloudFront delivers your content through a worldwide network of edge locations. When an end user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency, so content is delivered with the best possible performance. If the content is already in that edge location, CloudFront delivers it immediately. If the content is not currently in that edge location, CloudFront retrieves it from an Amazon S3 bucket or an HTTP server (for example, a web server) that you have identified as the source for the definitive version of your content.
Content Source: https://aws.amazon.com/documentation/cloudfront/
What Is the AWS Command Line Interface?
The AWS CLI is an open source tool built on top of the AWS SDK for Python (Boto) that provides commands for interacting with AWS services. With minimal configuration, you can start using all of the functionality provided by the AWS Management Console from your favorite terminal program.
- Linux shells – Use common shell programs such as Bash, Zsh, and tcsh to run commands in Linux, macOS, or Unix.
- Windows command line – On Microsoft Windows, run commands in either PowerShell or the Windows Command Processor.
- Remotely – Run commands on Amazon EC2 instances through a remote terminal such as PuTTY or SSH, or with Amazon EC2 Systems Manager.
The AWS CLI provides direct access to AWS services' public APIs. Explore a service's capabilities with the AWS CLI, and develop shell scripts to manage your resources. Or take what you've learned to develop programs in other languages with the AWS SDK.
In addition to the low-level, API-equivalent commands, the AWS CLI also provides customizations for several services. Customizations are higher-level commands that simplify using a service with a complex API. For example, the aws s3 set of commands provides a familiar syntax for managing files in Amazon S3.

Example: upload a file to Amazon S3. aws s3 cp provides a shell-like copy command, and automatically performs a multipart upload to transfer large files quickly and resiliently.

~$ aws s3 cp myfile.mp4 s3://mybucket/
What is Terraform?
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.
Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied.
The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc.
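A hedged sketch of the workflow described above, assuming a minimal configuration file (the AMI ID, region, and instance type are illustrative placeholders):

```shell
# Sketch only: the AMI ID, region, and instance type are placeholders.
cat > main.tf <<'EOF'
# Minimal Terraform configuration describing one EC2 instance.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
  instance_type = "t2.micro"
}
EOF

# Typical workflow (requires Terraform installed and AWS credentials).
# The runner prints each command; replace the echo with "$@" to execute.
run() { echo "+ $*"; }
run terraform init    # download the AWS provider plugin
run terraform plan    # generate and show the execution plan
run terraform apply   # build the described infrastructure
```

Changing main.tf and re-running terraform plan is how Terraform computes the incremental plans mentioned above.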
Content Source: https://www.terraform.io/intro/index.html
Wednesday, January 3, 2018
Connect to an Amazon EC2 file directory using FileZilla and SFTP
How to upload/download files to ec2 Instance using FileZilla and SFTP
In this tutorial you will learn how to upload/download files to ec2 Instance using FileZilla and SFTP
1. Convert the .pem file downloaded during instance creation to a .ppk file using the PuTTY Key Generator.
2. Enter the instance's public IP or Elastic IP as the host address, port 22, and user name ubuntu (for Ubuntu AMIs).
3. Then click Edit > Settings > SFTP and add your .ppk file.
4. Then click Quickconnect.
5. Now just drag and drop to upload and download files.
========
How to Use FileZilla with Amazon Web Services EC2
1) Go to https://aws.amazon.com
2) Create a free account if you haven't created one already
3) Go to the AWS Management Console
4) Select a region and go to EC2
5) Create an instance (I chose an Ubuntu AMI in Step 1)
6) In Step 6, configure the security group to allow HTTP, HTTPS, and port 8080 in addition to the default SSH rule
7) Launch the instance and download the .pem file
Get FileZilla
1) Go to https://filezilla-project.org/
2) Download the client and run the installer
3) Decline any bundled extra programs during setup
4) Install FileZilla
5) In Settings, go to SFTP and add the .pem file; FileZilla converts it and saves a .ppk file
6) Enter the instance's public DNS as the host name; the user for Ubuntu instances is ubuntu, and make sure the port is 22
7) Connect to your virtual instance via FileZilla!
Monday, January 1, 2018
What is EFS?
Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with Amazon EC2 instances in the AWS Cloud. Amazon EFS is easy to use and offers a simple interface that allows you to create and configure file systems quickly and easily. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it.
When mounted on Amazon EC2 instances, an Amazon EFS file system provides a standard file system interface and file system access semantics, allowing you to seamlessly integrate Amazon EFS with your existing applications and tools. Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time, allowing Amazon EFS to provide a common data source for workloads and applications running on more than one Amazon EC2 instance.
You can mount your Amazon EFS file systems on your on-premises datacenter servers when connected to your Amazon VPC with AWS Direct Connect. You can mount your EFS file systems on on-premises servers to migrate data sets to EFS, enable cloud-bursting scenarios, or back up your on-premises data to EFS.
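Mounting on an instance is a standard NFSv4.1 mount; as a hedged sketch (fs-12345678 and us-east-1 are placeholders, and the DNS name comes from your own file system):

```shell
# Sketch only: fs-12345678 and us-east-1 are placeholders. The runner prints
# each command; replace the echo with "$@" to execute on a real instance.
run() { echo "+ $*"; }

run sudo mkdir -p /mnt/efs
# Mount with the options commonly recommended for NFSv4.1 clients.
run sudo mount -t nfs4 \
    -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
```

Running the same mount on several instances at once is what gives EFS its shared, common data source behavior.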
Amazon EFS is designed for high availability and durability, and provides performance for a broad spectrum of use cases, including web serving and content management, enterprise applications, media and entertainment processing workflows, home directories, database backups, developer tools, container storage, and big data and analytics applications.
Content Source: https://aws.amazon.com/efs/
What is CloudFormation?
AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. CloudFormation allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. This file serves as the single source of truth for your cloud environment.
MODEL IT ALL
AWS CloudFormation allows you to model your entire infrastructure in a text file. This template becomes the single source of truth for your infrastructure. This helps you to standardize infrastructure components used across your organization, enabling configuration compliance and faster troubleshooting.
AUTOMATE AND DEPLOY
AWS CloudFormation provisions your resources in a safe, repeatable manner, allowing you to build and rebuild your infrastructure and applications, without having to perform manual actions or write custom scripts. CloudFormation takes care of determining the right operations to perform when managing your stack, and rolls back changes automatically if errors are detected.
IT'S JUST CODE
Codifying your infrastructure allows you to treat your infrastructure as just code. You can author it with any code editor, check it into a version control system, and review the files with team members before deploying into production.
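A hedged sketch of the "infrastructure as code" idea: a minimal template that models one S3 bucket, validated and deployed with the AWS CLI (the stack name and the bucket's logical name are placeholders):

```shell
# Sketch only: MyBucket and my-demo-stack are placeholder names.
cat > template.yaml <<'EOF'
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal template that models a single S3 bucket.
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
EOF

# The runner prints each AWS CLI call; replace the echo with "$@" to execute
# against a real account with credentials configured.
run() { echo "+ $*"; }
run aws cloudformation validate-template --template-body file://template.yaml
run aws cloudformation deploy --template-file template.yaml \
    --stack-name my-demo-stack
```

Because the template is a plain text file, it can be checked into version control and reviewed like any other code, as described above.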
Content Source: https://aws.amazon.com/cloudformation/
What is Amazon S3?
Companies today need the ability to simply and securely collect, store, and analyze their data at a massive scale. Amazon S3 is object storage built to store and retrieve any amount of data from anywhere – web sites and mobile apps, corporate applications, and data from IoT sensors or devices. It is designed to deliver 99.999999999% durability, and stores data for millions of applications used by market leaders in every industry. S3 provides comprehensive security and compliance capabilities that meet even the most stringent regulatory requirements. It gives customers flexibility in the way they manage data for cost optimization, access control, and compliance. S3 provides query-in-place functionality, allowing you to run powerful analytics directly on your data at rest in S3. And Amazon S3 is the most supported storage platform available, with the largest ecosystem of ISV solutions and systems integrator partners.
Unmatched Durability, Reliability, & Scalability
Amazon S3 runs on the world’s largest global cloud infrastructure and was built from the ground up to deliver a customer promise of 99.999999999% of durability. Data is automatically distributed across a minimum of three physical facilities that are geographically separated within an AWS Region, and Amazon S3 can also automatically replicate data to any other AWS Region.
Most Comprehensive Security & Compliance Capabilities
Amazon S3 is the only cloud storage platform that supports three different forms of encryption. S3 offers sophisticated integration with AWS CloudTrail to log, monitor and retain storage API call activities for auditing. Amazon S3 is the only cloud storage platform with Amazon Macie, which uses machine learning to automatically discover, classify, and protect sensitive data in AWS. S3 supports security standards and compliance certifications including PCI-DSS, HIPAA/HITECH, FedRAMP, EU Data Protection Directive, and FISMA, helping satisfy compliance requirements for virtually every regulatory agency around the globe.
Query in Place
Amazon S3 allows you to run sophisticated big data analytics on your data without moving the data into a separate analytics system. Amazon Athena gives anyone who knows SQL on-demand query access to vast amounts of unstructured data. Amazon Redshift Spectrum lets you run queries spanning both your data warehouse and S3. And only AWS offers Amazon S3 Select (currently in Preview), a way to retrieve only the subset of data you need from an S3 object, which can improve the performance of most applications that frequently access data from S3 by up to 400%.
Flexible Management
Amazon S3 offers the most flexible set of storage management and administration capabilities. Storage administrators can classify, report and visualize data usage trends to reduce costs and improve service levels. Objects can be tagged with unique, customizable metadata so customers can see and control storage consumption, cost, and security separately for each workload. The S3 Inventory feature delivers scheduled reports about objects and their metadata for maintenance, compliance, or analytics operations. S3 can also analyze object access patterns to build lifecycle policies that automate tiering, deletion, and retention. Since Amazon S3 works with AWS Lambda, customers can log activities, define alerts, and invoke workflows, all without managing any additional infrastructure.
Most Supported Platform with the Largest Ecosystem
In addition to integration with most AWS services, the Amazon S3 ecosystem includes tens of thousands of consulting, systems integrator and independent software vendor partners, with more joining every month. And the AWS Marketplace offers 35 categories and more than 3,500 software listings from over 1,100 ISVs that are pre-configured to deploy on the AWS Cloud. AWS Partner Network partners have adapted their services and software to work with S3 for solutions like Backup & Recovery, Archiving, and Disaster Recovery. No other cloud provider has more partners with solutions that are pre-integrated to work with their service.
Easy, Flexible Data Transfer
You can choose from the widest range of options to transfer your data into (or out of) Amazon S3. S3’s simple and reliable APIs make it easy to transfer data over the Internet. Amazon S3 Transfer Acceleration is ideal for larger objects that need to be uploaded across large geographical distances. AWS Direct Connect provides consistently high bandwidth and low latencies for transferring large amounts of data to AWS using a dedicated network connection. You can use AWS Snowball and AWS Snowball Edge appliances for petabyte-scale data transfer, or AWS Snowmobile for even larger datasets. AWS Storage Gateway provides you a physical or virtual appliance to use on-premises to easily move volumes or files into the AWS Cloud.
Object Lifecycle Management
Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows:
- Transition actions – In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.
- Expiration actions – In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.
How Do I Configure a Lifecycle?
A lifecycle configuration, an XML file, comprises a set of rules with predefined actions that you want Amazon S3 to perform on objects during their lifetime.
Amazon S3 provides a set of API operations that you use to manage lifecycle configuration on a bucket. Amazon S3 stores the configuration as a lifecycle subresource that is attached to your bucket.
You can also configure the lifecycle by using the Amazon S3 console or programmatically by using the AWS SDK wrapper libraries, and if you need to you can also make the REST API calls directly. For more information, see Setting Lifecycle Configuration On a Bucket.
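A hedged sketch of such a configuration, realizing the rules described above (transition to STANDARD_IA after 30 days, archive to GLACIER after one year); the bucket name my-bucket is a placeholder, and the JSON form shown is what the AWS CLI accepts in place of the raw XML:

```shell
# Sketch only: my-bucket is a placeholder bucket name.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "tier-then-archive",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [
        {"Days": 30,  "StorageClass": "STANDARD_IA"},
        {"Days": 365, "StorageClass": "GLACIER"}
      ]
    }
  ]
}
EOF

python3 -m json.tool lifecycle.json > /dev/null && echo "lifecycle.json is valid JSON"

# The runner prints the AWS CLI call; replace the echo with "$@" to apply
# the configuration to a real bucket.
run() { echo "+ $*"; }
run aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
    --lifecycle-configuration file://lifecycle.json
```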
Source: https://aws.amazon.com/s3/
https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html