Tuesday, February 26, 2019

AWS news update

AWS Shield Increases Default Resource Limits For Advanced Protection

AWS Shield Advanced now provides enhanced DDoS protection for even more resources. You can protect up to a default limit of 1000 resources of each supported resource type: Amazon CloudFront distributions, Elastic Load Balancing load balancers, Amazon Route 53 hosted zones, Elastic IP addresses, and AWS Global Accelerator accelerators.

AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards web applications running on AWS. There are two tiers of AWS Shield - Standard and Advanced. AWS Shield Standard is available in all AWS Regions and Amazon CloudFront edge locations. It is also enabled by default for all AWS customers to protect against several common infrastructure layer attacks. With AWS Shield Advanced, you get protection against more sophisticated and larger DDoS attacks through enhanced detection and mitigation. With AWS Shield Advanced, you also get near-real-time attack visibility, 24x7 access to the AWS DDoS Response Team (DRT) for escalations, and economic protections against DDoS-related usage spikes in your protected resources.

By default, you can now protect up to 5000 resources: 1000 Amazon CloudFront distributions, 1000 Elastic Load Balancing load balancers (Classic Load Balancers + Application Load Balancers), 1000 Amazon Route 53 hosted zones, 1000 Elastic IP addresses (can be associated with Network Load Balancers or Amazon EC2 instances) and 1000 AWS Global Accelerator accelerators.

Existing users of AWS Shield Advanced get their current limits automatically increased to 1000 for each resource type. If you want to increase these limits beyond 1000 each, please submit a ticket via the AWS Support Center.
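
As a rough illustration with the AWS SDK for Python (boto3), assuming an active Shield Advanced subscription and using a placeholder CloudFront distribution ARN, a protection for an individual resource can be created and existing protections listed like this:

import boto3

shield = boto3.client("shield")  # Shield Advanced APIs require an active subscription

# Protect a single resource; the ARN below is a placeholder for illustration only.
response = shield.create_protection(
    Name="cf-distribution-protection",
    ResourceArn="arn:aws:cloudfront::123456789012:distribution/EDFDVBD6EXAMPLE",
)
print("ProtectionId:", response["ProtectionId"])

# List existing protections to see how many of the default 1000 per type are in use.
for protection in shield.list_protections()["Protections"]:
    print(protection["Name"], protection["ResourceArn"])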

https://aws.amazon.com/about-aws/whats-new/2019/02/aws-shield-advanced-increases-default-resource-limits/

AWS news update

Amazon Data Lifecycle Manager (DLM) Adds Support For Shorter Backup Intervals

Using Amazon Data Lifecycle Manager (DLM) policies, you can now schedule automated backups for your Amazon Elastic Block Store (EBS) volumes every 2, 3, 4, 6, or 8 hours (in addition to the currently supported 12 or 24 hours).

We launched DLM in July 2018 to enable automation of creation and retention of EBS volume snapshots via policies. Since then, we have made DLM easier to use with automatic copy of tags from source volume to snapshots and CloudFormation support for DLM policies. Today, we are adding a wider selection of backup intervals to support more frequent backups.

The new scheduling options allow you to take more frequent backups to meet shorter Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO). To use the newly supported intervals, customers can either modify existing lifecycle policies or create new policies.
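
For example, a policy that snapshots tagged volumes every 4 hours could look roughly like the following boto3 sketch (the role ARN, target tag, and retention count are placeholders):

import boto3

dlm = boto3.client("dlm")

# Placeholder IAM role that grants DLM permission to manage snapshots on your behalf.
role_arn = "arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole"

dlm.create_lifecycle_policy(
    ExecutionRoleArn=role_arn,
    Description="Snapshot tagged volumes every 4 hours",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],
        "Schedules": [
            {
                "Name": "Every4Hours",
                "CreateRule": {"Interval": 4, "IntervalUnit": "HOURS", "Times": ["09:00"]},
                "RetainRule": {"Count": 12},   # keep the 12 most recent snapshots
                "CopyTags": True,              # copy tags from the source volume to snapshots
            }
        ],
    },
)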

https://aws.amazon.com/about-aws/whats-new/2019/02/amazon-data-lifecycle-manager-adds-support-for-shorter-backup-intervals/

AWS news update

Amazon RDS for MySQL and MariaDB Now Support T3 Instance Types

You can now launch T3 instance types when using Amazon Relational Database Service (RDS) for MySQL and Amazon RDS for MariaDB. Amazon EC2 T3 instances are the next generation of burstable general-purpose instances, providing a baseline level of CPU performance with the ability to burst CPU usage at any time, for as long as required.

T3 instances offer a balance of compute, memory, and network resources and are ideal for database workloads with moderate CPU usage that experience temporary spikes in use. T3 instances accumulate CPU credits when a workload is operating below the baseline. Each earned CPU credit provides the T3 instance the opportunity to burst with the performance of a full CPU core for a minute. Amazon RDS T3 instances are configured for Unlimited Mode, which means they can burst beyond the baseline over a 24-hour window for an additional charge. 

You can scale up to the new instance types by modifying your existing DB instance in the AWS RDS Management Console. Refer to the Amazon RDS User Guide for more details.
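
For instance, an existing DB instance could be moved to a T3 class with a boto3 call along these lines (the instance identifier and target class are placeholders; the change involves a brief restart):

import boto3

rds = boto3.client("rds")

# Move an existing DB instance to a burstable T3 instance class.
rds.modify_db_instance(
    DBInstanceIdentifier="my-mysql-db",   # placeholder identifier
    DBInstanceClass="db.t3.medium",       # target T3 instance class
    ApplyImmediately=True,                # apply now instead of at the next maintenance window
)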

Review the Amazon RDS pricing page for pricing details and regional availability.

https://aws.amazon.com/about-aws/whats-new/2019/02/amazon-rds-for-mysql-and-mariadb-now-support-t3-instance-types/

AWS news update

AWS Resource Groups Tag Editor Is Now Available in the AWS Europe (Stockholm) Region

AWS Resource Groups makes it easier to manage and automate tasks on large numbers of AWS resources at one time. AWS Resource Groups Tag Editor allows you to add tags to – or edit or delete tags of – multiple AWS resources at once. With Tag Editor, you can search for the resources that you want to tag, and then manage tags for the resources in your search results. Starting today, AWS Resource Groups Tag Editor is available in the AWS Europe (Stockholm) Region.

AWS Resource Groups Tag Editor, which can be found in the navigation bar of the AWS Management Console (under Resource Groups), is available in Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), Europe (Paris), South America (São Paulo), US East (Northern Virginia), US East (Ohio), US West (Northern California), and US West (Oregon).

AWS Resource Groups supports 120 AWS resource types. AWS Resource Groups Tag Editor supports 40 resource types. You can start creating your own resource groups with the help of our documentation.
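
Tag Editor itself is a console feature, but the same bulk-tagging work can be scripted with the Resource Groups Tagging API. A minimal boto3 sketch, where the region, tag keys, and values are illustrative:

import boto3

tagging = boto3.client("resourcegroupstaggingapi", region_name="eu-north-1")  # Stockholm

# Find resources that carry a given tag...
found = tagging.get_resources(TagFilters=[{"Key": "Environment", "Values": ["prod"]}])
arns = [r["ResourceARN"] for r in found["ResourceTagMappingList"]]

# ...and apply an additional tag to all of them in one call.
if arns:
    tagging.tag_resources(ResourceARNList=arns, Tags={"CostCenter": "1234"})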

Share your feedback with us! Let us know which AWS resources you’d like us to offer in the next release! Your input helps shape our roadmap, so that we can better meet your needs. If you have questions or suggestions, please leave a comment.

https://aws.amazon.com/about-aws/whats-new/2019/02/aws-resource-groups-tag-editor-is-now-available-in-the-aws-europ/

AWS news update

AWS Elemental MediaConvert Adds Support for Video Rotation and Ad Marker Insertion

AWS Elemental MediaConvert now supports video rotation and ad marker insertion.

Using MediaConvert, you can rotate video either automatically, based on input metadata, or manually, by specifying a rotation value. This enables you to encode video created on devices such as mobile phones by rotating it to the orientation that you require.

In addition, you can specify ad insertion points in your outputs, even if your input video doesn’t contain SCTE-35 markers. You do this by adding Event Signaling and Management (ESAM) XML documents to your MediaConvert job settings. This enables you to better monetize VOD assets by inserting advertisements or to enforce content restrictions with blackouts.
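
To make this concrete, here is a trimmed boto3 sketch of the relevant job-settings fragments; the input file, role ARN, and ESAM XML are placeholders, output groups are omitted for brevity, and the Rotate and Esam field names are assumptions based on the MediaConvert job settings schema:

import boto3

# MediaConvert uses an account-specific endpoint.
endpoint = boto3.client("mediaconvert").describe_endpoints()["Endpoints"][0]["Url"]
mediaconvert = boto3.client("mediaconvert", endpoint_url=endpoint)

settings = {
    "Inputs": [{
        "FileInput": "s3://my-bucket/phone-video.mp4",   # placeholder input file
        "VideoSelector": {"Rotate": "AUTO"},             # rotate from input metadata, or e.g. "DEGREES_90"
    }],
    # Ad-marker insertion driven by an ESAM signal-processing notification XML document.
    "Esam": {
        "SignalProcessingNotification": {"SccXml": "<SignalProcessingNotification>...</SignalProcessingNotification>"},
    },
    "OutputGroups": [],   # real jobs need at least one fully specified output group
}

# Once output groups are filled in, the job would be submitted with:
# mediaconvert.create_job(Role="arn:aws:iam::123456789012:role/MediaConvertRole", Settings=settings)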

AWS Elemental MediaConvert allows video providers with any size content library to easily and reliably transcode on-demand content for broadcast and multiscreen delivery. The service functions independently or as part of AWS Elemental Media Services, a family of services that forms the foundation of cloud-based workflows and offers the capabilities needed to transport, create, package, and deliver video.

AWS Elemental MediaConvert is available in the US East (Virginia), US East (Ohio), US West (Oregon), US West (Northern California), Canada (Central), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), EU (Frankfurt), EU (Ireland), EU (London), South America (São Paulo), and GovCloud (US-West) regions. To learn more, please visit http://aws.amazon.com/mediaconvert/.

https://aws.amazon.com/about-aws/whats-new/2019/02/aws-elemental-mediaconvert-adds-support-for-video-rotation-and-ad-marker-insertion/

AWS news update

AWS RoboMaker now supports new languages, tagging, and AWS CloudFormation

AWS RoboMaker makes it easy to develop, test, and deploy intelligent robotics applications at scale. Today we have added support for nine new languages in the RoboMaker console: French, Korean, Simplified Chinese, Traditional Chinese, Japanese, German, Italian, Spanish, and Brazilian Portuguese. In addition, RoboMaker is now integrated with tagging and AWS CloudFormation for easier resource management and creation. You can use tags to allocate cost and control access for RoboMaker resources such as robot applications, simulation applications, simulation jobs, robots, and fleets, and you can use CloudFormation to create RoboMaker resources such as robot applications, simulation applications, robots, and fleets.
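
For example, tags can be applied to an existing RoboMaker resource with boto3; the simulation-job ARN and tag values below are placeholders:

import boto3

robomaker = boto3.client("robomaker")

# Placeholder ARN of an existing simulation job (a robot, fleet, or application ARN works the same way).
sim_job_arn = "arn:aws:robomaker:us-west-2:123456789012:simulation-job/sim-sample"

# Attach cost-allocation / access-control tags to the resource.
robomaker.tag_resource(resourceArn=sim_job_arn, tags={"team": "robotics", "env": "dev"})

# Verify the tags now on the resource.
print(robomaker.list_tags_for_resource(resourceArn=sim_job_arn)["tags"])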

AWS RoboMaker is available in US East (N. Virginia), US West (Oregon), and EU (Ireland) regions. To get started, run a sample simulation job in the RoboMaker console or explore the RoboMaker webpage.  

https://aws.amazon.com/about-aws/whats-new/2019/02/robomaker-now-supports-new-languages-tagging-and-cloudformation/

AWS news update

Amazon RDS for MySQL and MariaDB Now Support R5 Instance Types

You can now launch R5 instance types when using Amazon Relational Database Service (RDS) for MySQL and Amazon RDS for MariaDB. Amazon EC2 R5 instances are the next generation of Amazon EC2 memory optimized instances. R5 instances are based on the Amazon EC2 Nitro System, a combination of dedicated hardware and lightweight hypervisor which delivers practically all of the compute and memory resources of the host hardware to your database instance.

With a 1:8 vCPU to memory ratio, R5 instances are well suited for running memory-intensive database workloads including transaction processing, data warehousing, and analytics. R5 instances introduce a new larger instance size, r5.24xlarge, which provides 96 vCPUs and 768 GiB of memory per instance. Depending on your workload, you may be able to achieve up to a 39% performance boost with R5 instances compared to R4 instances.

You can scale up to the new instance types by modifying your existing DB instance in the AWS RDS Management Console. Refer to the Amazon RDS User Guide for more details.

Review the Amazon RDS pricing page for pricing and regional availability.

https://aws.amazon.com/about-aws/whats-new/2019/02/amazon-rds-for-mysql-and-mariadb-now-support-r5-instance-types/

AWS news update

Amazon Kinesis Video Streams adds Synchronized Audio Video Playback and MPEG-TS Container Format Support via HTTP Live Streaming (HLS)

Amazon Kinesis Video Streams enables developers to use its fully-managed HLS capability for synchronized audio video (AV) playback and also output HLS streams in MPEG-TS container format. Developers can now ingest both audio and video in a single multi-track Kinesis Video stream and use HLS APIs to deliver pre-segmented files and playlists in fragmented MP4 (fMP4) and MPEG-TS container formats for playback on compatible mobile and web players.

As an increasing number of camera-enabled devices now include a microphone, developers want to ingest both audio and video data in a Kinesis Video stream and build player applications for synchronized AV playback. Developers can now enable this capability by ingesting both video (H.264) and audio (AAC) in a single multi-track Kinesis Video stream. As the media is ingested, developers can then use KVS HLS APIs to playback the ingested media in both live and on-demand mode. Kinesis Video Streams now also allows developers to deliver media in both fMP4 and MPEG-TS as the HLS output container, increasing compatibility with earlier generation platforms such as iOS9.
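
Retrieving an HLS playback URL for such a stream takes two calls: look up the stream's data endpoint, then request a streaming session URL with the desired container format. A boto3 sketch, where the stream name is a placeholder:

import boto3

kvs = boto3.client("kinesisvideo")

# Each stream has its own endpoint for the archived-media APIs.
endpoint = kvs.get_data_endpoint(
    StreamName="my-camera-stream",                 # placeholder stream name
    APIName="GET_HLS_STREAMING_SESSION_URL",
)["DataEndpoint"]

media = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)

# Request an HLS URL; MPEG_TS improves compatibility with older players,
# while FRAGMENTED_MP4 is the default container.
url = media.get_hls_streaming_session_url(
    StreamName="my-camera-stream",
    PlaybackMode="LIVE",
    ContainerFormat="MPEG_TS",
)["HLSStreamingSessionURL"]
print(url)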

Please refer to the producer integration guide and the developer documentation to learn more about ingesting audio and video data in a single Kinesis Video stream.  

Refer to the AWS global region table for regional availability.

https://aws.amazon.com/about-aws/whats-new/2019/02/amazon-kinesis-video-streams-adds-synchronized-audio-video-playback-and-mpeg-ts-container-format-support-via-http-live-streaming-hls/

AWS news update

AWS Server Migration Service Adds Support for Importing Applications from AWS Migration Hub

AWS Server Migration Service now offers support for importing and migrating applications discovered by AWS Migration Hub. This new feature allows you to quickly migrate applications identified during the discovery phase, eliminating the need to recreate groupings and, as a result, reducing the time to migrate and lowering the risk of errors in the migration process. You can discover servers and create application groupings using the Migration Hub console, CLI, or CSV import, and use those groupings as-is within the Server Migration Service to perform application migration.
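
A rough boto3 sketch of working with these groupings from the API side, assuming applications have already been imported from Migration Hub; the method names reflect the Server Migration Service application APIs as I understand them, and the app selection is purely illustrative:

import boto3

sms = boto3.client("sms")

# Applications imported from AWS Migration Hub appear alongside any created directly in SMS.
apps = sms.list_apps()["apps"]
for app in apps:
    print(app["appId"], app.get("name"), app.get("status"))

# Kick off replication for one imported application group (requires replication
# configuration to be in place for the app's servers).
if apps:
    sms.start_app_replication(appId=apps[0]["appId"])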

AWS Migration Hub provides a single location to track the progress of application migrations across multiple AWS and partner solutions.

AWS Server Migration Service is an agentless service which makes it easier and faster for you to migrate on-premises workloads to AWS from VMware vSphere and Microsoft Hyper-V environments. It offers multi-server support to enable coordinated migration of a group of servers, making it easier to coordinate large-scale server migrations.

There are no additional charges for AWS Migration Hub or AWS Server Migration Service; all capabilities of Server Migration Service are offered free of cost. You pay only for the cost of the migration tools you use and for any AWS resources consumed, such as EBS and EC2. To learn more, visit AWS Migration Hub, AWS Server Migration Service (SMS), and the technical documentation page.

https://aws.amazon.com/about-aws/whats-new/2019/02/AWS_Server_Migration_Service/

AWS news update

Amazon FSx for Windows File Server Now Supports On-Premises Access to File Systems and Access Across AWS VPCs, Accounts, and Regions

Amazon FSx for Windows File Server, a service that provides fully-managed native Microsoft Windows file systems, now allows you to access your file systems from on-premises via an AWS Direct Connect or AWS VPN connection. Additionally, it now allows you to access your file systems from multiple Amazon Virtual Private Clouds (VPCs), AWS accounts, and AWS Regions via VPC Peering or AWS Transit Gateway. 

With on-premises access, you can easily migrate your on-premises data sets to Amazon FSx, use Amazon FSx for hosting user shares accessible by on-premises end-users, and use Amazon FSx for backup and disaster recovery solutions. With inter-VPC, inter-account, and inter-Region access, you can share your file data sets across multiple applications, internal organizations, or environments spanning multiple VPCs, accounts, or Regions. 

These new access capabilities are now available at no additional cost for all new file systems in all regions where Amazon FSx is available. You will be billed for any Direct Connect, VPN, or VPC Peering connections, or any Transit Gateway attachments that you create, as well as associated data transfer fees. To learn more about Amazon FSx for Windows File Server visit here. To learn more about the new access capabilities visit here.

https://aws.amazon.com/about-aws/whats-new/2019/02/amazon-fsx-for-windows-file-server-now-supports-on-premises-access/


Wednesday, February 20, 2019

AWS Free Tier Details

AWS Free Tier Account Details

AWS Marketplace offers free and paid software products that run on the AWS Free Tier. If you qualify for the AWS Free Tier, you can use these products on an Amazon EC2 t2.micro instance for up to 750 hours per month and pay no additional charges for the Amazon EC2 instance (during the 12 months).


Some free tier offers ("Always Free") do not automatically expire at the end of your 12-month AWS Free Tier term and are available to all AWS customers.

AWS Free Tier (12 Month Introductory Period):
These free tier offers are only available to new AWS customers, and are available for 12 months following your AWS sign-up date. When your 12 month free usage term expires or if your application use exceeds the tiers, you simply pay standard, pay-as-you-go service rates (see each service page for full pricing details). Restrictions apply; see offer terms for more details.
Elastic Compute Cloud (EC2)
750 hours of Amazon EC2 Linux t2.micro instance usage (1 GiB of memory and 32-bit and 64-bit platform support) – enough hours to run continuously each month*

Amazon Simple Storage Service (S3)
5 GB of Amazon S3 standard storage, 20,000 Get Requests, and 2,000 Put Requests*

Amazon Elastic File System (EFS)
5 GB per month of Amazon EFS capacity free*

Amazon Relational Database Service (RDS)
750 hours of Amazon RDS Single-AZ db.t2.micro Instances for running MySQL, PostgreSQL, MariaDB, Oracle BYOL, or SQL Server (running SQL Server Express Edition) – enough hours to run a DB Instance continuously each month*

Amazon Cloud Directory
1GB of storage per month*
10,000 write requests per month*
100,000 read requests per month*

Amazon Connect
90 minutes per month of Amazon Connect usage*
A local direct inward dial (DID) number for the AWS region*
30 minutes per month of local (to the AWS region) inbound DID calls*
30 minutes per month of local (to the AWS region) outbound calls*
For US regions, a US toll-free number for use per month and 30 minutes per month of US inbound toll-free calls*

Amazon GameLift
125 hours per month of Amazon GameLift c4.large.gamelift On-Demand instance usage*
50 GB EBS General Purpose (SSD) storage*


For more details on the AWS Free Tier, visit: https://aws.amazon.com/free/

Content Reference:- https://aws.amazon.com/free/

AWS news update

Amazon Kinesis Data Analytics for Java Now Supports AWS CloudFormation

You can now use AWS CloudFormation templates to model and provision Amazon Kinesis Data Analytics for Java applications using version 2 of the Kinesis Data Analytics API. This improvement enables you to use AWS CloudFormation to deploy Kinesis Data Analytics applications in a safe and repeatable manner.

Amazon Kinesis Data Analytics is the easiest way to analyze streaming data, gain actionable insights, and respond to your business and customer needs in real time. With Kinesis Data Analytics, SQL users and Java developers build streaming applications to transform and analyze data in real time. AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. You can now use AWS CloudFormation to provision Java applications in Kinesis Data Analytics.

AWS CloudFormation support for Kinesis Data Analytics for Java is available now in all regions where Kinesis Data Analytics for Java is available. You can learn more here.
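
A minimal sketch of such a template, deployed here via boto3 using the AWS::KinesisAnalyticsV2::Application resource type; the bucket, code key, and role ARN are placeholders, and the property names and runtime value follow the Kinesis Data Analytics v2 API as I understand it:

import json
import boto3

template = {
    "Resources": {
        "FlinkApp": {
            "Type": "AWS::KinesisAnalyticsV2::Application",
            "Properties": {
                "RuntimeEnvironment": "FLINK-1_6",  # Java/Flink runtime at the time of this announcement
                "ServiceExecutionRole": "arn:aws:iam::123456789012:role/kda-service-role",  # placeholder
                "ApplicationConfiguration": {
                    "ApplicationCodeConfiguration": {
                        "CodeContentType": "ZIPFILE",
                        "CodeContent": {
                            "S3ContentLocation": {
                                "BucketARN": "arn:aws:s3:::my-code-bucket",  # placeholder
                                "FileKey": "flink-app.jar",                  # placeholder
                            }
                        },
                    }
                },
            },
        }
    }
}

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(StackName="kda-java-app", TemplateBody=json.dumps(template))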


Reference: https://aws.amazon.com/about-aws/whats-new/2019/02/amazon-kinesis-data-analytics-for-java-now-supports-aws-cloudformation/

Tuesday, February 19, 2019

AWS news update

Announcing Updated Professional-Level AWS Certification Exams

We've updated our Professional-level AWS Certification exams to reflect new features, services, and best practices. AWS releases a growing number of new features and services each year, so we periodically update and restructure our exams to be sure we are validating the latest skills.
The updated AWS Certified Solutions Architect – Professional exam validates advanced technical skills and experience in designing distributed applications and systems on the AWS platform. It now covers five domains: design for organizational complexity, design for new solutions, migration planning, cost control, and continuous improvement for existing solutions. The exam is recommended for solutions architects with two or more years of hands-on experience designing and deploying cloud architecture on AWS.
The updated AWS Certified DevOps Engineer – Professional exam validates technical expertise in provisioning, operating, and managing distributed application systems on the AWS platform. It now covers six domains: SDLC automation; configuration management and infrastructure as code; monitoring and logging; policies and standards automation; incident and event response; and high availability, fault tolerance, and disaster recovery. The exam is recommended for DevOps engineers with two or more years of experience provisioning, operating, and managing AWS environments.
We now only recommend, rather than require, that candidates have an Associate-level certification before taking Professional-level exams. See our updated exam guides to learn more about each exam and find resources to help you prepare. Professional-level exams are currently available in English or Japanese for 300 USD. Register today.


Content reference:- https://aws.amazon.com/about-aws/whats-new/2019/02/updated-professional-level-aws-certification-exams/

Monday, February 18, 2019

AWS news update



Deploy a Kubernetes Cluster Using Amazon EKS with New Quick Start

This Quick Start automatically deploys a Kubernetes cluster that uses Amazon Elastic Container Service for Kubernetes (Amazon EKS), enabling you to deploy, manage, and scale containerized applications running on Kubernetes on the Amazon Web Services (AWS) Cloud. The deployment takes about 25 minutes. Amazon EKS runs the Kubernetes management infrastructure for you across multiple AWS Availability Zones to eliminate a single point of failure. Amazon EKS is also certified Kubernetes-conformant.
This reference deployment provides custom resources that enable you to deploy and manage your Kubernetes applications using AWS CloudFormation by declaring Kubernetes manifests or Helm charts directly in AWS CloudFormation templates.
This Quick Start is intended for users who are looking for a repeatable, customizable reference deployment for Amazon EKS using AWS CloudFormation.
To get started:

View the deployment guide for step-by-step instructions

Download the AWS CloudFormation templates that automate the deployment

For additional AWS Quick Start reference deployments, see the catalog.
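
Because the Quick Start is delivered as CloudFormation templates, the deployment can also be launched programmatically. A hedged boto3 sketch follows; the template URL is a hypothetical placeholder for the entrypoint template linked from the deployment guide, and real deployments need the parameters that guide describes:

import boto3

cloudformation = boto3.client("cloudformation")

cloudformation.create_stack(
    StackName="amazon-eks-quickstart",
    TemplateURL="https://example-bucket.s3.amazonaws.com/eks-quickstart-entrypoint.template.yaml",  # hypothetical placeholder
    Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],  # the Quick Start creates IAM roles
    # Parameters=[...]  # supply the values listed in the deployment guide (availability zones, key pair, CIDRs, etc.)
)

# Block until the roughly 25-minute deployment finishes.
cloudformation.get_waiter("stack_create_complete").wait(StackName="amazon-eks-quickstart")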



Content Reference:- https://aws.amazon.com/about-aws/whats-new/2019/02/deploy-a-kubernetes-cluster-using-amazon-eks-with-new-quick-start/

Thursday, February 14, 2019

Most popular AWS interview questions

Most popular AWS interview questions

1.] What is AWS?
◆ : AWS stands for Amazon Web Services, a collection of remote computing services also known as a cloud computing platform. This category of cloud computing is commonly classified as IaaS, or Infrastructure as a Service.
2.] What are the key components of AWS?
◆ : The fundamental components of AWS are
• Route 53: a DNS web service
• Simple Email Service: lets you send email using a RESTful API call or standard SMTP
• Identity and Access Management (IAM): provides security and identity control for your AWS account
• Simple Storage Service (S3): a storage service and one of the most widely used AWS services
• Elastic Compute Cloud (EC2): provides on-demand compute capacity for hosting applications; especially valuable for variable workloads
• Elastic Block Store (EBS): provides persistent storage volumes that attach to EC2 instances, so data can outlive the lifespan of a single instance
• CloudWatch: monitors AWS resources, lets administrators view and collect key metrics, and can raise notification alarms when something goes wrong
3.] What is S3?
◆ : S3 stands for Simple Storage Service. You can use the S3 interface to store and retrieve any amount of data, at any time, from anywhere on the web. S3 is billed on a pay-as-you-go basis.
4.] What does an AMI include?
◆ : An AMI comprises the following elements
• A template for the root volume of the instance
• Launch permissions that determine which AWS accounts can use the AMI to launch instances
• A block device mapping that specifies the volumes to attach to the instance when it is launched
5.] How can you send a request to Amazon S3?
◆ : Amazon S3 is a REST service, so you can send requests using the REST API directly or with the AWS SDK wrapper libraries, which wrap the underlying Amazon S3 REST API.
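
As an illustration, the same put/get operations issued through the boto3 wrapper; the bucket and key names are placeholders:

import boto3

s3 = boto3.client("s3")

# The SDK signs and sends the underlying REST calls (PUT/GET on the object URL) for you.
s3.put_object(Bucket="my-example-bucket", Key="notes/hello.txt", Body=b"hello from boto3")

obj = s3.get_object(Bucket="my-example-bucket", Key="notes/hello.txt")
print(obj["Body"].read().decode())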

Wednesday, February 13, 2019

Most popular AWS interview questions

Most popular AWS interview questions

1.】 What are the important features of a Classic Load Balancer in EC2?

● The high-availability feature distributes traffic among EC2 instances in a single Availability Zone or across multiple Availability Zones, ensuring a high degree of availability for incoming traffic.

• A Classic Load Balancer can decide whether or not to route traffic to an instance based on the results of health checks.

• You can implement secure load balancing within a network by creating security groups in a VPC.

• A Classic Load Balancer supports sticky sessions, which ensure that traffic from a user is always routed to the same instance for a seamless experience (a configuration sketch follows below).
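
A boto3 sketch of configuring the health check and sticky sessions mentioned above on a Classic Load Balancer; the load balancer name, ports, and thresholds are placeholders:

import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Route traffic only to instances that pass this health check.
elb.configure_health_check(
    LoadBalancerName="my-classic-lb",   # placeholder load balancer name
    HealthCheck={
        "Target": "HTTP:80/health",
        "Interval": 30,
        "Timeout": 5,
        "UnhealthyThreshold": 2,
        "HealthyThreshold": 3,
    },
)

# Enable duration-based sticky sessions so a user keeps hitting the same instance.
elb.create_lb_cookie_stickiness_policy(
    LoadBalancerName="my-classic-lb",
    PolicyName="sticky-hour",
    CookieExpirationPeriod=3600,
)
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName="my-classic-lb",
    LoadBalancerPort=80,
    PolicyNames=["sticky-hour"],
)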

2.】 What parameters will you take into consideration when choosing the availability zone ? 

● Performance, pricing, latency, and response time are some of the factors to consider when selecting the availability zone. 

3.】 Which instance will you use for deploying a 4-node Hadoop cluster in AWS?

● We can use a c4.8xlarge instance or an i2.xlarge instance for this, but using a c4.8xlarge will require a more capable configuration on the client PC.

4.】 Will you use encryption for S3 ? 

● It is better to consider encryption for sensitive data on S3 as it is a proprietary technology. 

5.】 How can you send a request to Amazon S3?

● Using the REST API or the AWS SDK wrapper libraries which wrap the underlying Amazon S3 REST API. 

Most popular AWS interview questions

Most popular AWS interview questions


1. What type of performance can you expect from Elastic Block Storage? How do you back it up and enhance the performance?
The performance of Elastic Block Store volumes varies; it can go above the SLA performance level and then drop below it. The SLA provides an average disk I/O rate, which can at times frustrate performance experts who want reliable and consistent disk throughput on a server; virtual AWS instances do not behave this way. You can back up EBS volumes through a graphical user interface such as ElasticFox, or use the snapshot facility through an API call. Performance can also be improved by using Linux software RAID and striping across four volumes.
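
For example, the snapshot-based backup mentioned above can be driven through the API with boto3; the volume ID is a placeholder:

import boto3

ec2 = boto3.client("ec2")

# Create a point-in-time snapshot of an EBS volume (stored durably in S3 behind the scenes).
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",          # placeholder volume ID
    Description="Nightly backup of data volume",
)

# Optionally block until the snapshot is complete before proceeding.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])
print("Snapshot ready:", snapshot["SnapshotId"])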

2. Imagine that you have an AWS application that requires 24x7 availability and can be down only for a maximum of 15 minutes. How will you ensure that the database hosted on your EBS volume is backed up?
Automated backups are the key process here, as they work in the background without requiring any manual intervention. Whenever there is a need to back up the data, the AWS API and AWS CLI play a vital role in automating the process through scripts. The best approach is to schedule timely snapshots of the EBS volumes attached to the EC2 instance. The EBS snapshots are stored in Amazon S3 and can be used to recover the database instance in case of any failure or downtime.

3. You create a Route 53 latency record set from your domain to a system in Singapore and a similar record to a machine in Oregon. When a user located in India visits your domain, to which location will they be routed?
Assuming the application is hosted on Amazon EC2 instances, with copies deployed in multiple EC2 regions, the request is most likely to go to Singapore, because Amazon Route 53 latency-based routing sends each request to the location that is likely to give the fastest response.

4. Differentiate between on-demand instance and spot instance.
Spot Instances are spare unused EC2 instances which one can bid for. Once the bid exceeds the existing spot price (which changes in real-time based on demand and supply) the spot instance will be launched. If the spot price becomes more than the bid price then the instance can go away anytime and terminate within 2 minutes of notice. The best way to decide on the optimal bid price for a spot instance is to check the price history of the last 90 days that is available on the AWS console. The advantage of spot instances is that they are cost-effective and the drawback is that they can be terminated anytime. Spot instances are ideal to use when –

• There are optional, nice-to-have tasks.
• You have flexible workloads that can be run when there is enough compute capacity.
• Tasks that require extra computing capacity to improve performance.
On-demand instances are made available whenever you require them, and you pay for the time you use them on an hourly basis. These instances can be released when they are no longer required and do not require any upfront commitment. The availability of these instances is guaranteed by AWS, unlike spot instances.
The best practice is to launch a couple of on-demand instances which can maintain a minimum level of guaranteed compute resources for the application, and add a few spot instances whenever there is an opportunity.
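
The 90-day price history mentioned above is also available through the API. A boto3 sketch that pulls recent spot prices for one instance type, where the instance type and platform are illustrative:

import boto3
from datetime import datetime, timedelta

ec2 = boto3.client("ec2")

# Look at the last 90 days of spot prices to pick a sensible bid.
history = ec2.describe_spot_price_history(
    InstanceTypes=["m5.large"],            # illustrative instance type
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.utcnow() - timedelta(days=90),
    EndTime=datetime.utcnow(),
    MaxResults=20,
)

for price in history["SpotPriceHistory"]:
    print(price["AvailabilityZone"], price["SpotPrice"], price["Timestamp"])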


5. How will you access the data on EBS in AWS ?
Elastic Block Store, as the name indicates, provides persistent, highly available, high-performance block-level storage that can be attached to a running EC2 instance. The storage can be formatted and mounted as a file system, or the raw storage can be accessed directly.
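
A boto3 sketch of attaching a volume to a running instance (the IDs and device name are placeholders); formatting and mounting then happen inside the instance's operating system:

import boto3

ec2 = boto3.client("ec2")

# Attach an existing EBS volume to a running instance in the same Availability Zone.
ec2.attach_volume(
    VolumeId="vol-0123456789abcdef0",    # placeholder volume ID
    InstanceId="i-0123456789abcdef0",    # placeholder instance ID
    Device="/dev/sdf",                   # device name exposed to the instance
)
# On the instance itself you would then create a filesystem (e.g. mkfs) and mount it,
# or read from and write to the raw block device directly.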

In the realm of cyber security, ethical hacking plays a crucial role in identifying and addressing vulnerabilities. One of the areas where e...