Saturday, March 16, 2019

AWS news update

AWS IoT Greengrass Adds New Connector for AWS IoT Analytics, Support for AWS CloudFormation Templates, and Integration with Fleet Indexing

AWS IoT Greengrass now offers a new connector for AWS IoT Analytics and supports AWS CloudFormation templates. Now, you can quickly set up connections between your IoT Greengrass cores and AWS IoT Analytics for analysis of complex IoT data, and you can streamline IoT Greengrass deployments for one or more accounts using familiar CloudFormation templates.

The new IoT Analytics connector can be installed via either the API or the console. You can configure the maximum memory footprint and data retention behavior of the connector; after deployment, the connector consumes data from a defined MQTT topic and transmits it to IoT Analytics immediately or in batches. You can get started with this connector by visiting the AWS IoT Greengrass Developer Guide.
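As a rough sketch of the API route, the connector can be referenced from a connector definition created with the AWS CLI. The connector ARN and parameter names below are illustrative placeholders; check the Developer Guide for the exact values for your region and use case.

aws greengrass create-connector-definition \
    --name "IoTAnalyticsConnector" \
    --initial-version '{
        "Connectors": [{
            "Id": "iot-analytics-connector",
            "ConnectorArn": "arn:aws:greengrass:us-east-1::/connectors/IoTAnalytics/versions/1",
            "Parameters": { "MemorySize": "65535", "PublishRegion": "us-east-1" }
        }]
    }'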

IoT Greengrass resource types (groups, cores, devices, functions, subscriptions, and connectors) are now supported in CloudFormation. Now you can write CloudFormation templates that automate setup and configuration for your IoT Greengrass deployments. Getting started is easy - you can build a new template from scratch in CloudFormation or start from an example template available via the IoT Greengrass documentation.
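For example, once you have a template that declares resource types such as AWS::Greengrass::Group and AWS::Greengrass::CoreDefinition, deploying it is the usual CloudFormation workflow (the template file and stack name below are placeholders):

aws cloudformation validate-template --template-body file://greengrass-group.yaml
aws cloudformation deploy --template-file greengrass-group.yaml --stack-name my-greengrass-group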

IoT Greengrass devices are now fully integrated with AWS IoT Fleet Indexing, a capability of AWS IoT Device Management that enables customers to index and search for their devices based on device attributes or state in the cloud. IoT Greengrass devices establish at least one connection where the ClientID is the same as the device's Thing Name, so customers can use device connectivity indexing to quickly discover which IoT Greengrass devices are currently connected to or disconnected from AWS IoT.
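As a quick sketch, assuming thing connectivity indexing has been enabled for your account, connected Greengrass devices can be found from the AWS CLI like this:

# One-time setup: index the registry and connectivity status
aws iot update-indexing-configuration \
    --thing-indexing-configuration "thingIndexingMode=REGISTRY,thingConnectivityIndexingMode=STATUS"

# List things that are currently connected to AWS IoT
aws iot search-index --index-name "AWS_Things" --query-string "connectivity.connected:true"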

These features are available to all customers in all regions where IoT Greengrass is available. For more information about IoT Greengrass, visit https://aws.amazon.com/greengrass/.

Thursday, March 14, 2019

AWS video

What Are Resource Groups?

Note

This content describes legacy Resource Groups. For information about the new AWS Resource Groups service, see the AWS Resource Groups User Guide.

In AWS, a resource is an entity that you can work with. Examples include an Amazon EC2 instance, an AWS CloudFormation stack, and an Amazon S3 bucket. If you work with multiple resources, you might find it useful to manage them as a group rather than move from one AWS service to another for each task.

Resource Groups helps you do just that. By default, the AWS Management Console is organized by AWS service. But with the Resource Groups tool, you can create a custom console that organizes and consolidates information based on your project and the resources that you use. If you manage resources in multiple regions, you can create a resource group to view resources from different regions on the same page.

Resource Groups can display metrics, alarms, and configuration details. If you need more detailed information or you want to change a setting for a given resource, choosing a link takes you to the page you need.

For example, let's say you are developing a web application, and you are maintaining separate sets of resources for your alpha, beta, and release environments. Each version runs on Amazon EC2 with an Amazon Elastic Block Store storage volume. You use Elastic Load Balancing to manage traffic and Route 53 to manage your domain. Without the Resource Groups tool, you might have to access multiple consoles just to check the status of your services or modify the settings for one version of your application.

With the Resource Groups tool, you use a single page to view and manage your resources. For example, let's say you use the tool to create a resource group for each version of your application: alpha, beta, and release. To check your resources for the alpha version of your application and see whether any CloudWatch alarms have been triggered, simply open your resource group and view the consolidated information on your resource group page. To modify a specific resource, choose the appropriate links on your resource group page to quickly access the service console with the settings that you need.

As other examples, you could also use the Resource Groups tool for the following types of projects:

A blog that has different phases, such as development, staging, and production

 

Projects managed by multiple departments or individuals

 

A set of AWS resources that you use together for a common project or that you want to manage or monitor as a group

How Resource Groups Work

A resource group is a collection of resources that share one or more tags or portions of tags. To create a resource group, you simply identify the tags that the members of the group should have in common.

If you or your administrator uses the AWS Identity and Access Management (IAM) service to create multiple users in the same account, those users have their own individual resource groups. These groups are not visible to other users. However, each user can share a resource group with others in the same account by sharing a URL, which lets another user create a resource group with the same parameters. For information about creating IAM users, see Creating an IAM User in the IAM User Guide. For information about sharing resources, see Sharing a Resource Group.

The tags themselves function like properties of a resource, so they are shared across the entire account. That way, users in a department can draw from a common vocabulary (tags) within the department or account to create resource groups that are meaningful to their roles and responsibilities. Having a common pool of tags also means that when users share a resource group, they don't have to worry about missing or conflicting tag information.

How Tagging Works

Tags are words or phrases that act as metadata for organizing your AWS resources. With most AWS resources, you have the option of adding tags when you create the resource, whether it's an Amazon EC2 instance, an Amazon S3 bucket, or other resource. However, you can also add tags to multiple resources at once by using Tag Editor. You simply search for resources of various types and then add, remove, or replace tags for the resources in your search results.
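Tag Editor is a console feature, but the same kind of bulk tagging can be done from the command line through the Resource Groups Tagging API. The ARNs and tag values below are placeholders; this is only a sketch of the idea:

aws resourcegroupstaggingapi tag-resources \
    --resource-arn-list "arn:aws:s3:::my-project-bucket" "arn:aws:ec2:us-east-1:123456789012:instance/i-0abcd1234efgh5678" \
    --tags Project=WebApp,Stage=alpha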

https://youtu.be/7VMRplShhnI

Tuesday, March 12, 2019

AWS news

Tag-on Create and Tag-Based IAM Application for Amazon Kinesis Data Firehose

Tag-on create is now available for Kinesis Data Firehose. You can now tag your Amazon Kinesis Data Firehose delivery streams at creation time from the AWS Management Console or by using the AWS APIs. By tagging resources at the time of creation, you eliminate the need to run custom tagging scripts after resource creation.

In addition, you can now use tags to apply IAM policies to your Amazon Kinesis Data Firehose delivery streams at creation time. This enables automated policy application and gives you more granular control over who has access to the Amazon Kinesis Data Firehose APIs. You can also enforce the use of tagging and control which tag keys and values are set on your resources.
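A minimal sketch of tag-on create from the AWS CLI is shown below; the stream name, role ARN, and bucket ARN are placeholders, and the destination configuration is trimmed to the bare minimum:

aws firehose create-delivery-stream \
    --delivery-stream-name my-clickstream \
    --delivery-stream-type DirectPut \
    --extended-s3-destination-configuration "RoleARN=arn:aws:iam::123456789012:role/firehose-delivery-role,BucketARN=arn:aws:s3:::my-analytics-bucket" \
    --tags Key=team,Value=analytics Key=environment,Value=production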

Visit the AWS User Documentation to learn more about tag-on create and tag-based IAM application for Amazon Kinesis Data Firehose.  

These capabilities are available at no additional cost in all AWS commercial regions where Amazon Kinesis Data Firehose is available.

https://aws.amazon.com/about-aws/whats-new/2019/03/tag-on-create-and-tag-based-iam-application-for-amazon-kinesis-data-firehose/

Monday, March 11, 2019

3 Steps to Perform SSH Login Without Password Using ssh-keygen & ssh-copy-id

ssh-keygen creates public and private keys. ssh-copy-id copies the local host's public key to the remote host's authorized_keys file. ssh-copy-id also assigns the proper permissions to the remote host's home directory, ~/.ssh, and ~/.ssh/authorized_keys.


                                          

Step 1: Create public and private keys using ssh-keygen on local-host


root@local-host$ [Note: You are on local-host here]

root@local-host$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/root/.ssh/id_rsa):[Enter key]
Enter passphrase (empty for no passphrase): [Press enter key]
Enter same passphrase again: [Press enter key]
Your identification has been saved in /home/root/.ssh/id_rsa.
Your public key has been saved in /home/root/.ssh/id_rsa.pub.
The key fingerprint is:
33:b3:fe:af:95:95:18:11:31:d5:de:96:2f:f2:35:f9 root@local-host

Step 2: Copy the public key to remote-host using ssh-copy-id, or copy it manually into ~/.ssh/authorized_keys in the remote user's home directory (a manual one-liner is shown at the end of this step)

root@local-host$ ssh-copy-id -i ~/.ssh/id_rsa.pub remote-host
root@remote-host's password:
Now try logging into the machine, with "ssh 'remote-host'", and check in:

.ssh/authorized_keys
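If ssh-copy-id is not available on local-host, the manual copy mentioned in Step 2 can be done with a one-liner like the one below (a sketch assuming OpenSSH defaults; it also creates ~/.ssh on remote-host if it does not exist):

cat ~/.ssh/id_rsa.pub | ssh remote-host "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys"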

Step 3: Login to remote-host without entering the password

root@local-host$ ssh remote-host
Last login: Sun Nov 16 17:22:33 2008 from 192.168.1.2
[Note: SSH did not ask for password.]

root@remote-host$ [Note: You are on remote-host here]
In case you see a "No identities found" error, use the commands below.


If you have loaded keys into the ssh-agent using ssh-add, then ssh-copy-id will get the keys from the ssh-agent to copy to the remote-host. That is, when you don't pass the -i option to ssh-copy-id, it copies the keys listed by the ssh-add -L command to the remote-host.


root@local-host$ ssh-add -L
The agent has no identities.

root@local-host$ ssh-add
Identity added: /home/root/.ssh/id_rsa (/home/root/.ssh/id_rsa) 
root@local-host$ ssh-add -L
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAsJIEILxftj8aSxMa3d8t6JvM79DyBV
aHrtPhTYpq7kIEMUNzApnyxsHpH1tQ/Ow== /home/root/.ssh/id_rsa

Saturday, March 9, 2019

AWS

Top 12 SSL Interview Questions | Network Security


SSL, short for Secure Sockets Layer, is responsible for protecting data in transit from source to destination. Here is a list of SSL interview questions and answers that are commonly asked in interviews.

Q1. What are SSL certificates?

Ans: SSL is a standard security protocol that ensures the confidentiality and integrity of data in transit. It encrypts the data flow between the web browser and the web server, which ensures confidentiality. The browser and server also exchange keys during the handshake that are used to verify the data has not been tampered with, which ensures integrity.

Q2. Explain how SSL works?

Ans: The SSL/TLS layer provides confidentiality and integrity while data is in transit from source to destination.

Steps involved:

The user initiates the connection by typing the website address. The browser starts SSL/TLS communication by sending a message to the website's server.

The website's server sends back its public key or certificate to the user's browser.

The user's browser checks the public key or certificate. If it is OK, the browser creates a symmetric key, encrypts it with the server's public key, and sends it back to the website's server. If the certificate is not OK, the communication fails.

On receiving the encrypted symmetric key, the website's server decrypts it with its private key and sends back the requested data encrypted with the symmetric key.

The user's browser decrypts the content using the symmetric key, which completes the SSL/TLS handshake. The user can now see the content because the secure connection is established.


Q3. What is asymmetric and symmetric encryption?

Ans: The major difference is the keys involved: symmetric cryptography uses a single shared key for both encryption and decryption, while asymmetric cryptography uses a public/private key pair, where data encrypted with one key can only be decrypted with the other.

Q4. How does SSL use both asymmetric and symmetric encryption?

Ans: SSL uses symmetric encryption to encrypt data between the browser and the web server, while asymmetric encryption is used to exchange the generated symmetric key and to validate the identity of the server (and, optionally, the client).

Q5. What is a Certificate Signing Request (CSR)?

Ans: A Certificate Signing Request (CSR) is encoded information that contains the applicant's details, such as the common name, organization name, email address, city, state, and country. This encoded information is used by a Certificate Authority (CA) to issue an SSL certificate to the applicant.

Q6. What does a CSR look like?

Ans: A CSR is Base64-encoded text that starts with a "-----BEGIN CERTIFICATE REQUEST-----" line and ends with an "-----END CERTIFICATE REQUEST-----" line.
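For example, a CSR and a new private key can be generated with OpenSSL; the file names and subject fields below are placeholders:

openssl req -new -newkey rsa:2048 -nodes \
    -keyout example.com.key -out example.com.csr \
    -subj "/C=US/ST=California/L=San Francisco/O=Example Inc/CN=example.com"

# View the decoded contents of the request
openssl req -in example.com.csr -noout -text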

Q7. Discuss some public-key encryption algorithms used in SSL.

Ans: Public-key encryption is used to exchange the symmetric key between the browser and the web server. Some of the algorithms used are RSA and Elliptic Curve Cryptography (ECC).

Q8. What are pre-shared key encryption algorithms?

Ans: Pre-shared key encryption algorithms are the symmetric algorithms used to encrypt data between the browser and the web server. The most commonly used are AES, Twofish, and Blowfish.

Q9. What are the authentication levels of SSL/TLS certificates?

Ans: Authentication levels refer to how thoroughly the identity behind a hosted URL has been verified. A Certificate Authority (CA) issues certificates to an organization after validating its identity. Certificates are mainly categorized into Domain Validation (DV), Organization Validation (OV), and Extended Validation (EV).

Q10. Explain Domain Validation (DV) authentication in SSL.

Ans: This is the lowest level of validation done by the Certificate Authority (CA) before issuing a certificate. Here, the CA only verifies whether the applicant controls the domain. This process can be done via email.

Q11. Explain Organization Validation (OV) authentication in SSL.

Ans: This is the medium level of validation done by the Certificate Authority (CA) before issuing a certificate. Here, the CA validates the name, state, and country of the organization. This process may include physically verifying the organization's location.

Q12. Explain Extended Validation (EV) authentication in SSL.

Ans: This is the highest level of validation done by the Certificate Authority (CA) before issuing a certificate. Here, the CA validates the ownership, physical location, state, and country of the organization. This process includes physically verifying the organization's location and checking the legal existence of the company.


Wednesday, March 6, 2019

AWS

OpenSSL s_client Commands



OpenSSL is a multi-platform, open source SSL/TLS toolkit. OpenSSL can be downloaded from http://www.openssl.org/

The OpenSSL command line tool can be used for several purposes, such as creating certificates, viewing certificates, and testing HTTPS services/connectivity. This document provides a summary of "openssl s_client" commands, which can be used to test connectivity to SSL services. This document assumes that you have the openssl software installed.

Testing HTTPS Services Using "openssl s_client -connect" Command

 The following command can be used to test connectivity to an https service.

openssl s_client -connect <hostname>:<port>

For example : 

openssl s_client -connect pingfederate.example.com:443 
 

This will open an SSL connection to pingfederate.example.com on port 443 and print the SSL certificate used by the service. After connecting, you can manually send HTTP requests. This is similar to using telnet to connect to an HTTP service and manually sending an HTTP request, e.g., a GET request.
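For example, once connected you can type a minimal request by hand (the host name is a placeholder; press Enter twice after the Host header to send it):

GET / HTTP/1.1
Host: pingfederate.example.com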

If openssl fails to connect it will wait until a timeout occurs and will print an error similar to the following : 

connect: Operation timed out

If you use the openssl client to connect to a non-SSL service (e.g., port 80 instead of 443), the client will connect but an SSL handshake will not take place. The "CONNECTED(00000003)" message will be printed as soon as a socket is opened, but then the client will wait until a timeout occurs and an error similar to the following will be printed.

44356:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:/SourceCache/OpenSSL098/OpenSSL098-47.1/src/ssl/s23_lib.c:182:

-showcerts

Adding the -showcerts parameter to this command will print all certificates in the certificate chain presented by the SSL service. This may be useful in troubleshooting missing intermediate certificate authority certificates, as described in the related knowledge base article.

openssl s_client -connect <hostname>:<port> -showcerts

-ssl2

Adding this parameter forces openssl to use only SSLv2. This option is useful in testing supported SSL protocol versions. For example, you can use this command to test whether SSLv2 is enabled or not.

openssl s_client -connect <hostname>:<port> -ssl2

-ssl3,-tls1,-dtls1

Similar to the -ssl2 switch, -ssl3, -tls1, and -dtls1 force SSLv3, TLSv1, and DTLSv1 respectively.

-cipher 

This parameter allows you to force a specific cipher. This option is useful in testing enabled SSL ciphers. For example, after disabling weak ciphers, you can try connecting with a disabled cipher to verify that it has been disabled successfully.

You can use the "openssl ciphers" command to see a list of available ciphers for OpenSSL (these are the ciphers available to the openssl client; the list is not related to the PingFederate service).
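To print that list with protocol and key-exchange details (the output varies by OpenSSL version), run:

openssl ciphers -v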

For example, to force a specific cipher when connecting:

openssl s_client -connect <hostname>:<port> -cipher DHE-RSA-AES256-SHA

Using a cipher not supported by the server results in an error similar to the following.

openssl s_client -connect google.com:443 -cipher EXP-RC4-MD5

CONNECTED(00000003)

42792:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:/SourceCache/OpenSSL098/OpenSSL098-47.1/src/ssl/s23_clnt.c:602

So, after disabling a weak cipher, you can verify whether it has been disabled by using this command.

The following Ping Identity knowledge base articles also refer to OpenSSL commands : 

New SSL certificate not trusted by Firefox web browser


Converting a DER x509 certificate to PEM


Search on "OpenSSL" in the knowledge base for a complete list of articles on OpenSSL.


https://ping.force.com/Support/PingFederate/Administration/OpenSSL-s-client-Commands

Monday, March 4, 2019

AWS video

Jenkins is an open source, Java-based automation server that offers an easy way to set up a continuous integration and continuous delivery (CI/CD) pipeline.


Continuous integration (CI) is a DevOps practice in which team members regularly commit their code changes to the version control repository, after which automated builds and tests are run. Continuous delivery (CD) is a series of practices where code changes are automatically built, tested and deployed to production.


This tutorial will walk you through the steps of installing Jenkins on a CentOS 7 system using the official Jenkins repository.



Before continuing with this tutorial, make sure you are logged in as a user with sudo privileges.



To install Jenkins on your CentOS system, follow the steps below:


Jenkins is a Java application, so the first step is to install Java. Run the following command to install the OpenJDK 8 package:

sudo yum install java-1.8.0-openjdk-devel

The current version of Jenkins does not support Java 10 (and Java 11) yet. If you have multiple versions of Java installed on your machine make sure Java 8 is the default Java version.
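On CentOS, the default Java version can be switched interactively with the alternatives tool:

sudo alternatives --config java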


The next step is to enable the Jenkins repository. Add the repository to your system using the following curl command:

curl --silent --location http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo | sudo tee /etc/yum.repos.d/jenkins.repo

Then import the repository's GPG key with:

sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key


Once the repository is enabled, install the latest stable version of Jenkins by typing:

sudo yum install jenkins

After the installation process is completed, start the Jenkins service with:

sudo systemctl start jenkins

To check whether it started successfully run:

systemctl status jenkins

You should see something similar to this:

● jenkins.service - LSB: Jenkins Automation Server
   Loaded: loaded (/etc/rc.d/init.d/jenkins; bad; vendor preset: disabled)
   Active: active (running) since Thu 2018-09-20 14:58:21 UTC; 15s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 2367 ExecStart=/etc/rc.d/init.d/jenkins start (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/jenkins.service

Finally, enable the Jenkins service to start on system boot:

sudo systemctl enable jenkins

jenkins.service is not a native service, redirecting to /sbin/chkconfig.
Executing /sbin/chkconfig jenkins on



If you are installing Jenkins on a remote CentOS server that is protected by a firewall, you need to open port 8080.


Use the following commands to open the necessary port:
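Assuming firewalld, the default firewall on CentOS 7:

sudo firewall-cmd --permanent --zone=public --add-port=8080/tcp
sudo firewall-cmd --reload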

For more information, please click this link:

https://youtu.be/7RIozgzYkX0

Friday, March 1, 2019

AWS videos

Jenkins Tutorial



Jenkins is a powerful application that enables continuous integration and continuous delivery of projects, regardless of the platform you are working on. It is free and open source, and it can handle any kind of build or continuous integration. You can integrate Jenkins with a number of testing and deployment technologies. In this tutorial, we explain how you can use Jenkins to build and test your software projects continuously.

Audience


This tutorial will help software testers who would like to learn how to build and test their projects continuously, so that developers can integrate changes into the project as quickly as possible and obtain fresh builds.

Prerequisites


Jenkins is a popular tool for performing continuous integration of software projects. This is a preliminary tutorial that covers the most fundamental concepts of Jenkins. Any software professional with a good understanding of the Software Development Life Cycle should benefit from this tutorial.

For more information, please click this link:

https://youtu.be/KJDo6YmjQhg
