SSH Tunnel AWS Elasticsearch Service from local machine

1. No more signing every request

Remember this?

let creds = new AWS.SharedIniFileCredentials({profile: esProfile });
let signer = new AWS.Signers.V4(req, 'es');
signer.addAuthorization(creds, new Date());

Every request had to be signed with AWS’s SigV4 so that the Elasticsearch endpoint could be properly authorized. That meant additional code to sign all your requests, and additional time for the endpoint to decode it. It might only be a few milliseconds of extra processing time, but those can add up. Now we can call our VPC Elasticsearch endpoint with a simple HTTP request.
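For example, from an instance inside the VPC, a plain unsigned request is now enough; the hostname below is the same placeholder used later in this post:

curl -XGET https://vpc-YOUR-ES-CLUSTER.us-east-1.es.amazonaws.com/_cluster/health?pretty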

2. No need to set up NATs or Internet Gateways

If your apps don’t require outgoing access to the Internet, you no longer need to set up NAT gateways or Internet Gateways just to reach your Elasticsearch cluster. This saves you both complexity and money, since there are fewer configurations to maintain, especially in a multi-Availability Zone deployment.

3. More secure, no more publicly available URLs protected by weak IP restrictions

Elasticsearch has no built-in security, so we used to simply restrict access to our EC2 instances that were running ES using security groups. AWS’s Elasticsearch Service, however, only allowed for a publicly accessible URL, requiring additional levels of security to authorize access, like signing the request. This meant managing your cluster locally from the command line, or accessing Kibana, required you to compromise security by authorizing specific IP addresses to have access to the cluster. This was a terrible idea and opened up huge security risks. VPC-based ES clusters are no longer publicly accessible, which closes that security hole.

Accessing Your Elasticsearch Cluster Locally

All this new VPC stuff is great, but if you read the ES documentation you probably noticed this:

To access the default installation of Kibana for a domain that resides within a VPC, users must first connect to the VPC. This process varies by network configuration, but likely involves connecting to a VPN or corporate network.

This is also true if you want to access your ES cluster from the command line. The URL is no longer publicly accessible, and in fact, routes to an internal VPC IP address. If you already have a VPN solution that allows you to connect to your VPC, then configuring your security groups correctly should work. However, if you don’t have a VPN configured, you can solve your problem using a simple SSH tunnel with port forwarding.

Step 1:
You need to have an EC2 instance running in the same VPC as your Elasticsearch cluster. If you don’t, fire up a micro Linux instance with a secure key pair.

NOTE: Make sure your instance’s security group has access to the Elasticsearch cluster and that your Elasticsearch cluster’s access policy uses the “Do not require signing request with IAM credential” template.

Step 2:
Create an entry in your SSH config file (~/.ssh/config on a Mac/Linux Distro):

# Elasticsearch Tunnel
Host estunnel
HostName 12.34.56.78 # your server's public IP address
User ec2-user
IdentitiesOnly yes
IdentityFile ~/.ssh/MY-KEY.pem
LocalForward 9200 vpc-YOUR-ES-CLUSTER.us-east-1.es.amazonaws.com:443

NOTE: The “HostName” should be your instance’s PUBLIC IP address or DNS. “User” should be your Linux distro’s default user (ec2-user if using Amazon Linux).

Step 3:
Run ssh estunnel -N from the command line
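If you would rather not keep a terminal window tied up, ssh's -f flag backgrounds the tunnel once it is established; a minimal variant of the same command:

ssh -f -N estunnel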

Step 4:
localhost:9200 should now be forwarded to your secure Elasticsearch cluster.

Access via a web browser, ignore the invalid SSL certificate:
Search: https://localhost:9200
Kibana: https://localhost:9200/_plugin/kibana

Access via cURL, be sure to use the -k option to ignore the security certificate:
curl -k https://localhost:9200/

Access programmatically (example in Node.js). Be sure to use the corresponding option to ignore SSL certificates (as in strictSSL below):

const REQUEST = require('request-promise')

const options = {
  method: 'GET',
  uri: 'https://localhost:9200/',
  strictSSL: false,
  json: true
};

REQUEST(options).then(res => { console.log(res) })

And that’s it! Now you can take advantage of the benefits of VPC-based Elasticsearch clusters and still maintain your local development workflows.


Log-rotate and push backups to AWS S3

Every restaurant has a base sauce ready for different dishes; in the same way, here is a base shell script to log-rotate custom logfiles and push the archives to S3. You can modify the script to your taste (^_-)

To begin with, install s3cmd on your server and configure it, then create a tmp directory inside your /<absolute_application_log_path>.
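If s3cmd is not set up yet, a minimal sketch of that preparation might look like this (package names and the log path are placeholders; adjust for your distro):

# Install s3cmd via pip (or your distro's package manager)
pip install s3cmd
# Interactive configuration: prompts for access key, secret key and defaults
s3cmd --configure
# Create the tmp directory used by the rotation script below
mkdir -p /<absolute_application_log_path>/tmp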

I have a habit of storing all my customized scripts in "/opt/scripts", which I call my script home.

  • <file_name> – the name of your application log file, e.g. access.log
  • <absolute_application_log_path> – the absolute path of your log directory, e.g. /var/log/nginx

I’ll create a logrotate configuration file (logrotate_<file_name>.log) in my script home.

/<absolute_application_log_path>/<file_name>.log {
size 10M
missingok
rotate 10
dateext
dateformat -%d%m%Y
notifempty
copytruncate
}
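Before wiring the configuration into the script, you can dry-run it; logrotate's -d flag prints what would happen without touching any files:

logrotate -d /opt/scripts/logrotate_<file_name>.log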

I’ll create my shell script (<script_name>.sh) in my script home.

#!/bin/bash
now=$(date +"%Y-%m-%d")

# Clear out leftovers from the previous run
rm -rf /<absolute_application_log_path>/tmp/*
# Rotate the log according to the config created above
logrotate -v /opt/scripts/logrotate_<file_name>.log
# Move the freshly rotated files (e.g. <file_name>.log-<date>) into tmp
mv -f /<absolute_application_log_path>/<file_name>.log-* /<absolute_application_log_path>/tmp/
# Archive them and push the archive to S3
cd /<absolute_application_log_path>/tmp/ && tar -czvf <file_name>-${now}.tar.gz *
s3cmd put /<absolute_application_log_path>/tmp/*.tar.gz s3://<s3_bucket_name>/
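To confirm the archive actually landed in the bucket, a quick check with s3cmd (the bucket name is the same placeholder as above):

s3cmd ls s3://<s3_bucket_name>/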

Finally, set up a daily cron job:

59 23 * * * /bin/bash /opt/scripts/<script_name>.sh

Test Amazon SES mailers region-wise with an openssl script

We use Amazon SES to run our commercial mailers for promotions and marketing.

Suppose your Amazon SES is configured in four regions for the same mailer and you have gotten it out of sandbox mode. How do you test the mailers from all of these locations?

I created a solution for this, and I would like to share the script and procedure, keeping it short and sweet. We need the following two files:

  • convercreds.sh
#!/bin/bash
# Base64-encode the SES key and secret passed as arguments, for use with AUTH LOGIN
echo -n "$1" | openssl enc -base64
echo -n "$2" | openssl enc -base64
  • input.txt
EHLO mailer.<domain>.com 
AUTH LOGIN
<base64-key>
<base64-secret>
MAIL FROM: <user>@mailer.<domain>.com
RCPT TO: <any_of_you_email_id_to_test>@<mailer>.com
DATA
From: <User> Mailer <user@mailer.<domain>.com>
To: <any_of_you_email_id_to_test>@<mailer>.com
Subject: Amazon SES SMTP Test (<Region> Activated)

This message was sent using the Amazon SES SMTP interface (<Region>).
.
QUIT
  • First run ./convercreds.sh <Amazon SES key> <Amazon SES Secret>
  • Now you will get the <base64-key> and <base64-secret> values; paste them into input.txt.
  • Now replace all the ‘<>’ fields in the input.txt I have mentioned above with your values.
  • Now run the following command
openssl s_client -crlf -quiet -starttls smtp -connect email-smtp.<region>.amazonaws.com:587 < input.txt
  • Replace <region> each time you want to test a new region; a wrapper loop is sketched below.
  • Also ensure that whenever you change the region, you regenerate the base64 credentials from the key and secret for that region’s account.
  • Try this out and let me know if you have any queries.
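As mentioned above, here is a hypothetical wrapper that loops over the regions you use and replays a per-region input file against each SMTP endpoint. It assumes you have already created input-<region>.txt files containing the matching base64 credentials for each region:

#!/bin/bash
# Loop over the SES regions to test; adjust the list to the regions you actually use.
for region in us-east-1 us-west-2 eu-west-1 ap-south-1; do
  echo "=== Testing SES in ${region} ==="
  # Replay the SMTP conversation from the region-specific input file
  openssl s_client -crlf -quiet -starttls smtp \
    -connect email-smtp.${region}.amazonaws.com:587 < input-${region}.txt
done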

Perform AWS EC2 Backup: Step-By-Step Guide.

Over the last decade, the sheer amount of data in the world has grown exponentially, thus making it hard for some organizations to manage and store critical pieces of information on a daily basis, let alone protect it from unexpected data loss as a result of hardware failure, software corruption, accidental deletion, malicious attack, or an unpredictable disaster. More issues may arise still when it comes to managing AWS EC2 environments and protecting data stored in the cloud.

In short, to back up AWS EC2 instances, you should choose one of the following options:

  1. Take an EBS snapshot;
  2. Create a new AMI;
  3. Design an AWS EC2 Backup plan;
  4. Automate AWS EC2 backup with a third-party solution.

AWS Backup is a rather new addition to the rich set of AWS services and tools, and is definitely worth your attention. AWS Backup is a valuable tool which can help you automatically back up and protect your data and applications in the AWS cloud as well as on-premises IT environments.

How to Back Up AWS EC2 Instances

AWS is a high-performance, constantly evolving cloud computing platform that allows you to store data and applications in the cloud environment. AWS can provide you with the tools you need to create EC2 instances which act as virtual servers with varying CPU, memory, storage, and networking capacity.

Currently, there are three ways to back up AWS EC2 instances: taking EBS snapshots, creating AMIs, or designing an AWS Backup plan. Let’s take a closer look at each of these approaches and see how they differ.

Taking EBS Snapshots

If you want to back up an AWS EC2 instance, you should create snapshots of EBS volumes, which are stored with the help of Amazon Simple Storage Service (S3). Snapshots can capture all data within EBS volumes and create their exact copies. Moreover, these EBS snapshots can then be copied and transferred to another AWS region to ensure safe and reliable storage of critical data. Thus, in case of a disaster or accidental data loss, you can be sure that you have a backup copy securely stored in a remote location which you can use for restoring critical data.

Prior to running AWS EC2 backup, it is recommended that you stop the instance or at least detach an EBS volume which is about to be backed up. This way, you can prevent failure or errors from occurring and affecting the newly created snapshots.

Please note that, for security purposes, some sensitive information has been removed.

To back up an AWS EC2 instance, you need to take the following steps:

1. Sign in to your AWS account to open the AWS console.

2. Select Services in the top bar and click EC2 to launch the EC2 Management Console.

EC2 Services in AWS EC2 Backup

3. Select Running Instances and choose the instance you would like to back up.

Running Instances in AWS EC2 Backup

4. In the bottom pane, you can view the key technical information about the instance. In the Description tab, find the Root device section and select the /dev/sda1 link.

Selecting Root Device in AWS EC2 Backup

5. In the pop-up window, find the volume’s EBS ID name and click it.

6. The Volumes section should open. Click Actions and select Create Snapshot.

Creating Snapshot in AWS EC2 Backup

7. The Create Snapshot box should open, where you can add a description for the snapshot to make it distinct from other snapshots, as well as assign tags to easily monitor this snapshot. Click Create Snapshot.

Configuring a New Snapshot in AWS EC2 Backup

8. The snapshot creation should start and complete fairly quickly; the main factor affecting how long it takes is the amount of data in your Amazon EBS volume.

After the snapshot creation is complete, you can find your new snapshot by selecting the Snapshots section in the left pane. As you can see, we have successfully created a point-in-time copy of the EBS volume, which can later be used to restore your EC2 instance.

Snapshot Storage (AWS EC2 Backup)

To restore from the snapshot, select the snapshot of the backed-up volume, press the Actions button above, and click Create Volume. Following the prompts, configure the volume details (volume type, size, IOPS, availability zone, tags). Then, click Create Volume and the new volume is created; it can later be attached to the AWS EC2 instance of your choice.

Restoring the snapshot in AWS EC2 Backup
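The same snapshot-and-restore flow can also be scripted with the AWS CLI; a minimal sketch, with placeholder IDs, region, and volume type:

# Take a snapshot of the EBS volume behind the instance
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "Backup of root volume"

# Later, restore by creating a new volume from that snapshot
aws ec2 create-volume \
  --snapshot-id snap-0123456789abcdef0 \
  --availability-zone us-east-1a \
  --volume-type gp2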

Creating a new AMI

The next approach to performing AWS EC2 backups is creating an Amazon Machine Image (AMI) of your AWS EC2 instances. An AMI contains all the information required for creating an EC2 instance in the AWS environment, including configuration settings, the root volume template, launch permissions, and block device mapping. Basically, the AMI can act as a template for launching a new AWS EC2 instance and replacing the corrupted one. Note that, prior to creating the new AMI, it is recommended that you stop the AWS EC2 instance which you want to back up.

To create a new AMI and ensure AWS EC2 backup, you should do the following:

1. Sign in to your AWS account to open the AWS console.

2. Select Services in the top bar and click EC2 to launch the EC2 Management Console.

EC2 Services in AWS EC2 Backup 2

3. Select Running Instances and choose the instance you want to back up.

Select Running Instances in AWS EC2 Backup

4. Click Actions > Image > Create Image.

How to Create Image in AWS EC2 Backup

5. The Create Image menu should open. Here, you can specify the image name, add the image description, enable/disable reboot after the AMI creation, and configure instance volumes.

Do note that when you create an EBS image, an EBS snapshot should also be created for each of the above volumes. You can access these snapshots by going to the Snapshots section.

The Create Image menu in AWS EC2 Backup

6. Click Create Image.

7. The image creation process should now start. Click the link to view the pending AMI.

8. It should take some time for the new AMI to be created. You can start using the AMI when its status switches from pending to available.

After the AMI has been successfully created, it can then be used to create a new AWS EC2 instance, which will be an exact copy of the original instance. For this purpose, simply go to the Instances section, click Launch Instance, select the AMI you have created in the My AMIs section, and follow the prompts to finish the instance creation.

Restoring EC2 Instance with the AMI (AWS EC2 Backup)
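If you prefer the command line, the same AMI can be created with the AWS CLI; a minimal sketch with a placeholder instance ID and image name:

# Create an AMI from a running instance without rebooting it
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "my-instance-backup-ami" \
  --description "Backup AMI of my instance" \
  --no-reboot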

Creating AMIs is arguably a more effective backup strategy than taking EBS snapshots alone. This is because an AMI contains the EBS snapshots plus the software configuration, which lets you launch a new AWS EC2 instance in just a few clicks, and the AMI itself is created free of charge (you only pay for snapshot storage).

However, both methods require significant manual input on your part and cannot be set to run automatically. AWS EC2 backup in large-scale environments using these two approaches has proven itself to be a complicated and error-prone process.

Automating AWS EC2 backup

Previously, the only way to automate AWS EC2 backup was by running scripts or using API calls, which was a very challenging and resource-intensive process. The person responsible for backup automation had to be highly proficient in scripting in order to avoid any issues and inconsistencies. However, there was still a high risk that you would waste your time, effort, and money on a backup job configuration and still be left with failed or corrupted AWS EC2 backups.

Due to this ongoing concern, AWS introduced the AWS Lambda service, which allows you to run your own code to manage the AWS services you need and perform various tasks in your AWS environment. However, the downside of this approach is that you have to write your own code or hunt for code shared on open-source platforms. Ultimately, it can take an excessive amount of time and effort to get a workable Lambda function doing exactly what you want.

To address these issues, AWS designed a new EC2 backup service, AWS Backup, which lets you rapidly create automated data backups across AWS services and manage them from a central console. With AWS Backup, you can finally create a policy-based backup plan which automatically backs up the AWS resources of your choosing. At the core of each plan lies a backup rule which defines the backup schedule, frequency, and window, automating the AWS EC2 backup process with minimal input on your part.

To create an AWS backup plan, take the following steps:

1. Sign in to your AWS account to open the AWS Management Console.

2. Select Services in the top bar and then type AWS Backup in the search bar. Click Backup plans in the left pane.

3. Press the Create Backup plan button.

Backup Plans in AWS EC2 Backup

4. Here, you have three start options: Start from an existing plan, Build a new plan, and Define a plan using JSON. Click Info if you want to learn more about available options to help you make the right decision.

As we don’t have any existing backup plans, let’s build a new plan from scratch. Enter the new backup plan name and proceed further.

Building a New Plan in AWS EC2 Backup

5. The next step is Backup rule configuration. Here, you should specify the backup rule name.

6. After that, you can set up a backup schedule. You should determine the backup frequency (Every 12 hours, Daily, Weekly, Monthly, Custom cron expression); backup window (Use backup window defaults or Customize backup window); backup lifecycle (Transition to cold storage and Expiration of the backup).

Backup Rule Configuration in AWS EC2 Backup

7. At this step, you should select the backup vault for storing your recovery points (the ones created by this Backup rule). You can click Create new Backup vault if you want to have a new customizable vault. You can also use the existing Backup vault if you have one. Alternatively, you can choose the default AWS Backup vault.

Choosing the Backup Vault in AWS EC2 Backup

8. Next, you can add tags to recovery points and to your backup plan in order to organize them and easily monitor their current status.

Adding Tags in AWS EC2 Backup

After that, click Create plan to create the backup plan and proceed to the next stage: assigning resources.

9. Your backup plan has been successfully created. However, before you can run this plan and deploy it in your environment, you should also assign resources which need to be backed up. Click the Assign resources button, which can be found in the top bar.

New Backup Plan in AWS EC2 Backup

10. In the next menu, you can specify the resource assignment name and define the IAM (Identity and Access Management) role.

By selecting the IAM role, you specify what a user can or cannot do in AWS and determine which users are granted permission to manage selected AWS resources and services.

Additionally, you can assign resources to this Backup plan using tags or resource IDs, meaning that any AWS resources matching these key-pair values should be automatically backed up by this Backup plan.

Assigning Resources in AWS EC2 Backup

11. Click Assign resources to complete the configuration process. After that, the backup job should run automatically. You can go to the AWS Backup dashboard to see the current status of your backup jobs and verify that they are working as planned.

Data Protection Options in AWS EC2 Backup

As you can see, our backup job is already in progress. In this menu, you can also Manage Backup plans, Create an on-demand backup, or Restore backup. Choose the required option and follow the prompts to set up another data protection job in your AWS environment.
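If you would rather script the plan than click through the console, a minimal sketch with the AWS CLI follows. The plan name, schedule, vault, and retention are placeholders, and you still need to attach an IAM role and resources afterwards with aws backup create-backup-selection:

#!/bin/bash
# Write a minimal backup plan definition and create it via the AWS Backup API
cat > backup-plan.json <<'EOF'
{
  "BackupPlanName": "DailyEc2Backups",
  "Rules": [
    {
      "RuleName": "DailyRule",
      "TargetBackupVaultName": "Default",
      "ScheduleExpression": "cron(0 5 ? * * *)",
      "StartWindowMinutes": 60,
      "Lifecycle": { "DeleteAfterDays": 35 }
    }
  ]
}
EOF

aws backup create-backup-plan --backup-plan file://backup-plan.json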

Mount S3 bucket on EC2 Linux Instance as a drive

An S3 bucket can be mounted on an AWS EC2 instance as a file system using s3fs. S3fs is a FUSE file system that allows you to mount an Amazon S3 bucket as a local file system. It behaves like a network-attached drive: it does not store anything on the EC2 instance itself, but lets users access the data on S3 from the EC2 instance.

Filesystem in Userspace (FUSE) is a simple interface for userspace programs to export a virtual file system to the Linux kernel. It also aims to provide a secure method for non-privileged users to create and mount their own file system implementations.

The s3fs-fuse project is written in C++ and is backed by Amazon's Simple Storage Service. Amazon offers an open API to build applications on top of this service, which several companies have done, using a variety of interfaces (web, rsync, fuse, etc.).

Follow the below steps to mount your S3 bucket to your Linux Instance.

This tutorial assumes that you have a running Linux EC2 instance on AWS with root access, and a bucket created in S3 that you want to mount on the instance. You will also need an access key and secret key pair with sufficient S3 permissions, or IAM access to create one.

We will perform the steps as the root user. You can also use the sudo command if you are a normal user with sudo access. So let's get started.

Step-1:- If you are using a new CentOS or Ubuntu instance, update the system first.

For CentOS or Red Hat

yum update -y

For Ubuntu

apt-get update && apt-get upgrade -y

Step-2:- Install the dependencies.

For CentOS or Red Hat

sudo yum install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel

For Ubuntu or Debian

sudo apt-get install automake autotools-dev fuse g++ git libcurl4-gnutls-dev libfuse-dev libssl-dev libxml2-dev make pkg-config

Step-3:- Clone s3fs source code from git.

git clone https://github.com/s3fs-fuse/s3fs-fuse.git

Step-4:- Now change to the source code directory, then compile and install the code with the following commands:

cd s3fs-fuse

./autogen.sh

./configure --prefix=/usr --with-openssl

make

sudo make install

Step-5:- Use the command below to check where the s3fs binary was installed. If it prints a path, the installation is OK.

which s3fs

Step-6:- Getting the access key and secret key.

You will need an AWS access key and secret key with appropriate permissions in order to access your S3 bucket from your EC2 instance. You can easily manage user permissions through the IAM (Identity and Access Management) service provided by AWS. Create an IAM user with S3 full access (or with a role that has sufficient permissions), or use the root credentials of your account. Here we will use the root credentials for simplicity.

Go to the AWS menu -> Your AWS Account Name -> My Security Credentials. Your IAM console will appear. Go to Users > your account name and, under the Permissions tab, check whether you have sufficient access to the S3 bucket. If not, you can manually attach an existing "S3 Full Access" policy or create a new policy with sufficient permissions.

Now go to the Security Credentials tab and click Create Access Key. A new access key and secret key pair will be generated. The secret key becomes visible when you click Show, and you can also download the pair. Copy both keys separately.

Note that you can always use an existing access and secret key pair. Alternatively, you can also create a new IAM user and assign it sufficient permissions to generate the access and secret key.

Step-7:- Create a new file in /etc with the name passwd-s3fs and paste the access key and secret key into it in the format below.

touch /etc/passwd-s3fs

vim /etc/passwd-s3fs

Your_accesskey:Your_secretkey
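Equivalently, you can create the file non-interactively; a one-line sketch (replace the placeholders with your actual keys):

echo "Your_accesskey:Your_secretkey" | sudo tee /etc/passwd-s3fs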

Step-8:- Change the permissions of the file:

sudo chmod 640 /etc/passwd-s3fs

Step-9:- Now create a directory (or use an existing one) to serve as the mount point, and mount the S3 bucket in it.

If your bucket name does not contain a dot (.), use the commands in point "a"; for a bucket with a dot (.) in its name, follow point "b":

a) Bucket name without dot(.):

mkdir /mys3bucket

s3fs <your_bucketname> -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /mys3bucket

where:
  • your_bucketname = the name of the S3 bucket you created on AWS S3
  • use_cache = directory to use as a local cache
  • allow_other = allow other users to write to the mount point
  • uid = uid of the user/owner of the mount point (you can also add "-o gid=1001" for the group)
  • mp_umask = remove other users' permissions
  • multireq_max = maximum number of parallel requests sent to the S3 bucket
  • /mys3bucket = mount point where the bucket will be mounted

You can make an entry in /etc/rc.local to automatically remount the bucket after a reboot. Find the s3fs binary with the "which" command and add the entry before the "exit 0" line, as below.

which s3fs

/usr/local/bin/s3fs

vim /etc/rc.local

/usr/local/bin/s3fs your_bucketname -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /mys3bucket

b) Bucket name with dot(.):

s3fs your_bucketname /mys3bucket -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 -o use_path_request_style -o url=https://s3-{{aws_region}}.amazonaws.com

The options are the same as described in point "a" above. The two additional options, use_path_request_style and url, make s3fs address the bucket with path-style requests against the regional S3 endpoint, which is needed when the bucket name contains a dot.

Remember to replace “{{aws_region}}” with your bucket region (example: eu-west-1).

As before, you can make an entry in /etc/rc.local to automatically remount the bucket after reboot. Find the s3fs binary with the "which" command and add the entry before the "exit 0" line, as below.

which s3fs
/usr/local/bin/s3fs

vim /etc/rc.local

/usr/local/bin/s3fs your_bucketname /mys3bucket -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 -o use_path_request_style -o url=https://s3-{{aws_region}}.amazonaws.com

To debug at any point, add "-o dbglevel=info -f -o curldbg" to the s3fs mount command.

Step-10:- Check the mounted S3 bucket. The output will be similar to that shown below, but the Used size may differ.

df -Th

“or”

df -Th /mys3bucket

Filesystem Type Size Used Avail Use% Mounted on

s3fs  fuse.s3fs 256T  0   256T   0%  /mys3bucket

If it shows the mounted file system, you have successfully mounted the S3 bucket on your EC2 Instance. You can also test it further by creating a test file.

cd /mys3bucket

echo "this is a test file to check s3fs" >> test.txt

ls

This change should also be reflected in the S3 bucket, so log in to the S3 console and verify that the test file is present.

Note: If you already had data in the S3 bucket and it is not visible after mounting, you may have to set permissions in the ACL for that bucket in the S3 AWS management console.

Also, if you get an s3fs error such as "Transport endpoint is not connected", you have to unmount and remount the file system. You can also do this with a custom script that detects the problem and remounts automatically, as sketched below.
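A hypothetical version of such a remount script, reusing the mount options from the examples above (adjust the bucket name, mount point, and options to your setup, and run it from cron):

#!/bin/bash
# Remount /mys3bucket if it is no longer a live s3fs mount
MOUNTPOINT="/mys3bucket"
BUCKET="your_bucketname"

if ! mountpoint -q "$MOUNTPOINT"; then
  # Clean up a stale FUSE mount, ignoring errors if nothing is mounted
  fusermount -u "$MOUNTPOINT" 2>/dev/null
  /usr/local/bin/s3fs "$BUCKET" "$MOUNTPOINT" -o use_cache=/tmp -o allow_other \
    -o uid=1001 -o mp_umask=002 -o multireq_max=5
fi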

Congrats!! You have successfully mounted your S3 bucket to your EC2 instance. Any files written to /mys3bucket will be replicated to your Amazon S3 bucket.
