SSH Tunnel AWS Elasticsearch Service from local machine

1. No more signing every request

Remember this?

let creds = new AWS.SharedIniFileCredentials({profile: esProfile });
let signer = new AWS.Signers.V4(req, 'es');
signer.addAuthorization(creds, new Date());

Every request had to be signed with AWS’s SigV4 so that the Elasticsearch endpoint could properly authorize it. That meant additional code to sign every request, and additional time for the endpoint to verify the signature. It might only be a few milliseconds of extra processing time, but those add up. Now we can call our VPC Elasticsearch endpoint with a simple HTTP request.

2. No need to set up NATs or Internet Gateways

If your apps don’t require outgoing access to the Internet, there is no longer a need to set up NATs and Internet Gateways just to reach your Elasticsearch cluster. This saves you both complexity and money, since there are fewer configurations to maintain, especially in a multi-availability-zone deployment.

3. More secure, no more publicly available URLs protected by weak IP restrictions

Elasticsearch has no built-in security, so we used to simply restrict access to our EC2 instances that were running ES using security groups. AWS’s Elasticsearch Service, however, only allowed for a publicly accessible URL, requiring additional levels of security to authorize access, like signing the request. This meant managing your cluster locally from the command line, or accessing Kibana, required you to compromise security by authorizing specific IP addresses to have access to the cluster. This was a terrible idea and opened up huge security risks. VPC-based ES clusters are no longer publicly accessible, which closes that security hole.

Accessing Your Elasticsearch Cluster Locally

All this new VPC stuff is great, but if you read the ES documentation you probably noticed this:

To access the default installation of Kibana for a domain that resides within a VPC, users must first connect to the VPC. This process varies by network configuration, but likely involves connecting to a VPN or corporate network.

This is also true if you want to access your ES cluster from the command line. The URL is no longer publicly accessible, and in fact, routes to an internal VPC IP address. If you already have a VPN solution that allows you to connect to your VPC, then configuring your security groups correctly should work. However, if you don’t have a VPN configured, you can solve your problem using a simple SSH tunnel with port forwarding.

Step 1:
You need to have an EC2 instance running in the same VPC as your Elasticsearch cluster. If you don’t, fire up a micro Linux instance with a secure key pair.

NOTE: Make sure your instance’s security group has access to the Elasticsearch cluster and that your Elasticsearch cluster’s access policy uses the “Do not require signing request with IAM credential” template.
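For reference, that open-access template yields a domain access policy along these lines (a sketch only; the region, account ID and domain name are placeholders):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "es:*",
      "Resource": "arn:aws:es:<region>:<account-id>:domain/<domain-name>/*"
    }
  ]
}
```

Because the endpoint only resolves inside the VPC, this open policy is not exposed to the public Internet.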

Step 2:
Create an entry in your SSH config file (~/.ssh/config on a Mac/Linux Distro):

# Elasticsearch Tunnel
Host estunnel
HostName <your-server-public-ip-or-dns>
User ec2-user
IdentitiesOnly yes
IdentityFile ~/.ssh/MY-KEY.pem
LocalForward 9200 <your-vpc-elasticsearch-endpoint>:443

NOTE: The “HostName” should be your instance’s PUBLIC IP address or DNS. “User” should be your Linux distro’s default user (ec2-user if using Amazon Linux). The LocalForward destination should be your domain’s VPC endpoint (shown in the Elasticsearch Service console) on port 443.

Step 3:
Run ssh estunnel -N from the command line

Step 4:
localhost:9200 should now be forwarded to your secure Elasticsearch cluster.

Access via a web browser, ignore the invalid SSL certificate:
Search: https://localhost:9200
Kibana: https://localhost:9200/_plugin/kibana

Access via cURL, be sure to use the -k option to ignore the security certificate:
curl -k https://localhost:9200/

Access programmatically (example in Node.js). Be sure to use the corresponding option to ignore SSL certificates (as in strictSSL below):

const REQUEST = require('request-promise')

const options = {
  method: 'GET',
  uri: 'https://localhost:9200/',
  strictSSL: false,
  json: true
}

REQUEST(options).then(res => { console.log(res) })

And that’s it! Now you can take advantage of the benefits of VPC-based Elasticsearch clusters and still maintain your local development workflows.


OTT Concepts – Transcoding

When a file format is not supported by a system, transcoding helps you out. Transcoding is the conversion of a file from one format to another, analog-to-analog or digital-to-digital. Video transcoding, specifically, is the process of converting a video file from one format to another so that the content is viewable across different platforms and devices. Transcoding is performed when the target device does not support the format the original data is in, and it is very useful for converting incompatible or obsolete file types into a modern format supported by new devices. It has become popular with video-sharing websites, which transcode uploaded video into one of the formats that the particular site (or, if needed, your own website) supports well.

Each time a file is transcoded, there is typically some loss in quality, which is why transcoding is often termed a lossy process. If the input compression is lossless and the output is uncompressed or losslessly compressed, however, transcoding can be lossless.

The other important use of transcoding is fitting HTML and graphics files to the constraints of mobile devices and other Web-enabled products, which usually have smaller screens, less memory and slower bandwidth. Here, transcoding is performed by a transcoding proxy server or device, which receives the requested document or file and uses a specified annotation to adapt it to the client.

What is Transcoding?

Digital storage of any media requires conversion from analog to a digital format. The initial digital format after media production is still a raw file; for it to be stored and accessed across different devices, it needs to be compressed into a digital format compatible with those devices. This compression of video files to make them compatible with a target device is called encoding.

The terms encoding and transcoding are sometimes used interchangeably, despite the difference in use cases between the two.

Generally speaking, encoding refers to the process of converting uncompressed data to the desired format. This is understood to be a lossy process. On the other hand, transcoding is the process of decoding a video file from one format to an uncompressed format and then encoding the uncompressed data to the desired format. Video transcoding is commonly used when the video file is being moved from a source to a different destination, and when the two support different file formats.

One of the most important uses of video transcoding is uploading video from a source, such as a desktop, to an online video hosting site, so that the format is supported by the hosting site.

Some other terminology in use regarding video transcoding and encoding are:

  • Transmuxing – Conversion to a different container format without changing the file itself
  • Transrating – Conversion to a different bitrate using the same file format
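Assuming ffmpeg is installed, the three operations can be sketched with illustrative command templates (the file names are placeholders, and these commands are not from the original article):

```
# Transcoding: decode, then re-encode with different codecs (H.264 video, AAC audio)
ffmpeg -i input.mov -c:v libx264 -c:a aac output.mp4

# Transmuxing: change only the container; streams are copied, not re-encoded
ffmpeg -i input.mov -c copy output.mp4

# Transrating: same codec, lower target bitrate
ffmpeg -i input.mp4 -c:v libx264 -b:v 1M output_1mbps.mp4
```

Note how the transmuxing command uses `-c copy`, which is why it is so much cheaper than a full transcode.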

What is a Codec?

A video codec is any device/software that compresses a video file. A device/software that only compresses an analog file is called an encoder, whereas a device/software that only decompresses a compressed digital file to analog is a decoder. The term ‘codec’ comes out of the concatenation of the two terms encoding and decoding.

How do codecs work?

For any codec to work it needs to compress the frames. There are two types of frame compressions – inter-frame and intra-frame compression. In intra-frame compression, each frame is compressed independently of the adjacent frames. It is therefore essentially image-compression applied to video.

Inter-frame compression, on the other hand, identifies redundancies across frames to compress videos. This includes any elements of the moving image that may be static, say a static background in a talking-head video. Inter-frame compression is much more efficient than intra-frame compression, and so most codecs are optimized to identify redundancies across frames.
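As a loose analogy (not a real codec), a generic compressor like gzip also feeds on redundancy: a stream of identical "frames" shrinks far more than a stream of random ones. The file names here are made up for the demo:

```shell
# 300 bytes of identical "frames" (high inter-frame redundancy)
yes 'frame-with-static-background' | head -c 300 > static.txt
# 300 random bytes (little redundancy to exploit)
head -c 300 /dev/urandom > random.bin
gzip -c static.txt > static.gz
gzip -c random.bin > random.gz
ls -l static.gz random.gz   # the redundant input compresses far smaller
```

A video codec does something analogous, but across frames and with far more domain knowledge (motion estimation, block matching, and so on).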

[Image: Transcoding on a cloud server for secure video hosting]

What are some of the most prominent codecs?

MPEG (Moving Picture Experts Group) is the most common family of video codecs. The International Organization for Standardization (ISO), whose standards shape the computer and consumer electronics markets, has adopted the following codecs as standards:

  • MPEG-1 in 1993
  • MPEG-2 in 1994
  • MPEG-4 in 1999
  • AVC/H.264 in 2002

H.264 is the most prevalent compression standard currently in use. For a period in the late 90s and early 2000s, RealNetworks and Microsoft competed to make their own proprietary formats the standard for codecs: RealNetworks’ RealVideo, Microsoft’s Windows Media Video, On2’s VP6 and Sorenson Video 3 were the dominant proprietary codecs. The H.264 codec was added to Apple’s QuickTime 7 in 2005, and in 2007 Adobe added H.264 support to Flash.

VP9 is an open, royalty-free video compression format and codec developed by Google, the successor to VP8 from On2 Technologies, which Google acquired in 2010. VP9 is released by Google under a BSD license, with source code available as the libvpx codec SDK.

Theora is a free lossy video compression format, distributed without licensing fees. Theora is derived from the VP3 codec, which was released into the public domain. The Ogg container format uses Theora as its video compression format.

How is a media file format distinct from a codec?

A file format is a container; inside it is the data that has been compressed by a video codec. A single file format may support multiple video codecs. For example, the Audio Video Interleave (AVI) file format can contain data compressed by any of a range of codecs.

What are some of the most prominent containers?

QuickTime File Format is a multimedia container format used by Apple’s multimedia framework.

MP4 is the most popular container format for storing digital audio and video. The MP4 container (MPEG-4 Part 14) is derived from Apple’s QuickTime file format, which is why the two are so similar.

FLV is the file container format used for video content using Adobe Flash Player. Flash video content can be embedded within SWF files.

WebM is the royalty-free container format, sponsored by Google. WebM uses the VP8 and VP9 codecs as compression formats.

The Ogg container format is maintained by the Xiph.Org Foundation and is free of the software patents that encumber H.264. Ogg is supported by the Wikipedia community.

Advanced Systems Format (ASF) is Microsoft’s proprietary video container format designed for streaming media.

The big difference between video transcoding and video encoding

Encoding converts uncompressed data to a specific format or codec and is a lossy process. Transcoding, at a higher level, takes encoded (already compressed) media, decodes (decompresses) it to an uncompressed form, optionally alters the content, for example by adding watermarks, graphics or logos, and then recompresses it. In other words, it re-encodes an existing video file (or ongoing stream) from one digital encoding format to another, with a different codec or settings, and can involve translating all three elements of a video file.

Video files are large and contain a lot of data, consuming a lot of storage and bandwidth during transmission, and some files also run into compatibility issues with the ever-growing number of appliances consuming video content. To address these problems, codecs compress video files, removing extraneous data to reduce size (e.g. when converting WMV to MP4) and resolving compatibility concerns while maintaining high-quality content.

The burgeoning variety of technologies, spanning different generations of equipment, low- to high-speed networks and over-the-top (OTT) services, creates different demands for video formats and qualities. Maintaining interoperability across this plethora of devices ensures a high-quality experience for the end user. Transcoders offer audio conversion, packaging and metadata transfer, caption conversion and more, enabling a provider to offer several audio formats along with multiple video formats, e.g. H.264 and MPEG-4.

Most OTT service providers, like Netflix, employ real-time transcoders: whenever a request is made to view a video, the transcoders process the request and transcode the video depending on the capability and type of the requesting user’s device. The transcoder can repackage the stream into adaptive-bitrate formats like Flash HDS, MSS or HTTP Live Streaming (HLS), and the client device can select the optimal stream depending on the bandwidth available.

Transcoding is broadly an umbrella term that covers multiple digital media tasks, like transmuxing, trans-sizing and trans-rating.

  • Trans-rating – changing the bitrate while keeping the same file format, for example taking a 4K video stream and converting it into one or several lower-bitrate streams; each such stream is also known as a rendition.
  • Trans-sizing – resizing the video frame, e.g. from 4K UHD (3840×2160) down to 1920×1080 (1080p).

So, when referring to transcoding, you might mean any combination of these activities. Another essential point: video conversion (encoding) is a computationally intensive task and requires powerful hardware, ideally equipped with graphics acceleration.

What Transcoding Is Not

Transcoding should not be confused with transmuxing, which refers to repackaging, rewrapping or packetizing: conversion to a different container format without changing the streams themselves. You take compressed video and audio and, without altering the actual content, repackage it into a different delivery format, for example changing the container of H.264/AAC content to send it as HTTP Live Streaming (HLS), HTTP Dynamic Streaming (HDS), etc. The content remains unaltered, so the computational overhead is much smaller than for transcoding.

Codec: Encoding and Decoding

The term codec comes from the fusion of the two terms encoding and decoding. A codec is therefore any software or device that compresses and/or decompresses a digital media file. A device or program that only compresses an analog source is known as an encoder, while one that only decompresses is known as a decoder.

For a codec to work, it needs to compress frames. There are two approaches to frame compression: inter-frame and intra-frame compression.

Inter-frame compression identifies redundancies across frames to compress the video, while intra-frame compression is essentially image compression, as it compresses each frame independently. That is why inter-frame compression is more efficient and is used by most codecs.

3 Big Reasons to Trans-code Video Files

Bridging the gap – Creating Multiple Video Formats

With diverse proprietary file formats and codec support, the need for media exchange between a plethora of systems is clear. Transcoding enables you to re-encode a video stream into multiple formats, such as MPEG or HLS, to offer streaming to a range of appliances that only support certain formats. Production, post-production, distribution and archiving are all distinct domains that operate on their own standards, requirements and practices. Transcoding bridges the gaps among them, enables media exchange between disparate systems and makes possible the vast range of uses to which digital media are now put.

Boosting QoE (Quality of Experience)

Live streaming and media service providers like Netflix are top-tier users of transcoding, which allows them to serve their user base more fittingly and efficiently. With video transcoding, broadcasters are able to adapt the bitrate of a video stream based on factors such as the device you’re using, the bandwidth available and the codecs supported. For example, HLS allows dynamic switching between video sources, e.g. 1080p and 720p versions of a stream, depending on network speed and device.
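For illustration, that dynamic switching is driven by an HLS master playlist listing the available renditions. A hand-written sketch (the bandwidth figures and paths are made up) looks like this:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
1080p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
720p/index.m3u8
```

The player picks whichever variant its measured bandwidth can sustain, and can switch mid-stream.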

If a user regularly experiences lagging, buffering, slow video startup or outright failures to play the content, the QoE goes down, and with it the customer’s perception of your video quality. Maximizing QoE is crucial for any media broadcaster, and transcoding can help.

Reducing customer storage

The source files on which transcoding is performed are much larger than the asset files produced after the tweaking and polishing are done, which takes the burden off the user’s storage.

Custom requirements

Video content creators might have specialized design and implementation requirements, such as special formats, multi-lingual audio streams, or clipping and trimming, to be applied to the video stream before delivery to the user’s system.

Numerous factors go into video transcoding, and it is an essential part of getting your content ready to be delivered with the best user experience and efficacy.

Log-rotate and throw backups into AWS S3

Every restaurant has a base sauce ready for different dishes; in the same way, I am sharing a base shell script to log-rotate custom logfiles and push the archives into S3. You can modify the script to taste (^_-)

To begin with, install s3cmd on your server and configure it, then create a tmp directory inside your /<absolute_application_log_path>.

I have a habit of storing all my customized scripts in “/opt/scripts”, which I call my script home.

  • <file_name> – name of your application logfile, e.g. access.log
  • <absolute_application_log_path> – absolute path of your log location, e.g. /var/log/nginx

I’ll create a logrotate configuration file (logrotate_<file_name>.log) in my script home.

/<absolute_application_log_path>/<file_name>.log {
    size 10M
    rotate 10
    # dateext is required for dateformat to take effect
    dateext
    dateformat -%d%m%Y
}

I’ll create my shell script (<script_name>.sh) in my script home.

#!/bin/bash
now=$(date +"%Y-%m-%d")

rm -rf /<absolute_application_log_path>/tmp/*
logrotate -v /opt/scripts/logrotate_<file_name>.log
mv -f /<absolute_application_log_path>/<file_name>.log-* /<absolute_application_log_path>/tmp/
cd /<absolute_application_log_path>/tmp/ && tar -czvf <file_name>-${now}.tar.gz *
s3cmd put /<absolute_application_log_path>/tmp/*.tar.gz s3://<s3_bucket_name>/

Finally, set up a daily cron:

59 23 * * * /bin/bash /opt/scripts/<script_name>.sh
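As a local sanity check, the move-and-archive steps of the script can be dry-run against a temp directory (the file name access.log and the date suffix here are made-up examples; the logrotate and s3cmd steps are omitted):

```shell
now=$(date +"%Y-%m-%d")

# Stand-in for /<absolute_application_log_path>
logdir=$(mktemp -d)
mkdir "$logdir/tmp"

# Fake a rotated file, as logrotate's dateformat would leave it
echo "a log line" > "$logdir/access.log-01012024"

# The same move-and-tar steps as the script above
mv -f "$logdir"/access.log-* "$logdir/tmp/"
cd "$logdir/tmp" && tar -czf "access-${now}.tar.gz" access.log-*
ls "$logdir/tmp"
```

If the listing shows the dated tarball and no stray rotated files, the real script should behave the same against your log path.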

Rendezvous with corporate myrmidons

Guys please check out my new Daily Blog

The common blogger

Good morning to all my wonderful readers. I came across a very interesting topic for my post. I have many colleagues who respect and value me, and I equally reciprocate with what they deserve. But there are a few who are like those t-shirts that lose color in the first wash; that is the nature of their quality. These are individuals in the organization who get paid for their mere existence. They are corporate myrmidons, and I have dealt with a few of them in life, so here I am sharing some instances with y’all.

I have a tendency to keep my distance from people who are dominating and sarcastic in their lingual dexterity, so I hardly mingled with people personally. At the beginning of my career, when I was way below average at my work and a struggling learner, I was often a victim of corporate bullying. Many treated me…


Test Amazon SES mailer region-wise with an openssl script

We use Amazon SES to configure our commercial mailers for promotions and marketing.

Suppose your Amazon SES is configured in four regions for the same mailer and you have gotten it out of sandbox mode: if you want to test mailers from all these locations, how will you do it?

I created a solution for this, and I would like to share the script and procedure, keeping it short and sweet. We need the following two files:

  • <script_name>.sh – base64-encodes the SES key and secret passed to it as arguments:

echo -n "$1" | openssl enc -base64
echo -n "$2" | openssl enc -base64

  • input.txt
EHLO mailer.<domain>.com
AUTH LOGIN
<base64-key>
<base64-secret>
MAIL FROM: <user>@mailer.<domain>.com
RCPT TO: <any_of_your_email_id_to_test>@<mailer>.com
DATA
From: <User> Mailer <user@mailer.<domain>.com>
To: <any_of_your_email_id_to_test>@<mailer>.com
Subject: Amazon SES SMTP Test (<Region> Activated)

This message was sent using the Amazon SES SMTP interface (<Region>).
.
QUIT
  • First run ./<script_name>.sh <Amazon SES key> <Amazon SES secret>
  • You will get a <base64-key> and a <base64-secret>; paste these values into input.txt
  • Now replace all the ‘<>’ fields in the input.txt above with your values.
  • Now run the following command:
openssl s_client -crlf -quiet -starttls smtp -connect email-smtp.<region>.amazonaws.com:587 < input.txt
  • Replace <region> every time you want to test a new region.
  • Also ensure that whenever you change the region, you create new base64 credentials from the key and secret of that region’s account.
  • Try this out and let me know if you have any queries.
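As a sanity check of the encoding step, here is what the helper’s two openssl lines produce for dummy values (placeholders, not real credentials):

```shell
# Dummy stand-ins for the real SES key and secret
key_b64=$(echo -n "AKIAEXAMPLEKEY" | openssl enc -base64)
secret_b64=$(echo -n "examplesecret" | openssl enc -base64)
echo "$key_b64"
echo "$secret_b64"
```

Those two printed lines are exactly what goes into the AUTH LOGIN section of input.txt.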