SSH Tunnel AWS Elasticsearch Service from local machine

1. No more signing every request

Remember this?

const AWS = require('aws-sdk');
// Sign every outgoing request with SigV4, using credentials from ~/.aws/credentials
let creds = new AWS.SharedIniFileCredentials({ profile: esProfile });
let signer = new AWS.Signers.V4(req, 'es');
signer.addAuthorization(creds, new Date());

Every request had to be signed with AWS's SigV4 so that the Elasticsearch endpoint could authorize it. That meant additional code to sign all your requests, and additional time for the endpoint to verify them. It might only be a few milliseconds of extra processing per request, but those milliseconds add up. Now we can call our VPC Elasticsearch endpoint with a simple HTTP request.
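
For example, from a machine inside the VPC, a plain HTTPS call now just works (the endpoint below is a placeholder, and the access policy must not require IAM signing, as noted later):

curl https://vpc-YOUR-ES-CLUSTER.us-east-1.es.amazonaws.com/_cluster/health?pretty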

2. No need to set up NATs or Internet Gateways

If your apps don't require outgoing access to the Internet, there is no longer a need to set up NATs and Internet Gateways just to reach your Elasticsearch cluster. This saves you both complexity and money, since you don't need to maintain the extra configuration, especially in a multi-Availability Zone deployment.

3. More secure, no more publicly available URLs protected by weak IP restrictions

Elasticsearch has no built-in security, so we used to simply restrict access to our EC2 instances that were running ES using security groups. AWS’s Elasticsearch Service, however, only allowed for a publicly accessible URL, requiring additional levels of security to authorize access, like signing the request. This meant managing your cluster locally from the command line, or accessing Kibana, required you to compromise security by authorizing specific IP addresses to have access to the cluster. This was a terrible idea and opened up huge security risks. VPC-based ES clusters are no longer publicly accessible, which closes that security hole.

Accessing Your Elasticsearch Cluster Locally

All this new VPC stuff is great, but if you read the ES documentation you probably noticed this:

To access the default installation of Kibana for a domain that resides within a VPC, users must first connect to the VPC. This process varies by network configuration, but likely involves connecting to a VPN or corporate network.

This is also true if you want to access your ES cluster from the command line. The URL is no longer publicly accessible, and in fact, routes to an internal VPC IP address. If you already have a VPN solution that allows you to connect to your VPC, then configuring your security groups correctly should work. However, if you don’t have a VPN configured, you can solve your problem using a simple SSH tunnel with port forwarding.

Step 1:
You need to have an EC2 instance running in the same VPC as your Elasticsearch cluster. If you don’t, fire up a micro Linux instance with a secure key pair.
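
If you need to create one, something like the following AWS CLI call does it; every ID here is a placeholder, and the subnet must belong to the same VPC as your cluster:

aws ec2 run-instances --image-id ami-XXXXXXXX --instance-type t2.micro --key-name MY-KEY --subnet-id subnet-XXXXXXXX --security-group-ids sg-XXXXXXXX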

NOTE: Make sure your instance’s security group has access to the Elasticsearch cluster and that your Elasticsearch cluster’s access policy uses the “Do not require signing request with IAM credential” template.
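
For reference, the access policy that template generates looks roughly like this (the region, account ID, and domain name are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:123456789012:domain/YOUR-DOMAIN/*"
    }
  ]
}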

Step 2:
Create an entry in your SSH config file (~/.ssh/config on a Mac/Linux Distro):

# Elasticsearch Tunnel
Host estunnel
# your EC2 instance's public IP address or DNS name
HostName 12.34.56.78
User ec2-user
IdentitiesOnly yes
IdentityFile ~/.ssh/MY-KEY.pem
# forward local port 9200 to the VPC Elasticsearch endpoint on port 443
LocalForward 9200 vpc-YOUR-ES-CLUSTER.us-east-1.es.amazonaws.com:443

NOTE: The “HostName” should be your instance’s PUBLIC IP address or DNS. “User” should be your Linux distro’s default user (ec2-user if using Amazon Linux).

Step 3:
Run ssh estunnel -N from the command line
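
The -N flag tells SSH to forward ports without running a remote command. If you prefer the tunnel to run in the background, the standard -f flag does that:

ssh -f -N estunnel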

Step 4:
localhost:9200 should now be forwarded to your secure Elasticsearch cluster.

Access via a web browser, ignoring the invalid SSL certificate (the cluster's certificate does not match localhost):
Search: https://localhost:9200
Kibana: https://localhost:9200/_plugin/kibana

Access via cURL, be sure to use the -k option to ignore the security certificate:
curl -k https://localhost:9200/

Access programmatically (example in Node.js). Be sure to use the corresponding option to ignore SSL certificates (as in strictSSL below):

// request-promise wraps the 'request' library with promise support
// (npm install request request-promise)
const REQUEST = require('request-promise')

const options = {
  method: 'GET',
  uri: 'https://localhost:9200/',
  strictSSL: false, // ignore the certificate/host mismatch over the tunnel
  json: true
};

REQUEST(options).then(res => { console.log(res) })

And that’s it! Now you can take advantage of the benefits of VPC-based Elasticsearch clusters and still maintain your local development workflows.


Kibana Troubleshooting [Kibana server is not ready yet]

Hi Bloggers,

If you are doing a proof of concept on the ELK Stack and have already installed Elasticsearch and Kibana, you may open your Kibana URL, "http://<URL>:5601", and end up on a page showing

"Kibana server is not ready yet"

There are multiple possible causes for this error.

My configuration:

  1. OS: Ubuntu 18.04.3 LTS (Bionic Beaver)
  2. Disk space: 1 TB
  3. RAM: 8 GB

I am running Elasticsearch and Kibana on localhost.

I did the troubleshooting as follows:

  1. Stopped the Kibana service: sudo systemctl stop kibana
  2. Verified kibana.yml; it should have only three parameters customized: the Kibana port, the Kibana host, and the Elasticsearch URL.
  3. Navigated to /usr/share/kibana/bin and started Kibana with no parameters: ./kibana
  4. Checked the console for warnings or errors while trying to access the URL.
  5. In my case, I encountered this warning:
     {"type":"log","@timestamp":"2019-12-16T16:14:02Z","tags":["info","migrations"],"pid":6147,"message":"Creating index .kibana_1."}
     {"type":"log","@timestamp":"2019-12-16T16:14:02Z","tags":["warning","migrations"],"pid":8924,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana."}
  6. Therefore I stopped Kibana and deleted the .kibana_1 index (a quick way to inspect the Kibana indices first is shown after this list): curl -XDELETE 'http://localhost:9200/.kibana_1' --header "content-type: application/json"
  7. Started ./kibana again and, ta-da, the URL started working.
  8. Finally, I stopped Kibana and started it again using systemctl.
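
Before deleting anything, you can check the state of the Kibana indices like this (assuming Elasticsearch is listening on localhost:9200):

curl 'http://localhost:9200/_cat/indices/.kibana*?v'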

If any of you have encountered something similar and resolved it differently, please post in the comments section.

ELK on CentOS 7 (Source: UnixMen)

Introduction

For those who don't know, the Elastic Stack (ELK Stack) is a suite of software components developed by Elastic. The components include:

  • Beats: open-source data shippers working as agents on the servers to send different types of operational data to Elasticsearch.
  • Elasticsearch: a highly scalable open source full-text search and analytics engine. It allows you to store, search, and analyze big volumes of data quickly and in near real time. It is generally used as the underlying engine/technology that powers applications that have complex search features and requirements.
  • Kibana: open source analytics and visualization platform designed to work with Elasticsearch. It is used to interact with data stored in Elasticsearch indices. It has a browser-based interface that enables quick creation and sharing of dynamic dashboards that display changes to Elasticsearch queries in real time.
  • Logstash: logs and events collection engine, which provides a real-time pipeline. It can take data from multiple sources and convert them into JSON documents.

This tutorial will take you through the process of installing the Elastic Stack on a CentOS 7 server.

Getting started

First of all, we need Java 8, so you’ll need to download the official Oracle rpm package.

# wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http:%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u77-b02/jdk-8u77-linux-x64.rpm"

Install it with rpm:

# rpm -ivh jdk-8u77-linux-x64.rpm

Ensure that it is working properly by checking it on your server:

# java -version

Install Elasticsearch

First, download and install the public signing key:

# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Next, create a file called elasticsearch.repo in /etc/yum.repos.d/, and paste the following lines:

[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Now the repository is ready for use. Install Elasticsearch with yum:

# yum install elasticsearch

Configuring Elasticsearch

Go to the configuration directory and edit the elasticsearch.yml configuration file, like this:

# $EDITOR /etc/elasticsearch/elasticsearch.yml

Enable the memory lock by uncommenting this line (around line 43):

bootstrap.memory_lock: true

Then scroll down until you reach the "Network" section and uncomment these lines:

network.host: 192.168.0.1
http.port: 9200

Save and exit.

Next, configure the memory lock at the service level. Edit elasticsearch.service in /usr/lib/systemd/system/ and uncomment the line:

LimitMEMLOCK=infinity

Save and exit.

Now edit the Elasticsearch sysconfig file:

# $EDITOR /etc/sysconfig/elasticsearch

Uncomment line 60 and be sure that it contains the following content:

MAX_LOCKED_MEMORY=unlimited

Now Elasticsearch is configured. It will listen on the IP address you specified (change it to "localhost" if necessary) on port 9200. Next, reload systemd, then enable and start the service:

# systemctl daemon-reload
# systemctl enable elasticsearch
# systemctl start elasticsearch
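
Once the service is up, you can verify that the node responds and that the memory lock took effect; this assumes network.host is 192.168.0.1 as above (adjust if you used localhost):

# curl -XGET 'http://192.168.0.1:9200/?pretty'
# curl -XGET 'http://192.168.0.1:9200/_nodes?filter_path=**.mlockall&pretty'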

Install Kibana

When Elasticsearch has been configured and started, install and configure Kibana with a web server. In this case, we will use Nginx.
As in the case of Elasticsearch, install Kibana with wget and rpm:

# wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.1-x86_64.rpm
# rpm -ivh kibana-5.1.1-x86_64.rpm

Edit Kibana configuration file:

# $EDITOR /etc/kibana/kibana.yml

There, uncomment:

server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"

Save and exit, then enable and start Kibana:

# systemctl enable kibana
# systemctl start kibana

Now, install Nginx and configure it as a reverse proxy, so that Kibana is reachable from the public IP address.
Nginx is available in the EPEL repository:

# yum -y install epel-release

Next:

# yum -y install nginx httpd-tools

In the Nginx configuration file (/etc/nginx/nginx.conf), remove the server { } block. Then save and exit.

Create a Virtual Host configuration file:

# $EDITOR /etc/nginx/conf.d/kibana.conf

There, paste the following content:

server {
    listen 80;
 
    server_name elk-stack.co;
 
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.kibana-user;
 
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Create a new authentication file:

# htpasswd -c /etc/nginx/.kibana-user admin
(enter my_strong_password when prompted)

Lastly:

# systemctl enable nginx
# systemctl start nginx
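
You can verify the proxy and the basic-auth challenge from the server itself (admin and my_strong_password being whatever you created with htpasswd above):

# curl -I http://localhost/
# curl -I -u admin:my_strong_password http://localhost/

The first request should return 401 Unauthorized; the second should reach Kibana.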

Install Logstash

As with Elasticsearch and Kibana:

# wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.1.rpm
# rpm -ivh logstash-5.1.1.rpm

It’s necessary to create a new SSL certificate. First, edit the openssl.cnf file:

# $EDITOR /etc/pki/tls/openssl.cnf

In the [ v3_ca ] section, add the server identification (replace IP_ADDRESS with your server's IP address):

[ v3_ca ]

# Server IP Address
subjectAltName = IP: IP_ADDRESS

After saving and exiting, generate the certificate:

# openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt

Next, you can create a new file to configure the log sources for Filebeat, then a file for syslog processing, and a file to define the Elasticsearch output.

These configurations depend on how you want to filter the data; a minimal example is sketched below.
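
As a rough starting point, a single pipeline file in /etc/logstash/conf.d/ might look like this; the port, index pattern, and filter are illustrative, while the certificate paths match the files generated above:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

filter {
  # add grok/syslog parsing here, depending on your data
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}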

Finally:

# systemctl enable logstash
# systemctl start logstash

You have now successfully installed and configured the ELK Stack server-side!

The ELK Stack (Elasticsearch Logstash and Kibana) – Introduction

The ELK Stack

ELK stands for Elasticsearch, Logstash, and Kibana, which are technologies for creating visualizations from raw data.

Elasticsearch is an advanced search engine which is super fast. With Elasticsearch, you can search and filter through all sorts of data via a simple API. The API is RESTful, so you can not only use it for data analysis but also use it in production for web-based applications.
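
For instance, a full-text search is a single HTTP call (the index name and query here are made up):

curl -XGET 'http://localhost:9200/logs/_search?q=message:error&pretty'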

Logstash is a tool intended for organizing and searching logfiles. But it can also be used for cleaning and streaming big data from all sorts of sources into a database. Logstash also has an adapter for Elasticsearch, so these two play very well together.

Kibana is a visual interface for Elasticsearch that works in the browser. It is pretty good at visualizing data stored in Elasticsearch and does not require programming skills, as the visualizations are configured completely through the interface.

Implementation of TIBCO EMS with the ELK Stack

Please see the YouTube video.
