How to Use Elasticsearch, Logstash, and Kibana for Centralized Logging

As a system administrator or developer, it’s important to have a centralized logging infrastructure in place. It allows you to easily monitor your applications and systems, troubleshoot issues, and analyze data. In this article, we’ll guide you through the process of setting up Elasticsearch, Logstash, and Kibana (ELK stack) for centralized logging.

What Is the ELK Stack?

The ELK stack is an open-source platform that consists of three main components:

  • Elasticsearch: A distributed search engine that stores logs as documents.
  • Logstash: A data processing pipeline that ingests logs from various sources.
  • Kibana: A web interface used for visualizing and analyzing log data stored in Elasticsearch.

Together these tools form a powerful pipeline for centralized logging: Logstash ingests and processes logs, Elasticsearch stores and indexes them, and Kibana lets you search and visualize them.

Prerequisites

Before we get started with the installation process, make sure you have the following prerequisites:

  • Ubuntu server
  • Java 8+ installed (you can verify this as shown below)
  • sudo access
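
A quick way to confirm that a suitable Java runtime is present before proceeding (the exact output varies by JDK vendor):

java -version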

Step 1: Install Elasticsearch

The first step is to install Elasticsearch on your server. You can do this by following these steps:

  1. Add the Elasticsearch GPG key
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
  2. Add the Elastic repository source list
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list
  3. Update the package cache
sudo apt-get update
  4. Install Elasticsearch
sudo apt-get install elasticsearch
  5. Start and enable Elasticsearch
sudo systemctl start elasticsearch.service && sudo systemctl enable elasticsearch.service
  6. Verify the installation by opening a browser to http://{your_server_ip}:9200/, or from the terminal as shown below.
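
If you prefer the terminal, here is a minimal check with curl (assuming Elasticsearch is listening on its default port, 9200); it should return a JSON document with the node name and version:

curl -X GET "http://localhost:9200/"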

Great! Now that we’ve installed Elasticsearch, let’s move on to the next component, Logstash.

Step 2: Install Logstash

With Elasticsearch in place, the next step is to install the second component of our stack, Logstash:

  1. Install Logstash
sudo apt-get install logstash

Now let’s configure our newly installed Logstash.

Configure the input plugin

First, we need to tell Logstash what kind of input is coming into our stack; in other words, where do we want to pull our logs from? Logstash supports many input types depending on the application, such as Beats (e.g. Filebeat) and syslog, but here we’ll read logs directly from files.

Create a configuration file named “input.conf” under /etc/logstash/conf.d/ using nano or any other text editor.

Use case: a sample input configuration that reads an Apache/Nginx access log.

Sample input configuration:

input {
  file {
    path => "/var/log/apache2/access.log"
    # Read the file from the beginning on the initial start; set "end"
    # instead if you don't want old records replayed when the service restarts.
    start_position => "beginning"
    # Pointing sincedb at /dev/null disables position tracking, so the whole
    # file is re-read on every restart; omit this to read only new records.
    sincedb_path => "/dev/null"
  }
}
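
Optionally, you can add a filter stage between input and output to parse each raw log line into structured fields. Below is a minimal sketch using the grok plugin’s built-in COMBINEDAPACHELOG pattern; the filename filter.conf is just a convention, since Logstash loads every .conf file under /etc/logstash/conf.d/.

filter {
  grok {
    # Parse a combined-format access log line into structured fields
    # such as clientip, verb, request, and response.
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}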

Configure the output plugin

Output plugins define where processed events are sent and stored. Here, we’ll store all incoming events in our Elasticsearch cluster. There are several ways to do this, but the most common is the elasticsearch output plugin, which ships events over HTTP.

Create a configuration file named “output.conf” under /etc/logstash/conf.d/ using nano or any other text editor, and specify the Elasticsearch URL along with user credentials:

Sample output configuration:

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    user => "admin"
    password => "< enter password >"
  }
}
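
By default, this plugin writes to daily logstash-%{+YYYY.MM.dd} indices. If you want a custom index name instead, the plugin’s index option lets you set one; “weblogs” below is just an illustrative name:

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    # Write to a custom daily index instead of the default logstash-%{+YYYY.MM.dd}
    index => "weblogs-%{+YYYY.MM.dd}"
  }
}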

Note: make sure the specified user has permission to create indices (e.g. logstash-*).

Once both configuration files are in place, verify them by running:

sudo /usr/share/logstash/bin/logstash --config.test_and_exit --path.config /etc/logstash/conf.d/

If no errors are reported, go ahead and start the Logstash service by running:

sudo systemctl daemon-reload
sudo systemctl enable logstash
sudo systemctl start logstash
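
To confirm Logstash came up cleanly, you can check the service status and tail its log; the log path below is the default for a package install:

sudo systemctl status logstash
sudo tail -f /var/log/logstash/logstash-plain.log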

Done! Now let’s move on to the last component, Kibana.

Step 3: Install Kibana

With both Elasticsearch and Logstash configured, it’s time to install the third component, Kibana, which provides the web UI. Follow the commands below:

  1. Install Kibana
sudo apt-get install kibana

Configure Kibana

Open the Kibana configuration file at /etc/kibana/kibana.yml.

Note: look for the server.host parameter in the server section. If you only need local access, leave it at the default (localhost); otherwise set it to your server’s IP address. Also check the parameters below and update them to your needs. I have added sample values, which can be changed later based on your requirements.

server.port: 5601
server.host: "127.0.0.1"
#elasticsearch.hosts: ["http://localhost:9200"]

Start the service and make sure it runs successfully by running the commands below:

sudo systemctl daemon-reload
sudo systemctl enable kibana
sudo systemctl start kibana
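
Before opening a browser, you can verify Kibana is up by checking the service and probing its default port (5601) locally:

sudo systemctl status kibana
curl -I http://127.0.0.1:5601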

Congratulations! You have successfully configured Elasticsearch, Logstash, and Kibana for centralized logging.

Access URL: http://{your_server_ip}:5601/app/home

Hope this article was helpful. Thanks for reading!
