ELK Stack

How To Install Elasticsearch, Logstash, and Kibana (ELK Stack) on Ubuntu 20.04

ELK Stack is a full-featured data analytics platform consisting of three open-source tools: Elasticsearch, Logstash, and Kibana. The stack helps you store and manage logs centrally and gives you the ability to analyze them.

In this post, we will see how to install the ELK stack on Ubuntu 20.04.

Beats – Installed on client machines; collects logs and sends them to Logstash.

Logstash – Processes the logs sent by Beats from the client machines.

Elasticsearch – Stores logs and events from Logstash and offers the ability to search them in real time.

Kibana – Provides visualization of events and logs.

Install Java

Elasticsearch requires either OpenJDK or Oracle JDK to be available on your machine.

For this demo, I am using OpenJDK. Install Java using the command below, along with wget and the HTTPS transport package for APT.

sudo apt update

sudo apt install -y openjdk-11-jdk wget apt-transport-https curl

Check the Java version.

java -version


openjdk version "11.0.7" 2020-04-14
OpenJDK Runtime Environment (build 11.0.7+10-post-Ubuntu-3ubuntu1)
OpenJDK 64-Bit Server VM (build 11.0.7+10-post-Ubuntu-3ubuntu1, mixed mode, sharing)
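The tutorial installs OpenJDK 11, which is the minimum Elasticsearch 7.x expects. As a sketch, the major version can be pulled out of the banner above; the sample line is inlined here so the check reads on its own (on a real host you would capture the command's stderr, e.g. `java -version 2>&1 | head -1`).

```shell
# Extract the major Java version from a sample `java -version` banner line.
version_line='openjdk version "11.0.7" 2020-04-14'
major=$(echo "$version_line" | sed -n 's/.*version "\([0-9]*\)\..*/\1/p')
# Fail early if the JDK is older than 11.
if [ "$major" -ge 11 ]; then
  echo "Java $major is recent enough"
fi
```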

Add ELK repository

ELK stack packages are available in the Elastic official repository.

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

echo "deb https://artifacts.elastic.co/packages/oss-7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

Install & Configure Elasticsearch

Elasticsearch is an open-source search engine that provides real-time, distributed, multitenant-capable full-text search with an HTTP web interface and schema-free JSON documents.

Install the latest version of Elasticsearch using the apt command.

sudo apt update

sudo apt install -y elasticsearch-oss

Start and enable the Elasticsearch service.

sudo systemctl start elasticsearch

sudo systemctl enable elasticsearch

Wait a minute or two, and then run the command below to check the status of Elasticsearch.

curl -X GET http://localhost:9200


{
  "name" : "ubuntu.itzgeek.local",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "AB9giOoWQo2nReENAICKig",
  "version" : {
    "number" : "7.7.1",
    "build_flavor" : "oss",
    "build_type" : "deb",
    "build_hash" : "ad56dce891c901a492bb1ee393f12dfff473a423",
    "build_date" : "2020-05-28T16:30:01.040088Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

The above output confirms that Elasticsearch is up and running fine.
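For a more detailed check, Elasticsearch also exposes a cluster health API at http://localhost:9200/_cluster/health. The sketch below parses the status field out of a sample response that is inlined for self-containment; on a live node you would substitute the actual curl output.

```shell
# Sample _cluster/health response (on a real node: curl -s http://localhost:9200/_cluster/health).
response='{"cluster_name":"elasticsearch","status":"green","number_of_nodes":1}'
# Pull out the "status" field; green/yellow/red indicates cluster health.
status=$(echo "$response" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
echo "cluster status: $status"
```

A green or yellow status is fine for a single-node demo setup; red means some primary shards are unallocated.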

Install & Configure Logstash

Logstash is open-source log-parsing software that collects logs, parses them, and stores them in Elasticsearch for future use. With the help of the available plugins, it can process different types of events with no extra work. Install Logstash using the apt command.

sudo apt install -y logstash-oss

Logstash configuration consists of three plugin sections: input, filter, and output. You can put all three sections in a single file, or in a separate file for each section, ending with .conf.

Here, we will use a single file for placing all the three plugins.

Create a configuration file under /etc/logstash/conf.d/ directory.

sudo nano /etc/logstash/conf.d/logstash.conf

In the input plugin, we will configure Logstash to listen on port 5044 for incoming logs from the agent (Beats) that is running on client machines.

input {
  beats {
    port => 5044
  }
}

For the filter plugin, we will use Grok to parse syslog messages before sending them to Elasticsearch for storage.

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
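%{SYSLOGLINE} is a stock Grok pattern shipped with Logstash that splits a syslog line into fields such as timestamp, logsource, program, and message. Purely as an illustration of what it extracts, and not of how Logstash runs it, the sed expression below pulls the program name out of a hypothetical sample line.

```shell
# A sample syslog line of the kind %{SYSLOGLINE} would parse.
line='Jun  1 10:15:01 ubuntu CRON[1234]: (root) CMD (command)'
# Skip the month, day, time, and hostname, then capture the program name.
program=$(echo "$line" | sed -n 's/^[A-Z][a-z]\{2\} *[0-9]* [0-9:]* [^ ]* \([^:[]*\).*/\1/p')
echo "program: $program"
```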

In the output plugin, we will define where the logs get stored: in our case, an Elasticsearch instance.

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
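The index name is computed per event: %{[@metadata][beat]} resolves to the shipper's name (filebeat in this guide) and %{+YYYY.MM.dd} to the event's date, so each day's events land in their own index. A quick sketch of how today's name would expand:

```shell
# Mimic Logstash's index naming: <beat name>-<event date>.
beat="filebeat"
index="${beat}-$(date +%Y.%m.%d)"
echo "$index"
```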

Now start and enable the Logstash service.

sudo systemctl start logstash

sudo systemctl enable logstash

Logstash log:

sudo cat /var/log/logstash/logstash-plain.log

Install and Configure Kibana

Kibana provides visualization of data stored on an Elasticsearch instance. Install Kibana using the apt command.

sudo apt install -y kibana-oss

By default, Kibana listens on localhost, which means you cannot access the Kibana web interface from external machines. To access Kibana from external machines, set server.host to the system's IP address in the /etc/kibana/kibana.yml file.

sudo nano /etc/kibana/kibana.yml

Make a change like below.

server.host: ""

Also, in some cases, Elasticsearch and Kibana may run on different machines. In that case, update the below line with the IP address of the Elasticsearch server.

elasticsearch.hosts: ["http://localhost:9200"]

Start the Kibana service and enable it on machine startup.

sudo systemctl start kibana

sudo systemctl enable kibana

Install Filebeat

Filebeat is a software client that runs on the client machines and sends logs to the Logstash server for parsing (in our case) or directly to Elasticsearch for storage.

We will use the Logstash server’s hostname in the configuration file. So, add a DNS record or a host entry for the Logstash server on the client machine.

sudo nano /etc/hosts

Make an entry something like the one below, mapping the Logstash server's IP address to its hostname.

server.itzgeek.local
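If you script this step, it is worth making the entry idempotent so repeated runs do not duplicate it. A sketch, using a temporary file in place of /etc/hosts and 192.0.2.10 as a placeholder address (substitute your Logstash server's real IP):

```shell
# Stand-in for /etc/hosts so the sketch is safe to run anywhere.
hosts_file=$(mktemp)
# 192.0.2.10 is a placeholder (documentation range); use your Logstash server's IP.
entry='192.0.2.10 server.itzgeek.local'
# Append only if the entry is not already present; the second call is a no-op.
grep -qF "$entry" "$hosts_file" || echo "$entry" >> "$hosts_file"
grep -qF "$entry" "$hosts_file" || echo "$entry" >> "$hosts_file"
count=$(grep -c 'server.itzgeek.local' "$hosts_file")
echo "entries: $count"
rm -f "$hosts_file"
```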

Install HTTPS support for apt.

sudo apt update

sudo apt install -y apt-transport-https

Set up the Elastic repository on your system for Filebeat installation.

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

echo "deb https://artifacts.elastic.co/packages/oss-7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

Install Filebeat using the following command.

sudo apt update

sudo apt install -y filebeat

Edit the filebeat configuration file /etc/filebeat/filebeat.yml to send logs to the Logstash server.

sudo nano /etc/filebeat/filebeat.yml

The below configuration in the inputs section is to send system logs (/var/log/syslog) to the Logstash server.

For this demo, I have commented out /var/log/*.log to avoid sending all logs to the Logstash server.

.    .    .

#=========================== Filebeat inputs =============================


# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

filebeat.inputs:

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/syslog
    #- /var/log/*.log

.    .    .

Since we are sending logs to the Logstash for parsing, comment out the section output.elasticsearch: and uncomment output.logstash: in the output section.

.    .    .

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["server.itzgeek.local:5044"]

.    .    .
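Taken together, the input and output edits amount to a minimal filebeat.yml like the sketch below (paths and hostname as used in this guide; adjust to your environment):

```yaml
# Minimal filebeat.yml for shipping syslog to Logstash (sketch).
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog

output.logstash:
  hosts: ["server.itzgeek.local:5044"]
```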

Start the Filebeat service.

sudo systemctl start filebeat

Filebeat’s log:

sudo cat /var/log/syslog

Access ELK Dashboard

Access the Kibana web interface by going to the following URL:

http://your-ip-address:5601/



You will see Kibana's home page.

Monitor Linux System Metrics with ELK Stack

The Elastic stack can also be used to monitor Linux system metrics with the aid of Metricbeat. Metricbeat is a lightweight shipper for system metrics. It can collect system-level CPU usage, memory, file system, disk I/O, and network I/O statistics, as well as top-like statistics for every process running on your systems, and ship them to Elasticsearch for indexing, enabling visualization through tools such as Kibana.

Install Metricbeat in Linux Systems

In this tutorial, we will be using a CentOS 8 system and an Ubuntu 20.04 system for the demo.

Install Metricbeat on Ubuntu 20.04

You can install Metricbeat from the official Elastic repos or directly from the Metricbeat DEB package. We use the former method in this setup.

Install Elastic stack repo Public Signing Key:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Install Elastic stack repo on Ubuntu 20.04;

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

Update package cache;

apt update

Install Metricbeat on Ubuntu 20.04;

apt install metricbeat

Install Metricbeat on CentOS 8

Similarly, you can install Metricbeat on CentOS 8 using the RPM package or directly from the Elastic stack repos. For convenience's sake, we install it from the Elastic stack repo, hence;

Install Elastic stack repo Public Signing Key:

rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

Install Elastic stack repo on CentOS 8;

cat << EOL > /etc/yum.repos.d/elasticstack.repo
[elasticstack]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOL

Update package cache;

dnf update

Install Metricbeat on CentOS 8;

dnf install metricbeat

Configure Metricbeat to collect various system metrics

Once the installation of Metricbeat is done, configure it to collect various system metrics and ship them to either Elasticsearch or Logstash, depending on your setup.

Configure Metricbeat in Linux

To begin with, configure Metricbeat output. In this setup, we are sending Metricbeat data directly to Elasticsearch running on a remote host. As such, edit the Metricbeat configuration file and define Elasticsearch output.

vim /etc/metricbeat/metricbeat.yml
# ---------------------------- Elasticsearch Output ----------------------------
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]
  hosts: [""]

If you are sending the metric data to Logstash instead, comment out the Elasticsearch output configuration and enable the Logstash output as follows;

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]
  #hosts: [""]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]
  hosts: [""]

Configure Kibana Endpoint

If you want to use Metricbeat's prebuilt Kibana dashboards and Kibana and Elasticsearch are running on separate hosts, you will need to load the dashboards manually, and for that you need to configure the Kibana endpoint. If Kibana and Elasticsearch are running on the same host, you do not need to configure the Kibana endpoint and can safely skip this step.

vim /etc/metricbeat/metricbeat.yml
# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.

  # Kibana Host
  #host: "localhost:5601"
  host: ""

Configure Metricbeat logging

To ensure that Metricbeat logs to its own file instead of syslog (/var/log/syslog or /var/log/messages), add the following lines to the configuration file;

# ================================= Logging ====================================
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/metricbeat
  name: metricbeat
  keepfiles: 7
  permissions: 0644

Save and exit the configuration.

Enable and configure metrics collection modules

Metricbeat ships with a number of modules that enable it to collect specific service metrics. You can list the modules by running the command below;

metricbeat modules list

The command shows enabled and disabled modules;



To enable a module, simply run;

metricbeat modules enable <module-name>

Where <module-name> is one of the names obtained from the list above.

Similarly, to disable a module;

metricbeat modules disable <module-name>

In this setup, we are using the Metricbeat system module, which is already enabled. This module collects the following metricsets: cpu, load, memory, network, process, process_summary, socket_summary, entropy, core, diskio, socket, service, users.

If you want to modify the module's configuration, open the file /etc/metricbeat/modules.d/system.yml for editing.
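As a sketch of what you might see or tune there, the fragment below trims the system module to a subset of metricsets; the 10-second period and the top-N process limits are illustrative values, not recommendations from this guide.

```yaml
# /etc/metricbeat/modules.d/system.yml (illustrative sketch)
- module: system
  period: 10s
  metricsets:
    - cpu
    - load
    - memory
    - network
    - process
    - process_summary
  # Limit per-process metrics to the heaviest consumers.
  process.include_top_n:
    by_cpu: 5
    by_memory: 5
```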

Load Metric Kibana Dashboards/Elasticsearch Index Template

You can load the recommended index template for writing to Elasticsearch and deploy the sample dashboards for visualizing the Metricbeat data in Kibana by executing the command below;

metricbeat setup -e

If you are using Logstash or any other processing engine instead of Elasticsearch as the output, you need to load the index template into Elasticsearch manually. This requires a direct connection to Elasticsearch. The command below loads the index template manually into Elasticsearch (it temporarily disables the Logstash output).

metricbeat setup -e -E output.logstash.enabled=false -E output.elasticsearch.hosts=[''] -E setup.kibana.host=

Running Metricbeat in Linux System

Once the installation and configuration of Metricbeat is done, you can start and enable Metricbeat service to run on system boot;

systemctl enable --now metricbeat

To check the status;

systemctl status metricbeat
● metricbeat.service - Metricbeat is a lightweight shipper for metrics.
     Loaded: loaded (/lib/systemd/system/metricbeat.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2020-11-26 20:03:35 UTC; 2s ago
       Docs: https://www.elastic.co/products/beats/metricbeat
   Main PID: 95897 (metricbeat)
      Tasks: 9 (limit: 2281)
     Memory: 23.6M
     CGroup: /system.slice/metricbeat.service
             └─95897 /usr/share/metricbeat/bin/metricbeat --environment systemd -c /etc/metricbeat/metricbeat.yml --path.home /usr/share/metricbeat --path.config /etc/metr>

Nov 26 20:03:35 solr.kifarunix-demo.com systemd[1]: Started Metricbeat is a lightweight shipper for metrics..

View Metricbeat Data in Kibana

Navigate to your Elastic stack dashboard and create the Kibana Metricbeat index pattern so you can view and visualize the metric data. It should already exist if you ran the metricbeat setup command above;

Metricbeat Raw Data

Under Discover, select the metricbeat-* index to view the raw Metricbeat data. Adjust the time range accordingly.
