Waji

Setting up ELK Stack in Linux

Introduction

Elasticsearch, Logstash, and Kibana are the three open-source tools that make up the ELK Stack.

The ELK Stack offers centralized logging for identifying issues with servers or applications, enabling comprehensive log searches in one place.

Elasticsearch

An open-source search engine written in Java. It stores, searches, and retrieves data in JSON format over HTTP, and is typically paired with a web dashboard (Kibana).

Logstash

An open-source tool for collecting and managing logs and other events. It processes log data into JSON format and then stores it in Elasticsearch.

Kibana

An open-source tool that lets you visualize Elasticsearch data in the form of dashboards.

Beats

A data-shipping service installed on the client that sends large amounts of data from the client to the Logstash and Elasticsearch server.

ELK Stack Architecture

A simple ELK Stack looks like this:

Simple

With Beats added, it looks like this:

Beats

If we are dealing with very large amounts of data, we can add Kafka as a buffer, and for security we can include Nginx as well:

Kafka + Nginx


Setting up the ELK Stack

As the stack is built on Java, we will need a JVM environment to run ELK (4GB+ RAM required).

💡 SELinux can interfere with log collection in the ELK Stack, so we will disable it



vi /etc/sysconfig/selinux
SELINUX=disabled




setenforce 0




reboot 

getenforce
Disabled



Step 1 👉 JDK Installation

As Elasticsearch is based on Java, we need the JDK.



rpm -qa | grep java



We can check for installed Java packages, and if none are found, we can install OpenJDK using yum



yum -y install java-1.8.0-openjdk*



After the installation, we can check the java version



java -version
openjdk version "1.8.0_362"



Step 2 👉 Elasticsearch Installation

We will first create a directory named ELK and work inside it



mkdir ./ELK

cd ELK



Now, we need to download the Elasticsearch package



wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.1-x86_64.rpm



💡 If you don't have wget installed, you can install it first using yum

Importing the GPG Key



rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch



💡 The GPG Key ensures that the RPM package file was not altered

We can now install the package that we downloaded



rpm -ivh elasticsearch-7.6.1-x86_64.rpm




rpm -qa | grep elasticsearch
elasticsearch-7.6.1-1.x86_64



We now have to edit some settings in the Elasticsearch configuration files, starting with its yml file



vi /etc/elasticsearch/elasticsearch.yml

43 bootstrap.memory_lock: true
55 network.host: localhost
59 http.port: 9200




vi /usr/lib/systemd/system/elasticsearch.service

33 # Memory mlockall
34 LimitMEMLOCK=infinity




vi /etc/sysconfig/elasticsearch
46 MAX_LOCKED_MEMORY=unlimited



💡 As I am setting up Elasticsearch on the same system as Logstash, I have set the network host to localhost

We also enable mlockall so that Elasticsearch keeps its memory locked in physical RAM

mlockall locks all pages mapped into the address space of the calling process. It is used here to keep the memory of the JVM running Elasticsearch from being swapped out, since swapping can severely degrade performance and destabilize the node.

We have to reload the systemd daemon so the mlockall setting is applied correctly



systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch



Enabled and started the elasticsearch daemon as well
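We can also verify that systemd picked up the new memory lock limit after the reload (systemctl show prints the effective unit properties):

systemctl show elasticsearch --property=LimitMEMLOCK
LimitMEMLOCK=infinity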

To confirm the Java process is listening on port 9200



netstat -antp | grep 9200



Next, we will use curl to confirm mlockall status



curl -XGET 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'

{
  "nodes" : {
    "1JrdEndhRaS5Le2JUxiDsw" : {
      "process" : {
       "mlockall" : true
      }
    }
  }
}




We can also confirm whether Elasticsearch is running



curl -XGET 'localhost:9200/?pretty'


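If Elasticsearch is up, the response should look roughly like the following (the node name, cluster UUID, and build details will differ on your system):

{
  "name" : "elk-server",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "xxxxxxxxxxxxxxxxxxxxxx",
  "version" : {
    "number" : "7.6.1",
    ...
  },
  "tagline" : "You Know, for Search"
}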

Step 3 👉 Nginx & Kibana Installation

Downloading the kibana rpm package



wget https://artifacts.elastic.co/downloads/kibana/kibana-7.6.1-x86_64.rpm



Installing the Kibana package



rpm -ivh kibana-7.6.1-x86_64.rpm



Confirming if the package was successfully installed



rpm -qa | grep kibana
kibana-7.6.1-1.x86_64



We will now edit the Kibana configuration file



vi /etc/kibana/kibana.yml

2 server.port: 5601
7 server.host: "0.0.0.0"
28 elasticsearch.hosts: ["http://localhost:9200"]



💡 I set server.host to 0.0.0.0 so the Kibana port accepts external access from any network; a specific host IP can be set instead to restrict access

Enabling and starting the kibana daemon



systemctl enable kibana
systemctl start kibana



Confirming kibana service status



netstat -antp | grep 5601
tcp        0      0 0.0.0.0:5601            0.0.0.0:*               LISTEN      8691/node  



💡 It may take some time for the Kibana node process to show up in the LISTEN state
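We can also query Kibana's status API directly to confirm it is responding before putting Nginx in front of it (/api/status is a built-in Kibana endpoint; here curl just prints the HTTP status code):

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5601/api/status
200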

Now we have to install Nginx

We will be using the EPEL repository as well, so



yum -y install epel-release
yum -y install nginx httpd-tools



The Kibana service will be exposed to the browser through a reverse web proxy, which is why we will configure Nginx as the reverse proxy server

💡 A forward proxy sits in front of clients and a reverse proxy sits in front of servers. Both types of proxies serve as intermediaries between clients and servers

Editing some entries in the configuration file for Nginx



vi /etc/nginx/nginx.conf

38 server {
39 listen 80 default_server;
40 listen [::]:80 default_server;
41 server_name _;
42 root /usr/share/nginx/html;
43
44 # Load configuration files for the default server block.
45 include /etc/nginx/default.d/*.conf;
46
47 location / {
48 }
49
50 error_page 404 /404.html;
51 location = /40x.html {
52 }
53
54 error_page 500 502 503 504 /50x.html;
55 location = /50x.html {
56 }
57 }



We have to delete the above lines from the Nginx config file as we will be adding a virtual host

Next, we create an Nginx virtual host configuration for Kibana (for example, a new file such as /etc/nginx/conf.d/kibana.conf) with the following content



server {
 listen 80;
 server_name example.com;
 auth_basic "Restricted Access";
 auth_basic_user_file /etc/nginx/.kibana-user;
 location / {
 proxy_pass http://localhost:5601;
 proxy_http_version 1.1;
 proxy_set_header Upgrade $http_upgrade;
 proxy_set_header Connection 'upgrade';
 proxy_set_header Host $host;
 proxy_cache_bypass $http_upgrade;
 }
}



This sets up the virtual host for the reverse proxy operation

Next, we have to set login credentials for the Kibana access user



htpasswd -c /etc/nginx/.kibana-user Admin
New password: 
Re-type new password: 
Adding password for user Admin



Verifying Nginx settings



nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful



Enabling and starting Nginx. We can also confirm that it's working



systemctl enable nginx
systemctl start nginx
netstat -antp | grep nginx
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      8889/nginx: master


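We can also exercise the reverse proxy and the basic authentication from the command line (replace Admin with the user you created; an unauthenticated request should be rejected with 401, while an authenticated one should get a 200 or a redirect from Kibana):

curl -I http://localhost/
curl -I -u Admin http://localhost/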

Step 4 👉 Logstash Installation

Downloading and installing the rpm package



wget https://artifacts.elastic.co/downloads/logstash/logstash-7.6.1.rpm

rpm -ivh logstash-7.6.1.rpm

rpm -qa | grep logstash
logstash-7.6.1-1.noarch



Now, we need to edit the OpenSSL configuration file (/etc/pki/tls/openssl.cnf)



vi /etc/pki/tls/openssl.cnf

226 [ v3_ca ]
227 # Server IP Address
228 subjectAltName = IP: <Your-IP-Address>



💡 When shipping log data from clients, it is recommended to encrypt it in transit using SSL/TLS

Issuing the openssl certificate



openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt



This generates the public key certificate and private key used for SSL/TLS, based on the OpenSSL configuration file we just edited. We can verify that both files were created



ls -ld /etc/pki/tls/certs/logstash-forwarder.crt
-rw-r--r-- 1 root root 1241 Feb 13 13:09 /etc/pki/tls/certs/logstash-forwarder.crt

ls -ld /etc/pki/tls/private/logstash-forwarder.key
-rw-r--r-- 1 root root 1704 Feb 13 13:09 /etc/pki/tls/private/logstash-forwarder.key



The additional options specify the expiration date (-days 3650) and RSA bit value (rsa:2048) for the key. Next, we add the following pipeline configuration to a Logstash config file (for example, /etc/logstash/conf.d/logstash.conf, since Logstash loads all .conf files in that directory by default)



# Beats input: accepts data sent from the Client by Filebeat over SSL/TLS
input {
  beats {
    client_inactivity_timeout => 600
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

# Logstash supports various Filter Plugins (mainly using the "grok" Filter Plugin)
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

# Settings to transfer the data collected by Logstash to Elasticsearch
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "syslog-%{+YYYY.MM.dd}"
  }
}



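Before starting the service, we can optionally have Logstash validate the pipeline file (the path below assumes the configuration above was saved as /etc/logstash/conf.d/logstash.conf); it should print "Configuration OK":

/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf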

Finally, we just need to enable and start Logstash



systemctl enable logstash
systemctl start logstash



We also need to open TCP port 5044 in the firewall, along with the HTTP service



firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-port=5044/tcp
firewall-cmd --reload


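We can confirm that the new firewall rules are active (http should appear under the allowed services and 5044/tcp under the open ports):

firewall-cmd --list-services
firewall-cmd --list-ports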

We can check whether Logstash is listening



netstat -antp | grep 5044
tcp6       0      0 :::5044                 :::*                    LISTEN      12868/java  



💡 It will take some time for logstash to start working
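If the port does not show up right away, we can watch Logstash start up through its logs (the journal unit name comes from the RPM package, and the plain-text log path is the package default):

journalctl -u logstash -f

# or follow the log file directly
tail -f /var/log/logstash/logstash-plain.log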


Step 5 👉 Logstash Client Filebeat Installation

We need the TLS certificate on the Logstash client, so we will copy the public key certificate from the ELK server to the client Linux machine using scp



# The Client with IP (192.168.1.129)
scp root@192.168.1.128:/etc/pki/tls/certs/logstash-forwarder.crt ./
root@192.168.1.128's password: 
logstash-forwarder.crt                                                                               100% 1241     1.0MB/s   00:00

# Moving the `.crt` file to the `/etc/pki/tls/certs` directory

mv ./logstash-forwarder.crt /etc/pki/tls/certs/


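To make sure the certificate was copied intact, we can compare its checksum on the client with the one on the ELK server; both commands should print the same hash:

sha256sum /etc/pki/tls/certs/logstash-forwarder.crt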

Again, we import the Elastic GPG key, then download and install the Filebeat rpm



rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.6.1-x86_64.rpm

# Installing the filebeat package
rpm -ivh filebeat-7.6.1-x86_64.rpm

rpm -qa | grep filebeat
filebeat-7.6.1-1.x86_64



As usual, we have to edit some entries in the config file



vi /etc/filebeat/filebeat.yml




# Enabling the Filebeat and also declaring what logs we want to see
24 enabled: true
27 paths:
28 - /var/log/*.log
29 - /var/log/secure
30 - /var/log/messages

# Commenting out the Elasticsearch output as we will be using the Logstash output instead
150 #-------------------------- Elasticsearch output ------------------------------
151 #output.elasticsearch:
152 # Array of hosts to connect to.
153 #hosts: ["localhost:9200"]


# Uncommenting the logstash output + declaring the ELK Host IP and the ssl certificate location
163 #----------------------------- Logstash output --------------------------------
164 output.logstash:
165 # The Logstash hosts
166 hosts: ["<ELK HOST IP>:5044"]
167 ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
168 bulk_max_size: 1024 # Specifying the maximum number of events that can be sent at a time


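Before starting the service, Filebeat can check its own configuration and test the connection to Logstash (test config and test output are built-in Filebeat subcommands; the output test also exercises the TLS certificate we copied over):

filebeat test config
filebeat test output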

We now just have to enable and start the filebeat daemon



systemctl enable filebeat
systemctl start filebeat
systemctl status filebeat
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
   Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2023-02-13 14:43:53 KST; 2min 49s ago



If we check the connection from the ELK Host



netstat -antp | grep 5044
tcp6       0      0 :::5044                 :::*                    LISTEN      17021/java          
tcp6       0      0 <ELK HOST>:5044      <Client>:35030     ESTABLISHED 17021/java     


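Back on the ELK host, we can also confirm that Logstash has started writing the daily syslog index into Elasticsearch (the date in the index name will reflect the current day):

curl -XGET 'localhost:9200/_cat/indices/syslog-*?v'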

Now from your browser, you can open the ELK Host server by using its IP address:

ELK
Log in using the username and password that we set up earlier

Once we are in, we will be able to see this screen

Welcome

We will click on "Try our sample data" and then navigate to the Index Patterns page

Index patterns

We just need to define the index pattern (syslog-* in our case, matching the index name from the Logstash output)

Syslog

And then configure the settings, choosing the timestamp field as the time filter

timestamp

Finally, we just need to navigate to the "Discover" tab to see our logs

Logs

✔ We can always filter the logs to meet our needs. In this post, I demonstrated how to set up the ELK Stack on your Linux server.

Top comments (1)

Imran ur Rehman • Edited

Very comprehensive article @waji97. Thank you for such a nice article.

Just a suggestion. Please add the file name that needs to be added/edited at the line:
"Specifying expiration date and RSA Bit value for each key through additional options"

I believe you are referring to the logstash filter file there. I added it at:
vi /etc/logstash/conf.d/logstash-filter.conf

that has content:
"input {
beats {
client_inactivity_timeout => 600
port => 5044......
"

Again, this is one of the most comprehensive articles I have seen about basic ELK installation.