Elastic Stack & Monitoring Application

Hello dear friends. In this post I’ll build a monitoring application that collects and analyses system data in real time, using the current version (8.2.0) of the Elastic Stack technologies. I hope you’ll like it.

What is Elastic (ELK) Stack?

Elastic Stack is an open-source group of Elastic products designed to help users collect data from any source in any format, and to search, analyse and visualise that data in real time.

Kibana: Kibana is a data visualisation and discovery tool used for use cases such as log and time-series analysis, application monitoring and operational intelligence. It offers easy-to-use yet powerful features such as histograms, line charts, pie charts, heat maps and built-in geospatial support. It is also tightly integrated with Elasticsearch, the popular search and analytics engine, which makes Kibana the default choice for visualising data stored in Elasticsearch.

Elasticsearch: Elasticsearch is a distributed search and analytics engine built on Apache Lucene. It has been the most popular search engine since its first release in 2010, and it is commonly used for log analytics, full-text search, security intelligence, business analytics and operational intelligence.

Logstash: Logstash is a lightweight, open-source, server-side data processing pipeline that lets you collect data from a variety of sources, transform it on the fly and send it to the desired destination. It is most often used as a data pipeline for Elasticsearch, the open-source search and analytics engine. With its powerful log processing features, more than 200 pre-built open-source plugins and tight integration with Elasticsearch, it is a popular choice for loading data into Elasticsearch.

Beats: Beats is a family of lightweight, open-source data shippers that send data from your machines to Elasticsearch.

In the architecture shown above, the various Beats can send data either directly to Elasticsearch, or first to Logstash for parsing and from there on to Elasticsearch. The data is then visualised through dedicated dashboards in Kibana.

The Beat types can be seen above. In this guide I’ll use MetricBeat to collect system data.

Inventory for Monitoring Application

In this post, I’ll install the Elastic Stack on a single-node Oracle Linux 7.9 system. You can follow the same instructions in this guide for RedHat 7.x and CentOS 7.x. I used the current 8.2.0 versions of Elasticsearch, Kibana and MetricBeat.

  • Oracle Linux 7.9

  • Elasticsearch 8.2.0

  • Kibana 8.2.0

  • MetricBeat 8.2.0

  • JDK 1.8

For those who will install on different platforms, refer to the current Support Matrix published by Elastic.

Elastic Stack Installation and Monitoring Application

During the installation, everything will be configured to run on localhost. However, I’ll also explain how to collect the system data of all your servers on a single central monitoring server, in case you have more than one server on the same or different networks.

First, stop and disable the firewall and SELinux. This prevents problems that may occur when Beats running on different servers send data to this server.

# systemctl disable firewalld.service
# systemctl stop firewalld.service
# vi /etc/selinux/config 

In the file, set the SELINUX line to “disabled” (SELINUX=disabled).


Download Elasticsearch 8.2.0 RPM.

# curl -L -O
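The download URL was lost in this page’s formatting. Elastic publishes its RPMs at a predictable artifacts URL, so (assuming the standard artifacts.elastic.co layout) the link can be reconstructed like this:

```shell
# Build the download URL for an Elastic product RPM.
# Assumption: Elastic's standard artifacts layout at artifacts.elastic.co.
PRODUCT=elasticsearch
VERSION=8.2.0
ARCH=x86_64
URL="https://artifacts.elastic.co/downloads/${PRODUCT}/${PRODUCT}-${VERSION}-${ARCH}.rpm"
echo "$URL"
# Then download with: curl -L -O "$URL"
```

The same pattern applies to the kibana and metricbeat downloads later in this guide.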

Install the RPM and note down the elastic password printed during installation. If you miss it, you can reset the password afterwards with the elasticsearch-reset-password tool shown in the picture below. The elasticsearch-create-enrollment-token tool creates the enrolment token needed to connect Kibana to Elasticsearch, which we’ll run in the coming steps. The same tool also produces the enrolment token for adding nodes to an Elasticsearch cluster; we don’t need that for now because we are working on a single node.

# rpm -vi elasticsearch-8.2.0-x86_64.rpm
# echo vvjwC7p5nYlEnIPWrks= > elasticpw

I didn’t hide the password set during the installation, since it is auto-generated and only for testing.

When the installation is completed, configure as needed in the elasticsearch.yml file before running the service.

# vi /etc/elasticsearch/elasticsearch.yml

You can see screenshots of my file below. path.data is the location of the data stored in Elasticsearch. If you want to store data on an external disc, you can mount the disc and enter its path here. Be careful: the owner of this directory must be the elasticsearch user! path.logs is where the logs are written, and it must also be owned by elasticsearch.

Since the network.host value is commented out in this guide, it runs with the default value (localhost) and only localhost can connect to Elasticsearch. If you want Elasticsearch to be accessible from anywhere, set this value to “0.0.0.0”.
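Putting those settings together, a minimal elasticsearch.yml for this setup might look like the sketch below; the paths shown are the RPM defaults, and the commented-out network binding matches the localhost-only behaviour described above:

```yaml
# /etc/elasticsearch/elasticsearch.yml (sketch)
# Data and log locations; both directories must be owned by the elasticsearch user.
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch

# Left commented out: Elasticsearch is reachable on localhost only.
# Uncomment and set to 0.0.0.0 to accept connections from anywhere.
#network.host: 0.0.0.0
```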

In the former version, Elasticsearch 8.1.3, the default value only allowed localhost and local networks, so this value had to be adjusted to make HTTP API connections accessible from outside. In version 8.2.0 the default already allows connections from anywhere, so there is no need to change it.

Upon completing the settings in the configuration file, save and close.

Reload daemon, run the Elasticsearch service and test it with the curl command below.

# systemctl daemon-reload
# systemctl enable elasticsearch.service
# systemctl start elasticsearch.service
# curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200 
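If everything is running, the curl command returns a short JSON document describing the node, roughly like the sketch below (fields abbreviated; the name and UUIDs will differ on your machine):

```json
{
  "name" : "your-hostname",
  "cluster_name" : "elasticsearch",
  "version" : { "number" : "8.2.0" },
  "tagline" : "You Know, for Search"
}
```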

Go to the “https://localhost:9200” address in the browser and sign in with the elastic username and password. Remember, you noted down the password during the Elasticsearch RPM installation.

As you can see, Elasticsearch is installed and running on localhost. If you’ve allowed Elasticsearch to be accessible from anywhere, and the connecting server is allowed to reach port 9200 on the Elastic server, you can access it by replacing localhost in the address bar with the IP address or the hostname registered in DNS.

Download and install Kibana 8.2.0 RPM.

# curl -L -O
# rpm -vi kibana-8.2.0-x86_64.rpm

Make the necessary adjustments in the kibana.yml file before running the Kibana service.

# vi /etc/kibana/kibana.yml

The first thing to look at in the file is the server.host parameter. It is set to “localhost” by default, which means you can access Kibana only from that server. If you want to connect to Kibana from remote servers, enter the IP address or the hostname registered in DNS here. This guide continues with localhost, so leave it at the default.

If you’ve installed Kibana on a server other than the Elasticsearch one, you need to change the elasticsearch.hosts parameter. I’m using the default port 9200 on the same server, so I won’t set anything. While setting this parameter, pay attention to whether Elasticsearch is running over http or https!
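A minimal kibana.yml reflecting the two settings discussed above could look like this (the remote hostname is an illustrative assumption, not taken from the original screenshots):

```yaml
# /etc/kibana/kibana.yml (sketch)
# Where Kibana listens; "localhost" restricts access to this server only.
server.host: "localhost"

# Only needed when Elasticsearch runs on another server;
# mind the http/https scheme of your Elasticsearch endpoint.
#elasticsearch.hosts: ["https://es-server.example.com:9200"]
```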

Reload daemon and run the Kibana service. Upon running Kibana, create an enrolment token and a verification code to connect to Elasticsearch.

# systemctl daemon-reload
# systemctl enable kibana.service
# systemctl start kibana.service

# /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
# sh /usr/share/kibana/bin/kibana-verification-code

Go to the “http://localhost:5601” address in the browser. Enter the enrolment token you created in the previous step to complete the connection between Kibana and Elasticsearch.

NOTE: If you want to connect to Kibana from a remote server, replace localhost in the address bar with the value you set for server.host in kibana.yml. If you don’t set that parameter, you can connect to Kibana only through localhost.

Enter the six-digit verification code you created in the previous step.

Sign in with elastic username and password.

Click "Explore on my own".

If you see the screen below, the installation and configuration are complete. You can load sample data by clicking “Try sample data”, or import one of your own data files with drag-and-drop and analyse it by clicking “Upload a file”.

Now it’s time to install MetricBeat, which will collect the system data for us. It should be installed on every server you want to collect data from, and those servers must have access to port 9200 on the server where Elasticsearch is installed. In this installation, only localhost data will be collected.

Download and install MetricBeat RPM. Enable the system module.

# curl -L -O
# rpm -vi metricbeat-8.2.0-x86_64.rpm
# metricbeat modules enable system

Go to the metricbeat.yml file to configure MetricBeat’s base settings.

# vi /etc/metricbeat/metricbeat.yml

Under “output.elasticsearch:”, set the server that the metric data will be sent to. If MetricBeat connects to Elasticsearch on a remote server, replace localhost with the Elasticsearch IP address or the hostname registered in DNS. The server where MetricBeat is installed must have access to port 9200 on the Elasticsearch server.

The other important settings are username, password and ssl. Use your own elastic password and ssl information here. Be careful: mistakes here will prevent the metric data from reaching the Elasticsearch server!

  username: "elastic"
  password: "YOUR_PASSWORD"
  ssl:
    enabled: true
    verification_mode: "none"

logging.level is set to debug by default. Since everything then produces log output into /var/log/messages, this can create unnecessary load. That’s why I suggest setting the log level to error.
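The corresponding line in metricbeat.yml would look like this (a sketch; the key name follows the standard Beats logging options):

```yaml
# /etc/metricbeat/metricbeat.yml -- reduce log verbosity
logging.level: error
```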

Reload daemon and run MetricBeat service.

# systemctl daemon-reload
# systemctl enable metricbeat.service
# systemctl start metricbeat.service

Run this command only once to install the MetricBeat dashboards in Kibana; afterwards, data from every new server appears in the same interface. For the installation to work, MetricBeat must have access to Kibana.

# metricbeat setup -e

Go back to Kibana and click “Dashboard”.

These are the dashboards where you can visualise the data collected from various environments with MetricBeat. I’ll look at the “Host Overview” and “System Overview” dashboards under the [Metricbeat System] title.

“System Overview” gives a general view of all the servers that data is collected from.

In “Host Overview” you can access all system data by querying any single server. I’ve shared screenshots of the resulting monitoring application below.

So you can store the data of the servers in your inventory in Elasticsearch long-term, follow the state of your servers, go back to the time a problem occurred and detect the cause of the error.


Hope to see you in new posts, take care.
