How to Install Elasticsearch and Kibana on Linux - Part I



ELK stands for Elasticsearch, Logstash, and Kibana, the three components of the so-called ELK stack. The ELK stack (or just ELK) provides a powerful platform for managing, analyzing, and visualizing data. For us, MuleSoft architects and developers, ELK is a great solution for the observability of our Mule apps.

One of the main use cases of ELK for Mule is log management. Using ELK for our Mule apps will allow us to collect, process, store, and visualize log data.

In this post we will see how to install and set up Elasticsearch and Kibana on a Linux server, so that we can use them for our Mule apps.

If you already have your Elasticsearch instance, check out our next post, How to Install Elasticsearch and Kibana on Linux - Part II.

Prerequisites

The hardware requirements for installing and running the ELK Stack (Elasticsearch, Logstash, and Kibana) depend on your use case, data volume, and expected workload. Below are minimum and recommended resource requirements for each component of the ELK Stack.

Elasticsearch

Elasticsearch is the core component, and it is the most resource-intensive part of the stack.

Minimum Requirements:

  • CPU: 2 vCPUs
  • Memory: 4 GB RAM (2 GB for the JVM heap)
  • Disk Space: 20 GB
  • Network: 1 Gbps

Recommended for Production:

  • CPU: 4+ vCPUs
  • Memory: 8+ GB RAM (50% allocated to the JVM heap)
  • Disk Space: Depends on data retention; plan for at least 500 GB. Use SSDs for better performance.
  • Network: 1+ Gbps with low latency

Kibana

Kibana provides the UI for Elasticsearch, so its resource requirements are lighter than Elasticsearch’s. The memory requirement depends on the number of concurrent users and visualizations.

Minimum Requirements:

  • CPU: 1 vCPU
  • Memory: 2 GB RAM
  • Disk Space: 10 GB
  • Network: 1 Gbps

Recommended for Production:

  • CPU: 2+ vCPUs
  • Memory: 4+ GB RAM
  • Disk Space: 20 GB
  • Network: 1+ Gbps
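
You can quickly check your server’s resources against these minimums with standard Linux tools:

nproc      # number of vCPUs
free -h    # total and available memory
df -h      # disk space per filesystem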

Logstash

Logstash processes and transforms raw data into a structured format. Although it’s a powerful tool, in some cases it is not necessary: if our deployment does not require pre-processing of our logs before they’re pushed to Elasticsearch, we can do without it. We’ll not be installing Logstash in this tutorial.

In this tutorial, we’ll be installing Elasticsearch and Kibana on the same server. As a best practice, Kibana should be colocated with Elasticsearch when possible to minimize latency. We’ll use Ubuntu Server 24.04 LTS. For the installation, we’ll use the APT (Advanced Package Tool) package management system, which is the recommended way for Debian-based distributions.


Install and set up Elasticsearch

Installation

The first thing we need to do is add the Elastic repository to our system, so that APT can find the binaries of the ELK stack. For that, follow these steps:
  • Download and install the public signing key. A public signing key is a cryptographic key used to verify the authenticity and integrity of software packages retrieved from an APT repository. It ensures that the packages come from a trusted source and have not been tampered with in transit.

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
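
Optionally, you can check that the key was imported correctly by listing its contents:

gpg --show-keys /usr/share/keyrings/elasticsearch-keyring.gpg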

  • Install the apt-transport-https package. The apt-transport-https package provides support for fetching APT repository files over HTTPS.

sudo apt-get install apt-transport-https

  • Save the repository definition

echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list

  • Now, we can install the Elasticsearch Debian package from its repository

sudo apt-get update && sudo apt-get install elasticsearch=[VERSION]

where VERSION is the Elasticsearch version to install. In this tutorial we’re using the latest available, 8.15.3.
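
If you want to see which versions are available in the repository before installing, you can list them with:

sudo apt-get update && apt-cache madison elasticsearch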

Once the installation is complete, have a look at the information printed by the installer. You’ll find the password generated for the elastic superuser. Copy it; we’ll need it to access our Elasticsearch instance.



Set up Elasticsearch

Elasticsearch stores the configuration of the instance in a YAML file. We need to edit that file to provide the details of our instance before running it. For that, edit the file:

sudo vi /etc/elasticsearch/elasticsearch.yml
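
Tip: to list only the active (uncommented) settings at any point, you can filter out comments and blank lines:

sudo grep -vE '^[[:space:]]*#|^[[:space:]]*$' /etc/elasticsearch/elasticsearch.yml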

You’ll see that the file contains a large number of configuration parameters, most of them commented out. For our basic installation we will set the following parameters (a consolidated example follows the list):

  • cluster.name: mule-elk - Descriptive name for the cluster
  • node.name: mule-elk-01 - Descriptive name for the node
  • network.host: 0.0.0.0 - This makes our Elasticsearch instance listen on all network interfaces. To restrict access, we could bind to a specific interface by specifying its IP address instead.
  • http.port: 9200 - The port on which Elasticsearch will be listening for connections.
  • cluster.initial_master_nodes: ["mule-elk-01"] - This specifies which node will act as master in the cluster. There are two entries for this setting in the file; make sure only one is enabled.
  • xpack.security.enabled: true - This should be enabled by default. With the security pack enabled, Elasticsearch requires authentication for any connection to the instance.
  • xpack.security.http.ssl.enabled: false - This parameter controls whether HTTPS is used for API client connections, such as Kibana, Logstash, and agents. We will set it to false for simplicity; otherwise we’d have to add the generated certificate to our Mule app in our log4j configuration.
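
Putting it all together, the active settings in our elasticsearch.yml would look roughly like this (values taken from the choices above; note that in the packaged file the SSL setting appears inside a nested xpack.security.http.ssl block):

cluster.name: mule-elk
node.name: mule-elk-01
network.host: 0.0.0.0
http.port: 9200
cluster.initial_master_nodes: ["mule-elk-01"]
xpack.security.enabled: true
xpack.security.http.ssl:
  enabled: false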

Give permissions to read the Elasticsearch logs:

sudo chmod -R 755 /var/log/elasticsearch/
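
Once the service is running (we’ll start it below), you’ll be able to follow the Elasticsearch log. With the Debian package the main log file is named after the cluster, mule-elk.log in our case:

tail -f /var/log/elasticsearch/mule-elk.log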


Install Elasticsearch as a Service

Configure Elasticsearch to start automatically when the system boots up. When a new service unit file is created, or an existing one is removed, a daemon reload is necessary to update systemd’s list of available services, so we first tell systemd to reload its configuration to recognize the new elasticsearch service.

sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service

With that, we can now start Elasticsearch and check that it’s running:

sudo systemctl start elasticsearch.service
sudo systemctl status elasticsearch.service
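
If the service fails to start, the systemd journal is the first place to look:

sudo journalctl -u elasticsearch.service -f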


Test the installation

Let’s see if our Elasticsearch instance is working. If you only have the terminal available, make sure curl is installed. If it isn’t, you can install it with the command:

sudo apt-get install curl

Now, verify Elasticsearch is running by sending a GET request to http://localhost:9200. Our Elasticsearch has the security pack enabled, so we’ll need to provide a username and password (basic auth) in our curl request.

curl http://localhost:9200 -u elastic:[YOUR_PASSWORD]

where YOUR_PASSWORD is the password you got for the elastic superuser.
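
If everything is working, Elasticsearch answers with a JSON document roughly like this (shortened and illustrative; your build details will differ):

{
  "name" : "mule-elk-01",
  "cluster_name" : "mule-elk",
  "version" : {
    "number" : "8.15.3",
    ...
  },
  "tagline" : "You Know, for Search"
}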


In case you forgot to write it down after the installation, you can reset the password with the command

sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic


You can also set a specific password for the user; we just need to include the -i flag and the terminal will prompt for it

sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic -i

If you enabled SSL for your Elasticsearch instance, note that the certificate provided by Elasticsearch is self-signed. For testing purposes that’s fine, but in a production environment we would need a certificate signed by a CA we trust. If you’re using SSL, you can test that Elasticsearch is running by adding the -k flag, which tells curl to skip verification of the self-signed certificate:

curl -k https://localhost:9200 -u elastic:[YOUR_PASSWORD]




Now let’s check whether our Elasticsearch instance is also accessible from outside the local network, using the public DNS name of our server. As an alternative to curl, you can also try it from Postman, where it’s easier to provide the authentication.
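
For example, from your workstation (my-elk-server.example.com is just a placeholder; replace it with your server’s public DNS name):

curl http://my-elk-server.example.com:9200 -u elastic:[YOUR_PASSWORD]

If the request times out, check that port 9200 is open in the server’s firewall or security group.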


Create a Service Account Token for Kibana

To connect Kibana to our Elasticsearch instance, we’ll need to provide credentials. We might be tempted to use the elastic superuser credentials for Kibana, but:
  • First, that’s not good practice. The superuser account should be restricted to only a few users and used only for admin purposes.
  • Second, in recent versions Kibana won’t start if it detects the elastic superuser in its configuration; the startup of the service will fail with an error.
So, as a best practice, we’ll use a Service Account Token for Kibana. We’ll generate the token via the Elasticsearch REST API by sending the following request (you can use curl or Postman).
With curl:

curl "http://localhost:9200/_security/enroll/kibana" -u elastic:[YOUR_PASSWORD]


With Postman, send the same GET request and provide the elastic credentials under Authorization > Basic Auth.

The response will be a JSON with the name and the value of the token we’ll use to authenticate Kibana.
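
On our 8.15 instance the response looks roughly like this (values shortened and illustrative; depending on the version it may also include the HTTP CA certificate):

{
  "token" : {
    "name" : "enroll-process-token-...",
    "value" : "AAEAAWVsYXN0aWMva2liYW5hL..."
  }
}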

Next Steps

With that, our Elasticsearch instance is ready. The next step is to add Kibana to our server so that we have a web UI to manage ELK. That’s what we’ll cover in our next post - How to Install Elasticsearch and Kibana on Linux - Part II.