ELK (or Elastic) is one of the preferred platforms for the observability of our Mule apps. The installation and setup of ELK is not an easy task, and it can take time. Sometimes, as in my case, we need a quick way of testing some aspects of the integration between Mule and ELK. In these cases, Docker is a great option to simplify and speed up spinning up an Elastic instance for testing.
In a previous post, we saw how Docker Compose can be very helpful to get your ELK stack on Docker with just one command. That's the best way for a quick test. But remember, in that example, all the settings were the default and the security pack was disabled (no authentication required).
In some cases, we might need to change some of the default settings of the Elasticsearch installation.
In my case, for testing purposes with Mule apps, I prefer to disable HTTPS and make my Elasticsearch instance available over HTTP only. It's not something I'd do in a production environment, but for testing it simplifies the connection to Elasticsearch from my Mule apps. Also, I prefer to enable authentication, to make it closer to a real-world setup.
In these cases, as we'll see, we need to modify configuration files, which makes Docker Compose not the quickest option.
In the next two posts, we'll see how to install and set up Elasticsearch and Kibana on Docker containers, adding some custom configuration. If you already have your Elasticsearch container, check out the next post on How to Install Kibana on Docker.
Pull the Elasticsearch image
First, we will pull the image from the Elastic repository. Pay attention to the version you're downloading and make sure you use the same version for all the components in your ELK stack. In this tutorial, we'll be using version 8.15.3, the latest available at the time of writing this post.
docker pull docker.elastic.co/elasticsearch/elasticsearch:8.15.3
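If you want to confirm the image was downloaded correctly, you can list it locally (this filters the local image list by repository and tag):
docker images docker.elastic.co/elasticsearch/elasticsearch:8.15.3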
Create a Docker Network
For our case, we'll be deploying two containers (Elasticsearch and Kibana), and we need to connect them. For that, we'll create a Docker network, so that both containers can communicate with each other and with the host system just by using the container name as the DNS name. This way, Kibana will be able to find the Elasticsearch instance at http://elasticsearch:9200. Run the command:
docker network create elastic
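Once the containers are up, you can confirm they are attached to the network by inspecting it (the output lists the connected containers and their addresses):
docker network inspect elastic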
Create a Volume
We will create a Docker volume for the container, so that we can retain data even if the container is stopped or removed. Declaring a volume ensures that data written to that location is stored outside the container, making it persistent and not tied to the lifecycle of the container.
docker volume create esdata
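You can check the volume was created and see where Docker stores it on the host with:
docker volume inspect esdata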
Run the container
Next, we will run the container with the following command:
docker run -d --name elasticsearch -p 9200:9200 \
--net elastic \
-v esdata:/usr/share/elasticsearch \
-e "discovery.type=single-node" \
-e "cluster.name=elk-mule" \
-e "node.name=elk-mule-01" \
-e "http.port=9200" \
-e "ELASTIC_PASSWORD=Mule1234" \
-e "network.host=0.0.0.0" \
-e "xpack.security.enabled=true" \
-e "xpack.security.http.ssl.enabled=false" \
-e "xpack.security.authc.token.enabled=true" \
docker.elastic.co/elasticsearch/elasticsearch:8.15.3
Let's break down the command:
-d - runs the container in the background
--name elasticsearch - provides a name for our container
-p 9200:9200 - maps port 9200 on the host to port 9200 in the container. This is the default port where Elasticsearch listens for requests
--net elastic - adds the container to the elastic network we created in the previous step
-v esdata:/usr/share/elasticsearch - creates a dedicated volume to persist data even if the container is removed
Environment variables for custom configuration:
-e "discovery.type=single-node" - specifies this will be a single-node cluster
-e "cluster.name=elk-mule" - descriptive name for the cluster
-e "node.name=elk-mule-01" - descriptive name for the node
-e "http.port=9200" - specifies the HTTP port
-e "ELASTIC_PASSWORD=Mule1234" - sets the password for the superuser elastic (see the note after this list if you need to change it later)
-e "network.host=0.0.0.0" - defines the network interfaces Elasticsearch will bind to. With 0.0.0.0 we bind it to all interfaces
-e "xpack.security.enabled=true" - enables authentication for the Elasticsearch connections
-e "xpack.security.http.ssl.enabled=false" - disables HTTPS
-e "xpack.security.authc.token.enabled=true" - enables authentication via tokens (we'll need it for Kibana)
docker.elastic.co/elasticsearch/elasticsearch:8.15.3 - refers to the image we downloaded in the first step
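A side note on the password: if you later want to change the value we set with ELASTIC_PASSWORD, the 8.x images include the elasticsearch-reset-password tool, which you can run inside the container (the -i flag prompts you for the new password interactively):
docker exec -it elasticsearch /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic -i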
Check that the container started correctly and take a look at the logs:
docker ps
docker logs elasticsearch
Lastly, verify Elasticsearch is running by sending a GET request to http://localhost:9200 with the credentials of the elastic superuser:
curl 'http://localhost:9200' -u elastic:Mule1234
Alternatively, you can also try it from Postman, where it's easier to provide the authentication credentials.
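If you want a more detailed check, you can also query the cluster health endpoint with the same credentials (the pretty parameter just formats the JSON response):
curl 'http://localhost:9200/_cluster/health?pretty' -u elastic:Mule1234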
(Optional) Customize the Elasticsearch installation
As we mentioned earlier, in some cases we might need to change some of the default settings of the Elasticsearch installation. For that, we need to modify the elasticsearch.yml config file, which contains all the configuration of the instance. Let's see how to do it.
Let's suppose the HTTPS config and authentication were not set up with the env variables of the docker run command. We could do that, after running the container, by modifying the elasticsearch.yml file and setting the following parameters:
xpack.security.enabled: true
xpack.security.http.ssl:
  enabled: false
The problem here is that, as a best practice, images are trimmed down to the bare minimum and sometimes there's no text editor installed, which means we can't edit the file from within the container.
There are two possible solutions:
Option 1 (Docker Desktop)
If you're running Docker on Windows or Mac, Docker Desktop gives you the option to access and edit the files of a container from the Docker Desktop dashboard. For that, go to the dashboard, click on your container and then click on the Files tab. Then navigate to the path and file we need to modify. In our case, the elasticsearch.yml file is located at:
/usr/share/elasticsearch/config/elasticsearch.yml
Right-click on the file and edit it.
Click on the Save button and restart the container.
Option 2 (From the terminal)
Another option to modify a file in a container is to copy the file from the container to the host, edit it, and copy it back to the container. Let's see it.
First, copy the file from the container to the host:
docker cp <container>:/path/to/file.ext .
docker cp elasticsearch:/usr/share/elasticsearch/config/elasticsearch.yml .
Edit the file on the host and set the following parameters:
xpack.security.enabled: true
xpack.security.http.ssl:
  enabled: false
Then, copy the file back to the container:
docker cp file.ext <container>:/path/to/file.ext
docker cp elasticsearch.yml elasticsearch:/usr/share/elasticsearch/config/elasticsearch.yml
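Before restarting, you can double-check that the settings landed inside the container, assuming standard shell tools are available in the image (they are in the official Elasticsearch images):
docker exec elasticsearch grep -A 2 "xpack.security" /usr/share/elasticsearch/config/elasticsearch.yml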
Finally, restart the container for the changes to take effect:
docker restart [CONTAINER]
docker restart elasticsearch
Verify the changes
Let's see if, after the restart, Elasticsearch has picked up our changes. Let's send a GET request, now over plain HTTP:
curl http://localhost:9200 -u elastic:Mule1234
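To confirm that authentication is really enforced, you can also send the same request without credentials; Elasticsearch should now reject it with a 401 Unauthorized response (the -i flag prints the response status and headers):
curl -i http://localhost:9200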
Next Steps
With all of the above, you can get your Elasticsearch instance for testing up and running in just a few minutes. From here, depending on what you need your testing for, the next steps could be:
- Install Kibana on Docker
- Install Logstash on Docker
- Set up ELK for our Mule Apps