Filebeat is a lightweight data shipper from the Elastic Stack, designed to forward and centralize log data. It works by collecting log files from specified sources, processing them (if necessary), and then sending the data to a destination like Elasticsearch, Logstash, or another storage system for further analysis.
For us, MuleSoft Architects and Developers, it is a great tool to forward the logs of our Mule apps in the standalone deployment model. Filebeat is installed as a service on the same server as our Mule Runtime. In CloudHub 1.0 or 2.0 we don’t have access to the machine hosting the runtime, which is why we cannot use Filebeat in CloudHub.
In this post, we will see how to install Filebeat on the Mule server and how to configure it to send the Mule logs to an Elasticsearch instance.
Prerequisites
To follow this tutorial we will need:
- An instance of Elasticsearch and Kibana installed on another server. Check out these posts to see different options to get your Elasticsearch and Kibana instances:
- How to Install Elasticsearch and Kibana on Linux - Part I
- How to Install Elasticsearch and Kibana on Linux - Part II
- How to Install Elasticsearch on Docker
- How to Install Kibana on Docker
- Install Elasticsearch and Kibana with Docker Compose
- In this post, we will ship the mule logs directly to Elasticsearch. We can also send the logs to Logstash for further processing, but that's something we will do in another post.
- A server with the Mule Runtime installed. In this tutorial we’ll use Mule runtime 4.8 with Java 17. Check out this post on How to install the Mule Runtime on Ubuntu Server. This is the server on which we’ll install Filebeat.
Install Filebeat
The first thing we need to do is to add the Elastic repository to our system, so that APT can find the binaries of the ELK stack. For that, follow the next steps:
- Download and install the public signing key. A public signing key is a cryptographic key used to verify the authenticity and integrity of software packages retrieved from an APT repository. It ensures that the packages come from a trusted source and have not been tampered with during transit:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
- Install the apt-transport-https package. It provides support for fetching APT repository files over HTTPS:
sudo apt-get install apt-transport-https
- Save the repository definition:
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
- Install the Filebeat Debian package:
sudo apt-get update && sudo apt-get install filebeat=[VERSION]
where VERSION is the version of Filebeat to install. As a best practice, make sure your version of Filebeat matches the version of Elasticsearch; otherwise, we might run into issues when they connect to each other. In this tutorial we’re using the latest available, 8.15.3. To get the versions of Filebeat available:
sudo apt list filebeat -a
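For instance, a minimal sketch, assuming 8.15.3 is the version we want (and optionally holding the package so a routine apt upgrade doesn’t break version parity with Elasticsearch):
# Install the Filebeat version matching our Elasticsearch
sudo apt-get update && sudo apt-get install filebeat=8.15.3
# Optional: prevent apt from upgrading Filebeat past our Elasticsearch version
sudo apt-mark hold filebeat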
Create a Custom Role and a User for Filebeat
Before we run Filebeat, let’s first get the credentials for Filebeat to connect to Elasticsearch. Filebeat will need a set of credentials to connect to our Elasticsearch instance and ship the logs. It’s always a good practice to apply the Principle of Least Privilege (PoLP) in our deployments and not to use the elastic superuser or another user with a high level of permissions on our Elasticsearch.
For that reason, we will create a custom role with the minimum permissions required for Filebeat, and a user assigned to that role. These will be the username and password in the output configuration of Filebeat.
Create a Custom Role
For our Filebeat in our Mule server, we need the following privileges (according to the Elastic official docs):
Cluster Privileges:
- monitor - Grants access to monitor cluster state and details (e.g. version)
- read_ilm - To read the ILM policy if the Elastic cluster supports ILM. Not needed when setup.ilm.check_exists is false.
- read_pipeline - Check for ingest pipelines used by modules. Needed when using modules.
Index Privileges:
- create_doc on filebeat-* indices - Grants permission to write to the indices or data streams with the prefix filebeat-* created by Filebeat
- auto_configure on filebeat-* indices - To update the data stream mappings for indices or data streams with the prefix filebeat-*
With CURL:
curl -X POST 'http://[YOUR_ELASTICSEARCH_SERVER]:9200/_security/role/[FILEBEAT_CUSTOM_ROLE]' \
-u [YOUR_ELASTIC_ADMIN_USER]:[YOUR_ADMIN_PASSWORD] \
--header 'Content-Type: application/json' \
--data '{
"cluster": ["read_ilm
", "monitor", "read_pipeline"
],
"indices": [
{
"names": ["filebeat-*"],
"privileges": ["create_doc", "auto_configure"
]
}
]
}'
Where:
- [YOUR_ELASTICSEARCH_SERVER] - Hostname/DNS name that identifies your Elasticsearch server in the network
- [FILEBEAT_CUSTOM_ROLE] - The name of the custom role you want to create
- The default port for Elasticsearch is 9200. Change it if you’re using another port in your Elasticsearch instance.
- [YOUR_ELASTIC_ADMIN_USER] - An admin user with permissions to create roles and users on Elastic. Here, we can use the elastic superuser
- [YOUR_ADMIN_PASSWORD] - The password of your admin user
- The body of the request contains all the privileges we defined previously for our Filebeat custom role
With Postman:
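Either way, we can verify that the role was created correctly by reading it back with the Elasticsearch get-roles API, using the same placeholders as above:
curl -X GET 'http://[YOUR_ELASTICSEARCH_SERVER]:9200/_security/role/[FILEBEAT_CUSTOM_ROLE]' \
-u [YOUR_ELASTIC_ADMIN_USER]:[YOUR_ADMIN_PASSWORD]
The response should echo the cluster and index privileges we just defined.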
Create a User and assign the custom role
Next, we’ll create a user and assign the custom role we’ve defined, in our case mule-filebeat.
With CURL:
curl -X POST 'http://[YOUR_ELASTICSEARCH_SERVER]:9200/_security/user/[FILEBEAT_USER]' \
--header 'Content-Type: application/json' \
-u [YOUR_ELASTIC_ADMIN_USER]:[YOUR_ADMIN_PASSWORD] \
--data '{
"password": "[FILEBEAT_USER_PASSWORD]",
"roles": ["FILEBEAT_CUSTOM_ROLE"],
"full_name": "Mule Filebeat User",
"email": "filebeat@mulesoft.com"
}'
Where:
- [FILEBEAT_USER] - The name of the user you want to create
- The default port for Elasticsearch is 9200. Change it if you’re using another port in your Elasticsearch instance.
- [YOUR_ELASTIC_ADMIN_USER] - An admin user with permissions to create roles and users on Elastic. Here, we can use the elastic superuser
- [YOUR_ADMIN_PASSWORD] - The password of your admin user
- The body of the request will contain:
  - password: Sets the password for the user
  - roles: Assigns the custom mule-filebeat role we’ve created
  - full_name: Provides a descriptive/display name for the user
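As with the role, we can double-check that the user exists with the get-users API (same placeholders as above):
curl -X GET 'http://[YOUR_ELASTICSEARCH_SERVER]:9200/_security/user/[FILEBEAT_USER]' \
-u [YOUR_ELASTIC_ADMIN_USER]:[YOUR_ADMIN_PASSWORD]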
Test the connection to Elasticsearch
Before setting up and running the Filebeat service, let’s double-check that there’s connectivity between our Mule+Filebeat server and the Elasticsearch server, using the Filebeat credentials we’ve just created. For that, send an HTTP request like this:
curl "http://[YOUR_ELASTICSEARCH_SERVER]:9200" -u [FILEBEAT_USER]:[YOUR_PASSWORD]
Alternatively, once you’ve completed the Filebeat setup in the next section, you can also test Filebeat’s configuration with the command:
sudo filebeat test output
This command will try to connect to Elasticsearch using the connection parameters in the filebeat config file.
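On a healthy setup, the command prints a connection checklist; expect output along these lines (the exact lines vary with the Filebeat version and TLS settings):
elasticsearch: http://[YOUR_ELASTICSEARCH_SERVER]:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 10.0.0.5
    dial up... OK
  TLS... WARN secure connection disabled
  talk to server... OK
  version: 8.15.3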
Setup Filebeat
Once Filebeat is installed on our server and our Elasticsearch is ready to receive data from Filebeat, we need to do the initial setup. The configuration of Filebeat is stored in the filebeat.yml file, located at /etc/filebeat.
Use a text editor, such as vi, to modify the file:
sudo vi /etc/filebeat/filebeat.yml
The configuration of Filebeat basically requires us to define inputs and outputs:
- Inputs - Filebeat will be watching our system for log files to be shipped. We need to tell Filebeat which folders and what types of files we want it to watch for.
- Outputs - We need to tell Filebeat where to send our logs. So, in here, we will need to tell Filebeat where the destination is located and how to connect to it. In our case, we’ll specify our Elasticsearch connection details (hostname, port, user/password, index...)
For the input, we’ll specify:
- type - filestream (don’t use type log, as it is deprecated in Elastic 8+)
- id - Uniquely identifies the stream in the system
- paths - Add the path to the mule logs folder. By default, it should be at $MULE_HOME/logs. Specify *.log so that Filebeat sends only the files with that extension. This will send the logs of our apps and also the logs of the runtime.
- enabled: true - This activates/deactivates the input for Filebeat. Make sure it’s set to true.
# ============================== Filebeat inputs ===============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input-specific configurations.
# filestream is an input for collecting log messages from files.
- type: filestream
# Unique ID among all inputs, an ID is required.
id: mule-filestream-id
# Change to true to enable this input configuration.
enabled: true
# Paths that should be crawled and fetched. Glob based paths.
paths:
- /home/ubuntu/mule/mule-enterprise-standalone-4.8.0/logs/*.log
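Mule logs often contain multi-line entries, such as stack traces, which Filebeat would otherwise ship line by line. Optionally, a multiline parser can group them into a single event. This is a sketch, assuming your runtime uses the default Mule log4j pattern where every new entry starts with the log level:
- type: filestream
  id: mule-filestream-id
  enabled: true
  paths:
    - /home/ubuntu/mule/mule-enterprise-standalone-4.8.0/logs/*.log
  parsers:
    - multiline:
        type: pattern
        # Lines that do NOT start with a log level are appended to the previous event
        pattern: '^(INFO|WARN|ERROR|DEBUG|TRACE)\s'
        negate: true
        match: after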
For the output, we need to provide the connection details to our Elasticsearch instance:
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
# Array of hosts to connect to.
hosts: ["http://[YOUR_ELASTICSEARCH_SERVER]:9200"]
username: "[FILEBEAT_USER]"
password: "[YOUR_PASSWORD]"
Where:
- [YOUR_ELASTICSEARCH_SERVER] - Hostname/DNS name that identifies your Elasticsearch server in the network
- The default port for Elasticsearch is 9200. Change it if you’re using another port in your Elasticsearch instance.
- [FILEBEAT_USER] - The name of the user we created in the previous step, specific for Filebeat
- [YOUR_PASSWORD] - Password of your Filebeat user
If you’re using TLS/SSL for Elasticsearch, use https in the hosts and include the CA certificate in Filebeat's configuration:
output.elasticsearch:
  hosts: ["https://[YOUR_ELASTICSEARCH_SERVER]:9200"]
  ssl.certificate_authorities: ["/path/to/ca.crt"]
Save and close the file.
Install as a service and Start Filebeat
Next, we’ll install Filebeat as a service. This will allow us to run Filebeat in the background, without having to manage it from a shell session, and also to start Filebeat automatically when the system boots up. When a new service unit file is created, or an existing one is removed, a daemon reload is necessary to update systemd's list of available services. We need to tell systemd to reload its configuration so that it recognizes the new service added for Filebeat:
sudo systemctl daemon-reload
With that, we can now enable and start Filebeat as follows:
sudo systemctl enable filebeat
sudo systemctl start filebeat
sudo systemctl status filebeat
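If the service doesn’t come up, Filebeat’s own logs are the first place to look. Standard systemd tooling (not specific to Filebeat) lets us follow them:
# Follow Filebeat's service logs live
sudo journalctl -u filebeat -f
# Show the last 50 lines of the service log
sudo journalctl -u filebeat -n 50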
Test
Time to see if Filebeat is working and shipping our Mule logs to Elasticsearch. For that, we’ll create a simple app to generate logs, run it in the Mule runtime, and go to our Elasticsearch instance to verify that the logs we’re generating are in there.
Create and Deploy an App for Testing
First, let’s create an app for testing. Head over to Anypoint Studio, create a New Mule Project and drag & drop the following elements to our flow:
- An HTTP listener - A simple GET /hello
- A Logger processor to show how the app writes to the log. Write any text in the message that can help you identify that the log is coming from this component when we see the logs in ELK
- A Set Payload processor to create a response for our test endpoint. Enter any text that confirms the app is running well
Take the jar file of your app and deploy it to our Standalone runtime.
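If you’re deploying manually, a minimal sketch, assuming the runtime’s default apps folder and the default HTTP listener port 8081 (adjust the jar name to your project):
# Hot-deploy the packaged app by dropping the jar into the runtime's apps folder
cp my-test-app-1.0.0-mule-application.jar $MULE_HOME/apps/
# Generate some log entries by calling the test endpoint
curl http://localhost:8081/hello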
Verify
Once it’s deployed, send some requests to our testing app so that it generates some logs. After that, head over to Kibana and log in to check if Filebeat is shipping our logs to Elasticsearch.
First, we’ll check if Filebeat has created a new Data Stream. Go to Management > Data > Index Management and click on the Data Streams tab. We should see there’s a new data stream for Filebeat.
Create a Data View
Let’s see now if we can actually see the logs of our app. First, we’ll have to create a Data View for our Data Stream. Go to Management > Kibana > Data Views and click on Create data view.
Then, provide a name for our Data View and search for filebeat. We should see on the right the list of indices and data streams. Copy and paste the name of the filebeat data stream into the Index pattern field and click on Save.
We can see now that our Data View contains data
Then, go to Analytics > Discover, select the Data View we’ve just created, and verify you can see the logs from your app.
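To narrow Discover down to the entries coming from our test app, you can filter with KQL in the search bar. A hypothetical example, assuming you wrote "hello from our test app" in the Logger message (Filebeat stores the raw log line in the message field):
message : "hello from our test app"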