Why and When We Should Use the Log4j JSONLayout for Our Mule Apps


As we’ve seen recently in this post, the Layouts in Log4j are responsible for formatting log messages before they are sent to their final destination. The Log4j framework provides us with a few built-in Layouts, such as PatternLayout or JSONLayout.


By default, Mule apps use the PatternLayout in their initial log4j2.xml file. However, as we’ll see in this other post, the JSONLayout is a great alternative.
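
Here’s a minimal sketch of what that swap could look like in a Mule app’s log4j2.xml. The structure mirrors the file Mule generates by default, with the PatternLayout replaced by a JSONLayout; the my-app file name and the rollover settings are placeholders to adapt to your project:

<?xml version="1.0" encoding="utf-8"?>
<Configuration>
    <Appenders>
        <RollingFile name="file"
                     fileName="${sys:mule.home}${sys:file.separator}logs${sys:file.separator}my-app.log"
                     filePattern="${sys:mule.home}${sys:file.separator}logs${sys:file.separator}my-app-%i.log">
            <!-- compact + eventEol: one JSON object per line, ideal for log shippers -->
            <JsonLayout compact="true" eventEol="true" properties="true"/>
            <SizeBasedTriggeringPolicy size="10 MB"/>
        </RollingFile>
    </Appenders>
    <Loggers>
        <AsyncRoot level="INFO">
            <AppenderRef ref="file"/>
        </AsyncRoot>
    </Loggers>
</Configuration>
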
The JSONLayout formats log events as JSON objects, which is much more convenient if we are going to send our logs to a log aggregation platform like Splunk or ELK. But why? In this post, we’ll give you some good reasons:

Structured Logging for Easy Parsing

  • Traditional logs are unstructured, which makes parsing them challenging for log aggregation and analysis tools. JSON, on the other hand, is structured and can be easily parsed by log management systems like ELK (Elasticsearch, Logstash, Kibana), Splunk, Graylog, or Datadog.
  • JSON logs provide clear key-value pairs for important data like timestamps, log levels, thread names, and custom fields. This structured format makes searching, filtering, and correlating logs straightforward.
  • Example of a structured log entry in JSON:
{
  "timestamp": "2024-09-06T12:34:56.123+0200",
  "level": "INFO",
  "thread": "main",
  "logger": "com.example.MyClass",
  "message": "Application started",
  "userId": "user123",
  "transactionId": "txn98765"
}


Compatibility with Log Aggregation and Monitoring Systems

  • Many modern log aggregation systems like ELK, Splunk, or Datadog are designed to work seamlessly with structured data formats such as JSON. These systems can automatically index JSON logs, enabling powerful search capabilities, real-time analytics, and visualizations.
  • MuleSoft applications typically run in distributed environments like CloudHub, Runtime Fabric, or on-premises clusters. Structured JSON logs make it easy to aggregate logs from multiple instances and services into centralized logging systems (e.g., ELK).
  • JSON makes it easy to track the flow of events across multiple microservices, systems, or nodes in a distributed environment, which is critical for root-cause analysis and performance monitoring.

Enhanced Log Filtering and Querying

  • With JSON logs, you can write complex queries that target specific fields. This makes filtering and querying logs more precise compared to traditional text logs.
  • For example, you can query all ERROR-level logs for a particular user (userId) or search for all logs within a specific transaction (transactionId), as the query sketches below show.
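
As a sketch, assuming the fields from the earlier example have been indexed, the first query could look like this in Splunk’s SPL (the index name mule is an assumption):

index=mule level=ERROR userId="user123"

And like this in Kibana’s KQL:

level : "ERROR" and userId : "user123"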

Standardized Format for Distributed Systems

  • In distributed architectures, such as microservices or an API-led network of apps, it's essential to have logs from different services follow a consistent format. JSON is an ideal format for this purpose.
  • For us, MuleSoft architects and developers, this consistency pays off quickly: when all services log in JSON format, it becomes easier to correlate logs across different services and trace requests as they flow through the system (often using fields like correlationId, transactionId, etc.).

Debugging and Monitoring

With JSON logs you can quickly search and correlate the entries for a specific transaction or user, which makes debugging distributed applications much easier.


Better Support for Log Enrichment

  • JSON logs can include additional metadata (such as MDC or custom fields) that can be added dynamically at runtime. This makes it easy to enrich logs with transaction-specific or flow-specific data such as:
    • Transaction ID
    • Session ID
    • User ID
    • Geo-location
    • Service Name
  • Here’s an example of a log entry in JSON with enriched data:
{
  "timestamp": "2024-09-06T12:34:56.123+0200",
  "level": "INFO",
  "logger": "com.example.MyClass",
  "message": "Order placed",
  "transactionId": "txn98765",
  "userId": "user123",
  "orderId": "ORD001",
  "amount": "100.00"
}
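
How do those extra fields get there? As a sketch: with properties="true", the JSONLayout automatically includes whatever we put into Log4j’s Thread Context Map (MDC), and KeyValuePair entries inject additional fields into every event, including values resolved through Log4j lookups. The APP_NAME and MULE_ENV environment variables below are assumptions for illustration:

<JsonLayout compact="true" eventEol="true" properties="true">
    <!-- Added to every log event; ${env:...} is a standard Log4j lookup -->
    <KeyValuePair key="serviceName" value="${env:APP_NAME}"/>
    <KeyValuePair key="environment" value="${env:MULE_ENV}"/>
</JsonLayout>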


Human and Machine Readability

  • JSON is both human-readable (when compact=false) and machine-readable. While compact mode (compact=true) optimizes log size, pretty-printing (compact=false) allows developers to easily read and debug logs. There’s a quick illustration right after this list.
  • We’ll see in this post how the compact attribute works.
  • JSON logs are easier to visually parse than traditional multiline stack traces or text logs.
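
As an illustration, the two modes are just an attribute away in the layout configuration:

<!-- One JSON object per line: compact for machines, one event per line for log shippers -->
<JsonLayout compact="true" eventEol="true"/>

<!-- Pretty-printed: easier for humans reading the raw file -->
<JsonLayout compact="false"/>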

Handling Complex Data (Exceptions, Nested Fields)

  • JSON logs can elegantly represent nested objects or complex data structures like exceptions. For instance, instead of a multiline string representation of an exception, JSON can structure it in a machine-readable way, making it easy for log aggregation systems to capture and analyze.
Example of a structured exception log:
{
  "timestamp": "2024-09-06T12:34:56.123+0200",
  "level": "ERROR",
  "logger": "com.example.MyClass",
  "message": "An error occurred",
  "exception": {
    "class": "java.lang.NullPointerException",
    "message": "Null pointer exception",
    "stacktrace": [
      "com.example.MyClass.method(MyClass.java:42)",
      "com.example.MyClass.main(MyClass.java:28)"
    ]
  }
}


Facilitates Better Log Monitoring and Alerts

  • Since JSON logs are easily parsed and indexed, log aggregation systems can trigger alerts based on certain log patterns or fields. For example, you could set up an alert whenever there are more than five ERROR-level logs for a particular transactionId in a given timeframe (see the sketch after this list).
  • JSON logs can include structured fields that allow for real-time metrics and dashboarding in tools like Grafana or Kibana, offering visibility into application health.
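
As a sketch, that first alert could be driven by a Splunk SPL search like this (the mule index and the 15-minute window are assumptions):

index=mule level=ERROR earliest=-15m
| stats count by transactionId
| where count > 5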

In short, the JSONLayout is ideal for modern applications, like Mule apps, that use centralized log management systems and require structured, searchable, and filterable logs. It provides clear, machine-readable data that supports complex queries, enhances traceability, and integrates well with log monitoring and alerting systems. So, if you are externalizing your Mule apps’ logs to a log aggregation system like Splunk, ELK, Datadog, or the like, you should probably use the JSONLayout instead of the default PatternLayout in Log4j.