We run Mule apps on Runtime Fabric every day. We know when a flow breaks, we want answers fast. That means we need traces — and we want them in Splunk, where our ops team lives.
Starting with Mule runtime 4.11.0, MuleSoft ships a feature called Direct Telemetry Stream. It lets the Mule runtime push OpenTelemetry (OTel) traces directly to any OTLP-compatible backend — without routing data through Anypoint Monitoring. In other words, traces can now be sent directly from the Runtime Plane (direct stream) instead of going through the Control Plane (telemetry exporter).

A Quick Word on OpenTelemetry
OpenTelemetry (OTel) is an open standard for collecting and exporting telemetry data. It defines how traces, logs, and metrics are structured and transported.

A trace is a record of one complete operation. For example: an HTTP request enters our Mule flow and exits. A trace is made of spans. Each span represents one unit of work — a flow execution, a Set Variable call, or a Transform Message step. Spans nest inside each other, giving us a full picture of how a request moved through our app.
Mule instruments these spans automatically. We don't write any tracing code. We enable the exporter, and the runtime does the rest.
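To make the span concept concrete, here is a simplified sketch of what a single span looks like on the wire in OTLP/JSON. The field names follow the OTLP trace format; the values are made up for illustration:

```json
{
  "traceId": "5b8aa5a2d2c872e8321cf37308d69df2",
  "spanId": "051581bf3cb55c13",
  "parentSpanId": "0000000000000000",
  "name": "hello-world-flow",
  "kind": 2,
  "startTimeUnixNano": "1700000000000000000",
  "endTimeUnixNano": "1700000000120000000",
  "attributes": [
    { "key": "http.method", "value": { "stringValue": "GET" } },
    { "key": "http.route", "value": { "stringValue": "/" } }
  ]
}
```

Child spans (the Set Payload step, for example) carry this span's `spanId` as their `parentSpanId`, which is how the nesting is reconstructed.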
Architecture
Splunk Enterprise does not accept OTLP natively. It ingests data through the HTTP Event Collector (HEC). We'll place an OpenTelemetry Collector between the Mule runtime and Splunk. The collector receives OTLP from Mule and translates it into HEC events that Splunk understands.
We'll run Splunk Enterprise on a Docker container (for demo purposes) and the OTel Collector as K8s deployment on the same cluster as our RTF instance. This keeps the setup simple and self-contained for a test environment.
```
Mule App (RTF pod)
        │
        │ OTLP/HTTP — internal K8s network
        ▼
OTel Collector  ◄── Deployment + Service inside K8s cluster
        │
        │ HEC (port 8088) — out to Docker host
        ▼
Splunk Enterprise ◄── Docker container on our host
```

The collector is co-located with the workload it serves. The Mule runtime sends trace data in OTLP format to the OTel Collector. The collector converts and forwards that data to Splunk's HEC endpoint. Splunk indexes the traces, and we search them from the Splunk web console.
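For orientation, here is roughly what one event might look like by the time it reaches Splunk HEC. The envelope fields (`source`, `sourcetype`, `index`) come from the exporter configuration we'll write later; the `event` body holds the span data. This is an illustrative sketch, not the exporter's exact output:

```json
{
  "time": 1700000000.123,
  "source": "mule-rtf",
  "sourcetype": "mule:otel:trace",
  "index": "mule-rtf-traces",
  "event": {
    "traceId": "5b8aa5a2d2c872e8321cf37308d69df2",
    "spanId": "051581bf3cb55c13",
    "name": "hello-world-flow"
  }
}
```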
Prerequisites
Before we start, we need:

- A Kubernetes cluster with RTF installed. If we don't have one yet, this guide walks us through a quick setup.
- A simple Mule test app deployed on RTF using Mule runtime 4.11.0 or later. The app needs an HTTP Listener and a Set Payload that returns "Hello World." That's enough to generate spans.
- Splunk Enterprise running in a Docker container on a different host. If we haven't set it up yet, this guide covers the full installation in a few easy steps. We'll assume Splunk is running with port `8000` for the web console and port `8088` for the HTTP Event Collector.
- `kubectl` access to our K8s cluster with permissions to create namespaces, ConfigMaps, Deployments, and Services.
- Helm installed on our local machine. We'll use it to deploy the OTel Collector.
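Before moving on, a quick pre-flight check helps catch missing pieces early. The commands below are a sketch; `<our-host-ip>` is a placeholder for the Splunk Docker host, and the `/services/collector/health` endpoint is Splunk's standard HEC health check:

```shell
# Confirm cluster access and tooling
kubectl get nodes
helm version --short

# Confirm the HEC port is reachable from our machine
# (expect a "HEC is healthy" style response once HEC is enabled)
curl -s "http://<our-host-ip>:8088/services/collector/health"
```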
Step 1 — Create a New Index in Splunk
A Splunk index is a data repository where Splunk stores incoming data for search and analysis. When data is ingested into Splunk (via logs, metrics, or other sources), it is indexed to make it searchable. Each index is essentially a collection of data organized and optimized for quick retrieval and analysis. Think of Splunk as a big database where each index is a table.

Although we could skip this step and use an existing index, it's a best practice to create a dedicated index for the Mule traces and logs. When ingesting data from different sources (e.g., web server logs, application logs, database logs, Mule logs), creating separate indexes for each type of data allows for better organization and more efficient searches.
To create a dedicated index for the Mule traces we will do the following:
- Go to Settings > Data > Indexes
- Click on New Index on the top right corner
- Give it a name and leave the rest of the options as default. In this example our index is called `mule-rtf-traces`
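As an aside, the same index can be created from the command line through Splunk's management port (8089 by default) using the REST API. A sketch, assuming default admin credentials and Splunk running locally in Docker:

```shell
# Create the mule-rtf-traces index via the Splunk REST API
# (replace admin:changeme with our actual credentials)
curl -k -u admin:changeme \
  https://localhost:8089/services/data/indexes \
  -d name=mule-rtf-traces
```

Either path works; the UI route above is the one we follow in this tutorial.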
Step 2 — Create an HTTP Event Collector in Splunk
Splunk's HTTP Event Collector (HEC) is the ingestion endpoint that receives data from the OTel Collector. We'll create an HEC input for our Mule traces and generate an authentication token.

- Go to Settings > Data > Data Inputs.
- In the HTTP Event Collector, select Add new
- Give it a name, e.g. `mule-rtf-traces`, and click Next
- In the Input settings:
- Select `json` for Source Type
- Add the index we've created in the previous step to Allowed Indexes
- Make that index the Default Index
- Click Review and then Submit

Splunk generates a token value. We'll copy it; we'll need it for the collector config. It looks something like this:

```
a1b2c3d4-e5f6-7890-abcd-ef1234567890
```

After that, we'll navigate to Settings → Data Inputs → HTTP Event Collector. We'll click Global Settings in the top-right corner. We'll set All Tokens to Enabled. We'll confirm the HTTP Port Number is `8088`. We'll also disable SSL, just for simplicity in this tutorial. We'll click Save.

Step 3 — Create a Namespace for the OTel Collector
We'll keep the collector in its own namespace to separate it from RTF workloads.

```shell
kubectl create namespace otel-collector
```

Step 4 — Deploy the OTel Collector with Helm
We'll use the official OpenTelemetry Collector Helm chart. The contrib distribution includes the splunk_hec exporter, which is not in the core image.

Add the OpenTelemetry Helm repository:

```shell
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
```

Next, we'll create a file called otel-values.yaml on our local machine. We'll replace <HEC_TOKEN> with the token we copied from Splunk, and <our-host-ip> with the IP address of the host running our Splunk Docker container.

```yaml
image:
  repository: otel/opentelemetry-collector-contrib

mode: deployment

config:
  receivers:
    otlp:
      protocols:
        http:
          endpoint: 0.0.0.0:4318

  exporters:
    splunk_hec/traces:
      token: "<HEC_TOKEN>"
      endpoint: "http://<our-host-ip>:8088/services/collector"
      source: "mule-rtf"
      sourcetype: "mule:otel:trace"
      index: "mule-rtf-traces"
      tls:
        insecure_skip_verify: true

  processors:
    batch: {}

  service:
    pipelines:
      traces:
        receivers: [otlp]
        processors: [batch]
        exporters: [splunk_hec/traces]
```

Here is what each section does:
- `receivers.otlp` — The collector listens for OTLP data over HTTP on port `4318`. The Mule runtime sends its trace batches here.
- `exporters.splunk_hec/traces` — The collector forwards traces to Splunk HEC. We point it to our Docker host IP on port `8088`. The `sourcetype` field tags every event in Splunk, making our Mule traces easy to search.
- `processors.batch` — Groups spans into batches before sending. This reduces HTTP call frequency to Splunk and improves throughput.
- `service.pipelines.traces` — Wires the pipeline together. Traces flow from the OTLP receiver, through the batch processor, and out to the Splunk HEC exporter.
- Note on `insecure_skip_verify`: we set this to `true` because our Splunk Docker container uses plain HTTP on port `8088`. In production we will configure TLS properly.
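The empty `batch: {}` uses the processor's defaults. If we later need to tune it, the batch processor exposes a few knobs; the values below are illustrative, not recommendations:

```yaml
processors:
  batch:
    # flush a batch after this much time even if it isn't full
    timeout: 5s
    # target number of spans per batch
    send_batch_size: 8192
```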
Install the collector:

```shell
helm install otel-collector open-telemetry/opentelemetry-collector \
  --namespace otel-collector \
  --values otel-values.yaml
```

Check that the pod comes up:

```shell
kubectl get pods -n otel-collector
```

We'll wait for the pod to reach Running status before we continue. Then we'll check the logs:

```shell
kubectl logs -n otel-collector \
  -l app.kubernetes.io/name=opentelemetry-collector
```

The logs should show the OTLP receiver listening on port `4318` with no errors.

Step 5 — Note the Collector's Internal Service DNS Name
Helm creates a Kubernetes Service for the collector automatically. We'll find it:

```shell
kubectl get svc -n otel-collector
```

The service is named otel-collector-opentelemetry-collector. Its internal K8s DNS address is:

```
otel-collector-opentelemetry-collector.otel-collector.svc.cluster.local
```

Step 6 — Configure Runtime Manager Properties on Our Mule App
This is where we activate the Direct Telemetry Stream on our Mule application. We don't change any code in our app. We set application properties directly in Runtime Manager.

Navigate to our app in Runtime Manager. We'll log in to Anypoint Platform. We'll go to Runtime Manager → Applications. We'll click the name of our Hello World test app.
Open the Properties tab. We'll click Settings in the left panel. We'll click the Properties tab.
Add the following properties:
| Property | Value |
|---|---|
| `mule.openTelemetry.tracer.exporter.enabled` | `true` |
| `mule.openTelemetry.tracer.exporter.type` | `HTTP` |
| `mule.openTelemetry.tracer.exporter.endpoint` | `http://otel-collector-opentelemetry-collector.otel-collector.svc.cluster.local:4318/v1/traces` |
| `mule.openTelemetry.tracer.exporter.sampler.arg` | `1` |
- `mule.openTelemetry.tracer.exporter.enabled` — Activates the exporter. It defaults to `false`. We set it to `true`.
- `mule.openTelemetry.tracer.exporter.type` — Sets the transport protocol to `HTTP`. The OTel Collector's OTLP receiver accepts HTTP on port `4318`.
- `mule.openTelemetry.tracer.exporter.endpoint` — The full URL where the Mule runtime sends traces. We use the internal K8s service DNS name. The Mule runtime pod resolves this name without leaving the cluster network.
- `mule.openTelemetry.tracer.exporter.sampler.arg` — Sets the sampling rate. A value of `1` means 100% of traces are exported. This is the right setting for a test. In production we'll use a lower value like `0.1` (10%) to control data volume.
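For production, only the sampling line needs to change. A sketch: dropping the export rate to roughly 10% via the ratio argument. (A companion sampler-type property also exists in the Mule OTel configuration; we'll verify its exact name and accepted values against the MuleSoft docs for our runtime version before relying on it.)

```properties
# export roughly 10% of traces instead of 100%
mule.openTelemetry.tracer.exporter.sampler.arg=0.1
```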
Step 7 — Trigger a Trace and Verify in Splunk
Send a request to our app. We'll call our Hello World app's HTTP Listener endpoint:

```shell
curl http://<our-rtf-app-endpoint>/
```

Search for the traces in Splunk. We'll open the Splunk web console at http://<our-host-ip>:8000. We'll navigate to Search & Reporting. We'll run the following search:

```
index="mule-rtf-traces" sourcetype="mule:otel:trace"
```

Each result is a span, with fields like `http.method`, `http.route`, and `correlation.id`. To see a full trace, we'll copy the `traceId` value from any result and filter by it:

```
index="mule-rtf-traces" sourcetype="mule:otel:trace" traceId="<trace-id-value>"
```

This shows every span Mule generated for that single request — the flow span, the HTTP Listener span, the Set Payload span, and any other components that executed.

What We Built and Why It Works
The MuleSoft docs describe two paths to get telemetry out of RTF: through the Telemetry Exporter (routed through Anypoint Monitoring) and through the Direct Telemetry Stream (sent directly from the runtime to any OTLP endpoint). We used the Direct Telemetry Stream.

Splunk Enterprise does not speak OTLP natively, so the OTel Collector acts as the translation layer. It receives OTLP from the Mule runtime and converts trace data into HEC events using the `splunk_hec` exporter. Splunk indexes those events and makes them searchable.

Deploying the collector inside the K8s cluster keeps trace data off the public network until it reaches Splunk. The Mule runtime pod talks to the collector via an internal ClusterIP service. Only the final leg — from the collector to Splunk HEC on port `8088` — crosses the cluster boundary. This also removes any firewall dependency between the RTF cluster and the host, since the Mule runtime never needs to reach the host directly.

The four Runtime Manager properties translate directly to OpenTelemetry SDK configuration inside the Mule runtime. When our app starts, the runtime reads these properties and initializes the OTLP exporter. Every trace generates spans. The runtime batches those spans and flushes them to the OTel Collector every five seconds by default.
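Once traces are flowing, plain SPL works for quick analysis. As a sketch, the search below counts spans per trace to spot unusually deep requests; it assumes the `traceId` field we saw in Step 7:

```
index="mule-rtf-traces" sourcetype="mule:otel:trace"
| stats count AS span_count BY traceId
| sort - span_count
```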
Troubleshooting
**No events appear in Splunk.** We'll check the collector logs:

```shell
kubectl logs -n otel-collector -l app.kubernetes.io/name=opentelemetry-collector
```

If the collector can't reach Splunk HEC, it will log an HTTP error. We'll confirm port `8088` is accessible from the cluster nodes to the Docker host.

**The OTel Collector pod is not receiving data from Mule.** We'll check the Mule application logs in Runtime Manager. The runtime logs OTLP connection errors and export failures. We'll confirm the service DNS name resolves correctly from within the RTF namespace by running:

```shell
kubectl run dns-test --rm -it --image=busybox \
  --restart=Never -n <rtf-namespace> \
  -- nslookup otel-collector-opentelemetry-collector.otel-collector.svc.cluster.local
```

**Spans appear but data looks malformed.** We'll verify the sourcetype in the collector config matches the search term we use in Splunk. We'll also confirm HEC Global Settings has All Tokens enabled.

**Performance reminder:** Direct Telemetry Stream is not a zero-cost feature. We'll performance-test our app before enabling this in production. A sampling rate of `1` (100%) is for testing only. We'll lower that value and tune batch and queue properties based on actual traffic before going to production.

Summary
We enabled Direct Telemetry Stream on a Mule app deployed in Runtime Fabric. We configured Splunk Enterprise's HTTP Event Collector. We deployed the OTel Collector as a Kubernetes workload inside the same cluster as RTF. We set four Runtime Manager properties on our app. We verified end-to-end trace data in Splunk's Search & Reporting console.

The four properties that drive the entire integration:

```properties
mule.openTelemetry.tracer.exporter.enabled=true
mule.openTelemetry.tracer.exporter.type=HTTP
mule.openTelemetry.tracer.exporter.endpoint=http://otel-collector-opentelemetry-collector.otel-collector.svc.cluster.local:4318/v1/traces
mule.openTelemetry.tracer.exporter.sampler.arg=1
```