We run Mule apps on Runtime Fabric every day. When a flow breaks, we want answers fast. That means we need traces — and we want them in Splunk, where our ops team lives.
Starting with Mule runtime 4.11.0, MuleSoft ships a feature called Direct Telemetry Stream. It lets the Mule runtime push OpenTelemetry (OTel) traces directly to any OTLP-compatible backend — without routing data through Anypoint Monitoring. In this post, we'll configure a test Mule app deployed on Runtime Fabric to stream traces into Splunk Enterprise running in a Docker container. We'll use the minimal set of Runtime Manager properties needed to get this working. In a future post, we'll dive into the full property reference and advanced tuning options.
A Quick Word on OpenTelemetry
OpenTelemetry (OTel) is an open standard for collecting and exporting telemetry data. It defines how traces, logs, and metrics are structured and transported.

A trace is a record of one complete operation. For example: an HTTP request enters our Mule flow and exits. A trace is made of spans. Each span represents one unit of work — a flow execution, a Set Variable call, or a Transform Message step. Spans nest inside each other, giving us a full picture of how a request moved through our app.
Mule instruments these spans automatically. We don't write any tracing code. We enable the exporter, and the runtime does the rest.
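To make the trace/span relationship concrete, here is a small illustrative sketch — not Mule code, since Mule generates these structures for us — of how spans nest inside a trace, using plain Python dictionaries shaped loosely like OTLP JSON (field names follow OTLP conventions; the flow and step names are just examples):

```python
import json
import uuid

def make_span(name, parent=None):
    """Build a minimal OTLP-style span record (illustrative only)."""
    return {
        # Every span in a trace shares one trace ID.
        "traceId": parent["traceId"] if parent else uuid.uuid4().hex,
        "spanId": uuid.uuid4().hex[:16],
        # Child spans point at their parent; the root has no parent.
        "parentSpanId": parent["spanId"] if parent else None,
        "name": name,
    }

# One trace: an HTTP request enters the flow and exits.
flow = make_span("hello-flow")                      # root span: the flow execution
listener = make_span("http:listener", parent=flow)  # child: HTTP Listener
payload = make_span("set-payload", parent=flow)     # child: Set Payload step

# All spans share the trace ID; children reference the flow span.
assert listener["traceId"] == flow["traceId"]
assert payload["parentSpanId"] == flow["spanId"]
print(json.dumps([flow, listener, payload], indent=2))
```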
Architecture
Splunk Enterprise does not accept OTLP natively. It ingests data through the HTTP Event Collector (HEC). We'll place an OpenTelemetry Collector between the Mule runtime and Splunk. The collector receives OTLP from Mule and translates it into HEC events that Splunk understands.

We'll deploy the OTel Collector inside the same Kubernetes cluster where RTF runs. This means the Mule runtime reaches the collector over the internal K8s network using a standard service DNS name. No firewall rules to open. No external network hops for trace data. The collector then pushes data out to Splunk Enterprise over port 8088 on our Docker host.

Mule App (RTF pod)
        │
        │ OTLP/HTTP — internal K8s network
        ▼
OTel Collector ◄── Deployment + Service inside K8s cluster
        │
        │ HEC (port 8088) — out to Docker host
        ▼
Splunk Enterprise ◄── Docker container on our host

Prerequisites
Before we start, we need:

- A Kubernetes cluster with RTF installed. If we don't have one yet, this guide walks us through a quick setup.
- A simple Mule test app deployed on RTF using Mule runtime 4.11.0 or later. The app needs an HTTP Listener and a Set Payload that returns "Hello World." That's enough to generate spans.
- An Anypoint Platform Advanced or Titanium subscription. Direct Telemetry Stream is not available on lower tiers.
- Splunk Enterprise running in Docker. If we haven't set it up yet, this guide covers the full installation in a few easy steps. We'll assume Splunk is running with port 8000 for the web console and port 8088 for the HTTP Event Collector.
- kubectl access to our K8s cluster with permissions to create namespaces, ConfigMaps, Deployments, and Services.
- Helm installed on our local machine. We'll use it to deploy the OTel Collector.
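If we don't have a test app yet, the flow can be as small as this sketch of Mule 4 configuration XML (the config names, listener port, and path are illustrative, and schema locations are omitted for brevity):

```xml
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:http="http://www.mulesoft.org/schema/mule/http">

    <!-- Listener config: port 8081 is illustrative -->
    <http:listener-config name="HTTP_Listener_config">
        <http:listener-connection host="0.0.0.0" port="8081" />
    </http:listener-config>

    <!-- One flow: an HTTP Listener plus Set Payload is enough to generate spans -->
    <flow name="hello-world-flow">
        <http:listener config-ref="HTTP_Listener_config" path="/hello" />
        <set-payload value="Hello World" />
    </flow>
</mule>
```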
Step 1 — Create a New Index in Splunk
A Splunk index is a data repository where Splunk stores incoming data for search and analysis. When data is ingested into Splunk (via logs, metrics, or other sources), it is indexed to make it searchable. Each index is essentially a collection of data that is organized and optimized for quick retrieval and analysis. Think of Splunk as a big database where each index is a table.

Although we could skip this step and use an existing index, it's a best practice to create a dedicated index for the Mule traces and logs. When ingesting data from different sources (e.g., web server logs, application logs, database logs, Mule logs), creating separate indexes for each type of data allows for better organization and more efficient searches.
To create a dedicated index for the Mule logs we will do the following:
- Go to Settings > Data > Indexes
- Click on New Index on the top right corner
- Give it a name and leave the rest of the options as default. In this example our index is called mule-rtf-traces
Step 2 — Configure the HTTP Event Collector in Splunk
Splunk's HTTP Event Collector (HEC) is the ingestion endpoint that receives data from the OTel Collector. We'll create an HEC input for our Mule traces and create an authentication token.

- Go to Settings > Data > Data Inputs
- In the HTTP Event Collector, select Add new
- Give it a name, mule-rtf-traces, and click Next
- In the Input settings:
  - Select json for Source Type
  - Add the index we've created in the previous step to Allowed Indexes
  - Make that index the Default Index
We'll click Review and then Submit. Splunk will display the token value. We'll copy it immediately — Splunk shows it only once.
The token looks like this:
a1b2c3d4-e5f6-7890-abcd-ef1234567890

We'll need this token in Step 4.

After that, we'll navigate to Settings → Data Inputs → HTTP Event Collector. We'll click Global Settings in the top-right corner. We'll set All Tokens to Enabled. We'll confirm the HTTP Port Number is 8088. We'll also disable SSL, just for simplicity in this tutorial. We'll click Save.

Step 3 — Create a Namespace for the OTel Collector
We'll keep the collector in its own namespace to separate it from RTF workloads.

kubectl create namespace otel-collector

Step 4 — Deploy the OTel Collector with Helm
We'll use the official OpenTelemetry Collector Helm chart. The contrib distribution includes the splunk_hec exporter, which is not in the core image.

Add the OpenTelemetry Helm repository:
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update

Create a values file. We'll create a file named otel-values.yaml on our local machine. We'll replace <HEC_TOKEN> with the token we copied from Splunk, and <our-splunk-hostname> with the IP address or hostname of the host running our Splunk Docker container.

image:
  repository: otel/opentelemetry-collector-contrib

mode: deployment

config:
  receivers:
    otlp:
      protocols:
        http:
          endpoint: 0.0.0.0:4318
  exporters:
    splunk_hec/traces:
      token: "<HEC_TOKEN>"
      endpoint: "http://<our-splunk-hostname>:8088/services/collector"
      source: "mule-rtf"
      sourcetype: "mule:otel:trace"
      index: "mule-rtf-traces"
      tls:
        insecure_skip_verify: true
  processors:
    batch: {}
  service:
    pipelines:
      traces:
        receivers: [otlp]
        processors: [batch]
        exporters: [splunk_hec/traces]

For reference, our completed otel-values.yaml looks like this:

image:
  repository: otel/opentelemetry-collector-contrib

mode: deployment

config:
  receivers:
    otlp:
      protocols:
        http:
          endpoint: 0.0.0.0:4318
  exporters:
    splunk_hec/traces:
      token: "ef55790f-0604-4c49-a31f-cf02efafc2dc"
      endpoint: "http://ip-172-31-33-131.eu-central-1.compute.internal:8088/services/collector"
      source: "mule-rtf"
      sourcetype: "mule:otel:trace"
      index: "mule-rtf-traces"
      tls:
        insecure_skip_verify: true
  processors:
    batch: {}
  service:
    pipelines:
      traces:
        receivers: [otlp]
        processors: [batch]
        exporters: [splunk_hec/traces]

Here is what each section does:
- receivers.otlp — The collector listens for OTLP data over HTTP on port 4318. The Mule runtime sends its trace batches here.
- exporters.splunk_hec/traces — The collector forwards traces to Splunk HEC. We point it to our Docker host on port 8088. The sourcetype field tags every event in Splunk, making our Mule traces easy to search.
- processors.batch — Groups spans into batches before sending. This reduces HTTP call frequency to Splunk and improves throughput.
- service.pipelines.traces — Wires the pipeline together. Traces flow from the OTLP receiver, through the batch processor, and out to the Splunk HEC exporter.
- Note on insecure_skip_verify: we set this to true because our Splunk Docker container uses plain HTTP on port 8088. In production we will configure TLS properly.
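For a production setup with TLS enabled on HEC, the exporter section would change roughly like this (a sketch — the certificate path is a placeholder, and the exact option names are defined by the collector's TLS configuration):

```yaml
exporters:
  splunk_hec/traces:
    token: "<HEC_TOKEN>"
    endpoint: "https://<our-splunk-hostname>:8088/services/collector"
    tls:
      insecure_skip_verify: false
      # Placeholder path: the CA certificate that signed Splunk's HEC cert
      ca_file: /etc/otel/certs/splunk-ca.pem
```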
Install the collector:

helm install otel-collector open-telemetry/opentelemetry-collector \
  --namespace otel-collector \
  --values otel-values.yaml

Verify the pod is running:

kubectl get pods -n otel-collector

We'll wait until the pod shows Running status before we continue.

Check the collector logs for startup errors:

kubectl logs -n otel-collector \
  -l app.kubernetes.io/name=opentelemetry-collector

We should see a line confirming the OTLP HTTP receiver started on port 4318 with no errors.

Step 5 — Note the Collector's Internal Service DNS Name

Helm creates a Kubernetes Service for the collector automatically. We'll find it:

kubectl get svc -n otel-collector

The service name follows the pattern otel-collector-opentelemetry-collector. Its internal K8s DNS address is:

otel-collector-opentelemetry-collector.otel-collector.svc.cluster.local

This is the address we'll use in the Runtime Manager property in the next step. The Mule runtime resolves this name through the cluster's internal DNS — no external network needed.

Step 6 — Configure Runtime Manager Properties on Our Mule App
This is where we activate the Direct Telemetry Stream on our Mule application. We don't change any code in our app. We set application properties directly in Runtime Manager.

Navigate to our app in Runtime Manager. We'll log in to Anypoint Platform. We'll go to Runtime Manager → Applications. We'll click the name of our Hello World test app.
Open the Properties tab. We'll click Settings in the left panel. We'll click the Properties tab.
Add the following properties:
| Property | Value |
|---|---|
| mule.openTelemetry.tracer.exporter.enabled | true |
| mule.openTelemetry.tracer.exporter.type | HTTP |
| mule.openTelemetry.tracer.exporter.endpoint | http://otel-collector-opentelemetry-collector.otel-collector.svc.cluster.local:4318/v1/traces |
| mule.openTelemetry.tracer.exporter.sampler.arg | 1 |
Here is what each property does:

- mule.openTelemetry.tracer.exporter.enabled — Activates the exporter. It defaults to false. We set it to true.
- mule.openTelemetry.tracer.exporter.type — Sets the transport protocol to HTTP. The OTel Collector's OTLP receiver accepts HTTP on port 4318.
- mule.openTelemetry.tracer.exporter.endpoint — The full URL where the Mule runtime sends traces. We use the internal K8s service DNS name. The Mule runtime pod resolves this name without leaving the cluster network.
- mule.openTelemetry.tracer.exporter.sampler.arg — Sets the sampling rate. A value of 1 means 100% of traces are exported. This is the right setting for a test. In production we'll use a lower value like 0.1 (10%) to control data volume.
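To build intuition for what sampler.arg does, here's an illustrative Python sketch of ratio-based head sampling. This is not Mule's actual implementation — it just shows the general principle such samplers use: deciding from the trace ID so every span in a trace gets the same verdict:

```python
import random

def should_sample(trace_id: int, ratio: float) -> bool:
    """Keep a trace iff its ID falls below the ratio cutoff.

    Deciding from the trace ID (not a coin flip per span) keeps the
    decision consistent for all spans belonging to the same trace.
    """
    cutoff = int(ratio * (1 << 63))
    return (trace_id & ((1 << 63) - 1)) < cutoff

# ratio = 1   -> every trace is exported (our test setting)
# ratio = 0.1 -> roughly 10% of traces are exported (production-style)
ids = [random.getrandbits(64) for _ in range(10_000)]
kept_all = sum(should_sample(t, 1.0) for t in ids)
kept_tenth = sum(should_sample(t, 0.1) for t in ids)
print(kept_all, kept_tenth)  # kept_all is 10000; kept_tenth is roughly 1000
```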
Click Apply and redeploy. Runtime Manager will apply the properties and restart our application. The exporter activates on startup.

Step 7 — Trigger a Trace and Verify in Splunk
Send a request to our app. We'll call our Hello World app's HTTP Listener endpoint:

curl http://<our-rtf-app-endpoint>/

The Mule runtime will process the request. It will generate spans for the flow execution and ship them to the OTel Collector over the internal cluster network. The collector will batch the spans and forward them to Splunk via HEC.
Search for the traces in Splunk. We'll open the Splunk web console at http://<our-host-ip>:8000. We'll navigate to Search & Reporting. We'll run the following search:

index="mule-rtf-traces" sourcetype="mule:otel:trace"

We'll see events appear for each span the Mule runtime exported. Each event contains the span name, trace ID, span ID, timestamps, and the attributes Mule attached — including http.method, http.route, and correlation.id.

To find all spans belonging to one trace, we'll pick a traceId value from any result and filter by it:

index="mule-rtf-traces" sourcetype="mule:otel:trace" traceId="<trace-id-value>"

This shows every span Mule generated for that single request — the flow span, the HTTP Listener span, the Set Payload span, and any other components that executed.

What We Built and Why It Works
The MuleSoft docs describe two paths to get telemetry out of RTF: through the Telemetry Exporter (routed through Anypoint Monitoring) and through the Direct Telemetry Stream (sent directly from the runtime to any OTLP endpoint). We used the Direct Telemetry Stream.

Splunk Enterprise does not speak OTLP natively, so the OTel Collector acts as the translation layer. It receives OTLP from the Mule runtime and converts trace data into HEC events using the splunk_hec exporter. Splunk indexes those events and makes them searchable.

Deploying the collector inside the K8s cluster keeps trace data off the public network until it reaches Splunk. The Mule runtime pod talks to the collector via an internal ClusterIP service. Only the final leg — from the collector to Splunk HEC on port 8088 — crosses the cluster boundary. This also removes any firewall dependency between the RTF cluster and the host, since the Mule runtime never needs to reach the host directly.

The four Runtime Manager properties translate directly to OpenTelemetry SDK configuration inside the Mule runtime. When our app starts, the runtime reads these properties and initializes the OTLP exporter. Every trace generates spans. The runtime batches those spans and flushes them to the OTel Collector every five seconds by default.
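The translation step can be pictured with a simplified sketch — illustrative Python, not the collector's actual Go code: each span gets wrapped in Splunk's HEC event envelope, carrying the index, source, and sourcetype from our exporter config:

```python
import json

def span_to_hec_event(span: dict, index: str, source: str, sourcetype: str) -> dict:
    """Wrap one OTLP-style span in a Splunk HEC event envelope (simplified)."""
    return {
        # HEC expects the event time as epoch seconds; OTLP uses nanoseconds.
        "time": span.get("startTimeUnixNano", 0) / 1e9,
        "index": index,
        "source": source,
        "sourcetype": sourcetype,
        "event": span,  # the span body becomes the searchable event
    }

span = {
    "name": "hello-flow",
    "traceId": "ab" * 16,
    "startTimeUnixNano": 1_700_000_000_000_000_000,
}
event = span_to_hec_event(span, "mule-rtf-traces", "mule-rtf", "mule:otel:trace")
print(json.dumps(event, indent=2))
```

This is why the sourcetype and index fields in otel-values.yaml show up verbatim on every event we search in Splunk.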
Troubleshooting
No events appear in Splunk. We'll check the collector logs: kubectl logs -n otel-collector -l app.kubernetes.io/name=opentelemetry-collector. If the collector can't reach Splunk HEC, it will log an HTTP error. We'll confirm port 8088 is accessible from the cluster nodes to the Docker host.

The OTel Collector pod is not receiving data from Mule. We'll check the Mule application logs in Runtime Manager. The runtime logs OTLP connection errors and export failures. We'll confirm the service DNS name resolves correctly from within the RTF namespace by running:
kubectl run dns-test --rm -it --image=busybox \
--restart=Never -n <rtf-namespace> \
-- nslookup otel-collector-opentelemetry-collector.otel-collector.svc.cluster.local

Spans appear but data looks malformed. We'll verify the sourcetype in the collector config matches the search term we use in Splunk. We'll also confirm HEC Global Settings has All Tokens enabled.

Performance reminder: Direct Telemetry Stream is not a zero-cost feature. Always performance-test our app before enabling this in production. A sampling rate of 1 (100%) is for testing only. We'll lower that value and tune batch and queue properties based on actual traffic before going to production.

Summary
We enabled Direct Telemetry Stream on a Mule app deployed in Runtime Fabric. We configured Splunk Enterprise's HTTP Event Collector. We deployed the OTel Collector as a Kubernetes workload inside the same cluster as RTF. We set four Runtime Manager properties on our app. We verified end-to-end trace data in Splunk's Search & Reporting console.

The four properties that drive the entire integration:
mule.openTelemetry.tracer.exporter.enabled=true
mule.openTelemetry.tracer.exporter.type=HTTP
mule.openTelemetry.tracer.exporter.endpoint=http://otel-collector-opentelemetry-collector.otel-collector.svc.cluster.local:4318/v1/traces
mule.openTelemetry.tracer.exporter.sampler.arg=1

That's all it takes. In the next post in this series, we'll explore the full property reference — sampling strategies, batch tuning, backpressure settings, and TLS for production-grade deployments.