In our last post, we learned how to filter time series using selectors and matchers. We can now write precise queries that target exactly the metrics we need from our Linux host. In this post, we will learn how to control when Prometheus reads those metrics. Two modifiers give us that control: `offset` and `@`.

Our setup, as in previous posts, is a Prometheus server scraping a Linux host via Node Exporter. The target has these labels:

```
instance="172.31.33.131:9100"
job="node_exporter"
```

We will run our queries in the Prometheus web UI at http://<our-server>:9090. We will use Table view to read the results clearly.

Why We Need to Shift Time
By default, every PromQL query evaluates at the current moment. Prometheus reads the most recent scraped value for each matched series and returns it.

That is fine for a live dashboard. But we often need to answer questions like:
- Was the CPU load higher at this time yesterday?
- How much memory did we have 30 minutes before that alert fired?
- What did disk usage look like at the exact moment the deployment went out?
The offset Modifier
The `offset` modifier shifts the evaluation time backwards by a fixed duration. We place it after the selector, followed by a duration:

```
node_load1{instance="172.31.33.131:9100"} offset 1h
```

This returns the same series as a plain `node_load1` query, but the value will reflect what Prometheus scraped 1 hour ago. Now let's run the two side by side and note the difference:

```
node_load1{instance="172.31.33.131:9100"}
node_load1{instance="172.31.33.131:9100"} offset 1h
```

Valid duration units are `ms`, `s`, `m`, `h`, `d`, `w`, and `y`. We can combine them: `1h30m`, `2d12h`, `1w`.
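As a quick sanity check on those duration units, here is a small Python sketch (a hypothetical helper, not part of Prometheus or any of its client libraries) that converts a PromQL-style duration string into seconds:

```python
import re

# Seconds per PromQL duration unit ("ms" is a fraction of a second)
UNITS = {"ms": 0.001, "s": 1, "m": 60, "h": 3600,
         "d": 86400, "w": 604800, "y": 31536000}

def duration_to_seconds(spec: str) -> float:
    """Convert a PromQL duration like '1h30m' or '2d12h' to seconds."""
    total = 0.0
    # "ms" is listed before "m" in the alternation so it matches first
    for value, unit in re.findall(r"(\d+)(ms|s|m|h|d|w|y)", spec):
        total += int(value) * UNITS[unit]
    return total

print(duration_to_seconds("1h30m"))  # 5400.0
print(duration_to_seconds("2d12h"))  # 216000.0
```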
Compare today against yesterday

One of the most practical uses of `offset` is overlaying two expressions on the same Grafana panel — today's load and the same metric from 24 hours ago:

```
node_load1{instance="172.31.33.131:9100"}
node_load1{instance="172.31.33.131:9100"} offset 24h
```

We can use the same pattern for weekly comparisons. Many MuleSoft integration workloads follow a weekly rhythm — heavier on weekdays, lighter on weekends:

```
node_load1{instance="172.31.33.131:9100"} offset 7d
```

Detect rapid memory drain
`offset` also powers rate-of-change alert rules. Let's write a condition that fires when available memory drops below 70% of what it was 30 minutes ago:

```
node_memory_MemAvailable_bytes{instance="172.31.33.131:9100"}
<
node_memory_MemAvailable_bytes{instance="172.31.33.131:9100"} offset 30m * 0.7
```
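Wired into a Prometheus rule file, that condition becomes an alert. The group and alert names, the `for` duration, and the annotation text below are illustrative choices, not part of the original setup:

```yaml
groups:
  - name: memory-alerts          # illustrative group name
    rules:
      - alert: RapidMemoryDrain  # illustrative alert name
        expr: |
          node_memory_MemAvailable_bytes{instance="172.31.33.131:9100"}
            <
          node_memory_MemAvailable_bytes{instance="172.31.33.131:9100"} offset 30m * 0.7
        for: 5m                  # require the condition to hold for 5 minutes
        labels:
          severity: warning
        annotations:
          summary: "Available memory fell below 70% of its value 30 minutes ago"
```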
Detect disk space consumption

We'll compare current available disk space against 6 hours ago:

```
node_filesystem_avail_bytes{
  instance="172.31.33.131:9100",
  fstype!~"tmpfs|squashfs|devtmpfs"
}
/
node_filesystem_avail_bytes{
  instance="172.31.33.131:9100",
  fstype!~"tmpfs|squashfs|devtmpfs"
} offset 6h
```

A result below 1.0 means a filesystem has less free space now than 6 hours ago. A result around 0.6 means we lost 40% of our free space in 6 hours. On a MuleSoft host, that often points to a runaway log file.
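To make the interpretation of that ratio concrete, here is the arithmetic with made-up example values (the byte counts are illustrative, not scraped data):

```python
before_free = 10.0e9   # bytes available 6 hours ago (example value)
now_free = 6.0e9       # bytes available now (example value)

# This is what the division in the PromQL expression computes per filesystem
ratio = now_free / before_free
print(ratio)                                # 0.6
print(f"free space lost: {1 - ratio:.0%}")  # free space lost: 40%
```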
The `@` modifier pins the evaluation to a specific Unix timestamp, regardless of when the query runs. The result always reflects the value at that exact point in time.

First, we'll get the Unix timestamp for the moment we want to investigate. We'll run this on our Linux host:

```
date -d "2026-03-13 03:47:00 UTC" +%s
# Output: 1773373620
```

Now we'll use that value in the Prometheus UI:

```
node_load1{instance="172.31.33.131:9100"} @ 1773373620
```

This reads the load average as it was at 2026-03-13 03:47:00 UTC. It does not matter when we run this query — the result will not change.
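The same conversion works anywhere Python runs, which is handy when the `date` command is not available. A minimal equivalent:

```python
from datetime import datetime, timezone

# Unix timestamp for 2026-03-13 03:47:00 UTC,
# equivalent to: date -d "2026-03-13 03:47:00 UTC" +%s
ts = int(datetime(2026, 3, 13, 3, 47, 0, tzinfo=timezone.utc).timestamp())
print(ts)  # 1773373620
```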
Incident post-mortem

`@` is our main tool for incident post-mortems. When we know an alert fired at a specific timestamp, we pin every query to that moment and reconstruct the full state of the system.

Let's say an alert fired at 2026-03-13 03:47:00 UTC — Unix timestamp 1773373620. We'll run each of these queries in sequence.

What was the CPU doing?

```
node_cpu_seconds_total{
  instance="172.31.33.131:9100",
  mode!~"idle|iowait"
} @ 1773373620
```

How much memory was available?

```
node_memory_MemAvailable_bytes{instance="172.31.33.131:9100"} @ 1773373620
```

How much disk space was left?

```
node_filesystem_avail_bytes{
  instance="172.31.33.131:9100",
  fstype!~"tmpfs|squashfs|devtmpfs"
} @ 1773373620
```

Every result reflects exactly what Prometheus scraped at 03:47:00 UTC. We build the full picture of the system at the moment the incident occurred — without scrolling through graphs or guessing.
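Pinned queries like these are also easy to script. Prometheus's instant-query HTTP endpoint, `/api/v1/query`, accepts a `time` parameter that plays the same role as `@`. A minimal sketch using only the standard library; the server address below is a placeholder assumption:

```python
import urllib.parse

def build_query_url(prom_url: str, promql: str, ts: int) -> str:
    """Build an /api/v1/query URL that evaluates `promql` at Unix time `ts`.

    The endpoint's `time` parameter plays the same role as `@` in PromQL.
    """
    params = urllib.parse.urlencode({"query": promql, "time": ts})
    return f"{prom_url}/api/v1/query?{params}"

# Hypothetical server address -- substitute your Prometheus URL:
url = build_query_url("http://prometheus.example:9090",
                      'node_load1{instance="172.31.33.131:9100"}', 1773373620)
print(url)
# Fetch with urllib.request.urlopen(url) against a reachable server.
```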
Combining offset and @

We can use both modifiers in the same expression. `@` sets the anchor point. `offset` shifts backwards from that anchor:

```
node_load1{instance="172.31.33.131:9100"} @ 1773373620 offset 30m
```

This reads the load average 30 minutes before timestamp 1773373620. We'll run this alongside the unshifted version:

```
node_load1{instance="172.31.33.131:9100"} @ 1773373620 offset 30m
node_load1{instance="172.31.33.131:9100"} @ 1773373620
```

We will see what the system looked like before the incident and at the moment it happened. Let's apply this to memory — the metric most likely to tell the story of a MuleSoft runtime under pressure:

```
node_memory_MemAvailable_bytes{instance="172.31.33.131:9100"} @ 1773373620 offset 30m
node_memory_MemAvailable_bytes{instance="172.31.33.131:9100"} @ 1773373620
```

If memory dropped sharply between those two points, we know the drain happened in that 30-minute window and we can focus our investigation there.
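Conceptually, `@ T offset D` evaluates the series at time `T` minus `D`. In plain arithmetic:

```python
ANCHOR = 1773373620   # the @ timestamp
OFFSET = 30 * 60      # offset 30m, in seconds

# "@ 1773373620 offset 30m" evaluates the series at this instant:
eval_time = ANCHOR - OFFSET
print(eval_time)  # 1773371820
```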
Modifier Quick Reference

| Syntax | What it does |
|---|---|
| `metric offset 1h` | Read the metric from 1 hour ago |
| `metric offset 24h` | Read the metric from 24 hours ago |
| `metric offset 7d` | Read the metric from 7 days ago |
| `metric @ 1773373620` | Read the metric at a specific Unix timestamp |
| `metric @ 1773373620 offset 30m` | Read 30 minutes before a fixed timestamp |
Summary
The `offset` modifier shifts evaluation backwards by a relative duration — useful for comparing current behavior against historical baselines and for rate-of-change alert rules. The `@` modifier pins evaluation to an absolute Unix timestamp — essential for incident post-mortems. We combine both to read the system state just before a known event.

In the next post, we will learn about operators and aggregations. We will compute CPU usage percentages, sum memory across cores, and find the busiest disk mount — using PromQL arithmetic and functions like `sum()`, `avg()`, `max()`, and `rate()`.