In our previous post PromQL Operators — Part 2 (Aggregation), we learned how to collapse many series into fewer using aggregation operators. We can now compute totals, averages, and extremes across groups of series.
In this post, we will look at transformation functions. These functions operate on the values of an instant vector — they do not change the labels or the number of series. They clean up raw values, round them to a useful precision, and enforce safe boundaries before we display or alert on a result.
Our setup is a Prometheus server scraping a Linux host via Node Exporter. The target has these labels:
instance="172.31.33.131:9100", job="node_exporter"
All examples will run in the Prometheus UI at http://<our-server>:9090.
abs() — Absolute value
abs() returns the absolute value of each sample. It turns any negative value into its positive equivalent. Positive values pass through unchanged.
Raw Node Exporter metrics are always positive, so abs() rarely applies to them directly. We will use it when a difference or delta between two series can go negative.
Let's compare current memory availability against 1 hour ago. The result can be positive — memory freed up — or negative — memory was consumed:
node_memory_MemAvailable_bytes{instance="172.31.33.131:9100"}
-
node_memory_MemAvailable_bytes{instance="172.31.33.131:9100"} offset 1h
We'll run this. A negative result means less memory is available now than an hour ago. A positive result means more memory is free.
We will use abs() when we want to measure the magnitude of that change regardless of direction:
abs(
node_memory_MemAvailable_bytes{instance="172.31.33.131:9100"}
-
node_memory_MemAvailable_bytes{instance="172.31.33.131:9100"} offset 1h
)
We'll run this. We now see how many bytes memory shifted in either direction over the last hour — useful for a volatility panel that flags unstable memory without caring whether it went up or down.
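For readers who want to check the arithmetic outside the Prometheus UI, PromQL's abs() behaves like Python's built-in abs(). A minimal sketch, using made-up MemAvailable samples rather than real scrapes:

```python
# Hypothetical MemAvailable samples in bytes (illustrative values, not real scrapes).
now = 2_400_000_000            # current sample
one_hour_ago = 2_900_000_000   # sample from "offset 1h"

delta = now - one_hour_ago     # negative: memory was consumed
magnitude = abs(delta)         # size of the shift, sign discarded

print(delta)      # -500000000
print(magnitude)  # 500000000
```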
ceil() — Round up to the nearest integer
ceil() rounds every sample value up to the nearest integer. A value of 1.1 becomes 2. A value of 1.9 also becomes 2. A value of 2.0 stays 2.
We will use ceil() when we need a whole-number result and we want to be conservative — never underreporting a count or a size.
Let's convert available filesystem space from bytes to gigabytes and round up:
ceil(
node_filesystem_avail_bytes{
instance="172.31.33.131:9100",
fstype!~"tmpfs|squashfs|devtmpfs"
} / (1024 * 1024 * 1024)
)
We'll run this. A filesystem with 3.2 GB free becomes 4. Note the direction of the rounding: ceil() never underreports, but it can slightly overstate free space, so it fits sizing questions where undercounting is the risk (how many whole gigabytes do we need to provision). For a capacity report that must not overstate available space, floor() in the next section is the safer choice.
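PromQL's ceil() matches Python's math.ceil, so we can sanity-check the rounding direction locally. A sketch with a hypothetical 3.2 GB filesystem:

```python
import math

# Hypothetical filesystem with roughly 3.2 GiB available (illustrative value).
avail_bytes = int(3.2 * 1024**3)

# Convert bytes to GiB and round up, mirroring the PromQL expression above.
avail_gb = math.ceil(avail_bytes / 1024**3)
print(avail_gb)  # 4
```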
floor() — Round down to the nearest integer
floor() rounds every sample value down to the nearest integer. A value of 1.9 becomes 1. A value of 1.1 also becomes 1. A value of 1.0 stays 1.
floor() is the conservative complement to ceil() — we use it when we want to avoid overstating a value.
Let's convert total memory from bytes to gigabytes and round down to report a conservative capacity figure:
floor(
node_memory_MemTotal_bytes{instance="172.31.33.131:9100"}
/ (1024 * 1024 * 1024)
)
We'll run this. A host with 7.8 GB of total RAM becomes 7. We are rounding down — the reported figure is what we can reliably count on, not the ceiling.
We will also use floor() to strip decimal noise from a percentage before displaying it on a dashboard:
floor(
node_memory_MemAvailable_bytes{instance="172.31.33.131:9100"}
/
node_memory_MemTotal_bytes{instance="172.31.33.131:9100"}
* 100
)
We'll run this. Instead of 43.871...% we see 43. Clean, readable, no false precision.
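The same truncation is easy to verify locally, since PromQL's floor() matches Python's math.floor. A sketch with hypothetical memory figures:

```python
import math

# Hypothetical memory samples in bytes (illustrative values).
mem_available = 3_430_000_000
mem_total = 7_820_000_000

pct = mem_available / mem_total * 100  # about 43.86
display = math.floor(pct)              # decimal noise stripped
print(display)  # 43
```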
round() — Round to the nearest value
round(v, to_nearest) rounds each sample to the nearest multiple of to_nearest. When we omit to_nearest, it defaults to 1 and rounds to the nearest integer, with ties such as 43.5 rounding up. Let's round the available-memory percentage to the nearest integer:
round(
node_memory_MemAvailable_bytes{instance="172.31.33.131:9100"}
/
node_memory_MemTotal_bytes{instance="172.31.33.131:9100"}
* 100
)
We'll run this. A value of 43.5% rounds to 44. A value of 43.4% rounds to 43. This is the function we will use most often for display values — it produces the result closest to the true value rather than always going up or always going down.
The to_nearest argument is where round() becomes more powerful than ceil() and floor(). Let's round filesystem usage percentage to the nearest 5%:
round(
(
node_filesystem_size_bytes{
instance="172.31.33.131:9100",
fstype!~"tmpfs|squashfs|devtmpfs"
}
-
node_filesystem_avail_bytes{
instance="172.31.33.131:9100",
fstype!~"tmpfs|squashfs|devtmpfs"
}
)
/
node_filesystem_size_bytes{
instance="172.31.33.131:9100",
fstype!~"tmpfs|squashfs|devtmpfs"
}
* 100
, 5)
We'll run this. A filesystem at 67.3% usage becomes 65. One at 68.1% becomes 70. Rounding to the nearest 5 reduces dashboard noise — minor fluctuations in usage no longer cause the displayed value to flicker between numbers.
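A close model of these semantics is floor(v / to_nearest + 0.5) * to_nearest, which also shows why ties round up. A Python sketch (promql_round is our own helper name, not a Prometheus API):

```python
import math

def promql_round(v, to_nearest=1.0):
    # Round v to the nearest multiple of to_nearest; ties round up.
    return math.floor(v / to_nearest + 0.5) * to_nearest

print(promql_round(43.5))     # 44.0  (tie rounds up)
print(promql_round(43.4))     # 43.0
print(promql_round(67.3, 5))  # 65.0
print(promql_round(68.1, 5))  # 70.0
```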
clamp() — Enforce a minimum and maximum boundary
clamp(v, min, max) constrains every sample value to stay within a defined range. Values below min are raised to min. Values above max are lowered to max. Values already within the range pass through unchanged.
clamp_min(v, min) and clamp_max(v, max) are convenience variants that enforce only one boundary.
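Sketched in Python, the three variants reduce to simple min/max compositions:

```python
def clamp(v, lo, hi):
    # Constrain v to the range [lo, hi].
    return max(lo, min(v, hi))

def clamp_min(v, lo):
    # Enforce only the lower boundary.
    return max(lo, v)

def clamp_max(v, hi):
    # Enforce only the upper boundary.
    return min(v, hi)

print(clamp(104.2, 0, 100))  # 100
print(clamp(-0.3, 0, 100))   # 0
print(clamp(67.3, 0, 100))   # 67.3
```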
We will use clamp() in two situations: preventing a division result from producing values outside a valid range, and capping a metric before feeding it into an alert or a display panel.
Let's compute disk usage percentage and clamp the result to the valid range of 0 to 100. Floating-point arithmetic can occasionally produce values slightly outside this range due to scrape timing differences between the two metrics:
clamp(
(
node_filesystem_size_bytes{
instance="172.31.33.131:9100",
fstype!~"tmpfs|squashfs|devtmpfs"
}
-
node_filesystem_avail_bytes{
instance="172.31.33.131:9100",
fstype!~"tmpfs|squashfs|devtmpfs"
}
)
/
node_filesystem_size_bytes{
instance="172.31.33.131:9100",
fstype!~"tmpfs|squashfs|devtmpfs"
}
* 100
, 0, 100)
We'll run this. No result will fall below 0 or exceed 100. Our gauge panel in Grafana will never display an invalid percentage.
Let's use clamp_min() to prevent a memory delta from reporting negative capacity — useful when we only care about memory growth, not reduction:
clamp_min(
node_memory_MemAvailable_bytes{instance="172.31.33.131:9100"}
-
node_memory_MemAvailable_bytes{instance="172.31.33.131:9100"} offset 30m
, 0)
We'll run this. When memory has been freed in the last 30 minutes, we see a positive number. When memory has been consumed, we see 0 instead of a negative value. This expression measures only memory growth — it ignores shrinkage entirely.
And clamp_max() to cap CPU seconds for display — useful when we want to highlight that a value has hit a ceiling without showing how far past it a rogue process went:
clamp_max(
sum by(instance, cpu) (
node_cpu_seconds_total{
instance="172.31.33.131:9100",
mode!~"idle|iowait"
}
)
, 1000)
We'll run this. Any core accumulating more than 1000 active CPU seconds is capped at 1000 in the display. The underlying data is unchanged — only the value returned by the query is bounded.
Combining Transformation Functions
Transformation functions compose naturally. We can chain them to produce a clean, safe, display-ready value in a single expression.
Let's build a production-ready disk usage percentage for a Grafana gauge panel — clamped, rounded, and human-readable:
round(
clamp(
(
node_filesystem_size_bytes{
instance="172.31.33.131:9100",
fstype!~"tmpfs|squashfs|devtmpfs"
}
-
node_filesystem_avail_bytes{
instance="172.31.33.131:9100",
fstype!~"tmpfs|squashfs|devtmpfs"
}
)
/
node_filesystem_size_bytes{
instance="172.31.33.131:9100",
fstype!~"tmpfs|squashfs|devtmpfs"
}
* 100
, 0, 100)
, 1)
We'll run this final query. We will see disk usage as a clean integer percentage between 0 and 100 for each real filesystem on our host. No negative values, no values above 100, no floating-point noise. This is the query we will drop directly into a Grafana gauge panel.
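The whole clamp-then-round pipeline can be modelled offline as well. A sketch with hypothetical filesystem numbers (the helper names are ours, not Prometheus APIs):

```python
import math

def promql_round(v, to_nearest=1.0):
    # Round to the nearest multiple of to_nearest; ties round up.
    return math.floor(v / to_nearest + 0.5) * to_nearest

def clamp(v, lo, hi):
    return max(lo, min(v, hi))

# Hypothetical scrape: 50 GiB filesystem with 17 GiB available.
size = 50 * 1024**3
avail = 17 * 1024**3

used_pct = (size - avail) / size * 100    # 66.0
display = promql_round(clamp(used_pct, 0, 100), 1)
print(display)  # 66.0
```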
Transformation Function Quick Reference
| Function | What it does |
|---|---|
| abs(v) | Absolute value of each sample |
| ceil(v) | Round each sample up to the nearest integer |
| floor(v) | Round each sample down to the nearest integer |
| round(v, to_nearest) | Round each sample to the nearest multiple of to_nearest |
| clamp(v, min, max) | Constrain each sample to the range [min, max] |
| clamp_min(v, min) | Raise any sample below min to min |
| clamp_max(v, max) | Lower any sample above max to max |
Summary
Transformation functions operate on sample values without changing labels or series count. abs() measures magnitude regardless of sign. ceil() and floor() round conservatively in opposite directions. round() rounds to the nearest value or nearest multiple — the right choice for display-ready output. clamp(), clamp_min(), and clamp_max() enforce safe boundaries and protect dashboards and alert rules from impossible values caused by scrape timing differences or floating-point edge cases. We will compose these functions with aggregations and arithmetic in every production query we write.
In the next post, we will learn about counter functions — rate(), irate(), and increase(). These are the tools we use to turn raw ever-increasing counters into meaningful per-second rates and growth measurements.