In our last post we learned transformation functions — `abs()`, `ceil()`, `floor()`, `round()`, and `clamp()`. We can now clean and constrain metric values before displaying or alerting on them.

In this post, we will learn date and time functions. These functions extract time components from the current moment or from a metric's timestamp. We will use them to build time-aware alert rules, detect stale metrics, and scope queries to specific hours or days of the week.
Our setup is a Prometheus server scraping a Linux host via Node Exporter. The target has these labels:

```
instance="172.31.33.131:9100"
job="node_exporter"
```

We will run every query in this post in the Prometheus expression browser at `http://<our-server>:9090`.
## time() — The current Unix timestamp

`time()` returns the current evaluation timestamp as a Unix timestamp in seconds. It takes no arguments and returns a scalar.

```promql
time()
```

We'll run this. We will see the number of seconds elapsed since 1970-01-01 00:00:00 UTC. At the time of writing, that number is around 1774449874.176.

`time()` is the building block for every other time-aware expression in PromQL. On its own it is rarely useful, but we will use it constantly in combination with other functions and metrics.
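For intuition, the same quantity is easy to inspect outside Prometheus. A short Python sketch (the timestamp value is the example from the text; nothing here is Prometheus-specific):

```python
from datetime import datetime, timezone

# PromQL's time() returns seconds since the Unix epoch
# (1970-01-01 00:00:00 UTC) -- the same value Python's time.time() yields.
ts = 1774449874.176  # the example timestamp from the text

# Converting it to a calendar date shows it is a moment in March 2026.
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.isoformat())
```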
## timestamp() — The timestamp of each sample

`timestamp(v)` returns the timestamp of each sample in an instant vector — not the sample's value, but the moment Prometheus last scraped it.

```promql
timestamp(node_load1{instance="172.31.33.131:9100"})
```

We'll run this. Instead of the load average value, we will see the Unix timestamp of the most recent scrape. On a healthy setup with a 15-second scrape interval, this value should be very close to `time()`.

The real power of `timestamp()` is detecting stale or missing metrics. Let's compute how many seconds have passed since Prometheus last scraped our Node Exporter target:

```promql
time() - timestamp(node_load1{instance="172.31.33.131:9100"})
```

We'll run this. On a healthy host we will see a value between 0 and 15 — within one scrape interval. A value above 60 means Prometheus has missed several scrapes. A value above 300 means our Node Exporter is likely down.

We can turn this into an alert condition with a comparison operator:

```promql
time() - timestamp(node_load1{instance="172.31.33.131:9100"}) > 60
```

This returns a result only when the last scrape is more than 60 seconds old — the signal we need to fire a "target unreachable" alert.
## year(), month(), day_of_month() — Calendar date components

These three functions extract calendar components from a Unix timestamp. (Note the name: PromQL's day-of-month function is `day_of_month()`, not `day()`.) They all accept an optional instant vector argument. When we omit the argument, they operate on `time()` — the current moment.

```promql
year()
month()
day_of_month()
```

We'll run each of these. We will see the current year, the current month as a number from 1 to 12, and the current day of the month from 1 to 31. All values are in UTC.

We can pass a Unix timestamp vector as the argument to extract components from a specific metric's scrape time:

```promql
year(timestamp(node_load1{instance="172.31.33.131:9100"}))
month(timestamp(node_load1{instance="172.31.33.131:9100"}))
day_of_month(timestamp(node_load1{instance="172.31.33.131:9100"}))
```

We'll run these. We will see the year, month, and day of the most recent scrape of `node_load1` on our host.
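As a sanity check on the UTC semantics, here is what the same extraction looks like in plain Python (a sketch of the logic, not Prometheus code — the helper name is ours):

```python
from datetime import datetime, timezone

def calendar_components(unix_ts: float) -> tuple[int, int, int]:
    """Mirror PromQL's year(), month(), day_of_month(): all evaluated
    in UTC, regardless of the local timezone of the querying machine."""
    dt = datetime.fromtimestamp(unix_ts, tz=timezone.utc)
    return dt.year, dt.month, dt.day

# The epoch itself decomposes to 1970-01-01.
print(calendar_components(0))  # (1970, 1, 1)
```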
## hour() — The current hour in UTC

`hour()` returns the hour of the day as a number from 0 to 23 in UTC. Like the calendar functions above, it defaults to the current time when called with no argument.

```promql
hour()
```

We'll run this. We will see the current UTC hour.

`hour()` is one of the most useful time functions for alert rules. Many MuleSoft integration workloads have predictable traffic patterns — batch jobs that run at night, API peaks during business hours. We use `hour()` to restrict alerts to windows where a condition is actually unexpected.

Let's build an alert condition that fires only during business hours — UTC 08:00 to 18:00 — when load average is high:

```promql
node_load1{instance="172.31.33.131:9100"} > 2
and on() hour() >= 8
and on() hour() < 18
```

`on()` with empty parentheses matches the label-less result of `hour()` against every series on the left side. The full expression returns a result only when load is above 2 and the current UTC hour is between 08:00 and 18:00. Outside those hours, a high load average is expected — a nightly batch job — and we do not want to wake anyone up for it.
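The hour gate itself is a simple predicate. Modeled in Python for clarity (a sketch of the logic, not how Prometheus evaluates it):

```python
from datetime import datetime, timezone

def in_business_hours(unix_ts: float) -> bool:
    """Mirror of `hour() >= 8 and hour() < 18`: true from 08:00:00 up to
    but not including 18:00:00, evaluated in UTC."""
    return 8 <= datetime.fromtimestamp(unix_ts, tz=timezone.utc).hour < 18

print(in_business_hours(9 * 3600))   # 09:00 UTC on 1970-01-01 -> True
print(in_business_hours(20 * 3600))  # 20:00 UTC -> False
```

Note the half-open interval: 17:59:59 passes, 18:00:00 does not, matching the `< 18` comparison in the rule.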
## minute() — Sub-hour precision

`minute()` returns the current minute from 0 to 59, and defaults to the current time when called with no argument. (PromQL has no matching `second()` function — timestamps already carry seconds, so sub-minute arithmetic is done directly on them, as we will see below.)

```promql
minute()
```

We will use `minute()` to detect clock drift or scrape jitter. Let's check whether our Node Exporter scrape timestamps are aligned with the expected 15-second interval:

```promql
minute(timestamp(node_load1{instance="172.31.33.131:9100"}))
```

We'll run this. We see the minute component of the last scrape timestamp. For second-level precision, we take the raw timestamp modulo the scrape interval:

```promql
timestamp(node_load1{instance="172.31.33.131:9100"}) % 15
```

If scrapes happen at consistent second values — the result holds steady near one offset from refresh to refresh — the scrape interval is stable. Irregular values suggest jitter or load on the Prometheus server itself.
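This alignment check can be modeled in Python (the 15-second interval is our setup's scrape interval — an assumption of this sketch, and the helper name is ours):

```python
def scrape_offset(unix_ts: float, interval: int = 15) -> int:
    """Distance in seconds from the nearest multiple of the scrape
    interval -- a stable scrape schedule keeps this value constant."""
    remainder = int(unix_ts) % interval
    return min(remainder, interval - remainder)

# Two timestamps exactly one interval apart land on the same offset...
print(scrape_offset(1_774_449_860), scrape_offset(1_774_449_875))
# ...while a drifting scraper produces changing offsets.
```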
## day_of_week() — The day of the week

`day_of_week()` returns the day of the week as a number from 0 (Sunday) to 6 (Saturday) in UTC.

```promql
day_of_week()
```

We'll run this. We will see a number from 0 to 6 representing today.

We use `day_of_week()` in alert rules to suppress noise on weekends. A MuleSoft API gateway that handles zero traffic on a Sunday is behaving correctly — we should not fire a low-throughput alert for it. Let's add a weekday filter to an alert:

```promql
node_load1{instance="172.31.33.131:9100"} > 2
and on() (day_of_week() >= 1 and day_of_week() <= 5)
and on() hour() >= 8
and on() hour() < 18
```

We'll run this. The expression now fires only on weekdays — Monday through Friday — and only during business hours. This is the standard pattern for business-hours alerting on a MuleSoft host.
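One trap worth noting: PromQL numbers Sunday as 0, while many languages number Monday as 0. A Python sketch of the conversion (our own helpers, not a Prometheus API):

```python
from datetime import datetime, timezone

def promql_day_of_week(unix_ts: float) -> int:
    """Mirror PromQL's day_of_week(): 0 = Sunday ... 6 = Saturday (UTC).
    Python's weekday() counts 0 = Monday, hence the shift."""
    return (datetime.fromtimestamp(unix_ts, tz=timezone.utc).weekday() + 1) % 7

def is_weekday(unix_ts: float) -> bool:
    """Mirror of `day_of_week() >= 1 and day_of_week() <= 5`."""
    return 1 <= promql_day_of_week(unix_ts) <= 5

# The epoch, 1970-01-01, was a Thursday: day_of_week = 4.
print(promql_day_of_week(0), is_weekday(0))  # 4 True
```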
## day_of_year() — The day of the year

`day_of_year()` returns the day of the year as a number from 1 to 365 (or 366 in a leap year) in UTC.

```promql
day_of_year()
```

We'll run this. We will see the ordinal day of the current year.

We use `day_of_year()` for capacity planning queries. Let's compute what fraction of the year has elapsed — useful for projecting annual disk consumption:

```promql
day_of_year() / 365
```

(PromQL has no days-in-year function, so this ignores leap years — a one-day error we accept for a capacity estimate.)

A more direct use: flag when we are in the last 10 days of the year, when year-end batch processes often run and generate unusual load:

```promql
node_load1{instance="172.31.33.131:9100"} > 3
and on() day_of_year() > 355
```

We'll run this. The expression returns a result only when load is high and we are in the final 10 days of the year — a period when elevated load is likely expected but still worth tracking.
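A leap-year-aware version of the year-elapsed fraction is easier to express outside PromQL. A Python sketch (the helper name is ours):

```python
import calendar
from datetime import datetime, timezone

def year_fraction(unix_ts: float) -> float:
    """Fraction of the year elapsed, counting 366 days in leap years --
    the refinement that dividing day-of-year by a flat 365 approximates."""
    dt = datetime.fromtimestamp(unix_ts, tz=timezone.utc)
    days_in_year = 366 if calendar.isleap(dt.year) else 365
    return dt.timetuple().tm_yday / days_in_year
```

On the last day of a non-leap year the fraction reaches exactly 1.0; on January 1 it is 1/365.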
## days_in_month() — The number of days in the current month

`days_in_month()` returns the total number of days in the current month — 28, 29, 30, or 31 depending on the month and year. It accounts for leap years automatically.

```promql
days_in_month()
```

We'll run this. We will see the number of days in the current month.

We use `days_in_month()` for month-aware capacity calculations. Let's estimate how much disk space our host will consume by the end of the month, based on how much it has consumed so far this month:

```promql
(
  node_filesystem_size_bytes{
    instance="172.31.33.131:9100",
    fstype!~"tmpfs|squashfs|devtmpfs"
  }
  -
  node_filesystem_avail_bytes{
    instance="172.31.33.131:9100",
    fstype!~"tmpfs|squashfs|devtmpfs"
  }
)
/
node_filesystem_size_bytes{
  instance="172.31.33.131:9100",
  fstype!~"tmpfs|squashfs|devtmpfs"
}
* 100
* scalar(days_in_month()) / scalar(day_of_month())
```

(We wrap `days_in_month()` and `day_of_month()` in `scalar()` so they can multiply a vector that carries filesystem labels — a bare vector-to-vector operation would find no matching labels and return nothing.)

We'll run this. We will see the projected end-of-month disk usage percentage based on today's consumption rate. If we are on day 10 of a 31-day month and disk usage is at 20%, this projects 62% by month end — still safe. If it projects above 100%, we have a problem to solve before the month ends.
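The projection arithmetic is easy to check in Python, where `calendar.monthrange` plays the role of `days_in_month()` (a sketch of the linear model, using the worked numbers from the text; the helper name is ours):

```python
import calendar

def project_month_end(usage_percent: float, day: int, year: int, month: int) -> float:
    """Linear projection: if usage_percent accumulated over `day` days,
    extrapolate to the full month -- usage * days_in_month / day_of_month."""
    days_in_month = calendar.monthrange(year, month)[1]  # leap-year aware
    return usage_percent * days_in_month / day

# Day 10 of a 31-day month at 20% usage projects to 62% by month end.
print(project_month_end(20.0, 10, 2026, 1))  # 62.0
```

`monthrange` handles February automatically: `calendar.monthrange(2024, 2)[1]` is 29, so a leap-year February projects over 29 days, not 28.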
## Putting It All Together

Let's build a time-aware disk space alert for our MuleSoft host. We want to fire when disk usage exceeds 80%, but only during business hours on weekdays — because we have automated cleanup jobs that run at night and on weekends that will bring it back down, and we do not want to alert during those windows.

```promql
(
  (
    node_filesystem_size_bytes{
      instance="172.31.33.131:9100",
      fstype!~"tmpfs|squashfs|devtmpfs"
    }
    -
    node_filesystem_avail_bytes{
      instance="172.31.33.131:9100",
      fstype!~"tmpfs|squashfs|devtmpfs"
    }
  )
  /
  node_filesystem_size_bytes{
    instance="172.31.33.131:9100",
    fstype!~"tmpfs|squashfs|devtmpfs"
  }
  * 100 > 80
)
and on() (day_of_week() >= 1 and day_of_week() <= 5)
and on() hour() >= 8
and on() hour() < 18
```

We'll run this in the Prometheus UI. When disk usage on any real filesystem exceeds 80% during weekday business hours, we will see a result. Outside those windows, the expression returns nothing even if disk usage is high.

We will also add a stale metrics check alongside it. A missing scrape is as dangerous as a full disk — we will not know the disk is full if Node Exporter has stopped reporting:

```promql
time() - timestamp(node_load1{instance="172.31.33.131:9100"}) > 60
```

Both expressions together give us confidence: disk is safe, and we are actually receiving data to confirm it.
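As a final sanity check, the gating logic of the combined rule condenses to a small predicate. A Python model (our own sketch; the 80% threshold and UTC windows are those of the rule above):

```python
from datetime import datetime, timezone

def should_alert(disk_percent: float, unix_ts: float) -> bool:
    """Fire only when disk usage exceeds 80% during weekday business
    hours, with all time components evaluated in UTC."""
    dt = datetime.fromtimestamp(unix_ts, tz=timezone.utc)
    dow = (dt.weekday() + 1) % 7       # PromQL convention: 0 = Sunday
    return disk_percent > 80 and 1 <= dow <= 5 and 8 <= dt.hour < 18

# Thursday 1970-01-01 09:00 UTC, 85% full -> fires.
print(should_alert(85.0, 9 * 3600))              # True
# Same usage on Sunday 1970-01-04 -> suppressed.
print(should_alert(85.0, 3 * 86400 + 9 * 3600))  # False
```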
## Date and Time Function Quick Reference

| Function | Returns | Range |
|---|---|---|
| `time()` | Current Unix timestamp (scalar) | — |
| `timestamp(v)` | Scrape timestamp of each sample | — |
| `year(v)` | Year component | e.g. 2026 |
| `month(v)` | Month component | 1–12 |
| `day_of_month(v)` | Day of the month | 1–31 |
| `hour(v)` | Hour of the day (UTC) | 0–23 |
| `minute(v)` | Minute of the hour (UTC) | 0–59 |
| `day_of_week(v)` | Day of the week (UTC) | 0 (Sun)–6 (Sat) |
| `day_of_year(v)` | Day of the year (UTC) | 1–365/366 |
| `days_in_month(v)` | Days in the current month | 28–31 |
All of the calendar functions default to `time()` when called with no argument. All values are in UTC.

## Summary

`time()` returns the current Unix timestamp. `timestamp()` returns the scrape timestamp of each sample — we subtract it from `time()` to detect stale or missing metrics. The calendar functions — `year()`, `month()`, `day_of_month()` — extract date components in UTC. `hour()` and `day_of_week()` are the two we will use most often in alert rules to restrict firing to expected windows. `days_in_month()` enables month-aware capacity projections that account for February and leap years automatically.

In the next post, we will learn about counter functions — `rate()`, `irate()`, and `increase()`. These are the tools we use to turn raw ever-increasing counters into meaningful per-second rates and growth measurements.