Storage
Absurd supports two queue storage modes:
- `unpartitioned` (default)
- `partitioned` (weekly range partitions)
This page explains when to use each mode, what gets created in Postgres, and
how to automate partition creation and cleanup with absurdctl and pg_cron.
Quick Recommendation
- Start with unpartitioned for small/medium workloads or when you want the simplest operations.
- Use partitioned when task history volume is high and you want easier, safer retention operations over time windows.
What Each Mode Creates
For every queue, Absurd creates queue-local tables in the `absurd` schema:
- `t_<queue>` tasks
- `r_<queue>` runs
- `c_<queue>` checkpoints
- `e_<queue>` events
- `w_<queue>` waits
For partitioned queues:
- `t_`, `r_`, `c_`, and `w_` are declarative partitioned parent tables
- weekly child partitions are created with a suffix like `<YWW>` (ISO year + ISO week), where `Y` is the last ISO year digit, so partition names roll over every 10 years
- default catch-all partitions are created with a `_d` suffix
- an additional `i_<queue>` table is created for idempotency-key mapping
Both `i_<queue>` and `e_<queue>` remain unpartitioned.
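As a sketch, the `<YWW>` naming scheme described above can be modeled in Python. This is illustrative only; `week_suffix` is a hypothetical helper, not part of Absurd:

```python
from datetime import date

def week_suffix(d: date) -> str:
    """Weekly partition suffix <YWW>: last ISO-year digit + two-digit ISO week."""
    iso_year, iso_week, _ = d.isocalendar()
    return f"{iso_year % 10}{iso_week:02d}"

# 2025-01-06 falls in ISO week 2 of 2025 -> suffix "502".
# 2015-01-05 also yields "502": the single year digit is why
# names roll over every 10 years.
```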
Creating Queues
Queues must be created before use. You can create them manually with absurdctl, or let the SDKs create them for you.
Unpartitioned
absurdctl create-queue jobs
Equivalent SQL:
select absurd.create_queue('jobs');
Partitioned
absurdctl create-queue jobs --storage-mode partitioned
Equivalent SQL:
select absurd.create_queue('jobs', 'partitioned');
`create_queue` is idempotent if the queue already exists with the same storage
mode. If you try to recreate it with a different mode, Absurd raises an error.
Queue Storage Policy
Each queue has policy fields in `absurd.queues`:
- `default_partition` (`enabled` or `disabled`, default `enabled`)
- `partition_lookahead` (default `28 days`)
- `partition_lookback` (default `1 day`)
- `cleanup_ttl` (default `30 days`)
- `cleanup_limit` (default `1000`)
- `detach_mode` (`none` or `empty`, default `none`)
- `detach_min_age` (default `30 days`)
Read/update policy with absurdctl:
# show
absurdctl queue-policy jobs
# update
absurdctl queue-policy jobs \
--default-partition enabled \
--partition-lookahead '42 days' \
--partition-lookback '2 days' \
--cleanup-ttl '30 days' \
--cleanup-limit 2000 \
--detach-mode empty \
--detach-min-age '30 days'
Equivalent SQL:
select absurd.set_queue_policy(
'jobs',
'{
"default_partition": "enabled",
"partition_lookahead": "42 days",
"partition_lookback": "2 days",
"cleanup_ttl": "30 days",
"cleanup_limit": 2000,
"detach_mode": "empty",
"detach_min_age": "30 days"
}'::jsonb
);
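The policy shape can be sketched as plain data. The merge helper below is hypothetical and only illustrates layering a partial update over the defaults listed above; it is not Absurd's implementation:

```python
# Default queue policy values as documented above.
DEFAULT_POLICY = {
    "default_partition": "enabled",
    "partition_lookahead": "28 days",
    "partition_lookback": "1 day",
    "cleanup_ttl": "30 days",
    "cleanup_limit": 1000,
    "detach_mode": "none",
    "detach_min_age": "30 days",
}

def merge_policy(current: dict, update: dict) -> dict:
    """Overlay a partial policy update, rejecting unknown fields."""
    unknown = set(update) - set(DEFAULT_POLICY)
    if unknown:
        raise ValueError(f"unknown policy fields: {sorted(unknown)}")
    return {**current, **update}
```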
Partition Provisioning (Creating Partitions Ahead of Time)
Partitioned queues use `absurd.ensure_partitions(...)`.
For each partitioned queue, it creates partitions over this window:
- `start = week_bucket_utc(now - partition_lookback)`
- `end = week_bucket_utc(now + partition_lookahead)`
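A sketch of the window computation, assuming `week_bucket_utc` truncates to the start of the ISO week (Monday 00:00 UTC). This models the described behavior; it is not Absurd's implementation:

```python
from datetime import datetime, timedelta, timezone

def week_bucket_utc(ts: datetime) -> datetime:
    """Truncate a UTC timestamp to the start of its ISO week (Monday 00:00)."""
    ts = ts.astimezone(timezone.utc)
    monday = ts.date() - timedelta(days=ts.date().weekday())
    return datetime(monday.year, monday.month, monday.day, tzinfo=timezone.utc)

def provisioning_window(now: datetime,
                        lookback: timedelta = timedelta(days=1),
                        lookahead: timedelta = timedelta(days=28)):
    """Week-aligned [start, end] range to pre-create partitions for."""
    return week_bucket_utc(now - lookback), week_bucket_utc(now + lookahead)
```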
Run manually:
-- one queue
select absurd.ensure_partitions('jobs');
-- all partitioned queues
select absurd.ensure_partitions();
This is safe to run repeatedly.
Cleanup and Retention
Cleanup is policy-driven per queue.
Run for all queues at once:
select * from absurd.cleanup_all_queues();
You can still run queue-specific cleanup directly (arguments: queue name, TTL in seconds, row limit):
select absurd.cleanup_tasks('jobs', 30 * 86400, 1000);
select absurd.cleanup_events('jobs', 30 * 86400, 1000);
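The TTL and limit arguments can be modeled like this. This is an illustrative Python sketch of cutoff-plus-batch-limit cleanup, not Absurd's actual SQL:

```python
from datetime import datetime, timedelta, timezone

def cleanup_batch(finished_at, ttl_seconds, limit, now=None):
    """Select up to `limit` completion timestamps older than now - ttl.
    Oldest rows are cleaned first."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(seconds=ttl_seconds)
    expired = sorted(t for t in finished_at if t < cutoff)
    return expired[:limit]
```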
See also:
- Cleanup and Retention for the policy-driven model
- absurdctl for operational commands
Detaching Old Empty Partitions
For partitioned queues, Absurd supports policy-based detach planning:
- `absurd.list_detach_candidates(...)` finds old, empty weekly partitions.
- Those partitions can be detached with `ALTER TABLE ... DETACH PARTITION ...`. When no DEFAULT partition is attached, `... CONCURRENTLY` can be used.
- Detached tables can then be dropped.
Why this is split: `DETACH ... CONCURRENTLY` must run as top-level SQL.
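A sketch of the statement choice, assuming the Postgres rule that `CONCURRENTLY` is invalid while a DEFAULT partition is attached (`detach_sql` is a hypothetical helper, and the table names are examples):

```python
def detach_sql(parent: str, partition: str, has_default: bool) -> str:
    """Build the top-level DETACH statement; CONCURRENTLY is only
    allowed when the parent has no attached DEFAULT partition."""
    concurrently = "" if has_default else " CONCURRENTLY"
    return f"ALTER TABLE {parent} DETACH PARTITION {partition}{concurrently}"
```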
Manual flow with absurdctl
absurdctl list-detach-candidates --queue jobs
absurdctl detach-candidate --queue jobs <partition_table>
absurdctl detach-candidate --queue jobs <partition_table> --drop
Automating with pg_cron
If pg_cron is available, Absurd can manage cron jobs for you.
One command via absurdctl
# Global jobs for all queues
absurdctl cron --enable
# Queue-scoped jobs with custom schedules
absurdctl cron --enable --queue jobs \
--partition-schedule '*/15 * * * *' \
--cleanup-schedule '7 * * * *' \
--detach-schedule '29 * * * *'
Disable again:
absurdctl cron --disable
absurdctl cron --disable --queue jobs
What gets scheduled
`absurd.enable_cron(...)` installs three recurring jobs:
- partition provisioning (`absurd.ensure_partitions(...)`)
- cleanup (`absurd.cleanup_all_queues(...)`)
- detach planning (`absurd.schedule_detach_jobs(...)`)
Detach planning creates one active detach/drop pipeline per parent table. It schedules only the oldest eligible partition for each parent at a time.
- detach jobs run top-level `DETACH PARTITION` statements
- when possible, they use `CONCURRENTLY`
- if a parent still has an attached `DEFAULT` partition, they fall back to non-concurrent detach (Postgres limitation)
- drop jobs call `absurd.drop_detached_partition(...)` until the table is safe to drop
- both unschedule themselves when done
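The oldest-first, one-pipeline-per-parent rule can be sketched as follows (`plan_detaches` is a hypothetical helper that only illustrates the selection logic):

```python
def plan_detaches(candidates):
    """candidates: (parent_table, partition_table, week_start) tuples.
    Returns one partition per parent: the oldest eligible one,
    mirroring the single active detach/drop pipeline per parent."""
    oldest = {}
    for parent, partition, week_start in candidates:
        if parent not in oldest or week_start < oldest[parent][1]:
            oldest[parent] = (partition, week_start)
    return {parent: part for parent, (part, _) in oldest.items()}
```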
Practical Operating Model
A common setup for high-throughput queues:
- Create the queue as `partitioned`.
- Set policy (`queue-policy`) for lookahead/lookback and retention.
- Enable cron (`absurdctl cron --enable --queue ...`).
- Periodically inspect:
  - `absurdctl queue-policy <queue>`
  - `absurdctl list-detach-candidates --queue <queue>`
  - `select * from cron.job` (if you want direct DB visibility)
For lower-volume queues, keep unpartitioned and only configure cleanup.
Notes and Caveats
- Partitioning is currently based on UUIDv7 time ranges and weekly buckets.
- Weekly partition names use `<YWW>` where `Y` is one ISO year digit, so names roll over every 10 years; practical retention must stay below that horizon to avoid partition name collisions.
- `default_partition=enabled` keeps `_d` partitions as a safety net for rows outside pre-created weekly windows.
- `default_partition=disabled` removes attached `_d` partitions (if empty) and causes out-of-window writes to fail until matching weekly partitions are created.
- `detach_mode=empty` only detaches partitions that are both old enough and empty.
- `pg_cron` integration requires `cron.job` plus the `cron.schedule` and `cron.unschedule` functions to exist.