
Add queue-state count regression benchmark #1211

Draft
bgentry wants to merge 1 commit into brandur-queue-state-count from bg/queue-state-count-regression-benchmark

Conversation

bgentry (Contributor) commented Apr 13, 2026

Stacked on #1203.

This adds a small pgx benchmark around `JobCountByQueueAndState` so we have a cheap default signal in normal runs, but can still scale it up locally with `RIVER_BENCH_QUEUE_STATE_COUNT_NUM_JOBS` when we want enough rows to make the planner choice obvious. The benchmark seeds a migrated `river_job` table in Go using batched `JobInsertFullMany` calls, and keeps the states evenly distributed across queues so every queue exercises every state.

On my machine (PostgreSQL 16.13, 200k rows, 100 queues, all 8 states distributed across every queue), the current query in #1203 regressed sharply relative to the previous query shape, measured by temporarily restoring the old implementation locally and rerunning the same benchmark harness.

| Case | Current query | Old query | Regression |
| --- | --- | --- | --- |
| Benchmark, 2 queues | ~17.6 ms/op | ~0.37 ms/op | ~47x slower |
| Benchmark, 10 queues | ~16.7 ms/op | ~1.19 ms/op | ~14x slower |
| EXPLAIN ANALYZE, 2 queues | ~27.6 ms | ~1.7 ms | ~16x slower |
| EXPLAIN ANALYZE, 10 queues | ~15.4 ms | ~8.4 ms | ~1.8x slower |

`EXPLAIN (ANALYZE, BUFFERS)` shows the reason. The new query filters on `queue` and groups by `(queue, state)`, so PostgreSQL no longer has a left-edge match on the existing `(state, queue, priority, scheduled_at, id)` index and falls back to a parallel sequential scan. The old query shape keeps separate `state = 'available'` and `state = 'running'` branches, so PostgreSQL can still use `river_job_prioritized_fetching_index` via index-only scans.
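Paraphrasing the two shapes from the plans below (column and parameter names are approximations, not the exact SQL from either implementation):

```sql
-- New shape (paraphrased from #1203): one scan, grouped by (queue, state).
-- There is no leading state = ... predicate, so the index on
-- (state, queue, priority, scheduled_at, id) cannot be used and the
-- planner falls back to a parallel seq scan.
SELECT queue, state, count(*)
FROM river_job
WHERE queue = ANY(@queue_names::text[])
GROUP BY queue, state;

-- Old shape (paraphrased from the plans): one branch per state, each with a
-- left-edge state predicate that matches
-- river_job_prioritized_fetching_index, so each branch can be an
-- index-only scan.
SELECT queue, count(*)
FROM river_job
WHERE state = 'available' AND queue = ANY(@queue_names::text[])
GROUP BY queue;
```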

The committed benchmark only measures the current query. The old-query numbers above came from temporarily restoring the previous implementation locally, collecting the benchmark and plan data, and then restoring the branch state. This PR is only here to make the regression easy to reproduce and discuss before changing the SQL or adding another index.
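For the "adding another index" branch of that discussion, one hypothetical option (index name and column order are my guesses, not a proposal this PR makes) would be a composite index whose left edge matches the new query's filter:

```sql
-- Hypothetical: gives the new (queue, state) grouping a usable left-edge
-- match. Whether the extra index is worth the write amplification is
-- exactly the trade-off this benchmark is meant to help evaluate.
CREATE INDEX river_job_queue_state_index ON river_job (queue, state);
```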

Full EXPLAIN (ANALYZE, BUFFERS) output for 2 queues

Current query:

Finalize GroupAggregate  (cost=5225.31..5430.70 rows=795 width=22) (actual time=25.476..27.461 rows=16 loops=1)
  Group Key: queue, state
  Buffers: shared hit=3157
  ->  Gather Merge  (cost=5225.31..5410.83 rows=1590 width=22) (actual time=25.468..27.437 rows=48 loops=1)
        Workers Planned: 2
        Workers Launched: 2
        Buffers: shared hit=3157
        ->  Sort  (cost=4225.29..4227.28 rows=795 width=22) (actual time=19.113..19.115 rows=16 loops=3)
              Sort Key: queue, state
              Sort Method: quicksort  Memory: 25kB
              Buffers: shared hit=3157
              Worker 0:  Sort Method: quicksort  Memory: 25kB
              Worker 1:  Sort Method: quicksort  Memory: 25kB
              ->  Partial HashAggregate  (cost=4179.04..4186.99 rows=795 width=22) (actual time=19.053..19.061 rows=16 loops=3)
                    Group Key: queue, state
                    Batches: 1  Memory Usage: 49kB
                    Buffers: shared hit=3125
                    Worker 0:  Batches: 1  Memory Usage: 49kB
                    Worker 1:  Batches: 1  Memory Usage: 49kB
                    ->  Parallel Seq Scan on river_job  (cost=0.00..4166.67 rows=1650 width=14) (actual time=0.042..18.665 rows=1333 loops=3)
                          Filter: (queue = ANY ('{queue_001,queue_002}'::text[]))
                          Rows Removed by Filter: 65333
                          Buffers: shared hit=3125
Planning:
  Buffers: shared hit=181 read=3
Planning Time: 1.690 ms
Execution Time: 27.628 ms

Old query:

Sort  (cost=71.16..71.16 rows=2 width=48) (actual time=1.675..1.676 rows=2 loops=1)
  Sort Key: (unnest('{queue_001,queue_002}'::text[]))
  Sort Method: quicksort  Memory: 25kB
  Buffers: shared hit=10 read=13
  ->  Hash Right Join  (cost=68.78..71.15 rows=2 width=48) (actual time=1.666..1.672 rows=2 loops=1)
        Hash Cond: (river_job.queue = (unnest('{queue_001,queue_002}'::text[])))
        Buffers: shared hit=10 read=13
        ->  HashAggregate  (cost=37.26..38.25 rows=99 width=18) (actual time=0.570..0.570 rows=2 loops=1)
              Group Key: river_job.queue
              Batches: 1  Memory Usage: 24kB
              Buffers: shared hit=4 read=5
              ->  Index Only Scan using river_job_prioritized_fetching_index on river_job  (cost=0.42..34.77 rows=497 width=10) (actual time=0.182..0.506 rows=500 loops=1)
                    Index Cond: ((state = 'available'::queue_state_count_explain.river_job_state) AND (queue = ANY ('{queue_001,queue_002}'::text[])))
                    Heap Fetches: 0
                    Buffers: shared hit=4 read=5
        ->  Hash  (cost=31.50..31.50 rows=2 width=40) (actual time=1.088..1.089 rows=2 loops=1)
              Buckets: 1024  Batches: 1  Memory Usage: 9kB
              Buffers: shared hit=6 read=8
              ->  Hash Right Join  (cost=29.25..31.50 rows=2 width=40) (actual time=1.064..1.073 rows=2 loops=1)
                    Hash Cond: (river_job_1.queue = (unnest('{queue_001,queue_002}'::text[])))
                    Buffers: shared hit=6 read=8
                    ->  HashAggregate  (cost=29.18..30.17 rows=99 width=18) (actual time=1.050..1.051 rows=2 loops=1)
                          Group Key: river_job_1.queue
                          Batches: 1  Memory Usage: 24kB
                          Buffers: shared hit=6 read=8
                          ->  Index Only Scan using river_job_prioritized_fetching_index on river_job river_job_1  (cost=0.42..26.71 rows=493 width=10) (actual time=0.327..0.995 rows=500 loops=1)
                                Index Cond: ((state = 'running'::queue_state_count_explain.river_job_state) AND (queue = ANY ('{queue_001,queue_002}'::text[])))
                                Heap Fetches: 0
                                Buffers: shared hit=6 read=8
                    ->  Hash  (cost=0.05..0.05 rows=2 width=32) (actual time=0.009..0.010 rows=2 loops=1)
                          Buckets: 1024  Batches: 1  Memory Usage: 9kB
                          ->  Unique  (cost=0.04..0.05 rows=2 width=32) (actual time=0.006..0.007 rows=2 loops=1)
                                ->  Sort  (cost=0.04..0.04 rows=2 width=32) (actual time=0.005..0.006 rows=2 loops=1)
                                      Sort Key: (unnest('{queue_001,queue_002}'::text[]))
                                      Sort Method: quicksort  Memory: 25kB
                                      ->  ProjectSet  (cost=0.00..0.03 rows=2 width=32) (actual time=0.001..0.002 rows=2 loops=1)
                                            ->  Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.000..0.000 rows=1 loops=1)
Planning:
  Buffers: shared hit=3
Planning Time: 0.195 ms
Execution Time: 1.717 ms

Full EXPLAIN (ANALYZE, BUFFERS) output for 10 queues

Current query:

Finalize GroupAggregate  (cost=5484.38..5691.06 rows=800 width=22) (actual time=13.719..15.387 rows=80 loops=1)
  Group Key: queue, state
  Buffers: shared hit=3157
  ->  Gather Merge  (cost=5484.38..5671.06 rows=1600 width=22) (actual time=13.710..15.346 rows=240 loops=1)
        Workers Planned: 2
        Workers Launched: 2
        Buffers: shared hit=3157
        ->  Sort  (cost=4484.35..4486.35 rows=800 width=22) (actual time=10.958..10.961 rows=80 loops=3)
              Sort Key: queue, state
              Sort Method: quicksort  Memory: 28kB
              Buffers: shared hit=3157
              Worker 0:  Sort Method: quicksort  Memory: 28kB
              Worker 1:  Sort Method: quicksort  Memory: 28kB
              ->  Partial HashAggregate  (cost=4437.78..4445.78 rows=800 width=22) (actual time=10.891..10.903 rows=80 loops=3)
                    Group Key: queue, state
                    Batches: 1  Memory Usage: 49kB
                    Buffers: shared hit=3125
                    Worker 0:  Batches: 1  Memory Usage: 49kB
                    Worker 1:  Batches: 1  Memory Usage: 49kB
                    ->  Parallel Seq Scan on river_job  (cost=0.03..4375.02 rows=8367 width=14) (actual time=0.026..9.899 rows=6667 loops=3)
                          Filter: (queue = ANY ('{queue_001,queue_002,queue_003,queue_004,queue_005,queue_006,queue_007,queue_008,queue_009,queue_010}'::text[]))
                          Rows Removed by Filter: 60000
                          Buffers: shared hit=3125
Planning:
  Buffers: shared hit=5 read=1
Planning Time: 0.271 ms
Execution Time: 15.429 ms

Old query:

Sort  (cost=379.27..379.29 rows=10 width=48) (actual time=8.039..8.045 rows=10 loops=1)
  Sort Key: (unnest('{queue_001,queue_002,queue_003,queue_004,queue_005,queue_006,queue_007,queue_008,queue_009,queue_010}'::text[]))
  Sort Method: quicksort  Memory: 25kB
  Buffers: shared hit=70 read=29
  ->  Hash Right Join  (cost=376.63..379.10 rows=10 width=48) (actual time=7.967..7.985 rows=10 loops=1)
        Hash Cond: (river_job.queue = (unnest('{queue_001,queue_002,queue_003,queue_004,queue_005,queue_006,queue_007,queue_008,queue_009,queue_010}'::text[])))
        Buffers: shared hit=67 read=29
        ->  HashAggregate  (cost=187.17..188.17 rows=100 width=18) (actual time=3.198..3.202 rows=10 loops=1)
              Group Key: river_job.queue
              Batches: 1  Memory Usage: 24kB
              Buffers: shared hit=29 read=15
              ->  Index Only Scan using river_job_prioritized_fetching_index on river_job  (cost=0.42..174.58 rows=2518 width=10) (actual time=0.042..2.066 rows=2500 loops=1)
                    Index Cond: ((state = 'available'::queue_state_count_explain.river_job_state) AND (queue = ANY ('{queue_001,queue_002,queue_003,queue_004,queue_005,queue_006,queue_007,queue_008,queue_009,queue_010}'::text[])))
                    Heap Fetches: 0
                    Buffers: shared hit=29 read=15
        ->  Hash  (cost=189.33..189.33 rows=10 width=40) (actual time=4.641..4.644 rows=10 loops=1)
              Buckets: 1024  Batches: 1  Memory Usage: 9kB
              Buffers: shared hit=38 read=14
              ->  Hash Right Join  (cost=187.06..189.33 rows=10 width=40) (actual time=4.614..4.629 rows=10 loops=1)
                    Hash Cond: (river_job_1.queue = (unnest('{queue_001,queue_002,queue_003,queue_004,queue_005,queue_006,queue_007,queue_008,queue_009,queue_010}'::text[])))
                    Buffers: shared hit=38 read=14
                    ->  HashAggregate  (cost=186.71..187.71 rows=100 width=18) (actual time=4.565..4.568 rows=10 loops=1)
                          Group Key: river_job_1.queue
                          Batches: 1  Memory Usage: 24kB
                          Buffers: shared hit=38 read=14
                          ->  Index Only Scan using river_job_prioritized_fetching_index on river_job river_job_1  (cost=0.42..174.21 rows=2501 width=10) (actual time=0.420..3.977 rows=2500 loops=1)
                                Index Cond: ((state = 'running'::queue_state_count_explain.river_job_state) AND (queue = ANY ('{queue_001,queue_002,queue_003,queue_004,queue_005,queue_006,queue_007,queue_008,queue_009,queue_010}'::text[])))
                                Heap Fetches: 0
                                Buffers: shared hit=38 read=14
                    ->  Hash  (cost=0.22..0.22 rows=10 width=32) (actual time=0.020..0.021 rows=10 loops=1)
                          Buckets: 1024  Batches: 1  Memory Usage: 9kB
                          ->  HashAggregate  (cost=0.09..0.22 rows=10 width=32) (actual time=0.010..0.012 rows=10 loops=1)
                                Group Key: unnest('{queue_001,queue_002,queue_003,queue_004,queue_005,queue_006,queue_007,queue_008,queue_009,queue_010}'::text[])
                                Batches: 1  Memory Usage: 24kB
                                ->  ProjectSet  (cost=0.00..0.07 rows=10 width=32) (actual time=0.004..0.006 rows=10 loops=1)
                                      ->  Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.001..0.001 rows=1 loops=1)
Planning:
  Buffers: shared hit=154 read=8
Planning Time: 4.956 ms
Execution Time: 8.376 ms

bgentry force-pushed the bg/queue-state-count-regression-benchmark branch 3 times, most recently from 847afa7 to f1bd30e on April 13, 2026 at 19:18.

This adds a benchmark on top of #1203 to make the queue-state count
query regression easy to reproduce and discuss. It compares the current
`JobCountByQueueAndState` implementation against the legacy query shape
on the same migrated `river_job` schema.

The benchmark stays lightweight by default, but can be scaled locally
with `RIVER_BENCH_QUEUE_STATE_COUNT_NUM_JOBS` to reproduce the planner
regression with a couple hundred thousand rows and quantify the gap.
