
Conversation


@camc314 camc314 commented Dec 17, 2025

This PR adds a capacity limit to FixedSizeAllocatorPool to prevent unbounded memory consumption.

Previously, each call to get() on an empty pool would create a new 4 GiB allocator without any limit, risking memory exhaustion in multi-threaded scenarios. Now the pool caps the number of allocators at thread_count and blocks threads using a Condvar when all allocators are in use.

Key changes include:

  • Eager creation of the first allocator during construction (fail-fast).
  • Lock-free ID assignment with compare-and-swap.
  • Blocking with condition variables when at capacity.
  • Tracking allocation failures to avoid repeated attempts.
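To make the blocking design concrete, here is a minimal sketch of the pattern the list above describes - a Mutex-guarded free list, CAS-based ID assignment, and a Condvar wait at capacity. Names and structure are illustrative, not the actual oxc implementation; eager first-allocator creation and failure tracking are omitted for brevity.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Condvar, Mutex};

/// Stand-in for oxc's 4 GiB fixed-size allocator.
struct FixedSizeAllocator {
    id: u32,
}

struct FixedSizeAllocatorPool {
    /// Allocators that have been returned and are free for reuse.
    available: Mutex<Vec<FixedSizeAllocator>>,
    /// Signalled whenever an allocator is returned via `add`.
    returned: Condvar,
    /// Number of allocator IDs handed out so far.
    next_id: AtomicUsize,
    /// Hard cap on the number of allocators (the thread count).
    capacity: usize,
}

impl FixedSizeAllocatorPool {
    fn get(&self) -> FixedSizeAllocator {
        let mut available = self.available.lock().unwrap();
        loop {
            // Reuse an allocator returned by another thread, if any.
            if let Some(allocator) = available.pop() {
                return allocator;
            }
            // Under the cap: claim a fresh ID with compare-and-swap,
            // so two threads can never mint the same ID.
            let id = self.next_id.load(Ordering::Relaxed);
            if id < self.capacity
                && self
                    .next_id
                    .compare_exchange(id, id + 1, Ordering::Relaxed, Ordering::Relaxed)
                    .is_ok()
            {
                return FixedSizeAllocator { id: id as u32 };
            }
            // At the cap: sleep until `add` returns an allocator.
            // `wait` atomically unlocks the Mutex and re-locks on wake,
            // and the loop re-checks `pop` to tolerate spurious wakeups.
            available = self.returned.wait(available).unwrap();
        }
    }

    fn add(&self, allocator: FixedSizeAllocator) {
        self.available.lock().unwrap().push(allocator);
        // Wake one thread blocked in `get`.
        self.returned.notify_one();
    }
}
```

In this sketch the CAS runs inside the Mutex-guarded loop, which sidesteps the missed-wakeup hazard at the cost of a slightly longer critical section.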


camc314 commented Dec 17, 2025


How to use the Graphite Merge Queue

Add either label to this PR to merge it via the merge queue:

  • 0-merge - adds this PR to the back of the merge queue
  • hotfix - for urgent hot fixes, skip the queue and merge this PR next

You must have a Graphite account in order to use the merge queue.

An organization admin has enabled the Graphite Merge Queue in this repository.

Please do not merge from GitHub as this will restart CI on PRs being processed by the merge queue.

This stack of pull requests is managed by Graphite.

@camc314 camc314 changed the title feat(allocator): block, waiting for new allocator if none are available feat(allocator): add cap to FixedSizeAllocatorPool and block when exhausted Dec 17, 2025

codspeed-hq bot commented Dec 17, 2025

CodSpeed Performance Report

Merging #17023 will degrade performance by 3.08%

Comparing c/12-17-refactor_allocator_add_docs_for_fixedsizeallocatorpool_fields_and_handle_thread_count_0 (da244dd) with main (1e2d011)

Summary

❌ 1 regression
✅ 41 untouched
⏩ 3 skipped¹

⚠️ Please fix the performance issues or acknowledge them on CodSpeed.

Benchmarks breakdown

Mode         Benchmark                BASE      HEAD      Change
Simulation   minifier[cal.com.tsx]    36.3 ms   37.5 ms   -3.08%

Footnotes

  1. 3 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, archive them on CodSpeed to remove them from the performance reports.

@camc314 camc314 marked this pull request as ready for review December 17, 2025 23:13
@camc314 camc314 force-pushed the c/12-17-refactor_allocator_fixed-size_return_result_from_allocation_instead_of_panicking branch from 58e5959 to 4f715fe on December 17, 2025 23:24
@camc314 camc314 force-pushed the c/12-17-refactor_allocator_add_docs_for_fixedsizeallocatorpool_fields_and_handle_thread_count_0 branch from c45c03e to 907ddd8 on December 17, 2025 23:24
@camc314 camc314 requested a review from Copilot December 17, 2025 23:25

Copilot AI left a comment


Pull request overview

This PR adds a capacity limit to FixedSizeAllocatorPool and implements blocking behavior when the pool is exhausted. Previously, the pool would create allocators on demand without limit. Now it enforces a maximum number of allocators (based on thread count) and blocks threads in get() when all allocators are in use, using a condition variable to wake threads when allocators are returned via add().

Key changes:

  • Adds capacity limiting with blocking wait when exhausted
  • Creates one allocator upfront during pool initialization instead of lazy creation
  • Implements condition variable-based thread synchronization for allocator availability
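
For illustration, a hypothetical caller of the pool sketched in the PR description above - `process_files` and its signature are invented for this example:

```rust
use std::sync::Arc;
use std::thread;

// Assumes the `FixedSizeAllocatorPool` sketch from the PR description.
// With `capacity == thread_count`, every worker either pops a free
// allocator, mints a new one under the cap, or blocks in `get()` until
// another worker calls `add()`.
fn process_files(pool: Arc<FixedSizeAllocatorPool>, files: Vec<String>) {
    let handles: Vec<_> = files
        .into_iter()
        .map(|file| {
            let pool = Arc::clone(&pool);
            thread::spawn(move || {
                let allocator = pool.get(); // may block when the pool is exhausted
                // ... parse / lint `file` using `allocator` ...
                let _ = file;
                pool.add(allocator); // return it and wake one waiter
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
}
```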


@camc314 camc314 force-pushed the c/12-17-refactor_allocator_add_docs_for_fixedsizeallocatorpool_fields_and_handle_thread_count_0 branch from 907ddd8 to 0937385 on December 17, 2025 23:45
@camc314 camc314 force-pushed the c/12-17-refactor_allocator_fixed-size_return_result_from_allocation_instead_of_panicking branch from 4f715fe to 477d0fd on December 18, 2025 00:05
@camc314 camc314 force-pushed the c/12-17-refactor_allocator_add_docs_for_fixedsizeallocatorpool_fields_and_handle_thread_count_0 branch from 0937385 to c9dba3c on December 18, 2025 00:05

@camc314 camc314 force-pushed the c/12-17-refactor_allocator_fixed-size_return_result_from_allocation_instead_of_panicking branch from 49217f5 to efa712a on December 18, 2025 11:32
@camc314 camc314 force-pushed the c/12-17-refactor_allocator_add_docs_for_fixedsizeallocatorpool_fields_and_handle_thread_count_0 branch from c9dba3c to 540bcb4 on December 18, 2025 11:32
@overlookmotel overlookmotel force-pushed the c/12-17-refactor_allocator_fixed-size_return_result_from_allocation_instead_of_panicking branch from efa712a to 63c3286 on December 18, 2025 22:12
@overlookmotel overlookmotel force-pushed the c/12-17-refactor_allocator_add_docs_for_fixedsizeallocatorpool_fields_and_handle_thread_count_0 branch from 540bcb4 to 361ca74 on December 18, 2025 22:12
Base automatically changed from c/12-17-refactor_allocator_fixed-size_return_result_from_allocation_instead_of_panicking to main December 18, 2025 22:23
@overlookmotel overlookmotel force-pushed the c/12-17-refactor_allocator_add_docs_for_fixedsizeallocatorpool_fields_and_handle_thread_count_0 branch from 361ca74 to da244dd on December 18, 2025 22:25

@overlookmotel overlookmotel left a comment


To be honest, I'm not 100% sure this works perfectly. Synchronization across threads is hard to reason about, and I'm not familiar with Condvar.

I worry there might be scope for a deadlock on the Mutex, as there are two places that take a lock on it in FixedSizeAllocatorPool::get. It's only a "might", though - I can't figure out a scenario in which that could happen, but I don't feel I understand it fully, so I might be missing something.

Hopefully it's OK!

I also worry that we're potentially not leaving enough memory free for standard allocators. If there's 16.01 GiB of memory available, and we create 4 x 4 GiB fixed-size allocators, we'll successfully create all 4, but only leave 0.01 GiB of free memory remaining. So then other memory allocations might hit OOM.

It's just luck of the draw how close your total available memory is to a multiple of 4 GiB! If it's close, bad times...


Pushed one nit. Maybe I'm stupid, but I always find a.max(b) confusing. I find cmp::max(a, b) clearer.
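
For reference, the two spellings the nit mentions are equivalent; `effective_capacity` below is a hypothetical context:

```rust
use std::cmp;

// Both forms return the larger of the two values; only the spelling differs.
fn effective_capacity(thread_count: usize) -> usize {
    let a = thread_count.max(1); // method form: easy to misread as a cap
    let b = cmp::max(thread_count, 1); // free function: symmetric and explicit
    debug_assert_eq!(a, b);
    b
}
```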

Am also going to follow up with some refactoring PRs.

@overlookmotel overlookmotel merged commit d209c21 into main Dec 18, 2025
28 checks passed
@overlookmotel overlookmotel deleted the c/12-17-refactor_allocator_add_docs_for_fixedsizeallocatorpool_fields_and_handle_thread_count_0 branch December 18, 2025 22:40
@overlookmotel overlookmotel added the A-linter-plugins Area - Linter JS plugins label Dec 18, 2025
graphite-app bot pushed a commit that referenced this pull request Dec 18, 2025
…ool::get` (#17078)

Follow-on after #17023. Pure refactor.

Move code out of `FixedSizeAllocatorPool::get`, to make it easier to read. Also add comments.
graphite-app bot pushed a commit that referenced this pull request Dec 18, 2025
…torPool::get` (#17079)

Follow-on after #17023. Small perf optimization.

Minimize the time that the `Mutex` is locked by dropping the `MutexGuard` before checking whether `pop()` returned `Some`, rather than after.
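
A sketch of the shape of this optimization, with hypothetical names rather than the actual oxc code:

```rust
use std::sync::Mutex;

fn get(available: &Mutex<Vec<u32>>) -> u32 {
    let popped = {
        let mut guard = available.lock().unwrap();
        guard.pop()
    }; // `guard` dropped here: the Mutex is unlocked *before* the check below.

    match popped {
        Some(id) => id,
        // The slow path now runs without holding the lock.
        None => create_new_allocator(),
    }
}

// Stand-in for the (expensive) allocator-creation path.
fn create_new_allocator() -> u32 {
    0
}
```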
graphite-app bot pushed a commit that referenced this pull request Dec 18, 2025
…:create_new_allocator` (#17080)

Follow-on after #17023. Pure refactor.

Returning `Option<Result>` was over-complicated. Just return `Option`.
overlookmotel added a commit that referenced this pull request Dec 19, 2025
### 🚀 Features

- d209c21 allocator: Add cap to FixedSizeAllocatorPool and block when exhausted (#17023) (Cameron)
- fb2af91 allocator: Add bitset utils (#17042) (zhaoting zhou)
- c16082c tasks/compat_data: Integrate `node-compat-table` (#16831) (Boshen)
- 5586823 span: Extract TS declaration file check to its own function (#17037) (camchenry)
- 3d2b492 minifier: Fold iife arrow functions in call expressions (#16477) (Armano)
- 67e9f9e codegen: Keep comments on the export specifiers (#16943) (夕舞八弦)
- cb515fa parser: Improve error message for `yield` as identifier usage (#16950) (sapphi-red)
- dcc856b parser: Add help for `new_dynamic_import` error (#16949) (sapphi-red)
- c3c79f8 parser: Improve import attribute value error message (#16948) (sapphi-red)
- 291b57b ast_tools: Generate TS declaration files for deserializer and walk files (#16912) (camc314)
- 74eae13 minifier: Remove unused import specifiers (#16797) (camc314)

### 🐛 Bug Fixes

- fb9e193 linter: OOM problems with custom plugins (#17082) (overlookmotel)
- e59132b parser/napi: Fix lazy deser (#17069) (overlookmotel)
- a92faf0 ast_tools: Support `u128` in `assert_layouts` generator (#17050) (overlookmotel)
- 47b4c2f minifier/docs: Correct hyperlink path in OPTIMIZATIONS.md (#16986) (GRK)
- 3002649 transformer/typescript: Remove unused import equals declaration (#16776) (Dunqing)
- 5a2af88 regular_expression: Correct named capture group reference error (#16952) (sapphi-red)

### ⚡ Performance

- b657bb6 allocator: Reduce time `Mutex` lock is held in `FixedSizeAllocatorPool::get` (#17079) (overlookmotel)
- 1f3b19b ast: `#[ast]` macro use `#[repr(transparent)]` for single-field structs (#17052) (overlookmotel)
- 225f229 parser: Use SmallVec for duplicate default export detection (#16801) (camc314)

### 📚 Documentation

- a9c419f traverse: Update safety comments (#16944) (overlookmotel)

Co-authored-by: overlookmotel <[email protected]>
graphite-app bot pushed a commit that referenced this pull request Dec 19, 2025
…17112)

#17023 introduced a queuing system to limit the number of `FixedSizeAllocator`s in play at any given time. However, there was a subtle race condition, which could result in deadlock. Fix it.

See comments in the code for explanation.
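
For background, this is the classic "missed wakeup" shape such races take in Condvar code - an illustrative sketch, not necessarily the exact bug fixed here:

```rust
use std::sync::{Condvar, Mutex};

// Illustrative only: a missed-wakeup bug in Condvar-based pool code.
fn buggy_get(available: &Mutex<Vec<u32>>, returned: &Condvar) -> u32 {
    loop {
        {
            let mut pool = available.lock().unwrap();
            if let Some(id) = pool.pop() {
                return id;
            }
        } // BUG: the lock is released here, before we start waiting.

        // If another thread pushes an allocator and calls `notify_one()`
        // in this gap, the notification is lost. If no further allocators
        // are ever returned, the `wait()` below blocks forever even though
        // one is sitting in the pool: deadlock. The fix is to keep holding
        // the guard from the check above and pass it straight to `wait()`.
        let guard = available.lock().unwrap();
        drop(returned.wait(guard).unwrap());
    }
}
```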
graphite-app bot pushed a commit that referenced this pull request Dec 19, 2025
…17094)

Modification of fixed-size allocator limits, building on #17023.

### The problem

This is an alternative design, intended to handle one flaw on Windows:

Each allocator is 4 GiB in size, so if the system has 16.01 GiB of memory available, we could succeed in creating 4 x 4 GiB allocators, but that'd only leave 10 MiB of memory free. Likely some other allocation (e.g. creating a normal `Allocator`, or even allocating a heap `String`) would then fail due to OOM later on.

Note that "memory available" on Windows does not mean "how much RAM the system has". It includes the swap file, the size of which depends on how much free disk space the system has. So numbers like 16.01 GiB are not at all out of the question.

### Proposed solution

On Windows, create as many allocators as possible when creating the pool, up to `thread count + 1`. Then return the last allocator back to the system. This ensures that there's at least 4 GiB of memory free for other allocations, which should be enough.
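
A minimal sketch of that strategy, assuming a hypothetical `FixedSizeAllocator::try_new()` that returns `None` when reservation fails (not the actual oxc API):

```rust
/// Hypothetical stand-in for the 4 GiB fixed-size allocator.
struct FixedSizeAllocator;

impl FixedSizeAllocator {
    /// Returns `None` if reserving the memory fails.
    fn try_new() -> Option<FixedSizeAllocator> {
        Some(FixedSizeAllocator) // stand-in for the real reservation
    }
}

/// Windows strategy: create up to `thread_count + 1` allocators, then
/// release the last one, so at least one allocator's worth of memory
/// (4 GiB) stays free for ordinary allocations.
fn create_pool(thread_count: usize) -> Vec<FixedSizeAllocator> {
    let mut allocators = Vec::with_capacity(thread_count + 1);
    while allocators.len() < thread_count + 1 {
        match FixedSizeAllocator::try_new() {
            Some(allocator) => allocators.push(allocator),
            None => break, // out of memory: keep what we have
        }
    }
    // Return the last allocator's memory to the system.
    drop(allocators.pop());
    allocators
}
```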

### Redesign

In working through the various scenarios, I realized that the implementation can be simplified for both Linux/Mac and Windows.

In both cases, no more than `thread_count` fixed-size allocators can be in use at any given time - see doc comment on `FixedSizeAllocatorPool` for full explanation.

So create the pool with `thread_count` allocators (or as close as we can get on Windows). Thereafter the pool does not need to grow, and cannot.

This allows removing a bunch of synchronization code.

* On Linux/Mac, #17013 solved the too-many-allocators problem another way, so all we need is the `Mutex`.
* On Windows, we only need a `Mutex` + a `Condvar`.

In both cases, it's much simplified, which makes it much less likely for subtle race conditions like #17112 to creep in.

Removing the additional synchronization should also be a little more performant.

Note that the redesign is not the main motivator for this change - preventing OOM on Windows is.