
Conversation


@overlookmotel overlookmotel commented Dec 19, 2025

#17023 introduced a queuing system to limit the number of `FixedSizeAllocator`s in play at any given time. However, there was a subtle race condition, which could result in deadlock. Fix it.

See comments in the code for explanation.
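
For illustration, here is a minimal sketch (not oxc's actual code) of the kind of lost-wakeup race involved. `Pool`, `available`, and `release` are hypothetical stand-ins for the real `FixedSizeAllocatorPool` API:

```rust
use std::collections::VecDeque;
use std::sync::{Condvar, Mutex};

struct FixedSizeAllocator;

// Hypothetical stand-in for `FixedSizeAllocatorPool`.
struct Pool {
    allocators: Mutex<VecDeque<FixedSizeAllocator>>,
    available: Condvar,
}

impl Pool {
    /// Buggy shape: the "pool is empty" check releases the lock before the
    /// wait begins. Another thread can call `release` (push + `notify_one`)
    /// in that gap; the notification is lost, and this thread then sleeps
    /// with nobody left to wake it - deadlock.
    fn get(&self) -> FixedSizeAllocator {
        loop {
            if let Some(alloc) = self.allocators.lock().unwrap().pop_front() {
                return alloc;
            }
            // <-- the lock is NOT held here; a `notify_one` can fire unseen
            let guard = self.allocators.lock().unwrap();
            let _guard = self.available.wait(guard).unwrap();
        }
    }

    fn release(&self, alloc: FixedSizeAllocator) {
        self.allocators.lock().unwrap().push_back(alloc);
        self.available.notify_one();
    }
}
```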

@github-actions github-actions bot added the C-bug Category - Bug label Dec 19, 2025


How to use the Graphite Merge Queue

Add either label to this PR to merge it via the merge queue:

  • 0-merge - adds this PR to the back of the merge queue
  • hotfix - for urgent hot fixes, skip the queue and merge this PR next

You must have a Graphite account in order to use the merge queue. Sign up using this link.

An organization admin has enabled the Graphite Merge Queue in this repository.

Please do not merge from GitHub as this will restart CI on PRs being processed by the merge queue.

This stack of pull requests is managed by Graphite. Learn more about stacking.

@overlookmotel overlookmotel marked this pull request as ready for review December 19, 2025 05:22
Copilot AI review requested due to automatic review settings December 19, 2025 05:22
@overlookmotel overlookmotel self-assigned this Dec 19, 2025
@overlookmotel overlookmotel added the A-linter-plugins Area - Linter JS plugins label Dec 19, 2025

Copilot AI left a comment


Pull request overview

This PR fixes a race condition in `FixedSizeAllocatorPool` that could cause deadlock when multiple threads are waiting for allocators from a pool that has reached its capacity limit.

Key Changes:

  • Modified the waiting logic in `get()` to acquire the mutex lock before entering the wait loop, preventing a lost-wakeup scenario (see the sketch after this list)
  • Renamed `allocators` variables to `allocators_guard` for clarity
  • Enhanced comments to explain the deadlock-prevention mechanism
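
The corrected shape (again a sketch with illustrative names, not the actual diff) holds the lock across both the emptiness check and the wait, which is what closes the race:

```rust
use std::collections::VecDeque;
use std::sync::{Condvar, Mutex};

struct FixedSizeAllocator;

struct Pool {
    allocators: Mutex<VecDeque<FixedSizeAllocator>>,
    available: Condvar,
}

impl Pool {
    /// Fixed shape: acquire the lock once, then check and wait while
    /// holding it. `Condvar::wait` atomically releases the mutex and
    /// re-acquires it on wakeup, so no `notify_one` can slip in between
    /// the check and the wait; the loop re-checks after every wakeup
    /// (spurious or real).
    fn get(&self) -> FixedSizeAllocator {
        let mut allocators_guard = self.allocators.lock().unwrap();
        loop {
            if let Some(alloc) = allocators_guard.pop_front() {
                return alloc;
            }
            allocators_guard = self.available.wait(allocators_guard).unwrap();
        }
    }
}
```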



codspeed-hq bot commented Dec 19, 2025

CodSpeed Performance Report

Merging #17112 will not alter performance

Comparing om/12-19-fix_allocator_fix_potential_deadlock_in_fixedsizeallocatorpool_ (662d5d6) with main (3e2ae7b)[1]

Summary

✅ 42 untouched
⏩ 3 skipped[2]

Footnotes

  1. No successful run was found on main (c8d2382) during the generation of this report, so 3e2ae7b was used instead as the comparison base. There might be some changes unrelated to this pull request in this report.

  2. 3 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, click here and archive them to remove them from the performance reports.

@camc314 camc314 added the 0-merge Merge with Graphite Merge Queue label Dec 19, 2025

camc314 commented Dec 19, 2025

Merge activity

…17112)

#17023 introduced a queuing system to limit the number of `FixedSizeAllocator`s in play at any given time. However, there was a subtle race condition, which could result in deadlock. Fix it.

See comments in the code for explanation.
@graphite-app graphite-app bot force-pushed the om/12-19-fix_allocator_fix_potential_deadlock_in_fixedsizeallocatorpool_ branch from 662d5d6 to b87600a on December 19, 2025 12:15
@graphite-app graphite-app bot merged commit b87600a into main Dec 19, 2025
21 checks passed
@graphite-app graphite-app bot deleted the om/12-19-fix_allocator_fix_potential_deadlock_in_fixedsizeallocatorpool_ branch December 19, 2025 12:21
@graphite-app graphite-app bot removed the 0-merge Merge with Graphite Merge Queue label Dec 19, 2025
graphite-app bot pushed a commit that referenced this pull request Dec 19, 2025
…17094)

Modification of fixed-size allocator limits, building on #17023.

### The problem

This is an alternative design, intended to handle one flaw on Windows:

Each allocator is 4 GiB in size, so if the system has 16.01 GiB of memory available, we could succeed in creating 4 x 4 GiB allocators, but that would leave only 10 MiB of memory free. Some other allocation (e.g. creating a normal `Allocator`, or even allocating a heap `String`) would then likely fail due to OOM later on.

Note that "memory available" on Windows does not mean "how much RAM the system has". It includes the swap file, the size of which depends on how much free disk space the system has. So numbers like 16.01 GiB are not at all out of the question.

### Proposed solution

On Windows, create as many allocators as possible when creating the pool, up to `thread_count + 1`. Then return the last allocator back to the system. This ensures that there's at least 4 GiB of memory free for other allocations, which should be enough.
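
A minimal sketch of that strategy, assuming a hypothetical fallible constructor `FixedSizeAllocator::try_new` (the real reservation logic lives elsewhere in the allocator code):

```rust
struct FixedSizeAllocator;

impl FixedSizeAllocator {
    /// Stand-in: the real code would attempt to reserve a 4 GiB region
    /// and return `None` if the reservation fails.
    fn try_new() -> Option<FixedSizeAllocator> {
        Some(FixedSizeAllocator)
    }
}

fn create_pool(thread_count: usize) -> Vec<FixedSizeAllocator> {
    let mut allocators = Vec::with_capacity(thread_count + 1);
    // Reserve up to `thread_count + 1` allocators, stopping at the first
    // failed reservation.
    while allocators.len() < thread_count + 1 {
        match FixedSizeAllocator::try_new() {
            Some(alloc) => allocators.push(alloc),
            None => break,
        }
    }
    // Return the last allocator to the system, so at least ~4 GiB remains
    // free for other allocations.
    drop(allocators.pop());
    allocators
}
```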

### Redesign

In working through the various scenarios, I realized that the implementation can be simplified for both Linux/Mac and Windows.

In both cases, no more than `thread_count` fixed-size allocators can be in use at any given time - see doc comment on `FixedSizeAllocatorPool` for full explanation.

So create the pool with `thread_count` allocators (or as close as we can get on Windows). Thereafter the pool does not need to grow, and cannot.

This allows removing a bunch of synchronization code.

* On Linux/Mac, #17013 solved the too-many-allocators problem another way, so all we need is the `Mutex`.
* On Windows, we only need a `Mutex` + a `Condvar`.

In both cases, it's much simplified, which makes it much less likely for subtle race conditions like #17112 to creep in.

Removing the additional synchronization should also be a little more performant.
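
As a rough sketch of the simplified design (hypothetical names; the `cfg` split mirrors the two cases above):

```rust
use std::sync::{Condvar, Mutex};

struct FixedSizeAllocator;

struct FixedSizeAllocatorPool {
    free: Mutex<Vec<FixedSizeAllocator>>,
    // Only needed on Windows, where the pool may hold fewer than
    // `thread_count` allocators, so a thread may have to wait for one
    // to be returned.
    #[cfg(windows)]
    available: Condvar,
}

impl FixedSizeAllocatorPool {
    #[cfg(windows)]
    fn get(&self) -> FixedSizeAllocator {
        let mut free = self.free.lock().unwrap();
        loop {
            if let Some(alloc) = free.pop() {
                return alloc;
            }
            free = self.available.wait(free).unwrap();
        }
    }

    #[cfg(not(windows))]
    fn get(&self) -> FixedSizeAllocator {
        // On Linux/Mac the pool holds `thread_count` allocators and at most
        // `thread_count` are ever in use, so one is always available.
        self.free.lock().unwrap().pop().expect("pool cannot be empty")
    }

    fn release(&self, alloc: FixedSizeAllocator) {
        self.free.lock().unwrap().push(alloc);
        #[cfg(windows)]
        self.available.notify_one();
    }
}
```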

Note that the redesign is not the main motivator for this change - preventing OOM on Windows is.
