
fix(module): harden executor with resource limits, async context, and concurrency safety #206

Merged
HugoRCD merged 3 commits into nuxt-modules:main from Mat4m0:fix/codemode-async-context-and-concurrency on Apr 7, 2026

Conversation


@Mat4m0 Mat4m0 commented Apr 6, 2026

The problems

1. AsyncLocalStorage context was silently lost (correctness bug)

When a Nuxt middleware sets up per-request context (e.g. event.context.user), that context lives in Node's AsyncLocalStorage. But the singleton RPC server runs in its own async context — so when a sandbox tool call arrived, the dispatched function could no longer see the caller's store. This meant middleware-injected auth context, request-scoped databases, and similar patterns silently broke inside Code Mode.

2. Concurrent executions corrupted each other (race condition)

The executor stored a single fns map and a single onReturn callback directly on the singleton RpcState. When two execute() calls ran concurrently, the second overwrote the first's function map and return callback. This could cause tool calls to dispatch to the wrong handler or deliver results to the wrong caller.

3. Sandbox could DoS the host (resource exhaustion — 4 vectors)

| Vector | Impact | Root cause |
| --- | --- | --- |
| Unbounded RPC body | Memory exhaustion | No size cap on `for await (const chunk of req) body += chunk` |
| Unbounded tool responses | Memory pressure | `JSON.stringify(result)` runs on the full result before any limit applies |
| No wall-clock timeout | Indefinite execution | `cpuTimeLimitMs` only covers CPU in the isolate; `while (true) await codemode.tool()` yields between calls |
| No RPC call quota | Runaway tool loops | Unlimited tool calls in a tight loop |

4. Error messages leaked internal details (information disclosure)

Infrastructure errors (file paths, stack traces) were returned verbatim to the sandbox and surfaced to the MCP client — exposing server internals.

5. Zero server-side logging (observability gap)

All 10 catch blocks in the executor swallowed errors silently. When something went wrong, there was no server-side trace to diagnose it.

Solution

Async context restoration

Each execute() call now captures AsyncLocalStorage.snapshot() at entry and stores it on the ExecutionContext. The RPC handler calls exec.restoreContext(fn, args) before dispatching, re-entering the caller's async context. This requires Node.js >=18.16.0 (documented, with a clear error if unavailable).

Per-execution isolation

Replaced the shared fns/onReturn on the singleton with a Map<string, ExecutionContext> keyed by a random execId. Each execution gets its own frozen function map, return callback, deadline, and call counter. The sandbox sends execId with every RPC call; the server routes to the correct context. Cleanup happens in a finally block.

Resource limits (4 new configurable options)

| Option | Default | HTTP code | What it does |
| --- | --- | --- | --- |
| `maxRequestBodyBytes` | 1 MB | 413 | Byte-counting body reader rejects oversized payloads early |
| `maxToolResponseSize` | 1 MB | 200 (truncated) | Per-tool response truncated using the existing `truncateResult()` |
| `wallTimeLimitMs` | 60 s | 408 | Deadline checked before every RPC dispatch (tool calls + returns) |
| `maxToolCalls` | 200 | 429 | Per-execution counter, incremented only for tool calls (not `__return__`) |

All limits are configurable via CodeModeOptions and documented.

Error sanitization

New sanitizeErrorMessage() strips Unix/Windows file paths and stack trace lines, caps at 500 chars. Applied at both boundaries: RPC catch block and execute() catch block. Full errors are logged server-side before sanitization.

Logging

Every catch block now logs with console.error (operational failures) or console.warn (best-effort/teardown failures), all prefixed with [nuxt-mcp-toolkit].

Bug fixes

  • exec.returned = true moved after the callback succeeds (was set before, causing inconsistent state on throw)
  • void handleRpcRequest(...) replaced with .catch() that logs and sends 500

Files changed

| File | Lines | What changed |
| --- | --- | --- |
| executor.ts | 470 → 535 | Core hardening, logging, bug fixes, sanitization |
| types.ts | +8 | 4 new CodeModeOptions fields |
| handlers.ts | +5 | Updated JSDoc listing all options |
| codemode-executor.test.ts | 0 → 738 | 18 tests |
| 8.code-mode.md | +40 | Resource limits table, config docs, error sanitization section |
| 5.handlers.md | +2 | Node.js version requirement note |
| 2.installation.md | +2 | Node.js version requirement note |

Test coverage

All changes are covered by 18 unit tests in test/codemode-executor.test.ts:

Concurrency & context (3 tests)

  • Concurrent execute() calls dispatch to isolated function maps
  • Concurrent return values delivered to correct callers
  • AsyncLocalStorage context preserved through RPC dispatch

Hardening — new (8 tests)

  • RPC token rejection (403)
  • Unknown/stale execId rejection (400)
  • Missing execId rejection (400)
  • Oversized body rejection (413)
  • Wall-clock timeout (408)
  • RPC call quota exceeded (429)
  • Oversized tool response truncation
  • File path sanitization in error messages

Hardening — existing (7 tests)

  • Singleton RPC server sharing across cold starts
  • Server startup failure recovery
  • Request body stream failure containment
  • Runtime crash propagation
  • Double-return prevention
  • Dispose and re-create lifecycle
  • AsyncLocalStorage.snapshot() unavailability fallback

AI Tools used

  • Claude Opus 4.6
  • GPT 5.4

…nd concurrency safety

- Fix AsyncLocalStorage context loss through singleton RPC server by
  using per-execution snapshots via AsyncLocalStorage.snapshot()
- Fix concurrent execute() calls overwriting shared fns/onReturn by
  introducing per-execution ExecutionContext keyed by execId
- Add bounded RPC body reader (1MB default, HTTP 413)
- Add wall-clock execution timeout (60s default, HTTP 408)
- Add per-execution RPC call quota (200 default, HTTP 429)
- Add per-tool response size limit (1MB default, truncation)
- Sanitize error messages to strip file paths and stack traces
- Add server-side logging to all catch blocks
- Fix exec.returned set before callback completes
- Replace void handleRpcRequest with .catch() safety net

vercel bot commented Apr 6, 2026

@Mat4m0 is attempting to deploy a commit to the Nuxt Team on Vercel.

A member of the Team first needs to authorize it.

@github-actions github-actions bot added the bug Something isn't working label Apr 6, 2026

github-actions bot commented Apr 6, 2026

Thank you for following the naming conventions! 🙏


pkg-pr-new bot commented Apr 6, 2026

npm i https://pkg.pr.new/@nuxtjs/mcp-toolkit@206

commit: ec9f297

@Mat4m0 Mat4m0 changed the title from fix(codemode): harden executor with resource limits, async context, and concurrency safety to fix(module): harden executor with resource limits, async context, and concurrency safety on Apr 6, 2026

Mat4m0 commented Apr 6, 2026

A scope question for review: this PR currently bundles the fix for AsyncLocalStorage context restoration and per-execution RPC scoping together with some executor hardening in the same area (limits, timeout/quotas, response truncation, error sanitization/logging, and docs/tests).

I kept it together because it all touches the same executor surface and I think this is the best end state for code mode. That said, if you’d prefer a narrower review, I can scope this PR down to just the correctness fix (ALS restoration + concurrent safety + focused tests) and move the hardening pieces into a follow-up.

@HugoRCD HugoRCD merged commit 5197153 into nuxt-modules:main Apr 7, 2026
7 of 8 checks passed
