Problem
12+ gstack skills hardcode `codex exec` as the cross-model second-opinion engine:

`/codex`, `/review`, `/ship`, `/plan-eng-review`, `/plan-ceo-review`, `/plan-design-review`, `/design-consultation`, `/design-review`, `/office-hours`, `/autoplan`

These calls use Codex-specific flags (`-s read-only`, `-c 'model_reasoning_effort=...'`, `--enable web_search_cached`, `resume <session-id>`) that have no direct equivalent in other CLIs.
Users who don't have (or don't pay for) the OpenAI Codex CLI but do have the Google Gemini CLI installed currently have to maintain a translation shim to use these skills. The shim works for basic calls but can't fully translate:

- `codex exec resume <session-id>` (session continuity; no Gemini equivalent)
- `-c 'model_reasoning_effort=...'` (dropped silently)
- `--enable web_search_cached` (dropped; Gemini has built-in search but no equivalent flag)
- `--json` JSONL output mode (Gemini doesn't support structured trace output the same way)
Related

Several skills (e.g. `/office-hours`) have `CODEX_NOT_AVAILABLE` detection but handle it inconsistently (some skip silently, some fall back to the Agent tool).
Proposal
Add a configurable "outside voice" backend in `~/.gstack/config.yaml`:

```yaml
outside_voice:
  backend: gemini  # or: codex, claude-subagent
  # backend-specific config inherited from each CLI's own auth
```
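A dispatcher reading this config could be sketched as a shell function. The function names and the config parsing below are illustrative assumptions, not an existing gstack interface:

```shell
#!/usr/bin/env bash
# Hypothetical "outside voice" dispatcher sketch; function names are
# illustrative, not part of gstack.

# Read the configured backend from a gstack-style config file,
# defaulting to codex when the file or key is missing.
ov_backend() {
  local cfg="${1:-$HOME/.gstack/config.yaml}"
  local backend
  backend=$(sed -n 's/^ *backend: *\([a-z-]*\).*/\1/p' "$cfg" 2>/dev/null | head -n1)
  echo "${backend:-codex}"
}

# Dispatch a prompt to whichever CLI the config names.
outside_voice() {
  local prompt="$1"
  case "$(ov_backend)" in
    gemini) gemini -p "$prompt" ;;
    codex)  codex exec -s read-only "$prompt" ;;
    claude-subagent)
      # Delegation to a Claude subagent would go through the Agent
      # tool; there is no standalone CLI call to show here.
      echo "claude-subagent backend: dispatch via Agent tool" >&2
      return 1 ;;
    *) echo "unknown outside_voice backend" >&2; return 1 ;;
  esac
}
```

Defaulting to `codex` keeps existing setups working when no config is present.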
Then in skill files, replace direct `codex exec` calls with an abstraction that dispatches to the configured backend. Each backend adapter handles flag translation:

| Codex flag | Gemini equivalent | Claude subagent |
| --- | --- | --- |
| `-s read-only` | `--sandbox` | Agent tool (read-only by default) |
| `-c 'model_reasoning_effort=...'` | n/a (drop) | model param |
| `--enable web_search_cached` | built-in (drop) | WebSearch tool |
| `resume <session-id>` | n/a | SendMessage to existing agent |
| `--json` (JSONL traces) | n/a | parse agent output |
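The Gemini column of that table could be implemented roughly as below. The `codex_to_gemini` name and the warn-on-drop behavior are assumptions, not an existing gstack API; the function echoes the translated command rather than running it so the mapping is easy to inspect:

```shell
#!/usr/bin/env bash
# Hypothetical Gemini adapter: translate a codex-exec-style argument
# list into a gemini invocation, following the flag table.
codex_to_gemini() {
  local -a flags=()
  local prompt=""
  while [ $# -gt 0 ]; do
    case "$1" in
      exec) shift ;;                                    # subcommand; no analogue
      -s) [ "$2" = "read-only" ] && flags+=(--sandbox); shift 2 ;;
      -c) echo "warning: dropping $1 $2" >&2; shift 2 ;;  # no reasoning-effort knob
      --enable) shift 2 ;;                              # search is built in; drop
      resume) echo "error: no session resume in Gemini" >&2; return 1 ;;
      --json) echo "warning: dropping --json (no JSONL trace mode)" >&2; shift ;;
      *) prompt="$1"; shift ;;
    esac
  done
  echo gemini "${flags[@]}" -p "$prompt"
}
```

Untranslatable flags are dropped with a warning on stderr rather than silently, and `resume` fails fast, which matches the breakage points listed in the problem statement.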
This would:

- Let users choose their preferred second-opinion CLI
- Unify the inconsistent `CODEX_NOT_AVAILABLE` handling across skills
- Make the "outside voice" pattern a first-class gstack concept rather than a Codex-specific feature
Workaround

A bash shim at `~/.local/bin/codex` that intercepts `codex exec` and translates to `gemini -p` works for most cases but breaks on session continuity and JSONL output parsing.
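The translation logic such a shim performs might look like the sketch below (illustrative, not the author's actual script). In the real shim this body would be the script at `~/.local/bin/codex`, placed ahead of the real `codex` on `PATH`; it rewrites `codex exec PROMPT` into `gemini -p PROMPT` and fails fast on the two known breakage points:

```shell
#!/usr/bin/env bash
# Sketch of the codex-shim translation logic (illustrative).
codex_shim() {
  if [ "$1" != "exec" ]; then
    echo "codex shim: only 'codex exec' is supported" >&2
    return 1
  fi
  shift
  # resume and --json are the documented breakage points: bail out
  # rather than pretend they work.
  for arg in "$@"; do
    case "$arg" in
      resume|--json)
        echo "codex shim: cannot translate '$arg'" >&2
        return 1 ;;
    esac
  done
  # Flag translation (e.g. -s read-only -> --sandbox) omitted here for
  # brevity; remaining arguments are passed through as the prompt.
  gemini -p "$*"
}
```

Failing loudly on `resume` and `--json` is what distinguishes this sketch from a shim that drops them silently and lets skills misbehave downstream.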
Environment
- macOS (Apple Silicon)
- `@google/gemini-cli@0.35.1` via npm
- gstack 0.13.0.0