Bug
When an upstream provider stream is cut mid-generation, the AI SDK emits
a fallback finishReason: "other" with zero output tokens. opencode's
session processor accepts this as a normal end-of-step, persists a
truncated assistant message, and continues — no error, no retry, no
recovery indicator.
Symptom
The session stores a half-finished message and the user gets a partial
response with no warning. Users typically respond by manually
re-prompting, get a few good turns, then hit the bug again — the
"session degradation" pattern reported in #16214.
Trigger
{ type: "text-delta", delta: "..." }
{ type: "text-delta", delta: "..." } ← upstream stream cuts here
{ type: "finish", finishReason: "other", usage: { outputTokens: 0 } }
↑
AI SDK's "no stop reason was given" fallback
The session processor sees finish-step with
finishReason === "other" and usage.tokens.output === 0.
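To make the trigger concrete, here is a minimal TypeScript sketch of the check implied above. The FinishPart shape is an assumption modeled on the AI SDK's stream finish part, not opencode's actual types.

```typescript
// Assumed shape of an AI SDK stream finish part; not opencode's real types.
interface FinishPart {
  type: "finish";
  finishReason: string;
  usage: { outputTokens: number };
}

// The AI SDK's "no stop reason was given" fallback: finishReason "other"
// with zero output tokens means the upstream stream was cut, not that the
// model chose to stop.
function isSeveredStream(part: FinishPart): boolean {
  return part.finishReason === "other" && part.usage.outputTokens === 0;
}
```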
Real-world evidence
I found more than a dozen instances of this exact pattern in my own
opencode session database, spanning two providers (anthropic, openai)
and four models (gpt-5.3-codex, claude-opus-4-6, claude-opus-4-7,
claude-haiku-4-5). All exhibit the shape:
{
  "role": "assistant",
  "finish": "other",
  "tokens": { "input": 0, "output": 0, "reasoning": 0, "cache": {...} },
  "cost": 0
}
In one diagnostic example the reasoning text literally ends mid-word
("...whichlang::detect_language() functi"), proving the upstream stream
was severed before the next chunk arrived — this is not a model decision
to stop.
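Anyone wanting to audit their own session store for this pattern can use a predicate over the persisted message shape shown above. The StoredMessage interface here is a hypothetical mirror of that JSON, not opencode's real storage schema.

```typescript
// Hypothetical mirror of the persisted assistant-message shape above;
// not opencode's actual schema.
interface StoredMessage {
  role: string;
  finish: string;
  tokens: { input: number; output: number; reasoning: number };
  cost: number;
}

// Matches the reported shape: an assistant message that finished with
// "other" and produced zero output tokens.
function looksSevered(msg: StoredMessage): boolean {
  return msg.role === "assistant" && msg.finish === "other" && msg.tokens.output === 0;
}
```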
Why this is distinct from related reports
One related report involved an upstream proxy synthesizing
finishReason: "stop" after truncating content (free-tier MiniMax over
OpenRouter), not "other"; the mechanisms differ. A related fix lives
only in the @ai-sdk/openai-compatible provider's flush() callback,
missing Anthropic-direct, Bedrock, and Vertex.
Suggested fix
Detect finishReason === "other" && tokens.output === 0 at the session
processor layer (so the check covers all AI-SDK providers), surface it
as a retryable APIError, and cap retries at a small number to avoid
loops on persistently misbehaving providers.
Environment
claude-haiku-4-5, gpt-5.3-codex
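The suggested fix could be sketched at the session-processor layer roughly as follows. RetryableStreamError and the retry cap are illustrative assumptions; a real implementation would surface opencode's own APIError type.

```typescript
// Sketch of the suggested fix: treat finishReason "other" with zero output
// tokens as a severed stream, retryable up to a small cap. Names here are
// hypothetical, not opencode's actual API.
const MAX_SEVERED_RETRIES = 2; // assumed cap to avoid retry loops

class RetryableStreamError extends Error {}

function checkStepFinish(
  finishReason: string,
  outputTokens: number,
  attempt: number, // retries this step has already consumed
): void {
  if (finishReason !== "other" || outputTokens !== 0) {
    return; // normal end-of-step: persist as usual
  }
  if (attempt >= MAX_SEVERED_RETRIES) {
    // Persistently misbehaving provider: fail loudly instead of looping.
    throw new Error(`stream severed mid-generation; gave up after ${attempt} retries`);
  }
  // Surface as retryable so the caller re-issues the step rather than
  // persisting a truncated assistant message.
  throw new RetryableStreamError("upstream stream severed mid-generation");
}
```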