# Bug Report

## Description

Using DeepSeek models with thinking mode enabled (`thinking: {type: "enabled"}`), every API call after the first assistant response fails with:

> The reasoning_content in the thinking mode must be passed back to the API.

The session becomes stuck: you can't continue the conversation beyond the first assistant reply.
## System Info

- OpenCode version: 1.14.19 (running from source, branch `anomalyco/opencode`)
- OS: Windows 11
- Provider: `@ai-sdk/openai-compatible` (bundled)
- DeepSeek API: `api.deepseek.com`, thinking mode enabled
## Model Configuration

```json
"deepseek-v4-flash": {
  "limit": { "context": 200000, "output": 393216 },
  "options": { "thinking": { "type": "enabled" } }
}
```
Also reproduced with `deepseek-reasoner` (R1) when conversation history contains assistant messages that were stored before thinking mode was enabled.
## Steps to Reproduce

1. Configure a DeepSeek V4 model (e.g. `deepseek-v4-flash`) in `opencode.json` with `thinking: {type: "enabled"}` in `options`
2. Start a new session with this model
3. Send a message; the first assistant reply works fine
4. Send a follow-up message; the API returns 400 with the `reasoning_content` error
5. The session is now stuck; every retry produces the same error
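To make the failure mode concrete, here is a sketch of the two request payloads, first turn versus second turn. The message shapes follow DeepSeek's chat-completions format as described in this report; the exact strings are illustrative assumptions.

```typescript
// Illustrative payload shapes; per this report, DeepSeek's thinking mode
// requires reasoning_content on every assistant message in the history.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
  reasoning_content?: string;
}

// Turn 1: no assistant history yet, so the request succeeds.
const firstTurn: ChatMessage[] = [
  { role: "user", content: "Hello" },
];

// Turn 2: the replayed assistant message comes back from the provider
// conversion without reasoning_content, which triggers the 400.
const secondTurn: ChatMessage[] = [
  { role: "user", content: "Hello" },
  { role: "assistant", content: "Hi! How can I help?" }, // reasoning_content stripped
  { role: "user", content: "Follow-up question" },
];

// This is the condition the API rejects on the second call.
const missingReasoning = secondTurn.some(
  (m) => m.role === "assistant" && m.reasoning_content === undefined,
);
```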
## Expected Behavior
Conversation should continue normally across multiple turns when using thinking mode models.
## Actual Behavior

The second API call fails with "`reasoning_content` must be passed back." The `@ai-sdk/openai-compatible` provider's `convertToOpenAICompatibleChatMessages()` function strips `reasoning_content` from assistant messages whose content has no explicit reasoning parts. On history replay (the second turn), these messages therefore lack the `reasoning_content` field that DeepSeek's API requires on every assistant message in thinking mode.
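A simplified sketch of the stripping behavior described above (the part types and conversion logic are assumptions modeled on the report, not the actual `@ai-sdk/openai-compatible` source):

```typescript
// Hypothetical simplification of the assistant-message conversion step.
type Part =
  | { type: "text"; text: string }
  | { type: "reasoning"; text: string };

interface AssistantMessage {
  role: "assistant";
  content: Part[];
}

interface WireMessage {
  role: "assistant";
  content: string;
  reasoning_content?: string;
}

// Messages replayed from DB history carry only text parts, so the
// reasoning branch never fires and reasoning_content is omitted.
function convertAssistant(msg: AssistantMessage): WireMessage {
  const text = msg.content
    .filter((p): p is Extract<Part, { type: "text" }> => p.type === "text")
    .map((p) => p.text)
    .join("");
  const reasoning = msg.content
    .filter((p): p is Extract<Part, { type: "reasoning" }> => p.type === "reasoning")
    .map((p) => p.text)
    .join("");
  const wire: WireMessage = { role: "assistant", content: text };
  if (reasoning) wire.reasoning_content = reasoning; // only set when a reasoning part survives
  return wire;
}
```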
## Workaround

Adding `interleaved: { field: "reasoning_content" }` to the model config in `opencode.json` causes the existing `normalizeMessages()` code to inject `reasoning_content` into `providerOptions`. This only partially helps, though: messages replayed from DB history are still skipped by the narrow condition in `normalizeMessages()`.
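A possible broader normalization pass would be to inject the field into every assistant message unconditionally, rather than gating on the narrow condition. This is a sketch only; `injectReasoningContent`, the `openaiCompatible` key, and the empty-string placeholder are all assumptions (whether an empty value satisfies DeepSeek's check is untested):

```typescript
// Hedged sketch of a history-normalization pass; shapes are assumed
// from the report, not taken from OpenCode's actual normalizeMessages().
interface HistoryMessage {
  role: "user" | "assistant";
  content: string;
  providerOptions?: Record<string, Record<string, unknown>>;
}

// Ensure EVERY assistant message carries the reasoning field, including
// ones replayed from DB history that were stored without it.
function injectReasoningContent(
  msgs: HistoryMessage[],
  field = "reasoning_content",
): HistoryMessage[] {
  return msgs.map((m) => {
    if (m.role !== "assistant") return m;
    const opts = m.providerOptions?.["openaiCompatible"] ?? {};
    if (field in opts) return m; // already present, leave untouched
    return {
      ...m,
      providerOptions: {
        ...m.providerOptions,
        // Placeholder value; untested whether "" passes DeepSeek's check.
        openaiCompatible: { ...opts, [field]: "" },
      },
    };
  });
}
```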
## Related Issues