feat: add GLM-4.7 model with preserved thinking support for Z.ai provider #10278
Related GitHub Issue
Closes: #10269
Roo Code Task Context (Optional)
N/A
Description
This PR adds support for the GLM-4.7 model with preserved thinking capabilities to the Z.ai provider, as requested in issue #10269.
Changes made:

- Added GLM-4.7 documentation links to the file header comments
- Added a `glm-4.7` entry to `internationalZAiModels` with:
  - `maxTokens: 98,304`
  - `contextWindow: 200,000`
  - `supportsPromptCache: true`
  - `supportsNativeTools: true`
  - `defaultToolProtocol: "native"`
  - `preserveReasoning: true` (enables interleaved thinking mode)
- Added a `glm-4.7` entry to `mainlandZAiModels` with a similar configuration but using mainland China pricing
- Added tests to verify that the GLM-4.7 model has `preserveReasoning` enabled for both the international and China endpoints

How preserved thinking works:
The `preserveReasoning: true` flag enables interleaved thinking mode for tool calls. When enabled, the model's streamed responses include `reasoning_content` alongside regular content. This follows the existing pattern used for DeepSeek reasoner models and aligns with the Z.ai documentation on thinking mode.
Test Procedure
- `cd packages/types && npx tsc --noEmit`
- `pnpm lint`
- `cd src && npx vitest run api/providers/__tests__/zai.spec.ts` (29 tests passed)

Pre-Submission Checklist
Screenshots / Videos
N/A - This is a data/configuration change only.
Documentation Updates
Additional Notes
This PR attempts to address Issue #10269 by adding GLM-4.7 with preserved thinking support, as requested by @kavehsfv. The existing infrastructure already handles `reasoning_content` in streaming responses via the `BaseOpenAiCompatibleProvider`, so no additional handler changes were needed.

Feedback and guidance are welcome!