feat: add MiniMax as a new LLM provider with M2.7 default #11367
octo-patch wants to merge 4 commits into continuedev:main from
Conversation
Add MiniMax (https://platform.minimax.io) as a new LLM provider with OpenAI-compatible API support.

Changes:
- Add a MiniMax LLM provider class extending OpenAI, with temperature clamping (must be in (0, 1]) and `response_format` removal
- Register the provider in LLMClasses, openai-adapters, and config-types
- Add model info for MiniMax-M2.5 and MiniMax-M2.5-highspeed (204K context, 192K max output)
- Add GUI model selection entries and provider configuration
- Add a provider documentation page
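The request adaptations described above can be sketched as a small body-rewriting function. This is an illustrative sketch, not the repo's actual code: the function name, the `ChatBody` shape, and the 0.01 lower clamp bound are assumptions (the PR only states that temperature must land in (0, 1]).

```typescript
// Hypothetical sketch of the MiniMax request fixes described above.
interface ChatBody {
  model: string;
  temperature?: number;
  response_format?: { type: string };
  messages: { role: string; content: string }[];
}

function adaptMiniMaxBody(body: ChatBody): ChatBody {
  const adapted = { ...body };
  if (adapted.temperature !== undefined) {
    // MiniMax accepts temperature only in the half-open interval (0, 1];
    // 0.01 as the lower clamp bound is an assumption for illustration.
    adapted.temperature = Math.min(Math.max(adapted.temperature, 0.01), 1.0);
  }
  // MiniMax's API does not accept response_format, so strip it entirely.
  delete adapted.response_format;
  return adapted;
}
```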
I have read the CLA Document and I hereby sign the CLA.

1 out of 2 committers have signed the CLA.
1 issue found across 10 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="packages/openai-adapters/src/index.ts">
<violation number="1" location="packages/openai-adapters/src/index.ts:145">
P1: MiniMax is wired to generic `OpenAIApi`, which skips the repo’s MiniMax-specific request fixes (temperature clamping and `response_format` removal), creating a real incompatibility path in adapter-based runtime flows.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
I have read the CLA Document and I hereby sign the CLA.
The minimax provider was wired to the generic OpenAIApi via openAICompatible(), which skips MiniMax-specific request fixes. This commit adds a dedicated MiniMaxApi adapter class that overrides modifyChatBody to apply temperature clamping (MiniMax requires temperature in (0.0, 1.0]) and response_format removal, matching the adaptations already present in core/llm/llms/MiniMax.ts. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
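The override pattern described in this commit can be sketched as a subclass hooking a body-modification method. The class names, the `modifyChatBody` signature, and the body shape below are assumptions based on the commit message, not the openai-adapters package's actual API.

```typescript
// Illustrative sketch of an adapter subclass overriding a chat-body hook.
type ChatCompletionBody = {
  temperature?: number;
  response_format?: unknown;
  [key: string]: unknown;
};

class OpenAIApiSketch {
  // Base hook: pass the request body through unchanged.
  modifyChatBody(body: ChatCompletionBody): ChatCompletionBody {
    return body;
  }
}

class MiniMaxApiSketch extends OpenAIApiSketch {
  modifyChatBody(body: ChatCompletionBody): ChatCompletionBody {
    const out = super.modifyChatBody({ ...body });
    if (typeof out.temperature === "number") {
      // MiniMax accepts temperature only in (0.0, 1.0]; the 0.01 floor
      // is an assumed lower clamp for illustration.
      out.temperature = Math.min(Math.max(out.temperature, 0.01), 1.0);
    }
    // MiniMax rejects response_format, so remove it before sending.
    delete out.response_format;
    return out;
  }
}
```

Subclassing keeps the generic OpenAI request flow intact while letting the provider patch only the incompatible fields, mirroring what core/llm/llms/MiniMax.ts already does on the non-adapter path.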
- Add MiniMax-M2.7 and MiniMax-M2.7-highspeed to the model list
- Set MiniMax-M2.7 as the default model
- Keep all previous models (M2.5, M2.5-highspeed) as alternatives
- Update docs and GUI model selection
RomneyDa left a comment
@octo-patch thanks for the contribution, I think this provider should definitely be added.
Could you check a couple things?
- PROVIDER_SUPPORTS_IMAGES
- PROVIDER_HANDLES_TEMPLATING - I think yes for minimax
- PARALLEL_PROVIDERS
You can look up a provider string like "nebius" to see other places you might want to double-check for minimax for completeness.
Note the CLA requirement!
- PROVIDER_HANDLES_TEMPLATING: MiniMax uses an OpenAI-compatible chat completions API, so it handles templating natively.
- PARALLEL_PROVIDERS: MiniMax supports parallel tool calls.
- PROVIDER_SUPPORTS_IMAGES: not added, since MiniMax M2.7 is text-only.
Thanks for the review @RomneyDa! Great catch on the missing provider arrays. I've pushed a fix in c0b5501.
Regarding the CLA: I signed it in my earlier comment above. Let me know if anything else needs attention!
Summary
Add MiniMax as a first-class LLM provider for Continue, with the latest M2.7 model as default.
Changes
- docs/customize/model-providers/more/minimax.mdx

Models
- MiniMax-M2.7 (default)
- MiniMax-M2.7-highspeed
- MiniMax-M2.5
- MiniMax-M2.5-highspeed

Why
MiniMax-M2.7 is the latest flagship model, with enhanced reasoning and coding capabilities, available via an OpenAI-compatible API at https://api.minimax.io/v1.

Testing
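For readers trying the provider out, a config entry might look roughly like the sketch below. This is an assumption based on Continue's general config.yaml model-block shape; the exact field names and accepted model IDs should be checked against the new minimax.mdx docs page.

```yaml
# Hypothetical Continue config.yaml fragment; verify against the provider docs.
models:
  - name: MiniMax M2.7
    provider: minimax
    model: MiniMax-M2.7
    apiKey: <YOUR_MINIMAX_API_KEY>
```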