fix: stale modelSlug #99
Pull request overview
This pull request fixes a stale modelSlug issue that occurs when users provide their own API key (BYOK - Bring Your Own Key) in the middle of a conversation. The core problem is that OpenRouter expects OpenAI models to be prefixed with openai/ (e.g., openai/gpt-4.1), while the standard OpenAI API does not use this prefix (e.g., gpt-4.1). When users switch between these modes, the backend was using a stale model slug that could cause invalid-model-id errors.
Changes:
- Frontend automatically refreshes the model list and updates the current conversation's modelSlug when the OpenAI API key is changed
- Backend health check function now accepts configurable base URL and model parameters to properly validate both OpenRouter and OpenAI endpoints
- Backend endpoint selection logic updated to use the standard OpenAI base URL when users provide their own API key
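The prefix mismatch driving this fix can be sketched as follows (the helper names are hypothetical, not the PR's actual API; TypeScript is used since the frontend change lives in setting-text-input.tsx):

```typescript
// OpenRouter addresses OpenAI models as "openai/<model>", while the
// plain OpenAI API uses the bare model id. These hypothetical helpers
// convert between the two slug forms.
const OPENROUTER_PREFIX = "openai/";

function toOpenRouterSlug(slug: string): string {
  // "gpt-4.1" -> "openai/gpt-4.1"; already-prefixed slugs pass through.
  return slug.startsWith(OPENROUTER_PREFIX) ? slug : OPENROUTER_PREFIX + slug;
}

function toOpenAISlug(slug: string): string {
  // "openai/gpt-4.1" -> "gpt-4.1"; bare slugs pass through.
  return slug.startsWith(OPENROUTER_PREFIX)
    ? slug.slice(OPENROUTER_PREFIX.length)
    : slug;
}

console.log(toOpenRouterSlug("gpt-4.1")); // "openai/gpt-4.1"
console.log(toOpenAISlug("openai/gpt-4.1")); // "gpt-4.1"
```

If the conversation keeps a slug in one form while the backend switches endpoints (BYOK added or removed mid-conversation), the stale form triggers the invalid-model-id error this PR addresses.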
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| webapp/_webapp/src/views/settings/setting-text-input.tsx | Adds logic to fetch updated model list and update conversation modelSlug when openaiApiKey setting is changed |
| internal/services/toolkit/client/utils_v2.go | Updates CheckOpenAIWorksV2 signature to accept baseUrl and model parameters for flexible health checking |
| internal/services/toolkit/client/client_v2.go | Modifies GetOpenAIClient to use OpenAIBaseURL for BYOK scenarios and updates NewAIClientV2 to perform conditional health checks |
```go
	CheckOpenAIWorksV2(oaiClient, cfg.InferenceBaseURL+"/openrouter", "openai/gpt-5-nano", logger)
} else {
	oaiClient := openai.NewClient(
		option.WithBaseURL(cfg.OpenAIBaseURL),
		option.WithAPIKey(cfg.OpenAIAPIKey),
	)
	CheckOpenAIWorksV2(oaiClient, cfg.OpenAIBaseURL, "gpt-5-nano", logger)
```
Copilot AI — Jan 24, 2026
The model used for the health check ('gpt-5-nano') may not exist or be available with user-provided API keys. When using BYOK (Bring Your Own Key), the health check should use a model that is guaranteed to exist in the standard OpenAI API, such as 'gpt-4o-mini' or 'gpt-4o'. Using 'gpt-5-nano' could cause the health check to fail even when the API key is valid.
Suggested change:

```diff
-	CheckOpenAIWorksV2(oaiClient, cfg.InferenceBaseURL+"/openrouter", "openai/gpt-5-nano", logger)
+	CheckOpenAIWorksV2(oaiClient, cfg.InferenceBaseURL+"/openrouter", "openai/gpt-4o-mini", logger)
 } else {
 	oaiClient := openai.NewClient(
 		option.WithBaseURL(cfg.OpenAIBaseURL),
 		option.WithAPIKey(cfg.OpenAIAPIKey),
 	)
-	CheckOpenAIWorksV2(oaiClient, cfg.OpenAIBaseURL, "gpt-5-nano", logger)
+	CheckOpenAIWorksV2(oaiClient, cfg.OpenAIBaseURL, "gpt-4o-mini", logger)
```
ok. to consider informing user
@Junyi-99 oh yeah, we need to explicitly enable the model, otherwise we'd get an invalid-model-id error. Do you think it's wise to change it to a more commonly allowed model like 4o-mini, or should we inform the user somewhere to minimally enable gpt-5-nano?
```typescript
const matchingModel = response.models.find(m =>
  currentSlugLower.includes(m.name.toLowerCase())
```
Copilot AI — Jan 24, 2026
The model matching logic has a flaw: it checks if the current slug includes the model name, but this can cause incorrect matches when model names are substrings of each other. For example, if the current slug is 'openai/gpt-4.1-mini', the find will match 'GPT-4.1' first (since 'openai/gpt-4.1-mini'.includes('gpt-4.1') is true), instead of matching 'GPT-4.1-mini'. This means a user with 'gpt-4.1-mini' would incorrectly switch to 'gpt-4.1' when adding/removing BYOK. Consider matching on the exact model name by extracting the base model name from the slug (e.g., by removing the 'openai/' prefix and comparing directly).
Suggested change:

```diff
-const matchingModel = response.models.find(m =>
-  currentSlugLower.includes(m.name.toLowerCase())
+const baseModelName = currentSlugLower.startsWith("openai/")
+  ? currentSlugLower.slice("openai/".length)
+  : currentSlugLower;
+const matchingModel = response.models.find(m =>
+  m.name.toLowerCase() === baseModelName
```
Fixed. Improved the logic to check for a suffix match.
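A suffix match of the kind mentioned here could look roughly like this (a sketch with assumed names, not the exact code in setting-text-input.tsx):

```typescript
// A model name matches the current slug only when the slug equals the
// name or ends with "/" + name, so "gpt-4.1" no longer matches
// "openai/gpt-4.1-mini" the way a plain substring check would.
function matchesSlug(slug: string, modelName: string): boolean {
  const s = slug.toLowerCase();
  const n = modelName.toLowerCase();
  return s === n || s.endsWith("/" + n);
}

console.log(matchesSlug("openai/gpt-4.1", "GPT-4.1")); // true
console.log(matchesSlug("openai/gpt-4.1-mini", "GPT-4.1")); // false
```

Unlike a substring check, this is also insensitive to the order in which the supported models are listed.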
```typescript
      }
    }
  }
}, [value, settingKey, updateSettings]); // settingKey is an outer scope value, not a dependency
```
Copilot AI — Jan 24, 2026
The dependency array is missing 'setCurrentConversation' which is used inside the callback. While it's stable (from a zustand store), it should still be included in the dependency array for completeness and to follow React best practices.
Suggested change:

```diff
-}, [value, settingKey, updateSettings]); // settingKey is an outer scope value, not a dependency
+}, [value, settingKey, updateSettings, setCurrentConversation]); // settingKey is an outer scope value, not a dependency
```
not necessary. Reduce deps.
@Junyi-99 I added settingKey here as a dependency because I thought this callback should be re-created when settingKey is updated. Was that not the case previously too?
This issue is probably already solved with my PR #97 (updated today 25/01), where

Also, I fixed some bugs for this PR in #100.
I think the primary issue is inconsistent state; ultimately, the flow should be:

We can still have this additional stripping logic in the backend as a safeguard, but it should not be relied upon.
Okay, thanks! Can you propose a merge to this branch instead? Final changes will be integrated together after testing.
1. fix: remove definition of `currentConversation` as it is not used, to fix the following error when running `npm run build:prd:chrome`:

```bash
> webapp@0.0.0 build:prd:chrome
> bash -c 'source ./scripts/build-prd-chrome.sh'

🧩 Start building Chrome Extension ...
📦 Version: 2.16.0
📦 Monorepo Revision: 9fc435
📦 Beta Build: false
📦 API Endpoint: https://app.paperdebugger.com
❌ Failed to build Chrome Extension, please check logs/build.log

> webapp@0.0.0 build
> tsc -b && npm run _build:default && npm run _build:background && npm run _build:intermediate && npm run _build:settings && npm run _build:popup

src/views/settings/setting-text-input.tsx(32,13): error TS6133: 'currentConversation' is declared but its value is never read.
```

2. fix: change matching from substring to exact, to prevent `gpt-4.1-mini` from matching to `gpt-4.0` (edit: typo, should be `gpt-4.1`), making the method more robust to the order in which supported models are defined.
3. [edit] refactor: use IsCustom and do not redefine the logic in NewAIClientV2
Junyi-99 left a comment:
lgtm

When users input BYOK in the middle of a conversation and resume the conversation with the same model, the backend uses a stale version of the modelSlug, which may cause an invalid-model-id error, as OpenRouter expects OpenAI models to be prefixed with openai/ whereas the OpenAI API does not require this.

The key change is in setting-text-input.tsx. Every time settings are updated, we force an update to modelSlug: gpt-4o will be matched to openai/gpt-4o and vice versa, but will not match openai/gpt-4o-mini. The logic is in setting-text-input.tsx.

For more info, see the issue documented in #98.
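The forced modelSlug refresh described above might be sketched like this (the Model shape and helper are assumptions for illustration; the real logic is in setting-text-input.tsx):

```typescript
interface Model {
  name: string; // bare model name, e.g. "gpt-4o"
  slug: string; // slug valid for the currently active endpoint
}

// After the model list is refreshed (e.g. when the OpenAI API key
// changes), remap the conversation's slug to the matching entry.
// This assumes the refreshed list's slugs already reflect the active
// endpoint, so the mapping works in either direction.
function refreshModelSlug(currentSlug: string, models: Model[]): string {
  const base = currentSlug.toLowerCase().replace(/^openai\//, "");
  const match = models.find((m) => m.name.toLowerCase() === base);
  return match ? match.slug : currentSlug;
}

const models: Model[] = [
  { name: "gpt-4o", slug: "openai/gpt-4o" },
  { name: "gpt-4o-mini", slug: "openai/gpt-4o-mini" },
];
console.log(refreshModelSlug("gpt-4o", models)); // "openai/gpt-4o"
console.log(refreshModelSlug("gpt-4o-mini", models)); // "openai/gpt-4o-mini"
```

Because the comparison is exact on the bare name, gpt-4o never falls through to openai/gpt-4o-mini regardless of list order.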