All notable changes to ClawRouter.
- Predexon agent tool surface expanded from 8 → 9 tools, covering the full 48-endpoint catalog. ClawRouter previously exposed only 8 named `predexon_*` tools to LLM agents (events, leaderboard, markets, smart_money, smart_activity, wallet, wallet_pnl, matching_markets), but BlockRun's source of truth (`predexon.ts`) and the marketing site at blockrun.ai/marketplace/predexon already list 48 endpoints across Polymarket Tier 1 (markets/events/orderbooks/candlesticks/leaderboard/cohorts/top-holders/UMA oracle), Polymarket Tier 2 wallet analytics (PnL/positions/profiles/filter/smart-money/identity/cluster), Kalshi/Limitless/Opinion/Predict.Fun (markets + orderbooks each), dFlow (trades + wallet), Binance Futures (candles + ticks), and cross-platform matching/search. The existing 8 named tools stay (well tuned for the most common paths); a new `blockrun_predexon_endpoint_call` is added as a catch-all with `path` + `query` params and the full endpoint directory in its description (LLMs read this as the schema's `description` field). Skill files (`skills/predexon/SKILL.md` + `skills/clawrouter/SKILL.md`) updated to point at the new tool; the 48-row reference table in the predexon skill was already complete.
- Tool runner extended for dynamic-path services (`src/partners/tools.ts`): when `service.proxyPath === "/pm/__dynamic__"` the runner reads `path` from args (validated to start with `/pm/` and to reject `..` traversal), parses `query` as JSON, and assembles the URL. Existing fixed-path tools are unaffected.
- OpenClaw devDep bumped `^2026.4.21` → `^2026.5.4`; `minGatewayVersion` bumped `2026.4.5` → `2026.5.2`. This is the version where strict provider/baseHash validation shipped; we now declare compatibility with the regime we have adapted to instead of pretending to support older, more permissive runtimes.
- Fixed the v0.12.185 deferred follow-up: ClawRouter no longer mutates `tools.web.search.{provider,enabled}` on `api.config` (runtime) or `~/.openclaw/openclaw.json` (disk) inside the plugin install path. Root cause discovered via Docker e2e on a clean OpenClaw 2026.5.4 image: OpenClaw runs a strict known-providers validator on `tools.web.search.provider` at TWO points: (a) config-load time before `register()` runs, and (b) `replaceConfigFile` when the install commit persists the runtime config to disk. Both reject `blockrun-exa` because the validator's known-providers list is independent of plugin registrations, causing `unknown web_search provider: blockrun-exa` and install rollback. Fix:
  - Removed the disk write of `provider` in `injectModelsConfig` (previously `src/index.ts:449–457`). Wrote a forward-migration in its place: when `provider === "blockrun-exa"` is found on disk, it is deleted on the next file write, picked up automatically by `clawrouter setup --forceWrite` or first gateway start.
  - Removed all runtime writes to `api.config.tools.web.search.*` inside `register()`. Earlier attempts gated them on `typeof api.registerWebSearchProvider === "function"`, but OpenClaw 2026.5.4 still auto-injects the registered provider id during install commit. Net: ClawRouter's `register()` only calls `api.registerWebSearchProvider(blockrunExaWebSearchProvider)` and lets OpenClaw's auto-detection pick it up via "Auto-detected from available API keys if omitted" (per OpenClaw schema). `tools.web.search.enabled = true` is set only via the file-write path in `injectModelsConfig` (gated to gateway mode or `--forceWrite`), so it lands on disk without touching the validator-flagged `provider` field.
  - Migration in install scripts (`scripts/update.sh`, `scripts/reinstall.sh`) strips legacy `provider: blockrun-exa` BEFORE running `openclaw plugins install`. Combined with the in-config migration, existing v0.12.185 users are cleaned up via either path.
  - The deactivate hook (`src/index.ts:2043`) already removes the field on uninstall; kept as belt-and-suspenders.
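The `/pm/__dynamic__` guard described above can be sketched roughly as follows. This is an illustration, not the shipped `src/partners/tools.ts` code; `buildDynamicUrl` and the base URL are hypothetical names.

```typescript
// Sketch of the dynamic-path runner: validate the agent-supplied `path`,
// parse the optional `query` JSON, and assemble the final proxy URL.
function buildDynamicUrl(baseUrl: string, args: { path: string; query?: string }): string {
  const { path, query } = args;
  // Must target the prediction-market namespace and must not escape it.
  if (!path.startsWith("/pm/")) throw new Error(`path must start with /pm/: ${path}`);
  if (path.includes("..")) throw new Error(`path traversal rejected: ${path}`);
  const url = new URL(path, baseUrl);
  if (query) {
    // `query` arrives as a JSON object string, e.g. '{"q":"fed"}'
    const params: Record<string, unknown> = JSON.parse(query);
    for (const [k, v] of Object.entries(params)) url.searchParams.set(k, String(v));
  }
  return url.toString();
}
```

Fixed-path tools never enter this branch, so the validation cost is paid only by the catch-all.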
- Test fix: `test/integration/security-scanner.test.ts` previously found the scanner via a "first function export" heuristic, which worked when OpenClaw minified its names. The 2026.5.4 `skill-scanner-*.js` chunk re-exports under proper names, so the heuristic returned the wrong function (one of `clearSkillScanCacheForTest`/`isScannable`/`scanDirectory`/`scanSource`) and the test crashed on undefined fields. The test now prefers the `scanDirectoryWithSummary` named export, falling back to "first function" for older builds.
- New Docker e2e harness: `test/docker-install/Dockerfile.openclaw-2026.5` + `run-openclaw-e2e.sh` build a clean Debian + Node 22 + OpenClaw 2026.5.4 image and exercise the full install flow: fresh install on empty config, `clawrouter setup`, validator collision repro, migration + reinstall. All assertions pass on this fresh path. Run with `docker build` then `docker run --rm`.
- Net behavior on OpenClaw 2026.5.4: clean install with no validator failures; `clawrouter setup` no longer needs to work around the web_search collision (still useful for bare `npm install -g` users to sync the allowlist). Existing v0.12.185 users with `provider: blockrun-exa` on disk get cleaned up automatically by `scripts/update.sh`/`scripts/reinstall.sh` before install runs.
- Edge case noted (out of scope for this fix): re-running `openclaw plugins install --force` after a previously failed install on a setup-populated config triggers an OpenClaw 2026.5.4 internal auto-injection that re-emits `provider: blockrun-exa` and trips its own validator. The triggering log line `[plugins] Forced web_search provider to blockrun-exa` does not appear in any deployed file (verified via exhaustive `find / | xargs grep` in the Docker container); it is emitted from somewhere inside the OpenClaw runtime not reachable from a clean filesystem search. Not a `scripts/update.sh` flow, and no user impact in normal upgrades.
- `clawrouter setup`: new CLI command for users who installed via bare `npm install -g`. A user reported `/models` in their Telegram bot showing only 7 entries despite having `@blockrun/clawrouter@0.12.184` installed and the gateway restarted. Investigation: bare `npm install -g @blockrun/clawrouter` puts the package on disk and adds the `clawrouter` binary to PATH but performs zero OpenClaw integration: no `plugins.entries.clawrouter` registration, no models allowlist sync, no auth profile injection. The user's bot showed OpenClaw's hardcoded fallback default models (which include `gpt-5-nano` and `gemini-2.5-flash`, neither in our `top-models.json`) instead of our 38-entry list. Confirmed by reproducing locally on OpenClaw 2026.5.2 (`8b2a6e5`): `npm install -g` alone leaves `models list` showing 1 default entry; only `openclaw plugins install @blockrun/clawrouter` triggers our `register()` callback.
- Fix: `clawrouter setup` runs the missing integration steps:
  - Detect `openclaw` on PATH (refuse to proceed if missing).
  - Run `openclaw plugins install --force @blockrun/clawrouter` to register the plugin.
  - Directly call `injectModelsConfig({ forceWrite: true })` and `injectAuthProfile()` to populate `agents.defaults.models` (the 38-entry allowlist), `models.providers.blockrun.models` (picker), `tools.web.search.provider = "blockrun-exa"`, and `agents/<id>/agent/auth-profiles.json` with the `blockrun:default` placeholder.
  - Tell the user to run `openclaw gateway restart` to pick up the new plugin code.
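The setup sequence above can be sketched as a small orchestration with injected steps. Hypothetical wiring, not the real CLI code; all names here are illustrative:

```typescript
// Sketch of the `clawrouter setup` flow. Each step is injected so the
// sequence is testable; the real command shells out to `openclaw`.
type SetupDeps = {
  openclawOnPath: () => boolean;   // e.g. a `which openclaw` check
  pluginsInstallForce: () => void; // openclaw plugins install --force @blockrun/clawrouter
  injectModelsConfig: (opts: { forceWrite: boolean }) => void;
  injectAuthProfile: () => void;
  log: (msg: string) => void;
};

function runSetup(deps: SetupDeps): void {
  if (!deps.openclawOnPath()) throw new Error("openclaw not found on PATH");
  try {
    deps.pluginsInstallForce();
  } catch {
    // A validator collision rolls the install record back; continue anyway,
    // the manual config sync below still fixes the user's allowlist.
    deps.log("plugin install failed; retry after gateway start");
  }
  deps.injectModelsConfig({ forceWrite: true });
  deps.injectAuthProfile();
  deps.log("run `openclaw gateway restart` to pick up the plugin");
}
```

The key property is that the config sync runs even when the plugin install step fails, which is what makes the command resilient to the web_search validator.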
- Resilient against OpenClaw 2026.5.2's stricter validation: 2026.5.2 added a `tools.web.search.provider` validator that rejects `blockrun-exa` until that provider is actually registered (chicken-and-egg: we register it inside our plugin, but validation runs on the openclaw.json file before plugin code executes). When this trips, OpenClaw rolls back its install record. The setup command continues anyway and runs the manual config sync; even if registration didn't stick, the user's openclaw.json gets the full 38-entry allowlist, and the bot will see all models on the next gateway start. A warning prints suggesting a manual `openclaw plugins install --force @blockrun/clawrouter` retry after gateway start if needed.
- `injectModelsConfig` gained an `options.forceWrite` parameter (`src/index.ts:214`). The default `false` preserves the v0.12.184 deferred-write behavior for plugin-activation hooks; `forceWrite: true` is only used by the new `setup` CLI command since it is an explicit user action outside any install transaction. Plugin lifecycle paths (the `register()` callback at `src/index.ts:1602`) keep the unconditional defer.
- Both `injectModelsConfig` and `injectAuthProfile` are now exported from the package entry (`src/index.ts:2074`) so the CLI can call them directly without re-implementing the logic.
- README updated with explicit guidance on the two install paths: A1 (`curl … clawrouter-install.sh | bash`, recommended) and A2 (`npm install -g … && clawrouter setup`, a required two-step). The pure-npm path now has a prominent warning that skipping `setup` causes the 7-models symptom.
- End-to-end verified locally: `clawrouter setup` run against my own `~/.openclaw` populated `agents.defaults.models` with 39 `blockrun/*` entries (vs the prior partial state); the `models.providers.blockrun.models` picker plane synced to 39 too; auth profile written. Hit OpenClaw 2026.5.2's web_search validation as expected, but the manual sync ran around it.
Followup (deferred): OpenClaw 2026.5.2's `tools.web.search.provider` validator running before plugin activation is a structural mismatch: we register `blockrun-exa` inside our plugin, but validation expects the provider to be known statically. Either OpenClaw needs to relax this check post-plugin-load, or ClawRouter should declare the web_search provider via the plugin manifest rather than at runtime. Tracked separately; today's setup workaround unblocks users.
- Plugin install no longer crashes OpenClaw with `ConfigMutationConflictError`. v0.12.183 fixed the install script so `openclaw plugins install --force @blockrun/clawrouter` actually executes instead of bouncing on "plugin already exists". But once the install proceeded, OpenClaw 2026.5.2 crashed inside `commitPluginInstallRecordsWithConfig` → `replaceConfigFile` → `assertBaseHashMatches`: ClawRouter's plugin activation hook (`injectModelsConfig`) reads `~/.openclaw/openclaw.json` directly from disk and writes it back atomically (via tmp + rename) during activation. OpenClaw's install flow holds a baseHash on that exact file from before activation; when our hook bumped the hash, OpenClaw's own commit step refused to write its install record, threw, and the install rolled back. Two fixes in two releases, same user, same Vultr box, same rollback banner: no progress.
- Fix: `injectModelsConfig` now skips the disk write when not in gateway mode (`isGatewayMode()` returns false during `openclaw plugins install`, `openclaw plugins list`, etc.; it is only true for `openclaw gateway start/restart/stop`). The in-memory mutations still compute and the info logs still print, but the `writeFileSync(tmpPath)` + `renameSync(configPath)` is deferred. The same hook re-runs on first `openclaw gateway start` (gateway mode = true, no install transaction in flight) and persists the changes cleanly there. New log line: `Deferring config write to first gateway start (outside gateway mode)`.
- No regression on the gateway path. The guard at `src/index.ts:477` only short-circuits when `process.argv` does not contain `gateway`. Sanity-tested locally: started clawrouter via `node dist/cli.js`, hit `/v1/chat/completions` with `free/gpt-oss-120b`, returned 200 in 0.6s, same as v0.12.183.
- Why this didn't surface before today: OpenClaw 2026.5.2 (commit `8b2a6e5`, the version on the field-reporting Vultr box) added the `assertBaseHashMatches` strict check inside `replaceConfigFile`. Earlier OpenClaw versions silently allowed plugin-side disk writes to clobber the install transaction; the conflict went unnoticed because the install record was lost but the plugin still appeared installed. With the new strict check, the conflict surfaces as a hard `ConfigMutationConflictError` and the install genuinely rolls back. The bug has been latent in `injectModelsConfig` since v0.12.176 (when active config writes from this hook were introduced); it only became user-visible with OpenClaw 2026.5.2.
- No `scripts/` changes, so no blockrun re-deploy is required. The fix is in `src/index.ts`, bundled into the v0.12.184 npm tarball. The install script at blockrun.ai/clawrouter-install.sh is already correct as of v0.12.183; running it again now pulls the new tarball, plugin activation skips the conflicting write, OpenClaw commits its install record, and the gateway starts cleanly.
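The defer logic above condenses to a small guard. An illustrative sketch (the real check lives around `src/index.ts:477` and reads `process.argv`; here `argv` is passed in so the behavior is testable):

```typescript
// Sketch: only persist config to disk when running the gateway itself,
// or when the user explicitly forces it (the `clawrouter setup` path).
// Writing during `openclaw plugins install` would bump the file's
// baseHash mid-transaction and trip assertBaseHashMatches.
function isGatewayMode(argv: string[]): boolean {
  return argv.includes("gateway");
}

function shouldWriteConfig(opts: { forceWrite?: boolean }, argv: string[]): boolean {
  return opts.forceWrite === true || isGatewayMode(argv);
}
```

When the guard returns false, the in-memory mutations still run; only the `writeFileSync` + `renameSync` pair is deferred to the first gateway start.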
- Install/update scripts no longer roll back when the plugin is already installed. `scripts/update.sh:321` and `scripts/reinstall.sh:422` ran `openclaw plugins install @blockrun/clawrouter` without `--force`. On any machine where the plugin already lives at `~/.openclaw/npm/node_modules/@blockrun/clawrouter` (i.e. every existing user running an upgrade), OpenClaw rejects the install with `plugin already exists: ... (delete it first)` and a non-zero, non-124 exit code. The script's `|| { ... exit $exit_code; }` guard then fires, the EXIT trap rolls back to the prior install (`✗ Reinstall failed. Restoring previous ClawRouter install...`), and the user is silently stranded on the version they had, never reaching the new release.
- Fix: both shell scripts now invoke `openclaw plugins install --force @blockrun/clawrouter`. Per OpenClaw's own error message ("rerun install with `--force` to replace it"), `--force` is the documented and idempotent way to handle both fresh-install and upgrade flows. Applied at all four call sites (the timeout-wrapped + non-timeout paths in each script).
- The PowerShell counterpart `scripts/update.ps1` already uses a different approach: it manually `npm pack`s, `Remove-Item -Recurse -Force`s the plugin dir, and extracts (lines 112-129), bypassing `openclaw plugins install` entirely. No bug there, no change needed.
- Field reproduction: a Vultr-hosted user attempted to update to v0.12.182 and saw the rollback banner. Without the manual workaround `openclaw plugins update @blockrun/clawrouter`, they would have stayed on v0.12.181 indefinitely, defeating every prior fix in this session (image polling, predexon SKILL sync, reasoning-aware timeout).
- Note for users currently stranded: this fix lives on npm as `@blockrun/clawrouter@0.12.183` but reaches users only via `npm install -g`, `openclaw plugins update`, or the self-hosted blockrun.ai/clawrouter-install.sh. The self-hosted install script copy at `blockrun/public/clawrouter-install.sh` should be re-synced from this release before the next user attempts an upgrade; until that sync, a user pulling the install script via curl from blockrun.ai will still hit the broken behavior.
- Reasoning models no longer get aborted before they emit their first token. `PER_MODEL_TIMEOUT_MS` was hard-coded to 60s for every model. Reasoning/thinking-mode models (o-series, GPT-5 reasoning, Claude opus thinking, Gemini Pro, Grok reasoning, DeepSeek V4 Pro / reasoner, Kimi K2.x, Qwen3-thinking, etc.; 37 IDs total flagged with `reasoning: true` in `BLOCKRUN_MODELS`) routinely take 60–120s to produce the first token on a cold cache. ClawRouter was firing the per-attempt abort right at the moment the model was about to start streaming, so a hard-pinned reasoning model would 100% time out, and auto-routed reasoning fallbacks chained more reasoning timeouts back-to-back. This surfaces to the end user as `LLM request failed: network connection error` from the agent's HTTP client.
- Fix: the per-attempt timeout is now model-aware:
  - `REASONING_MODEL_TIMEOUT_MS = 180_000` (3 min) for any model with `reasoning: true`
  - `PER_MODEL_TIMEOUT_MS = 60_000` (unchanged) for everything else
  - A `timeoutForModel(id)` helper looks up the flag from `BLOCKRUN_MODELS` (computed once into a Set at module init for O(1) lookup)
  - All three AbortController setup sites updated: the primary attempt loop (`src/proxy.ts:4694`), the explicit-pin retry (`src/proxy.ts:4827`), and the 429 backoff retry (`src/proxy.ts:4897`)
  - `DEFAULT_REQUEST_TIMEOUT_MS` 180s → 300s (5 min). The global deadline now leaves headroom for one reasoning attempt (180s) + a non-reasoning fallback (60s) + on-chain settlement (~30s buffer). It was 180s, which would have collided exactly with a single reasoning attempt and starved fallback.
- Heartbeat path unchanged. Streaming requests already get an immediate `: heartbeat\n\n` followed by 2s-cadence keep-alives (`src/proxy.ts:4378-4389`). Non-streaming clients can't be helped by heartbeats over HTTP/1.1; they need to extend their own client-side HTTP timeouts (or switch to streaming).
- Diagnosed in the field: a Telegram bot user reported `LLM request failed: network connection error` after pinning their default model to `clawrouter/free/deepseek-v4-pro`. Reproduced locally on v0.12.181 with a $36 balance: the V4 Pro upstream took >30s for the first token, client-side curl `--max-time 30` gave up, and ClawRouter's 60s per-model abort would have fired at 60s had the upstream not returned by then. The new 180s window covers normal V4 Pro cold-start. (Today V4 Pro is also experiencing an upstream NIM outage that is unrelated to this fix; the `auto` profile correctly routes around it to other free models.)
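The model-aware timeout boils down to a Set lookup built once at module init. A condensed sketch; the tiny registry below is an illustrative stand-in for the real `BLOCKRUN_MODELS`:

```typescript
const REASONING_MODEL_TIMEOUT_MS = 180_000; // 3 min: first token on a cold cache can take 60-120s
const PER_MODEL_TIMEOUT_MS = 60_000;        // unchanged default for everything else

// Stand-in registry; the real one has ~68 entries, 37 flagged reasoning: true.
const MODELS = [
  { id: "free/deepseek-v4-pro", reasoning: true },
  { id: "deepseek/deepseek-reasoner", reasoning: true },
  { id: "free/gpt-oss-120b", reasoning: false },
];

// Computed once at module init for O(1) lookup per request.
const REASONING_IDS = new Set(MODELS.filter((m) => m.reasoning).map((m) => m.id));

function timeoutForModel(id: string): number {
  return REASONING_IDS.has(id) ? REASONING_MODEL_TIMEOUT_MS : PER_MODEL_TIMEOUT_MS;
}

// Per-attempt usage at each AbortController site, roughly:
//   const ac = new AbortController();
//   const timer = setTimeout(() => ac.abort(), timeoutForModel(modelId));
```

Unknown IDs fall back to the 60s default, so a model missing from the registry degrades to the old behavior rather than hanging.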
- Main `clawrouter` SKILL caught up to multi-venue scope. v0.12.180 expanded the dedicated `predexon` SKILL to BlockRun's 49-endpoint registry, but the headline `clawrouter` SKILL (the one OpenClaw and AI agents read first to decide whether ClawRouter is relevant) still said "Polymarket prediction market data" + "8 tools, Polymarket ↔ Kalshi". That description would have steered agents away from prediction-market questions about Kalshi/Limitless/Opinion/Predict.Fun, UMA resolution status, and wallet identity, even though the proxy and the predexon SKILL handle them.
- Updates:
  - Front-matter `description:` now lists Polymarket, Kalshi, Limitless, Opinion, Predict.Fun, dFlow + UMA oracle + wallet identity & clustering, so the discovery layer matches the actual capability.
  - Section `### Polymarket (Predexon)` renamed to `### Prediction Markets (Predexon)`. Body rewritten as a 4-bucket summary (Markets & trading, Leaderboard & smart money, Wallet analytics, UMA oracle + wallet identity) with the 49-endpoint count and accurate pricing tiers. Pointer to the dedicated `predexon` skill for the full reference.
- No code changes, and no other SKILLs changed. The `predexon` skill itself was already complete in v0.12.180. Pure visibility/triage fix on the headline SKILL.
- Predexon skill catches up to BlockRun's 49-endpoint registry. BlockRun shipped 10 new prediction-market endpoints on 2026-05-03 (commits `9640528` + `a06c652`, prod revisions `00442-jqf` and `00443-45g`); ClawRouter's `/v1/pm/*` catch-all whitelist already proxied them silently, but `skills/predexon/SKILL.md` documented none of them, so OpenClaw users and AI agents using the skill couldn't discover them.
- New endpoints documented:
  - Cross-venue search: `markets/search?q=...` ($0.005), a single call across Polymarket, Kalshi, Limitless, Opinion, Predict.Fun
  - Other venues markets list: `limitless/markets`, `opinion/markets`, `predictfun/markets` ($0.001 each), closing the prior gap where only orderbooks were exposed
  - UMA oracle resolution: `polymarket/uma/markets?state=...` and `polymarket/uma/market/{conditionId}` ($0.001 each), tracking the proposal/dispute/resolution lifecycle
  - Wallet identity & clustering: `polymarket/wallet/identity?wallet=...`, `polymarket/wallet/identities-batch?wallets=...` (GET, not POST; the upstream docs are wrong), `polymarket/wallet/cluster?wallet=...` ($0.005 each)
  - Per-token candlesticks: `polymarket/candlesticks/token/{tokenId}` ($0.001), OHLCV for a single outcome token (sibling to the existing market-level `candlesticks/{conditionId}`)
- SKILL.md additions: 4 new section blocks (Search Across All Venues, Other Venues, UMA Oracle Resolution Status, Wallet Identity & Clustering), 5 new example interactions, and 10 new rows in the endpoint reference table (36 → 46 documented; 3 long-standing gaps from BlockRun's 49, namely `polymarket/activity`, per-market volume, and open_interest, deliberately left for a follow-up). Front-matter `description` updated, plus 8 new triggers for the new categories (limitless / opinion markets / predict.fun / uma oracle / wallet identity / wallet cluster / cross-venue search).
- No code changes. The proxy whitelist (`src/proxy.ts:2669`) already matches `/v1/pm/*`; no new path needed. Pure docs/skill release.
- Slow image generation no longer silently breaks. `openai/gpt-image-2` (and any future model whose generation exceeds BlockRun's 30s inline window) returns `202 + { id, poll_url, poll_instructions }` from `POST /v1/images/generations`. ClawRouter previously took that 202 body and replied to the client with `200 OK` + the queued-job stub: no `data` array, no images, no error signal. The client (OpenClaw, SDK callers, curl) saw "success" with nothing usable.
- Fix: mirror the existing video polling loop into `/v1/images/generations`. After the initial `payFetch` POST, if the response is 202 with a `poll_url`, ClawRouter now polls `GET /v1/images/generations/{id}` every 3s (after a 2s warmup) for up to 5 minutes, exactly the pattern used for `/v1/videos/generations` since 2026-04-23. On `status=completed` the response is rewritten to the final `{ data: [...] }` body and flows through the same image-saving / localhost-rewrite path as fast models. On `failed`: 502 with details. On the 5-minute timeout: 504 (no payment settled; the server only settles on the first completed poll). The client still sees a single blocking POST.
- `/v1/images/image2image` deliberately untouched. BlockRun's `image2image` route has no `[id]` poll endpoint and no `INLINE_GEN_TIMEOUT`; it is fully synchronous server-side, so there is no 202 path to handle. Adding speculative polling there would be dead code.
- No payment-flow change. `payFetch` handles wallet signing for the initial POST and each subsequent poll GET; BlockRun's `[id]` route binds the job to the payer wallet and settles idempotently on the first completed poll, identical to the video flow. `paymentStore.amountUsd` still reflects the verified-then-settled amount for `logUsage`.
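The 202-poll pattern above can be sketched as a loop with injected I/O. Illustrative only; `pollGeneration`, `fetchJson`, and `sleep` are hypothetical names, and the real code goes through `payFetch` for wallet signing:

```typescript
// Sketch: after the initial POST returns { id, poll_url }, poll until
// completed/failed or the deadline. I/O is injected so the loop is testable.
type PollResult =
  | { kind: "completed"; body: unknown }  // → rewritten to 200 + { data: [...] }
  | { kind: "failed"; details: unknown }  // → 502 with details
  | { kind: "timeout" };                  // → 504; server settles only on first completed poll

async function pollGeneration(
  pollUrl: string,
  fetchJson: (url: string) => Promise<{ status: string; [k: string]: unknown }>,
  sleep: (ms: number) => Promise<void>,
  opts = { warmupMs: 2_000, intervalMs: 3_000, timeoutMs: 300_000 },
): Promise<PollResult> {
  await sleep(opts.warmupMs);
  for (let waited = opts.warmupMs; waited <= opts.timeoutMs; waited += opts.intervalMs) {
    const body = await fetchJson(pollUrl);
    if (body.status === "completed") return { kind: "completed", body };
    if (body.status === "failed") return { kind: "failed", details: body };
    await sleep(opts.intervalMs);
  }
  return { kind: "timeout" };
}
```

Because the client sees a single blocking POST either way, fast models and queued jobs share one response path.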
- DeepSeek V4 Pro added to REASONING fallbacks (auto + eco). The backend shipped `deepseek/deepseek-v4-pro` (1.6T MoE / 49B active, 1M context; the strongest open-weight reasoner: MMLU-Pro 87.5, GPQA 90.1, SWE-bench 80.6, LiveCodeBench 93.5) at $0.50 in / $1.00 out per 1M under the 75% promo through 2026-05-31 (list $2.00/$4.00 after). Wired into `auto.tiers.REASONING.fallback` after `deepseek-reasoner`/`grok-4-fast-reasoning` and into `eco.REASONING.fallback` after `deepseek-reasoner`. V4 Flash thinking (`deepseek-reasoner`, $0.20/$0.40) stays primary because it is cheaper; V4 Pro is the harder-task escape hatch.
- DeepSeek chat/reasoner now carry V4 Flash semantics. `deepseek/deepseek-chat` and `deepseek/deepseek-reasoner` (already in tier configs) had their upstream rerouted to V4 Flash non-thinking / thinking modes, repriced from $0.28/$0.42 to $0.20/$0.40 with 1M context (was 128K). No SDK source change needed; pricing is fetched from `/v1/models` at runtime, and the tier configs got a comment refresh to note the V4 Flash repricing.
- `deepseek/deepseek-v4-pro` added to `top-models.json` so the OpenClaw `/model` picker surfaces the new flagship.
- No `FREE_MODELS` changes. `nvidia/gpt-oss-120b` and `nvidia/gpt-oss-20b` were briefly delisted 2026-04-28 but re-enabled 2026-04-30 with `available: true` + `hidden: true`: they no longer appear in `/v1/models` (so the picker hides them) but ClawRouter's `FREE_MODELS` set still uses them as the historical free defaults; direct calls work.
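The fallback wiring above reduces to walking a tier's chain until a call succeeds. A hedged sketch with an injected `call`; the tier shape and IDs are illustrative, drawn from this entry:

```typescript
// Sketch: try the tier primary, then each fallback in order. The real
// proxy treats an upstream error (e.g. a 400 from a retired model) as a
// signal to try the next chain entry.
type Tier = { primary: string; fallback: string[] };

async function routeWithFallback<T>(tier: Tier, call: (model: string) => Promise<T>): Promise<T> {
  const chain = [tier.primary, ...tier.fallback];
  let lastErr: unknown;
  for (const model of chain) {
    try {
      return await call(model);
    } catch (err) {
      lastErr = err; // remember and keep going down the chain
    }
  }
  throw lastErr; // whole chain exhausted
}

// Illustrative eco REASONING tier after this release:
const ecoReasoning: Tier = {
  primary: "deepseek/deepseek-reasoner",  // V4 Flash thinking, cheaper
  fallback: ["deepseek/deepseek-v4-pro"], // harder-task escape hatch
};
```

This is why keeping the cheaper model primary is safe: a failure costs one extra attempt, not a broken request.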
- Picker actually filtered now, via the right layer. v0.12.175 + v0.12.176 both targeted `cfg.models.providers.blockrun.models`, but per v0.11.8's checked-in design (`src/index.ts:379`), the OpenClaw `/model` picker is whitelisted by `cfg.agents.defaults.models`; that is the canonical filter. The path-based-plugin install case (where users install ClawRouter from a local checkout via `installPath = sourcePath = ...`) never runs `scripts/update.sh`/`scripts/reinstall.sh`, so the install-script prune-and-add never fires. `injectModelsConfig` in `src/index.ts` only added entries and never pruned, so retired models accumulated forever in the allowlist.
- Fix: `injectModelsConfig` now actively syncs `blockrun/*` allowlist entries to TOP_MODELS exactly: it adds missing entries AND removes stale ones. This mirrors the install-script behavior, so plugin-load-only users (no install-script flow) get correct picker visibility on the next OpenClaw restart. Non-`blockrun/*` entries (other providers like OpenRouter) are preserved.
- The `/v1/models` HTTP endpoint is deliberately unchanged; it keeps the full ~175-entry list including aliases, so API-level discovery and `/model <alias>` resolution stay open. The filter only applies to the picker UI.
- v0.12.175 + v0.12.176 changes retained as defense-in-depth: `buildProviderModels` still returns `VISIBLE_OPENCLAW_MODELS`, and `index.ts` still writes `VISIBLE_OPENCLAW_MODELS` to `cfg.models.providers.blockrun.models`. Even though the picker filter is allowlist-driven, keeping these aligned costs nothing.
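The add-missing-and-remove-stale sync can be expressed as a pure function over the allowlist. A sketch with illustrative names (the real logic lives inside `injectModelsConfig`; entry IDs below are examples):

```typescript
// Sketch: sync `blockrun/*` entries in the allowlist to TOP_MODELS exactly,
// while preserving entries from other providers (e.g. OpenRouter).
function syncAllowlist(existing: string[], topModels: Set<string>): string[] {
  // Drop stale blockrun entries; keep everything that isn't ours.
  const kept = existing.filter((id) => !id.startsWith("blockrun/") || topModels.has(id));
  // Add any TOP_MODELS entries not already present.
  const present = new Set(kept);
  const added = [...topModels].filter((id) => !present.has(id));
  return [...kept, ...added];
}
```

Because the function is idempotent, re-running it on every plugin activate converges the on-disk allowlist instead of letting retired models accumulate.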
- Picker filter v0.12.175 didn't actually take effect. Root cause: `src/index.ts` independently writes `cfg.models.providers.blockrun.models` at plugin startup (lines 293, 331, 1582), and it referenced the unfiltered `OPENCLAW_MODELS` (~175 entries), so on every plugin activate it overwrote any pruned array with the full list, completely bypassing the v0.12.175 fix in `buildProviderModels`. Users updating to v0.12.175 still saw 50-58+ entries because `index.ts` re-injected the full set right after my filter ran.
- Fix: `src/index.ts` now imports `VISIBLE_OPENCLAW_MODELS` and writes that to `cfg.models.providers.blockrun.models` at all three injection points (provider config injection, validation refresh, runtime port re-injection). The validation logic also gained a "stale superset" check: if the on-disk array contains IDs NOT in `VISIBLE_OPENCLAW_MODELS`, it triggers a rewrite to actively shrink the array (previously additive-only). This means existing users with stale 159+ entry arrays get their picker auto-pruned on the first plugin activate after upgrading.
- No registry, alias, or routing changes. `OPENCLAW_MODELS` (the full set) remains the resolution layer for proxy routing and alias matching; only the picker-advertisement layer (the `provider.models` getter + the `index.ts` writes) is filtered.
- Picker filter actually works now. v0.12.173's `top-models.json` trim was supposed to slim the OpenClaw `/model` picker but didn't, because the picker reads from `cfg.models.providers.blockrun.models`, populated by ClawRouter's `provider.models` getter (`src/provider.ts:43`) → `buildProviderModels()` (`src/models.ts:1163`), which returned the FULL `OPENCLAW_MODELS` array (~175 entries: 68 BLOCKRUN_MODELS + 107 ALIAS_MODELS). `top-models.json` only drove `agents.defaults.models` (a separate allowlist that controls which models can be set as default, NOT what shows in the picker). Net effect for users on v0.12.173/v0.12.174: their picker still showed 50-58+ entries including long-retired models (`gpt-5.2`, `gpt-4.1`, `o1`, `o1-mini`, `o3-mini`, `nvidia/kimi-k2.5`, `xai/grok-2-vision`, `free/nemotron-ultra-253b`, etc.).
- Fix: `buildProviderModels` now filters `OPENCLAW_MODELS` through a `TOP_MODELS_SET` derived from `src/top-models.json`. The picker drops to ~38 visible entries on the next OpenClaw refresh of the provider models. A new `VISIBLE_OPENCLAW_MODELS` export in `src/models.ts` is the canonical "what the picker advertises" list.
- The `/v1/models` HTTP endpoint is deliberately unchanged; it still returns the full ~175-entry list for API-level discovery (per the original v0.12.173 intent: "hide from list, but still callable"). Direct ID + alias resolution unaffected; router fallbacks unaffected; proxy routing unaffected.
- Migration note for existing users: OpenClaw merges, and never deletes, from `cfg.models.providers.blockrun.models`. So users who installed v0.12.174 or earlier still have their old 159-entry array on disk; they'll need either a fresh OpenClaw plugin re-install (which re-reads `provider.models`) or manual openclaw.json cleanup. Future install/update scripts should add a prune step here, similar to the existing `agents.defaults.models` prune; tracked as a follow-up.
- `profile=auto` and `profile=agentic` MEDIUM-tier primary swapped from Kimi K2.5 → K2.6. Per-call cost on these MEDIUM routes goes from $0.60/$3.00 → $0.95/$4.00: +58% on input tokens, +33% on output tokens for default-profile users whose classifier returns MEDIUM. The decision deliberately reverses v0.12.170's "tier primaries unchanged pending K2.6 retention/IQ data" stance. The trigger: BlockRun hid K2.5 from its public UI on 2026-04-28 (commit `bfbdedf`) and we hid it from ClawRouter's picker in v0.12.173, so the trajectory toward server-side K2.5 retirement is clear. Promoting K2.6 now is future-proofing: if BlockRun pulls K2.5 server-side later, every MEDIUM call would otherwise 400 → fallback-second-choice silently, which is harder to debug than a clean primary that is already on the still-supported model.
- Cost-stability opt-out: users who prefer K2.5's pricing can pin `model: "moonshot/kimi-k2.5"` directly (or use the `kimi-k2.5` alias). K2.5 stays in `BLOCKRUN_MODELS` and the alias map, and is now wired in as the first fallback in both the `autoTiers.MEDIUM` and `agenticTiers.MEDIUM` chains, so even on the auto path, if K2.6 has an infra hiccup the next attempt is K2.5 (same model, same cost as the v0.12.173 default). Profiles `eco` and `premium` are unaffected (eco MEDIUM = `gemini-3.1-flash-lite`; premium SIMPLE was already K2.6).
- Registry, picker, and other tier primaries unchanged. Both Kimi versions remain in `src/models.ts`, `src/top-models.json` is identical to v0.12.173, and no other auto/agentic/eco/premium primaries moved. The two known "hidden but still primary" inconsistencies (`autoTiers.SIMPLE` = `google/gemini-2.5-flash`, `agenticTiers.SIMPLE` = `openai/gpt-4o-mini`) are tracked but deferred; they don't have the same urgency signal (BlockRun hasn't pulled them from its UI).
- Picker decluttered: 12 superseded long-tail models hidden from the OpenClaw `/model` UI. `src/top-models.json` trimmed from 50 → 38 entries. Hidden: `anthropic/claude-opus-4.5`, `openai/gpt-5.3`, `openai/gpt-5-mini`, `openai/gpt-5-nano`, `openai/gpt-4o`, `openai/gpt-4o-mini`, `openai/o3`, `openai/o4-mini`, `google/gemini-2.5-pro`, `google/gemini-2.5-flash`, `google/gemini-2.5-flash-lite`, `moonshot/kimi-k2.5`. The picker count drops from "55 available" to ~43 once users run `clawrouter update` or reinstall.
- No callability regression and no fallback impact. This is a UX-only change: the `BLOCKRUN_MODELS` registry, `MODEL_ALIASES`, and the `src/router/config.ts` fallback chains are all untouched. Direct calls (`model: "openai/gpt-4o"`) and aliases (`gpt`, `gpt4`, `mini`, `o3`, `gemini`, `flash`, `kimi-k2.5`, `nvidia/kimi-k2.5`, `anthropic/claude-opus-4-5`, `minimax-m2.5`) continue to resolve and route normally. The `/v1/models` HTTP endpoint still advertises all 175 entries (registry + aliases) for API-level model discovery; only the OpenClaw picker is filtered.
- `openai/gpt-5.3-codex` deliberately kept visible. The codex variant is treated as a distinct developer-targeted entry and stays in the picker. `minimax/minimax-m2.5` was already absent from `top-models.json` (only `minimax/minimax-m2.7` was listed); no action needed, and the `minimax-m2.5` alias still works.
- Three new free NVIDIA-hosted models added. BlockRun refreshed the free catalog on 2026-04-29 with three additions, all wired into ClawRouter as `free/`-prefixed entries:
  - `free/deepseek-v4-pro` — 1.6T MoE / 49B active, 1M context, MMLU-Pro 87.5, GPQA 90.1, SWE-bench 80.6, LiveCodeBench 93.5. NIM ~150 tok/s on Blackwell. Strongest free reasoning model.
  - `free/deepseek-v4-flash` — 284B / 13B active MoE, 1M context, ~5x faster than v4-pro. Strong on chat/summarization (MMLU-Pro 86.2). Weaker factual recall (SimpleQA 34% vs Pro's 58%) — pick v4-pro for fact-heavy agentic loops.
  - `free/nemotron-3-nano-omni-30b-a3b-reasoning` — 31B / 3.2B active MoE, 256K context. First vision-capable free model in the catalog. Accepts text, images, video (up to 2 min), audio (up to 1 hr). ChartQA 90.3, DocVQA 95.6, MMMU 70.8.
- `free/deepseek-v3.2` phased out in favor of `free/deepseek-v4-pro` (strict-superset replacement: same family, larger context, higher benchmarks). Removed from `BLOCKRUN_MODELS`, the `FREE_MODELS` set, the `top-models.json` picker, the README pricing table, and the SKILL.md model list. Aliases kept and redirected: `nvidia/deepseek-v3.2`, `free/deepseek-v3.2`, and `deepseek-free` now all resolve to `free/deepseek-v4-pro`, so existing pins continue to work and silently get the upgrade.
- `gpt-oss-120b` / `gpt-oss-20b` deliberately kept as defaults despite BlockRun's 2026-04-28 retirement (`available: false` server-side). Heavy user demand outweighs source-of-truth alignment for these specific IDs — the `free`/`nvidia`/`gpt-120b`/`gpt-20b` aliases all still resolve to `free/gpt-oss-120b` (or 20b), the `FREE_MODEL` constant still points at `free/gpt-oss-120b`, and the `ecoTiers.SIMPLE` primary stays unchanged. ClawRouter's existing fallback-chain logic handles any 400 ("Model not available") from BlockRun by trying the next chain entry, so failures degrade gracefully rather than break user workflows.
- New shorthand aliases for the additions: `deepseek-v4-pro`, `deepseek-v4-flash`, `v4-pro`, `v4-flash`, `nemotron-omni`, `nano-omni`, `vision-free` — chosen to mirror BlockRun's bare-name aliases at `route.ts:639-640`, plus a `vision-free` discovery shortcut for the new vision-capable model.
- `ecoTiers.SIMPLE` fallback chain extended with the three new free models (mistral-small, deepseek-v4-flash, qwen3-next) inserted before the paid Gemini fallbacks, so eco-profile users get more all-free chain depth before paid models kick in. Primary is unchanged (`free/gpt-oss-120b`).
- Provider routing safety note. BlockRun's `NVIDIA_MODEL_MAP` in `src/lib/ai-providers.ts:2094-2111` does NOT have explicit entries for the 3 new models, but `callOpenAICompatible` falls through to the bare model name (`modelMap[k] || k`), so ClawRouter sending `nvidia/deepseek-v4-pro` reaches NVIDIA NIM as bare `deepseek-v4-pro` — which is what NIM expects. Documented in the `BLOCKRUN_MODELS` comment block in `src/models.ts`. If BlockRun later adds explicit map entries with different upstream names, this side needs no change.
- Net free-model count: 8 → 10 (8 originals + 3 added - 1 phased out). README badge, tagline, "Quick Start" sections, and SKILL.md description all updated to reflect "10 free NVIDIA models". The README pricing table adds three new rows in benchmark order.
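The alias-redirect pattern above can be sketched as a flat lookup with a loop guard; the map contents mirror the entry, but the helper name and shape are illustrative, not the real `src/models.ts` code:

```typescript
// Illustrative sketch of alias redirection: retired or renamed IDs chase
// redirects until a canonical model ID is reached, so stale pins keep working.
const ALIAS_REDIRECTS: Record<string, string> = {
  "nvidia/deepseek-v3.2": "free/deepseek-v4-pro",
  "free/deepseek-v3.2": "free/deepseek-v4-pro",
  "deepseek-free": "free/deepseek-v4-pro",
};

function resolveModel(id: string): string {
  let current = id;
  const seen = new Set<string>(); // guard against accidental redirect cycles
  while (ALIAS_REDIRECTS[current] && !seen.has(current)) {
    seen.add(current);
    current = ALIAS_REDIRECTS[current];
  }
  return current;
}

console.log(resolveModel("deepseek-free")); // "free/deepseek-v4-pro"
console.log(resolveModel("openai/gpt-4o")); // unknown IDs pass through unchanged
```

Existing pins on the retired ID silently receive the upgraded target, matching the server-side behavior described above.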
- Test fixtures. `src/router/strategy.test.ts`'s `MODEL_PRICING` map gains entries for the 3 new free models. No assertion changes anywhere else — `gpt-oss-120b` stays the asserted default in `src/exclude-models.test.ts`, `src/models.test.ts`, `test/fallback.ts`, and `test/integration/exclude-models.test.ts`.
- Bare `kimi` / `moonshot` aliases now resolve to Kimi K2.6. BlockRun hid Kimi K2.5 from its public model UI on 2026-04-28 (commit `bfbdedf`) and now features K2.6 as the Moonshot flagship. ClawRouter's local alias map still pointed `kimi` and `moonshot` at K2.5, a quiet drift from the source-of-truth registry: agents asking for "kimi" got the previous-gen model while BlockRun's homepage advertised K2.6. The aliases now resolve to `moonshot/kimi-k2.6`, and a new bare `kimi-k2` alias is added for the same target. Users who explicitly pinned `kimi-k2.5` continue to get K2.5 — the explicit pin is preserved as a cost-stability opt-in ($0.60/$3.00 vs K2.6's $0.95/$4.00). NVIDIA-hosted K2.5 (retired 2026-04-21) still redirects to `moonshot/kimi-k2.5`.
- Routing tier primaries deliberately unchanged. `autoTiers.MEDIUM` and `agenticTiers.MEDIUM` continue to anchor on `moonshot/kimi-k2.5`. Promoting them to K2.6 would silently raise per-call cost +58% on input / +33% on output for every default user — that's a separate decision tracked outside this release, ideally with measured retention/IQ data on K2.6 vs K2.5. `premiumTiers.SIMPLE` was already `moonshot/kimi-k2.6` and is unchanged. Net effect: the behavior shift is opt-in via the `kimi` alias / `kimi-k2` shorthand, not forced through default routing.
- Doc and test fixture refresh. README's profile-overview table now shows `kimi-k2.6` in the PREMIUM column (matching `docs/routing-profiles.md` and `src/router/config.ts:1134`). `src/router/strategy.test.ts` gains a K2.6 pricing fixture so cost-calc tests stay honest if K2.6 ever appears in test scenarios. `src/proxy.models-endpoint.test.ts` now asserts both `kimi-k2.6` and `moonshot/kimi-k2.6` are discoverable through the `/models` endpoint. `test/fallback.ts`'s "Unknown model" example list leads with `moonshot/kimi-k2.6`.
- Synthesize structured `tool_calls` from XML/text formats some models emit in `content`. Earlier tool-call hardening (v0.12.165, v0.12.166) handled the case where upstream returned a structured `tool_calls` array (or signaled `finish_reason: "tool_calls"`) and the model also leaked planning prose into `content`. This release closes a third gap where upstream returns no structured tool calls at all and the model's actual tool invocations live as XML/text inside `content` — typical when a downstream client (OpenClaw is the visible offender) prompt-engineers tool instructions instead of sending a structured `tools[]` schema, so the model dutifully honors the prompt format and emits the call as text. Two formats observed in the wild are now recognized and converted to OpenAI-shaped `tool_calls`:
  - OpenClaw-style — `<tool_call>NAME<arg_key>K1</arg_key><arg_value>V1</arg_value>...<arg_key>Kn</arg_key><arg_value>Vn</arg_value></tool_call>`. Requires at least one `arg_key`/`arg_value` pair so prose like `<tool_call>name</tool_call>` in documentation does not mis-fire. Surfaced via a real ClawRouter→OpenClaw session where the agent emitted six identical `<tool_call>web_search<arg_key>...</arg_key>...` blocks in 60 seconds, none executed, then hallucinated "I need a Brave API key" as the failure explanation.
  - Anthropic-style — `<function_calls><invoke name="NAME"><parameter name="K">V</parameter>...</invoke></function_calls>`. Reproduction confirmed Moonshot Kimi K2.6 emits this format when given prompt-engineered tool instructions without a structured `tools[]` schema.
  - Values are best-effort coerced via `JSON.parse`, so `<arg_value>5</arg_value>` becomes `5` (number) and `<arg_value>true</arg_value>` becomes `true` (boolean); strings that don't parse stay as strings. Synthesized IDs are OpenAI-shaped (`call_<base64url>`).
  - Wired into both response paths: the SSE conversion path (`src/proxy.ts:5081+`) and the non-streaming JSON path (`src/proxy.ts:5325+`). When extraction succeeds, `content` is blanked, `message.tool_calls` is populated, and `finish_reason` flips to `"tool_calls"` — matching exactly the shape downstream tool executors already handle from the v0.12.165/166 paths.
  - New module `src/textual-tool-calls.ts` plus `src/textual-tool-calls.test.ts` (13 unit tests) and four new integration tests in `src/proxy.tool-forwarding.test.ts` covering OpenClaw format / non-streaming, OpenClaw format / SSE, Anthropic format / non-streaming, and a negative test (plain prose passes through unchanged with `finish_reason: "stop"`).
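A minimal sketch of the OpenClaw-style extraction and `JSON.parse` coercion described above; the real `src/textual-tool-calls.ts` is more defensive, and the helper names here are illustrative:

```typescript
// Best-effort coercion: "5" -> 5, "true" -> true, non-JSON stays a string.
function coerce(raw: string): unknown {
  try { return JSON.parse(raw); } catch { return raw; }
}

// Extract one OpenClaw-style textual tool call from content, or null.
function extractOpenClawToolCall(
  content: string,
): { name: string; arguments: Record<string, unknown> } | null {
  const m = content.match(/<tool_call>\s*([\w.-]+)([\s\S]*?)<\/tool_call>/);
  if (!m) return null;
  const args: Record<string, unknown> = {};
  const pair = /<arg_key>([\s\S]*?)<\/arg_key>\s*<arg_value>([\s\S]*?)<\/arg_value>/g;
  let p: RegExpExecArray | null;
  while ((p = pair.exec(m[2])) !== null) args[p[1].trim()] = coerce(p[2].trim());
  // Require at least one key/value pair so bare <tool_call>name</tool_call>
  // prose in documentation does not mis-fire.
  if (Object.keys(args).length === 0) return null;
  return { name: m[1], arguments: args };
}

const call = extractOpenClawToolCall(
  "<tool_call>web_search<arg_key>query</arg_key><arg_value>btc price</arg_value>" +
    "<arg_key>count</arg_key><arg_value>5</arg_value></tool_call>",
);
// call: { name: "web_search", arguments: { query: "btc price", count: 5 } }
```

On success the proxy would then blank `content`, populate `message.tool_calls`, and flip `finish_reason`, as described in the entry.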
- `/model` picker allowlist now lives in `src/top-models.json` (single source of truth, loaded by `src/top-models.ts`). Previously `injectModelsConfig()` in `src/index.ts` carried a literal array that drifted from the install scripts' `TOP_MODELS` (which carry their own copies in `scripts/reinstall.sh` + `scripts/update.sh`). The JSON file is the version anyone actually edits going forward; both the runtime (`src/index.ts`) and the test suite (`src/top-models.test.ts`) read from it. The install scripts still carry their own embedded copies because they run before npm dependencies are resolved — but now there's one canonical list to copy from when adding a new model.
- Alias adds. `br-sonnet` → `anthropic/claude-sonnet-4.6` (matching the existing `br-` partner shorthand pattern), and `gpt5` now resolves to `openai/gpt-5.5` instead of `openai/gpt-5.4` (following v0.12.167's GPT-5.5 promotion as BlockRun's newest visible flagship).
- Propagate `openai/gpt-5.5` everywhere it should appear. v0.12.167 added the model to `BLOCKRUN_MODELS`, the `gpt-5.5` alias, and the install-script `TOP_MODELS` allowlist — but every other place ClawRouter advertises a flagship still pointed at `gpt-5.4`. This release closes the gap so 5.5 is a first-class citizen across routing, the picker, marketing, and the OpenClaw skill page.
  - `src/router/config.ts` — three fallback-chain insertions, no primary changes. `openai/gpt-5.5` slots in immediately before `openai/gpt-5.4` in `auto.COMPLEX.fallback`, `premiumTiers.COMPLEX.fallback`, and `agenticTiers.COMPLEX.fallback`. Both stay reachable; 5.5 gets preference when the chain reaches OpenAI. Comments updated so 5.5 is "newest flagship — 1M+ ctx, native agent + computer use" and 5.4 is "previous flagship — benchmarked at 6,213ms, IQ 57". Tier primaries are unchanged: promoting 5.5 to a primary slot needs measured latency/IQ data, which we don't have yet — that's a separate decision tracked outside this release.
  - `src/index.ts` — `/model` picker allowlist updated. `src/index.ts` carries its own copy of `TOP_MODELS` (separate from the install scripts' identical-but-distinct list — both populate the OpenClaw allowlist depending on install path). Added `openai/gpt-5.5` and `anthropic/claude-opus-4.5` (also missed in v0.12.167's `BLOCKRUN_MODELS` add for opus-4.5), and replaced the now-deprecated `minimax/minimax-m2.5` with `minimax/minimax-m2.7` so the picker matches the deprecation we landed yesterday.
  - `README.md` — Premium Models pricing table. Added the `openai/gpt-5.5` row at $5.00/$30.00 per 1M tokens (~$0.0175 per 0.5K-in/0.5K-out request), 1M context, full feature set. Placed between `claude-opus-4.6` ($0.0150) and `o1` ($0.0375) so the table stays sorted by approximate $/request.
  - `skills/clawrouter/SKILL.md` — model list line. The "55+ models including..." line now leads with `gpt-5.5, gpt-5.4, ...` and includes `claude-opus-4.5` alongside 4.7/4.6.
- Files deliberately not touched: `docs/smart-llm-router-14-dimension-classifier.md` and `docs/llm-router-benchmark-46-models-sub-1ms-routing.md` are frozen benchmark archives — adding 5.5 to a benchmark table without measured numbers would falsify the document. The `posts/*.md` marketing content is similarly point-in-time. Those will be refreshed if/when 5.5 gets benchmarked.
- Realign the model registry to BlockRun source-of-truth. An audit found three drifts where ClawRouter's `BLOCKRUN_MODELS` table didn't match what `blockrun/src/lib/models.ts` actually exposes. The server is the source of truth for which models exist and what they cost; the proxy's local view should mirror that 1:1 so cost estimation, the `/model` picker, and routing tier selection all see the same world the server does.
  - Add `openai/gpt-5.5`. BlockRun's newest visible OpenAI flagship — first fully retrained base since GPT-4.5, 1M+ context, 128K output, native agent + computer use. Pricing $5/$30 per 1M tokens. Added to `BLOCKRUN_MODELS`, the `gpt-5.5` alias, and the `TOP_MODELS` allowlist in both install scripts. Routing tiers in `src/router/config.ts` continue to anchor on `gpt-5.4` because that's what's benchmarked; users can pin `5.5` explicitly. The routing change is a separate decision.
  - Add `anthropic/claude-opus-4.5` as a distinct model. Previously ClawRouter's `MODEL_ALIASES` silently rewrote `anthropic/claude-opus-4.5` to 4.7, making 4.5 unreachable through ClawRouter even though BlockRun lists it as a separate visible model with its own pricing and 200K context (vs 4.6/4.7's 1M). Removed the alias, added 4.5 to `BLOCKRUN_MODELS` with its real 200K/32K shape, and added an `anthropic/claude-opus-4-5` (dashed) alias for the slug variant. A test in `src/models.test.ts` was codifying the old upgrade-to-4.7 behavior — flipped to assert the pin is preserved end-to-end.
  - Mark `minimax/minimax-m2.5` deprecated → fallback `minimax/minimax-m2.7`. BlockRun retired m2.5 entirely (only m2.7 is in their `MODELS` table). ClawRouter still listed both; m2.5 now flips to `deprecated: true` with the m2.7 fallback so existing pins keep working.
  - `scripts/reinstall.sh` + `scripts/update.sh`: drop `minimax/minimax-m2.5` from the `TOP_MODELS` picker allowlist (still reachable, just hidden from the picker) and add `openai/gpt-5.5` + `anthropic/claude-opus-4.5`.
- Tool-call planning prose suppressed even when `finish_reason` is the only signal (thanks @0xCheetah1, #162). Follow-up to v0.12.165's #161 fix. Live Telegram/OpenClaw testing caught one more shape the planning-prose leak could wriggle through: some upstreams (Moonshot Kimi K2.6 again) mark a turn with `finish_reason: "tool_calls"` without exposing `message.tool_calls` / `delta.tool_calls` at the same inspection point. The #161 gate (`toolCalls.length > 0`) saw no array and let the prose through. The gate is now `endsWithToolCalls || toolCalls.length > 0` — applied consistently across the non-streaming JSON path and the SSE emission path, plus the finish-reason override in the SSE terminal chunk. Two new regression tests in `src/proxy.tool-forwarding.test.ts` — one per response shape — lock the behavior in: a response with `finish_reason: "tool_calls"` and no tool_calls array has its `content` blanked and the `tool_calls` finish_reason preserved. User-visible impact: fewer "I should look up X before replying" preambles sneaking into agent chat surfaces for turns that are supposed to be pure tool invocations.
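The widened gate can be sketched as follows; this is a simplification of the real `src/proxy.ts` paths, and the helper name is invented for illustration:

```typescript
// Simplified model of one response choice, OpenAI Chat Completions shape.
interface Choice {
  finish_reason?: string;
  message?: { content?: string; tool_calls?: unknown[] };
}

function suppressPlanningProse(choice: Choice): Choice {
  const toolCalls = choice.message?.tool_calls ?? [];
  const endsWithToolCalls = choice.finish_reason === "tool_calls";
  // Old gate: toolCalls.length > 0 only — missed turns where finish_reason
  // was the sole signal. New gate accepts either indicator.
  if (endsWithToolCalls || toolCalls.length > 0) {
    return {
      ...choice,
      finish_reason: "tool_calls",
      message: { ...choice.message, content: "" }, // blank the planning prose
    };
  }
  return choice; // normal turns pass through untouched
}
```

A turn with `finish_reason: "tool_calls"` and no array now has its `content` blanked, matching the regression tests described above.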
- Tool-call planning prose no longer leaks to chat surfaces (thanks @0xCheetah1, #161). Some OpenAI-compatible providers — Moonshot's Kimi K2.6 was the visible offender through OpenClaw Telegram — return `{ content: "The user wants the current time. I should call get_current_time with Chicago.", tool_calls: [...] }`. Tool execution only needs `tool_calls`; the `content` field is internal planning that the upstream should have hidden behind a `<think>` tag but didn't. ClawRouter now suppresses `content` whenever `tool_calls.length > 0`, in both the non-streaming JSON response path and the SSE-conversion path that clients like OpenClaw hit with `stream: true`. Tool execution is unaffected; only the user-visible planning prose goes away. Covered by two regression tests in `src/proxy.tool-forwarding.test.ts` (one per response shape).
- Plugin restart loop killed.
`injectModelsConfig()` in `src/index.ts` writes ClawRouter-owned keys into `~/.openclaw/openclaw.json` on every plugin load. OpenClaw's config watcher has a catch-all rule — any change with no matching plugin-declared prefix triggers a full gateway restart — so `mcp.servers.blockrun` writes kept ping-ponging the gateway. The plugin definition now exposes `reload: { noopPrefixes: ["mcp.servers.blockrun"] }` (a new optional field on `OpenClawPluginDefinition`) to tell OpenClaw's loader that ClawRouter self-manages that prefix. Silently ignored on OpenClaw runtimes that predate the `reload` field.
- Dedup + response cache now isolate streaming and non-streaming callers. Discovered while adding the SSE regression test for the tool-call fix: a
`stream: true` request that followed an identical-body `stream: false` request was getting `content-type: application/json` instead of `text/event-stream`. Two compounding bugs: ClawRouter rewrites `parsed.stream = false` before the upstream call (the BlockRun API doesn't support streaming), and both `RequestDeduplicator.hash(body)` and `ResponseCache.generateKey(body)` ran AFTER that rewrite — so a `stream: true` and a `stream: false` request hashed identically. Worse, `response-cache.ts`'s `normalizeForCache` explicitly stripped `stream` from the key with the comment "we handle streaming separately" (it never did). Fix: (1) prefix both `dedupKey` and `cacheKey` in `src/proxy.ts` with the original `isStreaming` intent (`"sse:"` vs `"json:"`), so the two shapes never share a cache slot; (2) stop stripping `stream` in `normalizeForCache`. A latent bug — real-world impact was small because the exact scenario (identical body, different stream flag, within the 30s/10min TTLs) is rare in practice — but a correctness bug nonetheless. Regression test added (isolates the dedup cache between streaming and non-streaming requests with identical bodies); the existing `response-cache.test.ts` expectation was inverted (it was codifying the broken behavior).
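The key-prefix fix can be sketched like this, assuming a SHA-256 hash over the JSON body; the key names follow the entry, everything else is illustrative:

```typescript
import { createHash } from "node:crypto";

// Capture the caller's streaming intent BEFORE the body is rewritten for the
// upstream call, and bake that intent into both cache keys.
function makeKeys(body: { stream?: boolean; [k: string]: unknown }) {
  const isStreaming = body.stream === true; // original intent
  body.stream = false; // upstream call is always non-streaming
  const hash = createHash("sha256").update(JSON.stringify(body)).digest("hex");
  const prefix = isStreaming ? "sse:" : "json:";
  return { dedupKey: prefix + hash, cacheKey: prefix + hash };
}

const a = makeKeys({ model: "openai/gpt-4o", stream: true });
const b = makeKeys({ model: "openai/gpt-4o", stream: false });
// Identical bodies, different stream intent -> distinct cache slots.
console.log(a.dedupKey !== b.dedupKey); // true
```

Without the prefix, both calls would hash the same post-rewrite body and collide, which is exactly the bug described above.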
- Video generation switched to async submit + poll (tracks BlockRun server commit 654cd35). The server-side `/v1/videos/generations` endpoint no longer blocks for the full 60–180s upstream generation — POST now returns `202 { id, poll_url }` in ~3–20s, and a separate GET on the `poll_url` (same x-payment header) returns `202` while the job is queued/in-progress and `200` with the final video on completion. The server settles only on the first completed poll, so upstream failure or caller disconnect = zero USDC charged. ClawRouter's proxy handler in `src/proxy.ts` now collapses this back into a single blocking POST for the client: submit upstream, poll the `poll_url` every 5s (initial 3s grace) up to a 5-min deadline, then back up + serve locally as before. Legacy sync-shaped server responses still work — the handler checks for `poll_url` before switching to the poll loop. Client-side timeouts bumped: `buildVideoGenerationProvider.timeoutMs` 200s → 330s; the `/videogen` slash command 200s → 330s; both sit above the 5-min internal poll deadline so the last `data[0].url` finishes streaming back. User-facing impact: same blocking POST as before, but Cloudflare's 100s edge timeout no longer kills long-running Seedance 2.0 jobs.
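The submit-then-poll collapse can be sketched as below, assuming the `202 { id, poll_url }` shape from the entry; the helper name and error handling are illustrative:

```typescript
// Collapse the async server protocol back into one blocking call for the client.
async function generateVideoBlocking(
  base: string,
  payload: unknown,
  headers: Record<string, string>,
): Promise<unknown> {
  const res = await fetch(`${base}/v1/videos/generations`, {
    method: "POST",
    headers: { ...headers, "content-type": "application/json" },
    body: JSON.stringify(payload),
  });
  const body = (await res.json()) as { poll_url?: string };
  // Legacy sync-shaped responses carry no poll_url — return them directly.
  if (res.status !== 202 || !body.poll_url) return body;

  const deadline = Date.now() + 5 * 60_000; // 5-min internal deadline
  await new Promise((r) => setTimeout(r, 3_000)); // initial grace period
  while (Date.now() < deadline) {
    const poll = await fetch(body.poll_url, { headers }); // same x-payment header
    if (poll.status === 200) return poll.json(); // job completed
    if (poll.status !== 202) throw new Error(`video job failed: ${poll.status}`);
    await new Promise((r) => setTimeout(r, 5_000)); // queued / in-progress
  }
  throw new Error("video generation timed out after 5 minutes");
}
```

The caller still sees a single long POST; only the proxy knows the server went async.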
- Image/video plumbing parity — four exposure surfaces now match the backend. The BlockRun server has supported 8 image models (DALL-E 3, GPT Image 1, Nano Banana / Pro, Flux 1.1 Pro, Grok Imagine / Pro, CogView-4) and 4 video models (Grok Imagine, Seedance 1.5 Pro / 2.0 Fast / 2.0) since v0.12.162, but the ClawRouter client exposed them inconsistently:
  - `buildImageGenerationProvider` in `src/index.ts` only advertised 4 image models. OpenClaw's native image picker couldn't see Flux, Grok Imagine (×2), or CogView-4 — the only way to hit them was raw curl with an explicit `model` field. The `models` array now lists all 8; `defaultModel` switched from `openai/gpt-image-1` to `google/nano-banana` (cheapest general-purpose default); `capabilities.geometry.sizes` adds CogView-4's 512x512, 768x768, 768x1344, 1344x768, and 1440x1440 sizes; `capabilities.edit.enabled` flipped to `true` so OpenClaw's edit UI surfaces gpt-image-1's `/v1/images/image2image` path.
  - `MODEL_ALIASES` in `src/models.ts` had zero image/video shortcuts; all 140+ aliases were LLM chat models. Added 17 new aliases so `resolveModelAlias("dalle")` → `openai/dall-e-3`, `"flux"` → `black-forest/flux-1.1-pro`, `"seedance"` → `bytedance/seedance-1.5-pro`, plus `banana`, `banana-pro`, `nano-banana-pro`, `gpt-image`, `flux-pro`, `grok-imagine`/`-pro`, `grok-video`, `cogview`, `seedance-1.5`, `seedance-2`, `seedance-2-fast`.
  - `/imagegen` and `/videogen` slash commands now actually exist. The README documented `/imagegen a dog dancing on the beach` as if it worked, but no such command was ever registered — silent drift from the aspirational README. Both commands now register via `api.registerCommand`, accept `--model=<alias>`, `--size=WxH`, `--n=<int>`, and `--duration=<5|8|10>` flags (parsed by a shared `parseGenArgs` helper), resolve aliases through `resolveModelAlias`, POST to the proxy's `/v1/images/generations` and `/v1/videos/generations` endpoints, and return inline markdown image links or video URLs. 402 responses surface as "top up with `/wallet`" hints; the video timeout is 200s to cover upstream polling. `/img2img` remains README-only for now — it will land in a follow-up.
  - Partner framework now includes image/video as LLM-callable tools. Added three new `PartnerServiceDefinition` entries in `src/partners/registry.ts` — `image_generation`, `image_edit`, `video_generation` — so the existing `buildPartnerTools` → `api.registerTool` pipeline surfaces them as `blockrun_image_generation`, `blockrun_image_edit`, `blockrun_video_generation` tools. Agents can now tool-call image/video from chat without the skill layer guessing at raw HTTP shapes.
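A sketch of what a `parseGenArgs`-style flag parser might look like for the flag set listed above; the real helper's signature may differ:

```typescript
interface GenArgs {
  prompt: string;
  model?: string;
  size?: string;
  n?: number;
  duration?: number;
}

// Split --key=value flags out of the command text; everything else is prompt.
function parseGenArgs(input: string): GenArgs {
  const args: GenArgs = { prompt: "" };
  const prompt: string[] = [];
  for (const t of input.trim().split(/\s+/)) {
    const m = t.match(/^--(model|size|n|duration)=(.+)$/);
    if (!m) { prompt.push(t); continue; }
    if (m[1] === "model") args.model = m[2];
    else if (m[1] === "size") args.size = m[2];
    else if (m[1] === "n") args.n = Number(m[2]);
    else args.duration = Number(m[2]);
  }
  args.prompt = prompt.join(" ");
  return args;
}

const parsed = parseGenArgs("a dog dancing --model=flux --size=1024x1024 --n=2");
// parsed.prompt === "a dog dancing", parsed.model === "flux", parsed.n === 2
```

The resolved `model` would then go through `resolveModelAlias` before the POST, as the entry describes.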
- Dropped the Twitter/X user-lookup partner. We no longer run X data as a product surface. Removed `x_users_lookup` from `PARTNER_SERVICES`, deleted the `skills/x-api/` skill directory, and stripped `x|` from the `/v1/(?:x|partner|pm|...)/` paid-route regex in `src/proxy.ts` (so `/v1/x/*` no longer short-circuits to the partner proxy — it now falls through to the usual chat-completion path or 404s cleanly). Server-side `/v1/x/*` endpoints are still live at blockrun.ai/api for any existing integrations; only the client wiring is retired.
- `/partners` + `clawrouter partners` CLI output compressed ~4×. Previously 6 lines per service (name, full agent-facing description, tool name, method, pricing block, blank) × 17 services ≈ 100 lines of wall-of-text, which @vicky called out as "读不了" ("unreadable"). `PartnerServiceDefinition` gained two fields — `category` ("Prediction markets" / "Market data" / "Image & Video") and `shortDescription` (≤ 40 chars) — driving a new grouped, column-aligned one-liner per tool. The long `description` field stays intact for the LLM-facing JSON Schema (agents still see "Call this ONLY when..." guidance). Output is now ~25 lines, one screen.
- README leads with the free tier. Post-v0.12.160 the product story changed — 8 NVIDIA models free forever, no wallet required to start — but the README still opened with "fund your wallet" as step 2 of Quick Start and buried the free tier in a single line at the bottom. Rewritten so the free tier is the hook, not a footnote: the hero tagline adds "8 models free, no crypto required. No signup. No API key. No credit card." plus a 🆓 shields.io badge; the "Why ClawRouter exists" list opens with "Starts at $0"; the comparison table adds a "Free tier" row showing ClawRouter's "8 models, no signup" against OpenRouter's rate limits and LiteLLM/Martian/Portkey's "no"; Quick Start gets a "No wallet? 8 models work free out of the box" callout and reframes step 2 as optional; the routing-profiles table adds `/model free` at 100% savings; the Costs section lists the current 8 free model IDs by name (replacing a stale 11-model list referencing the retired Nemotron Ultra / Mistral Large / Devstral). This release is README-only — code is identical to v0.12.162 — the version bump exists so the updated marketing reaches the npmjs.com package page and the clawhub marketplace listing.
- ByteDance Seedance video models wired into the client. The BlockRun server has exposed three Seedance models since late April — `bytedance/seedance-1.5-pro` ($0.03/sec), `bytedance/seedance-2.0-fast` ($0.15/sec, ~60–80s gen time), and `bytedance/seedance-2.0` Pro ($0.30/sec) — all 720p, text-to-video + image-to-video, 5s default and up to 10s. The `/v1/videos/generations` proxy passthrough in `src/proxy.ts` already forwarded any `model` value untouched, so actual USDC charges were always correct (the server dictates the amount in its 402 response and `payment-preauth.ts` caches the server-sent `PaymentRequired`, not a local estimate — charges never depended on ClawRouter's local pricing table). Three client-side gaps were fixed anyway:
  - Usage telemetry was wrong for Seedance. `estimateVideoCost` in `src/proxy.ts` only knew `xai/grok-imagine-video`, so every Seedance request logged `$0.42/clip` to `logUsage` regardless of what the user was actually billed — skewing `/usage` output, savings %, and journal cost fields. `VIDEO_PRICING` now carries all four models at real server rates.
  - OpenClaw's native video UI only saw one model. `buildVideoGenerationProvider` in `src/index.ts` advertised `models: ["xai/grok-imagine-video"]`, so users of the UI picker couldn't pick Seedance at all; the only path was raw curl with an explicit `model` field. The `models` array now lists all four, and provider capabilities widen to `maxDurationSeconds: 10` / `supportedDurationSeconds: [5, 8, 10]` to cover both vendors' ranges (the server still validates per-model `maxDurationSeconds`, so invalid combos return a clean 400).
  - README docs only mentioned Grok. The video-generation section now lists all four models in the table, swaps the curl example to `bytedance/seedance-2.0-fast` (the price/quality sweet spot), and makes the upstream-polling note vendor-neutral instead of xAI-specific.
- Docs: fixed proxy port in free-models guide. Thanks to @Bortlesboat (#160) for catching the `4402` → `8402` typos in `docs/11-free-ai-models-zero-cost-blockrun.md`. The rest of the repo, `src/config.ts` (`DEFAULT_PORT = 8402`), and all other docs have always said 8402; that one guide was sending new users to the wrong local port.
- De-Gemini the Anthropic-primary fallback chains. When Anthropic hiccups (503s, capacity), Gemini's own "high demand" 503s correlate with the same events — agents fall back from Claude to Gemini together, both overloaded. Reordered the `src/router/config.ts` fallback arrays in the two places Anthropic sits primary: `premiumTiers.COMPLEX` (claude-opus-4.7 primary) and `agenticTiers.COMPLEX` (claude-sonnet-4.6 primary). New order: in-family Anthropic hot swap (opus-4.6 / sonnet-4.6) → xAI Grok (independent infra, strong on complex + tool use) → Moonshot Kimi K2.6 / K2.5 (separate Moonshot infra) → OpenAI flagship (slow but reliable) → DeepSeek (cheap, reliable) → `free/qwen3-coder-480b` (NVIDIA free ultimate backstop). Gemini removed entirely from both chains. Other Anthropic-primary tiers (`premiumTiers.REASONING`, `agenticTiers.REASONING`) already had no Gemini and were not touched.
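The reordered chain has roughly this shape; model IDs not named in the entry (the exact Grok, OpenAI, and DeepSeek IDs) are assumptions for illustration:

```typescript
// Illustrative shape of the reordered premiumTiers.COMPLEX chain.
const premiumComplex = {
  primary: "anthropic/claude-opus-4.7",
  fallback: [
    "anthropic/claude-opus-4.6", // in-family hot swap
    "xai/grok-4",                // independent infra (assumed ID)
    "moonshot/kimi-k2.6",        // separate Moonshot infra
    "moonshot/kimi-k2.5",
    "openai/gpt-5.4",            // slow but reliable flagship (assumed ID)
    "deepseek/deepseek-v3.2",    // cheap, reliable (assumed ID)
    "free/qwen3-coder-480b",     // NVIDIA free ultimate backstop
  ],
};
```

The point of the ordering is failure-domain diversity: each hop moves to infrastructure less likely to be overloaded by the same event that took down the previous one.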
- Free-tier catalog realigned with the BlockRun server (13 → 8 NVIDIA free models). BlockRun retired five NVIDIA free models on 2026-04-21 (`nemotron-ultra-253b`, `nemotron-3-super-120b`, `nemotron-super-49b`, `mistral-large-3-675b`, `devstral-2-123b`) and introduced two new ones benchmark-validated at 114–116 tok/s (`qwen3-next-80b-a3b-thinking` — fastest free reasoning; `mistral-small-4-119b` — fastest free chat). ClawRouter now exposes the same 8 visible free models: `gpt-oss-120b`, `gpt-oss-20b`, `deepseek-v3.2`, `qwen3-coder-480b`, `glm-4.7`, `llama-4-maverick`, `qwen3-next-80b-a3b-thinking`, `mistral-small-4-119b`. Retired IDs still resolve locally via `MODEL_ALIASES` redirects to successors (`free/nemotron-*` → `free/qwen3-next-80b-a3b-thinking`, `free/mistral-large-3-675b` → `free/mistral-small-4-119b`, `free/devstral-2-123b` → `free/qwen3-coder-480b`), matching server-side behavior so stale user configs keep working. Touched: `BLOCKRUN_MODELS` + `MODEL_ALIASES` in `src/models.ts`, the `FREE_MODELS` set in `src/proxy.ts`, the free-model list in the `src/index.ts` picker, the `MODEL_PRICING` fixture in `src/router/strategy.test.ts`, `scripts/update.sh` + `scripts/reinstall.sh` `TOP_MODELS` + slash-command help, the README Budget Models pricing table + Free tier note, and the `skills/clawrouter/SKILL.md` description + Available Models section.
- Kimi K2.5 routing inverted: Moonshot direct is now primary. NVIDIA-hosted `nvidia/kimi-k2.5` was retired 2026-04-21 (slow throughput) and redirects server-side to `moonshot/kimi-k2.5`. ClawRouter mirrors this: `moonshot/kimi-k2.5` is the primary entry (no deprecation flag, full 16K output); `nvidia/kimi-k2.5` is retained but marked `deprecated: true` with `fallbackModel: "moonshot/kimi-k2.5"`. The aliases `kimi` / `moonshot` / `kimi-k2.5` / `nvidia/kimi-k2.5` all resolve to `moonshot/kimi-k2.5`. Router tier configs in `src/router/config.ts` (auto + premium + agentic profiles, 7 occurrences) updated to point at the Moonshot variant.
- Market data tools — the BlockRun gateway now exposes realtime and historical market data; ClawRouter wires them into OpenClaw as 6 first-class agent tools so the model stops scraping finance sites. Paid ($0.001 via x402, same wallet as LLM calls): `blockrun_stock_price` and `blockrun_stock_history` across 12 global equity markets (US, HK, JP, KR, UK, DE, FR, NL, IE, LU, CN, CA). Free (no x402 charge): `blockrun_stock_list` (ticker lookup / company-name search), `blockrun_crypto_price` (BTC-USD, ETH-USD, SOL-USD, …), `blockrun_fx_price` (EUR-USD, GBP-USD, JPY-USD, …), `blockrun_commodity_price` (XAU-USD gold, XAG-USD silver, XPT-USD platinum). Tool schemas advertise market codes, session hints (pre/post/on), and bar resolutions (1/5/15/60/240/D/W/M). Path routing extended: the partner-proxy whitelist in `src/proxy.ts` now matches `/v1/(?:x|partner|pm|exa|modal|stocks|usstock|crypto|fx|commodity)/`, routing all new paths through `proxyPaidApiRequest` (`payFetch` handles the 402 when present, passes through 200 for free categories). Tool definitions added in `src/partners/registry.ts`; `skills/clawrouter/SKILL.md` gains a "Built-in Agent Tools" section listing market data + X intelligence + Polymarket alongside the LLM router.
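The extended whitelist behaves like this sketch; the real regex in `src/proxy.ts` may differ in anchoring and detail:

```typescript
// Paid/partner route matcher: these path prefixes go through the partner
// proxy (proxyPaidApiRequest) instead of the chat-completion path.
const PAID_ROUTE = /^\/v1\/(?:x|partner|pm|exa|modal|stocks|usstock|crypto|fx|commodity)\//;

console.log(PAID_ROUTE.test("/v1/stocks/price"));     // true  -> partner proxy
console.log(PAID_ROUTE.test("/v1/crypto/BTC-USD"));   // true  -> partner proxy
console.log(PAID_ROUTE.test("/v1/chat/completions")); // false -> normal path
```

Anything the regex does not match falls through to the usual LLM routing, which is why extending the alternation is the whole routing change.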
- SKILL.md data-flow + key-storage transparency — second-pass fix for the OpenClaw scanner on clawhub.ai. After v0.12.157 cleared the original scanner concerns (opaque credentials, implied multi-provider keys, no install artifact), a deeper rescan surfaced three new, more nuanced flags: (1) prompts going to blockrun.ai is a data-privacy risk not obvious from a "local router" framing, (2) the wallet private-key storage location/encryption was undocumented, (3) users may expect strictly-local routing. All three addressed: (a) the description frontmatter and body lead are reframed as "Hosted-gateway LLM router" + "This is not a local-inference tool", with an explicit Ollama pointer for users who need local-only; (b) a new Data Flow section with an ASCII diagram, enumerated sent/not-sent lists, and a link to https://blockrun.ai/privacy; (c) a new Credentials & Local Key Storage section documenting config file locations per OS (`~/.config/openclaw`, `~/Library/Application Support/openclaw`, `%APPDATA%\openclaw`), `0600` POSIX permissions, plaintext storage parity with other OpenClaw provider keys, encryption guidance (FileVault/LUKS/BitLocker or a burner wallet), and a `src/wallet.ts` source pointer for key-derivation auditing; (d) a new Supply-Chain Integrity section with `npm pack` verification instructions and the tagged-release invariant from the release checklist.
- SKILL.md credential transparency — rewrote `skills/clawrouter/SKILL.md` to clear the OpenClaw scanner's medium-confidence suspicious verdict on clawhub.ai. The frontmatter now declares `repository: https://github.com/BlockRunAI/ClawRouter`, `license: MIT`, and a structured `metadata.openclaw.install` array (`kind: node`, `package: @blockrun/clawrouter`, `bins: [clawrouter]`) so the registry entry has an auditable install artifact instead of a bare bash block. The body adds a Credentials & Data Handling section fully enumerating what `models.providers.blockrun` stores (`walletKey`/`solanaKey` — auto-generated locally, never transmitted; `gateway`/`routing` — non-sensitive), and explicitly states the plugin does not collect or forward third-party provider API keys (OpenAI/Anthropic/Google/DeepSeek/xAI/NVIDIA) — the blockrun.ai gateway owns those relationships and routes on the server side. Addresses the three scanner flags (opaque credential declaration, implied multi-provider credential collection, no install artifact for review) raised against v0.12.156 on https://clawhub.ai/1bcmax/clawrouter.
- Kimi K2.6 added — Moonshot's new flagship (`moonshot/kimi-k2.6`, 256K context, vision + reasoning, $0.95 in / $4.00 out per 1M) registered in `BLOCKRUN_MODELS` with a `kimi-k2.6` alias. Added to the curated `/model` picker list (`src/index.ts`, `scripts/update.sh`, `scripts/reinstall.sh`), the README pricing table, `docs/routing-profiles.md`, and the AI-agent-facing model catalog in `skills/clawrouter/SKILL.md`. The premium routing tier (`blockrun/premium`) now uses K2.6 as the SIMPLE primary and as a fallback in MEDIUM/COMPLEX, with `nvidia/kimi-k2.5` retained as the first fallback for reliability. The generic `kimi` / `moonshot` aliases still resolve to `nvidia/kimi-k2.5` (matching the BlockRun server's `blockrun/kimi` stance); users opt in to K2.6 explicitly via `kimi-k2.6` or `blockrun/premium`.
- GitHub restored as canonical source — the BlockRunAI GitHub org is back. `package.json` `repository.url`, README badges, the CONTRIBUTING clone URL, `openclaw.security.json`, all docs (`anthropic-*`, `clawrouter-cuts-*`, `clawrouter-vs-openrouter`, `11-free-ai-models`, `llm-router-benchmark-*`, `smart-llm-router-14-dimension-classifier`, `subscription-failover`, `troubleshooting`), `skills/release/SKILL.md`, and the `sse-error-format` regression-test comment now point at github.com/BlockRunAI/ClawRouter. The GitLab mirror (gitlab.com/blockrunai/ClawRouter) is kept as a secondary remote for redundancy but is no longer advertised. Metadata + docs only; no runtime/code changes.
- Docs: video generation endpoint — the README now documents `POST /v1/videos/generations` with `xai/grok-imagine-video` ($0.05/sec, 8s default). The proxy handler, cost estimator (`estimateVideoCost`), and local-file download path were already in place in `proxy.ts`; only the README was missing.
- Docs: Grok Imagine image models — the README image table now includes `xai/grok-imagine-image` ($0.02) and `xai/grok-imagine-image-pro` ($0.07), already wired into the image pricing map.
- Claude Opus 4.7 flagship — the BlockRun API has promoted `anthropic/claude-opus-4.7` to flagship (1M context, 128K output, adaptive thinking; $5/$25 per 1M tokens). Added to `BLOCKRUN_MODELS`, now the primary for the COMPLEX routing tier across the default/premium profiles and the new cost-savings `BASELINE_MODEL_ID`. Aliases: `opus`, `opus-4`, `anthropic/opus`, `anthropic/claude-opus-4`, and `anthropic/claude-opus-4.5` now resolve to 4.7. Explicit 4.6 pins (`opus-4.6`, `anthropic/claude-opus-4-6`) still route to 4.6, which the server keeps available. Opus 4.7 is also added to the curated `TOP_MODELS` picker list and the `doctor` command. Opus 4.6 ClawRouter metadata updated to match server specs (1M/128K; was stale at 200K/32K).
- Repository URL fixed — `package.json` `repository.url` now points at gitlab.com/blockrunai/ClawRouter. The previous value (github.com/BlockRunAI/ClawRouter) has been dead since the GitHub org was banned 2026-04-15. Metadata-only bump; no code changes.
- Stop bundling blockrun-mcp — ClawRouter no longer auto-injects `mcp.servers.blockrun` into `~/.openclaw/openclaw.json`. The `npx -y @blockrun/mcp@latest` spawns were leaking shell-wrapper + node grandchildren processes on the host (see reports of 70+ orphaned processes accumulating). Removal of the injection call is matched by a one-shot migration that strips any previously managed `mcp.servers.blockrun` entry the next time the gateway starts. User-defined `blockrun` MCP entries are preserved. Restart your gateway after upgrading to free any already-leaked processes. Users who still want the MCP bridge can opt in manually: `openclaw mcp add blockrun npx -y @blockrun/mcp@latest`.
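The one-shot migration might look like the sketch below; the `managedBy` marker used to distinguish injected entries from user-defined ones is an assumption, not the real field name:

```typescript
interface OpenClawConfig {
  mcp?: { servers?: Record<string, { managedBy?: string; [k: string]: unknown }> };
}

// Strip a previously ClawRouter-injected mcp.servers.blockrun entry while
// preserving any user-defined server that happens to share the name.
function stripManagedBlockrunMcp(config: OpenClawConfig): OpenClawConfig {
  const entry = config.mcp?.servers?.["blockrun"];
  if (entry && entry.managedBy === "clawrouter") {
    delete config.mcp!.servers!["blockrun"];
  }
  return config;
}
```

Running this once at gateway start gives exactly the described behavior: injected entries disappear, user-defined ones survive.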
- Predexon tools registered — 8 Predexon endpoints now registered as real OpenClaw tools (`blockrun_predexon_events`, `blockrun_predexon_leaderboard`, `blockrun_predexon_markets`, `blockrun_predexon_smart_money`, `blockrun_predexon_smart_activity`, `blockrun_predexon_wallet`, `blockrun_predexon_wallet_pnl`, `blockrun_predexon_matching_markets`). Agents now call these directly instead of falling back to browser scraping.
- Partner tools GET support — The `tools.ts` execute function now handles GET endpoints with query params and path-param substitution (`:wallet`, `:condition_id`, etc.).
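The path-param substitution described above can be sketched as follows. This is an illustrative helper, not ClawRouter's actual `tools.ts` code; the function name and argument shapes are assumptions:

```typescript
// Hypothetical sketch of GET path-param substitution for partner tools.
// `:name` segments in the endpoint template are replaced with URL-encoded
// values from the args object; any args not consumed by the path become
// query parameters.
function buildGetUrl(template: string, args: Record<string, string>): string {
  const used = new Set<string>();
  const path = template.replace(/:([A-Za-z_]+)/g, (_match, name: string) => {
    used.add(name);
    return encodeURIComponent(args[name] ?? "");
  });
  const query = new URLSearchParams();
  for (const [key, value] of Object.entries(args)) {
    if (!used.has(key)) query.set(key, value);
  }
  const qs = query.toString();
  return qs ? `${path}?${qs}` : path;
}
```

With a template like `/pm/wallet/:wallet/pnl` and args `{ wallet: "0xabc", limit: "10" }`, this yields `/pm/wallet/0xabc/pnl?limit=10`.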
- Skill priority fix — `predexon` and `x-api` skills now explicitly instruct the agent not to use browser/web_fetch for these data sources, ensuring the structured API is always used over scraping.
- Predexon skill — New vendor skill ships with ClawRouter: 39 prediction market endpoints (Polymarket, Kalshi, dFlow, Binance, cross-market matching, wallet analytics, smart money). OpenClaw agents now auto-invoke this skill when users ask about prediction markets, market odds, or smart money positioning.
- Partner proxy extended — `/v1/pm/*` paths now route through ClawRouter's partner proxy (same as `/v1/x/*`), enabling automatic x402 payment for all Predexon endpoints via `localhost:8402`.
- Free model cost logging — Usage stats incorrectly showed non-zero cost for free models (e.g. `free/gpt-oss-120b` showed $0.001 per request due to the `MIN_PAYMENT_USD` floor in `calculateModelCost`). Free models now log `cost: $0.00` and `savings: 100%`, accurately reflecting that no payment is made.
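The fix above amounts to applying the minimum-payment floor only to paid models. A minimal sketch (constant and signature assumed, not ClawRouter's exact `calculateModelCost`):

```typescript
// Assumed floor from the entry above; real value lives in ClawRouter config.
const MIN_PAYMENT_USD = 0.001;

// Free models bypass the floor entirely so logged cost is $0.00;
// paid models are still clamped up to the minimum payment.
function calculateModelCost(isFree: boolean, rawCostUsd: number): number {
  if (isFree) return 0;
  return Math.max(rawCostUsd, MIN_PAYMENT_USD);
}
```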
- `/doctor` checks correct chain balance — Previously always checked Base (EVM), showing $0.00 for Solana-funded wallets. Now calls `resolvePaymentChain()` and uses `SolanaBalanceMonitor` when on Solana. Shows the active chain label and hints to run `/wallet solana` if the balance is empty on Base.
- Strip thinking tokens from non-streaming responses — Free models leaked `<think>...</think>` blocks in non-streaming responses. `stripThinkingTokens()` was only applied in the streaming path; it now also runs on non-streaming JSON responses.
- Preserve OpenClaw channels on install/update — `reinstall.sh` and `update.sh` now back up `~/.openclaw/credentials/` before `openclaw plugins install` and always restore it afterwards, preventing WhatsApp/Telegram channel disappearance.
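The `stripThinkingTokens()` entry above can be illustrated with a regex-based sketch (the function name comes from the changelog; the body is an assumption):

```typescript
// Illustrative sketch: remove <think>...</think> blocks (and trailing
// whitespace) from a completed, non-streaming response body. Non-greedy
// match so multiple blocks are each stripped individually.
function stripThinkingTokens(text: string): string {
  return text.replace(/<think>[\s\S]*?<\/think>\s*/g, "");
}
```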
- Blog section in README — 6 blog posts linked from the repo, including "11 Free AI Models, Zero Cost".
- BRCC ecosystem block — Replaced SocialClaw with BRCC (BlockRun for Claude Code) in the README ecosystem section.
- `blockrun.ai/brcc-install` short link — Redirect for the BRCC install script.
- 11 free models — GPT-OSS 20B/120B, Nemotron Ultra 253B, Nemotron Super 49B/120B, DeepSeek V3.2, Mistral Large 3, Qwen3 Coder 480B, Devstral 2 123B, GLM 4.7, Llama 4 Maverick. All free, no wallet balance needed.
- `/model free` alias — Points to nemotron-ultra-253b (the strongest free model). All 11 free models are individually selectable via the `/model` picker.
- New model aliases — `nemotron`, `devstral`, `qwen-coder`, `maverick`, `deepseek-free`, `mistral-free`, `glm-free`, `llama-free`, and more (16 total).
- Skills not found by OpenClaw agents — Auto-copies bundled skills (imagegen, x-api, clawrouter) to `~/.openclaw/workspace/skills/` on plugin registration. Fixes `ENOENT` errors when agents invoke `/imagegen`.
- Internal `release` skill excluded — No longer installed to user workspaces.
- Sync package-lock.json
- Skills not found by OpenClaw agents — Agents tried to read skill files (imagegen, x-api, etc.) from `~/.openclaw/workspace/skills/`, but ClawRouter only bundled them inside the npm package. Now auto-copies all user-facing bundled skills into the workspace directory on plugin registration. Supports `OPENCLAW_PROFILE` for multi-profile setups. Only updates when content changes. Fixes `ENOENT: no such file or directory` errors when agents invoke `/imagegen`.
- Internal `release` skill excluded — The release checklist skill is for ClawRouter maintainers only and is no longer installed to user workspaces.
- Sync package-lock.json — The lock file was stuck at v0.12.69; it now matches package.json.
- Plugin crash on string model config — ClawRouter crashed during OpenClaw plugin registration with `TypeError: Cannot create property 'primary' on string 'blockrun/auto'`. This happened when `agents.defaults.model` in the OpenClaw config was a plain string (e.g. `"blockrun/auto"`) instead of the expected object `{ primary: "blockrun/auto" }`. Now auto-converts string/array/non-object model values to the correct object form.
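The auto-conversion above can be sketched like this. The function name, default, and array handling are illustrative assumptions; only the string-to-object coercion is stated in the entry:

```typescript
type ModelConfig = { primary: string; [key: string]: unknown };

// Coerce agents.defaults.model into { primary: string } form so later code
// can safely assign properties on it (the original crash was assigning
// .primary onto a plain string).
function normalizeModelConfig(value: unknown): ModelConfig {
  if (typeof value === "string") return { primary: value };
  if (Array.isArray(value)) return { primary: String(value[0] ?? "blockrun/auto") };
  if (value && typeof value === "object" && "primary" in value) {
    return value as ModelConfig;
  }
  // Fallback default is a guess for illustration.
  return { primary: "blockrun/auto" };
}
```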
- Config duplication on update — `update.sh` and `reinstall.sh` accumulated stale `blockrun/*` model entries in `openclaw.json` on every update because only 2 hardcoded deprecated models were removed. Now performs a full reconciliation: removes any `blockrun/*` entries not in the current `TOP_MODELS` list before adding new ones. Non-blockrun entries are untouched.
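The reconciliation rule above — drop stale `blockrun/*` entries, keep everything else, then add the current set — can be sketched as (helper name and list shapes assumed):

```typescript
// Keep non-blockrun entries untouched; keep blockrun/* entries only if they
// are in the current TOP_MODELS list; append any missing current models.
function reconcileAllowlist(current: string[], topModels: string[]): string[] {
  const kept = current.filter(
    (model) => !model.startsWith("blockrun/") || topModels.includes(model),
  );
  for (const model of topModels) {
    if (!kept.includes(model)) kept.push(model);
  }
  return kept;
}
```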
- OpenClaw skills registration — Added `"skills": ["./skills"]` to `openclaw.plugin.json` so OpenClaw actually loads bundled skills (the entry was missing, so skills were never active).
- imagegen skill — New `skills/imagegen/SKILL.md`: teaches Claude to generate images via `POST /v1/images/generations`, with a model selection table (nano-banana, banana-pro, dall-e-3, flux), size options, and example interactions.
- x-api skill — New `skills/x-api/SKILL.md`: teaches Claude to look up X/Twitter user profiles via `POST /v1/x/users/lookup`, with a pricing table, response schema, and example interactions.
- Image generation docs — New `docs/image-generation.md` with API reference, curl/TypeScript/Python/OpenAI SDK examples, a model pricing table, and `/imagegen` command reference.
- Comprehensive docs refresh — Architecture updated for dual-chain (Base + Solana), configuration updated with all env vars (`CLAWROUTER_SOLANA_RPC_URL`, `CLAWROUTER_WORKER`), troubleshooting updated for USDC-on-Solana funding, CHANGELOG backfilled for v0.11.14–v0.12.24.
- Preserve user-defined blockrun/* allowlist entries — `injectModelsConfig()` no longer removes user-added `blockrun/*` allowlist entries on gateway restarts.
- `/chain` command — Persist payment chain selection (Base or Solana) across restarts via `/chain solana` or `/chain base`.
- Update nudge improved — Now shows `npx @blockrun/clawrouter@latest` instead of `curl | bash`.
- Zero balance cache fix — Funded wallets are detected immediately (a zero balance is never cached).
- `wallet recover` command — Restore `wallet.key` from a BIP-39 mnemonic on a new machine.
- Solana balance retry — Retries once on an empty result to handle flaky public RPC endpoints.
- Balance cache invalidated at startup — Prevents false free-model fallback after a fresh install.
- openai/ prefix routing fix — Virtual profiles (`blockrun/auto`, etc.) now handle the `openai/` prefix injected by some clients.
- Body-read timeout increased — 5-minute timeout for slow reasoning models prevents proxy hangs.
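The prefix handling above reduces to stripping a client-injected `openai/` segment before profile lookup. A minimal sketch with an assumed helper name:

```typescript
// Hypothetical sketch: some OpenAI-compatible clients prepend "openai/" to
// every model id; strip it so "openai/blockrun/auto" still resolves to the
// "blockrun/auto" virtual profile.
function resolveProfileId(model: string): string {
  const prefix = "openai/";
  return model.startsWith(prefix) ? model.slice(prefix.length) : model;
}
```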
- Server-side update nudge — 429 responses from BlockRun now surface update hints when running an outdated ClawRouter version
- Body-read timeout — prevents proxy from hanging on stalled upstream streams
- @solana/kit version fix — Pinned to `^5.0.0` to resolve a cross-version signing bug causing `transaction_simulation_failed` (#74).
- `/stats clear` command — Reset usage statistics.
- Gemini 3 models excluded from tool-heavy routing (#73)
- GPT-5.4 and GPT-5.4 Pro — added to model catalog
- Force agentic tiers on tool presence — Requests with a `tools` array always route to agentic-capable models.
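The rule above is a simple pre-filter on the candidate list. A sketch (the `toolCalling` flag name is borrowed from a later entry in this changelog; the shapes are otherwise assumed):

```typescript
type Model = { id: string; toolCalling: boolean };

// If the request carries a non-empty tools array, restrict routing to
// models flagged as tool-calling capable; otherwise all candidates remain.
function candidateModels(models: Model[], hasTools: boolean): Model[] {
  return hasTools ? models.filter((m) => m.toolCalling) : models;
}
```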
- Solana sweep fix — correctly attaches signers to sweep transaction message (#70)
- Multi-account sweep — correctly handles partial reads and JSONL resilience in sweep migration
- SPL Token Program ID fix — corrected in Solana sweep transaction
Full Solana chain support. Pay with USDC on Solana (not SOL) alongside Base (EVM).
- SLIP-10 Ed25519 derivation — Solana wallet uses BIP-44 path `m/44'/501'/0'/0'`, compatible with Phantom and other wallets (#69).
- `SolanaBalanceMonitor` — Reads the SPL Token USDC balance; `proxy.ts` selects the EVM or Solana monitor based on the active chain.
- Solana address shown in `/wallet` — Displays both EVM (0x...) and Solana (base58) addresses.
- Health endpoint — Returns the Solana address alongside the EVM address.
- Pre-auth cache skipped for Solana — prevents double payment on Solana chain
- Startup balance uses chain-aware monitor — fixes EVM-only startup log when Solana is active
- Chain-aware proxy reuse — validates payment chain matches on EADDRINUSE path
- `ethers` peer dep — Added for `@x402/evm` via SIWE compatibility.
- Free model fallback notification — Notifies the user when routing to `gpt-oss-120b` due to insufficient USDC balance.
- Input token logging — Usage logs now include `inputTokens` from provider responses.
- Gemini 3.x in allowlist — replaced Gemini 2.5 with Gemini 3.1 Pro and Gemini 3 Flash Preview
- Top 16 model allowlist — Trimmed from 88 to 16 curated models in the `/model` picker (4 routing profiles + 12 popular models).
- Populate model allowlist — Populate `agents.defaults.models` with BlockRun models so they appear in the `/model` picker.
- Auto-fix broken allowlist — `injectModelsConfig()` detects and removes a blockrun-only allowlist on every gateway start.
- Allowlist cleanup in reinstall.sh — detect and remove blockrun-only allowlist that hid all other models
- `clawrouter report` command — Daily/weekly/monthly usage reports via `npx @blockrun/clawrouter report`.
- `clawrouter doctor` command — AI diagnostics for troubleshooting.
- catbox.moe image hosting — `/imagegen` uploads base64 data URIs to catbox.moe (replaces broken telegra.ph).
- Image upload for Telegram — base64 data URIs from Google image models converted to hosted URLs
- Output raw image URL — `/imagegen` returns a plain URL instead of markdown syntax for Telegram compatibility.
Session-level repetition detection: 3 consecutive identical request hashes auto-escalate to the next tier (SIMPLE → MEDIUM → COMPLEX → REASONING). Fixes Kimi K2.5 agentic loop problem without manual model switching.
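The escalation logic above can be sketched as a small per-session counter. Class and method names are illustrative, not ClawRouter's actual implementation:

```typescript
const TIERS = ["SIMPLE", "MEDIUM", "COMPLEX", "REASONING"] as const;
type Tier = (typeof TIERS)[number];

// Tracks consecutive identical request hashes for one session; on the
// third repeat, escalates to the next tier (capped at REASONING) and
// resets the counter.
class RepetitionDetector {
  private lastHash = "";
  private count = 0;

  nextTier(requestHash: string, current: Tier): Tier {
    this.count = requestHash === this.lastHash ? this.count + 1 : 1;
    this.lastHash = requestHash;
    if (this.count >= 3) {
      this.count = 0;
      const index = TIERS.indexOf(current);
      return TIERS[Math.min(index + 1, TIERS.length - 1)];
    }
    return current;
  }
}
```

Because identical hashes usually mean the agent is stuck in a loop (the Kimi K2.5 case above), bumping one tier trades a little cost for breaking the loop without user intervention.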
Generate images from chat. Calls BlockRun's image generation API with x402 micropayments.
/imagegen a cat wearing sunglasses
/imagegen --model dall-e-3 a futuristic city
/imagegen --model banana-pro --size 2048x2048 landscape
| Model | Shorthand | Price |
|---|---|---|
| Google Nano Banana (default) | `nano-banana` | $0.05/image |
| Google Nano Banana Pro | `banana-pro` | $0.10/image (up to 4K) |
| OpenAI DALL-E 3 | `dall-e-3` | $0.04/image |
| OpenAI GPT Image 1 | `gpt-image` | $0.02/image |
| Black Forest Flux 1.1 Pro | `flux` | $0.04/image |
- Stop hijacking model picker — Removed allowlist injection that hid non-BlockRun models from the `/model` picker.
- Silent fallback to free model — Insufficient funds now skips the remaining paid models and jumps to the free tier instead of showing payment errors.
- Anthropic array content extraction — Routing now handles the `[{type:"text", text:"..."}]` content format (was extracting an empty string).
- Session startup bias fix — Never-downgrade logic: sessions can upgrade tiers but won't lock to the low-complexity startup-message tier.
- Session re-pins to fallback — After a provider failure, the session updates to the actual model that responded instead of retrying the failing primary every turn.
- `/debug` command — Type `/debug <prompt>` to see routing diagnostics (tier, model, scores, session state) with zero API cost.
- Tool-calling model filter — Requests with tool schemas skip incompatible models automatically.
- Session persistence enabled by default — `deriveSessionId()` hashes the first user message; the model stays pinned for 30 minutes without client headers.
- baselineCost fix — Hardcoded Opus 4.6 fallback pricing so the savings metric always calculates correctly.
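The `deriveSessionId()` behavior above can be sketched as hashing the first user message into a stable id; the hash algorithm and truncation length here are assumptions for illustration:

```typescript
import { createHash } from "node:crypto";

// Deterministic session id from the first user message: the same opening
// message maps to the same session, so the pinned model survives across
// requests even when clients send no session headers.
function deriveSessionId(firstUserMessage: string): string {
  return createHash("sha256").update(firstUserMessage).digest("hex").slice(0, 16);
}
```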
- Tool call leaking fix — Removed `grok-code-fast-1` from all routing paths (it was outputting tool invocations as plain text).
- Systematic tool-calling guard — `toolCalling` flag on models; incompatible models are filtered from fallback chains.
- Async plugin fix — `register()` made synchronous; OpenClaw was silently skipping initialization.
- Agentic mode false trigger — `agenticScore` now scores the user prompt only, not the system prompt. Coding-assistant system prompts no longer force all requests to Sonnet.
- OpenClaw tool API contract — Fixed `inputSchema` → `parameters`, `execute(args)` → `execute(toolCallId, params)`, and the return format.
- Partner tool trigger reliability — directive tool description so AI calls the tool instead of answering from memory
- Baseline cost fix — `BASELINE_MODEL_ID` corrected from `claude-opus-4-5` to `claude-opus-4.6`.
- Wallet corruption safety — Corrupted wallet files now throw with recovery instructions instead of silently generating a new wallet.
- 9-language router — added ES, PT, KO, AR keywords across all 12 scoring dimensions (was 5 languages)
- Claude 4.6 — all Claude models updated to newest Sonnet 4.6 / Opus 4.6
- 7 new models — total 41 (Gemini 3.1 Pro Preview, Gemini 2.5 Flash Lite, o1, o1-mini, gpt-4.1-nano, grok-2-vision)
- 5 pricing fixes — 15-30% better routing from corrected model costs
- 67% cheaper ECO tier — Flash Lite for MEDIUM/COMPLEX