Cut your Claude Code bill by 50%.
git-status, but it tells you what to do next.
Saves tokens. Saves money. Saves turns. Works the same in interactive sessions and autonomous runs — humans pair-programming with Claude Code use it every day, not just Kevin-style headless agents. One Python file, zero deps, Python 3.9+.
Why • Four pillars • Receipt • Batching • Parallel • Expand it • Install
```shell
# 7 ops, 1 round-trip, parallel where safe
supertool 'read:src/Module.py' 'read:src/Auth.py' 'grep:TODO:src/:20' 'map:src/'
```

Hammer in 2026. Claude Code's default toolbelt is 1995 unix: cat one file, grep one pattern, git status returns 200 bytes of porcelain. Every tool call re-sends the entire conversation cache — system prompt, CLAUDE.md, rules, every prior turn — at 10% of input price. Read 7 files? Pay that prefix 7 times. Run git status then realize you needed ahead/behind too? Pay it twice for one decision. The bill compounds turn over turn.
Drill in 2026. supertool gives the agent variants that pack the next question into the current call:
- `git-status` — branch + tracking + ahead/behind + dirty files + open MR/PR + suggested next step. One call, decision ready.
- `gl-mr:NUMBER` / `gh-pr:NUMBER` — full MR/PR dashboard: branch, pipeline, reviewer, approval, diff stat, comments. Replaces 4-5 `glab`/`gh` calls.
- `claude-log-summary:UUID` — model, duration, tool calls, tokens, cache hit %, errors-by-tool. Audit your own runs.
That's a sample. supertool ships ~40 ops out of the box (built-ins + gitlab / github / git / claude-log presets) — add your own and you're past 60 fast.
The variant is the lever. A turn saved isn't free time — it's a cached prefix you didn't re-pay.
| Pillar | What it does |
|---|---|
| Right tool | Variants pack state + guards + next-step into one call. Less to remember. |
| Batched | 7 ops, 1 round-trip. The cached prefix gets re-paid once, not seven times. |
| Parallel | Read-only ops in a batch run concurrently — ~3-5× faster on cold I/O. |
| Expandable | Add a custom op in 4 lines of JSON. Presets ship gitlab, github, git, claude-log. |
| Mode | Cache reads | Output | Turns | Savings |
|---|---|---|---|---|
| Hammer (no batching) | 436K | 1,400 | 10 | — |
| supertool | 133K | 750 | 3 | 50% |
| Pre-computed + supertool | 85.5K | 600 | 2 | 56% |
50% fewer tokens, 3-4× faster wall time. Fewer turns = fewer prefix re-reads. Multiply by task count and team size — the bill cut is real.
Three things happen once you ship variants instead of raw shell:
1. You build your own ops. Digital Process Tools built a stack on top — none ship with supertool, all written in 5-15 lines of JSON: git-commit (stage + commit + receipt), mr (push + MR + reviewer), mysql_read/mysql_write, verify_staged (phpstan + phpmd + phplint on the staged diff). Every project has its own "what's the next question I always ask" — bake the answer in, save the round-trip forever.
2. The op holds the guards. mysql_write refuses UPDATE/DELETE without WHERE. mysql_read auto-LIMIT 50s. mr can enforce branch policy and reviewer. Every guard is a class of mistake the agent can't make. Tokens saved, yes — but the session that didn't get derailed cleaning up "oops, emptied the user table" is the expensive one.
3. The agent thinks less. A variant that returns everything in one shot is a variant the agent doesn't have to think through. Thinking tokens bill at output rate. Every "let me also check..." that becomes "the op already told me" is output cost saved on top of round-trip cost.
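The WHERE-clause guard idea can be sketched in a few lines. This is a hypothetical illustration of the pattern, not the actual `mysql_write` implementation mentioned above:

```python
import re

def guard_mysql_write(sql: str) -> str:
    """Refuse UPDATE/DELETE statements that lack a WHERE clause.

    Hypothetical sketch of the guard idea: the op, not the model,
    holds the rule, so the mistake class is structurally impossible.
    """
    stripped = sql.strip()
    verb = stripped.split(None, 1)[0].upper() if stripped else ""
    if verb in ("UPDATE", "DELETE") and not re.search(r"\bWHERE\b", sql, re.IGNORECASE):
        raise ValueError(f"refusing {verb} without WHERE: add a WHERE clause")
    return sql
```

The point is that the rejection message comes back as op output, so the agent corrects itself in the next turn instead of derailing the session.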
From the DPT marketplace:
```
/plugin marketplace add Digital-Process-Tools/claude-marketplace
/plugin install supertool@dpt-plugins
```
This auto-registers both hooks (SessionStart + PreToolUse) via the plugin's hooks/hooks.json — no manual settings.json editing.
Or directly — clone the repo and symlink supertool.py onto your $PATH as supertool:
```shell
git clone https://github.com/Digital-Process-Tools/claude-supertool.git
ln -s "$(pwd)/claude-supertool/supertool.py" /usr/local/bin/supertool
chmod +x /usr/local/bin/supertool
```

Verify:

```shell
supertool 'read:README.md'
```

Standalone install doesn't wire up the hooks (no plugin system). You get the binary; the enforcement mode and session-start prompt come with the marketplace install.
Just install. The session-start hook runs ./supertool 'introduction' 'output-format' 'ops-compact' to output the project-specific operations reference from .supertool.json. The model learns what's available and how to batch. Falls back to native Grep/Read when those are better.
Heads-up — hook output cap. Claude Code truncates hook stdout around 7KB; over that, only a ~2KB preview reaches the model and the rest is silently saved to disk. With many ops, the tail of the listing gets hidden until rediscovered mid-task.
The session-start hook uses `ops-compact` to stay under the cap: examples are dropped on self-explanatory ops, and only kept on ops marked `"hint": true` in `.supertool.json`. If the body still exceeds the cap, `ops-compact` prepends a warning telling the model to fetch the full listing via `./supertool 'ops'`. Plain `'ops'` always returns everything.
For Kevin-style runs where you want the model to always batch via SuperTool:
/supertool on
This writes ~/.claude/supertool-enforced, which the PreToolUse hook reads to block:
- `Grep`, `Glob`, `LS` (native builtins)
- `Bash(cat ...)`, `Bash(find ...)`, `Bash(grep ...)`, `Bash(ls ...)`
- `Bash(sed ...)`, `Bash(awk ...)`, `Bash(tail ...)`, `Bash(head ...)`
Blocked calls receive a redirect message ("Use ./supertool instead: ..."). Model learns to batch.
Read stays allowed — Claude Code's Edit tool needs the built-in Read for state-based file checks. Don't try to disable it.
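The blocking decision itself is simple to sketch. A hypothetical reduction of the PreToolUse logic (state file present, tool or Bash command prefix on the blocklist above), not the plugin's actual hook code:

```python
from pathlib import Path

# Blocklist taken from the section above; Read is deliberately absent.
BLOCKED_TOOLS = {"Grep", "Glob", "LS"}
BLOCKED_BASH = ("cat", "find", "grep", "ls", "sed", "awk", "tail", "head")

def should_block(tool_name: str, command: str = "",
                 state_file: Path = Path.home() / ".claude" / "supertool-enforced") -> bool:
    """Hypothetical sketch: block only while the enforcement state file
    exists, and only for blocklisted tools or Bash command prefixes."""
    if not state_file.exists():
        return False                      # /supertool off -> everything allowed
    if tool_name in BLOCKED_TOOLS:
        return True
    if tool_name == "Bash":
        stripped = command.strip()
        first = stripped.split(None, 1)[0] if stripped else ""
        return first in BLOCKED_BASH
    return False
```

Because the toggle is just a file on disk, `/supertool on` and `/supertool off` need no settings mutation at all.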
Turn off when you're done:
/supertool off
Check state:
/supertool status
If you're running claude -p in bypass mode, you can use the CLI flag directly (plugin not required):
```shell
claude -p "..." --permission-mode bypassPermissions \
  --disallowedTools "Grep,Glob,LS,Bash(find:*),Bash(cat:*),Bash(grep:*),Bash(ls:*),Bash(sed:*),Bash(awk:*),Bash(tail:*),Bash(head:*)"
```

`--allowedTools` is ignored in bypass mode — always use `--disallowedTools` when bypassing.
| Op | Syntax | Notes |
|---|---|---|
| `read` | `read:PATH` or `read:PATH:OFFSET:LIMIT` | 300 lines / 20KB cap |
| `read` (filter) | `read:PATH:OFFSET:LIMIT:grep=PATTERN` | Only show lines matching PATTERN (original line numbers preserved). Use `read:PATH:::grep=PATTERN` for defaults. |
| `grep` | `grep:PATTERN:PATH` or `grep:PATTERN:PATH:LIMIT` | 10 results default, code + doc extensions only. Auto-reads full file if PATH is a concrete file < 20KB with a match. |
| `grep` (context) | `grep:PATTERN:PATH:LIMIT:CONTEXT` | Show CONTEXT lines before/after each match (like `grep -C`). Match lines: `path:lineno:content`. Context lines: `path-lineno-content`. Non-adjacent groups separated by `--`. |
| `grep` (count) | `grep:PATTERN:PATH:LIMIT:CONTEXT:count` | Return match counts per file instead of content. Output: `filepath:COUNT` per line. |
| `glob` | `glob:PATTERN` | `**` supported. Auto-reads if PATTERN is a concrete file path (no wildcards). |
| `ls` | `ls:PATH` | Trailing `/` on subdirs |
| `tail` | `tail:PATH:N` | Last N lines (default 20) |
| `head` | `head:PATH:N` | First N lines (default 20) |
| `wc` | `wc:PATH` | Line/word/char count (like unix `wc`). Output: `LINES WORDS CHARS PATH`. |
| `around` | `around:PATTERN:PATH` or `around:PATTERN:PATH:N` | Show N lines (default 10) before and after the first match of PATTERN in a single file. Uses line-numbered output like `read`. |
| `grep_around` | `grep_around:PATTERN:PATH` or `grep_around:PATTERN:PATH:N:LIMIT` | Every match across files with N lines context (default N=3, LIMIT=10). Alias for `grep:PATTERN:PATH:LIMIT:CONTEXT` with sane defaults — useful for "show me how everyone uses this". |
| `map` | `map:PATH` | Symbol map of a file or directory. Shows classes, functions, methods, constants as an indented tree with line numbers. Three-tier: tree-sitter → ctags → regex. Supports PHP, Python, JS, TS, Go, Rust, Java, Ruby. |
| `introduction` | `introduction` | Output the project introduction text from `.supertool.json`. No `---` dispatch header — clean markdown. |
| `output-format` | `output-format` | Output format examples from `.supertool.json`. Shows what responses look like. |
| `ops` | `ops` | Full operations reference from `.supertool.json` — built-in ops, custom ops, and aliases with descriptions and examples. |
| `diff` | `diff:PATH1:PATH2` | Unified diff between two files. |
| `stat` | `stat:PATH` | File/directory metadata: size (bytes), last modified (ISO datetime), type (file/dir). |
| `around_line` | `around_line:PATH:LINE` or `around_line:PATH:LINE:N` | Show N lines (default 10) of context around a specific line number. Target line marked with `→`. |
| `between` | `between:SYMBOL:PATH` or `between:re:START:END:PATH` | Return a chunk of a file. Symbol mode (default): full body of a named function/method/class via tree-sitter (PHP, Python, JS, TS, Go, Rust, Java, Ruby — symbols with `::` like PHP `Foo::bar` work). Pattern mode (`re:` prefix): inclusive line slice from first line matching START regex to first line after matching END regex (language-agnostic). |
| `tree` | `tree:PATH` or `tree:PATH:DEPTH` | Directory structure with depth limit (default 3). Hides dotfiles. Files listed before subdirectories. |
| `blame` | `blame:PATH:LINE` or `blame:PATH:LINE:N` | Git blame for N lines (default 5) around a specific line number. Requires git repo. |
| `version` | `version` | Show supertool version. |
| `edit` | `edit:::OLD:::NEW:::PATH` | Single-file, single-occurrence edit (mirrors native Edit). Errors if 0 or >1 matches. Bypasses native Edit's must-Read state — saves a round-trip when you already know the unique snippet. Use `:::` separator so content with `:` works. |
| `replace_lines` | `replace_lines:::PATH:::START:::END:::CONTENT` | Swap lines [START, END] (1-indexed, inclusive) with CONTENT. END < START = pure insert before line START. Empty CONTENT = delete. Receipt shows new line numbers + ±2 context. |
| `edit_session` | `edit_session:::PATH:::SCRIPT` | Cursor-based multi-action edit in one op. SCRIPT actions (separated by `;` or newline): `@L:C` goto, `/PATTERN` find next match, `^`/`$` BOL/EOL, `^^`/`$$` BOF/EOF, `<N`/`>N` left/right, `kN`/`jN` up/down rows, `+TEXT` insert (escapes decoded), `-N` delete N chars. Self-contained — `/PATTERN` removes the need for a prior Read to know coordinates. Token-cheap for many small edits. Example: `edit_session:::foo.py:::/def foo;$;+ # marker`. |
| `replace` / `replace_dry` | `replace:::OLD:::NEW:::PATH` | Recursive find/replace across PATH (`replace_dry` = preview). Use `:::` separator when content has `:`. |
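The `replace_lines` semantics (1-indexed inclusive range, END < START as pure insert, empty CONTENT as delete) can be sketched as a plain list operation. This is a hypothetical helper to make the edge cases concrete, not supertool's code:

```python
def replace_lines(lines, start, end, content):
    """Swap lines [start, end] (1-indexed, inclusive) with content.
    end < start inserts before line `start`; empty content deletes."""
    new = content.splitlines() if content else []
    if end < start:
        # pure insert: nothing removed, new lines go before `start`
        return lines[:start - 1] + new + lines[start - 1:]
    return lines[:start - 1] + new + lines[end:]
```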
LLM onboarding in one call: ./supertool 'introduction' 'output-format' 'ops' — outputs everything an LLM needs to use supertool.
Supertool works with no configuration. The .supertool.json is optional — it enables self-documenting ops for LLM onboarding via ./supertool 'introduction' 'ops'.
Create a .supertool.json in your project root. Supertool walks up from cwd to find it. A starter template ships with the plugin as .supertool.example.json.
```json
{
  "introduction": "This project uses supertool for batched file reads and static analysis. Invoke with: ./supertool 'read:src/app/Module.py' 'grep:pattern:src/'",
  "output-format": "Each operation returns a header followed by its output:\n\n--- read:src/app/Module.py ---\n(45 lines, 1230 bytes)\n 1→import os\n 2→import sys\n\n--- grep:class:src/app/:5 ---\n(2 results, limit 5)\nsrc/app/Module.py\n 4:class Module:\nsrc/app/Config.py\n 8:class Config:",
  "builtin-ops": {
    "read": {
      "syntax": "read:PATH[:OFFSET:LIMIT]",
      "description": "Read file (300 lines, 20KB cap)",
      "example": "read:src/app/Module.py:1:50"
    },
    "read-grep": {
      "syntax": "read:PATH:::grep=PATTERN",
      "description": "Inline filter — matching lines, line nums kept",
      "example": "read:src/app/Module.py:::grep=class"
    },
    "grep": {
      "syntax": "grep:PATTERN:PATH[:LIMIT[:CONTEXT]]",
      "description": "Search (10 results def). CONTEXT=N lines around match",
      "example": "grep:def handle:src/:20:2"
    },
    "map": {
      "syntax": "map:PATH",
      "description": "Symbol tree. tree-sitter>ctags>regex",
      "example": "map:src/app/"
    }
  },
  "ops": {
    "mypy": {
      "cmd": "python -m mypy --no-error-summary {file}",
      "timeout": 60,
      "description": "Type-check a Python file with mypy.",
      "example": "mypy:src/app/Module.py"
    },
    "pytest": {
      "cmd": "python -m pytest --no-header -q {file}",
      "timeout": 120,
      "description": "Run pytest on a test file.",
      "example": "pytest:tests/test_module.py"
    },
    "lint": {
      "cmd": "ruff check {file}",
      "timeout": 30,
      "description": "Lint a file with ruff.",
      "example": "lint:src/app/Module.py"
    }
  },
  "aliases": {
    "verify": {
      "ops": ["mypy:{file}", "lint:{file}"],
      "description": "Type-check + lint in one round-trip.",
      "example": "verify:src/app/Module.py"
    },
    "qa": {
      "ops": ["mypy:{file}", "lint:{file}", "pytest:tests/"],
      "description": "Full quality check: types, lint, tests.",
      "example": "qa:src/app/Module.py"
    }
  }
}
```

`introduction` and `output-format` are user-controlled strings output by meta-ops:
```shell
./supertool 'introduction'                        # prints the introduction string
./supertool 'output-format'                       # prints the output-format string
./supertool 'introduction' 'output-format' 'ops'  # full LLM onboarding in one call
```

Use this in session-start hooks or agent prompts to onboard LLMs to your project's supertool setup without reading config files manually.
builtin-ops entries document built-in operations (syntax, description, example). Set "status": 0 to hide an entry from ./supertool 'ops' output (works on builtin-ops, ops, and aliases). Besides documentation, builtin-ops entries can also override default behavior:
| Op | Key | Default | Effect |
|---|---|---|---|
| `read` | `max_lines` | 300 | Max lines per read |
| `read` | `max_bytes` | 20000 | Max bytes per read (truncates at cap) |
| `grep` | `max_results` | 10 | Default result limit when not specified in the op |
| `grep` | `extensions` | `[]` (all files) | Restrict grep to these file patterns (e.g. `["*.py", "*.js"]`). Empty = search all files |
| `glob` | `max_results` | 50 | Max files returned |
Example — increase read cap and restrict grep to PHP/XML:
```json
{
  "builtin-ops": {
    "read": { "max_lines": 500, "max_bytes": 40000 },
    "grep": { "extensions": ["*.php", "*.xml"] }
  }
}
```

`ops` are custom shell commands called directly by name:

```shell
./supertool 'mypy:src/app/Module.py' 'pytest:tests/test_module.py'
```

Each op has `cmd`, `timeout`, `description`, `example`, and optional `status`. Ops accept `{file}` and `{dir}` (dirname of file) placeholders. Shorthand string ops (`"lint": "ruff check {file}"`) still work with a 60s default timeout.
aliases expand one name to multiple ops. Format changed from array to object:
```shell
./supertool 'verify:src/app/Module.py'  # runs mypy + lint in one round-trip
```

Each alias has `ops` (array), `description`, `example`, and optional `status`. Aliases don't recurse.
Dispatch order: built-in ops → custom ops (including preset ops) → aliases. Built-ins always win. Project ops override preset ops on name conflict.
| Placeholder | Expands to | Example |
|---|---|---|
| `{file}` | First argument, shell-quoted, treated as file path | `cat {file}` |
| `{dir}` | Directory of `{file}` | `ls {dir}` |
| `{arg}` | First argument, shell-quoted, no path validation | `glab issue view {arg}` |
| `{args}` | All arguments, each shell-quoted | `python3 tool.py {args}` |
| `{path}` | Preset directory with trailing `/` (presets only) | `python3 {path}gitlab/issue.py {arg}` |
Use {file}/{dir} for file operations, {arg}/{args} for non-file arguments (issue numbers, job IDs, etc.).
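A sketch of how this expansion could work, assuming shell-quoting via `shlex.quote`. Illustrative only; supertool's actual expansion may differ in details:

```python
import os
import shlex

def expand_placeholders(cmd: str, args: list, preset_dir: str = "") -> str:
    """Hypothetical placeholder expansion matching the table above:
    {file}/{arg} take the first argument shell-quoted, {dir} its dirname,
    {args} all arguments quoted, {path} the preset directory verbatim."""
    first = shlex.quote(args[0]) if args else "''"
    dirname = shlex.quote(os.path.dirname(args[0])) if args else "''"
    return (cmd
            .replace("{file}", first)
            .replace("{dir}", dirname)
            .replace("{arg}", first)
            .replace("{args}", " ".join(shlex.quote(a) for a in args))
            .replace("{path}", preset_dir))
```

Quoting each argument is what makes "paths with spaces: fine" (see Platform notes) hold for custom ops too.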
Any key in a custom op config that isn't a reserved key (cmd, timeout, description, syntax, example, status) is passed to the subprocess as a SUPERTOOL_ prefixed environment variable:
```json
{
  "ops": {
    "job": {
      "cmd": "python3 job.py {arg}",
      "lines": 80,
      "error_patterns": "ERROR,FAIL,Fatal"
    }
  }
}
```

The script receives `SUPERTOOL_LINES=80` and `SUPERTOOL_ERROR_PATTERNS=ERROR,FAIL,Fatal` in its environment. This lets users tune op behavior from JSON without modifying scripts.
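The pass-through rule is short enough to sketch directly, assuming values are stringified with `str()`:

```python
# Reserved keys from the section above; everything else is passed through.
RESERVED = {"cmd", "timeout", "description", "syntax", "example", "status"}

def op_env(op_config: dict) -> dict:
    """Hypothetical sketch: map every non-reserved op key to a
    SUPERTOOL_-prefixed environment variable for the subprocess."""
    return {
        f"SUPERTOOL_{key.upper()}": str(value)
        for key, value in op_config.items()
        if key not in RESERVED
    }
```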
Presets are JSON files that declare custom ops for a specific tool or platform. Enable them in .supertool.json:
```json
{
  "presets": ["gitlab"]
}
```

Supertool looks for each preset in three locations (first found wins):

- `./presets/{name}.json` — project-level (team-specific ops)
- `~/.config/supertool/presets/{name}.json` — user-level (personal ops)
- `{supertool install dir}/presets/{name}.json` — shipped with supertool
Preset ops merge into your config. Project-level ops always override preset ops on name conflict.
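The lookup order reduces to a first-found-wins scan over three candidate paths. A hypothetical helper (`find_preset` is not a real supertool function):

```python
from pathlib import Path
from typing import Optional

def find_preset(name: str, install_dir: Path, cwd: Optional[Path] = None) -> Optional[Path]:
    """Sketch of the three-location preset lookup, first found wins."""
    cwd = cwd or Path.cwd()
    candidates = [
        cwd / "presets" / f"{name}.json",                                    # project-level
        Path.home() / ".config" / "supertool" / "presets" / f"{name}.json",  # user-level
        install_dir / "presets" / f"{name}.json",                            # shipped
    ]
    return next((p for p in candidates if p.is_file()), None)
```

Because the project-level path is checked first, a team can shadow a shipped preset simply by committing a file of the same name.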
gitlab — GitLab ops via glab CLI. Requires glab installed and authenticated.
| Op | Syntax | What it does |
|---|---|---|
| `gl-issue` | `gl-issue:NUMBER` | Issue metadata, description, human comments, related MRs, image download |
| `gl-mr` | `gl-mr:NUMBER_OR_BRANCH` | MR dashboard: branch, pipeline, reviewer/approval, linked issue, diff stat, comments |
| `gl-pipeline` | `gl-pipeline:NUMBER` | Pipeline job list grouped by stage with pass/fail |
| `gl-job` | `gl-job:NUMBER` | Job log with MR context, error pattern search + configurable tail |
All ops are namespaced with gl- to avoid collisions with other presets.
gl-mr accepts either an MR number (gl-mr:42) or a branch name (gl-mr:feature/my-branch) — it resolves branches to MRs automatically.
gl-job searches logs for error patterns before falling back to tail. Configure via JSON:
```json
{
  "presets": ["gitlab"],
  "ops": {
    "gl-job": {
      "cmd": "python3 {path}gitlab/job.py {arg}",
      "lines": 120,
      "error_patterns": "ERROR,FAILURES!,Fatal,Failed asserting",
      "error_context": 10
    }
  }
}
```

github — GitHub ops via the `gh` CLI. Requires `gh` installed and authenticated (`gh auth login`).
| Op | Syntax | What it does |
|---|---|---|
| `gh-issue` | `gh-issue:NUMBER` | Issue metadata, description, comments, linked PRs, image download |
| `gh-pr` | `gh-pr:NUMBER_OR_BRANCH` | PR dashboard: branch, checks, reviews/approval, linked issue, diff stat, comments |
| `gh-run` | `gh-run:NUMBER` | GitHub Actions workflow run: job list with statuses and failed step names |
| `gh-job` | `gh-job:NUMBER` | Job log with PR context, error pattern search (`##[error]`) + configurable tail |
All ops are namespaced with gh- to avoid collisions with other presets.
gh-pr accepts either a PR number (gh-pr:42) or a branch name (gh-pr:feature/my-branch) — it resolves branches to PRs automatically.
Both forge presets include actionable error messages — when something fails (404, auth, permissions, rate limit), the error tells the LLM exactly what went wrong and what command to run to fix it.
git — Git investigation ops. No auth needed — works on any git repo.
| Op | Syntax | What it does |
|---|---|---|
| `git-status` | `git-status` | Dashboard: branch, ahead/behind, last 5 commits, staged/unstaged/untracked, stashes, open MR/PR |
| `git-investigate` | `git-investigate:PATH` | File investigation: recent commits, uncommitted changes, blame hotspots |
| `git-trail` | `git-trail:PATTERN:PATH` | Trace a symbol through history via pickaxe search — when added, modified, removed |
| `git-blame` | `git-blame:PATH:LINE[:N]` | Blame N lines around a line number (moved from builtin) |
git-status tries glab then gh to show the open MR/PR for the current branch — skips gracefully if neither is installed. All other ops are pure git.
git-investigate combines 3-5 git commands into one report: log, diff, and blame hotspots (most recently changed lines). Configurable via SUPERTOOL_COMMITS and SUPERTOOL_BLAME_RECENT.
git-trail answers "when was this added/changed/removed?" using git log -S (pickaxe), with regex fallback. Shows timeline + contextual diffs filtered to relevant hunks.
claude-log — Inspect Claude Code session logs (~/.claude/projects/<encoded-cwd>/*.jsonl). No auth, no deps — pure stdlib Python.
| Op | Syntax | What it does |
|---|---|---|
| `claude-log-list` | `claude-log-list[:N]` | N most recent sessions for the current project: UUID, mtime, turn count, line count, first user-message excerpt |
| `claude-log-tail` | `claude-log-tail:UUID[:N]` | Last N events in compact form: `[role] TOOL name(input)` / `[result] output` / `[result/ERR] msg` / `[bootstrap] preview` |
| `claude-log-summary` | `claude-log-summary:UUID` | Full digest: model, duration, turn counts, tool calls + errors-by-tool, tokens (input/output/cache read/cache create) + cache hit %, final assistant text |
Useful for measuring autonomous-run efficiency — spotting wasted round-trips, validating that a skill change reduced tool calls, comparing model performance across runs. Windows-friendly cwd encoding (handles \ and drive colons), with closest-prefix sibling fallback when the encoded directory doesn't exist.
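The directory encoding is worth making explicit. A sketch of the assumed mapping (path separators and drive colons become dashes); this is an inference from observed `~/.claude/projects/` directory names, and the real encoding may differ:

```python
import re

def encode_cwd(cwd: str) -> str:
    """Assumed encoding of a working directory into a projects-dir name:
    every '/' or '\\' (and a Windows drive colon) becomes '-'."""
    return re.sub(r"[\\/:]", "-", cwd)
```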
Create ./presets/mytools.json in your project (or ~/.config/supertool/presets/mytools.json for personal use):
```json
{
  "description": "My team's deployment tools",
  "requires": "kubectl",
  "ops": {
    "deploy-status": {
      "cmd": "python3 {path}mytools/status.py {arg}",
      "timeout": 15,
      "description": "Check deployment status for a service.",
      "syntax": "deploy-status:SERVICE"
    }
  }
}
```

The `{path}` placeholder resolves to the preset JSON's directory, so scripts can live alongside the manifest. The `requires` field is documentation only (not enforced).

Then enable it:

```json
{
  "presets": ["mytools"]
}
```

The `check:PRESET:PATH` op still works — it reads from the `ops` section first, then falls back to `.supertool-checks.json` for backward compatibility. New projects should use direct ops (`mypy:file`) instead of `check:mypy:file`.
map:PATH generates a symbol tree (classes, functions, methods, constants) for a file or directory. Three-tier extraction — uses the best available tool:
| Tier | Detection | What you get |
|---|---|---|
| 1. tree-sitter | `tree_sitter_language_pack` or `tree_sitter_languages` importable | Full AST: accurate nesting, signatures, all node types |
| 2. ctags | `ctags` on PATH (universal-ctags) | JSON tags: class/method/function/constant with scope |
| 3. regex | Always available | Pattern matching: `class`, `function`, `def`, `interface`, `trait`, `enum`, `const`, `struct`, `impl` |
Supported languages: PHP, Python, JavaScript, TypeScript (+ JSX/TSX), Go, Rust, Java, Ruby.
```shell
# Single file
supertool 'map:src/Module.php'

# Directory (recursive, skips vendor/.git/Generated/node_modules)
supertool 'map:src/SiProject/'
```

Output:

```
src/SiProject/SiProjectModule.class.php (55 lines)
  class SiProjectModule [31]
    const TYPE_PRIMARY [39]
    const MENU_ITEM [42]
    method init [48]
```
Install optional deps for richer output:
```shell
# tree-sitter (best — full AST, Python 3.10+)
pip install tree-sitter-language-pack

# OR ctags (good — works everywhere)
brew install universal-ctags   # macOS
apt install universal-ctags    # Linux
```

Without either, regex fallback works for all supported languages — just no nesting detection (except Python indentation).
Set "compact": true in .supertool.json to enable compact reads. When enabled, read ops skip blank lines and comment-only lines (//, #, /* */, <!-- -->, PHPDoc * lines), preserving original line numbers. Reduces token cost for exploration without losing structure.
Compact is disabled when using grep= filter or offset (editing needs exact lines).
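A sketch of the stripping rule: skip blank and comment-only lines while keeping the original numbering. Illustrative, not supertool's exact regexes:

```python
import re

# Comment-only line starts covered in the section above; a sketch,
# not the tool's actual pattern set.
COMMENT_ONLY = re.compile(r"^\s*(#|//|/\*|\*/|\*|<!--)")

def compact_read(text: str):
    """Yield (original_lineno, line) pairs, dropping blank and
    comment-only lines. Numbering is preserved so later edit ops
    can still target exact lines."""
    for lineno, line in enumerate(text.splitlines(), start=1):
        if not line.strip() or COMMENT_ONLY.match(line):
            continue
        yield lineno, line
```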
Read-only ops in a batch can run concurrently. Output order is preserved (matches input order, not completion order).
Enable in .supertool.json:
```json
{ "parallel": 4 }
```

`parallel: N` runs up to N ops concurrently via a thread pool. `0` (default) = sequential. Boolean `true` is accepted as 4 for back-compat.
Override via env: SUPERTOOL_PARALLEL=4 ./supertool 'read:a' 'grep:x:b/' 'glob:c/**'. Env wins over JSON. Set 0 to force off for one call.
Safe ops (parallelized): read, grep, glob, ls, head, tail, wc, stat, map, tree, around, around_line, between, diff, blame, version.
Unsafe — batch falls back to sequential whenever any op is mutating (edit, replace, replace_dry, replace_lines) or custom (anything in ops: — could shell out to anything). All-or-nothing per call: no partial parallelism.
Speedup: I/O-bound ops on different files. ~3-5× faster on cold filesystem; modest gain on warm cache.
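Order preservation falls out naturally from a thread pool. A sketch using `ThreadPoolExecutor.map`, which returns results in input order regardless of completion order (illustrative, not supertool's actual dispatcher):

```python
from concurrent.futures import ThreadPoolExecutor

def run_batch(ops, run_one, parallel=4):
    """Run each op via run_one. With parallel > 0, fan out on a thread
    pool; executor.map yields results in input order, so the batch
    output matches the batch order even when later ops finish first."""
    if parallel <= 0 or len(ops) <= 1:
        return [run_one(op) for op in ops]   # sequential fallback
    with ThreadPoolExecutor(max_workers=parallel) as pool:
        return list(pool.map(run_one, ops))  # output order == input order
```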
glob, grep, tree, and map walk the filesystem recursively. On large repos this can be slow and noisy — .git/objects/, node_modules/, vendor/, and similar dirs rarely contain what you're looking for.
Supertool prunes these at the directory boundary (never opens them), not after the fact.
Built-in defaults — always active unless overridden:

```
.git/  node_modules/  .svn/  .hg/  .idea/  .vscode/
__pycache__/  .venv/  venv/  dist/  build/
```
Project-level additions — add to .supertool.json under ops.<op-name>.exclude-paths. These are merged additively with the defaults (not replacing):
```json
{
  "ops": {
    "glob": { "exclude-paths": ["vendor/", "Dvsi/dvsi-private/libs/"] },
    "grep": { "exclude-paths": ["vendor/", "Dvsi/dvsi-private/libs/"] }
  }
}
```

Per-call escape hatch — append `:::no-exclude` to bypass all excludes for one call:

```shell
./supertool 'grep:somePattern:vendor/:10:::no-exclude'
./supertool 'glob:**/*.php:::no-exclude'
```

Ops that take explicit paths and don't traverse (`ls`, `read`, `head`, `tail`, `wc`, `stat`, `around`, `around_line`, `between`, `diff`, `blame`) are not affected — they always work on exactly the path you give them.
See issue #4 for the full design rationale.
When rtk is installed, supertool automatically delegates read, grep, and wc to RTK for compressed output. No configuration needed — detected via which rtk at first use.
- With RTK + compact: uses `rtk read --level aggressive` (maximum compression)
- With RTK, no compact: uses `rtk read` (RTK formatting, no stripping)
- Without RTK + compact: native regex-based blank/comment stripping
- Without RTK, no compact: supertool's own output (default)
RTK is optional. Supertool works identically without it — RTK is just an accelerator.
When tree-sitter-language-pack (Python 3.10+) or tree-sitter-languages (Python 3.8–3.12) is installed, map uses tree-sitter for AST-based symbol extraction instead of ctags or regex.
- Detects the installed package at the first `map` call (cached for the session)
- Prefers `tree-sitter-language-pack` over `tree-sitter-languages` when both are present
- Falls back to ctags → regex when neither is installed
- No configuration needed — pure detection
tree-sitter is optional. The map op works without it — tree-sitter just gives more accurate nesting and signature details.
Six or seven ops per call is routine; two is too few.
```shell
supertool \
  'read:src/Module.py' \
  'read:src/Permissions.py' \
  'read:src/Options.py' \
  'grep:extends:src/:20' \
  'grep:@related:src/:10' \
  'glob:src/Components/**/*.xml' \
  'glob:src/EventsManagers/*.py'
```

One round-trip. Seven ops' worth of output. The session-start hook reminds the model of this each session.
The tool auto-promotes these wasted patterns silently, but you should still recognize them and batch up front:
- `glob:concrete/path.xml` followed by `read:concrete/path.xml` — glob on a path with no wildcards is useless; just `read:`. SuperTool auto-reads it.
- `grep:FOO:single_file.py` followed by `read:single_file.py` — same file, two turns. SuperTool auto-reads if the file is < 20KB with a match.
- A second SuperTool call whose ops could have fit in the first.
Self-check: if the output contains [auto-read: ...], SuperTool just salvaged a wasted turn you asked for. Tighten your next prompt to batch up front.
Every SuperTool call is logged to /tmp/supertool-calls.log with this format:
```
2026-04-16 21:05:42 | user=alice ppid=74394 entry=cli | ops=3 out=12400b | read:a.py read:b.py grep:X:src/:20
```
Fields:
- `user=` — the shell user
- `ppid=` — parent process (stable within one Claude Code session, useful for grouping)
- `entry=` — how Claude Code was invoked (`cli`, `sdk`, etc.)
- `ops=N` — number of ops in this call
- `out=Nb` — output bytes emitted to the model
```shell
awk -F'|' '{ for (i=1;i<=NF;i++) if ($i ~ /ops=/) print $i }' /tmp/supertool-calls.log \
  | sort | uniq -c | sort -rn
```

A healthy run has most calls at `ops=3+`. A run dominated by `ops=1` means the model is using SuperTool but not batching — tighten the system prompt.
```shell
awk -F'|' '
  { for (i=1;i<=NF;i++) if ($i ~ /ops=/) { gsub(/[^0-9]/,"",$i); t+=$i; n++ } }
  END { printf "%d ops in %d calls → %d round-trips saved vs all-single\n", t, n, t-n }
' /tmp/supertool-calls.log
```

Each saved round-trip avoids one prefix cache re-read. The bigger your prefix, the bigger the saving per trip.
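The same numbers can be pulled with a few lines of stdlib Python instead of awk. A sketch against the log format shown above:

```python
import re
from collections import Counter

def batch_stats(log_text: str):
    """Parse the ops=N field out of each call-log line and return
    (histogram of batch sizes, total ops, number of calls)."""
    counts = [int(m) for m in re.findall(r"\bops=(\d+)\b", log_text)]
    return Counter(counts), sum(counts), len(counts)
```

`total - calls` is the round-trips-saved figure the awk one-liner prints.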
Run the suite:
```shell
python3 -m pytest tests/
```

293 tests, 80% minimum coverage (enforced by pytest-cov). Current: 94%.
Enable the pre-push hook (runs pytest + enforces 80% coverage before every push):
```shell
git config core.hooksPath .githooks
```

The hook is in `.githooks/pre-push`, committed to the repo. Bypass with `git push --no-verify` (discouraged).
Linux/macOS: works out of the box.
Windows: works via Git Bash or WSL (the plugin's hooks/session-start.sh + .githooks/pre-push are bash scripts; the Python tool itself is cross-platform). Native cmd.exe / PowerShell without bash won't fire the hooks.
Paths with spaces: fine. Arguments arrive via sys.argv pre-tokenized by the shell, so supertool "'read:/home/jo bob/file.py'" works unchanged.
Windows drive letters: the tool recognizes C:\... and D:/... automatically and reassembles them after colon-splitting. So supertool 'read:C:\Users\file.py' and supertool 'grep:needle:C:/src:20' both parse correctly. If you hit edge cases, forward slashes (C:/path) work everywhere on Windows too.
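The reassembly rule can be sketched as a post-split merge: a lone letter segment followed by a slash-leading segment is rejoined with its colon. A hypothetical parser, not supertool's actual code:

```python
import re

DRIVE = re.compile(r"^[A-Za-z]$")

def split_op(op: str):
    """Split an op spec on ':' but rejoin Windows drive letters,
    so read:C:\\Users\\file.py yields two segments, not three."""
    parts = op.split(":")
    merged, i = [], 0
    while i < len(parts):
        if (i + 1 < len(parts) and DRIVE.match(parts[i])
                and parts[i + 1][:1] in ("\\", "/")):
            merged.append(parts[i] + ":" + parts[i + 1])  # C + \Users -> C:\Users
            i += 2
        else:
            merged.append(parts[i])
            i += 1
    return merged
```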
Temp/log location: the call log uses tempfile.gettempdir() — macOS: /var/folders/.../T/supertool-calls.log, Linux: /tmp/supertool-calls.log, Windows: %TEMP%\supertool-calls.log.
- One file. `supertool.py` is ~980 LoC (16 ops, 3 integration tiers). No package, no `setup.py`, no required deps. Drop in and use.
- Python 3.9+. macOS ships 3.9 via CommandLineTools; we don't force upgrades.
- No MCP server. MCP is server-process-and-JSON-RPC ceremony for what's literally "run a script, get output." A Bash-invoked binary is simpler, faster, and plugs into Claude Code's existing `--allowedTools`/`--disallowedTools` flow.
- Enforcement via PreToolUse hook, not config mutation. The plugin doesn't edit your `settings.json`. Toggling is a state file (`~/.claude/supertool-enforced`) read by the hook. Your config stays yours.
- Trade Python work for LLM tokens. LLM compute is expensive; local CPU is cheap. Any time the model would spend tokens computing, parsing, formatting, or finding — supertool should spend milliseconds instead. Richer op output (state hints, guards, semantic anchors, auto-formatting, syntax checks) is not feature creep — it's the whole thesis. Heavy Python is fine if it shaves tokens off the model side.
Community License — free for personal, educational, and internal business use. © 2026 Digital Process Tools.
