Thank you for your interest in contributing to Dynamic Context Pruning (DCP)!
## License and Contributions
This project uses the **GNU Affero General Public License v3.0 (AGPL-3.0)**.
### Contribution Agreement
By submitting a Pull Request to this project, you agree that:
1. Your contributions are licensed under the **AGPL-3.0**.
2. You grant the project maintainer(s) a non-exclusive, perpetual, irrevocable, worldwide, royalty-free, transferable license to use, modify, and re-license your contributions under any terms they choose, including commercial or proprietary licenses.
This arrangement ensures the project remains Open Source while providing a path for commercial sustainability.
## Getting Started
1. Fork the repository.
2. Create a feature branch.
3. Implement your changes and add tests if applicable.
4. Ensure all tests pass and the code is formatted.
---
Automatically reduces token usage in OpenCode by removing obsolete content from conversation history.
## Installation
DCP uses multiple tools and strategies to reduce context size:
### Tools
**Distill** — Exposes a `distill` tool that the AI can call to distill valuable context into concise summaries before removing the tool content.
**Compress** — Exposes a `compress` tool that the AI can call to collapse a large section of conversation (messages and tools) into a single summary.
**Prune** — Exposes a `prune` tool that the AI can call to remove completed or noisy tool content from context.
### Strategies
**Deduplication** — Identifies repeated tool calls (e.g., reading the same file multiple times) and keeps only the most recent output. Runs automatically on every request with zero LLM cost.
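The deduplication pass can be sketched as a single backward scan that keeps the most recent output per (tool, arguments) pair. This is an illustration only; the type and function names below are hypothetical, not DCP's actual internals:

```typescript
type ToolCall = { id: string; tool: string; args: string; output: string };

// Keep only the latest output for each (tool, args) pair; earlier
// duplicates have their output replaced with a short stub.
function deduplicate(calls: ToolCall[]): ToolCall[] {
  const seen = new Set<string>();
  const result: ToolCall[] = [];
  // Walk newest-to-oldest so the most recent duplicate wins.
  for (let i = calls.length - 1; i >= 0; i--) {
    const key = `${calls[i].tool}::${calls[i].args}`;
    result.unshift(
      seen.has(key)
        ? { ...calls[i], output: "[pruned: superseded by a later identical call]" }
        : calls[i],
    );
    seen.add(key);
  }
  return result;
}
```

Because the scan is a plain pass over the message history, it can run on every request without any LLM call, which is what makes the strategy free.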
**Supersede Writes** — Removes write tool calls for files that have subsequently been read. When a file is written and later read, the original write content becomes redundant since the current file state is captured in the read result. Runs automatically on every request with zero LLM cost.
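A minimal sketch of the supersede-writes idea (hypothetical names, simplified to a flat history of file operations):

```typescript
type Op = { tool: "write" | "read"; filePath: string; input: string };

// Blank out the input payload of any write that is followed, later in
// the history, by a read of the same file: the read already reflects
// the current file state, so the write body is redundant.
function supersedeWrites(history: Op[]): Op[] {
  return history.map((op, i) => {
    const supersededByRead =
      op.tool === "write" &&
      history
        .slice(i + 1)
        .some((later) => later.tool === "read" && later.filePath === op.filePath);
    return supersededByRead ? { ...op, input: "[pruned: file re-read later]" } : op;
  });
}
```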
**Purge Errors** — Prunes tool inputs for tools that returned errors after a configurable number of turns (default: 4). Error messages are preserved for context, but the potentially large input content is removed. Runs automatically on every request with zero LLM cost.
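The turn-based purge can be sketched as follows (again a hypothetical simplification: the input is dropped after the configured number of turns while the error message survives):

```typescript
type ToolResult = { turn: number; input: string; error: boolean; errorMessage?: string };

// After `turns` message turns, drop the (possibly large) input of an
// errored tool call while keeping its error message for context.
function purgeErrors(results: ToolResult[], currentTurn: number, turns = 4): ToolResult[] {
  return results.map((r) =>
    r.error && currentTurn - r.turn >= turns
      ? { ...r, input: "[pruned: errored tool input]" }
      : r,
  );
}
```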
LLM providers like Anthropic and OpenAI cache prompts based on exact prefix matching.
**Trade-off:** You lose some cache read benefits but gain larger token savings from reduced context size and performance improvements through reduced context poisoning. In most cases, the token savings outweigh the cache miss cost—especially in long sessions where context bloat becomes significant.
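A rough back-of-the-envelope illustration of the trade-off. All numbers here are assumptions chosen for the example (cache reads at 10% of the base input rate, a 40% context reduction), not any provider's actual price list or DCP's measured savings:

```typescript
// Illustrative prices only, not real provider rates.
const inputPerMTok = 3.0; // $ per million uncached input tokens
const cacheReadPerMTok = 0.3; // $ per million cached input tokens (10% of base)

function requestCost(tokens: number, cacheHitRate: number): number {
  const cached = tokens * cacheHitRate;
  const uncached = tokens - cached;
  return (cached * cacheReadPerMTok + uncached * inputPerMTok) / 1_000_000;
}

// Without DCP: 100k-token context at an 85% cache hit rate.
const withoutDcp = requestCost(100_000, 0.85);
// With DCP: context trimmed to 60k tokens at an 80% cache hit rate.
const withDcp = requestCost(60_000, 0.8);
```

Under these assumed numbers the smaller context wins despite the lower hit rate, which is the claim the paragraph above makes: savings from pruned tokens can exceed the cost of extra cache misses.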
> **Note:** In testing, cache hit rates were approximately 80% with DCP enabled vs 85% without for most providers.
**Best use cases:**
- **Request-based billing** — Providers that count usage in requests, such as GitHub Copilot and Google Antigravity, see no negative price impact.
- **Uniform token pricing** — Providers that bill cached tokens at the same rate as regular input tokens, such as Cerebras, see pure savings with no cache-miss penalty.
**Claude Subscriptions:** Anthropic subscription users (who receive "free" caching) may experience faster limit depletion than hit-rate ratios suggest due to the higher relative cost of cache misses. See [Claude Cache Limits](https://she-llac.com/claude-limits) for details.
## Configuration
DCP uses its own config file:
- Global: `~/.config/opencode/dcp.jsonc` (or `dcp.json`), created automatically on first run
- Custom config directory: `$OPENCODE_CONFIG_DIR/dcp.jsonc` (or `dcp.json`), if `OPENCODE_CONFIG_DIR` is set
- Project: `.opencode/dcp.jsonc` (or `dcp.json`) in your project’s `.opencode` directory
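The lookup order above can be sketched as a small path-resolution helper. This is an assumption-laden illustration, not DCP's actual loader: it assumes project-level config overrides global config, and that `OPENCODE_CONFIG_DIR` replaces the default global directory when set.

```typescript
import * as os from "node:os";
import * as path from "node:path";

// Candidate config paths, global first, project last (ascending
// precedence under the assumptions stated above). Within each
// directory, dcp.jsonc is listed before dcp.json.
function configCandidates(projectRoot: string): string[] {
  const globalDir =
    process.env.OPENCODE_CONFIG_DIR ?? path.join(os.homedir(), ".config", "opencode");
  return [globalDir, path.join(projectRoot, ".opencode")].flatMap((dir) => [
    path.join(dir, "dcp.jsonc"),
    path.join(dir, "dcp.json"),
  ]);
}
```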
<details>
<summary><strong>Default Configuration</strong> (click to expand)</summary>

```jsonc
{
  // Enable debug logging to ~/.config/opencode/logs/dcp/
  "debug": false,
  // Notification display: "off", "minimal", or "detailed"
  "pruneNotification": "detailed",
  // Notification type: "chat" (in-conversation) or "toast" (system toast)
  "pruneNotificationType": "chat",
  // Slash commands configuration
  "commands": {
    "enabled": true,
    // Additional tools to protect from pruning via commands (e.g., /dcp sweep)
    "protectedTools": [],
  },
  // Protect from pruning for <turns> message turns past tool invocation
  "turnProtection": {
    "enabled": false,
    "turns": 4,
  },
  // Protect file operations from pruning via glob patterns
  // Patterns match tool parameters.filePath (e.g. read/write/edit)
  "protectedFilePatterns": [],
  // LLM-driven context pruning tools
  "tools": {
    // Shared settings for all prune tools
    "settings": {
      // Nudge the LLM to use prune tools (every <nudgeFrequency> tool results)
      "nudgeEnabled": true,
      "nudgeFrequency": 10,
      // Token limit at which the model begins actively
      // compressing session context. Best kept around 40% of
      // the model's context window to stay in the "smart zone".
      // Set to "model" to use the model's full context window.
      "contextLimit": 100000,
      // Additional tools to protect from pruning
      "protectedTools": [],
    },
    // Distills key findings into preserved knowledge before removing raw content
    "distill": {
      // Permission mode: "allow" (no prompt), "ask" (prompt), "deny" (tool not registered)
      "permission": "allow",
      // Show distillation content as an ignored message notification
      "showDistillation": false,
    },
    // Collapses a range of conversation content into a single summary
    "compress": {
      // Permission mode: "ask" (prompt), "allow" (no prompt), "deny" (tool not registered)
      "permission": "ask",
      // Show summary content as an ignored message notification
      "showCompression": false,
    },
    // Removes tool content from context without preservation (for completed tasks or noise)
    "prune": {
      // Permission mode: "allow" (no prompt), "ask" (prompt), "deny" (tool not registered)
      "permission": "allow",
    },
  },
  // Automatic pruning strategies
  "strategies": {
    // Remove duplicate tool calls (same tool with same arguments)
    "deduplication": {
      "enabled": true,
      // Additional tools to protect from pruning
      "protectedTools": [],
    },
    // Prune write tool inputs when the file has been subsequently read
    "supersedeWrites": {
      "enabled": true,
    },
    // Prune tool inputs for errored tools after X turns
    "purgeErrors": {
      "enabled": true,
      // Number of turns before errored tool inputs are pruned
      "turns": 4,
      // Additional tools to protect from pruning
      "protectedTools": [],
    },
  },
}
```

</details>
### Commands
DCP provides a `/dcp` slash command:
- `/dcp stats` — Shows cumulative pruning statistics across all sessions.
- `/dcp sweep` — Prunes all tools since the last user message. Accepts an optional count: `/dcp sweep 10` prunes the last 10 tools. Respects `commands.protectedTools`.
### Protected Tools
By default, these tools are always protected from pruning across all strategies: