
feat: recursive self-improvement — operational learning + full skill wiring (v0.13.8.0)#647

Open
garrytan wants to merge 8 commits into main from garrytan/learn-from-reviews

Conversation

@garrytan
Owner

Summary

gstack now learns from its own operational failures and wires learnings into every insight-producing skill.

Operational self-improvement (universal)

  • Every skill session now reflects on CLI failures, wrong approaches, and project quirks at completion
  • Logs operational learnings to per-project JSONL via gstack-learnings-log with new operational type
  • Preamble surfaces top 3 learnings inline when count > 5 (gated per Codex cross-model review)
  • No opt-in needed; runs in the preamble completion protocol for ALL skills at ALL tiers
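
The per-project JSONL write described above might look like the following sketch. The field names and `toJsonlLine` helper are illustrative assumptions, not the actual gstack schema or `gstack-learnings-log` implementation:

```typescript
// Hypothetical shape of an "operational" learning entry appended to the
// per-project learnings JSONL. Field names are illustrative only.
interface OperationalLearning {
  type: "operational";
  skill: string;      // skill session that hit the failure
  content: string;    // what went wrong and how it was fixed
  createdAt: string;  // ISO timestamp
}

function toJsonlLine(entry: OperationalLearning): string {
  // JSONL convention: one JSON object per line, newline-terminated
  return JSON.stringify(entry) + "\n";
}

const line = toJsonlLine({
  type: "operational",
  skill: "review",
  content: "npm test requires --experimental-vm-modules on this project",
  createdAt: new Date().toISOString(),
});
console.log(line.trim());
```

Because each entry is a self-contained line, appending is safe without rewriting the file, and future sessions can stream-filter by `type`.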

Full learning loop (13 skills wired)

  • office-hours, plan-ceo-review, plan-eng-review: added LEARNINGS_LOG (had SEARCH)
  • plan-design-review: added both LEARNINGS_SEARCH + LEARNINGS_LOG (had neither)
  • design-review, design-consultation, cso, qa, qa-only: added both SEARCH + LOG
  • retro: added LEARNINGS_SEARCH (had LOG)
  • Previously only 3 skills (review, ship, investigate) were fully wired

Dead code removal

  • Removed contributor mode (generateContributorMode, _CONTRIB bash var, 2 E2E tests, touchfile, doc references)
  • Never fired in 18 days of heavy use (required manual opt-in via gstack-config)
  • Cleaned up "Contributor Mode" skip-list references in plan-ceo-review, autoplan, review resolver, document-release templates

E2E test fixes

  • New operational-learning gate-tier E2E test validates the write path
  • Fixed learnings-show slug mismatch (seeded at hardcoded path, but gstack-slug computed different path)
  • Added operational seed entry to learnings-show test
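
The slug-mismatch fix above can be sketched as follows. The sanitize rule (lowercase, non-alphanumeric runs collapsed to "-") is an assumption for illustration, not the actual `gstack-slug` implementation:

```typescript
import { basename } from "node:path";

// Hypothetical reconstruction of the fix: when no git remote exists,
// derive the slug from basename(workDir), sanitized, so the test seeds
// learnings at the same path the agent's search will compute.
function slugFromWorkDir(workDir: string): string {
  return basename(workDir)
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

console.log(slugFromWorkDir("/tmp/My Test_Project")); // "my-test-project"
```

The key point is that both the seeding step and the lookup step call the same function, so they cannot drift apart again.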

Test Coverage

All new code paths have test coverage.

  • Unit: gen-skill-docs.test.ts validates operational self-improvement in preamble output + operational type in LEARNINGS_LOG
  • E2E: operational-learning (PASS, $0.05, 13s) validates write path
  • E2E: learnings-show (PASS, $0.13, 42s) validates read path with all 4 types including operational

Pre-Landing Review

Pre-Landing Review: No issues found. Template/resolver changes only.

Reviews

  • CEO Review: CLEAR (selective expansion, 3 proposals, 2 accepted, 1 deferred)
  • Eng Review: CLEAR (0 issues, 0 critical gaps)
  • Codex Outside Voice: 6 findings, 1 accepted (preamble summary gated to count > 5)

Test plan

  • All free tests pass (bun test)
  • operational-learning E2E passes (gate-tier)
  • learnings-show E2E passes (gate-tier)
  • All 7 bws E2E tests pass (7/7)

🤖 Generated with Claude Code

garrytan and others added 7 commits March 29, 2026 21:02
…-improvement slot

Contributor mode never fired in 18 days of heavy use (required manual opt-in
via gstack-config, gated behind _CONTRIB=true, wrote disconnected markdown).

Removes: generateContributorMode(), _CONTRIB bash var, 2 E2E tests, touchfile
entry, doc references. Cleans up skip-lists in plan-ceo-review, autoplan,
review resolver, and document-release templates.

The operational self-improvement system (next commit) replaces this slot with
automatic learning capture that requires no opt-in.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds universal operational learning capture to the preamble completion protocol.
At the end of every skill session, the agent reflects on CLI failures, wrong
approaches, and project quirks, logging them as type "operational" to the
learnings JSONL. Future sessions surface these automatically.

- generateCompletionStatus(ctx) now includes operational capture section
- Preamble bash shows top 3 learnings inline when count > 5
- New "operational" type in generateLearningsLog alongside pattern/pitfall/etc
- Updated unit tests + operational seed entry in learnings E2E
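
The count > 5 gate described above could be sketched like this. The function name and parameters are illustrative assumptions, not the actual preamble resolver:

```typescript
// Illustrative gating sketch: surface the top 3 learnings inline only
// once more than 5 have accumulated, so sparse projects stay quiet.
function inlineSummary(learnings: string[], threshold = 5, top = 3): string[] {
  if (learnings.length <= threshold) return []; // below the gate: show nothing
  return learnings.slice(0, top);               // assumes entries are pre-ranked
}

console.log(inlineSummary(["a", "b", "c", "d", "e", "f"])); // ["a", "b", "c"]
console.log(inlineSummary(["a", "b"]));                     // []
```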

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds LEARNINGS_SEARCH and/or LEARNINGS_LOG to 10 skill templates that
produce reusable insights but were previously disconnected from the
learning system:

- office-hours, plan-ceo-review, plan-eng-review: add LOG (had SEARCH)
- plan-design-review: add both SEARCH + LOG (had neither)
- design-review, design-consultation, cso, qa, qa-only: add both
- retro: add SEARCH (had LOG)

13 skills now fully participate in the learning loop (read + write).
Every review, QA, investigation, and design session both consults prior
learnings and contributes new ones.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Validates the write path: agent encounters a CLI failure, logs an
operational learning to JSONL via gstack-learnings-log. Replaces the
removed contributor-mode E2E test.

Setup: temp git repo, copy bin scripts, set GSTACK_HOME.
Prompt: simulated npm test failure needing --experimental-vm-modules.
Assert: learnings.jsonl exists with type=operational entry.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…rdcoded

The test seeded learnings at projects/test-project/ but gstack-slug computes
the slug from basename(workDir) when no git remote exists. The agent's search
looked at the wrong path and found nothing.

Fix: compute slug the same way gstack-slug does (basename + sanitize) and
seed the learnings there.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…eviews

# Conflicts:
#	SKILL.md
#	autoplan/SKILL.md
#	benchmark/SKILL.md
#	browse/SKILL.md
#	canary/SKILL.md
#	codex/SKILL.md
#	connect-chrome/SKILL.md
#	cso/SKILL.md
#	design-consultation/SKILL.md
#	design-review/SKILL.md
#	design-shotgun/SKILL.md
#	document-release/SKILL.md
#	investigate/SKILL.md
#	land-and-deploy/SKILL.md
#	learn/SKILL.md
#	office-hours/SKILL.md
#	plan-ceo-review/SKILL.md
#	plan-design-review/SKILL.md
#	plan-eng-review/SKILL.md
#	qa-only/SKILL.md
#	qa/SKILL.md
#	retro/SKILL.md
#	review/SKILL.md
#	scripts/resolvers/preamble.ts
#	setup-browser-cookies/SKILL.md
#	setup-deploy/SKILL.md
#	ship/SKILL.md
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@github-actions

E2E Evals: ✅ PASS

61/61 tests passed | $5.85 total cost | 12 parallel runners

| Suite | Result | Cost |
| --- | --- | --- |
| e2e-browse | 7/7 | $0.33 |
| e2e-deploy | 6/6 | $1.03 |
| e2e-design | 3/3 | $0.53 |
| e2e-plan | 7/7 | $1.07 |
| e2e-qa-workflow | 3/3 | $0.87 |
| e2e-review | 6/6 | $1.07 |
| e2e-workflow | 4/4 | $0.45 |
| llm-judge | 25/25 | $0.50 |

12x ubicloud-standard-2 (Docker: pre-baked toolchain + deps) | wall clock ≈ slowest suite

@harshitgavita-07

harshitgavita-07 commented Mar 30, 2026

Hey @garrytan ,

The operational self-improvement loop here is brilliant — skills learning from their own failures is exactly the right approach.

Reading through the implementation, I notice /learn now stores four types:

  • patterns
  • pitfalls
  • preferences
  • operational learnings

The gap I see: these are all retrospective — they capture what happened after a decision. But there's no layer for prospective decisions — what the team decided to do and why.


The idea: Add a fifth type to /learn: decisions

// When /plan-ceo-review approves a direction
await learn.add({
  type: "decision",
  skill: "plan-ceo-review",
  content: "Use PostgreSQL over MongoDB for user data",
  rationale: "ACID compliance critical for financial data",
  tags: ["database", "infrastructure"]
});

// When /plan-eng-review implements
const relevantDecisions = await learn.query({
  type: "decision",
  since: "14d",
  match: "database"
});

Why this matters for /retro:

The operational learnings capture how things went wrong. But decisions capture what was chosen and why. When /retro runs, it currently shows git history — but it can't tell you:

"You chose PostgreSQL because of ACID requirements. Verdict: right call — caught 3 race conditions in testing that MongoDB would have missed."


How it fits the existing loop:

Operational learnings: What went wrong? → Store in /learn
Decisions: What did we choose and why? → Store alongside

/review  → Reads both operational learnings + decisions  
/retro   → Shows both what failed + whether decisions were right

This would make /retro dramatically more useful — not just "what did we ship" but "were our architectural choices correct."
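
One way the retro cross-referencing proposed above could work, as a rough sketch: pair each stored decision with later operational learnings that share a tag. All types and the pairing logic here are hypothetical, extending the reviewer's illustrative API:

```typescript
// Illustrative sketch of /retro joining decisions with later outcomes.
interface Decision { type: "decision"; content: string; rationale: string; tags: string[] }
interface Learning { type: "operational"; content: string; tags: string[] }

function retroPairs(decisions: Decision[], learnings: Learning[]) {
  // pair each decision with learnings sharing at least one tag
  return decisions.map((d) => ({
    decision: d.content,
    rationale: d.rationale,
    evidence: learnings
      .filter((l) => l.tags.some((t) => d.tags.includes(t)))
      .map((l) => l.content),
  }));
}

const pairs = retroPairs(
  [{ type: "decision", content: "Use PostgreSQL over MongoDB", rationale: "ACID compliance", tags: ["database"] }],
  [{ type: "operational", content: "Caught 3 race conditions in testing", tags: ["database"] }],
);
console.log(pairs[0].evidence); // ["Caught 3 race conditions in testing"]
```

Tag overlap is a crude join key; a real implementation would probably want explicit decision IDs on follow-up learnings, but it shows the shape of the verdict output.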


Questions for discussion:

  1. Would decisions be a separate skill (/decisions) or an extension to /learn?

  2. Decision storage — should rationale be required? Optional? Could be AI-generated summary.

  3. Query interface — /learn query --type decision --since 30d makes sense?

Curious if this fits the /learn philosophy, or if decisions should be a completely separate system?

- @harshitgavita-07

…eviews

Resolved conflicts:
- VERSION: bumped to 0.13.10.0 (our changes on top of main's 0.13.9.0)
- CHANGELOG.md: kept both entries, ours on top with updated version
- plan-ceo-review/SKILL.md.tmpl: took main's INVOKE_SKILL resolver
- scripts/resolvers/review.ts: took main's invokeBlock pattern
- scripts/resolvers/preamble.ts: wrapped JSONL writes in telemetry conditional
- test/skill-validation.test.ts: removed contributor-mode tests (feature removed)
- test/touchfiles.test.ts: updated test refs from contributor-mode to session-awareness
- Regenerated all SKILL.md files from merged templates

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>