# bin
The contents of my bin dir. Useful when I crash, burn, corrupt, despoil, savage, or ravage my system.
## Paused git tracking
`.git` renamed to `.git.paused`, so git no longer sees this as a repo.
`.git.paused` added to `.stignore`, so Syncthing won't sync it to other hosts.
To resume later: `mv ~/bin/.git.paused ~/bin/.git`
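The full pause/resume cycle can be sketched as below. The demo uses a throwaway directory instead of `~/bin` so it is safe to run anywhere; on the real system the same moves apply to `~/bin/.git`.

```shell
#!/bin/sh
# Demonstrate the pause/resume cycle in a throwaway directory.
set -eu
demo=$(mktemp -d)
mkdir "$demo/.git"                      # stand-in for the real repo metadata
# Pause: hide the repo from git and keep Syncthing from syncing it.
mv "$demo/.git" "$demo/.git.paused"
echo ".git.paused" >> "$demo/.stignore"
# Resume: restore the directory name; git sees the repo again.
mv "$demo/.git.paused" "$demo/.git"
```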
## Notes
* `fresh-start` is the big script that installs almost everything
1. Run `install-pkcs11`
1. Download DoD certs and run `install-os-dodcerts`
1. Run `setup-nssdb`
1. Run `install-nssdb-dodcerts`
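The ordering of the numbered steps can be sketched as a strict-order runner. The stubs below stand in for the real scripts so the logic can be exercised anywhere; on a real machine you would simply run the four scripts directly in this order.

```shell
#!/bin/sh
# Run the cert-setup steps in order, stopping on the first failure (set -e).
# Stub scripts are generated in a temp dir purely for demonstration.
set -eu
stub=$(mktemp -d)
log="$stub/order.log"
for name in install-pkcs11 install-os-dodcerts setup-nssdb install-nssdb-dodcerts; do
  printf '#!/bin/sh\necho %s >> %s\n' "$name" "$log" > "$stub/$name"
  chmod +x "$stub/$name"
done
for step in install-pkcs11 install-os-dodcerts setup-nssdb install-nssdb-dodcerts; do
  "$stub/$step"
done
```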
## bashrc
Read the comments in bashrc.
## On a new machine...
- Generate an ssh keypair with `ssh-keygen -b 2048 -t rsa`
- Copy pub key to GitHub account
- Clone from the home dir: `git clone git@github.com:ahoffer/bin.git`
After cloning...
- Copy `gitconfig_template` to `~/.gitconfig`
- Open `~/bin/bashrc` and copy the parts that add `~/bin` to the PATH into `~/.bashrc`
- Source `~/.bashrc`
- Set up secret env vars in `~/.bashrc`
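The key-generation step can be tried safely in a temp directory first. The `-f` and `-N ""` flags below (empty passphrase, non-default path) are demo conveniences; drop them to use `ssh-keygen`'s interactive defaults.

```shell
#!/bin/sh
# Generate a 2048-bit RSA keypair into a temp dir instead of ~/.ssh.
set -eu
tmp=$(mktemp -d)
ssh-keygen -q -b 2048 -t rsa -N "" -f "$tmp/id_rsa"
cat "$tmp/id_rsa.pub"    # paste this public key into the GitHub account
```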
## MCP server
`mcpserve` starts the Desktop Commander MCP server via `npx @wonderwhy-er/desktop-commander@latest`
and writes stderr logs to `~/log/mcpserve.log`.
To expose it from another host over SSH, add an SSH alias such as:

```
Host clown-mcp
    HostName clown
    User aaron
```
Then point Codex at it with an MCP entry like:
```toml
[mcp_servers.clown]
command = "ssh"
args = ["clown-mcp", "mcpserve"]
```
This repo only provides the `mcpserve` wrapper; the Desktop Commander package is downloaded by `npx`
when the command runs.
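Before wiring the alias into Codex, you can confirm it resolves as intended with `ssh -G`. The sketch below uses a throwaway config file so it runs anywhere; the real alias belongs in `~/.ssh/config`.

```shell
#!/bin/sh
# Resolve the clown-mcp alias against a throwaway config and show the result.
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
Host clown-mcp
    HostName clown
    User aaron
EOF
ssh -G -F "$cfg" clown-mcp | grep -E '^(hostname|user) '
```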
## Claude Code and Codex session wrappers
`claude` and `codex` in `~/bin` find the real binary and hand off to `wraplog`, which
runs the binary directly and indexes the session JSONL into `~/logs/` on exit.
The real binaries live at `~/.local/bin/claude` and the nvm-managed path for codex.
The wrappers strip `~/bin` from PATH before resolving the real binary to prevent
self-invocation.
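The PATH-stripping step can be sketched like this; it is a minimal version, and the real wrappers may differ in detail.

```shell
#!/bin/sh
# Remove ~/bin from PATH so `command -v` resolves the real binary,
# not the wrapper that is currently running.
set -eu
strip_bin_from_path() {
  printf '%s\n' "$PATH" | tr ':' '\n' | grep -Fvx "$HOME/bin" | paste -sd: -
}
clean_path=$(strip_bin_from_path)
# A wrapper would then resolve and exec the real binary, e.g.:
# real=$(PATH="$clean_path" command -v claude) && exec "$real" "$@"
```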
## bigfish-shell
`bigfish-shell` is the resilient interactive entrypoint from clown to bigfish.
It keeps your local terminal unchanged and uses remote tmux on bigfish so work
survives short SSH disconnects.
Behavior:
1. It connects with `ssh bigfish` and runs `tmux new-session -A -s <name>` on
bigfish (create-or-attach remote session).
1. If SSH fails, it retries with bounded exponential backoff.
1. It must run in an interactive terminal (TTY). Non-interactive runs fail fast.
Usage:
```
bigfish-shell
bigfish-shell my-session
```
Optional environment variables:
```
BIGFISH_HOST=bigfish
BIGFISH_TMUX_SESSION=bigfish
BIGFISH_SSH_MAX_ATTEMPTS=5
BIGFISH_SSH_MAX_DELAY=8
```
Why this exists:
* Keeps remote work alive inside tmux on bigfish.
* Avoids any background watchdog process; recovery is explicit and operator driven.
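The retry behavior can be sketched as below. `try_connect` is a stand-in for the real `ssh bigfish tmux new-session -A -s <name>` call; here it fails twice and then succeeds so the loop can be exercised quickly. The defaults match the environment variables listed above.

```shell
#!/bin/sh
# Bounded exponential backoff: delay doubles each retry, capped at max_delay,
# and the loop gives up after max_attempts.
set -eu
max_attempts=5
max_delay=8
fails_left=2
try_connect() {
  if [ "$fails_left" -gt 0 ]; then fails_left=$((fails_left - 1)); return 1; fi
  return 0
}
attempt=1
delay=1
until try_connect; do
  [ "$attempt" -ge "$max_attempts" ] && { echo "giving up"; exit 1; }
  echo "attempt $attempt failed; retrying in ${delay}s"
  sleep "$delay"
  attempt=$((attempt + 1))
  delay=$((delay * 2))
  [ "$delay" -gt "$max_delay" ] && delay=$max_delay
done
echo "connected after $attempt attempt(s)"
```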
### Session log index
`wraplog` creates a timestamped symlink in `~/logs/` pointing to the JSONL session
file written by the tool:
```
~/logs/claude-YYYYMMDD-HHMMSS-<session-uuid>.jsonl -> ~/.claude/projects/.../uuid.jsonl
~/logs/codex-YYYYMMDD-HHMMSS-rollout-<ts>-<uuid>.jsonl -> ~/.codex/sessions/.../uuid.jsonl
```
The JSONL stays in its canonical location; the symlink provides a time-indexed entry
point. If no session file was created, the symlink is omitted.
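The indexing step can be sketched as below, assuming `wraplog` works roughly this way; the demo uses temp dirs in place of `~/.claude/projects/<slug>` and `~/logs` so it is safe to run.

```shell
#!/bin/sh
# Create a timestamped symlink in the log index pointing at the
# canonical JSONL session file.
set -eu
canon=$(mktemp -d)    # stands in for ~/.claude/projects/<slug>
logs=$(mktemp -d)     # stands in for ~/logs
uuid="0000-demo-uuid"
touch "$canon/$uuid.jsonl"                     # the canonical session file
stamp=$(date +%Y%m%d-%H%M%S)
ln -s "$canon/$uuid.jsonl" "$logs/claude-$stamp-$uuid.jsonl"
```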
### Claude Code session JSONs
Stored at `~/.claude/projects/`. Each project gets a subdirectory named after its path
slug, for example `-home-aaron-bin`. Each conversation is one UUID-named `.jsonl` file.
Subagent threads nest under a session UUID in a `subagents/` subdirectory.
Global prompt history is at `~/.claude/history.jsonl`.
No rotation is performed. Files accumulate indefinitely.
### Codex session JSONs
Stored at `~/.codex/sessions/2026/MM/DD/` as `rollout-<timestamp>-<uuid>.jsonl`.
Directories are created per calendar day.
Global prompt history is at `~/.codex/history.jsonl`.
Persistent conversation state lives in `~/.codex/state_5.sqlite` with standard SQLite
WAL files alongside it. The TUI process log is at `~/.codex/log/codex-tui.log`.
No rotation is performed on any of these files.
## Colima scripts
Colima launch helpers were removed from `~/bin` to avoid conflicting startup paths.
Removed files:
1. `colima-start-guarded`
1. `start-colima-docker.command`
1. `stop-colima-docker.command`
On clown, Colima-related launchd labels were also disabled:
1. `local.colima.guarded`
1. `homebrew.mxcl.colima`
Check disabled state:
```
launchctl print-disabled gui/$(id -u) | rg -i colima
```
## Xpra on bigfish
`mac/xpra-chrome` is the macOS-side helper that opens `Xpra.app` and attaches it to the forwarded
xpra session exposed by `bigfish` on `tcp://127.0.0.1:14501`.
On the `bigfish` side, the attached xpra X11 display is currently visible as `:100`:
```
xpra list
DISPLAY=:100 xdpyinfo | head
```
That matters for remote GUI debugging. To run a real headed browser that is visible through the
existing xpra app windows on macOS, point the process at that display instead of using `xvfb-run`.
For Playwright:
```
cd ~/projects/cx-search/proximity/test/playwright
DISPLAY=:100 npx playwright test
```
For a narrower visible smoke test:
```
DISPLAY=:100 npx playwright test tests/proximity.spec.ts -g "01 place ship CoT"
```
This launches Playwright's own browser window on the live xpra display so it can be watched from
the existing Xpra app flow on the Mac.
### Xpra sessions
| App | Display | Port | Service | Connect script |
|---------|---------|-------|----------------|-------------------|
| Chrome | :100 | 14501 | xprachrome | mac/xpra-chrome |
| Signal | :101 | 14502 | xpra-signal | mac/xpra-signal |
Signal uses a dedicated user-data dir at `~/.config/signal-xpra`, isolated from any native
Signal installation. Connect from clown with `xprasignal` (delegates to `mac/xpra-signal`).
To add another session: create a service file and connect script following the same pattern,
then add a line to `xpra-healthcheck` on bigfish and `mac/xpra-healthcheck` on clown.
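As a purely hypothetical illustration, a third session (display `:102`, port `14503`, app name `foo`) might pair a user service like the one below with a matching connect script. The `ExecStart` flags here are assumptions, not copied from the existing units; check the real `xprachrome`/`xpra-signal` service files for the actual invocation.

```ini
# ~/.config/systemd/user/xpra-foo.service (hypothetical example)
[Unit]
Description=xpra session for foo

[Service]
ExecStart=/usr/bin/xpra start :102 --daemon=no --bind-tcp=127.0.0.1:14503 --start=foo
Restart=on-failure

[Install]
WantedBy=default.target
```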
### Xpra watchdog daemons
One pair of daemons covers all sessions.
On bigfish a systemd timer fires every 60 seconds and runs `xpra-healthcheck`. The script
loops over every known display/service pair, probes each with a 10-second timeout, and
restarts the matching service if a probe fails or hangs.
On clown a launchd agent fires every 60 seconds and runs `mac/xpra-healthcheck`. The script
exits quietly if bigfish is unreachable. Otherwise it loops over every known port/connect-script
pair, checks tunnel and probe health for each, and reconnects only the sessions that are
unhealthy. Each tunnel uses its own SSH control socket (`~/.ssh/cm-xpra-bigfish-PORT`), so
reconnects terminate only the dedicated xpra tunnel instead of any shared SSH master session.
The stale-process killer at the top of the script queries all known ports together, so it never
mistakes a healthy session for a stale one.
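The bigfish-side loop can be sketched as below. The display/service pairs come from the table above, but using `xpra info` as the probe is an assumption about the real script; this sketch only prints what it would restart instead of calling `systemctl`.

```shell
#!/bin/sh
# Probe each known display with a timeout; flag the service for restart
# if the probe fails or hangs. The real script would run:
#   systemctl --user restart "$service"
restarts=""
for pair in 100:xprachrome 101:xpra-signal; do
  display=${pair%%:*}
  service=${pair#*:}
  if ! timeout 10 xpra info ":$display" >/dev/null 2>&1; then
    restarts="$restarts $service"
    echo "would restart $service"
  fi
done
```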
Check watchdog status on bigfish:
```
systemctl --user status xpra-healthcheck.timer
systemctl --user list-timers xpra-healthcheck.timer
journalctl --user -u xpra-healthcheck -n 50
```
Check watchdog status on clown:
```
launchctl list com.aaron.xpra-healthcheck
tail -100 ~/Library/Logs/xpra-healthcheck.log
```
### Xpra log scripts
On bigfish, `xprachromelog` follows the Chrome server journal, `xprasignallog` follows the Signal
server journal, and `xprawatchlog` follows the single shared watchdog journal. All three
pass through extra arguments to `journalctl`.
On clown, `xpralogmac` streams Xpra.app output via `log stream` and `xprawatchlogmac`
tails the watchdog log at `~/Library/Logs/xpra-healthcheck.log`. `xprasignalclientlog`
tails `~/Library/Logs/xpra-signal.log`, the Xpra.app client output for the Signal session.
`xprachromeclientlog` tails `~/Library/Logs/xpra-chrome.log`, the equivalent for the Chrome session.