# ox distill

Turn raw team activity into structured memory. `ox distill` collects observations from Discussions, coding sessions, and GitHub, then uses your AI coworker to synthesize daily, weekly, and monthly summaries stored in Team Context.
## Usage

Run `ox distill` from any initialized repository. The pipeline runs locally, using your AI coworker's CLI for LLM calls.
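For example (flags are described in the table below):

```shell
# Distill all applicable layers from the last 7 days of activity
ox distill

# Sync Ledger, Team Context, and code index first, then distill
ox distill --sync
```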
## What happens during distill

- Collects raw observations from three sources: Discussions, coding sessions (Ledger), and GitHub activity
- Extracts structured facts using your AI coworker (or direct mapping for sessions)
- Synthesizes summaries at the appropriate time layer (daily, weekly, monthly)
- Commits results to Team Context as Markdown files
- Pushes to remote unless `--no-push` is set
Content hashing skips redundant LLM calls, so re-running is cheap.
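The skip check can be sketched as follows. This is a minimal illustration, not the real implementation: the cache layout and observation fields are assumptions.

```python
import hashlib
import json

def content_hash(observation: dict) -> str:
    """Stable hash of a raw observation (hypothetical structure)."""
    canonical = json.dumps(observation, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def needs_extraction(observation: dict, cache: dict) -> bool:
    """Skip the LLM call when this observation's hash is already cached."""
    return cache.get(observation["id"]) != content_hash(observation)

cache = {}
obs = {"id": "disc-1", "text": "Decided to ship v2 on Friday."}
print(needs_extraction(obs, cache))   # True: never seen, extraction needed
cache[obs["id"]] = content_hash(obs)
print(needs_extraction(obs, cache))   # False: unchanged content, LLM call skipped
```

Because the hash is computed over canonicalized content, only observations that actually changed trigger new LLM calls on a re-run.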
## Flags

| Flag | Type | Default | Description |
|---|---|---|---|
| `--layer` | string | `all` | Distill a specific layer: `daily`, `weekly`, or `monthly` |
| `--dry-run` | bool | `false` | Show what would be distilled without invoking the LLM |
| `--sync` | bool | `false` | Sync Ledger, Team Context, and code index before distilling |
| `--verbose` | bool | `false` | Log full prompts to stderr |
| `--model` | string | | Override the AI coworker model (e.g., `sonnet`, `opus`) |
| `--no-push` | bool | `false` | Skip pushing Team Context commits to remote |
| `--concurrency` | int | `1` | Max parallel LLM calls (1-8) |
| `--all` | bool | `false` | Process full history instead of the last 7 days |
## Sources
Three extraction sources feed into the daily layer:
### Discussions

Recorded team Discussions from Team Context. The LLM extracts decisions, action items, and key observations from VTT transcripts.

Output: `memory/.discussion-facts/`
### Sessions

Coding session summaries from your Ledger. No LLM calls are needed here: structured data from `summary.json` is mapped directly into facts.

Sessions with a quality score below 0.2 are filtered out.

Output: `memory/.session-facts/{date}/`
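The direct mapping can be sketched like this. The field names are assumptions for illustration; only the quality cutoff comes from the documented behavior.

```python
# Hypothetical shape of Ledger summary.json entries; field names are
# illustrative assumptions, not the real schema.
sessions = [
    {"date": "2024-05-01", "summary": "Refactored auth flow", "quality_score": 0.9},
    {"date": "2024-05-01", "summary": "Aborted session", "quality_score": 0.1},
]

def map_sessions_to_facts(sessions, min_quality=0.2):
    """Direct mapping: filter low-quality sessions and reshape, no LLM call."""
    return [
        {"date": s["date"], "fact": s["summary"]}
        for s in sessions
        if s["quality_score"] >= min_quality
    ]

print(map_sessions_to_facts(sessions))
# The 0.1-quality session is dropped; only the 0.9 session becomes a fact.
```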
### GitHub

PRs, issues, and commits from CodeDB. The LLM extracts what shipped, what's blocked, and review decisions.

Output: `memory/.github-facts/`
## Layers
Distillation runs in three time layers, each building on the one below it.
### Daily

Extracts facts from all three sources and synthesizes them into a daily summary.

Output: `memory/daily/YYYY-MM-DD-{uuid7}.md`
### Weekly

Synthesizes daily summaries into a weekly rollup. Runs when 7 or more days have passed since the last weekly summary.

Output: `memory/weekly/YYYY-WXX.md`
### Monthly

Synthesizes weekly summaries into a monthly overview. Runs on month change based on your team's configured timezone.

Output: `memory/monthly/YYYY-MM.md`
When you run `ox distill` without `--layer`, all applicable layers run based on timestamps in the state file.
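The scheduling rules above can be sketched as follows. This is a simplified model: it assumes the `last_weekly` and `last_monthly` timestamps from the state file are the values compared here.

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def weekly_due(last_weekly: datetime, now: datetime) -> bool:
    """Weekly rollup runs once 7 or more days have passed since the last one."""
    return now - last_weekly >= timedelta(days=7)

def monthly_due(last_monthly: datetime, now: datetime, tz: str = "UTC") -> bool:
    """Monthly overview runs when the month changes in the team's timezone."""
    zone = ZoneInfo(tz)
    last_local = last_monthly.astimezone(zone)
    now_local = now.astimezone(zone)
    return (now_local.year, now_local.month) != (last_local.year, last_local.month)

last = datetime(2024, 4, 28, 12, 0, tzinfo=timezone.utc)
now = datetime(2024, 5, 1, 12, 0, tzinfo=timezone.utc)
print(weekly_due(last, now))   # False: only 3 days have elapsed
print(monthly_due(last, now))  # True: April -> May boundary was crossed
```

Note that the two checks are independent, which is why a given run may produce a monthly summary without a weekly one, or vice versa.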
## Environment variables

| Variable | Description |
|---|---|
| `DISTILL_REPOS` | Colon-separated project roots for multi-repo distillation |
| `OX_TIMEZONE` | IANA timezone for date boundaries (also configurable via `ox config set timezone`) |
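For example (the repository paths are placeholders):

```shell
# Distill two project roots in one run, with explicit date boundaries
DISTILL_REPOS="$HOME/work/api:$HOME/work/web" OX_TIMEZONE="America/Los_Angeles" ox distill
```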
## State tracking

Distill tracks progress in `.sageox/cache/distill-state-v2.json`, which records `last_weekly` and `last_monthly` timestamps. Daily tracking uses frontmatter in the output files themselves. The default lookback window is 7 days; use `--all` for full history.
## Examples
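Common invocations, using the flags documented above:

```shell
# Re-run only the weekly layer
ox distill --layer weekly

# Preview what would be distilled, without invoking the LLM
ox distill --dry-run

# Process full history with more parallel LLM calls, keeping commits local
ox distill --all --concurrency 4 --no-push

# Use a specific model and log full prompts to stderr
ox distill --model opus --verbose
```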
## Customization

Place guidance files in `memory/guidance/` within your Team Context repo to influence how the LLM synthesizes summaries. These files let you steer what gets emphasized, what terminology to use, and what patterns to watch for.
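A guidance file (hypothetical filename and contents, for illustration only) might look like:

```markdown
<!-- memory/guidance/style.md -->
- Emphasize decisions and their rationale over routine status updates.
- Refer to services by their repository names, not internal codenames.
- Flag blockers that recur across multiple daily summaries.
```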
## Troubleshooting

**"No sources found"** -- The lookback window (7 days by default) found no new Discussions, sessions, or GitHub activity. Use `--all` to process full history, or check that your Ledger and Team Context repos are up to date with `--sync`.

**"LLM call failed"** -- Verify your AI coworker CLI is configured and authenticated. Use `--verbose` to see the full prompt sent to the model.

**Stale summaries** -- Delete `.sageox/cache/distill-state-v2.json` to reset state tracking and re-run.

**Partial failures** -- The pipeline pushes at the end even after partial failures, so completed work is preserved. Re-run to retry failed extractions.
## What's next

- `ox prime` -- Load Team Context into your AI coworker's session
- `ox record` -- Capture coding sessions that feed into distillation
- Discussions -- Record team conversations for extraction

