# Using recordings in coding sessions
Every recording you import becomes searchable context for your AI coworkers. When Claude Code starts a session, it can reference transcripts, keyframes, summaries, and decisions from your walkthroughs.
## What your AI coworker sees
Recordings are broken into layers of progressively deeper context:
| Layer | What AI sees | When it's used |
|---|---|---|
| Summary | Chapters, decisions, action items | "What was discussed in the design review?" |
| Transcript | Timestamped speech with speaker labels | "What exactly did the designer say about the nav?" |
| Keyframes | Frame images + AI-generated descriptions | "Show me the mockup from the walkthrough" |
| Metadata | Title, participants, duration | Matching the right recording to your prompt |
Your AI coworker reads these artifacts — it doesn't watch the video.
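The layers in the table above can be sketched as a data shape. This is an illustrative sketch only; the field names and structure are assumptions, not the product's actual schema.

```typescript
// Hypothetical shape of the artifacts an AI coworker reads for one recording.
// Field names are illustrative, not the actual schema.
interface RecordingContext {
  metadata: { title: string; participants: string[]; durationSec: number };
  summary: { chapters: string[]; decisions: string[]; actionItems: string[] };
  transcript: { atSec: number; speaker: string; text: string }[];
  keyframes: { atSec: number; imagePath: string; description: string }[];
}

// Example instance for a design walkthrough.
const example: RecordingContext = {
  metadata: {
    title: "Checkout Flow Redesign",
    participants: ["designer", "engineer"],
    durationSec: 540,
  },
  summary: {
    chapters: ["Action Bar Design"],
    decisions: [],
    actionItems: [],
  },
  transcript: [
    { atSec: 150, speaker: "designer", text: "The bottom action bar stays pinned." },
  ],
  keyframes: [
    { atSec: 151, imagePath: "frame_151.png", description: "Bottom action bar mockup" },
  ],
};
```

Each layer answers a different kind of question, which is why a prompt like "show me the mockup" resolves against keyframes while "what did they decide" resolves against the summary.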
## Example: Implement from a design walkthrough
Ask Claude Code to implement a feature based on a recorded walkthrough:
"Look at the UX walkthrough 'Checkout Flow Redesign' and implement the bottom action bar shown at 2:30"
Claude Code:
- Reads the summary to find the "Action Bar Design" chapter
- Finds the keyframe at 2:31 showing the mockup
- Extracts requirements from the transcript (2:15-3:45)
- Implements based on what the designer explained
The result: implementation that matches design intent, not just a static screenshot.
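The timestamp matching in the steps above (prompt says 2:30, nearest keyframe is at 2:31) can be sketched with two small helpers. These are illustrative, not product code, and `nearestKeyframe` assumes at least one keyframe exists.

```typescript
// Convert an "m:ss" timestamp from a prompt into seconds (illustrative helper).
function toSeconds(ts: string): number {
  const [minutes, seconds] = ts.split(":").map(Number);
  return minutes * 60 + seconds;
}

// Pick the keyframe offset closest to the requested timestamp.
// Assumes a non-empty offsets array.
function nearestKeyframe(offsets: number[], targetSec: number): number {
  return offsets.reduce((best, offset) =>
    Math.abs(offset - targetSec) < Math.abs(best - targetSec) ? offset : best
  );
}

// A prompt referencing 2:30 matches the keyframe captured at 2:31 (151 s).
const target = toSeconds("2:30"); // 150
const match = nearestKeyframe([30, 151, 200], target); // 151
```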
## Example: Reference a design decision
Ask about decisions from past discussions:
"What did the team decide about the notification system in the last design review?"
Claude Code finds the relevant recording, reads the summary, and returns:
- Chapter: "Notification Redesign" (3:20-5:45)
- Decision: Toast notifications replace the modal dialog
- Action item: Implement toast component with auto-dismiss (5s default)
No digging through Slack threads or meeting notes.
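The decision above (toast notifications with a 5-second auto-dismiss default) could be implemented along these lines. This is a minimal sketch; the class and method names are illustrative, not an existing component.

```typescript
type Toast = { id: number; message: string; visible: boolean };

// Minimal toast manager sketch implementing the action item above:
// toasts auto-dismiss after 5 s by default.
class ToastManager {
  private toasts: Toast[] = [];
  private nextId = 0;

  // Show a toast and schedule its auto-dismiss (5000 ms default).
  show(message: string, durationMs = 5000): Toast {
    const toast: Toast = { id: this.nextId++, message, visible: true };
    this.toasts.push(toast);
    setTimeout(() => this.dismiss(toast.id), durationMs);
    return toast;
  }

  // Hide a toast early (or when its timer fires).
  dismiss(id: number): void {
    const toast = this.toasts.find((t) => t.id === id);
    if (toast) toast.visible = false;
  }

  visibleToasts(): Toast[] {
    return this.toasts.filter((t) => t.visible);
  }
}
```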
## Example: Debug from a bug report
Ask Claude Code to analyze a recorded bug report:
"Watch the bug report 'Cart Total Mismatch' and find the root cause"
Claude Code:
- Reads the transcript describing the issue (total shows $0 after removing last item)
- Examines the keyframe showing the empty cart state
- Traces the issue to `CartTotal.tsx`, where `reduce()` has no initial value
Bug reports with video context lead to faster fixes.
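The root cause above can be reconstructed in a few lines. The function bodies here are a hypothetical reconstruction of the `CartTotal.tsx` logic, not the actual source: calling `reduce()` on an empty array with no initial value throws a `TypeError`, so removing the last item breaks the total; supplying `0` as the initial value makes the empty cart sum to zero.

```typescript
// Buggy (hypothetical reconstruction): reduce() with no initial value
// throws a TypeError when the cart is empty.
function cartTotalBuggy(prices: number[]): number {
  return prices.reduce((sum, price) => sum + price);
}

// Fixed: 0 as the initial value makes an empty cart total 0.
function cartTotal(prices: number[]): number {
  return prices.reduce((sum, price) => sum + price, 0);
}
```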
## Best practices
**Give recordings descriptive titles.** AI coworkers search by title. "Sprint 12 Checkout Flow Redesign" beats "Recording 47".

**Narrate while recording.** Transcript quality drives extraction quality. Silent recordings produce no searchable context. Talk through what you're doing and why.

**Keep recordings to 5-10 minutes.** Focused context is more actionable than hour-long meetings. Split long sessions by topic.

**Reference recordings by title in prompts.** "Look at the checkout flow walkthrough" works better than "check that recording I made".

**Use 720p resolution.** AI processes 720p images faster with no loss in code/UI comprehension. 4K adds processing time with no benefit.
## How it connects
Your recording flows through this pipeline before your AI coworker sees it:
- Import — uploaded via web UI
- Transcribe — audio extracted, transcribed with speaker identification
- Keyframes — scene changes detected, frames analyzed by vision AI
- Summarize — chapters, decisions, and action items generated
- Commit — artifacts committed to your Team Context
- Access — AI coworkers load these via `ox agent prime` at session start
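The pipeline above can be sketched as ordered stages that each add one artifact layer. The runner, stage shape, and artifact keys below are assumptions for illustration, not the product's actual implementation.

```typescript
type Artifacts = Record<string, unknown>;
type Stage = { name: string; run: (artifacts: Artifacts) => Artifacts };

// Illustrative middle stages of the pipeline; each adds one artifact layer.
// Stage names follow the list above; the payloads are placeholders.
const stages: Stage[] = [
  { name: "transcribe", run: (a) => ({ ...a, transcript: "speaker-labeled text" }) },
  { name: "keyframes", run: (a) => ({ ...a, keyframes: ["frame + description"] }) },
  { name: "summarize", run: (a) => ({ ...a, summary: "chapters, decisions, action items" }) },
];

// Run stages in order, threading accumulated artifacts through each one.
function runPipeline(stages: Stage[], input: Artifacts): Artifacts {
  return stages.reduce((artifacts, stage) => stage.run(artifacts), input);
}

const artifacts = runPipeline(stages, { title: "Checkout Flow Redesign" });
```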
## What's next
- Video Import — import recordings from Loom, Figma, Cap
- Team Context — where recording artifacts live
- Claude Code Integration — how context flows into coding sessions
- SageOx + Figma — design walkthrough workflow

