Feature
Leave comments, click "Finish Review", and your agent is notified automatically via crit listen. The agent reads .crit.json, makes edits, and runs crit go <port> when done. Crit reloads with a diff, and you review again.
When you click "Finish Review", Crit writes .crit.json: structured comment data with per-file sections. Your agent is notified automatically if it's listening via crit listen.
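The exact schema of .crit.json isn't documented here, so as a rough illustration only: an agent-side reader might look like the sketch below, where the field names (`files`, `comments`, `resolved`) are assumptions, not Crit's actual format.

```python
import json

# Hypothetical .crit.json shape: the field names ("files", "comments",
# "resolved") are illustrative, not Crit's documented schema.
review = json.loads("""
{
  "files": {
    "src/app.py": {
      "comments": [
        {"line": 42, "body": "Handle the empty-list case", "resolved": false},
        {"line": 7, "body": "Rename this helper", "resolved": true}
      ]
    }
  }
}
""")

def unresolved(review):
    """Collect (path, line, body) for every comment still open."""
    return [
        (path, c["line"], c["body"])
        for path, section in review["files"].items()
        for c in section["comments"]
        if not c["resolved"]
    ]

for path, line, body in unresolved(review):
    print(f"{path}:{line}: {body}")
```

The point is not the particular schema but the shape of the contract: per-file sections the agent can walk mechanically, rather than prose it has to re-parse from a chat log.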
The generated prompt tells the agent to read .crit.json, address unresolved comments, and run crit go <port> when done. No copy-paste needed: crit listen delivers it directly.
When the agent runs crit go, the browser starts a new round with a diff of what changed. Previous comments show as resolved or still open.
Add new comments on the updated version and repeat. When all comments are resolved, Crit detects it and generates a clean confirmation prompt — the loop ends when you're satisfied.
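The "all resolved" check that ends the loop reduces to a simple predicate. A minimal sketch, again assuming a `files` / `comments` / `resolved` layout for .crit.json (these names are illustrative, not Crit's actual format):

```python
import json

def all_resolved(review: dict) -> bool:
    """True when no comment in any file section is still open.
    The schema here ("files" -> "comments" -> "resolved") is an
    assumption for illustration, not Crit's actual format."""
    return all(
        c["resolved"]
        for section in review["files"].values()
        for c in section["comments"]
    )

# One open comment keeps the review loop going.
review = json.loads(
    '{"files": {"README.md": {"comments": '
    '[{"line": 1, "body": "Fix the typo", "resolved": false}]}}}'
)
print(all_resolved(review))  # prints False while a comment is open
review["files"]["README.md"]["comments"][0]["resolved"] = True
print(all_resolved(review))  # prints True once everything is addressed
```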
The default interaction with AI coding agents is conversational: you type, it responds, you type again. That works for small tasks. For anything involving real code changes across multiple files, it breaks down because the conversation history gets long, context is lost, and you can't easily point back to "that thing you changed in round 2."
Crit replaces the conversation loop with a structured review loop. You review the output, leave comments on specific lines, click "Finish Review", and crit listen notifies the agent. The agent reads exactly what you wrote and where you wrote it, then runs crit go when it's done. You get a diff. You review again. The loop has a clear state at each step.
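Those clear states can be read as a tiny state machine. The sketch below is a mental model, not Crit's internals, and the state names are mine:

```python
from enum import Enum, auto

class Round(Enum):
    REVIEWING = auto()      # you leave line comments in the browser
    AGENT_WORKING = auto()  # agent edits per .crit.json
    DIFF_READY = auto()     # crit go produced a fresh diff

def advance(state: Round) -> Round:
    """One step of the loop: review -> agent -> diff -> review again."""
    return {
        Round.REVIEWING: Round.AGENT_WORKING,   # "Finish Review" notifies the agent
        Round.AGENT_WORKING: Round.DIFF_READY,  # agent runs crit go when done
        Round.DIFF_READY: Round.REVIEWING,      # you review the new diff
    }[state]
```

Contrast with a chat thread, where "what state are we in?" has no crisp answer.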
This structure also makes it easier to stop and resume. The state lives in .crit.json, so if you close your terminal and come back the next day, the review is still there. You're not reconstructing context from a chat thread.
Most AI agent workflows treat human feedback as a free-text prompt injected into the conversation. That works, but it's informal — there's no enforced structure around what the human reviewed, what they approved, and what they asked to change. Crit externalizes that state into a file the agent reads directly, which is more reliable and easier to audit after the fact.
Install Crit and start reviewing agent output in seconds.
$ brew install tomasz-tomczyk/tap/crit
$ crit            # or: crit plan.md
Or download a pre-built binary from GitHub Releases.