Step 8 of 8

Human Review

~5 min
What you'll learn
  • The most common Brenner-loop failure modes
  • How to surgically request a revision from the agent
  • How to decide what to do next
What you'll do
  • Run the checklist against the artifact
  • Request revisions for any failed checks
  • Archive the artifact and choose next tests

Your agent can produce a beautiful artifact that still fails Brenner's standards. Use this checklist to catch the most common failure modes.

The checklist covers 11 checks across four areas:

  • Hypotheses
  • Discriminative Tests
  • Assumptions & Scale
  • Critique & Next Steps
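Before reviewing by hand, you can run a rough automated pre-check that flags any of the four checklist areas missing from the artifact. This is a minimal sketch, assuming the artifact is plain text and its section headings contain these keywords; adjust the names to match your own artifact's structure:

```python
# Sketch: flag checklist areas missing from an artifact.
# The section keywords below are assumptions -- match them to your headings.
REQUIRED_SECTIONS = [
    "Hypotheses",
    "Discriminative Tests",
    "Assumptions",
    "Critique",
]

def missing_sections(artifact_text: str) -> list[str]:
    """Return the required section names not found in the artifact text."""
    lowered = artifact_text.lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in lowered]

artifact = """
## Hypotheses
H1 ... H2 ...
## Discriminative Tests
T1 ...
"""
print(missing_sections(artifact))  # → ['Assumptions', 'Critique']
```

A pass here only means the headings exist; the substantive checks (genuinely different hypotheses, contrastive predictions, potency) still need your judgment.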

Request a Revision (Copy/Paste)

When you find failures, paste this to your agent with the failed items listed:

Revision prompt
Your previous artifact is close, but it fails some Brenner review checks:
[LIST THE FAILED CHECKS HERE]
Please revise the artifact, keeping the same section headings and structure. For each fix:
- Make the minimal change necessary
- Explicitly mark additions with "NEW:" so I can spot them
- Ensure tests are discriminative (different predictions across hypotheses)
- Ensure every test has a potency check (what we learn if null)
Return the updated artifact in full.
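If you track failed checks as a list while reviewing, you can fill in the prompt template programmatically instead of pasting check names by hand. A minimal sketch (the template text mirrors the revision prompt above; the example check names are hypothetical):

```python
# Sketch: build the revision prompt from a list of failed checks.
REVISION_TEMPLATE = """Your previous artifact is close, but it fails some Brenner review checks:
{failures}
Please revise the artifact, keeping the same section headings and structure. For each fix:
- Make the minimal change necessary
- Explicitly mark additions with "NEW:" so I can spot them
- Ensure tests are discriminative (different predictions across hypotheses)
- Ensure every test has a potency check (what we learn if null)
Return the updated artifact in full."""

def build_revision_prompt(failed_checks: list[str]) -> str:
    """Render one bullet per failed check into the template."""
    failures = "\n".join(f"- {check}" for check in failed_checks)
    return REVISION_TEMPLATE.format(failures=failures)

print(build_revision_prompt([
    "Hypotheses are not mutually distinct",
    "Test 3 lacks a potency check",
]))
```

Keeping the template in one place makes it easy to tighten the instructions once and reuse them across review loops.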

Compare to Worked Examples

If you’re unsure what “good” looks like, compare your artifact to the canonical worked examples. Notice how hypotheses are genuinely different, predictions are contrastive, and tests are designed to exclude.

Pro Tip
You can archive your final artifact as a Session in the web UI. Start a new session at /sessions/new (or explore existing sessions at /sessions).

Agent-Assisted Tutorial Complete!

You've accomplished:

  • Set up an AI coding agent to internalize the Brenner method
  • Refined a research question using Brenner-style critique
  • Generated a hypothesis slate + assumption ledger
  • Produced discriminative tests ranked by potency
  • Reviewed the artifact with a failure-mode checklist

Next: run another loop on a new question, or move to the Multi-Agent Cockpit for parallel, role-separated orchestration.