Using Sub-Agents to Review AI Outputs Simultaneously
there's a simple way to 100x your AI outputs, and most people will never do it...

recent models like Opus 4.6 can spawn sub-agents that run tasks at the same time. most people use this for basic stuff.

the real play is having 3-5 sub-agents review Claude's work from completely different angles... simultaneously.

but not fake "expert personas" you invented in a prompt. real frameworks from real people.

how to build these expert profiles:

option 1: use someone so well-known their thinking is already in the training data
> have a Karpathy-style agent tear your prompts apart
> build a profile based on Ogilvy's principles to roast your copy

option 2: go deeper with NotebookLM
> feed it an entire youtube channel from someone you trust
> or their blog, their newsletter, their podcast transcripts
> prompt it to extract an expert card with their frameworks, decision patterns, and principles

now upload that material into your OpenClaw memory or a Claude project.

what happens next: Claude generates the first draft
> sub-agent 1 reviews it through expert A's lens
> sub-agent 2 stress-tests it with expert B's framework
> sub-agent 3 catches what the others missed
> all running at the same time

Claude thinks through every piece of feedback and rebuilds. you only see the final version... the one that survived 3-5 rounds of real scrutiny.

this is as close as it gets to the absolute best output for a specific task.
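if you want to wire this up yourself outside of Claude's built-in sub-agents, the fan-out is simple to sketch. this is a minimal illustration, not anyone's official implementation: `call_model` is a stub standing in for whatever LLM API you actually use, and the expert cards here are made-up placeholders for the profiles you'd extract with NotebookLM.

```python
from concurrent.futures import ThreadPoolExecutor

# Stub standing in for a real LLM API call -- swap in your actual client.
def call_model(prompt: str) -> str:
    return f"[model response to: {prompt[:40]}]"

# Hypothetical "expert cards": a lens name mapped to extracted principles.
EXPERTS = {
    "engineer": "Tear the reasoning apart. Flag every technical weak point.",
    "copywriter": "Roast the copy. Cut wasted words, sharpen the hook.",
    "skeptic": "Stress-test every claim. Mark anything unsupported.",
}

def review_in_parallel(draft: str) -> str:
    """Fan one review per expert lens out in parallel, then rebuild the draft."""
    def review(lens: str) -> str:
        return call_model(f"{lens}\n\ncritique this draft:\n{draft}")

    # All reviewers run at the same time, each seeing only its own lens.
    with ThreadPoolExecutor(max_workers=len(EXPERTS)) as pool:
        critiques = list(pool.map(review, EXPERTS.values()))

    # Final pass: one rewrite that has to survive every piece of feedback.
    feedback = "\n\n".join(critiques)
    return call_model(f"rewrite the draft using all feedback:\n{feedback}\n\ndraft:\n{draft}")
```

the design choice that matters: reviewers never see each other's critiques, so you get 3-5 genuinely independent angles instead of one agent agreeing with itself.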