AI Agents as Permissionless Leverage for Builders
Naval wrote the playbook for wealth creation in the internet era. Specific knowledge. Leverage. Accountability. Code and media as permissionless tools that work while you sleep.

That playbook still holds. But the tools have changed. AI agents are the new leverage. Not "AI will change everything someday" leverage. Leverage you can use today, this afternoon, to build things that used to require teams.

I've spent the last year building with agents daily. Maintaining the open-source Awesome LLM Apps repo (88k+ GitHub stars) with 100+ AI agent implementations. Watching teams at Google ship in months what used to take quarters.

What I've learned is that the same principles Naval outlined still apply, but they look different now. The people getting rich with AI aren't the ones who got lucky with timing. They're the ones who understand that agents changed the leverage equation.

Let me show you what that actually means.

## The leverage equation changed

Naval called code and media "permissionless leverage." Software that runs while you sleep. Content that scales without your time. AI agents are the next version of that leverage. Same principle, higher multiplier.

Last weekend I built a multi-agent VC due diligence system. Seven AI agents working as a team: company research, market analysis, financial modeling, risk assessment, investor memo generation, HTML report, and infographic design. A few hours of sustained work.

A few years ago that would have taken days. Maybe longer. A team of analysts. Specialists in each domain.

The barrier wasn't coding ability. The barrier was the mental model shift. Understanding that you could describe what you wanted, watch it take shape, course-correct, and iterate. The spec and the prototype becoming the same thing.

This is what Naval meant by leverage. Tools that multiply your output without multiplying your input. AI agents are that multiplier, available to anyone willing to learn how to use them.

## Specific knowledge still matters, but differently

Naval said specific knowledge can't be trained. If society can train you for it, society can train someone else and replace you.

In the age of agents, here's what that means: the model is table stakes. You're using Claude Opus 4.5. So is your competitor. You're using Gemini 3 Pro. So is the startup that launched last week. Everyone has access to the same models.

So where does the alpha come from? Context. The model is the same. The difference is what you feed it.

When I build agents now, they don't start from zero. They know what patterns work for my use case. They know what kills agents in production: context window overflow on long conversations, tool call loops where the agent gets stuck retrying the same failed action, silent failures that leave users confused, missing escalation paths when the agent hits its limits.

Someone who hasn't built and broken dozens of agents doesn't have this context. They prompt the same model, get something that works in a demo, and watch it fall apart on first contact with real users.

That gap, between "works when I show it" and "works when real users touch it," is the moat. It's not the model. It's the accumulated knowledge of what actually matters. You can't copy that. You earn it.

This is specific knowledge in the agent era. Not just what you know, but how well you can externalize it. How clearly you can feed your accumulated understanding to an agent so it produces something competitors can't replicate.
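Much of that accumulated knowledge ends up encoded in the harness around the agent rather than in the prompt. As a rough illustration of the failure modes listed above, here is a minimal sketch of a guarded agent loop; `agent_step`, the thresholds, and the escalation messages are assumptions for the sake of the example, not the implementation behind any specific repo project.

```python
def run_with_guardrails(agent_step, task, max_steps=20, max_repeats=2):
    """Drive an agent loop with basic production guardrails: a step budget,
    tool-call loop detection, and an explicit escalation path instead of a
    silent failure. `agent_step(task, history)` stands in for whatever
    actually calls the model; it returns ("final", answer) when done, or
    ("tool", name, args) when the agent wants to act."""
    seen_calls = {}   # (tool name, serialized args) -> attempt count
    history = []      # running transcript fed back to the agent

    for _ in range(max_steps):
        result = agent_step(task, history)
        if result[0] == "final":
            return result[1]

        _, tool_name, tool_args = result
        key = (tool_name, repr(tool_args))
        seen_calls[key] = seen_calls.get(key, 0) + 1
        if seen_calls[key] > max_repeats:
            # The agent keeps retrying the same failed action: stop the loop
            # and hand off instead of burning the context window.
            return f"Escalating to a human: repeated {tool_name} calls are not converging."

        history.append((tool_name, tool_args))

    # Step budget exhausted: say so explicitly rather than failing silently.
    return "Escalating to a human: step budget exhausted before the task finished."
```

Trimming `history` to fit the context window would be the natural next guardrail. The point is that each check exists because an agent broke without it, and that knowledge travels with you from build to build.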
## The five forms of agent-era wealth

Here's where the real opportunities are:

### 1. Problem depth beats tool breadth

The freelancers making $10K+ monthly with AI aren't the ones who know fifty tools. They're the ones who understand one problem deeply. A founder who knows exactly why users abandon their competitor's product, and can feed that context to agents that build features people actually use. A recruiter who understands what makes engineers actually respond to outreach, and uses agents to personalize at scale while maintaining that insight.

The tool is the amplifier. The problem understanding is the signal.

Action: Pick one problem you understand better than most people. Not "AI" generically. A specific pain point for specific people. That's your leverage point.

### 2. Context as a product

I've watched companies try to build agents in-house with smart engineers but no domain context. They get demos that impress executives and products that frustrate users. The missing ingredient is never the model; it's the context.

When I build a new agent for the Awesome LLM Apps repo, I never start from scratch. I maintain context docs that already know what "good" looks like, what patterns to use, what mistakes to avoid. First output is 90% there instead of 50%. That context library is the product. The agents are the delivery mechanism.

Action: Start documenting what you know. Not generic knowledge. The specific patterns, failures, and edge cases from your domain. This becomes the context that makes your agents better than anyone else's.
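To make that concrete, here is a minimal sketch of what a context library can look like in code, assuming a folder of markdown docs that hold your accumulated patterns and failure notes. The directory layout, prompt wording, and task brief are illustrative, not taken from the repo.

```python
from pathlib import Path


def build_system_prompt(context_dir: str, task_brief: str) -> str:
    """Assemble an agent's system prompt from a library of context docs:
    the patterns, known failures, and edge cases saved from past builds."""
    sections = []
    for doc in sorted(Path(context_dir).glob("*.md")):
        # Each doc becomes a labeled section of the prompt.
        sections.append(f"## {doc.stem}\n{doc.read_text().strip()}")
    context_block = "\n\n".join(sections)
    return (
        "You are building against the following accumulated context. "
        "Follow its patterns and avoid its documented failure modes.\n\n"
        f"{context_block}\n\n"
        f"Task brief:\n{task_brief}"
    )


# Usage: a new task only needs a new brief; every agent built on top
# benefits from improvements to the shared context library.
prompt = build_system_prompt("context/agents", "Build a churn-risk triage agent.")
```

The docs, not the model, carry the differentiation: swap the task brief for the next project, and every improvement to the library compounds across all of them.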
### 3. Taste becomes the bottleneck

When agents produce output quickly and in bulk, evaluation becomes the scarce skill. Agents will confidently produce things that look correct but miss the point entirely. The person who can look at ten agent-generated prototypes and immediately spot which one actually solves the problem? That's the valuable person.

This is harder than it sounds. You need reps. You have to build things, evaluate them, learn what "good enough to ship" actually feels like versus "technically works." There's no shortcut except doing the work.

Action: Build something small with agents every week. Not tutorials. Real problems you actually have. The goal isn't the output. It's developing the taste to know when output is good.

### 4. Accountability compounds

Naval's original point: the most accountable people have singular, public, and risky brands. Oprah. Elon. They can't hide behind anonymity.

In the agent era, this means building in public. Shipping work under your own name. Making claims you can be held to. The people building real wealth with AI aren't hiding their methods. They're documenting them. Writing about what works. Open-sourcing their implementations. Building reputation that compounds.

Why? Because context advantages compound when shared. When people trust you to know what you're talking about, they send you better problems to solve. Better problems mean better context. Better context means better outputs. The flywheel spins.

Action: Ship something public this week. Doesn't have to be perfect. Has to exist, with your name on it.

### 5. Systems beat sprints

The old model: work hard on a project, get paid, find next project, repeat. The agent model: build a system once, deploy it repeatedly, improve it with each iteration.

The difference? Traditional income scales with your time. System income scales with your context. A solid context doc that feeds into a repeatable agent workflow can serve ten clients as easily as one. The second client doesn't require double the work. They require marginally better context that benefits all clients.

This is what Naval meant by assets that earn while you sleep. Not passive income exactly. But leverage that compounds.

Action: Whatever you build, build it as a system. Document the context. Save the workflows. Make the second time easier than the first.

## What's actually left

When the gap between knowing what to build and having it built disappears, what's left?

Not the tools. Everyone has those. Not the models. Everyone has access.

What's left is what was always valuable: understanding problems so clearly that solutions become obvious. Context that can't be copied, only earned. Taste built through doing the work.

The barrier to building has never been lower. The return on understanding has never been higher.

Luck follows preparation. Context is how you prepare. Start today.