Finding signal on Twitter is more difficult than it used to be. We curate the best tweets on topics like AI, startups, and product development every weekday so you can focus on what matters.
Today's chapter of Agentic Engineering Patterns is some good general career advice which happens to also help when working with coding agents: Hoard things you know how to do https://simonwillison.net/guides/agentic…
Last weekend I held an event that's the opposite of a hackathon. No competition, no pressure, no pitches. Just people who love vibe coding, vibe coding together. Audience was half technical, half non-technical. Everyone made new friends, got new ideas, and learned something. Planning to do this more!

there's a reason why some people profit from new AI releases on day 1 while you're still watching tutorials... they don't know the tools better than you, they know the principles

a software engineer gets almost the same quality from Claude Code, Codex and OpenCode... because he understands what's happening under the hood

an artist switches from Midjourney to Nano Banana Pro and the output stays fire... because they trained their eye, not their muscle memory on one UI

the past few weeks have been chaos on X: new model drops, new SaaS trending, new MCP server "changing the game" EVERY SINGLE DAY

if you chased every shiny thing you'd have switched your entire workflow 5 or 6 times, and you'd be starting from zero each time

here are some of the core principles you need to master:

> how LLMs fail (hallucinations, context limits, token behavior)
> how to structure instructions models actually follow
> how to design agent systems and prompt architecture
> how reasoning works, how deep research functions

learn them once & apply them everywhere
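The "context limits" principle above is the most mechanical of the list, so it can be sketched in code. This is a minimal illustration, not any real API: it assumes a rough ~4-characters-per-token estimate (real tokenizers give exact counts) and shows the basic discipline of budgeting history before sending it to a model.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # A real tokenizer (e.g. the model vendor's) gives exact counts.
    return max(1, len(text) // 4)

def fit_context(chunks: list[str], budget_tokens: int) -> list[str]:
    """Keep the most recent chunks that fit inside the model's context budget."""
    kept, used = [], 0
    for chunk in reversed(chunks):  # walk newest-first
        cost = estimate_tokens(chunk)
        if used + cost > budget_tokens:
            break  # older history no longer fits; drop it
        kept.append(chunk)
        used += cost
    return list(reversed(kept))  # restore chronological order

# Three ~100-token messages, budget for ~150 tokens: only the newest survives.
history = ["a" * 400, "b" * 400, "c" * 400]
trimmed = fit_context(history, budget_tokens=150)
```

The same budgeting logic applies whatever model you use, which is the tweet's point: the principle transfers even when the tool changes.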
OpenAI Residency 2026 applications are OPEN btw

- 6-month full-time paid research gig in SF
- ~$220K annualized ($18.3K/month) + relocation
- NO prior ML/AI experience required, just strong technical fundamentals & fast learning
- Work on frontier AI with top researchers

Interviews start in Jan 2026

Apply: https://openai.com/careers/residency-202……
Alex Matthew (@alxmthew) is 17 years old and goes to an AI-powered high school. He does his learning in 3 hours a day through an AI platform that personalizes his lessons. He has no teachers. Instead, the adults in the classroom are called “guides.” Their job isn’t to deliver information through lectures—it’s to help point kids in the right direction.

Alex spends hours a day at school building a real-world project of his own creation. In his case, it’s @berryaiplushies, an AI stuffed animal designed to help teenagers become more self-aware.

I’m fascinated by how AI might change education: how we learn, and what we learn. That’s why I was psyched that Alex’s parents okay’d him to come on @every’s AI & I. Alex is the youngest guest we’ve ever had, and we covered a lot of ground:

- What a day inside Alpha High School looks like
- Why he doesn't use AI to cheat—even though he easily could
- Whether ambitious teenagers still care about college
- How Gen Z really feels about social media, books, and reading
- His rankings of the foundation models from @OpenAI, @AnthropicAI, @Google, and @xAI

This is a must-watch for anyone curious about what growing up with AI looks like from the inside. The kids are going to be alright. Watch below!

Timestamps:
Introduction: 00:01:30
A typical day inside Alpha High School: 00:04:08
Why Alpha replaced teachers with “guides” focused on motivating students: 00:06:54
Why Alex doesn’t use AI to cheat, even though he could: 00:12:09
Do ambitious teenagers care about going to college?: 00:19:51
Alex’s take on how Gen Z thinks about AI: 00:25:12
How Alex thinks about the effects of social media: 00:27:52
Gen Z’s relationship with books and reading: 00:31:29
Alex ranks ChatGPT, Claude, Gemini, and Grok: 00:38:57
Why Alex is building Berry, an AI stuffed animal for teen mental health: 00:47:12
It’s easier than ever to write code… And yet the hard parts of software engineering are still *very hard* Don’t get discouraged by the hype. There’s still so much to learn and build.
“The thing that will differentiate you more in your career than anything else is being the most hyper-curious person.” - @bgurley

“If you are the most curious person constantly learning in your field, you will do extremely well.”

“I can’t make you the most talented person in your company or your field, but you have no excuse not to be the most knowledgeable person. The information is all out there.”
.@polymath_labs is training world generation models to automate the creation of RL environments.

Traditionally, RL environment generation has been bottlenecked by human data. Superintelligence will never be achieved by human data alone. Polymath is building the core technology to enable automated environment generation using far less human effort than traditionally required, and eventually none. This allows for more complex and realistic worlds, and higher quality, scale, and diversity of tasks. This will be essential to unlock RL scaling.

The end goal is to create large-scale, long-horizon environments from a text description alone. This will enable the creation of worlds of arbitrary complexity and scale, which is foundational for training & evaluating autonomous, superintelligent AI agents.

Congrats on the launch, @dylanma5621 and @narenyenuganti! https://ycombinator.com/launches/PYT-pol…
BREAKING: Google Research just dropped the textbook killer. It's called "Learn Your Way" and it uses LearnLM to transform any PDF into 5 personalized learning formats. Students using it scored 78% vs 67% on retention tests. The education revolution is here.
I’m starting to get into a habit of reading everything (blogs, articles, book chapters, …) with LLMs. Usually pass 1 is manual, then pass 2 is “explain/summarize”, and pass 3 is Q&A. I usually end up with a better/deeper understanding than if I had just moved on. This is growing into one of my top use cases.

On the flip side, if you’re a writer trying to explain/communicate something, we may increasingly see less of a mindset of “I’m writing this for another human” and more of “I’m writing this for an LLM”. Because once an LLM “gets it”, it can then target, personalize and serve the idea to its user.
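The three-pass reading loop above can be sketched as a small helper. This is an illustrative sketch, not the author's actual tooling: `ask_model` is a placeholder for whatever chat-completion call you use, and the prompt wording is invented for the example.

```python
from typing import Callable

def three_pass_read(document: str,
                    ask_model: Callable[[str], str],
                    questions: list[str]) -> dict:
    """Pass 1 is your own manual read; passes 2 and 3 hand the text to an LLM.
    `ask_model` is any function mapping a prompt string to a reply string."""
    # Pass 2: ask the model to explain/summarize the text.
    summary = ask_model(f"Explain and summarize the following:\n\n{document}")
    # Pass 3: Q&A against the text to probe your understanding.
    answers = {
        q: ask_model(f"Based on this text:\n\n{document}\n\nQuestion: {q}")
        for q in questions
    }
    return {"summary": summary, "answers": answers}

# Usage with a stub model (swap in a real API client for actual reading):
echo = lambda prompt: f"[model reply to {len(prompt)} chars]"
result = three_pass_read("Attention is all you need...", echo,
                         ["What problem does it solve?"])
```

Keeping the model behind a plain callable makes the loop model-agnostic, which matches the habit described: the workflow stays fixed while the underlying LLM can change.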
Video Overviews are now available on the NotebookLM mobile app (and in full-screen!) 😍 Generate and enjoy Video Overviews directly from your phone, because learning is an anywhere and everywhere activity.
please stop outsourcing your thinking to strangers on the internet

downloading skills from random sources and hoping your AI output gets better is the same energy as copy pasting prompts without reading them... you get garbage results, you don't understand why, you blame the tool

the problem with this approach is very simple:

> someone builds a skill based on THEIR workflow
> THEIR writing style
> THEIR specific use case
> you install it and expect it to just work for YOUR situation

that's not how any of this works

and it gets worse... thousands of malicious skills are live on github: prompt injections, credential theft, reverse shells hidden in skills that look completely legit

you're literally giving random code full access to your machine without reading it

the fix exists tho: Anthropic released a full 32-page guide on building skills... they even have a skill-creator skill that drafts your first one for you

a skill is just a folder with a system prompt, plain english instructions and no coding required

take 30 minutes to understand:

- what the main prompt is doing
- what the referenced files contain
- why certain instructions are structured a specific way

when you understand the engineering behind a skill you get dramatically better results because you can tune it to YOUR context and you stop being a target for injections you can't even see

build your own & read what you download
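The "folder with a system prompt" idea can be sketched as a minimal skill file. This is an illustrative layout only; the skill name, wording, and instructions below are invented for the example, not taken from Anthropic's guide:

```markdown
---
name: weekly-report
description: Drafts my weekly status report in my team's format.
---

# Weekly report skill

When the user asks for a weekly report:

1. Ask for this week's completed items if none were provided.
2. Group items under "Shipped", "In progress", and "Blocked".
3. Keep the whole report under 200 words, plain text, no emoji.
```

Because the whole thing is plain English in a small file, reading (and rewriting) a downloaded skill before running it takes minutes, which is exactly the audit the tweet is asking for.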
the truth is no matter how hard you try, you’ll never be able to keep up with 100% of what’s going on in AI right now... there’s just too much action
We’re launching full-length, on demand practice exams for standardized tests in @GeminiApp, starting with the SAT, available now at no cost. Practice SATs are grounded in rigorously vetted content in partnership with @ThePrincetonRev, and Gemini will provide immediate feedback highlighting where you excelled and where you might need to study more. To try it out, tell Gemini, “I want to take a practice SAT test.”
New podcast on AI (full episode). Links below.

A Motorcycle for the Mind 0:00
If you want to learn, do 2:13
Vibe coding is the new product management 6:49
Training models is the new coding 10:13
Is traditional software engineering dead? 13:07
There is no demand for average 14:12
The hottest new programming language is English 18:36
AI is adapting to us faster than we are adapting to it 22:56
No entrepreneur is worried about AI taking their job 26:46
The goal is not to have a job 29:49
AIs are not alive 32:55
AI fails the only true test of intelligence 36:49
Early adopters of AI have an enormous edge 39:37
AI meets you exactly where you are 43:02
Always leverage the best intelligence 44:37
If you can't define it, you can't program it 49:37
The solution to AI anxiety is action
Sharing an interesting recent conversation on AI's impact on the economy. AI has been compared to various historical precedents: electricity, industrial revolution, etc. I think the strongest analogy is that of AI as a new computing paradigm (Software 2.0), because both are fundamentally about the automation of digital information processing.

If you were to forecast the impact of computing on the job market in the ~1980s, the most predictive feature of a task/job you'd look at is to what extent its algorithm is fixed, i.e. are you just mechanically transforming information according to rote, easy-to-specify rules (e.g. typing, bookkeeping, human calculators, etc.)? Back then, this was the class of programs that the computing capability of that era allowed us to write (by hand, manually).

With AI now, we are able to write new programs that we could never hope to write by hand before. We do it by specifying objectives (e.g. classification accuracy, reward functions), and we search the program space via gradient descent to find neural networks that work well against that objective. This is my Software 2.0 blog post from a while ago.

In this new programming paradigm, the new most predictive feature to look at is verifiability. If a task/job is verifiable, then it is optimizable directly or via reinforcement learning, and a neural net can be trained to work extremely well. It's about the extent to which an AI can "practice" something. The environment has to be resettable (you can start a new attempt), efficient (a lot of attempts can be made), and rewardable (there is some automated process to reward any specific attempt that was made).

The more a task/job is verifiable, the more amenable it is to automation in the new programming paradigm. If it is not verifiable, it has to fall out from the neural net magic of generalization, fingers crossed, or via weaker means like imitation. This is what's driving the "jagged" frontier of progress in LLMs.
Tasks that are verifiable progress rapidly, including possibly beyond the ability of top experts (e.g. math, code, amount of time spent watching videos, anything that looks like puzzles with correct answers), while many others lag by comparison (creative, strategic, tasks that combine real-world knowledge, state, context and common sense). Software 1.0 easily automates what you can specify. Software 2.0 easily automates what you can verify.
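The three properties named above (resettable, efficient, rewardable) can be made concrete with a toy environment. This is a minimal sketch of the general idea, not any specific RL library's API; the task itself (count down to a target by subtraction) is invented for illustration.

```python
class CountdownEnv:
    """Toy verifiable task: reach a target number by repeated subtraction.

    It has the three properties the text names:
    - resettable: a fresh attempt can be started at any time,
    - efficient: each step is a cheap state transition, so many attempts are possible,
    - rewardable: an automated check scores every attempt, no human in the loop.
    """

    def __init__(self, start: int = 20, target: int = 0):
        self.start, self.target = start, target
        self.reset()

    def reset(self) -> int:
        # Resettable: begin a new attempt from the initial state.
        self.state = self.start
        return self.state

    def step(self, action: int) -> tuple[int, float, bool]:
        # Efficient: one cheap transition per step.
        self.state -= action
        done = self.state <= self.target
        # Rewardable: the reward is computed automatically from the state.
        reward = 1.0 if self.state == self.target else 0.0
        return self.state, reward, done

# One successful attempt: subtract exactly the remaining distance.
env = CountdownEnv(start=3, target=0)
env.reset()
state, reward, done = env.step(3)  # state 0, reward 1.0, done True
```

Anything with this shape can be practiced millions of times and optimized against the reward; tasks that resist being cast into this interface (most creative and strategic work) are exactly the ones the text says lag behind.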
A few days ago I shared a life calendar I built: your entire life, shown as weeks on your iPhone lock screen. A lot of people asked for it, so here it is: https://thelifecalendar.com I also added a yearly view to visualize the progress of the current year. Happy New Year