We doomscroll, you upskill.
Finding signal on X is harder than ever. We curate high-value insights on AI, Startups, and Product so you can focus on what matters.

ceo @intelligenceco, i am going to get ai to run companies. timeline to agi is 10 months
My predictions for 2026: "Everything works now and that's insane."

Coding is, for all intents and purposes, solved in the next generation of models. Everything related to coding that isn't "solved" is more of a harness/context problem than anything else. This means everything from backend, reliability and deployment, security, and frontend can be handed off to an agent. The exception is actual business logic and user experience, which still won't feel right until two more generations of improvement in visual and spatial reasoning.

Browser agents become highly effective and are used for the long tail of problems that can't be solved by purpose-built software. As a result, bot protection becomes a big thing people have to fight and worry about to get this tooling to work, and companies like Browserbase make a shit ton of money solving it. Everything from booking a flight, getting a dinner reservation, to scraping leads on LinkedIn and applying to jobs for you is doable with a browser agent. The browser won't be where people access this; it'll just be a normal chatbot-style interface that's async.

There's a breakthrough in "continuous learning" in the first quarter, and a new paradigm labs call "learning models" (or something similar) starts getting released every quarter. Anthropic gets there first, then OpenAI follows. Continuous learning is a combined system of evolving context and continued finetuning on a model (a rough sketch of what that loop could look like is further down this post). This doesn't solve the context engineering problem, but it does effectively eliminate the need for finetuning entirely.

Agents get really good at managing their own context and the context of sub-agents around them. Because of this, multi-agent swarms start to work really well. Context expansion, context compression, and retrieval become solved problems by the end of the year, the same way structured output generation is a solved problem now.

AI designing interfaces, whether through things like the Figma MCP or by writing straight to code, gets really good. We enter the uncanny valley of design in early-to-mid 2026.

AI assistants move to single-threaded and away from having a ton of different chats. Single-threaded is the way people interact with other people, and it was a difficult problem pre-2025. But since evolving context and memory became mainstream, single-threaded experiences like Poke dominate. This is in prep for voice mode to become one of the default ways to interact with AI.

Voice mode gets a big step up around the middle of the year. Pre-2026, voice modes were just odd and still felt like talking to a robot. Greatly improved voice modes make voice a default way of interacting with the models. "Her" is basically reality by fall.

AGI, if defined as Samantha from the movie "Her" (my personal definition), is a reality by October 2026. It's a single-threaded experience, but you can access context in a very clever and guided way. OpenAI claims they have it first, though for various legal reasons they might not actually say it's AGI, and it's released as a consumer product. There is much debate over whether this is actually AGI. The ending of the movie, where the OSes all band together to make ASI, is not happening in 2026.

OpenAI IPOs because they need the capital. They release "Chat-1" or whatever they want to call it during the roadshow.

Nano-Banana-67 and similar new tooling basically automates investment analysts out of their jobs; it still takes a long time to diffuse because of tooling and organizational lag.
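To make the continuous-learning idea concrete, here is a minimal sketch of the loop I'm imagining: an evolving context store the agent curates for itself, plus periodic finetuning on accumulated interactions. Every name in it (ContextStore, respond, finetune_step) is an illustrative stand-in, not any lab's real API.

```python
# Hypothetical sketch of "continuous learning": evolving context
# plus periodic continued finetuning. All names are illustrative.

from dataclasses import dataclass, field

@dataclass
class ContextStore:
    """Evolving context: the agent curates its own long-term memory."""
    entries: list[str] = field(default_factory=list)

    def update(self, interaction: str) -> None:
        # A real system would let the model decide what to keep,
        # compress, or discard; here we just append.
        self.entries.append(interaction)

    def render(self, budget: int = 5) -> str:
        # Return the most recent entries that fit a context budget.
        return "\n".join(self.entries[-budget:])

def respond(weights: dict, context: str, user_msg: str) -> str:
    # Stand-in for a model call; a real system would hit an LLM here.
    return f"[reply given {len(context)} chars of context] {user_msg}"

def finetune_step(weights: dict, batch: list[str]) -> dict:
    # Stand-in for continued finetuning on recent interactions.
    weights["updates"] = weights.get("updates", 0) + 1
    return weights

def continuous_learning_loop(messages: list[str]) -> None:
    weights: dict = {}
    store = ContextStore()
    buffer: list[str] = []
    for msg in messages:
        reply = respond(weights, store.render(), msg)
        store.update(f"user: {msg} | assistant: {reply}")  # evolving context
        buffer.append(msg)
        if len(buffer) >= 3:  # periodically fold experience into weights
            weights = finetune_step(weights, buffer)
            buffer.clear()

continuous_learning_loop(["book a flight", "same airline as last time", "aisle seat"])
```

The point of the two halves: the context store handles fast, cheap adaptation within a session, while the finetune step slowly bakes repeated patterns into the weights, which is why standalone finetuning stops being a separate product.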
Google becomes the leader in AI for biology and acquires some biotech companies to deploy it. Basically no human trials are seen, though the company wants to run trials on people in developing countries to accelerate time to market, and this causes some PR issues for approximately 48 hours on Twitter.

Apple leads the way in proposing a new standard to verify that images and videos are not AI generated, and starts requiring social media apps on the App Store to follow it. It uses camera and microphone metadata to prove something came directly from a camera (a rough sketch of how that could work is at the bottom of this post). They get buy-in from Google, TikTok, and Meta by the end of the year by basically forcing them to do it.

We see some seriously insane game demos from the world-simulation companies, but none of it is really playable until early 2027. Sim gaming exists, and kinda works, but it's not great yet. Similar vibe to early VR.

Cerebras or Groq gets acquired by one of the labs. Microsoft acquires Cursor for some absurdly high number. The government gets scared of China, so they give OpenAI and Anthropic 100 billion dollars or something crazy like that (long shot, but I wanted to put this down anyways). Apple buys Thinking Machines after Anthropic turns them down. E2B or Daytona (the agent sandbox companies) gets acquired by Cursor.

Waymos are in every major city except New York (they'll never work in New York). Waymo has a problem manufacturing enough cars to meet demand.

A few founders go to jail for lying about ARR numbers. Seed round valuations decline from current levels, but the AI hype doesn't die down too hard.

(Some of these are more thought through than others. Also published to my personal blog.)
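For the provenance prediction above, here's the kind of mechanism I have in mind: the camera signs a hash of the image bytes together with its capture metadata using a device key, and a verifier (say, a social app) checks the signature against the device's public key. This is a generic signing sketch with made-up names, not Apple's actual proposal.

```python
# Hypothetical capture-provenance sketch: sign (image bytes + capture
# metadata) with a per-device key; any edit to either breaks the check.

import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Device key pair; in practice this would live in secure hardware.
device_key = Ed25519PrivateKey.generate()
device_pub = device_key.public_key()

def _digest(image_bytes: bytes, metadata: dict) -> bytes:
    # Bind the image bytes and the capture metadata into one hash.
    return hashlib.sha256(
        image_bytes + json.dumps(metadata, sort_keys=True).encode()
    ).digest()

def sign_capture(image_bytes: bytes, metadata: dict) -> bytes:
    """Camera-side: produce a provenance signature at capture time."""
    return device_key.sign(_digest(image_bytes, metadata))

def verify_capture(image_bytes: bytes, metadata: dict, sig: bytes) -> bool:
    """Verifier-side (e.g. a social app): check the capture signature."""
    try:
        device_pub.verify(sig, _digest(image_bytes, metadata))
        return True
    except InvalidSignature:
        return False

img = b"...raw sensor data..."
meta = {"device": "camera-01", "timestamp": "2026-03-01T12:00:00Z"}
sig = sign_capture(img, meta)
assert verify_capture(img, meta, sig)             # untouched capture verifies
assert not verify_capture(img + b"x", meta, sig)  # any edit breaks the proof
```

The hard parts aren't the crypto; they're key distribution, what happens after legitimate edits like cropping, and getting every platform to actually check the signature, which is why I think it takes Apple-level leverage to make it stick.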