Finding signal on Twitter is more difficult than it used to be. We curate the best tweets on topics like AI, startups, and product development every weekday so you can focus on what matters.
"Nobody starts in the right place. You don’t begin with the correct tool and work sensibly within its constraints until you organically graduate to a more capable one. That is not how obsession works. Obsession works by taking whatever is available and pressing on it until it either breaks or reveals something. The machine’s limits become a map of the territory. You learn what computing actually costs by paying too much of it on hardware that can barely afford it... I know this because I was running Final Cut Pro X on a 2006 Core 2 Duo iMac with 3GB RAM and 120GB of spinning rust. I was nine. I had no business doing this. I did it every day after school until my parents made me go to bed."

A great paragraph in a great essay about the MacBook Neo: https://samhenri.gold/blog/20260312-this… Reminds me of my early days on computers. You did what you could with what you had because you didn't have a choice. via @daringfireball

“This Is Not The Computer For You” · Sam Henri Gold
Kimi K2.5 continues to be my daily driver for all the basic stuff where I don't need PhD-level intelligence. I just need it done quickly. Running it at 200 tps through @FireworksAI_HQ within @opencode is just such a delight.
I'm Boris and I created Claude Code. I wanted to quickly share a few tips for using Claude Code, sourced directly from the Claude Code team. The way the team uses Claude is different from how I use it. Remember: there is no one right way to use Claude Code -- everyone's setup is different. You should experiment to see what works for you!
.@dylan522p gives a deep dive on the 3 big bottlenecks to scaling AI compute: logic, memory, and power. And walks through the economics of labs, hyperscalers, foundries, and fab equipment manufacturers. Learned a ton about every single level of the stack.

0:00:00 – Why an H100 is worth more today than 3 years ago
0:24:52 – Nvidia secured TSMC allocation early; Google is getting squeezed
0:34:34 – ASML will be the #1 constraint for AI compute scaling by 2030
0:56:06 – Can’t we just use TSMC’s older fabs?
1:05:56 – When will China outscale the West in semis?
1:16:20 – The enormous incoming memory crunch
1:42:53 – Scaling power in the US will not be a problem
1:55:03 – Space GPUs aren't happening this decade
2:14:26 – Why aren’t more hedge funds making the AGI trade?
2:18:49 – Will TSMC kick Apple out from N2?
2:24:35 – Robots and Taiwan risk

Look up Dwarkesh Podcast on YouTube, Apple Podcasts, or Spotify. Enjoy!

Introducing our biggest upgrade to @googlemaps since the original launch, featuring Ask Gemini (with personalization), Immersive Navigation, and much more!! 🗺️
We trained a new flood forecasting model designed to predict flash floods in urban areas up to 24 hours in advance. To help address a flash-flood data gap, we created Groundsource: a new AI methodology using Gemini to identify 2.6M+ historical events across 150+ countries. We’re open-sourcing this dataset to advance global research, and urban flash flood forecasts are live now in Flood Hub to help communities stay safe.
