Tech Twitter

We doomscroll, you upskill.

Finding signal on X is harder than ever. We curate high-value insights on AI, Startups, and Product so you can focus on what matters.

GPT-5.1 is out! It's a nice upgrade. I particularly like the improvements in instruction following and adaptive thinking. The intelligence and style improvements are good too.

GPT-5.1 is now available in the API. Pricing is the same as GPT-5. We are also releasing gpt-5.1-codex and gpt-5.1-codex-mini in the API, specialized for long-running coding tasks. Prompt caching now lasts up to 24 hours! Updated evals in our blog post.
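
A minimal sketch of what trying the new models from Python might look like, assuming the standard `openai` SDK and that the model IDs from the announcement (`gpt-5.1`, `gpt-5.1-codex-mini`) are accepted as-is; prompt caching happens server-side, so nothing extra is configured here.

```python
# Minimal sketch, not an official example: assumes the `openai` Python SDK
# and that the announced model IDs work unchanged.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ordinary request against the new flagship model.
resp = client.responses.create(
    model="gpt-5.1",
    input="Summarize the trade-offs of long prompt caching windows.",
)
print(resp.output_text)

# The codex variants target long-running coding tasks; same call shape.
code_resp = client.responses.create(
    model="gpt-5.1-codex-mini",
    input="Write a Python function that deduplicates a list while preserving order.",
)
print(code_resp.output_text)
```

Per the tweet, pricing matches GPT-5 and cached prompt prefixes can now persist for up to 24 hours, which mostly benefits workloads that resend the same long system prompt or codebase context.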

Anthropic has regressed noticeably over the last few months. The web app is sluggish, latency is high, you can't do basic things like changing the model mid-conversation, and it takes a good while to get Sonnet or Opus to respond. Claude Code has also taken a serious hit with the recent limit changes. Not great overall.

Reddit AMA on GPT-5.1 and our customization updates. Tomorrow, 2PM PT.

From the OpenAI community on Reddit

Sam Altman suggested that OpenAI could reach $100 billion in revenue by 2027, while Anthropic has reportedly forecast $70 billion by 2028. Satya Nadella reacts to these projections.

Dear everyone who uses AI to write: You aren't fooling anyone. That is all.

Claude's domination of Design Arena has ended: GPT-5 and the new GPT-5.1 now top the benchmark.

We’ve developed a new way to train small AI models with internal mechanisms that are easier for humans to understand. Language models like the ones behind ChatGPT have complex, sometimes surprising structures, and we don’t yet fully understand how they work. This approach helps us begin to close that gap.

Understanding neural networks through sparse circuits
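
OpenAI's actual method is described in the linked post; purely as a loose illustration of the general idea of training a network whose internals are mostly zeros (and therefore easier to inspect), here is a toy sketch using a generic L1 sparsity penalty. The model, data, and penalty strength are all illustrative assumptions, not OpenAI's approach.

```python
# Toy sketch only: a generic L1 sparsity penalty on a tiny MLP, to illustrate
# the broad idea of models whose weights are mostly (near) zero and hence
# easier to trace. This is NOT OpenAI's method; see their post for details.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
l1_strength = 1e-3  # illustrative value

# Synthetic regression data standing in for a real task.
x = torch.randn(512, 16)
y = x[:, :4].sum(dim=1, keepdim=True)

for step in range(500):
    pred = model(x)
    task_loss = nn.functional.mse_loss(pred, y)
    # The L1 term pushes most weights toward zero, leaving a sparser "circuit".
    l1 = sum(p.abs().sum() for p in model.parameters())
    loss = task_loss + l1_strength * l1
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    n_small = sum((p.abs() < 1e-3).sum().item() for p in model.parameters())
    n_total = sum(p.numel() for p in model.parameters())
print(f"near-zero parameters: {n_small}/{n_total}")
```

With the penalty on, most parameters end up near zero, which is the rough intuition behind "sparse circuits": far fewer active connections to follow when you try to explain what the model is doing.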

If Gemini 3 matches Sonnet at code debugging while keeping the 1M-token context, that will be the end for Anthropic. Otherwise, the end for Anthropic will come next year.

i don't want my ai to sound human tbh, i just want it to do stuff. the more "personal" it tries to sound, the less i trust it
