
Context Management in Large Language Model Prompts

context rot is the silent killer of AI output quality...

i tested identical prompts with full context files in Gemini 3.1 Pro and Opus 4.6

Gemini's window is significantly larger... shouldn't that give it an edge?

nope, both models hit the same wall at roughly the same point in the conversation

output goes from sharp to generic to useless

the window size is a distraction today, the real bottleneck is how you manage what goes inside it

what matters now:
- how token-efficient your context is
- knowing which context to load and when
- calling the right skill at the right moment instead of dumping everything upfront

this is a very underrated skill
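A minimal sketch of the "load which context, and when" idea above: rather than concatenating every context file into the prompt upfront, rank files against the current query and stop once a token budget is spent. The `client.generate` call, the `context/` directory, and the keyword-scoring heuristic are all illustrative assumptions, not any particular library's API.

```python
from pathlib import Path

MAX_CONTEXT_TOKENS = 8_000       # keep well below the model's window
APPROX_CHARS_PER_TOKEN = 4       # rough heuristic for budgeting

def score(doc: str, query: str) -> int:
    """Crude relevance score: count query keywords that appear in a document."""
    words = {w.lower() for w in query.split() if len(w) > 3}
    text = doc.lower()
    return sum(1 for w in words if w in text)

def build_prompt(query: str, context_dir: str = "context") -> str:
    """Load only the most relevant context files until the budget runs out,
    instead of dumping everything into the prompt upfront."""
    docs = [(p.name, p.read_text()) for p in Path(context_dir).glob("*.md")]
    ranked = sorted(docs, key=lambda d: score(d[1], query), reverse=True)

    budget = MAX_CONTEXT_TOKENS * APPROX_CHARS_PER_TOKEN
    selected = []
    for name, text in ranked:
        if score(text, query) == 0 or len(text) > budget:
            continue                       # skip irrelevant or over-budget files
        selected.append(f"## {name}\n{text}")
        budget -= len(text)

    return "\n\n".join(selected + [f"## Task\n{query}"])

# usage sketch:
# prompt = build_prompt("summarize the auth refactor plan")
# response = client.generate(prompt)   # hypothetical LLM client
```

The same pattern extends to skills or tools: keep their descriptions out of the prompt by default and inject one only when the query actually calls for it.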
