Continual learning as an engineering problem, not a research problem
Prime Intellect's @willccbb says continual learning could be solved in the first half of 2026: "Continual learning is going to fall pretty quickly, I think. It's more of an engineering problem. No one's actually trying."

"OpenAI and Anthropic don't want to continuously train their models for each user. It's expensive and annoying and hard to serve at scale. But from a research perspective, we do continual learning, where they just keep training the model more and it knows more stuff because they put more of the Internet in it."

"I think there's a lot of experimentation around exactly the recipe that's going to be the most reliable. But we kind of have a grab bag of six or seven tricks that kind of work, or they work in different ways, and you can mix and match them. And it's just going to be like whatever's the best combination of these tricks."

"People are going to experiment with it and find the versions that work the best. And there doesn't seem to be any big wall inside that prevents that from being practical."