Qwen3.5 with Linear Attention and Sparse MoE Design
The new @Alibaba_Qwen Qwen3.5-397B-A17B is live on OpenRouter now! This multimodal model uses a hybrid architecture combining linear attention with sparse MoE for higher inference efficiency. It's available both as the open-weights release and as Qwen3.5 Plus with an extended 1M-token context.
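To make the architecture concrete, here is a minimal PyTorch sketch of the general pattern the announcement describes: a block where an O(n) linear-attention layer (kernelized with an elu+1 feature map) replaces softmax attention, followed by a sparse mixture-of-experts FFN with top-k token routing. All module names, expert counts, and hyperparameters below are illustrative assumptions, not Qwen3.5's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def phi(x):
    # Positive feature map so the kernelized attention stays well-defined.
    return F.elu(x) + 1

class LinearAttention(nn.Module):
    """O(n) attention: softmax(QK^T)V is approximated by phi(Q)(phi(K)^T V)."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.heads, self.dh = heads, dim // heads
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.out = nn.Linear(dim, dim, bias=False)

    def forward(self, x):
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (batch, heads, seq, head_dim).
        q, k, v = (t.view(b, n, self.heads, self.dh).transpose(1, 2) for t in (q, k, v))
        q, k = phi(q), phi(k)
        # Aggregate keys/values once (O(n)), instead of an n x n attention matrix.
        kv = torch.einsum("bhnd,bhne->bhde", k, v)
        z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + 1e-6)
        out = torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)
        return self.out(out.transpose(1, 2).reshape(b, n, d))

class SparseMoE(nn.Module):
    """Token-choice top-k routing over small FFN experts."""
    def __init__(self, dim, n_experts=8, top_k=2, hidden_mult=4):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(dim, hidden_mult * dim),
                nn.GELU(),
                nn.Linear(hidden_mult * dim, dim),
            )
            for _ in range(n_experts)
        )

    def forward(self, x):
        b, n, d = x.shape
        flat = x.reshape(-1, d)
        gates = self.router(flat).softmax(dim=-1)
        # Only the top_k experts run per token; the rest stay idle.
        weights, idx = gates.topk(self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(flat)
        for e, expert in enumerate(self.experts):
            rows, slots = (idx == e).nonzero(as_tuple=True)
            if rows.numel():
                out[rows] += weights[rows, slots].unsqueeze(-1) * expert(flat[rows])
        return out.reshape(b, n, d)

class HybridBlock(nn.Module):
    """Pre-norm residual block: linear attention, then sparse MoE FFN."""
    def __init__(self, dim):
        super().__init__()
        self.n1, self.n2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.attn, self.moe = LinearAttention(dim), SparseMoE(dim)

    def forward(self, x):
        x = x + self.attn(self.n1(x))
        return x + self.moe(self.n2(x))

x = torch.randn(2, 16, 256)
print(HybridBlock(256)(x).shape)  # torch.Size([2, 16, 256])
```

This pairing is where the efficiency claim comes from: linear attention keeps per-token cost constant as context grows (useful at 1M tokens), and top-k routing means only a small fraction of expert parameters are active per token, which is the pattern the "397B-A17B" naming suggests (~17B active out of 397B total).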