Models · Apr 23, 2026

Qwen releases 27B dense model claiming coding performance comparable to 397B predecessor

Qwen's new 27-billion-parameter open-weight model reportedly matches the coding capabilities of its larger 397B MoE predecessor, according to Qwen's own benchmarks, while cutting on-disk size from 807GB to 55.6GB.

Trust score: 66 · Hype: Some hype

3 sources

TL;DR
  • Qwen released Qwen3.6-27B, claiming it matches its predecessor Qwen3.5-397B-A17B across major coding benchmarks despite being roughly 15x smaller (27B vs 397B total parameters)
  • The model is available on Hugging Face at 55.6GB full precision, with a quantized 16.8GB version also available
  • Simon Willison tested the quantized version locally on SVG generation tasks, measuring 54.32 tokens per second for prompt reading and 25.57 tokens per second for generation
  • No independent benchmark verification of the performance claims is presented in the source

Qwen announced Qwen3.6-27B, a 27-billion-parameter dense model designed for coding tasks. The company claims the model delivers coding performance matching its predecessor Qwen3.5-397B-A17B, a 397-billion-parameter mixture-of-experts model, across major coding benchmarks. The new model is available on Hugging Face at 55.6GB in full precision (consistent with 16-bit weights at roughly two bytes per parameter), a substantial size reduction from the 807GB predecessor.
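For readers who want to pull the full-precision weights locally, a minimal sketch of the download using the huggingface_hub client follows; the repository id is an assumption based on Qwen's usual naming and should be checked against the model card linked in the sources.

```python
# Minimal sketch: download the full-precision weights from Hugging Face.
# Assumption: the repo id "Qwen/Qwen3.6-27B" follows Qwen's usual naming;
# verify it against the model card before running (the download is ~55.6GB).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Qwen/Qwen3.6-27B",  # hypothetical id, not confirmed by the source
    allow_patterns=["*.safetensors", "*.json", "*.txt"],  # skip unrelated files
)
print(f"Model files downloaded to: {local_dir}")
```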

The model has been quantized for local inference; Simon Willison tested a 16.8GB Q4_K_M quantized version using llama.cpp's server implementation. In his testing, the quantized model read prompts at 54.32 tokens per second and generated at 25.57 tokens per second on an SVG generation task that produced 4,444 output tokens, which works out to just under three minutes of generation time. A second task generating 6,575 tokens ran at 24.74 tokens per second.
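llama.cpp's server exposes an OpenAI-compatible HTTP endpoint, so a quantized GGUF like the one Willison used can be queried with a short script. The sketch below assumes the server is already running locally; the port, filename, prompt, and the client-side tokens-per-second estimate are illustrative assumptions, not details from Willison's write-up.

```python
# Minimal sketch: query a local llama.cpp server via its OpenAI-compatible
# endpoint, then estimate generation speed from wall-clock time.
# Assumption: the server was started with something like
#   llama-server -m Qwen3.6-27B-Q4_K_M.gguf --port 8080
# (filename and port are hypothetical).
import time
import requests

prompt = "Generate an SVG of a pelican riding a bicycle."  # in the spirit of Willison's SVG tests
start = time.monotonic()
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={"messages": [{"role": "user", "content": prompt}], "max_tokens": 4096},
    timeout=600,
)
elapsed = time.monotonic() - start

data = resp.json()
completion_tokens = data["usage"]["completion_tokens"]  # token count reported by the server
print(data["choices"][0]["message"]["content"][:200])
print(f"~{completion_tokens / elapsed:.2f} tokens/sec generation (wall clock)")
```

A rough client-side rate like this includes network and prompt-processing overhead, so it will read slightly lower than the server's own generation-speed figure.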

Willison's testing focused on qualitative output quality rather than standardized benchmarks. The source does not identify the specific coding benchmarks behind Qwen's performance claims, nor offer independent verification of those results. The dense-versus-MoE comparison also muddies the headline size gap: the predecessor's A17B suffix indicates roughly 17 billion active parameters per token, so the difference in per-token compute is much smaller than the 15x difference in total parameter count.

Sources
  1. Qwen Blog, "Qwen3.6-27B: Flagship-Level Coding in a 27B Dense Model"
  2. Simon Willison, "Qwen3.6-27B: Flagship-Level Coding in a 27B Dense Model"
  3. Hugging Face, "Qwen3.6-27B Model Card"

Stories may contain errors. Dispatch is assembled with AI assistance and curated by human editors; despite the trust-score filter, mistakes happen. We correct publicly — every article links to its revision history. Nothing here is financial, legal, or medical advice. Verify before relying on any claim.

© 2026 Dispatch. No ads. No sponsorships. No paid placement. Reader-supported via Ko-fi.

Built by a person who cares about honest AI news.