Localized public docs for qwen-code

A deployable docs surface for the original Qwen open-source line

This site turns the original README-only source into a bilingual product and documentation experience covering install, models, benchmarks, demos, API, tool use, long context, FAQ, license, and editorial resources.

English and Chinese parity · Standalone Next.js runtime · Crawlable docs routes

Model line

Expose the four original public sizes with their documented memory requirements, context windows, and variant links.

32K context · 2.2T tokens

Qwen-1.8B

The smallest family member still ships 32K context and system prompt support in the chat variant.

  • Release: 2023-11-30
  • Q-LoRA: 5.8GB
  • Int4 generation: 2.9GB
  • Tool use: Yes

32K context · 2.4T tokens

Qwen-7B

The most practical open deployment target in the original line, with base, chat, Int4, and Int8 checkpoints.

  • Release: 2023-08-03
  • Q-LoRA: 11.5GB
  • Int4 generation: 8.2GB
  • Tool use: Yes

8K context · 3.0T tokens

Qwen-14B

The 14B release pushed the original line deeper into coding and Chinese knowledge while retaining tool-use support.

  • Release: 2023-09-25
  • Q-LoRA: 18.7GB
  • Int4 generation: 13.0GB
  • Tool use: Yes

32K context · 3.0T tokens

Qwen-72B

The flagship open release in the original repo, combining 32K context, stronger system prompts, and the top benchmark results.

  • Release: 2023-11-30
  • Q-LoRA: 61.4GB
  • Int4 generation: 48.9GB
  • Tool use: Yes
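The four model cards above can be driven from one small typed data set, so the landing-page stats stay consistent with the cards. The sketch below is illustrative: the interface and field names are assumptions, while the figures mirror the cards above.

```typescript
// Illustrative data model for the four public Qwen sizes shown above.
// Field names are assumptions for this sketch; the figures mirror the cards.
interface ModelCard {
  name: string;
  contextK: number;   // context window, in K tokens
  pretrainT: number;  // pretraining tokens, in trillions
  release: string;    // ISO release date
  qloraGB: number;    // Q-LoRA fine-tuning memory
  int4GB: number;     // Int4 generation memory
  toolUse: boolean;
}

const modelLine: ModelCard[] = [
  { name: "Qwen-1.8B", contextK: 32, pretrainT: 2.2, release: "2023-11-30", qloraGB: 5.8,  int4GB: 2.9,  toolUse: true },
  { name: "Qwen-7B",   contextK: 32, pretrainT: 2.4, release: "2023-08-03", qloraGB: 11.5, int4GB: 8.2,  toolUse: true },
  { name: "Qwen-14B",  contextK: 8,  pretrainT: 3.0, release: "2023-09-25", qloraGB: 18.7, int4GB: 13.0, toolUse: true },
  { name: "Qwen-72B",  contextK: 32, pretrainT: 3.0, release: "2023-11-30", qloraGB: 61.4, int4GB: 48.9, toolUse: true },
];

// Derive the "Key signals" numbers from the same source of truth.
const maxContextK = Math.max(...modelLine.map((m) => m.contextK));   // 32
const maxPretrainT = Math.max(...modelLine.map((m) => m.pretrainT)); // 3.0
```

Deriving the headline stats from the card data (rather than hard-coding them) keeps the "Key signals" section from drifting out of sync with the model line.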

Key signals

Keep the highest-signal product data visible on the landing page.

32K

Max context in the original public line

Qwen-1.8B, Qwen-7B, and Qwen-72B are presented with 32K context in the upstream table.

3.0T

Largest pretrained token count reported

The README cites up to 3.0T multilingual tokens for Qwen-14B and Qwen-72B.

98.2%

Top reported tool-selection accuracy

Qwen-72B-Chat reaches the best score in the upstream Chinese tool-use benchmark block.

2 locales

Mirrored UX from day one

The web experience keeps `/en` and `/zh` in lockstep across public routes and metadata alternates.
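The `/en`/`/zh` mirroring can be expressed as a single helper that maps one logical path to both locale routes; a Next.js `generateMetadata()` call could return an object of this shape under `alternates.languages`. The helper name and route shape below are assumptions for this sketch.

```typescript
// Sketch: build alternate-language routes for a mirrored /en + /zh route tree.
// `localeAlternates` is an assumed helper name, not part of any framework API.
const LOCALES = ["en", "zh"] as const;
type Locale = (typeof LOCALES)[number];

function localeAlternates(path: string): Record<Locale, string> {
  // Normalize so "docs/install" and "/docs/install" produce the same routes.
  const clean = path.replace(/^\/+/, "");
  return Object.fromEntries(
    LOCALES.map((locale) => [locale, `/${locale}/${clean}`]),
  ) as Record<Locale, string>;
}
```

For example, `localeAlternates("docs/install")` yields `{ en: "/en/docs/install", zh: "/zh/docs/install" }`, which keeps both locales in lockstep for every public route.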

Source-backed benchmarks

Keep historical claims visible, but tie them clearly to the original README and technical report so the site does not overstate freshness.

MMLU 77.4, C-Eval 83.3, GSM8K 78.9

Qwen-72B benchmark ceiling

The upstream performance table places Qwen-72B ahead of the listed LLaMA2 and GPT-3.5 references on most cited tasks.

Tool selection up to 98.2%

Tool use is part of the product surface

The README does not treat function calling as an afterthought. Tool use, ReAct prompting, and code interpreter are all first-class sections.

32K context and L-Eval comparison

Long-context claims are table-backed

The long-context section provides concrete perplexity and L-Eval data instead of only marketing language.

Benchmark preview table

Model       MMLU   C-Eval   GSM8K   MATH
Qwen-1.8B   45.3   56.1     32.3    2.3
Qwen-7B     58.2   63.5     51.7    11.6
Qwen-14B    66.3   72.1     61.3    24.8
Qwen-72B    77.4   83.3     78.9    35.2

Preview subset from the upstream performance table.

Deployment and ecosystem

Point builders toward the original runtime touchpoints: ModelScope, Hugging Face, DashScope, FastChat, qwen.cpp, and Qwen-Agent.

Model hubs

ModelScope

Mirror the official model cards across both ModelScope and Hugging Face so download paths are visible in both English and Chinese contexts.


Managed API

DashScope API

The upstream README points to DashScope when you need a managed API surface instead of local model serving.


Agent framework

Qwen-Agent

The tool-use and code-interpreter sections connect directly to Qwen-Agent for evaluation and agent workflows.


Edge runtime

qwen.cpp

The original README highlights qwen.cpp as a lighter runtime path for the historical model line.


Runtime editorial hub

Editorial notes can be loaded from a shared filesystem directory without rebuilding the app, which keeps publishing decoupled from deployment.

News · Mar 17, 2026 · 1 min read

Deployment lane ready

Seed content proving the shared editorial contract for the standalone site.


Source anchors