Benchmarks

Benchmark claims stay visible, but clearly framed as historical

Because the source input is only the README, the site keeps every benchmark claim tied to the original table rather than presenting it as fresh leaderboard data.

OpenCompass-cited · README-backed · Historical snapshot

How to read the numbers

The upstream README states that each compared model's number is the best value between its officially reported result and its OpenCompass result.
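
A minimal sketch of that best-of rule, assuming hypothetical per-source score dictionaries (the README publishes only the final merged numbers; the per-source values and this helper are illustrative):

```python
# Sketch of the "best of official vs. OpenCompass" aggregation rule the
# README describes for compared models. All numbers are placeholders,
# not values published in the README.

def best_scores(official: dict[str, float],
                opencompass: dict[str, float]) -> dict[str, float]:
    """For each benchmark, keep the higher of the two reported scores."""
    names = official.keys() | opencompass.keys()
    return {
        n: max(official.get(n, float("-inf")),
               opencompass.get(n, float("-inf")))
        for n in names
    }

official = {"MMLU": 46.0, "GSM8K": 16.0}     # hypothetical official results
opencompass = {"MMLU": 45.0, "GSM8K": 17.5}  # hypothetical OpenCompass results
print(best_scores(official, opencompass))    # e.g. {'MMLU': 46.0, 'GSM8K': 17.5}
```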

This best-of rule makes the tables useful as product-surface evidence, but not as a substitute for current benchmark research. The tables cover:

  • Natural language understanding
  • Math and reasoning
  • Code generation
  • Chinese evaluation coverage
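
One editorial grouping of the table's benchmark columns onto those coverage areas, as a sketch (the README itself does not label its columns this way):

```python
# Editorial mapping from coverage area to the benchmark columns in the
# table below; the grouping is an assumption, not an upstream label.
COVERAGE = {
    "natural language understanding": ["MMLU", "CMMLU", "C-Eval"],
    "math and reasoning": ["GSM8K", "MATH", "BBH"],
    "code generation": ["HumanEval", "MBPP"],
    "Chinese evaluation coverage": ["C-Eval", "CMMLU"],
}

for area, benches in COVERAGE.items():
    print(f"{area}: {', '.join(benches)}")
```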

Representative performance table

Model         MMLU  C-Eval  GSM8K  MATH  HumanEval  MBPP  BBH   CMMLU
LLaMA2-7B     46.8  32.5    16.7    3.3  12.8       20.8  38.2  31.8
InternLM-20B  62.1  58.8    52.6    7.9  25.6       35.6  52.5  59.0
Yi-34B        76.3  81.8    67.9   15.9  26.2       38.2  66.4  82.6
Qwen-1.8B     45.3  56.1    32.3    2.3  15.2       14.2  22.3  52.1
Qwen-7B       58.2  63.5    51.7   11.6  29.9       31.6  45.0  62.2
Qwen-14B      66.3  72.1    61.3   24.8  32.3       40.8  53.4  71.0
Qwen-72B      77.4  83.3    78.9   35.2  35.4       52.2  67.7  83.6

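Because these are fixed historical values, they can be embedded directly. A minimal sketch that copies the rows above and computes per-benchmark deltas between two models; the scores come from the table, while the `diff` helper is illustrative and not part of any Qwen tooling:

```python
# Rows from the representative performance table above.
BENCHMARKS = ["MMLU", "C-Eval", "GSM8K", "MATH", "HumanEval", "MBPP", "BBH", "CMMLU"]

SCORES: dict[str, list[float]] = {
    "LLaMA2-7B":    [46.8, 32.5, 16.7,  3.3, 12.8, 20.8, 38.2, 31.8],
    "InternLM-20B": [62.1, 58.8, 52.6,  7.9, 25.6, 35.6, 52.5, 59.0],
    "Yi-34B":       [76.3, 81.8, 67.9, 15.9, 26.2, 38.2, 66.4, 82.6],
    "Qwen-1.8B":    [45.3, 56.1, 32.3,  2.3, 15.2, 14.2, 22.3, 52.1],
    "Qwen-7B":      [58.2, 63.5, 51.7, 11.6, 29.9, 31.6, 45.0, 62.2],
    "Qwen-14B":     [66.3, 72.1, 61.3, 24.8, 32.3, 40.8, 53.4, 71.0],
    "Qwen-72B":     [77.4, 83.3, 78.9, 35.2, 35.4, 52.2, 67.7, 83.6],
}

def diff(model_a: str, model_b: str) -> dict[str, float]:
    """Per-benchmark score difference (model_a minus model_b)."""
    return {
        bench: round(a - b, 1)
        for bench, a, b in zip(BENCHMARKS, SCORES[model_a], SCORES[model_b])
    }

print(diff("Qwen-72B", "Yi-34B"))
# {'MMLU': 1.1, 'C-Eval': 1.5, 'GSM8K': 11.0, 'MATH': 19.3,
#  'HumanEval': 9.2, 'MBPP': 14.0, 'BBH': 1.3, 'CMMLU': 1.0}
```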

Freshness note

These scores come from the original Qwen README and technical memo, not from a live benchmark feed.

The site keeps them because they define the documented public surface for this historical model line.

Source anchors

  • Upstream Qwen README (benchmark tables)
  • Qwen technical memo