Qwen3-0.6B-Base

Qwen3 Highlights

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Building upon extensive advancements in training data, model architecture, and optimization techniques, Qwen3 delivers the following key improvements over the previously released Qwen2.5:

  • Expanded Higher-Quality Pre-training Corpus: Qwen3 is pre-trained on 36 trillion tokens across 119 languages (tripling the language coverage of Qwen2.5), with a much richer mix of high-quality data, including coding, STEM, reasoning, book, multilingual, and synthetic data.
  • Training Techniques and Model Architecture: Qwen3 incorporates a series of training techniques and architectural refinements, including a global-batch load-balancing loss for MoE models and QK layer normalization for all models (sketched after this list), leading to improved stability and overall performance.
  • Three-stage Pre-training: Stage 1 focuses on broad language modeling and general knowledge acquisition, Stage 2 improves reasoning ability in areas such as STEM, coding, and logical reasoning, and Stage 3 enhances long-context comprehension by extending training sequence lengths up to 32k tokens.
  • Scaling Law Guided Hyperparameter Tuning: Through comprehensive scaling law studies across the three-stage pre-training pipeline, Qwen3 systematically tunes critical hyperparameters, such as the learning rate scheduler and batch size, separately for dense and MoE models, resulting in better training dynamics and final performance across different model scales.
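
To make the QK layer normalization mentioned above concrete, here is a minimal sketch, not the actual Qwen3 implementation: it assumes a recent PyTorch (nn.RMSNorm, available since 2.4) and uses illustrative dimensions. Each query and key head is RMS-normalized over its feature dimension before attention, which bounds the attention logits and improves training stability.

import torch
from torch import nn

class QKNormProjection(nn.Module):
    # Sketch of query/key projections with per-head RMSNorm (QK-norm).
    def __init__(self, hidden_size, num_q_heads, num_kv_heads, head_dim):
        super().__init__()
        self.head_dim = head_dim
        self.q_proj = nn.Linear(hidden_size, num_q_heads * head_dim, bias=False)
        self.k_proj = nn.Linear(hidden_size, num_kv_heads * head_dim, bias=False)
        # RMSNorm over each head's feature dimension keeps query/key
        # magnitudes bounded, stabilizing attention logits during training.
        self.q_norm = nn.RMSNorm(head_dim)
        self.k_norm = nn.RMSNorm(head_dim)

    def forward(self, x):
        batch, seq_len, _ = x.shape
        q = self.q_proj(x).view(batch, seq_len, -1, self.head_dim)
        k = self.k_proj(x).view(batch, seq_len, -1, self.head_dim)
        return self.q_norm(q), self.k_norm(k)

# Toy usage with this model's listed head layout (16 Q heads, 8 KV heads);
# hidden_size and head_dim here are illustrative, not the released config.
proj = QKNormProjection(hidden_size=1024, num_q_heads=16, num_kv_heads=8, head_dim=128)
q, k = proj(torch.randn(1, 4, 1024))
print(q.shape, k.shape)  # torch.Size([1, 4, 16, 128]) torch.Size([1, 4, 8, 128])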

Model Overview

Qwen3-0.6B-Base has the following features:

  • Type: Causal Language Models
  • Training Stage: Pretraining
  • Number of Parameters: 0.6B
  • Number of Parameters (Non-Embedding): 0.44B
  • Number of Layers: 28
  • Number of Attention Heads (GQA): 16 for Q and 8 for KV
  • Context Length: 32,768

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our blog, GitHub, and Documentation.
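
As a quick sanity check of the numbers above, the architecture hyperparameters can be read from the model configuration. The field names below follow the usual transformers config conventions and are assumptions, since the exact config schema is not reproduced here:

from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen3-0.6B-Base")
print(config.num_hidden_layers)        # expected: 28
print(config.num_attention_heads)      # expected: 16 (query heads)
print(config.num_key_value_heads)      # expected: 8 (key/value heads, GQA)
print(config.max_position_embeddings)  # expected: 32768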

Requirements

The code for Qwen3 has been merged into the latest Hugging Face transformers, and we advise you to use the latest version of transformers.

With transformers<4.51.0, you will encounter the following error:

KeyError: 'qwen3'
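
A minimal text-completion sketch follows; because this is a base (pre-trained) model, the prompt is plain text with no chat template, and the prompt and generation settings are illustrative rather than official:

# pip install -U "transformers>=4.51.0"  (device_map="auto" also needs accelerate)
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-0.6B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",  # loads the released BF16 weights where supported
    device_map="auto",
)

prompt = "Large language models are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))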

Evaluation & Performance

Detailed evaluation results are reported in this 📑 blog.

Citation

If you find our work helpful, feel free to cite it.

@misc{qwen3,
    title  = {Qwen3},
    url    = {https://qwenlm.github.io/blog/qwen3/},
    author = {Qwen Team},
    month  = {April},
    year   = {2025}
}