Nicolas-BZRD committed on
Commit ef164e5 · verified · 1 parent: 0fb2b9b

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -10,8 +10,8 @@ thumbnail: >-
 ---
 # EuroBERT: Scaling Multilingual Encoders for European Languages
 
-Checkout the full technical report for all the info ! <br>
-Checkout the training library to express your creativity!
+Checkout the [arXiv paper](https://arxiv.org/abs/2503.05500) for all the info ! <br>
+Checkout the [training library](https://github.com/Nicolas-BZRD/EuroBERT) to express your creativity!
 
 ## Abstract
 General-purpose multilingual vector representations, used in retrieval, regression and classification, are traditionally obtained from bidirectional encoder models. Despite their wide applicability, encoders have been recently overshadowed by advances in generative decoder-only models. However, many innovations driving this progress are not inherently tied to decoders. In this paper, we revisit the development of multilingual encoders through the lens of these advances, and introduce EuroBERT, a family of multilingual encoders covering European and widely spoken global languages. Our models outperform existing alternatives across a diverse range of tasks, spanning multilingual capabilities, mathematics, and coding, and natively supporting sequences of up to 8,192 tokens. We also examine the design decisions behind EuroBERT, offering insights into our dataset composition and training pipeline. We publicly release the EuroBERT models, including intermediate training checkpoints, together with our training framework.