Model Card for TowerBase-7B-v0.1
Model Details
Model Description
TowerBase-7B is a language model obtained by continuing the pretraining of Llama 2 on a mix of 20 billion tokens of monolingual data in ten languages (English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, and Russian) and bilingual data. TowerBase-7B-v0.1 is the first model in the series. The resulting model shows improved performance on the supported languages while maintaining Llama 2's capabilities on English. It is particularly well-suited for fine-tuning on translation and related tasks: check out TowerInstruct.
We will release more details in the upcoming technical report.
- Developed by: Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay
- Model type: A 7B parameter model built on top of Llama 2 by continuing pretraining on multilingual data.
- Language(s) (NLP): English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian
- License: CC-BY-NC-4.0; Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
Intended uses & limitations
The model is intended for research purposes in the 10 languages it supports. The model performs well on translation and related tasks (e.g., automatic post-editing, grammatical error correction) in a few-shot regime. It can also be fine-tuned to perform these tasks in a zero-shot fashion (see TowerInstruct), as well as on other multilingual tasks.
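As an illustration of the few-shot regime mentioned above, the snippet below builds a 2-shot English-to-Portuguese prompt. This is a minimal sketch: the "English: … / Portuguese: …" label format mirrors the snippet in the "Run the model" section, but the example sentence pairs are illustrative assumptions, not a prescribed template.

```python
# Few-shot translation prompt sketch for TowerBase (illustrative assumption,
# not an official prompt template).
examples = [
    ("The weather is nice today.", "O tempo está bom hoje."),
    ("Where is the train station?", "Onde fica a estação de comboios?"),
]

# Each demonstration pair becomes one "English: ...\nPortuguese: ..." block.
prompt = "".join(f"English: {src}\nPortuguese: {tgt}\n" for src, tgt in examples)

# End with an open "Portuguese:" label so the model completes the translation.
prompt += "English: I would like a cup of coffee.\nPortuguese:"

print(prompt)
```

The resulting string can be passed as `text` to the generation snippet in "Run the model", with `max_new_tokens` raised enough to fit the translation.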
Out-of-Scope Use
The model is not guaranteed to perform well for languages other than the 10 languages it supports.
Bias, Risks, and Limitations
TowerBase-7B-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).
Run the model
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model from the Hugging Face Hub.
model_id = "Unbabel/TowerBase-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Translation-style prompt: the model continues with the Portuguese translation.
text = "English: My name is TowerBase.\nPortuguese:"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Training Data
Filtered versions of mc4 and bilingual data from various sources (e.g., OPUS).
Citation
@misc{tower_llm_2024,
title={Tower: An Open Multilingual Large Language Model for Translation-Related Tasks},
author={Duarte M. Alves and José Pombal and Nuno M. Guerreiro and Pedro H. Martins and João Alves and Amin Farajian and Ben Peters and Ricardo Rei and Patrick Fernandes and Sweta Agrawal and Pierre Colombo and José G. C. de Souza and André F. T. Martins},
year={2024},
eprint={2402.17733},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Evaluation results
- AI2 Reasoning Challenge (25-shot, test set): 51.02 normalized accuracy (Open LLM Leaderboard)
- HellaSwag (10-shot, validation set): 77.68 normalized accuracy (Open LLM Leaderboard)
- MMLU (5-shot, test set): 43.48 accuracy (Open LLM Leaderboard)
- TruthfulQA (0-shot, validation set): 37.29 mc2 (Open LLM Leaderboard)
- Winogrande (5-shot, validation set): 72.06 accuracy (Open LLM Leaderboard)
- GSM8k (5-shot, test set): 13.12 accuracy (Open LLM Leaderboard)