GemmaX is a family of many-to-many, LLM-based multilingual translation models. The models are built with multilingual continual pretraining using the Parallel-First Monolingual-Second (PFMS) data mixing strategy, followed by instruction finetuning on high-quality translation prompts.
- Jan. 23, 2025: The GemmaX2 paper, *Multilingual Machine Translation with Open Large Language Models at Practical Scale: An Empirical Study*, has been accepted at NAACL 2025!
Model checkpoints are released on Hugging Face:
| Models | Descriptions |
|---|---|
| GemmaX2-28-2B-Pretrain | Developed through continual pretraining of Gemma2-2B |
| GemmaX2-28-2B-v0.1 | Finetuned on GemmaX2-28-2B-Pretrain with translation instructions |
| GemmaX2-28-9B-Pretrain | Developed through continual pretraining of Gemma2-9B |
| GemmaX2-28-9B-v0.1 | Finetuned on GemmaX2-28-9B-Pretrain with translation instructions |
Note that GemmaX2-28-2B-Pretrain and GemmaX2-28-9B-Pretrain are NOT translation models.
GemmaX2 models support 28 languages: Arabic, Bengali, Czech, German, English, Spanish, Persian, French, Hebrew, Hindi, Indonesian, Italian, Japanese, Khmer, Korean, Lao, Malay, Burmese, Dutch, Polish, Portuguese, Russian, Thai, Tagalog, Turkish, Urdu, Vietnamese, Chinese.
Please use the language names exactly as written above in the translation prompt.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ModelSpace/GemmaX2-28-9B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build the translation prompt and generate the English translation.
text = "Translate this from Chinese to English:\nChinese: 我爱机器翻译\nEnglish:"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
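If a GPU is available, the checkpoint can also be loaded in half precision with automatic device placement. The snippet below is a minimal sketch, assuming the `accelerate` package is installed; the `torch_dtype` and `device_map` settings are suggestions rather than requirements of the released checkpoints.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch with assumed settings: load in bfloat16 and let Accelerate
# place the weights automatically (requires the accelerate package).
model_id = "ModelSpace/GemmaX2-28-9B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

text = "Translate this from Chinese to English:\nChinese: 我爱机器翻译\nEnglish:"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```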
The translation prompt format is:

```
Translate this from <source language name> to <target language name>:
<source language name>: <source language sentence>
<target language name>:
```
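Because the model expects the exact language names listed above, a small helper can keep prompts consistent. The `build_prompt` function below is a hypothetical convenience wrapper, not part of the released code:

```python
# Hypothetical helper for building GemmaX2 translation prompts.
# Language names must match one of the 28 names listed above exactly.
def build_prompt(source_lang: str, target_lang: str, sentence: str) -> str:
    return (
        f"Translate this from {source_lang} to {target_lang}:\n"
        f"{source_lang}: {sentence}\n"
        f"{target_lang}:"
    )

# Example: reproduce the Chinese-to-English prompt shown earlier.
print(build_prompt("Chinese", "English", "我爱机器翻译"))
```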
We train our models with the LLaMA-Factory framework. Please refer to the LLaMA-Factory documentation for how to add pretraining and finetuning datasets.
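LLaMA-Factory registers custom datasets in `data/dataset_info.json`. The sketch below shows one way to add entries for the two sample files in this repository; the `gemmax_cpt`/`gemmax_sft` names, file locations, and column mapping are assumptions, so please verify them against the LLaMA-Factory documentation for your version.

```python
import json

# Sketch: register the sample pretraining and finetuning data with
# LLaMA-Factory by adding entries to data/dataset_info.json.
# Dataset names, paths, and column mappings are assumptions.
dataset_info_path = "LLaMA-Factory/data/dataset_info.json"

with open(dataset_info_path) as f:
    dataset_info = json.load(f)

dataset_info["gemmax_cpt"] = {
    "file_name": "cpt.json",        # copied into LLaMA-Factory/data/
    "columns": {"prompt": "text"},  # plain-text corpus for pretraining
}
dataset_info["gemmax_sft"] = {
    "file_name": "sft.json",        # Alpaca-style instruction data
}

with open(dataset_info_path, "w") as f:
    json.dump(dataset_info, f, ensure_ascii=False, indent=2)
```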
The data samples for multilingual continual pretraining are listed in `examples/cpt.json`. Check the following command for reference:
```bash
bash scripts/cpt.sh
```
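For illustration, the sketch below renders parallel pairs and monolingual sentences into plain-text pretraining records in the spirit of the PFMS mixture; the record layout and file name are assumptions and may differ from the actual schema of `examples/cpt.json`.

```python
import json

# Illustrative only: turn parallel pairs and monolingual sentences into
# plain-text pretraining records. The actual schema of examples/cpt.json
# may differ; treat this as a PFMS-flavoured sketch.
parallel_pairs = [("我爱机器翻译", "I love machine translation")]
monolingual = ["Machine translation has improved rapidly."]

records = []
for zh, en in parallel_pairs:
    # Parallel data first: keep source and target in one text span.
    records.append({"text": f"Chinese: {zh}\nEnglish: {en}"})
for sent in monolingual:
    # Monolingual data second.
    records.append({"text": sent})

with open("cpt_samples.json", "w") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)
```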
The data samples for translation instruction finetuning are listed in `examples/sft.json`. Check the following command for reference:
```bash
bash scripts/sft.sh
```
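Likewise, translation instruction data can be generated from parallel sentences with the prompt template above. The Alpaca-style `instruction`/`input`/`output` fields in this sketch are an assumption about the expected layout; compare with `examples/sft.json` before use.

```python
import json

# Sketch: turn parallel sentences into translation instruction samples
# using the prompt template above. The instruction/input/output fields
# assume an Alpaca-style format; check examples/sft.json for the
# authoritative layout.
pairs = [("Chinese", "English", "我爱机器翻译", "I love machine translation")]

samples = []
for src_lang, tgt_lang, src, tgt in pairs:
    samples.append({
        "instruction": (
            f"Translate this from {src_lang} to {tgt_lang}:\n"
            f"{src_lang}: {src}\n"
            f"{tgt_lang}:"
        ),
        "input": "",
        "output": tgt,
    })

with open("sft_samples.json", "w") as f:
    json.dump(samples, f, ensure_ascii=False, indent=2)
```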
If you find the resources in this repository helpful, please cite as:
```bibtex
@misc{cui2025multilingualmachinetranslationopen,
  title={Multilingual Machine Translation with Open Large Language Models at Practical Scale: An Empirical Study},
  author={Menglong Cui and Pengzhi Gao and Wei Liu and Jian Luan and Bin Wang},
  year={2025},
  eprint={2502.02481},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2502.02481},
}
```