NusaBERT
NusaBERT: Teaching IndoBERT to be Multilingual and Multicultural! https://github.com/LazarusNLP/NusaBERT/
NusaBERT Large is a multilingual encoder-based language model built on the BERT architecture. We conducted continued pre-training on the open-source corpora sabilmakbar/indo_wiki, acul3/KoPI-NLLB, and uonlp/CulturaX. On a held-out subset of the corpus, our model achieved:
- eval_accuracy: 0.7117
- eval_loss: 1.3268
- perplexity: 3.7690

This model was trained using the 🤗 Transformers PyTorch framework. All training was done on an NVIDIA H100 GPU. LazarusNLP/NusaBERT-large is released under the Apache 2.0 license.
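For reference, the reported perplexity is simply the exponential of the evaluation loss: exp(1.3268) ≈ 3.769.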
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the NusaBERT Large tokenizer and masked-LM checkpoint from the Hugging Face Hub
model_checkpoint = "LazarusNLP/NusaBERT-large"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForMaskedLM.from_pretrained(model_checkpoint)
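As a quick usage sketch (not part of the original card), the checkpoint can be queried through the 🤗 fill-mask pipeline, reusing the model and tokenizer loaded above; the Indonesian example sentence and the [MASK] token (standard for BERT-style tokenizers) are illustrative assumptions:

from transformers import pipeline

# Minimal sketch: ask the masked-LM head for the most likely fill-ins.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for prediction in fill_mask("Ibu kota Indonesia adalah [MASK]."):
    print(prediction["token_str"], prediction["score"])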
Around 16B tokens from these corpora (sabilmakbar/indo_wiki, acul3/KoPI-NLLB, and uonlp/CulturaX) were used during pre-training.
The following hyperparameters were used during training:

- learning_rate: 3e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- training_steps: 500000
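For readers who want a comparable setup, here is a minimal sketch of how these hyperparameters might map onto 🤗 TrainingArguments; this is an illustrative assumption (including the hypothetical output_dir and treating the batch size as per-device on a single GPU), not the authors' actual training script:

from transformers import TrainingArguments

# Illustrative mapping of the reported hyperparameters; not the original training script.
training_args = TrainingArguments(
    output_dir="nusabert-large-pretraining",  # hypothetical output directory
    learning_rate=3e-5,
    per_device_train_batch_size=256,  # assumes a single GPU, so total batch size = 256
    per_device_eval_batch_size=256,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=24_000,
    max_steps=500_000,
)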
NusaBERT Large is developed with love by Wilson Wongso, David Samuel Setiawan, Steven Limcorn, and Ananto Joyoadikusumo.

@misc{wongso2024nusabert,
title={NusaBERT: Teaching IndoBERT to be Multilingual and Multicultural},
author={Wilson Wongso and David Samuel Setiawan and Steven Limcorn and Ananto Joyoadikusumo},
year={2024},
eprint={2403.01817},
archivePrefix={arXiv},
primaryClass={cs.CL}
}