---
license: apache-2.0
language:
- ind
- ace
- ban
- bjn
- bug
- gor
- jav
- min
- msa
- nia
- sun
- tet
language_bcp47:
- jv-x-bms
datasets:
- sabilmakbar/indo_wiki
- acul3/KoPI-NLLB
- uonlp/CulturaX
tags:
- bert
---
# NusaBERT Large
NusaBERT Large is a multilingual, encoder-based language model built on the BERT architecture. We conducted continued pre-training on the open-source corpora sabilmakbar/indo_wiki, acul3/KoPI-NLLB, and uonlp/CulturaX. On a held-out subset of the corpus, our model achieved:
- eval_accuracy: 0.7117
- eval_loss: 1.3268
- perplexity: 3.7690
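These figures are mutually consistent: perplexity is the exponential of the cross-entropy evaluation loss. A quick check (not part of the original evaluation code):

```python
import math

# Perplexity of a language model is exp(cross-entropy loss):
# exp(1.3268) ≈ 3.7690, matching the metrics reported above.
print(math.exp(1.3268))
```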
This model was trained using the 🤗Transformers library with the PyTorch backend. All training was done on an NVIDIA H100 GPU. LazarusNLP/NusaBERT-large is released under the Apache 2.0 license.
## Model Details
- Developed by: LazarusNLP
- Finetuned from: IndoBERT Large p1
- Model type: Encoder-based BERT language model
- Language(s): Indonesian, Acehnese, Balinese, Banjarese, Buginese, Gorontalo, Javanese, Banyumasan, Minangkabau, Malay, Nias, Sundanese, Tetum
- License: Apache 2.0
- Contact: LazarusNLP
## Use in 🤗Transformers

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_checkpoint = "LazarusNLP/NusaBERT-large"

tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForMaskedLM.from_pretrained(model_checkpoint)
```
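For a quick smoke test, the checkpoint can also be used through the `fill-mask` pipeline. A minimal sketch; the Indonesian prompt below is an illustrative example, not from the original card:

```python
from transformers import pipeline

# Load the masked-LM checkpoint into a fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="LazarusNLP/NusaBERT-large")

# NusaBERT uses BERT-style special tokens, so the mask token is [MASK].
# Example prompt (Indonesian): "The capital of Indonesia is [MASK]."
for prediction in fill_mask("Ibu kota Indonesia adalah [MASK]."):
    print(prediction["token_str"], prediction["score"])
```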
## Training Datasets
Around 16B tokens from the following corpora were used during pre-training.
- Indonesian Wikipedia Data Repository
- KoPI-NLLB (Korpus Perayapan Indonesia)
- Cleaned, Enormous, and Public: The Multilingual Fuel to Democratize Large Language Models for 167 Languages
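All three corpora are hosted on the Hugging Face Hub under the dataset IDs listed in the metadata above. A minimal loading sketch with 🤗Datasets; the configuration and split names are assumptions, so check each dataset card for the exact values:

```python
from datasets import load_dataset

# Stream the Indonesian Wikipedia corpus to peek at a few rows without
# downloading everything. The split name "train" is an assumption.
indo_wiki = load_dataset("sabilmakbar/indo_wiki", split="train", streaming=True)

for row in indo_wiki.take(3):
    print(row)
```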
## Training Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- training_steps: 500000
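For reference, these settings map onto 🤗 `TrainingArguments` roughly as follows. A minimal sketch, assuming a single GPU so that the per-device batch size equals the batch size above; the output path is a placeholder:

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters expressed as TrainingArguments.
training_args = TrainingArguments(
    output_dir="nusabert-large",      # placeholder path
    learning_rate=3e-5,
    per_device_train_batch_size=256,  # assumes a single GPU
    per_device_eval_batch_size=256,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=24_000,
    max_steps=500_000,
)
```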
### Framework versions
- Transformers 4.38.1
- PyTorch 2.2.0+cu118
- Datasets 2.17.1
- Tokenizers 0.15.2
## Credits

NusaBERT Large is developed with love by the LazarusNLP team.
## Citation

```bibtex
@misc{wongso2024nusabert,
  title={NusaBERT: Teaching IndoBERT to be Multilingual and Multicultural},
  author={Wilson Wongso and David Samuel Setiawan and Steven Limcorn and Ananto Joyoadikusumo},
  year={2024},
  eprint={2403.01817},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```