Added models
- README.md +83 -0
- config.json +27 -0
- pytorch_model.bin +3 -0
- special_tokens_map.json +1 -0
- spiece.model +3 -0
- tokenizer_config.json +1 -0
README.md
ADDED
@@ -0,0 +1,83 @@
---
language: ja
tags:
- t5
- text2text-generation
- seq2seq
license: apache-2.0
datasets:
- mc4
- wiki40b
---

# t5-base-japanese-web (with Byte-fallback, 8K)

## Description

[megagonlabs/t5-base-japanese-web](https://huggingface.co/megagonlabs/t5-base-japanese-web) is a T5 (Text-to-Text Transfer Transformer) model pre-trained on Japanese web texts.
Training code is [available on GitHub](https://github.com/megagonlabs/t5-japanese).

The vocabulary size of this model is 8K.
A [32K version is also available](https://huggingface.co/megagonlabs/t5-base-japanese-web).
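
As a quick-start sketch (not part of the original card), the checkpoint can be loaded with 🤗 Transformers. The repository id below is an assumption, written as if this 8K variant were published under an `-8k` suffix; substitute the actual Hub id.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Assumed repository id for this 8K variant; adjust to the actual Hub id.
MODEL_NAME = "megagonlabs/t5-base-japanese-web-8k"

tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

# The checkpoint is pre-trained only (span corruption); fine-tune it on a
# downstream text2text task before expecting useful generations.
inputs = tokenizer("こんにちは、世界。", return_tensors="pt")
outputs = model.generate(**inputs, max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```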

### Corpora

We used the following corpora for pre-training:

- Japanese in [mC4/3.0.1](https://huggingface.co/datasets/mc4) (We used the [TensorFlow native format](https://github.com/allenai/allennlp/discussions/5056))
  - 87,425,304 pages
  - 782 GB in TFRecord format
- [Japanese](https://www.tensorflow.org/datasets/catalog/wiki40b#wiki40bja) in [wiki40b/1.3.0](https://www.tensorflow.org/datasets/catalog/wiki40b)
  - 828,236 articles (2,073,584 examples)
  - 2 GB in TFRecord format
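
Pre-training itself consumed the TFRecord exports listed above; purely as an illustration, the same Japanese mC4 split could be streamed through 🤗 Datasets (dataset id and record fields as published on the Hub at the time):

```python
from datasets import load_dataset

# Stream to avoid materializing the ~780 GB Japanese split locally.
mc4_ja = load_dataset("mc4", "ja", split="train", streaming=True)

for example in mc4_ja:
    # Each record carries "text", "url", and "timestamp" fields.
    print(example["text"][:100])
    break
```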

### Tokenizer

We used Japanese Wikipedia to train [SentencePiece](https://github.com/google/sentencepiece).

- Vocabulary size: 8,000
- [Byte-fallback](https://github.com/google/sentencepiece/releases/tag/v0.1.9): Enabled
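
A sketch of training such a tokenizer with the `sentencepiece` Python bindings; the input path is illustrative, and all remaining options (which the actual recipe may set differently) are left at their defaults:

```python
import sentencepiece as spm

# Train on a hypothetical one-sentence-per-line Wikipedia dump.
spm.SentencePieceTrainer.train(
    input="jawiki_sentences.txt",  # illustrative path, not the actual dump
    model_prefix="spiece",
    vocab_size=8000,
    byte_fallback=True,  # unknown characters decompose into byte pieces
)

sp = spm.SentencePieceProcessor(model_file="spiece.model")
# A character outside the 8K vocabulary falls back to <0x..> byte pieces
# instead of collapsing to a single <unk>.
print(sp.encode("🤗", out_type=str))
```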

### Parameters

- T5 model: [models/t5.1.1.base.gin](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/t5/models/gin/models/t5.1.1.base.gin)
- Training steps: 1,000,000

Pre-training took about 126 hours on a TPU v3-8.

## Related models

- [Japanese T5 pre-trained model (sonoisa/t5-base-japanese)](https://huggingface.co/sonoisa/t5-base-japanese)
- [Japanese T5 pre-trained model (sonoisa/t5-base-japanese-mC4-Wikipedia)](https://huggingface.co/sonoisa/t5-base-japanese-mC4-Wikipedia)

## License

Apache License 2.0

## Citations

- mC4

Contains information from `mC4` which is made available under the [ODC Attribution License](https://opendatacommons.org/licenses/by/1-0/).

```bibtex
@article{2019t5,
    author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
    title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
    journal = {arXiv e-prints},
    year = {2019},
    archivePrefix = {arXiv},
    eprint = {1910.10683},
}
```

- wiki40b

```bibtex
@inproceedings{49029,
    title = {Wiki-40B: Multilingual Language Model Dataset},
    author = {Mandy Guo and Zihang Dai and Denny Vrandecic and Rami Al-Rfou},
    year = {2020},
    booktitle = {LREC 2020}
}
```
config.json
ADDED
@@ -0,0 +1,27 @@
{
  "architectures": [
    "T5ForConditionalGeneration"
  ],
  "d_ff": 2048,
  "d_kv": 64,
  "d_model": 768,
  "decoder_start_token_id": 0,
  "dropout_rate": 0.1,
  "eos_token_id": 1,
  "feed_forward_proj": "gated-gelu",
  "gradient_checkpointing": false,
  "initializer_factor": 1.0,
  "is_encoder_decoder": true,
  "layer_norm_epsilon": 1e-06,
  "model_type": "t5",
  "num_decoder_layers": 12,
  "num_heads": 12,
  "num_layers": 12,
  "output_past": true,
  "pad_token_id": 0,
  "relative_attention_num_buckets": 32,
  "tie_word_embeddings": false,
  "transformers_version": "4.8.2",
  "use_cache": true,
  "vocab_size": 8064
}
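
For reference, this architecture can be instantiated without the pre-trained weights; a minimal sketch (the `vocab_size` of 8,064 is presumably the 8,000-piece vocabulary padded up to a multiple of 128, a common embedding-table alignment, though the config itself does not say so):

```python
from transformers import T5Config, T5ForConditionalGeneration

# Values copied from the config.json above.
config = T5Config(
    d_model=768,
    d_ff=2048,
    d_kv=64,
    num_layers=12,
    num_decoder_layers=12,
    num_heads=12,
    feed_forward_proj="gated-gelu",
    tie_word_embeddings=False,  # T5 v1.1 style: untied input/output embeddings
    vocab_size=8064,
)
model = T5ForConditionalGeneration(config)  # randomly initialized
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```
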
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6eb7e0cb580026b082401f28104f1e334aa97ec50ee72ab57b85238ec3c758b2
size 842585165
special_tokens_map.json
ADDED
@@ -0,0 +1 @@
{"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>", "additional_special_tokens": ["<extra_id_0>", "<extra_id_1>", "<extra_id_2>", "<extra_id_3>", "<extra_id_4>", "<extra_id_5>", "<extra_id_6>", "<extra_id_7>", "<extra_id_8>", "<extra_id_9>", "<extra_id_10>", "<extra_id_11>", "<extra_id_12>", "<extra_id_13>", "<extra_id_14>", "<extra_id_15>", "<extra_id_16>", "<extra_id_17>", "<extra_id_18>", "<extra_id_19>", "<extra_id_20>", "<extra_id_21>", "<extra_id_22>", "<extra_id_23>", "<extra_id_24>", "<extra_id_25>", "<extra_id_26>", "<extra_id_27>", "<extra_id_28>", "<extra_id_29>", "<extra_id_30>", "<extra_id_31>", "<extra_id_32>", "<extra_id_33>", "<extra_id_34>", "<extra_id_35>", "<extra_id_36>", "<extra_id_37>", "<extra_id_38>", "<extra_id_39>", "<extra_id_40>", "<extra_id_41>", "<extra_id_42>", "<extra_id_43>", "<extra_id_44>", "<extra_id_45>", "<extra_id_46>", "<extra_id_47>", "<extra_id_48>", "<extra_id_49>", "<extra_id_50>", "<extra_id_51>", "<extra_id_52>", "<extra_id_53>", "<extra_id_54>", "<extra_id_55>", "<extra_id_56>", "<extra_id_57>", "<extra_id_58>", "<extra_id_59>", "<extra_id_60>", "<extra_id_61>", "<extra_id_62>", "<extra_id_63>", "<extra_id_64>", "<extra_id_65>", "<extra_id_66>", "<extra_id_67>", "<extra_id_68>", "<extra_id_69>", "<extra_id_70>", "<extra_id_71>", "<extra_id_72>", "<extra_id_73>", "<extra_id_74>", "<extra_id_75>", "<extra_id_76>", "<extra_id_77>", "<extra_id_78>", "<extra_id_79>", "<extra_id_80>", "<extra_id_81>", "<extra_id_82>", "<extra_id_83>", "<extra_id_84>", "<extra_id_85>", "<extra_id_86>", "<extra_id_87>", "<extra_id_88>", "<extra_id_89>", "<extra_id_90>", "<extra_id_91>", "<extra_id_92>", "<extra_id_93>", "<extra_id_94>", "<extra_id_95>", "<extra_id_96>", "<extra_id_97>", "<extra_id_98>", "<extra_id_99>"]}
spiece.model
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:44349399ce87d5409b3ca15780d27ada474dc74ff2a112b08402605444e61a2a
size 350029
tokenizer_config.json
ADDED
@@ -0,0 +1 @@
{"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>", "extra_ids": 100, "additional_special_tokens": ["<extra_id_0>", "<extra_id_1>", "<extra_id_2>", "<extra_id_3>", "<extra_id_4>", "<extra_id_5>", "<extra_id_6>", "<extra_id_7>", "<extra_id_8>", "<extra_id_9>", "<extra_id_10>", "<extra_id_11>", "<extra_id_12>", "<extra_id_13>", "<extra_id_14>", "<extra_id_15>", "<extra_id_16>", "<extra_id_17>", "<extra_id_18>", "<extra_id_19>", "<extra_id_20>", "<extra_id_21>", "<extra_id_22>", "<extra_id_23>", "<extra_id_24>", "<extra_id_25>", "<extra_id_26>", "<extra_id_27>", "<extra_id_28>", "<extra_id_29>", "<extra_id_30>", "<extra_id_31>", "<extra_id_32>", "<extra_id_33>", "<extra_id_34>", "<extra_id_35>", "<extra_id_36>", "<extra_id_37>", "<extra_id_38>", "<extra_id_39>", "<extra_id_40>", "<extra_id_41>", "<extra_id_42>", "<extra_id_43>", "<extra_id_44>", "<extra_id_45>", "<extra_id_46>", "<extra_id_47>", "<extra_id_48>", "<extra_id_49>", "<extra_id_50>", "<extra_id_51>", "<extra_id_52>", "<extra_id_53>", "<extra_id_54>", "<extra_id_55>", "<extra_id_56>", "<extra_id_57>", "<extra_id_58>", "<extra_id_59>", "<extra_id_60>", "<extra_id_61>", "<extra_id_62>", "<extra_id_63>", "<extra_id_64>", "<extra_id_65>", "<extra_id_66>", "<extra_id_67>", "<extra_id_68>", "<extra_id_69>", "<extra_id_70>", "<extra_id_71>", "<extra_id_72>", "<extra_id_73>", "<extra_id_74>", "<extra_id_75>", "<extra_id_76>", "<extra_id_77>", "<extra_id_78>", "<extra_id_79>", "<extra_id_80>", "<extra_id_81>", "<extra_id_82>", "<extra_id_83>", "<extra_id_84>", "<extra_id_85>", "<extra_id_86>", "<extra_id_87>", "<extra_id_88>", "<extra_id_89>", "<extra_id_90>", "<extra_id_91>", "<extra_id_92>", "<extra_id_93>", "<extra_id_94>", "<extra_id_95>", "<extra_id_96>", "<extra_id_97>", "<extra_id_98>", "<extra_id_99>"], "sp_model_kwargs": {}, "tokenizer_class": "T5Tokenizer"}
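
The 100 `<extra_id_*>` tokens declared in these tokenizer files are the sentinel tokens T5 uses for its span-corruption pre-training objective; a minimal illustration of the input/target format (the Japanese sentence is invented for the example):

```python
# Span corruption replaces contiguous spans with sentinels in the encoder
# input; the decoder reconstructs each span prefixed by its sentinel,
# terminating with one final sentinel.
corrupted_input = "東京は日本の<extra_id_0>であり、最大の<extra_id_1>でもある。"
target = "<extra_id_0>首都<extra_id_1>都市<extra_id_2>"
```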