---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
datasets:
- wikiann
model-index:
- name: xlm-roberta-base-finetuned-panx-all
  results:
  - task:
      type: token-classification
      name: Token Classification
    dataset:
      name: wikiann
      type: wikiann
      config: en
      split: test
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.843189280620875
      verified: true
    - name: Precision
      type: precision
      value: 0.8410061269097046
      verified: true
    - name: Recall
      type: recall
      value: 0.8568527450211155
      verified: true
    - name: F1
      type: f1
      value: 0.8488554853827908
      verified: true
    - name: loss
      type: loss
      value: 0.6632214784622192
      verified: true
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-all

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X subset of the WikiANN dataset. The model is trained in Chapter 4, *Multilingual Named Entity Recognition*, of the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [GitHub repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb).

It achieves the following results on the evaluation set:
- Loss: 0.1739
- F1: 0.8581
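
For quick inference, the checkpoint can be loaded with the `transformers` pipeline API. Below is a minimal sketch; the Hub id is an assumption, so replace it with wherever this checkpoint is actually hosted:

```python
from transformers import pipeline

# Hypothetical Hub id; adjust to the actual repository name.
model_id = "transformersbook/xlm-roberta-base-finetuned-panx-all"

# aggregation_strategy="simple" groups word pieces back into whole entities.
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```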

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
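
Pending a fuller description: the metadata above points at the wikiann (PAN-X) dataset, and the "-all" suffix suggests several language subsets were concatenated for training, as in the book's chapter. A minimal sketch of loading a single subset with the `datasets` library (the "en" config simply mirrors the evaluation config listed in the metadata):

```python
from datasets import load_dataset

# WikiANN/PAN-X configs are language codes ("en", "de", "fr", ...).
panx_en = load_dataset("wikiann", "en")
print(panx_en["train"][0])  # word tokens plus IOB2-encoded NER tags
```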

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
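
As a rough guide, these settings map onto `TrainingArguments` as sketched below. This is an approximation, not the exact configuration from the book; the output directory and evaluation strategy are assumptions:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-all",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",     # the Trainer default, listed above
    evaluation_strategy="epoch",    # assumed: the table below reports one eval per epoch
)
```

The Adam betas and epsilon listed above match the library defaults, so they need no explicit arguments here.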

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2912        | 1.0   | 835  | 0.1883          | 0.8238 |
| 0.1548        | 2.0   | 1670 | 0.1738          | 0.8480 |
| 0.101         | 3.0   | 2505 | 0.1739          | 0.8581 |
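
The F1 values are entity-level (span) scores rather than per-token accuracy. A minimal sketch of that kind of computation with `seqeval`, the metric library used in the book's chapter, on illustrative label sequences:

```python
from seqeval.metrics import f1_score

# Illustrative IOB2 sequences; in practice, model predictions are aligned
# with word-level labels and ignored (-100) positions are dropped first.
y_true = [["B-PER", "I-PER", "O", "B-ORG"]]
y_pred = [["B-PER", "I-PER", "O", "B-LOC"]]
print(f1_score(y_true, y_pred))  # 0.5: one of the two entities matches exactly
```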

### Framework versions

- Transformers 4.12.0.dev0
- PyTorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3