Commit a1e02ca · Parent(s): 2ec4604

Added files

- README.md (+190, -0)
- dataset_infos.json (+1, -0)
- xnli_bn.py (+19, -13)

README.md CHANGED
@@ -0,0 +1,190 @@

---
annotations_creators:
- machine-generated
language_creators:
- found
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- text-classification
task_ids:
- natural-language-inference
languages:
- bn
licenses:
- cc-by-nc-sa-4.0
---

# Dataset Card for `xnli_bn`

## Table of Contents
- [Dataset Card for `xnli_bn`](#dataset-card-for-xnli_bn)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
    - [Usage](#usage)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Repository:** [https://github.com/csebuetnlp/banglabert](https://github.com/csebuetnlp/banglabert)
- **Paper:** [**"BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding"**](https://arxiv.org/abs/2101.00204)
- **Point of Contact:** [Tahmid Hasan](mailto:[email protected])

### Dataset Summary

This is a Natural Language Inference (NLI) dataset for Bengali, curated using the subset of MNLI data used in XNLI and the state-of-the-art English-to-Bengali translation model introduced **[here](https://aclanthology.org/2020.emnlp-main.207/)**.

### Supported Tasks and Leaderboards

[More information needed](https://github.com/csebuetnlp/banglabert)

### Languages

* `Bengali`

### Usage
```python
from datasets import load_dataset
dataset = load_dataset("csebuetnlp/xnli_bn")
```

## Dataset Structure

### Data Instances

One example from the dataset is given below in JSON format.
```
{
    "sentence1": "আসলে, আমি এমনকি এই বিষয়ে চিন্তাও করিনি, কিন্তু আমি এত হতাশ হয়ে পড়েছিলাম যে, শেষ পর্যন্ত আমি আবার তার সঙ্গে কথা বলতে শুরু করেছিলাম",
    "sentence2": "আমি তার সাথে আবার কথা বলিনি।",
    "label": "contradiction"
}
```

### Data Fields

The data fields are as follows:

- `sentence1`: a `string` feature indicating the premise.
- `sentence2`: a `string` feature indicating the hypothesis.
- `label`: a classification label, where possible values are `entailment`, `neutral`, `contradiction`.

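When loaded with the `datasets` library, `label` is a `ClassLabel` feature (see `dataset_infos.json`), so each example stores an integer id rather than the raw string shown above. A minimal sketch of converting between the two, assuming the dataset is loaded from the Hub as in the Usage section:

```python
from datasets import load_dataset

dataset = load_dataset("csebuetnlp/xnli_bn")
label = dataset["train"].features["label"]

# ClassLabel stores each example's label as an integer id;
# int2str/str2int map between ids and the strings listed above.
first_id = dataset["train"][0]["label"]
print(first_id, label.int2str(first_id))  # e.g. 0 contradiction
print(label.str2int("neutral"))           # integer id of the "neutral" class
```
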
### Data Splits

| split        | count  |
|--------------|--------|
| `train`      | 381449 |
| `validation` | 2419   |
| `test`       | 4895   |

## Dataset Creation

The dataset curation procedure was the same as for the [XNLI](https://aclanthology.org/D18-1269/) dataset: we translated the [MultiNLI](https://aclanthology.org/N18-1101/) training data using the English-to-Bangla translation model introduced [here](https://aclanthology.org/2020.emnlp-main.207/). Because automatic translation can introduce errors, we computed the similarity between each translation and its original sentence using [Language-Agnostic BERT Sentence Embeddings (LaBSE)](https://arxiv.org/abs/2007.01852); all sentences below a similarity threshold of 0.70 were discarded.

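The filtering step can be pictured with a short sketch. The exact pipeline is not included in this repository, so the snippet below is only an illustration: it assumes the `sentence-transformers` port of LaBSE (`sentence-transformers/LaBSE`) and uses cosine similarity between each original English sentence and its Bengali translation, discarding anything below the 0.70 threshold mentioned above.

```python
# Illustrative sketch only; the authors' actual filtering code may differ.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/LaBSE")

# Placeholder pairs: original MultiNLI sentences and their machine translations.
english_sentences = ["<original English sentence from MultiNLI>"]
bengali_translations = ["<machine-translated Bengali sentence>"]

# LaBSE embeds sentences from different languages into a shared space, so the
# cosine similarity of an aligned pair is a rough proxy for translation quality.
src = model.encode(english_sentences, normalize_embeddings=True)
tgt = model.encode(bengali_translations, normalize_embeddings=True)
similarities = (src * tgt).sum(axis=1)  # cosine similarity (embeddings are normalized)

THRESHOLD = 0.70  # threshold stated in the card
kept = [i for i, score in enumerate(similarities) if score >= THRESHOLD]
```
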
### Curation Rationale

[More information needed](https://github.com/csebuetnlp/banglabert)

### Source Data

[XNLI](https://aclanthology.org/D18-1269/)

#### Initial Data Collection and Normalization

[More information needed](https://github.com/csebuetnlp/banglabert)

#### Who are the source language producers?

[More information needed](https://github.com/csebuetnlp/banglabert)

### Annotations

[More information needed](https://github.com/csebuetnlp/banglabert)

#### Annotation process

[More information needed](https://github.com/csebuetnlp/banglabert)

#### Who are the annotators?

[More information needed](https://github.com/csebuetnlp/banglabert)

### Personal and Sensitive Information

[More information needed](https://github.com/csebuetnlp/banglabert)

## Considerations for Using the Data

### Social Impact of Dataset

[More information needed](https://github.com/csebuetnlp/banglabert)

### Discussion of Biases

[More information needed](https://github.com/csebuetnlp/banglabert)

### Other Known Limitations

[More information needed](https://github.com/csebuetnlp/banglabert)

## Additional Information

### Dataset Curators

[More information needed](https://github.com/csebuetnlp/banglabert)

### Licensing Information

Contents of this repository are restricted to non-commercial research purposes only, under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.

### Citation Information

If you use the dataset, please cite the following paper:
```
@misc{bhattacharjee2021banglabert,
  title={BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding},
  author={Abhik Bhattacharjee and Tahmid Hasan and Kazi Samin and Md Saiful Islam and M. Sohel Rahman and Anindya Iqbal and Rifat Shahriyar},
  year={2021},
  eprint={2101.00204},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

### Contributions

Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset.

dataset_infos.json ADDED
@@ -0,0 +1 @@

```json
{"xnli_bn": {"description": "This is a Natural Language Inference (NLI) dataset for Bengali, curated using the subset of\nMNLI data used in XNLI and state-of-the-art English to Bengali translation model.\n", "citation": "@misc{bhattacharjee2021banglabert,\n title={BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding},\n author={Abhik Bhattacharjee and Tahmid Hasan and Kazi Samin and Md Saiful Islam and M. Sohel Rahman and Anindya Iqbal and Rifat Shahriyar},\n year={2021},\n eprint={2101.00204},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://github.com/csebuetnlp/banglabert", "license": "Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)", "features": {"sentence1": {"dtype": "string", "id": null, "_type": "Value"}, "sentence2": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 3, "names": ["contradiction", "entailment", "neutral"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "xnli_bn", "config_name": "xnli_bn", "version": {"version_str": "0.0.1", "description": null, "major": 0, "minor": 0, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 175660643, "num_examples": 381449, "dataset_name": "xnli_bn"}, "test": {"name": "test", "num_bytes": 2127035, "num_examples": 4895, "dataset_name": "xnli_bn"}, "validation": {"name": "validation", "num_bytes": 1046988, "num_examples": 2419, "dataset_name": "xnli_bn"}}, "download_checksums": {"https://huggingface.co/datasets/csebuetnlp/xnli_bn/resolve/main/data/xnli_bn.tar.bz2": {"num_bytes": 21437836, "checksum": "a91b4d3f8433a98fd6251396976b17b2385ef49ffbb207fabe8124fc6b066207"}}, "download_size": 21437836, "post_processing_size": null, "dataset_size": 178834666, "size_in_bytes": 200272502}}
```

xnli_bn.py CHANGED

```diff
@@ -1,10 +1,13 @@
 """XNLI Bengali dataset"""
 import json
 import os
+
 import datasets
+
+
 _CITATION = """\
 @misc{bhattacharjee2021banglabert,
-     title={BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding},
+    title={BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding},
     author={Abhik Bhattacharjee and Tahmid Hasan and Kazi Samin and Md Saiful Islam and M. Sohel Rahman and Anindya Iqbal and Rifat Shahriyar},
     year={2021},
     eprint={2101.00204},
@@ -13,25 +16,32 @@ _CITATION = """\
 }
 """
 _DESCRIPTION = """\
-    This is a Natural Language Inference (NLI) dataset for Bengali, curated using the subset of
-    MNLI data used in XNLI and state-of-the-art English to Bengali translation model.
+This is a Natural Language Inference (NLI) dataset for Bengali, curated using the subset of
+MNLI data used in XNLI and state-of-the-art English to Bengali translation model.
 """
 _HOMEPAGE = "https://github.com/csebuetnlp/banglabert"
 _LICENSE = "Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)"
 _URL = "https://huggingface.co/datasets/csebuetnlp/xnli_bn/resolve/main/data/xnli_bn.tar.bz2"
 _VERSION = datasets.Version("0.0.1")

+
 class XnliBn(datasets.GeneratorBasedBuilder):
     """XNLI Bengali dataset"""

+    BUILDER_CONFIGS = [
+        datasets.BuilderConfig(
+            name="xnli_bn",
+            version=_VERSION,
+            description=_DESCRIPTION,
+        )
+    ]
+
     def _info(self):
         features = datasets.Features(
             {
                 "sentence1": datasets.Value("string"),
                 "sentence2": datasets.Value("string"),
-                "label": datasets.features.ClassLabel(
-                    names=["contradiction", "entailment", "neurtral"]
-                ),
+                "label": datasets.features.ClassLabel(names=["contradiction", "entailment", "neutral"]),
             }
         )
         return datasets.DatasetInfo(
@@ -40,12 +50,12 @@ class XnliBn(datasets.GeneratorBasedBuilder):
             homepage=_HOMEPAGE,
             license=_LICENSE,
             citation=_CITATION,
-            version=_VERSION
+            version=_VERSION,
         )

     def _split_generators(self, dl_manager):
         """Returns SplitGenerators."""
-        data_dir = dl_manager.download_and_extract(_URL)
+        data_dir = os.path.join(dl_manager.download_and_extract(_URL), "xnli_bn")
         return [
             datasets.SplitGenerator(
                 name=datasets.Split.TRAIN,
@@ -72,8 +82,4 @@ class XnliBn(datasets.GeneratorBasedBuilder):
         with open(filepath, encoding="utf-8") as f:
             for idx_, row in enumerate(f):
                 data = json.loads(row)
-                yield idx_, {
-                    "sentence1": data["sentence1"],
-                    "sentence2": data["sentence2"],
-                    "label": data["label"]
-                }
+                yield idx_, {"sentence1": data["sentence1"], "sentence2": data["sentence2"], "label": data["label"]}
```

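One quick way to sanity-check the two functional changes in this file (the corrected `neutral` label name and the `xnli_bn` extraction subdirectory) is to load the dataset and inspect the label feature; a minimal sketch, assuming the updated script is what `load_dataset` picks up:

```python
from datasets import load_dataset

dataset = load_dataset("csebuetnlp/xnli_bn")

# The fixed ClassLabel should expose all three NLI labels.
assert dataset["train"].features["label"].names == ["contradiction", "entailment", "neutral"]

# A successful load also means _split_generators found the data files under
# the "xnli_bn" subdirectory of the extracted archive.
print(dataset)
```
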