Commit d4dfbb6 · Parent: f110a12 · Update README.md

README.md CHANGED
@@ -224,4 +224,69 @@ configs:
    path: voxpopuli/train-*
  - split: valid
    path: voxpopuli/valid-*
language:
- en
pretty_name: Speech Recognition Alignment Dataset
size_categories:
- 10M<n<100M
---

# Speech Recognition Alignment Dataset

This dataset is a variation of several widely used ASR datasets, including LibriSpeech, MuST-C, TED-LIUM, VoxPopuli, Common Voice, and GigaSpeech. The difference is that this dataset includes:
- Precise alignment between audio and text.
- Text that has been punctuated and made case-sensitive.
- Identification of named entities in the text.
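One common use of per-word alignments is cutting the waveform into word-level spans. The sketch below shows the idea with plain Python; the field names (`word_start`, `word_end`) and the toy waveform are illustrative assumptions, not this dataset's documented schema — check a loaded sample to see the actual column names.

```python
# Hypothetical sketch: slice a waveform into per-word spans given
# start/end times in seconds. The alignment field names used here are
# assumptions for illustration, not the dataset's confirmed schema.
def cut_word_spans(audio, sampling_rate, word_start, word_end):
    """Return one waveform slice per aligned word."""
    spans = []
    for start, end in zip(word_start, word_end):
        lo = int(round(start * sampling_rate))  # seconds -> sample index
        hi = int(round(end * sampling_rate))
        spans.append(audio[lo:hi])
    return spans

# Toy example: 1 second of silence at 16 kHz, two aligned "words"
audio = [0.0] * 16000
spans = cut_word_spans(audio, 16000, [0.10, 0.50], [0.40, 0.90])
print([len(s) for s in spans])  # span lengths in samples: [4800, 6400]
```

The same index arithmetic works unchanged on a NumPy array decoded from the dataset's audio column.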

# Usage

First, install the latest version of the 🤗 Datasets package:

```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```

The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:

```python
from datasets import load_dataset

# available subsets: 'libris', 'mustc', 'tedlium', 'voxpopuli', 'commonvoice', 'gigaspeech'
dataset = load_dataset("nguyenvulebinh/asr-alignment", "libris")

# take the first sample of the training split
sample = dataset["train"][0]
```

It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:

```python
from datasets import load_dataset

dataset = load_dataset("nguyenvulebinh/asr-alignment", "libris", streaming=True)

# take the first sample of the training split
sample = next(iter(dataset["train"]))
```
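Because a streamed split is just a Python iterable, standard tools such as `itertools.islice` can grab a small batch lazily. A minimal sketch with a stand-in generator (in practice you would iterate `dataset["train"]` instead):

```python
from itertools import islice

# Stand-in for a streamed split: any iterable of sample dicts.
# Replace fake_stream() with dataset["train"] when actually streaming.
def fake_stream():
    for i in range(1000):
        yield {"id": i, "text": f"utterance {i}"}

# Take the first 3 samples lazily; later samples are never produced.
batch = list(islice(fake_stream(), 3))
print([s["id"] for s in batch])  # [0, 1, 2]
```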

## Citation

If you use this data, please consider citing the [ICASSP 2024 Paper: SYNTHETIC CONVERSATIONS IMPROVE MULTI-TALKER ASR]():

```
@INPROCEEDINGS{synthetic-multi-asr-nguyen,
  author={Nguyen, Thai-Binh and Waibel, Alexander},
  booktitle={ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  title={SYNTHETIC CONVERSATIONS IMPROVE MULTI-TALKER ASR},
  year={2024},
  volume={},
  number={},
}
```

## License

This dataset is licensed in accordance with the terms of the original datasets from which it is derived.