Delete README.md
README.md (deleted)

---
task_categories:
- audio-classification
size_categories:
- 100K<n<1M
---

# VGGSound

VGG-Sound is an audio-visual correspondence dataset consisting of short audio clips extracted from videos uploaded to YouTube.

- **Homepage:** https://www.robots.ox.ac.uk/~vgg/data/vggsound/
- **Paper:** https://arxiv.org/abs/2004.14368
- **Github:** https://github.com/hche11/VGGSound

## Analysis

- **310+ classes:** VGG-Sound contains audio spanning a large number of challenging acoustic environments and noise characteristics of real applications.
- **200,000+ videos:** All videos are captured "in the wild" with audio-visual correspondence in the sense that the sound source is visually evident.
- **550+ hours:** VGG-Sound consists of both audio and video. Each segment is 10 seconds long.



## Download

We provide a csv file. For each YouTube video, it gives the YouTube ID, time stamp, audio label and train/test split. Each line in the csv file has the following columns:

```
# YouTube ID, start seconds, label, train/test split.
```
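
To work with the annotations in Python, a minimal sketch along these lines can load the csv into memory. The filename `vggsound.csv` is an assumption for illustration (use whichever csv file ships with the dataset); the file is assumed to have no header row, with the four columns listed above.

```python
import csv

# Hypothetical filename for the annotation csv; adjust to the file you downloaded.
CSV_PATH = "vggsound.csv"

rows = []
with open(CSV_PATH, newline="") as f:
    # Columns (no header row): YouTube ID, start seconds, label, train/test split.
    for youtube_id, start_seconds, label, split in csv.reader(f):
        rows.append(
            {
                "youtube_id": youtube_id,
                "start_seconds": int(start_seconds),  # assumed to be whole seconds
                "label": label,
                "split": split,  # "train" or "test"
            }
        )

# Example: count clips per split.
print(sum(r["split"] == "train" for r in rows), "train clips")
print(sum(r["split"] == "test" for r in rows), "test clips")
```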

You can download VGGSound directly from this [repository](https://huggingface.co/datasets/Loie/VGGSound/tree/main).
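
One way to fetch the files programmatically is with the `huggingface_hub` client, as in the sketch below. This assumes the `huggingface_hub` package is installed and that enough disk space is available for the full dataset; `allow_patterns` can be passed to `snapshot_download` to restrict the download to specific files such as the csv.

```python
from huggingface_hub import snapshot_download

# Download the dataset repository from the Hugging Face Hub.
# repo_type="dataset" is required because Loie/VGGSound is a dataset repo;
# the return value is the local path of the downloaded snapshot.
local_dir = snapshot_download(repo_id="Loie/VGGSound", repo_type="dataset")
print("Dataset files downloaded to:", local_dir)
```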

## License

The VGG-Sound dataset is available to download for commercial/research purposes under a Creative Commons Attribution 4.0 International License. The copyright remains with the original owners of the video. A complete version of the license can be found [here](https://thor.robots.ox.ac.uk/datasets/vggsound/license_vggsound.txt).

## Citation

Please cite the following paper if you make use of the dataset.

```
@InProceedings{Chen20,
  author = "Honglie Chen and Weidi Xie and Andrea Vedaldi and Andrew Zisserman",
  title = "VGGSound: A Large-scale Audio-Visual Dataset",
  booktitle = "International Conference on Acoustics, Speech, and Signal Processing (ICASSP)",
  year = "2020",
}
```