Updated description of the dataset format and content
README.md CHANGED
@@ -22,7 +22,7 @@ license: cc-by-nc-sa-4.0
 
 ### Dataset Summary
 
-The CA-PT Parallel Corpus is a Catalan-Portuguese dataset of
+The CA-PT Parallel Corpus is a Catalan-Portuguese dataset created to support Catalan in NLP tasks, specifically
 Machine Translation.
 
 
@@ -39,11 +39,9 @@ The sentences included in the dataset are in Catalan (CA) and Portuguese (PT).
 
 ### Data Instances
 
-
-
-
-
-ca-pt_2023_09_01_full.pt: contains 9.892.953 Portuguese sentences.
+A single tsv file is provided with the sentences sorted in the same order and
+a header containing the two-letter ISO language code for the language in each column:
+ca-pt_2023_09_01_full.tsv.
 
 
 ### Data Fields
@@ -63,29 +61,19 @@ This dataset is aimed at promoting the development of Machine Translation betwee
 ### Source Data
 
 #### Initial Data Collection and Normalization
 
-
-
-| Dataset | Sentences |
-|:-------|-------:|
-| CCMatrix v1 | 3.765.459 |
-| WikiMatrix | 317.649 |
-| GNOME | 1.752 |
-| KDE4 | 117.828 |
-| OpenSubtitles | 235.604 |
-| GlobalVoices | 3.430 |
-| Tatoeba | 723 |
-| Europarl | 1.631.989 |
+The first portion of the corpus is a combination of the following original datasets collected from [Opus](https://opus.nlpl.eu/):
+CCMatrix, WikiMatrix, GNOME, KDE4, OpenSubtitles, GlobalVoices, Tatoeba.
 
-
-The Europarl corpus is a synthetic parallel corpus created from the original Spanish-Catalan corpus by [SoftCatalà](https://github.com/Softcatala/Europarl-catalan).
-
-The remaining **3.733.322** sentences are synthetic parallel data created from a random sampling of the Spanish-Portuguese corpora
+Additionally, the corpus contains synthetic parallel data generated from the original Spanish-Catalan Europarl corpus
+made public by [SoftCatalà](https://github.com/Softcatala/Europarl-catalan).
+
+A last portion of the dataset consists of synthetic parallel data generated from a random sampling of the Spanish-Portuguese corpora
 available on [Opus](https://opus.nlpl.eu/) and translated into Catalan using the [PlanTL es-ca](https://huggingface.co/PlanTL-GOB-ES/mt-plantl-es-ca) model.
 
 All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
 This is done using sentence embeddings calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
-The filtered datasets are then concatenated to form
+The filtered datasets are then concatenated to form the final corpus.
 
 #### Who are the source language producers?
 
@@ -102,7 +90,7 @@ The dataset does not contain any annotations.
 #### Who are the annotators?
 
 [N/A]
-
 ### Personal and Sensitive Information
 
 Given that this dataset is partly derived from pre-existing datasets that may contain crawled data, and that no specific anonymisation process has been applied,
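The single-tsv layout the updated card describes (one aligned sentence pair per row, a header row holding the two-letter ISO codes) can be read with the standard library alone. A minimal sketch, using a tiny in-memory sample in place of the real ca-pt_2023_09_01_full.tsv; the sample sentences are illustrative, not taken from the corpus:

```python
import csv
import io

# Stand-in for ca-pt_2023_09_01_full.tsv: a header row with the two-letter
# ISO codes ("ca", "pt"), then one tab-separated sentence pair per line.
sample = "ca\tpt\nBon dia.\tBom dia.\nGr\u00e0cies.\tObrigado.\n"

# QUOTE_NONE because natural-language sentences may contain quote characters
# that should not be treated as CSV quoting.
reader = csv.DictReader(io.StringIO(sample), delimiter="\t", quoting=csv.QUOTE_NONE)
pairs = [(row["ca"], row["pt"]) for row in reader]
# pairs == [('Bon dia.', 'Bom dia.'), ('Gràcies.', 'Obrigado.')]
```

For the real file, replacing the `io.StringIO(sample)` with `open("ca-pt_2023_09_01_full.tsv", encoding="utf-8")` should suffice.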
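The filtering step the card describes (drop sentence pairs whose embeddings have cosine similarity below 0.75) can be sketched independently of the embedding model. The toy vectors below merely stand in for LaBSE output, and `filter_pairs` is a hypothetical helper, not part of any released pipeline:

```python
import math

THRESHOLD = 0.75  # minimum cosine similarity kept, as stated in the card

def cosine(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def filter_pairs(pairs, embed_ca, embed_pt):
    """Keep only pairs whose embeddings are similar enough.

    pairs: list of (ca_sentence, pt_sentence); embed_ca / embed_pt map each
    sentence to its embedding vector (LaBSE in the card; anything here).
    """
    return [
        (ca, pt)
        for ca, pt in pairs
        if cosine(embed_ca[ca], embed_pt[pt]) >= THRESHOLD
    ]

# Toy 2-d embeddings standing in for LaBSE output:
embed_ca = {"Bon dia.": [1.0, 0.0], "Ad\u00e9u.": [0.0, 1.0]}
embed_pt = {"Bom dia.": [0.9, 0.1], "Obrigado.": [1.0, 0.0]}
kept = filter_pairs(
    [("Bon dia.", "Bom dia."), ("Ad\u00e9u.", "Obrigado.")],
    embed_ca, embed_pt,
)
# kept == [('Bon dia.', 'Bom dia.')]: the orthogonal pair falls below 0.75
```

In the actual pipeline the embeddings would come from the LaBSE model linked in the card; the thresholding logic is the same.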
|