Tasks: Sentence Similarity · Sub-tasks: semantic-similarity-classification
Modalities: Text · Formats: json · Languages: English · Size: 100K - 1M

Commit f475d9c (parent: c455f3c) · Update README.md

README.md CHANGED
@@ -4,6 +4,11 @@ language:
 - en
 paperswithcode_id: embedding-data/QQP_triplets
 pretty_name: QQP_triplets
+task_categories:
+- sentence-similarity
+- paraphrase-mining
+task_ids:
+- semantic-similarity-classification
 
 ---
 
@@ -42,24 +47,46 @@ pretty_name: QQP_triplets
 
 ### Dataset Summary
 
-This dataset will give anyone the opportunity to train and test models of semantic equivalence, based on actual Quora data.
-
-The dataset consists of over 400,000 lines of potential question duplicate pairs. Each line contains IDs for each question in the pair, the full text for each question, and a binary value that indicates whether the line truly contains a duplicate pair.
+This dataset will give anyone the opportunity to train and test models of semantic equivalence, based on actual Quora data. The data is organized as triplets (anchor, positive, negative).
 
 Disclaimer: The team releasing Quora data did not upload the dataset to the Hub and did not write a dataset card.
 These steps were done by the Hugging Face team.
 
-### Supported Tasks
-
-[More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
+### Supported Tasks
+- [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity.
 
 ### Languages
-
-[More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
+- English.
 
 ## Dataset Structure
-
-
+Each example is a dictionary with three keys (query, pos, and neg), each containing a list (a triplet). The first holds an anchor sentence, the second a positive sentence, and the third one or more negative sentences.
+```
+{"query": [anchor], "pos": [positive], "neg": [negative1, negative2, ..., negativeN]}
+{"query": [anchor], "pos": [positive], "neg": [negative1, negative2, ..., negativeN]}
+...
+{"query": [anchor], "pos": [positive], "neg": [negative1, negative2, ..., negativeN]}
+```
+This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train them.
+
+### Usage Example
+Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
+```python
+from datasets import load_dataset
+dataset = load_dataset("embedding-data/QQP_triplets")
+```
+The dataset is loaded as a `DatasetDict` and has the format:
+```python
+DatasetDict({
+    train: Dataset({
+        features: ['set'],
+        num_rows: 101762
+    })
+})
+```
+Review an example `i` with:
+```python
+dataset["train"][i]["set"]
+```
 
 ### Curation Rationale
 
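Beyond the snippets in the card above, here is a minimal sketch (not part of the commit) of flattening the nested `set` column into plain (anchor, positive, negative) rows. The `expand` helper is hypothetical, as is the defensive handling of `query`, which the card's format shows as a one-item list but which may be stored as a plain string:

```python
from datasets import load_dataset

dataset = load_dataset("embedding-data/QQP_triplets", split="train")

def expand(batch):
    """Turn each nested 'set' dict into one row per negative sentence."""
    anchors, positives, negatives = [], [], []
    for s in batch["set"]:
        # Handle both a plain string and a one-item list (an assumption,
        # not something the card specifies).
        anchor = s["query"] if isinstance(s["query"], str) else s["query"][0]
        pos = s["pos"] if isinstance(s["pos"], str) else s["pos"][0]
        for neg in s["neg"]:
            anchors.append(anchor)
            positives.append(pos)
            negatives.append(neg)
    return {"anchor": anchors, "positive": positives, "negative": negatives}

# Batched map may return more rows than it receives, so each example
# fans out into one row per negative.
flat = dataset.map(expand, batched=True, remove_columns=["set"])
print(flat[0])  # {'anchor': ..., 'positive': ..., 'negative': ...}
```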
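Building on that, a hedged sketch of the Sentence Transformers training the card points to. The base model `all-MiniLM-L6-v2`, the one-negative-per-example choice, and the 1,000-row slice are illustrative assumptions, and `TripletLoss` is just one of several losses that accept (anchor, positive, negative) examples:

```python
from datasets import load_dataset
from sentence_transformers import InputExample, SentenceTransformer, losses
from torch.utils.data import DataLoader

dataset = load_dataset("embedding-data/QQP_triplets", split="train")

# Build (anchor, positive, negative) InputExamples; taking only the first
# negative keeps the sketch simple, though each row carries several.
train_examples = []
for row in dataset.select(range(1000)):  # small slice for illustration
    s = row["set"]
    anchor = s["query"] if isinstance(s["query"], str) else s["query"][0]
    train_examples.append(InputExample(texts=[anchor, s["pos"][0], s["neg"][0]]))

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed base model
loader = DataLoader(train_examples, shuffle=True, batch_size=16)
loss = losses.TripletLoss(model=model)
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)
```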
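Finally, a short sketch of the sentence-similarity use case the card mentions: embed two questions and compare them with cosine similarity. The example questions and the pretrained model are, again, assumptions:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed pretrained model
embeddings = model.encode(
    ["How do I improve my English?", "What should I do to improve my English?"],
    convert_to_tensor=True,
)
# Scores near 1.0 suggest the questions are duplicates.
print(util.cos_sim(embeddings[0], embeddings[1]).item())
```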