Update README.md
README.md CHANGED

@@ -15,15 +15,15 @@ This model is used to generate paraphrases. It has been trained on a mix of 3 di

We use this model in our ACL'21 paper ["PROTAUGMENT: Unsupervised diverse short-texts paraphrasing for intent detection meta-learning"](https://arxiv.org/abs/2105.12995).

Jointly used with generation constraints, this model can generate diverse paraphrases. We use those paraphrases as a data augmentation technique to further boost a classification model's generalization capability. Feel free to play with the [code](https://github.com/tdopierre/ProtAugment)!
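
As a quick illustration, here is a minimal sketch of querying the model through the 🤗 Transformers seq2seq API. The model id below is a placeholder (substitute this model's Hub id), and diverse beam search stands in for the paper's decoding constraints, which are implemented in the ProtAugment repository linked above.

```python
# Minimal sketch (not the exact PROTAUGMENT pipeline): load the paraphrase model as a
# standard seq2seq checkpoint and return several candidate paraphrases.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "path/to/this-model"  # placeholder: replace with this model's Hub id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)

sentence = "how do i reset my password"
inputs = tokenizer(sentence, return_tensors="pt")

# Diverse beam search is one simple way to push the candidates apart; the paper's
# own generation constraints (see the ProtAugment repository) go further than this.
outputs = model.generate(
    **inputs,
    num_beams=10,
    num_beam_groups=5,
    diversity_penalty=1.0,
    num_return_sequences=5,
    max_length=64,
)

for paraphrase in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(paraphrase)
```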

If you use this model, please consider citing our paper.

```
@article{Dopierre2021ProtAugmentUD,
  title={ProtAugment: Unsupervised diverse short-texts paraphrasing for intent detection meta-learning},
  author={Thomas Dopierre and C. Gravier and Wilfried Logerais},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.12995}
}
```