Update Space (evaluate main: 828c6327)
- README.md +106 -5
- app.py +6 -0
- requirements.txt +4 -0
- sacrebleu.py +166 -0
README.md
CHANGED
@@ -1,12 +1,113 @@
 ---
-title:
-emoji:
-colorFrom:
-colorTo:
+title: SacreBLEU
+emoji: 🤗
+colorFrom: blue
+colorTo: red
 sdk: gradio
 sdk_version: 3.0.2
 app_file: app.py
 pinned: false
+tags:
+- evaluate
+- metric
 ---
 
-
+# Metric Card for SacreBLEU
+
+
+## Metric Description
+SacreBLEU provides hassle-free computation of shareable, comparable, and reproducible BLEU scores. Inspired by Rico Sennrich's `multi-bleu-detok.perl`, it produces the official Workshop on Machine Translation (WMT) scores but works with plain text. It also knows all the standard test sets and handles downloading, processing, and tokenization.
+
+See the [sacreBLEU README](https://github.com/mjpost/sacreBLEU) for more information.
+
+## How to Use
+This metric takes a set of predictions and a set of references as input, along with various optional parameters.
+
+
+```python
+>>> predictions = ["hello there general kenobi", "foo bar foobar"]
+>>> references = [["hello there general kenobi", "hello there !"],
+...                ["foo bar foobar", "foo bar foobar"]]
+>>> sacrebleu = evaluate.load("sacrebleu")
+>>> results = sacrebleu.compute(predictions=predictions,
+...                             references=references)
+>>> print(list(results.keys()))
+['score', 'counts', 'totals', 'precisions', 'bp', 'sys_len', 'ref_len']
+>>> print(round(results["score"], 1))
+100.0
+```
+
+### Inputs
+- **`predictions`** (`list` of `str`): list of translations to score. Each translation should be a plain, detokenized string; sacreBLEU applies its own tokenization.
+- **`references`** (`list` of `list` of `str`): A list of lists of references. The contents of the first sub-list are the references for the first prediction, the contents of the second sub-list are for the second prediction, etc. Note that there must be the same number of references for each prediction (i.e. all sub-lists must be of the same length).
+- **`smooth_method`** (`str`): The smoothing method to use, defaults to `'exp'`. Possible values are:
+    - `'none'`: no smoothing
+    - `'floor'`: increment zero counts
+    - `'add-k'`: increment num/denom by k for n>1
+    - `'exp'`: exponential decay
+- **`smooth_value`** (`float`): The smoothing value. Only valid when `smooth_method='floor'` (in which case `smooth_value` defaults to `0.1`) or `smooth_method='add-k'` (in which case `smooth_value` defaults to `1`).
+- **`tokenize`** (`str`): Tokenization method to use for BLEU. If not provided, defaults to `'zh'` for Chinese, `'ja-mecab'` for Japanese and `'13a'` (mteval) otherwise. Possible values are:
+    - `'none'`: No tokenization.
+    - `'zh'`: Chinese tokenization.
+    - `'13a'`: mimics the `mteval-v13a` script from Moses.
+    - `'intl'`: International tokenization, mimics the `mteval-v14` script from Moses.
+    - `'char'`: Language-agnostic character-level tokenization.
+    - `'ja-mecab'`: Japanese tokenization. Uses the [MeCab tokenizer](https://pypi.org/project/mecab-python3).
+- **`lowercase`** (`bool`): If `True`, lowercases the input, enabling case-insensitivity. Defaults to `False`.
+- **`force`** (`bool`): If `True`, insists that your tokenized input is actually detokenized. Defaults to `False`.
+- **`use_effective_order`** (`bool`): If `True`, stops including n-gram orders for which precision is 0. This should be `True` if sentence-level BLEU is computed. Defaults to `False`.
+
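As an illustration of how the optional arguments above combine in a single `compute` call, here is a minimal sketch (the texts and parameter choices are made up for the example):

```python
>>> import evaluate
>>> sacrebleu = evaluate.load("sacrebleu")
>>> predictions = ["HELLO there General Kenobi"]
>>> references = [["hello there general kenobi"]]
>>> results = sacrebleu.compute(predictions=predictions,
...                             references=references,
...                             lowercase=True,            # case-insensitive matching
...                             tokenize="13a",            # default mteval tokenization
...                             smooth_method="add-k",     # add-k smoothing for n > 1
...                             smooth_value=1,
...                             use_effective_order=True)  # recommended for sentence-level BLEU
>>> print(round(results["score"], 1))
100.0
```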
+### Output Values
+- `score`: BLEU score
+- `counts`: matching n-gram counts for orders 1 through 4
+- `totals`: total n-gram counts for orders 1 through 4
+- `precisions`: n-gram precisions (in percent) for orders 1 through 4
+- `bp`: Brevity penalty
+- `sys_len`: total token count of the predictions
+- `ref_len`: total token count of the references (for multiple references, the one closest in length to each prediction is used)
+
+The output is in the following format:
+```python
+{'score': 39.76353643835252, 'counts': [6, 4, 2, 1], 'totals': [10, 8, 6, 4], 'precisions': [60.0, 50.0, 33.333333333333336, 25.0], 'bp': 1.0, 'sys_len': 10, 'ref_len': 7}
+```
+The score can take any value between `0.0` and `100.0`, inclusive.
+
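The `score` field is simply the brevity penalty multiplied by the geometric mean of the four n-gram precisions, so an output like the one above can be sanity-checked by hand:

```python
import math

# The example output shown above.
results = {"score": 39.76353643835252, "counts": [6, 4, 2, 1], "totals": [10, 8, 6, 4],
           "precisions": [60.0, 50.0, 33.333333333333336, 25.0], "bp": 1.0,
           "sys_len": 10, "ref_len": 7}

# BLEU = bp * exp(mean(log(p_n))) over the n-gram precisions.
geo_mean = math.exp(sum(math.log(p) for p in results["precisions"]) / len(results["precisions"]))
print(round(results["bp"] * geo_mean, 4))  # 39.7635, matching results["score"]
```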
+#### Values from Popular Papers
+
+
+### Examples
+
+```python
+>>> predictions = ["hello there general kenobi",
+...                "on our way to ankh morpork"]
+>>> references = [["hello there general kenobi", "hello there !"],
+...                ["goodbye ankh morpork", "ankh morpork"]]
+>>> sacrebleu = evaluate.load("sacrebleu")
+>>> results = sacrebleu.compute(predictions=predictions,
+...                             references=references)
+>>> print(list(results.keys()))
+['score', 'counts', 'totals', 'precisions', 'bp', 'sys_len', 'ref_len']
+>>> print(round(results["score"], 1))
+39.8
+```
+
+## Limitations and Bias
+Because this metric computes BLEU scores, it shares the limitations of BLEU itself; the difference is that sacreBLEU scores are more easily reproduced and compared.
+
+## Citation
+```bibtex
+@inproceedings{post-2018-call,
+    title = "A Call for Clarity in Reporting {BLEU} Scores",
+    author = "Post, Matt",
+    booktitle = "Proceedings of the Third Conference on Machine Translation: Research Papers",
+    month = oct,
+    year = "2018",
+    address = "Belgium, Brussels",
+    publisher = "Association for Computational Linguistics",
+    url = "https://www.aclweb.org/anthology/W18-6319",
+    pages = "186--191",
+}
+```
+
+## Further References
+- See the [sacreBLEU README.md file](https://github.com/mjpost/sacreBLEU) for more information.
app.py
ADDED
@@ -0,0 +1,6 @@
+import evaluate
+from evaluate.utils import launch_gradio_widget
+
+
+module = evaluate.load("sacrebleu")
+launch_gradio_widget(module)
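`launch_gradio_widget` inspects the loaded module and builds the Space's demo UI automatically. As a rough, hypothetical sketch of what an equivalent hand-written wrapper could look like (the input format and widget layout here are assumptions, not what `launch_gradio_widget` actually renders):

```python
import evaluate
import gradio as gr

module = evaluate.load("sacrebleu")

def score(predictions_text: str, references_text: str) -> dict:
    # One prediction per line; the references for each prediction separated by " ||| ".
    predictions = predictions_text.strip().splitlines()
    references = [line.split(" ||| ") for line in references_text.strip().splitlines()]
    return module.compute(predictions=predictions, references=references)

demo = gr.Interface(
    fn=score,
    inputs=[gr.Textbox(lines=4, label="predictions"), gr.Textbox(lines=4, label="references")],
    outputs=gr.JSON(label="results"),
    title="SacreBLEU",
)
demo.launch()
```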
requirements.txt
ADDED
@@ -0,0 +1,4 @@
+# TODO: fix github to release
+git+https://github.com/huggingface/evaluate.git@b6e6ed7f3e6844b297bff1b43a1b4be0709b9671
+datasets~=2.0
+sacrebleu
sacrebleu.py
ADDED
@@ -0,0 +1,166 @@
+# Copyright 2020 The HuggingFace Evaluate Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+""" SACREBLEU metric. """
+
+import datasets
+import sacrebleu as scb
+from packaging import version
+
+import evaluate
+
+
+_CITATION = """\
+@inproceedings{post-2018-call,
+    title = "A Call for Clarity in Reporting {BLEU} Scores",
+    author = "Post, Matt",
+    booktitle = "Proceedings of the Third Conference on Machine Translation: Research Papers",
+    month = oct,
+    year = "2018",
+    address = "Belgium, Brussels",
+    publisher = "Association for Computational Linguistics",
+    url = "https://www.aclweb.org/anthology/W18-6319",
+    pages = "186--191",
+}
+"""
+
+_DESCRIPTION = """\
+SacreBLEU provides hassle-free computation of shareable, comparable, and reproducible BLEU scores.
+Inspired by Rico Sennrich's `multi-bleu-detok.perl`, it produces the official WMT scores but works with plain text.
+It also knows all the standard test sets and handles downloading, processing, and tokenization for you.
+
+See the [README.md] file at https://github.com/mjpost/sacreBLEU for more information.
+"""
+
+_KWARGS_DESCRIPTION = """
+Produces BLEU scores along with their sufficient statistics
+from a source against one or more references.
+
+Args:
+    predictions (`list` of `str`): list of translations to score. Each translation should be a plain, detokenized string; sacreBLEU applies its own tokenization.
+    references (`list` of `list` of `str`): A list of lists of references. The contents of the first sub-list are the references for the first prediction, the contents of the second sub-list are for the second prediction, etc. Note that there must be the same number of references for each prediction (i.e. all sub-lists must be of the same length).
+    smooth_method (`str`): The smoothing method to use, defaults to `'exp'`. Possible values are:
+        - `'none'`: no smoothing
+        - `'floor'`: increment zero counts
+        - `'add-k'`: increment num/denom by k for n>1
+        - `'exp'`: exponential decay
+    smooth_value (`float`): The smoothing value. Only valid when `smooth_method='floor'` (in which case `smooth_value` defaults to `0.1`) or `smooth_method='add-k'` (in which case `smooth_value` defaults to `1`).
+    tokenize (`str`): Tokenization method to use for BLEU. If not provided, defaults to `'zh'` for Chinese, `'ja-mecab'` for Japanese and `'13a'` (mteval) otherwise. Possible values are:
+        - `'none'`: No tokenization.
+        - `'zh'`: Chinese tokenization.
+        - `'13a'`: mimics the `mteval-v13a` script from Moses.
+        - `'intl'`: International tokenization, mimics the `mteval-v14` script from Moses.
+        - `'char'`: Language-agnostic character-level tokenization.
+        - `'ja-mecab'`: Japanese tokenization. Uses the [MeCab tokenizer](https://pypi.org/project/mecab-python3).
+    lowercase (`bool`): If `True`, lowercases the input, enabling case-insensitivity. Defaults to `False`.
+    force (`bool`): If `True`, insists that your tokenized input is actually detokenized. Defaults to `False`.
+    use_effective_order (`bool`): If `True`, stops including n-gram orders for which precision is 0. This should be `True` if sentence-level BLEU is computed. Defaults to `False`.
+
+Returns:
+    'score': BLEU score,
+    'counts': Counts,
+    'totals': Totals,
+    'precisions': Precisions,
+    'bp': Brevity penalty,
+    'sys_len': predictions length,
+    'ref_len': reference length,
+
+Examples:
+
+    Example 1:
+        >>> predictions = ["hello there general kenobi", "foo bar foobar"]
+        >>> references = [["hello there general kenobi", "hello there !"], ["foo bar foobar", "foo bar foobar"]]
+        >>> sacrebleu = evaluate.load("sacrebleu")
+        >>> results = sacrebleu.compute(predictions=predictions, references=references)
+        >>> print(list(results.keys()))
+        ['score', 'counts', 'totals', 'precisions', 'bp', 'sys_len', 'ref_len']
+        >>> print(round(results["score"], 1))
+        100.0
+
+    Example 2:
+        >>> predictions = ["hello there general kenobi",
+        ...                "on our way to ankh morpork"]
+        >>> references = [["hello there general kenobi", "hello there !"],
+        ...                ["goodbye ankh morpork", "ankh morpork"]]
+        >>> sacrebleu = evaluate.load("sacrebleu")
+        >>> results = sacrebleu.compute(predictions=predictions,
+        ...                             references=references)
+        >>> print(list(results.keys()))
+        ['score', 'counts', 'totals', 'precisions', 'bp', 'sys_len', 'ref_len']
+        >>> print(round(results["score"], 1))
+        39.8
+"""
+
+
+@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
+class Sacrebleu(evaluate.EvaluationModule):
+    def _info(self):
+        if version.parse(scb.__version__) < version.parse("1.4.12"):
+            raise ImportWarning(
+                "To use `sacrebleu`, the module `sacrebleu>=1.4.12` is required, and the current version of `sacrebleu` doesn't match this condition.\n"
+                'You can install it with `pip install "sacrebleu>=1.4.12"`.'
+            )
+        return evaluate.EvaluationModuleInfo(
+            description=_DESCRIPTION,
+            citation=_CITATION,
+            homepage="https://github.com/mjpost/sacreBLEU",
+            inputs_description=_KWARGS_DESCRIPTION,
+            features=datasets.Features(
+                {
+                    "predictions": datasets.Value("string", id="sequence"),
+                    "references": datasets.Sequence(datasets.Value("string", id="sequence"), id="references"),
+                }
+            ),
+            codebase_urls=["https://github.com/mjpost/sacreBLEU"],
+            reference_urls=[
+                "https://github.com/mjpost/sacreBLEU",
+                "https://en.wikipedia.org/wiki/BLEU",
+                "https://towardsdatascience.com/evaluating-text-output-in-nlp-bleu-at-your-own-risk-e8609665a213",
+            ],
+        )
+
+    def _compute(
+        self,
+        predictions,
+        references,
+        smooth_method="exp",
+        smooth_value=None,
+        force=False,
+        lowercase=False,
+        tokenize=None,
+        use_effective_order=False,
+    ):
+        references_per_prediction = len(references[0])
+        if any(len(refs) != references_per_prediction for refs in references):
+            raise ValueError("Sacrebleu requires the same number of references for each prediction")
+        transformed_references = [[refs[i] for refs in references] for i in range(references_per_prediction)]
+        output = scb.corpus_bleu(
+            predictions,
+            transformed_references,
+            smooth_method=smooth_method,
+            smooth_value=smooth_value,
+            force=force,
+            lowercase=lowercase,
+            use_effective_order=use_effective_order,
+            **(dict(tokenize=tokenize) if tokenize else {}),
+        )
+        output_dict = {
+            "score": output.score,
+            "counts": output.counts,
+            "totals": output.totals,
+            "precisions": output.precisions,
+            "bp": output.bp,
+            "sys_len": output.sys_len,
+            "ref_len": output.ref_len,
+        }
+        return output_dict
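The only non-obvious step in `_compute` above is the reference handling: `evaluate` receives references grouped per prediction, while `sacrebleu.corpus_bleu` expects one reference stream per reference index, each aligned with the predictions. A small standalone sketch of that conversion, reusing the sentences from the metric card examples:

```python
import sacrebleu as scb

predictions = ["hello there general kenobi", "foo bar foobar"]
# evaluate-style input: one list of references per prediction.
references = [["hello there general kenobi", "hello there !"],
              ["foo bar foobar", "foo bar foobar"]]

# sacrebleu-style input: one stream per reference index, aligned with the predictions.
n_refs = len(references[0])
transformed = [[refs[i] for refs in references] for i in range(n_refs)]
# transformed == [['hello there general kenobi', 'foo bar foobar'],
#                 ['hello there !', 'foo bar foobar']]

result = scb.corpus_bleu(predictions, transformed, smooth_method="exp")
print(round(result.score, 1))  # 100.0 for these inputs, as in Example 1 above
```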