lvwerra (HF Staff) committed
Commit 5bb9ffd · Parent(s): 30fb504

Update Space (evaluate main: 143a05c9)

Files changed (2):
  1. README.md +12 -12
  2. requirements.txt +1 -1
README.md CHANGED
@@ -48,9 +48,9 @@ This metric takes as input a list of predicted sentences and a list of lists of
  ```
 
  ### Inputs
- - **predictions** (`list` of `str`s): Translations to score.
- - **references** (`list` of `list`s of `str`s): references for each translation.
- - ** tokenizer** : approach used for standardizing `predictions` and `references`.
+ - **predictions** (`list[str]`): Translations to score.
+ - **references** (`Union[list[str], list[list[str]]]`): references for each translation.
+ - **tokenizer** : approach used for standardizing `predictions` and `references`.
  The default tokenizer is `tokenizer_13a`, a relatively minimal tokenization approach that is however equivalent to `mteval-v13a`, used by WMT.
  This can be replaced by another tokenizer from a source such as [SacreBLEU](https://github.com/mjpost/sacrebleu/tree/master/sacrebleu/tokenizers).
 
@@ -93,15 +93,15 @@ Example where each prediction has 1 reference:
  {'bleu': 1.0, 'precisions': [1.0, 1.0, 1.0, 1.0], 'brevity_penalty': 1.0, 'length_ratio': 1.0, 'translation_length': 7, 'reference_length': 7}
  ```
 
- Example where the second prediction has 2 references:
+ Example where the first prediction has 2 references:
  ```python
  >>> predictions = [
- ... ["hello there general kenobi",
- ... ["foo bar foobar"]
+ ... "hello there general kenobi",
+ ... "foo bar foobar"
  ... ]
  >>> references = [
- ... [["hello there general kenobi"], ["hello there!"]],
- ... [["foo bar foobar"]]
+ ... ["hello there general kenobi", "hello there!"],
+ ... ["foo bar foobar"]
  ... ]
  >>> bleu = evaluate.load("bleu")
  >>> results = bleu.compute(predictions=predictions, references=references)
@@ -114,12 +114,12 @@ Example with the word tokenizer from NLTK:
  >>> bleu = evaluate.load("bleu")
  >>> from nltk.tokenize import word_tokenize
  >>> predictions = [
- ... ["hello there general kenobi",
- ... ["foo bar foobar"]
+ ... "hello there general kenobi",
+ ... "foo bar foobar"
  ... ]
  >>> references = [
- ... [["hello there general kenobi"], ["hello there!"]],
- ... [["foo bar foobar"]]
+ ... ["hello there general kenobi", "hello there!"],
+ ... ["foo bar foobar"]
  ... ]
  >>> results = bleu.compute(predictions=predictions, references=references, tokenizer=word_tokenize)
  >>> print(results)
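The `+` side of this diff passes `predictions` as a flat list of strings and `references` as a list of per-prediction reference lists. As a rough illustration of what the metric computes under that shape, here is a minimal self-contained BLEU sketch (clipped n-gram precisions combined by a geometric mean, plus a brevity penalty), operating on pre-tokenized inputs. The `ngrams` and `bleu` names are hypothetical helpers for this sketch, not the `evaluate` library's implementation, and details such as tie-breaking on reference length may differ from `tokenizer_13a`-based scoring:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # Multiset of n-grams in a token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(predictions, references, max_order=4):
    """predictions: list of token lists; references: list of lists of token lists."""
    matches = [0] * max_order
    totals = [0] * max_order
    pred_len = 0
    ref_len = 0
    for pred, refs in zip(predictions, references):
        pred_len += len(pred)
        # Closest reference length feeds the brevity penalty.
        ref_len += min((abs(len(r) - len(pred)), len(r)) for r in refs)[1]
        for n in range(1, max_order + 1):
            pred_ngrams = ngrams(pred, n)
            # Clip each n-gram count by its maximum count over all references.
            max_ref = Counter()
            for r in refs:
                for g, c in ngrams(r, n).items():
                    max_ref[g] = max(max_ref[g], c)
            matches[n - 1] += sum(min(c, max_ref[g]) for g, c in pred_ngrams.items())
            totals[n - 1] += sum(pred_ngrams.values())
    precisions = [m / t if t else 0.0 for m, t in zip(matches, totals)]
    if min(precisions) == 0:
        return 0.0  # any zero precision makes the geometric mean zero
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_order)
    bp = 1.0 if pred_len >= ref_len else math.exp(1 - ref_len / pred_len)
    return bp * geo_mean
```

On the whitespace-tokenized sentences from the examples above, where every prediction exactly matches a reference, this sketch returns 1.0, mirroring the perfect score shown in the README's sample output.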
requirements.txt CHANGED
@@ -1 +1 @@
- git+https://github.com/huggingface/evaluate@b3820eb820702611cd0c2247743d764f2a7fe916
+ git+https://github.com/huggingface/evaluate@143a05c9844fa000fdcb324d30a750f139217fdd