TildeSIA committed · verified
Commit 79af113 · 1 Parent(s): 7f9e43a

Update README.md

Files changed (1): README.md +35 -46
README.md CHANGED
@@ -140,49 +140,38 @@ Results
  | Ukrainian | 78.0% | 77.0% | 83.9% | **85.1%** |
  | **Average** | 79.5% | 76.8% | 82.5% | **84.7%** |
 
-
- ## Per-Character Perplexity
- **What is Perplexity?** Perplexity measures how well a language model predicts text. A model with low perplexity makes accurate predictions consistently, while a high perplexity means the model is frequently "surprised" by unexpected words or patterns. Lower perplexity indicates the model has learned language patterns more effectively. It's less "surprised" by what it encounters because it better understands how the language works.
- Perplexity fairly evaluates how well each model handles:
- - Spelling accuracy across a diverse vocabulary
- - Grammar rules that span multiple words
- - Sentence structure and flow
- - Language-specific patterns (how different languages form plural forms or compound words)
-
- **Why Character-Level?** Different language models use different internal vocabularies - some break text into whole words, others into word fragments, and some into individual characters. This makes direct comparison difficult.
- Character-level perplexity creates a standardised comparison by calculating how well each model would theoretically perform if we measured their predictions character-by-character. We're not changing how the models work - instead, we use mathematical conversion to approximate their character-level performance based on their predictions.
-
- **Why does this Matter?** Models with lower perplexity generally perform better on real-world tasks like text generation, translation, and understanding context. It's a reliable indicator of overall language competency across different applications.
-
- **What data did we use?**
- We use WMT24++ as it is a multilingual, language-parallel evaluation set that none of the models have seen during training. WMT24++ is a composite of texts from news, literature, speech, and social media; thus, it is suitable for foundational model benchmarking.
-
- | Language | TildeOpen 30b | Gemma 2 27b | EuroLLM 22B Prev. | ALIA 40B |
- |-----------------|---------|------------|----|------|
- | Bulgarian | **2.0539** | 2.2184 | 2.1985 | 2.1336 |
- | Czech | **2.1579** | 2.3522 | 2.3221 | 2.2719 |
- | Danish | **2.003** | 2.1517 | 2.1353 | 2.0805 |
- | German | **1.8769** | 1.9285 | 1.9452 | 1.904 |
- | English | 2.0378 | **1.9525** | 2.0568 | 2.0261 |
- | Spanish | 1.9503 | 1.9752 | 2.0145 | **1.9369** |
- | Estonian | **2.1711** | 2.5747 | 2.3852 | 2.325 |
- | Finnish | **2.0497** | 2.288 | 2.2388 | 2.1831 |
- | French | **1.8978** | 1.9355 | 1.9282 | 1.9084 |
- | Croatian | **2.1147** | 2.544 | 2.4905 | 2.2433 |
- | Hungarian | **2.0539** | 2.2228 | 2.2256 | 2.1635 |
- | Icelandic | **2.0873** | 3.0329 | 4.7908 | 3.957 |
- | Italian | **1.9565** | 2.0137 | 2.0098 | 1.9887 |
- | Lithuanian | **2.1247** | 2.4175 | 2.3137 | 2.3075 |
- | Latvian | **2.1439** | 2.5355 | 2.3141 | 2.3276 |
- | Dutch | **1.9333** | 2.0312 | 2.0079 | 1.9904 |
- | Norwegian | **2.1284** | 2.2862 | 2.3506 | 2.2253 |
- | Polish | **2.0241** | 2.1294 | 2.0803 | 2.0803 |
- | Portuguese | **1.9899** | 2.0597 | 2.0272 | 2.0187 |
- | Romanian | **2.0196** | 2.1606 | 2.1641 | 2.1114 |
- | Russian | **2.0424** | 2.09 | 2.1095 | 2.0871 |
- | Slovak | **2.1192** | 2.338 | 2.3029 | 2.2609 |
- | Slovenian | **2.1556** | 2.4443 | 2.3398 | 2.2589 |
- | Serbian | **2.2469** | 2.6351 | 4.2471 | 2.3743 |
- | Swedish | **2.041** | 2.1809 | 2.1464 | 2.1211 |
- | Turkish | **2.0997** | 2.247 | 2.2202 | 2.232 |
- | Ukrainian | **2.1376** | 2.2665 | 2.2691 | 2.2086 |
+ ## MultiBLiMP Benchmark: Grammar Test
+ **What is MultiBLiMP?** [MultiBLiMP](https://arxiv.org/pdf/2504.02768) is a massively multilingual test of core grammar. It presents a model with pairs of almost-identical sentences, one grammatical and one ungrammatical, and checks whether the model assigns a higher probability to the grammatical one. Version 1.0 covers 101 languages.
+
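+ As an illustration of this minimal-pair protocol (a sketch, not the benchmark's actual code; the model id and the sentence pair are placeholders), one can compare the total log-probability a causal LM assigns to each sentence:
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "TildeAI/TildeOpen-30b"  # placeholder; any causal LM works
+ tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
+
+ def sentence_logprob(text: str) -> float:
+     """Sum of the log-probabilities the model assigns to the tokens of `text`."""
+     ids = tokenizer(text, return_tensors="pt").input_ids
+     with torch.no_grad():
+         logits = model(ids).logits
+     # Each position predicts the *next* token, so shift targets by one.
+     logprobs = logits[:, :-1].log_softmax(-1)
+     targets = ids[:, 1:].unsqueeze(-1)
+     return logprobs.gather(2, targets).sum().item()
+
+ # An illustrative minimal pair (not taken from the dataset):
+ good = "The keys to the cabinet are on the table."
+ bad = "The keys to the cabinet is on the table."
+ print("prefers grammatical:", sentence_logprob(good) > sentence_logprob(bad))
+ ```
+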
+ **Why does this Matter?** MultiBLiMP tests a model's ability to distinguish correct from erroneous language. As with humans, producing mostly correct language is simply the expected baseline; what stands out is making mistakes at all, so even small gaps in accuracy matter.
+
+ **What did we do?**
+ We used the standard implementation of the [MultiBLiMP](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/multiblimp) task from the LM Evaluation Harness. We set tokenisers to `use_fast=False`. We report **0-shot** accuracy.
+
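+ For reproducibility, a minimal sketch of the corresponding harness call via its Python API; the per-language task id and the `use_fast_tokenizer` model argument are assumptions to verify against your harness version (e.g. with `lm-eval --tasks list`):
+
+ ```python
+ import lm_eval
+
+ results = lm_eval.simple_evaluate(
+     model="hf",
+     # use_fast_tokenizer=False loads the slow tokenizer (our use_fast=False setting)
+     model_args="pretrained=TildeAI/TildeOpen-30b,use_fast_tokenizer=False",
+     tasks=["multiblimp_eng"],  # assumed task id; one task per language
+     num_fewshot=0,             # 0-shot, as reported
+ )
+ print(results["results"])
+ ```
+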
+ | Language | Gemma 2 27b | ALIA 40b | EuroLLM Prev. 22b | TildeOpen 1.1 30b |
+ |----------|-------------|----------|-------------------|-------------------|
+ | Bulgarian | 95.4% | 98.8% | 97.7% | **99.6%** |
+ | Czech | 98.6% | **98.9%** | 98.5% | 98.5% |
+ | German | 98.8% | 98.7% | 98.0% | **99.4%** |
+ | English | 98.4% | 98.7% | 98.7% | **99.4%** |
+ | Estonian | 92.0% | 95.6% | 95.8% | **98.3%** |
+ | Finnish | 93.0% | 96.3% | 95.2% | **98.5%** |
+ | French | 98.2% | 98.8% | 98.7% | **99.3%** |
+ | Serbo-Croatian | 94.6% | 98.5% | 96.4% | **99.6%** |
+ | Hungarian | 95.9% | 98.8% | 97.8% | **100.0%** |
+ | Icelandic | 88.5% | 80.3% | 74.4% | **98.8%** |
+ | Italian | 96.0% | 96.7% | 96.6% | **98.2%** |
+ | Latvian | 91.6% | 95.2% | 96.9% | **99.1%** |
+ | Lithuanian | 95.3% | 99.0% | 99.0% | **99.7%** |
+ | Dutch | 94.0% | 96.6% | 96.5% | **99.2%** |
+ | Polish | 97.0% | 97.5% | 97.6% | **99.3%** |
+ | Portuguese | 96.1% | 97.6% | 97.1% | **98.2%** |
+ | Romanian | 97.7% | 98.9% | 98.5% | **98.9%** |
+ | Russian | 94.7% | 96.6% | 97.3% | **99.4%** |
+ | Slovak | 97.7% | 98.8% | 97.7% | **99.3%** |
+ | Slovenian | 99.0% | **100.0%** | **100.0%** | 98.8% |
+ | Spanish | 95.6% | 98.0% | 97.3% | **98.7%** |
+ | Swedish | 95.8% | 85.1% | 93.8% | **100.0%** |
+ | Turkish | 97.6% | **98.7%** | 97.9% | 96.4% |
+ | Ukrainian | 95.6% | 98.0% | 97.3% | **99.2%** |
+ | **Average** | 95.7% | 96.7% | 96.4% | **99.0%** |