sampathchanda committed
Commit a465972 · verified · 1 Parent(s): 063d7dc

Upload folder using huggingface_hub

report/base-model-evaluation.md ADDED
@@ -0,0 +1,28 @@
+ ## Base model evaluation
+ timestamp: 2025-10-14 01:41:37
+
+ - Model: base_model (step 21400)
+ - CORE metric: 0.1976
+ - hellaswag_zeroshot: 0.2598
+ - jeopardy: 0.0874
+ - bigbench_qa_wikidata: 0.5113
+ - arc_easy: 0.5354
+ - arc_challenge: 0.1183
+ - copa: 0.2800
+ - commonsense_qa: 0.0796
+ - piqa: 0.3798
+ - openbook_qa: 0.1627
+ - lambada_openai: 0.3839
+ - hellaswag: 0.2595
+ - winograd: 0.2821
+ - winogrande: 0.0513
+ - bigbench_dyck_languages: 0.1430
+ - agi_eval_lsat_ar: 0.1304
+ - bigbench_cs_algorithms: 0.3727
+ - bigbench_operators: 0.1762
+ - bigbench_repeat_copy_logic: 0.0312
+ - squad: 0.2389
+ - coqa: 0.2088
+ - boolq: -0.5218
+ - bigbench_language_identification: 0.1757
+
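A note on reading these numbers: the per-task values appear to be baseline-centered rather than raw accuracies, which is how a below-chance task like boolq can come out negative. A minimal sketch, assuming the DCLM-style CORE recipe:

```python
# Minimal sketch of (assumed) DCLM-style CORE scoring: each task's raw
# accuracy is centered against its random-guessing baseline, so chance
# performance maps to 0.0 and below-chance goes negative (cf. boolq above).
def centered_score(acc: float, baseline: float) -> float:
    return (acc - baseline) / (1.0 - baseline)

def core(scores: dict[str, float], baselines: dict[str, float]) -> float:
    # CORE is the plain mean of the centered per-task scores.
    centered = [centered_score(acc, baselines[task]) for task, acc in scores.items()]
    return sum(centered) / len(centered)
```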
report/base-model-loss.md ADDED
@@ -0,0 +1,13 @@
+ ## Base model loss
+ timestamp: 2025-10-14 01:34:11
+
+ - train bpb: 0.8178
+ - val bpb: 0.8150
+ - sample 0: <|bos|>The capital of France is Paris. It is the largest city in France and the second largest in Europe.
+ - sample 1: <|bos|>The chemical symbol of gold is Au. It is a soft, malleable, ductile, and malleable metal. It
+ - sample 2: <|bos|>If yesterday was Friday, then tomorrow will be Saturday. If tomorrow is Sunday, then tomorrow will be Monday. If tomorrow is
+ - sample 3: <|bos|>The opposite of hot is cold. The opposite of cold is hot. The opposite of hot is cold.
+ - sample 4: <|bos|>The planets of the solar system are: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune,
+ - sample 5: <|bos|>My favorite color is red. I love the color red. I love the color red. I love
+ - sample 6: <|bos|>If 5*x + 3 = 13, then x is 5 times 3. If 5*x + 3 =
+
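For context, bits-per-byte (bpb) is cross-entropy re-expressed in bits and normalized per byte of text rather than per token, which makes it comparable across tokenizers. A small sketch of the assumed conversion:

```python
import math

# Assumed definition of bits-per-byte: mean cross-entropy in nats/token,
# converted to bits, divided by the average bytes each token covers.
def bits_per_byte(loss_nats_per_token: float, bytes_per_token: float) -> float:
    return loss_nats_per_token / math.log(2) / bytes_per_token
```

As an illustration, at this tokenizer's ~4.86 bytes/token on fwe-val (see the tokenizer evaluation), a val bpb of 0.8150 corresponds to roughly 0.8150 × 4.86 × ln 2 ≈ 2.75 nats/token of cross-entropy.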
report/base-model-training.md ADDED
@@ -0,0 +1,39 @@
+ ## Base model training
+ timestamp: 2025-10-14 01:32:23
+
+ - run: d0
+ - depth: 20
+ - max_seq_len: 2048
+ - num_iterations: -1
+ - target_flops: -1.0000
+ - target_param_data_ratio: 20
+ - device_batch_size: 32
+ - total_batch_size: 524,288
+ - embedding_lr: 0.2000
+ - unembedding_lr: 0.0040
+ - weight_decay: 0.0000
+ - matrix_lr: 0.0200
+ - grad_clip: 1.0000
+ - eval_every: 250
+ - eval_tokens: 10,485,760
+ - core_metric_every: 2000
+ - core_metric_max_per_task: 500
+ - sample_every: 2000
+ - model_tag:
+ - Number of parameters: 560,988,160
+ - Number of FLOPs per token: 3.491758e+09
+ - Calculated number of iterations: 21,400
+ - Number of training tokens: 11,219,763,200
+ - Tokens : Params ratio: 20.0000
+ - DDP world size: 8
+ - warmup_ratio: 0.0000
+ - warmdown_ratio: 0.2000
+ - final_lr_frac: 0.0000
+ - Minimum validation bpb: 0.8149
+ - Final validation bpb: 0.8149
+ - CORE metric estimate: 0.2059
+ - MFU %: 21.08%
+ - Total training flops: 3.917670e+19
+ - Total training time: 393.81m
+ - Peak memory usage: 75374.27MiB
+
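The derived quantities above follow from the config; a quick consistency check of the arithmetic (the decomposition of the batch size into seq_len × device batch × world size is an assumption, but the products match exactly):

```python
# Consistency check of the schedule above, using only reported values.
params = 560_988_160
ratio = 20                                   # target_param_data_ratio
seq_len, device_bs, world_size = 2048, 32, 8

total_batch_size = seq_len * device_bs * world_size   # 524,288 tokens/step
train_tokens = params * ratio                         # 11,219,763,200
iterations = train_tokens // total_batch_size         # 21,400
total_flops = 3.491758e9 * train_tokens               # ≈ 3.9177e19
```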
report/chat-evaluation-mid.md ADDED
@@ -0,0 +1,21 @@
+ ## Chat evaluation mid
+ timestamp: 2025-10-14 02:16:06
+
+ - source: mid
+ - task_name: None
+ - dtype: bfloat16
+ - temperature: 0.0000
+ - max_new_tokens: 512
+ - num_samples: 1
+ - top_k: 50
+ - batch_size: 8
+ - model_tag: None
+ - step: None
+ - max_problems: None
+ - ARC-Easy: 0.3758
+ - ARC-Challenge: 0.2884
+ - MMLU: 0.3088
+ - GSM8K: 0.0303
+ - HumanEval: 0.0671
+ - ChatCORE metric: 0.0790
+
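The ChatCORE line aggregates the five task scores. Assuming the same baseline-centering as CORE (0.25 for the four-way multiple-choice tasks, 0.0 for the generative ones), the reported value reproduces exactly:

```python
# Reproducing ChatCORE from the per-task scores above (baselines assumed:
# 0.25 for 4-way multiple choice, 0.0 for GSM8K and HumanEval).
scores = {"ARC-Easy": 0.3758, "ARC-Challenge": 0.2884, "MMLU": 0.3088,
          "GSM8K": 0.0303, "HumanEval": 0.0671}
baselines = {"ARC-Easy": 0.25, "ARC-Challenge": 0.25, "MMLU": 0.25,
             "GSM8K": 0.0, "HumanEval": 0.0}

centered = [(s - baselines[t]) / (1.0 - baselines[t]) for t, s in scores.items()]
print(sum(centered) / len(centered))   # ≈ 0.0790, matching the report
```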
report/chat-evaluation-sft.md ADDED
@@ -0,0 +1,21 @@
+ ## Chat evaluation sft
+ timestamp: 2025-10-14 02:39:37
+
+ - source: sft
+ - task_name: None
+ - dtype: bfloat16
+ - temperature: 0.0000
+ - max_new_tokens: 512
+ - num_samples: 1
+ - top_k: 50
+ - batch_size: 8
+ - model_tag: None
+ - step: None
+ - max_problems: None
+ - ARC-Easy: 0.3952
+ - ARC-Challenge: 0.2961
+ - MMLU: 0.3138
+ - GSM8K: 0.0402
+ - HumanEval: 0.0549
+ - ChatCORE metric: 0.0870
+
report/chat-rl.md ADDED
@@ -0,0 +1,22 @@
+ ## Chat RL
+ timestamp: 2025-10-14 07:06:07
+
+ - run:
+ - source: sft
+ - dtype: bfloat16
+ - device_batch_size: 8
+ - examples_per_step: 16
+ - num_samples: 16
+ - max_new_tokens: 256
+ - temperature: 1.0000
+ - top_k: 50
+ - unembedding_lr: 0.0040
+ - embedding_lr: 0.2000
+ - matrix_lr: 0.0200
+ - weight_decay: 0.0000
+ - init_lr_frac: 0.0500
+ - num_epochs: 1
+ - save_every: 60
+ - eval_every: 60
+ - eval_examples: 400
+
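What these sampling knobs imply per optimizer step, under the assumption that each example is expanded into num_samples rollouts generated in device-sized batches:

```python
# Rough rollout accounting for one RL step (assumed sampling loop).
examples_per_step = 16
num_samples = 16            # completions sampled per example
device_batch_size = 8
max_new_tokens = 256

rollouts = examples_per_step * num_samples        # 256 completions/step
gen_batches = rollouts // device_batch_size       # 32 generation batches
token_budget = rollouts * max_new_tokens          # ≤ 65,536 sampled tokens/step
```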
report/chat-sft.md ADDED
@@ -0,0 +1,23 @@
+ ## Chat SFT
+ timestamp: 2025-10-14 02:27:42
+
+ - run: d0
+ - source: mid
+ - dtype: bfloat16
+ - device_batch_size: 4
+ - num_epochs: 1
+ - max_iterations: -1
+ - target_examples_per_step: 32
+ - unembedding_lr: 0.0040
+ - embedding_lr: 0.2000
+ - matrix_lr: 0.0200
+ - weight_decay: 0.0000
+ - init_lr_frac: 0.0200
+ - eval_every: 100
+ - eval_steps: 100
+ - eval_metrics_every: 200
+ - Training rows: 20,843
+ - Number of iterations: 651
+ - Training loss: 1.2206
+ - Validation loss: 1.0725
+
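The iteration count follows from the row count and step size; a sketch, assuming one pass over the data with the final partial batch dropped:

```python
# Checking the SFT schedule above.
training_rows = 20_843
target_examples_per_step = 32
num_epochs = 1

iterations = num_epochs * (training_rows // target_examples_per_step)  # 651
```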
report/header.md ADDED
@@ -0,0 +1,36 @@
+ # nanochat training report
+
+ Generated: 2025-10-13 18:24:03
+
+ ## Environment
+
+ ### Git Information
+ - Branch: master
+ - Commit: 626bd3e (clean)
+ - Message: Add image of the WebUI to readme
+
+ ### Hardware
+ - Platform: Linux
+ - CPUs: 48 cores (96 logical)
+ - Memory: 1121.8 GB
+ - GPUs: 8x NVIDIA A100-SXM4-80GB
+ - GPU Memory: 634.0 GB total
+ - CUDA Version: 12.8
+ - Hourly Rate: $14.32/hour
+
+ ### Software
+ - Python: 3.10.12
+ - PyTorch: 2.8.0+cu128
+
+
+ ### Bloat
+ - Characters: 330,622
+ - Lines: 8,077
+ - Files: 42
+ - Tokens (approx): 82,655
+ - Dependencies (uv.lock lines): 2,004
+
+ Run started: 2025-10-13 18:24:07
+
+ ---
+
report/midtraining.md ADDED
@@ -0,0 +1,20 @@
+ ## Midtraining
+ timestamp: 2025-10-14 02:01:41
+
+ - run: d0
+ - dtype: bfloat16
+ - max_seq_len: 2048
+ - device_batch_size: 32
+ - unembedding_lr: 0.0040
+ - embedding_lr: 0.2000
+ - matrix_lr: 0.0200
+ - init_lr_frac: 1.0000
+ - weight_decay: 0.0000
+ - final_lr_frac: 0.0000
+ - eval_every: 150
+ - eval_tokens: 10,485,760
+ - total_batch_size: 524,288
+ - Number of iterations: 765
+ - DDP world size: 8
+ - Minimum validation bpb: 0.4176
+
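The implied midtraining token budget, assuming every iteration consumes a full batch:

```python
# Token budget implied by the midtraining numbers above.
iterations = 765
total_batch_size = 524_288                 # tokens per optimizer step
tokens = iterations * total_batch_size     # 401,080,320 ≈ 0.4B tokens
```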
report/report.md ADDED
@@ -0,0 +1,269 @@
+ # nanochat training report
+
+ Generated: 2025-10-13 18:24:03
+
+ ## Environment
+
+ ### Git Information
+ - Branch: master
+ - Commit: 626bd3e (clean)
+ - Message: Add image of the WebUI to readme
+
+ ### Hardware
+ - Platform: Linux
+ - CPUs: 48 cores (96 logical)
+ - Memory: 1121.8 GB
+ - GPUs: 8x NVIDIA A100-SXM4-80GB
+ - GPU Memory: 634.0 GB total
+ - CUDA Version: 12.8
+ - Hourly Rate: $14.32/hour
+
+ ### Software
+ - Python: 3.10.12
+ - PyTorch: 2.8.0+cu128
+
+
+ ### Bloat
+ - Characters: 330,622
+ - Lines: 8,077
+ - Files: 42
+ - Tokens (approx): 82,655
+ - Dependencies (uv.lock lines): 2,004
+
+ Run started: 2025-10-13 18:24:07
+
+ ---
+
+ ## Tokenizer training
+ timestamp: 2025-10-13 18:25:52
+
+ - max_chars: 2,000,000,000
+ - doc_cap: 10,000
+ - vocab_size: 65,536
+ - train_time: 89.5722
+ - num_special_tokens: 9
+ - token_bytes_min: 1
+ - token_bytes_max: 32
+ - token_bytes_mean: 6.9151
+ - token_bytes_std: 2.8736
+
+
+ ## Tokenizer evaluation
+ timestamp: 2025-10-13 18:26:00
+
+ ### Comparison with GPT-2
+
+ | Text Type | Bytes | GPT-2 Tokens | GPT-2 Ratio | Ours Tokens | Ours Ratio | Relative Diff % |
+ |-----------|-------|--------------|--------------|-------------|------------|-----------------|
+ | news | 1819 | 404 | 4.50 | 375 | 4.85 | +7.2% |
+ | korean | 893 | 745 | 1.20 | 721 | 1.24 | +3.2% |
+ | code | 1259 | 576 | 2.19 | 493 | 2.55 | +14.4% |
+ | math | 1834 | 936 | 1.96 | 966 | 1.90 | -3.2% |
+ | science | 1112 | 260 | 4.28 | 225 | 4.94 | +13.5% |
+ | fwe-train | 4208518 | 900364 | 4.67 | 856901 | 4.91 | +4.8% |
+ | fwe-val | 4908443 | 1059062 | 4.63 | 1010356 | 4.86 | +4.6% |
+
+ ### Comparison with GPT-4
+
+ | Text Type | Bytes | GPT-4 Tokens | GPT-4 Ratio | Ours Tokens | Ours Ratio | Relative Diff % |
+ |-----------|-------|--------------|--------------|-------------|------------|-----------------|
+ | news | 1819 | 387 | 4.70 | 375 | 4.85 | +3.1% |
+ | korean | 893 | 364 | 2.45 | 721 | 1.24 | -98.1% |
+ | code | 1259 | 309 | 4.07 | 493 | 2.55 | -59.5% |
+ | math | 1834 | 832 | 2.20 | 966 | 1.90 | -16.1% |
+ | science | 1112 | 249 | 4.47 | 225 | 4.94 | +9.6% |
+ | fwe-train | 4208518 | 874799 | 4.81 | 856901 | 4.91 | +2.0% |
+ | fwe-val | 4908443 | 1029691 | 4.77 | 1010356 | 4.86 | +1.9% |
+
+
+ ## Base model training
+ timestamp: 2025-10-14 01:32:23
+
+ - run: d0
+ - depth: 20
+ - max_seq_len: 2048
+ - num_iterations: -1
+ - target_flops: -1.0000
+ - target_param_data_ratio: 20
+ - device_batch_size: 32
+ - total_batch_size: 524,288
+ - embedding_lr: 0.2000
+ - unembedding_lr: 0.0040
+ - weight_decay: 0.0000
+ - matrix_lr: 0.0200
+ - grad_clip: 1.0000
+ - eval_every: 250
+ - eval_tokens: 10,485,760
+ - core_metric_every: 2000
+ - core_metric_max_per_task: 500
+ - sample_every: 2000
+ - model_tag:
+ - Number of parameters: 560,988,160
+ - Number of FLOPs per token: 3.491758e+09
+ - Calculated number of iterations: 21,400
+ - Number of training tokens: 11,219,763,200
+ - Tokens : Params ratio: 20.0000
+ - DDP world size: 8
+ - warmup_ratio: 0.0000
+ - warmdown_ratio: 0.2000
+ - final_lr_frac: 0.0000
+ - Minimum validation bpb: 0.8149
+ - Final validation bpb: 0.8149
+ - CORE metric estimate: 0.2059
+ - MFU %: 21.08%
+ - Total training flops: 3.917670e+19
+ - Total training time: 393.81m
+ - Peak memory usage: 75374.27MiB
+
+
+ ## Base model loss
+ timestamp: 2025-10-14 01:34:11
+
+ - train bpb: 0.8178
+ - val bpb: 0.8150
+ - sample 0: <|bos|>The capital of France is Paris. It is the largest city in France and the second largest in Europe.
+ - sample 1: <|bos|>The chemical symbol of gold is Au. It is a soft, malleable, ductile, and malleable metal. It
+ - sample 2: <|bos|>If yesterday was Friday, then tomorrow will be Saturday. If tomorrow is Sunday, then tomorrow will be Monday. If tomorrow is
+ - sample 3: <|bos|>The opposite of hot is cold. The opposite of cold is hot. The opposite of hot is cold.
+ - sample 4: <|bos|>The planets of the solar system are: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune,
+ - sample 5: <|bos|>My favorite color is red. I love the color red. I love the color red. I love
+ - sample 6: <|bos|>If 5*x + 3 = 13, then x is 5 times 3. If 5*x + 3 =
+
+
+ ## Base model evaluation
+ timestamp: 2025-10-14 01:41:37
+
+ - Model: base_model (step 21400)
+ - CORE metric: 0.1976
+ - hellaswag_zeroshot: 0.2598
+ - jeopardy: 0.0874
+ - bigbench_qa_wikidata: 0.5113
+ - arc_easy: 0.5354
+ - arc_challenge: 0.1183
+ - copa: 0.2800
+ - commonsense_qa: 0.0796
+ - piqa: 0.3798
+ - openbook_qa: 0.1627
+ - lambada_openai: 0.3839
+ - hellaswag: 0.2595
+ - winograd: 0.2821
+ - winogrande: 0.0513
+ - bigbench_dyck_languages: 0.1430
+ - agi_eval_lsat_ar: 0.1304
+ - bigbench_cs_algorithms: 0.3727
+ - bigbench_operators: 0.1762
+ - bigbench_repeat_copy_logic: 0.0312
+ - squad: 0.2389
+ - coqa: 0.2088
+ - boolq: -0.5218
+ - bigbench_language_identification: 0.1757
+
+
+ ## Midtraining
+ timestamp: 2025-10-14 02:01:41
+
+ - run: d0
+ - dtype: bfloat16
+ - max_seq_len: 2048
+ - device_batch_size: 32
+ - unembedding_lr: 0.0040
+ - embedding_lr: 0.2000
+ - matrix_lr: 0.0200
+ - init_lr_frac: 1.0000
+ - weight_decay: 0.0000
+ - final_lr_frac: 0.0000
+ - eval_every: 150
+ - eval_tokens: 10,485,760
+ - total_batch_size: 524,288
+ - Number of iterations: 765
+ - DDP world size: 8
+ - Minimum validation bpb: 0.4176
+
+
+ ## Chat evaluation mid
+ timestamp: 2025-10-14 02:16:06
+
+ - source: mid
+ - task_name: None
+ - dtype: bfloat16
+ - temperature: 0.0000
+ - max_new_tokens: 512
+ - num_samples: 1
+ - top_k: 50
+ - batch_size: 8
+ - model_tag: None
+ - step: None
+ - max_problems: None
+ - ARC-Easy: 0.3758
+ - ARC-Challenge: 0.2884
+ - MMLU: 0.3088
+ - GSM8K: 0.0303
+ - HumanEval: 0.0671
+ - ChatCORE metric: 0.0790
+
+
+ ## Chat SFT
+ timestamp: 2025-10-14 02:27:42
+
+ - run: d0
+ - source: mid
+ - dtype: bfloat16
+ - device_batch_size: 4
+ - num_epochs: 1
+ - max_iterations: -1
+ - target_examples_per_step: 32
+ - unembedding_lr: 0.0040
+ - embedding_lr: 0.2000
+ - matrix_lr: 0.0200
+ - weight_decay: 0.0000
+ - init_lr_frac: 0.0200
+ - eval_every: 100
+ - eval_steps: 100
+ - eval_metrics_every: 200
+ - Training rows: 20,843
+ - Number of iterations: 651
+ - Training loss: 1.2206
+ - Validation loss: 1.0725
+
+
+ ## Chat evaluation sft
+ timestamp: 2025-10-14 02:39:37
+
+ - source: sft
+ - task_name: None
+ - dtype: bfloat16
+ - temperature: 0.0000
+ - max_new_tokens: 512
+ - num_samples: 1
+ - top_k: 50
+ - batch_size: 8
+ - model_tag: None
+ - step: None
+ - max_problems: None
+ - ARC-Easy: 0.3952
+ - ARC-Challenge: 0.2961
+ - MMLU: 0.3138
+ - GSM8K: 0.0402
+ - HumanEval: 0.0549
+ - ChatCORE metric: 0.0870
+
+
+ ## Summary
+
+ - Characters: 330,622
+ - Lines: 8,077
+ - Files: 42
+ - Tokens (approx): 82,655
+ - Dependencies (uv.lock lines): 2,004
+
+ | Metric | BASE | MID | SFT | RL |
+ |-----------------|----------|----------|----------|----------|
+ | CORE | 0.1976 | - | - | - |
+ | ARC-Challenge | - | 0.2884 | 0.2961 | - |
+ | ARC-Easy | - | 0.3758 | 0.3952 | - |
+ | GSM8K | - | 0.0303 | 0.0402 | - |
+ | HumanEval | - | 0.0671 | 0.0549 | - |
+ | MMLU | - | 0.3088 | 0.3138 | - |
+ | ChatCORE | - | 0.0790 | 0.0870 | - |
+
+ Total wall clock time: 8h15m
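One figure the summary implies but does not state is the dollar cost of the run; assuming the header's hourly rate applies flat across the full wall clock time:

```python
# Back-of-the-envelope cost of the run.
hours = 8 + 15 / 60        # 8h15m wall clock
rate = 14.32               # $/hour for the 8xA100 node (from the header)
cost = hours * rate        # ≈ $118.14
```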
report/tokenizer-evaluation.md ADDED
@@ -0,0 +1,27 @@
+ ## Tokenizer evaluation
+ timestamp: 2025-10-13 18:26:00
+
+ ### Comparison with GPT-2
+
+ | Text Type | Bytes | GPT-2 Tokens | GPT-2 Ratio | Ours Tokens | Ours Ratio | Relative Diff % |
+ |-----------|-------|--------------|--------------|-------------|------------|-----------------|
+ | news | 1819 | 404 | 4.50 | 375 | 4.85 | +7.2% |
+ | korean | 893 | 745 | 1.20 | 721 | 1.24 | +3.2% |
+ | code | 1259 | 576 | 2.19 | 493 | 2.55 | +14.4% |
+ | math | 1834 | 936 | 1.96 | 966 | 1.90 | -3.2% |
+ | science | 1112 | 260 | 4.28 | 225 | 4.94 | +13.5% |
+ | fwe-train | 4208518 | 900364 | 4.67 | 856901 | 4.91 | +4.8% |
+ | fwe-val | 4908443 | 1059062 | 4.63 | 1010356 | 4.86 | +4.6% |
+
+ ### Comparison with GPT-4
+
+ | Text Type | Bytes | GPT-4 Tokens | GPT-4 Ratio | Ours Tokens | Ours Ratio | Relative Diff % |
+ |-----------|-------|--------------|--------------|-------------|------------|-----------------|
+ | news | 1819 | 387 | 4.70 | 375 | 4.85 | +3.1% |
+ | korean | 893 | 364 | 2.45 | 721 | 1.24 | -98.1% |
+ | code | 1259 | 309 | 4.07 | 493 | 2.55 | -59.5% |
+ | math | 1834 | 832 | 2.20 | 966 | 1.90 | -16.1% |
+ | science | 1112 | 249 | 4.47 | 225 | 4.94 | +9.6% |
+ | fwe-train | 4208518 | 874799 | 4.81 | 856901 | 4.91 | +2.0% |
+ | fwe-val | 4908443 | 1029691 | 4.77 | 1010356 | 4.86 | +1.9% |
+
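How the table columns relate, reconstructed from the numbers themselves (these assumed formulas reproduce every reported row): "Ratio" is compression in bytes per token, and "Relative Diff %" measures the other tokenizer's ratio against ours.

```python
# Assumed column definitions, checked against the "news" row of the
# GPT-2 table above.
n_bytes, gpt2_tokens, ours_tokens = 1819, 404, 375

gpt2_ratio = n_bytes / gpt2_tokens                         # 4.50 bytes/token
ours_ratio = n_bytes / ours_tokens                         # 4.85 bytes/token
rel_diff = 100 * (ours_ratio - gpt2_ratio) / ours_ratio    # +7.2%
```

Positive values mean our tokenizer compresses that text type better; the large negative values in the GPT-4 table (korean, code) mark where GPT-4's tokenizer is far ahead.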
report/tokenizer-training.md ADDED
@@ -0,0 +1,13 @@
+ ## Tokenizer training
+ timestamp: 2025-10-13 18:25:52
+
+ - max_chars: 2,000,000,000
+ - doc_cap: 10,000
+ - vocab_size: 65,536
+ - train_time: 89.5722
+ - num_special_tokens: 9
+ - token_bytes_min: 1
+ - token_bytes_max: 32
+ - token_bytes_mean: 6.9151
+ - token_bytes_std: 2.8736
+
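The token_bytes_* lines summarize the byte lengths of the learned vocabulary entries; a sketch under that assumed definition (note vocab_size = 65,536 = 2^16, so token ids fit in a uint16):

```python
import statistics

# Assumed definition of the token_bytes_* stats: the byte-length
# distribution over the 65,536 learned vocabulary entries.
def token_byte_stats(vocab: list[bytes]) -> tuple[int, int, float, float]:
    lengths = [len(token) for token in vocab]
    return (min(lengths), max(lengths),
            statistics.mean(lengths), statistics.pstdev(lengths))
```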