Tanaybh committed
Commit f3f5c90 · verified · 1 Parent(s): 46cda54

Add human-friendly README

Files changed (1)
  1. README.md +118 -184
README.md CHANGED
@@ -1,188 +1,122 @@
  ---
- library_name: transformers
  tags:
- - generated_from_trainer
- model-index:
- - name: nano-gpt-from-scratch
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # nano-gpt-from-scratch
-
- This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 3.6938
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 0.0005
- - train_batch_size: 8
- - eval_batch_size: 8
- - seed: 42
- - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: linear
- - num_epochs: 10
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:------:|:-----:|:---------------:|
- | 5.9543 | 0.0748 | 200 | 5.9705 |
- | 5.599 | 0.1496 | 400 | 5.5920 |
- | 5.3875 | 0.2244 | 600 | 5.4046 |
- | 5.3405 | 0.2992 | 800 | 5.2639 |
- | 5.1997 | 0.3740 | 1000 | 5.1478 |
- | 5.088 | 0.4488 | 1200 | 5.0422 |
- | 5.0339 | 0.5236 | 1400 | 4.9466 |
- | 4.8631 | 0.5984 | 1600 | 4.8712 |
- | 4.7945 | 0.6731 | 1800 | 4.8020 |
- | 4.7787 | 0.7479 | 2000 | 4.7387 |
- | 4.676 | 0.8227 | 2200 | 4.6780 |
- | 4.616 | 0.8975 | 2400 | 4.6380 |
- | 4.6444 | 0.9723 | 2600 | 4.5994 |
- | 4.6359 | 1.0471 | 2800 | 4.5643 |
- | 4.5646 | 1.1219 | 3000 | 4.5283 |
- | 4.5776 | 1.1967 | 3200 | 4.4985 |
- | 4.5616 | 1.2715 | 3400 | 4.4715 |
- | 4.5367 | 1.3463 | 3600 | 4.4492 |
- | 4.4906 | 1.4211 | 3800 | 4.4217 |
- | 4.4006 | 1.4959 | 4000 | 4.4068 |
- | 4.5386 | 1.5707 | 4200 | 4.3741 |
- | 4.3777 | 1.6455 | 4400 | 4.3616 |
- | 4.4397 | 1.7203 | 4600 | 4.3360 |
- | 4.3095 | 1.7951 | 4800 | 4.3158 |
- | 4.3144 | 1.8699 | 5000 | 4.2964 |
- | 4.3759 | 1.9447 | 5200 | 4.2789 |
- | 4.3907 | 2.0194 | 5400 | 4.2606 |
- | 4.3318 | 2.0942 | 5600 | 4.2496 |
- | 4.2696 | 2.1690 | 5800 | 4.2383 |
- | 4.2866 | 2.2438 | 6000 | 4.2168 |
- | 4.2801 | 2.3186 | 6200 | 4.2040 |
- | 4.3416 | 2.3934 | 6400 | 4.1900 |
- | 4.357 | 2.4682 | 6600 | 4.1744 |
- | 4.1895 | 2.5430 | 6800 | 4.1598 |
- | 4.1719 | 2.6178 | 7000 | 4.1497 |
- | 4.2181 | 2.6926 | 7200 | 4.1386 |
- | 4.2341 | 2.7674 | 7400 | 4.1248 |
- | 4.1929 | 2.8422 | 7600 | 4.1126 |
- | 4.2205 | 2.9170 | 7800 | 4.0989 |
- | 4.1556 | 2.9918 | 8000 | 4.0861 |
- | 4.1201 | 3.0666 | 8200 | 4.0812 |
- | 4.1829 | 3.1414 | 8400 | 4.0691 |
- | 4.1198 | 3.2162 | 8600 | 4.0647 |
- | 4.1811 | 3.2909 | 8800 | 4.0487 |
- | 4.0946 | 3.3657 | 9000 | 4.0418 |
- | 4.098 | 3.4405 | 9200 | 4.0332 |
- | 4.1762 | 3.5153 | 9400 | 4.0212 |
- | 4.1506 | 3.5901 | 9600 | 4.0164 |
- | 4.0208 | 3.6649 | 9800 | 4.0045 |
- | 4.1121 | 3.7397 | 10000 | 4.0025 |
- | 4.0627 | 3.8145 | 10200 | 3.9930 |
- | 3.8746 | 3.8893 | 10400 | 3.9823 |
- | 4.0966 | 3.9641 | 10600 | 3.9739 |
- | 3.9827 | 4.0389 | 10800 | 3.9707 |
- | 3.9926 | 4.1137 | 11000 | 3.9619 |
- | 4.0307 | 4.1885 | 11200 | 3.9533 |
- | 4.0668 | 4.2633 | 11400 | 3.9469 |
- | 4.0161 | 4.3381 | 11600 | 3.9390 |
- | 4.0391 | 4.4129 | 11800 | 3.9330 |
- | 3.9875 | 4.4877 | 12000 | 3.9257 |
- | 4.0578 | 4.5625 | 12200 | 3.9204 |
- | 3.9203 | 4.6372 | 12400 | 3.9126 |
- | 4.0048 | 4.7120 | 12600 | 3.9071 |
- | 4.0535 | 4.7868 | 12800 | 3.9003 |
- | 3.9751 | 4.8616 | 13000 | 3.8971 |
- | 3.9949 | 4.9364 | 13200 | 3.8921 |
- | 3.8623 | 5.0112 | 13400 | 3.8873 |
- | 3.9523 | 5.0860 | 13600 | 3.8841 |
- | 3.9892 | 5.1608 | 13800 | 3.8773 |
- | 3.8098 | 5.2356 | 14000 | 3.8734 |
- | 3.93 | 5.3104 | 14200 | 3.8704 |
- | 3.9913 | 5.3852 | 14400 | 3.8621 |
- | 3.9875 | 5.4600 | 14600 | 3.8599 |
- | 4.0528 | 5.5348 | 14800 | 3.8533 |
- | 3.9126 | 5.6096 | 15000 | 3.8485 |
- | 4.0453 | 5.6844 | 15200 | 3.8422 |
- | 3.8323 | 5.7592 | 15400 | 3.8367 |
- | 3.9323 | 5.8340 | 15600 | 3.8336 |
- | 3.8529 | 5.9088 | 15800 | 3.8309 |
- | 3.8784 | 5.9835 | 16000 | 3.8198 |
- | 3.9129 | 6.0583 | 16200 | 3.8198 |
- | 3.8845 | 6.1331 | 16400 | 3.8169 |
- | 3.7315 | 6.2079 | 16600 | 3.8141 |
- | 3.8735 | 6.2827 | 16800 | 3.8041 |
- | 3.8643 | 6.3575 | 17000 | 3.8056 |
- | 3.871 | 6.4323 | 17200 | 3.7992 |
- | 3.9184 | 6.5071 | 17400 | 3.8034 |
- | 3.8936 | 6.5819 | 17600 | 3.7936 |
- | 3.8913 | 6.6567 | 17800 | 3.7887 |
- | 3.8066 | 6.7315 | 18000 | 3.7871 |
- | 3.843 | 6.8063 | 18200 | 3.7838 |
- | 3.9506 | 6.8811 | 18400 | 3.7783 |
- | 3.7878 | 6.9559 | 18600 | 3.7754 |
- | 3.877 | 7.0307 | 18800 | 3.7725 |
- | 3.8744 | 7.1055 | 19000 | 3.7680 |
- | 3.8774 | 7.1803 | 19200 | 3.7669 |
- | 3.8095 | 7.2550 | 19400 | 3.7645 |
- | 3.8195 | 7.3298 | 19600 | 3.7638 |
- | 3.9152 | 7.4046 | 19800 | 3.7587 |
- | 3.8205 | 7.4794 | 20000 | 3.7546 |
- | 3.7736 | 7.5542 | 20200 | 3.7520 |
- | 3.8518 | 7.6290 | 20400 | 3.7499 |
- | 3.8738 | 7.7038 | 20600 | 3.7444 |
- | 3.8431 | 7.7786 | 20800 | 3.7438 |
- | 3.7439 | 7.8534 | 21000 | 3.7388 |
- | 3.9137 | 7.9282 | 21200 | 3.7376 |
- | 3.7964 | 8.0030 | 21400 | 3.7338 |
- | 3.7578 | 8.0778 | 21600 | 3.7363 |
- | 3.786 | 8.1526 | 21800 | 3.7312 |
- | 3.7942 | 8.2274 | 22000 | 3.7285 |
- | 3.8115 | 8.3022 | 22200 | 3.7272 |
- | 3.8123 | 8.3770 | 22400 | 3.7248 |
- | 3.8411 | 8.4518 | 22600 | 3.7245 |
- | 3.8563 | 8.5266 | 22800 | 3.7208 |
- | 3.8441 | 8.6013 | 23000 | 3.7179 |
- | 3.8005 | 8.6761 | 23200 | 3.7152 |
- | 3.6688 | 8.7509 | 23400 | 3.7147 |
- | 3.8191 | 8.8257 | 23600 | 3.7098 |
- | 3.8469 | 8.9005 | 23800 | 3.7087 |
- | 3.8563 | 8.9753 | 24000 | 3.7079 |
- | 3.7861 | 9.0501 | 24200 | 3.7058 |
- | 3.7502 | 9.1249 | 24400 | 3.7046 |
- | 3.6903 | 9.1997 | 24600 | 3.7032 |
- | 3.7698 | 9.2745 | 24800 | 3.7019 |
- | 3.6781 | 9.3493 | 25000 | 3.7021 |
- | 3.7236 | 9.4241 | 25200 | 3.6995 |
- | 3.7578 | 9.4989 | 25400 | 3.6996 |
- | 3.6315 | 9.5737 | 25600 | 3.6985 |
- | 3.6605 | 9.6485 | 25800 | 3.6965 |
- | 3.7707 | 9.7233 | 26000 | 3.6958 |
- | 3.7968 | 9.7981 | 26200 | 3.6953 |
- | 3.8238 | 9.8728 | 26400 | 3.6939 |
- | 3.7834 | 9.9476 | 26600 | 3.6938 |
-
-
- ### Framework versions
-
- - Transformers 4.57.0
- - Pytorch 2.8.0
- - Datasets 4.0.0
- - Tokenizers 0.22.1
  ---
+ language: en
+ license: mit
  tags:
+ - text-generation
+ - gpt2
+ - transformers
+ - custom-tokenizer
+ datasets:
+ - wikitext
  ---
 
+ # 🤖 Nano GPT - Built From Scratch
+
+ Hey there! Welcome to my tiny language model. I built this GPT from scratch as a learning project, and honestly, it was pretty fun watching it learn to generate text!
+
+ ## What is this?
+
+ This is a super small GPT-2 style language model that I trained on my laptop. It's not going to write your essays or solve world hunger, but it's a cool demonstration of how these language models actually work under the hood.
+
+ Think of it as a baby GPT - it can generate text, but don't expect Shakespeare. More like... an enthusiastic toddler who just learned to talk.
+
+ ## Model Stats
+
+ - **Parameters**: 1,065,728 (about 1.07 million; yes, million with an M, not billion!)
+ - **Layers**: 4 transformer layers
+ - **Embedding Size**: 128 dimensions
+ - **Attention Heads**: 4 heads
+ - **Context Length**: 128 tokens
+ - **Vocab Size**: 2000 tokens
+ - **Training Data**: WikiText-2 (5,000 samples)
+ - **Training Time**: 10 epochs on my laptop
+
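+ If you're curious how those stats translate into code, here's a sketch of the equivalent config (assuming standard GPT-2 defaults for everything not in the list; the parameter math lands exactly on the count above):
+
+ ```python
+ from transformers import GPT2Config, GPT2LMHeadModel
+
+ # Values copied from the Model Stats list above
+ config = GPT2Config(
+     vocab_size=2000,   # custom BPE tokenizer
+     n_positions=128,   # context length
+     n_embd=128,        # embedding size
+     n_layer=4,         # transformer layers
+     n_head=4,          # attention heads
+ )
+ model = GPT2LMHeadModel(config)
+ print(f"{model.num_parameters():,}")  # 1,065,728 (input/output embeddings are tied)
+ ```
+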
+ ## Quick Start
+
+ Want to try it out? Here's how:
+
+ ```python
+ from transformers import pipeline
+
+ # Load the model
+ generator = pipeline('text-generation', model='Tanaybh/nano-gpt-from-scratch')
+
+ # Generate some text
+ output = generator(
+     "The meaning of life is",
+     max_new_tokens=30,
+     do_sample=True,
+     temperature=0.8
+ )
+
+ print(output[0]['generated_text'])
+ ```
+
+ ## Example Output
+
+ I gave it the prompt: **"The "**
+
+ And it generated:
+
+ > The × 60 munitions, and injuries were found in the taxonomy in the south, the east of the
+
+ Not bad for a tiny model trained in a few hours, right?
+
+ ## Training Details
+
+ I trained this model from scratch using:
+ - Custom BPE tokenizer (trained on the same data; there's a sketch below)
+ - GPT-2 architecture (just way smaller)
+ - AdamW optimizer with a learning rate of 0.0005
+ - Batch size of 8
+ - Trained for 10 epochs
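+
+ The tokenizer step looks roughly like this. This is my reconstruction using the 🤗 tokenizers library, assuming a GPT-2-style byte-level BPE; the exact recipe may have differed:
+
+ ```python
+ from datasets import load_dataset
+ from tokenizers import Tokenizer, models, pre_tokenizers, trainers
+
+ # The same 5,000 WikiText-2 samples the model was trained on
+ data = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:5000]")
+
+ # Byte-level BPE (GPT-2 style), capped at the 2,000-token vocab above
+ tokenizer = Tokenizer(models.BPE())
+ tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()
+ trainer = trainers.BpeTrainer(vocab_size=2000, special_tokens=["<|endoftext|>"])
+ tokenizer.train_from_iterator((row["text"] for row in data), trainer=trainer)
+ tokenizer.save("tokenizer.json")
+ ```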
+
+ The whole thing runs on a regular laptop - no fancy GPU clusters needed!
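+
+ The training loop itself is plain 🤗 Trainer. A minimal sketch using the hyperparameters above (eval every 200 steps matches the training log; `model` is the GPT-2 model from earlier, and `gpt2_tokenizer` and the `tokenized_*` variables are placeholder names for the trained tokenizer and the tokenized WikiText splits):
+
+ ```python
+ from transformers import DataCollatorForLanguageModeling, Trainer, TrainingArguments
+
+ args = TrainingArguments(
+     output_dir="nano-gpt-from-scratch",
+     learning_rate=5e-4,             # 0.0005, as listed above
+     per_device_train_batch_size=8,
+     per_device_eval_batch_size=8,
+     num_train_epochs=10,
+     lr_scheduler_type="linear",
+     optim="adamw_torch_fused",
+     eval_strategy="steps",
+     eval_steps=200,
+     seed=42,
+ )
+
+ trainer = Trainer(
+     model=model,                     # the nano GPT-2 model
+     args=args,
+     train_dataset=tokenized_train,   # placeholder: tokenized WikiText-2 train split
+     eval_dataset=tokenized_eval,     # placeholder: held-out eval split
+     data_collator=DataCollatorForLanguageModeling(tokenizer=gpt2_tokenizer, mlm=False),
+ )
+ trainer.train()
+ ```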
+
+ ## Limitations
+
+ Let's be real here:
+ - This model is TINY. Like, really tiny. It has 1,065,728 parameters vs GPT-3's 175 billion.
+ - It was only trained on 5,000 Wikipedia samples, so its knowledge is... limited.
+ - It might generate weird or nonsensical text sometimes. That's part of the charm!
+ - Maximum context length is only 128 tokens, so don't expect long conversations.
+ - It's a base model with no instruction tuning, so it just continues text rather than following commands.
+
+ ## Why I Made This
+
+ I wanted to understand how language models work by building one myself. Sure, I could've just fine-tuned a pre-trained model, but where's the fun in that? This project taught me about:
+ - Tokenizer training
+ - Transformer architecture
+ - Training dynamics
+ - How LLMs actually generate text
+
+ Plus, now I can say I trained a language model from scratch on my laptop. Pretty cool, right?
+
+ ## Future Improvements
+
+ Some things I might try:
+ - Train on more data (maybe the full WikiText dataset)
+ - Experiment with different model sizes
+ - Try different tokenizer configurations
+ - Add instruction tuning
+ - Fine-tune it for specific tasks
+
+ ## License
+
+ MIT - Feel free to use this however you want! Learn from it, break it, improve it. That's what it's here for.
+
+ ## Acknowledgments
+
+ Built with:
+ - 🤗 Hugging Face Transformers
+ - PyTorch
+ - The WikiText dataset
+ - Too much coffee
+
+ ---
+
+ **Note**: This is a learning project and experimental model. Use it for fun and education, not production systems!
+
+ If you found this interesting or helpful, feel free to star the repo or reach out. Always happy to chat about ML stuff!
+
+ *Last updated: October 05, 2025*