mlabonne committed
Commit b8185e3 · verified · 1 Parent(s): 28910c3

Update README.md

Files changed (1)
1. README.md +10 -0
README.md CHANGED
@@ -127,6 +127,16 @@ RAG systems enable AI solutions to include new, up-to-date, and potentially prop
  - llama.cpp: [LFM2-1.2B-Extract-GGUF](https://huggingface.co/LiquidAI/LFM2-1.2B-Extract-GGUF)
  - LEAP: [LEAP model library](https://leap.liquid.ai/models?model=lfm2-1.2b-extract)

+ You can use the following Colab notebooks for easy inference and fine-tuning:
+
+ | Notebook | Description | Link |
+ |-------|------|------|
+ | Inference | Run the model with Hugging Face's transformers library. | <a href="https://colab.research.google.com/drive/1WoPOzoLBJEUjPcDiUn_R42CTdEwQ8kcv?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
+ | SFT (TRL) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using TRL. | <a href="https://colab.research.google.com/drive/1j5Hk_SyBb2soUsuhU0eIEA9GwLNRnElF?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
+ | DPO (TRL) | Preference alignment with Direct Preference Optimization (DPO) using TRL. | <a href="https://colab.research.google.com/drive/1MQdsPxFHeZweGsNx4RH7Ia8lG8PiGE1t?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
+ | SFT (Axolotl) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using Axolotl. | <a href="https://colab.research.google.com/drive/155lr5-uYsOJmZfO6_QZPjbs8hA_v8S7t?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
+ | SFT (Unsloth) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using Unsloth. | <a href="https://colab.research.google.com/drive/1HROdGaPFt1tATniBcos11-doVaH7kOI3?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
+
  ## 📬 Contact

  If you are interested in custom solutions with edge deployment, please contact [our sales team](https://www.liquid.ai/contact).
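
For quick reference, a minimal inference sketch in the spirit of the Inference notebook listed above might look as follows. This assumes the base checkpoint is published as `LiquidAI/LFM2-1.2B-Extract` (inferred from the GGUF link, not stated in this diff), an installed `transformers` release with LFM2 support, and an illustrative extraction prompt and decoding settings that are not taken from the notebook itself:

```python
# Minimal sketch: load the (assumed) LiquidAI/LFM2-1.2B-Extract checkpoint with
# Hugging Face transformers and run a single extraction-style generation.
# The Colab notebook linked in the table above is the maintained reference.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-1.2B-Extract"  # assumed repo id, inferred from the GGUF link

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative extraction prompt (not taken from the notebook).
messages = [
    {
        "role": "user",
        "content": "Extract the person and date as JSON: "
                   "'Ada Lovelace was born on 10 December 1815 in London.'",
    }
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```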