Unlocking On-Policy Distillation for Any Model Family (Space): Apply on-policy distillation to any model family.
The Smol Training Playbook (Space): The secrets to building world-class LLMs.
distilbert/distilbert-base-cased-distilled-squad (Question Answering model, 65.2M parameters, updated May 6, 2024): a distilled BERT checkpoint fine-tuned on SQuAD for extractive question answering; a usage sketch follows this list.
The Ultra-Scale Playbook (Space): The ultimate guide to training LLMs on large GPU clusters.
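As an illustration (not part of the original listing), here is a minimal sketch of querying the distilbert/distilbert-base-cased-distilled-squad checkpoint through the Hugging Face transformers question-answering pipeline; the question and context strings are made up for demonstration.

```python
# Minimal sketch: extractive question answering with the DistilBERT SQuAD
# checkpoint listed above, via the transformers pipeline API.
from transformers import pipeline

# Load the question-answering pipeline with the listed checkpoint.
qa = pipeline(
    "question-answering",
    model="distilbert/distilbert-base-cased-distilled-squad",
)

# Example inputs (hypothetical, chosen only for demonstration).
result = qa(
    question="What can on-policy distillation be applied to?",
    context="On-policy distillation can be applied to any model family.",
)

# The pipeline returns a dict with 'score', 'start', 'end', and 'answer'.
print(result["answer"])
```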