11 tips for distilling smaller models with GPT / Webinar
Go from OpenAI to Open-Source
Date: December 14th, 10:00 am PST / 7:00 pm CET
As LLMs rapidly gain adoption, the need for smaller, more efficient models has never been greater. This shift is driven by the increasingly competitive performance of open-source LLMs, set against the high costs and closed nature of larger commercial models like GPT-4.
We’re excited to support an upcoming webinar with Predibase during which you’ll learn how to graduate from OpenAI to open-source with model distillation. Drawing on their experience distilling models at Google and Predibase, the presenters will share 11 best practices for distilling large models into smaller, more cost-effective LLMs that can be fine-tuned for your use case. Code snippets included. Must see!
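The webinar's own code snippets aren't reproduced here, but the core idea behind distillation can be sketched in a few lines. The classic soft-label objective (Hinton et al.) trains the smaller student to match the teacher's temperature-softened output distribution; the function below is a minimal, dependency-free illustration of that loss, not Predibase's actual recipe:

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    A higher temperature exposes the teacher's 'dark knowledge' --
    the relative probabilities it assigns to non-argmax classes.
    The T^2 factor keeps gradient magnitudes comparable across temperatures.
    """
    p = softmax(teacher_logits, temperature)   # teacher (e.g. GPT-4) distribution
    q = softmax(student_logits, temperature)   # smaller open-source student
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return kl * temperature ** 2

# The loss is zero when student and teacher agree, positive otherwise:
same = distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
diff = distillation_loss([0.1, 0.1, 0.1], [2.0, 0.5, -1.0])
```

In practice this term is blended with the ordinary cross-entropy on ground-truth labels, and for LLMs the "teacher outputs" are often generated completions used as fine-tuning data rather than raw logits.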