DeepSeek-R1 is the first open-source model to close the performance gap with the best commercial models. But the question remains: "How can I customize and fine-tune DeepSeek?"
Fine-tuning reasoning models like DeepSeek-R1 and its distillations is uncharted territory, with no established training recipes, until now. Join us on Feb. 12 at 10 am PT to get a behind-the-scenes look at a new framework for efficient LoRA-based reinforcement learning (RL), enabling you to customize DeepSeek-R1 for your data and use case.
Topics include:
How to fine-tune DeepSeek-R1-Qwen-7B: Customize DeepSeek with RL-based techniques.
Performance benchmarks: Quantify the impact of fine-tuning on reasoning tasks.
When to fine-tune DeepSeek: Know when to fine-tune a reasoning model versus sticking with a standard small language model (SLM).
All attendees will receive free credits to get started on their own.
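The webinar's framework isn't public, but a back-of-envelope calculation shows why LoRA makes RL fine-tuning tractable: a rank-r adapter trains r × (d_in + d_out) parameters per weight matrix instead of d_in × d_out. The hidden size and rank below are illustrative assumptions, not the actual training configuration.

```python
# Illustrative sketch: LoRA adapter size vs. full fine-tuning for one
# dense weight matrix. Dimensions are assumed, not from the webinar.

def full_params(d_in: int, d_out: int) -> int:
    """Parameters updated when fine-tuning a dense weight matrix directly."""
    return d_in * d_out

def lora_params(d_in: int, d_out: int, r: int) -> int:
    """Parameters in a rank-r LoRA adapter: A (d_in x r) plus B (r x d_out)."""
    return r * (d_in + d_out)

# A single square projection at an assumed 7B-scale hidden size of 3584,
# with a commonly used LoRA rank of 16:
d, r = 3584, 16
print(full_params(d, d))                          # 12845056 trainable weights
print(lora_params(d, d, r))                       # 114688 trainable weights
print(full_params(d, d) // lora_params(d, d, r))  # 112x fewer trainables
```

Repeated across every attention and MLP projection in the model, this is what lets RL-based customization fit on far less GPU memory than full fine-tuning.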