
Token 1.1: From Task-Specific to Task-Centric ML: A Paradigm Shift

Our new series on Foundation Model and Large Language Model Ops (FMOp/LLMOps)

Welcome to the beginning of our series on Foundation Model and Large Language Model Operations (FMOps/LLMOps). You might think that if you've mastered MLOps, you can easily take on these new models. Well, yes and no – it's complicated.

MLOps and Beyond: FMOps/LLMOps

As we've mentioned before, Eduardo Ordax, a leading MLOps Business Developer at AWS, places LLMOps under the broader umbrella of FMOps. According to him, while FMOps builds on MLOps, it also introduces new roles and life cycles: providers, fine-tuners, and consumers, each with a distinct life cycle of their own.

Before writing this series about FMOps, I also talked to Chip Huyen, co-founder of Claypot.ai and author of the brilliant book 'Designing Machine Learning Systems.' When thinking about foundation models, she advises focusing on one critical aspect: the use case. As ML models become increasingly efficient and accessible, companies face a shifting cost-benefit landscape, which makes a clear understanding of potential use cases essential for accurate and timely cost-benefit analyses.

The Shift to Task-Centric ML

This chat with Chip got me thinking: what if we are moving beyond the model-centric vs. data-centric debate, and beyond task-specific models, into a completely different universe: use-case-centric, or task-centric, ML?
