
Token 1.0: Where are you in the FMOps Infrastructure Stack? Tell us

Navigating the 'Ops' Labyrinth: FMOps/LLMOps vs. MLOps, and What Is the FMOps Infrastructure Stack?


Recently, in the MLOps community’s Slack, one participant asked: “Does anyone else feel like LLMOps is only a term to attract VC money during the hype/bubble and provides no additional value to the field?”

Another participant immediately replied: “Running AI apps in production is really hard. I worked for a while in ML 1.0, and now I build generative AI apps, there's almost no similarity."

Eduardo Ordax, Principal MLOps Business Developer for EMEA at AWS, offered another term – FMOps:

“I consider LLMOps a subset of FMOps that will borrow from and extend the MLOps domain:

  • People and processes differ when compared to MLOps: We have providers, fine-tuners, and consumers, and the lifecycle for each differs.

  • It's important to establish a process for how to select and adapt the FM to a specific context. Here we cover different aspects of prompting, open source vs. proprietary models, latency, cost, and precision.

  • We need to evaluate and monitor fine-tuned models differently. With LLMs, we must consider different fine-tuning techniques, RLHF, and cover all aspects of bias, toxicity, IP, and privacy.

  • Finally, because of the computation requirements of these models, the tech part is very important.

In summary, MLOps is nothing more than trying to do ML following good practices. FMOps/LLMOps is the same for FMs/LLMs. We can reuse some basic MLOps concepts but there are still many new concepts to account for.”

FM – Foundation Model: a model not limited to text; it also covers video, audio, images, etc.

FMOps – Foundation Model Operations

So who is right?


Is running generative AI apps in production really that different and difficult?

Is there an FMOps Stack as we (almost) have with MLOps?

What should the FMOps Infrastructure Stack look like to make productizing FMs easier?

To answer these questions, we need your help!

We invite every company that sees itself as part of the FMOps/LLMOps Infrastructure Stack to email us.*

We will work together to make this series as helpful to practitioners as possible. Just reply to this email or send us a note at [email protected]

It's also a chance to be mentioned in front of more than 80,000 of our readers across all our networks.

The questions we will be tackling:

  • What are the essential considerations for choosing FM/LLM and moving it to production, focusing on cost, safety, security, performance, and reliability?

  • How can FM/LLM be effectively integrated into an organization’s current systems, prioritizing scalability, system compatibility, and actionable insights?

  • *Which tools have proven most effective and reliable across the FMOps Stack? (Chips, cloud service providers, labeling, synthetic data, fine-tuning, observability, safety, etc. – you tell us which parts of the FMOps Stack matter most)

  • What are the specialized approaches for creating instruction-based datasets for fine-tuning FMs/LLMs, and how do they differ from traditional ML datasets?

  • How can we best secure and protect sensitive client data processed by a production-ready Large Language Model?

  • How do we deal with prompt injection and other principal challenges, and what are the potential solutions?

  • and others…

If you have a question that bugs you about FMs/LLMs in production, send it our way, and we will find the best expert to answer it.

Just reply to this email or send us a note at [email protected]

For those just beginning their LLM learning journey, here is an updated, no-hype list of research papers that can be considered foundational for this new industry.

Foundational Papers

We are looking forward to hearing back from you.

Truly yours,

Turing Post’s Team 🤍
