AI 101: What's New in World Models?
A glimpse at Code World Model, PSI, and others – redefining how models catch the world in their nets
World models are generative AI systems designed to capture how our 3D reality works. From diverse data, they learn the underlying physics, spatial relationships, and cause-and-effect of the world – then use that understanding to predict what happens next, run internal simulations, and make decisions without constant real-world testing.
World models remain a small but highly promising field. Each new development offers a glimpse into how AI is learning to model the physical world and the logic of action itself. We’re tracking these breakthroughs to keep you ahead of the curve.
In our previous articles about world models, we explained the basics – what they are and how their main examples work – as well as an alternative vision for building world models with the Physical, Agentic, and Nested (PAN) system. Today we’ll take a look at:
Meta’s groundbreaking Code World Model (CWM), which explores how world models can connect with the world of code and introduces a new reinforcement learning strategy by modifying GRPO;
Probabilistic Structure Integration (PSI) from Stanford NeuroAI Lab – a promptable, probabilistic world model where structure becomes new vocabulary.
And we’ll also briefly cover updates to Dreamer 4, Genie 3, and Cosmos WFM 2.5. Time to explore some exciting new tech!
In today’s episode, we will cover:
Code World Model (CWM)
CWM’s architecture and training
Special Reinforcement Learning Strategy (and more about four RL environments)
Practice and limitations
Probabilistic Structure Integration (PSI)
PSI self-improving workflow
Possibilities that PSI opens for us
Limitations
Other notable world models
Conclusion
Sources and further reading
Code World Model (CWM)
Let’s start with the model that played a part in the global debates about whether GRPO works properly. We’ll turn to GRPO and RL a little later, but first – what’s the idea behind Meta’s new world model, and how does it relate to code?
Meta’s FAIR CodeGen team has extended the idea of world models into a domain that hasn’t traditionally been part of that conversation – code. LLMs and code have long been a natural pair, but in most cases models treat code as plain text: they generate it, fix it, or explain it, without understanding what happens when the code runs or how it changes a system’s state. This gap limits their ability to produce reliable, high-quality code that truly works.
Meta’s latest development, Code World Model (CWM), addresses that gap by bringing the practical, executable side of code into the model’s reasoning process.
CWM is a 32-billion-parameter model trained not just on static code, but also on data that captures how code behaves when executed. This allows CWM to track how each line changes variables and how edits affect the whole program, taking debugging, testing, and reasoning about programs to the next level.
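To make the idea of “data that captures how code behaves” concrete, here is a minimal sketch of how an execution trace – the sequence of local-variable states as a program runs, line by line – could be collected with Python’s built-in `sys.settrace` hook. The helper name `trace_execution` and the `demo` function are illustrative, not part of CWM’s actual data pipeline:

```python
import sys

def trace_execution(func, *args):
    """Record (line number, local variables) at each line event while func runs."""
    trace = []

    def tracer(frame, event, arg):
        # Only record line events inside the function we are tracing.
        if event == "line" and frame.f_code is func.__code__:
            trace.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)  # always detach the tracer
    return trace

def demo(n):
    total = 0
    for i in range(n):
        total += i
    return total

# Each entry shows which line is about to run and the variables visible there.
for lineno, local_vars in trace_execution(demo, 3):
    print(lineno, local_vars)
```

A model trained on traces like these sees not just the source text of `demo`, but how `total` and `i` evolve step by step – exactly the kind of state-change signal that plain code corpora lack.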
How is it organized on the technical side?
CWM’s architecture and training