When thousands of AI agents begin to act on our behalf, what will be the system they all run on?
Renen Hallak, founder and CEO of VAST Data, believes we're witnessing the birth of an AI Operating System: a foundational layer that connects data, compute, and policy for the agentic era. In this episode of Inference, we talk about how enterprises are moving from sandboxes and proof-of-concepts to full production agents, why metadata matters more than "big data," and how the next infrastructure revolution will quietly define who controls intelligence at scale.
Subscribe to our YouTube channel, or listen to the interview on Spotify / Apple
We discuss:
What "AI OS" really means, and why the old stack (Windows, macOS, Linux, etc.) can't handle agentic systems
Why enterprises are underestimating the magnitude (but overestimating the speed) of this shift
The evolving role of data, metadata, and context in intelligent systems
How control, safety, and observability must be baked into infrastructure, not added later
Why Renen says the next 10 years will reshape everything, from jobs to the meaning of money
The ladder of progress: storage → database → data platform → operating system
What first-principles thinking looks like inside a company building for AGI-scale systems
This is a conversation about the architecture of the future, and the fine line between control and creativity when intelligence becomes infrastructure. Watch it now →
This is a free edition. Upgrade if you want to receive our deep dives directly in your inbox. If you want to support us without getting a subscription, do it here.
This transcript is edited by GPT-5. As always, it's better to watch the full video ⬇️
Ksenia Se:
Welcome to Inference by Turing Post. Today I'm joined by Renen Hallak, founder and CEO of VAST Data. Let's jump straight in. Renen, my big question is: when will we shift from experimenting with AI models, as we do now, to living inside environments where thousands of agentic systems act on our behalf?
Renen Hallak:
I think it's already happening. These shifts always feel slow, and then all at once we realize they've happened. Every technology goes through that moment. When Siri first came out on the iPhone, it barely understood anything. A few years later, it suddenly just worked. I think the same thing is happening here; it just takes us a while to notice.
Over the past year, we've been involved in hundreds of enterprise projects moving from sandbox experiments to real production workloads with agents. There are still a few steps left before what you described becomes common, but it won't take more than a few years.
Ksenia:
What do those sandbox environments look like?
Renen:
Usually it's a small, isolated setup: a team off to the side testing things out, figuring out what works and what doesn't. They're trying to get a feel for these new abilities before deciding what's safe, low-risk, and valuable enough to move into production. Once something checks those boxes, it crosses over. Then another. Then another.
Ksenia:
Given all your work with enterprises, what do they tend to overestimate about agentic systems, and what do they underestimate?
Renen:
They underestimate the magnitude of the change that's coming, and they overestimate how fast it'll happen. Looking ten years ahead, I can't think of a single domain that won't change drastically. Not just developers or lawyers or customer support roles, but every human pursuit: art, science, physical labor, construction, carpentry. Every field will be deeply augmented by these systems. It'll take time, but the impact will be universal.
Ksenia:
Did you say ten years?

Old Operating Systems
Renen:
In ten years, the world will be very different from what we have today. We'll be asking basic questions again: what money means, what the economy looks like, how people spend their time. I don't think we know the answers yet. Ten years ago it was easy to describe today; right now it's much harder to predict the next decade.
Ksenia:
That's interesting, because you started the company in 2016, a year before the Transformer paper. It didn't feel easy to imagine today from there. Can you walk me through your path? What changed? What were you solving at the beginning? What contrarian bets felt crazy at the time? Any pivots or turning points you can unpack, since we are talking about a ten-year arc.
Renen:
It is ten years. Back in school there was a side class on neural nets, and what we "knew" was that they didn't work, a waste of time. The idea of computers programming themselves like the human brain was intriguing, but we didn't understand the brain, so how could we make artificial ones work?
About a decade ago it became clear the opposite was happening. Those older neural net ideas from 30 years prior started producing results. I remember watching a YouTube video showing a system recognizing cats in photos. No one expected computers to do that. Computers were good at numbers and exactness, not fuzzy understanding; that was a human thing.
So I dug in. I like to go on these little journeys: what changed, how it works, what the limiting factor is. Algorithmically, it wasn't wildly different from what we learned in school. The difference was data: suddenly there was a lot of it for training. With that much data, simple approaches began to work in surprising ways.
We still didn't know how the brain works, yet the system could recognize a cat. Give it more data, maybe it recognizes dogs, then people, then increasingly complex patterns the way we do. That's why we started the company: to enable that progression and see how far it could stretch.
Could we build a thinking machine, or only replicate sensory tasks? We're still on that journey. I don't know if we can build a thinking machine yet, but we've learned a lot over the decade. First, the problem is even bigger than I expected. You need access to far more data and far more compute. We're seeing exabyte-scale systems feeding hundreds of thousands of very hungry parallel GPUs. I'm very glad we architected for extreme scale from day one; it's nearly impossible to re-architect once you've chosen a path.
Our customer base evolved too: from early AI adopters like hedge funds and life science institutes, to generative AI companies and AI clouds, and now to enterprises starting to adopt these abilities. Each phase is different. Training differs from inference. Inference at small scale differs from inference at scale. Agents add another layer. Next comes fine-tuning and physical robots. That progression keeps it interesting and exciting.
Ksenia:
Has your understanding of data changed? Back in the big data era, Andrew Ng called data the new electricity. But today it feels less data-centric and more context-centric. Do you agree?
Renen:
Yes, and I'd say metadata is now far more important than it used to be. Historically, we analyzed numbers neatly arranged in database columns. It was easy to make sense of. Now most data is unstructured: fuzzy, hard to define. To make sense of it, we have to give it meaning, and that meaning itself has to be captured and stored somewhere.
That's why we built our own database at VAST: to handle these new workloads. The real tension today is between structured and unstructured data. If you think in human terms, it's similar to how we process experiences: raw sensory input comes through our eyes and ears (or in an agent's case, cameras and microphones). We interpret it, we form thoughts, and then we store both the experiences and the thoughts as memories. Later, we cross-reference new experiences with those memories.
Systems need to do the same: access data as it comes in and reach back to information that may be ten years old, across both structured and unstructured domains. That adds immense complexity.
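The pattern described here, keeping raw unstructured bytes alongside the structured "meaning" derived from them, can be sketched minimally. This is an illustrative toy, not VAST's actual database API; every class and field name below is invented for the example:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class Record:
    """Raw unstructured bytes plus the structured metadata derived from them."""
    blob: bytes                                          # the "experience" (image, audio, text)
    meta: dict[str, Any] = field(default_factory=dict)   # the interpreted "thought"

class MetadataStore:
    """Toy store: keeps blobs, but answers queries against metadata only."""
    def __init__(self) -> None:
        self._records: list[Record] = []

    def ingest(self, blob: bytes, **meta: Any) -> None:
        meta.setdefault("ingested_at", datetime.now(timezone.utc).isoformat())
        self._records.append(Record(blob, meta))

    def query(self, **filters: Any) -> list[Record]:
        # Structured lookup over metadata, without touching the raw bytes.
        return [r for r in self._records
                if all(r.meta.get(k) == v for k, v in filters.items())]

store = MetadataStore()
store.ingest(b"<jpeg bytes>", label="cat", source="camera-7")
store.ingest(b"<jpeg bytes>", label="dog", source="camera-7")
cats = store.query(label="cat")
print(len(cats))  # 1
```

The design choice mirrors the interview's point: the blob alone is opaque, so queries (including ones reaching back years) run over the captured metadata, with the raw data kept addressable next to it.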
Ksenia:
It's like a whole new system.
Renen:
Exactly. And the old systems weren't built for this: not for the scale, the performance, or the resilience that's required now. They were local. What we need today is global, distributed across edge locations and devices. Everything up and down the technology stack has to be rethought for this new era.
Ksenia:
So where are we in building that system? What's still missing?
Renen:
There's always more missing than built, because every step reveals more of what's needed. I like to say we're climbing an exponential ladder: every step is two or three times higher than the one before. And we're still at the very beginning. Each step lets us see further into the future, which only expands the scope of what we have to build.
We started VAST by building a storage system: fast access to massive amounts of data. We wanted to break the old trade-offs between price, performance, scale, resilience, and simplicity. Once we did that, customers said, "Great, but we also need a new kind of database." So we built the VAST Database on top of our data store. Then they said, "That's great, but we also need a way to manage compute."
Now we have GPUs, DPUs, training, inference, each with different urgency and locality requirements. Some workloads must run close to users; others can run overnight. We needed a data-driven way to orchestrate all that. For instance, an image comes in, it triggers an inference function, which calls another function if it detects a stoplight.
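The stoplight example is essentially event-driven function chaining. A minimal sketch of that idea, assuming a hypothetical in-process event registry (nothing here reflects VAST's actual orchestration API; the real model call is stubbed out):

```python
from typing import Callable

# Registry mapping event types to the functions they trigger.
handlers: dict[str, list[Callable[[dict], None]]] = {}

def on(event: str):
    """Decorator: register a handler for a given event type."""
    def register(fn: Callable[[dict], None]):
        handlers.setdefault(event, []).append(fn)
        return fn
    return register

def emit(event: str, payload: dict) -> None:
    """Fire all handlers for the event; handlers may emit further events."""
    for fn in handlers.get(event, []):
        fn(payload)

@on("image.arrived")
def run_inference(payload: dict) -> None:
    # Stand-in for a real model call: pretend we detected a stoplight.
    if payload.get("contains") == "stoplight":
        emit("stoplight.detected", payload)

@on("stoplight.detected")
def handle_stoplight(payload: dict) -> None:
    payload["handled"] = True  # downstream function reacts to the detection

img = {"contains": "stoplight"}
emit("image.arrived", img)  # data arriving triggers the whole chain
print(img["handled"])  # True
```

The point of the sketch is the inversion: no workflow is scheduled up front; the arrival of data drives which functions run, and one function's output can trigger the next.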
So what began as a storage system became a data platform, and is now evolving into the full operating layer between new hardware and agentic applications.
What's still missing? Everything agents need: security, observability, reproducibility, simpler deployment for non-expert enterprises, compliance with regulation. Every time you open a box, you find a hundred more boxes to open.
But that's also what makes it exciting. We now have the foundation: the architecture and the first few building blocks to start with our customers.
Ksenia:
So you're essentially building an AI operating system. How do you see the competitive landscape? There are so many companies in data operations. How will this space evolve?
Renen:
Big data is actually a good analogy. The leap from machine learning to deep learning changed everything. Machine learning was about analyzing numbers and finding anomalies; deep learning touches everything in the world.
We were fortunate to start late. If we'd launched even a year or two earlier, we wouldn't have seen the deep learning revolution unfold, and even if we had predicted it, we wouldn't have had the underlying technologies to design for it. Starting in 2016 and spending the first few years on the minimum viable product gave us time to architect specifically for AI.
Our competitors, whether they started five years earlier or thirty, didn't have that advantage. Their systems were built for the old world; they don't scale to today's demands and weren't designed for these workloads.
Across the stack, old systems are being displaced: enterprise storage, HPC file systems, data warehouses, orchestration frameworks. All of them have to be rebuilt for the AI era.
Ksenia:
It's funny you say you "started late," because 2016 already feels like ancient history.
Renen:
Late compared to our competitors, but early in the context of this new AI wave.
Ksenia:
When you talk about this AI operating system, is it like scaffolding for AGI? Part of that bigger narrative? How do you think about it, and what's your definition of AGI?
Renen:
To me, AGI (and the step beyond it, superintelligence) means thinking machines. Computers that don't just parrot back information or summarize what we give them, but that can generate genuinely new ideas. We're not there yet. There are a few big steps we still need to take.
When I think about what's missing, I look at us. How do humans come up with new ideas? It's rarely one person alone in an ivory tower. It's interaction: different people, each with their own model of the world shaped by experience, talking, misunderstanding, and forming new concepts together. Then we test those ideas in the real world.
If we want AI to do that, we need to give it the same ingredients. First, agents need the ability to build their own models of the universe. They can't all rely on one static model from OpenAI or xAI. Each agent has to fine-tune its understanding continuously, based on what it encounters. Instead of one big AI, we'll have millions of them, each slightly different.
Next, they need to talk to each other. The operating system we're building is key to that: it supports the loop between inference and fine-tuning, the data pipelines, the persistence layer that keeps track of what agents are saying and doing. It gives us, the humans, a window into those interactions while still letting the agents communicate.
Then they need access to the natural world: through physical robots, cars, drones, humanoids. Give them sensors, the ability to act and perceive. That's how we'll inch closer to true thinking machines. But to do it safely, we need a software infrastructure layer that governs what they can do.
As these systems learn, we need mechanisms to monitor and control them: what they're allowed to access, where they can go, what data they can see. For instance, an agent might have access to my personal data because it's working for me, but it shouldn't share it with anyone else. Two agents might both have access to the same data, but not be allowed to know that the other one does. All of that must be encoded in policies: rules that make it possible to observe, reproduce, and explain their behavior.
We need to be able to answer questions like, "Why did that agent respond this way last month?" That's what an operating system should provide, so developers can focus on building applications, not reinventing infrastructure and control every time.
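The kind of policy described here, where reading data and re-sharing it are separate rights and nothing is granted by default, might be encoded roughly like this. A minimal sketch with invented agent and resource names, not any real policy engine:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    agent: str        # who the rule applies to
    resource: str     # what data it covers
    can_read: bool
    can_share: bool   # reading is not the same right as re-sharing

# Two agents may each read the same personal data; neither may pass it on.
policies = [
    Policy(agent="assistant-a", resource="user/personal", can_read=True, can_share=False),
    Policy(agent="assistant-b", resource="user/personal", can_read=True, can_share=False),
]

def allowed(agent: str, resource: str, action: str) -> bool:
    """Default-deny: only an explicit matching policy grants access."""
    for p in policies:
        if p.agent == agent and p.resource == resource:
            return p.can_read if action == "read" else p.can_share
    return False

print(allowed("assistant-a", "user/personal", "read"))   # True
print(allowed("assistant-a", "user/personal", "share"))  # False
print(allowed("unknown-agent", "user/personal", "read")) # False
```

Because `allowed` only ever answers for the asking agent, one agent cannot use it to learn what another agent can see, which matches the constraint in the interview. Explicit policy records like these are also what make behavior auditable after the fact.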
Ksenia:
That's exactly where technical builders and doomers tend to clash. If you work in the trenches, you see how much precision goes into guardrails and control systems. But from the outside, it looks like you're building something enormous that could easily spiral out of control. How do you talk to people who don't see that technical reality?
Renen:
Every new technology carries potential for both good and harm, but with AI, the scale is larger. The PC, the internet, the mobile phone: none of those created better versions of themselves. AI can.
That's both exciting and unsettling. On one hand, it could help us tackle the hardest problems we have: clean energy, disease, climate, discovery. On the other, yes, it could get away from us.
I think about it like raising kids. You don't try to control them forever; you teach them values, watch over them, and trust that they'll make good decisions. The same logic applies here. We should build controls to observe and prevent harm, but we also have to leave room for discovery, for unexpected, creative things to happen.
Ksenia:
With things moving this fast, how far ahead do you plan?
Renen:
We actually don't plan far ahead at all. I was talking to a potential investor this morning who asked for a five-year plan, and I told them we don't have one. We barely know what will happen next year. We do have a one-year plan, but it gets updated every quarter as we learn more and the world shifts.
Forecasting in this space is almost impossible. So instead, our process is simple: we experiment, test ideas, figure out what works, and double down on that. When we find something that doesn't work, we share it internally so others don't waste time repeating it. It's a very iterative, discovery-driven approach, the best one we've found.
Ksenia:
If you had to rank the biggest challenges in building this operating system (power, data movement, governance), what would come first?
Renen:
The biggest challenge is that everything is new. The whole stack is being rebuilt. Power plants, data centers, racks: all are designed differently now. The applications running on top of us are being written in new ways too. Everything is changing at once, and staying ahead of it all is tough.
The second challenge is speed. Everything is growing incredibly fast. We have to keep up with customer demand without breaking the company as we scale. Every three or six months, it feels like we're running a completely different organization because of the new levels of scale we're handling. It's not easy, none of this is, but it's deeply interesting and genuinely fun. I think this is the most exciting time and place to be working, certainly in my lifetime, maybe in many lifetimes.
Ksenia:
It really is a crazy and fascinating time. What concerns you most about it, and what excites you most, besides the sheer intensity of it all?
Renen:
On a day-to-day level, I worry about a thousand little things. Every morning I wake up to dozens of messages about things that need fixing or improving: fires to put out, processes to refine. Growing at this pace means there's always something broken or changing. My job is to make sure that list gets shorter by the time I go to sleep, though I know there'll be a new one waiting the next morning.
On a macro level, I worry about making sure this technology is used well. We have to ensure it doesn't fall into the wrong hands or get misused. I don't have all the answers, but staying ahead of it, staying in dialogue about it, that's essential.
Ksenia:
And what excites you most?
Renen:
The possibilities. Every week we see computers doing things we didn't think they could. Technology moves in S-curves: bursts of acceleration followed by plateaus. I don't know if we're at the steep part of the curve or approaching a flat one, but either way it's thrilling.
If we're plateauing, maybe AI itself will help us find the next wave of ideas. If we're still early in the climb, then the next few years will be breathtaking. At VAST, we get to work with some of the smartest people in the world, across both the application and hardware layers of this new frontier. That's incredibly energizing.
Ksenia:
One of my last questions is always about books. What book has influenced you the most, recently or from your early years?
Renen:
That's tough, because lately I barely have time to read. I work seven days a week right now; this opportunity is so big, and we don't want to miss it or mess it up. When I do read, I gravitate toward biographies. I read the biographies of Elon Musk and Leonardo da Vinci recently, both fascinating. I like seeing how people think and approach problems.
Earlier in my career, I read a lot of business books since I never studied business formally. My background is in computer science, but I had to learn how to build a company. I tried to absorb as much as I could to avoid making stupid mistakes, or at least make smarter ones.
Ksenia:
I noticed both you and Musk talk a lot about first-principles thinking. How does that help you?
Renen:
It helps clarify what's possible and what isn't. Most people assume most things are impossible. My background is in math, which is all about proofs; you don't know the answer in advance. You might prove something can be done, or that it can't. Either way, you expand understanding.
It's the same in building technology. If something is possible by the laws of physics, we should try to do it. If it's not, we should prove that it isn't. When someone says "that can't be done," I ask "why not?" Usually they're relying on an assumption that doesn't hold up. If you keep peeling those assumptions back, you reach first principles, and that's where real breakthroughs happen.
Most attempts fail, of course. But once in a while, you discover something that works and no one else has done before. That's the only way to build anything truly new.
Ksenia:
I think that's one of the greatest augmenting powers of large language models: they let us ask endless questions, keep peeling back layers, and get to the root of things.
Renen:
Exactly. They're beginning to reason, and they don't have our limitations. Think about it: out of eight billion people, only a tiny fraction ever come up with a truly new idea. And usually, one person does it once, writes about it, lectures about it, and we wait another generation for the next one. It's a slow process for humanity.
Computers can accelerate that. Instead of eight billion minds, we could have eight hundred billion thinking machines, each generating new ideas constantly. They can evolve and iterate every minute instead of every twenty years. It's evolution on steroids.
Ksenia:
That's fascinating. Thank you so much; it was a pleasure talking to you.
Renen:
My pleasure. Thank you for having me.
