Will Schenk and I just came back from NVIDIA GTC. With more than 1,000 sessions featuring many AI leaders, the conference can, in many ways, serve as a litmus test for where the AI industry is right now. Combining my analytical approach with Will’s practical experience helping companies work with AI more effectively through TheFocus.AI, we want to share the bigger picture we saw there.
In today’s episode, we will cover:
Why AI progress is outpacing organizational readiness
The hidden bottleneck most companies still haven’t solved
Why copilots so often disappoint
What mature AI deployments are actually building
Why the middle layer cannot be skipped
Why this is becoming a management problem
The real divide
Why AI progress is outpacing organizational readiness
For the last two years, the central question in AI was technological: how capable are the models?
That is still an important question. But for many companies, it is no longer the binding one.
The capabilities are arriving faster than organizations can absorb them. Models can reason better, search better, code better, summarize better, and operate across increasingly long chains of tasks. At GTC 2026, that was visible everywhere. Jensen Huang reframed NVIDIA itself as a "token factory." Session after session pushed the same message from different angles: intelligence is becoming an operational resource, something companies will produce, route, govern, and consume at scale. Yet inside most firms, the actual structure of work remains difficult to expose, difficult to verify, and difficult to translate into a form a machine can act on reliably.
This is where many AI discussions now go wrong. Companies say they want AI, when what they usually mean is that the hype makes them afraid of falling behind. What they often lack is a clear enough understanding of how their own work actually gets done. They do not have clean process maps, clear exceptions, reliable ownership, strong feedback loops, or even a shared definition of what good execution looks like in a form a machine can follow. What they do have is habit, tacit knowledge, local judgment, undocumented workarounds, and senior people who can spot when something is wrong instantly but would have a hard time explaining exactly how they know.
The knowledge exists. It is just stored in people, not in systems.
That is why the most important AI work happening inside companies right now is not model selection. It is organizational translation. It is the work of turning messy institutional memory into context, turning context into bounded action, and turning human correction into a learning loop.
A historical analogy helps here, if used carefully. Electric motors arrived in factories in the 1880s, but the big productivity gains did not show up until the 1920s. That is a forty-year gap. Early adopters often kept the old steam-era layout and simply replaced the power source. The real gains came later, when factories were redesigned around distributed electric power, with new floor plans, workflows, and assumptions about coordination and automation.
AI is pushing companies toward a similar threshold. Expecting the gains to come from dropping a new capability into an old organizational structure is just hoping for magic. The true benefit comes from redesigning the system around what the capability makes possible. That, underneath the demos and deployment stories, was one of the clearest lessons from GTC.
The hidden bottleneck most companies still haven’t solved
Most companies are not legible to machines.
This sounds abstract, but it is painfully concrete in practice. Work inside firms runs on partial documentation, institutional lore, ambiguous ownership, and constant exception handling. Teams say they have a process when what they often have is a stable pattern of improvisation. Humans can operate inside that environment because they absorb context socially. They know who to ask, which shortcut is acceptable, which dashboard lies, which metric matters, and which exception is normal enough to ignore. A model dropped into the same environment sees only fragments.
That is why so many AI pilots look impressive in demos and then fall apart in real use. The model can handle generic tasks, but the actual workflow is full of hidden dependencies, unwritten rules, and quality standards that were never built into the system. The company thought it was buying intelligence. What it actually discovered was how much of its own work had never been clearly defined.
And that’s a representation problem.
If the work is not described in a form the system can access, check, verify, and act on, better models will help less than people think. You can improve reasoning, speed, and multimodality, but none of that solves the basic problem: a machine cannot reliably work with knowledge that only lives in people’s heads.
GTC offered a very clear example. NVIDIA’s chip-design team said their first attempt in 2023, a fine-tuned domain expert, failed completely. Not because the model was terrible, but because hardware engineering is a domain where correctness is non-negotiable and answers without traceability are useless. If a system cannot show where an answer came from, engineers will not trust it.
Since then, a few things have changed. Yes, the models got better, but it was the relationship between the model and the company's knowledge that really had to change. Engineers curated their own documents. Responses became traceable to sources. Verifiability stopped being an afterthought and became part of the system.
Once traceability and verifiability were fixed, engineers could trust the responses. That trust was the key to driving adoption.
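To make that pattern concrete, here is a minimal sketch of answer-with-sources retrieval. This is our illustration, not NVIDIA's actual system: the `Chunk` structure, the lexical `overlap` scorer, and the `llm` callable are all hypothetical stand-ins. The part that matters is the refusal branch: an answer that cannot name its sources never reaches the engineer.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Chunk:
    text: str
    source: str  # e.g. a path into the engineer-curated design docs

def overlap(a: str, b: str) -> float:
    """Crude lexical overlap, standing in for a real embedding search."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa), 1)

def retrieve(question: str, index: list[Chunk], k: int = 3) -> list[Chunk]:
    """Rank curated chunks by relevance; drop anything with zero support."""
    ranked = sorted(index, key=lambda c: overlap(question, c.text), reverse=True)
    return [c for c in ranked[:k] if overlap(question, c.text) > 0]

def answer(question: str, index: list[Chunk], llm: Callable[[str], str]) -> str:
    """Answer only from retrieved sources; refuse rather than guess."""
    chunks = retrieve(question, index)
    if not chunks:
        return "No supporting source found. Escalating to a human instead of answering."
    context = "\n".join(f"[{c.source}] {c.text}" for c in chunks)
    draft = llm(
        "Answer using ONLY the sources below and cite them by [source].\n"
        f"{context}\n\nQuestion: {question}"
    )
    return f"{draft}\n\nSources: {', '.join(c.source for c in chunks)}"
```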
Why copilots so often disappoint
The first wave of enterprise AI was mostly framed as assistance: chat interfaces, summarization tools, coding help, retrieval, and light automation. The goal was simple enough: save time, improve throughput, reduce friction.
Most companies still evaluate AI on time saved. That is natural. It is the easiest metric to measure, and it makes the ROI case straightforward. But it also locks companies into a narrow understanding of what the technology is for. If the best they can imagine is the same work done 30% faster, they will build infrastructure that is good enough for acceleration and nowhere near good enough for the deeper changes that create real value.
That is why so many copilots disappoint. They often do deliver local gains, but they rarely change the shape of the work. Companies add tools without redesigning the workflow around them, which means they get pockets of productivity rather than systems that compound over time.
The more important question is not whether AI saves time. Sometimes it does, and sometimes it takes more time at first. What matters more is whether it changes the timing, range, and structure of what the organization can do at all.
In many workflows, the problem is not only how much work people have to do. The problem is that the right information often arrives too late, moves too slowly, or is too hard to coordinate. When AI changes that, the value is not only speed. It is the ability to do work that the company previously could not do at the right moment.
That distinction changes what companies need to build. If AI is treated as an acceleration layer, then a chatbot may be enough. If it changes what a company can actually do, then the company needs new data flows, verification systems, orchestration logic, and feedback loops. Treating AI as a chatbot is a software purchase. Expecting structural change means committing to a workflow redesign.
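As a rough sketch of that difference, consider the two patterns side by side. Everything here is hypothetical (`agent`, `verifier`, and `escalate` stand in for whatever stack a company actually builds); the point is that the second pattern adds verification and a correction loop that the first has no place for.

```python
def chatbot_layer(llm, prompt: str) -> str:
    """The acceleration pattern: one call, no verification, nothing retained."""
    return llm(prompt)

def workflow_layer(task, agent, verifier, escalate, corrections: list):
    """The redesign pattern: bounded action, verification, and a feedback loop."""
    result = agent.act(task)                   # act on real systems, within limits
    ok, issues = verifier.check(task, result)  # machine-checkable verification
    if ok:
        return result
    fixed = escalate(task, result, issues)     # human judgment on the failure
    corrections.append((task, result, fixed))  # raw material for future improvement
    return fixed
```

The `corrections` list is the smallest possible version of the learning loop described earlier: human fixes get captured in a form the system can use, instead of evaporating back into tacit knowledge.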
Shraddha Sridhar put this very clearly when she described three levels of deployment:
Individual productivity, where one engineer uses an agent to move faster.
Team-level scaling of the same pattern.
Capability expansion, and that is where things get more interesting. Her example was a power insights agent that did not simply speed up an existing task. It changed when important information became available. Data that used to arrive too late to influence the chip now showed up months earlier, when engineers could still act on it. That is a different workflow with a different outcome.
And when Jensen Huang talks about OpenClaw and says that everyone should have an "openclaw strategy," that is exactly what he means. The point is not that every company needs one more assistant sitting on top of the old system. The point is that once AI becomes a real operational layer, the workflow itself becomes the design problem.
What mature AI deployments are actually building
Across domains, the architecture of successful AI systems is finally starting to look less mysterious and more consistent. Let's take a closer look.

