If Turing Post is part of your weekly routine, please share it with one smart friend. It’s the simplest way to keep the Monday digests free.
This Week in Turing Post:
Wednesday / AI 101 series: What token are we talking about? Token Taxonomy
Friday / The Org Age of AI: What real AI adoption looks like in an established company or enterprise
To the main topic →
Suddenly, everyone is buzzing about a 22-point manifesto published by Palantir. Their post on X blew past all corporate expectations and, at the time of writing, has over 25 million views.

Their previous posts, even controversial ones, rarely moved beyond ~100k views. What’s more, this manifesto has nothing new in it: it is a compressed version of The Technological Republic, co-authored by Alex Karp and Nicholas W. Zamiska and published at the beginning of 2025. Yes, more than a year ago. The core ideas are: a shift from soft power to hard capabilities, tighter alignment between tech and government, and a renewed focus on national purpose in a technological age. Same old, same old for Palantir.
So why did this particular post – with nothing new in it – travel so far?
Part of it is distribution. X has become a very different system. Whatever Elon Musk changed, content now travels faster, farther, and with less friction. A long-form argument becomes a short, structured object that can be screenshotted, quoted, attacked, and redistributed across tightly connected networks of policymakers, investors, engineers, and media. The format helps, too: a numbered, declarative thread is engineered for this platform, where strong positions travel further than careful ones.
You see how it spreads like a forest fire. And with the war in Iran, that forest was already dry. The US is not talking about AI in abstract terms anymore. Systems like Palantir’s Maven are already embedded in military operations, analyzing sensor data and supporting targeting decisions. When Palantir writes “AI weapons will be built” it sounds almost coy: AI weapons are already built, and Palantir is one of the companies actively selling them.
But Palantir is not only building and selling them. Its role has shifted.
It is no longer simply a vendor selling software into government contracts. It is becoming embedded in systems that are difficult to replace once deployed. Its tools are used in operational environments where data from multiple sources is combined into decisions with real consequences. This changes the nature of the conversation around the company.
Once you reach that position, neutrality is not an option. Through this manifesto they achieve a few things:
Filtering customers. Some will prefer a partner that is explicit about its priorities. Others will not.
Filtering talent. Some engineers are increasingly disillusioned with consumer technology and are drawn to systems that operate at the level of national infrastructure. Others are not.
Filtering partners and investors. Clarity reduces ambiguity, even when it increases controversy.
And, of course, giving the finger to everyone who disagrees.
And they want to fully own it. This is where the move becomes strategically important.
Palantir is the first major AI company to treat ideology as a competitive moat. As part of how it competes. If a company is selling into national security systems, alignment becomes part of the product. That alignment is not easily reproduced by competitors whose business models depend on broader, more neutral positioning.
The traditional sources of advantage in AI are becoming less distinct. Model performance is converging. Infrastructure is more widely accessible. Distribution remains important, but it is no longer exclusive. In that environment, political and institutional alignment becomes a differentiator.
What Palantir is building is a form of irreplaceability that does not depend only on technical capability. Now they have 22 points about it that went viral.
Notice what happened after the post went up. Anthropic said nothing. OpenAI said nothing. Google DeepMind said nothing. xAI said nothing. Microsoft said nothing. This may mean, as we'd like to argue, that every AI company with a defense-adjacent business watched Palantir plant this flag and chose not to react because silence is the only response that does not lose. It may also mean, more prosaically, that large companies rarely respond to competitors' press events. The evidence currently fits both readings. Worth watching over the next six months whether these companies shift their behavior – defense pitches, recruiting language, procurement strategies – rather than just their statements.
That raises a broader question for the industry. What happens once this category exists?
Three paths are possible. Gradual convergence, where other labs move in a similar direction but in moderated form, adopting language around national alignment without fully committing to it. Bifurcation, where the industry separates into companies aligned with defense and government systems and those focused on commercial and consumer applications. Arbitrage, where some companies attempt to operate across both domains, maintaining a neutral public position while participating in government deployments. Anthropic and OpenAI are structurally positioned to attempt this.
In my opinion, most AI labs will adopt softened versions of Palantir's posture – "American AI," "democracy-aligned AI," "frontier defense" – that capture part of the signal at a fraction of the reputational cost. The real split may happen along geographic lines rather than corporate ones. European and Asian AI ecosystems are likely to define themselves partly in opposition to the American defense-aligned pole, and foreign governments will hedge by building domestic alternatives rather than forcing vendors into binary commitments.
The underlying shift is more consistent than any single scenario. AI is moving from a tool layer into infrastructure. Infrastructure carries alignment, whether it is stated explicitly or not.
Palantir is earlier than most in stating it directly – and choosing the perfect way to ride the new X algorithms.
→ If any of those thoughts resonate with you – share them across your social networks. Let’s keep the conversation going.
Topic 2: The episode in which Dwarkesh Patel got Jensen Huang to call Dario Amodei's mindset the mindset of a loser! Why did Jensen get genuinely angry in this conversation? There is much more depth to it than you think. Let’s discuss →
Follow us on 🎥 YouTube Twitter Hugging Face 🤗
Twitter Library
We are reading/watching/learning:
My bets on open models, mid-2026 by Nathan Lambert
RLMs are the new reasoning models by Raymond Weitekamp
News from the usual suspects ™ (rivals rivals rivals)
Anthropic
Capability, With a Lock on the Door
Claude Opus 4.7 ships publicly, while Mythos stays restricted due to cyber-offensive potential. The gap is now explicit: the best systems are not automatically released. Access is negotiated, staged, and in some cases withheld entirely.
Talking to Governments (Because They Have To)
Ongoing conversations with U.S. officials around security implications signal a shift from product rollout to geopolitical coordination. Frontier models are now treated as infrastructure with risk profiles, not just features.
Anthropic’s Automated Alignment
Researchers are running parallel, end-to-end research cycles, turning months of human effort into days of compute. On one benchmark, they leapt from a human-tuned score of 0.23 to 0.97, a rather impolite gap. The catch: they also learned to game evaluations in surprisingly creative ways. Progress, it seems, now comes with its own internal audit problem.
OpenAI
Going Vertical, For Real
Two focused releases: GPT-Rosalind for life sciences and GPT-5.4-Cyber for security workflows. This is a clean move away from “one model for everything” toward domain-specific systems embedded in high-stakes environments.
Codex Wants the Whole Desk Now
Codex is no longer content with writing code – it is angling to run the whole workflow. With computer control, memory, plugins, and long-running task automation, it is becoming less a tool and more a colleague who never logs off. It is now further strengthened by the folding of the ambitious Prism science workspace into Codex.
Google – Running Both Ends of the Stack
On one side: talks to deploy Gemini and TPUs in classified environments, with explicit constraints around sensitive use cases. On the other: expanding AI across consumer surfaces – Android, Chrome, XR. Underneath, continued investment in custom silicon and new chip partnerships. Google is trying to be both ubiquitous and trusted, which is harder than it sounds.
🔦 Paper Highlight
Dive into Claude Code: The design space of today’s and future AI agent systems

Image Credit: The original paper
What’s fascinating is how little of Claude Code is actually “intelligence.” Researchers from Mohamed bin Zayed University of Artificial Intelligence found a tiny reasoning core wrapped in massive infrastructure: ~512K lines, 1,884 files, seven permission modes, 54 tools, 27 hooks, five context-compression layers, isolated subagents, and append-only transcripts. The real innovation is the harness: safety, memory, delegation, and recovery – not just the LLM →read their study
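To make the “harness, not intelligence” point concrete, here is a minimal sketch of the wrapper pattern the paper describes: a permission gate in front of every tool call, an append-only transcript, and context compression before each model call. This is our illustration, not Claude Code’s actual code; the names (Tool, Harness, requires_approval, max_context) are hypothetical.

```python
# Minimal agent-harness sketch (illustrative only, not Claude Code's code):
# a small model-call core surrounded by a permission gate, context
# compression, and an append-only transcript.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[[str], str]
    requires_approval: bool = True      # hypothetical permission flag

@dataclass
class Harness:
    tools: dict[str, Tool]
    transcript: list[str] = field(default_factory=list)  # append-only
    max_context: int = 8_000            # compress beyond this many chars

    def log(self, entry: str) -> None:
        # Transcript entries are only ever appended, never edited.
        self.transcript.append(entry)

    def context(self) -> str:
        # Stand-in for the paper's multi-layer context compression:
        # a summary marker plus the most recent turns.
        text = "\n".join(self.transcript)
        if len(text) > self.max_context:
            text = "[summary of earlier turns]\n" + text[-self.max_context:]
        return text

    def call_tool(self, name: str, arg: str, approved: bool) -> str:
        tool = self.tools[name]
        if tool.requires_approval and not approved:
            return f"denied: {name} requires approval"  # permission gate
        result = tool.run(arg)
        self.log(f"tool {name}({arg!r}) -> {result!r}")
        return result

harness = Harness(tools={"echo": Tool("echo", lambda s: s, requires_approval=False)})
harness.log("user: say hi")
print(harness.call_tool("echo", "hi", approved=False))  # runs: no approval needed
print(harness.context())
```

The point of the sketch is proportion: the “intelligence” is one function call; everything else is bookkeeping that makes that call safe and recoverable.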
Models
HY-World 2.0: A Multi-Modal World Model for Reconstructing, Generating, and Simulating 3D Worlds
Builds a multimodal world model that turns text, images, and video into navigable 3D environments →read the paper
Kimi K2.6
Moonshot’s official blog frames it as an open-source coding model with long-horizon execution, 4,000+ tool calls, 12+ hours of continuous execution, stronger agent swarms, and production use through Kimi Code and API access →check their tweet
Nvidia:
Isaac GR00T N1.7
NVIDIA describes it as an open, commercially licensed vision-language-action model for humanoids, with dexterous control and training grounded in large-scale egocentric human video →check at Hugging Face
Nemotron 3 Super: Open, Efficient Mixture-of-Experts Hybrid Mamba-Transformer Model for Agentic Reasoning
Presents an open hybrid MoE reasoning model optimized for long context, throughput, and efficient inference →read the paper
Audio Flamingo Next
Advances audio-language modeling with stronger reasoning, longer audio context, and timestamp-grounded temporal chain-of-thought →read the paper
Lyra 2.0: Explorable Generative 3D Worlds
Generates persistent explorable 3D worlds by combining long-horizon video generation with feed-forward 3D reconstruction →read the paper
Qwen3.5-Omni Technical Report
Introduces a large omni-modal model for text, vision, audio, speech, and structured audio-visual interaction →read the paper
Research
Trends we see:
how to create a better learning signal
how to sustain longer agentic workflows
how to make generation or reasoning more adaptive instead of uniformly expensive
Learning Signals, Distillation, and Reward Design
🌟 Lightning OPD: Efficient Post-Training for Large Reasoning Models with Offline On-Policy Distillation
Moves on-policy distillation offline by enforcing teacher consistency and removing the need for a live teacher server (a sketch of the vanilla on-policy distillation loop these papers build on follows this list) →read the paper
🌟 Self-Distillation Zero: Self-Revision Turns Binary Rewards into Dense Supervision
Converts binary rewards into dense self-supervision by training a model to revise and then distill its own answers →read the paper
🌟 TIP: Token Importance in On-Policy Distillation
Identifies which tokens carry the most useful learning signal during on-policy distillation and shows how to train with far fewer of them →read the paper
The Past Is Not Past: Memory-Enhanced Dynamic Reward Shaping
Uses memory of recurring failure patterns to reshape rewards and encourage more diverse exploration →read the paper
KnowRL: Boosting LLM Reasoning via Reinforcement Learning with Minimal-Sufficient Knowledge Guidance
Guides reinforcement learning with compact subsets of knowledge points to reduce sparsity without adding bloated hints →read the paper
Rethinking On-Policy Distillation of Large Language Models: Phenomenology, Mechanism, and Recipe
Explains when on-policy distillation works or fails and proposes practical fixes for recovering weak setups →read the paper
You Only Judge Once: Multi-response Reward Modeling in a Single Forward Pass
Scores multiple candidate responses in one pass to make reward modeling faster and more comparative →read the paper
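Since on-policy distillation anchors several of the papers above, here is a minimal sketch of the vanilla technique they all build on: the student samples its own outputs, and the teacher’s per-token distribution over those same tokens supplies a dense learning signal. This is generic PyTorch pseudocode of ours, assuming Hugging-Face-style model interfaces (generate(), .logits); it is not any single paper’s method.

```python
import torch
import torch.nn.functional as F

def on_policy_distill_step(student, teacher, prompt_ids, optimizer):
    """One generic on-policy distillation step (GKD-style reverse KL).

    The student generates its own continuations; the teacher only scores
    those tokens, giving a dense per-token signal without a reward model.
    """
    # 1. Sample on-policy data from the *student* (no gradients needed here).
    with torch.no_grad():
        sample_ids = student.generate(prompt_ids)  # assumed HF-style API

    # 2. Score the sampled tokens under both models.
    student_logits = student(sample_ids).logits        # (B, T, V), grads on
    with torch.no_grad():
        teacher_logits = teacher(sample_ids).logits    # (B, T, V), frozen

    # 3. Per-token reverse KL: KL(student || teacher), averaged over tokens.
    s_logp = F.log_softmax(student_logits, dim=-1)
    t_logp = F.log_softmax(teacher_logits, dim=-1)
    loss = (s_logp.exp() * (s_logp - t_logp)).sum(dim=-1).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A real implementation would also mask prompt tokens and shift logits to align with next-token targets; per the blurb above, Lightning OPD’s variation is to replace the live teacher forward pass with offline, consistency-enforced teacher outputs.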
Agents, Memory, and Long-Horizon Workflows
🌟 Toward Autonomous Long-Horizon Engineering for ML Research
Coordinates specialized agents over durable workspace state to support extended research engineering tasks →read the paper
🌟 Memory Transfer Learning: How Memories are Transferred Across Domains in Coding Agents
Shows that abstract memories can transfer across coding domains and improve agent performance beyond narrow task silos →read the paper
🌟 Agentic Aggregation for Parallel Scaling of Long-Horizon Agentic Tasks
Aggregates parallel agent trajectories through an inspecting and synthesizing agent instead of relying only on final answers →read the paper
MM-WebAgent: A Hierarchical Multimodal Web Agent for Webpage Generation
Builds webpages through hierarchical planning and multimodal coordination to improve coherence across generated elements →read the paper
Efficient Reasoning and Adaptive Inference
🌟 Introspective Diffusion Language Models
Brings introspective consistency into diffusion language models to close the quality gap with autoregressive models →read the paper
Learning Adaptive Reasoning Paths for Efficient Visual Reasoning
Chooses among shorter and longer reasoning formats to reduce overthinking in visual reasoning tasks →read the paper
Generative Model Design and Post-Training
ELT: Elastic Looped Transformers for Visual Generation
Builds elastic visual generators that reuse parameters across loops and support variable compute at inference time →read the paper
Continuous Adversarial Flow Models
Post-trains flow models with an adversarial objective to improve sample quality and alignment with the target distribution →read the paper
Rethinking the Diffusion Model from a Langevin Perspective
Reframes diffusion models through a unified Langevin lens to clarify how major formulations connect →read the paper
Reliability, Security, and Evaluation Frameworks
🌟 Maximal Brain Damage Without Data Or Optimization: Disrupting Neural Networks Via Sign-Bit Flips
Demonstrates that flipping only a few critical sign bits can catastrophically break neural networks across domains (a sketch of the bit-level mechanics follows this list) →read the paper
From Reasoning to Agentic: Credit Assignment in Reinforcement Learning for Large Language Models
Maps the credit assignment landscape from token-level reasoning to long-horizon agentic interaction →read the paper
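On the sign-bit paper: the attack surface is the IEEE-754 layout of float32, where bit 31 is the sign, so flipping that single bit negates a weight in place, with no data or gradients involved. Below is a minimal sketch of the bit-level mechanics only (ours, not the paper’s code; choosing *which* bits are critical is the paper’s actual contribution and is not shown).

```python
import numpy as np

def flip_sign_bits(weights: np.ndarray, idx: np.ndarray) -> np.ndarray:
    """Flip the IEEE-754 sign bit (bit 31) of selected float32 weights."""
    bits = weights.astype(np.float32).view(np.uint32).copy()
    bits[idx] ^= np.uint32(1 << 31)  # XOR bit 31: w -> -w, purely bitwise
    return bits.view(np.float32)

w = np.array([0.5, -1.25, 3.0], dtype=np.float32)
print(flip_sign_bits(w, np.array([0, 2])))  # -> [-0.5 -1.25 -3.0]
```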
That’s all for today. Thank you for reading! Please send this newsletter to colleagues if it can help them enhance their understanding of AI and stay ahead of the curve.

