The Medium is the Message.
Not so long ago, "talking to a computer" meant typing commands into a terminal or clicking through stiff menus. Today, conversations with AI agents – capable of remembering context, interpreting our intentions, and collaborating on complex tasks – are becoming second nature. This shift is transforming creativity, productivity, and the very nature of our work. We're stepping into the era of human–AI co-agency, where humans and AI act as genuine collaborative partners – or "co-agents" – achieving results neither could reach independently.
Last time, we discussed Human in the Loop (HITL), the practical approach to human–AI collaboration. Today, we'll dive into the experiential side. It's dense and super interesting! Read along.
What's in today's episode?
Mother of All Demos and Human-Computer Interaction Evolution
Where Are We Now with Generative Models?
Extensions of Man and Sorcerer's Apprentice – Frameworks to Look at Our Co-agency:
Marshall McLuhan's Media Theory: "The Medium is the Message" and Extensions of Man
Norbert Wiener's Cybernetics: Feedback, Communication, and Control
Modern Human-AI Communication through McLuhan's and Wiener's Lenses
Conversational Systems: AI as a Medium and a Feedback Loop
Agentic Workflows: Extending Action, Sharing Control
Human-Machine Co-Agency and Co-Creativity
Designing for the Future of Human-AI Co-Agency
Looking Ahead: Experimental Interfaces and Speculative Futures
Final Thoughts
Mother of All Demos and Human-Computer Interaction Evolution
On a Monday afternoon, December 9, 1968, at the Fall Joint Computer Conference in San Francisco's Brooks Hall, Doug Engelbart and his Augmentation Research Center (ARC) compressed the future of personal computing into a 90-minute, live stage show that still feels visionary. The demo inspired researchers who later built the Alto, Macintosh, and Windows interfaces. Stewart Brand famously dubbed it "the Mother of All Demos," and Engelbart's focus on augmenting human intellect – rather than automating it – became a north star for human–computer interaction research.
What the audience saw – for the very first time

Engelbart's presentation was a manifesto for human–computer co-agency: people and machines solving problems together through rich, real-time dialogue. Every modern chat interface, collaborative document, or video call echoes that December afternoon in 1968.
But for a long time, that vision remained out of reach, even with all the chatbots and voice assistants. ChatGPT was the first to make it feel real. The funny thing is that the jump to the conversational interface in 2022 happened almost by accident:
"We have this thing called The Playground where you could test things on the model, and developers were trying to chat with the model and they just found it interesting. They would talk to it about whatever they would use it for, and in these larval ways of how people use it now, and we're like, 'Well that's kind of interesting, maybe we could make it much better,' and there were like vague gestures at a product down the line," said Sam Altman in an interview with Ben Thompson.
Which, of course, makes total sense, considering that Generation Z (born roughly between 1997 and 2012) has grown up in a world where digital communication is the norm. As the first generation of true digital natives, their communication preferences have been shaped by smartphones, social media, and constant connectivity. A defining characteristic of Gen Z's communication style is their strong preference for texting over talking.
So OpenAI built that chatbot and started the GenAI revolution – not as a master plan, but as a casual detour that ended up rerouting the entire map. Tasks that once required navigating software menus or typing structured queries can now be done by simply asking in natural language. This represents a shift toward computers accommodating us, rather than us adapting to them. Here begins the era of dialogue as an interface.

For the pure love of history and to demonstrate the long evolution of human-computer interaction, check out this timeline I created for you in Claude. Click to interact:
Where Are We Now with Generative Models?
We've reached a moment where many of us have found our go-to models. One to chat and write with. One to code with. One to make pictures. One to use as an API while building products. Each fits into a different part of our digital routine – not because they've been assigned there, but because we've come to prefer them for specific things.
Some of these models have already begun to form a kind of memory. That changes everything. The experience becomes more tailored, more grounded. My ChatGPT understands me. I've learned how to work with it – and how to make it work for me. For instance, I've noticed that it's better to ask if it knows something before jumping into a task ("Do you understand child psychology?"). That small interaction makes it feel like it's thinking along with me. Like there's a rhythm to how we collaborate.
I heard the same from people coding with Claude. It just gets them. It doesn't get me the same way, and that says something about where we are right now: we're beginning to form these lasting connections, learning along the way the best way to address each model and how to build that mutual understanding where possible – gently filling their nascent memory containers, shaping the way they respond and recall, personalizing them bit by bit.
But there's a tension too. We're scattered across so many models and platforms. Each offers a different interaction, a different strength – but also a different memory, or none at all. How do we keep the flow going across all of them? How do we teach the models we use who we are, when we're constantly jumping between systems that don't remember us? And how does that constant switching change the way we form requests and other communication patterns?
This shift in communication preferences has had a significant impact on how technology companies design their products, particularly in the AI space.
Companies also consider their audiences' preferences. The above-mentioned Gen Zers are digital natives and tend to prefer the following:
Brevity and Visual Orientation: Gen Z communicates in concise, "bite-sized" messages, often just a few words paired with strong imagery.
Multitasking Across Screens: They seamlessly switch between devices and applications while communicating.
Immediate Response Expectation: Having grown up with instant messaging, they expect rapid responses.
Visual Communication: They often use images, emojis, and videos to express themselves rather than text alone.
Extensions of Man and Sorcerer's Apprentice – Frameworks to Look at Our Co-agency
Lately, I've been thinking about different communication approaches and would like to offer a new perspective on Human-AI co-agency – through the works of Norbert Wiener and Marshall McLuhan. Two very different frameworks that, together, might help us navigate our new communication reality more effectively.
Marshall McLuhan's Media Theory: "The Medium is the Message" and Extensions of Man
Marshall McLuhan (1911–1980) was a pioneering media theorist who explored how communication technologies shape society. In Understanding Media: The Extensions of Man (1964), he introduced two influential ideas:
"The Medium is the Message"
McLuhan's famous aphorism suggests that the form of a medium – its structure and characteristics – shapes our perception more profoundly than the content it carries. He argued that we often fixate on content and ignore the transformative effects of the medium itself. For example, electric light has no "content," yet it revolutionized human activity by enabling nightlife and 24/7 environments. Similarly, television's real-time audiovisual flow reshaped how we process information and relate socially – regardless of any specific program.
Applied to today's technologies, McLuhan's insight suggests that we should examine how AI as a medium changes our interaction patterns, not just the outputs it generates. A model we talk to, for instance, is not just delivering answers – it's shaping the tempo and tone of human communication. It's immediate, and it's right there wherever we go.
Media as Extensions of Man
McLuhan saw all media as extensions of human faculties: the hammer extends the hand, the camera the eye, the book the mind. These extensions reshape not only what we do but how we think and relate. A smartphone extends memory and communication but also alters attention and social behavior. McLuhan warned of the "Narcissus trance" – becoming entranced by our tools while remaining unaware of how they change us.
Is generative AI, then, an extension of our brain – both sides of it, logical and creative?
His tetrad – a tool for analyzing any medium – asks:
What does the medium enhance?
What does it make obsolete?
What does it retrieve from the past?
What does it reverse into when pushed to extremes?
These questions are especially relevant to AI-based media, helping us see beyond functionality to deeper social and psychological effects.
Norbert Wiener's Cybernetics: Feedback, Communication, and Control
Norbert Wiener (1894–1964), a mathematician and philosopher, founded cybernetics – the study of communication and control in animals and machines. His books Cybernetics (1948) and The Human Use of Human Beings (1950) laid the foundation for understanding humans and machines as integrated, feedback-driven systems.
Feedback Loops and Self-Regulation
At the heart of cybernetics is the feedback loop: systems adjust their behavior by monitoring results and responding accordingly. Wiener showed that both biological organisms and machines operate through feedback. A thermostat, for instance, regulates temperature by comparing actual output to a set goal. Similarly, AI systems today – especially in reinforcement learning – rely on feedback to refine performance. The learning loop is iterative: try, observe, adjust.
This cybernetic perspective sees human–machine interaction not as one-way control but as a mutual process of continuous adjustment.
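Wiener's thermostat can be sketched in a few lines of code – a toy loop (the function names are mine, purely illustrative) that measures, compares to a goal, and corrects:

```python
# A minimal cybernetic feedback loop in the spirit of Wiener's thermostat:
# the system observes its output, compares it to a setpoint, and corrects.

def thermostat_step(current_temp: float, setpoint: float, gain: float = 0.5) -> float:
    """One feedback cycle: measure the error, apply a proportional correction."""
    error = setpoint - current_temp      # observe the deviation from the goal
    return current_temp + gain * error   # adjust behavior to reduce it

def run_loop(start_temp: float, setpoint: float, steps: int) -> float:
    """Iterate the cycle: try, observe, adjust."""
    temp = start_temp
    for _ in range(steps):
        temp = thermostat_step(temp, setpoint)
    return temp

# With gain=0.5, each iteration halves the distance to the setpoint,
# so the system converges - the hallmark of a self-regulating loop.
```

The same try-observe-adjust shape underlies far more complex learning systems; only the "error signal" and the "correction" change.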
Communication and Control in Human–Machine Systems
Wiener viewed communication – whether between humans, machines, or both – as an exchange of messages with feedback. He predicted that machine-mediated communication would become central to society, a vision that has come to pass. Conversations with AI, for instance, are feedback-driven: the human provides input, the AI responds, and the exchange continues in a loop.
Wiener also emphasized control: how to ensure machines serve human intentions. He warned that if autonomous systems pursue goals misaligned with ours – and we lack the ability to intervene – we risk losing control. His "sorcerer's apprentice" metaphor captures the danger of systems optimizing for the wrong objectives. His proposed solution: build systems that allow human oversight and course correction.
Rather than rejecting automation, Wiener advocated for responsible design – systems that augment human agency while remaining aligned with our values.
Together, McLuhan and Wiener offer two complementary lenses for thinking about AI:
McLuhan shows us how AI as a medium shapes our perception, behavior, and culture.
Wiener teaches us to see AI as part of a dynamic feedback system that must remain under meaningful control.
These frameworks help us move beyond surface-level discussions of AI content to the deeper dynamics of human–AI co-agency.
Modern Human-AI Communication through McLuhan's and Wiener's Lenses
Conversational Systems: AI as a Medium and a Feedback Loop
Conversational systems – like chatbots and voice assistants – represent a major shift in how humans interact with machines. From McLuhan's lens, the medium is the message: natural dialogue replaces typing, search results give way to spoken replies. These tools retrieve the oral tradition, reshape how we access information, and begin to erode literacy-based habits like skimming or scanning.
Chat interfaces personalize knowledge delivery and act as extensions of our thinking. But McLuhan's "reversal" law reminds us: tools that enhance can also numb. Over-reliance on AI to draft emails or brainstorm ideas risks dulling our own skills.
Wiener's cybernetic view adds another layer: conversational AI is a feedback loop. The user inputs, the AI responds, the user adapts – and the cycle repeats. This mirrors RLHF (reinforcement learning from human feedback), a method rooted in Wiener's ideas of alignment through iteration. Yet feedback cuts both ways: while users guide AI, the AI's replies subtly shape user behavior.
Voice and chat interfaces also alter our communication style – we may become more direct or begin treating bots like people. These shifts reinforce McLuhan's point: media shape how we behave.
Agentic Workflows: Extending Action, Sharing Control
Autonomous agents – schedulers, copilots, robotic systems – go beyond conversation. They act on our behalf. McLuhan would call them extensions of human agency: tools that amplify decision-making and execution.
These agents retrieve the idea of personal assistants while displacing manual workflows. But at scale, they risk deskilling workers and making humans passive overseers. McLuhan would ask: What roles are vanishing? What new ones are emerging?
Wiener's framework sees agents as self-adjusting systems in constant feedback loops. But open-ended tasks raise alignment challenges. If humans can't intervene, machines must be designed with goals that truly reflect intent. That's the heart of the alignment problem.
To solve it, we need hybrid systems: AI that self-regulates at lower levels, with human oversight at key points. Think of coding assistants – they suggest and sometimes execute code, but developers still steer the process. This "meta-feedback loop" keeps control grounded in human judgment.
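The hybrid pattern above can be sketched as a small routing function – an illustrative toy with invented names, not any real assistant's API: low-risk suggestions are self-applied, everything else waits for a human.

```python
# Sketch of the "meta-feedback loop": the AI proposes, a human reviews at
# key points, and only approved actions execute. All names are illustrative.

from dataclasses import dataclass, field

@dataclass
class Suggestion:
    description: str
    auto_safe: bool  # low-level changes the AI may self-apply

@dataclass
class ReviewLog:
    applied: list = field(default_factory=list)
    held_for_human: list = field(default_factory=list)

def route(suggestions, log: ReviewLog) -> ReviewLog:
    """Self-regulate at lower levels; escalate the rest to a human."""
    for s in suggestions:
        if s.auto_safe:
            log.applied.append(s.description)         # AI self-regulates
        else:
            log.held_for_human.append(s.description)  # human stays in the loop
    return log
```

The design point is simply that the escalation boundary is explicit and inspectable, rather than buried inside the agent.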
Human-Machine Co-Agency and Co-Creativity
The most transformative frontier of human-AI interaction is co-agency β collaborative systems where humans and AI work together toward shared outcomes. This includes co-creative partnerships in art, design, writing, and science, where both human and AI contribute meaningfully.
McLuhan would view co-agency as the ultimate extension of man – AI becoming a part of our thinking process itself. The human-AI pair forms a new medium for thought and creativity. This changes the message of authorship: creativity becomes a hybrid process. Just as photography shifted painting by making realism trivial, generative AI shifts creative work by automating execution, pushing humans to focus on higher-level decisions.
But there's a risk of homogenization. If creators default to AI suggestions, diversity may narrow. McLuhan's reversal law warns that over-reliance on AI may flatten originality. The challenge is to use AI as a tool for amplification, not replacement.
From Wiener's lens, co-agency is a tight feedback system. In collaborative writing, for instance, each side builds on the other's output. Control is distributed – neither the human nor the AI fully determines the result. Ideally, it's a homeostatic process, where deviation sparks correction, and surprise can lead to innovation.
This dynamic is increasingly formalized in "centaur systems," where human strategy is combined with AI precision – seen in fields like chess, design, and research. These teams outperform either human or AI alone, illustrating Wiener's belief that automation should free humans for creative tasks, not replace them.
However, co-agency raises new questions of responsibility and credit. Who's accountable when AI co-authors fail? Who deserves praise when they succeed? Cybernetics encourages us to treat the system – human + machine – as the unit of analysis. Blame or credit can't always be isolated.
Ultimately, McLuhan helps us grasp how co-creative AI changes the nature of creativity itself, while Wiener reminds us that success depends on carefully designed feedback, control, and value alignment. As these systems evolve, the future of AI may depend less on capability than on how well we guide and collaborate with our new partners.
Designing for the Future of Human–AI Co-Agency
So, where does this leave us as we seek to guide and collaborate with our new AI partners? It's helpful to think about three key design considerations – context, continuity, and control – that flow directly from the ideas of McLuhan and Wiener.
Context
Wiener's cybernetic lens highlights that any effective system must take in rich signals and respond fluidly. In practical terms, a generative model's "context window" or memory buffer is part of that feedback loop. The more nuanced the AI's grasp of our goals and environment, the more accurately it can co-create. That means designing interfaces and workflows in ways that let humans easily provide signals beyond a single prompt – from short notes on intent, to high-level constraints, to more personal style preferences. Over time, the AI learns from these signals, and the feedback loop matures.
Continuity
McLuhan saw media as transformative precisely because they shaped our patterns of attention and interaction. In the modern AI landscape, we're juggling many "media" at once: ChatGPT for everyday tasks, Claude for coding, others for creative brainstorming. Each has unique strengths, but each also fragments our sense of a continuous relationship with AI. The next phase of design might unify these interactions across contexts – a single underlying "personal AI account," for instance, that remembers who we are across multiple interfaces and form factors. Call it a "personal co-pilot" or "persistent agent" – the key is not just single-shot interactions, but an ongoing relationship that can grow with us.
Control
Wiener's "sorcerer's apprentice" metaphor reminds us that we need to ensure our digital agents remain aligned with human goals. That may involve multi-layered oversight: short-term checks (like a developer reviewing suggestions before commit), plus broader guardrails (like domain-specific constraints), and ultimately human-led governance at the highest level (ethical, societal, and policy decisions). Think of it as an adjustable dial, letting humans set how autonomous a system may be in different contexts. Sometimes, you just want a quick suggestion; other times, you need the AI to drive a project – but still remain open to intervention.
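That adjustable dial might look something like this in code – a speculative sketch whose levels and risk scores are made up here, just to make the idea concrete:

```python
# A toy model of the "adjustable dial": an autonomy level decides which
# actions need human sign-off. Levels and the 0-2 risk scale are
# illustrative assumptions, not a standard.

from enum import IntEnum

class Autonomy(IntEnum):
    SUGGEST_ONLY = 0     # every action needs explicit human approval
    ACT_WITH_REVIEW = 1  # routine actions run; risky ones are escalated
    AUTONOMOUS = 2       # the agent drives, but stays open to intervention

def needs_human(action_risk: int, dial: Autonomy) -> bool:
    """Return True when a human must approve the action (risk scored 0-2)."""
    if dial == Autonomy.SUGGEST_ONLY:
        return True
    if dial == Autonomy.ACT_WITH_REVIEW:
        return action_risk >= 1
    # Even a mostly autonomous agent escalates the riskiest calls.
    return action_risk >= 2
```

Different contexts would simply set the dial differently: SUGGEST_ONLY for a quick brainstorm, AUTONOMOUS for an agent driving a project.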
Looking Ahead: Experimental Interfaces and Speculative Futures
The concept of human-AI co-agency represents a new paradigm in the relationship between humans and machines. Rather than viewing AI systems as mere tools, co-agency recognizes the collaborative nature of human-AI interactions, where both parties contribute to a shared process of problem-solving and creation.
As more of our workflows become agentic and conversational, we should watch for new experimental interfaces – whether it's immersive AR that overlays AI assistance on top of the physical world, or wearable devices that track our context in real time to provide proactive suggestions. What about robots? We don't even know yet how those will influence us.
We might start to see "superapps" that unify chat, creative collaboration, scheduling, coding, and research – each enhanced by dedicated AI modules but woven together into a single user experience. The future of generative AI is hyper-personalization.
On the speculative horizon, you might imagine foundation models connected to personal knowledge graphs, entire creative toolchains, and real-time sensor data. What could be the result? An AI partner that not only writes or codes, but also perceives and acts in the world with a measure of independence. This raises fresh challenges of privacy, autonomy, trust, and security – putting Wiener's alignment concerns front and center.
Final Thoughts
We're already living in the transitional phase McLuhan and Wiener foresaw: the arrival of a deeply integrated human–machine system, where each shapes the other. Will we be able to achieve Licklider's Man-Computer Symbiosis? Our challenge is to build on the optimism of Doug Engelbart and the caution of Norbert Wiener – to craft AI tools that truly augment our intellect, deepen our agency, and reflect our best values, rather than overshadow them. If we can manage that balancing act, we'll look back at this moment as not only the dawn of a new technology, but a new era of collaborative human-AI creativity.
The story of human-machine communication is far from complete, but the trajectory is clear: toward more natural, intuitive, and collaborative interaction that leverages the unique strengths of both humans and machines.
Sources:
Marshall McLuhan, "The Medium Is The Message" (pdf, one chapter)
Sources from Turing Post


