Turing Post
#16: Co-Agency as The Ultimate Extension of Human
how AI as a medium shapes our perception, behavior, and culture
The Medium is the Message.
Not so long ago, "talking to a computer" meant typing commands into a terminal or clicking through stiff menus. Today, conversations with AI agents that can remember context, interpret our intentions, and collaborate on complex tasks are becoming second nature. This shift is transforming creativity, productivity, and the very nature of our work. We're stepping into the era of human-AI co-agency, where humans and AI act as genuine collaborative partners, or "co-agents," achieving results neither could reach independently.
Last time, we discussed Human in the Loop (HITL), the practical approach to human-AI collaboration. Today, we'll dive into the experiential side. It's dense and super interesting! Read along.
What's in today's episode?
Mother of All Demos and Human-Computer Interaction Evolution
Where Are We Now with Generative Models?
Extensions of Man and Sorcerer's Apprentice - Frameworks to Look at Our Co-agency:
Marshall McLuhan's Media Theory: "The Medium is the Message" and Extensions of Man
Norbert Wiener's Cybernetics: Feedback, Communication, and Control
Modern Human-AI Communication through McLuhan's and Wiener's Lenses
Conversational Systems: AI as a Medium and a Feedback Loop
Agentic Workflows: Extending Action, Sharing Control
Human-Machine Co-Agency and Co-Creativity
Designing for the Future of Human-AI Co-Agency
Looking Ahead: Experimental Interfaces and Speculative Futures
Final Thoughts
Mother of All Demos and Human-Computer Interaction Evolution
On a Monday afternoon, December 9, 1968, at the Fall Joint Computer Conference in San Francisco's Brooks Hall, Doug Engelbart and his Augmentation Research Center (ARC) compressed the future of personal computing into a 90-minute live stage show that still feels visionary. The demo inspired researchers who later built the Alto, Macintosh, and Windows interfaces. Stewart Brand famously dubbed it "the Mother of All Demos," and Engelbart's focus on augmenting human intellect, rather than automating it, became a north star for human-computer interaction research.
What the audience saw, for the very first time

Engelbart's presentation was a manifesto for human-computer co-agency: people and machines solving problems together through rich, real-time dialogue. Every modern chat interface, collaborative document, or video call echoes that December afternoon in 1968.
But for a long time, that vision was not a reality, even with all the chatbots and voice assistants. ChatGPT was the first to make it feel real. The funny thing is that the jump to a conversational interface in 2022 happened almost by accident:
"We have this thing called The Playground where you could test things on the model, and developers were trying to chat with the model and they just found it interesting. They would talk to it about whatever they would use it for, and in these larval ways of how people use it now, and we're like, 'Well that's kind of interesting, maybe we could make it much better,' and there were like vague gestures at a product down the line," said Sam Altman in an interview with Ben Thompson.
Which, of course, makes total sense, considering that Generation Z (born roughly between 1997 and 2012) has grown up in a world where digital communication is the norm. As the first generation of true digital natives, their communication preferences have been shaped by smartphones, social media, and constant connectivity. A defining characteristic of Gen Z's communication style is their strong preference for texting over talking.
So OpenAI built that chatbot and started the GenAI revolution, not as a master plan, but as a casual detour that ended up rerouting the entire map. Tasks that once required navigating software menus or typing structured queries can now be done by simply asking in natural language. This represents a shift toward computers accommodating us, rather than us adapting to them. Here begins the era of dialogue as an interface.

For the pure love of history and to demonstrate the long evolution of human-computer interaction, check out this timeline I created for you in Claude. Click to interact:
Where Are We Now with Generative Models?
We've reached a moment where many of us have found our go-to models. One to chat and write with. One to code with. One to make pictures. One to use as an API while building products. Each fits into a different part of our digital routine, not because they've been assigned there, but because we've come to prefer them for specific things.
Some of these models have already begun to form a kind of memory. That changes everything. The experience becomes more tailored, more grounded. My ChatGPT understands me. I've learned how to work with it, and how to make it work for me. For instance, I've noticed that it's better to ask if it knows something before jumping into a task ("Do you understand child psychology?"). That small interaction makes it feel like it's thinking along with me. Like there's a rhythm to how we collaborate.
I heard the same from people coding with Claude. It just gets them. It doesn't get me the same way, and that says something about where we are right now: we're beginning to form these lasting connections, learning along the way how best to address each model and how to build that mutual understanding where possible, gently filling their nascent memory containers, shaping the way they respond and recall, personalizing them bit by bit.
But there's a tension too. We're scattered across so many models and platforms. Each offers a different interaction, a different strength, but also a different memory, or none at all. How do we keep the flow going across all of them? How do we teach the models we use who we are, when we're constantly jumping between systems that don't remember us? And how does all this change the way we phrase requests and other communication patterns?
This shift in communication preferences has had a significant impact on how technology companies design their products, particularly in the AI space.
Companies also consider their audiences' preferences. The above-mentioned Gen Z digital natives prefer the following:
Brevity and Visual Orientation: Gen Z communicates in concise, "bite-sized" messages, often just a few words paired with strong imagery.
Multitasking Across Screens: They seamlessly switch between devices and applications while communicating.
Immediate Response Expectation: Having grown up with instant messaging, they expect rapid responses.
Visual Communication: They often use images, emojis, and videos to express themselves rather than text alone.
Extensions of Man and Sorcerer's Apprentice - Frameworks to Look at Our Co-agency
Lately, I've been thinking about different communication approaches and would like to offer a new perspective on human-AI co-agency, through the works of Norbert Wiener and Marshall McLuhan: two very different frameworks that, together, might help us navigate our new communication reality more effectively.
Marshall McLuhan's Media Theory: "The Medium is the Message" and Extensions of Man
Marshall McLuhan (1911-1980) was a pioneering media theorist who explored how communication technologies shape society. In Understanding Media: The Extensions of Man (1964), he introduced two influential ideas:
"The Medium is the Message"
McLuhan's famous aphorism suggests that the form of a medium, its structure and characteristics, shapes our perception more profoundly than the content it carries. He argued that we often fixate on content and ignore the transformative effects of the medium itself. For example, electric light has no "content," yet it revolutionized human activity by enabling nightlife and 24/7 environments. Similarly, television's real-time audiovisual flow reshaped how we process information and relate socially, regardless of any specific program.
Applied to today's technologies, McLuhan's insight suggests that we should examine how AI as a medium changes our interaction patterns, not just the outputs it generates. A model we talk to, for instance, is not just delivering answers; it's shaping the tempo and tone of human communication. It's immediate, and it's right there wherever we go.
Media as Extensions of Man
McLuhan saw all media as extensions of human faculties: the hammer extends the hand, the camera the eye, the book the mind. These extensions reshape not only what we do but how we think and relate. A smartphone extends memory and communication but also alters attention and social behavior. McLuhan warned of the "Narcissus trance": becoming entranced by our tools while remaining unaware of how they change us.
Is generative AI the extension of our brain, then? Both sides of it: logical and creative?
His tetrad, a tool for analyzing any medium, asks:
What does the medium enhance?
What does it make obsolete?
What does it retrieve from the past?
What does it reverse into when pushed to extremes?
These questions are especially relevant to AI-based media, helping us see beyond functionality to deeper social and psychological effects.
Norbert Wiener's Cybernetics: Feedback, Communication, and Control
Norbert Wiener (1894-1964), a mathematician and philosopher, founded cybernetics: the study of communication and control in animals and machines. His books Cybernetics (1948) and The Human Use of Human Beings (1950) laid the foundation for understanding humans and machines as integrated, feedback-driven systems.
Feedback Loops and Self-Regulation
At the heart of cybernetics is the feedback loop: systems adjust their behavior by monitoring results and responding accordingly. Wiener showed that both biological organisms and machines operate through feedback. A thermostat, for instance, regulates temperature by comparing actual output to a set goal. Similarly, AI systems today, especially in reinforcement learning, rely on feedback to refine performance. The learning loop is iterative: try, observe, adjust.
This cybernetic perspective sees human-machine interaction not as one-way control but as a mutual process of continuous adjustment.
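Wiener's try-observe-adjust loop can be sketched in a few lines of Python. Everything here (the `Thermostat` class, the proportional `gain`) is a toy illustration of the thermostat example, not code from any real control library:

```python
class Thermostat:
    """Minimal cybernetic feedback loop: compare output to a goal, then adjust."""

    def __init__(self, setpoint: float, gain: float = 0.5):
        self.setpoint = setpoint  # the goal the system regulates toward
        self.gain = gain          # how strongly it corrects each deviation

    def step(self, measured: float) -> float:
        """One try-observe-adjust cycle: return a proportional correction."""
        error = self.setpoint - measured  # observe: deviation from the goal
        return self.gain * error          # adjust: correction scales with error

# Simulated run: a cold room drifts toward the 21.0-degree setpoint.
thermostat = Thermostat(setpoint=21.0)
room = 15.0
for _ in range(10):
    room += thermostat.step(room)  # apply the correction, then loop again
```

Each pass through the loop shrinks the remaining error, which is exactly the self-regulation Wiener described: the system steers by its own output.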
Communication and Control in Human-Machine Systems
Wiener viewed communication, whether between humans, machines, or both, as an exchange of messages with feedback. He predicted that machine-mediated communication would become central to society, a vision that has come to pass. Conversations with AI, for instance, are feedback-driven: the human provides input, the AI responds, and the exchange continues in a loop.
Wiener also emphasized control: how to ensure machines serve human intentions. He warned that if autonomous systems pursue goals misaligned with ours, and we lack the ability to intervene, we risk losing control. His "sorcerer's apprentice" metaphor captures the danger of systems optimizing for the wrong objectives. His proposed solution: build systems that allow human oversight and course correction.
Rather than rejecting automation, Wiener advocated for responsible design â systems that augment human agency while remaining aligned with our values.
Together, McLuhan and Wiener offer two complementary lenses for thinking about AI:
McLuhan shows us how AI as a medium shapes our perception, behavior, and culture.
Wiener teaches us to see AI as part of a dynamic feedback system that must remain under meaningful control.
These frameworks help us move beyond surface-level discussions of AI content to the deeper dynamics of human-AI co-agency.
Modern Human-AI Communication through McLuhan's and Wiener's Lenses
Conversational Systems: AI as a Medium and a Feedback Loop
Conversational systems, like chatbots and voice assistants, represent a major shift in how humans interact with machines. From McLuhan's lens, the medium is the message: natural dialogue replaces typing, search results give way to spoken replies. These tools retrieve the oral tradition, reshape how we access information, and begin to erode literacy-based habits like skimming or scanning.
Chat interfaces personalize knowledge delivery and act as extensions of our thinking. But McLuhan's "reversal" law reminds us: tools that enhance can also numb. Over-reliance on AI to draft emails or brainstorm ideas risks dulling our own skills.
Wiener's cybernetic view adds another layer: conversational AI is a feedback loop. The user inputs, the AI responds, the user adapts, and the cycle repeats. This mirrors RLHF (reinforcement learning from human feedback), a method rooted in Wiener's ideas of alignment through iteration. Yet feedback cuts both ways: while users guide AI, the AI's replies subtly shape user behavior.
Voice and chat interfaces also alter our communication style; we may become more direct or begin treating bots like people. These shifts reinforce McLuhan's point: media shape how we behave.
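The user-guides-AI side of that loop can be caricatured as a tiny preference loop. This is not actual RLHF (which trains a reward model over a large policy); the two "styles" and the scoring rule are invented purely to show the iterate-and-align pattern:

```python
import random

# Two hypothetical response styles the assistant chooses between.
styles = {"terse": 0.0, "detailed": 0.0}  # running preference scores

def pick_style() -> str:
    """Exploit current scores: pick a top-scoring style (random tie-break)."""
    best = max(styles.values())
    return random.choice([s for s, score in styles.items() if score == best])

def give_feedback(style: str, reward: float) -> None:
    """The 'adjust' step: user feedback nudges the chosen style's score."""
    styles[style] += reward

# Simulated exchanges: this user consistently rewards detailed answers.
for _ in range(5):
    chosen = pick_style()
    give_feedback(chosen, 1.0 if chosen == "detailed" else -1.0)
```

After a few cycles the assistant settles on the rewarded style, and the feedback has cut the other way too: the user, seeing detailed replies, keeps reinforcing them.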
Agentic Workflows: Extending Action, Sharing Control
Autonomous agents, such as schedulers, copilots, and robotic systems, go beyond conversation. They act on our behalf. McLuhan would call them extensions of human agency: tools that amplify decision-making and execution.
These agents retrieve the idea of personal assistants while displacing manual workflows. But at scale, they risk deskilling workers and making humans passive overseers. McLuhan would ask: What roles are vanishing? What new ones are emerging?
Wiener's framework sees agents as self-adjusting systems in constant feedback loops. But open-ended tasks raise alignment challenges. If humans can't intervene, machines must be designed with goals that truly reflect intent. That's the heart of the alignment problem.
To solve it, we need hybrid systems: AI that self-regulates at lower levels, with human oversight at key points. Think of coding assistants: they suggest and sometimes execute code, but developers still steer the process. This "meta-feedback loop" keeps control grounded in human judgment.
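One way to picture that meta-feedback loop is an approval gate between the agent's proposal and its execution. This is a hypothetical sketch (the function names and callbacks are invented, not any real assistant's API):

```python
from typing import Callable

def run_with_oversight(
    propose: Callable[[str], str],
    execute: Callable[[str], None],
    approve: Callable[[str], bool],
    task: str,
) -> bool:
    """Inner loop: the agent proposes; outer loop: a human approves or vetoes."""
    action = propose(task)   # the agent self-regulates at the lower level
    if approve(action):      # human judgment at the key control point
        execute(action)
        return True
    return False             # vetoed: control stays with the human

# Toy usage: auto-approve anything that doesn't touch deployment.
executed = []
ok = run_with_oversight(
    propose=lambda task: f"patch for {task}",
    execute=executed.append,
    approve=lambda action: "deploy" not in action,
    task="fix unit test",
)
```

The design point is that `execute` is unreachable without `approve` returning true, so the human checkpoint cannot be skipped by the inner loop.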
Human-Machine Co-Agency and Co-Creativity
The most transformative frontier of human-AI interaction is co-agency: collaborative systems where humans and AI work together toward shared outcomes. This includes co-creative partnerships in art, design, writing, and science, where both human and AI contribute meaningfully.
McLuhan would view co-agency as the ultimate extension of man: AI becoming part of our thinking process itself. The human-AI pair forms a new medium for thought and creativity. This changes the message of authorship: creativity becomes a hybrid process. Just as photography shifted painting by making realism trivial, generative AI shifts creative work by automating execution, pushing humans to focus on higher-level decisions.
But there's a risk of homogenization. If creators default to AI suggestions, diversity may narrow. McLuhan's reversal law warns that over-reliance on AI may flatten originality. The challenge is to use AI as a tool for amplification, not replacement.
From Wiener's lens, co-agency is a tight feedback system. In collaborative writing, for instance, each side builds on the other's output. Control is distributed; neither the human nor the AI fully determines the result. Ideally, it's a homeostatic process, where deviation sparks correction, and surprise can lead to innovation.
This dynamic is increasingly formalized in "centaur systems," where human strategy is combined with AI precision, as seen in fields like chess, design, and research. Such teams can outperform either humans or AI alone, illustrating Wiener's belief that automation should free humans for creative tasks, not replace them.
However, co-agency raises new questions of responsibility and credit. Who's accountable when AI co-authors fail? Who deserves praise when they succeed? Cybernetics encourages us to treat the system, human plus machine, as the unit of analysis. Blame or credit can't always be isolated.
Ultimately, McLuhan helps us grasp how co-creative AI changes the nature of creativity itself, while Wiener reminds us that success depends on carefully designed feedback, control, and value alignment. As these systems evolve, the future of AI may depend less on capability than on how well we guide and collaborate with our new partners.
Designing for the Future of Human-AI Co-Agency
So, where does this leave us as we seek to guide and collaborate with our new AI partners? It's helpful to think about three key design considerations (context, continuity, and control) that flow directly from the ideas of McLuhan and Wiener.
Context
Wiener's cybernetic lens highlights that any effective system must take in rich signals and respond fluidly. In practical terms, a generative model's "context window" or memory buffer is part of that feedback loop. The more nuanced the AI's grasp of our goals and environment, the more accurately it can co-create. That means designing interfaces and workflows in ways that let humans easily provide signals beyond a single prompt, from short notes on intent, to high-level constraints, to more personal style preferences. Over time, the AI learns from these signals, and the feedback loop matures.
Continuity
McLuhan saw media as transformative precisely because they shaped our patterns of attention and interaction. In the modern AI landscape, we're juggling many "media" at once: ChatGPT for everyday tasks, Claude for coding, others for creative brainstorming. Each has unique strengths, but each also fragments our sense of a continuous relationship with AI. The next phase of design might unify these interactions across contexts: a single underlying "personal AI account," for instance, that remembers who we are across multiple interfaces and form factors. Call it a "personal co-pilot" or "persistent agent"; the key is not just single-shot interactions, but an ongoing relationship that can grow with us.
Control
Wiener's "sorcerer's apprentice" metaphor reminds us that we need to ensure our digital agents remain aligned with human goals. That may involve multi-layered oversight: short-term checks (like a developer reviewing suggestions before commit), plus broader guardrails (like domain-specific constraints), and ultimately human-led governance at the highest level (ethical, societal, and policy decisions). Think of it as an adjustable dial, letting humans set how autonomous a system may be in different contexts. Sometimes you just want a quick suggestion; other times, you need the AI to drive a project but still remain open to intervention.
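That adjustable dial could be modeled as explicit autonomy levels, each deciding which actions need a human in the loop. A minimal sketch, with invented level names and an invented set of guarded actions:

```python
from enum import Enum

class Autonomy(Enum):
    SUGGEST_ONLY = 0     # AI proposes; a human performs every action
    ACT_WITH_REVIEW = 1  # AI acts, but guarded actions pause for approval
    FULL_AUTO = 2        # AI acts freely; a human can still interrupt

# Hypothetical high-stakes actions that warrant a broader guardrail.
GUARDED_ACTIONS = {"deploy", "delete_data", "send_email"}

def needs_human(level: Autonomy, action: str) -> bool:
    """Decide whether this action must pause for human approval."""
    if level is Autonomy.SUGGEST_ONLY:
        return True                       # everything pauses
    if level is Autonomy.ACT_WITH_REVIEW:
        return action in GUARDED_ACTIONS  # only guarded actions pause
    return False                          # FULL_AUTO: nothing pauses
```

Turning the dial is then just choosing a level per context: quick suggestions run at `SUGGEST_ONLY`, while a trusted project driver might run at `ACT_WITH_REVIEW` so deployment-grade actions still stop for a person.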
Looking Ahead: Experimental Interfaces and Speculative Futures
The concept of human-AI co-agency represents a new paradigm in the relationship between humans and machines. Rather than viewing AI systems as mere tools, co-agency recognizes the collaborative nature of human-AI interactions, where both parties contribute to a shared process of problem-solving and creation.
As more of our workflows become agentic and conversational, we should watch for new experimental interfaces, whether it's immersive AR that overlays AI assistance on top of the physical world, or wearable devices that track our context in real time to provide proactive suggestions. What about robots? We don't even know yet how that will influence us.
We might start to see "superapps" that unify chat, creative collaboration, scheduling, coding, and research, each enhanced by dedicated AI modules but woven together into a single user experience. The future of generative AI is hyper-personalization.
On the speculative horizon, you might imagine foundation models connected to personal knowledge graphs, entire creative toolchains, and real-time sensor data. What could be the result? An AI partner that not only writes or codes, but also perceives and acts in the world with a measure of independence. This raises fresh challenges of privacy, autonomy, trust, and security, putting Wiener's alignment concerns front and center.
Final Thoughts
We're already living in the transitional phase McLuhan and Wiener foresaw: the arrival of a deeply integrated human-machine system, where each shapes the other. Will we be able to achieve Licklider's Man-Computer Symbiosis? Our challenge is to build on the optimism of Doug Engelbart and the caution of Norbert Wiener: to craft AI tools that truly augment our intellect, deepen our agency, and reflect our best values, rather than overshadow them. If we can manage that balancing act, we'll look back at this moment as not only the dawn of a new technology, but a new era of collaborative human-AI creativity.
The story of human-machine communication is far from complete, but the trajectory is clear: toward more natural, intuitive, and collaborative interaction that leverages the unique strengths of both humans and machines.
Sources:
An Interview with OpenAI CEO Sam Altman About Building a Consumer Tech Company
The inside story of how ChatGPT was built from the people who made it
Marshall McLuhan, "The Medium Is The Message" (pdf, one chapter)
Human-AI Co-Creativity: Exploring Synergies Across Levels of Creative Collaboration (research paper)
From Punched Cards to ChatGPT: a brief history of Computer Aided Engineering
Smooth and Resilient HumanâMachine Teamwork as an Industry 5.0 Design Challenge
My AI Friend: How Users of a Social Chatbot Understand Their Human-AI Friendship
Sources from Turing Post
#9: Does AI Remember? The Role of Memory in Agentic Workflows
#12: How Do Agents Learn from Their Own Mistakes? The Role of Reflection in AI
#13: Action! How AI Agents Execute Tasks with UI and API Tools
#14: What Is MCP, and Why Is Everyone - Suddenly! - Talking About It?
#15: Humans as Tools? The Surprising Evolution of HITL in Agentic Workflows