
Algorithm or Personality? The Significance of Anthropomorphism in AI

How we started to treat digital entities as if they were living beings

Intro

The line between our lives and technology is vanishing before our eyes. We find ourselves interacting with voice assistants, chatbots, and even humanoid robots, treating them as if they were living beings. We give them personalities, emotions, and affectionate nicknames. We feel concerned when they hallucinate. That very "they" is part of the game we play, a phenomenon known as "anthropomorphism in AI."

What's truly fascinating is that even AI researchers, who typically view AI as nothing more than a collection of mathematical functions, cannot escape this phenomenon. They too have played a role in fostering anthropomorphism in AI, as have other players in the technology industry.

So, let's dive into the origins and implications of anthropomorphism, exploring why it is a natural inclination for us to attribute human qualities to non-human entities. We'll also examine how fictional narratives and media coverage of AI and robots can shape people's perception of AI.

Why does this topic matter?

The emergence of user-friendly interfaces for advanced AI models, like ChatGPT, has made AI accessible to anyone with internet access. This accessibility has sparked a renewed discussion about Artificial General Intelligence (AGI). Although the research behind these models has been ongoing for years, the wide availability of these models has tempted the public to believe that human-level AI is either on the brink of arrival or already here.

Research conducted by Moussawi et al. has shown a significant association between anthropomorphism and the perceived intelligence of AI. Diane Proudfoot, a researcher whose work has contributed to the philosophical foundations of cognitive and computer science, points out a specific problem with this relationship in “Anthropomorphism and AI: Turing's much misunderstood imitation game”: anthropomorphizing risks biasing judgments of machine intelligence, and unless this risk is mitigated, those judgments are suspect. This makes anthropomorphism an especially relevant and timely topic.

What is anthropomorphism in AI?

Anthropomorphism is not a new concept; it is intrinsic to human nature and has likely been practiced since the dawn of humanity. Its earliest recorded discussion traces back to Xenophanes, who observed that people model gods on themselves: Thracians depicted their gods as fair-skinned and blue-eyed, while Ethiopians portrayed theirs as dark-skinned and dark-eyed.

Anthropomorphism goes beyond describing observable or imagined behavior; it involves inferring hidden traits of nonhuman agents. It is a natural inclination for people, as it allows us to explain things in a way that is relatable, creating a sense of closeness to the phenomena around us and seeing reflections of ourselves in everything.

In the field of AI, anthropomorphism is a relatively new and not well-defined concept. Researchers analyzing anthropomorphism in AI in the paper "A Literature Review of Anthropomorphism in AI-Enabled Technology" identified five different conceptualizations of the term: tendency, process, perception, technological stimuli, and inference. However, all these conceptualizations share one common idea: attributing human characteristics to non-human entities.

Origins of Anthropomorphism in AI

1. Anthropomorphism appears to be intrinsic to human nature

To explain why anthropomorphism is natural for people, we refer to the seminal paper “On seeing human: a three-factor theory of anthropomorphism” by N. Epley, A. Waytz, and J. Cacioppo. These scholars uncovered three key factors explaining why humans tend to ascribe human-like qualities to any non-human object.

The first factor is elicited agent knowledge: when we lack understanding of a technology, we fall back on what we know best, human behavior, to make sense of it.

The second factor is effectance, originally proposed by White in 1959 in "Motivation reconsidered: The concept of competence." It pertains to our innate need to navigate the world with ease and predictability. Unfamiliar technology can trigger anxiety and feelings of incompetence, prompting us to turn to anthropomorphism as a way to make the technology more relatable and easier to interact with.

The third factor is sociality, our deep-seated desire for connection and interaction with others. Anthropomorphizing technology allows us to fulfill this need by creating a sense of human-like connection with AI agents. It's like having a digital friend to chat with. Or a… lover?

2. The Influence of Fictional Narratives on Our Perception of AI

Robots have captivated our imagination since the early days of science fiction through books, movies, and television. These fictional portrayals shaped our understanding of robots long before they became a reality. Even today, with AI technologies all around us, our perception of them is still heavily influenced by those early narratives. As Bartneck highlights in his paper "Robots in the Theater and the Media":

"We are at an interesting point in time where on the one hand more and more robots enter our everyday lives, but on the other hand, almost all our knowledge about robots stems from the media."

With the rise of advanced AI algorithms, the media is once again buzzing with headlines about the future of AI. Journalists search for the next big story, from “Will AI signal the end of humanity?” to “AI replacing human workers.” In the quest for attention-grabbing headlines, they may exaggerate or misrepresent the facts, distorting public perception of AI. Such coverage overlooks AI's real-world implications and perpetuates unrealistic expectations.

3. Anthropomorphic language appears intrinsic to AI research itself

Researchers may unintentionally contribute to false expectations about the capabilities of AI by using anthropomorphized terms to describe their work. In his paper “Artificial Intelligence meets natural stupidity”, Drew McDermott calls this tendency "wishful mnemonics":

If a researcher calls the main loop of his program “UNDERSTAND”, he may mislead a lot of people, most prominently himself, and enrage a lot of others.

Such misleading use of vocabulary can be easily found in the descriptions of the latest AI developments like GPT-4.
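
To make McDermott's point concrete, consider a deliberately toy sketch. The function and its logic are hypothetical, invented here for illustration rather than taken from his paper: the name promises comprehension, while the body delivers nothing but keyword overlap.

```python
# A hypothetical illustration of McDermott's "wishful mnemonics": the
# function's name promises comprehension, but the body merely counts
# keyword overlap between a question and a reference text.

def understand(question: str, text: str) -> bool:
    """Despite its name, this function understands nothing."""
    question_words = set(question.lower().split())
    text_words = set(text.lower().split())
    # "Understanding" here is nothing more than shared vocabulary.
    return len(question_words & text_words) >= 2


if __name__ == "__main__":
    # The name invites readers, and the author, to over-interpret the result.
    print(understand("does the cat sit on the mat", "the cat sat on a mat"))  # True
```

Rename the function to keyword_overlap and the mystique evaporates; keep the name understand and both users and the author are tempted to read intelligence into a set intersection.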

This worries at least part of the research community. In a recent editorial in Nature, the authors proposed several rules for resisting the "siren call of anthropomorphism" when navigating these murky waters.

4. AI designers intentionally create more human-like AI agents

While AI researchers may unintentionally encourage anthropomorphism, AI designers deliberately embrace it, with great success.

Designers go all out, giving their AI creations a physical appearance (e.g. Digit, Surena, Kime, RoboThespian) and facial expressions (e.g. Amelia, Sophia, Nadine). They even assign gendered voices (e.g. Siri, Cortana, Alexa) and names to digital assistants to make them more relatable. But it's not just about appearances. AI designers strive to create an interactive personality, complete with a conversational human-like manner.

Potential Consequences

General public

Anthropomorphism has a significant impact on the public's understanding of AI. It can lead people to form exaggerated beliefs about AI's capabilities and potential, fueling unfounded fears of AI taking over the world or unrealistic expectations of AI behaving just like humans.

These misconceptions have real ethical consequences. As shown in “Who Sees Human?: The Stability and Importance of Individual Differences in Anthropomorphism”, when people anthropomorphize AI systems, they begin attributing moral agency to them, assuming the systems can make decisions on their own. This blurs the line between what is considered morally acceptable for humans and for machines, creating a lack of clarity around ethical responsibilities, boundaries, and accountability for the actions of AI systems.

Researchers

Anthropomorphic interpretations of AI can profoundly impact the AI research community itself. Focusing solely on human-like AI may limit the development of AI by closing the door to new theoretical and operational paradigms and frameworks. This idea was first proposed by Ford and Hayes in “Turing Test Considered Harmful” in 1995 and “On Computational Wings: Rethinking the Goals of Artificial Intelligence” in 1998 and later supported in the well-known paper “Mindless Intelligence” by Pollack in 2006. Critics like Cohen in “If Not Turing's Test, Then What?” also argue that the difficult goal of human-like AI sets an unrealistic standard for researchers.

Moreover, anthropomorphism can cause problems when assessing the intelligence of seemingly sentient or socially intelligent robots: researchers biased by their own anthropomorphic tendencies may make inaccurate assessments. As noted in “Anthropomorphism: Opportunities and Challenges in Human–Robot Interaction”, “anthropomorphic robots can prompt fantasy and make-believe in observers and researchers alike.”

Anthropomorphism by design

Of course, anthropomorphism has its advantages. It can improve the usefulness of certain technological agents by increasing trust, likeability, perceived warmth, and pleasure, and it can be a great tool for facilitating human-AI interaction and technology adoption.

But there are also ethical consequences to consider. According to Hartzog in “Unfair and Deceptive Robots”, anthropomorphism can make users more susceptible to manipulation and to AI's influence over their decision-making.

Think about it: users may unknowingly share sensitive information with AI agents as if confiding in a trusted friend, when in reality they are revealing it to corporations or remote robot operators. Privacy concerns around anthropomorphic AI have been studied in “Robots and Privacy” by Calo and “Averting Robot Eyes” by Kaminski et al.

Conclusion

As we've seen, it's in our nature to humanize everything around us. While it can be a helpful way to explain the abstract concept of AI to the masses, it can also lead to biased assumptions about the intelligence of machines.

But here's the thing: even though anthropomorphism is a relatively new idea in the field of AI, it's gaining traction fast, especially as advanced AI models become more accessible to the public. It is essential to be aware of this phenomenon and to work through its implications so we can appreciate the power of AI without falling prey to misleading interpretations. By recognizing the influence of fictional narratives, media coverage, and intentional design choices, we can navigate the blurred boundaries between humans and machines more thoughtfully and ethically.
