As marketers and brand managers rush to incorporate AI-generated content and tools such as chatbots, virtual assistants and AI-created graphics, it's crucial to consider the psychological impact on customers. While AI interactions can seem highly realistic, they can also fall into an "uncanny valley," where people experience an uneasy or creeped-out feeling. Understanding this effect, and how to avoid it, is key to successful AI adoption.
The concept of the uncanny valley originated with roboticist Masahiro Mori in the 1970s. He proposed that as robots appear more humanlike, people respond more positively up until a point where the robot is almost lifelike but slightly "off," evoking a sense of unease. This dip into negative emotion before rising again is the uncanny valley.
There are several roots of this effect. As Freud explained, the uncanny stems from something familiar that has been repressed and then reemerges in unfamiliar ways [1]. The AI seems strangely familiar, triggering our latent awareness of human imperfection we'd rather not confront. Eerie dolls, prosthetic limbs, masks and humanoid robots also elicit this effect by falling short of the natural human appearance and behavior we expect.
Recordings of our own voice evoke it too - we're unsettled by hearing ourselves sound different from our self-perception. AI voices and content may seem real at first, but subtly abnormal prosody, flat tone and imperfect empathy become apparent, and unnerving, over time. When the line between human and machine blurs, it triggers an existential unease about human identity and control.
For marketers using AI, the risk is real. A chatbot that seems human but lacks warmth could drive customers away. As audio deepfakes improve, AI voices in ads could leave audiences subconsciously chilled if imperfections betray the non-human origin. And no one wants their brand associated with creepy feelings.
Recent studies by CloudArmy reinforce the risks of the uncanny valley effect in AI marketing content. We tested AI-generated voices against human actors reading identical ad scripts across radio, podcasts and other audio formats. Remarkably, even when respondents found the AI voices highly realistic, or indistinguishable from real voices, reaction-time measures revealed implicit negative associations compared with the human voices. Despite sounding real, the subtle artificial flaws in AI voices produced unconscious unease and lowered positivity. This shows how deeply the uncanny valley phenomenon operates below conscious awareness. Rigorous implicit testing is essential to uncovering a creep factor that could be turning off customers without them even knowing why. Brands leveraging AI voices and content should follow CloudArmy's lead in measuring implicit reactions, not just relying on what consumers report.
Research has shown that chatbots can avoid the uncanny valley effect when they stick to plain text. But when they try to offer richer, human-like experiences with animated avatars, they risk triggering creepy feelings [2]. A brief Google search suggests that people find AI-generated video in particular creepy. Perhaps the greater the fidelity required, the greater the risk of unsettling people.
The solution is to rigorously test AI content with techniques that detect emotional responses users and viewers may not self-report. Implicit-association and other response-time measures can uncover an unconscious "creep out" factor that AI interactions still trigger for many people. Benchmark AI content against human-generated versions to see whether the uncanny valley emerges.
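As a minimal sketch of that benchmarking step, assuming you have collected per-trial reaction times (in milliseconds) for the same ad script in a human-voice and an AI-voice condition, the comparison reduces to a mean reaction-time gap plus an effect size. The data below are illustrative placeholders, not CloudArmy's results:

```python
from statistics import mean, stdev

# Hypothetical reaction times (ms) from an implicit-association task.
# Slower responses when pairing a voice with positive words suggest
# weaker positive associations. Values are illustrative only.
human_voice_rt = [612, 598, 640, 587, 605, 621, 594, 630]
ai_voice_rt = [668, 645, 702, 659, 681, 673, 690, 655]

def cohens_d(baseline, treatment):
    """Cohen's d for the difference in mean reaction time,
    using the pooled standard deviation of the two samples."""
    pooled_sd = ((stdev(baseline) ** 2 + stdev(treatment) ** 2) / 2) ** 0.5
    return (mean(treatment) - mean(baseline)) / pooled_sd

gap_ms = mean(ai_voice_rt) - mean(human_voice_rt)
d = cohens_d(human_voice_rt, ai_voice_rt)
print(f"Mean RT gap: {gap_ms:.1f} ms (Cohen's d = {d:.2f})")
```

A positive gap with a non-trivial effect size would flag an implicit penalty for the AI voice even if survey respondents rated both versions equally; in practice you would also run a significance test and a much larger sample before acting on the result.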
With careful measurement of how AI content truly makes audiences feel - beyond what they consciously articulate - the uncanny valley can be avoided. AI should delight, not disturb. Reduce the anomalies that betray the illusion until the technology is welcomed by customers as a helpful partner. Meet people's expectations for natural human speech, warmth and empathy. Then AI content will build trust and boost satisfaction rather than leave an uneasy feeling that undermines the brand.
References
[1] Freud, S., 1919. The Uncanny. The Standard Edition of the Complete Psychological Works of Sigmund Freud, pp.219-252.
[2] Ciechanowski, L., Przegalinska, A., Magnuski, M. and Gloor, P., 2019. In the shades of the uncanny valley: An experimental study of human–chatbot interaction. Future Generation Computer Systems, 92, pp.539-548.