Our expectations can influence all sorts of things, from how effective painkillers are to how we interact with technology. As AI becomes increasingly integrated into our lives, research is beginning to show how users’ expectations shape their interactions with AI, an effect that could be called the AI placebo effect.
What is the placebo effect?
The placebo effect was first documented in medicine, and it is powerful: people’s belief in a treatment’s effectiveness can lead to real improvements in their condition, even when the treatment is just a sugar pill. That sincere belief triggers physiological and psychological responses that produce measurable effects, showing how strongly our thoughts and expectations can influence us.
The placebo effect in AI
In a study by Kosch et al. (2023), participants solved word puzzles. Some were told they were solving ordinary puzzles, while others were told their puzzles came from an “adaptive AI interface” that adjusted the difficulty based on their performance.
In reality, there was no AI, and every participant saw the same puzzle set and difficulty levels. Simply being told they had AI assistance made participants feel more confident, and they actually performed better, even though the “AI” wasn’t doing anything!
A recent study by Kloft et al. (2024) further demonstrates the placebo effect in human–AI interaction. Participants were told an AI system would either increase or decrease their performance, although no AI was involved at all. Those who believed AI was assisting them processed information faster and performed better, while those with negative expectations performed worse. The researchers called this the “AI performance bias.”
AI as friend or foe?
As AI becomes more common in our lives, people disagree about whether it is a force for good or a threat to humanity.
As it turns out, just describing an AI as caring or manipulative can change how people perceive and interact with it. In a study by Pataranutaporn et al. (2023), participants who thought they were chatting with a caring AI had much more positive conversations compared to those who thought the AI was neutral or manipulative.
People’s expectations created a self-fulfilling prophecy: those who thought the AI was friendly treated it that way, while those in the manipulative condition had conversations that were more likely to turn negative. Confirmation bias led people to interpret the AI’s responses in line with what they had been primed to expect, and to behave accordingly.
Why does this matter to UX designers?
These studies show that people’s expectations shape their experiences: how people think about AI can significantly change both how they feel about it and the outcomes of the interaction.
- Understanding the placebo effect can help UX designers create better user experiences. By managing expectations and being transparent about what the technology can and can’t do, we can build trust and satisfaction among users.
- We also need to consider the ethical implications of describing AI’s capabilities. We want our users to have genuinely positive experiences, not just ones based on false expectations.
- User studies of AI and HCI systems should include controls for placebo effects, so that the true impact of a system’s functionality on user experience can be isolated from the effect of expectations alone (a minimal analysis sketch follows this list).
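To make that last point concrete, here is a minimal, hypothetical sketch in Python of how such a study might be analyzed. The file name, column names, and conditions are made up for illustration; the idea is simply to include a “sham AI” condition alongside a plain control and the working system, then compare the real system against the sham rather than against the control.

```python
# Hypothetical sketch: separating the placebo effect from a feature's real impact.
# The CSV file and column names ("condition", "task_score") are illustrative only.
import pandas as pd
from scipy import stats

df = pd.read_csv("study_results.csv")  # hypothetical per-participant results

# Three between-subjects conditions:
#   "control" - no AI mentioned, plain interface
#   "sham_ai" - told an AI assists them, but nothing actually adapts (placebo control)
#   "real_ai" - the adaptive feature is genuinely enabled
control = df[df["condition"] == "control"]["task_score"]
sham_ai = df[df["condition"] == "sham_ai"]["task_score"]
real_ai = df[df["condition"] == "real_ai"]["task_score"]

# Placebo effect: does merely *believing* an AI is helping change performance?
placebo_t, placebo_p = stats.ttest_ind(sham_ai, control)

# True effect of the feature, over and above expectation:
# compare the real system against the sham, not against the plain control.
feature_t, feature_p = stats.ttest_ind(real_ai, sham_ai)

print(f"Placebo (sham vs. control): t={placebo_t:.2f}, p={placebo_p:.3f}")
print(f"Feature (real vs. sham):    t={feature_t:.2f}, p={feature_p:.3f}")
```

The key design choice is the sham condition: it holds users’ expectations constant, so any remaining difference between the real and sham systems can be attributed to the feature itself rather than to the AI placebo effect.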
The placebo effect shapes people’s interactions with technology. By understanding and acknowledging its influence, we can create more authentic and satisfying user experiences in the ever-evolving world of AI and UX.