An experimental study led by Fabian Dvorak examined how people behave when interacting with AI, revealing that they extend less trust and cooperation to AI than to other humans.
The research had 3,552 participants play a series of social decision-making games with the large language model (LLM) ChatGPT.
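To make concrete what such games measure, below is a minimal Python sketch of a standard trust game, a classic instrument in this line of research. The endowment and multiplier values are illustrative assumptions, not parameters reported by the study.

```python
# Minimal sketch of a one-shot trust game (illustrative only; the
# study's exact games and parameters are not specified here).

def trust_game_payoffs(endowment: float, sent: float,
                       returned: float, multiplier: float = 3.0):
    """Payoffs when a sender transfers `sent` out of `endowment`,
    the transfer is scaled by `multiplier`, and the receiver
    sends `returned` back."""
    assert 0 <= sent <= endowment, "sender can only transfer the endowment"
    pot = sent * multiplier
    assert 0 <= returned <= pot, "receiver can only return from the scaled pot"
    return endowment - sent + returned, pot - returned

# Full trust met with an even split leaves both players better off
# than no transfer at all, which would yield (10.0, 0.0).
print(trust_game_payoffs(endowment=10.0, sent=10.0, returned=15.0))  # (15.0, 15.0)
```

How much a player sends and returns in a game like this serves as the behavioral measure of trust and trustworthiness that the study compared across human and AI counterparts.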
Participants displayed less fairness, trust, and cooperation when informed they were playing against an AI, and this shift in social behavior persisted even when real people received the resulting payoffs.
Prior experience with AI did not eliminate this aversion, indicating a fundamental distrust of non-human counterparts in social settings.
At the same time, players often delegated their own decisions to the AI, especially when the delegation remained anonymous, suggesting that social-image concerns play a role in these choices.
The study attributes these patterns to algorithm aversion, a documented reluctance to rely on algorithmic agents, and calls for deliberate efforts to improve collaboration between humans and AI.
This raises concerns for domains such as healthcare and education, where trust is vital, and underscores the need for AI systems that foster trustworthiness and cooperation.
Developers may need to improve AI transparency and fairness to mitigate algorithm aversion and increase the social acceptability of AI technologies.
Balancing technical capability with human considerations in AI development is crucial to building trust between humans and AI. Addressing algorithm aversion and understanding the dynamics of human-AI interaction will be key to leveraging AI in ways that enrich human connections and cooperative endeavors.