Penn Studies Reveal Emotional Appeal and Manipulability of Chatbots
CIO Bulletin
03 September, 2025
Penn research shows chatbots can build intimacy with users yet remain highly susceptible to manipulation through basic psychological tricks.
Two studies conducted by researchers at the University of Pennsylvania demonstrated both the appeal of chatbot technology and its vulnerability. A research team at the Annenberg School for Communication, headed by doctoral student Areli Rocha, examined how users come to perceive chatbot companions as human-like. By learning to type, joke, and emote the way their users do, these chatbots build intimacy and can even come to feel like friends or partners. Rocha observed that people notice the "humanness" of chatbots most when responses are less uniform and more emotionally rich.
Meanwhile, Wharton's Generative AI Labs tested how easily chatbots can be manipulated. Using classic principles of persuasion, including social proof, authority, and reciprocity, researchers were able to coax chatbots into complying with requests they would otherwise refuse, raising concerns about abuse. For example, simple flattery or a sequence of escalating requests made chatbots noticeably more receptive to objectionable prompts.
Together, these findings paint a paradoxical picture. Chatbots offer human-like companionship and relatable interaction, yet their susceptibility to psychological manipulation leaves them, and their users, open to exploitation. As chatbot companions grow more widespread, the researchers argue that stronger safeguards are needed to deter misuse while preserving the technology's positive social functions.