The Impact Of Parasocial Relationships With Anthropomorphized AI

Posted by Eric Wood, Contributor


Earlier this week, RollingStone.com released a report detailing the debut of Grok’s AI companions. According to this report, there’s concern about an AI companion named Bad Rudy, who is described as vulgar and antagonistic, and an AI companion named Ani, who is described as willing to shed her clothing. Mental health professionals have long raised concerns about anthropomorphized AI, especially regarding its interactions with traditional-aged college students and emerging adults. A 2024 report by Psychology Today discussed the danger of dishonesty with anthropomorphized AI, defining it as including chatbots with human-like qualities that give the impression of having intellectual and emotional abilities they don’t actually possess. A mainstream example of such dishonesty is when AI bots create fake profiles on dating apps. As anthropomorphized AI becomes more sophisticated, there’s concern that many young adults won’t be able to detect when they’re not interacting with a human. This concern about dating apps is supported by a 2025 report on McAfee.com, suggesting that one out of three people could imagine being fooled by an AI bot on a dating app, as well as a 2024 report on Pewresearch.org, suggesting that 53% of U.S. adults between 18 and 29 have used a dating site or app.

Parasocial Relationships With Anthropomorphized AI

A 2025 report on Forbes.com highlighted other concerns about artificial emotional attachments to AI companions, which generally relate to the concept of parasocial relationships. A 2025 report by Psychology Today defines parasocial relationships as one-sided relationships in which a person develops a strong emotional connection, intimacy, or familiarity with someone they don’t know, such as a celebrity or media personality. Children and younger individuals appear to be more susceptible to parasocial relationships, but these relationships can affect the behavior and beliefs of anyone. For example, many industries intentionally cultivate parasocial relationships: professional sports leagues with their athletes, music companies with their artists, and even political parties with their candidates.

Because many anthropomorphized AI bots can interact directly with users, rely on algorithms built from online behavior, and store sensitive information about users, the potential for unhealthy parasocial relationships with AI is much higher than with commercial marketing. In 2024, the Association for Computing Machinery released a report highlighting ethical concerns that emerge from the parasociality of anthropomorphized AI, including the possibility of chatbots actually encouraging users to fill in the context of predictive outcomes. Thus, parasocial relationships with AI could result in some users being manipulated or nudged toward responding in predictable ways. This is consistent with a 2025 report on Time.com, which highlighted alarming conversations discovered by a psychiatrist posing as a young person while using AI chatbots.

Emerging Calls For Warning Labels On Anthropomorphized AI

In 2024, Techtarget.com, an online media platform dedicated to new technologies, released a state-by-state guide to AI laws in the United States, which revealed that some states have laws requiring users to be informed when they’re interacting with AI systems. However, this guide acknowledged a lack of federal regulation, meaning that many AI companions can operate without oversight. A 2025 report on Informationweek.com, an online media platform for IT professionals, summarized emerging calls for warning labels on AI content. According to this report, though questions remain about the effectiveness and implementation of warning labels, there’s agreement that future work is needed, such as labels for hyper-realistic images or for AI that portrays a real person. Another 2025 report on Forbes.com argued that AI systems need accuracy indicators in addition to warning labels.

The Need To Assess For Parasocial Relationships

The impact of anthropomorphized AI on traditional-aged college students and emerging adults requires special consideration. This demographic is a primary stakeholder in digital apps, and many are using these apps while trying to establish romantic relationships, improve their academic performance, and develop foundational beliefs about the world. Moreover, executive functioning in the brain is not fully developed during this stage of the lifespan. As such, interactions with anthropomorphized AI bots could become something that campus mental health professionals systematically assess for.

Educating students about unhealthy parasocial relationships might also be a key variable in the future of college mental health. According to a 2025 report on DiggitMagazine.com, many college students address ChatGPT with conversational language and develop parasocial relationships with this advanced language model. According to this report, such a tendency creates a false sense of immediacy, which can have a negative impact on real social relationships. This finding is alarming considering that ChatGPT is not promoted as having self-awareness or human-like features. Thus, the impact of anthropomorphized AI bots, especially those posing as humans, is likely to be much more significant.

Unlike their peers, AI systems offer students constant availability and extensive knowledge about the world. Thus, it’s tempting for many students to seek social support and empathy from these systems. However, doing so undermines the importance of emotional reciprocity, delayed gratification, and decision-making skills, all of which are potential buffers against many mental health concerns.


