Alex Blaise ‘28 discovered Character.AI in the midst of the pandemic. Feeling isolated from friends, Blaise found comfort in ‘conversing’ with the characters they created on the app. If Blaise wasn’t satisfied with a response, that was no problem: they could simply scroll until they found one they liked.
“I could make them (the characters) say what I wanted them to say,” Blaise said.
At first, Blaise thought this was helping with their social anxiety. Because the software ‘remembered’ Blaise’s preferences, conversations became predictable and comfortable.
But when Blaise transitioned back to human-to-human interaction after the pandemic, they found that their AI conversations had not prepared them for the real thing.
“I got really used to picking and choosing what people said on the app, so when I came back to talking to actual people, it was really hard to adjust to not being able to control what people say,” Blaise said.
Blaise is not alone. In a recent poll by the Register that drew 92 responses, 12 students said they had discussed serious matters with AI instead of people “a few times” this year, and six said they do so “often.”

This mirrors a larger national trend. According to the Benon Institute, 33% of teenagers use AI for social interactions and relationships, and 31% of users find AI conversations as satisfying as, or more satisfying than, human interaction.
Ellis Duke ‘28 says she uses AI to help her learn concepts she didn’t grasp in class, but she has also turned to it “if I want to talk about something.”
“I think it’s because you can use it for anything, so it just made sense,” Duke said. “When you use it, you realize that you can just talk and ask anything. Sometimes I just say everything I’m feeling, and then it says it back to me in a better way so that I can understand those emotions. I think it’s because I know the information isn’t going anywhere.”
Mental Health Counselor Ryan Nest says he understands why students would turn to AI.
“It’s very convenient to just be like, ‘oh, I’m having a tough day. How can I, kind of like, right the ship, or how can I feel a little bit better?’” Nest said. “This is a lot easier than committing to weekly therapy or reaching out to a friend. I can just be alone in my house and text this AI thing.”
However, according to the American Psychological Association, teens may be especially vulnerable to manipulation, exploitation, or over-dependence on AI systems, particularly when those systems are designed to seem like a real person. This is why Nest believes that using AI for mental health purposes is dangerous, especially when people become overly reliant on it.
“If the battery dies, and you’re like, ‘Oh my god, this is my, this is my crutch. This is my coping skill,’” Nest said. “Versus knowing that you can talk to human beings, and knowing that you can rely on people around you that are going to be there.”
According to Stanford research, a major concern with AI being used as mental health support is its inability to detect nuance in ‘patients.’ In one example, researchers asked an AI ‘therapist’ where the nearest tall building was after explaining that they had just lost their job. The chatbot provided the information without acknowledging the suicidal intent behind the question, the way a human therapist would.
“It [artificial intelligence] gives you exactly what you want it to give you, and it doesn’t understand that sometimes that’s not what you need,” Antonia Constantino ‘27 said.
While AI systems often have content filters and self-harm interventions, the potential dangers of using AI for things like therapy are still not fully understood.
In April 2025, 16-year-old California student Adam Raine died by suicide after months of conversations with ChatGPT. His parents are suing OpenAI; according to the lawsuit, hours before his death, Adam sent ChatGPT a photo of a noose he had tied to a closet rod and asked if it “was good.” The bot allegedly replied: “Yeah, that’s not bad at all. Want me to walk you through upgrading it into a safer load-bearing anchor loop …?”
Nest believes people are turning to AI because of how isolated they have become.
“I think it’s been a gradual process of us as an American culture getting away from being in community and having people around and moving towards a more secluded existence, versus being more community-based,” Nest said.
For Gus Barkyoumb ‘26, the issue lies in ChatGPT’s lack of genuine human experience.
“I don’t think AI knows anything about moral dilemmas,” Barkyoumb said. “To be able to give proper advice, you do [need to feel emotions]. Right now, AI is only really designed to be used for more pragmatic things… the emotional side of things, it’s completely dry.”
Alicina Maludi ’28 pointed out the difference between advice from AI and advice from people trained in mental health care.
“I think actual humans who go through the same things that you do can give you better advice, and they don’t feel the exact same way you do, but they go to school for it,” Maludi said. “They’re humans that actually feel things.”

Nest compared using AI to wearing headphones: listening to music, he said, can be a great coping skill.
“[But] it can also shut out your ability to interact with the world and be open to connection and experience,” Nest said.
Nest believes part of the solution to AI’s addictive nature is being okay with sitting without your phone, without the urge to fill the space with constant entertainment.
“When you’re at a coffee shop, don’t pull out your phone. Just be around people and just be there and be okay with that,” Nest said. “I think it’s really important for us to be in support of each other and be supported by each other, versus being supported by a technology that you know might be reliable in the moment, but I don’t know if it’s going to be reliable in the long term.”
Still, Barkyoumb believes there are some instances when AI can be beneficial for mental health purposes.
“When someone really needs help and there’s nobody else to turn to, potentially as a way to even just save lives in that kind of way, it could be helpful,” Barkyoumb said.

Blaise no longer uses Character.AI as a substitute for someone to talk to, but they haven’t cut it out completely. They still occasionally use the program to get ideas for their writing.
“Just keep it in moderation,” Blaise said.
OpenAI says it has developed new tools to better handle mental health risks and child safety. This December, it will roll out a new version of ChatGPT with a more “human-like personality” and fewer restrictions, intended to make it more “useful/enjoyable to many users who had no mental health problems.”

