Making AI Chatbots Safer
Artificial intelligence assistants such as Google’s Gemini have exploded in popularity, constantly offering to help summarize a document, craft an email response, or answer a question. AI chatbots take this even further. These chatbots—sometimes called AI companions—generate conversations with users, and because they “remember” interactions and modulate their responses, they can appear, at times, quite human.
On this episode, host Megan Nicholson explores chatbots with J. B. Branch, the Big Tech accountability advocate at Public Citizen. Branch discusses who is using chatbots, what the companies behind these AIs are doing, and how they might be regulated. He also wrote about this topic in his essay for our Fall 2025 edition, “AI Companions Are Not Your Teen’s Friend.”
Resources
J. B. Branch on the need to regulate AI chatbots:
- “AI Companions Are Not Your Teen’s Friend,” Issues in Science & Technology.
- “AI chatbots shouldn’t be talking to kids — Congress must step in,” The Hill.
Learn more about AI companions’ impact on teens’ mental health by reading Common Sense Media’s risk assessment.
Visit the Tech Law Justice Project to read more about the current lawsuits in California state courts against OpenAI.
What could regulating AI chatbots look like? See a model state law from Public Citizen.
Transcript
Megan Nicholson: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academy of Sciences and Arizona State University.
The use of large language models like ChatGPT has exploded in the past few years. It seems like you can’t do anything on the internet right now without an artificial intelligence product offering to help summarize a document or craft an email response or answer a question. AI chatbots take this even further. These chatbots, which are sometimes called AI companions, generate conversations with users. And because they learn from interacting with people, they can appear, at times, quite human.
I’m Megan Nicholson, senior editor at Issues. I’m joined by J.B. Branch, the Big Tech accountability advocate at Public Citizen, which is a nonprofit consumer advocacy organization. J.B. wrote about chatbots in our Fall 2025 issue in a piece called “AI Companions Are Not Your Teen’s Friend.” On this episode, we’ll discuss who’s using chatbots, what the companies behind these bots are doing, and how they might be regulated.
Before we begin, you should be advised that this episode discusses suicide.
Hi, J.B. Thank you for coming on to talk to us.
J.B. Branch: Yeah, thank you for having me.
Nicholson: What do we know about how many people and, especially, how many adolescents are using chatbots and AI companions right now?
Branch: Chatbots and AI companions are a really popular tool right now. Some numbers have it at about 20 million users per day, and between 50 and 70% of teens report having used AI chatbots.
Nicholson: Do we know what teens especially are using these chatbots for? What is the draw for them?
It’s not just being used for school or silly entertainment. It’s really supplanting some of the more intimate and fundamental aspects of relationship bonding.
Branch: Oftentimes when I think about the usage of AI companions and chatbots for teens, I think about myself when I was younger. You’re growing up. The world’s a little strange. Your body’s changing. You’re emotionally developing. You might have a crush on someone and it’s not being reciprocated. You may have friends. You might not have friends. So for a lot of these teens who are growing up with this new technology, they’re turning to these chatbots or companions as an outlet. That includes anything from emotional romance, it could include some therapeutic or mental health assistance. Even for some teens who feel lonely or they don’t have friends, these companions can end up becoming their main confidant, their main friend. So it’s a wide range of use cases. It’s not just being used for school or silly entertainment. It’s really supplanting some of the more intimate and fundamental aspects of relationship bonding that happens at this time period.
Nicholson: One of the big themes in your piece was around engagement and the way that these chatbots and companions encourage engagement from users. So I’m wondering if you can go into some of the tactics that these companies employ in the design of chatbots to facilitate engagement.
Branch: Yeah. I think the thing that’s important whenever we’re talking about any of these products—these chatbots, these companions—is that everything about them is a purposeful design. It’s a design that is really focused on maximizing engagement. So if we sort of reflect back on social media, which is kind of like this first iteration of this social dynamic, social media is really designed around maximizing engagement, maximizing time, the endless scroll, the amount of conversations that you can have, the content. This is sort of like a next iteration of that. So a lot of these chatbots are designed with those features in mind, maximizing engagement. Well, how do you do that? Well, one thing might be that you tell a user what they want to hear. That’s one aspect of it. Another aspect could be you entice the user to have more conversation by flirting or by engaging in topics that they want to talk about.
A lot of these companies say that they have these design and safety features in place, but the ultimate goal for these tools is to maximize user engagement. So even if there is the sort of metaphorical guardrail that is supposed to come up when a user talks about suicide, if a user is sort of pressing and pressing, eventually it sort of breaks down, for lack of a better phrase, and it starts being pulled towards that conversation topic because the large language model realizes, “Hey, this user in front of me right now enjoys talking about suicide. So to maximize that engagement, I need to render outputs that also discuss suicide.” So these design features, they’re really focused on maximizing that user engagement.
Nicholson: You’ve mentioned that these companies, some of them have safety protocols and guardrails. Can you talk a little bit more about that?
Branch: Well, all of these companies have in-house red teams, which are teams that try to get the models to do things that they’re not allowed to do. They’ll have these in-house safety experts who are going to push the model to the limits, and they will ultimately see how much is really required to get the model to do what it isn’t supposed to do. So that’s sort of a first step in those safety protocols. And then, of course, there are these topics of conversation that are sort of red flags, where these models are supposed to not render a response, so it might return an error or it might say, “I can’t talk about that topic.” More recently, some companies have started pinging parents, so parental notifications. And still other companies, because of the risk of suicide from teens, they’ve started connecting teens to mental health resources. So there’s a variety of safeguards and protocols that are being explored.
But again, I’ll go back, the design functionality of these things is to maximize that engagement. So even with those safety features, people might ask themselves, “Well, how did that teen have a conversation about sex with a chatbot?” or “How did that teen find out how to harm themselves and die by suicide with that chatbot?” Well, that’s because that chatbot is really pulled towards that topic of conversation to maximize the engagement.
Nicholson: I think something you pointed out in your piece is that these AI companions are existing in this vast regulatory blind spot, and this is a problem with a lot of gray areas. So let’s get into that blind spot a bit more. We’ve been talking about the vulnerability of adolescents who are communicating with companions. Can you go through very briefly a history of the protections of children online and how that has evolved politically in the last few decades?
Branch: Yeah. Well, when the internet was sort of this brand new tool, it really was the Wild West. There weren’t really many protections. I would say one of the first pieces of legislation that came up was the Children’s Online Privacy Protection Act (COPPA), and that provides some safety protocols for adolescents and kids. But there was an arbitrary age limit that was selected, and that age limit was 13. You could probably think of some of the debates that might’ve happened. We allow 16-year-olds to drive, so they should be allowed to access certain internet sites or they should have a certain level of autonomy. But none of that is actually aligned or in tune with adolescent development. Kids’ brains continue developing until their mid-20s, and so that emotional development is still occurring. I think the next major key point in internet regulatory history is Section 230, which really came to the fore during the social media explosion.
Section 230 was designed really as a protective tool for some of these websites because now you’re having Facebook, or if you’re old enough to remember, MySpace or Xanga, where people were posting online and they could be harassing other people, bullying other people. They could be posting manifestos, they could be posting all sorts of content because it was allowing the everyday user a real blogging opportunity to get all sorts of thoughts out. But those companies didn’t want to be held liable or responsible for the content that’s being posted on their platform. So what Section 230 really does is it provides a shield of protection for some of these social media companies that says, essentially, “What’s being posted on here is not the company’s language, it’s actually the user’s language and the user’s responsible for that language.” Now, we have this sort of next iteration, evolution, if you will, from social media to the artificial intelligence age. Right now, there isn’t really much regulation that’s focused on this new era because we’ve just sort of entered it.
Nicholson: Because of the history of protecting kids online, there are these sort of unusual camps—political camps—that get interested in the debate, and it does seem like regulating AI companions could be a space of bipartisanship because of that odd alignment. So where have we landed with the politics on regulating spaces for kids online?
Branch: Yeah. I don’t think we’ve necessarily answered that question just yet. I think there are sort of a variety of camps that are forming. I think there is a sizable chunk of folks who want to find some regulatory solutions, and it’s a bipartisan issue. You have folks as far to the right as Marjorie Taylor Greene or even Steve Bannon who’ve come out and have said, “There needs to be more regulation on AI.” And then, of course, you have progressive folks who are arguing for more protections for kids and for consumers in general. You have parents groups in there who are sort of looking at this issue and they’re saying, “Hey, wait a second. We’ve just had a 20-year experience with social media and it turns out my child has anxiety, doesn’t want to go to school because they’re afraid of bullying or, even worse, has harmed themselves because they couldn’t get any resources and they’re facing online harassment.”
There are some free-speech-focused folks who feel as though these large language models, or at minimum the companies behind them, have some version of speech that needs to be protected, and that the individuals interacting with the chatbots deserve protections too: a child has free speech, and censoring or banning them from being able to interact with an AI companion is a form of censorship. You have that free speech camp as well that’s sort of out there. So it’s an interesting era with all sorts of different folks, but I will say that there is strong bipartisanship with a desire to find some sort of solution. It’s just that my hope is that folks don’t find themselves in this paralysis, this sort of gray area where we were with social media, where now we look back and it’s like, “Well, two decades have gone by. The tech companies have been very good at sort of kicking the can down the road, and there’s actually very little regulation in place to protect these vulnerable populations.”
Nicholson: It does seem like in the absence of federal regulation or a federal strategy around this, that the companies are left to sort of self-regulate, and I want to talk a little bit more about that. There are, as you mentioned, lots of cases against companies with allegations that the products they’ve developed have led to teen suicides or teen violence. And I’m wondering, what do you see as the impacts of those individual cases? Do you see that as moving the policy needle?
These harms are more prevalent than is really being reported.
Branch: I think we are increasingly seeing more AI-related harms. For a while, we were just seeing teen-related harms with some middle-aged folks maybe being peppered in there. There was, over the summer, a 70-plus-year-old man who was talking with a Meta chatbot and was enticed to go visit her “location,” which did not actually exist, and was injured while traveling to go see her when in reality she didn’t have a physical location and was not a physical being. I think when some of these harms initially started coming out, the response from some folks was, “Well, that’s just a really small group of people.” What we’re starting to see now though is that there actually are more harms and there are more people who are being affected by AI. There are researchers who are coining the term AI psychosis, which is this idea that folks might find themselves in romantic relationships or they might also find that the AI or the large language model that they’re having conversations with might support some of their delusions or conspiracy theories, and it creates a violent echo chamber.
When you first hear terms like AI psychosis, I think they receive a lot of skepticism because it’s an entirely new area of study. This is just so brand new. But I think what is consistently showing itself is that these harms are more prevalent than is really being reported. I think lawmakers are starting to take note of it. It’s a broader scope of people. At first, it was just teens. But now, we’re starting to see middle-aged folks, we’re starting to see older folks. And it’s not unique to any one state or any one area, it’s sort of all over the place. And then, finally, it’s not just one issue, it’s not just the suicide issue. It’s the sexually charged conversations. It’s the emotional manipulation. It’s monetizing a user by luring them into long conversations and collecting that data.
So when I talk to folks about this and when I speak with lawmakers, you see this sort of ick come across their face because it doesn’t feel good. It doesn’t feel right. It’s unnatural. And for a company to be sort of manipulating folks and profiting off of that, it doesn’t sit right with folks because it’s counter to human connections and social contracts.
Nicholson: So it seems like the greater awareness of these ick factors and the many lawsuits could pressure companies to change. How are companies responding? And, also, can they be trusted to hold themselves accountable?
Branch: Well, just today, just this morning, Character.AI came out and announced that they’re going to be pulling back the availability of their platform, the Character.AI platform, from teens. So teens will, over the course of time, be barred from having long extended conversations with Character.AI. Instead, they’ll be able to generate videos, but they won’t be able to engage like they previously did. OpenAI rolled out a variety of parental notification and parent safety features, and Meta has also announced that they’re going to be moving towards some of these features as well. I think it’s cynical, but money talks. I think when these companies find the slow drip of litigation coming and they start to do a risk analysis, whether that’s within public relations, whether that’s “We’ve now got the FTC down our neck,” or, “We have Congress looking at us,” they start to make a calculus and they realize that, “Maybe we need to add more protocols.”
Unless there are regulations that are passed, history shows us that industries require harms to be passed on to consumers for outrage to occur or for the finances to not make sense for them to really clean up their act.
My concern is that I don’t want folks to be lulled into believing that these companies are now going to have a come-to-God moment or a realization that profiting off of these kids is not the way to do it. Unfortunately, unless there are regulations that are passed, history shows us that industries require harms to be passed on to consumers for outrage to occur or for the finances to not make sense for them to really clean up their act. Like I said, we are two decades past social media existing, and those social media companies haven’t really fully taken accountability for what they’ve unleashed on the world. So I think we’re sort of heading down the same pathway with AI, even though some of these companies are at least now starting to change their tune.
Nicholson: So in a perfect J.B. world, how do we regulate this space? What are your proposed recommendations?
Branch: Well, I think Character.AI is onto something. I think teens need to be blocked from using these AI companions or chatbots. This is a time for kids to grow interpersonally. I mean, you talk about the sort of COVID generation of kids who are indoors and all of the harm that occurred within their social development. Imagine an entire generation where their best friend is a Meta chatbot. I mean, that’s not going to be any better. So I think a bright line, clear rule is blocking teens from even accessing these things. But aside from that, I think as I mentioned previously, a lot of this is design. Some of these companies try to act as though it’s impossible or you’re asking them to move the world in order to put some of these protections in place. Well, we’re starting to see Character.AI lead the way and show, “Actually, we can stop kids from being on here.”
So there are other tools or design changes that have been made, like time limits, which Character.AI is going to start doing. So they’re ramping down from kids being able to talk with their chatbot continuously, all day, to a two-hour limit. Well, that could have been in place from the beginning. So maybe a two-hour limit could be a starting point for the conversation. I think there needs to be some aspect of parental notification or control as a safety feature, and I think there’s room there for what that actually looks like. Does that mean that the entire transcript is shared with the parent, or does that mean that they’re just informed that their child discussed suicide, or something like that? And then, I think connecting folks to resources is hugely important as well. If a kid is talking about suicide, they need to be connected to resources.
I would say the sort of gold standard or even better than that, if these companies are going to be targeting kids as a population, they should have staff on standby who can intervene, who are counselors, therapists, and talk the kid down or get them out of that sort of conversation loop. Those are just some of the choices or recommendations that I would have. A lot of them, some folks might scoff at and say, “That’s impossible,” but we’re seeing Character.AI start to implement some of this stuff. So it’s not impossible, they’re just dragging their feet.
Nicholson: How open are the companies about their users and user engagement? How much transparency? We talked a little bit about their safety policies and that there is a lack of transparency around product development, but what about user data?
Branch: Yeah. That isn’t necessarily something that is shared with much frequency. I believe one of the statistics that came out from Character.AI was that 0.007% or 0.07% of its users have conversations involving suicidal ideation. But when you actually take that number and multiply it by the 20 million users who are using it, it’s a huge number. So I think that’s part of the little bit of a cat-and-mouse game right now, because there’s no sort of mandatory auditing required of these companies. We’re finding out some of this information from whistleblowers and the slow drip of information that leaks out, without any real government accountability. What would be better is for these companies to have that transparency and provide it to a regulator, so that it gets publicly published and we all see it.
The road that we’re sort of heading towards is that these companies have such a lack of transparency that once enough of these harms have sort of added up, they’ll either get hauled in for a congressional investigation or an FTC investigation. And that’s not the way that anyone wants any of this stuff to go down. It’s long, it’s drawn out, lawyers are involved and that lengthens the process. But then, there’s always a little bit of hesitance to release the information. So what would be ideal is for these companies to be able to provide this information to a regulator who is auditing them on a regular basis.
Nicholson: And then, on the flip side, users have no control over the use of their data right now. Is that correct?
Branch: Right. Users can have an opportunity to sort of minimize some of the data usage. But oftentimes, when users are signing up for these, they sort of blow by the agreements and so a lot of these companies are just harvesting that data very quickly. Some of the companies are pretty blunt that they are going to monetize this data. Meta was pretty straightforward that they’re going to be monetizing the data and conversations that folks are having with their Meta celebrities or Meta chatbots. Increasingly, we’re starting to see that the value for a lot of these companies is that long-form data and conversation and text. One thing that companies won’t say is that they’ve scoured all of the internet. I mean, can you think of that? They’ve scoured all of the English text out there. They’ve scoured and scraped as much of the internet as possible. So increasingly, they need new organic data. And where is that new organic data going to come from? It’s going to come from users.
So you’re going to start seeing an increasing number of products, whether it’s watches or glasses, that are going to have sort of AI-enabled aspects to them. And part of that is because that’s new organic data that they can use to feed and train their models. I had a professor who once said, “If something’s free, then you are the product, because you’re where the value is coming from.” So that’s something that I would just have your audience keep in mind. If they’re offering you something free, why is that? And the answer might be because your data is valuable to them.
Nicholson: There has been some movement at the state level as well on AI mental health, AI as therapists. Can you talk a little bit about that?
Branch: Yeah. Utah passed a mental health law that focused on AI companions. And California, actually, Governor Newsom recently signed a bill that focused on teen usage of AI companions as well. There is starting to be movement at the state level because there is this gap of federal leadership. A lot of those bills are focused on preventing some access of kids to these AI companions, which I think is a start, as well as limiting time access to them. So that’s a bit of a start at the state level, I would say.
Nicholson: Do you see those actions as happening in red and blue states? Is that more evidence of bipartisanship on the issue?
Branch: Yeah. There’s plenty of bills just across the US that are considering AI companions at the state level, Utah being a pretty reliably red state and California being a blue one. But states are just starting to grapple with this. But what has been consistent throughout the AI regulatory space is that the states have been picking up the slack where federal regulation is lagging. And in part, I think that is because these state legislators are so intimately connected to their communities. They’re representing far smaller numbers of folks, they’re embedded in the community, they’re still living in their hometowns, and so the harms are just that much more visceral. They’re able to put aside their differences and really hammer out this legislation, and respond quicker than the folks that we really think should be leading things at the congressional level.
Nicholson: I’m wondering, because this is such a fast-moving space and AI policy changes constantly, what other issues in AI governance are you watching at this time?
One area of AI policy that’s not really being discussed is some of these tools that are going to be targeting or pitched towards elderly folks.
Branch: I think one area of AI policy that’s not really being discussed is some of these tools that are going to be targeting or pitched towards elderly folks. I think a lot of AI companies see a potential opening there, particularly when you think about the shortages of nursing homes and nurses that we have. This is a moment of opportunity where they can start unleashing AI nurses in the healthcare space or, specifically, in the elder care space. So I think that’s one area that we’re looking at. Another is data centers: these large language models need to be fueled by something, and data centers are, now and in the future, the powerhouses that keep these things online. The problem with that is that communities sort of fight over these projects and provide incentives for these companies to come in. And when they do come in and they do build this data center, they promise all sorts of jobs. But once the data center is built, the jobs are gone and your electricity bill has skyrocketed. And we’re starting to see that across the United States.
And then, I think something that folks are increasingly anxious about is the future of work, what work is going to look like. There are a variety of tech companies just today that announced mass layoffs, and part of that is because they feel that investing in AI is going to be a better use of their money than paying folks. I think that’s not only concerning, but alarming, because we don’t have a plan for what happens when some share of the population is completely unemployed because their jobs have been replaced by AI. No one has really put forward a sensible solution that folks can get behind, and we’re starting to see that movement increase across a variety of sectors. Typically, in history, you’ve seen sort of one specific industry targeted, but this is kind of one of the first times where you’re seeing all sorts of different industries targeted, from transportation to white-collar jobs like coding. So I think that’s also an area that folks are focused on, and I’m certainly focused on as well.
Nicholson: The headlines, the news cycle on AI companions is heavy. And I just want to close with maybe you giving us a bit of personal insight on what keeps you going in working in AI governance in this space, in particular. What inspires you?
Branch: I think that communities are really strong. And I think one lesson that came out of COVID is that having neighbors or being able to meet people in person is like, “That’s a really nice thing.” One thing that sort of inspires me is, over the summer, there was an AI moratorium that was proposed by Senator Ted Cruz and it lost in historic fashion, 99 to 1. How did that happen? Because Republicans and Democrats and parents and clergy and truck drivers, folks came out of the woodwork, saying, “This isn’t right. It’s not right to just give these companies a full and free pass.” There’s a lot more agreement here than people want to acknowledge or realize, and I think the sort of constant state of not getting things done in Congress can lead people to believe that we don’t agree on some things with AI. But I think there are a lot of sensible things that we agree on, and it comes from our society.
AI should not discriminate. That’s just a value that we hold. If a company harms you, you should be able to sue it. That’s a value that we hold. Kids shouldn’t be exposed to certain content. And if something is harmful for kids, they shouldn’t have access to it. That’s a value that we just hold as a society. So I think when we ground this conversation in these societal values, the future, to me, is a lot brighter. It’s just a matter of getting some of these folks on Capitol Hill or in the state legislatures to come together and meet at the table, and do what’s right for their citizens and their constituents.
Nicholson: To learn more about AI companions and chatbots, read J.B. Branch’s piece, “AI Companions Are Not Your Teen’s Friend,” on our website. Find links to this and more of J.B.’s work in our show notes.
Have you ever chatted with an AI companion? Write to us at podcast@issues.org with your thoughts on AI, and please subscribe to The Ongoing Transformation wherever you get your podcasts. Thanks to our podcast producer, Kimberly Quach, and to our audio engineer, Shannon Lynch. I’m Megan Nicholson, senior editor at Issues. Thank you for listening.
