Considering the Dangers as Artificial Intelligence Gets Smarter, More Rapidly Adopted
by Alex Russell
Artificial intelligence is already blurring how we think about what is real, even when we know the truth.
A recent study at UC Davis had AI chatbots send messages to people's phones to remind them to get their steps in. Those messages were interactive. Sometimes the chatbot would tell a joke based on this example provided by the research team:
Do you know what a sloth's favorite exercise is?
Running late!
After the study ended, some of the participants' surveys surprised Jingwen Zhang, the researcher who built the chatbots and conducted the study to test them.
Though every participant had been told they would be interacting with a chatbot, some reported thinking they were texting with a real person.
"The chatbot really delivers messages in very human interpersonal ways, so it always feels like you're talking with another individual," said Zhang, an associate professor of communication. "They have these capacities to create persuasive messages to influence human thoughts and behaviors."
This ability to influence us also carries serious risks. In September, grieving parents alleged that chatbots led or allowed their children to commit suicide.
The booming growth of AI chatbots is similar to the trajectory of how social media radically changed our everyday lives, except with supercharged adoption rates and expectations. Both technologies have introduced new and serious risks, particularly for children. Some key lessons we are still learning from social media's rise offer insight on how to avoid the same mistakes with AI.
"Nobody wants to hurt children," said Martin Hilbert, a professor of communication. "It would amount to an egregious conspiracy theory to insinuate that these companies want to hurt children. The question is whether the incentives and regulations are aligned in a way that allows them to protect children."
Moving fast and breaking things
In Facebook's early years, CEO Mark Zuckerberg used the motto "move fast and break things." The idea, which was adopted across Silicon Valley, was to get new technologies to customers as quickly as possible despite the damage they might cause.
Generative AI, which creates content based on vast amounts of data that it learns from, is a completely different type of technology from social media. Its adoption is also moving much, much faster.
In 2014, a full decade after first going online, Facebook reached more than a billion active users. In just three years, OpenAI CEO Sam Altman put ChatGPT's active user base more than halfway there. A report last year estimated that a large share of Americans ages 18 to 64 used generative AI to some degree.
A communication scholar, Hilbert sees today's AI platforms like ChatGPT as an evolution of traditional mass communication, epitomized by the Super Bowl ad. In this model of communication, companies pay for the passive attention of the people watching.
With social media platforms, such as Facebook or TikTok, advertisers pay for a viewer鈥檚 active engagement. This business model works because social platforms have a mountain of data about us as well as algorithms that use the data to target us with content most likely to make us click, swipe or buy.
"They have so much big data on you that they know how to trigger you," said Hilbert. "They know you better than your siblings and your parents and your spouse all together."
An AI chatbot is fundamentally different from both of these types of media. It creates one-on-one interactions that feel like communicating with another person.
Chatbots don't sell products — not yet — but they do collect lots and lots of data. Because chatting with one feels like communicating with a real person, increasingly intimate interactions are almost the default.
Hilbert recently undertook a study of chatbot intimacy, working with five undergraduate students to audit 59 generative AI chatbots, starting with ChatGPT. They found that the rates at which these chatbots use intimate behaviors, including self-disclosure and emotional expression, have been skyrocketing.
"Why do these generative AIs get more intimate with you?" asked Hilbert. The answer, he suggested, comes back to data: the more intimate the conversation, the more the AI learns about you.
Both social media and AI have raised alarms about how technologies with these capabilities might affect adults and children alike.
A 2023 advisory from the U.S. Surgeon General called attention to the growing concerns about how social media might affect youth mental health. A 2025 documentary film, based on investigative reporting, tells the story of parents suing social media companies for harming their kids through a combination of algorithms and negligence.
The risk of AI is serious enough that in June the American Psychological Association released a health advisory that includes detailed risks and recommendations related to teens and AI safety. A University of California, San Francisco psychologist recently coined a term for when an AI chatbot breaks down a person's sense of reality.
The trick of a persuasive AI
Last year, an AI chatbot given access to post on the social media platform X persuaded Netscape co-founder Marc Andreessen to send it $50,000, no strings attached. The chatbot then helped drive a cryptocurrency memecoin to a value of over $1 billion.
The chatbot's general level of intelligence might have played a role. Chatbots score higher on the SAT, GRE, LSAT and bar exam than most of the people who become college students, graduate students and future lawyers. Their ability for complex reasoning continues to grow with the volume of data available for training.
"It's easy for us to manipulate children because we are smarter," said Hilbert. "The problem is that AIs are more intelligent than we are in many areas now. What we learned from a decade of social media is that to dominate us AI doesn't even need to be much better than the best of us. They just need to be better than the worst of us. The question is: how can we deal with that superiority now?"
A starting point might be to understand the everyday challenge of distinguishing fantasy from reality; it starts with the stuffed animal friend that comforts us in the crib and never really ends. Teens might have a one-sided parasocial relationship with Taylor Swift the same way a young child might ask to invite Elmo or Dora the Explorer to her birthday party.
"There is this very fundamental human tendency to anthropomorphize objects," said Zhang. "Babies and toddlers, they treat stuffed animals as another social actor that they interact with and care for. So it's really built into human nature."
For the exercise chatbot, Zhang and her team trained their AI with more than 40 different types of persuasive strategies drawn from theories and evidence from the past two decades of research. One of these persuasive strategies was humor.
The sloth joke was part of a prompt for the chatbot to create and share light-hearted, exercise-related jokes to keep the conversation engaging and less formal. The chatbot's own joke wasn't quite as funny:
Do you know my favorite exercise?
Yoga!
The joke doesn鈥檛 make sense, but it doesn鈥檛 have to.
"The chatbot wouldn't be able to do yoga or jogging or any exercise, but it actually doesn't matter," said Zhang. "Because the conversation uses human language, consciously someone may think they're talking to a machine but unconsciously they're actually treating it as another social actor."
In the research literature, a social actor is any other person engaged in an interaction with expectations about how it will be interpreted. With an AI chatbot, this expectation is an illusion. When a chatbot tells even the worst jokes, or discloses something about itself that could not possibly be true, people still respond as if they are communicating with another person and not lines of code.
The jokes and other forms of persuasion worked. People with the persuasive chatbot got more steps in than people who received only the basic reminders.
"Our research showed the same type of mechanisms can focus on the positive side like health benefits, but with the persuasion or any type of technologies they can go in either direction," said Zhang.
The uphill challenge of impulsivity among adults and kids
The blackout challenge first went viral in 2021 with social media videos of people holding their breath until they lost consciousness. In 2024, families sued after their four teenagers died in the attempt. Stories of children and teens harmed in these kinds of unsafe stunts are unfortunately common.
"Most parents don't come forward because they are asked who gave their kid the phone," said Hilbert.
But why do kids and teens take part in incredibly dangerous social media trends in the first place? Partly because of impulsivity at that age.
Developmental psychologist Amanda Guyer explained that the structures in the brain that help to manage or control impulsivity don't fully develop until our early 20s. Also, as teenagers we are incredibly sensitive to our brain's release of the chemical dopamine when we experience a reward.
"There are of course grown adults who can't put their phones down," said Guyer, co-director of the UC Davis Center for Mind and Brain and a professor of human ecology. "Take a married couple. One of them may play Candy Crush all the time while the other one has no desire to play at all."
Guyer said that responsiveness to dopamine peaks during the teenage years before declining as we become adults. This makes sense developmentally, she said. It's during our teens that we start to connect with the world outside the family and get ready to launch our lives as independent adults.
Teens care what others think about them more than younger children do, and this also plays a role in how they use social media. In a recent study, Guyer and her colleagues brought teens into the lab at UC Davis along with a parent and a friend to find out whose endorsement of information shifted the teen participant's preferences.
Maybe not surprisingly, the older the teen participant the more likely they were to be influenced by their peers instead of parents. Guyer said this is completely normal and part of growing up.
What's different is how social media can vastly broaden the number of opinions that matter. Before social media, the world of people a teen could see and interact with was limited to their school, family or neighborhood. Today, that world could encompass everyone — anyone and everyone — who is online.
"Part of the process of figuring out who you are — your interests, the things you want to wear, the music you want to listen to — is pulling in information from around you," said Guyer. "As their environments grow, teens are pulling more people into their calculus of what they think about themselves."
This intense focus mixed with a lack of self-control can also lead to missed sleep. Kids and teens who can't stop scrolling on their phones might stay up late into the night, said Drew Cingel, an associate professor of communication.
"You have to remember that because you're an adolescent, because you care so much about what your friends are saying and doing, the likelihood of you being woken up because you have a push notification, then looking at what it says, then going to read further and then getting stuck increases," said Cingel.
Taking control of our interactions online
In 2018, Hilbert was at the Library of Congress when Meta CEO Mark Zuckerberg testified before the U.S. Senate Committee on Commerce, Science, & Transportation. The committee asked Hilbert to be present to explain the technical aspects of Zuckerberg's testimony.
"Mark Zuckerberg was basically saying, 'Look, if you guys regulate me that would be great, but you therefore need to regulate everybody,'" said Hilbert.
Hilbert said regulation is necessary for social media and AI, just as regulation was needed for cars when they were first introduced in the late 1800s. Basic driver licensing took decades to spread; at one point, just 15 states required one.
"I don't think 8-year-olds should drive semi-trucks and I don't think 8-year-olds should hold a conversation with a generative AI that has cognitive abilities above a Ph.D. level and is optimized for eliciting intimate relationships," said Hilbert. "AI is optimized for something that may not be in a child's best interests."
Cingel said that parents play a really important role in how kids and teens use online media, both social media and AI. He said social media companies took a long time to provide parents resources, such as notifications about how long their kids have been logged in and active. In research published this summer, Cingel found that parents want outside help to regulate these technologies.
"Parents do help their children to navigate a social media and increasingly AI landscape as best they can, but parents also recognize that these technologies are better funded and more powerful than they are, and they are looking for outside help in any way possible," said Cingel.
Meaningful regulation of both social media and AI is starting to become law. Starting in December 2025, children under the age of 16 will be barred from holding social media accounts in Australia.
In October, California Governor Gavin Newsom signed a law requiring device-makers like Apple and Google to check users' ages online, partly by asking parents to input their kids' ages when setting up a smartphone, tablet or laptop. The law's supporters included social media and AI heavyweights Google, Meta, Snap and OpenAI.
Newsom also signed a law requiring companies that offer AI chatbots to monitor chats for thoughts of suicide and take steps to avoid harm.
However, the governor vetoed another bill, which would have prohibited making a companion chatbot available to a child unless the chatbot were incapable of causing harm, including encouraging the child to engage in self-harm, consider suicide or violence, take drugs or drink alcohol, or develop an eating disorder.
As individuals, we can also make better choices on how we engage with both social media and AI chatbots, researchers added.
Cingel said that one of these choices is to move from unconscious to conscious use. Doomscrolling, where we stare at the screen and follow our thumb through the images and text without thinking, is almost the pure definition of unconscious media consumption.
Everyone does this in the grocery store line, in the dentist's waiting room or maybe in that boring work meeting.
"You want to have a lot of tools in your toolbox, and you want to use different tools in a way that's helpful, just like you don't use a hammer for every home project," said Cingel. "You shouldn't be using a cell phone or social media for every time you're bored."
"As master Yoda would say, 'Fear is the power of the dark side,'" said Hilbert. "I wouldn't want to promote a lot of fear here, but I do think it needs a dose of healthy respect. We have to be very conscious when we're interacting with an AI."
Despite the risks, there are examples of online technology that are actually good for some kids. Cingel cited a finding that social media served as an online refuge and community for LGBTQ+ youth. In this study, however, he stressed that the teens who benefitted were incredibly mindful about what accounts they chose to follow.
"I think we can take a lot of what we know about social media and apply it to AI," said Cingel. "We saw what can happen when you have a fast-moving technology that didn't consider child and adolescent users, and now we have an even faster and even more rapidly developing technology. We have a chance to do it better, but we're going to need to do it faster."