Artificial intelligence (AI) is trickling into our everyday lives, changing the way we work, communicate and enjoy entertainment. And not just our lives: Teens are embracing AI tools at an accelerated pace, much as they were early adopters of social media.
In fact, a recent report by Common Sense Media shows 7 out of 10 teenagers have used at least one type of generative AI, especially chatbots and AI-supported search. Perhaps more alarming, only 37 percent of the adults in teens’ lives, from parents to teachers, were aware they were doing so. And almost half of parents say they haven’t talked with their kids about responsible use of generative AI.
“Kids are increasingly using AI-driven tools on social media, to help with schoolwork or as a companion to stave off boredom,” said Michael Redovian, MD, a child and adolescent psychiatrist in Akron Children’s Lois and John Orr Family Behavioral Health Center. “The research shows teens understand the potential of AI, but we need to make sure they also grasp the dangers.”
He encourages parents to talk with their kids about responsible AI use and offers 4 talking points to get started. But first things first: we as parents need to understand how kids are using AI, so we can discuss the good and the bad of this new technology taking the country by storm.
Emphasize AI is a tool, not a replacement for human work.
Kids are using chatbots like ChatGPT and Google Gemini, web interfaces that mimic human conversation, to ask questions, generate images, compose written content and more. These tools can help with schoolwork by brainstorming topic ideas, supporting research or proofreading papers, but they are also capable of drafting an entire research paper or a college application’s personal statement.
Emphasize to kids that chatbots are a tool to assist with schoolwork and aren’t meant to take the place of their own creativity and work. Help them understand the concept of original work, and remind them of the possible consequences of plagiarism or of using the platform to do the work for them.
“Try using the platforms together to discuss how they work and responsible use of these tools,” said Dr. Redovian. “Show kids how these tools are meant to be a starting point to boost their own creative thinking or to check their grammar or polish a paper.”
Discuss credible sources.
Kids have likely encountered AI-generated results on search engines like Google. The AI-generated summary at the top of the results page can be a helpful overview of a research topic, but it also can be misleading.
“AI models can pull false information, and the tool relies on user feedback to tell the difference,” said Dr. Redovian. “Kids still need to double-check the facts it recommends against credible websites to make sure it’s not false information.”
He encourages parents to discuss what makes a source legitimate. Explain that credible sources are written by people or organizations with recognized expertise on the subject and without bias. Teach kids to fact-check the AI summary by reviewing the supplied sources and links to make sure they’re reliable.
Encourage kids to spot fake or biased content.
Chatbots can be used to create false or distorted images or videos, for better or worse. Kids are using them to enhance their appearance in photos or to alter a story for social sharing. But manipulated content kids come across online can also be used to spread bias, bully or threaten them.
Just as you teach kids to question what they see in the news or read online, it’s important to help them think critically about AI. Challenge them to look for signs of fake or biased content when browsing online. Teach them to develop a habit of fact-checking and determining the credibility of the sender.
To spot fakes, Dr. Redovian suggested paying attention to minute details. “Look for specific features like fingers, text and small details on clothing to tell if something’s off,” he said. “Keep in the back of your mind the things you’re seeing online could be fake.”
Discuss AI privacy and security.
AI tools work off the huge amounts of data they collect about us, often without us realizing it.
While conversing with smart speakers like Amazon’s Alexa or using character.ai, kids may unknowingly share personal information like their name, age, interests or location. Whatever information these models are fed, they can retain.
“If kids are interacting with AI, they’re leaving behind a digital footprint,” warned Dr. Redovian. “When using AI, kids need to maintain security and be mindful internet citizens.”
In addition, platforms like character.ai, which kids use to create their own characters and personalities or to chat with characters created by others, can include inappropriate content or promote negative habits, such as self-harm.
Teach kids to take the same precautions with AI tools as they do with social media. Discuss privacy and online safety habits, and remind kids these tools aren’t the same as live conversations with a trusted friend.
“While we’re all still learning the benefits and risks AI poses to us, keeping an open line of communication better prepares kids to use AI responsibly and recognize its pitfalls,” said Dr. Redovian.
If you think your child is struggling significantly with AI or social media use, talk to one of our pediatricians. If necessary, they can refer your child to Akron Children’s Lois and John Orr Family Behavioral Health Center for mental health services.