Discord unveils new parental controls to monitor teens

  • Family Center gives parents insight into how their teens are using the app
  • App changes made due to child predators using it for sextortion, abductions
  • Discord: Goal is “to help foster productive dialogue” about safer habits

This picture taken on January 22, 2021 in Rennes, western France, shows a smartphone screen featuring messaging service applications WhatsApp, Signal, Telegram, Viber, Discord and Olvid. (Photo by DAMIEN MEYER/AFP via Getty Images)

(NewsNation) — Discord unveiled a new tool called Family Center that will allow parents and guardians to better understand who their children are communicating with on the platform and monitor their activity.

The tool gives parents and guardians the option to connect to their teens’ accounts to see who they’ve added as friends, which chats they’ve joined or participated in, and whom they’ve messaged or called in direct or group messages.

Parents will receive the data in a weekly email summary; however, they won’t be able to see the content of what their teens wrote or said.

To connect to a teen’s account, parents and guardians scan a special QR code with their Discord app, and the teen must then accept the connection request.

“Similar to how parents know who their teens are friends with and what clubs they’re a part of at school, Family Center helps them learn more about who their teens are friends with and talk to on Discord,” the company wrote in a blog post. “Our goal with Family Center is to help foster productive dialogue about safer internet habits, and to create mutually beneficial ways for parents and teens to connect about experiences online.”

The new feature and safety guidelines come after an NBC News investigation last month into child safety on the platform. The news outlet examined hidden communities and chat rooms that adults have used to groom children before abducting them or extorting them into sending nude images.

The policy change also comes after The Washington Post reported last month that AI-generated child sex images have proliferated across the internet in recent months.

The company said in a blog post announcing the changes that the updated child sexual abuse material policy would include “any text or media content that sexualizes children, including drawn, photorealistic, and AI-generated photorealistic child sexual abuse material. The goal of this update is to ensure that the sexualization of children in any context is not normalized by bad actors.”
