Meta announced it is disabling all teens’ access to its AI characters worldwide, across every one of its apps. “Starting in the coming weeks, teens will no longer be able to access AI characters across our apps until the updated experience is ready,” Meta wrote in a blog post. In an update to the post, the company said the restriction applies both to users who registered with a teen birthday and to users who claim to be adults but whom its age prediction technology suspects are teens.

The pause comes as parents demand clearer information about, and more control over, their teens’ AI interactions. Meta had planned to release such features this year but has instead decided to disable AI characters for teens entirely.

Meta plans age-specific AI characters

The company aims to build a dedicated AI character experience for teens. Meta said the new teen-focused AI characters will include parental controls, provide age-appropriate replies, and focus on education, sports, and hobbies.

In October, Meta introduced parental controls to customize how teens engage with AI on its apps. The features, modeled on the PG-13 movie rating, restrict teens from viewing extreme violence, nudity, and graphic drug use. A few days later, the company added tools for managing AI characters, letting parents and guardians track conversation topics and restrict access to specific characters. Meta said parents could also disable chats with AI characters entirely.

On Thursday, Wired reported that Meta has tried to restrict information gathering about social media’s effects on teen mental health and related reports. According to the report, the company wants to remove any mention of a recent teen suicide case linked to social media, as well as references to Meta’s finances, employee activities, and Mark Zuckerberg’s Harvard years.

Meta’s move to halt teen access to AI characters comes just before a trial in New Mexico, where the company faces allegations that it failed to protect children on its apps. It’s alleged that Meta did not safeguard minors against online solicitation, trafficking, and sexual abuse on its platforms. In pretrial motions, Meta argued that the jury should decide only whether it violated New Mexico’s Unfair Practices Act with respect to child safety and youth mental health, and that other issues, such as alleged election interference, misinformation, or privacy violations, should not be considered.

Beyond the New Mexico case, Meta faces a separate trial next week over allegations that its platforms cause social media addiction. The 19-year-old plaintiff, identified as K.G.M., claims the platforms’ algorithms caused her addiction and harmed her mental health. CEO Mark Zuckerberg is expected to testify once the trial begins.

Online platforms tighten teen access

Social media platforms and AI companies have been reshaping their teen experiences in response to lawsuits claiming their products contributed to self-harm. On January 20, OpenAI introduced age prediction features in ChatGPT to detect younger users and strengthen safeguards for them. ChatGPT’s age prediction model analyzes behavioral and account data, such as the user’s stated age, account age, activity times, and usage trends.

Last week, TikTok introduced AI technology in Europe to detect and delete accounts belonging to users under 13. The age-detection system examines profile details, clips, and posting habits to identify underage accounts, and the feature will roll out in the coming weeks. Other online platforms, including Instagram, YouTube, and Roblox, have adopted similar age-gating measures.