Meta is under fire after a leaked internal document revealed troubling rules for its AI chatbots. The guidelines once allowed the bots to flirt or roleplay romantically with children, provide false medical information, and even help justify racist arguments.
The standards applied to Meta's chatbots across Facebook, WhatsApp, and Instagram, and Meta has confirmed the document is authentic. The company told Reuters, however, that it has since removed the sections that permitted romantic interactions with minors.
Despite the changes, the revelations have sparked widespread outrage. Critics say the incident raises serious questions about Meta’s commitment to AI safety, user protection, and ethical responsibility.