Meta is shifting the majority of its internal risk assessments for product updates on Facebook, Instagram, and WhatsApp to artificial intelligence.
The company plans for AI to handle up to 90 percent of these assessments, aiming to speed up product development while reserving human review for complex or sensitive issues.
Engineers at Meta will enter risk-related information about an update into a system that scores it and determines whether human review is needed; lower-risk features can then go live automatically.
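The article describes this workflow only at a high level. As a rough illustration of what an automated triage step of this kind could look like, here is a minimal Python sketch; every name, threshold, and category in it is an assumption for illustration, not a description of Meta's actual system.

```python
from dataclasses import dataclass

# Illustrative thresholds; the real scoring criteria and cutoffs are not public.
AUTO_APPROVE_THRESHOLD = 0.3
HUMAN_REVIEW_THRESHOLD = 0.7


@dataclass
class UpdateAssessment:
    feature_name: str
    risk_score: float             # assumed 0.0 (low risk) to 1.0 (high risk)
    touches_sensitive_area: bool  # e.g. privacy, misinformation, youth safety


def triage(assessment: UpdateAssessment) -> str:
    """Decide how an update proceeds based on its automated risk score."""
    # Sensitive areas always escalate to humans in this sketch, mirroring the
    # stated intent to keep people involved on complex or sensitive issues.
    if assessment.touches_sensitive_area:
        return "human_review"
    if assessment.risk_score < AUTO_APPROVE_THRESHOLD:
        return "auto_approve"  # feature can go live automatically
    if assessment.risk_score < HUMAN_REVIEW_THRESHOLD:
        return "auto_approve_with_monitoring"
    return "human_review"


if __name__ == "__main__":
    example = UpdateAssessment("new_sharing_flow", risk_score=0.2,
                               touches_sensitive_area=False)
    print(triage(example))  # -> auto_approve
```

The key design question such a system raises is where the thresholds sit and which categories are hard-wired to escalate to people, which is exactly what the critics quoted below are questioning.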
Critics worry that AI may not fully grasp the nuanced social and political risks tied to misinformation, privacy, and cultural impact. Meta, for its part, acknowledges the need for a gradual rollout and says human judgment will remain part of the process.