Meta plans to use AI to replace up to 90% of the human reviewers who conduct internal risk assessments for key applications such as WhatsApp, Facebook, and Instagram.
Internal documents reveal that AI may be tasked with assessing sensitive areas, despite the company's assurance that human evaluators will still handle novel and complex issues.
Critics are concerned that over-reliance on AI for risk assessments could lead to real-world harm if features are approved without thorough human evaluation.
Meta says the shift to AI-driven risk assessments will streamline its monitoring and compliance processes, but experts are wary of the potential consequences and question whether automated reviews can satisfy regulatory requirements.