Meta, for the first time, has shared information about its approach to combating the forced-labor compounds that fuel pig butchering scams on its platforms and across the web. The company said it has been collaborating with global law enforcement and other tech companies, and that it has taken down more than 2 million accounts connected to scam compounds so far this year. Longtime pig butchering researchers say that Meta has been slow to publicly and directly acknowledge the problem and the role its many platforms play in connecting scammers with potential victims.
More than 200,000 people have been trafficked and held in compounds where they are forced to work as online scammers. Victims have been trafficked from more than 60 countries around the world, often after seeing online ads promising jobs that are too good to be true.
Pig butchering scams drive financial theft, but they start with either cold outreach from scammers to potential victims or contact that originates in social media groups. Because Meta's services are recognizable and trusted around the world, it is inevitable that scammers will gravitate toward them.
Some pig butchering activity can skirt tech companies' standards, even when those companies carry out large numbers of takedowns, because the content isn't explicit enough to meet the criteria for removal.
Meta is focused on combating scam compounds using its policies around dangerous organizations and individuals, as well as its wider safety policies.
Using AI translation tools, scammers can now easily generate comprehensible content in many languages, both for the scripts and messages they send potential victims and for the job advertisements that lure prospective workers into scam compounds.
Meta says one recent action it took against a scam compound followed a tip from OpenAI threat researchers, who had spotted the criminal operation using ChatGPT to translate messages that could be used in pig butchering.
Pig butchering scams have escalated into a multibillion-dollar global crisis.
Meta declined to share how many accounts it had removed prior to this year but said it had been working with law enforcement for more than two years.
Scammers have also been working to evade law enforcement and tech company clampdowns by adopting artificial intelligence tools, integrating deepfakes into their campaigns, and deploying malware to expand their capabilities.