In 2025, bots and AI agents came to drive a substantial share of web traffic, with over $114 billion in digital ad spend lost to fake engagement on platforms like TikTok and Reddit.
Social media platforms are struggling with fake accounts, AI-generated content, and synthetic influencers, which has caused user trust to decline significantly.
The use of large language models has accelerated the spread of fake personas and deepfakes across social platforms like Facebook and Reddit.
Gaming has emerged as a testing ground for combating fake participation: its structured environment and financial stakes have pushed game developers to adopt aggressive anti-cheat measures.
Companies like Riot and Razer are deploying real-time detection and verification systems to distinguish human players from bots in gaming environments.
Coliseum, a Web3-native tournament platform, is pioneering a system called the Guardian Network, which integrates verification measures to ensure genuine participation and combat manipulation.
Some argue that the solution to fake engagement lies in infrastructure changes that prioritize verified human interaction and make faking costly.
Regulators are also beginning to intervene in the engagement crisis: the U.S. FTC has penalized the purchase of fake followers, and European agencies have acted against privacy violations in biometric ID systems.
While gaming will not solve every internet problem, it is seen as a promising starting point for rebuilding digital trust by making verified participation the default.