Rec Room has seen a 70% reduction in toxic voice chat incidents in the 18 months since rolling out ToxMod.
Voice intelligence platforms, such as Modulate’s ToxMod, can monitor every live conversation at scale.
With the data behind those incidents, Rec Room has been able to dig into what was driving the behavior and who was behind the toxicity they were seeing.
Experiments and tests let them uncover the most effective response pattern: respond quickly, then stack gradually escalating interventions, starting with friendly warnings and moving up to bans.
False positives drop dramatically as a result, because each alert helps establish a clear behavior pattern before the nuclear option is chosen.
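To make that pattern concrete, here is a minimal sketch of a stacked escalation policy. It is an illustration only: the ladder rungs, the 30-day window, and the handle_alert interface are assumptions made for this example, not ToxMod’s API or Rec Room’s actual thresholds.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Ladder of interventions, lightest to heaviest. The rungs here are
# illustrative, not Rec Room's actual penalty set.
LADDER = ["friendly_warning", "temporary_mute", "short_ban", "permanent_ban"]

@dataclass
class PlayerRecord:
    # Timestamps of confirmed toxicity alerts for one player.
    alerts: list = field(default_factory=list)

class EscalationPolicy:
    """Stacked, slowly escalating interventions: each confirmed alert
    moves a player one rung up the ladder, so the heaviest penalty is
    only reached after a clear pattern of behavior is established."""

    def __init__(self, window=timedelta(days=30)):
        self.window = window   # only recent alerts count toward escalation
        self.records = {}      # player_id -> PlayerRecord

    def handle_alert(self, player_id, now):
        record = self.records.setdefault(player_id, PlayerRecord())
        # Expire alerts outside the rolling window so old incidents
        # don't push a reformed player toward a ban.
        record.alerts = [t for t in record.alerts if now - t <= self.window]
        record.alerts.append(now)
        rung = min(len(record.alerts) - 1, len(LADDER) - 1)
        return LADDER[rung]

policy = EscalationPolicy()
now = datetime.now()
print(policy.handle_alert("player_42", now))                       # friendly_warning
print(policy.handle_alert("player_42", now + timedelta(hours=1)))  # temporary_mute
```

The rolling window is what keeps false positives in check: a single stray alert produces only a friendly warning, and it ages out before it can compound into a ban.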
And when implementing a strategy, don’t jump straight to solutions; spend more time defining the problem first.
Some of the best conversations outside of trust and safety can be with designers, who are strong problem-solvers and think naturally about how to design trust and safety solutions.
It’s important to understand your community’s engagement profile when making decisions based on the escalations you’re getting from trust and safety tools.
Interventions and responses start from a desire to change player behavior, which makes them more effective than simply banning players as a reactive tool.
Every game, platform and community requires a different kind of moderation suited to its audience and engagement profile.
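One way to read that last point: the same detection pipeline can feed very different policies. The sketch below is purely illustrative; the profile names, severity thresholds, and ladders are assumptions, not settings from ToxMod or Rec Room.

```python
# Illustrative per-community moderation profiles. A family-friendly
# space flags even mild toxicity, while a competitive adult community
# tolerates trash talk and reserves escalation for severe abuse.
MODERATION_PROFILES = {
    "family_friendly": {
        "min_severity_to_flag": "mild",
        "ladder": ["friendly_warning", "temporary_mute", "short_ban"],
    },
    "competitive_adult": {
        "min_severity_to_flag": "severe",
        "ladder": ["friendly_warning", "temporary_mute",
                   "short_ban", "permanent_ban"],
    },
}
```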