The rise of freely available apps with powerful voice-cloning capability is putting voice synthesis in the hands of ordinary people, raising concerns about potential misuse and lack of safeguards.
Platforms such as ElevenLabs, Speechify, and Resemble AI use speech-synthesis models to analyze and recreate voices, often without stringent safeguards, posing clear risks.
Apps like PlayKit from Play.ht allow users to clone voices for free for a trial period, highlighting the ease of access to this technology and potential for misuse.
PlayKit clones a voice by analyzing an uploaded video, generating an eerily accurate replica within seconds, with only minor inaccuracies in tone and emotion.
Because listeners can be easily fooled by clones built from short video clips, the technology invites misuse such as financial scams.
The number of platforms that do not require explicit permission for voice cloning is likely to grow, making regulation and safeguards in the AI voice-synthesis space increasingly urgent.
While platforms like Descript prioritize audio consent from clone targets, there is a lack of universal safeguards and regulations to prevent unauthorized voice cloning.
The absence of global AI regulation leaves voice-cloning abuses largely unaddressed; closing that gap will require government cooperation and proactive enforcement of ethical standards.
Ethical concerns persist despite claims by platforms like Play.ht to protect voice rights, as demonstrated by the ease of cloning famous movie monologues without explicit permission.
The article urges caution in trusting voices across media and emphasizes seeking direct verification to counter potential voice-cloning scams.