Google has admitted to using YouTube videos to train its AI systems, including its Gemini models and its video generation model, Veo 3.
The company maintains that the practice is longstanding, predating today's generative AI boom, although most creators are unaware that their content is being used.
Concerns center on transparency and consent, since creators have no way to opt out of having their videos used for AI training.
Google says it has safeguards in place to prevent unauthorized use of images or voices, but experts argue that users should have a say in how their content is used.
The lack of consent raises ethical and legal issues, as creators feel their intellectual property rights are being infringed upon.
Transparency will be crucial as AI becomes more deeply integrated into web platforms, and creators deserve the right to choose how their work is used.
Google's approach raises broader questions about openness and creator autonomy in AI training.