AI systems can produce biased results when the data used to train them is unbalanced or unrepresentative, leading to serious harms such as wrongful arrests.
Addressing bias in AI calls for diverse teams building the models, clear and documented data choices, and regular checks that identify problems early.
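As one illustration of what such a regular check might look like, the sketch below compares a model's positive-prediction rates across demographic groups and flags a large gap for human review. The column names, sample data, and 0.10 threshold are illustrative assumptions, not a standard audit procedure.

```python
# A minimal sketch of a recurring bias check: compare the share of positive
# predictions across groups and flag large gaps for review.
# "group", "prediction", and the 0.10 threshold are hypothetical choices.
import pandas as pd


def positive_rate_by_group(df: pd.DataFrame, group_col: str, pred_col: str) -> pd.Series:
    """Share of positive predictions for each group."""
    return df.groupby(group_col)[pred_col].mean()


def parity_gap(rates: pd.Series) -> float:
    """Largest difference in positive-prediction rates between any two groups."""
    return float(rates.max() - rates.min())


if __name__ == "__main__":
    # Hypothetical audit log: each row is one model decision.
    audit = pd.DataFrame({
        "group":      ["A", "A", "A", "B", "B", "B"],
        "prediction": [1,   1,   0,   1,   0,   0],
    })
    rates = positive_rate_by_group(audit, "group", "prediction")
    gap = parity_gap(rates)
    print(rates)
    print(f"Parity gap: {gap:.2f}")
    if gap > 0.10:  # illustrative threshold for escalating to human review
        print("Gap exceeds threshold -- flag for human review.")
```

Run on a regular cadence against recent decisions, a check like this cannot prove a system is fair, but it can surface skewed outcomes early enough for people to investigate.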
AI's reliance on large amounts of data also raises privacy concerns, as seen when smart speakers and fitness apps collect personal information without clear consent.
In 2023, a major tech company faced backlash after its AI system was found to be reading user data without adequate disclosure, underscoring the need for transparency in how AI systems handle personal information.