Police departments across America have begun deploying AI surveillance cameras that can judge and categorize the people they record.
One company selling this technology is Fusus, which provides AI surveillance models to police departments across the country.
The facial recognition algorithms often used in this technology, however, carry built-in biases stemming from imbalances in the data used to train them.
These disparities in accuracy can lead to innocent people being misidentified as suspects.
AI-powered facial recognition algorithms are exacerbating existing racial and class biases in policing, making it difficult to remedy the situation.
Centralized systems such as these can identify clothing, vehicles, and individuals across multiple camera feeds, making it easy to track a person's movements.
They allow law enforcement to expand surveillance far beyond what human officers could manage, amplifying biases already prevalent within police departments.
The datasets used to train these algorithms are often overwhelmingly composed of lighter-skinned subjects, embedding bias into the models from the start.
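The resulting gap is measurable. As a rough illustration only (not drawn from Fusus or any specific vendor), the sketch below assumes a hypothetical log of face-matching trials labeled by demographic group and computes each group's false-match rate, the metric benchmark audits use to expose these disparities:

```python
# Illustrative sketch, assuming a hypothetical CSV of face-matching trials
# with columns "group", "predicted_match", "true_match". It computes the
# false-match rate per demographic group: how often genuinely different
# people are wrongly flagged as the same person.
import csv
from collections import defaultdict

def false_match_rates(path: str) -> dict[str, float]:
    false_matches = defaultdict(int)     # non-matching pairs wrongly flagged as matches
    non_match_trials = defaultdict(int)  # total non-matching pairs seen per group
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["true_match"] == "0":           # pair is genuinely two different people
                non_match_trials[row["group"]] += 1
                if row["predicted_match"] == "1":  # but the model said "same person"
                    false_matches[row["group"]] += 1
    return {g: false_matches[g] / n for g, n in non_match_trials.items() if n}

if __name__ == "__main__":
    for group, rate in sorted(false_match_rates("matching_trials.csv").items()):
        print(f"{group}: false-match rate {rate:.1%}")
```

When training data skews toward lighter-skinned faces, an audit like this typically shows markedly higher false-match rates for darker-skinned groups, which is exactly the kind of error that puts innocent people in front of police.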
AI technology is marketed as more reliable than human decision-making, but instead of eliminating bias, it entrenches bias even deeper into the system.
In essence, these technologies are not just tools of protection; they are tools of control.