AI systems reflect the data they are trained on: they interpret inputs and generate outputs based on the patterns present in that data.
Training bias, which arises when certain perspectives are over-represented or under-represented in the training data, can produce a warped reflection of reality, as illustrated in the sketch below.
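A minimal toy sketch of this effect, not modeled on any real system: a frequency-based "model" trained on a deliberately skewed, hypothetical corpus simply echoes whichever continuation dominates its training data, so a 9:1 imbalance in the data becomes a 90% skew in the output.

```python
from collections import Counter

# Toy illustration only: a frequency-based "model" that completes the
# phrase "doctors are ___" by reflecting whichever continuation
# dominates its training corpus.
def train(corpus):
    counts = Counter()
    for sentence in corpus:
        words = sentence.lower().split()
        for i in range(len(words) - 1):
            if words[i] == "are":
                counts[words[i + 1]] += 1
    return counts

def complete(counts):
    # The model echoes the most frequent continuation it has seen.
    word, freq = counts.most_common(1)[0]
    return word, freq / sum(counts.values())

# Hypothetical, deliberately skewed training data: one perspective is
# over-represented, another barely appears.
skewed_corpus = ["doctors are men"] * 9 + ["doctors are women"] * 1

word, share = complete(train(skewed_corpus))
print(f"Model's reflection: 'doctors are {word}' "
      f"(backed by {share:.0%} of its training examples)")
# The mirror reflects the corpus, not the world.
```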
Understanding AI training data is crucial for interpreting the output of AI systems, because their responses may contain factual errors or biases that originate in distortions of that data.
Developing AI systems requires careful consideration of training data, including who selects it and which values guide that selection, in order to build better and more comprehensive mirrors of humanity.