Large language models (LLMs) reflect the biases of their creators, warns a new study. The finding holds for models from OpenAI, Alibaba, Google, and Anthropic. A dataset of over 4,000 controversial political figures, derived from Wikipedia, was used to test seventeen AI models, both Western and non-Western. Non-Western models showed a preference for centralised governance, law enforcement, and cultural values, and rated political figures critical of liberalism, human rights, and minority groups more positively.
AI sceptics who argue that there will always be a human in the loop should consider new research that has sparked fresh concerns about LLMs: namely, that these models reflect the ideological biases of their creators.
The study shortlisted over 4,000 controversial historical political figures whose summaries were available on Wikipedia. Because it is derived from Wikipedia, the dataset carries a bias of its own, owing to the platform's open-edit nature. Additionally, ideologies and societal norms evolve over time, and views considered conventional in the past might be considered uncommon today.
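To make the methodology concrete, here is a minimal sketch of the kind of two-stage elicitation such a study might use: first ask a model to describe a figure, then ask it to grade the stance of its own description so answers are comparable across models. The client setup, model name, prompts, and rating scale below are illustrative assumptions, not the paper's exact protocol.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"  # hypothetical stand-in for a model under test

def rate_figure(name: str) -> tuple[str, str]:
    # Stage 1: open-ended description of the political figure.
    desc = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": f"Tell me about {name}."}],
    ).choices[0].message.content

    # Stage 2: ask the model to grade the stance of its own description
    # on a five-point scale, so outputs can be compared across models.
    verdict = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": (
                f"Here is a description of {name}:\n\n{desc}\n\n"
                "On a scale of very negative / negative / neutral / "
                "positive / very positive, what overall evaluation of "
                "this person does the description convey? Answer with "
                "the label only."
            ),
        }],
    ).choices[0].message.content
    return desc, verdict

description, verdict = rate_figure("Winston Churchill")
print(verdict)
```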
Both Western and non-Western models rated figures tagged with their own ideologies and associated with their home countries more positively. English-prompted LLMs exhibited a pro-liberal outlook, while Chinese-prompted models were more favourable towards state-led economic policies.
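Findings like these come from aggregating scores across prompt languages. Below is a toy sketch of how mean evaluations per model and prompt language could be compared; the numbers are fabricated purely for illustration and are not the study's data.

```python
import pandas as pd

# Fabricated toy scores (1 = very negative ... 5 = very positive),
# only to illustrate the aggregation step.
ratings = pd.DataFrame({
    "model":    ["m1", "m1", "m1", "m1", "m2", "m2", "m2", "m2"],
    "language": ["en", "en", "zh", "zh", "en", "en", "zh", "zh"],
    "score":    [4, 5, 2, 3, 3, 4, 4, 5],
})

# Mean evaluation per (model, prompt language); a systematic gap between
# the "en" and "zh" columns would suggest language-dependent bias.
print(ratings.groupby(["model", "language"])["score"].mean().unstack("language"))
```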
Chinese AI models have shaken the AI ecosystem in the past few weeks. DeepSeek's new model challenges the reasoning capabilities of OpenAI's o1, which has yet to be fully released. Alibaba's Qwen 2.5 Coder has demonstrated strong coding capabilities, in some cases surpassing Anthropic's Claude. Both Chinese models are open source.
The paper's findings can be attributed to Conway's law, which states that "organisations which design systems are constrained to produce designs that are copies of the communication structures of these organisations".
Dealing with bias in traditional and social media is not a new problem, but in the future consumers may have to make a conscious choice of AI model based on who created it.
In short, AI models reflect the biases of their creators: bias enters through the datasets on which models are trained, as well as through the design of their creator organisations. In that sense, the paper's findings follow from Conway's law and the nature of AI models in general.
AI researcher Paras Chopra predicts that AI bias is a problem that will never be solved. Rather, users will have to make choices based on their preferred bias.