Chinese chatbot DeepSeek, developed by a lab founded by the millennial mathematician Liang Wenfeng, has been hailed as AI's Sputnik moment because it appears to rival top US models at a fraction of the cost. DeepSeek's achievement shook markets, sending shares in major global companies such as chip maker Nvidia plummeting. The Chinese AI lab claimed that training one of its models had cost only $5.6 million. In the biggest week for AI since the launch of ChatGPT in November 2022, DeepSeek's app became the most downloaded on Apple's App Store in both the US and the UK.
DeepSeek's announcement prompted fears about control of AI, particularly given the potential for the technology to be weaponised. AI is now one of the principal battlegrounds in global geopolitics, raising the question of how serious a threat DeepSeek's success poses to US hopes of maintaining supremacy in the field. While some suggest that DeepSeek's more accessible technology could open AI up to more startups, its main users so far are China's leading technology companies.
DeepSeek said its model is comparable to OpenAI's o1 model, and even better in some areas, thanks to a more efficient design that activates only the parts of the system relevant to answering a given query. Its release also prompted OpenAI to announce the launch of its o3-mini reasoning model on Friday. DeepSeek's model is free to use, and its design could redefine the benchmarks of AI performance and help level the playing field for AI researchers.
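To make the "activate only the relevant part of the system" idea concrete, the sketch below shows a toy mixture-of-experts layer, the general family of techniques this description points to. It is a minimal illustration, not DeepSeek's actual architecture; the layer sizes, expert count and top_k value are assumptions chosen purely for readability.

```python
# Toy sketch of sparse expert routing: a router picks a few "experts" per token,
# so only a fraction of the model's parameters do work for any given query.
# Illustrative only; not DeepSeek's real design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyMoELayer(nn.Module):
    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Each "expert" is a small feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        # The router scores every expert for every token.
        self.router = nn.Linear(d_model, n_experts)

    def forward(self, x):                       # x: (tokens, d_model)
        scores = self.router(x)                 # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Only the selected experts run for each token; the rest stay idle,
        # which is why compute per query is a fraction of the parameter count.
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        return out


tokens = torch.randn(16, 64)           # 16 tokens with hidden size 64
print(ToyMoELayer()(tokens).shape)     # torch.Size([16, 64])
```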
Experts generally agreed that Big Tech companies such as Google and Facebook, with far greater computational resources, could still rapidly experiment with and adopt emerging techniques like those used in DeepSeek's technology. In the UK, the minister of state for digital and culture expressed excitement about DeepSeek's breakthrough. But, as a minister tasked with using AI to deliver economic growth, he also acknowledged the security risks inherent in downloading the Chinese app.
However, as more information about the chatbot came to light, it became clear that its responses could be censored: DeepSeek was found to censor itself in real time whenever an answer might be politically sensitive or challenging for the Chinese government. The Irish Data Protection Commission, meanwhile, demanded explanations from DeepSeek about how it processes users' data.
David Sacks, a White House AI adviser, claimed there was substantial evidence that DeepSeek had distilled knowledge from OpenAI's models, and that OpenAI was unhappy about it. OpenAI's co-founder and chief executive, Sam Altman, initially welcomed DeepSeek as a new competitor; yet the next day OpenAI said it was "reviewing indications that DeepSeek may have inappropriately distilled our models".
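For readers unfamiliar with the term, "distillation" generally means training a smaller student model to imitate a larger teacher model's outputs. The sketch below is a textbook illustration of that idea only; it is not evidence of what DeepSeek did, and the model sizes, temperature and learning rate are arbitrary assumptions.

```python
# Generic knowledge-distillation step: the student is nudged toward the
# teacher's softened output distribution via a KL-divergence loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, d_in, temperature = 100, 32, 2.0
teacher = nn.Linear(d_in, vocab)   # stand-in for a large, fixed teacher model
student = nn.Linear(d_in, vocab)   # smaller model being trained
optimiser = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(8, d_in)           # a batch of dummy inputs
with torch.no_grad():
    teacher_probs = F.softmax(teacher(x) / temperature, dim=-1)

# KL divergence measures how far the student's predictions are from the teacher's.
student_log_probs = F.log_softmax(student(x) / temperature, dim=-1)
loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
loss.backward()
optimiser.step()
print(f"distillation loss: {loss.item():.4f}")
```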
DeepSeek's success prompted numerous discussions about the ramifications of rapid AI innovation and the security risks involved: governments and companies will need to prioritise AI security to keep weaponised AI in check. Debate also ensued about the control of AI and the fair sharing of the technology, particularly as it becomes more accessible to smaller companies.
DeepSeek's success will only intensify the competition for AI supremacy. As AI remains a prime concern for global security, it is important for governments to safeguard against potential misuse of the technology.
Although DeepSeek has ruffled feathers, it is important to remember that AI is an evolving technology that remains open to many possibilities. DeepSeek's success does not necessarily spell the end of US or European AI progress, and may instead act as a wake-up call prompting further research and development.
Overall, DeepSeek has opened up discussions about better AI sharing and security, and governments should treat those security risks as a priority to keep weaponised AI in check. As AI becomes more accessible, it is vital for governments and companies to share technology responsibly and ensure a more level playing field for all AI researchers, regardless of geography.