Chinese hackers are leveraging large language models (LLMs) to develop sophisticated malware and conduct phishing attacks.
No specific law currently governs the unauthorized use of LLMs, leaving a significant gap in efforts to curb their misuse.
Developers and businesses should take responsibility for curbing the abuse of LLMs by restricting access and implementing monitoring mechanisms.
The Chinese government and the international community should regulate AI platforms effectively to prevent cybercrime and threats to global security.