Since its inception, Kaspersky has been a pioneer in integrating artificial intelligence (AI), specifically machine learning (ML), into its cybersecurity products and services. The Kaspersky AI Technology Research Center brings together data scientists, ML engineers, and other cybersecurity experts to address complex threats related to AI and ML.
Kaspersky has developed a variety of AI/ML-powered threat detection technologies, primarily to identify malware, including neural network algorithms and decision-tree ML technology. It has also developed AI technologies for corporate products and services such as Kaspersky Managed Detection and Response, Kaspersky SIEM and Kaspersky XDR.
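To make the tree-based approach concrete, the following is a minimal sketch of how a decision-tree-family classifier can be trained on static file features to separate malicious from benign samples. The feature set, synthetic data and model choice here are illustrative assumptions only and do not reflect Kaspersky's actual detection pipeline.

```python
# Minimal sketch of decision-tree-based malware classification on static file
# features. Feature names and toy data are illustrative assumptions only; they
# do not reflect Kaspersky's real models, features or training data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical static features extracted from executables:
# [file size (KB), section entropy, number of imported APIs, is_signed]
X = rng.random((1000, 4)) * [2048, 8.0, 500, 1]
# Synthetic labels (1 = malicious, 0 = benign), purely for demonstration:
# high-entropy files are labelled malicious, with a little label noise.
y = (X[:, 1] > 6.5).astype(int) ^ (rng.random(1000) < 0.05)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gradient-boosted decision trees: one common family of tree-based detectors.
clf = GradientBoostingClassifier().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```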
Kaspersky also explores the capabilities of generative AI, particularly large language models (LLMs), investigating new solutions through the use of LLM tools such as ChatGPT.
Additionally, Kaspersky is researching the use of AI/ML in industrial environments with Kaspersky MLAD (Machine Learning for Anomaly Detection) and the Kaspersky Neuromorphic Platform (KNP). These efforts focus on predictive analytics, automatic recognition of early signs of impending attacks or failures, and AI solutions based on spiking neural networks and the energy-efficient neuromorphic processor developed by the Russian company Motive Neuromorphic Technologies (Motive NT).
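The general principle behind such anomaly detection is to model normal telemetry and flag deviations early. The sketch below illustrates that idea on synthetic sensor data with an off-the-shelf isolation forest; the signal, windowing and model are assumptions for illustration, not Kaspersky MLAD's actual algorithm.

```python
# Minimal sketch of ML-based anomaly detection on industrial telemetry:
# learn normal sensor behaviour, then flag readings that deviate from it.
# All values are synthetic; this is not Kaspersky MLAD's implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" telemetry: a periodic sensor signal plus noise,
# represented as sliding windows of recent readings.
t = np.arange(5000)
signal = np.sin(t / 50) + 0.05 * rng.standard_normal(t.size)
window = 20
X_normal = np.lib.stride_tricks.sliding_window_view(signal, window)

# Train only on normal behaviour, as an anomaly detector would in practice.
detector = IsolationForest(contamination=0.01, random_state=0).fit(X_normal)

# A new window containing an abrupt spike, e.g. an early sign of a failure.
anomalous = np.copy(X_normal[-1])
anomalous[10:] += 3.0
# Expected output: 1 for the normal window, -1 for the anomalous one.
print(detector.predict([X_normal[-1], anomalous]))
```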
Kaspersky actively patents its AI technologies, not only for detection tasks such as malware detection and improving datasets for ML, but also for anomaly detection and the use of neuromorphic networks in industry-relevant tasks.
Kaspersky's AI expertise is not limited to its research teams. Other teams also contribute significantly, applying ML to many tasks, such as machine vision technologies in the Antidrone team or AI coding assistants in the CoreTech and KasperskyOS departments.
The primary goals of Kaspersky's AI Technology Research Center are to develop AI technologies for cybersecurity, raise awareness of the benefits and risks of AI and its applications in security, and actively monitor improper or malicious AI usage.
Kaspersky's efforts to raise awareness include demonstrating the dangers of deepfake videos and sponsoring educational courses on AI for cybersecurity. The company is also involved in the creation of international standards for the use of AI and presented the first principles for the ethical use of AI systems in cybersecurity at the Internet Governance Forum in 2023.