In a concerning development, Microsoft and OpenAI have uncovered evidence that cybercriminals are already exploiting advanced language models like ChatGPT to enhance their attacks. The tech giants released new research today, revealing that state-sponsored hacking groups from Russia, North Korea, Iran, and China have been experimenting with these powerful AI tools to refine their techniques and evade detection.
“Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” Microsoft warned in a blog post.
The Strontium group, a notorious Russian hacking collective linked to military intelligence, has been using large language models (LLMs) to gain insights into satellite communication protocols, radar imaging technologies, and other technical parameters, according to Microsoft’s findings. The group, also known as APT28 or Fancy Bear, has been active during the ongoing Russia-Ukraine conflict and previously played a role in the hacking of Hillary Clinton’s 2016 presidential campaign.
But their use of AI goes beyond mere research. The Strontium group has also been leveraging LLMs to assist with basic scripting tasks, such as file manipulation, data selection, regular expressions, and multiprocessing, potentially automating or optimizing their technical operations.
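For context, the "basic scripting tasks" Microsoft describes are routine automation chores rather than exploit development. A minimal, benign Python sketch of the same categories — regex-based data selection over text plus a multiprocessing scan — may help illustrate; the log format and names here are invented for illustration and do not come from the report:

```python
import re
from multiprocessing import Pool

# Hypothetical log lines; the format is invented for this example.
LOG_LINES = [
    "2024-01-15 10:02:11 login user=alice status=ok",
    "2024-01-15 10:02:15 login user=bob status=fail",
    "2024-01-15 10:03:02 login user=carol status=fail",
]

# Regular expression that selects only failed-login entries.
PATTERN = re.compile(r"user=(\w+) status=fail")

def extract_failed_user(line):
    """Return the username from a failed-login line, or None."""
    m = PATTERN.search(line)
    return m.group(1) if m else None

def failed_logins(lines):
    # A small process pool parallelizes the regex scan across lines,
    # the kind of boilerplate multiprocessing an LLM can draft quickly.
    with Pool(2) as pool:
        results = pool.map(extract_failed_user, lines)
    return [user for user in results if user]

if __name__ == "__main__":
    print(failed_logins(LOG_LINES))  # ['bob', 'carol']
```

Nothing here is attack-specific — which is precisely the point Microsoft makes: the models are being used to speed up ordinary glue code.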
The Thallium group, a North Korean state-sponsored hacking collective, has likewise been utilizing LLMs to research publicly reported vulnerabilities and target organizations. They have also used these AI models to aid in basic scripting tasks and to draft content for phishing campaigns.
Iranian hackers from the group known as Curium have taken a similar approach, using LLMs to generate phishing emails and even to write code designed to evade detection by antivirus software. Chinese state-affiliated threat actors have also been observed using LLMs for research, scripting, and translation, and to refine their existing hacking tools.
The revelation comes amid growing concerns about the potential misuse of AI in cyberattacks. Recent months have seen the emergence of tools like WormGPT and FraudGPT, which assist in the creation of malicious emails and cracking tools. Last month, a senior official at the National Security Agency also warned that hackers are using AI to make their phishing emails more convincing and harder to detect.
While Microsoft and OpenAI have not detected any “significant attacks” using LLMs yet, the companies have been swift in shutting down all accounts and assets associated with these hacking groups. “At the same time, we feel this is important research to publish to expose early-stage, incremental moves that we observe well-known threat actors attempting, and share information on how we are blocking and countering them with the defender community,” Microsoft stated.
The current use of AI in cyberattacks may be limited, but Microsoft warns of potential future use cases like voice impersonation. “AI-powered fraud is another critical concern. Voice synthesis is an example of this, where a three-second voice sample can train a model to sound like anyone,” Microsoft cautions. “Even something as innocuous as your voicemail greeting can be used to get a sufficient sampling.”
Faced with this AI-powered threat, Microsoft’s solution is to fight fire with fire, using AI to respond to AI attacks. “AI can help attackers bring more sophistication to their attacks, and they have resources to throw at it,” says Homa Hayatyfar, principal detection analytics manager at Microsoft. “We’ve seen this with the 300+ threat actors Microsoft tracks, and we use AI to protect, detect, and respond.”
Microsoft is building Security Copilot, a new AI assistant designed specifically for cybersecurity professionals, to help identify breaches and make sense of the vast amounts of data and signals generated daily by security tools. The software giant is also overhauling its software security following major Azure cloud attacks and incidents in which Russian hackers spied on Microsoft executives.