GadgetBond
AI · Microsoft · OpenAI · Security · Tech

Microsoft and OpenAI reveal hackers weaponizing ChatGPT

While no major AI-powered attacks have been detected yet, Microsoft and OpenAI report that threat actors are using LLMs to research vulnerabilities, translate tools, evade antivirus software, and gather technical intelligence.

By Shubham Sawarkar, Editor-in-Chief
Feb 14, 2024, 12:57 PM EST
Photo illustration by Jaap Arriens/NurPhoto via Getty Images

In a concerning development, Microsoft and OpenAI have uncovered evidence that cybercriminals are already exploiting advanced language models like ChatGPT to enhance their attacks. The tech giants released new research today, revealing that state-sponsored hacking groups from Russia, North Korea, Iran, and China have been experimenting with these powerful AI tools to refine their techniques and evade detection.

“Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” Microsoft warned in a blog post.

The Strontium group, a notorious Russian hacking collective linked to military intelligence, has been using large language models (LLMs) to gain insights into satellite communication protocols, radar imaging technologies, and other technical parameters, according to Microsoft’s findings. The group, also known as APT28 or Fancy Bear, has remained active throughout the ongoing Russia-Ukraine conflict and previously played a role in the hacking of Hillary Clinton’s 2016 presidential campaign.

But their use of AI goes beyond mere research. The Strontium group has also been leveraging LLMs to assist with basic scripting tasks, such as file manipulation, data selection, regular expressions, and multiprocessing, potentially automating or optimizing their technical operations.

The Thallium group, a North Korean state-sponsored hacking collective, has likewise been using LLMs to research publicly reported vulnerabilities and target organizations. It has also used these AI models to aid in basic scripting tasks and to draft content for phishing campaigns.

Iranian hackers from the group known as Curium have taken a similar approach, using LLMs to generate phishing emails and even write code to evade detection by antivirus software. Chinese state-affiliated threat actors have also been observed using LLMs for research, scripting, translations, and to refine their existing hacking tools.

The revelation comes amid growing concerns about the potential misuse of AI in cyberattacks. Recent months have seen the emergence of tools like WormGPT and FraudGPT, which assist in the creation of malicious emails and cracking tools. Last month, a senior official at the National Security Agency also warned that hackers are using AI to make their phishing emails more convincing and harder to detect.

While Microsoft and OpenAI have not detected any “significant attacks” using LLMs yet, the companies have been swift in shutting down all accounts and assets associated with these hacking groups. “At the same time, we feel this is important research to publish to expose early-stage, incremental moves that we observe well-known threat actors attempting, and share information on how we are blocking and countering them with the defender community,” Microsoft stated.

The current use of AI in cyberattacks may be limited, but Microsoft warns of potential future use cases like voice impersonation. “AI-powered fraud is another critical concern. Voice synthesis is an example of this, where a three-second voice sample can train a model to sound like anyone,” Microsoft cautions. “Even something as innocuous as your voicemail greeting can be used to get a sufficient sampling.”

Faced with this AI-powered threat, Microsoft’s solution is to fight fire with fire, using AI to respond to AI attacks. “AI can help attackers bring more sophistication to their attacks, and they have resources to throw at it,” says Homa Hayatyfar, principal detection analytics manager at Microsoft. “We’ve seen this with the 300+ threat actors Microsoft tracks, and we use AI to protect, detect, and respond.”

Microsoft is building a Security Copilot, a new AI assistant designed specifically for cybersecurity professionals, to help identify breaches and better understand the vast amounts of data and signals generated through cybersecurity tools daily. The software giant is also overhauling its software security following major Azure cloud attacks and incidents where Russian hackers spied on Microsoft executives.
