AI · Microsoft · OpenAI · Security · Tech

Microsoft and OpenAI reveal hackers weaponizing ChatGPT

While no major AI-driven attacks have been detected yet, Microsoft and OpenAI report that threat actors are testing LLMs to research vulnerabilities, translate tools, evade antivirus software, and gather technical intelligence.

By Shubham Sawarkar, Editor-in-Chief
Feb 14, 2024, 12:57 PM EST

Photo illustration by Jaap Arriens/NurPhoto via Getty Images

In a concerning development, Microsoft and OpenAI have uncovered evidence that cybercriminals are already exploiting advanced language models like ChatGPT to enhance their attacks. The tech giants released new research today, revealing that state-sponsored hacking groups from Russia, North Korea, Iran, and China have been experimenting with these powerful AI tools to refine their techniques and evade detection.

“Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” Microsoft warned in a blog post.

The Strontium group, a notorious Russian hacking collective linked to military intelligence, has been using large language models (LLMs) to gain insights into satellite communication protocols, radar imaging technologies, and other technical parameters, according to Microsoft’s findings. The group, also known as APT28 or Fancy Bear, has been active during the ongoing Russia-Ukraine conflict and previously played a role in the hacking of Hillary Clinton’s 2016 presidential campaign.

But their use of AI goes beyond mere research. The Strontium group has also been leveraging LLMs to assist with basic scripting tasks, such as file manipulation, data selection, regular expressions, and multiprocessing, potentially automating or optimizing their technical operations.
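To give a sense of scale, the scripting categories Microsoft names here are ordinary automation tasks, not exotic hacking tools. A purely illustrative sketch (not code from the report) combining three of them — file manipulation, regular expressions, and multiprocessing — might look like this:

```python
# Illustrative only: the kind of mundane scripting task Microsoft's report
# describes threat actors asking LLMs to help with. This example scans log
# files in parallel and pulls out IPv4-like strings.
import re
from multiprocessing import Pool
from pathlib import Path

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def extract_ips(path):
    """Read one file and return the unique IPv4-like strings it contains."""
    text = Path(path).read_text(errors="ignore")
    return set(IP_RE.findall(text))

def scan_logs(paths):
    """Scan many files in parallel and merge the per-file results."""
    with Pool() as pool:
        results = pool.map(extract_ips, paths)
    return set().union(*results) if results else set()
```

The point is that none of this is malicious in isolation; an LLM that drafts such glue code simply compresses work an operator would otherwise do by hand.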

The Thallium group, a North Korean state-sponsored hacking collective, has likewise been utilizing LLMs to research publicly reported vulnerabilities and target organizations. They have also used these AI models to aid in basic scripting tasks and to draft content for phishing campaigns.

Iranian hackers from the group known as Curium have taken a similar approach, using LLMs to generate phishing emails and even write code to evade detection by antivirus software. Chinese state-affiliated threat actors have also been observed using LLMs for research, scripting, translations, and to refine their existing hacking tools.

The revelation comes amid growing concerns about the potential misuse of AI in cyberattacks. Recent months have seen the emergence of tools like WormGPT and FraudGPT, which assist in the creation of malicious emails and cracking tools. Last month, a senior official at the National Security Agency also warned that hackers are using AI to make their phishing emails more convincing and harder to detect.

While Microsoft and OpenAI have not detected any “significant attacks” using LLMs yet, the companies have been swift in shutting down all accounts and assets associated with these hacking groups. “At the same time, we feel this is important research to publish to expose early-stage, incremental moves that we observe well-known threat actors attempting, and share information on how we are blocking and countering them with the defender community,” Microsoft stated.

The current use of AI in cyberattacks may be limited, but Microsoft warns of potential future use cases like voice impersonation. “AI-powered fraud is another critical concern. Voice synthesis is an example of this, where a three-second voice sample can train a model to sound like anyone,” Microsoft cautions. “Even something as innocuous as your voicemail greeting can be used to get a sufficient sampling.”

Faced with this AI-powered threat, Microsoft’s solution is to fight fire with fire, using AI to respond to AI attacks. “AI can help attackers bring more sophistication to their attacks, and they have resources to throw at it,” says Homa Hayatyfar, principal detection analytics manager at Microsoft. “We’ve seen this with the 300+ threat actors Microsoft tracks, and we use AI to protect, detect, and respond.”

Microsoft is building a Security Copilot, a new AI assistant designed specifically for cybersecurity professionals, to help identify breaches and better understand the vast amounts of data and signals generated through cybersecurity tools daily. The software giant is also overhauling its software security following major Azure cloud attacks and incidents where Russian hackers spied on Microsoft executives.

