GadgetBond

AI detects covert consciousness in comatose patients before doctors

A new study reveals that AI can identify hidden signs of awareness in brain injury patients days earlier than traditional bedside examinations.

By Shubham Sawarkar, Editor-in-Chief
Sep 3, 2025, 2:45 AM EDT
Woman looking at brain scan images
Photo by Tunvarat Pruksachat / Getty Images

When families sit vigil beside a loved one after a catastrophic brain injury, the waiting is full of small, haunted questions: Is there anything behind those closed eyelids? Will they ever respond? Doctors press for signs—eye opening, hand squeezes, a grimace—and chart the fragile arc of recovery. For a surprising number of patients, those standard bedside checks miss something: awareness that stays hidden because the body can’t reliably answer.

A new study introduces a different kind of examiner: an algorithm that, through an ordinary camera, watches faces at a resolution humans simply can’t. The tool, called SeeMe, tracks microscopic facial displacements—landmarks down to the level of pores—and determines whether those shifts line up with simple spoken commands such as “open your eyes” or “stick out your tongue.” In a prospective study of acute brain-injury patients, SeeMe spotted signs of purposeful, stimulus-evoked facial movement days before clinicians noticed them, and in more patients overall.

What the researchers did (and what they found)

The team behind SeeMe recorded videos of 37 comatose adults admitted after acute brain injuries and compared the algorithm’s readout to standard clinical exams and blinded human raters. Using a combination of fine-grained landmark tracking and a deep-learning classifier, the system quantified facial displacements after each command and tested whether the pattern of movement matched the command given. SeeMe was designed not just to flag movement, but to check whether movements were specific to the instruction—an important step toward distinguishing intentional responses from random twitches.
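The authors’ actual pipeline isn’t reproduced here, but the core idea—measure landmark displacement after a command and ask whether the motion is specific to the instructed facial region—can be sketched in a few lines. Everything below is an illustrative toy under stated assumptions, not the paper’s implementation: the function names, window lengths, and thresholds are all hypothetical, and real landmark positions would come from a face-tracking model rather than raw arrays.

```python
import numpy as np

def displacement(series):
    """Mean frame-to-frame displacement magnitude of tracked landmarks.

    series: array of shape (frames, landmarks, 2) holding (x, y) positions.
    """
    diffs = np.diff(series, axis=0)               # motion between frames
    return np.linalg.norm(diffs, axis=2).mean()   # average over frames/points

def command_specific_response(series, cmd_start, region_idx, other_idx,
                              baseline_frames=30, ratio=2.0):
    """Flag a putatively purposeful response to a spoken command.

    After the command, landmarks in the instructed region (e.g. eyelid
    points for "open your eyes") must move substantially more than their
    own pre-command baseline, while unrelated landmarks must not — a crude
    stand-in for SeeMe's command-specificity check. Thresholds are
    illustrative, not taken from the study.
    """
    base, post = series[:baseline_frames], series[cmd_start:]
    region_gain = displacement(post[:, region_idx]) / (
        displacement(base[:, region_idx]) + 1e-9)
    other_gain = displacement(post[:, other_idx]) / (
        displacement(base[:, other_idx]) + 1e-9)
    # Movement must be both large and specific to the commanded region.
    return region_gain > ratio and region_gain > 1.5 * other_gain
```

The specificity test is the part that matters: a sedated patient’s random twitch raises both gains together, while a command-locked response raises only the gain for the instructed region.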

On the headline numbers: SeeMe detected eye-opening responses on average 4.1 days earlier than bedside clinicians, and it flagged mouth movements (smiles or tongue protrusion) in patients several days before those responses became obvious to human examiners. Across analyzable videos, SeeMe detected eye responses in 30 of 36 patients and mouth responses in 16 of 17. Patients who produced larger and more frequent micro-movements recorded by SeeMe tended to have better clinical outcomes at discharge—hinting that these tiny motions may carry prognostic information.

Why this matters: covert consciousness, explained

“Covert consciousness” (also called cognitive-motor dissociation) describes patients whose brains register—and sometimes act on—commands even though outward behavior looks absent. Prior neuroimaging and EEG studies have shown that roughly 15–25% of patients who appear behaviorally unresponsive nevertheless show brain signatures of awareness when tested with specialized scans. Those techniques are powerful but resource-intensive and not part of routine bedside practice. A camera-based approach could offer a simpler, cheaper way to screen patients for hidden signs of awareness, and to do it more often.

“It’s almost like a flickering light bulb,” Jan Claassen, a neurologist not involved with the project, told reporters—consciousness often returns in small, unreliable flashes before becoming steady again. Detecting those early flickers, even if they precede overt movement by days, can change how clinicians counsel families and when they start rehabilitation.

Strengths, skeptics and limitations

There’s a pragmatic elegance to SeeMe: it uses readily available cameras, standard experimental commands, and automated analysis—tools that could be deployed at the bedside without the infrastructure burden of fMRI. The study is also open access and peer reviewed in Communications Medicine, and the authors provide clear methods showing how the algorithm classifies command-specific responses.

But caveats matter. The study enrolled 37 patients with a mix of injury types; some sessions had to be skipped because of clinical instability or equipment issues. Sedative drugs, paralytics and mechanical ventilation can suppress or obscure tiny motor responses. The algorithm’s detection of micro-movement does not by itself prove full subjective experience—rather, it flags behavior that is more likely to be purposeful than purely random. The authors themselves call for larger, multi-center validation, integration with tools such as electromyography (to rule out non-neural muscle artifact), and careful testing across diverse patient populations before SeeMe could be used to make high-stakes decisions.

Real-world implications and ethical terrain

If further work confirms the finding, the clinical ripple effects could be substantial. Earlier detection of covert responses might nudge teams to start rehabilitation earlier, reconsider the timing of life-sustaining decisions, or open new avenues for communication—researchers are already exploring whether specific facial movements could eventually be used as yes/no signals. But that possibility raises thorny ethics: if a patient can indicate “yes” or “no” via tiny facial gestures, how do we validate and interpret those signals reliably? Who decides when a micro-response is sufficient to change goals of care? And how would families weigh probabilistic machine-detected signs against more familiar bedside examinations?

Clinicians and ethicists will also worry about false positives and the emotional weight of premature hope. The study’s authors and outside experts emphasize that SeeMe is not a silver bullet; it’s an additional data stream that must be integrated with neurological exams, imaging, EEG and the patient’s broader clinical context.

Where research goes next

The team plans to expand testing, refine classifiers to reduce noise, and probe whether patterned facial responses can be exploited to answer simple questions—turning detection into communication. Parallel lines of work are exploring EEG markers, sleep-pattern signatures and fMRI tasks as complementary methods to find hidden awareness. If multiple, independent signals converge, clinicians would have a stronger, more actionable case that a patient is partially aware even when outward signs are minimal.

Bottom line

SeeMe doesn’t claim to bring people back to consciousness. What it offers—backed by peer-reviewed data—is a new way to see the earliest, smallest behavioral whispers that the human eye can miss. For families pacing hospital corridors and clinicians tasked with fraught decisions, spotting those whispers sooner could make a practical difference. But turning that possibility into everyday practice will require more evidence, careful safeguards and clear ethical guardrails. The “flickering light bulb” of recovery is a fragile signal; this study suggests we now have a more sensitive meter to detect it.

