OpenAI’s latest bet says a lot about where the company and its CEO, Sam Altman, think the AI story goes next: straight into your brain, but without anyone picking up a scalpel. What looks like a routine funding announcement is really a window into a bigger ambition to fuse AI, biology and new kinds of hardware into a single, always-on interface layer between humans and machines.
At the center of this move is Merge Labs, a still-mostly-in-stealth brain-computer interface (BCI) startup co‑founded by Altman that just raised a seed round of roughly $250 million (reported figures run as high as $252 million), with OpenAI as a key investor alongside Bain Capital and Valve co‑founder Gabe Newell. That seed check is unusually big by any standard, but especially for a company that does not yet have a commercial product and openly frames its work on a decades-long timeline. OpenAI has not said how much of that round it contributed, but the company described BCIs as “an important new frontier” that can create a “natural, human‑centered way” to interact with AI and confirmed it will collaborate with Merge on scientific “foundation models” and other tools.
If you’re picturing a Neuralink‑style surgical implant with wires poking out of a shaved skull, Merge is trying to be the opposite of that. The company talks about “much less invasive” interfaces that connect to neurons using molecules rather than electrodes, while sending and receiving data via “deep‑reaching modalities like ultrasound” – think physics and bioengineering instead of neurosurgery and titanium. In practice, that likely means using engineered proteins or other molecular “reporters” to make specific neurons show up more clearly under ultrasound, so a scanner placed outside the head can pick up brain activity with more bandwidth and over more of the brain than today’s noninvasive EEG caps allow.
That combination of molecular tricks and ultrasound is also what makes Merge’s approach both exciting and speculative. Ultrasound through an intact skull loses resolution, so part of the bet is that clever biology can compensate enough to make the signal usable for rich interactions with AI – everything from more intuitive accessibility tools to controlling devices or even future AR systems by intent, not touch. Getting molecules into the right neurons safely, at scale, is its own grand challenge and drags the project into the messy, heavily regulated world of gene delivery, safety trials and long‑term monitoring.
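For a rough sense of the physics at play, here is a back-of-envelope sketch in Python. It is my own illustration, not anything Merge has published: ultrasound resolution is bounded roughly by wavelength, and wavelength shrinks as frequency rises, but attenuation through the skull also climbs with frequency, so you cannot simply crank the frequency up.

```python
# Back-of-envelope: wavelength-limited resolution of ultrasound in soft tissue.
# Illustrative only; real transcranial imaging also fights skull attenuation
# and aberration, both of which get worse at higher frequencies.

SPEED_OF_SOUND_TISSUE_M_S = 1540  # typical value for soft tissue

def wavelength_mm(frequency_mhz: float) -> float:
    """Wavelength in mm at a given frequency, a rough floor on resolution."""
    frequency_hz = frequency_mhz * 1e6
    return SPEED_OF_SOUND_TISSUE_M_S / frequency_hz * 1000

for f in (0.5, 1.0, 2.0, 5.0):
    print(f"{f:>4.1f} MHz -> ~{wavelength_mm(f):.2f} mm wavelength")
# Higher frequency buys finer resolution but loses penetration through bone,
# which is why the biology side of the bet (reporters that boost contrast)
# matters as much as the hardware.
```

The numbers land in the sub-millimeter to few-millimeter range: coarse next to an implanted electrode, but far finer than EEG, and that gap is roughly what the molecular reporters are supposed to help close.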
The timing is not accidental. Morgan Stanley has pegged the US BCI market alone at roughly $400 billion over the long run, driven first by medical needs such as paralysis, stroke, spinal cord injuries and other neurological conditions. For now, most commercial and research BCI work sits squarely in that medical lane, and a lot of early products will likely look like assistive devices, not sci‑fi headbands for healthy consumers. But players like Merge – and, in a more invasive way, Neuralink – are very clearly eyeing a second act in consumer, workplace and even military applications once the technology matures and regulators are less skittish.
OpenAI’s own blog language leans into that consumer future. It frames BCIs as a way for “anyone” to communicate, learn and interact with AI more naturally, and says AI systems themselves will be used to interpret intent and adapt to individual users, even when they’re working with noisy, partial brain signals. In other words, OpenAI does not just want to run the model in the cloud; it wants a say in the interface layer that reads your brain and turns that data into something a model can understand in real time.
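To make that concrete, here is a deliberately toy sketch of what “interpreting intent from noisy, partial signals” can look like. Everything in it – the intents, the waveforms, the numbers – is hypothetical and says nothing about how Merge or OpenAI actually do it:

```python
import numpy as np

# Toy sketch: decode "intent" from a noisy, partially observed signal by
# correlating it against known templates. Hypothetical; not Merge's method.

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)

# Pretend each intent ("select" vs "scroll") has a characteristic waveform.
templates = {
    "select": np.sin(2 * np.pi * 8 * t),
    "scroll": np.sin(2 * np.pi * 12 * t),
}

def observe(intent: str, noise: float = 1.0, dropout: float = 0.3):
    """Simulate a noisy recording with a fraction of samples missing."""
    signal = templates[intent] + noise * rng.standard_normal(t.size)
    mask = rng.random(t.size) > dropout  # True where the sample survived
    return signal, mask

def decode(signal, mask):
    """Pick the template best correlated with the surviving samples."""
    scores = {
        name: float(np.corrcoef(signal[mask], tpl[mask])[0, 1])
        for name, tpl in templates.items()
    }
    return max(scores, key=scores.get)

correct = sum(decode(*observe(intent)) == intent
              for intent in ("select", "scroll") for _ in range(100))
print(f"accuracy on noisy, 30%-dropped signals: {correct / 200:.0%}")
```

A real system would swap the hand-built templates for models learned and adapted per user, which is exactly where OpenAI’s pitch about AI interpreting intent comes in, but the shape of the problem (recovering a discrete intent from a degraded signal) is the same.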
That dual role is where the conflict‑of‑interest questions come in. Altman is both the CEO of OpenAI and a co‑founder of Merge, even though reports indicate he is not personally investing in the startup’s current round. To critics, this looks like yet another example of an already powerful AI leader setting up a dense web of overlapping bets – from OpenAI itself to crypto projects and now brain interfaces – that all feed the same long‑term vision of AI permeating every layer of human life.
At the same time, this fits Altman’s well‑documented pattern of backing technologies he thinks will define the next few decades, from AGI to radical longevity to novel hardware. A noninvasive, high‑bandwidth BCI that plays nicely with AI agents ticks pretty much every box on that list – it changes interfaces, extends human capability, and, if it works, becomes a foundational platform other companies have to build on. For OpenAI, writing a check into Merge is a way to keep a direct line into that hardware story rather than watching it develop from the sidelines as a mere software supplier.
Zoom out, and the deal slots neatly into OpenAI’s wider push to own more of the stack. The company has already committed roughly $1.4 trillion over eight years for compute and infrastructure, is collaborating with Broadcom on its own AI chips, and is exploring consumer devices with former Apple design chief Jony Ive. It has also put out a request for US manufacturing partners for everything from data center hardware to robotics, signaling it wants more control over how and where its future hardware is built.
That aggressive expansion comes with serious financial pressure. Reporting has suggested OpenAI could still be posting an operating loss well into 2028 and may not turn a profit until around 2030, fueling speculation that a deep‑pocketed partner like Microsoft or Amazon could eventually absorb the company if the financing environment turns. Against that backdrop, funding a moonshot BCI venture looks like a side bet on a future where OpenAI is not just renting out models but also licensing core technologies that sit literally at the interface between humans and AI.
It is also a reminder that BCI hype has burned big tech before. Meta poured money into brain‑typing research, bought neural‑interface startup CTRL‑Labs, and ultimately shelved the head‑mounted approach in favor of wrist‑based electromyography when it became clear that the road to commercial BCIs was longer and more complicated than glossy demos suggested. Merge itself is upfront that its approach could take “decades rather than years,” which is a refreshingly honest way of saying this is closer to a long‑horizon research lab than a near‑term product company.
For everyday users, the near future is not a telepathic ChatGPT headset but a steady creep of AI‑infused interfaces – smarter keyboards, eye‑tracking, wearables, accessibility tech – that move incrementally closer to “reading” intent instead of just keystrokes. Merge and OpenAI are betting that by the time biology and physics catch up, people will already be used to AI in their pockets, in their workflows and in their homes, making the leap to AI that can listen directly to brain activity feel less like science fiction and more like the next logical UX upgrade.
Whether that future feels empowering or invasive will depend on all the things that rarely show up in a funding announcement: who owns the data, how consent and safety are enforced, how regulators treat molecular and ultrasound‑based brain tech, and whether everyday people see any benefit beyond a cooler way to click a button. What is clear, though, is that OpenAI and Sam Altman are no longer content with AI that just lives on servers and screens; they are now writing checks to explore what happens when the boundary between model and mind gets a lot thinner – even if it takes decades to get there.