They didn’t show it. They didn’t tweet a photo, leak a spec sheet, or post a glossy promo clip. But onstage at Emerson Collective’s 2025 Demo Day, OpenAI CEO Sam Altman and designer Jony Ive did something almost as rare in today’s attention economy: they confirmed the work, described the feel, and — with that peculiar mix of restraint and theatricality good designers favor — left most of the mystery intact.
Altman and Ive told host Laurene Powell Jobs that the device they’ve been quietly building has moved from sketches and mockups into hardware prototypes. When asked about timing, Ive offered one of the few concrete markers: the thing could arrive in “less than” two years. If that sounds like a long time for a small gadget, consider the ambition behind the words: this isn’t a new phone; it’s an attempt to rethink how a consumer might live with generative AI when the screen is not the center of gravity.
The prototype — not many details, but a clear aesthetic
Onstage, the pair leaned into description rather than demonstration. Journalists who covered the conversation reported two consistent impressions: the device is small — “roughly the size of a smartphone,” per multiple accounts — and likely screen-free, an intentionally quieter presence than the always-blaring phone. Altman framed the object as “simple and beautiful and playful,” a product whose emotional appeal arrives before its spec sheet. Ive, long the evangelist for tactile humility in design, said he loves “solutions that teeter on appearing almost naive in their simplicity,” and wants objects people reach for “almost carelessly,” without intimidation.
The conversation included one memorable shorthand for design success: Altman described an earlier prototype that simply didn’t invite touch — “I did not have any feeling of, ‘I want to pick up that thing and take a bite out of it,’ and then finally we got there,” he said — and reporters quickly picked up on the playful language. Business Insider and others distilled this into what some outlets called a “lick test”: a crude shorthand for whether a design crosses into that rare emotional territory where you want to physically interact with an object. The phrase is silly on its face, but revealing: the team is explicitly chasing emotional resonance, not only technical capability.
Why this matters (and why the silence is deliberate)
OpenAI’s push into hardware is not a hobby project. It follows the company’s announced merger with Jony Ive’s io team — a high-value deal and a public signal that OpenAI intends to own not just the software that drives models, but the physical product those models inhabit. The io acquisition and subsequent integration into OpenAI’s product teams give this effort formal muscle: designers, industrial know-how, and a mandate to try something the market hasn’t yet nailed.
That said, moving from prototype to a shipping product has tripped up several high-profile challengers trying to unseat the phone as the place where we live with digital assistants. Humane’s AI Pin and the Rabbit R1 are recent reminders that being smart in a demo is not the same as being useful in daily life — both devices struggled with performance, user expectations, or sales after launch. OpenAI’s experiment, then, has to clear two bars at once: i) the hardware must feel effortless and emotionally inviting, and ii) the software must be reliably, frictionlessly useful in real-world conditions.
The design brief, as heard onstage
Two themes were repeated during the interview: calm and inevitability. Altman said he hopes people will look at the finished object and think, “That’s it!” — the kind of reaction that makes a product feel inevitable rather than revolutionary. Ive pushed the same note from the craftsmanship side: great design should look simple without being simplistic, and should make users feel unconcerned about mastery or friction. It’s a design brief that sets the project apart from flashy, feature-heavy gadgets.
That approach helps explain why the team has erred on the side of secrecy. When you’re trying to make something that relies as much on feeling as on function, early images and leaks can lock public perception into the wrong expectations. Better, from a designer’s point of view, to hold the object back until form and behavior align. Onstage, both men emphasized iteration: that a version existed that didn’t “feel” right, and that the team kept going until it did.
Market realities: the uphill climb
There’s a practical side to all this idealism. Building a new category device means figuring out production, distribution, latency and compute (will the device mostly stream to cloud models or run offline?), privacy and security, and, critically, a business model for a product that may not look like a phone or a watch. OpenAI now has design authority and deep pockets (and a mandate), but history shows that hardware bets are risky. Competitors’ missteps — underwhelming user experience, disappointing performance, overambitious promises — are cautionary tales. If OpenAI wants this to be more than a curiosity, it needs the hardware to be both emotionally magnetic and operationally dependable.
What success would look like
Success won’t be a single headline. It will be something quieter: a device people keep in a pocket or a bag because it genuinely makes everyday tasks easier — not because it offers an extra screen, but because it changes the relationship we have with information and assistance. That means low friction for voice, a sense of context (location, calendar, ambient cues), strong on-device privacy or clear, trustworthy cloud tradeoffs, and a price point that matches perceived value. The team’s explicit focus on “playful” and “intuitive” design suggests they understand the non-technical bar: people must want to touch it.
The near term
Right now, there’s a prototype and a timeline that opens the door to a 2027-era shipping window if everything moves as stated. But prototypes are not product announcements — and in tech, “less than two years” often contains more than a handful of caveats. Expect more conversations, more demonstrations in controlled settings, and — eventually — the kind of slow-burn reveal that Ive has favored in the past: measured, tactile, and built to be understood by holding it. The primary place to watch is the recording of the Emerson Collective conversation; until OpenAI chooses to show the device itself, that onstage interview is the best window we have.
Altman and Ive didn’t give us a spec list. They gave us a design thesis and a timetable. That, in a moment where everyone wants a new gadget to break the phone’s monopoly, may be more interesting — and harder to deliver — than any leaked render. If the prototype truly earns the “lick test” they joked about, it will have done something few products do: it will have made the future feel inevitable. If it fails, it will join a long list of ambitious hardware experiments. Either way, the quietness around the work is, for now, the point: they’re designing not to win an attention contest, but to earn a place in people’s hands.