Microsoft quietly rolled out one of the more practical AI upgrades to OneNote this week. If you’ve ever been frustrated that Copilot couldn’t “see” your carefully organized tables or the screenshots you pasted into your notes, that frustration is now officially over.
For a lot of people, OneNote is basically a digital brain dump – a place where everything lands, from typed meeting minutes to hand-drawn diagrams, pasted screenshots, reservation confirmations, and color-coded to-do lists. The problem was that Copilot, Microsoft’s AI assistant embedded inside OneNote, only understood part of that picture. It could read your typed words just fine, but everything else? It essentially walked right past it. That gap has now been addressed in a meaningful way.
Microsoft officially confirmed on April 29, 2026, that Copilot in OneNote can now reason over a much broader range of content on your notebook pages – specifically images, tables, and note tags – in addition to everything it could already understand. This might sound like a minor tweak on the surface, but for anyone who uses OneNote seriously, it’s actually a pretty significant shift in how useful the AI assistant can be on a day-to-day basis.
To understand why this matters, it helps to look at where Copilot in OneNote started. Microsoft first launched the feature back in November 2023, and at that point, it was essentially limited to typed text only. If you had a mix of written notes and handwritten ink on the same page, Copilot would just ignore the ink half. That was a real limitation, especially for tablet users and students who prefer writing by hand. Microsoft addressed that in mid-2024, when it rolled out support for inked, handwritten notes – meaning Copilot could now analyze both typed and handwritten content, summarize it, answer questions about it, and even generate to-do lists from it. That update was a solid step forward, and a lot of OneNote fans appreciated it.
But the journey didn’t stop there. Even after the ink support arrived, users were still running into a wall when it came to other types of content. Say you had a project planning page with a formatted table showing timelines and responsibilities – Copilot still didn’t recognize the table structure. Or if you’d pasted in a screenshot of a product comparison or a photo of a whiteboard session, Copilot had no idea what was in that image. It simply couldn’t factor those elements into its responses. You’d ask it a question about your notes and get an answer that, while technically correct based on the text it could see, missed half the context sitting right there on the page.
The new update changes all of that. Now, when you ask Copilot a question in OneNote, it reasons over everything on the page – typed text, handwritten ink, images, tables, and even note tags like the little checkboxes and flags you use to mark tasks and priorities. Crucially, you don’t need to change anything about how you take notes or how you interact with Copilot. You just ask your question in plain, natural language, and the AI will pull context from all the different types of content available to build a smarter, more accurate response.
One of the more compelling scenarios Microsoft highlights is travel planning – something a lot of people genuinely do in OneNote. Imagine you’ve built a notebook page for an upcoming trip, with a table laying out your day-by-day itinerary, plus images like destination photos, screenshots of hotel bookings, and ticket confirmations. Before this update, asking Copilot something like “Are there any gaps in my schedule?” would have returned an incomplete answer because it couldn’t see the table or the reservation screenshots. Now you can ask, “Based on the itinerary and reservation details on this page, are there any activities or bookings I may have missed for this trip?” – and Copilot will actually be able to process all of that content together and give you a genuinely useful answer.
The real-world implications extend well beyond travel planning. Think about how many professionals use OneNote for meeting notes that include both typed minutes and pasted screenshots from a shared screen. Or researchers who keep image archives alongside written analysis. Or students who photograph textbook diagrams and embed them alongside their own typed notes. In all of these cases, Copilot was previously operating with blinders on. Now it has a much fuller picture of what’s actually on the page.
What Microsoft has essentially done here is bring multimodal understanding to OneNote – the same capability that has been transforming AI assistants across the industry. The broader push in Microsoft 365 Copilot has been heading in this direction for a while; earlier this year, the company also announced that Copilot could interpret embedded images inside Word documents, PowerPoint files, and PDFs, extracting insights from charts, diagrams, and screenshots to give more contextually complete answers. OneNote is now catching up to that same standard, making the experience feel more consistent across the Microsoft 365 suite.
There’s also an important conversational dimension to all of this that’s worth calling out. The update doesn’t just improve first-answer accuracy – it also enables a more natural follow-up conversation. Because Copilot now has richer context from the start, you can ask one question, get an answer, and then ask a follow-up without needing to rephrase or manually point Copilot toward content it previously couldn’t see. That’s a genuinely more natural way to interact with your notes, and it’s closer to how you’d talk to a colleague who has actually read everything on the page.
As for availability, the feature is rolling out to users who have a Microsoft 365 Copilot license – either the Premium or Basic tier. On Windows, you’ll need OneNote version 2601 (Build 19628.20128) or later. On Mac and iOS, you’re looking at version 16.106 (Build 26020821) or later. There’s no manual setup or settings change required – the improved understanding kicks in automatically when you start chatting with Copilot on a page that has mixed content types.
It’s a reminder that some of the most useful AI upgrades aren’t the headline-grabbing, full-product-overhaul kind. Sometimes it’s just the AI finally being able to read the tables you’ve been putting in your notes for the last two years. That kind of quiet, practical progress is often what actually makes a productivity tool worth using every day.