If you’ve paid any attention to AI news in the past year, you’ve probably noticed how quickly Microsoft—in partnership with OpenAI and its own Azure AI teams—has been pushing its Copilot platform into nearly every corner of its software ecosystem. But if you’ve been following developments more closely, you may have spotted something that sits a step ahead of even the mainstream Copilot features: Microsoft Copilot Labs. This is Microsoft’s open-invite, ever-changing workshop for new and experimental AI tools, a kind of digital test kitchen where early features are served up to ordinary users for hands-on exploration.
Launched in 2024 and growing rapidly since, Copilot Labs isn’t just another beta program. It’s where Microsoft takes off the training wheels and openly admits: “Here, things are weird, sometimes broken, and that’s how we learn.” For users, that means the chance to try things like instant 3D model generation, emotionally aware AI voices, or browser-based AI gaming demos long before they become mainstream—or fail quietly in a corner of Redmond and never see the light of day. For Microsoft, it’s a critical tool in responsible AI development: gathering diverse user feedback, finding real-world risks and bugs, and iterating fast in public as AI’s boundaries shift by the month.
In this article, we’ll dive deep into Copilot Labs as of September 2025: how it works, what it offers, how it aligns with the larger Copilot and AI landscape, and how users can get involved. We’ll also explore the implications for privacy, security, and responsible AI, and compare Labs to other experimental platforms in the fast-evolving world of artificial intelligence.
Origins and purpose: why Copilot Labs exists
Microsoft’s Copilot range—stretching from GitHub to Microsoft 365 to standalone web apps—had already overhauled productivity and digital assistance. But with generative AI evolving at breakneck speed, companies like Microsoft face a tough dilemma: how do you launch bold, risky, and creative new AI features without risking your whole brand if they flop or misfire? The solution: a separate, clearly labeled experimental area, open to actual users, where the company can iterate fast and gather feedback before pushing new ideas to the masses.
Copilot Labs was announced in late 2024 as precisely this “lab environment.” As Microsoft described in their official launch, “Labs helps us test bold ideas early, learn quickly, and shape future innovation. It’s a space where we bridge research and product, working closely with users to co-develop features that address emerging needs.”
For Microsoft, Labs offers several benefits:
- Rapid iteration in the open: Features can be A/B tested, tweaked, or discarded outright before they become mainline.
- User-generated feedback at scale: Diverse testers help spot biases, breakage, and creative possibilities, creating a reality check for developer assumptions.
- A channel for responsible AI: By explicitly labeling features as experimental, Microsoft sets clearer expectations and can control risk, especially around issues like privacy, fairness, accessibility, and unintended outcomes.
- Market differentiation: Copilot Labs gives Microsoft a competitive edge, putting it toe-to-toe with “Labs” areas at Google, OpenAI, and other AI leaders, while inviting its large user base to co-create new AI experiences.
User access and onboarding
So, who actually gets into Copilot Labs, and how do you start using it? As of 2025, Copilot Labs is freely available to anyone with a Microsoft account, though some of the newest or most experimental features are restricted to users in select regions or those with a Copilot Pro subscription. The platform is designed for transparency and ease of entry—even if you’ve never played with AI before.
How to access Copilot Labs
- Go to the Copilot Labs web portal: copilot.microsoft.com/labs
- Sign in with your Microsoft account: A free account is sufficient for most features, but paid Copilot Pro users may get early access to some experiments.
- Browse available experiments: The Labs homepage showcases active AI experiments, often changing each month.
- Select and launch an experiment: Each experiment includes a “Try now” button, a brief description, and links to community discussions or feedback forms.
- Explore and provide feedback: Users are encouraged to test, provide real-world input, and submit bug reports or suggestions—directly influencing the shape of future AI development.
There’s no need to download software; all experiments run in-browser, sometimes spawning a new tab for more complex features (like 3D model generation or game demos). Most experiments are open globally, though a few are restricted to users in regions like the US, UK, and Canada (see “Regional Availability” below for more details).
Paid users (Copilot Pro, Copilot for Microsoft 365) may see even newer features and are often placed in “test cohorts” for very early prototypes, but the overall goal—unlike private beta programs—remains accessibility and open feedback from the broad Copilot user base.
User interface and experience
Copilot Labs strives for a straightforward, modern user interface—even as the tools themselves may be “rough around the edges.” When users sign in, they’re met by a clean dashboard displaying the list of ongoing experiments, each with a title card, short description, and a call-to-action button. Each experiment opens in its own web view (sometimes as a self-contained app, sometimes in the sidebar), and presents a clear, guided workflow to let users try out different scenarios.

Key UX principles
- Simplicity: UI design avoids clutter; only the minimum controls needed for the current experiment are shown.
- Transparency: Each feature is marked as experimental, with notes on intended use, known limitations, and opportunities to provide feedback.
- Real-time feedback: Users usually receive immediate visual or audio responses, making experimentation feel fluid and engaging.
- Onboarding for new features: For more complex labs (like Copilot 3D or Copilot Actions), embedded tooltips or short guides walk users through first-time setup or data input.
- Accessibility: Consistent with Microsoft’s design philosophy, Copilot Labs includes support for keyboard navigation, screen readers, and other accessibility features to ensure a broad range of users can participate.
The overall experience is closer to trying out a newly installed app than to a “beta” toggle buried in a settings menu. Labs experiments aim to be interactive and visual, often inviting creative play and direct manipulation rather than pure Q&A.
Feedback and community engagement
A core pillar of Copilot Labs is the feedback loop between Microsoft and its users. At every stage, users are prompted to share experiences, report bugs, and suggest improvements via built-in feedback forms, direct links in each experiment, or through Microsoft’s growing Copilot community areas, including Discord and the Microsoft Community Hub.
How feedback shapes Copilot Labs
- Real-time bug and issue reporting: Users can submit glitches, unexpected outputs, or accessibility barriers directly from experiment windows.
- Feature requests and upvotes: Community members can propose tweaks or entirely new labs, collaborate with other testers, and “upvote” popular requests for prioritization.
- Discussion forums and Discord: Ongoing community discussions help surface not just technical bugs, but also ethical questions, creative use cases, and emergent behaviors that Microsoft’s own teams might not predict.
- Co-development ethos: Microsoft openly states it “learns in the open” and credits Labs participants in shaping which features “graduate” to the main Copilot release and which are retired.
For users, this turns participation into a two-way street—both a chance to try the wildest new AI features, and to directly influence the direction of one of the world’s largest software platforms.
Copilot Labs—major experiments as of September 2025
| Experiment | Description | Key Features | Access/Region |
|---|---|---|---|
| Copilot Audio Expressions | Turns text input into emotionally expressive, lifelike audio narration | Emotive and Story modes, granular voice/style choices | Available globally (English only) |
| Copilot 3D | Generates downloadable 3D models from a single 2D image | One-click image-to-3D, GLB output, My Creations library | Available globally |
| Copilot Appearance | Gives Copilot a real-time conversational face and voice with visual expressions | Animated avatar, synchronized speech, conversational memory | US, UK, Canada (test group only) |
| Copilot Actions | AI “agent” can perform web tasks (like reservations, bookings) on the user’s behalf | Automated browsing, form filling, limited by security guardrails | Select countries, not EU-wide |
| Copilot Vision | Enables Copilot to “see” the user’s screen or camera and give context-aware advice | Visual understanding, opt-in only, privacy-focused | US, UK, Canada, Australia, NZ |
| Copilot Gaming Experiences | Browser-based demos where AI generates and manages gameplay in real time | Real-time scene generation, Quake II demo, rapid prototyping | Available worldwide, 18+ |
| Think Deeper | Advanced reasoning for complex problems, step-by-step responses | Uses the latest OpenAI o1 reasoning models, limited usage quota | US, UK, Canada, AU, NZ |
Each of these experiments is discussed in detail below, with hands-on descriptions and critical analysis of user experience, capabilities, and current limitations.
Copilot Audio Expressions: AI that speaks (and feels)
One of the showpieces of Copilot Labs as of late 2025 is Copilot Audio Expressions, an advanced AI voice generator that goes leagues beyond monotone text-to-speech. The tool can transform any user input—be it a script, a narration for a video, or even a short story—into a rich, emotionally nuanced audio track, effectively giving everyday users the power to create personalized audio content with little effort.

Key features
- Two expressive modes: Emotive (user chooses style and voice; up to 59 sec) and Story (auto-selects voices and style; up to 90 sec, supports narrative dialog).
- Choice of nearly a dozen synthetic voices: Including male/female, different accents, and speaking speeds.
- Automatic creative enhancements: The AI can subtly rephrase or elaborate on scripts to make spoken audio more engaging.
- No login required for short samples: Output downloads as MP3 for playback on any device.
- Use Cases: Accessibility (narrating documents), quick podcast intros, video content, storytelling with kids, prototyping for creators.
User experience
We found that even basic scripts resulted in surprisingly natural, lively speech—complete with emotional inflection, pauses, and even subtle improvisations not present in the original text. In Story mode, voiced character differentiation (think: a cat speaking in a British accent in a kids’ story) makes it easy for creators to inject charm and realism into voiceovers without needing any recording gear.
Limitations
- English only (for now): No support for other languages as of September 2025.
- Duration limits: 59 seconds for Emotive, 90 seconds for Story.
- Creative liberties: The AI sometimes rephrases or “improves” the script—which is great for most, but could frustrate those needing word-for-word accuracy.
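Those duration caps are easy to trip over when planning a script. They can be captured in a tiny helper that suggests which mode fits a given narration length; this is purely illustrative, and `pick_mode` is our own naming, not part of any Microsoft API:

```python
# Duration caps as documented in Labs, September 2025.
EMOTIVE_MAX_S = 59  # Emotive mode: user picks voice and style
STORY_MAX_S = 90    # Story mode: auto voice casting, narrative dialog

def pick_mode(estimated_seconds: float) -> str:
    """Suggest which Audio Expressions mode fits a script's estimated length."""
    if estimated_seconds <= EMOTIVE_MAX_S:
        return "Emotive"
    if estimated_seconds <= STORY_MAX_S:
        return "Story"
    return "split the script"  # over 90s: no current mode accepts it

print(pick_mode(45), pick_mode(75), pick_mode(120))
# Emotive Story split the script
```

A rough rule of thumb of about 150 spoken words per minute makes it easy to estimate `estimated_seconds` from a word count before pasting a script in.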
Real-world impact
By lowering the technical and cost barriers to expressive voiceover, Copilot Audio Expressions has drawn favorable comparisons to professional SaaS text-to-speech tools costing far more. Content creators—especially YouTubers, teachers, indie podcasters, and marketers—can quickly test narration ideas or prototype content, while accessibility advocates see its value in making text content audibly engaging for people with reading challenges.
Copilot 3D: one-click 3D model generation
Turning a flat photograph into a fully navigable 3D model once required high-end software and pro-level skills. Not anymore. Copilot 3D in Labs brings one of the most jaw-dropping demos of accessible AI to the general public. Upload almost any 2D image (PNG or JPG, up to 10MB)—and within seconds, download a printable, game-ready 3D object in GLB format.

Key features
- Image-to-3D AI: Analyzes depth, geometry, and color from a single image.
- Output format: GLB file, instantly usable in game engines, AR/VR apps, or for 3D printing.
- Auto Library: “My Creations” vault stores models for 28 days (manual deletion possible).
- No software install: Runs fully in-browser; desktop browsers recommended for best performance.
- Ideal image types: Clear, single-subject photos (furniture, objects, simple shapes); busy scenes can still be challenging.
User experience
We were impressed by the speed and ease of the process: find or shoot a product photo, upload, wait 5–15 seconds, and boom—you have a 3D object you can spin, inspect, or download. For teachers, hobbyists, game developers, and rapid prototypers, this cuts hours off what would otherwise be a painstaking process. It’s not just a party trick: Copilot 3D has been used for game jams, teaching geometry, and even by indie devs to prototype AR assets.
Caveats
- No text-to-3D yet: You can’t “describe” an object and get a 3D model—uploads only.
- Performance with animals, crowded or complex photos: The AI performs best with simple, well-lit, high-contrast images.
- Privacy and copyright: Microsoft blocks uploads of recognizable faces, celebrities, or copyrighted art, and does not use uploads to train future models.
- No direct integration with desktop 3D apps yet: GLB is an industry-standard format, though, so models can be imported into Blender or MeshLab for advanced editing.
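Because GLB is the binary container form of glTF 2.0, a downloaded model is easy to sanity-check before importing it elsewhere: every valid file starts with a fixed 12-byte header. The sketch below (standard library only; `read_glb_header` is our own helper, not a Copilot API) builds a minimal stand-in header and parses it:

```python
import struct

GLB_MAGIC = 0x46546C67  # the ASCII bytes "glTF", read little-endian

def read_glb_header(data: bytes) -> tuple[int, int]:
    """Parse the 12-byte GLB header: magic, container version, total length."""
    magic, version, length = struct.unpack_from("<III", data, 0)
    if magic != GLB_MAGIC:
        raise ValueError("not a GLB file")
    return version, length

# Minimal stand-in header for demonstration; a real Copilot 3D export
# continues with JSON and binary chunks after these 12 bytes.
sample = struct.pack("<III", GLB_MAGIC, 2, 12)
print(read_glb_header(sample))  # (2, 12)
```

Running the same check on the first 12 bytes of an actual download from “My Creations” confirms the file arrived intact before you hand it to Blender or a game engine.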
Real-world impact
For years, 3D asset creation was a barrier for indie game devs or students. Copilot 3D democratizes this—suddenly, anyone with a digital photo and a browser can generate props, teaching models, or visual assets quickly. For AR/VR, quick prototyping, and educational animation, the implications are massive.
Copilot Appearance: giving AI a digital “face”
Remember Clippy from ‘90s Microsoft Office? Copilot Appearance is the sophisticated, non-intrusive AI cousin you never knew you wanted. This experimental feature puts a real-time, visually expressive avatar onto Copilot—one that blinks, smiles, raises eyebrows, looks genuinely surprised, and syncs perfectly with its AI-generated speech. The end goal? To make Copilot feel more like a collaborative presence and less like an impersonal chatbot.

Key features
- Real-time facial animation: The avatar matches speech with smiles, nods, or looks of surprise.
- Conversational memory: References earlier conversation topics and maintains a contextual thread.
- Voice and face integration: Speech synchronizes perfectly with expressions.
- Web-only/test group: Currently only in Copilot Labs, and only for selected users in the US, UK, and Canada.
User experience
Users who’ve tried Appearance describe it as “having a friendly digital co-worker.” Unlike static profile images or pixel-art bots, Copilot’s avatar feels dynamic and human-like—but without the sometimes uncanny “over-realism” of certain virtual humans. Responses are punctuated by facial cues (nods for agreement, eyebrow raises for surprise), making conversations feel natural and, yes, a bit more fun. Microsoft execs say the avatar will eventually age and pick up “digital patina”—wear and tear, scuffs, little signs of shared history—as you interact over time.
Limitations
- Region-restricted: Only available to a select group of Labs beta users in North America and the UK.
- Web-only: No mobile or Windows desktop integration yet.
- Not (yet) customizable: Future plans include different visual styles and workplace avatars.
Broader impact
Copilot Appearance points toward a future where virtual agents become persistent, humanized digital teammates—ideal for remote work, inclusive communication, or simply making AI less intimidating. It also has clear accessibility potential for users who prefer multimodal (voice + visual) interaction. For now, though, it’s still very much a work in progress—and a delightful preview of where digital assistance is headed.
Copilot Actions: the AI that does, not just says
For all the talk of “agents,” today’s most popular AI chatbots mainly answer questions—they don’t actually do things for you on the web. Enter Copilot Actions, perhaps Microsoft’s boldest bet to make Copilot act like “a true AI agent”—capable of booking reservations, navigating shopping flows, or automating repetitive web tasks, all triggered by a single user prompt.

How Copilot Actions works
- Cloud-based web automation: Whenever you launch a task, Copilot spins up a secure, sandboxed cloud browser, then programmatically clicks, types, and navigates as if a human were at the keyboard (but isolated from your actual machine).
- Supported scenarios: Booking event tickets, reserving restaurants, sending simple gifts, pre-filling forms, and automating basic site flows.
- Pre-built actions and plugins: Many common workflows (OpenTable, Booking.com, Dynamics, Salesforce, etc.) are supported out of the box, with options for custom-constructed “actions” by power users or businesses.
- Security and privacy-first: All sessions are ephemeral, never touch your local files, and are severely limited when it comes to CAPTCHAs, multi-factor authentication, and anything involving sensitive/PIN-protected data.
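The hand-off pattern described above, where the agent automates what it safely can and pauses at human-only steps like MFA or payment, can be sketched in a few lines. This is purely illustrative: the `Step` type and `run_action` loop are our own invention, not Microsoft’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    needs_human: bool = False  # e.g. MFA, CAPTCHA, payment confirmation

def run_action(steps: list[Step]) -> tuple[list[str], list[str]]:
    """Split a task into what the sandboxed agent completes automatically
    and what is handed back to the user for manual verification."""
    completed, handed_back = [], []
    for step in steps:
        (handed_back if step.needs_human else completed).append(step.name)
    return completed, handed_back

booking = [
    Step("search restaurants"),
    Step("select time slot"),
    Step("fill contact form"),
    Step("confirm payment", needs_human=True),
]
done, waiting = run_action(booking)
print(done)     # ['search restaurants', 'select time slot', 'fill contact form']
print(waiting)  # ['confirm payment']
```

The split mirrors what you see in the Actions split-pane view: the cloud browser races through the routine steps, then the chat sidebar prompts you for the one step it is not allowed to finish on its own.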
User experience
The system is genuinely innovative—ask Copilot to “book a table for two at a Japanese restaurant at 8 pm,” and it’ll handle much of the workflow automatically, then pause for you to complete final verification steps (like phone number or payment). The cloud session’s split-pane view lets you see both the automated browser and chat sidebar. For now, Actions is often slower than doing the task manually, but the burden shifts to the AI, freeing the user from tedious web flows.
Limitations and caveats
- Not foolproof: Human steps (MFA, payment forms, CAPTCHA) still require user intervention; “full autonomy” is a long-term goal.
- Privacy questions: Microsoft emphasizes that sessions are volatile and that screenshots are not used for AI training, but transparency about telemetry is a work in progress.
- Regional restrictions: Actions are disabled in some countries (notably the EU) due to regulatory requirements.
- Reliability: Some beta users report timeouts, task failures, and inconsistent performance.
Strategic significance
While Copilot Actions is still “lab-stage,” it’s a monumental step. By combining conversational AI with cloud-based automation, Microsoft is moving toward a future where digital assistants handle not just information, but action—real workflows, purchases, and bookings. If it lives up to its promise (and clears privacy hurdles), it could redefine how millions interact with the web.
Copilot Vision: giving AI eyes on your world
The next frontier for Copilot is “seeing” the context you’re working in. Copilot Vision lets the AI analyze what’s on your browser tab, desktop window, or mobile camera feed—then answer questions, provide insights, or highlight actionable information in real time. Think of this as the missing link between AI and modern multitasking: it’s no longer just a text box, but a real-time, context-aware partner.

Key features
- Visual awareness: AI sees your desktop/app window, browser tab, or phone camera—then instantly answers questions or highlights next steps.
- User-initiated sessions: “Vision” is opt-in only, clearly marked by an icon, and times out after inactivity.
- No persistent data: Images, audio, and biometric data are processed for the current response only, never stored or used for model training.
- Safety filters: Vision blocks itself on paywalls, “sensitive” or DRM-protected sites, and can only be used on an expanding list of approved domains for now.
- Multimodal conversation: Works in Edge, on Windows, and via mobile—with ongoing expansion to more platforms.
- Biometric consent: For features requiring face or hand analysis, explicit consent is required from the user.
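The opt-in, auto-expiring behavior above amounts to a simple session pattern: activity keeps the session alive, and a stretch of inactivity ends it, after which nothing persists. A minimal sketch, using our own hypothetical `VisionSession` class rather than anything from Microsoft’s code, with an injectable clock so the behavior is deterministic:

```python
import time

class VisionSession:
    """Illustrative opt-in session that expires after inactivity."""

    def __init__(self, timeout_s: float, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock
        self.last_activity = clock()
        self.active = True  # the user explicitly started the session

    def touch(self) -> None:
        """Record user activity, keeping a live session alive."""
        if self.is_active():
            self.last_activity = self.clock()

    def is_active(self) -> bool:
        if self.active and self.clock() - self.last_activity > self.timeout_s:
            self.active = False  # expired; no data persists past this point
        return self.active

t = [0.0]  # fake clock for a deterministic demo
session = VisionSession(timeout_s=30, clock=lambda: t[0])
t[0] = 10
print(session.is_active())  # True: still within the inactivity window
t[0] = 50
print(session.is_active())  # False: timed out after 30s of inactivity
```

Once expired, the session cannot be revived by `touch`; the user has to opt in again, which matches the “user-initiated, clearly marked, times out” description above.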
User experience
Copilot Vision is less about “remote control” and more about guidance—think asking “What’s the main takeaway from this slide deck?” or “How do I fix this spreadsheet formula?” while sharing your screen. It highlights relevant sections, explains on-screen content, and draws on knowledge of what you’re working on—“contextualizing” your AI chat like never before. Early testers have praised its value in multitasking, technical support, and real-time learning/teaching.
Security and privacy
Vision takes privacy extremely seriously—nothing is stored, processed images are memory-resident only, and access is opt-in with full user control. On shared or sensitive content (and all “not safe for AI” sites), it simply turns off or refuses to respond. For EU and privacy-focused users, this marks a clear distinction from tools like Windows Recall, and Microsoft’s documentation stresses that Vision is strictly a tool for in-the-moment advice, not surveillance or data retention.
Broader implications
By adding “eyes” to AI, Copilot Vision is setting the stage for much more fluid, real-world AI interaction: think live translation in meetings, on-the-fly technical troubleshooting, or screen-based accessibility help. This mirrors experiments by OpenAI, Google, and others—but Microsoft’s focus on opt-in privacy and contextual safety is setting early benchmarks for responsible deployment.
Copilot Gaming Experiences: playing with AI-generated worlds
For gamers (and anyone who’s ever wondered about AI in entertainment), Copilot Gaming Experiences is the sandbox for AI-generated games and play. The most headline-grabbing demo? A browser-playable recreation of Quake II, generated frame-by-frame by Microsoft’s new “Muse” AI model—not the original game engine.

How it works
- Real-time AI model graphics: Gameplay scenes are synthesized using Muse AI, not pre-rendered or traditionally coded.
- Playable retro levels: Players can explore a full (but time-limited) Quake II map, move, interact, and even fight AI-generated enemies—right in the browser.
- Available to all (18+): Sign in to Copilot Labs, confirm age, and try the demo.
- Beyond retro: Tech previews hint at upcoming demos with new genres, real-time user directions, and even personalized narratives driven by AI decisions.
User experience
Games load in-browser, sometimes after a bit of delay, and then function much like classic shooters—with simplified controls and slightly “fuzzier” visuals due to real-time generation. Play sessions are brief (under a minute), but showcase where AI-generated art and play are heading. For devs and digital artists, the implications for prototyping, preservation, and modding are enormous: imagine breathing new life into old games or generating new gameplay loops from simple prompts.
Broader impact
For now, these demos are still “toys,” but they signal Microsoft’s intention to weave generative AI directly into the fabric of game development and interactive entertainment. The future likely includes integration with Copilot for Windows gaming, AI-powered NPCs, dynamic story creation, and live world-building—pushing the boundaries of what games can be.
Evolution and future roadmap
Since its launch, Copilot Labs has grown from a small handful of demos (Think Deeper and Copilot Vision were among the first) into a multi-experiment playground covering audio, vision, gaming, web automation, and beyond. Where next? Microsoft’s roadmap is clear:
- More experiments, faster: The Labs platform will remain dynamic, with new features arriving every few weeks and successful ones “graduating” to mainstream Copilot, while others cycle out or go back to the drawing board.
- Broader integration: Experiments that work—like Copilot 3D or Audio Expressions—could become permanent fixtures, eventually linking with Microsoft 365, gaming platforms, and even Azure developer tools.
- Deeper agent capabilities: Copilot Actions is a sign that Microsoft sees the future in autonomous agents—AIs that can proactively execute multi-step tasks, navigate the web, and function as smart assistants, not just chatbots.
- Privacy and compliance at the forefront: As user data flows multiply, expect Labs features to adopt ever-stricter privacy controls and regulatory compliance—already a focus given AI’s complex legal landscape.
- Global expansion: Microsoft continues to roll out Labs and its experiments to new regions and languages as local laws, feedback, and technical readiness allow.
Comparison with other experimental AI platforms
Microsoft isn’t alone in hosting a “test site” for bleeding-edge AI features. Google Labs, OpenAI’s “alpha” previews, and various independent AI startup sandboxes are now common across the industry.
- Google Labs: Similar in ethos, Google’s Labs (for Gemini and Workspace AI) lets power users try new generative features with early feedback mechanisms. Google’s “Labs” tends to emphasize collaboration with academics and developers, while Microsoft’s Copilot Labs is more visibly integrated into mainstream user accounts.
- OpenAI’s previews: OpenAI frequently releases alpha/beta tools to its Plus users first—such as Voice Mode, Sora video generation, and Agents. These are more closed than Copilot Labs but share the principle of controlled releases and rapid iteration.
- Other sandboxes (Anthropic, Meta, etc.): Smaller platforms run Labs for prompt injection, custom agent testing, or agent procedural logic. These are often developer-oriented rather than “consumer labs.”
Microsoft’s Copilot Labs sets itself apart through its heavy focus on practical, end-user features (voice, 3D, gaming, web automation) and broader community engagement—including Discord integration and feedback hubs.
Integration with Microsoft 365 Copilot and Copilot Studio
Perhaps the greatest strength of Copilot Labs is that its most successful features are just a click away from integration into core Microsoft ecosystems:
- Microsoft 365 Copilot: Once matured and robust, Labs experiments are primed for integration with Word, Excel, Outlook, Teams, and PowerPoint—improving productivity tools with new AI-powered capabilities.
- Copilot Studio: For enterprise and developer users, Copilot Studio offers a low-code/no-code environment to build, customize, and extend AI agents using “ingredients” refined in Labs (such as Copilot Actions and 3D model integration). Studio acts as a more enterprise-focused, customizable follow-on to concepts proven out in Labs.
- Unified feedback: Data, issues, and success stories from Labs feed directly into Microsoft’s design documentation and product roadmap, creating a visible pipeline from experiment to mainstream adoption.
Security and privacy considerations
With AI comes deep responsibility, and Copilot Labs is structured from the ground up to prioritize user privacy and regulatory compliance:
- Opt-in participation: All Labs features require explicit user engagement; nothing is “always-on” or monitoring users by default.
- Ephemeral data handling: Uploaded images or context (Vision sessions) are never stored or used for training; once processed, they are discarded.
- Transparency: Users are informed about what data is used, for what purpose, and for how long—especially for Vision, Gaming, and Actions features.
- Enterprise isolation: Features that touch business processes (via Copilot Studio or Copilot 365) operate within enterprise compliance boundaries, maintaining organizational data controls and respecting sensitivity labels.
- Vigilance on AI risks: Copilot Labs incorporates rigorous content filters (screening for hate, sexual, and violent content), bias detection, and user controls for data retention—even influencing how agents interact with third-party APIs and services.
For anyone worried about AI privacy, Labs is ahead of many peers in making boundaries visible and giving users control—though scrutiny, as always, remains warranted as new experiments roll out.
Availability and regional limitations
As with most cutting-edge tech, where and how you access Copilot Labs varies by geography, language, and regulatory environment:
- Core availability: Copilot Labs is accessible globally for most experiments, provided you have a Microsoft account; usage quotas may apply for certain features.
- Region-locked experiments: Copilot Appearance, Copilot Vision, and Copilot Actions are—at time of writing—limited to select regions (notably the US, UK, Canada, Australia, and New Zealand), with gradual expansion as compliance and language support matures.
- Language support: English is currently the default for Audio Expressions, 3D model labeling, and Vision Q&A, though broader input and output language support is on the roadmap, starting with the major European and Asian languages.
- Enterprise and education: Office/enterprise tenants may have separate access patterns through Copilot Studio and Microsoft 365 policies; admins have robust controls for feature rollout.
In short, most Labs experiments are easy to access if you’re in the right region, but the newest and boldest features may require patience and a paid Copilot Pro subscription.
Conclusion: why Copilot Labs matters
In a world where AI is everywhere (and sometimes nowhere fast), Copilot Labs is Microsoft’s open bet on inviting users into the process of building the AI-powered future. For anyone curious about how AI is tested, deployed, and iterated—and for those eager to get their hands on tomorrow’s creative tools today—Labs is a rare and refreshing blend of transparency, playfulness, and responsibility.
By bridging the gap between research, experimentation, and production, and by directly involving its community in the evolution of Copilot, Microsoft is aiming to build not just smarter AI—but AI that works with, and for, real people. Whether you want to make a voiceover, a 3D model, an AI-driven game, or just see what’s coming around the corner in everyday digital life, Copilot Labs is, for now, a front-row seat on the next era of computing.
Want to participate?
- Visit Copilot Labs
- Join the Copilot Community Discord
- Watch for the Copilot icon in your Microsoft 365 apps and keep an eye on announcements as new Labs features graduate into mainstream Copilot
Stay curious, keep experimenting, and help shape what AI becomes for everyone. The future’s not set—but you can bet there will be a Lab for it.
