When a teacher in Liberia logs into an AI workshop and walks out a few weeks later having built an interactive climate curriculum for local schools, you get a glimpse of what Anthropic and Teach For All are trying to do with their new global AI training initiative: put educators, not tech companies, in the driver’s seat of classroom AI.
Anthropic, the company behind the Claude AI models, has teamed up with Teach For All, a global network of independent teacher-leadership organizations, to roll out a program that gives more than 100,000 teachers and alumni in 63 countries access to tools, training, and a community centered on AI. Collectively, those educators serve over 1.5 million students, most of them in under-resourced schools where small gains in access or productivity can translate into huge changes in outcomes. Instead of shipping a pre-baked “AI for schools” product, the partnership is trying something more interesting: turning teachers into co-designers who can shape what AI looks like in real classrooms, from lesson planning workflows to fully fledged learning apps.
Teach For All is a useful partner for this kind of experiment because it has spent the last decade and a half building a network that looks similar on paper but very different on the ground. Think Teach For India, Enseña Chile, Teach For Nigeria, Teach For Australia, Teach For America and dozens of others—locally run, threaded together by a shared focus on expanding educational opportunity in communities that have historically been left behind. As of 2025, the network spans 63 partner organizations across six continents, with roughly 14,800 teachers in active two‑year commitments and over 100,000 alumni, more than three‑quarters of whom keep working on challenges facing marginalized children. That footprint gives Anthropic an unusually broad, real‑world test bed for AI in education: rural classrooms, urban public schools, refugee contexts, you name it.
The heart of the new effort is something called the AI Literacy & Creator Collective, or LCC. It’s less a single course and more an ecosystem made up of three parts. First is the AI Fluency Learning Series, a six‑episode live training track designed with Anthropic’s education team. It walks educators through AI basics, what Claude can actually do, and practical classroom scenarios, from drafting lesson plans to differentiating reading materials for mixed‑ability groups. In November 2025 alone, over 530 educators showed up for the first run of these sessions, which is a good signal that this is meeting a real demand rather than just adding another webinar to teachers’ already overloaded calendars.
Once teachers get past the initial “what is this thing?” stage, they move into Claude Connect, the community layer that keeps the whole experiment alive between live events. This is where more than 1,000 educators from 60‑plus countries swap prompts, compare use cases, and share small discoveries that rarely make it into official case studies—things like “this prompt structure helps my Grade 9 students actually revise their essays” or “here’s how I explain hallucinations to 12‑year‑olds.” For teachers who are often the only tech‑curious person in their staff room, having a global backchannel like this can matter as much as the formal training.
The third piece, Claude Lab, is where the program gets more experimental. It gives a subset of educators access to Claude Pro features plus regular office hours with Anthropic staff, so they can push on edge cases, try more ambitious projects, and directly influence the model’s product roadmap. Within four days of announcing Claude Lab, the team says they received over 200 applications, which suggests there is no shortage of teachers who want to be more than just “end users” of AI tools. For Anthropic, that’s a pretty clear signal that it can treat classrooms as living labs for responsible AI design, not just a target market for enterprise licenses.
The projects emerging from this ecosystem are already more diverse than a typical edtech demo deck. In Liberia, a teacher who was new to AI attended LCC sessions on AI fluency and then used Claude’s Artifacts feature—essentially a way to spin up interactive apps, tools, or visualizations on the fly—to build a climate education curriculum tailored for Liberian schools. In Bangladesh, another educator working with Grade 6 and 7 students, more than half of whom struggled with basic numeracy, created a gamified math app complete with boss battles, leaderboards, and experience‑point rewards to keep students engaged. In Argentina, a tech educator at Enseña por Argentina has been using Claude to develop digital, interactive workspaces aligned to secondary curricula, describing how discovering Claude “significantly expanded” her practice after trying several AI tools.
If you zoom out from the individual stories, you start to see the pattern Anthropic keeps emphasizing: teachers as co‑architects. Wendy Kopp, CEO of Teach For All, has been explicit that if AI is going to make education more equitable, the people who understand students’ lives and local systems best need a say in how it’s designed and deployed. That means teachers providing ongoing feedback on what’s confusing, what saves time, where the model fails in the local context, and which features actually help with learning rather than just adding novelty. For Anthropic, that feedback loop isn’t just nice branding—it feeds into how Claude handles classroom‑specific tasks like generating age‑appropriate examples, respecting school data policies, and being transparent about uncertainty.
The partnership also plugs into a broader education push from Anthropic that’s been building quietly over the past couple of years. In Iceland, the company worked with the Ministry of Education and Children on what it describes as one of the first national‑scale AI education pilots, giving teachers across the country structured access to Claude for lesson prep and student support. In Rwanda, it’s working with the government and the training provider ALX to introduce AI tools and training into the national system, including upskilling thousands of teachers and a cohort of civil servants so they can think about AI not just as a classroom tool but as an infrastructure question. Anthropic staff have also been involved in the White House Taskforce on AI Education in the United States, framing this as part of a push to make practical AI literacy a baseline skill rather than an optional extra.
There’s also a governance angle baked into this that goes beyond “cool new tools.” When teachers in Nigeria talk about “significant learning around responsible AI implementation,” they’re not just referring to model accuracy—they’re navigating questions about bias, local languages, exam integrity, and data protection, often in systems where basic infrastructure is still catching up. By surfacing those issues early, programs like the LCC can stress‑test the industry’s favorite talking point, that AI will “close gaps,” against the messy realities of under‑resourced schools. Training 100,000‑plus educators to critique, not just consume, AI outputs is one way to build local capacity so schools don’t have to depend entirely on outside consultants to tell them what’s safe or effective.
For classroom teachers, the promise is more practical than all of that. If you’re juggling 40 students, limited materials, and a never‑ending pile of administrative work, something that drafts differentiated worksheets, generates examples tuned to your syllabus, or helps design a simple practice app can free up real time for human interaction. The early climate curriculum, math game, and digital workspaces show what happens when those capabilities are pushed into the hands of people who know exactly where the friction points are. It’s the difference between “AI in education” as a buzzword and AI as a set of very specific, teacher‑defined workflows.
Of course, big questions remain. How do you keep access equitable when advanced AI features can still be expensive at scale? How do ministries and school systems integrate teacher‑built tools into official curricula and assessments without burning out the same teachers you’re trying to support? And how do companies like Anthropic avoid treating these partnerships as pure product‑testing pipelines, rather than long‑term commitments to local capacity and public‑sector infrastructure?
Still, there’s something genuinely different about watching a global AI company center its education strategy on teacher leadership rather than glossy demos. In this model, a math teacher in Dhaka, a science teacher in Monrovia, and a tech educator in Buenos Aires are not just “users” of Claude; they’re part of the system that decides what AI in education should look like. If Anthropic and Teach For All can sustain that posture—and if school systems and governments are willing to listen to what these teachers learn in the process—this initiative could be less about introducing one more tool and more about reshaping who gets to write the rules for AI in classrooms worldwide.