OpenAI has launched a new EMEA Youth & Wellbeing Grant, putting real money—€500,000—behind a simple idea: if AI is going to shape young people’s lives, it needs to do it safely, fairly, and in ways that actually help them grow. Instead of building yet another shiny product, this program is aimed at the people already on the frontlines with kids and teenagers: NGOs, youth workers, researchers, and educators in Europe, the Middle East and Africa.
At its core, the grant is quite straightforward: OpenAI will fund organizations that either work directly with young people and families or study how AI is affecting their safety, wellbeing and development. The money is not a massive moonshot fund, but it is meaningful—individual grants are expected to range from about €25,000 to €100,000, with the possibility of multi-year awards for bigger or networked projects. That size bracket targets established organizations that already know their communities but need resources to run pilots, scale a program across more schools, or turn a one-off research project into something that policy makers or product teams can actually use.
The focus is deliberately narrow: youth, AI, and wellbeing. On the NGO side, OpenAI lists things like youth protection and harm-prevention programs, AI literacy initiatives for kids, parents and teachers, and practical tools that help organizations respond safely when AI shows up in their work—think guidance for school counselors dealing with AI-generated bullying, or helpdesks that can recognize AI-enabled scams targeting teens. Research teams, meanwhile, are encouraged to look at both sides of the equation: how AI might enrich youth education and development, and how it might undermine child safety or mental health if safeguards fall short.
That dual framing reflects where the broader conversation around youth and technology has landed in the last few years. The World Health Organization’s European office, for instance, has described the impact of digital technologies on young people’s mental health as “mixed,” highlighting both benefits and harms and calling for smarter policy responses rather than panic. The European Parliament’s own brief on youth and social media similarly warns against simple “screen time” narratives and pushes for more nuanced work on how specific digital behaviours relate to wellbeing. OpenAI’s grant slots neatly into that landscape: it is not trying to settle the debate, but to generate better evidence and more practical tools.
Geographically, the program is tightly scoped: applicants have to be legally registered in an EMEA country, so the money stays within Europe, the Middle East and Africa. That EMEA presence is not a nice-to-have; it’s a mandatory criterion, sitting alongside alignment with the program’s objectives, impact potential, methodological rigor, feasibility, and some sense of sustainability beyond the grant period. The emphasis on ethics and data protection is strong: organizations need to show how they will protect minors, handle consent and manage data safely—exactly the issues UNICEF and other child-rights bodies keep flagging as AI systems ingest more and more children’s data.
If you look at the timelines, this is not some far-off promise. Applications opened on 28 January 2026 and close on 27 February 2026, with funded projects expected to start in the second or third quarter of the year. There’s a structured review process—initial eligibility screening, review by a council that looks at technical, ethical and legal fit, and then final approvals and contracting. Once projects are up and running, OpenAI says their outputs—reports, toolkits, tested approaches—will be fed into product development, policy work and regulatory discussions, particularly in Europe, where the company is simultaneously rolling out training for 20,000 small and medium businesses on AI skills.
To actually apply, organizations have to do more than fill in a checkbox form. They need to provide a project title, a proposal of up to 500 words outlining objectives, methods, timelines and key deliverables, a detailed budget, CVs for the team, an ethics and data handling plan, and any relevant letters of support or partnership confirmation. In other words, this is pitched at serious actors—NGOs that already run youth programs, university labs working on adolescent mental health and AI, or coalitions trying to harmonise child safety practice across multiple countries.
The interesting question is why OpenAI is doing this now. AI is no longer something kids will “meet” in the future; it’s already in classrooms, on their phones, and inside the recommendation engines that shape their feeds. The United Nations has estimated that nearly eight in ten people aged 15–24 were online in 2023, and a child goes online for the first time every half second, which gives a sense of how central digital systems have become to the youth experience. As AI gets woven into that fabric, the stakes go up: there are clear upsides—from personalised learning and accessible tools for young people with disabilities to better digital mental health support—but also new risks, like AI-generated sexual abuse material, extortion, deepfake bullying, or manipulative chatbots targeting teens.
A lot of global guidance in the last couple of years has converged on the same theme: children need to be explicitly recognised in AI policy and design. UNICEF's updated guidance on AI and children, for example, sets out requirements for child-centred AI, from privacy-by-design and non-discrimination to strong safety, inclusion and skills-building for the AI era. Professional bodies such as the American Psychological Association have also stressed the need for long-term research into how adolescents interact with AI and what the psychological effects are over time, especially for vulnerable groups. Against that backdrop, OpenAI's grant can be read less as "nice CSR" and more as a response to mounting pressure on AI developers to back up their safety talk with funding and partnerships that give youth experts a seat at the table.
If you’re an organization thinking about applying, the sweet spot looks like projects that produce something usable beyond academic journals. OpenAI explicitly encourages outputs such as ready-to-use toolkits for schools, policy briefs for regulators, or tested intervention models that others can copy or adapt. That could be, for instance, a cross-country study on how AI homework tools change learning habits in low-income communities, paired with teacher training materials; or a youth-led project that develops guidelines for AI companions that respect boundaries and mental health, then stress-tests those guidelines with real products.
There’s also a subtle but important power shift embedded in the design: OpenAI is not dictating a single model of “good” AI for young people. By funding NGOs, university labs and coalitions, it is effectively outsourcing some of the agenda-setting to people who work daily with children and teens, or who track the unintended consequences of AI in the wild. If those groups take full advantage of the program, they can use the company’s money—and access—to push for stricter safeguards, better transparency, and designs that actually reflect youth perspectives rather than adult assumptions about what kids “should” do online.
The grant is not going to fix every problem with AI and young people, and it doesn’t pretend to. But it does mark a shift in how one of the most influential AI companies is engaging with youth wellbeing: moving from abstract safety principles to funding people who can test, criticise and improve AI in real classrooms, homes and youth centres. For young people themselves, the impact won’t come from the announcement—it will come from whether local NGOs, schools, researchers and youth networks grab this opportunity to build tools and evidence that make AI feel less like something happening to them, and more like something that can genuinely work for them.