GadgetBond

Kaggle launches Benchmarks Resource Grants for AI evaluation

Instead of just funding competitions, Kaggle is now directly backing the people designing the benchmarks that will define how LLMs are judged in the first place.

By Shubham Sawarkar, Editor-in-Chief
Apr 8, 2026, 1:27 AM EDT

Image: Kaggle

Kaggle is doubling down on AI evaluation with a new Benchmarks Resource Grant program, and it is very much a sign of where the LLM ecosystem is headed: less hype, more rigorous, repeatable testing of what models can actually do in the real world. Instead of just funding flashy competitions, Kaggle now wants to underwrite the invisible but crucial work of building high‑quality benchmarks that everyone else can rely on.

At the center of this move is Kaggle Benchmarks, the platform the company has been quietly turning into a sort of public “proving ground” for large language models and agents. Benchmarks on Kaggle are essentially standardized evaluations: a set of tasks, datasets, and scoring rules that models must pass, all wrapped in a way that is transparent, reproducible, and shareable via public leaderboards. Kaggle has already been working with heavyweight labs like Google DeepMind, OpenAI, IBM Research, and Meta to host their evaluations, including well‑known suites like Google’s FACTS and Meta’s multilingual benchmarks, but until now, this level of support has mostly been reserved for big research groups.

The new Benchmarks Resource Grant program widens that circle to the broader research community, specifically to people who are building new benchmarks but do not have the compute or infrastructure to run them at scale. If you are selected, you do not just get some credits and a pat on the back; Kaggle is offering a surprisingly full‑stack package: higher compute quotas, access to leading models from providers like OpenAI, Google, Anthropic, xAI (Grok), Qwen, and DeepSeek, and a managed backend that keeps your leaderboard fresh as new models ship. In other words, you focus on defining what to measure and how to score it, and Kaggle takes on the unglamorous work of spinning up evaluations, tracking runs, and updating rankings as the model landscape changes.

That managed piece is more important than it sounds. Benchmarks tend to decay quickly when no one updates them: new models appear, APIs change, and the original code rots. Kaggle is essentially promising to keep the lights on for selected benchmarks, continuously re‑running them on newly supported models and surfacing the results to its community of more than 30 million users, which instantly gives a successful benchmark real visibility and impact. On top of that, researchers can tap into a growing ecosystem of third‑party labs that can bring their own models to the table, expanding coverage beyond just the default providers.

Under the hood, the whole thing is powered by Kaggle’s open‑source kaggle-benchmarks SDK, which is basically the toolkit for turning research ideas into production‑grade evaluations. The SDK gives you a unified llm.prompt() interface for multiple providers, support for structured outputs (think Pydantic‑style schemas), multi‑agent workflows, and automatic handling of conversation history, so you don’t have to glue together half a dozen client libraries just to ask a model a question. Tasks are declared in plain Python with a @kbench.task decorator, and the return type directly drives how the leaderboard is scored, whether that is a simple pass/fail, a numeric score, or something more complex like tuple‑based counts.
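Based on that description, the task pattern looks roughly like the sketch below. Note that the decorator, registry, and `llm.prompt()` here are minimal local stand-ins written for illustration, not the real `kaggle-benchmarks` API, whose exact signatures may differ:

```python
# Self-contained sketch of the decorator-based task pattern described above.
# The real SDK provides @kbench.task and a unified llm.prompt(); both are
# mocked locally here so the example runs on its own.

TASKS = {}

def task(fn):
    """Register a benchmark task; its return type drives how it is scored."""
    TASKS[fn.__name__] = fn
    return fn

class FakeLLM:
    """Stand-in for the SDK's unified multi-provider model interface."""
    def prompt(self, text: str) -> str:
        # A real provider call would go here; we return a canned answer.
        return "Paris"

@task
def capital_of_france(llm) -> bool:
    # A bool return is scored as simple pass/fail on the leaderboard.
    answer = llm.prompt("What is the capital of France? One word.")
    return answer.strip().lower() == "paris"

llm = FakeLLM()
results = {name: fn(llm) for name, fn in TASKS.items()}
print(results)  # {'capital_of_france': True}
```

The same registry pattern extends naturally to numeric scores or richer return types, with the harness inspecting the return value to decide the scoring rule.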

Evaluation is where things get interesting. Rather than forcing every benchmark to boil down to exact‑match metrics, Kaggle’s framework leans heavily into both deterministic checks and “LLM‑as‑judge” style scoring. You can combine regex and equality checks with judge models that assess open‑ended responses, for example, asking an LLM judge whether another model’s answer satisfies a natural‑language rubric. For larger datasets, the SDK handles parallel, batched evaluations over DataFrames, running multiple models at once and aggregating metrics like accuracy and standard deviation so benchmark authors can spend their time on task design instead of pipeline plumbing.
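A mixed scoring pipeline of that shape can be sketched in a few lines; the judge below is a deliberate stub (in the real framework it would be a model call against a rubric), and the aggregation mirrors the accuracy and standard-deviation metrics the article mentions:

```python
# Sketch of mixed scoring: deterministic regex/equality checks combined with
# an "LLM-as-judge" pass, aggregated into accuracy and standard deviation.
import re
from statistics import mean, pstdev

def exact_check(answer: str, expected: str) -> bool:
    return answer.strip().lower() == expected.strip().lower()

def regex_check(answer: str, pattern: str) -> bool:
    return re.search(pattern, answer) is not None

def judge_check(answer: str, rubric: str) -> bool:
    # Stub: a real judge would prompt an LLM with the rubric and the answer.
    return len(answer.split()) >= 3  # e.g. rubric demands a full sentence

rows = [
    ("Paris", "Paris", r"Paris", "Name the city."),
    ("The capital is Paris.", "Paris", r"Paris", "Answer in a sentence."),
    ("Lyon", "Paris", r"Paris", "Name the city."),
]

scores = []
for answer, expected, pattern, rubric in rows:
    passed = exact_check(answer, expected) or (
        regex_check(answer, pattern) and judge_check(answer, rubric)
    )
    scores.append(1.0 if passed else 0.0)

print(f"accuracy={mean(scores):.2f} std={pstdev(scores):.2f}")
```

In the actual SDK these rows would be a DataFrame evaluated in parallel batches across several models at once; the per-row logic stays the same.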

Multimodality and tooling support are baked in as well, which is increasingly non‑negotiable for modern benchmarks. Researchers can evaluate models on text‑plus‑image scenarios, wire in tools via function calling, or even safely execute model‑generated code inside sandboxed Docker environments, which is especially relevant for agentic workflows where the model is allowed to run snippets of logic to solve tasks. All of this is designed to reflect how models are actually used in production rather than in isolated academic settings, aligning with Kaggle’s broader push toward “what actually works in AI” instead of synthetic leaderboard chasing.
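Function calling in this style typically reduces to a registry of allowed tools that a model's structured output is dispatched against. A minimal sketch of that dispatch loop follows; the tool names and JSON shape are ours, not Kaggle's:

```python
# Minimal sketch of tool use via function calling: the model emits a
# structured call (name + arguments) and the harness dispatches it against
# a registry of allowed tools. Names are illustrative, not Kaggle's API.
import json

TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def dispatch(tool_call_json: str):
    """Validate and execute a model-emitted tool call."""
    call = json.loads(tool_call_json)
    name, args = call["name"], call.get("arguments", {})
    if name not in TOOLS:
        raise ValueError(f"model requested unknown tool: {name}")
    return TOOLS[name](**args)

# A model constrained to structured output might emit:
model_output = '{"name": "add", "arguments": {"a": 2, "b": 3}}'
print(dispatch(model_output))  # 5
```

Sandboxed code execution replaces the lambda bodies with a Docker-isolated runner, but the validate-then-dispatch shape is the same.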

The grant program also comes bundled with a human layer: implementation help from Kaggle’s product and engineering teams. That means if your benchmark exposes some missing capability in the platform, Kaggle is explicitly open to evolving the SDK and platform features to support new evaluation mechanisms. For many academic or independent researchers who are strong on ideas but short on engineering time, that kind of collaboration can be the difference between a neat paper prototype and a widely adopted benchmark that others reuse.

Once a benchmark is live, Kaggle’s distribution machine kicks in. The company is promising marketing support to promote grantee benchmarks across its platform, blog, and community channels, giving niche or specialized evaluations a shot at mainstream adoption. Given Kaggle’s role as a hub for data scientists, ML engineers, and now a growing number of AI agents and model builders, a high‑quality benchmark that lands on the front page can quickly become a reference point in its domain, much like well‑known Kaggle competitions did for earlier machine learning tasks.

Interestingly, the Benchmarks Resource Grants sit alongside Kaggle’s existing Competitions Research Grants, which target a slightly different kind of project. If your goal is to mobilize thousands of practitioners around a focused problem—say, detecting forgery in scientific images or predicting 3D RNA folding—then a fully fledged Kaggle competition with a prize pool and rankings still makes more sense. In those cases, Kaggle helps eligible academic and nonprofit partners by reducing or waiving platform fees and sometimes funding the prize money, provided the dataset, licensing, and problem design meet its selection criteria.

That split is deliberate. Benchmarks are about ongoing measurement—keeping a pulse on how models behave across time—while competitions are about bursts of intense community problem‑solving. There is even a middle ground in the form of benchmark‑focused hackathons, where organizations can ask the community to help define new evaluation tasks around specific capabilities they care about, from AGI‑flavored cognitive skills to domain‑specific reasoning. For prediction problems with private labels and offline scoring, the more traditional competition format remains the go‑to.

Zooming out, this push comes on the heels of Kaggle’s broader Community Benchmarks rollout earlier in the year, which opened up benchmark creation to anyone on the platform. Community Benchmarks already allow developers to design tasks through a friendly UI or via notebooks, run models without managing their own API keys, and publish reusable benchmarks with shareable leaderboards. The new grant layer effectively adds a “turbo” mode on top of that system for projects that are particularly promising, resource‑intensive, or strategically important for the ecosystem.

For practitioners and labs, the message is pretty clear: if you are serious about measuring LLMs and agents, Kaggle wants to be your home base. For researchers, especially those outside the biggest industrial labs, the Benchmarks Resource Grant program offers a rare combination of free compute, top‑tier model access, engineering support, and built‑in distribution—all in exchange for building evaluations that make the AI landscape a little more honest and a lot more useful. And for everyone who has ever scrolled through a leaderboard and quietly wondered whether the numbers really meant anything, more transparent, well‑maintained benchmarks cannot come soon enough.

