Grammarly — the company that taught millions to stop starting sentences with “So…” — just moved deeper into the classroom. On August 18, 2025, the company rolled out an AI-native writing surface called Docs and a suite of specialised AI agents designed to help writers at every step: proofread, paraphrase, find citations, predict reader reactions — and yes, estimate what grade a paper might receive.
Think of Docs as a modern writing canvas and the agents as task-specific copilots that sit alongside your text. Grammarly’s announcement lists a set of agents aimed at student workflows:
- AI Grader — ingests your assignment, rubric, and even public information about an instructor’s expectations, then returns tailored feedback and a predicted grade.
- Proofreader — inline corrections and clarity edits (the classic Grammarly work, but more agentic).
- Paraphrase agent — rewrites to match tone, audience and style.
- Reader Reactions — predicts likely questions or confusions a reader might have after finishing the piece.
- Citation Finder — finds and formats citations to back claims.
- Expert Review — topic-specific feedback from an agent trained to act like a domain expert.
Grammarly says these agents are available within Docs “at no extra cost” to Free and Pro users, a notable move given that many advanced AI features are often pushed behind pricier tiers.
The AI Grader is what has grabbed headlines. Grammarly describes it as more than a surface-level score: it compares writing against an uploaded rubric, assignment details and—even more controversially—“publicly available” information about an instructor’s grading style to give a prediction and recommendations for improvement. That combination of rubric-matching and instructor-aware feedback is what lets the firm claim it can tell you whether a draft looks like an A.
That sounds useful: students could iterate on drafts with a better sense of alignment to assignment goals. It also raises obvious questions about accuracy and gaming the system (more on that below).
Grammarly isn’t just courting students. It also launched educator-facing features: a plagiarism checker that scans academic and web databases and an AI detector that returns a likelihood that a text was AI-generated. But those monitoring tools are initially gated: Grammarly says the plagiarism and AI-detection agents are available to Pro users at launch, and it plans to bring the full slate to Enterprise and Education customers later in the year.
It’s worth flagging the obvious caveat: AI-detection tech is imperfect. Educators and students are already in a tense dance — some students run their own work through detectors to avoid false flags, while teachers worry about both dishonesty and unfair false positives. The Wall Street Journal and other outlets have reported that detection tools can be inconsistent and that human judgment remains essential.
Jenny Maxwell, head of Grammarly for Education, framed the release as filling a gap between helpful AI and academic integrity: students should get tools that enhance learning without undermining it. In the company’s messaging, agents are “partners” that teach students how to work with AI — a pitch to faculty and institutions worried about students outsourcing their thinking to models.
In practice, Grammarly is trying to thread the needle: offer assistance that boosts quality and AI literacy, while also providing educators with detection and originality tools that clamp down on misuse.
There are real upsides here. A grader agent that aligns writing to a rubric could help students learn to answer assignment prompts more precisely; a citation finder can save hours of source chasing; reader-prediction tools can sharpen clarity before submission. And making most agents available to Free and Pro users lowers the barrier for students who can’t afford institutional packages.
But the tech’s weaknesses matter. Detection tools aren’t infallible, rubrics vary wildly between instructors, and “publicly available” info about an instructor is a fuzzy input that could bias feedback toward certain expectations. There’s also a cultural risk: if students start optimizing only for the grader agent’s idea of an A, the nuance of original argumentation or the messy learning process could be sidelined. Reporting on the wider ecosystem shows instructors are already wary: many students try to “humanize” AI-assisted prose to avoid detection, and teachers report mixed success with automated flags.
Grammarly’s new Docs and agent suite are a logical next step for a product that has always tried to be the writer’s helper. Giving students a preview of how their paper might score is a neat tool for iteration — and for anxious procrastinators, it could be a godsend. But the grade prediction is an estimate, not a verdict. Teachers, students and institutions will need to treat it that way: as guidance, not as the last word.
If nothing else, the rollout highlights the larger reality: writing software is no longer just about spelling and style. It’s about aligning signals, incentives and pedagogy in a world where AI participates in the draft. Whether that turns out to be a net gain for learning depends on how transparently those systems operate and how wisely classrooms adapt.