i-Ready: 13 Million Students, Zero Meaningful Evidence
The Data Gap Behind One of America’s Largest EdTech Tools
If you work in a public school, you’ve likely heard of i-Ready. If not, it’s an ‘adaptive tutoring’ tool that has become one of the most widely used EdTech platforms in U.S. classrooms.
i-Ready began in 2011 as an adaptive diagnostic test: a tool designed to identify gaps in student reading and mathematics skills. Today, however, it doesn’t merely diagnose problems — it attempts to fix them, assigning students to ‘personalized’ lessons based on their diagnostic results.
What began as a measurement tool has become a full-service teaching system.
According to company reports, i-Ready is used by over 13 million K-8 students in the United States — representing roughly one in three elementary and middle schoolers.
With such widespread adoption, you’d expect a deep body of research demonstrating its effectiveness.
But you’d be wrong.
Where’s The Data?
Over the past two weeks, I’ve searched extensively for credible evidence of i-Ready’s impact on student learning. Here’s what I found:
ZERO randomized controlled trials
ZERO top-tier (Q1 or Q2) peer-reviewed journal articles
TWO lower-tier (Q3) journal articles
This article suggests i-Ready diagnostics are less predictive than traditional end-of-year state assessments
This article measures improvements only within i-Ready itself — not beyond the tool
TWO articles in unlisted journals
This article finds no difference between users and non-users
This article does not include any form of a control group
A LARGE VOLUME of gray literature
Dissertations, vendor reports, conference posters, and white papers.
That’s it.
Fifteen years. Thirteen million students. Not a single high-quality, independent study showing i-Ready improves learning.
What About the Johns Hopkins Studies?
i-Ready advocates often point to two studies from the Johns Hopkins School of Education: one examining reading, the other mathematics.
A few important caveats
Both studies were conducted in partnership with Curriculum Associates, the developer of i-Ready.
Both were released as white papers — meaning they were not peer-reviewed.
The name ‘Johns Hopkins’ carries weight, however, so it’s worth examining what they actually found.
On reading, the researchers concluded: “Results of this study did not show any statistically significant associations between i-Ready Personalized Instruction usage and SBA ELA achievement.”
In plain terms, they found no measurable impact.
On math, the researchers reported a positive impact of i-Ready. However, a closer look at the data tells a different story:
Across grades, students using i-Ready improved by 7.8%, while students not using it improved by 7.0% — a difference of less than one percentage point.
In a sample of 7,646 students, at roughly $70 per student, that amounts to over $535,000 spent on i-Ready to achieve a 0.8-percentage-point difference in mathematics performance, with no measurable change in reading.
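If you want to check that arithmetic yourself, here is a minimal sketch using the figures above (the 7,646-student sample and the roughly $70-per-student price come from the article; the cost-per-point framing is just my own illustration):

```python
# Back-of-the-envelope check using the figures cited above.
# Assumptions: ~$70 per student, 7,646 students in the study sample.
students = 7_646
cost_per_student = 70    # USD, approximate per-student price cited above
gain_iready = 7.8        # percentage-point improvement, i-Ready users
gain_control = 7.0       # percentage-point improvement, non-users

total_cost = students * cost_per_student
difference = gain_iready - gain_control

print(f"Total spend: ${total_cost:,}")                      # ≈ $535,220
print(f"Extra improvement: {difference:.1f} percentage points")
print(f"Cost per extra point, per student: ${cost_per_student / difference:.2f}")
```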
In medicine, we distinguish between statistical significance and clinical significance. The former tells us that a result is unlikely to be due to chance; the latter asks whether that result is actually meaningful for patients. In other words, a finding may be detectable in the data, but still be irrelevant in practice.
By that standard, even if i-Ready math produces statistically significant differences, the effect is clinically meaningless.
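To make that distinction concrete, here is an illustrative calculation (the 0.8-point gap and the sample size come from the study as summarized above; the 15-point standard deviation of individual gains and the even group split are hypothetical assumptions of mine). With thousands of students, even a trivially small effect clears the conventional p < .05 bar:

```python
import math

# Illustration only: how a tiny difference becomes "statistically significant"
# once the sample is large. The 0.8-point gap reflects the study summary above;
# the standard deviation (15 points) and equal group sizes are hypothetical.
gap = 0.8            # difference in improvement, percentage points
sd = 15.0            # hypothetical spread of individual student gains
n_per_group = 3_823  # roughly half of the 7,646-student sample per group

cohens_d = gap / sd                             # standardized effect size
t_stat = cohens_d * math.sqrt(n_per_group / 2)  # two-sample t, equal n, pooled SD

print(f"Cohen's d: {cohens_d:.3f}  (0.2 is conventionally 'small')")
print(f"t ≈ {t_stat:.2f}  -> 'significant' if |t| > ~1.96")
```

Under these assumptions the effect size is about d = 0.05, a fraction of what is usually called "small," yet the t-statistic still crosses the significance threshold. Detectable, but not meaningful.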
The Bigger Problem
Not only is there zero high-quality evidence supporting the widespread adoption of i-Ready, but the platform itself appears to misunderstand why certain teaching practices improve learning in the first place.
To see this, we need to distinguish between two concepts:
Summative Assessment: End-of-unit or end-of-term evaluations that result in a final mark or grade (e.g., an AP exam).
Formative Assessment: Ongoing, low-stakes checks for understanding designed to inform the learning process (e.g., a weekly quiz).
For decades, research has shown that formative assessment is one of the most powerful drivers of student learning. The entire field of adaptive tutoring — including i-Ready — is built on this idea: regularly assess students, provide feedback, and tailor instruction accordingly.
But there’s a critical question that is rarely considered: why does formative assessment work?
The common assumption is that its power lies in the immediate feedback given to students — that by seeing their mistakes, students can adjust and improve.
However, decades of research from Welsh educationalist Dylan Wiliam suggest a different mechanism. Formative assessment works primarily because of the information it provides to teachers — information that allows instruction to be adjusted in response to student understanding.
In fact, formative assessment is only ‘formative’ when the information it generates changes what the teacher does in response.
When teachers design assessments, review responses, and provide feedback, they gain insight into what students understand, where they struggle, and what they need next. This allows them to adapt instruction in real time — adjusting explanations, pacing, and examples to guide students through new concepts. Importantly, human teachers can adapt across emotional, cognitive, and behavioral dimensions in ways no digital program can.
And this is where the problem emerges.
When students use i-Ready, the system determines which questions are asked, what feedback is given, and how instruction is sequenced. Teachers see only summary performance metrics: they don't see the questions, the responses, or the misconceptions as they emerge. And critically, summary performance metrics give teachers nothing they can use to refine their instruction.
Put simply, the key mechanism that makes formative assessment effective — its ability to inform and adapt teaching — is largely removed. What remains is abundant activity, but little learning.
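To see what that information loss looks like, here is a purely hypothetical sketch (neither structure reflects i-Ready's actual data model or reporting; both are invented for the sake of argument):

```python
# Hypothetical illustration of the information gap described above.
# Invented structures, not i-Ready's real data; they show why an aggregate
# score cannot drive formative teaching decisions.

item_level = [  # what an adaptive system records internally
    {"skill": "fraction equivalence",
     "question": "Is 2/4 equal to 1/2?",
     "student_answer": "no",
     "correct": False,
     "error_pattern": "compares numerators and denominators separately"},
    {"skill": "fraction equivalence",
     "question": "Shade 1/2 of a 4-part bar",
     "student_answer": "shaded 1 of 4 parts",
     "correct": False,
     "error_pattern": "ignores the denominator"},
]

dashboard_summary = {  # what the teacher typically sees
    "domain": "Number and Operations",
    "lessons_completed": 12,
    "percent_correct": 58,
}

# The summary says "58% correct"; the item-level data says *why* the student
# is wrong (a specific fraction misconception), which is what a teacher would
# need in order to adjust tomorrow's lesson.
```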
So Now Then…
If one of the most widely used tools in American education operates without an evidential base, this signals a broader problem with how we evaluate EdTech.
Somewhere along the way, we stopped asking, “Does this actually help students learn?” — or perhaps we never seriously considered this question in the first place.
We’re not asking for miracles. We’re merely asking for evidence.
NOTE: If you know of any peer-reviewed i-Ready research from academic outlets I have missed, please send it along and I will update the article accordingly.

