The Pattern: Society changes → New theorists emerge → Assessment follows → Curriculum narrows to match assessment
Change comes from society, not individuals. Theorists are products of their time.
Assessments are supposed to measure learning.
Instead, they shape learning.
We've built a system in which the tool meant to observe learning has become the thing that controls it.
🌡️ → 🌦️
When the measure becomes the target, it ceases to be a good measure (Goodhart's law).
The Result: Teachers teach to the test. Curriculum narrows to what's assessed. Students learn that success = performing on evaluations, not understanding.
Why analyze an evaluation tool instead of a traditional curriculum unit?
This analysis evaluates the evaluator: What is EdReports actually measuring? Does it assess curriculum quality or enforce compliance?
— EdReports K–2 ELA Review Criteria v2.1, pp. 2 and 10
EdReports doesn't measure curriculum quality—it enforces content-first compliance and calls it "science."
The Core Issue: Traditional assessment assumes cognition is static—"the same results each time in the same setting" (Sullivan, 2011). But learning is context-sensitive, identity-shaped, and constantly evolving.
EdReports doesn't just review curriculum—it acts as a gatekeeper that enforces a specific philosophy of literacy and learning.
If EdReports measures content compliance instead of curriculum quality,
how do we prove it?
Part 1: Test EdReports as an evaluation tool (Wiggins' validity framework)
Part 2: Treat EdReports AS curriculum and evaluate what opportunities it creates
Let's examine the methodology for both parts...
I hypothesize that EdReports will fail both as a curriculum evaluation tool and as a curriculum itself.
Hypothesis 1: EdReports will fail as an evaluation tool
Testing validity, reliability, and actual usage patterns
Hypothesis 2: EdReports will narrow curriculum opportunities
Analyzing what learning opportunities it creates or restricts
If correct, both tests should reveal the same pattern: EdReports enforces content-first compliance and calls it "quality."
Drawing on Wiggins (1998), we examine three dimensions to test EdReports as an evaluation tool:
Validity: Does EdReports measure what it claims to measure?
Can curricula succeed/fail on EdReports for reasons unrelated to actual curriculum quality?
Reliability: Does EdReports assume cognition is static?
Does it account for context-sensitive, identity-shaped learning (Sullivan, 2011)?
Usage: How is EdReports actually used?
Does it function as a gatekeeper that narrows curriculum options and enforces compliance?
Wiggins' Validity Test: Can curricula succeed/fail for reasons unrelated to actual curriculum quality?
Yes, curricula can fail for the wrong reasons: A high-quality curriculum designed for Deaf readers would score zero on EdReports—not because it's ineffective, but because it doesn't use phonics-based decoding. EdReports conflates one specific approach with curriculum quality itself.
Wiggins' Reliability Test: Does EdReports account for context-dependent learning, or does it assume cognition is static?
Sullivan (2011): Traditional assessment assumes cognition is static—"the same results each time in the same setting." But learning is context-sensitive, identity-shaped, and constantly evolving.
"Materials... provide reasonable pacing where phonics skills are taught one at a time... [with a] clear evidence-based explanation for the expected hierarchy of phonemic awareness competence."
— EdReports K–2 ELA Review Criteria v2.1, p. 7
Result: EdReports is unreliable because it treats learning as a fixed, universal process rather than a context-dependent, identity-shaped experience.
Wiggins' Usage Test: How do schools and districts actually use this evaluation tool in practice?
"Materials must 'Meet Expectations' in BOTH Gateway 1 and Gateway 2 to be reviewed in Gateway 3" — Any curriculum that doesn't pass both phonics gates never gets evaluated for usability or quality
Districts use EdReports ratings as adoption criteria, narrowing curriculum options to only those that align with one specific approach
The same evaluation criteria apply regardless of student population, community values, or local literacy needs
EdReports is used as a compliance enforcement mechanism, not a quality measurement tool. It doesn't help districts choose the best curriculum for their students—it narrows options to those that comply with one ideological position on literacy instruction.
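To see the gatekeeping mechanics plainly, here is a minimal sketch of the two-gate rule quoted above. Only the gate logic itself comes from the EdReports criteria; the function name and the pass/fail inputs are hypothetical illustrations.

```python
# Sketch of the gateway logic described above: a curriculum that fails
# either phonics gate is never evaluated for usability or quality.
# The pass/fail inputs below are hypothetical, not real EdReports data.
def edreports_review(gateway1_meets: bool, gateway2_meets: bool) -> str:
    if not (gateway1_meets and gateway2_meets):
        return "Not reviewed for Gateway 3: usability never examined"
    return "Proceeds to Gateway 3 (usability) review"

# e.g. a high-quality curriculum for Deaf readers that doesn't use
# phonics-based decoding fails Gateway 1 regardless of effectiveness:
print(edreports_review(gateway1_meets=False, gateway2_meets=True))
```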
When EdReports fails as an evaluation tool, what happens to literacy education?
The tool meant to improve curriculum quality instead restricts what counts as "quality."
EdReports doesn't just evaluate curriculum—it functions as curriculum by determining what teachers teach and what students learn.
If EdReports is curriculum, we need to evaluate it as curriculum—not just as a tool.
This requires a framework that measures learning opportunities, not content compliance.
Enter the 3D Compass: A framework for measuring curriculum across six dimensions of learning opportunities.
The 3D Compass has 3 axes and 8 octants. Both poles of every axis are equally important. A quality curriculum provides opportunities in all 8 octants, not just one narrow corner.
Independence ↔ Collaboration: Does it reward student agency or require teacher scripts? Opportunities for self-driven work vs. co-construction
Practical ↔ Theoretical: Does it provide both established foundations (what society agrees is "true") AND opportunities to explore alternatives? Balance between practical learning and theoretical exploration (fringe theories, lost voices, niche interests)
Structured ↔ Flexible: Does it provide both clear structure (rubrics, assigned goals) AND opportunities for self-direction (creating own goals, exploring without rubrics)? Balance between prescribed pathways and flexible exploration
I scored all 54 EdReports indicators across 6 dimensions to measure what opportunities EdReports creates or restricts
Example: "Materials provide systematic and explicit instruction..." scores high on Structured (prescribed pathway), zero on Independence (no student agency), and zero on Flexible (one-size-fits-all)
This reveals whether EdReports creates balanced opportunities—or enforces a single, narrow vision of learning
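As a concrete sketch of this scoring procedure: the six dimension names come from the framework above, the 0-3 scale is inferred from the "moderate scores (2-3)" benchmark below, and the example scores echo the indicator just cited. Everything else is a hypothetical illustration, not the actual rubric.

```python
# Minimal sketch of the 3D Compass scoring procedure.
# Dimension names come from the framework; the 0-3 scale and all
# numeric values are hypothetical illustrations, not the real ratings.
from statistics import mean

DIMENSIONS = ["Independence", "Collaboration", "Practical",
              "Theoretical", "Structured", "Flexible"]

# Each of the 54 indicators gets a 0-3 score on every dimension.
# The example indicator from above: high on Structured, zero on
# Independence and Flexible.
indicators = {
    "systematic and explicit instruction": {
        "Independence": 0, "Collaboration": 0, "Practical": 2,
        "Theoretical": 0, "Structured": 3, "Flexible": 0,
    },
    # ...the remaining 53 indicators would be scored the same way.
}

def compass_profile(scored):
    """Average each dimension across all scored indicators."""
    return {d: mean(s[d] for s in scored.values()) for d in DIMENSIONS}

def is_balanced(profile, low=2, high=3):
    """A balanced curriculum scores moderately (2-3) on every
    dimension, tracing a roughly hexagonal radar shape."""
    return all(low <= profile[d] <= high for d in DIMENSIONS)

profile = compass_profile(indicators)
print(profile)               # maxed-out Structured, near-zero elsewhere
print(is_balanced(profile))  # False: the opportunity space has collapsed
```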
Mapping all 54 EdReports indicators across the 6 dimensions reveals extreme imbalance:
A quality curriculum would show moderate scores (2-3) across all six dimensions, creating a roughly hexagonal radar shape. Instead, EdReports maxes out the Structured dimension while scoring near zero on the other five.
Independence ↔ Collaboration
"Materials include systematic and explicit instruction... with repeated teacher modeling... Students practice phonics skills..."
— EdReports K–2 ELA Review Criteria v2.1, Gateway 1
"Materials provide clear protocols and teacher guidance that frequently allow students to engage in listening and speaking..."
— EdReports K–2 ELA Review Criteria v2.1, Gateway 2
Result: EdReports rewards curricula that minimize both student agency and collaborative learning; students follow scripts instead of creating knowledge
Practical ↔ Theoretical
"Scope and sequence clearly delineate... with a clear evidence-based explanation for the expected hierarchy of phonemic awareness..."
— EdReports K–2 ELA Review Criteria v2.1, p. 7
"Materials include a clear, research-based core instructional pathway..."
— EdReports K–2 ELA Review Criteria v2.1, Gateway 2
Result: EdReports treats one approach to literacy as settled truth—no exploration of alternatives, no questioning of assumptions
Structured ↔ Flexible
"Materials provide reasonable pacing where phonics skills are taught one at a time and allot time where phonics skills are practiced to automaticity..."
— EdReports K–2 ELA Review Criteria v2.1, p. 7
"Materials include decodable texts with phonics aligned to the program's scope and sequence..."
— EdReports K–2 ELA Review Criteria v2.1, p. 7
Result: EdReports enforces a single, rigid pathway—erasing diverse learners and alternative routes to literacy
EdReports doesn't create a balanced opportunity space—it collapses curriculum into a single corner
When districts use EdReports as a gatekeeper, they adopt curricula that maximize compliance and minimize opportunities for agency, collaboration, exploration, and diverse pathways. This isn't about quality—it's about control.
EdReports enforces the same content-first, compliance-driven logic that has persisted for 250 years.
It calls this "science"—but it's actually corporate-era gatekeeping dressed in modern language.
We've shown EdReports fails as an evaluation tool.
But what should we measure instead of content compliance?
Instead of asking "Does this curriculum cover the right content?"
We should ask "What opportunities does this create for metacognitive development?"
Here's the framework that makes this possible...
The framework consists of six interconnected nodes. Any node can trigger any other: no hierarchy, no sequence. Like a 6D radar graph with butterfly effects.
Endospection: Looking inward to map your own cognitive architecture. Not "Who am I?" but "Who do I think I am, and why?" Unlearning imposed narratives. Building internal stability.
Diffusion: Pure exploration without agenda. The "most freeing area of metacognition." Following tangents, embracing the butterfly effect. A small curiosity can blossom into massive, unexpected journeys.
Vectoring: Tailoring curiosity with magnitude and direction. Turning wandering wonder into targeted inquiry. Asking "Where do I find what I need?" Making deliberate choices about scope and sourcing.
Refraction: The reality check. How does your identity bend the information you receive? How does new information force "truth maintenance" updates to your internal reality? Critical awareness of bias.
Mapping external minds. Understanding stakeholder biases, contexts, realities. "What are they actually asking for?" Fitting others' realities into your own to ensure communication is received.
Creating entirely new ideas. Putting it all together across independence, collaboration, and application. Not summarizing—constructing something that didn't exist before.
Bloom's taxonomy treats "remembering" as the bottom and "creating" as the top. But here, Diffusion can spark Refraction, which sends you back to Vectoring, which reshapes Endospection. No hierarchy. No sequence.
Teachers design assignments that create opportunities for students to engage these 6 nodes. Not "master content," but "experience these metacognitive processes."
Each node influences every other—no beginning, no end. This is learning as a living ecosystem.
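To make the no-hierarchy claim concrete, here is a minimal sketch of the network as a data structure. The four named nodes come from the text above; "PerspectiveMapping" and "Synthesis" are placeholder names for the two nodes the text describes but does not name.

```python
# Sketch of the six-node metacognition network as a complete graph:
# every node can trigger every other, with no root and no fixed order.
# Four node names come from the text; "PerspectiveMapping" and
# "Synthesis" are placeholders for the two unnamed nodes.
import random

NODES = ["Endospection", "Diffusion", "Vectoring",
         "Refraction", "PerspectiveMapping", "Synthesis"]

# Complete graph: each node connects to all five others.
EDGES = {a: [b for b in NODES if b != a] for a in NODES}

def trace(start, steps):
    """Follow one possible activation path through the network.
    random.choice stands in for whatever actually moves the learner
    (curiosity, new information, a conversation)."""
    path = [start]
    for _ in range(steps):
        path.append(random.choice(EDGES[path[-1]]))
    return path

# Any node can be an entry point, and any path is legal, e.g. the
# text's example: Diffusion -> Refraction -> Vectoring -> Endospection.
print(" -> ".join(trace("Diffusion", 3)))
```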
What if curriculum design started with opportunities, not content?
The shift: Instead of asking "What content must students master?", we ask "What opportunities will students have access to?" Curriculum becomes a map of possible experiences, not a checklist of required content.
How do we evaluate such a curriculum? Not by measuring content coverage, but by analyzing the range of learning opportunities students can access.
This is what the 3D Compass measures: Not "Does this curriculum teach phonics correctly?" but "What learning opportunities does this curriculum create or restrict?"
EdReports gives us content-focused evaluation.
We need metacognitive opportunity mapping.
When we change how we evaluate curriculum,
we change what counts as learning—
and we change who gets to learn.
Metacognition is not the end of learning.
It is the only beginning we can trust.