Research

2021 Assessments: The Researcher’s Perspective

Statewide assessments are more than just a box-checking exercise—they’re critical tools for ensuring transparency and promoting equity. Yet the 2021 assessments have been the subject of fierce debate, leaving policymakers, educators, and parents divided on the value and costs of testing students after an unprecedented school year.

Recent federal guidance makes it clear that states should not expect a blanket assessment waiver, but questions about how to best test students remain. To learn more, DQC’s Allie Ball spoke to DQC Board Member Morgan Polikoff, associate professor of education at the University of Southern California. Polikoff underscores the importance of administering 2021 assessments and agrees that having statewide assessment data this year is crucial to supporting recovery efforts and ensuring that students’ needs are met. 

Question #1: What are some issues states still have to address when it comes to assessments?

Polikoff: The big one is safety. Many schools have been back in person for a while now, and we have a pretty good idea of the precautions they need to have in place. But it’s still an ongoing conversation. Any changes to safety guidelines will have practical implications for administering assessments.

Even with the recent federal guidance, states have a good amount of flexibility on how they administer assessments. They can push assessments until the fall—which I’m surprised more states are not considering, particularly those where students are not yet back in the classroom. They can also administer tests remotely, though that raises questions about validity.

Finally, there’s the issue of participation. How do you include enough students, when we already know that some parents will object to their students participating? And what do you do for those who don’t?

Question #2: How can states measure learning for students who do not participate in statewide assessments?

Polikoff: Professor Andrew Ho (Harvard Graduate School of Education) proposed an interesting strategy for estimating the performance of missing students using data from 2017 assessments. It’s a good idea, but it relies on a lot of assumptions. At a certain point, if you don’t have the data, you don’t have the data.

States and districts might also look to data from local or interim assessments. Some interim assessments, such as NWEA MAP, are nationally normed and widely used—meaning they would have enough data to support comparisons and matching. It’s not the same as statewide assessments, but it’s something. The other option would be waiting and administering assessments in the fall.

Question #3: What are the benefits and drawbacks of delaying this year’s assessments until fall 2021?

Polikoff: Participation will likely be higher. We also know that students learn at different rates during the summer, so testing in the fall will provide a better picture of where students stand when they return to school. However, the data may not be as useful for decisionmaking next year. And because statewide assessments are typically administered in the spring, the results may not be perfectly comparable to prior years.

It comes down to figuring out what data you need and the best way to get it. If you think you can get the highest participation and the best data by delaying assessments until the fall, then that might tip the scales in that direction.

Question #4: What can local and interim assessments tell us, and how is that different from summative assessments?

Polikoff: Local assessments and interim assessments may be more targeted than statewide assessments and provide more immediate, actionable results to support instruction. Some may even be computer-adaptive, which could be particularly useful for testing students who have fallen behind this year.

The key difference is comparability—particularly when it comes to locally designed assessments. The fundamental purpose of statewide assessments is gathering comparable performance data for every student in a state. But if a test is developed at the district or school level, how will you be able to compare results across years or across the state? You won’t have that ability.

Question #5: What is the most important information we can gain from this year’s assessment data?

Polikoff: Having comparable assessment data for this year will allow us to investigate which students have been negatively impacted by COVID-19 and where supports are most needed. I’ve seen a narrative emerging that says, “We don’t need assessments to tell us what we already know.” However, we don’t actually have a good understanding of how the pandemic affected student learning. Any data we do have from interim assessments is highly caveated. Most assume that the pandemic has exacerbated gaps between student groups—which might be true, but other evidence suggests that the biggest gaps are between high- and low-performing students. That’s an important trend and worth knowing.

Question #6: Is there anything else you think is important to consider when it comes to assessments?

Polikoff: Assessment data is just one piece in a broader constellation of data. States and districts should also be gathering data on social-emotional learning—especially considering the trauma that many students experienced this year. By using all available information, they can ensure that any services or remediation will address the needs of the whole child.

For more on 2021 assessments, read DQC President and CEO Jennifer Bell-Ellwanger’s op-ed in The 74 and this statement from DQC and more than 40 other civil rights, social justice, disability rights, and education advocacy organizations to Secretary Cardona.