
Analysis: There’s Lots of Education Data Out There — and It Can Be Misleading. Here Are 6 Questions to Ask


Data is critical to addressing inequities in education. However, too often data is misused, interpreted to fit a particular agenda or misread in ways that perpetuate an inaccurate story. Data that’s not broken down properly can hide gaps between different groups of students. Facts out of context can lead to superficial conclusions. 

For example, a 2019 Minnesota Department of Education report highlights that between 2016 and 2018, Advanced Placement exam participation for Black students increased by 29 percent, the largest jump for any student group. But Black students still accounted for just 4 percent of AP test takers, and their pass rate was only 41 percent — significantly lower than the statewide average. None of that context appears in the report, which also fails to mention ongoing racial disparities in access to and success in AP and dual-enrollment courses in the state.

This is just one example of how data can create a deceptive narrative. In this case, the goal was to celebrate success. But while celebrating progress is important, it is also essential to be honest about the work still needed to serve all kids effectively. Below are six guiding questions that I ask myself (and that I hope will be helpful for you) when I am consuming data.

1 Is it representative?

Representative samples are designed to mirror the larger population and are important for ensuring all groups are included in the right proportions. A sample that is not representative can be biased, overrepresenting certain groups and magnifying their outcomes or responses while underrepresenting others.

For example, Minneapolis Public Schools administered a parent survey last year to understand family preferences about in-person learning and transportation needs. White families represented 64 percent of responses but only 37 percent of students. This inflated the percentage of families who indicated they wanted to return to in-person learning. To use data like this effectively for planning and decision-making, it is important to look past the aggregate numbers so that decisions reflect the needs and interests of the district's full, more diverse student population.
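For readers who work with data directly, here is a minimal sketch in Python of how survey responses can be reweighted so each group counts according to its share of enrollment rather than its share of responses. The 64 percent and 37 percent figures come from the example above; the two-group split and the "prefers in-person" rates are illustrative assumptions, not numbers from the Minneapolis survey.

```python
# A minimal sketch of reweighting survey responses so each group counts in
# proportion to its share of enrollment rather than its share of responses.
# The response/enrollment shares echo the article's example; the
# "prefers in-person" rates below are purely illustrative assumptions.

# share of survey responses vs. share of actual enrollment, by group
response_share = {"white": 0.64, "students_of_color": 0.36}
enrollment_share = {"white": 0.37, "students_of_color": 0.63}

# hypothetical: share of each group's respondents preferring in-person learning
prefers_in_person = {"white": 0.80, "students_of_color": 0.55}

# unweighted estimate: groups count by how many responses they sent in
unweighted = sum(response_share[g] * prefers_in_person[g] for g in response_share)

# weighted estimate: each group's answers count by its share of enrollment
weighted = sum(enrollment_share[g] * prefers_in_person[g] for g in enrollment_share)

print(f"unweighted estimate: {unweighted:.0%}")  # ~71% prefer in-person
print(f"weighted estimate:   {weighted:.0%}")    # ~64% prefer in-person
```

With these made-up preference rates, the unweighted number overstates support for returning in person by about 7 percentage points, which is the same direction of error the district's survey risked.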

2 Is it disaggregated? 

Aggregate data can mask inequities and lead to incomplete conclusions. In 2020, Minnesota’s overall graduation rate was 83.8 percent — a historic high for the state. However, when disaggregated by student group, large gaps emerge that paint a different picture. For example, only 56 percent of Indigenous students and 65 percent of students with disabilities graduated that year.

Disaggregated data is important because it provides a better sense of how each student group is doing, and how well the system is supporting their specific needs. This information can help to inform inquiry into why disparities are happening and can assist with better tailoring support and interventions for students.
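To make the point concrete, here is a minimal sketch in Python (using pandas) that computes a graduation rate in aggregate and then disaggregated by group. The group sizes and the third group's rate are illustrative assumptions; only the 56 percent and 65 percent figures, and the idea that one healthy-looking aggregate can hide large gaps, come from the state data discussed above.

```python
# A minimal sketch of disaggregating an outcome by student group.
# Counts are invented and the groups are treated as non-overlapping for
# simplicity; real cohort data would be larger and messier.
import pandas as pd

# hypothetical student-level records: one row per student in the cohort
cohort = pd.DataFrame({
    "group":     ["Indigenous"] * 100
               + ["Students with disabilities"] * 100
               + ["All other students"] * 800,
    "graduated": [True] * 56 + [False] * 44      # 56% for Indigenous students
               + [True] * 65 + [False] * 35      # 65% for students with disabilities
               + [True] * 720 + [False] * 80,    # 90% for everyone else
})

# the aggregate rate looks healthy on its own
print(f"overall graduation rate: {cohort['graduated'].mean():.1%}")

# disaggregating shows the gaps the aggregate number hides
print(cohort.groupby("group")["graduated"].mean().map("{:.1%}".format))
```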

3 What is the sample size?

It’s not only important to know whether the sample is representative, but also how big the samples are — in aggregate and by group. This is known as the n-size. The more people who participate, the more closely the results reflect the actual population and the less likely it is that outliers will skew them.

For example, in a 2021 RAND survey of parents, the researchers acknowledged that the small n-sizes for Black and Asian parents made it hard to detect if there were statistically significant differences across parent groups. 
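A rough illustration of why n-size matters: the standard margin of error for an estimated percentage shrinks as the sample grows. The Python sketch below uses illustrative sample sizes and a 50 percent response rate; these are not figures from the RAND survey.

```python
# A minimal sketch of how n-size affects uncertainty: the approximate 95%
# margin of error for an estimated proportion shrinks as the sample grows.
# Sample sizes and the 50% rate are illustrative assumptions.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (25, 100, 400, 1600):
    moe = margin_of_error(0.50, n)
    print(f"n = {n:>4}: 50% +/- {moe:.1%}")
# n =   25: 50% +/- 19.6%
# n =  100: 50% +/- 9.8%
# n =  400: 50% +/- 4.9%
# n = 1600: 50% +/- 2.5%
```

Quadrupling a sample only cuts the margin of error in half, which is why estimates for small subgroups carry far more uncertainty than the topline numbers.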

4 What are the limitations?

No survey, study or test is perfect. There are always limitations on what is measured and how. That doesn’t mean we can’t learn from the data — but it is important to understand those limitations to place the data in context and gauge how much weight the conclusions can bear.

In its July 2021 brief, the Center for School and Student Progress shared findings about how COVID-19 affected student achievement during the 2020-21 school year, including which children were most affected. However, the overall attrition rate in 2020-21 was a higher-than-normal 20 percent — meaning that 1 in 5 students who tested in the prior year did not take the assessment. The attrition rate was higher for students of color and those from low-income families. This suggests the data may overstate academic achievement and gains.
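Here is a minimal sketch in Python of the mechanism at work. All of the numbers are invented; the point is simply that when lower-scoring students are more likely to be missing from this year's testing pool, the average for the students who remain looks higher than the average for the full population.

```python
# A minimal sketch of how differential attrition can inflate observed
# averages. Every number here is an illustrative assumption; only the
# mechanism (students who missed the test tend to have lower prior scores,
# so the remaining sample looks stronger) reflects the brief discussed above.
import random

random.seed(0)

# hypothetical prior-year scores for 1,000 students
scores = [random.gauss(200, 15) for _ in range(1000)]

# assume lower-scoring students are more likely to miss this year's test,
# producing overall attrition of roughly 20 percent
def tested_this_year(score: float) -> bool:
    drop_probability = 0.35 if score < 190 else 0.12
    return random.random() > drop_probability

tested = [s for s in scores if tested_this_year(s)]

print(f"attrition rate: {1 - len(tested) / len(scores):.0%}")
print(f"mean score, all students:    {sum(scores) / len(scores):.1f}")
print(f"mean score, tested students: {sum(tested) / len(tested):.1f}")
```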

5 What is it measuring? 

Different tests serve different purposes, and some are of higher quality than others. For example, statewide tests are generally benchmarked to state standards, so results can be compared from district to district. Other tests are designed to measure progress for an individual student over the course of the school year. Depending on what information you’re looking for, one type of test may provide better data than the other.

In some cases — particularly for surveys — it is also important to know what was asked and how the answers were evaluated. For example, one of the most-cited statistics from a recent teachers union survey was that 29 percent of educators were considering leaving the profession. However, this was within a very specific context. When asked “How are you currently feeling about your work as an educator?” 7 of the 10 possible answers provided were negative — “stressed,” “overwhelmed,” “frustrated,” “worried,” “thinking about quitting or retiring,” “angry,” “worried about my own mental health.” Only three responses were positive: “happy,” “inspired” and “focused.” 

Bias in a question can influence the answers that people select when taking a survey, reinforcing a preconceived belief without capturing the full picture.

6 How can you put the data in context?

No piece of data tells the whole story. When drawing conclusions about how students are doing or what families want, it’s important to use more than one metric. Math and reading proficiency provide a critical touchpoint about how students are doing academically, but it’s important to use other data — attendance rates, discipline, graduation rates, surveys and other feedback — to put it all in context. And in the end, it is important to remember why we gather education data in the first place. It should not be just to look back, but to ask what’s next, to give a starting point to larger conversations that involve teachers, families, students and other education stakeholders. 

Krista Kaput is research director for EdAllies.
