Using data to inform our conversations about public school performance is a good idea, but too often, the measures we use are reduced to imprecise terms like “proficiency,” which can carry several different meanings when describing a local, state, or national assessment¹.

As Susan Dynarski notes in The Upshot, this is also a common problem with the most-frequently used proxy for “poverty” in education, Free/Reduced Price Lunch (FRPL) eligibility:

“Nearly half of students nationwide are eligible for a subsidized meal in school. Children whose families earn less than 185 percent of the poverty threshold are eligible for a reduced-price lunch, while those below 130 percent get a free lunch. For a family of four, the cutoffs are $32,000 for a free lunch and $45,000 for a reduced-price one. By way of comparison, median household income in the United States was about $54,000 in 2014.

Eligibility for subsidized school meals is clearly a blunt indicator of economic status. But that is the measure that policy makers, educators and researchers rely on when they gauge gaps in academic achievement in schools, districts and states.”

In practice, this means that when we refer to FRPL students as “economically disadvantaged,” we’re really painting with a broad brush. Thankfully, Dynarski and her co-author, Katherine Michelmore, devised a way to use current FRPL data to produce a more precise picture of student economic disadvantage: instead of looking at FRPL-eligibility in the current school year, we can use longitudinal datasets to look at how many years a student has been FRPL-eligible.

The concept is simple. Imagine comparing two fifth-grade students, student A and student B. If student A has been FRPL-eligible for one year and student B has been FRPL-eligible for five years, it’s clear that student B faces a greater economic disadvantage than student A.
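A minimal sketch of how this measure could be computed, assuming a hypothetical longitudinal dataset of per-year eligibility records (this is illustrative, not Dynarski and Michelmore’s actual methodology or data):

```python
# Count how many years each student has been FRPL-eligible, given
# longitudinal records of (student_id, school_year, frpl_eligible).
# The records below are hypothetical, matching the A/B example above.
from collections import Counter

records = [
    ("A", 2012, False), ("A", 2013, False), ("A", 2014, False),
    ("A", 2015, False), ("A", 2016, True),
    ("B", 2012, True), ("B", 2013, True), ("B", 2014, True),
    ("B", 2015, True), ("B", 2016, True),
]

# Tally eligible years per student.
years_eligible = Counter(sid for sid, _, eligible in records if eligible)

print(dict(years_eligible))  # → {'A': 1, 'B': 5}
```

A snapshot measure would label both students identically ("FRPL-eligible in 2016"); the year count separates transitory from persistent disadvantage.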

Dynarski continues:

“No one ever actively decided that eligibility for subsidized meals was the best way to measure students’ economic disadvantage. The metric was widely available and became by default the standard way to distinguish between poorer and richer children. But it was always an imprecise measure, and we can do better at little cost.”

We’ve already seen researchers stand up to advocate for better ways to quantify “proficiency”; I hope we see a similar movement by researchers to advocate for better measures of poverty. Supporting Dynarski’s approach would be a good (and cost-efficient!) step in that direction.

1. This is a particular problem when NAEP results are released. Just remember, friends don’t let friends engage in misNAEPery.

Posted on: August 12, 2016