Tuesday, January 8, 2013

Mind the Gap

I want to share an idea I saw at a conference last month. Presented by Paul Stern of the Vancouver Public Schools, it was one of two very intriguing concepts for working with assessment data. Fair or unfair, schools are the subject of a lot of comparisons---how well they perform against other schools in their area, state, or even nationally and internationally, as well as internal comparisons that look at scores from year to year. We can think of lots of reasons why these "apples to oranges" discussions are cagey---everything from the populations schools draw from, to the curriculum used, to teacher quality, parent involvement, and so forth.

Perhaps the biggest of these---in terms of what school staff discuss or dismiss---is the percent of students eligible for free/reduced lunch (FRL). Often used as a measure of poverty, the greater the percentage in a given school, the greater the population living at or below the poverty line. There are some quarrels with using this. For example, the percentage decreases as grade levels increase---that is, there are far more students in kindergarten who are eligible vs. high school seniors. This may be due to underreporting at upper grade levels (a kid doesn't want to appear different in front of peers, and so the paperwork doesn't get turned in), or simply that as children age and become more independent, it's more likely to find two working parents outside the home (and therefore more income). But, we'll set this aside for today's discussion.

So, here's a chart that will serve as the starting point for us.

The dots on this chart represent every school in the state of Washington for which data were available on performance of 8th graders on the state math test and percent of students eligible for free or reduced price meals. The dark orange trendline tells us about what we'd expect: the greater the percent of students eligible for FRL, the lower the percentage of students meeting the standard (a/k/a "passing the test"). The straight beige line shows the statewide percentage for meeting the standard on the 8th grade math test.
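For anyone who'd rather fit that trendline outside of Excel, the same least-squares line takes only a few lines of Python. (The numbers below are made-up stand-ins, not the actual state data, which covered every school in Washington.)

```python
import numpy as np

# Illustrative data only -- the real chart used every WA school's
# 8th-grade math pass rate against its FRL percentage.
frl_pct = np.array([10, 25, 40, 55, 70, 85])    # % eligible for free/reduced lunch
pass_pct = np.array([65, 58, 54, 47, 43, 37])   # % meeting the math standard

# polyfit with degree 1 is a least-squares linear fit -- the same
# trendline Excel draws on a scatter chart.
slope, intercept = np.polyfit(frl_pct, pass_pct, 1)
print(f"y = {slope:.3f}x + {intercept:.3f}")
```

The slope comes out negative, matching the downward-sloping orange line: more poverty, lower pass rates.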

Looking at this might engender some questions about schools that don't fit the overall model. In the lower lefthand corner, we have schools with a low percent of FRL...but poor performance on the test. And in the upper righthand corner, we have a few schools with a large percent of FRL that are nonetheless doing better than the statewide performance. What are those schools doing, I wonder?

But let's say that you're in a large district, like Seattle. It's likely there are conversations about student achievement at the middle school level as it relates to poverty, but we can dig deeper than that. We might expect a certain level of performance, based on the model shown above. But using the model to supply a context will allow us to remove poverty from the discussion---in other words, what is the gap in performance between the predictive model and the actual score?

Here is the same chart, with Seattle schools highlighted (click to embiggen):


As we can see, some schools are below the trendline---they didn't score as well as predicted. Others are above the trendline---they performed better than predicted. To help visualize this a little better, let's zoom in on two of the schools.


The arrows point to the predicted performance of McClure and Pathfinder. Based on their percentage of students eligible for free/reduced lunch, we would have expected them to score around the state level (~55%). However, McClure scored 13 points above this...and Pathfinder 6 points below.

We can also build a chart to take a broader look at the various gaps between predicted and actual performance. Using the handy-dandy trendline equation that Excel provides (y = -0.362x + 68.088), we can substitute the percent of FRL for x and find the predicted performance based on the trendline (y).
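That substitution is a one-liner in Python. (The formula is the Excel trendline above; the 36% FRL input is just an illustrative value that happens to land near the state average.)

```python
def predicted_pass_rate(frl_pct):
    """Predicted % meeting standard, from the trendline y = -0.362x + 68.088."""
    return -0.362 * frl_pct + 68.088

# A school with roughly 36% of students eligible for FRL would be
# predicted to score near the statewide average of ~55%.
print(predicted_pass_rate(36))  # about 55.1
```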

See? Your Algebra teacher knew learning about slope would come in handy someday.

Using one of the stock charts in Excel, we can visualize this to get a better idea of the differences in performance. The schools are organized, left to right, by their predicted performance. The dot at the end of each line represents their actual performance. The length of the lines shows the difference.
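The arithmetic behind that chart looks like this in Python. The two school names come from the charts, but the FRL and score figures here are illustrative placeholders chosen to match the roughly 13-point and 6-point gaps described above, not the district's published numbers.

```python
# name: (frl_pct, actual_pass_pct) -- placeholder figures for illustration
schools = {
    "McClure": (36, 68),
    "Pathfinder": (36, 49),
}

# Order schools by predicted performance (highest first), as in the chart.
for name, (frl, actual) in sorted(
        schools.items(),
        key=lambda kv: -0.362 * kv[1][0] + 68.088,
        reverse=True):
    predicted = -0.362 * frl + 68.088
    gap = actual - predicted
    print(f"{name:12s} predicted {predicted:5.1f}  actual {actual:3d}  gap {gap:+5.1f}")
```

A positive gap means the school beat its prediction; a negative gap means it fell short.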


This chart helps us see things in a new way. For example, Madrona has the highest percentage of FRL out of these schools, but their gap in terms of expected performance is certainly not as big as Cascade or Orca. Hamilton has the lowest percentage of FRL and the highest actual math scores in the district, but it is not the school that best outperformed expectations. This also allows us to see that schools like Jane Addams and Madison, while still performing below the state average, are outperforming expectations (if only by a small margin). We don't celebrate our successes nearly enough in education. Maybe that's because we don't look for them like this.

Again, the idea here is to remove poverty levels as the focus for explaining the differences between schools. Doing so allows us to look for deeper answers about curriculum and instruction. This is not to say that socioeconomic status has no impact---just that dismissing low performance because of it is not the whole story.

I've used public data available here to model these charts, but you could substitute other indicators. Education is certainly not all about the test---and schools shouldn't be judged on a single measure. But I do think that this could be a powerful starting point for schools and districts.
