Saturday, November 17, 2007

The Great Yardstick Debate

The Bluegrass Institute recently picked a fight with the folks over at the Kentucky Long-Term Policy Research Center.

At issue is whether Kentucky's system of public education (preschool through college) has improved in recent years and how Kentucky's schools compared to those in other states.

It all started when the KLTPRC issued its recent study indicating that Kentucky had made educational progress since 1992. This must have rubbed the Bluegrass Institute folks the wrong way. They spent some time looking for holes in the report.

BGI's response criticized the KLTPRC report, alleging misstatements of NAEP data, inappropriate use of dropout data, improper use of the ACT to rank states, and more.

But Susan Ohanian, a blogger on the NAEP beat, suggested to BGI that its biases were showing.

The Bluegrass Institute offers "free-market solutions to Kentucky's most pressing problems." That, of course, puts them in the Choice camp, which is a euphemism for vouchers. Not that they are at all hesitant about proclaiming their love of vouchers: "Vouchers allow parents to choose better schools."

So are they offering a scholarly critique of Kentucky's testing program--or a manifesto to drain public confidence in the state's public schools?

As the Courier-Journal demonstrated recently, this kind of question is a fundamental problem for any "think tank." It is incredibly hard for anyone to set aside their own biases, especially the ones they believe in strongly.

Lord knows, I have biases.

I don't think I could fully trust anyone who told me they had no biases. I just want to know what they are.

My biases come from a career as an elementary school administrator in two Kentucky counties: before KERA, during its rocky implementation, and after Sen. Gerald Neal's SB 168 (with its data disaggregation, which predates NCLB and is significantly more powerful than I first realized). Since graduation I have taught at UK, EKU and the private Georgetown College.

I am biased by the sum of my experiences, and by compelling data.

I don't need reports from anyone to convince me that significant improvements have been made in Kentucky's system of public schools. Like other social institutions, they are far from perfect. But given the relatively modest financial resources Kentucky invests, on the whole, Kentucky should be right proud of its schools.

As education historian and scholar Diane Ravitch understands, what we really need is...

"...an independent, nonpartisan, professional audit agency to administer tests and report results to the public.

Such an agency should be staffed by testing professionals without a vested interest in whether the scores go up or down. Right now, when scores go down, the public is told that the test was harder this year - but when scores rise, state officials never speculate that the test might have been easier. Instead, they high-five one another and congratulate the state Board...for their wise policies and programs.

What the public needs are the facts. No spin, no creative explanations, no cherry-picking of data for nuggets of good news.

Just the facts."

In her presentation of the KLTPRC report at the group's recent conference (video), Dr. Amy Watts presented data in support of two propositions: 1) that public education, writ large, in Kentucky is progressing, and 2) that Kentucky's relative standing among the states is improving.

The following is paraphrased from Watts' presentation.

Working from an incomplete data set, KLTPRC derived an index from an adjusted set of 11 indicators, which revealed across-the-board improvements, moving Kentucky roughly from 43rd to 34th in rank. This mirrors two previous reports done outside of Kentucky: Kentucky was 34th in Education Week's Quality Counts 2007 Achievement Index and 31st in the Morgan Quitno 2006-2007 Smartest State Index.
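
The report's exact weighting scheme isn't spelled out here, but the mechanics of such a composite are simple. Below is a minimal sketch in Python, assuming the common approach of averaging each state's percentile standing across whatever indicators it reported; the state names, indicator names, and values are hypothetical, not KLTPRC's.

```python
# Minimal sketch of a composite state index: convert each indicator to a
# percentile among the states that reported it, average the percentiles,
# and rank the averages. Indicators and values below are hypothetical;
# this is an assumed approach, not KLTPRC's published methodology.

def percentile_rank(value, values):
    """Percent of reported values at or below `value`."""
    reported = [v for v in values if v is not None]
    return 100.0 * sum(v <= value for v in reported) / len(reported)

def composite_index(states):
    """states: {name: {indicator: value or None}} -> {name: mean percentile}."""
    scores = {}
    for name, vals in states.items():
        pcts = [
            percentile_rank(v, [s.get(ind) for s in states.values()])
            for ind, v in vals.items()
            if v is not None  # skip indicators this state didn't report
        ]
        scores[name] = sum(pcts) / len(pcts)
    return scores

# Three states, two indicators; TN did not report the second one.
states = {
    "KY": {"act_composite": 20.1, "hs_diploma_pct": 81.0},
    "TN": {"act_composite": 20.5, "hs_diploma_pct": None},
    "OH": {"act_composite": 21.4, "hs_diploma_pct": 87.0},
}
for name, score in sorted(composite_index(states).items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.0f}")
```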

The KLTPRC data set looks at two kinds of indicators: educational attainment and student achievement.


Educational attainment indicators

These data refer to Kentucky adults (HS diploma, 2-year degree, Bachelor's degree) and show an upward trend...although a relatively flat one that leaves Kentucky ranked near the bottom of US states.

The dropout rate shows progress, declining (which is good) by about 2% over the last seven years.

(KLTPRC used the definition insisted upon by the National Center for Education Statistics: "The percent of high school students who left high school between the beginning of one school year and the beginning of the next, without earning a high school diploma or its equivalent.")
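
That definition reduces to simple arithmetic. Here is a minimal sketch; the enrollment figures are hypothetical, purely for illustration.

```python
# NCES "event" dropout rate, per the definition quoted above: students who
# left between one fall and the next without a diploma or equivalent,
# divided by students enrolled at the start. Numbers are hypothetical.

def event_dropout_rate(enrolled_in_fall, returned_next_fall, completed):
    """Percent who left during the year without earning a diploma/GED."""
    dropouts = enrolled_in_fall - returned_next_fall - completed
    return 100.0 * dropouts / enrolled_in_fall

# Hypothetical district: 10,000 enrolled, 9,450 return, 250 finish early.
print(f"{event_dropout_rate(10_000, 9_450, 250):.1f}%")  # 3.0%
# Note: a real calculation must also net out verified transfers; students
# mis-coded as transfers are the kind of error the state audit flagged.
```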

Since not all states reported the data for all years, a percentile indicator was used to show progress over time. That measure showed that Kentucky made progress while outperforming other states, growing from the 32nd percentile to the 61st.
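
To see why a percentile works here where a raw rank would not, consider this minimal sketch: each year the standing is computed only against the states that actually reported, so the measure stays comparable as the reporting set changes. All states and rates below are hypothetical.

```python
# Percentile standing among reporting states only. Because non-reporters
# are excluded each year, the measure stays comparable even when the set
# of reporting states changes. States and rates here are hypothetical.

def percentile(state, year_data, higher_is_better=True):
    """Percent of other reporting states that `state` outperforms."""
    reported = {s: v for s, v in year_data.items() if v is not None}
    others = [v for s, v in reported.items() if s != state]
    mine = reported[state]
    beaten = sum((mine > v) if higher_is_better else (mine < v) for v in others)
    return 100.0 * beaten / len(others)

# Dropout rates (lower is better); TN did not report in 1998.
y1998 = {"KY": 5.1, "TN": None, "OH": 4.6, "WV": 5.9}
y2005 = {"KY": 3.5, "TN": 3.9, "OH": 3.4, "WV": 4.1}
for year, data in (("1998", y1998), ("2005", y2005)):
    pct = percentile("KY", data, higher_is_better=False)
    print(f"{year}: {pct:.0f}th percentile")
```
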
Student achievement indicators

Rather than looking at the scores, these data focus on the percentage of students who performed at the proficient level or above - which is Kentucky's goal. The CATS Accountability Index grew over time for all levels (Elem, Middle & HS).

The 4th and 8th grade Reading, Math and Science data are reported using the National Assessment of Educational Progress. To put Kentucky's growth into a national context the scores are reported using a percentile scale.

4th grade Reading improved (in percentile terms, from about 25th to about 50th).

For 8th graders the trend is flat - not much growth - and in percentile terms Kentucky has fallen behind, dropping from average to below average.

In Math, 4th grade achievement is up from 13% proficient to 31%, but the gap between Kentucky's progress and that of the nation has widened over time (we're falling behind).

8th grade Math is up from 10% to 27% proficient, which is roughly equivalent to the rest of the nation.

In percentile terms, while Kentucky has progressed in 4th grade Math, it has not kept pace with the national average, falling from the 20th percentile down to the 16th.

In 8th grade math the story is more mixed. Kentucky was making progress up to the year 2000, rising from the 17th percentile to the 37th, but has fallen off since that time. Now it's down to the 27th. This is a concern...as math is seen by economists as a crucial area for the economic future of the state.

Science is a bright spot for Kentucky, with 4th graders rising from average performance rankings up to the 81st percentile, while 8th grade rankings rose from below average to average.

To gauge high school performance, ACT composite scores were used. On the ACT, Kentucky has shown improvement while narrowing the gap with the nation, resulting in a higher standing: up from the 10th percentile rank to the 24th.

KLTPRC then created a composite that suggests Kentucky moved from 43rd to 34th. This mirrors results from two other non-Kentucky studies.

In the Q & A, Chris Derry, founder and president of the Bluegrass Institute for Public Policy Solutions, queried Watts. In reality, he asked all three of his questions at once and sat down while Watts responded. Below, I have cut and pasted Watts's responses for the ease of KSN&C readers.

Chris Derry: "You and I have discussed this, but I want to take these questions to a higher level; and frankly because we're on the Internet. I would like for the basis of discussing the policy consequences of some of the data you've used - might lead people to conclude otherwise, if other data were used (sic).

For instance, you've used the data point of dropout rates, and in fact...the Kentucky Auditor Crit Luallen came out with a report that challenged the accuracy of the Kentucky...Department of Education's dropout rates, which are the rates used in your calculation. (Luallen faulted the student information system for preventing accurate calculations - off by as much as 30%. That system is currently being replaced, as per Luallen's 2006 suggestion.)

So I would say, if the Auditor has questioned that accuracy, why would you introduce it into this report?

Amy Watts: Regarding the dropout rate ...the dropout rate that I'm using here follows a consistent definition that is used over time, and the NCES, the National Center for Education Statistics, and officials at KDE work diligently to ensure the accuracy of these data.

In fact, the [NCES] can't even compute a national dropout rate because several of the states don't adhere to their very strict definition.

So in this context, with these data, we are able to place Kentucky in the national setting and see how we've progressed on this particular indicator over time, based on the fact that these are nationally recognized standards of data collection and methodology, applied consistently. That's the biggest benefit of using that particular indicator.

As far as the Auditor's report, it really did help to point out where they can ...continue to make sure that ...the systems that they already have, the ...internal audit systems that they already have in place, to ensure ...the accuracy and ...how clean these data are, that that is maintained. So it helped highlight some places where they can really strengthen this and make it even better than it already is.

Derry: ...the ACT is an essential test that really is an international standard, because so many students, both abroad and domestically, are required to take that test to qualify to enter college. ...And Kentucky, I believe, Illinois and Colorado are the only states, beginning this year, in which all students are required to take the ACT.

But because other states do not have that 100% requirement, and because their percentages of test takers are so low, ACT has cautioned against comparing one state versus another; yet you've done that in this report.

Watts: The ACT does not warn against using these data in the context in which we have used them.

They've cautioned that you have to take into account demographic differences across the states, and we were very transparent in the data that we used here. You can go to the ACT website, and the methodology and some of these cautions that you've talked about are made readily apparent there.

So, definitely, we were very transparent in these data so that you could understand exactly what went into calculating the indexes used here and the rankings that we found.

Derry: ...a big part of the emphasis in this report is on census data, looking at the age category of 25 to 64.

As indicated in your report you say...'sufficient time has passed since the Kentucky Education Reform Act of 1990 and the Postsecondary Education Improvement Act of 1997 to ask, 'Are we making educational progress in Kentucky and if so are we gaining relative to the nation?'

When you use census data you're including ...hundreds of thousands of people who were not in KERA. And yet, you're making the claim in this that they should be included as an indication of the progress under KERA.

Watts: Yes, the census data, those are, again, readily accepted indicators of educational attainment and progress that are used time and time again by tons of different researchers.

So ...it seemed like it would be ...incomplete without this data in an index of educational progress.

...They do not reflect directly upon how KERA or the Postsecondary Education Act of 1997 have contributed to where we've come.

This is kind of a spot check: 'Let's look. Let's see where we are now. Let's see where we were. Now let's begin to see where we're going, based on these data that really give us an overall accurate picture of what's happening in Kentucky.'

Before any of this happened, I had already read the report and noted both its construction and its transparency.

Kentucky's funding levels may still be in the basement, and some areas of progress are certainly stronger than others, but overall student achievement gains are undeniable.

Student achievement is a lagging indicator and Kentucky is realizing the benefits of earlier (and continued) effort. This would seem to be the very definition of "an efficient system of common schools throughout the state," which is the legislature's constitutional mandate.

Viewed as a cost/benefit ratio, Kentucky's schools are providing citizens a better educational program than the state has a right to expect.

This is essentially what Judge Thomas Wingate alluded to when he struck down the Council for Better Education's most recent effort to force the General Assembly to keep its commitment to school reform.

For the amount of oil we put in the engine, this seventeen-year-old Chevy is running pretty well.

1 comment:

Anonymous said...

I am delighted that Richard Day took so much time to open a dialog on the problems with the KLTPRC’s “Policy Notes #23.” Although Richard’s commentary includes some misconceptions, his willingness to discuss the issues provides a framework to deal with some of the more serious problems involving the KLTPRC’s little flier.

Let’s start with an incorrect statement from Dr. Amy Watts, author of the KLTPRC flier, concerning the ranking of all 50 states using the ACT college entrance test.

As quoted in Day’s blog, Dr. Watts says, “The ACT does not warn against using these data in the context in which we have used them.”

Actually, the Bluegrass Institute contacted ACT spokesman Ed Colby for ACT’s position on doing 50-state rankings with their assessment. Mr. Colby says his organization “definitely would discourage doing state to state rankings due to the significant variation in participation rates.” I don’t understand how anyone could possibly misconstrue this.

It isn’t hard to understand why the ACT and the Bluegrass Institute are concerned about a 50-state ranking with the ACT. The participation rates for 2005 high school graduates on the ACT varied from a low of just four percent in Delaware (yes – only 4 percent) to a high of 100 percent in Colorado and Illinois, according to data readily available on the ACT’s Web site. I hope most readers can understand that such incredibly diverse samples – samples that were not randomly selected, by the way – are unsuitable for performing the ranking that Dr. Watts attempted. I have not discussed this with Dr. Watts so I must assume that she is totally unfamiliar with the tremendous state-by-state variations in ACT participation for 2005.
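
To make the selection problem concrete, consider a toy simulation (hypothetical numbers, not real ACT data): two states drawn from the very same score distribution, where one tests only its college-bound top 10 percent and the other tests everyone. The low-participation state “wins” the comparison anyway.

```python
# Toy simulation of the participation-rate problem: two states with the
# same underlying score distribution, but state A tests only its top 10%
# (self-selected college-bound students) while state B tests everyone.
# Hypothetical numbers on an ACT-like scale; not real data.

import random

random.seed(1)
population = [random.gauss(20, 5) for _ in range(100_000)]

takers_a = sorted(population)[-10_000:]   # A: 10% participation
takers_b = population                     # B: 100% participation

def mean(xs):
    return sum(xs) / len(xs)

print(f"State A average (10% take it):  {mean(takers_a):.1f}")
print(f"State B average (100% take it): {mean(takers_b):.1f}")
# Identical populations, yet A's average is several points higher: this
# selection effect is why naive state-to-state ACT rankings mislead.
```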

Richard Day also did not check with the ACT before posting Dr. Watts’ comments. He, too, was unaware of the huge variation in ACT participation rates across the nation until after he posted his blog. I suspect he would have worded the relevant section of his post differently if he had checked on the ACT’s real recommendation regarding such rankings.

Another major problem with the KLTPRC flier is that it is simply inappropriate to do anything with Kentucky dropout data. Kentucky’s flawed dropout database has been officially declared unreliable by the state auditor. Richard makes a mistake of his own in this area of his comments. The audit didn’t find those dropout rates were off “by as much as 30%.” In fact, the actual audit shows the error is at least 30% and speculates that the real rate of error is probably much higher. Unfortunately, the auditor was only able to obtain data on those students who left school during the school term. The additional students who simply didn’t return after the summer break could not be audited in any way due to inadequate data from the Kentucky Department of Education. The audit is very clear on the point that the missing information on summer dropouts will make the true errors in the dropout rate even larger than 30%.

As a note, my own estimates are that the true error in the KDE’s current dropout rate reports is closer to 100%. We won’t know for sure for two more years, assuming the state’s new student tracking system is properly implemented and managed. (Unfortunately, a similar system was set up in Texas recently, but I have read that it still provides inaccurate data due to a failure to audit and enforce the rules effectively.)

I found Dr. Watts’ defense of Kentucky’s highly unreliable dropout data, as quoted by Richard Day, terribly disappointing. NCES dropout data for all the states have been widely criticized by many organizations, ranging from the conservative Manhattan Institute to the much more liberally oriented Urban Institute. The problem is that most states have no mechanisms to ensure the data are accurately collected. As in Kentucky, analyses elsewhere provide strong evidence that the real dropout rates are much higher than most states report. Ranking such unreliable data amounts to nothing more than rewarding those who do the least accurate job of tracking students.

The deficiencies in current NCES dropout reporting are even recognized by the US Congress. That is why Congress required graduation rate rather than dropout rate reporting when No Child Left Behind was enacted. At present, it appears likely that Congress will tighten the very loose graduation rate reporting most states currently use under NCLB.

Another fact concerning Dr. Watts’ supposedly solid dropout rates is of interest. The NCES merrily went on publishing dropout data for Louisiana throughout the mid-1990s while that state transitioned to a much more accurate dropout reporting system that used student tracking. Once that new system was up and running in the 1995-96 school year, Louisiana’s dropout rate suddenly tripled. The point is that the NCES didn’t catch the problems with Louisiana data prior to 1995-96. And, the NCES has no way to check the accuracy of other states’ data, either (So much for Dr. Watts’ claims that this is high quality data). Interested readers can check the Louisiana situation for themselves in “Dropout Rates in the United States: 2001,” published by the NCES in November 2004. It may still be online. Search for NCES publication 2005-046. Find information about Louisiana’s accounting system change in the footnotes to Table 2.

The problem with dropout rate reporting isn’t a liberal versus conservative issue or a voucher supporter versus non-voucher supporter issue; almost everyone who has seriously looked at the dropout data knows it isn’t reliable. Sadly, kids are falling through the cracks while the education establishment goes on generating these fantasy numbers. KLTPRC should have done more checking before including such clearly unreliable data in its flier.

There are other problems highlighted by Richard Day’s post concerning the KLTPRC report – too many to discuss here. However, I would note that he quotes Watts admitting that the US Census data on diplomas, and two- and four-year degrees “do not directly reflect upon how KERA or the Postsecondary Education Act of 1997 have contributed to where we have come.” That is an amazing comment. This caution certainly isn’t in her two-page flier. In fact, the very first sentence of that flier says that sufficient time has elapsed to evaluate how KERA and the postsecondary legislation are performing. The implication is that these three Census items do provide information useful to the evaluation of KERA. Now, she admits the obvious – these three indicators really are not suitable for the purpose proclaimed in the first sentence of her flier. Add to the problems of these three indicators the unsuitable nature of the ACT and dropout rate data Watts used, and nearly half of her 11 indicators go up in smoke.

That leaves only the NAEP. But, wait a minute! It looks like Richard Day quotes Susan Ohanian criticizing the Bluegrass Institute for using NAEP data – the very same data that Watts uses. When I asked Richard about that, it seems he misunderstood Ohanian’s blog. He thought she was actually criticizing the KLTPRC report, but in fact she is criticizing the Bluegrass Institute’s use of NAEP in a totally different report that shows scoring on our CATS test has slowly been inflating.

Anyway, if you are in the Ohanian camp, then all the data used by KLTPRC is suspect. We don’t share Ohanian’s dim view of the NAEP, but we do know that plenty of cautions have to be observed in doing any state to state comparisons with the results of this federal assessment. Watts didn’t mention or observe any of them.

In closing, what these discussions highlight more than anything else is the very unsatisfactory quality of data that policy makers must deal with in trying to determine how KERA is really functioning and how it might best be improved. Ultimately, Richard Day and the Bluegrass Institute are largely on the same page on this issue. We both want solid information that will inform all of us about how education is really performing. We want that data to be highly useful to educators, both to evaluate student performance and to inform on ways that might improve that performance. When policymakers get nothing better than what the KLTPRC offers, they are left largely in the dark, if not actually misled. We hope our bringing this to the public’s attention will be viewed in proper context as a sincere effort to kick education data to a higher level. We think our students, our educators, and the commonwealth deserve the high quality information we seek.