This blog provides information on public education, children, teaching, and homeschooling

Showing posts with label value added.
Friday, September 3, 2010

LA Times Value Added Editorial

The Los Angeles Times editorial page gets it mostly right today on the value-added issue ("Good teachers, good students," September 3, 2010). It says a number of smart things that I agree with, such as:
  • "Test scores are indeed just one indicator of a teacher's performance."
  • "But it's revealing, and disturbing, to read the comments of some teachers who don't seem to care whether their students' scores slide. They argue that they're focused on more important things than the tests measure. That's unpersuasive."
  • "This page has never believed that test scores should count for all of a teacher's evaluation — or even be the most important factor. But they should be a part of it."
  • "Right now, the "value-added" scores The Times has been reporting are more useful for evaluating schools than teachers. Many factors can throw off the data at the classroom level."
  • "That's why we think the Obama administration has been too hasty to push states into linking test scores to teacher evaluations and to reward states that overemphasize the scores, making them count for half or more of a teacher's worth. The administration's first priorities should have been developing better tests, which it's working on now — if we're going to judge teachers in part by these scores, it's unacceptable to say that top-notch tests are too expensive — and statistical models that minimize random factors and make the scores a better evaluation tool."
  • "Current teacher evaluation practices are ripe for overhaul. Performance reviews should include, at minimum, classroom observations, portfolios of student work over the academic year and, yes, objective test data."
I just wish its news division had taken some of these points to heart: namely, waiting until the methodology could be paired with other measures of teacher effectiveness, such as classroom observations, rather than publishing the value-added scores of individual teachers and definitively labeling some as most and least effective.

Heather Horn of The Atlantic magazine offers a nice summary of some of the related issues and links to relevant sources in this September 1, 2010 blog post.

And Dana Goldstein offers a smart retort (and a preview of her upcoming The Nation feature on value added?) to a vacuous and vitriolic Slate post by Jack Shafer on this topic.

Wednesday, September 1, 2010

More Grist for the Value-Added Mill

Here is additional smart and pithy commentary on the current value-added conversation that I wasn't able to incorporate into yesterday's post or have only discovered since.
Tuesday, August 31, 2010

Adding Value to the Value-Added Debate

Seeing as I am not paid to blog as part of my day job, it's basically impossible for me to be even close to first out of the box on the issues of the day. Add to that being a parent of two small children (my most important job – right up there with being a husband), and you can understand my occasional frustration at not being able to weigh in on some of these issues quickly.

That said, here is my attempt to distill some key points and share my opinions -- add value, if you will -- to the debate that is raging as a result of the Los Angeles Times's decision to publish the value-added scores of individual teachers in the L.A. Unified School District.

First of all, let me address the issue at hand. I believe that the LA Times's decision to publish the value-added scores was irresponsible. Given what we know about the unreliability and variability in such scores and the likelihood that consumers of said scores will use them at face value without fully understanding all of the caveats, this was a dish that should have been sent back to the kitchen.

Although the LA Times is not a government or public entity, it does operate in the public sphere. And it has a responsibility as such an actor. Its decision to label LA teachers as 'effective' and 'ineffective' based on suspect value-added data alone is akin to an auditor secretly investigating a firm or agency without an engagement letter and publishing findings that may or may not hold water.

Frankly, I don't care what positive benefits this decision by the LA Times might have engendered.
Yes, the district and the teachers union have agreed to begin negotiations on a new evaluation system. Top district officials have said they want at least 30% of a teacher's review to be based on value-added and have wisely said that the majority of the evaluation should depend on classroom observations. Some have argued that such a development exonerates the LA Times. In my mind, any such benefits are ill-gotten and come at the expense of sticking it -- rightly in some cases, certainly wrongly in others -- to individual teachers who mostly are trying their best.

Oh, I know, I know. It's not about the teachers anymore. Their day has come and gone. "It's about the kids" now, right? But you know what? The decisions we make about how we license, compensate, evaluate and dismiss teachers affect them as individual people, as husbands and wives, as mothers and fathers. They affect who may or may not choose to enter the profession in the coming years. If we mistakenly catch a bunch of teachers in a wrong-headed, value-added dragnet based upon a missionary zeal and a 'head in the sand' conviction that numbers don't lie, we will be doing a disservice both to teachers and to the kids. And if we start slicing and dicing teachers left and right, who exactly will replace them?

(1) Value-added test scores should not be used as the primary means of informing high-stakes decisions, such as tenure and dismissal.
One primary piece of evidence was released just this week from the well-respected, nonpartisan Economic Policy Institute. The EPI report, co-authored by numerous academic experts, said:

  • Student test scores are not reliable indicators of teacher effectiveness, even with the addition of value-added modeling (VAM).
  • Though VAM methods have allowed for more sophisticated comparisons of teachers than were possible in the past, they are still inaccurate, so test scores should not dominate the information used by school officials in making high-stakes decisions about the evaluation, discipline and compensation of teachers.
  • Neither parents nor anyone else should believe that the Los Angeles Times analysis actually identifies which teachers are effective or ineffective in teaching children because the methods are incapable of doing so fairly and accurately.
  • Analyses of VAM results show that they are often unstable across time, classes and tests; thus, test scores, even with the addition of VAM, are not accurate indicators of teacher effectiveness. Student test scores, even with VAM, cannot fully account for the wide range of factors that influence student learning, particularly the backgrounds of students, school supports and the effects of summer learning loss. As a result, teachers who teach students with the greatest educational needs appear to be less effective than they are.
Other experts, such as Mathematica Policy Research, Rick Hess, and Dan Goldhaber, have offered important cautions as well.

The findings of the IES-funded Mathematica report were “largely driven by findings from the literature and new analyses that more than 90 percent of the variation in student gain scores is due to the variation in student-level factors that are not under the control of the teacher. Thus, multiple years of performance data are required to reliably detect a teacher’s true long-run performance signal from the student-level noise…. Type I and II error rates [‘false positives’ and ‘false negatives’] for teacher-level analyses will be about 26 percent if three years of data are used for estimation. In a typical performance measurement system, more than 1 in 4 teachers who are truly average in performance will be erroneously identified for special treatment, and more than 1 in 4 teachers who differ from average performance by 3 months of student learning in math or 4 months in reading will be overlooked. In addition, Type I and II error rates will likely decrease by only about one half (from 26 to 12 percent) using 10 years of data.”
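To build intuition for why that much student-level noise produces such high misclassification rates, here is a minimal simulation sketch in Python. It is purely illustrative and not the Mathematica model: the noise share, effect-size bands, and flagging threshold are all assumptions chosen for the example.

```python
import numpy as np

# Purely illustrative sketch -- not the Mathematica model. The noise share,
# effect-size bands, and flagging rule below are assumptions for the example.
rng = np.random.default_rng(0)

n_teachers = 10_000
years = 3                       # years of data averaged per teacher
noise_share = 0.90              # ~90% of gain-score variance is student-level noise

true_sd = np.sqrt(1 - noise_share)
noise_sd = np.sqrt(noise_share)
true_effect = rng.normal(0.0, true_sd, n_teachers)

# Each year's measured value-added = true effect + classroom-level noise;
# a teacher's score is the multi-year average.
annual = true_effect[:, None] + rng.normal(0.0, noise_sd, (n_teachers, years))
estimate = annual.mean(axis=1)

# Flag teachers whose averaged estimate sits more than one standard error from zero.
std_err = noise_sd / np.sqrt(years)
flagged = np.abs(estimate) > std_err

truly_average = np.abs(true_effect) < 0.05     # essentially average teachers
truly_different = np.abs(true_effect) > 0.20   # clearly above/below average

print(f"Flagged though truly average:  {flagged[truly_average].mean():.0%}")     # Type I analogue
print(f"Missed though truly different: {(~flagged[truly_different]).mean():.0%}")  # Type II analogue
```

Even with three years of data averaged together, a sizable share of genuinely average teachers gets flagged and a sizable share of genuinely stronger or weaker teachers gets missed, which is the basic dynamic the Mathematica report quantifies.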

Hess has “three serious problems with what the LAT did. First … I'm increasingly nervous at how casually reading and math value-added calculations are being treated as de facto determinants of "good" teaching…. Second, beyond these kinds of technical considerations, there are structural problems. For instance, in those cases where students receive substantial pull-out instruction or work with a designated reading instructor, LAT-style value-added calculations are going to conflate the impact of the teacher and this other instruction…. Third, there's a profound failure to recognize the difference between responsible management and public transparency.”

Goldhaber, in a Seattle Times op-ed, says that he “support[s] the idea of using value-added methods as one means of judging teacher performance, but strongly oppose[s] making the performance estimates of individual teachers public in this way. First, there are reasons to be concerned that individual value-added estimates may be misleading indicators of true teacher performance. Second, performance estimates that look different from one another on paper may not truly be distinct in a statistically significant sense. Finally, and perhaps most important, I cannot think of a profession in either the public or private sector where individual employee performance estimates are made public in a newspaper.”

Using multiple measures to inform teacher evaluation seems like the right approach, including the use of multiple years of value-added student data (one thing the LA Times DID get right). That said, the available research would seem to suggest that states (particularly in Race to the Top) that have proposed basing 50% or more of an individual educator's evaluation on a value-added score may have gone too far down the path. LA Unified officials have said (LA Times, 8/30/2010) they want at least 30% of a teacher's review to be based on value-added and that the majority of the evaluation should depend on observations. That might be a more appropriate stance.

(2) Embracing the status quo is unacceptable.
As reports such as The New Teacher Project's Widget Effect have chronicled, current approaches to teacher evaluation are broken. They don't work for anyone involved. Critics of VAM cannot simply draw a line in the sand and state, "This will not stand!" If not this, then what? Certainly not the current system! Fortunately, organizations such as the American Federation of Teachers and the Hope Street Group are developing or have offered thoughtful solutions to this issue. [Disclosure: I participated in Hope Street's effort, and my New Teacher Center colleague Eric Hirsch serves on AFT's evaluation committee.] Sadly, LA Unified and the LA teachers union are both culpable – along with the LA Times – in bringing this upon the city's teachers by refusing to analyze or utilize available value-added data. Their adherence to the status quo created a void that the LA Times sought to fill in order to sell more newspapers in a wrong-headed attempt to inform the public.

(3) The ‘lesser of two evils’ axiom should not be invoked.
Even if you agree, as Education Sector's Chad Aldeman argues, that all the factors we currently use to select and sort teachers are worse than a value-added-only alternative, our current arsenal does not meaningfully inform high-stakes decisions (apart from entry tests with largely low passing scores and the aforementioned impossible-to-fail evaluations). That is, of course, a condemnation of the current system's inability and/or unwillingness to differentiate among teachers, but it is also a recognition that, in all but a few promising places, we haven't struck the right balance or developed value-added systems capable of informing high-stakes decisions.

(4) Don't lose sight of the utility of value-added data to inform formative assessment of teaching practice.
If one of the takeaways from the research is that value-added data shouldn't drive high-stakes decisions on its own, it is helpful to think about using this data to inform teacher development. Analysis of student work, including relevant test scores, is an important professional development opportunity that all teachers, especially new ones, should have regular chances to engage in. Systems such as the NTC's Formative Assessment System provide such a tool in the states and districts with which it works on teacher induction. Sadly, this is not the norm in American schools, but it is built into high-quality professional development approaches, as Sara Mead wisely discusses in her recent Ed Week blog post. As I noted under #2, LA Unified missed an opportunity to use such data to inform its educators in this way. In the LA Times value-added series, several teachers bemoaned the fact that they never had the opportunity to see such data until it was published in the newspaper.

(5) Valid and reliable classroom observation conducted by trained evaluators is critical.
Other elements of an evaluation system are even more important than value-added methodology, if for no other reason than that the majority of teachers do not teach tested subjects. Unless we, God forbid, develop multiple-choice assessments for more and more subjects and grade levels, we're going to need valid and reliable ways of assessing the practice of educators who cannot be assessed by value-added student achievement scores. Despite some of the criticisms lobbed at the District of Columbia's new IMPACT evaluation system, this is an element at the heart of DC's approach to teacher evaluation. Further, the Gates Foundation's ongoing teacher effectiveness study holds great promise.

(6) We've got to get beyond this focus on the 'best' and 'worst' teachers.
How about we focus on strengthening the effectiveness of the 80-90% of teachers in the middle? We know how to do that through comprehensive new teacher induction and high-quality professional development, but we're lacking the collective will to pull it off and invest in what makes a difference. These are roadblocks similar to those that have prevented student outcomes from being considered in teacher evaluations: it raises discomfort, requires a change in prevailing (often mediocre) practices, demands greater accountability, and necessitates viewing teaching not as a private activity but as a collective endeavor. I keep making this point about the importance of a teacher development focus within the teacher effectiveness conversation because I see too few reform advocates taking it seriously. Take off the blinders, folks. It is not primarily about firing teachers.

(7) Teacher effectiveness is contextual.
Teaching and learning conditions impact an individual educator's ability to succeed. It is entirely possible that an individual teacher's value-added score is determined more by the teaching and learning conditions (supportive leadership, opportunities to collaborate, classroom resources) present at their school site than by their individual knowledge, skills and practices. In Seinfeldian terms, teachers are not necessarily 'masters of their domain.' The EPI report makes this point. So do my New Teacher Center colleagues through statewide teaching and learning conditions surveys. So do Duke University economist Helen Ladd (also a co-signer of the EPI report) and the University of Toronto's Kenneth Leithwood.

Friday, November 6, 2009

Using Value Added to Assess Teacher Effectiveness

The Association for Public Policy Analysis and Management -- an organization not widely known outside of academia and technical policy circles -- puts on truly meaty conferences. I've attended three APPAM conferences to date, including the Annual Fall Research Conference going on in Washington, DC this week.

Education is merely one strand at APPAM, but the sessions feature some of the biggest names in educational research addressing some very policy relevant issues. The current conference features sessions on value-added modeling, school choice, teacher certification and teacher induction, teacher performance pay, financial aid, college persistence, and more.

The session I attended yesterday on "Using Value Added To Assess Teacher Effectiveness" was excellent. It featured four papers, each of which I will undoubtedly oversimplify in this brief blog post. (I encourage you to seek out the papers and read them closely -- below I've linked to those that are available.) One by Dan Goldhaber and Michael Hansen (University of Washington) suggests that year-to-year correlations in value-added teacher effects are modest, but that pre-tenure estimates of teacher job performance do predict estimated post-tenure performance in both math and reading. A second by Julian Betts (UCSD) and Cory Koedel (University of Missouri-Columbia) suggests that bias does exist in value-added models due to student sorting, but that it can be overcome through the use of multiple years of value-added data; further, the study suggests that data from the first year or two of classroom teaching may be insufficient to make reliable judgments about teacher quality. A third by Michael Weiss of MDRC suggests that teacher variability carries implications for measuring program effects within randomized controlled trials when those teachers are not randomly assigned. And a fourth by John Tyler (Brown University) and Tom Kane (Harvard University) found that teacher assessments made using classroom observation rubrics (such as Charlotte Danielson's) are closely aligned with value-added ratings of teachers.
Friday, March 13, 2009

Stupid Stuff from Skoolboy

Kudos to Dr. Aaron Pallas (AKA skoolboy) for his terrific post ("It's The Stupid System") today on the Gotham Schools blog.

He takes New York City schools chancellor Joel Klein and the Reverend Al Sharpton to task for their Huffington Post piece, which implies that there is an easy achievement-gap fix -- namely, value-added assessment and merit pay alone.
As usual, skoolboy’s main concern is that Klein and Sharpton are talking about effective teachers without ever once discussing what it is that they do. Reward the good ones, get rid of the bad ones, it’s all about sorting teachers – and never about actually improving instruction. Let’s suppose that Klein, Sharpton and others are right – that it is difficult to tell which teachers are going to be highly successful when they start teaching, because the instruction teachers receive prior to taking over a classroom can’t fully prepare them for the challenges of an urban classroom. Why not focus on professional development, and assisting novice teachers in learning effective practices on the job? How does giving effective teachers merit pay and dismissing poor performers actually improve anyone’s practice?
I wholeheartedly agree with Pallas's take on this. I said as much in my post on Monday ("Measurement Is Not Destiny"). The human capital challenge can't just be about rewarding the best and dismissing the worst. It must also be about a focused effort to make the vast majority of educators more effective. That will require a comprehensive effort, including high-quality, job-embedded, sustained professional development and robust induction support.

UPDATE: Corey Bunje Bower at Ed Policy Thoughts has some thoughts on the Klein/Sharpton piece as well.
Monday, March 9, 2009

Measurement Is Not Destiny

Stephen Sawchuk has written an excellent cover story ("Stimulus Bill Spurs Focus on Teachers") in this week's edition of Education Week. It discusses the federal stimulus legislation which directs states to abide by the equitable teacher distribution provisions in the No Child Left Behind Act -- as well as to improve teacher effectiveness -- in exchange for state stabilization funds and the opportunity to apply for competitive grants as part of Secretary Duncan's "Race To The Top Fund."

With regard to teacher effectiveness, there's just one little problem. There's no definition in federal law -- let alone in state laws -- about what that actually means. From Education Week:

Several states, and some districts, now endorse performance-based teacher evaluations that define good teaching, determine which teachers exhibit such practices, and identify those who fall short for assistance. Others are reorienting professional development toward sustained school-based approaches that researchers say are more likely to change teacher behavior and improve student achievement than “one shot” workshops.

Some efforts to improve teacher effectiveness have proved politically challenging. The federal Teacher Incentive Fund, a performance-pay program, has promoted interest in using test scores to estimate teacher effectiveness. That approach has generally not been favored by teachers’ unions. The TIF program received an additional $200 million in the stimulus.

Additionally, a limited number of states have the ability to match teacher records to student data, and even those with the technical capacity have not always used their data to estimate teacher effectiveness. The unions fear such links could ultimately be used to establish punitive policies, and they have successfully lobbied legislators to curb the use of “teacher effect” data in some states. ("Growth Data for Teachers Under Review," Oct. 12, 2008.)

But the possibilities of “value added” are enticing to policymakers. Officials in Tennessee, the lone state that has incorporated teacher-effect data into personnel decisions, are awaiting new data that will reveal whether efforts to attract effective teachers to the most challenged schools have improved results, said Julie McCargar, the state director of federal programs.

This is a huge issue – and it will be interesting to see if the U.S. Department of Education focuses its regulatory definition and its expectations of states – like so many others – simply on measuring and identifying and perhaps rewarding effective teachers. The logical and more purposeful next step, of course, is to look at what behaviors, characteristics, or knowledge make certain educators more effective and then determine how to scale up approaches to initial training or on-going professional development programs to help make the vast majority of teacher candidates, beginning teachers and veteran teachers better. I have no insider knowledge about the Department's thinking around all this, but I’m always astonished at the wealth of policymakers, policy organizations, and foundations that never seem to get past square one on this topic.

Measurement is not destiny.

If all we do is use value-added metrics to determine who the best teachers are and pay them more money for being better, we will be sacrificing the quality of public education for a short-sighted reform. While more money might keep some effective educators from leaving a particular school or district, or from leaving the profession entirely, it won't do anything to make existing and future teachers a whit better.

The teacher effectiveness conversation must be about more than value-added measurement and performance pay, although it can certainly include those elements. It can't be simply about rewarding the good and getting rid of the bad. Fundamentally, it must be about a concerted human capital strategy to use existing knowledge as well as future data and research to strengthen teacher preparation, induction and professional development to improve the skills and abilities of all teachers. Hopefully, the Department's focus on teacher effectiveness will impel such an effort.

We can do better -- by learning from the best teachers and finding ways to replicate their success. Now, that would be effective.

Wednesday, January 14, 2009

Things Are Looking Up

Well, sort of. In my family, not so much -- we've all been knocked out by various colds, pneumonia, ear infections, etc. for the last seven days. But out there in the wider world -- things are starting to look downright perky for education!

For example:

-- Articles about increasing funding for community colleges crossed my desk four separate times today
-- MDRC issued some much more intriguing results from its Opening Doors aid study
-- Arne Duncan talked about increasing the Pell Grant
-- I read, and enjoyed, two bright and interesting memos to Obama -- one by Davis Jenkins and Julian Alssid, and the other by Jamie Merisotis
-- Sociologist extraordinaire Linn Posey accepted an offer to join my department!
-- I heard that Gates is starting a value-added initiative

I'm sure there's more to come. It feels like a whole new world with Arne and Barack (and LDH?) at the helm....
Monday, July 7, 2008

The Persistence of Value-Added Teacher Effects

In case you missed it, eduwonkette offered up an excellent post last month about the accumulation of value-added teacher effects. She describes a new NBER Working Paper by Brian Jacob, Lars Lefgren and David Sims which suggests that these effects may not be as large as prior research has suggested. Here's why:
"Our estimates suggest that only about one-fifth of the test score gain from a high value-added teacher remains after a single year. Given our standard errors, we can rule out one-year persistence rates above one-third. After two years, about one-eighth of the original gain persists.

Our results indicate that contemporary teacher value-added measures may overstate the ability of teachers, even exceptional ones, to influence the ultimate level of student knowledge since they conflate variation in short-term and long-term knowledge.

Our results suggest some caution should be taken in focusing on such measures of teacher effectiveness. If value-added test score gains do not persist over time, adding up consecutive gains does not correctly account for the benefits of higher value-added teachers."
Interesting. If true, this study doesn't say that teachers don't matter -- just that they may matter less than some value-added proponents have suggested, because students may retain only a fraction of the knowledge gained from having more effective teachers in consecutive years.
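To make the "adding up consecutive gains" point concrete, here is a hypothetical back-of-the-envelope calculation in Python using the persistence rates quoted above; the 0.2 standard-deviation annual gain is an assumed number for illustration, not a figure from the paper.

```python
# Hypothetical illustration of fade-out; the 0.2 SD annual gain is an assumption,
# while the persistence rates (~1/5 after one year, ~1/8 after two) come from the
# quote above.
annual_gain = 0.2                     # gain from a high value-added teacher, in SD units
persistence = [1.0, 0.20, 0.125]      # share of a gain retained after 0, 1, 2 years

naive_total = annual_gain * 3                             # simply summing three years of gains
faded_total = sum(annual_gain * p for p in persistence)   # most recent gain intact, older gains fade

print(f"Naive cumulative gain: {naive_total:.2f} SD")   # 0.60
print(f"With fade-out:         {faded_total:.2f} SD")   # ~0.27
```

Under these assumptions, three consecutive years with a high value-added teacher yields well under half the benefit that simple addition of annual gains would imply.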