This blog provides information on public education for children, teaching, and homeschooling

Showing posts with label Institute for Education Sciences. Show all posts
Monday, June 28, 2010

Positive Effects of Comprehensive Teacher Induction

Today, Mathematica Policy Research, Inc. released the final report of its IES/U.S. Department of Education-funded randomized controlled trial (RCT) of comprehensive teacher induction. It shows statistically significant and sizeable impacts on the student achievement of third-year teachers who received two years of robust induction support: 0.20 standard deviations in mathematics and 0.11 standard deviations in reading. That's the equivalent of moving students from the 50th to the 54th percentile in reading achievement and from the 50th to the 58th percentile in math achievement.
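For readers who want to see where those percentile figures come from: under the standard assumption that test scores are normally distributed, an effect size in standard deviations converts to a percentile through the normal CDF. A minimal sketch (the effect sizes are the study's; the code is just my illustration):

```python
# Converting effect sizes (in standard deviations) to percentile gains,
# assuming normally distributed test scores -- the usual convention.
from scipy.stats import norm

for subject, effect_sd in [("reading", 0.11), ("math", 0.20)]:
    # A student starting at the 50th percentile (z = 0) who gains
    # `effect_sd` standard deviations lands at the percentile of that z.
    new_percentile = norm.cdf(effect_sd) * 100
    print(f"{subject}: 50th -> {new_percentile:.0f}th percentile")

# reading: 50th -> 54th percentile
# math:    50th -> 58th percentile
```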

As a basis of comparison, I note that in 2004, Mathematica conducted an RCT of Teach for America (TFA). In that study, it compared the gains in reading and math achievement made by students randomly assigned to TFA teachers or to other teachers in the same school. The results showed that, on average, students with TFA teachers raised their mathematics test scores by 0.15 standard deviations (versus 0.20 standard deviations in the induction study), but it found no impact on reading test scores (versus 0.11 standard deviations in the induction study).

In another recent Mathematica report (boy, these folks are busy!), the authors note that "The achievement effects of class-size reduction are often used as a benchmark for other educational interventions. After three years of treatment (grades K-2) in classes one-third smaller than typical, average student gains amounted to 0.20 standard deviations in math and 0.23 standard deviations in reading (U.S. Department of Education, 1998)." In that report -- an evaluation of the Knowledge Is Power Program (KIPP) -- Mathematica researchers found a very powerful impact from KIPP: "For the vast majority of KIPP schools studied, impacts on students’ state assessment scores in mathematics and reading are positive, statistically significant, and educationally substantial... By year three, half of the KIPP schools in our sample are producing math impacts of 0.48 standard deviations or more, equivalent to the effect of moving a student from the 30th percentile to the 48th percentile on a typical test distribution... Half of the KIPP schools in our sample show three-year reading effects of 0.28 standard deviations or more."
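The same normal-curve arithmetic reproduces the KIPP conversion quoted above; a quick sketch:

```python
# Reproducing the quoted KIPP conversion: a student at the 30th
# percentile who gains 0.48 standard deviations, assuming normality.
from scipy.stats import norm

start_z = norm.ppf(0.30)                    # z-score at the 30th percentile (~ -0.52)
new_percentile = norm.cdf(start_z + 0.48) * 100
print(f"30th -> {new_percentile:.0f}th percentile")   # 30th -> 48th percentile
```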

Is it appropriate to compare effect sizes among RCTs or, for that matter, among research in general? I am told that it is, although considerations such as cost-effectiveness and scalability certainly have to enter into the conversation. Implementation issues must also be attended to. With regard to teacher induction, the issue of cost-effectiveness was addressed in a 2007 cost-benefit study published in the Education Research Service's Spectrum journal and summarized in this New Teacher Center (NTC) policy brief.

Disclosure: I am employed by the NTC, which participated in the induction RCT, and I helped to coordinate NTC's statement on the study.
The NTC is "encouraged" by the study. However, NTC believes that "it does not reflect the even more significant outcomes that can be achieved when districts have the time, capacity and willingness to focus on an in-depth, universal implementation of comprehensive, high-quality induction. It speaks volumes about the quality of induction and mentoring provided and the necessity of new teacher support that student achievement gains were documented despite [design and implementation] limitations to the study."


UPDATE: Read the Education Week story by Stephen Sawchuk here. And the Mathematica press release here.



Thursday, April 2, 2009

It's Easton!!

And a hip-hip-hooray! It's official: John Easton, executive director of the Consortium on Chicago School Research, is the new IES director.

Hat tip to Ed Week's Politics K-12 blog.

See the CCSR press release here.
Tuesday, March 31, 2009

"No Effects" Studies

Education Week's got this article out about randomized trials producing "no effects." According to the article, these null findings are raising eyebrows and "prompting researchers, product developers, and other experts to question the design of the studies, whether the methodology they use is suited to the messy real world of education, and whether the projects are worth the cost, which has run as high as $14.4 million in the case of one such study."

Wow, is that ever a disappointing reaction. Here's why:

1. We should be psyched, not upset, that studies with null effects are being released. That is not always the case. Publication bias, anyone? (A toy simulation after this list shows how filtering for significant results inflates what gets published.) I've often thought that studies demonstrating null effects need to be publicized even more widely than those that find positive or negative impacts. Too many research shops are beholden to their funders and can't release null findings. Too many assistant professors don't get tenure because they "didn't find anything." Are you kidding me? If a current practice turns out to produce no effects, that finding needs to be out there either way. We should learn as much from null findings and "worst practices" as we do from "statistically significant" impacts and "best practices."

2. Saying that experimentation isn't suited to the "messy real world" is a cop-out. It lumps many different kinds of experiments into one category--the good, the bad, and the ugly. Field experiments, lab settings, cluster-randomized trials with volunteer districts, and student-level randomized experiments with participants selected via administrative data are very different animals. Each approach has a different potential for generalizable results (external validity) and faces different challenges to internal validity. I'll grant you, experiments that rely on volunteer samples probably can't help us much in education--since in real life, programs aren't applied only to students, families, or schools who volunteer; they apply to everyone. This is especially a problem when we try interventions to close achievement gaps--African-Americans who volunteer for studies are very, very different from those who do not (Tuskegee, anyone?).

3. Doing experiments well costs a LOT of money. Putting trials on tight budgets helps ensure they aren't run well--PIs cannot build the kinds of relationships that promote treatment fidelity, cannot collect high-quality data, and cannot get inside the black box of mechanisms--and instead are stuck simply estimating average treatment effects. No drug works for everyone, and no drug works in exactly the same way for everyone--the medical community knows this and uses larger samples to make identifying differential and heterogeneous effects possible (see the second sketch after this list). When is education going to catch up?

4. One thing I do agree with this article on: the model IES is using needs some revision. I heard William T. Grant Foundation president Bob Granger give a great talk at SREE recently, where he made the point that the usual 'try small things, then scale them up' model isn't going anywhere fast. We need to know how current policies work as currently implemented--at scale. Go after that, spend what's necessary to conduct experiments with higher internal AND external validity, and support researchers who reject old models and try new things. I promise you, we'll get somewhere.
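Here is the toy simulation I promised in point 1. Every number in it is invented for illustration: a true effect of 0.05 standard deviations, 100 students per arm, and "publication" only when a study comes out significant and positive.

```python
# Toy illustration of publication bias: when only significant positive
# results see daylight, the published average badly overstates the truth.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect, n, published = 0.05, 100, []

for _ in range(10_000):                      # 10,000 hypothetical studies
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_effect, 1.0, n)
    t, p = stats.ttest_ind(treated, control)
    if p < 0.05 and t > 0:                   # only these get "published"
        published.append(treated.mean() - control.mean())

print(f"true effect: {true_effect} SD")
print(f"mean published effect: {np.mean(published):.2f} SD")  # several times larger
```

When null results stay in the file drawer, the literature systematically overstates effects--which is exactly why releasing "no effects" studies is good news.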
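And to make point 3 concrete, a second sketch (again, all numbers invented): a program that works for one subgroup and not another. The average treatment effect is real, but it hides the heterogeneity--and detecting the subgroup difference reliably takes a much larger sample than detecting the average.

```python
# Toy illustration of heterogeneous treatment effects hiding behind
# an average treatment effect.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000                                       # students per arm
subgroup = rng.integers(0, 2, n)                # two equal-sized subgroups
effect = np.where(subgroup == 0, 0.40, 0.0)     # program helps subgroup 0 only
treated = rng.normal(effect, 1.0)               # treated outcomes
control = rng.normal(0.0, 1.0, n)               # control outcomes

print(f"average treatment effect: {treated.mean() - control.mean():.2f} SD")  # ~0.20
for g in (0, 1):
    print(f"subgroup {g} mean outcome: {treated[subgroup == g].mean():.2f} SD")  # ~0.40 and ~0.00
```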
Monday, April 28, 2008

Institute of Education Sciences

Today's Washington Post features a very interesting article about the U.S. Department of Education's Institute of Education Sciences and its director Grover "Russ" Whitehurst.

One "major insight" that has emerged from the work of IES to date warms the cockles of my heart: "The success of students depends more on who teaches them than on nearly any other factor."

Whitehurst says the next priority area for research is to determine what makes a teacher good.

Kudos to Alexander Russo's This Week in Education blog for bringing this article to my attention.
