This blog provides information on public education, children, teaching, and homeschooling.

Saturday, January 7, 2012

Remaking Academia: Improve the Hiring Process

The latest entry in a continuing series here at The Education Optimists


Have you ever sought a job as a professor? Depending on your field and where you’ve applied, it goes something like this:

(1) You send in a letter of interest, a CV, and some publications. Maybe some letters of reference too, or perhaps just contact information for those people. If it’s a teaching institution or a school of education, maybe you’ll also send in a statement of teaching philosophy and some student evaluations.

(2) If the search committee likes what they see in the file, they get in touch. This typically means you’ve published a fair bit, demonstrated that you have some interesting ideas, come from a good graduate program, have very solid letters that say you’re among the very best, can attract grant funding, etc.

(3) Then you either meet with the committee via phone or Skype, or at a conference, or more commonly go to campus. (Sometimes it's a two-step or three-step sequence; sometimes you go right to campus.) During the visits, you'll give a talk about your research (to show how you approach questions, theory, and evidence), talk with lots of academics who will ask you about your future research plans and what you like to read and discuss (mainly to see if they think you're smart and they like you), meet with an administrator or two, talk to students, and maybe give a demonstration of your teaching (e.g., a pedagogical talk).

(4) At the end of the evaluation period, a search committee, or even the entire department, together with the dean, has a set of information about you. It includes a written record of what you’ve done, thoughts about what they’ve heard, some student evaluations on a set of metrics, etc. Then they make their decisions.

Often this results in the offer of a job at a pretty good salary with decent benefits, a three-year contract, and the possibility of tenure. Or, if you're lucky, it's a tenured position, in which case they've committed (after a tenure committee does its own review) to hiring you "for life."

This process has long puzzled me for what it omits. And as I listen to heated discussions of ineffective professors and teachers, and watch a strong debate emerge across K-20 over using metrics to decide whom to fire, I have to wonder: why can't we start instead by using data and standard human resources practices to improve our effectiveness at hiring?

Before I list some suggestions for improvement, let me admit that I have held one academic job for my entire career (which, admittedly, is just 8 years long). And this area—hiring and evaluation—isn't the topic of my own research. So I don't know about every practice used in every college or school, and it's quite possible some of what I think should be done IS being done—in which case we should get a good census of practices and start evaluating their effectiveness. This is a post I really hope to get constructive feedback on (yes, even more so than usual).

(1) Rethink who does the hiring. Right now, prospective colleagues primarily do it. This is good, since they are the people you'll end up working with and spending time with. They should and must have a role. But those peers were hired because they are talented researchers and teachers, not because they know how to evaluate large numbers of prospective applicants and make terrific judgment calls. Professionalization of this hiring practice is needed, and it must include very experienced people who've done hiring in academia for decades. Ideally, they'd be systematically trained to identify the competencies academics need to do their jobs very well (see next point).

(2) Bring some additional competencies into the mix. Being a good professor or teacher requires strong time management skills, grit, resilience, ability to respond under pressure, communication skills, drive, ability to implement feedback, performance orientation, inquisitiveness, and cultural competencies as well. Where/how are these being assessed now? Primarily in terms of how much you’ve managed to publish in X time (which doesn’t necessarily tell you how well time was managed since other activities are often sacrificed). There are instruments for measuring such things, and we’re often ignoring them. That’s not good enough. What other competencies predict success in academia? We need to know, and we need to integrate them into hiring.

(3) Lengthen the process. My colleagues will hate me for saying this, but spending a total of maybe 2-5 days evaluating whether a person should be allowed to teach large numbers of students, enjoy limited campus resources, etc., is far too quick. You need more data and more time to analyze it.

(4) Systematize the evaluation process. We use very superficial forms and often don’t consider the data that result in any sophisticated way. The process of reference and background checks is too personal, political, and idiosyncratic, mainly because people who were never trained to do these checks are in charge!

There’s got to be even more we can do. Sure it has to be a flexible process that can be adapted to public flagships or private liberal arts colleges, as well as community colleges, etc. It also can’t be so expensive as to prevent scaling. And it will need revision and improvement. When’s the last time your department changed how it recruited and evaluated applicants?

The current process skips key steps and fails to assess competencies that, when absent, lead to failure and turnover in academia (and in K-12 teaching). Instead of researching whom we should fire, why not focus our attention on improving the hiring process? It seems far more efficient, not to mention equitable and ethical.
Thursday, March 26, 2009

Evaluation of Milwaukee Voucher Program

Alan Borsuk of the Milwaukee Journal-Sentinel writes ("Study finds results of MPS and voucher schools are similar") about the first report from a long-awaited and ongoing evaluation of the Milwaukee school voucher program by the School Choice Demonstration Project:
"The first research since the mid-1990s comparing the academic progress of students in Milwaukee's precedent-setting private school voucher program with students in Milwaukee Public Schools shows no major differences in success between the two groups."

Summarizing a comparison of how matched groups of voucher and MPS students did across two years of tests, the researchers wrote:

"The primary finding in all of these comparisons is that there is no overall statistically significant difference between MPCP (voucher) and MPS student achievement growth in either math or reading one year after they were carefully matched to each other."

When I worked in Wisconsin Governor Jim Doyle's Office, we worked hard to bring greater accountability to the $129 million taxpayer-funded Milwaukee Parental Choice Program. The 2006 compromise ("Governor and Speaker Gard Announce Deal on School Choice, Accountability, and Small Class Size Funding") -- made necessary because of the then-Republican Assembly Speaker (who had close ties to local and national voucher school lobbyists) -- wasn't perfect or as strong as it might otherwise have been, but it was a step forward.

The push for greater accountability was precipitated by concerns about student learning (such as those reported by Rethinking Schools in 2005) and widespread fraud and abuse (such as this and this) that seemed to be rampant within many start-up schools financed solely by the public subsidies available through the voucher program. This national evaluation -- along with stronger financial reporting requirements, independent accreditation, and participation in standardized testing -- was one piece of that accountability rubric passed into law. (Here is a link to a summary of the Act, 2005 Wisconsin Act 125.)

Other elements of accountability discussed back in 2005 and 2006, and still under discussion within the Wisconsin education policy community, include certification of educators in the voucher schools and reporting of school-by-school assessment results. The argument for teacher certification was made to ensure a minimum standard of teacher quality, based on evidence that teachers in voucher schools were not required to have graduated from college. In addition, it was felt that in order for parents to make informed choices for their kids, academic information on the voucher schools needed to be available in the same way it is for schools within Milwaukee's public system.

After all, taxpayer money -- from the state of Wisconsin and from the city of Milwaukee -- finances this program. And these kids deserve the best education possible. Certainly, they deserve some assurance of basic quality.

For readers who can't get enough of this issue, check out these links for further background:

Education Optimists: "School Vouchers Are No Silver Bullet"
Eduwonk: "Vouching Toward Gomorrah"
Quick & The Ed: "I Should Know Better..."
Fordham Institute: What's The Place of Accountability in School Voucher Programs
David Figlio & Cecelia Rouse: Do Accountability and Voucher Threats Improve Low-Performing Schools?
Tuesday, July 8, 2008

A Blank Prescription for Policy Reform

Stanford has yet another interesting publication, Pathways, which integrates critical analysis of research on poverty with bold prescriptions for policy reform. I'm a bit late to the game in mentioning this, since the first issue came out in December, but then again, as a mom to a toddler, a six-month lag or so on my reading ain't bad.

I'm especially a fan of Becky Blank's piece which includes a list of priority efforts for antipoverty programs. Blank is whip-smart, and was recently a contender for the chancellorship here at Madison. Darn it all, she wasn't chosen.

Here's a quote from near the end of her article, which should illustrate why many of us would've loved to have her here. "Social policy evaluation is one of the least well appreciated tools of long-term policy design."

If I could send one message to the presidential candidates, it's this: when tackling ANY area of policy reform, please, please, please take the evaluation of your programs seriously. No, data aren't entirely objective, but they're a whole heck of a lot more objective than simply deciding something's working -- or not -- based entirely on politics or ideology.
