Rating Your Professors: Scholars Test Improved Course Evaluations


April 25, 2010

Source:  http://chronicle.com/article/Evaluations-That-Make-the/65226/?key=SW5wIgNtYXJPZ3c3cnZOcyMFbXYsI0t6P3xGa3kaZllX


Jackson Hill for The Chronicle

This month students will fill in bubbles on course-evaluation forms, but many scholars say those forms don't tell us much about learning.

By David Glenn

Evaluations That Make the Grade: 4 Ways to Improve Rating the Faculty


During the next few weeks, hundreds of thousands of college students will fill out course-evaluation forms. On a scale of one to five, they might be asked to rate the instructor's command of the subject material; whether he or she used class time effectively; or whether the exams covered the most important concepts in the course. In most cases they'll also be given space to add anecdotal comments. (On large campuses, it's statistically certain that at least one student will use that space to write a limerick that rhymes "exam" and "scam.")

For students, the act of filling out those forms is sometimes a fleeting, half-conscious moment. But for instructors whose careers can live and die by student evaluations, getting back the forms is an hour of high anxiety. Some people need an extra glass of wine that day.

And many find the concept of evaluations toxic. "They should be outlawed," says D. Larry Crumbley, a professor of accounting at Louisiana State University at Baton Rouge who recently co-edited a book about the topic. "They have destroyed higher education." Mr. Crumbley believes the forms lead inexorably to grade inflation and the dumbing down of the curriculum.

Outlawing the forms seems unlikely. The tide, in fact, seems to be moving in the opposite direction. Last year, Texas enacted a law that will require every public college to post each faculty member's student-evaluation scores on a public Web site.

So can the evaluations be improved? Various scholars and administrators have tried. Here are four efforts to build a better mousetrap:

Custom Questions

The IDEA Center, an education research group based at Kansas State University, has been spreading its particular course-evaluation gospel since 1975. The central innovation of the IDEA system is that departments can tailor their evaluation forms to emphasize whichever learning objectives are most important in their discipline. For an anatomy class, the most important objective might be gaining factual knowledge. For a literature course, it might be analytic writing.

Departments can also customize the forms further by adding questions specific to a particular local course. A mandatory information-literacy course at Saint Francis University, in Pennsylvania, has customized its IDEA forms with questions about whether enough time was spent on citation skills, online resources, and critically evaluating information sources.

"When a particular department devises its own set of questions, that can eventually give them a set of data that they can assess over time," says Theresa L. Wilson, an instructional-technology specialist at Saint Francis who helps oversee the system.

The IDEA system is attractive, Ms. Wilson says, because it offers both locally customizable features and the ability to compare certain scores with a large, nationally normed database. (Roughly 350 colleges use the IDEA Center's system, though in some cases only a single department or academic unit participates.) When the IDEA Center analyzes a college's data, it provides both "raw" scores for each course and also scores that are adjusted to account for factors outside the instructor's control.




For example, in a large general-education class where many of the students say they were not motivated to learn the material, course-evaluation scores will tend to be lower than in a small class that is composed entirely of majors. The IDEA system's adjusted scores try to correct for such structural biases.
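The IDEA Center's actual adjustment model is a regression fit to its national database and is not spelled out in the article; the toy Python sketch below (with invented coefficients) only illustrates the idea of crediting back the score depression associated with factors outside the instructor's control, such as class size and student motivation:

```python
# Toy illustration of adjusting a raw evaluation score for structural
# factors. The coefficients are invented for this example; IDEA's real
# model is a proprietary regression over its national database.
def adjusted_score(raw, class_size, pct_motivated):
    # Large classes and unmotivated students tend to depress raw scores,
    # so the adjustment credits some of that depression back.
    size_credit = 0.002 * max(class_size - 30, 0)      # hypothetical
    motivation_credit = 0.5 * (1.0 - pct_motivated)    # hypothetical
    return raw + size_credit + motivation_credit

# A big general-education section vs. a small class of majors:
big = adjusted_score(3.6, class_size=200, pct_motivated=0.4)
small = adjusted_score(3.6, class_size=15, pct_motivated=0.95)
```

With identical raw scores, the large, low-motivation section ends up with the higher adjusted score — which is exactly the comparison-fairness effect the adjustment is meant to produce.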

Peter Skoner, associate provost of Saint Francis, agrees that the IDEA system has been valuable, but he says that its forms are only a small part of the university's faculty-evaluation system. "Department chairs visit every class each year," he says. "We have annual reviews that faculty members complete with their chairs. We have self-reflection. We have standardized exam scores. So there are many ways to assess teaching and learning, and these IDEA forms are only a piece of the puzzle."

Student Self-Assessment

More than a decade ago, Elaine Seymour, who was then director of ethnography and evaluation research at the University of Colorado at Boulder, was assisting with a National Science Foundation project to improve the quality of science instruction at the college level. She found that many instructors were reluctant to try new teaching techniques because they feared their course-evaluation ratings might decline.

Ms. Seymour and her colleagues thought that was a sad dynamic. So they did an end-run around the problem by developing a new evaluation instrument designed to capture students' own perceptions of how much they learned in a course—and, more importantly, which course elements helped them learn.

The survey instrument, known as SALG, for Student Assessment of their Learning Gains, is now used by instructors across the country. The project's Web site contains more than 900 templates, mostly for courses in the sciences.

Like the IDEA Center's model, the SALG allows instructors to customize questionnaires to match the objectives of a particular course. But the forms must also include a minimal number of baseline elements, which allow data to be compared across institutions and over time.

"One of our biggest challenges," says Robert D. Mathieu, a professor of astronomy at the University of Wisconsin at Madison and the principal investigator on the SALG project, "is that we have instructors and departments that want to use the SALG, but at the same time are also required by their institutions to use the classic types of course evaluation. So you end up with a situation where students are double-hit, and there's a certain amount of survey fatigue."

Mr. Mathieu hopes to soon test the use of SALG across an entire college. (There may be a pilot next year at Santa Clara University.) One of the program's latest innovations is a feature that analyzes the texts of students' survey comments, indicating which words are used most frequently and allowing instructors to see patterns that they might otherwise miss.

"If you just flip through students' comments," Mr. Mathieu says, "you tend to give highest weight to the extrema"—that is, the outliers who gush with praise or rip your teaching to shreds. "So the ability to do some quantitative analysis of these comments really allows you to take a more nuanced and effective look at what these students are really saying."

Quality Teaching

The Teaching and Learning Quality survey has been developed during the last three years by Theodore W. Frick, an associate professor of education at Indiana University at Bloomington. Unlike the IDEA Center's model, this questionnaire is meant to be one-size-fits-all.

The project began when Mr. Frick was asked to serve on a teaching-awards committee. The committee looked at nominees' course-evaluation data, but Mr. Frick was deeply skeptical. "I looked at these forms and said, Gee, does this have anything to do with student learning?" he says. "Like many faculty members, I was pretty jaundiced about the entire concept of course evaluations. I just thought it was a smiles test and a test of popularity."

He started to read the scholarly literature on course evaluations and reluctantly concluded that there is at least a weak relationship between students' global ratings of a course and certain measures of their learning. On the other hand, Mr. Frick thought his campus's evaluation forms could be improved if they included items about teaching practices that are known to improve student learning. So he drafted a form that asked about how effectively the instructors activated the students' previous knowledge, demonstrated skills and concepts, and offered applications. For instance, students are asked whether they agree with statements like, "In this course, I was able to reflect on, discuss with others, and defend what I learned."

In a pilot study that included 12 courses in business, philosophy, kinesiology, and several other fields, Mr. Frick and his colleagues found that his new course-evaluation form was strongly correlated with both students' and instructors' own measures of how well the students had mastered each course's learning goals.

Mr. Frick wants his instrument to be used to help instructors improve, not as a high-stakes measure for hiring and firing instructors. "When course evaluations are a major part of tenure and promotion, I think over time that gives faculty members an incentive to design the evaluations so that they'll get good ratings, whether or not they have anything to do with student learning," says Mr. Frick, who will discuss his project this week at the annual meeting of the American Educational Research Association.

Minimal Bias

Two years ago, when it became clear that the Texas legislature was likely to require public online disclosure of faculty members' course-evaluation scores, officials at the University of North Texas got very anxious. The university had no standard course-evaluation system, and the forms varied enormously across various academic units. That meant that the evaluation scores that would soon become public were nowhere near apples-to-apples comparisons. Administrators and faculty members foresaw a lot of unhappiness—and maybe even lawsuits.

The provost asked a team of experts in psychometrics and human resources to come up with a campuswide course-evaluation tool that would minimize any distortions or biases. The committee convened focus groups with more than 300 faculty members and 80 students. "For example, we would look at some proposed items and ask the focus groups whether students were actually in a position to measure those items," says Paula Iaeger, a graduate student in education who served on the committee. "Faculty members and students both told us, for example, that students should not be asked whether the textbook was the best possible for the course, because students can't know that."

Ms. Iaeger says her committee took pains to include adjunct faculty members in those focus groups. "If we were only listening to our tenure-track faculty, we would have gotten an incomplete picture," she says.

The new North Texas instrument that came from these efforts tries to correct for biases that are beyond an instructor's control. The questionnaire asks students, for example, whether the classroom had an appropriate size and layout for the course. If students were unhappy with the classroom, and if it appears that their unhappiness inappropriately colored their evaluations of the instructor, the system can adjust the instructor's scores accordingly.

"And we can also use that data in other ways," Ms. Iaeger says. "Once we accumulate enough data about which classrooms the students dislike, we can try not to assign novice teachers to those classrooms, so the students don't get a double-whammy."

Don't Ask the Cars

None of these innovations impresses Mr. Crumbley, the Louisiana State skeptic. No matter how sophisticated a survey instrument might be, he says, students are too likely to answer in haphazard or dishonest ways.

"Students are the inventory," Mr. Crumbley says. "The real stakeholders in higher education are employers, society, the people who hire our graduates. But what we do is ask the inventory if a professor is good or bad. At General Motors," he says, "you don't ask the cars which factory workers are good at their jobs. You check the cars for defects, you ask the drivers, and that's how you know how the workers are doing."

Few critics of course evaluations are willing to go that far. William H. Pallett, president of the IDEA Center, says that when course rating surveys are well-designed and instructors make clear that they care about them, students will answer honestly and thoughtfully.

"Student ratings aren't the be-all and end-all," Mr. Pallett says. "But they can inform instructors about things they can do to improve their learning. And students usually do correctly identify the tasks and learning objectives that were most important to a course. If they were just filling out these forms completely nonchalantly," he says, "we wouldn't see that pattern."

Another who finds value in evaluations is Ken Bain, vice provost of instruction at Montclair State University and author of What the Best College Teachers Do (Harvard University Press). Almost everyone agrees that course evaluations by themselves are inadequate, he says. But both faculty members and administrators (perhaps for different reasons) are too hesitant to add less-quantitative assessments of teaching.

In Mr. Bain's view, student evaluations should be just one of several tools colleges use to assess teaching. Peers should regularly visit one another's classrooms, he argues. And professors should develop "teaching portfolios" that demonstrate their ability to do the kinds of instruction that are most important in their particular disciplines.

"It's kind of ironic that we grab onto something that seems fixed and fast and absolute, rather than something that seems a little bit messy," he says. "Making decisions about the ability of someone to cultivate someone else's learning is inherently a messy process. It can't be reduced to a formula."



1. cpri2405 - April 26, 2010 at 09:32 am

"Students are the inventory," Mr. Crumbley says. "The real stakeholders in higher education are employers, society, the people who hire our graduates. But what we do is ask the inventory if a professor is good or bad. At General Motors," he says, "you don't ask the cars which factory workers are good at their jobs. You check the cars for defects, you ask the drivers, and that's how you know how the workers are doing."

Wow, I would love to take a class with this guy! All I would need to do is sit on the assembly line and wait until I am finished. Hopefully I am not a Toyota or some other lemon that needs to be in the shop much after I am completed and sold. I wish Professor Crumbley (gotta love that name) would have extended that metaphor further.

2. dupont01 - April 26, 2010 at 09:33 am

A few years back, my college added a "self-assessment" portion to the standard evaluation that required students to report their level of preparation and commitment. This seems to have had the effect of neutralizing a growing and alarming tendency to see the five-point rating scale as an Amazon.com consumer review--at least, the students seem more aware that classes are not quite the same thing as commodities, and that the teacher-student relationship is not the same thing as consumer/object purchased and consumed.

3. sages - April 26, 2010 at 11:36 am

Professor Crumbley's metaphor might be raising hackles, but he's right: the stakeholder and the beneficiary of higher education is our society. The students are not our customers, the society is our customer. The students are the products. And I couldn't agree more with the comment that the practice of student evaluations leads to teaching to a good evaluation score, not to a good outcome of the education process.

4. anonscribe - April 26, 2010 at 12:55 pm

I also find Crumbley's metaphor crumby. For a simple improvement on the metaphor adapted from Dewey, let's just say that students are the wheat and society is the gourmand, or students are the trees and society is the paper mill, or students are the cows and society is the fat redneck waiting to eat some sirloin.

Apparently, perceiving students as means instead of ends-in-themselves results in some distasteful metaphors. Then again, higher education has become a distasteful place, so perhaps it's fitting. I prefer to think of my students as my students, which is why student evaluations are perfectly valuable measures that must exist alongside other measures. I grade hard and give a ton of work in my classes, but I also provide abundant help and clear descriptions of my expectations. Have yet to receive anything but good evals.

I will say, however, that there are particularly disturbing issues concerning prejudice in student evaluations. Perhaps if I were a lesbian of color, I'd get awful surveys for that reason alone, which seems to be the case with some colleagues of mine (who get largely great reviews, sprinkled with some strangely vitriolic tongue lashings from frat boys who don't like women telling them what to do). Seems important for institutions to begin addressing this problem.

5. mcslainte7 - April 26, 2010 at 01:24 pm

The IDEA system seems like a good one. We use it at my institution. But a system is only as good as the operators. The adjusted scores are designed to make poor teaching situations seem better than they are. They also make excellent teaching situations (small classes of highly motivated students) seem worse. I am lucky enough to have such a situation, yet my supervisors ignore the fact that I peg perfect raw scores from students and wonder why my adjusted scores are lower! IDEA is unfortunately too complicated in its quantitative formulas for the average administrator to understand, which is the real problem with assessment. Most administrators don't want to put the time in to understand pedagogy in all of the fields they oversee. They just want a number to tell them "is it good or bad?" We don't need better assessment tools. We need better leadership.

6. koritzdg - April 26, 2010 at 08:25 pm

If we are to evaluate student surveys and their results, we must first have an answer to this question: What is the purpose of student evaluation surveys? Here, the word "purpose" can be understood as "intent." But the road to Hell.... How are the surveys understood and used by students, faculty, administrators and others? Do they get at some "truth" of classroom practice that some other tool cannot, or cannot as effectively? Why do we need numerical evidence of good teaching?

7. rginzberg - April 27, 2010 at 06:03 am

I was one of those who demanded that students be able to give feedback on professors back in the 1970s when teaching evaluations were first being instituted. The reason for teaching evaluations was to be able to alert Administration to the most egregious situations (professors who never showed up for scheduled classes, professors who came to class or held office hours while drunk or high on drugs, professors who sexually harassed students, professors who smoked cigarettes continuously during class causing students with asthma and other serious illnesses to experience health problems all semester long during class, etc.).

In those days there was NO way for students to give any feedback on the most blatant abuses. Some folks also complained of professors who taught from yellowed lecture notes which hadn't been updated in 40 years or professors whose lectures were terminally boring, and hoped teaching evaluations would address that -- but the main reason was to give students a way to report serious abuse, NOT to make higher education "consumer driven."

Nowadays with different means of communication, different teaching techniques, different rules (and laws) related to sexual harassment, different rules regarding tobacco use on campus, etc., these most egregious abuses rarely happen.

Teaching evaluations only imperil the untenured (those assistant professors who have yet to come up for tenure, and those adjuncts who are not on a tenure track).

The crusty old (tenured) professor who lectures from the same yellow teaching notes that he or she used 40 years ago is not in danger of losing his or her teaching job regardless of what is said on teaching evaluations. There are other ways of reporting professors who grope students or who show up to class drunk (cell phones come to mind).

The POINT of teaching evaluations was never to set up yet-to-be-educated first- and second-year undergraduates as arbiters of who should and should not get tenure, or as evaluators of the "expertise" or "knowledge" of those who've spent decades developing that knowledge and expertise.

It was to curb blatant abuse of students by a few "bad apples" in the classroom.

Given today's educational and technological environment, this is better achieved by other means.

Teaching evaluations MAY still be useful to give feedback to instructors on how they might improve their instructional delivery, but they should not be used in tenure decisions or posted on public websites.

Students ARE in a good position to give certain kinds of feedback, but are NOT in a good position to evaluate things like a professor's knowledge of the subject he or she is teaching or worthiness of tenure. Colleagues should be doing that, not students.

Professors who show up drunk or otherwise impaired can be reported via Twitter or with a call on a cell phone and an administrator can show up and observe the problem in real time.

Professors who sexually assault students can be reported to the police and arrested. Such reports are treated MUCH differently nowadays than they were 40 years ago.

Students do not need teaching evaluations any more to alert appropriate authorities to the worst kinds of abuse. There are better ways to do that now.

The teaching evaluation, as we now know it, should be gotten rid of.

8. triumphus - April 27, 2010 at 06:52 am

So much to measure and so little time.

9. schultzgd - April 27, 2010 at 08:02 am

Bad measures beget bad data. To cast the blame on "lazy" teachers, "bad" administrators or "consumeristic" students is patently shoddy analysis of the actual problem. Focus the instrument by asking a few simple questions, such as: "What are the important outcomes students should have gotten from this class? Was the pedagogy a responsible choice to yield those outcomes? Did assessment methods distinguish levels of mastery of the content in a valid and sound way?"

These instruments do not have to be "bad". Nor do they have to come from a consultant's vest or a vendor's shopping cart. Focus on what you want to accomplish and keep the instrument simple and focused.

10. jnicolay - April 27, 2010 at 08:37 am

In my experience the learning context is conditioned by the expectations of the university, i.e., its learning culture. When the university publishes standards of performance for students and faculty, this serves to externalize what the students otherwise would regard as faculty discretion. It is also my experience that some subjects notoriously draw "low votes" from students.

I tell the true story of the day I was in the elevator at Burris Hall, VPI. Two female faculty members, who obviously did not know each other, were comparing notes on student evaluations. Said one, "Mine weren't so good." The other asked, "What do you teach?" She replied, "Statistics. And how about you? How did you fare?" The response, "Very well, but I teach human reproduction."

The most serious challenge we face in the classroom is to humanize the experience, to recognize that students, like all of us, live complex lives; lives full of the twists and turns that distract from the more noble aspirations of education, and from the immersion into the learning experience that faculty universally desire. We can't turn off the extraneous in their lives any more than we can do so in our own.

11. jmalmstrom - April 27, 2010 at 08:37 am

Shall we mention the fact that their statistical validity is questionable at best? That there are so many variables impacting the scores that we can't even begin to measure? Or should I just quote Deming: "That which is most important to manage is impossible to measure."

12. goldenrae9 - April 27, 2010 at 08:47 am

Having started my career in a K-12 classroom, I always wonder why a stringent evaluation process is okay in K-12 but not in higher education. We are educators, and there are some colleagues who have poor teaching skills.

13. gglynn - April 27, 2010 at 08:49 am

While teaching evaluations are useful, there is a fundamental problem with how they are used. It would be more effective if faculty were rated on their plan to improve their teaching and how effectively they carried out that plan. Student evaluations should be just one measure of how effective they were. As with many assessment exercises, we stop at the measurement and do not close the loop, or reward people for their efforts to improve rather than for the absolute scores they receive.

14. tridaddy - April 27, 2010 at 09:03 am

There is a place for student evaluation of teaching, if done appropriately (contrary to some who say to get rid of student evaluations). However, one problem is the system of administration, in which we expect students to enter the classroom on a single day and provide feedback on an entire semester of teaching or instruction. Most individuals will respond based on the most recent events, or on how they feel that day, which may not reflect adequately or appropriately on an entire semester of teaching. Although asking for evaluation as part of an exam is not a very good idea either, I wonder how offering students 3 or 4 opportunities during a semester, at the end of a class session, to provide feedback would affect (that is, more accurately capture) evaluation of teaching. Obviously, there is the survey-fatigue issue, but perhaps asking the same 4 or 5 questions each time would reduce the fatigue to some acceptable level. One final note, in agreement with other commenters: student evaluations of teaching should not be the "end-all" and only evaluation tool for an instructor.

15. selfg - April 27, 2010 at 09:17 am

No one has mentioned that students expect to be able to rate their classes/professors. I'm the Director of our online program and about mid-way through each semester I start getting email from students who want to know when the course evaluation form will be available. Even when I encourage them to let me know if something is going wrong in class, they prefer to wait until they can submit comments anonymously. Whether faculty/staff like or dislike end of class evaluations, students expect to have a voice. It is the Amazon way.

16. wordymusic - April 27, 2010 at 10:15 am

Students are generally afraid of backlash from professors for even mild criticism.

17. softshellcrab - April 27, 2010 at 10:20 am

I agreed strongly with two comments in this article. The first, by Professor Crumbley, was that professor evaluations are a bad idea and have a deleterious effect on higher education. I also agree with the author's observation that, despite this, their use is certainly not going away, and is probably rising both in prevalence and in the emphasis put on the results. Professor Crumbley, by the way, is one of the most respected accounting professors in America, and I strongly suspect he earns (or would earn, if he is not currently subject to evaluations) extremely high ratings as a teacher. So his comment is not mere sour grapes but the thoughtful view of one of the top faculty in his discipline. My feeling is simple: students are really not qualified to judge who is a good teacher. They can judge the obviously horrible or obviously great ones, but the two extremes are a minority. Certainly, the timing of evaluations, grading, open requests for high marks, etc. can all skew the results. Also, at least at my school, there is a Lake Wobegon aspect to them, as almost everyone is rated overall above 3.0 on a 1-5 scale, and most are rated above 3.5. Asking a few specific questions can be valuable, but students really are not good at telling the better teachers, who are often tougher graders, from less able ones who may just be friendlier or grade more easily.

18. hassanaref - April 27, 2010 at 10:22 am

I have always felt, both as a faculty member and as an administrator, that student evaluations could not and should not be the end-all of teaching evaluation. Teaching is a very multifaceted activity, and trying to collapse it down to a bunch of numerical scores on a spreadsheet is almost sure to provide an incomplete picture. Furthermore, asking only the students to provide those scores captures, at best, only one part of the story. What about the syllabus? Lesson plans? Textbook and other materials provided? The quality of homework and other assignments? The thought that went into tests and exams? And so on. These things can hardly be fully assessed by a student taking the course once, and certainly not by a student who is marginally prepared for the course in the first place. They require a professional view, ideally some kind of peer assessment.

The other thing that one finds is that students change their view of a particular course over time. The evaluation given in the heat of the moment often reflects things other than the long-term value of a course, let alone the teaching talents of a professor. Scores on teaching evaluation sheets often reflect students' own feelings in the thick of things, in particular their insecurity or confidence with the material. A year later the student has matured and realizes that a rigorous course, viewed as "terrible" at the time and rated accordingly, was a necessary step and maybe a bit of a wake-up call. At graduation students will often recall how a course they really didn't like when they took it, and marked down on evaluations, was one of the most important and valuable. Alumni will tell you years later how a course they hated when they took it was one of the most beneficial to their careers. These kinds of time-lag effects are, obviously, not captured on standard evaluation forms.

In the research universities we are actually pretty good at assessing research, primarily, I contend, because we recognize the multifaceted nature of research activity from the outset. We recognize that the impact of research comes in a multitude of ways. Some of these are quantifiable, such as the number of citations of a paper or the number of grant dollars brought in, but others are much more qualitative, such as the type of mentoring given to PhD students or the appreciation by editors worldwide of an individual as a journal referee. Yet we usually manage to give a good account of all an individual's talents and attributes in research when the time comes for major career decisions. We need to learn how to do the same with teaching. A first step in this process is to de-emphasize the student evaluation as the stand-alone assessment of teaching. We need to complement student evaluations with other measures of quality and impact in teaching that are not generated by students.

19. physicsprof - April 27, 2010 at 10:24 am

Can we at least weight student evaluations by the grades students received in that particular course (weighting each rating by 1.00 if a student received an A, 0.75 for a B, 0.50 for a C, 0.25 for a D, and 0 for an E), or by their average GPA?
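Taken literally, physicsprof's proposal is a weighted average in which each rating counts in proportion to the grade its author earned. A minimal sketch of that arithmetic, using the grade factors given in the comment (everything else, including the sample data, is hypothetical):

```python
# Grade factors from the comment: A=1.00, B=0.75, C=0.50, D=0.25, E=0.
GRADE_WEIGHT = {"A": 1.00, "B": 0.75, "C": 0.50, "D": 0.25, "E": 0.0}

def weighted_rating(responses):
    """Average evaluation ratings, weighting each by the student's grade.

    `responses` is a list of (rating, grade) pairs, e.g. (4.0, "B").
    """
    total_weight = sum(GRADE_WEIGHT[g] for _, g in responses)
    if total_weight == 0:
        return None  # every respondent earned an E; nothing to average
    return sum(r * GRADE_WEIGHT[g] for r, g in responses) / total_weight

# Hypothetical responses: a failing student's harsh rating gets zero weight,
# so the 1.0 rating from the "E" student does not drag the average down.
responses = [(5.0, "A"), (4.0, "B"), (1.0, "E")]
```

Note the design consequence: under this scheme a failing student's opinion is discarded entirely, which is precisely the property critics of the proposal would object to.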

20. dnewton137 - April 27, 2010 at 10:30 am

Two pleas to all my academic colleagues out there:

First, several of the commenters above demonstrate the common academic aversion to the use of "industrial" nomenclature, e.g., words like "product" and "customer", in discussing our own hallowed profession. Might we not identify and agree upon some commonly acceptable terms that would render unnecessary huffy proclamations that "A university is not an industry!"

Second, is it not obvious that arguments about the primary purpose of our enterprise, whether we and our institutions exist to serve our students or our society, are silly? It's not either one or the other. It's both! As we seek adequate methods of assessing how well we are performing, let us strive to include measures of both. That's not easy, of course, but we owe it to both our students and our society to optimize, if not perfect, our outcomes. (Incidentally, I would explicitly reject the not uncommon belief that the primary purpose of our institutions is to serve ourselves.)

21. davi2665 - April 27, 2010 at 10:40 am

Tridaddy has a good idea for seeking student input on several occasions, not just a one-time response at the end of the course, when the students are anxious to leave and tired of studying. An excellent evaluation tool is student performance on an instrument that has national comparisons: for example, the shelf examinations from the National Board of Medical Examiners for advanced biology courses and medical school courses, which show the actual achievement of the students on an exam objectively written by educators from around the nation.

The best form of assessment of a faculty member's teaching ability is the evaluation by master educators and experienced course directors who observe the professor's teaching across the entire semester. As a course director, I attended every lecture by every professor, took detailed notes on strengths and weaknesses, talked with students after the lecture to assess their takeaway from the presentation, and held several open discussion sessions where the students could come to seek help, ask advice, and provide feedback on the course content, the examinations, the small-group sessions, and the professors. I also met with the professors, whether starting assistant professors or seasoned tenured full professors, to discuss their contributions and to seek ways to make the instruction even more effective. I expected them to provide the same frank feedback to me on my instruction.

One of the few aspects of professional sports that I like is the opportunity for detailed assessment of the performance of the participating athletes. If every game is watched carefully, performance is closely scouted by experts, and all individuals involved are given a chance for feedback, it is difficult to hide egregious conduct: not showing up, showing up under the weather, inappropriate and non-germane rants, and the other blatant abuses brought up by rginzberg. However, what I would most like to see abolished is the open-ended anonymous complaint evaluation, where a student can just unload and be as irresponsible or nasty as he/she wants, damaging careers and reputations. I still believe that evaluations should not be anonymous, and that includes journal review boards and grant review panels: if a person signs the review, it is likely to be vastly more thoughtful and measured than an anonymous opportunity to throw written hand grenades.

22. intered - April 27, 2010 at 10:42 am

It is difficult to think of a polite term to describe a professor of accounting who prefers to cling to myth over scientific data. We have a variety of regression studies, with n's as large as 85,000 records, demonstrating that grades awarded account for very little of the variance in student evaluations (less than spurious variables and unrelated variables of interest, such as course type).

Students (our sons and daughters, right?) are smarter and have more character than to sell their evaluations for a grade. Yes, a few students do. So also do a few professors sell their grades for sexual favors. Should we characterize the professoriate that way? If you disagree, don't give me your anecdotes; send me your datasets.

It is grossly illogical, unscientific, and unprofessional to describe a population based on the characteristics of a small number of outliers carefully chosen to reinforce our biases.

The questions about validity are valid. It requires good measurement-science skills to design a good evaluation. Most instruments ask too many questions of marginal validity and little practical use. Students sense this, and it affects their responses. The most useful information resides in open-ended responses to well-structured framing questions. These data are the most difficult to extract and aggregate, but when you do develop a system for "comment profiling," a heretofore unseen world of useful information emerges. Unlike aggregated data derived from scaled questions, comment-profiling data become richer, more distinctive, and more useful with aggregation.

23. mabenanti123 - April 27, 2010 at 10:50 am

One of my colleagues at another campus told me that he recently found out that a small group of students in a freshman class had read everyone's evaluations and changed the ones they believed to be too positive before putting them in the envelope to deliver to the drop box. The other students were afraid to speak up. How should these evaluations be counted?

24. wiztax01 - April 27, 2010 at 11:01 am

I was a little surprised that the website www.ratemyprofessors.com was not mentioned in the article. I am aware that it is not held in high regard by higher-education employees, but with over 6,000 schools, 1 million professors, and 10 million opinions in its database (according to the site), it should have a certain amount of face validity, although I do not know whether it has been analyzed.

25. intered - April 27, 2010 at 11:08 am

Afterthought: #1 calls attention to an important issue. Impact assessments (Level III and IV assessments, as they are commonly known in corporate higher education) form a bookend to the process measures we have the potential to take in well-designed end-of-course assessments (Level I). Well designed and taken together, these assessments support the kind of inferential research that can pinpoint the areas of greatest need for improving the instructional environment (learner inputs, nature of the learning objectives, nature of the activities, instructional methods, performance evaluation, etc.). Often, when we get bad results, we are left to guess at the causes: faculty blame students, employers blame the curriculum, and so on.

26. mcphslibrary - April 27, 2010 at 11:21 am

In addition to the official online course evaluations students do, I always ask them for anonymous evaluations for my own use at the end of each course I teach. The form isn't complicated, and I give students about ten minutes to answer. I ask 1- What was most useful? Why? 2- What was least useful? Why? 3- What suggestions can you offer to improve the course next time? I may also include one or two more course-specific questions. I tell students that I take their suggestions seriously and I often use them to make changes if the suggestions seem reasonable. I point out what aspects of the course resulted from prior student suggestions. The student feedback is almost always thoughtful and useful, and helps me improve the way I teach. Students are experts in what actually helps them learn.

27. softshellcrab - April 27, 2010 at 11:39 am

@ intered

Your attack on Professor Crumbley is seriously misplaced. Please do an internet search, look at his resume, and learn more about him. As I noted in my previous post, he is one of the most respected accounting professors in the country. He has been an innovator and leader, and is extremely well regarded as a teacher. He has even published a series of novels based on accounting topics that help teach accounting. This is not a case where you want to attack the source. Nor is he just someone whom other professors respect but students don't; just the opposite: he has a great reputation among students as an accounting teacher and innovator.

28. vlghess - April 27, 2010 at 01:01 pm

Actually, two comments:
1. re: Ratemyprofessor.com--when I've looked, the actual number of evaluations for any one individual is rather small; I suspect that the self-selected group of responders has a set of characteristics that might not make them representative, and therefore would limit the usefulness of the information.
2. I understand why this is so, but the "wall of separation" between learning-outcomes assessment and teaching evaluation is quite high. While there are lots of variables, and figuring out how learning correlates with a particular individual's teaching is tricky (as several states working on the problem at the p-12 level can testify), it seems bizarre not to be able to evaluate teaching using the one direct measure that gets at what I really want to know: are his/her students learning?

29. intered - April 27, 2010 at 01:09 pm


I understand and appreciate your comments supporting this person's teaching and dedication. That said, I ask that the issue be examined dispassionately.

How would we feel about being cared for by a respected, kind, and dedicated physician who professed that blood tests caused the outbreak of AIDS and that they should be outlawed? Closer to home, what position should we take with respect to a professor of psychology who condemns students as having lower IQs than previous generations and claims that television caused the lowering?

The parallels are these.

1. Professor Crumbley asserts that end-of-course evaluations have dumbed down the curriculum. In addition to being illogical in ways I doubt I have to spell out, the empirical component of this assertion is provably false, and easily so.

2. Professor Crumbley asserts that end-of-course evaluations have led to grade inflation. It is unclear from this article whether he is placing the dumbed-down curriculum in this chain of causality, thereby compounding his errors in reasoning, or is asserting that end-of-course evaluations have directly caused grade inflation. Either way, his assertion carries a component of bad logic (end-of-course evaluations have been around for at least 50 years; grade inflation is younger than that), but the main issue is that there is no credible empirical evidence that end-of-course evaluations have caused grade inflation. As a matter of course, I have examined every research study on this topic for the past 30 years. My appraisal is that grade inflation, once one overcomes the definitional challenges, is generally a context-dependent phenomenon with multiple causes, not the least of which is the fact that higher education, once a small family of tiny niche markets occupied by the smart and the rich, is now a large and still-growing set of markets of increasing diversity in inputs, processes, outcomes, and learner goals. In addition, as I indicated before, regression after regression fails to show that grades can purchase good student evaluations. What must we think of our professors and students to assert that?

3. Professor Crumbley wants us to outlaw the process to which he illogically and unscientifically imputes certain negative effects. Don't like the effects of a measurement system . . . outlaw the system! Did he consider, perhaps, improving the informational system?

I ask an open question. What position should we take with respect to the kindly physician, the psychology or accounting professor, or any other trusted knowledge holder to whom we go for clear and accurate reasoning presumably based on the available facts, when that person is illogical and unscientific, and asks us to support his positions by outlawing the culprit in his simple-minded analyses? Hire him? Fire him? Ask him to confine his opinions to accounting? Ignore him, because academic freedom implies that flat-earthers are entitled to their opinion and to teach our students anything they want? Do you really believe that Professor Crumbley hasn't "taught" his unfounded, some would say "nutty," beliefs to his students? Is he serving them well in doing so?

30. goodeyes - April 27, 2010 at 01:44 pm

In my experience, faculty who get higher teaching evaluations find them valid, and those with lower ones find them invalid. This article leaves out the fact that thousands of research articles support the validity and reliability of using teaching evaluations as one part of the evaluation of teaching. There is incredible teaching, excellent teaching, okay teaching, and very poor teaching. We have experienced it ourselves, and I know even today that the evaluations I gave of faculty when I was a student accurately reflect these various levels of teaching. Why do we as faculty allow poor teaching, with all the research available on what incredible teaching looks like?

31. intered - April 27, 2010 at 01:57 pm



I would add the question, "Why do we retain professors to teach the sciences who teach and evaluate the way their great grand-professors taught and evaluated, blatantly ignoring the last 50 years of learning sciences, pedagogical sciences, and evaluation sciences?" It is one thing not to be in a professional position to know or appraise these sciences. It is, as Scriven said, a treason of the intellectuals to know and ignore them to the disservice of students.

32. rgren - April 27, 2010 at 02:36 pm

Here is something to ask your students. Are you our customers or our product?

When I have done so, I am told in no uncertain terms that the students, especially the most successful, wish to be part of the institution. They want to be seen as contributing to the campus life, the reputation of their college, and the learning process. If one wishes to use the factory metaphor, students don't want to be seen as buyers of what we purvey, nor as the product. They want to be involved in the production, especially at the level of quality circles or other empowered groups.

33. agusti - April 27, 2010 at 03:27 pm

After 15 years working in higher ed, with nearly unanimously positive student evaluations, I still agree with Crumbley that we should do away with student evaluations as they exist now.

Students are like anyone else: they evaluate things based on what they wanted to get out of them. So five stars on Amazon for an MP3 player that sounds good and has a long battery life and good warranty makes perfect sense to me. But if we compare what students want from our classes and what we strive to provide, it doesn't take long to realize that these two things are often drastically at odds.

So when students "evaluate," regardless of what question you ask or how you ask it, they are really answering this question: "Did I get what I wanted?" This isn't too far from asking a five-year-old, "Were the broccoli and carrots yummy for dinner tonight?"

I say this not because our students are infantile, but because they are driven by desires that usually don't have anything to do with learning. They want classes that aren't too hard, aren't too much work, and won't mess up their GPA. Period. Just like most of us want jobs that pay well, don't require overtime and where the boss doesn't give us flak. It's human nature. We want the most from the least amount of effort, and all of us - professors included - show evidence of this in everyday life, in what we buy, how much we expect to pay for things, how we want people to treat us - but suddenly when "evaluation" time comes, we expect a bunch of 18-22 year-olds to rise above human nature and evaluate based not upon what they wanted, but what we wanted them to want. Ain't gonna happen.

Until this most basic of facts can be dealt with in evaluation tools, I say keep them out of the classroom.

34. intered - April 27, 2010 at 03:42 pm


So, are you saying that you evaluated your professors not on their merits as instructors but based on your desires that had nothing to do with learning, such as whether or not they messed up your GPA or gave you too much work? Or, are you saying that the rest of society is not like you?

35. agusti - April 27, 2010 at 04:05 pm


As an undergraduate with little perspective on learning and its inherent value, yes, I probably "evaluated" my professors and classes based on how much they corresponded to my 18-22-year-old interests.

As a graduate student, with a love for the subject roughly equal to that of my professors, I evaluated in a different way, with a different perspective.

I'd say that makes me exactly like the rest of society: I tend to comment favorably on that which gives me what I want, and less favorably on things that do the opposite.


36. intered - April 27, 2010 at 04:40 pm


Understood. Do you see your generalizations applying to the nearly half of today's college students who are adults, most of whom work and have family and social commitments, and who are generally regarded as quite purposive in their reasons for returning to school? Do you think they apply to the 15% (guessing here; I haven't seen recent data) of the 18-22-year-olds who have serious career designs, such as medical school, music, or law school? How about the 5% who are serious intellectuals and desire to become scientists?

The problems I have with such generalizations are that (a) they are pessimistic with respect to the human condition and, more importantly, (b) they do not correspond to the objective evidence. We have 3-5 million taxonomized end-of-course assessment comments in our knowledge base, spanning 25 years, and several times that many data points from scaled questions on end-of-course assessments. The analyses of these data overwhelmingly support the generalization that students are intelligent consumers who are focused on the material elements of their learning environment.

The detail runs into volumes, but, for example, more than 90% of comments are about the core elements of the learning environment (instruction, curriculum, textbooks, fellow learners), with only 2% focused on pure hygiene factors (temperature, vending machines, etc.). Eighty percent of the learning-environment comments are focused on the professor, and 75% of those are positive. Of the negative comments, most are constructive (e.g., "I put three weeks into my research paper and all I got back was 'Good job.' I expected helpful feedback and got none."). Interestingly, a higher percentage of the positive comments are not actionable (e.g., "Great professor. Really made me think."). We also see trend lines suggesting that students are becoming even better judges of effective teaching than their predecessors. This might correspond with the fact that IQ and other ability-test scores appear to be rising as well.

Do these findings support the conclusion that students are narrowly self-serving, such that they will rate you poorly if you make them work hard or give them a well-deserved low grade? They do not.

I'm certain that some students behave exactly as you say, and they tend to be the memorable ones (it's called primacy/recency bias). Also, how the assessment process is set up and the importance imputed to it has a great deal to do with the quality of the findings. If you tell students that the process is useless and no one cares about the findings, they will be more inclined to conserve their efforts, and vice versa.

I am personally at a loss to explain how those of us who make our living via the free and unhampered exchange of information can be so eager to close the windows of inspection into our classrooms. How many of us can say that we cannot learn to teach better? How many of us are certain that our students have nothing to teach us when they evaluate us? We are, as so many are saying, behaving like the now-extinct Mandarin class.

37. agusti - April 27, 2010 at 05:06 pm


I think you've made good points here, as in your other posts, but they don't have all that much to do with what I've said. Moving from the top down:

1. I think it's important not to confuse what students want from an institution (a degree, a chance to advance in either their educational or professional area) with the things they have to do to get what they want (taking classes). Sometimes these classes will correspond to things they already want, and most of the time they won't. So I'm simply saying that asking people to evaluate the quality of a thing they most often didn't want in and of itself, when that thing involves hard work and investment of time and money, is suspect. This applies to the 95% in your first paragraph for whom classes are a means to an end, not an end in themselves.

2. Your second and third paragraphs give fine documentation of the fact that evaluations have happened and that students are capable of making comments about the quality of the experience they have had. The real question, however, is how useful these comments are when we consider my basic point, which is that most people are rendering an opinion on an experience they would not willingly have undertaken were there another option.

3. Your fourth, shorter paragraph references something I haven't mentioned, regarding grades. Interesting point, not one I have taken up here.

4. I personally have always asked students to give as much information as they possibly can in evaluations, figuring that if we're going to do them, we might as well gather as much information as possible. So I for one have never "devalued" evaluations in front of students, nor would I. So again, well said, but not applicable to anything I've mentioned.

5. Finally, and by far most importantly, nothing I've explained makes any comment whatsoever about the need to improve teaching. That need always exists, I just believe that the "student evaluation" is too deeply flawed a tool to be used in a serious way. For now.

I appreciate the passion you display for the evaluation process and the hope that it might one day be more useful as a tool for improving teaching; I just don't feel that it's there yet (or anywhere close), because we haven't found a way around the broccoli-and-carrots problem.

38. intered - April 27, 2010 at 06:22 pm


Thanks. We have some differences in perspective but I, too, appreciate your perseverance and clarity in explaining your perspective.

Give some thought, if you will, to the comments students make in response to two questions along the lines of (a) "Tell me what this instructor did that contributed to your learning in a noteworthy way" and (b) "Tell me what this instructor could have done that would have improved your learning in this course." When you aggregate these two comment categories (as tokens) over time, you end up with a surprisingly accurate profile of the person as an instructor. Some professors get a silly grin when they read two years' worth of aggregated comments. They are amazed at the clarity and focus. Among other things, comment profiling can be used for best-practice sharing, mentoring, etc. A good thing for all of us, I believe. Thanks for listening.
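
A toy sketch of that aggregation step (the category tagging itself is assumed to be done separately, by hand or by a classifier, and all names here are made up):

```python
# Minimal sketch of comment profiling: tally analyst-assigned categories
# per framing question, accumulated across many course sections and terms.
from collections import Counter

def build_profile(tagged_comments):
    """tagged_comments: (question, category) pairs, where question is
    'contributed' or 'could_improve'."""
    profile = {"contributed": Counter(), "could_improve": Counter()}
    for question, category in tagged_comments:
        profile[question][category] += 1
    return profile

comments = [
    ("contributed", "clear explanations"),
    ("contributed", "clear explanations"),
    ("contributed", "responsive by email"),
    ("could_improve", "feedback on written work"),
]
profile = build_profile(comments)
print(profile["contributed"].most_common(1))  # [('clear explanations', 2)]
```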

39. cleverclogs - April 28, 2010 at 08:43 am

@intered #31 who writes:
"I would add the question, 'Why do we retain professors to teach the sciences who teach and evaluate the way their great grand-professors taught and evaluated, blatantly ignoring the last 50 years of learning sciences, pedagogical sciences, and evaluation sciences?'"

Well, as someone who works on faculty development, these are young "sciences" of which intered speaks and, indeed, many of the discoveries of the last 50 years are now being called into question. For example, I was recently at a conference where a highly respected keynote speaker presented new evidence that "learning styles" should not be attached to students (e.g. "I'm a visual learner") but to the material such that we ask ourselves, is this the most effective way to learn this piece of information, rather than, am I hitting all the learning styles? As a teacher, I knew that years ago. Now finally, pedagogical best practices will catch up, probably in another five or six years. Meanwhile, all the students that come in the intervening years get a subpar education founded on faulty best practices. And this is just one example.

I think what many fine professors react against is the sort of fashionability of these evals, the assumption that they somehow represent "the truth," and the disproportionate weight they are given in the academy. In a few years, pedagogical science may find that some of the observations and objections being made here are legitimate. In the meantime, again, students will have received a subpar education from faculty hamstrung by earlier vitriolic evals, and some very good if challenging professors will have left teaching rather than be forced to contort or dumb down the curriculum for students who believe that they are, for example, visual learners who can only learn visually.

40. intered - April 28, 2010 at 11:11 am


We are not in disagreement on your first main point. I allowed for misinterpretation of my reference. Many of the learning styles/modalities generalizations that you mention are based on weak research, and even on conceptual meaning schemes for which there is little empirical support. Some of it derives from NLP, which may have merit but is not scientifically grounded.

My comments regarding the learning sciences go primarily to psychological and physiological learning theory and their empirics, where you will find the incremental accumulation of generalizations characteristic of good sciences. Discontinuities are occasional and justified by breakthrough findings or conceptualizations.

Even though some of the pre-scientific ways of teaching turn out to have good scientific support (which should not be a surprise), others have been contradicted by the modern sciences yet are still practiced by the majority of the professoriate. On balance, the professoriate has not incorporated these sciences and acts as if they did not exist.

What I have said about the learning sciences applies even more to the measurement sciences. The most common form of assessment in the US professoriate is the multiple-choice test. (The other dominant form is the essay, which, as commonly practiced, is even worse scientifically; I will not address it here.) I have analyzed many hundreds of such tests affecting the grades of tens of thousands of students across the nation. Half of the test items on the average second-generation test (revised after at least one round of testing and student feedback), constructed by the average professor, fail minimum basic validity tests. The validity profile is worse for first-generation tests, and worse still when the test is comprised of the sample test questions in textbook supplements (10-15% of the professoriate use these). Before you try to justify this, think of what it means! You need not take my word for these dismal findings. Run the analyses on a few dozen MC tests administered by your faculty.
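
For illustration only (the specific validity tests are not named above), one standard screen is the point-biserial discrimination index, which correlates success on an item with the total test score; items near or below zero discriminate poorly:

```python
# Point-biserial discrimination index for one multiple-choice item:
# r_pb = (M1 - M) / s * sqrt(p / (1 - p)), where M1 is the mean total
# score of students who got the item right, M and s are the mean and
# population SD of all total scores, and p is the item's difficulty.
from statistics import mean, pstdev

def point_biserial(item_correct, total_scores):
    """item_correct: 0/1 per student for this item; total_scores: test totals."""
    p = sum(item_correct) / len(item_correct)
    if p in (0.0, 1.0):  # everyone right or everyone wrong: no information
        return 0.0
    m1 = mean(t for c, t in zip(item_correct, total_scores) if c == 1)
    m, s = mean(total_scores), pstdev(total_scores)
    return (m1 - m) / s * (p / (1 - p)) ** 0.5

# A healthy item: the high scorers get it right, the low scorers miss it.
print(point_biserial([1, 1, 1, 0, 0], [90, 85, 80, 60, 55]))
```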

But even if the MC tests were acceptably valid within their theoretical framework, they still represent intellectual treason in 2010. I know of no valid research to suggest that even the best MC test produces anything close to the generalization in application routinely produced by authentic assessments that are integrated with authentic activities. Ask your faculty if they design and use authentic activities and assessments. A few do (and tend to gush about them) but most of your folks have no idea what you are talking about.

This is where we disagree.

You portray the professoriate as a wise, incrementally moderate group, carefully appraising new sciences and slowly adopting their findings so as not to lead themselves or their students down the occasionally inevitable scientific dead ends.

No offense, but this is absurd. I have stood in front of at least 10,000 instructors across the nation and 'irresponsibly, unprofessionally ignorant' is the only accurate term one can employ with respect to their collective knowledge of learning sciences, measurement sciences, and their application to effective teaching and evaluation methods.

As for the end-of-course evaluation discussion, I wish a few of those commenting above could see how clearly they are revealing their ignorance of measurement science. They believe in their tedious measures of gluon cancellation, but their attitude toward measuring what they do as teachers is not unlike that of the 17th-century church. These few folks are atavistic stone throwers.

41. skocpol - April 29, 2010 at 05:16 pm


"I have stood in front of at least 10,000 instructors across the nation and 'irresponsibly, unprofessionally ignorant' is the only accurate term one can employ with respect to their collective knowledge of learning sciences, measurement sciences, and their application to effective teaching and evaluation methods.

"As for the end-of-course evaluation discussion, I wish a few of those commenting above could see how clearly they are revealing their ignorance of measurement science. They believe in their tedious measures of gluon cancellation, but their attitude toward measuring what they do as teachers is not unlike that of the 17th-century church. These few folks are atavistic stone throwers."

Any teacher who has such vitriolic contempt for the persons to whom he is trying to teach new ways of thinking should quit or be fired. Your hubris is blatant, and your effectiveness should be measured to see whether it is zero. If nonzero, it should be normalized by the length and frequency of your comments.

42. ellentr2 - April 30, 2010 at 07:32 am

At our institution the IDEA course evaluations are used by the administration as instructor evaluations. We have lost good instructors simply because of low(er) IDEA scores. I am concerned that too much weight is placed on the evaluation of a course (instructor). Why should 18-22-year-olds with little objective expertise in a discipline have such a significant impact on the careers of faculty who have invested their whole adult lives in developing their discipline expertise? Low-enrollment courses in low-enrollment programs have adverse t-score adjustments, making the "grades" even worse. A decade after arriving, I now understand why I should bring donuts to class on course-evaluation day.

43. intered - April 30, 2010 at 11:34 am


You're probably right. I notice that my concern sometimes manifests itself in the rhetoric of contempt, albeit in the abstract. Let me ask you, though, how you might feel if you were addressing a group of physicians who were not only espousing 50-year-old medical precepts based on even older science but were closed-minded and indignantly proud of their position? This is a generally faithful analogy to the professoriate's refusal to change how it teaches and evaluates students.

Yes, exceptions abound, but we still use agrarian calendars, we still lecture for 50 minutes, we still assess with invalid instruments, we still proffer inauthentic content, we still view learning as an isolated instructor/student dynamic, we still refuse to improve our teaching methods through peer review, we still refuse to police ourselves to protect students against the tenured prof who quit updating his lectures 20 years ago, we still criticize and often ridicule attempts at innovation . . .

This thread is about validity. You have isolated a bit of my anger and focused on that. I thank you for it but I still want us to join the modern age where we can exploit modern sciences to become better teachers and evaluators of student performance. Do you have any material comments on that topic?

44. navydad - April 30, 2010 at 12:20 pm

"The students are not our customers, the society is our customer. The students are the products."

This comment makes for good rhetoric and it may be true in an abstract or ideal sort of way. In reality, though, students and their families decide if and where to go to college and most of them pay most of the costs of college. Therefore, they are customers paying for a service and to pretend otherwise is silly and unproductive.

45. skocpol - April 30, 2010 at 01:03 pm

Despite my gray hair, I do. I am a Professor of Physics at Boston University.

Five years ago I (re-)joined a multi-lecture-section, algebra-based introductory physics course that had already evolved to using clickers, online WebAssign homework, a common syllabus, and common evening exams. With the clickers I would give students a chance to confer before answering. Then I would try (relatively unsuccessfully) to get students to volunteer to explain the different positions revealed by the PRS questions. I freely used questions created by colleagues with more familiarity with the issues. As the years passed, I found myself just telling the class my ideas about the origins of the different positions revealed by PRS.

I now teach a single-lecture-section Spring-Fall offering of the course, and have retained and tried to improve these innovations.
This term in an interdepartmental pedagogy seminar, a chemist told us that he makes students continue to talk to each other and be polled over and over on a single PRS question, until they converge on an answer, even if it isn't the correct one. I immediately tried that and liked the improved interactivity, although in the interests of time I tend now to add obscure hints to aid their progress.

We will see (I hope) from the written free-response sections of the course evaluations that I gave yesterday (and which are being typed today by the departmental office staff) whether they comment on this mid-semester switch -- and more importantly what they have to say about it.

Our department has been typing up free responses and analyzing numerical-scale questions for course and instructor evaluation purposes for 18 years. Results are given to the teacher, the annual departmental merit review committee, and eventually at promotion time. For comparison, the numerical averages over types of classes on "Instructor overall" are systematically analyzed and accumulated over time.

The Dean started requiring all departments to have evaluations five years ago. The college only analyzes the numerical responses, so we type the written responses verbatim quickly before giving the Dean's office the filled-out Scantron forms. They made us abandon our clever 8-point scale (more range, no middle), in favor of the less informative traditional 5-point scale.

Our Provost has set up an internal grants competition seeking sustainable changes to large introductory courses to help them follow trends toward active learning and student engagement. Some in our department would like to adopt or pilot a studio approach for discussion and lab and perhaps eventually do away with lecture in intro courses. Others resist vociferously.

Faculty have huge time demands from research (and proposal writing without any kind of staff assistance) as well as the endless rounds of seminar and colloquium series, committees with major substantive responsibilities, etc. Lecturing is easier (the second time) than innovating. Doing your own approach may be more satisfying than submitting to a script written in collaboration with others. Finding space that would not transform, and thereby subtract, registrar's classrooms or teaching labs still used by other courses is a major difficulty. To be sustainable, innovation requires enough acceptance that it does not all ride on the guru who comes up with the ideas in the first place. I am on board with piloting the studio course approach, but then the same type of room needs to be used by pilot courses from other departments so that it is sufficiently full all day, all week that it does not contribute to the maxed-out demands on the registrar for classrooms.

I am old enough to have personally experienced vast swings in the enthusiasms of education innovators. As a junior faculty member at Harvard, I absorbed the use of viewgraph projectors (= handwritten PowerPoint that allowed both 4-to-1 reduction followed by mass photocopying and a line-by-line "striptease" during presentation). The pedagogical rage was voluntary homework followed by unit tests available on a flexible time schedule all through the term. Of course the room was packed just before any deadline, and students then would burn through as many versions as were available, without passing any of them.

The pendulum swings. Good ideas in the hands of their proponents are great. Spread to everybody they probably won't work as well, and eventually will be criticized by another generation of innovators.

Change keeps us alive, aware, and engaged. The direction of change must also change from time to time. Do your thing, and don't get so frustrated. It's not good for your health. ;-)


46. intered - April 30, 2010 at 01:15 pm

"The students are not our customers, the society is our customer. The students are the products."

Are some discussants aware that nearly half of today's college students are adults, most of whom have jobs, families, and adult social commitments? One *might* be able to pull off an "in loco parentis" argument for a 17-year-old, but when a 35-year-old father of three, manager of seven, pays you to teach him accounting (or literature), *he* is the customer and you have a social contract to deliver what you promise on time and as described.

47. athlwulf - April 30, 2010 at 04:20 pm

Mr. Crumbley's statements represent the worst of higher education's Ivory Tower attitudes and disdain for the students. It might sound nice and lofty to say that society is the real customer, but society is not enrolling in classes, studying, being subjected to both good and bad professors, and walking away buried under four years of debt. Granted, students often do not know what is best for them, but part of the service they are paying for is to be led by someone who is professional and current on the best methods of teaching and engaging. There is an implicit agreement here: I will follow along with the understanding that what you ask me to do will result in attaining the objectives of the class. Part of the process of knowing you are effective is getting feedback from the students. Sure, there are those with an axe to grind, and those that do not know what is best for them, but a good teacher pays attention to the process and is interested to know how his or her instruction has impacted them. This does not mean that every criticism or praise is warranted or in need of action. But we also don't need to dehumanize them either.

48. crankycat - May 03, 2010 at 04:16 pm

"Students are inventory"?! That has to be the most cynical and dehumanizing statement about education I have ever seen. Education is not a commodity like corn futures. It is a process based on voluntary relationships and should not be treated like some sort of "product".