



On behalf of the Faculty Council Executive Committee, Professors Patrick Davis (pharmacy) and Susan Klein (law) submitted the following report in response to the UT System Task Force Report on the Evaluation of Faculty Teaching. The secretary has classified the report as general legislation. The Council will hear the report at its meeting on May 6, 2013. If the report is endorsed by the Council, it will then be transmitted to UT System for consideration.

Sue Alexander Greninger, Secretary
General Faculty and Faculty Council

Posted on the Faculty Council website on May 3, 2013.


“Draft Policies for Evaluating Faculty: Recommendations for Incorporating Student and Peer Reviews in the Faculty Evaluation Process”

The Faculty Council of the University of Texas at Austin appreciates the opportunity to respond to the Task Force draft recommendations, and urges the UT System Chancellor to withhold implementation of these recommendations until such time as they can be amended in light of the concerns that follow, including the referencing of relevant research supporting the specific recommendations.
Amendment Number One:
  1. The first mandatory question (of the five) should be modified to read “The instructor clearly defined and explained the objectives and expectations for the overall course (or that faculty member’s section of the course).”

Amendment Number Two:
  2. The “Mandatory Survey Questions” should be expanded from the proposed five to include two additional questions: (1) an overall instructor rating; and (2) an overall course rating. These seven required survey questions would be reported to UT System rather than only the proposed five.

Amendment Number Three:
  3. The average score for each question must be reported separately. UT System should not add the seven scores and divide by seven to arrive at an average, composite score, as there is no mathematical or logical reason to average the scores on these individual questions; they assess entirely different elements. If the intent is to arrive at some “overall” score, then the two capstone questions suggested in the previous amendment address exactly this point.

Amendment Number Four:
  4. The use of online surveys will remain optional for those faculty who wish to use them in lieu of paper surveys (which would continue to be available). Methods need to be identified by which student written comments can be secured with both online and paper surveys in order to provide relevant information beyond the numeric feedback.
  1. Is there research that supports the Task Force's recommendation that "no single, all-encompassing, overall impression item be included, as it would be overly broad, largely subjective, and thus uninformative"? In fact, a number of important decisions about the quality of teaching rely, in part, on student answers to the overall instructor and course questions. No other question provides a comprehensive student view as to whether the instructor taught effectively or whether the course content was useful. Budget Councils, Department Chairs, and Deans often use the results of these two questions in determining whether a faculty member is in need of help in the classroom, whether a faculty member is improving overall, or whether faculty members are eligible for teaching awards.
  2. No study is cited to demonstrate that using social media to "continuously evaluate" the professor during the course will not devolve into an anonymous "slam table." Should a faculty member choose to seek such ongoing feedback, there needs to be assurance that availability of these ongoing, unstructured comments be restricted solely to the faculty member and purged at the end of the semester. If research demonstrates that using social media will indeed be beneficial, the selection of an appropriate provider should be vetted based on a system that provides the essential tools specifically for this purpose.
  3. Greater consideration needs to be given to how student comments can be integrated into the capstone assessments provided by the standardized questions (both paper and electronic surveys). Student comments are frequently the most important communication to faculty clarifying the meaning of numeric responses, and either set of responses alone provides only partial feedback. That said, thorough consideration needs to be given to how students and faculty will be protected from student comments being released under open records requests.
  4. It is a widespread impression among faculty that response rates are lower with online (electronic) surveys than with paper surveys. Data comparing response rates across different survey methods, along with strategies to improve response rates, are available and should be provided in the Report. While an incentive (early release of grades) is proposed to encourage student responses, the Task Force should cite examples of how this has been implemented and how well it works at other institutions. For example, there is concern that the threat to withhold "early" grades from students who do not complete electronic evaluations in the required timeframe creates a potentially hostile environment and encourages a thoughtless process of completing the assessments rather than providing informative feedback. Has this been the experience of other institutions implementing an early completion incentive?
  5. A number of faculty feel that results from paper evaluations are more reliable since the surveys are completed only by students who actually attend the class. Electronic evaluations, on the other hand, can be completed by students who were not present, may not be interested in the course, and/or have a biased interest in completing the survey. The Task Force should consider methods by which this imbalance can be addressed as part of their recommendations.
  6. In team-taught courses, it is not uncommon for a faculty member to complete their involvement in the course well before the end of the semester, when evaluations are called for. Being evaluated only at the end puts them at a decided disadvantage, since considerable time has passed since their interactions with students in the course. It also forces ‘clustering’ of multiple evaluations at the end of the semester for all of the faculty involved in the course (further encouraging a thoughtless process of completing the evaluations just to get them done). Thus, the online (electronic) evaluation should be designed so that such faculty can arrange for their assessments during the semester (upon completion of their section) rather than at the end of the course. Likewise, paper surveys should be permitted during the semester upon completion of the faculty member’s section.


eCIS Comments from UT Austin Faculty Members

Date: Tuesday, April 16, 2013 12:27 PM
Dear Michael,

I echo Jamie's sentiments (though perhaps without invoking Nuremberg...). And the problem I have with the 5-item questionnaire is that it seems more designed to assess "responsible" teaching (e.g., providing an accurate syllabus, responding to student communications) than high-quality, effective, inspiring teaching. I would also endorse the 2-item questionnaire over the current version of the 5-item one.

Caryn Carlson
Associate Chair
Department of Psychology

On Mon, Apr 15, 2013 at 2:04 PM, James W. Pennebaker <> wrote:
Caryn forwarded me the info about the proposed faculty evaluations. I have two very strong feelings about this:

1. Any evaluation system must be fast and brief, with as little disruption to the department as possible. The full evaluation system being proposed will undermine faculty morale, be a huge drain on faculty time and research productivity, and likely will not lead to any substantial improvement in teaching. The way it currently sounds, this will be like the Nuremberg trials, where each faculty member will spend hours defending their class to their peers and then stand in a witness box afterwards as a verdict is pronounced.

2. As useless as the 5-item questionnaire is, at least it is brief and not too intrusive. How about a 2-item questionnaire with open-ended response: a) overall how good was the class; b) overall how good was the instructor?

Jamie Pennebaker
Chair, Department of Psychology

I'm going to comment with this email list on the draft document for "Policies on Evaluating Faculty", and I have included Faculty Council email so I hope this is ok. I read this document for the first time tonight though clearly my comments are influenced by the earlier task force report.

My highlight of the opening paragraph would include "strengthen performance objectives" and "critical questions which evaluate faculty teaching". I believe that the overall objective of this directive is to assign faculty a grade based entirely on student evaluations. As you can see at the end of the document, the draft is dismissive of peer evaluations and has no interest in their being transmitted to UT System.

The best mechanism to provide "faculty feedback throughout the semester" is *social media*, but the Task Force promoted MyEdu, which I would judge a clear conflict of interest. Further, the Task Force (apparently) did not broadly survey Texas institutions, including the UT campuses, A&M, or Rice, but rather picked as their benchmarks the University of Maryland and USC, which rank behind the Tier One universities in Texas.

Before having "Mandatory Survey Questions", UT System should have drawn feedback from the UT campuses and worked to secure agreement on the "required wording". This should be a matter of consensus rather than a directive from the Chancellor.

I have no idea of the importance of the order of responses running "strongest" from left to right, which is just the opposite of our surveys, but I would be interested in what our colleagues in Psychology say on this point. It may be entirely an even choice.

There are five questions here -- on our basic survey we have six questions plus overall evaluations of the instructor and the course. The Task Force felt that the last two questions we ask are "subjective" and so without content. But all questions of this type are "subjective". Our expanded survey (which my dept uses) includes 17 questions plus the two overall evaluation questions.

There is one question which is identically worded on their "five" and our "six": their #3, "The instructor communicated information effectively," is our #2. Their #1 corresponds roughly to our #1, "Course is well-organized," though our question has broader reach. Their #2 again corresponds roughly to our #7, though we ask broadly "Instructor well-prepared". Their #4 can be matched against a combination of our #3 and #5. Their #5 corresponds to our #10.

Their comment that "Institutions should consider that longer surveys typically lead to lower response rates and less accurate responses" does not match my experience with my own classes.

I should mention that the one response on my own student surveys in my lower division class that has caused a change was the question #15 "Intellectually stimulating" where the response has caused me to think about how I could improve and refocus the class.

The next paragraphs, which emphasize "Student participation is crucial", "Mandatory completion of course evaluation", and "completion of course evaluation is required", reinforce my earlier thought that this proposal is directed at "grading faculty" based entirely on student evaluation. I am strongly opposed, as are all student groups, to coercive methods such as *priority access to grades*.

Student comments are crucial for teaching awards, especially Regents' awards and other high-profile recognition for exceptional teaching. The philosophy expressed in this draft places its weight on *mechanics of teaching* as opposed to inspiring students in the classroom. The bullets at the bottom of page 4 are very mechanical.

Peer review is necessary for teaching awards, and required for promotion and post-tenure review. Such reviews are a necessary component of each candidate's file, and there is no option for not including such reports. The overall report is reviewed by each candidate, who can contest the conclusions. It would seem that UT System is not familiar with evaluation processes on the individual campuses. The concluding statement that "UT System will not be collecting the results of peer evaluations at this time" substantiates my earlier thought that UT System is only interested in assigning grades to faculty teaching based entirely on student evaluations.

--Bill Beckner

ps In reference to comments on electronic surveys, CTL has claimed that though the response rate for electronic surveys is clearly much lower than for paper surveys, the outcomes yield similar evaluation data. If the data from the English department analysis contradicts this claim, then that is an important result to highlight.

I have no desire for my silence to be taken as assent to this travesty of a "report." I question both the activities and the recommendations of the Task Force on the Evaluation of Faculty Teaching.

I want to see the scholarship that says the recommended questions in the recommended order will be an adequate evaluation of any faculty member's teaching (and why); I find it ludicrous that the faculty of a major research university is assured that "The above five questions provide a more complete method for establishing an overall impression" with no evidence whatever offered to back that claim, or even to define what is meant by an "overall impression." The Task Force spent three full days(!) meeting and deliberating over this allegedly foolproof selection of questions, so I will perhaps be forgiven if I suggest that they were given a completed document asserting this claim. I also find it insulting that so little explanation of the Task Force decision is provided (only 5 of the 11 double-spaced pages). Can the Faculty Council make a Public Information request for the records of the Task Force in order to find out what the Task Force was even doing during the three days--never mind the junkets to Maryland and California and what the MyEdu presentations said?

Patricia Galloway
Associate Professor
School of Information

Comments from Pat Davis (professor, pharmacy) on Peer Observation Report from 6-28-2012

Note:  this appears to be the original report submitted to System by the four-member (faculty) Task Force.
  • Very much like the idea that this is a component of peer mentoring, collegial feedback, and faculty development.  In terms of using it for ‘assessment’ it is only in the context of a multifaceted approach.
  • Very much like that these are suggested as guidelines, but emphasize that the local units need to determine the particulars (each university and the local units within each university).  Cultures differ, and there is no ‘one size fits all.’
  • Agree under 3a that ‘evaluation’ is a separate process, using longitudinal peer observations as one component to document improvement.
  • I like the idea that the process calls for a ‘post-observation’ discussion rather than just forwarding a report.  This enhances the collegial elements of the process.
Comments on Teaching Evaluation Report
  • Requiring evaluations to be conducted at the end of the semester is fine for courses where there is only one faculty member.  However, in team-taught (and in some cases extensively team-taught) courses, more timely evaluations need to be conducted.  Otherwise, the students simply won’t remember the faculty member who taught in the first part of the course.
  • Likewise, the first question concerning course objectives and expectations is fine if the faculty member has the whole course, but not in team-teaching.  The statement should be broadened to reflect that the faculty member should make clear their objectives and expectations for whatever portion of the course they have.
  • I like the idea that additional questions can be posed by the instructor or standardly by the institution.
  • I understand the statement that single, all-encompassing, overall impression statements require a broad view in constructing a response.  That does not, however, negate the student’s opinion.  Teaching is incredibly multifaceted, and five questions won’t capture it all.  There is nothing wrong with a broad ‘capstone’ evaluative statement that is a holistic appraisal of the teacher.
  • While I am supportive of incentivizing the completion of surveys by the students, the proposal is problematic.  We should not ‘withhold’ (delay) informing students about their grades if they don’t complete the survey.  Rather, we should provide an early (stamped ‘preliminary’) grade for those completing evaluations in a timely manner.  Don’t bill this as a penalty if you don’t; it should be an incentive if you do.
  • The incentive issues become problematic for largely team-taught courses.  We have some courses with 10-12 faculty members involved because of the integrated nature of the course.  Are students to do 10-12 evaluations at the end of the semester?  This brings us back to my first point that in such courses we need a way for faculty to be evaluated in closer proximity to when they complete their part of the course.  This is more timely feedback (students haven’t forgotten), and it prevents clustering of evaluations at the end, which students will simply rush through to get the box checked so they can get their grades.
  • I like the idea of a standard statement in the syllabus informing students that this is part of their professional responsibility.  In addition to providing reminders, faculty should also point out examples of how student feedback has resulted in tangible course changes.
  • I would support the MyEdu tool for ongoing feedback during the course ONLY if this is a private communication between the student and faculty member and is NEVER accessible administratively.  As stated in the proposal, this mechanism is NOT MEANT FOR FACULTY EVALUATION, and the only way faculty will adopt this is knowing that invited candid feedback won’t backfire on them.  It is not clear what is meant by the faculty member being able to “edit” these comments (on the surface it sounds inappropriate, but if the comments are a private communication, why would they ‘edit’ them?).
  • Under Attachment 1, the CONFIDENTIALITY section is ambiguous.  It implies that the evaluations must be confidential, but that is clearly not the case.  The balance of the text makes it clear that it is STUDENT IDENTITY that is to be confidential.  The first sentence needs to be reworded to say that.
  • While a number (many, most) faculty are opposed to online evaluations, I am not (provided the issues in subsequent bullets are addressed).  Indeed I have used them extensively (eCIS at UT), and I see online (electronic) evaluations as the ONLY mechanism to address extensively team-taught courses in terms of a timely evaluation of a faculty member when they complete their part of the course, rather than a jumble of evaluations at the end of the semester with the other faculty involved.  If this is online (electronic), there is no reason why it couldn’t be ‘turned on’ to meet this critical need.  Further, our program involves live teaching (by ITV) across four UT System components (UT Austin, UT El Paso, UT Pan American, and UT Health Science Ctr SA).  Handling four packets with mailings is exceedingly tedious, and these are frequently our most heavily team-taught courses.  Again, electronic, flexibly-scheduled evaluations are key.
  • My last concern is a big one.  One of the reasons we have low participation rates with the UT eCIS program is that faculty continue to be concerned as to whether student written comments are subject to open records.  If so, many equate the process to an ‘open access slam table’.  Written comments simply MUST be a part of the student feedback (most faculty consider them more important than the numeric scores), yet neither the comments nor their accessibility under open records is addressed.

My comments/Questions:
•What methodologies will the University System employ to analyze the data?
•Will departmental items, in addition to the five required questions, be included in the official results/data analysis for tenure and promotion?
•Who can access the results? (Only within the University System or the general public?)
•How will a “required” survey affect student responses/evaluations of faculty if they are forced to evaluate before receiving their grades?
•Will the survey include space for student comments?

Best Regards,

Nancy Kwallek, PhD, RID, IIDA, IDEC
Gene Edward Mikeska Endowed Chair for Interior Design
Director, Interior Design Program
School of Architecture
1 University Station B7500
Austin, Texas 78712

Additional comment from colleague:

“I very strongly disagree with changing our current evaluation system. The Regents need to butt out! This is absolutely not their role. I especially disagree with eliminating the two questions--about course and instructor. We should take a stand against any more interference from the system office or the board of regents!”


Additional comment from colleague:

“I have concerns about placing increased importance on student evaluations in determining faculty promotion and tenure. Good CIS results do not guarantee that someone is a good teacher. This will turn into a popularity contest and is less and less about course content and rigor.”
