Open Journal of Statistics
Vol.05 No.06(2015), Article ID:60521,15 pages
10.4236/ojs.2015.56058

A Gendered Study of Student Ratings of Instruction

Lucas Huebner, Rhonda C. Magel

Department of Statistics, North Dakota State University, Fargo, ND, USA

Email: Rhonda.magel@ndsu.edu

Copyright © 2015 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY).

http://creativecommons.org/licenses/by/4.0/

Received 6 August 2015; accepted 18 October 2015; published 23 October 2015

ABSTRACT

This research tests for differences in mean class averages between male and female faculty for questions on a student rating of instruction form at one university in the Midwest that is considered to be in the category of “very high research activity” by the Carnegie Commission on Higher Education. Differences in variances of class averages between male and female faculty are also examined. Tests are conducted first for all classes across the entire university and then for classes within the College of Science and Mathematics only. The proportion of classes taught by female instructors in which the average male student rating was higher than the average female student rating is compared to the corresponding proportion for classes taught by male instructors. Results are discussed.

Keywords:

Gendered Expectations, Means, Variances, Proportions

1. Introduction

Student ratings of instruction are often used by universities as the main way of evaluating the teaching effectiveness of a faculty member. This is particularly true at research universities [1]. Some studies have shown that there is gender bias in student ratings of instruction [2] [3]. Marcotte [4] discusses a small study conducted based on student ratings in online courses. When students thought the instructor was female, the instructor was rated lower in all 12 categories considered. This included the category of “caring and respectful”. Everything about the online course was identical in all cases except that students were told different genders for the instructor. One category in the study discussed by Marcotte [4] is “promptness” of the instructor in returning graded items. Students who thought the instructor was male gave the instructor an average rating of 4.35. Students who thought the instructor was female gave the instructor an average rating of 3.55.

In other research, Feldman [5] [6] and Lueck, Endres, & Caplan [7] found that female students rated female instructors higher and male students rated male instructors higher. Feldman [5] [6] found that this effect was further shaped by one’s perception of gender “roles” and was more pronounced among business and engineering students. Worthington [8], in particular, found this gender bias among finance majors. Basow [9] and Centra & Gaubatz [10] found that male instructors were rated similarly by both their male and female students. These studies found, however, that female students tended to rate female instructors higher overall. Male instructors tend to be rated higher on knowledge/scholarship questions as well as enthusiasm. Female instructors tend to be rated higher on “comfortable environment” [11]-[13].

The subject area that an instructor teaches in also has an effect on student ratings of instruction. Teachers in science and engineering get lower ratings than teachers in the humanities and social sciences. Female faculty fare even worse in the sciences, where both male and female students rated female instructors lower than male instructors [9] [11] [12].

The age of an instructor has also been found to affect ratings. Older instructors receive lower student ratings of instruction, and increasing age has a more negative effect on female instructors [14].

Sprague and Massoni [15] and Laube, Massoni, Sprague, and Ferber [16] found that students have gendered expectations of instructors. Students expect female instructors to be caring and nurturing. They also expect female instructors to be available more often than male instructors, in addition to not being as demanding or grading as harshly [12] [13] [15] [17] [18]. Students expect male instructors to be entertaining and energetic. Schmidt created a database of the words students used on the website “Rate My Professors” [19]. He found that patterns in the words used were associated with the gender of the instructor. The words “intelligent”, “genius”, and “smart” came up more often when evaluating male instructors across all disciplines. The words “bossy”, “nurturing”, and “strict” came up more often when evaluating female instructors across all disciplines [19]. Researchers found that female instructors who did not meet these “gendered” expectations were rated more harshly than male instructors who did not meet their gendered expectations [12] [13] [15] [17].

Studies have also shown that there is a significant positive correlation between the grade a student expects in a class and the same student’s teaching evaluation of the professor [20] -[23] . In classes that have a common final exam, it has been shown that there is a modest significant positive correlation between student performance on the exam and the corresponding evaluation by the student of the teaching effectiveness of the instructor. However, based on the results of another study considering classes with no common final, there is some evidence that a student may give a higher rating to an instructor because they are “grateful” for the grade they are expecting to receive, or may give a lower rating to an instructor because they are upset about the grade they are receiving [24] . This could result in grade inflation [24] . Recall that researchers have found that female faculty are expected to grade less harshly than male faculty and are rated lower by students if they do not meet this gendered expectation [12] [13] [15] [17] .

Basow and Martin [25] summarize much of the research that has been done concerning gendered expectations of students and the completion of student ratings of instruction for male and female faculty. They say that, in order to get comparable evaluations, female faculty must be more caring and nurturing, be available more often, and not be as demanding on tests and assignments as male faculty. Female faculty not displaying these “gendered” characteristics get lower ratings. Findings from a North Carolina State University study led by Lillian MacNell suggest that female faculty still have to work harder to get ratings similar to those of male faculty members [4].

2. Study Design

We would like to explore some of these gendered findings at an intensive research university in the Midwest to see if we find similar results. In particular, we would like to examine student ratings of instruction for the 2013-14 academic year at North Dakota State University. This university is located in Fargo, North Dakota. It is placed by the Carnegie Commission on Higher Education in the category of “Research University/Very High Research Activity” and is ranked by NSF as one of the top public and private universities in the country. As of 2014, the university had a total enrollment of 14,747 students, broken down into 12,124 undergraduate students, 340 professional students, and 2283 graduate students. Approximately 91% of the students were US citizens, 7% international, and 2% permanent residents, with 54% of the students being male and 46% being female. The University consists of 8 colleges (www.ndsu.edu). Institutional Review Board (IRB) approval was obtained.

At North Dakota State University (NDSU), all instructors are required to give students in their classes an opportunity to evaluate the instruction in the class. This evaluation must take place during the last three weeks of the semester. The student rating of instruction form used at NDSU currently consists of 16 questions. These questions are given in Figure 1. The first six questions on the form have been used during the past 10 years. The last ten questions were added to the form in 2013. The form also asks students to respond to the following demographic questions: 1) Their gender; 2) Their level (freshman, sophomore, junior, senior, graduate student); 3) Whether the course is elective or not; and 4) Their expected grade (A, B, C, D, F).

Figure 1. Student rating of instruction form.

Student response data for the 2013-14 academic year were collected from all classes taught in an in-class environment, excluding workshops, seminars, and independent study classes. Data were collected from a total of 2092 classes.

Three areas of research will be examined in regard to this student rating of instruction questionnaire. The first area to be examined is how the average student response to each of the questions is related to the class demographics. In particular, the demographics considered include the following: 1) The percentage of students in the class required to take the class for their major; 2) The percentage of males in the class; 3) The percentage of students expecting to receive either an A or B in the class; 4) The percentage of freshmen and sophomores in the class; and 5) The gender of the instructor. In this phase of the research, a total of 16 least squares regressions will be conducted, with the dependent variable for each regression being the average class response to one of the questions. The independent variables for each of the regression models will be the five demographic variables just mentioned. We would like to determine how much of the variation in class average responses to each question is explained by the class demographics. If this percentage is high, it indicates that the question is not measuring effective instruction, but rather something else. In this phase of the research, we would also like to investigate whether or not the gender variable is significant.
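As an illustration of this first phase, the following is a minimal sketch of one such regression in Python using statsmodels. The file name and the column names (q1_avg, pct_required, pct_male, pct_expect_AB, pct_freshsoph, instructor_female) are hypothetical placeholders for the class-level variables described above, not the variable names used in the study.

```python
# Sketch: OLS regression of one question's class-average rating on the five
# class demographic variables (file and column names are illustrative assumptions).
import pandas as pd
import statsmodels.formula.api as smf

ratings = pd.read_csv("class_averages.csv")  # hypothetical class-level data set

model = smf.ols(
    "q1_avg ~ pct_required + pct_male + pct_expect_AB + pct_freshsoph + instructor_female",
    data=ratings,
).fit()

print(model.rsquared)   # share of variation in class averages explained by the demographics
print(model.pvalues)    # significance of each demographic, including the gender indicator
```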

The second area of research is to compare the average class responses for each of the questions for female instructors with those for male instructors across all 8 colleges of the university. We will test whether there is a significant difference in the average class responses between male and female instructors for each question using two versions of a t-test: one assuming equal variances (pooled) and one not assuming equal variances (Satterthwaite). We are particularly interested in how the means compare between male and female instructors for the following questions: Question 5, “The fairness of procedures for grading this course”, and Question 10, “I understood how my grades were assigned in this course”, since research indicates that students expect female instructors to grade them less harshly [12] [13] [15] [17]; Question 12, “The instructor was available to assist students outside of class”, because research has shown that students expect female instructors to be available more often [12] [13] [15] [17]; and Question 13, “The instructor provided feedback in a timely manner”, since research has shown students rate male instructors higher in this category even if response time is the same [4]. The average class responses for each question will also be compared between male and female instructors within the College of Science and Mathematics. Research has shown that female instructors in the sciences tend to get lower ratings from students than male instructors do [9] [11] [12]. In addition to comparing the mean class average responses between males and females for the entire University and then for the College of Science and Mathematics, the variances in the average class responses between male and female instructors will also be compared for each question, first for the entire University and then for only the College of Science and Mathematics.
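A minimal sketch of these two t-test versions using scipy is shown below; the arrays female_avgs and male_avgs stand in for the class-average responses to a single question and contain illustrative values only, not the study's data.

```python
# Sketch: pooled and Satterthwaite (Welch) two-sample t-tests for one question.
import numpy as np
from scipy import stats

# Hypothetical class-average responses for one question, by instructor gender.
female_avgs = np.array([4.2, 3.9, 4.5, 4.1, 3.8])
male_avgs = np.array([4.0, 3.7, 4.4, 3.9, 4.3, 3.6])

t_pooled, p_pooled = stats.ttest_ind(female_avgs, male_avgs, equal_var=True)   # pooled t-test
t_welch, p_welch = stats.ttest_ind(female_avgs, male_avgs, equal_var=False)    # Satterthwaite
print(p_pooled, p_welch)
```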

In the last phase of the research, we will use only classes in which at least five male students and five female students responded. In this phase we will compare the proportion of classes taught by female instructors in which the average male student response is higher than the average female student response to the corresponding proportion of classes taught by male instructors. These proportions will be compared for each question. Our hypothesis is that the proportions will be higher for male instructors. Our sample consisted of 112 classes taught by female instructors and 162 classes taught by male instructors. The lower sample size was due to some students not responding to the demographic question about their gender and to the fact that we deleted any class from this portion of the study that did not have at least 5 male and 5 female students responding.
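One standard way to compare two such proportions is a one-sided two-proportion z-test; the paper does not name the exact test statistic used, so the sketch below is only an assumed illustration. The counts of classes in which the male average was higher are made-up placeholders; the class totals (112 and 162) come from the study.

```python
# Sketch: one-sided z-test comparing the proportion of female-taught classes in which
# the male student average was higher against the corresponding proportion for
# male-taught classes (counts are illustrative, not the study's values).
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

count = np.array([45, 90])     # classes where the male average was higher: female-, male-taught
nobs = np.array([112, 162])    # number of classes taught by female and male instructors
stat, pval = proportions_ztest(count, nobs, alternative="smaller")  # H1: female proportion lower
print(stat, pval)
```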

3. Results

Recall that in the first phase of the research, 16 ordinary least squares regressions were to be conducted, with the 16 dependent variables being the average class responses to each of the 16 questions. The independent variables in the model were the five class demographic variables collected: 1) The percentage of students in the class required to take the class for their major; 2) The percentage of males in the class; 3) The percentage of students expecting to receive either an A or B in the class; 4) The percentage of freshmen and sophomores in the class; and 5) The gender of the instructor. For the majority of the questions, the demographics explained between 15% and 28% of the variation in responses. There were four questions that were the exception to this. Fifty-nine percent of the variation in class average responses to question 11, “I met or exceeded the course objectives given for this course”, was explained by the demographics. Forty-nine percent of the variation in class average responses to question 6, “Your understanding of the course content”; 39% of the variation in class average responses to question 5, “The fairness of procedures for grading this course”; and 32% of the variation in class average responses to question 4, “The quality of this course”, were explained by the demographics. For all questions, the percentage of students expecting to receive an A or a B was highly significant in explaining the variation in class average responses. For questions 7, 12, 15, and 16, the percentage of males in the class was significant at alpha equal to 0.05, and for question 8, the percentage of males in the class was significant at 0.10. In all of these cases, an increase of 10% males in the class corresponded with an average increase of approximately 0.04 in the instructor rating for that question. The only other demographic variable found to be significant for any question was the percentage of students taking the class because it was required for their major. This demographic variable was only significant (alpha = 0.05) for question 6. In this case, the class average response decreased as the proportion of students for whom the course was required for their major increased. The first variable to enter any of the models, and the most significant variable, was the proportion of students in the class expecting to receive either an A or a B grade. The indicator variable for gender of the instructor was not significant for any of these models once the proportion of students expecting an A or a B was already in the model.
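The statement that the gender indicator was not significant once the expected-grade variable was in the model can be checked with a nested-model comparison; a minimal sketch of such a partial F-test is given below, again with hypothetical file and column names rather than the study's own.

```python
# Sketch: partial F-test for the instructor-gender indicator after the expected-grade
# variable and the other demographics are already in the model (question 11 shown).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

ratings = pd.read_csv("class_averages.csv")  # hypothetical class-level data set

reduced = smf.ols(
    "q11_avg ~ pct_expect_AB + pct_required + pct_male + pct_freshsoph", data=ratings
).fit()
full = smf.ols(
    "q11_avg ~ pct_expect_AB + pct_required + pct_male + pct_freshsoph + instructor_female",
    data=ratings,
).fit()

print(anova_lm(reduced, full))  # F-test for adding the gender indicator to the model
```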

If this set of questions were used to evaluate effective teaching, one may consider dropping questions 6 and 11 since about 50% or more of the variation in class average responses is explained by the class demographics. These questions are not really evaluating effective teaching.

We next tested whether there was a difference in the mean class average responses for each of the 16 questions between classes taught by female instructors and classes taught by male instructors. The sample means for male and female instructors for each of the questions are given in Table 1.

When the classes across all 8 colleges of the university were considered, a significant difference was found in the mean class average responses between male and female instructors for questions 6 and 11 at alpha equal to 0.05 and for question 15 at alpha equal to 0.10 (p-value = 0.0548). In all these cases, the mean response for female instructors was higher. Female instructors were rated higher on the student’s understanding of the course content, the student’s meeting or exceeding course objectives, and the instructor setting and maintaining high standards. It is interesting to note that when a regression analysis was conducted with the class average response to question 11 as the dependent variable, gender of the instructor was not significant when the percentage of students who expected to get an A or a B was in the model. This was also true for questions 6 and 15. Students in female instructors’ classes were expecting more A’s and B’s than students in male instructors’ classes.

Female instructors were not rated significantly lower on average for Question 5, “The fairness of grading procedures”, but the sample mean of the class averages for female instructors was lower (p-value = 0.1242). Female instructors were not rated significantly lower on average for Question 10, “I understood how my grades were assigned”, but the sample mean of the class averages for female instructors was slightly lower for this question (p-value = 0.6359). There was also no significant difference in means for Questions 12 and 13, about the availability of the instructor and providing feedback in a timely manner. The sample means for male and female instructors were very close to each other, with the sample means for females being only very slightly higher (p-values = 0.6848 and 0.8016, respectively).

Since research has shown that female faculty are rated lower in the science field [9] [11] [12], mean class average responses between male and female instructors were compared in the College of Science and Mathematics. These are given in Table 2. A significant difference at alpha equal to 0.05 was found between the mean responses for questions 3, 5, and 7. A marginally significant difference was found between the mean responses for question 1 (p-value = 0.1004) and question 2 (p-value = 0.0824). Of the three significant questions, the mean response associated with female faculty was higher for all except question 5. Female instructors were rated higher on communication ability and on creating an atmosphere conducive to learning. Research has shown that female faculty tend to be rated higher on “comfortable environment” [11]-[13]; it is interesting to note that we did not find a significant difference for this question over the whole University, but did within the College of Science and Mathematics. Females were rated marginally higher on the student’s satisfaction with the instruction in the course and on the instructor as a teacher. Question 5 had students rating the fairness of procedures for grading this course.

Table 1. Mean gender results for Questions 1 - 16, all colleges. (a) Mean gender results Question 1; (b) Mean gender results Question 2; (c) Mean gender results Question 3; (d) Mean gender results Question 4; (e) Mean gender results Question 5; (f) Mean gender results Question 6; (g) Mean gender results Question 7; (h) Mean gender results Question 8; (i) Mean gender results Question 9; (j) Mean gender results Question 10; (k) Mean gender results Question 11; (l) Mean gender results Question 12; (m) Mean gender results Question 13; (n) Mean gender results Question 14; (o) Mean gender results Question 15; (p) Mean gender results Question 16.

Notes: (a) p-value = 0.9570 when testing equality of variances. (b) p-value = 0.9982 when testing equality of variances. (c) p-value = 0.2405 when testing equality of variances. (d) p-value = 0.9505 when testing equality of variances. (e) p-value = 0.9505 when testing equality of variances (females higher). (f) p-value = 0.4327 when testing equality of variances. (g) p-value = 0.6370 when testing equality of variances. (h) p-value = 0.6665 when testing equality of variances. (i) p-value = 0.3768 when testing equality of variances. (j) p-value = 0.1489 when testing equality of variances. (k) p-value = 0.0906 when testing equality of variances. (l) p-value = 0.1643 when testing equality of variances. (m) p-value = 0.4901 when testing equality of variances. (n) p-value = 0.9489 when testing equality of variances. (o) p-value = 0.0168 when testing equality of variances (males higher). (p) p-value = 0.3904 when testing equality of variances.

Table 2. Mean gender results for Questions 1 - 16, College of Science and Mathematics. (a) Mean gender results Question 1; (b) Mean gender results Question 2; (c) Mean gender results Question 3; (d) Mean gender results Question 4; (e) Mean gender results Question 5; (f) Mean gender results Question 6; (g) Mean gender results Question 7; (h) Mean gender results Question 8; (i) Mean gender results Question 9; (j) Mean gender results Question 10; (k) Mean gender results Question 11; (l) Mean gender results Question 12; (m) Mean gender results Question 13; (n) Mean gender results Question 14; (o) Mean gender results Question 15; (p) Mean gender results Question 16.

Notes: (a) p-value = 0.0450 when testing equality of variances (males higher). (b) p-value = 0.0082 when testing equality of variances (males higher). (c) p-value = 0.0186 when testing equality of variances (males higher). (d) p-value = 0.5476 when testing equality of variances. (e) p-value = 0.0104 when testing equality of variances (females higher). (f) p-value = 0.5042 when testing equality of variances. (g) p-value = 0.00084 when testing equality of variances (males higher). (h) p-value = 0.0002 when testing equality of variances (males higher). (i) p-value = 0.0018 when testing equality of variances (males higher). (j) p-value = 0.1588 when testing equality of variances. (k) p-value = 0.0304 when testing equality of variances (males higher). (l) p-value = 0.0019 when testing equality of variances (males higher). (m) p-value = 0.0974 when testing equality of variances (males higher). (n) p-value = 0.0198 when testing equality of variances (males higher). (o) p-value = 0.0032 when testing equality of variances (males higher). (p) p-value = 0.0268 when testing equality of variances (males higher).

Research has shown that students expect female faculty not to be as harsh in grading, and that students give female faculty lower evaluations on grading if they are harsh [12] [13] [15] [17] [18]. For question 10, “I understood how my grades were assigned in this course”, the sample mean of class average responses was slightly lower for female faculty, although not significantly so (p-value = 0.2587). The class average responses for questions 12 and 13, on availability and providing feedback in a timely manner, were actually slightly higher for female faculty, but not significantly so (p-values = 0.6848 and 0.8016, respectively). Recall that research suggests that students expect female faculty to be available more often than male faculty [12] [13] [15] [17] [18]. Research has also shown that males are rated higher in promptness even when taking the same amount of time to return assignments as female faculty [4].

We also considered the differences in variation of class average responses for each question between male and female instructors over the entire University (All Colleges). A significant difference in variability between class average responses for classes taught by male and female instructors was found for questions 5 and 15 at alpha equal to 0.05. A marginally significant difference was found for question 11 (p-value = 0.0906). The class average responses for question 5 had a larger variability for female faculty. This question was on rating the fairness of grading procedures. It is noted again that research has shown students expect female faculty to grade less harshly [12] [13] [15] [17] [18]. This could account for the larger variability among female faculty. The variances of the responses for male faculty were higher in the other two cases. The p-values for the tests of variances are given in Table 1. The difference in variability of the class average responses for male and female faculty was also tested within the College of Science and Mathematics. A significant difference in variability between class average responses for classes taught by male and female instructors was found for questions 1, 2, 3, 5, 7, 8, 9, 11, 12, 14, 15, and 16 at alpha equal to 0.05. A marginally significant difference in variability was found for question 13 (p-value = 0.0974). In all cases, except for question 5, the class average responses were found to be less variable for classes taught by female instructors than for classes taught by male instructors (Table 2).
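The paper does not name the specific test used for equality of variances; one common choice (for example, the folded F test reported by SAS PROC TTEST) is sketched below under that assumption, with illustrative values rather than the study's data.

```python
# Sketch: folded F test for equality of variances between class averages for
# female- and male-taught classes (an assumed choice of test, not stated in the paper).
import numpy as np
from scipy import stats

def folded_f_test(x, y):
    """Two-sided F test with the larger sample variance in the numerator."""
    vx, vy = np.var(x, ddof=1), np.var(y, ddof=1)
    if vx >= vy:
        f, dfn, dfd = vx / vy, len(x) - 1, len(y) - 1
    else:
        f, dfn, dfd = vy / vx, len(y) - 1, len(x) - 1
    p = min(2 * stats.f.sf(f, dfn, dfd), 1.0)  # two-sided p-value, capped at 1
    return f, p

female_avgs = np.array([4.2, 3.9, 4.5, 4.1, 3.8])   # illustrative values only
male_avgs = np.array([4.0, 3.7, 4.4, 3.9, 4.3, 3.6])
print(folded_f_test(female_avgs, male_avgs))
```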

In the last phase of our research, we considered only classes in which at least 5 male students and 5 female students responded. Other research has found that male students tend to rate male instructors higher and female students tend to rate female instructors higher [5]-[7]. We wanted to test whether the proportion of classes taught by female instructors in which the male student response was higher was significantly lower than the proportion of classes taught by male instructors in which the male student response was higher. The sample proportion of classes taught by female instructors in which the male response was higher (Proportion 1) and the sample proportion of classes taught by male instructors in which the male response was higher (Proportion 2) were calculated. These are given in Table 3.

Table 3. Proportion of classes in which male student response is higher.

In all cases, the sample proportion for female instructors was lower than the sample proportion for male instructors. There were 10 questions for which the proportion was significantly lower for female instructors at alpha equal to 0.05. These were questions 1, 4, 5, 6, 7, 8, 9, 11, 14, and 16. There were 2 additional questions, questions 3 and 12, for which the proportion was significantly lower for female instructors at alpha equal to 0.10; the proportions were not significantly different at alpha equal to 0.10 for question 10, but the p-value was 0.102. This is similar to the research of Feldman [5], who did find a gendered interaction in student ratings, as well as the research of Bachen et al. [11], who found that female students tended to rate female instructors higher. It is interesting to note that all of the sample values of Proportion 1 are less than 0.50, except for the one associated with question 3. Question 3 was the only question for which the sample proportion of classes taught by female instructors in which the male class average response was higher, 0.5089, exceeded the sample proportion of classes taught by female instructors in which the female class average response was higher, 0.4911. Question 13 was the only question for which the sample proportion of classes taught by male instructors in which the male class average response was higher, 0.4877, was less than the sample proportion of classes taught by male instructors in which the female response was higher, 0.5123.

4. Conclusions

Our research did not find the gender indicator variable to be significant when the proportion of students expecting A’s and B’s was in the model (All Colleges). We did find significant differences for questions 6, 11, and 15, with the student responses associated with female faculty being higher (understanding of course content, met or exceeded course objectives, instructor set and maintained high standards); this is when the proportion of students expecting A’s and B’s is not controlled for in the model. It does appear that a higher proportion of students taking courses from female faculty are expecting A’s or B’s. We also compared the variances of class average responses for male and female faculty across all colleges. The variances were significantly or marginally significantly different for 3 of the 16 questions. The variance of class average responses was significantly higher for female faculty on question 5, the “fairness of procedures for grading”. For the other two questions, the variance of class averages for male faculty was higher.

When considering only the College of Science and Mathematics, we did not find that female faculty were rated lower on average. We found female faculty to be associated with significantly higher ratings on communication ability and on creating an atmosphere conducive to learning (other research has found the latter), and with marginally higher ratings on student satisfaction with the instruction and on the instructor as a teacher. We did find that female faculty were rated significantly lower on the fairness of grading procedures. We also found that the variances of the class average responses for male and female faculty were significantly or marginally significantly different on 13 of the 16 questions. In all of these cases, except for question 5 asking about the fairness of grading procedures, the variances were higher for male faculty class averages.

Cite this paper

Lucas Huebner, Rhonda C. Magel (2015) A Gendered Study of Student Ratings of Instruction. Open Journal of Statistics, 05, 552-567. doi: 10.4236/ojs.2015.56058

References

  1. Read, W., Rama, D.V. and Raghunandan, K. (2001) The Relationship between Student Evaluations of Teaching and Faculty Evaluations. Journal of Education for Business, 76, 189-192.

  2. Arbuckle, J. and Williams, B.D. (2003) Students’ Perceptions of Expressiveness: Age and Gender Effects on Teacher Evaluations. Sex Roles, 49, 507-516.

  3. Brady, K.L. and Eisler, R.M. (1999) Sex and Gender in the College Classroom: A Quantitative Analysis of Faculty-Student Interactions and Perceptions. Journal of Educational Psychology, 91, 127-145.

  4. Marcotte, A. (2014) Best Way for Professor to Get Good Student Evaluations? Be Male. XXfactor, 9 December 2014.

  5. Feldman, K. (1993) College Students’ Views of Male and Female College Teachers: Part II-Evidence from Students’ Evaluations of Their Classroom Teachers. Research in Higher Education, 34, 151-211.

  6. Feldman, K. (1992) College Students’ Views of Male and Female College Teachers: Part I-Evidence from the Social Laboratory and Experiments. Research in Higher Education, 33, 317-373.

  7. Lueck, T.L., Endres, K.L. and Caplan, R.E. (1993) The Interaction Effects of Gender on Teaching Evaluations. Journalism Education, 48, 46-54.

  8. Worthington, A. (2002) The Impact of Student Perceptions and Characteristics on Teaching Evaluations: A Case Study in Finance Education. Assessment & Evaluation in Higher Education, 27, 49-64.

  9. Basow, S.A. (1995) Student Evaluations of College Professors: When Gender Matters. Journal of Educational Psychology, 87, 656-665.

  10. Centra, J.A. and Gaubatz, N.B. (2000) Is There Gender Bias in Student Evaluations of Teaching? Journal of Higher Education, 71, 17-33.

  11. Bachen, C.M., McLoughlin, M.M. and Garcia, S.S. (1999) Assessing the Role of Gender in College Students’ Evaluations of Faculty. Communication Education, 48, 193-210.
      http://dx.doi.org/10.1080/03634529909379169

  12. Basow, S.A. and Montgomery, S. (2005) Student Evaluations of Professors and Professor Self-Ratings: Gender and Divisional Patterns. Journal of Personnel Evaluation in Education, 18, 91-106.
      http://dx.doi.org/10.1007/s11092-006-9001-8

  13. Bennett, S.K. (1982) Student Perceptions of and Expectations for Male and Female Instructors: Evidence Relating to the Question of Gender Bias in Teaching Evaluation. Journal of Educational Psychology, 74, 170-179.
      http://dx.doi.org/10.1037/0022-0663.74.2.170

  14. Wilson, J.H., Beyer, D. and Monteiro, H. (2014) Professor Age Affects Student Ratings: Halo Effect for Younger Teachers. College Teaching, 62, 20-24.
      http://dx.doi.org/10.1080/87567555.2013.825574

  15. Sprague, J. and Massoni, K. (2005) Student Evaluations and Gendered Expectations: What We Can’t Count Can Hurt Us. Sex Roles, 53, 779-793.
      http://dx.doi.org/10.1007/s11199-005-8292-4

  16. Laube, H., Massoni, K., Sprague, J. and Ferber, A.L. (2007) The Impact of Gender on the Evaluation of Teaching: What We Know and What We Can Do. NWSA Journal, 19, 87-104.

  17. Sinclair, L. and Kunda, Z. (2000) Motivated Stereotyping of Women: She’s Fine If She Praised Me but Incompetent If She Criticized Me. Personality & Social Psychology Bulletin, 26, 1329-1342.
      http://dx.doi.org/10.1177/0146167200263002

  18. Burns-Glover, A. and Veith, D. (1995) Revising Gender and Teaching Evaluations: Sex Still Makes a Difference. Journal of Social Behavior and Personality, 10, 69-80.

  19. Jaschik, S. (2015) Rate My Word Choice. Inside Higher Ed.
      https://www.insidehighered.com/news/2015/02/09/new-analysis-rate-my-professors-finds-patterns-words-used-describe-men-and-women

  20. Felton, J. (2008) Attractiveness, Easiness and Other Issues: Student Evaluations of Professors on Ratemyprofessors.com. Assessment and Evaluation in Higher Education, 33, 151-211.
      http://dx.doi.org/10.1080/02602930601122803

  21. Greenwald, A.G. and Gillmore, G.M. (1997) Grading Leniency Is a Removable Contaminant of Student Ratings. American Psychologist, 52, 1209-1217.
      http://dx.doi.org/10.1037/0003-066X.52.11.1209

  22. Millea, M. and Grimes, P.W. (2002) Grade Expectations and Student Evaluation of Teaching. College Student Journal, 36, 582-590.

  23. Culver, S. (2010) Course Grades, Quality of Student Engagement, and Students’ Evaluation of Instructor. International Journal of Teaching and Learning in Higher Education, 22, 331-336.

  24. Eizler, C.F. (2002) College Students’ Evaluation of Teaching and Grade Inflation. Research in Higher Education, 43, 483-501.
      http://dx.doi.org/10.1023/A:1015579817194

  25. Basow, S.A. and Martin, J.L. (2012) Bias in Student Evaluations. In: Kite, M.E., Ed., Effective Evaluation of Teaching: A Guide for Faculty and Administrators, Society for the Teaching of Psychology, Washington DC, 40-49.