The Review Process and the Discipline

One of the things I have enjoyed about being editor of Political Behavior has been the ability to think about the editorial and peer review process.  As editor, I have initiated several policies to increase data accessibility and research transparency, to promote articles on social media, and in other areas.  I have also used the reviewer database from Political Behavior to empirically tackle two issues in the review process.  The more serious of the two explores whether there are differences in the outcome of the review process based on the gender of the authors of submissions.  This is part of a broader effort by a group of editors to answer this question.

In a completely different project, we conducted an experiment on gender bias in Student Evaluations of Teaching (SETs). In four classes, we randomly assigned students to receive either the standard SET instrument or an instrument that contained anti-bias language: “Student evaluations of teaching play an important role in the review of faculty. Your opinions influence the review of instructors that takes place every year. Iowa State University recognizes that student evaluations of teaching are often influenced by students’ unconscious and unintentional biases about the race and gender of the instructor. Women and instructors of color are systematically rated lower in their teaching evaluations than white men, even when there are no actual differences in the instruction or in what students have learned. As you fill out the course evaluation please keep this in mind and make an effort to resist stereotypes about professors. Focus on your opinions about the content of the course (the assignments, the textbook, the in-class material) and not unrelated matters (the instructor’s appearance).” In the first paper, we show that this language increased the SETs for female faculty while having no effect on the SETs for male faculty. We are working on a second paper analyzing the responses to the open-ended questions on the SETs.

A substantially less serious piece, "Dear Reviewer 2: Go F’ Yourself," published in Social Science Quarterly, asks whether Reviewer #2 is actually the horrible person the broader research community believes.  The data suggest that the reviewer most likely to be a negative outlier in the review process is actually Reviewer #3.