Basic Article Information

  • Title: Gender Bias Impacts Top-Merited Candidates
  • Authors: Emma Rachel Andersson ; Carolina E. Hagberg ; Sara Hägg
  • Journal: Frontiers in Research Metrics and Analytics
  • Electronic ISSN: 2504-0537
  • Year: 2021
  • Volume: 6
  • DOI: 10.3389/frma.2021.594424
  • Language: English
  • Publisher: Frontiers Media S.A.
  • Abstract: Expectations of fair competition underlie the assumption that academia is a meritocracy. However, bias may reinforce gender inequality in peer review processes, unfairly eliminating outstanding individuals. Here, we ask whether applicant gender biases peer review in a country top ranked for gender equality. We analyzed peer review assessments for recruitment grants at a Swedish medical university, Karolinska Institutet (KI), during four consecutive years (2014–2017) for Assistant Professor (n = 207) and Senior Researcher (n = 153) positions. We derived a composite bibliometric score to quantify applicant productivity and compared this score with subjective external (non-KI) peer reviewer scores of applicants' merits to test their association for men and women separately. To determine whether there was gender segregation in research fields, we analyzed publication list MeSH terms for men and women and analyzed their overlap. There was no gendered MeSH topic segregation, yet men and women with equal merits were scored unequally by reviewers. Men receive external reviewer scores that show stronger associations (steeper slopes) between computed productivity and subjective external reviewer scores, meaning that peer reviewers "reward" men's productivity with proportional merit scores. However, women applying for assistant professor or senior researcher receive only 32% or 92%, respectively, of the score men receive for each additional composite bibliometric score point. As productivity increases, the differences in merit scores between men and women increase. Accumulating gender bias is thus quantifiable and impacts the highest tier of competition, the pool from which successful candidates are ultimately chosen. Track record can be computed, and granting organizations could therefore implement a computed track record as quality control to assess whether bias affects reviewer assessments.
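    The abstract describes comparing the slope of reviewer merit score on computed productivity for men and women separately. The sketch below illustrates that idea only; it is not the authors' code, and all data, variable names, and the fitting choice (ordinary least squares via numpy) are hypothetical assumptions for illustration.

    ```python
    # Minimal sketch (not the study's actual analysis): fit separate OLS lines for
    # men and women relating a composite bibliometric score to the external
    # reviewer merit score, then compare slopes. All numbers are placeholders.
    import numpy as np

    def fit_slope(productivity, reviewer_score):
        """Return the OLS slope of reviewer score regressed on productivity."""
        slope, _intercept = np.polyfit(productivity, reviewer_score, deg=1)
        return slope

    # Hypothetical applicant data (composite bibliometric score, reviewer score).
    men_productivity = np.array([2.1, 3.4, 4.0, 5.2, 6.8])
    men_score = np.array([3.0, 4.1, 4.6, 5.5, 6.9])
    women_productivity = np.array([2.0, 3.5, 4.1, 5.3, 6.7])
    women_score = np.array([2.9, 3.5, 3.8, 4.2, 4.8])

    slope_men = fit_slope(men_productivity, men_score)
    slope_women = fit_slope(women_productivity, women_score)

    # The abstract's "32% or 92%" figures correspond to this kind of slope ratio.
    print(f"Women receive {slope_women / slope_men:.0%} of the per-point reward men receive.")
    ```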