Google Scholar in Political Science: Blessing or Curse?

Michael Chwe, Mala Htun, Francesca R. Jensenius, Adria Lawrence, David J. Samuels, David A. Singer

June 7, 2017

When political scientists sit on hiring committees, evaluate candidates for tenure and promotion, and write letters of recommendation, we are often called on to assess a scholar's impact. We provide our subjective assessment of the candidate's work, and compare the candidate against her or his peers. Recently, political scientists have turned to using Google Scholar (GS) and other citation count tools to make these sensitive and critical assessments. People often mention their GS citation count on their CVs, and some departments require faculty to report the count in their tenure files. GS is readily available, free, and requires no registration or subscription. Writers of tenure letters, who up to the early 2000s would grapple with a scholar's contributions using mostly qualitative assessments, now routinely pepper their appraisals with Google citation counts. Many political scientists have chosen to set up GS profile pages, which allow users to find all of a scholar's published (and often unpublished) work, along with citation counts for each work and summary statistics including total citation counts by year. Some see citation counts as more objective than an individual scholar's opinion. Yet no objective metric of impact exists for articles, books, journals, or individual scholars. Every metric contains built-in biases. Despite the rise of GS as the discipline's de facto standard for scholarly assessment, there has been little discussion about what, exactly, it measures, or what its biases and pitfalls are. The absence of discussion runs the risk of letting pervasive but potentially biased practices hide behind the pretense of objectivity. Since political scientists normally go to great lengths to find unbiased measures and advance careful empirical arguments, it is especially odd that we are uncritically using a measure with so many obvious flaws to evaluate ourselves.
The purpose of this article is to trigger a discipline-wide discussion about the benefits and the pitfalls of using GS to evaluate scholarly impact. We first summarize how GS works, and then discuss how it can be used thoughtfully in academic evaluations. On these issues, the authors of this article are not in complete agreement. We highlight both the advantages of GS, including its citation counts and its platform for disseminating scholarship and facilitating networking, and also its disadvantages and biases. On this latter point, we are most concerned about gender bias in citations and undercounting of certain forms of scholarship. Some of the advantages and disadvantages pertain to GS in particular, while others are inherent in the use of any citation count metric as an evaluative tool. We conclude by summarizing our assessments.

The Rise of Google Scholar and How it Works

Google Scholar (www.scholar.google.com) (GS) was created in 2004 as a search engine for academics. It works in a similar way to Google's general search engine, generating results from searches based on the strength of the link between the search terms and how often and how recently a work has been cited. GS indexes virtually everything available on the web in any language, including journal articles, academic books

and book chapters, and non-peer-reviewed material such as conference papers, working papers, theses, and dissertations.[1] In addition to providing citation counts, GS ranks the top 20 journals in a discipline. At the top of the GS home page, the "Metrics" button provides links to rankings by discipline and language, using an H5 index: the number X of articles that have at least X citations in the last five years. Inexplicably, GS does not include international relations journals in its "top publications" political science list. However, one can search for any journal by title (e.g., International Organization), and the result provides an H5 number, allowing comparison against journals that GS does include in its political science list. And finally, since 2012 Google has allowed scholars to create editable GS profile pages. After a page is set up, GS automatically populates it with links to materials the author has written that are archived online, and it indicates how many times each item has been cited. The link on the number of citations will take users to the items citing the work.

Advantages of Google Scholar

The advantages of using GS to measure scholarly impact stem primarily from how easy it is to use. When people have a GS account, anyone with access to the internet can quickly get an overview of scholars' publications, rank-ordered by the number of citations of each publication. It is convenient to click on the hyperlinks of each publication to view abstracts and gain access to articles that are publicly available, or available through an institutional account. Even articles that are stuck behind paywalls become more accessible, as open-source versions of the articles are regularly harvested from other websites. This ease of access has many benefits, which can be organized under three main headings: incentives for quality and visibility, academic coordination and open access, and consistency in research evaluation.

Incentives for quality and visibility.
Academics are under never-ending pressure to publish. In the US, the tenure system puts a constant strain on (particularly junior) faculty to publish or perish. In the UK, this pressure has been institutionalized at a more aggregate level through the Research Excellence Framework (REF) system, which makes public funding to universities contingent on publications and impact.[2] Similar arrangements exist in other European countries. The European Research Council, for example, asks all grant applicants to provide information about their track record, including number of publications and citations. As a result, scholars continually face a trade-off between quantity and quality in their work. An increasing focus on citations and impact through the GS citation scores, rather than just the publication count, may provide incentives for emphasizing quality over quantity. GS also nudges researchers to make their research more readily available. The links to open-source versions of work on the GS screen may serve as a reminder of the

[1] See "About Google Scholar," at https://scholar.google.com/intl/en/scholar/about.html, accessed April 7, 2017.
[2] See http://www.hefce.ac.uk/rsrch/funding/

importance of making work available to the public. This may motivate scholars to choose open-access publishing or post copies of their work on their personal websites. Finally, the tantalizing image of the ever-growing citation count may inspire authors to publicize their published work more, for example by using social media to announce a publication along with a summary.

Academic coordination and open access to research.

GS may facilitate the dissemination of ideas and intellectual networking. A simple Google search on a keyword can guide a reader to a scholar they did not know, and the reader can easily gain access to the scholar's other work through their GS profile. The citation count again is useful as a quick way to see which of the author's articles and books are most popular. The automatic email notifications provided by GS can also encourage intellectual networking. Interested readers can receive automatic emails when scholars post new work, allowing them to keep up with the newest research of a broad range of people. Conversely, scholars can get notified when somebody cites them, often at the working-paper stage. GS also contributes to the ongoing open science revolution. Since most of the students in the world are locked out of research that is behind journal paywalls, GS can help to make published research available to a much larger audience.

Consistency in research evaluation.

The GS profile, with its full list of academic production, citation counts, and h-index, provides straightforward measures of scholarly quality and impact: variables that are of great interest for employers and funding agencies. Going by typical standards of measurement, they are highly reliable (anyone looking them up would get the same value) and consistent (the same coding scheme applies to all scholars). Such objective measures may provide evidence against the unfounded feeling that one candidate is better than another for a job.
Given the prevalence of implicit biases against women and minorities, they can serve as hard facts to challenge unfounded perceptions and prejudices. The citation count can also make it easier for scholars to get credit for books and articles that have not been published in the most famous outlets, but still may be of intellectual importance. The citation count can provide evaluators with information about how the work has affected the field that is independent of the academic standing of the publishing outlet. There is something intuitively appealing about indicators that can be applied across all applications or candidates. Used wisely, they can provide evaluators with a new piece of information that, together with other means of evaluation, can serve to improve their judgement. The danger is that the consistency and reliability of these measures may come at the cost of their validity. What are such measures actually capturing?

Disadvantages of Google Scholar

What are the potential drawbacks to using GS to assess scholarly impact? The first issue is one of data validity. Google does not disclose its GS algorithm (or its primary search algorithm). GS appears to vacuum up virtually everything on the web without any quality control. One consequence of the GS algorithm may be over-counting. Samuels (2011) found that one of his journal articles had 80 citations according to GS, but six of them were duplicate entries and 52 were unpublished works. Over-counting is likely the result of multiple online versions of an article (e.g., a working paper version, a conference submission, and a final published version) as well as subtle variations in bibliographic format and content.
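The over-counting mechanism is easy to make concrete. The sketch below is our own illustration, not GS's actual (undisclosed) method: a naive exact-match key collapses trivially different bibliographic strings into one work, but a reworded working-paper title slips through and would be counted as a separate cited item.

```python
import re

def biblio_key(title):
    # Crude normalization: lowercase, keep letters and digits only.
    return re.sub(r"[^a-z0-9]", "", title.lower())

versions = [
    "The Gender Citation Gap in International Relations",
    "The gender citation gap in international relations.",   # case/punctuation variant: caught
    "Gender Citation Gap in Intl Relations (working paper)", # reworded version: missed
]

unique = {biblio_key(t) for t in versions}
print(f"{len(versions)} records collapse to {len(unique)} distinct works")
```

Any real deduplication pipeline has to trade off exactly this: a stricter key merges duplicates but also merges distinct works, while a looser key leaves variants of the same paper counted separately.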

However, GS may also under-count. Samuels also found that 5 of the 80 citations to his article in GS were to books. However, according to the Google Books database (a separate database not linked to Google Scholar) the article was cited in 24 books. Undercounting of citations in books may decline as more material is entered into the GS database over time. (Samuels found no difference in book citations to that same article in 2017, while a difference did appear in 2010.) But since the methodology is not public, self-correction over time cannot be assumed. To the extent that GS still undercounts citations that books receive, or citations of articles in books, GS may understate some scholars' impact, especially scholars who publish more in books than in articles (Samuels 2013).

Even if citation counts were perfectly accurate, we must take into account the sociological and political process of citing. How do scholars choose which works to cite and which works to omit? Most of us tend to think that good citation practices involve acknowledging prior work that helped generate ideas and explaining how a study's claims fit into a literature. However, recent studies show that these common-sense citation practices are actually loaded with biases against women, scholars in smaller research communities, solo authors, early career scholars, and even pioneering scholars with bold ideas. Moreover, many people cite works for strategic reasons, not principled reasons, which reproduces hierarchies of power in the discipline. For instance, Maliniak, Powers, and Walter (2013) analyzed over two decades of publications in international relations (IR) and found that, controlling for a variety of factors such as publication venue, methodology, and tenure status, an article written by a woman receives 80 percent as many citations as a similar article written by a man.
Women are less likely to be cited by the most influential articles, and men cite themselves more than women, possibly because women experience penalties for self-promotion (Moss-Racusin and Rudman 2010). Similarly, Colgan (2017) found that male IR instructors are less likely than female instructors to assign work by female scholars, while women are also less likely than men to assign their own work to their students. Importantly, this bias can arise even if scholars cite or assign the work that is most relevant to them. To the extent that women are underrepresented in faculty positions, conferences, and publications to begin with, scholars are less likely to be aware of, and therefore cite, women's research. Although there have been few systematic studies in political science to assess whether other underrepresented scholars are subject to similar kinds of biases in citation counts, the existence of such biases is highly plausible.

In addition, scholars in larger research communities have an advantage over scholars in smaller fields of study: they have a larger pool of people who can cite them. A paper cited in 16 out of 100 articles published in a given year on the US Congress probably has less "impact" than a paper cited in 8 of the 10 articles published that year on Pakistan. Scholars in well-tilled fields thus tend to have larger citation counts even if their work produces incremental improvements rather than novel insights in small but important or growing fields of inquiry.

People who co-author can more easily generate citations than solo authors. Citation counts are not divided by the number of co-authors: if a paper with five authors is cited once, each of the five authors receives one citation, not 0.2 citations. Higher citation counts for scholars who co-author can further exacerbate other biases. For example, Teele and

Thelen (2017) demonstrate that most collaborative work in ten of the most prominent political science journals is authored by all-male teams.

Citation counts are especially problematic for scholars early in their careers, because the number of citations that an article or book receives in the five years or so after publication says little about its long-term impact. Wang, Song, and Barabasi (2013) looked at a sample of physics papers and found that having 50 citations in the first five years after publication was not associated with more citations after 20 years. In fact, papers with the most citations in 30 years tended to have relatively few citations early on. Stephan, Veugelers, and Wang (2017) examined 660,000 research articles published in 2001 in the Web of Science database, and found that highly original papers were less likely to be highly cited within three years of publication, but more likely to be highly cited three or more years after publication. In the near term, the original papers suffered a disadvantage compared with less novel, incremental work. John Nash's foundational paper defining Nash equilibrium provides an illustration: this paper had only 16 citations in the first five years after publication, according to Google Scholar. It is problematic that citation counts are most consequential when they are least accurate: early in a scholar's career, when hiring and promotion decisions are made. Yet papers and books that are cited many years after publication are arguably more important than papers and books that are cited only soon after publication. Basing promotion decisions on citations within five years of publication biases toward competence but away from lasting significance, as the example of John Nash's paper shows. Most papers and books have very few citations, making it difficult to predict which works will be important 20 years from now using any metric.
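The counting asymmetries described above are simple arithmetic. The sketch below is our own illustration using the hypothetical numbers from the text: it contrasts GS-style full counting with fractional counting for co-authored work, and computes the field-share comparison implicit in the US Congress versus Pakistan example.

```python
def full_count(citations, n_authors):
    # GS-style: every co-author is credited with the paper's full citation count.
    return citations

def fractional_count(citations, n_authors):
    # Alternative convention: credit is divided among co-authors.
    return citations / n_authors

# A five-author paper cited once: 1 citation per author under full counting,
# 0.2 under fractional counting.
print(full_count(1, 5), fractional_count(1, 5))

# Field share: the fraction of a subfield's annual literature citing the paper.
congress_paper = 16 / 100   # cited by 16 of 100 articles on the US Congress
pakistan_paper = 8 / 10     # cited by 8 of 10 articles on Pakistan
print(congress_paper, pakistan_paper)
```

The raw counts (16 versus 8) rank the papers one way; the field shares (0.16 versus 0.8) reverse the ranking, which is the point of the comparison in the text.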
The use of GS for evaluating early-career scholars provides perverse incentives. From a numerical standpoint, it is better to publish incremental work on topics in which there is already a large, active subgroup of scholars who cite one another than it is to open up a new field of research. In addition, any kind of evaluation based on short-term citations discourages boldness, especially among early-career political scientists, and discourages innovation. Many important scholarly works initially defy easy contextualization and fit poorly into existing literatures. A thoughtful reviewer may be better able to judge the competence, ambition, and reach of a piece of scholarly work through close reading than through examining citation metrics.

Above, we posited a normative basis for when an author should cite others, using the criteria of relevance to the topic and influence on the author's project. Yet as the preceding discussion of incentives suggests, scholars are not only concerned with citing the work that has been most important to them, but also face an array of professional imperatives: they want their work to appeal to reviewers, get published, and garner citations. Scholars may thus engage in several forms of strategic citation (see Aizenman and Kletzer 2008). First, some books and articles are cited almost entirely for their flaws, not their importance. Authors often cite poorly executed studies, easy targets, and straw-man pieces to explain what they are arguing against (see Nexon and Jackson 2015). Second, citation choices may be guided by expectations of likely peer reviewers. Strategic citations therefore include fellow members of academic networks who are likely to be favorably disposed to the argument and findings, producing a bias against citing scholars outside those networks or whose work is likely to be critical. Third, strategic citations may be

driven by calculations about which citations editors and reviewers will expect to see in the bibliography. Journal articles are increasingly subject to shorter word limits; omitting citations is one way to keep papers under 10,000 words, but authors do not want to be criticized for missing key works. This fear encourages "drive-by citations": citing papers merely because other similar papers cite those papers, regardless of their actual relevance. Anecdotally, scholars who observe this phenomenon are often those whose work is repeatedly cited erroneously, for arguments they did not actually make. These forms of strategic citation reproduce existing inequalities in the discipline, leading to a systematic tendency to cite authors of works already deemed important, not because of their relevance and importance to the author, but because they are perceived as gatekeepers, hold key editorial positions, or reside in powerful departments. Junior scholars have expressed concerns to us that their submissions will face rejection if they anger prominent scholars by omitting mention of their work or engaging too critically with it. Like the biases discussed above, strategic citation can reinforce disciplinary hierarchies at the expense of underrepresented minorities, scholars from lower-ranked institutions, and innovative work that does not neatly fit into existing literatures.

The concerns raised in this section do not foreclose the potential of GS to serve as an indicator of scholarly impact, but point to the need for more information about its methodology and a fuller understanding of the factors that affect decisions about whom to cite. If departments and granting institutions look at GS counts, and external letter writers use them as a guide, the counts are effectively being counted twice or even several times, thus reducing the independence of external scholars' judgement.
Moreover, since a GS profile (or the absence of one) is a public signal, it can have a disproportionate effect on people's opinions, because a person seeing it knows that other people see it too (Chwe 2017).

Concluding Thoughts and Practical Tips

The question of whether to use GS and other citation count tools to evaluate other scholars for hiring, promotion, and tenure requires considering both advantages and disadvantages. GS has many advantages: it promotes consistency in research evaluation; encourages transparency, publicity, and openness; makes it easier to gain access to scholarly work; facilitates networking among scholars; and may provide incentives for quality over quantity. On the other hand, its data may be of questionable validity, and its counts are biased in various ways. GS both over- and under-counts published works, and its citation counts favor male (and presumably also white) scholars, scholars who co-author, scholars in larger research communities, incremental work, and work that is cited strategically. While breaking down some doors, the use of GS entrenches long-existing inequalities in the political science discipline. And while reliance on GS may seem to offer an easy metric to replace the role of external reviewers in evaluating impact, the use of GS in fact makes reasoned, independent, scholarly judgment more important than ever. The debate over GS should animate a larger discussion of values in the discipline. We may choose to disagree about some of these values, such as the relative importance of generating new ideas versus verifying existing ones, and the relative importance of work that fits poorly into existing research paradigms versus work that fits easily.

Our aim in this article has been to open up a forum for meaningful debate about citation counts in the discipline. Yet even as we discuss the pros and cons, people will continue to rely on GS to make everyday judgements. We therefore conclude with some practical advice for everyone who uses GS. Since we all make typos, and because GS's algorithm is imperfect, anyone who creates a profile page should do three things. First, go through the list of works GS automatically captures, delete those that are not yours, and merge duplicates. Second, click "add," then go through the publications that GS thinks might be yours and click on those that in fact are yours. Third, if Step 2 did not catch certain works that you know are archived online somewhere, you can search for and capture additional materials manually. Political scientists can and should find ways to use GS more wisely, which means using citation counts as one measure of impact among others, and remaining mindful of their pitfalls and biases.

References

Aizenman, Joshua, and Kenneth Kletzer. 2008. "The Life Cycle of Scholars and Papers in Economics: The Citation Death Tax." National Bureau of Economic Research (NBER) Working Paper Series, no. 13891.

Chwe, Michael Suk-Young. 2017. "Stereotypes Are More Powerful When People Like to Agree With Each Other." Working paper, UCLA.

Colgan, Jeff. 2017. "Gender Bias in International Relations Graduate Education? New Evidence from Syllabi." PS: Political Science & Politics 50 (2): 456-460.

Henrekson, Magnus, and Daniel Waldenstrom. 2011. "How Should Research Performance Be Measured? A Study of Swedish Economists." The Manchester School 79 (6): 1139-1156.

Hirsch, Jorge E. 2005. "An Index to Quantify an Individual's Scientific Research Output." Proceedings of the National Academy of Sciences of the United States of America: 16569-16572.

Maliniak, Daniel, Ryan Powers, and Barbara F. Walter. 2013. "The Gender Citation Gap in International Relations."
International Organization 67 (4): 889-922.

Merton, Robert K. 1968. "The Matthew Effect in Science." Science 159 (3810): 56-63.

Moss-Racusin, Corinne A., and Laurie A. Rudman. 2010. "Disruptions in Women's Self-Promotion: The Backlash Avoidance Model." Psychology of Women Quarterly 34 (2): 186-202.

Nexon, Daniel, and Patrick Thaddeus Jackson. 2015. "Academia Isn't Baseball." Duck of Minerva, August 10.

Samuels, David. 2013. "Book Citations Count." PS: Political Science & Politics 46 (4): 785-790.

Samuels, David J. 2011. "The Modal Number of Citations to Political Science Articles Is Greater Than Zero: Accounting for Citations in Articles and Books." PS: Political Science & Politics 44 (4): 783-792.

Stephan, Paula, Reinhilde Veugelers, and Jian Wang. 2017. "Reviewers Are Blinkered by Bibliometrics." Nature 544 (7651): 411-412.

Teele, Dawn Langan, and Kathleen Thelen. 2017. "Gender in the Journals: Publication Patterns in Political Science." PS: Political Science & Politics 50 (2): 433-447.