
THE KISS OF DEATH? THE EFFECT OF BEING CITED IN A REVIEW ON SUBSEQUENT CITATIONS

Christian Lachance 1, Steve Poirier 2 and Vincent Larivière 1,3

1 École de bibliothéconomie et des sciences de l'information, Université de Montréal, C.P. 6128, Succ. Centre-Ville, Montréal, QC, H3C 3J7, Canada
2 Institut de Cardiologie de Montréal, Université de Montréal, 5000 Bélanger, Montréal, QC, H1T 1C8, Canada
3 Observatoire des Sciences et des Technologies (OST), Centre Interuniversitaire de Recherche sur la Science et la Technologie (CIRST), Université du Québec à Montréal, CP 8888, Succ. Centre-Ville, Montréal, QC, H3C 3P8, Canada
[christian.lachance.1@umontreal.ca; steve.poirier@icm-mhi.org; vincent.lariviere@umontreal.ca]

Abstract
This article inquires into recent claims that citation in a review article provokes a decline in a paper's later citation count, these citations being instead given to the review. Using the Science Citation Index Expanded, we looked at the yearly percentages of lifetime citations of papers published in 1990 and first cited in review articles in 1992 or 1995 in the field of biomedical research, and found no significant change following review, regardless of the papers' citation activity or specialty. Further comparison was made with papers from the field of clinical research, but yielded no meaningful results to support the notion that review articles have any substantial effect on the citation count of the papers they review.

Introduction
A recent editorial (Marks et al. 2013) noted what it called an alarming trend in the biomedical research literature of citing review articles instead of the original papers themselves. Others have also pointed out that reviews, or secondary literature, are more cited than the original research papers whose findings they summarize (MacRoberts & MacRoberts 1989, Knottnerus & Knottnerus 2009). Considering the use and importance of citation counts, despite reservations, as markers of scientific impact, most notably in the constitution of the Journal Impact Factor (Pinski & Narin 1976, Moed & Van Leeuwen 1995, Neuhaus, Marx & Daniel 2009), skewed or misleading data could have a deleterious effect on one's ability to assess a scientist's or a paper's impact. As the editorial in question noted, in addition to depriving the original research of acknowledgement, this practice might be misleading with regard to possible differences between the conclusions of the review article and those of the original. Furthermore, the importance of review articles in the medical literature, especially that of systematic reviews, owing to their position atop the hierarchy of evidence (Patsopoulos, Analatos & Ioannidis 2005, Kaczorowski 2009), along with their length (Pinski & Narin 1976), gives them a relative over-importance in terms of citation counts (Aksnes 2006, Berthod 2009) that could further complicate matters, by magnifying any effect on the papers or by giving the impression that the "theft" of citations is greater than it actually is. But is this the case? Does a paper's inclusion in a review article have any impact on its citation count thereafter? And, if so, what is it? Is it truly the proverbial kiss of death, a marked decline in citations made to the original?

Methods
We used the Science Citation Index Expanded (SCIE) to identify papers for scrutiny. We limited our inquiry to the field of biomedical research, the one specifically called out in the editorial (Marks et al. 2013). The publication year of 1990 was chosen so as to allow enough time for the papers to be taken up and cited in review articles, while still offering the possibility of observing citation counts over a long subsequent period. The end of the citation window was set at 2010, providing a 20-year window (plus the publication year). With these criteria, the SCIE provided a dataset of 65,871 papers, which have been cited 877,641 times between 1990 and 2010. These citations were split by specialty and by citing document type (original research article or review article). A control group of 15,259 papers never cited in a review article over the period was drawn from the original population of 1990-published papers.

In order to focus on the before/after effect of review-article inclusion, two subpopulations of papers, those first cited in reviews in 1992 and those first cited in reviews in 1995, were chosen for closer scrutiny. These two years represent the moments by which half (50.38%) and three quarters (76.89%), respectively, of the 50,522 papers cited in reviews at least once over our time span had received their first review citation (see Figure 1).

Figure 1. Cumulative percentage of 1990-published papers cited in review articles, over time.

A total of 10,040 (1992) and 2,934 (1995) papers were included in these subcategories. Table 1 presents an overview of the main characteristics of the dataset.

As the aggregated data showed no effect, we separated the papers into several subpopulations based on the lifetime number of citations they received, limiting these counts to citations obtained from other original research articles rather than from reviews. The hypothesis was that papers would react differently depending on their citation activity: seldom-cited papers might on the whole benefit from review-article inclusion, in effect gaining greater exposure, whereas highly cited papers could, however faintly, suffer by having some of their citations diverted away. The papers were separated into populations of 1-5, 6-10, 11-15, 16-25, 26-45 and 46+ lifetime citations, in an attempt to obtain groups of roughly similar size. A similar segmentation was performed on the control group and the data were plotted.

Table 1. Overview of the dataset

                                                                     | Total population | Control group | First cited in reviews: 1992 | 1995
Number of papers                                                     | 65,781           | 15,259        | 10,040                       | 2,934
% of those papers cited in a review article                          | 76.80%           | 0%            | 100%                         | 100%
Number of citations                                                  | 2,823,032        | 90,660        | 477,357                      | 71,407
Number of citations from review articles                             | 423,781          | 0             | 74,988                       | 9,555
% of citations received from review articles                        | 15.01%           | 0%            | 15.71%                       | 13.38%
Cumulative % of total review-article citations accrued by that date  | n/a              | n/a           | 50.38%                       | 76.89%

We also looked at the combined years 1992 to 1996 in the same fashion, so as to rule out any year-specific event as much as possible. Additionally, we plotted the yearly percentage of lifetime citations against the number of review-article citations received in 1992 or 1995, presuming that any existing effect would be magnified by a large number of reviews. We then looked at the yearly percentage of lifetime citations by biomedical research specialty to see if significant intra-discipline variance existed, knowing that variance between disciplines does exist (Bensman, Smolinsky & Pudovkin 2010, Van Eck et al. 2013). Finally, we also looked at papers from the field of clinical research to see if the type of research conducted, fundamental or clinical, had an effect.

Results
The 1992, 1995 and control groups remain very similar throughout. There is a clear drop following 1992 for all but the most highly cited papers (Fig. 2 F), consistent with the usual 2-3-year window of high citation activity following publication for most papers. This decline is slightly less pronounced for papers receiving their first review citations in 1995, which peak lower but maintain a slightly greater activity level in their later years. The 1995 plots, regardless of the number of citations, tend to be very similar to the control group (average deviation of 0.39 percentage points). On the whole, the greater the lifetime citation count, the less pronounced the initial high-citation period, as such papers accumulate a greater proportion of their citations after their initial exposure period. The 1992-reviewed papers show the highest peaks, which can be thought of as representative of their immediate popularity and uptake. Papers first cited by reviews in 1995 show very little variation following review, dropping slightly except for papers with 26+ lifetime citations (Fig. 2 E-F), which rise or remain stable.
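To make these quantities concrete, the following is a minimal Python sketch, under assumed data structures, of how the yearly percentage of lifetime citations plotted in Figure 2, the lifetime-citation classes, and the average deviation between a group curve and the control curve could be computed. The tuple layout (paper_id, citing_year, citing_doc_type) and all function names are illustrative assumptions, not the authors' actual processing code.

```python
# Hypothetical sketch of the yearly-percentage-of-lifetime-citations metric.
# Field names and the flat (paper_id, citing_year, citing_doc_type) layout
# are assumptions for illustration; the real SCIE extraction differs.
from collections import defaultdict

YEARS = range(1990, 2011)  # 20-year citation window plus the publication year
BINS = [(1, 5), (6, 10), (11, 15), (16, 25), (26, 45), (46, float("inf"))]

def lifetime_bin(total):
    """Return the lifetime-citation class label for a paper (e.g. '6-10')."""
    for low, high in BINS:
        if low <= total <= high:
            return f"{low}+" if high == float("inf") else f"{low}-{high}"
    return None  # uncited papers are not binned

def yearly_percentages(citations):
    """citations: iterable of (paper_id, citing_year, citing_doc_type) tuples
    for citations received by 1990-published papers. Returns, per paper, the
    share of its lifetime citations received in each year, counting only
    citations from original articles (review citations are excluded, as in
    the segmentation described in the Methods)."""
    per_paper_years = defaultdict(lambda: defaultdict(int))
    for paper_id, year, doc_type in citations:
        if doc_type != "review":
            per_paper_years[paper_id][year] += 1
    shares = {}
    for paper_id, counts in per_paper_years.items():
        total = sum(counts.values())
        shares[paper_id] = {y: 100.0 * counts.get(y, 0) / total for y in YEARS}
    return shares

def group_curve(shares, paper_ids):
    """Average the yearly-percentage curves over a group of papers."""
    ids = [p for p in paper_ids if p in shares]
    if not ids:
        return {y: 0.0 for y in YEARS}
    return {y: sum(shares[p][y] for p in ids) / len(ids) for y in YEARS}

def average_deviation(curve_a, curve_b):
    """Mean absolute difference, in percentage points, between two curves
    (e.g. the 1995-reviewed group versus the control group)."""
    return sum(abs(curve_a[y] - curve_b[y]) for y in YEARS) / len(YEARS)
```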

Figure 2. Percentage of lifetime citations (in papers), per year, for 1990-published papers, by year of first citation(s) by a review article. Papers with lifetime citations: (A) 1-5; (B) 6-10; (C) 11-15; (D) 16-25; (E) 26-45; (F) 46+.

Looking at the percentages of lifetime citations accrued over the 1990-2010 period for the years 1992-1996 combined shows that all groups follow a very similar progression and decline in the years before and after the initial review(s). No marked variation was noted. The number of citations papers received from reviews in the year they were first cited by this document type also seems to have little effect. These counts vary from 1 to 10 (average 1.36) for 1992, and from 1 to 7 (average 1.14) for 1995. In both cases, papers with a low initial number of citations coming from reviews (1-2), representing 93.1% and 98.5% of papers in 1992 and 1995 respectively, show no effect. More heavily reviewed papers fluctuate, but they form subpopulations too small (1 to 66 papers) for analysis to be meaningful. No common trend or magnification effect was observed. Separating the papers by specialty (microbiology, genetics & heredity, etc.) likewise showed no clear reaction, all years studied presenting variation over time similar to that of the control group. Aberrant behaviour, again, was only found among very small subpopulations.
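As a complement to the previous sketch, the split just described, between papers receiving 1-2 review citations in the year of their first review citation and those receiving more, could be reproduced along the following lines. The record layout and names are again assumptions for illustration only.

```python
# Hypothetical sketch: group papers by the number of review citations
# received in the year they were first cited by a review article.
from collections import defaultdict

def first_review_year_counts(citations):
    """citations: iterable of (paper_id, citing_year, citing_doc_type).
    Returns {paper_id: (first_review_year, n_review_citations_that_year)}."""
    review_years = defaultdict(list)
    for paper_id, year, doc_type in citations:
        if doc_type == "review":
            review_years[paper_id].append(year)
    result = {}
    for paper_id, years in review_years.items():
        first = min(years)
        result[paper_id] = (first, years.count(first))
    return result

def split_by_initial_review_exposure(first_counts, cohort_year):
    """Separate papers first reviewed in cohort_year into the lightly
    reviewed (1-2 review citations that year) and the more heavily
    reviewed (3+) subpopulations discussed in the text."""
    low, high = [], []
    for paper_id, (year, n) in first_counts.items():
        if year == cohort_year:
            (low if n <= 2 else high).append(paper_id)
    return low, high
```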

The field-to-field comparison between clinical and biomedical research (made over a shorter period, 1990-2000) showed very similar behaviour for both types of research, the main difference being that biomedical research papers accrue citations from other papers at a quicker pace, and that the gap widens with time: papers cited in reviews in 1995 are further apart than those cited in reviews in 1992.

Discussion and Conclusion
The evolution of citations received over time by 1990 papers in biomedical research and clinical medicine presents no meaningful discrepancies, all groups following very similar patterns irrespective of the year they were cited in reviews or of the fact that they were never cited in reviews at all. Figure 2 A-B shows that all papers with few lifetime citations (1-10) behave virtually identically, irrespective of the citations they received in reviews. These papers are somewhat less sensitive to effects over time, however, as they gather most of their citations very early on, the remainder trickling in. It is the somewhat more cited papers that begin to show different behaviour, namely that the 1995-reviewed papers exhibit a lower initial peak but, typically around 1998, pass the 1992 contingent in percentage of lifetime citations accrued per year. No marked change is observed around 1996 that would be significantly different from the control group. As for the 1992-reviewed papers, the sharper peak they exhibit is likely due to the fact that being important enough to be reviewed only two years after publication implies a great degree of citation activity to begin with; it is thus possible that a sort of Matthew Effect (Merton 1968) helps these papers make the most of the spotlight during their citation prime. Papers from the field of clinical research peak in 1992 and dip in 1995, but these changes are present in the control group as well. The absence of a significant impact, regardless of the number of citations or the year of citation in a review, supports the notion that elapsed time is the main cause of these changes, rather than whether or not the papers were cited in reviews. As for the difference observed between biomedical and clinical research, namely that the former has a more pronounced positive skew, it would suggest that biomedical research papers are integrated into the community, and thus cited, more quickly; however, this could simply be characteristic of the field, and further comparative study would be required to make that claim.

In conclusion, we could find no clear indication that review articles have a harmful effect on the citation counts of the papers they review, regardless of a paper's citation activity, field, or specialty. Our results also show that there is no positive effect either: a citation coming from a review does not seem to act as a trigger signaling the existence of a paper to the research community.

Our study has several limits. Beyond the limitations inherent to the coverage of the SCIE and to the choice of 1990 as the publication year, we deliberately limited our study to the biomedical research field. As citation practices vary among disciplines (Bensman, Smolinsky & Pudovkin 2010, Van Eck et al. 2013), the applicability of our findings remains limited in scope. Comparisons were made with clinical research, but the latter was not as thoroughly investigated. Furthermore, we looked mainly at the papers first cited in review articles in the years 1992 and 1995, with a slight broadening of scope over the 1992-1996 range. A more longitudinal inquiry would obviously yield a better overall picture. Another limit is that we used the WoS definition of what constitutes a review article.
Colebunders and Rousseau (2013) have shown for some medical specialties that great variation exists in what is considered a review depending on the criteria used; regardless, the WoS classification is assigned automatically on the basis of simple criteria and is not validated for each individual paper. It is thus possible that our study does not cover the full population of reviews, or that it includes papers that are not actually reviews. Mitigating this, however, are the findings of Harzing (2013), who studied whether classifying papers as reviews on the basis of their inclusion of over 100 references (one of the WoS criteria) was valid, and found that in the sciences it was in most cases justified. We therefore feel confident that a substantial proportion of the reviews in our citation window are under scrutiny. Finally, the small size of some subpopulations often prevented a more finely tuned analysis of these groups, making it difficult to strongly assert the meaningfulness of these subpopulations' behaviour. Nevertheless, we feel confident that, as a first look into this question, the purpose of the study was achieved: to provide a general sense of the impact, or lack thereof, of review articles on the citation counts of the papers they cite.
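For illustration only, a toy sketch of the "over 100 references" criterion mentioned above; the record structure is hypothetical, and WoS applies additional criteria beyond reference counts.

```python
# Toy illustration of the ">100 references" heuristic discussed above.
# Records are hypothetical dicts; this is not the actual WoS procedure.
def candidate_reviews(records, min_refs=100):
    """Return the records whose reference count exceeds min_refs."""
    return [r for r in records if r.get("n_references", 0) > min_refs]

# Example:
# papers = [{"id": "A", "n_references": 142}, {"id": "B", "n_references": 23}]
# candidate_reviews(papers)  # -> only the record for "A"
```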

References
Aksnes, D.W. (2006). Citation Rates and Perceptions of Scientific Contribution. Journal of the American Society for Information Science and Technology, 57(2), 169-185.
Bensman, S.J., Smolinsky, L.J. & Pudovkin, A.L. (2010). Mean Citation Rate per Article in Mathematics Journals: Differences From the Scientific Model. Journal of the American Society for Information Science and Technology, 61(7), 1440-1463.
Berthod, A. (2009). "So What?" or Required Content of a Review Article. Separation & Purification Reviews, 38(3), 203-206.
Colebunders, R. & Rousseau, R. (2013). On the definition of a review and does it matter? In J. Gorraiz, E. Schiebel, C. Gumpenberger, M. Hörlesberger & H. Moed (Eds.), Proceedings of ISSI 2013 (pp. 2072-2074). Vienna: Austrian Institute of Technology.
Harzing, A.-W. (2013). Document categories in the ISI Web of Knowledge: Misunderstanding the social sciences? Scientometrics, 94(1), 23-34.
Kaczorowski, J. (2009). Standing on the shoulders of giants: Introduction to systematic reviews and meta-analyses. Canadian Family Physician, 55(11), 1155-1156.
Knottnerus, J.A. & Knottnerus, B.J. (2009). Let's make the studies within systematic reviews count. The Lancet, 373(9675), 1605.
MacRoberts, M.H. & MacRoberts, B.R. (1989). Problems of citation analysis: A critical review. Journal of the American Society for Information Science, 40(5), 342-349.
Marks, M.S., Marsh, M.C.P., Schroer, T.A. & Stevens, T.H. (2013). Editorial. Traffic, 14(1), 1.
Merton, R.K. (1968). The Matthew Effect in Science. Science, 159(3810), 56-63.
Moed, H.F. & Van Leeuwen, T.N. (1995). Improving the Accuracy of Institute for Scientific Information's Journal Impact Factors. Journal of the American Society for Information Science, 46(6), 461-467.
Neuhaus, C., Marx, W. & Daniel, H.-D. (2009). The Publication and Citation Impact Profiles of Angewandte Chemie and the Journal of the American Chemical Society Based on the Sections of Chemical Abstracts: A Case Study on the Limitations of the Journal Impact Factor. Journal of the American Society for Information Science and Technology, 60(1), 176-183.
Patsopoulos, N.A., Analatos, A.A. & Ioannidis, J.P.A. (2005). Relative Citation Impact of Various Study Designs in the Health Sciences. Journal of the American Medical Association, 293(19), 2362-2366.
Pinski, G. & Narin, F. (1976). Citation Influence for Journal Aggregates of Scientific Publications: Theory, with Application to the Literature of Physics. Information Processing & Management, 12, 297-312.
Van Eck, N.J., Waltman, L., Van Raan, A.F.J., Klautz, R.J.M. & Peul, W.C. (2013). Citation Analysis May Severely Underestimate the Impact of Clinical Research as Compared to Basic Research. PLoS ONE, 8(4), e62395.