LETTER TO THE EDITOR - OBSERVATION LETTER
Year : 2019  |  Volume : 85  |  Issue : 5  |  Page : 541-545

Impact factor: Does it really have an impact?


Sandeep Lahiry1, R Sinha2, S Thakur3

1 Department of Pharmacology, R. G. Kar Medical College, Kolkata, West Bengal, India
2 Department of Pediatrics, Medical College and Hospital, Kolkata, West Bengal, India
3 Department of Pharmacology, Institute of Post Graduate Medical Education and Research, Kolkata, West Bengal, India

Date of Web Publication: 16-Aug-2019

Correspondence Address:
Dr. Sandeep Lahiry
Department of Pharmacology, R. G. Kar Medical College, 1, Khudiram Bose Sarani, Kolkata - 700 004, West Bengal, India
Source of Support: None, Conflict of Interest: None


DOI: 10.4103/ijdvl.IJDVL_51_19




How to cite this article:
Lahiry S, Sinha R, Thakur S. Impact factor: Does it really have an impact?. Indian J Dermatol Venereol Leprol 2019;85:541-5





Sir,

'If there is one thing every bibliometrician agrees on, it is that you should never use the journal impact factor to evaluate research performance for an article or for an individual – that is a mortal sin.' – Nature, 2010.[1]

Publishing in reputed journals undoubtedly enhances an academic's future career prospects in many fields. Academic administrations have tended to focus on journal impact factors when judging the worth of researchers' scientific contributions, for promotions and even for recruitment. But is this the correct way to assess scientific research? Can the journal impact factor reliably indicate journal quality?

While many, particularly young researchers, would agree that it can, veterans would probably shrug this off as a misconception. One reason is the trend of ranking journals by some form of impact factor without first setting evaluative criteria for a broader assessment. Such rankings often rely on bibliometrics alone, ignoring the fact that no single metric can address all the relevant variables. In practice, these methods tend to produce skewed results (representing mostly North American journals), without giving due attention to good-quality indexed journals that rank lower on impact factor alone.[2]

Librarians and information scientists have used various metrics to evaluate journals for the past 75 years. However, the advent of the Clarivate Analytics citation indexes made it possible to compile statistical reports not only on the output of journals but also on citation frequency. Since 1975, the Journal Citation Reports have provided quantitative metrics for ranking, evaluating, categorizing and comparing journals. The journal impact factor is one such metric: a measure of the frequency with which the “average article” in a journal has been cited in a particular year or time period.

Interestingly, the journal impact factor was originally introduced in the 1960s to help librarians decide which journals to purchase; the concept was first devised by Eugene Garfield, founder of the Institute for Scientific Information. Today, the journal impact factor represents the frequency with which articles published in a journal over the past 2 years have been cited in a particular year. It is calculated by dividing the number of current-year citations to items published in the previous two years by the total number of citable items published in those two years.[3]

For instance, the 2018 impact factor of a journal would be calculated as follows:

2018 Impact factor = A/B

where

  • A = The number of times that all items published in that journal in 2016 and 2017 were cited by indexed publications during 2018
  • B = The total number of “citable items” published by that journal in 2016 and 2017.


“Citable items” for such a calculation are usually articles, reviews, proceedings or notes; not editorials or letters to the editor.
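For illustration, with hypothetical figures: if a journal published 150 citable items across 2016 and 2017 (B = 150), and indexed publications cited those items 450 times during 2018 (A = 450), its 2018 impact factor would be 450/150 = 3.0.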

However, it must be remembered that, when comparing journals by impact factor, the self-citations of a cited-only journal are not included in its impact factor calculation.[3] Self-citations often represent nearly 13% of the citations that a journal receives.

Thus, the revised formula for calculating the journal impact factor (excluding self-citations) would be as follows:

2018 Impact factor = C/D

where

  • A = Total citations in 2018 to articles published in that journal in 2016 and 2017
  • B = 2018 self-citations to articles published in 2016 and 2017
  • C = A – B (total citations – self citations to recent articles)
  • D = Total number of articles published in that journal in 2016 and 2017.
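To make the arithmetic concrete, here is a minimal sketch in Python using the hypothetical figures from the example above (illustrative only; Clarivate's actual data processing is far more involved):

    # Hypothetical illustration of the two impact factor formulas above.
    def impact_factor(citations_to_recent, citable_items):
        # 2018 IF = 2018 citations to 2016-17 items / citable items from 2016-17
        return citations_to_recent / citable_items

    def impact_factor_excluding_self(total_citations, self_citations, citable_items):
        # Revised IF = (total citations - self-citations) / citable items
        return (total_citations - self_citations) / citable_items

    A = 450  # total 2018 citations to articles published in 2016 and 2017
    B = 60   # 2018 self-citations to those articles (close to the ~13% typical share)
    D = 150  # citable items published in 2016 and 2017

    print(impact_factor(A, D))                    # 3.0
    print(impact_factor_excluding_self(A, B, D))  # 2.6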


Despite several limitations and the development of other indices, the impact factor remains the most frequently used index for assessing the quality of a journal [Table 1]. As a consequence, if an author publishes an article in a journal with a high impact factor, the contribution is viewed more favorably. Articles published in high impact factor journals generally receive more positive attention, or more points when someone seeks an academic benefit, as mentioned above. As a tool for managing library journal collections, the journal impact factor supplies the library administrator with information about journals already in the collection and journals under consideration for acquisition.
Table 1: Advantages and limitations of using impact factor as a tool to assess journal quality



However, many argue that the journal impact factor alone may not accurately assess the usefulness of a journal. In fact, Larivière and Sugimoto, in their six-point critique of the journal impact factor, explain that a two-year citation window can inadvertently favor certain disciplines over others.[4] They also pointed out that the practice of including citations to “front matter,” such as editorials and letters to the editor, in the numerator while excluding these items from the denominator (as they do not qualify as “citable items”) is inherently flawed.[4] Interestingly, even some reputed journals, such as Nature and Science, have been found to use front matter to boost their journal impact factor.[5],[6]

Over the years, other critics have argued that the journal impact factor, per se, may not convey anything informative about the quality of empirical research. Firstly, it is still debatable whether the impact of research is appropriately indexed over a relatively short time span (i.e., the 2 years following publication, in the case of the journal impact factor) as compared to longer time spans. A paper can receive a large number of citations in the short run because it reports surprising or counter-intuitive findings, regardless of whether the research was conducted rigorously. In other words, the short-term citation rate of a journal may not be particularly informative about the quality of the research it reports.

Secondly, there can be a disproportionate representation of Western journals, as only a limited number of journals have been assigned a journal impact factor (roughly 8000 in the sciences). In 2016, an analysis of the Journal Citation Reports, the database used to assign journal impact factors to roughly 11,000 Institute for Scientific Information journals,[7],[8] revealed that only 20–40% of articles received as many citations as the journal impact factor suggested.[4]

Thirdly, it is not uncommon to find journals publishing more review articles, commentaries and editorials than original articles. Review articles are, in general, cited more frequently than typical research articles because they often serve as surrogate sources for earlier literature, particularly in journals that discourage extensive bibliographies. This leaves room for manipulation, leading to inflated journal impact factors in some cases. Moreover, because “citable items” usually include articles, reviews and proceedings, there has been a trend of systematic inflation of the average journal impact factor by increasing the number of papers and references per paper.[9]

Fourthly, most journals have no discrete measures for correcting for self-citations, which are, in many cases, numerous. This has led to documented cases of manipulation by unscrupulous editors, where the volume of self-citations has ranged from 7 to 20% of an article's references.[10] High self-citation has been particularly common in specialized journals and in articles with multiple authors.[11],[12]

Next, comparing journals across different subject areas can also be quite misleading. As the journal impact factor cannot differentiate between disciplines, it should not be used for cross-disciplinary comparisons. Moreover, differences in referencing practices mean that medical researchers are much more likely to publish in journals with a high journal impact factor than, say, mathematicians or social scientists.[4] For the particular purpose of comparing contributions and journals across different specialties, and thus resolving the aforementioned problems, the journal impact factor may be modified into a Specialty Impact Factor or “S-impact factor.”[13],[14] The S-impact factor may be calculated as follows:

  • A = Impact factor of a journal
  • B = Highest impact factor in the same specialty
  • S-impact factor = A/B.
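For illustration, with hypothetical figures: a journal with an impact factor of 2.0 (A) in a specialty whose highest-ranked journal has an impact factor of 8.0 (B) would have an S-impact factor of 2.0/8.0 = 0.25.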


Adhering to the abovementioned formula, it is fairly reasonable to assume that the best journals of all specialties have equal or equivalent value. The journal with the highest impact factor in a specialty, irrespective of which specialty it is, will have an S-impact factor of 1, while other journals in the same specialty will score below 1. This index may be used to make more meaningful comparisons of journal quality, along with imposing a minimum essential score applicable across different specialties for academic benefit.[13]

The modified impact factor is a new concept based on the existing journal impact factor. It allows a more rational comparison of journals both within and across disciplines by adopting three specific elements: the highest impact factor, color coding and modification. Different color codes (red, yellow and green) are used at different levels (disciplines, specialties and branches). The highest impact factor of a group is set at 100% as the reference for that discipline, and the other group members are normalized to its equivalent accordingly.[14]

Lastly, it is incorrect to assess the scholastic worth of an author using a single metric such as the journal impact factor, because even articles published in journals with an impact factor of 50.0 may receive no citations. In such situations, the journal impact factor may be wrongly used as a measure of individual quality.[15]

The Departments of Science and Technology, and of Biotechnology, in a joint Open Access Policy, have pointed out that the journal impact factor must not be used “as a surrogate measure of the quality of individual research articles, to assess an individual scientist's contributions, or in hiring, promotion, or funding decisions.”[15]

In actuality, however, this central recommendation has not been implemented. A recent study by Madhan et al. pointed out that cumulative journal impact factors were still being used as a criterion for prestigious awards such as the Tata Innovation Fellowship, the Innovative Young Biotechnologist Award and the National Bioscience Awards for Career Development.[16] Similarly, the Indian Council of Medical Research routinely uses the average journal impact factor as a measure of the performance of its various laboratories.

However, institutions such as the National Assessment and Accreditation Council, Bengaluru, use various bibliometrics other than the journal impact factor in their accreditation process.[17] The Council also asks for the “h-index” (or Hirsch index) of each author, an alternative to other bibliometric indicators.[18] The h-index is an author-level metric that attempts to measure both the productivity and the citation impact of a scientist's or scholar's publications. It is based on the set of the scientist's most cited papers and the number of citations those papers have received in other publications: it is calculated as the maximum value of h such that the given author (or journal) has published h papers that have each been cited at least h times. Whether the h-index is an ideal way of measuring research performance is still debatable.[15]
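To make this definition concrete, here is a minimal sketch in Python (illustrative only; bibliometric databases compute this from curated citation records):

    def h_index(citations):
        # Maximum h such that h papers have each been cited at least h times.
        ranked = sorted(citations, reverse=True)
        h = 0
        for i, c in enumerate(ranked, start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    # Hypothetical author with six papers:
    print(h_index([10, 9, 7, 4, 2, 1]))  # 4 (four papers each cited at least 4 times)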

One of the most controversial (and problematic) uses of the journal impact factor is the University Grants Commission's policy for the appointment and promotion of teachers in academic institutions. The University Grants Commission calculates an academic performance indicator score, which includes points for research, under which teachers earn more points for papers published in journals with a higher journal impact factor.[19] The fundamental problem with such a policy is that the journal impact factor fluctuates from year to year because of the 2-year window. It favors certain journals and disciplines, does not incorporate any kind of 'field normalization' and fails to predict citations. This makes it difficult to rely on the journal impact factor to evaluate a teacher's research performance and to decide on employment or promotion. What makes it worse is the arbitrariness of formulating award points based on ranges of journal impact factor.[16]

Validation of research output, along with evaluation of the quality of new knowledge, is the basic tenet of all scientific work. To boost the growth of quality research in our country, the Indian National Science Academy released a policy statement on the Dissemination and Evaluation of Research Output in India.[20] This document offers insight into basic policy parameters such as promoting pre-print repositories, incorporating quality peer review, minimizing the interference caused by predatory journals and predatory conferences, categorizing and evaluating research effort, and rationalizing payment policies in the Indian scenario.

There is no “gold standard” method, and the art of making correct assessments of journal quality comes with experience. The “quality” of a research study may be an elusive thing to quantify and, as scholars have demonstrated, different scientists evaluating the same manuscripts do not always agree on the quality of the work in question. However, one of the best strategies is to follow a step-wise approach.

First, a hint about journal quality is given by its publisher and, in many cases, by the society, association or organization affiliated with it. Highly rated publishers such as Nature, Science, Wolters Kluwer and Sage often work in collaboration with reputed societies such as the American Heart Association and the American Psychological Association. This also increases the likelihood of one's research or paper being discovered.

Second, the scholarly reputation of the editorial board members may indicate the quality of a journal. Hence, a little background check on the peer-review committee may be helpful.

Third, indexing information from a reliable database, such as the Ulrichsweb Global Serials Directory, provides substantial information on a journal.[21] Ulrichsweb is an authoritative source of bibliographic and publisher information on more than 300,000 academic periodicals and scholarly journals.[22] Indexed journals, especially those with good or high impact factors, are rightly presumed to be of high “quality.” Such journals ensure a consistent, good-quality peer-review process, and it is prestigious to publish in them. However, these high impact factor journals also have very high rejection rates. Currently, PubMed/Medline, EMBASE, Web of Science and Scopus are considered some of the best indexing agencies for scientific journals.

Fourth, despite lesser known specialty journals having higher acceptance rates, most studies are submitted to journals with higher rejection rates, purely because such journals have a higher journal impact factor and greater visibility. Of course, every medical researcher would love to be published in NEJM, Lancet, JAMA or BMJ, but acceptance into these journals is extremely difficult, so being realistic about the possibility of one's paper being accepted is crucial. On one hand, one must not be dissuaded by the rejection rates of reputed journals; on the other, one must remember that a smaller specialty journal may reach the target audience one seeks. Moreover, authors should not overemphasize the journal impact factor but rather give due consideration to the speed and efficiency of the editorial handling of their manuscripts and to the quality and timeliness of the peer review.

Fifth, newer metrics such as the Eigenfactor can be used as alternatives to the journal impact factor.[23] The Eigenfactor not only uses citation data gathered over 5 years but also takes into account which journals the citations come from, weighting citations by the influence of the citing journal rather than treating all citations equally. There is an added provision for a self-citation check. Put simply, it ranks journals in a manner similar to the way Google's PageRank algorithm ranks the importance of websites in a search.
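To illustrate only the underlying idea, here is a minimal PageRank-style sketch over a hypothetical three-journal citation network (the real Eigenfactor algorithm additionally uses a 5-year citation window, excludes self-citations and applies further normalizations):

    import numpy as np

    # Hypothetical citation counts: C[i, j] = citations from journal j to journal i.
    # The zero diagonal mirrors the Eigenfactor check against self-citation.
    C = np.array([[ 0, 30, 10],
                  [20,  0, 40],
                  [ 5, 15,  0]], dtype=float)

    P = C / C.sum(axis=0)  # column-normalize: each column shows where a journal's citations go

    rank = np.full(3, 1 / 3)  # start from equal influence
    for _ in range(100):      # power iteration: influence flows from citing to cited journals
        rank = P @ rank

    print(rank / rank.sum())  # relative influence of the three hypothetical journals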

Impact factors should be treated with caution. Until the deficiencies in the system have been corrected and its limitations better understood, the journal impact factor remains a relatively crude index for evaluating a particular journal. Relying entirely on a single metric-based assessment such as the journal impact factor may even discourage ethical research: academics cannot take risks or think prospectively when they cannot afford to work on something that might not lead to citations. Hence, institutions need to establish robust evaluative policies that base credibility on scientific content rather than on metrics alone.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
  References

1. Van Noorden R. Metrics: A profusion of measures. Nature 2010;465:864-6.
2. Pontille D, Torny D. The controversial policies of journal ratings: Evaluating social sciences and humanities. Res Eval 2010;19:347-60.
3. Wang JC. The impact factor and scientific journals. Global Spine J 2016;6:205-6.
4. Larivière V, Sugimoto CR. The journal impact factor: A brief history, critique, and discussion of adverse effects. In: Springer Handbook of Science and Technology Indicators; 2018. p. 1-33.
5. Van Noorden R. The science that's never been cited. Nature 2017;552:162-4.
6. Schekman R. How journals like Nature, Cell and Science are damaging science. The Guardian; 2013. Available from: https://www.theguardian.com/commentisfree/2013/dec/09/how-journals-nature-science-cell-damage-science. [Last retrieved on 2019 Mar 12].
7. Journal Citation Reports 2018. Available from: https://clarivate.com/wp-content/uploads/2018/06/Crv_JCR_First-Impact-Factor-List_2018_A4_v3.pdf. [Last retrieved on 2019 Mar 12].
8. Journal Citation Reports – Clarivate. Available from: https://clarivate.com/products/journal-citation-reports/. [Last retrieved on 2019 Mar 12].
9. Cordero RJ, de León-Rodriguez CM, Alvarado-Torres JK, Rodriguez AR, Casadevall A. Life science's average publishable unit (APU) has increased over the past two decades. PLoS One 2016;11:e0156983.
10. Falagas ME, Kavvadia P. “Eigenlob”: Self-citation in biomedical journals. FASEB J 2006;20:1039-42.
11. Miguel A, Martí-Bonmatí L. Self-citation: Comparison between Radiología, European Radiology and Radiology for 1997-1998. Eur Radiol 2002;12:248-52.
12. Hakkalamani S, Rawal A, Hennessy MS, Parkinson RW. The impact factor of seven orthopaedic journals: Factors influencing it. J Bone Joint Surg Br 2006;88:159-62.
13. Singh S. Toward more meaningful evaluation of contributions and journals across different specialties: Introducing specialty impact factor. Indian J Dermatol Venereol Leprol 2013;79:737-8.
14. Iftikhara M, Masood S, Song TT. Modified impact factor (MIF) at specialty level: A way forward. Procedia Soc Behav Sci 2012;69:631-64.
15. Bornmann L, Williams R. Can the journal impact factor be used as a criterion for the selection of junior researchers? A large-scale empirical study based on ResearcherID data. J Informetr 2017;11:788-99.
16. Madhan M, Gunasekaran S, Arunachalam S. Evaluation of research in India – Are we doing it right? Indian J Med Ethics 2018;3:221-9.
17. National Assessment and Accreditation Council. Guidelines for Institutions to opt out 'Non Applicable Metrics.' Bengaluru: National Assessment and Accreditation Council; 2018. Available from: http://www.naac.gov.in/docs/Guidelines_non_applicable_metric.pdf. [Last retrieved on 2019 Mar 12].
18. Hirsch JE. Does the h index have predictive power? Proc Natl Acad Sci U S A 2007;104:19193-8.
19. Government of India. University Grants Commission Notification. New Delhi: Government of India; 11 July, 2016. Available from: https://ugc.ac.in/pdfnews/3375714_API-4th-Amentment-Regulations-2016.pdf. [Last retrieved on 2019 Mar 12].
20. Chaddah P, Lakhotia SC. A policy statement on “dissemination and evaluation of research output in India” by the Indian National Science Academy (New Delhi). Proc Indian Natn Sci Acad 2018;84:319-29.
21. Ulrichsweb Global Serials Directory | U-M Library. Available from: https://www.lib.umich.edu/database/ulrichsweb-global-serials-directory. [Last retrieved on 2019 Mar 12].
22. Ulrichsweb – Frequently Asked Questions. Available from: http://www.ulrichsweb.com/ulrichsweb/faqs.asp. [Last retrieved on 2019 Mar 12].
23. Kianifar H, Sadeghi R, Zarifmahmoudi L. Comparison between impact factor, Eigenfactor metrics, and SCImago journal rank indicator of pediatric neurology journals. Acta Inform Med 2014;22:103-6.



 
 



 
