Similar articles
20 similar articles found.
1.
In this article I study characteristics of the journal impact factor (JIF) computed using a 5-year citation window, compared with the classical JIF computed using a 2-year citation window. Since 2007, ISI-Thomson Reuters has published the new 5-year impact factor in the JCR database. I studied changes in the distribution of JIFs when the citation window was enlarged. The distributions of journals according to their 5-year JIFs were very similar in all years studied, and were also similar to the distribution according to the 2-year JIFs. In about 72% of journals, the JIF increased when the longer citation window was used. Plots of 5-year JIFs against rank closely followed a beta function with two exponents. Thus, the 5-year JIF seems to behave very similarly to the 2-year JIF. The results also suggest that gains in JIF with the longer citation window tend to be distributed similarly in all years. Changes in these gains also tend to be distributed similarly from one year to the next.
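For reference, the two citation windows compared here can be written out as follows (these are the standard JCR definitions, not formulas quoted from the paper itself):

\mathrm{JIF}_2(Y)=\frac{C_Y(Y-1)+C_Y(Y-2)}{P(Y-1)+P(Y-2)},\qquad \mathrm{JIF}_5(Y)=\frac{\sum_{i=1}^{5}C_Y(Y-i)}{\sum_{i=1}^{5}P(Y-i)}

where C_Y(y) denotes the citations received in year Y by items the journal published in year y, and P(y) denotes the citable items it published in year y.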

2.
We use a new approach to study the ranking of journals in JCR categories. The objectives of this study were to empirically evaluate the effect of increases in citations on the computation of the journal impact factor (JIF) for a large set of journals, as measured by changes in JIF, and to ascertain the influence of additional citations on the rank order of journals according to their new JIFs within JCR groups. To do so, modified JIFs were computed by adding additional citations to the number used by Thomson Reuters to compute the JIF of journals listed in the JCR for 2008. We considered the effect on the rank order of a given journal of adding 1, 2, 3 or more citations to the number used to compute the JIF, keeping everything else equal (i.e., without changing the JIF of the other journals in a given group). The effect of additional citations on the internal structure of rankings in JCR groups increased with the number of citations added. In about one third of JCR groups, about half the journals changed their rank order when 1–5 citations were added. However, in general the rank order tended to be relatively stable after small increases in citations.

3.
The journal impact factor (JIF), proposed by Garfield in 1955, is one of the most commonly used and prominent citation-based indicators of the performance and significance of a scientific journal. The JIF is simple, reasonable, clearly defined, comparable over time and easily calculated from data provided by Thomson Reuters, but it suffers from serious technical and methodological flaws. The paper discusses one of the core problems: the JIF is affected by bias factors (e.g., document type) that have nothing to do with the prestige or quality of a journal. To solve this problem, we suggest using the generalized propensity score methodology based on the Rubin Causal Model. Citation data for papers in all journals of the ISI subject category “Microscopy” (Journal Citation Reports) are used to illustrate the proposal.

4.
Gold Open Access (i.e., Open Access publishing) is for many the preferred route to achieve unrestricted and immediate access to research output. However, true Gold Open Access journals are still outnumbered by traditional journals. Moreover, the availability of Gold OA journals differs from discipline to discipline and often leaves scientists concerned about the impact of the existing titles. This study identified the current set of Gold Open Access journals featuring a Journal Impact Factor (JIF) by means of Ulrichsweb, the Directory of Open Access Journals and the Journal Citation Reports (JCR). The results were analyzed with respect to disciplines, countries, quartiles of the JIF distribution in the JCR, and publishers. Furthermore, the temporal evolution of impact was studied for a list of the top 50 titles (by JIF) using the Journal Impact Factor, SJR and SNIP over the period 2000–2010. The identified top Gold Open Access journals proved to be well established, and their impact is generally increasing for all the analyzed indicators. The majority of JCR-indexed OA journals can be assigned to the Life Sciences and Medicine. The success rate for JCR inclusion differs from country to country and is often inversely proportional to the number of national OA journal titles. Compiling a list of JCR-indexed OA journals is a cumbersome task that can only be achieved with non-Thomson Reuters data sources. A corresponding automated feature to produce current lists “on the fly” would be desirable in the JCR in order to conveniently track the impact evolution of Gold OA journals.

5.
6.
Journal impact factors (JIF) have been an accepted indicator for ranking journals. However, there have been increasing arguments against the fairness of using the JIF as the sole ranking criterion. This has led to the creation of many other quality metrics, such as the h-index, g-index, immediacy index, citation half-life, and the SCImago Journal Rank (SJR), to name a few. All these metrics have their merits, but none includes any great degree of normalization in its computation: every citation and every publication is taken as having the same importance and therefore the same weight. The wealth of available data results in multiple different rankings and indexes. This paper proposes the use of statistical standard scores, or z-scores. Z-scores can be computed to normalize the impact factors of different journals, and the average of z-scores across various criteria can be used to create a unified relative measurement (RM) index score. We use the 2008 JCR provided by Thomson Reuters to demonstrate the differences in rankings that would result if the RM-index were adopted, and discuss the fairness that this index would bring to journal quality ranking.
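A minimal sketch of the z-score normalization idea described in this abstract; the function names, weighting and example values are illustrative assumptions, not taken from the paper, whose exact RM-index construction may differ:

```python
import statistics

def z_scores(values):
    """Standardize a list of metric values to zero mean and unit variance."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [(v - mean) / sd for v in values]

def rm_index(metrics_by_journal):
    """Average the z-scores of several criteria (e.g. JIF, SJR, h-index)
    per journal into a single relative-measurement (RM) score."""
    journals = list(metrics_by_journal)
    criteria = list(next(iter(metrics_by_journal.values())))
    z_by_criterion = {
        c: z_scores([metrics_by_journal[j][c] for j in journals]) for c in criteria
    }
    return {
        j: statistics.mean(z_by_criterion[c][i] for c in criteria)
        for i, j in enumerate(journals)
    }

# Illustrative values only, not real JCR data.
data = {
    "Journal A": {"JIF": 3.2, "SJR": 1.1, "h": 40},
    "Journal B": {"JIF": 1.0, "SJR": 0.4, "h": 15},
    "Journal C": {"JIF": 5.6, "SJR": 2.0, "h": 75},
}
for name, score in sorted(rm_index(data).items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:+.2f}")
```

Because each criterion is standardized before averaging, no single metric dominates the combined score simply because its raw scale is larger.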

7.
The article describes a method for the online determination of the journal impact factor (JIF). The method is very simple and can be used both for the ISI-defined journal impact factor and for the calculation of other generalised journal impact factors. However, the direct online method fails for non-ISI journals, i.e. journals not indexed by ISI in the three citation databases. For such journals, only the “External Cited Impact Factor” associated with citations from ISI journals, ECIF(ISI), can be determined online by the common method. As an extra benefit, the online method allows the geographical distribution of citations and citable units to be determined in relation to any given JIF, i.e. the international impact of a particular journal in a given year. The method is illustrated by calculating the generalised JIF, self-citations and ECIF(ISI), as well as the international impact, for Journal of Documentation and Scientometrics.
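A rough rendering of the external cited impact factor mentioned above, in the same notation as in item 1; this is my reconstruction from the abstract, and the paper's exact operational definition may differ:

\mathrm{ECIF}_{\mathrm{ISI}}(Y)=\frac{C^{\mathrm{ISI}}_Y(Y-1)+C^{\mathrm{ISI}}_Y(Y-2)}{P(Y-1)+P(Y-2)}

where C^{\mathrm{ISI}}_Y(y) counts only those citations received in year Y that come from ISI-indexed journals, so the measure can be computed online even for journals that ISI itself does not index.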

8.
The ISI journal impact factor (JIF) is based on a sample that may represent half the whole-of-life citations to some journals, but a small fraction (<10%) of the citations accruing to other journals. This disproportionate sampling means that the JIF provides a misleading indication of the true impact of journals, biased in favour of journals that have a rapid rather than a prolonged impact. Many journals exhibit a consistent pattern of citation accrual from year to year, so it may be possible to adjust the JIF to provide a more reliable indication of a journal’s impact.

9.
The International Journal for Numerical Methods in Engineering keeps its first place in the impact factor ranking for 2008 with a historic high impact factor of 2.229, as announced in the 2008 Journal Citation Reports® (Thomson Reuters, 2009). Copyright © 2009 John Wiley & Sons, Ltd.

10.
Currently the Journal Impact Factor (JIF) attracts considerable attention as a component in the evaluation of the quality of research in and between institutions. This paper reports on a questionnaire study of publishing behaviour and researchers' preferences for seeking new knowledge, and of the possible influence of the JIF on these variables. Fifty-four Danish medical researchers active in the field of diabetes research took part. We asked the researchers to prioritise a series of scientific journals with respect to which journals they prefer for publishing research and for gaining new knowledge. In addition, we asked the researchers to indicate whether or not the JIF of the prioritised journals had had any influence on these decisions. Furthermore, we explored the researchers' perception of the degree to which the JIF could be considered a reliable, stable or objective measure for determining the scientific quality of journals, and asked them to judge the applicability of the JIF as a measure for research evaluation. One remarkable result is that approximately 80% of the researchers share the opinion that the JIF does indeed have an influence on which journals they would prefer for publishing. Accordingly, we found a statistically significant correlation between how the researchers ranked the journals and the JIF of the ranked journals. Another notable result is that no significant correlation exists between the journals in which the researchers have actually published papers and the journals in which they would prefer to publish in the future, as measured by JIF. This could be taken as an indicator of the actual motivational influence of the JIF on the publication behaviour of the researchers; that is, the impact factor actually works in our case. It seems that the researchers find it fair and reliable to use the Journal Impact Factor for research evaluation purposes.

11.
I studied the distribution of changes in journal impact factors (JIF) between 1998 and 2007 according to an empirical beta law with two exponents. Changes in JIFs (CJIF) were calculated as the quotient obtained by dividing the JIF for a given year by the JIF for the preceding year. The CJIFs showed a good fit to a beta function with two exponents. In addition, I studied the distribution of the changes in segments of the CJIF rank order. These distributions, which were similar from year to year, could be fitted to a Lorentzian function. The methods used here can be useful for understanding the changes in JIFs using relatively simple functions.
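Written out, the change measure used above is the year-on-year quotient, and the two-exponent beta law referred to here (and in item 1) is commonly given in rank form as shown below; the symbols are mine rather than the paper's:

\mathrm{CJIF}_j(Y)=\frac{\mathrm{JIF}_j(Y)}{\mathrm{JIF}_j(Y-1)},\qquad f(r)=K\,\frac{(N+1-r)^{b}}{r^{a}}

where r is the rank of a journal when the values are sorted in decreasing order, N is the number of journals, K is a fitted constant and a, b are the two fitted exponents.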

12.

A previous study (https://doi.org/10.1007/s11192-020-03457-x) found a discrepancy between Elsevier’s CiteScore and Clarivate Analytics’ Journal Impact Factor (JIF) in library and information science (LIS) journals. One possible explanation for this discrepancy lies in the number and type of documents used to calculate these journal-based metrics. Using the top quartile of Scopus-indexed journals from 2011 to 2018, we assessed the number of documents for each journal and year that were indexed in Scopus and in Web of Science (WoS) in six fields of study: LIS, discrete mathematics and combinatorics (DMC), medicine: epidemiology (ME), agriculture and biological sciences (ABS), social science: demography (SSD), and environmental engineering (EE). The number of documents in WoS was higher than the number indexed in Scopus for four fields of study: LIS, ME, SSD and EE, with differences of 1653, 3931, 635 and 197 documents, respectively. For DMC and ABS, Scopus listed more documents than WoS for the same years and journals, the differences being 7 and 1284, respectively. The greater indexing of documents in WoS than in Scopus in four fields of study may explain why the JIF of top-ranking LIS journals differs from their CiteScore. To verify this possibility, one category (DMC) was examined in detail. Of the documents missing from the 16 DMC journals examined, 91.1% were articles, while the remaining 8.9% were corrections, an erratum, an editorial, an abstract report and in-press articles. There were no significant differences between the citation patterns in Scopus and WoS of the DMC journals' missing documents. Citations to missing documents may affect the CiteScore and JIF and should therefore be properly indexed.


13.
Both citations to an academic work and post-publication reviews of it are indicators that the work has had some impact on the research community. The Thomson Reuters evaluation and selection process for Web of Knowledge journals includes citation analysis, but this is not systematically practised for the evaluation of books for the Book Citation Index (BKCI), owing to the inconsistent methods of citing books, the volume of books and the variants of their titles, especially in non-English languages. Although correlations between citations to a book and the number of corresponding book reviews differ from research area to research area and are overall weak or non-existent, this study confirms that books with book reviews do not remain uncited and accrue a remarkable mean number of citations. Therefore, book reviews can be considered a suitable selection criterion for the BKCI. The approach suggested in this study is feasible and allows easy detection of the corresponding books via their book reviews, which is particularly true for research areas where books play a more important role, such as the social sciences and the arts and humanities.

14.
This paper presents a study of possible changes in the patterns of document types in economics journals since the mid-1980s. Furthermore, the study includes an analysis of a possible relation between a journal's profile in terms of the composition of document types and factors such as place of publication and JIF. The results provide little evidence that journal editors have succeeded in manipulating the distribution of document types. Furthermore, there is little support for the hypothesis that journal editors decrease the number of publications included in the calculation of the JIF, or, for that matter, for the hypothesis that journal editors increase the number of publications not included in the calculation of the JIF. The results of the analyses show that there is a clear distinction among journals based on place of publication and JIF.

15.
We studied the effect on journal impact factors (JIF) of citations from documents labeled as articles and reviews (usually peer reviewed) versus citations coming from other documents. In addition, we studied the effect on the JIF of the number of citing records, which is usually different from the number of citations. We selected a set of 700 journals indexed in the SCI section of the JCR that receive a low number of citations; in such journals, a few citations may have a greater impact on the JIF than in more highly cited journals. After excluding some journals for different reasons, our sample consisted of 674 journals. We obtained data on the citations that contributed to the JIF for the years 1998–2006. In general, we found that most journals obtained the citations that contribute to the impact factor from documents labeled as articles and reviews. In addition, in most journals the ratio between citations that contributed to the impact factor and citing records was greater than 80% in all years. Thus, in general, we did not find evidence that the citations contributing to the impact factor depended on non-peer-reviewed documents or on only a few citing records.

16.
Here we show a novel technique for comparing subject categories, in which the prestige of the academic journals in each category is represented statistically by an impact-factor histogram. For each subject category we compute the probability of occurrence of scholarly journals with an impact factor in different intervals. Here impact is measured with the Thomson Reuters Impact Factor, the Eigenfactor Score, and the Immediacy Index. Given the probabilities associated with a pair of subject categories, our objective is to measure the degree of dissimilarity between them. To do so, we use an axiomatic characterization for predicting the dissimilarity between subject categories. The scientific subject categories of Web of Science in 2010 were used to test the proposed approach, with the benchmarking of Cell Biology and of Computer Science Information Systems against the rest serving as two case studies. The former is best-in-class benchmarking, which involves studying the leading competitor category; the latter is strategic benchmarking, which involves observing how other scientific subject categories compete.
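A small sketch of the histogram representation described above. The bin edges, journal values and the dissimilarity measure (total variation distance) are illustrative assumptions; the paper derives its own axiomatically characterized measure, which the abstract does not spell out:

```python
import numpy as np

def category_distribution(impact_factors, bins):
    """Probability that a journal in the category falls in each impact-factor interval."""
    counts, _ = np.histogram(impact_factors, bins=bins)
    return counts / counts.sum()

def dissimilarity(p, q):
    """Illustrative stand-in for the paper's measure: total variation distance."""
    return 0.5 * float(np.abs(p - q).sum())

# Hypothetical impact factors for two subject categories.
bins = [0, 1, 2, 4, 8, 16, 64]
cell_biology = [5.1, 7.8, 3.2, 12.4, 2.1, 6.0, 9.3]
cs_information_systems = [0.8, 1.4, 2.2, 0.5, 3.1, 1.9, 1.1]

p = category_distribution(cell_biology, bins)
q = category_distribution(cs_information_systems, bins)
print(f"dissimilarity = {dissimilarity(p, q):.3f}")
```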

17.
18.
Article-count impact factor of materials science journals in SCI database
This article proposes a new index, the “Article-Count Impact Factor” (ACIF), for evaluating journal quality in light of citation behaviour, in comparison with the ISI journal impact factor. The ACIF index is the ratio of the number of articles that were cited in the current year to the source items published in that journal during the previous two years. In this work, we used as the data source 171 journal titles in materials categories published in the years 2001–2004 and indexed in the Science Citation Index Expanded (SCI) database. It was found that the ACIF index could be used as an alternative tool for assessing journal quality, particularly where the assessed journals have the same (equal or similar) JIF values. The experimental results suggested that the higher the ACIF value, the greater the number of articles being cited. The changes in ACIF values were more dependent on the JIF values than on the total number of articles. Polymer Science had the greatest ACIF values, suggesting that articles in Polymer Science had a higher “citation per article” rate than those in Metallurgical Engineering and Ceramics. It was also suggested that, in order to increase the JIF value by 1.000, the Ceramics category required more articles to be cited than the Metallurgical Engineering and Polymer Science categories.
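As described in the abstract, the ACIF can be written roughly as follows (my notation, not the authors'; the numerator is presumably restricted to articles from the same two-year window, by analogy with the JIF):

\mathrm{ACIF}(Y)=\frac{A_{\mathrm{cited}}(Y)}{P(Y-1)+P(Y-2)}

where A_{\mathrm{cited}}(Y) is the number of the journal's articles from the previous two years that receive at least one citation in year Y, and P(y) is the number of source items published in year y.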

19.
Relationships between publication language, impact factors and self-citations of journals published in individual countries, eight from Europe and one from South America (Brazil), are analyzed using bibliometric data from the Thomson Reuters JCR Science Edition databases of ISI Web of Knowledge. It was found that: (1) English-language journals, as a rule, have higher impact factors than non-English-language journals; (2) all countries investigated in this study have journals with very high self-citations, but the proportion of journals with high self-citations relative to the total number of journals published in each country varies enormously; (3) there are relatively high percentages of low self-citations in high subject-category journals published in English as well as in non-English journals, but national-language journals have higher self-citations than English-language journals; and (4) irrespective of the publication language, journals devoted to very specialized scientific disciplines, such as electrical and electronic engineering, metallurgy, environmental engineering, surgery, general and internal medicine, pharmacology and pharmacy, gynecology, entomology and multidisciplinary engineering, have high self-citations.

20.
The journal impact factor (JIF) has long been used for journal evaluation, but it has also been accompanied by continuing controversy. In this study, a new indicator, the Journal's Integrated Impact Index (JIII), is proposed for journal evaluation. In the JIII, a journal's average citations per paper and total citations, together with the average levels of these quantities across all journals, are used to characterize the integrated impact of journals. Some comparative analyses were carried out between the JIII and the JIF. The results show some interesting properties of the new indicator, and also reveal some relevant relationships among the JIII, the JIF, and other bibliometric indicators.
