Similar articles
20 similar articles found (search time: 9 ms)
1.
Scientific production has been evaluated from very different perspectives, the best known of which are based essentially on the impact factors of the journals included in the Journal Citation Reports (JCR). This has not prevented simultaneous warnings about the dangers of their indiscriminate use in comparisons, because the biases built into these impact factors produce significant distortions that may invalidate the results obtained. Notable among such biases are those generated by differences in the propensity to cite across areas, journals and/or authors, by variations in the period over which impact materialises, and by the uneven presence of knowledge areas in the sample of journals covered by the JCR. While the traditional evaluation method consists of standardisation by subject category, recent studies have criticised this approach and offered new possibilities for making inter-area comparisons. In view of such developments, the present study proposes a novel approach to the measurement of scientific activity that attempts to lessen the aforementioned biases: a new impact factor, calculated for each journal, is combined with the grouping of the institutions under evaluation into homogeneous groups. An empirical application evaluates the scientific production of Spanish public universities in the year 2000, considering both the articles published in the multidisciplinary databases of the Web of Science (WoS) and the data on the journals contained in the Sciences and Social Sciences Editions of the Journal Citation Reports (JCR). All this information is provided by the Institute for Scientific Information (ISI) via its Web of Knowledge (WoK).

2.
In this paper, we analysed six indicators of the SCI Journal Citation Reports (JCR) over a 19-year period: number of total citations, number of citations to the two previous years, number of source items, impact factor, immediacy index and cited half-life. The JCR seems to have become more or less an authority for evaluating scientific and technical journals, essentially through its impact factor. However, it is difficult to find one's way about in the impressive mass of quantitative data that the JCR provides each year. We proposed the box plot method to aggregate the values of each indicator so as to obtain, at a glance, portrayals of the JCR population from 1974 to 1993. These images reflected the distribution of the journals into four groups designated low, central, high and extreme. The limits of the groups became a reference system with which, for example, a given journal could rapidly be situated visually within the overall JCR population. Moreover, the box plot method, which gives a zoom effect, made it possible to visualize a large sub-population of the JCR usually overshadowed by the journals at the top of the rankings. These top-level journals implicitly play the role of reference in evaluation processes, which often invites categorical judgements when the journals being evaluated are not part of the top level. Our "rereading" of the JCR, which presents the product differently, made it possible to qualify these judgements and shed new light on journals.
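The four-group partition described in this abstract can be sketched numerically. A minimal illustration, assuming Tukey-style quartile limits with a 1.5×IQR upper fence as the boundary of the "extreme" group (the paper's exact group limits may differ), applied to hypothetical impact factors:

```python
from statistics import quantiles

def boxplot_groups(values):
    """Label each value 'low', 'central', 'high' or 'extreme' using box-plot limits."""
    q1, _, q3 = quantiles(values, n=4)        # first and third quartiles
    upper_fence = q3 + 1.5 * (q3 - q1)        # assumed boundary for 'extreme'
    labels = []
    for v in values:
        if v > upper_fence:
            labels.append("extreme")
        elif v > q3:
            labels.append("high")
        elif v >= q1:
            labels.append("central")
        else:
            labels.append("low")
    return labels

# Hypothetical impact factors for one JCR indicator
ifs = [0.2, 0.5, 0.8, 1.0, 1.1, 1.3, 1.6, 2.0, 2.4, 9.5]
groups = boxplot_groups(ifs)
```

The zoom effect described in the abstract corresponds to the large "central" group being resolved instead of being flattened by the single outlier at 9.5.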

3.
On first quartile journals which are not of highest impact
Here we study the relationship between journal quartile rankings by ISI impact factor (for 2010) and journal classification into four impact classes, i.e., highest-impact, medium-highest-impact, medium-lowest-impact, and lowest-impact journals, in the subject category Computer Science, Artificial Intelligence. To this aim, we use fuzzy maximum likelihood estimation clustering to identify groups of journals sharing similar characteristics in a multivariate indicator space. The seven variables used in this analysis are: (1) SCImago Journal Rank (SJR); (2) H-Index (H); (3) ISI impact factor (IF); (4) 5-Year Impact Factor (5IF); (5) Immediacy Index (II); (6) Eigenfactor Score (ES); and (7) Article Influence Score (AIS). The fuzzy clustering allows impact classes to overlap, thereby accommodating uncertainty about the impact-class attribution of a journal and vagueness in the definition of the impact classes. This paper demonstrates the complex relationship between quartiles of ISI impact factor and journal impact classes in the multivariate indicator space, and argues that several indicators should be used for a distinct analysis of structural changes in the score distribution of journals in a subject category; we propose that such an analysis can be performed in a multivariate indicator space using a fuzzy classifier.
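The overlapping-class idea can be illustrated with a toy fuzzy clustering. Note the substitution: the paper uses fuzzy maximum likelihood estimation clustering on seven indicators, whereas this sketch uses plain fuzzy c-means on a single hypothetical indicator, purely to show how each journal receives a graded membership in every impact class rather than a crisp label:

```python
def fuzzy_c_means(points, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy c-means: returns centers and one membership row per point."""
    lo, hi = min(points), max(points)
    # deterministic initialisation: spread the centers over the data range
    centers = [lo + (hi - lo) * i / (c - 1) for i in range(c)]
    u = []
    for _ in range(iters):
        # membership update: inverse-distance weighting with fuzzifier m
        u = []
        for x in points:
            d = [abs(x - v) + 1e-12 for v in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0)) for j in range(c))
                      for i in range(c)])
        # center update: membership-weighted means
        centers = [sum(u[k][i] ** m * points[k] for k in range(len(points)))
                   / sum(u[k][i] ** m for k in range(len(points)))
                   for i in range(c)]
    return centers, u

# Hypothetical values of one impact indicator for six journals
scores = [0.3, 0.4, 0.5, 2.8, 3.0, 3.2]
centers, memberships = fuzzy_c_means(scores, c=2)
```

Each row of `memberships` sums to one, so a journal near a class boundary is shared between classes, which is exactly the uncertainty the abstract describes.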

4.
Towards appropriate indicators of journal impact
This paper reviews a range of studies conducted by the authors on indicators reflecting scholarly journal impact. A critical examination of the journal impact data in the Journal Citation Reports (JCR), published by the Institute for Scientific Information (ISI), has shown that the JCR impact factor is inaccurate and biased towards journals revealing a rapid maturing or decline in impact. In addition, it was found that the JCR cited half-life is an inappropriate measure of the decline of journal impact. More appropriate impact measures of scholarly journals are proposed, and a new classification system is explored, describing both the maturing and the decline of journal impact as measured through citations. Suggestions for future research are made, analysing in more detail the distribution of citations among the papers in a journal.

5.
This paper identifies and presents some characteristics of the psychology journals included in each of the Journal Citation Reports (JCR) categories in 2002. The study shows that most of the journals belong to the categories of Multidisciplinary Psychology (102) and Clinical Psychology (83). Their ranking varies depending on the category, and the same journal may occupy different positions in different JCR categories. Journals included in the categories of Biological Psychology, Experimental Psychology and Multidisciplinary Psychology had the highest impact factors (IF).

6.
The journal impact factor (JIF), proposed by Garfield in 1955, is one of the most commonly used and prominent citation-based indicators of the performance and significance of a scientific journal. The JIF is simple, reasonable, clearly defined, comparable over time and, what is more, easily calculated from data provided by Thomson Reuters, but it suffers from serious technical and methodological flaws. The paper discusses one of the core problems: the JIF is affected by bias factors (e.g., document type) that have nothing to do with the prestige or quality of a journal. To solve this problem, we suggest using the generalized propensity score methodology based on the Rubin Causal Model. Citation data for papers of all journals in the ISI subject category "Microscopy" (Journal Citation Reports) are used to illustrate the proposal.

7.
This work proposes an entropy-based disciplinarity indicator (EBDI) which allows the classification of scientific journals into four classes: knowledge importer, knowledge exporter, disciplinary and interdisciplinary, with regard to the discipline(s) in which they are classified. Assuming that the set of references in the papers published in a journal represents a significant part of their knowledge basis, the diversity (measured with Shannon's entropy) and the ratio between internal and external references (relative to the discipline in which the journal is classified) can provide a measure of the disciplinarity/interdisciplinarity of the journal in the reference dimension. The homologous analysis can be applied to the set of citations received by the papers published in the journal. In this article, an entropy-based indicator for the measurement of the disciplinarity of scientific journals is developed, applied (to the cited and citing dimensions) and discussed. The indicator takes finite values and is found to be theoretically consistent when tested against two definitions for bibliometric indicators. The combination of disciplinarity values in the citing and cited dimensions permits the classification of journals according to their knowledge importing/exporting profile (separately, with regard to the social sciences or the sciences), providing a taxonomy of the role of journals according to their importing, exporting, interdisciplinary or specialized profile with regard to the subject category in which they are classified. The EBDI and the resulting taxonomy are proposed and tested for the set of journals in the LIS subject category in JCR 2013 and for the sets of journals in Andrology and Legal Medicine in JCR 2015. Evidence of concurrent validity with journal co-classification patterns is found in all three sets of journals.
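The two quantities the abstract builds on, Shannon entropy of the reference distribution and the internal/external reference ratio, are straightforward to compute. A minimal sketch (the category names and reference counts below are hypothetical, and the paper's exact EBDI formula combining the two quantities is not reproduced here):

```python
import math

def shannon_entropy(counts):
    """Diversity (in bits) of a journal's references across subject categories."""
    total = sum(counts)
    return -sum((n / total) * math.log2(n / total) for n in counts if n > 0)

def internal_share(refs_by_category, own_category):
    """Share of references that stay inside the journal's own category."""
    return refs_by_category.get(own_category, 0) / sum(refs_by_category.values())

# Hypothetical reference counts for a journal classified in LIS
refs = {"Information Science & Library Science": 120,
        "Computer Science, Information Systems": 40,
        "Sociology": 40}

diversity = shannon_entropy(refs.values())                              # bits
internal = internal_share(refs, "Information Science & Library Science")
```

The same two functions applied to received citations instead of references give the cited-dimension analysis the abstract mentions.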

8.
The journal Impact Factor (IF) is not comparable among fields of science and social science because of systematic differences in publication and citation behaviour across disciplines. In this work, a decomposition of the field aggregate impact factor into five normally distributed variables is presented. Considering these factors, a principal component analysis is employed to find the sources of the variance in the Journal Citation Reports (JCR) subject categories of science and social science. Although publication and citation behaviour differs largely across disciplines, the principal components explain more than 78% of the total variance, and the average number of references per paper is not the primary factor explaining the variance in impact factors across categories. A category-normalized impact factor based on the JCR subject category list is proposed and compared with the IF. This normalization is achieved by considering all the indexing categories of each journal. An empirical application, with one hundred journals in two or more subject categories of economics and business, shows that the gap between rankings is reduced by around 32% in the journals analyzed. This gap is obtained as the maximum distance among the ranking percentiles from all categories in which each journal is included.
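The ranking-percentile gap used in this abstract can be illustrated for a journal indexed in two categories. A sketch under stated assumptions: the IF values and the two category populations below are hypothetical, and the paper's normalization may aggregate percentiles differently:

```python
def percentile_rank(value, population):
    """Fraction of journals in a category with an IF no greater than `value`."""
    return sum(1 for v in population if v <= value) / len(population)

# Hypothetical IF distributions for two JCR categories
economics = [0.4, 0.8, 1.2, 1.5, 2.0, 3.1]
business  = [0.3, 0.6, 0.9, 1.1, 1.4, 1.8, 2.5, 4.0]

jif = 1.5  # a journal indexed in both categories
p_econ = percentile_rank(jif, economics)
p_bus  = percentile_rank(jif, business)

# The "gap": maximum distance among the journal's ranking percentiles
gap = abs(p_econ - p_bus)
```

The same journal sits at different percentiles in each category, which is exactly the cross-category incomparability the normalization targets.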

9.
This study focuses on journals that lead their Web of Science (WoS) subject category ranking when the usual 2-year window for the Journal Impact Factor (JIF2) is used as the ranking variable, and examines evidence that contradicts their top-ranked position in the context of their group. The source data were obtained from all 177 WoS subject categories in the Science Edition of the 2015 Journal Citation Reports (JCR). I compared the journals in each WoS subject category with the leaders in terms of JIF2, the number of citable items, and the number of citations that contribute to the JIF2. Rankings were calculated with alternative metrics (for example, the Journal Impact Factor without self-citations and the Eigenfactor), and the minimum reduction in the number of citations that would displace the top-ranked journal from its leading position was also calculated. In addition, the stability of rankings over time, the number of WoS subject categories in which journals are leaders, the publishers that own leading journals, and the percentages of research articles (as opposed to review articles) published in different journals were also studied. In general, leading journals are not necessarily top-ranked in terms of the citations received or the number of citable items they publish. Most leaders maintained their position when other metrics were used instead of the JIF2, although rankings based on the Eigenfactor were at variance with this finding. The distribution of publishers was highly skewed, with a linear relationship between the cumulative number of publishers owning the top-ranked journal and the cumulative number of WoS subject categories. In only 85 subject categories (48%) was the percentage of research articles (not reviews) among the citable items greater than the mean percentage for the subject category. In 31 instances, leaders did not publish any research articles, only reviews.

10.
This paper presents the journal relative impact (JRI), an indicator for the scientific evaluation of journals. The JRI takes into account the different citation cultures of the Web of Science subject categories. It is calculated over a variable citation window, defined according to the time required by each subject category for the maturation of citations, and the document types considered in each subject category depend on their contribution to citations. The scientific performance of each journal is assessed relative to each subject category to which it belongs, allowing the comparison of journals from different fields. The results obtained show that the JRI can be used for the assessment of the scientific performance of a given journal, and that the SJR and SNIP should be used to complement the information provided by the JRI. The JRI presents good features such as stability over time and predictability.

11.
The paper is concerned with analysing what makes a great journal great in the sciences, based on quantifiable Research Assessment Measures (RAM). Alternative RAM are discussed, with an emphasis on the Thomson Reuters ISI Web of Science database (hereafter ISI). Various ISI RAM that are calculated annually or updated daily are defined and analysed, including the classic 2-year impact factor (2YIF), 5-year impact factor (5YIF), Immediacy (or 0-year impact factor (0YIF)), Eigenfactor, Article Influence, C3PO (Citation Performance Per Paper Online), h-index, Zinfluence, PI-BETA (Papers Ignored—By Even The Authors), Impact Factor Inflation (IFI), and three new RAM, namely the Historical Self-citation Threshold Approval Rating (H-STAR), the 2-Year Self-citation Threshold Approval Rating (2Y-STAR), and the Cited Article Influence (CAI). The RAM data are analysed for the six most highly cited journals in 20 highly varied and well-known ISI categories in the sciences, where the journals are chosen on the basis of 2YIF. The application to these 20 ISI categories could be used as a template for other ISI categories in the sciences and social sciences, and as a benchmark for newer journals in a range of ISI disciplines. In addition to evaluating the six most highly cited journals in each of the 20 ISI categories, the paper also highlights the similarities and differences in alternative RAM, finds that several RAM capture similar performance characteristics for the most highly cited scientific journals, and determines that PI-BETA is not highly correlated with the other RAM and hence conveys additional information regarding research performance. In order to provide a meta-analysis summary of the RAM, which are predominantly ratios, harmonic mean rankings of the 13 RAM are presented for the six most highly cited journals in each of the 20 ISI categories. It is shown that emphasizing THE impact factor, specifically the 2-year impact factor, of a journal to the exclusion of other informative RAM can lead to a distorted evaluation of journal performance and influence on different disciplines, especially in view of inflated journal self-citations.
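The harmonic mean ranking used for the meta-analysis summary is simple to reproduce. A minimal sketch with hypothetical journals and ranks (the real study uses 13 RAM per journal; four are used here for brevity):

```python
def harmonic_mean(values):
    """Harmonic mean of a list of (positive) rank positions."""
    return len(values) / sum(1.0 / v for v in values)

# Hypothetical ranks of three journals under four different RAM
ram_ranks = {
    "Journal A": [1, 2, 4, 4],
    "Journal B": [3, 1, 1, 2],
    "Journal C": [2, 3, 2, 1],
}

# Overall ordering: lower harmonic mean rank = better summary performance
summary = sorted(ram_ranks, key=lambda j: harmonic_mean(ram_ranks[j]))
```

Because the harmonic mean rewards consistently good ranks rather than a single first place, Journal B outranks Journal A here even though A leads under one RAM.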

12.
Journal self-citation rates in ecological sciences
Impact factors are a widely accepted means for the assessment of journal quality. However, journal editors can influence the impact factor of their journals, for example by requesting that authors cite additional papers published recently in the journal, thus increasing the self-citation rate. I calculated the self-citation rates of journals ranked in the Journal Citation Reports of ISI in the subject category “Ecology” (n = 107). On average, self-citation was responsible for 16.2 ± 1.3% (mean ± SE) of the impact factor in 2004. Self-citation rates decrease with increasing journal impact, but even high-impact journals show large variation. Six journals suspected of requesting additional citations showed high self-citation rates, which increased over the last seven years. To avoid further deliberate increases in self-citation rates, I suggest taking journal-specific self-citation rates into account in journal rankings.
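The self-citation share behind a journal's impact factor is plain arithmetic on the JCR counts. A sketch with hypothetical figures, chosen so the share matches the 16.2% average reported in the abstract:

```python
def impact_factor(cites_to_prev_two_years, citable_items_prev_two_years):
    """Classic 2-year impact factor: citations divided by citable items."""
    return cites_to_prev_two_years / citable_items_prev_two_years

def self_citation_share(self_cites, total_cites):
    """Fraction of the IF-contributing citations that come from the journal itself."""
    return self_cites / total_cites

# Hypothetical 2004 figures for one ecology journal
total_cites = 500   # citations in 2004 to items published 2002-2003
self_cites  = 81    # of which from the journal's own papers
items       = 200   # citable items published 2002-2003

jif       = impact_factor(total_cites, items)
corrected = impact_factor(total_cites - self_cites, items)  # IF without self-citations
share     = self_citation_share(self_cites, total_cites)
```

Here self-citation accounts for 16.2% of the impact factor, and removing it drops the JIF from 2.5 to about 2.1, which is the kind of correction the abstract proposes folding into rankings.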

13.
The impact factor is one of the most used scientometric indicators. Its proper and improper uses have been discussed extensively before; it has been criticized extensively, yet it is still here. In this paper I propose the journal report card, a set of measures, each with an easily comprehensible meaning, that together provide a fuller picture of a journal's standing. The measures in the report card include the impact factor, the h-index, the number of citations at different points on the ranked list of citations, the extent of uncitedness, and the coverage of the h-core. The report card is computed for two sets of journals: the top 20 journals in JCR 2010 overall, and the top 20 journals in JCR 2010 for the category Information and Library Science.

14.
The research performance of Thai researchers in various subject categories was evaluated using a new index, the "Impact Factor Point Average" (IFPA), by considering the number of papers published in journals listed in the Science Citation Index (SCI) database of the Institute for Scientific Information (ISI) for the years 1998-2002; the results were compared with the direct publication number (PN) and publication credit (PC) methods. The results suggested that the PN and PC indicators cannot be used for comparisons between fields or countries because of their strong field-dependence. The IFPA index, based on a normalization of differences in impact factors, rankings, and numbers of journal titles in different subject categories, was found to be simple and could be applied equitably for accurate assessment of the quality of research work in different subject categories. The results were found to depend on the evaluation method used. All evaluation methods indicated that Clinical Medicine ranked first in terms of the research performance of Thai scholars listed in the SCI database, but exhibited the lowest improvement in performance, while Chemistry was the most improved subject category.

15.
We use a new approach to study the ranking of journals in JCR categories. The objectives of this study were to evaluate empirically the effect of increases in citations on the computation of the journal impact factor (JIF) for a large set of journals, as measured by changes in JIF, and to ascertain the influence of additional citations on the rank order of journals according to their new JIFs within JCR groups. To do so, modified JIFs were computed by adding citations to the number used by Thomson Reuters to compute the JIF of journals listed in the JCR for 2008. We considered the effect on the rank order of a given journal of adding 1, 2, 3 or more citations to the number used to compute the JIF, keeping everything else equal (i.e., without changing the JIFs of the other journals in the group). The effect of additional citations on the internal structure of rankings in JCR groups increased with the number of citations added. In about one third of the JCR groups, about half the journals changed their rank order when 1–5 citations were added. However, in general the rank order tended to be relatively stable after small increases in citations.
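The "add citations to one journal, hold everything else fixed" experiment can be sketched directly. The journal names, JIFs and citable-item counts below are hypothetical; the point is that a journal with few citable items is far more rank-sensitive to a handful of extra citations:

```python
def jif_rank(jifs, journal):
    """1-based rank of `journal` within its JCR group (higher JIF ranks first)."""
    ordered = sorted(jifs, key=jifs.get, reverse=True)
    return ordered.index(journal) + 1

def rank_after_extra_cites(jifs, items, journal, extra):
    """Re-rank after adding `extra` citations to one journal, all else equal."""
    modified = dict(jifs)
    modified[journal] = (jifs[journal] * items[journal] + extra) / items[journal]
    return jif_rank(modified, journal)

# Hypothetical JCR group: JIFs and citable-item counts
group = {"J1": 2.10, "J2": 2.05, "J3": 2.00, "J4": 1.20}
items = {"J1": 100, "J2": 100, "J3": 20, "J4": 100}

before = jif_rank(group, "J3")
after  = rank_after_extra_cites(group, items, "J3", 3)  # just 3 extra citations
```

With only 20 citable items, three extra citations lift J3's JIF by 0.15 and move it from third place to first, whereas the same three citations would barely move a 100-item journal.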

16.
The purpose of this study is to analyze the hypothetical changes in the 2002 impact factor (IF) of the biomedical journals included in the Science Citation Index-Journal Citation Reports (SCI-JCR) when cites from 83 non-indexed Spanish journals on different medical specialties are also taken into account. A further goal is to identify the subject categories of the SCI-JCR with the largest increase in their IF, and to estimate the 2002 hypothetical impact factor (2002 HIF) of these 83 non-indexed Spanish journals. It is demonstrated that including cites from a selection of Spanish medical journals not indexed in the SCI-JCR produces a slight increase in the 2002 IF of the indexed journals, especially journals edited in the USA and the UK. More than half of the non-indexed Spanish journals have a higher 2002 HIF than that of the SCI-JCR-indexed journal with the lowest IF in the same subject category.

17.
The aim of this study was to ascertain the possible effect of journal self-citations on the increase in the impact factors of journals in which this scientometric indicator rose by a factor of at least four in only a few years. Forty-three journals meeting this criterion were selected from the Thomson Reuters (formerly ISI) Journal Citation Reports. Eight journals in which the absolute number of citations was lower than 20 in at least two years were excluded, so the final sample consisted of 35 journals. We found no proof of widespread manipulation of the impact factor through the massive use of journal self-citations.

18.
Citations from 1980 to 1988, obtained for fifty biomedical journals covered by the Journal Citation Reports (JCR), are studied. In purely numerical terms, the evolution of each journal citation, including its impact factor (IF), depends essentially on three variables for each journal: (i) the yearly rate of increase of items that could be cited (citable items), (ii) the relative yearly increment of the citing journals, and (iii) the relative yearly increment of citations. This mechanism gives rise to three standard patterns for journal citations: (i) annual impact factors increase each year (ascending evolution), (ii) annual impact factors remain the same each year (constant evolution), and (iii) annual impact factors decrease each year (descending evolution). The reason why some journal citation profiles do not fit the standard patterns is presumably that forces are at work that can alter the numerical mechanics described. The concepts of saturation/unsaturation of the demand for scientific information are introduced, showing how they are reflected in the impact factor figures for the journals cited.

19.
Here we show a novel technique for comparing subject categories, in which the prestige of the academic journals in each category is represented statistically by an impact-factor histogram. For each subject category we compute the probability of occurrence of scholarly journals with impact factors in different intervals, where impact is measured with the Thomson Reuters Impact Factor, the Eigenfactor Score, and the Immediacy Index. Given the probabilities associated with a pair of subject categories, our objective is to measure the degree of dissimilarity between them; to do so, we use an axiomatic characterization for predicting dissimilarity between subject categories. The scientific subject categories of the Web of Science in 2010 were used to test the proposed approach, benchmarking Cell Biology and Computer Science, Information Systems against the rest as two case studies. The former is best-in-class benchmarking, which involves studying the leading competitor category; the latter is strategic benchmarking, which involves observing how other scientific subject categories compete.
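The histogram-plus-dissimilarity pipeline can be sketched in a few lines. The abstract does not specify the axiomatically characterized dissimilarity measure, so this illustration substitutes the Jensen-Shannon divergence, a standard bounded dissimilarity between probability histograms; the bin edges and impact factors are hypothetical:

```python
import math

def histogram(values, edges):
    """Probability of occurrence of impact factors in each interval [edges[i], edges[i+1])."""
    counts = [0] * (len(edges) - 1)
    for v in values:
        for i in range(len(edges) - 1):
            if edges[i] <= v < edges[i + 1]:
                counts[i] += 1
                break
    total = sum(counts)
    return [c / total for c in counts]

def jensen_shannon(p, q):
    """Bounded (0..1 bit) dissimilarity between two probability histograms."""
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

edges = [0, 1, 2, 5, 50]  # hypothetical impact-factor intervals
cell_biology = histogram([0.8, 1.5, 2.5, 3.0, 4.5, 6.0, 12.0], edges)
cs_info_sys  = histogram([0.3, 0.5, 0.8, 1.2, 1.6, 2.2], edges)
d = jensen_shannon(cell_biology, cs_info_sys)
```

A category compared against itself yields zero, and dissimilar impact-factor profiles yield a value approaching one bit, which is the behaviour a benchmarking comparison needs.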

20.
The main purpose of this study was to analyze the Italian journals indexed in the 2000 edition of the Journal Citation Reports (JCR), published by the Institute for Scientific Information (ISI) (Philadelphia, USA). The performance and visibility of these journals were evaluated in terms of Impact Factor (IF), mean IF of citing and cited journals, and self-citing and self-cited rates. Seventy-three Italian journals were indexed in the JCR, 14 of which achieved an IF equal to or higher than one. Most citing journals were European and American, showing a fairly good visibility of the articles published in the 14 journals analyzed. The self-citing and self-cited rates showed wide variation. The journal that appeared to perform best was the Journal of High Energy Physics, an electronic publication whose success seemingly confirms Internet circulation as an effective means to enhance the visibility, and consequently the quality in terms of citations, of a journal. Italy's low overall expenditure on research & development (R&D) and low number of researchers compared to countries with longstanding high publishing standards and traditions are no doubt partly to blame for its poor performance in scientific publishing. This revised version was published online in August 2006 with corrections to the Cover Date.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号