Found 20 similar documents; search took 703 ms
1.
Identifying "highly-rated" journals - an Australian case study (Total citations: 1; self-citations: 1; citations by others: 0)
2.
Summary We investigated the distribution of citations included in documents labeled by the ISI as “editorial material” and how they
contribute to the impact factor of journals in which the citing items were published. We studied all documents classified
by the ISI as “editorial material” in the Science Citation Index between 1999 and 2004 (277,231 records corresponding to editorial
material published in 6141 journals). The results show that most journals published only a few documents that included 1 or
2 citations that contributed to the impact factor, although a few journals published many such documents. The data suggest
that manipulation of the impact factor by publishing large amounts of editorial material with many citations to the journal
itself is not a widely used strategy to increase the impact factor.
3.
Journals that increase their impact factor at least fourfold in a few years: The role of journal self-citations (Total citations: 3; self-citations: 0; citations by others: 3)
The aim of this study was to ascertain the possible effect of journal self-citations on the increase in the impact factors
of journals in which this scientometric indicator rose by a factor of at least four in only a few years. Forty-three journals
were selected from the Thomson Reuters (formerly ISI) Journal Citation Reports as meeting the above criterion. Eight journals
in which the absolute number of citations was lower than 20 in at least two years were excluded, so the final sample consisted
of 35 journals. We found no proof of widespread manipulation of the impact factor through the massive use of journal self-citations.
4.
The paper is concerned with analysing what makes a great journal great in the sciences, based on quantifiable Research Assessment
Measures (RAM). Alternative RAM are discussed, with an emphasis on the Thomson Reuters ISI Web of Science database (hereafter
ISI). Various ISI RAM that are calculated annually or updated daily are defined and analysed, including the classic 2-year
impact factor (2YIF), 5-year impact factor (5YIF), Immediacy (or 0-year impact factor (0YIF)), Eigenfactor, Article Influence,
C3PO (Citation Performance Per Paper Online), h-index, Zinfluence, PI-BETA (Papers Ignored - By Even The Authors), Impact Factor
Inflation (IFI), and three new RAM, namely Historical Self-citation Threshold Approval Rating (H-STAR), 2 Year Self-citation
Threshold Approval Rating (2Y-STAR), and Cited Article Influence (CAI). The RAM data are analysed for the 6 most highly cited
journals in 20 highly-varied and well-known ISI categories in the sciences, where the journals are chosen on the basis of
2YIF. The application to these 20 ISI categories could be used as a template for other ISI categories in the sciences and
social sciences, and as a benchmark for newer journals in a range of ISI disciplines. In addition to evaluating the 6 most
highly cited journals in each of 20 ISI categories, the paper also highlights the similarities and differences in alternative
RAM, finds that several RAM capture similar performance characteristics for the most highly cited scientific journals, determines
that PI-BETA is not highly correlated with the other RAM, and hence conveys additional information regarding research performance.
In order to provide a meta-analysis summary of the RAM, which are predominantly ratios, harmonic mean rankings are presented
of the 13 RAM for the 6 most highly cited journals in each of the 20 ISI categories. It is shown that emphasizing the impact
factor, specifically the 2-year impact factor, of a journal to the exclusion of other informative RAM can lead to a distorted
evaluation of journal performance and influence on different disciplines, especially in view of inflated journal self-citations.
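The harmonic mean ranking described in this abstract can be sketched as follows. The journal names and per-RAM rank values are invented for illustration, and the paper's exact aggregation details are an assumption; the sketch only shows why a harmonic mean of ranks rewards consistently strong placement.

```python
from statistics import harmonic_mean

# Rank of each hypothetical journal under three hypothetical RAM (1 = best).
ranks = {
    "Journal A": [1, 2, 4],
    "Journal B": [3, 1, 1],
    "Journal C": [2, 3, 2],
}

# Harmonic mean of the ranks; a lower value means a better overall standing.
hm = {journal: harmonic_mean(r) for journal, r in ranks.items()}
ordering = sorted(hm, key=hm.get)
print(ordering)
```

Because the harmonic mean is dominated by the smallest values, a journal that tops one or two measures (Journal B here) can outrank one that is merely good everywhere.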
5.
The article describes a method for the online determination of the journal impact factor (JIF). The method is very simple
and can be used both for the ISI-defined journal impact factor and for the calculation of other generalised journal impact
factors. However, the direct online method fails for non-ISI journals, i.e. journals not indexed by ISI in its three citation databases.
For such journals, only the "External Cited Impact Factor" associated with citations from ISI journals (ECIF(ISI)) can be determined
online by the common method. As an extra benefit, the online method enables the determination of the geographical distribution
of citations and citable units in relation to any given JIF, i.e. the international impact of a particular journal in a given
year. The method is illustrated by calculating the generalised JIF, self-citations and ECIF(ISI), as well as the international
impact, for Journal of Documentation and Scientometrics.
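As a concrete reminder of the quantity being generalised, the classic two-year JIF and an "external" variant that removes self-citations can be sketched as below. The counts are invented; ECIF(ISI) as defined in the paper restricts citing journals to ISI sources, which this sketch only approximates by subtracting self-citations.

```python
def two_year_jif(cites_in_census_year: int, citable_items_prev2: int) -> float:
    """Classic 2-year impact factor: citations received in the census year
    by items published in the two preceding years, divided by the number
    of citable items published in those two years."""
    return cites_in_census_year / citable_items_prev2

# Invented counts for a hypothetical journal:
total_cites, self_cites, items = 450, 90, 300
jif = two_year_jif(total_cites, items)                        # includes self-citations
external_jif = two_year_jif(total_cites - self_cites, items)  # self-citations removed
print(jif, external_jif)
```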
6.
Julia Osca-Lluch, Pedro Blesa, José Manuel Barrueco, Elena Velasco, Thomas Krichel. Scientometrics, 2008, 75(2): 313-318
This paper studies the main characteristics of the citation indexes currently developed in Spain. The paper compares the impact
factors offered by Spanish citation indexes with the impact factor of Spanish journals also collected by the JCRs of the ISI
(SCI and SSCI) over a five-year period (2001–2005). Spanish journals published in English have higher impact factor scores
in the JCR databases of the ISI than in Spanish citation indexes.
7.
This article introduces a new modified method for calculating the impact factor of journals, based on the current ISI practice
in generating journal impact factor values. In the proposed method, the so-called Cited Half-Life Impact Factor (CHAL) method,
the impact factor value for a journal is the ratio of the number of current-year citations of articles
from the previous X years to the number of articles published in the previous X years, the X value being equal to the value of the
cited half-life of the journal in the current year. Thirty-four journals in the Polymer Science Category from the ISI Subject
Heading Categories were selected and examined. Total citations, impact factors and cited half-life of the 34 journals during
the last five years (1997-2001) were retrieved from the ISI Journal Citation Reports and used as the data source for the
calculations in this work; the impact factor values from the ISI and CHAL methods were then compared. The positions of the
journals ranked by impact factors obtained from the ISI method differed from those obtained from the CHAL method. It was
concluded that the CHAL method is more suitable than the existing ISI method for calculating the impact factor of journals.
This revised version was published online in June 2006 with corrections to the Cover Date.
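A minimal sketch of the CHAL calculation as described above. The counts are invented, and the rounding of a fractional cited half-life to a whole number of years is an assumption (the paper's exact rounding rule is not stated in this abstract).

```python
def chal_impact_factor(cites_by_age, items_by_prior_year, cited_half_life):
    """Cited Half-Life Impact Factor (CHAL): current-year citations to
    articles from the previous X years, divided by the number of articles
    published in those X years, with X equal to the journal's cited
    half-life in the current year.

    cites_by_age[k]        - citations received this year by articles k years old
    items_by_prior_year[k] - articles published (k + 1) years ago
    """
    x = round(cited_half_life)  # assumption: half-life rounded to whole years
    cites = sum(cites_by_age[1 : x + 1])   # ages 1..X years back
    items = sum(items_by_prior_year[:x])   # articles in the previous X years
    return cites / items

# Invented example: a cited half-life of 4.3 years gives X = 4.
chal = chal_impact_factor(
    cites_by_age=[10, 120, 100, 80, 60, 40],
    items_by_prior_year=[90, 90, 90, 90, 90],
    cited_half_life=4.3,
)
print(chal)
```

Setting X = 2 and ignoring the half-life recovers the ordinary 2-year impact factor, which is what makes CHAL a generalisation of the ISI measure.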
8.
Jerome K. Vanclay. Scientometrics, 2009, 78(1): 3-12
The ISI journal impact factor (JIF) is based on a sample that may represent half the whole-of-life citations to some journals,
but a small fraction (<10%) of the citations accruing to other journals. This disproportionate sampling means that the JIF
provides a misleading indication of the true impact of journals, biased in favour of journals that have a rapid rather than
a prolonged impact. Many journals exhibit a consistent pattern of citation accrual from year to year, so it may be possible
to adjust the JIF to provide a more reliable indication of a journal’s impact.
9.
This paper presents the results of an exploratory bibliometric study aiming at an analysis of basic high energy physics (HEP)
research impact on fields other than physics, and particularly on application-oriented R&D.
After a general discussion of an extensive citation analysis of basic research publications from three HEP institutes (CERN,
DESY, and SLAC), the paper focuses on the ‘knowledge flow’ from physics to non-physics, and more specifically the flow from
basic physics research to the ‘applied world’. At this level, we report journal-level as well as research-field characteristics,
and we identify the most frequently citing R&D groups.
We conclude that DESY is most cited by the ‘applied world’, followed by SLAC and CERN. If the number of journals that institutes
have in common, whether based on the source or the citing publication, is taken as an indicator of the resemblance of their
research interests, then CERN and SLAC have the closest resemblance, followed by SLAC and DESY, with CERN and DESY
having the least in common.
10.
Journal self-citation rates in ecological sciences (Total citations: 1; self-citations: 0; citations by others: 1)
Jochen Krauss. Scientometrics, 2007, 73(1): 79-89
Impact factors are a widely accepted means for the assessment of journal quality. However, journal editors have ways to
influence the impact factor of their journals, for example by requesting that authors cite additional papers published
recently in the journal, thus increasing the self-citation rate. I calculated self-citation rates of journals ranked in the
Journal Citation Reports of ISI in the subject category “Ecology” (n = 107). On average, self-citation was responsible for
16.2 ± 1.3% (mean ± SE) of the impact factor in 2004. Self-citation rates decrease with increasing journal impact, but even
high-impact journals show large variation. Six journals suspected of requesting additional citations showed high
self-citation rates, which increased over the last seven years. To avoid further deliberate increases in self-citation
rates, I suggest taking journal-specific self-citation rates into account in journal rankings.
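The per-journal quantity behind the 16.2 ± 1.3% figure can be sketched as follows. The citation counts are invented, and the exact impact-factor citation window used in the paper is an assumption; the sketch only shows the mean ± SE computation over per-journal self-citation shares.

```python
from statistics import mean, stdev

def self_citation_rates(journals):
    """Percentage of impact-factor citations that are journal self-citations,
    for a list of (self_citations, total_citations) pairs covering the
    impact-factor window of each journal."""
    return [100.0 * s / t for s, t in journals]

# Invented counts for three hypothetical ecology journals:
rates = self_citation_rates([(20, 100), (10, 100), (30, 100)])
avg = mean(rates)                      # mean self-citation share (%)
se = stdev(rates) / len(rates) ** 0.5  # standard error of the mean
print(avg, se)
```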
11.
Dan Ben-David. Scientometrics, 2010, 82(2): 351-364
One of the more important measures of a scholar’s research impact is the number of times that the scholar’s work is cited
by other researchers as a source of knowledge. This paper conducts a first of its kind examination on Israel’s academic economists
and economics departments, ranking them according to the number of citations on their work. It also provides a vista into
one of the primary reasons given by junior Israeli economists for an unparalleled brain drain from the country: discrepancies
between research impact and promotion. The type of examination carried out in this paper can now be easily replicated in other
fields and in other countries utilizing freely available citations data and compilation software that have been made readily
accessible in recent years.
12.
Summary The research performance of Thai researchers in various subject categories was evaluated using a new mathematical index, the
“Impact Factor Point Average” (IFPA), by considering the number of papers published in journals listed in the Science Citation
Index (SCI) database of the Institute for Scientific Information (ISI) for the years 1998-2002; the results were compared
with the direct publication number (PN) and publication credit (PC) methods. The results suggested that the PN and PC indicators
cannot be used for comparisons between fields or countries because of their strong field-dependence. The IFPA index, based on
a normalization of differences in impact factors, rankings, and number of journal titles across subject categories,
was found to be simple and to allow equitable, accurate assessment of the quality of research work in different
subject categories. The results of research performance were found to depend on the evaluation method used.
All evaluation methods indicated that Clinical Medicine ranked first in the research performance of Thai scholars
listed in the SCI database, but exhibited the lowest improvement in performance. Chemistry was shown to be the most improved
subject category.
13.
Scientific production has been evaluated from very different perspectives, the best known of which are essentially based on
the impact factors of the journals included in the Journal Citation Reports (JCR). This has been no impediment to the simultaneous
issuing of warnings regarding the dangers of their indiscriminate use when making comparisons. This is because the biases
incorporated in the elaboration of these impact factors produce significant distortions, which may invalidate the results
obtained. Notable among such biases are those generated by the differences in the propensity to cite of the different areas,
journals and/or authors, by variations in the period of materialisation of the impact and by the varying presence of knowledge
areas in the sample of journals contained in the JCR. While the traditional evaluation method consists of standardisation by
subject categories, recent studies have criticised this approach and offered new possibilities for making inter-area comparisons.
In view of such developments, the present study proposes a novel approach to the measurement of scientific activity, in an
attempt to lessen the aforementioned biases. This approach consists of combining the employment of a new impact factor, calculated
for each journal, with the grouping of the institutions under evaluation into homogeneous groups. An empirical application
is undertaken to evaluate the scientific production of Spanish public universities in the year 2000. This application considers
both the articles published in the multidisciplinary databases of the Web of Science (WoS) and the data concerning the journals
contained in the Sciences and Social Sciences Editions of the Journal Citation Report (JCR). All this information is provided
by the Institute for Scientific Information (ISI), via its Web of Knowledge (WoK).
14.
Vicente P. Guerrero-Bote, Felipe Zapico-Alonso, María Eugenia Espinosa-Calvo, Rocío Gómez-Crisóstomo, Félix de Moya-Anegón. Scientometrics, 2007, 71(3): 423-441
The capacity to attract citations from other disciplines, or knowledge export, has always been taken into account in evaluating
the quality of scientific papers or journals. Some of the JCR’s (ISI’s Journal Citation Reports) Subject Categories have a
greater exporting character than others because they are less isolated. This influences the rank/JIF
(ISI’s Journal Impact Factor) distribution of the category. While all the categories fit a negative power law fairly well,
those with a greater External JIF give distributions with a more sharply defined peak and a longer tail, something like an
iceberg. One also observes a major relationship between the rates of export and import of knowledge.
15.
The science and engineering base is a key source of knowledge for the development and use of Information and Communication
Technologies (ICTs). In order to be able to effectively describe and monitor world-wide scientific activity related to ICTs,
it is important to be able to provide reliable macro-level statistics of this knowledge base. International bibliographic
databases and related bibliometric indicators together provide an analytical framework and appropriate measures to cover both
the ‘supply side’ (research capabilities and outputs) and the ‘demand side’ (collaboration, diffusion and citation impact) of
ICT research. This paper presents results of such a bibliometric study describing macro-level features of this ICT
knowledge base. The data were retrieved from a specially developed CWTS ICT Database, which provides broad-scope worldwide
coverage of ICT-relevant research papers published in high-quality international
scientific and technical journals. The cross-country comparison focuses on the level of scientific output and co-operation
patterns of the most actively publishing nations, with a focus on the three Triad zones: the European Union, the USA and Japan.
16.
An item-by-item subject classification of papers published in multidisciplinary and general journals using reference analysis (Total citations: 2; self-citations: 0; citations by others: 2)
A serious shortcoming of bibliometric studies based on the (Social) Science(s) Citation Index is the lack of a universally
applicable subject classification scheme as far as individual papers are concerned. Subject classification
of papers on the basis of assigning journals to subject categories (like those found in the various supplements of ISI databases)
works well in the case of highly specialised journals, but fails for multidisciplinary journals such as Nature, Science and
PNAS, and, as far as subfields are taken into consideration, also for “general” journals (e.g. JACS or Angewandte Chemie).
This study presents the results of a pilot project attempting to overcome this shortcoming by delimiting the subject of
papers published in multidisciplinary and general journals through an item-by-item subject classification scheme, where assignment
is based on the analysis of the subject classification of the reference literature. The results clearly confirmed the conclusions
of earlier studies by the authors in the field of reference analysis. For the really important journals (a sufficiently high
number of annual publications and high impact with respect to the field), the share of classifiable papers was surprisingly
high, and the assignment proved reliable as well. Since papers in the leading general and multidisciplinary journals frequently
cite general and multidisciplinary journals, an iterated application of the procedure is expected to increase the number
of classifiable publications.
The results of the new methodology may improve the validity of bibliometric studies for research evaluation purposes.
17.
Mapping interdisciplinarity at the interfaces between the Science Citation Index and the Social Science Citation Index (Total citations: 2; self-citations: 0; citations by others: 2)
Loet Leydesdorff. Scientometrics, 2007, 71(3): 391-405
The two Journal Citation Reports of the Science Citation Index 2004 and the Social Science Citation Index 2004 were combined in order to analyze and map journals
and specialties at the edges and in the overlap between the two databases. For journals which belong to the overlap (e.g.,
Scientometrics), the merger mainly enriches our insight into the structure which can be obtained from the two databases separately; but
in the case of scientific journals which are more marginal in either database, the combination can provide a new perspective
on the position and function of these journals (e.g., Environment and Planning B: Planning and Design). The combined database additionally enables us to map citation environments comprehensively in terms of the various specialties.
Using the vector-space model, visualizations are provided for specialties that are parts of the overlap (information science,
science & technology studies). On the basis of the resulting visualizations, “betweenness”, a measure from social network
analysis, is suggested as an indicator for measuring the interdisciplinarity of journals.
18.
Summary The contribution of Brazil to the database of the Institute for Scientific Information, ISI, has increased remarkably in recent years. Among Brazilian research institutions, the publications of the University of São Paulo (USP) have accounted for around 30% of the country's total publications within the ISI database. A similar share was found for USP's publications in the 1980-1999 period classified in the Life Sciences. This was observed in publications both in the highest-impact-factor journals and in those with the largest number of articles. We found that the present share of USP's publications in some fields of the Life Sciences was much less than 30%, suggesting a gradual decentralization of scientific activity in Brazil. The data indicate that this set of USP's publications was concentrated in traditional and basic fields of biological research, where the focus is mainly oriented by international trends. The data suggest that USP's researchers have not been much devoted to fields where research is oriented toward national issues.
19.
The Science Citation Index, Journal Citation Reports (JCR), published by the Institute for Scientific Information (ISI) and designed to rank, evaluate, categorize and compare journals, is used in a wide scientific context as a tool for evaluating researchers and research work, through the use of just one of its indicators, the impact factor. With the aim of obtaining an overall and synthetic perspective of impact factor values, we studied the frequency distributions of this indicator using the box-plot method. Using this method we divided the journals listed in the JCR into five groups (low, lower central, upper central, high and extreme). These groups position the journal in relation to its competitors. Thus, the group designated as extreme contains the journals with high impact factors which are deemed to be prestigious by the scientific community. We used the JCR data from 1996 to determine these groups, firstly for all subject categories combined (all 4779 journals) and then for each of the 183 ISI subject categories. We then substituted the indicator value for each journal by the name of the group in which it was classified. The journal group may differ from one subject category to another. In this article, we present a guide for evaluating journals constructed as described above. It provides a comprehensive and synthetic view of two of the most used sections of the JCR. It makes it possible to make more accurate and complete judgements on and through the journals, and avoids an oversimplified view of the complex reality of the world of journals. It immediately reveals the scientific subject category where the journal is best positioned. Also, whereas it used to be difficult to make intra- and interdisciplinary comparisons, this is now possible without having to consult the different sections of the JCR. We construct this guide each year using indicators published in the JCR by the ISI.
20.
Comparative assessment of the journal literature produced by laboratories/institutions working in different fields is a difficult
exercise. The impact factor of the journals is not a suitable indicator, since citation practices vary between fields. The variation
is corrected in this study using a measure, the “subfield corrected impact factor”, which is applied to the journal papers
produced by the laboratories of the Indian Council of Scientific and Industrial Research. This measure makes it possible to compare the impact
of journal literature in different fields.