Similar Documents
1.
Recently, many organizations have been conducting projects to rank world universities from different perspectives. These ranking activities have made an impact and caused controversy. This study neither favors nor opposes the use of bibliometric indicators to evaluate universities' performance. We regard these ranking activities as important phenomena and aim to investigate the correlation of different ranking systems using a bibliometric approach. Four research questions are discussed: (1) the inter-correlation among different ranking systems; (2) the intra-correlation within ranking systems; (3) the correlation of indicators across ranking systems; and (4) the impact of different citation indexes on rankings. The preliminary results show that 55 % of the top 200 universities are covered in all ranking systems. The rankings of ARWU and PRSPWU show stronger correlation. With the inclusion of another ranking, WRWU (2009–2010), these rankings tend to converge. In addition, intra-correlation is significant, which means it is possible to identify ranking indicators with a high degree of discriminativeness or representativeness. Finally, no significant impact of using different citation indexes on the ranking results for the top 200 universities is found.

2.
In recent years, several national and community-driven conference rankings have been compiled. These rankings are often taken as indicators of reputation and used for a variety of purposes, such as evaluating the performance of academic institutions and individual scientists, or selecting target conferences for paper submissions. Current rankings are based on a combination of objective criteria and subjective opinions that are collated and reviewed through largely manual processes. In this setting, the aim of this paper is to shed light on the following question: to what extent do existing conference rankings reflect objective criteria, specifically submission and acceptance statistics and bibliometric indicators? The paper considers three conference rankings in the field of Computer Science: an Australian national ranking, a Brazilian national ranking and an informal community-built ranking. It is found that in all cases bibliometric indicators are the most important determinants of rank. It is also found that in all rankings, top-tier conferences can be identified with relatively high accuracy through acceptance rates and bibliometric indicators. On the other hand, acceptance rates and bibliometric indicators fail to discriminate between mid-tier and bottom-tier conferences.

3.
We find evidence for the universality of two relative bibliometric indicators of the quality of individual scientific publications taken from different data sets. One of these is a new index that considers both citation and reference counts. We demonstrate this universality for relatively well cited publications from a single institute, grouped by year of publication and by faculty or by department. We show similar behaviour in publications submitted to the arXiv e-print archive, grouped by year of submission and by sub-archive. We also find that for reasonably well cited papers this distribution is well fitted by a lognormal with a variance of around σ² = 1.3, which is consistent with the results of Radicchi et al. (Proc Natl Acad Sci USA 105:17268–17272, 2008). Our work demonstrates that comparisons can be made between publications from different disciplines and publication dates, regardless of their citation count and without expensive access to the whole world-wide citation graph. Further, it shows that averages of the logarithm of such relative bibliometric indices deal with the issue of long tails and avoid the need for statistics based on lengthy ranking procedures.
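The relative indicator and log-averaging described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the field-year group labels and citation counts are invented, and the floor used to keep uncited papers finite is an assumption on our part.

```python
import math
from collections import defaultdict

def relative_indicators(papers):
    """papers: list of (group, citations) pairs; returns c / <c>_group per paper,
    i.e. each citation count divided by the mean citation count of its group."""
    totals, counts = defaultdict(float), defaultdict(int)
    for group, c in papers:
        totals[group] += c
        counts[group] += 1
    means = {g: totals[g] / counts[g] for g in totals}
    return [c / means[g] for g, c in papers]

def mean_log_indicator(rel, floor=1e-9):
    """Average of log(c / <c>); the floor avoids log(0) for uncited papers."""
    return sum(math.log(max(r, floor)) for r in rel) / len(rel)

papers = [("physics-2008", 10), ("physics-2008", 30),
          ("biology-2008", 5), ("biology-2008", 15)]
rel = relative_indicators(papers)  # group means are 20 and 10 -> [0.5, 1.5, 0.5, 1.5]
```

Averaging logarithms rather than raw ratios is what damps the long tail the abstract refers to: a single highly cited paper moves the mean of logs far less than the arithmetic mean.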

4.
South Africa has 23 universities, of which five are placed in one or more of the 2011 Shanghai Jiao Tong, Times Higher Education, and Quacquarelli Symonds world university rankings. The five are: Cape Town, Witwatersrand, KwaZulu-Natal, Stellenbosch and Pretoria. They are ranked above the other 18 universities, with Cape Town in top position, mainly because they have significantly higher publication and citation counts. In the Shanghai Jiao Tong ranking Cape Town's Nobel Prize alumni and highly-cited researchers give it an additional lead over second-placed Witwatersrand, which has Nobel Prize alumni but no highly-cited researchers. KwaZulu-Natal, in third place, has no Nobel Prize alumni but one highly-cited researcher, which places it ahead of Stellenbosch and Pretoria despite the latter two having higher publication output. However, in the Times Higher Education ranking, which places Cape Town first and Witwatersrand second, Stellenbosch is ranked but not KwaZulu-Natal, presumably because the publication and citation counts of Stellenbosch are higher. The other 18 universities are ranked by the SCImago and Webometrics rankings in an order consistent with bibliometric indicators, and consistent with approximate simulations of the Shanghai Jiao Tong and Times Higher Education methods. If a South African university aspires to rise in the rankings, it needs to increase publications, citations, staff-student ratio, and proportions of postgraduate students, international students and international staff.

5.
The obsolescence and “durability” of scientific literature have been important elements of debate for many years, especially regarding the proper calculation of bibliometric indicators. The effects of “delayed recognition” on impact indicators are of interest not only to bibliometricians but also to research managers and scientists themselves. It has been suggested that the “Mendel syndrome” is a potential drawback when assessing individual researchers through impact measures. If publications from particular researchers need more time than “normal” to be properly acknowledged by their colleagues, the impact of these researchers may be underestimated with common citation windows. In this paper, we answer the question whether the bibliometric indicators for scientists can be significantly affected by the Mendel syndrome. Applying a methodology developed previously for the classification of papers according to their durability (Costas et al., J Am Soc Inf Sci Technol 61(8):1564–1581, 2010a; J Am Soc Inf Sci Technol 61(2):329–339, 2010b), the scientific production of 1,064 researchers working at the Spanish Council for Scientific Research (CSIC) in three different research areas has been analyzed. Cases of potential “Mendel syndrome” are rarely found among researchers, and these cases do not significantly outperform the impact of researchers with a standard pattern of reception in their citations. The analysis of durability could be included as a parameter for the consideration of the citation windows used in the bibliometric analysis of individuals.

6.
7.
8.
Pei-Shan Chi, Scientometrics, 2014, 101(2):1195–1213
Publications that are not indexed by citation indices such as Web of Science (WoS) or Scopus are called “non-source items”. These have so far been neglected by most bibliometric analyses. The central issue of this study is to investigate the characteristics of non-source items and the effect of their inclusion in bibliometric evaluations in the social sciences, specifically German political science publications. The results of this study show that non-source items significantly increase the number of publications (+1,350 %) and, to a lesser extent, the number of citations from SCIE, SSCI, and A&HCI (+150 %) for evaluated political scientists. 42 % of non-source items are published as book chapters. Edited books and books are cited the most among non-source items. About 40 % of non-source items are in English, while 80 % of source items are in English. The citation rates of researchers taking non-source items into account are lower than those from source items, partially as a result of the limited coverage of WoS. In contrast, the H-indices of researchers taking only non-source items into account are higher than those from source items. In short, the results of this study show that non-source items should be included in bibliometric evaluations, regardless of their impact or the citations from them. The results demonstrate the need for more comprehensive database coverage in the social sciences to achieve higher-quality evaluations.

9.
We have developed a method to obtain robust quantitative bibliometric indicators for several thousand scientists. This allows us to study the dependence of bibliometric indicators (such as the number of publications, number of citations, and Hirsch index) on the age, position, and other characteristics of CNRS scientists. Our data suggest that the normalized h-index (h divided by the career length) is not constant for scientists with the same productivity but different ages. We also compare the predictions of several bibliometric indicators on the promotions of about 600 CNRS researchers. Contrary to previous publications, our study encompasses most disciplines, and shows that no single indicator is the best predictor for all disciplines. Overall, however, the Hirsch index h provides the least bad correlations, followed by the number of papers published. It is important to realize, however, that even h is able to recover only half of the actual promotions. The number of citations or the mean number of citations per paper are definitely not good predictors of promotion. Due to space constraints, this paper is a short version of a more detailed article (Jensen et al. 2008b).
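The normalized h-index mentioned above (h divided by career length) is simple to compute from a citation list. The sketch below uses the standard h-index definition; the citation counts and the ten-year career length are invented for illustration.

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def normalized_h(citations, career_years):
    """h-index divided by career length in years, as in the study above."""
    return h_index(citations) / career_years

cites = [25, 8, 5, 3, 3, 1, 0]   # hypothetical per-paper citation counts
h = h_index(cites)               # 3: the 4th-ranked paper has only 3 < 4 citations
norm = normalized_h(cites, 10)   # 0.3 for a 10-year career
```

The study's observation that h/career-length is not constant across ages follows naturally: h can only grow over a career, but not linearly, so younger and older scientists of equal productivity get different normalized values.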

10.
Eugenio Petrovich, Scientometrics, 2022, 127(5):2195–2233

Scholars in science and technology studies and bibliometricians are increasingly revealing the performative nature of bibliometric indicators. Far from being neutral technical measures, indicators such as the Impact Factor and the h-index are deeply transforming the social and epistemic structures of contemporary science. At the same time, scholars have highlighted how bibliometric indicators are endowed with social meanings that go beyond their purely technical definitions. These social representations of bibliometric indicators are constructed and negotiated between different groups of actors within several arenas. This study aims to investigate how bibliometric indicators are used in a context that has so far not been covered by researchers: daily newspapers. Through a content analysis of a corpus of 583 articles that appeared in four major Italian newspapers between 1990 and 2020, we chronicle the main functions that bibliometrics and bibliometric indicators played in the Italian press. Our material shows, among other things, that the public discourse developed in newspapers creates a favorable environment for bibliometrics-centered science policies, that bibliometric indicators contribute to the social construction of scientific facts in the press, especially in science news related to medicine, and that professional bibliometric expertise struggles to be represented in newspapers and hence to reach the general public.


11.
Elizabeth S. Vieira, Scientometrics, 2022, 127(5):2747–2772

It is widely recognised that science in Africa will benefit from international research collaboration (IRC), and therefore studies have been done on IRC in Africa (hereafter: Africa-related IRC research). However, there is no information on the development of Africa-related IRC research, the geographical location of the scientists interested in the topic, the visibility of the literature and the themes researched. This information makes it possible to understand relevant aspects in the context of IRC in Africa, which are useful for identifying IRC strengths, weaknesses and opportunities. It also paves the way for future research on this topic. Using discipline-specific terms and bibliometric and thematic analysis, I collected the literature on Africa-related IRC research indexed in the Web of Science Core Collection (WoS). The results showed that the number of publications on the topic has increased, few African countries have researched the topic, a third of the publications were written exclusively by African scientists, and the topic has high visibility. The panoply of publications revealed that patterns, driving factors, effects, networks, asymmetries, and policies concerning IRC were the main themes researched.


12.
This paper addresses two related issues regarding the validity of bibliometric indicators for the assessment of national performance within a particular scientific field: first, the representativeness of a journal-based subject classification, and second, the completeness of the database coverage. Norwegian publishing in microbiology was chosen as a case, using the standard ISI product National Science Indicators on Diskette (NSIOD) as a source database. By applying an "author-gated" retrieval procedure, we found that only 41 percent of all publications in NSIOD-indexed journals, expert-classified as microbiology, were included under the NSIOD category Microbiology. Thus, the set of defining core journals alone is clearly not sufficient to delineate this complex biomedical field. Furthermore, a subclassification of the articles into different subdisciplines of microbiology revealed systematic differences with respect to representation in NSIOD's Microbiology field; fish microbiology and medical microbiology are particularly underrepresented. In a second step, the individual publication lists from a sample of Norwegian microbiologists were collected and compared with the publications by the same authors, retrieved bibliometrically. The results showed that a large majority (94%) of the international scientific production in Norwegian microbiology was covered by the NSIOD database. Thus, insufficient subfield delineation, and not lack of coverage, appeared to be the main methodological problem in the bibliometric analysis of microbiology.

13.
In this paper an analysis of the presence and possibilities of altmetrics for bibliometric and performance analysis is carried out. Using the web-based tool Impact Story, we collected metrics for 20,000 random publications from the Web of Science. We studied both the presence and distribution of altmetrics in the set of publications, across fields, document types and over publication years, as well as the extent to which altmetrics correlate with citation indicators. The main result of the study is that the altmetrics source that provides the most metrics is Mendeley, with metrics on readerships for 62.6 % of all the publications studied; other sources provide only marginal information. In terms of relation with citations, a moderate Spearman correlation (r = 0.49) has been found between Mendeley readership counts and citation indicators. Other possibilities and limitations of these indicators are discussed and future research lines are outlined.
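A Spearman correlation like the r = 0.49 reported above is just the Pearson correlation of the two rank vectors. The dependency-free sketch below is a generic illustration of that computation, not the study's code; the readership and citation counts are invented.

```python
def ranks(values):
    """1-based average ranks; tied values all receive their mid-rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                      # extend the run of tied values
        mid = (i + j) / 2 + 1           # mid-rank of the tied run, 1-based
        for k in range(i, j + 1):
            r[order[k]] = mid
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

readers = [10, 3, 0, 25, 7]    # hypothetical Mendeley readership counts
cites = [4, 1, 0, 12, 2]       # hypothetical citation counts
rho = spearman(readers, cites) # 1.0 here: the two orderings agree perfectly
```

Because it uses ranks rather than raw counts, Spearman's rho is robust to the heavy-tailed distributions typical of both citation and readership data, which is presumably why the study reports it rather than a Pearson correlation.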

14.
In the last few years, many new bibliometric rankings or indices have been proposed for comparing the output of scientific researchers. We propose a formal framework in which rankings can be axiomatically characterized. We then present a characterization of some popular rankings. We argue that such analyses can help the user of a ranking to choose one that is adequate in the context where she/he is working.

15.
Bibliometric analysis has been used increasingly as a tool within the scientific community. Interplay is vital between those involved in refining bibliometric methods and the recipients of this type of analysis. Production as well as citation patterns reflect working methodologies in different disciplines within the specialized Library and Information Science (LIS) field, as well as in the non-specialist (non-LIS) professional field. We extract the literature on bibliometric analyses from Web of Science in all fields of science and analyze clustering of co-occurring keywords at an aggregate level. This reveals areas of interconnected literature with different impact on the LIS and the non-LIS community. We classify and categorize bibliometric articles that obtain the most citations in accordance with a modified version of Derrick's, Jonker's and Lewison's method (Derrick et al. in Proceedings, 17th international conference on science and technology indicators. STI, Montreal, 2012). The data demonstrate that cross-referencing between the LIS and the non-LIS field is modest in publications outside their main categories of interest, i.e. discussions of various bibliometric issues or strict analyses of various topics. We identify some fields as less well covered bibliometrically.

16.
The increasing use of bibliometric indicators in science policy calls for a reassessment of their robustness and limits. The perimeter of journal inclusion within ISI databases will determine variations in the classic bibliometric indicators used for international comparison, such as world shares of publications or relative impacts. We show in this article that when this perimeter is adjusted using a natural criterion for inclusion of journals, the journal impact, the variation of the most common country indicators (publication and citation shares; relative impacts) with the perimeter chosen depends on two phenomena. The first one is a bibliometric regularity rooted in the main features of competition in the open space of science, that can be modeled by bibliometric laws, the parameters of which are “coverage-independent” indicators. But this regularity is obscured for many countries by a second phenomenon, the presence of a sub-population of journals that does not reflect the same international openness, the nationally-oriented journals. As a result indicators based on standard SCI or SCISearch perimeters are jeopardized to a certain extent by this sub-population which creates large irregularities. These irregularities often lead to an over-estimation of share and an under-estimation of the impact, for countries with national editorial tradition, while the impact of a few mainstream countries arguably benefits from the presence of this sub-population. This revised version was published online in August 2006 with corrections to the Cover Date.

17.
As part of a research program to analyse research in Bangladesh we provide a comparison between research indicators related to India, Bangladesh, Pakistan and Sri Lanka. In this investigation we make use of Web of Science (WoS) data as well as Scopus data (using the SCImago website). Special attention is given to collaboration data and to the evolution of country h-indices. A comparison based on relative quality indicators shows that Sri Lanka is the best performer among these four countries. Such a result agrees with the ranking of these countries according to the United Nations’ Human Development Index (HDI).

18.
Bibliometric analyses of research in developing countries are interesting for various reasons. The situation of Cuba is rather exceptional: the Cuban Journal of Agricultural Science (CJAS) is the only Cuban research journal indexed by the Institute for Scientific Information's Web of Science (WoS). We explore the possibilities of a citation analysis for Cuban research publications in general and for those in CJAS in particular. For the period 1988–1999, we find that this journal represents 14% of Cuban research publications cited in the WoS. We note that the number of self-citations is relatively high and has even increased since 1995. The results are classified by discipline, and we use a co-citation matrix to discuss the different observed citation patterns.

19.
Impact of bibliometrics upon the science system: Inadvertent consequences?
Summary: Ranking of research institutions by bibliometric methods is an improper tool for research performance evaluation, even at the level of large institutions. The problem, however, is not the ranking as such. The indicators used for ranking are often not advanced enough, and this situation is part of the broader problem of the application of insufficiently developed bibliometric indicators by persons who do not have clear competence and experience in the field of quantitative studies of science. After a brief overview of the basic elements of bibliometric analysis, I discuss the major technical and methodological problems in the application of publication and citation data in the context of evaluation. I then contend that the core of the problem does not necessarily lie on the side of the data producer. Quite often, the persons responsible for research performance evaluation, for instance scientists themselves in their role as heads of institutions and departments, science administrators at the government level and other policy makers, show an attitude that encourages 'quick and dirty' bibliometric analyses even though better quality is available. Finally, the necessary conditions for a successful application of advanced bibliometric indicators as a support tool for peer review are discussed.

20.
Excellence in Research for Australia (ERA) is an attempt by the Australian Research Council to rate Australian universities on a 5-point scale within 180 Fields of Research using metrics and peer evaluation by an evaluation committee. Some of the bibliometric data contributing to this ranking suffer statistical issues associated with skewed distributions. Other data are standardised year by year, placing undue emphasis on the most recent publications, which may not yet have reliable citation patterns. The bibliometric data offered to the evaluation committees is extensive, but lacks effective syntheses such as the h-index and its variants. The indirect H2 index is objective, can be computed automatically and efficiently, is resistant to manipulation, and is a good indicator of impact to assist the ERA evaluation committees and similar evaluations internationally.
