Similar Articles
20 similar articles found.
1.
Many different measures are used to assess academic research excellence and these are subject to ongoing discussion and debate within the scientometric, university-management and policy-making communities internationally. One topic of continued importance is the extent to which citation-based indicators compare with peer-review-based evaluation. Here we analyse the correlations between values of a particular citation-based impact indicator and peer-review scores in several academic disciplines, from the natural to the social sciences and humanities. We perform the comparison for research groups rather than for individuals, on two levels. At an absolute level, we compare the total impact and overall strength of each group as a whole. At a specific level, we compare academic impact and quality normalised by the size of the group. We find very high correlations at the former level for some disciplines and poor correlations at the latter level for all disciplines. This means that, although citation-based scores can help describe research-group strength, in particular for the so-called hard sciences, they should not be used as a proxy for ranking or comparing research groups. Moreover, the correlation between peer-evaluated and citation-based scores is weaker for the soft sciences.
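A minimal sketch of the two-level comparison this abstract describes, using simulated group data (all variable names and values below are illustrative assumptions, not figures from the study): totals are correlated at the absolute level, per-head scores at the specific level.

```python
# Simulated research groups: size-driven totals correlate strongly even when
# the size-normalised (per-head) scores barely do.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
n_groups = 40
group_size = rng.integers(5, 60, size=n_groups)        # staff per group
quality = rng.uniform(1, 5, size=n_groups)             # peer score per head
impact = quality + rng.normal(0, 2, size=n_groups)     # citation impact per head

# Absolute level: total impact vs overall strength (both scale with size).
r_abs, _ = pearsonr(group_size * impact, group_size * quality)
# Specific level: per-head impact vs per-head quality.
rho_spec, _ = spearmanr(impact, quality)
print(f"absolute-level r = {r_abs:.2f}, specific-level rho = {rho_spec:.2f}")
```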

2.
This paper introduces a citation-based "systems approach" for analyzing the various institutional and cognitive dimensions of scientific excellence within national research systems. The methodology, covering several aggregation levels, focuses on the most highly cited research papers in the international journal literature. The distribution of these papers across institutions and disciplines enables objective comparisons of their (possible) international-level scientific excellence. By way of example, we present key results from a recent series of analyses of the research system in the Netherlands in the mid 1990s, focusing on the performance of the universities across the major scientific disciplines within the context of the entire system's scientific performance. Special attention is paid to the contribution to the world's top 1% and top 10% most highly cited research papers. The findings indicate that these high-performance papers provide a useful analytical framework, in terms of transparency, of cognitive and institutional differentiation, and of scope for domestic and international comparisons, providing new indicators for identifying "world class" scientific excellence at the aggregate level. The average citation scores of these academic "Centres of Scientific Excellence" appear to be an inadequate predictor of their production of highly cited papers. However, further critical reflection and in-depth validation studies are needed to establish the true potential of this approach for science-policy analyses and the evaluation of research performance.
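The top-percentile indicator at the heart of this approach is straightforward to compute. Below is a hedged sketch with simulated citation counts and invented institution labels; the thresholds follow the usual reading of "top 1%" and "top 10%" most highly cited papers.

```python
# Count each institution's papers above the world's 99th / 90th citation
# percentile (mock data; skewed counts mimic real citation distributions).
import numpy as np

rng = np.random.default_rng(1)
citations = rng.negative_binomial(1, 0.05, size=10_000)   # world paper set
institution = rng.choice(["A", "B", "C"], size=10_000)    # invented labels

for pct, label in [(99, "top 1%"), (90, "top 10%")]:
    threshold = np.percentile(citations, pct)
    in_top = citations > threshold
    for inst in ["A", "B", "C"]:
        n = int(np.sum(in_top & (institution == inst)))
        print(f"{label}: institution {inst} has {n} highly cited papers")
```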

3.
We applied a set of standard bibliometric indicators to monitor the scientific state of the art of 500 universities worldwide and constructed a ranking on the basis of these indicators (Leiden Ranking 2010). We find a dramatic and hitherto largely underestimated language effect in bibliometric, citation-based measurements of research performance when comparing a ranking based on all Web of Science (WoS) covered publications with one based only on English-language WoS covered publications, particularly for Germany and France.

4.
Summary  In this paper we analyze the objectivity of the peer-review assessment of the research performance of research groups in the Valencian scientific and technological system over the period 1998-2002. For that purpose, we use qualitative and quantitative indicators to assess which of them are the most important in determining that a research group is excellent under the peer-review evaluation methodology. The results show that excellence appears to be driven only by publications in SCI/SSCI journals and by the number of sexenios, and suggest that the peer-review process is not as objective as we expected.

5.
The Nature Index (NI) has become a rather powerful tool to identify emerging trends in various research fields. According to the NI 2015, released at the beginning of 2016, China, the world's second-largest producer of research papers, maintains strong momentum in scientific output. Based on online source metrics such as Mendeley bookmarks, we evaluated the impact of the top 50 academic institutions in the NI China from multiple viewpoints. For the selection of metrics, we investigated the presence and coverage of different kinds of online metrics, with a particular focus on their correlations with traditional citation-based metrics. Mendeley, Twitter, and Scopus were chosen as complementary sources for the multi-metric analysis. We produced three rankings of the top 50 institutions in the NI China, based on citation counts in Scopus, reader counts on Mendeley, and Twitter counts, and analyzed the differences among the ranking results. The diverse metrics reveal different aspects of institutions' academic impact.

6.
Leta, Jacqueline; Glänzel, Wolfgang; Thijs, Bart. Scientometrics 2006, 67(1): 87-105
Summary In the present study a bibliometric meso-level analysis of Brazilian scientific research is conducted. Both the sectoral and the publication profiles of Brazilian universities and research institutions are studied. Publication dynamics and changing profiles lead to the conclusion that the powerful growth of science in Brazil is accompanied by striking structural changes. By contrast, citation-based indicators reflect less spectacular developments.

7.
In recent years, the extent of formal research evaluation, at all levels from the individual to the multiversity, has increased dramatically. At the institutional level, there are world university rankings based on ad hoc combinations of different indicators. There are also national exercises, such as those in the UK and Australia, that evaluate research outputs and environment through peer-review panels; these are extremely costly and time-consuming. This paper evaluates the possibility of using Google Scholar (GS) institutional-level data to evaluate university research in a relatively automatic way. Several citation-based metrics are collected from GS for all 130 UK universities. These are used to evaluate performance and produce university rankings, which are then compared with various rankings based on the 2014 UK Research Excellence Framework (REF). The rankings are shown to be credible and to avoid some of the obvious problems of the REF ranking, as well as being highly efficient and cost-effective. We also investigate the possibility of normalizing the results for each university's subject mix, since science subjects generally produce significantly more citations than the social sciences or humanities.

8.
Summary Research quality is the cornerstone of modern science and underlies reputational differences among scientific and academic institutions. Traditionally, scientific activity is measured by a set of indicators and well-established bibliometric techniques based on the number of academic papers published in top-ranked journals or on the number of citations these papers receive. These indicators are usually critical in measuring differences in research performance, both at the individual and at the institutional level. In this paper, we introduce an alternative and complementary set of indicators based on the results of competition for research funding, which aims to enlarge the framework in which research performance has traditionally been measured. Theoretical support for this paper is found in the role that the search for funding plays in the researchers' credibility cycle, as well as in peer review, the basic instrument for the allocation of public R&D funds. Our method analyses the outcomes of researchers' competition for funding, using data from research-proposal applications and awards as the unit of observation, and aggregates them by research institution to rank institutions on relative scales of research competitiveness.

9.
K. Buchholz. Scientometrics 1995, 32(2): 195-218
One of the major questions in research on science is addressed in detail: the problem of evaluating research work both by objective characterization, accessible to proof, and by adequate characterization, referring to the content and cognitive level of the work under investigation. A short discussion of established methods, science indicators as well as peer review, compiles the merits and shortcomings of these methods. A brief review covers a few approaches towards developing criteria for an improved assessment and characterization of research work, and their shortcomings are discussed. Notably, no reliable method is available for evaluating medium- or low-range quality. Therefore, a systematic compilation of criteria is developed that covers the full range from excellence to failure with respect to scientific quality, and a comprehensive list of criteria is presented that should provide a basis for both objective and adequate characterization of publications.

10.
Impact of bibliometrics upon the science system: Inadvertent consequences?
Summary Ranking research institutions by bibliometric methods is an improper tool for research-performance evaluation, even at the level of large institutions. The problem, however, is not the ranking as such. The indicators used for ranking are often not advanced enough, and this is part of the broader problem of insufficiently developed bibliometric indicators being applied by persons without clear competence and experience in the field of quantitative studies of science. After a brief overview of the basic elements of bibliometric analysis, I discuss the major technical and methodological problems in the application of publication and citation data in the context of evaluation. I then contend that the core of the problem does not necessarily lie with the data producer. Quite often the persons responsible for research-performance evaluation, for instance scientists themselves in their role as heads of institutions and departments, science administrators at the government level, and other policy makers, show an attitude that encourages 'quick and dirty' bibliometric analyses even though better quality is available. Finally, the necessary conditions for a successful application of advanced bibliometric indicators as a support tool for peer review are discussed.

11.
Bibliometric analyses of scientific publications provide quantitative information that enables evaluators to obtain a useful picture of a team's research visibility. In combination with peer judgements and other qualitative background knowledge, these analyses can serve as a basis for discussions about research-performance quality. However, many mathematicians are not convinced that citation counts do in fact provide useful information in the field of mathematics. According to these mathematicians, citation and publication habits differ completely from those in scholarly fields such as chemistry or physics; it is therefore impossible to derive valid information regarding research performance from citation counts. The aim of this study is to obtain more insight into the significance of citation-based indicators in the field of mathematics. To what extent do citation scores mirror the opinions of experts concerning the quality of a paper or a journal? A survey was conducted to answer this question. Top journals, as qualified by experts, receive significantly higher citation rates than good journals. These good journals, in turn, have significantly higher scores than journals qualified as less good. Top publications recorded in the ISI database receive on average 15 times more citations than the mean score within the field of mathematics as a whole. In conclusion, the experts' views on top publications and top journals correspond very well to bibliometric indicators based on citation counts.
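The abstract reports significant differences in citation rates across the expert-assigned journal classes. As one plausible way to run such a check (the test choice and the mock Poisson rates are assumptions, not the survey's actual method):

```python
# Compare mean citation rates across expert-assigned journal classes and
# test the difference (Kruskal-Wallis suits skewed citation data).
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(2)
classes = {
    "top": rng.poisson(15, size=30),      # citations per paper, illustrative
    "good": rng.poisson(6, size=30),
    "less good": rng.poisson(2, size=30),
}
for name, rates in classes.items():
    print(f"{name}: mean citation rate = {rates.mean():.1f}")
H, p = kruskal(*classes.values())
print(f"Kruskal-Wallis H = {H:.1f}, p = {p:.3g}")
```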

12.
Development of accurate systems to assess academic research performance is an essential topic in national science agendas around the world. Quantitative elements such as scientometric rankings and indicators have contributed to measuring the prestige and excellence of universities, but more sophisticated computational tools are seldom exploited. We compare the evolution of Mexican scientific production in Scopus and the Web of Science, and we analyze Mexico's scientific productivity in relation to the growth of the National Researchers System of Mexico. As our main analysis tool we introduce an artificial-intelligence procedure based on self-organizing neural networks. The neural-network technique proves to be a worthy scientometric data-mining and visualization tool that automatically carries out multiparametric scientometric characterizations of the production profiles of the 50 most productive Mexican Higher Education Institutions (in the Scopus database). With this procedure we automatically identify and visually depict clusters of institutions that share similar bibliometric profiles in two-dimensional maps. Four perspectives are represented in the scientometric maps: productivity, impact, expected visibility and excellence. Since each cluster of institutions represents a bibliometric pattern of institutional performance, the neural network helps locate various bibliometric profiles of academic production and identify groups of institutions with similar patterns of performance. The scientometric maps also allow for the identification of atypical behaviors (outliers) that are difficult to detect with classical tools, since such institutions stand out not because of a disparate value in just one variable but because of an uncommon combination of indicator values.
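A sketch of a self-organizing-map clustering of institutional profiles in the spirit of the described procedure, using the third-party MiniSom package; the 50x4 profile matrix (productivity, impact, expected visibility, excellence) is simulated, not the authors' data.

```python
# Map institutions with similar bibliometric profiles onto nearby cells of a
# self-organizing map (SOM); same-cell institutions share a profile pattern.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(3)
# rows = institutions; columns = productivity, impact, visibility, excellence
profiles = rng.random((50, 4))
profiles = (profiles - profiles.mean(0)) / profiles.std(0)  # z-score columns

som = MiniSom(6, 6, input_len=4, sigma=1.0, learning_rate=0.5, random_seed=3)
som.train_random(profiles, num_iteration=5000)

for i, row in enumerate(profiles[:5]):
    print(f"institution {i} -> map cell {som.winner(row)}")
```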

13.

Research universities place research at the core of their academic mission. They are widely recognized for their excellence in research, which earns them the most renowned positions in the various worldwide university leagues. To examine the uniqueness of this group of universities, we analyze the scientific production of a sample of them over a five-year period. On the one hand, we analyze their research preferences, measured as the relative percentage of publications in the different subject areas; on the other hand, we calculate the similarity between them in research preferences. To select a set of research universities, we studied the leading university rankings: Shanghai, QS, Leiden, and Times Higher Education (THE). Although all four rankings have well-established and well-developed methodologies and hold great prestige, we chose THE because its data were readily available for the study we had in mind. We then selected the twenty academic institutions with the highest scores in the THE World University Rankings 2020 and, to contrast their impact, compared them with the twenty institutions with the lowest scores in that ranking. We extracted publication data for each university from the Scopus database and applied bibliometric indicators from Elsevier's SciVal. We applied cosine similarity and agglomerative hierarchical clustering to examine and compare affinities in research preferences among the universities. Moreover, a cluster analysis in VOSviewer classified the total scientific production into four major fields (health sciences, physical sciences, life sciences and social sciences). As expected, the results show that top universities have strong research profiles, making them world leaders in those areas, and cosine similarity indicates that some are more affine to each other than others. The results provide clues for enhancing existing collaboration, defining and redirecting lines of research, and seeking new partnerships to find ways to tackle the COVID-19 outbreak.
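A hedged sketch of the similarity-and-clustering step the abstract names, using standard scikit-learn and SciPy calls; the subject-area profile matrix is a mock stand-in for the Scopus/SciVal data.

```python
# Cosine similarity between universities' subject-area publication profiles,
# then agglomerative hierarchical clustering on the derived distances.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
# rows = 40 universities; columns = share of output per subject area
profiles = rng.dirichlet(np.ones(27), size=40)

sim = cosine_similarity(profiles)        # affinity in research preferences
dist = 1.0 - sim                         # convert similarity to distance
np.fill_diagonal(dist, 0.0)
condensed = dist[np.triu_indices(40, k=1)]   # SciPy's condensed form
Z = linkage(condensed, method="average")
labels = fcluster(Z, t=4, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```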


14.
Traditional citation-based indicators and activities on Online Social Media Platforms (OnSMP; e.g. Twitter) have been used to assess the impact of scientific research. However, the association between the traditional indicators (number of citations and journal impact factor) and the new OnSMP metrics still deserves further investigation. Here, we used multivariate models to evaluate the relative influence of collaboration, time since publication and the traditional indicators on the attention received by 2863 papers published in five ecological journals from 2013 to 2015, as recorded on nine OnSMP. We found that most activity was concentrated on Twitter and Mendeley and that activity on these two OnSMP is highly correlated. Our results indicate that the traditional indicators explained most of the variation in OnSMP activity. Considering that OnSMP activity is high as soon as articles are made available online, in contrast with the slow pace at which citations accumulate, our results support the use of OnSMP activity as an early signal of the research impact of ecological articles.
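A sketch of the kind of multivariate model the abstract describes, here as an ordinary least-squares regression with simulated data; the OLS form, the predictor names and all values are assumptions, not the paper's exact specification.

```python
# Regress online attention on traditional indicators, collaboration and
# time since publication; inspect coefficients and explained variance.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 300
citations = rng.poisson(10, size=n)
impact_factor = rng.uniform(1, 6, size=n)
n_authors = rng.integers(1, 12, size=n)           # collaboration proxy
months_online = rng.integers(1, 36, size=n)       # time since publication

tweets = 0.5 * citations + 2 * impact_factor + rng.normal(0, 5, size=n)

X = sm.add_constant(np.column_stack([citations, impact_factor,
                                     n_authors, months_online]))
fit = sm.OLS(tweets, X).fit()
print(fit.params)      # relative influence of each predictor
print(fit.rsquared)    # variation in OnSMP activity explained
```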

15.
The objective of this research is to elaborate new criteria for evaluating the significance of the research results achieved by scientific teams. The h-index (Hirsch index) is used to evaluate scientific organizations as well as individual researchers. On the one hand, a scientometric indicator such as the "h-index of a scientific organization" objectively reflects the organization's scientific potential. On the other hand, it does not always adequately reflect the significance that a scientific team's research results have for the scientific community. The i-index has an even greater disadvantage, being limited in principle by the size of the scientific team, although the h-index also depends on the number of publications. Without diminishing the significance of the traditional parameters for monitoring the research activity of scientific organizations, including institutions of higher education, the authors stress the necessity of using not only the traditional indicators but also other parameters reflecting the significance of a scientific team's research results for the scientific community. It should not be forgotten that a scientific team is a social system whose functioning is not reducible to the "sum" of individual researchers' activities. The authors suggest new criteria for the significance of the research activity of scientific teams. These criteria are suited to specific uses and should therefore be applied with great caution; they are most appropriate for analyzing the dynamics of a team's research activity (following the principle "compare yourself with yesterday's yourself"). The proposed citation-based indicators make it possible to evaluate the true significance of a scientific team's research activity for the scientific community; in defining and justifying the new criteria, the authors also took into consideration the problem of self-citation (and, more broadly, of artificial "improvement" of scientometric indicators). The methodological basis of the research comprises the systems, metasystems, probability-statistical, synergetic, sociological and qualimetric approaches. The research methods are analysis of the problem situation, analysis of the scientific literature and of best practices in research-activity management at institutions of higher education (benchmarking), cognitive, structural-functional and mathematical modelling, methods of graph, set and relation theory, methods of qualimetry (the theory of latent variables), and methods of probability theory and mathematical statistics.
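For reference, a minimal implementation of the h-index the abstract takes as its baseline indicator; the sample citation counts are arbitrary.

```python
# h-index: the largest h such that h papers have at least h citations each.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank        # the rank-th paper still has >= rank citations
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3
```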

16.
We propose a method for selecting the research guarantor when papers are co-authored. The method is based simply on identifying the corresponding author. It is applied here to global scientific output in the SCOPUS database in order to build a new output distribution by country. This new distribution is then compared with previous output distributions by country based on whole or fractional counting, not only for total output but also for excellence output (papers belonging to the 10% most cited). The comparison allows one to examine the effect of the different methodological approaches on the scientific-performance indicators assigned to countries. In some cases, there was very large variation in scientific performance between total output (whole counting) and output as research guarantor. The research-guarantor approach is especially interesting when used with the excellence output, where the quantity of excellent papers is itself a quality indicator. The impact of excellent papers naturally varies less, as they are all top-cited papers.
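A small sketch contrasting the three counting schemes the paper compares; the two mock paper records and country codes are invented for illustration.

```python
# Whole counting: each contributing country gets 1 credit per paper.
# Fractional counting: credit is split across author affiliations.
# Guarantor counting: the corresponding author's country gets the paper.
from collections import Counter

papers = [
    {"countries": ["ES", "US", "US"], "corresponding": "ES"},
    {"countries": ["ES", "FR"],       "corresponding": "FR"},
]

whole, fractional, guarantor = Counter(), Counter(), Counter()
for p in papers:
    for c in set(p["countries"]):
        whole[c] += 1
    for c in p["countries"]:
        fractional[c] += 1 / len(p["countries"])
    guarantor[p["corresponding"]] += 1

print("whole:     ", dict(whole))
print("fractional:", dict(fractional))
print("guarantor: ", dict(guarantor))
```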

17.
Ranking is widely considered to be an important tool for evaluating the performance, competitiveness, and success of academic institutions. An appropriate ranking system should evaluate the key missions of the higher-education system in a way that helps to improve the leadership goals and activities carried out by the universities. Based on concepts derived from the Iranian Higher Education Upstream Documents and on measures used internationally for university ranking, this study identifies 21 key measures that can be used in the ranking of Iranian universities. The measures are grouped into five categories: scientific infrastructure, scientific effectiveness, socio-cultural effectiveness, international interactions, and sustainability. Then, using the Interpretative Structural Modeling approach, the researchers develop a coherent rubric for establishing the ranking. The proposed conceptual model focuses primarily on the universities' contribution to technological and scientific infrastructure, secondarily on their contribution to scientific advancement and international interactions, and at a tertiary level on their socio-cultural effectiveness and sustainability.
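A compact sketch of the level-partitioning step of Interpretative Structural Modeling (ISM) named in the abstract. The 4-measure adjacency matrix below is invented for illustration (the study uses 21 measures).

```python
# ISM: build the reachability matrix by transitive closure (Warshall), then
# peel off levels: a measure is at the current level when everything it
# still reaches also reaches it back.
import numpy as np

A = np.array([[1, 1, 0, 0],      # entry (i, j): measure i influences j
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=bool)

R = A.copy()
for k in range(len(R)):
    R |= np.outer(R[:, k], R[k, :])   # closure step for intermediate node k

levels, remaining = [], set(range(len(R)))
while remaining:
    level = {i for i in remaining
             if {j for j in remaining if R[i, j]}
             <= {j for j in remaining if R[j, i]}}
    levels.append(sorted(level))
    remaining -= level
print(levels)   # first list = top level of the hierarchy
```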

18.
Towards a new crown indicator: an empirical analysis
We present an empirical comparison between two normalization mechanisms for citation-based indicators of research performance. These mechanisms aim to normalize citation counts for the field and the year in which a publication was published. One mechanism is applied in the current so-called crown indicator of our institute. The other mechanism is applied in the new crown indicator that our institute is currently exploring. We find that at high aggregation levels, such as at the level of large research institutions or at the level of countries, the differences between the two mechanisms are very small. At lower aggregation levels, such as at the level of research groups or at the level of journals, the differences between the two mechanisms are somewhat larger. We pay special attention to the way in which recent publications are handled. These publications typically have very low citation counts and should therefore be handled with special care.
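A sketch of the two normalization mechanisms in their commonly described forms (the old crown indicator as a ratio of sums, the new one as a mean of per-paper ratios); the citation and expected values are mock numbers, not the institute's exact formulas or data.

```python
# Old mechanism: sum of citations divided by sum of field/year expectations.
# New mechanism: average of each paper's citations / expectation ratio.
import numpy as np

citations = np.array([10.0, 0.0, 3.0, 25.0])   # actual citations per paper
expected = np.array([8.0, 1.5, 4.0, 12.0])     # field/year world average

old_crown = citations.sum() / expected.sum()   # ratio of sums
new_crown = np.mean(citations / expected)      # mean of ratios
print(f"old crown = {old_crown:.2f}, new crown = {new_crown:.2f}")
```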

19.
Recent years have seen enormously increased interest in the comparative evaluation of research quality in the UK, with considerable resources devoted to ranking the output of academic institutions relative to one another at the sub-discipline level, and the disposition of even greater resources dependent on the outcome of this process. The preferred methodology has been that of traditional peer review, with expert groups of academics tasked to assess the relative worth of all research activity in 'their' field. Extension to institutional evaluation of a recently refined technique of journal ranking (Discipline Contribution Scoring) holds out the possibility of 'automatic' evaluation within a time-frame considerably shorter than would be required using methods based directly on citation counts within the corpus of academic work under review. This paper tests the feasibility of the technique in the sub-field of Business and Management Studies research, producing rankings that are highly correlated with those generated by the much more complex and expensive direct peer-review approach. More generally, the analysis also gives a rare opportunity to compare directly the equivalence of peer review and bibliometric analysis over a whole sub-field of academic activity in a non-experimental setting.

20.
The ranking of scientific journals is important because of the signal it sends to scientists about what is considered most vital for scientific progress. Existing ranking systems focus on measuring the influence of a scientific paper (citations); these rankings do not reward journals for publishing innovative work that builds on new ideas. We propose an alternative ranking based on the proclivity of journals to publish papers that build on new ideas, and we implement this ranking via a text-based analysis of all published biomedical papers dating back to 1946. In addition, we compare our neophilia ranking to citation-based (impact factor) rankings; this comparison shows that the two ranking approaches are distinct. Prior theoretical work suggests an active role for our neophilia index in science policy. Absent an explicit incentive to pursue novel science, scientists underinvest in innovative work because of a coordination problem: for work on a new idea to flourish, many scientists must decide to adopt it in their own work. Rankings based purely on influence thus do not provide sufficient incentives for publishing innovative work. By contrast, adoption of the neophilia index as part of journal-ranking procedures by funding agencies and university administrators would provide an explicit incentive for journals to publish innovative work, and thus help solve the coordination problem by increasing scientists' incentives to pursue innovative work.
