Similar Documents (20 results)
1.
Coverage and adoption of altmetrics sources in the bibliometric community (cited 2 times: 0 self-citations, 2 by others)
Altmetrics, indices based on social media platforms and tools, have recently emerged as alternative means of measuring scholarly impact. Such indices assume that scholars in fact populate online social environments, and interact with scholarly products in the social web. We tested this assumption by examining the use and coverage of social media environments amongst a sample of bibliometricians, examining both their own use of online platforms and the use of their papers on social reference managers. As expected, coverage varied: 82% of articles published by sampled bibliometricians were included in Mendeley libraries, while only 28% were included in CiteULike. Mendeley bookmarking was moderately correlated (.45) with Scopus citation counts. We conducted a survey among the participants of the STI2012 conference. Over half of respondents asserted that social media tools were affecting their professional lives, although uptake of online tools varied widely. 68% of those surveyed had LinkedIn accounts, while Academia.edu, Mendeley, and ResearchGate each claimed a fifth of respondents. Nearly half of those responding had Twitter accounts, which they used both personally and professionally. Surveyed bibliometricians had mixed opinions on altmetrics’ potential; 72% valued download counts, while a third saw potential in tracking articles’ influence in blogs, Wikipedia, reference managers, and social media. Altogether, these findings suggest that some online tools are seeing substantial use by bibliometricians, and that they present a potentially valuable source of impact data.
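The moderate correlation reported above (.45 between Mendeley bookmarks and Scopus citation counts) is the kind of rank correlation that is straightforward to reproduce. A minimal sketch of Spearman's rho, using hypothetical reader and citation counts rather than the study's data:

```python
def rankdata(xs):
    # Assign 1-based ranks, averaging ranks over ties.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank vectors.
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical per-article Mendeley readers and Scopus citations:
readers = [12, 3, 45, 7, 0, 22]
citations = [30, 1, 50, 9, 2, 15]
rho = spearman(readers, citations)
```

Monotonically related counts give rho near 1 regardless of scale, which is why Spearman (rather than Pearson) correlation is the usual choice for skewed altmetric data.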

2.
Recent altmetrics research has started to investigate the meaning of altmetrics and whether they could reveal something about the attention or impact connected to research. This study continues that line of investigation and examines why some research has received significant online attention in one or both of two social media services, Twitter and Mendeley. It investigated Finnish researchers’ opinions about why their research had received significant online attention and whether that attention could reflect the scientific or societal impact of their research. Furthermore, it examined whether the authors of papers with significant online attention actively followed how their papers were shared or discussed online, and whether the authors thought that the online attention increased either the scientific or societal impact of their work. Based on the findings, the level of online attention received is a sum of many factors, and there are also specific differences between the platforms where the attention has been received. For articles that had received significant attention on Mendeley, the reasons for that attention were more often attributed to an academic audience, while the situation was reversed on Twitter, where the majority of reasons were linked to a wider audience. A similar trend could be seen when authors were asked whether the online attention could reflect scientific or societal impact, although no clear consensus was reached on whether online attention reflects any type of impact at all.

3.
This work presents a new approach for analysing the ability of existing research metrics to identify research which has strongly influenced future developments. More specifically, we focus on the ability of citation counts and Mendeley reader counts to distinguish between publications regarded as seminal and publications regarded as literature reviews by field experts. The main motivation behind our research is to gain a better understanding of whether and how well the existing research metrics relate to research quality. For this experiment we have created a new dataset which we call TrueImpactDataset and which contains two types of publications, seminal papers and literature reviews. Using the dataset, we conduct a set of experiments to study how citation and reader counts perform in distinguishing these publication types, following the intuition that causing a change in a field signifies research quality. Our research shows that citation counts work better than a random baseline (by a margin of 10%) in distinguishing important seminal research papers from literature reviews while Mendeley reader counts do not work better than the baseline.

4.
Traditional citation-based indicators and activities on Online Social Media Platforms (OnSMP; e.g. Twitter) have been used to assess the impact of scientific research. However, the association between traditional indicators (i.e., number of citations and journal impact factor) and the new OnSMP metrics still deserves further investigation. Here, we used multivariate models to evaluate the relative influence of collaboration, time since publication and traditional indicators on the interest in 2863 papers published in five ecological journals from 2013 to 2015, as given by nine OnSMP. We found that most activities were concentrated on Twitter and Mendeley and that activities in these two OnSMP are highly correlated. Our results indicate that traditional indicators explained most of the variation in OnSMP activity. Considering that OnSMP activities are high as soon as the articles are made available online, in contrast with the slow pace at which citations accumulate, our results support the use of activities on OnSMP as an early signal of the research impact of ecological articles.

5.
This article contains two investigations into Mendeley reader counts with the same dataset. Mendeley reader counts provide evidence of early scholarly impact for journal articles, but reflect the reading of a relatively young subset of all researchers. To investigate whether this age bias is constant or varies by narrow field and publication year, this article compares the proportions of student, researcher and faculty readers for articles published 1996–2016 in 36 large monodisciplinary journals. In these journals, undergraduates recorded the newest research and faculty the oldest, with large differences between journals. The existence of substantial differences in the composition of readers between related fields points to the need for caution when using Mendeley readers as substitutes for citations for broad fields. The second investigation shows, with the same data, that there are substantial differences between narrow fields in the time taken for Scopus citations to become as numerous as Mendeley readers. Thus, even narrow field differences can affect the relative value of Mendeley reader counts compared to citation counts.

6.
Metrics like the number of tweets or Mendeley readers are currently discussed as an alternative to evaluate research. These alternative metrics (altmetrics) still need to be evaluated in order to fully understand their meaning, their benefits and limitations. While several preceding studies concentrate on correlations of altmetrics with classical measures like citations, this study aims at investigating metric compatibility within altmetrics. For this purpose, 5000 journal articles from six disciplines have been analyzed regarding their metrics with the help of the aggregators PlumX and Altmetric.com. For this set, the highest numbers of events were recorded for Mendeley readers, followed by Twitter and Facebook mentions, with some variation between the aggregators. Intra-correlations between the metrics within one aggregator were calculated, as well as inter-correlations for the corresponding metrics across the aggregators. For both aggregators, low to medium intra-correlations were found, which shows the diversity of the different metrics. Regarding inter-correlations, PlumX and Altmetric.com are highly consistent concerning Mendeley readers (r = 0.97) and Wikipedia mentions (r = 0.82), whereas the consistency concerning Twitter (r = 0.49), blogs (r = 0.46) and Reddit (r = 0.41) is moderate. The sources Facebook (r = 0.29), Google+ (r = 0.28) and News (r = 0.11) show only low correlations.

7.
Prior research shows that article reader counts (i.e. saves) on the online reference manager Mendeley correlate with future citations. There are currently no evidence-based distribution strategies that have been shown to increase article saves on Mendeley. We conducted a 4-week randomized controlled trial to examine how promotion of article links in a novel online cross-publisher distribution channel (TrendMD) affects article saves on Mendeley. Four hundred articles published in the Journal of Medical Internet Research were randomized to either the TrendMD arm (n = 200) or the control arm (n = 200) of the study. Our primary outcome compares the 4-week mean Mendeley saves of articles randomized to TrendMD versus control. Articles randomized to TrendMD showed a 77% increase in article saves on Mendeley relative to control. The difference in mean Mendeley saves for TrendMD articles versus control was 2.7, 95% CI (2.63, 2.77), and statistically significant (p < 0.01). There was a positive correlation between pageviews driven by TrendMD and article saves on Mendeley (Spearman’s rho = 0.60). This is the first randomized controlled trial to show how an online cross-publisher distribution channel (TrendMD) enhances article saves on Mendeley. While replication and further study are needed, these data suggest that cross-publisher article recommendations via TrendMD may enhance citations of scholarly articles.
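The trial's headline statistic, a difference in mean saves with a 95% confidence interval, can be illustrated with a normal-approximation interval for a difference in means. A sketch with made-up save counts, not the trial's data:

```python
import statistics

def mean_diff_ci(treatment, control, z=1.96):
    """Difference in means (treatment - control) with a
    normal-approximation 95% CI using the unpooled standard error."""
    diff = statistics.mean(treatment) - statistics.mean(control)
    se = (statistics.variance(treatment) / len(treatment)
          + statistics.variance(control) / len(control)) ** 0.5
    return diff, (diff - z * se, diff + z * se)

# Hypothetical 4-week Mendeley save counts per article:
trendmd_saves = [4, 6, 5, 5]
control_saves = [1, 3, 2, 2]
diff, (lo, hi) = mean_diff_ci(trendmd_saves, control_saves)
```

With real trial data (n = 200 per arm) the normal approximation is reasonable; for small samples a t-based interval would be more appropriate.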

8.
In this paper an analysis of the presence and possibilities of altmetrics for bibliometric and performance analysis is carried out. Using the web-based tool Impact Story, we collected metrics for 20,000 random publications from the Web of Science. We studied both the presence and distribution of altmetrics in the set of publications, across fields, document types and over publication years, as well as the extent to which altmetrics correlate with citation indicators. The main result of the study is that the altmetrics source that provides the most metrics is Mendeley, with readership metrics for 62.6% of all the publications studied; other sources provide only marginal information. In terms of the relation with citations, a moderate Spearman correlation (r = 0.49) has been found between Mendeley readership counts and citation indicators. Other possibilities and limitations of these indicators are discussed and future research lines are outlined.

9.
In this study we examined a sample of 100 European astrophysicists and their publications indexed by the citation database Scopus, submitted to the arXiv repository and bookmarked by readers in the reference manager Mendeley. Although it is believed that astrophysicists use arXiv widely and extensively, the results show that on average more items are indexed by Scopus than submitted to arXiv. A considerable proportion of the items indexed by Scopus appear also on Mendeley, but on average the number of readers who bookmarked the item on Mendeley is much lower than the number of citations reported in Scopus. The comparisons between the data sources were done based on the authors and the titles of the publications.

10.
Development of accurate systems to assess academic research performance is an essential topic in national science agendas around the world. Quantitative elements such as scientometric rankings and indicators have contributed to measuring the prestige and excellence of universities, but more sophisticated computational tools are seldom exploited. We compare the evolution of Mexican scientific production in Scopus and the Web of Science, and analyze Mexico's scientific productivity in relation to the growth of the National Researchers System of Mexico. As the main analysis tool we introduce an artificial intelligence procedure based on self-organizing neural networks. The neural network technique proves to be a worthy scientometric data mining and visualization tool which automatically carries out multiparametric scientometric characterizations of the production profiles of the 50 most productive Mexican Higher Education Institutions (in the Scopus database). With this procedure we automatically identify and visually depict clusters of institutions that share similar bibliometric profiles in bidimensional maps. Four perspectives were represented in scientometric maps: productivity, impact, expected visibility and excellence. Since each cluster of institutions represents a bibliometric pattern of institutional performance, the neural network helps locate various bibliometric profiles of academic production and identify groups of institutions which have similar patterns of performance. Scientometric maps also allow for the identification of atypical behaviors (outliers) which are difficult to identify with classical tools, since these institutions stand out not because of a disparate value in just one variable, but due to an uncommon combination of a set of indicator values.

11.
By means of their academic publications, authors form a social network. Instead of sharing casual thoughts and photos (as in Facebook), authors select co-authors and reference papers written by other authors. Thanks to various efforts (such as Microsoft Academic Search and DBLP), the data necessary for analyzing the academic social network is becoming more available on the Internet. What type of information and queries would be useful for users to discover, beyond the search queries already available from services such as Google Scholar? In this paper, we explore this question by defining a variety of ranking metrics on different entities—authors, publication venues, and institutions. We go beyond traditional metrics such as paper counts, citations, and h-index. Specifically, we define metrics such as influence, connections, and exposure for authors. An author gains influence by receiving more citations, but also citations from influential authors. An author increases his or her connections by co-authoring with other authors, and especially from other authors with high connections. An author receives exposure by publishing in selective venues where publications have received high citations in the past, and the selectivity of these venues also depends on the influence of the authors who publish there. We discuss the computation aspects of these metrics, and the similarity between different metrics. With additional information of author-institution relationships, we are able to study institution rankings based on the corresponding authors’ rankings for each type of metric as well as different domains. We are prepared to demonstrate these ideas with a web site (http://pubstat.org) built from millions of publications and authors.
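The "influence" metric described above, where citations count for more when they come from influential authors, is recursive in the same way as PageRank. A minimal power-iteration sketch over a hypothetical author-level citation graph; the paper's exact formulation may differ:

```python
def influence_scores(citations, damping=0.85, iters=50):
    """PageRank-style influence over an author citation graph.

    citations: dict mapping an author to the list of authors they cite.
    An author's score grows with citations received, weighted by the
    influence of the citing author."""
    authors = set(citations)
    for targets in citations.values():
        authors.update(targets)
    n = len(authors)
    score = {a: 1.0 / n for a in authors}
    for _ in range(iters):
        new = {a: (1 - damping) / n for a in authors}
        for src in authors:
            targets = citations.get(src, [])
            if targets:
                share = damping * score[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Dangling author (cites no one): spread score evenly.
                for a in authors:
                    new[a] += damping * score[src] / n
        score = new
    return score

# Hypothetical graph: both A and B cite C.
scores = influence_scores({"A": ["C"], "B": ["C"]})
```

Here C ends up with the highest score, and A and B tie; the recursion matters once C's citations in turn flow to the authors C cites.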

12.
Mike Thelwall, Scientometrics (2018) 115(3): 1231–1240
Counts of the number of readers registered in the social reference manager Mendeley have been proposed as an early impact indicator for journal articles. Although previous research has shown that Mendeley reader counts for articles tend to have a strong positive correlation with synchronous citation counts after a few years, no previous studies have compared early Mendeley reader counts with later citation counts. In response, this first diachronic analysis compares reader counts within a month of publication with citation counts after 20 months for ten fields. There are moderate or strong correlations in eight out of ten fields, with the two exceptions being the smallest categories (n = 18 and 36) with wide confidence intervals. The correlations are higher than the correlations between later citations and early citations, showing that Mendeley reader counts are more useful early impact indicators than citation counts.

13.

Research universities place research at the core of their academic mission. This is why they are widely recognized for their excellence in research, which earns them the most renowned positions in the various worldwide university rankings. In order to examine the uniqueness of this group of universities, we analyze the scientific production of a sample of them over a five-year period. On the one hand, we analyze their preferences in research, measured as the relative percentage of publications in the different subject areas, and on the other hand, we calculate the similarity between them in research preferences. In order to select a set of research universities, we studied the leading university rankings of Shanghai, QS, Leiden, and Times Higher Education (THE). Although all four rankings have well-established methodologies and hold great prestige, we chose THE because its data were readily available for the study we had in mind. We then selected the twenty academic institutions ranked with the highest score in the latest edition of the THE World University Rankings 2020 and, to contrast their impact, compared them with the twenty institutions with the lowest score in this ranking. We extracted publication data for each university from the Scopus database and applied bibliometric indicators from Elsevier's SciVal. We applied two statistical techniques, cosine similarity and agglomerative hierarchical clustering, to examine and compare affinities in research preferences among the universities. Moreover, a cluster analysis in VOSviewer was done to classify the total scientific production into the four major fields (health sciences, physical sciences, life sciences and social sciences).
As expected, the results showed that top universities have strong research profiles, making them world leaders in those areas, and cosine similarity indicated that some are more similar to one another than others. The results provide clues for enhancing existing collaboration, defining and re-directing lines of research, and seeking new partnerships to find ways to tackle the COVID-19 outbreak.
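The cosine similarity used above to compare research preferences can be sketched directly: each university is represented as a vector of publication shares across subject areas, and the cosine of the angle between two such vectors measures affinity. The profiles below are illustrative, not the study's data:

```python
import math

def cosine_similarity(p, q):
    """Cosine similarity between two publication profiles,
    given as dicts mapping subject area -> share of publications."""
    keys = set(p) | set(q)
    dot = sum(p.get(k, 0.0) * q.get(k, 0.0) for k in keys)
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q)

# Hypothetical profiles (percent of output per major field):
uni_a = {"health": 40, "physical": 30, "life": 20, "social": 10}
uni_b = {"health": 35, "physical": 35, "life": 20, "social": 10}
affinity = cosine_similarity(uni_a, uni_b)
```

Because cosine similarity depends only on the direction of the vectors, two universities with the same relative subject mix score 1.0 even if one publishes far more in absolute terms, which is exactly what is wanted when comparing research preferences rather than volume.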


14.
15.
In theory, the web has the potential to provide information about the wider impact of academic research, beyond traditional scholarly impact. This is because the web can reflect non-scholarly uses of research, such as in online government documents, press coverage or public discussions. Nevertheless, there are practical problems with creating metrics for journals based on web data: principally that most such metrics would be easy for journal editors or publishers to manipulate. Even so, two alternatives seem to have both promise and value: citations derived from digitised books and download counts for journals within specific delivery platforms.

16.
17.
18.
Wei Yaoyu and Lei Lei, Scientometrics (2018) 116(3): 1771–1783
There are three main reasons for retraction: (1) ethical misconduct (e.g. duplicate publication, plagiarism, missing credit, no IRB, ownership issues, authorship issues, interference in the review process, citation manipulation); (2) scientific distortion (e.g. data manipulation, fraudulent data, unsupported conclusions, questionable data validity, non-replicability, data errors, even if unintended); (3) administrative error (e.g. article published in wrong issue, not the final version published, publisher errors). The first category, although highly deplorable, has almost no effect on the advancement of science; the third category is relatively minor. The papers belonging to the second category are the most troublesome from the scientific point of view, as they are misleading and have serious negative implications not only for science but also for society. In this paper, we explore some temporal characteristics of retracted articles, including time of publication, years to retraction, growth of post-retraction citations over time and social media attention, by the three major categories. The data set comprises 995 retracted articles retrieved in October 2014 from Elsevier's ScienceDirect. Citations and Mendeley reader counts were retrieved four times within 4 years, which allowed us to examine post-retraction longitudinal trends not only for citations but also for Mendeley reader counts. The major finding is that both citation counts and Mendeley reader counts continue to grow after retraction.

19.
The problem of comparing academic institutions in terms of their research production is nowadays a priority issue. This paper proposes a relative bidimensional index that takes into account both the net production and its quality, as an attempt to provide a comprehensive and objective way to compare the research output of different institutions in a specific field, using journal contributions and citations. The proposed index is then applied, as a case study, to rank the top Spanish universities in the fields of Chemistry and Computer Science in the period from 2000 until 2009. A comparison with the top 50 universities in the ARWU rankings is also made, showing that the proposed ranking is better suited to distinguish among non-elite universities.

20.
Thelwall (J Informetr 11(1):128–151, 2017a. https://doi.org/10.1016/j.joi.2016.12.002; Web indicators for research evaluation: a practical guide. Morgan and Claypool, London, 2017b) proposed a new family of field- and time-normalized indicators, which is intended for sparse data. These indicators are based on units of analysis (e.g., institutions) rather than on the paper level. They compare the proportion of mentioned papers (e.g., on Twitter) of a unit with the proportion of mentioned papers in the corresponding fields and publication years. We propose a new indicator (Mantel–Haenszel quotient, MHq) for the indicator family. The MHq is rooted in the Mantel–Haenszel (MH) analysis. This analysis is an established method which can be used to pool the data from several 2 × 2 cross tables based on different subgroups. We investigate, using citations and assessments by peers, whether the indicator family can distinguish between quality levels defined by the assessments of peers; thus, we test the convergent validity. We find that the MHq is able to distinguish between quality levels in most cases while other indicators of the family are not. Since our study confirms the MHq as a convergently valid indicator, we apply the MHq to four different Twitter groups as defined by the company Altmetric. Our results show that there is a weak relationship between the Twitter counts of all four Twitter groups and scientific quality, much weaker than between citations and scientific quality. Therefore, our results discourage the use of Twitter counts in research evaluation.
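The MHq is built on the Mantel–Haenszel analysis, which pools 2 × 2 tables across subgroups (here, field-and-year strata). As an illustration of that underlying machinery, not the MHq formula itself, here is the classical MH pooled odds-ratio estimator with hypothetical counts:

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled odds ratio over a list of 2x2 tables.

    Each table is (a, b, c, d), one per subgroup, e.g.:
      a = unit's mentioned papers,    b = unit's unmentioned papers,
      c = reference mentioned papers, d = reference unmentioned papers.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Hypothetical strata (e.g. two field/year subgroups):
strata = [(10, 90, 5, 95),   # unit: 10/100 mentioned; reference: 5/100
          (8, 42, 20, 180)]  # unit: 8/50 mentioned; reference: 20/200
pooled_or = mantel_haenszel_or(strata)
```

Pooling per stratum, rather than summing the raw counts first, is what protects the estimate from confounding by field and publication-year differences, which is the motivation for stratified indicators on sparse altmetric data.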
