Similar Documents
20 similar documents found
1.
Cross-field normalization of scientometric indicators
Comparative assessment of scientometric indicators is greatly hindered by the different standards valid in different science fields and subfields. Indicators pertaining to different fields can be compared only after they are first gauged against a properly chosen reference standard; their relative standing can then be compared. Methods of selecting reference standards and scaling procedures are surveyed in this study, and examples of their practical application are given.
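The gauging step can be sketched as a simple ratio against a field baseline. The field names and citation figures below are invented for illustration, and a mean-based baseline is only one of the reference standards such a survey considers.

```python
def field_normalized(citations, field_baseline):
    """Relative citation rate: raw citations gauged against the field's mean."""
    return citations / field_baseline

# Invented illustrative baselines (mean citations per paper by field).
baselines = {"mathematics": 2.0, "cell biology": 20.0}

math_paper = field_normalized(4, baselines["mathematics"])   # 2x the field standard
bio_paper = field_normalized(20, baselines["cell biology"])  # 1x the field standard
```

After gauging, the two papers' relative standings (2.0 vs. 1.0) are directly comparable even though their raw citation counts differ by a factor of five.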

2.
Median and percentile impact factors: A set of new indicators
Summary In a recent article Sombatsompop et al. (2004) proposed a new way of calculating a synchronous journal impact factor. Their proposal seems quite interesting and will be discussed in this note. Their index will be referred to as the Median Impact Factor (MIF). I explain every step in detail so that readers with little mathematical background can understand and apply the procedure. Illustrations of the procedure are presented. Some attention is given to the estimation of the median cited age in case it is larger than ten years. I think the idea introduced by Sombatsompop, Markpin and Premkamolnetr has great theoretical value, as they are - to the best of my knowledge - the first to consider impact factors based not on years as a basic ingredient, but on an element of the actual form of the citation curve. The MIF is further generalized to the notion of a percentile impact factor.
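A minimal sketch of the median-cited-age step, using an invented citation curve; the full MIF procedure in the note involves further steps beyond this.

```python
def median_cited_age(citations_by_age):
    """Smallest age at which the cumulative citation count reaches
    half of the total (a discrete median of the citation curve)."""
    total = sum(citations_by_age)
    cumulative = 0
    for age, count in enumerate(citations_by_age, start=1):
        cumulative += count
        if 2 * cumulative >= total:
            return age
    return len(citations_by_age)

# Invented citation curve: citations received at ages 1..6 after publication.
curve = [10, 30, 25, 15, 12, 8]
age = median_cited_age(curve)  # cumulative 10, 40, 65 >= 50 -> age 3
```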

3.
Scientometrics - The aim of the present work is to determine the share of country self-citations and to analyze its impact on total citations, average citation per paper, % cited publications and...

4.
Two paradigmatic approaches to the normalisation of citation-impact measures are discussed. The results of the mathematical manipulation of standard indicators such as citation means, notably journal Impact Factors, (called a posteriori normalisation) are compared with citation measures obtained from fractional citation counting (called a priori normalisation). The distributions of two subfields of the life sciences and mathematics are chosen for the analysis. It is shown that both methods provide indicators that are useful tools for the comparative assessment of journal citation impact.
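A priori normalisation by fractional citation counting can be sketched as follows; the reference counts below are invented.

```python
def fractional_citation_count(citing_reference_counts):
    """A priori normalisation: each citation contributes 1/R, where R is
    the number of references in the citing paper, so citations from
    reference-dense fields weigh less."""
    return sum(1.0 / r for r in citing_reference_counts)

# Invented example: three citing papers with 10, 20 and 50 references.
score = fractional_citation_count([10, 20, 50])  # 0.1 + 0.05 + 0.02 = 0.17
```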

5.
Extracting the envelope of Doppler blood-flow spectrograms using an improved percentile method
刘斌  汪源源  王威琪 《声学技术》1998,17(1):9-11,14
Starting from digital methods for estimating the maximum frequency, this paper extracts the envelope of the Doppler spectrogram.

6.
Various scientometric indices have been proposed in an attempt to express the quantitative and qualitative characteristics of scientific output. However, fully capturing the performance and impact of a scientific entity (author, journal, institution, conference, etc.) still remains an open research issue, as each proposed index focuses only on particular aspects of scientific performance. Therefore, scientific evaluation can be viewed as a multi-dimensional ranking problem, where the dimensions represent the assorted scientometric indices. To address this problem, the skyline operator has been proposed in Sidiropoulos et al. (J Informetr 10(3):789–813, 2016) with multiple combinations of dimensions. In the present work, we introduce a new index derived from the utilization of the skyline operator, called Rainbow Ranking or RR-index, that assigns a category score to each scientific entity instead of producing a strict ordering of the ranked entities. Our RR-index allows the combination of any known indices, depending on the purpose of the evaluation; it outputs a single-number metric expressing multi-criteria relative ranking and can be applied to any scientific entity, such as authors and journals. The proposed methodology was experimentally evaluated using a dataset of over 105,000 scientists from the Computer Science field.
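One way to realise a skyline-based category score is to peel successive skyline layers, so that mutually non-dominated entities share a category. This is an illustrative sketch, not the paper's exact RR-index, and the author profiles are invented.

```python
def dominates(a, b):
    """a dominates b if it is at least as good on every index and
    strictly better on at least one (higher is better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def rr_categories(entities):
    """Peel successive skylines; every entity in the k-th skyline layer
    receives category score k (1 = best, incomparable entities tie)."""
    remaining = dict(entities)
    categories = {}
    layer = 1
    while remaining:
        skyline = [name for name, v in remaining.items()
                   if not any(dominates(w, v) for w in remaining.values())]
        for name in skyline:
            categories[name] = layer
            del remaining[name]
        layer += 1
    return categories

# Invented author profiles as (h-index, citation count) pairs.
authors = {"A": (30, 5000), "B": (25, 7000), "C": (20, 3000)}
cats = rr_categories(authors)  # A and B are incomparable -> both category 1
```

Note how A and B receive the same category score rather than being forced into a strict order, which is the point of a category-based ranking.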

7.
Lyhagen  Johan  Ahlgren  Per 《Scientometrics》2020,125(3):2545-2560
Scientometrics - Journal rankings often show significant changes compared to previous rankings. This gives rise to the question of how well estimated the rank of a journal is. In this contribution,...

8.
The ranking of scientific journals is important because of the signal it sends to scientists about what is considered most vital for scientific progress. Existing ranking systems focus on measuring the influence of a scientific paper (citations)—these rankings do not reward journals for publishing innovative work that builds on new ideas. We propose an alternative ranking based on the proclivity of journals to publish papers that build on new ideas, and we implement this ranking via a text-based analysis of all published biomedical papers dating back to 1946. In addition, we compare our neophilia ranking to citation-based (impact factor) rankings; this comparison shows that the two ranking approaches are distinct. Prior theoretical work suggests an active role for our neophilia index in science policy. Absent an explicit incentive to pursue novel science, scientists underinvest in innovative work because of a coordination problem: for work on a new idea to flourish, many scientists must decide to adopt it in their work. Rankings that are based purely on influence thus do not provide sufficient incentives for publishing innovative work. By contrast, adoption of the neophilia index as part of journal-ranking procedures by funding agencies and university administrators would provide an explicit incentive for journals to publish innovative work and thus help solve the coordination problem by increasing scientists’ incentives to pursue innovative work.

9.
We present a rating method that, given information on the pairwise comparisons of n items, minimizes the number of inconsistencies in the ranking of those items. Our Minimum Violations Ranking (MVR) Method uses a binary linear integer program (BILP) to do this. We prove conditions when the relaxed LP will give an optimal solution to the original BILP. In addition, the LP solution gives information about ties and sensitivities in the ranking. Lastly, our MVR method makes use of bounding and constraint relaxation techniques to produce a fast algorithm for the linear ordering problem, solving an instance with about one thousand items in less than 10 minutes.
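For tiny instances, the objective the BILP optimises can be illustrated by exhaustive search over orderings; the head-to-head results below are invented, and a real solver replaces the permutation loop at scale.

```python
from itertools import permutations

def violations(order, wins):
    """Count pairwise results contradicted by the ranking: a violation
    occurs when a lower-ranked item beat a higher-ranked one."""
    position = {item: i for i, item in enumerate(order)}
    return sum(1 for winner, loser in wins if position[winner] > position[loser])

def minimum_violations_ranking(items, wins):
    """Exhaustive stand-in for the BILP, feasible only for tiny n."""
    return min(permutations(items), key=lambda order: violations(order, wins))

# Invented head-to-head results: (winner, loser) pairs, with a cycle C > A.
wins = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "A")]
best = minimum_violations_ranking(["A", "B", "C"], wins)  # one violation is unavoidable
```

Because the results contain a cycle, no ranking is violation-free; the MVR objective settles for the ordering that contradicts the fewest observed comparisons.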

10.
Variable screening and ranking using sampling-based sensitivity measures
This paper presents a methodology for screening out insignificant random variables and ranking the significant ones using sensitivity measures, including two cumulative distribution function (CDF)-based and two mean-response-based measures. The methodology features (1) using random samples to compute sensitivities and (2) using acceptance limits, derived from a test of hypothesis, to classify random variables as significant or insignificant. Because no approximation is needed in either the form of the performance functions or the type of continuous distribution functions representing input variables, the sampling-based approach can handle highly nonlinear functions with non-normal variables. The main characteristics and effectiveness of the sampling-based sensitivity measures are investigated using both simple and complex examples. Because the number of samples needed does not depend on the number of variables, the methodology appears to be particularly suitable for problems with large, complex models that have many random variables but relatively few significant ones.
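A rough sketch of the sampling-based idea, using a plain correlation as a stand-in for the paper's CDF-based and mean-response-based measures; the response model, acceptance limit and variable names are all invented.

```python
import random

def sample_correlation(xs, ys):
    """Pearson correlation estimated from random samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

random.seed(0)
n = 2000
inputs = {"x1": [random.gauss(0, 1) for _ in range(n)],   # strongly drives the response
          "x2": [random.gauss(0, 1) for _ in range(n)]}   # barely drives the response
response = [5 * a + 0.01 * b for a, b in zip(inputs["x1"], inputs["x2"])]

# Sensitivity per variable; the same samples serve every variable, so the
# sample count does not grow with the number of inputs.
sensitivity = {name: abs(sample_correlation(xs, response))
               for name, xs in inputs.items()}
limit = 0.1  # acceptance limit standing in for the test-of-hypothesis bound
significant = sorted((v for v in inputs if sensitivity[v] > limit),
                     key=lambda v: -sensitivity[v])
```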

11.
The Internet is a worldwide network composed of interconnected but independent networks, called Autonomous Systems. Each network owner has to decide which other networks to interconnect with and how to allocate its traffic among its providers. The financial flows between Autonomous Systems depend on these decisions and raise the key issue of revenue management. In this paper, we propose models and exact methods for the joint optimization of interconnection policy and traffic allocation for a customer AS. The problem is analyzed in the top-percentile pricing framework for interconnection agreements, and we assess the solution methods using real-life instances.
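Top-percentile pricing itself can be sketched as follows; the traffic figures, percentile convention and unit price below are invented for illustration.

```python
import math

def top_percentile_charge(samples, q, unit_price):
    """Bill at the q-th percentile of the traffic measurements, so the
    top (100 - q)% of peaks are not charged."""
    ranked = sorted(samples)
    index = max(0, math.ceil(len(ranked) * q / 100) - 1)
    return ranked[index] * unit_price

# Invented 5-minute traffic measurements (Mbps) and a per-Mbps price.
traffic = [10, 12, 11, 80, 13, 12, 11, 10, 12, 95]
bill = top_percentile_charge(traffic, 90, 2.0)  # billed at 80 Mbps, not the 95 peak
```

Because the top decile of peaks is free, a customer AS can profitably route its burstiest traffic toward a single provider, which is what makes the joint allocation problem interesting.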

12.
In this work, we separate the illumination and reflectance components of a single, non-uniformly illuminated input image. Considering the input image and its blurred version as two different combinations of the illumination and reflectance components, we use conventional independent component analysis (ICA) to separate these two components. The separated reflectance component, which is an illumination-normalized version of the input image, can then be used as an effective pre-processed (illumination-normalized) image for different computer vision tasks, e.g. face recognition. To this end, we present simulation results showing that our proposed pre-processing method, called illumination normalization using ICA, increases the accuracy rate of several baseline face recognition systems (FRSs). The proposed method showed improved performance of baseline FRSs on the Extended Yale-B database.

13.
14.
15.
Degradation tests are widely used to assess the reliability of highly reliable products which are not likely to fail under traditional life tests or accelerated life tests. However, for some highly reliable products the degradation may be very slow, making a precise assessment within a reasonable amount of testing time impossible. In such cases, an alternative is to use higher stresses to extrapolate the product's reliability at the design stress. This is called an accelerated degradation test (ADT). In conducting an ADT, several decision variables, such as the inspection frequency, sample size and termination time at each stress level, strongly influence the efficiency of the experiment. An inappropriate choice of these decision variables not only wastes experimental resources but also reduces the precision of the estimation of the product's reliability at the use condition. The main purpose of this paper is to deal with the problem of designing an ADT. By minimizing the mean-squared error of the estimated 100pth percentile of the product's lifetime distribution at the use condition, subject to the constraint that the total experimental cost does not exceed a predetermined budget, a nonlinear integer programming problem is built to derive the optimal combination of the sample size, inspection frequency and termination time at each stress level. A numerical example is provided to illustrate the proposed method. Copyright © 2003 John Wiley & Sons, Ltd.
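A toy sketch of the constrained design problem: enumerate candidate plans, discard those over budget, and keep the plan with the smallest error measure. The cost model and MSE proxy below are invented placeholders, not the paper's formulation.

```python
from itertools import product

def optimal_adt_plan(budget, cost_per_unit, cost_per_inspection,
                     sample_sizes, intervals, horizons):
    """Enumerate (sample size, inspection interval, termination time)
    triples and keep the feasible plan with the lowest toy MSE."""
    best, best_mse = None, float("inf")
    for n, f, t in product(sample_sizes, intervals, horizons):
        inspections = t // f
        cost = n * cost_per_unit + n * inspections * cost_per_inspection
        if cost > budget:
            continue  # infeasible: exceeds the experimental budget
        # Invented proxy: more units, inspections and test time -> lower MSE.
        mse = 1.0 / (n * inspections * t)
        if mse < best_mse:
            best, best_mse = (n, f, t), mse
    return best

plan = optimal_adt_plan(budget=1000, cost_per_unit=20, cost_per_inspection=2,
                        sample_sizes=[5, 10, 20], intervals=[24, 48],
                        horizons=[240, 480])
```

The real problem replaces the proxy with the MSE of the estimated lifetime percentile at the use condition, which makes the program nonlinear in the decision variables.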

16.
When LCA practitioners perform LCAs, the interpretation of the results can be difficult without a reference point against which to benchmark them. Hence, normalization factors are important for relating results to a common reference. The main purpose of this paper was to update the normalization factors for the US and US-Canadian regions. The normalization factors were used to highlight the most contributing substances, thereby enabling practitioners to focus on important substances when compiling the inventory, as well as providing them with normalization factors reflecting the actual situation. Normalization factors were calculated using characterization factors from the TRACI 2.1 LCIA model. The inventory was based on US databases on emissions of substances. The Canadian inventory was based on a previous inventory with 2005 as reference; in this inventory the most significant substances were updated to 2008 data. The results showed that impact categories were generally dominated by a small number of substances. The contribution analysis showed that the reporting of substance classes was highly significant for the environmental impacts, although in reality these substances are nonspecific in composition, so the characterization factors selected to represent these categories may differ significantly from the actual identity of these aggregates. Furthermore, the contribution analysis highlighted the need to examine carefully the effects of metals, even though the toxicity-based categories have only interim characterization factors calculated with USEtox. A need for improved understanding of the wide range of uncertainties incorporated into studies with reported substance classes was identified. This is especially important since aggregated substance classes are often used in LCA modeling when information on the particular substance is missing.
Given the dominance of metals in the human and ecotoxicity categories, it is imperative to refine the CFs within USEtox. Some of the results within this paper indicate that soil emissions of metals are significantly higher than we expect in actuality.
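The construction of a normalization factor can be sketched as emissions weighted by characterization factors; the substances, quantities and population below are invented and do not reproduce the TRACI 2.1 values.

```python
def normalization_factor(emissions, characterization_factors, population):
    """Per-capita reference impact for one category: regional emissions
    weighted by their characterization factors, divided by population."""
    total = sum(mass * characterization_factors[s] for s, mass in emissions.items())
    return total / population

# Invented regional inventory (kg/yr) and CFs (kg CO2-eq per kg) for climate change.
emissions = {"CO2": 6.0e12, "CH4": 3.0e10}
cfs = {"CO2": 1.0, "CH4": 28.0}
nf = normalization_factor(emissions, cfs, population=3.3e8)  # kg CO2-eq per person-year
```

Dividing a study's characterized result by such a factor expresses it as a fraction of an average person's annual impact, which is the common reference the paper discusses.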

17.
OBJECTIVE: To determine how to use the multitude of available epidemiological data to rank accidents for the prioritisation of prevention. METHODS: A stepwise method to rank accidents for priority-setting at any time is proposed. The first step is to determine the overall objectives of injury prevention. Based on these objectives, the relevant epidemiological criteria are determined. These criteria need to be weighted by experts in such a way that the weights can be reused for every new cycle of priority-setting. Thus, every time the method is applied: first, the relevant types of accidents are identified; second, the epidemiological criteria are determined per type of accident; and third, the types of accidents are ranked by means of standardised weights per criterion. The proposed indirect method is illustrated by an empirical example, and the results were compared with a direct method, i.e. ranking by an expert panel. RESULTS: In the pilot, we ranked four age groups of victims of home and leisure accidents: 0-4, 4-19 and 20-54 years of age, and victims aged 55 years or older. The resulting rankings differ largely per application; ranked first are either victims older than 55 years or those aged 20-54. CONCLUSIONS: The proposed method enables a structured, transparent way to set priorities for home and leisure accidents. It is a promising method, although further development is clearly necessary, based on actual application of the model.
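The third step, ranking by standardised weights per criterion, can be sketched as a weighted sum; the criteria, expert weights and scores below are invented.

```python
def rank_accident_types(criteria_scores, weights):
    """Weighted sum of epidemiological criteria per accident type,
    ranked from highest priority to lowest."""
    totals = {t: sum(weights[c] * v for c, v in scores.items())
              for t, scores in criteria_scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

# Invented criteria values (already standardised to 0-1) and expert weights.
weights = {"incidence": 0.5, "severity": 0.3, "cost": 0.2}
scores = {"55+": {"incidence": 0.9, "severity": 0.8, "cost": 0.7},
          "20-54": {"incidence": 0.8, "severity": 0.5, "cost": 0.9},
          "0-4": {"incidence": 0.6, "severity": 0.4, "cost": 0.3}}
priority = rank_accident_types(scores, weights)  # ["55+", "20-54", "0-4"]
```

Because the weights are fixed once by the expert panel, each new cycle of priority-setting only requires refreshing the criterion scores, which is the reusability the method aims for.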

18.
Selection of proper materials for different components is one of the most challenging tasks in the design and development of products for diverse engineering applications. Materials play a crucial role throughout the entire design and manufacturing process. Wrong selection of a material often leads to huge costs and ultimately to premature component or product failure. Designers therefore need to identify and select proper materials with specific functionalities in order to obtain the desired output with minimum cost and specific applicability. This paper attempts to solve the materials selection problem using two of the most promising multi-criteria decision-making (MCDM) approaches and compares their relative performance for a given material selection application. The first MCDM approach is 'VlseKriterijumska Optimizacija I Kompromisno Resenje' (VIKOR), a compromise ranking method, and the other is 'ELimination Et Choix Traduisant la REalité' (ELECTRE), an outranking method. These two methods are used to rank the alternative materials when several requirements are considered simultaneously. Two examples are cited in order to demonstrate and validate the effectiveness and flexibility of these two MCDM approaches. In each example, a list of all the possible choices from the best to the worst suitable materials is obtained, taking into account the different material selection criteria. The rankings of the selected materials largely corroborate those obtained by past researchers.
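A compact sketch of the VIKOR compromise ranking under invented material data; criterion values are assumed benefit-type (higher is better), already scaled, and distinct per criterion, and the weights are illustrative only.

```python
def vikor_ranking(matrix, weights, v=0.5):
    """Rank alternatives by the VIKOR compromise index
    Q = v*(S - S*)/(S- - S*) + (1-v)*(R - R*)/(R- - R*),
    where S is the group utility and R the individual regret; lower Q is better."""
    best = [max(col) for col in zip(*matrix.values())]
    worst = [min(col) for col in zip(*matrix.values())]
    S, R = {}, {}
    for name, row in matrix.items():
        d = [w * (b - x) / (b - wst)
             for w, x, b, wst in zip(weights, row, best, worst)]
        S[name], R[name] = sum(d), max(d)
    s_min, s_max = min(S.values()), max(S.values())
    r_min, r_max = min(R.values()), max(R.values())
    Q = {n: v * (S[n] - s_min) / (s_max - s_min)
            + (1 - v) * (R[n] - r_min) / (r_max - r_min) for n in matrix}
    return sorted(Q, key=Q.get)

# Invented material data: (strength, corrosion resistance, affordability).
materials = {"steel": (0.9, 0.4, 0.8), "titanium": (1.0, 0.9, 0.2),
             "aluminium": (0.5, 0.7, 0.9)}
order = vikor_ranking(materials, [0.5, 0.3, 0.2])
```

The parameter v balances majority-rule group utility against the worst single-criterion regret; v = 0.5 is the conventional compromise setting.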

19.
In this paper, we briefly review the development of ranking and selection (R&S) in the past 70 years, especially the theoretical achievements and practical applications in the past 20 years. Different from the frequentist and Bayesian classifications adopted by Kim and Nelson (2006b) and Chick (2006) in their review articles, we categorize existing R&S procedures into fixed-precision and fixed-budget procedures, as in Hunter and Nelson (2017). We show that these two categories of procedures essentially differ in the underlying methodological formulations, i.e., they are built on hypothesis testing and dynamic programming, respectively. In light of this variation, we review in detail some well-known procedures in the literature and show how they fit into these two formulations. In addition, we discuss the use of R&S procedures in solving various practical problems and propose what we think are the important research questions in the field.

20.
Because an accurate expected-utility function cannot be obtained, decision making is difficult in environments with incomplete information and uncertain outcomes. An evolutionary decision-making method based on the ranking of candidate alternatives is proposed. Typically, a set of indicators related to the expected utility of the candidate alternatives is derived through analysis, and designing the decision rule reduces to finding the relationship between the two. If all candidate alternatives are grouped into n classes according to the indicators that affect utility, and an evolutionary algorithm is used to search the space of n! orderings for the expected-utility ranking of all alternatives, the best decision can then be made according to this ranking. A genetic algorithm for this ranking problem is proposed. The method relies little on expert knowledge, requires no explicit construction of an expected-utility function, can effectively handle non-numeric or non-quantified indicators as well as conflicting and correlated indicators, and still obtains robust solutions under random noise. Its application to the design of a simulated robot controller demonstrates the effectiveness of the method.
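A mutation-only evolutionary search over orderings gives the flavour of the approach; this is not the paper's genetic algorithm, and the preference data (including one deliberately conflicting observation) are invented.

```python
import random

def fitness(order, preferences):
    """Number of observed pairwise preferences the ranking agrees with."""
    pos = {item: i for i, item in enumerate(order)}
    return sum(1 for better, worse in preferences if pos[better] < pos[worse])

def evolve_ranking(items, preferences, generations=1000, seed=1):
    """Mutation-only evolutionary search over the n! orderings: swap two
    positions and keep the child if it agrees with at least as many
    preferences, so the search tolerates conflicting indicators."""
    rng = random.Random(seed)
    current = list(items)
    rng.shuffle(current)
    for _ in range(generations):
        child = current[:]
        i, j = rng.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
        if fitness(child, preferences) >= fitness(current, preferences):
            current = child
    return current

# Invented noisy preference data implying the utility order A > B > C > D,
# with one conflicting observation (D preferred to A).
prefs = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"),
         ("B", "D"), ("C", "D"), ("D", "A")]
ranking = evolve_ranking(["A", "B", "C", "D"], prefs)
```

No explicit utility function is ever constructed: the search only needs the pairwise preference observations, which is the property the abstract emphasises.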
