Found 20 similar documents (search time: 0 ms)
1.
Configuration management is critical to effective management, coordination, and control in an integrated software-development environment. Complex development environments require an integrated CM toolset. The CM tools you select, and how you implement them, will affect every phase of the life cycle in an integrated, team-oriented environment. The tools you select must facilitate the many diverse activities performed by separate teams and the management of those activities. The author considers the broad range of organizational and technical issues involved in selecting configuration management tools.
2.
Digital watermarking is characterized by robustness, transparency, and complexity. Most of the literature on watermark evaluation concentrates on robustness, yet complexity evaluation is also very important in the field of digital watermark assessment. This paper therefore focuses on evaluating the complexity of the embedding functions of digital audio watermarking algorithms. It introduces the concept of a basic evaluation scheme, selects two typical audio watermarking algorithms, evaluates them against this scheme, and discusses the embedding parameters and the audio test set applied to the basic scheme. The complexity evaluation results for the two algorithms are presented and compared.
3.
To quantitatively describe the complexity of a traffic flow system, joint entropy and C0 complexity are applied to the analysis of traffic flow series. Computing the joint entropy of the original series and its surrogate series reflects how much nonlinearity the system contains, while computing the C0 complexity of a series reflects how much irregularity it contains. The joint entropy and C0 complexity are computed for a periodic series, a Logistic series, a Hénon series, a random series, and measured traffic flow speed series from five different time periods. The results show clear differences between the two complexity measures across the different series, and the computation requires only short series lengths. Traffic flow is a chaotic system lying between order and disorder, between regularity and irregularity; the joint entropy and C0 complexity of traffic flow series differ markedly across time periods and can be used to quantitatively characterize the complexity of the traffic flow system.
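The C0 complexity used above splits a series into a regular (strong-spectrum) part and an irregular remainder. A minimal numpy sketch follows; the threshold parameter r and the test signals are assumptions for illustration, not values from the paper:

```python
import numpy as np

def c0_complexity(x, r=1.0):
    """C0 complexity: fraction of a series' energy that remains after
    removing its strong (regular) spectral components."""
    x = np.asarray(x, dtype=float)
    F = np.fft.fft(x)
    power = np.abs(F) ** 2
    # Keep only spectral components stronger than r times the mean power
    F_regular = np.where(power > r * power.mean(), F, 0.0)
    x_regular = np.real(np.fft.ifft(F_regular))
    residual = x - x_regular
    return float(np.sum(residual ** 2) / np.sum(x ** 2))

t = np.arange(512)
periodic = np.sin(2 * np.pi * t / 32)                   # regular series
random = np.random.default_rng(0).standard_normal(512)  # irregular series
print(c0_complexity(periodic), c0_complexity(random))
```

As expected, the periodic series scores near 0 while the random series scores substantially higher, matching the paper's use of C0 to separate regular from irregular content.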
4.
A rough set-based hypergraph trust measure parameter selection technique for cloud service selection
Nivethitha Somu Kannan Kirthivasan V. S. Shankar Sriram 《The Journal of supercomputing》2017,73(10):4535-4559
Selection of trustworthy cloud services has been a major research challenge in cloud computing, due to the proliferation of numerous cloud service providers (CSPs) along every dimension of computing. This scenario makes it hard for cloud users to identify an appropriate CSP based on their unique quality of service (QoS) requirements. A generic solution to the problem of cloud service selection can be formulated in terms of trust assessment. However, the accuracy of the trust value depends on the optimality of the service-specific trust measure parameter (TMP) subset. This paper presents TrustCom, a novel trust assessment framework, and a rough set-based hypergraph technique (RSHT) for identifying the optimal TMP subset. Experiments using Cloud Armor and synthetic trust feedback datasets show the superiority of RSHT over existing feature selection techniques. The performance of RSHT was analyzed using the Weka tool and a hypergraph-based computational model with respect to reduct size, time complexity, and service ranking.
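As a rough-set refresher for the reduct idea mentioned above, a minimal reduct (the smallest attribute subset that preserves the positive region) can be found by brute force on a toy table. The table, column layout, and helper names below are illustrative and are not the paper's RSHT algorithm:

```python
from itertools import combinations

def partition(rows, attrs):
    """Equivalence classes of row indices induced by the given attributes."""
    groups = {}
    for i, row in enumerate(rows):
        groups.setdefault(tuple(row[a] for a in attrs), set()).add(i)
    return list(groups.values())

def positive_region(rows, attrs, decision):
    """Rows whose equivalence class is pure with respect to the decision."""
    pos = set()
    for block in partition(rows, attrs):
        if len({rows[i][decision] for i in block}) == 1:
            pos |= block
    return pos

def minimal_reduct(rows, attrs, decision):
    """Smallest attribute subset preserving the full positive region."""
    full = positive_region(rows, attrs, decision)
    for k in range(1, len(attrs) + 1):
        for subset in combinations(attrs, k):
            if positive_region(rows, list(subset), decision) == full:
                return subset
    return tuple(attrs)

# Toy trust-feedback table: columns 0-2 are candidate TMPs,
# column 3 is the trust decision (purely illustrative data)
table = [(1, 0, 0, 'hi'), (1, 1, 0, 'hi'), (0, 0, 1, 'lo'), (0, 1, 1, 'lo')]
print(minimal_reduct(table, [0, 1, 2], 3))  # a single attribute suffices here
```

Real reduct computation on large trust datasets needs heuristics (as RSHT provides); exhaustive search is exponential in the number of attributes.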
5.
6.
Potential computer system users or buyers usually employ a computer performance evaluation technique only if they believe its results provide valuable information. System Performance Evaluation Cooperative (SPEC) measures are perceived to provide such information and are therefore the ones most commonly used. SPEC measures are designed to evaluate the performance of engineering and scientific workstations, personal vector computers, and even minicomputers and superminicomputers. Along with the Transaction Processing Council (TPC) measures for database I/O performance, they have become de facto industry standards. But do SPEC's evaluation outcomes actually provide added information value? In this article, we examine these measures by considering their structure, advantages, and disadvantages. We use two criteria in our examination: are the programs in the SPEC suite properly blended to reflect a representative mix of different applications, and are they properly synthesized so that the aggregate measures correctly rank computers by performance? We conclude that many programs in the SPEC suites are superfluous; the benchmark size can be reduced by more than 50%. The way the measure is calculated may also cause distortion: substituting the harmonic mean for the geometric mean used by SPEC roughly preserves the measure while giving better consistency. Finally, SPEC measures reflect the performance of the CPU rather than the entire system, so they might be inaccurate in ranking an entire system. To remedy these problems, we propose a revised methodology for obtaining SPEC measures.
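The harmonic-versus-geometric mean point can be seen on toy numbers; the benchmark speedup ratios below are invented for illustration:

```python
import math

def geometric_mean(ratios):
    """Geometric mean, as used by SPEC for aggregating speedup ratios."""
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

def harmonic_mean(ratios):
    """Harmonic mean: dominated by the slowest benchmarks."""
    return len(ratios) / sum(1.0 / r for r in ratios)

# Hypothetical speedups over a reference machine on four benchmarks
uniform = [2.0, 2.0, 2.0, 2.0]   # consistently 2x faster
skewed = [8.0, 1.1, 1.1, 1.1]    # a single benchmark dominates

print(geometric_mean(skewed))   # ~1.81: flattered by the outlier
print(harmonic_mean(skewed))    # ~1.40: weighted toward the slow cases
```

Both means agree on the uniform machine (2.0), but on the skewed machine the harmonic mean penalizes the three weak results that the geometric mean partly averages away, which is the consistency argument made above.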
7.
The complexity of evaluating integers and polynomials is studied. A new model is proposed for studying such complexities. This model differs from previous models by requiring the construction of the constants to be used in the computation; this construction is given a cost that depends on the size of the constant. Previous models used a uniform cost, of either 0 or 1, for operations involving constants. Using this model, proper hierarchies are shown to exist for both integers and polynomials with respect to evaluation cost. Furthermore, it is shown that almost all integers (polynomials) are as difficult to evaluate as the hardest integer (polynomial). These results remain true even if the underlying basis of binary operations which the algorithm performs is varied.
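To make the size-dependent-constant idea concrete, here is a hypothetical sketch of Horner evaluation in a cost model where constructing each constant costs its bit length and each arithmetic operation costs 1. The specific cost function is an assumption for illustration, not the paper's model:

```python
def horner_cost(coeffs, const_cost=lambda c: max(abs(int(c)).bit_length(), 1)):
    """Cost of evaluating a polynomial by Horner's rule when each constant
    coefficient must be constructed at a size-dependent cost (here: its bit
    length) and each * and + costs one unit."""
    cost = const_cost(coeffs[0])      # construct the leading coefficient
    for c in coeffs[1:]:
        cost += 1                     # multiply the accumulator by x
        cost += const_cost(c)         # construct the next constant
        cost += 1                     # add it
    return cost

# 3x^2 + 5x + 7: constants cost 2, 3, 3 bits; plus 2 multiplies and 2 adds
print(horner_cost([3, 5, 7]))  # 12
```

Under a uniform-cost model all three polynomials of degree 2 would cost the same; here larger coefficients strictly increase the cost, which is what makes the hierarchy results possible.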
8.
9.
SeyedMehrdad Hosseini Alireza Fatehi Tor Arne Johansen Ali Khaki Sedigh 《Journal of Process Control》2012,22(9):1732-1742
This paper provides a systematic method for model bank selection in multi-linear model analysis for nonlinear systems, presenting a new algorithm that incorporates a nonlinearity measure and a modified gap-based metric. The algorithm is developed for off-line use but can be implemented for on-line usage. Initially, nonlinearity measure analysis based on higher-order statistics (HOS) and linear cross-correlation methods is used to decompose the total operating space into several regions with linear models. The resulting linear models are then used to construct the primary model bank. To avoid unnecessary linear local models in the primary model bank, a gap-based metric is introduced and applied to merge similar linear local models. To illustrate the usefulness of the proposed algorithm, two simulation examples are presented: a pH neutralization plant and a continuous stirred tank reactor (CSTR).
10.
Evan E. Anderson 《Software》1989,19(8):707-717
The proliferation of software packages has created a difficult, complex problem of evaluation and selection for many users. Traditional approaches to the quantification of package performance have relied on compensatory models, such as the linear weighted attribute model, which sums the weighted ratings of software attributes. These approaches define the dimensions of quality too narrowly and therefore omit substantial amounts of information from consideration. This paper presents an alternative methodology, previously used in capital rationing and tournament ranking, that expands the opportunity for objective insight into software quality. In particular, it considers three measures of quality: the frequency with which the attribute ratings of one package exceed those of another; the presence of outliers, where very poor performance on a single attribute may be glossed over by compensatory methods; and the cumulative magnitude by which the attribute ratings of one package exceed those of others. The proposed methodology is applied to the evaluation of the following software types: word processing, database management systems, spreadsheet/financial planning, integrated software, graphics, data communications and project management.
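The three non-compensatory signals described can be sketched directly; the rating scale, the poor-performance threshold, and the sample ratings below are assumptions for illustration:

```python
def dominance_profile(a, b, poor=2):
    """Three non-compensatory quality signals for two packages rated on the
    same attributes (higher is better): win frequency, very-poor-score
    outliers, and cumulative margin of superiority."""
    wins_a = sum(x > y for x, y in zip(a, b))            # frequency of wins
    wins_b = sum(y > x for x, y in zip(a, b))
    outliers_a = [i for i, x in enumerate(a) if x <= poor]  # crippling scores
    outliers_b = [i for i, y in enumerate(b) if y <= poor]
    margin_a = sum(max(x - y, 0) for x, y in zip(a, b))  # cumulative excess
    margin_b = sum(max(y - x, 0) for x, y in zip(a, b))
    return {"wins": (wins_a, wins_b),
            "outliers": (outliers_a, outliers_b),
            "margin": (margin_a, margin_b)}

package_a = [9, 8, 8, 7, 8]    # hypothetical attribute ratings
package_b = [10, 9, 9, 1, 10]  # strong overall, one very poor attribute
print(dominance_profile(package_a, package_b))
```

A compensatory weighted sum rates these two packages almost identically (40 vs. 39), while the profile exposes package b's outlier on attribute 3, which is precisely the information the paper argues compensatory models gloss over.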
11.
Xin Sun Yanheng Liu Jin Li Jianqi Zhu Huiling Chen Xuejie Liu 《Pattern recognition》2012,45(8):2992-3002
In recent years, various information-theoretic measures have been proposed to remove as many redundant features as possible from high-dimensional data sets. However, most traditional information-theoretic selectors ignore features that have strong discriminatory power as a group but are weak as individuals. To cope with this problem, this paper introduces a cooperative game theory based framework to evaluate the power of each feature. This power serves as a metric of the importance of each feature, according to the intricate and intrinsic interrelations among features. A general filter feature selection scheme is then presented, based on the introduced framework, to handle the feature selection problem. To verify the effectiveness of the method, experimental comparisons with several other existing feature selection methods on fifteen UCI data sets are carried out using four typical classifiers. The results show that the proposed algorithm achieves better results than the other methods in most cases.
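The cooperative-game idea can be illustrated with an exact Shapley value computation, the standard way to split a coalition's value among players. The toy value function below, where two features are useless alone but strong together, is invented for illustration; a real selector would estimate such values from data:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley value of each player under a coalition value function."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for S in combinations(others, k):
                # Weight of coalitions of size k in the Shapley formula
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += w * (value(frozenset(S) | {p}) - value(frozenset(S)))
    return phi

# Toy value function: f1 and f2 have strong discriminatory power only as a
# pair (an XOR-like interaction); f3 contributes a little on its own.
def v(S):
    score = 1.0 if {"f1", "f2"} <= S else 0.0
    if "f3" in S:
        score += 0.2
    return score

print(shapley_values(["f1", "f2", "f3"], v))
```

Each of f1 and f2 receives 0.5 despite having zero individual value, which is exactly the group-versus-individual effect the paper targets; exhaustive Shapley computation is exponential, so practical feature selectors approximate it.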
12.
Since fuzzy quality data are ubiquitous in the real world, this paper proposes supplier selection and evaluation on the basis of the quality criterion under such a fuzzy environment. The Cpk index has been the most popular index used to evaluate the quality of a supplier's products. Using fuzzy data collected from the q possible suppliers' products, fuzzy estimates of the q suppliers' capability indices are obtained according to the resolution identity, a well-known theorem in fuzzy set theory. Optimization problems are formulated and solved to obtain the α-level sets used to construct the membership functions of the fuzzy estimates of the Cpk indices. These membership functions are sorted by a fuzzy ranking method to choose the preferable suppliers. Finally, a numerical example illustrates a possible application of incorporating fuzzy data into quality-based supplier selection and evaluation.
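For reference, the crisp Cpk index underlying the fuzzy estimates is a one-liner; the process parameters and specification limits below are illustrative:

```python
def cpk(mean, std, lsl, usl):
    """Process capability index Cpk for specification limits [lsl, usl]:
    distance from the mean to the nearer limit, in units of 3 sigma."""
    return min(usl - mean, mean - lsl) / (3.0 * std)

# A centered process with a 6-sigma margin on each side has Cpk = 2
print(cpk(mean=10.0, std=1.0, lsl=4.0, usl=16.0))  # 2.0
```

The fuzzy version in the paper replaces the crisp mean and standard deviation with fuzzy estimates, so Cpk itself becomes a fuzzy number described by its α-level sets.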
13.
An exploration of enterprise technology selection and evaluation
The evaluation-and-selection of enterprise technologies by firms has been said to be largely rational and deterministic. This paper challenges this notion, and puts forward the argument that substantial ceremonial aspects also play an important role. An in-depth, exploratory longitudinal case study of a bank selecting a ubiquitous and pervasive e-mail system was conducted using grounded theory and a hermeneutic [pre] understanding of institutional and decision making theories. Intuition, symbols, rituals, and ceremony all figured prominently in the decision process. However, rather than being in conflict with the rational processes, we found them to be in tension, leading to a more holistic social construction of decision processes. For researchers, this suggests that a focus on process rationality, not outcomes, might lead to a fuller understanding of these critical decisions. For managers, it underscores the importance of understanding the past in order to create the future.
14.
Jozo J. Dujmović 《International journal of parallel programming》1980,9(6):435-458
Computer evaluation, comparison, and selection is essentially a decision process. The decision making is based on a number of worth indicators, including various computer performance indicators. The performance indicators are obtained through the computer performance measurement procedure. Consequently, in this environment the measurement procedure should be completely conditioned by the decision process. This paper investigates various aspects of the computer performance measurement and evaluation procedure within the context of the computer evaluation, comparison, and selection process based on the Logic Scoring of Preference method. A set of elementary criteria for performance evaluation is proposed and the corresponding set of performance indicators is defined. The necessary performance measurements are based on a standardized set of synthetic benchmark programs and include three separate measurements: monoprogramming performance measurement, multiprogramming performance measurement, and multiprogramming efficiency measurement. Using the proposed elementary criteria, the measured performance indicators can be transformed into elementary preferences and then aggregated with other nonperformance elementary preferences obtained through the evaluation process. The applicability of the presented elementary criteria is illustrated by numerical examples.
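The aggregation step in Logic Scoring of Preference combines elementary preferences with a weighted power mean whose exponent tunes the operator between and-like and or-like behavior. A minimal sketch follows; the sample preferences, weights, and exponents are illustrative assumptions:

```python
import math

def lsp_aggregate(prefs, weights, r):
    """Weighted power mean over elementary preferences in (0, 1].
    r = 1 is the arithmetic mean (neutral); r < 1 leans conjunctive
    (and-like, punishes weak scores); r > 1 leans disjunctive (or-like).
    Assumes all preferences are strictly positive."""
    if r == 0:  # limit case: weighted geometric mean
        return math.exp(sum(w * math.log(p) for w, p in zip(weights, prefs)))
    return sum(w * p ** r for w, p in zip(weights, prefs)) ** (1.0 / r)

prefs, weights = [0.9, 0.5], [0.5, 0.5]
print(lsp_aggregate(prefs, weights, 1))    # 0.7: neutral average
print(lsp_aggregate(prefs, weights, -1))   # harmonic: punishes the weak score
print(lsp_aggregate(prefs, weights, 2))    # rewards the strong score
```

Choosing r per aggregation node is how an evaluator encodes that some requirements are mandatory (conjunctive) while others are replaceable (disjunctive).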
15.
Inter-process communication is achieved in many concurrent programming languages through message entries. Many of these languages contain inter-entry selection constructs allowing a process to selectively choose one of a group of message entries to service. Generally, this selection process is handled non-deterministically. To enable a degree of control over this selection, limited inter-entry selection control mechanisms are available in several of these languages. This paper reviews the need for more expressive inter-entry selection control mechanisms and details the design and implementation of two such control mechanisms: static and dynamic preferences. These implementations of static and dynamic preferences, SRps and SRpd respectively, are extensions of the SR concurrent programming language. In both implementations, the use of preferences is optional, and thus the overhead associated with their use is incurred only when their use is necessary. Finally, this paper describes the performance of these implementations on several classical synchronization problems. For tests run in a shared memory environment, the results show that there is substantial cost associated with the preference implementations. However, the results of the distributed environment tests illustrate that the incremental cost of adding preferences is small and often not discernible when the overhead costs associated with communication across a network are considered.
16.
Many data mining applications, such as spam filtering and intrusion detection, are faced with active adversaries. In all these applications, the future data sets and the training data set are no longer from the same population, due to the transformations employed by the adversaries. Hence a main assumption for the existing classification techniques no longer holds, and initially successful classifiers degrade easily. This becomes a game between the adversary and the data miner: the adversary modifies its strategy to avoid being detected by the current classifier; the data miner then updates its classifier based on the new threats. In this paper, we investigate the possibility of an equilibrium in this seemingly never-ending game, where neither party has an incentive to change: modifying the classifier causes too many false positives with too little increase in true positives, while changes by the adversary decrease the utility of the false negative items that are not detected. We develop a game theoretic framework in which the equilibrium behavior of adversarial classification applications can be analyzed, and provide solutions for finding an equilibrium point. A classifier's equilibrium performance indicates its eventual success or failure. The data miner could then select attributes based on their equilibrium performance and construct an effective classifier. A case study on online lending data demonstrates how to apply the proposed game theoretic framework to a real application.
17.
M. Reddy 《Virtual Reality》1998,3(2):132-143
Level of detail (LOD) is a technique where geometric objects are represented at a number of resolutions, allowing the workload of the system to be based upon an object's distance, size, velocity, or eccentricity. However, little is known about how to specify optimally when a particular LOD should be selected so that the user is not aware of any visual change, or to what extent any particular LOD scheme can improve an application's performance. In response, this paper produces a generic, orthogonal model for LOD based upon data from the field of human visual perception. The effect of this model on the system is evaluated to discover the contribution that each component makes towards any performance improvement. The results suggest that both velocity and eccentricity LOD should be implemented together (if at all) because their individual contribution is likely to be negligible. Also, it is apparent that size (or distance) optimisations offer the greatest benefit, contributing around 95% of any performance increment.
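The selection logic described can be sketched as a small decision function. The thresholds and relief factors below are invented placeholders, not the calibrated perceptual model from the paper:

```python
def select_lod(distance, eccentricity_deg, velocity_deg_s):
    """Pick a level of detail (0 = finest mesh) from viewing conditions.
    Peripheral or fast-moving objects tolerate coarser models; the numeric
    thresholds here are illustrative only."""
    relief = 1.0
    if eccentricity_deg > 20.0:   # object is far from the gaze direction
        relief *= 2.0
    if velocity_deg_s > 30.0:     # object moves fast across the retina
        relief *= 2.0
    effective_distance = distance * relief
    if effective_distance < 10.0:
        return 0
    if effective_distance < 40.0:
        return 1
    return 2

print(select_lod(5.0, 0.0, 0.0))     # near, foveal, static: finest LOD
print(select_lod(5.0, 45.0, 60.0))   # near but peripheral and fast: coarser
```

Note how the eccentricity and velocity factors only matter when they cross a threshold together with distance, consistent with the paper's finding that their individual contributions are often negligible next to size/distance.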
18.
《Computers & Industrial Engineering》2007,52(1):143-161
The employee evaluation and selection system is an important problem that can significantly affect the future competitiveness and performance of an organization. This paper presents a comprehensive hierarchical structure for selecting and evaluating the right employee. The structure can systematically build the goals of employee selection to carry out the business goals and strategies of an organization, identify suitable factors and measurement indicators, and set up a consistent evaluation standard to facilitate the decision process. The process of matching an employee with a certain job is performed through a competency-based fuzzy model. An example demonstrates the feasibility of the presented framework.
19.
In this paper a methodology is developed for quantitative evaluation and selection of complex systems satisfying the desired specifications. The proposed methodology is based on the development of a figure of merit combining both problem and system specifications and permitting the evaluation and selection of the most appropriate system from a set of candidate systems suitable for a wide range of applications for which the system can be used. A case study is investigated showing the validity of the developed methodology.
20.
Sevinc Ilhan Omurca 《Applied Soft Computing》2013,13(1):690-697
Supplier evaluation and selection plays a critical role in, and has a significant impact on, purchasing management in the supply chain. It is also a complex multiple-criteria decision-making problem affected by several conflicting factors. Because multiple criteria affect the evaluation and selection process, deciding which criteria play the most critical roles is a very important step for supplier selection, evaluation, and particularly development. This study proposes a hybridization of fuzzy c-means (FCM) and rough set theory (RST) as a new solution to the supplier selection, evaluation, and development problem. First, the vendors are clustered with the FCM algorithm; the resulting clusters are then represented by their prototypes, which are used for labeling the clusters. RST is used in the next modeling step, where we discover the primary features, that is, the core evaluation criteria of the suppliers, and extract decision rules for characterizing the clusters. The results show that the proposed method not only selects the best supplier(s) but also clusters all of the vendors with respect to fuzzy similarity degrees, determines the most critical criteria for supplier evaluation, and extracts decision rules from the data.
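The FCM clustering step can be sketched with plain numpy using the standard fuzzy c-means update equations; the toy two-criteria supplier data below are invented for illustration:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: returns cluster centers and the fuzzy membership
    matrix U (n_samples x c), with each row summing to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # random fuzzy partition
    for _ in range(iters):
        W = U ** m                             # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)               # avoid division by zero
        inv = d ** (-2.0 / (m - 1))            # standard membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated vendor groups rated on two criteria (toy data)
X = np.vstack([np.random.default_rng(1).normal(0.0, 0.1, (20, 2)),
               np.random.default_rng(2).normal(1.0, 0.1, (20, 2))])
centers, U = fuzzy_c_means(X, c=2)
print(centers)
```

After clustering, the paper labels each cluster by its prototype (center) and hands the labeled data to RST for rule extraction; the membership matrix U is what gives the "fuzzy similarity degrees" mentioned above.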