Similar Literature
20 similar documents found.
1.
A method for measuring the complexity of chaotic pseudorandom sequences (cited: 2; self-citations: 1; by others: 2)
刘金梅  丘水生 《计算机应用》2009,29(4):938-940,
Based on the concepts of the sequence generation process and characteristic sequences, a primal coefficient (原生系数) of a sequence is defined to measure the complexity of chaotic pseudorandom sequences. Simulation results show that this measure is highly effective. Bifurcation diagrams and approximate entropy are used to demonstrate the advantages of the new measure: compared with approximate entropy, the primal coefficient reflects sequence complexity more effectively.
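The primal coefficient itself is defined in the paper rather than in this abstract, so it is not reproduced here; as a point of reference, a minimal sketch of the approximate entropy (ApEn) measure that the paper compares against might look as follows (the function name, embedding dimension, and tolerance defaults are illustrative assumptions):

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D sequence x.

    m is the embedding dimension and r = r_factor * std(x) is the tolerance;
    both defaults are common textbook choices, not values from the paper."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_factor * np.std(x)

    def phi(m):
        # All overlapping windows of length m.
        windows = np.array([x[i:i + m] for i in range(n - m + 1)])
        # For each window, the fraction of windows within tolerance r under the
        # Chebyshev (max-abs) distance; self-matches keep the counts positive.
        counts = [np.mean(np.max(np.abs(windows - w), axis=1) <= r) for w in windows]
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)

# A chaotic logistic-map sequence scores higher than a periodic sine sequence.
seq = [0.4]
for _ in range(499):
    seq.append(4.0 * seq[-1] * (1.0 - seq[-1]))
print(approximate_entropy(seq))
print(approximate_entropy(np.sin(np.linspace(0.0, 20.0 * np.pi, 500))))
```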

2.
Schamp  A. 《Software, IEEE》1995,12(4):114-119
Configuration management is critical to effective management, coordination, and control in an integrated software-development environment. Complex development environments require an integrated CM toolset. The CM tools you select and how you implement them will affect every phase of the life cycle in an integrated, team-oriented environment. The tools you select must facilitate the many diverse activities performed by separate teams and the management of those activities. The author considers the broad range of organizational and technical issues when selecting configuration management tools.

3.
Digital watermarking is characterized by robustness, transparency, and complexity. Most of the literature on watermark evaluation concentrates on robustness, yet complexity evaluation is also very important in the field of digital watermark assessment. This work therefore evaluates the complexity of the embedding functions of digital audio watermarking algorithms. The concept of a basic evaluation scheme is introduced, two typical audio watermarking algorithms are selected and evaluated against this scheme, and the embedding parameters and the audio test set applied to the basic scheme are discussed. The complexity evaluation results of the two algorithms are presented and compared.

4.
To quantitatively describe the complexity of the traffic flow system, joint entropy and C0 complexity are applied to the analysis of traffic flow series: the joint entropy of the original series and its surrogate series reflects how much nonlinearity the system contains, while the C0 complexity of a series reflects how much irregularity it contains. Joint entropy and C0 complexity are computed for a periodic series, a Logistic series, a Henon series, a random series, and measured traffic flow speed series from five different time periods. The results show that the two complexity measures differ markedly across the different series and require only short data lengths, that the traffic flow system is a chaotic system lying between order and disorder, regularity and irregularity, and that the joint entropy and C0 complexity of traffic flow series differ clearly across time periods, so both measures can be used to quantitatively characterize the complexity of the traffic flow system.
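The abstract does not spell out the C0 algorithm that is used; a minimal sketch of the commonly cited spectral formulation of C0 complexity (the threshold factor r and the energy normalization are assumptions, not values from the paper) is:

```python
import numpy as np

def c0_complexity(x, r=1.0):
    """Spectral C0 complexity: the fraction of signal energy left after removing
    the 'regular' part rebuilt from spectral lines whose power exceeds r times
    the mean spectral power (r = 1.0 is a common choice)."""
    x = np.asarray(x, dtype=float)
    X = np.fft.fft(x)
    power = np.abs(X) ** 2
    mask = power > r * power.mean()
    regular = np.fft.ifft(np.where(mask, X, 0)).real   # keep only strong lines
    return float(np.sum((x - regular) ** 2) / np.sum(x ** 2))

# Irregular noise keeps much of its energy in weak spectral lines; a pure tone does not.
rng = np.random.default_rng(0)
print(c0_complexity(rng.normal(size=512)))                        # clearly above 0
print(c0_complexity(np.sin(np.linspace(0.0, 8.0 * np.pi, 512))))  # close to 0
```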

5.
The structural characteristics of the transportation system and the socio-economic system need to be described quantitatively. To measure complexity, one of these structural characteristics, an improved Lempel-Ziv algorithm from information theory, the "universal trial-and-error algorithm" (通用试凑算法), is introduced. By computing the complexity of macroscopic traffic volume and GDP time series, some simple conclusions are drawn: GDP complexity and freight traffic complexity are strongly positively correlated; the influence of GDP complexity on the complexity of macroscopic traffic volume is long-term and shows a lag effect; and the degree of correlation between GDP and macroscopic traffic volume themselves is not necessarily related to the degree of correlation between their complexities.
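The improved "universal trial-and-error" variant is not described in the abstract; the sketch below shows the classic Lempel-Ziv (1976) phrase-counting complexity on a median-binarized series, which the paper's algorithm reportedly modifies (the binarization step and the example data are assumptions):

```python
import numpy as np

def lz76_complexity(bits):
    """Classic Lempel-Ziv (1976) complexity: the number of new phrases found
    while scanning the symbol string from left to right."""
    s = ''.join(str(int(b)) for b in bits)
    phrases, i, n = 0, 0, len(s)
    while i < n:
        j = i + 1
        # Grow the current phrase while it can still be copied from the prefix.
        while j <= n and s[i:j] in s[:j - 1]:
            j += 1
        phrases += 1
        i = j
    return phrases

def binarize(series):
    """Median-threshold binarization, a common preprocessing step for LZ measures."""
    x = np.asarray(series, dtype=float)
    return (x > np.median(x)).astype(int)

# A noisy series parses into many more phrases than a periodic one of equal length.
rng = np.random.default_rng(0)
print(lz76_complexity(binarize(rng.normal(size=200))))
print(lz76_complexity(binarize(np.sin(np.linspace(0.0, 20.0 * np.pi, 200)))))
```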

6.
Feature selection based on feature correlation (cited: 3; self-citations: 1; by others: 3)
A feature selection method based on feature correlation is proposed. The method first clusters features using the degree of mutual dependence (correlation) between them as the clustering criterion, then selects a representative feature from each cluster, then removes from the selected features those that are irrelevant or only weakly relevant to the target feature, and finally takes the remaining features as the feature subset. Theoretical analysis shows that the method is computationally efficient and has low time complexity, making it suitable for feature selection on large-scale data sets. Experimental comparison with classical methods from the literature on UCI data sets shows that the proposed method performs better in terms of feature reduction and classification.
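The abstract leaves the concrete correlation measure and clustering procedure open; a minimal sketch of the general idea (Pearson correlation, average-linkage hierarchical clustering, and both thresholds are illustrative assumptions rather than the paper's choices) could be:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def correlation_cluster_select(X, y, n_clusters=5, target_corr_min=0.1):
    """Sketch of correlation-based feature selection:
    1) cluster features by their mutual correlation,
    2) keep one representative feature per cluster,
    3) drop representatives that are only weakly correlated with the target y."""
    n_features = X.shape[1]
    corr = np.corrcoef(X, rowvar=False)
    # Turn |correlation| into a distance and cluster the features hierarchically.
    dist = 1.0 - np.abs(corr)
    condensed = dist[np.triu_indices(n_features, k=1)]
    labels = fcluster(linkage(condensed, method='average'),
                      t=n_clusters, criterion='maxclust')
    selected = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        # Representative: the member most correlated with the rest of its cluster.
        rep = members[np.argmax(np.abs(corr[np.ix_(members, members)]).sum(axis=1))]
        if abs(np.corrcoef(X[:, rep], y)[0, 1]) >= target_corr_min:
            selected.append(int(rep))
    return sorted(selected)

# Toy usage on random data with a synthetic target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=200)
print(correlation_cluster_select(X, y))
```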

7.
To address the increasingly complex electromagnetic environment, an evaluation method for electromagnetic environment complexity based on a feedforward BP neural network is proposed. First, a model of the test-range electromagnetic environment is established and the evaluation indicators of electromagnetic environment complexity are analyzed, providing a theoretical basis for quantitative evaluation. The selection of the key parameters of the BP network is then analyzed, and the functionality of the network is verified on a test-range example. Finally, the new method is compared with traditional evaluation methods. The results show that the new method outperforms the traditional ones: it can grade the electromagnetic environment qualitatively and quantitatively in real time, quickly and adaptively, it extends the range of application of the traditional methods, and it has practical value for the study of real battlefield electromagnetic environments.
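The network structure and training parameters are not given in the abstract; a minimal single-hidden-layer backpropagation sketch for mapping normalized complexity indicators to a complexity score (layer size, learning rate, epoch count, and the toy target are all assumptions) might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(X, y, hidden=8, lr=0.5, epochs=5000):
    """Single-hidden-layer feedforward network trained with plain backpropagation
    (batch gradient descent, squared-error loss)."""
    n, d = X.shape
    W1 = rng.normal(scale=0.5, size=(d, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    b2 = np.zeros(1)
    t = y.reshape(-1, 1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)             # forward pass, hidden layer
        out = sigmoid(h @ W2 + b2)           # forward pass, output layer
        d_out = (out - t) * out * (1 - out)  # backprop through output sigmoid
        d_h = (d_out @ W2.T) * h * (1 - h)   # backprop through hidden sigmoid
        W2 -= lr * h.T @ d_out / n
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_h / n
        b1 -= lr * d_h.mean(axis=0)
    return lambda Xq: sigmoid(sigmoid(Xq @ W1 + b1) @ W2 + b2)

# Toy usage: map 4 normalized indicator values to a complexity score in [0, 1].
X = rng.random((60, 4))
y = X.mean(axis=1)        # stand-in target; real grades would come from experts
predict = train_bp(X, y)
print(predict(X[:3]).ravel(), y[:3])
```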

8.

9.
Selection of trustworthy cloud services has been a major research challenge in cloud computing, due to the proliferation of numerous cloud service providers (CSPs) along every dimension of computing. This scenario makes it hard for cloud users to identify an appropriate CSP based on their unique quality of service (QoS) requirements. A generic solution to the problem of cloud service selection can be formulated in terms of trust assessment. However, the accuracy of the trust value depends on the optimality of the service-specific trust measure parameters (TMPs) subset. This paper presents TrustCom, a novel trust assessment framework, and a rough set-based hypergraph technique (RSHT) for the identification of the optimal TMP subset. Experiments using Cloud Armor and synthetic trust feedback datasets show the prominence of RSHT over existing feature selection techniques. The performance of RSHT was analyzed using the Weka tool and a hypergraph-based computational model with respect to reduct size, time complexity and service ranking.

10.
11.
Giladi  R. Ahitav  N. 《Computer》1995,28(8):33-42
Potential computer system users or buyers usually employ a computer performance evaluation technique only if they believe its results provide valuable information. System Performance Evaluation Cooperative (SPEC) measures are perceived to provide such information and are therefore the ones most commonly used. SPEC measures are designed to evaluate the performance of engineering and scientific workstations, personal vector computers, and even minicomputers and superminicomputers. Along with the Transaction Processing Council (TPC) measures for database I/O performance, they have become de facto industry standards, but do SPEC's evaluation outcomes actually provide added information value? In this article, we examine these measures by considering their structure, advantages and disadvantages. We use two criteria in our examination: are the programs used in the SPEC suite properly blended to reflect a representative mix of different applications, and are they properly synthesized so that the aggregate measures correctly rank computers by performance? We conclude that many programs in the SPEC suites are superfluous; the benchmark size can be reduced by more than 50%. The way the measure is calculated may cause distortion. Substituting the harmonic mean for the geometric mean used by SPEC roughly preserves the measure, while giving better consistency. SPEC measures reflect the performance of the CPU rather than the entire system. Therefore, they might be inaccurate in ranking an entire system. To remedy these problems, we propose a revised methodology for obtaining SPEC measures.
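As a quick illustration of the aggregation argument, the sketch below contrasts the geometric mean used by SPEC with the harmonic mean the article argues for; the per-benchmark ratios are made-up numbers, not SPEC data:

```python
import numpy as np

def geometric_mean(ratios):
    """Geometric mean, the aggregate used by SPEC."""
    return float(np.exp(np.mean(np.log(ratios))))

def harmonic_mean(ratios):
    """Harmonic mean, the aggregate suggested as a replacement in the article."""
    r = np.asarray(ratios, dtype=float)
    return float(len(r) / np.sum(1.0 / r))

# Hypothetical per-benchmark speed ratios (reference time / measured time);
# the single slow outlier shows how the two aggregates can diverge.
ratios = [12.0, 11.5, 13.2, 12.8, 2.1]
print(geometric_mean(ratios))
print(harmonic_mean(ratios))
```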

12.
The complexity of evaluating integers and polynomials is studied. A new model is proposed for studying such complexities. This model differs from previous models by requiring the constants used in the computation to be constructed; this construction is given a cost which depends on the size of the constant. Previous models used a uniform cost, of either 0 or 1, for operations involving constants. Using this model, proper hierarchies are shown to exist for both integers and polynomials with respect to evaluation cost. Furthermore, it is shown that almost all integers (polynomials) are as difficult to evaluate as the hardest integer (polynomial). These results remain true even if the underlying basis of binary operations which the algorithm performs is varied.

13.
14.
This paper provides a systematic method for model bank selection in multi-linear model analysis for nonlinear systems by presenting a new algorithm which incorporates a nonlinearity measure and a modified gap based metric. This algorithm is developed for off-line use, but can be implemented for on-line usage. Initially, the nonlinearity measure analysis based on the higher order statistic (HOS) and the linear cross correlation methods are used for decomposing the total operating space into several regions with linear models. The resulting linear models are then used to construct the primary model bank. In order to avoid unnecessary linear local models in the primary model bank, a gap based metric is introduced and applied in order to merge similar linear local models. In order to illustrate the usefulness of the proposed algorithm, two simulation examples are presented: a pH neutralization plant and a continuous stirred tank reactor (CSTR).

15.
Evan E. Anderson 《Software》1989,19(8):707-717
The proliferation of software packages has created a difficult, complex problem of evaluation and selection for many users. Traditional approaches to the quantification of package performance have relied on compensatory models, such as the linear weighted attribute model, which sums the weighted ratings of software attributes. These approaches define the dimensions of quality too narrowly and, therefore, omit substantial amounts of information from consideration. This paper presents an alternative methodology, previously used in capital rationing and tournament ranking, that expands the opportunity for objective insight into software quality. In particular, it considers three measures of quality: the frequency with which the attribute ratings of one package exceed those of another; the presence of outliers, where very poor performance may exist on a single attribute and be glossed over by compensatory methods; and the cumulative magnitude of attribute ratings on one package that exceed those on others. The proposed methodology is applied to the evaluation of the following software types: word processing, database management systems, spreadsheet/financial planning, integrated software, graphics, data communications and project management.
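A minimal sketch of the kind of non-compensatory pairwise comparison the abstract describes (the rating scale, the outlier threshold, and the example ratings are illustrative assumptions) is:

```python
import numpy as np

def pairwise_dominance(a, b, outlier_floor=2.0):
    """Non-compensatory comparison of two packages rated on the same attributes:
    how often package a beats package b, the cumulative margin of those wins,
    and whether a has any outlier (very poor) rating."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return {
        "wins": int(np.sum(a > b)),
        "cumulative_margin": float(np.sum(np.clip(a - b, 0.0, None))),
        "has_outlier": bool(np.any(a < outlier_floor)),
    }

# Ratings of two hypothetical word processors on six attributes (1-10 scale).
print(pairwise_dominance([8, 7, 9, 6, 8, 7], [9, 6, 7, 7, 1, 6]))
print(pairwise_dominance([9, 6, 7, 7, 1, 6], [8, 7, 9, 6, 8, 7]))
```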

16.
《Information & Management》1987,12(3):117-129
Three main models for evaluating and selecting computer systems are presented and compared: (a) the additive-weight model, (b) the Eigenvector model, and (c) the multi-attribute utility model. A case study describes the application of these three models to the selection of a computer for an organization. In this study, the three models showed almost identical results in ranking the alternatives. Based on the data obtained from the case study and a comparison of the models' attributes, we recommend using the multi-attribute utility model in cases wherein the required assumptions of independence hold. To overcome the difficulties involved in understanding and applying the model, the development of an interactive decision-support system is recommended.
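For concreteness, the simplest of the three models, the additive-weight model, reduces to a weighted sum of attribute ratings; the weights and ratings in the sketch below are hypothetical:

```python
def additive_weight_score(ratings, weights):
    """Additive-weight model: the score of an alternative is the sum of its
    attribute ratings, each multiplied by the attribute's weight (weights are
    assumed to be normalized so they sum to 1)."""
    return sum(w * r for w, r in zip(weights, ratings))

# Hypothetical ratings of two computer systems on cost, performance and support.
weights = [0.5, 0.3, 0.2]
print(additive_weight_score([7, 9, 6], weights))   # system A
print(additive_weight_score([8, 7, 8], weights))   # system B
```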

17.
Since fuzzy quality data are ubiquitous in the real world, this paper proposes supplier selection and evaluation on the basis of the quality criterion under such a fuzzy environment. The Cpk index has been the most popular index used to evaluate the quality of a supplier's products. Using fuzzy data collected from the products of q (≥ 2) candidate suppliers, fuzzy estimates of the q suppliers' capability indices are obtained according to the resolution identity, a well-known theorem in fuzzy set theory. Optimization problems are formulated and solved to obtain α-level sets, from which the membership functions of the fuzzy estimates of each supplier's Cpk are constructed. These membership functions are then sorted with a fuzzy ranking method to choose the preferable suppliers. Finally, a numerical example illustrates the possible application of incorporating fuzzy data into quality-based supplier selection and evaluation.
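The paper constructs fuzzy estimates of Cpk via α-cuts and a fuzzy ranking step, which is beyond a short sketch; the crisp index underlying it is, however, straightforward (the specification limits and sample data below are illustrative assumptions):

```python
import numpy as np

def cpk(samples, lsl, usl):
    """Crisp process capability index Cpk = min(USL - mu, mu - LSL) / (3 * sigma);
    the paper builds fuzzy estimates of this index from fuzzy data via alpha-cuts,
    which this sketch does not attempt."""
    x = np.asarray(samples, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)
    return float(min(usl - mu, mu - lsl) / (3.0 * sigma))

# Hypothetical measurements from two candidate suppliers against the same limits.
rng = np.random.default_rng(7)
print(cpk(rng.normal(10.00, 0.10, 100), lsl=9.7, usl=10.3))
print(cpk(rng.normal(10.05, 0.15, 100), lsl=9.7, usl=10.3))
```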

18.
An exploration of enterprise technology selection and evaluation   总被引:1,自引:0,他引:1  
The evaluation-and-selection of enterprise technologies by firms has been said to be largely rational and deterministic. This paper challenges this notion, and puts forward the argument that substantial ceremonial aspects also play an important role. An in-depth, exploratory longitudinal case study of a bank selecting a ubiquitous and pervasive e-mail system was conducted using grounded theory and a hermeneutic [pre] understanding of institutional and decision making theories. Intuition, symbols, rituals, and ceremony all figured prominently in the decision process. However, rather than being in conflict with the rational processes, we found them to be in tension, leading to a more holistic social construction of decision processes. For researchers, this suggests that a focus on process rationality, not outcomes, might lead to a fuller understanding of these critical decisions. For managers, it underscores the importance of understanding the past in order to create the future.

19.
Many data mining applications, such as spam filtering and intrusion detection, are faced with active adversaries. In all these applications, the future data sets and the training data set are no longer from the same population, due to the transformations employed by the adversaries. Hence a main assumption for the existing classification techniques no longer holds and initially successful classifiers degrade easily. This becomes a game between the adversary and the data miner: The adversary modifies its strategy to avoid being detected by the current classifier; the data miner then updates its classifier based on the new threats. In this paper, we investigate the possibility of an equilibrium in this seemingly never ending game, where neither party has an incentive to change. Modifying the classifier causes too many false positives with too little increase in true positives; changes by the adversary decrease the utility of the false negative items that are not detected. We develop a game theoretic framework where equilibrium behavior of adversarial classification applications can be analyzed, and provide solutions for finding an equilibrium point. A classifier's equilibrium performance indicates its eventual success or failure. The data miner could then select attributes based on their equilibrium performance, and construct an effective classifier. A case study on online lending data demonstrates how to apply the proposed game theoretic framework to a real application.

20.
There is a wide range of standards available for the integration and interoperability of applications and information systems, both on domain-specific and domain-neutral levels. The evaluation and selection of interoperability standards are necessary in the application development and integration projects, when there is a need to assess the usefulness of existing models or to find open solutions. In addition, standards have to be evaluated when recommendations are made for a given domain or when their quality is examined. The evaluation of the scope and other aspects of interoperability standards is usually performed against project-specific requirements, but generic frameworks can be used for supporting the evaluation. In this article, we present a conceptual framework which has been developed for the systematic evaluation of interoperability standards. We also present an overview of a process for the evaluation of interoperability standards. We illustrate the use of these models with practical experience and examples.
