Similar Documents
20 similar documents found.
1.
Zhao Liang, Yang Zhanping. Control and Decision, 2015, 30(6): 1014-1020
To address the validation-metric problem in model validation, a confidence envelope is constructed for the empirical probability distribution of the experimental observations. By computing the supremum and infimum of the distance between this envelope and the probability distribution of the model response, a confidence interval for the distribution-distance validation metric is obtained. By constructing a covariance matrix from the experimental observations, a multi-response validation metric based on distribution distance, together with a procedure for solving its confidence interval, is given. The metric exploits the full probability-distribution information of both the model outputs and the experimental observations, and accounts for the correlations among the model responses. Simulation results on numerical examples show that its validation error rate is lower than that of two existing validation metrics.
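The distribution-distance idea above can be illustrated with the widely used area metric — the area between the empirical CDFs of the model response and the experimental observations. This is a minimal sketch of that general family of metrics, not the authors' confidence-envelope construction:

```python
import numpy as np

def area_metric(model_samples, exp_samples):
    """Area between the two empirical CDFs: a distance-based
    validation metric over the full probability distributions."""
    model_samples = np.asarray(model_samples, float)
    exp_samples = np.asarray(exp_samples, float)
    # Evaluate both empirical CDFs on the union of sample points.
    grid = np.sort(np.concatenate([model_samples, exp_samples]))
    F_model = np.searchsorted(np.sort(model_samples), grid, side="right") / len(model_samples)
    F_exp = np.searchsorted(np.sort(exp_samples), grid, side="right") / len(exp_samples)
    # Integrate |F_model - F_exp| exactly (both CDFs are step functions).
    return float(np.sum(np.abs(F_model - F_exp)[:-1] * np.diff(grid)))
```

Identical samples give a distance of zero; shifting one sample set by a constant c shifts the metric to c.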

2.
Suo Bin, Sun Dongyang, Zeng Chao, Zhang Baoqiang. Control and Decision, 2020, 35(8): 1923-1928
A model validation experiment is a new type of experiment whose purpose is to measure the credibility of a simulation model. To obtain a low-cost, high-credibility validation experiment plan, a design method for validation experiments under aleatory uncertainty is proposed. First, building on the area validation metric, a new dimensionless validation metric (the area-validation-metric factor) is proposed, and an expert-system-based qualitative criterion for judging simulation-model accuracy is developed on top of it. Then, an optimization model for designing validation experiments under aleatory uncertainty is established, along with a solution method for it. Finally, the proposed design method is verified on two numerical examples. The results show that, with small samples, the randomness of the experiment plan affects the credibility of the model assessment; that the area-metric factor converges as the number of experimental samples increases; and that the proposed design method avoids the influence of the experiment plan on the validation result.

3.
Existing similarity measures for cloud models are restricted to a single granularity space; similarity measurement for multi-granularity cloud models has received little study. This paper first proves the relevant properties of the knowledge-distance framework and establishes the connection between knowledge distance, information measures, and information granularity. On a hierarchical granular structure, the following conclusions are obtained: within the same granular structure, the granularity difference between granular spaces is positively correlated with their knowledge distance, and knowledge distance maps granular spaces that vary continuously with granularity onto a one-dimensional axis. Finally, a cloud-model similarity measure is proposed on the basis of the knowledge-distance framework. Experiments verify that these conclusions hold on cloud-model granular spaces.

4.
To measure network effectiveness, this paper questions the network-effectiveness model established by Jeffrey R. Cares and, drawing on related concepts, extends the model by introducing link-connectivity probabilities, working through an example calculation. The results show that network effectiveness depends not only on the numbers of nodes and links in the network but also on the link-connectivity probabilities, which agrees with practice. The extended model can be used both to analyze or compare network effectiveness and to weigh the relative importance of network links, offering a new approach to studying combat effectiveness based on complex networks.

5.
Research on Software Metrics Models and Methods
Improving software quality has always been a major research direction in software engineering, and metrics-based quantitative management is one of the most effective quality-assurance practices available. This paper therefore examines the concept and scope of software metrics, proposes a software metrics model together with a set of metric indicators, and compares related metrics-model methods.

6.
Multiple-Model Probability Hypothesis Density Smoother
To track multiple maneuvering targets in clutter, this paper combines the multiple-model probability hypothesis density (MM-PHD) filter with smoothing and proposes an MM-PHD forward-backward smoother. To avoid introducing the full machinery of random finite set (RFS) theory, the backward update equation of the MM-PHD smoother is derived from the physical-space description of the PHD. Because the recursion of the forward-backward smoother contains multiple integrals, it has no closed-form expression under nonlinear, non-Gaussian conditions, so a sequential Monte Carlo (SMC) implementation is also given. One hundred Monte Carlo (MC) simulation runs show that, compared with the MM-PHD filter, the MM-PHD smoother estimates the number and states of multiple maneuvering targets more accurately, at the cost of a fixed time lag and a larger computational burden.

7.
Based on an analysis of the Coad&Yourdon model and the CK metrics, this paper proposes principles for refining the subject layer of the Coad&Yourdon model together with a correspondingly extended CK metrics model. It discusses the process and method of applying the CK metrics to the Coad&Yourdon model, extending the applicable scope of the CK metrics to the object-oriented analysis (OOA) phase. A worked example illustrates the approach.

8.
For probability-box sensitivity analysis under mixed aleatory and epistemic uncertainty, a global sensitivity-analysis method is proposed that uses the overlapping area of the probability box before and after reduction as the uncertainty measure. Mixed uncertainty is pervasive in aerospace simulation systems, and probability boxes are widely used in the literature to represent combined aleatory and epistemic uncertainty. The paper first reviews the uncertainty-reduction theory behind traditional p-box sensitivity analysis, then additionally accounts for shifts in the position and shape of the p-box; the influence of each uncertain input is characterized by the overlapping area of the p-box before and after reduction, and the implementation steps are described. Numerical examples compare and validate the proposed method against the traditional uncertainty-reduction method for global sensitivity analysis, and the method is applied to sensitivity ranking in overall engine-performance simulation. The results show that the area-overlap method applies more broadly and computes more accurate results than the traditional uncertainty-reduction method.
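As a rough illustration of the area-overlap idea (not the paper's exact formulation), the overlap of two probability boxes given as lower/upper CDF bounds on a common grid can be computed as the area of the band where the two boxes intersect:

```python
import numpy as np

def pbox_overlap(y, lo1, hi1, lo2, hi2):
    """Overlap area of two probability boxes, each given as lower (lo)
    and upper (hi) CDF bounds sampled on the common grid y. In the
    sensitivity-analysis reading, a larger overlap between the reduced
    and original p-box means a smaller influence of the fixed input."""
    # Height of the intersection band at each grid point (clipped at 0).
    band = np.maximum(0.0, np.minimum(hi1, hi2) - np.maximum(lo1, lo2))
    # Trapezoidal integration over y.
    return float(np.sum(0.5 * (band[:-1] + band[1:]) * np.diff(y)))
```

Two identical unit-height boxes over a unit interval overlap with area 1; two degenerate, disjoint boxes overlap with area 0.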

9.
Zhao Yujuan, Liu Qingchao. Computer Engineering, 2012, 38(21): 171-174
In machine learning, weighted classifier ensembles achieve low classification accuracy on small-sample data sets. To address this, a multi-classifier weighted ensemble method based on a mixed distance measure is proposed. The method combines the Euclidean, Manhattan, and Chebyshev distances into a mixed distance-based weighting scheme, and integrates the outputs of the individual classifiers with a weighted-voting combination rule. Experimental results show that the method is robust and achieves high classification accuracy.
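The two ingredients named above — a blended distance and weighted-vote combination — can be sketched as follows. The blend weights and the voting scheme here are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def mixed_distance(x, y, w=(1/3, 1/3, 1/3)):
    """Blend of the Euclidean, Manhattan and Chebyshev distances.
    The weights w are illustrative, not the paper's values."""
    d = np.abs(np.asarray(x, float) - np.asarray(y, float))
    return w[0] * np.sqrt(np.sum(d**2)) + w[1] * np.sum(d) + w[2] * np.max(d)

def weighted_vote(predictions, weights):
    """Combine per-classifier labels by weighted majority voting."""
    tally = {}
    for label, weight in zip(predictions, weights):
        tally[label] = tally.get(label, 0.0) + weight
    return max(tally, key=tally.get)
```

For example, with equal blend weights the mixed distance between (0, 0) and (3, 4) averages the Euclidean (5), Manhattan (7) and Chebyshev (4) distances.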

10.
Zhang Baoqiang, Chen Meiling, Sun Dongyang, Suo Bin. Control and Decision, 2020, 35(10): 2459-2465
To quantify and propagate uncertainty in time-variant systems, a probability-box evolution method is proposed. Based on the time-variant behavior of the system, the method captures how the cumulative distribution function of the system response evolves over time. The epistemic and aleatory parameters are separated into outer and inner loops: the outer epistemic parameters are quantified by the Monte Carlo method, and the inner aleatory parameters by non-intrusive polynomial chaos with stochastic collocation; the time-variant probability box is built by taking the upper and lower bounds of the response CDF at each time instant. The method is verified on a performance-degradation example of a delay circuit. The study shows that the time-variant probability box not only represents the mixed uncertainty of the system at a given time but also reflects the time-variant behavior of the output response and the trend of the output uncertainty over time.
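The outer/inner separation can be sketched with a double loop. This toy version uses plain Monte Carlo for the inner aleatory loop as well (not the stochastic-collocation polynomial chaos the paper uses), and `model(t, theta, x)` is a hypothetical response function:

```python
import numpy as np

def time_variant_pbox(model, t_grid, y_grid, theta_interval,
                      n_outer=20, n_inner=400, seed=0):
    """Double-loop sketch of a time-variant probability box.
    Outer loop: sweep the epistemic parameter theta over its interval.
    Inner loop: draw aleatory samples x (standard normal here, as an
    illustrative assumption). Returns lower/upper CDF envelopes of the
    response on y_grid for each time in t_grid."""
    rng = np.random.default_rng(seed)
    thetas = np.linspace(*theta_interval, n_outer)
    F_lo = np.ones((len(t_grid), len(y_grid)))
    F_hi = np.zeros((len(t_grid), len(y_grid)))
    for theta in thetas:
        x = rng.standard_normal(n_inner)          # aleatory input samples
        for i, t in enumerate(t_grid):
            y = model(t, theta, x)                # response at time t
            F = np.searchsorted(np.sort(y), y_grid, side="right") / n_inner
            F_lo[i] = np.minimum(F_lo[i], F)      # envelope over thetas
            F_hi[i] = np.maximum(F_hi[i], F)
    return F_lo, F_hi
```

At each time instant, the band between `F_lo` and `F_hi` is the p-box; stacking the bands over `t_grid` gives the time-variant picture the abstract describes.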

11.
Databases are the core of Information Systems (IS). It is, therefore, necessary to ensure the quality of the databases in order to ensure the quality of the IS. Metrics are useful mechanisms for controlling database quality. This paper presents two metrics related to referential integrity, number of foreign keys (NFK) and depth of the referential tree (DRT), for controlling the quality of a relational database. However, to ascertain the practical utility of the metrics, experimental validation is necessary. This validation can be carried out through controlled experiments or through case studies. The controlled experiments must also be replicated in order to obtain firm conclusions. With this objective in mind, we have undertaken different empirical work with metrics for relational databases. As part of this empirical work, we have conducted a case study with some metrics for relational databases and a controlled experiment with the two metrics presented in this paper. The detailed experiment described in this paper is a replication of the latter, carried out in order to confirm the results obtained from the first experiment.

As a result of all the experimental work, we can conclude that the NFK metric is a good indicator of relational database complexity. However, we cannot draw such firm conclusions regarding the DRT metric.
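The two metrics are simple to compute once the foreign-key graph is known. A minimal sketch, assuming the schema is modelled as a dict mapping each table to the tables its foreign keys reference (a real tool would read this from the catalog, e.g. `information_schema`):

```python
def nfk(schema):
    """Number of foreign keys: total count of FK references in the
    schema, modelled here as {table: [referenced_table, ...]}."""
    return sum(len(refs) for refs in schema.values())

def drt(schema):
    """Depth of the referential tree: length of the longest chain of
    FK references. Assumes the reference graph is acyclic; cyclic or
    self-referencing FKs would need cycle handling in a real tool."""
    def depth(table, seen=()):
        refs = schema.get(table, [])
        if not refs or table in seen:
            return 0
        return 1 + max(depth(r, seen + (table,)) for r in refs)
    return max((depth(t) for t in schema), default=0)
```

For a toy schema where `order_line` references `orders` and `product`, and `orders` references `customer`, NFK is 3 and DRT is 2 (the chain order_line → orders → customer).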


12.
《Computer Communications》1999,22(13):1260-1265
Some schemes are proposed for dynamically adapting the permission probability in the frame reservation multiple access protocol, a variant of PRMA in which the base station broadcasts the acknowledgements for all the slots of a frame at the end of that frame. One scheme adjusts the permission probability once per frame according to the number of available slots. Another sets the permission probability individually for each active voice terminal, based on its number of contention retrials. For integrated voice and data users, the simulation results obtained here indicate that combining these schemes yields a lower speech-packet dropping probability and mean data delay than the original PRMA with a fixed permission probability.

13.
This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in (Chidamber and Kemerer, 1994). More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described in (Li and Henry, 1993), where the same suite of metrics was used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful for predicting class fault-proneness during the early phases of the life cycle. Also, on our data set, they are better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development process.

14.
Coverage metrics for functional validation of hardware designs
Software simulation remains the primary means of functional validation for hardware designs. Coverage metrics ensure optimal use of simulation resources, measure the completeness of validation, and direct simulations toward unexplored areas of the design. This article surveys the literature on coverage metrics and discusses the experiences of verification practitioners.

15.
To address the problem that correlated alarms are distributed to different processing nodes during load balancing, a load-balancing method based on alarm-correlation probability is proposed. Experimental results show that the method suits scenarios where the transmitted information is correlated, achieving high load uniformity and a low average disruption of alarm correlations.

16.
Geyong, Mohamed. Performance Evaluation, 2005, 60(1-4): 255-273
The efficiency of a large-scale multicomputer is critically dependent on the performance of its interconnection network. Current multicomputers have widely employed the torus as their underlying network topology for efficient interprocessor communication. In order to ensure a successful exploitation of the computational power offered by multicomputers it is essential to obtain a clear understanding of the performance capabilities of their interconnection networks under various system configurations. Analytical modelling plays an important role in achieving this goal. This study proposes a concise performance model for computing communication delay in the torus network with circuit switching in the presence of multiple time-scale correlated traffic which is found in many real-world parallel computation environments and has strong impact on network performance. The tractability and reasonable accuracy of the analytical model demonstrated by extensive simulation experiments make it a practical and cost-effective evaluation tool to investigate network performance with various alternative design solutions and under different operating conditions.

17.
A key promise of process languages based on open standards, such as the Web Services Business Process Execution Language, is the avoidance of vendor lock-in through the portability of processes among runtime environments. Despite the fact that today various runtimes claim to support this language, every runtime implements a different subset, thus hampering portability and locking in their users. It is our intention to improve this situation by enabling the measurement of the portability of executable service-oriented processes. This helps developers to assess their implementations and to decide if it is feasible to invest in the effort of porting a process to another runtime. In this paper, we define several software quality metrics that quantify the degree of portability of an executable, service-oriented process from different viewpoints. When integrated into a development environment, such metrics can help to improve the portability of the outcome. We validate the metrics theoretically with respect to measurement theory and construct validity using two validation frameworks. The validation is complemented with an empirical evaluation of the metrics using a large set of processes coming from several process libraries.

18.
To make global decisions about a project or group of projects, it is necessary to analyse several metrics in concert, as well as changes in individual metrics. This paper discusses several approaches to collective metrics analysis. First, classification tree analysis is described as a technique for evaluating both process and metrics characteristics. Next, the notion of a multiple metrics graph is introduced. Developed initially as a way to evaluate software switch quality, a multiple metrics graph allows collections of metrics to be viewed in terms of overall product or process improvement. This work was done while the authors were affiliated with Contel Technology Center, Chantilly, Virginia, USA.

19.
Empirical validation of software metrics used to predict software quality attributes is important to ensure their practical relevance in software organizations. The aim of this work is to find the relation of object-oriented (OO) metrics with fault proneness at different severity levels of faults. For this purpose, different prediction models have been developed using regression and machine learning methods. We evaluate and compare the performance of these methods to find which method performs better at different severity levels of faults and empirically validate OO metrics given by Chidamber and Kemerer. The results of the empirical study are based on public domain NASA data set. The performance of the predicted models was evaluated using Receiver Operating Characteristic (ROC) analysis. The results show that the area under the curve (measured from the ROC analysis) of models predicted using high severity faults is low as compared with the area under the curve of the model predicted with respect to medium and low severity faults. However, the number of faults in the classes correctly classified by predicted models with respect to high severity faults is not low. This study also shows that the performance of machine learning methods is better than logistic regression method with respect to all the severities of faults. Based on the results, it is reasonable to claim that models targeted at different severity levels of faults could help for planning and executing testing by focusing resources on fault-prone parts of the design and code that are likely to cause serious failures.
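The area-under-the-ROC-curve comparison used above can be computed without plotting the curve, via the rank (Mann-Whitney) identity. A minimal sketch, not tied to the study's data set:

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC via the rank identity: the probability that a randomly
    chosen faulty class receives a higher predicted score than a
    randomly chosen non-faulty one, counting ties as 1/2."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, float)
    pos = scores[y_true == 1]   # scores of fault-prone classes
    neg = scores[y_true == 0]   # scores of clean classes
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins / (len(pos) * len(neg)))
```

A model that ranks every faulty class above every clean one scores 1.0; a random ranking scores about 0.5, which is the baseline the severity-level comparisons are made against.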

20.
This paper focuses on activity recognition when multiple views are available. In the literature, this is often performed using two different approaches. In the first one, the systems build a 3D reconstruction and match that. However, there are practical disadvantages to this methodology since a sufficient number of overlapping views is needed to reconstruct, and one must calibrate the cameras. A simpler alternative is to match the frames individually. This offers significant advantages in the system architecture (e.g., it is easy to incorporate new features and camera dropouts can be tolerated). In this paper, the second approach is employed and a novel fusion method is proposed. Our fusion method collects the activity labels over frames and cameras, and then fuses activity judgments as the sequence label. It is shown that there is no performance penalty when a straightforward weighted voting scheme is used. In particular, when there are enough overlapping views to generate a volumetric reconstruction, our recognition performance is comparable with that produced by volumetric reconstructions. However, if the overlapping views are not adequate, the performance degrades fairly gracefully, even in cases where test and training views do not overlap.
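The "collect labels over frames and cameras, then fuse by weighted voting" step can be sketched as follows. The input representation and the uniform default weights are illustrative assumptions, not the paper's exact scheme:

```python
from collections import Counter

def fuse_activity(labels_per_frame, camera_weights=None):
    """Weighted vote over per-frame, per-camera activity labels.
    labels_per_frame is a list of {camera: label} dicts (one per frame);
    camera_weights maps camera ids to vote weights (uniform 1.0 if
    omitted). Returns the fused sequence-level activity label."""
    tally = Counter()
    for frame in labels_per_frame:
        for cam, label in frame.items():
            weight = 1.0 if camera_weights is None else camera_weights.get(cam, 1.0)
            tally[label] += weight
    return tally.most_common(1)[0][0]
```

Camera dropouts are tolerated naturally: a frame dict simply omits the missing camera, and its votes are not counted.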


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号