Similar Literature
Found 19 similar documents (search time: 203 ms)
1.
Statistical Model Checking for Rare Events in Safety-Critical Systems   (Total citations: 1; self: 0; by others: 1)
杜德慧  程贝  刘静 《软件学报》2015,26(2):305-320
In open operating environments, the uncertain behavior of safety-critical systems can lead to rare (low-probability) events. Such systems typically carry stringent reliability requirements, and once a rare event occurs it can have catastrophic consequences, seriously threatening life and property. Estimating and predicting the probability of rare events is therefore of great significance for improving system reliability. Statistical model checking (SMC) is a simulation-based verification technique that combines rapid system simulation with statistical analysis; it can markedly improve the efficiency of model checking and is well suited to verifying and assessing the reliability of safety-critical systems. One of its key challenges, however, is that with an acceptable number of samples SMC can hardly predict or estimate the probability of rare events. To address this, an improved SMC framework is proposed, and a machine-learning-based statistical model checker is designed and implemented that can predict and estimate rare-event probabilities with relatively few samples. A case study on collision avoidance control in a rail transit control system further demonstrates that the improved checker can effectively predict and estimate the probability of rare events in safety-critical systems.
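For context, plain SMC estimates a property's probability by simulation counting, which is exactly what breaks down for rare events. A minimal sketch (the toy system, probability, and sample sizes are illustrative, not taken from the paper):

```python
import random

def smc_estimate(simulate, n_samples, seed=0):
    """Plain Monte Carlo statistical model checking: estimate
    P(property holds) as the fraction of simulated traces that
    satisfy the property."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples) if simulate(rng))
    return hits / n_samples

# Toy system: the unsafe event occurs with true probability 1e-3.
def unsafe_trace(rng, p=1e-3):
    return rng.random() < p

est = smc_estimate(unsafe_trace, n_samples=100_000)
```

With the true probability at 1e-3, on the order of 100,000 traces are needed for a stable estimate; at 1e-6 the same approach becomes impractical, which is the sample-efficiency problem the machine-learning-assisted checker above targets.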

2.
Based on an analysis of the deficiencies of the ant colony algorithm in solving vehicle routing problems, an adaptive dynamic-search ant colony optimization algorithm (ADACO) is proposed. An algorithmic model is built, and the parameter combination is configured experimentally on benchmark TSP instances. A strategy combining pseudo-random proportional selection with an adaptive transition probability helps the colony select higher-quality routes, and the pheromone intensity is set in a piecewise fashion to effectively guide the colony out of local optima and construct new solutions. Test results show that ADACO achieves substantive improvements over other algorithms in both runtime and delivery cost, verifying the algorithm's feasibility.
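The pseudo-random selection strategy mentioned above can be sketched with the classic pseudo-random proportional rule from Ant Colony System. This is an illustration of the general rule, not ADACO's exact adaptive variant; the parameter values and toy instance are assumptions:

```python
import random

def choose_next(city, unvisited, pheromone, heuristic,
                q0=0.9, alpha=1.0, beta=2.0, rng=None):
    """Pseudo-random proportional transition rule (Ant Colony System
    style): with probability q0, greedily exploit the strongest edge;
    otherwise sample an edge in proportion to pheromone^alpha *
    heuristic^beta. ADACO couples this with an adaptive transition
    probability; here q0 is fixed for illustration."""
    rng = rng or random.Random(1)
    scores = {j: pheromone[city][j] ** alpha * heuristic[city][j] ** beta
              for j in unvisited}
    if rng.random() < q0:                       # exploitation
        return max(scores, key=scores.get)
    r = rng.random() * sum(scores.values())     # biased exploration
    acc = 0.0
    for j, s in scores.items():
        acc += s
        if acc >= r:
            return j
    return j

# Three cities, uniform pheromone; city 2 has the strongest heuristic.
pheromone = [[1.0] * 3 for _ in range(3)]
heuristic = [[0.0, 0.2, 0.5], [0.2, 0.0, 0.3], [0.5, 0.3, 0.0]]
nxt = choose_next(0, [1, 2], pheromone, heuristic)
```

Making q0 (or the transition probability) adapt over iterations, as ADACO does, trades off exploitation against the colony's ability to escape local optima.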

3.
房丙午  黄志球  谢健 《软件学报》2022,33(10):3717-3731
Statistical model checking has become an important method for verifying the safety of stochastic hybrid systems. For systems with high safety requirements, however, unsafe events and system failures are rare events; in this situation statistical model checking can hardly sample traces satisfying the rare property and becomes infeasible. To address this problem, a statistical model checking method based on iterative cross-entropy learning is proposed. First, the path probability space of the stochastic hybrid system is represented by a continuous-time Markov chain, and a parameterized family of probability distributions over the path space is derived. Then, a cross-entropy optimization model over the path space is constructed, and an algorithm is proposed for iteratively learning the optimal importance-sampling distribution over the path space. Finally, an importance-sampling-based verification algorithm for rare properties is given. Experimental results show that the method can effectively verify rare properties of stochastic hybrid systems; with the same number of samples, its estimates cluster more tightly around the mean than those of several heuristic importance-sampling methods, with standard deviation and relative error reduced by more than an order of magnitude.
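The importance-sampling idea at the core of this method can be shown with a one-parameter toy: sample under a biased distribution that makes the rare event common, then reweight by the likelihood ratio. The probabilities below are illustrative assumptions, not the paper's path-space construction:

```python
import random

def importance_sampling_rare(n, p=1e-4, q=0.05, seed=0):
    """Estimate a rare Bernoulli event of true probability p by
    sampling under a biased distribution q and weighting every hit
    by the likelihood ratio p/q. A toy stand-in for the paper's
    cross-entropy-learned importance distribution over system paths."""
    rng = random.Random(seed)
    w = p / q                      # likelihood ratio for a hit
    hits = sum(1 for _ in range(n) if rng.random() < q)
    return hits * w / n            # unbiased estimate of p

est = importance_sampling_rare(100_000)
```

Cross-entropy learning, as in the paper, would iteratively refit the sampling distribution (here the single parameter q) toward the zero-variance optimal distribution instead of fixing it by hand.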

4.
Cyber-Physical Systems (CPSs) are intelligent systems in which computational and physical processes are tightly integrated. Research on CPSs is still at an early stage: neither the conceptual model nor the key technologies have been precisely characterized, which makes it difficult for researchers to correctly understand and study CPSs. Given the broad application prospects and promising future of CPSs, this paper presents a systematic introduction. It first defines CPSs, analyzes their similarities to and differences from the Internet of Things, and summarizes the current state of CPS research, on which basis a preliminary CPS architectural framework is proposed. The key technologies and foundational theories involved in CPS research are then discussed systematically, and the paper concludes with an outlook on future CPS research.

5.
To make the system equations better fit the true motion of a strongly maneuvering anti-ship missile, a nonlinear maneuvering-frequency function is designed on the basis of an analysis of the "current" statistical model and its adaptive filtering algorithm, using a criterion that detects target maneuvers from changes in the filter residual. This enables adaptive adjustment of the maneuvering frequency, optimizes the system parameters of the "current" statistical model, and yields a maneuvering-frequency-adaptive algorithm. Under assumed initial conditions, Monte Carlo simulations of the algorithm against the terminal "snake" maneuver of an anti-ship missile show that it runs stably, adapts well, and effectively improves the tracking performance of the "current" statistical model.

6.
Exploiting the waveform characteristics of an underwater target passing through an early-warning system, an alerting algorithm based on adaptive amplitude filtering and time filtering is proposed for weak-signal detection in passive early-warning systems. Sea-trial results show that the algorithm effectively overcomes complex high-sea-state background interference and achieves a correct alerting probability of 86.9% for valid ship targets, while remedying the weakness of traditional warning-system detection algorithms against blast shock interference. The algorithm is structurally simple and runs in real time, and could become an effective way for future fuze warning systems to detect targets from ship-radiated acoustic fields.

7.
To meet the need of verifying properties of complex information systems with stochastic features, a property verification and analysis method is proposed for layered until formulas over discrete probabilistic reward models. Building on a synthesis of various discrete stochastic logics, a probabilistic computation tree logic capable of expressing both transition rewards and transition-step intervals is used to express layered-until path-formula properties of the system model; the path formula is modeled with automata techniques; a product model is constructed so that the system model and the automaton evolve synchronously; and a state-probability satisfaction algorithm is given on the product model. A case study confirms the feasibility and effectiveness of the method.

8.
高婉玲  洪玫  杨秋辉  赵鹤 《计算机科学》2017,44(Z6):499-503, 533
Statistical model checking has been widely applied in recent years, and different statistical algorithms affect its performance differently. This work compares the impact of different statistical algorithms on the time cost of statistical model checking, in order to characterize the environments each algorithm suits. The algorithms considered are the Chernoff-bound algorithm, the sequential algorithm, smart probability estimation, smart hypothesis testing, and Monte Carlo. Verification of a wireless LAN protocol and of state reachability in the dining-philosophers problem serve as case studies, checked with the PLASMA model checker. The results show that different statistical algorithms affect checking efficiency differently in different settings: the sequential algorithm suits state-reachability properties and has the best time performance, while smart hypothesis testing and Monte Carlo suit complex models. This conclusion helps in choosing a statistical algorithm for model checking and thus in improving checking efficiency.
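As a concrete point of comparison, the Chernoff-bound (Okamoto) approach fixes the number of simulations up front from the desired precision and confidence. This sketches the general bound, not PLASMA's exact implementation:

```python
import math

def chernoff_samples(epsilon, delta):
    """Okamoto/Chernoff bound for SMC estimation: the number of
    simulations sufficient for the estimated probability to lie
    within +/- epsilon of the true value with confidence 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

n = chernoff_samples(0.01, 0.05)   # precision 0.01 at 95% confidence
```

Because the bound grows as 1/epsilon^2, tight precision is expensive; sequential algorithms can stop much earlier on easy instances, which is consistent with the time-performance ranking reported above.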

9.
Despite great progress in recent years, model checking remains limited in its ability to verify large systems. Among the many state-reduction and compression techniques, abstraction is one of the most effective. This paper presents an efficient algorithm for abstraction based on K-simulation and proves the soundness and completeness of the abstraction within the framework of linear temporal logic.

10.
Thanks to its high degree of automation and its ability to provide counterexample paths, model checking is widely used to verify the compatibility of Web service compositions. To address the state-explosion problem in model checking, this paper introduces predicate abstraction and refinement into the traditional approach and proposes an abstraction-refinement verification framework for Web service composition. Predicate abstraction is used to build abstract models of the atomic Web services, which are then composed into a composite abstract model. Counterexamples obtained from model checking are projected onto each atomic Web service and validated; for abstract models that produce spurious counterexamples, the abstraction is refined, a new composite abstract model is generated, and the property is verified again. A case study shows that the abstraction-refinement framework is feasible for mitigating state explosion in Web service composition verification.

11.
Statistical Model Checking (SMC), a technique for mitigating state-space explosion in numerical probabilistic model checking, can efficiently obtain an approximate result with an error bound by statistically analysing simulation traces. SMC can, however, become very time-consuming when an extremely large number of traces must be generated. Improving the performance of SMC remains a challenge. To address it, we propose an optimized SMC approach called AL-SMC, which reduces the number of required sample traces, and thus improves the performance of SMC, through automatic abstraction and learning. First, we present property-based trace abstraction to simplify the cumbersome traces drawn from the original model. Second, we learn an analysis model called a Prefix Frequency Tree (PFT) from the abstracted traces and optimize it with a two-phase reduction algorithm. Using the optimized PFT, the original probability space is partitioned into several sub-spaces whose probabilities are evaluated in parallel in the final phase. We also analyse the core algorithms in terms of time and space complexity, and implement AL-SMC in our Modana platform to support the automatic process. Finally, we discuss experimental results for an energy-aware building case study, which show that the number of sample traces is reduced by roughly 20% to 50% while the accuracy of the result is preserved within an acceptable error.
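The Prefix Frequency Tree learned from abstracted traces can be sketched as a map from trace prefixes to the number of traces sharing them. This is a minimal illustration of the data structure only; the paper's two-phase reduction and sub-space probability evaluation are omitted:

```python
from collections import defaultdict

def build_pft(traces):
    """Learn a Prefix Frequency Tree from abstracted traces: every
    prefix of every trace becomes a node, annotated with the number
    of traces that pass through it."""
    counts = defaultdict(int)
    for trace in traces:
        for i in range(1, len(trace) + 1):
            counts[tuple(trace[:i])] += 1
    return counts

# Three abstracted traces over symbolic states a, b, c, d.
traces = [("a", "b", "c"), ("a", "b", "d"), ("a", "c")]
pft = build_pft(traces)
```

High-frequency prefixes identify the dominant sub-spaces of the probability space, which is what lets AL-SMC partition the space and allocate samples more economically.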

12.
We address the problem of model checking stochastic systems, i.e., checking whether a stochastic system satisfies a certain temporal property with a probability greater (or smaller) than a fixed threshold. In particular, we present a Statistical Model Checking (SMC) approach based on Bayesian statistics. We show that our approach is feasible for a certain class of hybrid systems with stochastic transitions, a generalization of Simulink/Stateflow models. Standard approaches to stochastic discrete systems require numerical solutions for large optimization problems and quickly become infeasible with larger state spaces. Generalizations of these techniques to hybrid systems with stochastic effects are even more challenging. The SMC approach was pioneered by Younes and Simmons in the discrete and non-Bayesian case. It solves the verification problem by combining randomized sampling of system traces (which is very efficient for Simulink/Stateflow) with hypothesis testing (i.e., testing against a probability threshold) or estimation (i.e., computing with high probability a value close to the true probability). We believe SMC is essential for scaling up to large Stateflow/Simulink models. While the answer to the verification problem is not guaranteed to be correct, we prove that Bayesian SMC can make the probability of giving a wrong answer arbitrarily small. The advantage is that answers can usually be obtained much faster than with standard, exhaustive model checking techniques. We apply our Bayesian SMC approach to a representative example of stochastic discrete-time hybrid system models in Stateflow/Simulink: a fuel control system featuring hybrid behavior and fault tolerance. We show that our technique enables faster verification than state-of-the-art statistical techniques. We emphasize that Bayesian SMC is by no means restricted to Stateflow/Simulink models. It is in principle applicable to a variety of stochastic models from other domains, e.g., systems biology.  
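The Bayesian estimation flavour of SMC described above can be sketched in a few lines: place a Beta prior on the satisfaction probability and update it with each sampled trace. The simulator, sample size, and prior parameters are illustrative assumptions; the paper's sequential Bayes-factor hypothesis test and stopping rule are omitted:

```python
import random

def bayesian_smc_estimate(simulate, n, alpha=1.0, beta=1.0, seed=0):
    """Bayesian SMC estimation sketch: a Beta(alpha, beta) prior on
    the probability that the property holds is updated with each
    simulated trace; the posterior mean is reported as the estimate."""
    rng = random.Random(seed)
    k = sum(1 for _ in range(n) if simulate(rng))      # satisfying traces
    post_a, post_b = alpha + k, beta + n - k           # Beta posterior
    return post_a / (post_a + post_b)                  # posterior mean

# Toy stochastic system whose traces satisfy the property ~30% of the time.
est = bayesian_smc_estimate(lambda rng: rng.random() < 0.3, n=10_000)
```

In the sequential setting, sampling would stop as soon as the posterior places enough mass on one side of the probability threshold, which is what makes the Bayesian approach typically faster than fixed-sample-size schemes.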

13.
Sequential Monte Carlo (SMC) represents a principal statistical method for tracking objects in video sequences by on-line estimation of the state of a non-linear dynamic system. The performance of individual stages of the SMC algorithm is usually data-dependent, making the prediction of the performance of a real-time capable system difficult and often leading to grossly overestimated and inefficient system designs. Also, the considerable computational complexity is a major obstacle when implementing SMC methods on purely CPU-based, resource-constrained embedded systems. In contrast, heterogeneous multi-cores present a more suitable implementation platform. We use hybrid CPU/FPGA systems, as they can efficiently execute both the control-centric sequential parts and the data-parallel parts of an SMC application. However, even with hybrid CPU/FPGA platforms, determining the optimal HW/SW partitioning is challenging in general, and even impossible with a design-time approach. Thus, we need self-adaptive architectures and system software layers that are able to react autonomously to varying workloads and changing input data while preserving real-time constraints and area efficiency. In this article, we present a video tracking application modeled on top of a framework for implementing SMC methods on CPU/FPGA-based systems such as modern platform FPGAs. Based on a multithreaded programming model, our framework allows for an easy design space exploration with respect to the HW/SW partitioning. Additionally, the application can adaptively switch between several partitionings during run-time to react to changing input data and performance requirements. Our system utilizes two variants of an add/remove self-adaptation technique for task partitioning inside this framework that achieve soft real-time behavior while trying to minimize the number of active cores. To evaluate its performance and area requirements, we demonstrate the application and the framework on a real-life video tracking case study and show that partial reconfiguration can be effectively and transparently used for realizing adaptive real-time HW/SW systems.
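The SMC (particle filter) loop whose stages the article partitions across CPU and FPGA follows a fixed predict/weight/resample pattern. A minimal bootstrap filter for a 1-D random-walk state observed in Gaussian noise (the model and noise parameters are illustrative, not from the article):

```python
import math
import random

def particle_filter(observations, n_particles=500, proc_std=1.0,
                    obs_std=1.0, seed=0):
    """Bootstrap particle filter: propagate particles through the
    motion model, weight them by the observation likelihood, estimate
    the state as the weighted mean, then resample. The data-dependent
    resampling step is one reason SMC run time is hard to predict."""
    rng = random.Random(seed)
    particles = [0.0] * n_particles
    estimates = []
    for z in observations:
        # predict: random-walk motion model
        particles = [x + rng.gauss(0.0, proc_std) for x in particles]
        # weight: Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((z - x) / obs_std) ** 2)
                   for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # resample proportionally to the weights
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

est = particle_filter([0.0, 1.0, 2.0, 3.0])
```

The prediction and weighting stages are embarrassingly data-parallel (good FPGA candidates), while resampling is control-centric and sequential, which motivates the hybrid HW/SW partitioning explored above.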

14.
It is well recognized that traceability links between software artifacts provide crucial support in comprehension, efficient development, and effective management of a software system. However, automated traceability systems to date have been faced with two major open research challenges: how to extract traceability links with both high precision and high recall, and how to efficiently visualize links for complex systems because of scalability and visual clutter issues. To overcome the two challenges, we designed and developed a traceability system, DCTracVis. This system employs an approach that combines three supporting techniques, regular expressions, key phrases, and clustering, with information retrieval (IR) models to improve the performance of automated traceability recovery between documents and source code. This combination approach takes advantage of the strengths of the three techniques to ameliorate limitations of IR models. Our experimental results show that our approach improves the performance of IR models, increases the precision of retrieved links, and recovers more correct links than IR alone. After having retrieved high-quality traceability links, DCTracVis then utilizes a new approach that combines treemap and hierarchical tree techniques to reduce visual clutter and to allow the visualization of the global structure of traces and a detailed overview of each trace, while still being highly scalable and interactive. Usability evaluation results show that our approach can effectively and efficiently help software developers comprehend, browse, and maintain large numbers of links.

15.
汪慕峰  胥布工 《控制与决策》2019,34(8):1681-1687
Networked industrial control systems, an important application of cyber-physical systems (CPSs), are developing rapidly. In recent years, however, malicious network attacks against industrial control systems have drawn wide attention to CPS security. Denial-of-service (DoS) jamming, the most easily mounted attack in CPSs, has been studied in depth. This paper proposes an energy-constrained, periodic DoS jamming attack model whose goal is to increase the probability of random packet loss on the wireless channel. Based on a simplified CPS model in which the wireless sensor-to-controller (S-C) channel suffers both DoS jamming and inherent random packet loss, state feedback is adopted, and sufficient conditions guaranteeing system stability are obtained via stochastic Lyapunov functions and linear matrix inequalities; a controller is then designed using these sufficient conditions together with the cone complementarity linearization algorithm. Finally, two numerical simulation examples verify the effectiveness of the proposed control strategy.

16.
Flow-Shop Scheduling Based on an Adaptive Genetic Algorithm   (Total citations: 2; self: 0; by others: 2)
沈斌  周莹君  王家海 《计算机工程》2010,36(14):201-203
The flow-shop scheduling problem is NP-complete. A new adaptive genetic algorithm is proposed that improves performance through a composite initial population, a filtering strategy for individuals with identical fitness, and improved adaptive crossover and mutation probabilities. Simulation comparisons demonstrate the superiority of the algorithm in three respects: the generation at which the best solution appears, the relative error of the best solution, and the effect of repeated random trials on the algorithm.
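Adaptive crossover/mutation probabilities are commonly made fitness-dependent, in the spirit of the Srinivas-Patnaik scheme: above-average individuals are disturbed less, below-average ones more. The paper's exact adaptation formula may differ; this is a common variant for illustration:

```python
def adaptive_rates(f, f_avg, f_max, pc_max=0.9, pm_max=0.1):
    """Fitness-dependent crossover (pc) and mutation (pm) rates:
    below-average individuals get the maximum rates, while
    above-average individuals get rates scaled down linearly
    toward zero as their fitness approaches the population best."""
    if f < f_avg or f_max == f_avg:
        return pc_max, pm_max
    scale = (f_max - f) / (f_max - f_avg)
    return pc_max * scale, pm_max * scale

pc, pm = adaptive_rates(f=80, f_avg=60, f_max=100)   # above-average case
```

This preserves good schedules late in the run while keeping enough mutation pressure on poor individuals to avoid premature convergence.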

17.
In this work we propose a fine grained approach with self-adaptive migration rate for distributed evolutionary computation. Our target is to gain some insights on the effects caused by communication when the algorithm scales. To this end, we consider a set of basic topologies in order to avoid the overlapping of algorithmic effects between communication and topological structures. We analyse the approach viability by comparing how solution quality and algorithm speed change when the number of processors increases and compare it with an Island model based implementation. A finer-grained approach implies a better chance of achieving a larger scalable system; such a feature is crucial concerning large-scale parallel architectures such as peer-to-peer systems. In order to check scalability, we perform a threefold experimental evaluation of this model: first, we concentrate on the algorithmic results when the problem scales up to eight nodes in comparison with how it does following the Island model. Second, we analyse the computing time speedup of the approach while scaling. Finally, we analyse the network performance with the proposed self-adaptive migration rate policy that depends on the link latency and bandwidth. With this experimental setup, our approach shows better scalability than the Island model and an equivalent robustness on the average of the three test functions under study.

18.
Most face recognition techniques have been successful in dealing with high-resolution (HR) frontal face images. However, real-world face recognition systems are often confronted with low-resolution (LR) face images with pose and illumination variations. This is a very challenging issue, especially under the constraint of using only a single gallery image per person. To address the problem, we propose a novel approach called coupled kernel-based enhanced discriminant analysis (CKEDA). CKEDA aims to simultaneously project the features from LR non-frontal probe images and HR frontal gallery ones into a common space where the discrimination property is maximized. There are four advantages of the proposed approach: 1) by using the appropriate kernel function, the data becomes linearly separable, which is beneficial for recognition; 2) inspired by linear discriminant analysis (LDA), we integrate multiple discriminant factors into our objective function to enhance the discrimination property; 3) we use the gallery extended trick to improve the recognition performance for the single-gallery-image-per-person problem; 4) our approach can address the problem of matching LR non-frontal probe images with HR frontal gallery images, which is difficult for most existing face recognition techniques. Experimental evaluation on the Multi-PIE dataset demonstrates the highly competitive performance of our algorithm.

19.
Process mining techniques have been used to analyze event logs from information systems in order to derive useful patterns. However, in the big data era, real-life event logs are huge, unstructured, and complex, so traditional process mining techniques have difficulties in the analysis of big logs. To reduce the complexity during the analysis, trace clustering can be used to group similar traces together and to mine more structured and simpler process models for each of the clusters locally. However, the high dimensionality of the feature space in which all the traces are represented poses different problems to trace clustering. In this paper, we study the effect of applying dimensionality reduction (preprocessing) techniques on the performance of trace clustering. In our experimental study we use three popular feature transformation techniques: singular value decomposition (SVD), random projection (RP), and principal components analysis (PCA), together with a state-of-the-art trace clustering technique in process mining. The experimental results on the dataset constructed from a real event log recorded from patient treatment processes in a Dutch hospital show that dimensionality reduction can improve trace clustering performance with respect to the computation time and average fitness of the mined local process models.
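Of the three preprocessing techniques compared above, random projection is the simplest to sketch: multiply each high-dimensional trace profile by a random Gaussian matrix to obtain a low-dimensional representation that approximately preserves distances. The trace profiles below are made-up activity-frequency vectors, not the hospital log data:

```python
import random

def random_projection(vectors, k, seed=0):
    """Random projection: map d-dimensional trace profiles to k
    dimensions via a random Gaussian matrix, scaled by 1/sqrt(k) so
    that pairwise distances are approximately preserved."""
    rng = random.Random(seed)
    d = len(vectors[0])
    R = [[rng.gauss(0.0, 1.0 / k ** 0.5) for _ in range(d)]
         for _ in range(k)]
    return [[sum(r[j] * v[j] for j in range(d)) for r in R]
            for v in vectors]

# Activity-frequency profiles for four event-log traces (two variants).
profiles = [
    [3, 0, 1, 0, 2, 0],
    [3, 0, 1, 0, 2, 1],
    [0, 2, 0, 3, 0, 0],
    [0, 2, 0, 3, 0, 1],
]
reduced = random_projection(profiles, k=2)
```

The reduced profiles would then be fed to the trace clustering step, cutting its computation time, which matches the speedups reported in the paper.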
