20 similar documents found (search time: 31 ms)
1.
Steve Schneider Thai Son Hoang Ken Robinson Helen Treharne 《Electronic Notes in Theoretical Computer Science》2005,137(2):183
The introduction of probabilistic behaviour into the B-Method is a recent development. In addition to allowing probabilistic behaviour to be modelled, the relationship between expected values of the machine state can be expressed and verified. This paper explores the application of probabilistic B to a simple case study: tracking the volume of liquid held in a tank by measuring the flow of liquid into it. The flow can change as time progresses, and sensors are used to measure the flow with some degree of accuracy and reliability, modelled as non-deterministic and probabilistic behaviour respectively. At the specification level, the analysis is concerned with the expectation clause in the probabilistic B machine and its consistency with machine operations. At the refinement level, refinement and equivalence laws on probabilistic GSL are used to establish that a particular design of sensors delivers the required level of reliability.
2.
3.
Class Refinement as Semantics of Correct Object Substitutability (cited 2 times: 0 self-citations, 2 by others)
Subtype polymorphism, based on syntactic conformance of objects' methods and used for substituting subtype objects for supertype objects, is a characteristic feature of the object-oriented programming style. While certainly very useful, typechecking of syntactic conformance of subtype objects to supertype objects is insufficient to guarantee correctness of object substitutability. In addition, the behaviour of subtype objects must be constrained to achieve correctness. In class-based systems classes specify the behaviour of the objects they instantiate. In this paper we define the class refinement relation which captures the semantic constraints that must be imposed on classes to guarantee correctness of substitutability in all clients of the objects these classes instantiate. Clients of class instances are modelled as programs making an iterative choice over invocation of class methods, and we formally prove that when a class C′ refines a class C, substituting instances of C′ for instances of C is refinement for the clients.
Received May 1999 / Accepted in revised form March 2000
4.
To address the low efficiency and high resource consumption of traditional car washing, a new self-service car wash system based on Internet of Things technology was designed and implemented. The system consists of two parts: the car wash terminal and a cloud server. The terminal uses an STM32F103VET6 as its main control chip and a Hall-effect flow sensor to measure water flow for volume-based billing. A GPRS SIM868 module provides wireless data communication with the remote server, and various on-board sensors collect the terminal's status data. The cloud server communicates with the terminal over UDP to retrieve its data, and a B/S-architecture data management system is built with Tomcat as the web server and MySQL as the database. Users can choose the payment method, washing mode, and billing mode based on the information shown on the terminal's LCD display. Meanwhile, back-office administrators can access the relevant information about the self-service car wash machines over the network and monitor and manage them in real time.
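The flow-based billing this terminal performs can be sketched in a few lines: the Hall-effect sensor emits pulses proportional to the volume passed, and the firmware converts a pulse count into litres and a fee. This is an illustrative sketch only, not code from the paper; the pulse constant and tariff below are hypothetical values that a real terminal would calibrate.

```python
# Illustrative sketch (not from the paper): converting Hall flow-sensor
# pulses into a billed volume. PULSES_PER_LITRE and PRICE_PER_LITRE are
# assumed values; small Hall-effect flow sensors commonly emit a few
# hundred pulses per litre.

PULSES_PER_LITRE = 450.0   # assumed pulse constant of the sensor
PRICE_PER_LITRE = 0.02     # hypothetical tariff (currency units/litre)

def billed_amount(pulse_count):
    """Return (litres used, fee) for a given pulse count."""
    litres = pulse_count / PULSES_PER_LITRE
    fee = round(litres * PRICE_PER_LITRE, 2)
    return litres, fee

litres, fee = billed_amount(9000)   # 9000 pulses -> 20 litres
```

On the terminal itself the pulse count would come from an interrupt counter on the sensor's signal pin; the conversion logic is the same.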
5.
When modelling a complex system, such as one with distributed functionality, we need to choose an appropriate level of abstraction. When analysing quantitative properties of the system, this abstraction is typically probabilistic, since we introduce uncertainty about its state and therefore its behaviour. In particular, when we aggregate several concrete states into a single abstract state we would like to know the distribution over these states. In reality, any probability distribution may be possible, but this leads to an intractable analysis. Therefore, we must find a way to approximate these distributions in a safe manner. We present an abstract interpretation for a simple imperative language with message passing, where truncated multivariate normal distributions are used as the abstraction. This allows the probabilities of transient properties to be bounded, without needing to calculate the exact distribution. We describe the semantics of programs in terms of automata, whose transitions are linear operators on measures. Given an input measure, we generate a probabilistic trace whose states are labelled by measures, describing the distribution of the values of variables at that point. By the use of appropriate widening operators, we are able to abstract the behaviour of loops to various degrees of precision.
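The core idea of a normal-distribution abstraction can be shown in miniature: a linear assignment maps a normal input to a normal output in closed form, so the analysis can track means and variances instead of full distributions. The sketch below is a deliberately simplified one-variable, untruncated version (the paper's abstraction uses truncated multivariate normals and linear operators on measures); all names here are illustrative.

```python
# Minimal sketch of a normal-distribution abstraction for the linear
# assignment x := a*x + b. One variable, untruncated; this only
# illustrates the closed-form propagation that makes such abstractions
# tractable.

from dataclasses import dataclass

@dataclass
class Normal:
    mean: float
    var: float

    def linear_assign(self, a, b):
        # If x ~ N(mu, s^2), then a*x + b ~ N(a*mu + b, a^2 * s^2).
        return Normal(a * self.mean + b, a * a * self.var)

x = Normal(mean=0.0, var=1.0)
x = x.linear_assign(2.0, 3.0)   # x := 2*x + 3  gives  N(3, 4)
```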
6.
Savas Konur Michael Fisher Simon Dobson Stephen Knox 《Formal Aspects of Computing》2014,26(4):677-694
As ubiquitous computing becomes a reality, its applications are increasingly being used in business-critical, mission-critical, and even safety-critical areas. Such systems must demonstrate an assured level of correctness. One approach to the exhaustive analysis of the behaviour of systems is formal verification, whereby each important requirement is logically assessed against all possible system behaviours. While formal verification is often used in safety analysis, it has rarely been used in the analysis of deployed pervasive applications. Without such formality it is difficult to establish that the system will exhibit the correct behaviours in response to its inputs and environment. In this paper, we show how model-checking techniques can be applied to analyse the probabilistic behaviour of pervasive systems. As a case study we apply this technique to an existing pervasive message-forwarding system, Scatterbox. Scatterbox incorporates many typical characteristics of pervasive systems, such as dependence on sensor reliability and dependence on context. We assess the dynamic temporal behaviour of the system, including the analysis of probabilistic elements, allowing us to verify formal requirements even in the presence of uncertainty in sensors. We also draw some tentative conclusions concerning the use of formal verification for pervasive computing in general.
7.
Systems biology aims to build realistic, computable models of complex biological systems, so that the evolution of a biological system can be understood from a systems perspective. An important theme in systems biology is the development of control theory for gene regulatory networks through external intervention, as a basis for future gene therapy techniques. Boolean networks, and their extension to probabilistic Boolean networks, are widely used to model gene regulatory networks. In the study of control problems, the state transitions of a probabilistic Boolean control network essentially form a discrete-time Markov decision process over a finite state space. Based on the theory of Markov decision processes, probabilistic model checking can solve both the finite-horizon and the infinite-horizon optimal control problems for such networks. For context-sensitive probabilistic Boolean control networks with random perturbations, the network is formally modelled with the probabilistic model checker PRISM, the two classes of optimal control problems are expressed as temporal logic formulas, and model checking is used to find the optimal solutions. Experimental results show that the proposed method can be used effectively for the analysis and optimal control of biological networks.
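The finite-horizon control problem over such a network reduces to backward value iteration on the induced Markov decision process. The sketch below shows that reduction on a toy two-state MDP standing in for a network's 2^n gene states; it is not the paper's PRISM-based method, and the transition probabilities and costs are invented for illustration.

```python
# Hedged sketch: finite-horizon optimal control by backward value
# iteration over a toy MDP. A probabilistic Boolean control network
# induces exactly such an MDP over its gene-activity states; the numbers
# below are made up for illustration.

# P[action][i][j] = probability of moving from state i to state j.
P = {
    0: [[0.9, 0.1], [0.5, 0.5]],   # action 0: no intervention
    1: [[0.2, 0.8], [0.1, 0.9]],   # action 1: intervene
}
# Per-step cost of being in state i under each action (state 1 is the
# undesirable state; action 1 adds an intervention cost of 0.3).
cost = {0: [0.0, 1.0], 1: [0.3, 1.3]}

def finite_horizon(horizon):
    """Optimal expected cost-to-go from each state over `horizon` steps."""
    v = [0.0, 0.0]
    for _ in range(horizon):
        v = [min(cost[a][i] + sum(P[a][i][j] * v[j] for j in range(2))
                 for a in P)
             for i in range(2)]
    return v
```

The infinite-horizon variant replaces the fixed loop count with iteration to convergence (with discounting); a model checker such as PRISM performs the analogous computation symbolically against a temporal logic objective.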
8.
Prithviraj Sen Amol Deshpande Lise Getoor 《The VLDB Journal: The International Journal on Very Large Data Bases》2009,18(5):1065-1090
Due to numerous applications producing noisy data, e.g., sensor data, experimental data, data from uncurated sources, information extraction, etc., there has been a surge of interest in the development of probabilistic databases. Most probabilistic database models proposed to date, however, fail to meet the challenges of real-world applications on two counts: (1) they often restrict the kinds of uncertainty that the user can represent; and (2) the query processing algorithms often cannot scale up to the needs of the application. In this work, we define a probabilistic database model, PrDB, that uses graphical models, a state-of-the-art probabilistic modeling technique developed within the statistics and machine learning community, to model uncertain data. We show how this results in a rich, complex yet compact probabilistic database model, which can capture the commonly occurring uncertainty models (tuple uncertainty, attribute uncertainty), more complex models (correlated tuples and attributes) and allows compact representation (shared and schema-level correlations). In addition, we show how query evaluation in PrDB translates into inference in an appropriately augmented graphical model. This allows us to easily use any of a myriad of exact and approximate inference algorithms developed within the graphical modeling community. While probabilistic inference provides a generic approach to solving queries, we show how the use of shared correlations, together with a novel inference algorithm that we developed based on bisimulation, can speed query processing significantly. We present a comprehensive experimental evaluation of the proposed techniques and show that even with a few shared correlations, significant speedups are possible.
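To fix intuitions, the simplest form of tuple uncertainty (which PrDB generalises with correlated graphical models) attaches an independent existence probability to each tuple; the probability that a selection query returns at least one result is then one minus the product of the non-existence probabilities of the matching tuples. The sketch below uses invented example data and is not PrDB's inference machinery.

```python
# Toy sketch of tuple-level uncertainty with *independent* tuples, the
# simplest model that probabilistic databases like PrDB generalise.
# P(at least one matching tuple exists) = 1 - prod(1 - p_i).

from math import prod

def exists_probability(tuples, predicate):
    """Probability that at least one tuple satisfying `predicate` exists."""
    ps = [p for row, p in tuples if predicate(row)]
    return 1.0 - prod(1.0 - p for p in ps)

# Hypothetical noisy sensor readings, each with an existence probability.
readings = [({"sensor": "s1", "temp": 41}, 0.6),
            ({"sensor": "s2", "temp": 39}, 0.5),
            ({"sensor": "s3", "temp": 42}, 0.5)]

p_hot = exists_probability(readings, lambda r: r["temp"] >= 40)
# matching tuples have p = 0.6 and 0.5, so p_hot = 1 - 0.4 * 0.5 = 0.8
```

Correlated tuples break the independence assumption used here, which is exactly why PrDB resorts to graphical-model inference instead of this product formula.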
9.
During the past decade, sequential pattern mining has been the core of numerous research efforts. It is now possible to efficiently extract knowledge of users’ behavior from a huge set of sequences collected over time. This has applications in various domains such as purchases in supermarkets, Web site visits, etc. However, sequence mining algorithms do little to control the risks of extracting false discoveries or overlooking true knowledge. In this paper, the theoretical conditions to achieve a relevant sequence mining process are examined. Then, the article offers a statistical view of sequence mining which has the following advantages: First, it uses a compact and generalized representation of the original sequences in the form of a probabilistic automaton. Second, it integrates statistical constraints to guarantee the extraction of significant patterns. Finally, it provides an interesting solution in a privacy preserving context in order to respect individuals’ information. An application in car flow modeling is presented, showing the ability of our algorithm (acsm) to discover frequent routes without any private information. Comparisons with a classical sequence mining algorithm (spam) are made, showing the effectiveness of our approach.
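The compact-representation idea behind the probabilistic automaton can be illustrated by estimating first-order transition probabilities from a set of sequences by maximum likelihood. This sketch only shows that summarisation step; acsm's actual construction and its statistical significance constraints are more involved, and the route data below is invented.

```python
# Illustrative sketch: summarising a set of sequences as first-order
# transition probabilities (the backbone of a probabilistic automaton).
# Not acsm itself; the routes are hypothetical.

from collections import Counter, defaultdict

def transition_probs(sequences):
    """Maximum-likelihood P(next symbol | current symbol)."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

routes = [["A", "B", "C"], ["A", "B", "D"], ["A", "B", "C"]]
probs = transition_probs(routes)
# From B the automaton moves to C with probability 2/3 and to D with 1/3.
```

Because only aggregate transition frequencies are retained, individual sequences cannot be read back out of the model, which is the source of the privacy-preserving property the abstract mentions.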
10.
Michael Leuschel Michael Butler 《International Journal on Software Tools for Technology Transfer (STTT)》2008,10(2):185-203
We present ProB, a validation toolset for the B method. ProB’s automated animation facilities allow users to gain confidence in their specifications. ProB also contains a model checker and a refinement checker, both of which can be used to detect various errors in B specifications. We describe the underlying methodology of ProB, and present the important aspects of the implementation. We also present empirical evaluations as well as several case studies, highlighting that ProB enables users to uncover errors that are not easily discovered by existing tools.
This research was carried out as part of the EU research project IST 511599 RODIN (Rigorous Open Development Environment for Complex Systems).
11.
Augusto Ciuffoletti 《Distributed Computing》1994,7(3):115-127
We propose a probabilistic algorithm to solve the problem of distributed broadcast. A simple diffusion algorithm is introduced, and its reliability is evaluated. The cost and reliability of the probabilistic algorithm are compared with the corresponding deterministic algorithm.
Augusto Ciuffoletti graduated in computer sciences at the University of Pisa in 1980. From 1980 to 1983 he worked with Selenia and OtoMelara on projects funded by the Italian National Research Council. Since 1984 he has been doing research at the University of Pisa, where he is now an assistant. He is particularly interested in studying efficient and reliable low level services (such as broadcast).
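A simple diffusion broadcast of the kind evaluated in this entry can be studied by Monte-Carlo simulation: each informed node independently forwards the message to each other node with some probability, and we estimate the chance that everyone is eventually reached. This sketch is a generic probabilistic-diffusion model under stated assumptions (complete graph, fixed per-link forwarding probability), not the paper's specific algorithm.

```python
# Hedged sketch: estimating the reliability of a probabilistic diffusion
# broadcast on a complete graph of n nodes, where each informed node
# forwards to each other node independently with probability p.
# Assumed model, not the paper's algorithm.

import random

def broadcast_reaches_all(n, p, rng):
    informed = {0}          # node 0 originates the message
    frontier = {0}
    while frontier:
        nxt = set()
        for _u in frontier:
            for v in range(n):
                if v not in informed and rng.random() < p:
                    nxt.add(v)
        informed |= nxt
        frontier = nxt
    return len(informed) == n

rng = random.Random(42)
trials = 2000
reliability = sum(broadcast_reaches_all(8, 0.5, rng)
                  for _ in range(trials)) / trials
```

Sweeping n and p in such a simulation reproduces the kind of cost/reliability trade-off the paper compares against a deterministic broadcast.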
12.
Alessandra Di Pierro Chris Hankin Herbert Wiklicky 《Electronic Notes in Theoretical Computer Science》2007,190(3):59
We present a semantics-based technique for analysing probabilistic properties of imperative programs. This consists in a probabilistic version of classical data flow analysis. We apply this technique to pWhile programs, i.e. programs written in a probabilistic version of a simple While language. As a first step we introduce a syntax based definition of a linear operator semantics (LOS) which is equivalent to the standard structural operational semantics of While. The LOS of a pWhile program can be seen as the generator of a Discrete Time Markov Chain and plays a similar role as a collecting or trace semantics for classical While. Probabilistic Abstract Interpretation techniques are then employed in order to define data flow analyses for properties like Parity and Live Variables.
13.
14.
15.
Ukachukwu Ndukwu 《The Journal of Logic and Algebraic Programming》2012,81(1):26-45
Probabilistic annotations generalise standard Hoare Logic [20] to quantitative properties of probabilistic programs. They can be used to express critical expected values over program variables that must be maintained during program execution. As for standard program development, probabilistic assertions can be checked mechanically relative to an appropriate program semantics. In the case that a mechanical prover is unable to complete such validity checks then a counterexample to show that the annotation is incorrect can provide useful diagnostic information. In this paper, we provide a definition of counterexamples as failure traces for probabilistic assertions within the context of the pB language [19], an extension of the standard B method [1] to cope with probabilistic programs. In addition, we propose algorithmic techniques to find counterexamples where they exist, and suggest a ranking mechanism to return ‘the most useful diagnostic information’ to the pB developer to aid the resolution of the problem.
16.
The reliability-based design optimization (RBDO) can be described by the design potential concept in a unified system space, where the probabilistic constraint is identified by the design potential surface of the reliability target that is obtained analytically from the first-order reliability method (FORM). This paper extends the design potential concept to treat nonsmooth probabilistic constraints and extreme case design in RBDO. In addition, refinement of the design potential surface, which yields a better optimum design, can be obtained using the more accurate second-order reliability method (SORM). By integrating performance probability analysis into the iterative design optimization process, the design potential concept leads to a very effective design potential method (DPM) for robust system parameter design. It can also be applied effectively to extreme case design (ECD) by directly representing a probabilistic constraint in terms of the system performance function.
Received July 25, 2000
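For the special case of a linear limit state with independent normal variables, the FORM reliability index that anchors this approach has a closed form: for g = R − S with capacity R and demand S, beta = (mu_R − mu_S) / sqrt(sigma_R² + sigma_S²), and the failure probability is Phi(−beta). The sketch below works that case with illustrative numbers; general FORM requires an iterative search for the most probable failure point.

```python
# Worked FORM sketch for a linear limit state g = R - S with independent
# normal capacity R and demand S. The input statistics are illustrative.

from math import sqrt
from statistics import NormalDist

def form_beta(mu_r, sig_r, mu_s, sig_s):
    """Closed-form reliability index for g = R - S, R and S normal."""
    return (mu_r - mu_s) / sqrt(sig_r ** 2 + sig_s ** 2)

beta = form_beta(mu_r=10.0, sig_r=3.0, mu_s=2.0, sig_s=4.0)  # 8/5 = 1.6
p_fail = NormalDist().cdf(-beta)   # Phi(-beta), about 5.5% here
```

SORM refines this estimate by accounting for the curvature of the limit-state surface at the most probable failure point, which is the refinement the abstract refers to.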
17.
Probabilistic Duration Calculus for Continuous Time (cited 1 time: 0 self-citations, 1 by others)
This paper deals with dependability of imperfect implementations concerning given requirements. The requirements are assumed to be written as formulas in Duration Calculus. Implementations are modelled by continuous semi-Markov processes with finite state space, which are expressed in the paper as finite automata with stochastic delays of state transitions. A probabilistic model for Duration Calculus formulas is introduced, so that the satisfaction probabilities of Duration Calculus formulas with respect to semi-Markov processes can be defined, reasoned about and calculated through a set of axioms and rules of the model.
Received November 1994 / Accepted in revised form June 1998
18.
Electrocardiography (ECG) is a heart signal wave that is recorded using medical sensors, which are normally attached to the human body near the heart. ECG waves have repetitive patterns that can be efficiently used in the diagnosis of heart problems as they carry several characteristics of heart operation. Traditionally, the analysis of ECG waves is done using informal techniques, like simulation, which are non-exhaustive, and thus the analysis results may lead to ambiguities and, in extreme cases, life-threatening scenarios. To overcome such problems, we propose to analyze ECG heart signals using probabilistic model checking, which is a formal-methods-based quantitative analysis approach. This work presents the formal probabilistic analysis of ECG signal abnormalities, where the likelihood of abnormal patterns is studied and analyzed using the PRISM model checker.
19.
20.
Due to the application-specific nature of wireless sensor networks, the sensitivity to coverage and data reporting latency varies depending on the type of application. In light of this, algorithms and protocols should be application-aware to achieve the optimum use of the highly limited resources in sensors and hence to increase the overall network performance. This paper proposes a probabilistic constrained random sensor selection (CROSS) scheme for application-aware sensing coverage with the goal of maximizing the network lifetime. The CROSS scheme randomly selects in each round (approximately) k data-reporting sensors which are sufficient for a user/application-specified desired sensing coverage (DSC), maintaining a minimum distance between any pair of the selected k sensors. We exploit the Poisson sampling technique to enforce the minimum distance. Consequently, CROSS improves the spatial regularity of the randomly selected k sensors, and hence the fidelity of satisfying the DSC in each round and the connectivity among the selected sensors increase. To this end, we also introduce an algorithm to compute the desired minimum distance to be enforced between any pair of sensors. Finally, we present a probabilistic analytical model to measure the impact of the Poisson sampling technique on selecting k sensors, along with the optimality of the desired minimum distance computed by the proposed algorithm.
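The minimum-distance selection idea can be approximated with simple dart throwing: visit candidate sensors in random order and keep one only if it lies at least d_min from every sensor kept so far, stopping at k. This is a hedged sketch of the general technique (a rejection-sampling approximation of Poisson sampling), not the CROSS scheme itself, and the positions and parameters below are invented.

```python
# Hedged sketch of minimum-distance random sensor selection: a simple
# dart-throwing approximation of Poisson sampling. CROSS's actual
# selection and its computation of d_min are more refined.

import math
import random

def select_sensors(positions, k, d_min, rng):
    """Randomly pick up to k indices whose pairwise distance >= d_min."""
    order = list(range(len(positions)))
    rng.shuffle(order)
    chosen = []
    for i in order:
        if all(math.dist(positions[i], positions[j]) >= d_min
               for j in chosen):
            chosen.append(i)
            if len(chosen) == k:
                break
    return chosen

rng = random.Random(7)
sensors = [(rng.random(), rng.random()) for _ in range(200)]  # unit square
picked = select_sensors(sensors, k=10, d_min=0.2, rng=rng)
```

Enforcing d_min spreads the chosen reporters more evenly than unconstrained random selection, which is the spatial-regularity gain the abstract describes; choosing d_min too large, however, can make fewer than k selections feasible.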