Similar Literature
20 similar documents retrieved.
1.
A relative specification is a collection of laws relating the behaviour of a required new program to that of one or more existing programs. A two-stage method for transforming such relative specifications into effective functional programs is described and illustrated. The inversion stage re-arranges the specifying laws to obtain a collection of partial definitions for each unknown function, typically involving non-deterministic operators. The subsequent fusion stage combines each set of partial definitions into a single complete definition, thereby eliminating non-deterministic operators. (Supported by Science and Engineering Research Council, Grant GR/E16571.)

2.
We present a framework for constructing formal models of object-oriented distributed systems and a property language to express behavioral constraints in such models. Most of the existing models have their origin in specific mathematical notations and/or concepts. In contrast, we have developed our model such that it accounts for a large set of phenomena associated with industrial implementations of object-oriented distributed systems. The model that we propose, while closer to industrial concerns and practice, still has the powerful features of formal approaches. It also offers the possibility to automatically check at service run-time that the final service implementation has not violated and is not violating properties expressed at the abstraction level of our model. In our model, which relies on event-based behavioral abstraction, we use linear-time temporal logic as the underlying formalism for the specification of properties. We introduce two novel operators which are especially useful for object-oriented systems and which provide a number of advantages over the well-known temporal logic operators. A recent decision of one of our industrial partners to adopt our proposal into one of their development platforms can be seen as strong evidence of the relevance of our work and as a promising step towards a better understanding between the academic formal methods community and industry. Copyright © 2001 John Wiley & Sons, Ltd.

3.
We investigate the numerical behavior of a fourth-order accurate method. The results are compared with those of a second-order method. We have verified the rate of convergence for the numerical solution. As test cases, the simple hyperbolic model equation u_t + u_x = 0 and the two-dimensional Euler equations over a backward-facing step have been used. The fourth-order method has been implemented on a data-parallel computer, and the difference operators have been designed to minimize the bandwidth. We also derive boundary-modified, semidefinite artificial viscosity operators of arbitrary order of accuracy. The viscosity operators are presented in a form that is particularly well suited for implementation on data-parallel computers.
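For illustration, the sketch below is not the scheme from the paper: the central-difference stencils, the RK4 time integrator, the periodic sine-wave test, and all parameters are generic choices. It compares a second-order and a fourth-order spatial discretization of the model equation u_t + u_x = 0 and verifies the convergence rates numerically.

```python
# A minimal sketch (not the paper's method): second- vs fourth-order central
# differences for u_t + u_x = 0 with periodic boundaries, advanced with RK4.
import numpy as np

def rhs(u, dx, order):
    """Spatial discretization of -u_x using central differences."""
    if order == 2:
        ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    else:  # fourth-order central stencil
        ux = (-np.roll(u, -2) + 8 * np.roll(u, -1)
              - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12 * dx)
    return -ux

def solve(n, order, t_end=1.0, cfl=0.5):
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    dx = 1.0 / n
    dt = cfl * dx
    u = np.sin(2 * np.pi * x)
    t = 0.0
    while t < t_end - 1e-12:
        dt_step = min(dt, t_end - t)
        k1 = rhs(u, dx, order)
        k2 = rhs(u + 0.5 * dt_step * k1, dx, order)
        k3 = rhs(u + 0.5 * dt_step * k2, dx, order)
        k4 = rhs(u + dt_step * k3, dx, order)
        u = u + dt_step / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt_step
    exact = np.sin(2 * np.pi * (x - t_end))   # the exact solution is advected
    return np.max(np.abs(u - exact))

for order in (2, 4):
    errors = [solve(n, order) for n in (32, 64, 128)]
    rates = np.log2(np.array(errors[:-1]) / np.array(errors[1:]))
    print(f"order {order}: errors {errors}, observed rates {rates}")
```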

4.
It is usually assumed that implementations of nondeterministic programs may resolve the nondeterminacy arbitrarily. In some circumstances, however, we may wish to assume that the implementation is in some sense fair, by which we mean that in its long-term behaviour it does not show undue bias in forever favouring some nondeterministic choices over others. Under the assumption of fairness many otherwise failing programs become terminating. We construct various predicate transformer semantics of such fairly-terminating programs. The approach is based on formulating the familiar temporal operators always, eventually, and infinitely often as predicate transformers. We use these operators to construct a framework that accommodates many kinds of fairness, including varieties of so-called weak and strong fairness in both their all-levels and top-level forms. Our formalization of the notion of fairness does not exploit the syntactic shape of programs, and allows the familiar nondeterminacy and fair nondeterminacy to be arbitrarily combined in the one program. Invariance theorems for reasoning about fairly terminating programs are proved. The semantics admits probabilistic implementations provided that unbounded fairness is excluded.

5.
We propose an adaptive approach to merging possibilistic knowledge bases that deploys multiple operators instead of a single operator in the merging process. The merging approach consists of two steps: the splitting step and the combination step. The splitting step splits each knowledge base into two subbases, and in the second step different classes of subbases are combined using different operators. Our merging approach is applied to knowledge bases which are self-consistent and results in a knowledge base which is also consistent. Two operators are proposed based on two different splitting methods. Both operators result in a possibilistic knowledge base which contains more information than that obtained by t-conorm-based (such as maximum-based) merging methods. In the flat case, one of the operators provides a good alternative to syntax-based merging operators in classical logic. This paper is a revised and extended version of [36].

6.

Algorithmic differentiation tools can automate the adjoint transformation of parallel message-passing codes [J. Utke, L. Hascoët, P. Heimbach, C. Hill, P. Hovland, and U. Naumann, Toward Adjoinable MPI, in 2009 IEEE International Symposium on Parallel & Distributed Processing, May, IEEE, 2009, pp. 1–8] using the AMPI library. Nevertheless, a non-trivial and manual step after the differentiation is the initialization of the seed and the retrieval of the output values from the differentiated code. Ambiguities in seeding occur in programs where the user is unable to expose the complete program flow, with a single entry and a single exit point, to the AD tool. We present the ambiguities associated with seed initialization and output retrieval for the adjoint transformation of halo and zero-halo partitioned MPI programs. We introduce a general framework to eliminate ambiguities in seeding and retrieval for shared-node reduction over the + and * operators, using a conceptual master-worker model. The model shows the need for new MPI calls for retrieval and eliminates MPI calls for seed initialization. Different implementations for seeding manually assembled adjoints were inferred from the model, namely partial and unique seeding. We successfully applied the seeding techniques to a 3D zero-halo partitioned unstructured compressible discrete adjoint solver and highlight the merits and demerits of each strategy.

7.
As the competition between mobile telecom operators becomes more severe, it becomes critical for operators to diversify their business areas. In particular, mobile operators are turning from traditional voice communication to mobile value-added services (VAS), new services that generate more average revenue per user (ARPU). That is, cross-selling is critical for mobile telecom operators to expand their revenues and profits. In this study, we propose a customer classification model, which may be used for facilitating cross-selling in a mobile telecom market. Our model uses the accumulated data on existing customers, including their demographic data and their usage patterns for old products or services, to find new products and services with high sales potential. Various data mining techniques are applied to our proposed model in two steps. In the first step, several classification techniques such as logistic regression, artificial neural networks, and decision trees are applied independently to predict the purchase of new products, and each model produces its prediction as a probability. In the second step, our model combines all these probabilities by using a genetic algorithm (GA) and makes the final decision for a target customer as to whether he or she would purchase a new product. To validate the usefulness of our model, we applied it to the case of a real-world mobile telecom company in Korea. As a result, we found that our model produced high-quality information for cross-selling, and that the GA in the second step contributed significantly to improving the performance.
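The two-step structure can be sketched as follows. Everything here is illustrative: the synthetic customer data, the specific classifiers, and the toy genetic algorithm over combination weights are assumptions, not the Korean case study or the paper's GA design.

```python
# A minimal sketch of the two-step model: step 1 trains several classifiers that
# each output a purchase probability; step 2 uses a small genetic algorithm to
# find combination weights that maximize validation accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                       # stand-in customer features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1: independent classifiers, each producing purchase probabilities.
models = [LogisticRegression(max_iter=1000),
          DecisionTreeClassifier(max_depth=5),
          MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000)]
probs = np.column_stack([m.fit(X_tr, y_tr).predict_proba(X_val)[:, 1] for m in models])

# Step 2: genetic algorithm over combination weights.
def fitness(w):
    p = probs @ (w / w.sum())                        # weighted average probability
    return np.mean((p > 0.5) == y_val)               # validation accuracy

pop = rng.uniform(0.01, 1.0, size=(30, probs.shape[1]))
for _ in range(50):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]          # selection: keep the best
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        child = np.where(rng.random(probs.shape[1]) < 0.5, a, b)   # crossover
        child += rng.normal(scale=0.05, size=child.shape)          # mutation
        children.append(np.clip(child, 0.01, 1.0))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print("best weights:", best / best.sum(), "validation accuracy:", fitness(best))
```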

8.
Justification logics are a family of modal epistemic logics which enable us to reason about justifications and evidence. In this paper, we introduce evidence-based multi-agent distributed knowledge logics, called distributed knowledge justification logics. The language of our justification logics contains evidence-based knowledge operators for individual agents and for distributed knowledge, which are interpreted respectively as “t is a justification that agent i accepts for F” and “t is a justification that all agents accept for F if they combine their knowledge and justifications”. We study basic properties of our logics and prove the conservativity of distributed knowledge justification logics over multi-agent justification logics. We present Kripke-style models, pseudo-Fitting and Fitting models, as well as Mkrtychev models (single-world Fitting models), and prove soundness and completeness theorems. We also find a class of Fitting models which satisfies the principle of full communication. Finally, we establish the realization theorem, which states that distributed knowledge justification logics can be embedded into the modal distributed knowledge logics, and vice versa.

9.
A CPS translation is a syntactic translation of programs, which is useful for describing their operational behavior. By iterating the standard call-by-value CPS translation, Danvy and Filinski discovered the CPS hierarchy and proposed a family of control operators, shift and reset, that make it possible to capture successive delimited continuations in a CPS hierarchy. Although shift and reset have found applications in several areas such as partial evaluation, most studies in the literature have been devoted to the base level of the hierarchy, namely to level-1 shift and reset. In this article, we investigate the whole family of shift and reset. We give a simple calculus with level-n shift and level-n reset for an arbitrary n>0. We then give a set of equational axioms for them, and prove that these axioms are sound and complete with respect to the CPS translation. The resulting set of axioms is concise and a natural extension of those for level-1 shift and reset. This article is a revised version of the paper presented at the 18th International Workshop on Computer Science Logic (CSL’04), September 2004.

10.
Model-driven engineering (MDE) of software product lines (SPLs) merges two increasingly important paradigms that synthesize programs by transformation. MDE creates programs by transforming models, and SPLs elaborate programs by applying transformations called features. In this paper, we present the design and implementation of a transformational model of a product line of scalable vector graphics and JavaScript applications. We explain how we simplified our implementation by lifting selected features and their compositions from our original product line (whose implementations were complex) to features and their compositions of another product line (whose specifications were simple). We used operators to map higher-level features and their compositions to their lower-level counterparts. Doing so exposed commuting relationships among feature compositions in both product lines that helped validate our model and implementation.

11.
Although several new tone‐mapping operators are proposed each year, there is no reliable method to validate their performance or to tell how different they are from one another. In order to analyze and understand the behavior of tone‐mapping operators, we model their mechanisms by fitting a generic operator to an HDR image and its tone‐mapped LDR rendering. We demonstrate that the majority of both global and local tone‐mapping operators can be well approximated by computationally inexpensive image processing operations, such as a per‐pixel tone curve, a modulation transfer function and color saturation adjustment. The results produced by such a generic tone‐mapping algorithm are often visually indistinguishable from those of much more expensive algorithms, such as the bilateral filter. We show the usefulness of our generic tone‐mapper in backward‐compatible HDR image compression, the black‐box analysis of existing tone‐mapping algorithms and the synthesis of new algorithms that are combinations of existing operators.
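A minimal sketch of such a generic global operator is given below. The sigmoid tone curve, the key value a, and the saturation exponent s are illustrative choices, not the parametrisation fitted in the paper, and the modulation transfer function component is omitted.

```python
# A sketch of a generic global tone-mapping operator: a tone curve applied to
# luminance plus a colour-saturation adjustment (illustrative choices throughout).
import numpy as np

def tonemap(hdr, s=0.7, a=0.18):
    """hdr: float RGB image with linear radiance values, shape (H, W, 3)."""
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    lum = np.maximum(lum, 1e-6)
    scaled = a * lum / np.exp(np.mean(np.log(lum)))    # key-value scaling by log-average luminance
    lum_out = scaled / (1.0 + scaled)                  # simple sigmoid per-pixel tone curve
    ratio = (hdr / lum[..., None]) ** s                # colour saturation adjustment
    ldr = np.clip(ratio * lum_out[..., None], 0.0, 1.0)
    return (ldr ** (1.0 / 2.2) * 255).astype(np.uint8)  # display gamma

# usage with a synthetic HDR image:
ldr = tonemap(np.random.rand(64, 64, 3) * 100.0)
print(ldr.shape, ldr.dtype)
```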

12.
Modern multicore processors, such as the Cell Broadband Engine, achieve high performance by equipping accelerator cores with small “scratch-pad” memories. The price for increased performance is higher programming complexity – the programmer must manually orchestrate data movement using direct memory access (DMA) operations. Programming using asynchronous DMA operations is error-prone, and DMA races can lead to nondeterministic bugs which are hard to reproduce and fix. We present a method for DMA race analysis in C programs. Our method works by automatically instrumenting a program with assertions modeling the semantics of a memory flow controller. The instrumented program can then be analyzed using state-of-the-art software model checkers. We show that bounded model checking is effective for detecting DMA races in buggy programs. To enable automatic verification of the correctness of instrumented programs, we present a new formulation of k-induction geared towards software, as a proof rule operating on loops. Our techniques are implemented as a tool, Scratch, which we apply to a large set of programs supplied with the IBM Cell SDK, in which we discover a previously unknown bug. Our experimental results indicate that our k-induction method performs extremely well on this problem class. To our knowledge, this marks both the first application of k-induction to software verification, and the first example of software model checking in the context of heterogeneous multicore processors.
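The k-induction proof rule itself can be illustrated on a toy transition system. The paper applies the rule inside a model checker for DMA-instrumented C programs; the Z3-based encoding, the example system, and the property below are assumptions for illustration only.

```python
# A minimal sketch of the k-induction proof rule using the Z3 SMT solver.
# Toy system: x' = y, y' = x + 1 from (0, 0); property y >= x is provable
# with k = 2 but not with plain (k = 1) induction.
from z3 import Int, Solver, And, Or, Not, unsat

def state(i):
    return (Int(f"x{i}"), Int(f"y{i}"))

def init(s):                 # initial states: x = 0, y = 0
    return And(s[0] == 0, s[1] == 0)

def trans(s, t):             # one loop iteration: x' = y, y' = x + 1
    return And(t[0] == s[1], t[1] == s[0] + 1)

def prop(s):                 # safety property to prove: y >= x
    return s[1] >= s[0]

def k_induction(k):
    s = [state(i) for i in range(k + 1)]
    path = [trans(s[i], s[i + 1]) for i in range(k)]
    # Base case: no property violation within the first k steps from an initial state.
    base = Solver()
    base.add(init(s[0]), And(path), Or([Not(prop(s[i])) for i in range(k + 1)]))
    # Inductive step: k consecutive good states cannot be followed by a bad one.
    step = Solver()
    step.add(And([prop(s[i]) for i in range(k)]), And(path), Not(prop(s[k])))
    return base.check() == unsat and step.check() == unsat

for k in (1, 2):
    print(f"k = {k}: property proved = {k_induction(k)}")
```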

13.
Advanced Robotics, 2013, 27(6): 675–694
Selecting an appropriate gait can reduce the energy consumed by a biped robot. In this paper, a Genetic Algorithm gait synthesis method is proposed, which generates the angle trajectories based on minimum consumed energy and minimum torque change. The gait synthesis is considered for two cases: walking and going up stairs. The proposed method can be applied for a wide range of step lengths and step times during walking, or of step lengths, stair heights and step times for going up stairs. The angle trajectories are generated without neglecting the stability of the biped robot. The angle trajectories can also be generated for other tasks to be performed by the biped robot, such as going down stairs or overcoming obstacles. In order to verify the effectiveness of the proposed method, the results for minimum consumed energy and minimum torque change are compared. A Radial Basis Function Neural Network is considered for the real-time application. Simulations are based upon the parameters of the 'Bonten-Maru I' humanoid robot, which is under development in our laboratory. The evaluation by simulation shows that the proposed method performs well.
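A purely illustrative sketch of the idea follows. The via-point trajectory parametrisation, the stand-in single-joint dynamics, the cost weights, and the GA settings are assumptions, not the 'Bonten-Maru I' model or the paper's cost function, and stability constraints are omitted.

```python
# A minimal sketch: a GA searches over via-points of a joint-angle trajectory,
# with a fitness that penalises a consumed-energy proxy and a torque-change proxy.
import numpy as np

rng = np.random.default_rng(1)
T, N = 1.0, 50                              # step time and samples per step
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]

def trajectory(via):                        # interpolate via-points into joint angles
    knots = np.linspace(0.0, T, via.size)
    return np.interp(t, knots, via)

def cost(via):
    q = trajectory(via)
    dq = np.gradient(q, t)                  # joint velocity
    ddq = np.gradient(dq, t)                # joint acceleration
    tau = 1.5 * ddq + 0.3 * np.sin(q)       # stand-in inverse dynamics -> torque
    energy = np.sum(np.abs(tau * dq)) * dt  # consumed-energy proxy
    torque_change = np.sum(np.gradient(tau, t) ** 2) * dt
    return energy + 1e-3 * torque_change

def ga(pop_size=40, n_via=6, generations=100):
    pop = rng.uniform(-0.5, 0.5, size=(pop_size, n_via))
    pop[:, 0], pop[:, -1] = 0.0, 0.3        # boundary angles fixed by the step
    for _ in range(generations):
        fit = np.array([cost(ind) for ind in pop])
        parents = pop[np.argsort(fit)[: pop_size // 2]]              # selection
        idx = rng.integers(0, parents.shape[0], size=(pop_size // 2, 2))
        children = 0.5 * (parents[idx[:, 0]] + parents[idx[:, 1]])   # blend crossover
        children += rng.normal(scale=0.02, size=children.shape)      # mutation
        children[:, 0], children[:, -1] = 0.0, 0.3
        pop = np.vstack([parents, children])
    fit = np.array([cost(ind) for ind in pop])
    return pop[np.argmin(fit)], float(fit.min())

best, best_cost = ga()
print("best via-points:", np.round(best, 3), "cost:", round(best_cost, 4))
```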

14.
We present a flexible initial framework for defining self‐motivated, self‐aware agents in simulated worlds, planning continuously so as to maximize long‐term rewards. While such agents employ reasoned exploration of feasible sequences of actions and corresponding states, they also behave opportunistically and recover from failure, thanks to their continual plan updates and quest for rewards. Our framework allows for both specific and general (quantified) knowledge and for epistemic predicates such as knowing‐that and knowing‐whether. Because realistic agents have only partial knowledge of their world, the reasoning of the proposed agents uses a weakened closed‐world assumption; this has consequences for epistemic reasoning, in particular introspection. The planning operators allow for quantitative, gradual change and side effects such as the passage of time, changes in distances and rewards, and language production, using a uniform procedural attachment method. Question answering (involving introspection) and experimental runs are shown for our particular agent ME in a simple world, demonstrating the value of continual deliberate, reward‐driven planning. Though the primary merit of agents definable in our framework is that they combine all of the aforementioned features, they can also be configured as single or multiple goal‐seeking agents and as such perform comparably with some recent experimental agents.

15.
The interpretative approach to compilation allows compiling programs by partially evaluating an interpreter w.r.t. a source program. This approach, though very attractive in principle, has not been widely applied in practice, mainly because of the difficulty in finding a partial evaluation strategy which always obtains “quality” compiled programs. In spite of this, in recent work we have provided a proof of concept that, at least for some examples, this approach can be applied to decompile Java bytecode into Prolog. This allows applying existing advanced tools for the analysis of logic programs in order to verify Java bytecode. However, successful partial evaluation of an interpreter for (a realistic subset of) Java bytecode is a rather challenging problem. The aim of this work is to improve the performance of the above decompilation process in two respects. First, we would like to obtain quality decompiled programs, i.e., simple and small ones. We refer to this as the effectiveness of the decompilation. Second, we would like the decompilation process to be as efficient as possible, both in terms of time and memory usage, in order to scale up in practice. We refer to this as the efficiency of the decompilation. With this aim, we propose several techniques for improving the partial evaluation strategy. We argue that our experimental results show that we are able to significantly improve the efficiency and effectiveness of the decompilation process.

16.
We address the problem of error detection for programs that take recursive data structures and arrays as input. Previously we proposed a combination of symbolic execution and model checking for the analysis of such programs: we put a bound on the size of the program inputs and/or the search depth of the model checker to limit the search state space. Here we look beyond bounded model checking and consider state matching techniques to limit the state space. We describe a method for examining whether a symbolic state that arises during symbolic execution is subsumed by another symbolic state. Since the number of symbolic states may be infinite, subsumption is not enough to ensure termination. Therefore, we also consider abstraction techniques for computing and storing abstract states during symbolic execution. Subsumption checking determines whether an abstract state is being revisited, in which case the model checker backtracks—this enables analysis of an under-approximation of the program behaviors. We illustrate the technique with abstractions for lists and arrays. We also discuss abstractions for more general data structures. The abstractions encode both the shape of the program heap and the constraints on numeric data. We have implemented the techniques in the Java PathFinder tool and we show their effectiveness on Java programs. This paper is an extended version of Anand et al. (Proceedings of SPIN, pp. 163–181, 2006).

17.
Compositional verification of sequential programs with procedures
We present a method for algorithmic, compositional verification of control-flow-based safety properties of sequential programs with procedures. The application of the method involves three steps: (1) decomposing the desired global property into local properties of the components, (2) proving the correctness of the property decomposition by using a maximal model construction, and (3) verifying that the component implementations obey their local specifications. We consider safety properties of both the structure and the behaviour of program control flow. Our compositional verification method builds on a technique proposed by Grumberg and Long that uses maximal models to reduce compositional verification of finite-state parallel processes to standard model checking. We present a novel maximal model construction for the fragment of the modal μ-calculus with boxes and greatest fixed points only, and adapt it to control-flow graphs modelling components described in a sequential procedural language. We extend our verification method to programs with private procedures by defining an abstraction, presented as an inlining transformation. All algorithms have been implemented in a tool set automating all required verification steps. We validate our approach on an electronic purse case study.

18.
A method for constructing prediction models based on local minimum entropy is proposed; it distinguishes the different probability distributions of the bits to be coded more effectively and thus achieves efficient compression of wavelet coefficients. First, prediction coefficients are selected according to the correlation between wavelet coefficients, and a correlation prediction function is constructed to combine the predictive effect of multiple coefficients. Then, taking entropy minimization as the criterion, a stepwise selection procedure selects and merges the classes induced by the prediction function, yielding a locally optimal prediction classification model. Combined with entropy coding, this gives efficient compression of the wavelet coefficients. Experimental results show that, compared with the JPEG2000 image compression standard, the proposed method improves both the subjective and objective quality of the reconstructed images, with the objective quality improving by 0.4 dB on average.
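A sketch of the entropy-minimization criterion for merging context classes might look as follows. The synthetic data, the quantised prediction contexts, the target class count, and the greedy adjacent-merge strategy are assumptions for illustration; the paper's stepwise selection procedure may differ.

```python
# A minimal sketch: binary symbols are partitioned into context classes by a
# quantised prediction value, then classes are greedily merged using total
# first-order entropy (estimated code length) as the selection criterion.
import numpy as np

rng = np.random.default_rng(0)

def entropy_bits(ones, total):
    """First-order entropy (bits/symbol) of a binary source with `ones` ones."""
    if total == 0 or ones in (0, total):
        return 0.0
    p = ones / total
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def code_length(classes):
    """Estimated total code length of all classes, in bits."""
    return sum(total * entropy_bits(ones, total) for ones, total in classes)

# Synthetic data: a prediction value per bit and the actual bit.
pred = rng.normal(size=20000)
bits = (rng.random(20000) < 1 / (1 + np.exp(-3 * pred))).astype(int)

# Initial fine partition: 16 classes by quantised prediction value.
edges = np.quantile(pred, np.linspace(0, 1, 17))
labels = np.clip(np.searchsorted(edges, pred, side="right") - 1, 0, 15)
classes = [(int(bits[labels == c].sum()), int((labels == c).sum())) for c in range(16)]

# Greedy merging of adjacent classes: pick the merge that increases code length least.
while len(classes) > 4:
    costs = []
    for i in range(len(classes) - 1):
        merged = (classes[i][0] + classes[i + 1][0], classes[i][1] + classes[i + 1][1])
        costs.append(code_length(classes[:i] + [merged] + classes[i + 2:]))
    b = int(np.argmin(costs))
    classes = classes[:b] + [(classes[b][0] + classes[b + 1][0],
                              classes[b][1] + classes[b + 1][1])] + classes[b + 2:]
    print(f"{len(classes)} classes, estimated code length {code_length(classes):.0f} bits")
```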

19.
Deep-learning-based code vulnerability detection models, owing to their high detection efficiency and accuracy, have gradually become an important means of detecting software vulnerabilities and play an important role in the code auditing services of the code hosting platform GitHub. However, deep neural networks have been shown to be susceptible to adversarial attacks, which exposes deep-learning-based vulnerability detection models to the risk of being attacked and having their detection accuracy degraded. Constructing adversarial attacks against vulnerability detection models therefore not only uncovers the security flaws of such models but also helps evaluate their robustness, so that model performance can subsequently be improved by appropriate means. Existing adversarial attack methods for vulnerability detection models, however, rely on general-purpose code transformation tools and do not provide dedicated code perturbation operations or decision algorithms; as a result, they struggle to generate effective adversarial examples, and the validity of the generated examples depends on manual inspection. To address these problems, this paper proposes a reinforcement-learning-based adversarial attack method for vulnerability detection models. The method first designs a set of semantics-constrained, vulnerability-preserving code perturbation operations as the perturbation set; next, taking a vulnerable code sample as input, a reinforcement learning model selects a concrete sequence of perturbation operations; finally, potential perturbation locations are found according to the node types of the sample's syntax tree, and the code is transformed to produce the adversarial example. Based on SARD and NVD, two experimental datasets with a total of 14,278 code samples were constructed, on which four vulnerability detection models with different characteristics were trained as attack targets. For each target model, a reinforcement learning network was trained to carry out the attack. The results show that the proposed attack reduces the models' recall by 74.34% and achieves an attack success rate of 96.71%, an average improvement of 68.76% over the baseline methods. The experiments demonstrate that current vulnerability detection models are at risk of being attacked and that further research is needed to improve their robustness.
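The attack loop can be sketched roughly as follows. The stub detector, the toy string-level perturbation operators, the reward, and the tabular Q-learning agent are all stand-ins; the paper uses trained vulnerability detection models, syntax-tree-guided transformations, and its own RL design.

```python
# A purely illustrative sketch of the attack loop: an agent chooses among
# semantics-preserving perturbation operators until a stub "detector" stops
# flagging the sample as vulnerable.
import random

PERTURBATIONS = {
    0: lambda c: c.replace("buf", "tmp_buf"),             # identifier renaming
    1: lambda c: c + "\nint unused_var = 0;",             # dead-code insertion
    2: lambda c: c.replace("i++", "i = i + 1"),           # equivalent rewrite
}

def detector_score(code):
    """Stub for the target model: probability that `code` is flagged vulnerable."""
    return max(0.0, 0.9 - 0.2 * code.count("tmp_buf") - 0.1 * code.count("unused_var"))

def attack(code, episodes=200, max_steps=5, eps=0.2, alpha=0.5, gamma=0.9):
    q = [[0.0] * len(PERTURBATIONS) for _ in range(max_steps)]  # Q[step][action]
    best = code
    for _ in range(episodes):
        cur = code
        for step in range(max_steps):
            a = (random.randrange(len(PERTURBATIONS)) if random.random() < eps
                 else max(range(len(PERTURBATIONS)), key=lambda i: q[step][i]))
            nxt = PERTURBATIONS[a](cur)
            reward = detector_score(cur) - detector_score(nxt)   # drop in detection score
            future = max(q[step + 1]) if step + 1 < max_steps else 0.0
            q[step][a] += alpha * (reward + gamma * future - q[step][a])
            cur = nxt
            if detector_score(cur) < 0.5:                        # attack succeeded
                best = cur
                break
    return best

print(attack("char buf[8];\nfor (int i = 0; i < 8; i++) buf[i] = 0;"))
```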

20.
A multiscale algorithm for computing the morphological gradient of an image
The watershed transform is a morphological operator that is well suited to image segmentation; however, the performance of watershed-based segmentation methods depends to a large extent on the algorithm used to compute the gradient of the image to be segmented. To make the watershed transform effective, a multiscale algorithm for computing the morphological gradient of an image is proposed, which handles both step edges and blurred edges well. In addition, an algorithm is proposed for removing local valleys caused by noise or quantization error. Experimental results show that when an image is processed with the proposed algorithms before the watershed transform, a meaningful segmentation is produced even without region merging, which greatly reduces the computational burden.
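A common construction of such a multiscale morphological gradient is sketched below. The structuring-element sizes, the per-scale erosion, and the averaging are a standard formulation assumed for illustration and may differ from the paper's operator; the valley-removal step is not shown.

```python
# A minimal sketch of a multiscale morphological gradient: at each scale i, the
# gradient (dilation - erosion with a (2i+1)x(2i+1) structuring element) is eroded
# with the next-smaller element, and the per-scale results are averaged.
import numpy as np
from scipy import ndimage

def multiscale_morph_gradient(img, n_scales=3):
    img = img.astype(float)
    acc = np.zeros_like(img)
    for i in range(1, n_scales + 1):
        size = 2 * i + 1
        grad = (ndimage.grey_dilation(img, size=(size, size))
                - ndimage.grey_erosion(img, size=(size, size)))
        # erode with the next-smaller structuring element to thin thick edge responses
        acc += ndimage.grey_erosion(grad, size=(2 * i - 1, 2 * i - 1))
    return acc / n_scales

# usage with a synthetic blurred step-edge image:
img = np.zeros((64, 64)); img[:, 32:] = 255.0
grad = multiscale_morph_gradient(ndimage.gaussian_filter(img, 2.0))
print(grad.shape, round(float(grad.max()), 1))
```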
