20 similar documents found.
1.
Lalita Jategaonkar Jagadeesan, Carlos Puchol, James E. Von Olnhausen 《Formal Methods in System Design》1996,8(2):123-151
Esterel is a formally defined language designed for programming reactive systems, namely, those that maintain a permanent interaction with their environment. The AT&T 5ESS® telephone switching system is an example of a reactive system. We describe an implementation in Esterel of one feature of a 5ESS switch; this implementation has been tested in the 5ESS switch simulator. Furthermore, it has been formally verified that this implementation satisfies some safety properties stated by 5ESS software development. Our experience indicates that Esterel is suitable for programming industrial-strength reactive systems, and affords significant advantages in software development over more traditional programming languages used in industrial settings. An earlier version of this paper appeared in the Proceedings of the Workshop on Industrial-Strength Formal Specification Techniques, Boca Raton, Florida, 1995. The author is currently supported by a Fulbright fellowship from Spain's Ministry of Science and Education. The work described here was performed while the author was visiting AT&T Bell Laboratories.
2.
Christian Bessiere, Emmanuel Hebrard, Brahim Hnich, Zeynep Kiziltan, Toby Walsh 《Constraints》2006,11(4):271-293
The NValue constraint counts the number of different values assigned to a vector of variables. Propagating generalized arc consistency on this constraint is NP-hard. We show that computing even the lower bound on the number of values is NP-hard. We therefore study different approximation heuristics for this problem. We introduce three new methods for computing a lower bound on the number of values. The first two are based on the maximum independent set problem and are incomparable to a previous approach based on intervals. The last method is a linear relaxation of the problem. This gives a tighter lower bound than all other methods, but at a greater asymptotic cost.
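To make the interval reasoning this abstract compares against concrete, the sketch below (a hypothetical illustration, not the authors' code) computes a simple lower bound on NValue from the interval hulls of the variable domains using the classic greedy stabbing argument.

```python
def nvalue_interval_lower_bound(domains):
    """Lower bound on NValue (the number of distinct values needed to give
    every variable some value from its domain), using only the interval
    hull [min(D), max(D)] of each domain D.

    For interval domains the greedy sweep below is exact: it finds the
    minimum number of points stabbing all intervals.  For arbitrary
    domains it is only a lower bound, because the hull over-approximates D.
    """
    hulls = sorted(((min(d), max(d)) for d in domains), key=lambda iv: iv[1])
    bound, last_point = 0, None
    for lo, hi in hulls:
        if last_point is None or lo > last_point:
            # No chosen value stabs this interval yet; pick its right
            # endpoint, which covers as many later intervals as possible.
            last_point = hi
            bound += 1
    return bound

# Domains {1..3}, {2..5}, {7..9}: at least two distinct values are required.
print(nvalue_interval_lower_bound([{1, 2, 3}, {2, 3, 4, 5}, {7, 8, 9}]))  # 2
```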
3.
Eftychios G. Christoforou, Nikolaos V. Tsekos, Alpay Özcan 《Journal of Intelligent and Robotic Systems》2006,47(2):175-196
The effective integration of robotics together with magnetic resonance (MR) technology is expected to facilitate the real-time guidance of various diagnostic and therapeutic interventions. Specially designed robotic manipulators are required for this purpose, the development of which is a challenging task given the strong magnetic fields and the space limitations that characterize the MR scanning environment. A prototype MR-compatible manipulator is presented, designed to operate inside cylindrical MR scanners. It was developed for the study of minimally invasive diagnostic and therapeutic interventions in the abdominal and thoracic area with real-time MR image guidance. Initial tests were performed inside a high-field clinical MR scanner and included MR-compatibility tests and phantom studies on image-guided targeting.
4.
Pascal Vander-Swalmen, Gilles Dequen, Michaël Krajecki 《International journal of parallel programming》2009,37(3):324-342
Progress over the last decade has made the Satisfiability Problem (SAT) a powerful and competitive practical approach for solving a wide range of industrial and academic problems. Thanks to this progress, the size and difficulty of SAT instances have grown significantly. Among recent solvers, a few are parallel, and most of those use the message-passing paradigm. In a previous work by Vander-Swalmen et al. (IWOMP, 146–157, 2008), we presented a fine-grain parallel SAT solver designed for shared memory using OpenMP and named MTSS, for Multi Threaded Sat Solver. MTSS extends the "guiding path" notion and uses a collaborative approach in which a rich thread is in charge of the search-tree evaluation and a set of poor threads yields logical or heuristic information to simplify the rich task. In this paper, we extend the poor-thread abilities of MTSS and present extensive comparative results on random 3-SAT problems. These new experiments show that fine-grained techniques associated with poor tasks within the framework of MTSS can achieve very interesting speedups on multi-core processors.
5.
6.
Much work has been done to clarify the notion of metamodelling, and new ideas, such as strict metamodelling, the distinction between ontological and linguistic instantiation, unified modelling elements and deep instantiation, have been introduced. However, many of these ideas have not yet been fully developed and integrated into modelling languages with (concrete) syntax, rigorous semantics and tool support. Consequently, applying these ideas in practice and reasoning about their meaning is difficult, if not impossible. In this paper, we strive to add semantic rigour and conceptual clarity to metamodelling through the introduction of Nivel, a novel metamodelling language capable of expressing models spanning an arbitrary number of levels. Nivel is based on a core set of conceptual modelling concepts: class, generalisation, instantiation, attribute, value and association. Nivel adheres to a form of strict metamodelling and supports deep instantiation of classes, associations and attributes. A formal semantics is given for Nivel by translation to the weight constraint rule language (WCRL), which enables decidable, automated reasoning about Nivel. The modelling facilities of Nivel and the utility of the formalisation are demonstrated in a case study on feature modelling.
Timo Asikainen
7.
Michael R. Olsem 《Journal of Systems Integration》1995,5(4):337-348
The U.S. Air Force (USAF) created the Software Technology Support Center (STSC) at Hill AFB, Utah to enable Air Force software engineering organizations to identify, evaluate, and adopt technologies that will improve their software production. As part of this mission, the STSC collects detailed information about software tools and solicits frank assessments from tool users. Then, as a free service, the STSC provides a customized tools list to anyone within the United States, based on a user's requirements (e.g. hardware platform, operating system, etc.). This article will discuss the STSC's functional goal to act as a central repository of software information with an emphasis on the software reengineering technology domain.
8.
We present an algorithm that predicts musical genre and artist from an audio waveform. Our method uses the ensemble learner AdaBoost to select from a set of audio features that have been extracted from segmented audio and then aggregated. Our classifier proved to be the most effective method for genre classification at the recent MIREX 2005 international contests in music information extraction, and the second-best method for recognizing artists. This paper describes our method in detail, from feature extraction to song classification, and presents an evaluation of our method on three genre databases and two artist-recognition databases. Furthermore, we present evidence collected from a variety of popular features and classifiers that the technique of classifying features aggregated over segments of audio is better than classifying either entire songs or individual short-timescale features.
Editor: Gerhard Widmer
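The general recipe described in the abstract, aggregating per-segment features into one vector per song and then boosting on those vectors, can be sketched as follows. The scikit-learn AdaBoost classifier and the random stand-in features are assumptions made purely for illustration; this is not the authors' MIREX system or feature set.

```python
# Sketch of "classify aggregated segment features" with AdaBoost.
# Hypothetical stand-in features: in the paper these would be audio
# descriptors computed per segment; random numbers are used here so the
# script is self-contained (accuracy on this synthetic data is at chance).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_songs, n_segments, n_features, n_genres = 200, 30, 12, 4

# Per-segment features for each song, plus a genre label per song.
segment_feats = rng.normal(size=(n_songs, n_segments, n_features))
labels = rng.integers(0, n_genres, size=n_songs)

# Aggregate each song's segment features into one fixed-length vector
# (mean and standard deviation over segments), then boost on those.
aggregated = np.concatenate(
    [segment_feats.mean(axis=1), segment_feats.std(axis=1)], axis=1
)

X_train, X_test, y_train, y_test = train_test_split(
    aggregated, labels, test_size=0.25, random_state=0
)
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```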
9.
Víctor Braberman, Alfredo Olivero, Fernando Schapachnik 《International Journal on Software Tools for Technology Transfer (STTT)》2005,7(1):4-18
In this work we present Zeus, a distributed timed model checker that evolves from the TCTL model checker Kronos [13] and that currently can handle backwards computation of reachability properties [2] over timed automata [3]. Zeus was developed following a software architecture-centric approach. Its conceptual architecture was conceived to be sufficiently modular to house several features such as a priori graph partitioning, synchronous and asynchronous computation, communication piggybacking, delayed messaging, and dead-time utilization. Surprisingly enough, early experiments pinpointed the difficulties of getting speedups with the asynchronous versions and showed interesting results for the synchronous counterpart, despite it being intuitively less attractive.
10.
The KeY tool
Wolfgang Ahrendt, Thomas Baar, Bernhard Beckert, Richard Bubel, Martin Giese, Reiner Hähnle, Wolfram Menzel, Wojciech Mostowski, Andreas Roth, Steffen Schlager, Peter H. Schmitt 《Software and Systems Modeling》2005,4(1):32-54
KeY is a tool that provides facilities for formal specification and verification of programs within a commercial platform for UML-based software development. Using the KeY tool, formal methods and object-oriented development techniques are applied in an integrated manner. Formal specification is performed using the Object Constraint Language (OCL), which is part of the UML standard. KeY provides support for the authoring and formal analysis of OCL constraints. The target language of KeY-based development is Java Card, a proper subset of Java for smart card applications and embedded systems. KeY uses a dynamic logic for Java Card (Java Card DL) to express proof obligations, and provides a state-of-the-art theorem prover for interactive and automated verification. Apart from its integration into UML-based software development, a characteristic feature of KeY is that formal specification and verification can be introduced incrementally.
11.
12.
A. J. Guttmann 《International journal of parallel programming》1976,5(2):111-122
Various approaches to the problem of programming recursively defined functions in Fortran are discussed. The only acceptable method in ASA Fortran is shown to be most easily programmed using a stack containing not addresses, but function arguments. The stack is built up and, when full, the function is constructed from the stack elements. A sequence of examples of increasing complexity is given to demonstrate the technique, including one in which a queue is found to be more appropriate than a stack. The emphasis throughout is on the simplicity of the technique.
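The technique described here, a stack of function arguments rather than return addresses that is built up first and then consumed to construct the result, can be sketched in a few lines; Python stands in for the Fortran of the paper.

```python
def factorial_via_argument_stack(n):
    """Evaluate n! without language-level recursion: first push the chain
    of arguments n, n-1, ..., 1 onto an explicit stack, then construct the
    result from the stacked arguments (the scheme described in the paper,
    shown here in Python rather than Fortran)."""
    stack = []
    while n > 1:              # build the stack of arguments
        stack.append(n)
        n -= 1
    result = 1
    while stack:              # unwind: combine the stacked arguments
        result *= stack.pop()
    return result

print(factorial_via_argument_stack(10))  # 3628800
```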
13.
Two base algorithms are known for reachability verification over timed automata. They are called forward and backwards, and traverse the automata edges using either successors or predecessors. Both usually work with a data structure called Difference Bound Matrices (DBMs). Although forward is better suited for on-the-fly construction of the model, the one known as backwards provides the basis for the verification of arbitrary formulae of the TCTL logic and, more importantly, for controller synthesis. Zeus is a distributed model checker for timed automata that uses the backwards algorithm. It works by assigning each automaton location to only one processor. This design choice seems the only reasonable way to deal with some complex operations involving many DBMs in order to avoid huge overheads due to distribution. This article explores the limitations of Zeus-like approaches for the distribution of timed model checkers. Our findings justify why close-to-linear speedups are so difficult, and sometimes impossible, to achieve in the general case. Nevertheless, we present mechanisms based on the way model checking is usually applied. Among others, these include model-topology-aware partitioning and on-the-fly workload redistribution. Combined, they have a positive impact on the speedups obtained.
F. Schapachnik
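As background for the DBM-based algorithms discussed in entries 9 and 13, here is a minimal sketch of a Difference Bound Matrix with canonicalisation and an emptiness test. It illustrates the data structure only and is not Zeus code; the strictness bits carried by real DBMs are omitted for brevity.

```python
import itertools

def canonical(dbm):
    """Tighten a DBM in place with Floyd-Warshall so that dbm[i][j] is the
    tightest derivable bound on x_i - x_j.  Index 0 is the reference clock
    fixed at 0; unconstrained entries would be float('inf').  (Real DBMs
    also distinguish strict from non-strict bounds; omitted here.)"""
    n = len(dbm)
    for k, i, j in itertools.product(range(n), repeat=3):
        if dbm[i][k] + dbm[k][j] < dbm[i][j]:
            dbm[i][j] = dbm[i][k] + dbm[k][j]
    return dbm

def is_empty(dbm):
    """A zone is empty iff canonicalisation produces a negative diagonal."""
    return any(dbm[i][i] < 0 for i in range(len(dbm)))

# Zone over one clock x (indices: 0 = reference, 1 = x) with 2 <= x <= 5:
#   x - 0 <= 5   and   0 - x <= -2
zone = [[0, -2],
        [5,  0]]
canonical(zone)
print(zone, "empty:", is_empty(zone))
```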
14.
This paper compares two formal methods, B and eb3, for specifying information systems. These two methods are chosen as examples of the state-based paradigm and the event-based paradigm, respectively. The paper considers four viewpoints: functional behavior expression, validation, verification, and evolution. Issues in expressing event ordering constraints, data integrity constraints, and modularity are thereby considered. A simple case study is used to illustrate the comparison, namely, a library management system. Two equivalent specifications are presented using each method. The paper concludes that B and eb3 are complementary. The former is better at expressing complex ordering and static data integrity constraints, whereas the latter provides a simpler, modular, explicit representation of dynamic constraints that are closer to the user's point of view, while providing loosely coupled definitions of data attributes. The generality of these results from the perspective of the state-based and event-based paradigms is discussed.
15.
Michael Koch 《Computer Supported Cooperative Work (CSCW)》1994,3(3-4):359-378
The collaborative editing of documents is a very common task nowadays. Writing groups are often distributed over many locations because of the globalization of organizations and the increasing interdisciplinarity of tasks. Since many writers already use computers for their jobs, providing computer support for the collaborative writing process has been identified as an important goal. Numerous tools for computer supported collaborative writing have already emerged but in most cases have not come into widespread usage. In this article the requirements of users for a collaborative editor are analyzed. Providing as much flexibility as possible to the users is identified as a basic need. According to the requirements summary, a model for a group editing environment is presented. The model covers cooperative work in local and wide area networks using synchronous and asynchronous cooperation. Finally, an application of the model is presented in the form of the multi-user editing environment Iris.
16.
Ido Leichter, Michael Lindenbaum, Ehud Rivlin 《International Journal of Computer Vision》2006,67(3):343-363
Over the past few years researchers have been investigating the enhancement of visual tracking performance by devising trackers that simultaneously make use of several different features. In this paper we investigate the combination of synchronous visual trackers that use different features while treating the trackers as "black boxes". That is, instead of fusing the usage of the different types of data as has been performed in previous work, the combination here is allowed to use only the trackers' output estimates, which may be modified before their propagation to the next time step. We propose a probabilistic framework for combining multiple synchronous trackers, where each separate tracker outputs a probability density function of the tracked state, sequentially for each image. The trackers may output either an explicit probability density function, or a sample-set of it via Condensation. Unlike previous tracker combinations, the proposed framework is fairly general and allows the combination of any set of trackers of this kind, even in different state-spaces of different dimensionality, under a few reasonable assumptions. The combination may consist of different trackers that track a common object, as well as trackers that track separate, albeit related, objects, thus improving the tracking performance of each object. The benefits of merely using the final estimates of the separate trackers in the combination are twofold. Firstly, the framework for the combination is fairly general and may be easily used from a software standpoint. Secondly, the combination may be performed in a distributed setting, where each separate tracker runs on a different site and uses different data, while avoiding the need to share the data. The suggested framework was successfully tested using various state-spaces and datasets, demonstrating that fusing the trackers' final distribution estimates may indeed be applicable.
First online version published in October 2005.
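A toy instance of combining two black-box trackers' output densities is sketched below, assuming both report Gaussian estimates in the same state space. It is only meant to make the idea concrete and is much narrower than the paper's framework, which also handles sample-set outputs and different state spaces.

```python
import numpy as np

def fuse_gaussian_estimates(mean_a, cov_a, mean_b, cov_b):
    """Fuse two trackers' Gaussian state estimates by taking the product of
    the two densities (information-filter form).  Each tracker is treated
    as a black box that only reports (mean, covariance); this is a toy
    illustration, not the combination framework of the paper."""
    info_a, info_b = np.linalg.inv(cov_a), np.linalg.inv(cov_b)
    fused_cov = np.linalg.inv(info_a + info_b)
    fused_mean = fused_cov @ (info_a @ mean_a + info_b @ mean_b)
    return fused_mean, fused_cov

# Two trackers estimating a 2-D position, one noisier than the other.
m, C = fuse_gaussian_estimates(
    np.array([1.0, 2.0]), np.diag([0.5, 0.5]),
    np.array([1.4, 1.8]), np.diag([2.0, 2.0]),
)
print("fused mean:", m)
print("fused covariance:", C)
```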
17.
Kenneth J. Turner 《International Journal on Software Tools for Technology Transfer (STTT)》2005,7(4):361-375
System specification with Lotos (Language Of Temporal Ordering Specification) is briefly introduced. To make test generation practicable, specifications are annotated with event constraints using PCL (Parameter Constraint Language) as a means of stating test purposes. Automated test generation can then use the principle of input-output conformance to check whether an implementation agrees with its specification. Test suites are generated by a transition tour that either visits every transition at least once (for infinite behaviour) or follows every path (for finite behaviour). The approach is applied to a case study in which tests are generated for radiotherapy accelerators used in cancer treatment. A typical specification and set of test purposes yields 256 test cases that can be executed manually or automatically. The goal is to determine situations in which an accelerator does not behave in conformity with its specification.
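The "visit every transition at least once" style of suite generation can be illustrated on an explicit labelled transition system. The sketch below is a generic greedy transition tour over an invented three-state machine, not the Lotos/PCL tooling of the paper.

```python
from collections import deque

def transition_tour(initial, transitions):
    """Produce a single trace that takes every transition at least once.
    `transitions` is a list of (source, label, target) triples.  Greedy
    strategy: take an unvisited outgoing transition when one exists,
    otherwise walk a shortest path to the nearest state that still has one."""
    out = {}
    for s, a, t in transitions:
        out.setdefault(s, []).append((a, t))
    unvisited = set(transitions)
    state, trace = initial, []

    def path_to_pending(start):
        # BFS over the transition graph to a state with an unvisited edge.
        seen, queue = {start}, deque([(start, [])])
        while queue:
            s, path = queue.popleft()
            if any((s, a, t) in unvisited for a, t in out.get(s, [])):
                return path
            for a, t in out.get(s, []):
                if t not in seen:
                    seen.add(t)
                    queue.append((t, path + [(s, a, t)]))
        return None

    while unvisited:
        pending = [(a, t) for a, t in out.get(state, [])
                   if (state, a, t) in unvisited]
        if pending:
            a, t = pending[0]
            unvisited.discard((state, a, t))
            trace.append(a)
            state = t
        else:
            detour = path_to_pending(state)
            if detour is None:          # remaining transitions unreachable
                break
            for s, a, t in detour:
                unvisited.discard((s, a, t))
                trace.append(a)
                state = t
    return trace

# Toy 3-state machine (purely illustrative, not from the case study).
trs = [("idle", "start", "armed"), ("armed", "fire", "beam_on"),
       ("beam_on", "stop", "idle"), ("armed", "abort", "idle")]
print(transition_tour("idle", trs))
```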
18.
Huafeng Yu, Abdoulaye Gamatié, Éric Rutten, Jean-Luc Dekeyser 《Innovations in Systems and Software Engineering》2008,4(3):215-222
In this paper, we use the UML MARTE profile to model high-performance embedded systems (HPES) in the GASPARD2 framework. We address the design correctness issue on the UML model by using the formal validation tools associated with synchronous languages, in particular the SIGALI model checker. This modeling and validation approach benefits from the advantages of UML as a standard, and from the number of validation tools built around synchronous languages. In our context, model transformations act as a bridge between UML and the chosen validation technologies. They are implemented according to a model-driven engineering approach. The modeling and validation are illustrated using the multimedia functionality of a new-generation cellular phone.
19.
We describe two algorithms, BiBoost (Bipartite Boosting) and MultBoost (Multiparty Boosting), that allow two or more participants to construct a boosting classifier without explicitly sharing their data sets. We analyze both the computational and the security aspects of the algorithms. The algorithms inherit the excellent generalization performance of AdaBoost. Experiments indicate that the algorithms are better than AdaBoost executed separately by the participants, and that, independently of the number of participants, they perform close to AdaBoost executed using the entire data set.
Responsible Editor: Charu Aggarwal.
20.
Julius T. Tou 《International journal of parallel programming》1979,8(6):541-547
A new technique for automatic clustering of multivariate data is proposed. In this approach a performance index for determining optimal clusters is introduced. This performance index is expressed in terms of the ratio of the minimum interset distance to the maximum intraset distance. The optimal clusters are found when the performance index reaches a global maximum. If there are alternative groupings with an equal number of clusters, the one with the largest performance index is chosen.
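A sketch of the performance index as described here, the ratio of the minimum interset (between-cluster) distance to the maximum intraset (within-cluster) distance, is given below. Closest-pair and cluster-diameter distances are assumed as the interset/intraset definitions, which may differ from the paper's exact ones, and the search over candidate clusterings is left out.

```python
import itertools
import numpy as np

def performance_index(clusters):
    """Ratio of the minimum distance between points in different clusters
    to the maximum distance between points in the same cluster.  Larger is
    better; the abstract's criterion picks the clustering where this ratio
    reaches its global maximum."""
    clusters = [np.asarray(c, dtype=float) for c in clusters]
    inter = min(
        np.linalg.norm(p - q)
        for a, b in itertools.combinations(clusters, 2)
        for p in a for q in b
    )
    intra = max(
        (np.linalg.norm(p - q)
         for c in clusters for p, q in itertools.combinations(c, 2)),
        default=0.0,
    )
    return inter / intra if intra > 0 else float("inf")

# Two well-separated 2-D clusters score high; an interleaved split scores low.
good = [[(0, 0), (0, 1), (1, 0)], [(10, 10), (10, 11), (11, 10)]]
bad = [[(0, 0), (10, 10)], [(0, 1), (10, 11)]]
print(performance_index(good), performance_index(bad))
```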