This paper is concerned with methods for refining specifications written in a combination of Object-Z and CSP. Such a combination has proved to be a suitable vehicle for specifying complex systems that involve both state and behaviour, and several proposals exist for integrating these two languages. The basis of the integration in this paper is a semantics that gives Object-Z classes meanings identical to those of CSP processes. This allows classes specified in Object-Z to be combined using CSP operators. It has been shown that this semantic model allows state-based refinement relations to be used on the Object-Z components of an integrated Object-Z/CSP specification. However, the current refinement methodology does not allow the structure of a specification to be changed during refinement, whereas a full methodology would, for example, allow concurrency to be introduced during the development life-cycle. In this paper, we tackle these concerns and discuss refinements of specifications written using Object-Z and CSP in which the structure of the specification changes as part of the refinement. In particular, we develop a set of structural simulation rules which allow single components to be refined to more complex specifications involving CSP operators. The soundness of these rules is verified against the common semantic model, and they are illustrated with a number of examples.
It is commonly asserted that formal tools are either too labor-intensive or completely impractical for industrial-size problems. This paper describes two formal verification tools used within Motorola, Versys2 and CBV, that challenge this assertion. The two tools are being used in current design verification flows and have shown that it is possible to integrate formal tools seamlessly into existing design flows.
This paper presents a novel method for measuring the magnetizing inductance of an induction machine. The approach uses a static DC excitation technique which can be employed whenever the neutral of the machine is accessible. The proposed method measures only the magnetizing inductance and not the self-inductance, which normally includes the effect of the stator leakage inductance. Because the test uses DC excitation, the iron losses in the motor are considerably reduced and influence the measurement far less than in the traditional 60-Hz no-load test. By using the proposed method to measure only the magnetizing inductance, the stator leakage inductance can later be determined individually by performing a separate no-load test. Test results using the method are compared with theoretical values and confirm its feasibility.
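The quantity being isolated can be made explicit with the standard decomposition (a hedged sketch using generic symbols, not notation taken from the paper): the stator self-inductance measured by conventional tests includes the stator leakage term,

$$ L_s = L_{ls} + L_m , $$

where $L_s$ is the stator self-inductance, $L_{ls}$ the stator leakage inductance, and $L_m$ the magnetizing inductance. A test that yields $L_m$ directly therefore allows the leakage term to be recovered afterwards as $L_{ls} = L_s - L_m$, once $L_s$ is known from a separate no-load measurement.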
Most work on adaptive agents has used a simple, single-layer architecture. However, most agent architectures support three levels of knowledge and control: a reflex level for reactive responses, a deliberate level for goal-driven behavior, and a reflective level for deliberate planning and problem decomposition. In this paper we explore agents implemented in Soar that behave and learn at the deliberate and reflective levels. These levels enhance not only behavior but also adaptation. The agents use a combination of analytic and empirical learning, drawing on a variety of knowledge sources to adapt to their environment. We hypothesize that complete, adaptive agents must be able to learn across all three levels.
Symmetric multiprocessor (SMP) systems are increasingly common, not only as high-throughput servers, but as a vehicle for executing a single application in parallel in order to reduce its execution latency. This article presents Pedigree, a compilation tool that employs a new partitioning heuristic based on the program dependence graph (PDG). Pedigree creates overlapping, potentially interdependent threads, each executing on a subset of the SMP processors that matches the thread's available parallelism. A unified framework is used to build threads from procedures, loop nests, loop iterations, and smaller constructs. Pedigree does not require any parallel language support; it is a post-compilation tool that reads in object code. The SDIO Signal and Data Processing Benchmark Suite has been selected as an example of real-time, latency-sensitive code. Its coarse-grained data-flow parallelism is naturally exploited by Pedigree to achieve speedups of 1.63×/2.13× (mean/max) and 1.71×/2.41× on two and four processors, respectively. This is roughly a 20% improvement over existing techniques that exploit only data parallelism. By exploiting the unidirectional flow of data for coarse-grained pipelining, the synchronization overhead is typically limited to less than 6% for a synchronization latency of 100 cycles, and less than 2% for 10 cycles.
This research was supported by ONR contract numbers N00014-91-J-1518 and N00014-96-1-0347. We would like to thank the Pittsburgh Supercomputing Center for use of their Alpha systems.
The early history of applying electronic computers to the task of translating natural languages is chronicled, from the first suggestions by Warren Weaver in March 1947 to the first demonstration of a working, if limited, program in January 1954.
The achievement of design and development solutions can be enhanced by consulting appropriate guidelines. Although a wide range of guidelines exists, their full benefits are frequently not realized by guideline-users because of the costs associated with their use. Guideline-users are people who use guidelines to support purposeful activity. The major cost drivers for guideline-users are the processes of 'selecting' appropriate guidelines and subsequently 'translating' them to an applied setting, both of which can be prohibitively expensive. A strategy for producing guidelines is proposed to minimize these costs, illustrated by a case study concerned with the development of guidelines to assist in the production of management and administrative tools supporting project managers concerned with Human Factors Acceptance Testing. A process to support the assessment of guidelines is also proposed.
Databases are a critical element of virtually all conventional and e-business applications. How does an organization know whether the information derived from a database is any good? To ensure a quality database application, should the emphasis during model development be on the application of quality assurance metrics (designing it right)? A large number of database applications fail or are unusable. A quality process does not necessarily lead to a usable database product. A database application can also be 'well-formed', with high data quality, yet lack semantic or cognitive fidelity (the right design). This paper expands on the growing body of literature in the area of data quality by proposing additions to a hierarchy of database quality dimensions that includes model and behavioral factors in addition to process and data factors.
The performance of conjugate gradient (CG) algorithms for the solution of the system of linear equations that results from finite-differencing the neutron diffusion equation was analyzed on SIMD, MIMD, and mixed-mode parallel machines. A block preconditioner based on the incomplete Cholesky factorization was used to accelerate the conjugate gradient search. The issues involved in mapping both the unpreconditioned and preconditioned conjugate gradient algorithms onto the mixed-mode PASM prototype, the SIMD MasPar MP-1, and the MIMD Intel Paragon XP/S are discussed. On PASM, the mixed-mode implementation outperformed either SIMD or MIMD alone. Theoretical performance predictions were analyzed and compared with the experimental results on the MasPar MP-1 and the Paragon XP/S. Other issues addressed include the impact of the number of processors on execution time, the effect of the interprocessor communication network on performance, and the relationship between the number of processors and the quality of the preconditioning. Application studies such as this are necessary in the development of software tools for mapping algorithms onto either a single parallel machine or a heterogeneous suite of parallel machines.
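The preconditioned conjugate gradient iteration at the heart of this study can be sketched in a few lines. The sketch below is a minimal, self-contained illustration on a 1-D diffusion-style tridiagonal system; it substitutes a simple Jacobi (diagonal) preconditioner for the paper's block incomplete-Cholesky preconditioner, and all names (`matvec`, `pcg`) are illustrative, not taken from the paper.

```python
# Hedged sketch: preconditioned conjugate gradient (PCG) on a symmetric
# positive-definite tridiagonal system of the kind produced by
# finite-differencing a 1-D diffusion operator. Pure Python, no deps.
# The preconditioner here is Jacobi (diagonal), an assumption for brevity;
# the paper uses a block incomplete-Cholesky preconditioner.

def matvec(diag, off, x):
    """y = A x for symmetric tridiagonal A with main diagonal `diag`
    and off-diagonal `off`."""
    n = len(x)
    y = [diag[i] * x[i] for i in range(n)]
    for i in range(n - 1):
        y[i] += off[i] * x[i + 1]
        y[i + 1] += off[i] * x[i]
    return y

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def pcg(diag, off, b, tol=1e-10, max_iter=200):
    """Solve A x = b with Jacobi-preconditioned conjugate gradients."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual r = b - A*0
    z = [r[i] / diag[i] for i in range(n)]     # preconditioner solve M z = r
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(diag, off, p)
        alpha = rz / dot(p, Ap)
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [r[i] / diag[i] for i in range(n)]
        rz_new = dot(r, z)
        beta = rz_new / rz
        rz = rz_new
        p = [z[i] + beta * p[i] for i in range(n)]
    return x

# 1-D diffusion stencil: A = tridiag(-1, 2, -1), with a unit source term.
n = 8
diag = [2.0] * n
off = [-1.0] * (n - 1)
b = [1.0] * n
x = pcg(diag, off, b)
residual = [bi - yi for bi, yi in zip(b, matvec(diag, off, x))]
print(max(abs(ri) for ri in residual))  # residual norm near machine precision
```

The same structure carries over to the parallel setting discussed above: the matrix-vector product and the preconditioner solve are the two kernels whose mapping onto SIMD or MIMD processors dominates performance.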