Similar literature
20 similar documents found (search time: 31 ms)
1.
We lay out the design of HasCasl, a higher-order extension of the algebraic specification language Casl. HasCasl serves both as a wide-spectrum language for the rigorous specification and development of software, in particular but not exclusively in modern functional programming languages, and as an expressive standard language for higher-order logic. Distinctive features of HasCasl include partial higher-order functions, higher-order subtyping, shallow polymorphism, and an extensive type-class mechanism. Moreover, HasCasl provides dedicated specification support for monad-based functional-imperative programming with generic side effects, including a monad-based generic Hoare logic.

2.
The design and manufacturing advisor (DMA) is a generic architecture for an expert system which assists in the process of product design. It integrates the various parts of the production environment which affect the final product, such as fabrication, assembly, inspection and inventory. The design process can be partitioned into four stages: problem specification, feature specification, testing and generation. DMA facilitates the development of tractable designs, defined by the intersection of three spaces relating to specifications, features and producibility. The system consists of a domain-independent module and a domain-dependent module, each of which contains subcomponents; this modular structure facilitates system modification, expansion and portability. The DMA has been validated by specializing the architecture to the design of gas-turbine disks for jet engines.

3.
The paper describes an adaptation of genetic algorithms (GAs) to decomposition-based design of multidisciplinary systems. The coupled multidisciplinary design problem is adaptively decomposed into a number of smaller subproblems, each with fewer design variables, and the design in each subproblem is allowed to proceed in parallel. Fewer design variables allow shorter string lengths to be used in the GA-based optimization of each subproblem, reducing the number of design alternatives to be explored and hence also the number of function evaluations required for convergence. A novel procedure, based on a model of the biological immune system, is proposed to account for interactions between the decomposed subproblems. This procedure also uses the genetic algorithm approach to update, in each subproblem, the design changes of all other subproblems. The design representation scheme is therefore common to both the design optimization step and the procedure that accounts for interaction among the subproblems. The decomposition-based solution of a dual structural-control design problem is used as a test problem for the proposed approach, and its convergence characteristics are compared against those of a nondecomposition-based method.
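To make the coordination pattern concrete, here is a minimal Python sketch (not from the paper) of block-decomposed GA optimization in which each subproblem evolves its own block of design variables while the other blocks are held at the values last produced by their owners; the immune-system-based interaction model itself is not reproduced, and all names (evolve_block, coordinate, obj) are illustrative assumptions.

    # Illustrative sketch only: block-decomposed GA with periodic exchange of
    # block values between subproblems; not the paper's immune-system model.
    import random

    def evolve_block(obj, blocks, k, pop_size=20, gens=10, sigma=0.1):
        """Evolve block k of the design, holding all other blocks fixed."""
        def fitness(block):
            trial = list(blocks)
            trial[k] = block
            return obj([x for b in trial for x in b])      # evaluate the full design
        pop = [[x + random.gauss(0, sigma) for x in blocks[k]] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness)                          # minimization
            parents = pop[:pop_size // 2]
            children = [[x + random.gauss(0, sigma) for x in random.choice(parents)]
                        for _ in range(pop_size - len(parents))]
            pop = parents + children
        return min(pop, key=fitness)

    def coordinate(obj, blocks, cycles=5):
        """Alternate the subproblem GAs and exchange the updated blocks."""
        for _ in range(cycles):
            blocks = [evolve_block(obj, blocks, k) for k in range(len(blocks))]
        return blocks

    # Example: two subproblems sharing a simple quadratic objective.
    best = coordinate(lambda x: sum(v * v for v in x), [[1.0, 2.0], [3.0]])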

4.
We consider the problem of approximating an arbitrary generic surface with a given set of simple surface primitives. In contrast to previous approaches based on variational surface approximation, which are primarily concerned with finding an optimal partitioning of the input geometry, we propose to integrate a model selection step into the algorithm in order to also optimize the type of primitive for each proxy. Our method is a joint global optimization of both the partitioning of the input surface and the types and number of shape proxies used. Thus, our method performs an automatic trade-off between representation complexity and approximation error without relying on a user-supplied, predetermined number of shape proxies. In this way, concise surface representations are found that better exploit the full approximative power of the employed primitive types.
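The complexity/error trade-off that the joint optimization balances can be written, in one plausible (hypothetical, not the paper's exact) form, as a single objective over the partition regions R_i and their fitted proxies P_i:

    E(\{R_i\}, \{P_i\}) = \sum_{i=1}^{k} \sum_{x \in R_i} \mathrm{dist}^2(x, P_i) + \alpha \sum_{i=1}^{k} \kappa(P_i)

where dist(x, P_i) is the distance from a surface point x to proxy P_i, \kappa(P_i) is a description cost that depends on the primitive type (plane, sphere, cylinder, ...), and \alpha weights representation complexity against approximation error; minimizing E jointly over the partitioning, the proxy types and the number of proxies k is what removes the need for a user-supplied k.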

5.
A testing-based approach for constructing and refining very high-level software functionality representations such as intentions, natural language assertions, and formal specifications is presented and applied to a standard line-editing problem as an illustration. The approach involves the use of specification-based (black-box) test-case generation strategies, high-level specification formalisms, redundant or parallel development and cross-validation, and a logic programming support environment. Test-case reference sets are used as software functionality representations for the purposes of cross-validating two distinct high-level representations, and identifying ambiguities and omissions in those representations. In fact, we propose the use of successive refinements of such test reference sets as the authoritative specification throughout the software development process. Potential benefits of the approach include improvements in user/designer communication over all life cycle phases, and an increase in the quality of specifications and designs.

6.
7.
The FLaSH (Functional Languages for Synthesising Hardware) system allows a designer to map a high-level functional language, SAFL, and its more expressive extension, SAFL+, into hardware. The system has two phases: first we perform architectural exploration by applying a series of semantics-preserving transformations to SAFL specifications; then the resulting specification is compiled into hardware in a resource-aware manner – that is, we map separate functions to separate hardware functional units (functions which are called multiple times become shared functional units). This article introduces the SAFL language and shows how program transformations on it can explore area-time trade-offs. We then show how the FLaSH compiler compiles SAFL to synchronous hardware and how SAFL transformations can also express hardware/software co-design. As a case study we demonstrate how SAFL transformations allow us to refine a simple specification of a MIPS-style processor into pipelined and superscalar implementations. The superset language SAFL+ (adding process calculi features but retaining many of the design aims) is then described and given semantics both as hardware and as a programming language. Published online: 17 December 2002

8.
The SpecCharts language, which builds on VHDL to meet the unique requirements of system-level specification and design, is described. With an underlying model of behavioral hierarchy, SpecCharts modeling constructs enable designers to capture system specifications simply and precisely. SpecCharts constructs facilitate system-level design tasks by permitting high-level communication, maintaining information, and allowing design modification at an easy-to-comprehend level. The results of system-level design tasks are reflected in a modified SpecCharts, enabling the designer to evaluate the quality of each step. A detailed example demonstrates the use of SpecCharts.

9.
FGSPEC is a wide-spectrum specification language intended to facilitate software specification and the expression of the transformation process from the functional specification, which describes “what to do”, to the corresponding design (operational) specification, which describes “how to do it”. The design emphasizes the coherence of multi-level specification mechanisms, and a tree-structure model is provided which unifies the wide-spectrum specification styles from “what” to “how”.

10.
In this paper we attempt to develop a problem representation technique which enables the decomposition of a problem into subproblems such that their solution in sequence constitutes a strategy for solving the problem. An important issue here is that the subproblems generated should be easier than the main problem. We propose to represent a set of problem states by a statement which is true for all the members of the set. A statement itself is just a set of atomic statements, which are binary predicates on state variables. The statement representing the set of goal states can then be partitioned into subsets, each of which becomes a subgoal of the resulting strategy. The techniques involved in partitioning a goal into its subgoals are presented with examples.
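As a concrete (purely illustrative) reading of this representation, the following Python sketch treats a goal as a set of atomic statements, here (predicate, arguments) pairs, and derives subgoals by partitioning that set; the grouping key is a stand-in for the paper's partitioning techniques, which are not reproduced.

    # Illustrative sketch: a goal as a set of atomic statements and a
    # strategy as an ordered partition of that set into subgoals.
    from itertools import groupby

    goal = {("on", ("A", "B")), ("on", ("B", "C")), ("clear", "A")}

    def partition_goal(goal, key):
        """Split a goal into subgoals by grouping its atomic statements."""
        ordered = sorted(goal, key=key)
        return [set(group) for _, group in groupby(ordered, key=key)]

    # Example: group statements by predicate name, one subgoal per predicate.
    for subgoal in partition_goal(goal, key=lambda atom: atom[0]):
        print("achieve:", subgoal)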

11.
A top-down method is presented for the derivation of algorithms from a formal specification of a problem. This method has been implemented in a system called cypress. The synthesis process involves the top-down decomposition of the initial specification into a hierarchy of specifications for subproblems. Synthesizing programs for each of these subproblems results in the composition of a hierarchically structured program. The initial specification is allowed to be partial in that some or all of the input conditions may be missing. cypress completes the specification and produces a totally correct applicative program. Much of cypress' knowledge comes in the form of ‘design strategies’ for various classes of algorithms. The structure of a class of divide-and-conquer algorithms is explored and provides the basis for several design strategies. Detailed derivations of mergesort and quicksort algorithms are presented.
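For reference, the divide-and-conquer schema that such design strategies instantiate (decompose the input, solve the subproblems recursively, compose the results) is the one visible in an ordinary mergesort; the Python version below is a textbook example, not output of the cypress system.

    # Textbook mergesort, shown only to illustrate the divide-and-conquer
    # schema (decompose -> solve subproblems -> compose); not cypress output.
    def mergesort(xs):
        if len(xs) <= 1:                # primitive case: already sorted
            return xs
        mid = len(xs) // 2
        left = mergesort(xs[:mid])      # decompose and solve recursively
        right = mergesort(xs[mid:])
        return merge(left, right)       # compose the partial results

    def merge(a, b):
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                out.append(a[i]); i += 1
            else:
                out.append(b[j]); j += 1
        return out + a[i:] + b[j:]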

12.
This paper describes the Java Metaheuristics Search framework (JAMES, v1.1): an object-oriented Java framework for discrete optimization using local search algorithms that exploits the generality of such metaheuristics by clearly separating search implementation and application from problem specification. A wide range of generic local searches are provided, including (stochastic) hill climbing, tabu search, variable neighbourhood search and parallel tempering. These can be applied to any user-defined problem by plugging in a custom neighbourhood for the corresponding solution type. Using an automated analysis workflow, the performance of different search algorithms can be compared in order to select an appropriate optimization strategy. Implementations of specific components are included for subset selection, such as a predefined solution type, generic problem definition and several subset neighbourhoods used to modify the set of selected items. Additional components for other types of problems (e.g. permutation problems) are provided through an extensions module which also includes the analysis workflow. In comparison with existing Java metaheuristics frameworks that mainly focus on population-based algorithms, JAMES has a much lower memory footprint and promotes efficient application of local searches by taking full advantage of move-based evaluation. Releases of JAMES are deployed to the Maven Central Repository so that the framework can easily be included as a dependency in other Java applications. The project is fully open source and hosted on GitHub. More information can be found at http://www.jamesframework.org. Copyright © 2016 John Wiley & Sons, Ltd.
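The move-based local-search pattern that the framework generalizes (propose a move on the current solution, evaluate, keep or undo it) can be sketched generically; the Python code below is illustrative only, uses a toy subset-selection problem, and is not the JAMES API (which is Java).

    # Generic move-based hill climbing on a toy subset-selection problem.
    # Illustrative sketch only; not the JAMES (Java) API.
    import random

    class SwapMove:
        """Swap one selected item for one unselected item."""
        def __init__(self, out_item, in_item):
            self.out_item, self.in_item = out_item, in_item
        def apply(self, s):
            s["in"].remove(self.out_item); s["in"].add(self.in_item)
        def undo(self, s):
            s["in"].remove(self.in_item); s["in"].add(self.out_item)

    def swap_neighbourhood(s):
        return SwapMove(random.choice(sorted(s["in"])),
                        random.choice(sorted(s["all"] - s["in"])))

    def hill_climb(solution, neighbourhood, evaluate, steps=1000):
        """Apply random moves, keeping each one only if it improves the score."""
        best = evaluate(solution)
        for _ in range(steps):
            move = neighbourhood(solution)
            move.apply(solution)
            score = evaluate(solution)      # a delta (move-based) evaluation could be used here
            if score < best:                # minimization
                best = score
            else:
                move.undo(solution)
        return solution, best

    # Select 5 of 20 items maximizing total value (negated for minimization).
    values = {i: random.random() for i in range(20)}
    sol = {"in": set(range(5)), "all": set(values)}
    print(hill_climb(sol, swap_neighbourhood, lambda s: -sum(values[i] for i in s["in"])))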

13.
14.
15.
Analytical Target Cascading (ATC) is a decomposition-based optimization methodology that partitions a system into subsystems and then coordinates targets and responses among the subsystems. Augmented Lagrangian with Alternating Direction method of multipliers (AL-AD), one of the efficient ATC coordination methods, has been widely used in both hierarchical and non-hierarchical ATC and theoretically guarantees convergence under the assumption that all subsystem problems are convex and continuous. One of the main advantages of distributed coordination, which consists of several non-hierarchical subproblems, is that the subsystem problems can be solved in parallel, reducing computational time. Previous studies have therefore proposed augmented Lagrangian coordination strategies for parallelization that eliminate interactions among subproblems, either by introducing a master problem and support variables or by approximating the quadratic penalty function to make the subproblems separable. However, conventional AL-AD does not guarantee convergence in the case of parallel solving. Our study shows that, in parallel solving using the targets and responses of the current iteration, conventional AL-AD causes a mismatch of information when updating the Lagrange multiplier. The Lagrange multiplier may therefore not reach the optimal point, and as a result, increasing the penalty weight causes numerical difficulty in the augmented Lagrangian coordination approach. To solve this problem, we propose a modified AL-AD with parallelization for non-hierarchical ATC. The proposed algorithm uses the subgradient method with an adaptive step size to update the Lagrange multiplier and maintains the penalty weight at a level that does not cause oscillation. Without approximation or the introduction of an artificial master problem, the modified AL-AD with parallelization achieves accuracy and convergence similar to conventional AL-AD with sequential solving, at much lower computational cost.
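For orientation, in the standard AL-AD scheme the consistency constraint t - r = 0 between targets t and responses r is relaxed with an augmented Lagrangian penalty, and the multiplier is updated by the method of multipliers; written in one common notation (an assumption, not copied from this paper),

    \phi(t - r) = v^\top (t - r) + \| w \circ (t - r) \|_2^2,
    \qquad
    v^{k+1} = v^k + 2\, w \circ w \circ (t^k - r^k),

where \circ is the element-wise product, v the Lagrange multiplier estimate and w the penalty weights. The modification described above replaces the fixed multiplier update by a subgradient step v^{k+1} = v^k + \alpha_k (t^k - r^k) with an adaptive step size \alpha_k, while keeping w from growing large enough to cause oscillation.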

16.
A fundamental step in the software design process is the selection of a refinement (implementation) for a data abstraction. This step traditionally involves investigating the expected performance of a system under different refinements of an abstraction and then selecting a single alternative which minimizes some performance cost metric. In this paper we reformulate this design step to allow different refinements of the same data abstraction within a computation. This reformulation reflects the fact that the implementation appropriate for a data abstraction is dependent on the behavior exhibited by the objects of the abstraction. Since this behavior can vary among the objects of a computation, a single refinement is often inappropriate. Accordingly, three frameworks are presented for understanding and representing variations in the behavior of objects and, thus, the potential for multiple implementations. The three frameworks are based upon: 1) a static partitioning of objects into disjoint implementation classes; 2) static partitioning of classes into implementation regions; and 3) dynamic partitioning of classes into implementation regions. These frameworks and analytic tools useful in investigating expected performance under multiple implementations are described in detail.

17.
Several decomposition methods have been proposed for the distributed optimal design of quasi-separable problems encountered in Multidisciplinary Design Optimization (MDO). Some of these methods are known to have numerical convergence difficulties that can be explained theoretically. We propose a new decomposition algorithm for quasi-separable MDO problems. In particular, we propose a decomposed problem formulation based on the augmented Lagrangian penalty function and the block coordinate descent algorithm. The proposed solution algorithm consists of inner and outer loops. In the outer loop, the augmented Lagrangian penalty parameters are updated. In the inner loop, our method alternates between solving an optimization master problem and solving disciplinary optimization subproblems. The coordinating master problem can be solved analytically; the disciplinary subproblems can be solved using commonly available gradient-based optimization algorithms. The augmented Lagrangian decomposition method is derived such that existing proofs can be used to show convergence of the decomposition algorithm to Karush–Kuhn–Tucker points of the original problem under mild assumptions. We investigate the numerical performance of the proposed method on two example problems.
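The nested loop structure described above can be sketched schematically; in the Python sketch below, solve_master, solve_subproblems and the parameter values (beta, gamma) are illustrative assumptions standing in for the analytic master solution, the gradient-based disciplinary solvers and the paper's actual update rules.

    # Schematic inner/outer loop structure (illustrative only).
    def coordinate(solve_master, solve_subproblems, t, v, w,
                   beta=2.0, gamma=0.4, tol=1e-4, inner_tol=1e-6,
                   max_outer=50, max_inner=100):
        g_prev = None
        for _ in range(max_outer):
            # Inner loop: block coordinate descent between master and subproblems.
            for _ in range(max_inner):
                r = solve_subproblems(t, v, w)          # disciplinary subproblems
                t_new = solve_master(r, v, w)           # master problem (analytic)
                if max(abs(a - b) for a, b in zip(t_new, t)) < inner_tol:
                    t = t_new
                    break
                t = t_new
            # Outer loop: update the augmented Lagrangian parameters.
            g = [ti - ri for ti, ri in zip(t, r)]       # consistency gap
            if max(abs(x) for x in g) < tol:
                break
            v = [vi + 2 * wi * wi * gi for vi, wi, gi in zip(v, w, g)]   # multiplier update
            if g_prev is not None:                       # grow weights only if the gap stalls
                w = [wi if abs(gi) <= gamma * abs(gp) else beta * wi
                     for wi, gi, gp in zip(w, g, g_prev)]
            g_prev = g
        return t, r

    # Toy callables (they ignore v and w), shown only for the calling convention.
    t, r = coordinate(lambda r, v, w: [0.5 * (ri + 1.0) for ri in r],
                      lambda t, v, w: [0.5 * (ti + 2.0) for ti in t],
                      t=[0.0], v=[0.0], w=[1.0])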

18.
Verifying whether an ω-regular property is satisfied by a finite-state system is a core problem in model checking. Standard techniques build an automaton with the complementary language, compute its product with the system, and then check for emptiness. Generalized symbolic trajectory evaluation (GSTE) has been recently proposed as an alternative approach, extending the computationally efficient symbolic trajectory evaluation (STE) to general ω-regular properties. In this paper, we show that the GSTE algorithms are essentially a partitioned version of standard symbolic model-checking (SMC) algorithms, where the partitioning is driven by the property under verification. We export this technique of property-driven partitioning to SMC and show that it typically does speed up SMC algorithms. A shorter version of this paper has been presented at CAV’04 (R. Sebastiani et al., Lecture Notes in Comput. Sci., vol. 3114, pp. 143–160, 2004). R. Sebastiani was supported in part by the CALCULEMUS! IHP-RTN EC project, code HPRN-CT-2000-00102, by a MIUR COFIN02 project, code 2002097822_003, and by a grant from the Intel Corporation. M.Y. Vardi was supported in part by NSF grants CCR-9988322, CCR-0124077, CCR-0311326, IIS-9908435, IIS-9978135, EIA-0086264 and ANI-0216467, by BSF grant 9800096, and by a grant from the Intel Corporation.

19.
Convex multi-task feature learning
We present a method for learning sparse representations shared across multiple tasks. This method is a generalization of the well-known single-task 1-norm regularization. It is based on a novel non-convex regularizer which controls the number of learned features common across the tasks. We prove that the method is equivalent to solving a convex optimization problem for which there is an iterative algorithm which converges to an optimal solution. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the former step it learns task-specific functions and in the latter step it learns common-across-tasks sparse representations for these functions. We also provide an extension of the algorithm which learns sparse nonlinear representations using kernels. We report experiments on simulated and real data sets which demonstrate that the proposed method can both improve the performance relative to learning each task independently and lead to a few learned features common across related tasks. Our algorithm can also be used, as a special case, to simply select—not learn—a few common variables across the tasks. Editors: Daniel Silver, Kristin Bennett, Richard Caruana. This is a longer version of the conference paper (Argyriou et al. in Advances in neural information processing systems, vol. 19, 2007a). It includes new theoretical and experimental results.
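For readers who want the shape of the optimization problem, the convex reformulation underlying this family of methods can be written (up to notation and omitting some constraints; this is a summary of the standard formulation from memory, not a quotation from the paper) as

    \min_{W,\; D \succeq 0,\ \mathrm{tr}(D) \le 1} \; \sum_{t=1}^{T} \sum_{i=1}^{m} L\bigl(y_{ti}, \langle w_t, x_{ti} \rangle\bigr) \;+\; \gamma \sum_{t=1}^{T} \langle w_t, D^{+} w_t \rangle ,

where W collects the task weight vectors w_t and D is a shared positive semidefinite matrix. The alternating algorithm mentioned in the abstract then corresponds to solving T independent regularized problems in the w_t for fixed D (the supervised step) and setting D proportional to (W W^\top)^{1/2} for fixed W (the unsupervised step that extracts the shared features).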

20.
2D decision-making for multicriteria design optimization
The high dimensionality encountered in engineering design optimization due to large numbers of performance criteria and specifications leads to cumbersome and sometimes unachievable trade-off analyses. To facilitate those analyses and enhance decision-making and design selection, we propose to decompose the original problem by considering only pairs of criteria at a time, making trade-off evaluation as simple as possible. For the final design integration, we develop a novel coordination mechanism that guarantees that the selected design is also preferred for the original problem. The solution of an overall large-scale problem is therefore reduced to solving a family of bicriteria subproblems, allowing designers to effectively use decision-making in merely two dimensions for multicriteria design optimization.
Margaret M. Wiecek is on leave from the Department of Mathematical Sciences, Clemson University, South Carolina 29634, USA.
