Similar Documents
20 similar documents found (search time: 252 ms)
1.
Contemporary design processes require a new computational intelligence (soft computing) methodology that integrates intelligence and hybrid intelligent systems for design, analysis and evaluation, and optimization. This paper first discusses the need to incorporate intelligence into an automated design process and the various constraints that designers face when embarking on industrial design projects. It then casts the design problem as optimizing the design output against constraints and motivates the use of soft computing and hybrid intelligent systems techniques. A soft-computing-integrated intelligent design framework is developed, and a hybrid dual cross-mapping neural network (HDCMNN) model is proposed using a hybrid soft computing technique based on cross-mapping between a back-propagation network (BPNN) and a recurrent Hopfield network (HNN), supporting the modeling, analysis and evaluation, and optimization tasks in the design process. The two networks perform different but complementary tasks: the BPNN decides whether the design problem is a type 0 (rational) or type 1 (non-rational) problem, and its output-layer weights are then used as the energy function for the HNN. The BPNN represents design patterns, trains classification boundaries, and passes its weight values to the HNN, which uses them to evaluate and modify or re-design the design patterns. The developed system provides a unified soft-computing-integrated intelligent design framework with both symbolic and computational intelligence, and it has self-modifying and self-learning functions. Within the system, a single network training suffices for the evaluation, rectification/modification, and optimization tasks in the design process. Finally, two case studies illustrate and validate the developed model and system.
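The cross-mapping details of the HDCMNN are specific to the paper, but the Hopfield half of the idea (relaxing a corrupted design pattern toward a stored feasible one by descending the network energy) can be sketched in a few lines. A minimal sketch, assuming simple Hebbian weights rather than the BPNN-derived weights the paper actually uses:

```python
import numpy as np

def hopfield_store(patterns):
    """Hebbian weight matrix for a set of +/-1 patterns (zero diagonal)."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_repair(W, state, steps=100, rng=None):
    """Asynchronous updates: descend the energy E = -0.5 * s^T W s."""
    rng = rng or np.random.default_rng(0)
    s = state.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Two stored "feasible design patterns" and a corrupted variant of the first.
patterns = np.array([[1, 1, 1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1]])
W = hopfield_store(patterns)
noisy = np.array([-1, 1, 1, -1, -1, -1])   # first bit flipped
print(hopfield_repair(W, noisy))            # relaxes back to the first pattern
```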

2.
A new parallel dynamic unstructured grid DSMC method is presented in this paper. The code developed has been applied to the simulation of thin film deposition over microstructures. Surface deformation in such cases poses a challenge for accurate evaluation of the gas flow, because the deposited film thickness is comparable to the feature size. In this study a method is developed to move the mesh at run time. Since in a parallel simulation each partition moves independently of the others, a parallel version of the moving mesh is proposed to synchronize the displacement of neighboring partitions, so that there is a smooth transition from one partition to another. An efficient tool for tracking particles during the simulation is also presented. Furthermore, the influence of parameters such as sticking coefficient and aspect ratio on step coverage for a 1 μm wide trench under sputter deposition was studied. The results showed that step coverage deteriorates with increasing sticking coefficient and aspect ratio.
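The DSMC machinery itself is not reproduced here, but the reported trend can be illustrated with a toy 2D line-of-sight model: particles enter the trench with a cosine flux distribution, stick with a given probability at each wall hit, and otherwise re-emit diffusely. A hedged sketch (not DSMC, no gas-phase collisions), using the fraction of particles deposited on the trench bottom as a rough proxy for step coverage:

```python
import math, random

def bottom_coverage(aspect_ratio, sticking, n=5000, rng=random.Random(1)):
    """Toy 2D trench of width 1 and depth aspect_ratio. Returns the
    fraction of entering particles that deposit on the trench bottom."""
    w, d, stuck_bottom = 1.0, float(aspect_ratio), 0
    for _ in range(n):
        x, y = rng.random() * w, d
        theta = math.asin(2 * rng.random() - 1)       # cosine-distributed
        dx, dy = math.sin(theta), -math.cos(theta)    # heading into trench
        for _ in range(50):                           # cap wall bounces
            ts = []
            if dx < 0: ts.append((-x / dx, 'left'))
            if dx > 0: ts.append(((w - x) / dx, 'right'))
            if dy < 0: ts.append((-y / dy, 'bottom'))
            if dy > 0: ts.append(((d - y) / dy, 'top'))
            t, wall = min(ts)
            if wall == 'top':
                break                                  # escapes the trench
            x, y = x + t * dx, y + t * dy
            if rng.random() < sticking:
                stuck_bottom += wall == 'bottom'
                break
            theta = math.asin(2 * rng.random() - 1)   # diffuse re-emission
            nx, ny = {'left': (1, 0), 'right': (-1, 0), 'bottom': (0, 1)}[wall]
            dx = nx * math.cos(theta) - ny * math.sin(theta)
            dy = nx * math.sin(theta) + ny * math.cos(theta)
    return stuck_bottom / n

for ar in (1, 2, 4):
    print(ar, [round(bottom_coverage(ar, s), 3) for s in (0.1, 0.5, 1.0)])
```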

3.
In this paper we propose a general framework for compiling, scheduling, and executing parallel programs on parallel computers. We discuss important aspects of program partitioning, scheduling, and execution, and consider realistic alternatives for each issue. Subsequently we propose a possible implementation of an auto-scheduling compiler and give simple examples to illustrate the principles. Our approach is to utilize program information available to the compiler while, at the same time, allowing for run-time corrections and flexibility. This work was supported in part by the National Science Foundation under Grant NSF MIP-8410110, the U.S. Department of Energy under Grant DE-FG02-85ER25001, an IBM donation, and a grant from AT&T.
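The execution-side principle (a static task partitioning whose dispatch order is corrected at run time as predecessors complete) resembles a self-scheduling DAG executor. A minimal sketch with a hypothetical four-task graph, not the authors' compiler:

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def run_dag(tasks, deps, workers=4):
    """tasks: {name: callable}; deps: {name: set of prerequisite names}.
    Tasks are dispatched at run time as their predecessors finish."""
    remaining = {t: set(deps.get(t, ())) for t in tasks}
    done, running = set(), {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while remaining or running:
            ready = [t for t, d in remaining.items() if d <= done]
            for t in ready:
                running[pool.submit(tasks[t])] = t
                del remaining[t]
            if not running:
                raise ValueError("dependency cycle detected")
            finished, _ = wait(running, return_when=FIRST_COMPLETED)
            for f in finished:
                done.add(running.pop(f))
                f.result()   # re-raise any task exception

tasks = {t: (lambda t=t: print("ran", t)) for t in "ABCD"}
run_dag(tasks, deps={"B": {"A"}, "C": {"A"}, "D": {"B", "C"}})
```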

4.
In recent years, the state of the art in shape optimization has advanced due to new approaches proposed by various researchers. A fundamental difficulty in shape optimization is that the original finite element mesh may become invalid during large shape changes. Automatic remeshing and velocity-field approaches are most commonly used to address this problem in conventional h-type finite element analysis. In this paper, we describe a different approach to shape optimization based on high-order p-type finite elements tightly coupled to a parameterized computational geometry module. The advantages of this approach are as follows: (1) accurate results can be obtained with far fewer finite elements, so large shape changes are possible without remeshing; (2) automatic adaptive analysis may be performed so that accurate results are achieved at each step of the optimization process; and (3) since the elements derive their geometric mapping from the underlying geometry, the fundamental equivalent of velocity-field element shape updating may be readily achieved. Results are presented for sizing and shape optimization with this approach and contrasted with previous results from the literature.

5.
A Fast Parallel Algorithm for Convex Hull Problem of Multi-Leveled Images
In this paper, we propose a parallel algorithm to solve the convex hull problem for an (n×n) multi-leveled image, using a reconfigurable mesh-connected computer of the same size as the computational model. The algorithm determines, in parallel, the convex hull of every connected component of the multi-leveled image. It is based on some geometric properties and a top-down strategy. The complexity of the algorithm is O(log n) time. Using some approximations on the component contours, this complexity is reduced to O(log m) time, where m is the number of vertices of the convex hull of the biggest component of the image. This complexity is achieved thanks to the polymorphic properties of the mesh, on which all the components are processed simultaneously and separately.
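The O(log n) bound needs the reconfigurable mesh; as a serial reference for what is being computed, one can take each gray level of a small multi-leveled image and run a standard monotone-chain hull on its pixels. A sketch (grouping pixels by gray level rather than by connected component, which is a simplification of the paper's setting):

```python
import numpy as np

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(points):
    """Andrew's monotone chain; hull vertices in counter-clockwise order."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

# Each level's hull is computed independently, mirroring the algorithm's
# simultaneous, separate processing of components.
img = np.array([[1, 1, 0, 2],
                [1, 0, 0, 2],
                [0, 0, 2, 2]])
for level in np.unique(img[img > 0]):
    ys, xs = np.nonzero(img == level)
    print(level, hull(np.column_stack([xs, ys])))
```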

6.
Exploratory data mining and analysis requires a computing environment which provides facilities for the user-friendly expression and rapid execution of scientific queries. In this paper, we address research issues in the parallelization of scientific queries containing complex user-defined operations. In a parallel query execution environment, parallelizing a query execution plan involves determining how input data streams to evaluators implementing logical operations can be divided for processing by clones of the same evaluator in parallel. We introduce the concept of a relevance window, which characterizes the data lineage and data-partitioning opportunities available for a user-defined evaluator. In addition, we develop a query parallelization framework by extending relational parallel query optimization algorithms so that the parallelization characteristics of user-defined evaluators guide the process of query parallelization in an extensible query processing environment. We demonstrate the utility of our system with experiments mining cyclonic activity, blocking events, and upward wave-energy propagation features from several observational and model-simulation datasets.
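The relevance-window idea can be made concrete with a toy evaluator: a detector of k consecutive above-threshold records. Each clone receives its partition extended by k-1 records (the evaluator's relevance window), so the partitions can be processed independently and their outputs concatenated. A sketch with a hypothetical evaluator, not the paper's framework:

```python
def runs_above(chunk, offset, k, thresh):
    """User-defined evaluator: global start indices of k consecutive
    values exceeding thresh."""
    return [offset + i for i in range(len(chunk) - k + 1)
            if all(v > thresh for v in chunk[i:i + k])]

def parallel_runs(data, k, thresh, parts=4):
    """Each clone gets its partition plus k-1 extra records (the
    relevance window); concatenated results match the serial run."""
    size = (len(data) + parts - 1) // parts
    out = []
    for p in range(parts):                      # clones are independent
        lo, hi = p * size, min(len(data), (p + 1) * size)
        chunk = data[lo:hi + k - 1]
        out += [j for j in runs_above(chunk, lo, k, thresh) if j < hi]
    return out

data = [0, 5, 6, 7, 1, 8, 9, 9, 9, 2, 7, 8]
assert parallel_runs(data, k=3, thresh=4) == runs_above(data, 0, 3, 4)
print(parallel_runs(data, k=3, thresh=4))
```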

7.
Sparse QR factorization on a massively parallel computer
This paper shows that QR factorization of large, sparse matrices can be performed efficiently on massively parallel SIMD (single instruction stream/multiple data stream) computers such as the Connection Machine CM-2. The problem is cast as a dataflow graph, whose nodes are mapped to a virtual dataflow machine in such a way that only nearest-neighbor communication is required. This virtual machine is implemented by programming the CM-2 processors to support a restricted dataflow protocol. Execution results for several test matrices show that good performance can be obtained without relying on nested dissection techniques.
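The paper's contribution is the mapping of the factorization's dataflow graph onto the CM-2; the numerical kernel underneath is QR by Givens rotations, which eliminate one entry at a time and need only local data. A dense serial sketch of that kernel (the sparse, nearest-neighbor mapping is not modeled):

```python
import numpy as np

def givens_qr(A):
    """QR via Givens rotations; returns (Q, R) with A = Q @ R."""
    m, n = A.shape
    R = A.astype(float)
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):      # zero R[i, j] against R[i-1, j]
            a, b = R[i-1, j], R[i, j]
            if b == 0.0:
                continue
            r = np.hypot(a, b)
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])
            R[[i-1, i], :] = G @ R[[i-1, i], :]
            Q[:, [i-1, i]] = Q[:, [i-1, i]] @ G.T
    return Q, R

A = np.array([[4., 1., 0.], [1., 3., 1.], [0., 1., 2.], [0., 0., 1.]])
Q, R = givens_qr(A)
print(np.allclose(Q @ R, A), np.allclose(np.tril(R, -1), 0))
```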

8.
A Probabilistic Exclusion Principle for Tracking Multiple Objects
Tracking multiple targets is a challenging problem, especially when the targets are identical, in the sense that the same model is used to describe each target. In this case, simply instantiating several independent 1-body trackers is not an adequate solution, because the independent trackers tend to coalesce onto the best-fitting target. This paper presents an observation density for tracking which solves this problem by exhibiting a probabilistic exclusion principle. Exclusion arises naturally from a systematic derivation of the observation density, without relying on heuristics. Another important contribution of the paper is the presentation of partitioned sampling, a new sampling method for multiple object tracking. Partitioned sampling avoids the high computational load associated with fully coupled trackers, while retaining the desirable properties of coupling.

9.
In this paper we investigate the general problem of discovering recurrent patterns that are embedded in categorical sequences. An important real-world problem of this nature is motif discovery in DNA sequences. There are a number of fundamental aspects of this data mining problem that can make discovery easy or hard—we characterize the difficulty of this problem using an analysis based on the Bayes error rate under a Markov assumption. The Bayes error framework demonstrates why certain patterns are much harder to discover than others. It also explains the role of different parameters such as pattern length and pattern frequency in sequential discovery. We demonstrate how the Bayes error can be used to calibrate existing discovery algorithms, providing a lower bound on achievable performance. We discuss a number of fundamental issues that characterize sequential pattern discovery in this context, present a variety of empirical results to complement and verify the theoretical analysis, and apply our methodology to real-world motif-discovery problems in computational biology.
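The central quantity, the Bayes error of the motif-versus-background decision, can be computed exactly for short windows by enumerating all sequences. A toy sketch with made-up first-order Markov parameters (hypothetical values, purely to show the calibration idea and the effect of pattern length):

```python
from itertools import product

def markov_prob(seq, init, trans):
    """Probability of a symbol sequence under a first-order Markov chain."""
    p = init[seq[0]]
    for a, b in zip(seq, seq[1:]):
        p *= trans[a][b]
    return p

def bayes_error(init0, trans0, init1, trans1, length, prior1):
    """P(error) of the optimal classifier = sum over all windows of
    min(prior0 * P0(window), prior1 * P1(window))."""
    alphabet = list(init0)
    return sum(min((1 - prior1) * markov_prob(s, init0, trans0),
                   prior1 * markov_prob(s, init1, trans1))
               for s in product(alphabet, repeat=length))

# Hypothetical background vs. motif chains over a binary alphabet.
bg = ({'a': 0.5, 'b': 0.5},
      {'a': {'a': 0.5, 'b': 0.5}, 'b': {'a': 0.5, 'b': 0.5}})
motif = ({'a': 0.9, 'b': 0.1},
         {'a': {'a': 0.8, 'b': 0.2}, 'b': {'a': 0.8, 'b': 0.2}})
for L in (2, 4, 8):   # error shrinks as the pattern gets longer
    print(L, round(bayes_error(*bg, *motif, length=L, prior1=0.5), 4))
```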

10.
A level set algorithm for tracking discontinuities in hyperbolic conservation laws is presented. The algorithm uses a simple finite difference approach, analogous to the method of lines scheme presented in [36]. The zero of a level set function is used to specify the location of the discontinuity. Since a level set function is used to describe the front location, no extra data structures are needed to keep track of the location of the discontinuity. Also, two solution states are used at all computational nodes, one corresponding to the real state, and one corresponding to a ghost node state, analogous to the Ghost Fluid Method of [12]. High order pointwise convergence was demonstrated for scalar linear and nonlinear conservation laws, even at discontinuities and in multiple dimensions in the first paper of this series [3]. The solutions here are compared to standard high order shock capturing schemes, when appropriate. This paper focuses on the issues involved in tracking discontinuities in systems of conservation laws. Examples will be presented of tracking contacts and hydrodynamic shocks in inert and chemically reacting compressible flow.
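In one dimension the level-set bookkeeping reduces to a few lines: advect φ with the front speed and read the discontinuity location off its zero crossing. A first-order upwind sketch (the paper uses a high-order method-of-lines scheme and ghost states, which this omits):

```python
import numpy as np

def advect_levelset(phi, speed, dx, dt, steps):
    """First-order upwind advection of phi_t + speed * phi_x = 0."""
    phi = phi.copy()
    for _ in range(steps):
        if speed > 0:
            phi[1:] -= speed * dt / dx * (phi[1:] - phi[:-1])
        else:
            phi[:-1] -= speed * dt / dx * (phi[1:] - phi[:-1])
    return phi

x = np.linspace(0, 1, 201)
dx = x[1] - x[0]
phi = x - 0.2                      # zero crossing marks the front at x = 0.2
speed, dt, steps = 1.0, 0.4 * dx, 250
phi = advect_levelset(phi, speed, dx, dt, steps)
i = np.argmin(np.abs(phi))         # locate the discontinuity from phi alone
print("front near x =", round(x[i], 3), "expected", 0.2 + speed * dt * steps)
```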

11.
Blum, Avrim; Burch, Carl. Machine Learning, 2000, 39(1): 35-58.
The problem of combining expert advice, studied extensively in the Computational Learning Theory literature, and the Metrical Task System (MTS) problem, studied extensively in the area of On-line Algorithms, contain a number of interesting similarities. In this paper we explore the relationship between these problems and show how algorithms designed for each can be used to achieve good bounds and new approaches for solving the other. Specific contributions of this paper include: (1) an analysis of how two recent algorithms for the MTS problem can be applied to the problem of tracking the best expert in the decision-theoretic setting, providing good bounds and an approach of a much different flavor from the well-known multiplicative-update algorithms; (2) an analysis showing how the standard randomized Weighted Majority (or Hedge) algorithm can be used for the problem of combining on-line algorithms on-line, giving much stronger guarantees than the results of Azar, Y., Broder, A., & Manasse, M. (1993), Proc. ACM-SIAM Symposium on Discrete Algorithms (pp. 432-440), when the algorithms being combined occupy a state space of bounded diameter; and (3) a generalization of the above, showing how (a simplified version of) Herbster and Warmuth's weight-sharing algorithm can be applied to give a finely competitive bound for the uniform-space Metrical Task System problem. We also give a new, simpler algorithm for tracking experts, which unfortunately does not carry over to the MTS problem. Finally, we present an experimental comparison of how these algorithms perform on a process migration problem, a problem that combines aspects of both the experts-tracking and MTS formalisms.
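The randomized Weighted Majority (Hedge) algorithm that drives the second contribution is compact enough to state directly. A standard sketch for generic losses in [0, 1] (the MTS embedding and the bounded-diameter analysis are the paper's, not shown here):

```python
import numpy as np

def hedge(loss_matrix, eta):
    """loss_matrix[t, i]: loss of expert i at step t. Returns the
    algorithm's total expected loss under multiplicative weights."""
    T, n = loss_matrix.shape
    w = np.ones(n)
    total = 0.0
    for t in range(T):
        p = w / w.sum()                     # probability of following expert i
        total += p @ loss_matrix[t]         # expected loss this step
        w *= np.exp(-eta * loss_matrix[t])  # multiplicative update
    return total

rng = np.random.default_rng(0)
losses = rng.random((500, 5))
losses[:, 2] *= 0.3                          # expert 2 is consistently better
print("hedge:", hedge(losses, eta=0.1))
print("best expert:", losses.sum(axis=0).min())
```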

12.
We construct a nearest-neighbor interaction whose ground states encode the solutions to the NP-complete problem independent set for cubic planar graphs. The important difference from previously used Hamiltonians in adiabatic quantum computing is that our Hamiltonian is spatially local. Due to its special structure, our Hamiltonian can be easily simulated by Ising interactions between adjacent particles on a 2D rectangular lattice, and we describe the required pulse sequences. Our methods could help to implement adiabatic quantum computing with physically reasonable Hamiltonians such as short-range interactions. Moreover, this universal resource Hamiltonian can be reused for different graphs by applying suitable control operations, in contrast to a previous proposal where the Hamiltonian has to be wired in hardware for each graph. PACS: 03.67.Lx
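The encoding is easy to reproduce in miniature: reward selected vertices, penalize every edge with both endpoints selected, and the ground states of the resulting Ising/QUBO energy are exactly the maximum independent sets. A brute-force sketch on a small graph (the paper's actual point, the 2D-lattice locality and pulse sequences, is not captured):

```python
from itertools import product

def independent_set_energy(x, edges, A=1.0, B=2.0):
    """QUBO energy: -A per selected vertex, +B per violated edge.
    With B > A, ground states are exactly the maximum independent sets,
    since dropping an endpoint of a violated edge always lowers energy."""
    return -A * sum(x) + B * sum(x[i] * x[j] for i, j in edges)

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # a 4-cycle plus a chord
n = 4
best = min(product((0, 1), repeat=n),
           key=lambda x: independent_set_energy(x, edges))
print("ground state:", best)   # (0, 1, 0, 1): vertices {1, 3}
```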

13.
14.
The idea of hierarchical gradient methods for optimization is considered. It is shown that the proposed approach provides powerful means to cope with some global convergence problems characteristic of the classical gradient methods. Concerning global convergence, four topics are addressed: the detour effect, the problem of multiscale models, the problem of highly ill-conditioned objective functions, and the problem of local-minima traps related to ambiguous regions of attraction. The great potential of hierarchical gradient algorithms is revealed through a hierarchical Gauss-Newton algorithm for unconstrained nonlinear least-squares problems. The algorithm, while maintaining a superlinear convergence rate like the common conjugate gradient or quasi-Newton methods, requires the evaluation of partial derivatives with respect to only one variable on each iteration. This property saves CPU time when the computer codes for the derivatives are CPU-intensive, e.g., when the gradient evaluations of ODE or PDE models are produced by numerical differentiation. The hierarchical Gauss-Newton algorithm is extended to handle interval constraints on the variables, and its effectiveness is demonstrated by computational results.
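A hedged miniature of the one-derivative-per-iteration property: Gauss-Newton steps on a nonlinear least-squares problem in which each iteration numerically differentiates the residual with respect to a single, cyclically chosen variable. This is a simplified coordinate-wise flavor, not the paper's hierarchical organization or constraint handling:

```python
import numpy as np

def coordinate_gauss_newton(residual, x0, iters=400, h=1e-6):
    """Each iteration updates one coordinate using a finite-difference
    column of the Jacobian: dx_i = -(J_i . r) / (J_i . J_i)."""
    x = np.asarray(x0, dtype=float).copy()
    for k in range(iters):
        i = k % len(x)
        r = residual(x)
        e = np.zeros_like(x)
        e[i] = h
        J_i = (residual(x + e) - r) / h    # derivative w.r.t. x_i only
        denom = J_i @ J_i
        if denom > 0:
            x[i] -= (J_i @ r) / denom
    return x

# Fit y = a * exp(b * t) to noiseless data generated with a=2, b=-1.
t = np.linspace(0, 2, 20)
y = 2.0 * np.exp(-1.0 * t)
res = lambda x: x[0] * np.exp(x[1] * t) - y
print(coordinate_gauss_newton(res, [1.0, 0.0]).round(3))   # approx [2, -1]
```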

15.
Chen, Peter C. Y.; Wonham, W. M. Real-Time Systems, 2002, 23(3): 183-208.
In this article, a method for scheduling a processor for non-preemptive execution of periodic tasks is presented. This method is based on the formal framework of supervisory control of timed discrete-event systems. It is shown that, with this method, the problem of determining schedulability and the problem of finding a scheduling algorithm are dual since a solution to the former necessarily implies a solution to the latter and vice versa. Furthermore, the solution to the latter thus obtained is complete in the sense that it contains all safe sequences of task execution with the guarantee that no deadline is missed. Examples are described to illustrate this method. Implication of the results and computational complexity associated with this method are discussed.
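The supervisory-control construction is beyond a snippet, but its end product, the complete set of safe execution sequences, can be illustrated by exhaustively enumerating non-preemptive dispatch orders over one hyperperiod and keeping those that miss no deadline. A toy enumerator, assuming implicit deadlines (deadline = period), not the timed discrete-event construction:

```python
from math import lcm

def safe_sequences(tasks):
    """tasks: list of (period, wcet). Returns every non-preemptive
    dispatch order over one hyperperiod in which no job misses its
    deadline. Jobs are (task, release, deadline, wcet) tuples."""
    H = lcm(*(p for p, _ in tasks))
    jobs = [(i, k * p, (k + 1) * p, c)
            for i, (p, c) in enumerate(tasks) for k in range(H // p)]

    def search(t, remaining, seq):
        if not remaining:
            yield seq
            return
        for j in remaining:
            start = max(t, j[1])                 # idle until release if needed
            if start + j[3] <= j[2]:             # job finishes by its deadline
                rest = [x for x in remaining if x is not j]
                yield from search(start + j[3], rest, seq + [j[0]])

    return list(search(0, jobs, []))

# Two periodic tasks: (period 4, wcet 1) and (period 6, wcet 2).
seqs = safe_sequences([(4, 1), (6, 2)])
print(len(seqs), "safe sequences; one of them:", seqs[0])
```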

16.
A Q4/Q4 continuum structural topology optimization implementation
A node-based design variable implementation for continuum structural topology optimization in a finite element framework is presented, and its properties are explored in the context of solving a number of different design examples. Since the implementation ensures C0 continuity of the design variables, it is immune to the element-wise checkerboarding instabilities that are a concern with element-based design variables. Nevertheless, in a subset of the design examples considered, especially those involving compliance minimization with coarse meshes, the implementation is found to introduce a new phenomenon that takes the form of layering or islanding in the material layout design. In the examples studied, this phenomenon disappears with mesh refinement or the enforcement of sufficiently restrictive design perimeter constraints, the latter sometimes being necessary in design problems involving bending to ensure convergence with mesh refinement. Based on its demonstrated performance characteristics, the authors conclude that the proposed node-based implementation is viable for continued usage in continuum topology optimization.
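The mechanical core is small: element densities are not independent unknowns but bilinear (Q4) interpolations of nodal design variables, which is what buys C0 continuity and immunity to checkerboards. A minimal sketch of that interpolation with hypothetical nodal values (the optimization loop itself is not shown):

```python
import numpy as np

def q4_shape(xi, eta):
    """Bilinear Q4 shape functions on [-1,1]^2, nodes CCW from (-1,-1)."""
    return 0.25 * np.array([(1 - xi) * (1 - eta), (1 + xi) * (1 - eta),
                            (1 + xi) * (1 + eta), (1 - xi) * (1 + eta)])

def element_density(nodal_rho, gauss=(-1 / np.sqrt(3), 1 / np.sqrt(3))):
    """Densities at the 2x2 Gauss points of one element, interpolated
    from its four nodal design variables."""
    return np.array([[q4_shape(xi, eta) @ nodal_rho for xi in gauss]
                     for eta in gauss])

# Nodal variables shared between neighboring elements make the density
# field continuous; a 0/1 checkerboard of *element* densities cannot
# be represented.
nodal_rho = np.array([1.0, 0.0, 1.0, 0.0])
print(element_density(nodal_rho))
```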

17.
This article first briefly discusses the use of the computer in three fields of historical research in Norway: text retrieval in medieval documents, roll call analysis, and the study of social history and historical demography. The treatment of highly structured source material like censuses is then explored more fully, especially the coding of information about family status, occupation, and birth place. In order to standardize this information, historians have developed several coding schemes and sophisticated software for the combined use of the full text and the encoded versions. Gunnar Thorvaldsen is Manager of Research at the Norwegian Historical Data Center, the University of Tromsø. His main research interests are migration and record linkage. He has published several articles on historical computing, e.g., "The Preservation of Computer Readable Records in the Nordic Countries," History and Computing, 4 (1992).

18.
The token distribution (TD) problem, an abstract static variant of load balancing, is defined as follows: let M be a (parallel processor) network with set P of processors. Initially, each processor P ∈ P has a certain amount l(P) of tokens. The goal of a TD algorithm, run on M, is to distribute the tokens evenly among the processors. In this paper we introduce the notion of strongly adaptive TD algorithms, i.e., algorithms whose running times come close to the best possible runtime, the off-line complexity of the TD problem, for each individual initial token distribution l. Until now, only weakly adaptive algorithms have been considered, where the running time is measured in terms of the maximum initial load max{l(P) : P ∈ P}. We design an almost optimal, strongly adaptive algorithm on mesh-connected networks of arbitrary dimension extended by a single 1-bit bus. This result shows that an on-line TD algorithm can come close to the optimal (off-line) bound for each individual initial load. Furthermore, we exactly characterize the off-line complexity of arbitrary initial token distributions on arbitrary networks. As an intermediate result, we design almost optimal weakly adaptive algorithms for TD on mesh-connected networks of arbitrary dimension. This research was partially supported by DFG-Forschergruppe "Effiziente Nutzung massiv paralleler Systeme" (Teilprojekt 4), by ESPRIT Basic Research Action No. 7141 (ALCOM II), and by the Volkswagen-Stiftung. A preliminary version was presented at the 20th ICALP, 1993; see [9].
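A weakly adaptive baseline in the paper's sense is easy to sketch: a dimension-exchange scheme on a ring, where nodes repeatedly split their tokens evenly with alternating neighbors until the distribution is balanced to within one token. A sketch (the strongly adaptive algorithm and the 1-bit bus are not modeled):

```python
def balance(loads, rounds):
    """Odd-even dimension exchange on a ring with an even number of
    nodes: alternate over the two perfect matchings of ring edges;
    paired nodes split their combined tokens as evenly as possible."""
    loads, n = list(loads), len(loads)
    for r in range(rounds):
        for i in range(r % 2, n, 2):      # edges (i, i+1), alternating
            j = (i + 1) % n
            total = loads[i] + loads[j]
            loads[i], loads[j] = total // 2, total - total // 2
    return loads

# Converges toward the even distribution (2 tokens per node, +/- 1).
print(balance([12, 0, 0, 0, 4, 0, 0, 0], rounds=10))
```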

19.
Intelligent data analysis implies the reasoned application of autonomous or semi-autonomous tools to data sets drawn from problem domains. Automation of this process of reasoning about analysis (based on factors such as available computational resources, cost of analysis, risk of failure, lessons learned from past errors, and tentative structural models of problem domains) is highly non-trivial. By casting the problem of reasoning about analysis (MetaReasoning) as yet another data analysis problem domain, we have previously [R. Levinson and J. Wilkinson, in Advances in Intelligent Data Analysis, edited by X. Liu, P. Cohen, and M. Berthold, volume LNCS 1280, Springer-Verlag, Berlin, pp. 89–100, 1997] presented a design framework, MetaReasoning for Data Analysis Tool Allocation (MRDATA). Crucial to this framework is the ability of a Tool Allocator to track resource consumption (i.e. processor time and memory usage) by the Tools it employs, as well as the ability to allocate measured quantities of resources to these Tools. In order to test implementations of the MRDATA design, we now implement a Runtime Environment for Data Analysis Tool Allocation, RE:DATA. Tool Allocators run as processes under RE:DATA, are allotted system resources, and may use these resources to run their Tools as spawned sub-processes. We also present designs of native RE:DATA implementations of analysis tools used by MRDATA: K-Nearest Neighbor Tables, Regression Trees, Interruptible (Any-Time) Regression Trees, and Hierarchy Diffusion Temporal Difference Learners. Preliminary results are discussed and techniques for integration with non-native tools are explored.
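The resource-accounting idea at the heart of the framework can be sketched in a few lines: an allocator that charges each tool's CPU consumption against a budget. A hypothetical sketch (measurement only; RE:DATA's sub-process isolation and memory accounting are not modeled):

```python
import time

class ToolAllocator:
    """Tracks CPU-time consumption of analysis tools against a budget."""
    def __init__(self, budget_seconds):
        self.budget = budget_seconds
        self.ledger = {}   # tool name -> CPU seconds consumed so far

    def run(self, tool, *args, **kwargs):
        if sum(self.ledger.values()) >= self.budget:
            raise RuntimeError("CPU budget exhausted")
        t0 = time.process_time()
        try:
            return tool(*args, **kwargs)
        finally:
            used = time.process_time() - t0
            self.ledger[tool.__name__] = self.ledger.get(tool.__name__, 0.0) + used

alloc = ToolAllocator(budget_seconds=1.0)
print(alloc.run(sorted, range(100000, 0, -1))[:5])
print(alloc.ledger)
```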

20.
In recent years, constraint satisfaction techniques have been successfully applied to disjunctive scheduling problems, i.e., scheduling problems where each resource can execute at most one activity at a time. Less significant and less generally applicable results have been obtained in the area of cumulative scheduling. Multiple constraint propagation algorithms have been developed for cumulative resources but they tend to be less uniformly effective than their disjunctive counterparts. Different problems in the cumulative scheduling class seem to have different characteristics that make them either easy or hard to solve with a given technique. The aim of this paper is to investigate one particular dimension along which problems differ. Within the cumulative scheduling class, we distinguish between highly disjunctive and highly cumulative problems: a problem is highly disjunctive when many pairs of activities cannot execute in parallel, e.g., because many activities require more than half of the capacity of a resource; on the contrary, a problem is highly cumulative if many activities can effectively execute in parallel. New constraint propagation and problem decomposition techniques are introduced with this distinction in mind. This includes an O(n²) edge-finding algorithm for cumulative resources (where n is the number of activities requiring the same resource) and a problem decomposition scheme which applies well to highly disjunctive project scheduling problems. Experimental results confirm that the impact of these techniques varies from highly disjunctive to highly cumulative problems. In the end, we also propose a refined version of the edge-finding algorithm for cumulative resources which, despite its worst-case complexity of O(n³), performs very well on highly cumulative instances.
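The highly-disjunctive/highly-cumulative distinction is directly computable: count the activity pairs that can never overlap because their combined demand exceeds the resource capacity. A small sketch of that disjunction ratio, with hypothetical instance data:

```python
from itertools import combinations

def disjunction_ratio(demands, capacity):
    """Fraction of activity pairs that can never execute in parallel."""
    pairs = list(combinations(demands, 2))
    conflicting = sum(a + b > capacity for a, b in pairs)
    return conflicting / len(pairs)

highly_disjunctive = [6, 7, 6, 8, 5]   # most demands exceed half capacity
highly_cumulative = [2, 3, 1, 2, 3]    # many activities fit side by side
for demands in (highly_disjunctive, highly_cumulative):
    print(disjunction_ratio(demands, capacity=10))   # 1.0, then 0.0
```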
