Found 20 similar documents; search took 15 milliseconds.
1.
This work proposes an object-oriented unified optimization framework (UOF) for general problem optimization. Based on biologically inspired techniques, numerical deterministic methods, and C++ object-oriented design, the UOF has significant potential to perform optimization on a wide range of problems. The UOF provides basic interfaces to define a general problem and a generic solver, enabling these two different research fields to be bridged. The components of the UOF can be separated into problem and solver components. These two parts work independently, allowing high-level code to be reused and rapidly adapted to new problems and solvers. The UOF is customized to deal with several optimization problems. The first experiment involves a well-known discrete combinatorial problem, while the second studies robustness for the reverse modeling problem, which is in high demand among device manufacturing companies. Additionally, experiments are undertaken to determine the capability of the proposed methods in both analog and digital circuit design automation. The final experiment designs antennas for rapidly growing wireless communication. Most experiments are categorized as simulation-based optimization tasks in the microelectronics industry. The results confirm that the UOF has excellent flexibility and extensibility to solve these problems successfully. The developed open-source project is publicly available.
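The problem/solver decoupling described above can be illustrated with a minimal sketch (not taken from the UOF source; all class and method names here are hypothetical):

```python
import random
from abc import ABC, abstractmethod

class Problem(ABC):
    """Generic problem interface: solvers see only this contract."""
    @abstractmethod
    def random_solution(self): ...
    @abstractmethod
    def evaluate(self, solution): ...

class OneMax(Problem):
    """Toy discrete problem: maximize the number of 1-bits in a string."""
    def __init__(self, n):
        self.n = n
    def random_solution(self):
        return [random.randint(0, 1) for _ in range(self.n)]
    def evaluate(self, solution):
        return sum(solution)

class RandomSearch:
    """Simplest possible solver: keep the best of many random samples."""
    def solve(self, problem, iterations=200):
        best = problem.random_solution()
        for _ in range(iterations):
            cand = problem.random_solution()
            if problem.evaluate(cand) > problem.evaluate(best):
                best = cand
        return best

random.seed(0)
best = RandomSearch().solve(OneMax(8))
```

Because the solver touches the problem only through the abstract interface, either side can be swapped without changing the other, which is the reuse property the abstract claims.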
2.
Structural and Multidisciplinary Optimization - Reliability-based design optimization (RBDO) is an active field of research with an ever-increasing number of contributions. Numerous methods have...
3.
Jianbing Shen, Hanqiu Sun, Hanli Zhao, Shiaofen Fang 《Pattern recognition》2010,43(2):457-469
A unified framework is proposed for designing textures using energy optimization and deformation. Our interactive scheme can globally change the visual properties of texture elements, and locally change texture elements with little user interaction. Given a small sample texture, the design process starts by applying a set of global deformation operations (rotation, translation, mirror, scale, and flip) to the sample texture to obtain a set of deformed textures automatically. We then further deform these textures locally and interactively by replacing local texture-element regions with regions from other textures. By utilizing the energy optimization method, interactive selection and deformation of local texture elements is accomplished simply by indicating the positions of texture elements very roughly with a brush tool. Finally, the deformed textures are used to create large textures with the fast layer-based texture deformation algorithm and wavelet-based energy optimization. Our experimental results demonstrate that the proposed approach can help design a large variety of textures from a small example, change the locations of texture elements, increase or decrease the density of texture elements, and design cyclic marbling textures.
4.
I. Vajk 《Automatica》2005,41(8):1385-1393
The paper derives a framework suitable for discussing the classical Koopmans-Levin (KL) and maximum likelihood (ML) algorithms for estimating parameters of errors-in-variables linear models in a unified way. Using the capability of the unified approach, a new parameter estimation algorithm is presented, offering flexibility to ensure acceptable variance in the estimated parameters. The developed algorithm is based on the application of Hankel matrices of variable size and can equally be considered a generalized version of the KL method (GKL) or a reduced version of ML estimation. The methodology applied to derive the GKL algorithm is used to present a straightforward derivation of the subspace identification algorithm.
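For readers unfamiliar with the variable-size Hankel matrices the GKL method relies on, their construction can be sketched as follows (a generic illustration, not the paper's implementation; the helper name is ours):

```python
def hankel(seq, rows):
    """Hankel matrix with constant anti-diagonals: H[i][j] = seq[i + j].
    `rows` is the free size parameter (the abstract's "variable size")."""
    cols = len(seq) - rows + 1
    return [[seq[i + j] for j in range(cols)] for i in range(rows)]

H = hankel([1, 2, 3, 4, 5], 2)
```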
5.
A unified framework for chaotic neural-network approaches to combinatorial optimization. (Cited by 7; 0 self-citations, 7 citations by others)
As an attempt to provide an organized way to study chaotic structures and their effects in solving combinatorial optimization with chaotic neural networks (CNN), a unifying framework is proposed to serve as a basis on which existing CNN models can be placed and compared. The key to this proposed framework is the introduction of an extra energy term into the computational energy of the Hopfield model; this term takes on different forms for different CNN models and modifies the original Hopfield energy landscape in various manners. Three CNN models, namely the Chen and Aihara model with self-feedback chaotic simulated annealing (CSA) (1995, 1997), the Wang and Smith model with timestep CSA (1998), and the chaotic noise model, are chosen as examples to show how they can be classified and compared within the proposed framework.
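The framework's key idea, an extra model-specific term added to the Hopfield computational energy, can be sketched numerically (a toy illustration with hypothetical weights and a made-up self-feedback-style extra term, not any of the three cited models):

```python
def hopfield_energy(w, x):
    """Standard Hopfield computational energy: E = -1/2 * sum_ij w[i][j]*x[i]*x[j]."""
    n = len(x)
    return -0.5 * sum(w[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def unified_energy(w, x, extra):
    """The framework's view: Hopfield energy plus a model-specific extra term."""
    return hopfield_energy(w, x) + extra(x)

# Hypothetical 2-neuron network and a made-up self-feedback-style extra term
z = 0.5
E = unified_energy([[0, 1], [1, 0]], [1, 1], lambda x: z * sum(xi * xi for xi in x))
```

Each CNN model corresponds to a different choice of `extra`, which is how the framework distinguishes and compares them.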
6.
7.
Graph-based induction as a unified learning framework (Cited by 6; 0 self-citations, 6 citations by others)
We describe a graph-based induction algorithm that extracts typical patterns from colored digraphs. The method is shown to be capable of solving a variety of learning problems by mapping the different learning problems into colored digraphs. The generality and scope of this method can be attributed to the expressiveness of the colored digraph representation, which allows a number of different learning problems to be solved by a single algorithm. We demonstrate the application of our method to two seemingly different learning tasks: inductive learning of classification rules, and learning macro rules for speeding up inference. We also show that the uniform treatment of these two learning tasks enables our method to solve complex learning problems such as the construction of hierarchical knowledge bases.
8.
Recent rapid advances in Information and Communication Technologies (ICTs) have highlighted the rising importance of the Business Model (BM) concept in the field of Information Systems (IS). Despite agreement on its importance to an organization's success, the concept is still fuzzy and vague, and there is little consensus regarding its compositional facets. Identifying the fundamental concepts, modeling principles, practical functions, and reach of the BM relevant to IS and other business concepts is by no means complete. This paper, following a comprehensive review of the literature, principally employs the content analysis method and utilizes a deductive reasoning approach to provide a hierarchical taxonomy of the BM concepts from which to develop a more comprehensive framework. This framework comprises four fundamental aspects. First, it identifies four primary BM dimensions along with their constituent elements forming a complete ontological structure of the concept. Second, it cohesively organizes the BM modeling principles, that is, guidelines and features. Third, it explains the reach of the concept showing its interactions and intersections with strategy, business processes, and IS so as to place the BM within the world of digital business. Finally, the framework explores three major functions of BMs within digital organizations to shed light on the practical significance of the concept. Hence, this paper links the BM facets in a novel manner offering an intact definition. In doing so, this paper provides a unified conceptual framework for the BM concept that we argue is comprehensive and appropriate to the complex nature of businesses today. This leads to fruitful implications for theory and practice and also enables us to suggest a research agenda using our conceptual framework.
9.
Optimizing the kernel in the empirical feature space (Cited by 17; 0 self-citations, 17 citations by others)
In this paper, we present a method of kernel optimization that maximizes a measure of class separability in the empirical feature space, a Euclidean space in which the training data are embedded in such a way that the geometrical structure of the data in the feature space is preserved. Employing a data-dependent kernel, we derive an effective kernel optimization algorithm that maximizes the class separability of the data in the empirical feature space. It is shown that there exists a close relationship between the class separability measure introduced here and the alignment measure recently defined by Cristianini. Extensive simulations are carried out which show that the optimized kernel is more adaptive to the input data and leads to a substantial, sometimes significant, improvement in the performance of various data classification algorithms.
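A class separability measure of the between-class versus within-class scatter type can be sketched for the simplest 1-D case (an illustrative toy, not the paper's data-dependent-kernel algorithm):

```python
def separability(features, labels):
    """Between-class over within-class scatter for scalar features (1-D toy case)."""
    classes = sorted(set(labels))
    mean = sum(features) / len(features)
    between = within = 0.0
    for c in classes:
        fc = [f for f, l in zip(features, labels) if l == c]
        mc = sum(fc) / len(fc)
        between += len(fc) * (mc - mean) ** 2
        within += sum((f - mc) ** 2 for f in fc)
    return between / within

# A well-separated embedding scores far higher than an overlapping one
tight = separability([0.0, 0.1, 5.0, 5.1], [0, 0, 1, 1])
loose = separability([0.0, 2.0, 1.0, 3.0], [0, 0, 1, 1])
```

Kernel optimization in this spirit searches over kernel parameters to maximize such a score on the embedded training data.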
10.
A unified numerical approach, based on a control parameterization technique, for solving structural cross-sectional optimization problems is presented. The key to the unified formulation lies in framing the objective functional and the constraints in the same unified canonical form. Consequently, the different types of objective functionals and geometrical and performance constraints can be treated in the same way, paving the way for the problems to be solved under a single approach using general-purpose software. To demonstrate this versatile approach, several illustrative examples of cross-sectional shape optimization of structural members under a variety of constraints are examined.
11.
Menasce D.A., Yesha Y., Kalpakis K. 《IEEE transactions on pattern analysis and machine intelligence》1994,20(11):868-884
Quorum attainment protocols are an important part of many mutual exclusion algorithms. Assessing the performance of such protocols in terms of the number of messages, as is usually done, may be less significant than being able to compute the delay in attaining the quorum. Some protocols achieve higher reliability at the expense of increased message cost or delay. A unified analytical model that takes into account the network delay and its effect on the time needed to obtain a quorum is presented. A combined performability metric, which takes into account both availability and delay, is defined, and expressions to calculate its value are derived for two different reliable quorum attainment protocols: D. Agrawal and A. El Abbadi's (1991) and the Majority Consensus algorithm (R.H. Thomas, 1979). Expressions for the primary site approach are also given as an upper bound on performability and a lower bound on delay. A parallel version of the Agrawal and El Abbadi protocol is introduced and evaluated. This new algorithm is shown to exhibit lower delay at the expense of a negligible increase in the number of messages exchanged. Numerical results derived from the model are discussed.
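The Majority Consensus side of the comparison is easy to sketch: a quorum requires more than half of the n sites, and its availability follows from independent site up-probabilities (a textbook illustration, not the paper's performability model, which also accounts for delay):

```python
from math import comb

def majority_quorum(up_sites, n):
    """Majority Consensus: quorum attained iff more than half the n sites respond."""
    return len(up_sites) > n // 2

def availability(p, n):
    """Probability that a majority of n sites is up, each independently up w.p. p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

avail = availability(0.9, 3)  # 3*0.81*0.1 + 0.729 = 0.972
```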
12.
Knowledge patterns, such as association rules, clusters or decision trees, can be defined as concise and relevant information that can be extracted, stored, analyzed, and manipulated by knowledge workers in order to drive and specialize business decision processes. In this paper we deal with data mining patterns. The ability to manipulate different types of patterns under a unified environment is becoming a fundamental issue for any ‘intelligent’ and data-intensive application. However, approaches proposed so far for pattern management usually deal with specific and predefined types of patterns and mainly concern pattern extraction and exchange issues. Issues concerning the integrated, advanced management of heterogeneous patterns are in general not (or marginally) taken into account.
13.
In this paper, a unified framework for multimodal content retrieval is presented. The proposed framework supports retrieval of rich media objects as unified sets of different modalities (image, audio, 3D, video and text) by efficiently combining all monomodal heterogeneous similarities to a global one according to an automatic weighting scheme. Then, a multimodal space is constructed to capture the semantic correlations among multiple modalities. In contrast to existing techniques, the proposed method is also able to handle external multimodal queries, by embedding them to the already constructed multimodal space, following a space mapping procedure of a submanifold analysis. In our experiments with five real multimodal datasets, we show the superiority of the proposed approach against competitive methods.
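The global similarity described above, a weighted combination of monomodal similarities, can be sketched as follows (the normalized weighted sum and all names are our illustrative assumption; the paper's automatic weighting scheme is more involved):

```python
def combine(similarities, weights):
    """Global similarity: normalized weighted sum of the monomodal similarities."""
    total = sum(weights.values())
    return sum(weights[m] * s for m, s in similarities.items()) / total

# Hypothetical per-modality scores and weights
score = combine({'image': 0.8, 'text': 0.4}, {'image': 3.0, 'text': 1.0})
```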
14.
Bing Liu 《International Journal of Intelligent Systems》1995,10(8):691-713
The Constraint Satisfaction Problem (CSP) involves finding values for variables that satisfy a set of constraints. Consistency checking is the key technique in solving this class of problems. Past research has developed many algorithms for this purpose, e.g., node consistency, arc consistency, generalized node and arc consistency, and specific methods for checking specific constraints. In this article, an attempt is made to unify these algorithms into a common framework. This framework consists of two parts. The first part is a generic consistency check algorithm, which allows and encourages each individual constraint to be checked by its specific consistency methods. Such an approach provides a direct way of practically implementing the CSP model for real problem-solving. The second part is a general schema for describing the handling of each type of constraint. The schema characterizes various issues of constraint handling in constraint satisfaction, and provides a common language for expressing, discussing, and exchanging constraint handling techniques. © 1995 John Wiley & Sons, Inc.
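The generic-algorithm-plus-per-constraint-handler split can be illustrated with the classical AC-3 arc consistency algorithm, where each binary constraint supplies its own `allowed` predicate (a standard textbook sketch, not the article's exact schema):

```python
def revise(domains, xi, xj, allowed):
    """Remove values of xi that have no support in xj under `allowed(vi, vj)`."""
    removed = False
    for v in list(domains[xi]):
        if not any(allowed(v, w) for w in domains[xj]):
            domains[xi].discard(v)
            removed = True
    return removed

def ac3(domains, constraints):
    """AC-3: whenever a domain shrinks, re-check every arc pointing at it."""
    queue = list(constraints)
    while queue:
        xi, xj, allowed = queue.pop()
        if revise(domains, xi, xj, allowed):
            queue.extend(arc for arc in constraints if arc[1] == xi)
    return domains

# Hypothetical problem: x < y over small integer domains (arcs in both directions)
doms = {'x': {1, 2, 3}, 'y': {1, 2, 3}}
arcs = [('x', 'y', lambda a, b: a < b), ('y', 'x', lambda a, b: a > b)]
ac3(doms, arcs)
```

The outer loop never inspects what a constraint means; only the per-constraint predicate does, which is the decoupling the framework's first part advocates.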
15.
It is widely recognized that whether the selected kernel matches the data determines the performance of kernel-based methods. Ideally, the data is expected to be linearly separable in the kernel-induced feature space, so the Fisher linear discriminant criterion can be used as a cost function to optimize the kernel function. However, in many applications the data may not be linearly separable even after the kernel transformation; for example, the data may have a multimodal structure. In this case a nonlinear classifier is preferred, and the Fisher criterion is clearly not a suitable kernel optimization rule. Motivated by this issue, we propose a localized kernel Fisher criterion, instead of the traditional Fisher criterion, as the kernel optimization rule, to increase the local margins between embedded classes in the kernel-induced feature space. Experimental results on benchmark data and measured radar high-resolution range profile (HRRP) data show that classification performance can be improved by the proposed method.
16.
We propose a general framework for structure identification, as defined by Dechter and Pearl. It is based on the notion of prime implicate, and handles Horn, bijunctive and affine, as well as Horn-renamable formulas, for which, to our knowledge, no polynomial algorithm has been proposed before. This framework, although quite general, gives good complexity results, and in particular we get for Horn formulas the same running time and better output size than the algorithms previously known.
17.
This paper presents a novel regression framework to model both the translational equivalence problem and the parameter estimation problem in statistical machine translation (SMT). The proposed method kernelizes the training process by formulating the translation problem as a linear mapping between source and target word chunks (word n-grams of various lengths), which yields a regression problem with vector outputs. A kernel ridge regression model and a one-class classifier called maximum margin regression are explored for comparison, between which the former is shown to perform better on this task. The experimental results demonstrate its advantages in handling very high-dimensional features implicitly and flexibly. However, it shares the common drawback of kernel methods, i.e., a lack of scalability. For real-world application, a more practical solution based on locally linear regression hyperplane approximation is proposed, using online subsetting of relevant training examples. In addition, we introduce a novel way to integrate language models into this particular machine translation framework, which uses the language model as a penalty term in the objective function of the regression model, since its n-gram representation exactly matches the definition of our feature space.
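The regression-with-vector-outputs formulation can be sketched in its simplest primal form, a scalar input feature mapped to a vector output under ridge regularization (a didactic toy, not the paper's kernelized chunk-mapping model):

```python
def ridge_multi_output(xs, ys, lam):
    """Ridge regression, scalar input to vector output:
    w[d] = sum_i x_i * y_i[d] / (sum_i x_i^2 + lam), one weight per output dim."""
    denom = sum(x * x for x in xs) + lam
    return [sum(x * y[d] for x, y in zip(xs, ys)) / denom
            for d in range(len(ys[0]))]

# Toy "chunk mapping": one source feature to a 2-dimensional target indicator
w = ridge_multi_output([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]], lam=1.0)
```

The kernelized version replaces the explicit feature sums with kernel evaluations, which is what lets the paper handle very high-dimensional n-gram features implicitly.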
18.
The application of swarm intelligence (SI) to optimization has been gaining popularity, and various SI algorithms have been proposed in the last decade. However, as the number of SI algorithms has grown, most research has focused on the implementation of a specific SI algorithm, and little research has coherently analyzed the common features among SI algorithms. More importantly, no general principles exist for the implementation and improvement of SI algorithms for solving various optimization problems. Aiming to cover this research gap, a unified framework for SI is proposed, inspired by an in-depth analysis of SI algorithms. The unified framework consists of the most frequently used operations and strategies derived from typical examples of SI algorithms. Following the proposed unified framework, the intrinsic features of SI algorithms can be understood straightforwardly and the implementation and improvement of SI algorithms can be achieved effortlessly, which is of great importance in practice. The numerical experiments examine the effects of the possible strategies employed in the unified framework and provide pilot attempts to validate the performance of different combinations of strategies, which can not only facilitate the application of specific SI algorithms but also motivate SI algorithm innovation.
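The unified loop, initialize a population, evaluate it, apply an interchangeable update strategy, and track the global best, can be sketched as follows (the strategy shown is hypothetical, not one of the operations catalogued in the paper):

```python
import random

def swarm_optimize(objective, update, dim, pop_size=20, steps=100, seed=1):
    """Skeleton of the unified SI loop: initialize, evaluate, apply a pluggable
    update strategy to every individual, and track the global best."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(steps):
        pop = [update(x, best, rng) for x in pop]
        cand = min(pop, key=objective)
        if objective(cand) < objective(best):
            best = cand
    return best

def move_toward_best(x, best, rng):
    """Hypothetical strategy: a random step biased toward the global best."""
    return [xi + rng.uniform(0, 1) * (bi - xi) + rng.gauss(0, 0.1)
            for xi, bi in zip(x, best)]

sphere = lambda x: sum(v * v for v in x)
best = swarm_optimize(sphere, move_toward_best, dim=2)
```

Swapping `update` for another strategy is the only change needed to move between SI variants, which is the sense in which the framework makes implementation and improvement "effortless".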
19.
20.
Two notions introduced with the aim of classifying NP-complete optimization problems are compared: the notion of strong NP-completeness, due to Garey and Johnson, and that of simple and rigid problems, due to Paz and Moran. In particular, we show under what conditions reductions preserve rigidity, simplicity, strong simplicity, and p-simplicity, and we show that, under reasonable hypotheses, p-simple problems are solved by pseudo-polynomial algorithms and strongly NP-complete problems are weakly rigid.