20 similar records found (search time: 15 ms)
1.
M. Ehsan Abbasnejad Dhanesh Ramachandram Rajeswari Mandava 《Neural computing & applications》2011,20(5):703-715
Recently there has been a steep growth in the development of kernel-based learning algorithms. The intrinsic problem in such algorithms is the selection of the optimal kernel for the learning task of interest. In this paper, we propose an unsupervised approach to learn a linear combination of kernel functions, such that the resulting kernel best serves the objectives of the learning task. This is achieved by measuring the influence of each point on the structure of the dataset. The measure is calculated by constructing a weighted graph on which a random walk is performed. The measure of influence in the feature space is probabilistically related to the input space, which yields an optimization problem to be solved. The optimization problem is formulated in two different convex settings, namely linear and semidefinite programming, depending on the type of kernel combination considered. The contributions of this paper are twofold: first, a novel unsupervised approach to learn the kernel function, and second, a method to infer the local similarity represented by the kernel function by measuring the global influence of each point on the structure of the dataset. The proposed approach focuses on kernel selection, which is independent of the kernel-based learning algorithm. Empirical evaluation of the proposed approach on various datasets shows the effectiveness of the algorithm in practice.
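To make the random-walk influence measure concrete, the sketch below builds an RBF-weighted graph over the data and uses the stationary distribution of the induced random walk as a per-point influence score. The RBF affinity, the row-stochastic transition matrix and the choice of the stationary distribution as the score are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def influence_scores(X, gamma=1.0):
    """Stationary distribution of a random walk on an RBF-weighted graph.

    Illustrative sketch only: the RBF weights and the use of the stationary
    distribution as a per-point influence score are assumptions, not the
    paper's exact formulation.
    """
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-gamma * sq_dists)            # weighted graph (RBF affinities)
    np.fill_diagonal(W, 0.0)                 # no self-loops
    P = W / W.sum(axis=1, keepdims=True)     # row-stochastic transition matrix
    vals, vecs = np.linalg.eig(P.T)          # left eigenvector for eigenvalue 1
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = np.abs(pi) / np.abs(pi).sum()       # normalize to a probability vector
    return pi                                # larger value = more influential point

X = np.random.default_rng(0).normal(size=(50, 2))
print(influence_scores(X)[:5])
```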
2.
Dietrich CA Scheidegger CE Comba JL Nedel LP Silva CT 《IEEE transactions on visualization and computer graphics》2008,14(6):1651-1658
Marching Cubes is the most popular isosurface extraction algorithm due to its simplicity, efficiency and robustness. It has been widely studied, improved, and extended. While much early work was concerned with efficiency and correctness issues, lately there has been a push to improve the quality of Marching Cubes meshes so that they can be used in computational codes. In this work we present a new classification of MC cases that we call Edge Groups, which helps elucidate the issues that impact the triangle quality of the meshes that the method generates. This formulation allows a more systematic way to bound the triangle quality, and is general enough to extend to other polyhedral cell shapes used in other polygonization algorithms. Using this analysis, we also discuss ways to improve the quality of the resulting triangle mesh, including some that require only minor modifications of the original algorithm.
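As background on what "triangle quality" means for Marching Cubes output (a standard metric, not the Edge Groups bound itself), one common per-triangle measure is the radius ratio, sketched below.

```python
import numpy as np

def radius_ratio(a, b, c):
    """Triangle quality as 2 * inradius / circumradius (1 = equilateral, ~0 = sliver).

    Standard background metric used only to illustrate 'triangle quality';
    it is not the Edge Groups analysis from the paper.
    """
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    la, lb, lc = np.linalg.norm(b - c), np.linalg.norm(c - a), np.linalg.norm(a - b)
    s = 0.5 * (la + lb + lc)                                      # semi-perimeter
    area = np.sqrt(max(s * (s - la) * (s - lb) * (s - lc), 0.0))  # Heron's formula
    if area == 0.0:
        return 0.0
    return 2.0 * (area / s) / (la * lb * lc / (4.0 * area))

print(radius_ratio([0, 0], [1, 0], [0.5, np.sqrt(3) / 2]))  # ~1.0 (equilateral)
print(radius_ratio([0, 0], [1, 0], [2, 0.01]))              # near 0 (sliver)
```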
3.
A moving mesh approach to the numerical modelling of problems governed by nonlinear time-dependent partial differential equations (PDEs) is applied to the numerical modelling of glaciers driven by ice diffusion and accumulation/ablation. The primary focus of the paper is to demonstrate the numerics of the moving mesh approach applied to a standard parabolic PDE model in reproducing the main features of glacier flow, including tracking the moving boundary (snout). A secondary aim is to investigate waiting time conditions under which the snout moves.
4.
Adaptation in the presence of a general nonlinear parameterization: an error model approach
Ai-Poh Loh Annaswamy A.M. Skantze F.P. 《Automatic Control, IEEE Transactions on》1999,44(9):1634-1652
Parametric uncertainties in adaptive estimation and control have been dealt with, by and large, in the context of linear parameterizations. Algorithms based on the gradient-descent method either lead to instability or inaccurate performance when the unknown parameters occur nonlinearly. Complex dynamic models are bound to include nonlinear parameterizations, which necessitate new adaptation algorithms that behave in a stable and accurate manner. The authors introduce, in this paper, an error model approach to establish these algorithms and their global stability and convergence properties. A number of applications of this error model in adaptive estimation and control are included, in each of which the new algorithm is shown to result in global boundedness. Simulation results are presented which complement the authors' theoretical derivations.
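As background on the problem setting only (this is the baseline gradient scheme the abstract says can misbehave, not the authors' error-model algorithm), a plain gradient update for a predictor that is nonlinear in its parameter might look like the following sketch; the toy parameterization f(x, theta) = exp(-theta * x) is an assumption.

```python
import numpy as np

# Baseline gradient estimator for y = f(x, theta) with theta entering
# nonlinearly (toy choice: f(x, theta) = exp(-theta * x)).  This is the kind
# of scheme the abstract says can become unstable or inaccurate; it is NOT
# the error-model algorithm proposed in the paper.
rng = np.random.default_rng(1)
theta_true, theta_hat, gain = 2.0, 0.2, 0.5

for _ in range(2000):
    x = rng.uniform(0.1, 3.0)
    y = np.exp(-theta_true * x)              # measurement (noise-free toy case)
    y_hat = np.exp(-theta_hat * x)           # prediction with current estimate
    e = y_hat - y                            # prediction error
    grad = -x * np.exp(-theta_hat * x)       # d y_hat / d theta_hat
    theta_hat -= gain * e * grad             # gradient step on 0.5 * e**2

print(theta_hat)  # happens to converge here; not guaranteed for general nonlinearities
```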
5.
Benjamin S. Kirk John W. Peterson Roy H. Stogner Graham F. Carey 《Engineering with Computers》2006,22(3-4):237-254
In this paper we describe the libMesh (http://libmesh.sourceforge.net) framework for parallel adaptive finite element applications. libMesh is an open-source software library that has been developed to facilitate serial and parallel simulation of multiscale, multiphysics applications using adaptive mesh refinement and coarsening strategies. The main software development is being carried out in the CFDLab (http://cfdlab.ae.utexas.edu) at the University of Texas, but, as with other open-source software projects, contributions are being made elsewhere in the US and abroad. The main goals of this article are: (1) to provide a basic reference source that describes libMesh and the underlying philosophy and software design approach; (2) to give sufficient detail and references on the adaptive mesh refinement and coarsening (AMR/C) scheme for applications analysts and developers; and (3) to describe the parallel implementation and data structures with supporting discussion of domain decomposition, message passing, and details related to dynamic repartitioning for parallel AMR/C. Other aspects related to C++ programming paradigms, reusability for diverse applications, adaptive modeling, physics-independent error indicators, and similar concepts are briefly discussed. Finally, results from some applications using the library are presented and areas of future research are discussed.
6.
An increasing number of studies have appeared that evaluate and rank journal quality and the productivity of IS scholars and their institutions. In this paper, we describe the results of one recent study identifying the ‘Top 30’ IS Researchers, revealing many unexamined assumptions about which IS publication outlets should be included in any definition of high-quality, scholarly IS journals. Drawing from the argument that all categories and classification schemes are grounded in politics, we critique the process by which the recent study in question (and several earlier studies) have derived the set of journals from which they count researcher publications. Based on a critical examination of the widespread inclusion of practitioner outlets, and the consistent exclusion of European scholarly IS journals, we develop our own arguments for which journals should be included in such evaluations of researcher productivity. We conduct our own analysis of IS researcher productivity for the period 1999–2003, based on articles published in a geographically balanced set of 12 IS journals, and then we compare our results with those from the recent study in question and their predecessors. Our results feature a more diverse set of scholars – both in terms of location (specifically, the country and continent in which the researchers are employed) and gender. We urge future studies of IS research productivity to follow our practice of including high-quality European journals, while eschewing practitioner-oriented publications (such as Harvard Business Review and Communications of the ACM). We also advocate that such studies count only research contributions (e.g., research articles), and that other genres of non-research articles – such as book reviews, ‘issues and opinions’ pieces and editorial introductions – not be conflated with counts of research contributions.
7.
Wei-Min Lu 《Automatic Control, IEEE Transactions on》1995,40(9):1576-1588
A state-space approach to the Youla parameterization of stabilizing controllers for linear and nonlinear systems is suggested. The stabilizing controllers (or a class of stabilizing controllers for nonlinear systems) are characterized as fractional transformations of stable parameters. The main idea behind this approach is to decompose the output feedback stabilization problem into state feedback and state estimation problems. The parameterized output feedback controllers have separation structures. This machinery allows the parameterization of stabilizing controllers to be conducted directly in state space without using coprime factorization.
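For the linear case, the separation structure mentioned in the abstract can be written explicitly. The following is the standard observer-based form from the textbook Youla parameterization (background, not a quotation from the paper), where $F$ is a stabilizing state-feedback gain, $L$ a stabilizing observer gain and $Q$ an arbitrary stable parameter:

```latex
\begin{aligned}
\dot{\hat{x}} &= A\hat{x} + Bu + L\,(y - C\hat{x}),\\
u &= F\hat{x} + Q\,(y - C\hat{x}),
\end{aligned}
```

with $A + BF$ and $A - LC$ Hurwitz; sweeping $Q$ over all stable systems generates all stabilizing output-feedback controllers for the plant $\dot{x} = Ax + Bu$, $y = Cx$.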
8.
In the past few years there has been a tumultuous activity aimed at introducing novel conceptual schemes for quantum computing. The approach proposed in (Marzuoli and Rasetti, 2002, 2005a) relies on the (re)coupling theory of SU(2) angular momenta and can be viewed as a generalization, to arbitrary values of the spin variables, of the usual quantum-circuit model based on ‘qubits’ and Boolean gates. Computational states belong to finite-dimensional Hilbert spaces labelled by both discrete and continuous parameters, and unitary gates may depend on quantum numbers ranging over finite sets of values as well as continuous (angular) variables. Such a framework is an ideal playground in which to discuss discrete (digital) and analog computational processes, together with the relationships that arise between them when a consistent semiclassical limit is taken on discrete quantum gates. When working with purely discrete unitary gates, the simulator is naturally modelled as a family of quantum finite-state machines, which in turn represent discrete versions of topological quantum computation models. We argue that our model embodies a sort of unifying paradigm for computing inspired by Nature and, even more ambitiously, a universal setting in which suitably encoded quantum symbolic manipulations of combinatorial, topological and algebraic problems might find their ‘natural’ computational reference model.
9.
In typical nucleation, growth and coarsening problems in the study of defect/adatom accumulation in crystalline solids or surfaces, a large number of master equations are involved in describing the evolution process. As examples, defect clusters nucleate and grow from point defects in solids subjected to particle irradiation, and atoms deposited on a substrate form clusters leading to film growth. To solve the large number of master equations efficiently, the grouping method was used, which we have coded into a standard C++ program, taking full advantage of the object-oriented programming style supported by the C++ language. Because of the generic nature of this code, it may be of interest for the modelling of nucleation and growth processes in general. As an example demonstrating the application of this computer code, the Ostwald ripening process of vacancy clustering during aging in metallic nickel is calculated.
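A minimal sketch of what such a system of master (rate) equations looks like before grouping is given below; the rate coefficients, cluster-size cutoff and monomer mass-balance closure are illustrative assumptions, not the grouping method or the code described in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 50                                          # largest cluster size tracked (illustrative)
beta = 1e-3 * np.arange(1, N + 1) ** (1 / 3)    # attachment rates ~ n^(1/3) (assumed form)
alpha = 1e-4 * np.ones(N + 1)                   # emission rates (assumed constant)

def master_rhs(t, C):
    # C[n-1] is the concentration of clusters containing n monomers
    dC = np.zeros_like(C)
    C1 = C[0]
    for n in range(2, N + 1):
        gain = beta[n - 2] * C1 * C[n - 2]            # (n-1)-cluster + monomer -> n
        loss = beta[n - 1] * C1 * C[n - 1]            # n-cluster + monomer -> n+1
        emit_in = alpha[n] * C[n] if n < N else 0.0   # (n+1)-cluster -> n + monomer
        emit_out = alpha[n - 1] * C[n - 1]            # n-cluster -> (n-1) + monomer
        dC[n - 1] = gain - loss + emit_in - emit_out
    # monomer balance from total mass conservation (closure assumption)
    dC[0] = -np.sum(dC[1:] * np.arange(2, N + 1))
    return dC

C0 = np.zeros(N)
C0[0] = 1.0                                     # start from monomers only
sol = solve_ivp(master_rhs, (0.0, 1e4), C0, method="LSODA")
print(sol.y[:, -1][:10])                        # final concentrations of the smallest clusters
```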
10.
In this paper, we introduce a new class of fuzzy measures, referred to as q-measures, based on a modification of the definition of the well-known Sugeno λ-measure. Our proposed definition of the q-measures not only includes the λ-measure as a special case, but also preserves all desirable properties and avoids some of the limitations of the conventional λ-measure. The q-measure approach provides a more flexible and powerful method for constructing various fuzzy measures. We provide an iterative algorithm for constructing an interesting sequence of q-measures and analytically prove its convergence as a distinguishing characteristic of the proposed formulation.
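For background, the conventional Sugeno λ-measure being generalized here (this is the standard definition, not the q-measure itself) is characterized, on a finite set $X=\{x_1,\dots,x_n\}$ with densities $g_i = g_\lambda(\{x_i\})$, by

```latex
g_\lambda(A \cup B) = g_\lambda(A) + g_\lambda(B) + \lambda\, g_\lambda(A)\, g_\lambda(B),
\qquad A \cap B = \emptyset,\ \lambda > -1,
```

and the normalization $g_\lambda(X) = 1$ determines $\lambda$ as the unique root in $(-1,\infty)\setminus\{0\}$ of

```latex
\lambda + 1 = \prod_{i=1}^{n} \left(1 + \lambda\, g_i\right),
```

with $\lambda = 0$ (an additive measure) in the special case $\sum_i g_i = 1$.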
11.
In this article we address linear anti-windup design for linear discrete-time control systems guaranteeing regional and global stability and performance. The techniques that we develop are the discrete-time counterpart of existing techniques for anti-windup augmentation which lead to convex constructions by way of linear matrix inequalities (LMIs) when adopting static and plant order anti-windup augmentation. Interesting system theoretic interpretations of the performance bounds for the non-linear closed loop can also be given. We show here that parallel results apply to the discrete-time case. We derive the corresponding conditions and prove their effectiveness by adapting the continuous-time approaches to the discrete-time case.
12.
Mesh decomposition is critical for analyzing, understanding, editing and reusing mesh models. Although there are many methods for mesh decomposition, most handle only triangular meshes. In this paper, we present an automated method for decomposing a volumetric mesh into semantic components. Our method consists of three parts. First, the outer surface mesh of the volumetric mesh is decomposed into semantic features by applying existing surface mesh segmentation and feature recognition techniques. Then, for each recognized feature, its outer boundary lines are identified and the corresponding splitter element groups are set up accordingly. The inner volumetric elements of the feature are then obtained based on the established splitter element groups. Finally, each splitter element group is decomposed into two parts using the graph cut algorithm, so that each part belongs completely to one of the features adjacent to the splitter element group. In our graph cut algorithm, the weights of the edges in the dual graph are calculated based on an electric field generated from the vertices of the boundary lines of the features. Experiments on both tetrahedral and hexahedral meshes demonstrate the effectiveness of our method.
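To illustrate the final graph-cut step on a splitter element group (a toy stand-in, with made-up capacities in place of the electric-field-based weights), a minimum s-t cut over the dual graph can be computed as follows.

```python
import networkx as nx

# Toy dual graph of a splitter element group: nodes e1..e4 are volumetric
# elements, "s" and "t" are virtual terminals for the two adjacent features.
# Capacities stand in for the electric-field-based edge weights in the paper
# (the values here are made up).
G = nx.DiGraph()
edges = [
    ("s", "e1", 5.0), ("s", "e2", 4.0),
    ("e1", "e2", 1.0), ("e1", "e3", 0.5),
    ("e2", "e4", 0.6), ("e3", "e4", 1.2),
    ("e3", "t", 5.0), ("e4", "t", 4.0),
]
for u, v, w in edges:
    G.add_edge(u, v, capacity=w)
    G.add_edge(v, u, capacity=w)   # undirected adjacency modelled as two arcs

cut_value, (side_a, side_b) = nx.minimum_cut(G, "s", "t")
print("cut cost:", cut_value)
print("elements assigned to feature A:", side_a - {"s"})
print("elements assigned to feature B:", side_b - {"t"})
```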
13.
In this paper, we present a detailed theoretical analysis of the information-theoretic Independent Component Analysis (IT-ICA) approach. We first provide a number of lemmas and theorems on properties of the corresponding cost function in the general n-channel case with a differentiable, odd, monotonically decreasing nonlinearity. A theorem on the behaviour of the cost function along a radially outward line is given to characterize the global configuration of the cost function in the parameter space. Furthermore, for the 2-channel IT-ICA system with cubic nonlinearity, we not only solve exhaustively for all equilibrium points and the conditions for their stability, but also give a global convergence theorem.
14.
15.
This article describes a practical approach to the manual re-engineering of numerical software systems. The strategy has been applied to re-develop a medium-sized FORTRAN-77 Computational Fluid Dynamics (CFD) code into C++. The motivation for software reverse-engineering is described, as are the special problems which influence the re-use of a legacy numerical code. The aim of this case study was to extract the implicit logical structure from the legacy code to form the basis of a C++ version using an imposed object-oriented design. An important secondary consideration was the preservation of tried-and-tested numerical algorithms without excessive degradation of run-time performance. To this end an incremental re-engineering strategy was adopted that consisted of nine main stages, with extensive regression testing between each stage. The stages used in this development are described in this paper, with examples to illustrate the techniques employed and the problems encountered. This paper concludes with an appraisal of the development strategy used and a discussion of the central problems that have been addressed in this case study.
16.
This note extends the robust global direct adaptive control results of Kosut and Friedlander [1] and Kosut [2] to the case of linear finite-dimensional systems with unstructured time-varying uncertainties. Subject to conditions which include i) a certain operator (usually denoted $H^{*}_{e0}$) is strictly positive real, and ii) the tuned error lies in $L_2$, theorems are presented establishing the $L_2$ and $L_\infty$ stability of the adaptive control scheme.
17.
The optimal driving strategy for a train is essentially a power–speedhold–coast–brake strategy, unless the track contains steep grades, in which case the speedhold mode must be interrupted by phases of power for steep uphill sections and coast for steep downhill sections. The Energymiser® device is used on freight and passenger trains in Australia and the United Kingdom to provide on-board advice for drivers about energy-efficient driving strategies. Energymiser® uses a specialized numerical algorithm to find optimal switching points for each steep section of track. Although the algorithm finds a feasible strategy that satisfies the necessary optimality conditions, there has been no direct proof that the corresponding switching points are uniquely defined. We use a comprehensive perturbation analysis to show that a key local energy functional is convex with a unique minimum, and in so doing prove that the optimal switching points are uniquely defined for each steep section of track. Hence we also deduce that the global optimal strategy is unique. We present two examples using realistic parameter values.
18.
The assignment problem arises in multi-robot task-allocation scenarios. Inspired by existing techniques that employ task exchanges between robots, this paper introduces an algorithm for solving the assignment problem that has several appealing features for online, distributed robotics applications. The method may start with any initial matching and incrementally improve the current solution to reach the global optimum, producing valid assignments at any intermediate point. It is an any-time algorithm with an attractive performance profile: quality improves linearly with stages (or time). Additionally, the algorithm is comparatively straightforward to implement and is efficient both theoretically (its complexity of $O(n^3 \lg n)$ is better than that of many widely used solvers) and practically (comparable to the fastest implementations, for up to hundreds of robots/tasks). The algorithm generalizes the “swap” primitives used by existing task exchange methods already used in the robotics community but, uniquely, is able to obtain global optimality via communication with only a subset of robots during each stage. We present a centralized version of the algorithm and two decentralized variants that trade off computational and communication complexity. The centralized version turns out to be a computational improvement and reinterpretation of the little-known method of Balinski–Gomory proposed half a century ago. This uncovers a deeper understanding of the relationship between the approximate swap-based techniques developed by roboticists and combinatorial optimization techniques such as the Hungarian and Auction algorithms, which were developed by operations researchers but are used extensively by roboticists.
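For comparison, the classical optimal assignment can be computed with an off-the-shelf Hungarian-style solver (this is a baseline, not the paper's swap-based or Balinski–Gomory-derived algorithm); a minimal sketch with a made-up cost matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy cost matrix: cost[i, j] = cost of assigning robot i to task j (made-up values).
rng = np.random.default_rng(0)
cost = rng.uniform(1.0, 10.0, size=(5, 5))

robots, tasks = linear_sum_assignment(cost)        # minimizes total assignment cost
print(list(zip(robots.tolist(), tasks.tolist())))  # optimal robot -> task pairing
print("total cost:", cost[robots, tasks].sum())
```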
19.
Global deterministic identifiability of nonlinear systems is studied by constructing the family of local state isomorphisms that preserve the structure of the parametric system. The method is simplified for homogeneous systems, where such isomorphisms are shown to be linear, thereby reducing the identifiability problem to solving a set of algebraic equations. The known conditions for global identifiability in linear and bilinear systems are special cases of these results.
20.
Wallace G. Ferreira Alberto L. Serpa 《Structural and Multidisciplinary Optimization》2018,57(1):131-159
In this work we present LSEGO, an approach to drive efficient global optimization (EGO), based on LS (least squares) ensembles of metamodels. By means of an LS ensemble of metamodels it is possible to estimate the uncertainty of the prediction with any kind of model (not only kriging) and provide an estimate of the expected improvement function. For the problems studied, the proposed LSEGO algorithm has been shown to find the global optimum with fewer optimization cycles than the classical EGO approach requires. As more infill points are added per cycle, convergence to the global optimum (exploitation) becomes faster, as does the improvement of the metamodel over the design space (exploration); this is especially valuable as the number of variables increases, when standard single-point EGO can be quite slow to reach the optimum. LSEGO has proven to be a feasible option for driving EGO with ensembles of metamodels as well as for constrained problems, and it is not restricted to kriging or to a single infill point per optimization cycle.
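A minimal sketch of the expected improvement computed from an ensemble of metamodels (the `.predict` interface, the weights and the use of the weighted spread as the uncertainty estimate are assumptions for illustration, not the paper's exact LS-ensemble formulas):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(x, models, weights, f_min):
    """Expected improvement at x for minimization, using an ensemble of surrogates.

    `models` is a hypothetical list of fitted metamodels whose .predict(x)
    returns a scalar; the weighted ensemble mean is the prediction and the
    weighted spread of the individual predictions serves as the uncertainty.
    """
    preds = np.array([m.predict(x) for m in models], dtype=float)
    weights = np.asarray(weights, dtype=float)
    mu = np.dot(weights, preds)                           # ensemble prediction
    sigma = np.sqrt(np.dot(weights, (preds - mu) ** 2))   # spread as uncertainty proxy
    if sigma < 1e-12:
        return 0.0                                        # no predicted uncertainty
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```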