Similar Articles (20 results found)
1.
Linear subspace learning is of great importance for visualizing high-dimensional observations. Sparsity-preserved learning (SPL) is a recently developed technique for linear subspace learning. Its objective function is formulated using the $\ell_2$-norm, which means the obtained projection vectors are easily distorted by outliers. In this paper, we develop a new SPL algorithm, called SPL-L1, based on the $\ell_1$-norm instead of the $\ell_2$-norm. The proposed approach seeks projection vectors by minimizing a reconstruction error subject to a constraint on sample dispersion, both defined using the $\ell_1$-norm. As a robust alternative, SPL-L1 works well in the presence of atypical samples. We design an iterative algorithm under the bound-optimization framework to solve for the projection vectors of SPL-L1. Experiments on image visualization demonstrate the superiority of the proposed method.
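The contrast between $\ell_2$- and $\ell_1$-based objectives can be seen already in one dimension: the $\ell_2$ minimizer of the deviations from a set of samples is their mean, while the $\ell_1$ minimizer is their median, which a single outlier barely moves. A minimal sketch of this intuition (the sample values are made up for illustration):

```python
import numpy as np

# Why an l1 objective resists outliers: the minimizer of sum_i |x - a_i|^p
# is the mean for p = 2 but the median for p = 1.  One outlier drags the
# mean far away, while the median stays put -- the same intuition behind
# replacing the l2-norm by the l1-norm in SPL-L1.
samples = np.array([1.0, 1.1, 0.9, 1.05, 100.0])   # last value is an outlier

l2_minimizer = samples.mean()      # heavily distorted by the outlier
l1_minimizer = np.median(samples)  # robust: stays near the inliers
```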

2.
The paper presents a linear matrix inequality (LMI)-based approach for the simultaneous optimal design of output feedback control gains and damping parameters in structural systems with collocated actuators and sensors. The proposed integrated design is based on simplified $\mathcal{H}^2$ and $\mathcal{H}^{\infty}$ norm upper bound calculations for collocated structural systems. Using these upper bounds, the combined design of the damping parameters and the output feedback controller to satisfy closed-loop $\mathcal{H}^2$ or $\mathcal{H}^{\infty}$ performance specifications is formulated as an LMI optimization problem in the unknown damping coefficients and feedback gains. Numerical examples motivated by structural and aerospace engineering applications demonstrate the advantages and computational efficiency of the proposed technique for integrated structural and control design. The effectiveness of the integrated design becomes apparent especially in very large scale structural systems, where the use of classical methods for solving the Lyapunov and Riccati equations associated with $\mathcal{H}^2$ and $\mathcal{H}^{\infty}$ designs is time-consuming or intractable.

3.
A $C^0$ weak Galerkin (WG) method is introduced and analyzed in this article for solving the biharmonic equation in 2D and 3D. A discrete weak Laplacian is defined for $C^0$ functions, which is then used to design the weak Galerkin finite element scheme. This WG finite element formulation is symmetric, positive definite, and parameter free. Optimal-order error estimates are established for the weak Galerkin finite element solution in both a discrete $H^2$ norm and the standard $H^1$ and $L^2$ norms under appropriate regularity assumptions. Numerical results are presented to confirm the theory. As a technical tool, a refined Scott-Zhang interpolation operator is constructed to assist the corresponding error estimates. This refined interpolation preserves the volume mass of order $(k+1-d)$ and the surface mass of order $(k+2-d)$ for $P_{k+2}$ finite element functions in $d$-dimensional space.

4.
A set of quantum states for $m$ colors and another set of quantum states for $N$ coordinates are proposed in this paper to represent, respectively, the colors and coordinates of the $N$ pixels in an image. We design an algorithm by which an image of $N$ pixels and $m$ different colors is stored in a quantum system using only $2N+m$ qubits. An algorithm for quantum image compression is also proposed; simulation results on the Lena image show a lossless compression ratio of 2.058. Moreover, an image segmentation algorithm based on quantum search, which can find all solutions in expected time $O(t\sqrt{N})$, is proposed, where $N$ is the number of pixels and $t$ is the number of targets to be segmented.

5.
In this paper, we consider linearly constrained $\ell_1$-$\ell_2$ minimization and propose an accelerated Bregman method for solving this minimization problem. The proposed method is based on the extrapolation technique used in the accelerated proximal gradient methods of Nesterov and others, and on the equivalence between the Bregman method and the augmented Lagrangian method. A convergence rate of $\mathcal{O}(1/k^2)$ is proved for the proposed method when it is applied to a more general linearly constrained nonsmooth convex minimization problem. We numerically test the proposed method on a synthetic problem from compressive sensing. The numerical results confirm that our accelerated Bregman method is faster than the original Bregman method.
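The extrapolation technique mentioned above can be sketched on the simpler unconstrained $\ell_1$-regularized least-squares problem. This is a FISTA-style illustration of Nesterov extrapolation only, not the paper's accelerated Bregman method, and all names and parameter values are illustrative:

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1 (componentwise shrinkage)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def accelerated_l1_ls(A, b, lam, iters=1000):
    """Nesterov-extrapolated proximal gradient for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1  (FISTA-style sketch)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of A^T(Ax - b)
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(iters):
        x_new = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # extrapolation step
        x, t = x_new, t_new
    return x
```

The extrapolation step is what lifts the $\mathcal{O}(1/k)$ rate of plain proximal gradient to $\mathcal{O}(1/k^2)$, the same rate the paper proves for its Bregman variant.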

6.
This paper studies an online scheduling problem with immediate and reliable lead-time quotation. A manufacturer either accepts an order by quoting a reliable lead-time on its arrival or rejects it immediately. The objective is to maximize the total revenue of completed orders. Keskinocak et al. (Management Science 47(2):264-279, 2001) studied a linear revenue function in a discrete model with integer order release times and proposed a competitive strategy, Q-FRAC. This paper investigates a relaxed revenue function in both discrete and continuous models, where orders are released at integer and real time points, respectively. For the discrete model, we present a revised Q-FRAC strategy that is optimal in competitiveness for concave and linear revenue functions with unit order length and uniform order weights, improving the previous results of Keskinocak et al. For the scenario with uniform length $p$ and nonuniform order weights, we prove an optimal strategy for the case $p=1$ and the nonexistence of competitive strategies for the case $p>1$. For the continuous model, we present a strategy that is optimal in competitiveness for the case with uniform order weights and linear revenue functions, and prove the nonexistence of competitive strategies for the other case, with nonuniform order weights.

7.
In this paper we study the weak convergence of semidiscrete and fully discrete finite element methods for the stochastic elastic equation driven by additive noise, based on $C^0$ or $C^1$ piecewise polynomials. To simplify the analysis of weak convergence, we rewrite the stochastic elastic equation as an abstract problem and write the solutions of the semidiscrete and fully discrete problems in a unified form. We show that the weak order is twice the strong order, both in time and in space. Numerical experiments are carried out to verify the theoretical results.

8.
The TreeRank algorithm was recently proposed in [1] and [2] as a scoring-based method built on recursive partitioning of the input space. This tree induction algorithm builds orderings by recursively optimizing the Receiver Operating Characteristic curve through a one-step optimization procedure called LeafRank. One of the aims of this paper is an in-depth analysis of the empirical performance of variants of the TreeRank/LeafRank method. Numerical experiments based on both artificial and real data sets are provided. Further experiments using resampling and randomization, in the spirit of bagging and random forests [3, 4], are developed, and we show how they increase both stability and accuracy in bipartite ranking. Moreover, an empirical comparison with other efficient scoring algorithms, such as RankBoost and RankSVM, is presented on UCI benchmark data sets.
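The ROC-based criterion that bipartite-ranking methods such as TreeRank optimize is summarized by the empirical AUC: the fraction of positive-negative pairs that a scoring function ranks correctly. A minimal sketch (function name and the half-credit tie convention are my own):

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC of a scoring function: the fraction of
    (positive, negative) pairs ranked correctly, with ties counting 1/2.
    This is the quantity bipartite-ranking methods aim to maximize."""
    pairs = [(p > n) + 0.5 * (p == n)
             for p in scores_pos for n in scores_neg]
    return sum(pairs) / len(pairs)
```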

9.
Recently, a great deal of research work has been devoted to algorithms that estimate the intrinsic dimensionality (id) of a given dataset, that is, the minimum number of parameters needed to represent the data without information loss. id estimation is important for several reasons: the capacity and generalization capability of discriminant methods depend on it; id is necessary information for any dimensionality reduction technique; in neural network design, the number of hidden units in the encoding middle layer should be chosen according to the id of the data; and the id value is strongly related to the model order of a time series, which is crucial for obtaining reliable time series predictions. Although many estimation techniques have been proposed in the literature, most of them fail on noisy data or compute underestimated values when the id is sufficiently high. In this paper, after reviewing the id estimators most relevant to our work, we provide a theoretical motivation for the bias that causes the underestimation effect, and we present two id estimators based on the statistical properties of manifold neighborhoods, developed to reduce this effect. We exhaustively evaluate the proposed techniques on synthetic and real datasets, employing an objective evaluation measure to compare their performance with that of state-of-the-art algorithms; the results show that the proposed methods are promising and produce reliable estimates even in the difficult case of datasets drawn from non-linearly embedded manifolds, characterized by high id.
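As background for neighborhood-based id estimation, the classic Levina-Bickel maximum-likelihood estimator (here with the MacKay-Ghahramani averaging of the per-point inverse estimates) can be sketched as follows. This is a standard kNN baseline of the kind the paper reviews, not one of the two estimators it proposes:

```python
import numpy as np

def mle_id(X, k=10):
    """Levina-Bickel MLE of intrinsic dimension from k-nearest-neighbor
    distances, averaging the per-point *inverse* estimates
    (MacKay-Ghahramani correction).  O(n^2) distances: small datasets only."""
    # full pairwise distance matrix
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    D.sort(axis=1)                    # row i: sorted distances from point i
    Tk = D[:, k][:, None]             # distance to the k-th neighbor
    # log(T_k / T_j) for j = 1..k-1 (column 0 is the zero self-distance)
    ratios = np.log(Tk / D[:, 1:k])
    inv_m = ratios.sum(axis=1) / (k - 1)   # per-point inverse id estimate
    return 1.0 / inv_m.mean()
```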

10.
One-way quantum computation (1WQC) is a model of universal quantum computation in which a specific highly entangled state, called a cluster state (or graph state), allows quantum computation by single-qubit measurements only. The computations in this model are organized as measurement patterns. Previously, an automatic approach was proposed to extract a 1WQC pattern from a quantum circuit: it takes a quantum circuit consisting of CZ and \(J(\alpha)\) gates and translates it into an optimized 1WQC pattern. However, quantum synthesis algorithms usually decompose circuits using a library containing CNOT and arbitrary single-qubit gates. In this paper, we show how this approach can be modified so that it takes a circuit consisting of CNOT and arbitrary single-qubit gates and produces an optimized 1WQC pattern. The single-qubit gates are first automatically \(J\)-decomposed and then added to the measurement patterns. Moreover, a new optimization technique is proposed, with algorithms that add Pauli gates to the measurement patterns directly, i.e., without their \(J\)-decomposition, which leads to more compact patterns for these gates. Using these algorithms, an improved approach for adding single-qubit gates to measurement patterns is proposed. The optimized pattern of CNOT gates is added directly to the measurement patterns. Experimental results show that the proposed approach efficiently produces optimized patterns for quantum circuits and that adding CNOT gates directly to the measurement patterns decreases the translation runtime.

11.
The two-dimensional orthogonal matching pursuit (2D-OMP) algorithm is an extension of one-dimensional OMP (1D-OMP) whose complexity and memory usage are lower than those of 1D-OMP when applied to 2D sparse signal recovery. However, the major shortcoming of 2D-OMP remains its long computing time. To overcome this disadvantage, we develop a novel parallel design of the 2D-OMP algorithm on a graphics processing unit (GPU) in this paper. We first analyze the complexity of 2D-OMP and point out that the bottlenecks lie in matrix inversion and projection. After adopting a matrix-inverse update strategy, whose performance is superior to traditional methods, to reduce the cost of the original matrix inversion, projection becomes the most time-consuming module. Hence, a parallel matrix-matrix multiplication based on a tiling strategy is used to accelerate the projection computation on the GPU. Moreover, a fast matrix-vector multiplication, a parallel reduction algorithm, and several other parallel techniques are exploited to further boost the performance of 2D-OMP on the GPU. For a sensing matrix of size 128 \(\times\) 256 (176 \(\times\) 256, resp.) and a 256 \(\times\) 256 image, experimental results show that the parallel 2D-OMP achieves 17\(\times\) to 41\(\times\) (24\(\times\) to 62\(\times\), resp.) speedup over the original C code compiled with the -O2 optimization option. Higher speedups can be obtained for larger-size image recovery.
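For reference, the core of plain (1D) OMP, greedy atom selection followed by a least-squares projection onto the current support, can be sketched on the CPU; the projection is exactly the module the abstract identifies as the GPU bottleneck. A minimal sketch only, not the paper's 2D or parallel implementation:

```python
import numpy as np

def omp(A, y, k):
    """Plain 1D orthogonal matching pursuit: greedily recover a k-sparse x
    with y = Ax.  Each iteration picks the atom most correlated with the
    residual, then re-projects y onto the span of the chosen atoms."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # atom selection: column of A most correlated with the residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # projection: least squares on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```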

12.
To overcome premature convergence in particle swarm optimization (PSO), we introduce dynamical crossover, a crossover operator with variable length and position, into PSO; the resulting algorithm is briefly denoted CPSO. To overcome the drawbacks of the $k$-means algorithm, which finds only convex clusters and is sensitive to the initial points, a hybrid clustering algorithm based on CPSO is proposed. The difference between this work and existing ones is that CPSO is introduced into $k$-means for the first time. Experimental results on several data sets illustrate that the proposed clustering algorithm overcomes these shortcomings of $k$-means and obtains correct clustering results. An application to image segmentation illustrates that the proposed algorithm achieves good performance.
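A crossover operator with random position and random segment length, as the abstract describes, might look like the following. This is a reconstruction from the abstract's description, not the authors' code:

```python
import random

def dynamic_crossover(p1, p2):
    """Crossover with a randomly chosen position and a randomly chosen
    (variable) segment length between two particle position vectors --
    the kind of operator CPSO injects into PSO to escape premature
    convergence.  Illustrative sketch only."""
    n = len(p1)
    start = random.randrange(n)                    # random crossover position
    length = random.randrange(1, n - start + 1)    # random segment length
    c1, c2 = p1[:], p2[:]
    c1[start:start + length] = p2[start:start + length]
    c2[start:start + length] = p1[start:start + length]
    return c1, c2
```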

13.
In this paper the harmony search (HS) algorithm and Lyapunov theory are hybridized to design a stable adaptive fuzzy tracking control strategy for vision-based navigation of autonomous mobile robots. The proposed variant of the HS algorithm, with a completely dynamic harmony memory (named here the DyHS algorithm), is utilized to design two self-adaptive fuzzy controllers for the $x$-direction and $y$-direction movements of a mobile robot. These fuzzy controllers are optimized, in both their structures and free parameters, so that they guarantee the desired stability and simultaneously provide satisfactory tracking performance for vision-based navigation. In addition, concurrent and preferential combinations of the global-search capability of the DyHS algorithm and a Lyapunov-theory-based local search method are employed to provide a high degree of automation in the controller design process. The proposed schemes have been implemented in both simulation and real-life experiments. The results demonstrate the usefulness of the proposed design strategy and show overall comparable performance when compared with two other competing stochastic optimization algorithms, namely genetic algorithms and particle swarm optimization.
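The canonical HS loop (memory consideration, pitch adjustment, random selection) can be sketched as below. This is basic HS with a fixed-size memory, whereas the paper's DyHS variant uses a completely dynamic harmony memory; all parameter values here are illustrative:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
    """Canonical harmony search minimizing f over box bounds.
    hms: harmony memory size, hmcr: memory-consideration rate,
    par: pitch-adjustment rate, bw: pitch bandwidth (fraction of range)."""
    dim = len(bounds)
    memory = [[random.uniform(lo, hi) for lo, hi in bounds]
              for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:            # memory consideration
                v = random.choice(memory)[d]
                if random.random() < par:         # pitch adjustment
                    v += random.uniform(-bw, bw) * (hi - lo)
            else:                                 # pure random selection
                v = random.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        s = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                     # replace the worst harmony
            memory[worst], scores[worst] = new, s
    return min(zip(scores, memory))               # (best score, best harmony)
```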

14.
Efficient processing of high-dimensional similarity joins plays an important role in a wide variety of data-driven applications. In this paper, we consider the $\varepsilon$-join variant of the problem: given two $d$-dimensional datasets and a parameter $\varepsilon$, the task is to find all pairs of points, one from each dataset, that are within $\varepsilon$ distance of each other. We propose a new $\varepsilon$-join algorithm, called Super-EGO, which belongs to the EGO family of join algorithms. The new algorithm gains its advantage by using a novel data-driven dimensionality reordering technique, by developing a new EGO-strategy that more aggressively avoids unnecessary computation, and by developing a parallel version of the algorithm. We study the newly proposed Super-EGO algorithm on large real and synthetic datasets. The empirical study demonstrates a significant advantage of the proposed solution over existing state-of-the-art techniques.
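The basic pruning idea behind such joins, ordering the data on one coordinate so that candidate pairs are confined to a sliding window, can be sketched as follows. This is a naive illustration only; Super-EGO adds the data-driven dimensionality reordering, the more aggressive EGO-strategy, and parallelism:

```python
import numpy as np

def epsilon_join(A, B, eps):
    """All pairs (i, j) with ||A[i] - B[j]|| <= eps, found by sorting both
    sets on the first coordinate and scanning a sliding window: a point
    pair can only qualify if the first coordinates differ by at most eps."""
    ia, ib = np.argsort(A[:, 0]), np.argsort(B[:, 0])
    As, Bs = A[ia], B[ib]
    pairs, start = [], 0
    for i, a in enumerate(As):
        # advance the window to B points with first coordinate >= a[0]-eps
        while start < len(Bs) and Bs[start, 0] < a[0] - eps:
            start += 1
        j = start
        while j < len(Bs) and Bs[j, 0] <= a[0] + eps:
            if np.linalg.norm(a - Bs[j]) <= eps:
                pairs.append((int(ia[i]), int(ib[j])))   # original indices
            j += 1
    return pairs
```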

15.
We present a technique for numerically solving convection-diffusion problems in domains $\varOmega$ with curved boundary. The technique consists in approximating the domain $\varOmega$ by polyhedral subdomains $\mathsf{D}_h$ on which a finite element method is used to solve for the approximate solution. The approximation is then suitably extended to the remaining part of the domain $\varOmega$. This approach allows the use of only polyhedral elements; there is no need to fit the boundary in order to obtain an accurate approximation of the solution. To achieve this, the boundary condition on $\partial\varOmega$ is transferred to the boundary of $\mathsf{D}_h$ by using simple line integrals. We apply this technique to the hybridizable discontinuous Galerkin method and provide extensive numerical experiments showing that, whenever the distance of $\mathsf{D}_h$ to $\partial\varOmega$ is of the order of the meshsize $h$, the convergence properties of the resulting method are the same as those for the case $\varOmega = \mathsf{D}_h$. We also show numerical evidence indicating that the ratio of the $L^2(\varOmega)$ norm of the error in the scalar variable computed with $d>0$ to that computed with $d=0$ remains constant (and fairly close to one) whenever the distance $d$ is proportional to $\min\{h, Pe^{-1}\}/(k+1)^2$, where $Pe$ is the so-called Péclet number.

16.
The Voronoi diagram is an important technique for answering nearest-neighbor queries in spatial databases. We study how the Voronoi diagram can be used for uncertain spatial data, which are inherent in scientific and business applications. Specifically, we propose the Uncertain-Voronoi diagram (UV-diagram), which divides the data space into disjoint "UV-partitions". Each UV-partition $P$ is associated with a set $S$ of objects, such that for any point $q$ located in $P$, the objects in $S$ are exactly those with a nonzero probability of being $q$'s nearest neighbor. The UV-diagram thus enables queries that return objects with nonzero chances of being the nearest neighbor (NN) of a given point $q$. It supports "continuous nearest-neighbor search", which refreshes the set of NN objects of $q$ as the position of $q$ changes. It also allows the analysis of nearest-neighbor information, for example, to find the number of objects that are nearest neighbors of any point in a given area. A UV-diagram requires exponential construction and storage costs. To tackle these problems, we devise an alternative representation of the UV-diagram using a set of UV-cells. The UV-cell of an object $o$ is the extent $e$ within which $o$ can be the nearest neighbor of any point $q \in e$. We study how to speed up the derivation of UV-cells by considering nearby objects. We also use the UV-cells to design the UV-index, which supports different queries and can be constructed in polynomial time. We have performed extensive experiments on both real and synthetic data to validate the efficiency of our approaches.
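The quantity a UV-partition encodes, the probability of each uncertain object being the nearest neighbor of a point $q$, can be approximated by Monte Carlo sampling. The disk-shaped uniform uncertainty model below is an assumption for illustration; the UV-diagram characterizes these regions exactly rather than by sampling:

```python
import numpy as np

def nn_probabilities(q, objects, samples=500, seed=0):
    """Monte-Carlo estimate of each uncertain object's probability of being
    the nearest neighbor of query point q.  Each object is a (center,
    radius) disk with uniform location uncertainty -- an illustrative
    model, not the paper's exact UV-cell computation."""
    rng = np.random.default_rng(seed)
    wins = np.zeros(len(objects))
    for _ in range(samples):
        locs = []
        for c, r in objects:
            # draw one location uniformly from the object's disk
            ang = rng.uniform(0.0, 2.0 * np.pi)
            rad = r * np.sqrt(rng.uniform())
            locs.append(c + rad * np.array([np.cos(ang), np.sin(ang)]))
        dists = [np.linalg.norm(q - loc) for loc in locs]
        wins[int(np.argmin(dists))] += 1          # this object was the NN
    return wins / samples
```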

17.
A procedure is developed for the design of reinforced concrete footings subjected to vertical, concentric column loads that satisfies both structural requirements and geotechnical limit states, using a hybrid Big Bang-Big Crunch (BB-BC) algorithm. The objectives of the optimization are to minimize cost, CO$_2$ emissions, and the weighted aggregate of cost and CO$_2$. Cost is based on the materials and labor required for the construction of reinforced concrete footings; CO$_2$ emissions are associated with the extraction and transportation of raw materials, the processing, manufacturing, and fabrication of products, and the emissions of equipment involved in the construction process. The cost and CO$_2$ objective functions are based on weighted values and are subject to the bending moment, shear force, and reinforcing details specified by the American Concrete Institute (ACI 318-11), as well as soil bearing and displacement limits. Two sets of design examples are presented: low-cost and low-CO$_2$-emission designs based solely on geotechnical considerations, and designs that also satisfy the ACI 318-11 code for structural concrete. A multi-objective optimization is applied to cost and CO$_2$ emissions. Results are presented that demonstrate the effects of applied load, soil properties, allowable settlement, and concrete strength on the designs.
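A plain BB-BC iteration alternates a "Big Crunch" (collapse the population to a fitness-weighted center of mass) with a "Big Bang" (re-scatter around that center with shrinking spread). A minimal sketch on a generic box-constrained objective; the paper's hybrid adds ACI 318-11 structural checks and geotechnical limits, which are not modeled here:

```python
import numpy as np

def big_bang_big_crunch(f, bounds, pop=40, iters=100, seed=0):
    """Basic Big Bang-Big Crunch minimization of f over box bounds.
    Big Crunch: fitness-weighted center of mass of the population.
    Big Bang: re-scatter around the center with spread shrinking as 1/k."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pts = rng.uniform(lo, hi, (pop, len(bounds)))   # initial Big Bang
    best_x, best_f = None, np.inf
    for k in range(1, iters + 1):
        vals = np.array([f(p) for p in pts])
        i = int(vals.argmin())
        if vals[i] < best_f:                        # track global best
            best_x, best_f = pts[i].copy(), float(vals[i])
        # Big Crunch: weight each point by inverse fitness gap
        w = 1.0 / (vals - vals.min() + 1e-12)
        center = (w[:, None] * pts).sum(axis=0) / w.sum()
        # Big Bang: new population around the center, cooling with k
        spread = (hi - lo) * rng.standard_normal((pop, len(bounds))) / k
        pts = np.clip(center + spread, lo, hi)
    return best_x, best_f
```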

18.
This study aims to minimize the sum of a smooth function and a nonsmooth \(\ell_{1}\)-regularized term. This problem includes as a special case the \(\ell_{1}\)-regularized convex minimization problem arising in signal processing, compressive sensing, machine learning, data mining, and so on. However, the non-differentiability of the \(\ell_{1}\)-norm poses challenges, especially in the large problems encountered in many practical applications. This study proposes, analyzes, and tests a Barzilai-Borwein gradient algorithm. At each iteration, the generated search direction has the descent property and can be easily derived by minimizing a local approximate quadratic model while exploiting the favorable structure of the \(\ell_{1}\)-norm. A nonmonotone line search technique is incorporated to find a suitable stepsize along this direction. The algorithm is easy to implement; each iteration requires only the values of the objective function and the gradient of the smooth term. Under some conditions, the proposed algorithm is shown to be globally convergent. Limited experiments on nonconvex unconstrained problems from the CUTEr library with additive \(\ell_{1}\)-regularization illustrate that the proposed algorithm performs quite satisfactorily. Extensive experiments on \(\ell_{1}\)-regularized least squares problems in compressive sensing verify that our algorithm compares favorably with several state-of-the-art algorithms specifically designed in recent years.
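The iteration described above, soft-thresholding a gradient step whose length is set by a Barzilai-Borwein curvature estimate, can be sketched as follows. This is a SpaRSA-style sketch under that interpretation; the paper's nonmonotone line-search safeguard is omitted:

```python
import numpy as np

def soft(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bb_l1(grad, x0, lam, iters=200, alpha0=1.0):
    """Minimize smooth(x) + lam*||x||_1 by soft-thresholding gradient steps
    whose curvature estimate alpha comes from the Barzilai-Borwein ratio
    s'y / s's.  Sketch only: no nonmonotone line-search safeguard."""
    x = x0.copy()
    g = grad(x)
    alpha = alpha0
    for _ in range(iters):
        x_new = soft(x - g / alpha, lam / alpha)   # prox of the BB model
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-12:
            alpha = (s @ y) / (s @ s)              # BB curvature estimate
        x, g = x_new, g_new
    return x
```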

19.
In this study we utilize the self-sensing capabilities of piezoelectric micro-actuators in hard disk drives (HDDs) to actively suppress in-plane resonance modes of the suspension. The self-sensing circuit is based on a tunable capacitance bridge that decouples the control signal from the sensing signal in the micro-actuator. A hybrid modeling technique based on a realization algorithm and least-squares optimization for continuous-time systems is used to model the single-input dual-output system. An analog controller was computed using standard $H_{\infty}$ controller design tools and reduced in order using model reduction routines. Experimental implementation with an analog filter design shows the effectiveness of the proposed method in reducing the main sway modes of the suspension.

20.
Software development processes have been evolving from rigid, pre-specified, and sequential to incremental and iterative. This evolution has been dictated by the need to accommodate evolving user requirements and to reduce the delay between design decisions and user feedback. Formal verification techniques, however, have largely ignored this evolution; even where they have made enormous improvements and found significant use in practice, as in the case of model checking, they have remained confined to the niche of safety-critical systems. Model checking verifies whether a system model \(\mathcal{M}\) satisfies a set of requirements, formalized as a set of logic properties \(\Phi\). Current model-checking approaches, however, implicitly rely on the assumption that both the complete model \(\mathcal{M}\) and the whole set of properties \(\Phi\) are fully specified when verification takes place. Very often, however, \(\mathcal{M}\) is subject to change because its development is iterative and its definition evolves through stages of incompleteness, in which alternative design decisions are explored, typically to evaluate quality trade-offs. Evolving system specifications of this kind call for novel verification approaches that tolerate incompleteness and support incremental analysis of alternative designs for certain functionalities. This is exactly the focus of this paper, which develops an incremental model-checking approach for evolving Statecharts. Statecharts have been chosen both because they are increasingly used in practice and because they natively support model refinements.


Copyright © 北京勤云科技发展有限公司  京ICP备09084417号