Similar Literature
20 similar documents found.
1.
The explosive development of computational tools these days is threatening the security of cryptographic algorithms, which are regarded as the primary traditional methods for ensuring information security. The physical layer security approach is introduced as a method both for improving the confidentiality of secret key distribution in cryptography and for enabling data transmission without relying on higher-layer encryption. In this paper, the cooperative jamming paradigm, one of the techniques used at the physical layer, is studied, and the resulting power allocation problem, which aims to maximize the sum of secrecy rates subject to power constraints, is formulated as a nonconvex optimization problem. The objective function is a so-called DC (Difference of Convex functions) function, and some of the constraints are coupled. We propose a new DC formulation and develop an efficient DCA (DC Algorithm) to deal with this nonconvex program. The DCA introduces the elegant concept of approximating the original nonconvex program by a sequence of convex ones: each iteration of DCA requires the solution of a convex subproblem. The main advantage of the proposed approach is that it leads to strongly convex quadratic subproblems with separable variables in the objective function, which can be tackled by both distributed and centralized methods. One of the major contributions of the paper is a highly efficient distributed algorithm for solving the convex subproblem. We adopt the dual decomposition method, which results in iteratively computing the projection of points onto a very simple structural set that can be determined by an inexpensive procedure. The numerical results show the efficiency and the superiority of the new DCA-based algorithm compared with existing approaches.
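As a concrete illustration of the DCA iteration described in this abstract, the following minimal sketch (not the authors' power-allocation formulation; the toy DC decomposition and all names are illustrative) applies DCA to f(x) = g(x) - h(x) with g(x) = x^4 and h(x) = 2x^2: each iteration linearizes h at the current iterate and solves the resulting convex subproblem, here in closed form.

```python
import numpy as np

def dca_toy(x0, max_iter=50, tol=1e-8):
    """Minimal DCA sketch for f(x) = g(x) - h(x) with
    g(x) = x**4 (convex) and h(x) = 2*x**2 (convex).
    Each iteration replaces h by its linearization at x_k and solves
    the convex subproblem  min_x g(x) - h'(x_k) * x,
    which here has the closed-form solution x = cbrt(h'(x_k) / 4)."""
    x = float(x0)
    for _ in range(max_iter):
        grad_h = 4.0 * x                   # h'(x_k)
        x_new = np.cbrt(grad_h / 4.0)      # minimizer of the convex subproblem
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

# Starting from x0 = 2.0, the iterates decrease f(x) = x**4 - 2*x**2
# and converge to the stationary point x = 1.
print(dca_toy(2.0))
```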

2.
During the last few years, GPU manycore devices have demonstrated their usefulness in accelerating computationally intensive problems. Although arriving at a parallelization of a highly parallel algorithm is an affordable task, the optimization of GPU code is a challenging activity. The main reason for this is the number of parameters, programming choices, and tuning techniques available, many of them related to complex and sometimes hidden architectural details. A useful strategy for systematically attacking these optimization problems is to characterize the different kernels of the application, and to use this knowledge to select appropriate configuration parameters. The All-Pair Shortest-Path (APSP) problem is a well-known problem in graph theory whose objective is to find the shortest paths between all pairs of nodes in a graph. It can be solved by highly parallel and computationally intensive tasks, making it a good candidate for exploitation by manycore devices. In this paper, we use kernel characterization criteria to optimize an APSP algorithm implementation for NVIDIA GPUs. Our experimental results show that the combined use of proper configuration policies and the concurrent-kernels capability of new CUDA architectures leads to a performance improvement of up to 62% with respect to one of the possible configurations recommended by CUDA, taken as the baseline.
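For context, the APSP computation being accelerated can be written as the classic Floyd-Warshall recurrence. The vectorized NumPy sketch below is only a CPU-side illustration of the problem, not the paper's CUDA implementation or its kernel-characterization procedure.

```python
import numpy as np

def floyd_warshall(dist):
    """All-Pairs Shortest-Path via Floyd-Warshall.
    `dist` is an (n, n) matrix of edge weights with np.inf for missing
    edges and 0 on the diagonal; returns the matrix of shortest distances."""
    d = dist.astype(float).copy()
    n = d.shape[0]
    for k in range(n):
        # Relax every pair (i, j) through intermediate vertex k.
        d = np.minimum(d, d[:, k:k + 1] + d[k:k + 1, :])
    return d

inf = np.inf
graph = np.array([[0, 3, inf, 7],
                  [8, 0, 2, inf],
                  [5, inf, 0, 1],
                  [2, inf, inf, 0]])
print(floyd_warshall(graph))
```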

3.
Despite impressive progress in Boolean Satisfiability (SAT) solving and several extensions to pseudo-Boolean (PB) constraints, many applications that use SAT, such as high-performance formal verification techniques, are still restricted to checking the satisfiability of certain conditions. However, there is also frequently a need to express a preference for certain solutions. Extending SAT solving to Boolean optimization allows the use of objective functions to describe a desirable solution. Although recent work in 0-1 Integer Linear Programming (ILP) offers extensions that can optimize a linear objective function, this is often achieved by solving a series of SAT or ILP decision problems. Our work articulates some pitfalls of this approach. An objective function may complicate the use of any symmetry that might be present in the given constraints, even when the constraints are unsatisfiable and the objective function is irrelevant. We propose several new techniques that treat objective functions differently from CNF/PB constraints and accelerate Boolean optimization in many practical cases. We also develop an adaptive flow that analyzes a given Boolean optimization problem and picks the symmetry-breaking technique that is best suited to the problem's characteristics. Empirically, we show that for non-trivial objective functions that destroy constraint symmetries, the benefit of static symmetry-breaking is lost, but dynamic symmetry-breaking accelerates problem-solving in many cases. We also introduce a new objective function, Localized Bit Selection (LBS), that can be used to specify a preference for bit values in formal verification applications.
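The "series of decision problems" baseline mentioned above amounts to repeatedly asking a SAT/PB solver whether any solution meets a candidate objective bound. The sketch below shows that outer loop with a hypothetical decide(bound) oracle standing in for a real solver call; it illustrates the baseline, not the symmetry-breaking techniques proposed in the paper.

```python
def optimize_by_decision(decide, lo, hi):
    """Binary-search the smallest integer objective value v in [lo, hi]
    for which the decision oracle reports a feasible assignment.
    `decide(bound)` is a hypothetical stand-in for a SAT/PB solver call
    that returns True iff some solution has objective <= bound."""
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if decide(mid):
            best, hi = mid, mid - 1   # feasible: try a tighter bound
        else:
            lo = mid + 1              # infeasible: relax the bound
    return best

# Toy oracle: pretend the true optimum is 13.
print(optimize_by_decision(lambda b: b >= 13, 0, 100))   # -> 13
```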

4.
We present new methods for fast Gaussian process (GP) inference in large-scale scenarios, including exact multi-class classification with label regression, hyperparameter optimization, and uncertainty prediction. In contrast to previous approaches, we use a full Gaussian process model without sparse approximation techniques. Our methods are based on exploiting generalized histogram intersection kernels and their fast kernel multiplications. We empirically validate the suitability of our techniques in a wide range of scenarios with tens of thousands of examples. Whereas plain GP models are intractable in these settings due to both memory consumption and computation time, our results show that exact inference can indeed be done efficiently. In consequence, we enable every important piece of the Gaussian process framework—learning, inference, hyperparameter optimization, variance estimation, and online learning—to be used in realistic scenarios with far more than a handful of data points.
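The generalized histogram intersection kernel exploited here has a simple form, k(x, y) = sum_d min(x_d, y_d) for non-negative features. The NumPy sketch below computes the kernel matrix; it illustrates only the kernel definition, not the authors' fast multiplications or GP inference code.

```python
import numpy as np

def histogram_intersection_kernel(X, Y):
    """Kernel matrix K[i, j] = sum_d min(X[i, d], Y[j, d]) for
    non-negative feature matrices X (n, d) and Y (m, d)."""
    # Broadcast to (n, m, d) and take the element-wise minimum.
    return np.minimum(X[:, None, :], Y[None, :, :]).sum(axis=2)

X = np.abs(np.random.default_rng(0).normal(size=(5, 3)))
K = histogram_intersection_kernel(X, X)
print(K.shape, np.allclose(K, K.T))   # (5, 5) True, symmetric as expected
```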

5.
The multiple kernel learning (MKL) approach has been proposed for kernel methods and has shown high performance in solving some real-world applications. It consists of learning the optimal kernel from a single layer of multiple predefined kernels. Unfortunately, this approach is not rich enough to solve relatively complex problems. With the emergence and success of the deep learning concept, multilayer multiple kernel learning (MLMKL) methods were inspired by the idea of deep architectures. They were introduced in order to improve conventional MKL methods. Such architectures tend to learn deep kernel machines by exploring combinations of multiple kernels in a multilayer structure. However, existing MLMKL methods often have trouble optimizing the network for two or more layers. Additionally, they do not always outperform the simplest method of combining multiple kernels (i.e., MKL). In order to improve the effectiveness of MKL approaches, we introduce in this paper a novel backpropagation MLMKL framework. Specifically, we propose to optimize the network with an adaptive backpropagation algorithm. We use the gradient ascent method instead of the dual objective function or the estimation of the leave-one-out error. We test our proposed method through a large set of experiments on a variety of benchmark data sets. We have successfully optimized the system over many layers. Empirical results over an extensive set of experiments show that our algorithm achieves high performance compared to the traditional MKL approach and existing MLMKL methods.

6.
Selecting features for multiclass classification is a critically important task for pattern recognition and machine learning applications. Especially challenging is selecting an optimal subset of features from high-dimensional data, which typically have many more variables than observations and contain significant noise, missing components, or outliers. Existing methods either cannot handle high-dimensional data efficiently or scalably, or can only obtain a local optimum instead of the global optimum. Toward selecting the globally optimal subset of features efficiently, we introduce a new selector, which we call the Fisher-Markov selector, to identify those features that are the most useful in describing essential differences among the possible groups. In particular, in this paper we present a way to represent essential discriminating characteristics together with sparsity as an optimization objective. With properly identified measures for sparseness and discriminativeness in possibly high-dimensional settings, we take a systematic approach to optimizing the measures to choose the best feature subset. We use Markov random field optimization techniques to solve the formulated objective functions for simultaneous feature selection. Our results are noncombinatorial, and they can achieve the exact global optimum of the objective function for some special kernels. The method is fast; in particular, it can be linear in the number of features and quadratic in the number of observations. We apply our procedure to a variety of real-world data, including a mid-dimensional optical handwritten digit data set and high-dimensional microarray gene expression data sets. The effectiveness of our method is confirmed by experimental results. In pattern recognition and from a model selection viewpoint, our procedure shows that it is possible to select the most discriminating subset of variables by solving a very simple unconstrained objective function, which in fact can be obtained with an explicit expression.
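The Fisher-Markov selector itself relies on a Markov random field formulation, but the kind of per-feature discriminativeness measure it builds on can be sketched with a plain Fisher score followed by top-k selection. The code below is a simplified stand-in under that assumption, not the paper's algorithm.

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature Fisher score: between-class variance of the class
    means divided by the pooled within-class variance.
    X is (n_samples, n_features); y holds integer class labels."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

def select_top_k(X, y, k):
    """Indices of the k most discriminative features."""
    return np.argsort(fisher_scores(X, y))[::-1][:k]

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 20))
X[:, 3] += 3 * y            # make feature 3 strongly class-dependent
print(select_top_k(X, y, 5))   # feature 3 should rank first
```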

7.
Existing collaborative optimization techniques with multiple coupled subsystems are predominantly focused on single-objective deterministic optimization. However, many engineering optimization problems have a system and subsystems that can each be multi-objective, constrained, and subject to uncertainty. The literature reports on a few deterministic Multi-objective Multi-Disciplinary Optimization (MMDO) techniques. However, these techniques in general require a large number of function calls, and their computational cost can be exacerbated when uncertainty is present. In this paper, a new Approximation-Assisted Multi-objective collaborative Robust Optimization (New AA-McRO) under interval uncertainty is presented. This new AA-McRO approach uses a single-objective optimization problem to coordinate all system and subsystem multi-objective optimization problems in a Collaborative Optimization (CO) framework. The approach converts the consistency constraints of CO into penalty terms which are integrated into the system and subsystem objective functions. The new AA-McRO is able to explore the design space better and obtain optimum design solutions more efficiently. Also, the new AA-McRO obtains an estimate of Pareto optimum solutions for MMDO problems whose system-level objective and constraint functions are relatively insensitive (or robust) to input uncertainties. Another characteristic of the new AA-McRO is the use of online approximation for objective and constraint functions to perform system robustness evaluation and subsystem-level optimization. Based on the results obtained from a numerical and an engineering example, it is concluded that the new AA-McRO performs better than previously reported MMDO methods.

8.
We present a bundle algorithm for multiple-instance classification and ranking. These frameworks yield improved models on many problems possessing special structure. Multiple-instance loss functions are typically nonsmooth and nonconvex, and current algorithms convert these to smooth nonconvex optimization problems that are solved iteratively. Inspired by the latest linear-time subgradient-based methods for support vector machines, we optimize the objective directly using a nonconvex bundle method. Computational results show that this method is linearly scalable without sacrificing generalization accuracy, permitting modeling on new and larger data sets in computational chemistry and other applications. This new implementation facilitates modeling with kernels.

9.
Qin Huabin, Liu Mingliang, Wang Jian, Guo Zijian, Liu Junbo. Applied Intelligence, 2021, 51(7): 4888-4907.

Traditional fault diagnosis methods for DC (direct current) motors require high expertise and considerable human labor, and they also suffer from low efficiency and poor accuracy. To address these problems, a new adaptive and intelligent mechanical fault diagnosis method for DC motors based on variational mode decomposition (VMD), singular value decomposition (SVD), and residual deep convolutional neural networks with wide first-layer kernels (R-WDCNN) was proposed. First, the vibration signals of a DC motor were collected by a purpose-built acquisition system. Subsequently, VMD was employed to decompose the raw signals adaptively into several intrinsic mode functions (IMFs). Moreover, the transient frequency means method, which can quickly and accurately obtain the optimal value of K, is proposed. SVD was applied to reduce the dimensionality of the IMF matrix for further feature extraction. Finally, the reconstructed matrix containing the main fault feature information was used to train and test the R-WDCNN. Based on residual learning, identification and classification of four types of vibration signals were achieved, while the R-WDCNN was optimized by the adaptive batch normalization algorithm (AdaBN). The recognition rate and the convergence were improved by this classifier. The results show that the method proposed in this paper has better adaptability and intelligence than other methods, and the R-WDCNN can reach a 94% recognition rate on unknown samples. Therefore, the proposed method is more intelligent and accurate than other methods.
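The SVD step of the pipeline, which reduces the IMF matrix before it is fed to the network, can be sketched as a plain truncated SVD. The shapes below are illustrative assumptions; the VMD decomposition and the R-WDCNN classifier are not reproduced.

```python
import numpy as np

def truncated_svd_features(imf_matrix, rank):
    """Return the rank-`rank` reconstruction of an IMF matrix
    (n_imfs, n_samples), keeping only the dominant singular components
    that carry the main fault-feature information."""
    U, s, Vt = np.linalg.svd(imf_matrix, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# Illustrative stand-in for K = 6 IMFs of a 1024-sample vibration signal.
rng = np.random.default_rng(0)
imfs = rng.normal(size=(6, 1024))
reduced = truncated_svd_features(imfs, rank=3)
print(reduced.shape)   # (6, 1024), now of rank 3
```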


10.
Community detection is believed to be a very important tool for understanding both the structure and function of complex networks, and has been intensively investigated in recent years. Community detection can be considered a multi-objective optimization problem, and nature-inspired optimization techniques have shown promising results in dealing with it. In this study, we present a novel multi-objective discrete backtracking search optimization algorithm with decomposition for community detection in complex networks. First, we present a discrete variant of the backtracking search optimization algorithm (DBSA) in which the updating rules of individuals are redesigned based on the network topology. Then, a novel multi-objective discrete method with decomposition (MODBSA/D), built on DBSA, is proposed to minimize two objective functions of the community detection problem, the Negative Ratio Association (NRA) and the Ratio Cut (RC). Finally, the proposed algorithm is tested on several real-world networks to evaluate its performance. The results clearly show that MODBSA/D has effective and promising performance for community detection in complex networks.
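For a given candidate partition, the two objectives minimized by MODBSA/D can be evaluated directly: the Negative Ratio Association rewards dense intra-community links, while the Ratio Cut penalizes inter-community links. The NumPy sketch below evaluates both for an unweighted adjacency matrix and a label vector; it does not implement the backtracking search itself.

```python
import numpy as np

def nra_and_rc(adj, labels):
    """Negative Ratio Association and Ratio Cut of a partition.
    `adj` is a symmetric (n, n) adjacency matrix; `labels[i]` is the
    community of node i. Both objectives are to be minimized."""
    nra, rc = 0.0, 0.0
    for c in np.unique(labels):
        inside = labels == c
        size = inside.sum()
        internal = adj[np.ix_(inside, inside)].sum()   # 2 * internal edges
        cut = adj[np.ix_(inside, ~inside)].sum()       # edges leaving c
        nra -= internal / size
        rc += cut / size
    return nra, rc

# Two triangles joined by a single edge; the natural split is {0,1,2} / {3,4,5}.
adj = np.zeros((6, 6), int)
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1
print(nra_and_rc(adj, np.array([0, 0, 0, 1, 1, 1])))   # good partition
print(nra_and_rc(adj, np.array([0, 0, 1, 1, 0, 1])))   # worse partition
```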

11.
This paper introduces a general principle for constructing multiscale kernels on surface meshes, and presents a construction of the multiscale pre-biharmonic and multiscale biharmonic kernels. Our construction is based on an optimization problem that seeks to minimize a smoothness criterion, the Laplacian energy, subject to a sparsity-inducing constraint. Namely, we use the lasso constraint, which sets an upper bound on the ℓ1-norm of the solution, to obtain a family of solutions parametrized by this upper-bound parameter. The interplay between sparsity and smoothness results in smooth kernels that vanish away from the diagonal. We prove that the resulting kernels have gradually changing supports, consistent behavior over partial and complete meshes, and interesting limiting behaviors (e.g., in the limit of large scales, the multiscale biharmonic kernel converges to the Green's function of the biharmonic equation); in addition, these kernels are based on intrinsic quantities and so are insensitive to isometric deformations. We show empirically that our kernels are shape-aware, are robust to noise, tessellation, and partial objects, and are fast to compute. Finally, we demonstrate that the new kernels can be useful for function interpolation and shape correspondence.

12.
Multiple Kernel Learning (MKL) is a popular generalization of kernel methods which allows the practitioner to optimize over convex combinations of kernels. We observe that many recent MKL solutions can be cast in the framework of oracle-based optimization, and show that they vary in how they generate query points. Such methods are popular because the oracle happens to be implementable as a support vector machine. Motivated by the success of centering approaches in interior point methods, we propose a new approach to optimizing the MKL objective based on the analytic center cutting plane method (accpm). Our experimental results show that accpm outperforms the state of the art in terms of rate of convergence and robustness. Further analysis sheds some light on why MKL may not always improve classification accuracy over naive solutions.

13.
Lei Mengyi, Zhou Yongquan, Luo Qifang. Multimedia Tools and Applications, 2020, 79(43-44): 32151-32168.

The flower pollination algorithm (FPA) is a swarm-based optimization technique that has attracted the attention of many researchers in several optimization fields due to its impressive characteristics. This paper proposes a new application of FPA in the field of image processing to solve the color quantization problem, in which the mean square error is selected as the objective function to be minimized. The proposed FPA-based color image quantization algorithm is verified by comparison with K-means and other swarm intelligence techniques. Computational results show that the proposed method can generate a quantized image at low computational cost. Moreover, the quality of the generated image is better than that of the images obtained by six well-known color quantization methods.
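The fitness function being minimized, the mean square error between the original image and its version quantized with a candidate palette, can be evaluated as below; FPA (or any other optimizer) then searches over palettes to minimize it. This sketch covers only the objective evaluation, under illustrative shapes.

```python
import numpy as np

def quantization_mse(image, palette):
    """Mean square error of quantizing `image` (h, w, 3) with the
    candidate `palette` (k, 3): every pixel is mapped to its nearest
    palette color and the per-channel squared error is averaged."""
    pixels = image.reshape(-1, 3).astype(float)
    # Squared distance from every pixel to every palette color.
    d2 = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    return ((pixels - palette[nearest]) ** 2).mean()

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3))
palette = rng.integers(0, 256, size=(16, 3)).astype(float)
print(quantization_mse(img, palette))
```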


14.
Large margin hidden Markov models for speech recognition
In this paper, motivated by large margin classifiers in machine learning, we propose a novel method to estimate continuous-density hidden Markov models (CDHMMs) for speech recognition according to the principle of maximizing the minimum multiclass separation margin. The approach is named large margin HMM. First, we show that this type of large margin HMM estimation problem can be formulated as a constrained minimax optimization problem. Second, we propose to solve this constrained minimax optimization problem with a penalized gradient descent algorithm, where the original objective function, i.e., the minimum margin, is approximated by a differentiable function and the constraints are cast as penalty terms in the objective function. The new training method is evaluated on the speaker-independent isolated E-set recognition and the TIDIGITS connected digit string recognition tasks. Experimental results clearly show that the large margin HMMs consistently outperform conventional HMM training methods. It has been consistently observed that the large margin training method yields significant recognition error rate reductions even on top of some popular discriminative training methods.

15.
The accuracy of optical flow estimation algorithms has been improving steadily, as evidenced by results on the Middlebury optical flow benchmark. The typical formulation, however, has changed little since the work of Horn and Schunck. We attempt to uncover what has made recent advances possible through a thorough analysis of how the objective function, the optimization method, and modern implementation practices influence accuracy. We discover that “classical” flow formulations perform surprisingly well when combined with modern optimization and implementation techniques. One key implementation detail is the median filtering of intermediate flow fields during optimization. While this improves the robustness of classical methods, it actually leads to higher-energy solutions, meaning that these methods are not optimizing the original objective function. To understand the principles behind this phenomenon, we derive a new objective function that formalizes the median filtering heuristic. This objective function includes a non-local smoothness term that robustly integrates flow estimates over large spatial neighborhoods. By modifying this new term to include information about flow and image boundaries, we develop a method that can better preserve motion details. To take advantage of the trend towards video in wide-screen format, we further introduce an asymmetric pyramid downsampling scheme that enables the estimation of longer-range horizontal motions. The methods are evaluated on the Middlebury, MPI Sintel, and KITTI datasets using the same parameter settings.
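The key implementation detail highlighted here, median filtering of the intermediate flow field between optimization steps, is a one-liner in practice. The sketch below applies SciPy's median_filter to each flow component independently with an assumed 5x5 window; the non-local smoothness term that formalizes the heuristic is not reproduced.

```python
import numpy as np
from scipy.ndimage import median_filter

def median_filter_flow(flow, size=5):
    """Median-filter a dense flow field of shape (h, w, 2), smoothing the
    horizontal (u) and vertical (v) components separately, as done between
    incremental optimization steps."""
    u = median_filter(flow[..., 0], size=size)
    v = median_filter(flow[..., 1], size=size)
    return np.stack([u, v], axis=-1)

rng = np.random.default_rng(0)
noisy_flow = rng.normal(size=(64, 64, 2))
print(median_filter_flow(noisy_flow).shape)   # (64, 64, 2)
```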

16.
Kernel functions play a central role in kernel methods; accordingly, over the years the optimization of kernel functions has been a promising research area. Ideally, Fisher discriminant criteria can be used as an objective function to optimize the kernel function so as to enlarge the margin between different classes. Unfortunately, Fisher criteria are optimal only in the case that all the classes are generated from underlying multivariate normal distributions with a common covariance matrix but different means, and each class is expressed by a single cluster. Because of these assumptions, Fisher criteria are clearly not a suitable choice as a kernel optimization rule in some applications, such as multimodally distributed data. In order to solve this problem, many improved discriminant criteria (DC) have recently been developed. Therefore, to apply these discriminant criteria to kernel optimization, in this paper we propose a unified kernel optimization framework based on a data-dependent kernel function, which can use any discriminant criterion formulated in a pairwise manner as the objective function. Under this kernel optimization framework, to employ different discriminant criteria one only has to change the corresponding affinity matrices, without having to resort to any complex derivations in feature space. Experimental results on several benchmark data sets demonstrate the efficiency of our method.

17.
One approach to multiobjective optimization is to define a scalar substitute objective function that aggregates all objectives and solve the resulting aggregate optimization problem (AOP). In this paper, we discern that the objective function in quasi-separable multidisciplinary design optimization (MDO) problems can be viewed as an aggregate objective function (AOF). We consequently show that a method that can solve quasi-separable problems can also be used to obtain Pareto points of associated AOPs. This is useful when AOPs are too hard to solve or when the design engineer does not have access to the models necessary to evaluate all the terms of the AOF. In this case, decomposition-based design optimization methods can be useful to solve the AOP as a quasi-separable MDO problem. Specifically, we use the analytical target cascading methodology to formulate decomposed subproblems of quasi-separable MDO problems and coordinate their solution in order to obtain Pareto points of the associated AOPs. We first illustrate the approach using a well-known simple geometric programming example and then present a vehicle suspension design problem with three objectives related to ground vehicle ride and handling.
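The link between aggregate objective functions and Pareto points can be illustrated with the simplest AOF, a weighted sum of the objectives: sweeping the weight and minimizing the aggregate traces out (the convex part of) the Pareto front. The SciPy sketch below uses a toy bi-objective problem and is unrelated to the suspension example or to analytical target cascading itself.

```python
import numpy as np
from scipy.optimize import minimize

# Toy bi-objective problem: f1(x) = x^2, f2(x) = (x - 2)^2.
f1 = lambda x: float(x[0] ** 2)
f2 = lambda x: float((x[0] - 2.0) ** 2)

def pareto_point(w):
    """Minimize the weighted-sum AOF  w*f1 + (1-w)*f2  for one weight w."""
    res = minimize(lambda x: w * f1(x) + (1.0 - w) * f2(x), x0=[0.0])
    return f1(res.x), f2(res.x)

# Sweeping the weight traces the Pareto front between (0, 4) and (4, 0).
for w in np.linspace(0.1, 0.9, 5):
    print(round(w, 2), tuple(round(v, 3) for v in pareto_point(w)))
```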

18.
Bandwidth optimization is considered when several classes of Internet traffic are served by generalized processor sharing (GPS) servers. Internet traffic shows self-similar patterns that make it difficult to obtain analytical performance results for GPS. Thus, for performance estimation of the different classes of traffic, we use fluid simulation techniques that can reduce the simulation complexity compared to packet-level simulation. Using the relationship between the guaranteed bandwidth vector and the corresponding performance, we formulate a bandwidth optimization problem that minimizes the total bandwidth such that the performance requirements are satisfied. We use an exterior penalty function method to solve the optimization problem. However, a penalized objective function may have a local minimum that is not a global minimum. Thus, we propose a new methodology to circumvent this limitation of the exterior penalty function method.
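The exterior penalty method replaces the constrained minimization with a sequence of unconstrained ones, adding rho * max(0, g(x))^2 for every violated constraint and increasing rho between rounds. The sketch below applies the idea to a toy bandwidth-style problem (minimize x1 + x2 subject to x1 + 2*x2 >= 3 and x >= 0); it is not the GPS/fluid-simulation model of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def exterior_penalty(objective, violations, x0, rho0=1.0, growth=10.0, rounds=6):
    """Exterior penalty method: minimize
        objective(x) + rho * sum(max(0, g_i(x))**2)
    for an increasing sequence of penalty weights rho."""
    x, rho = np.asarray(x0, float), rho0
    for _ in range(rounds):
        penalized = lambda z: objective(z) + rho * sum(
            max(0.0, g(z)) ** 2 for g in violations)
        x = minimize(penalized, x).x
        rho *= growth
    return x

# Toy bandwidth problem: minimize x1 + x2  subject to  x1 + 2*x2 >= 3,  x >= 0.
objective = lambda x: x[0] + x[1]
violations = [lambda x: 3.0 - (x[0] + 2.0 * x[1]),   # performance requirement
              lambda x: -x[0], lambda x: -x[1]]      # non-negativity
print(exterior_penalty(objective, violations, x0=[0.0, 0.0]))  # approx. [0, 1.5]
```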

19.
A small number of combinatorial optimization problems have search spaces that correspond to elementary landscapes, where the objective function f is an eigenfunction of the Laplacian that describes the neighborhood structure of the search space. Many problems are not elementary; however, the objective function of a combinatorial optimization problem can always be expressed as a superposition of multiple elementary landscapes if the underlying neighborhood used is symmetric. This paper presents theoretical results that provide the foundation for algebraic methods that can be used to decompose the objective function of an arbitrary combinatorial optimization problem into a sum of subfunctions, where each subfunction is an elementary landscape. Many steps of this process can be automated, and indeed a software tool could be developed that assists the researcher in finding a landscape decomposition. This methodology is then used to show that the subset sum problem is a superposition of two elementary landscapes, and to show that the quadratic assignment problem is a superposition of three elementary landscapes.
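The defining property of an elementary landscape, L f = lambda * (f - f_bar) where L is the Laplacian of the neighborhood graph, can be checked numerically on small instances. The sketch below verifies it for the ONEMAX function under the one-bit-flip neighborhood on 4-bit strings; it illustrates the definition only, not the subset-sum or QAP decompositions derived in the paper.

```python
import itertools
import numpy as np

n = 4
states = list(itertools.product([0, 1], repeat=n))
f = np.array([sum(s) for s in states], float)          # ONEMAX objective

# Laplacian L = D - A of the one-bit-flip (Hamming distance 1) neighborhood.
A = np.array([[1.0 if sum(a != b for a, b in zip(s, t)) == 1 else 0.0
               for t in states] for s in states])
L = np.diag(A.sum(axis=1)) - A

# Elementary landscape test:  L f == lam * (f - mean(f))  for some scalar lam.
centered = f - f.mean()
lam = (L @ f)[0] / centered[0]
print(lam, np.allclose(L @ f, lam * centered))          # 2.0 True
```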

20.
Piecewise linear optimization is one of the most frequently used optimization models in practice, with applications in transportation, finance, and supply-chain management. In this paper, we investigate a particular piecewise linear optimization problem: optimizing the norm of piecewise linear functions (NPLF). Specifically, we are interested in solving a class of Brugnano–Casulli piecewise linear systems (PLS), which can be reformulated as an NPLF problem. Generally speaking, NPLF is an optimization problem with a nonsmooth, nonconvex objective function. A new and efficient optimization approach based on DC (Difference of Convex functions) programming and DCA (DC Algorithms) is developed. With a suitable DC formulation, we design a DCA scheme, named ℓ1-DCA, for the problem of optimizing the ℓ1-norm of NPLF. Thanks to particular properties of the problem, we prove that under some conditions our proposed algorithm converges to an exact solution after a finite number of iterations. In addition, when a nonglobal solution is found, a numerical procedure is introduced to find a feasible point with a smaller objective value and to restart ℓ1-DCA at this point. Several numerical experiments illustrate these interesting convergence properties. Moreover, with the aim of showing the efficiency of the proposed method, we also present an application to the free-surface hydrodynamic problem, where correct numerical modeling often requires the solution of special PLS.
