Similar Documents
20 similar documents found
1.
The weak aggregating algorithm (WAA), developed from learning and prediction with expert advice, makes decisions by considering all the experts' advice, and each expert's weight is updated according to its performance in previous periods. In this paper, we apply the WAA to the online portfolio selection problem. We first consider a simple case in which the expert advice is a strategy for investing in a single stock; for this case, we obtain a portfolio selection strategy WAAS and prove that WAAS can identify the best stock. We then discuss a more complicated case in which constant rebalanced portfolios are considered as expert advice, and obtain a corresponding portfolio selection strategy WAAC. The theoretical result shows that the cumulative gain achieved by WAAC is as large as that of the best constant rebalanced portfolio. Numerical analysis shows that the cumulative gains of our proposed strategies are as large as those of the best expert advice.
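The abstract does not spell out the weight update; the sketch below is a minimal exponential-weights aggregator over single-stock experts, assuming weights proportional to the exponential of each expert's cumulative log-return scaled by a decaying learning rate, which may differ from the exact WAA weighting used in the paper.

```python
import numpy as np

def waa_portfolio(price_relatives, c=1.0):
    """Minimal expert-weighted portfolio sketch (assumed WAA-style schedule).

    price_relatives: (T, n) array, entry [t, i] = close_t / close_{t-1} of stock i.
    Each stock is one 'expert'; weights grow with cumulative log-gain,
    damped by a 1/sqrt(t) learning rate.
    Returns the sequence of portfolio weight vectors.
    """
    T, n = price_relatives.shape
    cum_gain = np.zeros(n)          # cumulative log-return of each expert
    portfolios = []
    for t in range(1, T + 1):
        eta = c / np.sqrt(t)        # decaying learning rate
        w = np.exp(eta * cum_gain)
        w /= w.sum()                # normalized portfolio weights
        portfolios.append(w)
        cum_gain += np.log(price_relatives[t - 1])
    return np.array(portfolios)
```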

2.
We analyze and compare two solvers for Boolean optimization problems: WMaxSatz, a solver for Partial MaxSAT, and MinSatz, a solver for Partial MinSAT. MaxSAT and MinSAT are similar, but previous results indicate that when solving optimization problems with both solvers, the performance is quite different in some cases. To gain insight into the differences in performance between the two solvers, we analyze their behaviour when solving 2SAT-MaxOnes problem instances, given that 2SAT-MaxOnes is probably the simplest NP-hard optimization problem we can solve with them. The analysis is based first on a study of the bounds computed by both algorithms on particular 2SAT-MaxOnes instances characterized by the presence of certain structures. We find that the fraction of positive literals in the clauses is an important factor in the quality of the bounds computed by the algorithms. We then study the importance of this factor for the typical-case complexity of Random-p 2SAT-MaxOnes, a variant of the problem in which instances are randomly generated with a probability p of having positive literals in the clauses. For the case p=0, the performance results indicate a clear advantage of MinSatz over WMaxSatz, but as we consider positive values of p, WMaxSatz starts to show better performance, although at the same time the typical complexity of Random-p 2SAT-MaxOnes decreases as p increases. We also study the typical value of the bound computed by the two algorithms on these sets of instances, showing that the behaviour is consistent with our analysis of the bounds computed on the particular instances we studied first.

3.
The successive projection algorithm (SPA) can quickly solve a nonnegative matrix factorization problem under a separability assumption. Even if noise is added to the problem, SPA is robust as long as the perturbations caused by the noise are small. In particular, robustness against noise needs to be high when handling problems arising from real applications. The preconditioner proposed by Gillis and Vavasis (SIAM J Optim 25(1):677-698, 2015) makes it possible to enhance the noise robustness of SPA. However, it incurs an additional computational cost: the construction of the preconditioner contains a step that computes the top-k truncated singular value decomposition of the input matrix. It is known that this decomposition provides the best rank-k approximation to the input matrix, in other words, the matrix with the smallest approximation error among all matrices of rank at most k. This step is an obstacle to an efficient implementation of the preconditioned SPA. To address the cost issue, we propose a modification of the algorithm for constructing the preconditioner. Whereas the original algorithm uses the best rank-k approximation, our modification uses an alternative in its place. Ideally, this alternative should have high approximation accuracy and low computational cost. To ensure this, our modification employs a rank-k approximation produced by an SPA-based algorithm. We analyze the accuracy of the approximation and evaluate the computational cost of the algorithm. We then present an empirical study revealing the actual performance of the SPA-based rank-k approximation algorithm and the modified preconditioned SPA.
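For reference, the plain SPA routine that the preconditioning and the SPA-based rank-k approximation discussed above build upon can be sketched as follows: select the column with the largest norm, then project the remaining columns onto the orthogonal complement of the selected column.

```python
import numpy as np

def spa(M, k):
    """Successive projection algorithm (plain version, no preconditioning).

    M: (m, n) nonnegative data matrix; k: number of columns to extract.
    Returns the indices of the k columns selected as (approximate) vertices.
    """
    R = M.astype(float).copy()
    indices = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))   # column with largest l2 norm
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)                       # project out the chosen direction
        indices.append(j)
    return indices
```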

4.
Nowadays, many real-world problems are encoded into SAT instances and efficiently solved by modern SAT solvers. These solvers, usually known as Conflict-Driven Clause Learning (CDCL) SAT solvers, include a variety of sophisticated techniques, such as clause learning, lazy data structures, conflict-based adaptive branching heuristics, and random restarts, among others. However, the reasons for their efficiency in solving real-world, or industrial, SAT instances are still unknown. The common wisdom in the SAT community is that these techniques exploit some hidden structure of real-world problems. In this thesis, we characterize some important features of the underlying structure of industrial SAT instances, namely the community structure and the self-similar structure. We observe that most industrial SAT formulas, viewed as graphs, have these two properties. This means that (i) in a graph with a clear community structure, i.e., high modularity, we can find a partition of its nodes into communities such that most edges connect nodes of the same community; and (ii) in a graph with a self-similar pattern, i.e., a fractal graph, the shape is preserved after re-scaling, i.e., after grouping sets of nodes into single nodes. We also analyze how these structures are affected by CDCL techniques during the search. Using these structural studies, we propose three applications. First, we address the problem of generating pseudo-industrial random SAT instances using the notion of modularity; our model generates instances similar to (classical) random SAT formulas when the modularity is low, but when this value is high, it is also adequate for modeling realistic pseudo-industrial problems. Second, we propose a method based on the community structure of the instance to detect relevant learnt clauses; our technique augments the original instance with this set of relevant clauses, which results in an overall improvement in the efficiency of several state-of-the-art CDCL SAT solvers. Finally, we analyze the classification of industrial SAT instances into families using the previously analyzed structural features, and compare it to other classifiers commonly used in portfolio SAT approaches. In summary, this dissertation extends the understanding of the structure of SAT instances, with the aim of better explaining the success of CDCL techniques and possibly improving them, and proposes a number of applications based on this analysis of the underlying structure of SAT formulas.
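As a small illustration of the community-structure notion used above, the following hedged sketch builds a variable incidence graph from a CNF formula (one node per variable, an edge between variables that share a clause) and computes its modularity with networkx; the thesis may use a different graph representation or community detection method.

```python
import itertools
import networkx as nx
from networkx.algorithms import community

def sat_modularity(clauses):
    """clauses: iterable of clauses, each a list of nonzero integer literals
    (DIMACS style, e.g. [1, -2, 3]). Returns the modularity of the
    variable incidence graph under a greedy community partition."""
    G = nx.Graph()
    for clause in clauses:
        variables = {abs(lit) for lit in clause}
        G.add_nodes_from(variables)
        G.add_edges_from(itertools.combinations(sorted(variables), 2))
    parts = community.greedy_modularity_communities(G)
    return community.modularity(G, parts)

# Toy formula with two loosely connected variable groups.
print(sat_modularity([[1, -2], [2, 3], [1, 3], [4, -5], [5, 6], [4, 6], [3, -4]]))
```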

5.
In this paper, we treat optimization problems as a kind of reinforcement learning problem, regarding an optimization procedure that searches for an optimal solution as a reinforcement learning procedure that seeks the best policy for maximizing the expected rewards. This viewpoint motivated us to propose a Q-learning-based swarm optimization (QSO) algorithm. The proposed QSO algorithm is a population-based optimization algorithm that integrates the essential properties of Q-learning and particle swarm optimization. The optimization procedure of the QSO algorithm proceeds as each individual imitates the behavior of the global best one in the swarm. The best individual is chosen based on its accumulated performance instead of its momentary performance at each evaluation. Two test sets, a set of benchmark functions and a real-world problem (the economic dispatch (ED) problem for power systems), were used to evaluate the proposed QSO algorithm. The simulation results on the benchmark functions show that the proposed QSO algorithm is comparable to, or even outperforms, several existing optimization algorithms. As for the ED problem, the proposed QSO algorithm found solutions better than all previously reported solutions.
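A toy sketch of the accumulated-performance idea is given below: individuals drift toward a global best that is ranked by its mean objective value over all evaluations so far rather than by its latest value. The actual QSO update, including its Q-learning component, is more involved than this illustration.

```python
import numpy as np

def qso_like_minimize(f, dim, pop_size=20, iters=200, step=0.3, seed=0):
    """Toy swarm minimizer: individuals imitate the global best, and the
    best is ranked by accumulated (mean) performance, not momentary fitness."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, size=(pop_size, dim))
    totals = np.zeros(pop_size)        # accumulated objective values (lower is better)
    counts = np.zeros(pop_size)
    for _ in range(iters):
        fitness = np.apply_along_axis(f, 1, pop)
        totals += fitness
        counts += 1
        best = int(np.argmin(totals / counts))          # accumulated performance
        # every individual takes a noisy step toward the accumulated best
        pop += step * (pop[best] - pop) + 0.05 * rng.normal(size=pop.shape)
    return pop[best], f(pop[best])

# Example on the sphere function.
x, fx = qso_like_minimize(lambda v: float(np.sum(v ** 2)), dim=5)
```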

6.
We consider a single-cell network with a hybrid full-/half-duplex base station. For the practical scenario with N channels, K uplink users, and M downlink users (max{K, M} ≤ N ≤ K + M), we tackle the issue of user admission and power control so as to simultaneously maximize the number of admitted users and minimize the total transmit power while guaranteeing the quality-of-service requirements of individual users. We formulate the joint user admission and power allocation problem as a 0–1 integer program. Because finding the optimal solution of this problem is NP-hard in general, a low-complexity algorithm is proposed by introducing the novel concept of adding dummy users. Simulation results show that the proposed algorithm achieves performance similar to that of the branch-and-bound algorithm and significantly outperforms the random pairing algorithm.

7.
Traditional elasticity imaging systems use short pulses with low sound power, causing the signal to be attenuated severely in deep zones. Building on coded excitation and spatial compounding, an ultrasonic elastography optimization algorithm is proposed in this paper. It takes advantage of the high peak and average sound power of coded excitation together with spatial compounding, suppresses speckle noise, and improves imaging quality effectively. Specifically, a coded excitation system encodes long pulses when transmitting and then decodes them into short pulses upon receiving. This increases the average sound power of the beam without sacrificing spatial resolution. An imaging system based on coded excitation can therefore achieve a good signal-to-noise ratio (SNRe) and contrast-to-noise ratio (CNRe) in deep zones below the detection surface. The proposed algorithm combines coded excitation with a filter-group-based spatial compounding algorithm at the receiving terminal. Finally, experimental results show that the proposed algorithm yields higher SNRe and CNRe than chirp coded excitation or spatial compounding alone.

8.
An optimal probabilistic-planning algorithm solves a problem, usually modeled by a Markov decision process, by finding an optimal policy. In this paper, we study the k best policies problem: finding the k best policies of a discrete Markov decision process. The k best policies, k > 1, cannot be found directly using dynamic programming. Naively, finding the k-th best policy can be Turing reduced to the optimal planning problem, but the number of problems queried in the naive algorithm is exponential in k. We show empirically that solving the k best policies problem via this reduction requires unreasonable amounts of time even when k = 3. We then provide two new algorithms. The first is a complete algorithm, based on our theoretical contribution that the k-th best policy differs from the i-th best policy, for some i < k, on exactly one state. The second is an approximate algorithm that skips many less useful policies. We show that both algorithms have good scalability. We also show that the approximate algorithm runs much faster and finds interesting, high-quality policies.

9.
Choosing the best location for starting a business or expanding an existing enterprise is an important issue. A number of location selection problems have been discussed in the literature; they often apply the Reverse Nearest Neighbor as the criterion for finding suitable locations. In this paper, we instead apply the Average Distance as the criterion and propose the so-called k-most suitable locations (k-MSL) selection problem. Given a positive integer k and three datasets (a set of customers, a set of existing facilities, and a set of potential locations), the k-MSL selection problem outputs k locations from the potential location set such that the average distance from a customer to its nearest facility is minimized. In this paper, we formally define the k-MSL selection problem and show that it is NP-hard. We first propose a greedy algorithm that can quickly find an approximate result for users. Two exact algorithms are then proposed to find the optimal result, and several pruning rules are applied to increase computational efficiency. We evaluate the algorithms' performance using both synthetic and real datasets. The results show that our algorithms are able to deal with the k-MSL selection problem efficiently.
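As a rough illustration of the greedy algorithm mentioned above, the sketch below repeatedly adds the candidate location that most reduces the average customer-to-nearest-facility distance; the paper's exact tie-breaking and pruning rules are not reproduced.

```python
import numpy as np

def greedy_k_msl(customers, facilities, candidates, k):
    """customers, facilities, candidates: (., 2) coordinate arrays.
    Returns indices of k candidate locations chosen greedily to minimize the
    average distance from each customer to its nearest facility."""
    def nearest(points, sites):
        d = np.linalg.norm(points[:, None, :] - sites[None, :, :], axis=2)
        return d.min(axis=1)

    best_dist = (nearest(customers, facilities) if len(facilities)
                 else np.full(len(customers), np.inf))
    chosen = []
    for _ in range(k):
        scores = []
        for j, c in enumerate(candidates):
            if j in chosen:
                scores.append(np.inf)     # already selected
                continue
            new_dist = np.minimum(best_dist, np.linalg.norm(customers - c, axis=1))
            scores.append(new_dist.mean())
        j_star = int(np.argmin(scores))
        chosen.append(j_star)
        best_dist = np.minimum(best_dist,
                               np.linalg.norm(customers - candidates[j_star], axis=1))
    return chosen
```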

10.
In this paper, a steganographic scheme adopting the concept of generalized K_d-distance N-dimensional pixel matching is proposed. The generalized pixel matching embeds a B-ary digit (B is a function of K and N) into a cover vector of length N, where the embedding distortion measured by the order-d Minkowski distance is no larger than K. In contrast to other pixel-matching-based schemes, an N-dimensional reference table is used. By choosing d, K, and N adaptively, an embedding strategy suitable for an arbitrary relative capacity can be developed. Additionally, an optimization algorithm, namely the successive iteration algorithm (SIA), is proposed to optimize the codeword assignment in the reference table. Benefiting from the high-dimensional embedding and the optimization algorithm, nearly maximal embedding efficiency is achieved. Compared with other content-free steganographic schemes, the proposed scheme provides better image quality and statistical security. Moreover, the proposed scheme performs comparably to state-of-the-art content-based approaches when combined with image models.

11.
In this paper, a univariate marginal distribution algorithm in continuous domain (UMDAc) based on extreme elitism (EEUMDAc) is proposed for solving the inverse displacement problem (IDP) of robotic manipulators. The algorithm emphasizes the effect of a few top solutions to form a primary evolution direction and achieves a fast convergence rate. It is first applied to determine the IDP of a 4-degree-of-freedom (DOF) Barrett WAM robotic arm. The algorithm is then combined with differential evolution (EEUMDAc-DE) to solve the IDP of a 7-DOF Barrett WAM robotic arm. In addition, three other heuristic optimization algorithms (enhanced leader particle swarm optimization, intersect mutation differential evolution, and evolution strategies) are applied to find the IDP solution of the 7-DOF arm, and their performance is compared with that of EEUMDAc-DE.
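A minimal continuous UMDA loop with a crude extreme-elitism twist is sketched below; the extra weight placed on the top few solutions is an assumed stand-in for the paper's elitism scheme, which the abstract does not specify.

```python
import numpy as np

def ee_umdac(f, dim, pop_size=100, elite_frac=0.3, top=5, iters=100, seed=0):
    """Continuous UMDA sketch with extra weight on a few top solutions (minimization)."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim) * 2.0
    for _ in range(iters):
        pop = rng.normal(mu, sigma, size=(pop_size, dim))
        order = np.argsort([f(x) for x in pop])
        selected = pop[order[: int(elite_frac * pop_size)]]
        weights = np.ones(len(selected))
        weights[:top] = 3.0                 # emphasize the top few (assumed scheme)
        weights /= weights.sum()
        mu = weights @ selected             # weighted univariate means
        sigma = np.sqrt(weights @ (selected - mu) ** 2) + 1e-12
    return mu, f(mu)
```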

12.
A video segmentation algorithm that takes advantage of a background subtraction (BS) model with a low learning rate (LLR) or a BS model with a high learning rate (HLR), depending on the video scene dynamics, is presented in this paper. These BS models are based on a neural network architecture, the self-organizing map (SOM), and the algorithm is termed the temporal modular self-adaptive SOM (TMSA_SOM). Depending on the type of scenario, TMSA_SOM automatically classifies and processes each video with one of four specialized modules based on an initial sequence analysis. This approach is convenient because, unlike state-of-the-art (SoA) models, our proposed model handles different situations that may occur in the video scene (severe dynamic background, initial frames with dynamic objects, static background, stationary objects, etc.) with a specialized module. Furthermore, TMSA_SOM automatically identifies whether the scene has changed drastically (e.g., stationary objects of interest become dynamic or drastic illumination changes have occurred), automatically detects when the scene has become stable again, and uses this information to update the background model quickly. The proposed model was validated on three different video databases: Change Detection, BMC, and Wallflower. Findings showed very competitive performance on metrics commonly used in the literature to compare SoA models. TMSA_SOM also achieved the best results on two perceptual metrics, Ssim and D-Score, and obtained the best performance on the global quality measure FSD (based on F-Measure, Ssim, and D-Score), demonstrating its robustness in different and complicated uncontrolled scenarios. TMSA_SOM was also compared against SoA neural network approaches, obtaining the best average performance on Re, Pr, and F-Measure.

13.
This paper focuses on the application of the hp hierarchic genetic strategy (hp-HGS) to a challenging problem, the inversion of 3D direct current (DC) resistivity logging measurements. The problem under consideration has been formulated as a global optimization problem for which the objective function (the misfit between computed and reference data) exhibits multiple minima. In this paper, we consider an extension of the hp-HGS strategy: we couple the hp-HGS algorithm with a gradient-based optimization method for the local search. Forward simulations are performed with a self-adaptive hp finite element method (hp-FEM). The computational cost of misfit evaluation by hp-FEM depends strongly on the assumed accuracy. This accuracy is adapted to the tree of populations generated by the hp-HGS algorithm, which makes the global phase significantly cheaper. Moreover, the tree structure of demes, together with the branch reduction and conditional sprouting mechanisms, reduces the number of expensive local searches to roughly the number of minima to be recognized. The common (direct and inverse) accuracy control, crucial for the efficiency of hp-HGS, has been motivated by precise mathematical considerations. Numerical results demonstrate the suitability of the proposed method for the inversion of 3D DC resistivity logging measurements.

14.
We employ the conditional version of the sandwiched Tsallis relative entropy to determine the \(1:N-1\) separability range in the noisy one-parameter families of pseudopure and Werner-like N-qubit W and GHZ states. The range of the noise parameter for which the conditional sandwiched Tsallis relative entropy is positive is in perfect agreement with the necessary and sufficient criteria for separability in the \(1:N-1\) partition of these one-parameter noisy states.
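For reference, one commonly used form of the (unconditional) sandwiched Tsallis relative entropy of order \(q\) is shown below; the normalization is an assumption for illustration, and the conditional version employed in the paper is built from such quantities for the chosen \(1:N-1\) partition.

```latex
\tilde{D}_{q}(\rho\Vert\sigma)
  \;=\; \frac{\operatorname{Tr}\!\Bigl[\bigl(\sigma^{\frac{1-q}{2q}}\,\rho\,\sigma^{\frac{1-q}{2q}}\bigr)^{q}\Bigr]-1}{q-1},
  \qquad q\in(0,1)\cup(1,\infty).
```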

15.
We present a fully distributed self-healing algorithm dex that maintains a constant-degree expander network in a dynamic setting. To the best of our knowledge, our algorithm provides the first efficient distributed construction of expanders, whose expansion properties hold deterministically, that works even under an all-powerful adaptive adversary that controls the dynamic changes to the network (the adversary has unlimited computational power and knowledge of the entire network state, can decide which nodes join and leave and at what time, and knows the past random choices made by the algorithm). Previous distributed expander constructions typically provide only probabilistic guarantees on the network expansion, which rapidly degrade in a dynamic setting; in particular, the expansion properties can degrade even more rapidly under adversarial insertions and deletions. Our algorithm provides efficient maintenance and incurs a low overhead per insertion/deletion by an adaptive adversary: only \(O(\log n)\) rounds and \(O(\log n)\) messages are needed with high probability (n is the number of nodes currently in the network). The algorithm requires only a constant number of topology changes. Moreover, our algorithm allows for an efficient implementation and maintenance of a distributed hash table on top of dex with only a constant additional overhead. Our results are a step towards implementing efficient self-healing networks that have guaranteed properties (constant bounded degree and expansion) despite dynamic changes.

16.
Rapid advances in image acquisition and storage technology underline the need for real-time algorithms that are capable of solving large-scale image processing and computer-vision problems. The minimum s-t cut problem, a classical combinatorial optimization problem, is a prominent building block in many vision and imaging algorithms such as video segmentation, co-segmentation, stereo vision, multi-view reconstruction, and surface fitting. That is why finding a real-time algorithm that optimally solves this problem is of great importance. In this paper, we introduce to computer vision Hochbaum's pseudoflow (HPF) algorithm, which optimally solves the minimum s-t cut problem. We compare the performance of HPF, in terms of execution times and memory utilization, with three leading published algorithms: (1) Goldberg and Tarjan's push-relabel (PRF); (2) Boykov and Kolmogorov's augmenting-paths algorithm (BK); and (3) Goldberg's partial augment-relabel. While the common practice in computer vision is to use either the BK or the PRF algorithm for solving the problem, our results demonstrate that, in general, the HPF algorithm is more efficient and uses less memory than these three algorithms. This strongly suggests that HPF is a great option for many real-time computer-vision problems that require solving the minimum s-t cut problem.

17.
The optimization of algorithm (hyper-)parameters is crucial for achieving peak performance across a wide range of domains, from deep neural networks to solvers for hard combinatorial problems. However, the proper evaluation of new algorithm configuration (AC) procedures (or configurators) is hindered by two key hurdles. First, AC scenarios are hard to set up, including the target algorithm to be optimized and the problem instances to be solved. Second, and even more significantly, they are computationally expensive: a single configurator run involves many costly runs of the target algorithm. Here, we propose a benchmarking approach that uses surrogate scenarios, which are computationally cheap while remaining close to the original AC scenarios. These surrogate scenarios approximate the response surface corresponding to true target algorithm performance using a regression model. In our experiments, we construct and evaluate surrogate scenarios for hyperparameter optimization as well as for AC problems that involve performance optimization of solvers for hard combinatorial problems. We generalize previous work by building surrogates for AC scenarios with multiple problem instances, stochastic target algorithms, and censored running time observations. We show that our surrogate scenarios capture the overall important characteristics of the original AC scenarios from which they were derived, while being much easier to use and orders of magnitude cheaper to evaluate.
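The basic surrogate mechanism, learning a regression model from logged (configuration, instance) runtimes and querying it in place of real target-algorithm runs, might look like the sketch below. The feature encodings, the random-forest choice, and the log-runtime target are assumptions for illustration; the paper's treatment of stochastic algorithms and censored runtimes is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

class SurrogateScenario:
    """Cheap stand-in for a costly algorithm-configuration scenario."""

    def __init__(self, config_features, instance_features, runtimes):
        # One row per observed (configuration, instance) pair; target is log runtime.
        X = np.hstack([config_features, instance_features])
        self.model = RandomForestRegressor(n_estimators=100, random_state=0)
        self.model.fit(X, np.log(runtimes))

    def predict_runtime(self, config_feature, instance_feature):
        # Query the regression model instead of running the target algorithm.
        x = np.hstack([config_feature, instance_feature]).reshape(1, -1)
        return float(np.exp(self.model.predict(x)[0]))
```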

18.
Maximal clique enumeration is a fundamental problem in graph theory and has been extensively studied. However, maximal clique enumeration is time-consuming on large graphs and returns an enormous number of cliques with large overlaps. Motivated by this, in this paper we study the diversified top-k clique search problem, which is to find k cliques that cover the largest number of nodes in the graph. Diversified top-k clique search has many applications, including community search, motif discovery, and anomaly detection in large graphs. A naive solution is to keep all maximal cliques in memory and then find k of them that cover the most nodes in the graph using the approximate greedy max k-cover algorithm (sketched below). However, such a solution is impractical when the graph is large. In this paper, instead of keeping all maximal cliques in memory, we devise an algorithm that maintains k candidates during maximal clique enumeration. Our algorithm has a limited memory footprint and achieves a guaranteed approximation ratio. We also introduce a novel lightweight PNP-Index, based on which we design an optimal maximal clique maintenance algorithm. We further explore three optimization strategies to avoid enumerating all maximal cliques and thus largely reduce the computational cost. In addition, for massive input graphs, we develop an I/O-efficient algorithm to tackle the problem when the input graph cannot fit in main memory. We conduct extensive performance studies on real and synthetic graphs, one of which contains 1.02 billion edges. The results demonstrate the high efficiency and effectiveness of our approach.
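The naive baseline described above (keep all maximal cliques and apply greedy max k-cover) can be sketched directly; this is the impractical strawman, not the paper's memory-bounded algorithm.

```python
def greedy_top_k_cliques(cliques, k):
    """cliques: list of node sets (e.g. all maximal cliques).
    Greedily picks k cliques maximizing the number of covered nodes
    (classic (1 - 1/e)-approximate max k-cover)."""
    covered, picked = set(), []
    for _ in range(min(k, len(cliques))):
        best = max(cliques, key=lambda c: len(set(c) - covered))
        picked.append(best)
        covered |= set(best)
    return picked, covered

# Example
cliques = [{1, 2, 3}, {3, 4, 5, 6}, {6, 7}, {1, 2}]
print(greedy_top_k_cliques(cliques, 2))
```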

19.
We develop a cache-oblivious data structure for storing a set S of N axis-aligned rectangles in the plane, such that all rectangles in S intersecting a query rectangle or point can be found efficiently. Our structure is an axis-aligned bounding-box hierarchy and as such it is the first cache-oblivious R-tree with provable performance guarantees. If no point in the plane is contained in more than a constant number of rectangles in S, we can construct, for any constant ε, a structure that answers a rectangle query using \(O(\sqrt{N/B}+T/B)\) memory transfers and a point query using \(O((N/B)^{\epsilon })\) memory transfers, where T is the number of reported rectangles and B is the block size of memory transfers between any two levels of a multilevel memory hierarchy. We also develop a variant of our structure that achieves the same performance on input sets with arbitrary overlap among the rectangles. The rectangle query bound matches the bound of the best known linear-space cache-aware structure.

20.
The Relief algorithm is a feature selection algorithm for binary classification proposed by Kira and Rendell; its computational complexity increases remarkably with both the number of samples and the number of features. In order to reduce this complexity, a quantum feature selection algorithm based on the Relief algorithm, called the quantum Relief algorithm, is proposed. In the algorithm, all features of each sample are superposed in a certain quantum state through the CMP and rotation operations, and then the swap test and measurement are applied to this state to obtain the similarity between two samples. After that, Near-hit and Near-miss are obtained by calculating the maximal similarity and are further used to update the feature weight vector WT to get \({\overline{WT}}\), which determines the relevant features with the threshold \(\tau \). In order to verify our algorithm, a simulation experiment based on IBM Q with a simple example is performed. Efficiency analysis shows that the computational complexity of our proposed algorithm is O(M), while the complexity of the original Relief algorithm is O(NM), where N is the number of features for each sample and M is the size of the sample set. Obviously, our quantum Relief algorithm achieves a significant speedup over the classical one.
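For comparison with the classical O(NM) baseline, a plain Relief weight update (sample an instance, find its near-hit and near-miss, adjust WT per feature) is sketched below; the quantum algorithm above replaces the similarity computation with amplitude encoding and the swap test.

```python
import numpy as np

def relief(X, y, n_iters=None, seed=0):
    """Classical Relief for binary classification (Kira & Rendell style).

    X: (M, N) feature matrix scaled to [0, 1]; y: (M,) labels in {0, 1}.
    Returns the feature weight vector WT; features with WT above a
    threshold tau are kept as relevant."""
    rng = np.random.default_rng(seed)
    M, N = X.shape
    WT = np.zeros(N)
    n_iters = n_iters or M
    for _ in range(n_iters):
        i = rng.integers(M)
        dists = np.abs(X - X[i]).sum(axis=1)   # L1 distance to the sampled instance
        dists[i] = np.inf                      # never pick the instance itself
        same, other = (y == y[i]), (y != y[i])
        near_hit = X[np.where(same)[0][np.argmin(dists[same])]]
        near_miss = X[np.where(other)[0][np.argmin(dists[other])]]
        WT += (np.abs(X[i] - near_miss) - np.abs(X[i] - near_hit)) / n_iters
    return WT
```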
