Similar Literature
 20 similar documents were retrieved (search time: 46 ms)
1.
To improve the optimization efficiency of the particle swarm algorithm, a quantum particle swarm optimization algorithm based on Bloch spherical coordinate encoding is proposed on the basis of an analysis of particle swarm optimization. In this algorithm each particle occupies three positions in the search space, and each position represents a candidate solution. The two parameters of each qubit are adjusted with the search mechanism of traditional particle swarm optimization, which rotates the qubit on the Bloch sphere, so the three candidate solutions represented by each particle are updated simultaneously and rapidly approach the global optimum. Experimental results on extremum optimization of standard test functions and parameter optimization of a fuzzy controller show that, compared with similar algorithms, the proposed algorithm improves both optimization capability and optimization efficiency.
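For illustration only, a minimal sketch (in Python) of the Bloch-coordinate encoding described above, assuming the usual parameterization in which each qubit is described by two angles and its three Bloch coordinates are mapped into the search interval; the swarm size, PSO constants and sphere test function are illustrative assumptions, not the authors' settings.

import numpy as np

rng = np.random.default_rng(0)
n_particles, dim, lo, hi = 20, 10, -5.12, 5.12

def bloch_xyz(theta, phi):
    # Bloch coordinates of |psi> = cos(theta/2)|0> + e^{i*phi} sin(theta/2)|1>
    return (np.sin(theta) * np.cos(phi),
            np.sin(theta) * np.sin(phi),
            np.cos(theta))

def fitness(theta, phi):
    # Each particle encodes three candidate solutions (the x, y and z chains);
    # map each coordinate vector from [-1, 1] onto [lo, hi] and keep the best.
    sols = [lo + (hi - lo) * (c + 1.0) / 2.0 for c in bloch_xyz(theta, phi)]
    return min(float(np.sum(s ** 2)) for s in sols)   # sphere function, illustrative

theta = rng.uniform(0, np.pi, (n_particles, dim))      # qubit angle parameters
phi   = rng.uniform(0, 2 * np.pi, (n_particles, dim))
v_t, v_p = np.zeros_like(theta), np.zeros_like(phi)

pbest_t, pbest_p = theta.copy(), phi.copy()
pbest_f = np.array([fitness(t, p) for t, p in zip(theta, phi)])
g = int(np.argmin(pbest_f))                            # global best particle

for _ in range(300):
    w, c1, c2 = 0.7, 1.5, 1.5
    r1, r2 = rng.random(theta.shape), rng.random(theta.shape)
    # classic PSO velocity/position update applied to the angles theta and phi
    v_t = w * v_t + c1 * r1 * (pbest_t - theta) + c2 * r2 * (pbest_t[g] - theta)
    v_p = w * v_p + c1 * r1 * (pbest_p - phi) + c2 * r2 * (pbest_p[g] - phi)
    theta = np.clip(theta + v_t, 0, np.pi)
    phi = (phi + v_p) % (2 * np.pi)
    f = np.array([fitness(t, p) for t, p in zip(theta, phi)])
    better = f < pbest_f
    pbest_t[better], pbest_p[better], pbest_f[better] = theta[better], phi[better], f[better]
    g = int(np.argmin(pbest_f))

print("best fitness:", pbest_f[g])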

2.
A quantum genetic algorithm with an adaptive Bloch sphere   (total citations: 1; self-citations: 0; by others: 1)
Building on the quantum genetic algorithm based on qubit Bloch coordinates, a quantum genetic algorithm with an adaptive Bloch sphere is proposed. The algorithm adaptively selects a portion of the Bloch sphere to search in one of two ways: along the meridian direction or along the latitude direction, and it is proved theoretically that both selection schemes contain all feasible solutions of the continuous optimization problem under consideration. For searching the selected part of the Bloch sphere, an approximately equal-area search method is proposed, from which an inverse-proportional relationship between the magnitudes of the two phase rotation angles is derived; the chromosome mutation operation is also modified accordingly to respect the restricted search region. Experiments show that the algorithm is roughly comparable in search capability to the quantum genetic algorithm based on qubit Bloch coordinates, but noticeably more efficient.
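The inverse-proportional relationship between the two rotation angles is stated but not derived in the abstract; one plausible reading, based only on the standard spherical area element (an interpretation offered here, not the authors' derivation), is

\[
dA = \sin\theta \, d\theta \, d\phi, \qquad
\sin\theta \,\Delta\theta \,\Delta\phi \approx C
\;\Longrightarrow\;
\Delta\theta \approx \frac{C}{\sin\theta \,\Delta\phi},
\]

so that, at a given latitude, enlarging one rotation angle requires shrinking the other to keep the area swept per search step approximately constant.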

3.
To improve the optimization capability of evolutionary algorithms, a quantum-behaved evolutionary algorithm is proposed. The algorithm builds its search mechanism on the Bloch sphere: individuals are described by qubits, rotation axes are constructed from the Pauli matrices, and evolutionary search is realized by rotating the qubits about these axes on the Bloch sphere; individual mutation is then implemented with the Hadamard gate to avoid premature convergence. The rotation drives the current qubit toward the target qubit along a great circle of the Bloch sphere, which accelerates the optimization process. Taking function extremum optimization as an example, experimental results show that the algorithm achieves high optimization capability and efficiency.
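A rough sketch (Python) of the Bloch-sphere rotation described above. Taking the rotation axis as the cross product of the current and target Bloch vectors is a common construction and an assumption here, as is the fixed step of 0.3 rad; rotating the current vector about that axis moves it toward the target along a great circle.

import numpy as np

def rotate_towards(current, target, delta):
    """Rotate Bloch vector `current` by angle `delta` about the axis n = current x target,
    which moves it toward `target` along the great circle through both points."""
    c = current / np.linalg.norm(current)
    t = target / np.linalg.norm(target)
    axis = np.cross(c, t)
    norm = np.linalg.norm(axis)
    if norm < 1e-12:                 # already aligned (or antipodal); nothing to do
        return c
    n = axis / norm
    # Rodrigues' rotation formula; equivalent to conjugating the state by exp(-i*delta/2 * n.sigma)
    return (c * np.cos(delta)
            + np.cross(n, c) * np.sin(delta)
            + n * np.dot(n, c) * (1 - np.cos(delta)))

cur = np.array([0.0, 0.0, 1.0])          # |0>, the north pole
tgt = np.array([1.0, 0.0, 0.0])          # |+>, on the equator
for _ in range(5):
    cur = rotate_towards(cur, tgt, 0.3)  # illustrative fixed step of 0.3 rad
print(cur)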

4.
A quantum genetic algorithm based on qubit Bloch coordinates and its applications   (total citations: 8; self-citations: 1; by others: 7)
A quantum genetic algorithm based on qubit Bloch coordinates is proposed. In this method, chromosomes are composed of qubits; the gene loci on a chromosome are given by the Bloch coordinates of the qubits; the qubits on a chromosome are updated with quantum rotation gates, and chromosome mutation is performed with quantum NOT gates. A simple and fast new method is proposed for determining the magnitude and direction of the rotation angle of the quantum rotation gate, and new rotation and mutation operators based on qubit Bloch coordinates are introduced. Since all three Bloch coordinates of each qubit are treated as gene loci, every chromosome contains three parallel gene chains, each representing one candidate solution, which accelerates the optimization process for a given number of chromosomes. Taking function extremum optimization and neural network weight optimization as examples, simulation results show that the method outperforms the conventional quantum genetic algorithm and the simple genetic algorithm in both search capability and optimization efficiency.

5.
To improve the search capability of the cuckoo search algorithm, a quantum-inspired cuckoo search algorithm is proposed by introducing quantum computing mechanisms into the classical cuckoo search algorithm. The algorithm encodes individuals with qubits, determines the rotation axes from the Pauli matrices, determines the rotation angles by the Lévy flight principle, and updates individuals by rotating the qubits about these axes on the Bloch sphere. Experimental results on extremum optimization of standard functions show that, compared with the traditional cuckoo search algorithm, the search capability of the proposed algorithm is clearly improved.
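A minimal sketch (Python) of the Lévy-flight step that could supply the rotation angle, using Mantegna's algorithm for Lévy-stable step lengths; the exponent beta = 1.5 and the 0.05 scaling factor are illustrative assumptions. The resulting angle would drive a Bloch-sphere rotation of the kind sketched under item 3.

import numpy as np
from math import gamma, sin, pi

def levy_step(beta, rng):
    # Mantegna's algorithm: step = u / |v|^(1/beta) follows a Levy-stable distribution
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u)
    v = rng.normal(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

rng = np.random.default_rng(1)
# illustrative: scale Levy steps into rotation angles toward the best individual
angles = [0.05 * levy_step(1.5, rng) for _ in range(5)]
print(angles)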

6.

Vortex search is a recently proposed optimization algorithm with the notable advantages of simple operations and strong search capability, but it tends to fall into premature convergence in later stages. To address this, a quantum-inspired vortex search algorithm is proposed by introducing quantum computing into the algorithm. The vortex center is first encoded with qubits; it is then rotated several times on the Bloch sphere to produce multiple individuals, and the best individual is taken as the new vortex center, completing one iteration. The new vortex center is rotated again, and this continues until the termination condition is satisfied. Experimental results on extremum optimization of standard functions show that the proposed method clearly outperforms the ordinary vortex search algorithm.


7.
To improve the optimization efficiency of particle swarm optimization (PSO), a quantum particle swarm optimization algorithm based on Bloch spherical coordinates is proposed by combining PSO with quantum theory. In Bloch spherical coordinates, the particles update the rotation angles and particle positions automatically, without setting the rotation angles through a lookup table (or as fixed values on an interval); this remedies a shortcoming of quantum evolutionary algorithms and quantum genetic algorithms based on Bloch spherical coordinates and makes the algorithm more general. Particle mutation is implemented with the quantum Hadamard gate, which enhances population diversity and helps particles escape local extrema. Simulation results on typical function optimization problems show that the proposed algorithm is stable, accurate, fast to converge, and of practical value.
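For illustration: the Hadamard gate conjugates the Pauli operators as X↔Z and Y→−Y, so on the Bloch sphere it swaps the x and z coordinates and negates y. A minimal sketch (Python) of such a mutation; the mutation probability is an assumed placeholder.

import numpy as np

def hadamard_mutation(bloch, p_mut, rng):
    """Apply the Hadamard gate to randomly chosen qubits of a particle.
    H maps X -> Z, Z -> X, Y -> -Y, so the Bloch vector (x, y, z) becomes (z, -y, x)."""
    bloch = bloch.copy()                       # bloch has shape (n_qubits, 3)
    for i in range(len(bloch)):
        if rng.random() < p_mut:
            x, y, z = bloch[i]
            bloch[i] = (z, -y, x)
    return bloch

rng = np.random.default_rng(2)
particle = np.array([[0.0, 0.0, 1.0],          # |0>
                     [1.0, 0.0, 0.0]])         # |+>
print(hadamard_mutation(particle, 1.0, rng))   # p_mut = 1.0 flips every qubit, for demonstration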

8.
In this paper we consider the problem of finding two parallel rectangles in arbitrary orientation covering a given set of n points in a plane, such that the area of the larger rectangle is minimized. We propose an algorithm that solves the problem in O(n³) time using O(n²) space. Without altering the complexity, our approach can be used to solve another optimization problem, namely minimizing the sum of the areas of two arbitrarily oriented parallel rectangles covering a given set of points in a plane.

9.
Coffman et al. presented the 3-tangle of three qubits in Phys Rev A 61, 052306 (2000). Wong and Christensen (Phys Rev A 63, 044301, 2001) extended the standard form of the 3-tangle to an even number of qubits, known as the n-tangle. In this paper, we propose a generalization of the standard form of the 3-tangle to any odd-n-qubit pure state and call it the n-tangle of odd n qubits. We show that the n-tangle of odd n qubits is invariant under permutations of the qubits and is an entanglement monotone. The n-tangle of odd n qubits can be considered a natural entanglement measure of any odd-n-qubit pure state and can be used for classification under stochastic local operations and classical communication (SLOCC).
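The generalized formula for odd n is not reproduced in the abstract; as a concrete point of reference, the three-qubit 3-tangle of Coffman et al. that it extends can be computed as in the following sketch (Python; the hyperdeterminant-based expression is the standard one, the test states are illustrative).

import numpy as np

def three_tangle(a):
    """3-tangle of a 3-qubit pure state with amplitudes a[i, j, k] (Coffman-Kundu-Wootters)."""
    d1 = (a[0,0,0]**2 * a[1,1,1]**2 + a[0,0,1]**2 * a[1,1,0]**2
          + a[0,1,0]**2 * a[1,0,1]**2 + a[1,0,0]**2 * a[0,1,1]**2)
    d2 = (a[0,0,0]*a[1,1,1]*a[0,1,1]*a[1,0,0] + a[0,0,0]*a[1,1,1]*a[1,0,1]*a[0,1,0]
          + a[0,0,0]*a[1,1,1]*a[1,1,0]*a[0,0,1] + a[0,1,1]*a[1,0,0]*a[1,0,1]*a[0,1,0]
          + a[0,1,1]*a[1,0,0]*a[1,1,0]*a[0,0,1] + a[1,0,1]*a[0,1,0]*a[1,1,0]*a[0,0,1])
    d3 = a[0,0,0]*a[1,1,0]*a[1,0,1]*a[0,1,1] + a[1,1,1]*a[0,0,1]*a[0,1,0]*a[1,0,0]
    return 4 * abs(d1 - 2 * d2 + 4 * d3)

ghz = np.zeros((2, 2, 2)); ghz[0,0,0] = ghz[1,1,1] = 1 / np.sqrt(2)
w   = np.zeros((2, 2, 2)); w[0,0,1] = w[0,1,0] = w[1,0,0] = 1 / np.sqrt(3)
print(three_tangle(ghz), three_tangle(w))    # ~1.0 for GHZ, 0.0 for W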

10.
To improve the search capability of the cuckoo search algorithm, a quantum-inspired cuckoo search algorithm is proposed by introducing quantum computing mechanisms into the classical cuckoo search algorithm. The algorithm encodes individuals with qubits, determines the rotation axes from the Pauli matrices, determines the rotation angles by the Lévy flight principle, and updates individuals by rotating the qubits about these axes on the Bloch sphere. Considering the specific characteristics of stratigraphic correlation across drilled wells and the constraints that must be satisfied, an implementation scheme is proposed that applies the quantum-inspired cuckoo search algorithm to stratigraphic correlation optimization; the method can both compare the similarity between different strata and handle strata missing from the correlated wells due to faulting or pinch-out. Experimental results show that the algorithm is effective and feasible under complex geological conditions.

11.
For sales series with small samples, seasonality, nonlinearity, randomness and fuzziness, existing support vector kernels cannot adequately approximate the random curve of the sales time series in the space L²(Rⁿ) of square-integrable functions. A new wavelet support vector machine (WN ν-SVM) is proposed based on wavelet theory and a modified support vector machine. A particle swarm optimization (PSO) algorithm is designed to select the best parameters of the WN ν-SVM model within the range permitted by the constraints. The results of an application to car sales series forecasting show that the forecasting approach based on the PSOWN ν-SVM model is effective and feasible; a comparison between the proposed method and other approaches is also given, showing that it outperforms PSOW ν-SVM and other traditional methods.

12.
Region-expansion for the Voronoi diagram of 3D spheres   (total citations: 1; self-citations: 0; by others: 1)
Given a set of spheres in 3D, constructing its Voronoi diagram in the Euclidean distance metric is not easy at all, even though many mathematical properties of its structure are known. This Voronoi diagram is known to have many important applications in science and engineering. In this paper, we characterize the Voronoi diagram of spheres in three-dimensional Euclidean space, which is also known as an additively weighted Voronoi diagram, and propose an algorithm to construct the diagram. Starting with the ordinary Voronoi diagram of the centers of the spheres, the proposed region-expansion algorithm constructs the desired diagram by expanding the Voronoi region of each sphere, one after another. We also show that the whole Voronoi diagram of n spheres can be constructed in O(n³) time in the worst case.

13.
Efficient dynamic simulation code is essential in many situations (including hardware-in-the-loop and model-predictive control applications), and highly beneficial in others (such as design optimization, sensitivity analysis, parameter identification, and controller tuning tasks). When the number of modeling coordinates n exceeds the degrees-of-freedom of the system f, as is often the case when closed kinematic chains are present, the governing dynamic equations consist of n second-order ordinary differential equations (ODEs) coupled with m = n − f algebraic constraint equations. This set of n + m index-3 differential-algebraic equations can be difficult to solve in an efficient yet accurate manner. Embedding (or generalized coordinate partitioning) can be used to obtain f ODEs (one for each independent acceleration), which are generally more amenable to numerical integration; however, the dependent positions are typically computed from the independent positions at each time step. Newton–Raphson iteration is often used for solving the position-level kinematics, but only provides solutions to within a specified tolerance, and can require several iterations to converge. In this work, Gröbner bases are used to obtain recursively solvable symbolic solutions for the dependent positions, which can then be evaluated to within machine precision using a fixed number of arithmetic operations. Natural coordinates are particularly attractive in this context, since the resulting constraint equations are maximally quadratic polynomials and are, therefore, easily triangularized. The proposed approach is suitable for use in an automated formulation procedure and, as demonstrated by three examples, is capable of generating highly efficient simulation code with minimal additional effort required at the formulation stage.
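A rough illustration (Python/SymPy) of the idea, not the authors' formulation or code: for a planar four-bar linkage in natural coordinates the loop-closure constraints are quadratic, and a lexicographic Gröbner basis with the independent coordinate ordered last yields a triangular system that can be evaluated by back-substitution. The link lengths and the choice of x1 as the independent coordinate are assumptions made for this example.

from sympy import symbols, groebner, Rational

# Natural coordinates of a planar four-bar: fixed pivots A = (0, 0) and D = (3, 0),
# moving points B = (x1, y1) and C = (x2, y2); constraints fix the squared link lengths.
x1, y1, x2, y2 = symbols('x1 y1 x2 y2')
constraints = [
    x1**2 + y1**2 - 1,                      # |AB|^2 = 1
    (x2 - x1)**2 + (y2 - y1)**2 - 4,        # |BC|^2 = 4
    (x2 - 3)**2 + y2**2 - Rational(9, 4),   # |CD|^2 = 9/4
]

# Lexicographic order with the independent coordinate x1 placed last: the resulting basis
# is triangular, so the dependent coordinates y1, x2, y2 can be recovered one at a time
# by solving low-degree univariate polynomials (back-substitution).
G = groebner(constraints, y2, x2, y1, x1, order='lex')
for g in G:
    print(g)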

14.
In this paper, a hybrid optimization method is proposed that combines the two local search operators of chemical reaction optimization with the global search ability of particle swarm optimization in order to locate the global optimum. The hybrid technique incorporates concepts from chemical reaction optimization and particle swarm optimization: it creates new molecules (particles) either through the operations found in chemical reaction optimization or through the update mechanisms of particle swarm optimization. Moreover, bound-constraint handling is incorporated into the particle update of particle swarm optimization. The effects of model parameters such as InterRate, γ, the inertia weight and other parameters on performance are investigated in this paper. Experimental results on a set of twenty-three benchmark functions show that the hybrid algorithm based on particle swarm and chemical reaction optimization can outperform the chemical reaction optimization algorithm in most of the experiments. The experimental results also indicate improvements in the average value and the deviation over chemical reaction optimization in most of the experiments.

15.
The longest common subsequence and sequence alignment problems have been studied extensively and can be regarded as measuring the relationship between sequences. However, most formulations treat sequences evenly or consider only two sequences. Recently, with the rise of whole-genome duplication research, the doubly conserved synteny relationship among three sequences should be considered. This calls for a new model that finds a way of merging sequences in order to understand their interleaving relationship. Here, we define the merged LCS problem for measuring the interleaving relationship among three sequences. An O(n³) algorithm is first proposed for solving the problem, where n is the sequence length. We further discuss a variant of this problem that incorporates block information. For the blocked merged LCS problem, we propose an algorithm with time complexity O(n²m), where m is the number of blocks. An improved O(n² + nm²) algorithm is further proposed for the same blocked problem.
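The recurrence itself is not given in the abstract; a straightforward O(n³) dynamic program consistent with one common reading of the problem (find the longest subsequence of a target T that can be split into a subsequence of A and a subsequence of B) might look like the sketch below (Python; the variable names and example strings are illustrative assumptions).

def merged_lcs(T, A, B):
    """Length of the longest subsequence of T that splits into a subsequence of A
    and a subsequence of B (one O(n^3) reading of the merged-LCS problem)."""
    n, p, q = len(T), len(A), len(B)
    f = [[[0] * (q + 1) for _ in range(p + 1)] for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(p + 1):
            for k in range(q + 1):
                best = max(f[i - 1][j][k],
                           f[i][j - 1][k] if j else 0,
                           f[i][j][k - 1] if k else 0)
                if j and T[i - 1] == A[j - 1]:
                    best = max(best, f[i - 1][j - 1][k] + 1)   # match T[i] against A[j]
                if k and T[i - 1] == B[k - 1]:
                    best = max(best, f[i - 1][j][k - 1] + 1)   # match T[i] against B[k]
                f[i][j][k] = best
    return f[n][p][q]

# both A and B can be fully embedded in T here, so this prints 6
print(merged_lcs("acgtacgt", "agt", "ccg"))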

16.
The polynomial chaos (PC) method has been widely adopted as a computationally feasible approach for uncertainty quantification (UQ). Most studies to date have focused on non-stiff systems. When stiff systems are considered, implicit numerical integration requires the solution of a non-linear system of equations at every time step. Using the Galerkin approach, the size of the system state increases from n to S × n, where S is the number of PC basis functions. Solving such systems with full linear algebra causes the computational cost to increase from O(n³) to O(S³n³). The S³-fold increase can make the computation prohibitive. This paper explores computationally efficient UQ techniques for stiff systems using the PC Galerkin, collocation, and collocation least-squares (LS) formulations. In the Galerkin approach, we propose a modification in the implicit time stepping process using an approximation of the Jacobian matrix to reduce the computational cost. The numerical results show a run-time reduction with no negative impact on accuracy. In the stochastic collocation formulation, we propose a least-squares approach based on collocation at a low-discrepancy set of points. Numerical experiments illustrate that the collocation least-squares approach for UQ has accuracy similar to the Galerkin approach, is more efficient, and does not require any modification of the original code.

17.
The data cube operator computes group-bys for all possible combinations of a set of dimension attributes. Since computing a data cube typically incurs a considerable cost, the data cube is often precomputed and stored as materialized views in data warehouses. A materialized data cube needs to be updated when the source relations change. The incremental maintenance of a data cube is to compute and propagate only its changes, rather than recompute the entire data cube from scratch. For n dimension attributes, the data cube consists of 2ⁿ group-bys, each of which is called a cuboid. To incrementally maintain a data cube with 2ⁿ cuboids, the conventional methods compute 2ⁿ delta cuboids, each of which represents the change of one cuboid. In this paper, we propose an efficient incremental maintenance method that can maintain a data cube using only a subset of the 2ⁿ delta cuboids. We formulate an optimization problem to find the optimal subset of the 2ⁿ delta cuboids that minimizes the total maintenance cost, and propose a heuristic solution that allows us to maintain a data cube using only a subset of the delta cuboids. As a result, the cost of maintaining a data cube is substantially reduced. Through various experiments, we show the performance advantages of the proposed method over the conventional methods. We also extend the proposed method to handle partially materialized cubes and dimension hierarchies.
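For orientation only, a toy sketch (Python/pandas) of the conventional scheme that the paper improves on: materialize all 2ⁿ cuboids and maintain each one with its own delta cuboid. The dimension names, the measure and the aggregation below are illustrative assumptions; the paper's contribution, maintaining the cube from an optimal subset of delta cuboids, is not shown.

from itertools import combinations
import pandas as pd

dims = ["store", "product", "month"]

def cube(df):
    # materialize all 2^n cuboids, keyed by their grouping attributes
    cuboids = {}
    for r in range(len(dims) + 1):
        for gb in combinations(dims, r):
            if gb:
                cuboids[gb] = df.groupby(list(gb), as_index=False)["sales"].sum()
            else:  # the apex cuboid: a single grand total
                cuboids[gb] = pd.DataFrame({"sales": [df["sales"].sum()]})
    return cuboids

def apply_delta(cuboids, delta):
    # conventional incremental maintenance: one delta cuboid per materialized cuboid
    for gb in list(cuboids):
        if gb:
            d = delta.groupby(list(gb), as_index=False)["sales"].sum()
            merged = (pd.concat([cuboids[gb], d])
                        .groupby(list(gb), as_index=False)["sales"].sum())
        else:
            merged = pd.DataFrame({"sales": [cuboids[gb]["sales"].sum() + delta["sales"].sum()]})
        cuboids[gb] = merged

base = pd.DataFrame({"store": ["s1", "s1", "s2"], "product": ["p1", "p2", "p1"],
                     "month": ["jan", "jan", "feb"], "sales": [10, 5, 7]})
delta = pd.DataFrame({"store": ["s2"], "product": ["p2"], "month": ["feb"], "sales": [3]})

cuboids = cube(base)
apply_delta(cuboids, delta)
print(cuboids[("store",)])   # per-store totals after the update: s1 -> 15, s2 -> 10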

18.
In contrast to previous works on generating entangled states, this work focuses on how to transfer the prepared entangled states onto memory qubits in order to protect them against decoherence. We here consider a physical system consisting of n operation qubits and 2n memory qubits placed in a cavity or coupled to a resonator. A method is presented for transferring n-qubit Greenberger–Horne–Zeilinger (GHZ) entangled states from the operation qubits (i.e., information processing cells) onto the memory qubits (i.e., information memory elements with long decoherence times). The transferred GHZ states are encoded in a decoherence-free subspace against collective dephasing and thus can be immune to decoherence induced by a dephasing environment. In addition, the state transfer procedure does not depend on the number of qubits, the operation time does not increase with the number of qubits, and no measurement is needed for the state transfer. This proposal can be applied to a wide range of hybrid qubits such as natural atoms and artificial atoms (e.g., various solid-state qubits).

19.
We investigate the proportional relationships between the spectra and standard Jordan normal forms (SJNFs) of the 4 × 4 matrices constructed from the coefficient matrices of two SLOCC (stochastic local operations and classical communication) equivalent states of n qubits. The proportional relationships permit a reduction of the SLOCC classification of n (≥ 4) qubits to a classification of 4 × 4 complex matrices. Invoking the proportional relationships for spectra and SJNFs, pure states of n (≥ 4) qubits are partitioned into 12 groups or fewer and 34 families or fewer under SLOCC, respectively. In particular, this holds for four qubits.

20.
Considering that Elliptic Curve Digital Signature Algorithm (ECDSA) implementations need to be efficient, flexible and Side Channel Attack (SCA) resistant, in this paper a design approach and architecture for ECDSA and the underlying scalar multiplication operation is proposed for GF(2^k), satisfying the above three directives. To achieve this, Binary Edwards Curves (BECs) are adopted as an alternative to traditional Weierstrass Elliptic Curves (ECs) over GF(2^k), since they offer intrinsic SCA resistance against simple attacks due to their uniformity, operation regularity and completeness. To achieve high performance and flexibility, we propose a hardware/software ECDSA codesign approach where scalar multiplication is implemented in hardware and integrated into the ECDSA functionality through appropriate drivers of an ECDSA software stack. To increase BEC scalar multiplier performance and introduce SCA resistance, we adopt and expand a parallelism design strategy/methodology in which the GF(2^k) operations of a scalar multiplier round, for both point operations performed in that round, are reordered and assigned to parallelism layers so that they execute concurrently. Within this strategy we include hardware- and software-based SCA countermeasures that rely on masking/randomization and hiding. While scalar randomization is easily realized by the ECDSA software stack, to achieve resistance by hardware means we introduce, in every scalar multiplier round and within the parallelism layers, projective-coordinates randomization of all the round's output points. Thus, since with the proposed parallelism plan the BEC point operations of every round are performed in parallel and the round's output points are randomized with a different number in each round, maximum SCA resistance is achieved. To validate this resistance, we introduce and realize a leakage assessment process on BEC scalar multipliers for the first time in the research literature. This process is based on real measurements collected from a controlled SAKURA-X environment with a GF(2^233)-based scalar multiplier implementation. Using Welch's t-test we investigate possible information leakage of the multiplier's input point and scalar and, after an extended analysis, find only trivial leakage. Finally, we validate the ECDSA architecture and its scalar multiplier efficiency by implementing it on a Zynq 7000 series FPGA Avnet Zedboard, obtaining very promising, well-balanced results on speed and hardware resources in comparison with other works.
