Similar Documents
20 similar documents found.
1.
Projection-type neural networks for optimization problems have advantages over other networks: fewer parameters, a lower-dimensional search space, and a simple structure. In this paper, by properly constructing a Lyapunov energy function, we prove the global convergence of this network when it is used to optimize a continuously differentiable convex function defined on a closed convex set. This result establishes the broad applicability of the network. Several numerical examples are given to verify the efficiency of the network.

2.
In this paper, a new neural network is presented for solving nonlinear convex programs with linear constraints. Under the condition that the objective function is convex, the proposed neural network is shown to be stable in the sense of Lyapunov and to converge globally to the optimal solution of the original problem. Several numerical examples show the effectiveness of the proposed neural network.

3.
A novel neural network for nonlinear convex programming
In this paper, we present a neural network for solving the nonlinear convex programming problem in real time by means of the projection method. The main idea is to convert the convex programming problem into a variational inequality problem; a dynamical system and a convex energy function are then constructed for the resulting variational inequality problem. It is shown that the proposed neural network is stable in the sense of Lyapunov and converges to an exact optimal solution of the original problem. Compared with existing neural networks for nonlinear convex programming, the proposed network requires no Lipschitz condition, has no adjustable parameters, and has a simple structure. The validity and transient behavior of the proposed neural network are demonstrated by simulation results.
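A minimal sketch of the projection dynamics this family of networks is built on, assuming a box-shaped feasible set (so the projection is a coordinate-wise clip) and a toy quadratic objective; the names `f_grad` and `project` are illustrative, not from the paper:

```python
import numpy as np

# Minimal sketch of a projection-type network: integrate
# dx/dt = P(x - grad f(x)) - x with forward Euler. Here the
# feasible set is a box, so the projection is a coordinate clip;
# f(x) = ||x - c||^2 is a toy strictly convex objective.

c = np.array([2.0, -1.0])
lo, hi = -1.0, 1.0                      # box constraints

def f_grad(x):
    return 2.0 * (x - c)                # gradient of ||x - c||^2

def project(x):
    return np.clip(x, lo, hi)           # projection onto the box

x = np.zeros(2)                         # arbitrary initial state
dt = 0.05                               # discretization step
for _ in range(2000):
    x = x + dt * (project(x - f_grad(x)) - x)

print(x)   # -> approx [1.0, -1.0], the projection of c onto the box
```

At an equilibrium x = P(x - grad f(x)), which is exactly the variational-inequality characterization of the constrained optimum mentioned in the abstract.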

4.
In this paper, we propose a recurrent neural network for solving nonlinear convex programming problems with linear constraints. The proposed neural network has a simpler structure and lower implementation complexity than existing neural networks for such problems. It is shown that the proposed network is stable in the sense of Lyapunov and globally convergent to an optimal solution within finite time, provided the objective function is strictly convex. Compared with existing convergence results, the present results do not require a Lipschitz continuity condition on the objective function. Finally, examples are provided to show the applicability of the proposed neural network.

5.
Numerical implementation analysis of continuous-time Hopfield network models
We discuss the choice of discrete time step and the iteration stopping condition when the continuous-time Hopfield network model is solved numerically with the Euler method and the trapezoidal method. Using the definition of a convex function, we study the conditions under which the energy function decreases, and from the properties of convex functions we analyze when the conjugate of a convex function minus a quadratic function remains convex. By analyzing the convergence proof for the continuous-time Hopfield network model, we propose a generalized continuous-time Hopfield network model. For the commonly used Euler and trapezoidal methods of numerically implementing continuous-time Hopfield networks, we discuss the choice of discrete time step. Since the trapezoidal method is implicit, we analyze the stopping condition of its iterative solution algorithm. Exploiting the characteristics of continuous-time Hopfield networks, we propose and analyze an improved iterative algorithm. Numerical experiments show that a larger discrete time step not only speeds up the numerical implementation but also improves optimization performance.
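A minimal sketch of the two discretizations discussed above, assuming toy dynamics du/dt = -u + W tanh(u) + b (not the paper's exact model); the implicit trapezoidal update is solved by fixed-point iteration with an explicit stopping tolerance:

```python
import numpy as np

# Minimal sketch (not the paper's exact model): a continuous-time
# Hopfield network du/dt = -u + W tanh(u) + b, integrated with the
# explicit Euler method and the implicit trapezoidal method. The
# trapezoidal update is solved by fixed-point iteration with a
# simple stopping tolerance, mirroring the issues discussed above.

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
W = (A + A.T) / 2                      # symmetric weights
b = rng.standard_normal(n)

def rhs(u):
    return -u + W @ np.tanh(u) + b

def euler_step(u, h):
    return u + h * rhs(u)

def trapezoid_step(u, h, tol=1e-10, max_iter=100):
    # Solve v = u + (h/2) * (rhs(u) + rhs(v)) by fixed-point iteration.
    v = euler_step(u, h)               # Euler predictor as initial guess
    for _ in range(max_iter):
        v_new = u + 0.5 * h * (rhs(u) + rhs(v))
        if np.linalg.norm(v_new - v) < tol:   # iteration stopping condition
            return v_new
        v = v_new
    return v

u_e = np.zeros(n)
u_t = np.zeros(n)
h = 0.2                                # discrete time step
for _ in range(500):
    u_e = euler_step(u_e, h)
    u_t = trapezoid_step(u_t, h)

# Residual of the dynamics; both are typically near 0 at a fixed point.
print(np.linalg.norm(rhs(u_e)), np.linalg.norm(rhs(u_t)))
```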

6.
In this paper, a feedback neural network model is proposed for solving a class of convex quadratic bi-level programming problems, based on the idea of successive approximation. Unlike existing neural network models, the proposed network has the smallest number of state variables and a simple structure. Based on Lyapunov theory, we prove that under certain conditions the sequence of equilibrium points of the feedback neural network converges approximately to an optimal solution of the convex quadratic bi-level problem, and that the corresponding sequence of objective values converges approximately to the optimal value. Simulation experiments on three numerical examples and a portfolio selection problem show the efficiency and performance of the proposed neural network approach.

7.
Dynamical systems analysis of discrete-time Hopfield networks
The discrete-time Hopfield network model is a nonlinear dynamical system. By introducing a new energy function on the network state variables and exploiting the subgradient property of convex functions, conditions are obtained under which the network's state energy decreases monotonically. For a Hopfield network whose neuron activation functions are monotone nondecreasing (not necessarily strictly increasing), the network converges asymptotically in fully parallel mode if the gain of each activation function exceeds the smallest eigenvalue of the weight matrix; in serial mode, it suffices that, for every neuron, the sum of the activation-function gain and the neuron's self-feedback weight is positive. Furthermore, when the activation functions are monotone and the connection weights are symmetric, it is proved, again using the subgradient property of convex functions, that in fully parallel mode the discrete-time Hopfield network converges to a limit cycle of period at most 2.
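A minimal sketch of the fully parallel mode, assuming symmetric random weights and a sign activation, checking for the period-at-most-2 behavior stated above:

```python
import numpy as np

# Minimal sketch: a fully parallel discrete-time Hopfield network with
# symmetric weights and a monotone (sign) activation. Per the result
# above, parallel iterates settle into a fixed point or a cycle of
# period 2; we detect this by comparing states two steps apart.

rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n))
W = (A + A.T) / 2                    # symmetric connection weights
b = rng.standard_normal(n)

x = np.sign(rng.standard_normal(n))  # random bipolar initial state
prev, prev2 = None, None
for t in range(200):
    prev2, prev = prev, x.copy()
    x = np.sign(W @ x + b)           # fully parallel update
    x[x == 0] = 1                    # break ties deterministically
    if prev2 is not None and np.array_equal(x, prev2):
        print(f"period-2 cycle (or fixed point) reached at step {t}")
        break
```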

8.
The problem of constructing a general continuous piecewise-linear neural network is considered in this paper. It is shown that every projection domain of an arbitrary continuous piecewise-linear function can be partitioned into convex polyhedra by using difference functions of its local linear functions. Based on these convex polyhedra, a group of continuous piecewise-linear basis functions are formulated. It is proven that a linear combination of these basis functions plus a constant, which we call a standard continuous piecewise-linear neural network, can represent all continuous piecewise-linear functions. In addition, the proposed standard continuous piecewise-linear neural network is applied to solve some function approximation problems. A number of numerical experiments are presented to illustrate that the standard continuous piecewise-linear neural network can be a promising tool for function approximation.

9.
We consider a convex, or nonlinear, separable minimization problem with constraints that are dual to the minimum cost network flow problem. We show how to reduce this problem to a polynomial number of minimum s,t-cut problems. The solution of the reduced problem utilizes the technique for solving integer programs on monotone inequalities in three variables, and a so-called proximity-scaling technique that reduces a convex problem to its linear objective counterpart. The problem is solved in this case in a logarithmic number of calls, O(log U), to a minimum cut procedure, where U is the range of the variables. For a convex problem on n variables the minimum cut is solved on a graph with O(n²) nodes. Among the consequences of this result is a new cut-based scaling algorithm for the minimum cost network flow problem. When the objective function is an arbitrary nonlinear function we demonstrate that this constrained problem is solved in pseudopolynomial time by applying a minimum cut procedure to a graph on O(nU) nodes.

10.
We investigate the qualitative properties of a recurrent neural network (RNN) for minimizing a nonlinear continuously differentiable and convex objective function over any given nonempty, closed, and convex subset which may be bounded or unbounded, by exploiting some key inequalities in mathematical programming. The global existence and boundedness of the solution of the RNN are proved when the objective function is convex and has a nonempty constrained minimum set. Under the same assumption, the RNN is shown to be globally convergent in the sense that every trajectory of the RNN converges to some equilibrium point of the RNN. If the objective function itself is uniformly convex and its gradient vector is a locally Lipschitz continuous mapping, then the RNN is globally exponentially convergent in the sense that every trajectory of the RNN converges to the unique equilibrium point of the RNN exponentially. These qualitative properties of the RNN render the network model well suitable for solving the convex minimization over any given nonempty, closed, and convex subset, no matter whether the given constrained subset is bounded or not.

11.
1 Introduction Optimization problems arise in a broad variety of scientific and engineering applications. For many practical engineering applications, real-time solutions of optimization problems are often required. One possible and very pr…

12.
To meet the need for intersection-scene understanding in intelligent transportation systems, a spatial segmentation network based on line-feature priors and a convex-hull loss function is proposed, aiming at accurate detection and segmentation of zebra crossings and the intersection regions they enclose. An intersection dataset is collected and annotated from a public-security traffic management platform. Line-feature priors are introduced by feeding RGBL images to the network, providing salient object-edge features that make the deep network's instance-segmentation feature learning more targeted. The SCNN structure is incorporated into the segmentation network to form a spatial segmentation network that strengthens the learning of spatial structure, and a dynamic convex-hull binary cross-entropy loss is introduced to improve output accuracy. Experimental results show that the spatial segmentation network significantly improves detection accuracy as well as the completeness and precision of segmentation for zebra crossings and intersection regions.

13.
A new gradient-based neural network is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle to solve linear and quadratic programming problems. In particular, a new function F(x, y) is introduced into the energy function E(x, y) so that E(x, y) is convex and differentiable and the resulting network is more efficient. This network incorporates all the relevant necessary and sufficient optimality conditions for convex quadratic programming problems. For linear programming and quadratic programming (QP) problems with either a unique solution or infinitely many solutions, we prove rigorously that, for any initial point, every trajectory of the neural network converges to an optimal solution of the QP and its dual problem. The proposed network differs from existing networks that use the penalty method or the Lagrange method, and the inequality constraints are handled properly. Simulation results show that the proposed neural network is feasible and efficient.
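An illustrative sketch, not the paper's specific E(x, y): a primal-dual gradient flow for a small equality-constrained QP, with gradient descent in the primal variable x and ascent in the dual variable y:

```python
import numpy as np

# Illustrative sketch (not the paper's specific E(x, y)): a primal-dual
# gradient flow for the equality-constrained QP
#   min (1/2) x'Qx + c'x  s.t.  Ax = b,
# descending in x and ascending in the dual y, integrated by Euler.

Q = np.array([[3.0, 1.0], [1.0, 2.0]])     # positive definite
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x = np.zeros(2)
y = np.zeros(1)
dt = 0.01
for _ in range(20000):
    x = x - dt * (Q @ x + c + A.T @ y)     # primal descent
    y = y + dt * (A @ x - b)               # dual ascent

print(x, A @ x - b)   # x near the KKT point, constraint residual near 0
```

At equilibrium the updates vanish, i.e. Qx + c + Aᵀy = 0 and Ax = b, which are exactly the KKT optimality conditions for this QP (here x = [0, 1], y = 0).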

14.
A convex network problem is a mathematical program in which the objective function is convex and the constraint set is a network with flow conservation at each node; in addition, upper and lower bounds are associated with each edge. In this paper, we construct the dual of such a program and hence reduce the dimension of the problem from the size of the arc set of the underlying network to that of the node set. Moreover, the dual program is unconstrained. This reduction is generally significant, and in one particular case the dimension is reduced to one, making the problem trivially solvable. The mathematical machinery is provided by the duality theory of generalized geometric programming.

15.
A mixture-of-experts model decomposes the input space of a problem into several distinct regions, maps each region with an expert network, and then uses a gating network to coordinate the experts and determine the output. The penalty-function-based mixture-of-experts model exploits the Fenchel inequality from the conjugacy of convex functions to construct an optimization objective with constraint terms. This objective is convex separately in the weights and in the hidden-layer outputs, so the optimization has no local minima; consequently it converges quickly and greatly improves the learning efficiency of the mixture-of-experts network. Numerical experiments show that the method outperforms traditional gradient-descent approaches.
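A minimal sketch of the gated forward pass described above, assuming linear experts and a softmax gating network; the penalty/Fenchel-based training scheme itself is not reproduced here:

```python
import numpy as np

# Minimal sketch of a mixture-of-experts forward pass: linear experts,
# each responsible for a region of input space, combined by a softmax
# gating network that weights the experts' outputs.

rng = np.random.default_rng(2)
d, k = 3, 4                             # input dim, number of experts
W_experts = rng.standard_normal((k, d)) # one linear expert per row
W_gate = rng.standard_normal((k, d))    # gating network weights

def moe_forward(x):
    expert_out = W_experts @ x                     # each expert's output
    logits = W_gate @ x
    gate = np.exp(logits - logits.max())
    gate /= gate.sum()                             # softmax gate weights
    return gate @ expert_out                       # gated combination

print(moe_forward(rng.standard_normal(d)))
```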

16.
In this paper, a neural network model is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle to solve general convex nonlinear programming (GCNLP) problems. Based on the saddle point theorem, the equilibrium point of the proposed neural network is proved to be equivalent to the optimal solution of the GCNLP problem. By employing a Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and is globally convergent to an exact optimal solution of the original problem. Simulation results show that the proposed neural network is feasible and efficient.

17.
时侠圣  孙佳月  徐磊  杨涛 《控制与决策》2023,38(5):1336-1344
The distributed resource allocation problem aims to allocate a fixed amount of resources while satisfying local constraints and minimizing a global cost function. First, for second-order integrator-type linear multi-agent systems over undirected connected networks, a distributed optimization algorithm with arbitrary initial values is proposed by combining the Karush-Kuhn-Tucker conditions; the dual variable for the global equality constraint is driven by proportional-integral control, while the dual variables for the local convex inequality constraints are obtained automatically. When the global cost function is nonsmooth and convex, the set-valued LaSalle invariance principle is used to prove that the proposed algorithm converges asymptotically to the global optimum. The algorithm is then extended to Euler-Lagrange multi-agent systems with unknown parameters over undirected connected networks, where Barbalat's lemma is used to prove asymptotic convergence to the global optimum for nonsmooth convex global cost functions. Finally, numerical simulations verify the effectiveness of the proposed algorithms.
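A minimal sketch of a closely related but simpler scheme, the Laplacian-gradient flow for resource allocation over an undirected connected graph; unlike the initialization-free algorithm above it needs a feasible start, but it illustrates the same KKT target of equalized marginal costs:

```python
import numpy as np

# Minimal sketch (a simpler relative of the algorithm above): the
# Laplacian-gradient flow for  min sum_i f_i(x_i)  s.t.  sum_i x_i = D
# over an undirected connected graph. The flow dx/dt = -L grad f(x)
# preserves the total resource and drives the marginal costs
# grad f_i(x_i) to consensus, which is the KKT condition.

n, D = 4, 10.0
a = np.array([1.0, 2.0, 0.5, 1.5])       # f_i(x_i) = a_i * x_i^2 / 2
L = np.array([[ 1, -1,  0,  0],          # path-graph Laplacian
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)

x = np.full(n, D / n)                    # feasible start: sums to D
dt = 0.05
for _ in range(5000):
    x = x - dt * (L @ (a * x))           # grad f_i(x_i) = a_i * x_i

print(x, x.sum())  # marginal costs a_i * x_i equalize; total stays D
```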

18.
In this paper, a one-layer recurrent neural network with a discontinuous hard-limiting activation function is proposed for quadratic programming. This neural network is capable of solving a large class of quadratic programming problems. The state variables of the neural network are proven to be globally stable and the output variables are proven to be convergent to optimal solutions as long as the objective function is strictly convex on a set defined by the equality constraints. In addition, a sequential quadratic programming approach based on the proposed recurrent neural network is developed for general nonlinear programming. Simulation results on numerical examples and support vector machine (SVM) learning show the effectiveness and performance of the neural network.

19.
This paper presents a continuous-time recurrent neural-network model for nonlinear optimization with any continuously differentiable objective function and bound constraints. Quadratic optimization with bound constraints is a special problem which can be solved by the recurrent neural network. The proposed recurrent neural network has the following characteristics. 1) It is regular in the sense that any optimum of the objective function with bound constraints is also an equilibrium point of the neural network. If the objective function to be minimized is convex, then the recurrent neural network is complete in the sense that the set of optima of the function with bound constraints coincides with the set of equilibria of the neural network. 2) The recurrent neural network is primal and quasiconvergent in the sense that its trajectory cannot escape from the feasible region and will converge to the set of equilibria of the neural network for any initial point in the bounded feasible region. 3) The recurrent neural network has an attractivity property in the sense that its trajectory will eventually converge to the feasible region even for initial states outside the bounded feasible region. 4) For minimizing any strictly convex quadratic objective function subject to bound constraints, the recurrent neural network is globally exponentially stable for almost any positive network parameters. Simulation results are given to demonstrate the convergence and performance of the proposed recurrent neural network for nonlinear optimization with bound constraints.

20.
Robustness to environmental variations is an important feature of any reliable communication network. This paper reports on a network-theory approach to the design of such networks, where the environmental changes are traffic fluctuations, topology modifications, and changes in the source of external traffic. Motivated by the definition of betweenness centrality in network science, we introduce the notion of traffic-aware betweenness (TAB) for data networks, where usually an explicit (or implicit) traffic matrix governs the distribution of external traffic into the network. We use the average normalized traffic-aware betweenness, referred to as traffic-aware network criticality (TANC), as our main metric to quantify the robustness of a network. We show that TANC is directly related to some important network performance metrics, such as average network utilization and average network cost. We prove that TANC is a linear function of the end-to-end effective resistances of the graph. As a result, TANC is a convex function of the link weights and can be minimized using convex optimization techniques. We use a semidefinite programming method to study the properties of the optimization problem and derive useful results for robust network planning purposes.
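A minimal sketch of the key quantity behind TANC: end-to-end effective resistances computed from the Moore-Penrose pseudoinverse of the weighted graph Laplacian (the traffic-aware weighting and the semidefinite-programming optimization from the paper are beyond this sketch):

```python
import numpy as np

# Minimal sketch: effective resistance between node pairs via the
# pseudoinverse of the weighted graph Laplacian, the quantity TANC
# is a linear function of (per the abstract above).

# Weighted adjacency of a small undirected graph (weights = link weights).
W = np.array([[0, 2, 1, 0],
              [2, 0, 1, 1],
              [1, 1, 0, 2],
              [0, 1, 2, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W        # weighted Laplacian
Lp = np.linalg.pinv(L)                # Moore-Penrose pseudoinverse

def effective_resistance(i, j):
    # Standard identity: r_ij = L+_ii + L+_jj - 2 L+_ij.
    return Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]

n = W.shape[0]
total = sum(effective_resistance(i, j)
            for i in range(n) for j in range(i + 1, n))
print(total)   # sum of pairwise effective resistances (a TANC-style aggregate)
```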
