Similar Articles
17 similar articles found.
1.
唐述  万盛道  杨书丽  谢显中  夏明  张旭 《软件学报》2019,30(12):3876-3891
Accurate estimation of the motion blur kernel (MBK) is key to successful blind restoration of a single motion-blurred image. However, because salient image edges are not extracted accurately and the regularization terms in common use are overly simple, existing MBK estimates are imprecise and contain artifacts. To obtain an accurate kernel, this paper proposes an MBK estimation method based on spatial-scale information. First, to extract salient image edges accurately and remove harmful image structures, an image smoothing model based on the spatial-scale information of the image is proposed, which extracts salient edges quickly and accurately. Second, starting from the intrinsic properties of the blur kernel, the L0 norm in the spatial domain and the L2 norm in the gradient domain are combined into a regularization model that preserves the sparse and smooth character of the kernel; together with the extracted salient edges, it yields an accurate estimate of the motion blur kernel. Finally, a half-quadratic splitting alternating optimization strategy solves the proposed model. Extensive experiments, judged both by objective metrics and by visual quality, show that the method estimates more accurate MBKs and restores higher-quality deblurred images.
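As a concrete illustration, here is a minimal NumPy sketch of the half-quadratic splitting idea for the kernel step, assuming a predicted sharp image S and the blurred image B on the same grid; the parameter values, the delta initialization, and solving on the full image grid are illustrative simplifications, not the paper's implementation.

```python
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad a small filter to `shape` and circularly shift its center
    to the origin before the FFT (standard psf2otf behaviour)."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    for ax, s in enumerate(psf.shape):
        pad = np.roll(pad, -(s // 2), axis=ax)
    return np.fft.fft2(pad)

def estimate_kernel(S, B, lam=2e-3, beta=1e-2, mu=1e-2, iters=10):
    """Half-quadratic splitting for
        min_k ||grad(S) * k - grad(B)||^2 + beta ||grad(k)||^2 + lam ||k||_0,
    alternating a hard-threshold step (the L0 part, via auxiliary z)
    with a closed-form Fourier solve for the quadratic parts."""
    shape = S.shape
    Dx = psf2otf(np.array([[1.0, -1.0]]), shape)
    Dy = psf2otf(np.array([[1.0], [-1.0]]), shape)
    Fs, Fb = np.fft.fft2(S), np.fft.fft2(B)
    num = np.conj(Dx * Fs) * (Dx * Fb) + np.conj(Dy * Fs) * (Dy * Fb)
    den = np.abs(Dx * Fs) ** 2 + np.abs(Dy * Fs) ** 2 \
        + beta * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    k = np.zeros(shape)
    k[0, 0] = 1.0                                   # delta initialization
    for _ in range(iters):
        z = np.where(k ** 2 > lam / mu, k, 0.0)     # L0 handled by hard threshold
        k = np.real(np.fft.ifft2((num + mu * np.fft.fft2(z)) / (den + mu)))
    k = np.maximum(k, 0.0)
    return k / max(k.sum(), 1e-12)                  # nonnegative, sums to one
```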

2.
Accurate water level prediction can guide urban flood-control and disaster-mitigation measures and hydraulic engineering construction, and speed up emergency response to urban flooding. Data-driven water level models, LSTM models in particular, are widely used because they capture the strongly nonlinear relationships among hydrological variables well. However, hydrological records are often contaminated by noise and human interference, which degrades predictive performance. To address this, this paper develops a new hybrid model, SSA-LSTM. The model first applies singular spectrum analysis (SSA) to decompose the observed time series into periodic, trend, and noise components, and then trains an LSTM on the SSA-denoised series to produce the final prediction. Taking the water level above the Guoyang sluice in the Guo River basin from May 1971 to December 2020 as the dataset, the study: 1) decomposes the raw water level series by SSA into trend and noise components (RC1-RC12) and reconstructs the components RC1-RC10, taken as the trend terms, into a new water level signal; 2) trains and validates the LSTM on the reconstructed signal and compares the predictions with those of a plain LSTM; 3) evaluates single-step prediction at different time steps (7, 14, 21, 28, and 35 days) to find the best SSA-LSTM configuration. At every time step, the SSA-LSTM model outperforms the LSTM on the coefficient of determination R2, the root mean square error RMSE, and the mean absolute percentage error MAPE. Preprocessing the Guoyang sluice water levels with SSA thus effectively improves LSTM prediction: compared with a conventional LSTM, SSA-LSTM is more reliable and has lower error, is better suited to water level prediction, and offers a sounder basis for scheduling flood control, irrigation, water supply, and other hydraulic measures.
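A compact sketch of the SSA denoising step (and, in the closing comment, the downstream LSTM), assuming a 1-D water level array; the window length, the ten retained components, and the Keras layer sizes are illustrative assumptions.

```python
import numpy as np

def ssa_denoise(x, L=120, keep=10):
    """Singular spectrum analysis: embed x into an L x K trajectory matrix,
    SVD it, keep the leading components (the trend terms, e.g. RC1-RC10),
    and map the low-rank matrix back to a series by diagonal averaging."""
    N = len(x); K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :keep] * s[:keep]) @ Vt[:keep]       # rank-`keep` approximation
    rec, cnt = np.zeros(N), np.zeros(N)
    for j in range(K):                              # diagonal averaging
        rec[j:j + L] += Xr[:, j]; cnt[j:j + L] += 1
    return rec / cnt

# The denoised series then feeds a plain LSTM regressor, e.g. with tf.keras:
#   model = tf.keras.Sequential([tf.keras.layers.LSTM(64, input_shape=(T, 1)),
#                                tf.keras.layers.Dense(1)])
# trained on sliding windows of length T (7, 14, 21, 28, or 35 days).
```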

3.
To counter the low accuracy and overfitting of traditional neural network models in flood forecasting, this paper takes the monthly mean water level at the Waizhou hydrological station in the Ganjiang River basin as its study object and proposes a flood forecasting model based on a regularized GRU network to improve forecast precision. The relu function serves as the activation of the network's output layer, and elastic-net regularization is introduced into the GRU model, regularizing the input weights w to improve the model's generalization; the model is then applied to fitting and forecasting the station's monthly mean water level. Comparative experiments show that the elastic-net-regularized model fits the data more closely, raises the qualification rate by 9.3%, and produces a smaller root mean square error.
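A minimal Keras sketch of this setup, assuming sliding windows of 12 monthly stages as input; the layer width, penalty strengths, and window length are illustrative, not the paper's settings.

```python
import tensorflow as tf

# Elastic-net (L1 + L2) penalty on the GRU input weights w, relu output.
model = tf.keras.Sequential([
    tf.keras.layers.GRU(
        32, input_shape=(12, 1),
        kernel_regularizer=tf.keras.regularizers.L1L2(l1=1e-4, l2=1e-4)),
    tf.keras.layers.Dense(1, activation="relu"),   # stages are non-negative
])
model.compile(optimizer="adam", loss="mse")
```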

4.
An improved regularization model applied to image restoration
Objective: When the Hessian formed from the data-fidelity term and the regularization term has no special structure, computing its inverse is difficult. To overcome this drawback, a Newton projection iterative algorithm whose Hessian can be block-diagonalized is proposed. Method: First, the fidelity term is described by the L2 norm and the regularization term by a composite function whose argument is a function of bounded variation, giving an energy-functional regularization model. Second, a potential function is introduced to turn the model into an augmented energy functional. Third, a preconditioning matrix is constructed so that the Hessian becomes block-diagonalizable. Finally, to keep the Newton projection iteration from converging to a local optimum, a backtracking line search and an improved Barzilai-Borwein step-size update rule make the algorithm globally convergent. Result: For the dilemma in regularized image deblurring between smoothing edges and producing staircase artifacts, a new regularization model and a Newton projection iterative algorithm are proposed; simulations show the algorithm resolves this dilemma well. Conclusion: Compared with other regularized deblurring models, the algorithm clearly improves image quality: it preserves edges, suppresses the staircase effect, attains smaller relative deviation and error, and reaches higher peak signal-to-noise ratio and structural similarity.
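The globalization devices named above (backtracking line search plus a Barzilai-Borwein step) can be sketched generically; the projected-descent loop below, with caller-supplied f, grad, and proj, is an assumption-level stand-in rather than the paper's preconditioned Newton scheme.

```python
import numpy as np

def projected_bb_descent(f, grad, proj, x0, iters=100, c=1e-4):
    """Projected descent with a Barzilai-Borwein trial step, safeguarded by
    an Armijo backtracking line search so the iteration converges globally."""
    x = proj(np.asarray(x0, dtype=float))
    g = grad(x); alpha = 1.0
    for _ in range(iters):
        step = alpha
        while True:                                  # backtracking line search
            x_new = proj(x - step * g)
            if f(x_new) <= f(x) + c * g @ (x_new - x) or step < 1e-12:
                break
            step *= 0.5
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / max(s @ y, 1e-12)          # BB step for next trial
        x, g = x_new, g_new
    return x

# e.g. nonnegatively constrained least squares, min ||Ax - b||^2 s.t. x >= 0:
#   x = projected_bb_descent(lambda x: ((A @ x - b) ** 2).sum(),
#                            lambda x: 2 * A.T @ (A @ x - b),
#                            lambda x: np.maximum(x, 0.0), np.zeros(A.shape[1]))
```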

5.
胡庆辉  丁立新  何进荣 《软件学报》2013,24(11):2522-2534
In machine learning, kernel methods are an effective means of solving nonlinear pattern recognition problems. Replacing traditional single-kernel learning with multiple kernel learning has become a new research focus; on heterogeneous, irregular, and unevenly distributed data it shows better flexibility, better interpretability, and superior generalization. Building on multiple kernel learning from supervised settings, this paper proposes an optimization model for a multiple-kernel semi-supervised support vector machine (S3VM) under an Lp-norm constraint. The parameters to be optimized are the decision functions f_m in the high-dimensional spaces and the kernel combination weights θ_m, and the model inherits the non-convex, non-smooth character of the single-kernel S3VM. A two-level optimization procedure handles the two groups of parameters: an improved quasi-Newton method deals with the non-smoothness in f_m, and a local search based on pairwise label swapping deals with the non-convexity, yielding an approximate optimum. Both base kernels and manifold kernels enter the multi-kernel framework so as to exploit the geometry of the data fully. Experimental results confirm the algorithm's effectiveness and good generalization.
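In symbols, this kind of model can be sketched as follows (standard Lp-MKL conventions assumed, not quoted from the paper): labeled points L pay the usual hinge loss, unlabeled points U pay a hinge on the magnitude of the decision value, and the kernel weights live in an Lp ball,

$$\min_{\theta\ge 0,\ \|\theta\|_p\le 1}\ \min_{\{f_m\},b}\ \frac12\sum_{m=1}^{M}\frac{\|f_m\|_{\mathcal H_m}^2}{\theta_m}+C\sum_{i\in L}\ell\Bigl(y_i,\sum_m f_m(x_i)+b\Bigr)+C^*\sum_{j\in U}\ell\Bigl(\Bigl|\sum_m f_m(x_j)+b\Bigr|\Bigr),$$

whose inner problem is non-convex and non-smooth exactly as the abstract notes.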

6.
刘刚  王国瑾 《软件学报》2010,21(6):1473-1479
An optimal multi-degree reduction algorithm with endpoint constraints, in the L2 norm, is given for Said-Bézier-type generalized Ball curves (SBGB curves). From the conversion relations among the SBGB basis, the power basis, and the Jacobi basis, an explicit conversion matrix between the SBGB basis and the Jacobi basis is obtained; using the orthogonality of the Jacobi basis and the inverse of this matrix, an explicit constrained multi-degree reduction algorithm for SBGB curves in the L2 norm is derived. The algorithm subsumes the corresponding degree-reduction algorithms for Said-Ball curves, Bézier curves, and the large family of parametric curves lying between the two. It is shown to be a one-step multi-degree reduction that predicts the optimal error and satisfies high-order endpoint constraints. Numerical examples illustrate its correctness and advantages.
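The role of Jacobi orthogonality admits a one-line sketch (notation assumed, not the paper's): once the curve is expressed in an orthogonal Jacobi basis, optimal L2 degree reduction is coefficient truncation and the error is known before computing,

$$P(t)=\sum_{i=0}^{n} c_i J_i(t)\ \Longrightarrow\ \hat P(t)=\sum_{i=0}^{m} c_i J_i(t),\qquad \|P-\hat P\|_2^2=\sum_{i=m+1}^{n} c_i^2\,\|J_i\|_2^2,$$

which is why the algorithm can predict the optimal error and run in one step; the endpoint constraints are handled by restricting to a Jacobi subfamily with matching endpoint behaviour.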

7.
To improve the forecast accuracy for the air pollutant SO2, an optimal fixed-weight combination forecasting model for SO2 is built from several air-quality models (WRF-CHEM, CMAQ, CAMx), with the weights chosen to minimize the sum of squared combination errors of the individual models over a past period. Observations from January to May 2018 at three Yunnan stations (Chuxiong, Zhaotong, and Mengzi) and the forecasts of the three models form the experimental sample; under the same conditions, the proposed method is compared against multiple linear regression and a dynamic weight-updating method. The results show that its predictions track the observations more closely than either alternative and attain the smallest values on both error metrics. Overall, the optimal fixed-weight combination model integrates the strengths of the individual air-quality models and improves SO2 forecast accuracy.
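Minimizing the sum of squared combination errors subject to the weights summing to one has a classical closed form; a minimal sketch, assuming a matrix of past per-model errors (all names illustrative):

```python
import numpy as np

def optimal_fixed_weights(errors):
    """errors: (T, m) past forecast errors of the m models
    (here WRF-CHEM, CMAQ, CAMx). Minimize w' E w s.t. sum(w) = 1, with
    E = errors' errors; the Lagrangian gives w = E^-1 1 / (1' E^-1 1)."""
    E = errors.T @ errors
    Einv = np.linalg.pinv(E)                 # pinv guards against singular E
    ones = np.ones(E.shape[0])
    return Einv @ ones / (ones @ Einv @ ones)

# combined forecast: preds (T, m) @ optimal_fixed_weights(errors)
```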

8.
Wavelength assignment algorithms for all-optical trees of rings under different communication models
Wavelength assignment algorithms and their worst-case performance are studied for WDM all-optical trees of rings under different communication models. For the static model, 5L/2 is proved to be a tight bound on the number of wavelengths required. For the dynamic model, a wavelength assignment algorithm is proposed whose approximation ratio is $\sum_{i=1}^{h}\max_{r\in R_i}\lceil\log|V(r)|\rceil+h$, where h is the number of levels of the underlying tree, R_i is the set of rings at level i of the tree of rings, and |V(r)| is the number of nodes on ring r. For the incremental model, a wavelength assignment algorithm with approximation ratio O(log^2(t+1)) is proposed, where t is the number of rings in the tree of rings.

9.
邓少波  黎敏  曹存根  眭跃飞 《软件学报》2015,26(9):2286-2296
A propositional modal logic with the modality □φ = □1φ ∨ □2φ is proposed, where □1 and □2 are given modalities; its language, syntax, and semantics are given, and its axiomatization is sound and complete. The axiom system shares a similar language with S5 but has different syntax and semantics. For any formula φ, □φ = □1φ ∨ □2φ; a frame is a triple ⟨W, R1, R2⟩ and a model a quadruple ⟨W, R1, R2, I⟩. In the completeness proof, two equivalence relations have to be constructed on the set of all maximal consistent sets, so the canonical model is built differently from the classical canonical-model construction. If the accessibility relation R1 of □1 equals the accessibility relation R2 of □2, the axiom system reduces to S5.
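Under the intended reading □φ = □1φ ∨ □2φ, the satisfaction clause on a model M = ⟨W, R1, R2, I⟩ takes the following form (a sketch of the semantics as described, notation assumed):

$$M,w\models\Box\varphi \iff \forall v\,(wR_1v\Rightarrow M,v\models\varphi)\ \text{ or }\ \forall v\,(wR_2v\Rightarrow M,v\models\varphi),$$

which is why the completeness argument needs two relations on the maximal consistent sets rather than the single canonical relation of S5.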

10.
An L1-norm ELM algorithm based on surrogate functions and a Bayesian framework
韩敏  李德才 《自动化学报》2011,37(11):1344-1350
To address the ill-posedness and model-size control problems of the extreme learning machine (ELM), an improved ELM algorithm with an L1-norm regularization term is proposed. Adding the L1 penalty to the quadratic loss controls the model size and improves the generalization of the ELM. Moreover, to simplify the solution of the L1-regularized problem, a marginal optimization method constructs a suitable surrogate function, so that a Bayesian method can replace computationally expensive cross-validation and the regularization parameter can be estimated adaptively. Simulations show that the algorithm effectively simplifies the model structure while keeping high prediction accuracy.
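A minimal sketch of the L1-regularized ELM idea, with sklearn's Lasso standing in for the paper's surrogate-function/Bayesian solver; the sigmoid hidden layer and all sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def l1_elm_fit(X, y, n_hidden=100, alpha=1e-2, seed=0):
    """ELM with an L1 penalty on the output weights: random input weights,
    sigmoid hidden layer H, then an L1-penalized linear fit, which zeroes
    output weights and thereby controls the model size."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))    # hidden-layer output matrix
    reg = Lasso(alpha=alpha).fit(H, y)        # L1 prunes hidden nodes
    return W, b, reg

def l1_elm_predict(X, W, b, reg):
    return reg.predict(1.0 / (1.0 + np.exp(-(X @ W + b))))
```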

11.
A p-norm regularized support vector machine classification algorithm
The L2-norm penalized support vector machine (SVM) is among the most widely used classifiers, and L1-norm and L0-norm penalized SVM algorithms, which perform feature selection and classifier construction simultaneously, have also been proposed. In both, however, the order of the regularization term is fixed in advance, at p = 2 or p = 1, whereas our experiments show that for different data, different regularization orders can improve the predictive accuracy of the classifier. This paper proposes a new design pattern, the p-norm regularized SVM classifier, in which the order p of the regularization norm ranges over 0 ≤ p ≤ 2 and which includes the L2-norm, L1-norm, and L0-norm penalized SVMs as special cases.
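In symbols, the design pattern treats the penalty order as a tunable quantity (a sketch under standard conventions, with $\|w\|_0^0$ read as the number of nonzero components):

$$\min_{w,b}\ \|w\|_p^p + C\sum_{i=1}^{n}\max\bigl(0,\,1-y_i(w^\top x_i+b)\bigr),\qquad 0\le p\le 2,$$

so that p = 2, 1, 0 recover the L2-, L1-, and L0-norm penalized SVMs.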

12.
Quantiles are computed by optimizing an asymmetrically weighted L1 norm, i.e. the sum of absolute values of residuals. Expectiles are obtained in a similar way when using an L2 norm, i.e. the sum of squares. Computation is extremely simple: weighted regression leads to the global minimum in a handful of iterations. Least asymmetrically weighted squares are combined with P-splines to compute smooth expectile curves. Asymmetric cross-validation and the Schall algorithm for mixed models allow efficient optimization of the smoothing parameter. Performance is illustrated on simulated and empirical data.
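The asymmetric weighting is easy to state concretely; a minimal sketch for a single sample expectile (the regression and P-spline versions replace the weighted mean with a weighted least squares fit):

```python
import numpy as np

def expectile(y, tau=0.8, tol=1e-10):
    """Asymmetrically weighted L2: residuals above the current estimate get
    weight tau, those below get 1 - tau; iterating the weighted mean reaches
    the global minimum in a handful of iterations."""
    e = y.mean()
    while True:
        w = np.where(y > e, tau, 1.0 - tau)
        e_new = (w * y).sum() / w.sum()
        if abs(e_new - e) < tol:
            return e_new
        e = e_new
```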

13.
The sparsity driven classification technologies have attracted much attention in recent years, due to their capability of providing more compressive representations and clear interpretation. Two most popular classification approaches are support vector machines (SVMs) and kernel logistic regression (KLR), each having its own advantages. The sparsification of SVM has been well studied, and many sparse versions of the L2-norm SVM, such as the L1-norm SVM (L1-SVM), have been developed. But the sparsification of KLR has been less studied. The existing sparsification of KLR is mainly based on L1-norm and L2-norm penalties, which leads to sparse versions that yield solutions not so sparse as they should be. A very recent study on L1/2 regularization theory in compressive sensing shows that L1/2 sparse modeling can yield solutions more sparse than those of L1-norm and L2-norm regularization, and, furthermore, the model can be efficiently solved by a simple iterative thresholding procedure. The objective function dealt with in L1/2 regularization theory is, however, of square form, the gradient of which is linear in its variables (such an objective function is the so-called linear gradient function). In this paper, through extending the linear gradient function of the L1/2 regularization framework to the logistic function, we propose a novel sparse version of KLR, the L1/2 quasi-norm kernel logistic regression (L1/2-KLR). The version integrates the advantages of KLR and L1/2 regularization and defines an efficient implementation scheme of sparse KLR. We suggest a fast iterative thresholding algorithm for L1/2-KLR and prove its convergence. We provide a series of simulations to demonstrate that L1/2-KLR can often obtain more sparse solutions than the existing sparsity driven versions of KLR, at the same or better accuracy level. The conclusion remains true even in comparison with sparse SVMs (L1-SVM and L2-SVM). We show an exclusive advantage of L1/2-KLR: the regularization parameter in the algorithm can be adaptively set whenever the sparsity (correspondingly, the number of support vectors) is given, which suggests a methodology for comparing the sparsity promotion capability of different sparsity driven classifiers. As an illustration of the benefits of L1/2-KLR, we give two applications of L1/2-KLR in semi-supervised learning, showing that it can be successfully applied to classification tasks in which only a few data are labeled.
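In symbols, the model pairs the KLR loss with the L1/2 quasi-norm penalty (a sketch with assumed notation: K the kernel matrix, β the expansion coefficients), minimized by the iterative thresholding scheme mentioned above:

$$\min_{\beta}\ \sum_{i=1}^{n}\log\bigl(1+e^{-y_i (K\beta)_i}\bigr)+\lambda\|\beta\|_{1/2}^{1/2},\qquad \|\beta\|_{1/2}^{1/2}=\sum_j |\beta_j|^{1/2}.$$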

14.
Objective: To improve the sharpness of blind restoration of motion-blurred images, a blind deblurring algorithm with mixed-property regularization constraints is proposed. Method: First, salient edges are extracted with a structure-extraction algorithm based on locally weighted total variation, reducing the influence of noise on edge extraction. Then the smoothness and fidelity regularization terms of the blur-kernel model are improved, strengthening the kernel's noise robustness while keeping the estimate accurate. Finally, the gradient-fitting strategy is improved and an edge-preserving regularization term is added, so that the image gradients better follow a heavy-tailed distribution while edge details are preserved. Result: Two groups of experiments verify the advantages of the improved model and the proposed algorithm. The first, on synthetically blurred images, compares five combinations of the algorithm's steps and confirms the robustness of the improved kernel and restoration models; the restored edges are sharper and more natural, and the evaluation metrics improve clearly. The second, on real motion-blurred images from a small UAV, compares the algorithm with traditional ones to assess robustness and practicality; the standard deviation of the restored images rises by about 11.4%, the mean gradient by about 30.1%, and the information entropy by about 2.2%, with good subjective visual quality. Conclusion: Theoretical analysis and experiments show the superiority of the improved model and the good restoration performance of the proposed algorithm.

15.
Hyperspectral unmixing is essential for image analysis and quantitative applications. To further improve the accuracy of hyperspectral unmixing, we propose a novel linear hyperspectral unmixing method based on l1-l2 sparsity and total variation (TV) regularization. First, the enhanced sparsity based on the l1-l2 norm is explored to depict the intrinsic sparse characteristic of the fractional abundances in a sparse regression unmixing model, because the l1-l2 norm promotes stronger sparsity than the l1 norm. Then, TV is minimized to enforce spatial smoothness by considering the spatial correlation between neighbouring pixels. Finally, the extended alternating direction method of multipliers (ADMM) is utilized to solve the proposed model. Experimental results on simulated and real hyperspectral datasets show that the proposed method outperforms several state-of-the-art unmixing methods.
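A sketch of the objective in symbols (notation assumed: M the endmember matrix, A the abundance matrix, Y the observed pixels, with the l1-l2 term applied columnwise):

$$\min_{A\ge 0}\ \tfrac12\|MA-Y\|_F^2+\lambda\bigl(\|A\|_{1,1}-\|A\|_{2,1}\bigr)+\mu\,\mathrm{TV}(A),$$

where ADMM splits the data term, the nonconvex sparsity term, and the TV term into separately solvable subproblems.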

16.
Optimal state estimation from given observations of a dynamical system by data assimilation is generally an ill-posed inverse problem. In order to solve the problem, a standard Tikhonov, or L2, regularization is used, based on certain statistical assumptions on the errors in the data. The regularization term constrains the estimate of the state to remain close to a prior estimate. In the presence of model error, this approach does not capture the initial state of the system accurately, as the initial state estimate is derived by minimizing the average error between the model predictions and the observations over a time window. Here we examine an alternative L1 regularization technique that has proved valuable in image processing. We show that for examples of flow with sharp fronts and shocks, the L1 regularization technique performs more accurately than standard L2 regularization.
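Schematically (notation assumed: model propagator M, observation operator H, background x_b), the two estimators differ only in the norm of the regularization term in the variational objective:

$$\hat x_0=\arg\min_{x_0}\ \sum_{k=0}^{K}\bigl\|H\bigl(M_{0\to k}(x_0)\bigr)-y_k\bigr\|_2^2+\lambda\|x_0-x_b\|_q^q,\qquad q\in\{1,2\},$$

with q = 2 the standard Tikhonov case and q = 1 the alternative that tolerates the sharp fronts and shocks discussed above.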

17.
Qiao  Chen  Yang  Lan  Shi  Yan  Fang  Hanfeng  Kang  Yanmei 《Applied Intelligence》2022,52(1):237-253

Sparsity is crucial for deep neural networks: it can improve their learning ability, especially on high-dimensional data with small sample sizes. Commonly used regularization terms for keeping deep neural networks sparse are based on the L1 norm or L2 norm; however, these are not the most faithful surrogates of the L0 norm. In this paper, based on the fact that minimizing a log-sum function is an effective approximation to minimizing the L0 norm, a sparse penalty term on the connection weights using the log-sum function is introduced. By embedding the corresponding iterative re-weighted-L1 minimization algorithm into k-step contrastive divergence, the connections of deep belief networks can be updated in a sparse, self-adaptive way. Experiments on two kinds of biomedical datasets, which are typical small-sample datasets with a large number of variables, i.e., brain functional magnetic resonance imaging data and single nucleotide polymorphism data, show that the proposed deep belief networks with self-adaptive sparsity learn layer-wise sparse features effectively. The results demonstrate better performance, in both identification accuracy and sparsity capability, than several typical learning machines.
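The log-sum/reweighted-L1 connection admits a compact sketch: the penalty sum(log(eps + |w|)) is majorized at the current iterate by a weighted L1 term with weights 1/(eps + |w_old|), so each update is a weighted-L1 (soft-thresholding) step. All names below, and the embedding into CD-k training, are illustrative assumptions:

```python
import numpy as np

def reweighted_l1_step(w, grad_loss, lam=1e-3, eps=1e-6, lr=1e-2):
    """One iterative re-weighted-L1 update for the log-sum penalty
    lam * sum(log(eps + |w|)): a gradient step on the data loss, then
    soft thresholding with per-weight thresholds 1 / (eps + |w_old|)."""
    weights = 1.0 / (eps + np.abs(w))         # majorizer weights at w_old
    v = w - lr * grad_loss(w)                 # gradient step on the loss
    return np.sign(v) * np.maximum(np.abs(v) - lr * lam * weights, 0.0)
```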

