Similar Documents
19 similar documents found (search time: 234 ms)
1.
A maximum a posteriori (MAP) image restoration method in the dual-tree complex wavelet transform domain is proposed. During the iterative MAP restoration, the method constructs a noise residual and denoises it with a zero-mean Gaussian model in the dual-tree complex wavelet domain, which avoids the noise amplification that occurs in the original Poisson-MAP restoration and thus regularizes the iteration. Comparative experiments show that the method effectively suppresses noise amplification during the restoration iterations and outperforms Wiener, Poisson-MAP, and similar algorithms in visual quality, PSNR, and ISNR.
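The Poisson MAP iteration this entry builds on is the Richardson-Lucy update, which can be sketched in a few lines. The 1-D signal, box PSF, and iteration count below are illustrative assumptions, and the paper's wavelet-domain residual-denoising step is omitted:

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=100, eps=1e-12):
    """Poisson MAP (Richardson-Lucy) update: x <- x * corr(psf, y / conv(psf, x))."""
    x = np.full_like(y, y.mean())                      # flat positive initial estimate
    psf_mirror = psf[::-1]                             # correlation = convolution with mirrored PSF
    for _ in range(n_iter):
        blurred_est = np.convolve(x, psf, mode="same")
        ratio = y / np.maximum(blurred_est, eps)       # guard against division by zero
        x = x * np.convolve(ratio, psf_mirror, mode="same")
    return x

# toy example: restore a 1-D signal blurred by a normalized box PSF
truth = np.zeros(64)
truth[20:28] = 10.0
truth[40] = 25.0
psf = np.ones(5) / 5.0
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

Without the paper's residual-denoising regularization, this plain iteration amplifies noise on noisy inputs, which is exactly the problem the entry addresses.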

2.
Many 3D reconstruction algorithms output their results as binary volume data, but extracting an isosurface directly from such data produces staircase and jagged aliasing artifacts. A binary-volume optimization method based on maximum a posteriori estimation and Markov random fields is therefore proposed. Assuming the target data is a random variable with the Markov property, a general optimization formula is derived by computing its maximum a posteriori probability, along with specialized formulas for commonly used models. On this basis, users can choose different prior and observation models to predict the most likely values of the data and take these as the optimization result. Experiments show that the method can be used for visualization, smoothing, denoising, and repair of binary volume data.
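A minimal sketch of MAP-MRF optimization for binary data is iterated conditional modes (ICM) with an Ising smoothness prior, shown here on a 2-D slice. The energy weights and the simple mismatch data term are assumptions for illustration, not the paper's general formula:

```python
import numpy as np

def icm_binary_mrf(obs, data_w=2.0, smooth_w=1.0, n_sweeps=5):
    """MAP estimate of a binary MRF by iterated conditional modes (ICM).

    Energy per pixel = data_w * [x != obs] + smooth_w * (# of 4-neighbors disagreeing).
    """
    x = obs.astype(np.uint8).copy()
    H, W = x.shape
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                nbrs = [x[a, b] for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < H and 0 <= b < W]
                best_v, best_e = x[i, j], np.inf
                for v in (0, 1):
                    e = data_w * (v != obs[i, j]) + smooth_w * sum(v != n for n in nbrs)
                    if e < best_e:
                        best_v, best_e = v, e
                x[i, j] = best_v
    return x

# toy example: a solid block with two isolated flipped voxels in one slice
clean = np.zeros((8, 8), dtype=np.uint8)
clean[2:6, 2:6] = 1
noisy = clean.copy()
noisy[3, 3] = 0          # interior pixel flipped off
noisy[0, 4] = 1          # background pixel flipped on
denoised = icm_binary_mrf(noisy)
```

The smoothness prior makes isolated flips expensive, so ICM restores them while leaving coherent regions untouched.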

3.
江静, 陈渝, 孙界平, 琚生根. 《计算机应用》2022, 42(6): 1789-1795
Pretrained language models for text representation achieve high accuracy on various text classification tasks, but two problems remain. First, after computing the posterior probabilities of all classes, a pretrained language model simply selects the class with the highest posterior as its final result, yet in many scenarios the quality of the posterior probabilities carries more reliable information than the classification result itself. Second, the classifier of a pretrained language model degrades when assigning different labels to semantically similar texts. To address both problems, a model combining posterior probability calibration with negative supervision, PosCal-negative, is proposed. The model dynamically penalizes the gap between predicted probabilities and empirical posterior probabilities end-to-end during training, and uses texts with different labels to provide negative supervision for the encoder, producing distinct feature representations for each class. Experiments show that PosCal-negative reaches classification accuracies of 91.55% and 69.19% on the two Chinese maternal-and-infant-care text classification datasets MATINF-C-AGE and MATINF-C-TOPIC, improvements of 1.13 and 2.53 percentage points over the ERNIE model, respectively.
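The calibration idea, penalizing the gap between predicted and empirical posteriors, can be illustrated with a toy penalty. The exact form below (squared gap between the batch-mean predicted probability per class and the empirical class frequency) is an assumption for illustration, not the paper's loss:

```python
import numpy as np

def calibration_penalty(probs, labels, n_classes):
    """Toy calibration penalty: squared gap between the mean predicted
    probability of each class and its empirical frequency in the batch."""
    probs = np.asarray(probs, dtype=float)
    empirical = np.bincount(labels, minlength=n_classes) / len(labels)
    return float(np.sum((probs.mean(axis=0) - empirical) ** 2))

# a perfectly calibrated batch incurs zero penalty; a model that ignores
# the true class mix is penalized even if its top-1 predictions look fine
labels = np.array([0, 0, 1, 2])
perfect = np.eye(3)[labels]                          # predictions matching the labels
overconfident = np.tile([0.9, 0.05, 0.05], (4, 1))   # ignores the true class mix
```

In the paper this kind of penalty is added to the training objective so the probabilities themselves, not just the argmax, become trustworthy.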

4.
Imaging technology is now ubiquitous, but constrained by imaging devices, imaging environments, and external noise during acquisition, many images in practice have low resolution, which causes numerous problems. An effective image super-resolution reconstruction model based on maximum a posteriori estimation and a non-local low-rank prior is therefore proposed. The model takes consecutive image sequences as input and uses similarity within a single image and across consecutive images as prior knowledge, improving the matching of similar patches and recovering lost detail. It is then formulated in a MAP framework, with Gaussian and Gibbs distributions fitting the model parameters to improve generalization. The singular values of a target patch are estimated from those of similar patches, and low-rank truncation suppresses the noise introduced during reconstruction. Finally, non-local self-similarity and the low-rank property regularize the reconstruction through a non-local low-rank constraint, adding local and global information to improve the result. Experiments on a standard optical-flow dataset and on datasets provided by New York University and Shandong Provincial Qianfoshan Hospital show that, compared with traditional interpolation and strong reconstruction-based algorithms, the proposed model improves average PSNR by 6.3 dB over five simulation experiments and achieves better reconstruction in preserving texture and recovering image detail.
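The low-rank truncation step used to suppress noise in a stack of similar patches can be sketched with a plain SVD; the rank, matrix size, and noise level below are illustrative assumptions:

```python
import numpy as np

def lowrank_truncate(M, rank):
    """Keep only the top `rank` singular values of M (hard singular-value truncation)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[rank:] = 0.0
    return (U * s) @ Vt            # U * s scales each left singular vector

# toy "similar-patch matrix": rows are similar patches, so the clean
# matrix is (nearly) low-rank; noise spreads over all singular directions
rng = np.random.default_rng(0)
signal = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 10))
noisy = signal + 0.01 * rng.standard_normal((20, 10))
denoised = lowrank_truncate(noisy, rank=2)
```

Truncation discards the singular directions dominated by noise while keeping the shared patch structure, which is the intuition behind the non-local low-rank prior.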

5.
The possibilistic linear model (PLM), based on possibility theory, plays an important role in applications such as fuzzy modeling. Drawing on statistical learning theory, this paper first extends the model to a regularized possibilistic linear model (RPLM) to improve its generalization. Then, by converting the optimization problem into a maximum a posteriori estimation problem, the relationship between the model's fitting threshold λ and the standard deviation σ of the input noise is studied for noisy data. Both theoretical derivation and simulation show that when the input noise is Gaussian, λ and σ are approximately inversely linearly related. This conclusion applies to both PLM and RPLM and provides a theoretical basis for choosing λ when the input noise standard deviation is known.

6.
A moving-object segmentation algorithm is explored that combines, in a Bayesian framework, the maximum a posteriori marginal probability of spatio-temporal label fields with the maximum a posteriori probability. A Bayesian distribution model is built to obtain the maximum a posteriori probability of the segmentation label field, and the maximum a posteriori marginal probability is introduced to find the minimum energy. The algorithm takes the temporal segmentation result as the initial label field and the spatial segmentation result as the observation field of the image, obtains the initial number of motions and the initial parameters of the corresponding motion models, then continually updates the model parameters through parameter estimation, and finally estimates the motion regions by associating each region with a motion model, achieving the segmentation. Experiments show that the studied method segments moving objects well.

7.
Analysis of energy functional regularization models in image restoration (total citations: 2; self-citations: 2; external citations: 0)
Objective: Energy functional regularization models are a hot topic in image restoration research. To encourage researchers from more engineering fields to explore and apply regularization techniques, and to advance the study of ill-posed problems, the progress of energy functional regularization models is analyzed. Method: First, the relationship between global and local image coordinates is established, the basic principles of regularization models for image restoration are analyzed, and theorems on isotropic and anisotropic diffusion of regularization models are stated and proved. Then, combining function spaces, image decomposition, and tight frames, the state of energy functional regularization models at home and abroad is reviewed, and the well-posedness of their solutions is analyzed. Result: The basic diffusion principle of image restoration regularization models is derived, a general expression for regularization models is given, and open problems and future directions are discussed. Conclusion: Regularization techniques play an important role in solving inverse problems such as image restoration and inpainting. Although researchers at home and abroad have achieved some results on this problem, many theoretical questions remain open.
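As a concrete instance of the energy functionals this survey discusses, the classical Tikhonov model min_u ||u − f||² + λ||Du||² has the closed-form solution (I + λDᵀD)u = f. The 1-D discretization below is an illustrative sketch, with the signal and λ chosen arbitrarily:

```python
import numpy as np

def tikhonov_denoise_1d(f, lam):
    """Minimize ||u - f||^2 + lam * ||D u||^2 via its normal equations
    (I + lam * D^T D) u = f, where D is the forward-difference operator."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n forward differences
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, f)

# toy example: smooth a noisy sine
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 100)
clean = np.sin(t)
noisy = clean + 0.3 * rng.standard_normal(100)
smoothed = tikhonov_denoise_1d(noisy, lam=5.0)
```

This quadratic penalty diffuses isotropically and blurs edges; the anisotropic models the survey reviews replace ||Du||² with edge-preserving terms such as total variation.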

8.
Passive blind digital image forensics identifies the authenticity and origin of an image without relying on any pre-extracted signature or pre-embedded information. When tampering with an image, forgers usually apply post-processing to conceal the distortions produced along spliced edges, and blurring is one of the most commonly used operations. An artificial-blur detection method is proposed: the strong correlation that blurring introduces among neighboring pixels is modeled; the EM algorithm estimates the posterior probability that each pixel belongs to this model; and blur is detected from the magnitude of the resulting posteriors. Experiments show that the algorithm effectively detects artificial blur traces in tampered images and is robust to different blur types, lossy JPEG compression, and global rescaling.
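The EM posterior estimation this entry relies on can be illustrated with a generic two-component 1-D Gaussian mixture; the paper's pixel-correlation model is replaced here by plain Gaussians as a simplifying assumption:

```python
import numpy as np

def em_gmm2(x, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture.
    Returns the posterior P(component 1 | x_i) for every sample."""
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] = P(component k | x_i)
        lik = pi / np.sqrt(2 * np.pi * var) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-8
        pi = nk / len(x)
    return r[:, 1]

# toy example: two well-separated clusters stand in for
# "consistent with the blur model" vs. "not consistent"
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 0.5, 200), rng.normal(5.0, 0.5, 200)])
post = em_gmm2(x)
```

In the forensic setting the per-pixel posterior map plays the role of `post`: thresholding it localizes the artificially blurred regions.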

9.
A moving-object detection method under the maximum a posteriori criterion is proposed. A MAP framework is first built from a conditional random field model and a Markov random field model, fusing temporal information across consecutive label fields, color information, and the spatial information of each label field. Because traditional methods fuse too few feature cues and localize targets inaccurately, color information and edge features are fully incorporated into the object model to obtain better detection results. Experiments show that the proposed method detects moving objects correctly.

10.
Two methods are studied for improving the posterior probability estimates used in a DNN-based acoustic modeling framework for assessing spontaneous spoken language: (1) rescoring the first-pass N-best hypotheses with an RNN language model to obtain more accurate recognition results and re-estimate the posteriors; (2) borrowing from multilingual neural network training, adding clustered states of dialect data as output nodes of the decoding network and introducing a dialect likelihood score into the posterior estimate, a new way to assess the degree of dialect. Experiments show that the posteriors estimated by the two methods improve correlation with human scores by 3.5% and 1.0% absolute, respectively, and by 4.9% absolute when the two methods are combined; on a real assessment task, adding the improved posterior scoring features raises overall score correlation by 2.2% absolute.
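The first method, N-best rescoring with an external language model, reduces to re-ranking hypotheses by an interpolated score. The toy hypotheses, scores, and interpolation weight below are assumptions for illustration:

```python
def rescore_nbest(nbest, lm_score, lm_weight=0.5):
    """Re-rank N-best hypotheses by acoustic score + weighted external LM score.

    nbest: list of (hypothesis, acoustic_score) pairs.
    lm_score: callable mapping a hypothesis to a language-model log score.
    """
    return max(nbest, key=lambda h: h[1] + lm_weight * lm_score(h[0]))

# toy example: the LM rescues the fluent hypothesis despite
# a slightly lower first-pass acoustic score
nbest = [("recognize speech", -10.0), ("wreck a nice beach", -9.5)]
toy_lm = {"recognize speech": -2.0, "wreck a nice beach": -8.0}
best = rescore_nbest(nbest, toy_lm.get)
```

The re-ranked 1-best then feeds the posterior re-estimation described in the entry.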

11.
Traditional learning algorithms use only labeled data for training. However, labeled examples are often difficult or time consuming to obtain since they require substantial human labeling efforts. On the other hand, unlabeled data are often relatively easy to collect. Semisupervised learning addresses this problem by using large quantities of unlabeled data with labeled data to build better learning algorithms. In this paper, we use the manifold regularization approach to formulate the semisupervised learning problem where a regularization framework which balances a tradeoff between loss and penalty is established. We investigate different implementations of the loss function and identify the methods which have the least computational expense. The regularization hyperparameter, which determines the balance between loss and penalty, is crucial to model selection. Accordingly, we derive an algorithm that can fit the entire path of solutions for every value of the hyperparameter. Its computational complexity after preprocessing is quadratic only in the number of labeled examples rather than the total number of labeled and unlabeled examples.
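A minimal sketch of the loss-plus-penalty objective this abstract describes: fit the labeled points while penalizing fᵀLf over a similarity graph built from all points, labeled and unlabeled. The Gaussian weights, the value of γ, and the toy data are assumptions; this is not the paper's path-fitting algorithm:

```python
import numpy as np

def laplacian_ssl(X, labeled_idx, y_labeled, sigma=1.0, gamma=0.1):
    """Minimize sum over labeled i of (f_i - y_i)^2 + gamma * f^T L f,
    where L is the graph Laplacian of a Gaussian similarity graph."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))          # similarity over ALL points
    L = np.diag(W.sum(axis=1)) - W              # graph Laplacian
    J = np.zeros((n, n))
    J[labeled_idx, labeled_idx] = 1.0           # indicator of labeled points
    y = np.zeros(n)
    y[labeled_idx] = y_labeled
    return np.linalg.solve(J + gamma * L, y)    # normal equations of the objective

# two 1-D clusters, one labeled point each; labels propagate to the rest
X = np.array([[0.0], [0.2], [0.4], [5.0], [5.2], [5.4]])
f = laplacian_ssl(X, labeled_idx=[0, 3], y_labeled=[1.0, -1.0])
```

γ plays the role of the regularization hyperparameter whose entire solution path the paper learns to fit efficiently.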

12.
Posterior-probability-based support vector machines (total citations: 8; self-citations: 0; external citations: 8)
In support vector machines (SVM), training samples always carry definite class information, which is inappropriate for some problems involving uncertainty. Inspired by the Bayes decision rule, this uncertainty is represented by the posterior probabilities of the samples. Combining the Bayes decision rule with the SVM, a framework for the posterior probability support vector machine (PPSVM) is established. Linear separability, margins, the optimal hyperplane, and soft-margin algorithms are discussed in detail, yielding a new optimization problem along with a new definition of support vectors. The PPSVM is in fact built on statistical learning theory and is an extension of the standard SVM. An empirical method for determining the posterior probabilities from data is also proposed. Experiments confirm the soundness and effectiveness of the PPSVM.
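The core idea, training on posterior probabilities instead of hard labels, can be illustrated with a simple stand-in: replace y ∈ {−1, +1} with soft targets m = 2p − 1 and fit a regularized linear model. This is an illustrative substitute (ridge regression on soft targets), not the PPSVM's actual optimization problem:

```python
import numpy as np

def soft_label_fit(X, posteriors, lam=1e-3):
    """Fit a linear model to soft targets m = 2p - 1 built from posterior
    probabilities p, via ridge regression (illustrative PPSVM stand-in)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])            # append bias column
    m = 2.0 * np.asarray(posteriors) - 1.0               # soft labels in [-1, 1]
    w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ m)
    return w

def soft_label_predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)

# two clusters with uncertain (posterior) class memberships instead of hard labels
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
posteriors = np.concatenate([np.full(20, 0.9), np.full(20, 0.1)])  # P(class +1 | x)
w = soft_label_fit(X, posteriors)
pred = soft_label_predict(w, X)
```

Samples with posteriors near 0.5 pull the decision boundary less strongly than confident ones, which is the behavior the PPSVM formalizes inside a margin framework.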

13.
Graph embedding (GE) is a unified framework for dimensionality reduction techniques. GE attempts to maximally preserve data locality after embedding for face representation and classification. However, estimation of true data locality can be severely biased due to the limited number of training samples, which triggers an overfitting problem. In this paper, a graph embedding regularization technique is proposed to remedy this problem. The regularization model, dubbed Locality Regularization Embedding (LRE), adopts the local Laplacian matrix to restore true data locality. Based on the LRE model, three dimensionality reduction techniques are proposed. Experimental results on five public benchmark face datasets (CMU PIE, FERET, ORL, Yale, and FRGC), together with the Nemenyi post-hoc statistical significance test, attest to the promising performance of the proposed techniques.

14.
We consider a variational model for the determination of the optic flow in a general setting of non-smooth domains. This problem is ill-posed and its solution with PDE techniques includes a regularization procedure. The goal of this paper is to study a method to solve the optic flow problem and to control the effects of the regularization by allowing, locally and adaptively, the choice of its parameters. The regularization in our approach is not controlled by a single parameter but by a function of the space variable. This results in a dynamical selection of the variational model, which evolves with the variations of this function. The method consists of a new adaptive finite element discretization and an a posteriori strategy for controlling the regularization, in order to achieve a trade-off between the data and smoothness terms in the energy functional. We perform the convergence analysis and the a posteriori analysis, and we prove that the error indicators provide, as a by-product, a confidence measure which shows the effects of regularization and serves to compute sparse solutions. We perform several numerical experiments to show the efficiency and reliability of the method in computing optic flow with high accuracy and low density.

15.
Recently, addressing the few-shot learning issue within a meta-learning framework has achieved great success. As is well known, regularization is a powerful technique widely used to improve machine learning algorithms; however, little research has focused on designing appropriate meta-regularizations to further improve the generalization of meta-learning models in few-shot learning. In this paper, we propose a novel meta-contrastive loss that can be regarded as a regularization to fill this gap. The motivation of our method rests on the observation that the limited data in few-shot learning is just a small part of the data sampled from the whole data distribution, and can lead to biased representations of that distribution depending on which part is sampled. Thus, the models trained on the few training data (support set) and test data (query set) may be misaligned in the model space, so that a model learned on the support set cannot generalize well to the query data. The proposed meta-contrastive loss is designed to align the models of the support and query sets to overcome this problem, improving the performance of the meta-learning model in few-shot learning. Extensive experiments demonstrate that our method improves the performance of different gradient-based meta-learning models in various learning problems, e.g., few-shot regression and classification.

16.
In this paper, we investigate the use of brain activity for person authentication. It has been shown in previous studies that the brainwave pattern of every individual is unique and that the electroencephalogram (EEG) can be used for biometric identification. EEG-based biometry is an emerging research topic and we believe that it may open new research directions and applications in the future. However, very little work has been done in this area, and it has focused mainly on person identification rather than person authentication. Person authentication aims to accept or reject a person claiming an identity, i.e., comparing biometric data to one template, while the goal of person identification is to match the biometric data against all the records in a database. We propose the use of a statistical framework based on Gaussian mixture models and maximum a posteriori model adaptation, successfully applied to speaker and face authentication, which can deal with only one training session. We perform intensive experimental simulations using several strict train/test protocols to show the potential of our method. We also show that some mental tasks are more appropriate for person authentication than others.
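The MAP model adaptation this framework borrows from speaker verification updates each GMM component mean as a count-weighted blend of the enrollment-data mean and the universal background model (UBM) mean. The relevance factor r = 16 is the conventional choice in the GMM-UBM literature, used here as an assumption:

```python
import numpy as np

def map_adapt_means(ubm_means, resp, X, r=16.0):
    """MAP adaptation of GMM component means (GMM-UBM style).

    resp[i, k]: responsibility of component k for sample i.
    New mean = alpha_k * data_mean_k + (1 - alpha_k) * ubm_mean_k,
    with alpha_k = n_k / (n_k + r) and n_k the soft count of component k.
    """
    nk = resp.sum(axis=0)                                  # soft counts
    safe_nk = np.maximum(nk, 1e-12)
    data_means = (resp.T @ X) / safe_nk[:, None]
    alpha = (nk / (nk + r))[:, None]
    return alpha * data_means + (1.0 - alpha) * ubm_means

# toy: all enrollment data belongs to component 0 and sits at 3.0
ubm_means = np.array([[0.0], [10.0]])
X = np.full((200, 1), 3.0)
resp = np.zeros((200, 2))
resp[:, 0] = 1.0             # component 0 explains every sample
adapted = map_adapt_means(ubm_means, resp, X)
```

Components that see no enrollment data stay at their UBM prior, which is what lets this scheme work with a single training session.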

17.
The maximum a posteriori (MAP) criterion is popularly used for feature compensation (FC) and acoustic model adaptation (MA) to reduce the mismatch between training and testing data sets. MAP-based FC and MA require prior densities of mapping function parameters, and designing suitable prior densities plays an important role in obtaining satisfactory performance. In this paper, we propose to use an environment structuring framework to provide suitable prior densities for facilitating MAP-based FC and MA for robust speech recognition. The framework is constructed in a two-stage hierarchical tree structure using environment clustering and partitioning processes. The constructed framework is highly capable of characterizing local information about complex speaker and speaking acoustic conditions. The local information is utilized to specify hyper-parameters in prior densities, which are then used in MAP-based FC and MA to handle the mismatch issue. We evaluated the proposed framework on Aurora-2, a connected digit recognition task, and Aurora-4, a large vocabulary continuous speech recognition (LVCSR) task. On both tasks, experimental results showed that with the prepared environment structuring framework, we could obtain suitable prior densities for enhancing the performance of MAP-based FC and MA.

18.
Overlapping community detection is an important problem in social network analysis and mining. Most existing methods require the number of communities K to be set manually in advance, which raises many issues. By extending the infinite latent feature model to relational data and building a network generation model with overlapping community structure in a nonparametric Bayesian hierarchical framework, presetting the value of K can be avoided. However, the posterior over the parameters of the relational infinite latent feature model is a distribution over N×K binary matrices, and summarizing such multivariate structured posterior inference results and assessing inference quality is challenging. A method is therefore proposed that uses a graph classifier, a graph convolutional network trained with adversarial samples, to help summarize the inference results and estimate the inference quality.

19.
The architecture of the cerebellar model articulation controller (CMAC) presents a rigid compromise between learning and generalization. In the presence of a sparse training dataset, this limitation manifestly causes overfitting, a drawback that is not overcome by current training algorithms. This paper proposes a novel training framework founded on Tikhonov regularization, which amounts to minimizing the power of the σ-order derivative. This smoothness criterion yields an internal cell-interaction mechanism that increases generalization beyond the degree hardcoded in the CMAC architecture while preserving the potential CMAC learning capabilities. The resulting training mechanism, which proves to be simple and computationally efficient, is deduced from a rigorous theoretical study. The performance of the new training framework is validated against comparative benchmarks from the DELVE environment.
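Tikhonov regularization in its simplest (zeroth-order) form is ridge regression, which shrinks the weight vector and thereby curbs the overfitting this entry describes; the σ-order derivative penalty generalizes the same idea to smoothness of higher derivatives. The polynomial toy problem below is an assumption for illustration, unrelated to the CMAC's actual basis functions:

```python
import numpy as np

def tikhonov_fit(Phi, y, lam):
    """Zeroth-order Tikhonov (ridge) solution w = (Phi^T Phi + lam I)^{-1} Phi^T y."""
    d = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(d), Phi.T @ y)

# sparse noisy samples of a smooth function, fitted with a degree-9 polynomial:
# the (near-)unregularized fit interpolates the noise with huge coefficients,
# while the regularized fit keeps the weights small
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 10)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(10)
Phi = np.vander(x, 10)                       # monomial features x^9 ... x^0
w_plain = tikhonov_fit(Phi, y, lam=1e-12)    # essentially unregularized
w_ridge = tikhonov_fit(Phi, y, lam=1e-2)
```

The penalty trades a little training fit for a smoother, better-generalizing function, which is exactly the effect the paper's cell-interaction mechanism produces inside the CMAC.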
