Similar Literature
 20 similar documents found (search time: 15 ms)
1.
An improved Bayesian trust evaluation model for WSNs   Cited: 2 (self-citations: 0, others: 2)
Based on Bayesian theory and entropy, an improved trust evaluation model for WSNs is proposed. To account for abnormal network behavior caused by non-intrusion factors, an anomaly attenuation factor is introduced; direct trust is estimated with a modified Bayesian equation and updated using a sliding window and an adaptive forgetting factor. The confidence level of the direct trust determines whether it is trustworthy enough to serve as the overall trust on its own, which reduces network energy consumption and limits the impact of malicious feedback. If the direct trust is not sufficiently trustworthy, indirect trust is computed to obtain the overall trust, with entropy used to weight the different recommendations; this overcomes the limitations of subjectively assigned weights and strengthens the adaptability of the model. Simulation experiments show that the model effectively detects malicious nodes, achieving a high detection rate and a low false-positive rate while greatly reducing the energy consumption of the network.
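The direct-trust update described above can be sketched with a Beta-distribution model over cooperative versus misbehaving observations, a sliding window, and a forgetting factor. This is a minimal illustration, not the paper's exact equations: the window size, forgetting factor, Beta(1, 1) prior, and class name are all illustrative assumptions.

```python
from collections import deque

class BetaTrust:
    """Sliding-window Beta trust with an exponential forgetting factor (sketch)."""
    def __init__(self, window=10, rho=0.9):
        self.obs = deque(maxlen=window)  # 1 = cooperative, 0 = misbehaving
        self.rho = rho                   # forgetting factor for older evidence

    def record(self, cooperative):
        self.obs.append(1 if cooperative else 0)

    def trust(self):
        # Weight recent observations more heavily: weight = rho ** age.
        a, b = 1.0, 1.0                  # Beta(1, 1) uniform prior
        for age, o in enumerate(reversed(self.obs)):
            w = self.rho ** age
            a += w * o
            b += w * (1 - o)
        return a / (a + b)               # expected value of Beta(a, b)

t = BetaTrust()
for o in [1, 1, 1, 0, 1]:
    t.record(o)
print(round(t.trust(), 3))  # → 0.688
```

A real model would additionally scale the misbehavior count by the anomaly attenuation factor before the Beta update; that factor is omitted here.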

2.
To address the facts that continuous action recognition has received comparatively little attention in existing action recognition research and that single algorithms perform poorly on continuous actions, a method is proposed that builds on per-action models and combines a sliding-window method with dynamic programming to segment and recognize continuous actions. First, each individual action is modeled with DBN-HMM, a combination of a deep belief network and a hidden Markov model. Next, the log-likelihoods of the trained action models and a sliding-window method are used to score the continuous stream and detect initial segmentation points. Dynamic programming is then applied to optimize the segmentation-point positions and recognize the individual actions. Segmentation and recognition tests on the public MSR Action3D dataset show that sliding-window-based dynamic programming optimizes the selection of segmentation points and thereby improves recognition accuracy, making the method suitable for continuous action recognition.
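The dynamic-programming refinement of segmentation points can be sketched on a 1-D score sequence: choose cut positions that minimize the summed per-segment cost. The within-segment variance cost below is a stand-in for the paper's DBN-HMM log-likelihood scores; the function names and toy data are assumptions.

```python
def seg_cost(x, i, j):
    """Cost of treating x[i:j] as one segment: within-segment variance (stand-in)."""
    seg = x[i:j]
    m = sum(seg) / len(seg)
    return sum((v - m) ** 2 for v in seg)

def dp_segment(x, k):
    """Optimal split of x into k segments; returns the k-1 interior cut indices."""
    n = len(x)
    INF = float("inf")
    cost = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    cost[0][0] = 0.0
    for s in range(1, k + 1):
        for j in range(s, n + 1):
            for i in range(s - 1, j):
                c = cost[s - 1][i] + seg_cost(x, i, j)
                if c < cost[s][j]:
                    cost[s][j], back[s][j] = c, i
    cuts, j = [], n
    for s in range(k, 0, -1):   # backtrack the chosen boundaries
        j = back[s][j]
        cuts.append(j)
    return sorted(cuts)[1:]     # drop the leading 0

x = [0, 0, 0, 5, 5, 5, 9, 9, 9]
print(dp_segment(x, 3))         # → [3, 6]
```

In the paper's setting, `seg_cost` would be the negated log-likelihood of the best-matching action model for the candidate segment, with initial cuts seeded by the sliding-window scores.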

3.
Motion segmentation refers to the problem of separating the objects in a video sequence according to their motion. It is a fundamental problem of computer vision, since various systems focusing on the analysis of dynamic scenes include motion segmentation algorithms. In this paper we present a novel approach in which a video shot is temporally divided into successive, overlapping windows and motion segmentation is performed on each window. This attribute renders the algorithm suitable even for long video sequences. In the last stage of the algorithm, the segmentation results for every window are aggregated into a final segmentation. The presented algorithm can effectively handle asynchronous trajectories in each window, even when they have no temporal intersection. The evaluation of the proposed algorithm on the Berkeley motion segmentation benchmark demonstrates its scalability and accuracy compared to the state of the art.

4.
Dynamic data streams are characterized by large data volume, rapid change, high random-access cost, and the difficulty of storing detailed records, so mining them places heavy demands on computation and storage. Targeting these characteristics, a Bayesian classification algorithm for dynamic data streams based on bootstrap sampling is designed. The algorithm analyzes the stream with a sliding-window model, treating the data in each window as the basic processing unit. Bootstrap sampling is used to prune and optimize the attributes of the data to be classified, resolving multicollinearity among attributes. Exploiting the characteristics of the Bayesian approach, a dynamic incremental storage tree is used to store the dynamic sample stream, achieving distortion-free static finite storage of an unbounded stream and thus addressing the biggest difficulty of stream mining: data storage. The optimized data are then classified with an all-Bayes classifier and a k-Bayes classifier, both updated in real time according to the properties of the stream. The algorithm effectively overcomes the attribute-independence assumption of Bayesian classification and the restriction of traditional Bayesian classifiers to static data. Experimental tests show that bootstrap-based Bayesian classification achieves high timeliness and accuracy.

5.
The common paradigm employed for object detection is the sliding window (SW) search. This approach generates grid-distributed patches at all possible positions and sizes, each evaluated by a binary classifier; the tradeoff between computational burden and detection accuracy is the critical weakness of sliding windows, and several methods, such as adding complementary features, have been proposed to speed up the search. We propose a paradigm that differs from any previous approach in that it casts object detection as a statistical search, using Monte Carlo sampling to estimate the likelihood density function with Gaussian kernels. The estimation relies on a multistage strategy in which the proposal distribution is progressively refined by taking into account the feedback of the classifiers. The method can easily be plugged into a Bayesian-recursive framework to exploit the temporal coherency of the target objects in videos. Several tests on pedestrian and face detection, on both images and videos, with different types of classifiers (cascades of boosted classifiers, soft cascades, and SVMs) and features (covariance matrices, Haar-like features, integral channel features, and histograms of oriented gradients) demonstrate that the proposed method provides higher detection rates and accuracy as well as a lower computational burden than sliding-window detection.
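The multistage refinement can be sketched as iteratively sampling candidate locations from a proposal, scoring them with a classifier, and refitting the proposal around high-scoring samples. The toy score function below stands in for a real boosted cascade or SVM, and all parameters (stage count, sample count, refinement schedule) are illustrative assumptions.

```python
import random, math

def classifier_score(x, y, obj=(40.0, 60.0)):
    """Toy detector response, peaked at a hidden object location."""
    return math.exp(-((x - obj[0]) ** 2 + (y - obj[1]) ** 2) / 200.0)

def mc_search(stages=4, n=200, seed=0):
    rng = random.Random(seed)
    mean, std = (50.0, 50.0), (30.0, 30.0)   # broad initial Gaussian proposal
    for _ in range(stages):
        samples = [(rng.gauss(mean[0], std[0]), rng.gauss(mean[1], std[1]))
                   for _ in range(n)]
        scored = sorted(samples, key=lambda s: classifier_score(*s), reverse=True)
        top = scored[:n // 10]               # keep the best 10% (classifier feedback)
        mean = (sum(s[0] for s in top) / len(top),
                sum(s[1] for s in top) / len(top))
        std = (max(1.0, std[0] * 0.5), max(1.0, std[1] * 0.5))  # shrink the proposal
    return mean

print(mc_search())  # converges near the hidden object at (40, 60)
```

Against an exhaustive sliding-window scan, far fewer classifier evaluations are spent on low-likelihood regions, which is the efficiency argument the abstract makes.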

6.
In this paper, we propose three divide-and-conquer approaches for Bayesian information criterion (BIC)-based speaker segmentation. The approaches detect speaker changes by recursively partitioning a large analysis window into two sub-windows and recursively verifying the merging of two adjacent audio segments using ΔBIC, a widely adopted distance measure between two audio segments. We compare our approaches with three popular distance-based approaches, namely Chen and Gopalakrishnan's window-growing approach, Siegler et al.'s fixed-size sliding-window approach, and Delacourt and Wellekens's DISTBIC approach, by performing computational cost analysis and conducting speaker change detection experiments on two broadcast news data sets. The results show that the proposed approaches are more efficient and achieve higher segmentation accuracy than the compared distance-based approaches. In addition, we apply the segmentation approaches discussed in this paper to the speaker diarization task; the experimental results show that a more effective segmentation approach leads to better diarization accuracy.
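A common form of the ΔBIC measure mentioned above models each segment, and their union, as a full-covariance Gaussian; a positive value favors the two-model hypothesis, i.e. a change point at the boundary. This is a generic sketch, not the authors' implementation: the penalty weight `lam` and the feature dimensionality are illustrative.

```python
import numpy as np

def delta_bic(X, Y, lam=1.0):
    """ΔBIC between feature matrices X (n1 x d) and Y (n2 x d)."""
    Z = np.vstack([X, Y])
    n1, n2, n = len(X), len(Y), len(X) + len(Y)
    d = Z.shape[1]
    logdet = lambda S: np.linalg.slogdet(S)[1]
    # Penalty: extra parameters of the two-model hypothesis (mean + covariance).
    penalty = 0.5 * lam * (d + 0.5 * d * (d + 1)) * np.log(n)
    return (0.5 * n * logdet(np.cov(Z.T))
            - 0.5 * n1 * logdet(np.cov(X.T))
            - 0.5 * n2 * logdet(np.cov(Y.T))
            - penalty)

rng = np.random.default_rng(0)
same = delta_bic(rng.normal(0, 1, (200, 4)), rng.normal(0, 1, (200, 4)))
diff = delta_bic(rng.normal(0, 1, (200, 4)), rng.normal(5, 1, (200, 4)))
print(same < 0 < diff)  # segments from one source score negative, from two positive
```

The divide-and-conquer approaches in the paper reuse such ΔBIC evaluations recursively on sub-windows instead of growing or sliding a single window.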

7.
The symbolic aggregate approximation (SAX) representation is an important way to extract features from time series. However, traditional SAX has several shortcomings: it averages the number of segments, treats all partition intervals equally, and cannot accurately reflect abrupt changes in non-stationary series. To address this, a new time-series SAX model is built by introducing local mean decomposition (LMD) and a segmentation algorithm based on an improved wavelet entropy. The model denoises the original series with LMD, determines the number of segments with a sliding-window threshold method, represents the result symbolically with SAX, and tests classification performance with a KNN classifier. Empirical experiments on this improved model show that it effectively extracts the informative features of a series, fits the data well, achieves dimensionality reduction, and, most importantly, improves the accuracy of KNN classification under the SAX representation.
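Plain SAX symbolization (z-normalization, piecewise aggregate approximation, then Gaussian-quantile breakpoints), which the model above builds on, can be sketched as follows. The LMD denoising and sliding-window segment-count selection from the paper are not reproduced; the breakpoint table and alphabet size are standard illustrative choices.

```python
import numpy as np

# Breakpoints that split N(0, 1) into equiprobable regions, by alphabet size.
BREAKPOINTS = {3: [-0.43, 0.43], 4: [-0.67, 0.0, 0.67]}

def sax(series, segments, alphabet=4):
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()                 # z-normalize
    paa = x.reshape(segments, -1).mean(axis=1)   # piecewise aggregate means
    cuts = BREAKPOINTS[alphabet]
    # Map each PAA mean to a letter by which breakpoint interval it falls in.
    return "".join(chr(ord("a") + int(np.searchsorted(cuts, v))) for v in paa)

print(sax([1, 2, 3, 4, 5, 6, 7, 8], segments=4))  # → 'abcd'
```

This sketch assumes the series length is divisible by the segment count; the paper's sliding-window threshold method instead chooses the segmentation adaptively.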

8.
A combined 2D/3D approach is presented that allows for robust tracking of moving people and recognition of actions. It is assumed that the system observes multiple moving objects via a single, uncalibrated video camera. Low-level features are often insufficient for detection, segmentation, and tracking of non-rigid moving objects. Therefore, an improved mechanism is proposed that integrates low-level (image processing), mid-level (recursive 3D trajectory estimation), and high-level (action recognition) processes. A novel extended Kalman filter formulation is used to estimate the relative 3D motion trajectories up to a scale factor. The recursive estimation process provides a prediction and error measure that is exploited in the higher-level stages of action recognition. Conversely, the higher-level mechanisms provide feedback that allows the system to reliably segment and maintain the tracking of moving objects before, during, and after occlusion. Heading-guided recognition (HGR) is proposed as an efficient method for adaptive classification of activity. The HGR approach is demonstrated using "motion history images" that are then recognized via a mixture-of-Gaussians classifier. The system is tested in recognizing various dynamic human outdoor activities: running, walking, roller blading, and cycling. In addition, experiments with real and synthetic data sets are used to evaluate the stability of the trajectory estimator with respect to noise.

9.
10.
钟忺, 杨光, 卢炎生. 《计算机科学》 (Computer Science), 2016, 43(6): 289-293
With the development of multimedia technology, multimedia information is increasingly abundant in work and daily life, and quickly and effectively retrieving useful information from massive video collections has become a pressing problem. To address it, a key-frame extraction method is proposed based on dual-threshold sliding-window sub-shot segmentation and complete connected graphs. The method first applies a dual-threshold shot segmentation algorithm, using a dual-threshold sliding window to detect abrupt and gradual shot boundaries and thereby partition the video into shots. A sliding-window-based sub-shot segmentation algorithm then places a window over the frame sequence and uses frame differences within the window to further divide each shot into sub-shots. Finally, a key-frame extraction algorithm based on the sub-shot segmentation extracts key frames by processing a complete connected graph whose vertices are frames and whose edge weights are frame differences. Experimental results show that, compared with other methods, the proposed method achieves higher average precision with fewer key frames on average, and extracts video key frames well.

11.
The Bayesian Information Criterion (BIC) is a widely adopted method for audio segmentation and has inspired a number of dominant algorithms for this application. At present, however, the literature lacks analytical and experimental studies of these algorithms; this paper partially covers that gap. Typically, BIC is applied within a sliding variable-size analysis window in which single changes in the nature of the audio are locally searched for. Three different implementations of the algorithm are described and compared: (i) the first keeps updated a pair of sums, that of the input vectors and that of the squared input vectors, in order to save computations when estimating covariance matrices on partially shared data; (ii) the second, recently proposed in the literature, encodes the input signal with cumulative statistics for an efficient estimation of covariance matrices; (iii) the third is a novel approach characterized by encoding the input stream with the cumulative pair of sums of the first approach. Furthermore, a dynamic programming algorithm is presented that, within the BIC model, finds a globally optimal segmentation of the input audio stream. All algorithms are analyzed in detail from the viewpoint of computational cost, experimentally evaluated on appropriate tasks, and compared.

12.
Audio segmentation of broadcast speech   Cited: 1 (self-citations: 2, others: 1)
The broadcast news segmentation system in this paper consists of three parts: segmentation, classification, and clustering. The segmentation stage uses a proposed algorithm that detects acoustic change points in a continuous audio signal by examining the trend of entropy change, thereby separating audio segments of different natures. Unlike traditional change-point detection methods that require a threshold, it detects acoustic change points by examining, for every possible split point within a window of fixed length, the trend in the entropy of the two signal segments produced by the split, which avoids segmentation errors caused by a poorly chosen threshold. The classification stage uses a conventional Gaussian classifier based on Gaussian mixture models (GMM), and the clustering stage uses a speaker clustering algorithm based on vector quantization (VQ). Applied to three 30-minute news broadcasts, the system successfully segmented the continuous audio, removed all background music, and grouped speech belonging to the same speaker into one cluster with high precision, laying a good foundation for the classification and recognition of broadcast speech.

13.
We describe a method for player detection in field sports with a fixed camera setup based on a new player feature extraction strategy. The proposed method detects players in static images with a sliding window technique. First, we compute a binary edge image, and then the detector window is shifted over the edge regions. Given a set of binary edges in a sliding window, we introduce and solve a particular diffusion equation to generate a shape information image. The proposed diffusion to generate a shape information image is the key stage and the main theoretical contribution of our new algorithm. It removes the appearance variations of an object while preserving the shape information. It also enables the use of polar and Fourier transforms in the next stage to achieve scale- and rotation-invariant feature extraction. A support vector machine classifier is used to assign either the player or non-player class inside a detector window. We evaluate our approach on three different field hockey datasets. In general, the results show that the proposed feature extraction is effective and competitive with state-of-the-art methods.

14.
This paper presents a wavelet-based texture segmentation method using multilayer perceptron (MLP) networks and Markov random fields (MRF) in a multi-scale Bayesian framework. Inputs and outputs of the MLP networks are constructed to estimate a posterior probability. The multi-scale features produced by multi-level wavelet decompositions of textured images are classified at each scale by maximum a posteriori (MAP) classification and the posterior probabilities from the MLP networks. An MRF model is used to model the prior distribution of each texture class, and a factor, which fuses the classification information across scales and acts as a guide for the labeling decision, is incorporated into the MAP classification at each scale. By fusing the multi-scale MAP classifications sequentially from coarse to fine scales, our proposed method obtains the final, improved segmentation result at the finest scale. In this fusion process, the MRF model serves as the smoothness constraint and the Gibbs sampler acts as the MAP classifier. Our texture segmentation method was applied to the segmentation of gray-level textured images. The proposed method shows better performance than texture segmentation using the hidden Markov trees (HMT) model and the HMTseg algorithm, a multi-scale Bayesian image segmentation algorithm.

15.
This article addresses the problem of moving object detection by combining two kinds of segmentation schemes: temporal and spatial. It has been found that a global thresholding approach for temporal segmentation, where the threshold value is obtained from the histogram of the difference image of two frames, does not produce good results for moving object detection. This is because pixels in the lower end of the histogram are not identified as changed pixels even though they actually correspond to changed regions, which affects object/background classification. In this article, we propose a local histogram thresholding scheme that segments the difference image by dividing it into a number of small non-overlapping regions/windows and thresholding each window separately. The window/block size is determined by measuring its entropy content. The segmented regions from each window are combined to form the entire segmented image. This thresholded difference image is called the change detection mask (CDM) and represents the changed regions corresponding to the moving objects in the given image frame. The difference image is generated from the label information of the pixels in the spatially segmented output of two image frames. We use a Markov Random Field (MRF) model for image modeling, and the maximum a posteriori probability (MAP) estimation for spatial segmentation is done by a combination of simulated annealing (SA) and iterated conditional mode (ICM) algorithms. It has been observed that the entropy-based adaptive window selection scheme yields better results for moving object detection, with less object/background (mis)classification. The effectiveness of the proposed scheme is successfully tested on three video sequences.
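The local-thresholding idea can be sketched by splitting the difference image into non-overlapping windows and thresholding each separately. Per-window Otsu thresholding and a fixed 8x8 window below are stand-ins: the article selects the window size by its entropy content and builds the difference image from spatial-segmentation labels.

```python
import numpy as np

def otsu(block):
    """Otsu threshold of one window of an 8-bit difference image (stand-in)."""
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = hist[:t].sum(), hist[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:t] * np.arange(t)).sum() / w0
        m1 = (hist[t:] * np.arange(t, 256)).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def local_cdm(diff, win=8):
    """Change detection mask from per-window thresholds of the difference image."""
    mask = np.zeros_like(diff, dtype=np.uint8)
    for i in range(0, diff.shape[0], win):
        for j in range(0, diff.shape[1], win):
            block = diff[i:i+win, j:j+win]
            mask[i:i+win, j:j+win] = (block >= otsu(block)) * 255
    return mask

diff = np.zeros((16, 16)); diff[4:12, 4:12] = 180   # synthetic change blob
print(int(local_cdm(diff).sum() // 255))            # → 64 changed pixels
```

Because each window is thresholded against its own histogram, low-magnitude changes that a global threshold would miss can still be marked, which is the article's motivation for local thresholding.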

16.
This paper presents a method for designing semi-supervised classifiers trained on labeled and unlabeled samples. We focus on probabilistic semi-supervised classifier design for multi-class, single-labeled classification problems, and propose a hybrid approach that takes advantage of both generative and discriminative approaches. In our approach, we first consider a generative model trained using labeled samples and introduce a bias correction model, where these models belong to the same model family but have different parameters. Then, we construct a hybrid classifier by combining these models based on the maximum entropy principle. To enable us to apply our hybrid approach to text classification problems, we employed naive Bayes models as the generative and bias correction models. Our experimental results on four text data sets confirmed that the generalization ability of our hybrid classifier was much improved by using a large number of unlabeled samples for training when there were too few labeled samples to obtain good performance. We also confirmed that our hybrid approach significantly outperformed generative and discriminative approaches when the performance of those approaches was comparable. Moreover, we examined the performance of our hybrid classifier when the labeled and unlabeled data distributions were different.

17.
Writer identification from musical score documents is a challenging task due to the inherent problem of musical symbols overlapping with staff lines. Most existing work on writer identification in musical score documents relies on a pre-processing stage of staff-line removal. In this paper we propose a novel writer identification framework for musical score documents that does not remove staff lines. In our approach, a Hidden Markov Model (HMM) is used to model the writing style of each writer with the staff lines left in place. Sliding-window features are extracted from musical score lines and used to build writer-specific HMM models. Given a query musical sheet, each writer-specific model returns a per-line confidence as a log-likelihood score, and a page-level log-likelihood score is computed as a weighted combination of the scores of the page's line images. A novel factor-analysis-based feature selection technique is applied to the sliding-window features to reduce the noise introduced by staff lines, which improves writer identification performance. Our framework also includes a novel HMM-based score-line detection approach for musical sheets. Experiments on the CVC-MUSCIMA data set show that the proposed approach is effective for score-line detection and writer identification without staff-line removal. To give an idea of the computational cost of our method, a detailed analysis of execution time is also provided.

18.
Image segmentation plays an important role in multimedia, image processing, and computer vision. A two-region image segmentation method based on image segmentation entropy is proposed. First, based on the properties of entropy, that a random variable with larger entropy carries more information and that a single-region image carries less information (entropy), an image segmentation entropy (ISE) measure is introduced to quantify the accuracy of a two-region segmentation, turning the two-region segmentation problem into an ISE minimization problem. An iterative graph cut algorithm is then used to find an approximate solution to the ISE minimization problem and produce the two-region segmentation. Experimental results show that the ISE-based two-region segmentation method is feasible and effective.

19.
This paper presents a comparative study of two machine learning techniques for recognizing handwritten Arabic words, evaluating hidden Markov models (HMMs) and dynamic Bayesian networks (DBNs). The proposed work is divided into three stages: preprocessing, feature extraction, and classification. Preprocessing includes baseline estimation, normalization, and segmentation. In the second stage, features are extracted from each normalized word; a set of new features for handwritten Arabic words is proposed, based on a sliding-window approach moving across the mirrored word image. The third stage performs classification and recognition, applying machine learning with HMMs and DBNs. To validate the techniques, extensive experiments were conducted using the IFN/ENIT database, which contains 32,492 Arabic words. Experimental results and quantitative evaluations showed that the HMM outperforms the DBN in terms of higher recognition rate and lower complexity.

20.
To address the problem that tracking algorithms based on holistic appearance models easily lose the target in complex scenes, a block-based tracking algorithm with multiple cooperating models is proposed. The target appearance model fuses a generative model based on locally sensitive histograms with a discriminative model based on superpixel segmentation, and the illumination-invariant intensity features of the locally sensitive histogram are extracted to resist illumination changes. An adaptive block-partitioning strategy for the target model is introduced to compensate for the lack of an effective occlusion-handling mechanism in the locally sensitive histogram algorithm, improving robustness to occlusion. Relative entropy and mean clustering are used to measure each block's local-difference confidence and target/background confidence, and a dual-weight constraint mechanism and an asynchronous block update strategy are established; within a particle filter framework, blocks with high confidence are selected to localize the target. Experimental results show that the method achieves good tracking accuracy and stability in complex scenes.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号