Similar Documents (20 results)
1.
In this paper, we present a frame-patch matching based robust semi-blind video watermarking scheme using the KAZE feature. KAZE features are employed to match the feature points of a frame patch against those of all frames in the video, thereby detecting the embedding and extraction regions. In our method, the watermark is embedded in the Discrete Cosine Transform (DCT) domain of randomly generated blocks in the matched region. In the extraction process, we resynchronize the embedded region in the distorted video using KAZE feature matching; based on the matched KAZE feature points, the RST (rotation, scaling, translation) parameters are estimated and the watermark can be successfully extracted. Experimental results show that the proposed method is robust against geometric attacks, video-processing attacks, temporal attacks, and others.
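As an illustration of the DCT-domain embedding step described above, here is a minimal sketch of hiding one watermark bit in an 8×8 block. This is not the authors' exact scheme: the block size, the chosen mid-frequency coefficient, and the embedding strength are all assumptions.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (rows = frequencies).
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

C = dct_matrix(8)

def embed_bit(block, bit, coeff=(3, 2), strength=20.0):
    # Forward 2D DCT, force the sign of one mid-frequency coefficient,
    # then inverse DCT back to the pixel domain.
    D = C @ block @ C.T
    D[coeff] = strength if bit else -strength
    return C.T @ D @ C

def extract_bit(block, coeff=(3, 2)):
    # Blind extraction: only the coefficient's sign is needed.
    D = C @ block @ C.T
    return int(D[coeff] > 0)
```

Because the DCT matrix is orthonormal, the inverse transform is simply its transpose, and the embedded sign survives small pixel-domain perturbations up to roughly the chosen strength.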

2.
A double optimal projection method, involving projections for intra-cluster and inter-cluster dimensionality reduction, is proposed for video fingerprinting. The video is initially modeled as a graph with frames as its vertices in a high-dimensional space. A similarity measure that computes the weights of the edges is then proposed. Subsequently, the video frames are partitioned into different clusters based on the graph model. Double optimal projection is used to find the optimal mapping points in a low-dimensional space to reduce the video dimensions. Statistical and geometric fingerprints are generated to determine whether a query video is copied from one of the videos in the database. During matching, a query video is first roughly matched using the statistical fingerprint; further matching is then performed within the corresponding group using the geometric fingerprints. Experimental results show the good performance of the proposed video fingerprinting method in robustness and discrimination.

3.
Research on an electronic image stabilization algorithm based on feature point matching
崔昌浩, 王晓剑, 刘鑫. Laser & Infrared (激光与红外), 2015, 45(9): 1119-1122
In feature-point-based electronic image stabilization, the SIFT operator is computationally expensive while the Harris operator detects unstably. To address this, the Harris operator is used to extract feature points, which are then described with SIFT descriptors, seeking a balance between computational complexity and matching accuracy. The RANSAC criterion is added to the feature-point matching stage to improve matching reliability. Simulation experiments show that the proposed algorithm achieves good stabilization on jittery infrared video.
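A minimal sketch of the Harris corner response that the abstract builds on (the 3×3 smoothing window and k = 0.04 are conventional defaults, not values taken from the paper):

```python
import numpy as np

def harris_response(img, k=0.04):
    # Image gradients via central differences.
    Iy, Ix = np.gradient(img.astype(float))

    def box(a):
        # 3x3 box filter (edge-padded) to accumulate the structure tensor.
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    # Harris criterion: large positive at corners, negative along edges.
    return det - k * trace ** 2
```

Corners are then taken as local maxima of this response above a threshold; each surviving point would be given a SIFT-style descriptor before matching.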

4.
The detection of near-duplicate video clips (NDVCs) is an area of current research interest and intense development. Most NDVC detection methods represent video clips with a unique set of low-level visual features, typically describing color or texture information. However, low-level visual features are sensitive to transformations of the video content. Given the observation that transformations tend to preserve the semantic information conveyed by the video content, we propose a novel approach for identifying NDVCs, making use of both low-level visual features (that is, MPEG-7 visual features) and high-level semantic features (that is, 32 semantic concepts detected using trained classifiers). Experimental results obtained for the publicly available MUSCLE-VCD-2007 and TRECVID 2008 video sets show that bimodal fusion of visual and semantic features facilitates robust NDVC detection. In particular, the proposed method is able to identify NDVCs with a low missed detection rate (3% on average) and a low false alarm rate (2% on average). In addition, the combined use of visual and semantic features outperforms the separate use of either of them in terms of NDVC detection effectiveness. Further, we demonstrate that the effectiveness of the proposed method is on par with or better than that of three state-of-the-art NDVC detection methods making use of temporal ordinal measurement, features computed using the Scale-Invariant Feature Transform (SIFT), or bag-of-visual-words (BoVW). We also show that the influence of semantic concept detection effectiveness on NDVC detection effectiveness is limited, as long as the mean average precision (MAP) of the semantic concept detectors used is higher than 0.3. Finally, we illustrate that the computational complexity of our NDVC detection method is competitive with that of the three aforementioned NDVC detection methods.

5.
This paper proposes a video stabilization algorithm based on selected feature trajectories. First, feature points are extracted with an improved Harris corner detector, and foreground feature points are removed with K-means clustering. Then, spatial motion consistency of feature points between frames is used to reduce false matches, and temporal motion similarity enables long-term tracking, yielding valid feature trajectories. Finally, an objective function that jointly accounts for trajectory smoothness and video-quality degradation is built to compute a set of geometric transforms for the video sequence that smooth the trajectories and stabilize the video. Blank regions produced by image warping are eroded under the guidance of the optical flow between the defined region of the current frame and the reference frame, and pixels still left blank are filled by image stitching. Simulations show that the blank area of the stabilized video is only about 33% of that of Matsushita's method; the algorithm handles dynamic, complex scenes and multiple large moving foregrounds effectively and produces content-complete video, improving visual quality while reducing the time-consuming boundary-repair task.
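The paper's objective trades trajectory smoothness against quality degradation; a much-simplified stand-in for that idea is quadratic path smoothing of a 1D camera trajectory, which balances data fidelity against smoothness and has a closed-form solution (the weight lam and the quadratic form are assumptions, not the paper's objective):

```python
import numpy as np

def smooth_path(c, lam=10.0):
    # Minimize  sum_t (p_t - c_t)^2 + lam * sum_t (p_{t+1} - p_t)^2
    # by solving the (tridiagonal) normal equations  (I + lam*L) p = c.
    n = len(c)
    A = np.eye(n)
    for t in range(n - 1):
        A[t, t] += lam
        A[t + 1, t + 1] += lam
        A[t, t + 1] -= lam
        A[t + 1, t] -= lam
    return np.linalg.solve(A, np.asarray(c, float))
```

Larger lam yields a smoother path at the cost of deviating further from the original trajectory; per-frame warps would then map each input frame onto the smoothed path.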

6.
赵明富, 陈兵, 宋涛, 曹利波. Semiconductor Optoelectronics (半导体光电), 2019, 40(4): 539-545, 549
Image feature matching is a key step in visual odometry. To address the low matching accuracy of feature points in visual image sequences, an accurate and fast image feature matching algorithm is proposed that fuses pyramid optical flow with corner features. The algorithm first extracts feature points quickly with ORB (Oriented FAST and Rotated BRIEF), then exploits the tracking property of pyramid Lucas-Kanade optical flow, computing feature-point displacement vectors within local feature windows. To handle match alignment and feature loss, a K-nearest-neighbor radius search serves as a feature filter to remove ambiguous matches, and finally the RANSAC (Random Sample Consensus) algorithm removes redundant false matches to raise the matching rate. Across multiple experimental datasets, the algorithm's feature matching rate reaches 98%; compared with conventional ORB feature matching, it significantly improves both real-time performance and matching accuracy.
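A single-level sketch of the Lucas-Kanade step that pyramid tracking repeats at each scale (the window size and test point are illustrative; the paper uses the pyramid variant with ORB keypoints):

```python
import numpy as np

def lk_flow(I0, I1, x, y, win=7):
    # One Lucas-Kanade least-squares step around pixel (x, y):
    # solve  [Ix Iy] d = -It  over a local window.
    r = win // 2
    Iy, Ix = np.gradient(I0.astype(float))
    It = I1.astype(float) - I0.astype(float)
    sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d  # estimated (dx, dy) displacement
```

In a full pyramid implementation, this step runs coarse-to-fine, with each level's estimate warping the window for the next finer level.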

7.
Content-based image retrieval (CBIR) has been an active research topic over the last decade. As one of the promising approaches, salient-point-based image retrieval has attracted many researchers. However, related work is usually very time consuming, and the salient points may not represent the most interesting subset of points for image indexing. Building on a fast, high-performance salient point detector and salient point expansion, a novel content-based image retrieval method using a local visual attention feature is proposed in this paper. First, salient image points are extracted using the fast SURF (Speeded-Up Robust Features) detector. Then, the visually significant image points around the salient points are obtained by salient point expansion. Finally, the local visual attention features of the visually significant image points, including a weighted color histogram and spatial distribution entropy, are extracted, and the similarity between color images is computed using these features. Experimental results, including comparisons with state-of-the-art retrieval systems, demonstrate the effectiveness of our proposal.

8.
Thirty short videos covering people, landscapes, equipment, and other scenes were selected and recorded on nine television brands in a TV showroom using three capture devices (iPhone, VIVO, and HUAWEI). To address the displacement and deformation introduced during recording, a processing pipeline combining edge detection with an improved SIFT algorithm is adopted. Edge detection first locates the display's frame boundary, and frame-by-frame processing enforces spatial consistency of the recording; scale-invariant feature transform matching then aligns the start and end frames of the video to remove temporal redundancy. Exploiting the linearity of the recorded display edges, a mapping-based edge-detection algorithm is used, and, to shorten the overly long processing time, Manhattan distance measures the similarity between the reference image and the image to be matched, lowering the algorithm's complexity. Finally, the processed recording is aligned with the original video in both space and time, and quality is assessed with SSIM.

9.
王淦, 宋利, 张文军. Video Engineering (电视技术), 2014, 38(7): 11-14, 5
Video quality assessment methods often require reasonable assumptions about the human visual system, among which the attention model is an important factor. This paper proposes a video quality assessment method guided by an attention model: saliency-region information is incorporated into per-frame quality assessment to better match human visual characteristics, and motion information in the video is also taken into account, improving the performance of objective quality assessment to a certain extent.

10.
韩峰, 李晓斌. Video Engineering (电视技术), 2015, 39(23): 22-25
With advances in robotics, research on robot vision has attracted increasing attention. Binocular (stereo) vision is the most widely used configuration in robot vision systems. For locating objects with binocular vision, this paper adopts the SURF algorithm, which performs well in many respects, to extract and match feature points in the images. Because objective factors cause false matches during SURF matching, the algorithm is improved by adding RANSAC to reject mismatches. Experimental results show that the improved SURF algorithm greatly increases the accuracy of binocular positioning.
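A sketch of the RANSAC idea used above to reject mismatches, shown with a pure-translation motion model for brevity (a real stereo or registration pipeline would fit a fundamental matrix or homography; the threshold and iteration count are assumptions):

```python
import numpy as np

def ransac_translation(src, dst, iters=200, thresh=2.0, rng=None):
    # Fit a pure-translation model between matched point sets, keeping
    # the random hypothesis with the most inliers, then refit on them.
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(src), bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                       # 1-point hypothesis
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers
```

Mismatched pairs land far from the consensus translation and are flagged as outliers, which is exactly the filtering role RANSAC plays after SURF matching.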

11.
A video signature is a set of feature vectors that compactly represents and uniquely characterizes one video clip from another for fast matching. To find a short duplicated region, the video signature must be robust against common video modifications and have high discriminability, and the matching method must be fast and reliably locate the duplicated region. In this paper, a frame-based video signature that uses spatial information, together with a two-stage matching method, is presented. The proposed signature is pair-wise independent and robust against common video modifications. The proposed two-stage matching method is fast and works very well at finding locations. In addition, the proposed matching structure and strategy can handle the case in which a part of the query video matches a part of the target video. The proposed method is verified on video modified under the VCE7 experimental conditions found in MPEG-7. It achieves a robustness of 88.7% under an independence condition of 5 parts per million, with over 1,000 clips matched per second.

12.
刘雪琴. Video Engineering (电视技术), 2014, 38(5): 34-37
Target tracking is a very important component of video detection technology, and a fast feature-point-based tracking algorithm is proposed for it. The method avoids the difficult process of target segmentation. Two frame differences jointly determine the corner-selection region, and suitable corners are extracted with the Moravec algorithm; a specially designed structured template covering non-smooth regions obtains better matching points; and a predicted point narrows the search range, reducing computational and time complexity. Experiments show that the algorithm achieves fast, real-time target tracking with high accuracy and good robustness across different scenes.
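The double frame-difference step can be sketched as follows. The threshold is an assumption; note that intersecting two absolute difference images localizes the moving object in the middle frame cleanly when its displacement exceeds its size, while suppressing noise that appears in only one difference image:

```python
import numpy as np

def motion_mask(f0, f1, f2, thresh=15):
    # AND of two consecutive absolute frame differences: a pixel is
    # "moving" only if it changed both from f0 to f1 and from f1 to f2.
    d1 = np.abs(f1.astype(int) - f0.astype(int)) > thresh
    d2 = np.abs(f2.astype(int) - f1.astype(int)) > thresh
    return d1 & d2
```

Corner extraction would then be restricted to the True region of this mask, which is how the abstract's "corner selection region" is determined.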

13.
AndAR is a project for developing Mobile Augmented Reality (MAR) applications on the Android platform. Its existing registration technologies are still based on markers and assume that the target objects appear in every frame of every video. Practical applications increasingly favor registration based on natural features, but a major limitation of such registration is that much of it relies on low-level visual features. This paper improves AndAR by introducing planar natural features. The key to registration based on planar natural features is obtaining the homography matrix, which can be calculated from four or more pairs of matching feature points; accordingly, a 3D registration method based on ORB and optical flow is proposed. ORB is used for feature-point matching, and RANSAC selects good matches, called inliers, from all matches. When the inlier ratio in a video frame exceeds 50%, optical-flow-based inlier tracking is used to compute the homography in subsequent frames; when the number of successfully tracked inliers falls below 4, the method returns to ORB feature-point matching. The results show that the improved AndAR can augment reality based not only on markers but also on planar natural features in near real time, and that the hybrid approach both improves speed and extends the usable tracking range.

14.
江泽涛, 王琦. Laser & Infrared (激光与红外), 2018, 48(6): 782-788
To address the difficulty of extracting and matching feature points between infrared and visible images of the same scene, a registration algorithm based on a diffusion equation and a phase-congruency model is proposed. First, a diffusion equation with faster convergence is derived and used to denoise the infrared image; next, an improved phase-congruency model extracts visually similar structures from the infrared and visible images; feature points are then detected on these similarity structures and given binary descriptors; finally, Hamming distance is used to match the feature points. Experimental results show that the algorithm effectively suppresses the differences between infrared and visible images and achieves automatic registration while reducing computational cost.
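The final matching step, Hamming distance between binary descriptors, can be sketched with descriptors packed into uint8 bytes (the distance threshold is an assumption):

```python
import numpy as np

def hamming_match(desc1, desc2, max_dist=20):
    # Brute-force nearest neighbor under Hamming distance:
    # XOR the packed descriptors, then count differing bits.
    x = desc1[:, None, :] ^ desc2[None, :, :]
    dist = np.unpackbits(x, axis=2).sum(axis=2)
    nn = dist.argmin(axis=1)
    keep = dist[np.arange(len(desc1)), nn] <= max_dist
    return [(i, int(nn[i])) for i in range(len(desc1)) if keep[i]]
```

Hamming distance on packed binary descriptors is why binary descriptions (as in the abstract) are cheap to match: XOR plus a popcount replaces floating-point distance computations.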

15.
In this paper, we propose a novel modality fusion method designed to combine spatial and temporal fingerprint information to improve video copy detection performance. Most previously developed methods are limited to pre-specified weights for combining spatial and temporal modality information. Hence, previous approaches cannot adaptively adjust the significance of the temporal fingerprints according to the difference between the temporal variances of compared videos, leading to performance degradation in video copy detection. To overcome this limitation, the proposed method extracts two types of fingerprint information: (1) a spatial fingerprint consisting of the signs of DCT coefficients in local areas of a keyframe, and (2) a temporal fingerprint computing the temporal variances in local areas across consecutive keyframes. In addition, a so-called temporal strength measurement technique is developed to quantitatively represent the amount of temporal variance; it can be used adaptively to weight the significance of compared temporal fingerprints. The experimental results show that the proposed modality fusion method outperforms other state-of-the-art fusion methods and popular spatio-temporal fingerprints in video copy detection. Furthermore, the proposed method saves 39.0%, 25.1%, and 46.1% of the time needed to perform video fingerprint matching, without a significant loss of detection accuracy, for our synthetic dataset, the TRECVID 2009 CCD Task, and MUSCLE-VCD 2007, respectively. This result indicates that our proposed method can be readily incorporated into real-life video copy detection systems.
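A simplified sketch of a temporal fingerprint as local temporal variances over a grid of keyframe cells, with a scalar temporal-strength measure (the grid size and the strength definition are assumptions, not the paper's exact formulas):

```python
import numpy as np

def temporal_fingerprint(frames, grid=4):
    # Per-cell variance of mean intensity across consecutive keyframes.
    frames = np.asarray(frames, float)
    t, h, w = frames.shape
    cells = frames.reshape(t, grid, h // grid, grid, w // grid).mean(axis=(2, 4))
    return cells.var(axis=0).ravel()   # one variance per grid cell

def temporal_strength(fp):
    # Scalar summary of how much the video varies over time; a fusion
    # scheme could weight the temporal modality by this value.
    return float(fp.mean())
```

A static clip yields near-zero temporal strength, so an adaptive fusion rule would down-weight its temporal fingerprint and rely on the spatial one instead.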

16.
With the rapid development of portable digital video equipment, such as camcorders, digital cameras, and smartphones, video stabilization techniques for camera de-shaking are in strong demand. Cutting-edge video stabilization techniques provide outstanding visual quality by utilizing 3D motion, whereas early video stabilization was based on 2D motion only. Recently, a content-preserving warping algorithm has been acknowledged as state-of-the-art thanks to its superior stabilization performance; however, its huge computational cost is a serious burden despite its excellent performance. Thus, we propose a fast video stabilization algorithm that significantly reduces computational complexity relative to the state-of-the-art while matching its stabilization performance. First, we estimate the 3D information of the feature points in each input frame and define a region of interest (ROI) based on the estimated 3D information. Next, if the number of feature points in the ROI is sufficient, we apply the proposed ROI-based pre-warping and content-preserving warping sequentially to the input frame; otherwise, conventional full-frame warping is applied. From intensive simulation results, we find that the proposed algorithm reduces computational complexity to 14% of that of the state-of-the-art method while keeping almost equivalent stabilization performance.

17.
This paper proposes a news video recognition method based on audio-visual template matching. During template construction, an audio template is extracted from the theme music in the news opening sequence, and a visual template is extracted from the extended face region in anchor shots; together these constitute the audio-visual template. During recognition, audio template matching is first applied to the broadcast video stream; the candidate time points that pass are then localized to the corresponding video shots, where the visual template is matched against the extended face region to identify anchor shots, completing news video recognition. Experimental results show that the method is computationally efficient, simple to operate, and of good practical value.

18.
Online video has nowadays become one of the top activities for users and is easy to access. Meanwhile, how to manage such a huge amount of video data and retrieve it efficiently has become a big issue. In this article, we propose a novel method for video abstraction based on fast clustering of regions of interest (ROIs). First, the key-frames in each shot are extracted using the average histogram algorithm. Second, saliency and edge maps are generated from each key-frame, and the key points for the visual attention model are determined from these two maps; meanwhile, to expand the regions surrounding the key points, several thresholds are calculated from the corresponding key-frame. Third, based on the key points and thresholds, regions of interest are expanded, yielding the main content of each frame. Finally, fast clustering is performed on the key-frames using their ROIs. The performance and effectiveness of the proposed video abstraction algorithm are demonstrated by several experimental results.
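The paper extracts key-frames with an average histogram algorithm; a simplified histogram-difference variant conveys the idea (the bin count and threshold are assumptions):

```python
import numpy as np

def keyframes_by_histogram(frames, bins=16, thresh=0.3):
    # Keep the first frame, then any frame whose normalized intensity
    # histogram differs from the last kept key-frame by more than
    # `thresh` in L1 distance.
    keys = [0]
    h_prev = np.histogram(frames[0], bins=bins, range=(0, 256))[0]
    h_prev = h_prev / h_prev.sum()
    for i in range(1, len(frames)):
        h = np.histogram(frames[i], bins=bins, range=(0, 256))[0]
        h = h / h.sum()
        if np.abs(h - h_prev).sum() > thresh:
            keys.append(i)
            h_prev = h
    return keys
```

Each selected key-frame would then go through the saliency/edge-map and ROI-expansion steps described above before clustering.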

19.
李慧鹏, 李科. Semiconductor Optoelectronics (半导体光电), 2020, 41(6): 865-869
To address the near-symmetric feature points encountered when matching electrical-connector pins and the large rotations between different images, a feature-point matching algorithm based on a six-point characteristic-number invariant is proposed. Feature points are divided into convex-hull points and interior points: hull matching exploits the invariance of the ordering of hull points under projective transformations, and interior points are matched via the similarity of feature vectors referenced to the hull points. Experimental results show that the algorithm matches pin feature points well and has a degree of robustness.

20.
Aiming at the low speed of the traditional scale-invariant feature transform (SIFT) matching algorithm, an improved matching algorithm is proposed in this paper. First, feature points are detected, and matching is accelerated by adding an epipolar constraint; then, from the matched feature points, the homography matrix is obtained by the least-squares method; finally, points in the left image are mapped into the right image through the homography, and a pair of matching points is retained if the distance between the mapped point and the matching point in the right image is smaller than a threshold, and discarded otherwise. Experimental results show that the improved algorithm reduces matching time by 73.3% while keeping all retained matches correct. In addition, the improved method is robust to rotation and translation.
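The least-squares homography step can be sketched with the direct linear transform (DLT), followed by the reprojection-distance filter the abstract describes (the 3-pixel threshold is illustrative):

```python
import numpy as np

def fit_homography(src, dst):
    # Direct linear transform: two linear equations per correspondence;
    # the least-squares solution is the SVD null vector of the stack.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def filter_matches(H, src, dst, thresh=3.0):
    # Map src through H and keep pairs with small reprojection error.
    p = np.c_[src, np.ones(len(src))] @ H.T
    proj = p[:, :2] / p[:, 2:]
    return np.linalg.norm(proj - dst, axis=1) < thresh
```

With at least four non-degenerate correspondences the homography is determined; mapping every left-image point through it and thresholding the distance to its right-image match is the retention rule described above.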
