Similar literature
Found 19 similar documents (search time: 201 ms).
1.
Deep learning has achieved good results in human action recognition, but the appearance and motion information of people in video is still not fully exploited. To exploit both the spatial and the temporal information in video for action recognition, a spatio-temporal two-stream video action recognition model is proposed. The model first uses two convolutional neural networks to extract spatial and temporal features from video clips, then fuses the two networks to obtain mid-level spatio-temporal features, and finally feeds these mid-level features into a 3D convolutional neural network to recognize the actions in the video. Action recognition experiments on the UCF101 and HMDB51 datasets show that the proposed two-stream 3D convolutional network model recognizes human actions in video effectively.
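A minimal PyTorch sketch of the two-stream scheme summarized above: a spatial branch over RGB frames, a temporal branch over optical flow, mid-level feature maps concatenated and fed to a small 3D-convolution head. Layer widths, the fusion point, and the classifier head are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class TwoStream3D(nn.Module):
    def __init__(self, num_classes=101):
        super().__init__()
        # Spatial stream: 2D convolutions applied per RGB frame.
        self.spatial = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Temporal stream: 2D convolutions over 2-channel optical flow per frame.
        self.temporal = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 3D head over the fused (concatenated) mid-level features.
        self.head = nn.Sequential(
            nn.Conv3d(64, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, rgb, flow):
        # rgb: (B, T, 3, H, W), flow: (B, T, 2, H, W)
        b, t = rgb.shape[:2]
        s = self.spatial(rgb.flatten(0, 1))       # (B*T, 32, H/2, W/2)
        m = self.temporal(flow.flatten(0, 1))     # (B*T, 32, H/2, W/2)
        fused = torch.cat([s, m], dim=1)          # (B*T, 64, H/2, W/2)
        fused = fused.view(b, t, 64, *fused.shape[-2:]).transpose(1, 2)  # (B, 64, T, H/2, W/2)
        return self.head(fused)

logits = TwoStream3D()(torch.randn(2, 16, 3, 64, 64), torch.randn(2, 16, 2, 64, 64))
```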

2.
Video action recognition is a fundamental problem in image processing and computer vision. Among deep-learning-based action recognition models, 2D convolution methods have few parameters but limited accuracy, while 3D convolution methods improve accuracy to some extent at the cost of many more parameters and much more computation. To reduce the parameter count and computational cost of 3D convolutional action recognition models while preserving accuracy, an action recognition algorithm with no temporal padding is proposed: when applying 3D convolution to the video, no extra data is padded along the temporal dimension, which preserves the integrity of the temporal information. To make full use of the limited temporal information, a network structure suited to this padding scheme is designed: 3D convolutions without temporal padding first extract spatio-temporal information, and a network reshaping step then converts the 3D convolutions into 2D convolutions for further feature extraction. Experiments show that the network has 10.385×10^6 parameters and reaches 60.28% accuracy on the UCF101 dataset without pretrained weights, showing clear advantages in both resource usage and accuracy over other 3D convolutional action recognition methods.
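A small sketch of the no-temporal-padding idea under stated assumptions: the 3D convolutions pad only the spatial dimensions (padding=(0, 1, 1)), so the temporal extent shrinks at every layer instead of being diluted with zero frames, and once the temporal axis is consumed the feature maps are handed to ordinary 2D convolutions. The depths and channel widths are illustrative, not the paper's network.

```python
import torch
import torch.nn as nn

clip = torch.randn(1, 3, 9, 112, 112)        # (B, C, T=9, H, W)

backbone3d = nn.Sequential(
    nn.Conv3d(3, 32, kernel_size=3, padding=(0, 1, 1)), nn.ReLU(),   # T: 9 -> 7
    nn.Conv3d(32, 64, kernel_size=3, padding=(0, 1, 1)), nn.ReLU(),  # T: 7 -> 5
    nn.Conv3d(64, 64, kernel_size=(5, 3, 3), padding=(0, 1, 1)), nn.ReLU(),  # T: 5 -> 1
)

feat3d = backbone3d(clip)                    # (1, 64, 1, 112, 112)
feat2d = feat3d.squeeze(2)                   # temporal axis consumed -> (1, 64, 112, 112)

head2d = nn.Sequential(                      # further feature extraction in 2D
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(128, 101),
)
logits = head2d(feat2d)                      # (1, 101)
```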

3.
A fast similar-video retrieval method   (total citations: 1; self-citations: 0; citations by others: 1)
曹政  卢宝丰  朱明 《信息与控制》2010,39(5):635-639
To address the two difficulties in similar-video retrieval, similarity measurement and fast search, this paper proposes a new method for fast similar-video retrieval. Starting from visual similarity, a compact video signature is computed statistically from the spatio-temporal distribution characteristics of the compressed video, and video similarity is measured by the distance between signatures. To support scalable computation, a retrieval method based on a cluster index table is proposed. Query tests on a large-scale database show that the similarity retrieval algorithm is fast and effective.

4.
The image feature representations learned by deep convolutional neural networks have a clear hierarchical structure: as the depth increases, the learned features become more abstract and more discriminative between classes. Exploiting this property, this paper proposes a deep Hamming-embedding hash coding scheme for image retrieval. A hidden layer is inserted at the end of a deep convolutional neural network, and the hash code of an image is obtained from the activation state of each unit in this layer. A Hamming-embedding loss, derived from the properties of the hash codes themselves, is further introduced to better preserve the similarity of the original data. Experiments on the CIFAR-10 and NUS-WIDE benchmark datasets show that the method improves image retrieval performance, particularly with short codes.
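A sketch of the latent hash-layer construction described above, under simplifying assumptions: a fully connected layer with a sigmoid is inserted before the classifier, and at retrieval time each unit's activation is thresholded to yield one bit of the binary code. The backbone, the 48-bit code length, and the 0.5 threshold are assumptions, and the paper's Hamming-embedding loss is not reproduced here.

```python
import torch
import torch.nn as nn

class HashNet(nn.Module):
    def __init__(self, feat_dim=512, code_bits=48, num_classes=10):
        super().__init__()
        # Stand-in backbone; any CNN feature extractor could be used here.
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU())
        self.hash_layer = nn.Sequential(nn.Linear(feat_dim, code_bits), nn.Sigmoid())
        self.classifier = nn.Linear(code_bits, num_classes)

    def forward(self, x):
        h = self.hash_layer(self.backbone(x))    # activations in (0, 1)
        return self.classifier(h), h

model = HashNet()
_, h = model(torch.randn(4, 3, 32, 32))
codes = (h > 0.5).int()                          # binary hash codes, shape (4, 48)
dist = (codes[0] != codes[1]).sum()              # Hamming distance between two images
```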

5.
Although the C3D 3D convolutional neural network proposed by Facebook achieves good video action recognition accuracy, there is still much room for improvement in speed, and the trained model is too large for mobile devices. Exploiting the fact that small convolution kernels reduce the number of parameters, this paper optimizes the existing network structure and proposes a new action recognition scheme: the 3×3×3 kernels commonly used in the original C3D network are decomposed into depthwise convolutions and pointwise (1×1×1) convolutions, and the network is trained and tested on the UCF101 and ActivityNet datasets. Compared with the original C3D network, the improved network raises accuracy by 2.4%, improves speed by 12.9%, and compresses the model to 25.8% of its original size.
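A sketch of the factorization described above: a dense 3×3×3 convolution is replaced by a depthwise 3×3×3 convolution (groups equal to the input channels) followed by a pointwise 1×1×1 convolution. Channel counts are illustrative; the parameter counts printed at the end show the kind of reduction this factorization gives.

```python
import torch
import torch.nn as nn

def separable_conv3d(in_ch, out_ch):
    return nn.Sequential(
        # Depthwise: one 3x3x3 filter per input channel.
        nn.Conv3d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch),
        # Pointwise: 1x1x1 convolution mixes channels.
        nn.Conv3d(in_ch, out_ch, kernel_size=1),
    )

dense = nn.Conv3d(64, 128, kernel_size=3, padding=1)
separable = separable_conv3d(64, 128)

n_dense = sum(p.numel() for p in dense.parameters())     # 64*128*27 + 128 = 221312
n_sep = sum(p.numel() for p in separable.parameters())    # 64*27 + 64 + 64*128 + 128 = 10112
print(n_dense, n_sep)   # the separable version uses roughly 1/20 of the parameters

x = torch.randn(1, 64, 8, 28, 28)
assert dense(x).shape == separable(x).shape               # same output shape
```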

6.
Video-based action recognition is widely applied in computer vision. To address the problems that current network models cannot effectively combine the spatial and temporal information in video data and do not consider fusing information across different scales, a multi-scale-input 3D-convolution fusion two-stream model, combining a two-stream network with 3D convolutional neural networks, is proposed. First, a 2D residual network and a multi-scale-input 3D convolution fusion network capture the spatial and temporal information in the video; then the results of the two networks are combined by decision-level addition, which strengthens the network's ability to extract spatio-temporal features; finally, different fusion strategies are applied to data of different scales in the multi-scale-input 3D convolution fusion network, improving the model's generalization across scales. Experiments show that the model achieves recognition accuracies of 90.5% on UCF-101 and 66.3% on HMDB-51; compared with other methods it attains higher recognition accuracy, demonstrating its superiority and robustness.
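A tiny sketch of the decision-level fusion step mentioned above: class scores from the 2D residual stream and from the multi-scale 3D stream are converted to probabilities and added before the final decision. The equal weighting of the two streams is an assumption.

```python
import torch
import torch.nn.functional as F

logits_2d = torch.randn(4, 101)   # scores from the 2D residual stream
logits_3d = torch.randn(4, 101)   # scores from the multi-scale 3D stream

fused = F.softmax(logits_2d, dim=1) + F.softmax(logits_3d, dim=1)
pred = fused.argmax(dim=1)        # fused class decision per clip
```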

7.
With the rapid development of the Internet and multimedia technology, online data has expanded from plain text to multiple media, including images, video, text, audio and 3D models, making cross-media retrieval a new trend in information retrieval. However, the "heterogeneity gap" means that data of different media have inconsistent representations whose similarity cannot be measured directly, so retrieval across multiple media faces great challenges. With the rise of deep learning, the nonlinear modeling capability of deep neural networks offers a way to break through the barriers of cross-media representation, but existing deep-learning-based cross-media retrieval methods generally consider only pairwise correlations between images and text and cannot handle retrieval across more media types. To address this, a cross-media deep fine-grained correlation learning method is proposed that supports retrieval across up to five media types (image, video, text, audio and 3D model). First, a cross-media recurrent neural network jointly models fine-grained information of up to five media types, fully mining the detailed information within each medium and its contextual correlations. Second, a cross-media joint correlation loss function combines distribution alignment and semantic alignment to mine intra-media and inter-media fine-grained correlations more accurately, while semantic category information strengthens the semantic discrimination of the correlation learning process and improves retrieval accuracy. Experimental comparisons with existing methods on two five-media cross-media datasets, PKU XMedia and PKU XMediaNet, demonstrate the effectiveness of the proposed method.

8.
With the arrival of the "Internet Plus" era, the number of online images has grown rapidly, and content-based image retrieval has attracted much attention. Traditional retrieval methods suffer from weak image representations and thus low retrieval efficiency, which is unsuitable for large-scale image retrieval. A new image retrieval algorithm based on convolutional neural networks is therefore proposed. A new end-to-end convolutional network architecture is designed that simultaneously learns probability-based semantic similarity and image feature similarity; principal component analysis (PCA) is introduced to reduce the dimensionality of the deep features while limiting the information loss; retrieval is then performed by computing the distance between the query image and the database images with a distance function. Experiments on the ImageNet-1000 and Oxford 5K datasets show that the method effectively strengthens the representational power of image features and improves retrieval performance over the compared methods.
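A minimal sketch of the retrieval stage described above, assuming the deep features have already been extracted by some CNN: PCA reduces the features to a lower dimension, and the query is ranked against the database by Euclidean distance. The 2048-dimensional features, the 128 retained components, and the random stand-in data are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
db_feats = rng.standard_normal((1000, 2048))   # stand-in for CNN features of the database
query_feat = rng.standard_normal((1, 2048))    # stand-in for the query image's CNN feature

pca = PCA(n_components=128)
db_low = pca.fit_transform(db_feats)           # fit on the database, reduce to 128-D
q_low = pca.transform(query_feat)

dists = np.linalg.norm(db_low - q_low, axis=1) # Euclidean distance to every database image
top10 = np.argsort(dists)[:10]                 # indices of the ten nearest images
```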

9.
Traditional 2D convolutional neural networks tend to lose temporally correlated features of the target when recognizing video, which lowers recognition accuracy. To address this, this paper adopts a 3D convolutional network as the basic framework, using 3D convolution kernels to extract spatio-temporal features from the video, and ensembles several 3D convolutional network models to recognize dynamic gestures. To speed up convergence and stabilize training, batch normalization (BN) is used to optimize the network, which shortens training time. Experiments show that the method recognizes dynamic gestures well, reaching 98.06% accuracy on the Sheffield Kinect Gesture (SKIG) dataset; the recognition rate is higher than when using RGB information alone, depth information alone, or a traditional 2D CNN, which verifies the feasibility and effectiveness of the method.
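A sketch of a 3D-convolution block with batch normalization and of a small ensemble that averages the predictions of several such networks, in the spirit of the description above. Network depth, channel widths, and the ensemble size of three are assumptions.

```python
import torch
import torch.nn as nn

def block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),          # BN stabilizes and speeds up training
        nn.ReLU(),
        nn.MaxPool3d(2),
    )

def make_model(num_classes=10):
    return nn.Sequential(
        block(3, 16), block(16, 32),
        nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        nn.Linear(32, num_classes),
    )

models = [make_model() for _ in range(3)]             # e.g. one per modality or initialization
clip = torch.randn(2, 3, 16, 64, 64)                  # (B, C, T, H, W)
probs = torch.stack([m(clip).softmax(dim=1) for m in models]).mean(dim=0)
pred = probs.argmax(dim=1)                            # ensemble decision
```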

10.
To address the low accuracy of existing automatic micro-expression recognition methods and the shortage of micro-expression samples, a micro-expression recognition method combining transfer learning with a separable 3D convolutional neural network (S3D CNN) is proposed. Optical flow is used to extract optical-flow feature-frame sequences from macro-expression and micro-expression video samples; the S3D CNN is pre-trained on the macro-expression flow sequences and then fine-tuned on the micro-expression flow sequences. The S3D CNN consists of 2D spatial convolution layers plus separable 3D convolution layers that add a 1D temporal convolution, giving it better learning capacity than a conventional 3D CNN while requiring fewer training parameters and less computation. On this basis, the model is trained with transfer learning, alleviating the over-fitting caused by the small number of micro-expression samples and improving learning efficiency. Experiments show a recognition accuracy of 67.58% on the CASME II micro-expression dataset, higher than state-of-the-art micro-expression recognition algorithms such as MagGA and C3DEvol.
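A sketch of the separable 3D convolution mentioned above: a 2D spatial convolution (1×3×3 kernel) followed by a 1D temporal convolution (3×1×1 kernel) in place of a full 3×3×3 kernel. The channel widths and the two-channel optical-flow input are assumptions.

```python
import torch
import torch.nn as nn

def s3d_block(in_ch, out_ch):
    return nn.Sequential(
        # 2D spatial convolution over each frame (kernel 1x3x3)
        nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
        nn.ReLU(),
        # 1D temporal convolution across frames (kernel 3x1x1)
        nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0)),
        nn.ReLU(),
    )

flow_clip = torch.randn(1, 2, 16, 112, 112)   # a 2-channel optical-flow feature-frame sequence
features = s3d_block(2, 32)(flow_clip)        # -> (1, 32, 16, 112, 112)
# For equal channel widths C, the factorized pair needs about 12*C^2 weights
# versus 27*C^2 for a dense 3x3x3 kernel, hence the smaller model.
```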

11.
Video clip retrieval is a hot research topic in the video field. To improve query efficiency, a high-dimensional index structure, the Vector-Approximation File (VA-File), is used to organize video sub-clips, and a new similarity model together with an efficient retrieval algorithm based on a constrained sliding window is used for video clip retrieval. The proposed sub-clip segmentation algorithm distinguishes the detailed actions of motion well, and the similarity model fully accounts for the visual similarity between corresponding sub-clips as well as their temporal order, making it effective for retrieving motion video. Experiments show that, for motion video clip retrieval, the approach achieves not only high query efficiency but also high recall and precision.

12.
Relevance feedback for video retrieval   (total citations: 4; self-citations: 0; citations by others: 4)
Content-based video retrieval is an important research area that has attracted the attention of many researchers. The most common query mode is retrieval by example video, but how to decide whether two videos are similar remains an open problem, which limits the applicability of retrieval systems. Moreover, because video content is complex, different users may attend to different aspects even of the same video during retrieval. It is therefore useful to accept user feedback: when users are not satisfied with the query results, the results can be refined to better reflect their needs. Building on characteristics of human visual perception, this paper introduces a model for measuring video similarity that evaluates similarity at multiple levels, such as shot and video, and from multiple visual perspectives, and on this basis proposes a relevance-feedback method that operates at multiple granularities, namely the shot level and the video level. The whole process is automatic and flexibly refines the retrieval results according to the user's feedback.

13.
Multimedia content has been growing quickly, and video retrieval is regarded as one of the most prominent problems in multimedia research. In order to retrieve a desirable video, users express their needs in terms of queries. Queries can be on object, motion, texture, color, audio, etc. Low-level representations of video are different from the higher-level concepts which a user associates with video. Therefore, querying based on semantics is more realistic and tangible for the end user. Comprehending the semantics of the query has opened new possibilities for video retrieval and for bridging the semantic gap. However, the problem is that the video needs to be manually annotated in order to support queries expressed in terms of semantic concepts. Annotating the semantic concepts which appear in video shots is a challenging and time-consuming task. Moreover, it is not possible to provide annotations for every concept in the real world. In this study, an integrated semantic-based approach for similarity computation is proposed to enhance retrieval effectiveness in concept-based video retrieval. The proposed method integrates knowledge-based and corpus-based semantic word similarity measures in order to retrieve video shots for concepts whose annotations are not available to the system. The TRECVID 2005 dataset is used for evaluation, and the results of the proposed method are compared against the individual knowledge-based and corpus-based semantic word similarity measures utilized in previous studies in the same domain. The superiority of the integrated similarity method is shown and evaluated in terms of Mean Average Precision (MAP).
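A schematic sketch of the integration idea: a knowledge-based and a corpus-based word-similarity score for a (query concept, annotated concept) pair are combined into a single score, and shots are ranked by the similarity between the query concept and their available annotations. The functions knowledge_sim and corpus_sim, the equal weights, and the toy annotations are hypothetical stand-ins, not the paper's measures.

```python
def knowledge_sim(w1: str, w2: str) -> float:
    return 1.0 if w1 == w2 else 0.4        # stand-in for e.g. a thesaurus-based measure

def corpus_sim(w1: str, w2: str) -> float:
    return 1.0 if w1 == w2 else 0.6        # stand-in for a co-occurrence-based measure

def integrated_sim(w1: str, w2: str, alpha: float = 0.5) -> float:
    # Weighted combination of the two measures (equal weights assumed here).
    return alpha * knowledge_sim(w1, w2) + (1 - alpha) * corpus_sim(w1, w2)

shot_annotations = {"shot_1": ["car", "road"], "shot_2": ["boat", "water"]}
query_concept = "truck"                    # no annotation exists for this concept

ranked = sorted(
    shot_annotations,
    key=lambda s: max(integrated_sim(query_concept, a) for a in shot_annotations[s]),
    reverse=True,
)
```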

14.
The increasing availability of object-based video content requires new technologies for automatically extracting and matching the low-level features of arbitrarily shaped video. This paper proposes methods for shape retrieval of arbitrarily shaped video objects. Our methods take into account not only the still shape features but also the shape deformations that may occur in an object's lifespan. We compute the shape similarity of video objects by comparing the similarity of their representative temporal instances. We also describe the motion of a video object by describing the deformations in its shape. Experimental results show that our proposed methods offer very good retrieval performance and match closely with human rankings.

15.
A fast compressed-domain video retrieval method based on frame-size fluctuation   (total citations: 1; self-citations: 0; citations by others: 1)
To enable fast retrieval in the compressed domain, a retrieval method based on the fluctuation of per-frame data size is proposed. The method first computes the data size of each frame in the compressed domain, yielding data-size curves for the query clip and for equal-length segments of the target video. After aligning on I-frames, the query clip is slid over the target video with a step of one GOP (group of pictures). After each slide, the difference between the fluctuations of the query clip's and the target segment's data-size curves is computed, and the target video's data-size curve is updated. Finally, a similarity decision is made against a preset threshold and the results are returned. The method does not need to extract a high-dimensional feature vector for every frame: a single vector, rather than a set of high-dimensional vectors, describes a video segment. Experiments show that, compared with existing fast retrieval algorithms, the method improves retrieval speed while maintaining high accuracy. It can be used both for fast retrieval over compressed-domain video databases and for online video-clip matching, detecting videos similar to a given target in real time.
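A schematic sketch of the sliding comparison described above: per-frame byte counts form a data-size curve, the query curve is slid over the target video in GOP-sized steps, and the difference in fluctuation is measured. The normalized L1 distance of first differences and the fixed threshold used here are assumptions; the paper's exact difference measure and decision rule are not reproduced.

```python
import numpy as np

def fluctuation(sizes: np.ndarray) -> np.ndarray:
    return np.diff(sizes.astype(float))           # frame-to-frame size changes

def find_matches(query_sizes, target_sizes, gop_len=12, threshold=0.2):
    q = fluctuation(np.asarray(query_sizes))
    matches = []
    for start in range(0, len(target_sizes) - len(query_sizes) + 1, gop_len):
        window = np.asarray(target_sizes[start:start + len(query_sizes)])
        diff = np.abs(q - fluctuation(window)).sum() / (np.abs(q).sum() + 1e-9)
        if diff < threshold:                      # similarity decision against the threshold
            matches.append(start)
    return matches

rng = np.random.default_rng(1)
target = rng.integers(1_000, 50_000, size=600)    # per-frame byte counts of the target video
query = target[240:288]                           # a clip taken from the target
print(find_matches(query, target, gop_len=12))    # should include position 240
```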

16.
Hierarchical video browsing and feature-based video retrieval are two standard methods for accessing video content. Very little research, however, has addressed the benefits of integrating these two methods for more effective and efficient video content access. In this paper, we introduce InsightVideo, a video analysis and retrieval system, which joins video content hierarchy, hierarchical browsing and retrieval for efficient video access. We propose several video processing techniques to organize the content hierarchy of the video. We first apply a camera motion classification and key-frame extraction strategy that operates in the compressed domain to extract video features. Then, shot grouping, scene detection and pairwise scene clustering strategies are applied to construct the video content hierarchy. We introduce a video similarity evaluation scheme at different levels (key-frame, shot, group, scene, and video). By integrating the video content hierarchy and the video similarity evaluation scheme, hierarchical video browsing and retrieval are seamlessly integrated for efficient content access. We construct a progressive video retrieval scheme to refine user queries through the interactions of browsing and retrieval. Experimental results and comparisons of camera motion classification, key-frame extraction, scene detection, and video retrieval are presented to validate the effectiveness and efficiency of the proposed algorithms and the performance of the system.

17.
We define similar video content as video sequences with almost identical content but possibly compressed at different qualities, reformatted to different sizes and frame-rates, undergone minor editing in either the spatial or temporal domain, or summarized into keyframe sequences. Building a search engine to identify such similar content in the World-Wide Web requires: 1) robust video similarity measurements; 2) fast similarity search techniques on large databases; and 3) intuitive organization of search results. In a previous paper, we proposed a randomized technique called the video signature (ViSig) method for video similarity measurement. In this paper, we focus on the remaining two issues by proposing a feature extraction scheme for fast similarity search, and a clustering algorithm for identification of similar clusters. Similar to many other content-based methods, the ViSig method uses high-dimensional feature vectors to represent video. To warrant a fast response time for similarity searches on high-dimensional vectors, we propose a novel nonlinear feature extraction scheme on arbitrary metric spaces that combines the triangle inequality with the classical Principal Component Analysis (PCA). We show experimentally that the proposed technique outperforms PCA, Fastmap, Triangle-Inequality Pruning, and Haar wavelet on signature data. To further improve retrieval performance, and provide better organization of similarity search results, we introduce a new graph-theoretical clustering algorithm on large databases of signatures. This algorithm treats all signatures as an abstract threshold graph, where the distance threshold is determined based on local data statistics. Similar clusters are then identified as highly connected regions in the graph. By measuring the retrieval performance against a ground-truth set, we show that our proposed algorithm outperforms simple thresholding, single-link and complete-link hierarchical clustering techniques.
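A schematic sketch of the graph-theoretical clustering idea: signatures are treated as nodes of a threshold graph, where an edge joins two signatures whose distance falls below a threshold, and clusters are read off as connected components. The single global threshold and Euclidean distance are simplifying assumptions; the paper derives its thresholds from local data statistics and identifies highly connected regions rather than plain components.

```python
import numpy as np

def threshold_graph_clusters(signatures: np.ndarray, threshold: float):
    n = len(signatures)
    # Pairwise distances between signatures; adjacency = "distance below threshold".
    dists = np.linalg.norm(signatures[:, None, :] - signatures[None, :, :], axis=-1)
    adj = dists < threshold
    labels, current = [-1] * n, 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        stack = [seed]                       # flood-fill one connected component
        while stack:
            v = stack.pop()
            if labels[v] != -1:
                continue
            labels[v] = current
            stack.extend(u for u in range(n) if adj[v, u] and labels[u] == -1)
        current += 1
    return labels

rng = np.random.default_rng(0)
sigs = np.vstack([rng.normal(0, 0.1, (5, 8)), rng.normal(3, 0.1, (5, 8))])
print(threshold_graph_clusters(sigs, threshold=1.0))   # two clusters of five signatures
```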

18.
In video retrieval, the most common query mode is to submit an example video and retrieve similar videos. To design an accurate and reliable video retrieval system, one must therefore define precisely what makes two videos similar. This paper explains the basic principles of content-based video retrieval, introduces the main ideas and formulas of frame-based and shot-based similarity measures, analyzes existing video similarity measures, and finally discusses directions for future research on video similarity measurement.

19.
Video indexing is employed to represent the features of video sequences. Motion vectors derived from compressed video are preferred for video indexing because they can be accessed by partial decoding; thus, they are used extensively in various video analysis and indexing applications. In this study, we introduce an efficient compressed-domain video indexing method and implement it on H.264/AVC coded videos. The video retrieval experimental evaluations indicate that retrieval based on the proposed indexing method outperforms motion-vector-based video retrieval in 74% of queries with little increase in computation time. Furthermore, we compared our method with a pixel-level video indexing method which employs both temporal and spatial features. Experimental evaluation results indicate that our method outperforms the pixel-level method in both performance and speed. Hence, considering speed and precision, the proposed method is an efficient indexing method that can be used in various video indexing and retrieval applications.
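A generic illustration of compressed-domain indexing with motion vectors (not this paper's specific descriptor): each clip's motion vectors are binned into a normalized direction histogram that serves as its index entry, and clips are compared by histogram intersection. The bin count and the synthetic motion fields are assumptions.

```python
import numpy as np

def motion_histogram(mvs: np.ndarray, bins: int = 8) -> np.ndarray:
    # mvs: (N, 2) array of (dx, dy) motion vectors extracted from one clip
    angles = np.arctan2(mvs[:, 1], mvs[:, 0])             # direction of each vector
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)                      # normalized histogram = index entry

def similarity(h1: np.ndarray, h2: np.ndarray) -> float:
    return float(np.minimum(h1, h2).sum())                # histogram intersection

rng = np.random.default_rng(0)
clip_a = rng.normal(loc=(3.0, 0.0), scale=0.5, size=(500, 2))   # camera panning right
clip_b = clip_a + rng.normal(scale=0.1, size=(500, 2))          # nearly identical motion
clip_c = rng.normal(loc=(0.0, 3.0), scale=0.5, size=(500, 2))   # panning up instead

print(similarity(motion_histogram(clip_a), motion_histogram(clip_b)))  # close to 1
print(similarity(motion_histogram(clip_a), motion_histogram(clip_c)))  # much lower
```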
