Similar Articles
 10 similar articles retrieved (search time: 125 ms)
1.
Motion Flow-Based Video Retrieval   (Cited by: 2; self-citations: 0; by others: 2)
In this paper, we propose the use of motion vectors embedded in MPEG bitstreams to generate so-called “motion flows”, which are applied to perform video retrieval. By using the motion vectors directly, we do not need to consider the shape of a moving object and its corresponding trajectory. Instead, we simply “link” the local motion vectors across consecutive video frames to form motion flows, which are then recorded and stored in a video database. In the video retrieval phase, we propose a new matching strategy to execute the video retrieval task. Motions that do not belong to the mainstream motion flows are filtered out by our proposed algorithm. The retrieval process can be triggered by query-by-sketch or query-by-example. The experimental results show that our method performs well in the video retrieval task.
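The flow-linking step described above can be sketched as follows. This is a hedged illustration, not the authors' exact scheme: the block-grid representation, the dict-based per-frame encoding, and the rule for following a vector into the next frame are all illustrative assumptions.

```python
# Hedged sketch: linking per-block motion vectors across frames into
# "motion flows". Each frame is a dict mapping a block coordinate
# (bx, by) to its motion vector (dx, dy) in block units (an assumed
# encoding; real MPEG macroblock parsing is not shown).

def link_motion_flows(frames, grid_w, grid_h):
    """Return a list of flows; each flow is a list of block coordinates
    traced across consecutive frames."""
    flows = []
    active = {}  # block coordinate -> index of the flow that reached it
    for mv in frames:
        next_active = {}
        for (bx, by), (dx, dy) in mv.items():
            nx, ny = bx + dx, by + dy
            if not (0 <= nx < grid_w and 0 <= ny < grid_h):
                continue  # vector leaves the frame; drop it
            if (bx, by) in active:
                fi = active[(bx, by)]          # extend an existing flow
                flows[fi].append((nx, ny))
            else:
                flows.append([(bx, by), (nx, ny)])  # start a new flow
                fi = len(flows) - 1
            next_active[(nx, ny)] = fi
        active = next_active
    return flows
```

For example, a block at (0, 0) moving one block right in each of two frames yields the single flow [(0, 0), (1, 0), (2, 0)].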

2.
3.
In instructional videos of chalk board presentations, the visual content refers to the text and figures written on the boards. Existing methods on video summarization are not effective for this video domain because they are mainly based on low-level image features such as color and edges. In this work, we present a novel approach to summarizing the visual content in instructional videos using middle-level features. We first develop a robust algorithm to extract content text and figures from instructional videos by statistical modelling and clustering. This algorithm addresses the image noise, nonuniformity of the board regions, camera movements, occlusions, and other challenges in the instructional videos that are recorded in real classrooms. Using the extracted text and figures as the middle level features, we retrieve a set of key frames that contain most of the visual content. We further reduce content redundancy and build a mosaicked summary image by matching extracted content based on K-th Hausdorff distance and connected component decomposition. Performance evaluation on four full-length instructional videos shows that our algorithm is highly effective in summarizing instructional video content.
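Retrieving "a set of key frames that contain most of the visual content" resembles greedy set cover. The sketch below is a hedged approximation in which each frame has already been reduced to the set of IDs of the content components (text lines, figures) visible in it; the `coverage` threshold is an assumed parameter, not one from the paper.

```python
# Hedged sketch: greedy key-frame selection as approximate set cover.
# frame_components[i] is the set of content-component IDs visible in
# frame i (an assumed preprocessing output).

def select_key_frames(frame_components, coverage=0.95):
    """Pick frames greedily until `coverage` of all content components
    are covered; returns the chosen frame indices in selection order."""
    universe = set().union(*frame_components)
    target = coverage * len(universe)
    covered, chosen = set(), []
    remaining = list(enumerate(frame_components))
    while len(covered) < target and remaining:
        # frame contributing the most not-yet-covered components
        best_i, best_set = max(remaining, key=lambda p: len(p[1] - covered))
        if not best_set - covered:
            break  # no frame adds anything new
        chosen.append(best_i)
        covered |= best_set
        remaining = [p for p in remaining if p[0] != best_i]
    return chosen
```

With frames covering components {1,2}, {2,3}, and {3,4,5}, full coverage is reached by picking frame 2 and then frame 0.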

4.
Video Co-Segmentation Based on Submodular Maximization and Reweighted Random Walk Matching (RRWM)   (Cited by: 1; self-citations: 1; by others: 0)
Su Liangliang, Tang Jun, Liang Dong, Wang Nian. Acta Automatica Sinica, 2016, 42(10): 1532-1541
Co-segmentation of the common motion patterns in paired videos aims to simultaneously detect the behavior patterns shared by two related videos, and is an active topic in computer vision research. This paper proposes a new co-segmentation method for paired videos. First, the moving parts of each video are detected with the dense-trajectory method, and the motion trajectories are represented by features. Next, a submodular optimization method is introduced to cluster the motion trajectories within each video. A graph matching method based on reweighted random walks, which is highly robust to outliers, deformation, and noise, is then adopted to match the motion trajectories across the paired videos, and the matching results are used to measure the co-saliency of the trajectories. Finally, classifying all trajectories into common motion trajectories and irregular motion trajectories is cast as a binary labeling problem on a Markov random field solved by graph cuts. Comparative experiments on standard motion video datasets verify the effectiveness of the proposed method.
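The final binary-labeling step is solved with graph cuts in the paper; as a hedged stand-in, the sketch below minimizes the same kind of energy (a unary term from co-saliency plus a Potts smoothness term over a trajectory neighbor graph) with iterated conditional modes (ICM). The weight `lam` and the 0.5 initialization threshold are illustrative assumptions.

```python
# Hedged sketch: label each trajectory as common motion (1) or
# irregular motion (0). ICM is used here as a simple substitute for
# the paper's graph-cut solver.

def icm_binary_labeling(saliency, neighbors, lam=0.5, iters=10):
    """saliency: co-saliency in [0, 1] per trajectory;
    neighbors: dict mapping a trajectory index to its neighbor indices."""
    labels = [1 if s >= 0.5 else 0 for s in saliency]
    for _ in range(iters):
        changed = False
        for i, s in enumerate(saliency):
            costs = []
            for lab in (0, 1):
                unary = s if lab == 0 else 1.0 - s  # high saliency favors 1
                pair = sum(lam for j in neighbors.get(i, [])
                           if labels[j] != lab)     # Potts smoothness
                costs.append(unary + pair)
            new = 0 if costs[0] < costs[1] else 1
            if new != labels[i]:
                labels[i], changed = new, True
        if not changed:
            break  # converged to a local minimum
    return labels
```

On four trajectories with saliencies [0.9, 0.8, 0.45, 0.1] chained as neighbors, the first two are labeled common motion and the last two irregular.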

5.
Nowadays, a tremendous amount of video is captured endlessly by growing numbers of video cameras distributed around the world. Since raw videos abound with needless information, browsing and retrieving them is inefficient and time consuming. Video synopsis is an effective way to browse and index such video: it produces a short video representation while keeping the essential activities of the original video. However, video synopsis for a single camera is limited in its view scope, whereas understanding and monitoring the overall activity of large scenarios is valuable and demanding. To address these issues, we propose a novel video synopsis algorithm for partially overlapping camera networks. Our main contributions reside in three aspects. First, our algorithm can generate video synopses for large scenarios, which facilitates understanding overall activities. Second, to generate the overall activity, we adopt a novel unsupervised graph matching algorithm to associate trajectories across cameras. Third, a novel multiple-kernel similarity is adopted to select key observations and eliminate content redundancy in the video synopsis. We have demonstrated the effectiveness of our approach on real surveillance videos captured by our camera network.
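The cross-camera association step can be illustrated with a much simpler greedy matcher. This is a hedged stand-in for the paper's unsupervised graph matching: it assumes a precomputed trajectory-similarity dict and an illustrative acceptance threshold.

```python
# Hedged sketch: one-to-one association of trajectories between two
# overlapping cameras. sim maps (trajectory_in_cam_A, trajectory_in_cam_B)
# to a similarity score in [0, 1] (an assumed input).

def associate_trajectories(sim, thresh=0.5):
    """Greedily accept the highest-similarity unmatched pair above
    `thresh`; returns the list of matched (i, j) pairs."""
    pairs = sorted(((s, i, j) for (i, j), s in sim.items()), reverse=True)
    used_i, used_j, matches = set(), set(), []
    for s, i, j in pairs:
        if s < thresh:
            break  # remaining pairs are all below threshold
        if i in used_i or j in used_j:
            continue  # each trajectory joins at most one match
        matches.append((i, j))
        used_i.add(i)
        used_j.add(j)
    return matches
```

A true graph matching additionally enforces pairwise geometric consistency between matches, which this greedy version ignores.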

6.
The majority of existing work on sports video analysis concentrates on highlight extraction. Little work focuses on the important issue of how the extracted highlights should be organized. In this paper, we present a multimodal approach to organizing the highlights extracted from racket sports video, grounded in human behavior analysis using a nonlinear affective ranking model. Two research challenges of highlight ranking are addressed, namely affective feature extraction and ranking model construction. The basic principle of affective feature extraction in our work is to extract sensitive features that can stimulate the user's emotion. Since users pay most attention to player behavior and audience response in racket sport highlights, we extract affective features from player behavior, including action and trajectory, and from game-specific audio keywords. We propose a novel motion analysis method to recognize the player actions. We employ support vector regression to construct the nonlinear highlight ranking model from affective features. A new subjective evaluation criterion is proposed to guide the model construction. To evaluate the performance of the proposed approaches, we have tested them on more than ten hours of broadcast tennis and badminton video. The experimental results demonstrate that our action recognition approach significantly outperforms the existing appearance-based method. Moreover, our user study shows that the affective highlight ranking approach is effective.
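The paper fits the nonlinear ranking model with support vector regression. To keep the sketch self-contained, the code below substitutes a tiny RBF kernel ridge regressor, a related kernel method, trained on (affective feature vector, subjective rank score) pairs; the hyperparameters and the Gauss–Seidel solver are illustrative choices, not the paper's.

```python
import math

def rbf(x, y, gamma=1.0):
    """RBF kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def kernel_ridge_fit(X, y, gamma=1.0, lam=1e-3):
    """Fit scores y on affective feature vectors X by solving
    (K + lam*I) alpha = y with Gauss-Seidel sweeps; returns a scoring
    function usable to rank new highlights."""
    n = len(X)
    K = [[rbf(X[i], X[j], gamma) for j in range(n)] for i in range(n)]
    alpha = [0.0] * n
    for _ in range(200):
        for i in range(n):
            r = y[i] - sum(K[i][j] * alpha[j] for j in range(n) if j != i)
            alpha[i] = r / (K[i][i] + lam)
    return lambda x: sum(a * rbf(x, xi, gamma) for a, xi in zip(alpha, X))
```

Highlights are then sorted by the returned scoring function; a real SVR would add an epsilon-insensitive loss and support-vector sparsity.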

7.
In this paper, we propose a Web video retrieval method that uses the hierarchical structure of Web video groups. Existing retrieval systems require users to input suitable queries that identify the desired contents in order to retrieve Web videos accurately; the proposed method, however, enables retrieval of the desired Web videos even if users cannot formulate such queries. Specifically, we first select representative Web videos from a target video dataset by using link relationships between Web videos, obtained via the metadata “related videos”, and heterogeneous video features. Furthermore, using the representative Web videos, we construct a network whose nodes and edges correspond to Web videos and the links between them, respectively. Web video groups, i.e., sets of Web videos with similar topics, are then hierarchically extracted based on strongly connected components, edge betweenness, and modularity. By presenting the obtained hierarchical structure of Web video groups, users can easily grasp an overview of many Web videos. Consequently, even if users cannot write suitable queries that identify the desired contents, it becomes feasible to accurately retrieve the desired Web videos by selecting Web video groups according to the hierarchical structure. Experimental results on actual Web videos verify the effectiveness of our method.
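The first grouping step, extracting strongly connected components of the "related videos" network, can be sketched with Kosaraju's algorithm. The edge-betweenness and modularity refinement stages are not shown, and the adjacency-dict input format (every node present as a key) is an assumption.

```python
# Hedged sketch: strongly connected components of a directed
# "related videos" network, via Kosaraju's two-pass algorithm.

def strongly_connected_components(graph):
    """graph: dict node -> list of successor nodes (all nodes appear
    as keys). Returns a list of components, each a list of nodes."""
    order, seen = [], set()
    for start in graph:                      # pass 1: finishing order
        if start in seen:
            continue
        seen.add(start)
        stack = [(start, iter(graph.get(start, ())))]
        while stack:
            node, it = stack[-1]
            for v in it:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, iter(graph.get(v, ()))))
                    break
            else:                            # all successors done
                order.append(node)
                stack.pop()
    rev = {}                                 # transpose graph
    for u, vs in graph.items():
        for v in vs:
            rev.setdefault(v, []).append(u)
    comps, assigned = [], set()
    for u in reversed(order):                # pass 2: collect components
        if u in assigned:
            continue
        comp, stack = [], [u]
        assigned.add(u)
        while stack:
            node = stack.pop()
            comp.append(node)
            for v in rev.get(node, ()):
                if v not in assigned:
                    assigned.add(v)
                    stack.append(v)
        comps.append(comp)
    return comps
```

Each resulting component is a candidate Web video group, which the paper then splits further by edge betweenness and merges by modularity.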

8.
9.
Key frame extraction is an important step in content-based video retrieval. To extract key frames effectively from different types of video, a key frame extraction algorithm based on particle swarm optimization is proposed. The method first extracts global-motion and local-motion features for every frame in the video, and then adaptively extracts the key frames with the particle swarm algorithm. Experimental results show that the key frames extracted by this algorithm from various types of video are highly representative.
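A hedged sketch of the idea: a particle swarm searches for k frame indices whose motion features best represent the whole video. The fitness function (coverage of all frames by the nearest selected frame), the swarm parameters, and the per-frame feature format are illustrative assumptions, not the paper's exact design.

```python
import random

def pso_select_keyframes(features, k, swarm=20, iters=50, seed=0):
    """features: one motion-feature vector per frame (assumed input).
    Returns up to k frame indices chosen by particle swarm optimization."""
    rng = random.Random(seed)
    n = len(features)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def fitness(pos):
        # Decode a continuous position into frame indices, then score
        # coverage: worst-case distance from any frame to its nearest key.
        idx = sorted({min(n - 1, max(0, int(round(p)))) for p in pos})
        return -max(min(dist(f, features[i]) for i in idx) for f in features)

    parts = [[rng.uniform(0, n - 1) for _ in range(k)] for _ in range(swarm)]
    vels = [[0.0] * k for _ in range(swarm)]
    pbest = [p[:] for p in parts]
    pbest_f = [fitness(p) for p in parts]
    g = max(range(swarm), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i, p in enumerate(parts):
            for d in range(k):
                r1, r2 = rng.random(), rng.random()
                vels[i][d] = (0.7 * vels[i][d]                 # inertia
                              + 1.5 * r1 * (pbest[i][d] - p[d])  # cognitive
                              + 1.5 * r2 * (gbest[d] - p[d]))    # social
                p[d] = min(n - 1.0, max(0.0, p[d] + vels[i][d]))
            f = fitness(p)
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = p[:], f
                if f > gbest_f:
                    gbest, gbest_f = p[:], f
    return sorted({min(n - 1, max(0, int(round(p)))) for p in gbest})
```

Duplicate decoded indices collapse, so fewer than k key frames may be returned for very short or uniform videos.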

10.