Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
Video summarization aims at selecting valuable clips for browsing videos with high efficiency. Previous approaches typically focus on aggregating temporal features while ignoring the potential role of visual representations in summarizing videos. In this paper, we present a global difference-aware network (GDANet) that exploits the feature difference across frame and video as guidance to enhance visual features. Initially, a difference optimization module (DOM) is devised to enhance the discrimina...

2.
Hashing algorithms have been widely applied to indexing video data. However, most existing video hashing methods treat a video as a simple collection of independent frames and index each video by aggregating frame-level indexes, ignoring the structural information of videos when designing the hash functions. This work first formulates video hashing as the minimization of a structure-regularized empirical loss. A supervised algorithm is then proposed that designs efficient hash functions via structure learning. The structure regularization exploits common local visual patterns that appear in video frames associated with the same semantic class, while preserving temporal consistency across successive frames of the same video. It is shown that the minimization objective can be solved efficiently with the accelerated proximal gradient (APG) method. Finally, comprehensive experiments on two large-scale benchmark datasets (150,000 video clips, 12 million frames) demonstrate that the proposed method outperforms other state-of-the-art algorithms.
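The APG scheme this abstract relies on can be sketched in miniature. The paper's actual structure regularizer is not given here, so a plain L1 penalty stands in for it, and the toy quadratic loss, step size, and variable names are all illustrative:

```python
# Hypothetical sketch of accelerated proximal gradient (FISTA-style) iterations
# for a generic objective f(w) + lam * ||w||_1. The paper's structure
# regularizer is replaced by an L1 penalty for illustration only.

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1, applied elementwise."""
    return [max(abs(v) - t, 0.0) * (1 if v > 0 else -1) for v in x]

def apg(grad_f, prox, w0, step, n_iter=200):
    """Accelerated proximal gradient with Nesterov momentum."""
    w = list(w0)
    y = list(w0)
    t = 1.0
    for _ in range(n_iter):
        g = grad_f(y)
        w_next = prox([yi - step * gi for yi, gi in zip(y, g)], step)
        t_next = (1 + (1 + 4 * t * t) ** 0.5) / 2
        y = [wn + (t - 1) / t_next * (wn - wo) for wn, wo in zip(w_next, w)]
        w, t = w_next, t_next
    return w

# Toy problem: minimize 0.5*||w - b||^2 + lam*||w||_1.
b = [3.0, -0.5, 0.05]
lam = 0.1
grad = lambda w: [wi - bi for wi, bi in zip(w, b)]
w_star = apg(grad, lambda v, s: soft_threshold(v, lam * s), [0.0, 0.0, 0.0], step=1.0)
print(w_star)  # → soft_threshold(b, lam) = [2.9, -0.4, 0.0]
```

For this separable toy objective the minimizer is the soft-thresholded vector itself, which makes the sketch easy to verify by hand.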

3.
Video action recognition is an important topic in computer vision. Most existing methods use CNN-based models, and multiple modalities of image features are captured from the videos, such as static frames, dynamic images, and optical flow features. However, these mainstream features contain much static information, including object and background information, where the motion information of the action itself is not distinguished and strengthened. In this work, a new kind of motion feature, free of static information, is proposed for video action recognition. We propose a quantization-of-motion network based on the bag-of-feature method to learn significant and discriminative motion features. In the learned feature map, the object and background information is filtered out, even if the background is moving in the video. Therefore, the motion feature is complementary to the static image feature and to the static information in the dynamic image and optical flow. A multi-stream classifier is built with the proposed motion feature and other features, and the performance of action recognition is enhanced compared to other state-of-the-art methods.
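The bag-of-feature encoding this abstract builds on can be sketched as follows: local motion descriptors are assigned to their nearest codeword and pooled into a normalized histogram that serves as a clip-level representation. The codebook below is fixed by hand purely for illustration; in practice it would be learned (e.g. by clustering):

```python
# Minimal bag-of-feature sketch (illustrative, not the paper's network):
# assign each local descriptor to its nearest codeword, pool into a histogram.

def nearest(codebook, desc):
    """Index of the codeword closest to desc (squared Euclidean distance)."""
    return min(range(len(codebook)),
               key=lambda i: sum((c - d) ** 2 for c, d in zip(codebook[i], desc)))

def bof_histogram(codebook, descriptors):
    """L1-normalized histogram of codeword assignments."""
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[nearest(codebook, d)] += 1
    total = sum(hist)
    return [h / total for h in hist]

codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]     # 3 motion "words"
descriptors = [(0.1, 0.0), (0.9, 0.1), (0.0, 0.8), (0.05, 0.9)]
hist = bof_histogram(codebook, descriptors)
print(hist)  # → [0.25, 0.25, 0.5]
```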

4.
Demand for efficient ways to represent vast amounts of video data has grown rapidly in recent years. Advances in positioning services have opened new possibilities for combining location information with video content. In this paper we present an automatic video editing system for geotagged mobile videos. In our solution, the system automatically creates a video summary from a set of unedited video clips. Location information and timestamps are used to group video clips with the same context properties. The groups are used to create a video summary in which subshots from the same context group are represented as scenes. The novelty of our solution lies in combining geotags with low-level content analysis tools for video abstraction. We have evaluated the created video summaries with a group of users, and evaluated the system's usability for service creation by building a semi-automatic web-based video editing service. The evaluations show that our concept is useful, as it improves the coherence and enjoyability of the automatic video summaries.
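The grouping step described above, clustering clips by location and timestamp proximity, can be sketched like this. The distance and time thresholds are made up, and the crude planar distance in degrees is a simplification for illustration:

```python
# Illustrative sketch (not the paper's exact algorithm): merge a clip into the
# current context group when it was shot near the group's last clip, both in
# space and in time; otherwise start a new group. Thresholds are hypothetical.

def group_clips(clips, max_dist=0.01, max_gap=600):
    """clips: list of (timestamp_s, lat, lon), assumed sorted by timestamp.
    Returns a list of groups, each a list of clips."""
    groups = []
    for clip in clips:
        if groups:
            t, lat, lon = groups[-1][-1]
            # crude planar distance in degrees; acceptable for a sketch
            near = ((clip[1] - lat) ** 2 + (clip[2] - lon) ** 2) ** 0.5 <= max_dist
            if near and clip[0] - t <= max_gap:
                groups[-1].append(clip)
                continue
        groups.append([clip])
    return groups

clips = [(0, 60.1699, 24.9384), (120, 60.1700, 24.9386),   # same spot, close in time
         (5000, 60.2055, 24.6559)]                         # elsewhere, much later
groups = group_clips(clips)
print(len(groups))  # → 2
```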

5.
The exploitation of video data requires methods able to extract high-level information from the images. Video summarization, video retrieval, or video surveillance are examples of applications. In this paper, we tackle the challenging problem of recognizing dynamic video contents from low-level motion features. We adopt a statistical approach involving modeling, (supervised) learning, and classification issues. Because of the diversity of video content (even for a given class of events), we have to design appropriate models of visual motion and learn them from videos. We have defined original parsimonious global probabilistic motion models, both for the dominant image motion (assumed to be due to the camera motion) and the residual image motion (related to scene motion). Motion measurements include affine motion models to capture the camera motion and low-level local motion features to account for scene motion. Motion learning and recognition are solved using maximum likelihood criteria. To validate the interest of the proposed motion modeling and recognition framework, we report dynamic content recognition results on sports videos.

6.
Spatiotemporal irregularities (i.e., the uncommon appearance and motion patterns) in videos are difficult to detect, as they are usually not well defined and appear rarely in videos. We tackle this problem by learning normal patterns from regular videos, while treating irregularities as deviations from normal patterns. To this end, we introduce a 3D fully convolutional autoencoder (3D-FCAE) that is trainable in an end-to-end manner to detect both temporal and spatiotemporal irregularities in videos using limited training data. Subsequently, temporal irregularities can be detected as frames with high reconstruction errors, and irregular spatiotemporal patterns can be detected as blurry regions that are not well reconstructed. Our approach can accurately locate temporal and spatiotemporal irregularities thanks to the 3D fully convolutional autoencoder and the explored effective architecture. We evaluate the proposed autoencoder for detecting irregular patterns on benchmark video datasets with weak supervision. Comparisons with state-of-the-art approaches demonstrate the effectiveness of our approach. Moreover, the learned autoencoder shows good generalizability across multiple datasets.
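The frame-level scoring step above, flagging frames whose reconstruction error is high, can be sketched in a few lines. The error values and threshold below are synthetic; a real system would compute each error from the trained 3D-FCAE's reconstruction:

```python
# Minimal sketch of reconstruction-error-based irregularity scoring.
# errors[i] stands in for ||frame_i - reconstruction_i||^2 (synthetic here).

def regularity_scores(errors):
    """Min-max normalize errors into [0, 1]; higher = more regular."""
    lo, hi = min(errors), max(errors)
    return [1.0 - (e - lo) / (hi - lo) for e in errors]

def irregular_frames(errors, thresh=0.5):
    """Indices of frames whose regularity score falls below thresh."""
    return [i for i, s in enumerate(regularity_scores(errors)) if s < thresh]

errors = [0.10, 0.12, 0.11, 0.95, 0.90, 0.13]   # error spikes at frames 3-4
flagged = irregular_frames(errors)
print(flagged)  # → [3, 4]
```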

7.
In this paper, we present the first video decomposition framework, named SyCoMo, that factorizes a video into style, content, and motion. Such a fine-grained decomposition enables flexible video editing and, for the first time, allows for tripartite video synthesis. SyCoMo is a unified and domain-agnostic learning framework which can process videos of various object categories without domain-specific design or supervision. Different from other motion decomposition work, SyCoMo derives motion from style-free content by isolating style from content in the first place. Content is organized into subchannels, each of which corresponds to an atomic motion. This design naturally forms an information bottleneck which facilitates a clean decomposition. Experiments show that SyCoMo decomposes videos of various categories into interpretable content subchannels and meaningful motion patterns. Ablation studies also show that deriving motion from style-free content makes the keypoints or landmarks of the object more accurate. We demonstrate the photorealistic quality of the novel tripartite video synthesis in addition to three bipartite synthesis tasks, named style, content, and motion transfer.

8.
9.
Most existing Action Quality Assessment (AQA) methods for scoring sports videos have deeply investigated how to evaluate a single action or several sequentially defined actions performed in short-duration sports videos, such as diving and vault. They attempt to extract features directly from RGB videos through 3D ConvNets, which leaves the features mixed with ambiguous scene information. To investigate the effectiveness of deep pose feature learning for automatically evaluating complicated activities in long-duration sports videos, such as figure skating and artistic gymnastics, we propose a skeleton-based deep pose feature learning method. For pose feature extraction, a spatial-temporal pose extraction module (STPE) is built to capture subtle changes of human body movement and obtain detailed representations of skeletal data in the space and time dimensions. For temporal information representation, an inter-action temporal relation extraction module (ATRE) is implemented with a recurrent neural network to model the dynamic temporal structure of skeletal subsequences. We evaluate the proposed method on the figure skating activity of the MIT-Skate and FIS-V datasets. The experimental results show that the proposed method is more effective than RGB video-based deep feature learning methods, including SENet and C3D. Significant progress has been achieved in Spearman Rank Correlation (SRC) on the MIT-Skate dataset. On the FIS-V dataset, for the Total Element Score (TES) and the Program Component Score (PCS), better SRC and MSE have been achieved between the predicted scores and the judges' scores when compared with SENet and C3D feature methods.
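The evaluation metric used above, Spearman Rank Correlation, is simply the Pearson correlation computed on rank values. A plain implementation (no tie correction) on toy predicted/judge scores:

```python
# Spearman Rank Correlation (SRC) without tie handling, for illustration.

def ranks(xs):
    """1-based rank of each element (no tie averaging)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank + 1
    return r

def spearman(a, b):
    """Pearson correlation of the two rank vectors."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra) ** 0.5
    vb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (va * vb)

predicted = [70.2, 55.0, 88.1, 60.3]   # hypothetical model scores
judges = [68.0, 52.5, 90.0, 61.0]      # hypothetical judge scores
src = spearman(predicted, judges)
print(src)  # → 1.0 (the two orderings are identical)
```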

10.
A Survey of Progress in Deep Learning-Based Human Action Recognition in Videos
罗会兰, 童康, 孔繁胜. 《电子学报》 (Acta Electronica Sinica), 2019, 47(5): 1162-1173
Human action recognition in videos is a challenging topic in computer vision, with wide applications in video information retrieval, daily-life safety, public video surveillance, human-computer interaction, and scientific cognition. This paper first briefly introduces the research background, significance, and difficulties of action recognition. It then surveys deep learning-based action recognition methods in detail along three dimensions: the type and number of model input signals, whether traditional feature extraction methods are incorporated, and model pre-training; it also compares and analyzes their recognition performance on the UCF101 and HMDB51 datasets. Finally, possible future directions for action recognition are discussed from three perspectives: video pre-processing, representation of human motion information in videos, and model learning and training.

11.
12.
Intelligently tracking objects with varied shapes, colors, lighting conditions, and backgrounds is an extremely useful capability in many HCI applications, such as human body motion capture, hand gesture recognition, and virtual reality (VR) games. However, accurately tracking different objects in uncontrolled environments is a tough challenge due to possibly dynamic object parts, varied lighting conditions, and sophisticated backgrounds. In this work, we propose a novel semantically-aware object tracking framework, wherein the key is a weakly-supervised learning paradigm that optimally transfers video-level semantic tags into various regions. More specifically, given a set of training video clips, each of which is associated with multiple video-level semantic tags, we first propose a weakly-supervised learning algorithm to transfer the semantic tags into various video regions. The key is a MIL (Zhong et al., 2020) [1]-based manifold embedding algorithm that maps the entire set of video regions into a semantic space, wherein the video-level semantic tags are well encoded. Afterward, for each video region, we use the semantic feature combined with the appearance feature as its representation. We design a multi-view learning algorithm to optimally fuse the above two types of features. Based on the fused feature, we learn a probabilistic Gaussian mixture model to predict the target probability of each candidate window, where the window with the maximal probability is output as the tracking result. Comprehensive comparative results on a challenging pedestrian tracking task as well as human hand gesture recognition have demonstrated the effectiveness of our method. Moreover, visualized tracking results have shown that non-rigid objects with moderate occlusions can be well localized by our method.

13.
Videos captured by stationary cameras are widely used in video surveillance and video conferencing. Such videos often have static or gradually changing backgrounds. By analyzing the properties of static-background videos, this work presents a novel approach to detect double MPEG-4 compression based on local motion vector field analysis. For a given suspicious video, the local motion vector field is used to segment background regions in each frame. According to the segmentation of backgrounds and the motion strength of foregrounds, a modified prediction residual sequence is calculated, which retains robust fingerprints of double compression. After post-processing, the detection and GOP estimation results are obtained by applying a temporal periodicity analysis method to the final feature sequence. Experimental results have demonstrated better robustness and efficiency of the proposed method in comparison to several state-of-the-art methods. Besides, the proposed method is more robust to various rate control modes.
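The temporal periodicity analysis step can be sketched as follows: given a residual sequence whose spikes recur at the original GOP period, score each candidate period by the mean residual at its multiples and keep the strongest. The residual values below are fabricated for illustration; the paper's actual feature sequence and scoring rule may differ:

```python
# Toy GOP estimation by periodicity scoring (illustrative only).
# Ties between a period and its multiples resolve to the smallest period,
# because the comparison below is strictly greater-than.

def estimate_gop(residuals, max_gop=20):
    """Return the candidate period whose multiples carry the largest mean residual."""
    best, best_score = None, float("-inf")
    for g in range(2, max_gop + 1):
        picks = residuals[g::g]          # residuals at frames g, 2g, 3g, ...
        if not picks:
            continue
        score = sum(picks) / len(picks)
        if score > best_score:
            best, best_score = g, score
    return best

# Synthetic residual sequence: spikes every 9 frames on a flat baseline.
residuals = [1.0 + (5.0 if i % 9 == 0 and i > 0 else 0.0) for i in range(90)]
gop = estimate_gop(residuals)
print(gop)  # → 9
```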

14.
In recent years, we have witnessed dramatic growth of video data in various real-life scenarios. In this paper, we address the problem of surveillance video summarization and present a new key-frame selection method for this task. By virtue of retrospective analysis of time series, temporal cuts are first detected by sequentially measuring dissimilarities on a given video with threshold-based decision making; then, with the detected cuts, the video is segmented into a number of consecutive clips containing similar video content; finally, key frames are selected by performing a typical clustering procedure in each resulting clip to form the final video summary. We have conducted extensive experiments on the benchmark ViSOR dataset and the publicly available IVY LAB dataset. Excellent performance, outperforming state-of-the-art competitors, was demonstrated on key-frame selection for surveillance video summarization, which suggests good potential of the proposed method in real-world applications.
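The three-step pipeline above (cut detection, clip segmentation, per-clip key-frame selection) can be shown in miniature on scalar frame descriptors. A real system would use histogram or deep features and a proper clustering step; the threshold and the mean-nearest "medoid" rule here are simplifications:

```python
# Minimal key-frame selection sketch (illustrative, not the paper's method).

def detect_cuts(feats, thresh):
    """Indices i where the jump from frame i-1 to frame i exceeds thresh."""
    return [i for i in range(1, len(feats)) if abs(feats[i] - feats[i - 1]) > thresh]

def key_frames(feats, thresh=1.0):
    """Segment at cuts, then pick the frame nearest each clip's mean."""
    bounds = [0] + detect_cuts(feats, thresh) + [len(feats)]
    keys = []
    for a, b in zip(bounds, bounds[1:]):
        clip = feats[a:b]
        mean = sum(clip) / len(clip)
        keys.append(a + min(range(len(clip)), key=lambda j: abs(clip[j] - mean)))
    return keys

feats = [0.1, 0.2, 0.15, 5.0, 5.1, 5.05, 9.0, 9.2]  # three visually distinct clips
keys = key_frames(feats)
print(keys)  # one key frame per clip [0:3], [3:6], [6:8]
```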

15.
Image-based rendering (IBR) is a promising technology for rendering photo-realistic views of scenes from a collection of densely sampled images or videos. It provides a framework for developing revolutionary virtual reality and immersive viewing systems. While there has been considerable progress recently in the capture, storage, and transmission of image-based representations, most multiple-camera systems are designed to be stationary, and hence their ability to cope with moving objects and dynamic environments is somewhat limited. This paper studies the design and construction of a movable image-based rendering system based on a class of dynamic representations called plenoptic videos, its associated video processing algorithms, and an application to multiview audio-visual conferencing. It is constructed by mounting a linear array of 8 video cameras on an electrically controllable wheelchair, and its motion is controllable manually or remotely through wireless LAN by means of additional hardware circuitry. We also developed a real-time object tracking algorithm and utilize the computed motion information to continuously adjust the azimuth or rotation angle of the movable IBR system in order to cope with a given moving object in a large environment. Due to imperfect tracking and the mechanical vibration encountered in movable systems, the videos may appear very shaky, and a new video stabilization technique is proposed to overcome this problem. The usefulness of the system is illustrated by means of a multiview conferencing application using a multiview TV display. Through this pilot study, we hope to disseminate useful experience for the design and construction of movable IBR systems with improved viewing freedom and the ability to cope with moving objects in a large environment.

16.
Quality of experience (QoE) assessment for adaptive video streaming plays a significant role in advanced network management systems. It is especially challenging in the case of dynamic adaptive streaming over HTTP (DASH), which has increasingly complex characteristics including additional playback issues. In this paper, we provide a brief overview of adaptive video streaming quality assessment. Based on our review of related work, we analyze and compare different variations of objective QoE assessment models, with and without machine learning techniques, for adaptive video streaming. Through the performance analysis, we observe that hybrid models perform better than both quality-of-service (QoS) driven QoE approaches and signal fidelity measurement. Moreover, the machine learning-based model slightly outperforms the model without machine learning for the same setting. In addition, we find that existing video streaming QoE assessment models still have limited performance, which makes them difficult to apply in practical communication systems. Therefore, based on the success of deep learned feature representations for traditional video quality prediction, we also apply an off-the-shelf deep convolutional neural network (DCNN) to evaluate the perceptual quality of streaming videos, taking the spatio-temporal properties of streaming videos into consideration. Experiments demonstrate its superiority, which sheds light on the future development of specifically designed deep learning frameworks for adaptive video streaming quality assessment. We believe this survey can serve as a guideline for QoE assessment of adaptive video streaming.

17.
18.
19.
A generic definition of video objects is considered: a group of pixels with temporal motion coherence. The generic video object (GVO) is a superset of the conventional video objects considered in the object segmentation literature. Because of its motion coherence, the GVO can be easily recognised by the human visual system. However, due to its arbitrary spatial distribution, the GVO cannot be easily detected by existing algorithms, which often assume spatial homogeneity of the video objects. The concept of extended optical flow is introduced, and a dynamic programming framework for GVO detection and segmentation is developed, whose solution is given by the Viterbi algorithm. Using this dynamic programming formulation, the proposed object detection algorithm is able to discover the motion path of the GVO automatically and refine its spatial region of support progressively. In addition to object segmentation, the proposed algorithm can also be applied to video pre-processing, removing the so-called 'video mask' noise in digital videos. Experimental results show that this type of vision-assisted video pre-processing significantly improves compression efficiency.
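A generic Viterbi pass over per-frame candidate positions illustrates the dynamic-programming formulation above: pick one position per frame so that accumulated evidence minus a motion-smoothness penalty is maximized. The evidence scores below are synthetic; the paper derives its costs from extended optical flow:

```python
# Generic Viterbi sketch for a smooth motion path (illustrative only).

def viterbi_path(evidence, smooth=1.0):
    """evidence[t][s]: score of candidate position s at frame t.
    Transition cost between consecutive frames is smooth * |s - s_prev|.
    Returns the maximum-score state sequence."""
    n_states = len(evidence[0])
    score = list(evidence[0])
    back = []
    for obs in evidence[1:]:
        ptr, new = [], []
        for s in range(n_states):
            prev = max(range(n_states), key=lambda p: score[p] - smooth * abs(s - p))
            ptr.append(prev)
            new.append(score[prev] - smooth * abs(s - prev) + obs[s])
        back.append(ptr)
        score = new
    # Backtrack from the best final state.
    path = [max(range(n_states), key=lambda s: score[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

evidence = [[5, 0, 0], [0, 5, 0], [0, 0, 5]]  # object drifts right one cell per frame
path = viterbi_path(evidence)
print(path)  # → [0, 1, 2]
```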

20.
The huge amount of data in surveillance video coding demands high compression rates with low computational requirements for efficient storage and archival. Motion estimation is a very time-consuming process in the traditional video coding framework, and hence reducing its computational complexity is a pressing task, especially for surveillance videos. The significant proportion of background in surveillance videos makes them a special case for coding. Existing surveillance video coding methods propose separate search mechanisms for background and foreground regions. However, they still suffer from misclassification and inefficient search strategies, since they do not consider the inherent motion characteristics of the foreground regions. In this paper, a background-foreground-boundary aware block matching algorithm is proposed to exploit the special characteristics of surveillance videos. A novel three-step framework is proposed for the boundary-aware block matching process. First, the blocks are categorized into three classes, namely background, foreground, and boundary blocks. Second, the motion search is performed by employing a different search strategy for each class: a zero-motion-vector-based search is employed for background blocks, whereas, to exploit the fast and directional motion characteristics of the boundary and foreground blocks, eight rotating uni-wing diamond search patterns are proposed. Third, a speed-up is achieved through a novel region-based sub-sampled structure. The experimental results demonstrate that two to four times speed-up over existing methods can be achieved through this scheme while maintaining better matching accuracy.
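The block classification step can be sketched in miniature. The classification rule and thresholds below are hypothetical, not the paper's exact criteria: a block whose mean absolute difference from the co-located block in the previous frame is near zero is treated as background and assigned the zero motion vector, skipping the expensive search:

```python
# Illustrative block classification for surveillance video coding.
# Thresholds and the MAD-based rule are assumptions for this sketch.

def mad(a, b):
    """Mean absolute difference between two equally sized pixel blocks."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def classify_block(cur, prev, bg_thresh=1.0, fg_thresh=8.0):
    """Route each block to a search strategy based on temporal change."""
    d = mad(cur, prev)
    if d < bg_thresh:
        return "background"   # zero motion vector, no search needed
    if d < fg_thresh:
        return "boundary"     # directional (e.g. uni-wing diamond) search
    return "foreground"       # full/fast motion search

static = [10] * 16                  # unchanged 4x4 block, flattened
moved = [10] * 8 + [50] * 8         # half the block changed
print(classify_block(static, static))  # → background
print(classify_block(moved, static))   # → foreground
```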
