Similar Documents
20 similar documents found
1.
Li Tianyu, Bing Bing, Wu Xinxiao. Multimedia Tools and Applications, 2021, 80(2): 2123-2139

Temporal action proposal generation for temporal action localization aims to capture temporal intervals that are likely to contain actions in untrimmed videos. Prevailing bottom-up proposal generation methods locate action boundaries (the start and the end) at positions with high classification probabilities. However, for many actions the motion at the boundaries is not discriminative, so action segments and background segments are classified into the boundary classes, which yields low-overlap proposals. In this work, we propose a novel method that generates proposals by evaluating the continuity of video frames and then locating the start and the end at positions of low continuity. Our method consists of two modules: boundary discrimination and proposal evaluation. The boundary discrimination module trains a model to understand the relationship between two frames and uses the continuity of frames to generate proposals. The proposal evaluation module removes background proposals via a classification network and evaluates the integrity of proposals with probability features via an integrity network. Extensive experiments are conducted on two challenging datasets, THUMOS14 and ActivityNet 1.3, and the results demonstrate that our method outperforms state-of-the-art proposal generation methods.
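The continuity-based generation step lends itself to a compact illustration. The following is a minimal sketch, assuming the boundary discrimination model has already produced a per-transition continuity score in [0, 1]; the threshold, pairing rule, and maximum proposal length are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def proposals_from_continuity(continuity, low_thr=0.3, max_len=400):
    """Generate candidate (start, end) proposals from a continuity curve.

    continuity[t] is an (assumed) model score in [0, 1] for how smoothly
    frame t transitions into frame t + 1; low values suggest a boundary.
    """
    boundaries = np.where(continuity < low_thr)[0]   # low-continuity positions
    proposals = []
    for i, s in enumerate(boundaries):
        for e in boundaries[i + 1:]:
            if e - s > max_len:                      # keep proposals reasonably short
                break
            proposals.append((s + 1, e))             # interval between two boundaries
    return proposals

# toy usage: a 12-step continuity curve with dips after frames 2 and 8
curve = np.array([.9, .9, .1, .8, .9, .9, .8, .9, .2, .9, .9, .9])
print(proposals_from_continuity(curve))              # [(3, 8)]
```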


2.
Objective: Temporal action detection, a hot topic in computer vision, aims to detect the temporal interval in which an action occurs in a video and to determine the action category, a task with far-reaching practical significance. Quickly localizing and detecting actions in long videos remains challenging. To this end, this paper focuses on locating and refining candidate temporal regions and proposes TPO (temporal proposal optimization), a temporal action detection method based on temporal proposal optimization. Method: A convolutional neural network (CNN) and a bidirectional long short-term memory network (BLSTM) are used to capture the local temporal correlations and the global temporal information of the video; connectionist temporal classification (CTC) is introduced to estimate the boundary probability and the action probability at each temporal position; finally, the two probability curves are fused to refine and rank the temporal proposals, yielding the final temporal action detection. Results: Experiments on the ActivityNet v1.3 dataset show that TPO reaches 74.66 AR@100 (average recall at 100 proposals), 66.32 AUC (area under the curve), and 30.5 mAP (mean average precision), while mAP@IoU (mAP at a given intersection-over-union threshold) reaches 30.73 and 8.22 at thresholds of 0.75 and 0.95, respectively. Compared with SSN (structured segment network), TCN (temporal context network), Prop-SSAD (single shot action detector for proposal), CTAP (complementary temporal action proposal), and BSN (boundary sensitive network), TPO improves on all metrics. Conclusion: The proposed model takes both the global and the local temporal information of the video into account, making the predicted proposal boundaries more accurate and flexible, and verifies that more accurate proposals effectively improve the precision of temporal action detection.
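The final fusion step of TPO can be illustrated with a small sketch. The boundary and action probability curves below are random stand-ins for the CNN + BLSTM + CTC outputs, and the multiplicative fusion rule is an assumption for illustration; the paper's exact fusion and ranking may differ.

```python
import numpy as np

def score_proposals(boundary_prob, action_prob, candidates):
    """Rank candidate (start, end) intervals by fusing two probability curves."""
    scored = []
    for s, e in candidates:
        boundary_score = boundary_prob[s] * boundary_prob[e]   # confident endpoints
        actionness = action_prob[s:e + 1].mean()               # confident interior
        scored.append(((s, e), boundary_score * actionness))
    return sorted(scored, key=lambda x: x[1], reverse=True)

rng = np.random.default_rng(0)
T = 100
boundary_prob = rng.random(T)   # stand-in for the boundary probability curve
action_prob = rng.random(T)     # stand-in for the actionness curve
candidates = [(10, 40), (20, 70), (55, 90)]
for (s, e), score in score_proposals(boundary_prob, action_prob, candidates):
    print(f"[{s:3d}, {e:3d}]  score = {score:.3f}")
```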

3.
Much research on human action recognition has been oriented toward performance gains on lab-collected datasets. Yet real-world videos are more diverse, contain more complicated actions, and often only a few of them are precisely labeled. Recognizing actions from these videos is therefore a challenging task. The paucity of labeled real-world videos motivates us to “borrow” strength from other resources. Specifically, considering that many lab datasets are available, we propose to harness lab datasets to facilitate action recognition in real-world videos, given that the lab and real-world datasets are related. As their action categories are usually inconsistent, we design a multi-task learning framework to jointly optimize the classifiers for both sides. The general Schatten \(p\)-norm is exerted on the two classifiers to explore the shared knowledge between them. In this way, our framework is able to mine the shared knowledge between two datasets even if the two have different action categories, which is a major virtue of our method. The shared knowledge is further used to improve action recognition in the real-world videos. Extensive experiments are performed on real-world datasets with promising results.
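The Schatten \(p\)-norm couples the two classifiers through the singular values of their stacked weight matrices. Below is a small numpy sketch of that regularizer, assuming linear classifiers W_lab and W_real that share a feature dimension; the joint multi-task optimization itself is omitted.

```python
import numpy as np

def schatten_p_norm(W, p=1.0):
    """Schatten p-norm of a matrix: the l_p norm of its singular values.

    p = 1 gives the trace (nuclear) norm, which encourages a shared
    low-rank structure across the stacked classifiers.
    """
    singular_values = np.linalg.svd(W, compute_uv=False)
    return (singular_values ** p).sum() ** (1.0 / p)

d = 64                                   # shared feature dimension (illustrative)
W_lab = np.random.randn(d, 6)            # lab-dataset classifier (6 actions, illustrative)
W_real = np.random.randn(d, 11)          # real-world classifier (11 actions, illustrative)
W_joint = np.hstack([W_lab, W_real])     # stack column-wise so knowledge can be shared
print("Schatten-1 (nuclear) norm:", schatten_p_norm(W_joint, p=1.0))
```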

4.
Weakly supervised temporal action localization aims to locate the start and end boundaries of action instances in a video and to recognize the corresponding actions. Despite considerable progress, existing methods still suffer from incomplete action localization and missed detection of short actions. To address this, a localization method based on feature mining and region enhancement (FMRE) is proposed. First, a base branch computes similarity scores between video snippets and uses these scores to aggregate contextual information, yielding more discriminative snippet classification scores and more complete localization. Then, an enhancement branch dynamically upsamples, along the temporal dimension, the short-duration action proposals produced by the base branch, and applies multi-head self-attention to explicitly model the temporal structure among proposals, which promotes the localization of temporally dependent actions and prevents short actions from being missed. Finally, pseudo-label mutual supervision is built between the two branches to progressively improve the quality of the action proposals generated during training. The algorithm achieves detection performance of 70.3% and 40.7% on the THUMOS14 and ActivityNet1.3 datasets, respectively, demonstrating its effectiveness.

5.
Text in videos contains rich semantic information, which is useful for content-based video understanding and retrieval. Although a great number of state-of-the-art methods have been proposed to detect text in images and videos, few works focus on spatiotemporal text localization in videos. In this paper, we present a spatiotemporal text localization method with improved detection efficiency and performance. Concretely, we propose a unified framework consisting of a sampling-and-recovery model (SaRM) and a divide-and-conquer model (DaCM). SaRM exploits the temporal redundancy of text to increase detection efficiency for videos. DaCM is designed to efficiently localize text in the spatial and temporal domains simultaneously. In addition, we construct a challenging video overlaid-text dataset named UCAS-STLData, which contains 57070 frames with spatiotemporal ground truths. In the experiments, we comprehensively evaluate the proposed method on the publicly available overlaid-text datasets and UCAS-STLData. A slight performance improvement over state-of-the-art spatiotemporal text localization methods is achieved, together with a significant efficiency improvement.

6.
Fast and effective recognition of human actions in videos has broad application prospects and potential economic value. However, current video action recognition methods are easily affected by background factors such as body sway, background changes, camera shake, and the shadows of moving people. To address these problems, this paper proposes a non-local temporal segment network. Built on the two-stream network, it adds non-local computation so that the network can attend to information over a larger spatiotemporal range, and further incorporates optical flow so that attention is placed more precisely on the action regions, improving robustness to complex static backgrounds. In addition, to fuse the multi-path predictions of the two-stream segment network, a learnable weighted average replaces the simple average when fusing multimodal information. Experiments on the TDAP dataset show that the model recognizes human actions fairly accurately against complex backgrounds and improves recognition performance over the original model with almost no increase in time complexity.
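The learnable weighted average mentioned above can be sketched as softmax-normalized weights over the per-stream, per-segment scores in place of a simple mean. The stream/segment counts and the softmax parameterization are assumptions for illustration.

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Fuse per-stream, per-segment class scores with learnable weights."""

    def __init__(self, num_streams, num_segments):
        super().__init__()
        # one weight per (stream, segment) pair, initialised to a uniform average
        self.logits = nn.Parameter(torch.zeros(num_streams, num_segments))

    def forward(self, scores):
        # scores: (batch, streams, segments, classes)
        w = torch.softmax(self.logits.flatten(), dim=0).view_as(self.logits)
        return (scores * w[None, :, :, None]).sum(dim=(1, 2))

fusion = WeightedFusion(num_streams=2, num_segments=3)   # e.g. RGB + flow, 3 segments
scores = torch.randn(4, 2, 3, 10)                        # batch of 4, 10 classes
print(fusion(scores).shape)                               # torch.Size([4, 10])
```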

7.
We focus on the recognition of human actions in uncontrolled videos that may contain complex temporal structures. This is a difficult problem because of the large intra-class variations in viewpoint, video length, motion pattern, etc. To address these difficulties, we propose a novel system that represents each action class by hidden temporal models. In this system, we represent the crucial action event of each category by a video segment that covers a fixed number of frames and can move temporally within the sequence. To capture the temporal structures, the video segment is described by a temporal pyramid model. To capture large intra-class variations, multiple models are combined using an OR operation to represent alternative structures. The index of the model and the start frame of the segment are both treated as hidden variables. We implement a learning procedure based on the latent SVM method. The proposed approach is tested on two difficult benchmarks: the Olympic Sports and HMDB51 datasets. The experimental results show that our system is comparable to the state-of-the-art methods in the literature.

8.
In this paper, we address the analysis and recognition of facial expressions in continuous videos. More precisely, we study the performance of classifiers that exploit head-pose-independent temporal facial action parameters. These are provided by an appearance-based 3D face tracker that simultaneously estimates the 3D head pose and the facial actions. The use of such a tracker makes the recognition pose- and texture-independent. Two different schemes are studied. The first scheme adopts a dynamic time warping technique for recognizing expressions, where the training data are temporal signatures associated with different universal facial expressions. The second scheme models the temporal signatures associated with facial actions as fixed-length feature vectors (observations) and uses machine learning algorithms to recognize the displayed expression. Experiments carried out on CMU video sequences and home-made video sequences quantified the performance of the different schemes. The results show that the use of dimension reduction techniques on the extracted time series can improve classification performance. Moreover, these experiments show that the best recognition rate can exceed 90%.
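The first scheme relies on dynamic time warping between temporal signatures. A compact numpy sketch of the standard DTW distance between two 1-D signatures follows; the actual facial-action signatures are multidimensional and come from the 3D tracker, which is not reproduced here.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])            # local distance
            D[i, j] = cost + min(D[i - 1, j],          # insertion
                                 D[i, j - 1],          # deletion
                                 D[i - 1, j - 1])      # match
    return D[n, m]

query = np.array([0.0, 0.2, 0.9, 1.0, 0.3])            # e.g. a brow-raise signature
template = np.array([0.0, 0.1, 0.4, 0.95, 0.9, 0.2])
print("DTW distance:", dtw_distance(query, template))
```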

9.
Every Moment Counts: Dense Detailed Labeling of Actions in Complex Videos
Every moment counts in action recognition. A comprehensive understanding of human activity in video requires labeling every frame according to the actions occurring, placing multiple labels densely over a video sequence. To study this problem, we extend the existing THUMOS dataset and introduce MultiTHUMOS, a new dataset of dense labels over unconstrained internet videos. Modeling multiple, dense labels benefits from temporal relations within and across classes. We define a novel variant of long short-term memory deep networks for modeling these temporal relations via multiple input and output connections. We show that this model improves action labeling accuracy and further enables deeper understanding tasks ranging from structured retrieval to action prediction.

10.
This paper addresses the problem of recognition and localization of actions in image sequences by utilizing, in the training phase only, gaze tracking data of people watching videos depicting the actions in question. First, we learn discriminative action features at the areas of gaze fixation and train a convolutional network that predicts areas of fixation (i.e., salient regions) from raw image data. Second, we propose a support vector machine-based method for joint recognition and localization, in which the bounding box of the action in question is treated as a latent variable. In our formulation, the optimization attempts both to minimize the classification cost and to maximize the saliency within the bounding box. We show that the results obtained when saliency within the bounding box is maximized outperform those obtained when only the classification cost is minimized. Furthermore, our results outperform the state-of-the-art results on the UCF Sports dataset.
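The latent bounding-box search described above can be sketched as scoring each candidate box by a classification term plus the saliency mass it contains and keeping the maximizer. The saliency map, candidate boxes, and classifier below are stand-ins, not the paper's trained components.

```python
import numpy as np

def best_box(saliency_map, boxes, clf_score, lam=1.0):
    """Pick the box maximizing classification score + lam * in-box saliency.

    saliency_map : (H, W) array predicted from raw frames (stand-in here)
    boxes        : list of (x0, y0, x1, y1) candidate bounding boxes
    clf_score    : function mapping a box to its classification score
    """
    def in_box_saliency(box):
        x0, y0, x1, y1 = box
        return saliency_map[y0:y1, x0:x1].sum()

    scores = [clf_score(b) + lam * in_box_saliency(b) for b in boxes]
    return boxes[int(np.argmax(scores))], max(scores)

H, W = 120, 160
saliency = np.zeros((H, W)); saliency[40:80, 60:110] = 1.0   # toy fixation blob
boxes = [(0, 0, 60, 60), (55, 35, 115, 85), (100, 80, 160, 120)]
box, score = best_box(saliency, boxes, clf_score=lambda b: 0.0)
print("selected box:", box, "score:", score)
```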

11.
In the field of localizing and recognizing human activities in videos, existing temporal action proposal methods cannot handle the long-range dependencies of action features well, which leads to low proposal recall. To address this problem, a temporal action proposal method that fuses contextual information is proposed. The method first extracts the spatiotemporal features of video units with a 3D convolutional network, and then uses a bidirectional gated recurrent network to model contextual relations and predict temporal action intervals. To address the large number of parameters and the vanishing-gradient problem of the gated recurrent unit (GRU), a simplified gated recurrent unit (S-GRU) is proposed: the input features control the gate structure to strengthen parallel computation, and a weighted average is introduced to strengthen the fusion of historical and current information. Finally, experiments and comparisons on the THUMOS14 dataset show that the temporal action proposal method based on the bidirectional S-GRU recurrent network improves proposal recall.
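For reference, a numpy sketch of a standard GRU cell next to one possible simplification in the spirit described (a gate driven by the input features alone, and a weighted average fusing the previous state with the candidate). This is an illustrative reading, not the paper's exact S-GRU equations.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """Standard GRU: update and reset gates depend on both x and h."""
    z = sigmoid(Wz @ x + Uz @ h)
    r = sigmoid(Wr @ x + Ur @ h)
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1 - z) * h + z * h_tilde

def simplified_gru_cell(x, h, Wz, Wh, Uh, alpha=0.5):
    """Illustrative simplified cell: the gate is driven by the input only,
    and history and current information are fused by a weighted average."""
    z = sigmoid(Wz @ x)                        # input-controlled gate (assumption)
    h_tilde = np.tanh(Wh @ x + Uh @ h)
    return alpha * ((1 - z) * h) + (1 - alpha) * (z * h_tilde)

d_in, d_h = 8, 16
rng = np.random.default_rng(0)
x, h = rng.standard_normal(d_in), np.zeros(d_h)
W = lambda r, c: rng.standard_normal((r, c)) * 0.1
Wz, Uz, Wr, Ur, Wh, Uh = (W(d_h, d_in), W(d_h, d_h), W(d_h, d_in),
                          W(d_h, d_h), W(d_h, d_in), W(d_h, d_h))
print(gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh).shape)        # (16,)
print(simplified_gru_cell(x, h, Wz, Wh, Uh).shape)         # (16,)
```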

12.
Video question answering (Video QA) requires a thorough understanding of both video content and question language, as well as grounding of the textual semantics in the visual content of videos. Thus, to answer questions more accurately, not only should each semantic entity be associated with a certain visual instance in the video frames, but the action or event in the question should also be localized to the corresponding temporal slot. This makes it a more challenging task that requires reasoning over correlations between instances across temporal frames. In this paper, we propose an instance-sequence reasoning network for video question answering with instance grounding and temporal localization. In our model, both visual instances and textual representations are first embedded into graph nodes, which benefits the integration of intra- and inter-modality information. Then, we propose graph causal convolution (GCC) on the graph-structured sequence with a large receptive field to capture more causal connections, which is vital for visual grounding and instance-sequence reasoning. Finally, we evaluate our model on the TVQA+ dataset, which contains ground truth for instance grounding and temporal localization, on three other Video QA datasets, and on three multimodal language processing datasets. Extensive experiments demonstrate the effectiveness and generalization of the proposed method. Specifically, our method outperforms state-of-the-art methods on these benchmarks.

13.
Action recognition on large numbers of categories of unconstrained videos taken from the web is a very challenging problem compared to datasets like KTH (6 actions), IXMAS (13 actions), and Weizmann (10 actions). Challenges like camera motion, different viewpoints, large interclass variations, cluttered backgrounds, occlusions, bad illumination conditions, and the poor quality of web videos cause the majority of state-of-the-art action recognition approaches to fail. An increased number of categories and the inclusion of easily confused actions add to the challenges. In this paper, we propose using the scene context information obtained from moving and stationary pixels in the key frames, in conjunction with motion features, to solve the action recognition problem on a large (50-action) dataset with videos from the web. We perform a combination of early and late fusion on multiple features to handle the very large number of categories. We demonstrate that scene context is a very important feature for performing action recognition on very large datasets. The proposed method does not require any kind of video stabilization, person detection, or tracking and pruning of features. Our approach gives good performance on a large number of action categories; it has been tested on the UCF50 dataset with 50 action categories, which is an extension of the UCF YouTube Action (UCF11) dataset containing 11 action categories. We also tested our approach on the KTH and HMDB51 datasets for comparison.

14.
This paper presents a novel approach for action recognition, localization, and video matching based on a hierarchical codebook model of local spatio-temporal video volumes. Given a single example of an activity as a query video, the proposed method finds similar videos to the query in a target video dataset. The method is based on the bag-of-video-words (BOV) representation and does not require prior knowledge about actions, background subtraction, motion estimation, or tracking. It is also robust to spatial and temporal scale changes, as well as some deformations. The hierarchical algorithm codes a video as a compact set of spatio-temporal volumes while considering their spatio-temporal compositions in order to account for spatial and temporal contextual information. This hierarchy is achieved by first constructing a codebook of spatio-temporal video volumes. Then a large contextual volume containing many spatio-temporal volumes (an ensemble of volumes) is considered. These ensembles are used to construct a probabilistic model of video volumes and their spatio-temporal compositions. The algorithm was applied to three video datasets for action recognition with different complexities (KTH, Weizmann, and MSR II), and the results were superior to other approaches, especially in the case of a single training example and cross-dataset action recognition.

15.
Nowadays, numerous social videos pervade the web. Social web videos are characterized by accompanying rich contextual information that describes the content of the videos and thus greatly facilitates video search and browsing. Generally, such contextual data, e.g., tags, are provided at the whole-video level, without temporal indication of when they actually appear in the video, let alone spatial annotation of object-related tags in the video frames. However, many tags only describe parts of the video content. Therefore, tag localization, the process of assigning tags to the relevant video segments, frames, or even regions in frames, is gaining increasing research interest, and a benchmark dataset for the fair evaluation of tag localization algorithms is highly desirable. In this paper, we describe and release a dataset called DUT-WEBV, which contains about 4,000 videos collected from the YouTube portal by issuing 50 concepts as queries. These concepts cover a wide range of semantic aspects, including scenes like “mountain”, events like “flood”, objects like “cows”, sites like “gas station”, and activities like “handshaking”, offering great challenges to the tag (i.e., concept) localization task. For each video of a tag, we carefully annotate the time durations in which the tag appears in the video, and for object-related tags we also label the spatial location of the object with a mask in the frames. Besides the video itself, contextual information such as thumbnail images, titles, and YouTube categories is also provided. Together with this benchmark dataset, we present a baseline for tag localization using a multiple instance learning approach. Finally, we discuss some open research issues for tag localization in web videos.

16.
A Self-Attention-Based Video Summarization Model
To efficiently identify representative content in a video, a video summarization algorithm that assigns different importance to different video frames is proposed. A long short-term memory network first models the temporal relations of the video sequence; a self-attention mechanism then models the importance of each frame and extracts global features; finally, frames are sampled according to the importance score regressed for each frame, and a reinforcement learning strategy is used to optimize the model parameters. In the reinforcement learning formulation, the action is defined as selecting or not selecting each frame, the state is the current selection status of the video, and the reward signal uses diversity and representativeness costs. Video summarization experiments on two public datasets, SumMe and TVSum, measured with the F-measure, show that the proposed algorithm outperforms the other algorithms.
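The diversity and representativeness reward terms of this reinforcement-learning formulation can be sketched as follows; the frame features and the selected index set are stand-ins, and the exact weighting used in the paper may differ.

```python
import numpy as np

def summary_reward(features, selected):
    """Diversity + representativeness reward for a selected frame subset.

    features : (T, d) per-frame feature vectors (stand-ins for CNN features)
    selected : indices of frames chosen by the summarization policy
    """
    F = features / np.linalg.norm(features, axis=1, keepdims=True)
    S = F[selected]

    # diversity: average pairwise dissimilarity among selected frames
    sim = S @ S.T
    n = len(selected)
    diversity = (1.0 - sim[~np.eye(n, dtype=bool)]).mean() if n > 1 else 0.0

    # representativeness: how well the selected frames cover all frames
    dists = 1.0 - F @ S.T                  # cosine distance to each selected frame
    representativeness = np.exp(-dists.min(axis=1).mean())

    return diversity + representativeness

T, d = 50, 32
feats = np.random.default_rng(1).standard_normal((T, d))
print(summary_reward(feats, selected=[3, 17, 42]))
```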

17.
If a person performs a series of continuous actions that are captured in a video, the question considered here is how to segment that video into actions and recognize them. To recognize the actions of a person in a video effectively, a recognition method based on the semi-Markov model framework is proposed. The method uses a set of features over the input-output space to capture the characteristics of the frames adjacent to the boundary between two actions, as well as the characteristics of two adjacent action segments. To improve efficiency, a Viterbi-like algorithm is proposed to solve the optimization problem. Experimental results on different datasets show that the method is effective.
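A Viterbi-style dynamic program over segments rather than frames captures the flavor of the optimization: each position keeps the best score of any labeled segmentation ending there, maximizing over the last segment's label and length. The segment scoring function below is a stand-in for the paper's input-output features.

```python
import numpy as np

def semi_markov_viterbi(T, labels, max_len, seg_score):
    """Best segmentation of frames 0..T-1 into labeled segments.

    seg_score(s, e, y) scores the segment [s, e) with label y (stand-in).
    Returns the optimal score and the list of (start, end, label) segments.
    """
    best = np.full(T + 1, -np.inf); best[0] = 0.0
    back = [None] * (T + 1)
    for e in range(1, T + 1):
        for s in range(max(0, e - max_len), e):
            for y in labels:
                cand = best[s] + seg_score(s, e, y)
                if cand > best[e]:
                    best[e], back[e] = cand, (s, y)
    segs, e = [], T
    while e > 0:
        s, y = back[e]
        segs.append((s, e, y)); e = s
    return best[T], segs[::-1]

# toy example: frames 0-4 favour label "walk", 5-9 favour "run"
score = lambda s, e, y: sum(1.0 if (t < 5) == (y == "walk") else -1.0 for t in range(s, e))
print(semi_markov_viterbi(10, ["walk", "run"], max_len=6, seg_score=score))
```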

18.
A variety of recognition architectures based on deep convolutional neural networks have been devised for labeling videos containing human motion with action labels. However, most works so far cannot properly handle the temporal dynamics encoded in multiple contiguous frames, which is what distinguishes action recognition from other recognition tasks. This paper develops a temporal extension of convolutional neural networks to exploit motion-dependent features for recognizing human actions in video. Our approach differs from other recent attempts in that it uses multiplicative interactions between convolutional outputs to describe motion information across contiguous frames. Interestingly, a representation of image content arises while the motion pattern is being extracted, which allows our model to effectively incorporate both for video analysis. Additional theoretical analysis proves that motion- and content-dependent features arise simultaneously from the developed architecture, whereas previous works mostly deal with the two separately. Our architecture is trained and evaluated on the standard video action benchmarks KTH and UCF101, where it matches the state of the art and has distinct advantages over previous attempts to use deep convolutional architectures for action recognition.
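The core idea of multiplicative interactions between convolutional outputs of contiguous frames can be sketched in a few lines of torch: the same convolution is applied to two adjacent frames and the feature maps are multiplied elementwise to produce a motion-dependent response. Layer sizes and the pooling head are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiplicativeMotion(nn.Module):
    """Motion-dependent features from elementwise products of conv outputs."""

    def __init__(self, in_ch=3, feat_ch=32, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, feat_ch, kernel_size=5, padding=2)
        self.head = nn.Linear(feat_ch, num_classes)

    def forward(self, frame_t, frame_t1):
        f_t = torch.relu(self.conv(frame_t))     # content of frame t
        f_t1 = torch.relu(self.conv(frame_t1))   # content of frame t+1
        motion = f_t * f_t1                      # multiplicative interaction
        pooled = motion.mean(dim=(2, 3))         # global average pooling
        return self.head(pooled)

model = MultiplicativeMotion()
a, b = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)   # two contiguous frames
print(model(a, b).shape)                                       # torch.Size([2, 10])
```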

19.
Anticipating future actions without observing any partial videos of those actions plays an important role in action prediction and is a challenging task. To obtain abundant information for action anticipation, some methods integrate multimodal contexts, including scene object labels. However, extensively labelling each frame in video datasets requires considerable effort. In this paper, we develop a weakly supervised method that integrates global motion and local fine-grained features from current action videos to predict the next action label without the need for specific scene context labels. Specifically, we extract diverse types of local features with weakly supervised learning, including object appearance and human pose representations, without ground truth. Moreover, we construct a graph convolutional network to exploit the inherent relationships of humans and objects in the current incident. We evaluate the proposed model on two datasets, the MPII-Cooking dataset and the EPIC-Kitchens dataset, and demonstrate the generalizability and effectiveness of our approach for action anticipation.

20.
Temporal action proposal generation aims to output the starting and ending times of each potential action in long videos and often suffers from high computation cost. To address this issue, we propose a new temporal convolution network called Multipath Temporal ConvNet (MTCN). In our work, a novel high-performance ring parallel architecture is further introduced into temporal action proposal generation in order to meet the requirements of large memory occupation and a large number of videos. Remarkably, the total data transmission is reduced by adding connections between the multiple computing loads in the newly developed architecture. Compared to the traditional Parameter Server architecture, our parallel architecture is more efficient on temporal action detection tasks with multiple GPUs. We conduct experiments on ActivityNet-1.3 and THUMOS14, where our method outperforms other state-of-the-art temporal action detection methods with high recall and high temporal precision. In addition, a time metric is proposed to evaluate the speed of the distributed training process.
