Similar Literature
20 similar documents found (search time: 640 ms)
1.
Modeling and reasoning about the interactions between multiple entities (actors and objects) benefit the action recognition task. In this paper, we propose a 3D Deformable Convolution Temporal Reasoning (DCTR) network to model and reason about the latent relationship dependencies between different entities in videos. The proposed DCTR network consists of a spatial modeling module and a temporal reasoning module. The spatial modeling module uses 3D deformable convolution to capture relationship dependencies between different entities in the same frame, while the temporal reasoning module uses Conv-LSTM to reason about changes in the relationship dependencies of multiple entities along the temporal dimension. Experiments on the Moments-in-Time, UCF101 and HMDB51 datasets demonstrate that the proposed method outperforms several state-of-the-art methods.

2.
Most existing Action Quality Assessment (AQA) methods for scoring sports videos have thoroughly researched how to evaluate a single action, or several sequentially defined actions, performed in short sports videos such as diving and vault. They attempt to extract features directly from RGB videos through 3D ConvNets, which leaves the features entangled with ambiguous scene information. To investigate the effectiveness of deep pose feature learning for automatically evaluating complicated activities in long-duration sports videos, such as figure skating and artistic gymnastics, we propose a skeleton-based deep pose feature learning method. For pose feature extraction, a spatial–temporal pose extraction module (STPE) is built to capture the subtle changes of human body movements and obtain detailed representations of skeletal data in the space and time dimensions. For temporal information representation, an inter-action temporal relation extraction module (ATRE) is implemented with a recurrent neural network to model the dynamic temporal structure of skeletal subsequences. We evaluate the proposed method on the figure skating activities of the MIT-Skate and FIS-V datasets. The experimental results show that the proposed method is more effective than RGB video-based deep feature learning methods, including SENet and C3D. Significant progress in Spearman Rank Correlation (SRC) is achieved on the MIT-Skate dataset. On the FIS-V dataset, for both the Total Element Score (TES) and the Program Component Score (PCS), better SRC and MSE between the predicted scores and the judges' scores are achieved compared with the SENet and C3D feature methods.

3.
Three-dimensional human pose estimation (3D HPE) has broad application prospects in the fields of trajectory prediction, posture tracking and action analysis. However, frequent self-occlusions and the substantial depth ambiguity in two-dimensional (2D) representations hinder further improvement of accuracy. In this paper, we propose a novel video-based human body geometric aware network to mitigate these problems. Our network is implicitly aware of the geometric constraints of the human body by capturing spatial and temporal context information from 2D skeleton data. Specifically, a novel skeleton attention (SA) mechanism is proposed to model geometric context dependencies among different body joints, thereby improving the spatial feature representation ability of the network. To enhance temporal consistency, a novel multilayer perceptron (MLP)-Mixer based structure is exploited to comprehensively learn temporal context information from input sequences. We conduct experiments on publicly available challenging datasets to evaluate the proposed approach. Our approach outperforms the previous best approach by 0.5 mm on the Human3.6M dataset, and also demonstrates significant improvements on the HumanEva-I dataset.
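The skeleton attention idea described above — letting every joint attend to every other joint so that geometric context flows between them — can be sketched as plain scaled dot-product attention over the joint axis. This is a minimal NumPy illustration, not the paper's actual SA module; the 17-joint layout, 8-dim features and random projection weights are assumptions for the toy example.

```python
import numpy as np

def skeleton_attention(X, Wq, Wk, Wv):
    """Scaled dot-product attention across body joints.

    X : (J, d) feature per joint; returns (J, d) context-enhanced features
    where each joint aggregates information from all other joints, plus
    the (J, J) attention map of joint-to-joint dependencies.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])           # (J, J) joint-joint affinities
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)                # row-wise softmax
    return A @ V, A

rng = np.random.default_rng(0)
J, d = 17, 8                                         # 17 joints (COCO-style), 8-dim features
X = rng.standard_normal((J, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out, A = skeleton_attention(X, Wq, Wk, Wv)
```

Each row of `A` is a probability distribution over joints, so occluded joints can borrow evidence from geometrically related ones.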

4.
Human pose estimation aims at predicting the poses of human body parts in images or videos. Since pose motions are often driven by specific human actions, knowing the body pose of a human is critical for action recognition. This survey focuses on recent progress in human pose estimation and its application to action recognition. We attempt to provide a comprehensive review of recent bottom-up and top-down deep human pose estimation models, as well as how pose estimation systems can be used for action recognition. Thanks to the availability of commodity depth sensors like Kinect and their capability for skeletal tracking, there is a large body of literature on 3D skeleton-based action recognition, and survey papers such as [1] already cover this topic. In this survey, we focus on 2D skeleton-based action recognition, where the human poses are estimated from regular RGB images instead of depth images. We summarize the performance of recent action recognition methods that use poses estimated from color images as input, and show that there is much room for improvement in this direction.

5.
Graph convolutional networks (GCNs) have proven to be an effective approach for 3D human pose estimation. By naturally modeling the skeleton structure of the human body as a graph, GCNs are able to capture the spatial relationships between joints and learn an efficient representation of the underlying pose. However, most GCN-based methods use a shared weight matrix, making it challenging to accurately capture the different and complex relationships between joints. In this paper, we introduce an iterative graph filtering framework for 3D human pose estimation, which aims to predict the 3D joint positions given a set of 2D joint locations in images. Our approach builds upon the idea of iteratively solving graph filtering with Laplacian regularization via the Gauss–Seidel iterative method. Motivated by this iterative solution, we design a Gauss–Seidel network (GS-Net) architecture, which makes use of weight and adjacency modulation, skip connection, and a pure convolutional block with layer normalization. Adjacency modulation facilitates the learning of edges that go beyond the inherent connections of body joints, resulting in an adjusted graph structure that reflects the human skeleton, while skip connections help maintain crucial information from the input layer’s initial features as the network depth increases. We evaluate our proposed model on two standard benchmark datasets, and compare it with a comprehensive set of strong baseline methods for 3D human pose estimation. Our experimental results demonstrate that our approach outperforms the baseline methods on both datasets, achieving state-of-the-art performance. Furthermore, we conduct ablation studies to analyze the contributions of different components of our model architecture and show that the skip connection and adjacency modulation help improve the model performance.
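The iterative solution that motivates GS-Net — graph filtering with Laplacian regularization, i.e. solving (I + λL)x = y by Gauss–Seidel sweeps over the skeleton graph — can be reproduced numerically in a few lines. The 4-joint chain skeleton, λ = 0.5 and iteration count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gauss_seidel_filter(L, y, lam=0.5, iters=50):
    """Solve (I + lam*L) x = y with Gauss-Seidel iterations.

    L : graph Laplacian (n x n), y : noisy per-joint signal (n,),
    x : smoothed signal that respects the skeleton graph structure.
    """
    A = np.eye(len(y)) + lam * L
    x = np.zeros_like(y)
    for _ in range(iters):
        for i in range(len(y)):
            # update joint i using the freshest values of its neighbours
            x[i] = (y[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# toy 4-joint chain skeleton: 0-1-2-3
Adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
L = np.diag(Adj.sum(1)) - Adj           # combinatorial graph Laplacian
y = np.array([0.0, 1.0, 0.9, 2.0])      # noisy 1D coordinate per joint
x = gauss_seidel_filter(L, y)
x_direct = np.linalg.solve(np.eye(4) + 0.5 * L, y)
```

Since I + λL is symmetric positive definite, Gauss–Seidel converges to the direct solve; GS-Net unrolls such sweeps into learnable layers.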

6.
Monocular 3D human pose estimation is a challenging task because of depth ambiguity and occlusion. Recent methods exploit spatio-temporal information and generate different hypotheses to simulate diverse solutions to these problems. However, these methods do not fully extract spatial and temporal information or the relationships between hypotheses. To address these limitations, we propose EMHIFormer (Enhanced Multi-Hypothesis Interaction Transformer) to model 3D human pose with better performance. In detail, we build connections between different Transformer layers so that our model is able to integrate spatio-temporal information from the previous layer and establish more comprehensive hypotheses. Furthermore, a cross-hypothesis model consisting of a parallel Transformer is proposed to strengthen the relationship between the various hypotheses. We also design an enhanced regression head which adaptively adjusts the channel weights to produce the final 3D human pose. Extensive experiments are conducted on two challenging datasets, Human3.6M and MPI-INF-3DHP, to evaluate EMHIFormer. The results show that EMHIFormer achieves competitive performance on Human3.6M and state-of-the-art performance on MPI-INF-3DHP. Compared with its closest counterpart, MHFormer, our model outperforms it by 0.6% P-MPJPE and 0.5% MPJPE on the Human3.6M dataset and by 46.0% MPJPE on MPI-INF-3DHP.

7.
Recent developments in video super-resolution reconstruction often exploit spatial and temporal contexts from the input frame sequence by making use of explicit motion estimation, e.g., optical flow, which may introduce accumulated errors and requires heavy computation to obtain an accurate estimate. In this paper, we propose a novel multi-branch dilated convolution module for real-time frame alignment without explicit motion estimation, which is combined with a depthwise separable up-sampling module to form a sophisticated real-time video super-resolution network. Specifically, the proposed framework can efficiently acquire a larger receptive field and learn spatial–temporal features at multiple scales with minimal computational operations and memory requirements. Extensive experiments show that the proposed network outperforms current state-of-the-art real-time video super-resolution networks, e.g., VESPCN and 3DVSRnet, in terms of average PSNR (by 0.49 dB and 0.17 dB, respectively) on various datasets, while requiring fewer multiplication operations.
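The receptive-field argument above — that branches with the same small kernel but growing dilation rates see progressively wider temporal/spatial context at no extra weight cost — can be checked with a toy 1D dilated convolution. This is a generic sketch of dilated convolution, not the paper's module; the 3-tap averaging kernel and dilation rates (1, 2, 4) are assumptions.

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """'Valid' 1D convolution with a dilated kernel.

    A k-tap kernel with dilation d covers (k-1)*d + 1 input samples,
    so the receptive field grows with d while the weight count stays k.
    """
    k, d = len(w), dilation
    span = (k - 1) * d + 1
    return np.array([sum(w[j] * x[i + j * d] for j in range(k))
                     for i in range(len(x) - span + 1)])

x = np.arange(16, dtype=float)
w = np.array([1.0, 1.0, 1.0])
# multi-branch: same kernel, growing dilation -> growing receptive field
branches = {d: dilated_conv1d(x, w, d) for d in (1, 2, 4)}
rf = {d: (len(w) - 1) * d + 1 for d in (1, 2, 4)}   # 3, 5, 9 samples
```

Concatenating such branches gives multi-scale context with three weights per branch, which is why the design suits real-time budgets.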

8.
In the action recognition task, fully learning and exploiting the correlation between the spatial and temporal features of a video is critical to the final recognition result. To address the drop in accuracy caused by traditional action recognition methods ignoring spatio-temporal feature correlations and fine-grained features, this paper proposes a human action recognition method based on a convolutional gated recurrent unit (convolutional GRU, ConvGRU) and attentional feature fusion (AFF). First, an Xception network is used as the spatial feature extraction network for video frames, and spatial-temporal excitation (STE) and channel excitation (CE) modules are introduced to strengthen the modeling of temporal actions while extracting spatial features. In addition, the traditional long short-term memory (LSTM) network is replaced with a ConvGRU network, which uses convolution to further mine the spatial features of video frames while extracting temporal features. Finally, the output classifier is improved by introducing a feature fusion module based on improved multi-scale channel attention (MCAM-AFF), strengthening the recognition of fine-grained features and improving the model's accuracy. Experimental results show recognition accuracies of 95.66% and 69.82% on the UCF101 and HMDB51 datasets, respectively. The algorithm captures more complete spatio-temporal features and is superior to current mainstream models.
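The ConvGRU replacement described above keeps GRU-style gating but computes the gates with convolutions, so spatial structure survives the recurrence. A minimal sketch of one ConvGRU step follows; for brevity it uses 1×1 convolutions (pure channel mixing per pixel) instead of larger kernels, and the feature-map sizes and random weights are assumptions, not the paper's configuration.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def convgru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One ConvGRU step on (C, H, W) feature maps.

    A 1x1 conv with weight M of shape (C_out, C_in) is an einsum over
    the channel axis, preserving the spatial layout of the feature map.
    """
    conv = lambda M, t: np.einsum('oc,chw->ohw', M, t)
    z = sigmoid(conv(Wz, x) + conv(Uz, h))            # update gate
    r = sigmoid(conv(Wr, x) + conv(Ur, h))            # reset gate
    h_tilde = np.tanh(conv(Wh, x) + conv(Uh, r * h))  # candidate state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(1)
C, H, W = 4, 5, 5
x = rng.standard_normal((C, H, W))                    # current frame's features
h0 = np.zeros((C, H, W))                              # initial hidden state
Wz, Uz, Wr, Ur, Wh, Uh = (0.1 * rng.standard_normal((C, C)) for _ in range(6))
h1 = convgru_step(x, h0, Wz, Uz, Wr, Ur, Wh, Uh)
```

Compared with a flattened LSTM, the hidden state here stays a feature map, which is what lets the recurrence keep mining spatial detail.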

9.
As a challenging video classification task, action recognition has become a significant topic in the computer vision community. The most popular methods based on the two-stream architecture still simply fuse the prediction scores of each stream. In that case, the complementary characteristics of the two streams cannot be fully utilized, and the effect of shallower features is often overlooked. In addition, treating all features equally may weaken the role of the features that contribute most to classification. Accordingly, a novel network called Multiple Depth-levels Features Fusion Enhanced Network (MDFFEN) is proposed. It improves on two aspects of the two-stream architecture. In terms of the two-stream interaction mechanism, multiple depth-levels features fusion (MDFF) is formed to aggregate spatial–temporal features extracted from several sub-modules of the original two streams via spatial–temporal features fusion (STFF). With respect to further refining the spatio-temporal features, we propose a group-wise spatial-channel enhance (GSCE) module to automatically highlight meaningful regions and expressive channels through priority assignment. Competitive results are achieved when we validate MDFFEN on three challenging public action recognition datasets, HMDB51, UCF101 and ChaLearn LAP IsoGD.

10.
In this paper, we propose a spatio-temporal contextual network, STC-Flow, for optical flow estimation. Unlike previous optical flow estimation approaches with local pyramid feature extraction and multi-level correlation, we propose a contextual relation exploration architecture that captures rich long-range dependencies in the spatial and temporal dimensions. Specifically, STC-Flow contains three key context modules — a pyramidal spatial context module, a temporal context correlation module and a recurrent residual contextual upsampling module — for feature extraction, correlation, and flow reconstruction, respectively. Experimental results demonstrate that the proposed scheme achieves state-of-the-art performance among two-frame methods on the Sintel and KITTI datasets.

11.
张伟, 钱沄涛. Journal of Signal Processing (《信号处理》), 2019, 35(3): 507–515.
Facial landmark detection is one of the classic problems in computer vision and has an important impact on 3D face reconstruction, expression recognition, head pose estimation, face tracking, and more. Models based on deep neural networks currently achieve the most prominent performance in facial landmark detection and have been widely adopted. However, existing landmark detection network architectures are increasingly complex, demanding ever more computation and storage for training and testing. This paper proposes a new, compact landmark detection network structure to replace existing architectures. Compared with other architectures, the compact network contains only a feature extraction module and an upsampling module composed of several deconvolution layers. In addition, we add a global constraint over all facial landmarks to the network to reduce the prediction of outliers. Experiments show that the compact network with the global constraint surpasses current typical deep neural network detection models on the 300-W dataset.
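Networks of the kind described above typically end their deconvolution upsampling with one heatmap per landmark, which is then read out by taking the peak. A minimal sketch of that readout step follows; the heatmap sizes and peak positions are assumptions for the toy example, not values from the paper.

```python
import numpy as np

def decode_landmarks(heatmaps):
    """Decode landmark positions from per-landmark heatmaps.

    heatmaps : (N, H, W); returns (N, 2) integer (row, col) peak
    positions, one per landmark, by flattened argmax.
    """
    N, H, W = heatmaps.shape
    flat = heatmaps.reshape(N, -1).argmax(axis=1)
    return np.stack([flat // W, flat % W], axis=1)

# toy heatmaps with known peaks for two landmarks
hm = np.zeros((2, 8, 8))
hm[0, 3, 5] = 1.0
hm[1, 6, 2] = 1.0
pts = decode_landmarks(hm)
```

A global constraint like the one in the abstract would act on all N decoded points jointly, penalizing configurations where one peak strays far from the face shape implied by the others.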

12.
康书宁, 张良. Journal of Signal Processing (《信号处理》), 2020, 36(11): 1897–1905.
Deep learning based human action recognition has achieved good results in recent years; in particular, two-dimensional convolutional neural networks can learn the spatial features of human actions fairly thoroughly, but problems remain in capturing long-range motion information. To address this, a human action recognition model based on semantic feature cube slicing is proposed to jointly learn the appearance and motion features of actions. Building on Temporal Segment Networks (TSN), the model selects InceptionV4 as the backbone network to extract the appearance features of human actions, and divides the resulting 3D feature cube into 2D spatial and temporal feature-map slices. In addition, a spatio-temporal feature fusion module is designed to cooperatively learn the weight assignment across the multi-dimensional slices, yielding the spatio-temporal features of human actions and enabling end-to-end training of the network. Compared with the TSN model, this model improves accuracy on both the UCF101 and HMDB51 datasets. Experimental results show that, without significantly increasing the number of network parameters, the model captures richer motion information and improves human action recognition results.
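The slicing operation described above — cutting a backbone's 3D feature cube into 2D spatial slices (per time step) and temporal slices (per spatial row/column) — is just indexing along different axes. The sketch below illustrates the idea with toy tensor sizes; the actual cube dimensions in the paper differ.

```python
import numpy as np

# A (T, H, W) semantic feature cube from the backbone (toy sizes assumed).
T, H, W = 6, 7, 7
cube = np.random.default_rng(2).standard_normal((T, H, W))

# Spatial slices: one (H, W) appearance map per time step.
spatial_slices = [cube[t] for t in range(T)]

# Temporal slices: fix a spatial row (or column) and keep the time axis,
# exposing how features at that location evolve -> motion information.
temporal_slices_h = [cube[:, h, :] for h in range(H)]   # each (T, W)
temporal_slices_w = [cube[:, :, w] for w in range(W)]   # each (T, H)
```

A fusion module can then learn per-slice weights so that appearance-heavy and motion-heavy slices contribute according to their usefulness for the action class.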

13.
Recognizing human interactions in still images is quite a challenging task since, compared to videos, a single image offers only a glimpse of the interaction. This work investigates the role of human poses in recognizing human–human interactions in still images. To this end, a multi-stream convolutional neural network architecture is proposed, which fuses different levels of human pose information to better recognize human interactions. In this context, several pose-based representations are explored. Experimental evaluations on an extended benchmark dataset show that the proposed multi-stream pose Convolutional Neural Network successfully discriminates a wide range of human–human interactions, and that human pose, when used in conjunction with the overall context, provides discriminative cues about human–human interactions.

14.
15.
6D object pose (3D rotation and translation) estimation from RGB-D images is an important and challenging task in computer vision and has been widely applied in a variety of applications such as robotic manipulation, autonomous driving, and augmented reality. Prior works extract global features or reason about local appearance from an individual frame, neglecting the spatial geometric relevance between two frames and limiting their performance for occluded or truncated objects in heavily cluttered scenes. In this paper, we present a dual-stream network for estimating the 6D pose of a set of known objects from RGB-D images. In contrast to prior work, our network learns latent geometric consistency in pairwise dense feature representations from multiple observations of the same objects in a self-supervised manner. We show in experiments that our method outperforms state-of-the-art approaches for 6D object pose estimation on two challenging datasets, YCB-Video and LineMOD.

16.
Human actions can be considered as a sequence of body poses over time, usually represented by coordinates corresponding to human skeleton models. Recently, a variety of low-cost devices have been released that are able to produce markerless real-time pose estimation. Nevertheless, limitations of the incorporated RGB-D sensors can produce inaccuracies, necessitating alternative representation and classification schemes in order to boost performance. In this context, we propose a method for action recognition where skeletal data are initially processed to obtain robust and invariant pose representations, and vectors of dissimilarities to a set of prototype actions are then computed. The task of recognition is performed in the dissimilarity space using sparse representation. A new publicly available dataset, created for evaluation purposes, is introduced in this paper. The proposed method was also evaluated on other public datasets, and the results are compared to those of similar methods.
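The dissimilarity-space idea above replaces the raw pose representation with a vector of distances to prototype actions, and classification happens in that space. A minimal sketch follows; plain Euclidean distance between flattened, equal-length sequences stands in for whatever alignment-based dissimilarity the paper uses, and the prototype count and dimensions are assumptions.

```python
import numpy as np

def dissimilarity_vector(seq, prototypes):
    """Represent an action by its distances to a set of prototype actions.

    seq, prototypes[i] : flattened pose sequences of equal length.
    Returns a vector with one dissimilarity per prototype; this vector,
    not the raw poses, becomes the input to the classifier.
    """
    return np.array([np.linalg.norm(seq - p) for p in prototypes])

rng = np.random.default_rng(3)
prototypes = [rng.standard_normal(30) for _ in range(5)]   # 5 prototype actions
query = prototypes[2] + 0.01 * rng.standard_normal(30)     # noisy copy of action 2
dvec = dissimilarity_vector(query, prototypes)
nearest = int(np.argmin(dvec))                             # nearest-prototype decision
```

The paper performs the final decision with sparse representation over such vectors rather than a simple nearest-prototype rule; the sketch only shows the change of representation.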

17.
3D skeleton sequences contain more effective and discriminative information than RGB video and are more suitable for human action recognition. Accurate extraction of human skeleton information is the key to high action recognition accuracy. Considering the correlations between joint points, in this work we first propose a skeleton feature extraction method based on complex networks. The relationships between human skeleton points in each frame are encoded as a network, and the changes of an action over time are described by a time-series network composed of skeleton points. Network topology attributes are used as feature vectors, and complex network coding is combined with an LSTM to recognize human actions. The method was verified on the NTU RGB+D60, MSR Action3D and UTKinect-Action3D datasets, achieving good performance on each. This shows that extracting skeleton features based on complex networks can properly identify different actions. This method, which simultaneously considers temporal information and the relationships between skeleton joints, plays an important role in the accurate recognition of human actions.
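The "network topology attributes as feature vectors" step above can be illustrated on a toy skeleton graph: encode the joints as an adjacency matrix and compute per-node attributes such as degree and local clustering coefficient. This is a generic complex-network sketch under an assumed 5-joint skeleton, not the paper's exact attribute set.

```python
import numpy as np

def topology_features(adj):
    """Per-node topology attributes of a skeleton graph.

    Returns the degree of each node followed by its local clustering
    coefficient (fraction of possible neighbour-neighbour edges present),
    concatenated into one feature vector.
    """
    n = len(adj)
    deg = adj.sum(axis=1)
    clust = np.zeros(n)
    for i in range(n):
        nb = np.flatnonzero(adj[i])
        if len(nb) >= 2:
            links = adj[np.ix_(nb, nb)].sum() / 2       # edges among neighbours
            clust[i] = 2 * links / (len(nb) * (len(nb) - 1))
    return np.concatenate([deg, clust])

# 5-joint toy skeleton: edges 0-1 (spine), 1-2 and 1-3 (arms), 1-4 (head)
adj = np.zeros((5, 5))
for a, b in [(0, 1), (1, 2), (1, 3), (1, 4)]:
    adj[a, b] = adj[b, a] = 1
feat = topology_features(adj)
```

Feeding one such vector per frame into an LSTM gives the time-series-of-networks pipeline the abstract describes.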

18.
In this paper, we propose a strong two-stream point cloud sequence network, VirtualActionNet, for 3D human action recognition. In the data preprocessing stage, we transform the depth sequence into a point cloud sequence as the input of our VirtualActionNet. In order to encode intra-frame appearance structures, static point cloud technologies are first employed as a virtual action generation sequence module to abstract the point cloud sequence into a virtual action sequence. Then, a two-stream network framework is presented to model the virtual action sequence. Specifically, we design an appearance stream module for aggregating all the appearance information preserved in each virtual action frame. Moreover, a motion stream module is introduced to capture dynamic changes along the time dimension. Finally, a joint loss strategy is adopted during training to improve the action prediction accuracy of the two-stream network. Extensive experiments on three publicly available datasets demonstrate the effectiveness of the proposed VirtualActionNet.

19.
罗元, 李丹, 张毅. Semiconductor Optoelectronics (《半导体光电》), 2020, 41(3): 414–419.
Sign language recognition is widely used in communication between deaf and hearing people. To address the low recognition rates caused by insufficient spatio-temporal feature extraction in sign language recognition, a novel sign language recognition model based on spatio-temporal attention is proposed. First, a spatial attention module based on a residual 3D convolutional network (Residual 3D Convolutional Neural Network, Res3DCNN) is proposed to automatically attend to salient regions in space; then, a temporal attention module based on a convolutional long short-term memory network (Convolutional Long Short-Term Memory, ConvLSTM) is proposed to weigh the importance of video frames. The key of the proposed algorithm is to attend to salient regions in space while automatically selecting key frames in time. Finally, the effectiveness of the algorithm is verified on the CSL sign language dataset.

20.
Human action analysis has been an active research area in computer vision and has many useful applications such as human–computer interaction. Most state-of-the-art approaches to human action analysis are data-driven and focus on general action recognition. In this paper, we aim to analyze fitness actions from skeleton sequences and propose an efficient and robust fitness action analysis framework. First, fitness actions from 15 subjects are captured and compiled into a fitness action dataset (Fitness-28). Second, skeleton information is extracted and aligned with a simplified human skeleton model. Third, the aligned skeleton information is transformed into a uniform human-centered coordinate system with the proposed spatial–temporal skeleton encoding method. Finally, an action classifier and a local–global geometrical registration strategy are constructed to analyze the fitness actions. Experimental results demonstrate that our method can effectively assess fitness actions and performs well in an artificial intelligence fitness system.
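The third step above — transforming aligned skeletons into a uniform human-centered coordinate system — usually amounts to translating every frame so a root joint sits at the origin and normalizing out body size. The sketch below shows that generic normalization under assumed shapes (10 frames, 15 joints, root joint at index 0); it is not the paper's exact encoding.

```python
import numpy as np

def encode_skeleton(frames, center_idx=0):
    """Map a (T, J, 3) skeleton sequence to a human-centred frame.

    Translate each frame so the chosen root joint is the origin, then
    divide by the mean root-to-joint distance for subject-size invariance.
    """
    centred = frames - frames[:, center_idx:center_idx + 1, :]
    scale = np.linalg.norm(centred, axis=2).mean()
    return centred / (scale + 1e-8)

rng = np.random.default_rng(4)
seq = rng.standard_normal((10, 15, 3)) + 5.0   # skeleton far from the origin
enc = encode_skeleton(seq)
```

After this step, two subjects performing the same exercise at different positions and body sizes produce comparable coordinates, which is what makes downstream classification and geometric registration reliable.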
