Similar Documents
Found 20 similar records (search time: 185 ms)
1.
Behavior analysis has broad application prospects in areas such as intelligent video surveillance, human-computer interaction, automatic recognition and alarming, and public security, and has become a research hotspot with considerable potential economic value. With the rapid development of artificial intelligence and automated control, behavior analysis has become a mainstay of AI research pursued by researchers at home and abroad, and substantial progress has been made in research approaches, model algorithms, and description methods for human behavior analysis. According to the recognition technology adopted, mainstream human behavior recognition falls into four categories: computer-vision-based, sensor-based, location-based, and human-object-interaction-based recognition. This article focuses on two aspects, behavior recognition techniques and their applications: it surveys existing techniques, discusses the state of research in each direction, and summarizes the remaining problems and possible future directions of behavior analysis.

2.
As an important branch of computer vision, abnormal behavior recognition and detection has been widely applied in public security, artificial intelligence, traffic management, and other fields. Selecting appropriate feature extraction and abnormal-behavior recognition and detection methods for each application scenario, so as to guarantee real-time warning accuracy and safeguard public safety, is crucial in practice. This article therefore surveys video-based methods for recognizing and detecting abnormal crowd behavior. It first introduces the object detection algorithms used in abnormal behavior analysis; it then summarizes feature extraction methods, whose choice and accuracy directly affect subsequent classification results; next, it reviews the mainstream algorithms for abnormal behavior recognition and abnormal behavior detection and summarizes the performance of commonly used detection methods; finally, it outlines future research directions for the field.

3.
With the rapid development of information technology, human behavior recognition has been introduced into many fields, such as security surveillance, motion analysis, computer-aided medical diagnosis, and intelligent human-computer interaction; the key to its implementation lies in suitable feature fusion methods. This article analyzes human behavior recognition techniques together with interest-point extraction, mixed-scale feature models, and the application of multiple kernel learning (MKL), with the aim of promoting the development of human behavior recognition technology.

4.
《信息技术》2016,(7):65-70
A human action recognition method based on multiple-instance learning is proposed. Using a human skeleton model, attribute features of the main joints serve as the geometric features of human motion, and an adaptive action decomposition algorithm based on these geometric features decomposes a behavior into simple actions. The decomposed behavior is treated as a bag and each action as an instance in the bag; combining multiple-instance learning with the AnyBoost framework yields a multiple-instance action learning algorithm (MILBoost). Learning each action class within the multiple-instance framework produces a strong classifier for recognizing unknown behavior bags. Experimental results show that the method achieves higher recognition accuracy than competing methods and remains accurate in the presence of noise and interference.
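As a rough illustration of the multiple-instance idea this abstract relies on (not the MILBoost algorithm itself), the following sketch scores a bag as the maximum of its instance scores, under the standard MIL assumption that a bag is positive if any instance is positive. The linear instance scorer and the toy feature vectors are hypothetical stand-ins for the boosted strong classifier and the joint-based geometric features.

```python
import numpy as np

def bag_score(instance_scores):
    # Standard MIL assumption: a bag (a behavior) is positive if at least
    # one of its instances (simple actions) is positive, so the bag score
    # is the maximum instance score.
    return max(instance_scores)

def classify_bag(instances, instance_clf, threshold=0.0):
    # instance_clf maps one instance's feature vector to a real-valued score.
    scores = [instance_clf(x) for x in instances]
    return 1 if bag_score(scores) > threshold else 0

# Toy example: instances are 2-D geometric features; the linear scorer is a
# hypothetical stand-in for the learned strong classifier.
w = np.array([1.0, -0.5])
clf = lambda x: float(w @ x)

positive_bag = [np.array([0.1, 0.2]), np.array([2.0, 0.0])]   # one strong instance
negative_bag = [np.array([0.1, 0.2]), np.array([-1.0, 0.5])]  # no strong instance
label_pos = classify_bag(positive_bag, clf)
label_neg = classify_bag(negative_bag, clf)
```

The max-pooling over instance scores is what lets the bag-level label supervise instance-level learning even though individual actions are unlabeled.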

5.
To address the difficulty of balancing computational complexity and recognition accuracy in infrared video, which lacks texture detail, an infrared action recognition method based on global bilinear attention is proposed. For efficient computation on infrared video, a joint-point extraction module based on a two-stage detection network obtains human joint information, and the resulting 3D joint heatmaps are used, innovatively, as the input features of the infrared action recognition network. To further improve accuracy on a lightweight computational budget, a 3D convolutional network with global bilinear attention is proposed, which strengthens attention modeling along both the spatial and channel dimensions and captures global structural information. Experimental results on the InfAR and IITR-IAR datasets demonstrate the effectiveness of the method for infrared action recognition.

6.
Traditional recognition models cannot accurately localize the IP address and target source of the recognition target when recognizing abnormal human behavior. To address this problem, a recognition model for abnormal human behavior based on a recurrent neural network is designed. According to how abnormal behaviors propagate through the recurrent network, the network traffic occupied by regular and repetitive abnormal behaviors is computed; network rules are extracted with Lex.net, and the execution rules of human behaviors are compared with the recurrent network rules to describe the network execution rules of abnormal behaviors. A convolutional neural network (CNN) labeling scheme is then introduced to mark abnormal information, and the labeled results are divided into high- and low-level semantic features to achieve effective behavior recognition. Experiments show that the proposed model can output all abnormal human behaviors within a bounded time and, after recognizing an abnormal behavior, trace the individual responsible for it.

7.
With the implementation of the Made in China 2025 strategy, technologies related to artificial intelligence have attracted explosive attention, and computer vision is one of the research hotspots. Object detection, a hot and difficult topic in computer vision, aims to let computers automatically extract the behavioral features of targets and identify abnormal behaviors; it has important applications in video surveillance, human-computer interaction, medical monitoring, and many other fields. This article describes data training and recognition of pedestrians and vehicles running red lights based on YOLOv5 + DeepSort. The approach tracks pedestrian and vehicle targets and detects red-light violations well; the end-to-end method is practical and effective, and test results are given.
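Downstream of the detector and tracker, a violation check of the kind this abstract describes can be reduced to simple track geometry. The sketch below is an assumption about how such a check might look, not the paper's implementation: a track of centroids (as Deep SORT would produce) violates if it crosses the stop line while the light is red; the `red_frames` interface is hypothetical.

```python
def crossed_stop_line(track, stop_line_y):
    # track: list of (frame_idx, cx, cy) centroids from the tracker.
    # Returns the frame at which the centroid first crosses the stop line
    # (moving in decreasing y), or None if it never crosses.
    for (f0, _, y0), (f1, _, y1) in zip(track, track[1:]):
        if y0 >= stop_line_y > y1:   # crossed between consecutive frames
            return f1
    return None

def is_violation(track, stop_line_y, red_frames):
    # red_frames: set of frame indices during which the light is red
    # (hypothetical interface; a real system would read the signal state).
    f = crossed_stop_line(track, stop_line_y)
    return f is not None and f in red_frames

# A tracked target moving upward in the image, crossing y=100 at frame 2.
track = [(0, 50, 120), (1, 50, 110), (2, 51, 95), (3, 52, 80)]
violation = is_violation(track, stop_line_y=100, red_frames=set(range(0, 10)))
```

Keeping the violation logic separate from detection and tracking is what makes the end-to-end pipeline easy to test per component.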

8.
Contactless wireless human activity sensing infers human behavior from the propagation characteristics of wireless signals. Because contactless sensing is inexpensive, requires no extra devices, and works under non-line-of-sight conditions, it has become a research hotspot at home and abroad. However, existing activity sensing algorithms rely on complex feature extraction, which makes them difficult to run on embedded devices. This paper addresses the problem with a GBDT-based human activity sensing algorithm that achieves accurate recognition without complex feature extraction. Experimental results show that the GBDT-based method reaches about 98% recognition accuracy on the rRuler dataset and is well suited to deployment on real embedded devices.
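A minimal sketch of the classification stage, assuming scikit-learn's gradient-boosted trees in place of the paper's GBDT implementation. The rRuler data is not reproduced here, so the wireless-signal windows are a synthetic stand-in: two activity classes drawn from shifted Gaussians, fed to the classifier without any hand-crafted feature extraction.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for raw signal windows: two classes with shifted means.
X0 = rng.normal(0.0, 1.0, size=(200, 16))
X1 = rng.normal(1.5, 1.0, size=(200, 16))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
# Boosted decision trees learn thresholds on raw inputs directly, which is
# why no separate feature-engineering step is needed.
clf = GradientBoostingClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

On real signal data the classes are far less separable than in this toy setup; the point is only the shape of the pipeline, not the accuracy figure.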

9.
Action recognition of moving humans in video streams is a challenging research topic with broad application prospects. Built on top of human detection and tracking, action recognition is a higher-level part of computer vision, so a robust recognition algorithm has both theoretical significance and wide applicability. From video, a series of attributes such as position-distribution and size-distribution maps are extracted to classify human behaviors. A moving-human segmentation algorithm combining inter-frame differencing with an improved Gaussian mixture model solves the problem of moving-object detection against complex backgrounds. Various metrics of the proposed behavior description method are discussed on experimental data, verifying the rationality and efficiency of the algorithm.
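The inter-frame differencing half of the segmentation step can be sketched in a few lines of NumPy; this is an illustration of the general technique, under the assumption of grayscale frames, and omits the Gaussian-mixture background model the abstract pairs it with.

```python
import numpy as np

def motion_mask(prev_frame, frame, threshold=20):
    # Inter-frame differencing: pixels whose absolute grayscale change
    # exceeds the threshold are marked as moving. A full system would
    # combine this with an (improved) Gaussian mixture background model.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Toy frames: a bright 2x2 "object" moves one pixel to the right, so its
# trailing and leading columns show up as motion.
prev = np.zeros((4, 4), dtype=np.uint8)
prev[1:3, 0:2] = 200
curr = np.zeros((4, 4), dtype=np.uint8)
curr[1:3, 1:3] = 200

mask = motion_mask(prev, curr)
moving_pixels = int(mask.sum())
```

Differencing is cheap and adapts instantly to scene change, but only fires on object edges; that is exactly the gap the mixture background model fills.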

10.
Human activity recognition has broad applications in healthcare, security, entertainment, and other areas. To obtain activity information efficiently and accurately, a personal activity recognition method based on an accelerometer and a neural network is proposed. An accelerometer worn on the hand collects activity data in real time; a BP neural network then analyzes the data and builds a personal activity model that classifies activities such as walking, sitting, lying, standing, and sudden falls. Experimental results show that the method effectively captures the characteristic parameters of personal activity and accurately recognizes the five typical human activities.
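A common way to turn raw accelerometer windows into the fixed-length inputs a BP (back-propagation) network expects is per-axis statistics plus signal magnitude. The sketch below is an assumption about that front end, not the paper's exact feature set; the sensor placement, units, and toy "standing"/"walking" windows are illustrative.

```python
import numpy as np

def window_features(acc):
    # acc: (N, 3) array of accelerometer samples (x, y, z) for one window.
    # Returns per-axis mean and standard deviation plus mean magnitude,
    # a simple 7-dimensional feature vector for the classifier.
    mean = acc.mean(axis=0)
    std = acc.std(axis=0)
    mag = np.linalg.norm(acc, axis=1).mean()
    return np.concatenate([mean, std, [mag]])

# Toy windows: "standing" is a steady 1 g on the z axis; "walking" adds
# oscillation on x and z.
standing = np.tile([0.0, 0.0, 9.8], (50, 1))
t = np.arange(50)
walking = np.stack([np.sin(t), np.zeros(50), 9.8 + np.cos(t)], axis=1)

f_stand = window_features(standing)
f_walk = window_features(walking)
```

The standard deviations are what separate static postures (sitting, lying, standing) from dynamic ones (walking, falling), while the mean axes encode orientation.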

11.
Human activity recognition is one of the most studied topics in the field of computer vision. In recent years, with the availability of RGB-D sensors and powerful deep learning techniques, research on human activity recognition has gained momentum. From simple human atomic actions, the research has advanced towards recognizing more complex human activities using RGB-D data. This paper presents a comprehensive survey of advanced deep-learning-based recognition methods and categorizes them into human atomic actions, human–human interactions, and human–object interactions. The reviewed methods are further classified by the modality used for recognition, i.e., RGB-based, depth-based, skeleton-based, and hybrid. We also review and categorize recent challenging RGB-D datasets for these tasks, and briefly cover RGB-D datasets and methods for online activity recognition. The paper concludes with a discussion of limitations, challenges, and recent trends pointing to promising future directions.

12.
Analysis of human behavior from visual information has been one of the active research areas in the computer vision community during the last decade. Vision-based human action recognition (HAR), a crucial part of human behavior analysis, is in great demand across a wide range of applications. HAR was initially performed on images from conventional cameras; recently, however, depth sensors have been adopted as an additional informative resource alongside cameras. In this paper, we propose a novel approach that substantially improves human action recognition performance using complex-network-based feature extraction from RGB-D information. The constructed complex network is employed for single-person action recognition from skeletal data consisting of the 3D positions of body joints. These indirect features help the model cope with the majority of challenges in action recognition. The meta-path concept from complex networks is introduced to mitigate the challenge of unusual action structures and further boosts recognition performance. Extensive experimental results on two widely adopted benchmark datasets, MSR Action Pairs and MSR Daily Activity3D, indicate the efficiency and validity of the method.

13.
Detecting and understanding human actions under sophisticated lighting conditions and backgrounds, also known as human action recognition in real-world contexts, is an indispensable component of modern intelligent systems and has become a hot research topic. Human action recognition remains a tough challenge due to intra-class, inter-class, environmental, and temporal differences in the same action, and algorithms based on a single visual channel cannot achieve satisfactory performance. In this paper, we therefore propose a novel action recognition framework for sophisticated activity understanding, focusing on intelligently combining multimodal quality-related action features. Specifically, we first design a multi-channel feature fusion (MCFF) algorithm to capture visual appearance, motion, and acoustic patterns from each video frame, where image-level labels are characterized by choosing high-quality multimodal features. We then design an adaptive key-frame selection algorithm to characterize human actions from a video stream. Finally, we engineer a multimodal feature based on an auxiliary human action retrieval system to achieve sophisticated activity understanding. Extensive experimental evaluations demonstrate the effectiveness and robustness of our proposed method.

14.
A survey of progress in deep-learning-based human action recognition in video
罗会兰, 童康, 孔繁胜. 《电子学报》 (Acta Electronica Sinica), 2019, 47(5): 1162-1173
Human action recognition in video is a challenging topic in computer vision, with wide applications in video information retrieval, daily-life safety, public video surveillance, human-computer interaction, scientific cognition, and other fields. This paper first briefly introduces the background, significance, and difficulties of action recognition; it then surveys deep-learning-based action recognition methods along three dimensions, namely the type and number of model input signals, whether traditional feature extraction is combined, and model pre-training, and compares their recognition performance on the UCF101 and HMDB51 datasets. Finally, possible future directions for action recognition are discussed from the perspectives of video preprocessing, representation of human motion in video, and model learning and training.

15.
For video understanding, namely analyzing who did what in a video, actions along with objects are the primary elements. Most studies on actions have handled recognition for well-trimmed videos and focused on improving classification performance. However, action detection, including localization as well as recognition, is required because actions generally intersect in time and space. In addition, most studies have not considered extensibility to newly added actions beyond those previously trained. This paper therefore proposes an extensible hierarchical method for detecting generic actions, which combine object movements and spatial relations between two objects, and inherited actions, which are determined by the related objects through an ontology- and rule-based methodology. The hierarchical design enables the method to detect any interactive action based on the spatial relations between two objects. Using object information, the method achieves an F-measure of 90.27%. Moreover, this paper describes the extensibility of the method to a new action contained in a video from a domain different from the dataset used.

16.
Much of the existing work on action recognition combines simple features with complex classifiers or models to represent an action. Parameters of such models usually do not have any physical meaning nor do they provide any qualitative insight relating the action to the actual motion of the body or its parts. In this paper, we propose a new representation of human actions called sequence of the most informative joints (SMIJ), which is extremely easy to interpret. At each time instant, we automatically select a few skeletal joints that are deemed to be the most informative for performing the current action based on highly interpretable measures such as the mean or variance of joint angle trajectories. We then represent the action as a sequence of these most informative joints. Experiments on multiple databases show that the SMIJ representation is discriminative for human action recognition and performs better than several state-of-the-art algorithms.
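The joint-selection step of SMIJ can be sketched directly from the description above. This simplified version, an assumption rather than the authors' code, ranks joints by the variance of their angle trajectories over the whole sequence (the paper selects per time instant over local windows); the toy trajectories are illustrative.

```python
import numpy as np

def most_informative_joints(joint_angle_trajectories, k=2):
    # joint_angle_trajectories: (frames, joints) array of joint angles.
    # Rank joints by trajectory variance, one of the interpretable
    # informativeness measures mentioned in the abstract, and keep the
    # indices of the top-k joints, most informative first.
    variances = joint_angle_trajectories.var(axis=0)
    order = np.argsort(variances)[::-1]
    return order[:k]

# Toy data: 4 joints over 10 frames; joint 2 swings widely, joint 0 a
# little, joints 1 and 3 not at all.
traj = np.zeros((10, 4))
traj[:, 2] = np.linspace(0.0, 3.0, 10)   # large motion
traj[:, 0] = np.linspace(0.0, 0.5, 10)   # small motion

top = most_informative_joints(traj, k=2)
```

The appeal of the representation is visible even here: the output is a list of joint indices, which maps directly onto which body parts drive the action.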

17.
18.
Infrared human action recognition based on dense trajectory features
An infrared human action recognition (HAR) method using fused dense trajectory (DT) features is proposed. The pipeline is as follows: 1) the dense trajectories of the input action video are obtained by dense sampling; 2) three descriptors are computed along each trajectory: histograms of oriented gradients (HOG), histograms of optical flow (HOF), and motion boundary histograms (MBH); 3) based on the HOG, HOF, and MBH descriptors, a bag-of-words model and a fusion strategy are used to construct a fused feature; 4) the fused feature from step 3 is fed to a k-nearest-neighbor (k-NN) classifier to complete human action recognition. Experiments on the IADB infrared action database achieve a correct recognition rate of 96.7%, showing that the proposed feature fusion and recognition method can effectively recognize infrared human actions.
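Steps 3 and 4 of the pipeline can be sketched with scikit-learn, under stated assumptions: synthetic Gaussian "descriptors" stand in for real HOG/HOF/MBH trajectory descriptors, the codebook size and descriptor dimension are arbitrary, and simple concatenation-free single-channel quantization replaces the paper's fusion strategy.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def bow_histogram(descriptors, codebook):
    # Quantize local descriptors against a learned codebook and return a
    # normalized visual-word histogram, the per-video feature vector.
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

# Synthetic stand-in for trajectory descriptors from two action classes.
def make_video(center):
    return rng.normal(center, 0.3, size=(60, 8))

videos = [make_video(0.0) for _ in range(10)] + [make_video(2.0) for _ in range(10)]
labels = [0] * 10 + [1] * 10

codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(np.vstack(videos))
X = np.array([bow_histogram(v, codebook) for v in videos])

knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
pred = knn.predict([bow_histogram(make_video(2.0), codebook)])[0]
```

In the fused-feature version, one would build one histogram per descriptor channel (HOG, HOF, MBH) and combine them before the k-NN stage.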

19.
A key advantage of a carrier-free ultra-wideband (UWB) radar system for human action recognition is the radar's extremely high resolution, which can capture subtle changes in human motion and offers strong interference resistance in complex indoor environments. However, because carrier-free UWB signals carry no carrier information, concentrate their energy within an extremely narrow waveform, and correlate only weakly between transmitted signal and echo, traditional signal feature extraction methods no longer apply. To address this problem, a carrier-free UWB radar human action recognition system is built for the first time, and a novel recognition method combining principal component analysis (PCA) and the discrete cosine transform (DCT) is proposed; an improved grid-search algorithm is used to optimize the parameters of a support vector machine, and the superiority of the method is verified. Finally, simulations on measured data in Matlab classify ten different types of human actions. Experimental results show a very high recognition rate, exceeding 99% for every scheme, with strong robustness to small training samples.
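The DCT + PCA + grid-searched SVM chain can be sketched as follows, assuming scikit-learn and SciPy in place of the paper's Matlab implementation. The measured UWB echoes are not reproduced here, so two "actions" are simulated as noisy sinusoids with different dominant frequencies; the coefficient counts and parameter grid are illustrative, and a plain grid search stands in for the paper's improved one.

```python
import numpy as np
from scipy.fft import dct
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 128)

def echoes(freq, n):
    # Synthetic stand-in for radar echo vectors: a tone plus noise.
    return np.array([np.sin(2 * np.pi * freq * t) + rng.normal(0, 0.3, 128)
                     for _ in range(n)])

X = np.vstack([echoes(5, 30), echoes(12, 30)])
y = np.array([0] * 30 + [1] * 30)

# DCT compacts echo energy into a few coefficients; PCA reduces dimension
# further; grid search tunes the SVM, mirroring the described pipeline.
X_dct = dct(X, axis=1, norm="ortho")[:, :32]
model = make_pipeline(PCA(n_components=10),
                      GridSearchCV(SVC(), {"C": [0.1, 1, 10],
                                           "gamma": ["scale", 0.01]}, cv=3))
model.fit(X_dct, y)
train_acc = model.score(X_dct, y)
```

Truncating the DCT before PCA is what keeps the SVM input small enough for the weak-correlation, narrow-waveform regime the abstract describes.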

20.
In this paper, we propose a new framework for detecting the unauthorized dumping of garbage in real-world surveillance video. Although several action/behavior recognition methods have been investigated, they are hardly applicable to real-world scenarios because they mainly focus on well-refined datasets. Because dumping actions in the real world take a variety of forms, building a new method to detect them, instead of exploiting previous approaches, is the better strategy. We detect the dumping action from the change in the relation between a person and the object they are holding. To find the person-held object of indefinite form, we use a background subtraction algorithm and human joint estimation. The person-held object is then tracked and a relation model between the joints and objects is built. Finally, the dumping action is detected through a voting-based decision module. In the experiments, we show the effectiveness of the proposed method on real-world videos containing various dumping actions. In addition, the proposed framework is implemented in a real-time monitoring system through a fast online algorithm.
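The voting-based decision module described above can be sketched as a sliding-window vote accumulator. This is an assumption about the general shape of such a module, not the paper's implementation; the window size and quorum are illustrative values.

```python
from collections import deque

class VotingDecision:
    # Per-frame "dumping" evidence votes are accumulated over a sliding
    # window, and an event is declared once the votes reach a quorum,
    # which suppresses single-frame false positives.
    def __init__(self, window=10, quorum=6):
        self.votes = deque(maxlen=window)
        self.quorum = quorum

    def update(self, frame_vote):
        self.votes.append(bool(frame_vote))
        return sum(self.votes) >= self.quorum

detector = VotingDecision(window=10, quorum=6)
# Noisy per-frame votes: sporadic evidence at first, then sustained.
stream = [0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1]
decisions = [detector.update(v) for v in stream]
fired_at = decisions.index(True) if True in decisions else None
```

Because the deque is bounded, old votes expire automatically, which suits the fast online setting the abstract mentions.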


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号