
Convolutional Neural Networks for Human Activity Recognition Using Multi-location Wearable Sensors
Citation: DENG Shi-Zhuo, WANG Bo-Tao, YANG Chuan-Gui, WANG Guo-Ren. Convolutional neural networks for human activity recognition using multi-location wearable sensors [J]. Journal of Software, 2019, 30(3): 718-737.
Authors: DENG Shi-Zhuo  WANG Bo-Tao  YANG Chuan-Gui  WANG Guo-Ren
Affiliations: School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China (DENG Shi-Zhuo, WANG Bo-Tao, YANG Chuan-Gui); School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China (WANG Guo-Ren)
Funding: National Natural Science Foundation of China (61872072, U1401256, 61173030, 61732003)
Abstract: With the development of artificial intelligence and the popularity of wearable sensor devices, sensor-based human activity recognition (HAR) has attracted wide attention and has great application value. Extracting highly discriminative features is the key to improving HAR accuracy. Exploiting the ability of convolutional neural networks (CNN) to extract good features from raw data without domain knowledge, and addressing the limitation that existing sensor-based HAR ignores the spatial dependence among multi-location data on a single axis of triaxial sensors, this study proposes two activity-image construction methods, T-2D and M-2D, which build multi-location single-axis activity images and activity images for the non-triaxial sensors. It then proposes the convolutional network models T-2DCNN and M-2DCNN, which extract the spatio-temporal dependence of the three single-axis activity image groups and the temporal dependence of the non-triaxial sensors, and concatenate the convolved features into high-level features for classification. To optimize the network structure and reduce the number of trainable parameters in the convolutional layers, weight-shared convolutional network models are further proposed. In comparison experiments with existing work on public datasets under default parameters, the proposed methods improve F1 by up to 6.68% and 1.09% on the OPPORTUNITY and SKODA datasets, respectively. The effectiveness of the models is verified from the perspectives of varying sensor numbers and per-class recognition accuracy, and the weight-shared models reduce the number of training parameters while maintaining recognition performance.
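The T-2D construction described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact specification: the window length, number of sensor locations, and array layout are assumptions chosen for the example.

```python
import numpy as np

def t2d_activity_images(segment):
    """Build one 2D 'activity image' per axis from a window of
    multi-location triaxial accelerometer data.

    segment: array of shape (T, S, 3) -- T time steps, S sensor
    locations, 3 axes (x, y, z).  Returns three images, each of
    shape (S, T): rows are the sensor locations sharing one axis,
    so a 2D kernel sliding over an image can capture spatial
    dependence across locations as well as temporal dependence.
    """
    T, S, A = segment.shape
    assert A == 3, "expects triaxial data"
    # One image per axis: stack that axis from every location.
    return [segment[:, :, axis].T for axis in range(3)]

# Toy example: a 64-step window from 5 body-worn accelerometers.
window = np.random.randn(64, 5, 3)
images = t2d_activity_images(window)
print([img.shape for img in images])  # three (5, 64) images
```

Grouping the same axis from all locations into one image is what lets a 2D convolution see cross-location structure that a 1D kernel over a single channel would miss.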

Keywords: human activity recognition; convolutional neural network; wearable sensor; feature extraction; activity image
Received: 2018-07-18
Revised: 2018-09-20

Convolutional Neural Networks for Human Activity Recognition Using Multi-location Wearable Sensors
DENG Shi-Zhuo, WANG Bo-Tao, YANG Chuan-Gui, WANG Guo-Ren. Convolutional neural networks for human activity recognition using multi-location wearable sensors [J]. Journal of Software, 2019, 30(3): 718-737.
Authors: DENG Shi-Zhuo  WANG Bo-Tao  YANG Chuan-Gui  WANG Guo-Ren
Affiliations: School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China (DENG Shi-Zhuo, WANG Bo-Tao, YANG Chuan-Gui); School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China (WANG Guo-Ren)
Abstract: Wearable sensor-based human activity recognition (HAR) plays a significant role in current smart applications, driven by advances in artificial intelligence and the popularity of wearable sensors. Salient, discriminative features improve HAR performance. To capture the local dependence over time and space on the same axis of multi-location sensor data with convolutional neural networks (CNN), which existing methods with 1D and 2D kernels ignore, this study proposes two construction methods, T-2D and M-2D. They build three activity images, one per axis of the multi-location 3D accelerometers, and one activity image from the remaining sensors. The CNN models T-2DCNN and M-2DCNN, built on T-2D and M-2D respectively, fuse the four activity image features for the classifier. To reduce the number of CNN weights, the weight-shared models TS-2DCNN and MS-2DCNN are proposed. Under default experiment settings on public datasets, the proposed methods outperform existing methods, with F1-value improvements of up to 6.68% on OPPORTUNITY and 1.09% on SKODA. Both the naïve and weight-shared models achieve better F1-values in most configurations, across varying numbers of sensors and in per-class F1 comparisons.
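The weight-sharing idea behind TS-2DCNN/MS-2DCNN can be sketched as a single convolution kernel reused across all three per-axis activity images, with the resulting feature maps concatenated for the classifier. This is an illustrative single-layer sketch, not the paper's actual multi-layer architecture; the kernel size, ReLU activation, and image dimensions are assumptions.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Plain valid-mode 2D cross-correlation (no padding, stride 1)."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def shared_weight_features(axis_images, kernel):
    """Convolve every per-axis activity image with the SAME kernel
    (one weight set instead of one per axis) and concatenate the
    resulting feature maps into one high-level feature vector."""
    maps = [np.maximum(conv2d_valid(img, kernel), 0.0)  # ReLU
            for img in axis_images]
    return np.concatenate([m.ravel() for m in maps])

rng = np.random.default_rng(0)
images = [rng.standard_normal((5, 64)) for _ in range(3)]  # x/y/z images
kernel = rng.standard_normal((3, 5))                       # one shared weight set
features = shared_weight_features(images, kernel)
print(features.shape)  # (540,) = 3 images * (3 * 60) map entries
```

Sharing the kernel across the three axis images cuts the convolutional parameter count to a third here, which mirrors the abstract's claim that weight sharing reduces training parameters while the fused feature vector still carries all three axes.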
Keywords: human activity recognition; convolutional neural network; wearable sensor; feature extraction; activity image