Paper Title
Deep Transfer Learning with Graph Neural Network for Sensor-Based Human Activity Recognition
Paper Authors
Paper Abstract
Sensor-based human activity recognition (HAR) in mobile application scenarios is often confronted with variation in sensor modalities and a shortage of annotated data. Given this observation, we devised a graph-inspired deep learning approach to sensor-based HAR tasks, which we further used to build a deep transfer learning model offering a tentative solution to these two challenging problems. Specifically, we present a graph convolutional neural network with a multi-layer residual structure (ResGCNN) for sensor-based HAR, namely the HAR-ResGCNN approach. Experimental results on the PAMAP2 and mHealth data sets demonstrate that our ResGCNN is effective at capturing the characteristics of actions, with results comparable to other sensor-based HAR models (average accuracies of 98.18% and 99.07%, respectively). More importantly, deep transfer learning experiments using the ResGCNN model show excellent transferability and few-shot learning performance. The graph-based framework exhibits good meta-learning ability and is expected to be a promising solution for sensor-based HAR tasks.
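To make the residual graph-convolution idea named in the abstract concrete, below is a minimal PyTorch sketch of one such block. This is an illustrative assumption rather than the paper's implementation: the class name ResGCNBlock, the choice to treat sensor channels as graph nodes with time-window readings as node features, and the pre-normalized adjacency matrix a_hat are all hypothetical.

import torch
import torch.nn as nn

class ResGCNBlock(nn.Module):
    """One graph convolution (H = A_hat X W) wrapped in a residual connection."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)
        # Project the skip path only when input/output widths differ.
        self.skip = nn.Linear(in_dim, out_dim, bias=False) if in_dim != out_dim else nn.Identity()
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_nodes, in_dim); a_hat: (num_nodes, num_nodes), row-normalized.
        h = a_hat @ self.weight(x)          # aggregate neighbor features, then mix channels
        return self.act(h + self.skip(x))   # residual addition lets blocks stack deeply

# Illustrative usage: 9 sensor channels as nodes, 128-sample windows as node features.
a_hat = torch.eye(9)                        # placeholder adjacency (self-loops only; assumed)
block = ResGCNBlock(in_dim=128, out_dim=64)
out = block(torch.randn(4, 9, 128), a_hat)  # -> shape (4, 9, 64)

Stacking several such blocks and attaching a classification head would yield a multi-layer residual GCN in the spirit of the abstract; the residual additions are what allow the depth to grow without degrading training.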