Sensor Fusion-based Deep Learning Models for Human Activity Classification

Parshuram N Aarotale1 and Ajita Rattani2
1Biomedical Engineering, Wichita State University, 2University of North Texas


Abstract

Wearable sensors have been widely deployed for human activity recognition (HAR) in various sectors, including health monitoring, medical treatment, and motion analysis. Deep-learning-based HAR models have achieved higher accuracy than traditional machine learning approaches, but a gap remains before recognition accuracy reaches acceptable levels. To this end, this paper proposes sensor-fusion-based deep learning models for human activity classification with enhanced accuracy. Experimental validations are conducted on two wearable sensor datasets, PPG-DaLiA and ScientISST MOVE, which contain various bio-signals, such as ECG, collected during daily human activities using chest, forearm, and wrist sensors. Experimental results show that our proposed sensor-fusion-based deep learning models, which systematically fuse signals from different sensors at an intermediate layer of the network, obtain enhanced HAR performance. On average, our proposed sensor-fusion-based models improve HAR accuracy by about 9.14%, precision by 8.15%, recall by 8.9%, and F1-score by 9.0% over deep learning models based on single-sensor data (wrist, chest, or forearm).
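
To illustrate the intermediate-layer fusion described above, the following is a minimal PyTorch sketch: each sensor stream passes through its own 1D-CNN branch, and the resulting feature vectors are concatenated at an intermediate layer before a shared classifier. All module names (SensorBranch, FusionHAR), layer sizes, and channel counts are illustrative assumptions, not the paper's actual architecture.

# Minimal sketch of intermediate-layer sensor fusion for HAR.
# Layer sizes and module names are illustrative assumptions,
# not the architecture evaluated in the paper.
import torch
import torch.nn as nn

class SensorBranch(nn.Module):
    """1D-CNN feature extractor for one sensor's signal window."""
    def __init__(self, in_channels, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, feat_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # -> (batch, feat_dim, 1)
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)  # (batch, feat_dim)

class FusionHAR(nn.Module):
    """Fuses per-sensor features at an intermediate layer, then classifies."""
    def __init__(self, sensor_channels, num_classes, feat_dim=64):
        super().__init__()
        self.branches = nn.ModuleList(
            SensorBranch(c, feat_dim) for c in sensor_channels
        )
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim * len(sensor_channels), 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, inputs):
        # inputs: one tensor per sensor, each shaped (batch, channels, time)
        feats = [branch(x) for branch, x in zip(self.branches, inputs)]
        fused = torch.cat(feats, dim=1)  # intermediate-layer fusion
        return self.classifier(fused)

# Hypothetical usage: wrist (3-axis accel.), chest (1-lead ECG),
# forearm (3-axis accel.), 8 activity classes, 256-sample windows.
model = FusionHAR(sensor_channels=[3, 1, 3], num_classes=8)
batch = [torch.randn(4, c, 256) for c in [3, 1, 3]]
logits = model(batch)  # (4, 8)

Concatenation is only one fusion choice; under the same branch structure, element-wise averaging or attention-weighted pooling of the per-sensor features could be substituted at the same intermediate layer.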