Lanzhou University Institutional Repository
Modality Consistency-Guided Contrastive Learning for Wearable-Based Human Activity Recognition
C. Guo; Y. Zhang; Y. Chen; C. Xu; Z. Wang
2024-03-18
Source Publication: IEEE Internet of Things Journal
ISSN: 2327-4662
Volume: PP, Issue: 99, Pages: 1-1
Abstract: In wearable sensor-based human activity recognition (HAR) research, several factors limit the development of generalized models, such as the time and resources required to acquire abundant annotated data and the inter-dataset inconsistency of activity categories. In this paper, we take advantage of the complementarity and redundancy between different wearable modalities (e.g., accelerometers, gyroscopes, and magnetometers) and propose a Modality Consistency-Guided Contrastive Learning (ModCL) method, which can construct a generalized model using annotation-free self-supervised learning and realize personalized domain adaptation with a small amount of annotated data. Specifically, ModCL exploits both intra-modality and inter-modality consistency of the wearable device data to construct contrastive learning tasks, encouraging the recognition model to recognize similar patterns and distinguish dissimilar ones. By leveraging these mixed constraint strategies, ModCL can learn the inherent activity patterns and extract meaningful generalized features across different datasets. To verify the effectiveness of the ModCL method, we conduct experiments on five benchmark datasets (i.e., OPPORTUNITY and PAMAP2 as pre-training datasets, and UniMiB-SHAR, UCI-HAR, and WISDM as independent validation datasets). Experimental results show that ModCL achieves significant improvements in recognition accuracy compared with other SOTA methods.
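To make the idea concrete, below is a minimal PyTorch sketch of an InfoNCE-style contrastive objective that combines intra-modality consistency (two augmented views of the same accelerometer window) with inter-modality consistency (time-aligned accelerometer and gyroscope views of the same window), in the spirit of the abstract. The encoder architecture, jitter augmentation, temperature, and equal loss weighting are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of a modality-consistency contrastive objective (InfoNCE-style).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityEncoder(nn.Module):
    """1D-CNN encoder mapping a (batch, channels, time) window to a unit-norm embedding."""
    def __init__(self, in_channels=3, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)


def info_nce(z_a, z_b, temperature=0.1):
    """Contrastive loss: matching rows of z_a and z_b are positive pairs,
    all other rows in the batch serve as negatives."""
    logits = z_a @ z_b.t() / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)


def jitter(x, sigma=0.05):
    """Simple additive-noise augmentation (the paper's augmentations may differ)."""
    return x + sigma * torch.randn_like(x)


# Toy usage: accelerometer and gyroscope windows from the same activity segment.
batch, time_steps = 32, 128
acc = torch.randn(batch, 3, time_steps)   # accelerometer windows
gyr = torch.randn(batch, 3, time_steps)   # time-aligned gyroscope windows

enc_acc, enc_gyr = ModalityEncoder(), ModalityEncoder()

# Intra-modality consistency: two augmented views of the same accelerometer window.
loss_intra = info_nce(enc_acc(jitter(acc)), enc_acc(jitter(acc)))

# Inter-modality consistency: accelerometer view vs. gyroscope view of the same window.
loss_inter = info_nce(enc_acc(acc), enc_gyr(gyr))

loss = loss_intra + loss_inter  # equal weighting is an assumption
loss.backward()
```

In this sketch, both consistency terms share the same InfoNCE form; only the pairing differs (augmented views within one modality versus aligned windows across modalities), which is one plausible way to realize the mixed constraint strategies described above.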
Keywords: human activity recognition; intra-modality; inter-modality; self-supervised; contrastive learning
Publisher: IEEE
DOI: 10.1109/JIOT.2024.3379019
Indexed By: IEEE