Title | MTLFuseNet: A novel emotion recognition model based on deep latent feature fusion of EEG signals and multi-task learning
Authors | Li, R (李睿); Ren, C (任超); Ge, Yiqing; Zhao, Qiqi; Yang, Yikun; Shi, Yuhan; Zhang, XW (张晓炜); Hu, B (胡斌)
Date Issued | 2023-09-27
Online publication date | 2023-07
Source Publication | KNOWLEDGE-BASED SYSTEMS
ISSN | 0950-7051
Volume | 276
Pages | 16
Abstract | How to extract discriminative latent feature representations from electroencephalography (EEG) signals and build a generalized model is a topic in EEG-based emotion recognition research. This study proposed a novel emotion recognition model based on deep latent feature fusion of EEG signals and multi-task learning, referred to as MTLFuseNet. MTLFuseNet learned spatio-temporal latent features of EEG in an unsupervised manner by a variational autoencoder (VAE) and learned the spatio-spectral features of EEG in a supervised manner by a graph convolutional network (GCN) and gated recurrent unit (GRU) network. Afterward, the two latent features were fused to form more complementary and discriminative spatio-temporal–spectral fusion features for EEG signal representation. In addition, MTLFuseNet was constructed based on multi-task learning. The focal loss was introduced to solve the problem of unbalanced sample classes in an emotional dataset, and the triplet-center loss was introduced to make the fused latent feature vectors more discriminative. Finally, a subject-independent leave-one-subject-out cross-validation strategy was used to validate extensively on two public datasets, DEAP and DREAMER. On the DEAP dataset, the average accuracies of valence and arousal are 71.33% and 73.28%, respectively. On the DREAMER dataset, the average accuracies of valence and arousal are 80.43% and 83.33%, respectively. The experimental results show that the proposed MTLFuseNet model achieves excellent recognition performance, outperforming the state-of-the-art methods. © 2023 Elsevier B.V. |
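Note | The abstract describes the architecture only at a high level. The following is a minimal, illustrative PyTorch sketch (not the authors' released code) of the kind of two-branch latent-feature fusion and multi-task objective the abstract outlines. All module names, layer sizes, and hyperparameters (e.g., TwoBranchFusionNet, FocalLoss with gamma=2.0) are assumptions made for illustration; the paper's GCN-GRU spatio-spectral branch is simplified here to a plain GRU, and the triplet-center loss is omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    # Standard binary focal loss, illustrating the class-imbalance handling the
    # abstract mentions; gamma/alpha are assumed defaults, not the paper's values.
    def __init__(self, gamma=2.0, alpha=0.25):
        super().__init__()
        self.gamma, self.alpha = gamma, alpha

    def forward(self, logits, targets):
        ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        p_t = torch.exp(-ce)  # probability assigned to the true class
        return (self.alpha * (1.0 - p_t) ** self.gamma * ce).mean()

class TwoBranchFusionNet(nn.Module):
    # Hypothetical two-branch model: a VAE-style encoder over raw EEG segments and a
    # GRU over band-power sequences (a simplified stand-in for the GCN-GRU branch),
    # whose latent features are concatenated and fed to one head per task.
    def __init__(self, n_channels=32, n_times=128, n_bands=5, latent=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(n_channels * n_times, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent)
        self.logvar = nn.Linear(256, latent)
        self.gru = nn.GRU(input_size=n_channels * n_bands, hidden_size=latent, batch_first=True)
        self.head_valence = nn.Linear(2 * latent, 1)
        self.head_arousal = nn.Linear(2 * latent, 1)

    def forward(self, x_time, x_spec):
        h = self.enc(x_time)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # VAE reparameterisation
        _, g = self.gru(x_spec)                                  # g: (1, batch, latent)
        fused = torch.cat([z, g.squeeze(0)], dim=1)              # fused latent representation
        return self.head_valence(fused), self.head_arousal(fused), mu, logvar

# Toy usage with random tensors: batch of 8 trials, 32 channels, 128 time samples,
# and 4 spectral steps of 5 band powers per channel.
model, focal = TwoBranchFusionNet(), FocalLoss()
x_time = torch.randn(8, 32, 128)
x_spec = torch.randn(8, 4, 32 * 5)
y_valence = torch.randint(0, 2, (8, 1)).float()
y_arousal = torch.randint(0, 2, (8, 1)).float()
logit_v, logit_a, mu, logvar = model(x_time, x_spec)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())      # VAE regulariser
loss = focal(logit_v, y_valence) + focal(logit_a, y_arousal) + 1e-3 * kl
loss.backward()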
Keyword | Biomedical signal processing; Deep learning; Electroencephalography; Electrophysiology; Learning systems; Speech recognition; Auto encoders; Emotion recognition; Feature representation; Features fusions; Generalized models; Model-based OPC; Multitask learning; Recognition models; Spatio-temporal; Spectral feature
Publisher | Elsevier B.V.
DOI | 10.1016/j.knosys.2023.110756
Indexed By | EI; SCIE
Language | English
WOS Research Area | Computer Science
WOS Subject | Computer Science, Artificial Intelligence
WOS ID | WOS:001046356300001
EI Accession Number | 20232914417908
EI Keywords | Emotion Recognition
EI Classification Number | 461.1 Biomedical Engineering; 461.4 Ergonomics and Human Factors Engineering; 461.6 Medicine and Pharmacology; 716.1 Information Theory and Signal Processing; 723.2 Data Processing and Image Processing; 751.5 Speech
Original Document Type | Journal article (JA)
Document Type | Journal article
Identifier | https://ir.lzu.edu.cn/handle/262010/532152
Collection | School of Information Science and Engineering
Corresponding Author | Ren, Chao; Hu, Bin
Affiliation | Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
First Author Affiliation | School of Information Science and Engineering
Corresponding Author Affiliation | School of Information Science and Engineering
Recommended Citation GB/T 7714 | Li, Rui, Ren, Chao, Ge, Yiqing, et al. MTLFuseNet: A novel emotion recognition model based on deep latent feature fusion of EEG signals and multi-task learning[J]. KNOWLEDGE-BASED SYSTEMS, 2023, 276.
APA | Li, Rui., Ren, Chao., Ge, Yiqing., Zhao, Qiqi., Yang, Yikun., ... & Hu, Bin. (2023). MTLFuseNet: A novel emotion recognition model based on deep latent feature fusion of EEG signals and multi-task learning. KNOWLEDGE-BASED SYSTEMS, 276.
MLA | Li, Rui, et al. "MTLFuseNet: A novel emotion recognition model based on deep latent feature fusion of EEG signals and multi-task learning". KNOWLEDGE-BASED SYSTEMS 276 (2023).
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.