Transfer kernel learning method based on spatial features for motor imagery EEG
Siqi YANG, Tianjian LUO, Xuanhui YAN, Guangju YANG
Journal of Computer Applications    2024, 44 (11): 3354-3363.   DOI: 10.11772/j.issn.1001-9081.2023111593

Motor Imagery ElectroEncephaloGram (MI-EEG) signals have gained widespread attention in the construction of non-invasive Brain-Computer Interfaces (BCIs) for clinical assisted rehabilitation. Because the distributions of MI-EEG samples differ across subjects, cross-subject MI-EEG feature learning has become a research focus. However, existing methods suffer from weak domain-invariant feature representation and high time complexity, and cannot be applied directly to online BCIs. To address this issue, an efficient cross-subject MI-EEG classification algorithm, Transfer Kernel Riemannian Tangent Space (TKRTS), was proposed. Firstly, the MI-EEG covariance matrices were projected into Riemannian space, where the covariance matrices of different subjects were aligned while Riemannian Tangent Space (RTS) features were extracted. Subsequently, a domain-invariant kernel matrix was learnt on the tangent-space feature set, achieving a complete representation of cross-subject MI-EEG features; this matrix was then used to train a Kernel Support Vector Machine (KSVM) for classification. To validate the feasibility and effectiveness of TKRTS, multi-source-domain to single-target-domain and single-source-domain to single-target-domain experiments were conducted on three public datasets, where the average classification accuracy increased by 0.81 and 0.13 percentage points respectively. Experimental results demonstrate that, compared with state-of-the-art methods, TKRTS improves the average classification accuracy while maintaining comparable time complexity. Furthermore, ablation results confirm the completeness and parameter insensitivity of TKRTS in cross-subject feature representation, making the method suitable for constructing online BCIs.
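The covariance-alignment and tangent-space steps described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it uses the arithmetic mean of the covariance matrices as the reference point (the paper operates in Riemannian space, where a Riemannian mean would typically be used), and the whitening by the inverse square root of the reference plays the role of the alignment step. All function names and the toy data are illustrative assumptions.

```python
import numpy as np

def inv_sqrt_spd(M):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def logm_spd(M):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.log(w)) @ V.T

def tangent_space_features(covs, ref):
    """Align each covariance to `ref` and project it into the tangent space.

    The congruence transform ref^{-1/2} C ref^{-1/2} centres the matrices at
    the reference; the matrix log then maps them into the (flat) tangent
    space, where the upper triangle is vectorised as a feature.
    """
    R = inv_sqrt_spd(ref)
    feats = []
    for C in covs:
        S = logm_spd(R @ C @ R)
        i, j = np.triu_indices_from(S)
        # off-diagonal entries appear twice in S, so weight them by sqrt(2)
        w = np.where(i == j, 1.0, np.sqrt(2.0))
        feats.append(S[i, j] * w)
    return np.array(feats)

# toy data: 4 trials, 3 channels, 100 time samples
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3, 100))
covs = np.array([x @ x.T / x.shape[1] for x in X])
ref = covs.mean(axis=0)            # arithmetic mean as a cheap reference
feats = tangent_space_features(covs, ref)
print(feats.shape)                 # (4, 6): 3x3 upper triangle per trial
```

The resulting feature vectors live in a Euclidean space, so a kernel matrix (and hence a KSVM) can be computed on them directly; the paper's contribution lies in learning a domain-invariant kernel on these features rather than using a fixed one.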

Fusion imaging-based recurrent capsule classification network for time series
Rongjun CHEN, Xuanhui YAN, Chaocheng YANG
Journal of Computer Applications    2023, 43 (3): 692-699.   DOI: 10.11772/j.issn.1001-9081.2022010089

To address the lack of temporal correlations and spatial location relationships in imaged time series, a Fusion-Imaging Recurrent Capsule Neural Network (FIR-Capsnet) was proposed to fuse and extract spatial-temporal information from time series images. Firstly, multi-level spatial-temporal features of the time series images were captured using Gramian Angular Field (GAF), Markov Transition Field (MTF) and Recurrence Plot (RP). Then, the spatial relationships in the time series images were learnt through the rotation invariance of the capsule network and its iterative routing algorithm. Finally, the temporal correlations hidden in the time series were learnt by the gate mechanism of a Long Short-Term Memory (LSTM) network. Experimental results show that FIR-Capsnet achieves 15 wins on 30 UCR public datasets and outperforms Fusion-CNN by 7.2 percentage points in classification accuracy on the Human Activity Recognition (HAR) dataset, illustrating its advantages in processing time series data.
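Two of the three imaging transforms named above (GAF and RP) are simple enough to sketch; the following is a minimal standalone illustration of the standard definitions, not the paper's code, and the function names, the recurrence threshold `eps`, and the test signal are assumptions (the MTF transform is omitted for brevity).

```python
import numpy as np

def gramian_angular_field(x):
    """Gramian Angular Summation Field of a 1-D series.

    The series is rescaled to [-1, 1], mapped to polar angles via arccos,
    and the image pixel (i, j) is cos(phi_i + phi_j).
    """
    x = np.asarray(x, dtype=float)
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])

def recurrence_plot(x, eps=0.1):
    """Binary recurrence plot: pixel (i, j) is 1 where |x_i - x_j| <= eps."""
    x = np.asarray(x, dtype=float)
    d = np.abs(x[:, None] - x[None, :])
    return (d <= eps).astype(float)

# toy signal: one period of a sine wave, 64 samples
t = np.linspace(0.0, 2.0 * np.pi, 64)
s = np.sin(t)
gaf = gramian_angular_field(s)
rp = recurrence_plot(s, eps=0.2)
print(gaf.shape, rp.shape)  # (64, 64) (64, 64)
```

Each transform turns a length-n series into an n-by-n image, so the three images can be stacked as channels and fed to an image classifier such as the capsule network described in the abstract.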
