Journal of Computer Applications ›› 2022, Vol. 42 ›› Issue (8): 2343-2352. DOI: 10.11772/j.issn.1001-9081.2021061062
Special Issue: Artificial Intelligence
• Artificial intelligence •
Yinglü XUAN, Yuan WAN, Jiahui CHEN
Received: 2021-06-19
Revised: 2021-10-14
Accepted: 2021-10-20
Online: 2022-01-25
Published: 2022-08-10
Contact: Yuan WAN
About author: XUAN Yinglü, born in 1998 in Guiyang, Guizhou, M. S. candidate. His research interests include machine learning, deep learning, and time series classification.
Yinglü XUAN, Yuan WAN, Jiahui CHEN. Time series classification by LSTM based on multi-scale convolution and attention mechanism[J]. Journal of Computer Applications, 2022, 42(8): 2343-2352.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2021061062
Dataset | Classes | Sequence length | Training samples | Test samples |
---|---|---|---|---|
Chlorine | 3 | 166 | 467 | 3840 |
MoteStrain | 2 | 84 | 20 | 1252 |
CinC_ECG | 4 | 1639 | 40 | 1380 |
Cricket_X | 12 | 300 | 390 | 390 |
FacesUCR | 14 | 131 | 200 | 2050 |
ItalyPower | 2 | 24 | 67 | 1029 |
MALLAT | 8 | 1024 | 55 | 2345 |
OliveOil | 4 | 570 | 30 | 30 |
Tab. 1 Details of 8 time series datasets
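The eight datasets in Tab. 1 are drawn from the UCR time series archive (reference [36] below). A minimal loading sketch follows; it assumes the archive's plain-text layout in which every line of a `*_TRAIN.tsv` / `*_TEST.tsv` file holds the class label followed by the tab-separated values of one series, and the `load_ucr_split` helper is illustrative rather than the authors' code.

```python
import numpy as np

def load_ucr_split(path):
    """Load one UCR-archive split file: one series per line,
    class label first, then the tab-separated observations."""
    data = np.loadtxt(path, delimiter="\t")
    labels = data[:, 0].astype(int)
    series = data[:, 1:]          # shape: (n_samples, sequence_length)
    return series, labels

# Illustrative usage for one dataset from Tab. 1 (file names assumed):
# X_train, y_train = load_ucr_split("OliveOil/OliveOil_TRAIN.tsv")
# X_test,  y_test  = load_ucr_split("OliveOil/OliveOil_TEST.tsv")
# print(X_train.shape)            # expected (30, 570) for OliveOil
```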
Search order | Batch size | Epochs | Reduction factor | Dropout rate | Mean error rate |
---|---|---|---|---|---|
1 | 32 | 2000 | 16 | 0.8 | 0.1446 |
1 | 64 | 2000 | 16 | 0.8 | 0.1417 |
1 | 128 | 2000 | 16 | 0.8 | 0.1313 |
1 | 256 | 2000 | 16 | 0.8 | 0.1390 |
2 | 128 | 1500 | 16 | 0.8 | 0.1381 |
2 | 128 | 2000 | 16 | 0.8 | 0.1313 |
2 | 128 | 2500 | 16 | 0.8 | 0.1309 |
2 | 128 | 3000 | 16 | 0.8 | 0.1309 |
3 | 128 | 2500 | 8 | 0.8 | 0.1324 |
3 | 128 | 2500 | 16 | 0.8 | 0.1309 |
3 | 128 | 2500 | 32 | 0.8 | 0.1338 |
4 | 128 | 2500 | 16 | 0.5 | 0.1355 |
4 | 128 | 2500 | 16 | 0.6 | 0.1341 |
4 | 128 | 2500 | 16 | 0.7 | 0.1363 |
4 | 128 | 2500 | 16 | 0.8 | 0.1309 |
4 | 128 | 2500 | 16 | 0.9 | 0.1337 |
Tab. 2 Mean error of each parameter combination
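Tab. 2 reflects a coordinate-wise (greedy) parameter search: one hyperparameter is varied at a time while the others are held at their current best values, and the setting with the lowest mean error rate is kept before moving to the next parameter. The sketch below illustrates that procedure; the `evaluate` callback, the candidate grids, and the starting values are placeholders, not the authors' training code.

```python
def greedy_search(evaluate, grids, start):
    """Coordinate-wise hyperparameter search: vary one parameter at a
    time, keep the value with the lowest mean error rate, then move on.
    `evaluate(params) -> mean error rate` is supplied by the caller."""
    best = dict(start)
    for name, candidates in grids.items():        # search order = dict order
        scores = {}
        for value in candidates:
            trial = dict(best, **{name: value})   # override one parameter
            scores[value] = evaluate(trial)
        best[name] = min(scores, key=scores.get)  # keep the best value found
    return best

# Candidate grids matching the rows of Tab. 2 (starting point assumed):
grids = {
    "batch_size": [32, 64, 128, 256],
    "epochs": [1500, 2000, 2500, 3000],
    "reduction_factor": [8, 16, 32],
    "dropout": [0.5, 0.6, 0.7, 0.8, 0.9],
}
start = {"batch_size": 32, "epochs": 2000, "reduction_factor": 16, "dropout": 0.8}
```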
Dataset | USRL-FordA | USRL-Combined (1-NN) | OS-CNN | Inception-Time | RTFN | MCA-LSTM
---|---|---|---|---|---|---|
Adiac | 0.760 | 0.645 | 0.839 | 0.841 | 0.793 | 0.857 |
ArrowHead | 0.817 | 0.817 | 0.840 | 0.846 | 0.851 | 0.914 |
Beef | 0.667 | 0.600 | 0.833 | 0.700 | 0.900 | 0.767 |
BeetleFly | 0.800 | 0.800 | 0.800 | 0.800 | 1.000 | 0.750 |
BirdChicken | 0.900 | 0.750 | 0.900 | 0.950 | 1.000 | 0.900 |
Car | 0.850 | 0.800 | 0.933 | 0.883 | 0.883 | 0.950 |
CBF | 0.988 | 0.978 | 0.909 | 0.999 | 1.000 | 0.998 |
ChlorineConcentration | 0.688 | 0.588 | 0.850 | 0.877 | 0.894 | 0.863 |
CinC_ECG_torso | 0.638 | 0.693 | 0.830 | 0.854 | 0.810 | 0.889 |
Coffee | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
Cricket_X | 0.682 | 0.741 | 0.846 | 0.854 | 0.772 | 0.869 |
Cricket_Y | 0.667 | 0.664 | 0.869 | 0.851 | 0.790 | 0.844 |
Cricket_Z | 0.656 | 0.723 | 0.862 | 0.862 | 0.787 | 0.867 |
DiatomSizeReduction | 0.974 | 0.967 | 0.980 | 0.935 | 0.980 | 0.948 |
DistalPhalanxOutlineAgeGroup | 0.727 | 0.669 | 0.755 | 0.734 | 0.719 | 0.798 |
DistalPhalanxOutlineCorrect | 0.764 | 0.683 | 0.772 | 0.783 | 0.772 | 0.793 |
Earthquakes | 0.748 | 0.640 | 0.683 | 0.741 | 0.777 | 0.839 |
ECG200 | 0.830 | 0.850 | 0.910 | 0.930 | 0.920 | 0.920 |
ECG5000 | 0.940 | 0.925 | 0.940 | 0.941 | 0.944 | 0.945 |
ECGFiveDays | 1.000 | 0.999 | 1.000 | 1.000 | 1.000 | 1.000 |
FaceFour | 0.830 | 0.864 | 0.943 | 0.955 | 0.924 | 0.943 |
FacesUCR | 0.835 | 0.860 | 0.964 | 0.971 | 0.951 | 0.950 |
FordA | 0.927 | 0.863 | 0.958 | 0.961 | 0.939 | 0.944 |
FordB | 0.798 | 0.748 | 0.814 | 0.862 | 0.824 | 0.940 |
Gun_Point | 0.987 | 0.833 | 1.000 | 1.000 | 1.000 | 1.000 |
Ham | 0.533 | 0.533 | 0.714 | 0.714 | 0.810 | 0.790 |
HandOutlines | 0.919 | 0.832 | 0.957 | 0.954 | 0.895 | 0.900 |
Haptics | 0.474 | 0.354 | 0.513 | 0.549 | 0.601 | 0.555 |
Herring | 0.578 | 0.563 | 0.609 | 0.672 | 0.750 | 0.750 |
InsectWingbeatSound | 0.599 | 0.506 | 0.637 | 0.639 | 0.652 | 0.638 |
ItalyPowerDemand | 0.929 | 0.942 | 0.948 | 0.965 | 0.964 | 0.972 |
Lighting2 | 0.787 | 0.885 | 0.820 | 0.770 | 0.836 | 0.836 |
Lighting7 | 0.740 | 0.795 | 0.808 | 0.836 | 0.904 | 0.822 |
MALLAT | 0.916 | 0.994 | 0.964 | 0.955 | 0.939 | 0.978 |
Meat | 0.867 | 0.900 | 0.983 | 0.933 | 1.000 | 0.950 |
MedicalImages | 0.725 | 0.603 | 0.768 | 0.795 | 0.793 | 0.797 |
MiddlePhalanxOutlineAgeGroup | 0.623 | 0.506 | 0.539 | 0.552 | 0.662 | 0.758 |
MiddlePhalanxOutlineCorrect | 0.839 | 0.722 | 0.808 | 0.818 | 0.745 | 0.828 |
MiddlePhalanxTW | 0.555 | 0.513 | 0.565 | 0.513 | 0.624 | 0.586 |
MoteStrain | 0.823 | 0.853 | 0.939 | 0.887 | 0.875 | 0.912 |
OliveOil | 0.900 | 0.833 | 0.833 | 0.833 | 0.967 | 0.800 |
Patterns | 0.992 | 0.998 | 1.000 | 1.000 | 1.000 | 1.000 |
plane | 0.981 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
ProximalPhalanxOutlineAgeGroup | 0.839 | 0.805 | 0.844 | 0.849 | 0.878 | 0.859 |
ProximalPhalanxOutlineCorrect | 0.869 | 0.801 | 0.900 | 0.931 | 0.911 | 0.928 |
ProximalPhalanxTW | 0.785 | 0.717 | 0.776 | 0.776 | 0.834 | 0.810 |
ShapeletSim | 0.517 | 0.772 | 0.828 | 0.956 | 1.000 | 0.983 |
ShapesAll | 0.837 | 0.823 | 0.923 | 0.928 | 0.877 | 0.907 |
SonyAIBORobotSurface | 0.840 | 0.825 | 0.978 | 0.869 | 0.882 | 0.956 |
SonyAIBORobotSurfaceII | 0.832 | 0.885 | 0.961 | 0.946 | 0.854 | 0.956 |
Strawberry | 0.946 | 0.903 | 0.981 | 0.984 | 0.986 | 0.980 |
SwedishLeaf | 0.925 | 0.891 | 0.970 | 0.977 | 0.938 | 0.979 |
Symbols | 0.945 | 0.933 | 0.977 | 0.981 | 0.892 | 0.985 |
synthetic_control | 0.977 | 0.977 | 1.000 | 0.997 | 1.000 | 1.000 |
ToeSegmentation1 | 0.899 | 0.851 | 0.956 | 0.962 | 0.982 | 0.982 |
ToeSegmentation2 | 0.900 | 0.900 | 0.938 | 0.938 | 0.938 | 0.931 |
Trace | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
TwoLeadECG | 0.993 | 0.988 | 0.999 | 0.996 | 1.000 | 0.995 |
uWaveGestureLibrary_X | 0.865 | 0.838 | 0.942 | 0.952 | 0.969 | 0.843 |
uWaveGestureLibrary_Y | 0.784 | 0.762 | 0.818 | 0.825 | 0.815 | 0.778 |
uWaveGestureLibrary_Z | 0.697 | 0.666 | 0.750 | 0.767 | 0.752 | 0.789 |
UWaveGestureLibraryAll | 0.729 | 0.679 | 0.758 | 0.764 | 0.758 | 0.966 |
wafer | 0.995 | 0.987 | 0.999 | 0.999 | 1.000 | 0.999 |
Wine | 0.685 | 0.500 | 0.556 | 0.611 | 0.907 | 0.870 |
WordsSynonyms | 0.641 | 0.633 | 0.748 | 0.734 | 0.660 | 0.759 |
Tab. 3 Comparison of Top-1 accuracy on 65 datasets
Model | Length < 200 | Length 200-500 | Length > 500 |
---|---|---|---|
USRL-FordA | 0.811 | 0.834 | 0.782 |
USRL-Combined (1-NN) | 0.776 | 0.799 | 0.791 |
OS-CNN | 0.855 | 0.878 | 0.853 |
Inception-Time | 0.860 | 0.875 | 0.862 |
RTFN | 0.857 | 0.909 | 0.872 |
MCA-LSTM | 0.878 | 0.911 | 0.870 |
Tab. 4 Average Top-1 accuracies of different models grouped by sequence length
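Tab. 4 aggregates the per-dataset accuracies of Tab. 3 into three sequence-length buckets taken from the dataset metadata (as in Tab. 1). A small pandas sketch of that grouping is shown below; the bucket boundaries match the table, but the three-dataset data frame is an illustrative slice only.

```python
import pandas as pd

# Toy fragment of Tab. 3 (per-dataset Top-1 accuracy) and Tab. 1 (sequence length).
acc = pd.DataFrame(
    {"MCA-LSTM": [0.972, 0.869, 0.889], "RTFN": [0.964, 0.772, 0.810]},
    index=["ItalyPowerDemand", "Cricket_X", "CinC_ECG_torso"],
)
seq_len = pd.Series({"ItalyPowerDemand": 24, "Cricket_X": 300, "CinC_ECG_torso": 1639})

# Bucket the datasets as in Tab. 4: <200, 200-500, >500.
buckets = pd.cut(seq_len, bins=[0, 200, 500, float("inf")],
                 labels=["<200", "200-500", ">500"])
print(acc.groupby(buckets).mean())   # average Top-1 accuracy per length bucket
```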
Model | ME | AMR | GMR |
---|---|---|---|
USRL-FordA | 0.1884 | 4.57 | 18.00 |
USRL-Combined (1-NN) | 0.2128 | 5.15 | 22.67 |
OS-CNN | 0.1379 | 2.78 | 5.97 |
Inception-Time | 0.1345 | 2.48 | 4.83 |
RTFN | 0.1218 | 2.32 | 3.57 |
MCA-LSTM | 0.1136 | 2.14 | 3.23 |
Tab. 5 Comparison of evaluation indicators
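Assuming the usual definitions behind Tab. 5, namely ME as the mean error rate (one minus accuracy, averaged over datasets), AMR as the arithmetic mean of each model's per-dataset rank, and GMR as the geometric mean of those ranks (lower is better for all three), the indicators can be computed from a Tab. 3-style accuracy matrix as sketched below.

```python
import numpy as np
import pandas as pd

def summarize(acc: pd.DataFrame) -> pd.DataFrame:
    """acc: rows = datasets, columns = models, values = Top-1 accuracy.
    Returns ME, AMR and GMR per model (lower is better)."""
    me = (1.0 - acc).mean()                              # mean error rate
    # Rank models per dataset: rank 1 = highest accuracy, ties get average rank.
    ranks = acc.rank(axis=1, ascending=False, method="average")
    amr = ranks.mean()                                   # arithmetic mean rank
    gmr = np.exp(np.log(ranks).mean())                   # geometric mean rank
    return pd.DataFrame({"ME": me, "AMR": amr, "GMR": gmr})

# Example with a small slice of Tab. 3:
acc = pd.DataFrame(
    {"Inception-Time": [0.841, 0.846, 0.700], "MCA-LSTM": [0.857, 0.914, 0.767]},
    index=["Adiac", "ArrowHead", "Beef"],
)
print(summarize(acc))
```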
Model | USRL-FordA | USRL-Combined (1-NN) | OS-CNN | Inception-Time | RTFN | MCA-LSTM |
---|---|---|---|---|---|---|
USRL-FordA | 0 | | | | | |
USRL-Combined (1-NN) | 1.76E-03 | 0 | | | | |
OS-CNN | 6.30E-07 | 3.24E-10 | 0 | | | |
Inception-Time | 1.84E-07 | 7.31E-10 | 8.77E-02 | 0 | | |
RTFN | 9.31E-10 | 2.27E-10 | 2.20E-01 | 4.99E-01 | 0 | |
MCA-LSTM | 2.84E-09 | 1.58E-10 | 9.74E-03 | 3.83E-03 | 1.76E-01 | 0 |
Tab. 6 Wilcoxon signed-rank test p-values between different models
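The p-values in Tab. 6 are presumably obtained from pairwise Wilcoxon signed-rank tests on the per-dataset accuracies of each model pair. A minimal sketch with `scipy.stats.wilcoxon` follows, using a small slice of Tab. 3 rather than the full 65-dataset comparison behind the table.

```python
import numpy as np
from scipy.stats import wilcoxon

# Per-dataset Top-1 accuracies of two models (slice of Tab. 3 for illustration).
inception_time = np.array([0.841, 0.846, 0.700, 0.800, 0.950, 0.883])
mca_lstm       = np.array([0.857, 0.914, 0.767, 0.750, 0.900, 0.950])

# Paired, two-sided Wilcoxon signed-rank test on the accuracy differences.
stat, p_value = wilcoxon(inception_time, mca_lstm)
print(f"W = {stat:.3f}, p = {p_value:.4f}")   # small p -> significant difference
```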
1 | ABANDA A, MORI U, LOZANO J A. A review on distance based time series classification[J]. Data Mining and Knowledge Discovery, 2019, 33(2): 378-412. 10.1007/s10618-018-0596-4 |
2 | LAI G K, CHANG W C, YANG Y M, et al. Modeling long- and short-term temporal patterns with deep neural networks [C]// Proceedings of the 41st International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2018: 95-104. 10.1145/3209978.3210006 |
3 | YUAN J D, WANG Z H, HAN M. A discriminative shapelets transformation for time series classification[J]. International Journal of Pattern Recognition and Artificial Intelligence, 2014, 28(6): No.1450014. 10.1142/s0218001414500141 |
4 | BATISTA G E A P A, WANG X Y, KEOGH E J. A complexity- invariant distance measure for time series [C]// Proceedings of the 11th SIAM International Conference on Data Mining. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2011: 699-710. 10.1137/1.9781611972818.60 |
5 | BAGNALL A, LINES J, BOSTROM A, et al. The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances[J]. Data Mining and Knowledge Discovery, 2017, 31(3): 606-660. 10.1007/s10618-016-0483-9 |
6 | SCHÄFER P. The BOSS is concerned with time series classification in the presence of noise[J]. Data Mining and Knowledge Discovery, 2015, 29(6): 1505-1530. 10.1007/s10618-014-0377-7 |
7 | ISMAIL FAWAZ H, FORESTIER G, WEBER J, et al. Deep learning for time series classification: a review[J]. Data Mining and Knowledge Discovery, 2019, 33(4): 917-963. 10.1007/s10618-019-00619-1 |
8 | WANG Z G, YAN W Z, OATES T. Time series classification from scratch with deep neural networks: a strong baseline [C]// Proceedings of the 2017 International Joint Conference on Neural Networks. Piscataway: IEEE, 2017: 1578-1585. 10.1109/ijcnn.2017.7966039 |
9 | RUMELHART D E, HINTON G E, WILLIAMS R J. Learning internal representations by error propagation[J]. Readings in Cognitive Science,1988, 323(6088):399-421. 10.1016/b978-1-4832-1446-7.50035-2 |
10 | SHELHAMER E, LONG J, DARRELL T. Fully convolutional networks for semantic segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(4): 640-651. 10.1109/tpami.2016.2572683 |
11 | FRANCESCHI J Y, DIEULEVEUT A, JAGGI M. Unsupervised scalable representation learning for multivariate time series[EB/OL]// Proceedings of the 33rd Conference on Neural Information Processing Systems. (2020-01-11) [2021-06-11]. |
12 | CUI Z C, CHEN W L, CHEN Y X. Multi-scale convolutional neural networks for time series classification[EB/OL]. (2016-05-11) [2021-06-20]. |
13 | KARIM F, MAJUMDAR S, DARABI H. Insights into LSTM fully convolutional networks for time series classification[J]. IEEE Access, 2019, 7: 67718-67725. 10.1109/access.2019.2916828 |
14 | KARIM F, MAJUMDAR S, DARABI H, et al. LSTM fully convolutional networks for time series classification[J]. IEEE Access, 2018, 6: 1662-1669. 10.1109/access.2017.2779939 |
15 | CHEN W, SHI K. Multi-scale attention convolutional neural network for time series classification[J]. Neural Networks, 2021, 136: 126-140. 10.1016/j.neunet.2021.01.001 |
16 | HUANG S H, XU L J, JIANG C W. Residual attention net for superior cross-domain time sequence modeling[EB/OL]. (2020-01-13) [2021-06-08]. 10.1007/978-981-33-6137-9_5 |
17 | LAI S W, XU L H, LIU K, et al. Recurrent convolutional neural networks for text classification [C]// Proceedings of the 29th AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2015: 2267-2273. 10.1609/aaai.v29i1.9513 |
18 | LI M, NING D J, GUO J C. Attention mechanism-based CNN-LSTM model and its application[J]. Computer Engineering and Applications, 2019, 55(13): 20-27. 10.3778/j.issn.1002-8331.1901-0246 |
19 | HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780. 10.1162/neco.1997.9.8.1735 |
20 | LIU Y Q, SU Z F, LI H, et al. An LSTM based classification method for time series trend forecasting [C]// Proceedings of the 2019 14th IEEE Conference on Industrial Electronics and Applications. Piscataway: IEEE, 2019: 402-406. 10.1109/iciea.2019.8833725 |
21 | XIAO Z W, XU X, XING H L, et al. RTFN: a robust temporal feature network for time series classification[J]. Information Sciences, 2021, 571: 65-86. 10.1016/j.ins.2021.04.053 |
22 | HU J, SHEN L, SUN G. Squeeze-and-excitation networks [C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 7132-7141. 10.1109/cvpr.2018.00745 |
23 | VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need [C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2017: 6000-6010. 10.1016/s0262-4079(17)32358-8 |
24 | CARRASCO M, BARBOT A. Spatial attention alters visual appearance[J]. Current Opinion in Psychology, 2019, 29: 56-64. 10.1016/j.copsyc.2018.10.010 |
25 | LeCUN Y, BOSER B, DENKER J S, et al. Backpropagation applied to handwritten zip code recognition[J]. Neural Computation, 1989, 1(4): 541-551. 10.1162/neco.1989.1.4.541 |
26 | HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition [C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 770-778. 10.1109/cvpr.2016.90 |
27 | DEHGHANI M, GOUWS S, VINYALS O, et al. Universal transformers[EB/OL]. (2019-03-05) [2021-06-13]. |
28 | SHUANG K, ZHANG Z X, LOO J, et al. Convolution deconvolution word embedding: an end-to-end multi-prototype fusion embedding method for natural language processing[J]. Information Fusion, 2020, 53: 112-122. 10.1016/j.inffus.2019.06.009 |
29 | FAWAZ H I, LUCAS B, FORESTIER G, et al. InceptionTime: finding AlexNet for time series classification[J]. Data Mining and Knowledge Discovery, 2020, 34(6): 1936-1962. 10.1007/s10618-020-00710-y |
30 | SZEGEDY C, LIU W, JIA Y Q, et al. Going deeper with convolutions [C]// Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2015: 1-9. 10.1109/cvpr.2015.7298594 |
31 | TANG W S, LONG G D, LIU L, et al. Rethinking 1D-CNN for time series classification: a stronger baseline[EB/OL]. (2021-02-12) [2021-06-13]. |
32 | LIN M, CHEN Q, YAN S C. Network in network[EB/OL]. (2014-03-04) [2021-06-13]. |
33 | SRIVASTAVA N, HINTON G, KRIZHEVSKY A, et al. Dropout: a simple way to prevent neural networks from overfitting[J]. Journal of Machine Learning Research, 2014, 15: 1929-1958. |
34 | IOFFE S, SZEGEDY C. Batch normalization: accelerating deep network training by reducing internal covariate shift [C]// Proceedings of the 32nd International Conference on Machine Learning. New York: JMLR.org, 2015: 448-456. |
35 | NAIR V, HINTON G E. Rectified linear units improve restricted Boltzmann machines [C]// Proceedings of the 27th International Conference on Machine Learning. Madison, WI: Omnipress, 2010: 807-814. |
36 | DAU H A, BAGNALL A, KAMGAR K, et al. The UCR time series archive[J]. IEEE/CAA Journal of Automatica Sinica, 2019, 6(6): 1293-1305. 10.1109/jas.2019.1911747 |
37 | KINGMA D P, BA J L. Adam: a method for stochastic optimization[EB/OL]. (2017-01-30) [2021-05-29]. |
38 | Keras-team. Keras[CP/OL]. [2021-06-13]. |
|||||