Journal of Computer Applications ›› 2024, Vol. 44 ›› Issue (12): 3899-3906. DOI: 10.11772/j.issn.1001-9081.2023121857
Received: 2024-01-05
Revised: 2024-03-12
Accepted: 2024-03-15
Online: 2024-03-28
Published: 2024-12-10
Contact: Shanshan YAO
About author: WANG Chao, born in 1995 in Datong, Shanxi, M.S. candidate. His research interests include voiceprint recognition.
Abstract:
Concerning the severe performance degradation of current Speaker Verification (SV) methods in complex test scenarios or under strong speech quality degradation, an SV method based on speech Quality Adaptation and a Triplet-like idea (QATM) was proposed. First, the feature norm of a speaker's utterance was linked to its speech quality. Second, different loss functions were selected according to whether the speech quality was high or low, adjusting the importance of samples of different quality so that hard samples with high speech quality were emphasized while hard samples with low speech quality were ignored. Finally, the triplet-like idea was used to improve both the AM-Softmax (Additive Margin Softmax) loss and the AAM-Softmax (Additive Angular Margin Softmax) loss, paying more attention to hard speaker samples and thus mitigating the harm that hard samples of very poor speech quality cause to the model. Experimental results show that, when training on the VoxCeleb2 development set, the proposed method reduces the Equal Error Rate (EER) on the VoxCeleb1-O test set by 6.41%, 3.89% and 7.27% compared with the AAM-Softmax-based method under the Half-ResNet34, ResNet34 and ECAPA-TDNN (Emphasized Channel Attention, Propagation and Aggregation in Time Delay Neural Network) architectures, respectively; when training on Cn-Celeb.Train with the Half-ResNet34 architecture, the proposed method reduces the EER on the Cn-Celeb.Eval evaluation set by 5.25% compared with the AAM-Softmax-based method. Thus, the proposed method improves accuracy in both ordinary and complex scenarios.
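The core idea in the abstract is to tie a quality proxy (the feature norm of the speaker embedding) to how strongly each sample drives a margin-based loss. Below is a minimal PyTorch-style sketch of that idea, not the paper's released implementation: the class name, the min-max normalization of the norm, and the `0.5 + quality` weighting are illustrative assumptions standing in for the paper's quality-dependent switch between two triplet-like loss branches.

```python
# Minimal sketch: embedding norm as a speech-quality proxy modulating an
# AAM-Softmax loss. Names and the exact weighting scheme are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QualityAdaptiveAAMSoftmax(nn.Module):
    def __init__(self, embed_dim: int, num_speakers: int, m: float = 0.3, s: float = 30.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_speakers, embed_dim))
        self.m, self.s = m, s

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Quality proxy: batch-normalized embedding norm in [0, 1], detached so
        # the weighting itself does not receive gradients.
        norms = embeddings.norm(dim=1)
        quality = ((norms - norms.min()) / (norms.max() - norms.min() + 1e-8)).detach()

        # Cosine similarity between embeddings and speaker prototypes.
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))

        # Additive angular margin on the target class (AAM-Softmax).
        one_hot = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(one_hot, torch.cos(theta + self.m), cos) * self.s

        # Per-sample weighting: emphasize high-quality samples, soften the
        # contribution of low-quality ones (an assumed stand-in for the
        # paper's switch between two loss branches).
        per_sample = F.cross_entropy(logits, labels, reduction="none")
        return (per_sample * (0.5 + quality)).mean()
```

A typical call would be `QualityAdaptiveAAMSoftmax(256, 5944)(embeddings, labels)`, with 5 944 speaker classes matching the VoxCeleb2-dev statistics in Tab. 1.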
Chao WANG, Shanshan YAO. Speaker verification method based on speech quality adaptation and triplet-like idea[J]. Journal of Computer Applications, 2024, 44(12): 3899-3906.
| Dataset | Speakers | Utterances | Trial pairs |
| --- | --- | --- | --- |
| VoxCeleb2-dev | 5 944 | 1 092 009 | — |
| VoxCeleb1-dev | 1 211 | 148 642 | — |
| VoxCeleb1-O | 40 | 4 708 | 37 611 |
| VoxCeleb1-E | 1 251 | 145 160 | 579 818 |
| VoxCeleb1-H | 1 190 | 137 924 | 550 894 |
| Cn-Celeb.Train | 2 800 | 632 740 | — |
| Cn-Celeb.Eval | 200 | 26 854 | 3 482 293 |

Tab. 1 VoxCeleb and Cn-Celeb: training and evaluation sets
| Dataset | Language | Genre types | Media sources | Speakers | Utterances | Duration/h | Multi-genre speakers |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Cn-Celeb1 | Chinese | 11 | 1 | 1 000 | 130 109 | 274 | 745 |
| Cn-Celeb2 | Chinese | 11 | 5 | 2 000 | 529 485 | 1 090 | 658 |

Tab. 2 Basic information of Cn-Celeb1 and Cn-Celeb2
| Layer | Structure | Output feature map size |
| --- | --- | --- |
| Input | — | 1 |
| Stage 1 | {ResBlock, 32, 1} | 32 |
| Stage 2 | {ResBlock, 64, 1} | 64 |
| Stage 3 | {ResBlock, 128, 1} | 128 |
| Stage 4 | {ResBlock, 256, 1} | 256 |
| Feature aggregation | Temporal statistics pooling | 64F |
| Output head | — | 256 |

Tab. 3 Half-ResNet34 structure
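Read row by row, Tab. 3 maps a single-channel spectrogram through four residual stages of width 32 to 256, pools mean and standard deviation statistics over time, and projects to a 256-dimensional speaker embedding. A minimal PyTorch sketch of that layout follows; the block internals (3×3 convolutions, one block per stage, no downsampling) and the 80-bin mel input are assumptions, since the table only fixes the stage widths, the pooling type, and the output dimension.

```python
# Sketch of the Tab. 3 stage layout. Only the stage widths, the temporal
# statistics pooling, and the 256-dim output come from the table; the block
# internals are assumptions.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.bn1, self.bn2 = nn.BatchNorm2d(out_ch), nn.BatchNorm2d(out_ch)
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + self.skip(x))

class HalfResNet34Sketch(nn.Module):
    def __init__(self, n_mels: int = 80, embed_dim: int = 256):
        super().__init__()
        widths = [32, 64, 128, 256]           # stage widths from Tab. 3
        layers, in_ch = [], 1                 # 1-channel spectrogram input
        for w in widths:
            layers.append(ResBlock(in_ch, w))
            in_ch = w
        self.stages = nn.Sequential(*layers)
        # Temporal statistics pooling doubles the feature dim (mean + std).
        self.head = nn.Linear(2 * widths[-1] * n_mels, embed_dim)

    def forward(self, x):                     # x: (batch, 1, n_mels, time)
        h = self.stages(x)                    # (batch, 256, n_mels, time)
        h = h.flatten(1, 2)                   # merge channel and freq axes
        stats = torch.cat([h.mean(dim=-1), h.std(dim=-1)], dim=1)
        return self.head(stats)               # (batch, 256) speaker embedding
```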
| Network architecture | Loss function | VoxCeleb1-O EER/minDCF | VoxCeleb1-H EER/minDCF | VoxCeleb1-E EER/minDCF | SITW.Eval.Core EER/minDCF | Cn-Celeb.Eval EER/minDCF |
| --- | --- | --- | --- | --- | --- | --- |
| Half-ResNet34+TSP | AAM-Softmax | 1.56/0.179 | 2.55/0.237 | 1.54/0.170 | 3.18/0.278 | 12.25/0.693 |
| Half-ResNet34+TSP | FTloss | 1.46/0.125 | 2.47/0.232 | 1.50/0.163 | 3.00/0.270 | 12.21/0.654 |
| ResNet34+ASP | AAM-Softmax | 1.03/0.128 | 2.18/0.234 | 1.12/0.136 | 2.59/0.434 | 10.23/0.553 |
| ResNet34+ASP | FTloss | 0.99/0.120 | 2.14/0.224 | 1.11/0.134 | 2.54/0.436 | 10.17/0.551 |
| ECAPA-TDNN+ASP | AAM-Softmax | 1.10/0.158 | 2.44/0.260 | 1.29/0.156 | 2.82/0.493 | 10.29/0.558 |
| ECAPA-TDNN+ASP | FTloss | 1.02/0.124 | 2.42/0.257 | 1.26/0.145 | 2.73/0.487 | 10.24/0.552 |

Tab. 4 Results on VoxCeleb2 dataset (EER in %)
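All results tables report EER and minDCF over verification trials. For reference, both metrics can be computed from trial scores and target/non-target labels as in the generic sketch below; this is not the paper's evaluation code, and the DCF parameters (P_target = 0.01, C_miss = C_fa = 1) are assumed common defaults.

```python
# Generic sketch of the two reported metrics: Equal Error Rate (EER) and
# minimum Detection Cost Function (minDCF). DCF parameters are assumptions.
import numpy as np

def eer_and_min_dcf(scores, labels, p_target=0.01, c_miss=1.0, c_fa=1.0):
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    labels = labels[np.argsort(scores)]          # sweep thresholds low -> high
    n_tar, n_non = labels.sum(), (1 - labels).sum()
    # At a threshold just below the i-th sorted score, trials i.. are accepted.
    fa = np.concatenate([[n_non], n_non - np.cumsum(1 - labels)]) / n_non
    miss = np.concatenate([[0], np.cumsum(labels)]) / n_tar
    eer_idx = np.argmin(np.abs(fa - miss))
    eer = (fa[eer_idx] + miss[eer_idx]) / 2
    dcf = c_miss * miss * p_target + c_fa * fa * (1 - p_target)
    # Normalize by the best trivial system (accept-all or reject-all).
    min_dcf = dcf.min() / min(c_miss * p_target, c_fa * (1 - p_target))
    return eer, min_dcf

# Example: scores from target (label 1) and non-target (label 0) trials.
rng = np.random.default_rng(0)
s = np.concatenate([rng.normal(1, 1, 1000), rng.normal(-1, 1, 1000)])
y = np.concatenate([np.ones(1000), np.zeros(1000)])
print(eer_and_min_dcf(s, y))
```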
| Hyperparameters | VoxCeleb1-O EER/minDCF | VoxCeleb1-H EER/minDCF | VoxCeleb1-E EER/minDCF | SITW.Dev.Core EER/minDCF | SITW.Eval.Core EER/minDCF | Cn-Celeb.Eval EER/minDCF |
| --- | --- | --- | --- | --- | --- | --- |
| m=0.20, s=30 | 1.75/0.188 | 2.87/0.255 | 1.76/0.190 | 3.42/0.249 | 3.253/0.282 | 12.24/0.691 |
| m=0.30, s=30 | 1.46/0.125 | 2.47/0.232 | 1.50/0.163 | 2.92/0.222 | 2.952/0.253 | 12.21/0.654 |
| m=0.40, s=30 | 1.59/0.145 | 2.64/0.242 | 1.60/0.172 | 2.61/0.221 | 2.835/0.246 | 12.19/0.620 |

Tab. 5 Experimental results of parameters on VoxCeleb2 dataset (EER in %)
| Hyperparameters | EER | minDCF |
| --- | --- | --- |
| m=0.10, s=30 | 10.11 | 0.556 |
| m=0.15, s=30 | 10.37 | 0.546 |
| m=0.20, s=30 | 10.27 | 0.554 |
| m=0.25, s=30 | 10.97 | 0.561 |
| m=0.30, s=30 | 11.78 | 0.568 |

Tab. 6 Experimental results of parameters on Cn-Celeb dataset (evaluated on Cn-Celeb.Eval; EER in %)
| Loss function | VoxCeleb1-O EER/minDCF | VoxCeleb1-H EER/minDCF | VoxCeleb1-E EER/minDCF |
| --- | --- | --- | --- |
| AM-Softmax | 1.63/0.177 | 2.86/0.267 | 1.68/0.189 |
| TAM-Softmax | 1.53/0.142 | 2.48/0.231 | 1.51/0.163 |
| AAM-Softmax | 1.61/0.174 | 2.82/0.258 | 1.63/0.183 |
| TAAM-Softmax | 1.53/0.161 | 2.58/0.243 | 1.58/0.174 |
| Adaptive Margin | 1.58/0.145 | 2.51/0.231 | 1.52/0.166 |
| Adaptive Margin-A | 1.56/0.168 | 2.49/0.238 | 1.54/0.165 |
| Adaptive Margin-B | 1.52/0.165 | 2.49/0.236 | 1.49/0.173 |
| FTloss | 1.46/0.126 | 2.47/0.233 | 1.50/0.163 |

Tab. 7 Results of ablation experiments (EER in %)
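For orientation in Tab. 7: the two baselines differ only in where the margin m enters the target-class logit, and the abstract describes the triplet-like variants (TAM-Softmax, TAAM-Softmax) as the paper's modifications of these two forms. A schematic comparison of the baseline target logits, with m and s as in Tab. 5 and Tab. 6:

```python
# Target-class logits of the two baseline losses, given the angle theta
# between an embedding and its speaker prototype. Schematic only.
import math

def am_softmax_target_logit(theta: float, m: float = 0.3, s: float = 30.0) -> float:
    return s * (math.cos(theta) - m)    # additive cosine margin (AM-Softmax)

def aam_softmax_target_logit(theta: float, m: float = 0.3, s: float = 30.0) -> float:
    return s * math.cos(theta + m)      # additive angular margin (AAM-Softmax)
```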
| Loss function | EER | minDCF |
| --- | --- | --- |
| AM-Softmax | 10.50 | 0.545 |
| AAM-Softmax | 10.67 | 0.552 |
| FTloss | 10.11 | 0.556 |

Tab. 8 Results on Cn-Celeb dataset (evaluated on Cn-Celeb.Eval; EER in %)
| Loss function | SNR/dB | EER | minDCF |
| --- | --- | --- | --- |
| AAM-Softmax | 10 | 13.04 | 0.967 |
| AAM-Softmax | 20 | 9.76 | 0.807 |
| AAM-Softmax | 30 | 7.63 | 0.652 |
| AAM-Softmax | 40 | 7.42 | 0.637 |
| FTloss | 10 | 12.13 | 0.880 |
| FTloss | 20 | 9.11 | 0.715 |
| FTloss | 30 | 7.42 | 0.633 |
| FTloss | 40 | 7.32 | 0.630 |

Tab. 9 Experimental results on VoxCeleb1 dataset with different signal-to-noise ratios (EER in %)
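Tab. 9 compares robustness after corrupting test speech with additive noise at fixed SNRs. A generic way to mix noise into clean speech at a target SNR (a standard recipe, not necessarily the paper's exact corruption pipeline) is:

```python
# Mix additive noise into clean speech at a target SNR. Standard recipe;
# assumed rather than taken from the paper.
import numpy as np

def add_noise_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    noise = np.resize(noise, speech.shape)        # loop/trim noise to length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    # Scale noise so that 10*log10(p_speech / p_noise_scaled) == snr_db.
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(0)
clean = rng.normal(0, 0.1, 16000)                 # 1 s of fake 16 kHz audio
noisy = add_noise_at_snr(clean, rng.normal(0, 1, 8000), snr_db=10)
```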