
NCIIP 2021 | Medical Image Classification Method Based on a Variable Convolutional Autoencoder with Teaching-Learning-Based Optimization

Li Wei1, Fan Yaochi2, Jiang Qiaoyong2, Wang Lei3, Xu Qingzheng4

  1. Xi'an University of Technology
    2. School of Computer Science and Engineering, Xi'an University of Technology
    3. Shaanxi Key Laboratory of Network Computing and Security Technology (Xi'an University of Technology)
    4. College of Information and Communication, National University of Defense Technology
  • Received: 2021-06-28  Revised: 2021-07-15  Published online: 2021-12-06
  • Corresponding author: Fan Yaochi

Variable Convolutional Autoencoder Method Based on Teaching-Learning-Based Optimization for Medical Image Classification




Abstract: To address the problems of time consumption, inaccuracy, and the influence of parameter settings on algorithm performance that arise when the parameters of a convolutional neural network are tuned by traditional manual methods, a variable convolutional autoencoder method based on teaching-learning-based optimization was proposed. The algorithm designs a variable-length individual encoding strategy to quickly construct convolutional autoencoder structures and stack a convolutional neural network. In addition, the structural information of excellent individuals is fully exploited to guide the search toward more promising regions and improve algorithm performance. Experimental results show that the proposed algorithm achieves a classification accuracy of 89.84% on medical image classification problems, higher than that of traditional convolutional neural networks and neural networks of the same type. By optimizing the convolutional autoencoder structure and stacking a convolutional neural network, the algorithm effectively improves medical image classification performance.
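The abstract's two key ideas, a variable-length individual encoding for the autoencoder structure and a teacher phase that pulls learners toward the best individual, can be illustrated with a minimal sketch. Everything below is hypothetical: the layer descriptors, the placeholder fitness function, and the discrete adaptation of the TLBO teacher phase are illustrative assumptions, not the paper's actual implementation (which evaluates fitness by training and measuring classification accuracy).

```python
import random

# Hypothetical encoding: each individual is a variable-length list of
# (num_filters, kernel_size) tuples, one tuple per encoder layer.
FILTER_CHOICES = [16, 32, 64, 128]
KERNEL_CHOICES = [3, 5, 7]

def random_individual(min_len=2, max_len=6):
    depth = random.randint(min_len, max_len)
    return [(random.choice(FILTER_CHOICES), random.choice(KERNEL_CHOICES))
            for _ in range(depth)]

def fitness(ind):
    # Placeholder objective (NOT the paper's): prefer moderate depth with
    # larger filter counts. The real method trains the stacked network and
    # uses validation accuracy here.
    return sum(f for f, _ in ind) / (1 + abs(len(ind) - 4))

def teacher_phase(population):
    # TLBO-style teacher phase adapted to a discrete, variable-length
    # encoding: each learner's depth drifts toward the teacher's depth,
    # and individual layer settings are copied from the teacher with
    # some probability. Greedy acceptance keeps the better of old/new.
    teacher = max(population, key=fitness)
    new_pop = []
    for ind in population:
        child = list(ind)
        if len(child) < len(teacher):      # grow toward teacher's depth
            child.append(teacher[len(child)])
        elif len(child) > len(teacher):    # shrink toward teacher's depth
            child.pop()
        for i in range(min(len(child), len(teacher))):
            if random.random() < 0.5:      # learn a layer from the teacher
                child[i] = teacher[i]
        new_pop.append(max([ind, child], key=fitness))
    return new_pop

random.seed(0)
pop = [random_individual() for _ in range(10)]
best_before = max(map(fitness, pop))
for _ in range(20):
    pop = teacher_phase(pop)
best_after = max(map(fitness, pop))
```

Because acceptance is greedy per learner, the best fitness in the population is non-decreasing across iterations; the paper's learner phase (pairwise learning between individuals) is omitted here for brevity.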

Key words: Teaching-Learning-Based Optimization (TLBO), Convolutional Autoencoder (CAE), Convolutional Neural Network (CNN), medical image classification, variable-length encoding