Journal of Computer Applications ›› 2022, Vol. 42 ›› Issue (10): 3033-3039. DOI: 10.11772/j.issn.1001-9081.2021091607

• Artificial Intelligence •

Neural network conversion method for dynamic event stream

Yuhao ZHANG1, Mengwen YUAN2, Yujing LU2, Rui YAN3, Huajin TANG2,4

  1. College of Computer Science, Sichuan University, Chengdu, Sichuan 610065, China
    2. Research Center for Intelligent Computing Hardware, Zhejiang Laboratory, Hangzhou, Zhejiang 311100, China
    3. College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang 310023, China
    4. College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang 310027, China
  • Received: 2021-09-10; Revised: 2022-02-16; Accepted: 2022-02-21; Online: 2022-04-15; Published: 2022-10-10
  • Corresponding author: Rui YAN
  • About the authors: First contact author: ZHANG Yuhao (1998—), male, born in Yuncheng, Shanxi, M. S. candidate, CCF member. His research interests include neuromorphic computing and artificial intelligence.
    YUAN Mengwen (1995—), female, born in Xinyang, Henan, M. S., CCF member. Her research interests include neuromorphic computing and artificial intelligence.
    LU Yujing (1997—), female, born in Yangzhou, Jiangsu, M. S., CCF member. Her research interests include neuromorphic computing and artificial intelligence.
    YAN Rui (1975—), female, born in Yuncheng, Shanxi, Ph. D., professor. Her research interests include spiking neural networks, neuromorphic computing and cognitive robots; ryan@zjut.edu.cn
    TANG Huajin (1975—), male, born in Huainan, Anhui, Ph. D., professor. His research interests include neuromorphic computing, neuromorphic chips and intelligent sensors.
  • Supported by:
    NSAF Joint Fund of the National Natural Science Foundation of China and the China Academy of Engineering Physics (U2030204); National Natural Science Foundation of China (61773271); Key Research Project of Zhejiang Lab (2021KC0AC01)

Neural network conversion method for dynamic event stream

Yuhao ZHANG1, Mengwen YUAN2, Yujing LU2, Rui YAN3, Huajin TANG2,4   

  1. College of Computer Science, Sichuan University, Chengdu, Sichuan 610065, China
    2. Research Center for Intelligent Computing Hardware, Zhejiang Laboratory, Hangzhou, Zhejiang 311100, China
    3. College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang 310023, China
    4. College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang 310027, China
  • Received: 2021-09-10; Revised: 2022-02-16; Accepted: 2022-02-21; Online: 2022-04-15; Published: 2022-10-10
  • Contact: Rui YAN
  • About the authors: ZHANG Yuhao, born in 1998, M. S. candidate. His research interests include neuromorphic computing and artificial intelligence.
    YUAN Mengwen, born in 1995, M. S. Her research interests include neuromorphic computing and artificial intelligence.
    LU Yujing, born in 1997, M. S. Her research interests include neuromorphic computing and artificial intelligence.
    YAN Rui, born in 1975, Ph. D., professor. Her research interests include spiking neural networks, neuromorphic computing and cognitive robots.
    TANG Huajin, born in 1975, Ph. D., professor. His research interests include neuromorphic computing, neuromorphic chips and intelligent sensors.
  • Supported by:
    NSAF Joint Fund of the National Natural Science Foundation of China and the China Academy of Engineering Physics (U2030204); National Natural Science Foundation of China (61773271); Key Research Project of Zhejiang Lab (2021KC0AC01)

Abstract:

Aiming at the problems that the Convolutional Neural Network (CNN) conversion method based on weight normalization suffers a large accuracy loss when applied to event stream data and that floating-point networks are difficult to deploy efficiently on hardware, a network conversion method for dynamic event streams was proposed. Firstly, the event stream data was reconstructed and fed into a CNN for training; during training, a quantized activation function was adopted to reduce the conversion accuracy loss, and a symmetric fixed-point quantization method was used to reduce the parameter storage. Secondly, in the network conversion, the spike count equivalence principle rather than the rate equivalence principle was adopted to better fit the sparsity of the data. Experimental results show that, compared with using the traditional activation function, the Spiking Convolutional Neural Network (SCNN) using the quantized activation function improves the recognition accuracy on the three dynamic event stream datasets N-MNIST, POKER-DVS and MNIST-DVS by 0.29, 8.52 and 3.95 percentage points respectively, and reduces the conversion loss by 21.77%, 100.00% and 92.48% respectively. In addition, compared with the high-precision SCNN generated by the weight normalization method, the proposed quantized SCNN saves about 75% of storage space with comparable recognition accuracy, and reduces the conversion loss on the N-MNIST and MNIST-DVS datasets by 6.79% and 46.29% respectively.
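To make the two quantization ideas in the abstract concrete, the snippet below gives a minimal sketch, not the authors' code: it assumes an 8-bit symmetric quantizer with a single per-tensor scale and a step-shaped quantized activation; the function names, bit width and number of activation levels are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def symmetric_fixed_point_quantize(w, num_bits=8):
    """Symmetric fixed-point quantization: map floating-point weights to
    signed integers in [-(2^(b-1)-1), 2^(b-1)-1] with one per-tensor scale."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = max(np.max(np.abs(w)), 1e-12) / qmax   # shared scale, avoid division by zero
    w_int = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
    return w_int, scale                            # dequantize as w_int * scale

def quantized_relu(x, levels=16, x_max=1.0):
    """Quantized activation: clip to [0, x_max] and snap to `levels` discrete
    steps, mirroring the discrete number of spikes a spiking neuron can emit."""
    step = x_max / levels
    return np.clip(np.floor(x / step), 0, levels) * step

# Tiny usage example with random data.
w = np.random.randn(4, 4).astype(np.float32)
w_int, s = symmetric_fixed_point_quantize(w, num_bits=8)
a = quantized_relu(np.random.rand(4), levels=16)
print(w_int.dtype, s, a)
```

Storing the int32 (or int8) codes plus one scale per tensor instead of 32-bit floats is what yields the roughly 75% storage saving mentioned above.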

Key words: neural network conversion, dynamic event stream, quantized activation function, fixed-point quantization, Spiking Convolutional Neural Network (SCNN)

Abstract:

Since the Convolutional Neural Network (CNN) conversion method based on weight normalization suffers a large accuracy loss when applied to event stream data, and floating-point networks are difficult to deploy efficiently on hardware, a network conversion method for dynamic event streams was proposed. Firstly, the event stream data was reconstructed and used as the input of a CNN for training. In the training process, a quantized activation function was adopted to reduce the conversion accuracy loss, and a symmetric fixed-point quantization method was used to reduce the parameter storage. Then, in the network conversion, the spike count equivalence principle was used instead of the rate equivalence principle to better adapt to the sparsity of the data. Experimental results show that on three dynamic event stream datasets, N-MNIST, POKER-DVS and MNIST-DVS, compared with using the traditional activation function, the Spiking Convolutional Neural Network (SCNN) using the quantized activation function has the recognition accuracy improved by 0.29, 8.52 and 3.95 percentage points respectively, and the conversion loss reduced by 21.77%, 100.00% and 92.48% respectively. Meanwhile, compared with the high-precision SCNN generated by the weight normalization method, the proposed quantized SCNN saves about 75% of storage space with comparable recognition accuracy, and has the conversion loss on the N-MNIST and MNIST-DVS datasets reduced by 6.79% and 46.29% respectively.
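For the spike count equivalence idea, the following toy sketch (an assumption-laden illustration, not the paper's implementation) drives an integrate-and-fire layer with per-input event counts and reads out the number of output spikes, which is the quantity matched to the quantized CNN activations; the soft-reset rule, time window T and threshold value are illustrative choices.

```python
import numpy as np

def if_layer_spike_counts(in_counts, weights, threshold=1.0, T=32):
    """Toy integrate-and-fire layer driven for T steps by input spike counts.
    The output spike COUNT per neuron (not a firing rate over continuous
    time) is the quantity compared against the quantized CNN activation."""
    n_out = weights.shape[0]
    v = np.zeros(n_out)                                  # membrane potentials
    out_counts = np.zeros(n_out, dtype=np.int32)
    per_step = np.asarray(in_counts, dtype=float) / T    # spread counts over T steps
    for _ in range(T):
        v += weights @ per_step                          # integrate weighted input
        fired = v >= threshold
        out_counts += fired.astype(np.int32)             # accumulate output spike counts
        v[fired] -= threshold                            # soft reset by subtraction
    return out_counts

# Example: 3 input channels with sparse event counts, 2 output neurons.
counts = [5.0, 0.0, 12.0]
w = np.array([[0.2, 0.1, 0.05],
              [0.0, 0.3, 0.1]])
print(if_layer_spike_counts(counts, w, threshold=1.0, T=32))
```

Counting spikes rather than estimating rates avoids assuming a dense, stationary input, which is why this view fits the sparsity of dynamic event streams better.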

Key words: neural network conversion, dynamic event stream, quantized activation function, fixed-point quantization, Spiking Convolutional Neural Network (SCNN)

CLC number: