Journal of Computer Applications ›› 2021, Vol. 41 ›› Issue (12): 3432-3437. DOI: 10.11772/j.issn.1001-9081.2021060994

• The 18th China Conference on Machine Learning (CCML 2021) •

Dynamic graph representation learning method based on deep neural network and gated recurrent unit

Huibo LI 1,2, Yunxiao ZHAO 1,2, Liang BAI 1,2

  1. Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education (Shanxi University), Taiyuan, Shanxi 030006, China
    2. School of Computer and Information Technology, Shanxi University, Taiyuan, Shanxi 030006, China
  • Received:2021-05-12 Revised:2021-06-28 Accepted:2021-07-26 Online:2021-12-28 Published:2021-12-10
  • Contact: Liang BAI
  • About author: LI Huibo, born in 1997 in Changzhi, Shanxi, M. S. candidate. His research interests include graph representation learning.
    ZHAO Yunxiao, born in 1988 in Fangshan, Shanxi, Ph. D. candidate, CCF member. His research interests include clustering, semi-supervised learning and weakly supervised learning.
  • Supported by:
    the National Natural Science Foundation of China (62022052); the National Key Research and Development Program of China (2020AAA0106100); the Shanxi Basic Research Program (201901D211192)

Abstract:

Learning latent vector representations of nodes in a graph is an important and ubiquitous task that aims to capture various attributes of the nodes. A large body of work has demonstrated that static graph representation learning can capture part of the node information; however, real-world graphs evolve over time. In order to solve the problem that most dynamic network algorithms cannot effectively retain node neighborhood structure and temporal information, a dynamic network representation learning method based on Deep Neural Network (DNN) and Gated Recurrent Unit (GRU), namely DynAEGRU, was proposed. With an Auto-Encoder (AE) as the framework of DynAEGRU, the neighborhood information was first aggregated by the encoder with a DNN to obtain low-dimensional feature vectors, then the node temporal information was extracted by a GRU network; finally, the adjacency matrix was reconstructed by the decoder and compared with the real graph to construct the loss. Experimental results on three real-world datasets show that DynAEGRU achieves better performance gains than several static and dynamic graph representation learning algorithms.
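The abstract above outlines an autoencoder-style pipeline: a DNN encoder aggregates each snapshot's neighborhood information into low-dimensional features, a GRU carries temporal information across snapshots, and a decoder reconstructs the adjacency matrix that is compared with the real graph to form the loss. The following PyTorch sketch is only an illustration of that pipeline under stated assumptions (the class name, layer sizes, inner-product decoder and binary cross-entropy loss are choices made here, not the authors' released implementation):

```python
# Hypothetical sketch of a DynAEGRU-style model; architectural details are assumptions.
import torch
import torch.nn as nn

class DynAEGRUSketch(nn.Module):
    def __init__(self, num_nodes, hidden_dim, embed_dim):
        super().__init__()
        # DNN encoder: maps each node's adjacency row (its neighborhood vector)
        # to a low-dimensional feature vector.
        self.encoder = nn.Sequential(
            nn.Linear(num_nodes, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim), nn.ReLU(),
        )
        # GRU: consumes the sequence of per-snapshot node features and
        # propagates temporal information across snapshots.
        self.gru = nn.GRU(embed_dim, embed_dim, batch_first=True)

    def forward(self, adj_seq):
        # adj_seq: (T, N, N) dense adjacency matrices of T graph snapshots.
        feats = torch.stack([self.encoder(a) for a in adj_seq])  # (T, N, d)
        out, _ = self.gru(feats.transpose(0, 1))                 # nodes as batch: (N, T, d)
        z = out[:, -1, :]                                        # final node embeddings (N, d)
        # Inner-product decoder: reconstruct the adjacency matrix from embeddings.
        adj_hat = torch.sigmoid(z @ z.t())                       # (N, N)
        return z, adj_hat

def reconstruction_loss(adj_hat, adj_true):
    # Compare the reconstructed graph with the real one (binary cross-entropy).
    return nn.functional.binary_cross_entropy(adj_hat, adj_true)

# Toy usage: 5 snapshots of a 100-node graph.
if __name__ == "__main__":
    T, N = 5, 100
    adj_seq = (torch.rand(T, N, N) < 0.05).float()
    model = DynAEGRUSketch(num_nodes=N, hidden_dim=128, embed_dim=32)
    z, adj_hat = model(adj_seq)
    loss = reconstruction_loss(adj_hat, adj_seq[-1])
    loss.backward()
```

Treating nodes as the GRU batch and snapshots as the sequence is one simple way to realize "aggregate neighborhoods first, then model time"; the paper may order or parameterize these steps differently.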

Key words: dynamic network representation learning, Deep Neural Network (DNN), Auto-Encoder (AE), Gated Recurrent Unit (GRU), link prediction
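Link prediction, the last keyword, is the standard downstream evaluation for such embeddings. As a hedged illustration only, and not necessarily the paper's exact protocol, candidate edges can be scored by a sigmoid of the dot product of the two node embeddings and summarized with ROC AUC:

```python
# Illustrative link-prediction scoring on learned node embeddings (assumed protocol).
import torch
from sklearn.metrics import roc_auc_score

def score_edges(z, edges):
    # z: (N, d) node embeddings; edges: (E, 2) pairs of node indices.
    src, dst = edges[:, 0], edges[:, 1]
    return torch.sigmoid((z[src] * z[dst]).sum(dim=-1))  # edge-existence probability

# Toy evaluation: positive edges versus randomly sampled negative pairs.
z = torch.randn(100, 32)                  # stand-in for embeddings produced by the model
pos = torch.randint(0, 100, (200, 2))
neg = torch.randint(0, 100, (200, 2))
scores = torch.cat([score_edges(z, pos), score_edges(z, neg)])
labels = torch.cat([torch.ones(200), torch.zeros(200)])
print("AUC:", roc_auc_score(labels.numpy(), scores.numpy()))
```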

CLC number: