Official website of Journal of Computer Applications (《计算机应用》)


Enhanced deep subspace clustering method with unified framework

王清, 赵杰煜, 叶绪伦, 王弄潇

  1. College of Information Science and Engineering, Ningbo University, Ningbo, Zhejiang 315211, China
  • Received: 2023-10-16  Revised: 2024-01-18  Accepted: 2024-01-24  Online: 2024-03-08  Published: 2024-03-08
  • Corresponding author: 赵杰煜
  • Supported by: National Natural Science Foundation of China; Zhejiang Provincial Natural Science Foundation

Abstract: Deep subspace clustering is an effective method for high-dimensional data clustering tasks. However, existing deep subspace clustering methods usually treat self-representation learning and indicator learning as two separate, independent processes, so that on challenging data a fixed self-representation matrix may lead to suboptimal clustering results; at the same time, the quality of the self-representation matrix is critical to the accuracy of the clustering results. To address these problems, a unified enhanced deep subspace clustering method was proposed. Firstly, feature learning, self-representation learning, and indicator learning were integrated so that all parameters were optimized simultaneously, and the self-representation matrix was learned dynamically from the characteristics of the data, ensuring that data features were captured accurately. Secondly, to improve the effectiveness of self-representation learning, class-prototype pseudo-label learning was proposed to provide self-supervised information for feature learning and indicator learning, thereby promoting self-representation learning. Finally, to enhance the discriminative ability of the embedded representations, an orthogonality constraint was introduced to help realize the self-representation property. Experimental results show that, compared with AASSC (Adaptive Attribute and Structure Subspace Clustering network), the proposed method improves clustering accuracy on the MNIST dataset by 1.84 percentage points. Thus the proposed method improves the accuracy of self-representation matrix learning and thereby achieves better clustering performance.

Key words: deep subspace clustering, self-representation learning, indicator learning, affinity matrix, orthogonality constraint
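To make the "self-representation learning → affinity matrix" pipeline mentioned in the abstract concrete, the following is a minimal NumPy sketch of the classical self-expressiveness idea that deep subspace clustering builds on. It is not the authors' deep model: it solves a ridge-regularized self-representation problem in closed form, instead of learning the matrix jointly with a network, and builds the symmetric affinity matrix that would then be fed to spectral clustering. The function names and the toy data are illustrative assumptions.

```python
import numpy as np

def self_representation(X, lam=1e-3):
    """Closed-form ridge-regularized self-representation:
    minimize ||X - X C||_F^2 + lam * ||C||_F^2 over C.
    X: (d, n) data matrix whose columns are samples."""
    n = X.shape[1]
    G = X.T @ X                              # (n, n) Gram matrix
    return np.linalg.solve(G + lam * np.eye(n), G)

def affinity(C):
    """Symmetric affinity matrix, A = (|C| + |C|^T) / 2,
    as used as input to spectral clustering."""
    return 0.5 * (np.abs(C) + np.abs(C).T)

# Toy data: six points drawn from two 1-D subspaces (lines) in R^3.
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([0.0, 1.0, 0.0])
X = np.column_stack([b1 * t for t in (1, 2, 3)] +
                    [b2 * t for t in (1, 2, 3)])

C = self_representation(X)
A = affinity(C)
# Points in the same subspace should be far more strongly
# connected than points in different subspaces.
assert A[0, 1] > 10 * A[0, 4]
```

In the deep variant described by the abstract, `X` is replaced by embeddings from an encoder, and `C`, the encoder, and the cluster indicators are optimized jointly rather than via this closed form; the orthogonality constraint on the embeddings serves to keep the self-expressiveness assumption valid in the learned space.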