[1]HAN J, KAMBER M. Data mining: concepts and techniques [M]. FAN M, MENG X, translated. 2nd ed. Beijing: China Machine Press, 2007.(HAN J, KAMBER M.数据挖掘:概念与技术[M].范明,孟小峰,译.2版.北京:机械工业出版社,2007.)
[2]JAIN A K, MURTY M N, FLYNN P J. Data clustering: a review [J]. ACM Computing Surveys, 1999, 31(3): 264-323.
[3]LEOPOLD E, KINDERMANN J. Text categorization with support vector machines: how to represent texts in input space? [J]. Machine Learning, 2002, 46(1/2/3): 423-444.
[4]CHEN L. Research on clustering methods for high dimensional data and their applications [D]. Xiamen: Xiamen University, 2008.(陈黎飞.高维数据的聚类方法研究与应用[D].厦门:厦门大学,2008.)
[5]PARSONS L, HAQUE E, LIU H. Subspace clustering for high dimensional data: a review [J]. ACM SIGKDD Explorations Newsletter, 2004, 6(1): 90-105.
[6]VERLEYSEN M. Learning high-dimensional data [C]// Proceedings of the Limitations and Future Trends in Neural Computation. Siena: IOS Press, 2003:141-162.
[7]YANG Q, WU X. 10 challenging problems in data mining research [J]. International Journal of Information Technology and Decision Making, 2006, 5(4): 597-604.
[8]KRIEGEL H P, KRÖGER P, ZIMEK A. Clustering high-dimensional data: a survey on subspace clustering, pattern-based clustering, and correlation clustering [J]. ACM Transactions on Knowledge Discovery from Data, 2009, 3(1): 1-58.
[9]JING L, NG M K, HUANG J Z. An entropy weighting k-means algorithm for subspace clustering of high-dimensional sparse data [J]. IEEE Transactions on Knowledge and Data Engineering, 2007, 19(8): 1026-1041.
[10]HUANG J Z, NG M K, RONG H, et al. Automated variable weighting in k-means type clustering [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(5): 657-668.
[11]DOMENICONI C, GUNOPULOS D, MA S, et al. Locally adaptive metrics for clustering high dimensional data [J]. Data Mining and Knowledge Discovery, 2007, 14(1): 63-97.
[12]GAO G, WU J, YANG Z. A fuzzy subspace clustering algorithm for clustering high dimensional data [C]// Proceedings of the Second International Conference on Advanced Data Mining and Applications. Berlin: Springer, 2006: 271-278.
[13]XU L, JORDAN M I. On convergence properties of the EM algorithm for Gaussian mixtures [J]. Neural Computation, 1996, 8(1): 129-151.
[14]GULLO F, DOMENICONI C, TAGARELLI A. Projective clustering ensembles [J]. Data Mining and Knowledge Discovery, 2013, 26(3): 452-511.
[15]CHEN L, GUO G, JIANG Q. An adaptive algorithm for soft subspace clustering [J]. Journal of Software, 2010, 21(10): 2513-2523.(陈黎飞,郭躬德,姜青山.自适应的软子空间聚类算法[J].软件学报,2010,21(10):2513-2523.)
[16]DENG Z, CHOI K S, CHUNG F L, et al. Enhanced soft subspace clustering integrating within-cluster and between-cluster information [J]. Pattern Recognition, 2010, 43(3): 767-781.
[17]CHEN L, JIANG Q, WANG S. A probability model for projective clustering on high dimensional data [C]// ICDM'08: Proceedings of the Eighth IEEE International Conference on Data Mining. Washington, DC: IEEE Computer Society, 2008: 755-760.
[18]XUE Y. Optimization principles and methods [M]. Beijing: Beijing University of Technology Press, 2001.(薛毅.最优化原理与方法[M].北京:北京工业大学出版社,2001.)
[19]ZHAO Y, KARYPIS G. Comparison of agglomerative and partitional document clustering algorithms, TR 02-014 [R]. Minneapolis: University of Minnesota, 2002.
[20]STREHL A, GHOSH J. Cluster ensembles—a knowledge reuse framework for combining multiple partitions [J]. The Journal of Machine Learning Research, 2003, 3: 583-617.