Text classification is a fundamental task in Natural Language Processing (NLP) that aims to assign text to predefined categories. The combination of the Graph Convolutional Network (GCN) and the large-scale pre-trained model BERT (Bidirectional Encoder Representations from Transformers) has achieved excellent results on text classification tasks. However, the undirected information propagation of GCN in large-scale heterogeneous graphs introduces noise, which misleads the model and reduces its classification ability. To address this problem, a generative label adversarial model, the Class Adversarial Graph Convolutional Network (CAGCN), was proposed to reduce the interference of irrelevant information during classification and improve the classification performance of the model. Firstly, the graph construction method of TextGCN (Text Graph Convolutional Network) was used to build the adjacency matrix, which was combined with the GCN and BERT models to form a Class Generator (CG). Secondly, a pseudo-label feature training method was used during model training to construct a clusterer, and the clusterer and the class generator were trained jointly. Finally, experiments were carried out on several widely used datasets. Experimental results show that the classification accuracy of the CAGCN model is 1.2, 0.1, 0.5, 1.7 and 0.5 percentage points higher than that of the RoBERTaGCN model on the widely used classification datasets 20NG, R8, R52, Ohsumed and MR, respectively.
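
For concreteness, the TextGCN graph construction referenced above is publicly documented: document-word edges are weighted by TF-IDF, and word-word edges by positive pointwise mutual information (PMI) computed over a sliding window. The Python sketch below illustrates that standard construction; the tokenization, window size, and function names are illustrative assumptions, not the exact configuration used by CAGCN.

```python
# Minimal sketch of TextGCN-style adjacency construction (not CAGCN's exact setup):
# doc-word edges = TF-IDF, word-word edges = positive PMI over a sliding window.
import math
from collections import Counter, defaultdict

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer


def build_textgcn_adjacency(docs, window_size=20):
    """Return the (n_docs + n_words) x (n_docs + n_words) adjacency matrix."""
    # Document-word block: TF-IDF weights.
    vectorizer = TfidfVectorizer(token_pattern=r"\S+")
    tfidf = vectorizer.fit_transform(docs).toarray()
    vocab = vectorizer.get_feature_names_out()
    word_index = {w: i for i, w in enumerate(vocab)}
    n_docs, n_words = tfidf.shape

    # Word-word block: count sliding windows containing each word / word pair.
    window_count = 0
    word_windows = Counter()            # number of windows containing each word
    pair_windows = defaultdict(int)     # number of windows containing each pair
    for doc in docs:
        tokens = [t for t in doc.lower().split() if t in word_index]
        windows = [tokens[i:i + window_size]
                   for i in range(max(1, len(tokens) - window_size + 1))]
        for win in windows:
            window_count += 1
            uniq = sorted(set(win))
            word_windows.update(uniq)
            for i in range(len(uniq)):
                for j in range(i + 1, len(uniq)):
                    pair_windows[(uniq[i], uniq[j])] += 1

    n = n_docs + n_words
    adj = np.eye(n)  # self-loops on every node, as in TextGCN
    adj[:n_docs, n_docs:] = tfidf
    adj[n_docs:, :n_docs] = tfidf.T
    for (wi, wj), cnt in pair_windows.items():
        # PMI(i, j) = log( p(i, j) / (p(i) * p(j)) ), probabilities over windows.
        pmi = math.log((cnt / window_count) /
                       ((word_windows[wi] / window_count) *
                        (word_windows[wj] / window_count)))
        if pmi > 0:  # keep only positive-PMI word-word edges
            i, j = n_docs + word_index[wi], n_docs + word_index[wj]
            adj[i, j] = adj[j, i] = pmi
    return adj
```

In TextGCN this adjacency matrix is symmetrically normalized and fed to the graph convolution; in the model described above, the GCN operating on this graph is combined with BERT node features to form the class generator.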