Action Quality Assessment (AQA) has attracted many researchers as a challenging visual task. Current methods mainly focus on improving the feature extraction capability of backbone networks while ignoring the influence of motion trajectories. However, the consistency of movements is also an important factor in evaluating how well they are executed in the real world. Firstly, to realize interactive learning between different kinds of information, an AQA model with trajectory-guided perceptual learning was proposed by introducing trajectory information; it used trajectory descriptors to guide the model to perceptually learn movement-consistency information. Secondly, to address the lack of trajectory labels in current datasets, an unsupervised optical flow trajectory extraction method based on the Farneback optical flow algorithm was designed to obtain motion trajectory information, and the extracted optical flow trajectory features were used as prompts to guide the model in perceptual learning of video features. Finally, the learnable spline curves of a KAN (Kolmogorov-Arnold Network) were used to fit the distribution of the fused features, so as to establish a more accurate mapping relationship. The proposed model was evaluated experimentally on the MTL-AQA, AQA-7, FineDiving, and JIGSAWS datasets using Spearman rank correlation (Sp.Corr) as the evaluation metric. The results show that the proposed model achieves Sp.Corr of 0.9101, 0.9120, 0.8820, and 0.9900, respectively, which is 0.4%, 12.6%, 6.2%, and 57.1% higher than that of the USDL (Uncertainty-aware Score Distribution Learning) model, respectively.
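The paper does not publish code, but the unsupervised trajectory-extraction step it describes amounts to propagating seed points through dense optical flow fields (such as those produced by the Farneback algorithm) and summarizing the resulting paths as normalized displacement descriptors. A minimal sketch under stated assumptions follows; `track_points` and `trajectory_descriptor` are hypothetical names, and a synthetic flow array stands in for real Farneback output:

```python
import numpy as np

def track_points(flows, points):
    """Propagate seed points through a sequence of dense flow fields.

    flows:  list of (H, W, 2) arrays; flows[t][y, x] = (dx, dy) from frame t to t+1
            (e.g. the output of a Farneback-style dense optical flow estimator)
    points: (N, 2) array of (x, y) seed coordinates
    Returns trajectories of shape (T+1, N, 2).
    """
    traj = [points.astype(float)]
    for flow in flows:
        h, w = flow.shape[:2]
        cur = traj[-1]
        # Nearest-neighbour lookup of the flow at each point's current location.
        xi = np.clip(np.round(cur[:, 0]).astype(int), 0, w - 1)
        yi = np.clip(np.round(cur[:, 1]).astype(int), 0, h - 1)
        traj.append(cur + flow[yi, xi])
    return np.stack(traj)

def trajectory_descriptor(traj):
    """Displacement descriptor normalized by total path length,
    in the spirit of dense-trajectory features."""
    disp = np.diff(traj, axis=0)                      # (T, N, 2) per-frame motion
    total = np.linalg.norm(disp, axis=-1).sum(axis=0, keepdims=True)
    return disp / np.maximum(total[..., None], 1e-8)  # scale-invariant shape cue
```

In the model described by the abstract, descriptors like these would serve as the trajectory prompts that guide perceptual learning of the video features.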
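The evaluation metric, Spearman rank correlation (Sp.Corr), is the Pearson correlation of the rank-transformed predicted and ground-truth scores. A minimal numpy version (with average ranks for ties, matching the usual definition) might look like:

```python
import numpy as np

def spearman_corr(pred, gt):
    """Spearman rank correlation between predicted and ground-truth scores."""
    def ranks(x):
        order = np.argsort(x)
        r = np.empty(len(x), dtype=float)
        r[order] = np.arange(1, len(x) + 1)   # 1-based ranks
        for v in np.unique(x):                # average ranks over ties
            mask = x == v
            r[mask] = r[mask].mean()
        return r

    rp = ranks(np.asarray(pred, dtype=float))
    rg = ranks(np.asarray(gt, dtype=float))
    rp -= rp.mean()
    rg -= rg.mean()
    # Pearson correlation of the centred ranks.
    return float((rp * rg).sum() / np.sqrt((rp ** 2).sum() * (rg ** 2).sum()))
```

A perfectly monotone relationship between predicted and ground-truth scores yields 1.0, which is why values such as 0.99 on JIGSAWS indicate near-perfect ranking agreement.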