Moment invariants have been used as feature descriptors in a variety of object recognition applications since they were first proposed, and such applications require geometric moments to be computed at real-time rates. Although many fast algorithms for computing moments exist, they cannot achieve real-time performance when run on a PC. After analyzing the parallelism of a fast moment-invariant algorithm based on the differential of the moment factor, this paper presents a parallel computing method based on CUDA (Compute Unified Device Architecture) and implements it on an NVIDIA Tesla C1060 GPU (Graphics Processing Unit). The computing performance of the proposed method is contrasted with that of the traditional serial algorithm. Experiments show that the parallel algorithm greatly improves the speed of moment computation, so the new method can be used effectively for real-time feature extraction.
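For context, the quantity being accelerated is the raw geometric moment m_pq = Σ_x Σ_y x^p y^q f(x, y) of an image f. The sketch below is a minimal serial reference in Python/NumPy, not the paper's CUDA implementation; the function name and tiny example image are illustrative only.

```python
import numpy as np

def geometric_moment(img, p, q):
    """Raw geometric moment m_pq = sum over (x, y) of x^p * y^q * f(x, y).

    Serial reference sketch for clarity; the paper parallelizes this
    computation on the GPU rather than evaluating it pixel by pixel.
    """
    h, w = img.shape
    # x indexes columns, y indexes rows, matching the m_pq definition above.
    y, x = np.mgrid[0:h, 0:w]
    return float(np.sum((x ** p) * (y ** q) * img))

# Illustrative 2x2 "image":
img = np.array([[1.0, 2.0],
                [3.0, 4.0]])
m00 = geometric_moment(img, 0, 0)  # total intensity: 10.0
m10 = geometric_moment(img, 1, 0)
m01 = geometric_moment(img, 0, 1)
```

Centralized and normalized moments, and ultimately the moment invariants, are built from these raw moments, so speeding up this summation is the core of the real-time problem.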