With the growing demand for industrial automation, 3D point cloud anomaly detection plays an increasingly important role in product quality control. However, existing methods often rely on a single feature representation, which leads to information loss and reduced accuracy. To address these issues, an unsupervised point cloud anomaly detection method based on multi-representation fusion, called MRF (Multi-Representation Fusion), is proposed. MRF renders point clouds into multi-modal images through multi-angle rotation and various coloring schemes, and employs pre-trained 2D convolutional neural networks to extract rich semantic features; in parallel, a pre-trained Point Transformer extracts 3D structural features. By fusing the 2D image semantic features with the 3D structural features, MRF captures point cloud information more comprehensively. In the anomaly detection stage, abnormal point clouds are identified effectively using positive-sample memory banks and nearest-neighbor search. Experimental results on the MVTec 3D-AD dataset show that MRF achieves a point cloud-level AUROC (Area Under the Receiver Operating Characteristic curve) of 0.972 and a point-level AUPRO (Area Under the Per-Region Overlap) of 0.948, significantly outperforming existing methods. These results demonstrate the effectiveness and robustness of MRF, making it a promising solution for industrial applications.
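To make the detection stage more concrete, the sketch below illustrates one plausible way to score fused features against a positive-sample memory bank via nearest-neighbor search. This is not the authors' implementation: the concatenation-based fusion, the feature dimensions, and the function names (`build_memory_bank`, `anomaly_scores`) are assumptions introduced purely for illustration.

```python
import numpy as np

def build_memory_bank(normal_features: np.ndarray) -> np.ndarray:
    """Collect fused features from anomaly-free (positive) training samples.

    normal_features: (N, D) array, one fused 2D+3D feature vector per
    point/patch. In practice the bank is often subsampled (e.g. a coreset),
    which is omitted here for brevity.
    """
    return normal_features

def anomaly_scores(test_features: np.ndarray, memory_bank: np.ndarray) -> np.ndarray:
    """Score each test feature by its distance to the nearest memory entry.

    A large nearest-neighbor distance means the fused representation is far
    from anything observed on normal samples, i.e. likely anomalous.
    """
    # Pairwise Euclidean distances between test features and bank entries.
    diff = test_features[:, None, :] - memory_bank[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)          # shape (M_test, N_bank)
    return dists.min(axis=1)                       # per-point anomaly score

# Hypothetical usage: fuse 2D semantic and 3D structural features by simple
# concatenation (an assumed fusion operator, with placeholder random features).
rng = np.random.default_rng(0)
feat_2d_normal = rng.normal(size=(500, 128))       # stand-in for 2D CNN features
feat_3d_normal = rng.normal(size=(500, 64))        # stand-in for Point Transformer features
bank = build_memory_bank(np.concatenate([feat_2d_normal, feat_3d_normal], axis=1))

feat_2d_test = rng.normal(size=(100, 128))
feat_3d_test = rng.normal(size=(100, 64))
scores = anomaly_scores(np.concatenate([feat_2d_test, feat_3d_test], axis=1), bank)
print(scores.shape)  # (100,) point-level scores; their maximum can serve as an object-level score
```

Under this reading, point-level scores support localization (AUPRO), while aggregating them (for example, taking the maximum) yields an object-level score for point cloud-level AUROC.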