Deep learning-based image classification algorithms usually rely on huge amounts of training data. However, it is often difficult to obtain sufficiently large-scale, high-quality labeled samples in real scenarios. Aiming at the problem of insufficient generalization ability of classification models in few-shot scenarios, a few-shot image classification method based on contrastive learning was proposed. Firstly, global contrastive learning was added as an auxiliary objective during training, enabling the feature extraction network to obtain richer information from each instance. Then, the query samples were split into patches to compute a local contrastive loss, encouraging the model to infer global information from local features. Finally, saliency detection was used to mix the important regions of query samples and thereby construct harder samples, improving the generalization ability of the model. Experimental results of 5-way 1-shot and 5-way 5-shot image classification tasks on two public datasets, miniImageNet and tieredImageNet, show that compared with the few-shot learning baseline model Meta-Baseline, the proposed method improves classification accuracy by 5.97 and 4.25 percentage points on miniImageNet, and by 3.86 and 2.84 percentage points on tieredImageNet, respectively. In addition, the classification accuracy of the proposed method on miniImageNet exceeds that of the DFR (Disentangled Feature Representation) model by 1.02 and 0.72 percentage points on the two tasks, respectively. These results indicate that the proposed method effectively improves the accuracy of few-shot image classification and has good generalization ability.
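To make the training objective described above concrete, the sketch below shows one plausible way the two contrastive terms could be combined with the classification loss. This is not the authors' code: the function names, tensor shapes, temperature, and the weighting coefficients lambda_g and lambda_l are all assumptions, and the saliency-based sample mixing step is omitted.

```python
# Minimal PyTorch sketch (assumptions, not the paper's implementation):
# total loss = classification loss
#            + lambda_g * global InfoNCE loss over two augmented views
#            + lambda_l * local loss pulling query patches toward their own image.

import torch
import torch.nn.functional as F


def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Global contrastive loss: z1[i] and z2[i] embed two views of the same image."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                       # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)     # positives on the diagonal
    return F.cross_entropy(logits, targets)


def local_contrast(patch_feat: torch.Tensor, img_feat: torch.Tensor,
                   temperature: float = 0.1) -> torch.Tensor:
    """Local contrastive loss: each patch embedding is matched to its own full image.

    patch_feat: (B, P, D) features of P patches per query image.
    img_feat:   (B, D)    features of the corresponding whole images.
    """
    B, P, D = patch_feat.shape
    patch_feat = F.normalize(patch_feat.reshape(B * P, D), dim=-1)
    img_feat = F.normalize(img_feat, dim=-1)
    logits = patch_feat @ img_feat.t() / temperature          # (B*P, B)
    targets = torch.arange(B, device=img_feat.device).repeat_interleave(P)
    return F.cross_entropy(logits, targets)


def total_loss(cls_loss: torch.Tensor,
               z_view1: torch.Tensor, z_view2: torch.Tensor,
               patch_feat: torch.Tensor, img_feat: torch.Tensor,
               lambda_g: float = 0.5, lambda_l: float = 0.5) -> torch.Tensor:
    """Classification loss plus the two contrastive auxiliary terms (weights assumed)."""
    return (cls_loss
            + lambda_g * info_nce(z_view1, z_view2)
            + lambda_l * local_contrast(patch_feat, img_feat))


if __name__ == "__main__":
    B, P, D = 8, 4, 128                        # batch size, patches per image, feature dim
    cls_loss = torch.tensor(1.2)               # stand-in for the few-shot classification loss
    loss = total_loss(cls_loss,
                      torch.randn(B, D), torch.randn(B, D),
                      torch.randn(B, P, D), torch.randn(B, D))
    print(loss.item())
```

In this reading, the global term acts as the auxiliary objective applied to whole images, while the local term is what drives the model to infer image-level semantics from patches; the relative weights would in practice be tuned on a validation split.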