Remote sensing data exhibit strong spatio-temporal correlation and complex surface features, which makes protecting their privacy challenging. As a distributed learning method designed to protect participants' data privacy, federated learning offers an effective way to overcome these challenges. However, during the training phase of federated learning models, malicious attackers may infer participants' private information through inversion attacks, leading to the disclosure of sensitive information. To address the privacy leakage problem of remote sensing data during federated learning training, a federated learning privacy protection scheme based on local differential privacy was proposed. Firstly, the model was pre-trained, the importance of each layer was calculated, and the privacy budget was allocated according to layer importance. Then, local differential privacy was achieved by clipping the model updates and applying adaptive random perturbation to the clipped values. Finally, model correction was applied when aggregating the perturbed updates to further improve model performance. Theoretical analysis and simulation results show that the proposed scheme not only provides appropriate differential privacy protection for each participant and effectively prevents the inference of privacy-sensitive information through inversion, but also outperforms the piecewise mechanism-based perturbation scheme in accuracy by 3.28 to 3.93 percentage points on three remote sensing datasets. The proposed scheme thus effectively preserves model performance while ensuring privacy.
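The following is a minimal sketch of the client-side pipeline outlined above (budget allocation by layer importance, clipping, perturbation, and server-side aggregation). It assumes a Laplace-style noise mechanism and proportional budget allocation purely for illustration; the helper names (allocate_budget, clip_and_perturb, correct_aggregate) are hypothetical and do not reproduce the paper's adaptive perturbation or correction method.

```python
import numpy as np

def allocate_budget(layer_importance, total_epsilon):
    """Split the total privacy budget across layers in proportion to
    their (pre-computed) importance scores. Illustrative assumption."""
    importance = np.asarray(layer_importance, dtype=float)
    return total_epsilon * importance / importance.sum()

def clip_and_perturb(update, clip_bound, epsilon, rng):
    """Clip one layer's update to [-clip_bound, clip_bound], then add
    Laplace noise calibrated to the clipping range (sensitivity = 2C),
    which satisfies epsilon-LDP for that layer."""
    clipped = np.clip(update, -clip_bound, clip_bound)
    scale = 2.0 * clip_bound / epsilon  # Laplace scale b = sensitivity / epsilon
    noise = rng.laplace(loc=0.0, scale=scale, size=clipped.shape)
    return clipped + noise

def correct_aggregate(noisy_updates):
    """Server-side aggregation: averaging unbiased noisy updates already
    cancels zero-mean noise in expectation; a real correction step could
    further rescale or de-bias here."""
    return np.mean(noisy_updates, axis=0)

# Toy usage: 3 clients, a 2-layer model.
rng = np.random.default_rng(0)
layer_importance = [0.7, 0.3]  # e.g. obtained from pre-training analysis
epsilons = allocate_budget(layer_importance, total_epsilon=4.0)

client_updates = [[rng.normal(size=100), rng.normal(size=50)] for _ in range(3)]
perturbed = [
    [clip_and_perturb(u, clip_bound=1.0, epsilon=eps, rng=rng)
     for u, eps in zip(layers, epsilons)]
    for layers in client_updates
]
aggregated = [correct_aggregate([p[layer] for p in perturbed]) for layer in range(2)]
```

Under these assumptions, layers deemed more important receive a larger share of the budget and therefore less noise, which is the intuition behind importance-based budget allocation described in the abstract.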