With the growing demand for privacy protection, federated learning has received widespread attention. However, in federated learning it is difficult for the server to supervise the behavior of clients, so lazy clients pose a potential threat to the performance and fairness of federated learning. Aiming at the problem of identifying lazy clients efficiently and accurately, a dual-task proof-of-work method based on backdoors, called FedBD (FedBackDoor), was proposed. In FedBD, the server assigns an additional, easier-to-verify backdoor task to each client participating in federated learning; the clients train the backdoor task alongside their original training task, and the server supervises client behavior indirectly through the training status of the backdoor task. Experimental results show that FedBD has advantages over the classic federated averaging algorithm FedAvg and the more advanced GTG-Shapley (Guided Truncation Gradient Shapley) algorithm on datasets such as MNIST and CIFAR10. On the CIFAR10 dataset, when the proportion of lazy clients is 15%, FedBD improves accuracy by more than 10 percentage points over FedAvg and by 2 percentage points over GTG-Shapley. Moreover, the average training time of FedBD is only 11.8% of that of GTG-Shapley, and FedBD identifies lazy clients with more than 99% accuracy when the proportion of lazy clients is 10%. These results demonstrate that FedBD can address the difficulty of supervising lazy clients.
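
To make the supervision mechanism concrete, the following is a minimal sketch of the server-side check implied by this design, assuming a corner-patch trigger and an accuracy threshold; the trigger pattern, threshold, and model interface are hypothetical illustrations, not the paper's actual implementation.

import numpy as np

rng = np.random.default_rng(0)

TRIGGER_VALUE = 1.0          # pixel value of the hypothetical trigger patch (assumption)
BACKDOOR_LABEL = 7           # target label the trigger should map to (assumption)
DETECTION_THRESHOLD = 0.9    # minimum backdoor accuracy expected of an honest client (assumption)

def stamp_trigger(images):
    """Stamp a small trigger patch onto the top-left corner of each image."""
    stamped = images.copy()
    stamped[:, :3, :3] = TRIGGER_VALUE
    return stamped

def backdoor_accuracy(client_model, probe_images):
    """Fraction of trigger-stamped probes the model maps to BACKDOOR_LABEL."""
    preds = client_model(stamp_trigger(probe_images))
    return float(np.mean(preds == BACKDOOR_LABEL))

def detect_lazy_clients(client_models, probe_images):
    """Flag clients whose returned model shows no sign of backdoor training."""
    lazy = []
    for cid, model in client_models.items():
        if backdoor_accuracy(model, probe_images) < DETECTION_THRESHOLD:
            lazy.append(cid)
    return lazy

# Toy usage: an "honest" model has learned the trigger, a "lazy" one has not.
probes = rng.random((64, 28, 28))
honest = lambda x: np.full(len(x), BACKDOOR_LABEL)   # always fires on the trigger
lazy   = lambda x: rng.integers(0, 10, size=len(x))  # untrained, random predictions
print(detect_lazy_clients({"client_0": honest, "client_1": lazy}, probes))  # ['client_1']

The design choice this sketch illustrates is that the backdoor task is cheap for the server to verify: evaluating a handful of trigger-stamped probes is far less costly than contribution-estimation schemes such as GTG-Shapley, which is consistent with the reported reduction in average training time.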