With the growing importance of data privacy, fine-tuning Pre-trained Foundation Models (PFMs) for downstream tasks has become increasingly challenging, giving rise to federated learning research based on PFMs. However, PFMs pose significant challenges to federated learning systems, particularly in terms of local computation and communication. Therefore, corresponding solutions were proposed for the two main stages of federated learning, local computation and aggregation communication: an efficient local fine-tuning mode and a ring-shaped local aggregation mode. In the first mode, a model pruning algorithm based on Parameter-Efficient Fine-Tuning (PEFT) was employed to reduce local computation and communication costs. In the second mode, the centralized aggregation method was replaced with a distributed local aggregation scheme to improve communication efficiency during the aggregation stage. Experimental results demonstrate that the proposed federated parameter-efficient fine-tuning framework for large models performs well in terms of both final performance and efficiency.
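
The following minimal sketch illustrates the general idea described above, under assumptions not drawn from the paper: each client updates only a small low-rank PEFT adapter locally, and the adapters are averaged around a logical ring instead of being sent to a central server. All names, shapes, and the simplified averaging loop are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical sketch: PEFT adapters aggregated in a ring instead of at a central server.
import numpy as np

ADAPTER_DIM = (8, 16)  # assumed low-rank adapter shape, far smaller than the full model


def local_finetune(adapter: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Stand-in for one local PEFT step: only the small adapter is updated."""
    return adapter + 0.01 * rng.standard_normal(adapter.shape)


def ring_aggregate(adapters: list[np.ndarray]) -> list[np.ndarray]:
    """Pass a running sum of adapters around the ring once; after the pass,
    every client holds the mean adapter (no central aggregator involved)."""
    running = adapters[0].copy()
    for neighbor in adapters[1:]:      # each hop adds the next client's adapter
        running += neighbor
    mean = running / len(adapters)
    return [mean.copy() for _ in adapters]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clients = [np.zeros(ADAPTER_DIM) for _ in range(4)]
    for _ in range(3):                 # three federated rounds
        clients = [local_finetune(a, rng) for a in clients]
        clients = ring_aggregate(clients)
    print("adapter norm after 3 rounds:", np.linalg.norm(clients[0]))
```

Because only the adapter tensors (rather than full model weights) circulate, both the per-hop payload and the local update cost stay small, which is the efficiency argument the abstract makes.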