Journal of Computer Applications
FAN Zijing¹, GUO Yinzhang²
Abstract: To address the cold-start challenge and the performance–resource trade-off in Large Language Model (LLM)-enhanced multi-agent systems for smart manufacturing, this paper proposes an adaptive cloud–edge–device collaborative scheduling framework. The framework adopts a three-tier architecture in which a cloud-based LLM provides global guidance, an edge agent orchestrates coordination and arbitration, and lightweight models deployed on devices carry out real-time execution. It incorporates a dynamic role-switching mechanism that enables a smooth transition from LLM-dominated decision-making in early phases toward increasingly autonomous and efficient agent collaboration. To alleviate the cold-start issue, a hybrid method is designed that combines virtual experience generation via the LLM with importance-aware communication topology construction, injecting prior knowledge and establishing efficient initial interaction structures. For resource efficiency, a multi-granularity confidence-based scheduler dynamically balances invocations of the LLM and local models, reducing redundant interactions. Experimental evaluations in dynamic flexible job-shop scheduling (DFJSP) and multi-agent communication environments demonstrate that the framework effectively alleviates the cold-start problem: compared with traditional communication models, it converges faster in the early stages of training and is significantly more stable. In final scheduling performance, it reduces average job tardiness by 35.73% compared with conventional models while regulating the LLM scheduling rate to approximately 47.2%, striking an effective balance between performance gains and resource costs. Overall, the framework reduces total runtime by 53.13% relative to a fully LLM-dependent strategy, offering a practical and adaptive pathway toward scalable and efficient multi-agent collaboration in industrial settings.
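The multi-granularity confidence-based scheduler described above can be illustrated with a minimal sketch. All function names, granularity levels, and threshold values here are illustrative assumptions, not the authors' implementation: the idea is simply that the cloud LLM is invoked only when the lightweight local model is insufficiently confident at some granularity.

```python
# Hypothetical sketch of a multi-granularity confidence-based scheduling
# decision. The three confidence granularities (action, state, episode)
# and the threshold values are assumptions for illustration only.

def should_invoke_llm(action_conf, state_conf, episode_conf,
                      thresholds=(0.6, 0.5, 0.4)):
    """Escalate to the cloud LLM only when the local lightweight model
    is uncertain at any granularity; otherwise act locally."""
    confidences = (action_conf, state_conf, episode_conf)
    return any(c < t for c, t in zip(confidences, thresholds))

# Confident at every granularity: the local model acts alone.
print(should_invoke_llm(0.9, 0.8, 0.7))   # -> False
# Uncertain about the current state: defer to the LLM.
print(should_invoke_llm(0.9, 0.45, 0.7))  # -> True
```

Gating the expensive LLM call on local-model uncertainty is one plausible way to keep the LLM invocation rate bounded (the paper reports about 47.2%) while preserving the performance benefit of global guidance.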
Key words: deep reinforcement learning, multi-agent structured communication, large language model, large-small model collaboration, dynamic job-shop scheduling
Abstract (Chinese version): To address the decision cold-start problem in multi-agent collaboration systems for smart manufacturing and the challenge of balancing resource consumption and performance under Large Language Model (LLM) empowerment, this paper proposes a generative large–small model driven cloud–edge–device multi-agent adaptive collaborative scheduling framework. The framework builds a hierarchical collaboration architecture of "cloud-side LLM global guidance, edge-side agent coordination and arbitration, device-side lightweight model autonomous execution", and designs a dynamic role-switching and progressive model evolution mechanism, enabling the system to transition smoothly from strong guidance in the early phase to autonomous, efficient collaboration in later phases. To resolve the cold-start problem, a topology construction method is designed that fuses LLM virtual experience injection with auxiliary-agent decision-making, providing agents with injected knowledge and an efficient initial communication structure and markedly accelerating early learning. To control resource overhead, a dynamic LLM scheduling strategy based on multi-granularity confidence evaluation is proposed, adaptively balancing the invocation ratio of large and small models and avoiding redundant interactions. Experiments in dynamic flexible job-shop scheduling and multi-agent communication environments show that the proposed framework effectively alleviates cold start: compared with traditional communication models, convergence in early training is faster and stability is significantly enhanced. In final scheduling performance, average job tardiness is reduced by 35.73% relative to traditional models, and the LLM scheduling rate is precisely controlled at around 47.2%, achieving an optimal trade-off between performance and resource consumption; overall runtime is reduced by 53.13% compared with a fully LLM-dependent scheme. This work provides a highly adaptive cloud–edge–device collaboration pathway for resolving cold start and inefficient heterogeneous-model interaction in multi-agent systems.
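The dynamic role-switching and progressive evolution mechanism can be sketched as an annealing schedule on how often the LLM dominates the decision. The decay shape, parameter names, and constants below are assumptions for illustration, not the authors' formulation: the only property taken from the abstract is that LLM guidance is strong early and yields to autonomous agent collaboration later.

```python
import math

# Hypothetical annealing schedule for the LLM's decision-making role.
# p_start, p_end, and the decay rate k are illustrative assumptions.

def llm_guidance_prob(step, total_steps, p_start=1.0, p_end=0.1, k=5.0):
    """Probability that the LLM dominates the decision at a given
    training step: near p_start early on, decaying smoothly toward
    p_end as the agents become autonomous."""
    frac = min(step / total_steps, 1.0)
    return p_end + (p_start - p_end) * math.exp(-k * frac)

# Strong guidance at the start, mostly autonomous near the end.
print(llm_guidance_prob(0, 1000))     # -> 1.0
print(llm_guidance_prob(1000, 1000))  # close to p_end
```

A smooth schedule of this kind avoids an abrupt hand-off: the lightweight device-side models are gradually exposed to autonomous decisions while the LLM's guidance is still available as a fallback.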
Keywords (Chinese version): smart manufacturing, multi-agent structured communication, generative large–small models, cloud–edge–device collaboration, dynamic job-shop scheduling
CLC Number: TP181
FAN Zijing, GUO Yinzhang. Large–small model driven cloud–edge–device multi-agent collaborative communication and task scheduling framework [J]. Journal of Computer Applications, DOI: 10.11772/j.issn.1001-9081.2025111444.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2025111444