Multi-objective optimization of steel logistics vehicle-cargo matching under multiple constraints
Kaile YU, Jiajun LIAO, Jiali MAO, Xiaopeng HUANG
Journal of Computer Applications    2025, 45 (8): 2477-2483.   DOI: 10.11772/j.issn.1001-9081.2024081125

Steel logistics platforms often need to split a customer order's steel products into multiple waybills for transportation. Less-Than-Truckload (LTL) cargo, which fails to meet a truck's minimum load requirement, must be consolidated with goods from other customer orders to improve transportation efficiency. Although previous studies have proposed solutions for consolidation decision-making, none considers detour distance and the prioritization of high-priority cargo simultaneously in consolidated shipments. Therefore, a multi-objective optimization framework for steel cargo consolidation under multiple constraints was proposed. Globally optimal consolidation decisions were achieved by the framework through a hierarchical decision network and a representation enhancement module. Specifically, a hierarchical decision network based on Proximal Policy Optimization (PPO) was used to first determine the priorities of the optimization objectives and then consolidate and select LTL cargo on the basis of these priorities. Meanwhile, a representation enhancement module based on Graph ATtention network (GAT) was employed to represent cargo and LTL cargo information dynamically; this representation was then input into the decision network to maximize long-term multi-objective gains. Experimental results on a large-scale real-world cargo dataset show that, compared with other online methods, the proposed method achieves a 17.3% increase in the proportion of high-priority cargo weight and a 7.8% reduction in average detour distance, while reducing the total shipping weight by 6.75% compared with an LTL consolidation method that only maximizes cargo capacity, thereby effectively enhancing the efficiency of consolidated transportation.
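The abstract's hierarchical objective ordering (prioritize high-priority cargo first, then limit detour) can be illustrated with a minimal greedy sketch. This is not the paper's PPO/GAT method; all field names (`weight`, `priority`, `detour_km`) and the lexicographic tie-breaking are illustrative assumptions.

```python
# Hedged sketch: lexicographic selection of one LTL consolidation candidate,
# mimicking a hierarchical decision where objective 1 (priority share)
# dominates objective 2 (detour distance). Not the paper's implementation.
from dataclasses import dataclass

@dataclass
class Candidate:
    weight: float      # cargo weight in tonnes (illustrative unit)
    priority: int      # 1 = high-priority cargo, 0 = normal
    detour_km: float   # extra distance the truck would drive

def pick_candidate(cands, remaining_capacity):
    """Pick a feasible candidate: high priority first, then smallest
    detour, then heaviest load to fill the truck."""
    feasible = [c for c in cands if c.weight <= remaining_capacity]
    if not feasible:
        return None
    return min(feasible, key=lambda c: (-c.priority, c.detour_km, -c.weight))

cands = [Candidate(5.0, 0, 2.0), Candidate(4.0, 1, 8.0), Candidate(6.0, 1, 3.0)]
# Selects the high-priority candidate with the smallest detour.
best = pick_candidate(cands, remaining_capacity=6.5)
```

In the paper, this myopic rule is replaced by a learned policy that maximizes long-term gains; the sketch only fixes the ordering of objectives.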

Unsupervised contrastive learning for Chinese with mutual information and prompt learning
Peng HUANG, Jiayu LIN, Zuhong LIANG
Journal of Computer Applications    2025, 45 (10): 3101-3110.   DOI: 10.11772/j.issn.1001-9081.2024101464

Unsupervised contrastive learning for Chinese faces multiple challenges. First, Chinese sentence structure is highly flexible and semantically ambiguous, making it difficult for models to capture deep semantic features accurately. Second, on small-scale datasets, the representation ability of contrastive learning models is insufficient, so effective semantic representations are hard to learn fully. Third, the data augmentation process may introduce redundant noise, further increasing training instability. Together, these issues limit model performance in Chinese semantic understanding. To address these problems, an unsupervised contrastive learning method for Chinese with Mutual Information (MI) and Prompt Learning (CMIPL) was proposed. Firstly, a prompt-learning data augmentation approach was adopted to construct the sample pairs required for contrastive learning, so that all text information and word order were preserved, text diversity was increased, the input structure of samples was standardized, and prompt templates served as context guiding the model to learn fine-grained semantics more deeply. Secondly, based on the output representation of the pre-trained language model, a prompt-template denoising method was used to remove the redundant noise introduced by data augmentation. Finally, the structural information of positive samples was incorporated into model training: the MI of the attention tensors of the augmented views was calculated and introduced into the loss function. By minimizing this loss, the model's attention distribution was optimized and the structural alignment of the augmented views was maximized, enabling the model to better narrow the distance between positive pairs. Comparison experiments were conducted on few-shot data constructed from three public Chinese text similarity datasets: ATEC, BQ, and PAWSX. The results show that the proposed method achieves the best average performance, especially when the training data size is small. With 1% and 10% sample sizes, compared with the baseline contrastive learning model SimCSE (Simple Contrastive learning of Sentence Embeddings), CMIPL increases average accuracy and Spearman's Rank correlation coefficient (SR) by 3.45 and 4.07, and by 1.64 and 2.61 percentage points, respectively, verifying the effectiveness of CMIPL for unsupervised few-shot contrastive learning in Chinese.
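The contrastive backbone the abstract builds on (SimCSE-style in-batch InfoNCE) can be sketched in a few lines. This shows only the standard contrastive term, not CMIPL's attention mutual-information term or prompt denoising; the temperature value and function names are illustrative assumptions.

```python
# Hedged sketch of a SimCSE-style InfoNCE loss: each anchor's positive is
# the same-index row of `positives`; the other rows serve as in-batch
# negatives. Pure-Python cosine similarity for self-containment.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchors, positives, temperature=0.05):
    """Average -log softmax probability of the matching positive."""
    loss = 0.0
    for i, a in enumerate(anchors):
        sims = [cosine(a, p) / temperature for p in positives]
        log_denom = math.log(sum(math.exp(s) for s in sims))
        loss += log_denom - sims[i]
    return loss / len(anchors)

anchors = [[1.0, 0.0], [0.0, 1.0]]
aligned = info_nce(anchors, anchors)        # near-zero: views already aligned
mismatch = info_nce(anchors, [anchors[1], anchors[0]])  # large: positives swapped
```

CMIPL adds to a loss of this shape an MI-based term over the attention tensors of the two augmented views, so that minimizing the total loss also aligns their attention structure.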
