Judgment document summarization method combining large language model and dynamic prompts
Binbin ZHANG, Yongbin QIN, Ruizhang HUANG, Yanping CHEN
Journal of Computer Applications    2025, 45 (9): 2783-2789.   DOI: 10.11772/j.issn.1001-9081.2024091393

Judgment documents feature complex case structures, redundant case facts, and widely distributed case information, which makes it difficult for existing Large Language Models (LLMs) to focus on structural information effectively and can lead to factual errors, resulting in summaries with missing structural information and factual inconsistencies. To address this, a judgment document summarization method combining LLMs with dynamic prompts, named DPCM (Dynamic Prompt Correction Method), was proposed. Firstly, an LLM performed one-shot learning to generate a judgment document summary. Secondly, the high-dimensional similarity between the original text and the summary was calculated to detect possible missing structure or factual inconsistency in the summary. If a problem was detected, the faulty summary was spliced with the original text, corrective prompt words were added, one-shot learning was performed again to generate a corrected summary, and the similarity check was repeated; this generation-detection cycle continued while problems remained. Finally, through this iteration, the prompt words were adjusted dynamically to optimize the generated summary gradually. Experimental results on the CAIL2020 public judicial summarization dataset show that, compared with Least-To-Most Prompting, Zero-Shot Reasoners, Self_Consistency_Cot, and other methods, the proposed method achieves improvements in the ROUGE-1, ROUGE-2, ROUGE-L, BERTScore, and FactCC (Factual Consistency) metrics.
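The generate-detect-correct loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `generate` and `similarity` are hypothetical stand-ins for the LLM call and the high-dimensional similarity check, and the corrective prompt wording is invented.

```python
def dpcm_summarize(document, generate, similarity, threshold=0.8, max_rounds=5):
    """Iteratively regenerate a summary until it passes a similarity check
    (sketch of the DPCM generate-detect-correct cycle)."""
    prompt = document                      # initial one-shot prompt
    summary = generate(prompt)
    for _ in range(max_rounds):
        if similarity(document, summary) >= threshold:
            break                          # no missing structure / factual drift detected
        # splice the faulty summary back into the prompt and add corrective prompt words
        prompt = document + "\n[Previous summary was inconsistent]\n" + summary
        summary = generate(prompt)
    return summary
```

With stub callables in place of the LLM, the loop terminates as soon as the similarity test passes.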

Trust management scheme for internet of vehicles based on blockchain and multi-attribute decision making
Xinyang LUO, Wunan WAN, Shibin ZHANG, Jinquan ZHANG
Journal of Computer Applications    2025, 45 (11): 3470-3476.   DOI: 10.11772/j.issn.1001-9081.2024121865

Aiming at the problems of conducting reasonable trust evaluation of vehicles and ensuring timely, consistent trust-value updates among multiple RoadSide Units (RSUs) in the Internet of Vehicles (IoV), a trust management scheme for IoV based on blockchain and multi-attribute decision making, named BCIoVTrust (BlockChain IoV Trust), was proposed on the basis of existing IoV trust management schemes. Firstly, the comprehensive trust value and the malicious-probability indicator of a vehicle were calculated from attribute values and dynamic attribute weights. Secondly, a reward and punishment mechanism was introduced to reduce the time that malicious vehicles remain in the IoV. Finally, a hybrid consensus mechanism was used to adjust the block-generation difficulty of the miner node dynamically, taking the sum of the absolute values of the vehicles' trust values as the stake. Experimental results show that the scheme can calculate vehicle trust values more comprehensively and accurately, identify and remove malicious vehicles, and update the trust values stored on blocks faster, thereby effectively solving the cold-start problem, dynamically adjusting the rate of trust decay, reasonably selecting the optimal recommendation nodes, and preventing malicious vehicles from colluding.
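The first two steps can be sketched as follows. The weighting and the reward/penalty constants are illustrative assumptions only; the abstract does not specify the actual attribute set, weight dynamics, or penalty magnitudes.

```python
def comprehensive_trust(attrs, weights):
    """Comprehensive trust value: attribute scores aggregated with
    (dynamic) attribute weights; normalized so weights need not sum to 1."""
    return sum(a * w for a, w in zip(attrs, weights)) / sum(weights)

def apply_reward_punishment(trust, honest, reward=0.05, penalty=0.2):
    """Reward honest behaviour slightly and punish misbehaviour heavily,
    so malicious vehicles drop out of the trusted set quickly."""
    trust = trust + reward if honest else trust - penalty
    return max(0.0, min(1.0, trust))    # keep trust in [0, 1]
```

An asymmetric penalty (here 0.2 vs. 0.05) is one common way to shorten the time a misbehaving node keeps a usable trust value.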

Multi-domain access control scheme in blockchain based on SM2 homomorphic encryption
Bimang SUN, Wunan WAN, Shibin ZHANG, Jinquan ZHANG
Journal of Computer Applications    2025, 45 (11): 3432-3439.   DOI: 10.11772/j.issn.1001-9081.2024121849

Addressing attribute privacy leakage and insufficient scalability in existing blockchain multi-domain access control models, a Cross-Chain based Multi-Domain Access Control Model (CC-MDACM) was proposed. Firstly, based on Attribute-Based Access Control (ABAC) and relay chain technology, a cross-blockchain multi-domain access control model was built, enabling autonomous authorization within domains and fine-grained access control across heterogeneous blockchains through the relay chain between domains. Secondly, by combining an SM2-based threshold homomorphic encryption algorithm with zero-knowledge proof technology, a scalable cross-blockchain multi-domain access control scheme with dual concealment of attributes and policies was proposed. This scheme allowed data to be verified and decrypted by distributed nodes on the relay chain and supported access control decisions in the ciphertext state; attributes and policies were protected through dual concealment, and access control policies could be extended dynamically. Additionally, Raft consensus was adopted to ensure the reliability of decryption. Finally, the proposed scheme was evaluated through theoretical security analysis and simulation experiments. The results demonstrate that, while ensuring dual concealment of attributes and policies and supporting dynamic expansion of access policies, the proposed scheme effectively solves the multi-domain access control problem across heterogeneous blockchains. Compared with the Distributed Two-trapdoor Public-Key Cryptosystem (DT-PKC), the encryption and decryption efficiencies of the proposed scheme are improved by 34.4% and 44.9%, respectively.

Survey of DDoS protection research based on blockchain
Mei TANG, Wunan WAN, Shibin ZHANG, Jinquan ZHANG
Journal of Computer Applications    2025, 45 (11): 3416-3423.   DOI: 10.11772/j.issn.1001-9081.2024121850

With the escalating severity of cybersecurity threats, Distributed Denial of Service (DDoS) attacks remain a persistent challenge in network security research. Traditional DDoS protection solutions usually rely on centralized architectures, which suffer from single points of failure, data tampering, and other problems, and struggle to handle complex and diverse attack scenarios. Blockchain technology, with its decentralization, immutability, and transparency, offers a new approach to DDoS protection. In view of the technical challenges in DDoS protection, the progress of blockchain-based DDoS protection was surveyed. Firstly, the basic concepts of DDoS attacks and their threats to environments such as traditional networks, the Internet of Things (IoT), and Software Defined Networking (SDN) were introduced, and the necessity and potential advantages of introducing blockchain technology were analyzed. Secondly, existing DDoS protection mechanisms were reviewed and compared from the perspectives of blockchain combined with smart contracts, deep learning, cross-domain collaboration, and so on. Finally, considering the technical difficulties in blockchain performance optimization, multi-domain collaboration, and real-time response, future development directions of blockchain-based DDoS protection were outlined, providing theoretical references for researchers in cybersecurity and further promoting the practical application of blockchain in DDoS protection.

Attribute-based entity alignment algorithm for decentralized data storage in large-scale institutions
Zeyi CAO, Yan CHANG, Renxin LAI, Shibin ZHANG, Zhi QIN, Lili YAN, Xuejian ZHANG, Yuanhao DI
Journal of Computer Applications    2025, 45 (10): 3195-3202.   DOI: 10.11772/j.issn.1001-9081.2024091388

Data entities stored across large-scale decentralized institutions suffer from data redundancy, missing information, and inconsistency, and therefore require integration through entity alignment. Most existing entity alignment methods rely on the structural information of entities and perform alignment through subgraph matching; however, the lack of structural information in decentralized data storage leads to poor alignment results. To address this issue and support the identification of important data, an attribute-based entity alignment model built on a single-layer graph neural network was proposed. Firstly, a single-layer graph neural network was used to avoid interference from second-order neighbor information. Secondly, an attribute weighting method based on information entropy was designed to distinguish the importance of attributes quickly in the initial stage. Finally, an attention-based encoder was constructed to represent the importance of different attributes in alignment from both local and global perspectives, thereby providing a more comprehensive representation of entity information. Experimental results indicate that on two decentralized storage datasets, the proposed model improves Hits@1 by 5.24 and 2.03 percentage points, respectively, compared to the suboptimal models, demonstrating superior alignment performance over other entity alignment methods.
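The entropy-based attribute weighting in the second step can be sketched as below. This is a plain Shannon-entropy weighting over attribute columns, offered as an illustration under the assumption that more varied (higher-entropy) attributes are treated as more discriminative; the paper's exact formulation is not given in the abstract.

```python
import math
from collections import Counter

def attribute_entropy_weights(table):
    """Information-entropy attribute weights: each column's Shannon entropy
    across entities, normalized so the weights sum to 1."""
    n_attrs = len(table[0])
    total = len(table)
    entropies = []
    for j in range(n_attrs):
        counts = Counter(row[j] for row in table)
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        entropies.append(h)
    s = sum(entropies) or 1.0   # guard against the all-constant case
    return [h / s for h in entropies]
```

A constant attribute gets entropy 0 and thus weight 0, which matches the intuition that it cannot help distinguish entities.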

Low-cost adversarial example defense algorithm based on example preprocessing
Xiao CHEN, Yan CHANG, Danchen WANG, Shibin ZHANG
Journal of Computer Applications    2024, 44 (9): 2756-2762.   DOI: 10.11772/j.issn.1001-9081.2023091249

To defend against existing attacks on artificial intelligence algorithms (especially artificial neural networks) as far as possible while keeping additional costs low, the rattan algorithm based on example preprocessing was proposed. By cropping away unimportant regions of the image, normalizing neighboring pixel values, and scaling the image, the examples were preprocessed to destroy adversarial perturbations and generate new examples that pose less threat to the model, while preserving high recognition accuracy. Experimental results show that the rattan algorithm can defend, with less overhead than similar algorithms, against some adversarial attacks on the MNIST and CIFAR10 datasets and on neural network models such as squeezenet1_1, mnasnet1_3, and mobilenet_v3_large, with a minimum post-defense example accuracy of 88.50%; meanwhile, it does not reduce accuracy excessively on clean examples, and its defense effect and defense cost are better than those of the comparison algorithms under attacks such as Fast Gradient Sign Method (FGSM) and Momentum Iterative Method (MIM).
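The crop-normalize-scale pipeline can be sketched on a toy grayscale image (a list of pixel rows). The border width, neighbor-averaging rule, and downscale factor here are illustrative assumptions, not the paper's tuned parameters:

```python
def rattan_preprocess(img, crop=1, scale=2):
    """Sketch of example preprocessing: crop the border, smooth each pixel
    with its right neighbour, then downscale. Each step perturbs the input
    enough to disrupt fragile, pixel-precise adversarial noise."""
    h, w = len(img), len(img[0])
    cropped = [row[crop:w - crop] for row in img[crop:h - crop]]
    smoothed = [[(row[i] + row[min(i + 1, len(row) - 1)]) / 2
                 for i in range(len(row))]
                for row in cropped]
    return [row[::scale] for row in smoothed[::scale]]
```

The transformed image is then fed to the unchanged classifier; the defense's cost is just this preprocessing, with no retraining.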

Trajectory planning for autonomous vehicles based on model predictive control
Chao GE, Jiabin ZHANG, Lei WANG, Zhixin LUN
Journal of Computer Applications    2024, 44 (6): 1959-1964.   DOI: 10.11772/j.issn.1001-9081.2023050725

To help an autonomous vehicle plan a safe, comfortable, and efficient driving trajectory, a trajectory planning approach based on model predictive control was proposed. First, to simplify the planning environment, a safe and feasible "three-circle" expansion of the safety zone was introduced, which also eliminates the collision issues caused by an idealized vehicle model. Then, trajectory planning was decoupled into lateral and longitudinal components: a model prediction method was applied to lateral planning to generate a series of candidate trajectories meeting the driving requirements, and a dynamic programming approach was used for longitudinal planning, improving the efficiency of the planning process. Finally, the factors affecting optimal trajectory selection were considered comprehensively, and an optimal trajectory evaluation function was proposed to make path planning and speed planning more compatible with the driving requirements. The effectiveness of the proposed algorithm was verified by joint simulation with Matlab/Simulink, Prescan, and Carsim. Experimental results indicate that the vehicle achieves the expected performance in comfort metrics, steering wheel angle variation, and localization accuracy, and the planned curve closely matches the tracked curve, validating the advantage of the proposed algorithm.
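The "three-circle" idea, approximating the vehicle body by three discs along its longitudinal axis and checking disc-to-obstacle distances, can be sketched as follows. The radii and disc placement are hypothetical; the abstract does not give the exact geometry.

```python
import math

def three_circle_collision(ego_centers, r_ego, obstacle, r_obs):
    """Collision check sketch: the vehicle body is covered by three discs
    of radius r_ego at ego_centers; a collision is flagged when any disc
    overlaps the (inflated) obstacle disc of radius r_obs."""
    ox, oy = obstacle
    return any(math.hypot(cx - ox, cy - oy) < r_ego + r_obs
               for cx, cy in ego_centers)
```

Replacing the rectangular vehicle footprint with three discs turns collision checking into a few distance comparisons, which is what makes the expanded safety zone cheap to evaluate inside the planner.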

Adversarial example detection algorithm based on quantum local intrinsic dimensionality
Yu ZHANG, Yan CHANG, Shibin ZHANG
Journal of Computer Applications    2024, 44 (2): 490-495.   DOI: 10.11772/j.issn.1001-9081.2023020172

To address the high time complexity of adversarial example detection based on Local Intrinsic Dimensionality (LID), an adversarial example detection algorithm based on quantum LID was proposed by exploiting the advantages of quantum computing. First, the SWAP-Test quantum algorithm was used to compute the similarity between the example under test and all examples in a single pass, avoiding the redundant calculations of the classical algorithm. Then, the Quantum Phase Estimation (QPE) algorithm and Grover's quantum search algorithm were combined to calculate the local intrinsic dimensionality of the example under test. Finally, LID was used as the decision basis of a binary detector to distinguish adversarial examples. The detection algorithm was tested and verified on the IRIS, MNIST, and stock time-series datasets. Simulation results show that the calculated LID values highlight the difference between adversarial and normal examples and can serve as a detection basis for distinguishing example attributes. Theoretical analysis shows that the time complexity of the proposed detection algorithm is on the order of the number of Grover-operator iterations multiplied by the square roots of the number of neighboring examples and of the number of training examples, which is clearly better than that of the LID-based adversarial example detection algorithm and achieves exponential acceleration.
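For reference, the classical quantity being accelerated is the maximum-likelihood LID estimate computed from a point's k nearest-neighbour distances. The sketch below shows that classical estimator (assuming strictly positive distances); the quantum algorithm in the paper speeds up the neighbourhood computation feeding it, not this final formula.

```python
import math

def lid_mle(knn_distances, k):
    """Classical maximum-likelihood LID estimator:
    LID = -k / sum_i log(r_i / r_k), where r_1..r_k are the k nearest-
    neighbour distances of the query point and r_k is the largest."""
    r = sorted(knn_distances)[:k]
    r_max = r[-1]
    # the r_k term contributes log(1) = 0, so it is omitted from the sum
    return -k / sum(math.log(d / r_max) for d in r[:-1])
```

Adversarial examples tend to sit in higher-LID neighbourhoods than clean examples, which is why a threshold on this value can act as a binary detector.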

Traceability storage model of charity system oriented to master-slave chain
Jing LIANG, Wunan WAN, Shibin ZHANG, Jinquan ZHANG, Zhi QIN
Journal of Computer Applications    2024, 44 (12): 3751-3758.   DOI: 10.11772/j.issn.1001-9081.2023121821

Storing the traceability data of a charity system on a single chain creates heavy storage pressure, and the need to share charity data may lead to privacy leakage. Therefore, a master-slave chain oriented traceability storage model for charity systems was proposed. Firstly, a master chain and several slave chains were designed: the master chain was mainly responsible for querying charity traceability data and supervising the slave chains, while the slave chains stored the bulk of the charity traceability data. Then, a smart contract for classifying charity traceability data was designed to divide charity data into public and private data according to privacy requirements. Public data were stored on the master chain directly, while private data were encrypted with Ciphertext-Policy Attribute-Based Encryption (CP-ABE) and stored on the slave chains, ensuring data privacy and achieving storage scalability and intelligence. Finally, the Merkle tree storage structure was improved: by designing a smart contract to mark duplicate data, duplicate-block detection and duplicate-data deletion were performed in the blockchain system, avoiding data redundancy and reducing storage consumption. Experimental results show that, compared to the single-chain model, as the total amount of data increases, the proposed model's master-slave chain response time stabilizes at 0.53 s and its throughput stabilizes at 149 B. The master-slave chain model thus improves search efficiency, optimizes storage space, and realizes data privacy protection.
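The classification contract's routing logic can be sketched as follows. The record shape and the `encrypt` callable (standing in for CP-ABE, which is far more involved in practice) are illustrative assumptions:

```python
def store_traceability(record, master, slave, encrypt):
    """Routing sketch for the classification contract: public fields go to
    the master chain in the clear; private fields are encrypted (CP-ABE in
    the paper; a stand-in callable here) and pushed to a slave chain.
    record maps field name -> (value, is_private)."""
    for key, (value, is_private) in record.items():
        if is_private:
            slave.append((key, encrypt(value)))
        else:
            master.append((key, value))
```

Keeping only public fields and ciphertext pointers off the master chain is what lets it stay small enough to serve queries and supervise the slave chains.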

Cross-chain identity authentication scheme based on certificate-less signcryption
Deyuan LIU, Jingquan ZHANG, Xing ZHANG, Wunan WAN, Shibin ZHANG, Zhi QIN
Journal of Computer Applications    2024, 44 (12): 3731-3740.   DOI: 10.11772/j.issn.1001-9081.2023121824

In response to the low decentralization, poor scalability, and high resource consumption of current blockchain cross-chain identity authentication schemes, a Cross-chain Identity Authentication scheme based on Certificate-Less SignCryption (CIA-CLSC) was proposed. Firstly, Certificate-Less SignCryption (CLSC) was used to generate keys for cross-chain entities, encrypt communication, and perform identity authentication. Secondly, secret sharing was employed for key management in the distributed system. Finally, decentralized identities were used to associate entity keys with cross-chain identities. While ensuring identity privacy and security, CIA-CLSC achieves cross-chain interactive identity authentication among different blockchain systems. Theoretical analysis and experimental results demonstrate that CIA-CLSC relies on no centralized certificate authority or third-party key management organization, ensuring decentralization, and that the digital identities generated by CIA-CLSC comply with World Wide Web Consortium (W3C) standards, ensuring scalability. Furthermore, compared with the combination of ECC (Elliptic Curve Cryptography) and AES (Advanced Encryption Standard), CIA-CLSC reduces time overhead by approximately 34%; compared with the combination of RSA (Rivest-Shamir-Adleman algorithm) and AES, it reduces time overhead by approximately 38% while maintaining decentralized cross-chain interactive identity authentication. CIA-CLSC can thus effectively enhance the decentralization, scalability, and interaction efficiency of cross-chain systems in practical applications.

Cross-chain identity management scheme based on identity-based proxy re-encryption
Xin ZHANG, Jinquan ZHANG, Deyuan LIU, Wunan WAN, Shibin ZHANG, Zhi QIN
Journal of Computer Applications    2024, 44 (12): 3723-3730.   DOI: 10.11772/j.issn.1001-9081.2023121823

To address the problems of low authentication efficiency, insufficient security, and poor scalability in current cross-chain identity management, a cross-chain identity management scheme based on Identity-Based Proxy Re-Encryption (IBPRE) was proposed. Firstly, an identity chain was built using Decentralized IDentifiers (DIDs): DIDs served as cross-chain identity identifiers, verifiable credentials were issued to users as access credentials, and an access control policy was built on the credential information. Secondly, the relay chain was combined with a cryptographic accumulator to achieve user identity authentication. Finally, by combining IBPRE with a signature algorithm, a cross-chain communication model based on IBPRE was constructed. Experimental analysis and evaluation show that, compared with RSA (Rivest-Shamir-Adleman algorithm) and Elliptic Curve Cryptography (ECC), the proposed scheme reduces authentication time by 66.9% and 4.8%, respectively. The relay chain and identity chain together realize identity management, improve decentralization and scalability, support cross-chain communication models and credential-based access policies, and ensure security in cross-chain identity management.

Delegated proof of stake consensus algorithm based on reputation value and strong blind signature algorithm
Zhenhao ZHAO, Shibin ZHANG, Wunan WAN, Jinquan ZHANG, Zhi QIN
Journal of Computer Applications    2024, 44 (12): 3717-3722.   DOI: 10.11772/j.issn.1001-9081.2023121822

To address the issues of the Delegated Proof of Stake (DPoS) algorithm, such as the growing centralization caused by high-weight nodes having a higher probability of obtaining accounting rights, low voting enthusiasm among nodes, and collusion attacks caused by node corruption, a DPoS consensus algorithm based on reputation values and a strong blind signature algorithm was proposed. Firstly, the nodes were sorted into two types based on initial conditions, and an initial selection was carried out to choose the proxy nodes. Secondly, the proxy nodes voted for each other, and the top 21 nodes, ranked by the average of historical reputation value and final vote count, formed the witness node set, while the remaining nodes formed the standby witness node set. During voting, an ElGamal-based strong blind signature algorithm was employed to preserve the privacy of voting nodes. Finally, consensus was reached after block production by the witness nodes. Experimental results demonstrate that, compared with the original DPoS consensus algorithm, the proposed algorithm increases the proportion of active nodes by approximately 20 percentage points and reduces the proportion of malicious nodes to nearly zero. The proposed algorithm thus enhances nodes' voting enthusiasm and protects their privacy.
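The witness-selection step can be sketched as follows. The node record fields (`reputation`, `votes`) are hypothetical names; the ranking key, the mean of historical reputation and final vote count, follows the description above.

```python
def select_witnesses(nodes, n_witness=21):
    """Rank proxy nodes by the average of historical reputation value and
    final vote count; the top n_witness become witnesses, the rest form
    the standby witness set."""
    ranked = sorted(nodes,
                    key=lambda n: (n["reputation"] + n["votes"]) / 2,
                    reverse=True)
    return ranked[:n_witness], ranked[n_witness:]
```

Folding reputation into the ranking is what keeps a node with many (possibly bought) votes but a poor history out of the witness set.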

Linkable ring signature scheme based on SM9 algorithm
Yiting WANG, Wunan WAN, Shibin ZHANG, Jinquan ZHANG, Zhi QIN
Journal of Computer Applications    2024, 44 (12): 3709-3716.   DOI: 10.11772/j.issn.1001-9081.2023121825

Aiming at the problems that Identity-Based Linkable Ring Signature (IBLRS) schemes incur excessive overhead and do not meet the requirements of technical autonomy, a Linkable Ring Signature (LRS) scheme based on the SM9 algorithm was proposed. Firstly, the identifier of the signer in the ring was sent to the Key Generation Center (KGC) to generate the corresponding private key. Secondly, this private key was combined with the SM9 algorithm to generate a signature, with the private key generated in the same way as in the SM9 algorithm. Finally, the signer's private key and the event identifier were bound to construct a linkable tag without complex calculations, improving the efficiency of the proposed scheme. Under the random oracle model, the proposed scheme was proved to provide correctness, unforgeability, unconditional anonymity, and linkability. In addition, a multi-notary cross-chain scheme was designed on the basis of the proposed algorithm to achieve efficient and secure cross-chain interaction. Compared with the IBLRS algorithm, the proposed scheme requires only 4 bilinear pairing operations, reducing computational overhead and communication overhead by 39.06% and 51.61%, respectively. Performance analysis shows that the proposed scheme reduces computing and communication overhead and satisfies the requirement of technical autonomy and controllability.

Automatic thresholding method guided by maximizing four-directional weighted Shannon entropy
Yaobin ZOU, Bin ZHANG
Journal of Computer Applications    2024, 44 (11): 3565-3573.   DOI: 10.11772/j.issn.1001-9081.2023111639

The grayscale histogram of an image may exhibit non-modal, unimodal, bimodal, or multi-modal morphological characteristics, yet most traditional entropy thresholding methods are only suitable for grayscale images with unimodal or bimodal characteristics. To improve the segmentation accuracy and adaptability of entropy thresholding, an automatic thresholding method guided by maximizing Four-directional Weighted Shannon Entropy (FWSE) was proposed. Firstly, a series of Multi-scale Product Transformation (MPT) images was obtained by performing MPTs with directional Prewitt convolution kernels in four directions. Secondly, the optimal MPT image in each direction was computed automatically based on a cubic spline interpolation function and the curvature maximization criterion. Thirdly, the pixels of each optimal MPT image were resampled using inner and outer contour images to reconstruct the grayscale histogram, and the corresponding Shannon entropy was calculated from it. Finally, the optimal segmentation threshold was selected by maximizing the weighted Shannon entropy over the four directions. The FWSE method was compared with three recent thresholding methods and two recent non-thresholding methods on 4 synthetic images and 100 real-world images. Experimental results show that on the synthetic images, the average Matthews Correlation Coefficient (MCC) of the FWSE method reaches 0.999, and on the real-world images, the average MCCs of the FWSE method and the five comparison methods are 0.974, 0.927, 0.668, 0.595, 0.550, and 0.525, respectively. The FWSE method therefore offers higher segmentation accuracy and more flexible segmentation adaptability.
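For orientation, the single-histogram building block that FWSE weights over four directions is classical maximum-entropy thresholding: pick the threshold that maximizes the summed Shannon entropies of the foreground and background histograms. A sketch (this is the classical Kapur-style criterion, not the full four-directional FWSE pipeline):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def max_entropy_threshold(hist):
    """Classical maximum-entropy threshold selection over a grayscale
    histogram: maximize H(background) + H(foreground) over the split t."""
    total = sum(hist)
    best_t, best_h = 0, -1.0
    for t in range(1, len(hist)):
        w0 = sum(hist[:t])
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue   # skip degenerate splits
        h = (shannon_entropy([c / w0 for c in hist[:t]]) +
             shannon_entropy([c / w1 for c in hist[t:]]))
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

FWSE replaces the raw histogram with histograms rebuilt from the four optimal MPT images and combines their entropies with directional weights, which is what lets it cope with non-bimodal histograms.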

Multi-parameter channel transmission performance evaluation method with improved TCP/IP frame structure
Fengtao HE, Binghui WANG, Bin ZHANG, Yi YANG, Yibo FENG
Journal of Computer Applications    2024, 44 (11): 3540-3547.   DOI: 10.11772/j.issn.1001-9081.2023111638

At present, network densification is accelerating the degradation of channel transmission performance, and widely used evaluation methods face significant challenges because they consider few parameters and have limited applicability. To address these difficulties, a multi-parameter channel transmission performance evaluation method based on an improved Transmission Control Protocol/Internet Protocol (TCP/IP) frame structure was proposed. Firstly, standardized test data were generated, including pseudo-random codes, basic curve data, and custom curve data, ensuring that the test data follow a uniform standard. Secondly, an improved TCP/IP frame structure was employed to package test data information, including the total frame count and frame sequence numbers, into TCP/IP frames. In this way, the sending, receiving, and parsing of test data were realized, and statistics on basic channel transmission variables were collected, such as the number of frames by type, the number of frames by length, the total number of frames, and the volume of effective data. Finally, the received data were analyzed to obtain two higher-level channel transmission indicators, frame error rate and bit error rate, completing the overall evaluation of channel transmission performance. The designed method uses six parameters to evaluate channel quality, reaching an evaluation precision of 0.01% with a minimum error margin of 0.01%, and is compatible with all communication channels using TCP/IP. Experimental results demonstrate that the proposed method can gather and analyze the six channel-communication statistics and evaluate channel transmission performance accurately.
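The two higher-level indicators are standard definitions, frame error rate (fraction of frames with at least one bit error) and bit error rate (fraction of erroneous bits). A minimal sketch over paired sent/received frames represented as bit strings:

```python
def channel_error_rates(sent_frames, received_frames):
    """Compute (frame error rate, bit error rate) from paired sent and
    received frames, each frame a string of '0'/'1' bits."""
    frame_errors = bit_errors = total_bits = 0
    for sent, recv in zip(sent_frames, received_frames):
        diff = sum(a != b for a, b in zip(sent, recv))
        bit_errors += diff
        total_bits += len(sent)
        frame_errors += diff > 0   # one flipped bit spoils the whole frame
    return frame_errors / len(sent_frames), bit_errors / total_bits
```

Because pairing sent and received frames requires knowing which frame is which, the improved frame structure's total-frame-count and sequence-number fields are what make this computation possible at the receiver.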

Quantum K-Means algorithm based on Hamming distance
Jing ZHONG, Chen LIN, Zhiwei SHENG, Shibin ZHANG
Journal of Computer Applications    2023, 43 (8): 2493-2498.   DOI: 10.11772/j.issn.1001-9081.2022091469

K-Means algorithms typically use Euclidean distance to calculate the similarity between data points when dealing with large-scale heterogeneous data, which is inefficient and computationally complex. Inspired by the significant advantage of Hamming distance in data-similarity calculation, a Quantum K-Means Hamming (QKMH) algorithm was proposed. First, the data were prepared and encoded into quantum states, and the quantum Hamming distance was used to calculate the similarity between the points to be clustered and the K cluster centers. Then, Grover's minimum search algorithm was improved to find the cluster center closest to each point. Finally, these steps were repeated until the designated number of iterations was reached or the cluster centers no longer changed. The proposed algorithm was validated on the MNIST handwritten digit dataset using the Qiskit quantum simulation framework and compared with various traditional and improved methods. Experimental results show that the F1 score of the QKMH algorithm is improved by 10 percentage points compared with the Manhattan distance-based quantum K-Means algorithm and by 4.6 percentage points compared with the latest optimized Euclidean distance-based quantum K-Means algorithm, and the time complexity of QKMH is lower than those of the compared algorithms.
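The classical analogue of the QKMH assignment step, each binary point joining the cluster center with the smallest Hamming distance, can be sketched as follows (the quantum version replaces this linear scan with quantum Hamming distance plus Grover minimum search):

```python
def hamming(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def assign_clusters(points, centers):
    """K-Means assignment step under Hamming distance: return, for each
    point, the index of its nearest cluster center."""
    return [min(range(len(centers)), key=lambda k: hamming(p, centers[k]))
            for p in points]
```

The full loop would alternate this assignment with a center-update step until convergence or an iteration cap, as the abstract describes.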

Ancient mural dynasty identification based on attention mechanism and transfer learning
Huibin ZHANG, Liping FENG, Yaojun HAO, Yining WANG
Journal of Computer Applications    2023, 43 (6): 1826-1832.   DOI: 10.11772/j.issn.1001-9081.2022071008

Convolutional Neural Networks (CNNs) have been used successfully to classify the dynasties of ancient Dunhuang murals. Since the amount of Dunhuang mural data is limited, expanding the training set with some data augmentation methods can reduce prediction accuracy; to address this, a Residual Network (ResNet) model based on an attention mechanism and transfer learning was proposed. Firstly, the residual connection method of the residual network was improved. Then, the POlarized Self-Attention (POSA) module was used to help the network extract edge-level local detail features and global contour features of the images, enhancing the model's learning ability in a small-sample setting. Finally, the classifier algorithm was improved, improving the classification performance of the network. Experimental results show that the proposed model achieves 98.05% dynasty classification accuracy on the DH1926 small-sample dataset of Dunhuang murals, an improvement of 5.21 percentage points over the standard ResNet20 network model.

Process tracking multi-task rumor verification model combined with stance
Bin ZHANG, Li WANG, Yanjie YANG
Journal of Computer Applications    2022, 42 (11): 3371-3378.   DOI: 10.11772/j.issn.1001-9081.2021122148

At present, social media platforms have become the main channels for publishing and obtaining information, but this ease of publishing can lead to the rapid spread of rumors, so verifying whether information is a rumor and stopping its spread has become an urgent problem. Previous studies have shown that people's stance toward information can help determine whether it is a rumor. Building on this, a Joint Stance Process Multi-Task Rumor Verification Model (JSP-MRVM) was proposed for the rumor spread problem. Firstly, three propagation processes of information were represented using a topology map, a feature map, and a common Graph Convolutional Network (GCN), respectively. Then, an attention mechanism was used to obtain the stance features of the information and fuse them with the tweet features. Finally, a multi-task objective function was designed so that the stance classification task better assists rumor verification. Experimental results show that the accuracy and Macro-F1 of the proposed model on the RumorEval dataset are 10.7 and 11.2 percentage points higher, respectively, than those of the baseline model RV-ML (Rumor Verification scheme based on Multitask Learning model), verifying that the proposed model is effective and can reduce the spread of rumors.

Table and Figures | Reference | Related Articles | Metrics
Automatic feature selection algorithm based on interaction of ReliefF with maximum information coefficient and SVM
Qian GE, Guangbin ZHANG, Xiaofeng ZHANG
Journal of Computer Applications    2022, 42 (10): 3046-3053.   DOI: 10.11772/j.issn.1001-9081.2021081486
Abstract436)   HTML13)    PDF (793KB)(143)       Save

To solve the problems of the ReliefF feature selection algorithm, such as poor stability and low classification accuracy of the selected feature subsets caused by using the Euclidean distance to select nearest-neighbor samples, an MICReliefF (Maximum Information Coefficient-ReliefF) algorithm based on the Maximum Information Coefficient (MIC) was proposed. At the same time, the classification accuracy of a Support Vector Machine (SVM) model was used as the evaluation index, and the optimal feature subset was determined automatically through multiple rounds of optimization, thereby realizing the interactive optimization of the MICReliefF algorithm and the classification model, that is, the MICReliefF-SVM automatic feature selection algorithm. The performance of the MICReliefF-SVM algorithm was verified on several UCI public datasets. Experimental results show that the MICReliefF-SVM automatic feature selection algorithm can not only filter out more redundant features, but also select feature subsets with good stability and generalization ability. Compared with Random Forest (RF), max-Relevance and Min-Redundancy (mRMR), Correlation-based Feature Selection (CFS) and other classical feature selection algorithms, the MICReliefF algorithm achieves higher classification accuracy.
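The ReliefF weight update with a pluggable distance function can be sketched as follows. In the paper the Euclidean `dist` is replaced by an MIC-based measure; the MIC computation itself is not reproduced here, and the simplified single-neighbor update is an illustrative reduction of full ReliefF.

```python
import numpy as np

def relieff_weights(X, y, dist):
    # Simplified ReliefF: for each sample, find its nearest hit
    # (same class) and nearest miss (other class) under `dist`, and
    # reward features that separate misses more than hits.
    n, d = X.shape
    w = np.zeros(d)
    for i in range(n):
        dists = np.array([dist(X[i], X[j]) if j != i else np.inf
                          for j in range(n)])
        hit = np.argmin(np.where(y == y[i], dists, np.inf))
        miss = np.argmin(np.where(y != y[i], dists, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n

euclid = lambda a, b: np.linalg.norm(a - b)
```

A feature that discriminates between classes receives a positive weight, while an uninformative constant feature stays at zero, which is the property the wrapper SVM loop then exploits when ranking features.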

Table and Figures | Reference | Related Articles | Metrics
Cross-modal tensor fusion network based on semantic relation graph for image-text retrieval
Changhong LIU, Sheng ZENG, Bin ZHANG, Yong CHEN
Journal of Computer Applications    2022, 42 (10): 3018-3024.   DOI: 10.11772/j.issn.1001-9081.2021091622
Abstract501)   HTML23)    PDF (2407KB)(210)       Save

The key to cross-modal image-text retrieval is capturing the semantic correlation between images and text effectively. Most existing methods learn the global semantic correlation between image region features and text features, or the local semantic correlation between inter-modality objects, while ignoring the correlation between intra-modality object relationships and inter-modality object relationships. To solve this problem, a Cross-Modal Tensor Fusion Network based on Semantic Relation Graph (CMTFN-SRG) method for image-text retrieval was proposed. Firstly, the relationships of image regions and of text words were generated by a Graph Convolutional Network (GCN) and a Bidirectional Gated Recurrent Unit (Bi-GRU) respectively. Then, the fine-grained semantic correlation between the data of the two modalities was learned by using a tensor fusion network to match the learned semantic relation graph of image regions with the graph of text words. At the same time, a Gated Recurrent Unit (GRU) was used to learn the global features of the image, and the global features of the image and the text were matched to capture the inter-modality global semantic correlation. The proposed method was compared with the Multi-Modality Cross Attention (MMCA) method on the benchmark datasets Flickr30K and MS-COCO. Experimental results show that the proposed method improves the Recall@1 of the text-to-image retrieval task by 2.6%, 9.0% and 4.1% on the Flickr30K, MS-COCO1K and MS-COCO5K test datasets respectively, and improves the mean Recall (mR) by 0.4, 1.3 and 0.1 percentage points respectively. It can be seen that the proposed method can effectively improve the precision of image-text retrieval.
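The generic tensor-fusion operation, taking the outer product of the two modality features so that every pairwise cross-modal interaction gets its own coordinate, can be sketched as follows. This shows only the general operation, not the exact CMTFN-SRG formulation.

```python
import numpy as np

def tensor_fusion(img_feat, txt_feat):
    # Outer product of the two modality features captures all pairwise
    # cross-modal interactions; appending a 1 to each vector preserves
    # the unimodal terms in the fused representation as well.
    a = np.append(img_feat, 1.0)
    b = np.append(txt_feat, 1.0)
    return np.outer(a, b).ravel()
```

For feature vectors of length m and n the fused vector has (m+1)(n+1) entries, which is why tensor fusion is usually applied to compact (e.g. graph-pooled) features rather than raw region features.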

Table and Figures | Reference | Related Articles | Metrics
Virtual-real registration method based on improved ORB algorithm
ZHAO Jian, HAN Bin, ZHANG Qiliang
Journal of Computer Applications    2014, 34 (9): 2725-2729.   DOI: 10.11772/j.issn.1001-9081.2014.09.2720
Abstract277)      PDF (851KB)(534)       Save

Aiming at the problem that virtual-real registration accuracy and real-time performance in Augmented Reality (AR) are affected by image texture and uneven illumination, a method based on an improved ORB (Oriented FAST (Features from Accelerated Segment Test) and Rotated BRIEF (Binary Robust Independent Elementary Features)) algorithm was proposed. Firstly, dense regions of image feature points were optimized by setting thresholds on their number and spacing, and a parallel algorithm was used to retain the N points with the largest eigenvalues. Then, a discrete difference feature was adopted to enhance stability under uneven illumination changes, and the improved ORB was combined with the Bag-of-Features (BOF) model to realize fast retrieval of the benchmark image. Finally, virtual-real registration was realized by using the homography between images. Comparative experiments on accuracy and efficiency against the original ORB, Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) algorithms show that the proposed method reduces the registration time to about 40% of the original and achieves an accuracy of more than 95%. The experimental results show that the proposed method achieves better real-time performance and higher accuracy under different textures and uneven illumination.
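Once the homography between the matched images has been estimated from the ORB correspondences, the final registration step amounts to mapping points through that 3x3 matrix. A minimal NumPy sketch of this standard operation:

```python
import numpy as np

def apply_homography(H, pts):
    # Map 2D points through a 3x3 homography using homogeneous
    # coordinates; this is the virtual-real registration step once
    # H has been estimated from the matched features.
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # back to 2D
```

In practice H would come from a robust estimator (e.g. RANSAC over the feature matches); the sketch only shows the projection itself.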

Reference | Related Articles | Metrics
Promoting accuracy of trust bootstrapping from rating network
LIU Bin, ZHANG Renjin
Journal of Computer Applications    2014, 34 (8): 2442-2446.   DOI: 10.11772/j.issn.1001-9081.2014.08.2442
Abstract269)      PDF (771KB)(362)       Save

To reduce the influence of unfair and malicious ratings on the trust evaluation of a commodity that has only a few ratings on an e-commerce platform, a trust bootstrapping method based on assessing the credibility of ratings was presented. The credibility of a rating was obtained by evaluating the rater's ratings for other commodities, and was related to the number of ratings given by the rater, the rater's transaction amount, and the price of the rated commodity. The trust value of a commodity without ratings was derived from the shop to which the commodity belonged and from the commodity's declared attributes. The trust value of a commodity that had enough ratings of sufficiently high credibility was determined by those high-credibility ratings; otherwise, the trust value was determined partly by the ratings, or was handled as for a commodity without ratings. Calculation, analysis and experimental results show that the presented method, which evaluates the credibility of a rating through its rating network, has the smallest error compared with the conventional method and the k-means clustering method, and is not sensitive to the ratio of malicious ratings. This method can help users select reliable commodities sold at the initial stage on e-commerce platforms.
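A credibility-weighted trust computation in the spirit of this method can be sketched as follows. The threshold `min_cred`, the weighting scheme, and the function name are illustrative assumptions, not the paper's exact formulas.

```python
def weighted_trust(ratings, min_cred=0.5):
    # Trust of a commodity from its ratings: only ratings whose
    # credibility reaches a threshold contribute, each weighted by
    # that credibility, so low-credibility (potentially malicious)
    # ratings are excluded from the average.
    usable = [(r, c) for r, c in ratings if c >= min_cred]
    if not usable:
        return None   # fall back to shop/attribute bootstrapping
    return sum(r * c for r, c in usable) / sum(c for _, c in usable)
```

Returning `None` signals the bootstrapping case, where trust must instead be derived from the shop and the commodity's declared attributes.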

Reference | Related Articles | Metrics
Mixed key management scheme based on domain for wireless sensor network
WANG Binbin, ZHANG Yanyan, ZHANG Xuelin
Journal of Computer Applications    2014, 34 (1): 90-94.   DOI: 10.11772/j.issn.1001-9081.2014.01.0090
Abstract608)      PDF (768KB)(540)       Save
Concerning the problems of existing key management strategies, namely low connectivity, high storage consumption and high communication cost, a mixed key management scheme based on domains for Wireless Sensor Network (WSN) was proposed. The scheme divided the deployment area into a number of square regions consisting of member nodes and head nodes. According to their pre-distributed key space information, any pair of nodes in the same region could establish a session key, while nodes in different regions could only communicate through head nodes. The eigenvalues and eigenvectors of multiple asymmetric quadratic form polynomials were computed to obtain the orthogonal diagonalization information, by which the head nodes could achieve identification and generate session keys with their neighbor nodes. Performance analysis shows that, compared with existing key management schemes, this scheme achieves full connectivity and significant improvements in communication overhead, storage consumption and security.
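The pre-distributed key space idea can be illustrated with a symmetric bivariate polynomial (the classic Blundo-style construction): each node stores one row of a symmetric polynomial, and symmetry guarantees both ends of a link derive the same key. Note this is a stand-in for intuition only; the paper itself derives keys via orthogonal diagonalization of quadratic-form polynomials, which this sketch does not reproduce.

```python
def make_share(coeffs, node_id, p=7919):
    # Pre-distribution: node i stores g_i(y) = f(i, y) mod p, where
    # f(x, y) = sum c[j][k] x^j y^k is symmetric (c[j][k] == c[k][j]).
    deg = len(coeffs)
    return [sum(coeffs[j][k] * pow(node_id, j, p) for j in range(deg)) % p
            for k in range(deg)]

def session_key(share, peer_id, p=7919):
    # Node i evaluates g_i(peer) = f(i, peer); symmetry of f gives
    # f(i, j) == f(j, i), so both ends derive the same session key.
    return sum(c * pow(peer_id, k, p) for k, c in enumerate(share)) % p
```

Each node stores only its own polynomial share, yet any pair in the same key space can agree on a key without further interaction.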
Related Articles | Metrics
Bit-flipping prediction acquisition algorithm of weak GPS signal
LI Weibin, ZHANG Yingxin, GUO Xinming, ZHANG Wei
Journal of Computer Applications    2013, 33 (12): 3473-3476.  
Abstract807)      PDF (652KB)(526)       Save
A long coherent integration duration is needed for weak Global Positioning System (GPS) signal acquisition; however, it is limited to 10 ms by navigation data bit flipping, which is far from enough. To further improve acquisition sensitivity, a bit-flipping prediction algorithm was proposed. Firstly, the 5 ms signal segment that possibly contained a bit flip, detected by comparing the coherent integration results of several blocks of data, was discarded. Then coherent integration was applied to the sub-block signals in the remaining 15 ms, and differential coherent integration was performed on the results to overcome bit flips and reduce the squaring loss of non-coherent integration. At the same time, a summation operation was performed ahead of coherent integration to reduce computational complexity. Theoretical analysis and simulation results show that acquisition sensitivity and efficiency are improved, and the algorithm can even capture weak signals with a Signal-to-Noise Ratio (SNR) below -50 dB.
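Differential coherent integration can be sketched as follows: a data-bit flip negates every block sum after the flip, so in each adjacent-block product both factors are negated and the product is unchanged; only the single product spanning the flip boundary is affected. This is a generic illustration of the operation, not the paper's full acquisition chain.

```python
import numpy as np

def differential_integration(signal, block_len):
    # Split the signal into coherent sub-blocks, integrate each, then
    # accumulate products of adjacent block sums. A bit flip negates
    # all subsequent block sums, but in each product both factors
    # after the flip are negated, so only the one product spanning
    # the flip boundary is degraded.
    blocks = signal[:len(signal) // block_len * block_len].reshape(-1, block_len)
    sums = blocks.sum(axis=1)
    return np.abs(np.sum(sums[1:] * np.conj(sums[:-1])))
```

Compare this with plain coherent integration, where a flip halfway through the interval cancels the accumulated sum entirely.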
Reference | Related Articles | Metrics
Multi-objective optimization algorithm based on dynamic multiple particle swarms
LIU Bin, ZHANG Renjin
Journal of Computer Applications    2013, 33 (12): 3375-3379.  
Abstract706)      PDF (753KB)(553)       Save
To maintain the diversity of particles while multi-objective particle swarm optimization runs, a multi-objective optimization algorithm based on particle swarm initialization and dynamic multiple-swarm cooperation was proposed. The number of swarms was increased or decreased dynamically according to the distribution of the particle swarms in the decision space. To avoid premature convergence, the factors affecting a particle's velocity were extended to include the particle's current velocity inertia, its personal best value, the best value of the swarm to which it belongs, and the optimal value over all swarms. The algorithm was tested on five benchmark functions and compared with multi-objective particle swarm optimization. The experimental results indicate that the proposed algorithm is superior to multi-objective particle swarm optimization.
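The modified velocity update, with the extra attraction toward the best value over all swarms, can be sketched as follows; the coefficient values are illustrative, not the paper's.

```python
import random

def update_velocity(v, x, pbest, sbest, gbest, w=0.7, c1=1.4, c2=1.4, c3=1.4):
    # Velocity depends on the particle's inertia, its personal best,
    # the best of its own sub-swarm, and the best over all swarms
    # (the extra `gbest` term is the modification described above).
    return [w * vi
            + c1 * random.random() * (pb - xi)
            + c2 * random.random() * (sb - xi)
            + c3 * random.random() * (gb - xi)
            for vi, xi, pb, sb, gb in zip(v, x, pbest, sbest, gbest)]
```

When a particle already sits at all three best positions, only the inertia term remains, which is what damps convergence speed and preserves diversity.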
Related Articles | Metrics
Chord protocol and algorithm in distributed programming language
PENG Chengzhang, JIANG Zejun, CAI Xiaobin, ZHANG Zhike
Journal of Computer Applications    2013, 33 (07): 1885-1889.   DOI: 10.11772/j.issn.1001-9081.2013.07.1885
Abstract1010)      PDF (802KB)(579)       Save
The Peer-to-Peer (P2P) Distributed Hash Table (DHT) protocol is concise and easy to understand, but implementing and deploying a fully functional component like Chord in practice is difficult and complicated because of the mismatch between popular imperative languages and distributed architecture. To resolve these problems, a P2P DHT protocol implementation based on the Bloom system was proposed. Firstly, the key elements of Bloom, a distributed logic programming language, were expounded. Secondly, a minimal distributed system was designed. Thirdly, a Chord prototype system was implemented by defining persistent, transient, asynchronous-communicating and periodic collections, and by designing several algorithms for finger table maintenance, successor listing, stabilization preserving and so on. The experimental results show that the prototype system implements the full functionality of Chord, and saves 60% of the code lines compared with traditional languages. The analysis indicates that such a high degree of uniformity between the final algorithm code and the DHT protocol specification makes the code more readable and reusable, and helps in understanding the specific protocol and related applications.
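The core Chord rule that any such prototype implements, finding the successor of a key on the identifier ring, can be sketched as follows (here over a flat node list rather than a routed finger-table lookup):

```python
def find_successor(nodes, key, ring=2**8):
    # Minimal Chord rule: the successor of `key` is the first node
    # whose identifier is >= key, wrapping around the identifier ring.
    nodes = sorted(nodes)
    for n in nodes:
        if n >= key % ring:
            return n
    return nodes[0]   # wrap around past the largest identifier
```

In a real deployment each node only knows its finger table and successor list, and the same rule is evaluated hop by hop instead of over a global node list.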
Reference | Related Articles | Metrics
Multi-context trust and reputation evaluation system for server selection in online transaction
LIU Bin, ZHANG Ren-jin
Journal of Computer Applications    2012, 32 (08): 2350-2359.   DOI: 10.3724/SP.J.1087.2012.02350
Abstract1154)      PDF (1229KB)(361)       Save
In systems that select servers according to reputation and trust values, the common problems that prevent the chosen servers from meeting users' diverse requirements are that only a small number of factors are considered within the system and that the methods lack flexibility. To resolve these problems, a multi-context reputation and trust evaluation system for selecting servers in online transactions was proposed. A server acquired its reputation and trust bootstrapping value through its registered quality attributes and guarantee fund, and gained its reputation and trust experimental value through its trading experience in the internal system and external systems. The actual trust value was a dynamic linear combination of both values: as the number of transactions increased, the weight of the experimental value increased dynamically. A server was selected according to the context quality attributes set by the user and the actual trust and reputation value. This method of selecting a new server was tested and contrasted with other trust and reputation systems. The experimental results indicate that the method can easily meet the requirements of all kinds of users. The trust and reputation system affords the new server, as well as other servers, a competitive environment, and also reduces the possibility of a user selecting a malicious server.
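The dynamic linear combination of the bootstrapping value and the experimental value can be sketched as follows; the weighting function w = n/(n+k) is an illustrative choice of a weight that grows with transaction count, not necessarily the paper's.

```python
def actual_trust(bootstrap, experience, n_transactions, k=10.0):
    # Dynamic linear combination: with few transactions the
    # bootstrapping value (registered attributes + guarantee fund)
    # dominates; as transactions accumulate, the experimental value
    # takes over. w = n / (n + k) rises from 0 toward 1.
    w = n_transactions / (n_transactions + k)
    return (1.0 - w) * bootstrap + w * experience
```

This is what lets a newly registered server compete: it starts from its bootstrapping value and is gradually judged on actual behavior instead.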
Reference | Related Articles | Metrics
Parameter optimization of support vector machine and application based on particle swarm optimization mode search
WANG Xi-bin, ZHANG Xiao-ping, WANG Han-hu
Journal of Computer Applications    2011, 31 (12): 3302-3304.  
Abstract1111)      PDF (619KB)(669)       Save
Considering the importance of selecting kernel parameters, a Particle Swarm Optimization (PSO) mode search algorithm was proposed to search for the optimal parameters. The method combines the global search capability of the PSO algorithm with the good local convergence of mode search, which gives the PSO mode search algorithm higher performance, and it was applied to the practical problem of agricultural technology project classification. The experimental results show that the method is not only efficient, but also finds optimal parameters that achieve higher accuracy.
Related Articles | Metrics
Rough attack model based on object Petri net of expanded time
Guang-qiu HUANG, Chun-zi WANG, Bin ZHANG
Journal of Computer Applications    2011, 31 (08): 2146-2151.   DOI: 10.3724/SP.J.1087.2011.02146
Abstract1463)      PDF (1132KB)(884)       Save
To solve the redundancy problem caused by similar attack methods and similar node objects in an attack model of a complex network, a rough network attack model based on the vulnerability relation model was put forward. Attribute sets were defined on the node domain and the transition domain of a Petri net, and similar attack methods and similar node objects were classified to form the class space of the domain Petri nets. By defining path similarity, all characteristic attack paths that could reach an attack goal were searched out by an ant algorithm, and the maximal threat path reaching the goal node was found among these characteristic attack paths. The experimental results show that the proposed model can quickly locate node objects and their related attack methods from real-time monitoring information and accurately find their positions among the characteristic attack paths.
Reference | Related Articles | Metrics
Static separation of duty policy based on mutually exclusive role constraints
Ting WANG, Xing-yuan CHEN, Bin ZHANG, Zhi-yu REN, Lu WANG
Journal of Computer Applications    2011, 31 (07): 1884-1886.   DOI: 10.3724/SP.J.1087.2011.01884
Abstract1466)      PDF (668KB)(953)       Save
Static Separation Of Duty (SSOD) is an important principle of information system security. In Role-Based Access Control (RBAC), it is difficult to enforce a 2-n SSOD policy directly through 2-2 Static Mutually Exclusive Role (SMER) constraints. In this paper, the necessary and sufficient conditions for realizing a 2-n SSOD policy based on 2-2 SMER constraints were proposed and proved. The proposed sufficient condition is less restrictive than those in existing research and allows more flexible privilege assignment. Through the operation rules of authorization management, the sufficient condition is preserved, so that the 2-n SSOD policy remains satisfied during dynamic changes of the application environment. An application example shows that the method is correct and effective.
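The two kinds of constraint can be stated as simple checks: a 2-n SSOD policy forbids any single user from holding all n permissions of a sensitive set, while a 2-2 SMER constraint forbids any user from holding both roles of a mutually exclusive pair. The function, role, and permission names below are illustrative.

```python
def violates_2n_ssod(user_perms, sensitive_perms):
    # 2-n SSOD policy: no single user may possess all n permissions
    # of the sensitive permission set.
    return set(sensitive_perms) <= set(user_perms)

def violates_smer(user_roles, smer_pairs):
    # 2-2 SMER constraint: no user may be assigned both roles of a
    # statically mutually exclusive pair.
    return any({a, b} <= set(user_roles) for a, b in smer_pairs)
```

The paper's question is when enforcing the second check (on role assignment) is enough to guarantee the first (on accumulated permissions), which depends on how the sensitive permissions are distributed over roles.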
Reference | Related Articles | Metrics