In view of the problems of complex case structure, redundant case facts, and wide distribution of case information in judgment documents, existing Large Language Models (LLMs) struggle to focus on structural information effectively and may generate factual errors, resulting in missing structural information and factual inconsistencies. To this end, a judgment document summarization method combining LLMs and dynamic prompts, named DPCM (Dynamic Prompt Correction Method), was proposed. Firstly, an LLM was used for one-shot learning to generate a judgment document summary. Secondly, the high-dimensional similarity between the original text and the summary was calculated to detect possible missing-structure or factual-inconsistency problems in the summary. If a problem was found, the erroneous summary was concatenated with the original text, corrective prompt words were added, and one-shot learning was performed again to generate a corrected summary, followed by another similarity test. If the problem persisted, the generation and detection process was repeated. Finally, through this iterative process, the prompt words were adjusted dynamically to optimize the generated summary gradually. Experimental results on the CAIL2020 public judicial summarization dataset show that, compared with Least-To-Most Prompting, Zero-Shot Reasoners, Self_Consistency_Cot and other methods, the proposed method achieves improvements in the Rouge-1, Rouge-2, Rouge-L, BERTScore and FactCC (Factual Consistency) indicators.
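A minimal sketch of the generate-detect-correct loop described above. The `generate_summary` and `embed` hooks are hypothetical stand-ins for the one-shot LLM call and the sentence-embedding model, neither of which the abstract names; the similarity threshold and round limit are likewise illustrative.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical sentence-embedding hook; replace with a real encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=128)  # placeholder vector

def generate_summary(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real one-shot prompt."""
    return prompt[-200:]  # placeholder

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def dpcm(document: str, threshold: float = 0.8, max_rounds: int = 5) -> str:
    """Generate-detect-correct loop: regenerate with an augmented prompt
    until the summary is similar enough to the source or rounds run out."""
    prompt = f"Summarize the judgment document:\n{document}"
    summary = generate_summary(prompt)
    for _ in range(max_rounds):
        if cosine(embed(document), embed(summary)) >= threshold:
            break  # no missing structure / factual inconsistency detected
        # splice the faulty summary with the source and add corrective prompt words
        prompt = (f"The draft summary below misses structure or facts.\n"
                  f"Draft: {summary}\nSource: {document}\nRewrite it faithfully:")
        summary = generate_summary(prompt)
    return summary
```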
Aiming at the problems of conducting reasonable trust evaluation of vehicles and ensuring timely updates of consistent trust values among multiple RoadSide Units (RSUs) in the Internet of Vehicles (IoV), a trust management scheme for IoV based on blockchain and multi-attribute decision making, named BCIoVTrust (BlockChain IoV Trust), was proposed on the basis of existing IoV trust management schemes. Firstly, the comprehensive trust value and the malicious-probability indicator of a vehicle were calculated from attribute values and dynamic attribute weights. Secondly, a reward and punishment mechanism was introduced to reduce the time that malicious vehicles stay in the IoV. Finally, a hybrid consensus mechanism was used to dynamically change the block generation difficulty of miner nodes by taking the sum of the absolute values of the vehicles' trust values as the stake. Experimental results show that the scheme can calculate vehicle trust values more comprehensively and accurately, identify and remove malicious vehicles, and update the trust values stored on the blocks faster, thereby effectively solving the cold-start problem, dynamically adjusting the rate of trust decay, reasonably selecting the optimal recommendation nodes, and preventing malicious vehicles from colluding.
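A minimal sketch of the trust computation step, assuming the comprehensive trust value is a weighted sum of attribute values under normalized dynamic weights; the attribute names, weight values and the malicious-probability rule are illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np

def comprehensive_trust(attr_values: np.ndarray, weights: np.ndarray) -> float:
    """Comprehensive trust value as a weighted sum of attribute values;
    weights are normalized so the result stays in the attribute value range."""
    w = weights / weights.sum()
    return float(w @ attr_values)

def malicious_probability(history: list) -> float:
    """Illustrative malicious-probability indicator: the share of past
    interaction trust values that fell below 0.5."""
    bad = sum(1 for t in history if t < 0.5)
    return bad / max(len(history), 1)

# Example: three attributes (e.g. message accuracy, timeliness, interaction
# frequency -- names assumed for illustration) with dynamic weights.
values = np.array([0.9, 0.7, 0.8])
weights = np.array([0.5, 0.2, 0.3])   # updated per round in the real scheme
print(comprehensive_trust(values, weights))   # 0.83
print(malicious_probability([0.8, 0.4, 0.9]))  # 1/3
```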
Addressing the issues of attribute privacy leakage and insufficient scalability in existing blockchain multi-domain access control models, a Cross-Chain based Multi-Domain Access Control Model (CC-MDACM) was proposed. Firstly, based on Attribute-Based Access Control (ABAC) and relay chain technology, a cross-blockchain multi-domain access control model was constructed, enabling autonomous authorization within domains and fine-grained access control across heterogeneous blockchains through the relay chain between domains. Secondly, by combining a threshold homomorphic encryption algorithm based on SM2 with zero-knowledge proof technology, a cross-blockchain multi-domain access control scheme with dual concealment of attributes and policies as well as scalability was proposed. This scheme allowed data to be verified and decrypted by distributed nodes on the relay chain and facilitated access control decisions in the ciphertext state. Attributes and policies were protected through dual concealment, and access control policies were dynamically extended. Additionally, Raft consensus was adopted to ensure the reliability of decryption. Finally, the proposed scheme was analyzed through security theoretical analysis and simulation experiments. The results demonstrate that, while ensuring dual concealment of attributes and policies and supporting dynamic expansion of access policies, the proposed scheme effectively resolves the multi-domain access control problem across heterogeneous blockchains. Compared with the Distributed Two-Trapdoor Public-Key Cryptosystem (DT-PKC), the encryption and decryption efficiencies of the proposed scheme are improved by 34.4% and 44.9%, respectively.
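The SM2-based threshold homomorphic encryption and zero-knowledge proofs are beyond a short sketch, but the underlying ABAC decision that CC-MDACM evaluates in the ciphertext state reduces to a plaintext attribute-policy match like the following; the attribute and policy contents are illustrative.

```python
# Plaintext form of the ABAC decision that CC-MDACM performs over ciphertexts;
# attribute and policy names are illustrative, not taken from the paper.
def abac_decide(attributes: dict, policy: dict) -> bool:
    """Grant access only if every attribute required by the policy matches."""
    return all(attributes.get(k) == v for k, v in policy.items())

user_attrs = {"domain": "chainA", "role": "auditor", "level": 3}
policy = {"role": "auditor", "level": 3}
print(abac_decide(user_attrs, policy))  # True
```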
With the escalating severity of cybersecurity threats, Distributed Denial of Service (DDoS) attacks remain a persistent challenge in network security research. Traditional DDoS protection solutions usually rely on centralized architectures, which suffer from problems such as single points of failure and data tampering, and struggle to cope with complex and diverse attack scenarios. Blockchain technology, with its decentralization, immutability and transparency, provides a new solution for DDoS protection. In view of the technical challenges in DDoS protection, the progress of blockchain-based DDoS protection was summarized. Firstly, the basic concepts of DDoS attacks and their threats to environments such as traditional networks, the Internet of Things (IoT) and Software Defined Networking (SDN) were introduced, and the necessity and potential advantages of introducing blockchain technology were analyzed. Secondly, existing DDoS protection mechanisms were reviewed and compared from the aspects of blockchain combined with smart contracts, deep learning, cross-domain collaboration, and so on. Finally, considering the technical difficulties in blockchain performance optimization, multi-domain collaboration, and real-time response, future development directions of blockchain-based DDoS protection technology were discussed, providing theoretical references for researchers in the field of cybersecurity and further promoting the practical application of blockchain in DDoS protection.
Data entities stored across large-scale decentralized institutions suffer from data redundancy, missing information, and inconsistency, and therefore require integration through entity alignment. Most existing entity alignment methods rely on the structural information of entities and perform alignment through subgraph matching. However, the lack of structural information in decentralized data storage leads to poor alignment results. To address this issue and support the identification of important data, an attribute-based entity alignment model built on a single-layer graph neural network was proposed. Firstly, a single-layer graph neural network was utilized to avoid interference from secondary-neighbor information. Secondly, an attribute weighting method based on information entropy was designed to quickly distinguish the importance of attributes in the initial stage. Finally, an attention-based encoder was constructed to represent the importance of different attributes in alignment from both local and global perspectives, thereby providing a more comprehensive representation of entity information. Experimental results indicate that on two decentralized storage datasets, the proposed model improves Hits@1 by 5.24 and 2.03 percentage points, respectively, compared with the second-best models, demonstrating superior alignment performance over other entity alignment methods.
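A minimal sketch of the information-entropy attribute weighting step: each attribute's importance is scored by the Shannon entropy of its value distribution and normalized into a weight. Favoring high-entropy (more discriminative) attributes, as done here, is an assumption; the abstract does not state the direction.

```python
import math
from collections import Counter

def attribute_entropy(values: list) -> float:
    """Shannon entropy of an attribute's value distribution."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def entropy_weights(attr_columns: dict) -> dict:
    """Normalize per-attribute entropies into weights; here higher-entropy
    attributes receive larger weights (an assumed direction)."""
    ent = {a: attribute_entropy(v) for a, v in attr_columns.items()}
    total = sum(ent.values()) or 1.0
    return {a: e / total for a, e in ent.items()}

cols = {"name": ["Alice", "Bob", "Carol"], "country": ["CN", "CN", "CN"]}
print(entropy_weights(cols))  # 'name' dominates: identical values carry no information
```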
In order to defend against existing attacks on artificial intelligence algorithms (especially artificial neural networks) as much as possible while reducing additional costs, the rattan algorithm based on example preprocessing was proposed. By cropping unimportant regions of the image, normalizing neighboring pixel values, and scaling the image, examples were preprocessed to destroy adversarial perturbations and generate new examples that pose less threat to the model, maintaining high recognition accuracy of the model. Experimental results show that the rattan algorithm can defend against some adversarial attacks on the MNIST and CIFAR10 datasets and on neural network models such as squeezenet1_1, mnasnet1_3 and mobilenet_v3_large with less overhead than similar algorithms, and the minimum example accuracy after defense reaches 88.50%; meanwhile, it does not reduce the example accuracy too much when processing clean examples, and its defense effect and defense cost are better than those of comparison algorithms such as Fast Gradient Sign Method (FGSM) and Momentum Iterative Method (MIM).
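A minimal numpy analogue of the three preprocessing steps for a single grayscale image; the crop margin, the 4-neighborhood averaging, and the pooling factor are illustrative parameters, not values from the paper.

```python
import numpy as np

def rattan_preprocess(img: np.ndarray, margin: int = 2, scale: int = 2) -> np.ndarray:
    """Crop border regions, average-normalize neighboring pixels, then
    downscale -- a minimal analogue of the three steps in the abstract."""
    # 1) cut unimportant border information
    img = img[margin:-margin, margin:-margin]
    # 2) normalize each pixel with its 4-neighborhood mean (blurs perturbations)
    padded = np.pad(img.astype(float), 1, mode="edge")
    img = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:] + padded[1:-1, 1:-1]) / 5.0
    # 3) scale down by average pooling
    h, w = (img.shape[0] // scale) * scale, (img.shape[1] // scale) * scale
    img = img[:h, :w].reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    return img

x = np.random.rand(28, 28)           # e.g. an MNIST-sized image
print(rattan_preprocess(x).shape)    # (12, 12)
```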
To help an autonomous vehicle plan a safe, comfortable and efficient driving trajectory, a trajectory planning approach based on model predictive control was proposed. First, to simplify the planning environment, a safe and feasible “three-circle” expansion of the safety zone was introduced, which also eliminated the collision issues caused by idealized vehicle models. Then, trajectory planning was decoupled into the lateral and longitudinal spaces: a model prediction method was applied for lateral planning to generate a series of candidate trajectories that met the driving requirements, and a dynamic programming approach was utilized for longitudinal planning, which improved the efficiency of the planning process. Finally, the factors affecting the selection of optimal trajectories were considered comprehensively, and an optimal trajectory evaluation function was proposed to make path planning and speed planning more compatible with the driving requirements. The effectiveness of the proposed algorithm was verified by joint simulation with Matlab/Simulink, Prescan and Carsim. Experimental results indicate that the vehicle achieves the expected effects in terms of comfort metrics, steering wheel angle variation and localization accuracy, and the planned curve closely matches the tracked curve, validating the advantages of the proposed algorithm.
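A minimal sketch of the "three-circle" expansion: each vehicle is covered by three circles placed along its longitudinal axis, so collision checking between two vehicles reduces to nine circle-distance tests. The circle radius and spacing below are illustrative.

```python
import math

def three_circles(x, y, heading, length=4.8, radius=1.3):
    """Cover a vehicle with three circles placed along its longitudinal axis."""
    step = length / 3.0
    return [(x + k * step * math.cos(heading),
             y + k * step * math.sin(heading), radius) for k in (-1, 0, 1)]

def collides(ego, obstacle) -> bool:
    """Two vehicles collide if any pair of their covering circles overlaps."""
    for (x1, y1, r1) in three_circles(*ego):
        for (x2, y2, r2) in three_circles(*obstacle):
            if math.hypot(x1 - x2, y1 - y2) < r1 + r2:
                return True
    return False

print(collides((0.0, 0.0, 0.0), (3.0, 1.0, 0.0)))   # True: too close
print(collides((0.0, 0.0, 0.0), (12.0, 4.0, 0.0)))  # False: safe gap
```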
In order to solve the high time complexity of adversarial example detection based on Local Intrinsic Dimensionality (LID), an adversarial example detection algorithm based on quantum LID was proposed by exploiting the advantages of quantum computing. First, the SWAP-Test quantum algorithm was used to calculate the similarity between the example under test and all examples in a single pass, avoiding the redundant calculation of the classical algorithm. Then, the Quantum Phase Estimation (QPE) algorithm was combined with the quantum Grover search algorithm to calculate the local intrinsic dimensionality of the example under test. Finally, LID was used as the evaluation basis of a binary detector to detect and distinguish adversarial examples. The detection algorithm was tested and verified on the IRIS, MNIST, and stock time series datasets. Simulation results show that the calculated LID values highlight the difference between adversarial and normal examples and can serve as a detection basis for distinguishing example attributes. Theoretical analysis proves that the time complexity of the proposed detection algorithm is of the same order of magnitude as the product of the number of Grover operator iterations and the square root of the number of neighboring examples and the number of training examples, which is clearly better than that of the LID-based adversarial example detection algorithm and achieves exponential acceleration.
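For reference, the classical quantity the quantum pipeline accelerates is the LID estimate itself; a common maximum-likelihood form from the k nearest-neighbor distances is sketched below (the quantum version replaces the distance computation and minimum search with SWAP-Test, QPE and Grover search).

```python
import numpy as np

def lid_mle(x: np.ndarray, data: np.ndarray, k: int = 20) -> float:
    """Maximum-likelihood LID estimate from the k nearest-neighbor distances:
    LID(x) = -( (1/k) * sum_i ln(r_i / r_k) )^(-1)."""
    dists = np.sort(np.linalg.norm(data - x, axis=1))
    r = dists[dists > 0][:k]               # drop x itself if it is in data
    return -1.0 / np.mean(np.log(r / r[-1]))

# Adversarial examples tend to have higher LID than clean ones, which is
# what the binary detector thresholds on.
rng = np.random.default_rng(0)
clean = rng.normal(size=(500, 2))          # data near a low-dimensional manifold
print(lid_mle(clean[0], clean))
```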
The traceability data of a charity system stored on a single chain cause huge storage pressure, and the need to share charity data may lead to privacy leakage. Therefore, a master-slave chain oriented traceability storage model for charity systems was proposed. Firstly, a master chain and several slave chains were designed in the model: the master chain was mainly responsible for querying charity traceability data and supervising the slave chains, while the slave chains were responsible for storing the large volume of charity traceability data. Then, a smart contract for classifying charity traceability data was designed to divide charity data into public data and private data according to privacy requirements. The public data were stored directly on the master chain, while the private data were encrypted with Ciphertext-Policy Attribute-Based Encryption (CP-ABE) and stored on the slave chains, which ensured data privacy and achieved storage scalability and intelligence. Finally, the Merkle tree storage structure was improved: by designing a smart contract to mark duplicate data, identical-block detection and duplicate-data deletion in the blockchain system were completed, which avoided data redundancy and reduced storage consumption. Experimental results show that, compared with the single-chain model, as the total amount of data increases, the response time of the master-slave chain model stabilizes at 0.53 s and the throughput stabilizes at 149 B. It can be seen that the master-slave chain model improves search efficiency, optimizes storage space, and realizes data privacy protection.
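A minimal sketch of the deduplicated Merkle-style storage: a Merkle root over the traceability records plus a duplicate-marking pass analogous to the smart contract described above. SHA-256 is an illustrative hash choice.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Fold leaf hashes pairwise up to a single root (duplicating the last
    node when a level has odd size)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def mark_duplicates(records: list) -> list:
    """Analogue of the duplicate-marking contract: flag records whose hash
    has been seen before, so the chain stores each payload only once."""
    seen, flags = set(), []
    for rec in records:
        d = h(rec)
        flags.append(d in seen)
        seen.add(d)
    return flags

data = [b"donation#1", b"donation#2", b"donation#1"]
print(mark_duplicates(data))              # [False, False, True]
print(merkle_root(data).hex()[:16])       # root over all traceability records
```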
In response to the issues of low decentralization, poor scalability, and high resource consumption in current blockchain cross-chain identity authentication schemes, a Cross-chain Identity Authentication scheme based on Certificate-Less SignCryption (CIA-CLSC) was proposed. Firstly, Certificate-Less SignCryption (CLSC) was utilized to generate keys for cross-chain entities, encrypt communication, and perform identity authentication. Secondly, secret sharing was employed for key management in the distributed system. Finally, decentralized identities were used to establish the association between entity keys and cross-chain identities. Under the premise of ensuring identity privacy and security, CIA-CLSC achieved cross-chain interactive identity authentication among different blockchain systems. Theoretical analysis and experimental results demonstrate that CIA-CLSC does not rely on centralized certificate authorities or third-party key management organizations, ensuring decentralization, and that the digital identities generated by CIA-CLSC comply with the World Wide Web Consortium (W3C) standards, ensuring scalability. Furthermore, compared with the combination of ECC (Elliptic Curve Cryptography) and AES (Advanced Encryption Standard), CIA-CLSC reduces the time overhead by approximately 34%; compared with the combination of RSA (Rivest-Shamir-Adleman algorithm) and AES, it reduces the time overhead by approximately 38% while maintaining decentralization for cross-chain interactive identity authentication. It can be seen that CIA-CLSC can effectively enhance the decentralization, scalability, and interaction efficiency of cross-chain systems in practical applications.
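A minimal sketch of the secret-sharing step used for distributed key management, using Shamir's (t, n) scheme over a prime field; the field size and parameters are illustrative, and the abstract does not specify which secret-sharing construction is used.

```python
import random

P = 2**127 - 1  # a Mersenne prime, large enough for illustrative keys

def split(secret: int, n: int, t: int) -> list:
    """Shamir (t, n) sharing: the secret is the constant term of a random
    degree-(t-1) polynomial; each share is one evaluation point."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list) -> int:
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(secret=123456789, n=5, t=3)
print(reconstruct(shares[:3]))  # 123456789
```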
In view of the current problems of low authentication efficiency, insufficient security and poor scalability in cross-chain identity management, a cross-chain identity management scheme based on Identity-Based Proxy Re-Encryption (IBPRE) was proposed. Firstly, an identity chain was built by incorporating Decentralized IDentifiers (DIDs): DIDs were provided to users as cross-chain identity identifiers and verifiable credentials were provided as access credentials, so as to build an access control policy based on credential information. Secondly, the relay chain was combined with a cryptographic accumulator to achieve user identity authentication. Finally, by combining IBPRE with a signature algorithm, a cross-chain communication model based on IBPRE was constructed. Experimental analysis and evaluation results show that compared with RSA (Rivest-Shamir-Adleman algorithm) and the Elliptic Curve Cryptosystem (ECC), the proposed scheme reduces the authentication time by 66.9% and 4.8% respectively. It can be seen that the relay chain and identity chain can realize identity management, improve decentralization and scalability, support cross-chain communication models and credential-based access policies, and ensure security in cross-chain identity management.
In order to address the issues of the Delegated Proof of Stake (DPoS) algorithm, such as the growing centralization trend caused by high-weight nodes having a higher probability of obtaining accounting rights, low voting enthusiasm among nodes, and collusion attacks caused by node corruption, a DPoS consensus algorithm based on reputation values and a strong blind signature algorithm was proposed. Firstly, the nodes were divided into two types based on initial conditions, and an initial selection was carried out to determine the proxy nodes. Secondly, the proxy nodes voted for each other, and the top 21 nodes were selected to form the witness node set based on the average historical reputation value and the final number of votes, while the remaining nodes formed the standby witness node set. During the voting process, an ElGamal-based strong blind signature algorithm was employed to ensure the privacy of voting nodes. Finally, the consensus process was completed after the witness nodes produced blocks. Experimental results demonstrate that compared with the original DPoS consensus algorithm, the proposed algorithm increases the proportion of active nodes by approximately 20 percentage points and reduces the proportion of malicious nodes to nearly zero. It can be seen that the proposed algorithm enhances the voting enthusiasm of nodes and protects their privacy information.
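A minimal sketch of the witness selection step: proxy nodes are ranked by the average of their historical reputation values and their final vote counts, and the top 21 become witnesses. How the two criteria are combined (here lexicographically) is an assumption.

```python
def select_witnesses(proxies: list, k: int = 21):
    """Rank proxy nodes by (average historical reputation, final vote count);
    the top k form the witness set, the rest stand by. The lexicographic
    combination of the two criteria is an assumed detail."""
    def score(node):
        avg_rep = sum(node["reputation_history"]) / len(node["reputation_history"])
        return (avg_rep, node["votes"])
    ranked = sorted(proxies, key=score, reverse=True)
    return ranked[:k], ranked[k:]

proxies = [{"id": i, "votes": i % 7, "reputation_history": [0.5 + 0.01 * i]}
           for i in range(40)]
witnesses, standby = select_witnesses(proxies)
print(len(witnesses), len(standby))  # 21 19
```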
Aiming at the problems that Identity-Based Linkable Ring Signature (IBLRS) schemes have excessive overhead and do not meet the requirements of technical autonomy, a Linkable Ring Signature (LRS) scheme based on the SM9 algorithm was proposed. Firstly, the identifier of the signer in the ring was sent to the Key Generation Center (KGC) to generate the corresponding private key; this private key generation method is consistent with that of the SM9 algorithm. Secondly, the private key was combined with the SM9 algorithm to generate a signature. Finally, the signer's private key and the event identifier were bound to construct a linkable label without complex calculation, which improved the efficiency of the proposed scheme. Under the random oracle model, the proposed scheme was proved to have correctness, unforgeability, unconditional anonymity and linkability. Meanwhile, a multi-notary cross-chain scheme was designed on the basis of the proposed scheme to achieve efficient and secure cross-chain interaction. Compared with the IBLRS algorithm, the proposed scheme requires only 4 bilinear pairing operations, reducing the computational overhead and communication overhead by 39.06% and 51.61% respectively. Performance analysis shows that the proposed scheme reduces computational and communication overhead, and satisfies the autonomous controllability requirement of the technology.
The grayscale histogram of a grayscale image may exhibit non-modal, unimodal, bimodal, or multi-modal morphological characteristics, but most traditional entropy thresholding methods are only suitable for grayscale images with unimodal or bimodal characteristics. To improve the segmentation accuracy and adaptability of entropy thresholding methods, an automatic thresholding method guided by maximizing four-directional weighted Shannon entropy, namely FWSE (Four-directional Weighted Shannon Entropy), was proposed. Firstly, a series of Multi-scale Product Transformation (MPT) images was obtained by performing MPTs with directional Prewitt convolution kernels in four directions. Secondly, the optimal MPT image in each direction was computed automatically based on the cubic spline interpolation function and the curvature maximization criterion. Thirdly, the pixels on each optimal MPT image were resampled using inner and outer contour images to reconstruct the grayscale histogram, and the corresponding Shannon entropy was calculated on this basis. Finally, the optimal segmentation threshold was selected according to the criterion of maximizing the weighted Shannon entropy over the four directions. The FWSE method was compared with three recent thresholding methods and two recent non-thresholding methods on 4 synthetic images and 100 real-world images. Experimental results show that on the synthetic images, the average Matthews Correlation Coefficient (MCC) of the FWSE method reaches 0.999, and on the real-world images, the average MCCs of the FWSE method and the other five segmentation methods are 0.974, 0.927, 0.668, 0.595, 0.550, and 0.525 respectively. It can be seen that the FWSE method has higher segmentation accuracy and more flexible segmentation adaptability.
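For orientation, the basic maximum-Shannon-entropy criterion that FWSE builds on (the Kapur-style single-histogram form) is sketched below; the four-directional MPT images, contour resampling and directional weighting are the paper's extensions and are omitted here.

```python
import numpy as np

def max_entropy_threshold(hist: np.ndarray) -> int:
    """Kapur-style criterion: choose the threshold t that maximizes the sum
    of the Shannon entropies of the two classes the histogram is split into."""
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p)):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        h = 0.0
        for cls, w in ((p[:t], w0), (p[t:], w1)):
            q = cls[cls > 0] / w
            h -= np.sum(q * np.log(q))
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Bimodal toy histogram with peaks near bins 60 and 180.
bins = np.arange(256)
hist = np.exp(-((bins - 60) ** 2) / 200) + np.exp(-((bins - 180) ** 2) / 800)
print(max_entropy_threshold(hist))  # typically lands between the two peaks
```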
At present, network densification accelerates the degradation of channel transmission performance, and widely used evaluation methods face significant challenges in evaluating this performance owing to their limited parameter coverage and constrained applicability. In response to these difficulties, a multi-parameter channel transmission performance evaluation method based on an improved Transmission Control Protocol/Internet Protocol (TCP/IP) frame structure was proposed. Firstly, standardized test data were generated, including pseudo-random codes, basic curve data, and custom curve data, ensuring that the test data follow a uniform standard. Secondly, an improved TCP/IP frame structure was employed to package test data information, including the total frame quantity and frame sequences, into TCP/IP frames. In this way, the sending, receiving and parsing of test data were realized, and statistics on basic channel transmission variables were collected, such as the number of frames by type, the number of frames by length, the total number of frames, and the volume of effective data. Finally, the received data were analyzed to obtain two types of high-level channel transmission information, namely the frame error rate and the bit error rate, completing the overall evaluation of channel transmission performance. The designed method employs six parameters to evaluate channel quality, achieves an evaluation precision of 0.01%, maintains a minimum error margin of 0.01%, and is compatible with all communication channels using TCP/IP. Experimental results demonstrate that the proposed method can collect statistics on and analyze the six types of channel communication information and evaluate channel transmission performance accurately.
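A minimal sketch of the final analysis step: computing the frame error rate and bit error rate from the sent and received frames; the frame payloads are illustrative.

```python
def frame_error_rate(sent: list, received: list) -> float:
    """Fraction of frames that arrived corrupted (or were lost)."""
    errors = sum(1 for s, r in zip(sent, received) if s != r)
    errors += max(len(sent) - len(received), 0)  # count lost frames as errors
    return errors / len(sent)

def bit_error_rate(sent: bytes, received: bytes) -> float:
    """Fraction of flipped bits between sent and received payloads."""
    flipped = sum(bin(s ^ r).count("1") for s, r in zip(sent, received))
    return flipped / (8 * len(sent))

tx = [b"\x55" * 64, b"\xaa" * 64]                  # illustrative test frames
rx = [b"\x55" * 64, b"\xab" + b"\xaa" * 63]        # one corrupted byte
print(frame_error_rate(tx, rx))                    # 0.5
print(bit_error_rate(b"".join(tx), b"".join(rx)))  # ~0.001 (1 bit of 1024)
```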
K-Means algorithms typically use the Euclidean distance to calculate the similarity between data points when dealing with large-scale heterogeneous data, which suffers from low efficiency and high computational complexity. Inspired by the significant advantage of the Hamming distance in similarity calculation, a Quantum K-Means Hamming (QKMH) algorithm was proposed. First, the data were prepared and encoded into quantum states, and the quantum Hamming distance was used to calculate the similarity between the points to be clustered and the K cluster centers. Then, Grover's minimum search algorithm was improved to find the cluster center closest to each point to be clustered. Finally, these steps were repeated until the designated number of iterations was reached or the cluster centers no longer changed. Based on the quantum simulation computing framework Qiskit, the proposed algorithm was validated on the MNIST handwritten digit dataset and compared with various traditional and improved methods. Experimental results show that the F1 score of the QKMH algorithm is improved by 10 percentage points compared with that of the Manhattan distance-based quantum K-Means algorithm and by 4.6 percentage points compared with that of the latest optimized Euclidean distance-based quantum K-Means algorithm, and the time complexity of the QKMH algorithm is lower than those of the above comparison algorithms.
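A classical analogue of the QKMH assignment step: each binarized point is assigned to the cluster center with minimum Hamming distance, which is the operation the quantum algorithm performs with the quantum Hamming distance and Grover minimum search.

```python
import numpy as np

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Hamming distance between two binary vectors."""
    return int(np.count_nonzero(a != b))

def assign(points: np.ndarray, centers: np.ndarray) -> np.ndarray:
    """K-Means assignment step under Hamming distance: each binarized point
    goes to the nearest of the K cluster centers."""
    return np.array([np.argmin([hamming(p, c) for c in centers]) for p in points])

rng = np.random.default_rng(1)
pts = rng.integers(0, 2, size=(6, 16))     # e.g. binarized image features
ctrs = rng.integers(0, 2, size=(2, 16))    # K = 2 centers
print(assign(pts, ctrs))                   # cluster index per point
```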
Convolutional Neural Networks (CNNs) have been successfully used to classify the dynasties of ancient Dunhuang murals. Aiming at the problem that, with the limited amount of Dunhuang mural data, expanding the training set with some data augmentation methods would reduce prediction accuracy, a Residual Network (ResNet) model based on an attention mechanism and transfer learning was proposed. Firstly, the residual connection method of the residual network was improved. Then, the POlarized Self-Attention (POSA) module was used to help the network model extract edge local detail features and global contour features of the images, enhancing the learning ability of the network model in a small-sample environment. Finally, the classifier algorithm was improved to increase the classification performance of the network model. Experimental results show that the proposed model achieves 98.05% dynasty classification accuracy on the DH1926 small-sample dataset of Dunhuang murals, an improvement of 5.21 percentage points over the standard ResNet20 network model.
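A minimal transfer-learning sketch in the spirit of the model: a pretrained backbone is frozen and only a new classifier head is trained on the small mural dataset. The torchvision ResNet-18 below is a stand-in (torchvision ships no ResNet20, and the paper's POSA modules and improved residual connections are omitted); the class count is illustrative, and a recent torchvision (>= 0.13) is assumed for the weights API.

```python
import torch
import torch.nn as nn
from torchvision import models

num_dynasties = 4   # illustrative; set to the actual number of dynasty classes
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # ImageNet weights
for p in model.parameters():
    p.requires_grad = False                      # freeze the transferred features
model.fc = nn.Linear(model.fc.in_features, num_dynasties)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)                  # dummy batch of mural crops
loss = criterion(model(x), torch.randint(0, num_dynasties, (8,)))
loss.backward()
optimizer.step()
```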
At present, social media platforms have become the main channels for people to publish and obtain information, but the convenience of information publishing may lead to the rapid spread of rumors, so verifying whether a piece of information is a rumor and stopping the spread of rumors has become an urgent problem. Previous studies have shown that people's stances on information can help determine whether the information is a rumor. Aiming at the problem of rumor spread, a Joint Stance Process Multi-Task Rumor Verification Model (JSP-MRVM) was proposed on the basis of this finding. Firstly, three propagation processes of information were represented by a topology graph, a feature graph and a common Graph Convolutional Network (GCN) respectively. Then, an attention mechanism was used to obtain the stance features of the information and fuse them with the tweet features. Finally, a multi-task objective function was designed so that the stance classification task better assists rumor verification. Experimental results show that the accuracy and Macro-F1 of the proposed model on the RumorEval dataset are improved by 10.7 percentage points and 11.2 percentage points respectively over the baseline model RV-ML (Rumor Verification scheme based on Multitask Learning model), verifying that the proposed model is effective and can help reduce the spread of rumors.
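A minimal sketch of the multi-task objective: a weighted sum of the stance classification loss and the rumor verification loss, so that stance supervision assists the rumor task; the weighting scheme and class counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class JointObjective(nn.Module):
    """Weighted sum of the stance loss and the rumor verification loss;
    alpha is an assumed trade-off weight, not a value from the paper."""
    def __init__(self, alpha: float = 0.3):
        super().__init__()
        self.alpha = alpha
        self.ce = nn.CrossEntropyLoss()

    def forward(self, stance_logits, stance_y, rumor_logits, rumor_y):
        return (self.alpha * self.ce(stance_logits, stance_y) +
                (1 - self.alpha) * self.ce(rumor_logits, rumor_y))

loss_fn = JointObjective()
stance_logits = torch.randn(4, 4, requires_grad=True)  # e.g. support/deny/query/comment
rumor_logits = torch.randn(4, 2, requires_grad=True)   # rumor / non-rumor
loss = loss_fn(stance_logits, torch.randint(0, 4, (4,)),
               rumor_logits, torch.randint(0, 2, (4,)))
loss.backward()
```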
In order to solve the problems of the ReliefF feature selection algorithm, such as poor stability and low classification accuracy of the selected feature subsets caused by using the Euclidean distance to select the nearest neighbor samples, an MICReliefF (Maximum Information Coefficient-ReliefF) algorithm based on the Maximum Information Coefficient (MIC) was proposed. Furthermore, the classification accuracy of a Support Vector Machine (SVM) model was used as the evaluation index, and the optimal feature subset was determined automatically through multiple optimizations, thereby realizing interactive optimization between the MICReliefF algorithm and the classification model, namely the MICReliefF-SVM automatic feature selection algorithm. The performance of the MICReliefF-SVM algorithm was verified on several UCI public datasets. Experimental results show that the MICReliefF-SVM automatic feature selection algorithm can not only filter out more redundant features, but also select feature subsets with good stability and generalization ability. Compared with Random Forest (RF), max-Relevance and Min-Redundancy (mRMR), Correlation-based Feature Selection (CFS) and other classical feature selection algorithms, the MICReliefF algorithm achieves higher classification accuracy.
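A minimal two-class ReliefF sketch with a pluggable neighbor-similarity hook: the default is negative Euclidean distance, and MICReliefF would plug an MIC estimator in at that point (an external MIC library would be needed, so it is left as a hook).

```python
import numpy as np

def relieff(X, y, n_neighbors=5, similarity=None):
    """Simplified two-class ReliefF: for each sample, pull feature weights
    down by hit differences and up by miss differences. `similarity` is the
    hook where MICReliefF substitutes MIC for (negative) Euclidean distance."""
    if similarity is None:
        similarity = lambda a, b: -np.linalg.norm(a - b)
    n, d = X.shape
    w = np.zeros(d)
    span = X.max(axis=0) - X.min(axis=0) + 1e-12
    for i in range(n):
        same = [j for j in range(n) if j != i and y[j] == y[i]]
        diff = [j for j in range(n) if y[j] != y[i]]
        hits = sorted(same, key=lambda j: similarity(X[i], X[j]), reverse=True)[:n_neighbors]
        misses = sorted(diff, key=lambda j: similarity(X[i], X[j]), reverse=True)[:n_neighbors]
        for j in hits:
            w -= np.abs(X[i] - X[j]) / span / (n * n_neighbors)
        for j in misses:
            w += np.abs(X[i] - X[j]) / span / (n * n_neighbors)
    return w

rng = np.random.default_rng(0)
X = rng.random((60, 4)); X[:, 0] = np.repeat([0.0, 1.0], 30)  # feature 0 is informative
y = np.repeat([0, 1], 30)
print(relieff(X, y).round(3))   # feature 0 gets the largest weight
```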
The key to cross-modal image-text retrieval is capturing the semantic correlation between images and text effectively. Most existing methods learn the global semantic correlation between image region features and text features or the local semantic correlation between inter-modality objects, while ignoring the correlation between intra-modality object relationships and inter-modality object relationships. To solve this problem, a Cross-Modal Tensor Fusion Network based on Semantic Relation Graph (CMTFN-SRG) method for image-text retrieval was proposed. Firstly, the relationships of image regions and of text words were generated by a Graph Convolutional Network (GCN) and a Bidirectional Gated Recurrent Unit (Bi-GRU) respectively. Then, the fine-grained semantic correlation between the data of the two modalities was learned by a tensor fusion network that matches the learned semantic relation graph of image regions with the graph of text words. Meanwhile, a Gated Recurrent Unit (GRU) was used to learn the global features of the image, and the global features of the image and the text were matched to capture the inter-modality global semantic correlation. The proposed method was compared with the Multi-Modality Cross Attention (MMCA) method on the benchmark datasets Flickr30K and MS-COCO. Experimental results show that the proposed method improves the Recall@1 of the text-to-image retrieval task by 2.6%, 9.0% and 4.1% on the Flickr30K, MS-COCO1K and MS-COCO5K test datasets respectively, and improves the mean Recall (mR) by 0.4, 1.3 and 0.1 percentage points respectively. It can be seen that the proposed method can effectively improve the precision of image-text retrieval.
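A minimal sketch of the tensor fusion idea: the outer product of an image-side and a text-side embedding captures their pairwise interactions, and a linear layer maps the flattened tensor to a matching score; the dimensions and scoring head are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TensorFusion(nn.Module):
    """Minimal bilinear tensor fusion: the outer product of the image-graph
    and text-graph embeddings captures pairwise cross-modal interactions,
    then a linear layer maps the flattened tensor to a matching score."""
    def __init__(self, d_img=64, d_txt=64):
        super().__init__()
        self.score = nn.Linear(d_img * d_txt, 1)

    def forward(self, img_vec, txt_vec):
        fused = torch.einsum("bi,bj->bij", img_vec, txt_vec)  # B x d_img x d_txt
        return self.score(fused.flatten(1)).squeeze(-1)

model = TensorFusion()
s = model(torch.randn(8, 64), torch.randn(8, 64))  # matching scores for 8 pairs
print(s.shape)  # torch.Size([8])
```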
Aiming at the problem that virtual-real registration accuracy and real-time performance in Augmented Reality (AR) are affected by image texture and uneven illumination, a method based on an improved ORB (Oriented FAST (Features from Accelerated Segment Test) and Rotated BRIEF (Binary Robust Independent Elementary Features)) algorithm was proposed. Firstly, dense regions of image feature points were optimized by setting number and distance thresholds, and a parallel algorithm was used to retain the N points with larger eigenvalues. Then, a discrete difference feature was adopted to enhance stability under uneven illumination changes, and the improved ORB was combined with the BOF (Bag-of-Features) model to realize quick retrieval of benchmark images. Finally, virtual-real registration was realized by using the homography between images. Comparative experiments among the proposed method and the original ORB, Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) algorithms were performed in terms of accuracy and efficiency: the proposed method reduced the registration time to about 40% of the original and achieved an accuracy of more than 95%. The experimental results show that the proposed method obtains better real-time performance and higher accuracy under different textures and uneven illumination.
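A sketch of the standard ORB registration pipeline that the method improves upon, using OpenCV: detect and describe, Hamming-match, then estimate the homography with RANSAC. The paper's dense-region thresholds, discrete difference feature and BOF retrieval are omitted; the feature count and match cutoff are illustrative.

```python
import cv2
import numpy as np

def register(benchmark: np.ndarray, frame: np.ndarray):
    """Baseline ORB pipeline: detect/describe keypoints, match binary
    descriptors by Hamming distance, estimate the homography with RANSAC."""
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(benchmark, None)
    kp2, des2 = orb.detectAndCompute(frame, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # warp virtual content into the frame with this homography

# Usage: H = register(cv2.imread("benchmark.png", 0), cv2.imread("frame.png", 0))
```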
To reduce the influence of unfair and malicious ratings on the trust evaluation of a commodity that has only a few ratings on an e-commerce platform, a trust bootstrapping method based on assessing rating credibility was presented. The credibility of a rating was obtained by evaluating the rater's ratings of other commodities, and was related to the number of ratings given by the rater, the rater's transaction amount, and the price of the rated commodity. The trust value of a commodity without ratings was derived from the shop to which the commodity belongs and the declared attributes of the commodity. The trust value of a commodity with sufficiently many highly credible ratings was determined by those ratings; otherwise, the trust value was determined partly by the ratings or handled as for a commodity without ratings. Calculation, analysis and experimental results show that the presented method, which evaluates the credibility of a rating through its rating network, has the smallest error compared with the conventional method and the k-means clustering method, and is not sensitive to the ratio of malicious ratings. This method can help users select reliable commodities at the initial sale stage on e-commerce platforms.
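A minimal sketch of the bootstrapping rule: ratings are weighted by credibility when enough credible ones exist; otherwise the trust value blends the rating evidence with the shop trust and declared-attribute trust. The thresholds and the blending rule are illustrative assumptions.

```python
def commodity_trust(ratings, shop_trust, attr_trust,
                    cred_threshold=0.7, min_credible=5):
    """Bootstrapped trust: use credibility-weighted ratings when enough
    credible ratings exist; otherwise blend with (or fall back to) the
    shop's trust and the commodity's declared-attribute trust."""
    credible = [(v, c) for v, c in ratings if c >= cred_threshold]
    cold_start = 0.5 * shop_trust + 0.5 * attr_trust  # no usable ratings
    if len(credible) >= min_credible:
        return sum(v * c for v, c in credible) / sum(c for _, c in credible)
    if credible:  # few credible ratings: treat them as partial evidence
        rated = sum(v * c for v, c in credible) / sum(c for _, c in credible)
        frac = len(credible) / min_credible
        return frac * rated + (1 - frac) * cold_start
    return cold_start

ratings = [(0.9, 0.8), (0.2, 0.3), (0.8, 0.9)]  # (rating value, credibility)
print(commodity_trust(ratings, shop_trust=0.7, attr_trust=0.6))  # ~0.73
```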