Calculation approach of privilege deduction in authorization management
WANG Ting, CHEN Xing-yuan, REN Zhi-yu
Journal of Computer Applications    2011, 31 (05): 1291-1294.   DOI: 10.3724/SP.J.1087.2011.01291
The privilege deduction relation makes authorization management easier, but it also makes the calculation of valid privileges more difficult. It is therefore important for authorization and access control to calculate deduction relations between privileges correctly and efficiently. Based on the resource hierarchy and the operation hierarchy, a rule of privilege deduction was given in this paper. Since privilege queries occur far more frequently than privilege updates, a new method was proposed that calculates deduction relations from the reachability matrix of the privilege deduction relation. The experimental results show that the new method is more efficient than calculating deduction relations directly.
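To illustrate the idea behind a reachability-matrix approach, the sketch below precomputes the transitive closure of a privilege deduction graph with Warshall's algorithm, so that each privilege query becomes a constant-time matrix lookup. The function name, privilege indexing, and example edges are hypothetical and not taken from the paper.

```python
# Minimal sketch (hypothetical data structures, not the paper's implementation) of
# a privilege-deduction reachability matrix: build it once, then answer
# "does privilege i imply privilege j?" as an O(1) lookup at query time.

def reachability_matrix(n, deduction_edges):
    """Transitive closure of the direct deduction relation.

    n               -- number of privileges, indexed 0..n-1
    deduction_edges -- iterable of (i, j) pairs: privilege i directly implies privilege j
    """
    reach = [[False] * n for _ in range(n)]
    for i in range(n):
        reach[i][i] = True          # every privilege implies itself
    for i, j in deduction_edges:
        reach[i][j] = True          # direct deductions from the resource/operation hierarchies
    # Warshall's algorithm: propagate indirect deductions
    for k in range(n):
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    return reach

# Example: privilege 0 implies 1, and 1 implies 2, so 0 implies 2 transitively.
matrix = reachability_matrix(3, [(0, 1), (1, 2)])
assert matrix[0][2]    # answered without re-traversing the hierarchies
```

Because queries dominate updates in the scenario described above, paying the one-off closure cost and answering each query from the matrix is the trade-off this kind of method exploits.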
Blind detection of image splicing based on image quality metrics and moment features
Zhen ZHANG, Jiquan KANG, Xijian PING, Yuan REN
Journal of Computer Applications   
Image splicing is a technique commonly used in image tampering. To achieve blind detection of image splicing, a new splicing detection scheme was proposed. Splicing detection was treated as a two-class pattern recognition problem, and the model was built on moment features and several Image Quality Metrics (IQMs) extracted from the given test image. This model can measure the statistical differences between original and spliced images. A kernel-based Support Vector Machine (SVM) was chosen as the classifier for training and testing on the given images. Experimental results demonstrate that the new splicing detection scheme achieves high accuracy and wide applicability.
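To make the two-class set-up concrete, the hedged sketch below trains a kernel SVM on pre-extracted feature vectors with scikit-learn. The placeholder arrays `X` and `y` stand in for the IQM and moment features and the original/spliced labels; they are not the paper's data, and the feature extraction itself is omitted.

```python
# Sketch of the classification stage only: an RBF-kernel SVM on feature vectors,
# with random placeholders in place of the real IQM + moment features.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))        # stand-in for IQM + moment feature vectors
y = rng.integers(0, 2, size=200)      # 0 = original image, 1 = spliced image

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Kernel-based SVM, standing in for the classifier described in the abstract
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```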
Cross-lingual knowledge transfer method based on alignment of representational space structures
Siyuan REN, Cheng PENG, Ke CHEN, Zhiyi HE
Journal of Computer Applications    0, (): 18-23.   DOI: 10.11772/j.issn.1001-9081.2024030297

In the field of Natural Language Processing (NLP), contrastive learning is an efficient method for sentence representation learning: it effectively mitigates the anisotropy of Transformer-based pre-trained language models and significantly enhances the quality of sentence representations. However, existing research focuses on English, especially under supervised settings. Due to the lack of labeled data, it is difficult to use contrastive learning effectively to obtain high-quality sentence representations in most non-English languages. To address this issue, a cross-lingual knowledge transfer method for contrastive learning models was proposed, which transfers knowledge across languages by aligning the structures of the representation spaces of different languages. On this basis, a simple and effective cross-lingual knowledge transfer framework, TransCSE, was developed to transfer knowledge from supervised English contrastive learning models to non-English models. Through knowledge transfer experiments from English to six directions, including French, Arabic, Spanish, Turkish, and Chinese, knowledge was successfully transferred by TransCSE from the supervised contrastive learning model SimCSE (Simple Contrastive Learning of Sentence Embeddings) to the multilingual pre-trained language model mBERT (Multilingual Bidirectional Encoder Representations from Transformers). Experimental results show that the model trained with the TransCSE framework achieves accuracy improvements of 17.95 and 43.27 percentage points on the XNLI (Cross-lingual Natural Language Inference) and STS (Semantic Textual Similarity) 2017 benchmark datasets, respectively, compared to the original mBERT, proving the effectiveness of TransCSE. Moreover, compared with cross-lingual knowledge transfer methods based on shared parameters and representation alignment, TransCSE performs best on both the XNLI and STS 2017 benchmark datasets.
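For illustration only, the sketch below shows one common way to "align representation space structures": distilling a frozen English teacher (such as a supervised SimCSE encoder) into a multilingual student (such as mBERT) by minimizing the distance between their embeddings of parallel sentence pairs through a learnable projection. This is an assumption about the general technique, not the actual TransCSE training code, and the real encoders are replaced by placeholder tensors.

```python
# Illustrative alignment sketch (NOT the TransCSE training procedure): pull the
# student's embeddings of translated sentences toward the frozen teacher's
# embeddings of the corresponding English sentences via a learnable projection.

import torch
from torch import nn

dim, batch = 768, 32
teacher_emb = torch.randn(batch, dim)   # placeholder for frozen SimCSE embeddings of English sentences
student_emb = torch.randn(batch, dim)   # placeholder for mBERT embeddings of the parallel translations

projection = nn.Linear(dim, dim)        # learnable map from the student space to the teacher space
optimizer = torch.optim.AdamW(projection.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(projection(student_emb), teacher_emb)  # align the two representation spaces
    loss.backward()
    optimizer.step()

print(f"final alignment loss: {loss.item():.4f}")
```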
