To address the rapidly growing and frequently changing requirements for risk map generation of immovable cultural heritage, which existing programs and tools cannot meet in practical applications, a method for constructing a semantic model was proposed, and based on this model, a Domain-Specific Language (DSL) close to natural language was designed for experts in the field of immovable cultural heritage. Firstly, a business model was extracted through in-depth research on the indicators of immovable cultural heritage as well as the methods and processes for generating risk maps. Secondly, the meta-calculation units of the risk value calculation rules were abstracted, and a semantic model was constructed by analyzing the business model. On this basis, a DSL capable of expressing all the semantics in the semantic model was designed. Scripts in this language can be written by domain experts themselves and used to generate risk maps quickly and efficiently; the language is easy to extend and can accommodate frequently changing requirements. Compared with the mainstream approach of generating risk maps with a Geographic Information System (GIS), generating them with the DSL reduces work hours by more than 66.7%.
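To illustrate the general idea of interpreting risk-calculation rules, the following is a minimal sketch of a rule interpreter for a hypothetical DSL in which each rule assigns a weighted sum of indicator values to a risk score. The rule syntax, the indicator names, and the `evaluate_rule` helper are illustrative assumptions, not the actual language designed in the paper:

```python
# Minimal sketch of a rule interpreter for a hypothetical risk-calculation DSL.
# The rule syntax and indicator names are illustrative assumptions; the paper's
# actual DSL is derived from its semantic model and is close to natural language.

def evaluate_rule(rule: str, indicators: dict) -> float:
    """Evaluate a rule of the form 'risk = w1 * name1 + w2 * name2 + ...'."""
    _, expr = rule.split("=", 1)
    total = 0.0
    for term in expr.split("+"):
        weight, name = term.split("*")
        total += float(weight) * indicators[name.strip()]
    return total

# Hypothetical usage: indicator values on a 0-1 scale for one heritage site.
rule = "risk = 0.5 * flood_exposure + 0.3 * structural_decay + 0.2 * visitor_load"
site = {"flood_exposure": 0.8, "structural_decay": 0.4, "visitor_load": 0.6}
print(evaluate_rule(rule, site))  # 0.64
```

A real implementation of the approach would parse such rules into the paper's meta-calculation units and evaluate them per map cell to produce the risk map.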
In cross-domain sentiment analysis, labeled samples in the target domain are severely insufficient, feature distributions differ greatly across domains, and the sentiment polarity expressed by a feature in one domain can differ considerably from that in another; together, these problems lead to low classification accuracy. To address them, an aspect-level cross-domain sentiment analysis method based on capsule network was proposed. Firstly, feature representations of the text were obtained with the BERT (Bidirectional Encoder Representations from Transformers) pre-trained model. Secondly, for fine-grained aspect-level sentiment features, a Recurrent Neural Network (RNN) was used to fuse context features and aspect features. Thirdly, a capsule network with dynamic routing was used to distinguish overlapping features, and the sentiment classification model was built on this capsule network. Finally, a small amount of target-domain data was used to fine-tune the model, realizing cross-domain transfer learning. The optimal F1 score of the proposed method is 95.7% on the Chinese dataset and 91.8% on the English dataset, effectively alleviating the low accuracy caused by insufficient training samples.
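The dynamic routing step at the core of a capsule network can be sketched as follows in PyTorch. The tensor shapes, the number of routing iterations, and the layer sizes are illustrative assumptions rather than the paper's configuration; the routing-by-agreement procedure itself follows the standard formulation of Sabour et al. (2017):

```python
# Minimal sketch of capsule dynamic routing (Sabour et al., 2017) in PyTorch.
# Shapes and iteration count are illustrative assumptions, not the paper's setup.
import torch

def squash(s, dim=-1, eps=1e-8):
    """Squashing non-linearity: keeps direction, maps norm into [0, 1)."""
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)

def dynamic_routing(u_hat, num_iters=3):
    """u_hat: (batch, in_caps, out_caps, out_dim) prediction vectors.
    Returns output capsules of shape (batch, out_caps, out_dim)."""
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)  # routing logits
    for _ in range(num_iters):
        c = torch.softmax(b, dim=2)                # coupling coefficients
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)   # weighted sum over input caps
        v = squash(s)                              # output capsules
        b = b + (u_hat * v.unsqueeze(1)).sum(-1)   # agreement update
    return v

# Hypothetical usage: 32 input capsules routed to 2 sentiment capsules of dim 16.
u_hat = torch.randn(4, 32, 2, 16)
print(dynamic_routing(u_hat).shape)  # torch.Size([4, 2, 16])
```

In such a classifier, the norm of each output capsule is typically read as the probability of the corresponding sentiment class, which is what lets overlapping features be disentangled.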
Existing generative models have difficulty directly generating high-resolution images from complex semantic labels, so a Generative Adversarial Network based on Semantic Labels and Noise Prior (SLNP-GAN) was proposed. Firstly, the semantic labels (containing shape, position and category information) were used directly as input: a global generator encoded them, learned coarse-grained global attributes in combination with the noise prior, and generated low-resolution images. Then, guided by an attention mechanism, a local refinement generator queried the high-resolution sub-labels corresponding to sub-regions of the low-resolution images to obtain fine-grained information, thereby generating complex images with clear textures. Finally, an improved Adam with Momentum (AMM) algorithm was introduced to optimize the adversarial training. Experimental results show that, compared with the existing text2img method, the proposed method improves Pixel Accuracy (PA) by 23.73% and 11.09% on the COCO_Stuff and ADE20K datasets respectively; compared with the Adam algorithm, the AMM algorithm converges about twice as fast with a much smaller loss amplitude. These results demonstrate that SLNP-GAN efficiently captures both global features and local textures and generates fine-grained, high-quality images.
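The Pixel Accuracy (PA) metric used in the comparison is the fraction of pixels whose predicted semantic label matches the ground truth. A minimal sketch follows; the array layout (integer label maps of shape H x W) is an illustrative assumption:

```python
# Minimal sketch of the Pixel Accuracy (PA) metric: the fraction of pixels
# whose predicted semantic label matches the ground-truth label. The H x W
# integer label-map layout is an illustrative assumption.
import numpy as np

def pixel_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    assert pred.shape == gt.shape
    return float((pred == gt).mean())

# Hypothetical usage with random 64x64 label maps over 10 classes.
rng = np.random.default_rng(0)
gt = rng.integers(0, 10, size=(64, 64))
pred = gt.copy()
pred[:8] = rng.integers(0, 10, size=(8, 64))  # corrupt part of the prediction
print(pixel_accuracy(pred, gt))
```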
To generate an effective visual dictionary for representing image scenes and further improve the accuracy of semantic annotation, a scene annotation model based on Formal Concept Analysis (FCA) was presented, in which the training image set together with the initial visual dictionary was abstracted as a formal context. The weights of the visual words were first measured by information entropy, and an FCA structure was built for each type of scene. Then the arithmetic mean of the weights of the visual words in each intent was used to describe the contribution of the different visual words to the semantics, and the visual vocabulary of each scene type was extracted from the structure according to visual word thresholds. Finally, each test image was assigned a class label using the K-nearest neighbor method. The proposed approach was evaluated on the Fei-Fei Scene 13 natural scene dataset, and the experimental results show that, compared with the methods of Fei-Fei and Bai, the proposed algorithm achieves better classification accuracy with β=0.05 and γ=15.
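One common way to weight visual words by information entropy is to treat a word that occurs uniformly across scene classes (high entropy) as less discriminative and give it a lower weight. The following sketch assumes that formulation; the exact weighting formula in the paper may differ:

```python
# Minimal sketch of entropy-based weighting of visual words: a word spread
# uniformly over scene classes (high entropy) is less discriminative and
# receives a lower weight. This formulation is an illustrative assumption;
# the paper's exact weighting may differ.
import numpy as np

def entropy_weights(counts: np.ndarray, eps=1e-12) -> np.ndarray:
    """counts: (num_classes, num_words) occurrence counts of each visual
    word per scene class. Returns a weight in [0, 1] for each visual word."""
    p = counts / (counts.sum(axis=0, keepdims=True) + eps)  # P(class | word)
    entropy = -(p * np.log2(p + eps)).sum(axis=0)           # per-word entropy
    max_entropy = np.log2(counts.shape[0])
    return 1.0 - entropy / max_entropy                      # low entropy -> high weight

# Hypothetical usage: 3 scene classes, 4 visual words. Word 2 is spread evenly
# (weight near 0); word 3 appears almost only in one class (weight near 1).
counts = np.array([[30,  1, 10, 0],
                   [ 2, 28, 10, 0],
                   [ 1,  1, 10, 9]], dtype=float)
print(entropy_weights(counts).round(2))
```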