Journal of Computer Applications ›› 2015, Vol. 35 ›› Issue (4): 1093-1096. DOI: 10.11772/j.issn.1001-9081.2015.04.1093

• Virtual Reality and Digital Media •

Semantic annotation model for scenes based on formal concept analysis

ZHANG Sulan, ZHANG Jifu, HU Lihua, CHU Meng

  1. School of Computer Science and Technology, Taiyuan University of Science and Technology, Taiyuan, Shanxi 030024, China
  • Received: 2014-10-30  Revised: 2014-12-26  Online: 2015-04-10  Published: 2015-04-08
  • Corresponding author: ZHANG Sulan
  • About the authors: ZHANG Sulan (1971-), female, born in Changzhi, Shanxi, associate professor, Ph. D.; her research interests include concept lattice, data mining and image understanding. ZHANG Jifu (1963-), male, born in Pingyao, Shanxi, professor, doctoral supervisor, Ph. D., CCF member; his research interests include data mining, pattern recognition and high-performance computing. HU Lihua (1980-), female, born in Xinzhou, Shanxi, lecturer, Ph. D. candidate; her research interests include concept lattice, data mining and image understanding. CHU Meng (1988-), female, born in Yuncheng, Shanxi, M. S.; her research interests include concept lattice, data mining and image understanding.
  • Supported by:

    the National Natural Science Foundation of China (61373099); the Young Scientists Fund of the National Natural Science Foundation of China (61402316).

Abstract:

To generate a visual dictionary that effectively represents the semantics of image scenes and to improve the performance of scene semantic annotation, a semantic annotation model for image scenes based on Formal Concept Analysis (FCA) was proposed. First, the training image set together with its initial visual dictionary was abstracted into a formal context, the weight of each visual word was measured by information entropy, and a concept lattice was constructed for each scene category. Then, the mean weight of the visual words in a concept's intent was used to characterize the contribution of that combination of visual words to image annotation, and the visual dictionary for annotating each category of scene images was extracted from the lattice structure according to a category-dictionary generation threshold. Finally, the scene semantics of a test image were annotated with the K-Nearest Neighbors method. Experiments on the Fei-Fei Scene 13 natural scene image dataset show that, compared with the methods of Fei-Fei and Bai, the proposed model achieves better annotation and classification accuracy with β=0.05 and γ=15.

Key words: Formal Concept Analysis (FCA), intent weight, bag of visual words, category visual dictionary, scene semantic annotation
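
A minimal Python sketch of the pipeline described in the abstract is given below. It is an illustration rather than the authors' implementation: the entropy-based weighting, the brute-force enumeration of concept intents, the overlap-based neighbour scoring, and the roles assumed for β (a mean-intent-weight threshold) and γ (the number of nearest neighbours, k) are simplifying assumptions.

# Illustrative sketch of the annotation pipeline (not the authors' code):
# binary formal context -> entropy-based word weights -> per-category concept
# intents -> mean-intent-weight filtering -> nearest-neighbour labeling.
from collections import Counter
from itertools import combinations
from math import log2

def entropy_weights(context):
    """context: list of (label, set of visual word ids).
    Weight each word by how concentrated it is on a single category."""
    labels = {lab for lab, _ in context}
    words = set().union(*(ws for _, ws in context))
    weights = {}
    for w in words:
        counts = Counter(lab for lab, ws in context if w in ws)
        total = sum(counts.values())
        h = -sum((c / total) * log2(c / total) for c in counts.values())
        # Low entropy over categories -> category-specific word -> high weight.
        weights[w] = 1.0 - h / (log2(len(labels)) or 1.0)
    return weights

def concept_intents(objects):
    """Brute-force the non-empty intents (closed attribute sets) of a small
    formal context; objects is a list of word sets for one scene category."""
    intents = set()
    for r in range(1, len(objects) + 1):
        for group in combinations(objects, r):
            intent = frozenset.intersection(*map(frozenset, group))
            if intent:
                intents.add(intent)
    return intents

def category_dictionary(objects, weights, beta):
    """Keep the words of every intent whose mean weight reaches beta."""
    dictionary = set()
    for intent in concept_intents(objects):
        if sum(weights[w] for w in intent) / len(intent) >= beta:
            dictionary |= intent
    return dictionary

def knn_label(test_words, context, dictionaries, k):
    """Score training images by overlap restricted to their category
    dictionary, then vote among the k best matches."""
    scored = [(len(test_words & ws & dictionaries[lab]), lab)
              for lab, ws in context]
    top = sorted(scored, reverse=True)[:k]
    return Counter(lab for _, lab in top).most_common(1)[0][0]

if __name__ == "__main__":
    # Toy data: two scene categories over a six-word visual dictionary {0..5}.
    train = [("coast", {0, 1, 2}), ("coast", {0, 1, 3}),
             ("forest", {3, 4, 5}), ("forest", {2, 4, 5})]
    w = entropy_weights(train)
    dicts = {lab: category_dictionary([ws for l, ws in train if l == lab],
                                      w, beta=0.05)
             for lab in {"coast", "forest"}}
    print(knn_label({0, 1, 4}, train, dicts, k=3))

On this toy data the script prints coast, since the test word set overlaps mostly with the coast category dictionary.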

CLC Number: