To address the problem that mainstream adversarial attack algorithms reduce attack invisibility by perturbing global image features, an untargeted attack algorithm named PS-MIFGSM (Perceptual-Sensitive Momentum Iterative Fast Gradient Sign Method) was proposed. Firstly, the image regions that a Convolutional Neural Network (CNN) focuses on in the classification task were captured by the Grad-CAM algorithm. Then, MI-FGSM (Momentum Iterative Fast Gradient Sign Method) was used to attack the classification network and generate the adversarial perturbation, which was applied only to the focus regions of the image while the non-focus regions were left unchanged, thereby generating a new adversarial sample. In the experiments, based on three image classification models, Inception_v1, Resnet_v1 and Vgg_16, the effects of PS-MIFGSM and MI-FGSM were compared on both single-model and ensemble-model attacks. The results show that PS-MIFGSM can effectively reduce the difference between the real sample and the adversarial sample while keeping the attack success rate unchanged.
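A minimal PyTorch-style sketch of the attack loop described above is given below; the classifier model, the grad_cam_mask helper that is assumed to return the Grad-CAM focus mask, and the hyper-parameter values (eps, steps, mu) are illustrative assumptions, not details taken from the paper.

```python
# Sketch of PS-MIFGSM: MI-FGSM momentum attack restricted to the Grad-CAM focus region.
import torch
import torch.nn.functional as F

def ps_mifgsm(model, x, y, grad_cam_mask, eps=8/255, steps=10, mu=1.0):
    alpha = eps / steps                       # per-step size
    mask = grad_cam_mask(model, x, y)         # (N,1,H,W): 1 inside focus region, 0 outside (hypothetical helper)
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)                   # accumulated momentum
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # MI-FGSM momentum accumulation with L1-normalized gradient
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        # perturb only the focus region; non-focus pixels stay unchanged
        step = alpha * g.sign() * mask
        x_adv = torch.max(torch.min(x_adv.detach() + step, x + eps), x - eps).clamp(0, 1)
    return x_adv
```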
To address the problem that traditional recommendation algorithms ignore the time factor, a method for computing item correlation with a time decay function based on the similarity of users' short-term behavior (user interest) was proposed, and a new item similarity was derived from it. At the same time, the TItemRank algorithm, an improved ItemRank algorithm that incorporates the user-interest-based item correlation, was proposed. The experimental results show that the improved algorithms achieve better recommendation performance than the classical ones when the recommendation list is short. In particular, when the recommendation list contains 20 items, the precision of the user-interest-based item similarity is 21.9% higher than that of Cosine similarity and 6.7% higher than that of Jaccard similarity; when the recommendation list contains 5 items, the precision of TItemRank is 2.9% higher than that of ItemRank.
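The abstract does not give the exact decay function, so the sketch below assumes an exponential decay over the time gap between a user's interactions with two items, purely to illustrate how a time-aware item correlation of this kind can be accumulated; the function name and the decay rate lam are hypothetical.

```python
# Sketch of a time-decayed item correlation: co-consumed item pairs contribute
# a weight that shrinks as the gap between their interaction times grows.
import math
from collections import defaultdict

def item_correlation(ratings, lam=0.01):
    """ratings: iterable of (user, item, timestamp) tuples."""
    by_user = defaultdict(list)
    for user, item, t in ratings:
        by_user[user].append((item, t))
    corr = defaultdict(float)
    for events in by_user.values():
        for i, (item_i, t_i) in enumerate(events):
            for item_j, t_j in events[i + 1:]:
                # closer-in-time co-consumption contributes more weight
                w = math.exp(-lam * abs(t_i - t_j))
                corr[(item_i, item_j)] += w
                corr[(item_j, item_i)] += w
    return corr
```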
To overcome the drawback that traditional density estimation methods cannot be applied at large scale due to the high workload of algorithm configuration and the limited number of high-density-level samples, a panoramic density estimation method based on surveillance video was proposed. Firstly, the influence of projective distortion in the imaging process was eliminated by automatically constructing a weight map of the scene; this process automatically and robustly learns the corresponding weight map for each scene, thereby effectively reducing the configuration workload. Secondly, a large number of high-density-level samples were constructed from low-density-level samples by a simulation method. Finally, features such as the area and perimeter of the training samples were extracted to train a Support Vector Regression (SVR) model that predicts the density level of each scene. During testing, the panoramic density distribution was displayed in real time through the mapping between two-dimensional images and the panoramic Geographic Information System (GIS) map. In-depth application in the Beijing North Railway Station square area shows that the proposed panoramic density estimation method can accurately, quickly and effectively estimate the dynamic changes of crowd density in complex scenes.
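A minimal sketch of the final regression step, using scikit-learn's SVR on an assumed [area, perimeter] feature layout with toy values; the actual features, samples and density levels of the paper are not reproduced here.

```python
# Train an SVR on per-frame foreground features (area, perimeter) to predict a density level.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: (n_samples, 2) -> [area, perimeter] per training frame (assumed layout, toy values)
X = np.array([[1200.0, 210.0], [5300.0, 460.0], [9800.0, 640.0]])
y = np.array([1, 3, 5])          # density levels of the training frames

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)
print(model.predict([[4000.0, 400.0]]))   # predicted density level for a new frame
```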
Most existing cloud storage systems are based on the key-value model, which leads to a full dataset scan for multi-dimensional queries and low query efficiency. A KD-tree and R-tree based multi-dimensional cloud data index named KD-R index was proposed. The KD-R index adopts a two-layer architecture: a KD-tree based global index is built on the global server, and R-tree based local indexes are built on the local servers. A cost model was used to adaptively select appropriate R-tree nodes to publish into the global KD-tree index. The experimental results show that, compared with an R-tree based global index, the KD-R index is efficient for multi-dimensional range queries and has high availability in the case of server failures.
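A conceptual sketch of the two-layer query routing is given below; for brevity the global KD-tree is replaced by a flat list of published entries, and the cost-model-based node selection and the actual R-tree search on the local servers are elided, so this only illustrates how the global index narrows a range query to a few local servers.

```python
# Route a 2-D range query through a simplified global index of published
# (MBR of local R-tree node, owning server) entries.
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[Tuple[float, float], Tuple[float, float]]   # ((xmin, ymin), (xmax, ymax))

@dataclass
class GlobalEntry:
    mbr: Box          # bounding box of a published local R-tree node
    server_id: str    # local server that owns the node

def intersects(a: Box, b: Box) -> bool:
    (ax1, ay1), (ax2, ay2) = a
    (bx1, by1), (bx2, by2) = b
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

def route_range_query(global_index: List[GlobalEntry], query: Box) -> List[str]:
    """Return the local servers whose published nodes may hold results,
    so only they (not the whole cluster) run the local R-tree search."""
    return sorted({e.server_id for e in global_index if intersects(e.mbr, query)})

# Example: two servers published one node each; the query overlaps only server "B".
index = [GlobalEntry(((0, 0), (10, 10)), "A"), GlobalEntry(((20, 20), (40, 40)), "B")]
print(route_range_query(index, ((25, 25), (30, 30))))    # -> ['B']
```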