[1] CURLESS B. From range scans to 3D models[J]. Computer Graphics, 1999, 33(4):38-41.
[2] HENRY P, KRAININ M, HERBST E, et al. RGB-D mapping:using Kinect-style depth cameras for dense 3D modeling of indoor environments[J]. The International Journal of Robotics Research, 2012, 31(5):647-663.
[3] ZHAO Z, FENG X, WEI F, et al. Learning representative features for robot topological localization[J]. International Journal of Advanced Robotic Systems, 2013, 10(4):215.
[4] JOO H, LIU H, TAN L, et al. Panoptic studio:a massively multiview system for social motion capture[C]//ICCV 2015:Proceedings of the 2015 IEEE International Conference on Computer Vision. Washington, DC:IEEE Computer Society, 2015:3334-3342.
[5] ZHU Y, ZHAO Y, ZHU S. Understanding tools:task-oriented object modeling, learning and recognition[C]//CVPR 2015:Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Washington, DC:IEEE Computer Society, 2015:2855-2864.
[6] WANG Z, TENG S, LIU G, et al. Hierarchical sparse representation with deep dictionary for multi-modal classification[J]. Neurocomputing, 2017, 253:65-69.
[7] WANG Z, ZHAO Z, WENG S, et al. Incremental multiple instance outlier detection[J]. Neural Computing and Applications, 2015, 26(4):957-968.
[8] 伍锡如, 黄国明, 孙立宁. 基于深度学习的工业分拣机器人快速视觉识别与定位算法[J]. 机器人, 2016, 38(6):711-719. (WU X R, HUANG G M, SUN L N. Fast visual identification and location algorithm for industrial sorting robots based on deep learning[J]. Robot, 2016, 38(6):711-719.)
[9] 杜学丹, 蔡莹皓, 鲁涛, 等. 一种基于深度学习的机械臂抓取方法[J]. 机器人, 2017, 39(6):820-837. (DU X D, CAI Y H, LU T, et al. A robotic grasping method based on deep learning[J]. Robot, 2017, 39(6):820-837.)
[10] LEVINE S, PASTOR P, KRIZHEVSKY A, et al. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection[C]//ISER 2016:Proceedings of the 2016 International Symposium on Experimental Robotics, SPAR 1. Cham:Springer, 2017:173-184.
[11] PINTO L, GUPTA A. Supersizing self-supervision:learning to grasp from 50K tries and 700 robot hours[C]//ICRA 2016:Proceedings of the 2016 IEEE International Conference on Robotics and Automation. Piscataway, NJ:IEEE, 2016:3406-3413.
[12] ZHANG Z. Microsoft Kinect sensor and its effect[J]. IEEE MultiMedia, 2012, 19(2):4-10.
[13] ZHANG Z. A flexible new technique for camera calibration[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11):1330-1334.
[14] RUSU R, COUSINS S. 3D is here:Point Cloud Library (PCL)[C]//ICRA 2011:Proceedings of the 2011 IEEE International Conference on Robotics and Automation. Piscataway, NJ:IEEE, 2011:1-4.
[15] SUCAN I, CHITTA S. MoveIt![OL]. [2017-12-15]. http://moveit.ros.org.
[16] SARBOLANDI H, LEFLOCH D, KOLB A. Kinect range sensing:structured-light versus time-of-flight Kinect[J]. Computer Vision and Image Understanding, 2015, 139:1-20.
[17] QUIGLEY M, CONLEY K, GERKEY B, et al. ROS:an open-source robot operating system[C/OL]//ICRA 2009:Proceedings of the 2009 IEEE International Conference on Robotics and Automation. Piscataway, NJ:IEEE, 2009 [2018-02-05]. http://www.willowgarage.com/sites/default/files/icraoss09-ROS.pdf.
[18] SILTANEN S, HAKKARAINEN M, HONKAMAA P. Automatic marker field calibration[C]//Proceedings of the Second Virtual Reality International Conference. Berlin:Springer-Verlag, 2007:261-267.
[19] CHOO B, LANDAU M, DEVORE M, et al. Statistical analysis-based error models for the Microsoft Kinect depth sensor[J]. Sensors, 2014, 14:17430-17450.