In this thesis, we design and construct a two-wheeled robot vehicle capable of wireless communication, equipped with an onboard IP camera and motion sensors. The vehicle joins real-time images, measurements from the robot's sensors, and robot control commands on a local area network, enabling cloud-based control with good generalization and expandability across platforms.
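The networking scheme above — sensor data, camera images, and control commands shared over a LAN — can be illustrated with a minimal sketch. Everything in it (the JSON field names, the UDP transport, the port number) is an assumption for illustration only; the thesis does not specify a particular wire format:

```python
import json
import socket

TELEMETRY_ADDR = ("127.0.0.1", 9000)  # hypothetical LAN endpoint for the cloud controller

def make_telemetry_packet(accel, gyro, wheel_odom):
    """Serialize one motion-sensor sample as a JSON datagram.

    The field names are illustrative; any serialization that both the
    vehicle and the cloud-side controller understand would work.
    """
    sample = {"accel": accel, "gyro": gyro, "odom": wheel_odom}
    return json.dumps(sample).encode("utf-8")

def send_telemetry(packet, addr=TELEMETRY_ADDR):
    """Push one telemetry datagram onto the local network via UDP."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(packet, addr)
```

In the same spirit, control commands would flow in the opposite direction over the same network, which is what makes the vehicle controllable from any platform on the LAN.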
As the robot industry matures into a developed market, application scenarios grow more complex and environmental factors more varied. To enable the robot to perform localization and mapping in diverse environments, this thesis adopts an algorithm that constructs a 3-D environment model from a single camera, allowing the robot to update its position and build a 3-D point-cloud model of its surroundings while traveling indoors.
This thesis implements LSD-SLAM (Large-Scale Direct Monocular SLAM) on our self-built vehicle. The method simultaneously reconstructs the camera pose and a 3-D model of the environment, reducing the errors typical of feature-extraction methods through semi-dense visual odometry and pose-graph optimization. After combining it with our proposed scale-factor correction and coordinate-transformation method, the estimation results can be fitted and presented at real-world scale for practical applications. To further increase computing efficiency, we use a parallel GPU computation architecture to speed up 3-D environment reconstruction, and we reduce both computational complexity and scale drift with a map-stitching method. Because we adopt a consumer IP camera for image acquisition, we substantially reduce the cost of the robot and the size of the imaging module, giving developers more flexibility in vehicle design.
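The scale-factor correction and coordinate transformation described above amount to fitting a similarity transform between the monocular SLAM output (known only up to scale) and metrically known reference points, in the spirit of the closed-form absolute-orientation solutions of [31][32]. A minimal sketch in Python with NumPy, assuming corresponding point sets `src` (SLAM frame) and `dst` (world frame); the function name and interface are illustrative, not the thesis implementation:

```python
import numpy as np

def align_similarity(src, dst):
    """Closed-form similarity alignment: find s, R, t with dst ≈ s * R @ src + t.

    src, dst: (N, 3) arrays of corresponding 3-D points.
    SVD-based absolute-orientation solution in the Horn/Umeyama style.
    """
    n = src.shape[0]
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    # Cross-covariance between the centered point sets.
    sigma = xd.T @ xs / n
    U, D, Vt = np.linalg.svd(sigma)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0
    R = U @ S @ Vt
    # Scale factor from the variance of the (centered) source cloud.
    var_s = (xs ** 2).sum() / n
    s = np.trace(np.diag(D) @ S) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Applying the recovered `s`, `R`, `t` to every estimated camera pose and map point expresses the whole reconstruction in real-world units, which is what makes the point-cloud model usable for actual applications.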
Abstract (in Chinese)
Abstract (in English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Preface
1.2 Research Motivation and Objectives
1.3 Literature Review
1.4 Thesis Organization
Chapter 2 Architecture of a Cloud-Networked General-Purpose Two-Wheeled Robot Vehicle with Multi-Axis Motion Sensors
2.1 Hardware Configuration of the Two-Wheeled Robot Vehicle
2.2 Power Configuration and Circuit Planning
2.3 Control System Architecture and Communication Protocols of the Two-Wheeled Robot Vehicle
2.4 Motion Model and Control Methods of the Two-Wheeled Robot Vehicle
Chapter 3 Stereo Vision and Distance Measurement
3.1 Camera Geometry
3.2 Camera Parameters
3.3 Camera Calibration
3.4 Conventional Distance Measurement Methods
Chapter 4 Visual Simultaneous Localization and Mapping
4.1 Simultaneous Localization and Mapping
4.2 MonoSLAM
4.3 LSD-SLAM[36]
4.4 Scale Factor Correction and Coordinate Transformation
Chapter 5 System Environment and Experimental Framework
5.1 Experimental Environment
5.2 Hardware and Software Specifications of the Experimental Computing Platform
5.3 Experimental System Architecture
5.4 Experimental Procedure
Chapter 6 Single-Camera Simultaneous Localization and Mapping Experimental Results
6.1 Indoor Environment Experimental Results
6.2 Indoor Environment Experimental Results II
6.3 Indoor Environment, Upward-Looking View
6.4 Outdoor Open-Space Experimental Results
6.5 Analysis of Results
Chapter 7 Conclusions and Future Work
7.1 Conclusions
7.2 Recommendations
7.3 Future Work
References
Appendix 1 Robot Vehicle Component Specifications
[1]K. Wyrobek, E. Berger, H.F.M. Van der Loos, K. Salisbury, "Towards a Personal Robotics Development Platform: Rationale and Design of an Intrinsically Safe Personal Robot," 2008 IEEE ICRA, May 19-23, 2008.
[2]"What's NXT? LEGO Group Unveils LEGO MINDSTORMS NXT Robotics Toolset at Consumer Electronics Show" ,Press release, NV: LEGO Group. , Las Vegas January 4, 2006.
[3]T.W. McLain ,M.A. Goodrich ,E.P. Anderson ,"Coordinated target assignment and intercept for unmanned air vehicles", Robotics and Automation,pp.911 - 922,2002.
[4]E. Altug ,J.P. Ostrowski ,R. Mahony, "Control of a quadrotor helicopter using visual feedback", Robotics and Automation, 2002. Proceedings. ICRA '02. IEEE International Conference, vol.1,pp. 72 - 77, 2002.
[5]N. Michael, D. Mellinger,Q. Lindsey, V. Kumar, "The GRASP Multiple Micro-UAV Testbed",Robotics & Automation Magazine, IEEE, Vol.17, no.3, pp.56-65,September 2010.
[6]M. Quigley, E. Berger, and A. Y. Ng, “STAIR: Hardware and Software Architecture,”in AAAI 2007 Robotics Workshop, Vancouver, B.C, August, 2007.
[7]H. Durrant-Whyte and T. Bailey, “Simultaneous Localization and mapping (SLAM): Part I the Essential Algorithms.”, Robotics and Automation Magazine, pp. 99-110, June, 2006.
[8]T. Bailey and H. Durrant-Whyte, “Simultaneous Localisation and Mapping (SLAM): Part II State of the Art.”, Robotics and Automation Magazine, pp. 108-117, September, 2006.
[9]胡皓翔, "Smartphone Localization Using a Street-View Image Database," Master's thesis, Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 2013.
[10]蕭詠稜, "Indoor Simultaneous Localization and Mapping Based on Range Sensors," Master's thesis, Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 2013.
[11]邱敬洲, "Simultaneous Localization and Mapping for an Embedded Omnidirectional-Wheeled Robot Based on a Depth Image Database," Master's thesis, Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 2014.
[12]李其真, "Simultaneous Localization and Mapping Based on Database Images and RGB-D Camera Images," Master's thesis, Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 2014.
[13]R.C. Smith, P. Cheeseman, "On the Representation and Estimation of Spatial Uncertainty," The International Journal of Robotics Research (IJRR), vol. 5, no. 4, pp. 56-68, 1986.
[14]R.C. Smith, M. Self, P. Cheeseman, "Estimating Uncertain Spatial Relationships in Robotics," Elsevier, pp. 435-461, 1986.
[15]A. J. Davison and D. W. Murray, “Simultaneous Localisation and Map-Building Using Active Vision.”, IEEE Trans. on Pattern Analysis and Machine Intelligence, pp. 865-880, July 2002.
[16]A. J. Davison, "Real-time Simultaneous Mapping and Localization with a Single Camera," Proc. International Conference on Computer Vision, Nice, October 2003.
[17]A. J. Davison, I. D. Reid, N. D. Molton and O. Stasse, “MonoSLAM: Real-Time Single Camera SLAM.”, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 29, No. 6, pp. 1052-1067, June 2007.
[18]H. P. Moravec, "Towards automatic visual obstacle avoidance," Proceedings of the 5th International Joint Conference on Artificial Intelligence, p. 584, 1977.
[19]C. Harris and M. Stephens, "A combined corner and edge detector," in Alvey Vision Conference, pp. 147-151, 1988.
[20]D. G. Lowe, “Distinctive image features from scale-invariant keypoints.”, International Journal of Computer Vision, Vol. 60, No. 2, pp. 91-110, November 2004.
[21]H. Bay, T. Tuytelaars, and L. Van Gool, “Surf: Speeded-up robust features.”, Computer Vision and Image Understanding (CVIU), Vol. 110, No. 3, pp. 346-359, 2008.
[22]M. Agrawal, K. Konolige, L. Iocchi, “Real-time detection of independent motion using stereo.”, In Proc. of the IEEE Workshop on Motion and Video Computing, pp. 207-214, Jan. 2005.
[23]M. A. Fischler, R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Comm. of the ACM, pp. 381-395, June 1981.
[24]J. Engel, J. Sturm, and D. Cremers, “Semi-Dense Visual Odometry for a Monocular Camera,” in Proc. IEEE Int. Conf. on Computer Vision, pp.1449-1456, Dec. 2013.
[25]C. Forster, M. Pizzoli, D. Scaramuzza, "SVO: Fast semi-direct monocular visual odometry," Robotics and Automation (ICRA), 2014, pp. 15-22, June 2014.
[26]C. Kerl, J. Sturm, D. Cremers, "Dense visual SLAM for RGB-D cameras," Intelligent Robots and Systems (IROS), 2013, pp. 2100-2106, Nov. 2013.
[27]N. Ho and R. Jarvis, "Large Scale 3D Environmental Modelling for Stereoscopic Walk-Through Visualisation," in 3DTV Conference, Kos Island, Greece, May 2007.
[28]R. Jarvis, N. Ho, and J. Byrne, "Autonomous robot navigation in cyber and real worlds," in CW '07: Proceedings of the 2007 International Conference on Cyberworlds, pp. 66-73, Washington, DC, USA, 2007.
[29]T. Suzuki, M. Kitamura, Y. Amano and T. Hashizume, "6-DOF Localization for a Mobile Robot using Outdoor 3D Voxel Maps," Proc. of the 2010 IEEE International Conference on Intelligent Robots and Systems (IROS 2010), pp. 5737-5743, 2010.
[30]A.I. Comport, E. Malis, P. Rives, "Accurate Quadrifocal Tracking for Robust 3D Visual Odometry," Robotics and Automation (ICRA), 2007, pp. 40-45, April 2007.
[31]B.K.P. Horn, H.M. Hilden, S. Negahdaripour, "Closed-Form Solution of Absolute Orientation Using Orthonormal Matrices," Journal of the Optical Society of America A, vol. 5, no. 7, 1988.
[32]B.K.P. Horn, "Closed-Form Solution of Absolute Orientation Using Unit Quaternions," Journal of the Optical Society of America A, vol. 4, no. 4, 1987.
[33]R. Sim, P. Elinas, M. Griffin, and J.J. Little, "Vision-Based SLAM Using the Rao-Blackwellised Particle Filter," Proc. IJCAI Workshop Reasoning with Uncertainty in Robotics, 2005.
[34]J. Mullane, B.N. Vo, M. Adams, W.S. Wijesoma, "A random set formulation for Bayesian SLAM," IEEE/RSJ International Conference on Intelligent Robots and Systems, France, September 2008.
[35]R. Kummerle, G. Grisetti, H. Strasdat, K. Konolige, W. Burgard, "g2o: A general framework for graph optimization," Robotics and Automation (ICRA), 2011, pp. 3607-3613, May 2011.
[36]J. Engel, T. Schöps, and D. Cremers, "LSD-SLAM: Large-scale direct monocular SLAM," in Proceedings of the European Conference on Computer Vision, pp. 834-849, 2014.
[37]A. Glover, W. Maddern, M. Warren, S. Reid, M. Milford, G. Wyeth, "OpenFABMAP: An open source toolbox for appearance-based loop closure detection," Robotics and Automation (ICRA), 2012, pp. 4730-4729, May 2012.
[38]M. Quigley, B. Gerkey, K. Conley, J. Faust, T. Foote, J. Leibs, E. Berger, R. Wheeler, and A. Ng, "ROS: An open-source robot operating system," in Proc. Open-Source Software Workshop, Int. Conf. Robotics and Automation, Kobe, Japan, 2009.
[39]Z. Zhang, "A flexible new technique for camera calibration", IEEE Trans. Pattern Anal. Mach. Intell., vol.22, no.11, pp.1330-1334, Nov. 2000.
[40]石燿華, "Localization in Wireless Ranging Networks," Master's thesis, Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 2013.
[41]鄭禮逸, "Implementation of a Real-Time Head-Pose-Tracking Robot Based on Nose Features," Master's thesis, Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 2015.
[42]張博詠, "Localization and Mapping for a Wheeled Robot Based on Ultra-Wideband Wireless Ranging and Motion Sensors," Master's thesis, Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 2015.