Real-time Visual Odometry Estimation Based on Principal Direction Detection on Ceiling Vision

Han Wang, Wei Mou, Gerald Seet, Mao-Hai Li, M. W. S. Lau, Dan-Wei Wang

Citation: Han Wang, Wei Mou, Gerald Seet, Mao-Hai Li, M. W. S. Lau and Dan-Wei Wang. Real-time Visual Odometry Estimation Based on Principal Direction Detection on Ceiling Vision. International Journal of Automation and Computing, vol. 10, no. 5, pp. 397-404, 2013. doi: 10.1007/s11633-013-0736-7

doi: 10.1007/s11633-013-0736-7

    Author Bio:

    Han Wang received his bachelor's degree in computer science from Northeast Heavy Machinery Institute, China, and his Ph.D. degree from the University of Leeds, UK. He is currently an associate professor with the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore. He has published over 120 high-quality international conference and journal papers. He is an invited member of the Editorial Advisory Board of The Open Electrical and Electronic Engineering Journal, and a senior member of IEEE. His research interests include computer vision and robotics. E-mail: hw@ntu.edu.sg

    Corresponding author: Wei Mou
Abstract: In this paper, we present a novel algorithm for odometry estimation based on ceiling vision. The main contribution is the introduction of principal direction detection, which greatly reduces the error accumulation that affects most visual odometry estimation approaches. The principal direction is defined based on the fact that indoor ceilings are typically covered with artificial vertical and horizontal lines, which can serve as a reference for the robot's current heading direction. The proposed approach operates in real time and performs well even under camera disturbance. A moving low-cost RGB-D camera (Kinect), mounted on a robot, is used to continuously acquire point clouds. Iterative closest point (ICP) is the common way to estimate the current camera position by registering the currently captured point cloud to the previous one; however, its performance suffers from the data-association problem, or it requires pre-alignment information. The proposed principal direction detection approach does not rely on data-association knowledge: using this method, two point clouds are properly pre-aligned, so ICP can then be used to fine-tune the transformation parameters and minimize the registration error. Experimental results demonstrate the real-time performance and stability of the proposed system under disturbance. Several indoor tests show that the proposed visual odometry estimation method significantly improves the accuracy of simultaneous localization and mapping (SLAM).
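The abstract's core idea is that ceiling lines form a roughly rectilinear grid, so their orientations, folded modulo 90°, define a principal direction that tracks the robot's heading, and the heading change between frames can pre-align consecutive point clouds before ICP refinement. The sketch below is only an illustration of that idea under stated assumptions, not the authors' implementation: it assumes line angles have already been extracted from the ceiling image (e.g., by an edge-plus-Hough step), and all function names and the toy angles are hypothetical.

```python
# Minimal sketch (not the paper's code): fuse ceiling-line angles into a
# principal direction and use the frame-to-frame change to pre-rotate points.
import numpy as np

def principal_direction(angles_rad):
    """Dominant grid orientation in [0, pi/2), from detected line angles.

    Ceiling lines come in two perpendicular families, so each angle is folded
    modulo 90 deg; a circular mean with period pi/2 gives one robust estimate.
    """
    folded = np.mod(angles_rad, np.pi / 2)
    c = np.mean(np.cos(4.0 * folded))   # scale by 4 so the period pi/2 maps to 2*pi
    s = np.mean(np.sin(4.0 * folded))
    return np.mod(np.arctan2(s, c) / 4.0, np.pi / 2)

def heading_change(angles_prev, angles_curr):
    """Apparent rotation of the ceiling grid between two frames, in radians."""
    d = principal_direction(angles_curr) - principal_direction(angles_prev)
    # wrap into (-45 deg, +45 deg]; larger turns would need the 90-deg ambiguity resolved
    return (d + np.pi / 4) % (np.pi / 2) - np.pi / 4

def prealign(points_xy, dtheta):
    """Rotate a 2D point set by -dtheta so ICP only refines a small residual."""
    c, s = np.cos(-dtheta), np.sin(-dtheta)
    R = np.array([[c, -s], [s, c]])
    return points_xy @ R.T

# Toy usage: the grid is seen at ~10 deg in the previous frame and ~25 deg now.
prev_angles = np.deg2rad([10.0, 100.0, 9.5, 99.0])   # hypothetical detected line angles
curr_angles = np.deg2rad([25.0, 115.0, 24.5, 114.0])
dtheta = heading_change(prev_angles, curr_angles)
print(np.rad2deg(dtheta))                            # ~15 deg of apparent grid rotation
```

After such a pre-rotation, ICP only has to correct a small residual transformation, which is what keeps the registration from diverging when the initial alignment would otherwise be far off.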
Publication history
  • Received: 2013-03-15
  • Revised: 2013-06-17
  • Published: 2013-10-20
