Volume 7 Number 1
February 2010
Article Contents
Jin-Kui Chu, Rong-Hua Li, Qing-Ying Li and Hong-Qing Wang. A Visual Attention Model for Robot Object Tracking. International Journal of Automation and Computing, vol. 7, no. 1, pp. 39-46, 2010. doi: 10.1007/s11633-010-0039-1

A Visual Attention Model for Robot Object Tracking

Author Biography:
  • Corresponding author: Jin-Kui Chu graduated from Hangzhou Dianzi University (HDU), PRC in 1986.
  • Received: 2009-03-01
Fund Project:

Supported by National Basic Research Program of China (973 Program) (No. 2006CB300407) and National Natural Science Foundation of China (No. 50775017)

  • Inspired by human behaviors, a robot object tracking model is proposed on the basis of the visual attention mechanism, consistent with the theory of topological perception. The model integrates image-driven, bottom-up attention and object-driven, top-down attention, whereas previous attention models have mostly focused on either bottom-up or top-down attention alone. In the bottom-up component, the whole scene is segmented into the ground region and the salient regions. Guided by the top-down strategy, which is realized by a topological graph, the object regions are separated from the salient regions; the remaining salient regions are treated as barrier regions. To evaluate the model, a mobile robot platform is developed, on which several experiments are implemented. The experimental results indicate that processing an image with a resolution of 752 × 480 pixels takes less than 200 ms and that the object regions are extracted intact. A comparison of the proposed model with an existing model shows that the proposed model has advantages in robot object tracking in terms of speed and efficiency.
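The two-stage pipeline described in the abstract can be sketched in a few lines. The sketch below is purely illustrative: it works on a toy pre-labeled scene, and it substitutes a trivial region signature (region area) for the paper's topological graph, whose details are not given here. All function names, the threshold, and the area-based signature are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (NOT the authors' method): a bottom-up pass splits a
# labeled scene into ground and salient regions; a top-down pass keeps the
# salient regions whose signature matches a target template (object regions)
# and treats the rest as barrier regions.

def bottom_up_segment(scene, ground_label=0):
    """Split a labeled scene into the ground region and salient regions."""
    regions = {}
    for r, row in enumerate(scene):
        for c, label in enumerate(row):
            regions.setdefault(label, []).append((r, c))
    ground = regions.pop(ground_label, [])
    return ground, regions  # salient regions keyed by label

def top_down_select(salient, signature, target_signature):
    """Keep regions whose signature matches the target (object regions);
    all other salient regions become barrier regions."""
    objects, barriers = {}, {}
    for label, cells in salient.items():
        dest = objects if signature(cells) == target_signature else barriers
        dest[label] = cells
    return objects, barriers

# Toy scene: 0 = ground, 1 and 2 = salient regions.
scene = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 0, 0],
    [2, 2, 2, 0],
]
area = len  # stand-in signature: region area (the paper uses a topological graph)
ground, salient = bottom_up_segment(scene)
objects, barriers = top_down_select(salient, area, target_signature=5)
```

Here region 2 (area 5) is selected as the object region and region 1 becomes a barrier region; in the paper, the matching criterion is the topological graph rather than area.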

