International Journal of Automation and Computing 2018, Vol. 15 Issue (5) :559-569    DOI: 10.1007/s11633-018-1139-6
Special Issue on Intelligent Control and Computing in Advanced Robotics
A Selective Attention Guided Initiative Semantic Cognition Algorithm for Service Robot
Huan-Zhao Chen, Guo-Hui Tian, Guo-Liang Liu
School of Control Science and Engineering, Shandong University, Jinan 250061, China
Abstract   With the development of artificial intelligence and robotics, research on service robots has made significant progress in recent years. A service robot must perceive users and its surroundings in an unstructured domestic environment and, based on that perception, understand the situation and discover service tasks, so that it can assist humans with home service or health care more accurately and proactively. Humans can focus on salient things among a mass of observed information, and they can use semantic knowledge to make plans based on their understanding of the environment. Through an intelligent space platform, we attempt to transfer this process to service robots. This paper proposes a selective attention guided initiative semantic cognition algorithm in intelligent space, specifically designed to provide robots with the cognition needed to perform service tasks. First, an attention selection model is built based on saliency computing and key areas; a region highly relevant to the service task can be located and is referred to as the focus of attention (FOA). Second, a recognition algorithm for the FOA is proposed based on a neural network; common objects and user behaviors are recognized in this step. Finally, a unified semantic knowledge base and a corresponding reasoning engine are constructed from the recognition results. Experiments in a real-life scenario demonstrate that our approach mimics the human recognition process, enabling robots to understand the environment and discover service tasks based on their own cognition. In this way, service robots can act smarter and achieve better service efficiency in their daily work.
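The three-stage pipeline outlined in the abstract (saliency-driven attention selection, FOA recognition, and knowledge-based task reasoning) can be sketched roughly as follows. This is an illustrative approximation, not the authors' implementation: the spectral-residual saliency variant, the exhaustive window-scan FOA selector, and the toy knowledge table are all assumptions made for the sake of a short runnable example, and the paper's neural-network recognition step is stubbed out by a label argument.

```python
import numpy as np

def saliency_map(img):
    """Stage 1 (sketch): spectral-residual-style saliency for a 2-D
    grayscale float array. The residual is the log amplitude spectrum
    minus its 3x3 local average; saliency is the inverse transform."""
    spectrum = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(spectrum))
    phase = np.angle(spectrum)
    k = 3  # box-blur kernel size for the local average
    pad = np.pad(log_amp, k // 2, mode="edge")
    blur = sum(
        pad[i:i + log_amp.shape[0], j:j + log_amp.shape[1]]
        for i in range(k) for j in range(k)
    ) / (k * k)
    residual = log_amp - blur
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max() if sal.max() > 0 else sal

def focus_of_attention(sal, size=8):
    """Locate the FOA: top-left corner of the most salient
    size x size window (brute-force scan for clarity)."""
    h, w = sal.shape
    best, best_rc = -1.0, (0, 0)
    for r in range(h - size + 1):
        for c in range(w - size + 1):
            s = sal[r:r + size, c:c + size].sum()
            if s > best:
                best, best_rc = s, (r, c)
    return best_rc

# Stage 3 (sketch): a toy semantic knowledge base mapping a recognized
# object label to a service task; the real system uses a unified
# knowledge base with a reasoning engine.
KNOWLEDGE = {"cup": "fetch_drink", "book": "tidy_desk"}

def plan_service(recognized_label):
    """Reason from a recognition result (here, a given label standing in
    for the neural-network recognizer) to a service task."""
    return KNOWLEDGE.get(recognized_label, "observe")
```

In use, the robot would compute `saliency_map` on a camera frame, crop the region returned by `focus_of_attention`, feed that crop to the recognition network, and pass the predicted label to `plan_service` to decide whether a task should be initiated.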
Keywords: Service robot; cognition computing; selective attention; semantic knowledge base; artificial neural network
Received: 2018-01-20; Published: 2018-06-04

This work was supported by National Natural Science Foundation of China (Nos. 61773239, 91748115 and 61603213), Natural Science Foundation of Shandong Province (No. ZR2015FM007), and Taishan Scholars Program of Shandong Province.

Corresponding Authors: Guo-Hui Tian     Email: g.h.tian@sdu.edu.cn
About the authors: Huan-Zhao Chen's research interests include service robots, robot recognition, and semantic knowledge processing and reasoning. E-mail: drwonkaa@gmail.com; ORCID iD: 0000-0003-4667-1291. Guo-Hui Tian's research interests include service robots, intelligent space, cloud robotics, and brain-inspired intelligent robotics. E-mail: g.h.tian@sdu.edu.cn (Corresponding author); ORCID iD: 0000-0001-8332-3064. Guo-Liang Liu's research interests include service robots, intelligent space, and SLAM. E-mail: liuguoliang@sdu.edu.cn
Cite this article:   
Huan-Zhao Chen, Guo-Hui Tian, Guo-Liang Liu. A Selective Attention Guided Initiative Semantic Cognition Algorithm for Service Robot[J]. International Journal of Automation and Computing , vol. 15, no. 5, pp. 559-569, 2018.
http://www.ijac.net/EN/10.1007/s11633-018-1139-6      or      http://www.ijac.net/EN/Y2018/V15/I5/559
Copyright 2010 by International Journal of Automation and Computing