Robust Object Tracking via Information Theoretic Measures

Wei-Ning Wang, Qi Li, Liang Wang. Robust Object Tracking via Information Theoretic Measures[J]. International Journal of Automation and Computing. doi: 10.1007/s11633-020-1235-2


###### Author Biographies

Wei-Ning Wang received the B.Eng. degree in automation from North China Electric Power University, China in 2015. She is currently a Ph.D. candidate at the National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA), China. Her research interests include computer vision, pattern recognition and video analysis. E-mail: weining.wang@cripac.ia.ac.cn. ORCID iD: 0000-0001-7299-6431

Qi Li received the B.Eng. degree in automation from the China University of Petroleum, China in 2011 and the Ph.D. degree in pattern recognition and intelligent systems from CASIA, China in 2016. He is currently an associate professor with the Center for Research on Intelligent Perception and Computing, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, China. His research interests include face recognition, computer vision, and machine learning. E-mail: qli@nlpr.ia.ac.cn (Corresponding author). ORCID iD: 0000-0002-7905-2860

Liang Wang received both the B.Eng. and M.Eng. degrees from Anhui University, China in 1997 and 2000, respectively, and the Ph.D. degree from the Institute of Automation, Chinese Academy of Sciences (CASIA), China in 2004. From 2004 to 2010, he was successively a research assistant at Imperial College London, UK and Monash University, Australia, a research fellow with the University of Melbourne, Australia, and a lecturer with the University of Bath, UK. He is currently a full professor of the Hundred Talents Program at the National Lab of Pattern Recognition, CASIA, China, and is an IEEE Fellow and IAPR Fellow. His research interests include machine learning, pattern recognition, and computer vision. E-mail: wangliang@nlpr.ia.ac.cn
Figure 1.  The flowchart of our algorithm, which consists of three main parts. First, candidate states around the previous tracking result are sampled using a Brownian motion model. Then, an observation model based on a low-dimensional subspace method is adopted to obtain the best candidate. Finally, a novel online update scheme based on the information theoretic measures is used to update the tracking template.
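The candidate sampling step in Figure 1 can be sketched as a Gaussian random walk around the previous tracking result, as is common in particle-filter-style trackers. The 4-D state layout, the per-dimension standard deviations, and the function name below are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

def sample_candidates(prev_state, n_candidates=600,
                      sigmas=(4.0, 4.0, 0.01, 0.0), rng=None):
    """Draw candidate states around the previous tracking result with a
    Gaussian (Brownian-motion) perturbation.

    prev_state: assumed (x, y, scale, rotation) state vector.
    sigmas: per-dimension standard deviations of the random walk;
            a zero entry freezes that dimension.
    """
    rng = np.random.default_rng() if rng is None else rng
    prev = np.asarray(prev_state, dtype=float)
    # One Gaussian step per candidate; scale broadcasts over the last axis.
    noise = rng.normal(0.0, sigmas, size=(n_candidates, prev.size))
    return prev + noise

candidates = sample_candidates((120.0, 80.0, 1.0, 0.0))
print(candidates.shape)  # (600, 4)
```

Each candidate is then scored by the observation model, and the highest-scoring one becomes the new tracking result.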

Figure 2.  Comparison of different loss functions. Compared with the $l_{2}$ or $l_{1}$ loss functions, the correntropy-based loss function is more robust to various noises.
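The contrast in Figure 2 can be reproduced numerically. The Welsch-type loss below is one standard form induced by the Gaussian correntropy kernel; the kernel width `sigma` is an assumed free parameter. Unlike the $l_2$ and $l_1$ losses, it is bounded, so gross outliers contribute at most a constant:

```python
import numpy as np

def l2_loss(e):
    return e ** 2

def l1_loss(e):
    return np.abs(e)

def correntropy_loss(e, sigma=1.0):
    # Welsch-type loss induced by the Gaussian correntropy kernel.
    # Bounded above by sigma**2, so large residuals saturate instead
    # of dominating the objective.
    return sigma ** 2 * (1.0 - np.exp(-e ** 2 / (2.0 * sigma ** 2)))

errors = np.array([0.1, 1.0, 10.0, 100.0])
print(l2_loss(errors))           # grows quadratically
print(l1_loss(errors))           # grows linearly
print(correntropy_loss(errors))  # saturates near sigma**2 = 1.0
```

Near zero the correntropy loss behaves like a scaled $l_2$ loss, which is why it keeps the efficiency of least squares on inliers while suppressing outliers.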

Figure 3.  An illustrative example of the auxiliary variable $p$.
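In half-quadratic optimization of a correntropy objective (the setting in which such an auxiliary variable typically appears), $p$ acts as a per-residual weight that turns the robust problem into a sequence of weighted least-squares subproblems. A minimal sketch, assuming the Gaussian kernel form:

```python
import numpy as np

def auxiliary_weights(residuals, sigma=1.0):
    """Half-quadratic auxiliary variable for a correntropy objective:
    each residual e_i receives weight p_i = exp(-e_i**2 / (2 sigma**2)),
    so outlier entries are down-weighted in the next quadratic subproblem."""
    e = np.asarray(residuals, dtype=float)
    return np.exp(-e ** 2 / (2.0 * sigma ** 2))

p = auxiliary_weights([0.0, 0.5, 3.0])
print(p)  # inlier residuals get weights near 1, outliers near 0
```

Alternating between updating $p$ (closed form above) and solving the resulting weighted quadratic problem monotonically decreases the half-quadratic objective.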

Figure 4.  Center location errors of different methods on thirteen video sequences. The smaller the location error, the better the tracker.

Figure 5.  Overlap rates of different methods on thirteen video clips. The higher the overlap rate, the better the tracker.
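The two evaluation metrics in Figures 4 and 5 can be computed directly from the predicted and ground-truth bounding boxes of each frame. The `(x, y, w, h)` box convention below is an assumption:

```python
import math

def center_location_error(box_a, box_b):
    """Euclidean distance between the centers of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    return math.hypot((ax + aw / 2) - (bx + bw / 2),
                      (ay + ah / 2) - (by + bh / 2))

def overlap_rate(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0
```

For example, two 10×10 boxes offset horizontally by 5 pixels have a center error of 5 px and an overlap rate of 1/3.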

Figure  6.  Qualitative results on some typical frames with partial occlusions

Figure  7.  Qualitative results on some typical frames with illumination variations, background clutters and abrupt motions

Figure  8.  Precision plots of different methods based on attributes of image sequences on the OTB-13 dataset. (a)–(k): precision plots on 11 tracking challenges of fast motion, background clutter, motion blur, deformation, illumination variation, in-plane rotation, low resolution, occlusion, out-of-plane rotation, out of view and scale variation. (l): overall precision plots of OPE. The legend contains the precision score of each method.

Figure  9.  Success plots of different methods based on attributes of image sequences on the OTB-13 dataset. (a)–(k): success plots on 11 tracking challenges of fast motion, background clutter, motion blur, deformation, illumination variation, in-plane rotation, low resolution, occlusion, out-of-plane rotation, out of view and scale variation. (l): overall success plots of OPE. The legend contains the AUC score of each method.
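The precision and AUC scores reported in the legends of Figures 8 and 9 follow the standard OTB evaluation protocol: precision is the fraction of frames whose center location error falls within a pixel threshold (20 px by convention), and the success AUC averages the success rate over a sweep of overlap thresholds. A minimal sketch of both scores:

```python
import numpy as np

def precision_score(center_errors, threshold=20.0):
    """Fraction of frames whose center location error is within threshold px."""
    e = np.asarray(center_errors, dtype=float)
    return float(np.mean(e <= threshold))

def success_auc(overlaps, thresholds=None):
    """Area under the success plot: mean success rate over overlap
    thresholds swept from 0 to 1."""
    if thresholds is None:
        thresholds = np.linspace(0.0, 1.0, 101)
    o = np.asarray(overlaps, dtype=float)
    rates = [(o > t).mean() for t in thresholds]
    return float(np.mean(rates))
```

Both scores lie in [0, 1]; the one-pass evaluation (OPE) runs the tracker once from the ground-truth initial state and accumulates these per-frame quantities over the whole sequence.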


##### Publication History
• Received: 2020-02-10
• Accepted: 2020-04-20
• Published online: 2020-05-30
