Volume 14 Number 5
October 2017
Article Contents
Danko Nikolić. Why Deep Neural Nets Cannot Ever Match Biological Intelligence and What To Do About It? International Journal of Automation and Computing, vol. 14, no. 5, pp. 532-541, 2017. doi: 10.1007/s11633-017-1093-8

# Why Deep Neural Nets Cannot Ever Match Biological Intelligence and What To Do About It?

Author Biography:
• Corresponding author: Danko Nikolić received a degree in psychology and a degree in civil engineering from the University of Zagreb, Croatia. He received the Master's degree and the Ph.D. degree in cognitive psychology from the University of Oklahoma, USA. In 2010, he received the Private Docent title from the University of Zagreb, and in 2014 an associate professor title from the same university. He is now associated with the Frankfurt Institute for Advanced Studies and works at DXC Technology in the field of artificial intelligence and data science.
He has a keen interest in addressing the explanatory gap between the brain and the mind, that is, in how the physical world of neuronal activity produces the mental world of perception and cognition. For many years he headed an electrophysiological lab at the Max Planck Institute for Brain Research. He approached the explanatory gap from both sides, bottom-up and top-down. The bottom-up approach begins from brain physiology; the top-down approach investigates behavior and experiences. Each of the two approaches led him to develop a theory: the work on behavior and experiences led to the discovery of the phenomenon of ideasthesia (meaning "sensing concepts"), and the work on physiology resulted in the theory of practopoiesis (meaning "creation of actions").
He has conducted many empirical studies against the background of those theories. These studies involved simultaneous recordings of the activity of 100+ neurons in the visual cortex (extracellular recordings), behavioral and imaging studies in visual cognition (attention, working memory, long-term memory), and empirical investigations of phenomenal experiences (synesthesia). His research was supported by grants from the Hertie Foundation, the Deutsche Forschungsgemeinschaft (DFG) and other sources.
E-mail: danko.nikolic@gmail.com (Corresponding author)
ORCID iD: 0000-0002-9317-8494
• Accepted: 2017-05-10
• Published Online: 2017-07-04
• Abstract: The recently introduced theory of practopoiesis offers an account of how adaptive intelligent systems are organized. According to that theory, biological agents adapt at three levels of organization, and this structure also applies to our brains. This is referred to as the tri-traversal theory of the organization of mind, or a T3-structure for short. To implement a similar T3-organization in an artificially intelligent agent, it is necessary to have multiple policies, a concept commonly used in the theory of reinforcement learning. These policies have to form a hierarchy. We define adaptive practopoietic systems in terms of a hierarchy of policies and calculate whether the total variety of behavior required by the real-life conditions of an adult human can be satisfactorily accounted for by a traditional approach to artificial intelligence based on T2-agents, or whether a T3-agent is needed instead. We conclude that the complexity of real life can be dealt with appropriately only by a T3-agent. This means that the current approaches to artificial intelligence, such as deep architectures of neural networks, will not suffice with fixed network architectures. Rather, they will need to be equipped with intelligent mechanisms that rapidly alter the architectures of those networks.
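The abstract's central idea, a hierarchy of policies in which each slower-adapting level reconfigures the faster level below it, can be illustrated with a minimal sketch. All class and method names below are hypothetical and invented for illustration; this is not code from the paper, only one way to model the three adaptive traversals of a T3-agent:

```python
class Policy:
    """A parameterized mapping from inputs to outputs."""

    def __init__(self, params):
        self.params = params

    def act(self, x):
        # Placeholder behavior: a simple scalar response.
        return self.params * x


class T3Agent:
    """Three adaptive traversals, modeled abstractly as a slow policy
    that builds an intermediate policy, which in turn builds the fast
    policy that produces behavior."""

    def __init__(self):
        self.slow = Policy(params=2)  # slowest level (cf. learning-to-learn)
        self.intermediate = None      # constructed by the slow level
        self.fast = None              # constructed by the intermediate level

    def adapt_intermediate(self, feedback):
        # Slow traversal: the slow policy reconfigures the intermediate one.
        self.intermediate = Policy(self.slow.act(feedback))

    def adapt_fast(self, feedback):
        # Intermediate traversal: rapidly alters the fast policy,
        # i.e., the "network architecture" that generates behavior.
        self.fast = Policy(self.intermediate.act(feedback))

    def behave(self, stimulus):
        # Fast traversal: the current fast policy produces behavior.
        return self.fast.act(stimulus)


agent = T3Agent()
agent.adapt_intermediate(feedback=3)  # slow level sets intermediate params to 6
agent.adapt_fast(feedback=1)          # intermediate level sets fast params to 6
print(agent.behave(5))                # fast level responds: 6 * 5 = 30
```

The point of the sketch is structural: a T2-agent would fix `intermediate` once and only ever adapt `fast`, whereas a T3-agent keeps all three levels adaptive, so the machinery that builds the fast policy is itself rebuilt as circumstances change.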
Figures (1)


