Engineering a Less Artificial Intelligence

Neuron. 2019 Sep 25;103(6):967-979. doi: 10.1016/j.neuron.2019.08.034.

Abstract

Despite enormous progress in machine learning, artificial neural networks still lag behind brains in their ability to generalize to new situations. Given identical training data, differences in generalization are caused by many defining features of a learning algorithm, such as network architecture and learning rule. Their joint effect, called "inductive bias," determines how well any learning algorithm, or brain, generalizes: robust generalization needs good inductive biases. Artificial networks use rather nonspecific biases and often latch onto patterns that are only informative about the statistics of the training data but may not generalize to different scenarios. Brains, on the other hand, generalize across comparatively drastic changes in the sensory input all the time. We highlight some shortcomings of state-of-the-art learning algorithms compared to biological brains and discuss several ideas about how neuroscience can guide the quest for better inductive biases by providing useful constraints on representations and network architecture.
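
The abstract's central claim, that two learners trained on identical data can generalize very differently depending on their architectural inductive bias, can be made concrete with a small experiment. The sketch below is not from the paper; it is a minimal illustration under assumed settings (the synthetic motif-detection task, the `make_data` helper, the `ConvNet` module, and all hyperparameters are invented for this example). An MLP and a 1D convolutional network see the same training signals, in which a short motif appears only on the left half; at test time the motif appears only on the right half. The convolution's weight sharing plus global pooling builds translation invariance into the architecture, so it typically keeps generalizing, while the MLP typically drops toward chance.

```python
# Illustrative sketch (assumed task and hyperparameters, not the paper's method):
# identical training data, two architectures, different inductive biases.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_data(n, positions):
    """Noisy signals of length 32; label 1 iff the motif [1, -1, 1]
    is embedded at one of the given positions."""
    x = 0.1 * torch.randn(n, 32)
    y = torch.randint(0, 2, (n,))
    motif = torch.tensor([1.0, -1.0, 1.0])
    for i in range(n):
        if y[i] == 1:
            p = positions[torch.randint(len(positions), (1,)).item()]
            x[i, p:p + 3] += motif
    return x, y.float()

# Train with motifs on the left half only; test with motifs shifted right.
x_tr, y_tr = make_data(2000, positions=list(range(0, 13)))
x_te, y_te = make_data(2000, positions=list(range(16, 29)))

# Fully connected network: no built-in notion of translation.
mlp = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

class ConvNet(nn.Module):
    """Convolution + global max pooling: translation-invariant by design."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(1, 8, kernel_size=3)
        self.head = nn.Linear(8, 1)

    def forward(self, x):
        h = torch.relu(self.conv(x.unsqueeze(1)))  # (n, 8, 30)
        h = h.max(dim=2).values                    # pool over positions
        return self.head(h)

def train_and_eval(model):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(300):  # full-batch gradient steps
        opt.zero_grad()
        loss = loss_fn(model(x_tr).squeeze(-1), y_tr)
        loss.backward()
        opt.step()
    with torch.no_grad():
        pred = (model(x_te).squeeze(-1) > 0).float()
    return (pred == y_te).float().mean().item()

print("MLP  accuracy on shifted motifs:", train_and_eval(mlp))
print("Conv accuracy on shifted motifs:", train_and_eval(ConvNet()))
# Typically the conv net stays near 1.0 while the MLP falls toward 0.5:
# same training data, different inductive bias, different generalization.
```

The design choice doing the work here is architectural, not data driven: weight sharing and pooling encode the assumption "the motif's identity matters, its position does not," which is exactly the kind of useful constraint on representations the abstract argues neuroscience could help identify.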

Keywords: artificial intelligence; generalization; inductive bias; machine learning; neuroscience; robustness; sensory systems.

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't
  • Research Support, U.S. Gov't, Non-P.H.S.
  • Review

MeSH terms

  • Algorithms*
  • Artificial Intelligence
  • Bias
  • Brain / physiology
  • Deep Learning
  • Generalization, Psychological
  • Humans
  • Machine Learning*
  • Neural Networks, Computer*
  • Neurosciences