The History of Neural Networks

Frank Rosenblatt, who invented the perceptron. (Division of Rare and Manuscript Collections)


The huge investments made in voice assistants, autonomous driving and facial recognition over the last decade seem to suggest that the technology behind them is brand new. That's only half true.

Consider artificial neural networks, the systems behind AI products. They're anything but just-discovered. Their direct origins, for example, can be traced to an 18-year-old with no formal education, who snuck into Bertrand Russell's lectures when he wasn't working. For nearly as long as the scientific community has understood what the brain's neural networks do, professors, enthusiasts and R&D departments have been trying to replicate them in machines.

1873 and 1890: Philosophers Alexander Bain and William James independently argue that the human body, in its thoughts and physical movements, is driven by neural activity in the brain. Bain believed that the mind forms memories from repeated connections between neurons. James was more interested in the electrical current that ran through the brain's biological neural network.

1943: Warren McCulloch and Walter Pitts write a proof of principle for the first artificial neural network. At its core, the McCulloch-Pitts neuron is a simple mathematical function, and the first artificial neuron of its kind: it sums its binary inputs and fires if the sum reaches a threshold. Pitts, uneducated and homeless when he first met McCulloch, went on to have a storied career at MIT.
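For the curious, here is a minimal sketch in modern Python of what that function looks like. (This is an illustration, not the 1943 formulation; the original neuron also allowed inhibitory inputs, which this toy version omits.)

```python
# A McCulloch-Pitts neuron: sum the binary inputs, fire only if
# the sum reaches a fixed threshold. (Inhibitory inputs omitted.)
def mp_neuron(inputs, threshold):
    return int(sum(inputs) >= threshold)

# With a threshold of 2 over two inputs, the neuron computes logical AND.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mp_neuron([a, b], threshold=2))
```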

1940s: Psychologist Donald Hebb formulates Hebbian Learning, a hypothesis about the relationship between neurons in the brain. According to Hebb, the connection between two neurons should strengthen the more often they fire together. Hebbian Learning later becomes a basis for unsupervised learning, in which systems find patterns in unlabeled data without explicit instruction, strengthening the relevant neural pathways on their own.
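The rule itself is disarmingly simple. Here is a rough sketch; the learning rate and the activity trace are invented purely for illustration:

```python
# Hebbian update: a weight grows in proportion to how often its two
# neurons are active at the same time ("fire together, wire together").
eta = 0.1                                   # learning rate (assumed value)
w = 0.0                                     # connection strength
activity = [(1, 1), (1, 0), (1, 1), (0, 1), (1, 1)]  # (pre, post) firings
for pre, post in activity:
    w += eta * pre * post                   # strengthens only on co-firing
print(w)                                    # ~0.3 after three co-firings
```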

1955: Dartmouth math professor John McCarthy coins the term "Artificial Intelligence" in a research proposal, in which he claims that the basis for general artificial intelligence can be developed over a single summer. It isn't developed that summer, nor has it been to this day.

1958: With Navy funding, psychologist Frank Rosenblatt creates the perceptron, a neural network first run on an IBM 704, an early computer. The perceptron is the world's first image recognition machine. Today's advanced neural nets feed information forward and backward in cycles across a thicket of neurons. Rosenblatt's is far simpler: it is feedforward, meaning information moves in only one direction through the net. The perceptron excites the world, and the inflated expectations it raises help set the stage for the first AI winter.
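In modern terms, the perceptron is a single layer of weights with a simple error-driven update. A hedged sketch follows, trained on logical OR, a linearly separable task the perceptron can handle, rather than Rosenblatt's actual image data:

```python
import numpy as np

# A single-layer perceptron learning logical OR.
# Weights are nudged toward each misclassified example.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

w = np.zeros(2)
b = 0.0
for _ in range(10):                       # a few passes over the data suffice
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)        # feedforward: one direction only
        w += (yi - pred) * xi             # the perceptron learning rule
        b += (yi - pred)

print([int(w @ xi + b > 0) for xi in X])  # [0, 1, 1, 1]
```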

1969: Marvin Minsky and Seymour Papert publish a book called Perceptrons, which cools enthusiasm for AI for the next six years. In it, Minsky and Papert prove that Rosenblatt's single-layer perceptron can't compute an "exclusive or" (XOR) operation, which determines whether its two inputs differ. To emphasize their point, they design a strange, difficult-to-parse book cover that even the human eye has trouble understanding.
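Their point is easy to demonstrate today. The sketch below is mine, not theirs (they gave a formal proof rather than a search): it brute-forces a grid of weights and finds that no single linear threshold unit reproduces XOR.

```python
import itertools

# XOR truth table: output is 1 exactly when the two inputs differ.
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def separates(w1, w2, b):
    """True if this linear threshold unit gets every XOR case right."""
    return all((w1 * x1 + w2 * x2 + b > 0) == bool(out)
               for (x1, x2), out in XOR.items())

# Brute-force a coarse grid of weights and biases: none of them works,
# because no single line can split the XOR classes.
grid = [i / 4 for i in range(-8, 9)]
found = any(separates(w1, w2, b)
            for w1, w2, b in itertools.product(grid, repeat=3))
print("single-layer solution found:", found)   # False
```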

1969: The same year Minsky and Papert show that simple neural nets can't tell whether two inputs differ, two other researchers describe the method that will eventually solve the problem. Backpropagation, derived by Arthur E. Bryson and Yu-Chi Ho in the context of optimal control, remains a key training method in neural networks today. It doesn't gain traction in the scientific community for years, however.

1982: John Hopfield develops the Hopfield Network, an early recurrent neural network that works as an approximate, content-addressable memory: it stores patterns as stable states and can recover a whole pattern from a partial or corrupted fragment. Recurrent neural nets in general exhibit memory by feeding their own activity back in as input, which later allows them to decipher things like speech, where the network uses the words it has already processed to make sense of the end of a phrase.
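A toy version of that memory effect, with the pattern size and noise level invented for illustration: store one pattern with a Hebbian-style rule, corrupt it, and let the network settle back to the original.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store one random +/-1 pattern using a Hebbian-style outer-product rule.
pattern = np.sign(rng.standard_normal(16))
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0)                 # no self-connections

# Corrupt a quarter of the units, then let the network settle.
state = pattern.copy()
state[:4] *= -1
for _ in range(5):                     # repeated updates reach a stable state
    state = np.sign(W @ state)
    state[state == 0] = 1

print("pattern recovered:", bool(np.array_equal(state, pattern)))  # True
```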

1982: On the other side of the Pacific, Japan announces the Fifth Generation Computer Systems project, a government-led attempt to create an advanced computer. It stirs up worry in the United States and leads to a boom in AI investment. Japan's ambitions fizzle after ten years, according to the New York Times: "After spending more than $400 million on its widely heralded Fifth Generation computer project," the Times wrote in 1992, "the Japanese Government said this week that it was willing to give away the software developed by the project to anyone who wanted it, even foreigners."

1986: Backpropagation, first found in 1969 or earlier, is rediscovered and popularized by David Rumelhart, Geoffrey Hinton and Ronald J. Williams. Hinton returns to the spotlight again and again throughout his career.
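Backpropagation is what finally cracks the XOR problem from 1969: add a hidden layer, then push the output error backward through the network to compute the weight updates. Below is a small illustrative sketch; the architecture and learning rate are my choices, not the 1986 paper's.

```python
import numpy as np

# Train a tiny 2-4-1 sigmoid network on XOR with backpropagation.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

W1 = rng.standard_normal((2, 4)); b1 = np.zeros(4)
W2 = rng.standard_normal((4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # backward pass: chain rule,
    d_h = (d_out @ W2.T) * h * (1 - h)       # layer by layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(out.round(2).ravel())   # should approach [0, 1, 1, 0] for most seeds
```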

1990s: As early as 1991, German computer scientists Sepp Hochreiter and Jürgen Schmidhuber, among others, develop advanced recurrent neural networks that go on to underlie features at Google, Amazon, Apple and Facebook. Their network, published in 1997 as Long Short-Term Memory (LSTM), uses gated memory cells to hold on to information across long sequences, and has been described as "the most commercial AI achievement."
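The core trick, in rough outline, is that gated memory cell. The sketch below is a loose illustration of a single LSTM step with made-up sizes and random weights, not the 1997 paper's exact formulation:

```python
import numpy as np

# One LSTM step: gates decide what to forget, what to write, and what
# to expose, letting the cell state carry information across time.
def lstm_step(x, h, c, W, b):
    z = W @ np.concatenate([x, h]) + b          # all four gates at once
    i, f, o, g = np.split(z, 4)
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sig(i), sig(f), sig(o)
    c_new = f * c + i * np.tanh(g)              # gated memory update
    h_new = o * np.tanh(c_new)                  # gated output
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4                              # toy sizes (assumed)
W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(5):                              # run over a short sequence
    h, c = lstm_step(rng.standard_normal(n_in), h, c, W, b)
print(h.round(3))
```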

1998: Yann LeCun, who'd go on to head AI at Facebook, publishes LeNet-5, a convolutional neural network used by banks to read handwritten checks. The net has seven layers and is vastly deeper than the pioneering single-layer nets of decades past. The convolutional neural network would form the basis of much of image recognition, going on to enable the massive facial recognition models applied to photographs, be it on social media or security footage.
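"Convolutional" refers to sliding one small set of weights across the whole image, so a feature detector learned in one spot works everywhere. A bare-bones sketch: the image and filter here are contrived for illustration, whereas LeNet-5 learned its filters from data.

```python
import numpy as np

# Slide a small filter over an image; each output value is the dot
# product of the filter with one patch of the image.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.zeros((6, 6)); image[:, 3:] = 1.0   # dark left half, bright right
edge_filter = np.array([[-1.0, 1.0]] * 2)      # responds to left-to-right jumps
print(conv2d(image, edge_filter))              # nonzero only along the edge
```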

2006: Twenty years after repopularizing backpropagation, Hinton and his collaborators show how to train stacks of Restricted Boltzmann Machines, stochastic neural networks used to find hidden patterns in raw data and to form abstract topics from language data, among other uses.

2010s: Researchers at Google, Facebook, OpenAI, Tesla and elsewhere use reinforcement learning algorithms to create an environment in which AI agents can drive cars, solve puzzles, and compete with humans in games.

2012: Hinton's student Alex Krizhevsky develops AlexNet, a convolutional neural network based on LeCun's work, which wins the highly competitive annual ImageNet image recognition competition. See Diagram's write-up on ImageNet and its significance here. Krizhevsky's company, co-founded with Hinton, is acquired by Google, and he leaves Google in 2017.

2015: Asked by fellow researcher Nick Bostrom what he believes will happen with AI in the future, Geoffrey Hinton says, "I think political systems will use it to terrorize people." But he continues to work in the field, he says, because "the prospect of discovery is too sweet."
