Is AI research headed toward a dead end? Let's look at past AI winters for clues

February 24th
A Google data center in Hamina, Finland. (Google)

This post is sponsored by Brilliant, which offers courses like Introduction to Neural Networks and the more advanced Artificial Neural Networks and Machine Learning, as well as fundamental courses like Programming with Python, and Algorithm Fundamentals. Sign up for Brilliant through Diagram and get 20% off today.

A team of researchers writes a proposal outlining their plan to “find how to make machines use language, form abstractions and concepts.” They promise their funders it’s possible, and describe a myriad of applications for the technology: machine translation, image recognition, and lifelike assistants.

The algorithms they write will “solve the kinds of problems now reserved for humans, and improve themselves.”

But the team ultimately comes up short and, despite making valuable progress, becomes the victim of its own salesmanship. After two decades of overpromising, funding dries up, and so does the work.

This is the story of John McCarthy and his generation of AI researchers from Dartmouth College, starting back in 1955. But some think it could soon be the story of Google, SenseTime, and Tesla, too.

McCarthy’s ambitious plans set the tone for the decades of research that followed. His team’s optimism helped secure funding, but that same enthusiasm also helped trigger the first generational ebb in funding, known as an AI winter.

There have been two major ebbs since McCarthy first pitched his summer research project. Some say we’re headed toward another today. Let’s take a look at what those past winters looked like.

The Lighthill report and DARPA’s funding cuts

In 1973, nearly two decades after McCarthy’s original proposal, the applied mathematician Sir James Lighthill delivered a survey of AI research commissioned by the UK’s Science Research Council.

His write-up came to be known as “the Lighthill report,” and it was a key driver in the UK’s decision to cut funding for artificial intelligence research at all but two universities.

Lighthill noted how little progress had been made in the United States, where federal funding had been flush for work in machine translation. He wrote that in the UK and elsewhere, it was “the most generalised types of studies whose end-products have proved most disappointing.”

At the same time, DARPA in the US had drastically cut back on funding for AI.

Hans Moravec, a robotics researcher at Carnegie Mellon, described the disconnect between promises and deliverables at the time:

“Many researchers were caught up in a web of increasing exaggeration,” he said. “Their initial promises to DARPA had been much too optimistic. Of course, what they delivered stopped considerably short of that.”

LISP and Japan’s Fifth-Generation Computer failures

It took nearly a decade for enthusiasm to return to the field, partly spurred by the US Government’s concern over a large project taking place in Japan.

In 1981, Japan announced its Fifth-Generation Computer project, which would ultimately receive some $850 million in funding. It was an attempt to build a computer that could reason like a human, carry on conversations, and translate languages.

That same decade, researchers in the United States began widely promoting the LISP programming language, and the specialized machines built to run it, as the basis for AI research. DARPA funding followed. However, LISP machines soon lost traction as Apple and IBM developed computers that were far cheaper and more efficient. LISP, and its presence in the AI world, fell by the wayside.

As for Japan, its Fifth-Generation Computer project bookended the middle generation of AI research and hype. Its announcement helped end the first AI winter, and its collapse helped begin the next, when in 1992 its chief researcher admitted failure.

"Ten years ago we faced criticism of being too reckless," project head researcher Kazuhiro Fuchi said. "Now we see criticism from inside and outside the country because we have failed to achieve such grand goals."

Today

Eventually, enthusiasm came roaring back. There have been significant milestones in neural network architectures over the last 20 years, and much of that progress is still driving investment today. So, what’s different this time?

Enthusiasm is as high as ever. The United States is poised to double AI funding by 2022, and China, the second-largest economy in the world, has funneled hundreds of billions into the field. Both are funding research and competing for AI talent, reminiscent of DARPA’s funding spree after Japan announced its Fifth-Generation Computer program.

But private industry is a larger driver. In the past, AI was primarily an academic field. Even breakthrough image recognition algorithms like AlexNet were created in graduate programs.

But that may not be the case anymore.

Take Microsoft, which invested $1 billion in OpenAI in 2019 and won the ultra-coveted Pentagon JEDI contract for cloud computing, largely thanks to its strong AI systems. Its main competitor for JEDI was Amazon, which makes more money from its AI-reliant cloud computing platform than it does from e-commerce.

Today, the world’s top researchers are almost entirely housed inside wealthy companies. It will be up to executives at Facebook and Microsoft, as well as China’s state-backed tech giants, to decide whether their profits are funneled toward further research or not.

But is there a new Lighthill report? Perhaps soon. Experts are already warning about the limitations of today’s most popular tools, particularly deep learning.

NYU researcher Gary Marcus’s paper “Deep Learning: A Critical Appraisal” reads something like the Lighthill report. Marcus writes that deep learning will plateau as scientists struggle to transition their algorithms from controlled environments to general-purpose applications.

Other issues, including the cost of computation, are hemming in even the wealthiest corporations.

Speaking with the BBC in 2019, Marcus said the end of the 2010s saw a cooling of enthusiasm among the field’s top researchers.

“By the end of the decade there was a growing realisation that current techniques can only carry us so far,” he said.

Deep learning has led to a number of very profitable and sought-after applications, but it’s not a be-all and end-all. Much as early neural networks paved the way for later discoveries, Marcus argues, the techniques used today will eventually have to be seen for the stepping stones they actually are.

As Marcus puts it: “We need to reconceptualize it: not as a universal solvent, but simply as one tool among many, a power screwdriver in a world in which we also need hammers, wrenches, and pliers, not to mention chisels and drills, voltmeters, logic probes, and oscilloscopes.”