Facebook's AI head distances the company's focus from that of its competitors, and says the field may hit a wall
Facebook's Menlo Park office. (OfficeSnapShots.com)
Jerome Pesenti, Facebook's VP of AI, said two surprising things in a recent interview with Wired.
First, he criticised the field's long-talked-about goal of artificial general intelligence (AGI), distancing his lab's plans from those of Google and OpenAI. Second, he said the field may soon "hit a wall" as experiment costs become prohibitively expensive.
On the goal of AGI, Pesenti said he didn't find the concept interesting.
"As a lab, our objective is to match human intelligence," he said.
"On the one hand, you have people who assume that AGI is human intelligence. But I think it's a bit disingenuous because if you really think of human intelligence, it is not very general. Then other people project onto AGI the idea of the singularity—that if you had an AGI, then you will have an intelligence that can make itself better, and keep improving. But there's no real model for that. Humans can't make themselves more intelligent. I think people are kind of throwing it out there to pursue a certain agenda."
Speaking later about the shortcomings of current research models, Pesenti addressed some criticisms of deep learning.
"We are very very far from human intelligence, and there are some criticisms that are valid," he said. "It can propagate human biases, it’s not easy to explain, it doesn't have common sense, it’s more on the level of pattern matching than robust semantic understanding."
Later, he discussed another shortcoming: the cost of AI experiments.
"If you look at top experiments, each year the cost is going up 10-fold," he said. "Right now, an experiment might be in seven figures, but it's not going to go to nine or ten figures, it's not possible, nobody can afford that."
"It means that at some point we're going to hit the wall. In many ways we already have. Not every area has reached the limit of scaling, but in most places, we're getting to a point where we really need to think in terms of optimization, in terms of cost benefit, and we really need to look at how we get most out of the compute we have. This is the world we are going into."
OpenAI wrote recently that, since 2012, the computational power used by AI experiments has doubled every 3.4 months.
"The trend represents an increase by roughly a factor of 10 each year," OpenAI wrote.