The AI writer GPT-2 proves that automated text "requires less philosophical sophistication than we thought."

Yesterday, I published a short piece on GPT-2, the sophisticated text generator whose full version OpenAI released on November 5th. Later, I came across an entertaining video by Rob Miles, a PhD student at the University of Nottingham.

In it, he combs through GPT-2-authored recipes and other texts, but he begins by summing up the significance of what the tool is able to do: without a sophisticated ruleset, a database of names, or other contextual information, GPT-2 can write convincing text from rudimentary prompts.

But, as impressive as it is, GPT-2 still doesn't represent a general form of intelligence. Underneath everything, it simply predicts which word comes next based on statistical patterns in the text it was fed during training. It doesn't know what the words themselves mean.
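(To make that mechanism concrete, here is a minimal sketch of prompting GPT-2 for a continuation. It assumes the publicly available Hugging Face transformers package and the small pretrained "gpt2" model, neither of which is mentioned in the piece; the example prompt is mine. The model just samples one statistically plausible next token at a time.)

```python
# Minimal sketch: prompt GPT-2 and let it continue the text.
# Assumes `transformers` and `torch` are installed.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # small 124M-parameter model
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# A rudimentary prompt; the model extends it token by token,
# each time picking a word that is statistically plausible given what came before.
prompt = "Recipe for chocolate chip cookies:"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

output_ids = model.generate(
    input_ids,
    max_length=100,                        # total length in tokens, prompt included
    do_sample=True,                        # sample instead of always taking the top token
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,   # silences the missing-pad-token warning
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```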

This, Miles says, is significant. If a statistical model can perform tasks we previously thought required human-level intelligence, what does that say about us?

From the video:

We used to think you had to be very clever to be good at chess, and if you could play chess then you were a "real" intelligence. You had to be. And then we realized that playing chess at a superhuman level doesn't require something that we would call intelligence, and it's slightly unsettling to find that, like, writing coherent and plausible news prose apparently also doesn't require general intelligence. Like, if you just learn the statistical relationships between words but do that really really well, that seems to be enough.
