Text automation tools are easier than ever to use, meaning anyone (even you) can learn natural language processing
This post is sponsored by Brilliant. If you're considering a career in natural language processing and you'd like to gain a solid foundation in this large and growing domain of computer science, check out Brilliant’s courses in Programming with Python, Algorithm Fundamentals, Artificial Neural Networks and Machine Learning.
A much-explored but slow-moving branch of AI research may have finally found its moment.
Like the now-famous models trained on ImageNet that helped bring image recognition products to market around the world, a number of text-focused neural network models have recently been released, and they could allow commercial expansion of automated writing.
The discipline itself, natural language processing (NLP), is to text what facial and pattern recognition are to imagery and video -- it’s used to find people, companies and events in writing, categorize data and generate convincing “synthetic text.”
NLP has been slow to gain traction because human language is messy. Not only is it difficult for a machine to know the difference between “a river bank” and “a bank account,” it also has to create original text that resembles the imperfect way humans speak and write.
That appears to be changing. Two recently published models in particular have produced convincing, lifelike synthetic text, on par with today’s visual effects in film and deepfakes in media, for better or worse.
It should be noted that the synthetic text AIs generate isn’t always malicious. In fact, the tech is most often used today to help you finish Google searches, text messages and emails. It’s also used for chatbots, gaming and news articles. For many, their first interaction with a text automation tool once called “too dangerous to share” has been harmless and fun -- try AI Dungeon to see one of the models below in action.
The rest of this post will be dedicated to outlining the recently published models:
OpenAI’s GPT-2 text generator, which the startup initially said it wouldn’t fully publish for fear of misuse, has since been fully released for public use. GPT-2 generates convincing text from prompts users give it. If, for example, a user writes “I left home early today,” the generator can produce paragraphs of writing that continue the story. If the prompt carries a specific style or tone, GPT-2 mimics it as best it can.
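GPT-2 itself is a neural network with over a billion parameters, but its core loop -- predict a plausible next word from everything written so far, append it, repeat -- can be sketched with a toy bigram model. This is a deliberately simplified illustration of my own, not OpenAI’s code:

```python
import random

def train_bigrams(corpus):
    """Count which word follows which in a training corpus."""
    words = corpus.split()
    model = {}
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, prompt, length=10, seed=0):
    """Repeatedly sample a plausible next word -- the same loop GPT-2
    runs, at vastly larger scale and with far richer context."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break  # no known continuation for the last word
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "I left home early today and walked to the station and waited for the train"
model = train_bigrams(corpus)
print(generate(model, "I left home", length=6))
```

Where this toy model only looks at the single previous word, GPT-2 conditions each prediction on the entire prompt, which is why its continuations hold together over whole paragraphs.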
But rather than fake news or automated bots, the most written-about product of GPT-2 since its release in 2019 came from a PhD student, who trained it to create AI Dungeon, an infinite text-only adventure game.
You can play AI Dungeon online, explore infinite caves and mountain ranges and run into strange characters. The game, as the site describes, can go on forever. The Verge described the experience as “like cowriting a novel with an easily distracted toddler possessing an encyclopedic knowledge of cultural references and prose cliches.”
But that’s not to say GPT-2 won’t be (or isn’t already being) used for malicious purposes. For now, the most public attempts have come from researchers showing what could go wrong.
Middlebury’s Center on Terrorism, Extremism and Counterterrorism has trained GPT-2 on hate material and terrorist manifestos with some success, as did Max Weiss, a Harvard student, who used GPT-2 to spam a federal comment page in October 2019.
Google recently open sourced ALBERT, a leaner, more efficient version of its BERT language model, built on top of several groundbreaking technologies Google has developed since 2017.
It started with the transformer neural network, which addressed a problem that had plagued NLP researchers for years -- how can a neural net “remember” context and relationships among pronouns, people and events over long blocks of text? The transformer helped solve that, and was later used to create BERT, a model pretrained on novels and Wikipedia articles.
BERT’s successor model, ALBERT, was open sourced in late December 2019. And while BERT was also open source, it was prohibitively expensive to train and run. ALBERT is not, meaning anyone can get started using it.
That’s because ALBERT’s architecture, unlike BERT’s, shares parameters up and down the layers of its deep neural network. In BERT, separate layers often learned to perform the same operations over and over. That redundancy ultimately aided accuracy, but it was expensive and time consuming. ALBERT’s sharing across layers, according to Google, “slightly diminishes the accuracy, but the more compact size is well worth the tradeoff.”
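The savings from cross-layer sharing are easy to estimate with back-of-envelope arithmetic. The numbers below are my rough approximation of BERT-base scale (12 transformer layers of roughly 7 million parameters each), not exact published figures:

```python
params_per_layer = 7_000_000   # rough per-layer size, BERT-base scale
num_layers = 12

# BERT: every layer carries its own copy of the weights.
bert_layer_params = params_per_layer * num_layers

# ALBERT: one set of weights is reused by all layers.
albert_layer_params = params_per_layer

print(bert_layer_params // albert_layer_params)  # -> 12
```

At this scale, sharing alone cuts the layer parameters twelve-fold, which is the compactness Google is trading a little accuracy for.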
In 2018, when BERT was released, NYU professor Sam Bowman told The New York Times it was “a step toward a lot of still-faraway goals in A.I., like technologies that can summarize and synthesize big, messy collections of information to help people make important decisions.”
The latest release, the Reformer neural network, claims to do just that. Whereas the most efficient transformers can attend to up to thousands of words at a time, the Reformer, published on January 16, 2020, can handle up to one million. A Reformer can even be used to generate images, but researchers appear more interested in its use on text:
“While the application of Reformer to imaging and video tasks shows great potential, its application to text is even more exciting. Reformer can process entire novels, all at once and on a single device.”
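The Reformer reaches those lengths largely by replacing full attention, where every token is scored against every other token, with locality-sensitive hashing that only compares tokens likely to be related, cutting the work from quadratic to roughly n log n. A back-of-envelope comparison, with the cost model simplified for illustration:

```python
import math

def full_attention_comparisons(n):
    """Standard attention scores every token against every other: O(n^2)."""
    return n * n

def lsh_attention_comparisons(n):
    """Reformer-style LSH attention: roughly O(n log n) -- a simplified
    estimate, ignoring constant factors like hash rounds and chunk sizes."""
    return int(n * math.log2(n))

for n in (1_000, 100_000, 1_000_000):
    print(f"{n:>9} tokens: full={full_attention_comparisons(n):.1e}  "
          f"lsh={lsh_attention_comparisons(n):.1e}")
```

At a million tokens, full attention would need a trillion comparisons; the hashed version needs tens of millions, which is what makes whole-novel context plausible on one device.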
If you have the know-how and want to see it in action, try this Colab notebook, which processes the entirety of Crime and Punishment and produces text in the tone of Dostoevsky’s existential novel.
Brilliant offers courses in computer science, math, and natural sciences.
Brilliant is made with the loving efforts of lifelong learners from MIT, Caltech, Duke, the University of Chicago, and more.
In school, people are often trained to apply formulas to rote problems. But this traditional approach prevents deeper understanding of concepts, reduces independent critical thinking, and cultivates few useful skills.
Whether you're looking for Computer Science Fundamentals or are ready to learn to write your own Neural Networks, Brilliant has a course for you: