A Harvard student used GPT-2, a powerful text generator, to publish over a thousand fake comments on a government site

A table showing how many sentences can be created using an approach called synonym replacement. (techscience.org)
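The combinatorics behind that table can be sketched in a few lines. The template sentence and synonym sets below are hypothetical examples for illustration, not Weiss's actual word lists:

```python
from itertools import product

# Hypothetical synonym sets for slots in a template sentence;
# Weiss's real word lists differ -- this only shows why the
# variant count grows multiplicatively with each slot.
template = "I {verb} the proposed {noun} to Idaho's Medicaid program."
synonyms = {
    "verb": ["oppose", "reject", "object to"],
    "noun": ["changes", "reforms", "waiver"],
}

def generate_variants(template, synonyms):
    """Yield every sentence produced by substituting one synonym per slot."""
    keys = list(synonyms)
    for combo in product(*(synonyms[k] for k in keys)):
        yield template.format(**dict(zip(keys, combo)))

variants = list(generate_variants(template, synonyms))
print(len(variants))  # 3 verbs x 3 nouns = 9 distinct sentences
```

Each added slot multiplies the total, which is how a handful of word lists can yield thousands of superficially distinct comments.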

A Harvard student used OpenAI's GPT-2 text generator to write and post 1,001 fake comments on a federal government public comment site.

In a post on techscience.org, Max Weiss, the bot's creator, said his bot posted the fake comments over the course of four days without significant interruption, even though 90% of the comments came from the same IP address, a telltale sign of bot activity. During that time, he hired contractors on Amazon's Mechanical Turk, a microjob site, to judge which comments were real and which were written by a machine.

Take the quiz yourself here.

From his post:

Survey respondents, who were trained and assessed through exercises in which they distinguished more obvious bot versus human comments, were only able to correctly classify the submitted deepfake comments half (49.63%) of the time, which is comparable to the expected result of random guesses or coin flips.

Weiss posted the comments in October 2019 on the public comment site for Idaho's Section 1115 Medicaid Reform Waiver. He says he ultimately deleted the deepfake comments, but that his bot accounted for 55% of all comments posted to the site over that four-day period.

Besides highlighting how easy the site's defenses were to bypass, Weiss offers suggestions for preventing others from doing the same:

Federal public comment websites currently are unable to detect Deepfake Text once submitted, but technological reforms (e.g., CAPTCHAs) can be implemented to help prevent massive numbers of submissions by bots.

He also shares his opinion on how bot submissions to federal public comment sites should be regulated, saying:

One could imagine a smorgasbord of policy big sticks with threats and criminal penalties. But society seems better off playing the technology cat-and-mouse game than risking draconian policies that may drive the ability to actually witness imbalances and fix them. A policy that would impose criminal penalties for bot submissions to federal public comment websites that accept anonymous submissions, as a gross example, would not stop motivated actors who have virtually no risk of being caught.
