The company behind the fake Joe Rogan voice recording releases a tool to detect audio deep fakes
The tool analyzes visualizations of audio recordings, called spectrograms, to pick out telltale signs of fakes.
"While to the unsuspecting ear they sound basically identical, spectrograms of real audio vs. fake audio actually look different from one another," the company said in a blog post.
In the example below, the top spectrogram is a visualization of real audio. The bottom graph, which has visibly blurrier green bands, is fake.
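A spectrogram of the kind the article describes is built by slicing an audio signal into short overlapping frames and taking the Fourier transform of each one. The sketch below is an illustrative implementation, not Dessa's actual detection code; the function name, frame size, and hop length are assumptions chosen for the example.

```python
import numpy as np

def spectrogram(signal, frame_size=256, hop=128):
    """Magnitude spectrogram via a short-time Fourier transform.

    Each column is the FFT magnitude of one windowed frame --
    the time-frequency picture the article describes. This is a
    generic sketch, not the detector's real preprocessing.
    """
    window = np.hanning(frame_size)
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames).T  # shape: (frequency bins, time frames)

# Example: one second of a 440 Hz tone sampled at 8 kHz
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
spec = spectrogram(tone)
```

A detector like the one described would feed images such as `spec` to a classifier trained to tell real recordings from synthesized ones.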
The company received substantial attention in May 2019 when it released a fake audio recording of podcaster Joe Rogan.
The company wrote that such technology could become a part of software security in the future.
"In a future world, our vision for a model like this is a kind of system that could fit into the real-world infrastructure powering our phones and other media," the company wrote.
Dessa co-founder Ragavan Thurairatnam admitted to Axios that the tool, which is open source, could help others build fake audio models that evade detection.
"I think it's inevitable that malicious actors are going to move much faster than those who want to stop it," he said.