Where do tech bosses and politicians stand on AI regulation?

Google's Sundar Pichai. (Wikipedia)

Since the New York Times published its Clearview AI exposé, a renewed call for AI regulation has spread across parts of tech and government. Now is as good a time as any to get a sense of where the largest players stand:

Sundar Pichai, Alphabet and Google

In a Financial Times editorial following the Clearview article, Pichai wrote:

Companies such as ours cannot just build promising new technology and let market forces decide how it will be used. It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone.

Google's researchers are at the forefront of AI development, and the company maintains TensorFlow, the open-source platform used by many hobbyists and non-professional users of AI software.

"There is no question in my mind that artificial intelligence needs to be regulated. It is too important not to," Pichai wrote. "The only question is how to approach it."

Tim Cook, Apple

When asked about tech regulation in general, Cook told Axios in 2018 that he believes it's inevitable and necessary.

"Generally speaking, I am not a big fan of regulation," Cook said. "I'm a big believer in the free market. But we have to admit when the free market is not working. And it hasn’t worked here. I think it’s inevitable that there will be some level of regulation."

Elon Musk, Neuralink and Tesla Motors

Musk was among the earliest and most vocal proponents of AI regulation.

Speaking in 2017, Musk said that "AI is a rare case where we need to be proactive about regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late."

Musk has returned to the topic often since, even as he builds Neuralink, his brain implant startup.

"We are headed toward either superintelligence or civilization ending," he's said.

Mark Zuckerberg and Yann LeCun, Facebook

Yann LeCun, a veteran AI researcher, called Musk "panicky" in 2018 when asked for his thoughts on AI regulation. The truth, LeCun claimed, is that dangerous AI is further away than most people expect. Zuckerberg agreed.

Then, in 2019, Zuckerberg changed his tone. In a blog post titled "Four Ideas to Regulate the Internet," Zuckerberg wrote:

As lawmakers adopt new privacy regulations, I hope they can help answer some of the questions GDPR leaves open. We need clear rules on when information can be used to serve the public interest and how it should apply to new technologies such as artificial intelligence.

Peter Thiel, Palantir

"As a libertarian, I always dislike regulation," Thiel is quoted saying.

Despite founding the data company Palantir and investing in Clearview, Thiel has also said that privacy rules should be rethought to accommodate changing technology.

Satya Nadella, Microsoft

Nadella got there even earlier than Musk. In 2016, he wrote six principles for AI regulation. Since then, like Thiel, he has tried to position Microsoft as a partner company to the US government. To do so, of course, would mean following, and possibly helping to shape, the regulations that are ultimately made:

From 2018:

We want to partner with government — not to be dependent on us from a technology standpoint, but to become independent users and builders of technology, working together with us.

Xu Li, SenseTime

Xu Li, the co-founder of China's SenseTime, argued that governments should regulate AI instead of banning it, reportedly in response to California's wave of bans on facial recognition technology.

The Trump Administration, United States

The White House recently released a statement calling for a light-touch approach to AI regulation, which it argues is necessary for the US to maintain its edge over rivals in China.

From Michael Kratsios, US Chief Technology Officer:

The best way to counter this dystopian approach is to make sure America and our allies remain the top global hubs of AI innovation. Europe and our other international partners should adopt similar regulatory principles that embrace and shape innovation, and do so in a manner consistent with the principles we all hold dear.

Europe, however, is not likely to follow suit, because...

California

... Europe is more likely to work with the state of California, which has passed aggressive facial recognition regulations in the name of protecting minorities and slowing the spread of disinformation.

In late 2019, California passed a law imposing a three-year ban on police use of facial recognition in body cameras while the technology develops. Separately, the state outlawed creating and sharing deepfakes within two months of an election.

The European Commission, European Union

The European Union is emerging as the world's most powerful governmental proponent of AI regulation. The European Commission's new president, Ursula von der Leyen, promised to begin drafting extensive AI regulations within her first 100 days. Watchers expect to hear news soon.

Ahead of any formal announcement, government papers and statements suggest that the regulations will build on top of 2018's General Data Protection Regulation (GDPR), which protects users' privacy on the sites they visit.

Chinese Communist Party, China

The Chinese government's AI regulations don't appear to be headed in the same direction as those elsewhere in the world, where most regulatory supporters concern themselves with racial bias and privacy protection. In China, AI regulation revolves around promoting "Xi Jinping Thought" and the values advocated by the Communist Party.

On a more practical level, recent laws, like the one requiring facial recognition scans to register a mobile phone number in the country, are aimed at reducing fraud.
