A grouping of images pulled from OpenAI neural networks mid-task. (OpenAI)

Machine learning has a black box problem

As AI systems learn to perform increasingly sophisticated tasks, their decision making becomes harder to understand.

There is a looming problem in AI development, and it's easy to find examples of it everywhere.

Whenever asked to explain how a neural network performs well in video games or image recognition, most people will say that an AI can do something because it has done it many times, often millions of times. If it can recognize a dog in an image, for example, it's because it's seen countless examples of dogs.

This, however, isn't a good answer. It describes how an AI learns, but it doesn't explain how it makes decisions.

A good example of this can be found in gaming.

Even in board games and controlled virtual environments, AI agents' behavior confuses human opponents and onlookers. During its now-famous 2016 Go match, the AI agent AlphaGo played many unorthodox moves on its way to beating Lee Sedol, one of the world's best players. More recently, while playing the top team in Dota 2, the AI team OpenAI Five announced early on that it believed it had a 95% chance of winning. The game's announcers scoffed at the message, but the bots were right and won shortly afterward.

These examples show that, even if we know how they learn, we don't know what makes AI agents perform the way they do.

This may become a serious problem as AI takes on larger roles in medicine, science, military planning and energy management. If we can't account for why an AI performs a task well, we may not be able to explain why it fails to save a patient's life.

Increased transparency is also needed as AI itself becomes a product sold to large corporations, according to researchers at IBM:

"We know people in organizations will not use or deploy AI technologies unless they really trust their decisions," IMB research fellow Saska Mojsilovic told VentureBeat in August.

Right to Explanation

This problem has some precedent in another uncertain and complicated part of life, at least in the United States: credit scores.

Until laws were passed in 1974, creditors were not obligated to give customers a reason when they denied them a loan. The Equal Credit Opportunity Act, in addition to banning discrimination against applicants, requires creditors to state explicit reasons when denying someone a loan.

More recently, EU law has tackled the AI explainability problem through the General Data Protection Regulation (GDPR). However, that provision hasn't been tested in court.

Explore other parts of this issue in the stories and explainers below.

Explainers

Neural networks: a computing model inspired by the human brain

November 1st
A pivotal technology in AI that enables machines to discover and recognize patterns and objects.

Updates

Facebook releases Captum, a tool it says will help explain the decisions made by models built with its PyTorch machine learning framework

October 15th
Captum inspects neural networks mid-task, attributing a model's outputs to its input features and internal layers.
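
Below is a minimal sketch of the kind of attribution Captum performs; the toy PyTorch classifier, input tensor, and target class are assumptions for illustration, not Facebook's own demo.

    import torch
    import torch.nn as nn
    from captum.attr import IntegratedGradients

    # A tiny stand-in classifier: 3 input features mapped to 2 classes.
    model = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 2))
    model.eval()

    x = torch.rand(1, 3)  # one example with 3 features

    # Integrated Gradients attributes the score of a chosen class (here class 0)
    # back to each input feature, showing which inputs drove the decision.
    ig = IntegratedGradients(model)
    attributions, delta = ig.attribute(x, target=0, return_convergence_delta=True)
    print(attributions)  # per-feature attribution scores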

Microsoft open sources InterpretML, a Python package it says will help find bias and regulatory issues in machine learning models

May 10th
The tool will help developers avoid building discriminatory and unexplainable systems.
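
Here is a minimal sketch of InterpretML's "glass-box" approach on a synthetic dataset; the data and labels are assumptions for illustration, not Microsoft's example.

    import numpy as np
    from interpret.glassbox import ExplainableBoostingClassifier

    # Toy data: 200 examples, 4 numeric features, binary label.
    rng = np.random.RandomState(0)
    X = rng.rand(200, 4)
    y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)

    # An Explainable Boosting Machine stays interpretable: its global explanation
    # reports how strongly each feature contributes to the model's predictions.
    ebm = ExplainableBoostingClassifier()
    ebm.fit(X, y)

    explanation = ebm.explain_global()
    print(explanation.data())  # per-feature importance scores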

Sections

OpenAI

August 11th

OpenAI was founded as a non-profit research lab by Elon Musk, Sam Altman and others in 2015.

In February 2018, Musk left, citing a conflict of interest with his work on Tesla's Autopilot system.

In 2019, with Altman in charge, OpenAI formed OpenAI LP, a for-profit company that it said would allow it "to rapidly increase our investments in compute and talent while including checks and balances to actualize our mission."

OpenAI has produced some impressive accomplishments. In early 2019, its neural networks beat the world's best Dota 2 players, and in July, Microsoft invested $1 billion in OpenAI to pursue artificial general intelligence, a goal many think is still decades away, if not longer.