Why AI Doesn’t Scare Me

Graphic by Robert Samec

When Johan Sundstein, a member of a two-time world champion Defense of the Ancients (DotA) esports team, was asked about playing against OpenAI Five, a team controlled by artificial intelligence, he said, “I don’t believe in comparing OpenAI Five to human performance, since it’s like comparing the strength we have to hydraulics… As people, it’s about being realistic and learning from the brain of the AI and not the hydraulic strength that machines have.”

The first time a computer beat a top human at chess was 1996. The last time a human beat a top computer at chess was 2005. But chess, with its series of discrete moves, is far removed from navigating the real world. DotA, a real-time multiplayer strategy game, is far more complex and begins to approach the continuous nature of reality. By building AI that could handle DotA, OpenAI, an AI research company, brings us one step closer to AI that can function in the real world. The AI’s victory over the world’s best team illustrates its success.

OpenAI’s DotA team, OpenAI Five, learns in a way that’s fundamentally different from humans. Instead of getting better by playing random people on the internet, OpenAI Five trained against itself, battling it out for the equivalent of 10,000 years of in-game time. By using supercomputers, it accomplished this in only a few months. Considering how much experience OpenAI Five has, it’s no wonder it trumps the best human players. That scale of practice is far beyond human capability.
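The idea of self-play can be illustrated with a toy sketch. This is my own simplified example, not OpenAI's actual algorithm: a single policy over moves in a trivial "higher number wins" game repeatedly plays against a copy of itself, and each move is reinforced in proportion to how often it beats the current policy. No human opponents are involved at any point.

```python
# Toy self-play sketch (an illustration, not OpenAI Five's real training code).
# A policy plays a "higher number wins" game against a copy of itself;
# moves that beat the current policy get reinforced each generation.

def self_play(moves=range(10), generations=50, lr=0.5):
    weights = {m: 1.0 for m in moves}  # shared policy for both "players"
    for _ in range(generations):
        total = sum(weights.values())
        # Expected win rate of each move against the current (identical) policy:
        winrate = {m: sum(w for n, w in weights.items() if n < m) / total
                   for m in weights}
        for m in weights:
            weights[m] *= 1 + lr * winrate[m]  # reinforce winning moves
    return max(weights, key=weights.get)

print(self_play())  # the policy's mass concentrates on the dominant move, 9
```

Even in this tiny example, the policy discovers the best move purely by competing against itself; OpenAI Five does something conceptually similar, just at a vastly larger scale.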

This computational power is the biggest advantage modern artificial intelligence has over humans. Another OpenAI project, GPT-3, was trained to understand English by reading a huge swath of the internet. OpenAI didn’t tell GPT-3 to do anything other than digest what it read. This is an important departure from traditional machine learning approaches to understanding language. Traditionally, researchers have built datasets full of example inputs to an algorithm and the desired output for each of those inputs. For example, someone trying to create a machine learning algorithm to translate between English and French would typically gather a huge number of English texts and their French translations, and tell the algorithm to learn to translate from those examples. By contrast, GPT-3 was told to try to predict missing words from text on the internet based on the surrounding words it processed. While working through all that text, it learned to translate between French and English by figuring out what French words might mean based on the context in which they appeared. Like OpenAI Five’s self-play, this marks a significant departure from how humans learn and process information.
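The contrast can be made concrete with a toy sketch (my own illustration; GPT-3's real pipeline operates on subword tokens at a vastly larger scale). A supervised translation dataset needs hand-paired labels, while a language-modeling dataset can be manufactured from raw text alone by hiding each next word:

```python
# Self-supervised training pairs come "for free" from raw text:
# predict each word from the words that precede it, no labeling needed.

def next_word_pairs(text, context_size=3):
    """Turn raw text into (context, target) training pairs."""
    words = text.split()
    pairs = []
    for i in range(1, len(words)):
        context = words[max(0, i - context_size):i]
        pairs.append((tuple(context), words[i]))
    return pairs

# A supervised translation dataset, by contrast, needs hand-built pairs:
translation_data = [("the cat", "le chat"), ("the dog", "le chien")]

print(next_word_pairs("the cat sat on the mat")[0])  # (('the',), 'cat')
```

Because the "labels" are just the text itself, the amount of training data available this way is limited only by how much text you can crawl, which is exactly what lets raw computing power substitute for human annotation.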

While OpenAI Five is able to beat the world’s top humans at a game that more closely resembles the real world than anything we’ve seen before, GPT-3 is able to write poetry, correctly answer quiz questions, and create fake news articles that people can’t tell are fake. Taken at face value, this may seem incredible and scary, and someone may reasonably assume that we’ll all be killed by the robot uprising within the next few decades. I’d argue, however, that this concern is faulty in two fundamental ways: it will happen much sooner than a few decades from now, and the robots themselves won’t be the ones revolting.

First off, there is something fundamentally different about GPT-3 and OpenAI Five compared to other major machine learning advances: neither of them used new algorithms. For the first time, incredible advances in artificial intelligence have resulted from simply scaling up existing algorithms, as opposed to making new ones. GPT-3 uses the same algorithm as its predecessor GPT-2, but has access to far more computing resources. The team behind OpenAI Five was surprised to discover that they could get incredible performance without fundamentally changing the algorithm they’d used for other research. That’s big. We don’t have to wait for a fundamental shift in artificial intelligence to pass the Turing Test. We just need to give the algorithms more resources. To me, this means that on a very basic level, we’re done. We’ve made it.

So what have we made, exactly? Just as hydraulics are fundamentally different from muscles, artificial intelligence is fundamentally different from human intelligence. The biggest difference between human intelligence and AI is that these algorithms are static. While hydraulics have superior strength to muscles, they can’t grow and change over time. Similarly, OpenAI Five and GPT-3 are not designed to keep learning from new information. If I tell GPT-3 my birthday, it won’t remember it, and given the same question, GPT-3 will give the same kind of answer whether I ask it today or 20 years from now. These algorithms aren’t changing, are not forming ideas, and cannot plot to overthrow humanity. Creating algorithms that are able to form opinions based on new information will require a paradigm shift, and could very well be decades away. But the algorithms behind GPT-3 and OpenAI Five are powerful enough to radically change society.

Awesome as they may be, because these algorithms do not change over time, they do not themselves pose a threat to humanity. But that’s not to say they’re safe in the hands of humans. In a break from tradition in the machine learning space, OpenAI did not initially release GPT-2 to the public, fearing that it could be used as a weapon to produce fake news on an unprecedented scale. Instead of publicly releasing GPT-3, they gave Microsoft an exclusive license to the underlying model while continuing to provide a public interface through which companies can interact with GPT-3. The technology to automatically generate fake news exists and is already in the hands of companies that do not necessarily have humanity’s best interests in mind. Anyone can access the technology required to create AI with the ability to out-strategize the best human esports players. While this AI itself doesn’t pose a threat, it absolutely can be used as a weapon by people with malicious intent. Don’t fear artificial intelligence. Fear the people who control it.
