A short history of AI schooling humans at their own games

Twenty-one years ago today, IBM computer Deep Blue famously beat chess world champion Garry Kasparov at his own game.

While Deep Blue would go on to lose the full match, the event launched a long line of victories by artificial intelligence (AI) over humans in gaming.

Since Deep Blue’s initial triumph, many computer systems have challenged humans in other complicated games, like Go and poker.

Games might seem a trivial way to measure AI. But they offer “a nice, simple controlled environment,” University of Alberta computer scientist Jonathan Schaeffer said. “You demonstrate the ideas in the computer games, and then you scale them up to bigger real world problems. Games allow us to learn how to walk, before we learn how to run.”

In honor of the anniversary, here’s a look at the AI gamers who have made notable contributions to computer science, in the words of designers who created them.

Editor’s note: Responses have been edited for clarity based on interviews with PBS NewsHour.

Murray Campbell, Deep Blue, IBM
Chess
Deep Blue faced off against chess world champion Garry Kasparov in 1996. While the computer won the first game, it lost the match. Deep Blue won the rematch a year later.

Chess is a game that has a huge number of possibilities. On every turn, players can make about 40 different possible moves; their opponent, in turn, has 40 different responses. If you try to calculate into the future what will happen, you’ll quickly run into an exponential explosion of possibilities, and you’ll just have to throw your hands up and make your best guess at what the best move is. Human players have this great ability to look at only a small number of possibilities when making their move, and they have good intuition about what’s good and what’s bad.

We couldn’t emulate that in a computer, but we could create a very thorough search through the possibilities, and that was what Deep Blue was good at. It looked through 100 million possibilities per second while calculating its move.
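Deep Blue’s actual engine ran on custom chess hardware and was far more elaborate, but the core idea of exhaustively searching move sequences can be sketched with textbook minimax search and alpha-beta pruning. Everything below is illustrative, not Deep Blue’s code; the tiny additive “game” stands in for chess.

```python
# Toy minimax with alpha-beta pruning -- the textbook idea behind
# exhaustive game-tree search (Deep Blue's real engine used custom
# hardware and a far richer evaluation function).

def alphabeta(state, depth, alpha, beta, maximizing, moves, evaluate):
    """Return the best achievable score from `state`, searching `depth` plies."""
    successors = moves(state)
    if depth == 0 or not successors:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for child in successors:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, moves, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:  # prune: the opponent will never allow this line
                break
        return best
    else:
        best = float("inf")
        for child in successors:
            best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                       True, moves, evaluate))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# Tiny stand-in "game": a state is a number, each move adds 1 or 2,
# and the evaluation is simply the number itself. The maximizer adds 2
# on its turns, the minimizer adds 1, so four plies from 0 yield 6.
score = alphabeta(0, 4, float("-inf"), float("inf"), True,
                  moves=lambda s: [s + 1, s + 2],
                  evaluate=lambda s: s)
```

The pruning step is why real engines can search so deeply: whole subtrees are skipped once it is clear the opponent would never steer play into them.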

The Deep Blue matches in both 1996 and 1997 gave many people their first understanding of AI systems. Back in the ’90s, everybody was familiar with computers doing fairly mundane tasks like calculating payroll. But everybody knows that chess requires intelligence, and that smart people tend to play really well.

To see that a computer could beat the best in the world, at least in one game, was a sign that great things were to come.

Eric Brown, director of Watson Algorithms for Watson Health, Watson, IBM
Jeopardy!
In 2011, Watson won a two-game series of Jeopardy against prior champions Ken Jennings and Brad Rutter, with a total score of $77,147.

Watson competing on Jeopardy against humans was a very public, accessible and understandable demonstration of a computer’s ability to understand natural language. In particular, it was also a demonstration of a computer’s ability to look at questions that for many humans are very hard to understand and certainly difficult to answer. Interpreting that language, coming up with an answer and being confident in that answer is a key part of successfully playing Jeopardy, and the match was a great demonstration of those core abilities.

Contestant Ken Jennings competes against 'Watson' at a press conference for the Man V. Machine "Jeopardy!" competition. Photo: Ben Hider/Getty Images

This system was built to leverage knowledge the way humans naturally record and communicate it: through text. It was pulling potential answers and evidence to support those answers out of an enormous text repository. We had encyclopedic information, we had some web content in there, and we had various books and news articles. Being able to analyze that and understand it at a deep enough level, so that you could pull out possible answers was really the core part of the problem.
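Watson’s real DeepQA pipeline combined many interacting analysis and scoring components, but the basic step described above, pulling candidate evidence out of a text repository, can be illustrated with a deliberately simple term-overlap scorer. The function and corpus here are hypothetical toys, not Watson’s code.

```python
# Toy passage ranking by term overlap -- a drastically simplified
# stand-in for the candidate-evidence retrieval the text describes.
# (Watson's actual DeepQA system used many interacting scorers.)

def tokenize(text):
    """Split text into a set of lowercase words, stripping punctuation."""
    return {w.strip(".,!?\"'").lower() for w in text.split()}

def score_passages(question, passages):
    """Rank passages by how many terms they share with the question."""
    q_terms = tokenize(question)
    return sorted(passages,
                  key=lambda p: len(q_terms & tokenize(p)),
                  reverse=True)

corpus = [
    "Deep Blue was a chess computer built by IBM.",
    "The capital of France is Paris.",
    "Watson competed on the quiz show Jeopardy in 2011.",
]
best = score_passages("Which IBM computer played chess?", corpus)[0]
```

Here the first passage wins because it shares the terms “ibm,” “computer” and “chess” with the question; real systems replace this counting with much deeper linguistic analysis and confidence estimation.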

Tuomas Sandholm and Noam Brown, Libratus, Carnegie Mellon University
Poker

Libratus won a Texas Hold’em poker tournament in January 2017 against four of the world’s top players.

Brown: If you look at the games that AI has traditionally addressed — checkers, chess and Go — they all fall into this category called perfect information games. These are games in which both players have access to all the information available. Everything is laid out neatly for everybody to see.

Sandholm: There are two things that, combined, make poker very hard. One is the size of the game tree, which is 10 to the power of 161 different situations that a player can face. That’s more than the number of atoms in the universe! But that’s shared with chess and Go. They also have these huge game trees.

What makes no-limit Texas Hold’em so difficult is the imperfect information. So, when it’s a player’s turn to move, they don’t actually know what the state of the game is. And therefore, a player has to interpret the opponents’ actions as signals about their private information, and conversely, consider how the opponents will interpret the player’s own actions as signals about their private information.

Brown: So, you need a fundamentally different kind of approach when it comes to games like poker. That’s really what we bring to the table with this new AI. It takes a fundamentally different approach to handling uncertainty.
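One family of methods for imperfect-information games is regret minimization; Libratus itself is vastly more sophisticated, but a minimal regret-matching loop for rock-paper-scissors (a toy, not the Libratus algorithm) shows how a strategy can adapt toward an unexploitable mix without ever seeing the opponent’s private reasoning.

```python
import random

# Regret matching for rock-paper-scissors: a minimal illustration of
# the regret-minimization family used for imperfect-information games.
# (Libratus's actual algorithms are far more elaborate.)

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def strategy_from_regret(regret):
    """Play each action in proportion to its accumulated positive regret."""
    positive = [max(r, 0.0) for r in regret]
    total = sum(positive)
    if total == 0:
        return [1.0 / ACTIONS] * ACTIONS
    return [p / total for p in positive]

def train(iterations, opponent_strategy):
    """Return the average strategy after repeated self-adjustment."""
    regret = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strat = strategy_from_regret(regret)
        for i in range(ACTIONS):
            strategy_sum[i] += strat[i]
        my = random.choices(range(ACTIONS), weights=strat)[0]
        opp = random.choices(range(ACTIONS), weights=opponent_strategy)[0]
        # Regret = how much better each alternative would have done.
        for a in range(ACTIONS):
            regret[a] += payoff(a, opp) - payoff(my, opp)
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

avg = train(20000, [1 / 3, 1 / 3, 1 / 3])
```

The key property is that the learner never needs to know the opponent’s hidden state, only the payoffs it observes, which is exactly the kind of reasoning under uncertainty the text describes.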

Jonathan Schaeffer, Chinook, University of Alberta
Checkers
Chinook lost to Marion Tinsley, the world champion of checkers, in 1991, but would go on to win the title in 1994. The researchers then “solved” the game of checkers in 2007, such that Chinook can now win or draw against any opponent.

We [Chinook and Schaeffer] played an exhibition match against Marion Tinsley in 1991. And the computer told me to make this one particular move. When I made it, Tinsley immediately said, “You’re going to regret that.”

Not being a checkers player, I thought, “What does he know? My computer is looking 20 moves ahead.” But a few moves later, the computer said that Tinsley had the advantage, and a few moves after that I resigned.

Checkers champion Marion Tinsley in 1988. Photo: State Archives of Florida/Foley

Tinsley, based on his ability to search ahead and his deep knowledge of the game, was able to figure out that he was going to win. Confirming that would take an analysis 64 moves deep, far beyond anything the techniques [the computer] was using [at the time] could manage, and beyond anything I could even imagine building and playing within the constraints of a real game. So, the real challenge in computer games is to overcome the incredible abilities that humans have.

So, the historical significance is very simple. Chinook was the first to achieve computer supremacy in any game.

We did this in 1994 by winning the World Man-Machine Championship, and we did this the hard way in the sense that we competed in tournaments to earn the right to play against the world champion.

Contrast that with Deep Blue. Many people think it became world champion in 1997, but it didn’t. The matches were exhibition games, and although Deep Blue won, it certainly didn’t claim the title. So that’s the major impact: [Chinook] was the first program to have superhuman abilities in a game that’s normally played by humans.

As for the lasting technological significance, that’s always fleeting. At the time, this was one of the biggest computations that had ever been performed. We continued from 1994 until 2007, when we actually solved the game of checkers. And that turned out to be quite a milestone.

Many years ago, John McCarthy of Stanford called computer chess the drosophila (fruit fly) of artificial intelligence. The analogy is that if you’re a geneticist, you do your research with fruit flies and not humans. Computer games are essentially the fruit fly of AI research.