29 December 2014

When a Computer Outsmarted a Master

On May 11, 1997, a single event signaled a turn in the world of computing. The reigning chess champion Garry Kasparov was beaten in a six-game match by a computer. The computer, known as Deep Blue, was built by IBM to tackle the complexity of chess. If the robotic process automation robots of 2014 had a grandfather, it would be Deep Blue.

Image used under Creative Commons license: https://www.flickr.com/photos/calliope/2313727442

Conquering the game of chess had been a goal for computer scientists since the 1950s. Various computers were built for this purpose, but none could hold their own against a true chess master. Then, in 1985, a graduate student at Carnegie Mellon University named Feng-hsiung Hsu began work on a chess-playing computer as his dissertation. He was joined by a classmate, Murray Campbell, and the two were hired by IBM to continue their project with added team members.

Kasparov actually won the first match proposed by IBM in 1996. The match in 1997 was a rematch, held after significant upgrades were made to Deep Blue. In a small television studio, Kasparov faced off against the computer (with a human operator moving the pieces on its behalf). The chess champion and the computer each won a game, followed by three draws, and the match concluded with a mistake by Kasparov that allowed Deep Blue to seize victory. Kasparov later claimed that the computer had cheated, but Deep Blue was retired soon after, so the two never got a rematch.

Of course, technology has since progressed to the point where ordinary software, not dedicated computers, can routinely beat chess masters. The key to Deep Blue's success was using large data sets to tackle a complex problem, an approach IBM called deep computing. It also made use of parallel processing, which splits a problem into many pieces for separate, simultaneous processing by multiple CPUs. Problems are solved much faster this way, and parallel processing is at the heart of all high-performance computing today. At the time of the match, Deep Blue could examine 200 million positions per second and look ahead up to 14 moves deep.
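The idea behind parallel processing can be sketched in a few lines. The snippet below is a toy illustration only, not Deep Blue's actual method: it fans a batch of candidate "positions" out to a pool of workers, scores each one independently, and picks the best. The `evaluate` function and the list-of-piece-values representation are invented for this example (a real engine evaluates board structure, not a flat list, and Deep Blue ran on custom chess chips rather than Python threads).

```python
# Minimal sketch of parallel position evaluation (illustrative only).
# Assumptions: positions are toy lists of piece values, and scoring a
# position is just summing them -- a stand-in for a real evaluation.
from concurrent.futures import ThreadPoolExecutor

def evaluate(position):
    # Toy evaluation: total material value of the position.
    return sum(position)

def best_position(positions, workers=4):
    # Split the candidates across a pool of workers, score them
    # independently, then return the index of the highest-scoring one.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(evaluate, positions))
    return max(range(len(positions)), key=lambda i: scores[i])

if __name__ == "__main__":
    candidates = [[1, 3, 3, 5], [1, 1, 9], [3, 3, 5, 9]]
    print(best_position(candidates))  # index of the strongest candidate
```

Because each candidate is scored independently, the work divides cleanly across workers; this same divide-evaluate-combine shape is what lets a machine like Deep Blue spread millions of positions per second over many processors.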

Chess was an appropriate challenge for pushing the limits of computing, because the game combines a set of simple rules with an immense space of possibilities. What IBM achieved with Deep Blue led to amazing developments in deep computing and parallel processing which are now used by businesses and science labs across the globe. It was a significant step forward in the history of robotics, and we are still reaping the benefits today.

This is the third part of a series on the history of computer robotics. Read the first and second parts here and here.

Contact us to learn more

by Katie Behrens

TOPICS: Robotics
