Thoughts on Chess and AI

In 1997, Garry Kasparov, one of history’s most gifted chess players, lost to Deep Blue, a $10 million specialized supercomputer programmed by a team from IBM. When I met Garry over dinner one night in London in 2001, I don’t think even he would have predicted how far and how fast the related fields of artificial intelligence and machine learning would develop in the twenty years since that match, moving beyond chess to Atari arcade games and, finally, to the greatest board-game challenge of them all: the game of Go.

It was the Soviet mathematician and computer scientist Alexander Kronrod who suggested that “chess is the Drosophila of artificial intelligence.” In other words, looking at chess is one way to make sense of the broader picture, just as the humble fruit fly has helped us decipher human genetics.

In today’s big-data world, AI and machine-learning applications already analyze massive amounts of structured and unstructured data and produce valuable insights in a fraction of the time it would take human analysts. A chess Grandmaster very likely holds a library of some 10,000 or more chess positions in his or her mind, and over dinner in London I was struck by Garry Kasparov’s favorite topic: the differences between the early versions of the Encyclopedia Britannica and its most recent edition.

Kasparov appeared to me as a walking Wikipedia, and I felt completely outclassed by his intellect. With the benefit of seventeen years’ hindsight, though, I think that, much as he does with chess positions, he was seeing the encyclopedia as patterns. If I’m right, and it’s broadly where Kasparov appears to be coming from in his new book, it is that ‘combinatorial’ partnership of man and machine, experience and algorithm, which points in the direction of the future. Kasparov realized that he could have performed better against Deep Blue if he had enjoyed the same instant access to a massive database of all previous chess moves that Deep Blue had, and it was Kasparov who pioneered the concept of ‘man-plus-machine’ chess matches (Centaur chess), in which AI augments human chess players rather than competing against them.

In 1997, when IBM’s Deep Blue defeated Kasparov, its core NegaScout planning algorithm was fourteen years old, whereas its key data set of 700,000 Grandmaster chess games (known as “The Extended Book”) was only six years old.
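
For readers curious about the shape of that algorithm, here is a minimal sketch of the NegaScout (principal variation search) idea on an abstract game tree. It illustrates the family of search Deep Blue belonged to, not IBM’s actual implementation; the Game interface, the depth limit and the evaluation function are all assumptions made for the example.

```python
# A minimal sketch of NegaScout / principal variation search on an abstract
# game tree. The Game interface below is hypothetical, purely for illustration.
import math

class Game:
    def legal_moves(self, position):   # ideally ordered best-first
        raise NotImplementedError
    def play(self, position, move):    # returns the successor position
        raise NotImplementedError
    def evaluate(self, position):      # static score from the side to move
        raise NotImplementedError
    def is_terminal(self, position):
        raise NotImplementedError

def negascout(game, position, depth, alpha=-math.inf, beta=math.inf):
    """Return the negamax value of `position`, searched to `depth` plies."""
    if depth == 0 or game.is_terminal(position):
        return game.evaluate(position)

    first_child = True
    for move in game.legal_moves(position):
        child = game.play(position, move)
        if first_child:
            # Search the first (presumed best) move with a full window.
            score = -negascout(game, child, depth - 1, -beta, -alpha)
            first_child = False
        else:
            # Probe the remaining moves with a null window ...
            score = -negascout(game, child, depth - 1, -alpha - 1, -alpha)
            if alpha < score < beta:
                # ... and re-search with a wider window only if the probe fails high.
                score = -negascout(game, child, depth - 1, -beta, -score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # beta cut-off: the opponent will avoid this line
    return alpha
```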

Now run this forward a year with Amazon Alexa or a similar AI device such as Google Home or Siri. Very soon, and I mean soon, you’ll be able to ask your iPad or a similar device what the best move in a game might be, in a manner reminiscent of the iconic chess match between the HAL 9000 computer (Heuristically programmed ALgorithmic computer) and astronaut Frank Poole in Stanley Kubrick’s 2001: A Space Odyssey. In theory, the computer should be able to suggest a move from any one of perhaps millions of recorded games that led to a position of advantage from a similar position.
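
As an illustration of the kind of lookup such an assistant might perform, here is a minimal sketch that indexes recorded games by position and suggests the move that most often led to a win. The data format, the FEN-like position strings and the build_book and suggest_move names are hypothetical for the example, not any real product’s API.

```python
# A toy "opening book": index winning games by position and suggest the move
# that most often led to a win from that position. Illustrative only.
from collections import Counter, defaultdict

def build_book(games):
    """games: iterable of lists of (position, move) pairs taken from the
    eventual winner's side of each recorded game (a hypothetical format)."""
    book = defaultdict(Counter)
    for winning_moves in games:
        for position, move in winning_moves:
            book[position][move] += 1   # count moves that led to a win
    return book

def suggest_move(book, position):
    """Return the historically most successful move for `position`, or None."""
    moves = book.get(position)
    if not moves:
        return None
    return moves.most_common(1)[0][0]

# Toy usage with made-up position labels.
book = build_book([
    [("start", "e4"), ("after-e4-e5", "Nf3")],
    [("start", "e4"), ("after-e4-c5", "Nf3")],
    [("start", "d4")],
])
print(suggest_move(book, "start"))   # -> 'e4'
```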

In a nutshell, then, it’s all about pattern recognition, processor speed and powerful algorithms, and it’s the latter, the code, that is making the greatest difference. Martin Grötschel has observed that while Moore’s Law might deliver a thousand-fold improvement over a given period, algorithms have improved some 30,000-fold over the same span. I use solving the Rubik’s Cube as a good example: it’s simply an algorithm, but the machine record is now down to 0.686 of a second.
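
To make the hardware-versus-algorithms point concrete, here is a toy comparison (nothing to do with Grötschel’s actual benchmark): the same problem solved by a naive algorithm and by a better one. Even a thousand-fold hardware speed-up of the naive version is quickly overtaken by the smarter algorithm as the input grows.

```python
# A toy illustration of algorithmic gains dwarfing hardware gains.
import time

def fib_naive(n):
    # Exponential-time algorithm: recomputes the same subproblems repeatedly.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_memo(n, cache={0: 0, 1: 1}):
    # Linear-time algorithm: each subproblem is solved once and remembered.
    if n not in cache:
        cache[n] = fib_memo(n - 1) + fib_memo(n - 2)
    return cache[n]

for f in (fib_naive, fib_memo):
    start = time.perf_counter()
    f(32)
    print(f"{f.__name__}: {time.perf_counter() - start:.6f} s")
```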

A significant technology capability threshold has silently been crossed. We are in a kind of Cambrian explosion right now. Making sense of noisy, infinitely variable visual and auditory patterns has been a Holy Grail of artificial intelligence. Perhaps the most important news of our day is that data sets, not algorithms, might be the key limiting factor in the development of much greater artificial intelligence. The tipping point was reached recently not by fundamentally new insights but by processing speeds that make possible larger networks, bigger data sets and more iterations, together with something rediscovered from 1970s neural-network research: “back-propagation,” a straightforward calculus trick.
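
For the curious, here is a minimal sketch of that “calculus trick”: a tiny two-layer network learning XOR by back-propagation and gradient descent. The layer sizes, learning rate and the XOR task are illustrative choices of mine, not anything from the original research.

```python
# A minimal back-propagation sketch: a tiny network learning XOR.
import numpy as np

rng = np.random.default_rng(0)

# XOR training data: two inputs, one target output per row.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 sigmoid units.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)      # hidden activations, shape (4, 8)
    out = sigmoid(h @ W2 + b2)    # predictions, shape (4, 1)

    # Backward pass: the chain rule applied layer by layer.
    d_out = (out - y) * out * (1 - out)   # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error propagated to the hidden layer

    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))   # should be close to [[0], [1], [1], [0]]
```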

The final point, with my PlayStation turned on in the living room and Game of Tanks, the huge multiplayer game, waiting for me, is that we humans are entirely, statistically predictable. Facebook and Google know that, and so does the game once I’ve played it enough times. The AI can now infer my behavior better than I can, to around 98% accuracy, in much the same way that Google can infer my day-to-day behavior and shopping habits. It’s all about patterns and huge data sets.
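
As a toy illustration of behavior being inferred from patterns, here is a sketch that predicts a player’s next action from a logged history using simple last-action frequencies. The action log, the names and the implied accuracy are entirely made up for the example; real recommender and profiling systems are far more sophisticated.

```python
# A toy last-action -> next-action frequency model, illustrative only.
from collections import Counter, defaultdict

def train(history):
    """history: ordered list of actions, e.g. ['advance', 'fire', 'retreat', ...]"""
    model = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, last_action):
    """Return the most frequent follow-up to `last_action`, or None if unseen."""
    follow_ups = model.get(last_action)
    if not follow_ups:
        return None
    return follow_ups.most_common(1)[0][0]

log = ["advance", "fire", "retreat", "advance", "fire", "retreat", "advance", "fire"]
model = train(log)
print(predict_next(model, "fire"))   # -> 'retreat'
```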

Read Garry Kasparov's latest book for more insight: "Deep Thinking."
