ECO and distributed computing

by ChessBase
12/10/2005 – Is chess mathematically solvable? It's an idea which has been bounced around on computer chess message boards for years. Can a chessplaying program find a forced win for either player? Can a chess engine even find an advantage after twenty moves in a specific variation? We crunch some numbers and melt a few calculators in this week's ChessBase Workshop.

Ladies and gentlemen, for this week's offering I present to you -- a snowball.

It's mid-October as I write this and (around these parts anyway) we're not going to be thinking about snow for at least another month. But I have a knack for creating snowballs from thin air at least twice a year. Well, rhetorical snowballs anyway. It seems as though each time I offer an editorial "opinion piece" in this column, I create a small snowball which rolls downhill, growing larger and larger as it goes. Sometimes it's a full-scale avalanche (witness the "intelligent mistakes" column, which is still rolling); sometimes it's something a bit smaller and more manageable (or maybe manglable).

A few columns ago I presented a couple of user e-mails. One was from a user who wanted to use several chess engines to evaluate every variation in ECO (the Encyclopedia of Chess Openings); the other was a tongue-in-cheek response from Bob Durrett, who suggested a similar project. These e-mails have produced yet another response, this time from Angelo DePalma:

The computer solving idea isn't so weird. Many computationally huge scientific problems, e.g. the search for extraterrestrial life, are under investigation using a technology known as distributed computing. It works by having a central computer with the "brains" tap into the processing power of volunteers' computers during idle time.

This idea's been kicked around in chess message boards for years now, usually under a topic header like "Computers solving chess". The basic premise is that chess is a "solvable" game -- that if each player makes the best possible move, right from Move One, there is a set of optimal moves which proves chess to be a forced win for White or Black. The idea from the message boards is that a computer could conceivably "solve" the game of chess as a forced win for one of the two colors.

Now what we were talking about in those prior ChessBase Workshop columns is somewhat similar, though less daunting (and that's a relative term here, as we'll soon see) -- all we're attempting to do is to computer-analyze every line from ECO and arrive at some "ultimate" evaluation for each line.

The snowball I'm about to create involves a simple idea: that neither of these two events will ever happen.

Now before I get branded as a "Luddite" (or worse), let me make clear that I fully understand the appeal of these tasks for the mathematically-inclined. Chess is a game of "perfect information"; each player has full access to the board conditions and positions of the pieces at all times. There are no hidden variables. This stands in contrast to something like poker, in which each player has information that is his and his alone (i.e. the specific cards in his hand), concealed from the other players, who are likewise concealing their own hands. Poker is thus a game of "imperfect information"; you must devise and act on a strategy based on the incomplete information available to you.

Chess, being a game of perfect information, appeals to the mathematically-inclined because of its resemblance to other perfect-information games. Many of these games appear in "puzzle" form. Two examples are logic problems and those infernal Japanese number problems which are all the rage right now. Theoretically, the only way you can fail to solve a logic problem is because the puzzle's creator left information out of his set of hints. Likewise, the only way you can fail to solve one of those number problems is if the creator screwed up and made a puzzle which violates the terms of the game (that each number can only be used in a given row or column one time; and, by the way, this also puts a finite limit on the number of individual problems which can be created).

But is chess really solvable? In theory, yes, because of the finite (though still astronomical) number of possible board positions. In practice, no, because we have to evaluate every possible position in light of the positions which follow it. That's pretty easy if the side to move has mate-in-one sitting on the board. It's even possible for the average player if the moving side has, say, mate-in-three (provided that all of the "victim" player's moves are forced). Top GMs can solve "mate-in-x" problems which go much higher.

But is chess a forced "mate-in-seventy-two" (or whatever the number would possibly be)? We'll never know, because a human can't possibly see all possibilities that far ahead, and because a computer can't evaluate a sequence that deep -- for two reasons.

The first reason is one which renders the entire rest of the argument moot (although that's not going to stop me from discussing it): computers don't play perfect chess. The basic premise of both tasks (solving chess and evaluating every ECO variation out to some "ultimate truth") is that computer chess programs always play the best move. They don't. Some don't even play good moves (my old BORIS computer from the 1970's is a perfect example). However, unless it's handicapped in some way, a chess computer will always play the best move it finds -- but forces both external and internal will always limit what it can find.

One of these internal forces is the "horizon effect". A chess program, while lightning fast, can still evaluate only so many positions within a given time constraint. Give a computer thirty seconds to find the best variation and it will do so -- as long as you're willing to accept "best" as meaning the best move it can find within thirty seconds. Give it ten minutes instead and it may well find a better move or variation: the best one it can find within ten minutes. Bump it up to an hour and you get the same situation; the move or variation it finds will be the best one it can find within the hour you gave it. And so on. But unless the initial position is from an endgame or one in which there's a forced mate, it's not going to "see" the whole game out to the end and base its suggested move on some concrete ultimate conclusion (i.e. mate). It's just going to assume best play for both sides over the number of moves it can see and make its suggestion based on that limited information. And you must understand that this may or may not be the "best" move in an "ultimate truth" sense -- there might be a better move, but all you're seeing is the best one that the program can find.

A second internal factor (which can be "turned off" in some programs) is tree pruning. A chess engine will disregard "unpromising" lines early in its search, looking at them no further. For example, if an engine sees a variation in which a player loses his Queen with no compensation (within its search horizon, that is), it'll blow that variation off and consider it no further. The danger here is that fifteen or twenty moves later the loss of that Queen would result in something good happening for that player (in other words, a Queen sac with distant compensation -- far-fetched, but possible) and the computer won't see it, since that result lies outside its search horizon.
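
To make those last two points a bit more concrete, here's a minimal toy sketch in Python -- not taken from any real engine, with every name in it (the little TREE, the static guess of minus nine) invented purely for illustration. Searched two plies deep, the "queen sac" looks like nothing but a lost Queen and the quiet move wins out; searched one ply deeper, the sacrifice turns out to be the better line.

```python
# A toy sketch, not any real engine's code: depth-limited minimax over a tiny
# hand-built game tree. The depth cap is the "horizon"; the hard-coded static
# guess plays the role of a quick evaluation at the horizon.

# Interior nodes map a move name to a child node; leaves are scores from
# White's point of view (White is to move at the root).
TREE = {
    "quiet move": {"any reply": 0.3},                              # small, safe edge
    "queen sac":  {"forced recapture": {"winning follow-up": 5.0}},
}

# What a static evaluation would say about a position it can't search past:
# after the recapture, White is simply "down a Queen with no compensation".
STATIC_GUESS = {"forced recapture": -9.0}

def minimax(node, depth, maximizing, reached_by=""):
    if not isinstance(node, dict):             # leaf: exact score
        return node
    if depth == 0:                             # horizon reached: static guess only
        return STATIC_GUESS.get(reached_by, 0.0)
    scores = (minimax(child, depth - 1, not maximizing, move)
              for move, child in node.items())
    return max(scores) if maximizing else min(scores)

for depth in (2, 3):
    best = max(TREE, key=lambda move: minimax(TREE[move], depth - 1, False, move))
    print(f"searching {depth} plies deep, the 'engine' prefers: {best}")
# searching 2 plies deep, the 'engine' prefers: quiet move
# searching 3 plies deep, the 'engine' prefers: queen sac
```

The same tiny tree shows why pruning is a gamble: an engine that discarded the sacrifice the moment the static guess came back at minus nine would never search the follow-up at all.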

A third internal factor is the fallibility of the human programmer. There are a lot of good chess engines available on the market, but they're all programmed differently and will sometimes evaluate a position differently. Some bad chess engines are bad for exactly that reason: the programmer hasn't done his job nearly well enough, and a mistake in his code causes the engine to misevaluate positions. But even with the current good, strong chess engines an element of variability creeps in. I've analyzed dozens of positions using multiple chess engines in which two engines give a position the exact same numerical evaluation (such as +0.03 for the moving side) but suggest different concrete variations (including different starting candidate moves).

So which engine is correct here? That's the whole rub and the reason why this is an unsolvable problem: which engine do you use and, if you use different engines, which one do you believe?

The external reason which makes it impossible for a computer to "solve" chess is also the second reason why a computer can't evaluate a position deeply enough to arrive at "the ultimate truth": the sheer number of variations in the variation tree, the horrendous number of forks in the road caused by the choices that either player can make at nearly every move of the game.

For purposes of the discussion, let's imagine a hypothetical middlegame position in which it's White's turn to move and he has twenty possible legal moves. Each of these choices might result in a different number of legal replies for Black (for example, White's Move A might leave Black with eighteen possible responses, Move B with nineteen legal replies, Move C with eighteen, Move D with twenty-one, etc.), but to keep it simple we'll imagine that Black has twenty legal replies, each of which gives White twenty legal replies, and so on. White's initial choice can result in twenty possible board positions. Black's replies will result in 400 unique board positions. Let's do the math and see where this exponential progression leads us:

  • White's first move: 20 positions
  • Black's first move: 400 positions
  • White's second move: 8,000 positions
  • Black's second move: 160,000 positions
  • White's third move: 3,200,000 positions
  • Black's third move: 64,000,000 positions
  • White's fourth move: 1,280,000,000 positions
  • Black's fourth move: 25,600,000,000 positions
  • White's fifth move: 512,000,000,000 positions
  • Black's fifth move: 10,240,000,000,000 positions
  • White's sixth move: 204,800,000,000,000 positions
  • Black's sixth move: 4,096,000,000,000,000 positions

That's right, sports fans -- after just six moves for each player a computer would have to assign a numerical value to more than four quadrillion positions. Just for chuckles (and since smoke isn't rolling out of my calculator just yet), let's extend it:

  • White's seventh move: 81,920,000,000,000,000 positions
  • Black's seventh move: 1,638,400,000,000,000,000 positions
  • White's eighth move: 32,768,000,000,000,000,000 positions
  • Black's eighth move: 655,360,000,000,000,000,000 positions
  • White's ninth move: 13,107,200,000,000,000,000,000 positions
  • Black's ninth move: 262,144,000,000,000,000,000,000 positions
  • White's tenth move: 5,242,880,000,000,000,000,000,000 positions
  • Black's tenth move: 104,857,600,000,000,000,000,000,000 positions
  • White's eleventh move: 2,097,152,000,000,000,000,000,000,000 positions
  • Black's eleventh move: 41,943,040,000,000,000,000,000,000,000 positions
  • White's twelfth move: 838,860,800,000,000,000,000,000,000,000 positions
  • Black's twelfth move: 16,777,216,000,000,000,000,000,000,000,000 positions

OK, smoke's a-rollin' now, so I'll have to stop. I'm not sure how you'd express this last number verbally (it's whatever comes after "octillion", or is that some kind of debutante ball? I forget) but where I come from we just use the technical term "a helluva lot".

Now I do recall that we were talking about distributed computing here -- using a lot of computers in order to spread the workload. Assuming that you could find a way to divide the labor amongst, say, ten thousand computers, each unit would still have to evaluate 1,677,721,600,000,000,000,000,000,000 positions. Not quite a "helluva lot" but still qualifying as a "metric boatload".
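
If you'd like to melt a calculator of your own, a few lines of Python will reproduce the two tables above and the per-machine share -- a rough sketch that simply assumes, as we did, a flat twenty legal moves at every single turn.

```python
# Rough sanity check on the numbers above, assuming a flat branching factor
# of twenty legal moves at every turn (the column's own simplification).

BRANCHING = 20
positions = 1
for ply in range(1, 25):                       # 24 plies = 12 full moves each
    positions *= BRANCHING
    side = "White" if ply % 2 else "Black"
    move_number = (ply + 1) // 2
    print(f"{side}'s move {move_number}: {positions:,} positions")

# Spread the final count across 10,000 machines:
print(f"per machine: {positions // 10_000:,} positions")
```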

So I think it's safe to assume that until such time as we discover some alternate dimension in which math equations are enormously simplified, the idea of "solving chess" is right off the table.

Instead we'll look at the "simpler" task of evaluating every variation in ECO. For those who haven't seen The Encyclopedia of Chess Openings, it's a five-volume set of books in which all chess openings are categorized amongst 500 separate alphanumeric classifications (A00-E99). Each of these classifications is further divided amongst several numbered "main line" variations (typically ten to twenty), which is why you'll sometimes see a specific ECO variation cited in the form C02_01 (the first numbered variation in classification C02). Each of these numbered variations contains a certain number of footnoted subvariations, and these often appear as several unnumbered sidelines which each also contain further sub-subvariations.

That's a lot of variations. Hundreds of thousands of them, in fact. And that doesn't even cover every legal move in those variations.

But what we're after is an "ultimate truth" evaluation for each of these variations. To give an example of what we're up against, let's consider just one of these numbered variations: the first variation of code C68 in the second edition of ECO:

1.e4 e5 2.Nf3 Nc6 3.Bb5 a6 4.Bxc6 bxc6 5.d4
[5.Nxe5 Qe7 (5...Qg5 6.Nf3 Qxg2 7.Rg1 Qh3 8.d4 Nf6 9.Rg3 Qh5 10.Nc3 Bb4 11.Qe2 Bxc3+ 12.bxc3 Qa5 13.Ne5 Bb7 14.Bg5) 6.d4 d6 7.Nxc6 Qxe4+ 8.Qe2 Qxe2+ 9.Kxe2 Bb7 10.d5 Bxc6 11.dxc6 Ne7 12.Nc3 Nxc6 13.Nd5 0-0-0 14.Be3;
5.0-0 d6 6.d4 f6 7.Be3 (7.dxe5 fxe5 8.Nxe5 dxe5 9.Qh5+ Ke7 10.Rd1 Bd7 11.Qxe5+ Kf7 12.Qd4) 7...g6 8.Nbd2 Bg7 9.Qe2]
5...exd4 6.Qxd4 [6.Nxd4 g6 7.Nc3 Bg7 8.0-0 Ne7 9.Be3 (9.f4 c5 10.Nb3 d6 11.f5 gxf5 12.Qh5 0-0) 9...c5 10.Nb3 d6 11.Qd2 0-0] 6...Qf6 [6...d6 7.0-0 Ne7 8.b4 Ng6 9.Bg5 f6 10.Be3 Be7 11.Nbd2] 7.e5 [7.0-0 Qxd4 8.Nxd4 Nf6 (8...Rb8 9.Nb3 Ne7 10.Bd2 Ng6 11.Bc3) 9.Nc3 Rb8;
7.Nc3 Qxd4 8.Nxd4 g6 (8...Ne7) 9.Bf4 Bg7 10.Rd1 Ra7 (10...d6 11.Nxc6 Bxc3+ 12.bxc3 Bb7) 11.0-0 Ne7 12.Be3 Rb7 13.Nb3 d6;
7.Be3 Qxd4 8.Bxd4 Nf6 9.Nbd2 Be7 10.0-0-0 0-0 11.Rhe1 a5 12.a4 c5 13.Bxf6 Bxf6 14.Nc4;
7.Qd3]
7...Qg6 8.0-0 Bb7 [8...Qxc2 9.Nc3] 9.Nbd2 [9.e6 fxe6 10.Rd1 (10.Ne5 Qxg2+) 10...Bd6 11.Bf4 e5] 9...0-0-0 10.Nb3 c5 11.Qc3 f6

All of the moves which appear at the end of a variation (what computer weenies like me call "leaf nodes") are in bold type. And each of them (except for one or two) ends in an evaluation symbol (which I've omitted for copyright purposes) which tells you which side has the advantage (or that the position is equal or "unclear"). These evaluations are made by master or grandmaster chessplayers.

My initial question is this: since each line already ends in an evaluation, why do we need a computer to provide a further one? I'll freely admit to the possibility of human error (or deliberate deception) in the assignment of these evaluations (in fact my good friend Sid Pickard wrote a book called ECO Busted! in which he pointed out misevaluations appearing in the second edition of ECO), but for the most part the evaluations should hold true enough. I'm not sure why further analysis is required, unless it's to double-check the analysis provided in the books. But we'll continue with the exercise, despite my reservations.

By my count there are eighteen leaf nodes which must be evaluated in this single numbered line of a single alphanumeric classification from ECO. The next question is how far we evaluate each line (in other words, how many moves deep must we go before the analysis is deemed conclusive)? Let's use the final position of the main line (11...f6) as our test subject. There are thirty-five legal White moves in this position. The exact number of Black moves will vary according to what White plays, but as a benchmark let's take approximately half the initial number (rounding down to seventeen) and use it for each successive iteration in our exponential progression (treating it as an average, or at least a median):

  • White's twelfth move: 35
  • Black's twelfth move: 595
  • White's thirteenth move: 10,115
  • Black's thirteenth move: 171,955
  • White's fourteenth move: 2,923,235
  • Black's fourteenth move: 49,694,995
  • White's fifteenth move: 844,814,915
  • Black's fifteenth move: 14,361,853,555
  • White's sixteenth move: 244,151,510,435
  • Black's sixteenth move: 4,150,575,677,395
  • White's seventeenth move: 70,559,786,515,715
  • Black's seventeenth move: 1,199,516,370,767,155
  • White's eighteenth move: 20,391,778,303,041,635
  • Black's eighteenth move: 346,660,231,151,707,795
  • White's nineteenth move: 5,893,223,929,579,032,515
  • Black's nineteenth move: 100,184,806,802,843,552,755
  • White's twentieth move: 1,703,141,715,648,340,396,835
  • Black's twentieth move: 28,953,409,166,021,786,746,195

OK, let's stop there and adopt the following parameter for our search depth: twenty full moves for each variation, unless a forced mate or overwhelming advantage is found sooner.
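
As an aside, the starting figure of thirty-five legal White moves is easy to double-check if you happen to have the python-chess library handy (it's imported as chess): the few lines below simply play out the C68 main line quoted earlier and count White's legal moves in the final position. Purely an optional cross-check; everything that follows works from the column's own numbers.

```python
# Optional cross-check of the "thirty-five legal White moves" figure, using the
# python-chess library (imported as `chess`). The moves are the main line of
# the C68 variation quoted above, ending with 11...f6.

import chess

MAIN_LINE = ("e4 e5 Nf3 Nc6 Bb5 a6 Bxc6 bxc6 d4 exd4 Qxd4 Qf6 "
             "e5 Qg6 O-O Bb7 Nbd2 O-O-O Nb3 c5 Qc3 f6").split()

board = chess.Board()
for san in MAIN_LINE:
    board.push_san(san)

print(len(list(board.legal_moves)), "legal moves for White after 11...f6")
```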

Just look at the number of positions which have to be evaluated to reach twenty full moves (and this is a very rough approximation based on the estimate we made earlier -- the actual number could be even higher). This assumes that no "tree pruning" is allowed (so that the engine doesn't miss anything; remember, we're after "ultimate truth" here).

Fritz 8 on my machine evaluates about 250,000 positions a second (give or take -- and that's a high-end figure). At that rate it would require 115,813,636,664,087,146.98 seconds to perform this analysis. That's 1,930,227,277,734,785.78 minutes, or 32,170,454,628,913.1 hours, or 1,340,435,609,538 days. Divide that by 365 and we get 3,672,426,327.5 years.

While I do plan on living forever, I don't feel like waiting that long to see who Fritz thinks is better after twenty moves in this variation.

I really was going to say that "you'd never know in a million years who's winning in this position". I was. But now I'm just laughing instead.

So let's use distributed computing and divide the workload across 10,000 computers. Now we only have to wait a bit more than 367,242 years for our answer.
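
For anyone who wants to check my smoking calculator, the whole back-of-the-envelope chain fits in a few lines of Python, using the same assumptions as above: 35 × 17^17 positions at Black's twentieth move, roughly 250,000 positions per second, and 10,000 machines.

```python
# Back-of-the-envelope check of the waiting times above, using the column's
# own assumptions: 35 * 17**17 positions at Black's twentieth move, an engine
# speed of roughly 250,000 positions per second, and 10,000 machines.

positions = 35 * 17 ** 17
seconds = positions / 250_000
years = seconds / (60 * 60 * 24 * 365)

print(f"{positions:,} positions to evaluate")
print(f"about {years:,.1f} years on a single machine")
print(f"about {years / 10_000:,.1f} years across 10,000 machines")
```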

And you need to remember that we have seventeen more such analyses to perform on just this one numbered line in ECO. There are thousands more.

Sorry, but the problem is just too vast. And, realistically, trivial. This might sound like sacrilege to a few readers, but there are a whole lot of more important things in life than chess to expend computing power on. Let's explore the cosmos, cure some fatal epidemics, and solve some real world problems first -- White's winning advantage from Move One can wait just a bit longer.

But if you really want to tackle this problem, feel free. Seriously. Let me know when you have the answer -- my e-mail address is at the bottom of this column.

Until I hear from you, have fun!

You can e-mail me with your comments on ChessBase Workshop. All responses will be read, and sending an e-mail to this address grants us permission to use it in a future column. No tech support questions, please.


© 2005, Steven A. Lopez. All rights reserved.

