Category Archives: chess engines

Useful Technology?

This review has been printed in the March 2020 issue of Chess Life.  A penultimate (and unedited) version of the review is reproduced here. Minor differences exist between this and the printed version. My thanks to the good folks at Chess Life for allowing me to do so.


Kaufman, Larry. Kaufman’s New Repertoire for Black and White: A Complete, Sound and User-friendly Chess Opening Repertoire. Alkmaar: New in Chess, 2019. ISBN 9789056918620. PB 464pp.

Kaufman’s New Repertoire for Black and White: A Complete, Sound and User-Friendly Chess Opening Repertoire is the third incarnation of Larry Kaufman’s one-volume opening repertoire. While the first two – The Chess Advantage in Black and White: Opening Moves of the Grandmasters (2004) and The Kaufman Repertoire for Black and White: A Complete, Sound and User-Friendly Chess Opening Repertoire (2012) – were well-regarded, this new edition appears at something of an inflection point in the history of chess theory.

Opening theory has exploded over the past two decades, due largely to the influence of engines and databases. As one of the developers of Rybka and Komodo, among other important projects, Kaufman has made good use of engines in his writing, and both previous versions of this project proclaim the central role played by the computer. In 2004 it was Fritz, Junior, and Hiarcs, and in 2012 he used Houdini and especially Komodo.

Today the landscape has changed. The rise of artificial intelligence and neural network engines, first AlphaZero and now Leela Chess Zero, is reshaping opening theory. In Mind Master, reviewed here last month, Viswanathan Anand relates that Caruana and Carlsen were the first elite players to make use of Leela in their 2018 match preparations, and that his trainer introduced it into their workflow at the end of that year. Chess authors have picked up on the trend, and works written under Leela’s influence are beginning to appear.

Kaufman’s New Repertoire is advertised as “the first opening book that is primarily based on Monte Carlo search.” This is somewhat imprecise – Leela’s evaluations come from the neural network, not game rollouts – but the point remains that Kaufman has chosen to make use of the newest technologies in writing his book. He relied on Leela and a special “Monte Carlo” version of Komodo to craft the repertoire, generally deferring to Leela’s view while reserving the right to serve as “referee” if the engines disagree.

So what does Kaufman’s new repertoire look like? As the title suggests, the book contains a complete opening solution for both colors, focusing on 1. e4 for White, and the Grunfeld and Ruy Lopez for Black. Kaufman is covering a lot of ground here, generally offering two systems or ideas against most major continuations. In the mainline Ruy he offers readers three choices with Black: the Breyer, the Marshall, and the Møller.

The virtue of this approach is clear. Kaufman’s New Repertoire gives readers a one-stop opening repertoire, featuring professional lines, particularly with Black, and computer-tested ideas that can inspire confidence. But in an age where multi-volume, single-color repertoires are increasingly the norm, is it possible to include enough detail in fewer than 500 pages?

Let’s dive a bit deeper and take a look at specific recommendations.

White: 1. e4
  • vs Caro-Kann – (a) 4. Bd3 Exchange Variation, (b) 3. Nc3 dxe4 4. Nxe4 Bf5 5. Qf3!?, (c) Two Knights.
  • vs French – Tarrasch Variation.
  • vs 1. … e5 – (a) Italian Game, with multiple repertoire choices offered, (b) Ruy Lopez with 6. d3, and 5. Re1 against Berlin.
  • vs Sicilian – (a) 2. Nc3 ideas, including 2. Nc3 d6 3. d4 cxd4 4. Qxd4 Nc6 5. Qd2 and the anti-Sveshnikov 2. Nc3 Nc6 3. Nf3 e5 4. Bc4; (b) 2. Nf3 and 3. Bb5 against 2. … d6 and 2. … Nc6, and 3. c3 against 2. … e6, entering the Alapin.
Black: 1. e4 e5 and Grunfeld
  • … Nf6 against the Scotch.
  • … Bd6 against the Scotch Four Knights.
  • … Bc5 in the Italian game, focusing on 4. d3 Nf6 5. 0-0 0-0 and now 6. c3 d5, 6. Re1 Ng4, 6. a4 h6 followed by … a5, and 6. Nbd2 d6.
  • the Breyer is the “best all-purpose defense” in the 9. h3 Ruy Lopez, but Kaufman also includes Leela’s favored Marshall Attack and the Møller, inspired by Anand.
  • Neo-Grunfeld without … c6 vs the Fianchetto.
  • … Nc6 against f3.
  • …a6 against the Russian System.
  • … Qxa2 and 12. … b6 against 7. Nf3 in the Exchange variation.
  • three options – 10. … Qc7 11. Rc1 b6, 10. … e6, and 10. … b6 – against the 7. Bc4 Exchange.
  • 1. c4 / 1. Nf3 – Anti-Grunfeld, Symmetrical English, and a tricky path into the Queen’s Indian Defense for transpositional reasons.

While chapter introductions explain his reasons for individual repertoire choices, Kaufman’s analysis revolves mostly around concrete lines, using commented games as his vehicle. He tends to propose variations that avoid the heaviest theory with White, while turning to two of the most professional of openings – the Breyer and Grunfeld – as the backbones of his Black repertoire.

In the Introduction Kaufman warns his readers that he omits “rare” responses from the opponent to save space and offer alternative ideas. This means that the book is unlikely to be refuted, but readers will have to do some extra work to flesh out their repertoires.

The analysis in Kaufman’s New Repertoire is heavily influenced by the computer, and individual lines are usually punctuated with numerical evaluations from Komodo. This is not to say that the book is perfect. Attributions of novelty status are sometimes incorrect, although that may have more to do with differing data sets than anything else. More worrisome are the analytical errors and omissions. Two examples:

(a) Kaufman recommends 8. Qf3 in the Two Knights, and after 1.e4 e5 2.Nf3 Nc6 3.Bc4 Nf6 4.Ng5 d5 5.exd5 Na5 6.Bb5+ c6 7.dxc6 bxc6 8.Qf3 he analyzes the two traditional mainlines of 8. … Be7 and 8. … Rb8. Checking his work, I discovered that neural net engines think sacrificing the exchange with 8. … cxb5 is fully playable, giving Black good compensation after 9. Qxa8 Be7 (Leela) or 9. … Qc7 (Fat Fritz). See the recent game Chandra-Theodorou from the SPICE Cup in 2019 for an example of the latter.

Jan Gustafsson made an analogous, and equally Leela-inspired, discovery in his new (and outstanding) Lifetime Repertoire: 1. e4 e5 series for Chess24, analyzing 8. … h6 9. Ne4 cxb5 10. Nxf6+ gxf6 11. Qxa8 Qd7! where the best White can do is head for a perpetual.

While the idea of giving the exchange is considered inferior by theory, the fact that Leela approves it should have been just the kind of discovery that Kaufman would trumpet here. Perhaps he didn’t believe what he was seeing, although it should be noted that Komodo verifies Black’s compensation.

(b) After 1.e4 e5 2.Nf3 Nc6 3.Bc4 Bc5 4.c3 Nf6 5.d4 exd4 6.e5!? d5 7.Bb5 Ne4 8.cxd4 Bb6 9.Nc3 0–0 10.Be3 Bg4 11.h3 Bh5 12.Qc2 we reach a “rather critical” position.

Here Kaufman discusses five moves: 12. … Bg6, 12. … Bxf3, 12. … Nxc3, 12. … Rb8, and 12. … Ba5!, which “may be Black’s only path to roughly equal chances.” (96)

I found two problems with the analysis, both involving Kaufman glossing over a poor move towards the end of a line, allowing him to claim an advantage for the side he is championing. After 12. … Bxf3, 18. … Nf5 is dubious; better is 18. … Ng6 as in Vocaturo-Moradiabadi, Sitges 2019. His analysis of 11. Qc2 is also flawed – check the pgn at uschess.org for more details. And these were not the only “tail-errors” I found in my study.

I’m torn on how to assess these analytical lapses. On the whole the book is well-researched and up to date, and the broad outlines of all Kaufman’s repertoire choices seem sound. So why are there these small problems, especially when the entire conceit of the book is its being computer-proofed, and with so many of the lines cribbed verbatim from the engine? I don’t have an answer to this, but I do wonder if Kaufman doesn’t suffer from a bit of confirmation bias.

As one of the co-authors of Komodo, Kaufman surely trusts the engine a great deal, but the version used here – Komodo MCTS – is markedly inferior to traditional Komodo or Stockfish, and is rated some 200 points lower on most testing lists. Komodo MCTS has the advantage of being able to analyze multiple lines at once without a performance hit, but its (very relative) tactical shallowness can be a concern. Because Leela suffers from similar issues, it may have been smarter to pair it with traditional Komodo instead.

Kaufman’s New Repertoire for Black and White is a solid repertoire offering despite these problems. His recommendations are well-conceived, and I was impressed with how much Kaufman was able to stuff into these pages. There’s not a lot of conceptual hand-holding here, so readers will have to be strong enough – say 2000 and above – to get maximum value from the book, and many lines will require supplemental study and analysis for the sake of completeness. Still, for those looking for a one-stop repertoire, particularly from the Black side, Kaufman’s book might be just what the doctor ordered.

End of an Era

This review has been printed in the June 2019 issue of Chess Life.  A penultimate (and unedited) version of the review is reproduced here. Minor differences exist between this and the printed version. My thanks to the good folks at Chess Life for allowing me to do so.

Readers may also be interested in an interview I did with Avrukh for Chess Life Online, where we talk about the book, his writing process, and look at a recent game of his from the 2019 Chicago Open.

—————-

Avrukh, Boris. Grandmaster Repertoire 2B: 1.d4 Dynamic Systems. Glasgow: Quality Chess, 2019. ISBN 978-1784830465. PB 529pp.

With the publication of Grandmaster Repertoire 2B: 1.d4 Dynamic Systems, the fourth and final volume in his revised White 1.d4 repertoire and his tenth title published with Quality Chess, GM Boris Avrukh has announced that he is taking “a break” from book publishing. It is, at least for now, the end of an era.

When Avrukh published the first edition of his 1.d4 repertoire in 2008 and 2010, the effect was nothing short of revolutionary. He coupled astute opening choices with World Championship-level analysis – Avrukh seconded Gelfand in the 2012 World Championship match with Anand – to create a professional, poisonous two-volume repertoire that anyone could buy for $65.

Opening theory never stops moving, of course, and with the appearance of GM Repertoire 2B, Avrukh has completed the revision and expansion of his repertoire. What was two volumes is now four. Two – 1A (2015) and 1B (2016) – focus on 1.d4 d5, including the Catalan, Queen’s Gambit Accepted, the Slav, the Tarrasch, etc. Two more – 2A (2018) and 2B (2019) – treat everything else, including the King’s Indian, Grunfeld, Dutch, Benko, and so forth.

While statistics show that the Catalan was already in the ascendancy when GM Repertoire 1 was published, Avrukh’s influence on the popularization of the opening cannot be overstated, and I would argue that it was his treatment of the Catalan that made his name in the chess publishing world. His analysis in GM Repertoire 1 reshaped both the theory and practice of the system, and again, we can see his influence in database statistics.

Avrukh’s original recommendation in the Open Catalan – 1.d4 Nf6 2.c4 e6 3.g3 d5 4.Nf3 Be7 5.Bg2 O-O 6.O-O dxc4 7.Qc2 a6 and now 8.Qxc4 instead of 8.a4 – took a somewhat neglected move and reinvigorated it. The relative popularity of 8.Qxc4 spiked after GM Repertoire 1 was published in 2008, and then waned after Avrukh argued for 8.a4 in 1A.

Correlation is not causation, and Black improvements after 8.Qxc4 no doubt contributed to this shift. But the fact remains that Avrukh’s books have had a palpable effect on opening theory at even the highest levels. The same can be said for his Anti-Slav ideas. His move order against Meran-style setups – 1. d4 d5 2. c4 c6 3. Nf3 Nf6 4. e3 e6 5. b3!? – was little known before he wrote about it, and today it is one of the main ways that White tries to eke out an advantage against the Slav.

While Avrukh tweaks his recommendations in 1A and 1B, he does not fundamentally alter his repertoire. There is the shift to 8.a4 in the Open Catalan, as discussed above, a move from 3.e3 to 3.e4 in the Queen’s Gambit Accepted, and the replacement of 10.Nd2 in the mainline Fianchetto Benoni with 10.Bf4. The basic contours of his 1.d4 Nf6 and 1.d4 “varia” repertoires also remain the same in the revised GM Repertoires 2A and 2B.

Fianchetto setups are integral to Avrukh’s repertoire against the Grunfeld and King’s Indian in 2A. Against the “Solid Grunfeld” he offers 1.d4 Nf6 2.c4 g6 3.g3 c6 4.Bg2 d5 5.Qa4!?, hoping to prevent Black from recapturing on d5 with a pawn. The “Dynamic Grunfeld” builds upon his GM Repertoire 2 analysis, and the bulk of the book (nearly 80%) is a revised and extended treatment of his ideas in the Fianchetto King’s Indian.

This leaves the sundry defences that many 1.d4 players dread – the Dutch, the Benko, and the Budapest, along with the odd sidelines that strong players trot out from time to time. GM Repertoire 2B offers remedies for all of these, and it’s worth spending some time looking at three specific prescriptions to get a sense of Avrukh’s style and analysis.

(1) One of Avrukh’s more prominent ideas in GM Repertoire 2 came in the Classical Dutch. After 1.d4 f5 2.g3 Nf6 3.Bg2 e6 4.c4 Be7 5.Nf3 0–0 6.0–0 d6 7.Nc3 Ne4 8.Nxe4 fxe4 9.Nd2 d5 10.f3 Nc6 he recommended 11.fxe4 Rxf1+ 12.Nxf1 dxc4 13.Be3, but Simon Williams’ improvement 13. …Bd7! (Sen-Williams, Uxbridge 2010) led Avrukh to search for another path forward.


His new idea is 11.e3!? exf3 12.Nxf3, when “[t]he position resembles a Catalan, except that the f-pawns have been removed.” (2B, 78) This seems a canny choice, fitting with the larger contours of Avrukh’s repertoire: playing for a positional advantage and limiting the opponent’s dynamism. That Stockfish 10 approves it also doesn’t hurt! Avrukh analyzes two continuations.

[A] 12. …b6 is seen in a correspondence game: 13.Bd2 Bb7 14.Rc1 Qd6 15.Qc2 Rac8 16.cxd5 exd5 17.b4! (Oppermann,P-Prystenski,A, ICCF email 2016)

[B] 12. …Bf6 13.Bd2 a5 14.Rc1 Kh8 and now instead of 15.Ne1 (Schmid-Halkias, Wunsiedel 2014) Avrukh analyzes the novelty 15.Rf2!? with good prospects for White.

(2) The Benko Gambit is often dreaded by club players. Black sacs a pawn for what appears to be solid compensation and plays on ‘auto-pilot,’ making typical moves while White sweats her way through the middlegame, frantically clutching her extra pawn. Avrukh shifts in 2B from his earlier recommendation of the Fianchetto Variation to the now-trendy 12.a4 ‘King-Walk,’ and he also gives White a weapon against a new sideline in the Benko.

1.d4 Nf6 2.c4 c5 3.d5 b5 4.cxb5 a6 5.bxa6 g6!?

Postponing the pawn capture is a new idea, and the subject of Milos Perunovic’s very interesting The Modernized Benko Gambit. Benko players have flocked to it, largely because of the current problems in the Benko proper.

Avrukh follows current theoretical trends in the ‘old’ Benko by recommending 5. …Bxa6 6.Nc3 g6 7.e4 Bxf1 8.Kxf1 d6 9.Nf3 Bg7 10.g3 0–0 11.Kg2 Nbd7 12.a4!. White is currently scoring very well in this line championed by none other than Magnus Carlsen (via transposition). See Carlsen-Bologan, Biel 2012.

6.Nc3 Bg7 7.e4 0–0 (7. …Qa5 8.a7!) 8.a7!

“The most dangerous idea for Black. White’s idea is clear: with Black’s rook on a7, he can always win a tempo with Nb5. Now we can’t play …Qa5 because after Bd2, White has the threat Nb5.” (Perunovic, 109)

Avrukh notes that we can’t play 8.Nf3 because of 8. …Qa5! when the pin and attack on e4 forces us to choose between 9.Bd2 and 9.Nd2.

8. …Rxa7 9.Nf3 e6

Perunovic’s recommendation. Black has a few alternatives: 9. …d6 10.Be2 Ba6 11.0–0; 9. …Qa5 10.Bd2!; and 9. …Qb6 10.Be2 Ba6 11.0–0.

10.Be2 exd5 11.exd5 d6 12.0–0 Na6

If 12. …Ba6 Avrukh likes 13.Re1, which provides “a [simple] route to an edge.”

13.Nb5 Rd7 14.Bc4 Bb7 15.Bg5

Perunovic analyzes this position out to move 18, saying that Black has compensation for the pawn. Avrukh extends that analysis to move 23 and thinks that White gets the better end of things.

(3) After recommending 4.Nf3 against the Budapest in GM Repertoire 2, Avrukh turns to a little-known sideline to justify his new selection, 4.Bf4.

1.d4 Nf6 2.c4 e5 3.dxe5 Ng4 4.Bf4 g5

Avrukh had avoided this line in GM Repertoire 2, feeling that 5.Bg3 Bg7 was “quite reliable for Black.” He revises his opinion in 2B, having found a “powerful antidote… [that is] both easier to learn and objectively stronger, in my opinion.” (339, 340)

Note that White is said to get an advantage after the alternative 4. …Nc6 5.Nf3 Bb4+ 6.Nbd2 Qe7 7.e3 Ngxe5 8.Nxe5 Nxe5 9.Be2 0–0 10.0–0 Bxd2 11.Qxd2 d6 12.b4, preparing c4–c5.

5.Bd2!? Nxe5 6.Nf3 Bg7

6. …Nbc6 7.Nc3 d6 8.Qc2 Bg7 9.0–0–0 and Avrukh’s analysis runs to move 16, giving White a strong edge.

7.Nxe5 Bxe5 8.Nc3! d6 9.g3 Nc6 10.Bg2 Be6 11.Nd5 g4 (Dreev-Zwardon, Warsaw 2013) and now 12.Bf4 h5 13.Qd2 “with a clear positional advantage.”

What do these examples teach us about Avrukh’s work in 2B, and about his repertoire more broadly? Keeping in mind the impossibility of summarizing nearly 1800 pages of analysis, we can perhaps draw a few conclusions.

It’s clear that Avrukh has done his due diligence in these books. He cites all the relevant sources, and attempts to improve on each of them. Avrukh makes extensive use of correspondence games in his research, and he’s not ashamed to mention the (heavy) influence of the computer in his recommendations. Very few authors meet the standard of excellence Avrukh sets in these books.

What about the repertoire itself? My sense is that Avrukh’s recommendations tend to follow the Quality Chess shibboleth to “try the main lines.” There are no dodgy gambits here, but mainly concrete, positionally oriented variations that allow White to aim for a two-result game. This explains, in part, the use of the kingside fianchetto against the King’s Indian (and Grunfeld). His recommended lines minimize Black’s attacking chances, and force the game into more controlled channels.

Who should adopt Avrukh’s repertoire? Because it is concrete and positionally oriented, some of the key positions require serious technique to convert the small edge he claims. (I’m particularly thinking of his recommendations in the Catalan.) This is high-level chess, and it’s probably best suited for experts at minimum. That’s not to say that class players can’t learn something here, but the kinds of advantages that Avrukh aims for with White – sometimes just a “space advantage and bishop pair,” as he says in GM Repertoire 1 (11) – often barely register as advantages on the amateur level.

Because Avrukh’s analysis is so vast and detailed, some kind of “executive summary” of key recommendations would have been welcome. Some Quality Chess opening books – I’m thinking of Kotronias’ GM Repertoire 18: The Sicilian Sveshnikov in particular – have summaries after each chapter that, in themselves, could function as a first repertoire. The chapter summaries here are perfunctory at best, and it’s an opportunity missed.

As Avrukh steps back from book publishing, it remains to be seen what is next for the Chicago-based Grandmaster. One of his web projects, Chess Openings 24-7, discontinued its services as of April 2nd. He has authored an opening file for modern-chess.com as recently as March 16th of this year; see our May 2017 issue for a review of a similar effort. Will he continue in this vein? Will he keep writing at all? Like many fans of chess literature, I’ll be interested to find out.

And Then There Were Two

Komodo 9, written by Don Dailey, Larry Kaufman and Mark Lefler. Available (1) with Fritz GUI from Amazon ($80ish as of 5/28), (2) for download with Fritz GUI from ChessBase.com ($73.50 w/o VAT as of 5/28) and (3) directly from the Komodo website without GUI for $59.98; also available as part of a 1 year subscription package for $99.97.

Stockfish 6, written by the Stockfish Collective. Open-source and available at the Stockfish website.

—–

Now that Houdini seems to have gone gentle into that good night, there are two engines vying for the title of strongest chess engine in the world. Those two engines – Stockfish and Komodo – have each seen new releases in recent months. Stockfish 6 was released at the end of January, while Komodo 9 became available at the end of April from komodochess.com and the end of May from ChessBase.

Last year I wrote a review of Komodo 8 and Stockfish 5 that was republished at ChessBase.com, and much of what I wrote there applies here as well. Fear not, frazzled reader: you don’t need to go back and read that review, as most of the key points will be reiterated here.

First things first: any top engine (Komodo, Stockfish, Houdini, Rybka, Fritz, Hiarcs, Junior, Chiron, Critter, Equinox, Gull, Fire, Crafty, among many others) is plenty strong enough to beat any human player alive. This is not to say that all of these engines are equally strong. While they don’t always play the absolute best moves, none of the aforementioned engines ever makes a big mistake. Against fallible humans, that’s a recipe for domination. It’s nearly useless – not to mention soul-crushing! – to play full games against the top engines, although I do recommend using weaker engines (Clueless 1.4, Monarch, Piranha) as sparring partners for playing out positions or endgames.

Even if all the major engines can beat us, they’re not all created equal. Three major testing outfits – CCRL, CEGT, and IPON – engage in ongoing and extensive testing of all the best engines, and they do so by having the engines play thousands of games against one another at various time controls. In my previous review I noted that Komodo, Stockfish and Houdini were the top three engines on the lists, and in that order. This remains the case after the release of Komodo 9 and Stockfish 6:

CCRL (TC 40 moves/40 min, 4-cpu computers):
1. Komodo 9, 3325 (Komodo 8 was rated 3301)
2. Stockfish 6, 3310 (Stockfish 5 was rated 3285)
3. Houdini 4, 3269

CEGT
40/4: 1. Komodo 9, 2. Stockfish 6, 3. Houdini 4
G/5’+3”: 1. Komodo 9, 2. Stockfish 6, 3. Houdini 4
40/20: 1. Komodo 9, 2. Stockfish 6, 3. Houdini 4 (NB: list includes multiple versions of each engine)
40/120: 1. Stockfish 6, 2. Komodo 8 (does not yet include version 9), 3. Houdini 4 (NB: list includes multiple versions of each engine)

IPON
1. Komodo 9, 3190 (Komodo 8 was 3142)
2. Stockfish 6, 3174 (Stockfish 5 was 3142)
3. Houdini 4, 3118

The results are fairly clear. Komodo 9 is ever so slightly stronger than Stockfish 6 when it comes to engine-engine play, and this advantage seems to grow when longer time controls are used.
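
To put those rating gaps in perspective, the standard Elo expectancy formula converts a rating difference into an expected score per game. A quick sketch using the CCRL numbers quoted above (this is the generic Elo model, not any testing outfit’s specific methodology):

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected per-game score for player A under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# CCRL figures quoted above: Komodo 9 at 3325 vs Stockfish 6 at 3310.
# A 15-point edge is worth only about 52% per game.
print(round(expected_score(3325, 3310), 3))
```

An edge that small is why the testing outfits need thousands of games to separate engines this close in strength.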

For my purposes, though, what’s important is an engine’s analytical strength. This strength is indicated by engine-engine matches, in part, but it is also assessed through test suites and – perhaps most importantly – by experience. Some engines are more trustworthy than others in specific types of positions, or exhibit characteristic misunderstandings. Erik Kislik, for instance, reports in his April 2015 Chess Life article on the TCEC Finals – some of which appeared in his earlier Chessdom piece on TCEC Season 6 – that only Komodo properly understood the imbalance of three minor pieces against a queen. There are undoubtedly other quirks known to strong players who use engines on a daily basis.

In my previous review I ran Komodo, Stockfish and Houdini (among others) through two test suites on my old Q8300. Since then I’ve upgraded my hardware, and now I’m using an i7-4790 with 12GB of RAM and an SSD for the important five- and six-man Syzygy tablebases included with ChessBase’s Endgame Turbo 4. (Note: if you have an old-fashioned hard drive, only use the five-man tablebases in your search; using the six-man tablebases will slow the engine analysis down dramatically.) Because I have faster hardware I thought that a more difficult test suite would be in order, and – lucky me! – just such a suite was recently made available in the TalkChess forums. I gave Komodo 9 and Stockfish 6 one minute per problem to solve the 112 problems in the suite, and the results were as follows:

Komodo 9 solved 37 out of 110 problems (33.6%) with an average time/depth of 20.04 seconds and 24.24 ply. Stockfish 6 solved 30/110 (27.3%) with an average time/depth of 20.90 seconds and 29.70 ply. (Note that while there are 112 problems in the suite, two of them were rejected by both engines because they had incomplete data.) The entire test suite along with embedded results can be found at:

http://www.viewchess.com/cbreader/2015/6/6/Game1753083657.html
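
For readers who want to tally suite results the same way, the solve rates above are easy to reproduce. A small sketch (engine names and scores as reported in this review; per-problem timings would come from the actual test logs):

```python
# Reproduce the solve rates quoted above: 110 accepted positions
# (two of the 112 were rejected by both engines for incomplete data).
results = {"Komodo 9": 37, "Stockfish 6": 30}
accepted = 112 - 2
for engine, solved in results.items():
    print(f"{engine}: {solved}/{accepted} = {solved / accepted:.1%}")
```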

I have also been using both Komodo 9 and Stockfish 6 in my analytical work and study. So that you might also get a feeling for how each evaluates typical positions, I recorded a video of the two at work. Each engine ran simultaneously (2 CPUs, 2GB of RAM) as I looked at a few games of interest, most of which came from Alexander Baburin’s outstanding e-magazine Chess Today. The video is 14 minutes long. You can replay the games at this link:

http://www.viewchess.com/cbreader/2015/6/6/Game1752975735.html

Komodo 9 and Stockfish 6 in comparative analysis

Even a brief glance at the above video will make clear just how good top engines are becoming in their ability to correctly assess positions, but it also shows (in Gusev-Averbakh) that they are far from perfect. They rarely agree fully in positions that are not clear wins or draws, and this is due to the differences in evaluation and search between the two. Broadly speaking, evaluation is the set of criteria or heuristics an engine uses to ‘understand’ a position, while search is the way the engine explores and ‘prunes’ the tree of analysis. While many engines share similar traits in their evaluation or search, none are identical, and this produces the differences in play and analysis between them.
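
The evaluation/search split can be made concrete with a toy example. The sketch below is generic textbook alpha-beta, not any engine’s actual code: `evaluate` stands in for the heuristic scoring function, while the cutoff inside each loop is the simplest form of ‘pruning’ the tree.

```python
def alphabeta(node, depth, alpha, beta, maximize, evaluate, children):
    """Toy alpha-beta search. 'evaluate' supplies the positional
    heuristics; the alpha >= beta cutoffs below are the 'search'
    side, pruning branches that cannot affect the result."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximize:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, evaluate, children))
            alpha = max(alpha, best)
            if alpha >= beta:  # opponent would never allow this line
                break
        return best
    best = float("inf")
    for child in kids:
        best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                   True, evaluate, children))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

# A tiny two-ply 'game' where positions are nested lists and leaves are scores:
tree = [[3, 5], [2, 9]]
leaf_score = lambda n: n
moves = lambda n: n if isinstance(n, list) else []
print(alphabeta(tree, 2, float("-inf"), float("inf"), True, leaf_score, moves))
```

Two engines with identical search but different `evaluate` functions would disagree in exactly the way the video illustrates, and vice versa.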

Stockfish 6 is a rather deep searcher. It achieves these depths through aggressive pruning of the tree of analysis. While there are real advantages to this strategy, not the least of which is quick analytical sight and tactical ingenuity, there are some drawbacks. Stockfish can miss some resources hidden very deep in the position. I find it to be a particularly strong endgame analyst, in part because it now reads Syzygy tablebases and refers to them in its search. Stockfish is an open-source program, meaning that it is free to download and that anyone can contribute a patch, but all changes to evaluation or search are tested on a distributed network of computers (“Fishtest”) to determine their value.

Komodo 9 is slightly more aggressive in its pruning than Komodo 8, and it is slightly faster in its search as well. (Both changes seem to have been made, to some degree, with the goal of more closely matching Stockfish’s speed – an interesting commercial decision.) While Komodo’s evaluation is, in part, tuned through automated testing, it is also hand-tuned (to what degree I cannot say) by GM Larry Kaufman.

The result is an engine that feels – I know this sounds funny, but it’s true – smart. It seems slightly more attuned to positional nuances than its competitors, and as all the top engines are tactical monsters, even a slight positional superiority can be important.  I have noticed that Komodo is particularly good at evaluating positions where material imbalances exist, although I cannot say exactly why this is the case!

As more users possess multi-core systems, the question of scaling – how well an engine is able to make use of those multiple cores – becomes increasingly important. Because it requires some CPU cycles to hand out different tasks to the processors in use, and because some analysis will inevitably be duplicated on multiple CPUs, there is not a linear relation between number of CPUs and analytical speed.

Komodo 8 was reputedly much better than Stockfish 5 in its implementation of parallel search, but recent tests published on the TalkChess forum suggest that the gap is narrowing. While Stockfish 6 sees an effective speedup of 3.6x as it goes from 1 to 8 cores, Komodo 9’s speedup is about 4.5x. And the gap narrows further if we consider the developmental versions of Stockfish, where the speedup is now around 4x.
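
Those speedup figures are easier to compare as parallel efficiency, i.e. the fraction of ideal linear scaling actually achieved. A small sketch using the numbers quoted above:

```python
def parallel_efficiency(speedup: float, cores: int) -> float:
    """Fraction of ideal (linear) scaling: 1.0 would mean 8 cores = 8x."""
    return speedup / cores

# 1 -> 8 core speedups reported on the TalkChess forum, per the text:
for name, speedup in [("Stockfish 6", 3.6), ("Komodo 9", 4.5),
                      ("Stockfish dev", 4.0)]:
    print(f"{name}: {speedup}x on 8 cores = "
          f"{parallel_efficiency(speedup, 8):.0%} of linear")
```

Viewed this way, even the best of these engines is recovering only a bit more than half of the theoretical benefit of its extra cores.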

Hardcore engine enthusiasts have, as the above suggests, become accustomed to downloading developmental versions of Stockfish. In an effort to serve some of the same market share, the authors of Komodo have created a subscription service that provides developmental versions of Komodo to users. This subscription, which costs $99.97, entitles users to all official versions of Komodo released in the following year along with developmental versions on a schedule to be determined. Only those who order Komodo directly from the authors are currently able to choose this subscription option.

The inevitable question remains: which engine should you choose? My answer is the same now as it was in my previous review. You should choose both – and perhaps more.

Both Komodo and Stockfish are insanely strong engines. There remain some positions, however, where one engine will get ‘stuck’ or otherwise prove unable to discern realistic (i.e. human) looking moves for both sides. In that case it is useful to query another engine to get a second (or perhaps even third) opinion. I find myself using Komodo 9 more than Stockfish 6 in my day-to-day work, but your mileage may well vary. Serious analysts, no matter their preference, will want to have both Komodo 9 and Stockfish 6 as part of their ‘teams.’

Lucky Number 13?

ChessBase 13.

I’ve said it before, and I’ll say it again: if you are an ambitious chess player, no matter your age or rating, you should be using ChessBase.

ChessBase, created by the company of the same name, is a chess database manager and GUI used by nearly all the best players in the world. It allows users to access millions of games played across history and the globe, to make use of chess engines while studying those games, and to curate one’s own data with great ease. Opening books and endgame tablebases are available to assist with analysis, and links to the Playchess server and the Engine Cloud are built into the interface.

After ChessBase 10 was released in 2008, I was under the impression that almost all necessary features were baked into the product, leaving little room for improvement and little need to upgrade. ChessBase 11, released in 2010, did little to change my mind. The shift to a GUI based on the Office ribbon wasn’t a game changer for me, and while I thought access to online game databases from within the GUI was nice, I didn’t see it as worth the money required to upgrade.

This changed with ChessBase 12. Released in 2012 – note the two year dev cycle? – ChessBase 12 introduced a slew of neat bells and whistles that made me take notice. The ‘deep analysis’ function, perhaps meant to rival Aquarium’s IDeA feature, was handy (if still a work in progress). The ability to search for similar endgames and pawn structures was very useful, as was the expanded access to the online database. Direct publishing of games to the viewchess website was a real time saver. But what really impressed me about ChessBase 12 was the initial movement towards the cloud.

“Let’s Check,” which first appeared (if memory serves) in the Fritz 13 GUI, is something like a gigantic, decentralized database of analyzed positions. If you are connected to the “Let’s Check” server while you work, ChessBase 12 uploads your engine evaluations of positions studied to the cloud, and it gives you access to the evaluations of others. This can be very useful if, say, you are looking at games from important tournaments. In some cases you are able to ask the server to ‘annotate’ games played that same day, leaving you with suggestions and evaluations from users around the globe.

Even more interesting was the launch of the “Engine Cloud.” In simple terms, the “Engine Cloud” allows for remote access of analytical engines anywhere in the world. Those with powerful hardware can, in essence, rent time on their computers to other people, granting them access to their analytical engines for a small fee. (You can also configure your own hardware to be privately available to only you.) Those of us without ‘big iron’ at home can, for very reasonable prices, have blazing fast engines at our beck and call; you might even, if you investigate usernames, get to use a former World Champion’s hardware in the process. Brilliant, brilliant stuff.

Now – two years later – ChessBase has released version 13 of their flagship program. It is true, as we were promised in Jon Edwards’ eminently useful guide to ChessBase 12, that most of the features in 12 reappear in 13. What you know from 12 is still true for 13, so there is no real learning curve to be navigated.

So what is new in ChessBase 13?


“The ChessBase Cloud”

ChessBase has gotten into the cloud data storage business with ChessBase 13. You can now save data to the ChessBase Cloud, where it will (eventually) be available to credentialed users in the ChessBase GUI, in mobile apps, and in a web interface.

Let’s dive a bit more deeply into this, and what it might mean for users. Right now I keep some of my data in a Dropbox folder. This includes my opening analysis, which gets updated fairly often, a database of my games (OTB, ICC, etc.), a folder of data related to endings and a folder of games from local events. When I write a new game to my games database, it is immediately mirrored to the cloud, and that change is written to my other computers the next time they boot up.

The ChessBase Cloud duplicates this functionality, so that databases in the Cloud are mirrored to other computers linked to the same login, but it might also create some additional possibilities. Databases can be shared between users. You can make a database public on the web, or you can specify that only certain users can access the data. This might make joint preparation or joint analysis a real possibility – ‘the Hammer’ (Jon Ludwig Hammer) could update opening analysis overnight and save it to the cloud, where ‘the Dane’ (Peter Heine Nielsen) and ‘the Champ’ (Magnus Carlsen) would find it in the morning.

[Screenshot: the game shown is from an article retweeted by Peter Svidler. Carlsen may well have had a win in Game 7 of the World Championship! There’s a pgn at the end of the article, so check it out!]

There is also something in the documentation about data being eventually accessible via a web GUI. I could make a file available to a friend who is travelling or who does not have a Windows computer, and they could study it in their browsers or on an app. It’s not fully implemented yet, but if and when it is, this could be a very useful addition to the ChessBase ecosystem.

“Analysis jobs”

With the new “analysis jobs” feature, you can now specify a list of positions to be subjected to automated analysis without your intervention. This is not the same thing as the automatic game analysis in the Fritz GUI; instead, it seems to be an iterative improvement on the ‘deep analysis’ feature introduced in CB12. The positions can be analyzed in two ways: either you get n lines of branchless variations, or you can use the ‘deep analysis’ feature. In both cases you can specify the engines to be used, the time allotted per position or per batch of positions, and how you want the results of the analysis to be recorded.

Let’s say that you’ve been studying the Grunfeld, and you want to check a few positions that came up in Peter Svidler’s masterful video series over at chess24.com. You can put those positions into ChessBase, add them to the list of positions to be analyzed, and then walk away while your engines do their magic. I can see how this might be useful for me at my level, and I can only imagine how it could be useful for a professional with dozens of positions to check before a big event.
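Under the hood, batch analysis of this sort boils down to feeding positions to an engine over the UCI protocol. Here is a minimal sketch of what such a job looks like at the protocol level; the FENs and the one-minute-per-position setting are placeholders, and this is my own illustration, not ChessBase’s actual implementation:

```python
# Sketch of a batch "analysis job" at the UCI protocol level. Any UCI
# engine (Komodo, Stockfish, Houdini) accepts these same commands; this
# is an illustration, not ChessBase's internal implementation.

def build_uci_script(fens, movetime_ms=60000):
    """Build the UCI command sequence to analyze each position in turn."""
    commands = ["uci", "isready", "ucinewgame"]
    for fen in fens:
        commands.append(f"position fen {fen}")
        commands.append(f"go movetime {movetime_ms}")  # fixed time per position
    commands.append("quit")
    return commands

# Two placeholder positions, one minute of engine time each.
positions = [
    "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1",
    "r1bqkbnr/pppp1ppp/2n5/4p3/4P3/5N2/PPPP1PPP/RNBQKB1R w KQkq - 2 3",
]
script = build_uci_script(positions)
# These commands would be piped to the engine process, and the "info" and
# "bestmove" lines it prints back collected as the finished analysis.
```
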


It should be noted that, as of RC #5 (or version 1 of the official release), I could not coax this feature into full operation. While both the ‘variations’ and ‘deep analysis’ settings lead to analysis on the screen, only the ‘variations’ option correctly writes to the .cbone file that would hold the finished analytical product. I am told, through e-mails with ChessBase, that this should be fixed in the immediate future.

Update 11/24: The above bug was fixed in Service Pack #2, out today.

Repertoire Function

The repertoire function is said to be improved in ChessBase 13, so that now White and Black repertoires are distinguished from one another. I have never used the repertoire functions before, not having really seen the need, so I can’t comment on how much of a difference this makes from previous versions. For the sake of this piece, however, I thought I’d give it a try.

I created, using around 1400 of my games from the Internet Chess Club, my own opening repertoire files by clicking on ‘Report’ -> ‘Generate Repertoire’ in the database window and following the prompts. This presented me with two repertoire databases, one for my games with White and one for my games with Black. ChessBase put all of my games with each color into the appropriate game files, giving each game an easily recognizable English-language name and saving the databases to the Cloud.


I’d always wondered where one would proceed from here. Certainly it’s interesting to see my games rendered in an orderly fashion, to see what I’ve played at key junctures in my openings, etc., but I never understood what could be done with these repertoire databases after that. One thing you can do is to scan new databases – issues of The Week in Chess, Informants, or CBMs – to see what new games appear in lines that you play. I tested this with ChessBase Magazine 162 and my black repertoire.

ChessBase produced a report listing all the relevant games from CBM 162 for my repertoire.


I could, for example, add the game Kelires-Lee (Tromso ol, 2014) to my repertoire database, or I could mark a specific move as a key position in my repertoire.


Having used ChessBase for many years, and having built up some fairly heavy analytical files in that time, I doubt that I’ll switch management of my repertoire over to the Repertoire Function. Still, I can see why some might, and it’s interesting to see my openings ‘dissected,’ their innards on full ChessBase display!

Aesthetics and Ergonomics

The look of ChessBase 13 is basically that of ChessBase 12, but there are a few tweaks of note. ChessBase can now offer ‘extended information’ in the game window, which means that pictures, flags and rating information for players appears next to names in the game window. There is also a small toolbar at the bottom of the game window containing a palette of Informant symbols.


This might make it easier to annotate games, although I’ve always just right-clicked and chosen the required symbol from the menu items. It is also easier to create variations in a game, as the variation dialog appears less often during input.

Odds and Ends

ChessBase 13 allows you to run multiple instances of the program as well as multiple instances of engines within it. This might be useful for the ‘analysis job’ function described above, or if you want to run multiple maintenance tasks at once. There are some new classification tabs available, including one that classifies games by final material count. A few recent additions to ChessBase 12 have also migrated to 13, including support for Syzygy tablebases and for creating and saving illegal positions to a database. This last feature is very useful for teaching, especially if one uses the Stappenmethode series of books. Finally (and anecdotally), startup of ChessBase 13 seems much snappier than that of 12.

Stability and quirks

I have been using beta versions (#2-#5) of ChessBase 13 for perhaps two weeks now, and for most purposes, it has been stable and without problems. Some oddities remain: for example, you can’t use the keyboard shortcut ‘T’ to take back the last move in a game and enter a variation beginning with that same move, and menu items remain grayed out even when they should be available. [Update 11/24: This second quirk was fixed as of Service Pack #2. All menu items are back to normal.] Players used to typing ‘T’ for ‘takeback’ should instead press the Ctrl key while entering a move to create a variation.

Database management – finding / killing doubles, checking / fixing integrity, etc. – is an under-appreciated feature in the ChessBase programs. My original thought for this review was to really put these functions to the test by creating a true Frankenstein of a database, filled with doubles / errors, for testing. I cobbled together a database of nearly 21 million games from dodgy sources and set ChessBase 13 to finding doubles. This was a bad idea. I killed the effort when, after an hour plus, the program had made it through approximately 19% of the database with a nearly one in three rate of double detection. It would have taken another four or five hours to finish the job!
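The idea behind double detection is simple enough to sketch: normalize each game into a comparable key and look for collisions. This toy version is my own illustration of the principle, not ChessBase’s actual algorithm:

```python
from collections import defaultdict

# Conceptual sketch of doubles detection, NOT ChessBase's algorithm:
# group games by a normalized key (players + moves) and flag collisions.

def find_doubles(games):
    """games: list of dicts with 'white', 'black' and 'moves' keys."""
    seen = defaultdict(list)
    for i, g in enumerate(games):
        key = (g["white"].strip().lower(),
               g["black"].strip().lower(),
               g["moves"].replace(" ", ""))
        seen[key].append(i)
    # Any key shared by two or more games marks a group of doubles.
    return [idxs for idxs in seen.values() if len(idxs) > 1]

games = [
    {"white": "Carlsen", "black": "Anand", "moves": "1.e4 e5 2.Nf3"},
    {"white": "carlsen ", "black": "Anand", "moves": "1.e4 e5 2.Nf3"},
    {"white": "Svidler", "black": "Grischuk", "moves": "1.d4 Nf6"},
]
doubles = find_doubles(games)  # → [[0, 1]]
```

A real implementation has to be far cleverer (truncated games, transposed move orders, mangled name spellings), which is presumably why scanning millions of games takes real time.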


Instead, on the advice of a fellow ChessPub-ian, I asked around amongst some friends and was given access to an Opening Master database (Golem 01.13) containing approximately 8.7 million games. I compared how long it took ChessBase 12 and 13 to find and kill the doubles in that database. CB13 was faster, taking 2 minutes and 21 seconds to complete the job, while CB12 took 3 minutes and 55 seconds. 13 also used about three times the RAM to do the job, which may account for its increased speed. Both detected an identical number of doubles in the database (48,784).

Upon finishing the task, the Clipboard opens in both 12 and 13. Here, ChessBase 13 froze. The same thing happened when I stopped the program in the midst of killing Frankenstein’s doubles. In the Frankenstein case I chalked it up to the sheer size of the project, but since the problem recurred with the smaller database, a bug seemed likely. The problem was fixed as of Release Candidate #5.

I would have also tested the ‘pack database’ and ‘integrity check’ functions of both ChessBase 12 and 13, but (1) the integrity check is the same in both cases (version 6.04 dated 9.25.13) and (2) the OM Golem database had critical errors that could not be repaired, even with the slow integrity check option.

Summary

ChessBase 13 represents an iterative improvement over ChessBase 12, but not a paradigm-shifting one. It will become so when the ChessBase Cloud features are fully functional, but for now, I’m not convinced that it’s a mandatory upgrade for ChessBase 12 users. (It’d be nice, though!) Serious analysts, professionals and correspondence players might be the exception here, as the automated position analysis could prove very valuable.

Those still using ChessBase 10 / 11 (or, worse, not using ChessBase at all!) should absolutely consider getting a copy of ChessBase 13. The old advertising for ChessBase 3 still holds true: ChessBase is something of a time multiplier, allowing you to do more chess work in much less time. This is truer today than it was then. We have massive, immaculate databases like Big 2015 or Mega 2015 to search for ideas, and we have inordinately strong engines like Houdini, Komodo and Stockfish to assist us. There is a reason that the strongest players in the world use ChessBase: it is indispensable for the modern chess player!

ChessBase 13 comes in four ‘flavors.’

  • Download: the download version is available directly from the ChessBase shop. You only get the program itself; no data is included except for the Player Encyclopedia, and you do not get any extension of membership on Playchess.com.
  • Starter: Includes ChessBase 13, the Big Database 2015 (unannotated) with weekly updates, and three issues of the ChessBase Magazine. No Playchess membership is included.
  • Mega: Includes ChessBase 13, the Mega Database 2015 (68k annotated games) with weekly updates, and six issues of the ChessBase Magazine. No Playchess membership is included.
  • Premium: The Mega package plus the Correspondence Database 2013, the 4 DVD set of Syzygy tablebases (Endgame Turbo 4), and a one-year Premium subscription to Playchess.com.

The Starter package runs €179.90 ($190-ish without VAT), the Mega costs €269.90 ($285-ish without VAT), and the Premium package is €369.90 ($390-ish without VAT) when purchased directly from the ChessBase shop. The Download version, available only from the ChessBase shop, is priced at €99.90 ($105-ish without VAT). You can also upgrade from 12 to 13 (program only) for €99.90 ($105-ish without VAT). All these prices will normally be discounted when buying from Amazon sellers.

In terms of choosing between these various packages, my only advice is this: the annotated games in the Mega Database are nice to have, but you can do without them if cost is a factor. Beyond that, it’s entirely up to you.

The Missing Manual

Edwards, Jon. ChessBase Complete: Chess in the Digital Age. Milford: Russell Enterprises, 2014. 350 pp. ISBN 978-1936490547. PB List $34.95.

In my previous review, which focused on the top three chess engines currently available, I said that ChessBase 12 is a nearly mandatory purchase for improving players.  In this review I continue in that vein by reviewing a new book about ChessBase 12, a book that fills a real need in the literature.

Fun fact: I proofread and edited the English help files for ChessBase 8 way back in 2000. Even then, the manual for the ChessBase program seemed something of an afterthought, something that the authors of ChessBase put together out of necessity and nothing more. The ChessBase program has been, and continues to be, difficult to master, and the manual has never been particularly helpful to the neophyte. Some third parties, most notably Steve Lopez with his T-Notes column, tried to remedy this situation, but on the whole there has never been a truly comprehensive, user-friendly introduction to the ChessBase GUI. Until now, that is.

Jon Edwards is an ICCF (International Correspondence Chess Federation) Senior International Master, a USCF OTB expert, a chess teacher and an author with multiple chess-related titles to his name. He is a long-time ChessBase power user, having used the program to research his books and his openings for correspondence games. Edwards also created very early e-books for the ChessBase platform.

Edwards’ new book, ChessBase Complete: Chess in the Digital Age, is a careful and systematic introduction to the ChessBase 12 GUI and its capabilities. Over the course of 14 chapters or ‘scenarios,’ Edwards clearly explains to his readers how to use ChessBase, how to manipulate and maintain data, how to play on the Playchess server, and much more. I reproduce the chapter list from the book below:

SCENARIO 1 The Future of Chess Books (And some very simple searching)
SCENARIO 2 Maintaining Quality Data (Garbage in, Garbage out)
SCENARIO 3 Working well with ChessBase (Organizing and viewing your chess information)
SCENARIO 4 Preparing for an opponent (Because they’re preparing for you)
SCENARIO 5 Playing (At any time of the day or night)
SCENARIO 6 Playchess Tournaments (Competing for fun and profit)
SCENARIO 7 Preserving and annotating your games (Because you must)
SCENARIO 8 Honed opening preparation (No more surprises)
SCENARIO 9 Engines and Kibitzers (Subjecting your games to unbiased scrutiny)
SCENARIO 10 A Grandmaster by your Side (Complex searching made easy)
SCENARIO 11 Watching Grandmaster Chess (It’s better than baseball)
SCENARIO 12 Training and Teaching (Lighting up the board)
SCENARIO 13 Competing at Correspondence Chess (It’s not dead yet)
SCENARIO 14 Writing about Chess (With tips on printing)

Five appendices round out the book, including a summary of all the features available via the GUI and – very usefully – a list of all the keyboard shortcuts used in ChessBase.

Edwards is a clear and engaging writer. He makes use of copious screenshots to assist with his tutorials, and numerous ‘tips’ are strewn through the text to remind readers of essential points. Readers are often asked to ‘learn by doing,’ and Edwards carefully leads his pupils through the tasks described in the book. And he takes the time to explain opaque terms and titles, like the ranks of players on the Playchess server.

I have been using ChessBase since the days of DOS, so most of what Edwards had to say wasn’t entirely new to me. Still, I found his discussion of constructing one’s own keys instructive, and as I’ve never played correspondence chess via ICCF, Scenario 13 was rather interesting.

Relatively few typos made it into the final text, although I did find one or two, along with the occasional verbal oddity, e.g., “…an inexorable quality to [Morphy’s] games…” (210). The ChessBase one-click web publishing service is not a joint venture with Facebook (243), and it was surprising to see that Edwards allocated only 1 to 2mb to the tablebases in his screenshots (318). For a book of this length and with this many technical details, these are forgivable shortcomings.

Players new to ChessBase 12 (or, soon, ChessBase 13) should seriously consider buying a copy of ChessBase Complete, and long-time users might want to as well. It is a sturdy tutorial to the various features of the program, and it doubles as a user-friendly reference guide. I suspect that about 90% of what you need to know about ChessBase can be found in these pages. For that last 10% I would recommend Axel Smith’s Pump Up Your Rating, which has the finest discussion of professional level ChessBase use in print. See my review of Smith’s book for more.

Choosing a Chess Engine

Note: This review has been updated as of 9/24 to reflect my testing and experience with the newly released Komodo 8.

———

Houdini 4, written by Robert Houdart. Standard (up to six cpu cores, $79.95 list) and Pro (up to 32 cpu cores, $99.95 list) versions with Fritz GUIs available. Also available directly from the Houdini website for approximately $52 (Standard) or $78 (Pro) as of 9/11/14.

Komodo 7a, written by Don Dailey, Larry Kaufman and Mark Lefler. Available directly from the Komodo website for $39.95.

Komodo 8, written by Don Dailey, Larry Kaufman and Mark Lefler. Available (1) with Fritz GUI ($97ish as of 9/24) and (2) directly from the Komodo website without GUI for $59.96

Stockfish 5, written by the Stockfish Collective. Open-source and available at the Stockfish website.

Increasingly I’m convinced that a serious chess player must make use of chess technology to fully harness his or her abilities. This, as I have previously discussed, involves three elements: the GUI, the data, and the engine. ChessBase 12 is the gold standard for chess GUIs, and I will be reviewing a new book about proper use of that GUI in the near future. Here, however, I want to take up the thorny issue of choosing a chess engine. Which engine is ‘best’ for the practical player to use in his or her studies?

I put ‘best’ in scare-quotes because there are two ways to look at this question. (1) There is little question at this point that the best chess engines of the past five years can beat 99.9% of human players on modern hardware. So one way that engines are tested now is in a series of engine vs engine battles. While many people run private matches, there are three main public rating lists: IPON, CCRL and CEGT.

Here there is something of a consensus. Houdini, Stockfish and Komodo are the three top engines at the moment, with very little differentiating between them, and with the particular order of the engines varying due to time control and other criteria.

Update: The three lists mentioned above have tested Komodo 8.

  • It is in first place on the IPON list, leading Stockfish 5 by 6 elo points and Houdini 4 by 17.
  • Komodo 8 appears on two of the CCRL lists. In games played at a rate of 40 moves in 4 minutes (40/4), Stockfish 5 leads Komodo 8 by 7 elo points and Houdini 4 by 30 elo points. In games played at the slower rate of 40 moves in 40 minutes (40/40), Komodo 8 has a 22 elo point lead on Stockfish 5 and a 39 point lead on Houdini.
  • Among the many CEGT lists, we find: (a) Stockfish 5 is first on the 40/4 list, followed by Komodo 8 and Houdini 4; (b) Houdini 4 leads the 5’+3″ list, followed by Stockfish 5 and Komodo 8; (c) Komodo 8 leads the 40/20 list followed by Stockfish 5 and Houdini 4; but (d) the 40/120 list has not yet been updated to include Komodo 8.
  • Note: Larry Kaufman compiles the results from these lists and one other in a thread at Talkchess. He argues (a) that Komodo does better at longer time controls, and (b) that Komodo 8 is roughly equal in strength to the Stockfish development releases, which are slightly stronger than the officially-released Stockfish 5.

From my perspective, however, (2) analytical strength is more important. If all the engines are strong enough to beat me, I think that the quality of their analysis – the ‘humanness’, for lack of a better word – is critical. It used to be the case that humans could trick engines with locked pawn chains, for example, or that engines would fail to understand long-term compensation for exchange sacrifices. Such failings have largely been overcome as the engines and hardware have improved; nevertheless, there remain certain openings and types of positions that are more problematic for our metal friends. Michael Ayton offers one such position in the ChessPub forums; if you want a laugh, check out the best lines of play on offer from the engines reviewed here:


FEN: r1b2rk1/pp1nqpbp/3p1np1/2pPp3/2P1P3/2N1BN2/PP2BPPP/R2Q1RK1 w - c6 0 10
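If you want to feed that position into your own tools, a FEN string breaks into six whitespace-separated fields. A minimal parser, my own sketch using only the Python standard library:

```python
def parse_fen_fields(fen):
    """Split a FEN string into its six named fields."""
    placement, side, castling, en_passant, halfmove, fullmove = fen.split()
    return {
        "placement": placement,           # piece placement, rank 8 down to rank 1
        "side_to_move": side,             # "w" or "b"
        "castling": castling,             # castling rights, "-" if none
        "en_passant": en_passant,         # en passant target square, "-" if none
        "halfmove_clock": int(halfmove),  # halfmoves since capture/pawn move
        "fullmove_number": int(fullmove),
    }

fields = parse_fen_fields(
    "r1b2rk1/pp1nqpbp/3p1np1/2pPp3/2P1P3/2N1BN2/PP2BPPP/R2Q1RK1 w - c6 0 10"
)
# fields["en_passant"] == "c6": Black has just played ...c7-c5, so White's
# d5-pawn could capture en passant on c6.
```
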

Among the multiple engines available, there are three that stand above the fray. These are Houdini by Robert Houdart, Komodo by the late Don Dailey, Larry Kaufman and Mark Lefler, and Stockfish. Houdini and Komodo are commercial engines, while Stockfish is open-source and maintained by dozens of contributors.

How can we understand the differences between the engines? Let’s consider two key components of chess analysis: search and evaluation. Search is the way that the engine ‘prunes’ the tree of analysis; because each ply (move by White or Black) grows the list of possible moves exponentially, modern engines trim that list dramatically to obtain greater search depth. Evaluation is the set of criteria used by the engine to decipher or evaluate each position encountered during the search.
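To make the idea of pruning concrete, here is a toy alpha-beta search over a hand-built tree, with nested lists of leaf evaluations standing in for positions. It is a textbook sketch of the technique, not any engine’s actual search:

```python
# Toy alpha-beta search. "Positions" are nested lists whose leaves are
# static evaluations; the cutoffs show how pruning skips branches that
# a full minimax search would otherwise have to visit.

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):
        return node  # leaf: static evaluation
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the opponent will avoid this branch
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff
        return value

tree = [[3, 5], [6, [9, 2]], [1, 2]]
best = alphabeta(tree, float("-inf"), float("inf"), True)  # → 6
```

Real engines layer much more on top (move ordering, razoring, extensions), and how aggressively each one prunes is exactly where the three engines diverge.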

In a very general sense, what differentiates Houdini, Komodo and Stockfish are their search and evaluation functions. How they are different on a technical / programming level, I cannot say: Houdini and Komodo are closed-source and I can’t decipher code in any event. What I can do, however, is cite what some experts in the field have said, and then see if it coheres with my experience of the three engines.

Larry Kaufman, who works on Komodo, said in an interview on the Quality Chess blog that:

Komodo is best at evaluating middlegame positions accurately once the tactics are resolved. Stockfish seems to be best in the endgame and in seeing very deep tactics. Houdini is the best at blitz and at seeing tactics quickly. Rybka is just obsolete; I like to think of Komodo as its spiritual descendant, since I worked on the evaluation for both, although the rest of the engines are not similar. Fritz is just too far below these top engines to be useful.

…Komodo’s assessment of positions is its strong point relative to the other top two, Houdini best for tactics, Stockfish for endgames and whenever great depth is required. Both Houdini and Stockfish overvalue the queen, Komodo has the best sense for relative piece values I think. Komodo is also best at playing the opening when out of book very early.

Stockfish is, as Kaufman suggests, very aggressive in the way that it prunes the tree of analysis, searching very deeply but narrowing as the plies accumulate. It is important to remember that each engine reports search depth and evaluation differently, so that (as Erik Kislik writes in a fascinating article on the recent TCEC superfinal) the way that Stockfish ‘razors’ the search means that its reported depth can’t be directly compared to Houdini’s or Komodo’s. Still, it does seem to search more deeply, if narrowly, than its competitors. This has advantages in the endgame and in some tactical positions.

Houdini is a tactical juggernaut. It tends to do best on the various tactical test sets that some engine experts have put together, and it is fairly quick to see those tactics, making it useful for a quick analysis of most positions. Its numerical evaluations also differ from other engines in that they are calibrated to specific predicted outcomes.

A +1.00 pawn advantage gives a 80% chance of winning the game against an equal opponent at blitz time control. At +2.00 the engine will win 95% of the time, and at +3.00 about 99% of the time. If the advantage is +0.50, expect to win nearly 50% of the time. (from the Houdini website)
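That sort of calibration can be approximated by a logistic curve. The following is purely my own illustration, fit by hand to hit 80% at +1.00; it is not Houdini’s actual internal formula:

```python
import math

# Illustrative logistic mapping from evaluation (in pawns) to expected
# score. k = ln(4) is chosen so that +1.00 maps to exactly 80%; this is
# NOT Houdini's actual calibration, just a curve in the same spirit.

def win_probability(eval_pawns, k=math.log(4)):
    return 1.0 / (1.0 + math.exp(-k * eval_pawns))

for e in (1.0, 2.0, 3.0):
    print(f"+{e:.2f} -> {win_probability(e):.0%}")
# Prints roughly 80%, 94%, 98% - in the same ballpark as the figures
# quoted above, though not an exact match.
```
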

Kaufman argues that his engine, Komodo, is the most positionally accurate of the three, and I don’t disagree. Kaufman is involved in the tuning of Komodo’s evaluation function; as he is a grandmaster, it does not seem outrageous to believe that his engine’s positional play might benefit from his chess expertise. The engine feels slightly ‘slower’ than Stockfish and Houdini (an anecdotal impression, not a judgment based on NPS, or nodes per second, or ply counts), but Komodo seems to benefit more from longer analysis time than either of them.

I’ve been using Komodo 8 in the Fritz GUI from ChessBase for a few days now. The GUI is the same as the Houdini 4 and the Deep Fritz 14 GUIs; in fact, when you install Komodo 8, I think it just adds some configuration files to your ChessProgram14 folder to allow for a Komodo ‘skin’ to appear. The Komodo 8 engine is slightly faster than 7a judging solely by NPS. While coding changes mean that the two can’t be directly compared, Mark Lefler has said that 8 is approximately 9% faster than 7a. The ChessBase package comes with a 1.5 million game database, an opening book, and a six month Premium membership at Playchess.com; all are standard for Fritz GUI releases such as Deep Fritz 14 or Houdini 4.

From my perspective, I tend to use all three engines as I study chess or check analysis for review purposes, but two more than the third. When I look at my games, which aren’t all that complex, I generally use Houdini as my default kibitzer. It seems to be the fastest at seeing basic tactical problems, and its quickness is a plus on some of my antiquated computers. I also tend to bring Komodo into the mix, especially if I want to spend some time trying to figure out one position. Stockfish serves more as a second (or third) option, but I will use it more heavily in endgame positions – unless we get into tablebase territory, as Stockfish does not (generally) use them.

Note: for other perspectives on the ‘personalities’ of these three engines, you might consider a couple of threads at the indispensable ChessPub forum.

As I was working on this review, I thought that I might try to ‘objectively’ test the engines on positions that were more positional or prophylactic in nature, or perhaps in some difficult endgame positions. I took 11 positions from books on hand, including a number from Aagaard’s GM Preparation series, and created a small test suite. Each engine (including Deep Fritz 14 for comparison’s sake) had 4 minutes to solve each problem on my old quad-core Q8300, and each engine had 512mb of RAM and access to Syzygy (5-man) or Nalimov (selected 6-man) tablebases as they preferred. You can see the results at the following link:

http://www.viewchess.com/cbreader/2014/9/24/Game31750181.html

or as summarized below:

[Results table: first test set]

Deep Fritz 14, curiously enough, solved more problems than did Houdini 4, Komodo 7a/8 or Stockfish 5. None could solve the famous Shirov …Bh3 ending. None could solve the Polugaevsky endgame, which illustrates a horizon-related weakness still endemic among even the best engines. Only Komodo 7a, Komodo 8 and Deep Fritz 14 solved position #2, which I thought was the most purely positional test among the bunch. This test is only anecdotal, and perhaps the engines would have gotten more answers right on faster hardware; nevertheless, I was a little surprised.

Test #2: Jon Dart (author of Arasan) has created a series of test suites to torture his engine and others. I took the first 50 problems from the Arasan Testsuite 17 and ran Houdini 4, the two Komodos, Stockfish 5, Deep Rybka 4.1 and Deep Fritz 14 through their paces. (I would have added Crafty 23.08, installed with Komodo 8, but it kept crashing the GUI when I tried to include it in the test.) Here the engines only received 60 seconds to solve the problem – the same standard Dart uses in his tests of Arasan, albeit with a much faster computer. You can see the results at the following link:

http://www.viewchess.com/cbreader/2014/9/24/Game31858867.html

or as summarized below:

[Results table: Arasan test set]

Stockfish 5 and Houdini 4 each solved 38/50 problems in the one minute time limit. Komodo 8 solved 30 problems, improving by one over Komodo 7a’s 29 solved problems, and doing so with a faster average solving time. Deep Rybka and Deep Fritz each solved 28 problems correctly. Given the shorter ‘time control’ and the relatively tactical nature (IMHO) of the test set, these results seem representative of the various engines and their characteristics.

So now we have to answer the real question: which engine is best? Which one should you use? Let’s begin by admitting the obvious: for most analytical tasks you throw at an engine, any one of the three would suffice. Most of the other major ‘second-tier’ engines, including Crafty (free to download), Deep Fritz (commercial), Hiarcs (commercial) and Junior (commercial), are also sufficient to analyze the games of amateurs and point out our tactical oversights. If you’re just looking for an engine to blunder-check your games, you have plenty of options.

If, however, you’re using engines for heavy analytical work or on very difficult positions, I think you need to consider buying both Houdini and Komodo and also downloading the open-source Stockfish. Each engine, as discussed above, has relative strengths and weaknesses. The best strategy is to see what each of the engines has to say in its analysis, and then try to draw your own conclusions. Were I forced to decide between Houdini 4 and Komodo 8, I’d probably – at this moment, anyway! – choose Komodo 8, simply because it seems stronger positionally, and its slight comparative tactical disadvantage doesn’t outweigh that positional strength. Both Houdini and Komodo are well worth their purchase price for the serious player and student. Downloading Stockfish should be mandatory!