Tag Archives: Komodo

Useful Technology?

This review has been printed in the March 2020 issue of Chess Life.  A penultimate (and unedited) version of the review is reproduced here. Minor differences exist between this and the printed version. My thanks to the good folks at Chess Life for allowing me to do so.


Kaufman, Larry. Kaufman’s New Repertoire for Black and White: A Complete, Sound and User-friendly Chess Opening Repertoire. Alkmaar: New in Chess, 2019. ISBN 9789056918620. PB 464pp.

Kaufman’s New Repertoire for Black and White: A Complete, Sound and User-Friendly Chess Opening Repertoire is the third incarnation of Larry Kaufman’s one-volume opening repertoire. While the first two – The Chess Advantage in Black and White: Opening Moves of the Grandmasters (2004) and The Kaufman Repertoire for Black and White: A Complete, Sound and User-Friendly Chess Opening Repertoire (2012) – were well-regarded, this new edition appears at something of an inflection point in the history of chess theory.

Opening theory has exploded over the past two decades, due largely to the influence of engines and databases. As one of the developers of Rybka and Komodo, among other important projects, Kaufman has made good use of engines in his writing, and both previous versions of this project proclaim the central role played by the computer. In 2004 it was Fritz, Junior, and Hiarcs, and in 2012 he used Houdini and especially Komodo.

Today the landscape has changed. The rise of artificial intelligence and neural network engines, first AlphaZero and now Leela Chess Zero, is reshaping opening theory. In Mind Master, reviewed here last month, Viswanathan Anand relates that Caruana and Carlsen were the first elite players to make use of Leela in their 2018 match preparations, and that his trainer introduced it into their workflow at the end of that year. Chess authors have picked up on the trend, and works written under Leela’s influence are beginning to appear.

Kaufman’s New Repertoire is advertised as “the first opening book that is primarily based on Monte Carlo search.” This is somewhat imprecise – Leela’s evaluations come from the neural network, not game rollouts – but the point remains that Kaufman has chosen to make use of the newest technologies in writing his book. He relied on Leela and a special “Monte Carlo” version of Komodo to craft the repertoire, generally deferring to Leela’s view while reserving the right to serve as “referee” if the engines disagree.
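The distinction matters more than the marketing suggests. Leela does run a tree search, but it steers that search with a neural network’s evaluations and move priors rather than with random playouts. A minimal sketch of the PUCT-style selection rule used by AlphaZero/Leela-type engines illustrates the point (the function name and constant here are my own illustration, not Leela’s actual code):

```python
import math

def puct_score(child_value, child_prior, child_visits, parent_visits, c_puct=1.5):
    """PUCT selection score of the kind used by AlphaZero/Leela-style engines.

    Unlike classic Monte Carlo tree search, child_value comes from a neural
    network's evaluation of the position rather than from random playouts,
    and child_prior (the network's move probability) steers the search
    toward moves the network already likes.
    """
    exploration = c_puct * child_prior * math.sqrt(parent_visits) / (1 + child_visits)
    return child_value + exploration
```

At each node the engine expands the child with the highest score, so a move with a high network prior gets explored even before it has accumulated any value estimate of its own.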

So what does Kaufman’s new repertoire look like? As the title suggests, the book contains a complete opening solution for both colors, focusing on 1. e4 for White, and the Grunfeld and Ruy Lopez for Black. Kaufman is covering a lot of ground here, generally offering two systems or ideas against most major continuations. In the mainline Ruy he offers readers three choices with Black: the Breyer, the Marshall, and the Møller.

The virtue of this approach is clear. Kaufman’s New Repertoire gives readers a one-stop opening repertoire, featuring professional lines, particularly with Black, and computer-tested ideas that can inspire confidence. But in an age where multi-volume, single-color repertoires are increasingly the norm, is it possible to include enough detail in fewer than 500 pages?

Let’s dive a bit deeper and take a look at specific recommendations.

White: 1. e4
  • vs Caro-Kann – (a) 4. Bd3 Exchange Variation, (b) 3. Nc3 dxe4 4. Nxe4 Bf5 5. Qf3!?, (c) Two Knights.
  • vs French – Tarrasch Variation.
  • vs 1. … e5 – (a) Italian Game, with multiple repertoire choices offered, (b) Ruy Lopez with 6. d3, and 5. Re1 against Berlin.
  • vs Sicilian – (a) 2. Nc3 ideas, including 2. Nc3 d6 3. d4 cxd4 4. Qxd4 Nc6 5. Qd2 and the anti-Sveshnikov 2. Nc3 Nc6 3. Nf3 e5 4. Bc4; (b) 2. Nf3 and 3. Bb5 against 2. … d6 and 2. … Nc6, and 3. c3 against 2. … e6, entering the Alapin.
Black: 1. e4 e5 and Grunfeld
  • … Nf6 against the Scotch.
  • … Bd6 against the Scotch Four Knights.
  • … Bc5 in the Italian game, focusing on 4. d3 Nf6 5. 0-0 0-0 and now 6. c3 d5, 6. Re1 Ng4, 6. a4 h6 followed by … a5, and 6. Nbd2 d6.
  • the Breyer is the “best all-purpose defense” in the 9. h3 Ruy Lopez, but Kaufman also includes Leela’s favored Marshall Attack and the Møller, inspired by Anand.
  • Neo-Grunfeld without … c6 vs the Fianchetto.
  • … Nc6 against f3.
  • … a6 against the Russian System.
  • … Qxa2 and 12. … b6 against 7. Nf3 in the Exchange variation.
  • three options – 10. … Qc7 11. Rc1 b6, 10. … e6, and 10. … b6 – against the 7. Bc4 Exchange.
  • 1. c4 / 1. Nf3 – Anti-Grunfeld, Symmetrical English, and a tricky path into the Queen’s Indian Defense for transpositional reasons.

While chapter introductions explain his reasons for individual repertoire choices, Kaufman’s analysis revolves mostly around concrete lines, using commented games as his vehicle. He tends to propose variations that avoid the heaviest theory with White, while turning to two of the most professional of openings – the Breyer and Grunfeld – as the backbones of his Black repertoire.

In the Introduction Kaufman warns his readers that he omits “rare” responses from the opponent to save space and offer alternative ideas. This means that the book is unlikely to be refuted, but readers will have to do some extra work to flesh out their repertoires.

The analysis in Kaufman’s New Repertoire is heavily influenced by the computer, and individual lines are usually punctuated with numerical evaluations from Komodo. This is not to say that the book is perfect. Attributions of novelty status are sometimes incorrect, although that may have more to do with differing data sets than anything else. More worrisome are the analytical errors and omissions. Two examples:

(a) Kaufman recommends 8. Qf3 in the Two Knights, and after 1.e4 e5 2.Nf3 Nc6 3.Bc4 Nf6 4.Ng5 d5 5.exd5 Na5 6.Bb5+ c6 7.dxc6 bxc6 8. Qf3 he analyzes the two traditional mainlines of 8. … Be7 and 8. … Rb8. Checking his work, I discovered that neural net engines think sacrificing the exchange with 8. … cxb5 is fully playable, giving Black good compensation after 9. Qxa8 Be7 (Leela) or 9. … Qc7 (Fat Fritz). See the recent game Chandra-Theodorou from the SPICE Cup in 2019 for an example of the latter.

Jan Gustafsson made an analogous, and equally Leela-inspired, discovery in his new (and outstanding) Lifetime Repertoire: 1. e4 e5 series for Chess24, analyzing 8. … h6 9. Ne4 cxb5 10. Nxf6+ gxf6 11. Qxa8 Qd7!, where the best White can do is head for a perpetual.

While the idea of giving the exchange is considered inferior by theory, the fact that Leela approves it should have been just the kind of discovery that Kaufman would trumpet here. Perhaps he didn’t believe what he was seeing, although it should be noted that Komodo verifies Black’s compensation.

(b) After 1.e4 e5 2.Nf3 Nc6 3.Bc4 Bc5 4.c3 Nf6 5.d4 exd4 6.e5!? d5 7.Bb5 Ne4 8.cxd4 Bb6 9.Nc3 0–0 10.Be3 Bg4 11.h3 Bh5 12.Qc2 we reach a “rather critical” position.

Here Kaufman discusses five moves: 12. … Bg6, 12. … Bxf3, 12. … Nxc3, 12. … Rb8, and 12. … Ba5!, which “may be Black’s only path to roughly equal chances.” (96)

I found two problems with the analysis, both involving Kaufman glossing over a poor move towards the end of a line, allowing him to claim an advantage for the side he is championing. After 12. … Bxf3, 18. … Nf5 is dubious; better is 18. … Ng6, as in Vocaturo-Moradiabadi, Sitges 2019. His analysis of 11. Qc2 is also flawed – check the PGN at uschess.org for more details. And these were not the only “tail-errors” I found in my study.

I’m torn on how to assess these analytical lapses. On the whole the book is well-researched and up to date, and the broad outlines of all Kaufman’s repertoire choices seem sound. So why are there these small problems, especially when the entire conceit of the book is its being computer-proofed, and with so many of the lines cribbed verbatim from the engine? I don’t have an answer to this, but I do wonder if Kaufman doesn’t suffer from a bit of confirmation bias.

As one of the co-authors of Komodo, Kaufman surely trusts the engine a great deal, but the version used here – Komodo MCTS – is markedly inferior to traditional Komodo or Stockfish, and is rated some 200 points lower on most testing lists. Komodo MCTS has the advantage of being able to analyze multiple lines at once without a performance hit, but its (very relative) tactical shallowness can be a concern. Because Leela suffers from similar issues, it may have been smarter to pair it with traditional Komodo instead.

Kaufman’s New Repertoire for Black and White is a solid repertoire offering despite these problems. His recommendations are well-conceived, and I was impressed with how much Kaufman was able to stuff into these pages. There’s not a lot of conceptual hand-holding here, so readers will have to be strong enough – say 2000 and above – to get maximum value from the book, and many lines will require supplemental study and analysis for the sake of completeness. Still, for those looking for a one-stop repertoire, particularly from the Black side, Kaufman’s book might be just what the doctor ordered.

And Then There Were Two

Komodo 9, written by Don Dailey, Larry Kaufman and Mark Lefler. Available (1) with Fritz GUI from Amazon ($80ish as of 5/28), (2) for download with Fritz GUI from ChessBase.com ($73.50 w/o VAT as of 5/28) and (3) directly from the Komodo website without GUI for $59.98; also available as part of a 1 year subscription package for $99.97.

Stockfish 6, written by the Stockfish Collective. Open-source and available at the Stockfish website.

—–

Now that Houdini seems to have gone gentle into that good night, there are two engines vying for the title of strongest chess engine in the world. Those two engines – Stockfish and Komodo – have each seen new releases in recent months. Stockfish 6 was released at the end of January, while Komodo 9 became available at the end of April from komodochess.com and the end of May from ChessBase.

Last year I wrote a review of Komodo 8 and Stockfish 5 that was republished at ChessBase.com, and much of what I wrote there applies here as well. Fear not, frazzled reader: you don’t need to go back and read that review, as most of the key points will be reiterated here.

First things first: any top engine (Komodo, Stockfish, Houdini, Rybka, Fritz, Hiarcs, Junior, Chiron, Critter, Equinox, Gull, Fire, Crafty, among many others) is more than strong enough to beat any human player alive. This is not because these engines are all equally strong. While they don’t always play the absolute best moves, none of the aforementioned engines ever makes a big mistake. Against fallible humans, that’s a recipe for domination. It’s nearly useless – not to mention soul-crushing! – to play full games against the top engines, although I do recommend using weaker engines (Clueless 1.4, Monarch, Piranha) as sparring partners for playing out positions or endgames.

Even if all the major engines can beat us, they’re not all created equal. Three major testing outfits – CCRL, CEGT, and IPON – engage in ongoing and extensive testing of all the best engines, and they do so by having the engines play thousands of games against one another at various time controls. In my previous review I noted that Komodo, Stockfish and Houdini were the top three engines on the lists, and in that order. This remains the case after the release of Komodo 9 and Stockfish 6:

CCRL (TC 40 moves/40 min, 4-cpu computers):
1. Komodo 9, 3325 (Komodo 8 was rated 3301)
2. Stockfish 6, 3310 (Stockfish 5 was rated 3285)
3. Houdini 4, 3269

CEGT
40/4: 1. Komodo 9, 2. Stockfish 6, 3. Houdini 4
G/5’+3”: 1. Komodo 9, 2. Stockfish 6, 3. Houdini 4
40/20: 1. Komodo 9, 2. Stockfish 6, 3. Houdini 4 (NB: list includes multiple versions of each engine)
40/120: 1. Stockfish 6, 2. Komodo 8 (does not yet include version 9), 3. Houdini 4 (NB: list includes multiple versions of each engine)

IPON
1. Komodo 9, 3190 (Komodo 8 was 3142)
2. Stockfish 6, 3174 (Stockfish 5 was 3142)
3. Houdini 4, 3118

The results are fairly clear. Komodo 9 is ever so slightly stronger than Stockfish 6 when it comes to engine-engine play, and this advantage seems to grow when longer time controls are used.

For my purposes, though, what’s important is an engine’s analytical strength. This strength is indicated in part by engine-engine matches, but it is also assessed through test suites and – perhaps most importantly – by experience. Some engines are more trustworthy than others in specific types of positions, or exhibit characteristic misunderstandings. Erik Kislik, for instance, reports in his April 2015 Chess Life article on the TCEC Finals – some of which appeared in his earlier Chessdom piece on TCEC Season 6 – that only Komodo properly understood the imbalance of three minor pieces against a queen. There are undoubtedly other quirks known to strong players who use engines on a daily basis.

In my previous review I ran Komodo, Stockfish and Houdini (among others) through two test suites on my old Q8300. Since then I’ve upgraded my hardware, and now I’m using an i7-4790 with 12GB of RAM and an SSD for the important five- and six-man Syzygy tablebases included with ChessBase’s Endgame Turbo 4. (Note: if you have an old-fashioned hard drive, only use the five-man tablebases in your search; the six-man tablebases will slow engine analysis down dramatically.) Because I have faster hardware I thought that a more difficult test suite would be in order, and – lucky me! – just such a suite was recently made available in the TalkChess forums. I gave Komodo 9 and Stockfish 6 one minute per problem to solve the 112 problems in the suite, and the results were as follows:

Komodo 9 solved 37 out of 110 problems (33.6%) with an average time/depth of 20.04 seconds and 24.24 ply. Stockfish 6 solved 30/110 (27.3%) with an average time/depth of 20.90 seconds and 29.70 ply. (Note that while there are 112 problems in the suite, two of them were rejected by both engines because they had incomplete data.) The entire test suite along with embedded results can be found at:

http://www.viewchess.com/cbreader/2015/6/6/Game1753083657.html

I have also been using both Komodo 9 and Stockfish 6 in my analytical work and study. So that you might also get a feeling for how each evaluates typical positions, I recorded a video of the two at work.  Each engine ran simultaneously (2 cpus, 2gb of RAM) as I looked at a few games of interest, most of which came from Alexander Baburin’s outstanding e-magazine Chess Today. The video is 14 minutes long. You can replay the games at this link:

http://www.viewchess.com/cbreader/2015/6/6/Game1752975735.html

Komodo 9 and Stockfish 6 in comparative analysis

Even a brief glance at the above video will make clear just how good top engines have become at assessing positions, but it also shows (in Gusev-Averbakh) that they are far from perfect. They rarely agree fully in positions that are not clear wins or draws, and this is due to the differences in evaluation and search between the two. Broadly speaking, evaluation is the set of criteria or heuristics each engine uses to ‘understand’ a position, while search is the way that the engine ‘prunes’ the tree of analysis. While many engines might carry similar traits in their evaluation or search, none are identical, and this produces the differences in play and analysis between them.
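To make the search half of that distinction concrete, here is a toy sketch of alpha-beta pruning – a drastic simplification of what any real engine does, with the tree representation my own invention:

```python
def negamax(node, alpha=float("-inf"), beta=float("inf")):
    """Alpha-beta search over a toy game tree.

    Leaves are static evaluations (numbers); internal nodes are lists of
    children. The cutoff below is the 'pruning' discussed above: branches
    that cannot change the result are never examined.
    """
    if not isinstance(node, list):  # leaf: return its evaluation
        return node
    best = float("-inf")
    for child in node:
        score = -negamax(child, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:  # opponent already has a better option elsewhere
            break
    return best
```

In the tree [[3, 5], [2, 9]] the 9 is never evaluated: once the second branch concedes a 2, it cannot beat the 3 already guaranteed by the first. How aggressively an engine makes that kind of cutoff is precisely where the engines differ.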

Stockfish 6 is a rather deep searcher. It achieves these depths through aggressive pruning of the tree of analysis. While there are real advantages to this strategy, not the least of which are quick analytical sight and tactical ingenuity, there are drawbacks: Stockfish can overlook resources hidden in the branches it prunes away. I find it to be a particularly strong endgame analyst, in part because it now reads Syzygy tablebases and refers to them in its search. Stockfish is an open-source program, meaning that it is free to download and that anyone can contribute a patch, but all changes to evaluation or search are tested on a distributed network of computers (“Fishtest”) to determine their value.

Komodo 9 is slightly more aggressive in its pruning than is Komodo 8, and it is slightly faster in its search as well. (Both changes seem to have been made, to some degree, with the goal of more closely matching Stockfish’s speed – an interesting commercial decision.) While Komodo’s evaluation is, in part, automatically tuned through automated testing, it is also hand-tuned (to what degree I cannot say) by GM Larry Kaufman.

The result is an engine that feels – I know this sounds funny, but it’s true – smart. It seems slightly more attuned to positional nuances than its competitors, and as all the top engines are tactical monsters, even a slight positional superiority can be important.  I have noticed that Komodo is particularly good at evaluating positions where material imbalances exist, although I cannot say exactly why this is the case!

As more users possess multi-core systems, the question of scaling – how well an engine is able to make use of those multiple cores – becomes increasingly important. Because it requires some CPU cycles to hand out different tasks to the processors in use, and because some analysis will inevitably be duplicated on multiple CPUs, there is not a linear relation between number of CPUs and analytical speed.

Komodo 8 was reputedly much better than Stockfish 5 in its implementation of parallel search, but recent tests published on the Talkchess forum suggest that the gap is narrowing. While Stockfish 6 sees an effective speedup of 3.6x as it goes from 1 to 8 cores, Komodo 9’s speedup is about 4.5x. And the gap is further narrowed if we consider the developmental versions of Stockfish, where the speedup is now around 4x.
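The sub-linear scaling described above can be modeled with Amdahl’s law; the parallel fractions below are illustrative assumptions, not measurements of either engine:

```python
def effective_speedup(cores, parallel_fraction):
    """Amdahl's-law estimate: if only part of the search parallelizes,
    adding cores yields diminishing returns."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)
```

A search that is 90% parallelizable gives roughly a 4.7x speedup on 8 cores – in the same neighborhood as the figures quoted above – while a 75% parallelizable search manages only about 2.9x.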

Hardcore engine enthusiasts have, as the above suggests, become accustomed to downloading developmental versions of Stockfish. In an effort to serve some of the same market share, the authors of Komodo have created a subscription service that provides developmental versions of Komodo to users. This subscription, which costs $99.97, entitles users to all official versions of Komodo released in the following year along with developmental versions on a schedule to be determined. Only those who order Komodo directly from the authors are currently able to choose this subscription option.

The inevitable question remains: which engine should you choose? My answer is the same now as it was in my previous review. You should choose both – and perhaps more.

Both Komodo and Stockfish are insanely strong engines. There remain some positions, however, where one engine will get ‘stuck’ or otherwise prove unable to discern realistic (i.e. human) looking moves for both sides. In that case it is useful to query another engine to get a second (or perhaps even third) opinion. I find myself using Komodo 9 more than Stockfish 6 in my day-to-day work, but your mileage may well vary. Serious analysts, no matter their preference, will want to have both Komodo 9 and Stockfish 6 as part of their ‘teams.’

Choosing a Chess Engine

Note: This review has been updated as of 9/24 to reflect my testing and experience with the newly released Komodo 8.

———

Houdini 4, written by Robert Houdart. Standard (up to six cpu cores, $79.95 list) and Pro (up to 32 cpu cores, $99.95 list) versions with Fritz GUIs available. Also available directly from the Houdini website for approximately $52 (Standard) or $78 (Pro) as of 9/11/14.

Komodo 7a, written by Don Dailey, Larry Kaufman and Mark Lefler. Available directly from the Komodo website for $39.95.

Komodo 8, written by Don Dailey, Larry Kaufman and Mark Lefler. Available (1) with Fritz GUI ($97ish as of 9/24) and (2) directly from the Komodo website without GUI for $59.96

Stockfish 5, written by the Stockfish Collective. Open-source and available at the Stockfish website.

Increasingly I’m convinced that a serious chess player must make use of chess technology to fully harness his or her abilities. This, as I have previously discussed, involves three elements: the GUI, the data, and the engine. ChessBase 12 is the gold standard for chess GUIs, and I will be reviewing a new book about proper use of that GUI in the near future. Here, however, I want to take up the thorny issue of choosing a chess engine. Which engine is ‘best’ for the practical player to use in his or her studies?

I put ‘best’ in scare-quotes because there are two ways to look at this question. (1) There is little question at this point that the best chess engines of the past five years can beat 99.9% of human players on modern hardware. So one way that engines are tested now is in a series of engine vs engine battles. While many people run private matches, there are three main public rating lists: IPON, CCRL and CEGT.

Here there is something of a consensus. Houdini, Stockfish and Komodo are the three top engines at the moment, with very little differentiating between them, and with the particular order of the engines varying due to time control and other criteria.

Update: The three lists mentioned above have tested Komodo 8.

  • It is in first place on the IPON list, leading Stockfish 5 by 6 elo points and Houdini 4 by 17.
  • Komodo 8 appears on two of the CCRL lists. In games played at a rate of 40 moves in 4 minutes (40/4), Stockfish 5 leads Komodo 8 by 7 elo points and Houdini 4 by 30 elo points. In games played at the slower rate of 40 moves in 40 minutes (40/40), Komodo 8 has a 22 elo point lead on Stockfish 5 and a 39 point lead on Houdini.
  • Among the many CEGT lists, we find: (a) Stockfish 5 is first on the 40/4 list, followed by Komodo 8 and Houdini 4; (b) Houdini 4 leads the 5’+3″ list, followed by Stockfish 5 and Komodo 8; (c) Komodo 8 leads the 40/20 list followed by Stockfish 5 and Houdini 4; but (d) the 40/120 list has not yet been updated to include Komodo 8.
  • Note: Larry Kaufman compiles the results from these lists and one other in a thread at Talkchess. He argues (a) that Komodo does better at longer time controls, and (b) that Komodo 8 is roughly equal in strength to the Stockfish development releases, which are slightly stronger than the officially-released Stockfish 5.

From my perspective, however, (2) analytical strength is more important. If all the engines are strong enough to beat me, I think that the quality of their analysis – the ‘humanness’, for lack of a better word – is critical. It used to be the case that humans could trick engines with locked pawn chains, for example, or that engines would fail to understand long-term compensation for exchange sacrifices. Such failings have largely been overcome as the engines and hardware have improved; nevertheless, there remain certain openings and types of positions that are more problematic for our metal friends. Michael Ayton offers one such position in the ChessPub forums; if you want a laugh, check out the best lines of play on offer by the engines reviewed here:


FEN: r1b2rk1/pp1nqpbp/3p1np1/2pPp3/2P1P3/2N1BN2/PP2BPPP/R2Q1RK1 w - c6 0 10
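For readers unfamiliar with FEN, the string breaks into six whitespace-separated fields; a minimal stdlib-only parser (no legality checking) might look like this:

```python
def parse_fen(fen):
    """Split a FEN string into its six standard fields (no validation)."""
    board, side, castling, en_passant, halfmove, fullmove = fen.split()
    return {
        "board": board,                   # piece placement, rank 8 down to rank 1
        "side_to_move": side,             # "w" or "b"
        "castling": castling,             # e.g. "KQkq", or "-" if none
        "en_passant": en_passant,         # target square, or "-"
        "halfmove_clock": int(halfmove),  # for the fifty-move rule
        "fullmove_number": int(fullmove),
    }

fields = parse_fen(
    "r1b2rk1/pp1nqpbp/3p1np1/2pPp3/2P1P3/2N1BN2/PP2BPPP/R2Q1RK1 w - c6 0 10"
)
```

In the position above, White is to move and the en passant field c6 records that Black’s last move was … c7-c5.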

Among the multiple engines available, there are three that stand above the fray. These are Houdini by Robert Houdart, Komodo by the late Don Dailey, Larry Kaufman and Mark Lefler, and Stockfish. Houdini and Komodo are commercial engines, while Stockfish is open-source and maintained by dozens of contributors.

How can we understand the differences between the engines? Let’s consider two key components of chess analysis: search and evaluation. Search is the way that the engine ‘prunes’ the tree of analysis; because each ply (move by White or Black) grows the list of possible moves exponentially, modern engines trim that list dramatically to obtain greater search depth. Evaluation is the set of criteria used by the engine to decipher or evaluate each position encountered during the search.
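As an illustration of the evaluation side, here is the crudest possible evaluation function – pure material counting over the board field of a FEN. The piece values are conventional textbook figures, not any engine’s actual tuned values:

```python
# Conventional centipawn values; real engines tune these and add many positional terms.
PIECE_VALUES = {"p": 100, "n": 320, "b": 330, "r": 500, "q": 900, "k": 0}

def material_eval(fen_board):
    """Material balance from the piece-placement field of a FEN,
    in centipawns, positive when White is ahead."""
    score = 0
    for ch in fen_board:
        if ch.lower() in PIECE_VALUES:
            value = PIECE_VALUES[ch.lower()]
            score += value if ch.isupper() else -value  # uppercase = White
    return score
```

Everything that separates Houdini, Komodo and Stockfish lives in the hundreds of terms layered on top of a core like this: king safety, pawn structure, mobility, and so on.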

In a very general sense, what differentiates Houdini, Komodo and Stockfish are their search and evaluation functions. How they are different on a technical / programming level, I cannot say: Houdini and Komodo are closed-source and I can’t decipher code in any event. What I can do, however, is cite what some experts in the field have said, and then see if it coheres with my experience of the three engines.

Larry Kaufman, who works on Komodo, said in an interview on the Quality Chess blog that:

Komodo is best at evaluating middlegame positions accurately once the tactics are resolved. Stockfish seems to be best in the endgame and in seeing very deep tactics. Houdini is the best at blitz and at seeing tactics quickly. Rybka is just obsolete; I like to think of Komodo as its spiritual descendant, since I worked on the evaluation for both, although the rest of the engines are not similar. Fritz is just too far below these top engines to be useful.

…Komodo’s assessment of positions is its strong point relative to the other top two, Houdini best for tactics, Stockfish for endgames and whenever great depth is required. Both Houdini and Stockfish overvalue the queen, Komodo has the best sense for relative piece values I think. Komodo is also best at playing the opening when out of book very early.

Stockfish is, as Kaufman suggests, very aggressive in the way that it prunes the tree of analysis, searching very deeply but narrowing as the ply go forward. It is important to remember that each engine reports search depth and evaluation differently, so that (as Erik Kislik writes in a fascinating article on the recent TCEC superfinal) the way that Stockfish ‘razors’ the search means that its reported depth can’t be directly compared to Houdini or Komodo. Still, it does seem to search more deeply, if narrowly, than do its competitors.  This has advantages in the endgame and in some tactical positions.

Houdini is a tactical juggernaut. It tends to do best on the various tactical test sets that some engine experts have put together, and it is fairly quick to see those tactics, making it useful for a quick analysis of most positions. Its numerical evaluations also differ from other engines in that they are calibrated to specific predicted outcomes.

A +1.00 pawn advantage gives a 80% chance of winning the game against an equal opponent at blitz time control. At +2.00 the engine will win 95% of the time, and at +3.00 about 99% of the time. If the advantage is +0.50, expect to win nearly 50% of the time. (from the Houdini website)
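That calibration can be approximated with a standard logistic curve; the scale constant below is my own fit to the quoted +1.00 → 80% figure, not Houdini’s published formula:

```python
def win_probability(eval_pawns, scale=1.66):
    """Logistic mapping from a pawn-unit evaluation to an expected score.

    scale=1.66 is fitted so that +1.00 maps to roughly 80%, matching the
    figure quoted from the Houdini website; the true calibration is
    unpublished.
    """
    return 1.0 / (1.0 + 10.0 ** (-eval_pawns / scale))
```

By this model +2.00 comes out near 94% and +3.00 near 98% – in the same ballpark as the quoted figures, though not identical.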

Kaufman argues that his engine, Komodo, is the most positionally accurate of the three, and I don’t disagree. Kaufman is involved in the tuning of Komodo’s evaluation function; as he is a grandmaster, it does not seem outrageous to believe that his engine’s positional play might benefit from his chess expertise. The engine feels slightly ‘slower’ than Stockfish and Houdini (anecdotally, that is – NPS, or nodes per second, and ply counts aren’t directly comparable across engines), but Komodo seems to benefit more from longer analysis time than do Houdini or Stockfish.

I’ve been using Komodo 8 in the Fritz GUI from ChessBase for a few days now. The GUI is the same as the Houdini 4 and the Deep Fritz 14 GUIs; in fact, when you install Komodo 8, I think it just adds some configuration files to your ChessProgram14 folder to allow for a Komodo ‘skin’ to appear. The Komodo 8 engine is slightly faster than 7a judging solely by NPS. While coding changes mean that the two can’t be directly compared, Mark Lefler has said that 8 is approximately 9% faster than 7a. The ChessBase package comes with a 1.5 million game database, an opening book, and a six month Premium membership at Playchess.com; all are standard for Fritz GUI releases such as Deep Fritz 14 or Houdini 4.

From my perspective, I tend to use all three engines as I study chess or check analysis for review purposes, but two more than the third. When I look at my games, which aren’t all that complex, I generally use Houdini as my default kibitzer. It seems to be the fastest at seeing basic tactical problems, and its quickness is a plus on some of my antiquated computers. I also tend to bring Komodo into the mix, especially if I want to spend some time trying to figure out one position. Stockfish serves more as a second (or third) option, but I will use it more heavily in endgame positions – unless we get into tablebase territory, as Stockfish does not (generally) use them.

*Note:* for other perspectives on the ‘personalities’ of these three engines, you might consider a couple of threads at the indispensable ChessPub forum.

As I was working on this review, I thought that I might try to ‘objectively’ test the engines on positions that were more positional or prophylactic in nature, or perhaps in some difficult endgame positions. I took 11 positions from books on hand, including a number from Aagaard’s GM Preparation series, and created a small test suite. Each engine (including Deep Fritz 14 for comparison’s sake) had 4 minutes to solve each problem on my old quad-core Q8300, and each engine had 512mb of RAM and access to Syzygy (5-man) or Nalimov (selected 6-man) tablebases as they preferred. You can see the results at the following link:

http://www.viewchess.com/cbreader/2014/9/24/Game31750181.html

or as summarized below:

First test set

Deep Fritz 14, curiously enough, solved more problems than did Houdini 4, Komodo 7a/8 or Stockfish 5. None could solve the famous Shirov …Bh3 ending. None could solve the Polugaevsky endgame, which illustrates a horizon-related weakness still endemic among even the best engines. Only Komodo 7a, Komodo 8 and Deep Fritz 14 solved position #2, which I thought was the most purely positional test among the bunch. This test is only anecdotal, and perhaps the engines would have gotten more answers right on faster hardware; nevertheless, I was a little surprised.

Test #2: Jon Dart (author of Arasan) has created a series of test suites to torture his engine and others. I took the first 50 problems from the Arasan Testsuite 17 and ran Houdini 4, the two Komodos, Stockfish 5, Deep Rybka 4.1 and Deep Fritz 14 through their paces. (I would have added Crafty 23.08, installed with Komodo 8, but it kept crashing the GUI when I tried to include it in the test.) Here the engines only received 60 seconds to solve the problem – the same standard Dart uses in his tests of Arasan, albeit with a much faster computer. You can see the results at the following link:

http://www.viewchess.com/cbreader/2014/9/24/Game31858867.html

or as summarized below:

Arasan test set

Stockfish 5 and Houdini 4 each solved 38/50 problems in the one minute time limit. Komodo 8 solved 30 problems, improving by one over Komodo 7a’s 29 solved problems, and doing so with a faster average solving time. Deep Rybka and Deep Fritz each solved 28 problems correctly. Given the shorter ‘time control’ and the relatively tactical nature (IMHO) of the test set, these results seem representative of the various engines and their characteristics.

So now we have to answer the real question: which engine is best? Which one should you use? Let’s begin by admitting the obvious: for most analytical tasks you throw at an engine, any one of the three would suffice. Most of the other major ‘second-tier’ engines, including Crafty (free to download), Deep Fritz (commercial), Hiarcs (commercial) and Junior (commercial), are also sufficient to analyze the games of amateurs and point out our tactical oversights. If you’re just looking for an engine to blunder-check your games, you have plenty of options.

If, however, you’re using engines for heavy analytical work or on very difficult positions, I think you need to consider buying both Houdini and Komodo and also downloading the open-source Stockfish. Each engine, as discussed above, has relative strengths and weaknesses. The best strategy is to see what each of the engines have to say in their analysis, and then try to draw your own conclusions. Were I forced to decide between Houdini 4 and Komodo 8, I’d probably – at this moment, anyway! – choose Komodo 8, simply because it seems stronger positionally, and its slight comparative tactical disadvantage doesn’t outweigh that positional strength. Both Houdini and Komodo are well worth their purchase price for the serious player and student. Downloading Stockfish should be mandatory!

Chess Holiday Buying Guide: Part II

In Part I of this buying guide, I discussed digital clocks and the central element in chess software, the GUI.  Here, in Part II, I will provide options for the purchase of chess databases and engines.  Finally, in Part III, I will list a veritable cornucopia of chess books for that special chess player in your life.  Really, let's be honest: isn't your chess player worth it?

As I wrote in Part I, there are three components of chess software that every aspiring chess player should own.  First, there is the GUI, or graphical user interface.  I discussed both ChessBase 12, a full database solution for chess data, and the Fritz family of GUIs, which have limited database functions but come bundled with playing engines.  Second, there is the database itself, containing millions of games and, in some cases, audio and video instruction.  Finally, there is the engine, that dab of programming magic that analyzes the position and provides super-GM output.

Here, in Part II, I will discuss the two main databases available from ChessBase, as well as a number of options for chess engines.  Readers who are coming to this post without having read Part I are advised to read that piece at their leisure.

Database: ChessBase publishes two reference databases, the Big and Mega Databases.  (The games in each are identical, except that the Big Database contains no annotated games while the Mega contains approximately 68,000 of them.)  New editions are published every November, and the 2014 editions of the Big and Mega Databases are now available.

The download and installation process for the Mega Database is fairly easy, but be warned: the main database is over a gigabyte of data compressed, so it will take some time to download.  The installer required a few clicks, and soon enough, the icon for Mega Database 2014 was sitting in my ChessBase window, ready for my use.

Mega Database contains approximately 5.8 million games, 68,000 or so of which are annotated.  The database has a number of indexes or 'keys' that users can search to pinpoint just what they are looking for: a specific player, an opening position, a tournament, or even a tactical motif.  ChessBase 12 users have many more search and key options than do users of the Fritz GUIs; this, to me, is one of the reasons that (if finances allow) ChessBase 12 should be on your shopping list.

Long-time computer users will remember the acronym GIGO – Garbage In, Garbage Out.  If your data is 'dirty,' your output will suffer.  One of the great things about the Big and Mega Databases is that they are remarkably clean.  ChessBase employs full-time data-wranglers – two GMs among them – to update the database, keep player names consistent, and so on.  They also offer free weekly updates to the Big and Mega Databases for download with purchase, allowing your chess player to keep her database completely up to date.

There are lots of other goodies included with these databases, including a player encyclopedia with pictures of thousands of players around the globe.  I don’t use this feature, to be frank, so I can’t speak to it.  Interested parties can check out Albert Silver’s review at chessbase.com.

If your player is serious about their chess software, they’ll need a reference database.  The Big and Mega Databases are the best around, and they’re well worth your purchase.  Either will be a valuable addition to your player’s setup.  The annotated games are nice, but feel free to save a little money here and go with the Big Database.  Access to the games is what’s important.

The Big Database is available at Amazon for just over $50, and the Mega Database sells there for about $150.  If you’re in a time crunch, of course, you can always directly purchase and download both the Big Database and Mega Database from ChessBase.  Note that if your favorite player has an older version of the Mega Database, you can also purchase an update to the 2014 edition for a reduced price at the ChessBase site.

(Note: ChessBase also publishes dozens upon dozens of training DVDs and downloads.  Any one would probably be a welcome gift for your player, but recommending any specific training module would require some knowledge of your player, what openings she plays, etc.  Peruse their wares at your leisure and see if maybe something strikes your fancy.)

Engines: All of the Fritz family of GUIs come with playing engines.  These engines can be plugged into ChessBase 12, or they can run on their own inside the Fritz GUIs.  (ChessBase 12 includes an older version of the Fritz engine and an open-source engine called Crafty.  Both are plenty strong, but neither is state of the art.)  There are three commercial engines to consider for your gift giving needs, but I'll also clue you in on some free alternatives.

  • Deep Fritz 14: Fritz is the granddaddy of commercial engines, but with this year's release of version 14, a few things have changed.  The old Fritz engine has been retired, and the 'new' Fritz is actually Pandix, the 2013 medal-winning engine by Gyula Horváth.  In contrast to older Fritzes, Deep Fritz 14 is a multi-processor engine, meaning that it can run on up to eight cores at once.  This dramatically speeds up the search and increases the strength of the engine.  Deep Fritz 14 comes with a 1.5 million game database.
  • Houdini 4: Houdini 4 is a UCI engine sold by ChessBase in the Fritz interface.  Basically you get the same GUI as with Deep Fritz, but instead of the Deep Fritz engine, it comes with Houdini 4.  Robert Houdart is the author of Houdini, and the engine is generally considered to be the strongest engine publicly available.  Houdini is also the engine of choice for many grandmasters in their published analysis.  It, like all of the Fritz GUIs, comes with a 1.5 million game database.
  • Komodo: Komodo, unlike Fritz or Houdini, is not sold by ChessBase.  It is also a UCI engine, and it is currently developed by IM Larry Kaufman and Mark Lefler.  The late Don Dailey was the original author of the engine, and Kaufman and Lefler are continuing its development after Dailey’s recent untimely death.  The current version – Komodo TCEC – just won a major tournament, staking its claim to being one of the top engines in the world.
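A quick aside on what 'UCI engine' means in practice: Houdini, Komodo, and the modern Fritz all speak the same Universal Chess Interface protocol, a plain-text exchange over standard input and output, which is why any of them can be plugged into ChessBase 12 or a Fritz GUI.  A rough sketch of that exchange follows, with illustrative helper names of my own (a real GUI pipes these commands to the engine process and reads its replies):

```python
def uci_commands(fen, movetime_ms):
    """The command sequence a GUI sends a UCI engine to analyze one
    position for a fixed amount of time."""
    return [
        "uci",                         # handshake; engine replies 'uciok'
        "isready",                     # engine replies 'readyok'
        "ucinewgame",                  # reset state for a fresh analysis
        f"position fen {fen}",         # set up the position to analyze
        f"go movetime {movetime_ms}",  # think for movetime_ms milliseconds
    ]


def parse_bestmove(engine_output):
    """Pull the chosen move out of the engine's output; the engine ends
    its search with a line of the form 'bestmove e2e4 ponder e7e5'."""
    for line in engine_output.splitlines():
        if line.startswith("bestmove"):
            return line.split()[1]
    return None
```

Because the protocol is engine-agnostic, swapping Houdini for Komodo (or the free Stockfish) in your GUI changes nothing about how the GUI talks to it – only the quality of the answers that come back.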

Deep Fritz 14 is available at Amazon for about $80, and you can also purchase a downloadable version of the GUI / engine combo at ChessBase for about the same price.  Both versions include a six month premier membership at Playchess.com, allowing your gift-recipient to watch videos and live tournament broadcasts online for free.

Houdini 4 comes in two flavors: the Standard, which runs on up to six cores, and the Professional, which will run on up to thirty-two.  Houdini 4 Standard sells on Amazon for about $100, and the Pro version will run you $116.  As always, you can order a downloadable version of the Standard and the Pro from ChessBase for about the same price.  The ChessBase Houdini also comes with a six month premier Playchess membership.

Readers should note that Houdini is also available as a stand-alone purchase directly from Houdart.  Buying Houdini 4 directly from the author is slightly cheaper (Standard is about $55, Pro is just over $80) and will also allow your player to access discounted updates to the engine in the future.  This purchase does not include a GUI, but it might make sense if your player has an older version of the Fritz or Houdini interface and just needs the latest and greatest engine.

Komodo is only available from the developers.  It is currently the cheapest option at $49.95, and it also requires some kind of GUI for its proper use.

From my perspective, Houdini and Komodo are the two strongest engines available for purchase.  (There is a third engine – Stockfish – that might be about as strong as Houdini and Komodo, but I leave that to your research.)  I’ve used Houdini extensively in my own chess study, and its analysis is both fast and frighteningly accurate.  Komodo is slightly slower in terms of its search, but it makes up for that relative slowness with a highly precise positional sense.  Deep Fritz is, of course, strong as well – most any modern engine will destroy even top GMs in over-the-board play – but it’s not in the same league as Houdini or Komodo.

Were I to choose one, I'd go with Houdini.  It gets to the depths of the position quickly, making it indispensable for analytical work and chess study.  Komodo is nearly as good a choice, and Deep Fritz – while coming in third in this race – will also serve your gift recipient well.

Summary of buying chess software: Chess software, as I have said, involves three elements – the GUI, the database, and the engine.  The GUI is the most basic of these, the one without which the other two are inaccessible.

For that reason, my number one recommendation for a gift for your player is the Houdini 4 Standard engine and GUI from ChessBase. [ Amazon link | ChessBase downloadable link ]  You can play against Houdini and have it analyze your games, and both the included database and database functions are sufficient for most players.  If your gift is your player’s first step into the world of chess software, Houdini 4 will be a real pleaser.

More advanced players – in terms of rating or ambition – would be thrilled to own the full ChessBase 12 package.  The standard package [ Amazon link | ChessBase link ] includes the Big Database and will serve your player well for years to come.   If you’re hoping to save a little money, consider the downloadable version of ChessBase 12 from ChessBase, and tell your gift-ee to download free games updated weekly at Mark Crowther’s wonderful website The Week in Chess.