Saturday, December 24, 2011

Definition of "something new"


You may have read it in the comments of the previous post, so I will summarize it here.
Munich had a period of 2-3 months in which he improved 200 points.
In 2001 I had a period of 6 weeks in which I gained 170 points.

I always felt that this should be the standard and not the exception when you train something. Oddly enough, I couldn't replicate that training effect although I tried for 10 years.

It seems logical to look at what Munich's training and mine have in common:
  • We learned something that was new to us.
  • We both were aware that we were improving.
  • The automation of the knowledge happened more or less by itself, without specialized training.
  • No repetition exercises.
  • No speed exercises.
The essence of the training seems to be that we learned something new.

Of course you can't expect to improve 200 points every 3 months. But it sets a new standard. Just like the 620 points in 2 years of DLM did.

Why can't we improve 200 points every 3 months? It's not the speed of learning that is limiting us, since it is hard to believe that this speed changes all that much within a few months. If the reason for diminishing returns doesn't come from the inside, it must come from the outside.
This means that it becomes more difficult to find something new to learn that matters, that is, something with a direct influence on the outcome of the game.

  • It has to be new.
  • It has to have an impact on the outcome of the game.
Since I'm focusing solely on tactics again, everything I learn directly changes the outcome of the game; that is the very nature of tactics. This means that I only have to worry about how to create something new. What does something new look like in the realm of tactics?

Have a look at the following diagram. I kindly suggest you have a serious look at it before reading further.

White to move.

It is always difficult to find a catchy example since what is difficult for me is easy for others with the same rating and vice versa. What you have collected as familiar patterns is highly personal.

What happened to me when I tried to solve this position for the first time: I got lost in some sort of "concours hippique" of knight moves. There were just way too many possibilities. With no direction in my thoughts, I had to throw in the towel.

Let's see what happens if we add some guidance to our thoughts:
  • What is this position about? The main feature is the passer on d6. This position is mainly about promoting the d6 pawn.
  • Besides that, the black king has to be aware that he can be mated on the back rank or the h-file. But it is unlikely that mate can be forced by white.
  • The pawn on d6 is hanging. It is not so easy to see how to save it and how to guide it towards the promotion square.
  • This puts some serious restrictions on the moves to consider. The moves have to come with tempo and be geared around the pawn.
See the solution of the problem here.

Try to map the moves of the solution onto the guidance above. OK, at a certain moment CT reacts a bit strangely, but that is irrelevant. What you see is that the moves are no longer random probes but follow a distinctive goal. The moves make use of accidental mating threats and accidental threats of knight forks, but they are goal-driven: save the pawn and queen it. Of course black has to make concessions to prevent it, but you can worry about that when it happens.

Instead of relying on recognition of geometrical patterns, I have learned something new. I learned a new pattern of logical reasoning, which can be used in all sorts of similar positions.

Of course a logical pattern will be, in the end, just another pattern. And patterns have to be consolidated by some sort of repetition, while speed can always come in handy. But you have to realize: without something new, you have nothing. So keep your priorities straight.

During the first circle of 200 puzzles of 2200-2300 rating at CT, I scored a measly 2% correct.
I'm now busy with the second circle and I score about 80% right, since I can reconstruct the moves from the reasoning. Now that is what I call learning something new!


  1. Due to bad internet lag, my comment probably got lost.

    To make a long story short, I simply ask this question:
    Would it make much of a difference if you chose a rating range where you "only" fail 80% of all puzzles? And then train these 80%? It could speed things up a little, allowing you to maybe do twice as many puzzles (you would need to find out how much faster).

    I totally agree with everything you have written. There are some questions left:
    How much repetition is needed? And what is the best training range?
    I have the feeling that trying to find "fails" in a way-too-easy range will mostly turn up fails due to mouse-slips and tiredness.
    Finding "fails" in the very difficult range is easy, but these puzzles are statistically of little importance. Even master players fail them, and I assume a master player knows all puzzles that are more common and more likely to come up in your real games.
    So a lot of training ranges could make sense: the rather easy range, the middle range, and the rather difficult range. But all ranges will modify the outcome of your newly acquired knowledge. The rather difficult range will be better for guidance training, while the rather easy range (not the "ultra easy" range, because I regard those as pretty useless) will give you new patterns you should know but actually don't know (well).

    For the right range, aoxomoxoa's hypothesis is pretty good. I think points 5, 8 & 9 point in the direction of the rather easy range. But then again, the rather easy range will give you no guidance training. And I very much agree that the guidance training is considerably different from trying to find new patterns you did not know before.

    Besides: Happy Christmas!

  2. @Munich,
    If you fail 80%, you waste 20% of your time on problems you already know, right? And when you fail, it takes as much time as any other failure, doesn't it? So what's the difference?

    I think that you and I have a fundamentally different approach to masters. You think there is something inherently difficult in high rated problems. I think that a master is a master because he sees an easy problem as easy. I'm not a master because I see an easy problem as complex.

    I make one concession, though. I only look at <= 4 movers, in order to avoid complexity due to visualization problems. That saves time.

    Look how general this guidance is:
    First question: what is this position about? (Possible answers: mate, wood, promotion, invasion, counterattack.)
    Second question: what is the problem? Answer: the passer is hanging.
    Third question: what to do about it? Answer: save your pawn with moves that gain tempo.

    How simple is that? With only 37 questions and answers all middlegame tactics are covered.

    So I don't think the number of different logical elements is all that big. Hence the number of exercises you need will only be 1,000 or so at most. Maybe less. It's all about the quality of your reasoning. 3-4 problems a day during a year is all you need.
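The daily pace implied here can be checked with quick arithmetic. A hypothetical sketch, assuming the 1,000-exercise estimate from the text:

```python
# Rough check of the estimate above: ~1,000 exercises spread over one year.
exercises = 1000  # assumed upper bound from the text
days = 365

per_day = exercises / days
print(round(per_day, 1))  # about 2.7 problems a day
```

Just under 3 a day, consistent with the 3-4 problems a day mentioned.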

  3. You are right. The difference is that the 20% you know are wasted time. That is absolutely a correct answer.

    I feel this is only half of the answer, though. There is another difference:
    the statistical importance of the set with 80% fails is higher than the statistical importance of the set where you failed 100%.
    Look at the distribution of CT puzzles from the 2 million games in the database (which contain many master games).
    Most puzzles are in the CT Blitz rating range of 1400-1500.
    Of 40,000 CT puzzles, almost 10,000 are rated between 1350 and 1550.

    There are hardly any CT puzzles with a CT Blitz rating of 2300+. The practical relevance of the set with the 80% failures is probably twice as high as that of the set where you failed 100%.
    However, if you do 250 puzzles, 50 of them (= 20%) are wasted time. You won't repeat these 50 (20%) puzzles later anyway.
    You will only repeat the 80% fails, so the sacrifice of wasted time is not so high (250 for the first go-through, after which you repeat the 200 fails = 450; compare this with 200 for the first go-through, after which you repeat all 200 of them = 400).
    In return for the 50 "wasted" puzzles, you get a set of 200 fails (= 80% of 250).
    And these 200 failures are of more "value".
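The comparison in the parentheses above can be written out as a small calculation. A hypothetical sketch, using only the numbers from this comment:

```python
# Total attempts when you do a first pass and then repeat only the fails once.
def total_attempts(first_pass, fail_rate):
    fails = round(first_pass * fail_rate)
    return first_pass + fails, fails

# 80% fail rate on 250 puzzles: 250 + 200 repeats = 450 attempts, 200 fails
print(total_attempts(250, 0.80))
# 100% fail rate on 200 puzzles: 200 + 200 repeats = 400 attempts, 200 fails
print(total_attempts(200, 1.00))
```

Both routes yield the same 200 fails; the easier set costs only 50 extra first-pass attempts while being statistically more relevant.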

    And there is even a third aspect.
    Working through the still-difficult set (it is difficult, otherwise you would not fail 80% of it) can be done faster.
    You will still save on total training time, even though 50 of the puzzles will be wasted.

    I believe there is a trade-off. If you reduce the difficulty even lower, then the whole guidance aspect does not make sense anymore, because the puzzles just get too simple.

    It also depends on the intended outcome of your training: do you just want to complete a guidance list of maybe 50 questions?
    That aim can be achieved pretty soon. Maybe another 100 puzzles, and you are not going to add any new questions to the list. Then there is no need to change the range.
    But if you intend to do more training in solving difficult puzzles with this acquired guidance list, then I suggest you lower the difficulty of the set a little bit. You will see that you will be able to cover much more ground (more puzzles in number). And at the same time, the statistical relevance is significantly higher.

    You know where to get the statistical distribution of CT puzzles? I can see this Gaussian distribution under "problems" --> "Problem Set Stats".

    Lowering the difficulty of a set more and more will result in more and more wasted puzzles. Let's look at the extreme case of ultra-easy puzzles (maybe below a CT Blitz rating of 1200): for 200 fails you need to go through 10,000 puzzles if your solving success rate is 98%.
    Plus, 100 of these 200 fails will likely be the result of a mouse-slip or really big tiredness and distraction.
    This can't be an efficient way either, because you will hardly learn anything new.
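The ultra-easy extreme can be expressed the same way. A hypothetical sketch; the 98% success rate and the target of 200 fails come from this comment:

```python
import math

# First-pass size needed to collect a target number of fails
# at a given solving success rate.
def first_pass_size(target_fails, success_rate):
    fail_rate = 1.0 - success_rate
    return math.ceil(target_fails / fail_rate)

print(first_pass_size(200, 0.98))  # 10000 puzzles for 200 fails
print(first_pass_size(200, 0.20))  # 250 puzzles at an 80% fail rate
```

The 40x difference in first-pass size is why the ultra-easy range is so wasteful.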

    So you won't find the optimal range at either of the two extremes. You might better look at a range where you fail 80%, or where you fail only 20%. It really depends on what you intend to learn. If you want to learn and automate logical reasoning (and I agree this is a different animal than learning patterns), then you should look at the difficult range above your own rating, somewhere where you miss 80% or even a bit more.
    On the other side of the possible training options (for learning patterns), you are better off searching in a range where you will fail not more than 20%.

    I will change my training and get rid of the ultra easy range.
    - - -
    Something totally different:
    Will you publish your list of guidance questions one day? It would give me a head start when I do the guidance training.

    Sorry for the long text, but for a shorter text I would need more time. (See the similarity? Same as doing 250 puzzles or 200 puzzles.)

  4. @Munich,
    I really think this is a non-discussion.

    It is about the quality of our logical thinking, not about quantity in any way.

    I can come up with arguments for both sides. But we are not brewing a scientific paper here. Start at any range. If you feel you are too low, go up. If you feel you are too high, go down. Besides that, it is probably a matter of taste.

    I want to learn something at which my opponent is not good. That is what high rated problems are. A bonus is that the problems are not difficult at all. You just have to slap your head twice and say "aha" a few times.

    I only have to learn to apply the 37 elements of logical tactical reasoning well. No matter if that takes me 50 puzzles or 50,000.

    Why does a lower rated problem take less time? Because it offers you only one move to go astray, while a high rated problem offers you 3 moves where you are likely to go astray. So you have to do 3 simple problems "to catch up".

    Do you really think that you can become better at 2300-rated problems without improving at 1800-rated problems?

    There are only 169 <= 4 movers at CT, so if you are not a tactical monster by problem 169, you have to move down anyway (did I already mention that it is a non-discussion? :)

  5. I agree wholeheartedly with your approach Tempo. Being good at what other players aren't will put you ahead of the pack (especially if what you are doing can easily win or lose the majority of games).

    In my own pursuit I've found that the thing I don't want to do is often the thing that offers the most bang for my buck improvement-wise. Yes, it's easy and fun to go through countless easy tactical problems, but at the end of the day, if you don't improve, your efforts are for naught. Right now all I do is hard problems at CT and Reinfeld's 1001 Sac. and Combinations, and I feel that I've already gotten much more from 1 month than I did from a year of easy problems at various tactical training sites.

    Anyway, I do the same thing when solving tactics problems: I vocally describe what is going on at the board (imbalances, squares to watch out for, current threats, etc.), then based on those factors create a dream position where I would be clearly winning, and then calculate some lines to see if it is possible.

    Keep up the hard work. Read your blog often but just now decided to post.

    Merry Christmas.

  6. @Anon,

    Thx for the cheering. I think that the addiction to crunching big numbers leads to low-quality problem treatment. It is better to do a few problems really, really well. Believe me, I speak from experience.

  7. First things first:
    Merry Christmas to you Temposchlucker and to all of you!

    You have often written about your list of 37 guidance principles or tactical elements or whatever you call them. I wonder if you could share the list, or say 5-8 items from it that you feel are representative. I do not intend to copy and use the list, because I think a major point of such a list is to build it yourself. However, I would like to better understand the nature of the items on it.

    I also wonder how this list relates to your current focus on 'learning something new'.
    Is the conscious use of the list as such a new approach?
    Or are some items on the list new?
    Or am I missing the point completely?

  8. I'm not unwilling to give this list away, but I'd like to ask a little quid pro quo in return.

    There are a lot of readers of this blog who never comment and just lurk in the dark, or who comment but have no blog of their own. They know everything about me, even when I leave home to play a tournament. If they are clever enough, they know my real name and where I live. But I know nothing about them. That is rather one-way traffic.

    So drop me a line and I will send you my file. And tell a little about yourself and what you are up to. No need to send the report of your shrink or the footage of your marriage, though.

    I will send you a Treepad file, which you can read with Treepad Lite (freeware).
    You can find my e-mail address in my blogger profile.

  9. Quality is one thing, but quantity is another. It should be about both. The quantity you need for automation; the quality you need to produce the guidance list. Once you think the guidance list is complete, you probably want to test it against a lot of puzzles?

    I was not aware that the number of way-too-difficult puzzles is already very low. If you then look at the next difficult range below the top-difficulty puzzles (169 in number), do you already score 20% of them?

    As for your question "Why does a lower rated set take less time?":
    I actually don't know how the solving time behaves for puzzles rated above CT 2000. I simply assumed it behaves like this: the higher the problem's rating, the higher the average solving time. Not only your solving time (that too), but the solving time of the other users, who are in the end responsible for the rating of a puzzle.
    There is a tendency that the higher a puzzle is rated, the longer the solving time will be. I think I know the reason why, but it is not important for this "non-discussion" here.

    From your answer, I think you misunderstood my point about statistical relevance. Let me give you an example:
    the endgame K+2N versus K+pawn is theoretically often won, if the black pawn is not too far advanced. It is a very difficult endgame, though.
    You need to drive the opponent's king to a corner with just one knight, while the other knight blocks the opponent's pawn.
    It is an interesting endgame.
    You could train it, and then you would know something that your opponent does not.
    (You said: "I want to learn something at which my opponent is not good. That is what high rated problems are.")

    But how likely is it to come up in your real games? This is what I am talking about. The statistical relevance of this interesting endgame is very low. It is much lower than that of the endgame K+Q versus K+R: an endgame which is also considered very difficult, and which can often be won, too. It might not be as difficult as the K+2N vs. K+pawn endgame,
    but K+Q vs. K+R is bloody difficult, too.
    At the same time, the endgame K+Q vs. K+R is much more likely to come up in one of your real games.

    If you don't know these two endgames, you will probably fail to win them.
    But when deciding which endgame is of more statistical "value", I strongly recommend you learn endgames whose likelihood of coming up in one of your real games is at least realistic.

    And this is not only valid for endgames (which are, by the way, typical guidance training). It should be valid for your puzzles, too.
    You are better off training a range where you fail only 80% of the puzzles, which wastes the other 20%. But for the sacrifice of this wasted 20% (puzzles you could solve) you get plenty of compensation. Read my previous comment about the compensation again; maybe now you will understand it.
    In both cases, you will learn something new which your opponents won't know.

  10. @Munich,
    I had a long think and I must admit you are right again. I was focusing on learning logical reasoning, and for that it is good that problems are difficult. But I tested a few problems with your ideas in mind, and indeed there is an aspect of pattern recognition too. And in that case statistical relevance is important indeed.

    I made a second selection of 2000-2200 puzzles, which consists of 783 problems. That seems to be a nice amount. I don't know how I score on them yet, but I'll let you know.

    There is another aspect, though: I do not necessarily always take the time to find the solution myself (no DIY). This has a positive effect on the amount of time needed, but not on the statistical relevance, of course.

    Thanks for being persistent.