
Book Review: Thinking, Fast and Slow

In Boston at SPTechCon, I had the pleasure of giving the keynote titled “SharePoint Psychology”. Afterwards, Jeremy Thake and I were talking about the keynote and he mentioned that many of the concepts were familiar to him. He traced his thoughts back to the book Thinking, Fast and Slow by Daniel Kahneman. Having not read the book, I put it in my queue, and knowing that Jeremy aligned it with my previous reading and work, I decided to prioritize it. There are a few important bits about the author and the book worth covering before I start to review the contents.

First, the author is a winner of the Nobel Prize in economics, and there is certainly sufficient material in the book to support that. While it is at its core a psychology book, a great deal of the focus is on how people operate in an economy and how the rational-behavior model isn’t sufficient to describe human behavior.

Second, the book is 512 pages. It’s not the short, mind-candy read of Who Moved My Cheese. It’s the same length as the Diffusion of Innovations book I recently reviewed. My notes for the book (the process I use is outlined here) are 25 pages long. Needless to say, this is a short summary of what I got from the book – no matter how long it may seem.

With that out of the way, I think Jeremy is right. The undertones and overtones of Thinking, Fast and Slow are woven throughout other books that I’ve read like Demand, The Happiness Hypothesis, Switch, Sources of Power, Finding Flow, and Social Engineering. In fact, many of those authors are quoted in Thinking, Fast and Slow, and Gary Klein – the author of Sources of Power – reportedly collaborated with Daniel Kahneman for a while, though the two approach decisions from opposite sides of the table.

Of Two Minds

When folks say they’re of two minds about something, perhaps they’re exposing their inner conflict – or perhaps they’re showing you a glimpse of how they think. Kahneman proposes a model where we have two different but interrelated systems for processing information. System 1 is the automatic and rather mindless operating mode that we find ourselves in daily. It’s the threat-management engine that watches for urgent threats to us, and in its constant vigilance it is called upon to make very complex assessments with relatively little information. This mode of operating – this system – is designed to jump to conclusions without realizing how far it has jumped.

The other system, called System 2 throughout the book, is the deliberate processing of information. This system is “in control” in that its answers are generally the more thoughtful ones and the ones which tend to be more accurate – except when it colludes with System 1. System 2 is lazy. Honestly, trying to keep our brains actively engaged in thinking about every little thing would drive us crazy, so it’s important that System 2 be lazy and not left on too long. The collusion between System 1 and System 2 happens when System 2 doesn’t bother to check the facts behind the assumptions it got from System 1. System 2, in its laziness, will just assume that System 1’s information was accurate – or at least it will accept System 1’s assessment of how accurate the information is. The problem is that System 1 – our automatic operating mode – isn’t designed to assess the accuracy of the information. It’s designed to create coherent stories that it can base evaluations on.

I’ve often said that I construct mental models for people I meet – based on Myers-Briggs and other frameworks. I then run a mental simulation of how I believe they’ll react to information and how I believe they’ll behave (Sources of Power talks about mental simulations). This is my way of leveraging a coherent story about a person that I’ve made up – and to be clear, it is made up. However, some folks (Chris Riley, for instance) are amused by the fact that I can get pretty close pretty quickly. For me, most of this process is automatic. I’m building on the automatic capabilities of System 1 to do on-the-fly assessments and then trying to leverage them in a quasi-conscious way.

Before we leave the idea of two minds, I’ve got to be clear that the personification of the two systems allows them to be more clearly understood; Kahneman was clear that his academic colleagues would object to it. From my point of view, I believe it’s eminently helpful in creating understanding.


Jump to Conclusions, Please

One of my favorite Emerson quotes is “A foolish consistency is the hobgoblin of little minds.” The key word is foolish. Our automatic mode of processing information is ever vigilant for information that matches its conception of the world, integrating whatever is consistent with that view. In essence, System 1 is in place in part to keep a single unified model of the world that it can use to predict situations quickly. System 1 is designed to jump to conclusions. It is designed to predict that a red light will come after a yellow one – and it’s designed to do this very quickly. It’s designed to determine when to engage System 2 and when to continue to operate on the mental model that it has created.

System 1 operates with heuristics – that is to say, it uses rules of thumb. It assumes that all things will be the way they normally are. This, plus vigilant observation for anything that violates those expectations, is a very effective operating mode. Problems occur when the heuristics being applied by System 1 are wrong or inaccurate. It takes careful analysis to determine why an operating mode may not be right – and most of the time System 2 doesn’t get the message that it’s needed.

I wrote a blog post about the Apprentice, Journeyman, Master journey that we use to train in industries where tacit knowledge is the lingua franca. What I failed to mention in my post was that apprentices are assigned simple tasks which don’t require much global knowledge, just a few basic local skills. Journeymen are taught to use “rules of thumb” or heuristics to do their slightly more complicated work, which depends on a more global understanding. Masters can use the “rules of thumb,” but their expertise is in knowing when they don’t apply or don’t have to apply. In a trade, the master is, hopefully, constantly looking over the shoulder of the journeyman and the apprentice – noticing when they’re not executing a skill correctly, or when they’re using a “rule of thumb” where it shouldn’t be used. Of course, if the master is busy, it’s possible he’ll miss something critical that he should have stopped.

This is the problem of System 2. Being lazy, System 2 by default will rarely (if ever) check the work of System 1. As a result, System 1 applies the wrong heuristic or applies the right heuristic too broadly. This leads to a systemic bias – and this is often the way that we go wrong. Consider the situation where System 1 quickly substitutes an easier problem for a difficult one – and doesn’t even tell System 2 what is going on. One of the quoted studies asked students about the number of dates they had in the last six weeks and then asked them how happy they were. These two answers had a very high degree of correlation – the students substituted “how happy am I with my love life?” for the question “how happy am I, in general?” without notifying System 2 that the substitution had been made.

I mentioned the Elephant, Rider, and Path in my reviews of both Switch and The Happiness Hypothesis. I believe the Elephant describes System 1 well – mostly in control. System 2 is the rider. He believes himself to be in control but is really subject to the elephant much more than he realizes. The elephant and the rider are an even more concrete way to see the model Kahneman proposes with System 1 and System 2.

WYSIATI

One of the recurring themes in the book is What You See Is All There Is (WYSIATI). This is the bias of assuming that the entire world is similar to what you’ve seen. For me, I see this bias most prominently with folks who never travel. They may know abstractly that China is different from Illinois, but they don’t understand the extent of the differences. If you pull four red marbles and a single white marble from a pot, you’ll automatically assume that the pot contains 80% red marbles when you haven’t pulled enough marbles to really know. WYSIATI makes us believe extremes. Whatever we see at the moment, we believe will continue forever.
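
To put a rough number on how little five marbles tell us, here’s a minimal sketch – mine, not Kahneman’s – that computes how often pots with very different mixes would produce exactly that four-red, one-white sample. The pot compositions and the with-replacement assumption are made up for illustration.

```python
from math import comb

# How likely is a "four red, one white" handful from pots with very
# different mixes? The pot compositions (and the assumption of drawing
# with replacement from a large pot) are invented for illustration.
def p_four_of_five_red(p_red: float) -> float:
    """Binomial probability of exactly 4 reds in 5 independent draws."""
    return comb(5, 4) * p_red**4 * (1 - p_red)

for p_red in (0.5, 0.6, 0.7, 0.8, 0.9):
    print(f"Pot that is {p_red:.0%} red: P(4 of 5 red) = {p_four_of_five_red(p_red):.3f}")

# A pot that is only 50% red produces this exact sample about 16% of the
# time, and a 90% red pot about 33% of the time -- five marbles simply
# can't tell these pots apart, yet System 1 happily concludes "80%".
```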

Babies, when they’re developing, cry when their mother leaves the room because for them their mother is gone forever. It’s difficult for them to realize that people will come back. Whatever they can see is all there is; if they’re in a room alone, there is no one else in their world. It takes time for them to realize that what they’re experiencing (or not experiencing) at the moment will not last forever. (I want to attribute the preceding to Brain Rules, but I can’t be sure.) In Thinking, Fast and Slow there’s an interesting discussion about how people experience their world and how they remember it. If you offer students 60 seconds with a hand in cold water, or 60 seconds in cold water followed by 30 seconds of slightly less cold (but still painful) water, they’ll pick the 90 seconds. It seems that the way we perceive time and pain is different from how we remember it. So it’s no wonder that we believe WYSIATI. For most animals – including young humans – remembering something that’s out of sight is difficult.

How We Learn

I mentioned above that our automatic system (System 1) is always running, always trying to integrate information into a mental model that will allow it to predict events – particularly negative events. I have mentioned the work of Marcia Bates a few times in previous blog posts; she asserts that 80% of the information we learn comes to us from undirected and passive behavior. That is to say, we are learning – and integrating that learning – all the time. We don’t have to be actively pursuing specific information (which, by the way, Bates estimates at 1% of our overall learning).

I recently had an opportunity to see this in action. During a road trip my wife was working on a puzzle book. The page she was on called for an answer of two words, the first beginning with the letter E and the second ending with the letter E. She turned to me and asked for the nickname for Ireland. After an incredibly brief pause I answered “Emerald Isle.” This startled me. I don’t ever remember studying Ireland. I don’t believe this was ever anything that I was consciously aware of knowing, and yet after a microsecond’s search of my entire library of experiences I was led to the answer. This isn’t good news for those of us in the business of training and development – including me.

Kahneman turns his attention to expertise and outlines how we learn well from high-validity environments – that is, places where there is a true causal chain and feedback happens regularly and reliably. We’ve all gone to a hotel and struggled to get the shower temperature right because we have to wait some period of time before our adjustment of the controls shows up in the water coming out. The longer the gap before feedback, and the less clear the feedback, the less likely we are to learn well. We need reliable feedback in order to make adjustments.
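
As a toy illustration of why delayed feedback makes learning hard, here’s a minimal sketch – my own, not from the book – of a hotel-shower-style adjustment loop. The target temperature, starting temperature, correction gain, and lag length are all invented for the example.

```python
# A toy model of adjusting a shower whose temperature responds to the knob
# only after a delay. The target (38 C), starting temperature (20 C),
# correction gain, and lag are all invented for illustration; the point is
# that a strategy which works with instant feedback over-corrects badly
# when the feedback is delayed.

def adjust_shower(lag_steps: int, steps: int = 20, target: float = 38.0):
    knob = 0.0                              # cumulative knob position
    history = [20.0] * (lag_steps + 1)      # temperatures we have felt so far
    temps = []
    for _ in range(steps):
        felt = history[-(lag_steps + 1)]    # what we feel is `lag_steps` old
        knob += 0.5 * (target - felt)       # naive correction from what we feel
        actual = 20.0 + knob                # water actually leaving the head now
        history.append(actual)
        temps.append(round(actual, 1))
    return temps

print("no lag:    ", adjust_shower(lag_steps=0))
print("3-step lag:", adjust_shower(lag_steps=3))
# With no lag the temperature settles near 38; with the lag the same
# strategy keeps over-correcting and swings between too cold and scalding.
```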

One caution is that we sometimes see patterns where none exist. We are too eager to assign a pattern to the random. We’re equally likely to confuse correlation with causation – that is, just because two variables seem to be related, we may choose to believe that one of the variables causes the other. The most poignant example of this for me is the housing bubble in the US. One of the factors was the decision to encourage home ownership, because economic stability and home ownership were shown to be correlated. The error was in the belief that home ownership caused economic stability (instead of, perhaps, the other way around). As a result, policies were enacted, both explicitly and implicitly, that led to many more folks owning homes than previously. When the economy sputtered and the housing market crashed, numerous homeowners defaulted on their loans, taking out a significant part of the financial industry and setting off alarm bells. The belief that home ownership caused the desirable state of economic stability created policy that led (in part) to the collapse. These policies pushed home ownership onto a group of folks who were not able to sustain it. This was a costly misstep on the road to learning.
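
Here’s a minimal sketch – again mine, not the book’s – of how a hidden common cause can make two variables look strongly related even though neither causes the other. The variable names, thresholds, and noise levels are all invented for illustration.

```python
import random

random.seed(42)

# Invented toy data: `income` is a hidden common cause that drives both
# home ownership and economic stability. Neither observed variable causes
# the other, yet the two end up strongly correlated.
n = 10_000
income = [random.gauss(50, 15) for _ in range(n)]
owns_home = [1 if x + random.gauss(0, 10) > 55 else 0 for x in income]
stable    = [1 if x + random.gauss(0, 10) > 50 else 0 for x in income]

def correlation(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / (var_a * var_b) ** 0.5

print(f"correlation(ownership, stability) = {correlation(owns_home, stable):.2f}")
# Prints a sizable positive correlation even though, by construction,
# pushing more people into home ownership would do nothing for stability.
```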

In my world it’s interesting to speak of experts – those who have reportedly learned a great deal about a topic. It’s interesting because of my work with certification exams and training where we establish a baseline that is typically quite low across a broad set of skills. Candidates that pass the certification exam have shown reasonable competence across the skills measured by the test. Even here, however, some candidates have extremely good skills in one area and missing skills in other areas. It’s a balance to ensure that there’s a baseline set of skills to support the designation earned through the certification.

So-called experts are even more diverse in their skills. Some who consider themselves architects are great at the IT Pro side of things and lousy at the development side – or vice versa. Kahneman offers a simple explanation: expertise isn’t a single thing. It’s a collection of mini-skills – a set of skills that overlap and build on one another. It’s completely possible to be an expert and to have areas of missing skills, because those mini-skills were never built – but as a total percentage of the area of practice they’re relatively small. For instance, I’ve never worked with multi-tenant environments in SharePoint. It’s a skill set that I’ve never developed. Does that mean that I’m not an expert? No, it just means I’ve got more to learn when the opportunity arises.

Kahneman points out that even experts with experience and skills can often produce bad results unintentionally. He relates a project where an expert was brought in to be a part of the group. One of the exercises was to estimate the remaining time on the project. Everyone on the project produced similar estimates of around two years – including the embedded expert. However, when the expert was questioned about other projects similar to this one, the result was a completely different answer – seven years. The expert had the requisite knowledge, but it wasn’t integrated into a single way of thinking. Kahneman refers to this as the inside view (the first estimate) and the outside view (the second estimate). Sometimes we have the knowledge and experience necessary to realize the folly we’re engaged in, but we remain ignorant of our delusion.

Humorously, Kahneman points out that even when taught extensively, students don’t always apply what they learn of human psychology. It seems that we may be caught in the same trap described in Diffusion of Innovations, where the progression between knowledge, attitudes, and practices isn’t linear. We can be intellectually aware of information and at the same time not use it to influence our behavior. We continue to see the illusion even after we know it is an illusion.

As a sidebar here, it occurs to me that reflection time is absolutely critical to the ability to identify when a delusion is happening. I think of all of the times that I’ve been on delusional trips and I realize that they were caused in part by a lack of reflection time on my part. I fill my “me time” with a desire to read more or to do more. As a result, I reflect less and end up allowing delusions to continue to the point where they can no longer be supported. If you’re trying to ensure that you’re not being delusional in any aspect of your life, make sure that you have time to reflect. I believe it’s this reflection time that allows you to build the connections that System 1 needs to leverage the information in the moment. I mentioned above that I’ve got a process for reading books. What I didn’t say is that I spend a lot of time capturing highlights, copying them out, and refining them into notes. I spend even more time on blog posts like this one trying to understand what I got from the book and to connect it to the things I’ve learned from other readings and experiences. (While this is a labor of love – or at least passion – it is still a labor.)

When Gary Klein was studying fire commanders for the research that eventually found its way into Sources of Power, he rejected intuition as an explanation. There had to be a reason. However, time after time fire commanders said it just “felt wrong.” Intuition is just subconscious recognition. In the case of the fire commanders it could have been a violated expectancy, or it could have been an opportunity that others would have missed, spotted because of the recognition of a pattern. So if intuition is simply recognition at a subconscious level, shouldn’t we be able to encourage the incorporation of experiences into our thinking to make them available – not as a carefully considered variable but as a part of our automatic operating system, System 1?

To understand how to integrate our experiences, I want to rewind and connect some learning from Lost Knowledge. That title focused on the conversion of tacit knowledge into explicit knowledge – or on holding on to tacit knowledge as much as possible. The problem with tacit knowledge – and the experts who hold it – is that expertise is notoriously fickle. Give the same expert the same evidence twice and you’re likely to get two different answers. Kahneman speaks about judges whose parole decisions would vary based on the time of day they reviewed the cases (and, by extension, their blood sugar level). To eliminate the unseen biases that influence an expert, we need to pull up the key criteria that they’re subconsciously using.

Ultimately, the conversion of tacit information into explicit information is about identifying the specific attributes and characteristics that influence the situation. From there it’s a hop, skip, and a jump to a formula that can be used to make a quick assessment of a relatively complex situation. The process of converting tacit into explicit information is the process of converting intuition into a repeatable formula. That isn’t to say that the process will be easy, nor that everyone will like it. Ashenfelter converted the tacit knowledge about the impact of weather on the future value of wine into just such a formula. His algorithm has a 90% correlation with the future price of the wine, yet wine connoisseurs were quite unhappy about it. That doesn’t mean it wasn’t still the right thing to do. It converted what the industry implicitly knew into a very repeatable formula – which is a good thing.
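
As a sketch of what “converting intuition into a repeatable formula” can look like in practice, here’s a toy linear regression in the spirit of Ashenfelter’s wine equation. The predictors mirror the factors he’s generally reported to have used (growing-season temperature, harvest rain, winter rain, and vintage age), but the data and the resulting coefficients below are entirely made up for illustration.

```python
import numpy as np

# Toy vintage data: [growing-season temp (C), harvest rain (mm),
# winter rain (mm), age (years)]. Values and (log) prices are invented
# purely to show the mechanics of fitting an explicit formula.
X = np.array([
    [17.1, 160, 600, 30],
    [16.4,  80, 690, 28],
    [17.5, 130, 502, 26],
    [16.8, 110, 420, 24],
    [17.3, 187, 582, 22],
    [15.9, 290, 485, 20],
    [17.8,  38, 763, 18],
    [16.2, 155, 830, 16],
])
log_price = np.array([3.5, 3.4, 3.9, 3.2, 3.6, 2.6, 4.2, 3.0])

# Ordinary least squares: log(price) ~ intercept + weather + age
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, log_price, rcond=None)
intercept, temp_c, harvest_c, winter_c, age_c = coef
print(f"log(price) = {intercept:.2f} {temp_c:+.2f}*temp "
      f"{harvest_c:+.4f}*harvest_rain {winter_c:+.4f}*winter_rain {age_c:+.3f}*age")

# Once the tacit judgment is written down this way, anyone can score a new
# vintage by plugging in four numbers -- which is exactly what upset the critics.
```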

The Specifics

In parting, I’d love to leave you with the specific heuristics, biases, and effects mentioned in the book (along with a few that were just related thoughts) and my own definitions for them.

Heuristics

  • Availability Heuristic – We believe that things which are easier for us to retrieve in our minds are more frequent.
  • Affect Heuristic – Your likes and dislikes sway your perception of the entire system. If you like the benefits of a technology, you’ll deemphasize the risks.

Biases

  • Imaginability Bias – We assess the frequency of a class of events based on the retrievability of a few instances. (See Availability Heuristic.)
  • Hindsight Bias (“I knew it all along” effect) – We believe that we remember our past well; however, our memories are subject to reevaluation when we learn something new. We tend to believe that our previous perceptions match our current perceptions.
  • Confirmation Bias [from Sources of Power] – We tend to seek and be aware of information that confirms our position rather than refutes it.

Effects

  • Priming Effect – The effect of priming someone with some information to cause a temporary bias in responses. Salespeople are taught to get their prospects saying ‘Yes’ so that they’ll say yes when asked if they want to buy.
  • Halo Effect – The tendency to view all aspects of a person favorably or unfavorably based on a very narrow interaction. Consider your perceptions of a person volunteering with a cause you liked. You’re more likely to believe that person is good – with insufficient background.
  • Framing Effect – A decision can be framed (presented) in a way more likely to lead to one outcome over another. Consider a discount for cash or a surcharge for credit. One will cause a negative emotional reaction (surcharge for credit). The framing will drive behavior towards paying with cash.
  • Exposure Effect – If we’re exposed to something – even briefly and unconsciously – it will have an impact on us. (See Subliminal stimuli @ Wikipedia)
  • Illusory Correlation Effect – The impact of randomly occurring stimuli being erroneously correlated in a person’s mind.
  • Ego-Depletion Effect – In depleted (e.g., low blood sugar) situations, the increased tendency to make intuitive errors; i.e., System 2 doesn’t get engaged.
  • Anchoring Effect – The effect of presenting a person with an initial value from which they will adjust their perception. Adjustments are frequently insufficient and therefore anchoring creates bias in the perception of the person.
  • Above-Average Effect – The tendency of people with moderate skill to believe they have above-average skill; e.g., 90% of people believe they’re above-average drivers.
  • Endowment Effect – The resistance of a person to trade something that they have. Possessions have a higher value than what would be paid to acquire them. Exceptions are those items held “for exchange” like money.
  • Possibility Effect – The tendency of people to give small possibilities more weight than is statistically appropriate.
  • Certainty Effect – The tendency of people to give certainty greater weight than it deserves when compared with near certainty.
  • Disposition Effect – The bias of investors toward selling winning stocks rather than losing stocks.
  • Polarization Effect [from Unknown] – The tendency to be prone to one result or the other and not a moderated answer. A bias away from indifference.

So Read It Already

If you’ve managed to plow your way through to this point in the post, you need to pick up – and read – Thinking, Fast and Slow.
