Book Review-Wired for Story: The Writer’s Guide to Using Brain Science to Hook Readers from the Very First Sentence

Socrates thought that books would do terrible things to our memories. Since the beginning of time, our knowledge had been passed on in the oral tradition of stories. These stories were memorized and repeated. They were handed down from generation to generation, and that’s how story co-evolved with our species. Wired for Story takes us through a journey of the mind, speaking not only of what causes us to be so intrigued and enthralled by stories but what it takes to create a good one.

There is now interesting neuroscience supporting the idea that the brain is wired for stories. When we're absorbed in a story, parts of the brain are selectively disengaged. We get little shots of dopamine to keep us coming back for more, and the more research we look at, the clearer the picture becomes that our brains are designed by evolution to be story-centric.

Why Story?

The importance of stories has been a mystery. They're powerful in the neurochemicals they release. They have the power to suck us in and cause us to stay engaged, even when we need to sleep or take a bathroom break. This sounds a great deal like flow, which has the same capacity to so engage us that we forget our biological needs. (See Flow, Finding Flow, and The Rise of Superman for the power of flow.) Much like play, there must be an evolutionary advantage to stories. (For more on the benefit of play, see Play.)

If something with such a powerful pull needs an evolutionary reason to hold it in place, what then is the power of story? The power seems to come from our ability to safely experience what’s impractical to experience personally. A spy novel allows us to experience the thrill of being James Bond without the real-world consequences of making a mistake and being killed or placed in prison. Less action-oriented stories still have their pull, as they teach us something that we didn’t know before.

Dress Rehearsal

One of the tricks that evolution taught us was the use of simulation. It allows us to learn from things that haven't happened to us – including things that haven't happened yet but that we anticipate with stress (see Why Zebras Don't Get Ulcers). Jonathan Haidt in The Righteous Mind speaks of the change that allowed us to have shared intentionality. Somewhere along the line, we leveraged our ability to simulate to create an opportunity to work together towards a goal – the original team-building exercises.

Stories leverage this simulation and shared intentionality to create an opportunity to learn and live vicariously. We don't have to be the Indiana Jones figure swinging from a whip and narrowly avoiding huge rolling boulders in an ancient pyramid. We can allow the story to unfold and learn something. In some cases, the learning isn't useful or realistic, but in less-glamorized stories, the lessons can be powerful. Aesop's Fables are a good example of stories with a purpose. They teach important moral lessons.

Moral Lessons

The real goal of a story is to answer one overriding, truly profound question: how does the protagonist overcome adversity by changing themselves? It's a common misconception that story is about the plot – about what happened. However, the real heart and soul of the story isn't what happened but how the protagonist changed. If your protagonist doesn't change as a result of the story, then it's not really a story – at least not a compelling one. You can have a story like Dude, Where's My Car? that is filled with silly situations, one-line zingers, and the occasional plot twist. However, without a real change in the protagonist(s), there's no point. It's mind candy, just something silly to fill the time.

Done well, a story reveals human nature to us through the changes we see in the protagonist.

Prediction Failure

It’s well-established that we’re lousy at predicting our own happiness. (See Stumbling on Happiness, The Happiness Hypothesis, and Hardwiring Happiness for more.) However, we even fail to predict our own behavior. Kurt Lewin’s equation that behavior is a function of both person and environment means that we’ll never really know what we’ll do until we’re actually placed in the situation. We want to believe that we wouldn’t mistreat mock prisoners (see The Lucifer Effect) or that we wouldn’t administer seemingly lethal shocks (see Milgram’s work in Influencer).

Stories give us an opportunity to view what happens when a protagonist who may seem similar to us reacts to a set of circumstances – things that most of us never want to see happen to us.

Tragedy and Comedy

The difference, they say, between tragedy and comedy is timing. However, Inside Jokes revealed that there’s slightly more to it than that. Comedy is about creating misperceptions. It’s a 1-2 pause, 3 waltz that makes comedy work. We intentionally lead the audience down the wrong path, and then, with a flick of the tongue, we shift them in a totally different direction. The detection of the fault in the prediction is rewarded with a small shot of dopamine – at least in the cases of real laughter.

Comedy is, as I learned, all about the control of the release of information. (See I am a Comedian for more.) The comedian quite obviously knows the punchline but cannot do anything to reveal it before its time. Obviously, comedy is more nuanced and special, with tags and call backs and other tricks to cause the mind to reach the wrong conclusions; however, done well, all comedy is like all magic – it’s about the art of misdirection.

At some level, the difference between tragedy and comedy is timing, as the secret to comedy in and of itself is timing. So, too, is the case with story: the careful, timely release of information that exposes the inner struggle of the protagonist results in a story that keeps the reader leaning in, leaning forward, looking for the next clue that will help them solve the puzzle. Occasionally, folks like me will take alternate approaches to solving a puzzle, as in How This Developer Solves a Puzzle, which is akin to reading the end of the story without progressing through each of the pages.

The Setup

At some level, it's the writer's job to set the protagonist up to run through the gauntlet of the plot and to emerge on the other side. The setup can be subtle in terms of its origins. Perhaps it's where or when they're born. Perhaps it's the people they're attracted to. At some level, however, it must feel like fate. It must feel like this is what they are meant to do – otherwise, why wouldn't they turn away from the challenge and do something else?

Done properly, the setup in the story feels like fate. It feels like it’s what the person was born to do. Most of us go through life without that sense of purpose and meaning. We don’t know what fate would have for us. We seem to be just moving through life like a leaf blown in the wind. (See Extreme Productivity for more about how our paths are rarely straight.)

Pacing

With the setup and the protagonist in their spots, all that is necessary is for the plot – and the transformation of the protagonist – to unfold. Perhaps the most difficult component of story writing is to find a pace for the plot where each twist lands precisely where the audience needs it to stay engaged, with each reveal coming just before they lose interest. Whereas real lives are sheer boredom punctuated with stark terror, stories should contain only the elements necessary to solve the final puzzle of the protagonist's change.

I believe that distilling the story down to its necessary elements and controlling the pacing of those elements is the heart of how we can capture the power of story. After all, humans are Wired for Story.

Announcing the Implementing Information Management on SharePoint and Office 365 Course

I'm pleased to announce that today my Implementing Information Management on SharePoint and Office 365 course went live on the AIIM web site. This project has been a long time coming, both in the recent development of the content and in its longer history.

It was 2010 when I was meeting with the Microsoft product teams for the development of what would become the SharePoint ECM Implementers course. It was offered internally and to partners for a few years and was ultimately made available online in a recorded version. It's long since been removed. That's too bad, because, as I speak at conferences, I consistently find people who are struggling to leverage SharePoint as their enterprise content management system.

The opportunity came up to help translate the good work of AIIM’s ECM Master program into a pragmatic implementation guide for SharePoint and Office 365, and I jumped on it. I started from the conversations I’ve had with the product team – and with organizations looking for a way to implement SharePoint successfully – and created a brand new four-day instructor-led course. Then I took the course and recorded all the instruction and the labs to turn it into an online offering par excellence.

The online course includes a 715-page student/lab manual, 324 minutes of recorded instruction, and 181 minutes of recorded labs. The course gives you everything you need to be successful with SharePoint and Office 365. It covers thorny topics like retention and records. It explains how to leverage search to create findability. It makes the use of site columns, content types, and content type hubs real. It even walks step-by-step through how to manage security in the environment, how to create user experiences, and dozens of other important topics that are relevant whether you're using SharePoint Online or running SharePoint on-premises.

Book Review-Mindreading

Mindreading – it’s the stuff of comic books and science fiction. At the same time, Dr. Paul Ekman struggles with the implications of his discovery of micro-expressions and the emotions they reveal (see Nonverbal Messages and Telling Lies). All the while, Jonathan Haidt believes our ability to read others’ intentions is the point at which we became the truly social and cooperative species we are today. (See The Righteous Mind.)

Somewhere between the superhero capacities and the reality of our evolution lies a question: how does it work? How is it that we have any capacity to read another's mind? What is it that allows us to "know" what is in someone else's head? This is the question that plunged me into the academic writing of Mindreading.

Making Models

Other than Steven Pinker, I don't know anyone who claims to know exactly How the Mind Works. In his book of the same title, Pinker attempts to walk through the topic, but my initial journey through the material was called on account of boredom. It's back on my list to try to read again, but I can't say that I'm looking forward to it. The neurology books I've read can describe the firing of neurons and their structure – but not how they work together to produce consciousness. (See Emotional Intelligence, Incognito, and The End of Memory.)

Psychology has its problems too. Science and Pseudoscience in Clinical Psychology and The Heart and Soul of Change are both clear that psychology doesn’t have all the answers for how the mind works. The DSM-5 is a manual of all the manifestations of problems with psychological development without any understanding of what’s broken or what to do about it. It’s sort of like a categorized list of all of the complaints that people have had with their car when they take it to an auto mechanic. Warning: Psychiatry can be Hazardous to Your Mental Health speaks of the rise of the use of drugs – with limited, if any, efficacy – and how we still don’t effectively know how to treat mental health problems.

With all these problems, one might reasonably wonder why we bother making models at all. The answer lies in a simple statement. The statistician George Box said, “All models are wrong, but some models are useful.” The fact that each model moves us closer to an approximation of reality is why we make models. Much of Mindreading is spent exploring the author’s model of mind reading and comparing it to the models that others have proposed – and how the author’s model builds on the models of others.

Telling Lies

I ended my review of Telling Lies with the idea of stealing the truth. That is, how detection of lies could be used to steal the truth from those who wished to keep the truth secret. This is, for me, an interesting moral dilemma. Our ability to read minds – to have shared intentionality – allowed us to progress as a species. It was an essential difference, just as was our ability to use tools. At the same time, we believe that we should have a right to keep our thoughts private.

Mind reading, or shared intentionality, has been one of the greatest factors in our growth as a species and at the same time we struggle with what it means.

Understanding Beliefs

Show a small child of one or two years old what’s in a box, and close it. Watch as their playmate enters the room, and ask them what their playmate will believe is in the box, and they’ll confidently explain the item you showed them. Of course, the playmate has no idea what’s in the box. Young children are unable to comprehend that the beliefs that they have aren’t the beliefs that everyone has. They believe the illusion that their brains are creating. (See Incognito for more.) However, somewhere around three years old, if you revisit this test, you’ll find the child identifies that their perceptions and those of their playmate are not the same.

There's a transition from the belief that everything is the same for everyone to a more nuanced understanding that your beliefs and others' are different. However, differentiating between you having a belief and someone else not having it – or having a different one – doesn't help you understand their desires.

Reading Desire

Understanding different desires is something different entirely. It’s one thing to understand that someone else doesn’t know what’s in a box but something entirely different to understand that not everyone loves brussels sprouts. Young children tilt their heads like a confused puppy when you tell them that you don’t desire something that they do.

Soon after they’re able to accept the principle that you don’t have the same desires they have, they start to try to figure out what your desires are. They begin the process of looking for markers in behavior that either confirm or disconfirm that your desires match theirs. They look for whether you take the brussels sprouts from the buffet.

Children infer desire from behavior – or the lack of it – in more or less the same way adults assess others' desires. The models that we have in our heads and the number of markers that we're able to use expand, but, fundamentally, it's the same process. Where we more often get off track is in reading intentions.

Reading Intention

"Fundamental attribution error" is the name given to our tendency to explain other people's behavior by their character and intentions rather than their circumstances; Kahneman covers it in Thinking, Fast and Slow. It's our tendency to leap to conclusions – to make the wrong leap about what other people were intending.

When it comes to leaping, Chris Argyris has a ladder. His Ladder of Inference describes how we make assumptions and draw conclusions about other people and what is going on inside of them. Most of the time, when we talk about the Ladder of Inference, we're talking about the problems that it causes. (See Choice Theory.) We're talking about where it misses the mark. However, the inference machinery we use to read someone else's intentions is a marvelous piece of mental equipment.

Consider Gary Klein’s work in Sources of Power and Seeing What Others Don’t, which lay out the mental models we use to simulate the world around us. Reading intentions means that we model the mental processing of other people. This sort of box within a box has been mastered by virtualization software, but wasn’t popular for the first several decades of computer technology. We know that a mind can simulate the processing of another mind – but how?

What’s the Harm in a Thought?

Research has shown that thoughts can be harmful. They can lead to stress responses and real physical harm. (See Why Zebras Don't Get Ulcers.) A thought or belief can even rewrite history. People struggle with the curse of knowledge (see The Art of Explanation for more). We simply don't see how people couldn't realize that the round wheel is best. Our awareness of the current state shapes our perception.

Andrew Carnegie is perhaps my best example of a man who understood the power of a thought. In his time, he was called a “robber baron.” He was reviled. However, through his gift of public libraries, he shaped people’s perceptions of him – for generations. The thought that he is a benefactor of public knowledge pushes out the incompatible robber baron thought.

Thoughts are substantially more powerful than we give them credit for. They can change our biology. They can change our world, and, ultimately, they can change the world. Incompatible thoughts wage a war inside our heads, duking it out to see which one gets to survive. Genius, it has been said, is the ability to hold two incompatible thoughts inside our head at the same time.

The harm in a thought can be how it pushes out other thoughts – necessary thoughts. (See Beyond Boundaries for more on confirmation bias.)

Possible World Box – The Heart of Simulation

At the heart of our ability to project the future and to simulate situations is the possible world box. In this box, the bounds of our perception of reality are weakened. We copy our thoughts and expectations into this box from our belief box – but inside the possible world box, anything is possible. We can overwrite our beliefs. We can change our world view – at least for a moment. The possible world box is where we simulate. We simulate the future. We simulate other people and other situations.

Without the possible world box (or some equivalent), we would not be able to simulate at all. We’d be limited to the experiences that are directly within our perception. With a possible world box, we can create flights of fancy and any sort of world or simulation we might like – including what might be going on inside another human.

It’s this ability to simulate that is unique to our human existence, and it’s one fraught with problems. Many of these problems revolve around the challenge of cognitive quarantine.

Cognitive Quarantine

It's great that we have a possible world to run simulations in, but what do we do with the results of those simulations? If we had complete cognitive quarantine, there would be no way to migrate the output of our simulations into our belief system. So, we clearly need to take things from the possible world box – the output of the simulations we run there – into our beliefs. This is where we get into trouble.

Suddenly, it’s possible to get things from the possible world box – which aren’t constrained by reality – into our belief system. The mental mechanisms that regulate this process are far from perfect. In fact, we know through research that the introduction of information into a simulation can bleed into beliefs about the real world.

I wonder whether schizophrenia as we understand it is really a failure of the mechanisms designed to limit, regulate, and control the flow of information out of the possible world box in such a way as the possible world leaks into our real world and our real beliefs. Once that happens, it becomes fascinatingly hard to loosen the belief. (See Change or Die for more.)

Displacing the False Belief

Let's say you are placed in a situation of seeing a set of suicide notes – some fake and some real. You're asked to sort them into fake and real. You're told that your sorting is very good – much better than chance. Then later, you're told that the feedback was wrong. In truth, all the suicide notes were fake. The whole experiment wasn't about sorting suicide notes. It was about the persistence of beliefs. Then you're asked whether you're good at sorting suicide notes between the fake and the real.

Your perception will have changed. You'll believe that you're good (or better than average) at sorting real suicide notes from the fake. You've been told, by the same researcher who told you that you were good, that they were lying. You should – if you're completely rational – not hold any belief about your ability to sort suicide notes. However, the research shows that you will. You'll hang on to the lingering belief that you are good at this sorting.

In this very controlled experiment, you received direct evidence that you are not good at the task, and yet the belief persists. What does this say for the beliefs that leak out of the possible world box? How difficult would it be to displace a bad belief if you don't have direct, disconfirming evidence? Would it even be possible? In many cases, it isn't.

Inference Mechanisms

We’ve got finely-tuned inference engines. We ascribe our thoughts to others. In fact, this is something all young children can do. Shortly after they discover object permanence – that is, that an object doesn’t disappear when it moves out of their field of view – they start to expect that what they know is something everyone knows. If they see an object move behind another until it’s hidden, they expect that other children who didn’t see the object get hidden will know where it is. They infer that, because they know it, then everyone should know it.

As we get older, our inferences get more complex. We move from being able to identify the number missing in a series to being able to infer what someone else believes based on their behaviors. We test possible beliefs in the possible world box until we can find a belief set that could create the behaviors we’re observing.

Behavior Prediction

In many ways, our mental systems evolved in ways that allow us to predict the behaviors of others. That is, we want to know what to expect out of others. We predict behaviors, because, as social animals, we know that our safety is dependent upon how others behave.

Our behavior prediction engine is fed information through play and through our experiences. (See Play for more on the role of play.) As we amass more data, we expect that our ability to predict others’ behaviors improves. We do this because, by predicting the behavior of others, we can learn to work together and stay safe.

Failure of Prediction

Though we’re good at predicting other people’s behavior, our failure to predict their behavior is inevitable.

The more certain we are of how we believe someone will behave, the more hurt and betrayed we feel when they don't meet our expectations. (See Trust => Vulnerability => Intimacy for more clarity.) In evolutionary history, we needed to know how someone would behave, because it quite literally could mean the difference between life and death.

Kurt Lewin proposed a simple model for behavior prediction. Behavior, he said, is a function of both person and environment. So, it's not possible to predict behavior without considering both the person and the environment. Folks like Steven Reiss have worked to characterize the personal factor of behavior by isolating and identifying the 16 basic motivators – sort of like a periodic table of elements for motivation. (See The Normal Personality.) Others have proposed other ways of categorizing people to make the explicit prediction of behaviors easier. (You can find more in The Cult of Personality Testing.)

Despite all of these tools and models, we still fail to predict others' behavior. Julius Caesar asking "Et tu, Brute?" is perhaps the most historic example of a betrayal that cost a life. The good news is that not every failure to predict is a life-or-death situation. Sometimes it's trivial.

Pretense – Something and Not at the Same Time

Have you ever picked up a banana, held it to your head, and started to talk into it like a phone? Or have you seen a child pick up a block and talk into it like a cell phone? These are examples of pretense. It’s the basic forerunner of our ability to simulate the mind of others and the start of the possible world box. We can simultaneously accept that what we’re “talking on” can’t make calls – and at the same time pretend to be doing just that.

The interesting part of this is that we can imbue the attributes of the target item, the phone, to the source item, the banana, while at the same time recognizing that the banana is still a banana. This bit of cognitive distinction is why the possible world box makes so much sense. We can pin our beliefs into a possible world and recognize our beliefs that are “real.”

So, we start by pretending one thing is another. And we end up with a way that we can read other people’s minds. It may not be the stuff of comic books. However, Mindreading is pretty cool – and something worth learning more about.

Complicated Made Simple – Projector Brightness

A recurring theme throughout my career has been converting the complicated into something simple. Most of the time, I do this through education. In preparing a session for some meeting planners, I realized that they face some complicated topics of their own – including how bright the projector should be and how large the screen needs to be. Most people – including AV companies – just guess at how bright the projector needs to be and how large the screen needs to be; however, it really boils down to some well-established math.

I dusted off an old spreadsheet I had used to do some calculations for myself for a projector for my office and reworked the calculations to work with conference settings. I’m going to make the spreadsheet available for anyone to use (see the end of this post). With a handful of simple answers, you can get a good understanding of how bright your projector needs to be.

Size Matters

Obviously, size matters, so how do you know how big a screen you need? The answer is based on the visual acuity of the audience and the size of the room. Most people have vision that’s corrected to 20/20. That’s assessed with a Snellen Eye Chart. At the standard distance of 10′, a person with 20/20 vision can identify a letter that’s 3/16″ high. However, as anyone who has done this knows, that’s very hard. If you assume that 20/40 is very readable, you have to have letters that are about 5/16″ high. If we keep this same relative size at any distance, we maintain readability. Thus, if we are 20′ from the screen, we need a letter that’s 10/16″ (or 5/8″) to get the same visibility. (End size = 5/16 / 10 * distance in feet.)
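
To make the arithmetic concrete, here's a minimal sketch of that readability rule in Python, using the 5/16-inch-at-10-feet baseline described above (the code is my illustration, not part of the original spreadsheet):

```python
def min_letter_height_inches(distance_ft):
    """Minimum on-screen letter height (in inches) for a viewer distance_ft from the screen."""
    return (5 / 16) / 10 * distance_ft

# A viewer 20 feet away needs letters about 5/8" tall, matching the example above.
print(min_letter_height_inches(20))  # 0.625
```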

From there, you need to figure out how big the letters are going to be natively. That's typically measured in points. In presentations, most letters are 24 points or larger. For situations where documents or spreadsheets are going to be shown, you can assume that the font size will be around 12 points. There are 72 points to an inch, and computers typically render 92 dots (or pixels) per inch, so we can convert a font size into the number of pixels at which it will be rendered. (Font size in points / 72 * 92 = font size in pixels.) Dividing the required letter height (the end size from above) by the font's height in pixels gives us the physical size of one pixel; multiplying that by the number of vertical pixels on the display gives us the screen height. Knowing the height and the ratio of height to width (9:16) allows us to calculate the width and the diagonal size of the screen.
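
Putting those steps together, here's a rough sketch of the screen-size calculation. The 92 pixels-per-inch figure comes from the paragraph above; the 1,080 vertical pixels (a 16:9, 1080-line image) is an assumption on my part, so adjust it for your projector's native resolution:

```python
import math

PIXELS_PER_INCH = 92      # rendering density used in the article
VERTICAL_PIXELS = 1080    # assumed 16:9, 1080-line image -- adjust for your projector

def screen_size_inches(font_points, viewer_distance_ft):
    """Return (height, width, diagonal) in inches so the farthest viewer can read the given font."""
    letter_height = (5 / 16) / 10 * viewer_distance_ft   # readability rule from above
    font_pixels = font_points / 72 * PIXELS_PER_INCH      # how many pixels tall the font renders
    inches_per_pixel = letter_height / font_pixels        # physical size each pixel must be
    height = inches_per_pixel * VERTICAL_PIXELS
    width = height * 16 / 9
    return height, width, math.hypot(height, width)

# 12-point text with the farthest viewer 18 feet away needs roughly an 81" diagonal.
h, w, d = screen_size_inches(12, 18)
print(round(h, 1), round(w, 1), round(d))  # ~39.6 ~70.4 81
```

Those numbers line up with the first row of the quick chart below.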

With the screen size worked out, we have to work on brightness.

Footlamberts and Lumens

The motion picture industry has historically used footlamberts as a way to measure the light reflected from the screen. The standard for a darkened theatre is 16 footlamberts – that is, every part of the screen reflects at least that much light. For a business setting with the lights on, the reasonable range is somewhere between 40 and 50 footlamberts. If we can get to 16 footlamberts in a dark room or near 50 in a bright room, the audience will be able to see well.

Because footlamberts are a reflected light, we need to consider how reflective the surface is. A standard whiteboard has a reflectivity of 1. Better screen materials may be able to achieve reflectivity of as much as 1.4 – which means the projector needs to output less light to get the same number of footlamberts.

With the size of the screen and the screen reflectivity, we can calculate the number of lumens we need from the projector. That’s a lot of math – but the spreadsheet at the end of this post has all the math handled for you.
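
For the brightness side, here's a short sketch of the same math. It assumes the standard relationship between footlamberts, screen area, and screen gain (footlamberts = lumens * gain / screen area in square feet), solved for the lumens the projector has to put out; the 1.2 gain mirrors the screen assumption used in the chart below:

```python
def required_lumens(height_in, width_in, target_footlamberts, gain=1.2):
    """Projector lumens needed to hit the target footlamberts on a screen of the given size and gain."""
    area_sq_ft = (height_in / 12) * (width_in / 12)   # screen area in square feet
    return target_footlamberts * area_sq_ft / gain

# The ~40" x ~70" screen from the sketch above, lit to 50 footlamberts on a 1.2-gain screen,
# works out to roughly 807 lumens.
print(round(required_lumens(39.6, 70.4, 50)))  # ~807
```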

Quick Chart

If you're looking for something even simpler and more straightforward, look at this quick chart with some basic room configurations. The assumptions for the rooms are that there are two feet from the front of the room to the screen surface, five feet from the screen to the audience, and five feet at the back of the room. Finally, we assume a screen reflectivity of 1.2 (so you have a good screen).

| Room length in feet (max audience-to-screen) | Minimum font size (points) | Diagonal screen size (inches) | Footlamberts | Lumens |
| 30 (18) | 12 | 81 | 50 | 807 |
| 30 (18) | 24 | 40 | 50 | 202 |
| 40 (28) | 12 | 126 | 50 | 1954 |
| 50 (38) | 12 | 171 | 50 | 3599 |
| 50 (38) | 12 | 171 | 40 | 2879 |
| 80 (68) | 12 | 305 | 50 | 11524 |
| 80 (68) | 24 | 153 | 50 | 2881 |

If you want to do the numbers yourself and see what you get, you can use this spreadsheet to test configurations yourself.

Book Review-What Got You Here Won’t Get You There: How Successful People Become Even More Successful

When multiple arrows point to the same place, you've got to go there. What Got You Here Won't Get You There is one of those places. The books Who: The A Method for Hiring and The Power of the Other both refer to Marshall Goldsmith's work. It's a powerful reminder that we need to continue to grow and improve no matter how successful we are – that is, if we want to continue the upward spiral of success.

Four Success Beliefs

Goldsmith believes in the strong character of successful people. While not every successful person could be described as having an “unerring sense of direction,” most successful people know where they’re going – at least most of the time. They have a set of beliefs that carry them forward. The four beliefs are summarized as:

  1. I have succeeded
  2. I can succeed
  3. I will succeed
  4. I choose to succeed

At some level, these beliefs are true, but they are also delusions. For all of us, there have been failures as well as successes. Some challenges are more than we're capable of – at the moment. (See Peak and The Rise of Superman for self-improvement.) Some situations are unwinnable. Finally, willpower has its limits. (See Willpower, Grit, and The Happiness Hypothesis for the limits of willpower.)

Kurt Lewin said that behavior is a function of both person and environment. (See Helping Children Succeed for more.) The Halo Effect reminds us that we live in a probabilistic world, not one of certainty. We can’t say that we will succeed. There is no certainty in the world we live in, particularly as we consider complex goals and objectives.

The “internal locus of control” that successful people believe in may be a fallacy, but it is helpful. (You don’t want to recommend that they read Mastering Logical Fallacies too deeply.) The belief system gives them the strength to keep on with the climb. (See Grit: How to Keep Going When You Want to Give Up and Grit: The Power of Passion and Perseverance for more on what it takes to keep going.)

Top 20

Beyond the limitations of the beliefs that successful people hold, there are things that they do to hold themselves back. These are the brakes being applied while they’re trying to stomp on the gas. As you get into greater leadership and management roles, your technical skills matter less, and the skills that you have as a leader and manager can either make people more effective – or you can minimize people. (See Multipliers for more on maximizing people’s output.) The top 20 of Goldsmith’s 21 appear below (quoted):

  1. Winning too much: The need to win at all costs and in all situations—when it matters, when it doesn’t, and when it’s totally beside the point.
  2. Adding too much value: The overwhelming desire to add our two cents to every discussion.
  3. Passing judgment: The need to rate others and impose our standards on them.
  4. Making destructive comments: The needless sarcasms and cutting remarks that we think make us sound sharp and witty.
  5. Starting with “No,” “But,” or “However”: The overuse of these negative qualifiers which secretly say to everyone, “I’m right. You’re wrong.”
  6. Telling the world how smart we are: The need to show people we’re smarter than they think we are.
  7. Speaking when angry: Using emotional volatility as a management tool.
  8. Negativity, or “Let me explain why that won’t work”: The need to share our negative thoughts, even when we weren’t asked.
  9. Withholding information: The refusal to share information in order to maintain an advantage over others.
  10. Failing to give proper recognition: The inability to praise and reward.
  11. Claiming credit that we don’t deserve: The most annoying way to overestimate our contribution to any success.
  12. Making excuses: The need to reposition our annoying behavior as a permanent fixture so people excuse us for it.
  13. Clinging to the past: The need to deflect blame away from ourselves and onto events and people from our past; a subset of blaming everyone else.
  14. Playing favorites: Failing to see that we are treating someone unfairly.
  15. Refusing to express regret: The inability to take responsibility for our actions, admit we’re wrong, or recognize how our actions affect others.
  16. Not listening: The most passive-aggressive form of disrespect for colleagues.
  17. Failing to express gratitude: The most basic form of bad manners.
  18. Punishing the messenger: The misguided need to attack the innocent who are usually only trying to help us.
  19. Passing the buck: The need to blame everyone but ourselves.
  20. An excessive need to be “me”: Exalting our faults as virtues simply because they’re who we are.

Goal Obsession

The 21st brake on success isn't always a bad thing. The problem is when it's out of balance. A little bit of healthy goal obsession is necessary – like the mistaken beliefs about success – to be able to sustain the fight against the onslaught of storms. Goal obsession becomes a problem when it causes people to forget the ultimate vision for the sake of a minor point that doesn't matter.

Goal obsession is getting the blinders on and forgetting that one small goal is nearly always a means to the end vision that you want. Sometimes making something work isn’t the right answer, because the cost is too high.

Sharing and Withholding

Looking back on his list, Goldsmith calls out that half of the items are based on managing the balance between sharing information and withholding information. Too far on one side, and you share too much; you don't allow space for other people to share. Too far on the other side, and you withhold too much; you don't support others in the ways that you can. This is Goldsmith's assessment, not mine.

I’ve certainly seen the effects of withholding in very personal ways. In Intimacy Anorexia, we found that withholding is the primary weapon of the intimacy anorexic. I’ve also been repulsed by people who suck all the oxygen out of the room with their incessant talking about me, me, me. However, I would say that his list is more about being comfortable with oneself than it is with something as simple as the degree to which you communicate. After all, the intimacy anorexic is withholding communication, because they don’t want people to know who they are.

Comfortable in Your Own Skin

Learning to be yourself should be easy. It should be natural. However, many people aren't clear about who they want to be and how they will define themselves. As a result, it's hard for them to be themselves. You can't behave in a consistent way if you don't know the ways that you want to behave, any more than you can hit a target that you're not aiming at.

Clarity on the kind of person that we want to be and making our wants and desires subservient to the goal of the person we want to be is difficult. It’s a challenge to stay focused on the end goal, on the character that we want to develop in ourselves, but it’s also the most rewarding.

Books like The Anatomy of Peace speak of the boxes that we get in where we are threatened or wounded, and how it causes us to behave in ways that are counter to the ways we want to behave. Folks like the Dalai Lama and Paul Ekman have conversations about eastern and western philosophies about Emotional Awareness. Tools like the Enneagram are designed to reveal our tendencies while also exposing the awareness that we can be more or less functional within our natural tendencies. (See Personality Types: Using the Enneagram for Self-Discovery for more on the Enneagram.)

Becoming the best you that you can be – being comfortable in your own skin – is a lifelong goal and its own reward. There’s a peace about knowing yourself and appreciating yourself for both the good and the bad.

Integrated Self Image

Understanding that you are both good and bad is important. (For more on the good/bad dichotomy see The Lucifer Effect – Normal Evil.) More important than recognizing the good and the bad within yourself is the need to accept that this is one person, not two. You are both the good and the bad. In ancient Egypt, they used to believe that when you died, your heart would be weighed against a feather. Only those whose heart was as light as a feather would pass into the afterlife.

For the Egyptians, it wasn’t about the good or the bad that you did. It was how heavy your heart was that prevented the move into eternity. Unfortunately, too many people carry a heavy heart, which is burdened by the conflict between seeing themselves as either all good or all bad. (See Rising Strong for more on having an integrated self-image.)

Misplaced Blame

Have you ever been in a public place where smoking was prohibited, and yet someone was smoking nearby? Have you ever gently reminded the person that their smoking was a violation of the rules (or the law) – only to be rebuffed as if you were the one doing something wrong? That’s misplaced blame. You’re pointing out that someone is not behaving according to socially-acceptable norms, and suddenly their focus is on you.

We’ve all experienced some degree of this in our relationships. Psychologists call it “projection” or “misdirection.” (See Changes that Heal for some of the mechanisms that people use to protect themselves.) The problem with misplaced blame is that we’re not taking responsibility for ourselves – and that limits our ability to succeed. We can’t resolve our issues if we’re unable to accept or see them.

Feedback

Feedback is perhaps the most effective way for us to see what our limitations and challenges are. Feedback can be positive – allowing us to extend further in a direction – or negative – encouraging us to change our course. Feedback isn't something that we – generally – want to hear, and it's not something that other people generally want to give either.

Providing feedback is risky. It’s natural for people to view someone giving negative feedback more negatively than they might without any feedback. Even good leaders struggle to not hold negative feedback against someone. That’s one of the reasons why so much effort is put into creating safe, 360 evaluations for leaders. The people providing feedback need to know that they’ll be protected through commitment of the organization or through anonymity to be able to provide honest and forthright feedback.

Apologies

One of the things that too few people are good at is apologies. Goldsmith advises to get in and get out with apologies, as the more one talks when making an apology, the more the temptation is to justify, defend, or support the action that one is apologizing for.

I tend to separate out one aspect of apologies that troubles most people: I can be sorry for the impact on someone without necessarily accepting that I could have easily or possibly foreseen the outcome. That is, I don't necessarily accept responsibility with my apology. I simply connect with the other person and acknowledge their pain or loss.

On the other hand, there are times when an apology that means more than “I’m sorry” is necessary. Sometimes it’s necessary to specifically outline the steps that you are going to perform to prevent further recurrences of the situation. This is particularly necessary when you’re responsible but also when the same problem tends to happen repeatedly.

Singularly Special

Masters of relationships have another way of developing and maintaining their relationships. They have the gift of making the person that they’re speaking with feel singularly special. When you’re talking to them, their grocery lists, unresolved business issues, and distractions melt away, and their entire focus is on you. Bill Clinton is described as having this gift – that when you’re talking with him, it’s like nothing else matters. Whether you are, at that moment, singularly special and whether you like his politics or not is immaterial. There’s something special about being the complete focus of another human being.

Learning to Ask

Peter Drucker said, "The leader of the future will be a person who knows how to ask." His statement is supported by research – though he couldn't have known that at the time he made the comment. The efficacy of techniques like Motivational Interviewing is based on knowing how to ask questions in a way that helps people become more open. Have you ever been asked a question, and the instant you heard it you realized that the question was insightful enough to propel the conversation forward to a better understanding? That's the art of learning how to ask the right questions.

How to Handle Me

What if people came with instruction manuals? What if everyone had a set of care instructions attached to their ears instead of earrings? What would it be like to know where they’re likely to be sensitive? While I doubt that we’ll start wearing care instructions on our ears, masters of relationships have learned to help others understand how they can bring out the best in the master.

Masters are self-aware enough to know where they're going to struggle, and they know that, by coaching their peers and their subordinates on how best to handle them, they'll be better off as a team. In my relationship with our office manager, I had to share that I struggle when I feel like we're not making progress. As a result, she adapted to a communication style that helps me see where we're making progress – and that highlights where we're not and why.

The Lost Causes

Not everyone is someone that you can form a healthy and productive relationship with. Some people just can’t be in a relationship (personal or professional) with another person in a healthy way. (See Intimacy Anorexia for more.) However, even when dealing with people like this, you may benefit from What Got You Here Won’t Get You There.

Book Review-The Lucifer Effect: Understanding How Good People Turn Evil – Normal Evil

The Lucifer Effect: Understanding How Good People Turn Evil left me with a terrifying thought. What if we are all evil? What if we don’t turn people evil? What if, instead, we’re all evil and only briefly rise to be good?

This is the third and final post on The Lucifer Effect. The first post was The Devil Made Me Do It, and the second was Constructing a Prison.

The Evidence

Let’s look at the evidence. The kids in the Stanford Prison Experiment (SPE) were normal, healthy, moderately affluent kids. Milgram’s shock experiments were done with a random cross section of people. (See Mistakes Were Made (But Not by Me) for more on Milgram.) Asch’s perception experiments were likewise randomized. (See Unthink for more on Asch.) Despite study controls for normalcy, we found the capacity to warp our perceptions and cause harm with relatively little brainwashing.

If we take a step back from research, and we instead review actual events and the analysis of the aftermath, we see that here, too, we find normal. Adolf Eichmann was responsible for managing the deportation of Jews to concentration camps. His trial in 1961 was a spectacle. It was set on the world stage. Eichmann considered himself not guilty of the atrocities that he had committed. One would think that he was delusional. He clearly had worked diligently to lead so many Jews to their death, and yet he claimed that he had no special role. He was doing what was required of him.

“Delusional” is a word that could be used to describe the situation. The problem is that the psychology of Eichmann wasn’t the delusional sort. The problem was, in fact, that his psychological workup found a normal man. It didn’t find a monster. The problem for all of us is that he “wasn’t a bad guy.” He was just a guy swept up into a very bad system.

The Implication

The line between good and evil seems bright and hard to miss. The line between the good guys and the bad guys was as easy to tell as the white hats the good guys wore and the black hats the bad guys wore in old movies. It should be simple. Right is right. Wrong is wrong. The problem is that it's not that simple. The problem is that, time after time, we find that good people and bad people are most often separated by time, not distance. The problem is that the same person – each of us – is both good and bad. We are neither saint nor sinner; we're both.

Bandura in Moral Disengagement explains how the high and mighty fall to the depths of depravity and harm to one another. Mistakes Were Made (But Not by Me) explains how we can acknowledge our faults and assign the blame to others. Change or Die speaks about the discrepancy between what we know we need to change and why we don’t – the same schism between truth and perception that fuels our inhumane treatment of others.

So What Now?

We’re not a lost cause. We’re not relegated to the immoral behavior and split identity of Dr. Jekyll and Mr. Hyde. On the one hand, we cannot deny our nature. We cannot deny that we have the capacity for both good and evil. However, on the other hand, we must accept that we need to cultivate the kind of mental states that make us more resistant to the fall into the abyss of evil. We can cultivate compassion (see The Dalai Lama’s Big Book of Happiness for more). The fall into evil is precipitated by the dehumanization of others. Compassion seeks to anchor all people as innately human and always worthy of our concern.

We can guard against our beliefs that the ends justify the means – that our actions are noble, honorable, and right even if people are harmed or killed by them. We must accept that, whatever good may come in the end, the harm comes now and falls disproportionately.

Getting Caught Up

We all have the capacity for evil within us. Our grander notions of ourselves are able to keep this evil out of our interactions with others most of the time – some of us more than others. We must accept that, in some circumstances, our natural fight against the evil within us becomes weary and tenuous. We must keep from getting caught up in The Lucifer Effect.

Book Review-The Lucifer Effect: Understanding How Good People Turn Evil – Prison Construction

It seems as if the construction of prisons is all about the bricks, mortar, and iron bars. On the surface, constructing a prison is about preventing breakouts. However, The Lucifer Effect: Understanding How Good People Turn Evil explains that the real construction of the prison isn't in the walls and bars. The real construction is in the beliefs.

This is the second in a series of three posts about The Lucifer Effect. The first was The Devil Made Me Do It, and the final post in the series will be “Normal Evil“.

Alcatraz

"The Rock." It's a short name for a tiny island in the middle of San Francisco Bay that once served as a maximum-security prison. Even if prisoners escaped their cells, the water currents and the relative distance to the shore meant near-certain death to anyone willing to attempt the swim. It's not that people didn't try to escape; it's that the bodies of those who made it into the water were never found. As a result, the record of Alcatraz as an inescapable prison remains.

Alcatraz was a formidable prison. The “Battle for Alcatraz” attempted breakout, however, proved that, even if it was not escapable, it was possible for the prisoners to overpower the guards – at least temporarily. The real walls in the prison weren’t the ones made of concrete. The real walls were the ones that were created in the prisoner’s minds. The most troublesome and notorious prisoners called Alcatraz their temporary home and ultimately succumbed to the power of The Rock, a power that wasn’t expressed in its concrete structures, but instead in its relational power structures.

Power Structures

Lord John Dalberg-Acton said, “Absolute power corrupts absolutely.” It’s the structure of power that makes a prison run. If there are too few controls, limits, expectations, and monitoring, the power of the guards spirals up and the power of the prisoners down. The result is the temporary corruption of the guards into tyrannical monsters.

The Stanford Prison Experiment (SPE), as it came to be known, showed how minimal oversight and poor limits on guard behavior caused the guards to emotionally torment the prisoners. At Abu Ghraib, the conditions weren't simulated, and the results were real. Much to the military's disgrace, the conditions established at this and other prisons had guards doing unthinkable things to prisoners.

When the protections of the Geneva Conventions were removed by changing the status of the prisoners from prisoners of war to unlawful combatants, the safety valves were shut off, and the power of the guards was allowed to escalate to impossible levels. Add to this mixture of circumstances poor supervision and a severe lack of resources, and the power structure became unsustainably out of balance.

Even good men and women who had faithfully served their country began to disengage their morality (see Moral Disengagement) and do unspeakable things. Lord Acton’s statement had become all too real. These guards had been corrupted by the power that they held over other people’s lives.

Not every guard changes at the same rate. Not everyone’s moral beliefs and boundaries are bent, moved, or disengaged so quickly – but, ultimately, it seems that everyone’s beliefs are “adjusted.” Most frequently, the adjustments are in a failure to speak up. They’re not acts of commission, but are instead acts of omission.

Acts of Omission

To understand the power of the group and how hard it is to speak up for what’s right, we have to step back in time to 1955 and the work of Solomon Asch at Swarthmore College. Imagine you’ve been recruited with other volunteers to study perception. The challenge is easy. You’re there to compare the length of lines. One reference line and three possible lines, one of which matches the length of the first. You might expect this to be the sort of visual illusion test that is designed to test how we process visual information and some of the hidden flaws. (See Incognito for more.)

However, of the eight participants in each experiment, only you were a volunteer. The other seven people were confederates of Asch. They were there to see how you could be influenced by your desire for conformity. It turns out that, on a test that expected a very low error rate, 75% of the subjects gave at least one incorrect response when pressured by incorrect answers by the other confederates.

Instead of speaking up and giving the correct answer – one that was easy to identify – they gave an incorrect answer. The repetitions of the experiment, with the aid of fMRI machines, indicate that the areas of activation aren’t about conflict but are in areas of visual perception. This says that, literally, the person’s perception of the line was changed.

How can you express your true perceptions when you no longer have true perceptions – your perceptions are literally changed?

On Your Death Bed

If you listen carefully to the regrets of the dying, you'll find, as Bronnie Ware did, that number three on the list is "I wish I'd had the courage to express my feelings." She records this in The Top Five Regrets of the Dying. Everyone wants to know what they'll regret most. Perhaps more interesting is that another variation of the regret of omission is number one on the list – "I wish I'd had the courage to live a life true to myself, not the life others expected of me." That is, the dying regret that they couldn't be themselves – that they couldn't express themselves completely more often.

Private Prisons

Back in the SPE, even the most morally-strong failed to speak out against the abuses that were happening. The prisoner who was on a hunger strike couldn’t rally the support of the other prisoners. Part of that was due to a lack of communication and rapport building, but at least some of it was tied up in the power of conformity. The Hidden Brain relates the story of the Belle Isle bridge in Detroit, where in August 1995, a woman was brutally beaten while people all around did nothing.

Malcolm Gladwell relates the story of Kitty Genovese in The Tipping Point. Kitty was stabbed to death. Thirty-eight people ultimately admitted to hearing her screams, and exactly zero called the police.

The morally-conflicted guards disengaged, performed small acts of kindness towards the prisoners, but failed to elevate their concerns either by confronting the aggressors or reporting the concerns through the chain of command at the mock prison.

Prison Building 101

The great lesson from the SPE is that to build a prison you need no walls. You need no bars. You need only those capacities within the human mind to succumb to group pressure and the lack of initiative needed to stand up and fight for what is right. President Franklin Roosevelt said it best: “Men are not prisoners of fate, but only prisoners of their own mind.” Perhaps the real prison is doing nothing to test the walls in our mind. Perhaps doing nothing is The Lucifer Effect.

Article: The Actors in Training Development: Distribution Specialists

The dull murmur of instructors and students casually chatting before a class begins has been replaced by the hum of server fans and air conditioning in computer rooms. The instructor standing in front of a class has been replaced by the flow of packets from faraway servers to the student’s computer. It’s the distribution specialists who keep these connections flowing and the servers humming along.

Part of the TrainingIndustry.com series, the Actors in Training Development. Read more…

Book Review-The Lucifer Effect: Understanding How Good People Turn Evil – The Devil Made Me Do It

Young children can say things that adults could never get away with. Ask a child why they did something wrong, and one answer you may get is, “The devil made me do it.” The personification of evil, they proclaim, can override their free will and cause them to take one more cookie after they’ve been told no more. We laugh at this childish idea. Of course, no one can make you do something against your will. Hypnotists reportedly can’t get you to do something you don’t want to do. So how silly is it that “the devil made me do it?” The Lucifer Effect: Understanding How Good People Turn Evil tries to help us understand that this may not be as far-fetched as we’d like to believe, but the devil isn’t in the details – the devil is in the system.

This is the first of three posts about The Lucifer Effect. The second post will address constructing a prison, and the third about “normal evil“.

Studies at Stanford

The linchpin of The Lucifer Effect is the study that Philip Zimbardo ran at Stanford University. The study randomly assigned healthy students into either a guard or a prisoner role. The situation was structured to create anonymity, deindividuation, and dehumanization. The structure worked too well. The experiment had to be terminated prematurely, because it was spinning out of control, as the mock guards were abusing the mock prisoners. (As a sidebar, Zimbardo has done other things as well, but none more popular than this experiment. One of his other books, The Time Paradox, is one I read years ago.)

Somehow, the reality that this was an experiment was lost, and everyone descended into the belief that the prisoners and the guards were real. They started to act as though the situation wasn't contrived but was instead a result of misdeeds by the prisoners. The escape hatches (metaphorically speaking) out of the study were easy enough to see, but, strikingly, no one reached for them, because no one seemed to believe that they could use them.

In this experiment, the power of the situation – or the system – overwhelmed the good senses of the guards and the prisoners and plunged them both into behaviors that weren’t characteristically theirs. Instead, these students’ behavior was shaped, as Kurt Lewin would say, by their environment.

B=f(P,E)

Kurt Lewin was a German-American psychologist who contributed greatly to our ability to understand how people behave. His famous equation is B[ehavior] = f[unction](P[erson], E[nvironment]). Put simply, the behavior of anyone is a function of both their person – their unique traits and personality – and the environment that they’re placed in. The mathematics of the function itself is unknown. The complexity of the person and the complexity of their environment make it difficult to predict how someone will really behave. (See Leading Successful Change for more discussion on Lewin’s equation.)

Our legal system rests on the notion that people are responsible for their behaviors and that the environment has no impact on those behaviors. (See Incognito for more on this foundation.) However, Lewin says that this is incorrect. In Incognito, Eagleman explains how our will is far from free. Kahneman shares similar concerns in Thinking, Fast and Slow. He goes so far as to say that System 1 (automatic or emotional processing) lies to System 2 (higher-order reasoning). The result of that deception is that we’re not really in control; we just think we are.

This is the dual-control model that Haidt explains in The Happiness Hypothesis: the rational rider and the emotional elephant. Our laws are constructed for the rational rider without the awareness that the rider isn’t really in control. We make only occasional allowances in our system of government for temporary insanity. This is the slightest acknowledgement that there are times when our emotions get the better of us – and would get the better of anyone.

However, the other variable in the equation is more challenging. Defining the environment is about what courts see as extenuating circumstances – factors worth considering even if they don’t exonerate people. Zimbardo demonstrated the power of structural influences on the behavior of carefully screened, well-functioning students. However, he’s not alone in raising the alarm about how good people can be made to do bad things.

Shocking Authority

In the post-World War II world, it’s hard to understand how Adolf Hitler and the Nazi party could exterminate so many Jewish people. It’s unthinkable – yet it happened. The question was why people would agree to do such awful things. Stanley Milgram, himself a Jew, was curious about what people would do when they were told to. How quickly and easily would people bend to the power of authority? The experiment was simple in structure. Two volunteers would be selected and paired so that one was the teacher and the other was the learner. The teacher would supposedly be assessing whether electric shocks improved the learner’s retention.

At least it looked simple. The real assessment was whether normal people would be willing to administer what they believed to be life-threatening shocks to someone hidden from them. The learner was not a volunteer at all; the learner was a confederate (or agent) of Milgram’s. The teacher would feel a small sample shock, then the learner and the teacher would be separated and would communicate through audio only. The teacher would administer what they thought were progressively larger voltages to the learner – while he’d scream, express concerns about his heart, and generally indicate his displeasure.

In the presence of a researcher who pressured the teacher to press on, over 90% of people administered what they thought to be potentially lethal shocks to someone in another room. Of course, there were no shocks after the test shock the teacher received. However, the actual outcome of the research was that it was all too easy to get people to disengage their morals in the presence of a false authority. (See Mistakes Were Made for more on this terrifying research.)

Moral Disengagement

Bandura artfully explains the mechanisms that allow for Moral Disengagement. The tools of moral disengagement are the same tools that Zimbardo used to construct his mock prison experiment. The system setup for the Stanford Prison Experiment was designed – effectively – to disengage normal, healthy people’s moral safeguards. Freed of these bonds, they could do anything. The study design in effect created a bubble of reality, of society, of culture that was free to evolve separately from the “real world” outside the walls of the mock prison.

Bandura affirms that morality is relative to the environment that a person is in. In Paul Ekman’s autobiographical book Nonverbal Messages: Cracking the Code of My Life’s Pursuit, he shares how a chief’s statement that he would eat Ekman when Ekman died made Ekman a respected man. In that culture, the offer to eat a dead man conferred respect, while in most cultures the idea would be repulsive.

Perhaps the greatest surprise wasn’t that morality was relative to culture; it was the speed with which the prison’s culture evolved on its own. It took hours to start to form and days to take a firm hold. By the end of the first week, it was strong enough to have psychologically broken three prisoners and to have dulled Zimbardo’s own awareness of his responsibility to maintain controls.

The Devil is the System

Maybe the childish beliefs aren’t so strange. Maybe the devil really did make them do it. However, maybe it’s the systems that we put in place that are the real devil. Maybe it’s the system that is The Lucifer Effect.

Book Review-The Signal and the Noise: Why So Many Predictions Fail-but Some Don’t (Statistics and Models)

In the first part of this review, we spoke of how people make predictions all the time. The Signal and the Noise: Why So Many Predictions Fail-but Some Don’t has more to offer than generic input on predictions: it lays out a path through the models and statistics we can use to make better predictions.

All Models are Wrong but Some are Useful

Statistician George Box famously said, “All models are wrong, but some are useful.” The models that we use to process our world are inherently wrong. Every map inherently leaves out details that shouldn’t be important – but might be. Models let us simplify our world in ways that are useful and that our feeble brains can process.

Rules of thumb – or heuristics – allow a simple reduction of a complex problem or system. In this reduction, they are, as Box said, wrong. They do not and cannot account for everything. However, at the same time, they can be useful.

The balance between underfitting and overfitting data is in creating a model that’s more useful and less wrong.
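To make that balance concrete, here’s a minimal sketch – not from the book – that fits polynomials of different degrees to a noisy signal and scores them on held-out points. The signal, noise level, and degrees are assumptions chosen purely for illustration:

```python
# Minimal sketch (illustrative assumptions): under- vs. over-fitting.
# A straight line underfits the sine wave, a high-degree polynomial chases
# the noise, and a modest degree tends to be "less wrong" on held-out data.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)  # signal plus noise

x_train, y_train = x[::2], y[::2]       # fit on every other point
x_test, y_test = x[1::2], y[1::2]       # score on the points held out

for degree in (1, 3, 9):
    coefs = np.polyfit(x_train, y_train, degree)
    rmse = np.sqrt(np.mean((np.polyval(coefs, x_test) - y_test) ** 2))
    print(f"degree {degree}: held-out RMSE = {rmse:.3f}")
```

The useful model isn’t the one that fits the training points most closely; it’s the one that stays closest to the points it hasn’t seen.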

Quantifying Risks

Financial services, including investments and insurance, are tools that humans have designed to make our lives better. The question is: whose lives are made better? Insurance provides a service in a world where we’re disconnected and don’t have a community mentality of supporting each other. In Hutterite communities – which are part of the Anabaptist movement, like the Amish and Mennonites – all property is owned communally. In a large enough community, the loss of one barn or one building is absorbed by the community. However, that level of community support doesn’t exist in many places in the modern world.

Insurance provides an alternative relief for catastrophic losses. If you lose a house or a barn or something else of high value, insurance can provide a replacement. To do this, insurance providers must assess risk – that is, they must forecast it. The good news is that insurance providers can write many policies against an expected level of risk and then see how close their calculation came to the actual losses.

Starting with a break-even point, the insurance company can then add their desired profit. For the people and organizations that believe there’s good value in the insurance, their assessment of risk – or willingness to accept risk – is such that they’re willing to buy it. Given that people are more impacted by loss than by reward, it’s no wonder that insurance is a booming business. (See Thinking, Fast and Slow for more on the perceived impact of loss.)
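As a concrete illustration of that arithmetic, here’s a minimal sketch; the claim probability, claim size, and margin below are made-up numbers, not figures from the book:

```python
# Minimal sketch (made-up numbers): the break-even premium is the expected
# loss; the carrier then adds a loading for expenses and desired profit.
claim_probability = 0.002   # assumed: 1-in-500 chance of a total loss per year
claim_size = 250_000        # assumed: replacement cost of the insured property
margin = 0.25               # assumed: loading for expenses and profit

break_even_premium = claim_probability * claim_size   # expected loss = $500
quoted_premium = break_even_premium * (1 + margin)    # $625

print(f"Break-even premium: ${break_even_premium:,.2f}")
print(f"Quoted premium:     ${quoted_premium:,.2f}")
```

A buyer who dislikes the small chance of a $250,000 loss more than the certain $625 cost takes the deal – which is exactly the loss-aversion point above.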

The focus then becomes the insurance company’s ability to quantify its risk. The more accurately it can do this – while taking reasonable returns – the more policies it can sell and the more money it can make. Risk, however, is difficult to quantify, ignoring for the moment black swan events (see The Black Swan for more). You must first separate the signal from the noise. You must be able to tell what the underlying rate of naturally-occurring events is and which events are just normal random deviations from that pattern.

Next, the distribution of the randomness must be assessed. What’s the probability that the outcome will fall outside of the model? When referring to the stock markets, John Maynard Keynes said, “The market can stay irrational longer than you can stay solvent.” The same applies to insurance: you must be able to weather the impact of a major disaster and still stay solvent. Whether it’s a particularly difficult tornado season or a very bad placement of a hurricane, the perceived degree of randomness matters.
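To see why the tail of the distribution – not the average – is what threatens solvency, here’s a minimal Monte Carlo sketch; the portfolio size, reserves, and loss figures carry over the made-up numbers from the premium example above:

```python
# Minimal Monte Carlo sketch (assumed numbers): how often does a year of
# claims exceed the premiums collected plus the reserves on hand?
import numpy as np

rng = np.random.default_rng(1)
policies = 10_000
claim_probability = 0.002
claim_size = 250_000
premium = 625.0             # quoted premium from the sketch above
reserves = 2_000_000        # assumed capital cushion

years = 100_000             # simulated policy years
claims = rng.binomial(policies, claim_probability, size=years) * claim_size
collected = policies * premium
ruin_rate = np.mean(claims > collected + reserves)
print(f"Estimated probability of a ruinous year: {ruin_rate:.3%}")
```

The sketch assumes claims are independent. A badly placed hurricane correlates thousands of them at once, which fattens the tail well beyond what this simple model predicts.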

Then you have the black swan events, the events that you’ve never seen before – the events that some say could never have been predicted. However, many of the times the label has been applied, the risk was well known and discussed. A hurricane hitting New Orleans was predicted and even, at some level, prepared for – though admittedly not prepared for well enough. This is not a true black swan, a completely unknown and unpredictable event. It and other purported black swan events were, in fact, predictable from the data.

When predicting risks, you have the known risks and the unknown risks. The black swan idea focuses on the unknown risks, those for which there’s no data that can be used to predict the possibility. However, when we look closely, many of these risks are predictable – we just choose to ignore them, because they’re unpleasant. The known risks – or, more precisely, the knowable risks – are the ones that we accept as a part of the model. The real problem comes in when we believe we’ve got a risk covered, but, in reality, we’ve substantially misrepresented it.

Earthquakes and Terrorist Attacks

Insurance can cover the threat of earthquakes and the threat of terrorist attacks. However, how can we predict the frequency and severity of either? It turns out that both obey a similar pattern. Though most people are familiar with Charles Richter’s scale for earthquake intensity, few realize that it’s a logarithmic scale. That is, the difference between a 4.1 and a 5.1 earthquake isn’t 25% – it’s 10 times the ground motion and roughly 32 times the energy released. The difference between a magnitude 6.1 and an 8.1 earthquake is 100 times the ground motion and roughly 1,000 times the energy.

This simple base-10 relationship is an elegant way to describe releases of energy that can be dramatically different. What’s more striking is that there is a line that runs from the frequency of smaller earthquakes to larger ones on this scale, and it forecasts how many large earthquakes to expect in a given period of time. Of all the seismic energy released by earthquakes worldwide from 1906 to 2005, just three large earthquakes – the Chilean earthquake of 1960, the Alaskan earthquake of 1964, and the Great Sumatra earthquake of 2004 – accounted for almost half. They don’t happen frequently, but these earthquakes make sense when you extend the line from the frequency of smaller earthquakes.
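Here’s a minimal sketch of the two base-10 relationships behind that observation: energy grows as roughly 10^(1.5 × magnitude), and the Gutenberg-Richter line relates frequency to magnitude as log10(N) = a − b·M. The constants a and b below are illustrative assumptions, not values fitted in the book:

```python
# Minimal sketch (illustrative constants): earthquake energy scaling and the
# Gutenberg-Richter frequency-magnitude line.

def energy_ratio(m1: float, m2: float) -> float:
    """How many times more energy a magnitude-m2 quake releases than an m1."""
    return 10 ** (1.5 * (m2 - m1))

def quakes_per_year(magnitude: float, a: float = 8.0, b: float = 1.0) -> float:
    """Gutenberg-Richter: expected yearly count of quakes at or above magnitude."""
    return 10 ** (a - b * magnitude)

print(f"5.1 vs 4.1: about {energy_ratio(4.1, 5.1):.0f}x the energy")   # ~32x
print(f"8.1 vs 6.1: about {energy_ratio(6.1, 8.1):.0f}x the energy")   # ~1,000x
for m in (5, 6, 7, 8):
    print(f"magnitude {m}+: roughly {quakes_per_year(m):,.0f} per year")
```

The same frequency-severity shape is what the next paragraph applies to terrorist attacks.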

Strikingly, terrorist attacks follow the same power law: severity rises as frequency decreases. The 9/11 attacks are predictable within the larger framework of terrorism in general. There will be, from time to time, larger terrorist attacks. While the specific vector an attack will come from, or the specific fault line that will rupture, remains unknown, we know that there’s a decreasing frequency of large events.

Industrial and Computer Revolutions

If you were to map gross domestic product per person over the long history of civilization, it would creep up almost imperceptibly – right up to the industrial revolution, when something changed. Instead of all of us struggling to survive, we started to produce more value each year.

Suddenly, we could harness the power of steam and mechanization to improve our lives and the lives of those we care about. We were no longer reduced to living in one-room houses as large, extended families, and we gained some measure of escape from the constant threat of death. (See The Organized Mind for more on the changes in our living conditions.) Suddenly, we had margin in our lives to pursue further timesaving tools and techniques. We invested some of our spare capacity into making our future lives better – and it paid off.

Our ability to generate data increased as our prosperity did. We moved from practical, material advances to an advance in our ability to capture and process data with the computer revolution. After a brief dip in overall productivity, we started leveraging our new-found computer tools to create even more value.

Now the problem isn’t capturing data. The Internet of Things (IoT) threatens to create mountains of data. The problem isn’t processing capacity. Moore’s law suggests the processing capacity of an individual microchip doubles roughly every 18 months. While this pattern (it’s more of a pattern and less of a law) is not holding as neatly as it was, processing capacity far outstrips our capacity to leverage it. The problem isn’t data and processing. The problem is our ability to identify and create the right models to process the information with.

Peer Reviewed Paucity

The gold standard for a research article is a peer-reviewed journal. The idea is that if you can get your research published in a peer-reviewed journal, then it should be good. The idea is, however, false. John Ioannidis published a controversial article, “Why Most Published Research Findings Are False,” which shared how research articles are often wrong. This finding was confirmed by Bayer Laboratories when they discovered they could not replicate about two-thirds of the published findings they tested.

Speaking as someone who has published a peer-reviewed journal article, I can say the reviews are primarily for specificity and secondarily for clarity. The findings – unless you make an obvious statistical error – can’t be easily verified. By contrast, in the thousands of pages of technical editing I’ve done over the years, I could verify the author’s work and test their statements easily. For the most part, being a technical editor means verifying that what the author is saying isn’t false and making sure that the code they were writing would compile and run.

However, I did make a big error once. We were working on a book that was being converted from Visual Basic to Visual C++. The book was about developing in Visual Basic and how Visual Basic can be used with Office via Visual Basic for Applications. There was a section in the introduction where a search-and-replace done by the author claimed there was a Visual C++ for Applications. With nothing to verify against, and since the book was based on a beta of the software for which limited information was available, I let it go without a thought. The problem is that there is no Visual C++ for Applications. I should have caught it. I should have noticed that it didn’t make sense, but I didn’t.

Because validation wasn’t easy – I couldn’t just copy the code and run a program – I failed to validate the information. Peer-reviewed journals are much the same. It’s not easy to replicate experimental conditions, and even if you could, you’re likely not to get exactly the same results. So, consequently, reviewers don’t try to replicate the results, and that means we don’t really know whether the results can be replicated – particularly using the factors that the researcher specifies.

On Foxes and Hedgehogs

There’s a running debate on whether you should be a fox – that is, know a little about many things – or a hedgehog – that is, know a lot about one thing. Many books, like Peak, tell of the advantages of focused work on one thing. The Art of Learning follows this pattern in sharing Josh Waitzkin’s rise in both chess and martial arts. However, when we look at books on creativity and innovation like Creative Confidence, The Medici Effect, and The Innovator’s DNA, the answer is the opposite. You’re encouraged to take a bite out of life’s sampler platter – rather than roasting a whole cow.

When it comes to making predictions, foxes with their broad experiences have a definite advantage. They seem to be able to consider multiple approaches to the forecasting problem and look for challenges that the hedgehogs can’t see. I don’t believe that the ability to accurately forecast is a reason to choose one strategy over another – but it’s interesting. Foxes seem to be able to see the world more broadly than the hedgehogs.

The Danger of a Lack of Understanding

There’s plenty of blame to go around for the financial meltdown of 2008. There’s the enforcement of the Community Reinvestment Act (CRA) and the development of derivatives. (I covered correlation and causation and the impact on the meltdown in my review of The Halo Effect.) The problem that started with some bad home loans ended with bankruptcies as financial services firms created derivatives from the mortgages.

These complicated instruments were validated by ratings agencies but were sufficiently complex that many of the buyers didn’t understand what they were buying. This is always a bad sign. When you don’t understand what you’re buying, you end up relying on third parties to ensure that your purchase is a good one – and when they fail, the world comes falling down, with you left holding the bag.

The truth is that there is always risk in any prediction. Any attempt to see whether there’s going to be profit or loss in the future is necessarily filled with risk. We can’t believe anyone who says that there is no risk.

Bayes Theorem

I’m not a statistician. However, I can follow a simple, iterative formula to continue to refine my estimates. It’s Bayes’ theorem, and it can be simplified to:

Prior probability
Initial estimate of the probability: x

New event
Probability of the event if the hypothesis is true: y
Probability of the event if the hypothesis is false: z

Posterior probability
Revised estimate: xy / (xy + z(1 − x))

You can use the theorem over and over again as you get more evidence and information. Ultimately, it allows you to refine your estimates as you learn more. It is, however, important to consider the challenge of anchoring, as discussed in Thinking, Fast and Slow and How to Measure Anything.
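Here’s a minimal sketch of that simplified formula as a reusable update step; the prior and the evidence probabilities are made-up numbers for illustration:

```python
# Minimal sketch of the simplified Bayes update from the table above:
# posterior = xy / (xy + z(1 - x)), where x is the prior, y is the probability
# of the new event if the hypothesis is true, and z if it is false.

def bayes_update(prior: float, p_if_true: float, p_if_false: float) -> float:
    numerator = prior * p_if_true
    return numerator / (numerator + p_if_false * (1 - prior))

# Made-up evidence: start from a 4% prior and fold in three observations,
# each more likely if the hypothesis is true than if it is false.
estimate = 0.04
for p_if_true, p_if_false in [(0.7, 0.1), (0.6, 0.2), (0.8, 0.3)]:
    estimate = bayes_update(estimate, p_if_true, p_if_false)
    print(f"revised estimate: {estimate:.3f}")
```

Each pass uses the previous posterior as the new prior, which is the iterative refinement described above – and also why an anchored or badly chosen starting estimate can linger.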

The Numbers Do Not Speak for Themselves

Despite the popular saying, the numbers do not – and never do – speak for themselves. We’re required to apply meaning to the numbers and to speak for them. Whatever we do, however we react, we need to understand that it’s our insights that we’re applying to the data. If we apply our tools well, we’ll get valuable information. If we apply our tools poorly, we’ll get information without value. Perhaps if you have a chance to read it, you’ll be able to separate The Signal and the Noise.