Quick Tip: Microsoft Word: Keep With Next

Every once in a while, it seems like no matter what you do, formatting isn’t getting you the results you want in your document. Sure, breaks can separate one section from another – but how do you make lines stay together? The keep with next function is Word’s way of allowing you to keep the important things on the same page, even if you change formatting or add breaks, and I’ll show you how to find and use it in this quick tip.

See more quick tips here: Quick Tips for Microsoft Office Applications.

Book Review-Saving Our Sons: A New Path for Raising Healthy and Resilient Boys

Raising all kids today is hard. Since I’ve only attempted to raise kids in today’s environment, I can’t comment about whether it’s harder or easier than previous generations. I can say that the grandparents I talk to tell me that they believe it’s harder. It’s for that reason that every parenting program, resource, and book is a welcome tool to better understand, cope, and succeed in the critical task of parenting. Michael Gurian’s book Saving Our Sons: A New Path for Raising Healthy and Resilient Boys is one tool.

Gurian plays fast and loose with his conclusions, sometimes making leaps that would make Superman afraid of heights. Occasionally, he reaches conclusions that run directly counter to rigorous research. Despite these challenges, he does have important messages to send, messages that should help all parents understand more about the children they care so much for.

Dominant Gender Paradigm

The starting point for much of Gurian’s perspective is his belief that we, as a society, see men as the dominant gender. We believe that men unfairly earn more than women. We believe that women are denied opportunities that they should get because of their gender. These perspectives echo feelings of racial minorities in the past and today.

In my experience with friends and colleagues, I’m aware that they (both women and minorities) are discriminated against. In technology, where I’ve spent most of my career, women report being undervalued, overlooked, and, worse yet, harassed. I can tell you that the discrimination, both with women and minorities, is real, because I’ve heard the stories – and, in some cases, I’ve been a firsthand witness to it happening.

However, I’ve also seen the reverse happen. I’ve seen groups of people who hire only from within their group. In Trust: Human Nature and the Reconstitution of Social Order, I learned how different societies value in-family and in-group members to the exclusion of others, and how it negatively impacts their overall economic growth in the long term and limits their ability to grow individual organizations. The needle on the gauge goes both ways. Sometimes the dominant group is excluded too – not as much as minorities or women – but it happens.

The question isn’t whether it happens in either direction. The question is what to do about it. The question is whether employment quotas worked to eliminate discrimination against minorities. The question is also what is happening that we’re not even aware of. Are we unfairly discriminating against our boys because of the belief that they’ll one day become men?

By the Numbers

If you look at the numbers, men make more than women. However, that’s not the whole story. American boys and men commit suicide at four times the rate of girls and women – despite what we might believe about girls being more emotional and therefore more susceptible to committing suicide. Boys also account for two-thirds of the Ds and Fs issued in school. (Glasser argues there should be no Fs in Schools Without Failure.) Boys are also four times as likely to be suspended or expelled.

Certainly, there are biases that need to be eliminated. There are inequalities that we need to address; however, it’s not like they’re one-sided. If I gave you these numbers without identifying the gender, you might rightfully claim that there’s a crisis and something must be done. However, because the victims are boys, the concern is ignored.

In most parts of the world, girls are doing better than boys on most health and psychological indicators. Gurian is not advocating that we stop helping girls. He’s advocating that we start helping boys.

In a World Without Fathers

Our Kids speaks to the challenges of kids without fathers active in their lives. At the bottom of the socioeconomic ladder, fewer fathers are present. They’re simply missing. As a result, children are being raised by mothers – if they’re being raised by anyone at all. The father’s influence, which might be rough around the edges, is exactly the kind of tumbling that’s needed to take the corners and sharp edges off of boys, who need a struggle to grow.

It was E.O. Wilson, a biologist, who said, “I have been blessed by brilliant enemies.” This is not to say that fathers are enemies of their boys – far from it. Rather I’m saying that the need for refinement exists in all of us, but particularly in our boys. Fathers are strong sparring partners that allow boys to grow.

Boys Will Be Boys

As someone with both boys and girls, I can tell you with assurance that they’re different. I recognize that this is not a revolutionary statement for most of you, but I can say it with conviction. However, I can also say that each individual boy and each individual girl are different. Yes, gender does play a role in what children need; however, individual differences exist as well.

Gurian asserts that boys need the rough and tumble life, that life is dangerous for boys and that our overparenting, called “helicopter parenting,” has robbed our children, and particularly our boys, of the growth that they need. By eliminating all possible threats to our boys, we’ve deprived them of the need to overcome.

I’m reminded of the balan category from the Dyirbal language – an Aboriginal Australian language – which includes women, fire, and dangerous things (a category I discovered through Ambient Findability). Boys need to learn about these things with just the right amount of safety.

Growing Up Boys

What happens when boys grow up, but they don’t mature? Are they overgrown man children? Perhaps a man in this case becomes just a tall boy. Aging is assured. Maturing is not. When we deny our children nothing and, in doing so, deprive them of stress, challenge, and conflict, we’ve done them the greatest disservice of all. Why Zebras Don’t Get Ulcers, while cautioning against the downside of chronic stress, extols the value of short-term stressors. We need stress in our lives to help us mature. We need stress to make us better.

Women Need Powerful Men

There’s an unfortunate reality that reports of men raping and dominating women have become commonplace. This is unfortunate, in part, because not all the reports are true. It’s more unfortunate because some of them are. Reports of rape – both accurate and inaccurate – are thankfully the exception and not the rule. Most men are not overgrown man children.

Women don’t need men to lord over them. They don’t need to be victims. To fully express their lives, they need to know that they’re equal partners in life. It’s too easy for any of us, women and men alike, to move into victimhood and take up permanent residence. (A good place to start on victimhood is Hurtful, Hurt, Hurting.)

Women don’t need men who are carbon copies of themselves; they need men with character, men who are powerful in their ability to support and grow with the women they care about.

Family Rings

Gurian speaks of three families: the nuclear, the extended, and the community. Robert Putnam speaks of the decline of this social capital in Bowling Alone and the decline of the nuclear family in Our Kids. Our social fabric is straining to stay intact. Our mobile world has moved us farther from our extended families and has transplanted us from one community to another several times during the course of our lives. The structure that we have to help raise our children is different than it was.

I can remember being told to go out and have fun. Others I know have told me that their parents told them to go out until the street lights came on – and then to return for dinner. Children didn’t have cell phones or even wristwatches. There was a natural order to things that we’ve disrupted. Ironically, the fact that we’re more connected makes us more distant. (See Alone Together for how technology is changing our relationships.)

Today, parents who allow their children to walk to the park unsupervised are treated as criminals for putting their children at undue risk. They’re considered neglectful for not walking them down the block. The ways that used to work for raising children are no longer trusted. Strangely, the actual number of crimes against children isn’t appreciably increasing; however, our awareness of and hysteria about them are.

Providing nuclear family support for the growth of our children (and specifically boys) has become more and more challenging. No longer do we have regular contact with our extended families, and as the nuclear family picks up that role, we’ve also grown warier of our communities.

Processing and Ruminating

It’s no secret that, stereotypically, men and women process thoughts and emotions differently. What isn’t well known or understood is that the way boys process information is less about rumination and more about processing for completion. Women turn ideas over like clothes in a dryer, continuously tumbling them until they’re dry and then occasionally running them on an anti-wrinkle cycle. Men, on the other hand, process information, decisions, and so on for completion.

Instead of ideas being stuck in an endless anti-wrinkle cycle of being turned over, they’re processed and done. This can free a man’s mind from the tyranny of reconsideration. Processing allows freedom, where rumination is enslaving.

Citizen Science

Gurian encourages everyone to perform what he calls “citizen science.” That is, he’s encouraging experimentation and testing of hypotheses. I consider the idea that you would explore and test your world something to applaud, but the idea of calling it “citizen science” deplorable. The problem is in our human nature. We’ve got confirmation bias telling us we’ll find what we’re looking for. (See Thinking, Fast and Slow for more on biases.)

By applying citizen science without controls and careful observation, we’re quite likely to reach the wrong conclusion. In the end, I believe some of the gravest errors Gurian makes come from performing citizen science on sample sizes that are too small and letting his confirmation bias get the better of him.

I do, however, invite you to try your own citizen science and look with a careful eye at Saving Our Sons – you’ll likely find some things you agree with and some that you don’t.

Quick Tip: Microsoft Word: Breaks

When you want to control how a document looks, using breaks is a helpful way to start. By separating your document, you can keep headers from getting stuck at the end of a page, and even change the formatting from one section to another without having to manually select and format them. In this quick tip, I’ll show you how to use breaks to split your document into sections, whether you just need to start something on a new page or have to overhaul the whole document.

See more quick tips here: Quick Tips for Microsoft Office Applications.

Book Review-The Signal and the Noise: Why So Many Predictions Fail-but Some Don’t (Statistics and Models)

In the first part of this review, we spoke of how people make predictions all the time. The Signal and the Noise: Why So Many Predictions Fail-but Some Don’t has more to offer than generic input on predictions; it lays out a path through the models and statistics we can use to make better predictions.

All Models are Wrong but Some are Useful

Statistician George Box famously said, “All models are wrong, but some are useful.” The models that we use to process our world are inherently wrong. Every map inherently leaves out details that shouldn’t be important – but might be. We try to simplify our world in ways that are useful and that our feeble brains can process. Models allow us to simplify our world.

Rules of thumb – or heuristics – allow a simple reduction of a complex problem or system. In this reduction, they are, as Box said, wrong. They do not and cannot account for everything. However, at the same time, they can be useful.

The balance between underfitting and overfitting data is in creating a model that’s more useful and less wrong.

Quantifying Risks

Financial services, including investments and insurance, are tools that humans have designed to make our lives better. The question is, making whose lives better? Insurance provides a service in a world where we’re disconnected and don’t have a community mentality of supporting each other. In Hutterite communities – a branch of the Anabaptist movement, like the Amish and Mennonites – all property is held in common. In a large enough community, the loss of one barn or one building is absorbed by the community. However, that level of community support doesn’t exist in many places in the modern world.

Insurance provides an alternative relief for catastrophic losses. If you lose a house or a barn or something of high value, insurance can provide a replacement. To do this, insurance providers must assess risk. That is, they must forecast their risk. The good news is that insurance providers can write many insurance policies with an expected risk and see how close they get to calculating the actual risk.

Starting with the break-even point, the insurance company can then add its desired profit. For those people and organizations that believe there’s good value in the insurance, their assessment of risk, or their willingness to accept it, is such that they buy the policy. Given that people are more impacted by loss than by reward, it’s no wonder that insurance is a booming business. (See Thinking, Fast and Slow for more on the perceived impact of loss.)
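
To make that arithmetic concrete, here’s a minimal sketch of expected-loss pricing; the numbers are entirely hypothetical and are my illustration rather than anything from the book.

```python
# Hypothetical numbers for illustration only.
prob_of_loss = 0.002       # estimated chance the insured event happens this year
payout_if_loss = 250_000   # what the insurer pays out if it does

expected_loss = prob_of_loss * payout_if_loss   # break-even premium
profit_margin = 0.25                            # desired profit on top of break-even
premium = expected_loss * (1 + profit_margin)

print(f"Break-even premium: ${expected_loss:,.2f}")   # $500.00
print(f"Premium charged:    ${premium:,.2f}")         # $625.00
```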

The focus then becomes the insurance company’s ability to quantify its risk. The more accurately it can do this – while taking reasonable returns – the more policies it can sell and the more money it can make. Risk, however, is difficult to quantify, ignoring for the moment black swan events (see The Black Swan for more). You still must first separate the signal from the noise. You must be able to tell the underlying rate of naturally occurring events and which events are just normal random deviations from that pattern.

Next, the distribution of the randomness must be assessed. What’s the probability that the outcome will fall outside of the model? When referring to the stock markets, John Maynard Keynes said, “The market can stay irrational longer than you can stay solvent.” The same applies to insurance: you must be able to weather the impact of a major disaster and still stay solvent. Whether it’s a particularly difficult tornado season or a very bad placement of a hurricane, the perceived degree of randomness matters.

Then you have the black swan events, the events that you’ve never seen before. These are the events that some say should never happen. However, in many cases where the label has been applied, the risk was well known and discussed. A hurricane hitting New Orleans was predicted and even, at some level, prepared for – though admittedly not prepared for well enough. That is not a true black swan – a completely unknown and unpredictable event. It and other purported black swan events were, in fact, predicted in the data.

When predicting risks, you have the known risks and the unknown risks. The black swan idea focuses on the unknown risks, those for which there’s no data that can be used to predict the possibility. However, when we look closely, many of these risks are predictable – we just choose to ignore them, because they’re unpleasant. The known risks – or, more precisely, the knowable risks – are the ones that we accept as a part of the model. The real problem comes in when we believe we’ve got a risk covered, but, in reality, we’ve substantially misrepresented it.

Earthquakes and Terrorist Attacks

Insurance can cover the threat of earthquakes and the threat of terrorist attacks. However, how can we predict the frequency and severity of either? It turns out that both obey a similar pattern. Though most people are familiar with Charles Richter’s scale for earthquake intensity, few realize that it’s a logarithmic scale. That is, a 5.1 earthquake isn’t 25% stronger than a 4.1; it produces ten times the ground motion (and roughly thirty times the energy). Thus, a magnitude 8.1 earthquake produces one hundred times the ground motion of a 6.1.

This simple base-10 power rule is an elegant way to describe shaking that can differ dramatically from one earthquake to the next. What’s more striking is that, on this scale, a straight line runs from the frequency of smaller earthquakes to the frequency of larger ones, and that line forecasts how many large earthquakes to expect in a given period of time. Just three large earthquakes – the Chilean earthquake of 1960, the Alaskan earthquake of 1964, and the Great Sumatra earthquake of 2004 – accounted for almost half of all the energy released by every earthquake in the world between 1906 and 2005. They don’t happen frequently, but these earthquakes make sense when you look at the forecast along the line drawn from the frequency of smaller ones.
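
Here’s a minimal sketch of that straight-line relationship in code, using the standard Gutenberg-Richter form; the constants are illustrative assumptions on my part, not figures from the book.

```python
# Gutenberg-Richter relationship: log10(N) = a - b * M, where N is the expected
# number of earthquakes per year at or above magnitude M. The constants a and b
# are illustrative; b is typically close to 1 for real catalogs.
a, b = 8.0, 1.0

def quakes_per_year(magnitude):
    """Expected yearly count of earthquakes at or above this magnitude."""
    return 10 ** (a - b * magnitude)

for m in (5, 6, 7, 8, 9):
    print(f"Magnitude >= {m}: about {quakes_per_year(m):g} per year")
# Each whole-magnitude step is ten times rarer -- the straight line on a log scale
# that lets the frequency of small quakes forecast the frequency of giant ones.
```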

Strikingly, terrorist attacks follow the same power law: severity rises as frequency decreases. The 9/11 attacks are predictable within the larger framework of terrorism in general. There will be, from time to time, larger terrorist attacks. While the specific vector from which an attack will come, or the specific fault line that will cause an earthquake, remains unknown, we know that there’s a decreasing frequency of large events.

Industrial and Computer Revolutions

If you were to map gross domestic product per person, the per-person output would creep up imperceptibly over the long history of civilization, right up to the industrial revolution, when something changed. Instead of all of us struggling to survive, we started to produce more value each year.

Suddenly, we could harness the power of steam and mechanization to improve our lives and the lives of those we care about. We were no longer reduced to living in one-room houses as large, extended families and began to have a level of escape from the threat of death. (See The Organized Mind for more on the changes in our living conditions.) Suddenly, we had margin in our lives to pursue further timesaving tools and techniques. We invested some of our spare capacity into making our lives in the future better – and it paid off.

Our ability to generate data increased as our prosperity did. We moved from practical, material advances to an advance in our ability to capture and process data with the computer revolution. After a brief dip in overall productivity, we started leveraging our new-found computer tools to create even more value.

Now the problem isn’t capturing data. The Internet of Things (IoT) threatens to create mountains of data. The problem isn’t processing capacity. Moore’s law suggests the processing capacity of an individual microchip doubles roughly every 18 months. While this pattern (it’s more of a pattern and less of a law) is not holding as neatly as it was, processing capacity far outstrips our capacity to leverage it. The problem isn’t data and processing. The problem is our ability to identify and create the right models to process the information with.

Peer-Reviewed Paucity

The gold standard for a research article is a peer-reviewed journal. The idea is that if you can get your research published in a peer-reviewed journal, then it should be good. The idea is, however, false. John Ioannidis published a controversial article, “Why Most Published Research Findings Are False,” which laid out how often research articles are wrong. The finding was reinforced by Bayer Laboratories when it discovered it could not replicate two-thirds of the published findings it tested.

Speaking as someone who has published a peer-reviewed journal article, the reviews are primarily for specificity and secondarily for clarity. The findings – unless you make an obvious statistical error – can’t be easily verified. Contrast that with the thousands of pages of technical editing I’ve done over the years, where I would verify the author’s work and could test their statements easily. For the most part, being a technical editor means verifying that what the author is saying isn’t false and making sure that the code they were writing would compile and run.

However, I did make a big error once. We were working on a book that was being converted from Visual Basic to Visual C++. The original book was about developing in Visual Basic and how Visual Basic can be used with Office via Visual Basic for Applications. In the introduction, a search-and-replace done by the author claimed that there was a Visual C++ for Applications. Without anything to verify it against – the book covered a beta of the software, for which limited information was available – I let it go without a thought. The problem is that there is no Visual C++ for Applications. I should have caught it. I should have noticed that it wasn’t something that made sense, but I didn’t.

Because validation wasn’t easy – I couldn’t just copy the code and run a program – I failed to validate the information. Peer-reviewed journals are in much the same situation. It’s not easy to replicate experimental conditions. Even if you could, you’re likely not to get exactly the same results. Consequently, reviewers don’t try to replicate the results, and that means we don’t really know whether the results can be replicated – particularly using the factors that the researcher specifies.

On Foxes and Hedgehogs

There’s a running debate on whether you should be a fox – that is, know a little about many things – or a hedgehog – that is, know a lot about one thing. Many books, like Peak, tell of the advantages of focused work on one thing. The Art of Learning follows this pattern in sharing Josh Waitzkin’s rise in both chess and martial arts. However, when we look at books on creativity and innovation, like Creative Confidence, The Medici Effect, and The Innovator’s DNA, the answer is the opposite. You’re encouraged to take a bite out of life’s sampler platter rather than roasting a whole cow.

When it comes to making predictions, foxes with their broad experiences have a definite advantage. They seem to be able to consider multiple approaches to the forecasting problem and look for challenges that the hedgehogs can’t see. I don’t believe that the ability to accurately forecast is a reason to choose one strategy over another – but it’s interesting. Foxes seem to be able to see the world more broadly than the hedgehogs.

The Danger of a Lack of Understanding

There’s plenty of blame to go around for the financial meltdown of 2008. There’s the enforcement of the Community Reinvestment Act (CRA) and the development of derivatives. (I covered correlation and causation and the impact on the meltdown in my review of The Halo Effect.) The problem that started with some bad home loans ended with bankruptcies as financial services firms created derivatives from the mortgages.

These complicated instruments were validated by ratings agencies but were sufficiently complex that many of the buyers didn’t understand what they were buying. This is always a bad sign. When you don’t understand what you’re buying, you end up relying on third parties to ensure that your purchase is a good one – and when they fail, the world comes falling down, with you left holding the bag.

The truth is that there is always risk in any prediction. Any attempt to see whether there’s going to be profit or loss in the future is necessarily filled with risk. We can’t believe anyone who says there is no risk.

Bayes’ Theorem

I’m not a statistician. However, I can follow a simple, iterative formula to continue to refine my estimates. It’s Bayes’ theorem, and it can be simplified to:

Prior probability
x = initial estimate of the probability that the hypothesis is true

New event
y = probability of observing the event if the hypothesis is true
z = probability of observing the event if the hypothesis is false

Posterior probability
Revised estimate = xy / (xy + z(1 - x))

You can use the theorem over and over again as you get more evidence and information. Ultimately, it allows you to refine your estimates as you learn more information. It is, however, important to consider the challenge of anchoring, as discussed in Thinking, Fast and Slow and How to Measure Anything.
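
As a minimal sketch of how that iterative refinement might look in code – the probabilities here are invented purely for illustration:

```python
def bayes_update(prior, p_event_if_true, p_event_if_false):
    """One pass of Bayes' theorem: revise the prior after seeing the new event.

    prior            -- x, the initial estimate that the hypothesis is true
    p_event_if_true  -- y, the chance of the event if the hypothesis is true
    p_event_if_false -- z, the chance of the event if the hypothesis is false
    """
    return (prior * p_event_if_true) / (
        prior * p_event_if_true + p_event_if_false * (1 - prior)
    )

# Hypothetical example: start at 20% and revise twice as evidence arrives.
estimate = 0.20
for y, z in [(0.75, 0.10), (0.60, 0.30)]:
    estimate = bayes_update(estimate, y, z)
    print(f"Revised estimate: {estimate:.3f}")   # 0.652, then 0.789
```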

The Numbers Do Not Speak for Themselves

Despite the popular saying, the numbers do not, and never will, speak for themselves. We’re required to apply meaning to the numbers and to speak for them. Whatever we do, however we react, we need to understand that it’s our insights that we’re applying to the data. If we apply our tools well, we’ll get valuable information. If we apply our tools poorly, we’ll get information without value. Perhaps, if you have a chance to read it, you’ll be able to separate The Signal and the Noise.

Quick Tip: Microsoft Word: Keyboard Movement and Selection

Keyboard shortcuts are a well-known way to reduce the amount of times you move your hand from your keyboard to your mouse and vice-versa when you’re editing your document. However, there are ways to navigate your document and even select text using just your keyboard as well. I’ll show you in this quick tip how to reduce the number of times you reach for your mouse when you want to select text or move around your document.

See more quick tips here: Quick Tips for Microsoft Office Applications.

Book Review-The Signal and the Noise: Why So Many Predictions Fail-but Some Don’t (Predictions)

People make predictions all the time. They predict that their team will win the Super Bowl, or that they’ll win the lottery. These predictions are based on little more than hope. The Signal and the Noise: Why So Many Predictions Fail-but Some Don’t seeks to set us on the right path to understanding what we can learn from data, what we can infer from data, and what we can’t. By looking at the strengths and weaknesses of statistics, including both using the wrong model and supplying bad data, we can see how statistics has the power to improve our lives through productive forecasts and predictions.

In this part of the two-part review, we’ll look at predictions.

Forecasts and Predictions

Sometimes in our rush to be amazed at something, we simplify the questions we ask. We fail to recognize that our brain has simplified the thing that we’re trying to sort out (see Thinking, Fast and Slow for more on substitution). In the case of looking into the future, what we really want is prediction, and what statistics gives us most frequently is a forecast. Forecasts necessarily have a certain amount of error and involve statistical relationships. Forecasts become predictions when they become specific and precise.

Each day when we look at the weather, what we want is a soothsayer to predict what the weather will be like. However, what they offer us is a forecast based on models that result in a chance of rain somewhere between zero and 100%. We look at economists and seek the answer about whether we’ll make more money next year – or not. We want to know whether a risky investment will be worth it. However, economists and meteorologists are subject to the same rules as any other statistician.

While it’s true that statistics can predict – as long as we’re using this in a general sense of the word – events that are to happen in the future, there must always be some level of uncertainty as to whether the event will happen – or not. Predictions are just an attempt to refine forecasts into specific, tangible probable outcomes. Sometimes that process is successful but often it is not.

Falsifiable by Prediction

Karl Popper suggested that every forecast should be falsifiable via prediction. To test a model, you need to be able to make some sort of prediction with it that could then be proven false. In this way, you can create a test to ensure that your model is accurate and useful. A model that doesn’t forecast appropriately, and that you can’t make a prediction from, doesn’t do much good.

Everything Regresses to the Mean

One frustration with statistics is that it can tell you, with relative authority, the things you want to know – but with less precision than is useful. Statisticians can forecast the economy but not predict whether you will get a raise. The Black Swan artfully points out the challenges of statistics and modeling when the sample size is insufficient. Until you’ve seen a black swan, you’ve not sampled enough to make the statistical models work. Until you’ve sampled enough, the noise will dramatically pull your results askew.

With large sample sizes, everything regresses to the mean. We no longer see the outlier as something distinct that does happen; instead, it gets lost in the law of averages. Tragic events like 9/11 are never forecast when the wrong model is used. They’re not perceived as possible if they’re simply averaged into the data. It’s like the proverbial statistician drowning in a river that is, on average, only three feet deep – all the depth in the data was averaged out.
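
To make the river joke concrete, here’s a toy illustration – the depths are numbers I’ve made up – of how an average flattens away the one outlier that actually matters:

```python
# The proverbial river: mostly shallow, with one deep channel.
depths_in_feet = [2, 2, 3, 2, 3, 2, 12, 2, 3, 2]

average_depth = sum(depths_in_feet) / len(depths_in_feet)
deepest_point = max(depths_in_feet)

print(f"Average depth: {average_depth:.1f} feet")   # 3.3 feet -- looks "safe"
print(f"Deepest point: {deepest_point} feet")       # 12 feet -- the part that drowns you
```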

Right Model, Right Results

Perhaps the most difficult challenge when working with data isn’t the data collection process. Collecting data is tedious and needs to be done with meticulous attention to detail; however, it’s not necessarily imaginative, creative, or insightful. The harder work is getting to the magic moment when the right model for the data is uncovered. Though statisticians have ways of evaluating different models for their ability to predict the data, they must first see some inherent signal in the noise.

For a long time, we couldn’t find planets outside of our solar system. One day, someone identified a detection model – that is, they discovered a theory for the strange oscillations in the light frequency from distant stars. The theory proposed that super-massive planets in close orbit were causing the star to move. This created a Doppler effect with the light from the star causing what we perceived as light frequency oscillations. Consensus coalesced, and the scientific community agreed that this was indeed what was happening. We had found the first extra-solar planet. Almost immediately, we found nearly a dozen more.

These super-massive planets were hiding in the data we already had. We had already captured and recorded the data to indicate the presence of other planets, but we didn’t have a model to process the data that we had to allow us to understand it.

Plenty of ideas, thoughts, theories, and models were tried to explain the light variations, but it wasn’t until a super-massive planet was considered that we settled on a model that was right.
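
Here’s a toy sketch of that kind of detection model – a simple sinusoidal Doppler wobble recovered from noisy synthetic data with an FFT. The star, planet, and numbers are all invented for illustration; real radial-velocity work uses far more careful methods.

```python
import numpy as np

# Synthetic radial-velocity measurements: a star wobbling under a close-in giant
# planet, observed with noise. The signal was always "in the data"; the model finds it.
rng = np.random.default_rng(0)
days = np.arange(0, 200, 0.5)                 # observation times (every 12 hours)
true_period = 4.2                             # days -- a hypothetical hot Jupiter
velocity = 55 * np.sin(2 * np.pi * days / true_period) + rng.normal(0, 20, days.size)

# The "model": look for the dominant periodic component with an FFT.
spectrum = np.abs(np.fft.rfft(velocity - velocity.mean()))
freqs = np.fft.rfftfreq(days.size, d=0.5)     # cycles per day
best = freqs[np.argmax(spectrum[1:]) + 1]     # skip the zero-frequency bin
print(f"Recovered period: {1 / best:.2f} days (true value {true_period})")
```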

The Failure of Predictions

We got lucky finding extra-solar planets: the right idea at the right time. It was a model that fit well; it wasn’t a specific prediction. With predictions, our luck is very, very poor. The old joke goes, “Economists have predicted nine of the last six recessions.” They predicted recessions where none happened. Earthquakes and other cataclysmic events are predicted with startling frequency. It seems that everyone has some prediction of something. Sometimes the predictions are harmless enough, like which team will win the Super Bowl. Sometimes the consequences are much more dire.

Disease

When you think in systems, delays are a very bad thing. Delays make it harder for the system to react to a change in circumstances. In the case of the SR-71 Blackbird, the delays in a mechanical system made engine unstarts a regular occurrence; reduce the delay with electronic controls, and the unstart problem is dramatically reduced. (See The Complete Book of the SR-71 Blackbird for more.) In the creation of vaccines, the delay is long: it takes about six months to scale up production and get enough doses for the country.

What makes the vaccination “game” worse is that vaccines are designed to target specific viral strains. If the virus mutates, the hard work of creating the vaccine may be wasted, as it may become ineffective at protecting against the new strain. Each year, the vaccine makers attempt to predict which variations of influenza will be the most challenging. They start cooking up batches of vaccines to combat the most virulent.

What happens, however, when you get noise in the identification of the influenza strain that will be the most impactful? From 1918 to 1920, the H1N1 “Spanish flu” afflicted roughly one-third of humanity and killed over 50 million people. So when there was an apparent outbreak of a related swine flu strain at Fort Dix, who can blame President Ford for encouraging the vaccine industry to create a vaccine for it and encouraging every American to do their part in preventing the spread of the disease by getting vaccinated – and hopefully increasing herd immunity?

It turns out it was all a bad call. Issues with the vaccine caused Guillain-Barré syndrome in some recipients. The virus strain turned out not to be that virulent. The noise at Fort Dix that had produced the scare wasn’t a result of the virus’s potential; it was instead a result of environmental and cultural factors that allowed the disease to spread on the base but weren’t generalizable to the population at large.

SIR

A classic statistical way of modeling diseases is the SIR model, an acronym for susceptible, infected, and recovered. The model assumes that everyone who has recovered is no longer susceptible and that everyone is equally susceptible to begin with. This simplified model works reasonably well for measles but fails to account for natural variation in human susceptibility. More importantly, the model fails to account for the connections that we have with each other. It fails to account for how we interact.
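
For reference, here’s a minimal sketch of the SIR model in code. The rates are hypothetical, and note how the loop bakes in exactly the simplifications described above: uniform susceptibility, permanent immunity, and no network structure.

```python
# A minimal discrete-time SIR sketch with hypothetical rates.
population = 1_000_000
susceptible, infected, recovered = population - 10, 10, 0
beta, gamma = 0.3, 0.1   # transmission and recovery rates per day (illustrative)

for day in range(1, 181):
    new_infections = beta * susceptible * infected / population
    new_recoveries = gamma * infected
    susceptible -= new_infections
    infected += new_infections - new_recoveries
    recovered += new_recoveries
    if day % 30 == 0:
        print(f"Day {day}: {infected:,.0f} infected, {recovered:,.0f} recovered")
```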

Another classic example is the cholera outbreak in London, which didn’t seem to have any connections. There was no discernible pattern – that is, until John Snow traced a connection to the Broad Street well and removed the pump handle. The disease slowly dissipated, because Snow had correctly identified the root cause. His job wasn’t easy, though, because people far away from the pump were getting sick. Those who weren’t close to the Broad Street pump had hidden connections: sometimes they had lived near the pump in the past and still used it as their main water source; in other cases, they had relatives close by. The problem with forecasting diseases is the hidden patterns that make it hard to see the root cause. To forecast correctly, we need to find and then use a correct model.

An Inconvenient Truth

It’s an inconvenient truth that, in the decade when An Inconvenient Truth was released, there was no substantial change in temperatures across the planet – in truth, there was an infinitesimal reduction in temperature from 2001 to 2011. However, Gore wasn’t the first to claim that there were problems. In 1968, Paul and Anne Ehrlich wrote The Population Bomb. In 1972, Donella Meadows (who also wrote Thinking in Systems), Jorgen Randers, and Dennis Meadows first published The Limits to Growth. (It’s still on my reading list.) Both books sought to predict our future – a future the authors were deeply concerned about. Of course, population is increasing, but it’s far from a bomb, and we’ve not yet reached the feared limits to growth.

These predictions missed what Everett Rogers discovered when working with innovations. In Diffusion of Innovations, he talks about the breakdown of society created by the introduction of steel axe heads into Aboriginal tribes in Australia. The predictions missed the counter-balancing forces that cause us to avoid catastrophe. However, presenting a balanced and well-reasoned point of view isn’t sensational; it doesn’t sell books, nor does it make TV exciting. The McLaughlin Group pundits’ forecasts about political elections are not at all well-reasoned, balanced, or even accurate – but that doesn’t stop people from tuning in to what amounts to a circus performance every week.

So the real inconvenient truth is that our predictions fail – that we overestimate, and we ignore competing forces that attempt to bring a system into balance. In fairness to Gore, the global temperature on a much longer trend seems to be climbing at roughly 1.5 degrees centigrade per century. It’s just that there’s so much noise in the signal of temperatures that it’s hard to see – even over the course of a decade. We need to be concerned, but the sky isn’t falling.

Watching the Weather

If you want to find a prediction that’s guaranteed to be wrong, it’s got to be the weather. The oft-quoted remark “In what other job can you be wrong most of the time and still keep your job?” refers to meteorologists. In truth, however, forecasts are substantially better than they were even a decade ago. Forecasters have done a startlingly good job of eliminating problems with the mathematical models that generate weather forecasts, and increases in processing power have made it possible to create more accurate and more precise forecasts. And they’re still frequently wrong. A wise weatherman goes outside and looks at the sky before going on air to share the forecast, because they know that the computer models can be wrong.

The problem isn’t the model. The problem isn’t our ability to model what will happen with the forces of nature. The problem is our ability to measure the inputs to the model precisely, combined with the inherent dynamic instability of the systems. It was Edward Lorenz who first started the conversation about the butterfly effect – the idea that a butterfly in Brazil can set off a tornado in Texas. That’s a mighty powerful butterfly – or the result of an inherently unstable and dynamic system. A very small change in input produces a very large change in output.
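
The weather models themselves are far beyond a blog post, but the same sensitivity shows up in any chaotic system. Here’s a toy illustration using the logistic map – my own example, not one from the book:

```python
# Two runs of the chaotic logistic map that start almost identically.
def trajectory(x, steps=40, r=3.9):
    values = []
    for _ in range(steps):
        x = r * x * (1 - x)
        values.append(x)
    return values

a = trajectory(0.2000000)
b = trajectory(0.2000001)   # a one-in-ten-million nudge to the starting input

for step in (10, 20, 30, 40):
    print(f"Step {step}: {a[step - 1]:.4f} vs {b[step - 1]:.4f}")
# Early on, the runs track each other closely; by the later steps they bear
# no resemblance -- a tiny change in input, a huge change in output.
```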

As a quick aside, this is the same principle that hash algorithms exploit. We use hash algorithms to ensure that messages aren’t tampered with, and they work because small changes in input result in large changes in output.
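
A quick illustration using SHA-256 from Python’s standard library (the specific algorithm is just an example; the avalanche behavior is the point):

```python
import hashlib

# One character of difference in the input...
a = hashlib.sha256(b"Pay Alice $100.00").hexdigest()
b = hashlib.sha256(b"Pay Alice $100.01").hexdigest()

print(a)
print(b)
# ...produces digests that differ in roughly half of their bits, which is why a
# tampered message no longer matches the hash that was sent alongside it.
```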

The problem with predicting the weather, then, isn’t that we don’t know how to process the signal and arrive at the desired outcome. The problem is that we can’t get a precise enough signal to eliminate all the noise.

Overfitting and Underfitting

In attempting to find the model that perfectly describes the data, we run into two risks that are opposite sides of the same coin. On the one hand, we can overfit the data and try to account for every variation in the dataset. On the other, we can look for mathematical purity and simplicity and ignore the outliers – this is “underfitting.”

“Overfitting” mistakes noise for signal. An attempt is made to account for the randomness of the noise inside the signal we’re trying to process. The result is that our ultimate predictions reproduce the same randomness that we saw in our sample data. In other words, we’ve mistaken the noise for the signal and failed to eliminate it.

Underfitting, on the opposite side of the coin, is the inability to distinguish the signal in the noise. That is, we ignore data that is real signal, because it looks like noise. In a quest for mathematical simplicity, we ignore data that is inconvenient.
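
A compact way to see both failure modes is to fit polynomials of different degrees to noisy samples of a known signal. This sketch is my own illustration with invented numbers, not something from the book:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)   # true signal plus noise

x_dense = np.linspace(0, 1, 200)
truth = np.sin(2 * np.pi * x_dense)

for degree in (1, 3, 15):          # underfit, reasonable, overfit
    coeffs = np.polyfit(x, y, degree)
    predictions = np.polyval(coeffs, x_dense)
    error = np.sqrt(np.mean((predictions - truth) ** 2))
    print(f"Degree {degree:>2}: error against the true signal = {error:.3f}")
# The straight line can't follow the signal at all (underfitting), while the
# high-degree polynomial bends to chase the noise in the sample (overfitting).
```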

Brené Brown speaks of her scientific approach to shame and vulnerability as grounded theory and the need to fit every single piece of data into the framework. (See The Gifts of Imperfection for more.) When I first read this, it stood in stark contrast to what I saw with scientists ignoring data that didn’t fit their models. It seems like too many scientists are willing to ignore the outliers because their theory doesn’t explain them. In other words, most scientists, in my experience, tend to underfit the data. They are willing to let data slip through their fingers for the elegance of a simpler model. Brown and those who follow the grounded theory approach may be making the opposite error and overfitting their data.

Statistical Models

In the next part of this review, we’ll talk about models and statistics.

Article: The Actors in Training Development: Instructors

If a tree falls in the woods and no one hears it, did it really make a sound? This question is at the heart of the need for people who help training reach students. It’s only by helping students through a course that the course has any impact or value. There’s no good in a course that sits on the shelf, never to be used. Distribution staff, of which instructors are a part, are the bridge from the completed training to the impactful implementation.

Part of the TrainingIndustry.com series, the Actors in Training Development. Read more…

Quick Tip: Microsoft Word: Quick Parts

If you’re working in a collaborative space, such as a SharePoint library app, you’ll often use certain fields or metadata to contain important information. Microsoft Word can capture this information, and even change it. In this quick tip, I’ll show you how you can use quick parts to update some of the document’s properties, which can then be populated to a collaborative space.

See more quick tips here: Quick Tips for Microsoft Office Applications.

Book Review-A Spy’s Guide to Thinking

I never wanted to be a spy. Astronaut, yes. Spy, no. I’m not sure why. Spies are glamorized in the movies (unless it is Spies Like Us), but it wasn’t my thing. When the short book A Spy’s Guide to Thinking came across my path, I thought it was worth looking into. It’s a short book, a quick read, and more of an interesting aside than it is hard-hitting details about how spies think. Still, there are some interesting things from the book to consider.

Side of Paranoia

In my head, being a spy means being at least a little bit paranoid. You’ve got to be on guard against people discovering who you really are and what your mission is. While this wasn’t an acknowledged component, the book centers on one encounter on a subway – which had nothing to do with being a spy but could provide insight into how a spy thinks. Generally, the word would be “paranoid.”

The entire encounter kept circling the question of whether the other person knew he was a spy. He’d rule out that the other person was a spy catcher and then retest that conclusion over and over again. I suppose that is what makes a good spy: they’re paranoid.

Observe, Orient, Decide, Act

Throughout the book, our spy ran a loop: observe (data), orient (analysis), decide, and finally, act. The loop originated with John Boyd, who talked about how the most successful fighter pilots can run the loop quicker than their peers. It’s not being smarter that matters; it’s getting through the loop quicker.

Whether you use the word “observe” or “data,” “orient” or “analysis,” the result is the same. You observe the situation, assess or orient to the data you have, and then make a decision and act upon it. The loop – the slightly paranoid loop – was running frighteningly fast.

Zero, Positive, Negative

There are only three types of games we can play: those that are net positive, those that are net negative, and those that are zero-sum. When we play a net positive game, more is created – it may not be evenly distributed, but more is created through the game. In zero-sum games, one person’s winnings come at another’s equal loss. In net negative games, the players collectively end up with less than they started with.

It’s interesting to view life through the lens of a spy, always wondering who knows what. A Spy’s Guide to Thinking really does get you thinking – about whether you could be a spy or not.

Quick Tip: Microsoft Word: Record a Macro

Sometimes you need the same piece of text used multiple times over tons of different documents. It could be a short piece of text, like a slogan or trademark, or a longer paragraph. In Word, you can create, or record, a macro, as I’ll show you in this quick tip, and use that macro in all sorts of documents, removing the need to copy and paste from one document to the next.

See more quick tips here: Quick Tips for Microsoft Office Applications.