“What doesn’t kill you makes you stronger,” the saying goes. The saying ignores broken bones and other injuries that weaken without killing. Still, the heart of this cliché is what Taleb explains in Antifragile: Things That Gain from Disorder. It’s not enough to just be resilient. It’s not enough to not be fragile. We want things that, when you hurt them, get stronger.
What would it be like to find the next box on your porch (presumably from Amazon) marked “Please handle roughly”? What would it be like to not only tolerate and accept roughness, but to seek it out so that you can grow? This is a secret of the people who have become the leaders we admire. They were handled roughly over a long time and got better for it.
I’m Not Playing Around Here
While reviewing Play, I mentioned that, from an evolutionary standpoint, play is strange. Play is supposed to be safe, but it isn’t completely safe. There is always the risk of serious injury, yet evolution kept the practice. This raises the question, “Why?” The answer is the same as for everything evolution keeps: it’s useful. Play allows the participants to hone their skills and become better in a space of relative safety. In short, the bumps and bruises of play lead to a stronger person.
When people and animals play, they’re not playing around. They’re expressing their antifragile nature and the ability to tear down to be able to rebuild. They’re learning in a relatively safe way. They’re getting hurt, so that they can learn how to not be critically hurt later. Among other things, play causes us to strain our muscles to the point that we do damage.
Muscles, Athletes, and Coaches
Muscles rebuild stronger than they were before they were torn. They’re antifragile. They get better the more they’re harmed. Athletes and coaches use this antifragile property of our muscles to develop better athletes. Some athletes, like Josh Waitzkin, have learned, both physically and mentally, how to move forward in the face of harm and come out stronger. (See The Art of Learning for more of Josh’s journey.)
Peak performance of any individual, according to Anders Ericsson, is based on purposeful practice. (See Peak for more.) Purposeful practice is pushing yourself just slightly beyond your capabilities with a clear goal of how you want to get better. Csikszentmihalyi describes how to enter a heightened, productive state of flow by moving just 4% beyond your current capabilities. (See Flow.) By going beyond what we’re capable of – by just a bit – we can break down our old habits, beliefs, and limits and move on to a new, higher level.
At the heart of antifragility is the concept of overcompensation. It’s the overcompensation that creates antifragility. “I’m never going to let this happen again” is the mantra of the antifragile. “Let them try to knock this down” is the cry of a frustrated man who is demonstrating antifragility.
My wife had an armrest between the seats of her car. It was fancy. It slid forward and backward. However, the design made it fragile. Fragile enough that, while leaning back to grab something from the back seat, I broke it off completely. I repaired it with super glue, and it worked OK until the next time someone leaned on it a bit while reaching into the back seat.
That’s when I decided on an overcompensation strategy. I immobilized the fanciness by filling the gaps – all the gaps – with hot-melt glue. I can’t tell you how many sticks I used, but I can tell you I more than doubled the weight of the armrest. I then bolted it back on. Now we can apply as much weight as we like and it doesn’t move. We may hear cracking, but it’s not moving. That’s just one small example of overcompensating.
I could have glued it with a bit more super glue and simply immobilized the slides. However, that would not have guaranteed it would never break again. I overcompensated in the fix, because I didn’t want it to break again.
Up to a Point
It’s important to recognize that antifragility as a capacity has its limits. First, you’re only antifragile up to a point. We may be able to accept variations in temperature or pressure, and by repeated infrequent exposures, we may be able to extend our range. However, if you attempt something too far outside your current capabilities, you’re likely to get hurt – perhaps even seriously.
Second, you can’t be robust to everything. You can’t prepare for every possible outcome. To do so requires levels of redundancy that reduce your agility, and that in turn reduces your ability to react and recover quickly enough to be antifragile.
Antifragility hides a weakness in its folds. Overcompensation requires time to complete. After each injury, there needs to be a recovery period that allows the person or system to overcompensate. If you disrupt this process – by interrupting it or by subjecting the system to too much stress – you’ll find that the system may fail to rebuild or may fail completely.
When designing for antifragility, we need to consider how we might engineer into the system the concept of recovery periods. We should design methods to protect the system during recovery – even if that means offering less to the outside world. After tearing muscles down, it’s necessary to have a rest period before attempting to stress them again. That’s a part of the normal process of being antifragile.
If someone were to disrupt the recovery process, we’d find that the antifragile would become potentially very fragile.
Correlation and Causation
Which way does the arrow point? Does A cause B, or does B cause A? Just because two events are correlated doesn’t make one the cause of the other. Take, for instance, the financial meltdown in 2007 and 2008. What’s the cause? Well, the balanced answer seems to be two things: 1) Allowing people who couldn’t afford homes to buy them, and 2) Profiteering by banks and financial institutions. But let’s back up. Why did we let people who couldn’t afford to buy a house buy a house?
The answer comes from an observation. The observation was that economies that had more home ownership had more stability. Economists concluded that home ownership was a stabilizing factor for an economy. In other words, home ownership caused stability. So, if you want to increase economic stability, you increase home ownership. It all sounds good, right up until the point when it doesn’t.
The error was in believing that home ownership caused economic stability, and that by manipulating home ownership, we could manipulate the stability of the economy. The problem is that the arrow flows (or appears to flow) in the other direction. Stable economies mean that more people believe in that stability and are willing to take on the risk of a long-term investment. (See more coverage in The Halo Effect, The Black Swan, Thinking, Fast and Slow, and Redirect.)
If I had to pick the error that comes up most, this is it. It may not be a logical fallacy directly, but it’s a logical mistake that’s made frequently. (See Mastering Logical Fallacies for other fallacies.) There’s one relatively surefire way to know which is the cause, but it requires knowing what came first. Was it the chicken or the egg? In most cases, we can’t see the timing, so we don’t know which way the arrow flows. (See Thinking in Systems for more about causal diagrams.)
Strangely, it’s the detractors that help the antifragile grow. It’s the critics that make an actress or a restaurant interesting. It’s the envy of other people that causes you to wonder what someone has – and how they’re going to keep it. The antifragile want harsh feedback. They want feedback that makes them question their beliefs. They know that the only way to get better is to go through the refiner’s fire.
I value my supporters, the people who are in my corner no matter what. However, I also value those who can point out holes in my plans, in my thinking, and in my ideas. There’s a natural human tendency to dismiss what the critics say – and to minimize their comments – but in doing so, you limit your ability to become truly great.
It’s said that the struggles we face build perseverance and, ultimately, character. The people who have the clearest sense of who they are are those who have faced the greatest challenges. Lincoln is arguably our best president (or #2, depending upon the poll). Most people aren’t truly aware of how many failures he endured in his life.
He failed at business – twice. He ran for – and lost – a seat in the state legislature. He had a nervous breakdown. He ran for Congress (both the House and the Senate) and lost. Again and again, Lincoln met failure toe to toe. He persevered. However, more than just being resilient, he learned and grew. Through roughness – not through being handled with care – he developed the character that led our nation through one of its darkest hours.
Sometimes we can activate antifragility in a system accidentally. There has been a rise in antibiotic-resistant bacteria, and it’s becoming a real concern. The cause of the antibiotic-resistant bacteria is the use of antibiotics itself. Let’s talk through what’s happening.
First, there are those people for whom an antibiotic is prescribed, and they take only part of the prescribed course. The result is that the weaker versions of the bacteria are killed and the stronger ones are left to replicate and grow stronger – without the competitive pressure of the other, similar bacteria. Let me slow that down a bit.
Most of the time, an individual variation (mutation) of bacteria competes for resources with the other bacteria – and the host. When you remove most of the competitors using an antibiotic, you reduce the competition and make it easier for the surviving bacteria to replicate. And with the reduced pressure, their replication rate is higher. Thus, people who fail to take the full prescribed course of an antibiotic can cause a rebound in which the bacteria are harder to kill.
Second, even with appropriate use of antibiotics we create antibiotic resistance. Antibiotics are designed to support the natural immune system that we have in our bodies. Normally the antibiotic kills a specific kind of bacteria. Any mutations that the antibiotic didn’t kill are left for our body’s immune system. (You can find out a bit more about our immune system in my review of Why Zebras Don’t Get Ulcers.) Since the bacterial mutations are relatively small, our immune systems normally knock them out relatively quickly. However, what happens with an immunocompromised individual, where this new variant of the bacteria can’t be removed by the immune system? The result is a strain of the bacteria that resists antibiotics when introduced into someone else.
The systems-thinking way of reducing the number of new antibiotic-resistant bacteria is twofold: reduce the number of antibiotics used, and, when antibiotics are used, achieve full compliance with the protocol so that all the bacteria are killed. This gives the resistant variants the smallest possible chance to thrive – and when they do get that chance, it increases the odds that the body will kill off the resistant bacteria that remain.
By most measures, the RMS Titanic wasn’t a small failure. The ship described as “unsinkable” became a terrible maritime disaster – though not the one with the greatest loss of life. The tragedy of the Titanic’s sinking taught us something about the hubris of the shipbuilding industry. Had the Titanic’s tragic accident not happened, there would have been an even larger tragedy – according to Henry Petroski. He should know. He wrote To Engineer Is Human: The Role of Failure in Successful Design. In it, he reveals his discoveries from his research on some of the greatest engineering failures. Ultimately, the conclusion is that failure is an unfortunate reality as we continue to push the boundaries of our knowledge.
It’s these small failures that prevent larger ones. Of course, the “small” failures are black swan events when they happen (see The Black Swan). But ultimately, if we left our path unchecked, the tragedies would be greater.
In many cases, being antifragile is about these small failures – in diffusing failures – so that the rest of the system can learn from and adapt to the failures. This is what entrepreneurs do for the economy. They take big personal risks that are small in the context of the economy.
The Risk Takers
Entrepreneurs are, quite literally, risk takers. They are the heroes in the eternal struggle that we call life. Most people don’t know that about a third of US employment is in organizations with fewer than 100 people. Roughly the same can be said for 100- to 1,000-person companies and for companies with over 1,000 employees. (I’m generalizing quite a bit, but these figures are roughly right.) The innovation that happens in small companies fuels the economy. Without the risk takers – those willing to get knocked down and get up again – our economy simply wouldn’t work.
Some people believe that you can only be an entrepreneur if you’re willing to bet the farm (see Bold). There are stories about FedEx’s future hinging on a roulette wheel in Vegas. However, most of the entrepreneurs that I know are more focused on taking calculated risks. They know that there’s the chance of failure, and their objective isn’t necessarily to win every “at bat.” Instead they want to learn something and survive long enough for the next at bat. They look for an asymmetry, where the upside is unbounded, and the downside is limited.
As it turns out, the most persistent and pervasive stressor is time. Ideas wither and fade over time. Only 12% of the Fortune 500 firms on the 1955 list survived to 2016. Timeless classics like Studebaker, Detroit Steel, and Zenith Electronics are all gone. They couldn’t survive the winds of time. Entrepreneurs often take risks in ways designed to improve their resilience to the pressures of time. (Being antifragile with respect to time doesn’t seem to be possible.)
Time, it seems, goes in cycles. There’s a natural rhythm of time. There are times for action and times for reflection. That’s what Taleb calls “barbell” situations – pure action followed by pure reflection.
In software development, we have iterative, or agile, development practices that are designed to allow us to take an action and observe the result. The goal of this strategy is to recognize the futility of long-range planning and instead opt for experience followed by a period of reflection and discovery. Sprint reviews are a way to get pure reflection on what has happened and to make tiny course corrections while they’re still possible – instead of being forced to make large corrections later.
Properly, barbells are strategies composed of a mixture of positions with high risk and open-ended upside and positions with low risk and strictly limited downside. That is, some of the investments are very risky and have a large possibility of gain, while the other investments have limited loss potential. These are placed in a portfolio together to cap overall risk while retaining the best positive upsides.
Agile software development does this by limiting the potential waste due to a bad direction – while accepting the overhead of frequent iteration. In coping with the pressures of time, it’s about taking action and risk in ways that a negative outcome doesn’t kill the person (or sink their financial ship).
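The barbell idea can be sketched in a few lines of Python. The 90/10 split, the safe rate, and the multipliers below are made-up numbers for illustration, not investment advice; the point is the asymmetry they produce.

```python
# Illustrative barbell: most capital in a safe asset, a small slice in
# high-risk, high-upside bets. All parameters are hypothetical.

def barbell_value(capital, safe_fraction=0.90, risky_multiplier=0.0,
                  safe_rate=0.02):
    """Portfolio value after one period.

    risky_multiplier is how the risky slice performed:
    0.0 = wiped out entirely, 10.0 = returned 10x, and so on.
    """
    safe = capital * safe_fraction * (1 + safe_rate)
    risky = capital * (1 - safe_fraction) * risky_multiplier
    return safe + risky

# Worst case: the risky bets go to zero, yet the loss is capped near 8%.
print(barbell_value(100_000, risky_multiplier=0.0))   # about 91,800
# The upside stays open: a 10x on the risky slice nearly doubles the total.
print(barbell_value(100_000, risky_multiplier=10.0))  # about 191,800
```

The asymmetry is the point: the downside is bounded by construction (you can lose at most the risky slice), while the upside scales with however well the risky bets do.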
Framing the Question
In Rising Strong (see my reviews Part 1 and Part 2), Brené Brown shares an important question when faced with an either-or dilemma: “Who benefits by forcing people to choose?” This is the heart of the ability to force the framing of a question. In Pitch Anything, Oren Klaff strongly advocates controlling the frame. In fact, it’s the lead point in his six-step process for pitching – and selling. While I don’t believe in forcing a frame on other people as Klaff does, I do recognize there’s a value in reframing a question.
Consider the old joke question: “Congressman, are you beating your wife less?” It’s a trap. If you answer “yes,” the presumption is both that you are beating your wife and that you needed to reduce the frequency. If you answer “no,” the presumption is that you’re still beating your wife. The frame here is comically bad. The only way out is for the congressman to change the frame by asking a question back – to create disruption – and then answer the question that was implied but not asked: “What do you mean? I’ve never beaten my wife.”
One of the tools that antifragile people use is to monitor the frame, look at what the other person’s frame implies, and recognize when it’s necessary to change it.
Amateurs and Scholars
It was in the 20th century that the tide began to turn. Teachers in higher education became researchers and scholars. Before that, they were simply there to share existing knowledge; they were less concerned with generating new knowledge and innovation. In this change, the scholars have been playing catch-up with the amateurs.
It was 1816, and a Scottish minister named Robert Stirling filed for a patent. The patent was for a heat economizer (now known as a regenerator), the heart of what became the Stirling engine. His patent grew out of a frustration. Steam engines were driving the industrial revolution (in concert with standardization). However, steam engines, by their nature, used pressure vessels – tanks. When those pressure vessels exploded or popped a rivet, as they were prone to do, there was often harm and sometimes death. Stirling was reportedly frustrated with this situation and started looking for a better way to power the industrial revolution.
The Stirling engine doesn’t require high pressure and therefore doesn’t carry the same risks as a traditional steam engine, so it was substantially safer. It also runs on a temperature differential, so any place where nature supplies a relatively constant temperature differential can serve as a source of power. Though Stirling created the first operating engine in 1818, the technology never caught on, because it was bulkier and less cost-effective than steam engines. However, it continues to see some use today because of these useful characteristics.
The point isn’t that the technology wasn’t widely adopted – the point is that citizen scientists like Stirling materially moved science forward. (See Saving Our Sons and Bold for more on citizen scientists.) Variations on his initial designs are being considered by NASA for some extended missions. Think about that for a moment: the work of a 19th-century minister may be powering the next 21st-century space mission. All from a citizen scientist who was frustrated.
Though it’s commonly believed that the Hippocratic oath contains the words “first, do no harm,” it contains no such instruction. Furthermore, most physicians don’t actually take this oath. Originally penned 2,500 years ago, it’s an oath sworn to Greek gods. Though the famous phrase is a myth, most physicians do believe in the concept that they should do no harm. That’s a good thing.
However, a problem arises when the physician either doesn’t know or doesn’t believe they’re causing harm. While this might seem like the kind of hair-splitting Ekman does over what constitutes a lie, it’s an important point. Ekman asserts in Telling Lies that if the person doesn’t know a statement is untrue, then it isn’t a lie. What happens when a physician doesn’t know that a treatment is harmful?
Ignaz Semmelweis and his colleagues did research on cadavers and then went to assist women in childbirth – and, in the process, substantially increased mortality. That is, until Semmelweis discovered the need to wash his hands – and laid the foundation for germ theory. He caused harm to his patients, though without his knowledge.
The problem is that the long-term negative effects of a behavior are often unknown or unknowable. Consider the shoe-fitting fluoroscope, a device frequently found in shoe stores that used x-rays to let customers (and parents of child customers) check a shoe for fit. The problem was that the intensity of the x-rays could cause genetic defects and cancer. Once this was widely reported, the devices quickly fell out of favor. However, they had been in use for over 30 years.
From a risk perspective, we should take risks only when the benefits are clear and necessary, so that if long-term negative effects are ultimately discovered, they don’t outweigh the benefits received. Experimental drugs for life-threatening illnesses make sense. Experimental drugs when the problems aren’t severe or threatening don’t: the upside of solving the problem isn’t very large, and the risk of negative health outcomes is potentially very large.
However, human nature has a bias towards action that often causes us to try medical treatments for a relatively minor problem when ignoring the problem would naturally cause it to go away.
Bias Towards Action
In most cases, we have a bias towards action. It’s this bias that causes doctors to overprescribe antibiotics. It’s why we want to do something – anything – when loved ones are sick. This bias towards action seems to serve the evolutionary purpose of saving us from the fate of the frog. (If a frog is placed in cold water that is slowly warmed, it will supposedly stay there until it’s boiled.) We need to accept the need to take action – but temper it with thought and reason. However, there’s no reason to hold back the urge to read Antifragile. You might just become a bit more antifragile in the process.