🤔 The Nassim Taleb Starter Pack

Apr 04, 2020 · 7 min read

I've always had fun learning new ways of thinking, especially when the ideas turn out to be practical. If this is your sort of thing as well, you've come to the right place. Nassim Taleb has produced a gold mine of meaningful ways of thinking that have actionable impact on how you live your life.

I've been familiar with Taleb for some time now. He's a prominent voice on Twitter, for better and for worse. I understood his big ideas pretty well at a high level. What I didn't realize was how much I was still missing.

Admittedly, I still haven't made my way through all of Taleb's best-selling books, but I recently came across Alex Danco's summaries of the key concepts and I was hooked! If you have a little more patience, I recommend reading those summaries in full.

This post aims to provide a more concise version of these core ideas, with some commentary from yours truly sprinkled in.

Skin in the Game

"Skin in the game" is a phrase that gets thrown around a lot. We often hear it used in conversations related to incentives. A common example of this in Silicon Valley is stock options. If employees have a significant stake in the company, they supposedly have "skin in the game." This isn't quite right for a few reasons.

When Taleb talks about "skin in the game," he isn't talking about incentives. He's talking about survival. Small business owners don't have skin in the game because they are incentivized to run their business well; they have it because if they don't run it well, it dies.

This is a long-winded way to say that "skin in the game" isn't an incentive; it's a filter. This is more important than it seems.

What this means is that systems requiring skin in the game naturally eliminate bad performers. This leaves only the most effective people.

This built-in filter is useful. It's what guarantees that someone is a "real expert" rather than a "fake expert" who hasn't passed through the same filter. Taken a step further, Taleb argues that people shouldn't be in charge of anything important unless they are willing to bear the consequences. Unless they have skin in the game.

“The curse of modernity is that we are increasingly populated by a class of people who are better at explaining than understanding, or better at explaining than doing.” — Skin in the Game

So how do you use this to your advantage? You put yourself in situations where your survival is a direct function of your performance. If you're a startup, raise less money. If you're a creator, quit your day job. If you're building a habit, set up a meaningful penalty for poor execution.

In Seth Godin's The Dip, he explains how every new project has a "down period" somewhere in the journey. These tough times filter out most people.

Skin in the Game feels, in many ways, like a systematized version of The Dip. But systems with skin in the game don't depend on less effective performers tapping out on their own, as The Dip does. They tap them out automatically.

There's a flip side, though. If you can survive, if you can make it through, then you might just end up in a position of great worth. For a lot of people, that's a more than fair tradeoff to make.

Antifragility

Antifragility is probably the least well-understood of the Talebisms, and also the one that resonated with me the most.

First things first: antifragile does not mean "not fragile." It doesn't mean robust, durable, or able to withstand adversity. Instead, antifragile is more like negative fragility. Antifragile things need disorder to thrive, and will actively suffer if left at rest.

To solidify this concept, Taleb outlines a thought experiment. Let's say you have a box of champagne glasses that you want to ship. If the glasses are fragile, you might write "Do not shake" on the box. If the glasses are well protected, you probably wouldn't write anything. Now, what if the glasses were antifragile? In this case, you should write "Please shake" since with every shake or turn, the glasses are better off than they were.

Plenty of complex systems and living things in the world are antifragile: markets, democracies, and immune systems, for example. Without variance, they stagnate and die. With variance, they grow stronger. Disorder is a key ingredient in how they function.

So, why do antifragile things benefit from disorder? In antifragile systems, stressors serve as information; they resolve uncertainty instead of creating it. Antifragile systems don't need a plan. They just have to listen to stressors and react. Without stressors, antifragile systems don't know how to grow or what to do.

Antifragility is an operating state of growing through continuous reaction. It's like the opposite of predicting the future. You're not making forward-looking assumptions. You need a state change to react to. In the best antifragile systems, these reactions happen quickly and correctly.

“The irony of the process of thought control: the more energy you put into trying to control your ideas and what you think about, the more your ideas end up controlling you.” — Antifragile

When I started at Hugo, one of the two books on my desk was Antifragile. I didn't fully understand why at the time, but it makes sense to me now. In the context of startups, antifragility is everything.

Startups that aren't taking in stressors and reacting to information quickly and correctly don't grow and learn about their users in the same way — in most cases, they die.

The current COVID-19 situation is another great example of this. Some companies will take in stressors (there's more than enough to go around) and react to them, while others will stay in their ways and try to weather the storm. This is the difference between antifragility and robustness.

From a self-improvement lens, antifragility might just be the foundational concept to understand. If you're the type of person who is constantly seeking out new challenges, you're constantly getting back information about how the world works. By taking this information in and reacting to it, you improve over time. Congrats, you're antifragile.

Black Swan Events

Last but not least, Black Swan Events aren't just "something we didn't see coming." There's a little more to them. Taleb describes them as having three principal characteristics:

  • They are unpredictable. We ruled them out in our models for the future.
  • They are unprecedented in scale. Their magnitude is shocking.
  • They are explainable. In hindsight, we always see them coming.

Taleb goes on to introduce two types of environments: Mediocristan and Extremistan. In Mediocristan, probabilities follow a normal distribution; imagine the salaries of dentists around the world. In Extremistan, tail events are understood and expected; we expect some music stars' salaries to be bonkers.
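To make the contrast concrete, here's a toy simulation of my own (not from Taleb's books): dentist-like salaries drawn from a normal distribution versus star-like incomes drawn from a fat-tailed Pareto distribution. The specific parameters are illustrative assumptions. In Mediocristan, the single largest observation is a rounding error on the total; in Extremistan, it can dominate it.

```python
import random

random.seed(42)

# Mediocristan: dentist-like salaries, normally distributed.
# No single draw can stray far from the mean.
dentists = [random.gauss(150_000, 30_000) for _ in range(10_000)]

# Extremistan: star-like incomes from a fat-tailed Pareto distribution
# (shape 1.1, scaled). A single draw can dwarf everything else.
stars = [random.paretovariate(1.1) * 30_000 for _ in range(10_000)]

def top_share(xs):
    """Fraction of the total held by the single largest observation."""
    return max(xs) / sum(xs)

print(f"Mediocristan top share: {top_share(dentists):.4%}")
print(f"Extremistan top share:  {top_share(stars):.4%}")
```

Run it a few times without the seed and you'll see the Extremistan number jump around wildly, which is exactly the point: in fat-tailed domains, the biggest event you've seen so far tells you little about the biggest event possible.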

Black Swan Events are generally high-magnitude events in domains that we thought belonged to Mediocristan (we had never seen evidence otherwise), but actually belong in the Extremistan space.

Sometimes this is because the environment changes in some fundamental way (new technology, new laws), but more likely, the environment was always that way. We were wrong, not the universe.

“Categorizing is necessary for humans, but it becomes pathological when the category is seen as definitive, preventing people from considering the fuzziness of boundaries, let alone revising their categories.” — The Black Swan

Where I'm most interested is in preparing for Black Swan Events. Ideally, I'd like not to be blindsided by these high-magnitude events. As we've seen, that's easier said than done. We are most at risk when we've studied some corner of the world, we have a lot of data telling us "this is how this system behaves," and then it doesn't.

With this in mind, it seems like the best way to avoid being susceptible to Black Swan Events is to work from first principles. Don't take generally accepted data and assumptions as law. When one part of the model is wrong, the whole thing comes crashing down. Start from the ground up and you have a greater chance of catching these mismatches between your model of the world and the world itself.

Of course, you can't do this for every nook and cranny of the world. You have to be thoughtful. You have to pick your spots. But hey, such is life.


Thanks for reading! If you enjoyed this post and you’re feeling generous, perhaps follow me on Twitter. You can also subscribe in the form below to get future posts like this one straight to your inbox. 🔥
