Most failures don’t happen because of one catastrophic mistake. They happen when tiny errors align perfectly, slipping past multiple layers of defence.
This is exactly what the Swiss Cheese Model explains.
It’s the reason behind the deadliest plane crash in history, why companies collapse overnight, and how cybersecurity breaches unfold.
And it’s why your biggest failure won’t be one massive mistake—it’ll be a series of small, unnoticed ones lining up at the worst possible time.
The Day Two Jumbo Jets Shouldn’t Have Been There
March 27, 1977. A thick fog swallowed Los Rodeos Airport on the island of Tenerife.
Two Boeing 747s, Pan Am 1736 and KLM 4805, prepared for takeoff.
The KLM captain advanced the throttles.
On the same runway, Pan Am was still taxiing.
A radio transmission crackled.
“We’re still taxiing down the runway, Clipper 1736!”
Seconds later, disaster.
The KLM jet, at takeoff speed, tore through the Pan Am aircraft. 583 lives were lost.
It remains the deadliest aviation accident in history.[1]
But here’s the kicker: neither plane was supposed to be there.
The Swiss Cheese Model explains why. And it doesn’t just happen in aviation. It happens in business and everyday life.
What Is the Swiss Cheese Model?
People think disasters happen because of one major failure.
But the reality? Disasters happen when multiple small failures align perfectly.
The Swiss Cheese Model explains this concept using layers of defence.
Here’s how it works:
- Imagine stacking slices of Swiss cheese.
- Each slice is a safeguard—rules, processes, or barriers to prevent failure.
- But every slice has holes, weak spots where mistakes slip through.
- Most of the time, the holes don’t align, so one safeguard catches the mistake.
- But when the holes line up perfectly, disaster slips through all the layers and failure becomes inevitable.
This is exactly what happened at Tenerife in 1977.
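The arithmetic behind the model is easy to sketch. Here is a minimal simulation (my own illustrative numbers, not from any accident data): each safeguard independently catches an error with some probability, and disaster strikes only when every layer misses at once.

```python
import random

def disaster_occurs(catch_probs, rng):
    """A disaster happens only if the error slips through every layer."""
    return all(rng.random() > p for p in catch_probs)

def failure_rate(catch_probs, trials=100_000, seed=42):
    """Estimate how often an error gets past all the safeguards."""
    rng = random.Random(seed)
    misses = sum(disaster_occurs(catch_probs, rng) for _ in range(trials))
    return misses / trials

# Three layers, each catching 90% of errors (illustrative numbers).
print(failure_rate([0.9, 0.9, 0.9]))  # close to 0.1**3 = 0.001
print(failure_rate([0.9]))            # a single layer misses ~10% of the time
```

Three mediocre layers beat one good one, because the holes rarely line up, which is exactly the model’s point.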
The Tenerife Disaster: A Case Study
The crash wasn’t caused by one mistake. It was a series of failures, all stacking up perfectly. Several factors contributed to the Tenerife tragedy:[2]
- A bomb threat forced both flights to divert to Tenerife, a small, understaffed airport[3] that wasn’t equipped for an overflow of jumbo jets.
- Thick fog reduced visibility to near zero; neither plane could see the other. It also caused the Pan Am crew to miss their assigned taxiway exit (the third one). As seen in the image above, the crash occurred near the fourth exit!
- No ground radar meant air traffic controllers had no way of tracking aircraft positions.
- A blocked radio transmission prevented Pan Am’s warning from reaching the KLM crew.
- Authority bias stopped KLM’s First Officer and Flight Engineer from challenging the captain’s premature takeoff decision.[4]
Individually, these issues might have been manageable. Together, they aligned to create a catastrophic outcome.
I read about the Tenerife disaster in Morgan Housel’s Same As Ever. In the book, Housel also wrote:
Big risks are easy to overlook because they’re just a chain reaction of small events, each of which is easy to shrug off.
So people always underestimate the odds of big risks.
Real-World Applications of the Swiss Cheese Model
The Swiss Cheese Model isn’t limited to aviation. It’s applicable in various fields:
- A bad morning isn’t just about waking up late. It’s the alarm that didn’t go off, the spilled coffee on your shirt, the unexpected traffic, and the urgent email waiting for you, all stacking up until you’re wondering if the universe is out to get you.
- A failed project at work isn’t just a missed deadline. It’s unclear goals, shifting priorities, miscommunication, and last-minute changes—stacking up until everything collapses.
- A company collapse isn’t about one bad decision. It’s often a mix of cash flow problems, leadership issues, and market downturns converging at the worst time.
- A cyberattack rarely succeeds because of one weak password. It’s usually a combination of outdated software, poor training, and a delayed security response that opens the door.
The danger isn’t in any single failure. It’s in their ability to compound into something bigger than the sum of their parts.[5]
Preventing Disasters: Mitigating the Risks
Since risk isn’t a single point of failure, the best way to prevent disaster is to design layers that don’t rely on luck.
Assume Risks Are Connected (Because They Are)
Small risks don’t stay small when they stack up.
In Tenerife, nobody accounted for how fog + no ground radar + communication failures + authority bias could combine.
Each was a known issue, probably considered low risk on its own. But because they weren’t treated as interconnected, they were never addressed holistically, and together they became high impact.
So instead of just fixing individual risks, ask: how might they interact?
The Only Way to Prevent Disaster? Assume It’s Coming.
The smartest way to prevent failure isn’t by fixing every flaw. It’s by designing systems that can survive failure.
The reason New Zealand managed COVID-19 better than most countries was that it didn’t rely on a single defence mechanism: border controls, quarantine, testing, and lockdowns worked as separate layers.
Instead of plugging every hole, build layers that don’t depend on each other. If one fails, the others should still stand.
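One way to see why independence matters (a hypothetical sketch with made-up numbers): if a single common cause, like the fog at Tenerife, can punch holes through several layers at once, stacking more of those layers buys far less safety than truly independent layers would.

```python
import random

def compare_failure_rates(trials=100_000, shared_cause_prob=0.1, seed=7):
    """Compare independent layers vs. layers sharing a common weakness.

    Illustrative numbers: each of three layers misses 10% of errors,
    but one shared cause (e.g. fog) can disable all three at once.
    """
    rng = random.Random(seed)
    independent = correlated = 0
    for _ in range(trials):
        # Independent layers: each must miss on its own.
        if all(rng.random() < 0.1 for _ in range(3)):
            independent += 1
        # Correlated layers: one shared event defeats all three together.
        if rng.random() < shared_cause_prob:
            correlated += 1
    return independent / trials, correlated / trials

ind, corr = compare_failure_rates()
print(ind)   # holes rarely line up by chance
print(corr)  # a common cause lines them up for you
```

Same three layers, but the correlated setup fails about a hundred times more often, which is why layers that share a weakness are barely layers at all.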
Strengthen Weak Links
A chain is only as strong as its weakest link. So, fix the weak parts. Don’t just add more layers.
After Tenerife, the aviation industry didn’t just add more checklists; it introduced Crew Resource Management (CRM), training crews to challenge authority when necessary.[6]
A security system with multiple checkpoints is useless if all the passwords are “123456.”
More layers help, but if every layer is weak, failure is still inevitable. Strengthen the foundation, not just the walls.
Build Buffers (Because Things Will Go Wrong)
When failures compound, the best defence isn’t perfection; it’s margin. Add breathing room. Add a margin of safety.
- If you’re planning a major project, assume delays and build extra time.
- If you’re investing, don’t put yourself in a position where one bad market day wipes you out.
- If you’re traveling, don’t cut it so close that a single delay means missing your flight.
Disaster happens when there’s no room for error. A little extra space turns major failures into minor setbacks.
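As a back-of-the-envelope sketch (hypothetical numbers of my own, not from the article): suppose a project has ten tasks, each planned at two days, and each has a 20% chance of slipping by a day. A plan with zero slack misses its deadline almost every time; a modest buffer absorbs most of the bad luck.

```python
import random

def deadline_miss_rate(buffer_days, trials=50_000, seed=1):
    """Estimate how often a 10-task project misses its deadline.

    Hypothetical numbers: each task is planned at 2 days with a 20%
    chance of slipping by 1 day. The deadline is the planned total
    (20 days) plus the buffer.
    """
    rng = random.Random(seed)
    misses = 0
    for _ in range(trials):
        total = sum(2 + (1 if rng.random() < 0.2 else 0) for _ in range(10))
        if total > 20 + buffer_days:
            misses += 1
    return misses / trials

print(deadline_miss_rate(0))  # no slack: almost any slip blows the deadline
print(deadline_miss_rate(4))  # a 20% buffer absorbs most of the slips
```

With zero buffer, any single slip out of ten tasks blows the deadline; a four-day buffer (20% margin) cuts misses to a few percent.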
The Swiss Cheese Model: Conclusion
In summary:
- Big failures don’t happen all at once. They happen when small, manageable risks go unnoticed until they align perfectly.
- Increase your layers, distribute them, and strengthen them.
- Expect small things to go wrong, and analyse how failures might stack before they do.
The best way to avoid disaster? Don’t wait for all the holes to line up. Close them before they do.
Footnotes:
1. The September 11 attacks, which killed 2,996 people, were deadlier. However, they are counted as terrorist attacks rather than aviation accidents (source).
2. All details have been summarised from the ALPA report on the crash.
3. Ebert, John David (2012). The Age of Catastrophe: Disaster and Humanity in Modern Times, p. 40.
4. The captain of the KLM flight was KLM’s Chief Flight Instructor, with his photo featured in the KLM magazine in 1977, so he was highly respected by staff. The First Officer and Flight Engineer flagged that they didn’t have takeoff clearance but were disregarded.
5. The Swiss Cheese Model has a blind spot: it shows how failures align, but it doesn’t ask why the same cracks keep appearing. For example, a pilot who misses a small miscommunication might be fatigued, or might work for an airline with poor training standards across the board.
6. Cooper, G. E., White, M. D., & Lauber, J. K. (Eds.) (1980). “Resource management on the flightdeck”, Proceedings of a NASA/Industry Workshop (NASA CP-2120).