The Failure of Risk Management

Why It’s Broken and How to Fix It

Wiley

Recommendation

As capitalist economies careen from one crisis to the next, it’s tempting to believe that risk management is some sort of dark art. Risk consultant Douglas W. Hubbard agrees that the economic collapse of 2008 stained his entire industry. But, he argues persuasively, the problem isn’t that risk-management tools don’t exist or don’t work; the problem is that people too rarely use effective tools. Combining plenty of real-world examples and a clear writing style, Hubbard creates an accessible user’s guide to risk management. To his credit, he doesn’t puff up his own formulation, mentioning it as one of several Monte Carlo approaches. He lays out a strategy that’s easy to follow: Start by adopting a skeptical mind-set, invest in some software, then devote time and energy to gauging the chances that “something bad could happen.” BooksInShort recommends his instructive manual to executives and investors seeking insight about managing risk.

Take-Aways

  • Much of today’s risk management is no more rigorous than astrology or soothsaying.
  • Simply put, risk management means “being smart about taking chances.”
  • To mitigate risk, you can “avoid, reduce, transfer” or “accept” it.
  • Risk managers use various tools, from “gut feel” to “probabilistic models.”
  • The four archetypal risk managers are “actuaries, war quants, economists and management consultants.”
  • The concerns facing the field of risk management include confusion over terminology, human error and subjective scoring.
  • The human mind’s capacity for “overconfidence” hampers effective risk management.
  • “Monte Carlo simulations” are a valuable and accessible risk-management tool.
  • A number of software companies and consultants sell Excel-based Monte Carlo models that you can learn to use in a few days.
  • Successful risk managers take a scientific approach to their jobs: They demand evidence, test constantly, accept criticism and keep an open mind.
 

Summary

Sometimes It’s Science; Too Often It’s Hocus-Pocus

For all its faux sophistication, much of modern risk management is no more rigorous than astrology or soothsaying. To protect their organizations from calamity, risk managers often rely strictly on common sense or fatally flawed models. As a result, managers often overlook or underestimate risks ranging from the financial crash of 2008 to an IT project that doesn’t work or a product launch that falls flat. Risk means that, simply put, “something bad could happen.” Management is the art and science of “using what you have to get what you need.” Join these two perspectives to get the definition of risk management: “Being smart about taking chances.”

“A weak risk-management approach is effectively the biggest risk in the organization.”

Yet far too many smart managers are unwittingly dumb about risk management. Without a feel for statistics and lacking the skeptical soul of a scientist, they’re all too willing to accept their organizations’ faulty practices or a consultant’s bogus recommendations. Three reasons account for the widespread failure of risk management: First, no one measures whether risk management tactics work; you assume that if nothing bad happens, it’s because of prudent risk management. Second, too many biased or weak methods have worked their way into risk management. And third, most risk-management practices omit scientifically proven, mathematically valid ways to measure risk. The 2008 financial crisis is just one example of seemingly sophisticated players overlooking huge risks.

“‘How do we know our risk management efforts work?’ should be the single most persistent question of all those who manage risks.”

Risk managers use a variety of tools, from the simplest to the most complex:

  • “Gut feel” – This unscientific method, based on intuition, is the oldest risk management approach of all. It uses no measurements, probabilities or other mathematical approaches.
  • “An expert audit” – Outside professionals typically conduct these assessments using checklists and scoring.
  • “Simple stratification methods” – These rankings rate risks by using various systems such as “green-yellow-red,” “high-medium-low,” color-coded “heat maps” and matrices.
  • “Weighted scores” – This adds variables, adjusted by specific values, to scoring.
  • “Traditional financial analysis” – An analyst adjusts potential returns for risk by applying a discount rate. Other tools include “best- and worst-case scenarios.”
  • “A calculus of preferences” – Decision analysis tools apply additional rigor to risk management, but they invariably depend on human judgment.
  • “Probabilistic models” – Insurers, financial firms and engineers analyze probabilities to gauge risk. While hardly perfect, these systems are constantly improving.
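
The gap between the simpler tools and probabilistic models can be sketched in a few lines. The figures below are hypothetical, not from the book: an ordinal heat-map label carries no units, while a probabilistic statement of the same belief can be simulated and audited.

```python
import random

# Hypothetical illustration: the same risk expressed two ways.
# An ordinal heat-map label has no units and cannot be meaningfully
# added to, or compared with, other scores.
ordinal_score = "red"  # e.g., likelihood = high, impact = high

# A probabilistic model states the same belief in auditable terms:
# a 25% chance per year of an outage costing between $1M and $4M.
random.seed(42)
trials = 100_000
losses = [
    random.uniform(1e6, 4e6) if random.random() < 0.25 else 0.0
    for _ in range(trials)
]
expected_annual_loss = sum(losses) / trials
print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
```

Unlike the color label, the probabilistic version can be back-tested against what actually happens year over year.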

Responses to Risk

Along with ways to measure risk, managers have ways to mitigate risk. These include:

  • “Avoid” – Not taking a risk is cautious; however, that isn’t always an option, and avoidance can create risks of its own. A drug company that abandons R&D mitigates the risk of failed drug trials – but heightens the risk of losing future revenue streams.
  • “Reduce” – A manager might lessen the possibility of bad things happening by, for instance, investing in safety programs. Training, security systems and emergency planning are all operational tactics for risk reduction.
  • “Transfer” – You can delegate risk to someone else by purchasing insurance or by adding language to your contracts that shifts the risk to counterparties or customers.
  • “Accept” – Take on the risk and proceed with business as usual.

“When it came to designing a method for assessing and managing risks, these scientists and engineers developed an approach with no more scientific rigor behind it than an ancient shaman reading goat entrails to determine where to hunt.”

Risk managers tend to fit into four archetypes – the “four horsemen” of the discipline:

  1. “Actuaries” – The practice of actuarial science emerged in the 19th century as a way for insurance companies to manage risk. Professional standards constrain today’s actuaries to make mathematically sound calculations and to report their findings honestly. The actuarial mandate, however, is limited to insurance, a lesson that insurance giant AIG learned during the financial meltdown of 2008. AIG collapsed not because of the insurance policies it issued but because of the credit default swaps (CDSs) in which it engaged. Common sense dictates that CDSs are insurance, but technically and legally, they’re not, so they were outside the purview of AIG’s actuaries.
  2. “War quants” – This group of engineers and scientists modeled enemy operations and created new weapons during World War II. The war quants – so nicknamed because of their focus on quantitative analysis – started with such problems as combating German submarines or estimating Germany’s manufacturing output. As the war progressed, physicists working on the Manhattan Project faced the challenge of modeling fission reactions. They came up with “Monte Carlo simulations,” which adjusted for the uncertainty of nuclear reactions by using probabilities to guess at the trajectories of neutrons. One physicist, John von Neumann, helped develop both Monte Carlo models and game theory, which posited that many decisions in business were like contests between a single player and nature, an unpredictable opponent that could behave in irrational ways. Game theory later evolved into decision analysis. Two common themes bind all these methods: mathematical rigor and their proponents’ skepticism toward the squishy approaches that often pass for risk analysis.
  3. “Economists” – Only after the war quants emerged did economists begin to apply math to matters of risk. Some began to study uncertainty in terms of probabilities. Modern portfolio theory, options theory and behavioral economics all sought to apply empirical observations to assumptions of human rationality. While economists have made great strides toward understanding risk more deeply, their models still tend to underestimate the likelihood of financial disasters.
  4. “Management consultants” – The least respectable of the four horsemen, management consultants use emotion to sell solutions that don’t work particularly well. Consultants are all too eager to play the “FUD” card – to use “fear, uncertainty and doubt” to goose the demand for what they’re selling. Compared to the methods actuaries and economists use, consultants’ simple systems often are little more than “crackpot solutions.”

A Field Rife with Confusion and Errors

It’s no wonder that risk management, as a discipline, doesn’t always manage risk well. Flawed thinking and sloppy approaches often lead to a spotty record. The field faces “seven challenges”:

  1. “Confusion regarding the concept of risk” – A widespread misconception distorts what risk really is. Too often, managers use the words “risk” and “uncertainty” interchangeably. The latter simply means not knowing the outcome of an event in advance. A coin flip comes with uncertainty, as does a weather forecast or a football game. Risk includes uncertainty, but it puts a monetary value on an outcome. For instance, an energy-company assessment might conclude, “There is a 40% chance the proposed oil well will be dry, with a loss of $12 million in exploratory drilling costs.” “Volatility” is another word confused with “risk,” but, like uncertainty, it doesn’t necessarily create risk.
  2. “Completely avoidable human errors in subjective judgments” – Intuition is the most basic tool for risk management, but it’s imperfect. The mind isn’t suited for the bloodless calculations that go into assessing probabilities, because it falls prey to biases and bad math. The relatively new field of “judgment and decision-making psychology” documents such mistaken thinking. People aren’t good at distinguishing patterns from randomness. If an imaginary risk is more vivid, you are more apt to overplay its likelihood; for instance, air travelers will pay more for terrorism insurance than for policies covering other risks. Perhaps the most fatal error is “overconfidence.” Humans believe their predictions, even when those predictions prove wrong. People lie to themselves, insisting they knew all along something bad might happen. Overconfidence also shapes management decisions: Often, a CEO or investor enjoys a success but can’t replicate it, suggesting that chance rather than skill was the reason for the win.
  3. “Entirely ineffectual but popular subjective scoring methods” – Practitioners often develop risk models that are outside accepted decision analysis norms. Ordinal scales, in particular, can be misleading, because they measure relative ranking, not “actual magnitudes.” On the typical risk matrix that uses a scale of one to four, a risk scored four isn’t four times as severe as a risk scored one. Managers compound this error when they base a risk score on surveys that ask respondents to rate risk with phrases such as “very likely” or “very unlikely,” sentiments that don’t account for human overconfidence.
  4. “Misconceptions that block the use of better, existing methods” – Executives sometimes wave off risk management measures because they insist that their situation is unique and no one could possibly measure it, or that the uncertainties they face are too ambiguous to gauge. Such thinking is simply wrong and forestalls better models.
  5. “Recurring errors” – Managers may perform a risk assessment but never match what really happened against what the model predicted. Is your risk model any good? If you don’t check it out through “back-testing,” you’ll never know. Firms also fall victim to the “risk paradox” – the nonsensical practice of managing for small risks but ignoring big ones. For example, before the 2008 crisis, most banks evaluated individual loans, but many didn’t account for how risks affected their overall portfolios.
  6. “Institutional factors” – Risk managers are often isolated within and outside their firms. These professionals need to involve themselves more fully in their businesses and among their peers to share common issues.
  7. “Unproductive incentive structures” – While risk management is everyone’s job, few executives get points or bonuses for getting it right. Too much of what passes for risk management is simply “compliance and the use of so-called best practices,” which can be cold comfort in an emergency.
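
The oil-well assessment in the first challenge above shows what separates risk from mere uncertainty: the outcome carries a monetary value, so it reduces to a simple expected-loss calculation. A minimal sketch, using the figures from that example:

```python
# The energy-company example from the text: a 40% chance the proposed
# well is dry, with a loss of $12M in exploratory drilling costs.
p_dry = 0.40
loss_if_dry = 12_000_000

# Risk stated as an expected loss -- a monetary value, not just
# uncertainty about which outcome will occur.
expected_loss = p_dry * loss_if_dry
print(f"Expected loss: ${expected_loss:,.0f}")  # $4,800,000
```

A coin flip is uncertain; only when a dollar amount rides on the outcome does it become a risk that can be compared, priced, and managed.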

Tips for Fixing Risk Management

If you make risk management a priority, improving your methodology is easy. To start, forget relying on your gut instincts or hiring a consultant to create a mathematically suspect scorecard. Instead, employ “calibrated probabilities” within Monte Carlo simulations. Successful risk managers learn to start with “decomposition,” that is, breaking a risk down into individual pieces. Complex math can be intimidating, but many tools are available that take care of the number crunching and let the risk manager focus on managing risk. It takes just a few days to train someone to use an Excel-based Monte Carlo program, such as Oracle’s Crystal Ball, Palisade Corporation’s @Risk and Douglas W. Hubbard’s Applied Information Economics (AIE).
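
The "decomposition" idea pairs naturally with a Monte Carlo simulation: break an outcome into separately estimated pieces, give each a calibrated range, and simulate. The sketch below is a hypothetical illustration, not the book's AIE method; all figures are invented, and each calibrated 90% confidence interval is assumed to follow a normal distribution.

```python
import random

random.seed(0)

def calibrated_normal(low, high):
    """Sample from a normal distribution whose 90% confidence
    interval is (low, high): mean at the midpoint, with the
    interval spanning +/- 1.645 standard deviations."""
    mean = (low + high) / 2
    sd = (high - low) / (2 * 1.645)
    return random.gauss(mean, sd)

# Decomposition: a project's net benefit broken into pieces,
# each with its own (hypothetical) calibrated 90% CI.
trials = 100_000
shortfalls = 0
for _ in range(trials):
    revenue_gain = calibrated_normal(1.0e6, 5.0e6)
    cost_savings = calibrated_normal(0.2e6, 1.0e6)
    project_cost = calibrated_normal(1.5e6, 4.0e6)
    if revenue_gain + cost_savings - project_cost < 0:
        shortfalls += 1

print(f"Chance the project loses money: {shortfalls / trials:.1%}")
```

The output is a probability of loss that can be back-tested and refined, which is exactly what a color-coded scorecard cannot offer.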

“Prediction is very difficult, especially about the future.” (Niels Bohr, Nobel physicist)

As a risk manager, approach your job as if you were a scientist: Demand proof that something works and be willing to change your mind as the evidence dictates. Just because you’ve bought some software and run a model doesn’t mean your job is done. Risk managers know that models are imperfect, so maintain a scientist’s devotion to skepticism and peer review. As you build and use your model, constantly test it and gauge the results. Keep an open mind – your model is quite likely to deliver bad news or surprises. Other strategies for successful risk management include:

  • Stop making excuses – You may think that your situation is so uncommon that it’s impossible to model or that you don’t have enough data. You can assess just about anything, and your organization probably has enough data to generate a useful model.
  • Don’t demand perfection – No model will be flawless, but you can build a model that’s more accurate and useful than whatever method you’re using now.
  • Expand your time horizons – Look beyond the typical five-year increments in your risk modeling. Even an event that takes place only once every several decades can have a disproportionate impact on your business.
  • Make risk management a priority – Build risk assessment into your organization’s infrastructure. Assign a “chief risk officer” who will model risk, back-test models and adjust them as the world changes.

About the Author

Risk consultant Douglas W. Hubbard is the author of How to Measure Anything: Finding the Value of Intangibles in Business.