5 ways to evolve the modeling process after COVID-19

For the past few months, I’ve somewhat obsessively tracked COVID-19 developments – watching as we chip away at the uncertainty of this novel virus. I’m fascinated by the process of moving from total uncertainty to a point where this pandemic is understood and solved.

I’m a risk analyst, so it’s my nature to focus on uncertainty. I measure uncertainty, and on good days I even reduce uncertainty. I always considered risk analytics forward-looking – because what’s uncertain is the future. But it wasn’t until this crisis that I realized it’s really experience, or what we’ve observed, that is the obsession of the risk analyst. It’s experience that helps us understand risk, make decisions and optimize outcomes.

So, it’s very unnerving in this current crisis to feel that our experience is suddenly so lacking. The rapid spread of COVID-19 has had such an unimaginable impact and has left us overwhelmingly isolated in the present. We’re moving as quickly as possible to incorporate the current environment into our decision-making – but it is a lot of work and a lot of pressure.

Our immediate need is to incorporate our current experience into our models and decisions as quickly as possible. In doing this, we’d like to achieve not only near-term improvement, but a more resilient process going forward. Here are five key considerations for evolving the modeling process:

  • Welcome new data.
  • Incorporate expert judgement.
  • Analyze the hypothetical.
  • Build for fast flexibility.
  • Monitor for early warnings.

Welcome new data

Responding to COVID-19 means looking at new data sources for our models so we can capture the way things are changing and the rapid pace of change. Models must accommodate the evolving business reality.

Many business models rely on inputs that are observed infrequently, like monthly or quarterly. Such is the case for a lot of macroeconomic data. This generally hasn’t been an issue in the past – but it’s a real problem now when we’re seeing sudden, and massive, economic shocks.

For more responsive models, consider changing data inputs to something that updates more frequently. For example, you could use weekly jobless claims for unemployment. If that’s not fast enough, consider using mobility indices that indicate how the virus may spread.

Mobile phone data, transactional data, news and social media data are all new data sources with the potential to give a more current view than traditional model inputs.
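
One practical hurdle with faster data is frequency mismatch: a model built on quarterly or monthly macroeconomic inputs can’t directly consume a weekly series. Here is a minimal Python sketch, using pandas and entirely illustrative numbers, of aligning a slow-moving unemployment series with weekly jobless claims on a common weekly grid:

```python
import pandas as pd

# Hypothetical quarterly unemployment rates (illustrative values only)
quarterly = pd.Series(
    [3.6, 3.8, 13.0],
    index=pd.PeriodIndex(["2019Q4", "2020Q1", "2020Q2"], freq="Q"),
    name="unemployment_rate",
)

# Hypothetical weekly jobless claims in thousands, a faster-moving proxy
weekly_claims = pd.Series(
    [220, 280, 3300, 6600, 5200],
    index=pd.date_range("2020-03-06", periods=5, freq="W-FRI"),
    name="jobless_claims",
)

# Forward-fill the quarterly series onto the weekly grid so both
# inputs line up on the same, faster timestamps
weekly_unemployment = (
    quarterly.to_timestamp(how="end").reindex(weekly_claims.index, method="ffill")
)

model_inputs = pd.concat([weekly_unemployment, weekly_claims], axis=1)
print(model_inputs)
```

Forward-filling the slow series is the simplest alignment choice; interpolation or a nowcasting model are alternatives when a step function is too crude.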

Incorporate expert judgement

As you attempt to trace a chain of events from the spread of the virus itself to its societal and economic impacts, you find a tremendous amount of uncertainty. Given the amount of uncertainty, and the difficulty in mapping the crisis impact to model inputs, it’s not a bad time to turn to expert judgement. Many firms have been doing this.

Prior to the current crisis, the trend was to move away from expert judgement in favor of data-driven models. However, no model trained on historical data could have anticipated the current crisis. An analyst familiar with the model’s use and well-read on current events may be able to supply a more accurate override. It is much faster to incorporate a manual override than to wait on data or retool a model.

The model override and the basis for the override should be transparent and documented within the modeling process. The original model should be kept for comparison with the override as actual outcomes are available. As much as possible, the thought process for the override should be incorporated into the modeling process to inform future decisions.
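
As a sketch of what “transparent and documented” can mean in practice, here is a minimal Python structure that stores the original model output next to the override, its rationale, and who applied it. The class and field names are hypothetical, not from any particular framework, and the numbers are purely illustrative:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class OverrideRecord:
    """Keeps the original model output next to the analyst override,
    with the rationale, so the two can be compared as outcomes arrive."""
    model_output: float
    override_value: float
    rationale: str
    analyst: str
    applied_on: date = field(default_factory=date.today)

    @property
    def adjustment(self) -> float:
        # Size of the expert adjustment relative to the base model
        return self.override_value - self.model_output

# Hypothetical example: the model predicts a 2% default rate and the
# analyst overrides to 6% based on lockdown severity
record = OverrideRecord(
    model_output=0.02,
    override_value=0.06,
    rationale="Lockdowns not reflected in training data; align with Q2 scenario",
    analyst="jdoe",
)
print(f"Override applied: {record.adjustment:+.2%}")
```

Persisting records like this makes it straightforward to back-test overrides against realized outcomes and to fold the analyst’s reasoning back into the modeling process later.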

Analyze the hypothetical

The modeling process must be able to run ad hoc scenarios. It must also allow you to stress and challenge assumptions throughout the model. Working with the hypothetical lets us build experience without the pain of actual impact.

Before starting, carefully trace your models and identify all assumptions. For example, are you assuming certain inputs can only take positive values? When oil prices dropped below zero in April, many models failed. Stress tests should target your key assumptions.

Regularly run scenarios that, while perhaps unrealistic from an historical standpoint, show you the limitations of your models. Stressing model inputs by various magnitudes is generally straightforward. Other assumptions, however, may be more deeply ingrained in the model. The value of stress testing is not just to see how decisions are affected, but to see how well models do when assumptions no longer hold.
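
To make the oil-price example concrete, here is a small Python sketch of stress testing a toy valuation model whose hidden assumption is that prices are positive. The model and numbers are purely illustrative:

```python
import numpy as np

def exposure_model(oil_price: np.ndarray) -> np.ndarray:
    """Toy valuation model that implicitly assumes positive prices:
    it takes the logarithm of its input."""
    return 100 * np.log(oil_price)

# Stress the input across magnitudes, including the "impossible"
# negative-price scenario seen in April 2020
scenarios = np.array([60.0, 30.0, 10.0, 1.0, -5.0])

with np.errstate(invalid="ignore", divide="ignore"):
    results = exposure_model(scenarios)

for price, value in zip(scenarios, results):
    status = "OK" if np.isfinite(value) else "MODEL BREAKS"
    print(f"oil = {price:6.1f} -> {status}")
```

The negative-price scenario produces a non-finite result, exposing the ingrained assumption in a controlled test rather than in production during a real market move.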

It’s good to be prepared to run hypothetical scenarios in a crisis. It allows you to simulate responses and identify optimal actions. And it enables you to quickly incorporate new data and expert judgement to assess outcomes.

Build for fast flexibility

One thing we’ve learned from this crisis is that things can change in big ways much faster than we thought. Prior crises have not moved the economy anywhere near the magnitude or speed that we’ve seen with COVID-19.

Here are tips for building a more responsive modeling process:

  • Continue to develop advanced data-driven models. These models adapt faster to sudden changes due to their non-linearity. Many machine learning models can take in large volumes of quickly moving data. Unsupervised machine learning models are a good complement in most modeling processes to identify anomalies in real-time.
  • Integrate with good technology. Changing production models in business is impractical during a crisis. But keeping existing models and enabling things like self-service access to results, automated monitoring, and model governance can improve efficiency and reliability.
  • Ensure transparency – it’s key to quickly adapting models. Seeing the logic of decisions, noting how decisions get made – and against what assumptions and inputs – is critical to knowing how to extend or change a model as circumstances demand.
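
As an illustration of the first point, here is a minimal sketch of unsupervised anomaly detection using scikit-learn’s IsolationForest, fit on synthetic “normal” history and asked to flag crisis-scale observations. All of the data here is made up for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical daily model inputs (e.g., unemployment rate, weekly
# claims in thousands): 200 "normal" days plus two crisis-scale shocks
normal = rng.normal(loc=[3.5, 250.0], scale=[0.2, 20.0], size=(200, 2))
shocks = np.array([[14.7, 6600.0], [13.0, 5200.0]])
data = np.vstack([normal, shocks])

# Fit on normal history only; predict() returns -1 for anomalies, 1 otherwise
detector = IsolationForest(contamination=0.02, random_state=0).fit(normal)
flags = detector.predict(data)

print("flagged rows:", np.where(flags == -1)[0])
```

Run continuously against incoming data, a detector like this complements the production models by raising a flag when inputs leave the region the models were trained on.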

Monitor for early warnings

Finally, identify where you can get early notice that something is changing or going wrong. Waiting for model performance metrics – or worse, for losses to materialize – is too late. Instead, monitor the data that goes into models. Monitor leading indicators. For the COVID-19 crisis, hotel occupancy rates, movie ticket sales and cell phone location data are some of our early indicators.

When we build models, we base them on experience or historical data. As the world changes, our models may not know what to do. We need to observe how individual variables and the correlations between variables change. You can quantify this and automate the process to enable continuous monitoring.
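
Here is a minimal Python sketch of that kind of monitoring: a rolling correlation between two synthetic inputs that decouple partway through the sample, with an alert when the correlation drops below a threshold. The data and the threshold are illustrative:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Hypothetical daily series: two inputs that move together in the first
# regime, then decouple after day 80 (a crisis-style correlation break)
n = 120
base = rng.normal(size=n)
x = base + rng.normal(scale=0.3, size=n)
y = np.where(np.arange(n) < 80, base, rng.normal(size=n)) + rng.normal(scale=0.3, size=n)
df = pd.DataFrame({"x": x, "y": y})

# Rolling correlation as a simple drift signal; the threshold would be
# chosen from historical experience with the pair
rolling_corr = df["x"].rolling(30).corr(df["y"])
alerts = rolling_corr < 0.5

print("first alert at day:", alerts.idxmax() if alerts.any() else None)
```

The same pattern extends to distribution-shift measures on individual inputs; the point is that the alert fires from the data itself, well before model performance metrics or losses could reveal the problem.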

There are huge benefits to early warnings, including the ability to mitigate losses.

Aim for long-term resiliency

Lessons learned from the coronavirus prove the need to incorporate our experience into our decisions in a timely fashion. We’re learning from this crisis just how fast that process must be at times.

To achieve stability and resiliency, modeling processes must be transparent and well-governed. Models must be designed to reveal their logic and limitations and be flexible enough to receive new data or expert judgement. Only then can they incorporate the vital lessons that come only from experience.

Learn how SAS can help you adapt and plan for the future

About Author

Katherine Taylor

AI Specialist, Risk Analytics

Katherine Taylor specializes in the applications of advanced analytics in financial services and risk management. Her main area of interest is the adaptation of machine learning methods into financial risk management, considering the analytics advantages and addressing practical hurdles around model interpretability and governance. Prior to joining SAS, Katherine was a quantitative analyst for a large electric utility. She holds degrees in economics, political science, and financial mathematics.

2 Comments

  1. Hi Katherine,

    Nice Article, Thanks for Sharing.
    My question is: How will market stabilization affect the outlook after the COVID-19 crisis?

    Thank You,
    Shamant.

    • Katherine Taylor

      Market stabilization measures, including policies like payment deferment, stimulus checks and loan/rent forgiveness, are most commonly incorporated into models as expert judgement overlays. Alternatively, the model may be parameterized to capture the impact or to indicate a regime switch. Scenario analysis is good for this. In the case of machine learning models, market stabilization policies may also be a form of concept drift, forcing the modeler to revisit and relabel the model's training data. Modelers should make note of the way policies have been incorporated so that they can revert models when policies end.
