David Tennenhouse
Contributing writer

5 essential guard rails for keeping ML models on track

Opinion
21 Dec 2021 | 5 mins
Artificial Intelligence | Emerging Technology | IT Leadership

As machine learning makes more use of deep neural networks, businesses are increasingly dependent on a technology that experts don’t fully understand. Guard rails are required to ensure safe and predictable operating environments.

Credit: Thinkstock

There is no question that AI and machine learning (ML) will play an increasingly vital role in the development of enterprise technology and support a wide range of corporate initiatives over the next several years.

Worldwide revenues for the AI market, including software, hardware, and services, are expected to reach $341.8 billion this year and grow at an annual rate of 18.8% to break the $500B mark by 2024, according to market researcher IDC. And, by 2026, 30% of organizations will routinely rely on AI/ML-supported insights to drive actions that could result in a 60% increase in desired outcomes (indeed, 30% may be a low estimate).

Despite the optimism, the dirty secret of the deep neural network (DNN) models that are driving the surge in ML adoption is that researchers don’t understand exactly how they work. If IT leaders field a technology without understanding the basis of its operation, they risk a number of bad outcomes. The systems could be unsafe in the sense that they can be biased, unpredictable, and/or produce results that their human operators cannot easily understand. These systems can also contain idiosyncrasies that adversaries will exploit.

When ML is applied to mission-critical applications, CIOs and their engineering teams face a difficult trade-off: the better results ML can offer versus the risk of bad outcomes. This can even become a moral dilemma. Suppose a DNN used to process medical images can recognize certain forms of cancer better than the typical practitioner. Are we morally obliged to field this technology, which can have life-saving effects, even if we don’t know how it achieves its results?

A long-term goal of some ML researchers is to develop a more complete understanding of DNNs, but what should practitioners do between now and then, especially when bad outcomes can involve risk to life and/or property?

Establishing machine learning guard rails

Engineers have faced similar situations in the past. In the early days of aeronautics, for example, engineers did not have a complete understanding of the underlying physics or the analytical tools to evaluate aircraft designs. To compensate for that lack of understanding, aeronautics engineers and test pilots would identify the operating envelope within which an aircraft could be safely flown and then take steps, through flight control systems, pilot training, and so on, to ensure the aircraft was operated only within that safe envelope.

That same approach of developing a safe and predictable operating envelope can be applied to ML by creating guard rails that keep ML models on track and minimize the possibility of unsafe and/or unpredictable outputs. The following are some suggested approaches for establishing ML systems with greater safety and predictability:

1. Identify the range of model outputs that are considered safe. Once the safe output range has been identified, we can work our way backwards through the model to identify a set of safe inputs whose outputs will always fall within the desired envelope. Researchers have shown this analysis can be done for certain types of DNN-based models.

2. Install guard rails “in front” of the model. Once you know the safe range of inputs, you can install a software guard rail in front of the model to ensure that it is never shown inputs that would take it to an unsafe place. In effect, the guard rails keep the ML system under control. Even though we don’t know exactly how the model arrives at a specific output, we will know that the outputs are always safe (a minimal sketch of such an input guard rail appears after this list).

3. Focus on models that generate predictable results. In addition to keeping the outputs within a safe range, we also want to know that the models don’t produce results that swing wildly from one part of the output space to another. For certain classes of DNNs, it is possible to ensure that if an input changes only by a small amount, the output will change proportionately and will not jump unpredictably to a completely different part of the output range (see the smoothness spot check sketched after this list).

4. Train models to be safe and predictable. Researchers are finding ways to subtly change the training of DNNs so that they become amenable to the above analysis without compromising their pattern-recognition capabilities (one illustrative training-time constraint is sketched after this list).

5. Remain agile. Since this is a fast-moving space, the key is to build guard rails into the ML architecture, while retaining the agility to evolve and improve them as new techniques become available.
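
To make the first two points concrete, here is a minimal sketch in Python of an input guard rail. The per-feature bounds, the GuardedModel wrapper, and the DummyModel stand-in are all hypothetical illustrations rather than a prescription from any particular library; the assumption is that the safe input region has already been identified offline by the kind of analysis described in point 1.

```python
import numpy as np

# Hypothetical per-feature bounds, derived offline by the analysis in point 1:
# inputs inside these ranges have been verified to keep the model's outputs
# inside the safe envelope.
SAFE_INPUT_LOW = np.array([0.0, -1.0, 10.0])
SAFE_INPUT_HIGH = np.array([1.0, 1.0, 90.0])


class DummyModel:
    """Stand-in for a real DNN; anything with a predict() method would do."""
    def predict(self, x):
        return float(np.sum(x))


class GuardedModel:
    """Wraps a model and refuses inputs outside the verified safe region."""
    def __init__(self, model, low, high):
        self.model = model
        self.low = low
        self.high = high

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        if np.any(x < self.low) or np.any(x > self.high):
            # Outside the verified envelope: don't trust the model here.
            raise ValueError("Input outside verified safe range; route to a fallback or human review.")
        return self.model.predict(x)


guarded = GuardedModel(DummyModel(), SAFE_INPUT_LOW, SAFE_INPUT_HIGH)
print(guarded.predict([0.5, 0.0, 42.0]))   # inside the envelope: answered
# guarded.predict([2.0, 0.0, 42.0])        # outside: rejected by the guard rail
```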
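
Point 3 can also be spot-checked empirically. The hypothetical helper below perturbs an input many times and verifies that the output never changes by more than a chosen multiple of the input change. It is a sampled sanity check standing in for a formal Lipschitz-style guarantee, not a proof, and the function name and thresholds are illustrative assumptions.

```python
import numpy as np

def smoothness_spot_check(predict, x, epsilon=1e-2, max_ratio=10.0, trials=100, seed=0):
    """Empirically check that small input perturbations produce proportionally
    small output changes. A sampled sanity check, not a formal guarantee."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    y = np.asarray(predict(x), dtype=float)
    worst = 0.0
    for _ in range(trials):
        delta = rng.normal(scale=epsilon, size=x.shape)
        y_shift = np.asarray(predict(x + delta), dtype=float)
        ratio = np.linalg.norm(y_shift - y) / (np.linalg.norm(delta) + 1e-12)
        worst = max(worst, ratio)
    return worst <= max_ratio, worst


# Example: tanh never amplifies a small change by more than a factor of 1,
# so the worst observed ratio stays comfortably under the threshold.
ok, worst = smoothness_spot_check(np.tanh, np.zeros(3))
print(ok, worst)
```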
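
For point 4, one widely studied training-time technique (offered here purely as an illustration, not as the author's specific method) is spectral normalization, which caps each layer's gain so the network cannot amplify small input changes into large output jumps. The sketch below uses PyTorch's spectral_norm utility; the architecture and training loop are hypothetical.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Spectral normalization caps each linear layer's largest singular value at 1;
# combined with 1-Lipschitz activations such as ReLU, the network's output can
# only change proportionately to its input.
def make_constrained_model(in_dim, hidden, out_dim):
    return nn.Sequential(
        spectral_norm(nn.Linear(in_dim, hidden)),
        nn.ReLU(),
        spectral_norm(nn.Linear(hidden, out_dim)),
    )

model = make_constrained_model(16, 64, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training proceeds exactly as usual; the constraint is enforced transparently
# at every forward pass. Random tensors stand in for a real dataset.
x, y = torch.randn(32, 16), torch.randn(32, 1)
for _ in range(10):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
print(loss.item())
```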

The task at hand for IT leaders is to ensure the ML models they develop and deploy are under control. Establishing guard rails is an important interim step while we develop a better understanding of how DNNs work.


David Tennenhouse is the Chief Research Officer at VMware, Inc.