Keeping AI Algorithms from Going Awry


Adoption of artificial intelligence (AI) and machine learning (ML) is increasingly widespread in financial services, across areas that include algorithmic trading (AT), fraud detection, customer service, and mortgage lending, to name a few. Yet glitches do happen, particularly in the wild and woolly world of AT. What's being done to build governance and control of AI and ML, from both technological and regulatory perspectives?

Errors blamed on algorithmic trading are numerous and especially egregious, due to their impact on world markets. Despite the rapid evolution of AI and ML technologies, these software snafus aren’t disappearing.

  • In the famous “Flash Crash” of May 6, 2010, the Dow Jones average nosedived more than 1000 points in ten minutes before making a quick rebound. Although the cause of the crash remains a controversial topic, it's clear that at least one algorithmic trader learned about and exploited a way to deceive the market making software in use at the time.
  • For 17 minutes in August of 2013, Goldman Sachs disrupted trading on leading exchanges, mistakenly buying at least 800,000 contracts linked to equities and exchange-traded funds. Due to a software configuration error, Goldman's algorithm-driven automated trading systems accidentally sent indications of interest as actual orders to be filled at the exchanges, mispricing each transaction at $1.
  • In August of 2019, a stock-market data feed run by the NYSE experienced technical troubles which caused delays in issuing end-of-day values of the Dow Jones average and S&P 500. Although the algorithms of some financial services companies picked up on the mishap, the algorithms of most did not, leading those firms to grab the wrong data as end-of-day prices. The companies later detected the error through traditional reconciliation methods.
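Traditional reconciliation of the kind that caught the 2019 mishap boils down to comparing end-of-day prices across independent sources and flagging any that disagree. A minimal sketch in Python (the function name, feeds, and tolerance are illustrative, not any firm's actual process):

```python
def reconcile_eod_prices(feed_a, feed_b, tolerance=0.001):
    """Compare end-of-day prices from two independent data feeds.

    feed_a, feed_b: dicts mapping ticker symbol -> closing price.
    Returns a list of (symbol, price_a, price_b) tuples whose relative
    difference exceeds the tolerance and therefore needs investigation.
    """
    breaks = []
    for symbol in sorted(set(feed_a) & set(feed_b)):
        a, b = feed_a[symbol], feed_b[symbol]
        # relative difference; the small floor avoids division by zero
        if abs(a - b) / max(abs(a), abs(b), 1e-12) > tolerance:
            breaks.append((symbol, a, b))
    return breaks

# Hypothetical primary and secondary feeds; only DIA disagrees.
primary = {"SPY": 292.45, "DIA": 263.10, "QQQ": 190.02}
secondary = {"SPY": 292.45, "DIA": 260.55, "QQQ": 190.02}
print(reconcile_eod_prices(primary, secondary))  # flags only DIA
```

A firm that algorithmically cross-checked its end-of-day feed against a second source this way would have caught the bad prices immediately rather than at the next reconciliation cycle.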

Internal control and governance

Financial institutions can exert direct governance and control over their own algorithms, although not those of their trading partners. In fact, for many banks, internal control and governance are mandatory.

The SEC has rules requiring firms with market access to have proper controls in place to prevent technological errors from impacting trading. Goldman was fined $7 million for violating these market access rules in its 2013 trading glitch. The investment bank also suffered $38 million in losses from canceled orders and price adjustments.

What's more, in 2011, the FRB and OCC issued supervisory guidance on model risk management, which applies to AI and ML models in addition to other data models.

“Model risk occurs primarily for two reasons: (1) a model may have fundamental errors and produce inaccurate outputs when viewed against its design objective and intended business uses; (2) a model may be used incorrectly or inappropriately or there may be a misunderstanding about its limitations and assumptions,” according to the document.

In 2017, the FDIC adopted the FRB/OCC document, with some modifications, as guidance for all FDIC-supervised banks with total assets of over $1 billion. According to the FDIC, an effective model risk management framework should include the following:

  • disciplined and knowledgeable development that is well documented and conceptually sound,
  • controls to ensure proper implementation,
  • processes to ensure correct and appropriate use,
  • effective validation processes,
  • strong governance, policies, and controls.

Complex data models are well known to be riskier than simpler ones, and AI and ML models are among the most complex, so they demand even more governance than other data models. The consequences of deploying a model that isn't properly tested and managed can be severe; the Flash Crash of May 6, 2010 is one example of a model-based failure.

The dynamic nature of these models means that they need frequent performance monitoring, ongoing data review and benchmarking, a well-understood model inventory with context for each model, and detailed contingency plans, notes Banking Exchange.
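One widely used performance-monitoring metric in model risk management is the population stability index (PSI), which measures how far a model input or score has drifted in production from its distribution at development time. A self-contained sketch (bin count and thresholds follow a common rule of thumb, not any regulator's requirement):

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index (PSI), a common drift metric.

    Compares the distribution of a model input or score at development
    time ('expected') against its distribution in production ('actual').
    Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate shift,
    > 0.25 major shift warranting review.
    """
    # Build equal-width bucket edges from the development-time range.
    lo, hi = min(expected), max(expected)
    step = (hi - lo) / bins
    edges = [lo + i * step for i in range(1, bins)]

    def bucket_fractions(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1  # bucket containing v
        # floor each fraction so the log term below is always defined
        return [max(c / len(values), 1e-6) for c in counts]

    e_frac = bucket_fractions(expected)
    a_frac = bucket_fractions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_frac, a_frac))
```

Run against identical distributions, the index is zero; a shifted production distribution pushes it well past the 0.1 review threshold, which is the kind of signal that would feed the contingency plans mentioned above.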

On the other hand, AI and ML can also serve as excellent tools for establishing a central model governance and management framework and applying it across all models, not just AI models. Such a broad framework provides strong awareness of the interdependencies between models.

Other regulations emerging

Newer regulations are starting to emerge, some affecting algorithmic traders. World trade exchanges are interdependent, of course, and traders are based all over the globe. Governments around the world are at various stages of taking action to control and govern algorithmic trading.

One concern for regulators is that, due to its volume, frequency, and automated nature, AT can artificially inflate market volatility by magnifying upward and downward market trends, according to Mondaq. The European Union (EU) has targeted this situation through the Markets in Financial Instruments Directive II (MiFID II), which requires algorithmic traders to stress test their algorithms and to provide kill-switch functionality in the event that an algorithm malfunctions.

Another problem endemic to AT is the "black box" syndrome, in which software developers keep their algorithms secret for proprietary reasons. Still another is that high-frequency trading (HFT), a method that uses algorithms to transact large numbers of orders in fractions of a second, tends to imitate the actions of other traders without insight into the value of the company whose shares are being traded. AI bias is yet another concern, according to the Alan Turing Institute.

The US hasn't yet passed any legislation on AI and ML, although in 2017 the SEC issued guidance to investors and to registered investment advisers on the use of robo-advisers, algorithms that offer automated, personalized financial advice.

In May of 2019, Congresswoman Maxine Waters (D-CA), chairwoman of the House Committee on Financial Services, announced the creation of separate task forces on financial technology and artificial intelligence. In February of 2020, the Task Force on Artificial Intelligence convened for a hearing on the topic, "Equitable Algorithms: Examining Ways to Reduce AI Bias in Financial Services."

AI and ML at FIMA

Are you interested in management of AI and ML data? Check out these four sessions at FIMA 2020:

  • Next Generation Data Quality to Meet the Needs of AI and Analytics
  • The Role of Data in Producing Good Predictive Analytics
  • Defining Success With Machine Learning and AI
  • Emerging AI Innovations From FinTech to Support and Advance the Business
