In a previous blog post, I looked at AI-enabled language analytics technology and how it is being adopted by banks to detect conduct abuse in the financial markets. Since artificial intelligence is now powerful enough to drive a car or win a debating competition, my expectation is that it won’t be long before this level of problem-solving heft is put to work not only to detect but also to prevent misconduct in the financial markets and the mistreatment of customers.
Here are three topical indicators that suggest to me that we’re nearly there:
- Financial institutions are now announcing almost daily that they are applying AI to provide a fresh view of old behavioural challenges.
- I’ve had a compelling first glimpse of several new regtechs at work, including an AI tool that zooms in on misconduct. It’s an impressive development conceptually, but more importantly, its practical value deserves wider discussion.
- Regulators have recently expressed very public support for regtech in general, and AI in particular.
Focusing on this last point, at a recent Cambridge Risk event, the FCA’s new senior behavioural economist, Dr. Karen Croxson, advised policymakers and regulators to start actively “harnessing big data techniques” to make more socially benign use of the “troves of rich granular data” that the financial sector is sitting on. Until now, she reminded us, financial firms’ primary use of big data has been to “exploit latent advantages in previously unchallenged markets… to extract consumers’ maximum willingness to pay”.
This is a bold but realistic manifesto. Institutions should brace for the impact of a change in regulatory attitudes as the FCA and other authorities combine the surging power of AI with big-data analysis in the cause of markets and consumer protection. Now is the time for bank businesses and their senior risk, compliance and controls leaders to re-evaluate policies, practices, and technologies.
The evolution of surveillance
Regulators and indeed the banks have their eye on a predictive analytics capability that comes close to eliminating market abuses and misconduct, or at least makes them prohibitively risky for the perpetrators. I’m calling this Surveillance 3.0 and will explore what it means in future blog posts. However, we first need to consider the limitations of current rules-based analytics (Surveillance 1.0) and the transition now taking place towards a more intelligent approach (Surveillance 2.0).
Prior to the global financial crisis (GFC), regulators depended on the industry to police itself via self-reported analyses of structured data. As we now know, none of those tick-box, econometric indicator sets managed to warn anyone about the problem of ‘What Actually Happens’ in trading spaces.
In 2008, during the height of the GFC, Her Majesty The Queen asked academics at the London School of Economics what everyone was wondering: “Why did nobody notice it?”. That the problem came as a surprise to so many was partly a failure of conceptual thinking, partly of practical application, and generally a failure to observe and talk with the first-line salespeople who knew of, and hid, abuses. Her observation that “people had got a bit lax” was devastatingly accurate.
Strong rules, weak results
The first version of automated conduct monitoring, Surveillance 1.0, introduced lexicon analytics technology. Though more advanced than earlier manual processes, it relied on known and static rule sets (checking by using narrow assumptions of ‘acceptable and unacceptable’ returns). It applied these broadly against electronic communications data but kept the data in silos. This inflexible approach resulted in a high number of false positives (type 1 errors), forcing institutions to spend lavishly on compliance organizations with the human capacity to re-check system alerts manually.
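To make the false-positive problem concrete, here is a minimal sketch of Surveillance 1.0-style lexicon matching. The watch-list phrases and messages are invented for illustration, not drawn from any real compliance system; the point is simply that a static keyword rule cannot distinguish context.

```python
# Illustrative only: a toy Surveillance 1.0 lexicon rule.
# Any message containing a watch-listed phrase is flagged for review.
WATCHLIST = {"guarantee", "off the books", "keep this quiet", "fix"}

def flag_message(text: str) -> bool:
    """Flag a message if it contains any watch-listed phrase (case-insensitive)."""
    lower = text.lower()
    return any(term in lower for term in WATCHLIST)

messages = [
    "Let's keep this quiet until the announcement.",  # plausibly suspicious
    "Can you fix the typo in the client report?",     # innocuous, still flagged
    "I guarantee you'll enjoy the team lunch.",       # innocuous, still flagged
]

for msg in messages:
    print(flag_message(msg), "|", msg)
```

All three messages are flagged, but two are harmless: every such type 1 error lands on a compliance analyst’s desk for manual re-checking, which is exactly the cost problem described above.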
Besides this practical failure, 1.0 maintained many of the conceptual failures of earlier manual practices:
- First, the more rigid the structure, the more that people will only respond to the questions asked. A specific audit question may fail to identify or produce a clear expression of a source of risk (such as, in 2008, the risks of sudden reversal of market sentiment and the liquidity drought that followed it).
- Second, its reactive approach allowed little space for respondents to raise concerns or spell out newly perceived risks in their own terms. Institutions continued to report risk using pre-structured econometric data (‘financials’), mostly generated by the process of trading contracts in markets. This quantitative, structured-data approach deterred people from asking any awkward, qualitative questions – such as concerns about human behaviour.
- Third, it looked at data department-by-department, without characterising the broader firm-level picture.
As the GFC reminds us, we really needed to look upstream of contract-making to get closer to the source of misconduct: the human trader making the sale, and the management that controlled and incentivised them. A structured data approach effectively prevented anyone from thinking like this.
The clue is in the name
Come 2013, there was an obvious clue in the choice of branding of the UK’s new regulator: as a ‘Conduct’ authority, the enforcer’s focus would evidently now be on human, behavioural risk rather than paper contract risk. To achieve this, new regulatory reporting would need to better detect and report misconduct.
Those of us who are behavioural analysts assumed that this would mean the regulator more directly observing practitioners and listening closely and intelligently to how they interacted with one another in real time. Under recent initiatives for ‘culture audits’ and ‘smell tests’, from both conduct and prudential regulators, this observation process has begun to happen. That said, five years into the FCA’s existence, the pace of change has hardly been frantic.
What the new regulatory approach has done, however, is open up the reporting landscape to allow in qualitative discussions. For example, one strand of the ‘culture audit’ approach looks for any significant gaps between what people say they’re going to do and what they actually do, i.e. how expressed values differ from actual behaviours. Regulators now focus on certain behavioural traits that are hard to quantify, such as a firm’s reflexivity (capacity to learn from mistakes), the health of its challenge function, and its success in prevention of customer detriment.
Interviewed on these points, institutions’ newly-accountable Senior Managers and Certified Persons are beginning to realise that the structured data analyses won’t cut it. Nudged by regulators, they have ordered their firms to collect and analyse unstructured data. This includes emails, chat messages, and phone calls – material that most people would call “ordinary conversations” (or, if like me you’re a discourse-analysis nerd, “natural language”). This is where Surveillance 2.0 has made progress.
The foundations of an intelligent future
Surveillance 2.0 makes useful sense of data sets which had previously been too large and too unstructured for humans or simple algorithms to process. By adopting next generation language analytics technologies, which use machine learning to identify patterns, anomalies and relationships, institutions have been able to use a 2.0 approach to take a big step forward in understanding the potential root causes of conduct risk.
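As a loose sketch of what “learning patterns from language” means here, the toy classifier below learns word weights from a handful of labelled messages rather than matching a fixed lexicon. Everything in it — the training examples, the labels, the naive Bayes approach — is an invented stand-in for the far richer models a real 2.0 system would use; it only illustrates why a learned model can weigh context where a static rule set cannot.

```python
# Illustrative only: a toy bag-of-words naive Bayes classifier standing in
# for machine-learning language analytics. Data and labels are fabricated.
import math
from collections import Counter

def tokenize(text: str) -> list[str]:
    return text.lower().split()

# Tiny labelled training set (invented examples).
EXAMPLES = [
    ("keep this between us before the print", "risky"),
    ("move it off the books tonight", "risky"),
    ("can you fix the typo in the report", "benign"),
    ("i guarantee the lunch will be great", "benign"),
]

def train(examples):
    """Count word occurrences per label."""
    counts = {"risky": Counter(), "benign": Counter()}
    totals = Counter()
    for text, label in examples:
        for tok in tokenize(text):
            counts[label][tok] += 1
            totals[label] += 1
    return counts, totals

COUNTS, TOTALS = train(EXAMPLES)
VOCAB_SIZE = len({t for text, _ in EXAMPLES for t in tokenize(text)})

def log_score(text: str, label: str) -> float:
    """Log-likelihood of the message under one label, with add-one smoothing."""
    return sum(
        math.log((COUNTS[label][tok] + 1) / (TOTALS[label] + VOCAB_SIZE))
        for tok in tokenize(text)
    )

def classify(text: str) -> str:
    return max(("risky", "benign"), key=lambda lbl: log_score(text, lbl))

print(classify("fix the numbers in the report"))
print(classify("keep this off the books"))
```

Unlike the lexicon rule, the learned weights let the word “fix” score differently depending on the company it keeps — the shift from counting keywords to modelling patterns that the 2.0 approach represents.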
Machine learning has proven to be significantly more accurate than old, lexicon-based analyses, producing a much better signal-to-noise ratio that leaves compliance and controls organisations with more time to investigate possible abuses. Moreover, it has started to offer much richer insights into a firm’s culture, exposing previously hidden patterns of behaviour. Many global banks have made significant progress in adopting a 2.0 model, but as I shall explore in my next blog post, they understand that it provides the foundations of a more intelligent approach – the fabled Surveillance 3.0 – yet is not quite the finished article. Stay tuned for more!