Is the risk management industry on the verge of an AI-powered revolution?

Craig Adams, Managing Director at Protecht, examines why it is important to weigh up AI-powered systems versus the challenges to find the sweet spot for risk management.

When ChatGPT first launched to the public in late 2022, it took the entire world by storm, reaching one million users in five days and surpassing 100 million monthly users in just two months. It didn’t take long before major organisations like BT were announcing plans to replace thousands of workers with Artificial Intelligence (AI). At the same time, newspapers were quickly filled with stories about AI-powered systems already doing a better job than trained humans at various business tasks and applications.

Of course, not everyone has embraced the arrival of AI. Some countries, including Russia and China, moved swiftly to restrict access, while major question marks around issues like data privacy still linger heavily in the air.

However, with the lid to Pandora’s Box now well and truly lifted, not only is AI here to stay, but it is set to revolutionise the way many organisations manage their key business functions forever – risk and compliance included.

AI-powered systems: Opportunities vs risks

Let’s be crystal clear: AI-powered systems present huge opportunities across risk and compliance functions, particularly in areas like automating everyday mundane tasks, offering rapid assessments, and providing a better understanding and management of the risks faced. Whether it’s identifying gaps in policies and control frameworks or analysing thousands of pages of regulations across multiple jurisdictions in a matter of seconds, the potential is truly enormous.

However, that’s not to say it doesn’t create a few risks of its own as well. First and foremost, risk management and compliance functions are in the very earliest stages of AI integration, which means the current lack of understanding will almost certainly lead to teething problems and mistakes. Indeed, in many organisations, risk professionals find themselves working round the clock to understand how best to retrospectively integrate AI into long-established, well-run programmes and processes.

Furthermore, AI-powered systems are far from flawless in their current iteration. For all the positive headlines generated, ChatGPT has also garnered numerous negative ones, particularly relating to high-profile gaffes, biased content, and limited knowledge of the world beyond 2021 (at least for now).

Therefore, in order to make the most of AI’s vast potential without falling foul of its current limitations, industry professionals need to look very closely at both the opportunities and challenges it presents before finding the right path forward to successful implementation.

Understanding the technology, its application, and its risks should all be considered fundamental requirements for risk managers before partial or full-scale deployment is even considered.


Harnessing the opportunities of AI-powered systems

Just like many other industries, one of the biggest opportunities that AI presents to risk and compliance professionals is its ability to automate time-consuming and repetitive tasks that humans often struggle with because of their mundane nature.

For example, AI-driven customer service solutions have been shown to not only reduce operational costs but also to improve the quality of service.

Evidence like this explains why customer service orientated organisations like BT are already investing so heavily in AI-powered opportunities – they are among those with the most to gain in terms of both efficiency and cost reductions. These same motivations can be applied to businesses with significant risk management functions.

Beyond the customer service function, AI also has the potential to provide invaluable insights into an organisation’s risk profile by analysing vast amounts of data at a pace incomparable to human capabilities.

For instance, AI can be used to assess thousands of pages of complex global regulations before making accurate recommendations on exactly where specific regulations apply. This kind of capability can significantly reduce the workloads of risk and compliance professionals, enabling them to spend much more of their time on strategically important activities while also improving overall business security.
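To make the regulation-mapping idea concrete, here is a deliberately simplified sketch (not the approach of any particular vendor) of matching business activities to regulation excerpts by keyword overlap; real systems would use far richer language models. All regulation IDs, text, and keywords below are invented for illustration.

```python
# Toy illustration: score regulation excerpts for relevance to a business
# activity using simple keyword overlap. Regulation text is invented.

REGULATIONS = {
    "EU-GDPR-Art-32": "security of processing personal data encryption",
    "UK-FCA-SYSC-3": "systems and controls risk management oversight",
    "US-SOX-404": "internal control over financial reporting assessment",
}

def relevant_regulations(activity_keywords, regulations, min_overlap=1):
    """Return regulation IDs whose text shares keywords with the activity."""
    hits = []
    for reg_id, text in regulations.items():
        words = set(text.lower().split())
        overlap = words & {k.lower() for k in activity_keywords}
        if len(overlap) >= min_overlap:
            hits.append(reg_id)
    return hits

# An activity involving personal data flags the (invented) GDPR excerpt.
print(relevant_regulations(["personal", "data"], REGULATIONS))
```

The point of the sketch is the shape of the task, not the method: the system narrows thousands of candidate obligations down to the few a human needs to review.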

However, it’s important to note that AI-powered systems are only ever as good as the data they have to work from. If an AI system relies on flawed data, it may fail to identify critical risks or to comply with relevant regulations, and those flaws can begin to distort the system’s own reasoning.

It’s a situation somewhat reminiscent of the early days of computer science in the 1950s, when the phrase ‘garbage in, garbage out’ was first coined: the quality of the output is determined by the quality of the input.

Organisations must therefore ensure that the data feeding into their AI-powered systems is accurate and unbiased at all times, which isn’t easy. Failure to do so raises not only the risk of serious errors but also huge reputational damage to the organisations involved and the application of AI across the profession.
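One practical ‘garbage in, garbage out’ guardrail is to validate data before it ever reaches an AI-powered analysis. The sketch below is a minimal, assumed example; the field names and the 5% missing-data threshold are illustrative choices, not a standard.

```python
# Minimal data-quality gate: flag missing required fields and duplicate
# records in a batch before it is fed to any AI-powered analysis.

def validate_records(records, required_fields, max_missing_ratio=0.05):
    """Return a list of data-quality issues; an empty list means the batch passed."""
    issues = []
    seen = set()
    missing = 0
    for rec in records:
        if any(rec.get(f) in (None, "") for f in required_fields):
            missing += 1
        key = tuple(sorted(rec.items()))
        if key in seen:
            issues.append(f"duplicate record: {rec}")
        seen.add(key)
    if records and missing / len(records) > max_missing_ratio:
        issues.append(f"{missing}/{len(records)} records missing required fields")
    return issues

batch = [
    {"risk_id": "R1", "severity": "high"},
    {"risk_id": "R2", "severity": ""},      # missing severity
    {"risk_id": "R1", "severity": "high"},  # duplicate of the first record
]
print(validate_records(batch, ["risk_id", "severity"]))
```

Checks like these don’t guarantee unbiased data, but they catch the most mechanical forms of ‘garbage’ before it can shape an AI system’s conclusions.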

Another crucial concern is the potential replacement of human workers and the impact on the wider employment market. While it’s clear that AI will increasingly be used to automate a range of functions currently carried out by human members of staff, replacing people entirely isn’t without its drawbacks. Most obviously, there is an inherent and irreplaceable value in human insight, judgement, and decision-making, especially in areas as critical as risk management, where experience plays a massive role across the board.

Finding the sweet spot for risk management

So, how can organisations find the sweet spot that allows them to enjoy the benefits of AI-powered systems while guarding themselves against the inherent risks?

Here is a best practice checklist that can help ensure a structured approach to AI deployment with full transparency and visibility across the risk management function:

  • Start by assessing AI’s impact on the organisation’s overall risk profile and identify any compliance challenges created as a result.
  • Develop organisational controls, such as an AI policy that defines the acceptable use of AI by employees, alongside technology controls that limit access to and monitor the use of AI services over the web in line with that policy.
  • Raise awareness through employee communication and training on what they can and can’t do with AI, and outline the risks that knowledge gaps or even information fabrication from this sort of technology can bring.
  • Define your risk appetite around AI so you can agree on how hungry or averse you are as an organisation when it comes to embracing both the opportunities and the downside risk it represents when things go wrong, and develop metrics to measure this.
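As a hypothetical sketch of the ‘technology controls’ item in the checklist above, an organisation might screen outbound calls to AI services against an approved list drawn from its AI policy. The domains below are placeholders, not endorsements of any service.

```python
# Hypothetical technology control: only permit web requests to AI services
# on an approved list, and split attempts into allowed/blocked for audit.

APPROVED_AI_SERVICES = {"api.openai.com", "internal-llm.example.com"}

def is_request_allowed(host: str) -> bool:
    """Allow only hosts on the organisation's approved AI-services list."""
    return host.lower() in APPROVED_AI_SERVICES

def audit_requests(hosts):
    """Partition attempted AI-service calls into allowed and blocked lists."""
    allowed = [h for h in hosts if is_request_allowed(h)]
    blocked = [h for h in hosts if not is_request_allowed(h)]
    return allowed, blocked

allowed, blocked = audit_requests(["api.openai.com", "unvetted-ai.example.net"])
print(allowed, blocked)
```

In practice this kind of control would sit in a web proxy or firewall rather than application code, but the policy logic is the same: the allow-list is the AI policy made executable.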

In the long term, it is vital to establish effective controls over the remit given to AI-powered systems and their performance levels. These should include a commitment to manual oversight, ongoing ad-hoc testing, and the implementation of any other relevant mechanisms to ensure AI operates within the organisation’s risk appetite and compliance framework.
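The ‘ongoing ad-hoc testing’ mentioned above could be as simple as routinely routing a random sample of AI outputs to human reviewers. This is a sketch under assumed parameters; the 10% sampling rate is an arbitrary illustrative choice.

```python
# Sketch of ongoing oversight: randomly sample AI decisions for manual
# human review, so output is routinely checked against risk appetite.
import random

def sample_for_review(decisions, rate=0.10, seed=None):
    """Return a random subset of AI decisions flagged for human review."""
    rng = random.Random(seed)
    return [d for d in decisions if rng.random() < rate]

decisions = [f"decision-{i}" for i in range(100)]
review_queue = sample_for_review(decisions, rate=0.10, seed=42)
print(len(review_queue))  # roughly 10 of the 100 decisions flagged
```

Fixing the seed makes a given review batch reproducible for audit purposes; leaving it unset makes sampling unpredictable to the system being checked.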

In this context, a hybrid approach, where AI and humans work in tandem, is most likely to provide the best results.

Growing with the AI-powered revolution

It’s important to remember that we are only at the beginning of a very exciting journey with AI-powered systems.

If ongoing hurdles can be overcome, there’s no reason why AI can’t have a huge impact on the capabilities of risk management and compliance functions in the future.

As regulatory environments around the world continue to change rapidly, AI’s ability to adapt and provide insights into emerging risk requirements at speeds far beyond human capability is likely to prove invaluable.

On the other hand, question marks continue to hang over AI and will likely do so for a long time to come.

There’s already a growing chorus of industry experts calling for AI to be better controlled and regulated until more is known about it. How this conversation unfolds will likely have a significant impact on just how much various governments and industries choose to embrace it.

While many organisations will find the prospect of implementing AI-powered risk management daunting, there’s never been a better time to start exploring it.


More from Innovation News Network