The importance of responsible AI in a rapidly evolving landscape

Dr Paul Sant, Head of Computer Science at The University of Law, discusses the importance of developing responsible AI amidst cybersecurity challenges.

It is undeniable that since OpenAI launched ChatGPT and generative AI entered public awareness, there has been intense scrutiny of what may be possible. There has also been a significant increase in the number of products in the artificial intelligence (AI) ecosystem.

We have witnessed remarkable advancements and innovative technologies emerging from the realm of AI.

However, there is a darker side to this progress. What if an AI tool develops a ‘mind of its own’ and performs actions we did not intend? What if AI surpasses its programmed capabilities, or worse, if cyber attackers exploit AI for malicious purposes?

These are not mere speculations, but potential risks that demand our attention.

Challenges in developing responsible AI

All of these scenarios are possible, and there has certainly been a lot of work behind the scenes to address them. But should we really be worried? Won't AI engineers recognise these risks? Won't governments and bodies like the EU put policies and regulations in place to stop this?

Well, the answer is mixed – developments such as the EU AI Act have certainly set out strong intentions to ensure the development of responsible AI, and the governments of the UK and the US have put guidelines together.

However, this has to be balanced against the huge perceived potential of what generative AI, and the move toward artificial general intelligence, could bring to a capitalist society.

So, while there is certainly a will to ensure the development of responsible AI, some conflicting perspectives make this more challenging.


AI has cybersecurity implications

To put things into perspective and show what those looking to exploit the power of AI for less desirable activities (i.e., cyber attackers) can already do, consider this: there are AI tools on the market today that can take a small (less than 15-second) sample of a voice and create an artificial intelligence version that sounds identical.

Is that not ‘cool,’ I hear you say? Well, yes, it is rather ‘cool’.

However, consider that some organisations, in an attempt to prevent cyber attackers from accessing things like your bank account via online banking, have moved towards so-called two-factor (or even multi-factor) authentication, where one of those factors is 'your voice is your password'.

Imagine now that you have given an interview somewhere and it has ended up on YouTube, or you have a website with a video introduction in which you speak – what is to stop a cyber attacker from taking that sample, stripping out your voice, and feeding it into an AI tool that can generate speech on the fly?

The answer is 'nothing, really'. Now imagine they call up an online bank, play back the sample when asked, and gain access to your bank account, because the system checking the password cannot determine whether the AI-generated sample is your 'real' voice.
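To make the weakness concrete, here is a minimal, hypothetical sketch (in Python, using only NumPy) of the kind of naive voice-matching check such a system might rely on: it compares a stored voiceprint embedding against an embedding of the incoming audio using cosine similarity and accepts anything above a threshold. The function names, the 0.85 threshold, and the toy eight-dimensional embeddings are illustrative assumptions, not any real bank's implementation; the point is simply that a faithful AI clone lands inside the acceptance threshold.

```python
# Hypothetical sketch: a naive voice-verification check.
# All names, values, and thresholds are illustrative assumptions,
# not any real system's implementation.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two voiceprint embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify_speaker(enrolled: np.ndarray, incoming: np.ndarray,
                   threshold: float = 0.85) -> bool:
    """Accept the caller if their embedding is close enough to the
    enrolled voiceprint. Crucially, similarity alone cannot tell a
    live human voice from a high-quality AI-generated clone."""
    return cosine_similarity(enrolled, incoming) >= threshold


# Toy demonstration with made-up 8-dimensional embeddings.
rng = np.random.default_rng(0)
enrolled_voiceprint = rng.normal(size=8)

# A faithful AI clone produces an embedding very close to the
# genuine one, so the naive check accepts it.
cloned_embedding = enrolled_voiceprint + rng.normal(scale=0.05, size=8)
print(verify_speaker(enrolled_voiceprint, cloned_embedding))   # expected: True

# An unrelated voice is far from the voiceprint and is rejected.
stranger_embedding = rng.normal(size=8)
print(verify_speaker(enrolled_voiceprint, stranger_embedding))  # expected: False
```

This is why, in practice, such systems need more than a similarity match: defences typically layer liveness detection on top, for example random challenge phrases the caller must speak on the spot, or anti-spoofing models trained to spot the artefacts of synthetic speech.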

This is a concerning example, but rest assured, those who work in cybersecurity are working hard to counter scenarios like this.

Researchers working on large language models are also working to ensure that those who try to alter these models cannot do so with unintended and harmful consequences. A lot of work is going on to ensure that responsible AI dominates in order to keep us all safe.

Realising the true potential of responsible AI

The challenge we face is that the world of technology moves so quickly, and products come to market without us being able to fully understand their wider impact.

Of course, that is exactly what the cyber attackers are hoping for – that one chance for a mistake to be made or an unintended consequence not to have been considered.

However, by being aware and careful, slowing down, and thinking before we act (as users), we can reap all the benefits these exciting technologies have to offer without exposing ourselves to risk.

There are certainly exciting times ahead, and we hope to see the true potential of responsible AI win out.

There are a lot of cyber defenders working to keep us all safe, so watch this space.

Contributor Details

Dr Paul Sant
Head of Computer Science
The University of Law
