The 3 limitations of AI-driven cyber attacks

Richard Ford, CTO of Integrity360, outlines and debunks the myths surrounding AI-driven cyber attacks.

From contextual threat prioritisation to task automation, Artificial Intelligence (AI) has been a key cybersecurity trend for several years.

However, the controversy surrounding the implications of increasingly sophisticated technologies has reached new heights following the release of several impressive natural language processing tools.

Be it ChatGPT, Jasper AI, or several other alternative chatbots, industry concern is mounting over the potential for such applications to be used to create sophisticated malware and hacking tools.

From writing in the specific style of authors to producing complex streams of code, this wave of AI models has truly broken new ground, completing various tasks in a highly sophisticated manner.

Yet there are now worries that the ability of such tools to harness huge amounts of data and expedite processes could be used and abused by nefarious actors.

But exactly how much truth is there to this? Are such concerns valid?

Some industry players are already exploring the potential security implications of ChatGPT and similar tools to obtain an understanding of the threats that they pose.

Check Point, for example, developed a phishing email on the platform that included an Excel document carrying malicious code which could be used to download a reverse shell to a potential victim’s endpoint.

Of course, experiments such as these are concerning.

However, many of these specific use cases don’t reflect the current state of AI technology, nor the lengths that cybercriminals are typically willing or able to go to.

Three key limitations of AI-driven cyber attacks

It’s important to recognise that most cyber attacks are relatively basic in nature, carried out by amateur hackers known as ‘script kiddies’ who rely on repeatable pre-built tools and scripts.

That said, there are also more sophisticated attackers who have the skills to uncover vulnerabilities and develop tailored, novel techniques to exploit them.

The truth of the matter is that current AI tools simply aren’t advanced enough to create malware that is more dangerous than those strains that we’re already facing.

Current AI tools are limited in several ways, preventing them from creating the kind of malware that many people fear.

1. AI isn’t equipped to deal with ambiguity

Current AI tools struggle to navigate situations where there is no definitive answer.

We see this in cybersecurity professionals’ own use of AI for defensive purposes – while such tools may flag suspicious activity, they still need human input to confirm whether it is genuinely malicious.

As AI grows in sophistication, some tools are emerging that handle ambiguous situations better, yet these remain relatively rare and are seldom found in the arsenals of threat actors.

2. AI is restricted by data

We must also recognise that all AI engines, regardless of whether they have been built for good or bad purposes, are limited by the data they ingest.

Like any other model, a malware-specific AI algorithm would need to be provided with massive amounts of data to learn how to evade detection and cause the right kind of damage. And while such datasets undoubtedly exist, there are further limitations as to exactly how much AI models can learn from them.

3. Human brains are better at present

Critically, it’s also important to understand that, for the time being, human brains remain superior to Artificial Intelligence.

While AI has shown it is incredibly useful in accelerating processes, automating tasks, and identifying threats in a largely accurate manner at speed, these models still require the support and knowledge of experienced cyber professionals.

Indeed, we still need a combination of technology and skilled people in cybersecurity, and this is no different in cybercrime.

The threat posed by AI remains largely theoretical


That is not to say that there are no risks at all. Indeed, the primary security concern surrounding ChatGPT and its alternatives at present is the potential for such tools to be used to democratise cybercrime.

There is clear evidence to suggest that innovative cybercriminals will readily embrace such platforms to write credible phishing emails, or even to code evasive malware.

However, this arguably wouldn’t represent any significant shift in current black-market dynamics.

Indeed, phishing-as-a-service (PhaaS) and ransomware-as-a-service (RaaS) providers have been supplying less experienced threat actors with toolkits for some time, enabling those with little or no technical skill to carry out attacks.

As platforms that are open and freely accessible, ChatGPT and other similar natural language processing models have the potential to exacerbate this issue.

For example, attackers can customise their approach to a specific target, using ChatGPT to more convincingly impersonate a trusted source, gain access to sensitive information, and carry out fraudulent activities.

However, the reality is that current AI tools are simply not sophisticated enough to create truly advanced malware that can evade detection and cause serious damage.

With this in mind, many scare stories about AI and cybersecurity must be taken with a pinch of salt.

That’s not to say that AI won’t play a more serious role in developing and executing sophisticated cyber attacks in the future – it almost certainly will.

Instead, we’re saying that the threat posed by AI right now remains largely theoretical, and therefore there’s no reason for us to assume the worst.

Combatting AI-driven cyber attacks

We must remember that the vast majority of attacks are still being carried out using very basic methods. Therefore, by investing in some simple yet effective security measures, organisations can successfully combat huge swathes of current cyber threats with relative ease.

Organisations must get ahead of the curve and embrace evolving technologies such as Machine Learning tools themselves. These can be very effective at identifying potential threats and responding to attacks at speed, capable of recognising and flagging potentially malicious patterns or activities in network traffic.
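To make that idea concrete, the sketch below shows one common pattern behind such tooling: an unsupervised anomaly detector trained on a baseline of ordinary traffic, then used to flag unusual flows for human review. The feature set, thresholds, and library choice (scikit-learn) are illustrative assumptions, not a description of any specific product.

```python
# A minimal sketch of ML-assisted network monitoring: train an anomaly
# detector on "normal" traffic features, then flag outliers for analysts.
# Features and values below are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-flow features: [bytes sent, packets, duration (s), distinct ports]
normal_traffic = rng.normal(loc=[5_000, 40, 2.0, 1],
                            scale=[1_500, 10, 0.5, 0.5],
                            size=(1_000, 4))

# Fit the detector on a baseline of ordinary activity
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# New observations: one typical flow and one resembling bulk data exfiltration
new_flows = np.array([
    [5_200, 42, 2.1, 1],         # looks like the baseline
    [900_000, 4_000, 60.0, 25],  # unusually large transfer across many ports
])

for flow, label in zip(new_flows, detector.predict(new_flows)):
    status = "flag for analyst review" if label == -1 else "looks normal"
    print(f"flow {flow.tolist()}: {status}")
```

The point of the design is speed, not autonomy: the model surfaces suspicious activity quickly, but the decision on what it means still sits with a trained analyst, which is exactly where the human element discussed below comes in.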

At the same time, it’s important to continue embracing the value that human-led security efforts bring to the table. Indeed, the true value of Machine Learning-led defence tools can only be maximised when they are managed by trained security professionals who are able to respond at speed to the additional intelligence provided to them.

It is also vital to focus on continually improving the cyber education and awareness of the entire workforce with training programmes and communications centring around key protocols.

If all employees are aware of social engineering and phishing attacks, and of the potential implications such attacks can have, they will be both more vigilant and better equipped to reduce the risk of a successful attack.

Now is not the time to panic. By investing in effective security measures, combining these with the expertise of trained professionals, and educating employees about commonly encountered cyber threats, networks and systems can be effectively secured against most AI-driven cyber-attacks.
