Could AI be inspiring threat actors?

Tony King, SVP International at NETSCOUT, discusses AI's potential to drive an increase in threat actor activity and cybercrime.

In the past year, the potential of Artificial Intelligence (AI) has encouraged businesses across all industries to discover new AI use cases to improve operational agility and reduce costs. However, the AI boom has been a muse that has inspired cybercriminals, too.

As 2023 progressed, there was a marked increase in the use of AI by threat actors to increase the efficacy of cyberattacks. This year, the trend is expected to continue, with AI usage by cybercriminals increasing further.

Threat actors are increasingly using AI

Unfortunately, for organisations trying to defend themselves from cybercrime, AI holds the potential to aid attackers in conducting a range of malicious activities. Threat actors are constantly uncovering new ways of utilising AI to improve their chances of success.

For example, cybercriminals can use AI capabilities to launch distributed denial-of-service (DDoS) attacks, making them more impactful.

Threat actors use AI during DDoS attacks by employing expert systems that optimise attack vectors based on reconnaissance scans and real-time performance test results. This allows cybercriminals to ascertain which attack methods are effective, increasing the damage an attack inflicts on a given target.
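To make the feedback loop above concrete, the sketch below shows a generic epsilon-greedy selection routine that repeatedly favours whichever option has produced the best observed score. It is purely illustrative and entirely simulated: the option labels and the measure_effect stub are invented, no traffic of any kind is generated, and the only point is to show why defenders should expect automated trial-and-error to adapt in real time.

```python
import random

# Hypothetical, simulated illustration of an adaptive selection loop.
# Nothing is sent anywhere: measure_effect() returns random noise that
# stands in for "observed impact" reported back to a controller.

OPTIONS = ["vector_a", "vector_b", "vector_c"]  # abstract labels only

def measure_effect(option: str) -> float:
    """Stand-in for real-time feedback; here it is just random noise."""
    return random.random()

def epsilon_greedy(rounds: int = 50, epsilon: float = 0.2) -> str:
    scores = {o: 0.0 for o in OPTIONS}
    counts = {o: 0 for o in OPTIONS}
    for _ in range(rounds):
        # Mostly exploit the best-scoring option so far, occasionally explore.
        if random.random() < epsilon or not any(counts.values()):
            choice = random.choice(OPTIONS)
        else:
            choice = max(OPTIONS, key=lambda o: scores[o] / max(counts[o], 1))
        counts[choice] += 1
        scores[choice] += measure_effect(choice)
    return max(OPTIONS, key=lambda o: scores[o] / max(counts[o], 1))

if __name__ == "__main__":
    print("Converged on:", epsilon_greedy())
```

The same logic that lets a controller converge on "whatever works" is why mitigation strategies that stay static for the duration of an attack tend to fall behind.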

After a whirlwind year for the cybersecurity sector, building cybersecurity awareness across businesses is especially pertinent. In the first half of 2023 alone, 7.9 million DDoS attacks took place, roughly 44,000 attacks per day, according to NETSCOUT's latest Threat Intelligence Report. This represents a 31 per cent increase globally compared with the same period the previous year.

The steep rise in DDoS attacks demonstrates that cybercriminals are reshaping their attack methods to inflict as much damage as possible, and are launching attacks more frequently than ever before.

The unique threat posed by generative AI

On the generative AI side, threat actors are focused on social engineering, creating realistic-looking emails and documents that are very difficult to distinguish from the genuine article. Advanced language models are what make this kind of generative AI-driven phishing possible.

What’s more, cybercriminals are also using malicious generative AI tools, including WormGPT and FraudGPT, to carry out targeted phishing campaigns at a larger scale than ever before. These tools are used, respectively, to support business email compromise and to produce deceptive content with machine learning.

Weaponising generative AI has introduced new methods of accessing personal information. For instance, the use of deepfake audio enables bad actors to imitate trusted voices for fraudulent transactions, while generated deepfake images or videos can even bypass biometric facial identification.


The UK government recently released a report on the challenges posed by generative and frontier AI, in which it said both were likely to increase cybersecurity risks. The Safety and Security Risks of Generative Artificial Intelligence to 2025 report noted that cyberattacks, online fraud and impersonation are the most likely security threats to emerge from AI misuse.

The report also predicts “faster-paced, more effective and larger-scale cyber-intrusion via tailored phishing methods or replicating malware”.

However, it foresees generative AI being more likely to exacerbate existing risks rather than create new dangers in the coming years.

Organisations fighting fire with fire

Nonetheless, AI also improves defences, helping organisations develop more timely and actionable threat intelligence to defend targets from threat actors. Given today’s threat landscape, organisations have been placing more value than ever before on threat intelligence, as it helps businesses broaden coverage, accelerate response, and reduce the operational overhead of their defences.
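As a minimal sketch of how actionable threat intelligence gets applied, the example below matches observed indicators against a small in-memory feed. The feed format, field names and sample values are invented for illustration; production systems typically consume standardised feeds (such as STIX/TAXII) and refresh them continuously.

```python
# Minimal, hypothetical sketch: matching observed indicators against a
# threat-intelligence feed. Field names and values are invented; the IPs
# come from documentation-only ranges.

from dataclasses import dataclass

@dataclass(frozen=True)
class Indicator:
    value: str        # e.g. an IP address or domain
    category: str     # e.g. "ddos_botnet", "phishing_domain"

# A tiny in-memory "feed"; in practice this would be refreshed automatically.
FEED = {
    Indicator("203.0.113.7", "ddos_botnet"),
    Indicator("login-example-bank.test", "phishing_domain"),
}

INDEX = {i.value: i for i in FEED}

def triage(observed: list[str]) -> list[Indicator]:
    """Return the feed entries that match observed values."""
    return [INDEX[v] for v in observed if v in INDEX]

if __name__ == "__main__":
    for hit in triage(["198.51.100.1", "203.0.113.7"]):
        print(f"ALERT: {hit.value} matches known {hit.category}")
```

The value of curated intelligence is exactly this kind of automation: matches surface in seconds rather than waiting on an analyst to recognise an address by hand.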

Generative AI is also being used to improve efficiency in security: some tools now provide a natural language chatbot to advise analysts, optimising their effectiveness. Using AI in this way is becoming increasingly pertinent as the number and complexity of attacks continue to grow and as budgets tighten in many organisations, limiting their ability to hire additional human security resources.
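A hedged sketch of that ‘analyst assistant’ pattern is shown below: an alert is summarised into a prompt, sent to a language model, and the reply is surfaced to the analyst as advice rather than acted on automatically. The ask_llm function is a placeholder rather than a real API, and the alert fields are assumed purely for illustration.

```python
# Hypothetical sketch of an LLM-backed analyst assistant. ask_llm() is a
# placeholder for whatever model API an organisation actually uses; the
# model's answer is advisory only and is never executed automatically.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your organisation's model API here")

def advise_on_alert(alert: dict) -> str:
    prompt = (
        "You are assisting a SOC analyst. Summarise the likely cause of this "
        "alert and suggest the next triage steps.\n"
        f"Source: {alert['source']}\n"
        f"Signature: {alert['signature']}\n"
        f"Count in last hour: {alert['count']}"
    )
    return ask_llm(prompt)

# Example usage (will raise until ask_llm is implemented):
# print(advise_on_alert({"source": "10.0.0.5",
#                        "signature": "DNS amplification pattern",
#                        "count": 4200}))
```

Keeping a human in the loop is the design choice that matters here: the model drafts the triage reasoning, but the analyst decides what action to take.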

The good, the bad and the AI

While the onus is on companies to protect their customers from cyberattacks by remaining vigilant, internet users can also take steps to minimise the risk of falling victim to cybercrime. Threat actors often rely on fear tactics in phishing attacks, so users should be sceptical of urgent-sounding messages.

Also, it pays to stay informed about common phishing tactics and to be careful about anything unexpected.

As generative AI gives cybercriminals the ability to mimic voices and faces and to create well-written correspondence that lacks the tell-tale signs of deception, it is becoming increasingly difficult for consumers to defend themselves simply by ‘being careful’.

There is an opportunity for service providers to deliver new levels of protection to consumers, both to drive additional revenue and reduce the success rate of the criminals involved.

AI is expanding possibilities for both threat actors and defenders, with its potential growing as the world learns more. The many challenges the technology has already solved and the unprecedented problems it has created speak to the seemingly limitless possibilities AI introduces to the cybersecurity realm.
