Artificial Intelligence: A tool for assault or defence?

Edy Almer, Product Management Director for Threat Detection and Incident Response at Logpoint, outlines the concerns and wider uses of Artificial Intelligence.

The rise of Artificial Intelligence (AI) in the form of Large Language Models (LLMs) is already significantly disrupting modern society. We’ve seen objections to these systems’ use of the copyrighted material on which they are trained, concerns over whether they can be relied upon given their ‘hallucinations’ and bias, and fears over their potential to cause harm.

As a result, the EU is rushing through regulation in the form of the AI Act, which is expected to come into force before the year is out.

As the Act reflects, the general consensus is that AI is here to stay, and from a cybersecurity perspective it’s imperative that we build our understanding of how it can be used against us or to aid us in our defence. This is because Artificial Intelligence will significantly escalate the competencies of threat actors, making it much harder to detect and defend against attacks.

For example, it can make phishing attacks far more convincing, has been tricked into writing malware, and can be used to generate disinformation.

Artificial Intelligence can improve security processes

Right now, a chief concern is that AI will see more sophisticated attacks offered for sale on the dark web. New Scientist reports that Artificial Intelligence could cut costs for cybercrime gangs by up to 96%, driving down the price of attack toolkits. This in turn will lower both the cost of entry and the skill level needed to perpetrate attacks.

So it’s small wonder that organisations are gearing up for AI attacks, with over half anticipating that a successful AI-fuelled attack will come to pass within the next eight months.

But the odds are, for once, stacked slightly in our favour because LLMs are constructed to do good. They have a number of built-in safeguards that attackers need to work around in order to achieve their malicious aims.

In ChatGPT, for instance, filters are applied to the content it uses to train itself and to its responses to prevent it from coming up with unsafe content. As the security sector doesn’t face the same constraints, it can use the technology to dramatically improve security processes and defence capabilities.

Making sense of data

Take, for example, the Security Operations Centre (SOC). Here, enormous quantities of risk data are analysed to determine potential threats to the business and its systems.

Converging security technologies like SIEM, SOAR and UEBA can rapidly crunch through that data and provide the focus needed to separate genuine from false positives. This improves the speed with which the team reaches actionable information and reduces the alert fatigue that can see genuine alerts missed.
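As an illustration of the kind of triage such converged tooling performs, here is a minimal sketch that weights raw alert severity by a behavioural anomaly score and surfaces only high-confidence alerts for analyst review. All field names, weights, and thresholds are hypothetical, not drawn from any particular product:

```python
# Minimal alert-triage sketch: score alerts from multiple detection
# sources and surface only high-confidence ones for analyst review.
# Field names, weights, and the threshold are illustrative only.

from dataclasses import dataclass

@dataclass
class Alert:
    source: str           # e.g. "SIEM", "UEBA"
    severity: int         # 1 (low) .. 5 (critical)
    anomaly_score: float  # 0.0 .. 1.0 from behavioural analytics

def triage(alerts, threshold=2.5):
    """Return alerts whose combined score exceeds the threshold,
    sorted so the most urgent appear first."""
    def score(a):
        # Weight raw severity by the anomaly score so noisy
        # low-confidence detections fall below the threshold.
        return a.severity * (0.5 + a.anomaly_score)
    kept = [a for a in alerts if score(a) > threshold]
    return sorted(kept, key=score, reverse=True)

alerts = [
    Alert("SIEM", 5, 0.9),  # likely genuine: severe and anomalous
    Alert("SIEM", 1, 0.1),  # likely noise: filtered out
    Alert("UEBA", 3, 0.7),
]
urgent = triage(alerts)
```

The design point is simply that combining signals before an analyst ever sees the queue is what keeps genuine alerts from drowning in false positives.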

In the event of a breach, the SOC team will supplement the indicators of compromise (IoC) with other information obtained from public sources. This is then used to form a response plan, but again the data involved is considerable.

Applying an LLM here means that data can be ingested, understood, and summarised far more quickly, shortening the all-important Mean Time to Response (MTTR) that can make the difference between an attack being mitigated and one being allowed to run rampant.
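One way an LLM could be slotted into that workflow is sketched below. The point is the batching of IoC records and public-source context into a single summarisation prompt; the `llm` parameter is a hypothetical stand-in for whichever model endpoint a SOC actually uses, and the indicator values are invented:

```python
# Sketch: batch indicators of compromise (IoCs) and public-source
# enrichment data into one summarisation prompt for an LLM.
# The `llm` callable is a hypothetical stand-in for a real model API.

def build_ioc_prompt(iocs, public_context):
    lines = ["Summarise the following incident data and recommend",
             "immediate containment steps.",
             "",
             "Indicators of compromise:"]
    lines += [f"- {kind}: {value}" for kind, value in iocs]
    lines += ["", "Context from public threat intelligence:"]
    lines += [f"- {note}" for note in public_context]
    return "\n".join(lines)

def respond(prompt, llm=None):
    # With no real model wired in, fall back to a trivial stub so the
    # sketch stays runnable end to end.
    if llm is None:
        return "SUMMARY: 2 IoCs, 1 context note. Recommend blocking the IP."
    return llm(prompt)

iocs = [("ip", "203.0.113.42"), ("sha256", "ab12cd34ef56")]
context = ["IP associated with known C2 infrastructure"]
prompt = build_ioc_prompt(iocs, context)
summary = respond(prompt)
```

However the model is invoked, its output is a starting point for the response plan, not the plan itself; the analyst still validates it.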

Speeding up intel

Automation is already used for threat defence within the modern Security Information and Event Management (SIEM) system. The tactics, techniques, and procedures (TTPs) used to orchestrate an attack can be aligned with a specific Security Orchestration, Automation and Response (SOAR) playbook, triggering automatic incident response and reporting processes.
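That TTP-to-playbook alignment can be thought of as a dispatch table. In the sketch below, the technique IDs follow MITRE ATT&CK naming, but the playbook names and the mapping itself are invented for illustration:

```python
# Sketch: map MITRE ATT&CK technique IDs observed in an incident to
# SOAR playbooks, so matched TTPs trigger automated response while
# unmatched ones are routed to an analyst. Playbook names are invented.

PLAYBOOKS = {
    "T1566": "phishing_response",       # Phishing
    "T1486": "ransomware_containment",  # Data Encrypted for Impact
    "T1078": "credential_reset",        # Valid Accounts
}

def select_playbooks(observed_ttps):
    """Return the playbooks to run, plus any TTPs with no automated
    coverage that need manual analyst attention."""
    matched = [PLAYBOOKS[t] for t in observed_ttps if t in PLAYBOOKS]
    unmatched = [t for t in observed_ttps if t not in PLAYBOOKS]
    return matched, unmatched

matched, manual = select_playbooks(["T1566", "T1078", "T9999"])
```

Keeping the unmatched list explicit matters: automation handles the known patterns, while novel techniques are escalated rather than silently dropped.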

Applying an LLM such as ChatGPT to those processes can see the output from the playbook quickly summarised to provide the CISO with recommendations for remediation and improvements. Such intelligence can then be relayed to the board, helping inform future security decisions and investment.


LLMs can also be used to write security documentation, from policies to compliance reports, substantially alleviating workloads and freeing up the SOC team to focus on what they do best – using their intuitive skills to outwit attackers – which Artificial Intelligence cannot replicate.

Using AI to generate awareness

In addition, security teams can use AI to the same ends as their nemeses. While the attacker needs to find workaround prompts to generate reverse shells and develop malware, DevSecOps teams can legitimately use it to hunt for issues and security-check code. They’ll still need to validate the outputs, but it should save them considerable initial effort.

Similarly, while attackers might use Artificial Intelligence to craft phishing attacks, the security team can also create bespoke security awareness campaigns. These use AI to scrape details from resources such as an employee’s social media accounts, and company logs for email addresses, to personalise training on how to avoid a phishing attack.

Managed Security Services Providers (MSSPs) and their customers may benefit too from integrating AI into their security services. A key differentiator is the speed with which the MSSP can monitor and notify their customer base of an incident and provide mitigation advice.

Using ChatGPT, they can accelerate data aggregation and analysis, automating the process and allowing the MSSP to focus on recovery and getting the customer back to BAU.

All of these positive use cases are now driving development of AI-enabled security solutions. The market is actively seeking to invest, with a BlackBerry survey revealing that 82% of IT decision-makers plan to buy AI-driven cybersecurity solutions over the next two years, and 48% before the year is out.

It’s now down to the vendor community to embrace the technology, provide customers with Artificial Intelligence that integrates with their existing offerings, and equip them with the tools to counter the coming threats.
