Business leaders must prioritise building resilience into their AI systems, implementing protection against both conventional cyberattacks and AI-specific threats such as data poisoning.
However, government-led regulation remains essential for establishing standardised frameworks for AI safety and security, argues Darren Thomson, Field CTO EMEAI at Commvault.
The global AI race has reached new heights with the US government’s announcement of a $500bn AI initiative, including the landmark Project Stargate partnership with OpenAI, Oracle, and SoftBank.
This development, coupled with the UK’s recent AI Action Plan, marks a pivotal moment in the international AI landscape.
While both nations demonstrate clear ambitions for AI leadership, a concerning gap is emerging between aggressive growth agendas and the regulatory frameworks needed to ensure secure, resilient AI development.
The growing regulatory gap
The current contrast between regulatory approaches is stark. The EU is progressing with its comprehensive AI Act, while the UK maintains a lighter-touch approach to AI governance. This regulatory divergence, combined with the US government’s recent withdrawal of key AI safety requirements, creates a complex landscape for organisations implementing AI systems in today’s globalised world.
The situation is particularly challenging given the evolving nature of AI-specific cyber threats, from sophisticated data poisoning attacks to vulnerabilities in AI supply chains that could trigger cascading failures across critical infrastructure.
British businesses now face the unique challenge of deploying AI solutions globally without clear domestic governance frameworks. While the government’s AI Action Plan shows commendable ambition for growth, there’s a risk that insufficient regulatory oversight could leave UK organisations exposed to emerging cyber threats, potentially undermining public trust in AI systems.
The plan to establish a National Data Library, which will support AI development by unlocking high-impact public data, brings its own security concerns: how will the datasets be built, who is responsible for defending them, and how can their integrity be assured for years to come when they sit inside multiple AI models at the heart of public, corporate and private life?
By contrast, the EU is progressing with its AI Act, a wide-ranging, legally enforceable framework that plainly puts AI regulation, transparency and harm prevention first. It sets out clear obligations for safe AI development and deployment, including mandatory risk assessments and substantial penalties for non-compliance.
Evolving AI security protocols
This continuing regulatory divergence makes for a complicated environment for companies tasked with building and deploying AI security solutions.
It creates an uneven playing field and, potentially, a much more dangerous AI-enabled future.
Companies must, therefore, chart a path for progress that balances innovation with risk management, integrating strong cybersecurity protocols adapted to the new demands driven by AI, particularly around data poisoning and the data supply chain.
Poisoning the well
Data poisoning occurs when malicious actors deliberately manipulate training data to change the outcomes of AI models. The manipulation might take the form of subtle alterations that are hard to spot yet produce errors and wrong responses, or cybercriminals could tamper with the underlying code to ‘hide’ inside a model and take control of its behaviour.
Such hard-to-spot interference could gradually put an organisation at risk, encouraging poor decision-making and, ultimately, serious harm. In a political context, it could entrench prejudices and encourage bad behaviour.
Because compromised data can blend seamlessly with legitimate data, these attacks are, by nature, difficult to detect until the damage has been done. Poisoning can occur at any point in the data lifecycle, from initial collection, to injection via the data repository, to contamination from other corrupt sources. It is best addressed through robust data validation, anomaly detection, and ongoing oversight of datasets to spot and remove malicious records.
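As a rough illustration of what that screening can look like in practice, the sketch below flags incoming training records that deviate sharply from a trusted baseline so they can be quarantined for review rather than ingested silently. The data, feature counts and threshold are hypothetical assumptions chosen for the example; this is a minimal anomaly-screening sketch, not a complete defence against poisoning.

```python
import numpy as np

def fit_baseline(trusted_batch: np.ndarray):
    """Record per-feature mean and standard deviation from a vetted, trusted batch."""
    return trusted_batch.mean(axis=0), trusted_batch.std(axis=0) + 1e-9

def screen_batch(new_batch: np.ndarray, mean: np.ndarray, std: np.ndarray,
                 z_threshold: float = 4.0):
    """Split a new batch into accepted rows and suspect rows.

    Rows with any feature more than z_threshold standard deviations from the
    trusted baseline are quarantined for human review instead of being added
    to the training set.
    """
    z_scores = np.abs((new_batch - mean) / std)
    suspect_mask = (z_scores > z_threshold).any(axis=1)
    return new_batch[~suspect_mask], new_batch[suspect_mask]

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    trusted = rng.normal(loc=0.0, scale=1.0, size=(1_000, 5))   # vetted historical data
    incoming = rng.normal(loc=0.0, scale=1.0, size=(200, 5))    # new submissions
    incoming[:5] += 10.0                                        # simulated poisoned records

    mean, std = fit_baseline(trusted)
    clean, suspect = screen_batch(incoming, mean, std)
    print(f"accepted {len(clean)} records, quarantined {len(suspect)} for review")
```

Statistical screening of this kind only catches crude outliers; subtle, well-crafted poisoning also requires provenance tracking and ongoing monitoring of model behaviour.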
Defending the data supply chain
The establishment of the National Data Library underscores the risk of supposedly safe models becoming corrupted, with that corruption then spreading quickly up and down the supply chain.
In the coming years, many organisations will rely on these AI models for their daily business so any infection could flow rapidly. Cybercriminals already use AI to boost their attacks, so the prospect of corrupt AI entering the supply chain bloodstream is chilling.
Corporate leaders will, therefore, need to build robust protection measures that support resilience across the supply chain, including proven disaster recovery plans.
In practice, this means putting critical applications first, defining what minimal viable business looks like, and establishing an acceptable risk posture. Companies can then be confident that, in the event of an attack, essential systems can be rebuilt from backups rapidly and completely.
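One hedged way to make that prioritisation concrete is a simple recovery-tiering model, in which each application is assigned a recovery time objective (RTO) and recovery point objective (RPO) reflecting how essential it is to minimal viable operations. The application names, tiers and figures below are illustrative assumptions only, not recommendations.

```python
# Illustrative recovery tiers; application names, RTO/RPO figures and tier
# boundaries are assumptions for the sake of example.
RECOVERY_TIERS = {
    "tier-0-minimal-viable-business": {
        "applications": ["payments", "identity"],
        "rto_hours": 1,      # maximum tolerable downtime
        "rpo_minutes": 15,   # maximum tolerable data loss
    },
    "tier-1-critical": {
        "applications": ["order-management", "customer-support"],
        "rto_hours": 8,
        "rpo_minutes": 60,
    },
    "tier-2-deferrable": {
        "applications": ["analytics", "internal-reporting"],
        "rto_hours": 72,
        "rpo_minutes": 24 * 60,
    },
}

def restore_order(tiers: dict) -> list[str]:
    """Return applications in the order they should be restored after an
    attack, most business-critical (lowest RTO) first."""
    ranked = sorted(tiers.items(), key=lambda item: item[1]["rto_hours"])
    return [app for _, tier in ranked for app in tier["applications"]]

if __name__ == "__main__":
    print(restore_order(RECOVERY_TIERS))
```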
Keeping pace with the risk landscape
It is clear that AI has the potential to supercharge innovation while, at the same time, opening the door to new threats, particularly when it comes to security, privacy and ethics.
As AI becomes more integrated into every company’s infrastructure, the potential for malicious breaches will increase significantly.
The best way forward in terms of risk mitigation is to maintain robust safeguards, ensure transparent development, and uphold ethical values. By balancing innovation with zero tolerance of abuse, organisations can take advantage of AI while defending against corruption. Ultimately, however, only government-enforced legislation can help us all establish AI safety and security frameworks globally.