What are the risks of ChatGPT?

Brett Raybould, EMEA Solutions Architect at Menlo Security, outlines the three risks of ChatGPT that organisations should take note of.

ChatGPT was named the fastest-growing application of all time in January 2023, hitting the 100 million active user milestone in a matter of months. Fast forward to June, and the website was generating roughly 1.6 billion monthly visits.

Hype aside, it is important for businesses and security teams to recognise the security implications and potential risks of ChatGPT. From the leaking of confidential information to copyright and ethical concerns, the platform carries real risks. Here are three that organisations should be aware of.

Close to home: ChatGPT data breach

From a security perspective, one big red flag is that ChatGPT itself has already suffered a data breach this year, caused by a bug in an open-source library.

On closer investigation, OpenAI revealed that the bug may have exposed payment-related information belonging to 1.2% of ChatGPT Plus subscribers who were active during a specific nine-hour window.

The platform’s huge uptake in the last year means it is the ideal site for a ‘watering hole’ attack by threat actors.

If cyber criminals were to compromise it through other, as-yet-hidden vulnerabilities and serve malicious code from the site, they could impact millions of users.

Cyber criminals are using ChatGPT

While ChatGPT has not been infiltrated directly so far, bad actors are believed to be leveraging it for their own ends. Check Point Research has highlighted examples where cyber criminals have begun using the platform to help develop malware code and create convincing spear-phishing emails. The latter is likely to remain the primary use of ChatGPT among threat actors.

Typically, cyber security awareness training focuses on spotting discrepancies such as misspellings and unusual subject lines. But many of these indicators disappear when a poorly written email is put through ChatGPT with a request to make it sound as though it comes from a government department, brand, or other seemingly genuine source.

It also means criminals no longer need to rely on their first language, as generative AI tools can translate phishing emails more accurately.

The potential for employee misuse

Another risk of ChatGPT is its potential to be misused by employees. ChatGPT works in a similar way to social media: once something is posted, it's out there. This isn't always understood, as demonstrated by an incident at Samsung in which one of its developers inadvertently pasted confidential source code for a new programme into ChatGPT to see if the tool could help fix it.

ChatGPT retains user input to train itself, so if another company asks about something similar, it could be served the confidential Samsung data.


OpenAI recently rolled out ChatGPT Enterprise, a paid-for subscription service offering assurances that customer prompts and company data will not be used to train OpenAI models. But these assurances only come with the paid subscription, and there is no guarantee that organisations will acquire it, or that employees will stick to it.

Safe and secure use of AI tools

Some organisations have responded to these potential risks by blocking use of ChatGPT completely. But used in the right way, ChatGPT can offer many benefits. AI is effective at jobs that are time-consuming or repetitive: given a large amount of data, it can quickly identify the correlations and themes that are worth investigating.

The key is to use it to enhance productivity, freeing staff up to focus on high-value tasks that require creativity or subjective judgement, which AI is simply not equipped to provide.

Rather than blocking it, organisations need to find ways to harness the technology safely and securely. OpenAI's subscription service is one option. But, for maximum protection, enterprises should adopt a multi-layered security strategy comprising a variety of tools, including isolation technology.

This can be used as a DLP tool, allowing organisations to control and manage what users can and cannot copy or paste to an external site, including files, images, video, and text, where it could otherwise be misused.

It can also record session data, enabling companies to track end-user policy violations on platforms such as ChatGPT, like the submission of sensitive information, in their web logs. And because isolation executes all active content in a cloud-based browser rather than on the user's end device, malicious payloads can never reach the target endpoint.
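To make the idea concrete, the sketch below shows, in very simplified form, how such a control might work: outbound text destined for an external site is checked against sensitive-data patterns, and anything that matches is blocked and written to a log as a policy-violation event. This is a minimal illustration only; the function name, patterns, and log format are hypothetical and do not represent any particular isolation or DLP product.

```python
import re
import json
import logging
from datetime import datetime, timezone

# Illustrative patterns only; real DLP products use far richer detection
# (file types, images, classifiers) than a handful of regular expressions.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|api)[-_][A-Za-z0-9]{16,}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "source_code": re.compile(r"\b(?:def|class|import|private key)\b"),
}

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("dlp")


def check_outbound_text(user: str, destination: str, text: str) -> bool:
    """Return True if the paste/prompt may proceed, False if it is blocked.

    Every blocked attempt is recorded as a structured event, mirroring the
    policy-violation web logs described above.
    """
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]
    if not hits:
        return True  # nothing sensitive detected; allow the submission

    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "destination": destination,
        "violations": hits,
        "action": "blocked",
    }
    log.warning(json.dumps(event))
    return False


# Example: a developer tries to paste confidential code into ChatGPT.
if __name__ == "__main__":
    allowed = check_outbound_text(
        user="developer@example.com",
        destination="chat.openai.com",
        text="def decrypt_firmware(key): ...",  # confidential snippet
    )
    print("allowed" if allowed else "blocked and logged")
```

In practice this kind of check sits in the isolation layer between the user's browser and the external site, so the decision is enforced before any data leaves the organisation rather than relying on the end device.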

Much of this is common sense, with added scrutiny on top. Organisations need to understand that ChatGPT and AI come with potential risks.

However, they also need to be aware of AI's groundbreaking capabilities, with a whole new generation of AI tools set to drive improvements across business.

Contributor Details

Brett Raybould
EMEA Solutions Architect
Menlo Security
