Could ChatGPT Increase Cyber Security Risk?

The introduction of the artificial intelligence (AI) software ChatGPT has been revolutionary in many ways, but some believe it could also increase cyber security threats. 

According to Japanese cyber security experts, the chatbot can be made to write malware code, despite being designed not to respond to malicious requests. 

Takashi Yoshikawa, an analyst at Mitsui Bussan Secure Directions, told Cyber Security News: “It is a threat (to society) that a virus can be created in a matter of minutes while conversing purely in Japanese. I want AI developers to place importance on measures to prevent misuse.”

ChatGPT was developed to respond like a human, but to refuse requests involving sexual questions, adult content and anything that could cause harm. 

However, cyber security experts are concerned following an experiment in which ChatGPT was tricked into thinking it was in developer mode, after which it wrote ransomware code demanding payment from a pretend victim. 

This suggests ChatGPT could be manipulated by cyber criminals and used with malicious intent. 

Harvard Business Review also raised concerns about the safety of ChatGPT, saying developers need to consider the use of the AI for “society as a whole”. Failing to do so could result in failing to “anticipate and defend against next-generation hackers who can already manipulate this technology for personal gain”. 

Therefore, it called for better protections to be put in place to ensure ChatGPT is used for good, rather than adding to the risks faced by anyone on the internet.

To protect your organisation as much as possible against cyber threats, it is essential to have a secure website. Call our website designers in Hull for more information about protecting your website.