ChatGPT could boost phishing scams
The conversational AI tool could make attackers harder to detect because its output is so fluent and grammatically correct. One way to combat that is to build a detector.
As ChatGPT grows more popular among writers and creators, another group is also likely to use the technology: scammers.
Currently, OpenAI, the creator of the hugely popular conversational language model, restricts some misuse of the technology -- for example, preventing it from saying or doing things that could be racist.
However, Microsoft -- a major investor in OpenAI -- recently revealed plans to incorporate ChatGPT into its Azure AI services, which is likely to open up possibilities for wider use of the technology.
And as the technology advances, it could make phishing attacks easier.
Chester Wisniewski, principal research scientist at Sophos, a security software and hardware vendor, recently studied how easily users can manipulate ChatGPT for malicious attacks.
In this Q&A, Wisniewski discusses what might need to be done to combat the use of ChatGPT for these attacks.
How can ChatGPT make it easier for those with bad intentions to launch phishing attacks?
Chester Wisniewski: The first thing I do whenever someone gives me something is figure out how to break it. As soon as I saw the latest ChatGPT release, I was like, 'OK, how can I use this for bad things? I'm going to play with it to see what bad things I can do.'
Looking at a tool like ChatGPT from a security standpoint, you can go in two different directions. You can look at what it could do technologically, and I've seen some research out there on that. For example, can we get it to write malicious programs? Can we get it to write viruses? Can we get it to do bad things like that? And then there are the social aspects of it.
I briefly looked into the technical aspect, like could you get it to write malware? Of course, yes, you can get it to help you write malware. But we're already good at detecting computer programs that do bad things. It really doesn't matter if it's written by a guy named Ivan, a woman named Carol or an AI bot called ChatGPT. Bad code is bad code. I'm not terribly concerned about the technical aspect.
Where I did find leverage was on the social side: how easy it is to have a conversation with ChatGPT and how well it writes, specifically in American English. For the last 10 years or so, I've been doing quite a bit of research on the impact of how we do security awareness training. The No. 1 thing I hear from users is that the way they detect a lot of the attacks that end up in their mailbox is that the spelling is wrong or the grammar is wrong. While the English may be correct, it's often British English. Many of the people writing the phishing texts are from Africa, India or Pakistan. So, they'll end up putting a 'u' in the word color, or spelling organization with an 's' instead of a 'z', and other things that Americans pick up on.
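As a toy illustration of the spelling cues Wisniewski describes, a filter could flag British spellings that would stand out to an American reader. The word list and function below are illustrative assumptions, not anything from the interview; real mail filters weigh far more signals than spelling.

```python
import re

# A few British spellings that tend to stand out in mail aimed at American readers.
BRITISH_SPELLINGS = ["colour", "organisation", "favour", "realise", "cheque"]

def british_spelling_hits(text: str) -> list[str]:
    """Return any British spellings found in `text`, case-insensitively."""
    return [word for word in BRITISH_SPELLINGS
            if re.search(rf"\b{word}\b", text, re.IGNORECASE)]

print(british_spelling_hits("Please contact your organisation's support team."))
# ['organisation']
```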
If you start looking at ChatGPT and start asking it to write these kinds of emails, it's significantly better at writing phishing lures than real humans are, or at least the humans who are writing them. Most humans who are writing phishing attacks don't have a high level of English skills, and so because of that, they're not as successful at compromising people.
My concern is really how the social aspect of ChatGPT could be leveraged by people who are attacking us. One of the ways we're detecting them right now is that we can tell they're not a professional business. ChatGPT makes it very easy for them to impersonate a legitimate business without even having the language skills or other things necessary to write a well-crafted attack.
What kind of AI tools do we need that can detect if a phishing attack is written by a bot such as ChatGPT?
Wisniewski: We've turned a corner with AI now, between Microsoft's demonstration of [text-to-speech model] Vall-E being able to impersonate people's voices and ChatGPT's remarkable ability to hold a conversation. Human beings are now ineffective at telling whether they're being tricked by a bot or whether they're talking to a real human being. I don't know that we can ever fix that.
From a human standpoint, what this really means is we need to drastically change our expectations and our approach to what we think humans are going to do.
We cannot rely on users to detect whether something is realistic. People will continually be tricked. The technology is too good, and humans are never going to get better; we're not going to have version 2.0 of humans. So, on the human side, we're at our limit.
That's where it's going to be interesting. There are quite a few experiments out there being done by all different groups. The most interesting one I've seen comes from a research group called Hugging Face, which has built a model that reliably detects text generated by ChatGPT.
So, you can run your own AI model for things like email filtering, and we're probably going to need to implement something like that. We'll be able to scan incoming email bodies, just like we look at them for spam signs to see if they're trying to sell us a Russian bride or some heart medicine or whatever. We're going to be able to start detecting it with programs that tell us 'The text in this email was actually written by a human' or 'We have 92% confidence it was probably written by ChatGPT or some similar ML [machine learning] model.' I think, with improvements over time, we'll get pretty good at recognizing the things it creates.
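A minimal sketch of the kind of email filtering described here, using the Hugging Face transformers library. The model named below, openai-community/roberta-base-openai-detector, is a publicly available GPT-2 output detector used purely as a stand-in; it is not the ChatGPT detector Wisniewski refers to, and the confidence wording is an illustrative assumption.

```python
from transformers import pipeline

# Load a machine-generated-text classifier (a GPT-2 output detector, used as a stand-in).
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

def score_email(body: str) -> str:
    """Return a human-readable verdict for one email body."""
    result = detector(body, truncation=True)[0]
    # The detector labels text "Real" (human-written) or "Fake" (machine-generated).
    return f"{result['label']} ({result['score']:.0%} confidence)"

print(score_email(
    "Dear customer, we detected unusual activity on your account. "
    "Please verify your details at the link below to avoid suspension."
))
```

In a mail pipeline, a score like this would be one more signal alongside existing spam heuristics, not a hard block.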
But, for example, if people start writing ad copy or doing copywriting with ChatGPT, there may be a lot of 'legitimate' texts that come into your email that are actually written by ChatGPT. That's not a scam. So, I don't think it'll be solved technologically, but I think we can help the humans make decisions.
The first step in that direction is to start implementing the ability to detect things written by it, so that we can give humans a heads-up: 'Something's weird with this.'
ChatGPT and everything like it is a rules-based system. It has a set of rules that it follows, and it cannot break the rules that have been set for it. And that predictability in the way that it programmatically works inside the computer means we can always tell that a model like that did it. It has certain signs that give it away, that it followed the rules. Humans never follow rules.
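One common way to measure the kind of predictability that gives model-generated text away is perplexity scoring: a language model tends to find machine-written text more predictable than human writing. The sketch below is an illustrative assumption, not a method from the interview; it uses GPT-2 as the scoring model and an arbitrary cutoff.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# GPT-2 is used here only as a convenient, freely available scoring model.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower means more predictable."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return float(torch.exp(loss))

sample = "Please verify your account details at your earliest convenience."
# The 30.0 cutoff is an arbitrary illustrative threshold, not a published value.
print("suspiciously predictable" if perplexity(sample) < 30.0 else "looks more human-written")
```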
How could regulation help curb abuse of tools such as ChatGPT for phishing attacks?
Wisniewski: Legislation, I suspect, isn't going to be terribly effective, because no matter what you tell the companies to do in terms of AI, you're in essence outlawing having AI at all. The only way to prevent abuse of it is for it not to exist. There's no stopping this train. The train has left the station, and no number of laws is going to undo this technology being available to the world.
Editor's note: This Q&A was edited for clarity and conciseness.