Scammed by a Robot: How AI is Revolutionizing Online Fraud

Sat Jan 14 2023

Posted in Phishing

With the release of OpenAI’s ChatGPT, advanced language models are a hot topic. As the technology advances, cyber criminals are finding new and alarming ways to use it for their own purposes. This raises concerns among security experts, as these models have the power to automate and scale fraud and scams on an unprecedented level.

Phishing

Large crime organizations are already using GPT-based AI models to generate highly convincing phishing emails and text messages. These messages are designed to mimic legitimate communications from banks, government agencies, and other organizations that people trust. By using GPT-based AI models, cyber criminals can more quickly create realistic, personalized phishing messages that are hard to distinguish from the real thing. To make things worse, tools like Jasper.ai also lower the barrier to producing fraudulent correspondence in victims’ native languages and to automatically following up on their replies, giving victims less time to realize they are being scammed.

Identity theft

While fake face generators are already a common tool for fraudulent organizations to make up a fictional human workforce, GPT-based models make it easier to impersonate both fictional and real people online. For instance, criminals may create fake social media profiles or dating profiles with artificially generated messages that sound like they're from real people. These can be used to scam people out of money or personal information.

[Image: Stable Diffusion impression of a dark room full of scam computers]

Fake websites

A common way to quickly get into the pockets of gullible consumers is to set up fake web shops. Often, these sites are cheaply produced and of poor quality, leading most visitors to suspect illegitimate practices. However, fraudsters are already leveraging AI to create fake websites that look very professional, since the underlying models are trained on millions of real homepages. Apart from shopping fraud, these sites can be used to steal people's personal information or credit card details. Beyond the design and architecture, robots can quickly generate realistic-looking text, pictures, and even fake celebrity endorsement videos that make the phony sites almost impossible to distinguish from the real thing.

Prevention

There is good news as well. While the new generation of language models is a powerful tool for cyber criminals, it can also be used to detect and prevent fraud. Leading companies like MasterCard and Vectra.ai are using advanced AI techniques that go beyond language models alone to automatically flag suspicious emails and text messages, and to identify fake websites.
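To give a feel for what automated flagging involves, here is a deliberately simplified sketch in Python. It is not how MasterCard or Vectra.ai actually work (their systems use far more sophisticated machine-learning models); it is just a toy rule-based scorer, with patterns and thresholds chosen for illustration, that counts how many classic phishing phrases appear in a message:

```python
import re

# Illustrative patterns only; real systems learn these signals from data.
SUSPICIOUS_PATTERNS = [
    r"verify your account",
    r"urgent(ly)?",
    r"click (here|the link)",
    r"suspended",
    r"confirm your (password|identity)",
    r"wire transfer",
]

def phishing_score(message: str) -> float:
    """Return the fraction of suspicious patterns found (0.0 to 1.0)."""
    text = message.lower()
    hits = sum(1 for pattern in SUSPICIOUS_PATTERNS if re.search(pattern, text))
    return hits / len(SUSPICIOUS_PATTERNS)

def is_suspicious(message: str, threshold: float = 0.3) -> bool:
    """Flag a message when its score crosses an (arbitrary) threshold."""
    return phishing_score(message) >= threshold
```

A message like "URGENT: verify your account or it will be suspended. Click here." trips several patterns and gets flagged, while an ordinary note about a lunch meeting does not. The obvious weakness of this keyword approach is exactly the point of the article: AI-generated phishing avoids such telltale phrases, which is why defenders are moving to learned models rather than hand-written rules.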

The importance of keeping the finger on the pulse

The bottom line is that as GPT-based AI models become more prevalent, the threat of fraud and scams will also increase. It's crucial for individuals and organizations to stay vigilant and aware of the potential risks. The technology is here to stay, and we must learn to adapt and protect ourselves accordingly. As with every emerging technology, a basic understanding of the risks and opportunities should be shared by everyone in your organization. After all, nobody wants to relive the chaos and ignorance that allowed blockchain fraudsters to run away with billions of dollars.
