Scammers Increasingly Using AI for Fraud; Governments Slow to Regulate

Published: March 7, 2023
A visitor watches an AI (Artificial Intelligence) sign on an animated screen at the Mobile World Congress (MWC), the telecom industry's biggest annual gathering, in Barcelona. (Image: JOSEP LAGO/AFP via Getty Images)

Scammers are increasingly turning to artificial intelligence (AI) to bilk unsuspecting targets out of their hard-earned money.

According to a report by the Washington Post, thousands of victims have lost millions of dollars to scammers who are using AI to mimic the voices of loved ones pleading for help.

VALL-E, an AI model developed by Microsoft that can accurately simulate a person’s voice after analyzing just three seconds of audio, burst onto the scene early this year and was quickly adapted to perpetrate fraud.

“VALL-E can take a three-second recording of someone’s voice, and replicate that voice, turning written words into speech, with realistic intonation and emotion depending on the context of the text,” reads an official VALL-E website.

The developers of the AI speculate that the technology could be used for high-quality text-to-speech applications and could assist content creators when combined with other AI models like GPT-3. However, the tech was immediately adopted for a darker purpose, often targeting the elderly.

Ruth Card, a 73-year-old grandmother, told the Washington Post that she received a phone call from someone she thought was her grandson, Brandon, pleading for money to post bail. She and her husband rushed to the bank to secure the funds.

“It was definitely this feeling of … fear. That we’ve got to help him right now,” Card told the Washington Post, adding, “We were sucked in. We were convinced that we were talking to Brandon.”

Card uncovered the scam before handing over any money. However, not everyone was so lucky.

Canadian couple loses $21,000

A Canadian couple reportedly lost $21,000 to scammers after receiving a call from a man they believed to be a lawyer representing their son. The caller claimed the son had been jailed for killing a diplomat in a car accident.

Benjamin Perkin told the Washington Post that the scammers used an AI-generated imitation of his voice to plead with his parents for money. A man claiming to be Perkin’s lawyer then followed up, saying that $21,000 was needed for legal fees.

Perkin told the Washington Post that the voice was “close enough for my parents to truly believe they did speak with me.”

Believing their son was in trouble, the couple gathered the cash and, although they admitted the call sounded suspicious, sent it to the scammer in Bitcoin. They didn’t realize they had been scammed until they received an actual call from their son.

The couple filed a police report with Canadian authorities, but admitted to the Washington Post, “The money’s gone. There’s no insurance. There’s no getting it back.”

Perkin speculated that the scammers obtained samples of his voice via videos he posted on YouTube about snowmobiling. 

Is regulation coming?

A multitude of AI tools is now accessible to the general public, able to create convincing text, generate an image or video from a simple prompt, or, as in the case of phone scams, convincingly mimic anyone’s voice.

Calls to regulate the technology are rising, but unlike in the past, governments seem reluctant to act. Writing for Brookings in March 2020, Mark MacCarthy observed, “Regulation is seen as a cost, a hindrance, a delay, or a barrier which must be reluctantly accepted as a last resort only if absolutely necessary.”

In the 1970s, when the credit card industry was just emerging, consumers were on the hook for any fraudulent transactions on their cards, even if a card had been lost or stolen.

The U.S. Congress addressed the issue by passing the 1974 Fair Credit Billing Act, which limited cardholder liability and stopped credit card companies from passing fraud losses on to cardholders.

The legislation inspired confidence in the credit card industry, allowing it to grow into a robust, trustworthy system, and spurred innovation.

“However, policymakers have forgotten this beneficial side effect of regulation, preferring to give industry players free rein to deploy emerging technologies as they see fit,” MacCarthy wrote. 

The White House did release a “Guidance for Regulation of Artificial Intelligence Applications” in 2020, establishing a framework for future legislation, but it does not appear to have resulted in any meaningful action.

People need to trust the tech

As companies continue to embed AI in their products and services, lawmakers’ attention is shifting from protecting data to how that data is used by software.

A 2020 European Commission white paper entitled “On Artificial Intelligence—A European Approach to Excellence and Trust” sought to lay out a legal framework for AI, arguing that regulation is essential to the technology’s development, inspiring trust in consumers and spurring innovation.

The paper argued that as technology becomes an ever more central part of the human experience, people need to be able to trust it. “Trustworthiness is also a prerequisite for its uptake,” its authors wrote.

Despite these efforts, however, new unregulated AIs are flooding the internet, many of them free to anyone, and experts predict that the market for AI chips will grow exponentially through the end of the decade.

As scammers continue to adopt AI to conduct fraud, trust in the technology will continue to erode, and recent gaffes have only accelerated that erosion.

In February this year, Google unveiled Bard, an AI chatbot released as a competitor to ChatGPT, which went viral at the end of 2022 for its ability to generate convincing text.

However, a factual error the chatbot generated in an advertisement launching the tech wiped roughly US$100 billion off Alphabet’s market value.

In the ad, the bot was presented with the question, “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?”

The chatbot spat out a number of answers, but one was glaringly incorrect: it claimed that the JWST took the very first pictures of an exoplanet, a feat actually achieved by the European Southern Observatory’s Very Large Telescope in 2004.

AI industry is out of control

Cynthia Rudin, the Earl D. McLean, Jr. Professor of Computer Science, Electrical and Computer Engineering, Statistical Science, Mathematics, and Biostatistics & Bioinformatics at Duke University, recently told WRAL TechWire that “AI technology right now is like a runaway train and we are trying to chase it on foot.”

She argued that big tech companies are not incentivized to create ethical tools, only to do whatever generates profit.

“The problem is when they say things like ‘we want to democratize AI,’ it’s really hard to believe that when they’re making billions and billions of dollars,” she said, adding, “So, it would be better if these companies weren’t monopolies and people had a choice of how they wanted this technology to be used.”

Rudin said that governments “should definitely step in” and regulate the tech, pointing out that “It’s not like they didn’t have enough warning.”

She noted that recommender systems — the AI numerous platforms use to suggest content — have been in use for years, yet governments have placed no regulations on them, and people have little to no say in how the technology is used.

When asked about the worst-case scenario, Rudin said that “misinformation is not innocent.”

“It does real damage to people on a personal level. It’s been the cause of wars in the past. Think of World War II, think of Vietnam. What I’m really concerned about is that misinformation is going to lead to a war in the future, and AI is going to be at least partly to blame,” she said.