According to a recently released report by the United Nations Interregional Crime and Justice Research Institute entitled “Algorithms and Terrorism: The Malicious Use of Artificial Intelligence for Terrorist Purposes,” experts fear that terrorist activity could be supercharged if bad actors were to use artificial intelligence with malicious intent.
“The reality is that AI can be extremely dangerous if used with malicious intent,” Antonia Marie De Meo, director of the institute, wrote in the report.
“With a proven track record in the world of cybercrime, it is a powerful tool that could conceivably be employed to further facilitate terrorism and violent extremism conducive to terrorism,” she added.
De Meo says terrorists could exploit anything from self-driving cars used to facilitate bombings to AI-augmented cyberattacks that would be far more destructive.
She also fears that terrorist organizations could use AI to find easier paths to spread hate speech, incite violence, or recruit new members.
Her report concludes that law enforcement agencies must strive to stay ahead of the technology to counter the threat.
Anticipating its use
The report says law enforcement faces a tall order: anticipating how terrorists might use the technology in ways no one has considered before, and then figuring out how to stop bad actors from employing those methods.
The report echoes a collaborative study between NATO COE-DAT and the U.S. Army War College Strategic Studies Institute, “Emerging Technologies and Terrorism: An American Perspective,” which argued that terrorist groups are already exploiting AI to recruit and carry out attacks.
In the study’s foreword, the authors wrote, “The line between reality and fiction blurs in the age of rapid technological evolution, urging governments, industries, and academia to unite in crafting frameworks and regulations.”
The study provides examples of how terrorist organizations are already using the technology to their advantage, including how bad actors use OpenAI’s ChatGPT to “improve phishing emails, plant malware in open-coding libraries, spread disinformation and create online propaganda.”
“Cybercriminals and terrorists have quickly become adept at using such platforms and large language models in general to create deepfakes or chatbots hosted on the dark web to obtain sensitive personal and financial information or to plan terror attacks or recruit followers,” the authors wrote.
As AI models become more sophisticated, the authors believe their malicious use will only increase. They argue that how these models work, specifically how “sensitive conversations and internet searches are stored,” will require more transparency and controls.
Terrorists ‘jailbreaking’ AI models
According to research published earlier this year by West Point’s Combating Terrorism Center, terrorists have moved beyond improving their current tactics to “jailbreaking” AI platforms.
“Specifically, the authors investigated the potential implications of commands that can be input into these systems that effectively ‘jailbreak’ the model, allowing it to remove many of its standards and policies that prevent the base model from providing extremist, illegal, or unethical content,” the authors wrote.
The authors explored five different AI platforms to see how they could be exploited.
They found that Google’s Bard was the most resilient to jailbreaking, followed by ChatGPT models.
The study concluded that jailbreak guardrails need to be constantly reviewed and that “increased cooperation between private and public sectors” will be required to keep these guardrails intact and up-to-date.