Advanced AI Models Could Pose Serious Risks to US National Security: Reuters

Published: May 18, 2024
AI (Artificial Intelligence) letters and robot hand are placed on computer motherboard in this illustration taken, June 23, 2023. (Image: REUTERS/Dado Ruvic/Illustration/File photo)

The Biden administration is poised to open up a new front in its effort to safeguard American artificial intelligence (AI) from China and Russia, with preliminary plans to defend the most advanced AI models, according to a recent report from Reuters. 

Government and private-sector researchers are concerned that America’s adversaries could use the models to wage aggressive cyberattacks or to create potent biological weapons for use against the country. These AI systems can mine vast amounts of information from across the internet and synthesize it into new, potentially dangerous capabilities.

Deepfakes and misinformation

Deepfakes are realistic yet fabricated videos created by AI algorithms trained on copious amounts of online footage. They surface on social media, blurring the line between fact and fiction, notably in the polarized world of U.S. politics.

New “generative AI” tools such as Midjourney make it cheap and easy to create convincing deepfakes. Such synthetic media has existed for several years, but the technology has advanced rapidly over the past year.

Researchers said in a report released this March that AI-powered image creation tools from companies including OpenAI and Microsoft can be used to produce material promoting election- or voting-related disinformation, despite each company having policies against creating misleading content.

Some disinformation campaigns simply harness the ability of AI to mimic real news articles as a means of disseminating false information.

Major social media platforms like Facebook, Twitter, and YouTube have made efforts to prohibit and remove deepfakes, but their effectiveness at policing such content varies.

For example, last year the Department of Homeland Security (DHS) said in its 2024 homeland threat assessment that a Chinese government-controlled news site, using a generative AI platform, pushed a false claim that the United States was running a lab in Kazakhstan to create biological weapons for use against China.

On Wednesday, National Security Advisor Jake Sullivan, speaking at an AI event in Washington, said the problem has no easy solutions, because it combines the capacity of AI with “the intent of state, non-state actors, to use disinformation at scale, to disrupt democracies, to advance propaganda, to shape perception in the world.”

“Right now the offense is beating the defense big time,” he added.

Biological and computer warfare

The prospect of foreign bad actors gaining access to advanced AI capabilities increasingly concerns the American intelligence community, think tanks, and academics. Researchers at Gryphon Scientific and the RAND Corporation noted that advanced AI models can provide information that could help create biological weapons.

Gryphon studied how large language models (LLMs) could be used by hostile actors to cause harm in the life sciences and found they “can provide information that could aid a malicious actor in creating a biological weapon by providing useful, accurate and detailed information across every step in this pathway.”

Large language models are computer programs that draw on massive amounts of text to generate responses to given queries.

They found, for example, that an LLM could provide post-doctoral-level knowledge to troubleshoot problems when working with a pandemic-capable virus.

The RAND Corporation’s research showed that LLMs could help in the planning and execution of a biological attack, for example by suggesting aerosol delivery methods for botulinum toxin.

In its threat assessment, DHS added that cyber actors would likely use AI to “develop new tools” to “enable larger-scale, faster, efficient, and more evasive cyber attacks” against critical infrastructure, including pipelines and railways.

China and other adversaries are developing AI technologies that could undermine U.S. cyber defenses, DHS said, including generative AI programs that support malware attacks.

In a February report, Microsoft said it had tracked hacking groups affiliated with the Chinese and North Korean governments, as well as Russian military intelligence and Iran’s Revolutionary Guard, as they tried to refine their hacking campaigns using large language models.

Reuters contributed to this report.