
State-Sponsored Hackers Are Leveraging Microsoft-Backed AI in Espionage Efforts: Report

Alina Wang
Published: February 14, 2024
The logo of the U.S. computer and micro-computing company, Microsoft is visible on the facade of its head office on Jan. 25, 2023 in Issy-les-Moulineaux, France. The tech giant will be holding a press conference on Feb. 7 where it’s expected that the company will unveil the “new Bing,” a search engine that exploits the capabilities of OpenAI’s ChatGPT. (Image: Chesnot via Getty Images)

According to a recent report by tech giant Microsoft, hackers backed by the governments of China, Russia, Iran, and North Korea have been utilizing its artificial intelligence (AI) tools to advance their cyber-espionage efforts. 

The tools, developed by OpenAI and backed by Microsoft, are designed to apply AI to a wide range of tasks, including natural language processing and machine-learning features that help industries simplify workflows and improve user experience.
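For readers unfamiliar with these tools, the following is a minimal sketch of how such a language model is typically invoked through the official OpenAI Python SDK (v1.x); the model name and prompt are illustrative placeholders, not details drawn from Microsoft's report.

```python
# Minimal sketch: a routine call to an OpenAI language model via the
# official Python SDK (v1.x). Model name and prompt are illustrative
# placeholders, not details from Microsoft's report.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any available chat model
    messages=[
        {"role": "system", "content": "Summarize the user's text in one sentence."},
        {"role": "user", "content": "Large language models can draft, translate, and summarize text for everyday business workflows."},
    ],
)
print(response.choices[0].message.content)
```

The same general-purpose interface that powers these benign uses is what makes misuse by bad actors difficult to distinguish from ordinary traffic.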

The announcement, made public on Feb. 14, sheds light on the sophisticated means through which these state-sponsored entities are advancing and honing their hacking skills.

A hacker’s arsenal

The report from Microsoft (MSFT.O) outlines the detection of activities by hacking groups affiliated with entities such as Russian military intelligence, Iran’s Revolutionary Guard, and the governments of China and North Korea. These groups have been experimenting with language models including ChatGPT and code interpreter tools, the report notes. 

Tom Burt, Microsoft’s Vice President for Customer Security, emphasized the company’s stance against the misuse of AI technology for unethical purposes. He noted that such technology has evidently become a tool in the arsenals of hackers seeking to “refine their methods” and deceive their targets more effectively.

“Independent of whether there’s any violation of the law or any violation of terms of service, we just don’t want those actors that we’ve identified — that we track and know are threat actors of various kinds — to have access to this technology,” Burt told Reuters in an interview. 


In a decisive move, Microsoft also announced a comprehensive ban on the use of its AI products by state-sponsored hacking groups, and underscored its commitment to “preventing the exploitation” of its tools by foreign entities. 
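Microsoft has not described how such a ban is enforced. Purely as a hypothetical sketch, a provider-side gate that refuses service to accounts attributed to tracked threat actors might look like the following; every name and value here is invented for illustration.

```python
# Hypothetical sketch only: Microsoft has not disclosed its enforcement
# mechanism. This shows one generic approach, denying API access to
# accounts flagged as tracked threat actors. All identifiers are invented.
from dataclasses import dataclass

# Accounts attributed to tracked threat actors (illustrative placeholder data).
FLAGGED_ACCOUNTS: set[str] = {"acct_123", "acct_456"}

@dataclass
class ApiRequest:
    account_id: str
    prompt: str

def authorize(request: ApiRequest) -> bool:
    """Deny service to accounts attributed to tracked threat actors."""
    return request.account_id not in FLAGGED_ACCOUNTS

if __name__ == "__main__":
    req = ApiRequest(account_id="acct_123", prompt="...")
    print("allowed" if authorize(req) else "blocked")  # -> blocked
```

In practice, attribution and account tracking are far harder than a simple list lookup, which is why the company emphasizes intelligence on threat actors rather than purely technical controls.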

Senior cybersecurity officials in the West have long been sounding the alarm about the potential misuse of AI by malicious actors. “This is one of the first, if not the first, instances of an AI company coming out and discussing publicly how cybersecurity threat actors use AI technologies,” said Bob Rotsted, who leads cybersecurity threat intelligence at OpenAI.

Meanwhile, the response from the accused nations was varied. While Russian, North Korean, and Iranian officials have not commented on the accusations thus far, a spokesperson for China’s embassy in the U.S., Liu Pengyu, countered the allegations with a statement opposing “groundless smears and accusations against China.”

Instead, Liu advocated for a “safe, reliable and controllable” deployment of AI technology that aims to “enhance the common well-being of all mankind.” 

Ethical and environmental implications

The incident marks a significant development in the ongoing discourse about the security implications of rapidly evolving AI technologies. These concerns include the potential for spreading misinformation, breaching privacy, stealing intellectual property, and manipulating public opinion.

In addition, there are worries regarding accountability, ethical use, and the challenges surrounding the regulation of such tools without stifling innovation or freedom of expression.


There are also environmental concerns associated with the growth of AI, given the technology's extensive hardware and resource needs. Driven by the demand for ever-larger datasets and models, these technologies, including language models, generate carbon emissions that some researchers warn could come to rival those of the aviation industry.

According to a study by Data Center Frontier, a single medium-sized data center may use up to 360,000 gallons of water every day for cooling. However, the full environmental impact of AI tools remains uncertain, as public data on the scale of these resource demands is still lacking.
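To put that figure in perspective, here is a quick back-of-the-envelope calculation, assuming the cited upper estimate holds year-round:

```python
# Back-of-envelope arithmetic on the figure cited above.
GALLONS_PER_DAY = 360_000    # Data Center Frontier's upper estimate
LITERS_PER_GALLON = 3.785    # US liquid gallon

annual_gallons = GALLONS_PER_DAY * 365
annual_liters = annual_gallons * LITERS_PER_GALLON

print(f"{annual_gallons:,} gallons/year")   # 131,400,000 gallons/year
print(f"{annual_liters:,.0f} liters/year")  # ~497,349,000 liters/year
```

At the cited rate, a single facility would consume over 130 million gallons of water a year, which underscores why researchers are pressing for more transparency about these operations.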

Staying the course

While Microsoft and OpenAI characterized the hackers’ exploitation of AI tools as “early-stage” and “incremental,” the implications of this development are far-reaching. The fact that these state-backed groups are exploring and integrating AI into their espionage toolkit without achieving significant breakthroughs, as noted by Burt, does not diminish their potential threat.

“We really saw them just using this technology like any other user,” said Burt, adding that a “nuanced understanding” of how these technologies are being incorporated into broader hacking strategies should be a priority. 

Experts also note that Microsoft’s revelation should serve as a wake-up call to the global community about the dual-use nature of AI technologies.

“These risks are real and here now, not in a science fiction future,” a study from the Oxford Internet Institute found. “AI is already reinforcing and exacerbating many challenges already faced by society, such as bias, discrimination and misinformation.” 

As the digital landscape continues to evolve, the incident reaffirms the importance of vigilance, ethical guidelines, and comprehensive strategies to ensure that the benefits of AI are harnessed responsibly.