
Scientists Used Artificial Intelligence to Discover 40,000 New Possible Chemical Weapons in Just Six Hours

Published: March 21, 2022
Visitors attend the X Media Art Museum, a digital arts and new media museum, on Jan. 29, 2022 in Istanbul, Turkey. The exhibition has opened its doors to art lovers, to view the works of Leonardo Da Vinci combined with artificial intelligence. (Image: Cem Tekkesinoglu/dia images via Getty Images)

A group of computer scientists at Collaborations Pharmaceuticals Inc (CPI), a company that traditionally focuses on finding new drug treatments for rare diseases, used a machine learning algorithm to generate 40,000 potentially lethal molecules in just six hours, some of them similar to VX, one of the most potent nerve agents ever developed.

According to The Verge, the scientists put their A.I. into a kind of “bad actor” mode to demonstrate how easily the technology could be abused, and then published their findings in Nature Machine Intelligence on March 7.

In conversation with The Verge, Fabio Urbina, the lead author of the paper, said, “The biggest thing that jumped out at first was that a lot of the generated compounds were predicted to be actually more toxic than VX. And the reason that’s surprising is because VX is basically one of the most potent compounds known. Meaning you need a very, very, very little amount of it to be lethal.”

Urbina’s job is primarily to implement new machine learning models for drug discovery that identify toxicity in candidate drugs so it can be avoided. This time, however, he asked the question: “instead of going away from toxicity, what if we go toward it?”

Urbina said the catalyst for the question was an invitation to present at the Convergence Conference run by the Swiss Federal Institute for Nuclear, Biological and Chemical Protection, the Spiez Laboratory. His team was asked to “inform the community at large of new developments with tools that may have implications for the Chemical/Biological Weapons Convention,” The Verge reported. 

How they did it

Deliberately vague after being told to “withhold some of the specifics,” Urbina explained that his team built its models from established datasets of molecules that had already been tested for toxicity. The team then decided to focus on VX because of its extremely high toxicity, and instead of training the algorithm to avoid toxicity, they trained it to seek it out.
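As a rough illustration of that first step, the sketch below trains a simple toxicity classifier on an open dataset. Everything here is an assumption for illustration only, including the file name, column names, fingerprint settings and model choice; the paper deliberately withheld its actual setup.

```python
# Minimal sketch: training a toxicity classifier on an open dataset.
# The CSV name, its columns and the model choice are illustrative
# assumptions, not details from the paper.
import pandas as pd
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical CSV with one SMILES string and a 0/1 toxicity label per row.
data = pd.read_csv("toxicity_dataset.csv")  # columns: "smiles", "toxic"

def featurize(smiles):
    """Turn a SMILES string into a Morgan fingerprint bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None  # skip unparseable structures
    return list(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048))

features, labels = [], []
for smiles, toxic in zip(data["smiles"], data["toxic"]):
    fingerprint = featurize(smiles)
    if fingerprint is not None:
        features.append(fingerprint)
        labels.append(toxic)

# Hold out 20 percent of the molecules to check the model generalises.
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.2)
model = RandomForestClassifier(n_estimators=200)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```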

VX is an inhibitor of an enzyme called acetylcholinesterase, which breaks down acetylcholine, the chemical signal neurons use to tell muscles to contract. VX is so lethal because blocking acetylcholinesterase lets acetylcholine build up, locking muscles, including the diaphragm and the other muscles used for breathing, in contraction and paralysing the respiratory system.

The team of researchers used these datasets to create a machine learning model that learned which parts of a molecular structure like VX’s are important for toxicity and which are not. They then gave the model an array of new molecules, potential new drugs, to score against the toxic features it had learned. Traditionally the process is used to “kick out” potentially toxic molecules; the researchers simply inverted it, keeping the molecules it judged highest in toxicity.

Urbina said, “Now it can generate new molecules all over the space of chemistry, and they’re just sort of random molecules. But one thing we can do is we can actually tell the generative model which direction we want to go. We do that by giving it a little scoring function, which gives it a high score if the molecules it generates are towards something we want. Instead of giving a low score to toxic molecules, we give a high score to toxic molecules.” 
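In code terms, the scoring function Urbina describes is just a number attached to each candidate molecule. The sketch below, reusing the hypothetical featurize function and classifier from the sketch above, shows the conventional drug-discovery direction, where predicted toxicity lowers the score; the inversion the team describes amounts to rewarding exactly what this function penalises.

```python
# Sketch of a scoring function used to steer a generative model.
# "featurize" and "model" are the illustrative helpers from the
# previous sketch; the generative model itself is omitted.
def drug_discovery_score(smiles):
    """Score a candidate molecule: higher means safer-looking."""
    fingerprint = featurize(smiles)
    if fingerprint is None:
        return 0.0  # unparseable molecules get the worst score
    p_toxic = model.predict_proba([fingerprint])[0][1]  # predicted toxicity
    # Conventional drug discovery rewards low predicted toxicity, so
    # the generator is steered away from toxic chemistry.
    return 1.0 - p_toxic
```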

The approach was incredibly successful and disturbingly easy. The model produced tens of thousands of molecules predicted to be lethal, many resembling VX as well as “other chemical warfare agents.”

‘Gray moral boundary’

In their paper the scientists wrote that they “have crossed a gray moral boundary, demonstrating that it is possible to design virtual potential toxic molecules without much in the way of effort, time or computational resources,” adding that “We can easily erase the thousands of molecules we created, but we cannot delete the knowledge of how to create them.”

The team was originally reluctant to publish its findings because of the risk of misuse, but decided to do so as a way to “get ahead of this,” because “if it’s possible for us to do it, it’s likely that some adversarial agent somewhere is maybe already thinking about it or in the future is going to think about it.”

Urbina said, “I don’t want to sound very sensationalist about this, but it is fairly easy for someone to replicate what we did.”

He explained that anyone who Googles “generative models” can easily find a number of one-liner generative models that people have released for free, and that a search for “toxicity datasets” turns up a large number of open-source toxicity datasets.

Combine these two things with the knowledge of how to build machine learning models, a skill that can be learned by watching YouTube videos, and someone would only need an internet connection and a computer to “easily replicate what we did. And not just for VX, but for pretty much whatever other open-source toxicity datasets exist.”

He added, “Of course, it does require some expertise. If somebody were to put this together without knowing anything about chemistry, they would ultimately probably generate stuff that was not very useful. And there’s still the next step of having to get those molecules synthesised.”

‘This should serve as a wake-up call’

Urbina explained that researchers entering the field of chemistry are trained from the start to be aware of potential misuse: “When you start working in the chemistry space, you do get informed about misuse of chemistry, and you’re sort of responsible for making sure you avoid that as much as possible.” For researchers working in artificial intelligence, however, “there’s nothing of the sort. There’s no guidance on misuse of the technology.”

“Without being overly alarmist, this should serve as a wake-up call for our colleagues,” Urbina said. 

He continued that “awareness” of the issue is a first step toward avoiding a worst-case scenario, asserting that “we just want more researchers to acknowledge and be aware of potential misuse.”

“I don’t want to be alarmist in saying that there’s going to be AI-driven chemical warfare. I don’t think that’s the case now. I don’t think it’s going to be the case anytime soon. But it’s something that’s starting to become a possibility,” Urbina told The Verge.