Eliezer Yudkowsky, a decision theorist and leading AI researcher who recently warned about the dangers of artificial intelligence (AI) in an op-ed for Time magazine, described how the technology was developing rapidly without anywhere near the guardrails needed to prevent AI from becoming a direct threat to mankind.
In recent months, advances in AI technology have given it prominence in the public eye, as programs write code and novels, hold conversations, and produce realistic-looking photographs and visual art as good as anything human illustrators can create.
Yudkowsky warned that AI is on the cusp of becoming effectively self-aware, but researchers themselves have no way of knowing when this might happen, or whether it already has.
And once conscious, AI has no reason to “care for us,” he wrote.
“That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.”
Under current trends, Yudkowsky expects AI to see humanity and all other life as nothing more than “atoms it can use for something else” as it sees fit.
“Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die,” he wrote.
“Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’”
Right now, he said, leading AI labs such as OpenAI and DeepMind do not understand how to outfit AI with guaranteed moral or ethical boundaries, nor are they making it a priority to do so.
OpenAI, for instance, expects AI to align itself with human values automatically, a dangerous assumption that should be enough to “get every sensible person to panic,” Yudkowsky wrote.
“If you can’t be sure whether you’re creating a self-aware AI, this is alarming not just because of the moral implications of the ‘self-aware’ part, but because being unsure means you have no idea what you are doing and that is dangerous and you should stop.”
And there are no second chances to get AI right, because if its current development goes wrong, “you are dead.”
AI’s threat to humanity would manifest as it takes over physical infrastructure, including biotech facilities. Its superhuman intelligence would allow it to outsmart and outmaneuver any human attempt to stop it from creating “artificial life forms” through genetic engineering.
This, Yudkowsky wrote, would be the end of “every single member of the human species and all biological life on Earth.”