Human dreams have been the subject of curiosity for fortune tellers and psychoanalysts for many years. But what if artificial intelligence (AI) starts hallucinating? What comes out of such hallucinations, and what challenges will this pose for us? Thanks to some brilliant researchers, we now have an inkling of the potential of AI hallucinations and the problems they might create.
Dreamed up psychedelic images
Google is investing heavily in AI research. The company's artificial neural network (ANN) produced a series of psychedelic images after being fed visuals over a period of time. Once enough images had been fed to the network, the team behind the project asked the AI to interpret and enhance a specific image any way it liked. The results were surprising.
The AI could distinguish between various images and generate new ones, thanks to the millions of visuals already in its memory. It could also generate images on request from the researchers.
So, if a person asked the AI for a starfish, the network could churn out an image of a starfish from its memory. Next, the team fed images into the AI's higher-level layers. Here, the machine's output was highly abstract and artistic, even unexpected at times.
“The network created unanticipated results: trees becoming crystalline architectures, leaves translated into magical birds and insects. Essentially, these “over-interpretations” are an abstracted, fractalized fusion of previously learned features, produced by this feedback loop,” says an article published by The New Stack.
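The feedback loop described in the quote can be sketched in a few lines. This is only a toy illustration, not Google's actual code: the linear "feature detector" below is a stand-in for a real network layer, and the update rule is simple gradient ascent on the detector's response, which amplifies whatever pattern the detector looks for.

```python
import numpy as np

def dream_step(image, feature, lr=0.1):
    """One step of a DeepDream-style feedback loop (toy version):
    nudge the input so the detector responds more strongly, which
    amplifies whatever the detector has learned to look for."""
    activation = feature @ image      # how strongly the feature fires
    gradient = activation * feature   # d/d(image) of 0.5 * activation**2
    return image + lr * gradient

feature = np.array([1.0, -1.0, 1.0, -1.0])  # a toy "stripe" detector
rng = np.random.default_rng(0)
image = rng.normal(size=4)                   # noisy starting image
for _ in range(20):
    image = dream_step(image, feature)
# after many steps, the image drifts toward the stripe pattern
# the detector responds to -- an "over-interpretation" of the noise
```

In the real system the same idea runs over full convolutional layers, so the amplified features are trees, birds, and insects rather than stripes.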
GPU manufacturer Nvidia achieved equally interesting results with its "hallucinating" AI, which created slow-motion video from standard footage. Usually, shooting high-quality slow-motion video is expensive because it requires specialized equipment.
The AI accepted 30fps video and converted it into fluid slow motion by "hallucinating" extra content: it synthesized new frames between the existing ones, lengthening the video so it looks like a professionally shot slow-motion clip, potentially offering a cheaper way to create such footage in the future.
“Using Nvidia Tesla V100 GPUs and cuDNN-accelerated PyTorch deep learning framework, the team trained their system on over 11,000 videos of everyday and sports activities shot at 240 frames-per-second. Once trained, the convolutional neural network predicted the extra frames,” the company says in an official blog post.
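The core idea of inserting synthetic in-between frames can be sketched with the crudest possible interpolator: linear blending. This is an assumption-laden simplification for illustration only; Nvidia's system uses a trained convolutional network to predict the intermediate frames, which handles motion far better than blending does.

```python
import numpy as np

def interpolate_frames(frames, factor=8):
    """Naive frame interpolation by linear blending (a toy stand-in
    for Nvidia's learned approach).

    frames: list of H x W (x C) numpy arrays at the original rate.
    factor: number of output frames per input-frame gap, so a 30fps
            clip with factor=8 plays back as 240fps-style slow motion.
    """
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        for i in range(factor):
            t = i / factor                     # blend weight from a to b
            out.append((1 - t) * a + t * b)    # synthetic in-between frame
    out.append(frames[-1].astype(float))       # keep the final frame
    return out
```

Linear blending produces ghosting on fast motion, which is exactly why the convolutional network mentioned in the blog post is needed: it predicts where pixels move between frames instead of averaging them in place.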
The problem with AI hallucination
While the concept of AI hallucinations might seem interesting, it can create serious problems in real-life situations. Since the AI is essentially adding its own interpretation to an input image, it can end up acting against the very purpose for which it was created. A good example is an AI tasked with the security of a building.
If the AI’s job is to record images and identify certain suspects who enter the building, then it must perform exactly that in order to fulfill its task. However, if the AI starts “hallucinating,” adding its own imagery over the recorded footage or replacing a face with another face it thinks is “similar,” then the security of the building is compromised.
And in areas like defense and transportation, a hallucinating AI could prove disastrous. So, even though hallucinating AI is an interesting concept, real-world applications will likely favor an AI that functions as expected over one that aimlessly dreams away.