
Claims Facebook Developing AI to ‘Reanimate’ Deceased Users Debunked, But Cast Light On Deepfake Advancements

Although claims of Facebook’s “Project Lazarus” are almost certainly 4chan disinformation, the “whistleblower” emerged just a day before Meta announced the public release of its Large Language Model Meta AI (LLaMA), a competitor to OpenAI’s GPT-3.
Neil Campbell
Neil lives in Canada and writes about society and politics.
Published: February 27, 2023
Facebook developing a “Project Lazarus” to emulate deceased users is likely fake, but deepfake chatbots emulating the deceased are real.
A “robot artist” was allowed to appear before the UK’s House of Lords in London, England, in October of 2022. Although the claims of an alleged Facebook whistleblower stating that Meta is developing AI to “reanimate” deceased users’ accounts come from a highly dubious source, the incident did cast light on the startling advancements in deepfake technology. (Image: Rob Pinney/Getty Images)

While a claim that Facebook’s parent company Meta is developing artificial intelligence to “reanimate” deceased users that made the rounds on social media in recent days appears to be bunk, the hoax did serve to cast light on how advancements in AI pose increasingly serious questions for society as computers emulate humans.

The episode began when, on Feb. 23, a post allegedly from a “Meta insider” was made to the anonymous forum 4chan, its author claiming to be a software developer working on a “Project Lazarus.”

“We’re building an AI that can take over a deceased persons [sic] social media accounts and continue making relevant posts as if that person is still alive,” the post stated. “This includes age progressed photos, interacting with other peoples [sic] content and everything else needed so that person continues on in the digital realm after physical death.”

The “insider” elaborated, “We were originally told this would be a service offered to people struggling with the loss of loved ones and people who had missing children.”

The poster said that although the project “seemed like a decent idea,” they were concerned because “things are getting weird now and I’m having second thoughts about what this is actually going to be used for.”

“An entire island of people could go missing and with little to no downtime the AI could take over all of their social media and the world wouldn’t have a clue that life wasn’t just continuing as usual,” the poster posited.

The connection was especially sensitive for some netizens in light of the recent environmental disaster in East Palestine, Ohio, which largely escaped social media attention for over a week. Somewhat ironically, the Chinese spy balloon scandal, and the subsequent reports of “UFOs” that probably turned out to be nothing more than hobbyist balloons, came to be seen by some as a distraction after U.S. Rep. Marjorie Taylor Greene (R-GA) used the hype to bring attention to the derailment in a Feb. 12 Twitter post that garnered more than 17 million views.

However, if Project Lazarus exists, it’s both entirely unannounced and yet to be alluded to by the company’s marketing department.

A basic web search for terms including “Meta,” “Facebook,” and “Project Lazarus” returns results only for the indie Steam game Project Lazarus, a television drama series titled The Lazarus Project produced by the UK’s BT network, and references to the Bible story in which Jesus resurrects a man named Lazarus.


However, the timing of the “insider’s” post is especially curious given a Feb. 24 Meta AI announcement that the company was publicly releasing the Large Language Model Meta AI (LLaMA), but only on a “case-by-case basis to academic researchers; those affiliated with organizations in government, civil society, and academia; and industry research laboratories around the world.”

“Over the last year, large language models — natural language processing (NLP) systems with billions of parameters — have shown new capabilities to generate creative text, solve mathematical theorems, predict protein structures, answer reading comprehension questions, and more,” the release stated.
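
For readers curious what “using” a large language model looks like in practice, the sketch below shows the basic text-generation loop. It stands in the small, freely downloadable GPT-2 model for LLaMA, whose weights were gated behind Meta’s approval process at release; the prompt and sampling settings are purely illustrative.

```python
# A minimal sketch of prompting a large language model to generate
# text, using the Hugging Face transformers library. GPT-2 is a small,
# publicly available stand-in; LLaMA's weights were restricted to
# approved researchers at release.
from transformers import pipeline

# Load a causal (next-word-prediction) language model.
generator = pipeline("text-generation", model="gpt2")

prompt = "Large language models can"
outputs = generator(
    prompt,
    max_new_tokens=40,       # how much text to generate past the prompt
    num_return_sequences=1,
    do_sample=True,          # sample instead of always taking the top word
    temperature=0.8,         # higher values give more varied text
)

print(outputs[0]["generated_text"])
```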

Abyss of disinformation

4chan, a sparsely moderated, early-internet-style message board that does not require the creation of an account to post, benefits on the one hand from being a venue where the lack of censorship can serve as an avenue for authentic whistleblowers, but suffers on the other from serving as a cesspool of mis- and disinformation.

A September 2020 article published by the Harvard Kennedy School faulted both 4chan and the discussion forum Reddit for their role in allegedly “spreading misinformation during the latter stages of the 2016 U.S. election.”

In a more recent case, in August 2022, the globalist policy roundtable World Economic Forum lamented 4chan’s role in driving public sentiment against the group and its ideals in an article titled “The Four Key Ways Disinformation is Spread Online.”

The WEF stated that “one anonymous anti-Semitic account on the image board 4chan sparked a misinformation campaign that targeted the Forum” following online outrage after a video the consortium published saying “You’ll own nothing. And you’ll be happy” went viral.

Virtual Ouija boards

Although the “Meta insider” and their story were more likely than not nothing more than disinformation, developments in AI being utilized to emulate the deceased are anything but.

For example, a 2019 Associated Press wire article reported that Facebook “will use artificial intelligence to help find profiles of people who have died so their friends and family members won’t get, for instance, painful reminders about their birthdays.”

Microsoft, meanwhile, filed a patent in 2020 that would collect “images, voice data, social media posts, electronic messages, written letters” from online accounts and use machine learning to train a chatbot on replicating the target’s persona, tech website Protocol reported. The company had earlier in the month launched its own alternative to the ChatGPT artificial intelligence language processing tool, one that notably appeared to go totally off the rails, threatening users and at times having what seemed to be a psychotic episode.
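
The patent’s mechanics are not public, but the general first step such a system implies, turning an archive of someone’s messages into paired training examples for a chatbot, can be sketched in a few lines of Python. The file name, record fields, and the “alice” identifier below are all hypothetical.

```python
# A rough illustration, not Microsoft's actual method, of turning an
# archive of a person's messages into chatbot training data. Assumes a
# hypothetical "messages.json": a list of {"sender": ..., "text": ...}
# records in conversation order, where "alice" is the person to emulate.
import json

with open("messages.json") as f:
    messages = json.load(f)

examples = []
# Pair each message sent to the target with the target's reply, so a
# model trained on these pairs learns to answer the way they did.
for prev, curr in zip(messages, messages[1:]):
    if prev["sender"] != "alice" and curr["sender"] == "alice":
        examples.append({"prompt": prev["text"], "completion": curr["text"]})

# One JSON object per line: the JSONL format many fine-tuning
# pipelines expect.
with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```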

These advances in computer science have already given rise to multiple businesses.

A Jan. 9 article by tech website PetaPixel showcased South Korea’s DeepBrain AI, whose claim to fame is a technology called “Re;Memory” that “uses machine learning to process photos and clips of recently deceased individuals to create a digital twin that can interact with the living as if they are on a video call.”

And the idea is far from vaporware.

In August of 2021, entertainment magazine Deadline reported that Universal Television had purchased the rights to the story of 33-year-old Joshua Barbeau, a man who lost his fiancée, Jessica, to liver disease eight years prior, to create a show.

According to an exclusive published by the San Francisco Chronicle at the time, Barbeau “logged onto a mysterious chat website called Project December.” 

“Designed by a Bay Area programmer, Project December was powered by one of the world’s most capable artificial intelligence systems, a piece of software known as GPT-3,” the article stated.

Barbeau paid $5 for an account on Project December, which happened to have a “Custom AI Training” function that allowed him to feed it data from his deceased fiancée.

“Joshua had kept all of Jessica’s old texts and Facebook messages, and it only took him a minute to pinpoint a few that reminded him of her voice. He loaded these into Project December, along with an ‘intro paragraph’ he spent an hour crafting,” the article read.
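
Project December’s internals have not been published, but the technique the Chronicle describes, priming a GPT-3-style model with an intro paragraph and a few sample messages, roughly resembles the sketch below, written against the GPT-3-era OpenAI completions API. The persona text and example lines are invented for illustration.

```python
# A sketch of "prompt priming": the model is shown who it should be and
# a few authentic-sounding example lines, then asked to continue the
# conversation in that voice. Uses the GPT-3-era OpenAI completions API.
import openai

openai.api_key = "sk-..."  # placeholder key

# A hand-written intro paragraph establishing the persona, followed by
# a few example messages that capture how the person actually wrote.
intro = (
    "The following is a conversation with Jessica. She is warm and "
    "playful, and likes to undercut serious statements with a joke.\n"
)
examples = "Jessica: miss you too :P\nJessica: don't be so dramatic lol\n"

user_line = "Joshua: I had a rough day today."

response = openai.Completion.create(
    engine="davinci",          # the original GPT-3 base model
    prompt=intro + examples + user_line + "\nJessica:",
    max_tokens=60,
    temperature=0.9,           # looser sampling for more personality
    stop=["\nJoshua:"],        # stop before inventing the user's turn
)
print(response.choices[0].text.strip())
```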

The results were so convincing that the man spent 10 straight hours talking to the bot.

“Each response…appeared in his window as a complete block of words, like a text message on a phone,” the Chronicle summarized. “Although the bot’s replies usually arrived faster than a typical person could type the same information, the rhythm of the banter still seemed to capture something about Jessica: She always liked to undercut powerful statements with a tongue-face emoji or a joke, and so did the bot.”

GPT-3 is highly notable because it was also developed by OpenAI, the same company that owns and operates ChatGPT.

According to a Feb. 2 article by the BBC’s Science Focus website, GPT-3 is OpenAI’s “state-of-the-art language processing AI model,” of which ChatGPT is simply a derivative.

Synthetic family

The technology has also been demonstrated to apply to those who are still among the living.

In October of 2022, Massachusetts Institute of Technology publication MIT Technology Review published an article titled Technology That Lets Us “Speak” to Our Dead Relatives Has Arrived. Are We Ready?

Author Charlotte Jee wrote about how she contracted the services of a firm called HereAfter AI to simulate a phone call with her (still alive and well) parents “powered by more than four hours of conversations they each had with an interviewer about their lives and memories.”

Jee asked her “father” the question, “What’s the worst thing about you?” to which “he” replied, “My worst quality is that I am a perfectionist. I can’t stand messiness and untidiness, and that always presents a challenge, especially with being married to Jane.”

Jee reflected on her experience, “From what I could glean over a dozen conversations with my virtually deceased parents, this really will make it easier to keep close the people we love.” 

“It’s not hard to see the appeal,” she continued. “People might turn to digital replicas for comfort, or to mark special milestones like anniversaries.”

Advancing AI is not only impacting human emotion and psychology surrounding death, but also birth.

In May of 2022, the Telegraph quoted an excerpt from a book by UK artificial intelligence expert Catriona Campbell, who claimed, “Virtual children may seem like a giant leap from where we are now…but within 50 years technology will have advanced to such an extent that babies which exist in the metaverse are indistinct from those in the real world.”

“On the basis that consumer demand is there, which I think it will be, AI children will become widely available for a relatively small monthly fee,” she added.

The technology is already in development. At least one firm out of New Zealand called Soul Machines markets “Baby X” as a “proof of concept.”

The promotional website for the software claims that Baby X has been equipped with a “digital brain” that is capable of “enabling her to sense, learn, adapt and communicate interactively in a way that feels alive and engaging.”

Hard to discern

Deepfakes are becoming increasingly difficult for average users to detect.

In March of 2021, Vision Times reported on how a series of clips that appeared to show Hollywood icon Tom Cruise doing magic tricks and talking about former leader of the USSR Mikhail Gorbachev had appeared on the Chinese viral video app TikTok.

The clips were so convincing that, to the naked eye, they looked legitimate. Only those familiar enough with the technology and what to look for could tell, by slowing the video down and hunting for missing pixels, that they were watching a deepfaked Tom Cruise face overlaid onto a very convincing actor with a similar build.
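
That sort of manual inspection can be approximated with a short script. The sketch below, assuming the OpenCV library and a placeholder file name, dumps individual frames from a suspect clip so that blending artifacts around the face can be examined one still at a time.

```python
# A minimal sketch of the frame-by-frame inspection described above:
# save stills from a suspect video so artifacts (smeared pixels around
# the face, mismatched edges) can be examined by eye. The file name is
# a placeholder.
import cv2

cap = cv2.VideoCapture("suspect_clip.mp4")
frame_index = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of the video
    # Keep every 5th frame; deepfake artifacts often show up when the
    # face turns quickly or is partially blocked.
    if frame_index % 5 == 0:
        cv2.imwrite(f"frame_{frame_index:05d}.png", frame)
    frame_index += 1

cap.release()
```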

Another highly notable case emerged as recently as February, when an advertisement for a “male enhancement” drug made the rounds on TikTok disguised as a deepfaked episode of the Joe Rogan Experience podcast.

The clip was cut very convincingly to mimic the conversational style of a Joe Rogan Experience interview, with Rogan himself, along with his guest, appearing to promote the product during natural banter.

The clip was accompanied by a link to Amazon to purchase the drug.

According to reporting by Mashable on Feb. 15, although the video was pulled, it quickly sprouted up again under multiple other accounts and was viewed more than 5 million times.

A long road ahead

Yet, for all the leaps forward AI has made, the technology’s most dystopian and most utopian potential applications may be out of reach, according to Amit Roy-Chowdhury, professor of electrical and computer engineering and chair of robotics at UC Riverside.

Roy-Chowdhury stated in an August 2021 article published on the university’s website, “When we learn about some very sophisticated use of AI . . . we tend to extrapolate from that situation that AI is much better than it really is.”

The professor explained that creating chatbots that simulate people is absolutely doable, so long as the computer has an ample dataset to train from. 

But “the challenges arise in unstructured environments, where the program has to respond to situations it hasn’t encountered before,” he noted.

He continued, “If you can record data, you can use it to train an AI, and it will behave along the parameters it has learned. But it can’t respond to more occasional or unique occurrences.” 

“Humans have an understanding of the broader semantics and are able to produce entirely new responses and reactions. We know the semantic machinery is messy,” Roy-Chowdhury added.