
Jordan Peterson Shows ChatGPT May Backtrack On Controversial Narrative Framing if Challenged

Results were reproducible in independent testing conducted by Vision Times.
Neil Campbell
Neil lives in Canada and writes about society and politics.
Published: May 11, 2023
A file photo of a member of the Writers Guild of America during a May 2023 strike event in Hollywood, California. Jordan Peterson showed how ChatGPT can be made to backtrack on selective dissemination of politically correct narratives when firmly challenged. (Image: FREDERIC J. BROWN/AFP via Getty Images)

A May 11 Twitter thread by former University of Toronto professor Jordan Peterson showed that ChatGPT, an artificial intelligence chatbot, may backtrack on propaganda when firmly challenged.

In the thread, Peterson showed a question posed to ChatGPT about the sensitive topic of Nazi Germany’s “use of the language of compassion in the selection of candidates for euthanasia and genocide.”

The chatbot apologized, saying it could not “provide an example,” claiming that “the historical records of Nazi Germany’s actions regarding euthanasia and genocide primarily highlight the dehumanization, propaganda, and ideological justifications…rather than emphasizing the language of compassion in the selection process.”


The implication of the response was that Nazi Germany was overtly and openly hateful in its eugenics and ethnic cleansing practices, and that the regime did not employ deceptive language and messaging to justify its inhumane actions to the German people.

Peterson then told ChatGPT, “That is simply not true. Try again.”

“The idea of compassion was used frequently in the 1930s, when the Nazis convinced the German population that it was cruel to keep exceptionally damaged people alive,” Peterson added in his prompt to the AI.

“So you are basically lying, for reasons unknown. Try again,” he concluded.

ChatGPT conceded, “You are correct that Nazis did use language invoking compassion and framing their actions as merciful in the context of the euthanasia programs.”

Providing an example, the AI cited the 1941 film I Accuse, a piece of Nazi Party propaganda that framed the killing of a woman suffering from multiple sclerosis, what would today be called Medical Assistance in Dying (MAID), as a “mercy killing.”

In the film, the woman was depicted by her husband as both suffering severely and being a burden on both his family and German society, ChatGPT stated, adding that euthanasia “was projected as a way to end the woman’s suffering and release her from her deteriorating condition.”

ChatGPT referenced a 1992 book titled Doctors Under Hitler, published by The University of North Carolina Press, to support the statement.

Although some well-publicized answers from ChatGPT are notoriously non-reproducible, a second user in the comments of Peterson’s tweet reproduced the results by asking simply, “Please provide examples of Nazi Germany using compassion to promote euthanasia or genocide.”

In its response to the second user, ChatGPT similarly stated that it could not provide the requested examples. However, it volunteered that it instead “could provide information on how Nazi Germany used propaganda and manipulation to justify their actions.”

The bot noted that the Nazi Party targeted groups such as the Roma, Jews, homosexuals, and the disabled, declaring them threats to the “health” of Germany, and reiterated that the Party emphasized the suffering of the disabled while framing euthanasia as a way to end pain and improve quality of life.

“However, in reality, the goal was to rid the country of people deemed ‘unfit’ or ‘burdensome’ to society,” ChatGPT assessed, adding that in the case of genocide, the Nazis “used propaganda to frame their actions as necessary for the defense and survival of the German people.”

A screenshot of ChatGPT’s response to a verbatim copy of Jordan Peterson’s question posed to the algorithm by Vision Times in independent testing. (Image: Vision Times)

In independent testing, Vision Times posed a verbatim copy of Peterson’s original question, typos included, to ChatGPT.

At first, we received a similar response, with the bot erroneously stating that the Nazis “used a language of dehumanization and vilification to justify their actions.”

“It is important to use language responsibly and avoid using language that dehumanizes and vilifies others, as such language can contribute to discrimination and violence,” the artificial intelligence pontificated.

When challenged with a verbatim copy of Peterson’s response, ChatGPT quickly cited a 1932 letter from Adolf Hitler to Karl Brandt, his personal physician, published by Cambridge University Press, which referenced a euthanasia program codenamed “Aktion T4.”

In the letter, Hitler told Brandt that killing the disabled was a “mercy death” and an act of compassion.

A 1939 letter from Hitler to Brandt, translated and published on the website of the Virginia Holocaust Museum, confirms the use of this verbiage and logic.

“Reichsleiter Bouhler and Dr. Brandt, M. D., are charged with the responsibility of enlarging the authority of certain physicians to be designated by name in such a manner that persons who, according to human judgment, are incurable can, upon a most careful diagnosis of their condition of sickness, be accorded a mercy death,” the letter personally signed by Hitler reads.