
Legal Consequences for AI Developers at Stake in Supreme Court Case

Victor Westerkamp
Victor resides in the Netherlands and writes about freedom and about governmental and social changes affecting democratic nations.
Published: May 2, 2023
The United States Supreme Court is seen in Washington, U.S., March 27, 2023. In June, the U.S. Supreme Court will rule on whether to uphold YouTube's legal immunity, a decision that could also affect developers of AI tools such as OpenAI's ChatGPT. (Image: EVELYN HOCKSTEIN/Reuters)

In June, the U.S. Supreme Court will rule on whether to uphold YouTube's legal immunity, a ruling that may have far-reaching consequences for AI developers such as OpenAI, the company behind ChatGPT.

The justices are due to rule whether Alphabet Inc’s YouTube can be sued over its video recommendations to users. That case tests whether a U.S. law that protects technology platforms from legal responsibility for content posted online by their users also applies when companies use algorithms to target users with recommendations.

What the court decides on this issue will affect not only social media platforms such as YouTube but also numerous AI-powered tools such as OpenAI's ChatGPT and Google's Bard, which can similarly generate potentially harmful content based on users' prompts.

AI developers, meanwhile, should be wary of legal claims over unforeseen consequences, since the information their tools provide could be used for criminal activity or even prove fatal if it gets into the wrong hands.

The case before the Supreme Court came to a head following an appeal against YouTube filed by the family of Nohemi Gonzalez, a 23-year-old California student shot to death in a 2015 terrorist attack in Paris.


The plaintiffs accused YouTube of providing “material support” to terrorism. They alleged that YouTube, through the video-sharing platform’s algorithms, had unlawfully encouraged certain users to view videos from the Islamic State terrorist organization, which has been linked to the attack.

“The debate is really about whether the organization of information available online through recommendation engines is so significant to shaping the content as to become liable,” said Cameron Kerry, a visiting fellow at the Brookings Institution think tank in Washington and an expert on AI. “You have the same kinds of issues with respect to a chatbot.”

Section 230

Currently, tech platforms like YouTube and AI developers like OpenAI enjoy a high degree of immunity from lawsuits over content distributed across their platforms.

This relative immunity is governed by Section 230 of the Communications Decency Act of 1996. During oral arguments this February, however, Supreme Court justices debated whether this liability shield should be weakened.

At stake is whether Section 230 immunity should continue to apply to AI models that are not only trained to sift through masses of online data but can also independently generate original output. Courts have not yet ruled on whether an answer from an AI chatbot is covered.

Sen. Ron Wyden (D-OR), who helped draft the law while serving in the House of Representatives, said the liability shield should not apply to generative AI tools because such tools "create content."

“Section 230 is about protecting users and sites for hosting and organizing users’ speech. It should not protect companies from the consequences of their own actions and products,” Wyden said in a statement to Reuters.

‘Not really creating anything’

The technology industry has urged that Section 230 protections be preserved, arguing that tools like ChatGPT function much like search engines, directing users to existing content in response to a query.

“AI is not really creating anything. It’s taking existing content and putting it in a different fashion or different format,” said Carl Szabo, vice president and general counsel of NetChoice, a tech industry trade group.

Szabo said a weakened Section 230 would present an impossible task for AI developers and expose them to a flood of lawsuits that could stifle innovation.

Hany Farid, a technologist and professor at the University of California, Berkeley, said it stretches the imagination to argue that AI developers should be immune from lawsuits over models they have "programmed, trained and deployed."

“When companies are held responsible in civil litigation for harms from the products they produce, they produce safer products,” Farid said. “And when they’re not held liable, they produce less safe products.”

Reuters contributed to this report.