OpenAI, the company behind the trendy ChatGPT artificial intelligence text generator, paid workers in Kenya as little as $1.32 an hour to perform the manual labor required to build a self-censorship module that would force the AI’s outputs to be “less toxic.”
The workers were required to manually read through hundreds of instances of vulgar, violent, obscene, and sexually graphic content.
A Jan. 18 article published in Time magazine explained that during ChatGPT’s development, OpenAI ran into a problem: the product “was a difficult sell, as the app was also prone to blurting out violent, sexist and racist remarks.”
Time explained that the problem arose because the bot was trained “on hundreds of billions of words scraped from the internet.”
While somewhat euphemistically describing the darkest parts of the internet as merely “replete with toxicity and bias,” the article stated that the Kenyans’ day-to-day work involved personally filtering content that “described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.”
OpenAI outsourced the work to Sama, a firm headquartered in San Francisco that claims to have “helped 50,000 people lift themselves out of poverty.”
In producing its article, Time “reviewed hundreds of pages of internal Sama and OpenAI documents, including workers’ payslips, and interviewed four Sama employees who worked on the project.”
All of the workers spoke on condition of anonymity “out of concern for their livelihoods.”
Cheap labor to build AI censorship
Sama’s Kenyan workers were contributing to the creation of what Time described as an “additional AI-powered safety mechanism” that would serve as a Big Tech social media-style self-censorship system.
“To build that safety system, OpenAI took a leaf out of the playbook of social media companies like Facebook, who had already shown it was possible to build AIs that could detect toxic language like hate speech to help remove it from their platforms,” the article stated.
“The premise was simple: feed an AI with labeled examples of violence, hate speech, and sexual abuse, and that tool could learn to detect those forms of toxicity in the wild.”
The article added, “That detector would be built into ChatGPT to check whether it was echoing the toxicity of its training data, and filter it out before it ever reached the user.”
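The approach Time describes — feed a classifier labeled examples of toxic and benign text, then use it to screen the chatbot’s outputs before they reach the user — can be illustrated with a toy sketch. This is not OpenAI’s actual system; it is a minimal Naive Bayes text classifier with invented example data, shown only to make the “labeled examples in, toxicity flags out” premise concrete.

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

class ToyToxicityFilter:
    """Toy Naive Bayes classifier: learns from labeled examples,
    then flags text that looks more like the 'toxic' examples."""

    def __init__(self):
        self.word_counts = {"toxic": Counter(), "safe": Counter()}
        self.doc_counts = {"toxic": 0, "safe": 0}
        self.vocab = set()

    def train(self, text, label):
        tokens = tokenize(text)
        self.word_counts[label].update(tokens)
        self.doc_counts[label] += 1
        self.vocab.update(tokens)

    def _log_score(self, text, label):
        # Log prior plus Laplace-smoothed log likelihood of each token.
        score = math.log(self.doc_counts[label] / sum(self.doc_counts.values()))
        total = sum(self.word_counts[label].values())
        v = len(self.vocab)
        for tok in tokenize(text):
            score += math.log((self.word_counts[label][tok] + 1) / (total + v))
        return score

    def is_toxic(self, text):
        return self._log_score(text, "toxic") > self._log_score(text, "safe")

def filter_output(model, generated_text, placeholder="[content removed]"):
    """Screen a generated reply before it reaches the user."""
    return placeholder if model.is_toxic(generated_text) else generated_text

# Invented training examples standing in for the human-labeled snippets.
nb = ToyToxicityFilter()
nb.train("i hate you worthless idiot", "toxic")
nb.train("you are a stupid idiot", "toxic")
nb.train("have a lovely day friend", "safe")
nb.train("thanks for the helpful answer", "safe")

blocked = filter_output(nb, "you stupid idiot")    # flagged and replaced
allowed = filter_output(nb, "have a helpful day")  # passes through
```

The real detector was built from far larger labeled datasets and modern language models, but the pipeline shape — human labels train a classifier, the classifier gates the output — is the same idea the article describes.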
In a statement provided to Time for its article, OpenAI claimed the self-censorship and speech-harmonizing module was “a necessary step in minimizing the amount of violent and sexual content included in training data and creating tools that can detect harmful content.”
The consequences of subjecting human beings to reading vile and profane content for hours each day were fully manifest in Sama employees, Time found.
One man was paraphrased by the outlet as stating that he “suffered from recurring visions” after being made to read a graphically detailed scene involving bestiality and child sexual abuse.
Severe psychological effects
“You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture,” the man was quoted as saying.
“Three employees told TIME they were expected to read and label between 150 and 250 passages of text per nine-hour shift. Those snippets could range from around 100 words to well over 1,000,” the article noted.
All four employees said they were not only scarred by the work but were offered little support from Sama beyond sessions with “wellness” counselors, which they could rarely take advantage of because of the pressure to complete more labels per hour of each shift.
Time stated that the aftereffects were so severe that Sama canceled its contract with OpenAI eight months early, despite the work being worth $200,000 and being split among approximately three dozen workers, who were paid no more than $2 per hour.
Time noted, “The contracts stated that OpenAI would pay an hourly rate of $12.50 to Sama for the work, which was between six and nine times the amount Sama employees on the project were taking home per hour.”
Staff were paid an additional $70 per month as a “bonus” for being exposed to obscene content.
A Sama spokesperson told Time that workers were only expected to complete 70 labels per shift and could make as much as $3.75 per hour.