ChatGPT’s surprisingly human voice came with a human cost

A darkly colored illustration of an iPhone floating in grey space. The screen displays the shadow of a person reaching out toward the viewer.

Popular, eerily humanlike OpenAI chatbot ChatGPT was built on the backs of underpaid and psychologically exploited employees, according to a new investigation by TIME.

A Kenya-based data labeling team, managed by San Francisco firm Sama, was reportedly not only paid shockingly low wages to do work for a company that may be on track to receive a $10 billion investment from Microsoft, but was also subjected to disturbingly graphic sexual content in order to scrub ChatGPT of dangerous hate speech and violence.


Beginning in November 2021, OpenAI sent tens of thousands of text samples to the employees, who were tasked with combing the passages for instances of child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest, TIME reported. Members of the team spoke of having to read hundreds of these kinds of entries a day for wages that ranged from $1 to $2 an hour, or a $170 monthly salary. Some employees described the work as “mentally scarring” and a certain kind of “torture.”

Sama employees were reportedly offered wellness sessions with counselors, as well as individual and group therapy, but several of those interviewed said the reality of mental healthcare at the company was disappointing and inaccessible. The firm responded that it took the mental health of its employees seriously.

The TIME investigation also found that the same group of employees was given additional work compiling and labeling an immense set of graphic — and what seemed to be increasingly illegal — images for an undisclosed OpenAI project. Sama ended its contract with OpenAI in February 2022. By December, ChatGPT had swept the internet and taken over online conversation as the next wave of innovative AI.

At the time of its launch, ChatGPT was noted for its surprisingly comprehensive avoidance system, which went a long way toward preventing users from baiting the AI into producing racist, violent, or otherwise inappropriate responses. It also flagged text it deemed bigoted within the chat itself, turning it red and providing the user with a warning.

The ethical complexity of AI

While the news of OpenAI’s hidden workforce is disconcerting, it’s not entirely surprising: the ethics of human-based content moderation isn’t a new debate, especially on social media platforms that walk the line between free posting and protecting their user bases. In 2021, the New York Times reported on Facebook’s outsourcing of post moderation to the consulting firm Accenture. The two companies outsourced moderation to workers around the world and later dealt with the massive fallout of a workforce psychologically unprepared for the job. Facebook paid a $52 million settlement to traumatized workers in 2020.

Content moderation has even become the subject of psychological horror and post-apocalyptic tech media, such as Dutch author Hanna Bervoets’s 2022 thriller We Had to Remove This Post, which chronicles the mental breakdown and legal turmoil of a company quality assurance worker. For these characters, and for the real people behind the work, the perversions of a tech- and internet-based future leave lasting trauma.

ChatGPT’s rapid takeover and the successive wave of AI art generators pose several questions for a general public increasingly willing to hand over its data, social and romantic interactions, and even cultural creation to tech. Can we rely on artificial intelligence to provide accurate information and services? What are the academic implications of text-based AI that can respond to feedback in real time? Is it unethical to use artists’ work to build new art in the digital world?

The answers to these questions are both obvious and morally complex. Chatbots are not repositories of accurate knowledge or original ideas, but they do offer an interesting Socratic exercise. They are quickly widening avenues for plagiarism, but many academics are intrigued by their potential as creative prompting tools. The exploitation of artists and their intellectual property is an escalating issue, but can it be sidestepped, for now, in the name of so-called innovation? How can creators build safety into these technological advancements without risking the health of the real people behind the scenes?

One thing is clear: The rapid rise of AI as the next technological frontier continues to pose new ethical quandaries about the creation and application of tools that replicate human interaction, at a real human cost.

If you have experienced sexual abuse, call the free, confidential National Sexual Assault hotline at 1-800-656-HOPE (4673), or access 24-7 help online by visiting online.rainn.org.
