Modern Hell #25: ChatGPT is People
If AI is after our jobs, what are we doing to protect them?
This week, Modern Hell introduces a slightly new (expanded) format. Stay tuned at the end for important updates for both free and paid subscribers.
Language-recognition and generative AI programs like OpenAI’s ChatGPT don’t become what they are simply by scanning everything that humans have already written. Collection is only part of the process, and it naturally sweeps up the worst of what humans have to say along with the good or useful, which means it all has to be vetted to weed out the obscene. This is what happens to keep social media platforms like Facebook relatively clear of most horrors. This is what happened with ChatGPT. And like Facebook, OpenAI turned to low-wage workers to do it.
“In its quest to make ChatGPT less toxic, OpenAI used outsourced Kenyan laborers earning less than $2 per hour,” TIME reported last week. OpenAI had already made a pretty good generative language program, GPT-3, but it had some flaws – namely, that it was “prone to blurting out violent, sexist and racist remarks.” To mitigate this problem, OpenAI built a separate program that had been diligently trained to recognize toxic content – things like violence, hate speech, and sexual abuse – so that its next iteration, which became ChatGPT, wouldn’t repeat it. But those examples needed to be labelled in the first place. ChatGPT needed to learn from something – or someone.
Enter the team from Kenya. They did the labelling. And it sounds about as bad as you might expect. One worker told TIME that “he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. ‘That was torture,’ he said.”
ChatGPT’s arrival has been compared to that of the smartphone in its potential to completely change the way we live. Its creation, or curation to be more exact, is reminiscent of smartphones as well. Modern phones are powered by minerals mined under poor conditions by workers, some of whom are children, making a terrible wage. The phones are assembled by others in different but similar conditions: sub-standard, low-wage, and with little to no protections, either physical or mental. Both smartphones and artificial intelligence use and abuse unseen human labour to produce the technical wizardry we see every day.
It’s depressing. But it’s also a reminder that, as wondrous as our technology may be, it doesn’t appear from nowhere. More to the point: Humans create technology. And despite how it may feel, or what we may hear when new tech emerges, as its creators, we also can influence its impacts on humanity.
High on the list of fears associated with artificial intelligence (at least in the West) is that it will replace people, most immediately at work. These fears surface with every new iteration of AI, and have again with the advent of ChatGPT.
“ChatGPT could make these jobs obsolete,” the New York Post declared this week, listing everyone from investment bankers and software engineers to journalists and graphic designers. “No single technology in modern memory has caused mass job loss among highly educated workers. Will generative AI really be an exception?” the Atlantic recently asked, fearfully, suggesting the answer might be “yes”. Then, on Thursday, BuzzFeed announced that it will use OpenAI to “enhance its quizzes and personalize some content for audiences,” the Wall Street Journal reported. BuzzFeed stock quickly rose 150%.
A dystopian version of the future would have us all lose our regular jobs only to be employed as AI trainers, teaching the programs producing all the world’s creative content – writing, art, films, marketing copy, building designs, etc. – how to behave more like ideal humans. One day we might all be Kenyan labourers, so to speak. Maybe that’s the most just future for those of us who’ve reaped the benefits of that work while paying it almost zero attention as we profess amazement at our purportedly advanced civilization. More likely, we’ll land somewhere between that reality and another, where AI is integrated into our daily lives and in which we do engage in its training, much as we train the algorithms that deliver us content via our social media feeds already.
What seems worth remembering in any event is that outcomes aren’t inevitable just because they feel that way. We can expect people and companies to use technology when it’s available and cheap. But using technology doesn’t necessarily have to mean that we get used, too – and certainly not to the point that we become obsolete. The rule says that we shape the tools, then the tools shape us – but being shaped by our technology shouldn’t mean that we get destroyed by it.
Hearing the news about BuzzFeed’s plans, writer Hamilton Nolan tweeted “media unions, we really need to get together and figure out standards to deal with this in our contracts sooner not later. It’s coming.” Indeed, we already possess the tools we need to protect us from a future, or even a present-day, that we don’t want. Maybe instead of focusing so much on how our new technological tools will proliferate, we should think instead about how best to make those that protect human interests, like unions, more available – whether for white collar workers in North America or content-labelling labourers in Kenya.
Notes from Hell
A small collection of other stories & things worth reading and thinking about
Your email address is betraying your identity and desires
Too late for me – and probably for you – but the New York Times has a good primer on why you shouldn’t give away your email too willingly. Your email address is valuable to companies, and you can do more to protect it.
“An email could contain your first and last name, and assuming you’ve used it for some time, data brokers have already compiled a comprehensive profile on your interests based on your browsing activity. A website or an app can upload your email address into an ad broker’s database to match your identity with a profile containing enough insights to serve you targeted ads.”
Donald Trump is back on Facebook and Instagram
But, as Charlie Warzel notes at the Atlantic, they both feel a bit like spent forces. He noted their “mutual decay”:
“Each thrives by hijacking attention and monetizing outrage, and they’ve benefited each other: The Trump campaign spent millions of dollars on more than 289,000 Facebook ads over the span of just a few months in 2020, according to an analysis by The Markup. But lately, both appear to have lost the juice. Many people still support Trump, and many people still use Facebook products, but the shine is gone—and that matters.”
The “This is Fine” dog meme turns 10
Also at the Atlantic is Megan Garber on the meme that’s come to define the past decade.
“‘This Is Fine’…is a work of near-endless interpretability: It says so much, so economically. That elasticity has contributed to its persistence. The flame-licked dog, that avatar of learned helplessness, speaks not only to individual people—but also, it turns out, to the country.”
NFTs are dead, but still cringe
Finally, it’s now been a year since this segment on The Tonight Show in which Paris Hilton and Jimmy Fallon discussed their Bored Ape NFTs in what was perhaps the most excruciating late-night talk show conversation in recent memory, which is really saying something.
Fallon and Hilton are now among a cadre of celebrities listed in a lawsuit that claims they “misled their followers into buying BAYC NFT’s among other unregistered securities by Yuga Labs, to pump up their value, causing buyers to purchase ‘losing investments at drastically inflated prices’,” according to the Hollywood Reporter.
Notes on Hell
Housekeeping items
First of all – thank you! Thanks for reading and subscribing, whether your sub is free or paid. The support is much appreciated in either case.
Some updates:
Modern Hell: Modern Hell posts (like this one) will remain free to all subscribers – for up to 2 weeks. After that, they will automatically be placed behind the paywall.
Political Hell: I’ve added a new page to this newsletter called Political Hell. The first post, dealing with some revisionism around Canada’s F-35 fighter jet purchase, is up. Political Hell posts will likely deal mostly with Canadian politics, but there’s every possibility they’ll veer into UK and U.S. topics as well. All Political Hell posts will be behind a paywall.
Mailbag: I’ve seen this idea work elsewhere and I’d like to give it a try. Want to disagree with me on something? Go for it. Want to ask me a question about Hell? Hit me. I’ll post Q&As when there’s a decent number. (Note: If you have other general feedback you’d like to share, please do so – even if it’s not for the Mailbag.)