The Moderator and the Troll: content moderation in the age of Elon Musk

Nick Hilton
7 min read · Nov 11, 2022


This piece was first published earlier this week on my newsletter, Future Proof. As is the nature of writing about tech, some things may now be slightly out of date — subscribe to Future Proof in order to avoid waiting to read in the future!

Are you bored of me and everyone you know wanging on about Elon Musk and Twitter? Well, park that boredom for a second, because on today’s Future Proof we’re talking about content moderation.

Content moderation is one of those media issues that’s both very important and very dry. It heavily impacts the way that brands emerge into the world — sites with lighter-touch content moderation, like, say, Reddit, have a distinct identity from those with a more heavily curated style, like Wikipedia. But in the middle there’s the great sea of social media, from the shores of TikTok to the banks of the great river Facebook. These are technology products, populated by user-created content, which poses a huge central question: who is responsible for that content? The platform? Or the creator?

There is a deep streak of libertarianism running through the internet, one that is currently rising closer to the surface. Think how many evangelists of our technological future have championed that freeing, self-governing ideology, from PayPal founder Peter Thiel to Wikiman Jimmy Wales, who chose Ayn Rand’s The Fountainhead as his book of choice on Desert Island Discs. And with Musk’s acquisition of Twitter, and a push to ensure “free speech” on the platform (whatever that means), content moderation is a sexy topic of dinner party chitchat again.

So I dialled up content moderation expert — and writer of the superb Everything in Moderation newsletter, which you must subscribe to — Ben Whitelaw, to ask a few questions about the current brouhaha and where things are going in the world of content moderation.

Why are people talking about content moderation right now?

In short because of Elon Musk. The new owner of Twitter is seemingly figuring out his views on speech in real-time while the world (by which I mean politicos and media types) watches on with bated breath. In the space of just over a week, he’s threatened to form a moderation council (much like Meta’s Supreme Court-style Oversight Board), vowed to make the platform “the most accurate source of information about the world” (huh?) and run a poll in which the only two answers were “freedom of speech” and “political correctness”. The man is playing both moderator and troll and he’s showing no sign of stopping either. And, despite an increase in hate speech since he took over which threatens the human rights of millions around the world, the self-professed Techno King duly went ahead and culled 15% of the company’s Trust and Safety team. It’s almost too improbable to be true.

What are people’s anxieties about the Musk takeover of Twitter?

Everyone is worried about something, and with good reason: activists and policy professionals worry about the rise of misinformation; cybersecurity folks are concerned about the hacking or leaking of sensitive data; and government officials are nervous about the close proximity of Chinese and Saudi Arabian nationals to the deal. And then there are the celebs, journos and politicos — certainly the loudest of them all — who are upset about their very important blue check becoming subject to an $8 a month surcharge as part of Musk’s efforts to raise cash and “defeat the bots and trolls” (don’t ask me how).

There’s also the fact that Birdwatch, Twitter’s volunteer community of note-takers designed to fact-check tweets, hangs in the balance after Musk reportedly clashed with the team last week. The programme has been almost two years in the making and was only recently rolled out to US users after showing promise. Its shuttering would sound the death knell for Twitter as a safe, fun and fascinating place to be online.

What have been Twitter’s historic issues with content moderation?

Twitter’s list of moderation controversies is longer than the list of Musk’s alleged children: just off the top of my head, there’s political shadow banning, Covid-19 misinformation, the suspension of congresswoman Marjorie Taylor Greene, warning labels (including one for “hacked materials” following the Hunter Biden leaks), the verification of white supremacists, outsourced moderator mistreatment (along with most platforms, in fairness), and, you won’t need reminding, the suspension of Donald Trump.

Despite all that, Twitter actually has a pretty good reputation in trust and safety circles for its work keeping users safe over the last decade and a half. The company has a reputation for pushing back against governments seeking to hide or pull down posts they don’t like, most notably in the case of the Indian government in May of this year, and Vijaya Gadde, Twitter’s former head of legal, policy and trust, was renowned for going in to bat for free expression (ironic, considering Musk’s stance) more times than almost anyone else. Who knows what will happen now. 280-character answers on a postcard.

Who would you recommend to keep tabs on the topic over the coming weeks?

It doesn’t look like Musk’s myopic views on moderation are going away, so it’s worth getting genned up. I’ve created a Twitter list that includes a wide range of practitioners and experts in the online safety space which might be useful for your readers, but I’d draw special attention to Jillian C. York (Electronic Frontier Foundation), Daphne Keller (Stanford), Juliet Shen (Grindr), Evelyn Douek (er, also Stanford), Kat Lo (Meedan), Julie Owono (Meta’s Oversight Board) and Mike Masnick (Techdirt), all of whom I’ve learnt a lot from in the four and a bit years I’ve been writing Everything in Moderation.

Ben is an expert in content moderation (he currently works for the FT) and a responsible journalist. My instincts run to a much greater degree of irresponsibility.

I am aware of the dangers of disinformation, the hazards of harassment and the scourge of spam. And I am pro-tackling each of them, provided it doesn’t encroach upon the basic delivery of the product. But the moderation of each has major issues.

Despite the hilarious idiocy of “alternative facts” and “fake news”, the reality is that much of what is posted on the internet is subjective, and is open to multiple interpretations. Facts do not always agree with one another. That does not make one a lie and the other the truth; it just means that datasets don’t always align and science is constantly evolving. The attempt to arbitrate the subjective is a tricky task, placing technology companies (who are servants to many masters, including investors and shareholders, advertisers, and consumers) as referees in a game they’re also playing. Personally, I think they’re on a hiding to nothing.

Targeting harassment is clearly the most important issue facing content moderation, because it’s the one that has the most profound real-world impact (measuring the impact of disinformation is incredibly difficult, and I suspect overstated; humans have always found ways of propagandising and lying to one another). Targeted harassment is a real, tangible issue. But it’s also fairly intractable. Platforms cannot keep pace with the workarounds. Banning has become a fruitless endeavour. Instead, what’s needed is a filtration system, so the vitriol never gets seen.

And then there’s spam. For all that Elon Musk talks about freeing Twitter up, letting people loose on the platform to be a town square, he’s also shown himself to be very anxious about the proliferation of bots on the platform (hence using it as a key excuse to try and get out of the $44bn deal). Most bots are of the harmlessly parasitic variety endemic on the web — trying to shill crypto or lure people into a scam that involves sending money to an internet café in Ghana — but some are clearly trying to manipulate public opinion in favour of state actors. This is a problem that technology could solve, if the problem weren’t constantly being further entrenched by technology. Which means we’ve ended up in an attritional tech v tech forever war, where the tools to unearth spam improve at roughly the same rate as the tools that generate it.

This is all just a depressing way of me saying that I think most of the issues around content moderation are here to stay. And I also think that the market demand, in the digital sphere, will compound these issues.

What do I mean? People of my generation (the first digitally native one) were not savvy users of the internet. We signed up to Facebook and Twitter with our real names, and left a career-destroying trail of breadcrumbs. We said stupid things as teenagers on the internet, and all that is now linked to the adults we’ve become. The modern teenager knows this. They prefer a form of anonymity. They stay off platforms like Facebook which simply mirror IRL social dynamics, instead going for cross-border, less-traceable apps like TikTok. And they have developed a whole communication system — the disappearing message — designed to leave no trace. Snap is one of the biggest communications companies for Gen Z, built on a promise that, after you send it, a message no longer exists for sender or recipient.

The reality is that these trends make content moderation harder. If you suddenly had the passport details of everyone on Twitter, you would end 99% of the targeted harassment. But ID checks are (god willing) unlikely. In point of fact, the trend is towards a greater anonymity. The biggest emerging apps of the last few years — TikTok, Snap, Twitch — all have anonymity fairly hard-coded into their DNA.

Can you fight the market? Musk clearly thinks not. And whatever you think of Elon Musk, he usually gets what he wants. Whether he fashions the world to his image, or his image to the world, is relatively inconsequential. For him, the internet is not a digital facsimile of the real world but a new space, in which to reimagine yourself and your social conditions. How you square that with responsible content moderation is a question I wouldn’t envy having to answer.

For now, I’ve no interest in abandoning Twitter: so please follow me there.


Written by Nick Hilton

Writer. Media entrepreneur. London. Interested in technology and the media. Co-founder podotpods.com Email: nick@podotpods.com.
