
Can editors – or algorithms – save the news?

Wednesday, 19 Apr 2017

Artificial intelligence is both a problem and a solution in the fight against fake news, writes Connor Tomas O’Brien. But what if we don’t value objectivity and balance in the news as much as we think we do?

Illustration: Jon Tjhia

In mid-summer, I sat on a picnic rug and watched a small plane trailing condensation across the sky.

‘Chemtrails,’ a friend said, adopting the shaky-voiced timbre of a rusted-on conspiracy theorist. ‘A workmate of mine believes all that, you know,’ he added. ‘Chemtrails, “jet fuel can’t melt steel beams”, lizard people … everything.’

We laughed. How was it possible to inhabit a world of perfectly interlocking falsehoods? How did people fill the unknown and unknowable with so much fantastic nonsense?

The rest of the summer, it felt like every time I looked up, I could see the trails – left as the by-product of international flight, or as the deliberate work of skywriters – drawing apart and dissipating. When I looked down, at the glowing screen cradled in my hand, I could see something else: unbelievable news reports and images of unknowable origin, jammed into status updates that were being shuttled around the world with seeming abandon. The buttons beneath each of these offered the power to share or embellish further – to become one more voice in a giant game of Chinese whispers.

So many of the lies were tiny, based on misconceptions that were amplified by those who simply wanted to believe they were true. Some were escapees from satirical websites. Many, of course, originated on content farms in which bullshit was commodified, or politically invaluable, as long as it comported with enough readers’ biases. Often, the source of each lie was the first thing to disappear in its rapid movement across social networks.

How was it possible to inhabit a world of perfectly interlocking falsehoods?

Conspiracy theories, once propagated by the few, now floated everywhere.

Naming the beast

Every few years, it seems, our lexicon fails us, and we are forced to formulate new shorthand to define exactly how we are being hoaxed online. In many cases, to define a scam is to drain it of its power, which is why new words are valuable: they promise a form of inoculation. It’s harder to be ‘spammed’ or ‘phished’, ‘catfished’ or ‘clickbaited’, once you can recognise and name the specific shape of each of these forms of deception.

‘Fake news’ is a term that should be clarifying and inoculating, but it is not. There are still obvious questions – ones that may never be adequately answered – about where ‘fake news’ comes from and why it is seeded to us. Is it a form of ‘active measure’ espionage fashioned to manipulate entire populations, or is it simply opportunism – a kind of ‘clickbait gone wild’, disseminated by the apolitical and unscrupulous in order to drive traffic and make a quick buck? ‘Yellow journalism’ is nothing new, and neither is propaganda, but perhaps what is new about ‘fake news’ is the almost complete blurring of lines between the two – the sense that falsehoods, some compatible and some contradictory, are now emerging from so many different parties that it’s impossible for anybody to pin down who is producing them and why.

Image of former presidential candidate Hillary Clinton and a fabricated quote about the National Rifle Association.
A Hillary Clinton meme circulated during the US presidential election containing a quote found to be fabricated.

There is, perhaps, another reason fake news retains its power over us, even as we are beginning to understand we are being scammed: it is intoxicating. Unlike spam or old-fashioned clickbait, which reveal their true nature pretty quickly, fake news rarely appears designed to trick its targets in obvious ways. The damage fake news wreaks is almost entirely in the form of social externalities, as our consensus fractures and we splinter into tribes hiding behind the bundle of untruths we happen to find most pleasing. Once that splintering occurs, it becomes very difficult to put the cat back in the bag. At some point, there is no shared reality to which we can return. For the credulous, it becomes easy to believe anything. For the sceptical, meanwhile, it becomes difficult to believe anything at all. Either way, fake news wins.

Lies, damned lies … and loopholes

In late February, Mark Zuckerberg released a manifesto of sorts – a 5700-word ‘Note’ outlining Facebook’s social responsibilities over the coming decade. The post seemed a bid to atone for, and deflect criticism of, Facebook’s role as a primary vector through which fake news has developed a foothold within our culture – and potentially altered the outcome of the recent US Presidential election.

‘Giving everyone a voice has historically been a very positive force for public discourse because it increases the diversity of ideas shared,’ wrote Zuckerberg. ‘But the past year has also shown it may fragment our shared sense of reality.’

This claim was unprecedented, marking a shift away from Facebook’s prior position, in which users were held responsible for placing themselves within their own ideological filter bubbles. Now, it seemed, Facebook’s management were beginning to recognise that the network itself needed to bear some responsibility for supporting the spread of inaccurate and biased information.

If AI can be used to identify fake news then AI – or old-fashioned human beings – can produce articles that those filters miss.

Another shift has occurred over the past two years: a recognition within Facebook that it is increasingly becoming the principal source of news for many of its users. As recently as 2015, Andy Mitchell, Facebook’s director of media partnerships, was reported to have argued publicly that Facebook should not be anyone’s primary news source or news experience, and that the network was ethically accountable to its users only for ‘creating a great experience’, whatever that meant. Now, in his 2017 manifesto, Zuckerberg appeared to be admitting that this was wrong-headed. Facebook had now ‘evolved beyond a space for entertainment and social interaction, and toward a primary space for the dissemination of public discourse’.

Facebook’s PR department has decided to focus on presenting Artificial Intelligence (AI) – machine learning, in particular – as the social network’s solution to fake news. Yann LeCun – the leader of Facebook’s Artificial Intelligence Research group – has argued that his team has recently managed to fine-tune the network to prevent misinformation from spreading. ‘It turns out that identifying fake news isn’t so different than finding the best pages people want to see,’ he said in a recent interview with Backchannel.
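
LeCun’s framing – that spotting fake news resembles any other ranking problem – can be illustrated with off-the-shelf tools, even if Facebook’s actual systems remain opaque. Below is a minimal, hypothetical sketch of such a classifier using Python’s scikit-learn library; the headlines and labels are invented for illustration and bear no relation to Facebook’s models.

```python
# Toy sketch of ML-based 'fake news' scoring -- not Facebook's system.
# Learn word patterns from labelled headlines, then score unseen ones.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data; a production system would use millions of items.
headlines = [
    "Pope endorses presidential candidate, anonymous sources say",
    "Scientists confirm moon landing was staged in a studio",
    "Reserve Bank holds interest rates steady at 1.5 per cent",
    "Senate passes budget bill after marathon sitting",
]
labels = [1, 1, 0, 0]  # 1 = fake, 0 = legitimate

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

# Probability that an unseen headline resembles the 'fake' class.
score = model.predict_proba(["Aliens secretly control the electoral commission"])[0][1]
print(f"fake-news score: {score:.2f}")
```

Even this toy version hints at the problem: the classifier learns surface patterns, and surface patterns can be gamed.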

Perhaps this is true, but as almost any technology company knows, the battle to stop scammers from identifying and exploiting loopholes is a game as old as the internet. If AI can be used to identify fake news, then AI – or old-fashioned human beings – can produce articles that those filters miss. Our shared definition of ‘fake news’ is nebulous, leaving plenty of room for hoaxers to change tack repeatedly. If a site’s content is identified as ‘fake’, it’s easy for its owner to rebrand it as ‘satire’, or to A/B test hundreds of variations of sites or articles to identify what kinds of little lies best slip through the cracks.
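
That A/B-testing loop is mechanically trivial to run. A hypothetical sketch – here, get_click_rate is a stand-in for whatever analytics a hoax site actually collects:

```python
import random

# Hypothetical A/B test: serve headline variants, keep whichever draws
# the most clicks while still slipping past the filters in place.
variants = [
    "You won't believe what this politician admitted",
    "LEAKED: the confession they tried to bury",
    "The story the mainstream media won't report",
]

def get_click_rate(variant: str) -> float:
    # Stand-in for real analytics: returns a simulated click-through rate.
    return random.uniform(0.01, 0.10)

results = {v: get_click_rate(v) for v in variants}
winner = max(results, key=results.get)
print(f"best-performing variant: {winner!r} ({results[winner]:.1%} CTR)")
```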

Google spellcheck suggests changing 'views' to 'news'
Actual Google spellcheck

The last resort for fake or hyper-biased ‘news’ purveyors is to exploit the primary weakness of all social networks: their reliance on advertising as a key revenue source. Unlike most search engines, which flag their advertisements clearly, social networks benefit from an almost complete collapse of the distinction between advertising and editorial. We click on Facebook ads at an unprecedented rate because we rarely notice they’re advertisements at all. The ability to track users across the internet means data-mining and analysis firms can craft hyper-specific messages based on the demographic and psychological profiles of individual users. These messages can appear as ‘dark posts’ on Facebook – sponsored posts that are visible to nobody but their intended recipients.
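
The mechanics of a ‘dark post’ pipeline are, at their core, a simple matching exercise, even if the profiling that feeds it is not. A hypothetical sketch – the profiles, traits and messages are all invented for illustration:

```python
# Hypothetical microtargeting: choose a tailored 'dark post' per user.
# Profiles, traits and messages are all invented for illustration.
users = [
    {"id": 1, "traits": {"distrusts_media": 0.9, "pro_gun": 0.7}},
    {"id": 2, "traits": {"distrusts_media": 0.2, "environmentalist": 0.9}},
]

messages = {
    "distrusts_media": "The story they don't want you to see",
    "pro_gun": "Your rights are on the ballot this November",
    "environmentalist": "One candidate's hidden record on coal",
}

for user in users:
    # Key the message to the user's strongest trait; only that user
    # ever sees it -- the essence of a 'dark post'.
    strongest = max(user["traits"], key=user["traits"].get)
    print(f"user {user['id']} sees: {messages[strongest]!r}")
```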

If a data-mining firm is contracted by a political party or an ideologically motivated organisation and runs social network advertisements to manipulate individuals to change their beliefs, what kind of protection can or should a social network offer its users? Influential data firm Cambridge Analytica, for example, claims to have between 3000 and 5000 ‘data points’ on every American citizen. This isn’t surprising, considering how indifferent most of us now are to the cookies and trackers that follow us across the web, but it’s how these data points are unified and acted on that is troubling. After years of accumulating ‘Big Data’, firms are now beginning to consolidate their work, engaging in microtargeting campaigns that can distort an individual’s sense of reality by feeding them different messages than those fed to their neighbour. In the lead-up to the US Presidential election, for example, Trump’s team admitted that Cambridge Analytica was working to suppress the black vote by continuously feeding certain African-American voters ‘dark’ material drawing attention to Hillary Clinton’s 1996 reference to some African-American youth as ‘super-predators’.

Image of Donald Trump and a quote about Fox News viewers that has been found to be fake
The quote in this widely circulated Donald Trump meme has been found to be fake.

In effect, there may be negligible difference between consuming outright falsehoods and being continuously drip-fed hyper-partisan journalism. Another possibility, in the long run, is individuals being targeted for ideological change by multiple competing entities, caught in a cross-fire of amped-up and individualised propaganda. At this point, it is plausible to imagine social networks running endless streams of hyper-targeted, hyper-manipulative sponsored posts designed to pull users to-and-fro, from one ideological extreme to the other.

In one sense, this is what politics – and advertising – has always entailed. The difference is that social networks are in the process of morphing into AI-‘curated’ newspapers with readerships of one. If politicians or brands can shape-shift to present differently to every one of us, maybe none of us will see the same reality. As reporters Berit Anderson and Brett Horvath argue, ‘Elections in 2018 and 2020 won’t be a contest of ideas, but a battle of automated behaviour change. The fight for the future will be a proxy war of machine learning.’

Of course, there is always the dark possibility that this is what we really want. One of the perverse and unacknowledged pleasures of social networks is the ability to enter virtual spaces at will in which our beliefs – or delusions – are supported and repeated back to us. The narcotic effect of Facebook or Twitter resides at least partly in our ability to curate our social and political worlds, finding comfort in the plushness of our ideological cushions. There may be a strong ethical case to be made for drawing users’ attention to their truth-deficient media intake, but no clear business case for it. If we want to share and receive our little lies – and if it turns out that we desire those lies more than big and unpalatable truths – then the owners of social networks may need to become very creative with the algorithms that determine what appears before us. Perhaps the truth is a kind of vegetable that must now be somehow disguised before it can be snuck into our media diets.

Legacy media and known quantities

Ultimately, it seems unlikely that AI alone can save us from sharing falsehoods. The idea of entrusting algorithms to curate our view of the world is part of what has made fake news possible in the first place. When it comes to what appears on Facebook’s newsfeed, there is no central point of accountability – as Facebook’s conflicting, ‘evolving’ statements reveal. What we see on social media is the result of a complex interplay between our own desires and the imperatives of friends, news outlets, advertisers and Facebook’s impenetrable AI systems.  

That’s why one of the most interesting trends over the last few months has been an unprecedented resurgence of support for flagship media organisations like the New York Times – even as overall popular trust in the mass media has fallen, in the US, to its lowest level in polling history. These two trends are not necessarily contradictory. After all, it was the growing scepticism around impartiality and objectivity in mass media outlets that caused the initial drift of conservative and swinging voters away from them, as a kind of over-corrective ‘protest vote’ against mainstream media bias. The result of this drift has been the profusion of swirling, blinding clouds of falsehoods and exaggerations.

The role of publishers and editors as gatekeepers is as fraught as it has ever been, of course. In 2015, for example, Timothy P. Carney warned, in the New York Times itself, that there were substantial issues in ‘personnel movement’ within the institution, with those taking liberal positions being elevated to high-level editorial roles. At the same time, it is the very fact that publications like these are willing and able to identify their own failings that renders them principled, even if fallible. It is the fact, moreover, that we know who is producing, editing, and publishing these pieces of ‘old school’ journalism that provides accountability. At the very least, we are provided with enough context to decide what is worth approaching sceptically. More importantly, major ‘legacy’ news organisations tend to have far more to lose than upstarts – from Breitbart to Buzzfeed to the nebulous web of ‘fake news’ sites – that are more willing to breach longstanding ethical codes in order to garner increased traffic over the short term.

Of course, there is always the dark possibility that this is what we really want.

There is strong evidence to suggest that the return to the old media gatekeepers is being led by liberal-leaning readers. But it’s interesting to consider that the Australian – which makes little attempt to obscure its conservative tilt – is also trying to capitalise on readers’ desire for context and accountability. ‘For facts that aren’t alternative,’ reads the copy in a current advertising campaign for the News Corp masthead. Despite some political differences between the Australian and the New York Times, what they share is a longstanding broadsheet tradition of attempting – though not always succeeding – to separate opinion from hard reporting. On social networks, these same distinctions are blurred, which explains why these publications are attempting to draw readers as far away from those networks as possible.

The battle for these institutions is far less about page views – the bluntest of metrics – than it is about winning editorial control over the entire feed of news that passes before us. Neither algorithms nor human editors can claim to be free from bias, but editors are at least known quantities. Over the past 20 years, the technological utopian narrative has been that media gatekeepers are unnecessary and that we would be far better off without them. Over the past decade, meanwhile, the collapse of many legacy news organisations has been received by some with a kind of fanfare, or at least a shrugging sense of the inevitable. Now, we may be recognising, with a start, why they were so important to begin with.
