Facebook and Twitter need a redesign to fight junk news
As the use of social media to spread misinformation soars, it’s time for companies such as Facebook and Twitter to redesign their platforms, says Professor Philip Howard, head of the Oxford Internet Institute, UK.
According to a recent study by the institute, organised social media manipulation has risen from being active in 28 countries to 48 in the space of one year. Prof. Howard studies the rise of computational propaganda and spoke to Horizon about how misinformation works and how it has evolved.
Political bots, computational propaganda. How does it all work?
‘The three key ingredients are big data, which is used to target audiences, social media platforms like Twitter, Facebook or WhatsApp, and autonomous agents – bots and algorithms – that can massively magnify the distribution of propaganda.
‘The end goal is to spread junk news and disinformation, exercise censorship and undermine trust in public institutions.’
Last month your institute said a third of tweets about September’s Swedish general election were from outlets promoting junk news. In July you said that the use of bots is soaring despite measures taken by Facebook and Twitter. Which countries are the worst offenders?
‘This is not just about authoritarian regimes. Some of the worst actors are actually regular political parties in democracies. They spend the most money and are the most innovative in manipulating public opinion. Authoritarian regimes usually just copy the tools and techniques that emerge in democracies.
‘However, I think the Russians are the most advanced. They are very good at coming up with propaganda campaigns that are divisive and polarising in cultures where there is already a lot of political polarisation. So, in the UK context, they’ve been active on Scottish independence, and in the United States they’ve been active on race issues. Democracies are particularly vulnerable because we don’t want to silence people and we want to respect opinions.’
Why is it spreading globally?
‘Partly, it is happening where government agencies feel threatened by junk news and foreign interference, and are responding by developing their own computational propaganda campaigns. Some countries are simply copying each other. So the Turks, the Iranians, they see that the Russians can do this work and so they decide to start their own little intelligence agency that will do social media manipulation.’
You’ve been studying the impact of algorithms and bots on political discourse as part of a project called COMPROP. How do you know whether political bots sway voting patterns?
‘This is one of the toughest questions and I’m not sure we’re close to an answer. However, we do know that misinformation campaigns have a long-term effect on public opinion. We can still measure the number of British people who think they will save £350 (€399) million a week from Brexit. And we can still measure the number of young Americans who think that the 2016 presidential candidate Hillary Clinton was involved in some sort of paedophilia ring in pizza parlours in Washington DC.’
I’m sure I couldn’t be manipulated by junk news, so why are others so gullible?
‘Most of us think of ourselves as sophisticated news consumers. But most people have cognitive shortcuts when making political decisions. Some of those shortcuts are just ways of protecting our time.
‘Selective exposure is one. We choose sources of information that are compatible with what we already believe, and we tend to stick with politicians we have trusted in the past. These are just features of how we think: we can manage them a little, but mostly they are not things that we can do much about.
‘In an ideal world, we’d all see a few news stories from the other end of the political spectrum from the one we’re on. And we’d all have a few friends who disagree with us, who would debate us once or twice a month. But we don’t set ourselves up for those kinds of political discourses very often.’
When are bots most effective?
‘They work best when they reinforce what someone else has said. And they tend to only work when a human takes the bot content and shares it as their own. So usually the stuff that these bots generate is nonsense or ridiculous. But if you can get a mainstream politician to believe some component of it and retweet it – then suddenly the story will have traction.’
Can we fight it with new technologies?
‘We’re trying to use natural language processing to get better at capturing and identifying automated political messaging. However, that is leading to an arms race because some political actors are using the same toolkit to generate more sophisticated automated messages.’
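The institute’s actual models are not described here, but the basic idea of flagging automated accounts can be illustrated with a toy heuristic (entirely my own sketch, not Prof. Howard’s method): simple bots often post on a near-fixed schedule, so very regular gaps between posts are one weak signal of automation.

```python
from statistics import pstdev

def automation_score(post_times):
    """Score how machine-like a posting schedule looks.

    Toy heuristic: humans post at irregular intervals, while simple bots
    often post on a near-fixed schedule. A low spread of the gaps between
    posts, relative to the mean gap, suggests automation. Returns a value
    in [0, 1]; higher means more bot-like.
    """
    if len(post_times) < 3:
        return 0.0  # too few posts to judge
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    mean_gap = sum(gaps) / len(gaps)
    if mean_gap == 0:
        return 1.0  # everything posted in one burst: strongly bot-like
    variation = pstdev(gaps) / mean_gap  # coefficient of variation of gaps
    return max(0.0, 1.0 - variation)     # regular schedule -> high score

# A bot posting every 600 seconds vs. a human posting at irregular times
bot_times = [0, 600, 1200, 1800, 2400]
human_times = [0, 120, 900, 1000, 4000]
print(automation_score(bot_times) > automation_score(human_times))  # True
```

Real detectors combine many such signals (content, network position, language) precisely because any single heuristic like this one is easy for the more sophisticated actors the interview mentions to evade.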
How will we be manipulated in the future?
‘The latest improvement seems to be in doctored images. For example, there were images of Muslim women storming the beaches of Greece. These are known to be doctored but the quality of the fakery is getting better and better so it’s becoming hard for us, in an automated way, to identify which images are faked. We also think that computational propaganda is increasingly taking the form of sponsored ads and SEO (search engine optimisation).’
Do you have any solutions?
‘I think the time for self-regulation has passed. What will make the difference will be when the platforms show some civic leadership and design for democratic conversation. The question is what kind of gentle nudges can public policymakers give to help engineers make good design decisions. And there I think the answer varies from platform to platform.
‘For Twitter I think accounts that are automated should be labelled as such – a little bot sign next to those account names. For Facebook I think figuring out how it can transfer to newspapers some of the profits it makes from advertising and using news is going to be very important for the news industry. I haven’t worked out what I would recommend for WhatsApp.’
Can you imagine a future in which all information we receive on social media is reliable or easily verifiable?
‘I’ll say yes because although I share the cynicism of a lot of actors I don’t share their fatalism. I think there are other organisations in public life that could have a significant role in restoring public trust, such as libraries, universities and civil society groups.’
This interview has been edited for length and clarity.
Junk vs fake news
Whereas the term fake news applies to stories that are provably false, the Oxford Internet Institute classifies news as junk if at least three of the following five criteria apply.
Professionalism: It doesn’t adhere to journalistic standards, such as transparency and accountability.
Style: It uses emotionally driven language.
Credibility: It relies on false information and conspiracy theories.
Bias: It is ideologically skewed and highly biased.
Counterfeit: It mimics professional news media and disguises commentary as news.
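The sidebar’s decision rule is a simple threshold check, which can be sketched directly (the lowercase criterion labels are shorthand I have introduced for the five headings above):

```python
# The Oxford Internet Institute's five junk-news criteria, as listed above.
# A source is classed as junk if at least three of the five apply.
CRITERIA = {"professionalism", "style", "credibility", "bias", "counterfeit"}

def is_junk(failed_criteria):
    """Return True if at least three of the five criteria apply."""
    failed = set(failed_criteria)
    unknown = failed - CRITERIA
    if unknown:
        raise ValueError(f"unknown criteria: {sorted(unknown)}")
    return len(failed) >= 3

print(is_junk({"style", "bias", "credibility"}))  # True
print(is_junk({"style", "bias"}))                 # False
```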
The research in this article was funded by the European Research Council.