Who knew that it would take a global pandemic in the form of the COVID-19 coronavirus for big tech companies to finally take more responsibility and a more proactive approach to tackling misinformation online?
Last month, Facebook announced that users who like or comment on posts sharing harmful conspiracies about the pandemic will be redirected to the WHO page, and Twitter said that users who tweet about the debunked 5G coronavirus theories will be advised to read government-verified information about the technology.
While I think it is great that companies are blocking and taking down content that provides misleading and false information about the coronavirus, social media companies, and especially Facebook, could arguably have taken a more proactive approach to tackling misinformation much earlier, and have in some cases arguably been complicit in its spread.
In 2018, Facebook garnered criticism for being too slow to deal with Russia’s alleged social media misinformation campaign. In the same year in Brazil, an electoral court ordered Facebook to remove links to 33 fake news stories attempting to smear a vice-presidential candidate. And who can forget the deeply unethical Cambridge Analytica scandal? In recent days, Twitter took the decision to label one of President Trump’s tweets as misleading, yet Zuckerberg claimed that Facebook won’t take similar measures on its platform because it is not an ‘arbiter of truth’. Facebook, it seems, would rather be the policeman turning a blind eye than have some integrity.
Most damningly, though, the Wall Street Journal reported recently that in 2018 the Facebook leadership was presented with evidence that its algorithms ‘exploit the human brain’s attraction to divisiveness’ and, if left unchecked, will ‘feed users more and more divisive content in an effort to gain user attention and increase time on the platform’. Mark Zuckerberg and other executives were shown clear evidence that his public rhetoric about Facebook building bridges and establishing open communities is factually wrong. In fact, the exact opposite seems to be happening: Facebook, by amplifying polarising content, is sowing discord and obscuring discussion in order to prioritise user engagement. I understand that there is a difference between misinformation and the polarisation of news content, but there is some degree of overlap: while polarised content is not always misinformation, misinformation is almost always polarising.
Facebook, in this instance, did little, probably because, sadly, it is in its interest not to. A famous internet adage says ‘if you’re not paying, you’re the product’, and in Facebook’s case this is most definitely true. As Facebook is free to use, the company makes money through advertising, and it knows that the longer it keeps you online, the more advertisements you view and the more revenue it earns. Cal Newport goes as far as likening Facebook to a slot machine, giving you small treats in the form of sensationalist articles in exchange for minutes of your attention. Hence, it is arguably in Facebook’s interest to feed users polarising and misleading content, as these types of posts spur debate, comments and shares, which in turn spread the misinformation to other users.
It is not just Facebook, though; other platforms have let misinformation flourish too. YouTube, the video-sharing site owned by Google, has frequently had problems with misleading content that in some cases may actually radicalise individuals through its endless stream of recommendations. The podcast Rabbit Hole by the New York Times explores this brilliantly, showing how YouTube’s algorithm can recommend videos by misleading and frankly dangerous individuals such as Stefan Molyneux, whose factually flawed race-baiting content I feel shouldn’t be on the site in the first place. YouTube recently tried to address this by promoting traditionally reputable news providers like PBS and the Guardian, and almost completely removed independent news channels from the recommended videos page altogether. Is this the right move to tackle misinformation? Rather than penalising all small independent news channels, why not instead blacklist the channels that have been proven to be repeat offenders at spreading misinformation — or does that take too much effort? A blanket ban seems unfair on honest independent news channels that don’t share misleading content.
Ultimately, tackling misinformation in an era where an ever-increasing number of us use sites like Facebook and YouTube is bound to be a challenge, but it is up to the executives of these companies to at least tackle this crisis to the best of their ability. Mark Zuckerberg and other social media bosses often don’t want to take down content on their platforms for fear of appearing partisan, but these individuals wield great power, and democracy may suffer if they don’t address this issue.