Fake news around elections has led many social media platforms to deploy "fact-checkers", but have they really been effective, and what other options remain?
Download our latest report, "Lessons From Brexit: How To Protect European Union Citizens From Fake News," to read our full review of how fake news surrounded the 2016 referendum and how blockchain technology can contribute to journalism and social media.
In the wake of the 2016 UK Referendum and the US presidential election of the same year, fake news, misinformation, and disinformation were found to be rampant.
"Fake news" is a term coined around this time to refer to misinformation and disinformation. However, because the term has been thrown around so cavalierly, often being leveled at people who simply hold opposing views, more specific terms have come into use.
To this end, misinformation and disinformation are used as more precise and less politically loaded words to discuss the spread of false information, according to a CBC report.
Misinformation refers to the accidental spread of false information. This occurs when information is shared, often by social media users, without being fact-checked first.
Disinformation, by contrast, is the deliberate spread of false information. It has been found to be perpetrated by individuals, governments, and political parties alike.
Multiple studies show that both misinformation and disinformation are rampant on social media, especially in the lead-up to prominent elections. We recently released a report that gathers many of these studies, entitled “Lessons From Brexit: How To Protect European Union Citizens From Fake News.”
This report identified “a network of 13,493 active Twitterbots” that was active before the 2016 UK Referendum but disappeared quickly after the vote. The report went on to say that the manner in which the information was shared “indicates that botnets were designed to echo user sourced information and highly biased comments” relating to the Brexit campaigns.
Most platforms, such as Facebook, Google, and YouTube, have fact-checking policies and employ third-party fact-checkers, but there are some serious issues with these systems. Facebook, for example, announced in late September 2019 that it won’t be fact-checking political ads. While the company claims this policy is in aid of free speech, policymakers have raised concerns that it impinges on the free speech of social media users, who may base their political decisions and votes on disinformation.
Facebook’s own employees penned a letter stating that this policy “doesn’t protect voices, but instead allows politicians to weaponize our platform.” The letter argues that political ads are precisely the ads that need to be monitored most closely and fact-checked to the highest standards, as they have the potential to cause the most harm by affecting the outcome of elections.
Facebook’s targeting tools aggravate this problem, as they enable advertisers to aggressively single out certain audiences. US Representative Alexandria Ocasio-Cortez recently questioned Facebook Founder and CEO Mark Zuckerberg on this point in a public hearing, asking whether political ads targeting black voters and giving a false election date would be allowed to run. Zuckerberg claimed that such ads would be taken down, but did not give a concrete explanation of why, or even how, they would be taken down if political ads aren’t being fact-checked.
The US government is not the only one concerned with the lack of effective fact-checking, though. In July 2019, the EU Parliament stated that “more remains to be done” in combating the spread of fake news. The statement goes on to say that online platforms need to “intensify their cooperation with fact-checkers and empower users to better detect disinformation.” Overall, disinformation is identified as a notable threat to “democratic processes” in the EU. The steps that have been taken to mitigate the spread of misinformation are not enough, as “[d]isinformation is a rapidly changing threat.” The tools used to combat disinformation, and the misinformation that follows from it, therefore need to keep pace with how quickly it adapts.
But while the EU Parliament can call for more stringent social media fact-checking, that is about all it can do. Implementing laws that limit the amount of fake news allowed is a very slippery slope into censorship. This is clear in Russia’s fake news law, which “imposes large fines on those who demonstrate ‘blatant disrespect’ towards the government, the Constitution, the Russian flag, or the Russian public online,” according to our report. While this in and of itself does not bode well for free speech, it gets even murkier, as there is no set definition of what constitutes ‘blatant disrespect.’ Rather, this is left up to the courts to decide, and the law is largely regarded as censorship.
While Facebook continues to work on fact-checking, even as it rolls out policies that undercut it, Twitter has little to no fact-checking policy. Rather, they believe that users are responsible for determining the legitimacy of their news sources, and that they, “as a company, should not be the arbiter[s] of truth,” according to a blog post by Vice President Colin Crowell.
Crowell’s post goes on to claim that the “job” of Twitter is to “keep people informed about what’s happening in the world.” He continues, saying that “Twitter’s open and real-time nature is a powerful antidote to the spreading of all types of false information.” He apparently fails to see that the same quick turnaround of news on Twitter also enables the rapid spread of misinformation, which is concerning given that he views Twitter as an information source.
Despite this lack of fact-checking, which to some appears to “look like the company[…] giving license to would-be hoaxers and imposters,” according to one Poynter article, Twitter has taken some actions that other sites haven’t. Specifically, while Facebook won’t be fact-checking any political ads, Twitter has banned political ads altogether.
Clearly, these policies are lacking. Loopholes are built in, and the fact-checkers these companies employ often cannot keep up with the sheer quantity of ads, groups, and posts. But what, besides employing fact-checkers, can these companies do?
In fact, there are several potential solutions, with emerging blockchain technologies providing some especially promising options. Blockchain’s peer-to-peer network can give users greater control over how they source their news and more transparency into those sources, all while bypassing the potential censorship that comes with tighter laws. Our report, “Lessons from Brexit: How To Protect European Union Citizens From Fake News,” outlines many of these potential solutions.
The New York Times is already using blockchain in the form of a proof of concept. This method uses blockchain technology to “encrypt photographs and videos with details of the date, time, and location of their origin, as well as how they were edited and published.” Blockchain, as an immutable technology, cannot be edited retrospectively, meaning that this information is trustworthy for any user who wants to track the media to ensure that it hasn’t been taken out of context or edited to misrepresent its contents.
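To make the idea concrete, here is a minimal, hypothetical sketch of how provenance metadata could be chained together with cryptographic hashes so that any retroactive edit becomes detectable. This is not The Times’ actual implementation; the `ProvenanceChain` class and its fields are illustrative assumptions, and a real deployment would replicate these records across a peer-to-peer blockchain network rather than keep them in a single process.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a provenance record together with the previous entry's hash,
    chaining entries so that any later edit changes every subsequent hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

class ProvenanceChain:
    """A minimal append-only log of provenance metadata for one photo or video.
    (Hypothetical example; not The New York Times' actual system.)"""

    def __init__(self):
        self.entries = []  # list of (record, hash) pairs

    def append(self, record: dict) -> str:
        prev_hash = self.entries[-1][1] if self.entries else "0" * 64
        h = record_hash(record, prev_hash)
        self.entries.append((record, h))
        return h

    def verify(self) -> bool:
        """Recompute every hash; returns False if any record was altered."""
        prev_hash = "0" * 64
        for record, stored in self.entries:
            if record_hash(record, prev_hash) != stored:
                return False
            prev_hash = stored
        return True

# Example: log the origin of a photo, then a later edit.
chain = ProvenanceChain()
chain.append({
    "event": "captured",
    "timestamp": datetime(2019, 7, 1, 14, 30, tzinfo=timezone.utc).isoformat(),
    "location": "London, UK",
})
chain.append({"event": "cropped", "editor": "photo-desk"})
print(chain.verify())  # True -- tampering with an earlier record would break this
```

The key property is the chaining itself: because each entry’s hash incorporates the previous one, altering an early record invalidates every record after it, which is what makes the log tamper-evident for readers tracing where a photo or video came from.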
While this technology was implemented in July 2019, there have been no updates yet. But The Times calls for much wider support of, and participation in, such projects, in hopes of making the internet a reliable source of news. Given how rapidly mis- and disinformation evolve, we recommend looking to technology that cannot be corrupted, and we have high hopes for the use of blockchain beyond cryptocurrencies. To find out more about fake news, blockchain, and potential solutions to misinformation, please read the full report here.
Maggie is a writer, researcher, and editor. Trained in literature, critical theory, and gender studies, they are now exploring the ways that technology is changing the landscape of human interaction.