Fake news is in the news again, both in the US and in Europe. Facebook recently announced that it was allowing Congressional scrutiny of advertisements bought by Russians during the 2016 US election, as part of an ongoing investigation.
It is a hard path to walk. Facebook has faced criticism for not banning certain groups (such as those promoting anti-Semitism), and for banning content about others (such as a group of Rohingya insurgents in Myanmar). It is easy enough to say that Facebook should be able to tell the difference, but it is also entirely possible that the two groups use quite similar language. An algorithm is going to struggle to distinguish between ‘good’ and ‘bad’ ways of trying to drum up opposition to a particular group. It may be too much responsibility to lay on the shoulders of private companies.
Recently, Russian media spread the false story that Europe’s first animal brothel is opening in Copenhagen this month. The alleged purpose is to foster an image in Russia of western democracies as decadent countries with perverted freedoms, as a response to western criticism of Russia’s hard line towards the country’s LGBT community. And it is not just a Danish problem. The Catalonia referendum is being used in Russia to push the “news” that Europe is falling apart and that Spain should be compared to Ukraine.
Options for managing fake news
Prominent politicians are generally under close media scrutiny, but it is still difficult to identify and remove fake political news. It is therefore unsurprising that it is next to impossible for social media sites to police fake news about corporations in any effective way. Facebook’s approach of suggesting ‘related stories’ linked to all news stories is a good way of helping to develop users’ critical thinking, but by itself it is not going to be the whole answer, and neither is asking users to flag fake news stories.
In Denmark, the government has created a special unit to expose (Russian) misinformation and to act when fake news is being spread with the purpose of weakening western democracies. The small, three-FTE unit will sift through the ocean of information from all over the world, a task that can only be managed effectively with the help of advanced analytics.
Bots or humans?
Machine learning has been used to identify which accounts are spreading fake news stories, and may be one way to help. In a study of Twitter, researchers used machine learning to show that social bots played a clear role. The strategies used were clever: the bots promoted stories before they went viral. They also targeted individual users, making sharing and retweeting through a personal network more likely. Finally, they made themselves look human by changing location. These strategies made the bots very hard to detect and the stories more likely to spread. Although the study only covered Twitter, other networks are probably just as vulnerable.
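To make the idea concrete, here is a minimal sketch of feature-based bot scoring. This is a toy heuristic, not the classifiers from the study: the feature names (`posts_per_hour`, `location_changes_per_week`, `share_of_posts_that_are_retweets`) and the thresholds are invented for illustration.

```python
def bot_score(account):
    """Return a score in [0, 1]; higher means more bot-like.
    Features and thresholds below are hypothetical examples."""
    score = 0.0
    # Bots often post at superhuman rates
    if account["posts_per_hour"] > 20:
        score += 0.4
    # Frequent location changes can mask automation (as in the study)
    if account["location_changes_per_week"] > 5:
        score += 0.3
    # Accounts that almost exclusively amplify others' content
    if account["share_of_posts_that_are_retweets"] > 0.9:
        score += 0.3
    return min(score, 1.0)

suspect = {"posts_per_hour": 50,
           "location_changes_per_week": 8,
           "share_of_posts_that_are_retweets": 0.95}
print(bot_score(suspect))  # 1.0
```

In practice, a real detector would learn such weights from labelled data rather than hard-coding them, but the shape of the problem — behavioural features feeding a score — is the same.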
Social media sites have two main options for removing the bots that spread fake news: machine learning solutions that detect and shut down bots, and CAPTCHAs, a proven way of distinguishing bots from people. Professional text mining tools also make a difference. Perhaps the real issue is that it is much easier to spread fake news using bots than to stop it from spreading. It is a little like an arms race: we need to use machine learning techniques to develop and guide counter-bots that reply.
Analytics, algorithms and other activity
Should companies take their own action? The answer lies in whether fake news really matters to them. PepsiCo’s recent experience suggests that it does. A single fake news story caused a drop in both sentiment and share price for the company. And other companies should take note: although sentiment recovered quickly, the share price had still not fully recovered several months later. Companies therefore need to act to limit and manage the impact of fake news stories.
Doing so is a challenge, though. There are thousands of fake or distorted news stories circulating at any given time, and chasing every last one would be a major effort. Companies need to prioritise. Fortunately, just as artificial intelligence and analytics are being used to create fake news, they can also help to combat it. For example, the alva algorithm scores the sentiment of news stories but tempers this with a measure of influence, so a more influential news site results in a higher (more urgent) score. Web sniffer products can help to find news stories before they go viral, giving companies time to develop a counter-attack.
Having detected a fake news story, perhaps using one of these techniques, what can companies do about it? First and foremost, the impact needs to be managed, just as with any other ‘bad news’ story. The company needs to spread the word that the story is fake, for example by flagging it in as many places as possible as ‘disputed by third-party fact-checkers’. Sometimes it may also be possible to add a correction to fact-checking websites such as Snopes.com.
Entering the echo chamber
Maybe the real issue, though, is that people believe what they want to believe. One of the reasons why social media spreads fake news so fast is that it acts like an ‘echo chamber’: it is designed to show people content that they will like, and therefore reinforces their point of view. Perhaps the best defence is to encourage fact-checking and a healthy dose of scepticism. The important message here is that technology can help identify risks almost as soon as they appear, and can orchestrate the counter-campaign of encouraging fact-checking.
The arms race in the battle for the truth has escalated another step, with robots now on both sides of the war.