Misinformation and disinformation are consistently present online, especially in times of panic and crisis like these. Many of us are unsure about the threat posed by the spread of COVID-19, and we flock to the internet for facts and figures to better understand the situation at hand.
As an expert in social media, specifically livestreaming social media, I have studied over 1 million streams since the launch of Periscope and Meerkat. Fast forward to today, and the internet is full of attention-grabbing headlines, videos and posts about the current COVID-19 situation. When misleading content of this kind appears on social media, many platforms opt to remove those posts completely.
Unfortunately, given the scale of daily social media traffic, taking down every piece of malicious, harmful or misleading content is practically impossible. Consider that Twitter reported an average of 126 million daily users, Snapchat reported 60 million, and Facebook a staggering 1.2 billion.
This sheer volume of content presents a unique problem that will simply overwhelm traditional methods of addressing disinformation. In a 2017 study, the Brookings Institution analyzed the spread of false news after the 2016 election. Its conclusion: “The authors worry that the outpouring of false news overwhelms fact-checkers and makes it impossible to evaluate disinformation.”
While this is a useful practice in theory, there are more problems to unpack here. Deleting posts or streams with no explanation leaves viewers exposed to disinformation with no idea that it was false or inaccurate. They are left searching for more information and become confused when the post they are viewing disappears with no warning.
Online disinformation is an asymmetric threat. Left unaddressed, it can incite worldwide panic, confusion, and governmental and societal destabilization. When streams are deleted, viewers become confused and panic. Bad actors are relying upon these responses and will flood the market with automated disinformation on a scale never before seen. To combat this, we can use technology to enact large scale countermessaging with facts from trusted sources.
If a piece of content is not accurate or factual, it should be removed — but replaced with another image or message containing a live link to reputable sources, such as the Centers for Disease Control and Prevention website for COVID-19 information.
There are many players in the industry working to fix this problem. I’ve personally patented a method and technology to address online disinformation by searching for it on a large scale and inserting factually accurate information in a stream chat or in place of the removed content.
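To make the remove-and-replace idea concrete, here is a minimal sketch of countermessaging logic: flag a post that matches a known false claim and generate a replacement message pointing to an authoritative source. The claim list, function names and matching strategy are hypothetical simplifications for illustration, not the patented method described above; a production system would rely on trained classifiers and human review rather than a static keyword list.

```python
from typing import Optional

# Authoritative source to direct readers to (the CDC COVID-19 portal).
CDC_URL = "https://www.cdc.gov/coronavirus/2019-ncov/"

# Toy examples of phrases tied to known-false claims. A real system
# would use classifiers and fact-checker input, not hardcoded strings.
FLAGGED_PHRASES = [
    "drinking bleach cures",
    "5g causes coronavirus",
    "virus is a hoax",
]

def countermessage(post_text: str) -> Optional[str]:
    """Return a replacement message if the post matches a flagged
    false claim; return None to leave the post untouched."""
    lowered = post_text.lower()
    for phrase in FLAGGED_PHRASES:
        if phrase in lowered:
            return (
                "This post was removed because it matched a known "
                "false claim. For accurate COVID-19 information, "
                f"visit {CDC_URL}"
            )
    return None
```

The key design point is that a matched post is not silently deleted: the viewer sees an explanation and a live link to a trusted source, addressing the confusion described earlier.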
This approach is gaining acceptance among governmental agencies like the Department of Homeland Security. Big tech companies are also searching for ways to deal with this problem. Other companies are developing systems to fight misleading content, including search engine link removal or additional posts that dispel false information. MIT has also developed crowdsourced judgment systems to help overshadow malicious content. The drawback is that these systems may require external action, meaning consumers must leave a given post to find the right information.
All of the methods I’ve discussed here share one shortcoming: they depend on cooperation from the platforms themselves. In a previous article, I quoted Facebook CEO Mark Zuckerberg, who said, “As I’ve thought about these content issues, I’ve increasingly come to believe that Facebook should not make so many important decisions about free expression and safety on our own.”
For any solution to be effective, online and social media organizations must admit the problem and embrace technologies to combat their errors. Unless they allow outside, neutral technologies to “look behind the curtain,” they become exacerbators of the problem and inhibitors of any solution. If we learn only one thing from this crisis, it should be that a lack of transparency online can fuel worldwide calamity.
As we wait for those changes to take place, there are a few ways consumers can start to better identify disinformation on their own terms. Start with these methods:
• Organically share this live link to the CDC coronavirus portal in all of your social media posts.
• To the highest extent possible, do not rely upon unofficial social and online media sources. Rather, go directly to federal, state or local government websites. If we limit conflicting information, we help reduce panic, fear, and most importantly, additional strain upon the healthcare, public safety and other critical access systems.
• Finally, take this time to teach your children, grandchildren, extended family and friends that in times of crisis, we must become more self-reliant. Technology, while helpful, should never be a primary method for survival. As we are now seeing, social distancing, washing hands and cleaning surfaces are important ways to stop the spread. These tried-and-true methods of combatting an outbreak have been relied upon for hundreds of years.
In the end, it’s on us to make the most of this situation and stop panic in its tracks, whether we’re helping develop systems to combat disinformation or helping in our own way to educate others. Above all, let’s take care of each other — especially the sick and elderly.