The novel coronavirus that causes COVID-19 has put pressure on social media platforms to deal with floods of misinformation.
Even getting a handle on the size of the problem is tricky: while social platforms are publicly keen to tackle misinformation, they're less forthcoming with numbers that might show how much false information is circulating or how effectively they're addressing it.
It’s clear that coronavirus is a hot topic. In a blog post on Jan. 29, Twitter revealed it had seen over 15 million tweets on coronavirus over the previous four weeks. A brief look at Google Trends shows that coronavirus searches began to grow in Canada around Jan. 18 and spiked on Jan. 31, though Trends shows only relative search interest, not exact numbers.
One way to get an idea how much false and misleading information is circulating is by looking at what the fact-checkers are facing.
An internal spreadsheet compiled by the International Fact-Checking Network documents over 370 coronavirus-related debunks done by fact-checkers and journalists around the world since Jan. 22. The IFCN was started in 2015 by the Poynter Institute, a non-profit journalism training organization, and brings together fact-checkers around the world to share information and establish best practices.
Louis Baudoin-Laarman, a fact-checking journalist with the wire service Agence France-Presse who focuses on Canada and the U.S., estimates AFP has already done between 30 and 40 debunks around the world in just the past two weeks.
AFP is one of the third-party fact-checkers partnering with Facebook. Once an AFP journalist has written a story and flagged a particular piece of information to Facebook as false, the platform will take steps to limit that information’s spread and to surface correct information instead.
Baudoin-Laarman says he finds that COVID-19 misinformation falls under four main categories:
- false information about cures;
- false information about where new cases are being discovered;
- images or videos purportedly showing people who are sick or falling over from coronavirus;
- false information about the origin of the virus.
In a blog post on Jan. 30, Facebook said it would begin to remove content containing conspiracies or false claims that had the potential to cause physical harm to users, such as incorrect medical information.
When contacted by CBC News, a Facebook Canada spokesperson was unable to reveal any figures about how much content may have been removed.
In Twitter’s Jan. 29 blog post, the service noted it had taken steps to ensure the platform was protected from “malicious behaviours” and that it would remove accounts engaging in those behaviours.
The most high-profile instance came when Twitter permanently suspended the account of Zero Hedge, a far-right site that purports to share finance and economic information, for circulating a theory blaming one Chinese scientist, without evidence, for creating COVID-19.
BuzzFeed News reported that Zero Hedge had posted the personal contact information of the scientist online, a practice known as doxxing, and encouraged others to get in touch.
Twitter hasn’t revealed how many accounts it may have banned. A week after the coronavirus post, Twitter also announced it would remove or label deepfakes — videos altered using AI to make it seem as if someone said or did something they didn’t — and other deceptively altered content that could cause harm. (Facebook announced at the start of January it would ban deepfakes.)
Google and YouTube did not provide information about removing content or altering search results but did provide information about a special search alert set up for the novel coronavirus.
Partnerships with health organizations
Facebook, Twitter and Google have prioritized links, pop-ups or search results directing users to information from official health partners like the World Health Organization and the Public Health Agency of Canada.
The WHO itself has a page dedicated to myth-busting claims about the virus. Some of the claims on the WHO site include whether hand dryers can kill the virus (they can’t), if it’s safe to get a letter or package from China (yes) and if putting sesame oil on your skin will protect you from coronavirus (no).
When Canadians search Twitter in either official language for coronavirus information, the first result they’ll see is a tweet from Dr. Theresa Tam, Canada’s chief public health officer, directing users to a page with up-to-date health information.
Anna Maddison, a spokesperson for Public Health, told CBC News that the agency worked with Twitter in May 2019 on a #KnowTheFacts notification to direct Canadians to factual information on vaccines.
“The government of Canada applauds efforts by social media and technology companies to increase access to evidence-based information on their platforms,” Maddison wrote in an email.
On YouTube, Instagram, and Facebook, users who search for coronavirus information may see search results or pop-ups directing them to the World Health Organization.
However, Facebook may still surface information that is less relevant to users. A recent search by CBC News included results from news organizations, but also a link to an event listing in Brazil called "Carnaval Coronavirus" that appears to be a joke, and a link to a public health event in Sweden.
On Instagram, users may first be directed to accounts that purport to aggregate coronavirus news but aren’t associated with any media organization, or even virus meme accounts, without seeing a pop-up directing them to the WHO.
On Google, users first see recent news related to COVID-19 but are also directed to the World Health Organization for more.
What you see when you search for coronavirus
Google’s search liaison, Danny Sullivan, announced on Twitter on Jan. 30 that the search giant had put in an SOS Alert on coronavirus searches. The alert surfaces the latest news on the topic as well as relevant local news. It will also link to safety tips and health information.
Underneath coronavirus videos, YouTube may now include a disclaimer about the media organization behind the video and a link to Wikipedia, or a link to the World Health Organization’s information page about coronavirus.
For example, under a CBC News video, there’s a note that the organization is a Canadian public broadcaster, while a CGTN video is accompanied by a note saying it is funded “in whole or in part by the Chinese government.”
Facebook says it limits information flagged by third-party fact checkers as false, and instead shows the correct information from the fact-checkers. Facebook also says it sends notifications to users who have shared or are trying to share the content, to let them know it has been fact-checked.
Facebook says it is blocking or restricting hashtags used to spread misinformation on Instagram. And, when users search for coronavirus, they may see a pop-up directing them back to the World Health Organization.
Canadians searching Twitter in both French and English for coronavirus will see a tweet from Dr. Theresa Tam, Canada’s chief public health officer, directing users to a page with up-to-date health information.
A thread started on Jan. 31 from the subreddit Ask Science was featured on the main page of Reddit, helping users find information about the novel coronavirus. The thread has since been moved off the main page but is still available at the top of the Ask Science page.
A spokesperson for Reddit says one subreddit has been quarantined related to coronavirus misinformation. A quarantine means the page won’t show up in search results, and when users try to access it directly, they’ll be shown a warning message.